repo_name | path | license | cells | types
cathywu/flow
tutorials/tutorial10_controllers.ipynb
mit
[ "Tutorial 10: Custom Controllers\nThis tutorial walks through the process of defining controllers for the lateral and longitudinal movement of human-driven vehicles within a network. Such controllers may be necessary in order to model types of human behavior not already supported in SUMO. Controllers can be defined by adding to the existing controllers defined in the directory flow/controllers/. \nHere, we will discuss Flow's BaseController class and then build two controllers: a longitudinal Intelligent Driver Model controller [CITE] and a lateral controller that attempts to move all vehicles into the same lane.\nWhen adding a custom controller, ensure changes are reflected in flow/controllers/__init__.py under the import statements as well as in the list __all__. \n1 Longitudinal Controller\n1.1 BaseController\nFlow's BaseController class is an abstract class to use when implementing longitudinal controllers. It includes failsafe methods and the get_action method called by Flow's core.base_env module. get_action adds noise to actions and runs failsafes, if specified. BaseController does not implement get_accel; that method should be implemented in any controllers that are subclasses of BaseController. \nAs such, any longitudinal controller must import BaseController. We also import NumPy in order to use some mathematical functions.", "import numpy as np\n\nfrom flow.controllers.base_controller import BaseController", "1.2 Controller Initialization\nHere we initialize an IDM controller class and the __init__ function storing class attributes.\nThe Intelligent Driver Model is a car-following model specifying vehicle dynamics by a differential equation for acceleration $\\dot{v}$. 
The differential equation follows:\n$$\dot{v} = a \left[ 1- \left( \frac{v}{v_0} \right)^\delta -\left( \frac{s^*}{h} \right)^2 \right] \textbf{, with } \ s^* := s_0 + \max \left( 0, vT + \frac{v\Delta v}{2\sqrt{ab}} \right)$$\nThe IDM parameters are: desired speed $v_0$, time gap $T$, min gap $s_0$, acceleration exponent $\delta$, acceleration term $a$, and comfortable deceleration $b$. $h$ is the vehicle headway (the distance to the vehicle ahead) and $\Delta v$ is the velocity difference compared to the lead vehicle (current velocity - lead velocity).", "class IDMController(BaseController):\n def __init__(self, veh_id, v0=30, T=1, a=1, b=1.5, \n delta=4, s0=2, s1=0, time_delay=0.0, \n dt=0.1, noise=0, fail_safe=None, car_following_params=None):\n \"\"\"\n veh_id: str\n unique vehicle identifier\n car_following_params: SumoCarFollowingParams\n see parent class\n v0: float, optional\n desirable velocity, in m/s (default: 30)\n T: float, optional\n safe time headway, in s (default: 1)\n a: float, optional\n max acceleration, in m/s2 (default: 1)\n b: float, optional\n comfortable deceleration, in m/s2 (default: 1.5)\n delta: float, optional\n acceleration exponent (default: 4)\n s0: float, optional\n linear jam distance, in m (default: 2)\n s1: float, optional\n nonlinear jam distance, in m (default: 0)\n dt: float, optional\n timestep, in s (default: 0.1)\n noise: float, optional\n std dev of normal perturbation to the acceleration (default: 0)\n fail_safe: str, optional\n type of flow-imposed failsafe the vehicle should possess, defaults\n to no failsafe (None)\n \"\"\"\n \n BaseController.__init__(self, veh_id, car_following_params,\n delay=time_delay, fail_safe=fail_safe,\n noise=noise)\n self.v0 = v0\n self.T = T\n self.a = a\n self.b = b\n self.delta = delta\n self.s0 = s0\n self.s1 = s1\n self.dt = dt", "1.3 Acceleration Command\nNext, we implement the acceleration equation specified by IDM: \n$$\dot{v} = a \left[ 1- \left( \frac{v}{v_0} \right)^\delta -\left( \frac{s^*}{h} \right)^2 \right] 
\textbf{, with } \ s^* := s_0 + \max \left( 0, vT + \frac{v\Delta v}{2\sqrt{ab}} \right)$$\nThe vehicle's velocity v is fetched by the getter method get_speed of the environment's vehicles object, as are the id of the lead vehicle lead_id and the headway h. \nWe first check that overly small headways are not used, and set $s^*$ to zero when no car is ahead of the vehicle being controlled. If there is a lead vehicle, $s^*$ is calculated as described above, and the IDM acceleration is returned.", "class IDMController(BaseController):\n def __init__(self, veh_id, v0=30, T=1, a=1, b=1.5, \n delta=4, s0=2, s1=0, time_delay=0.0, \n dt=0.1, noise=0, fail_safe=None, car_following_params=None):\n \"\"\"Docstring eliminated here for brevity\"\"\"\n BaseController.__init__(self, veh_id, car_following_params,\n delay=time_delay, fail_safe=fail_safe,\n noise=noise)\n self.v0 = v0\n self.T = T\n self.a = a\n self.b = b\n self.delta = delta\n self.s0 = s0\n self.s1 = s1\n self.dt = dt\n\n \n ##### Below this is new code #####\n def get_accel(self, env):\n v = env.k.vehicle.get_speed(self.veh_id)\n lead_id = env.k.vehicle.get_leader(self.veh_id)\n h = env.k.vehicle.get_headway(self.veh_id)\n\n # negative headways may be registered by sumo at intersections/\n # junctions. Setting them to 0 causes vehicles to not move; therefore,\n # we maintain these negative headways to let sumo control the dynamics\n # as it sees fit at these points.\n if abs(h) < 1e-3:\n h = 1e-3\n\n if lead_id is None or lead_id == '': # no car ahead\n s_star = 0\n else:\n lead_vel = env.k.vehicle.get_speed(lead_id)\n s_star = self.s0 + max(\n 0,\n v * self.T + v*(v-lead_vel) / (2*np.sqrt(self.a*self.b)))\n\n return self.a * (1 - (v/self.v0)**self.delta - (s_star/h)**2)", "2 Lateral Controller\n2.1 BaseLaneChangeController\nIn this section we will implement a lane-change controller that sends lane-change commands to move a vehicle into lane 0. 
Flow includes a BaseLaneChangeController abstract class that functions similarly to the BaseController class, implementing safety-checking utility methods for control.\nFirst, we import the BaseLaneChangeController object and define a lane-change controller class, but leave the method definition until the next step.", "from flow.controllers.base_lane_changing_controller import BaseLaneChangeController\n\nclass LaneZeroController(BaseLaneChangeController):\n \"\"\"A lane-changing model used to move vehicles into lane 0.\"\"\"\n pass", "2.2 Lane-Change Command\nLane-change controllers must implement the method get_lane_change_action. Actions in Flow are specified as directions, which take one of the values -1, 0, or 1. Lane 0 is the farthest-right lane, so the direction -1 is a lane change to the right. \nThis get_lane_change_action implementation fetches the current lane the vehicle is in, using the get_lane method of the Vehicles object and passing in self.veh_id. If the vehicle is in a lane different from lane 0, it must have a lane number above 0, since lane numbers are non-negative in SUMO. In that case, a lane change to the right is specified by returning the direction -1. If the vehicle is already in lane 0, the direction 0 is returned.", "class LaneZeroController(BaseLaneChangeController):\n \"\"\"A lane-changing model used to move vehicles into lane 0.\"\"\"\n\n ##### Below this is new code #####\n def get_lane_change_action(self, env):\n current_lane = env.k.vehicle.get_lane(self.veh_id)\n if current_lane > 0:\n return -1\n else:\n return 0" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
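The IDM acceleration law used in the tutorial above can also be exercised outside of Flow's vehicle kernel. The sketch below is a hypothetical standalone function, not part of the flow package: the parameter names and defaults mirror the tutorial's `IDMController`, and a missing leader is modeled by setting `s_star` to zero, matching the tutorial's `get_accel`.

```python
import numpy as np

def idm_accel(v, lead_v=None, h=None, v0=30.0, T=1.0, a=1.0, b=1.5, delta=4, s0=2.0):
    """Intelligent Driver Model acceleration in m/s^2.

    v: ego speed (m/s); lead_v: leader speed, or None if the road is free;
    h: headway to the leader in m (ignored when lead_v is None).
    """
    if lead_v is None:
        # free road: the interaction term vanishes
        s_star = 0.0
        h = 1.0  # any positive value; s_star/h is zero anyway
    else:
        s_star = s0 + max(0.0, v * T + v * (v - lead_v) / (2 * np.sqrt(a * b)))
    return a * (1 - (v / v0) ** delta - (s_star / h) ** 2)
```

At the desired speed with a free road the acceleration is zero, and following a leader always yields a smaller acceleration than the free-road case at the same speed.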
sjsrey/pysal
notebooks/explore/segregation/local_measures_example.ipynb
bsd-3-clause
[ "Local Measures of segregation\nThis example notebook demonstrates the functionalities for local measures of the segregation module. First, we need to import the packages and functions we need:", "import pysal.lib\nfrom pysal.explore import segregation\nimport geopandas as gpd\nimport matplotlib.pyplot as plt\n\nfrom pysal.explore.segregation.local import MultiLocationQuotient, MultiLocalDiversity, MultiLocalEntropy, MultiLocalSimpsonInteraction, MultiLocalSimpsonConcentration, LocalRelativeCentralization", "Then it's time to load some data to estimate segregation. We use the 2000 Census Tract data for the metropolitan area of Sacramento, CA, USA. \nWe use a geopandas dataframe available in the PySAL examples repository.\nFor more information about the data: https://github.com/pysal/pysal.lib/tree/master/pysal.lib/examples/sacramento2", "input_df = gpd.read_file(pysal.lib.examples.get_path(\"sacramentot2.shp\"))\ninput_df.columns", "Important: all classes whose names start with \"Multi_\" expect a multigroup input, since the index will be calculated using many groups.\nOn the other hand, the other classes expect a single group for the calculation of the metrics.\nThe groups of interest are the White, Black, Asian, and Hispanic populations. 
Therefore, we create an auxiliary list with only the necessary columns for fitting the index.", "groups_list = ['WHITE_', 'BLACK_', 'ASIAN_','HISP_']", "We also can plot the spatial distribution of the composition of each of these groups over the tracts of Sacramento:", "for i in range(len(groups_list)):\n input_df['comp_' + groups_list[i]] = input_df[groups_list[i]] / input_df['TOT_POP']\n\nfig, axes = plt.subplots(ncols = 2, nrows = 2, figsize = (17, 10))\n\n\ninput_df.plot(column = 'comp_' + groups_list[0],\n cmap = 'OrRd',\n legend = True, ax = axes[0,0])\naxes[0,0].set_title('Composition of ' + groups_list[0], fontsize = 18)\naxes[0,0].set_xticks([])\naxes[0,0].set_yticks([])\naxes[0,0].set_facecolor('white')\n\n\ninput_df.plot(column = 'comp_' + groups_list[1],\n cmap = 'OrRd',\n legend = True, ax = axes[0,1])\naxes[0,1].set_title('Composition of ' + groups_list[1], fontsize = 18)\naxes[0,1].set_xticks([])\naxes[0,1].set_yticks([])\naxes[0,1].set_facecolor('white')\n\n\ninput_df.plot(column = 'comp_' + groups_list[2],\n cmap = 'OrRd',\n legend = True, ax = axes[1,0])\naxes[1,0].set_title('Composition of ' + groups_list[2], fontsize = 18)\naxes[1,0].set_xticks([])\naxes[1,0].set_yticks([])\naxes[1,0].set_facecolor('white')\n\ninput_df.plot(column = 'comp_' + groups_list[3],\n cmap = 'OrRd',\n legend = True, ax = axes[1,1])\naxes[1,1].set_title('Composition of ' + groups_list[3], fontsize = 18)\naxes[1,1].set_xticks([])\naxes[1,1].set_yticks([])\naxes[1,1].set_facecolor('white')", "Location Quotient (LQ)", "index = MultiLocationQuotient(input_df, groups_list)\nindex.statistics", "Important to note that column k has the Location Quotient (LQ) of position k in groups. Therefore, the LQ of the first unit of 'WHITE_' is 1.36543221 and, for example the LQ of 'BLACK_' of the last spatial unit is 0.07674888. 
In addition, in this case we can plot the LQ of every group in the dataset similarly the way we did previously with the composition:", "for i in range(len(groups_list)):\n input_df['LQ_' + groups_list[i]] = index.statistics[:,i]\n\nfig, axes = plt.subplots(ncols = 2, nrows = 2, figsize = (17, 10))\n\n\ninput_df.plot(column = 'LQ_' + groups_list[0],\n cmap = 'inferno_r',\n legend = True, ax = axes[0,0])\naxes[0,0].set_title('Location Quotient of ' + groups_list[0], fontsize = 18)\naxes[0,0].set_xticks([])\naxes[0,0].set_yticks([])\naxes[0,0].set_facecolor('white')\n\n\ninput_df.plot(column = 'LQ_' + groups_list[1],\n cmap = 'inferno_r',\n legend = True, ax = axes[0,1])\naxes[0,1].set_title('Location Quotient of ' + groups_list[1], fontsize = 18)\naxes[0,1].set_xticks([])\naxes[0,1].set_yticks([])\naxes[0,1].set_facecolor('white')\n\n\ninput_df.plot(column = 'LQ_' + groups_list[2],\n cmap = 'inferno_r',\n legend = True, ax = axes[1,0])\naxes[1,0].set_title('Location Quotient of ' + groups_list[2], fontsize = 18)\naxes[1,0].set_xticks([])\naxes[1,0].set_yticks([])\naxes[1,0].set_facecolor('white')\n\ninput_df.plot(column = 'LQ_' + groups_list[3],\n cmap = 'inferno_r',\n legend = True, ax = axes[1,1])\naxes[1,1].set_title('Location Quotient of ' + groups_list[3], fontsize = 18)\naxes[1,1].set_xticks([])\naxes[1,1].set_yticks([])\naxes[1,1].set_facecolor('white')", "Local Diversity", "index = MultiLocalDiversity(input_df, groups_list)\nindex.statistics[0:10] # Values of first 10 units\n\ninput_df['Local_Diversity'] = index.statistics\ninput_df.head()\nax = input_df.plot(column = 'Local_Diversity', cmap = 'inferno_r', legend = True, figsize = (15,7))\nax.set_title(\"Local Diversity\", fontsize = 25)", "Local Entropy", "index = MultiLocalEntropy(input_df, groups_list)\nindex.statistics[0:10] # Values of first 10 units\n\ninput_df['Local_Entropy'] = index.statistics\ninput_df.head()\nax = input_df.plot(column = 'Local_Entropy', cmap = 'inferno_r', legend = True, figsize = 
(15,7))\nax.set_title(\"Local Entropy\", fontsize = 25)", "Local Simpson Interaction", "index = MultiLocalSimpsonInteraction(input_df, groups_list)\nindex.statistics[0:10] # Values of first 10 units\n\ninput_df['Local_Simpson_Interaction'] = index.statistics\ninput_df.head()\nax = input_df.plot(column = 'Local_Simpson_Interaction', cmap = 'inferno_r', legend = True, figsize = (15,7))\nax.set_title(\"Local Simpson Interaction\", fontsize = 25)", "Local Simpson Concentration", "index = MultiLocalSimpsonConcentration(input_df, groups_list)\nindex.statistics[0:10] # Values of first 10 units\n\ninput_df['Local_Simpson_Concentration'] = index.statistics\ninput_df.head()\nax = input_df.plot(column = 'Local_Simpson_Concentration', cmap = 'inferno_r', legend = True, figsize = (15,7))\nax.set_title(\"Local Simpson Concentration\", fontsize = 25)", "Local Centralization\nLet's assume we want to calculate the Local Centralization to the group 'BLACK_':", "index = LocalRelativeCentralization(input_df, 'BLACK_', 'TOT_POP')\nindex.statistics[0:10] # Values of first 10 units\n\ninput_df['Local_Centralization'] = index.statistics\ninput_df.head()\nax = input_df.plot(column = 'Local_Centralization', cmap = 'inferno_r', legend = True, figsize = (15,7))\nax.set_title(\"Local Centralization\", fontsize = 25)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
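The `MultiLocationQuotient` statistics shown in the notebook above follow the standard location-quotient formula, LQ_ij = (x_ij / t_i) / (X_j / T): the share of group j in unit i divided by group j's share overall. A minimal NumPy sketch on a made-up 2×2 table of counts (toy numbers, not the Sacramento data, and not pysal API code):

```python
import numpy as np

# rows = spatial units (e.g. tracts), columns = groups -- toy counts
counts = np.array([[50.0, 50.0],
                   [90.0, 10.0]])

unit_totals = counts.sum(axis=1, keepdims=True)   # population of each unit
group_totals = counts.sum(axis=0, keepdims=True)  # overall population of each group
total = counts.sum()

# LQ_ij = (share of group j in unit i) / (share of group j overall)
lq = (counts / unit_totals) / (group_totals / total)
```

An LQ above 1 means the group is over-represented in that unit relative to the whole region; here the second unit over-represents the first group (0.9 local share vs 0.7 overall).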
statkraft/shyft-doc
notebooks/nea-example/calibration-configured-dev.ipynb
lgpl-3.0
[ "Running a calibration with SHyFT\nThis notebook guides you through the calibration process for a catchment. The following steps are described:\n\nLoading required python modules and setting path to SHyFT installation\nConfiguration of a SHyFT calibration\nRunning a SHyFT calibration\nInspecting the calibration results\n\n1. Loading required python modules and setting path to SHyFT installation\nShyft requires a number of different modules to be loaded as part of the package. Below, we describe the required steps for loading the modules, and note that some steps are only required for the use of the jupyter notebook.", "# Pure python modules and jupyter notebook functionality\n# first you should import the third-party python modules which you'll use later on\n# the first line enables figures to be shown inline, directly in the notebook\n%pylab inline\nimport os\nimport datetime as dt\nimport pandas as pd\nfrom os import path\nimport sys\nfrom matplotlib import pyplot as plt", "The Shyft Environment\nThis next step is highly specific to how and where you have installed Shyft. If you have followed the guidelines at github, and cloned the three shyft repositories: i) shyft, ii) shyft-data, and iii) shyft-doc, then you may need to tell jupyter notebooks where to find shyft. Uncomment the relevant lines below.\nIf you have a 'system' shyft, or used conda install -s sigbjorn shyft to install shyft, then you probably will want to make sure you have set the SHYFT_DATA directory correctly, as otherwise, Shyft will assume the above structure and fail. This has to be done before import shyft. 
In that case, uncomment the relevant lines below.\nNote: it is most likely that you'll need to do one or the other.", "# try to auto-configure the path, -will work in all cases where doc and data\n# are checked out at same level\nshyft_data_path = path.abspath(\"../../../shyft-data\")\nif path.exists(shyft_data_path) and 'SHYFT_DATA' not in os.environ:\n os.environ['SHYFT_DATA']=shyft_data_path\n \n# shyft should be available either by its install in python\n# or by PYTHONPATH set by user prior to starting notebook.\n# This is equivalent to the two lines below\n# shyft_path=path.abspath('../../../shyft')\n# sys.path.insert(0,shyft_path)\n\n# importing the shyft modules needed for running a calibration\nfrom shyft.repository.default_state_repository import DefaultStateRepository\nfrom shyft.orchestration.configuration.yaml_configs import YAMLCalibConfig, YAMLSimConfig\nfrom shyft.orchestration.simulators.config_simulator import ConfigCalibrator, ConfigSimulator", "2. Configuration of a SHyFT calibration", "# conduct a configured simulation first.\nconfig_file_path = os.path.abspath(\"../nea-example/nea-config/neanidelva_simulation.yaml\")\ncfg = YAMLSimConfig(config_file_path, \"neanidelva\")\nsimulator = ConfigSimulator(cfg) \n# run the model, and we'll just pull the `api.model` from the `simulator`\nsimulator.run()\nstate = simulator.region_model.state", "Now that we have the initial state, we'll run the calibration (this is not a strictly required step, but we use it later)", "# set up configuration using *.yaml configuration files\nconfig_file_path = os.path.abspath(\"./nea-config/neanidelva_calibration.yaml\") # here is the *.yaml file\ncfg = YAMLCalibConfig(config_file_path, \"neanidelva\")\n\n# initialize an instance of the orchestration's ConfigCalibrator class, which has all the functionality needed\n# to run a calibration using the above initiated configuration\ncalib = ConfigCalibrator(cfg)\nn_cells = calib.region_model.size()\nstate_repos = 
DefaultStateRepository(calib.region_model) # Notice that this repository needs the real model\n# so that it's able to generate a precise\n# default state-with-id vector for this\n# specific model\n", "3. Running a SHyFT calibration", "# once the calibrator is set up, all you need to do is running the calibration...\n# the calibrated parameters are stored in a model.yaml. \nresults = calib.calibrate(cfg.sim_config.time_axis, state_repos.get_state(0).state_vector, \n cfg.optimization_method['name'],\n cfg.optimization_method['params'])", "4. Inspecting the calibration results\nFirst the Nash-Suttcliffe-efficiency of the calibrated simulation is computed to see the quality of the calibration.\nThen the calibrated model parameters are accessed and printed out.", "# Get NSE of calibrated run:\nresult_params = []\nfor i in range(results.size()):\n result_params.append(results.get(i))\nprint(\"Final NSE =\", 1-calib.optimizer.calculate_goal_function(result_params))\n\n# Check out the calibrated parameters.\n\ndiff = 1.0E-3\nprint(\"{0:30s} {1:10s}\".format(\"PARAM-NAME\", \"CALIB-VALUE\"))\n\nfor i in range(results.size()):\n print(\"{0:30s} {1:10f}\".format(results.get_name(i), results.get(i)))", "Plotting simulated and observed discharge\nWe are now plotting the simulated and observed discharge timeseries over the course of the melt period.", "# get the target vector and discharge statistics from the configured calibrator\ntarget_obs = calib.tv[0]\ndisch_sim = calib.region_model.statistics.discharge(target_obs.catchment_indexes).average(target_obs.ts.time_axis)\ndisch_obs = target_obs.ts.values\n\nts_timestamps = [dt.datetime.utcfromtimestamp(p.start) for p in target_obs.ts.time_axis]\n\n\n# plot up the results\nfig, ax = plt.subplots(1, figsize=(15,10))\nax.plot(ts_timestamps, disch_sim.values, lw=2, label = \"sim\")\nax.plot(ts_timestamps, disch_obs, lw=2, ls='--', label = \"obs\")\nax.set_title(\"observed and simulated 
discharge\")\nax.legend()\nax.set_ylabel(\"discharge [m3 s-1]\")", "5. Changing parameters on-the-fly\nInstead of changing model parameters in the yaml configs, reloading the configuration, and re-running the model, we can also just change the parameters \"on-the-fly\" and rerun the model. This makes it easy to investigate the influence of certain model parameters on the simulation results.\n5a. Snow or rain? The parameter gs.tx sets the threshold temperature at which precipitation is treated as snow fall.\nIn the following we'll investigate the impact of manually manipulating this parameter.", "parameters = calib.region_model.get_region_parameter() # fetching parameters from the simulator object\nprint(u\"Calibrated rain/snow threshold temp: {} C\".format(parameters.gs.tx)) # print current value of gs.tx", "In the following, we first set the gs.tx parameter to a higher, and then to a lower value compared to the value the calibration results suggest. We re-run the simulation, respectively, and plot the results.", "calib.optimizer.calculate_goal_function(result_params) # reset the parameters to the values of the calibration\nparameters.gs.tx = 4.0 # setting a higher value for tx\ns_init = state.extract_state([])\n# type(state)\n# s0=state_repos.get_state(0)\n# s0.state_vector\n# state.apply_state(s0, [])\ncalib.run(state=s_init) # rerun the model, with new parameter\ndisch_sim_p_high = calib.region_model.statistics.discharge(target_obs.catchment_indexes).average(target_obs.ts.time_axis) # fetch discharge ts\nparameters.gs.tx = -4.0 # setting a lower value for tx\n\ncalib.run(state=s_init) # rerun the model, with new parameter\n\ndisch_sim_p_low = calib.region_model.statistics.discharge(target_obs.catchment_indexes).average(target_obs.ts.time_axis) # fetch discharge ts\nfig, ax = plt.subplots(1, figsize=(15,10))\nax.plot(ts_timestamps, disch_sim.values, lw=2, label = \"calib\")\nax.plot(ts_timestamps, disch_sim_p_high.values, lw=2, label = 
\"high\")\nax.plot(ts_timestamps, disch_sim_p_low.values, lw=2, label = \"low\")\nax.plot(ts_timestamps, disch_obs, lw=2, ls='--', label = \"obs\")\nax.set_title(\"investigating parameter gs.tx\")\nax.legend()\nax.set_ylabel(\"discharge [m3 s-1]\")\n\ns_init = state.extract_state([])\n\n# reset the max water parameter\nparameters.gs.max_water = 1.0 # setting a higher value for max_water\ncalib.run(state=s_init) # rerun the model, with new parameter\ndisch_sim_p_high = calib.region_model.statistics.discharge(target_obs.catchment_indexes).average(target_obs.ts.time_axis) # fetch discharge ts\n\nparameters.gs.max_water = .001 # setting a lower value for max_water\ncalib.run(state=s_init) # rerun the model, with new parameter\ndisch_sim_p_low = calib.region_model.statistics.discharge(target_obs.catchment_indexes).average(target_obs.ts.time_axis) # fetch discharge ts\n\n# plot the results\nfig, ax = plt.subplots(1, figsize=(15,10))\nax.plot(ts_timestamps, disch_sim.values, lw=2, label = \"calib\")\nax.plot(ts_timestamps, disch_sim_p_high.values, lw=2, label = \"high\")\nax.plot(ts_timestamps, disch_sim_p_low.values, lw=2, label = \"low\")\nax.plot(ts_timestamps, disch_obs, lw=2, ls='--', label = \"obs\")\nax.set_title(\"investigating parameter gs.max_water\")\nax.legend()\nax.set_ylabel(\"discharge [m3 s-1]\")", "6. Play with some sensitive parameters in real time", "# at this point we could look at the time series for every cell. 
Or plot a spatial map...\n# TODO: https://data-dive.com/cologne-bike-rentals-interactive-map-bokeh-dynamic-choropleth\nfrom ipywidgets import interact\nimport numpy as np\n\nfrom bokeh.io import push_notebook, show, output_notebook\nfrom bokeh.plotting import figure\nfrom bokeh.palettes import viridis\nfrom bokeh.models.sources import ColumnDataSource\n\n\noutput_notebook()\n\np = figure(title='Parameters', plot_height=300, plot_width=800)\n\npallette = viridis(10)\nts_timestamps = [dt.datetime.utcfromtimestamp(ta.start) for ta in target_obs.ts.time_axis]\n \n\ndef plot_simobs(calib):\n \n model = calib.region_model\n disch_sim = model.statistics.discharge(calib.tv[0].catchment_indexes).average(calib.tv[0].ts.time_axis)\n disch_obs = calib.tv[0].ts\n \n data = {\n 'time': ts_timestamps,\n 'sim': model.statistics.discharge(calib.tv[0].catchment_indexes).average(calib.tv[0].ts.time_axis).values,\n 'obs': calib.tv[0].ts.values\n }\n source = ColumnDataSource(data)\n \n p.line('time', 'sim', source=source, line_color=pallette[0])\n p.line('time', 'obs', source=source, line_color='red')\n \n return p\n\n\ndef update(tx=0):\n parameters.gs.tx = tx\n calib.run(state=s_init)\n plot_simobs(calib)\n push_notebook()\n \n\nmodel = calib.region_model\np = plot_simobs(calib)\nshow(p, notebook_handle=True)\ninteract(update, tx=np.arange(-3.,4.))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
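The "Final NSE" printed in the calibration notebook above is the Nash-Sutcliffe efficiency, NSE = 1 − Σ(obs − sim)² / Σ(obs − mean(obs))². A minimal sketch, independent of the Shyft API (`nash_sutcliffe` is a hypothetical helper, not a Shyft function):

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).

    1.0 is a perfect fit; 0.0 means the model is no better than
    predicting the observed mean; negative values are worse than that.
    """
    obs = np.asarray(obs, dtype=float)
    sim = np.asarray(sim, dtype=float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
```

This is why the notebook reports `1 - calib.optimizer.calculate_goal_function(...)`: the optimizer minimizes 1 − NSE, so subtracting the goal-function value from one recovers the efficiency.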
modin-project/modin
examples/quickstart.ipynb
apache-2.0
[ "<center><h2>Scale your pandas workflows by changing one line of code</h2>\nGetting Started\nTo install the most recent stable release for Modin run the following code on your command line:", "!pip install modin[all] ", "For further instructions on how to install Modin with conda or for specific platforms or engines, see our detailed installation guide.\nModin acts as a drop-in replacement for pandas so you can simply change a single line of import to speed up your pandas workflows. To use Modin, you simply have to replace the import of pandas with the import of Modin, as follows.", "import modin.pandas as pd\nimport pandas\n\n#############################################\n### For the purpose of timing comparisons ###\n#############################################\nimport time\nimport ray\nray.init()\nfrom IPython.display import Markdown, display\ndef printmd(string):\n display(Markdown(string))", "Dataset: NYC taxi trip data\nLink to raw dataset: https://modin-test.s3.us-west-1.amazonaws.com/yellow_tripdata_2015-01.csv (Size: ~200MB)", "# This may take a few minutes to download\nimport urllib.request\ns3_path = \"https://modin-test.s3.us-west-1.amazonaws.com/yellow_tripdata_2015-01.csv\"\nurllib.request.urlretrieve(s3_path, \"taxi.csv\") ", "Faster Data Loading with Modin's read_csv", "start = time.time()\n\npandas_df = pandas.read_csv(\"taxi.csv\", parse_dates=[\"tpep_pickup_datetime\", \"tpep_dropoff_datetime\"], quoting=3)\n\nend = time.time()\npandas_duration = end - start\nprint(\"Time to read with pandas: {} seconds\".format(round(pandas_duration, 3)))\n\nstart = time.time()\n\nmodin_df = pd.read_csv(\"taxi.csv\", parse_dates=[\"tpep_pickup_datetime\", \"tpep_dropoff_datetime\"], quoting=3)\n\nend = time.time()\nmodin_duration = end - start\nprint(\"Time to read with Modin: {} seconds\".format(round(modin_duration, 3)))\n\nprintmd(\"## Modin is {}x faster than pandas at `read_csv`!\".format(round(pandas_duration / modin_duration, 2)))", "You can quickly 
check that the result from pandas and Modin is exactly the same.", "pandas_df\n\nmodin_df", "Faster Append with Modin's concat\nOur previous read_csv example operated on a relatively small dataframe. In the following example, we duplicate the same taxi dataset 100 times and then concatenate them together.", "N_copies = 100\nstart = time.time()\n\nbig_pandas_df = pandas.concat([pandas_df for _ in range(N_copies)])\n\nend = time.time()\npandas_duration = end - start\nprint(\"Time to concat with pandas: {} seconds\".format(round(pandas_duration, 3)))\n\nstart = time.time()\n\nbig_modin_df = pd.concat([modin_df for _ in range(N_copies)])\n\nend = time.time()\nmodin_duration = end - start\nprint(\"Time to concat with Modin: {} seconds\".format(round(modin_duration, 3)))\n\nprintmd(\"### Modin is {}x faster than pandas at `concat`!\".format(round(pandas_duration / modin_duration, 2)))", "The result dataset is around 19GB in size.", "big_modin_df.info()", "Faster apply over a single column\nThe performance benefits of Modin become apparent when we operate on large gigabyte-scale datasets. For example, let's say that we want to round the numbers in a single column via the apply operation.", "start = time.time()\nrounded_trip_distance_pandas = big_pandas_df[\"trip_distance\"].apply(round)\n\nend = time.time()\npandas_duration = end - start\nprint(\"Time to apply with pandas: {} seconds\".format(round(pandas_duration, 3)))\n\nstart = time.time()\n\nrounded_trip_distance_modin = big_modin_df[\"trip_distance\"].apply(round)\n\nend = time.time()\nmodin_duration = end - start\nprint(\"Time to apply with Modin: {} seconds\".format(round(modin_duration, 3)))\n\nprintmd(\"### Modin is {}x faster than pandas at `apply` on one column!\".format(round(pandas_duration / modin_duration, 2)))", "Summary\nHopefully, this tutorial demonstrated how Modin delivers significant speedup on pandas operations without the need for any extra effort. 
Throughout this example, we moved from working with 100MBs of data to 20GBs of data, all without having to change anything or manually optimize our code to achieve the level of scalable performance that Modin provides.\nNote that in this quickstart example, we've only shown read_csv, concat, and apply, but these are not the only pandas operations that Modin optimizes. In fact, Modin covers more than 90% of the pandas API, yielding considerable speedups for many common operations." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
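The start/end timing pattern repeated in the quickstart cells above can be factored into a small helper. This is a hypothetical sketch, not part of Modin; it uses `time.perf_counter`, which is the more precise stdlib choice for measuring elapsed intervals than `time.time`.

```python
import time

def timed(fn, *args, **kwargs):
    """Run fn(*args, **kwargs) and return (result, elapsed_seconds)."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    elapsed = time.perf_counter() - start
    return result, elapsed

# hypothetical usage, mirroring the cells above:
#   pandas_df, pandas_secs = timed(pandas.read_csv, "taxi.csv")
#   modin_df, modin_secs = timed(pd.read_csv, "taxi.csv")
#   speedup = pandas_secs / modin_secs
```

Factoring the timing out keeps the comparison cells focused on the single-line pandas-to-Modin change the tutorial is demonstrating.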
GoogleCloudPlatform/vertex-ai-samples
notebooks/official/automl/sdk_automl_video_object_tracking_batch.ipynb
apache-2.0
[ "# Copyright 2022 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Vertex AI SDK for Python: AutoML training video object tracking model for batch prediction\n<table align=\"left\">\n <td>\n <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/automl/sdk_automl_video_object_tracking_batch.ipynb\">\n <img src=\"https://cloud.google.com/ml-engine/images/colab-logo-32px.png\" alt=\"Colab logo\"> Run in Colab\n </a>\n </td>\n <td>\n <a href=\"https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/automl/sdk_automl_video_object_tracking_batch.ipynb\">\n <img src=\"https://cloud.google.com/ml-engine/images/github-logo-32px.png\" alt=\"GitHub logo\">\n View on GitHub\n </a>\n </td>\n <td>\n <a href=\"https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/automl/sdk_automl_video_object_tracking_batch.ipynb\">\n <img src=\"https://lh3.googleusercontent.com/UiNooY4LUgW_oTvpsNhPpQzsstV5W8F7rYgxgGBD85cWJoLmrOzhVs_ksK_vgx40SHs7jCqkTkCk=e14-rj-sc0xffffff-h130-w32\" alt=\"Vertex AI logo\">\n Open in Vertex AI Workbench\n </a>\n </td>\n</table>\n<br/><br/><br/>\nOverview\nThis tutorial demonstrates how to use the Vertex AI SDK for Python to create video object tracking models and do batch prediction using a Google Cloud AutoML model.\nDataset\nThe dataset used 
for this tutorial is the Traffic dataset. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket.\nObjective\nIn this tutorial, you create an AutoML video object tracking model from a Python script, and then do a batch prediction using the Vertex SDK. You can alternatively create and deploy models using the gcloud command-line tool or online using the Cloud Console.\nThe steps performed include:\n\nCreate a Vertex Dataset resource.\nTrain the model.\nView the model evaluation.\nMake a batch prediction.\n\nThere is one key difference between using batch prediction and using online prediction:\n\n\nPrediction Service: Does an on-demand prediction for the entire set of instances (i.e., one or more data items) and returns the results in real-time.\n\n\nBatch Prediction Service: Does a queued (batch) prediction for the entire set of instances in the background and stores the results in a Cloud Storage bucket when ready.\n\n\nCosts\nThis tutorial uses billable components of Google Cloud:\n\nVertex AI\nCloud Storage\n\nLearn about Vertex AI\npricing and Cloud Storage\npricing, and use the Pricing\nCalculator\nto generate a cost estimate based on your projected usage.\nSet up your local development environment\nIf you are using Colab or Google Cloud Notebooks, your environment already meets all the requirements to run this notebook. You can skip this step.\nOtherwise, make sure your environment meets this notebook's requirements. You need the following:\n\nThe Cloud Storage SDK\nGit\nPython 3\nvirtualenv\nJupyter notebook running in a virtual environment with Python 3\n\nThe Cloud Storage guide to Setting up a Python development environment and the Jupyter installation guide provide detailed instructions for meeting these requirements. 
The following steps provide a condensed set of instructions:\n\n\nInstall and initialize the SDK.\n\n\nInstall Python 3.\n\n\nInstall virtualenv and create a virtual environment that uses Python 3. Activate the virtual environment.\n\n\nTo install Jupyter, run pip3 install jupyter on the command-line in a terminal shell.\n\n\nTo launch Jupyter, run jupyter notebook on the command-line in a terminal shell.\n\n\nOpen this notebook in the Jupyter Notebook Dashboard.\n\n\nInstallation\nInstall the latest version of Vertex SDK for Python.", "import os\n\n# Google Cloud Notebook\nif os.path.exists(\"/opt/deeplearning/metadata/env_version\"):\n USER_FLAG = \"--user\"\nelse:\n USER_FLAG = \"\"\n\n! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG", "Install the latest GA version of google-cloud-storage library as well.", "! pip3 install -U google-cloud-storage $USER_FLAG", "Restart the kernel\nOnce you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.", "import os\n\nif not os.getenv(\"IS_TESTING\"):\n # Automatically restart kernel after installs\n import IPython\n\n app = IPython.Application.instance()\n app.kernel.do_shutdown(True)", "Before you begin\nGPU runtime\nThis tutorial does not require a GPU runtime.\nSet up your Google Cloud project\nThe following steps are required, regardless of your notebook environment.\n\n\nSelect or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.\n\n\nMake sure that billing is enabled for your project.\n\n\nEnable the following APIs: Vertex AI APIs, Compute Engine APIs, and Cloud Storage.\n\n\nIf you are running this notebook locally, you will need to install the Cloud SDK.\n\n\nEnter your project ID in the cell below. Then run the cell to make sure the\nCloud SDK uses the right project for all the commands in this notebook.\n\n\nNote: Jupyter runs lines prefixed with ! 
as shell commands, and it interpolates Python variables prefixed with $.", "PROJECT_ID = \"\" # @param {type:\"string\"}\n\nif PROJECT_ID == \"\" or PROJECT_ID is None or PROJECT_ID == \"[your-project-id]\":\n # Get your GCP project id from gcloud\n shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null\n PROJECT_ID = shell_output[0]\n print(\"Project ID:\", PROJECT_ID)\n\n! gcloud config set project $PROJECT_ID", "Region\nYou can also change the REGION variable, which is used for operations\nthroughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.\n\nAmericas: us-central1\nEurope: europe-west4\nAsia Pacific: asia-east1\n\nYou may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.\nLearn more about Vertex AI regions", "REGION = \"[your-region]\"\nif REGION == \"[your-region]\":\n REGION = \"us-central1\"", "Timestamp\nIf you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.", "from datetime import datetime\n\nTIMESTAMP = datetime.now().strftime(\"%Y%m%d%H%M%S\")", "Authenticate your Google Cloud account\nIf you are using Google Cloud Notebooks, your environment is already authenticated. Skip this step.\nIf you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.\nOtherwise, follow these steps:\nIn the Cloud Console, go to the Create service account key page.\nClick Create service account.\nIn the Service account name field, enter a name, and click Create.\nIn the Grant this service account access to project section, click the Role drop-down list. 
Type \"Vertex\" into the filter box, and select Vertex Administrator. Type \"Storage Object Admin\" into the filter box, and select Storage Object Admin.\nClick Create. A JSON file that contains your key downloads to your local environment.\nEnter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.", "# If you are running this notebook in Colab, run this cell and follow the\n# instructions to authenticate your GCP account. This provides access to your\n# Cloud Storage bucket and lets you submit training jobs and prediction\n# requests.\n\nimport os\nimport sys\n\n# If on Google Cloud Notebook, then don't execute this code\nif not os.path.exists(\"/opt/deeplearning/metadata/env_version\"):\n if \"google.colab\" in sys.modules:\n from google.colab import auth as google_auth\n\n google_auth.authenticate_user()\n\n # If you are running this notebook locally, replace the string below with the\n # path to your service account key and run this cell to authenticate your GCP\n # account.\n elif not os.getenv(\"IS_TESTING\"):\n %env GOOGLE_APPLICATION_CREDENTIALS ''", "Create a Cloud Storage bucket\nThe following steps are required, regardless of your notebook environment.\nWhen you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.\nSet the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.", "BUCKET_URI = \"\" # @param {type:\"string\"}\n\nif BUCKET_URI == \"\" or BUCKET_URI is None or BUCKET_URI == \"gs://[your-bucket-name]\":\n BUCKET_URI = \"gs://\" + PROJECT_ID + \"aip-\" + TIMESTAMP", "Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.", "! 
gsutil mb -l $REGION -p $PROJECT_ID $BUCKET_URI", "Finally, validate access to your Cloud Storage bucket by examining its contents:", "! gsutil ls -al $BUCKET_URI", "Set up variables\nNext, set up some variables used throughout the tutorial.\nImport libraries and define constants", "import json\nimport os\n\nimport google.cloud.aiplatform as aiplatform\nfrom google.cloud import storage", "Initialize Vertex SDK for Python\nInitialize the Vertex SDK for Python for your project and corresponding bucket.", "aiplatform.init(project=PROJECT_ID, staging_bucket=BUCKET_URI)", "Tutorial\nNow you are ready to start creating your own AutoML video object tracking model.\nLocation of Cloud Storage training data.\nNow set the variable IMPORT_FILE to the location of the CSV index file in Cloud Storage.", "IMPORT_FILE = \"gs://cloud-samples-data/ai-platform-unified/video/traffic/traffic_videos_labels.csv\"", "Quick peek at your data\nThis tutorial uses a version of the Traffic dataset that is stored in a public Cloud Storage bucket, using a CSV index file.\nStart by doing a quick peek at the data. You count the number of examples by counting the number of rows in the CSV index file (wc -l) and then peek at the first few rows.", "FILE = IMPORT_FILE\n\ncount = ! gsutil cat $FILE | wc -l\nprint(\"Number of Examples\", int(count[0]))\n\n! 
gsutil cat $FILE | head", "Create the Dataset\nNext, create the Dataset resource using the create method for the VideoDataset class, which takes the following parameters:\n\ndisplay_name: The human readable name for the Dataset resource.\ngcs_source: A list of one or more dataset index files to import the data items into the Dataset resource.\n\nThis operation may take several minutes.", "dataset = aiplatform.VideoDataset.create(\n display_name=\"Traffic\" + \"_\" + TIMESTAMP,\n gcs_source=[IMPORT_FILE],\n import_schema_uri=aiplatform.schema.dataset.ioformat.video.object_tracking,\n)\n\nprint(dataset.resource_name)", "Create and run training pipeline\nTo train an AutoML model, you perform two steps: 1) create a training pipeline, and 2) run the pipeline.\nCreate training pipeline\nAn AutoML training pipeline is created with the AutoMLVideoTrainingJob class, with the following parameters:\n\ndisplay_name: The human readable name for the TrainingJob resource.\nprediction_type: The type of task to train the model for.\nclassification: A video classification model.\nobject_tracking: A video object tracking model.\naction_recognition: A video action recognition model.", "job = aiplatform.AutoMLVideoTrainingJob(\n display_name=\"traffic_\" + TIMESTAMP,\n prediction_type=\"object_tracking\",\n)\n\nprint(job)", "Run the training pipeline\nNext, you start the training job by invoking the method run, with the following parameters:\n\ndataset: The Dataset resource to train the model.\nmodel_display_name: The human readable name for the trained model.\ntraining_fraction_split: The percentage of the dataset to use for training.\ntest_fraction_split: The percentage of the dataset to use for test (holdout data).\n\nWhen completed, the run method returns the Model resource.\nThe execution of the training pipeline can take up to 5 hours.", "model = job.run(\n dataset=dataset,\n model_display_name=\"traffic_\" + TIMESTAMP,\n training_fraction_split=0.8,\n test_fraction_split=0.2,\n)", 
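The `training_fraction_split`/`test_fraction_split` arguments above simply partition the dataset by fraction. A minimal plain-Python sketch of that kind of fractional split (illustrative only — `fraction_split` is a hypothetical helper; Vertex AI performs the real split server-side):

```python
import random

def fraction_split(items, training_fraction=0.8, test_fraction=0.2, seed=0):
    """Shuffle items and partition them by the given fractions."""
    assert abs(training_fraction + test_fraction - 1.0) < 1e-9
    shuffled = list(items)
    random.Random(seed).shuffle(shuffled)
    n_train = int(round(len(shuffled) * training_fraction))
    # Everything before the cut is training data, the rest is holdout.
    return shuffled[:n_train], shuffled[n_train:]

train, test = fraction_split(range(100))
print(len(train), len(test))  # 80 20
```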
"Review model evaluation scores\nAfter your model has finished training, you can review the evaluation scores for it.\nFirst, you need to get a reference to the new model. As with datasets, you can either use the reference to the model variable you created when you trained the model or you can list all of the models in your project.", "# Get model resource ID\nmodels = aiplatform.Model.list(filter=\"display_name=traffic_\" + TIMESTAMP)\n\n# Get a reference to the Model Service client\nclient_options = {\"api_endpoint\": f\"{REGION}-aiplatform.googleapis.com\"}\nmodel_service_client = aiplatform.gapic.ModelServiceClient(\n client_options=client_options\n)\n\nmodel_evaluations = model_service_client.list_model_evaluations(\n parent=models[0].resource_name\n)\nmodel_evaluation = list(model_evaluations)[0]\nprint(model_evaluation)", "Send a batch prediction request\nSend a batch prediction request to your trained model.\nGet test item(s)\nNow do a batch prediction to your Vertex model. You will use arbitrary examples out of the dataset as test items. Don't be concerned that the examples were likely used in training the model -- we just want to demonstrate how to make a prediction.", "test_items = ! gsutil cat $IMPORT_FILE | head -n2\ncols_1 = test_items[0].split(\",\")\ncols_2 = test_items[1].split(\",\")\nif len(cols_1) > 12:\n test_item_1 = str(cols_1[1])\n test_item_2 = str(cols_2[1])\n test_label_1 = str(cols_1[2])\n test_label_2 = str(cols_2[2])\nelse:\n test_item_1 = str(cols_1[0])\n test_item_2 = str(cols_2[0])\n test_label_1 = str(cols_1[1])\n test_label_2 = str(cols_2[1])", "Make a batch input file\nNow make a batch input file, which you store in your Cloud Storage bucket. The batch input file can be either CSV or JSONL. You will use JSONL in this tutorial. For a JSONL file, you make one dictionary entry per line for each video. The dictionary contains the key/value pairs:\n\ncontent: The Cloud Storage path to the video.\nmimeType: The content type. 
In our example, it is an AVI file.\ntimeSegmentStart: The start timestamp in the video to do prediction on. Note: the timestamp must be specified as a string and followed by s (second), m (minute) or h (hour).\ntimeSegmentEnd: The end timestamp in the video to do prediction on.", "test_filename = \"test.jsonl\"\ngcs_input_uri = BUCKET_URI + \"/test.jsonl\"\n# making data_1 and data_2 variables using the structure mentioned above\ndata_1 = {\n \"content\": test_item_1,\n \"mimeType\": \"video/avi\",\n \"timeSegmentStart\": \"0.0s\",\n \"timeSegmentEnd\": \"5.0s\",\n}\n\ndata_2 = {\n \"content\": test_item_2,\n \"mimeType\": \"video/avi\",\n \"timeSegmentStart\": \"0.0s\",\n \"timeSegmentEnd\": \"5.0s\",\n}\n\n# getting reference to bucket\nbucket = storage.Client(project=PROJECT_ID).bucket(BUCKET_URI.replace(\"gs://\", \"\"))\n\n# creating a blob\nblob = bucket.blob(blob_name=test_filename)\n\n# creating data variable\ndata = json.dumps(data_1) + \"\\n\" + json.dumps(data_2) + \"\\n\"\n\n# uploading data variable content to bucket\nblob.upload_from_string(data)\n\n# printing path of uploaded file\nprint(gcs_input_uri)\n\n# printing content of uploaded file\n! 
gsutil cat $gcs_input_uri", "Make the batch prediction request\nNow that your Model resource is trained, you can make a batch prediction by invoking the batch_predict() method, with the following parameters:\n\njob_display_name: The human readable name for the batch prediction job.\ngcs_source: A list of one or more batch request input files.\ngcs_destination_prefix: The Cloud Storage location for storing the batch prediction results.\nsync: If set to True, the call will block while waiting for the asynchronous batch job to complete.", "batch_predict_job = model.batch_predict(\n job_display_name=\"traffic_\" + TIMESTAMP,\n gcs_source=gcs_input_uri,\n gcs_destination_prefix=BUCKET_URI,\n sync=False,\n)\n\nprint(batch_predict_job)", "Wait for completion of batch prediction job\nNext, wait for the batch job to complete. Alternatively, one can set the parameter sync to True in the batch_predict() method to block until the batch prediction job is completed.", "batch_predict_job.wait()", "Get the predictions\nNext, get the results from the completed batch prediction job.\nThe results are written to the Cloud Storage output bucket you specified in the batch prediction request. You call the method iter_outputs() to get a list of each Cloud Storage file generated with the results. 
Each file contains one or more prediction requests in a JSON format:\n\ncontent: The prediction request.\nprediction: The prediction response.\nid: The internal assigned unique identifiers for each prediction request.\ndisplayName: The class names for the predicted label.\nconfidences: The predicted confidence, between 0 and 1, per class label.\ntimeSegmentStart: The time offset in the video to the start of the video sequence.\ntimeSegmentEnd: The time offset in the video to the end of the video sequence.\nframes: Location with frames of the tracked object.", "bp_iter_outputs = batch_predict_job.iter_outputs()\n\nprediction_results = list()\nfor blob in bp_iter_outputs:\n if blob.name.split(\"/\")[-1].startswith(\"prediction\"):\n prediction_results.append(blob.name)\n\ntags = list()\nfor prediction_result in prediction_results:\n gfile_name = f\"gs://{bp_iter_outputs.bucket.name}/{prediction_result}\".replace(\n BUCKET_URI + \"/\", \"\"\n )\n data = bucket.get_blob(gfile_name).download_as_string()\n data = json.loads(data)\n print(data)", "Cleaning up\nTo clean up all Google Cloud resources used in this project, you can delete the Google Cloud\nproject you used for the tutorial.\nOtherwise, you can delete the individual resources you created in this tutorial:\n\nDataset\nPipeline\nModel\nEndpoint\nAutoML Training Job\nBatch Job\nCustom Job\nHyperparameter Tuning Job\nCloud Storage Bucket", "# Delete the dataset using the Vertex dataset object\ndataset.delete()\n\n# Delete the model using the Vertex model object\nmodel.delete()\n\n# Delete the AutoML or Pipeline training job\njob.delete()\n\n# Delete the batch prediction job using the Vertex batch prediction object\nbatch_predict_job.delete()\n\nif os.getenv(\"IS_TESTING\"):\n ! gsutil -m rm -r $BUCKET_URI" ]
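The JSONL batch-input construction used earlier in the tutorial can be factored into a small helper. A sketch using only the standard library (the field names mirror the tutorial's request schema; `make_batch_input` itself is hypothetical):

```python
import json

def make_batch_input(video_uris, start="0.0s", end="5.0s", mime="video/avi"):
    """Return a JSONL payload: one prediction-request dict per line."""
    lines = [
        json.dumps({
            "content": uri,                 # Cloud Storage path to the video
            "mimeType": mime,               # content type of the video
            "timeSegmentStart": start,      # e.g. "0.0s", "1m", "1h"
            "timeSegmentEnd": end,
        })
        for uri in video_uris
    ]
    return "\n".join(lines) + "\n"

payload = make_batch_input(["gs://my-bucket/a.avi", "gs://my-bucket/b.avi"])
print(payload)
```

The resulting string can be uploaded with `blob.upload_from_string(payload)` exactly as in the tutorial cell.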
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
Atomahawk/flagging-suspicious-blockchain-transactions
lab_notebooks/blockchain-preprocessing.ipynb
mit
[ "Import Blockchain", "import errno\nimport os\nimport shutil\nimport zipfile\nimport numpy as np\nimport pandas as pd\n\n\n# In[22]:\n\n# TARGETDIR = '../btc/graphs_njp.zip'\n\n\n# In[23]:\n\n# with open(doc, \"rb\") as zipsrc:\n# zfile = zipfile.ZipFile(zipsrc)\n# for member in zfile.infolist():\n# target_path = os.path.join(TARGETDIR, member.filename)\n# if target_path.endswith('/'): # folder entry, create\n# try:\n# os.makedirs(target_path)\n# except (OSError, IOError) as err:\n# # Windows may complain if the folders already exist\n# if err.errno != errno.EEXIST:\n# raise\n# continue\n# with open(target_path, 'wb') as outfile, zfile.open(member) as infile:\n# shutil.copyfileobj(infile, outfile)", "Change the headers", "!ls\n\n!unzip bitcoin_dataset.zip", "NODES\n\nConvert txt files to csv\nhttp://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html\nnodes first\nhttps://neo4j.com/docs/opera", "# addresses\naddr = pd.read_csv('addresses.txt', header = None, names = ['user:ID', 'addr'], delimiter= '\\t')\naddr[':LABEL'] = 'address'\naddr.to_csv('addresses.csv', index = False)\naddr.head()\n\n# blocks\nblks = pd.read_csv('blockhash.txt', header = None, names = ['block:ID', 'bhash', 'btime', 'txs' ], delimiter= '\\t')\nblks[':LABEL'] = 'blockchain'\nblks.to_csv('blockhash.csv', index = False)\nblks.head()\n\n!rm addresses.txt \n!rm blockhash.txt\n\n# transactions + transaction time\ntxns = pd.read_csv('txhash.txt', header = None, names = ['tx:ID', 'txhash'], delimiter= '\\t')\ntxns2 = pd.read_csv('tx.txt', header = None, names = ['tx:ID', 'block' ,'n_inputs', 'n_outputs'], delimiter= '\\t')\n# txns = txns.drop(['Unnamed: 0', ':LABEL'], axis = 1)\ntxnstime = pd.read_csv('txtime.txt', header = None, names = ['tx:ID', 'unixtime'], delimiter= '\\t')\n\ntxnstime['time'] = pd.to_datetime(txnstime['unixtime'],unit='s')\n\ntxns = txns.merge(txnstime, how = 'left', on = 'tx:ID')\ntxns = txns.merge(txns2, how = 'left', on = 'tx:ID')\ntxns[':LABEL'] = 
'transaction'\n\ntxns.head()\n\ntxns.to_csv('tx.csv', index = False)\n\nprint(txns.columns)\n\n!rm txhash.txt\n!rm txtime.txt\n!rm tx.txt\n\n# txnsin\n\ntxnsin = pd.read_csv('txin.txt', header = None, names = ['tx:ID', 'user' ,'value'], delimiter= '\\t')\n\ntxnsin[':LABEL'] = 'incoming_payment'\n\ntxnsin.to_csv('txin.csv', index = False)\n\ntxnsin.head()\n\n# txnsout\n\ntxnsout = pd.read_csv('txout.txt', header = None, names = ['tx:ID', 'user' ,'value'], delimiter= '\\t')\n\ntxnsout[':LABEL'] = 'sent_coins'\n\ntxnsout.to_csv('txout.csv', index = False)\n\ntxnsout.head()\n\n!rm txin.txt\n!rm txout.txt", "-----------------------------\nEDGES", "import numpy as np\nimport pandas as pd\n\n!ls\n\nrels_txns_to_block = pd.read_csv('tx.csv') #, compression='zip')\n\nrels_txns_to_block.columns\n\nrels_txns_to_block = rels_txns_to_block.drop(['txhash', 'unixtime', 'time', 'n_inputs', 'n_outputs',\n ':LABEL'], axis = 1)\n\nrels_txns_to_block.head()\n\nrels_txns_to_block[':TYPE'] = 'part_of_block'\n\nrels_txns_to_block.to_csv('relsPaymentsToAddress.csv', index = False)\n\ntxnsout.head()", "Import script - LOAD CSV?? 
- STOPPED HERE\ngraphdb lives here: /Users/eastblue/Documents/Neo4j/default.graphdb\n```bash\n./bin/neo4j-import --into /Users/eastblue/Documents/Neo4j/default.graphdb\n--nodes /Users/eastblue/ds/metis/challenges/Proj_Kojak/btc/addresses.csv\n--nodes /Users/eastblue/ds/metis/challenges/Proj_Kojak/btc/blockhash.csv\n--nodes /Users/eastblue/ds/metis/challenges/Proj_Kojak/btc/tx.csv\n--nodes /Users/eastblue/ds/metis/challenges/Proj_Kojak/btc/txin.csv\n--nodes /Users/eastblue/ds/metis/challenges/Proj_Kojak/btc/txout.csv \n--relationships /Users/eastblue/ds/metis/challenges/Proj_Kojak/btc/relsPaymentsToAddress.csv \n--relationships /Users/davidfauth/neo4j_nyc_graphday/bitcoinData/poc_relsBlocksTransactions.txt \n--relationships /Users/davidfauth/neo4j_nyc_graphday/bitcoinData/poc_relsBlockData.txt\n--relationships /Users/davidfauth/neo4j_nyc_graphday/bitcoinData/poc_relsRedeemedFromAddress.txt \n--relationships /Users/davidfauth/neo4j_nyc_graphday/bitcoinData/poc_relsTransactionsOut.txt \n--relationships /Users/davidfauth/neo4j_nyc_graphday/bitcoinData/poc_relsTransactionsIn.txt\n```\n--------------\nREFERENCE", "\n\n# In[ ]:\n\nfile = open(\"poc_transaction.txt\", \"w\")\nfileIn = open(\"poc_transactionsIn.txt\", \"w\")\nfileOut = open(\"poc_transactionsOut.txt\", \"w\")\nfileTransList = open(\"poc_transactionList.txt\",\"w\")\nfileAddressList = open (\"poc_addressList.txt\",\"w\")\nfileBlockList = open(\"poc_blockdata.txt\", \"w\")\n\nfile.write(\":ID,Transaction_ID,Hash,Time,V_In,V_Out,:LABEL\" + \"\\n\")\nfileIn.write(\":ID,Transaction_ID,Transaction_Hash,Address,Spent,Value,:LABEL\" + \"\\n\")\nfileOut.write(\":ID,Transaction_ID,Transaction_Hash,Type,:LABEL\" + \"\\n\")\nfileAddressList.write(\":ID,AddressID,:LABEL\" + \"\\n\")\nfileBlockList.write(\":ID,BlockID,Hash,Received_Time,Previous_Block_Hash,Transaction_Count,Height,:LABEL\" + \"\\n\")\n\n\nfileBlockTrans = open(\"poc_relsBlocksTransactions.txt\",\"w\")\nfileTransOut = 
open(\"poc_relsTransactionsOut.txt\",\"w\")\nfileTransInRels = open(\"poc_relsTransactionsIn.txt\",\"w\")\nfilePaymentOutAddr = open(\"poc_relsPaymentsToAddress.txt\",\"w\")\nfilePaymentInAddr = open(\"poc_relsRedeemedFromAddress.txt\",\"w\")\n\nfileBlockTrans.write(\":START_ID,:END_ID,:TYPE\" + \"\\n\")\nfileTransOut.write(\":START_ID,:END_ID,Spent,Value,:TYPE\" + \"\\n\")\nfileTransInRels.write(\":START_ID,:END_ID,Spent,Value,:TYPE\" + \"\\n\")\nfilePaymentOutAddr.write(\":START_ID,:END_ID,Spent,Value,:TYPE\" + \"\\n\")\nfilePaymentInAddr.write(\":START_ID,:END_ID,Spent,Value,:TYPE\" + \"\\n\")\n\nfileBlockTrans.write(str(hash) + ',' + str('trans' + str(item[\"tx_index\"])) + ',' + 'PART_OF_BLOCK' + \"\\n\")\nfileTransOut.write('trans' + str(xx[\"tx_index\"]) + ',' + 'out_' + str(tout) + ',' + str(xx[\"spent\"]) + ','+ str( Decimal(xx[\"value\"]) / Decimal(100000000.0)) + ',' + 'SENT_COINS' + \"\\n\")\nfileTransInRels.write('transin_' + str(tin) + ',' + 'trans' + str(tx_index) + ',' + str(nn[\"prev_out\"][\"spent\"]) + ','+ str( Decimal(nn[\"prev_out\"][\"value\"]) / Decimal(100000000.0)) + ',' + 'INCOMING_PAYMENT' + \"\\n\")\nfilePaymentOutAddr.write('out_' + str(tout) + ',' + rec + ',' + str(xx[\"spent\"]) + ','+ str( Decimal(xx[\"value\"]) / Decimal(100000000.0)) + ','+ 'WAS_SENT_TO' + \"\\n\")\nfilePaymentInAddr.write(strAddr + ',' + 'transin_' + str(tin) + ',' + str(nn[\"prev_out\"][\"spent\"]) + ','+ str( Decimal(nn[\"prev_out\"][\"value\"])/ Decimal(100000000.0)) + ','+ 'REDEEMED' + \"\\n\")\n\n\n# tout = 1\n# tin = 1\n# f = open('blockList.txt', 'r')\n# temp = f.read().splitlines()\n# for line in temp:\n# \twords = line.split(\"|\")\n# \ts = words[1]\n\t\n# \turl = \"https://blockchain.info/rawblock/\" + str(s)\n# \tprint url\n# \tusock = urllib2.urlopen(url)\n# \tdata = usock.read()\n# \tresult = json.loads(data)\n\n# \thash = result['hash']\n# \tblock_index = result['block_index']\n# \theight = result['height']\n# \tsize = result['size']\n# \tmain_chain 
= result['main_chain']\n# \tprev_block = result['prev_block']\n# \ttry: \n# \t\treceived_time = result['received_time']\n# \texcept KeyError:\n# \t\treceived_time = 'NA'\n# \tn_tx = result['n_tx']\n# \tfileBlockList.write(str(hash) + ',' + str(block_index) + ',' +str(hash) + ',' + str(received_time) + ',' + str(prev_block) + ',' + str(n_tx) + ',' + str(height) + \",BlockChain\" + \"\\n\");\n\n# \tparent = result[\"tx\"]\n# \tfor item in parent:\n# \t\ttx_index = str(item[\"tx_index\"])\n# \t\ttx_hash = str(item[\"hash\"])\n# \t\tfile.write(str('trans' + str(item[\"tx_index\"])) + ',' + str(item[\"tx_index\"]) + ',' +str(item[\"hash\"]) + ',' + str(item[\"time\"]) + ',' + str(item[\"vin_sz\"]) + ',' + str(item[\"vout_sz\"]) + \",Transaction\"+ \"\\n\");\n# \t\tfileBlockTrans.write(str(hash) + ',' + str('trans' + str(item[\"tx_index\"])) + ',' + 'PART_OF_BLOCK' + \"\\n\")\n# \t\tif 'inputs' in item :\n# \t\t\tfor nn in item[\"inputs\"]:\n# #\t\t\t\tprint nn[\"sequence\"]\n# \t\t\t\tif 'prev_out' in nn :\n# #\t\t\t\t\tprint nn[\"prev_out\"][\"addr\"]\n# #\t\t\t\t\tprint nn[\"prev_out\"][\"spent\"]\n# #\t\t\t\t\tprint nn[\"prev_out\"][\"value\"]\n# \t\t\t\t\ttry: \n# \t\t\t\t\t\tstrAddr = str(nn[\"prev_out\"][\"addr\"])\n# \t\t\t\t\texcept KeyError:\n# \t\t\t\t\t\tstrAddr = 'NA'\n\n# \t\t\t\t\tfileIn.write('transin_' + str(tin) + ',' + tx_index + ',' + str(tx_hash) + ',' + strAddr + ',' +str(nn[\"prev_out\"][\"spent\"]) + ',' + str( Decimal(nn[\"prev_out\"][\"value\"]) / Decimal(100000000.0)) + \",IncomingPayment\" + \"\\n\");\n# \t\t\t\t\tfileTransInRels.write('transin_' + str(tin) + ',' + 'trans' + str(tx_index) + ',' + str(nn[\"prev_out\"][\"spent\"]) + ','+ str( Decimal(nn[\"prev_out\"][\"value\"]) / Decimal(100000000.0)) + ',' + 'INCOMING_PAYMENT' + \"\\n\")\n# \t\t\t\t\tfilePaymentInAddr.write(strAddr + ',' + 'transin_' + str(tin) + ',' + str(nn[\"prev_out\"][\"spent\"]) + ','+ str( Decimal(nn[\"prev_out\"][\"value\"])/ Decimal(100000000.0)) + ','+ 'REDEEMED' + 
\"\\n\")\n# \t\t\t\t\tfileTransList.write('transin_' + str(tin) + ',' + tx_hash + \"\\n\")\n# \t\t\t\t\tfileAddressList.write(strAddr + ',' + strAddr + ',Address' + \"\\n\")\n# \t\t\t\t\ttin = tin + 1\n\t\t\t\t\t\n# \t\tif 'out' in item :\n# \t\t\tfor xx in item[\"out\"]:\n# #\t\t\t\t\t\t\tprint xx[\"tx_index\"]\n# #\t\t\t\t\t\t\tprint xx[\"type\"]\n# #\t\t\t\t\t\t\tprint xx[\"addr\"]\n# #\t\t\t\t\t\t\tprint xx[\"spent\"]\n# #\t\t\t\t\t\t\tprint xx[\"value\"]\n# \t\t\t\ttry: \n# \t\t\t\t\trec = str(xx[\"addr\"])\n# \t\t\t\t\tfileOut.write('out_' + str(tout) + ',' + str(xx[\"tx_index\"]) + ',' + str(tx_hash) + ',' + str(xx[\"type\"]) + \",OutgoingPayment\"+ \"\\n\");\n# \t\t\t\t\tfileTransOut.write('trans' + str(xx[\"tx_index\"]) + ',' + 'out_' + str(tout) + ',' + str(xx[\"spent\"]) + ','+ str( Decimal(xx[\"value\"]) / Decimal(100000000.0)) + ',' + 'SENT_COINS' + \"\\n\")\n# \t\t\t\t\tfilePaymentOutAddr.write('out_' + str(tout) + ',' + rec + ',' + str(xx[\"spent\"]) + ','+ str( Decimal(xx[\"value\"]) / Decimal(100000000.0)) + ','+ 'WAS_SENT_TO' + \"\\n\")\n# \t\t\t\t\tfileAddressList.write(str(rec) + ',' + str(rec) + ',Address' + \"\\n\")\n# \t\t\t\t\ttout = tout+1\n# \t\t\t\texcept KeyError:\n# \t\t\t\t\trec = 'Unavailable'\n\n\n\tusock.close()\nfile.close()\nfileIn.close()\nfileOut.close()\nfileTransList.close()\nfileBlockTrans.close()\nfileTransInRels.close()\nfilePaymentOutAddr.close()\nfilePaymentInAddr.close()\nfileAddressList.close()\nfileBlockList.close()\nf.close();\nprint \"Done\"", "Original Blockchain API", "#!/usr/bin/env python\n\nimport simplejson as json\nimport httplib\nimport urllib2\n\nfrom httplib import HTTPConnection, HTTPS_PORT\nimport ssl\nfrom decimal import *\nfile = open(\"poc_transaction.txt\", \"w\")\nfileIn = open(\"poc_transactionsIn.txt\", \"w\")\nfileOut = open(\"poc_transactionsOut.txt\", \"w\")\nfileTransList = open(\"poc_transactionList.txt\",\"w\")\nfileBlockTrans = open(\"poc_relsBlocksTransactions.txt\",\"w\")\nfileTransOut = 
open(\"poc_relsTransactionsOut.txt\",\"w\")\nfileTransInRels = open(\"poc_relsTransactionsIn.txt\",\"w\")\nfilePaymentOutAddr = open(\"poc_relsPaymentsToAddress.txt\",\"w\")\nfilePaymentInAddr = open(\"poc_relsRedeemedFromAddress.txt\",\"w\")\nfileAddressList = open (\"poc_addressList.txt\",\"w\")\nfileBlockList = open(\"poc_blockdata.txt\", \"w\")\n\nfile.write(\":ID,Transaction_ID,Hash,Time,V_In,V_Out,:LABEL\" + \"\\n\")\nfileIn.write(\":ID,Transaction_ID,Transaction_Hash,Address,Spent,Value,:LABEL\" + \"\\n\")\nfileOut.write(\":ID,Transaction_ID,Transaction_Hash,Type,:LABEL\" + \"\\n\")\nfileBlockTrans.write(\":START_ID,:END_ID,:TYPE\" + \"\\n\")\nfileTransOut.write(\":START_ID,:END_ID,Spent,Value,:TYPE\" + \"\\n\")\nfileTransInRels.write(\":START_ID,:END_ID,Spent,Value,:TYPE\" + \"\\n\")\nfilePaymentOutAddr.write(\":START_ID,:END_ID,Spent,Value,:TYPE\" + \"\\n\")\nfilePaymentInAddr.write(\":START_ID,:END_ID,Spent,Value,:TYPE\" + \"\\n\")\nfileAddressList.write(\":ID,AddressID,:LABEL\" + \"\\n\")\nfileBlockList.write(\":ID,BlockID,Hash,Received_Time,Previous_Block_Hash,Transaction_Count,Height,:LABEL\" + \"\\n\")\n\n\ntout = 1\ntin = 1\nf = open('blockList.txt', 'r')\ntemp = f.read().splitlines()\nfor line in temp:\n\twords = line.split(\"|\")\n\ts = words[1]\n\t\n\turl = \"https://blockchain.info/rawblock/\" + str(s)\n\tprint url\n\tusock = urllib2.urlopen(url)\n\tdata = usock.read()\n\tresult = json.loads(data)\n\n\thash = result['hash']\n\tblock_index = result['block_index']\n\theight = result['height']\n\tsize = result['size']\n\tmain_chain = result['main_chain']\n\tprev_block = result['prev_block']\n\ttry: \n\t\treceived_time = result['received_time']\n\texcept KeyError:\n\t\treceived_time = 'NA'\n\tn_tx = result['n_tx']\n\tfileBlockList.write(str(hash) + ',' + str(block_index) + ',' +str(hash) + ',' + str(received_time) + ',' + str(prev_block) + ',' + str(n_tx) + ',' + str(height) + \",BlockChain\" + \"\\n\");\n\n\tparent = result[\"tx\"]\n\tfor item in 
parent:\n\t\ttx_index = str(item[\"tx_index\"])\n\t\ttx_hash = str(item[\"hash\"])\n\t\tfile.write(str('trans' + str(item[\"tx_index\"])) + ',' + str(item[\"tx_index\"]) + ',' +str(item[\"hash\"]) + ',' + str(item[\"time\"]) + ',' + str(item[\"vin_sz\"]) + ',' + str(item[\"vout_sz\"]) + \",Transaction\"+ \"\\n\");\n\t\tfileBlockTrans.write(str(hash) + ',' + str('trans' + str(item[\"tx_index\"])) + ',' + 'PART_OF_BLOCK' + \"\\n\")\n\t\tif 'inputs' in item :\n\t\t\tfor nn in item[\"inputs\"]:\n#\t\t\t\tprint nn[\"sequence\"]\n\t\t\t\tif 'prev_out' in nn :\n#\t\t\t\t\tprint nn[\"prev_out\"][\"addr\"]\n#\t\t\t\t\tprint nn[\"prev_out\"][\"spent\"]\n#\t\t\t\t\tprint nn[\"prev_out\"][\"value\"]\n\t\t\t\t\ttry: \n\t\t\t\t\t\tstrAddr = str(nn[\"prev_out\"][\"addr\"])\n\t\t\t\t\texcept KeyError:\n\t\t\t\t\t\tstrAddr = 'NA'\n\n\t\t\t\t\tfileIn.write('transin_' + str(tin) + ',' + tx_index + ',' + str(tx_hash) + ',' + strAddr + ',' +str(nn[\"prev_out\"][\"spent\"]) + ',' + str( Decimal(nn[\"prev_out\"][\"value\"]) / Decimal(100000000.0)) + \",IncomingPayment\" + \"\\n\");\n\t\t\t\t\tfileTransInRels.write('transin_' + str(tin) + ',' + 'trans' + str(tx_index) + ',' + str(nn[\"prev_out\"][\"spent\"]) + ','+ str( Decimal(nn[\"prev_out\"][\"value\"]) / Decimal(100000000.0)) + ',' + 'INCOMING_PAYMENT' + \"\\n\")\n\t\t\t\t\tfilePaymentInAddr.write(strAddr + ',' + 'transin_' + str(tin) + ',' + str(nn[\"prev_out\"][\"spent\"]) + ','+ str( Decimal(nn[\"prev_out\"][\"value\"])/ Decimal(100000000.0)) + ','+ 'REDEEMED' + \"\\n\")\n\t\t\t\t\tfileTransList.write('transin_' + str(tin) + ',' + tx_hash + \"\\n\")\n\t\t\t\t\tfileAddressList.write(strAddr + ',' + strAddr + ',Address' + \"\\n\")\n\t\t\t\t\ttin = tin + 1\n\t\t\t\t\t\n\t\tif 'out' in item :\n\t\t\tfor xx in item[\"out\"]:\n#\t\t\t\t\t\t\tprint xx[\"tx_index\"]\n#\t\t\t\t\t\t\tprint xx[\"type\"]\n#\t\t\t\t\t\t\tprint xx[\"addr\"]\n#\t\t\t\t\t\t\tprint xx[\"spent\"]\n#\t\t\t\t\t\t\tprint xx[\"value\"]\n\t\t\t\ttry: \n\t\t\t\t\trec = 
str(xx[\"addr\"])\n\t\t\t\t\tfileOut.write('out_' + str(tout) + ',' + str(xx[\"tx_index\"]) + ',' + str(tx_hash) + ',' + str(xx[\"type\"]) + \",OutgoingPayment\"+ \"\\n\");\n\t\t\t\t\tfileTransOut.write('trans' + str(xx[\"tx_index\"]) + ',' + 'out_' + str(tout) + ',' + str(xx[\"spent\"]) + ','+ str( Decimal(xx[\"value\"]) / Decimal(100000000.0)) + ',' + 'SENT_COINS' + \"\\n\")\n\t\t\t\t\tfilePaymentOutAddr.write('out_' + str(tout) + ',' + rec + ',' + str(xx[\"spent\"]) + ','+ str( Decimal(xx[\"value\"]) / Decimal(100000000.0)) + ','+ 'WAS_SENT_TO' + \"\\n\")\n\t\t\t\t\tfileAddressList.write(str(rec) + ',' + str(rec) + ',Address' + \"\\n\")\n\t\t\t\t\ttout = tout+1\n\t\t\t\texcept KeyError:\n\t\t\t\t\trec = 'Unavailable'\n\n\n\tusock.close()\nfile.close()\nfileIn.close()\nfileOut.close()\nfileTransList.close()\nfileBlockTrans.close()\nfileTransInRels.close()\nfilePaymentOutAddr.close()\nfilePaymentInAddr.close()\nfileAddressList.close()\nfileBlockList.close()\nf.close();\nprint \"Done\"\n", "OLD CODE", "# # NODES \n# - Convert txt files to csv\n# - http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html\n# - nodes first\n# - https://neo4j.com/docs/operations-manual/3.2/tools/import/file-header-format/#import-tool-id-spaces\n\n# In[10]:\n\ntest = pd.read_csv('graph_addresses.txt', header = None, names = ['user:ID', ':LABEL'], delimiter= '\\t')\n\n\n# In[11]:\n\ntest.head()\n\n\n# In[12]:\n\ntest2 = test.groupby('user:ID').sum()\n\n\n# In[13]:\n\ntest2.head()\n\n\n# In[27]:\n\ntest.to_csv('graph_addresses.csv')\n\n\n# In[39]:\n\ntest.shape # number of nodes\n\n\n# # EDGES\n# - Specify start, end, and type for each edge\n\n# In[ ]:\n\ntest = pd.read_csv('txedgeunique.txt', header = None, delimiter= '\\t')\n\n\n# In[29]:\n\ntest1 = pd.read_csv('../btc/au_graph.txt', header = None, names = [':START_ID', 'stop', 'unixtime'], delimiter= '\\t')\n\n\n# In[30]:\n\ntest1[':END_ID'] = test1['stop']\n\n\n# In[35]:\n\ntest1 = test1.drop('stop', axis = 1)\n# 
test[':TYPE'] = 'TXN'\n\n\n# In[36]:\n\ntest1.head()\n\n\n# In[37]:\n\ntest1.to_csv('au_graph.csv')\n\n\n# In[40]:\n\ntest1.shape # number of transactions\n", "benford's law test for ticker data\n\napplies the law to a couple of days of BTC-E non-zero price returns.\nhttps://plus.maths.org/content/looking-out-number-one", "import pandas\nfrom math import log10, floor\nfrom scipy.constants import codata\n\n \ndef most_significant_digit(x):\n e = floor(log10(x))\n return int(x*10**-e)\n\ndef f(x):\n return most_significant_digit(abs(x))\n\n# read in the ticker data\ntick = pandas.read_csv('./your_ticker_data.csv')\ntick_ret = tick.diff()\n \n# count leading digits\ndata = tick_ret[tick_ret!=0]\ncounts = data.fillna(method='bfill').apply(f).value_counts()\n\ntotal = counts.sum()\n \n# expected number of each leading digit per Benford's law\nbenford = [total*log10(1 + 1./i) for i in range(1, 10)]\n\n\n# plot actual vs expected\nbins = np.arange(9)\nerror_config = {'ecolor': '0.3'}\n\nr1 = plt.bar(bins, counts.values, 0.35, alpha=0.4, color='b', error_kw=error_config, label = 'actual')\nr2 = plt.bar(bins + 0.35, benford, 0.35, alpha=0.4, color='r', error_kw=error_config, label = 'expected')\nplt.xlabel('Most significant digit')\nplt.ylabel('Occurence count')\nplt.title('Leading digits in BTC-E ticker volume')\nplt.xticks(bins + 0.35, bins+1)\nplt.legend()\n\nplt.show()", "Python Drivers\n\nhttps://marcobonzanini.com/2015/04/06/getting-started-with-neo4j-and-python/\nhttps://neo4j.com/developer/python/", "from neo4j.v1 import GraphDatabase, basic_auth\n\n\n# In[ ]:\n\ndriver = GraphDatabase.driver(\"bolt://localhost:7687\", auth=basic_auth(\"neo4j\", \"neo4j\"))\nsession = driver.session()\n\nsession.run(\"CREATE (a:Person {name: {name}, title: {title}})\",\n {\"name\": \"Arthur\", \"title\": \"King\"})\n\nresult = session.run(\"MATCH (a:Person) WHERE a.name = {name} \"\n \"RETURN a.name AS name, a.title AS title\",\n {\"name\": \"Arthur\"})\nfor record in result:\n 
print(\"%s %s\" % (record[\"title\"], record[\"name\"]))\n\nsession.close()\n\n\n# In[ ]:\n\nfrom py2neo import Graph, Path\ngraph = Graph()\n\ntx = graph.cypher.begin()\nfor name in [\"Alice\", \"Bob\", \"Carol\"]:\n tx.append(\"CREATE (person:Person {name:{name}}) RETURN person\", name=name)\nalice, bob, carol = [result.one for result in tx.commit()]\n\nfriends = Path(alice, \"KNOWS\", bob, \"KNOWS\", carol)\ngraph.create(friends)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
eriksalt/jupyter
Python Quick Reference/Base Language.ipynb
mit
[ "Python Base Language Quick Reference\nTable of contents\n\n<a href=\"#1.-Imports\">Imports</a>\n<a href=\"#2.-Simple-Data-Types\">Simple Data Types</a>\n<a href=\"#3.-Math\">Math</a>\n<a href=\"#4.-Comparisons-and-Boolean-Operations\">Comparisons and Boolean Operations</a>\n<a href=\"#5.-Conditional-Statements\">Conditional Statements</a>\n<a href=\"#6.-For-Loops-and-While-Loops\">For Loops and While Loops</a>\n<a href=\"#7.-Slicing\">Slicing</a>\n<a href=\"#8.-With_Statement\">With Statement</a>\n<a href=\"#9.-DateTime\">DateTime</a>\n\n1. Imports", "# 'generic import' of math module\nimport math\nmath.sqrt(25)\n\n# import a function\nfrom math import sqrt\nsqrt(25) # no longer have to reference the module\n\n# import multiple functions at once\nfrom math import cos, floor\n\n# import all functions in a module (generally discouraged)\nfrom csv import *\n\n# define an alias\nimport datetime as dt", "2. Simple Data Types\nDetermine type of an object", "print(type(2)) #int\n\ntype(2.0) #float\n\ntype(\"two\") #string\n\ntype(True) # bool\n\ntype(None) #NoneType", "Check if an object is of a given type", "isinstance(2.0, float)\n\nisinstance(\"two\", (float, str))", "Convert an object to a given type", "float(2)\n\nint(2.9)\n\nstr(2.9)", "Zero, None and empty containers are converted to False:", "bool(0)\n\nbool(None)\n\nbool('') # empty string\n\nbool({}) # empty dictionary", "Non-Empty containers and non-zeros are converted to True", "bool(2) \n\nbool('two')\n\n bool([2])\n\nbool({'key':'val'})", "3. Math", "10 ** 4 # exponent\n\n5 % 4 # modulo\n\n# previous versions of python did integer division\n# python 3 coerces values into floats\n10/4 \n\n10 // 4 # floor division", "4. Comparisons and Boolean Operations", "x = 5\n\nx != 3\n\nx >= 5 and x < 10\n\nx < 5 or x ==5", "5. 
Conditional Statements", "if x > 0:\n print('positive')\nelif x == 0:\n print('zero')\nelse:\n print('negative')\n\n#single-line if\nif x > 0: print('positive')\n\n#single line if/else statement (ternary operator)\n'positive' if x > 0 else 'zero or negative'", "6. For Loops and While Loops\nrange returns a sequence of integers from 0 to n-1", "# includes the start value but not the stop value [start, end)\nrange(0,3) #[0, 1, 2]\nrng = range(0,3)\nlist(rng)\n\n# default start is 0\nrng = range(3)\nlist(rng)\n\n#third argument is a step value\nrng = range(0,5,2)\nlist(rng)", "for loops:", "# not the recommended style\nfruits = ['apple', 'banana', 'cherry']\nfor i in range(len(fruits)):\n print(fruits[i].upper())\n\n# recommended style\nfor fruit in fruits:\n print(fruit.upper())\n\n# iterate through two things at once (using tuple unpacking)\nfamily = {'dad':'homer', 'mom':'marge', 'size':6}\nfor key, value in family.items():\n print( key,value)\n\n# use enumerate if you need to access the index value within the loop\nfor index,fruit in enumerate(fruits):\n print(index,fruit)", "for/else loop:", "for fruit in fruits:\n if fruit == 'banana':\n print('Found the banana!')\n break # exit the loop and skip the 'else' block\nelse:\n # this block executes ONLY if the for loop completes without hitting 'break'\n print(\"Can't find the banana\")", "while loop:", "count = 0\nwhile count < 5:\n print('printing %s time(s)' % count)\n count += 1", "7. Slicing", "weekdays = ['mon', 'tues', 'wed', 'thurs', 'fri']\n\nweekdays[0]\n\n# elements 0 (inclusive) to 3 (exclusive)\nweekdays[0:3]\n\n# starting point implied to be zero\nweekdays[:3]\n\n# elements 3 (inclusive) through implied end\nweekdays[3:]\n\n# last element\nweekdays[-1]\n\n# every second element (step by 2)\nweekdays[::2]\n\n# backwards (step by -1)\nweekdays[::-1]", "8. 
With Statement", "# the With statement (like using in C#) is used to issue cleanup code when a variable comes out of scope\nclass cleanup_thing:\n def __enter__(self):\n self.open=True\n return self\n def __exit__(self, type, value, traceback):\n self.open=False\n def isOpen(self):\n return self.open\n\nwith cleanup_thing() as thing:\n print(thing.isOpen())\nprint(thing.isOpen())", "9. DateTime", "from datetime import datetime\nfrom datetime import timedelta\n\n#get now\ndatetime.today()\n\na = datetime(2012, 9, 23)\na\n\nb = timedelta(days=2, hours=6)\na+b\n\na-b\n\nd=b+timedelta(hours=4.5)\nd.days\n\nd.seconds\n\n# total timedelta in seconds\nd.total_seconds()", "Converting to / from strings", "text = '2012-09-20'\ndatetime.strptime(text, '%Y-%m-%d')\n\nz = datetime(2012, 9, 23, 21, 37, 4, 177393)\ndatetime.strftime(z, '%A %B %d, %Y')", "Time Zones", "from pytz import timezone\n\nd = datetime(2012, 12, 21, 9, 30, 0)\n\n# localize the date for Chicago\ncentral = timezone('US/Central')\nloc_d = central.localize(d)\nprint(loc_d)\n\n# Convert to Bangalore time\nbang_d = loc_d.astimezone(timezone('Asia/Kolkata'))\nprint(bang_d)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
dwhswenson/openpathsampling
examples/toy_model_mstis/toy_mstis_1_setup.ipynb
mit
[ "Obtaining the first trajectories for a Toy Model\nTasks covered in this notebook:\n\nSetting up a system using the OPS toy engine\nUsing a user-defined function to create a collective variable\nUsing collective variables to define states and interfaces\nStoring things manually\n\nPath sampling methods require that the user supply an input path for each path ensemble. This means that you must somehow generate a first input path. The first rare path can come from any number of sources. The main idea is that any trajectory that is nearly physical is good enough. This is discussed more in the OPS documentation on initial trajectories.\nIn this example, we use a bootstrapping/ratcheting approach, which does create paths satisfying the true dynamics of the system. This approach is nice because it is quick and convenient, although it is best for smaller systems with less complicated transitions. It works by running normal MD to generate a path that satisfies the innermost interface, and then performing shooting moves in that interface's path ensemble until we have a path that crosses the next interface. Then we switch to the path ensemble for the next interface, and shoot until the path crossing the interface after that. The process continues until we have paths for all interfaces.\nIn this example, we perform multiple state (MS) TIS. Therefore we do one bootstrapping calculation per initial state.", "# Basic imports\nfrom __future__ import print_function\nimport openpathsampling as paths\nimport numpy as np\n%matplotlib inline\n\n# used for visualization of the 2D toy system\n# we use the %run magic because this isn't in a package\n%run ../resources/toy_plot_helpers.py", "Basic system setup\nFirst we set up our system: for the toy dynamics, this involves defining a potential energy surface (PES), setting up an integrator, and giving the simulation an initial configuration. 
In real MD systems, the PES is handled by the combination of a topology (generated from, e.g., a PDB file) and a force field definition, and the initial configuration would come from a file instead of being described by hand.", "# convenience for the toy dynamics\nimport openpathsampling.engines.toy as toys", "Set up the toy system\nFor the toy model, we need to give a snapshot as a template, as well as a potential energy surface. The template snapshot also includes a pointer to the topology information (which is relatively simple for the toy systems.)", "# Toy_PES supports adding/subtracting various PESs. \n# The OuterWalls PES type gives an x^6+y^6 boundary to the system.\npes = (\n toys.OuterWalls([1.0, 1.0], [0.0, 0.0])\n + toys.Gaussian(-0.7, [12.0, 12.0], [0.0, 0.4])\n + toys.Gaussian(-0.7, [12.0, 12.0], [-0.5, -0.5])\n + toys.Gaussian(-0.7, [12.0, 12.0], [0.5, -0.5])\n)\n\ntopology=toys.Topology(\n n_spatial=2,\n masses=[1.0, 1.0],\n pes=pes\n)", "Set up the engine\nThe engine needs the template snapshot we set up above, as well as an integrator and a few other options. 
We name the engine; this makes it easier to reload it in the future.", "integ = toys.LangevinBAOABIntegrator(dt=0.02, temperature=0.1, gamma=2.5)\n\noptions={\n 'integ': integ,\n 'n_frames_max': 5000,\n 'n_steps_per_frame': 1\n}\n\ntoy_eng = toys.Engine(\n options=options,\n topology=topology\n).named('toy_engine')\n\ntemplate = toys.Snapshot(\n coordinates=np.array([[-0.5, -0.5]]), \n velocities=np.array([[0.0,0.0]]),\n engine=toy_eng\n)\n\ntoy_eng.current_snapshot = template", "Finally, we make this engine into the default engine for any PathMover that requires one (e.g., shooting movers, minus movers).", "paths.PathMover.engine = toy_eng", "Now let's look at the potential energy surface we've created:", "plot = ToyPlot()\nplot.contour_range = np.arange(-1.5, 1.0, 0.1)\nplot.add_pes(pes)\nfig = plot.plot()", "Defining states and interfaces\nTIS methods usually require that you define states and interfaces before starting the simulation. States and interfaces are both defined in terms of Volume objects. The most common type of Volume is one based on some set of collective variables, so the first thing we have to do is to define the collective variable.\nFor this system, we'll define the collective variables as circles centered on the middle of the state. OPS allows us to define one function for the circle, which is parameterized by different centers. Note that each collective variable is in fact a separate function.", "def circle(snapshot, center):\n import math\n return math.sqrt((snapshot.xyz[0][0]-center[0])**2\n + (snapshot.xyz[0][1]-center[1])**2)\n \nopA = paths.CoordinateFunctionCV(name=\"opA\", f=circle, center=[-0.5, -0.5])\nopB = paths.CoordinateFunctionCV(name=\"opB\", f=circle, center=[0.5, -0.5])\nopC = paths.CoordinateFunctionCV(name=\"opC\", f=circle, center=[0.0, 0.4])", "Now we define the states and interfaces in terms of these order parameters. 
The CVRangeVolumeSet gives a shortcut to create several volume objects using the same collective variable.", "stateA = paths.CVDefinedVolume(opA, 0.0, 0.2)\nstateB = paths.CVDefinedVolume(opB, 0.0, 0.2)\nstateC = paths.CVDefinedVolume(opC, 0.0, 0.2)\n\ninterfacesA = paths.VolumeInterfaceSet(opA, 0.0, [0.2, 0.3, 0.4])\ninterfacesB = paths.VolumeInterfaceSet(opB, 0.0, [0.2, 0.3, 0.4])\ninterfacesC = paths.VolumeInterfaceSet(opC, 0.0, [0.2, 0.3, 0.4])", "Build the MSTIS transition network\nOnce we have the collective variables, states, and interfaces defined, we can create the entire transition network. In this one small piece of code, we create all the path ensembles needed for the simulation, organized into structures to assist with later analysis.", "ms_outers = paths.MSOuterTISInterface.from_lambdas(\n {ifaces: 0.5\n for ifaces in [interfacesA, interfacesB, interfacesC]}\n)\nmstis = paths.MSTISNetwork(\n [(stateA, interfacesA),\n (stateB, interfacesB),\n (stateC, interfacesC)],\n ms_outers=ms_outers\n)", "Bootstrap to fill all interfaces\nNow we actually run the bootstrapping calculation. The full_bootstrap function requires an initial snapshot in the state, and then it will generate trajectories satisfying TIS ensemble for the given interfaces. 
To fill all the ensembles in the MSTIS network, we need to do this once for each initial state.", "initA = toys.Snapshot(\n coordinates=np.array([[-0.5, -0.5]]), \n velocities=np.array([[1.0,0.0]]),\n)\nbootstrapA = paths.FullBootstrapping(\n transition=mstis.from_state[stateA],\n snapshot=initA,\n engine=toy_eng,\n forbidden_states=[stateB, stateC],\n extra_interfaces=[ms_outers.volume_for_interface_set(interfacesA)]\n)\ngsA = bootstrapA.run()\n\ninitB = toys.Snapshot(\n coordinates=np.array([[0.5, -0.5]]), \n velocities=np.array([[-1.0,0.0]]),\n)\n\nbootstrapB = paths.FullBootstrapping(\n transition=mstis.from_state[stateB], \n snapshot=initB, \n engine=toy_eng,\n forbidden_states=[stateA, stateC]\n)\ngsB = bootstrapB.run()\n\ninitC = toys.Snapshot(\n coordinates=np.array([[0.0, 0.4]]), \n velocities=np.array([[0.0,-0.5]]),\n)\nbootstrapC = paths.FullBootstrapping(\n transition=mstis.from_state[stateC], \n snapshot=initC, \n engine=toy_eng, \n forbidden_states=[stateA, stateB]\n)\ngsC = bootstrapC.run()", "Now that we've done that for all 3 states, let's look at the trajectories we generated.", "plot.plot([s.trajectory for s in gsA]+[s.trajectory for s in gsB]+[s.trajectory for s in gsC]);", "Finally, we join these into one SampleSet. The function relabel_replicas_per_ensemble ensures that the trajectory associated with each ensemble has a unique replica ID.", "total_sample_set = paths.SampleSet.relabel_replicas_per_ensemble(\n [gsA, gsB, gsC]\n)", "Storing stuff\nUp to this point, we haven't stored anything in files. In other notebooks, a lot of the storage is done automatically. Here we'll show you how to store a few things manually. Instead of storing the entire bootstrapping history, we'll only store the final trajectories we get out.\nFirst we create a file. 
When we create it, the file also requires the template snapshot.", "storage = paths.Storage(\"mstis_bootstrap.nc\", \"w\")", "The storage will recursively store data, so storing total_sample_set leads to automatic storage of all the Sample objects in that sampleset, which in turn leads to storage of all the ensemble, trajectories, and snapshots.\nSince the path movers used in bootstrapping and the engine are not required for the sampleset, they would not be stored. We explicitly store the engine for later use, but we won't need the path movers, so we don't try to store them.", "storage.save(total_sample_set)\nstorage.save(toy_eng)", "Now we can check to make sure that we actually have stored the objects that we claimed to store. There should be 0 pathmovers, 1 engine, 12 samples (4 samples from each of 3 transitions), and 1 sampleset. There will be some larger number of snapshots. There will also be a larger number of ensembles, because each ensemble is defined in terms of subensembles, each of which gets saved.", "print(\"PathMovers:\", len(storage.pathmovers))\nprint(\"Engines:\", len(storage.engines))\nprint(\"Samples:\", len(storage.samples))\nprint(\"SampleSets:\", len(storage.samplesets))\nprint(\"Snapshots:\", len(storage.snapshots))\nprint(\"Ensembles:\", len(storage.ensembles))\nprint(\"CollectiveVariables:\", len(storage.cvs))", "Finally, we close the storage. Not strictly necessary, but a good habit.", "storage.close()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
slundberg/shap
notebooks/api_examples/explainers/Exact.ipynb
mit
[ "Exact explainer\nThis notebook demonstrates how to use the Exact explainer on some simple datasets. The Exact explainer is model-agnostic, so it can compute Shapley values and Owen values exactly (without approximation) for any model. However, since it completely enumerates the space of masking patterns it has $O(2^M)$ complexity for Shapley values and $O(M^2)$ complexity for Owen values on a balanced clustering tree for M input features.\nBecause the exact explainer knows that it is fully enumerating the masking space it can use optimizations that are not possible with random sampling based approaches, such as using a grey code ordering to minimize the number of inputs that change between successive masking patterns, and so potentially reduce the number of times the model needs to be called.", "import shap\nimport xgboost\n\n# get a dataset on income prediction\nX,y = shap.datasets.adult()\n\n# train an XGBoost model (but any other model type would also work)\nmodel = xgboost.XGBClassifier()\nmodel.fit(X, y);", "Tabular data with independent (Shapley value) masking", "# build an Exact explainer and explain the model predictions on the given dataset\nexplainer = shap.explainers.Exact(model.predict_proba, X)\nshap_values = explainer(X[:100])\n\n# get just the explanations for the positive class\nshap_values = shap_values[...,1]", "Plot a global summary", "shap.plots.bar(shap_values)", "Plot a single instance", "shap.plots.waterfall(shap_values[0])", "Tabular data with partition (Owen value) masking\nWhile Shapley values result from treating each feature independently of the other features, it is often useful to enforce a structure on the model inputs. Enforcing such a structure produces a structure game (i.e. a game with rules about valid input feature coalitions), and when that structure is a nested set of feature groupings we get the Owen values as a recursive application of Shapley values to the groups. 
In SHAP, we take the partitioning to the limit and build a binary hierarchical clustering tree to represent the structure of the data. This structure could be chosen in many ways, but for tabular data it is often helpful to build the structure from the redundancy of information between the input features about the output label. This is what we do below:", "# build a clustering of the features based on shared information about y\nclustering = shap.utils.hclust(X, y)\n\n# above we implicitly used shap.maskers.Independent by passing a raw dataframe as the masker\n# now we explicitly use a Partition masker that uses the clustering we just computed\nmasker = shap.maskers.Partition(X, clustering=clustering)\n\n# build an Exact explainer and explain the model predictions on the given dataset\nexplainer = shap.explainers.Exact(model.predict_proba, masker)\nshap_values2 = explainer(X[:100])\n\n# get just the explanations for the positive class\nshap_values2 = shap_values2[...,1]", "Plot a global summary\nNote that only the Relationship and Marital status features share more than 50% of their explanation power (as measured by R2) with each other, so all the other parts of the clustering tree are removed by the default clustering_cutoff=0.5 setting:", "shap.plots.bar(shap_values2)", "Plot a single instance\nNote that there is a strong similarity between the explanation from the Independent masker above and the Partition masker here. In general the distinctions between these methods for tabular data are not large, though the Partition masker allows for much faster runtime and potentially more realistic manipulations of the model inputs (since groups of clustered features are masked/unmasked together).", "shap.plots.waterfall(shap_values2[0])" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tpin3694/tpin3694.github.io
machine-learning/saving_machine_learning_models.ipynb
mit
[ "Title: Saving Machine Learning Models\nSlug: saving_machine_learning_models\nSummary: Saving Machine Learning Models from scikit learn. \nDate: 2016-09-22 12:00\nCategory: Machine Learning\nTags: Basics\nAuthors: Chris Albon\nIn scikit there are two main ways to save a model for future use: a pickle string and a pickled model as a file.\nPreliminaries", "from sklearn.linear_model import LogisticRegression\nfrom sklearn import datasets\nimport pickle\nfrom sklearn.externals import joblib", "Load Data", "# Load the iris data\niris = datasets.load_iris()\n\n# Create a matrix, X, of features and a vector, y.\nX, y = iris.data, iris.target", "Train Model", "# Train a naive logistic regression model\nclf = LogisticRegression(random_state=0)\nclf.fit(X, y) ", "Save To String Using Pickle", "# Save the trained model as a pickle string.\nsaved_model = pickle.dumps(clf)\n\n# View the pickled model\nsaved_model\n\n# Load the pickled model\nclf_from_pickle = pickle.loads(saved_model)\n\n# Use the loaded pickled model to make predictions\nclf_from_pickle.predict(X)", "Save To Pickled File Using joblib", "# Save the model as a pickle in a file\njoblib.dump(clf, 'filename.pkl') \n\n# Load the model from the file\nclf_from_joblib = joblib.load('filename.pkl') \n\n# Use the loaded model to make predictions\nclf_from_joblib.predict(X)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ES-DOC/esdoc-jupyterhub
notebooks/miroc/cmip6/models/nicam16-9s/land.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Land\nMIP Era: CMIP6\nInstitute: MIROC\nSource ID: NICAM16-9S\nTopic: Land\nSub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes. \nProperties: 154 (96 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-20 15:02:41\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'miroc', 'nicam16-9s', 'land')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Conservation Properties\n3. Key Properties --&gt; Timestepping Framework\n4. Key Properties --&gt; Software Properties\n5. Grid\n6. Grid --&gt; Horizontal\n7. Grid --&gt; Vertical\n8. Soil\n9. Soil --&gt; Soil Map\n10. Soil --&gt; Snow Free Albedo\n11. Soil --&gt; Hydrology\n12. Soil --&gt; Hydrology --&gt; Freezing\n13. Soil --&gt; Hydrology --&gt; Drainage\n14. Soil --&gt; Heat Treatment\n15. Snow\n16. Snow --&gt; Snow Albedo\n17. Vegetation\n18. Energy Balance\n19. Carbon Cycle\n20. Carbon Cycle --&gt; Vegetation\n21. Carbon Cycle --&gt; Vegetation --&gt; Photosynthesis\n22. Carbon Cycle --&gt; Vegetation --&gt; Autotrophic Respiration\n23. Carbon Cycle --&gt; Vegetation --&gt; Allocation\n24. Carbon Cycle --&gt; Vegetation --&gt; Phenology\n25. Carbon Cycle --&gt; Vegetation --&gt; Mortality\n26. Carbon Cycle --&gt; Litter\n27. 
Carbon Cycle --&gt; Soil\n28. Carbon Cycle --&gt; Permafrost Carbon\n29. Nitrogen Cycle\n30. River Routing\n31. River Routing --&gt; Oceanic Discharge\n32. Lakes\n33. Lakes --&gt; Method\n34. Lakes --&gt; Wetlands \n1. Key Properties\nLand surface key properties\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of land surface model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of land surface model code (e.g. MOSES2.2)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of the processes modelled (e.g. dymanic vegation, prognostic albedo, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.4. Land Atmosphere Flux Exchanges\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nFluxes exchanged with the atmopshere.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"water\" \n# \"energy\" \n# \"carbon\" \n# \"nitrogen\" \n# \"phospherous\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.5. 
Atmospheric Coupling Treatment\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.6. Land Cover\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nTypes of land cover defined in the land surface model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_cover') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bare soil\" \n# \"urban\" \n# \"lake\" \n# \"land ice\" \n# \"lake ice\" \n# \"vegetated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.7. Land Cover Change\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe how land cover change is managed (e.g. the use of net or gross transitions)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_cover_change') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.8. Tiling\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Conservation Properties\nTODO\n2.1. 
Energy\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how energy is conserved globally and to what level (e.g. within X [units]/year)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.energy') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.2. Water\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how water is conserved globally and to what level (e.g. within X [units]/year)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.water') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.3. Carbon\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.carbon') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Timestepping Framework\nTODO\n3.1. Timestep Dependent On Atmosphere\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs a time step dependent on the frequency of atmosphere coupling?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "3.2. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverall timestep of land surface model (i.e. time between calls)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.3. Timestepping Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of time stepping method and associated time step(s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Software Properties\nSoftware properties of land surface code\n4.1. Repository\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nLocation of code for this component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. Code Version\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCode version identifier.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.3. Code Languages\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nCode language(s).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5. Grid\nLand surface grid\n5.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of the grid in the land surface", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. Grid --&gt; Horizontal\nThe horizontal grid in the land surface\n6.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general structure of the horizontal grid (not including any tiling)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.horizontal.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.2. Matches Atmosphere Grid\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the horizontal grid match the atmosphere?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "7. Grid --&gt; Vertical\nThe vertical grid in the soil\n7.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general structure of the vertical grid in the soil (not including any tiling)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.vertical.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. Total Depth\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe total depth of the soil (in metres)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.vertical.total_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "8. Soil\nLand surface soil\n8.1. 
Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of soil in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Heat Water Coupling\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the coupling between heat and water in the soil", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_water_coupling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.3. Number Of Soil layers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of soil layers", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.number_of_soil layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "8.4. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the soil scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Soil --&gt; Soil Map\nKey properties of the land surface soil map\n9.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of soil map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.2. Structure\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil structure map", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.soil.soil_map.structure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.3. Texture\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil texture map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.texture') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.4. Organic Matter\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil organic matter map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.organic_matter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.5. Albedo\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil albedo map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.6. Water Table\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil water table map, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.water_table') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.7. Continuously Varying Soil Depth\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDo the soil properties vary continuously with depth?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "9.8. 
Soil Depth\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil depth map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.soil_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Soil --&gt; Snow Free Albedo\nTODO\n10.1. Prognostic\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs snow free albedo prognostic?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "10.2. Functions\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf prognostic, describe the dependencies of the snow free albedo calculations", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.functions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation type\" \n# \"soil humidity\" \n# \"vegetation state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.3. Direct Diffuse\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf prognostic, describe the distinction between direct and diffuse albedo", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"distinction between direct and diffuse albedo\" \n# \"no distinction between direct and diffuse albedo\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.4. 
Number Of Wavelength Bands\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf prognostic, enter the number of wavelength bands used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11. Soil --&gt; Hydrology\nKey properties of the land surface soil hydrology\n11.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of the soil hydrological model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.2. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of soil hydrology in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11.3. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil hydrology tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.4. Vertical Discretisation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the typical vertical discretisation", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.5. 
Number Of Ground Water Layers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of soil layers that may contain water", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11.6. Lateral Connectivity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nDescribe the lateral connectivity between tiles", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"perfect connectivity\" \n# \"Darcian flow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11.7. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe hydrological dynamics scheme in the land surface model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Bucket\" \n# \"Force-restore\" \n# \"Choisnel\" \n# \"Explicit diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12. Soil --&gt; Hydrology --&gt; Freezing\nTODO\n12.1. Number Of Ground Ice Layers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow many soil layers may contain ground ice", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "12.2. Ice Storage Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method of ice storage", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12.3. Permafrost\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the treatment of permafrost, if any, within the land surface scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13. Soil --&gt; Hydrology --&gt; Drainage\nTODO\n13.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of how drainage is included in the land surface scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.drainage.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13.2. Types\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nDifferent types of runoff represented by the land surface model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.drainage.types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Gravity drainage\" \n# \"Horton mechanism\" \n# \"topmodel-based\" \n# \"Dunne mechanism\" \n# \"Lateral subsurface flow\" \n# \"Baseflow from groundwater\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14. Soil --&gt; Heat Treatment\nTODO\n14.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of how heat treatment properties are defined", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.soil.heat_treatment.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.2. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of soil heat scheme in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "14.3. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil heat treatment tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.4. Vertical Discretisation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the typical vertical discretisation", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.5. Heat Storage\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify the method of heat storage", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.heat_storage') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Force-restore\" \n# \"Explicit diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.6. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nDescribe processes included in the treatment of soil heat", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.soil.heat_treatment.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"soil moisture freeze-thaw\" \n# \"coupling with snow temperature\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15. Snow\nLand surface snow\n15.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of snow in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the snow tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.3. Number Of Snow Layers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of snow levels used in the land surface scheme/model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.number_of_snow_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "15.4. Density\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of snow density", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.density') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.5. Water Equivalent\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of the snow water equivalent", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.snow.water_equivalent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.6. Heat Content\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of the heat content of snow", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.heat_content') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.7. Temperature\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of snow temperature", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.temperature') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.8. Liquid Water Content\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of snow liquid water", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.liquid_water_content') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.9. Snow Cover Fractions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSpecify cover fractions used in the surface snow scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.snow.snow_cover_fractions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"ground snow fraction\" \n# \"vegetation snow fraction\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.10. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSnow related processes in the land surface scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"snow interception\" \n# \"snow melting\" \n# \"snow freezing\" \n# \"blowing snow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.11. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the snow scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "16. Snow --&gt; Snow Albedo\nTODO\n16.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the treatment of snow-covered land albedo", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_albedo.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"prescribed\" \n# \"constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.2. Functions\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\n*If prognostic, *", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.snow.snow_albedo.functions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation type\" \n# \"snow age\" \n# \"snow density\" \n# \"snow grain type\" \n# \"aerosol deposition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17. Vegetation\nLand surface vegetation\n17.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of vegetation in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.2. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of vegetation scheme in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "17.3. Dynamic Vegetation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there dynamic evolution of vegetation?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.dynamic_vegetation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "17.4. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the vegetation tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.5. Vegetation Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nVegetation classification used", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.vegetation.vegetation_representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation types\" \n# \"biome types\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.6. Vegetation Types\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of vegetation types in the classification, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"broadleaf tree\" \n# \"needleleaf tree\" \n# \"C3 grass\" \n# \"C4 grass\" \n# \"vegetated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.7. Biome Types\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of biome types in the classification, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biome_types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"evergreen needleleaf forest\" \n# \"evergreen broadleaf forest\" \n# \"deciduous needleleaf forest\" \n# \"deciduous broadleaf forest\" \n# \"mixed forest\" \n# \"woodland\" \n# \"wooded grassland\" \n# \"closed shrubland\" \n# \"opne shrubland\" \n# \"grassland\" \n# \"cropland\" \n# \"wetlands\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.8. Vegetation Time Variation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow the vegetation fractions in each tile are varying with time", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.vegetation.vegetation_time_variation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed (not varying)\" \n# \"prescribed (varying from files)\" \n# \"dynamical (varying from simulation)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.9. Vegetation Map\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_map') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.10. Interception\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs vegetation interception of rainwater represented?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.interception') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "17.11. Phenology\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTreatment of vegetation phenology", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.phenology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic (vegetation map)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.12. Phenology Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of vegetation phenology", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.phenology_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.13. 
Leaf Area Index\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTreatment of vegetation leaf area index", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.leaf_area_index') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prescribed\" \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.14. Leaf Area Index Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of leaf area index", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.leaf_area_index_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.15. Biomass\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\n*Treatment of vegetation biomass *", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biomass') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.16. Biomass Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of vegetation biomass", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biomass_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.17. Biogeography\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTreatment of vegetation biogeography", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.vegetation.biogeography') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.18. Biogeography Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of vegetation biogeography", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biogeography_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.19. Stomatal Resistance\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSpecify what the vegetation stomatal resistance depends on", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.stomatal_resistance') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"light\" \n# \"temperature\" \n# \"water availability\" \n# \"CO2\" \n# \"O3\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.20. Stomatal Resistance Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of vegetation stomatal resistance", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.stomatal_resistance_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.21. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the vegetation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18. Energy Balance\nLand surface energy balance\n18.1. 
Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of energy balance in land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the energy balance tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18.3. Number Of Surface Temperatures\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "18.4. Evaporation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSpecify the formulation method for land surface evaporation, from soil and vegetation", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.evaporation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"alpha\" \n# \"beta\" \n# \"combined\" \n# \"Monteith potential evaporation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18.5. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nDescribe which processes are included in the energy balance scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.energy_balance.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"transpiration\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "19. Carbon Cycle\nLand surface carbon cycle\n19.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of carbon cycle in land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "19.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the carbon cycle tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "19.3. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of carbon cycle in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "19.4. Anthropogenic Carbon\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nDescribe the treatment of the anthropogenic carbon pool", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"grand slam protocol\" \n# \"residence time\" \n# \"decay time\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "19.5. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the carbon scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.carbon_cycle.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "20. Carbon Cycle --&gt; Vegetation\nTODO\n20.1. Number Of Carbon Pools\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEnter the number of carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "20.2. Carbon Pools\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "20.3. Forest Stand Dynamics\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the treatment of forest stand dynamics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "21. Carbon Cycle --&gt; Vegetation --&gt; Photosynthesis\nTODO\n21.1. Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen dependence, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22. Carbon Cycle --&gt; Vegetation --&gt; Autotrophic Respiration\nTODO\n22.1. 
Maintainance Respiration\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the general method used for maintenance respiration", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.2. Growth Respiration\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the general method used for growth respiration", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "23. Carbon Cycle --&gt; Vegetation --&gt; Allocation\nTODO\n23.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general principle behind the allocation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "23.2. Allocation Bins\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify distinct carbon bins used in allocation", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"leaves + stems + roots\" \n# \"leaves + stems + roots (leafy + woody)\" \n# \"leaves + fine roots + coarse roots + stems\" \n# \"whole plant (no distinction)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.3. 
Allocation Fractions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how the fractions of allocation are calculated", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"function of vegetation type\" \n# \"function of plant allometry\" \n# \"explicitly calculated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "24. Carbon Cycle --&gt; Vegetation --&gt; Phenology\nTODO\n24.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general principle behind the phenology scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "25. Carbon Cycle --&gt; Vegetation --&gt; Mortality\nTODO\n25.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general principle behind the mortality scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "26. Carbon Cycle --&gt; Litter\nTODO\n26.1. Number Of Carbon Pools\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEnter the number of carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "26.2. 
Carbon Pools\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "26.3. Decomposition\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the decomposition methods used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "26.4. Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the general method used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "27. Carbon Cycle --&gt; Soil\nTODO\n27.1. Number Of Carbon Pools\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEnter the number of carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "27.2. Carbon Pools\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "27.3. Decomposition\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the decomposition methods used", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.carbon_cycle.soil.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "27.4. Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the general method used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "28. Carbon Cycle --&gt; Permafrost Carbon\nTODO\n28.1. Is Permafrost Included\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs permafrost included?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "28.2. Emitted Greenhouse Gases\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the GHGs emitted", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "28.3. Decomposition\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the decomposition methods used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "28.4. Impact On Soil Properties\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the impact of permafrost on soil properties", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "29. Nitrogen Cycle\nLand surface nitrogen cycle\n29.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of the nitrogen cycle in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "29.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the nitrogen cycle tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "29.3. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of nitrogen cycle in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "29.4. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the nitrogen scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30. River Routing\nLand surface river routing\n30.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of river routing in the land surface", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.river_routing.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the river routing tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.3. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of river routing scheme in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "30.4. Grid Inherited From Land Surface\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the grid inherited from land surface?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "30.5. Grid Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of grid, if not inherited from land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.grid_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.6. Number Of Reservoirs\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEnter the number of reservoirs", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.number_of_reservoirs') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "30.7. 
Water Re Evaporation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nTODO", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.water_re_evaporation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"flood plains\" \n# \"irrigation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.8. Coupled To Atmosphere\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIs river routing coupled to the atmosphere model component?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "30.9. Coupled To Land\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the coupling between land and rivers", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.coupled_to_land') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.10. Quantities Exchanged With Atmosphere\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf coupled to atmosphere, which quantities are exchanged between river routing and the atmosphere model components?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.11. Basin Flow Direction Map\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat type of basin flow direction map is being used?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.river_routing.basin_flow_direction_map') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"present day\" \n# \"adapted for other periods\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.12. Flooding\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the representation of flooding, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.flooding') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.13. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the river routing", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "31. River Routing --&gt; Oceanic Discharge\nTODO\n31.1. Discharge Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify how rivers are discharged to the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"direct (large rivers)\" \n# \"diffuse\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "31.2. Quantities Transported\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nQuantities that are exchanged from river-routing to the ocean model component", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32. Lakes\nLand surface lakes\n32.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of lakes in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "32.2. Coupling With Rivers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre lakes coupled to the river routing model component?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.coupling_with_rivers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "32.3. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of lake scheme in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "32.4. Quantities Exchanged With Rivers\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf coupling with rivers, which quantities are exchanged between the lakes and rivers", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32.5. 
Vertical Grid\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the vertical grid of lakes", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.vertical_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "32.6. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the lake scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "33. Lakes --&gt; Method\nTODO\n33.1. Ice Treatment\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs lake ice included?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.ice_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "33.2. Albedo\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the treatment of lake albedo", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "33.3. Dynamics\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhich dynamics of lakes are treated? horizontal, vertical, etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.dynamics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"No lake dynamics\" \n# \"vertical\" \n# \"horizontal\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "33.4. 
Dynamic Lake Extent\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs a dynamic lake extent scheme included?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "33.5. Endorheic Basins\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBasins not flowing to ocean included?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.endorheic_basins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "34. Lakes --&gt; Wetlands\nTODO\n34.1. Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the treatment of wetlands, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.wetlands.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", 
"code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
fluxcapacitor/source.ml
jupyterhub.ml/notebooks/train_deploy/zz_under_construction/tensorflow/optimize/02_Feed_Queue_HDFS.ipynb
apache-2.0
[ "Feed Data with Queue from HDFS\nPopulate HDFS with Sample Dataset", "%%bash\n\nhadoop fs -copyFromLocal /root/datasets/linear /\n\n%%bash\n\nhadoop fs -ls /linear", "Create TensorFlow Session", "import tensorflow as tf\n\ntf.reset_default_graph()\n\nsess = tf.Session()\nprint(sess)", "Create Queue and Feed Tensorflow from HDFS\nThe HDFS Namenode is running locally and listening on port 39000.", "filename_queue = tf.train.string_input_producer([\n \"hdfs://127.0.0.1:39000/linear/training.csv\",\n \"hdfs://127.0.0.1:39000/linear/validation.csv\",\n])", "Parse HDFS File(s)", "reader = tf.TextLineReader()\nfilename, text = reader.read(filename_queue)\nx_observed, y_observed = tf.decode_csv(text, [[0.0],[0.0]])\n\ncoord = tf.train.Coordinator()\nthreads = tf.train.start_queue_runners(sess=sess, \n coord=coord)\nn = 20\n\nprint('First %s Training Examples...' % n)\nprint('')\n\nfrom tabulate import tabulate\n\nexamples = []\ntry:\n\n for i in range(n):\n features, label = sess.run([x_observed, y_observed])\n examples.append([features, label])\n print(tabulate(examples, headers=[\"x_observed\", \"y_observed\"]))\n\nfinally:\n coord.request_stop()\n coord.join(threads) " ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mercybenzaquen/foundations-homework
foundations_hw/07/07/Homework7_part2_benzaquen.ipynb
mit
[ "!pip3 install matplotlib\n\nimport pandas as pd\n\n!pip3 install xlrd\n\ndf = pd.read_excel(\"richpeople.xlsx\")", "1)What country are most billionaires from? For the top ones, how many billionaires per billion people?", "df.head(3)\n\ndf.columns\n\nrecent = df[df['year']==2014]\nrecent.head()\n\ndf['citizenship'].value_counts().head(5)\n#I am going to skip the second part of the question\n#because we would have to create a new column with the number of people per country. Easier joining tables?", "2)Who are the top 10 richest billionaires?", "recent.sort_values(by='rank').head(10)", "3)What's the average wealth of a billionaire? Male? Female?", "recent['networthusbillion'].describe()\n\nfemales = recent[recent['gender'] == 'female']\nmales = recent[recent['gender'] == 'male']\n\nfemales['networthusbillion'].describe()\n\n\n\nmales['networthusbillion'].describe()\n\n\n\n\n", "4)Who is the poorest billionaire? Who are the top 10 poorest billionaires?", "recent.sort_values(by='rank').tail(1)\n\nrecent.sort_values(by='rank').tail(10)", "5)'What is relationship to company'? And what are the most common relationships?", "recent['relationshiptocompany'].value_counts().head(10)", "6)Most common source of wealth? Male vs. female?", "recent['sourceofwealth'].value_counts().head(10)\n\nfemales = recent[recent['gender'] == 'female']\nmales = recent[recent['gender'] == 'male']\n\nfemales['sourceofwealth'].value_counts().head(10)\n\n\nmales['sourceofwealth'].value_counts().head(10)", "9)What are the most common industries for billionaires to come from? What's the total amount of billionaire money from each industry?", "recent['industry'].value_counts().head(10)\n\nrecent.groupby('industry')['networthusbillion'].sum()", "10)How many self made billionaires vs. others?", "recent['selfmade'].value_counts()", "11)How old are billionaires? How old are billionaires self made vs. non self made? 
or different industries?", "billionaires_age = ['name', 'age']\nrecent[billionaires_age]\n\n\n\nrecent.groupby('selfmade')['age'].describe()\n\nrecent.groupby('industry')['age'].describe()", "12)Who are the youngest billionaires? The oldest? Age distribution - maybe make a graph about it?", "recent.sort_values('age', ascending=True).head(10)\n\nrecent.sort_values('age', ascending=False).head(10)\n\nimport matplotlib.pyplot as plt\n\n\n%matplotlib inline\n\nimport matplotlib.pyplot as plt\nplt.style.available\n\nplt.style.use('dark_background')\nyoung_age_ordered = recent.sort_values('age', ascending=True).head(10)\nyoung_age_ordered.plot(kind='scatter', x='age', y='networthusbillion')\n#oops misread instructions\n\n\nold_age_ordered = recent.sort_values('age', ascending=False).head(10)\nold_age_ordered.plot(kind='scatter', x='age', y='networthusbillion')\n\n#oops misread instructions\n\n\n\nplt.style.use('seaborn-bright')\nage_distribution = recent['age'].value_counts()\nage_distribution.describe()\nage_distribution.head(30).plot(kind='bar', x='', y='') #i am not sure how to complete x,y fields in this case", "Maybe just make a graph about how wealthy they are in general?", "\nrecent.plot(kind='bar', x='name', y='networthusbillion')\n#I know this is awful but looks cool lol\n\nordered_by_wealth = recent.sort_values('networthusbillion', ascending=False)\nordered_by_wealth.head(30).plot(kind='bar', x='rank', y='networthusbillion', color=['g'])", "Maybe plot their net worth vs age (scatterplot)", "recent.plot(kind='scatter', x='age', y='networthusbillion')", "Make a bar graph of the top 10 or 20 richest", "top_10 = recent.sort_values(by='networthusbillion', ascending=False).head(10)\n\ntop_10.plot(kind='barh', x='name', y='networthusbillion', color=\"r\")" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
volgy/notebook
PSD.ipynb
mit
[ "Power Spectrum Estimation\nThis notebook aims to understand the details of the power spectrum, its computation with FFT, carefully considering scaling issues, units and interpretation.\nImportant lessons to be learned (non-windowed case)\n\nNormalizing the FFT by sqrt(N)\nsquared magnitudes: Energy Spectrum [V^2 s] - grows w/ N\n\n\nNormalizing the FFT by N: \nmagnitudes are RMS amplitudes [V] (for the given frequency bin)\nsquared magnitudes: Power Spectrum [V^2]\nsquared magnitudes normalized by the width of the bin: Power Spectral Density [V^2/Hz]\n\n\n\nPower spectral density better suits wide-band (i.e. noise) signals. Power spectrum is better for interpreting narrow-band (i.e. single frequency) signals.\nAlternative view on DFT: By looking at the definition of DFT, it can be interpreted as a mixer (complex exponential multiplier) and a low-pass filter (box-car or simple average). The low-pass filter (hence the DFT bins) will get narrower as you increase N.\nTODO: understand why we need to scale bins by 2 (except at DC) - Hint: this is needed only for real (non-complex) signals\nCreate a discrete sinusoid signal with some added noise. We assume that this is a voltage signal.", "%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy import signal\n\n# constants\nFS = 1e4 # sampling rate (Hz)\nSIG_F = 1e3 # signal frequency (Hz)\nSIG_DB = 0 # signal amplitude (dB)\nNOISE_DB = -15 # noise amplitude (dB)\nT = 1 # signal length (s)\ndT = 1 / FS\n\nt = np.arange(0, T, 1/FS)\nsig = np.sin(2 * np.pi * SIG_F * t) * (10 ** (SIG_DB / 20))\nnoise = np.random.randn(sig.size) * (10 ** (NOISE_DB / 20))\nsamples = sig + noise\n\nplt.plot(t[:100], samples[:100])\nplt.xlabel('Time (s)')\nplt.ylabel('Amplitude (V)')", "Calculate the average power of the clean signal and of the noise from the time domain samples. Compute SNR.
\nNote: the power of a sinusoid with unit amplitude is -3dB.", "P_sig_t = np.mean(sig ** 2) # same as np.sum((sig ** 2) * dT) / T\nP_noise_t = np.mean(noise ** 2)\nSNR_t = 10 * np.log10(P_sig_t / P_noise_t)\n\nprint('P(sig)= %.2f V^2, P(noise)= %.2f V^2, SNR= %.2f dB' % (P_sig_t, P_noise_t, SNR_t))\nprint('RMS(sig)= %.2f V, RMS(noise)= %.2f V' % (np.sqrt(P_sig_t), np.sqrt(P_noise_t)))", "Power Spectrum\nCompute the DFT of the time domain samples using a fixed length (N). \nNote: the DFT results have to be scaled by 1/sqrt(N) to conserve energy (unitary operator). You can achieve the same results with np.fft.fft(samples, norm='ortho'). Also, see Parseval's Theorem.", "N = 1000 # must be even for these computations\n\nX = np.fft.fft(samples, n=N) / np.sqrt(N)\nf = np.fft.fftfreq(N, dT)\n# Verify if time and frequency domain energies are the same\nnp.sum(np.abs(X) ** 2), np.sum(samples[:N] ** 2)", "First important observation: the squared magnitudes of the FFT values represent the energy distribution across the frequency bins for the given signal length (N). Thus, the absolute bin values depend on N.", "Exx = np.abs(X) ** 2\n\nplt.semilogy(np.fft.fftshift(f), np.fft.fftshift(Exx))\nplt.title('Energy Spectrum')\nplt.xlabel('Frequency (Hz)')\nplt.ylabel('Energy in bin ($V^2s$)')", "Let's convert the FFT values to power. In the time domain, we divided the total energy by N. This is what we do in the frequency domain, too, to get the average power in each frequency bin. If you followed carefully, we normalized the FFT squared magnitudes by N to get energy and again by N to get power. This is why people prefer to normalize the FFT values by N (so the squared magnitudes are in power units).", "Pxx = Exx / N\nplt.semilogy(np.fft.fftshift(f), np.fft.fftshift(Pxx))\nplt.title('Power Spectrum')\nplt.xlabel('Frequency (Hz)')\nplt.ylabel('Power in bin ($V^2$)')", "Due to the real time-domain samples we have a symmetric spectrum (complex conjugate).
Let's take and scale the positive half of it.", "Pxx = Pxx[:N // 2]\nPxx[1:] *= 2 # conserve energy: double every bin except DC (the Nyquist bin was dropped by the slice)\nf = f[:N // 2]\n\nplt.semilogy(f, Pxx)\nplt.title('Power Spectrum')\nplt.xlabel('Frequency (Hz)')\nplt.ylabel('Power in bin ($V^2$)')\nplt.ylim(1e-6, 1)\nplt.grid()", "Let's compare the result with the built-in periodogram function.", "f2, Pxx2 = signal.periodogram(samples, FS, nfft=N, scaling='spectrum')\nplt.semilogy(f2, Pxx2)\nplt.title('Power Spectrum using scipy.signal.periodogram')\nplt.xlabel('Frequency (Hz)')\nplt.ylabel('Power in bin ($V^2$)')\nplt.ylim(1e-6, 1)\nplt.grid()\nplt.show()", "Calculate SNR using the frequency domain (assuming the largest peak is the signal).", "f_sig_idx = np.argmax(Pxx)\nSNR_f = 10 * np.log10(Pxx[f_sig_idx] / np.sum(np.delete(Pxx, f_sig_idx)))\n\nprint('SNR= %.2f dB (time domain SNR= %.2f dB)' % (SNR_f, SNR_t))", "Power Spectral Density\nInstead of plotting the (average) power in each frequency bin we can compute/plot the power density. This is a scaling of the power spectrum results by the width of the bin (in Hz). We also compare this to the built-in periodogram with density scaling.", "plt.semilogy(f, Pxx / (FS / N))\nplt.title('PSD computed from DFT')\nplt.xlabel('Frequency (Hz)')\nplt.ylabel('PSD ($V^2/Hz$)')\nplt.ylim(1e-7, 1)\nplt.grid()\nplt.show()\n\nf2, Pxx2 = signal.periodogram(samples, FS, nfft=N, scaling='density')\nplt.semilogy(f2, Pxx2)\nplt.title('PSD using scipy.signal.periodogram')\nplt.xlabel('Frequency (Hz)')\nplt.ylabel('PSD ($V^2/Hz$)')\nplt.ylim(1e-7, 1)\nplt.grid()\n", "Observation: the PSD figure is better for showing the noise level (its height does not change with N), but is hard to interpret for the signal (its height changes).
The 'spectrum' scaling is better for the signal (does not change with N) but misleading for the noise level.", "f3, Pxx3 = signal.periodogram(samples, FS, nfft=512, scaling='density')\nplt.semilogy(f3, Pxx3)\nplt.title('PSD with N=512')\nplt.xlabel('Frequency (Hz)')\nplt.ylabel('PSD ($V^2/Hz$)')\nplt.ylim(1e-7, 1)\nplt.grid()\nplt.show()\n\nf3, Pxx3 = signal.periodogram(samples, FS, nfft=8192, scaling='density')\nplt.semilogy(f3, Pxx3)\nplt.title('PSD with N=8192')\nplt.xlabel('Frequency (Hz)')\nplt.ylabel('PSD ($V^2/Hz$)')\nplt.ylim(1e-7, 1)\nplt.grid()\nplt.show()\n\nf3, Pxx3 = signal.periodogram(samples, FS, nfft=512, scaling='spectrum')\nplt.semilogy(f3, Pxx3)\nplt.title('Power Spectrum with N=512')\nplt.xlabel('Frequency (Hz)')\nplt.ylabel('Power in bin ($V^2$)')\nplt.ylim(1e-7, 1)\nplt.grid()\nplt.show()\n\nf3, Pxx3 = signal.periodogram(samples, FS, nfft=8192, scaling='spectrum')\nplt.semilogy(f3, Pxx3)\nplt.title('Power Spectrum with N=8192')\nplt.xlabel('Frequency (Hz)')\nplt.ylabel('Power in bin ($V^2$)')\nplt.ylim(1e-7, 1)\nplt.grid()\nplt.show()", "TODO: Windowing" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
scikit-optimize/scikit-optimize.github.io
0.8/notebooks/auto_examples/hyperparameter-optimization.ipynb
bsd-3-clause
[ "%matplotlib inline", "Tuning a scikit-learn estimator with skopt\nGilles Louppe, July 2016\nKatie Malone, August 2016\nReformatted by Holger Nahrstaedt 2020\n.. currentmodule:: skopt\nIf you are looking for a :obj:sklearn.model_selection.GridSearchCV replacement checkout\nsphx_glr_auto_examples_sklearn-gridsearchcv-replacement.py instead.\nProblem statement\nTuning the hyper-parameters of a machine learning model is often carried out\nusing an exhaustive exploration of (a subset of) the space all hyper-parameter\nconfigurations (e.g., using :obj:sklearn.model_selection.GridSearchCV), which\noften results in a very time consuming operation.\nIn this notebook, we illustrate how to couple :class:gp_minimize with sklearn's\nestimators to tune hyper-parameters using sequential model-based optimisation,\nhopefully resulting in equivalent or better solutions, but within less\nevaluations.\nNote: scikit-optimize provides a dedicated interface for estimator tuning via\n:class:BayesSearchCV class which has a similar interface to those of\n:obj:sklearn.model_selection.GridSearchCV. This class uses functions of skopt to perform hyperparameter\nsearch efficiently. For example usage of this class, see\nsphx_glr_auto_examples_sklearn-gridsearchcv-replacement.py\nexample notebook.", "print(__doc__)\nimport numpy as np", "Objective\nTo tune the hyper-parameters of our model we need to define a model,\ndecide which parameters to optimize, and define the objective function\nwe want to minimize.", "from sklearn.datasets import load_boston\nfrom sklearn.ensemble import GradientBoostingRegressor\nfrom sklearn.model_selection import cross_val_score\n\nboston = load_boston()\nX, y = boston.data, boston.target\nn_features = X.shape[1]\n\n# gradient boosted trees tend to do well on problems like this\nreg = GradientBoostingRegressor(n_estimators=50, random_state=0)", "Next, we need to define the bounds of the dimensions of the search space\nwe want to explore and pick the objective. 
In this case the cross-validation\nmean absolute error of a gradient boosting regressor over the Boston\ndataset, as a function of its hyper-parameters.", "from skopt.space import Real, Integer\nfrom skopt.utils import use_named_args\n\n\n# The list of hyper-parameters we want to optimize. For each one we define the\n# bounds, the corresponding scikit-learn parameter name, as well as how to\n# sample values from that dimension (`'log-uniform'` for the learning rate)\nspace  = [Integer(1, 5, name='max_depth'),\n Real(10**-5, 10**0, \"log-uniform\", name='learning_rate'),\n Integer(1, n_features, name='max_features'),\n Integer(2, 100, name='min_samples_split'),\n Integer(1, 100, name='min_samples_leaf')]\n\n# this decorator allows your objective function to receive the parameters as\n# keyword arguments. This is particularly convenient when you want to set\n# scikit-learn estimator parameters\n@use_named_args(space)\ndef objective(**params):\n reg.set_params(**params)\n\n return -np.mean(cross_val_score(reg, X, y, cv=5, n_jobs=-1,\n scoring=\"neg_mean_absolute_error\"))", "Optimize all the things!\nWith these two pieces, we are now ready for sequential model-based\noptimisation. Here we use Gaussian process-based optimisation.", "from skopt import gp_minimize\nres_gp = gp_minimize(objective, space, n_calls=50, random_state=0)\n\n\"Best score=%.4f\" % res_gp.fun\n\nprint(\"\"\"Best parameters:\n- max_depth=%d\n- learning_rate=%.6f\n- max_features=%d\n- min_samples_split=%d\n- min_samples_leaf=%d\"\"\" % (res_gp.x[0], res_gp.x[1],\n res_gp.x[2], res_gp.x[3],\n res_gp.x[4]))", "Convergence plot", "from skopt.plots import plot_convergence\n\nplot_convergence(res_gp)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tbphu/fachkurs_master_2016
07_modelling/20160527_NumericalSolver.ipynb
mit
[ "ODE to joy\nJens Hahn - 27/05/2016\nContinuous deterministic modelling with differential equations\nNumerical integration\nEvery numerical procedure to solve an ODE is based on the discretisation of the system and the difference quotient. It's very easy to understand: you just have to read the $\\frac{\\text{d}\\vec{x}}{\\text{d}t}$ as a $\\frac{\\Delta \\vec{x}}{\\Delta t}$. Then you can multiply both sides of the equation with $\\Delta t$ and you have an equation describing the change of your variables during a certain time interval $\\Delta t$:\n$$ \\Delta \\vec{x} = \\vec{f}(\\vec{x}, t)\\times \\Delta t$$\nNext step, the discretisation:\n$$\\vec{x}_{i+1} - \\vec{x}_i = \\vec{f}(\\vec{x}_i, t_i)\\times \\Delta t$$\nNext thing is putting the $\\vec{x}_i$ on the other side and renaming $\\Delta t$ to $h$:\n$$\\vec{x}_{i+1} = \\vec{x}_i + \\vec{f}(\\vec{x}_i, t_i)\\times h$$\nOf course, the smaller you choose the time interval $h$, the more accurate your result will be in comparison to the analytical solution. \nSo it's clear, we choose a tiny one, right? Well, not exactly: the smaller your time interval, the longer the simulation will take. Therefore, we need a compromise, and here the provided software will help us by constantly testing and observing the numerical solution and adapting the \"step size\" $h$ automatically.\nEuler's method\nEuler's method is the simplest way to solve ODEs numerically. It can be written as a short formula. \n$h$ is again the time step: $h_i = t_{i+1} - t_i$ \nThen, the solution looks like this:\n$$\\Phi (t,x,h) = x + h\\cdot f(t,x)$$\nUnfortunately, this method is highly dependent on the size of $h_i$: the smaller it is, the more accurate the solution.\n<img src=\"Euler.png\">\nAnother way to understand this is to take a look at the Riemann sum. You probably know it already: calculate the value of $f(x)$ and multiply it by the step size.
So it's not a new idea to you.\n<img src=\"Riemann.gif\">\nLet's test the method with our well-known predator-prey model (Lotka-Volterra):", "import numpy as np\n\n# Lotka Volterra model\n# initialise parameters\nk1 = 1.5\nk2 = 1.\nk3 = 3.\nk4 = 1.\n\ndef my_dxdt(s,t):\n \"\"\"\n Function returns values of derivatives of Lotka Volterra model\n \"\"\"\n return [k1*s[0] - k2*s[0]*s[1], - k3*s[1]+k4*s[0]*s[1]]\n\ndef my_euler_solver(dxdt, s0, timegrid):\n \"\"\"\n Implementation of a simple Euler method (constant stepsize)\n \"\"\"\n # first species values are s0\n s = s0\n # do timesteps\n for j, time in enumerate(timegrid):\n # first time step, just save initial values\n if j == 0:\n result = [[value] for value in s0]\n continue\n # next time step, calculate values and save them\n for i, species in enumerate(s):\n hi = (timegrid[j] - timegrid[j-1])\n species = species + dxdt(s,time)[i] * hi\n result[i].append(species)\n # update species with new values\n s[0] = result[0][-1]\n s[1] = result[1][-1]\n return result", "To test the accuracy, we run the simulation with 2 different time grids, one with a step size of 0.01 and one with step size 0.001", "import matplotlib.pyplot as plt\n%matplotlib inline\n\n# timegrids\ntimegrid_e3 = np.linspace(0,20,2000)\ntimegrid_e4 = np.linspace(0,20,20000)\n\n# get solutions\ns0=[5,10]\nmy_euler_result_e3 = my_euler_solver(my_dxdt, s0, timegrid_e3)\ns0=[5,10]\nmy_euler_result_e4 = my_euler_solver(my_dxdt, s0, timegrid_e4)\n", "Heun's method\nIf you want to increase the accuracy of your method, you could use the trapezoidal rule you know from the approximation of integrals. The second point is of course missing, but here you could use Euler's method! \nAs we will see, this method is a huge improvement compared to Euler's method! 
\n$$\\Phi (t,x,h) = x + \\frac{h}{2}\\Bigl(f(t,x)+f\\bigl(t+h,\\underbrace{x+h\\cdot f(t,x)}_{Euler's\\ method}\\bigr)\\Bigr)$$\nRunge-Kutta method\nThe idea of Runge and Kutta was quite straightforward: why not use Heun's method recursively? To get the second point, you do not use Euler's method but again the trapezoidal rule... and again... and again. This method is still widely used and works well for most ODE systems!", "def my_heun_solver(dxdt, s0, timegrid):\n \"\"\"\n Implementation of the Heun method (constant stepsize)\n \"\"\"\n # first species values are s0\n s = s0\n # do timesteps\n for j, time in enumerate(timegrid):\n # first time step, just save initial values\n if j == 0:\n result = [[value] for value in s0]\n continue\n # next time step, calculate values and save them\n for i, species in enumerate(s):\n hi = (timegrid[j] - timegrid[j-1])\n species = species + (hi/2)*(dxdt(s,time)[i]+dxdt([s[k]+hi*dxdt(s,time)[k] for k in range(len(s))], time+hi)[i])\n result[i].append(species)\n # update species with new values\n s[0] = result[0][-1]\n s[1] = result[1][-1]\n return result\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\n# timegrids\ntimegrid_e3 = np.linspace(0,20,2000)\ntimegrid_e4 = np.linspace(0,20,20000)\n\n# get solutions\ns0=[5,10]\nmy_heun_result_e3 = my_heun_solver(my_dxdt, s0, timegrid_e3)\ns0=[5,10]\nmy_heun_result_e4 = my_heun_solver(my_dxdt, s0, timegrid_e4)", "Let's simulate the same system also with odeint, the standard ODE solver of the scipy Python package.", "import scipy.integrate\n\ntimegrid = np.linspace(0,20,2000)\ns0 = [5,10]\nresult = scipy.integrate.odeint(my_dxdt, s0, timegrid)", "And now we compare the results.
I marked the amplitude and position of the maxima with red dotted lines.", "plt.figure(1)\nplt.plot(timegrid_e3, my_euler_result_e3[0], label=\"X 0.01\")\nplt.plot(timegrid_e3, my_euler_result_e3[1], label=\"Y 0.01\")\nplt.plot(timegrid_e4, my_euler_result_e4[0], label=\"X 0.001\")\nplt.plot(timegrid_e4, my_euler_result_e4[1], label=\"Y 0.001\")\nplt.plot([0,20], [13.67, 13.67], 'r--')\nplt.plot([4.32,4.32], [0,14], 'r--')\nplt.plot([8.9,8.9], [0,14], 'r--')\nplt.plot([13.46,13.46], [0,14], 'r--')\nplt.plot([18.06,18.06], [0,14], 'r--')\nplt.legend(loc=2)\nplt.title('Euler method')\n\nplt.figure(2)\nplt.plot(timegrid_e3, my_heun_result_e3[0], label=\"X 0.01\")\nplt.plot(timegrid_e3, my_heun_result_e3[1], label=\"Y 0.01\")\nplt.plot(timegrid_e4, my_heun_result_e4[0], label=\"X 0.001\")\nplt.plot(timegrid_e4, my_heun_result_e4[1], label=\"Y 0.001\")\nplt.plot([0,20], [13.67, 13.67], 'r--')\nplt.plot([4.32,4.32], [0,14], 'r--')\nplt.plot([8.9,8.9], [0,14], 'r--')\nplt.plot([13.46,13.46], [0,14], 'r--')\nplt.plot([18.06,18.06], [0,14], 'r--')\nplt.legend(loc=2)\nplt.title('Heun method')\n\nplt.figure(3)\nplt.plot(timegrid, result.T[0], label='X')\nplt.plot(timegrid, result.T[1], label='Y')\nplt.plot([0,20], [13.67, 13.67], 'r--')\nplt.plot([4.32,4.32], [0,14], 'r--')\nplt.plot([8.9,8.9], [0,14], 'r--')\nplt.plot([13.46,13.46], [0,14], 'r--')\nplt.plot([18.06,18.06], [0,14], 'r--')\nplt.legend(loc=2)\nplt.title('odeint')", "As you can see, the Heun's method seems to have already a remarkable accuracy, even if it is a very simple method. 
Let's compare the results of odeint and the Heun's method directly:", "plt.plot(timegrid, result.T[0], label='X odeint')\nplt.plot(timegrid, result.T[1], label='Y odeint')\nplt.legend(loc=2)\n\nplt.plot(timegrid_e4, my_heun_result_e4[0], label=\"X Heun\")\nplt.plot(timegrid_e4, my_heun_result_e4[1], label=\"Y Heun\")\nplt.plot([0,20], [13.67, 13.67], 'r--')\nplt.plot([4.32,4.32], [0,14], 'r--')\nplt.plot([8.9,8.9], [0,14], 'r--')\nplt.plot([13.46,13.46], [0,14], 'r--')\nplt.plot([18.06,18.06], [0,14], 'r--')\nplt.legend(loc=2)\nplt.title('Comparison odeint & Heun method')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
quantopian/research_public
notebooks/lectures/The_Dangers_of_Overfitting/notebook.ipynb
apache-2.0
[ "Overfitting\nBy Evgenia \"Jenny\" Nitishinskaya and Delaney Granizo-Mackenzie. Algorithms by David Edwards.\nPart of the Quantopian Lecture Series:\n\nwww.quantopian.com/lectures\ngithub.com/quantopian/research_public\n\n\nWhat is overfitting?\nWhen constructing a model, we tune both the parameters and the model by fitting to sample data. We then use the model and parameters to predict data we have not yet observed. We say a model is overfit when it is overly sensitive to noise and idiosyncrasies in the sample data, and therefore does not reflect the underlying data-generating process.\nTo understand why this happens, one has to consider the amount of noise present in any dataset. One can consider a set of data as $D_{T}$, the true underlying data that came from whatever process we are trying to model, and $\\epsilon$, some random noise. Because what we see is $D = D_{T} + \\epsilon$, we might fit our model to predict perfectly for the given $\\epsilon$, but not for $D_{T}$.\nThis is problematic because we only care about fitting to the sample insofar as that gives an accurate fit to future data.
The two broad causes of overfitting are:\n* small sample size, so that noise and trend are not distinguishable\n* choosing an overly complex model, so that it ends up contorting to fit the noise in the sample\nVerbal Example: Too Many Rules (Complexity)\nLet's say you have the following dataset:\n| TV Channel | Room Lighting Intensity | Enjoyment |\n|------------|-------------------------|-----------|\n| 1 | 2 | 1 |\n| 2 | 3 | 2 |\n| 3 | 1 | 3 |\nYou are trying to predict enjoyment, so you create the following rules:\n\nIf TV Channel is 1 and Lighting Intensity is 2, then Enjoyment will be 1.\nIf TV Channel is 2 and Lighting Intensity is 3, then Enjoyment will be 2.\nIf TV Channel is 3 and Lighting Intensity is 1, then Enjoyment will be 3.\nIn all other cases predict an average enjoyment of 2.\n\nThis is a well-defined model for future data; however, in this case let's say your enjoyment is purely dependent on the TV channel and not on the lighting. Because we have a rule for each row in our dataset, our model is perfectly predictive in our historical data, but would perform poorly in real trials because we are overfitting to random noise in the lighting intensity data.\nGeneralizing this to stocks, if your model starts developing many specific rules based on specific past events, it is almost definitely overfitting. This is why black-box machine learning (neural networks, etc.) is so dangerous when not done correctly.\nExample: Curve fitting\nOverfitting is most easily seen when we look at polynomial regression. Below we construct a dataset which noisily follows a quadratic. The linear model is underfit: simple linear models aren't suitable for all situations, especially when we have reason to believe that the data is nonlinear.
The quadratic curve has some error but fits the data well.\nWhen we fit a ninth-degree polynomial to the data, the error is zero - a ninth-degree polynomial can be constructed to go through any 10 points - but, looking at the tails of the curve, we know that we can't expect it to accurately predict other samples from the same distribution. It fits the data perfectly, but that is because it also fits the noise perfectly, and the noise is not what we want to model. In this case we have selected a model that is too complex.", "import numpy as np\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport statsmodels.api as sm\nfrom statsmodels import regression\nfrom scipy import poly1d\n\nx = np.arange(10)\ny = 2*np.random.randn(10) + x**2\nxs = np.linspace(-0.25, 9.25, 200)\n\nlin = np.polyfit(x, y, 1)\nquad = np.polyfit(x, y, 2)\nmany = np.polyfit(x, y, 9)\n\nplt.scatter(x, y)\nplt.plot(xs, poly1d(lin)(xs))\nplt.plot(xs, poly1d(quad)(xs))\nplt.plot(xs, poly1d(many)(xs))\nplt.ylabel('Y')\nplt.xlabel('X')\nplt.legend(['Underfit', 'Good fit', 'Overfit']);", "When working with real data, there is unlikely to ever be a situation where a ninth-degree polynomial is appropriate: our choice of function should reflect a belief about the underlying process, and real-world processes generally do not follow high-degree polynomial curves. This example is contrived, but it can be tempting to use a quadratic or cubic model just to decrease sample error.\nNote: Model/Parameter Parsimony\nJust as the most elegant physics models describe a tremendous amount of our world through a few equations, a good trading model should explain most of the data through a few rules. Any time you start to have a number of rules even close to the number of points in your data set, you can be sure you are overfitting. Since parameters can be thought of as rules as they equivalently constrain a model, the same is true of parameters. 
Fewer parameters is better, and it is better to explain 60% of the data with 2-3 parameters than 90% with 10.\nBeware of the perfect fit\nBecause there is almost always noise present in real data, a perfect fit is almost always indicative of overfitting. It is almost impossible to know the percentage noise/signal in a given data set while you are developing the model, but use your common sense. Are the predictions surprisingly good? Then you're probably overfitting.\nExample: Regression parameters\nHow do we know which variables to include in a model? If we're afraid of omitting something important, we might try different ones and include all the variables we can find that improve the fit. Below we regress one asset that is in the same sector as the asset whose price we're trying to predict, and three other unrelated ones. In our initial timeframe, we are able to fit the model more closely to the data when using multiple variables than when using just one.", "# Load one year's worth of pricing data for five different assets\nstart = '2013-01-01'\nend = '2014-01-01'\nx1 = get_pricing('PEP', fields='price', start_date=start, end_date=end)\nx2 = get_pricing('MCD', fields='price', start_date=start, end_date=end)\nx3 = get_pricing('ATHN', fields='price', start_date=start, end_date=end)\nx4 = get_pricing('DOW', fields='price', start_date=start, end_date=end)\ny = get_pricing('PG', fields='price', start_date=start, end_date=end)\n\n# Build a linear model using only x1 to explain y\nslr = regression.linear_model.OLS(y, sm.add_constant(x1)).fit()\nslr_prediction = slr.params[0] + slr.params[1]*x1\n\n# Run multiple linear regression using x1, x2, x3, x4 to explain y\nmlr = regression.linear_model.OLS(y, sm.add_constant(np.column_stack((x1,x2,x3,x4)))).fit()\nmlr_prediction = mlr.params[0] + mlr.params[1]*x1 + mlr.params[2]*x2 + mlr.params[3]*x3 + mlr.params[4]*x4\n\n# Compute adjusted R-squared for the two different models\nprint 'SLR R-squared:', slr.rsquared_adj\nprint 
'SLR p-value:', slr.f_pvalue\nprint 'MLR R-squared:', mlr.rsquared_adj\nprint 'MLR p-value:', mlr.f_pvalue\n\n# Plot y along with the two different predictions\ny.plot()\nslr_prediction.plot()\nmlr_prediction.plot()\nplt.ylabel('Price')\nplt.xlabel('Date')\nplt.legend(['PG', 'SLR', 'MLR']);", "However, when we use the same estimated parameters to model a different time period, we find that the single-variable model fits worse, while the multiple-variable model is entirely useless. It seems that the relationships we found are not consistent and are particular to the original sample period.", "# Load the next of pricing data\nstart = '2014-01-01'\nend = '2015-01-01'\nx1 = get_pricing('PEP', fields='price', start_date=start, end_date=end)\nx2 = get_pricing('MCD', fields='price', start_date=start, end_date=end)\nx3 = get_pricing('ATHN', fields='price', start_date=start, end_date=end)\nx4 = get_pricing('DOW', fields='price', start_date=start, end_date=end)\ny = get_pricing('PG', fields='price', start_date=start, end_date=end)\n\n# Extend our model from before to the new time period\nslr_prediction2 = slr.params[0] + slr.params[1]*x1\nmlr_prediction2 = mlr.params[0] + mlr.params[1]*x1 + mlr.params[2]*x2 + mlr.params[3]*x3 + mlr.params[4]*x4\n\n# Manually compute adjusted R-squared over the new time period\n\n# Adjustment 1 is for the SLR model\np = 1\nN = len(y)\nadj1 = float(N - 1)/(N - p - 1)\n\n# Now for MLR\np = 4\nN = len(y)\nadj2 = float(N - 1)/(N - p - 1)\n\nSST = sum((y - np.mean(y))**2)\nSSRs = sum((slr_prediction2 - y)**2)\nprint 'SLR R-squared:', 1 - adj1*SSRs/SST\nSSRm = sum((mlr_prediction2 - y)**2)\nprint 'MLR R-squared:', 1 - adj2*SSRm/SST\n\n# Plot y along with the two different predictions\ny.plot()\nslr_prediction2.plot()\nmlr_prediction2.plot()\nplt.ylabel('Price')\nplt.xlabel('Date')\nplt.legend(['PG', 'SLR', 'MLR']);", "If we wanted, we could scan our universe for variables that were correlated with the dependent variable, and construct an extremely 
overfitted model. However, in most cases the correlation will be spurious, and the relationship will not continue into the future.\nExample: Rolling windows\nOne of the challenges in building a model that uses rolling parameter estimates, such as rolling mean or rolling beta, is choosing a window length. A longer window will take into account long-term trends and be less volatile, but it will also lag more when taking into account new observations. The choice of window length strongly affects the rolling parameter estimate and can change how we see and treat the data. Below we calculate the rolling averages of a stock price for different window lengths:", "# Load the pricing data for a stock\nstart = '2011-01-01'\nend = '2013-01-01'\npricing = get_pricing('MCD', fields='price', start_date=start, end_date=end)\n\n# Compute rolling averages for various window lengths\nmu_30d = pricing.rolling(window=30).mean()\nmu_60d = pricing.rolling(window=60).mean()\nmu_100d = pricing.rolling(window=100).mean()\n\n# Plot asset pricing data with rolling means from the 100th day, when all the means become available\nplt.plot(pricing[100:], label='Asset')\nplt.plot(mu_30d[100:], label='30d MA')\nplt.plot(mu_60d[100:], label='60d MA')\nplt.plot(mu_100d[100:], label='100d MA')\nplt.xlabel('Day')\nplt.ylabel('Price')\nplt.legend();", "If we pick the length based on which seems best - say, on how well our model or algorithm performs - we are overfitting. Below we have a simple trading algorithm which bets on the stock price reverting to the rolling mean (for more details, check out the mean reversion notebook). We use the performance of this algorithm to score window lengths and find the best one. However, when we consider a different timeframe, this window length is far from optimal. 
This is because our original choice was overfitted to the sample data.", "# Trade using a simple mean-reversion strategy\ndef trade(stock, length):\n \n # If window length is 0, algorithm doesn't make sense, so exit\n if length == 0:\n return 0\n \n # Compute rolling mean and rolling standard deviation\n rolling_window = stock.rolling(window=length)\n mu = rolling_window.mean()\n std = rolling_window.std()\n \n # Compute the z-scores for each day using the historical data up to that day\n zscores = (stock - mu)/std\n \n # Simulate trading\n # Start with no money and no positions\n money = 0\n count = 0\n for i in range(len(stock)):\n # Sell short if the z-score is > 1\n if zscores[i] > 1:\n money += stock[i]\n count -= 1\n # Buy long if the z-score is < 1\n elif zscores[i] < -1:\n money -= stock[i]\n count += 1\n # Clear positions if the z-score between -.5 and .5\n elif abs(zscores[i]) < 0.5:\n money += count*stock[i]\n count = 0\n return money\n\n# Find the window length 0-254 that gives the highest returns using this strategy\nlength_scores = [trade(pricing, l) for l in range(255)]\nbest_length = np.argmax(length_scores)\nprint 'Best window length:', best_length\n\n# Get pricing data for a different timeframe\nstart2 = '2013-01-01'\nend2 = '2015-01-01'\npricing2 = get_pricing('MCD', fields='price', start_date=start2, end_date=end2)\n\n# Find the returns during this period using what we think is the best window length\nlength_scores2 = [trade(pricing2, l) for l in range(255)]\nprint best_length, 'day window:', length_scores2[best_length]\n\n# Find the best window length based on this dataset, and the returns using this window length\nbest_length2 = np.argmax(length_scores2)\nprint best_length2, 'day window:', length_scores2[best_length2]", "Clearly fitting to our sample data doesn't always give good results in the future. 
Just for fun, let's plot the length scores computed from the two different timeframes:", "plt.plot(length_scores)\nplt.plot(length_scores2)\nplt.xlabel('Window length')\nplt.ylabel('Score')\nplt.legend(['2011-2013', '2013-2015']);", "To avoid overfitting, we can use economic reasoning or the nature of our algorithm to pick our window length. We can also use Kalman filters, which do not require us to specify a length; this method is covered in another notebook.\nAvoiding overfitting\nWe can try to avoid overfitting by taking large samples, choosing reasonable and simple models, and not cherry-picking parameters to fit the data; but just running two backtests is already overfitting.\nOut of Sample Testing\nTo make sure we haven't broken our model with overfitting, we have to test out of sample. That is, we need to gather data that we did not use in constructing the model, and test whether our model continues to work. If we cannot gather large amounts of additional data at will, we should split the sample we have into two parts, of which one is reserved for testing only.\nCommon Mistake: Abusing Out of Sample Data\nSometimes people will construct a model on in-sample data, test on out-of-sample data, and conclude it doesn't work. They will then repeat this process until they find a model that works. This is still overfitting, as you have now overfit the model to the out-of-sample data by using it many times, and when you actually test on true out-of-sample data your model will likely break down.\nCross Validation\nCross validation is the process of splitting your data into n parts, then estimating optimal parameters for n-1 parts combined and testing on the final part.
By doing this n times, once for each part held out, we can establish how stable our parameter estimates are and how predictive they are on data not from the original set.\nInformation Criterion\nInformation criteria are a rigorous statistical way to test if the amount of complexity in your model is worth the extra predictive power. The test favors simpler models and will tell you if you are introducing a large amount of complexity without much return. One of the most common methods is the Akaike Information Criterion.\nThis presentation is for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation for any security; nor does it constitute an offer to provide investment advisory or other services by Quantopian, Inc. (\"Quantopian\"). Nothing contained herein constitutes investment advice or offers any opinion with respect to the suitability of any security, and any views expressed herein should not be taken as advice to buy, sell, or hold any security or as an endorsement of any security or company. In preparing the information contained herein, Quantopian, Inc. has not taken into account the investment needs, objectives, and financial circumstances of any particular investor. Any views expressed and data illustrated herein were prepared based upon information, believed to be reliable, available to Quantopian, Inc. at the time of publication. Quantopian makes no guarantees as to their accuracy or completeness. All information is subject to change and may quickly become unreliable for various reasons, including changes in market conditions or economic circumstances." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
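The cross-validation procedure described in the notebook above — split the data into n parts, hold each part out once, and score candidate parameters on the held-out part — can be sketched in plain Python. This is a minimal sketch, not the Quantopian API; `kfold_splits` and `cross_validated_scores` are hypothetical helper names, and `score(train, test, p)` is a caller-supplied evaluation function.

```python
def kfold_splits(data, n):
    # Split `data` into n contiguous parts; yield (train, test) pairs
    # where each part is held out exactly once.
    size = len(data) // n
    for i in range(n):
        test = data[i * size:(i + 1) * size]
        train = data[:i * size] + data[(i + 1) * size:]
        yield train, test

def cross_validated_scores(data, params, score, n=5):
    # Evaluate each candidate parameter on every held-out fold.
    # A stable parameter scores similarly across all folds, which is
    # the point of the cross-validation argument in the text above.
    results = {p: [] for p in params}
    for train, test in kfold_splits(data, n):
        for p in params:
            results[p].append(score(train, test, p))
    return results
```

With window lengths as `params` and the backtest as `score`, a length that only wins on one fold is a likely overfit.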
mne-tools/mne-tools.github.io
0.13/_downloads/plot_artifacts_correction_ica.ipynb
bsd-3-clause
[ "%matplotlib inline", "Artifact Correction with ICA\nICA finds directions in the feature space\ncorresponding to projections with high non-Gaussianity. We thus obtain\na decomposition into independent components, and the artifact's contribution\nis localized in only a small number of components.\nThese components have to be correctly identified and removed.\nIf EOG or ECG recordings are available, they can be used in ICA to\nautomatically select the corresponding artifact components from the\ndecomposition. To do so, you have to first build an Epoch object around\nblink or heartbeat event.", "import numpy as np\n\nimport mne\nfrom mne.datasets import sample\n\nfrom mne.preprocessing import ICA\nfrom mne.preprocessing import create_eog_epochs, create_ecg_epochs\n\n# getting some data ready\ndata_path = sample.data_path()\nraw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'\n\nraw = mne.io.read_raw_fif(raw_fname, preload=True, add_eeg_ref=False)\nraw.filter(1, 40, n_jobs=2) # 1Hz high pass is often helpful for fitting ICA\n\npicks_meg = mne.pick_types(raw.info, meg=True, eeg=False, eog=False,\n stim=False, exclude='bads')", "Before applying artifact correction please learn about your actual artifacts\nby reading tut_artifacts_detect.\nFit ICA\nICA parameters:", "n_components = 25 # if float, select n_components by explained variance of PCA\nmethod = 'fastica' # for comparison with EEGLAB try \"extended-infomax\" here\ndecim = 3 # we need sufficient statistics, not all time points -> saves time\n\n# we will also set state of the random number generator - ICA is a\n# non-deterministic algorithm, but we want to have the same decomposition\n# and the same order of components each time this tutorial is run\nrandom_state = 23", "Define the ICA object instance", "ica = ICA(n_components=n_components, method=method, random_state=random_state)\nprint(ica)", "we avoid fitting ICA on crazy environmental artifacts that would\ndominate the variance and 
decomposition", "reject = dict(mag=5e-12, grad=4000e-13)\nica.fit(raw, picks=picks_meg, decim=decim, reject=reject)\nprint(ica)", "Plot ICA components", "ica.plot_components()  # can you spot some potential bad guys?", "Component properties\nLet's take a closer look at the properties of the first three independent components.", "# first, component 0:\nica.plot_properties(raw, picks=0)", "we can see that the data were filtered so the spectrum plot is not\nvery informative, let's change that:", "ica.plot_properties(raw, picks=0, psd_args={'fmax': 35.})", "we can also take a look at multiple different components at once:", "ica.plot_properties(raw, picks=[1, 2], psd_args={'fmax': 35.})", "Instead of opening individual figures with component properties, we can\nalso pass an instance of Raw or Epochs in inst argument to\nica.plot_components. This would allow us to open component properties\ninteractively by clicking on individual component topomaps. In the notebook\nthis works only when running matplotlib in interactive mode (%matplotlib).", "# uncomment the code below to test the interactive mode of plot_components:\n# ica.plot_components(picks=range(10), inst=raw)", "Advanced artifact detection\nLet's use a more efficient way to find artefacts", "eog_average = create_eog_epochs(raw, reject=dict(mag=5e-12, grad=4000e-13),\n picks=picks_meg).average()\n\n# We simplify things by setting the maximum number of components to reject\nn_max_eog = 1 # here we bet on finding the vertical EOG components\neog_epochs = create_eog_epochs(raw, reject=reject) # get single EOG trials\neog_inds, scores = ica.find_bads_eog(eog_epochs) # find via correlation\n\nica.plot_scores(scores, exclude=eog_inds) # look at r scores of components\n# we can see that only one component is highly correlated and that this\n# component got detected by our correlation analysis (red).\n\nica.plot_sources(eog_average, exclude=eog_inds) # look at source time course", "We can take a look at the properties of that
component, now using the\ndata epoched with respect to EOG events.\nWe will also use a little bit of smoothing along the trials axis in the\nepochs image:", "ica.plot_properties(eog_epochs, picks=eog_inds, psd_args={'fmax': 35.},\n image_args={'sigma': 1.})", "That component is showing a prototypical average vertical EOG time course.\nPay attention to the labels, a customized read-out of the\n:attr:ica.labels_ &lt;mne.preprocessing.ICA.labels_&gt;", "print(ica.labels_)", "These labels were used by the plotters and are added automatically\nby artifact detection functions. You can also manually edit them to annotate\ncomponents.\nNow let's see how we would modify our signals if we removed this component\nfrom the data", "ica.plot_overlay(eog_average, exclude=eog_inds, show=False)\n# red -> before, black -> after. Yes! We remove quite a lot!\n\n# to definitely register this component as a bad one to be removed\n# there is the ``ica.exclude`` attribute, a simple Python list\nica.exclude.extend(eog_inds)\n\n# from now on the ICA will reject this component even if no exclude\n# parameter is passed, and this information will be stored to disk\n# on saving\n\n# uncomment this for reading and writing\n# ica.save('my-ica.fif')\n# ica = read_ica('my-ica.fif')", "Exercise: find and remove ECG artifacts using ICA!", "ecg_epochs = create_ecg_epochs(raw, tmin=-.5, tmax=.5)\necg_inds, scores = ica.find_bads_ecg(ecg_epochs, method='ctps')\nica.plot_properties(ecg_epochs, picks=ecg_inds, psd_args={'fmax': 35.})", "What if we don't have an EOG channel?\nWe could either:\n\nmake a bipolar reference from frontal EEG sensors and use as virtual EOG\n channel. 
This can be tricky though as you can only hope that the frontal\n EEG channels only reflect EOG and not brain dynamics in the prefrontal\n cortex.\ngo for a semi-automated approach, using template matching.\n\nIn MNE-Python option 2 is easily achievable and it might give better results,\nso let's have a look at it.", "from mne.preprocessing.ica import corrmap # noqa", "The idea behind corrmap is that artefact patterns are similar across subjects\nand can thus be identified by correlating the different patterns resulting\nfrom each solution with a template. The procedure is therefore\nsemi-automatic. :func:mne.preprocessing.corrmap hence takes a list of\nICA solutions and a template, that can be an index or an array.\nAs we don't have different subjects or runs available today, here we will\nsimulate ICA solutions from different subjects by fitting ICA models to\ndifferent parts of the same recording. Then we will use one of the components\nfrom our original ICA as a template in order to detect sufficiently similar\ncomponents in the simulated ICAs.\nThe following block of code simulates having ICA solutions from different\nruns/subjects so it should not be used in real analysis - use independent\ndata sets instead.", "# We'll start by simulating a group of subjects or runs from a subject\nstart, stop = [0, len(raw.times) - 1]\nintervals = np.linspace(start, stop, 4, dtype=int)\nicas_from_other_data = list()\nraw.pick_types(meg=True, eeg=False) # take only MEG channels\nfor ii, start in enumerate(intervals):\n if ii + 1 < len(intervals):\n stop = intervals[ii + 1]\n print('fitting ICA from {0} to {1} seconds'.format(start, stop))\n this_ica = ICA(n_components=n_components, method=method).fit(\n raw, start=start, stop=stop, reject=reject)\n icas_from_other_data.append(this_ica)", "Remember, don't do this at home! Start by reading in a collection of ICA\nsolutions instead. 
Something like:\nicas = [mne.preprocessing.read_ica(fname) for fname in ica_fnames]", "print(icas_from_other_data)", "We use our original ICA as reference.", "reference_ica = ica", "Investigate our reference ICA:", "reference_ica.plot_components()", "Which one is the bad EOG component?\nHere we rely on our previous detection algorithm. You would need to decide\nyourself if no automatic detection was available.", "reference_ica.plot_sources(eog_average, exclude=eog_inds)", "Indeed it looks like an EOG, also in the average time course.\nWe construct a list where our reference run is the first element. Then we\ncan detect similar components from the other runs using\n:func:mne.preprocessing.corrmap. So our template must be a tuple like\n(reference_run_index, component_index):", "icas = [reference_ica] + icas_from_other_data\ntemplate = (0, eog_inds[0])", "Now we can do the corrmap.", "fig_template, fig_detected = corrmap(icas, template=template, label=\"blinks\",\n show=True, threshold=.8, ch_type='mag')", "Nice, we have found similar ICs from the other (simulated) runs!\nThis is even nicer if we have 20 or 100 ICA solutions in a list.\nYou can also use SSP to correct for artifacts. It is a bit simpler and\nfaster but also less precise than ICA and requires that you know the event\ntiming of your artifact.\nSee tut_artifacts_correct_ssp." ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
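The corrmap step in the notebook above boils down to correlating each candidate component pattern with a template and keeping the matches above a threshold. This toy sketch illustrates that idea in pure Python — it is not MNE's implementation; `pearson_r` and `match_template` are hypothetical names, and real corrmap works on sensor topographies with more bookkeeping.

```python
import math

def pearson_r(x, y):
    # Plain Pearson correlation between two equal-length sequences.
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def match_template(template, patterns, threshold=0.8):
    # Return indices of patterns whose |r| with the template exceeds the
    # threshold. The sign is ignored because ICA component polarity is
    # arbitrary -- an inverted blink pattern is still a blink pattern.
    return [i for i, p in enumerate(patterns)
            if abs(pearson_r(template, p)) >= threshold]
```

In the notebook, the template is the EOG component from the reference ICA and the patterns are the components of each simulated run's ICA solution.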
harmsm/pythonic-science
labs/00.0_python-practice/intro-to-python-homework_key.ipynb
unlicense
[ "Intro to Python Homework\n\nWrite a line of code that stores the value of the $atan(5)$ in the variable y.", "import numpy as np\ny = np.arctan(5)", "In words, what do the math.ceil and math.floor functions do?\n\nThey round a number up to the nearest integer (ceil) or down to the nearest integer (floor). \n\nStore the result of $5x^{4} - 3x^{2} + 0.5x - 20$ in the variable y, where $x$ is 2.", "x = 2\ny = 5*(x**4) - 3*x**2 + 0.5*x - 20", "Construct a conditional that prints $x$ if it is smaller than 20 but greater than -5", "if x > -5 and x < 20:\n print(x)", "Construct a conditional that prints $x$ if it is not between 5 and 12.", "if x <= 5 or x >= 12:\n print(x)", "What will the following code print? (Don't just copy and paste -- reason through it!)\n\n\nIt will print c. x does not meet the first two conditionals (x &gt; 2 or x &lt; 0 and not x == 2), so it will reach x == 2. Since it equals 2, this will print c and skip the last line printing d. \n\nWrite a loop that prints every 5th number between 1 and 2000. (HINT: try help(range))", "for i in range(1,2001,5):\n print(i)", "What will the following program print out?\n\n\nIt will print 0 to 9 and then -10 to -99. \n\nWrite a loop that calculates a Riemann sum for $x^2$ for $x \\in [-5,5]$.", "\n# left hand integral\ndx = 1\nintegral = 0\nfor x in range(-5,5):\n integral = integral + dx*x**2\nprint(integral)\n\n## A better, higher accuracy way\ndx = 0.001\nmidpoints = np.arange(-5,5,dx) + dx/2\nprint(np.sum(midpoints**2)*dx)\n\n", "Create a list of all integers between 2 and 30.", "some_list = []\nfor i in range(2,31):\n some_list.append(i)\n \n## another (better!) way is to cast it; faster\nsome_list = list(range(2,31))", "Create a list called some_list that has all sin(x) for x $\\in$ [$0$, $\\pi/4$, $\\pi /2$, $3 \\pi /4$...
$2 \\pi$].", "import numpy as np\nsome_list = []\nfor i in range(0,9):\n some_list.append(np.sin(np.pi*i/4))\n ", "+ Create a list called some_list from -100 to 0 and then replace every even number with its positive value. (You'll want to use the [modulo operator](https://stackoverflow.com/questions/4432208/how-does-work-in-python)). The output should look like:\n```python\nprint(some_list)\n[100,-99,98,-97,96,...,0]\n```", "some_list = []\nfor i in range(-100,1):\n if i % 2:\n some_list.append(i)\n else:\n some_list.append(-i)\n ", "Write a loop that creates a dictionary that uses the letters A-E as keys to the values 0-4.", "letters = \"ABCDE\"\nsome_dict = {}\nfor i in range(5):\n some_dict[letters[i]] = i\n\n## A different way using a cool function called enumerate\nsome_dict = {}\nfor number, letter in enumerate(\"ABCDE\"):\n some_dict[letter] = number\n \n## Or even MORE compact using list comprehension\nsome_dict = dict([(letter,number) for number, letter in enumerate(\"ABCDE\")])", "Create a $3 \\times 3$ numpy array with the integers 0-8:\n\npython\n [[0,1,2],\n [3,4,5],\n [6,7,8]]\n Multiply the whole array by 5 and then take the natural log of all values (elementwise). What is the sum of the right-most column?", "some_list = [[0,1,2],[3,4,5],[6,7,8]]\nfor i in range(3):\n for j in range(3):\n some_list[i][j] = some_list[i][j]*5\n some_list[i][j] = np.log(some_list[i][j])\n\ntotal = 0\nfor j in range(3):\n total = total + some_list[j][2]\n \nprint(total)\n\n\n", "Repeat the exercise above using a numpy array. Use a numpy array to calculate all sin(x) for x $\\in$ [$0$, $\\pi/4$, $\\pi /2$, $3 \\pi /4$...
$2 \\pi$].", "some_array = np.array([[0,1,2],[3,4,5],[6,7,8]],dtype=np.int)\n\n## OR\nsome_array = np.zeros((3,3),dtype=np.int)\ntotal = 0\nfor i in range(3):\n for j in range(3):\n some_array[i,j] = total\n total += 1\n\n## OR (probably most efficient of the set)\nsome_array = np.array(range(9),dtype=np.int)\nsome_array = some_array.reshape((3,3))\n \nprint(np.sum(np.log((5*some_array))[:,2]))\n\nnp.log(some_array*5)", "Write a function that takes a string and returns it in all uppercase. (Hint, google this one)", "def capitalize(some_string):\n return some_string.upper()\ncapitalize(\"test\")", "Use matplotlib to plot $sin(x)$ for x $\\in$ [$0$, $\\pi/4$, $\\pi /2$, $3 \\pi /4$... $2 \\pi$]. Use both orange points and a green line.", "%matplotlib inline\nfrom matplotlib import pyplot as plt\nimport numpy as np\n\nx = np.arange(0,2.25*np.pi,0.25*np.pi)\ny = np.sin(x)\nplt.plot(x,y,\"-\",color=\"green\")\nplt.plot(x,y,\"1\",color=\"orange\",markersize=12)\n", "You measure the doubling times of bacterial strains A-D under identical conditions. \n| strain | doubling time (min) | \n|:------:|:-------------------:|\n| A | 20 |\n| B | 25 |\n| C | 39 |\n| D | 53 |\nAssuming you start with a single cell and have nutrients in excess, you can calculate the number of bacteria $N(t)$ in a culture after $t$ minutes according to: \n$$N(t) = 2^{t/d}$$\n\nWrite a function called num_bacteria that takes the time and doubling time and returns the number of bacteria present.", "def num_bacteria(t,doubling_time):\n \n return 2**(t/doubling_time)\n ", "Create a dictionary called doubling that keys the name of each strain to its population after 12 hours.", "doubling = {}\ndoubling[\"A\"] = num_bacteria(12*60,20)\ndoubling[\"B\"] = num_bacteria(12*60,25)\ndoubling[\"C\"] = num_bacteria(12*60,39)\ndoubling[\"D\"] = num_bacteria(12*60,53)\n\ndoubling", "Use matplotlib to create a single graph that shows $N(t)$ for all four bacterial strains from 0 to 18 hr.
Make sure you label your axes appropriately.", "for k in doubling.keys():\n print(k)\n\n%matplotlib inline\nfrom matplotlib import pyplot as plt\nimport numpy as np\n\nt = np.arange(0,18*60+1,1)\nsome_dict = {\"A\":20.0,\"B\":25.0,\"C\":39.0,\"D\":53.0}\nfor k in some_dict.keys():\n plt.plot(t,num_bacteria(t,some_dict[k]),\".\")\n \nplt.xlabel(\"time (min)\")\nplt.ylabel(\"number of bacteria\")\nplt.yscale(\"log\")" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
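The Riemann-sum exercise in the homework key above hard-codes both dx and the integrand; the same midpoint trick generalizes to any function. A small sketch under my own naming (`midpoint_riemann` is not part of the key):

```python
def midpoint_riemann(f, a, b, n):
    # Midpoint Riemann sum: sample f at the centre of each of the n
    # equal slices of [a, b], matching the key's "higher accuracy" cell.
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx
```

For the homework's integrand, `midpoint_riemann(lambda x: x * x, -5, 5, 1000)` comes out very close to the exact value 250/3.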
ES-DOC/esdoc-jupyterhub
notebooks/awi/cmip6/models/awi-cm-1-0-hr/seaice.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Seaice\nMIP Era: CMIP6\nInstitute: AWI\nSource ID: AWI-CM-1-0-HR\nTopic: Seaice\nSub-Topics: Dynamics, Thermodynamics, Radiative Processes. \nProperties: 80 (63 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:53:37\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'awi', 'awi-cm-1-0-hr', 'seaice')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties --&gt; Model\n2. Key Properties --&gt; Variables\n3. Key Properties --&gt; Seawater Properties\n4. Key Properties --&gt; Resolution\n5. Key Properties --&gt; Tuning Applied\n6. Key Properties --&gt; Key Parameter Values\n7. Key Properties --&gt; Assumptions\n8. Key Properties --&gt; Conservation\n9. Grid --&gt; Discretisation --&gt; Horizontal\n10. Grid --&gt; Discretisation --&gt; Vertical\n11. Grid --&gt; Seaice Categories\n12. Grid --&gt; Snow On Seaice\n13. Dynamics\n14. Thermodynamics --&gt; Energy\n15. Thermodynamics --&gt; Mass\n16. Thermodynamics --&gt; Salt\n17. Thermodynamics --&gt; Salt --&gt; Mass Transport\n18. Thermodynamics --&gt; Salt --&gt; Thermodynamics\n19. Thermodynamics --&gt; Ice Thickness Distribution\n20. Thermodynamics --&gt; Ice Floe Size Distribution\n21. Thermodynamics --&gt; Melt Ponds\n22. Thermodynamics --&gt; Snow Processes\n23. Radiative Processes \n1. 
Key Properties --&gt; Model\nName of seaice model used.\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of sea ice model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.model.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.model.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Variables\nList of prognostic variable in the sea ice model.\n2.1. Prognostic\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of prognostic variables in the sea ice component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.variables.prognostic') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sea ice temperature\" \n# \"Sea ice concentration\" \n# \"Sea ice thickness\" \n# \"Sea ice volume per grid cell area\" \n# \"Sea ice u-velocity\" \n# \"Sea ice v-velocity\" \n# \"Sea ice enthalpy\" \n# \"Internal ice stress\" \n# \"Salinity\" \n# \"Snow temperature\" \n# \"Snow depth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Seawater Properties\nProperties of seawater relevant to sea ice\n3.1. Ocean Freezing Point\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEquation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"TEOS-10\" \n# \"Constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "3.2. Ocean Freezing Point Value\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf using a constant seawater freezing point, specify this value.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Resolution\nResolution of the sea ice grid\n4.1. Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. Canonical Horizontal Resolution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nExpression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.3. Number Of Horizontal Gridpoints\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "5. Key Properties --&gt; Tuning Applied\nTuning applied to sea ice model component\n5.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.2. Target\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.target') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.3. Simulations\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\n*Which simulations had tuning applied, e.g. all, not historical, only pi-control? *", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.4. 
Metrics Used\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList any observed metrics used in tuning model/parameters", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.5. Variables\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nWhich variables were changed during the tuning process?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. Key Properties --&gt; Key Parameter Values\nValues of key parameters\n6.1. Typical Parameters\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nWhat values were specified for the following parameters if used?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Ice strength (P*) in units of N m{-2}\" \n# \"Snow conductivity (ks) in units of W m{-1} K{-1} \" \n# \"Minimum thickness of ice created in leads (h0) in units of m\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.2. Additional Parameters\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf you have any additional parameterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.
Key Properties --&gt; Assumptions\nAssumptions made in the sea ice model\n7.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral overview description of any key assumptions made in this model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.assumptions.description') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. On Diagnostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nNote any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.3. Missing Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8. Key Properties --&gt; Conservation\nConservation in the sea ice component\n8.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nProvide a general description of conservation methodology.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. 
Properties\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nProperties conserved in sea ice by the numerical schemes.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.properties') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Energy\" \n# \"Mass\" \n# \"Salt\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.3. Budget\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nFor each conserved property, specify the output variables which close the related budgets, as a comma separated list. For example: Conserved property, variable1, variable2, variable3", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.budget') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.4. Was Flux Correction Used\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes conservation involve flux correction?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "8.5. Corrected Conserved Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList any variables which are conserved by more than the numerical scheme alone.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Grid --&gt; Discretisation --&gt; Horizontal\nSea ice discretisation in the horizontal\n9.1.
Grid\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGrid on which sea ice is horizontally discretised?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Ocean grid\" \n# \"Atmosphere Grid\" \n# \"Own Grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9.2. Grid Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the type of sea ice grid?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Structured grid\" \n# \"Unstructured grid\" \n# \"Adaptive grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9.3. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the advection scheme?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Finite differences\" \n# \"Finite elements\" \n# \"Finite volumes\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9.4. Thermodynamics Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the time step in the sea ice model thermodynamic component in seconds.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "9.5.
Dynamics Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the time step in the sea ice model dynamic component in seconds.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "9.6. Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify any additional horizontal discretisation details.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Grid --&gt; Discretisation --&gt; Vertical\nSea ice vertical properties\n10.1. Layering\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhat type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Zero-layer\" \n# \"Two-layers\" \n# \"Multi-layers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.2. Number Of Layers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIf using multi-layers specify how many.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "10.3. Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify any additional vertical grid details.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11. Grid --&gt; Seaice Categories\nWhat method is used to represent sea ice categories?\n11.1. Has Multiple Categories\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSet to true if the sea ice model has multiple sea ice categories.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "11.2. Number Of Categories\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIf using sea ice categories specify how many.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11.3. Category Limits\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIf using sea ice categories specify each of the category limits.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.4. Ice Thickness Distribution Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the sea ice thickness distribution scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.5. 
Other\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf the sea ice model does not use sea ice categories, specify any additional details. For example, models that parameterise the ice thickness distribution ITD (i.e. there is no explicit ITD) but there is an assumed distribution and fluxes are computed accordingly.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.other') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12. Grid --&gt; Snow On Seaice\nSnow on sea ice details\n12.1. Has Snow On Ice\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs snow on ice represented in this model?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "12.2. Number Of Snow Levels\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of vertical levels of snow on ice?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "12.3. Snow Fraction\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how the snow fraction on sea ice is determined.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12.4. Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify any additional details related to snow on ice.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13. Dynamics\nSea Ice Dynamics\n13.1. Horizontal Transport\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the method of horizontal advection of sea ice?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.horizontal_transport') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Incremental Re-mapping\" \n# \"Prather\" \n# \"Eulerian\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.2. Transport In Thickness Space\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the method of sea ice transport in thickness space (i.e. in thickness categories)?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Incremental Re-mapping\" \n# \"Prather\" \n# \"Eulerian\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.3. Ice Strength Formulation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhich method of sea ice strength formulation is used?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Hibler 1979\" \n# \"Rothrock 1975\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.4. Redistribution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhich processes can redistribute sea ice (including thickness)?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.dynamics.redistribution') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Rafting\" \n# \"Ridging\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.5. Rheology\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nRheology, what is the ice deformation formulation?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.rheology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Free-drift\" \n# \"Mohr-Coloumb\" \n# \"Visco-plastic\" \n# \"Elastic-visco-plastic\" \n# \"Elastic-anisotropic-plastic\" \n# \"Granular\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14. Thermodynamics --&gt; Energy\nProcesses related to energy in sea ice thermodynamics\n14.1. Enthalpy Formulation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the energy formulation?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Pure ice latent heat (Semtner 0-layer)\" \n# \"Pure ice latent and sensible heat\" \n# \"Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)\" \n# \"Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.2. Thermal Conductivity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat type of thermal conductivity is used?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Pure ice\" \n# \"Saline ice\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.3. Heat Diffusion\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the method of heat diffusion?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Conduction fluxes\" \n# \"Conduction and radiation heat fluxes\" \n# \"Conduction, radiation and latent heat transport\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.4. Basal Heat Flux\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod by which basal ocean heat flux is handled?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Heat Reservoir\" \n# \"Thermal Fixed Salinity\" \n# \"Thermal Varying Salinity\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.5. Fixed Salinity Value\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "14.6. 
Heat Content Of Precipitation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method by which the heat content of precipitation is handled.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.7. Precipitation Effects On Salinity\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15. Thermodynamics --&gt; Mass\nProcesses related to mass in sea ice thermodynamics\n15.1. New Ice Formation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method by which new sea ice is formed in open water.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.2. Ice Vertical Growth And Melt\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method that governs the vertical growth and melt of sea ice.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.3. Ice Lateral Melting\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the method of sea ice lateral melting?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Floe-size dependent (Bitz et al 2001)\" \n# \"Virtual thin ice melting (for single-category)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.4. Ice Surface Sublimation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method that governs sea ice surface sublimation.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.5. Frazil Ice\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method of frazil ice formation.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "16. Thermodynamics --&gt; Salt\nProcesses related to salt in sea ice thermodynamics.\n16.1. Has Multiple Sea Ice Salinities\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "16.2. Sea Ice Salinity Thermal Impacts\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes sea ice salinity impact the thermal properties of sea ice?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "17. Thermodynamics --&gt; Salt --&gt; Mass Transport\nMass transport of salt\n17.1. Salinity Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow is salinity determined in the mass transport of salt calculation?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Prescribed salinity profile\" \n# \"Prognostic salinity profile\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.2. Constant Salinity Value\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf using a constant salinity value specify this value in PSU?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "17.3. Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the salinity profile used.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18. Thermodynamics --&gt; Salt --&gt; Thermodynamics\nSalt thermodynamics\n18.1. Salinity Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow is salinity determined in the thermodynamic calculation?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Prescribed salinity profile\" \n# \"Prognostic salinity profile\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18.2. Constant Salinity Value\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf using a constant salinity value specify this value in PSU?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "18.3. Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the salinity profile used.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "19. Thermodynamics --&gt; Ice Thickness Distribution\nIce thickness distribution details.\n19.1. Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow is the sea ice thickness distribution represented?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Virtual (enhancement of thermal conductivity, thin ice melting)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20. Thermodynamics --&gt; Ice Floe Size Distribution\nIce floe-size distribution details.\n20.1. 
Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow is the sea ice floe-size represented?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Parameterised\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20.2. Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nPlease provide further details on any parameterisation of floe-size.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "21. Thermodynamics --&gt; Melt Ponds\nCharacteristics of melt ponds.\n21.1. Are Included\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre melt ponds included in the sea ice model?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "21.2. Formulation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat method of melt pond formulation is used?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Flocco and Feltham (2010)\" \n# \"Level-ice melt ponds\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "21.3. 
Impacts\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhat do melt ponds have an impact on?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Albedo\" \n# \"Freshwater\" \n# \"Heat\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22. Thermodynamics --&gt; Snow Processes\nThermodynamic processes in snow on sea ice\n22.1. Has Snow Aging\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSet to True if the sea ice model has a snow aging scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "22.2. Snow Aging Scheme\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the snow aging scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.3. Has Snow Ice Formation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSet to True if the sea ice model has snow ice formation.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "22.4. Snow Ice Formation Scheme\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the snow ice formation scheme.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.5. Redistribution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the impact of ridging on snow cover?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.6. Heat Diffusion\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the heat diffusion through snow methodology in sea ice thermodynamics?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Single-layered heat diffusion\" \n# \"Multi-layered heat diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23. Radiative Processes\nSea Ice Radiative Processes\n23.1. Surface Albedo\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod used to handle surface albedo.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.radiative_processes.surface_albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Delta-Eddington\" \n# \"Parameterized\" \n# \"Multi-band albedo\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.2. Ice Radiation Transmission\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nMethod by which solar radiation through sea ice is handled.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Delta-Eddington\" \n# \"Exponential attenuation\" \n# \"Ice radiation transmission per category\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
nick-youngblut/SIPSim
ipynb/bac_genome/n1147/Meselson_diff/validation.ipynb
mit
[ "Goal\n\nA basic, full run of the SIPSim pipeline with the whole bacterial genome dataset to see:\nIs the pipeline functional?\nCheck the output at each stage of the pipeline\nNote: using diffusion method from Meselson et al., 1957\n\nSetting variables", "workDir = '/home/nick/notebook/SIPSim/dev/bac_genome1147/Meselson_diff/validation/'\ngenomeDir = '/var/seq_data/ncbi_db/genome/Jan2016/bac_complete_spec-rep1_rn/'\nR_dir = '/home/nick/notebook/SIPSim/lib/R/'\n#figureDir = '/home/nick/notebook/SIPSim/figures/bac_genome_n1147/'\n\nbandwidth = 0.8\nDBL_scaling = 0.5\nsubsample_dist = 'lognormal'\nsubsample_mean = 9.432\nsubsample_scale = 0.5\nsubsample_min = 10000\nsubsample_max = 30000", "Init", "import glob\nfrom os.path import abspath\nimport nestly\nfrom IPython.display import Image\nimport os\n%load_ext rpy2.ipython\n%load_ext pushnote\n\n%%R\nlibrary(ggplot2)\nlibrary(dplyr)\nlibrary(tidyr)\nlibrary(gridExtra)\n\nif not os.path.isdir(workDir):\n os.makedirs(workDir)\n \n# figureDir is commented out above, so skip creating it\n#if not os.path.isdir(figureDir):\n# os.makedirs(figureDir) \n \n%cd $workDir \n\n# Determining min/max BD range \n## min G+C cutoff\nmin_GC = 13.5\n## max G+C cutoff\nmax_GC = 80\n## max G+C shift\nmax_13C_shift_in_BD = 0.036\n\n\nmin_range_BD = min_GC/100.0 * 0.098 + 1.66 \nmax_range_BD = max_GC/100.0 * 0.098 + 1.66 \n\nmax_range_BD = max_range_BD + max_13C_shift_in_BD\n\nprint 'Min BD: {}'.format(min_range_BD)\nprint 'Max BD: {}'.format(max_range_BD)", "Creating a community file\n\n2 communities\ncontrol vs treatment", "!SIPSim communities \\\n $genomeDir/genome_index.txt \\\n --n_comm 2 \\\n > comm.txt", "Plotting community rank abundances", "%%R -w 750 -h 300\n\n\ntbl = read.delim('comm.txt', sep='\\t')\n\ntbl$library = as.character(tbl$library)\ntbl$library = ifelse(tbl$library == 1, 'Control', 'Treatment')\n\nggplot(tbl, aes(rank, rel_abund_perc, color=library, group=taxon_name)) +\n geom_point() +\n scale_y_log10() +\n scale_color_discrete('Community') +\n labs(x='Rank', y='Relative abundance 
(%)') +\n theme_bw() +\n theme(\n text=element_text(size=16)\n )", "Simulating gradient fractions", "!SIPSim gradient_fractions \\\n --BD_min $min_range_BD \\\n --BD_max $max_range_BD \\\n comm.txt \\\n > fracs.txt ", "Plotting fractions", "%%R -w 600 -h 300\n\ntbl = read.delim('fracs.txt', sep='\\t')\n\nggplot(tbl, aes(fraction, fraction_size)) +\n geom_bar(stat='identity') +\n facet_grid(library ~ .) +\n labs(y='fraction size') +\n theme_bw() +\n theme(\n text=element_text(size=16)\n )\n\n%%R -w 300 -h 250\ntbl$library = as.character(tbl$library)\n\nggplot(tbl, aes(library, fraction_size)) +\n geom_boxplot() +\n labs(y='fraction size') +\n theme_bw() +\n theme(\n text=element_text(size=16)\n )", "Simulating fragments", "# estimated coverage\nmean_frag_size = 9000.0\nmean_amp_len = 300.0\nn_frags = 10000\n\ncoverage = round(n_frags * mean_amp_len / mean_frag_size, 1)\nmsg = 'Average coverage from simulating {} fragments: {}X'\nprint msg.format(n_frags, coverage)\n\n!SIPSim fragments \\\n $genomeDir/genome_index.txt \\\n --fp $genomeDir \\\n --fr ../../../515F-806R.fna \\\n --fld skewed-normal,9000,2500,-5 \\\n --flr None,None \\\n --nf 10000 \\\n --np 24 \\\n 2> ampFrags.log \\\n > ampFrags.pkl ", "Number of amplicons per taxon", "!grep \"Number of amplicons: \" ampFrags.log | \\\n perl -pe 's/.+ +//' | hist\n\n!printf \"Number of taxa with >=1 amplicon: \"\n!grep \"Number of amplicons: \" ampFrags.log | \\\n perl -ne \"s/^.+ +//; print unless /^0$/\" | wc -l", "Converting fragments to kde object", "!SIPSim fragment_KDE \\\n ampFrags.pkl \\\n > ampFrags_kde.pkl", "Checking ampfrag info", "!SIPSim KDE_info \\\n -s ampFrags_kde.pkl \\\n > ampFrags_kde_info.txt\n\n%%R \n# loading\ndf = read.delim('ampFrags_kde_info.txt', sep='\\t')\ndf.kde1 = df %>%\n filter(KDE_ID == 1)\ndf.kde1 %>% head(n=3)\n\nBD_GC50 = 0.098 * 0.5 + 1.66\n\n%%R -w 500 -h 250\n# plotting\np.amp = ggplot(df.kde1, aes(median)) +\n geom_histogram(binwidth=0.001) +\n geom_vline(xintercept=BD_GC50, 
linetype='dashed', color='red', alpha=0.7) +\n labs(x='Median buoyant density') +\n theme_bw() +\n theme(\n text = element_text(size=16)\n )\np.amp", "Adding diffusion", "!SIPSim diffusion \\\n --bw $bandwidth \\\n --np 24 \\\n -m Meselson \\\n ampFrags_kde.pkl \\\n > ampFrags_kde_dif.pkl \\\n 2> ampFrags_kde_dif.log", "Adding DBL 'contamination'\n\nDBL = diffusive boundary layer", "!SIPSim DBL \\\n --comm comm.txt \\\n --commx $DBL_scaling \\\n --np 24 \\\n ampFrags_kde_dif.pkl \\\n > ampFrags_kde_dif_DBL.pkl \\\n 2> ampFrags_kde_dif_DBL.log\n \n# checking output \n!tail -n 5 ampFrags_kde_dif_DBL.log", "Comparing DBL+diffusion to diffusion", "# none\n!SIPSim KDE_info \\\n -s ampFrags_kde.pkl \\\n > ampFrags_kde_info.txt\n \n# diffusion\n!SIPSim KDE_info \\\n -s ampFrags_kde_dif.pkl \\\n > ampFrags_kde_dif_info.txt\n \n# diffusion + DBL \n!SIPSim KDE_info \\\n -s ampFrags_kde_dif_DBL.pkl \\\n > ampFrags_kde_dif_DBL_info.txt\n\n%%R \n\ninFile = 'ampFrags_kde_info.txt'\ndf.raw = read.delim(inFile, sep='\\t')\ndf.raw$stage = 'raw'\n\ninFile = 'ampFrags_kde_dif_info.txt'\ndf.dif = read.delim(inFile, sep='\\t')\ndf.dif$stage = 'diffusion'\n\ninFile = 'ampFrags_kde_dif_DBL_info.txt'\ndf.DBL = read.delim(inFile, sep='\\t')\ndf.DBL$stage = 'diffusion +\\nDBL'\n\ndf = rbind(df.raw, df.dif, df.DBL)\ndf.dif = ''\ndf.DBL = ''\ndf %>% head(n=3)\n\n%%R -w 350 -h 300\n\ndf$stage = factor(df$stage, levels=c('raw', 'diffusion', 'diffusion +\\nDBL'))\n\nggplot(df, aes(stage)) +\n geom_boxplot(aes(y=min), color='red') +\n geom_boxplot(aes(y=median), color='darkgreen') +\n geom_boxplot(aes(y=max), color='blue') +\n scale_y_continuous(limits=c(1.3, 2)) +\n labs(y = 'Buoyant density (g ml^-1)') +\n theme_bw() +\n theme(\n text = element_text(size=16),\n axis.title.x = element_blank()\n )", "Making an incorp config file\n\n10% of taxa with 100% atom excess 13C", "!SIPSim incorpConfigExample \\\n --percTaxa 10 \\\n --percIncorpUnif 100 \\\n > PT10_PI100.config\n \n# checking output\n!head 
PT10_PI100.config", "Adding isotope incorporation to BD distribution", "!SIPSim isotope_incorp \\\n --comm comm.txt \\\n --np 24 \\\n --shift ampFrags_BD-shift.txt \\\n ampFrags_kde_dif_DBL.pkl \\\n PT10_PI100.config \\\n > ampFrags_kde_dif_DBL_incorp.pkl \\\n 2> ampFrags_kde_dif_DBL_incorp.log\n \n# checking log\n!tail -n 5 ampFrags_kde_dif_DBL_incorp.log", "Plotting stats on BD shift from isotope incorporation", "%%R\ninFile = 'ampFrags_BD-shift.txt'\ndf = read.delim(inFile, sep='\\t') %>%\n mutate(library = library %>% as.character)\n\n%%R -h 275 -w 375\n\ninFile = 'ampFrags_BD-shift.txt'\ndf = read.delim(inFile, sep='\\t') %>%\n mutate(library = library %>% as.character)\n\ndf.s = df %>% \n mutate(incorporator = ifelse(min > 0.001, TRUE, FALSE),\n incorporator = ifelse(is.na(incorporator), 'NA', incorporator),\n library = ifelse(library == '1', 'control', 'treatment')) %>%\n group_by(library, incorporator) %>%\n summarize(n_incorps = n())\n\n# summary of number of incorporators\ndf.s %>%\n filter(library == 'treatment') %>%\n mutate(n_incorps / sum(n_incorps)) %>% \n as.data.frame %>% print\n\n# plotting\nggplot(df.s, aes(library, n_incorps, fill=incorporator)) +\n geom_bar(stat='identity') +\n labs(y = 'Count', title='Number of incorporators\\n(according to BD shift)') +\n theme_bw() +\n theme(\n text = element_text(size=16)\n )", "Simulating an OTU table", "!SIPSim OTU_table \\\n --abs 1e9 \\\n --np 20 \\\n ampFrags_kde_dif_DBL_incorp.pkl \\\n comm.txt \\\n fracs.txt \\\n > OTU_n2_abs1e9.txt \\\n 2> OTU_n2_abs1e9.log \n \n# checking log\n!tail -n 5 OTU_n2_abs1e9.log ", "Plotting taxon abundances", "%%R\n## BD for G+C of 0 or 100\nBD.GCp0 = 0 * 0.098 + 1.66\nBD.GCp50 = 0.5 * 0.098 + 1.66\nBD.GCp100 = 1 * 0.098 + 1.66\n\n%%R -w 700 -h 350\n# plotting absolute abundances\n\n# loading file\ndf = read.delim('OTU_n2_abs1e9.txt', sep='\\t') \n\ndf.s = df %>%\n group_by(library, BD_mid) %>%\n summarize(total_count = sum(count)) \n\n## plot\np = ggplot(df.s, 
aes(BD_mid, total_count)) +\n #geom_point() +\n geom_area(stat='identity', alpha=0.3, position='dodge') +\n geom_vline(xintercept=c(BD.GCp50), linetype='dashed', alpha=0.5) +\n labs(x='Buoyant density', y='Total abundance') +\n facet_grid(library ~ .) +\n theme_bw() +\n theme( \n text = element_text(size=16) \n )\np\n\n%%R -w 700 -h 350\n# plotting number of taxa at each BD\n\ndf.nt = df %>%\n filter(count > 0) %>%\n group_by(library, BD_mid) %>%\n summarize(n_taxa = n())\n\n## plot\np = ggplot(df.nt, aes(BD_mid, n_taxa)) +\n #geom_point() +\n geom_area(stat='identity', alpha=0.3, position='dodge') +\n #geom_histogram(stat='identity') +\n geom_vline(xintercept=c(BD.GCp50), linetype='dashed', alpha=0.5) +\n labs(x='Buoyant density', y='Number of taxa') +\n facet_grid(library ~ .) +\n theme_bw() +\n theme( \n text = element_text(size=16),\n legend.position = 'none'\n )\np\n\n%%R -w 700 -h 350\n# plotting relative abundances\n\n## plot\np = ggplot(df, aes(BD_mid, count, fill=taxon)) +\n geom_vline(xintercept=c(BD.GCp50), linetype='dashed', alpha=0.5) +\n labs(x='Buoyant density', y='Absolute abundance') +\n facet_grid(library ~ .) 
+\n theme_bw() +\n theme( \n text = element_text(size=16),\n legend.position = 'none'\n )\np + geom_area(stat='identity', position='dodge', alpha=0.5)\n\n%%R -w 700 -h 350\np + \n geom_area(stat='identity', position='fill') +\n labs(x='Buoyant density', y='Relative abundance')", "Simulating PCR bias", "!SIPSim OTU_PCR \\\n OTU_n2_abs1e9.txt \\\n --debug \\\n > OTU_n2_abs1e9_PCR.txt", "Plotting change in relative abundances", "%%R -w 800 -h 300\n# loading file\nF = 'OTU_n2_abs1e9_PCR.txt'\ndf.SIM = read.delim(F, sep='\\t') %>%\n mutate(molarity_increase = final_molarity / init_molarity * 100)\n\np1 = ggplot(df.SIM, aes(init_molarity, final_molarity)) +\n geom_point(shape='O', alpha=0.5) +\n labs(x='Initial molarity', y='Final molarity') +\n theme_bw() +\n theme(\n text = element_text(size=16)\n )\n\np2 = ggplot(df.SIM, aes(init_molarity, molarity_increase)) +\n geom_point(shape='O', alpha=0.5) +\n scale_y_log10() +\n labs(x='Initial molarity', y='% increase in molarity') +\n theme_bw() +\n theme(\n text = element_text(size=16)\n )\n\ngrid.arrange(p1, p2, ncol=2)\n\n%%R -w 800 -h 450\n# plotting rank abundances\n\ndf.SIM = df.SIM %>%\n group_by(library, fraction) %>%\n mutate(rel_init_molarity = init_molarity / sum(init_molarity),\n rel_final_molarity = final_molarity / sum(final_molarity),\n init_molarity_rank = row_number(rel_init_molarity),\n final_molarity_rank = row_number(rel_final_molarity)) %>%\n ungroup() \n \n\np1 = ggplot(df.SIM, aes(init_molarity_rank, rel_init_molarity, color=BD_mid, group=BD_mid)) +\n geom_line(alpha=0.5) +\n scale_y_log10(limits=c(1e-7, 0.1)) +\n scale_x_reverse() +\n scale_color_gradient('Buoyant\\ndensity') +\n labs(x='Rank', y='Relative abundance', title='pre-PCR') +\n theme_bw() +\n theme(\n text = element_text(size=16)\n )\n\np2 = ggplot(df.SIM, aes(final_molarity_rank, rel_final_molarity, color=BD_mid, group=BD_mid)) +\n geom_line(alpha=0.5) +\n scale_y_log10(limits=c(1e-7, 0.1)) +\n scale_x_reverse() +\n 
scale_color_gradient('Buoyant\\ndensity') +\n labs(x='Rank', y='Relative abundance', title='post-PCR') +\n theme_bw() +\n theme(\n text = element_text(size=16)\n )\n\n\ngrid.arrange(p1, p2, ncol=1)", "Notes\n\nThe PCR raises the relative abundances most for low-abundance taxa\nResults in a more even rank-abundance distribution", "# PCR w/out --debug\n!SIPSim OTU_PCR \\\n OTU_n2_abs1e9.txt \\\n > OTU_n2_abs1e9_PCR.txt", "Subsampling from the OTU table\n\nsimulating sequencing of the DNA pool", "!SIPSim OTU_subsample \\\n --dist $subsample_dist \\\n --dist_params mean:$subsample_mean,sigma:$subsample_scale \\\n --min_size $subsample_min \\\n --max_size $subsample_max \\\n OTU_n2_abs1e9_PCR.txt \\\n > OTU_n2_abs1e9_PCR_subNorm.txt", "Plotting seq count distribution", "%%R -w 300 -h 250\n\ndf = read.csv('OTU_n2_abs1e9_PCR_subNorm.txt', sep='\\t')\n\ndf.s = df %>% \n group_by(library, fraction) %>%\n summarize(total_count = sum(count)) %>%\n ungroup() %>%\n mutate(library = as.character(library))\n\nggplot(df.s, aes(library, total_count)) +\n geom_boxplot() +\n labs(y='Number of sequences\\nper fraction') +\n theme_bw() +\n theme(\n text = element_text(size=16)\n )", "Plotting abundance distributions (paper figure)", "%%R \n\n# loading file\ndf.abs = read.delim('OTU_n2_abs1e9.txt', sep='\\t')\ndf.sub = read.delim('OTU_n2_abs1e9_PCR_subNorm.txt', sep='\\t')\n\nlib.reval = c('1' = 'control',\n '2' = 'treatment')\n\ndf.abs = mutate(df.abs, library = plyr::revalue(as.character(library), lib.reval))\ndf.sub = mutate(df.sub, library = plyr::revalue(as.character(library), lib.reval))\n\n%%R -w 700 -h 800\n# plotting absolute abundances\n## plot\np = ggplot(df.abs, aes(BD_mid, count, fill=taxon)) +\n geom_vline(xintercept=c(BD.GCp50), linetype='dashed', alpha=0.5) +\n labs(x='Buoyant density') +\n facet_grid(library ~ .) 
+\n theme_bw() +\n theme( \n text = element_text(size=16),\n axis.title.y = element_text(vjust=1), \n axis.title.x = element_blank(),\n legend.position = 'none',\n plot.margin=unit(c(1,1,0.1,1), \"cm\")\n )\np1 = p + geom_area(stat='identity', position='dodge', alpha=0.5) +\n labs(y='Total community\\n(absolute abundance)')\n\n# plotting absolute abundances of subsampled\n## plot\np = ggplot(df.sub, aes(BD_mid, count, fill=taxon)) +\n geom_vline(xintercept=c(BD.GCp50), linetype='dashed', alpha=0.5) +\n labs(x='Buoyant density') +\n facet_grid(library ~ .) +\n theme_bw() +\n theme( \n text = element_text(size=16),\n legend.position = 'none'\n )\np2 = p + geom_area(stat='identity', position='dodge', alpha=0.5) +\n labs(y='Subsampled community\\n(absolute abundance)') +\n theme(\n axis.title.y = element_text(vjust=1), \n axis.title.x = element_blank(),\n plot.margin=unit(c(0.1,1,0.1,1), \"cm\")\n )\n\n# plotting relative abundances of subsampled\np3 = p + geom_area(stat='identity', position='fill') +\n geom_vline(xintercept=c(BD.GCp50), linetype='dashed', alpha=0.5) +\n labs(y='Subsampled community\\n(relative abundance)') +\n theme(\n axis.title.y = element_text(vjust=1),\n plot.margin=unit(c(0.1,1,1,1.35), \"cm\")\n )\n\n# combining plots\ngrid.arrange(p1, p2, p3, ncol=1)", "Making a wide OTU table", "!SIPSim OTU_wideLong -w \\\n OTU_n2_abs1e9_PCR_subNorm.txt \\\n > OTU_n2_abs1e9_PCR_subNorm_w.txt", "Making metadata (phyloseq: sample_data)", "!SIPSim OTU_sampleData \\\n OTU_n2_abs1e9_PCR_subNorm.txt \\\n > OTU_n2_abs1e9_PCR_subNorm_meta.txt", "Community analysis\nPhyloseq", "# making phyloseq object from OTU table\n!SIPSimR phyloseq_make \\\n OTU_n2_abs1e9_PCR_subNorm_w.txt \\\n -s OTU_n2_abs1e9_PCR_subNorm_meta.txt \\\n > OTU_n2_abs1e9_PCR_subNorm.physeq\n\n## making ordination\n!SIPSimR phyloseq_ordination \\\n OTU_n2_abs1e9_PCR_subNorm.physeq \\\n OTU_n2_abs1e9_PCR_subNorm_bray-NMDS.pdf \n\n## filtering phyloseq object to just taxa/samples of interest (eg., 
BD-min/max)\n!SIPSimR phyloseq_edit \\\n OTU_n2_abs1e9_PCR_subNorm.physeq \\\n --BD_min 1.71 --BD_max 1.75 --occur 0.25 \\\n > OTU_n2_abs1e9_PCR_subNorm_filt.physeq\n\n## making ordination\n!SIPSimR phyloseq_ordination \\\n OTU_n2_abs1e9_PCR_subNorm_filt.physeq \\\n OTU_n2_abs1e9_PCR_subNorm_filt_bray-NMDS.pdf\n \n# making png figures\n!convert OTU_n2_abs1e9_PCR_subNorm_bray-NMDS.pdf OTU_n2_abs1e9_PCR_subNorm_bray-NMDS.png\n!convert OTU_n2_abs1e9_PCR_subNorm_filt_bray-NMDS.pdf OTU_n2_abs1e9_PCR_subNorm_filt_bray-NMDS.png \n\nImage(filename='OTU_n2_abs1e9_PCR_subNorm_bray-NMDS.png') \n\nImage(filename='OTU_n2_abs1e9_PCR_subNorm_filt_bray-NMDS.png')", "DESeq2", "## DESeq2\n!SIPSimR phyloseq_DESeq2 \\\n --log2 0.25 \\\n --hypo greater \\\n OTU_n2_abs1e9_PCR_subNorm_filt.physeq \\\n > OTU_n2_abs1e9_PCR_subNorm_DESeq2\n \n## Confusion matrix\n!SIPSimR DESeq2_confuseMtx \\\n --padj 0.1 \\\n ampFrags_BD-shift.txt \\\n OTU_n2_abs1e9_PCR_subNorm_DESeq2\n\n%%R -w 500 -h 250\n\nbyClass = read.delim('DESeq2-cMtx_byClass.txt', sep='\\t') %>%\n filter(library == 2) \n\nggplot(byClass, aes(variables, values)) +\n geom_bar(stat='identity') +\n labs(y='Value') +\n theme_bw() +\n theme(\n text = element_text(size=16),\n axis.title.x = element_blank(),\n axis.text.x = element_text(angle=45, hjust=1)\n )", "Plotting results of DESeq2", "%%R\n\nclsfy = function(guess,known){\n if(is.na(guess) | is.na(known)){\n return(NA)\n }\n if(guess == TRUE){\n if(guess == known){\n return('True positive')\n } else {\n return('False positive')\n }\n } else\n if(guess == FALSE){\n if(guess == known){\n return('True negative')\n } else {\n return('False negative')\n }\n } else {\n stop('Error: true or false needed')\n }\n }\n\n%%R \n\ndf = read.delim('DESeq2-cMtx_data.txt', sep='\\t')\n\ndf = df %>%\n filter(! 
is.na(log2FoldChange), library == 2) %>%\n mutate(taxon = reorder(taxon, -log2FoldChange),\n cls = mapply(clsfy, incorp.pred, incorp.known))\n\ndf %>% head(n=3)\n\n%%R -w 800 -h 350\n\ndf.TN = df %>% filter(cls == 'True negative')\ndf.TP = df %>% filter(cls == 'True positive')\ndf.FP = df %>% filter(cls == 'False positive')\n\nggplot(df, aes(taxon, log2FoldChange, color=cls, \n ymin=log2FoldChange - lfcSE, ymax=log2FoldChange + lfcSE)) +\n geom_pointrange(size=0.4, alpha=0.5) +\n geom_pointrange(data=df.TP, size=0.4, alpha=0.3) +\n geom_pointrange(data=df.FP, size=0.4, alpha=0.3) +\n labs(x = 'Taxon', y = 'Log2 fold change') +\n theme_bw() +\n theme(\n text = element_text(size=16),\n panel.grid.major.x = element_blank(),\n panel.grid.minor.x = element_blank(), \n legend.title=element_blank(),\n axis.text.x = element_blank(),\n legend.position = 'bottom'\n )", "Notes:\n\nRed circles = true positives\n\nFalse positives should increase with taxon GC \n\nHigher GC moves 100% incorporators too far to the right in the gradient for the 'heavy' BD range of 1.71-1.75\nLines indicate standard errors.\n\nsensitivity ~ pre-frac relative_abundance\n\n\nEnrichment of TP for abundant incorporators?\n\n\nWhat is the abundance distribution of TP and FP?\n\nAre more abundant incorporators being detected more than low-abundance taxa?", "%%R\ndf.ds = read.delim('DESeq2-cMtx_data.txt', sep='\\t') \ndf.comm = read.delim('comm.txt', sep='\\t')\n\ndf.j = inner_join(df.ds, df.comm, c('taxon' = 'taxon_name',\n 'library' = 'library'))\n\ndf.ds = df.comm = NULL\ndf.j %>% head(n=3)\n\n%%R -h 500 -w 600\n\ndf.j.f = df.j %>%\n filter(! 
is.na(log2FoldChange),\n library == 2) %>%\n mutate(cls = mapply(clsfy, incorp.pred, incorp.known)) \n\ny.lab = 'Pre-fractionation\\nabundance (%)'\np1 = ggplot(df.j.f, aes(padj, rel_abund_perc, color=cls)) +\n geom_point(alpha=0.7) +\n scale_y_log10() +\n labs(x='P-value (adjusted)', y=y.lab) +\n theme_bw() +\n theme(\n text = element_text(size=16),\n legend.position = 'bottom'\n )\n\np2 = ggplot(df.j.f, aes(cls, rel_abund_perc)) +\n geom_boxplot() +\n scale_y_log10() +\n labs(y=y.lab) +\n theme_bw() +\n theme(\n text = element_text(size=16)\n )\n\ngrid.arrange(p1, p2, ncol=1)\n\n%%R -h 300\n# plotting\nggplot(df.j.f, aes(log2FoldChange, rel_abund_perc, color=cls)) +\n geom_point(alpha=0.7) +\n scale_y_log10() +\n labs(x='log2 fold change', y=y.lab) +\n theme_bw() +\n theme(\n text = element_text(size=16)\n )\n", "" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
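The DESeq2 cells in the notebook above rely on an R helper, clsfy(), to label each taxon by comparing the predicted and known incorporation calls. As a language-neutral sanity check of that truth table, here is the same logic as a small Python sketch (the function name and structure are mine, not part of the SIPSim notebook):

```python
def classify(guess, known):
    """Label a predicted call against ground truth, mirroring the R clsfy() helper."""
    if guess is None or known is None:
        return None
    if guess:  # predicted to be an incorporator
        return 'True positive' if known else 'False positive'
    return 'True negative' if not known else 'False negative'

# Walk the four possible prediction/truth combinations
for guess, known in [(True, True), (True, False), (False, False), (False, True)]:
    print(guess, known, '->', classify(guess, known))
```

Unlike the R version, which stops on non-logical input, this sketch simply propagates None for missing values, matching how the notebook treats NA.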
brean/python-pathfinding
pathfinding.ipynb
mit
[ "A simple usage example to find a path using A*.", "from pathfinding.core.diagonal_movement import DiagonalMovement\nfrom pathfinding.core.grid import Grid\nfrom pathfinding.finder.a_star import AStarFinder", "Create a map using a 2D list. Any value smaller than or equal to 0 describes an obstacle. Any number bigger than 0 describes the weight of a field that can be walked on. The bigger the number, the higher the cost to walk that field. In this example we would like the algorithm to create a path from the upper left to the bottom right. To make it not too easy for the algorithm, we added an obstacle in the middle, so it cannot use the direct way. We ignore the weights for now; all fields have the same cost. Feel free to create a more complex map.", "matrix = [\n [1, 1, 1],\n [1, 0, 1],\n [1, 1, 1]\n]", "Note: you can use negative values to describe different types of obstacles. It does not make a difference for the pathfinding algorithm, but it might be useful for your later map evaluation.\nWe create a new grid from this map representation. This will create Node instances for every element of our map. It will also set the size of the map. We assume that your map is rectangular, so the height is defined by the length of the outer list and the width by the length of the first list inside it.", "grid = Grid(matrix=matrix)\n(grid.height, grid.width)", "We get the start (top-left) and end point (bottom-right) from the map:", "start = grid.node(0, 0)\nend = grid.node(2, 2)", "Create a new instance of our finder and let it do its work. We allow diagonal movement. The find_path function not only returns the path from the start to the end point, it also returns the number of times the algorithm needed to be called until a way was found.", "finder = AStarFinder(diagonal_movement=DiagonalMovement.always)\npath, runs = finder.find_path(start, end, grid)", "That's it. We found a way. Now we can print the result (or do something else with it). 
Note that the start and end points are part of the path.", "print('operations:', runs, 'path length:', len(path))\n\nprint(grid.grid_str(path=path, start=start, end=end))", "You can also print the path as a list of x/y coordinate tuples:", "path" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
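The notebook above treats AStarFinder as a black box. To make the mechanics concrete, here is a dependency-free A* over the same 3x3 matrix — my own sketch, not part of the python-pathfinding tutorial, and restricted to 4-directional movement (so, unlike DiagonalMovement.always, the path goes around the border of the obstacle):

```python
import heapq

def astar(grid, start, goal):
    """Minimal A* on a 2D list of weights; cells <= 0 are obstacles.

    start/goal are (x, y) tuples; returns the path as a list of (x, y)
    tuples, or [] when the goal is unreachable.
    """
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_heap = [(h(start), 0, start, [start])]              # (f, g, position, path)
    best_g = {start: 0}
    while open_heap:
        _, g, (x, y), path = heapq.heappop(open_heap)
        if (x, y) == goal:
            return path
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]) and grid[ny][nx] > 0:
                ng = g + grid[ny][nx]  # the cell's weight is the step cost
                if ng < best_g.get((nx, ny), float('inf')):
                    best_g[(nx, ny)] = ng
                    heapq.heappush(open_heap, (ng + h((nx, ny)), ng, (nx, ny), path + [(nx, ny)]))
    return []

matrix = [[1, 1, 1],
          [1, 0, 1],
          [1, 1, 1]]
path = astar(matrix, (0, 0), (2, 2))
print(path)
```

With the blocked center cell, the cheapest 4-directional route takes four unit-cost steps, so the returned path holds five nodes.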
jserenson/Python_Bootcamp
Strings.ipynb
gpl-3.0
[ "Strings\nStrings are used in Python to record text information, such as a name. Strings in Python are actually a sequence, which basically means Python keeps track of every element in the string as a sequence. For example, Python understands the string 'hello' to be a sequence of letters in a specific order. This means we will be able to use indexing to grab particular letters (like the first letter, or the last letter).\nThis idea of a sequence is an important one in Python and we will touch upon it again later on.\nIn this lecture we'll learn about the following:\n1.) Creating Strings\n2.) Printing Strings\n3.) Differences in Printing in Python 2 vs 3\n4.) String Indexing and Slicing\n5.) String Properties\n6.) String Methods\n7.) Print Formatting\n\nCreating a String\nTo create a string in Python you need to use either single quotes or double quotes. For example:", "# Single word\n'hello'\n\n# Entire phrase \n'This is also a string'\n\n# We can also use double quotes\n\"String built with double quotes\"\n\n# Be careful with quotes!\n' I'm using single quotes, but will create an error'", "The reason for the error above is that the single quote in I'm stopped the string. You can use combinations of double and single quotes to get the complete statement.", "\"Now I'm ready to use the single quotes inside a string!\"", "Now let's learn about printing strings!\nPrinting a String\nUsing a Jupyter notebook with just a string in a cell will automatically output strings, but the correct way to display strings in your output is by using a print function.", "# We can simply declare a string\n'Hello World'\n\n# note that we can't output multiple strings this way\n'Hello World 1'\n'Hello World 2'", "We can use a print statement to print a string.", "print 'Hello World 1'\nprint 'Hello World 2'\nprint 'Use \\n to print a new line'\nprint '\\n'\nprint 'See what I mean?'", "<font color='red'>Python 3 Alert!</font>\nSomething to note. 
In Python 3, print is a function, not a statement. So you would print statements like this:\nprint('Hello World')\nIf you want to use this functionality in Python 2, you can import from the __future__ module. \nA word of caution: after importing this you won't be able to choose the print statement method anymore. So pick whichever one you prefer depending on your Python installation and continue on with it.", "# To use print function from Python 3 in Python 2\nfrom __future__ import print_function\n\nprint('Hello World')", "String Basics\nWe can also use a function called len() to check the length of a string!", "len('Hello World')", "String Indexing\nWe know strings are a sequence, which means Python can use indexes to call parts of the sequence. Let's learn how this works.\nIn Python, we use brackets [] after an object to call its index. We should also note that indexing starts at 0 for Python. Let's create a new object called s and then walk through a few examples of indexing.", "# Assign s as a string\ns = 'Hello World'\n\n#Check\ns\n\n# Print the object\nprint(s) ", "Let's start indexing!", "# Show first element (in this case a letter)\ns[0]\n\ns[1]\n\ns[2]", "We can use a : to perform slicing which grabs everything up to a designated point. For example:", "# Grab everything past the first term all the way to the length of s which is len(s)\ns[1:]\n\n# Note that there is no change to the original s\ns\n\n# Grab everything UP TO the 3rd index\ns[:3]", "Note the above slicing. Here we're telling Python to grab everything from 0 up to 3. It doesn't include the 3rd index. 
You'll notice this a lot in Python, where statements are usually in the context of \"up to, but not including\".", "#Everything\ns[:]", "We can also use negative indexing to go backwards.", "# Last letter (one index behind 0 so it loops back around)\ns[-1]\n\n# Grab everything but the last letter\ns[:-1]", "We can also use index and slice notation to grab elements of a sequence by a specified step size (the default is 1). For instance we can use two colons in a row and then a number specifying the frequency to grab elements. For example:", "# Grab everything, but go in step sizes of 1\ns[::1]\n\n# Grab everything, but go in step sizes of 2\ns[::2]\n\n# We can use this to print a string backwards\ns[::-1]", "String Properties\nIt's important to note that strings have an important property known as immutability. This means that once a string is created, the elements within it cannot be changed or replaced. For example:", "s\n\n# Let's try to change the first letter to 'x'\ns[0] = 'x'", "Notice how the error tells us directly what we can't do: change the item assignment!\nSomething we can do is concatenate strings!", "s\n\n# Concatenate strings!\ns + ' concatenate me!'\n\n# We can reassign s completely though!\ns = s + ' concatenate me!'\n\nprint(s)\n\ns", "We can use the multiplication symbol to create repetition!", "letter = 'z'\n\nletter*10", "Basic Built-in String methods\nObjects in Python usually have built-in methods. These methods are functions inside the object (we will learn about these in much more depth later) that can perform actions or commands on the object itself.\nWe call methods with a period and then the method name. Methods are in the form:\nobject.method(parameters)\nWhere parameters are extra arguments we can pass into the method. Don't worry if the details don't make 100% sense right now. 
Later on we will be creating our own objects and functions!\nHere are some examples of built-in methods in strings:", "s\n\n# Upper Case a string\ns.upper()\n\n# Lower case\ns.lower()\n\n# Split a string by blank space (this is the default)\ns.split()\n\n# Split by a specific element (doesn't include the element that was split on)\ns.split('W')", "There are many more methods than the ones covered here. Visit the advanced String section to find out more!\nPrint Formatting\nWe can use the .format() method to add formatted objects to printed string statements. \nThe easiest way to show this is through an example:", "'Insert another string with curly brackets: {}'.format('The inserted string')", "We will revisit this string formatting topic in later sections when we are building our projects!\nNext up: Lists!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
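The indexing, slicing, and immutability rules covered in the lecture above can be condensed into a few self-checking lines; this cell is an addition for quick experimentation, not part of the original notebook (and it uses Python 3 print calls):

```python
s = 'Hello World'

print(s[0])     # indexing starts at 0, so this is 'H'
print(s[:5])    # slices go up to, but not including, the end index
print(s[::-1])  # a negative step walks the string backwards

# Immutability: string methods never change s, they return new strings
t = s.replace('World', 'Python')
print(s, '|', t)

# .format() inserts objects at the curly brackets
print('Insert another string with curly brackets: {}'.format('The inserted string'))
```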
kimkipyo/dss_git_kkp
Python 복습/14일차.금_pandas + SQL_2/14일차_2T_os, shutil 모듈을 이용한 파일,폴더 관리하기 (2) - 압축 파일 생성 및 해제.ipynb
mit
[ "2T_Managing files and folders with the os and shutil modules (2) - creating and extracting archive files\nTools for data management\n\nDropbox\ngithub\nAmazon Web Service API -- S3 ( Simple Storage Service ) (*)\nCloud Storage Service -> this is what is frequently used for projects at large companies.", "import shutil\nimport os\n\nshutil.make_archive(\n os.path.join(os.curdir, \"data\", \"world\"),\n \"zip\" # which archive format to save in (\".zip\" => \"zip\", \".tar.gz\" => \"gztar\")\n)\n\nshutil.make_archive(\n os.path.join(os.curdir, \"data\", \"world\"),\n \"gztar\"\n)", "Exercise)\n\nSo that it runs cleanly even when re-executed,\nsave the City information as continent name / country name.csv,\nand save continent name.tar.gz (one archive per continent)\n/data/world/Asia/Korea.csv\n/Japan.csv\n...\n/data/world/Europe/France.csv", "for index, row in country_df.iterrows():\n country_code = row[\"Code\"]\n country_name = row[\"Name\"]\n \n if country_code in city_df[\"CountryCode\"].unique():\n one_city_df = city_groups.get_group(country_code)\n one_city_df.to_csv(os.path.join(os.curdir, \"data\", \"world\", \"{country_name}.csv\".format(country_name=country_name)))\n\nimport pymysql\ndb = pymysql.connect(\n \"db.fastcamp.us\",\n \"root\",\n \"dkstncks\",\n \"world\",\n charset='utf8',\n)\ncity_df = pd.read_sql(\"SELECT * FROM City;\", db)\ncountry_df = pd.read_sql(\"SELECT * FROM Country;\", db)\n\nif \"data\" in os.listdir():\n print(\"Deleting the ./data/ folder.\")\n shutil.rmtree(os.path.join(os.curdir, \"data\"))\n\nprint(\"Creating the ./data/ folder.\")\nos.makedirs(os.path.join(os.curdir, \"data\"))\nos.makedirs(os.path.join(os.curdir, \"data\", \"world\"))\n\n# country_df => group_by => continent\n# continent folder ...\n# continent_df => group_by => ...\n\ncontinent_groups = country_df.groupby(\"Continent\")\ncity_groups = city_df.groupby(\"CountryCode\")\n\n\n# \"ATA\" ... 
=> exception handling (for country codes missing from city_df)\nunique_country_code_in_city = city_df[\"CountryCode\"].unique()\n\nfor continent_name in country_df[\"Continent\"].unique():\n os.makedirs(os.path.join(os.curdir, \"data\", \"world\", continent_name))\n continent_df = continent_groups.get_group(continent_name)\n \n # take \"Code\" from continent_df (the per-continent DataFrame), look it up in city_groups, then write it into the folder\n for index, row in continent_df.iterrows():\n country_code = row[\"Code\"]\n country_name = row[\"Name\"]\n \n if country_code in unique_country_code_in_city:\n# print((continent_name, country_name))\n df = city_groups.get_group(country_code)\n df.to_csv(os.path.join(\n os.curdir,\n \"data\",\n \"world\",\n continent_name,\n \"{country_name}.csv\".format(country_name=country_name)\n ))\n # create the archive\n shutil.make_archive(\n os.path.join(os.curdir, \"data\", \"world\", continent_name), # archive file name\n \"gztar\",\n os.path.join(os.curdir, \"data\", \"world\", continent_name), # directory to archive\n)\n\nfor continent_name in country_df[\"Continent\"].unique():\n country_count = len(os.listdir(os.path.join(os.curdir, \"data\", \"world\", continent_name)))\n print((continent_name, country_count))", "How to extract an archive", "shutil.unpack_archive(\"./data/world/Asia.tar.gz\", \"./Asia\")" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
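The make_archive / unpack_archive round trip shown above can be rehearsed safely inside a scratch temp directory; the folder layout and sample CSV content below are illustrative stand-ins, not data from the notebook:

```python
import os
import shutil
import tempfile

work = tempfile.mkdtemp()

# Build a small world/Asia folder holding one CSV
src = os.path.join(work, 'world', 'Asia')
os.makedirs(src)
with open(os.path.join(src, 'Korea.csv'), 'w') as f:
    f.write('Name,CountryCode\nSeoul,KOR\n')

# make_archive returns the full path of the archive it wrote
archive = shutil.make_archive(os.path.join(work, 'Asia'), 'gztar', src)

# unpack_archive infers the format from the file extension
out = os.path.join(work, 'extracted')
shutil.unpack_archive(archive, out)
extracted_files = sorted(os.listdir(out))
print(extracted_files)

shutil.rmtree(work)  # tidy up the scratch directory
```

Passing src as the third argument archives the *contents* of that directory, so Korea.csv sits at the root of the extracted output rather than under world/Asia/.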
Bedrock-py/bedrock-core
examples/RAND2011study/CooperationAnalysis_Bayesian.ipynb
lgpl-3.0
[ "Rand 2011 Bayesian Analysis\nThis notebook outlines how to begin duplicating the analysis of the Rand et al. 2011 study \"Dynamic social networks promote cooperation in experiments with humans\" Link to Paper\nThis notebook focuses on using a Bayesian approach. Just one example is shown. Refer to the other Cooperation Analysis notebook for the remaining regression formulas to do a full replication.\n\nSpreadsheet\nStan_GLM\nselect-from-dataframe\nsummarize\n\nThis notebook also requires that bedrock-core be installed locally into the python kernel running this notebook. This can be installed via command line using:\npip install git+https://github.com/Bedrock-py/bedrock-core.git\nThe other requirements to run this notebook are:\n\npandas\n\nStep 1: Check Environment\nFirst check that Bedrock is installed locally. If the following cell does not run without error, check the install procedure above and try again. Also, ensure that the kernel selected is the same as the kernel where bedrock-core is installed", "from bedrock.client.client import BedrockAPI", "Test Connection to Bedrock Server\nThis code assumes a local bedrock is hosted at localhost on port 81. Change the SERVER variable to match your server's URL and port.", "import requests\nimport pandas\nimport pprint\nSERVER = \"http://localhost:81/\"\napi = BedrockAPI(SERVER)", "Check for Spreadsheet Opal\nThe following code block checks the Bedrock server for the Spreadsheet Opal. This Opal is used to load .csv, .xls, and other such files into a Bedrock matrix format. The code below calls the Bedrock /dataloaders/ingest endpoint to check if the opals.spreadsheet.Spreadsheet.Spreadsheet opal is installed.\nIf the code below shows the Opal is not installed, there are two options:\n1. If you are running a local Bedrock or are the administrator of the Bedrock server, install the Spreadsheet Opal with pip on the server Spreadsheet\n2. 
If you are not the administrator of the Bedrock server, e-mail the Bedrock administrator requesting the Opal be installed", "resp = api.ingest(\"opals.spreadsheet.Spreadsheet.Spreadsheet\")\nif resp.json():\n print(\"Spreadsheet Opal Installed!\")\nelse:\n print(\"Spreadsheet Opal Not Installed!\")", "Check for STAN GLM Opal\nThe following code block checks the Bedrock server for the STAN GLM Opal. \nIf the code below shows the Opal is not installed, there are two options:\n1. If you are running a local Bedrock or are the administrator of the Bedrock server, install the Stan GLM Opal with pip on the server Stan GLM\n2. If you are not the administrator of the Bedrock server, e-mail the Bedrock administrator requesting the Opal be installed", "resp = api.analytic('opals.stan.Stan.Stan_GLM')\nif resp.json():\n print(\"Stan_GLM Opal Installed!\")\nelse:\n print(\"Stan_GLM Opal Not Installed!\")", "Check for select-from-dataframe Opal\nThe following code block checks the Bedrock server for the select-from-dataframe Opal. This allows you to filter by row and reduce the columns in a dataframe loaded by the server. \nIf the code below shows the Opal is not installed, there are two options:\n1. If you are running a local Bedrock or are the administrator of the Bedrock server, install the select-from-dataframe Opal with pip on the server select-from-dataframe\n2. 
If you are running a local Bedrock or are the administrator of the Bedrock server, install the summarize Opal with pip on the server summarize\n2. If you are not the administrator of the Bedrock server, e-mail the Bedrock administrator requesting the Opal be installed", "resp = api.analytic('opals.summarize.Summarize.Summarize')\nif resp.json():\n print(\"Summarize Opal Installed!\")\nelse:\n print(\"Summarize Opal Not Installed!\")", "Step 2: Upload Data to Bedrock and Create Matrix\nNow that everything is installed, begin the workflow by uploading the csv data and creating a matrix. To understand this fully, it is useful to understand how a data loading workflow occurs in Bedrock.\n\nCreate a datasource that points to the original source file\nGenerate a matrix from the data source (filters can be applied during this step to pre-filter the data source on load)\nAnalytics work on the generated matrix\n\n Note: Each time a matrix is generated from a data source it will create a new copy with a new UUID to represent that matrix \nCheck for csv file locally\nThe following code opens the file and prints out the first part. The file must be a csv file with a header that has labels for each column. The file is a comma-delimited csv.", "filepath = 'Rand2011PNAS_cooperation_data.csv'\ndatafile = pandas.read_csv('Rand2011PNAS_cooperation_data.csv')\ndatafile.head(10)", "Now Upload the source file to the Bedrock Server\nThis code block uses the Spreadsheet ingest module to upload the source file to Bedrock. Note: This simply copies the file to the server, but does not create a Bedrock Matrix format \nIf the following fails to upload: 
Check that the csv file is in the correct comma delimited format with headers.", "ingest_id = 'opals.spreadsheet.Spreadsheet.Spreadsheet'\nresp = api.put_source('Rand2011', ingest_id, 'default', {'file': open(filepath, \"rb\")})\n\nif resp.status_code == 201:\n source_id = resp.json()['src_id']\n print('Source {0} successfully uploaded'.format(filepath))\nelse:\n try:\n print(\"Error in Upload: {}\".format(resp.json()['msg']))\n except Exception:\n pass\n \n try:\n source_id = resp.json()['src_id']\n print(\"Using existing source. If this is not the desired behavior, upload with a different name.\")\n except Exception:\n print(\"No existing source id provided\")", "Check available data sources for the CSV file\nCall the Bedrock sources list to see available data sources. Note that the Rand2011 data source should now be available", "available_sources = api.list(\"dataloader\", \"sources\").json()\ns = next(filter(lambda source: source['src_id'] == source_id, available_sources),'None')\nif s != 'None':\n pp = pprint.PrettyPrinter()\n pp.pprint(s)\nelse:\n print(\"Could not find source\")", "Create a Bedrock Matrix from the CSV Source\nIn order to use the data, the data source must be converted to a Bedrock matrix. The following code steps through that process. Here we are doing a simple transform of csv to matrix. 
There are options to apply filters (like renaming columns, excluding columns)", "resp = api.create_matrix(source_id, 'rand_mtx')\nmtx = resp[0]\nmatrix_id = mtx['id']\nprint(mtx)\nresp", "Look at basic statistics on the source data\nHere we can see that Bedrock has computed some basic statistics on the source data.\nFor numeric data\nThe quartiles, max, mean, min, and standard deviation are provided\nFor non-numeric data\nThe label values and counts for each label are provided.\nFor both types\nThe proposed tags and data type that Bedrock is suggesting are provided", "analytic_id = \"opals.summarize.Summarize.Summarize\"\ninputData = {\n 'matrix.csv': mtx,\n 'features.txt': mtx\n}\n\nparamsData = []\n\nsummary_mtx = api.run_analytic(analytic_id, mtx, 'rand_mtx_summary', input_data=inputData, parameter_data=paramsData)\noutput = api.download_results_matrix(matrix_id, summary_mtx['id'], 'matrix.csv')\noutput", "Step 3: Filter the data based on a condition\nFilter the data to only the Static Condition", "analytic_id = \"opals.select-from-dataframe.SelectByCondition.SelectByCondition\"\ninputData = {\n 'matrix.csv': mtx,\n 'features.txt': mtx\n}\n\nparamsData = [\n {\"attrname\":\"colname\",\"value\":\"condition\"},\n {\"attrname\":\"comparator\",\"value\":\"==\"},\n {\"attrname\":\"value\",\"value\":\"Static\"}\n]\n\nfiltered_mtx = api.run_analytic(analytic_id, mtx, 'rand_static_only', input_data=inputData, parameter_data=paramsData)\n\nfiltered_mtx", "Check that Matrix is filtered", "output = api.download_results_matrix('rand_mtx', 'rand_static_only', 'matrix.csv', remote_header_file='features.txt')\noutput", "Step 4: Run Bayesian Logistic Regression\nThis uses Stan to perform Bayesian logistic regression comparing the effect of the round on the decision", "analytic_id = \"opals.stan.Stan.Stan_GLM\"\ninputData = {\n 'matrix.csv': filtered_mtx,\n 'features.txt': filtered_mtx\n}\n\nparamsData = [\n {\"attrname\":\"formula\",\"value\":\"decision0d1c ~ round_num\"},\n
{\"attrname\":\"family\",\"value\":'logit'},\n {\"attrname\":\"chains\",\"value\":\"3\"},\n {\"attrname\":\"iter\",\"value\":\"3000\"}\n]\n\nresult_mtx = api.run_analytic(analytic_id, mtx, 'rand_bayesian1', input_data=inputData, parameter_data=paramsData)\n\nresult_mtx", "Visualize the output of the analysis\nHere the output of the analysis is downloaded and from here can be visualized and exported", "summary_table = api.download_results_matrix('rand_mtx', 'rand_bayesian1', 'matrix.csv')\nsummary_table\n\nprior_summary = api.download_results_matrix('rand_mtx', 'rand_bayesian1', 'prior_summary.txt')\nprint(prior_summary)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
khalido/deep-learning
sentiment-rnn/Sentiment RNN Solution.ipynb
mit
[ "Sentiment Analysis with an RNN\nIn this notebook, you'll implement a recurrent neural network that performs sentiment analysis. Using an RNN rather than a feedfoward network is more accurate since we can include information about the sequence of words. Here we'll use a dataset of movie reviews, accompanied by labels.\nThe architecture for this network is shown below.\n<img src=\"assets/network_diagram.png\" width=400px>\nHere, we'll pass in words to an embedding layer. We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the word2vec lesson. You can actually train up an embedding with word2vec and use it here. But it's good enough to just have an embedding layer and let the network learn the embedding table on it's own.\nFrom the embedding layer, the new representations will be passed to LSTM cells. These will add recurrent connections to the network so we can include information about the sequence of words in the data. Finally, the LSTM cells will go to a sigmoid output layer here. We're using the sigmoid because we're trying to predict if this text has positive or negative sentiment. The output layer will just be a single unit then, with a sigmoid activation function.\nWe don't care about the sigmoid outputs except for the very last one, we can ignore the rest. We'll calculate the cost from the output of the last step and the training label.", "import numpy as np\nimport tensorflow as tf\n\nwith open('../sentiment_network/reviews.txt', 'r') as f:\n reviews = f.read()\nwith open('../sentiment_network/labels.txt', 'r') as f:\n labels = f.read()\n\nreviews[:2000]", "Data preprocessing\nThe first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. 
We'll also want to clean it up a bit.\nYou can see an example of the reviews data above. We'll want to get rid of those periods. Also, you might notice that the reviews are delimited with newlines \\n. To deal with those, I'm going to split the text into each review using \\n as the delimiter. Then I can combine all the reviews back together into one big string.\nFirst, let's remove all punctuation. Then get all the text without the newlines and split it into individual words.", "from string import punctuation\nall_text = ''.join([c for c in reviews if c not in punctuation])\nreviews = all_text.split('\\n')\n\nall_text = ' '.join(reviews)\nwords = all_text.split()\n\nall_text[:2000]\n\nwords[:100]", "Encoding the words\nThe embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network.\n\nExercise: Now you're going to encode the words with integers. Build a dictionary that maps words to integers. Later we're going to pad our input vectors with zeros, so make sure the integers start at 1, not 0.\nAlso, convert the reviews to integers and store the reviews in a new list called reviews_ints.", "from collections import Counter\ncounts = Counter(words)\nvocab = sorted(counts, key=counts.get, reverse=True)\nvocab_to_int = {word: ii for ii, word in enumerate(vocab, 1)}\n\nreviews_ints = []\nfor each in reviews:\n reviews_ints.append([vocab_to_int[word] for word in each.split()])", "Encoding the labels\nOur labels are \"positive\" or \"negative\".
To use these labels in our network, we need to convert them to 0 and 1.\n\nExercise: Convert labels from positive and negative to 1 and 0, respectively.", "labels = labels.split('\\n')\nlabels = np.array([1 if each == 'positive' else 0 for each in labels])\n\nreview_lens = Counter([len(x) for x in reviews_ints])\nprint(\"Zero-length reviews: {}\".format(review_lens[0]))\nprint(\"Maximum review length: {}\".format(max(review_lens)))", "Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 words.\n\nExercise: First, remove the review with zero length from the reviews_ints list.", "non_zero_idx = [ii for ii, review in enumerate(reviews_ints) if len(review) != 0]\nlen(non_zero_idx)\n\nreviews_ints[-1]", "Turns out it's the final review that has zero length. But that might not always be the case, so let's make it more general.", "reviews_ints = [reviews_ints[ii] for ii in non_zero_idx]\nlabels = np.array([labels[ii] for ii in non_zero_idx])", "Exercise: Now, create an array features that contains the data we'll pass to the network. The data should come from reviews_ints, since we want to feed integers to the network. Each row should be 200 elements long. For reviews shorter than 200 words, left pad with 0s. That is, if the review is ['best', 'movie', 'ever'], [117, 18, 128] as integers, the row will look like [0, 0, 0, ..., 0, 117, 18, 128]. For reviews longer than 200, use only the first 200 words as the feature vector.\n\nThis isn't trivial and there are a bunch of ways to do this.
But, if you're going to be building your own deep learning networks, you're going to have to get used to preparing your data.", "seq_len = 200\nfeatures = np.zeros((len(reviews_ints), seq_len), dtype=int)\nfor i, row in enumerate(reviews_ints):\n features[i, -len(row):] = np.array(row)[:seq_len]\n\nfeatures[:10,:100]", "Training, Validation, Test\nWith our data in nice shape, we'll split it into training, validation, and test sets.\n\nExercise: Create the training, validation, and test sets here. You'll need to create sets for the features and the labels, train_x and train_y for example. Define a split fraction, split_frac as the fraction of data to keep in the training set. Usually this is set to 0.8 or 0.9. The rest of the data will be split in half to create the validation and testing data.", "split_frac = 0.8\nsplit_idx = int(len(features)*0.8)\ntrain_x, val_x = features[:split_idx], features[split_idx:]\ntrain_y, val_y = labels[:split_idx], labels[split_idx:]\n\ntest_idx = int(len(val_x)*0.5)\nval_x, test_x = val_x[:test_idx], val_x[test_idx:]\nval_y, test_y = val_y[:test_idx], val_y[test_idx:]\n\nprint(\"\\t\\t\\tFeature Shapes:\")\nprint(\"Train set: \\t\\t{}\".format(train_x.shape), \n \"\\nValidation set: \\t{}\".format(val_x.shape),\n \"\\nTest set: \\t\\t{}\".format(test_x.shape))", "With train, validation, and text fractions of 0.8, 0.1, 0.1, the final shapes should look like:\nFeature Shapes:\nTrain set: (20000, 200) \nValidation set: (2500, 200) \nTest set: (2500, 200)\nBuild the graph\nHere, we'll build the graph. First up, defining the hyperparameters.\n\nlstm_size: Number of units in the hidden layers in the LSTM cells. Usually larger is better performance wise. Common values are 128, 256, 512, etc.\nlstm_layers: Number of LSTM layers in the network. I'd start with 1, then add more if I'm underfitting.\nbatch_size: The number of reviews to feed the network in one training pass. 
Typically this should be set as high as you can go without running out of memory.\nlearning_rate: Learning rate", "lstm_size = 256\nlstm_layers = 1\nbatch_size = 500\nlearning_rate = 0.001", "For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.\n\nExercise: Create the inputs_, labels_, and drop out keep_prob placeholders using tf.placeholder. labels_ needs to be two-dimensional to work with some functions later. Since keep_prob is a scalar (a 0-dimensional tensor), you shouldn't provide a size to tf.placeholder.", "n_words = len(vocab_to_int)\n\n# Create the graph object\ngraph = tf.Graph()\n# Add nodes to the graph\nwith graph.as_default():\n inputs_ = tf.placeholder(tf.int32, [None, None], name='inputs')\n labels_ = tf.placeholder(tf.int32, [None, None], name='labels')\n keep_prob = tf.placeholder(tf.float32, name='keep_prob')", "Embedding\nNow we'll add an embedding layer. We need to do this because there are 74000 words in our vocabulary. It is massively inefficient to one-hot encode our classes here. You should remember dealing with this problem from the word2vec lesson. Instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using word2vec, then load it here. But, it's fine to just make a new layer and let the network learn the weights.\n\nExercise: Create the embedding lookup matrix as a tf.Variable. Use that embedding matrix to get the embedded vectors to pass to the LSTM cell with tf.nn.embedding_lookup. This function takes the embedding matrix and an input tensor, such as the review vectors. Then, it'll return another tensor with the embedded vectors. 
So, if the embedding layer has 200 units, the function will return a tensor with size [batch_size, 200].", "# Size of the embedding vectors (number of units in the embedding layer)\nembed_size = 300 \n\nwith graph.as_default():\n embedding = tf.Variable(tf.random_uniform((n_words, embed_size), -1, 1))\n embed = tf.nn.embedding_lookup(embedding, inputs_)", "LSTM cell\n<img src=\"assets/network_diagram.png\" width=400px>\nNext, we'll create our LSTM cells to use in the recurrent network (TensorFlow documentation). Here we are just defining what the cells look like. This isn't actually building the graph, just defining the type of cells we want in our graph.\nTo create a basic LSTM cell for the graph, you'll want to use tf.contrib.rnn.BasicLSTMCell. Looking at the function documentation:\ntf.contrib.rnn.BasicLSTMCell(num_units, forget_bias=1.0, input_size=None, state_is_tuple=True, activation=&lt;function tanh at 0x109f1ef28&gt;)\nyou can see it takes a parameter called num_units, the number of units in the cell, called lstm_size in this code. So then, you can write something like \nlstm = tf.contrib.rnn.BasicLSTMCell(num_units)\nto create an LSTM cell with num_units. Next, you can add dropout to the cell with tf.contrib.rnn.DropoutWrapper. This just wraps the cell in another cell, but with dropout added to the inputs and/or outputs. It's a really convenient way to make your network better with almost no effort! So you'd do something like\ndrop = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)\nMost of the time, your network will have better performance with more layers. That's sort of the magic of deep learning, adding more layers allows the network to learn really complex relationships. Again, there is a simple way to create multiple layers of LSTM cells with tf.contrib.rnn.MultiRNNCell:\ncell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)\nHere, [drop] * lstm_layers creates a list of cells (drop) that is lstm_layers long.
The MultiRNNCell wrapper builds this into multiple layers of RNN cells, one for each cell in the list.\nSo the final cell you're using in the network is actually multiple (or just one) LSTM cells with dropout. But it all works the same from an architectural viewpoint, just a more complicated graph in the cell.\n\nExercise: Below, use tf.contrib.rnn.BasicLSTMCell to create an LSTM cell. Then, add dropout to it with tf.contrib.rnn.DropoutWrapper. Finally, create multiple LSTM layers with tf.contrib.rnn.MultiRNNCell.\n\nHere is a tutorial on building RNNs that will help you out.", "with graph.as_default():\n # Your basic LSTM cell\n lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)\n \n # Add dropout to the cell\n drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)\n \n # Stack up multiple LSTM layers, for deep learning\n cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)\n \n # Getting an initial state of all zeros\n initial_state = cell.zero_state(batch_size, tf.float32)", "RNN forward pass\n<img src=\"assets/network_diagram.png\" width=400px>\nNow we need to actually run the data through the RNN nodes. You can use tf.nn.dynamic_rnn to do this. You'd pass in the RNN cell you created (our multiple layered LSTM cell for instance), and the inputs to the network.\noutputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state)\nAbove I created an initial state, initial_state, to pass to the RNN. This is the cell state that is passed between the hidden layers in successive time steps. tf.nn.dynamic_rnn takes care of most of the work for us. We pass in our cell and the input to the cell, then it does the unrolling and everything else for us. It returns outputs for each time step and the final_state of the hidden layer.\n\nExercise: Use tf.nn.dynamic_rnn to add the forward pass through the RNN.
Remember that we're actually passing in vectors from the embedding layer, embed.", "with graph.as_default():\n outputs, final_state = tf.nn.dynamic_rnn(cell, embed,\n initial_state=initial_state)", "Output\nWe only care about the final output, we'll be using that as our sentiment prediction. So we need to grab the last output with outputs[:, -1], then calculate the cost from that and labels_.", "with graph.as_default():\n predictions = tf.contrib.layers.fully_connected(outputs[:, -1], 1, activation_fn=tf.sigmoid)\n cost = tf.losses.mean_squared_error(labels_, predictions)\n \n optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)", "Validation accuracy\nHere we can add a few nodes to calculate the accuracy which we'll use in the validation pass.", "with graph.as_default():\n correct_pred = tf.equal(tf.cast(tf.round(predictions), tf.int32), labels_)\n accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))", "Batching\nThis is a simple function for returning batches from our data. First it removes data such that we only have full batches. Then it iterates through the x and y arrays and returns slices out of those arrays with size [batch_size].", "def get_batches(x, y, batch_size=100):\n \n n_batches = len(x)//batch_size\n x, y = x[:n_batches*batch_size], y[:n_batches*batch_size]\n for ii in range(0, len(x), batch_size):\n yield x[ii:ii+batch_size], y[ii:ii+batch_size]", "Training\nBelow is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself.
Before you run this, make sure the checkpoints directory exists.", "epochs = 10\n\nwith graph.as_default():\n saver = tf.train.Saver()\n\nwith tf.Session(graph=graph) as sess:\n sess.run(tf.global_variables_initializer())\n iteration = 1\n for e in range(epochs):\n state = sess.run(initial_state)\n \n for ii, (x, y) in enumerate(get_batches(train_x, train_y, batch_size), 1):\n feed = {inputs_: x,\n labels_: y[:, None],\n keep_prob: 0.5,\n initial_state: state}\n loss, state, _ = sess.run([cost, final_state, optimizer], feed_dict=feed)\n \n if iteration%5==0:\n print(\"Epoch: {}/{}\".format(e, epochs),\n \"Iteration: {}\".format(iteration),\n \"Train loss: {:.3f}\".format(loss))\n\n if iteration%25==0:\n val_acc = []\n val_state = sess.run(cell.zero_state(batch_size, tf.float32))\n for x, y in get_batches(val_x, val_y, batch_size):\n feed = {inputs_: x,\n labels_: y[:, None],\n keep_prob: 1,\n initial_state: val_state}\n batch_acc, val_state = sess.run([accuracy, final_state], feed_dict=feed)\n val_acc.append(batch_acc)\n print(\"Val acc: {:.3f}\".format(np.mean(val_acc)))\n iteration +=1\n saver.save(sess, \"checkpoints/sentiment.ckpt\")", "Testing", "test_acc = []\nwith tf.Session(graph=graph) as sess:\n saver.restore(sess, tf.train.latest_checkpoint('/output/checkpoints'))\n test_state = sess.run(cell.zero_state(batch_size, tf.float32))\n for ii, (x, y) in enumerate(get_batches(test_x, test_y, batch_size), 1):\n feed = {inputs_: x,\n labels_: y[:, None],\n keep_prob: 1,\n initial_state: test_state}\n batch_acc, test_state = sess.run([accuracy, final_state], feed_dict=feed)\n test_acc.append(batch_acc)\n print(\"Test accuracy: {:.3f}\".format(np.mean(test_acc)))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
NathanYee/ThinkBayes2
code/.ipynb_checkpoints/report01-checkpoint.ipynb
gpl-2.0
[ "License: Attribution 4.0 International (CC BY 4.0)", "from thinkbayes2 import Pmf, Suite\nimport thinkplot\nimport math\n\n% matplotlib inline", "Twin brothers and bayes theorem\nSuppose we are asked the question: <b>Elvis Presley had a twin brother who died at birth. What is the probability that Elvis was an identical twin?</b>\nIn order to make our problem easier, we will clarify a few facts about identical twins. Identical twins are known as monozygotic twins, meaning that they both devolop from a single zygote. As a result, monozygotic twins are the same gender, either male-male or female-female. So, we rephrase our question: <b>What percentage of male-male twins are monozygotic.</b>\nIn addition, here is an important fact: .08% of twins are monozygotic.\nWithout Bayes' theorem (counting)\nWe use a tree to visualize the problem.\n<img src=\"treeReport1.jpg\" alt=\"Probability Tree\" height=\"400\" width=\"400\">\nAssuming we have 100 twins, lets calculate the number of male-male dizygotic, male-male monozygotic, and total number of male-male twins.", "# calculate number of male-male dizygotic twins using the percentage of dizygotic and percentage of male-male\nDiMM = 100 * .92 * .25\n\n# calculate number of male-male monozygotic twins using the percentage of monozygotic and percentage of male-male\nMoMM = 100 * .08 * .5\n\n# calculate total number of male-male twins\nTotalMM = DiMM + MoMM\n\nprint(\"Number of male-male dizygotic twins: {}\".format(DiMM))\nprint(\"Number of male-male monozygotic twins: {}\".format(MoMM))\nprint(\"Total number of male-male twins: {}\".format(TotalMM))\n\n# next we can calculate the fraction of male-male twins that are monozygotic\nfractionMoMM = MoMM / TotalMM\npercentMoMM = fractionMoMM * 100\nprint(\"Percentage of male-male monozygotic twins: {0:.1f}%\".format(percentMoMM))", "So, we can conclude that Elvis had a 14.8% chance to identical twins with his brother.\nWith Bayes' theorem (math)\nHowever, rather than using a huge 
tree, we can use Bayes' theorem to make a much more elegant solution. First, assuming we are only dealing with twins, we must find P(male-male|monozygotic), P(male-male), and P(monozygotic). Then we can calculate P(monozygotic|male-male).", "twins = dict()\n\n# first calculate the total percentage of male-male twins. We can do this by adding the percentage of male-male\n# monozygotic and the percentage of male-male dizygotic\ntwins['male-male'] = (.08*.50 + .92*.25)\ntwins['male-male|monozygotic'] = (.50)\ntwins['monozygotic'] = (.08)\n\nprint(twins['male-male'])\nprint(twins['male-male|monozygotic'])\nprint(twins['monozygotic'])\n\n# now using bayes theorem\ntemp = twins['male-male|monozygotic'] * twins['monozygotic'] / twins['male-male']\nprint(\"P(monozygotic|male-male): {0:.3f}\".format(temp))", "The Dice Problem chapter 3\nWe are given dice with 4, 6, 8, 12, and 20 sides. If we roll a die many times at random, what is the probability that we rolled each die?\nFirst we must define the Likelihood function for the dice. In this case, if we roll a number greater than a die allows (roll 5 for a 4-sided die), the probability of that being the chosen die goes to zero. Else, the probability is multiplied by 1 over the number of sides.", "class Dice(Suite):\n def Likelihood(self, data, hypo):\n if hypo < data:\n return 0\n else:\n return 1 / hypo", "Next create a dice object with dice of 4, 6, 8, 12 and 20 sides", "suite = Dice([4, 6, 8, 12, 20])", "Roll a 6 and see the probabilities of being each dice", "suite.Update(6)\nsuite.Print()", "Now roll a series of numbers", "for roll in [6, 8, 7, 7, 5, 4]:\n suite.Update(roll)\n\nsuite.Print()", "For these rolls, we see that the 8 sided dice is most probable. It is still possible for the 20 sided dice, but only with a .1% chance.\nThe Train Problem chapter 3\nRailroads number trains from 1 to N. One day you see a train numbered 60. How many trains does the railroad have?\nFirst define the Train suite.
The likelihood is the same as the above dice problem. We can think of it like this: each number corresponds to a number of trains. If we see train N, then all hypotheses less than N are 0. Otherwise, each hypothesis H has likelihood 1 / H.", "class Train(Suite):\n # hypo is the number of trains\n # data is an observed serial number\n def Likelihood(self, data, hypo):\n if data > hypo:\n return 0\n else:\n return 1 / hypo", "Create train object and update with train number 60", "hypos = range(1, 1001)\ntrain = Train(hypos)\ntrain.Update(60)", "Plot current probabilities of numbers of trains", "thinkplot.Pdf(train)", "Because 60 is not actually a good guess, we will compute the mean of the posterior distribution", "def Mean(suite):\n total = 0\n for hypo, prob in suite.Items():\n total += hypo * prob\n return total\n\nprint(Mean(train))", "The mean of the posterior distribution is the value that minimizes error. In simpler terms, we get the smallest number (error) when we subtract the actual number of trains from the mean of the posterior distribution.\nNext, update the train with two more sightings, 50 and 90", "for data in [50, 90]:\n train.Update(data)\nprint(Mean(train))\nthinkplot.Pdf(train)", "After the two updates, the error minimizing value has gone down to 164.\nAt the start of the problem, we assumed that there was an equal chance of any number of trains. However, most rail companies don't have thousands of trains. To better represent this fact, we can give greater prior probability to hypotheses with smaller numbers of trains.", "class Train2(Dice):\n def __init__(self, hypos, alpha=1.0):\n Pmf.__init__(self)\n for hypo in hypos:\n self.Set(hypo, hypo**(-alpha))\n self.Normalize()\n\nhypos2 = range(1, 1001)\ntrain2 = Train2(hypos2)\n\nthinkplot.Pmf(train2)\n\nfor data in [50, 60, 90]:\n train2.Update(data)\n \nthinkplot.Pmf(train2)", "We initially thought that giving lower numbers of trains higher probabilities would give us a more accurate result.
However, over just a few data points, we get a nearly identical graph to the one with linearly represented hypotheses.\nOriginal Bayes Problem - Two Watches\nSuppose you are a student who goes to various classes. Every morning you wake up and put on one of two watches. The first watch is on time. The second watch is 5 minutes slow. If you arrive to class 3 minutes late, what is the probability you wore the slow watch. Assume that arrival times follow the Gaussian function where b is an offset in minutes:\n$$f(x) = e^{-\\frac{(x-b)^2}{32}}$$\nFirst we want to make sure that our gaussian function is a reasonable approximation of arrival time. Below is a plot of the function from 15 minutes late to 15 minutes early. With some quick looks at the graph, you can see that you arrive to class +- 2 minutes around 45% of the time which is reasonable most students.\n<img src=\"gaussianFunctions.png\" alt=\"Gaussian Function\" height=\"600\" width=\"600\">\nNext we define our Watch Suite. Our hypotheses will be the watches described above:\n'watch 1' is that you used the on time watch\n'watch 2' is that you used the 5 minute slow watch", "class Watch(Suite):\n \"\"\"\n Maps watch hypotheses to probabilities\n \"\"\"\n \n def f(x, b):\n \"\"\"\n f is a function that returns a Gaussian Function.\n \n Args:\n x (int): the primary variable\n b (int): a constant offset used to make fast or slow clocks\n \"\"\"\n return math.exp((-1 * (x-b)**2) / (32))\n \n watch1_probs = dict()\n for i in range(-15,15):\n watch1_probs[i] = f(i, 0)\n\n watch2_probs = dict()\n for i in range(-15,15):\n watch2_probs[i] = f(i, -5)\n \n hypotheses = {\n 'watch 1':watch1_probs,\n 'watch 2':watch2_probs\n }\n \n def __init__(self, hypos):\n Pmf.__init__(self)\n for hypo in hypos:\n self.Set(hypo, 1)\n self.Normalize()\n \n def Likelihood(self, data, hypo):\n time = self.hypotheses[hypo]\n like = time[data]\n return like", "Next create the two hypotheses. 
As expected, before we see any class arrival data, both watches have equal chances of being worn.", "watches = Watch(['watch 1', 'watch 2'])\nwatches.Print()", "As a sanity check, suppose we arrive to class exactly on time, and the next time 5 minutes late", "for arrival_time in [0,-5]:\n watches.Update(arrival_time)\nwatches.Print()", "Our model says that both hypotheses still have the same probability, this makes sense because one hypothesis is centered at 0 and the other at -5.\nNow, let's see what happens if we visit 5 more classes", "for arrival_time in [0,-2,-2,-3,-5]:\n watches.Update(arrival_time)\n \nwatches.Print()", "After this series of updates, we have a slightly increased chance of using the on time watch. This makes sense because on average, the times have been slightly closer to 0 than -5. \nEven though our model performs reasonably on close data, it falls apart if you arrive either really late or early. Suppose you arrive to class 11 minutes late", "watches.Update(-11)\nwatches.Print()", "Now, the slow watch is significantly more probable than the on time watch. If we didn't want extreme values to have as big of an impact in updates, we would simply modify the denominator of our Gaussian function" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
fluffy-hamster/A-Beginners-Guide-to-Python
A Beginners Guide to Python/19. For-Loops.ipynb
mit
[ "For-loops\nFor loops; what are they? A super useful construct that allows us to do hundreds of calculations with 2-3 lines of code. The Syntax:\nfor {item} in {iterable}:\n {code block}\n\nSo what is an iterable? Well basically it is a data-type that can be considered as 'sequence' of items. In this course we have seen three iterables already: strings, lists and the range function. Lets check that out now:", "a_string = \"12345\"\na_list = list(range(1,6))\na_range_object = range(1, 6)\n\nfor num in a_range_object:\n print(num, num*num) # prints num and num**2.\n \nfor num in a_list:\n print(num, \"is {}even\".format(\"not \" if num % 2 != 0 else \"\")) # returns num and whether it is/isnot even\n\nfor num in a_string:\n print(int(num)) # returns num, after converting it to an int. ", "From this code snippet I want you to understand a few things. The first is that \"num\" takes on all the values 1...5 sequentially. The other (main) thing I want you to realise is that within the context of the loop \"num\" works just like any other normal variable name. And that means we can test it (is it prime, even, divisible by 6, etc) or operate on it (e.g. +,-,/, etc), append it to lists and so on. \nOkay, lets heat things up a little. Lets imagine we have a task which states:\n\nfind every single possible sequence of length 2 (XY) where X and Y are in the English alphabet.\nfor example: \"aa\", \"ab\", \"ac\"... \"zz\"\n\nOkay, how many combinations are there? Well, we have 26 possible letters and for each letter there are 26 continuations. 26*26= 676. Thats a lot of work for a human being to do by hand, but fortunately for us we have an unthinking machine that can do it all the grunt work. How could we go about solving this task?", "import string\nalphabet = string.ascii_lowercase # this string is simply abc...xyz\n\ncombinations = [] \nfor letter in alphabet:\n for letter2 in alphabet: # a for loop-inside a for-loop! Sexy. 
\n sequence = letter + letter2 # aa, ab, ac, etc\n combinations.append(sequence)\n \nprint(len(combinations)) # length of the list\nprint(combinations[:10], \"...\", combinations[-9:]) # because the list is so long we are printing two smaller slices of it.", "So as you guys can see, it wasn't actually that difficult to enumerate all the combinations. Using for-loops inside for loops is a powerful way of expressing this type of problem but one should always be aware of the maths involved if we wanted to write code in this way for sequences of length five that's 26**5 or 11881376 possible combinations. And so, although double for-loops is a super useful tool to have at one's disposal the reality is an algorithm of the order O(n**2) (see wiki's 'Big O notation' article) isn't practical for large problem sets.\nCautionary tale, beware of the infinite!\nWhen using for and while loops it is possible to accidentally create programs that never terminate. And that's, err, usually bad. \nThere are several ways to make this mistake and below I'm going to showcase one particular issue. Although, since I don't want to crash my computer I have added a 'safety valve' which will allow us to run the code safely.", "lst = [1,2]\n\nloop_counter = 0\nfor item in lst:\n lst.append(item+2)\n loop_counter +=1 # every time we go through the loop, we add one to our counter. \n if loop_counter == 100: # if we have gone through the loop 100 times...\n break # we escape the loop.\nprint(lst)", "Okay so this code starts with a list of 2 items, and for each item it adds a new item to the list (item + 2). Because we are constantly adding to the list we never reach the end of the list, and so therefore the program never terminates. \nThe 'bandage' fix in place here is a loop_counter. This simply keeps check of how many times we have gone through the loop. Once we hit 100 loops we execute \"break\" to escape the loop.
And thus the program doesn't waltz toward infinity.\nThere is another (often better) way to handle this issue however, and that is to separate out the thing we wish to iterate over and the thing we wish to change. Here's an example:", "lst = [1,2]\n\nfor item in lst[:]: # <--- see lecture on slicing, lst[:] is a COPY of lst, not the list itself.\n lst.append(item+2)\nprint(lst)", "The difference here, as noted in the comment, is that lst[:] is a copy of lst and this copy doesn't actually change as lst changes, and therefore the program terminates.\nHomework Assignment\nYour task this week is to make a program named \"fizzbuzz\". The rules:\n\nYour program should go through the numbers 1 to 100,\nif the number is even, print \"fizz\"\nif the number is divisible by 5 print \"buzz\"\nif the number is both even and divisible by 5 print \"fizzbuzz\"\notherwise just print the number. \n\nFor extra points:\nFor extra points you need to make your code easily modifiable. I want to be able to:\n\nchange the string printed out (e.g. change 'fizz' to 'dave', or some other string)\nbe able to change the total\nchange one (or both) of the rules (e.g. change the rule to be divisible by 6, not 5)\n\nTo do this you should ask the user what the rules should be (Hint: you should use Python's \"input\" function).", "# YOUR CODE HERE" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
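One way the fizzbuzz homework above could be approached is sketched below. This is only one possible solution under the rules as stated (even → "fizz", divisible by 5 → "buzz"); the function and parameter names are my own, and the keyword parameters are there to satisfy the "easily modifiable" extra-points requirement.

```python
def fizzbuzz_word(n, even_word="fizz", div_word="buzz", divisor=5):
    """Return the string to print for a single number, following the rules above."""
    word = ""
    if n % 2 == 0:           # rule: even numbers get the first word
        word += even_word
    if n % divisor == 0:     # rule: divisible by `divisor` gets the second word
        word += div_word     # both rules firing concatenates to "fizzbuzz"
    return word or str(n)    # neither rule matched: just the number itself

for i in range(1, 101):
    print(fizzbuzz_word(i))
```

Swapping `even_word`, `div_word`, or `divisor` at the call site covers the "change the string / change the rule" requests; wiring them to `input()` is left as the exercise intends.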
GoogleCloudPlatform/asl-ml-immersion
notebooks/building_production_ml_systems/labs/4b_streaming_data_inference.ipynb
apache-2.0
[ "Working with Streaming Data\nLearning Objectives\n 1. Learn how to process real-time data for ML models using Cloud Dataflow\n 2. Learn how to serve online predictions using real-time data\nIntroduction\nIt can be useful to leverage real-time data in a machine learning model when making a prediction. However, doing so requires setting up a streaming data pipeline which can be non-trivial. \nTypically you will have the following:\n - A series of IoT devices generating and sending data from the field in real-time (in our case these are the taxis)\n - A messaging bus that receives and temporarily stores the IoT data (in our case this is Cloud Pub/Sub)\n - A streaming processing service that subscribes to the messaging bus, windows the messages and performs data transformations on each window (in our case this is Cloud Dataflow)\n - A persistent store to keep the processed data (in our case this is BigQuery)\nThese steps happen continuously and in real-time, and are illustrated by the blue arrows in the diagram below. \nOnce this streaming data pipeline is established, we need to modify our model serving to leverage it. This simply means adding a call to the persistent store (BigQuery) to fetch the latest real-time data when a prediction request comes in. This flow is illustrated by the red arrows in the diagram below. \n<img src='../assets/taxi_streaming_data.png' width='80%'>\nIn this lab we will address how to process real-time data for machine learning models. We will use the same data as our previous 'taxifare' labs, but with the addition of trips_last_5min as a feature. 
This is our proxy for real-time traffic.", "import os\nimport shutil\n\nimport googleapiclient.discovery\nimport numpy as np\nimport tensorflow as tf\nfrom google import api_core\nfrom google.api_core.client_options import ClientOptions\nfrom google.cloud import bigquery\nfrom matplotlib import pyplot as plt\nfrom tensorflow import keras\nfrom tensorflow.keras.callbacks import TensorBoard\nfrom tensorflow.keras.layers import Dense, DenseFeatures\nfrom tensorflow.keras.models import Sequential\n\nprint(tf.__version__)\n\n# Change below if necessary\nPROJECT = !gcloud config get-value project # noqa: E999\nPROJECT = PROJECT[0]\nBUCKET = PROJECT\nREGION = \"us-central1\"\n\n%env PROJECT=$PROJECT\n%env BUCKET=$BUCKET\n%env REGION=$REGION\n\n%%bash\ngcloud config set project $PROJECT\ngcloud config set ai_platform/region $REGION", "Re-train our model with trips_last_5min feature\nIn this lab, we want to show how to process real-time data for training and prediction. So, we need to retrain our previous model with this additional feature. Go through the notebook 4a_streaming_data_training.ipynb. Open and run the notebook to train and save a model. This notebook is very similar to what we did in the Introduction to Tensorflow module but note the added feature for trips_last_5min in the model and the dataset.\nSimulate Real Time Taxi Data\nSince we don’t actually have real-time taxi data we will synthesize it using a simple python script. The script publishes events to Google Cloud Pub/Sub.\nInspect the iot_devices.py script in the taxicab_traffic folder. It is configured to send about 2,000 trip messages every five minutes with some randomness in the frequency to mimic traffic fluctuations. These numbers come from looking at the historical average of taxi ride frequency in BigQuery. \nIn production this script would be replaced with actual taxis with IoT devices sending trip data to Cloud Pub/Sub. 
\nTo execute the iot_devices.py script, launch a terminal and navigate to the asl-ml-immersion/notebooks/building_production_ml_systems/labs directory. Then run the following two commands.\nbash\nPROJECT_ID=$(gcloud config get-value project)\npython3 ./taxicab_traffic/iot_devices.py --project=$PROJECT_ID\nYou will see new messages being published every 5 seconds. Keep this terminal open so it continues to publish events to the Pub/Sub topic. If you open Pub/Sub in your Google Cloud Console, you should be able to see a topic called taxi_rides.\nCreate a BigQuery table to collect the processed data\nIn the next section, we will create a dataflow pipeline to write processed taxifare data to a BigQuery Table, however that table does not yet exist. Execute the following commands to create a BigQuery dataset called taxifare and a table within that dataset called traffic_realtime.", "bq = bigquery.Client()\n\ndataset = bigquery.Dataset(bq.dataset(\"taxifare\"))\ntry:\n bq.create_dataset(dataset) # will fail if dataset already exists\n print(\"Dataset created.\")\nexcept api_core.exceptions.Conflict:\n print(\"Dataset already exists.\")", "Next, we create a table called traffic_realtime and set up the schema.", "dataset = bigquery.Dataset(bq.dataset(\"taxifare\"))\n\ntable_ref = dataset.table(\"traffic_realtime\")\nSCHEMA = [\n bigquery.SchemaField(\"trips_last_5min\", \"INTEGER\", mode=\"REQUIRED\"),\n bigquery.SchemaField(\"time\", \"TIMESTAMP\", mode=\"REQUIRED\"),\n]\ntable = bigquery.Table(table_ref, schema=SCHEMA)\n\ntry:\n bq.create_table(table)\n print(\"Table created.\")\nexcept api_core.exceptions.Conflict:\n print(\"Table already exists.\")", "Launch Streaming Dataflow Pipeline\nNow that we have our taxi data being pushed to Pub/Sub, and our BigQuery table set up, let’s consume the Pub/Sub data using a streaming DataFlow pipeline.\nThe pipeline is defined in ./taxicab_traffic/streaming_count.py. Open that file and inspect it. 
\nThere are 5 transformations being applied:\n - Read from PubSub\n - Window the messages\n - Count number of messages in the window\n - Format the count for BigQuery\n - Write results to BigQuery\nTODO: Open the file ./taxicab_traffic/streaming_count.py and find the TODO there. Specify a sliding window that is 5 minutes long, and gets recalculated every 15 seconds. Hint: Reference the beam programming guide for guidance. To check your answer reference the solution. \nFor the second transform, we specify a sliding window that is 5 minutes long, and recalculate values every 15 seconds. \nIn a new terminal, launch the dataflow pipeline using the command below. You can change the BUCKET variable, if necessary. Here it is assumed to be your PROJECT_ID.\nbash\nPROJECT_ID=$(gcloud config get-value project)\nREGION=$(gcloud config get-value ai_platform/region)\nBUCKET=$PROJECT_ID # change as necessary\npython3 ./taxicab_traffic/streaming_count.py \\\n --input_topic taxi_rides \\\n --runner=DataflowRunner \\\n --project=$PROJECT_ID \\\n --region=$REGION \\\n --temp_location=gs://$BUCKET/dataflow_streaming\nOnce you've submitted the command above you can examine the progress of that job in the Dataflow section of Cloud console. \nExplore the data in the table\nAfter a few moments, you should see new data written to your BigQuery table as well. \nRe-run the query periodically to observe new data streaming in! You should see a new row every 15 seconds.", "%%bigquery\nSELECT\n *\nFROM\n `taxifare.traffic_realtime`\nORDER BY\n time DESC\nLIMIT 10", "Make predictions from the new data\nIn the rest of the lab, we'll reference the model we trained and deployed from the previous labs, so make sure you have run the code in the 4a_streaming_data_training.ipynb notebook. \nThe add_traffic_last_5min function below will query the traffic_realtime table to find the most recent traffic information and add that feature to our instance for prediction.\nExercise. 
Complete the code in the function below. Write a SQL query that will return the most recent entry in traffic_realtime and add it to the instance.", "# TODO 2a. Write a function to take most recent entry in `traffic_realtime`\n# table and add it to instance.\ndef add_traffic_last_5min(instance):\n bq = bigquery.Client()\n query_string = \"\"\"\n TODO: Your code goes here\n \"\"\"\n trips = bq.query(query_string).to_dataframe()[\"trips_last_5min\"][0]\n instance[\"traffic_last_5min\"] = # TODO: Your code goes here.\n return instance", "The traffic_realtime table is updated in realtime using Cloud Pub/Sub and Dataflow so, if you run the cell below periodically, you should see the traffic_last_5min feature added to the instance and change over time.", "add_traffic_last_5min(\n instance={\n \"dayofweek\": 4,\n \"hourofday\": 13,\n \"pickup_longitude\": -73.99,\n \"pickup_latitude\": 40.758,\n \"dropoff_latitude\": 41.742,\n \"dropoff_longitude\": -73.07,\n }\n)", "Finally, we'll use the Python API to call predictions on an instance, using the realtime traffic information in our prediction. Just as above, you should notice that our resulting predictions change with time as our realtime traffic information changes as well.\nExercise. Complete the code below to call prediction on an instance incorporating realtime traffic info. You should\n- use the function add_traffic_last_5min to add the most recent realtime traffic data to the prediction instance\n- call prediction on your model for this realtime instance and save the result as a variable called response\n- parse the JSON of response to print the predicted taxifare cost", "# TODO 2b. 
Write code to call prediction on instance using realtime traffic info.\n# Hint: Look at the \"Serving online predictions\" section of this page https://cloud.google.com/ml-engine/docs/tensorflow/custom-prediction-routine-keras\nMODEL_NAME = \"taxifare\"\nVERSION_NAME = \"traffic\"\n\nservice = googleapiclient.discovery.build(\"ml\", \"v1\", cache_discovery=False)\nname = \"projects/{}/models/{}/versions/{}\".format(\n PROJECT, MODEL_NAME, VERSION_NAME\n)\n\ninstance = # TODO\n\nresponse = # TODO\n\nif \"error\" in response:\n raise RuntimeError(response[\"error\"])\nelse:\n print(response[\"predictions\"][0][\"output_1\"][0])", "Copyright 2021 Google Inc. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
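The sliding-window counting that the lab above describes (a 5-minute window recalculated every 15 seconds) can be illustrated in plain Python. This is a conceptual sketch only — it is not the Beam transform the TODO asks for, and the toy timestamps are invented for illustration:

```python
def sliding_window_counts(event_times, size=300, period=15, horizon=60):
    """Count the events falling in each sliding window [start, start + size)."""
    counts = []
    start = 0.0
    while start <= horizon:
        # every `period` seconds a new window opens, overlapping the previous ones
        n = sum(1 for t in event_times if start <= t < start + size)
        counts.append((start, n))
        start += period
    return counts

events = [1, 5, 20, 40, 310]  # toy event timestamps, in seconds
print(sliding_window_counts(events))
```

Because windows overlap, a single event is counted by many windows — which is why the BigQuery table above gains a new row every 15 seconds even though the underlying trips arrive irregularly.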
Caoimhinmg/PmagPy
data_files/Essentials_Examples/Notebooks/essentials_ch_3_template.ipynb
bsd-3-clause
[ "Jupyter Notebook for turning in solutions to the problems in the Essentials of Paleomagnetism Textbook by L. Tauxe\nProblems in Chapter 3\nProblem 1a\nTo make a plot, we need to import the plotting package matplotlib.pyplot and tell ipython to plot inside the notebook:", "import numpy as np\nimport matplotlib.pyplot as plt # import the plotting module\n%matplotlib inline \n# This allows us to plot in the notebook environment", "Write your equation for magnetic energy in words (and LaTeX) here.", "thetas=np.arange(0,180,1) # makes an array of thetas from 0 to 180 at 1 degree increments. \nEs=np.cos(np.radians(thetas)) # replace this with YOUR equation - this is just an EXAMPLE. \nplt.plot(thetas,Es) # make a nice plot\nplt.title(\"Write your title here\")\nplt.xlabel(\"Write your X-axis Label here\")\nplt.ylabel(\"Write your Y-axis Label here\")", "Problem 1b\nWhat are you trying to do?", "# figure out thermal energy here and print it out\nYOUR_NUMBER = 1 # obviously this is not the real number. what is it? \nprint('thermal energy is:', YOUR_NUMBER)", "How does this compare with 1a?" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
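Problem 1b above asks for the thermal energy, which is the Boltzmann constant times temperature, $k_B T$. A sketch of that calculation follows; the temperature of 300 K is my assumption — use whatever temperature the problem specifies:

```python
k_B = 1.380649e-23  # Boltzmann constant, in J/K
T = 300             # assumed room temperature, in K (adjust to the problem's value)

thermal_energy = k_B * T  # on the order of 4e-21 J at room temperature
print('thermal energy is:', thermal_energy)
```

Comparing this number against the anisotropy energy from Problem 1a is then the "How does this compare with 1a?" step.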
henriquepgomide/caRtola
src/python/desafio_valorizacao/Descobrindo o algoritmo de valorização do Cartola FC - Parte I.ipynb
mit
[ "Discovering the Cartola FC valuation algorithm - Part I\nExploring Cartola's player valuation algorithm.\nHello! This is the first tutorial in a series that will try to discover the Cartola FC valuation algorithm. In this first study, we will:\n\nEvaluate the valuation system across the rounds; \nStudy the distribution of the price variation for each round; \nCarry out a case study of a specific player, studying his valuation and building a valuation model specific to that player.\n\nAlong the way, you will study data analysis using Python with Pandas, Seaborn and Sklearn. I expect you to have some notion of:\n\nLinear models\nTime series analysis\nBasics of Cartola FC.", "# Import libraries\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nfrom sklearn import linear_model\nfrom sklearn.metrics import mean_squared_error, r2_score\n\npd.options.mode.chained_assignment = None # default='warn'\n%matplotlib inline\npd.options.display.float_format = '{:,.2f}'.format\n\n# Open the dataset\ndados = pd.read_csv('~/caRtola/data/desafio_valorizacao/valorizacao_cartola_2018.csv')\n\n# List the variable names\nstr(list(dados))\n\n# Select the variables for the analysis\ndados = dados[['slug', 'rodada', 'posicao',\n 'status', 'variacao_preco', 'pontos',\n 'preco', 'media_pontos']]\n\n# Explore the data of a single player\npaqueta = dados[dados.slug == 'lucas-paqueta']\npaqueta.head(n=15)", "A few observations about the structure of the data. In row '21136', Paquetá is listed as doubtful and yet has a score of 0. In the row below ('21137'), he is suspended but nevertheless scored. \nThe explanation for this error in the data lies in how the data from Globo's API is organized. Although the data is correct for Cartola's front-end, it is inadequate for our analysis. Why?\nImagine you are picking your team for round 38. For this round, the player's score is not yet available - only his price variation, his points average and his price up to round 38. So we need to adjust the 'pontos' column using a simple technique: shifting (lagging) the column's data. Besides that, we will need to apply the same process to the 'variacao_preco' column, which is also tied to the previous round's data.\nIn short, the 'variacao_preco' and 'pontos' columns are shifted one row up and need to be corrected;", "# Create the variacao_preco_lag and pontos_lag columns\npaqueta['variacao_preco_lag'] = paqueta['variacao_preco'].shift(1)\npaqueta['pontos_lag'] = paqueta['pontos'].shift(1)\npaqueta['media_lag'] = paqueta['media_pontos'].shift(-1)\n\npaqueta[['slug', 'rodada', 'status',\n 'pontos_lag', 'variacao_preco_lag',\n 'preco', 'media_pontos']].head(n=15)", "As we can see in the table above, the new attributes we created are now aligned with the athlete's status and will help us in the modeling stage. Before modeling, let's explore our data a bit more.\nFirst, an observation to help us understand the model: when the player is suspended (row 21137) or his status is null, there is no price variation. Another point worth noting: when the athlete's score is positive, the price tends to rise. Let's examine this in the two plots below.", "# Reshape the data to plot the results\npaqueta_plot = pd.melt(paqueta, \n id_vars=['slug','rodada'], \n value_vars=['variacao_preco_lag', 'pontos_lag', 'preco'])\n\n# Plot variacao_preco_lag, pontos_lag and preco\nplt.figure(figsize=(16, 6))\ng = sns.lineplot(x='rodada', y='value', hue='variable', data=paqueta_plot)", "In this plot, we can see that the athlete's price was reasonably stable over time. Looking at the behavior of the blue and orange lines, we notice that when one line slopes downward the other seems to follow. This leads us to the obvious conclusion: the athlete's score is directly tied to his price variation.", "plt.figure(figsize=(16, 6))\ng = sns.scatterplot(x='pontos_lag', y='variacao_preco_lag', hue='status', data=paqueta)", "There apparently is a relationship between the points and the price variation. Let's look at the correlation matrix.", "paqueta[['pontos_lag','variacao_preco_lag','preco','media_pontos']].corr()", "We got some useful information out of the correlation matrix. First, the score is positively correlated with the price variation, while the athlete's price is negatively correlated with it. These two variables can already help us build a model.", "# Set predictors and dependent variable\npaqueta_complete = paqueta[(~paqueta.status.isin(['Nulo', 'Suspenso'])) & (paqueta.rodada > 5)]\npaqueta_complete = paqueta_complete.dropna()\n\npredictors = paqueta_complete[['pontos_lag','preco','media_lag']]\noutcome = paqueta_complete['variacao_preco_lag']\n\nregr = linear_model.LinearRegression()\nregr.fit(predictors, outcome)\npaqueta_complete['predictions'] = regr.predict(paqueta_complete[['pontos_lag', 'preco', 'media_lag']])\n\nprint('Intercept: \\n', regr.intercept_)\nprint('Coefficients: \\n', regr.coef_)\nprint(\"Mean squared error: %.2f\"\n % mean_squared_error(paqueta_complete['variacao_preco_lag'], paqueta_complete['predictions']))\nprint('Variance score: %.2f' % r2_score(paqueta_complete['variacao_preco_lag'], paqueta_complete['predictions']))", "Good news! We are predicting the player's results very well. The values are approximate, but not bad at all! The player's valuation formula for a given round is:\n$$ Variacao = 16.12 + (pontos * 0.174) - (preco * 0.824) + (media * 0.108) $$\nBelow, let's see to what extent our predictions match the player's actual performance.", "# Plot the price variation against the value predicted by the linear model.\n\nplt.figure(figsize=(8, 8))\ng = sns.regplot(x='predictions',y='variacao_preco_lag', data=paqueta_complete)\n# Label each point with its round number, to check whether we are missing any specific round\nfor line in range(0, paqueta_complete.shape[0]):\n g.text(paqueta_complete.iloc[line]['predictions'], \n paqueta_complete.iloc[line]['variacao_preco_lag']-0.25, \n paqueta_complete.iloc[line]['rodada'], \n horizontalalignment='right', \n size='medium', \n color='black', \n weight='semibold')", "Our predictions for the player Paquetá are very good. We haven't discovered Cartola's algorithm, but we already have a better-than-reasonable approximation. Is our model generalizable to the other players?\nStay tuned for our next publication..." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
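The fitted equation in the notebook above can be wrapped in a small helper for predicting a round's price change. The coefficients are copied from the regression output the notebook reports (16.12, 0.174, -0.824, 0.108); this is the notebook's approximation for one player, not Cartola's actual algorithm, and the example inputs are invented:

```python
def predicted_price_change(points, price, avg_points):
    """Approximate price variation from the fitted linear model above."""
    return 16.12 + 0.174 * points - 0.824 * price + 0.108 * avg_points

# e.g. a player who scored 10 points, priced at C$ 12.00, with a 5.0 average:
print(round(predicted_price_change(10, 12.0, 5.0), 3))
```

Note the price coefficient is negative: expensive players need higher scores just to hold their value, which matches the negative correlation seen in the correlation matrix.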
kubeflow/pipelines
samples/contrib/pytorch-samples/Pipeline-Cifar10-hpo.ipynb
apache-2.0
[ "# Copyright (c) Facebook, Inc. and its affiliates.\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "KubeFlow Pipelines : HPO with AX - Pytorch Cifar10 Image classification\nIn this example, we train a PyTorch Lightning model on the CIFAR-10 image classification dataset. A parent run will be created during the training process, which will dump the baseline model and the relevant parameters, metrics and model summary; it is followed by a set of nested child runs, which will dump the trial results. The best parameters will be dumped into the parent run once the experiments are completed.\nThis notebook shows a PyTorch CIFAR-10 end-to-end classification example using Kubeflow Pipelines. \nAn example notebook that demonstrates how to:\n\nGet the different tasks needed for the pipeline\nCreate a Kubeflow pipeline\nInclude PyTorch KFP components to preprocess, train, visualize and deploy the model in the pipeline\nSubmit a job for execution\nQuery (prediction and explain) the final deployed model\n\nImport the necessary packages", "! 
pip install --no-cache-dir kfp ax-platform\n\nimport kfp\nimport json\nimport os\nfrom kfp.onprem import use_k8s_secret\nfrom kfp import components\nfrom kfp.components import load_component_from_file, load_component_from_url, func_to_container_op, InputPath\nfrom kfp import dsl\nfrom kfp import compiler\n\nimport numpy as np\nimport logging\n\nfrom ax.service.ax_client import AxClient\n\nkfp.__version__", "Enter your gateway and the auth token\nUse this extension on chrome to get token\n\nUpdate values for the ingress gateway and auth session", "INGRESS_GATEWAY='http://istio-ingressgateway.istio-system.svc.cluster.local'\nAUTH=\"<auth-token>\" \nNAMESPACE=\"kubeflow-user-example-com\"\nCOOKIE=\"authservice_session=\"+AUTH\nEXPERIMENT=\"Default\"\ndist_volume = 'dist-vol'\nvolume_mount_path =\"/model\"\nresults_path = volume_mount_path+\"/results.json\"", "Set the Log bucket and Tensorboard Image", "MINIO_ENDPOINT=\"http://minio-service.kubeflow:9000\"\nLOG_BUCKET=\"mlpipeline\"\nTENSORBOARD_IMAGE=\"public.ecr.aws/pytorch-samples/tboard:latest\"", "Set the client and create the experiment", "client = kfp.Client(host=INGRESS_GATEWAY+\"/pipeline\", cookies=COOKIE)\n\nclient.create_experiment(EXPERIMENT)\nexperiments = client.list_experiments(namespace=NAMESPACE)\nmy_experiment = experiments.experiments[0]\nmy_experiment", "Set the Inference parameters", "DEPLOY_NAME=\"torchserve\"\nMODEL_NAME=\"cifar10\"\nISVC_NAME=DEPLOY_NAME+\".\"+NAMESPACE+\".\"+\"example.com\"\nINPUT_REQUEST=\"https://raw.githubusercontent.com/kubeflow/pipelines/master/samples/contrib/pytorch-samples/cifar10/input.json\"", "Load the components yaml files for setting up the components", "! 
python utils/generate_templates.py cifar10/ax_template_mapping.json\n\nprepare_tensorboard_op = load_component_from_file(\"yaml/tensorboard_component.yaml\")\n\ngenerate_trails_op = components.load_component_from_file(\n \"yaml/ax_generate_trials_component.yaml\"\n)\n\ncomplete_trails_op = components.load_component_from_file(\n \"yaml/ax_complete_trials_component.yaml\"\n)\n\nget_keys_op = components.load_component_from_file(\n \"../../../components/json/Get_keys/component.yaml\"\n)\n\nget_element_op = components.load_component_from_file(\n \"../../../components/json/Get_element_by_key/component.yaml\"\n)\nprep_op = components.load_component_from_file(\n \"yaml/preprocess_component.yaml\"\n)\n\n# Uncomment hpo inputs in component yaml\ntrain_op = components.load_component_from_file(\n \"yaml/ax_train_component.yaml\"\n)\n\ndeploy_op = load_component_from_file(\"yaml/deploy_component.yaml\")\n\npred_op = load_component_from_file(\"yaml/prediction_component.yaml\")\n\nminio_op = components.load_component_from_file(\n \"yaml/minio_component.yaml\"\n)\n\nkubernetes_create_pvc_op = load_component_from_file(\"../../../components/kubernetes/Create_PersistentVolumeClaim/component.yaml\")\n\nfrom kubernetes.client.models import V1Volume, V1PersistentVolumeClaimVolumeSource\ndef create_dist_pipeline():\n kubernetes_create_pvc_op(name=dist_volume, storage_size= \"20Gi\")\n\ncreate_volume_run = client.create_run_from_pipeline_func(create_dist_pipeline, arguments={})\ncreate_volume_run.wait_for_run_completion()\n\nparameters = [\n {\"name\": \"lr\", \"type\": \"range\", \"bounds\": [1e-4, 0.2], \"log_scale\": True},\n {\"name\": \"weight_decay\", \"type\": \"range\", \"bounds\": [1e-4, 1e-2]},\n {\"name\": \"eps\", \"type\": \"range\", \"bounds\": [1e-8, 1e-2]},\n ]", "Define the pipeline", "@dsl.pipeline(\n name=\"AX Hpo\", description=\"Estimating best parameters using AX\"\n)\ndef pytorch_ax_hpo( # pylint: disable=too-many-arguments\n minio_endpoint=MINIO_ENDPOINT,\n 
log_bucket=LOG_BUCKET,\n log_dir=f\"tensorboard/logs/{dsl.RUN_ID_PLACEHOLDER}\",\n mar_path=f\"mar/{dsl.RUN_ID_PLACEHOLDER}/model-store\",\n config_prop_path=f\"mar/{dsl.RUN_ID_PLACEHOLDER}/config\",\n model_uri=f\"s3://mlpipeline/mar/{dsl.RUN_ID_PLACEHOLDER}\",\n best_params=f\"hpo/{dsl.RUN_ID_PLACEHOLDER}\",\n tf_image=TENSORBOARD_IMAGE,\n deploy=DEPLOY_NAME,\n isvc_name=ISVC_NAME,\n model=MODEL_NAME,\n namespace=NAMESPACE,\n confusion_matrix_log_dir=f\"confusion_matrix/{dsl.RUN_ID_PLACEHOLDER}/\",\n checkpoint_dir=\"checkpoint_dir/cifar10\",\n input_req=INPUT_REQUEST,\n cookie=COOKIE,\n total_trials=2,\n ingress_gateway=INGRESS_GATEWAY,\n):\n \n \"\"\"This method defines the pipeline tasks and operations\"\"\"\n pod_template_spec = json.dumps({\n \"spec\": {\n \"containers\": [{\n \"env\": [\n {\n \"name\": \"AWS_ACCESS_KEY_ID\",\n \"valueFrom\": {\n \"secretKeyRef\": {\n \"name\": \"mlpipeline-minio-artifact\",\n \"key\": \"accesskey\",\n }\n },\n },\n {\n \"name\": \"AWS_SECRET_ACCESS_KEY\",\n \"valueFrom\": {\n \"secretKeyRef\": {\n \"name\": \"mlpipeline-minio-artifact\",\n \"key\": \"secretkey\",\n }\n },\n },\n {\n \"name\": \"AWS_REGION\",\n \"value\": \"minio\"\n },\n {\n \"name\": \"S3_ENDPOINT\",\n \"value\": f\"{minio_endpoint}\",\n },\n {\n \"name\": \"S3_USE_HTTPS\",\n \"value\": \"0\"\n },\n {\n \"name\": \"S3_VERIFY_SSL\",\n \"value\": \"0\"\n },\n ]\n }]\n }\n })\n\n prepare_tb_task = prepare_tensorboard_op(\n log_dir_uri=f\"s3://{log_bucket}/{log_dir}\",\n image=tf_image,\n pod_template_spec=pod_template_spec,\n ).set_display_name(\"Visualization\")\n\n prep_task = (\n prep_op().after(prepare_tb_task).set_display_name(\"Preprocess & Transform\")\n )\n\n gen_trials_task = generate_trails_op(total_trials, parameters, 'test-accuracy').after(prep_task).set_display_name(\"AX Generate Trials\")\n \n get_keys_task = get_keys_op(gen_trials_task.outputs[\"trial_parameters\"]).after(gen_trials_task).set_display_name(\"Get Keys of Trials\")\n \n 
confusion_matrix_url = f\"minio://{log_bucket}/{confusion_matrix_log_dir}\"\n script_args = f\"model_name=resnet.pth,\" \\\n f\"confusion_matrix_url={confusion_matrix_url}\"\n ptl_args = f\"max_epochs=1, profiler=pytorch\"\n\n with dsl.ParallelFor(get_keys_task.outputs[\"keys\"]) as item:\n get_element_task = get_element_op(gen_trials_task.outputs[\"trial_parameters\"], item).after(get_keys_task).set_display_name(\"Get Element from key\")\n train_task = (\n train_op(\n trial_id=item,\n input_data=prep_task.outputs[\"output_data\"],\n script_args=script_args,\n model_parameters=get_element_task.outputs[\"output\"],\n ptl_arguments=ptl_args,\n results=results_path\n ).add_pvolumes({volume_mount_path: dsl.PipelineVolume(pvc=dist_volume)}).after(get_element_task).set_display_name(\"Training\")\n# For GPU uncomment below line and set GPU limit and node selector\n# ).set_gpu_limit(1).add_node_selector_constraint('cloud.google.com/gke-accelerator','nvidia-tesla-p4')\n )\n \n complete_trials_task = complete_trails_op(gen_trials_task.outputs[\"client\"], results_path).add_pvolumes({volume_mount_path: dsl.PipelineVolume(pvc=dist_volume)}).after(train_task).set_display_name(\"AX Complete Trials\")\n\n dsl.get_pipeline_conf().add_op_transformer(\n use_k8s_secret(\n secret_name=\"mlpipeline-minio-artifact\",\n k8s_secret_key_to_env={\n \"secretkey\": \"MINIO_SECRET_KEY\",\n \"accesskey\": \"MINIO_ACCESS_KEY\",\n },\n )\n )\n", "Compile the pipeline", "compiler.Compiler().compile(pytorch_ax_hpo, 'pytorch.tar.gz', type_check=True)", "Execute the pipeline", "run = client.run_pipeline(my_experiment.id, 'pytorch_ax_hpo', 'pytorch.tar.gz')", "Viewing results\nWait for the pipeline execution to be completed. Sample pipeline shown below\n\nClick on \"AX Complete Trials\" component. The best hyperparameters are shown in the Input/Output tab as shown below" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
marksibrahim/musings
notebooks/.ipynb_checkpoints/A Neural Network Classifier using Keras-checkpoint.ipynb
mit
[ "Neural Network Classifier\nNeural networks can learn non-linear relationships that simpler models, such as logistic regression, may miss.", "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport numpy as np\nimport pandas as pd\n\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.linear_model import LogisticRegressionCV\nfrom sklearn import datasets\n\n\nfrom keras.models import Sequential\nfrom keras.layers.core import Dense, Activation\nfrom keras.utils import np_utils", "Load Iris Data", "iris = datasets.load_iris()\niris_df = pd.DataFrame(data= np.c_[iris['data'], iris['target']],\n columns= iris['feature_names'] + ['target'])\n\niris_df.head()", "Targets 0, 1, 2 correspond to three species: setosa, versicolor, and virginica.", "sns.pairplot(iris_df, hue=\"target\")\n\nX = iris_df.values[:, :4]\nY = iris_df.values[: , 4]", "Split into Training and Testing", "train_X, test_X, train_Y, test_Y = train_test_split(X, Y, train_size=0.5, random_state=0)", "Let's test out a Logistic Regression Classifier", "lr = LogisticRegressionCV()\nlr.fit(train_X, train_Y)\n\nprint(\"Accuracy = {:.2f}\".format(lr.score(test_X, test_Y)))", "Let's Train a Neural Network Classifier", "# Let's Encode the Output in a vector (one hot encoding)\n # since this is what the network outputs\ndef one_hot_encode_object_array(arr):\n '''One hot encode a numpy array of objects (e.g. 
strings)'''\n uniques, ids = np.unique(arr, return_inverse=True)\n return np_utils.to_categorical(ids, len(uniques))\n\ntrain_y_ohe = one_hot_encode_object_array(train_Y)\ntest_y_ohe = one_hot_encode_object_array(test_Y)", "Defining the Network\n\nwe have four features and three classes\ninput layer must have 4 units\noutput must have 3\nwe'll add a single hidden layer (choose 16 units)", "model = Sequential()\nmodel.add(Dense(16, input_shape=(4,)))\nmodel.add(Activation(\"sigmoid\"))\n\n# define output layer\nmodel.add(Dense(3))\n# softmax is used here, because there are three classes (sigmoid only works for two classes)\nmodel.add(Activation(\"softmax\"))\n\n# define loss function and optimization\nmodel.compile(optimizer=\"adam\", loss=\"categorical_crossentropy\", metrics=[\"accuracy\"])", "What's happening here? \n\noptimizer: examples include stochastic gradient descent (going down steepest point)\nADAM (the one selected above) stands for Adaptive Moment Estimation\nsimilar to stochastic gradient descent, but looks at an exponentially decaying average and has a different update rule\n\n\nloss: classification error or mean square error are fine options\nCategorical Cross Entropy is a better option for computing the gradient supposedly", "model.fit(train_X, train_y_ohe, epochs=100, batch_size=1, verbose=0)\n\nloss, accuracy = model.evaluate(test_X, test_y_ohe, verbose=0)\nprint(\"Accuracy = {:.2f}\".format(accuracy))", "Nice! 
Much better performance than logistic regression!\nHow about training with stochastic gradient descent?", "stochastic_net = Sequential()\nstochastic_net.add(Dense(16, input_shape=(4,)))\nstochastic_net.add(Activation(\"sigmoid\"))\n\nstochastic_net.add(Dense(3))\n\nstochastic_net.add(Activation(\"softmax\"))\nstochastic_net.compile(optimizer=\"sgd\", loss=\"categorical_crossentropy\", metrics=[\"accuracy\"])\n\nstochastic_net.fit(train_X, train_y_ohe, epochs=100, batch_size=1, verbose=0)\n\nloss, accuracy = stochastic_net.evaluate(test_X, test_y_ohe, verbose=0)\nprint(\"Accuracy = {:.2f}\".format(accuracy))", "Based on Mike Williams' introductory tutorial on safaribooksonline" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mne-tools/mne-tools.github.io
0.12/_downloads/plot_sensor_permutation_test.ipynb
bsd-3-clause
[ "%matplotlib inline", "Permutation T-test on sensor data\nOne tests if the signal significantly deviates from 0\nduring a fixed time window of interest. Here computation\nis performed on MNE sample dataset between 40 and 60 ms.", "# Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>\n#\n# License: BSD (3-clause)\n\nimport numpy as np\n\nimport mne\nfrom mne import io\nfrom mne.stats import permutation_t_test\nfrom mne.datasets import sample\n\nprint(__doc__)", "Set parameters", "data_path = sample.data_path()\nraw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'\nevent_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'\nevent_id = 1\ntmin = -0.2\ntmax = 0.5\n\n# Setup for reading the raw data\nraw = io.read_raw_fif(raw_fname)\nevents = mne.read_events(event_fname)\n\n# Set up pick list: MEG + STI 014 - bad channels (modify to your needs)\ninclude = [] # or stim channel ['STI 014']\nraw.info['bads'] += ['MEG 2443', 'EEG 053'] # bads + 2 more\n\n# pick MEG Gradiometers\npicks = mne.pick_types(raw.info, meg='grad', eeg=False, stim=False, eog=True,\n include=include, exclude='bads')\nepochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,\n baseline=(None, 0), reject=dict(grad=4000e-13, eog=150e-6))\ndata = epochs.get_data()\ntimes = epochs.times\n\ntemporal_mask = np.logical_and(0.04 <= times, times <= 0.06)\ndata = np.mean(data[:, :, temporal_mask], axis=2)\n\nn_permutations = 50000\nT0, p_values, H0 = permutation_t_test(data, n_permutations, n_jobs=1)\n\nsignificant_sensors = picks[p_values <= 0.05]\nsignificant_sensors_names = [raw.ch_names[k] for k in significant_sensors]\n\nprint(\"Number of significant sensors : %d\" % len(significant_sensors))\nprint(\"Sensors names : %s\" % significant_sensors_names)", "View location of significantly active sensors", "evoked = mne.EvokedArray(-np.log10(p_values)[:, np.newaxis],\n epochs.info, tmin=0.)\n\n# Extract mask and indices of active sensors in 
layout\nstats_picks = mne.pick_channels(evoked.ch_names, significant_sensors_names)\nmask = p_values[:, np.newaxis] <= 0.05\n\nevoked.plot_topomap(ch_type='grad', times=[0], scale=1,\n time_format=None, cmap='Reds', vmin=0., vmax=np.max,\n unit='-log10(p)', cbar_fmt='-%0.1f', mask=mask,\n size=3, show_names=lambda x: x[4:] + ' ' * 20)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ocelot-collab/ocelot
demos/ipython_tutorials/accelerator_optim.ipynb
gpl-3.0
[ "This notebook was created by Sergey Tomin (sergey.tomin@desy.de) for IPC seminar. Source and license info is on GitHub. October 2020.", "from ocelot import *\nfrom ocelot.gui import *\nimport copy", "Lattice: bunch compressor", "d = Drift(0.5)\n# Quadrupoles \nqf = Quadrupole(l=0.2, k1=1, k2=2)\nqd = Quadrupole(l=0.2, k1=-1, k2=-2)\n\n# Bends\nangle0 = 14*np.pi/180\nb1 = SBend(l=0.5, angle=angle0, e2=angle0, eid='B1')\nb2 = SBend(l=0.5, angle=-angle0, e1=-angle0, eid='B2')\nb3 = SBend(l=0.5, angle=-angle0, e2=-angle0, eid='B3')\nb4 = SBend(l=0.5, angle=angle0, e1=angle0, eid='B4')\n\nm1 = Marker(eid=\"START\")\nm2 = Marker(eid=\"SCREEN\")\n\nchicane = (b1, d, b2, d, d, b3, d, b4)\n\nfodo = (qf, d, qd, d)\n\ncell = (m1, d, chicane, d, fodo, m2)\n\nlat = MagneticLattice(cell, method=MethodTM({\"global\": SecondTM}))", "Twiss parameters", "tws0 = Twiss()\ntws0.beta_x = 10\ntws0.beta_y = 10\ntws0.alpha_x = 0\ntws0.alpha_y = 0\ntws0.E = 100e-3\n\ntws = twiss(lat, tws0)\n\nplot_opt_func(lat, tws)\nplt.show()\n\nR = lattice_transfer_map(lat, energy=100e-3)\nprint(R[4,5])", "Generate the electron beam - ParticleArray", "p_array_init = generate_parray(sigma_x=1e-3, sigma_px=5e-5, chirp=0.01,\n nparticles=20000, charge=1e-09, energy=tws0.E, tws=tws0)\n\nshow_e_beam(p_array_init, figsize=(9,6))\nplt.show()", "Track the beam through the lattice", "p_array = copy.deepcopy(p_array_init)\n\nnavi = Navigator(lat)\n\n#navi.unit_step = 0.1\n#csr = CSR()\n#csr.sigma_min = 4e-6\n#navi.add_physics_proc(csr, m1, m2)\n\ntws_track, _ = track(lat, p_array, navi)\n\nshow_e_beam(p_array, figsize=(9,6))\nplt.show()", "What should the screen show?", "show_density(p_array.x()* 1e3, p_array.y() * 1e3, \n xlabel=\"x [mm]\", ylabel='y [mm]', title=\"Screen\", limits=[(-4, 4), (-2, 2)])\n\nprint(f\"std(x) = {np.std(p_array.x()) * 1e6} um; std(y) = {np.std(p_array.y()) * 1e6} um;\")", "What if the dipoles are not identical?", "p_array = copy.deepcopy(p_array_init)\n\nb1.angle = angle0 * (1 + 
.02)\nb2.angle = -angle0 * (1 - .02)\nb3.angle = -angle0 * (1 - .02)\nb4.angle = angle0 * (1 + .02)\nlat.update_transfer_maps()\n\ntws = twiss(lat, tws0)\nplot_opt_func(lat, tws, legend=False)\nplt.show()", "Has the beam size on the screen changed?", "navi = Navigator(lat)\ntws_track, _ = track(lat, p_array, navi)\nprint(f\"std(x) = {np.std(p_array.x()) * 1e6} um; std(y) = {np.std(p_array.y()) * 1e6} um;\")\n\nshow_density(p_array.x()* 1e3, p_array.y() * 1e3, \n xlabel=\"x [mm]\", ylabel='y [mm]', title=\"Screen\", limits=[(-4, 4), (-2, 2)], grid=False)", "Let's minimize the horizontal beam size (dispersion?) on the screen with the last two dipoles.", "from scipy.optimize import minimize\n\n# Our objective function \ndef get_beam_size(angles):\n p_array = copy.deepcopy(p_array_init)\n\n b3.angle = angles[0]\n b4.angle = angles[1]\n lat.update_transfer_maps()\n \n # NOTE: for simplicity, we do not take into account changes in the drift length between dipoles B3 and B4.\n\n navi = Navigator(lat)\n # navi.unit_step = 0.1\n # csr = CSR()\n # csr.sigma_min = 4e-6\n # navi.add_physics_proc(csr, m1, m2)\n tws_track, _ = track(lat, p_array, navi)\n return np.std(p_array.x()) \n\nangles = np.copy([b3.angle, b4.angle])\nprint(f\"init: angles = {angles}\")\n\nres = minimize(fun=get_beam_size, x0=angles)\n\nprint()\nprint(f\"res: angles = {res['x']}\")\n\np_array = copy.deepcopy(p_array_init)\n\nnavi = Navigator(lat)\ntws_track, _ = track(lat, p_array, navi)\n\nshow_density(p_array.x()*1e3, p_array.y()*1e3, xlabel=\"x [mm]\", ylabel='y [mm]', limits=[(-4, 4), (-2, 2)])\nprint(f\"std(x) = {np.std(p_array.x()) * 1e6} um; std(y) = {np.std(p_array.y()) * 1e6} um;\")", "What is going on? Let's have a look at the Twiss parameters", "tws = twiss(lat, tws0)\nplot_opt_func(lat, tws, legend=False)\nplt.show()\n\nshow_e_beam(p_array, figsize=(9,6))\nplt.show()\n\nR = lattice_transfer_map(lat, energy=100e-3)\nprint(\"R56 = \", R[4,5])" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
hide-tono/python-training
python-machine-learning/ch04/ch04_2.ipynb
apache-2.0
[ "データセットをトレーニングデータセットとテストデータセットに分割する\n\nWineデータセットを用い、前処理を行った後次元数を減らすための特徴選択の手法を見ていく。\nwineデータセットのクラスは1,2,3の3種類。これは3種類の葡萄を表している。", "import pandas as pd\nimport numpy as np\ndf_wine = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/wine/wine.data', header=None)\ndf_wine.columns = ['Class label', 'Alcohol', 'Malic acid', 'Ash', 'Alcalinity of ash', 'Magnesium', 'Total phenols', 'Flavanoids', 'Nonflavanoid phenols', 'Proanthocyanins', 'color intensity', 'Hue', 'OD280/OD315 of diluted wines', 'Proline']\nprint('Class labels', np.unique(df_wine['Class label']))\ndf_wine.head()\n\nfrom sklearn.cross_validation import train_test_split\n# X:特徴量 y: クラスラベル\nX, y = df_wine.iloc[:, 1:].values, df_wine.iloc[:, 0].values\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)", "特徴量の尺度を揃える\n一般的な手法は__正規化(normalization)__と__標準化(standardization)__の2つ。\n正規化\n特徴量を[0,1]の範囲にスケーリングし直すこと。\n$$ x_{norm}^{(i)} = \\frac{x^{(i)} - x_{min}}{x_{max} - x_{min}} $$", "from sklearn.preprocessing import MinMaxScaler\nmms = MinMaxScaler()\nX_train_norm = mms.fit_transform(X_train)\nX_test_norm = mms.transform(X_test)\nprint('正規化前')\nprint(X_train[0])\nprint('正規化後')\nprint(X_train_norm[0])", "標準化\n平均値0, 標準偏差1となるように変換する。以下の点で正規化より優れている。\n\n特徴量の列は正規分布に従うため、重みを学習しやすくなる\n外れ値に関する有益な情報が維持されるため、外れ値の影響を受けにくい\n\n$$ x_{std}^{(i)} = \\frac{x^{(i)} - \\mu_x}{\\sigma_x} $$\n\n\\( \\mu_x \\):特徴量の列の平均値\n\\( \\sigma_x \\):対応する標準偏差", "from sklearn.preprocessing import StandardScaler\n\nstdsc = StandardScaler()\nX_train_std = stdsc.fit_transform(X_train)\nX_test_std = stdsc.transform(X_test)\n\nprint('標準化前')\nprint(X_train[0])\nprint('標準化後')\nprint(X_train_std[0])", "有益な特徴量の選択\n汎化誤差を減らすための一般的な方法は以下のとおり\n\n更に多くのトレーニングデータを集める\n正則化を通じて複雑さにペナルティを課す\nパラメータの数が少ない、より単純なモデルを選択する\nデータの次元の数を減らす\n\nL1正則化による疎な解\nL2正則化は以下だった。\n$$ L2:||w||2^2 = \\sum{j=1}^m w^2_j $$\nL2正則化は以下のとおり。\n$$ L1:||w||1 = \\sum{j=1}^m |w_j| 
$$\n差は二乗和を絶対値の和に置き換えている。\n\nL1正則化によって返されるのは疎な特徴ベクトル\n殆どの特徴量の重みは0\n無関係な特徴量の個数が多い高次元データセットに対して特徴量を選択するのに有効\n\nなぜ特徴量を選択できる?\n\nL2正則化のペナルティは二乗和なので原点を中心とした円のようなものになる。\nL1正則化のペナルティは絶対値の和なので原点を中心としたひし形のようなものになる。\n\n→ ひし形の頂点がコストの一番低いところになりやすい。\n頂点となる箇所はどちらかの重みがゼロで、どちらかが最大となる。", "from sklearn.linear_model import LogisticRegression\nlr = LogisticRegression(penalty='l1', C=0.1)\nlr.fit(X_train_std, y_train)\nprint('Training accuracy:', lr.score(X_train_std, y_train))\nprint('Test accuracy:', lr.score(X_test_std, y_test))\nprint('切片:', lr.intercept_)\nprint('重み係数:', lr.coef_)", "上記で切片が3つあるが、3種類のクラス(葡萄)を見分けるため、1つ目はクラス1対クラス2,3に適合するモデルの切片(2つ目以降同様)となっている。\n重み係数も3×13の行列で、クラスごとに重みベクトルが含まれる。\n総入力\\( z \\)は、各重みに対して特徴量をかける\n\n$$ z = w_1x_1 + ... + w_mx_m + b = \\sum_{j=1}^m x_jw_j + b = {\\boldsymbol w^Tx} + b $$\nL1正則化により殆どの重みが0となったため、無関係な特徴量に対しても頑健なモデルになった。\n以下は正則化パス(正則化の強さに対する特徴量の重み係数)のグラフ。", "import matplotlib.pyplot as plt\n\nfig = plt.figure()\nax = plt.subplot(111)\n \ncolors = ['blue', 'green', 'red', 'cyan', \n 'magenta', 'yellow', 'black', \n 'pink', 'lightgreen', 'lightblue', \n 'gray', 'indigo', 'orange']\n\nweights, params = [], []\nfor c in np.arange(-4., 6.):\n lr = LogisticRegression(penalty='l1', C=10.**c, random_state=0)\n lr.fit(X_train_std, y_train)\n weights.append(lr.coef_[1])\n params.append(10.**c)\n\nweights = np.array(weights)\n\nfor column, color in zip(range(weights.shape[1]), colors):\n plt.plot(params, weights[:, column],\n label=df_wine.columns[column + 1],\n color=color)\nplt.axhline(0, color='black', linestyle='--', linewidth=3)\nplt.xlim([10**(-5), 10**5])\nplt.ylabel('weight coefficient')\nplt.xlabel('C')\nplt.xscale('log')\nplt.legend(loc='upper left')\nax.legend(loc='upper center', \n bbox_to_anchor=(1.38, 1.03),\n ncol=1, fancybox=True)\n# plt.savefig('./figures/l1_path.png', dpi=300)\nplt.show()", 
"逐次特徴選択アルゴリズム\n\n特徴選択は次元削減法の一つ\n逐次特徴選択は貪欲探索アルゴリズムの一つ\n貪欲探索アルゴリズムは、d次元の特徴空間をk次元に削減するために使用される\n\n特徴選択の目的\n\n関連データのみを計算することによる計算効率の改善\n無関係なノイズを取り除くことによる汎化誤差の削減\n\n逐次後退選択(Sequential Backward Selection: SBS)\n\n特徴量を逐次的に削除していく\n削除する特徴量は評価関数\\( J \\)によって決め、性能の低下が最も少ない特徴量を削除する\n\nステップは以下の通り\n\nアルゴリズムを\\( k=d \\)で初期化する。\\( d \\)は全体の特徴空間\\( X_d \\)の次元数を表す。\n\\( J \\)の評価を最大化する特徴量\\( x^- \\)を決定する。\\( x \\)は\\( x \\in X_k \\)である\n$$ x^- = argmax J(X_k-x) $$\n特徴量の集合から特徴量\\( x^- \\)を削除する\n$$ x_{k-1} = X_k - x^-;k := k-1 $$\n\\( k \\)が目的とする特徴量の個数に等しくなれば終了する。そうでなければ、ステップ2に戻る。", "from sklearn.base import clone\nfrom itertools import combinations\nimport numpy as np\nfrom sklearn.metrics import accuracy_score\nfrom sklearn.model_selection import train_test_split\n\nclass SBS():\n '''逐次後退選択を実行するクラス\n \n Parameters\n ---------\n estimator : 推定器\n k_features : 選択する特徴量の個数\n scoring : 特徴量を評価する指標\n test_size : テストデータの割合\n random_state : 乱数シード\n '''\n def __init__(self, estimator, k_features, scoring=accuracy_score,\n test_size=0.25, random_state=1):\n self.scoring = scoring\n self.estimator = clone(estimator)\n self.k_features = k_features\n self.test_size = test_size\n self.random_state = random_state\n\n def fit(self, X, y):\n X_train, X_test, y_train, y_test = \\\n train_test_split(X, y, test_size=self.test_size,\n random_state=self.random_state)\n\n dim = X_train.shape[1]\n self.indices_ = tuple(range(dim))\n self.subsets_ = [self.indices_]\n # 全ての特徴量を用いてスコアを算出する\n score = self._calc_score(X_train, y_train,\n X_test, y_test, self.indices_)\n self.scores_ = [score]\n # 指定した特徴量の個数になるまで処理\n while dim > self.k_features:\n scores = []\n subsets = []\n # 特徴量の部分集合を表す列インデックスの組み合わせごとに反復\n for p in combinations(self.indices_, r=dim - 1):\n score = self._calc_score(X_train, y_train,\n X_test, y_test, p)\n scores.append(score)\n subsets.append(p)\n # 一番良いスコアを抽出\n best = np.argmax(scores)\n self.indices_ = subsets[best]\n self.subsets_.append(self.indices_)\n # 特徴量の個数を1つ減らす\n dim -= 1\n\n 
self.scores_.append(scores[best])\n self.k_score_ = self.scores_[-1]\n\n return self\n\n def transform(self, X):\n return X[:, self.indices_]\n\n def _calc_score(self, X_train, y_train, X_test, y_test, indices):\n self.estimator.fit(X_train[:, indices], y_train)\n y_pred = self.estimator.predict(X_test[:, indices])\n score = self.scoring(y_test, y_pred)\n return score\n\nfrom sklearn.neighbors import KNeighborsClassifier\nimport matplotlib.pyplot as plt\n\nknn = KNeighborsClassifier(n_neighbors=2)\nsbs = SBS(knn, k_features=1)\nsbs.fit(X_train_std, y_train)\n\n# 近傍点の個数のリスト\nk_feat = [len(k) for k in sbs.subsets_]\nplt.plot(k_feat, sbs.scores_, marker='o')\nplt.ylim([0.7, 1.1])\nplt.ylabel('Accuracy')\nplt.xlabel('Number of features')\nplt.grid()\nplt.show()\n\n# 上記で100%の正答率を出した5つの特徴量を調べる\nk5 = list(sbs.subsets_[8])\nprint(df_wine.columns[1:][k5])\n\n# 特徴量の削減の様子\nsbs.subsets_\n\n# 全特徴量を使用した場合\nknn.fit(X_train_std, y_train)\nprint('Training accuracy:', knn.score(X_train_std, y_train))\nprint('Test accuracy:', knn.score(X_test_std, y_test))\n\n# 5角特徴量を使用した場合\nknn.fit(X_train_std[:, k5], y_train)\nprint('Training accuracy:', knn.score(X_train_std[:, k5], y_train))\nprint('Test accuracy:', knn.score(X_test_std[:, k5], y_test))", "ランダムフォレストで特徴量の重要度にアクセスする\n\nフォレスト内の全ての決定木から計算された不純度の平均的な減少量として特徴量の重要度を測定できる。\nscikit-learnではfeature_importances_属性を使って値を取得できる", "from sklearn.ensemble import RandomForestClassifier\n\nfeat_labels = df_wine.columns[1:]\nforest = RandomForestClassifier(n_estimators=10000, random_state=0, n_jobs=-1)\nforest.fit(X_train, y_train)\n# 重要度を抽出\nimportances = forest.feature_importances_\nindices = np.argsort(importances)[::-1]\nfor f in range(X_train.shape[1]):\n print(\"%2d) %-*s %f\" % (f + 1, 30, feat_labels[indices[f]], importances[indices[f]]))\n\nplt.title('Feature Importances')\nplt.bar(range(X_train.shape[1]), importances[indices], color='lightblue', align='center')\nplt.xticks(range(X_train.shape[1]), feat_labels[indices], 
rotation=90)\nplt.xlim([-1, X_train.shape[1]])\nplt.tight_layout()\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
GoogleCloudPlatform/training-data-analyst
courses/machine_learning/deepdive/10_recommend/labs/hybrid_recommendations/hybrid_recommendations_preproc.ipynb
apache-2.0
[ "Neural network hybrid recommendation system on Google Analytics data preprocessing\nThis notebook demonstrates how to implement a hybrid recommendation system using a neural network to combine content-based and collaborative filtering recommendation models using Google Analytics data. We are going to use the learned user embeddings from wals.ipynb and combine that with our previous content-based features from content_based_using_neural_networks.ipynb\nFirst we are going to preprocess our data using BigQuery and Cloud Dataflow to be used in our later neural network hybrid recommendation model.\nApache Beam only works in Python 2 at the moment, so we're going to switch to the Python 2 kernel. In the above menu, click the dropdown arrow and select python2.", "# Import helpful libraries and setup our project, bucket, and region\nimport os\n\nPROJECT = \"cloud-training-demos\" # REPLACE WITH YOUR PROJECT ID\nBUCKET = \"cloud-training-demos-ml\" # REPLACE WITH YOUR BUCKET NAME\nREGION = \"us-central1\" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1\n\n# Do not change these\nos.environ[\"PROJECT\"] = PROJECT\nos.environ[\"BUCKET\"] = BUCKET\nos.environ[\"REGION\"] = REGION\nos.environ[\"TFVERSION\"] = \"1.13\"\n\n%%bash\ngcloud config set project $PROJECT\ngcloud config set compute/region $REGION", "<h2> Create ML dataset using Dataflow </h2>\nLet's use Cloud Dataflow to read in the BigQuery data, do some preprocessing, and write it out as CSV files.\nFirst, let's create our hybrid dataset query that we will use in our Cloud Dataflow pipeline. 
This will combine some content-based features and the user and item embeddings learned from our WALS Matrix Factorization Collaborative filtering lab that we extracted from our trained WALSMatrixFactorization Estimator and uploaded to BigQuery.", "query_hybrid_dataset = \"\"\"\nWITH CTE_site_history AS (\n SELECT\n fullVisitorId as visitor_id,\n (SELECT MAX(IF(index = 10, value, NULL)) FROM UNNEST(hits.customDimensions)) AS content_id,\n (SELECT MAX(IF(index = 7, value, NULL)) FROM UNNEST(hits.customDimensions)) AS category, \n (SELECT MAX(IF(index = 6, value, NULL)) FROM UNNEST(hits.customDimensions)) AS title,\n (SELECT MAX(IF(index = 2, value, NULL)) FROM UNNEST(hits.customDimensions)) AS author_list,\n SPLIT(RPAD((SELECT MAX(IF(index = 4, value, NULL)) FROM UNNEST(hits.customDimensions)), 7), '.') AS year_month_array,\n LEAD(hits.customDimensions, 1) OVER (PARTITION BY fullVisitorId ORDER BY hits.time ASC) AS nextCustomDimensions\n FROM \n `cloud-training-demos.GA360_test.ga_sessions_sample`, \n UNNEST(hits) AS hits\n WHERE \n # only include hits on pages\n hits.type = \"PAGE\"\n AND\n fullVisitorId IS NOT NULL\n AND\n hits.time != 0\n AND\n hits.time IS NOT NULL\n AND\n (SELECT MAX(IF(index = 10, value, NULL)) FROM UNNEST(hits.customDimensions)) IS NOT NULL\n),\nCTE_training_dataset AS (\n SELECT\n (SELECT MAX(IF(index=10, value, NULL)) FROM UNNEST(nextCustomDimensions)) AS next_content_id,\n\n visitor_id,\n content_id,\n category,\n REGEXP_REPLACE(title, r\",\", \"\") AS title,\n REGEXP_EXTRACT(author_list, r\"^[^,]+\") AS author,\n DATE_DIFF(DATE(CAST(year_month_array[OFFSET(0)] AS INT64), CAST(year_month_array[OFFSET(1)] AS INT64), 1), DATE(1970, 1, 1), MONTH) AS months_since_epoch\n FROM\n CTE_site_history\n WHERE\n (SELECT MAX(IF(index=10, value, NULL)) FROM UNNEST(nextCustomDimensions)) IS NOT NULL)\n\nSELECT\n CAST(next_content_id AS STRING) AS next_content_id,\n\n CAST(training_dataset.visitor_id AS STRING) AS visitor_id,\n 
CAST(training_dataset.content_id AS STRING) AS content_id,\n CAST(IFNULL(category, 'None') AS STRING) AS category,\n CONCAT(\"\\\\\"\", REPLACE(TRIM(CAST(IFNULL(title, 'None') AS STRING)), \"\\\\\"\",\"\"), \"\\\\\"\") AS title,\n CAST(IFNULL(author, 'None') AS STRING) AS author,\n CAST(months_since_epoch AS STRING) AS months_since_epoch,\n\n IFNULL(user_factors._0, 0.0) AS user_factor_0,\n IFNULL(user_factors._1, 0.0) AS user_factor_1,\n IFNULL(user_factors._2, 0.0) AS user_factor_2,\n IFNULL(user_factors._3, 0.0) AS user_factor_3,\n IFNULL(user_factors._4, 0.0) AS user_factor_4,\n IFNULL(user_factors._5, 0.0) AS user_factor_5,\n IFNULL(user_factors._6, 0.0) AS user_factor_6,\n IFNULL(user_factors._7, 0.0) AS user_factor_7,\n IFNULL(user_factors._8, 0.0) AS user_factor_8,\n IFNULL(user_factors._9, 0.0) AS user_factor_9,\n\n IFNULL(item_factors._0, 0.0) AS item_factor_0,\n IFNULL(item_factors._1, 0.0) AS item_factor_1,\n IFNULL(item_factors._2, 0.0) AS item_factor_2,\n IFNULL(item_factors._3, 0.0) AS item_factor_3,\n IFNULL(item_factors._4, 0.0) AS item_factor_4,\n IFNULL(item_factors._5, 0.0) AS item_factor_5,\n IFNULL(item_factors._6, 0.0) AS item_factor_6,\n IFNULL(item_factors._7, 0.0) AS item_factor_7,\n IFNULL(item_factors._8, 0.0) AS item_factor_8,\n IFNULL(item_factors._9, 0.0) AS item_factor_9,\n\n FARM_FINGERPRINT(CONCAT(CAST(visitor_id AS STRING), CAST(content_id AS STRING))) AS hash_id\nFROM\n CTE_training_dataset AS training_dataset\nLEFT JOIN\n `cloud-training-demos.GA360_test.user_factors` AS user_factors\n ON CAST(training_dataset.visitor_id AS FLOAT64) = CAST(user_factors.user_id AS FLOAT64)\nLEFT JOIN\n `cloud-training-demos.GA360_test.item_factors` AS item_factors\n ON CAST(training_dataset.content_id AS STRING) = CAST(item_factors.item_id AS STRING)\n\"\"\"", "Let's pull a sample of our data into a dataframe to see what it looks like.", "from google.cloud import bigquery\nbq = bigquery.Client(project = PROJECT)\ndf_hybrid_dataset = 
bq.query(query_hybrid_dataset + \"LIMIT 100\").to_dataframe()\ndf_hybrid_dataset.head()\n\ndf_hybrid_dataset.describe()\n\nimport apache_beam as beam\nimport datetime, os\n\ndef to_csv(rowdict):\n # Pull columns from BQ and create a line\n import hashlib\n import copy\n CSV_COLUMNS = \"next_content_id,visitor_id,content_id,category,title,author,months_since_epoch\".split(\",\")\n FACTOR_COLUMNS = [\"user_factor_{}\".format(i) for i in range(10)] + [\"item_factor_{}\".format(i) for i in range(10)]\n\n # Write out rows for each input row for each column in rowdict\n data = \",\".join([\"None\" if k not in rowdict else (rowdict[k].encode(\"utf-8\") if rowdict[k] is not None else \"None\") for k in CSV_COLUMNS])\n data += \",\"\n data += \",\".join([str(rowdict[k]) if k in rowdict else \"None\" for k in FACTOR_COLUMNS])\n yield (\"{}\".format(data))\n \ndef preprocess(in_test_mode):\n import shutil, os, subprocess\n job_name = \"preprocess-hybrid-recommendation-features\" + \"-\" + datetime.datetime.now().strftime(\"%y%m%d-%H%M%S\")\n\n if in_test_mode:\n print(\"Launching local job ... hang on\")\n OUTPUT_DIR = \"./preproc/features\"\n shutil.rmtree(OUTPUT_DIR, ignore_errors=True)\n os.makedirs(OUTPUT_DIR)\n else:\n print(\"Launching Dataflow job {} ... 
hang on\".format(job_name))\n OUTPUT_DIR = \"gs://{0}/hybrid_recommendation/preproc/features/\".format(BUCKET)\n try:\n subprocess.check_call(\"gsutil -m rm -r {}\".format(OUTPUT_DIR).split())\n except:\n pass\n\n options = {\n \"staging_location\": os.path.join(OUTPUT_DIR, \"tmp\", \"staging\"),\n \"temp_location\": os.path.join(OUTPUT_DIR, \"tmp\"),\n \"job_name\": job_name,\n \"project\": PROJECT,\n \"teardown_policy\": \"TEARDOWN_ALWAYS\",\n \"no_save_main_session\": True\n }\n opts = beam.pipeline.PipelineOptions(flags = [], **options)\n if in_test_mode:\n RUNNER = \"DirectRunner\"\n else:\n RUNNER = \"DataflowRunner\"\n p = beam.Pipeline(RUNNER, options = opts)\n \n query = query_hybrid_dataset\n\n if in_test_mode:\n query = query + \" LIMIT 100\" \n\n for step in [\"train\", \"eval\"]:\n if step == \"train\":\n selquery = \"SELECT * FROM ({}) WHERE ABS(MOD(hash_id, 10)) < 9\".format(query)\n else:\n selquery = \"SELECT * FROM ({}) WHERE ABS(MOD(hash_id, 10)) = 9\".format(query)\n\n (p \n | \"{}_read\".format(step) >> beam.io.Read(beam.io.BigQuerySource(query = selquery, use_standard_sql = True))\n | \"{}_csv\".format(step) >> beam.FlatMap(to_csv)\n | \"{}_out\".format(step) >> beam.io.Write(beam.io.WriteToText(os.path.join(OUTPUT_DIR, \"{}.csv\".format(step))))\n )\n\n job = p.run()\n if in_test_mode:\n job.wait_until_finish()\n print(\"Done!\")\n \npreprocess(in_test_mode = False)", "Let's check our files to make sure everything went as expected", "%%bash\nrm -rf features\nmkdir features\n\n!gsutil -m cp -r gs://{BUCKET}/hybrid_recommendation/preproc/features/*.csv* features/\n\n!head -3 features/*", "<h2> Create vocabularies using Dataflow </h2>\n\nLet's use Cloud Dataflow to read in the BigQuery data, do some preprocessing, and write it out as CSV files.\nNow we'll create our vocabulary files for our categorical features.", "query_vocabularies = \"\"\"\nSELECT\n CAST((SELECT MAX(IF(index = index_value, value, NULL)) FROM UNNEST(hits.customDimensions)) AS 
STRING) AS grouped_by\nFROM `cloud-training-demos.GA360_test.ga_sessions_sample`,\n UNNEST(hits) AS hits\nWHERE\n # only include hits on pages\n hits.type = \"PAGE\"\n AND (SELECT MAX(IF(index = index_value, value, NULL)) FROM UNNEST(hits.customDimensions)) IS NOT NULL\nGROUP BY\n grouped_by\n\"\"\"\n\nimport apache_beam as beam\nimport datetime, os\n\ndef to_txt(rowdict):\n # Pull columns from BQ and create a line\n\n # Write out rows for each input row for grouped by column in rowdict\n return \"{}\".format(rowdict[\"grouped_by\"].encode(\"utf-8\"))\n \ndef preprocess(in_test_mode):\n import shutil, os, subprocess\n job_name = \"preprocess-hybrid-recommendation-vocab-lists\" + \"-\" + datetime.datetime.now().strftime(\"%y%m%d-%H%M%S\")\n\n if in_test_mode:\n print(\"Launching local job ... hang on\")\n OUTPUT_DIR = \"./preproc/vocabs\"\n shutil.rmtree(OUTPUT_DIR, ignore_errors=True)\n os.makedirs(OUTPUT_DIR)\n else:\n print(\"Launching Dataflow job {} ... hang on\".format(job_name))\n OUTPUT_DIR = \"gs://{0}/hybrid_recommendation/preproc/vocabs/\".format(BUCKET)\n try:\n subprocess.check_call(\"gsutil -m rm -r {}\".format(OUTPUT_DIR).split())\n except:\n pass\n\n options = {\n \"staging_location\": os.path.join(OUTPUT_DIR, \"tmp\", \"staging\"),\n \"temp_location\": os.path.join(OUTPUT_DIR, \"tmp\"),\n \"job_name\": job_name,\n \"project\": PROJECT,\n \"teardown_policy\": \"TEARDOWN_ALWAYS\",\n \"no_save_main_session\": True\n }\n opts = beam.pipeline.PipelineOptions(flags = [], **options)\n if in_test_mode:\n RUNNER = \"DirectRunner\"\n else:\n RUNNER = \"DataflowRunner\"\n\n p = beam.Pipeline(RUNNER, options = opts)\n \n def vocab_list(index, name):\n query = query_vocabularies.replace(\"index_value\", \"{}\".format(index))\n\n (p \n | \"{}_read\".format(name) >> beam.io.Read(beam.io.BigQuerySource(query = query, use_standard_sql = True))\n | \"{}_txt\".format(name) >> beam.Map(to_txt)\n | \"{}_out\".format(name) >> 
beam.io.Write(beam.io.WriteToText(os.path.join(OUTPUT_DIR, \"{0}_vocab.txt\".format(name))))\n )\n\n # Call vocab_list function for each\n vocab_list(10, \"content_id\") # content_id\n vocab_list(7, \"category\") # category\n vocab_list(2, \"author\") # author\n \n job = p.run()\n if in_test_mode:\n job.wait_until_finish()\n print(\"Done!\")\n \npreprocess(in_test_mode = False)", "Also get vocab counts from the length of the vocabularies", "import apache_beam as beam\nimport datetime, os\n\ndef count_to_txt(rowdict):\n # Pull columns from BQ and create a line\n\n # Write out count\n return \"{}\".format(rowdict[\"count_number\"])\n \ndef mean_to_txt(rowdict):\n # Pull columns from BQ and create a line\n\n # Write out mean\n return \"{}\".format(rowdict[\"mean_value\"])\n \ndef preprocess(in_test_mode):\n import shutil, os, subprocess\n job_name = \"preprocess-hybrid-recommendation-vocab-counts\" + \"-\" + datetime.datetime.now().strftime(\"%y%m%d-%H%M%S\")\n\n if in_test_mode:\n print(\"Launching local job ... hang on\")\n OUTPUT_DIR = \"./preproc/vocab_counts\"\n shutil.rmtree(OUTPUT_DIR, ignore_errors=True)\n os.makedirs(OUTPUT_DIR)\n else:\n print(\"Launching Dataflow job {} ... 
hang on\".format(job_name))\n OUTPUT_DIR = \"gs://{0}/hybrid_recommendation/preproc/vocab_counts/\".format(BUCKET)\n try:\n subprocess.check_call(\"gsutil -m rm -r {}\".format(OUTPUT_DIR).split())\n except:\n pass\n\n options = {\n \"staging_location\": os.path.join(OUTPUT_DIR, \"tmp\", \"staging\"),\n \"temp_location\": os.path.join(OUTPUT_DIR, \"tmp\"),\n \"job_name\": job_name,\n \"project\": PROJECT,\n \"teardown_policy\": \"TEARDOWN_ALWAYS\",\n \"no_save_main_session\": True\n }\n opts = beam.pipeline.PipelineOptions(flags = [], **options)\n if in_test_mode:\n RUNNER = \"DirectRunner\"\n else:\n RUNNER = \"DataflowRunner\"\n\n p = beam.Pipeline(RUNNER, options = opts)\n \n def vocab_count(index, column_name):\n query = \"\"\"\n SELECT\n COUNT(*) AS count_number\n FROM ({})\n \"\"\".format(query_vocabularies.replace(\"index_value\", \"{}\".format(index)))\n\n (p \n | \"{}_read\".format(column_name) >> beam.io.Read(beam.io.BigQuerySource(query = query, use_standard_sql = True))\n | \"{}_txt\".format(column_name) >> beam.Map(count_to_txt)\n | \"{}_out\".format(column_name) >> beam.io.Write(beam.io.WriteToText(os.path.join(OUTPUT_DIR, \"{0}_vocab_count.txt\".format(column_name))))\n )\n \n def global_column_mean(column_name):\n query = \"\"\"\n SELECT\n AVG(CAST({1} AS FLOAT64)) AS mean_value\n FROM ({0})\n \"\"\".format(query_hybrid_dataset, column_name)\n\n (p \n | \"{}_read\".format(column_name) >> beam.io.Read(beam.io.BigQuerySource(query = query, use_standard_sql = True))\n | \"{}_txt\".format(column_name) >> beam.Map(mean_to_txt)\n | \"{}_out\".format(column_name) >> beam.io.Write(beam.io.WriteToText(os.path.join(OUTPUT_DIR, \"{0}_mean.txt\".format(column_name))))\n )\n \n # Call vocab_count function for each column we want the vocabulary count for\n vocab_count(10, \"content_id\") # content_id\n vocab_count(7, \"category\") # category\n vocab_count(2, \"author\") # author\n\n # Call global_column_mean function for each column we want the mean for\n 
global_column_mean(\"months_since_epoch\") # months_since_epoch\n \n job = p.run()\n if in_test_mode:\n job.wait_until_finish()\n print(\"Done!\")\n \npreprocess(in_test_mode = False)", "Let's check our files to make sure everything went as expected", "%%bash\nrm -rf vocabs\nmkdir vocabs\n\n!gsutil -m cp -r gs://{BUCKET}/hybrid_recommendation/preproc/vocabs/*.txt* vocabs/\n\n!head -3 vocabs/*\n\n%%bash\nrm -rf vocab_counts\nmkdir vocab_counts\n\n!gsutil -m cp -r gs://{BUCKET}/hybrid_recommendation/preproc/vocab_counts/*.txt* vocab_counts/\n\n!head -3 vocab_counts/*" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
fluxcapacitor/source.ml
jupyterhub.ml/notebooks/train_deploy/zz_under_construction/tensorflow/optimize/05_Train_Model_Distributed.ipynb
apache-2.0
[ "Train Model on Distributed Cluster\nIMPORTANT: You Must STOP All Kernels and Terminal Session\nThe GPU is wedged at this point. We need to set it free!!\n\nDefine ClusterSpec", "import tensorflow as tf\n\ncluster = tf.train.ClusterSpec({\"local\": [\"localhost:2222\", \"localhost:2223\"]})", "Start Server \"Task 0\" (localhost:2222)", "server0 = tf.train.Server(cluster, job_name=\"local\", task_index=0)\n\nprint(server0)", "Start Server \"Task 1\" (localhost:2223)", "server1 = tf.train.Server(cluster, job_name=\"local\", task_index=1)\n\nprint(server1)", "Define a Computationally-intensive TensorFlow Graph", "import tensorflow as tf\n\nn = 2\nc1 = tf.Variable([])\nc2 = tf.Variable([])\n\ndef matpow(M, n):\n if n < 1: \n return M\n else:\n return tf.matmul(M, matpow(M, n-1))", "Assign Devices Manually\nAll CPU Devices\nNote the execution time.", "import datetime\n\nwith tf.device(\"/job:local/task:0/cpu:0\"):\n A = tf.random_normal(shape=[1000, 1000])\n c1 = matpow(A,n)\n\nwith tf.device(\"/job:local/task:1/cpu:0\"):\n B = tf.random_normal(shape=[1000, 1000])\n c2 = matpow(B,n)\n\nwith tf.Session(\"grpc://127.0.0.1:2222\") as sess:\n sum = c1 + c2\n start_time = datetime.datetime.now()\n print(sess.run(sum))\n print(\"Execution time: \" \n + str(datetime.datetime.now() - start_time))\n ", "CPU and GPU\nNote the execution time.", "with tf.device(\"/job:local/task:0/gpu:0\"):\n A = tf.random_normal(shape=[1000, 1000])\n c1 = matpow(A,n)\n\nwith tf.device(\"/job:local/task:1/cpu:0\"):\n B = tf.random_normal(shape=[1000, 1000])\n c2 = matpow(B,n)\n\nwith tf.Session(\"grpc://127.0.0.1:2222\") as sess:\n sum = c1 + c2\n start_time = datetime.datetime.now()\n print(sess.run(sum))\n print(\"Execution time: \" \n + str(datetime.datetime.now() - start_time))", "All GPU Devices\nNote the execution time.", "with tf.device(\"/job:local/task:0/gpu:0\"):\n A = tf.random_normal(shape=[1000, 1000])\n c1 = matpow(A,n)\n\nwith tf.device(\"/job:local/task:1/gpu:0\"):\n B = 
tf.random_normal(shape=[1000, 1000])\n    c2 = matpow(B,n)\n\nwith tf.Session(\"grpc://127.0.0.1:2222\") as sess:\n    sum = c1 + c2\n    start_time = datetime.datetime.now()\n    print(sess.run(sum))\n    print(\"Execution time: \" \n          + str(datetime.datetime.now() - start_time))", "Auto-assign Device by TensorFlow (Round-Robin by Default)\nNote the execution time.", "# The cluster spec above only defines the job \"local\", so the worker devices\n# must be addressed as /job:local/task:N (a /job:worker device does not exist)\nwith tf.device(tf.train.replica_device_setter(worker_device=\"/job:local/task:0\",\n                                              cluster=cluster)):\n    A = tf.random_normal(shape=[1000, 1000])\n    c1 = matpow(A,n)\n\nwith tf.device(tf.train.replica_device_setter(worker_device=\"/job:local/task:1\",\n                                              cluster=cluster)):\n    B = tf.random_normal(shape=[1000, 1000])\n    c2 = matpow(B,n)\n\nwith tf.Session(\"grpc://127.0.0.1:2222\") as sess:\n    sum = c1 + c2\n    start_time = datetime.datetime.now()\n    print(sess.run(sum))\n    print(\"Multi node computation time: \" \n          + str(datetime.datetime.now() - start_time))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tesera/pygypsy
notebooks/#32-address-testing-findings/#32-isolated-profiling-3.ipynb
mit
[ "Recap\nIn order of priority/time taken\n\nbasalareaincremementnonspatialaw\nthis is actually slow because of the number of times the BAFromZeroToDataAw function is called as shown above\nrelaxing the tolerance may help\nindeed the tolerance is 0.01 * some value while the other factor finder functions have 0.1 tolerance i think\ncan also use cython for the increment functions\n\n\nvectorize merch and gross volume functions\nthey require a lot of getting scalars off data frame, which is quite slow. faster to get an array\n\n\n\ndo a profiling run with IO (of reading input data and writing the plot curves to files) in the next run\nDecide on the action\n\nspeed up increment functions\nuse cython for increment functions\nit turns out this may not help that much. the function is pretty fast, it's called almost 500,000 times on the sample of 300 plots\nreduce the number of times it's called, maybe by using gradient descent for the optimization?\nrelax the tolerance\ngives a context to refactor them as well (into their own module) which would be a welcome change\nthe increment functions use numpy functions but operate on scalars, there is no benefit to using numpy functions there\n\n\nperformance-wise, it is not clear that this will pay off so much. vectorizing the volume functions is probably wiser\n\nCharacterize what is happening", "import pandas as pd\nimport numpy as np", "The original gross volume function checks that top height is greater than 0\n``` python\ndef GrossTotalVolume_Pl(BA_Pl, topHeight_Pl):\n    Tvol_Pl = 0\n    if topHeight_Pl > 0:\n        a1 = 0.194086\n        a2 = 0.988276\n        a3 = 0.949346\n        a4 = -3.39036\n        Tvol_Pl = a1* (BA_Pl**a2) * (topHeight_Pl **a3) * numpy.exp(1+(a4/((topHeight_Pl**2)+1)))\n\n    return Tvol_Pl\n```\nThis makes it fail when trying to use it on an array:", "from gypsy.GYPSYNonSpatial import GrossTotalVolume_Pl\n\n\nGrossTotalVolume_Pl(np.random.random(10) * 100, np.random.random(10) * 100)", "MWEs\nIf we can rewrite it to handle 0s properly, i.e. 
to return 0 where an input is 0, then it is trivial to vectorize", "def GrossTotalVolume_Pl_arr(BA_Pl, topHeight_Pl):\n    a1 = 0.194086\n    a2 = 0.988276\n    a3 = 0.949346\n    a4 = -3.39036\n    Tvol_Pl = a1* (BA_Pl**a2) * (topHeight_Pl **a3) * np.exp(1+(a4/((topHeight_Pl**2)+1)))\n\n    return Tvol_Pl\n\nprint(GrossTotalVolume_Pl_arr(10, 10))\nprint(GrossTotalVolume_Pl_arr(0, 10))\nprint(GrossTotalVolume_Pl_arr(10, 0))\nprint(GrossTotalVolume_Pl_arr(np.random.random(10) * 100, np.random.random(10) * 100))\nprint(GrossTotalVolume_Pl_arr(np.zeros(10) * 100, np.random.random(10) * 100))", "Timings", "ba = np.random.random(1000) * 100\ntop_height = np.random.random(1000) * 100\nd = pd.DataFrame({'ba': ba, 'th': top_height})\n\n%%timeit\nd.apply(\n    lambda x: GrossTotalVolume_Pl(\n        x.at['ba'],\n        x.at['th']\n    ),\n    axis=1\n)\n\n%%timeit\nGrossTotalVolume_Pl_arr(ba, top_height)", "The array method is 20x faster. This is worth implementing. We should also add tests to help be explicit about the behaviour of these volume functions.\nRevise the code\nGo on. Do it.\nTests\nYes, though data was changed as the new implementations yield NaN where input is NaN, instead of yielding 0.\nReview code changes", "%%bash\ngit log --since \"2016-11-14 19:30\" --oneline # 19:30 GMT/UTC\n\n! 
git diff \"HEAD~$(git log --since \"2016-11-14 19:30\" --oneline | wc -l)\" ../gypsy", "Run timings\nFrom last time:\nreal 5m36.407s\nuser 5m25.740s\nsys 0m2.140s\nAfter cython'ing iter functions:", "%%bash\n# git checkout 36941343aca2df763f93192abef461093918fff4 -b vectorize-volume-functions\n# time gypsy simulate ../private-data/prepped_random_sample_300.csv --output-dir tmp\n# rm -rfd tmp\n\n# real\t4m51.287s\n# user\t4m41.770s\n# sys\t0m1.070s\n\n45/336.", "It yielded a 13% reduction in the time.\nRun profiling", "from gypsy.forward_simulation import simulate_forwards_df\n\ndata = pd.read_csv('../private-data/prepped_random_sample_300.csv', index_col=0, nrows=10)\n\n%%prun -D forward-sim-3.prof -T forward-sim-3.txt -q\nresult = simulate_forwards_df(data)\n\n!head forward-sim-3.txt", "Compare performance visualizations\nNow use either of these commands to visualize the profiling\n```\npyprof2calltree -k -i forward-sim-1.prof forward-sim-3.txt\nor\ndc run --service-ports snakeviz notebooks/forward-sim-3.prof\n```\nOld\n\nNew\n\nSummary of performance improvements\nThe calculation of gross and merchantable volume is drastically faster now; under profiling it decreased to 1 second from 22 seconds.\nA lot of that seems to be profiler overhead, as when using the gypsy simulate CLI it only got 15% faster; however I expect i/o is obfuscating the outcome there.\nProfile with I/O", "! 
rm -rfd gypsy-output\n\nimport os\n\noutput_dir = 'gypsy-output'\n\n%%prun -D forward-sim-2.prof -T forward-sim-2.txt -q\n# restart the kernel first\ndata = pd.read_csv('../private-data/prepped_random_sample_300.csv', index_col=0, nrows=10)\nresult = simulate_forwards_df(data)\nos.makedirs(output_dir)\nfor plot_id, df in result.items():\n    filename = '%s.csv' % plot_id\n    output_path = os.path.join(output_dir, filename)\n    df.to_csv(output_path)\n", "Identify new areas to optimize\n\nfrom last time:\nparallel (3 cores) gets us to 2 - 6 days - save for last\nAWS with 36 cores gets us to 4 - 12 hours ($6.70 - $20.10 USD on a c4.8xlarge instance in US West Region)\naws lambda and split up the data \n\n\nnow:\ncython for increment functions especially BA\ni/o" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
eds-uga/cbio4835-sp17
lectures/Lecture12.ipynb
mit
[ "Lecture 12: Functions\nCBIO (CSCI) 4835/6835: Introduction to Computational Biology\nOverview and Objectives\nIn this lecture, we'll introduce the concept of functions, critical abstractions in nearly every modern programming language. Functions are important for abstracting and categorizing large codebases into smaller, logical, and human-digestible components. By the end of this lecture, you should be able to:\n\nDefine a function that performs a specific task\nSet function arguments and return values\nDifferentiate positional arguments from keyword arguments\nConstruct functions that take any number of arguments, in positional or key-value format\n\nPart 1: Defining Functions\nA function in Python is not very different from a function as you've probably learned since algebra.\n\"Let $f$ be a function of $x$\"...sound familiar? We're basically doing the same thing here.\nA function ($f$) will [usually] take something as input ($x$), perform some kind of operation on it, and then [usually] return a result ($y$). Which is why we usually see $f(x) = y$. A function, then, is composed of three main components:\n1: The function itself. A [good] function will have one very specific task it performs. This task is usually reflected in its name. Take the examples of print, or sqrt, or exp, or log; all these names are very clear about what the function does.\n2: Arguments (if any). Arguments (or parameters) are the input to the function. It's possible a function may not take any arguments at all, but often at least one is required. For example, print has 1 argument: a string.\n3: Return values (if any). Return values are the output of the function. It's possible a function may not return anything; technically, print does not return anything. 
But common math functions like sqrt or log have clear return values: the output of that math operation.\nPhilosophy\nA core tenet in writing functions is that functions should do one thing, and do it well (with apologies to the Unix Philosophy).\nWriting good functions makes code much easier to troubleshoot and debug, as the code is already logically separated into components that perform very specific tasks. Thus, if your application is breaking, you usually have a good idea where to start looking.\nWARNING: It's very easy to get caught up writing \"god functions\": one or two massive functions that essentially do everything you need your program to do. But if something breaks, this design is very difficult to debug.\nFunctions vs Methods\nYou've probably heard the term \"method\" before, in this class. Quite often, these two terms are used interchangeably, and for our purposes they are pretty much the same.\nBUT. These terms ultimately identify different constructs, so it's important to keep that in mind. Specifically:\n\n\nMethods are functions inside classes (not really covered in this course).\n\n\nFunctions are not inside classes. In some sense, they're \"free\" (though they may be found inside specific modules; however, since a module != a class, they're still called functions).\n\n\nOtherwise, functions and methods work identically.\nSo how do we write functions? At this point in the course, you've probably already seen how this works, but we'll go through it step by step regardless.\nFirst, we define the function header. This is the portion of the function that defines the name of the function, the arguments, and uses the Python keyword def to make everything official:", "def our_function():\n pass", "That's everything we need for a working function! Let's walk through it.", "def our_function():\n pass", "def keyword: required before writing any function, to tell Python \"hey! 
this is a function!\"\nFunction name: one word (can \"fake\" spaces with underscores), which is the name of the function and how we'll refer to it later\nArguments: a comma-separated list of arguments the function takes to perform its task. If no arguments are needed (as above), then just open-paren-close-paren.\nColon: the colon indicates the end of the function header and the start of the actual function's code.\npass: since Python is sensitive to whitespace, we can't leave a function body blank; luckily, there's the pass keyword that does pretty much what it sounds like--no operation at all, just a placeholder.\n\nAdmittedly, our function doesn't really do anything interesting. It takes no parameters, and the function body consists exclusively of a placeholder keyword that also does nothing. Still, it's a perfectly valid function!", "# Call the function!\n\nour_function()\n\n# Nothing happens...no print statement, no computations, nothing. But there's no error either...so, yay?", "Other notes on functions\n\n\nYou can define functions (as we did just before) almost anywhere in your code. As we'll see when we get to functional programming, you can literally define functions in the middle of a line of code. Still, good coding practices behooves you to generally group your function definitions together, e.g. at the top of your module.\n\n\nInvoking or activating a function is referred to as calling the function.\n\n\nFunctions can be part of modules. You've already seen some of these in action: the numpy.array() functionality is indeed a function.\n\n\nThough not recommended, it's possible to import only select functions from a module, so you no longer have to specify the module name in front of the function name when calling the function. This uses the from keyword during import:", "from numpy import array", "Now the array() method can be called directly without prepending the package name numpy in front. 
USE THIS CAUTIOUSLY: if you accidentally name a variable array later in your code, you will get some very strange errors!\nPart 2: Function Arguments\nArguments (or parameters), as stated before, are the function's input; the \"$x$\" to our \"$f$\", as it were.\nYou can specify as many arguments as want, separating them by commas:", "def one_arg(arg1):\n pass\n\ndef two_args(arg1, arg2):\n pass\n\ndef three_args(arg1, arg2, arg3):\n pass\n\n# And so on...", "Like functions, you can name the arguments anything you want, though also like functions you'll probably want to give them more meaningful names besides arg1, arg2, and arg3. When these become just three functions among hundreds in a massive codebase written by dozens of different people, it's helpful when the code itself gives you hints as to what it does.\nWhen you call a function, you'll need to provide the same number of arguments in the function call as appear in the function header, otherwise Python will yell at you.", "one_arg(\"some arg\")\n\ntwo_args(\"some arg\")\n\ntwo_args(\"some arg\", \"another arg\")", "To be fair, it's a pretty easy error to diagnose, but still something to keep in mind--especially as we move beyond basic \"positional\" arguments (as they are so called in the previous error message) into optional arguments.\nDefault arguments\n\"Positional\" arguments--the only kind we've seen so far--are required. If the function header specifies a positional argument, then every single call to that functions needs to have that argument specified.\nThere are cases, however, where it can be helpful to have optional, or default, arguments. 
In this case, when the function is called, the programmer can decide whether or not they want to override the default values.\nYou can specify default arguments in the function header:", "def func_with_default_arg(positional, default = 10):\n print(\"'\" + positional + \"' with default arg '\" + str(default) + \"'\")\n\nfunc_with_default_arg(\"Input string\")\nfunc_with_default_arg(\"Input string\", default = 999)", "If you look through the NumPy online documentation, you'll find most of its functions have entire books' worth of default arguments.\nThe numpy.array function we've been using has quite a few; the only positional (required) argument for that function is some kind of list/array structure to wrap a NumPy array around. Everything else it tries to figure out on its own, unless the programmer explicitly specifies otherwise.", "import numpy as np\nx = np.array([1, 2, 3])\ny = np.array([1, 2, 3], dtype = float) # Specifying the data type of the array, using \"dtype\"\n\nprint(x)\nprint(y)", "Notice the decimal points that follow the values in the second array! 
This is NumPy's way of showing that these numbers are floats, not integers!\nKeyword Arguments\nKeyword arguments are something of a superset of positional and default arguments.\nBy the names, positional seems to imply a relationship with position (specifically, position in the list of arguments), and default seems obvious enough: it takes on a default value unless otherwise specified.\nKeyword arguments can overlap with both, in that they can be either required or default, but provide a nice utility by which you can ensure the variable you're passing into a function is taking on the exact value you want it to.\nLet's take the following function.", "def pet_names(name1, name2):\n    print(\"Pet 1: \" + name1)\n    print(\"Pet 2: \" + name2)\n\npet1 = \"King\"\npet2 = \"Reginald\"\npet_names(pet1, pet2)\npet_names(pet2, pet1)", "In this example, we switched the ordering of the arguments between the two function calls; consequently, the ordering of the arguments inside the function was also flipped. Hence, positional: position matters.\nIn contrast, Python also has keyword arguments, where order no longer matters as long as you specify the keyword.\nWe can use the same function as before, pet_names, only this time we'll use the names of the arguments themselves (aka, keywords):", "pet1 = \"Rocco\"\npet2 = \"Lucy\"\n\npet_names(name1 = pet1, name2 = pet2)\npet_names(name2 = pet2, name1 = pet1)", "As you can see, we used the names of the arguments from the function header itself, setting them equal to the variable we wanted to use for that argument.\nConsequently, order doesn't matter--Python can see that, in both function calls, we're setting name1 = pet1 and name2 = pet2.\nEven though keyword arguments somewhat obviate the need for strictly positional arguments, keyword arguments are extremely useful when it comes to default arguments.\nIf you take a look at any NumPy API--even the documentation for numpy.array--there are LOTS of default arguments. 
Trying to remember their ordering is a pointless task. What's much easier is to simply remember the name of the argument--the keyword--and use that to override any default argument you want to change.\nOrdering of the keyword arguments doesn't matter; that's why we can specify some of the default parameters by keyword, leaving others at their defaults, and Python doesn't complain.\nHere's an important distinction, though:\n\n\nDefault (optional) arguments are always keyword arguments, but...\n\n\nPositional (required) arguments MUST come before default arguments!\n\n\nIn essence, when using the argument keywords, you can't mix-and-match the ordering of positional and default arguments.\n(you can't really mix-and-match the ordering of positional and default arguments anyway, so hopefully this isn't a rude awakening)\nHere's an example of this behavior in action:", "# Here's our function with a default argument.\ndef pos_def(x, y = 10):\n return x + y\n\n# Using keywords in the same order they're defined is totally fine.\nz = pos_def(x = 10, y = 20)\nprint(z)\n\n# Mixing their ordering is ok, as long as I'm specifying the keywords.\nz = pos_def(y = 20, x = 10)\nprint(z)\n\n# Only specifying the default argument is a no-no.\nz = pos_def(y = 20)\nprint(z)", "Arbitrary Argument Lists\nThere are instances where you'll want to pass in an arbitrary number of arguments to a function, a number which isn't known until the function is called and could change from call to call!\nOn one hand, you could consider just passing in a single list, thereby obviating the need. That's more or less what actually happens here, but the syntax is a tiny bit different.\nHere's an example: a function which lists out pizza toppings. 
Note the format of the input argument(s):", "def make_pizza(*toppings):\n print(\"Making a pizza with the following toppings:\")\n for topping in toppings:\n print(\" - \" + topping)\n\nmake_pizza(\"pepperoni\")\nmake_pizza(\"pepperoni\", \"banana peppers\", \"green peppers\", \"mushrooms\")", "Inside the function, it's basically treated as a list: in fact, it is a list.\nSo why not just make the input argument a single variable which is a list?\nConvenience.\nIn some sense, it's more intuitive to the programmer calling the function to just list out a bunch of things, rather than putting them all in a list structure first.\nPart 3: Return Values\nJust as functions [can] take input, they also [can] return output for the programmer to decide what to do with.\nAlmost any function you will ever write will most likely have a return value of some kind. If not, your function may not be \"well-behaved\", aka sticking to the general guideline of doing one thing very well.\nThere are certainly some cases where functions won't return anything--functions that just print things, functions that run forever (yep, they exist!), functions designed specifically to test other functions--but these are highly specialized cases we are not likely to encounter in this course. Keep this in mind as a \"rule of thumb.\"\nTo return a value from a function, just use the return keyword:", "def identity_function(in_arg):\n return in_arg\n\nx = \"this is the function input\"\nreturn_value = identity_function(x)\nprint(return_value)", "This is pretty basic: the function returns back to the programmer as output whatever was passed into the function as input. 
Hence, \"identity function.\"\nAnything you can pass in as function parameters, you can return as function output, including lists:", "def compute_square(number):\n square = number ** 2\n return square\n\nstart = 3\nend = compute_square(start)\nprint(\"Square of \" + str(start) + \" is \" + str(end))", "You can even return multiple values simultaneously from a function. They're just treated as tuples!", "import numpy.random as r\n\ndef square_and_rand(number):\n square = compute_square(number)\n rand_num = r.randint(0, 100)\n return rand_num, square\n\nretvals = square_and_rand(3)\nprint(retvals)", "This two-way communication that functions enable--arguments as input, return values as output--is an elegant and powerful way of allowing you to design modular and human-understandable code.\nPart 4: A Note on Modifying Arguments\nThis is arguably one of the trickiest parts of programming, so please ask questions if you're having trouble.\nLet's start with an example to illustrate what's this is. Take the following code:", "def magic_function(x):\n x = 20\n print(\"Inside function: x = \" + str(x))\n\nx = 10\nprint(\"Before calling 'magic_function': x = \" + str(x))\n\n# Now, let's call magic_function(). What is x = ?\n\nmagic_function(x)", "Once the function finishes running, what is the value of x?", "print(x)", "It prints 10. Can anyone explain why?\nLet's take another, slightly different, example.", "def magic_function2(x):\n x[0] = 20\n print(\"Inside function: x = \" + str(x))\n\nx = [10, 10]\nprint(\"Before function: x = \" + str(x))\n\n# Now, let's call magic_function2(x). What is x = ?\n\nmagic_function2(x)", "Once the function finishes running, what is the value of x?", "print(x)", "It prints [20, 10]. 
Can anyone explain why?\nThis is one of the trickiest aspects of programming and isn't something I want to get into (look up pass by value and pass by reference if you're curious about the theory).\nHowever, I bring this up because you still need to understand good programming practices when writing functions so your code doesn't do weird things.\n\n\nIn general, when you write functions that accept arguments, you should NOT modify the arguments themselves.\n\n\nInstead, treat them as constants, and return any new values you want to use later.\n\n\nAdministrivia\n\n\nHow were the guest lecturers last week?\n\n\nHow was Assignment 2?\n\n\nAssignment 3 is out! Due Thursday, February 23 (last assignment before the midterm!).\n\n\nAdditional Resources\n\nMatthes, Eric. Python Crash Course. 2016. ISBN-13: 978-1593276034" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
phungkh/phys202-2015-work
assignments/assignment03/NumpyEx01.ipynb
mit
[ "Numpy Exercise 1\nImports", "import numpy as np\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\nimport antipackage\nimport github.ellisonbg.misc.vizarray as va", "Checkerboard\nWrite a Python function that creates a square (size,size) 2d Numpy array with the values 0.0 and 1.0:\n\nYour function should work for both odd and even size.\nThe 0,0 element should be 1.0.\nThe dtype should be float.", "def checkerboard(size):\n    \"\"\"Return a 2d checkerboard of 0.0 and 1.0 as a NumPy array\"\"\"\n    a = np.zeros((size,size))\n    for x in range(size):\n        for y in range(size):\n            if (x%2==0 and y%2==0) or (x%2!=0 and y%2!=0):\n                a[x,y] = 1\n            else:\n                a[x,y] = 0\n    return a\n\nprint(checkerboard(10))\n\na = checkerboard(4)\nassert a[0,0]==1.0\nassert a.sum()==8.0\nassert a.dtype==np.dtype(float)\nassert np.all(a[0,0:5:2]==1.0)\nassert np.all(a[1,0:5:2]==0.0)\n\nb = checkerboard(5)\nassert b[0,0]==1.0\nassert b.sum()==13.0\nassert np.all(b.ravel()[0:26:2]==1.0)\nassert np.all(b.ravel()[1:25:2]==0.0)", "Use vizarray to visualize a checkerboard of size=20 with a block size of 10px.", "va.set_block_size(10)\n\nva.vizarray(checkerboard(20))\n\nassert True", "Use vizarray to visualize a checkerboard of size=27 with a block size of 5px.", "va.set_block_size(5)\nva.vizarray(checkerboard(27))\n\nassert True" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ARM-software/lisa
ipynb/deprecated/examples/android/workloads/Android_Viewer.ipynb
apache-2.0
[ "Generic Android viewer", "from conf import LisaLogging\nLisaLogging.setup()\n\n%pylab inline\n\nimport json\nimport os\n\n# Support to access the remote target\nimport devlib\nfrom env import TestEnv\n\n# Import support for Android devices\nfrom android import Screen, Workload, System, ViewerWorkload\nfrom target_script import TargetScript\n\n# Support for trace events analysis\nfrom trace import Trace\n\n# Support for FTrace events parsing and visualization\nimport trappy\n\nimport pandas as pd\nimport sqlite3\n\nfrom IPython.display import display", "Test environment setup\nFor more details on this please check out examples/utils/testenv_example.ipynb.\ndevlib requires the ANDROID_HOME environment variable configured to point to your local installation of the Android SDK. If you do not have this variable configured in the shell used to start the notebook server, you need to run a cell to define where your Android SDK is installed, or specify ANDROID_HOME in your target configuration.\nIf more than one Android device is connected to the host, you must specify the ID of the device you want to target in my_target_conf. 
Run adb devices on your host to get the ID.", "# Setup target configuration\nmy_conf = {\n\n # Target platform and board\n \"platform\" : 'android',\n \"board\" : 'hikey960',\n \n # Device serial ID\n # Not required if there is only one device connected to your computer\n \"device\" : \"0123456789ABCDEF\",\n \n # Android home\n # Not required if already exported in your .bashrc\n #\"ANDROID_HOME\" : \"/home/vagrant/lisa/tools/\",\n\n # Folder where all the results will be collected\n \"results_dir\" : \"Viewer_example\",\n\n # Define devlib modules to load\n \"modules\" : [\n 'cpufreq' # enable CPUFreq support\n ],\n\n # FTrace events to collect for all the tests configuration which have\n # the \"ftrace\" flag enabled\n \"ftrace\" : {\n \"events\" : [\n \"sched_switch\",\n \"sched_wakeup\",\n \"sched_wakeup_new\",\n \"sched_overutilized\",\n \"sched_load_avg_cpu\",\n \"sched_load_avg_task\",\n \"sched_load_waking_task\",\n \"cpu_capacity\",\n \"cpu_frequency\",\n \"cpu_idle\",\n \"sched_tune_config\",\n \"sched_tune_tasks_update\",\n \"sched_tune_boostgroup_update\",\n \"sched_tune_filter\",\n \"sched_boost_cpu\",\n \"sched_boost_task\",\n \"sched_energy_diff\"\n ],\n \"buffsize\" : 100 * 1024,\n },\n\n # Tools required by the experiments\n \"tools\" : [ 'trace-cmd', 'taskset'],\n}\n\n# Initialize a test environment using:\nte = TestEnv(my_conf, wipe=False)\ntarget = te.target", "Workload definition\nThe Viewer workload will simply read an URI and let Android pick the best application to view the item designated by that URI. That item could be a web page, a photo, a pdf, etc. For instance, if given an URL to a Google Maps location, the Google Maps application will be opened at that location. If the device doesn't have Google Play Services (e.g. HiKey960), it will open Google Maps through the default web browser.\nThe Viewer class is intended to be subclassed to customize your workload. 
There are pre_interact(), interact() and post_interact() methods that are made to be overridden.\nIn this case we'll simply execute a script on the target to swipe around a location on Gmaps. This script is generated using the TargetScript class, which is used here on System.{h,v}swipe() calls to accumulate commands instead of executing them directly. Those commands are then outputted to a script on the remote device, and that script is later on executed as the item is being viewed. See ${LISA_HOME}/libs/util/target_script.py", "class GmapsViewer(ViewerWorkload):\n \n def pre_interact(self):\n self.script = TargetScript(te, \"gmaps_swiper.sh\")\n\n # Define commands to execute during experiment\n for i in range(2):\n System.hswipe(self.script, 40, 60, 100, False)\n self.script.append('sleep 1')\n System.vswipe(self.script, 40, 60, 100, True)\n self.script.append('sleep 1')\n System.hswipe(self.script, 40, 60, 100, True)\n self.script.append('sleep 1')\n System.vswipe(self.script, 40, 60, 100, False)\n self.script.append('sleep 1')\n\n # Push script to the target\n self.script.push()\n \n def interact(self):\n self.script.run()\n\ndef experiment():\n # Configure governor\n target.cpufreq.set_all_governors('sched')\n \n # Get workload\n wload = Workload.getInstance(te, 'gmapsviewer')\n \n # Run workload\n wload.run(out_dir=te.res_dir,\n collect=\"ftrace\",\n uri=\"https://goo.gl/maps/D8Sn3hxsHw62\")\n \n # Dump platform descriptor\n te.platform_dump(te.res_dir)", "Workload execution", "results = experiment()\n\n# Load traces in memory (can take several minutes)\nplatform_file = os.path.join(te.res_dir, 'platform.json')\n\nwith open(platform_file, 'r') as fh:\n platform = json.load(fh)\n\ntrace_file = os.path.join(te.res_dir, 'trace.dat')\ntrace = Trace(trace_file, my_conf['ftrace']['events'], platform, normalize_time=False)", "Traces visualisation", "!kernelshark {trace_file} 2>/dev/null" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
yevheniyc/Projects
1j_NLP_Python/ex03.ipynb
mit
[ "Exercise 03: Splitting sentences and PoS annotation\nLet's start with a simple paragraph, copied from the course description:", "text = \"\"\"\nIncreasingly, customers send text to interact or leave comments, \nwhich provides a wealth of data for text mining. That’s a great \nstarting point for developing custom search, content recommenders, \nand even AI applications.\n\"\"\"\nrepr(text)", "Notice how there are explicit line breaks in the text. Let's write some code to flow the paragraph without any line breaks:", "text = \" \".join(map(lambda x: x.strip(), text.split(\"\\n\"))).strip()\nrepr(text)", "Now we can use TextBlob to split the paragraph into sentences:", "from textblob import TextBlob\n\nfor sent in TextBlob(text).sentences:\n print(\"> \", sent)", "Next we take a sentence and annotate it with part-of-speech (PoS) tags:", "import textblob_aptagger as tag\n\nsent = \"Increasingly, customers send text to interact or leave comments, which provides a wealth of data for text mining.\"\n\nts = tag.PerceptronTagger().tag(sent)\nprint(ts)", "Given these annotations for part-of-speech tags, we can lemmatize nouns and verbs to get their root forms. This will also singularize the plural nouns:", "from textblob import Word\n\nts = [('InterAct', 'VB'), ('comments', 'NNS'), ('provides', 'VBZ'), ('mining', 'NN')]\n\nfor lex, pos in ts:\n w = Word(lex.lower())\n lemma = w.lemmatize(pos[0].lower())\n print(lex, pos, lemma)", "We can also lookup synonyms and definitions for each word, using synsets from WordNet:", "from textblob.wordnet import VERB\n\nw = Word(\"comments\")\n\nfor synset, definition in zip(w.get_synsets(), w.define()):\n print(synset, definition)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
abevieiramota/data-science-cookbook
2017/06-linear-regression/resp_abelardo_mota.ipynb
mit
[ "Regressão Linear Simples - Trabalho\nEstudo de caso: Seguro de automóvel sueco\nAgora, sabemos como implementar um modelo de regressão linear simples. Vamos aplicá-lo ao conjunto de dados do seguro de automóveis sueco. Esta seção assume que você baixou o conjunto de dados para o arquivo insurance.csv, o qual está disponível no notebook respectivo.\nO conjunto de dados envolve a previsão do pagamento total de todas as reclamações em milhares de Kronor sueco, dado o número total de reclamações. É um dataset composto por 63 observações com 1 variável de entrada e 1 variável de saída. Os nomes das variáveis são os seguintes:\n\nNúmero de reivindicações.\nPagamento total para todas as reclamações em milhares de Kronor sueco.\n\nVoce deve adicionar algumas funções acessórias à regressão linear simples. Especificamente, uma função para carregar o arquivo CSV chamado load_csv (), uma função para converter um conjunto de dados carregado para números chamado str_column_to_float (), uma função para avaliar um algoritmo usando um conjunto de treino e teste chamado split_train_split (), a função para calcular RMSE chamado rmse_metric () e uma função para avaliar um algoritmo chamado evaluate_algorithm().\nUtilize um conjunto de dados de treinamento de 60% dos dados para preparar o modelo. As previsões devem ser feitas nos restantes 40%. 
\nCompare a performance do seu algoritmo com o algoritmo baseline, o qual utiliza a média dos pagamentos realizados para realizar a predição (a média é 72,251 mil Kronor).", "import pandas as pd\n\ndf = pd.read_csv(\"insurance.csv\", header=None, names=['r', 'p'])\ndf.head()", "Baseline model", "from sklearn.dummy import DummyRegressor\nfrom sklearn.model_selection import GridSearchCV\nfrom sklearn.model_selection import ShuffleSplit\n\ndm = DummyRegressor()\nparam_grid = {\"strategy\": [\"mean\", \"median\"]}\nss = ShuffleSplit(n_splits=1, test_size=.4, random_state=100)\n\ncv = GridSearchCV(dm, cv=ss, param_grid=param_grid, scoring=\"neg_mean_squared_error\")\ncv.fit(df[['r']], df['p'])\n\ncv.best_score_ * -1\n\ncv.best_params_", "Linear Regression Model", "from sklearn.linear_model import LinearRegression\nimport math\n\nln = LinearRegression()\n\ncv = GridSearchCV(ln, param_grid = {}, cv=ss, scoring=\"neg_mean_squared_error\")\ncv.fit(df[['r']], df['p'])\n\nmath.sqrt(cv.best_score_ * -1)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
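The regression notebook above compares a mean-only baseline against a fitted line via (negative) mean squared error. The same comparison can be sketched without scikit-learn; the toy data below is illustrative, not the Swedish insurance file:

```python
import math

def rmse(y_true, y_pred):
    """Root mean squared error between two equal-length sequences."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def fit_line(xs, ys):
    """Ordinary least squares for y = b0 + b1 * x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b1 = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    b0 = my - b1 * mx
    return b0, b1

xs = [1, 2, 3, 4, 5]
ys = [2, 4, 6, 8, 10]                      # perfectly linear toy data

baseline = [sum(ys) / len(ys)] * len(ys)    # DummyRegressor(strategy="mean") analogue
b0, b1 = fit_line(xs, ys)
preds = [b0 + b1 * x for x in xs]

print("baseline RMSE:", rmse(ys, baseline))
print("line RMSE:", rmse(ys, preds))
```

On real data the line will not fit exactly, but the gap between the two RMSE values is precisely what the notebook's `best_score_` comparison measures.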
simulkade/peteng
python/.ipynb_checkpoints/two_phase_1D_fipy_seq-checkpoint.ipynb
mit
[ "FiPy 1D two-phase flow in porous mediaq, 11 October, 2019\nDifferent approaches:\n * Coupled\n * Sequential\n * ...", "from fipy import Grid2D, CellVariable, FaceVariable\nimport numpy as np\n\n\ndef upwindValues(mesh, field, velocity):\n \"\"\"Calculate the upwind face values for a field variable\n\n Note that the mesh.faceNormals point from `id1` to `id2` so if velocity is in the same\n direction as the `faceNormal`s then we take the value from `id1`s and visa-versa.\n\n Args:\n mesh: a fipy mesh\n field: a fipy cell variable or equivalent numpy array\n velocity: a fipy face variable (rank 1) or equivalent numpy array\n \n Returns:\n numpy array shaped as a fipy face variable\n \"\"\"\n # direction is over faces (rank 0)\n direction = np.sum(np.array(mesh.faceNormals * velocity), axis=0)\n # id1, id2 are shaped as faces but contains cell index values\n id1, id2 = mesh._adjacentCellIDs\n return np.where(direction >= 0, field[id1], field[id2])\n\n\n# mesh = Grid2D(nx=3, ny=3)\n# print(\n# upwindValues(\n# mesh,\n# np.arange(mesh.numberOfCells),\n# 2 * np.random.random(size=(2, mesh.numberOfFaces)) - 1\n# )\n# )\n\n\nfrom fipy import *\n\n# relperm parameters\nswc = 0.0\nsor = 0.0\nkrw0 = 0.3\nkro0 = 1.0\nnw = 2.0\nno = 2.0\n\n# domain and boundaries\nk = 1e-12 # m^2\nphi = 0.4\nu = 1.e-5\np0 = 100e5 # Pa\nLx = 100.\nLy = 10.\nnx = 100\nny = 10\ndx = Lx/nx\ndy = Ly/ny\n\n# fluid properties\nmuo = 0.002\nmuw = 0.001\n\n# define the fractional flow functions\ndef krw(sw):\n res = krw0*((sw-swc)/(1-swc-sor))**nw\n return res\n\ndef dkrw(sw):\n res = krw0*nw/(1-swc-sor)*((sw-swc)/(1-swc-sor))**(nw-1)\n return res\n\n\ndef kro(sw):\n res = kro0*((1-sw-sor)/(1-swc-sor))**no\n return res\n\ndef dkro(sw):\n res = -kro0*no/(1-swc-sor)*((1-sw-sor)/(1-swc-sor))**(no-1)\n return res\n\ndef fw(sw):\n res = krw(sw)/muw/(krw(sw)/muw+kro(sw)/muo)\n return res\n\ndef dfw(sw):\n res = (dkrw(sw)/muw*kro(sw)/muo-krw(sw)/muw*dkro(sw)/muo)/(krw(sw)/muw+kro(sw)/muo)**2\n return 
res\n\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nsw_plot = np.linspace(swc, 1-sor, 50)", "Visualize the relative permeability and fractional flow curves", "krw_plot = [krw(sw) for sw in sw_plot]\nkro_plot = [kro(sw) for sw in sw_plot]\nfw_plot = [fw(sw) for sw in sw_plot]\n\nplt.figure(1)\nplt.plot(sw_plot, krw_plot, sw_plot, kro_plot)\nplt.show()\n\nplt.figure(2)\nplt.plot(sw_plot, fw_plot)\nplt.show()\n\n# create the grid\nmesh = Grid1D(dx = Lx/nx, nx = nx)\nx = mesh.cellCenters\n\n# create the cell variables and boundary conditions\nsw = CellVariable(mesh=mesh, name=\"saturation\", hasOld=True, value = swc)\np = CellVariable(mesh=mesh, name=\"pressure\", hasOld=True, value = p0)\nsw.setValue(1-sor,where = x<=dx)\n\nsw.constrain(1,mesh.facesLeft)\n#sw.constrain(0., mesh.facesRight)\nsw.faceGrad.constrain([0], mesh.facesRight)\np.faceGrad.constrain([-u/(krw(1-sor)*k/muw)], mesh.facesLeft)\np.constrain(p0, mesh.facesRight)\n# p.constrain(3.0*p0, mesh.facesLeft)\nu/(krw(1-sor)*k/muw)", "Equations\n$$\\nabla.\\left(\\left(-\\frac{k_{rw} k}{\\mu_w}-\\frac{k_{ro} k}{\\mu_o} \\right)\\nabla p \\right)=0$$ or\n$$\\varphi \\frac{\\partial S_w}{\\partial t}+\\nabla.\\left(-\\frac{k_{rw} k}{\\mu_w} \\nabla p \\right)=0$$", "# eq_p = DiffusionTerm(var=p, coeff=-k*(krw(sw.faceValue)/muw+kro(sw.faceValue)/muo))- \\\n# UpwindConvectionTerm(var=sw, coeff=-k*(dkrw(sw.faceValue)/muw+dkro(sw.faceValue)/muo)*p.faceGrad)- \\\n# (k*(dkrw(sw.faceValue)/muw+dkro(sw.faceValue)/muo)*sw.faceValue*p.faceGrad).divergence == 0\n\n# eq_sw = TransientTerm(coeff=phi, var=sw) + \\\n# DiffusionTerm(var=p, coeff=-k*krw(sw.faceValue)/muw)+ \\\n# UpwindConvectionTerm(var=sw, coeff=-k*dkrw(sw.faceValue)/muw*p.faceGrad)- \\\n# (-k*dkrw(sw.faceValue)/muw*p.faceGrad*sw.faceValue).divergence == 0\n\neq_p = DiffusionTerm(var=p, coeff=-k*(krw(sw.faceValue)/muw+kro(sw.faceValue)/muo)) == 0\n\neq_sw = TransientTerm(coeff=phi, var=sw) + \\\n(-k*krw(sw.faceValue)/muw*p.faceGrad).divergence == 
0\n\nsw_face = sw.faceValue\n\n# eq = eq_p & eq_sw\nsteps = 1000\ndt0 = 5000.\ndt = dt0\nt_end = steps*dt0\nt = 0.0\nviewer = Viewer(vars = sw, datamax=1.1, datamin=-0.1)\nwhile t<t_end:\n eq_p.solve(var=p)\n eq_sw.solve(var=sw, dt=dt0)\n sw.value[sw.value>1-sor]=1-sor\n sw.value[sw.value<swc]=swc\n p.updateOld()\n sw.updateOld()\n u_w = -k*krw(sw_face)/muw*p.faceGrad\n sw_face = FaceVariable(mesh, upwindValues(mesh, sw, u_w))\n sw_face.value[0] = 1.0\n eq_p = DiffusionTerm(var=p, coeff=-k*(krw(sw_face)/muw+kro(sw_face)/muo)) == 0\n eq_sw = TransientTerm(coeff=phi, var=sw) + (-k*krw(sw_face)/muw*p.faceGrad).divergence == 0\n t=t+dt0\n \n# Note: try to use the Appleyard method; the overflow is a result of wrong rel-perm values \nviewer.plot()\n\nsw_face.value[0] =1.0\nsw_face.value\n\n0.5>1-0.6\n\nupwindValues(mesh, sw, u_w)", "Analytical solution", "import fractional_flow as ff\nxt_shock, sw_shock, xt_prf, sw_prf, t, p_inj, R_oil = ff.frac_flow_wf(muw=muw, muo=muo, ut=u, phi=1.0, \\\n k=1e-12, swc=swc, sor=sor, kro0=kro0, no=no, krw0=krw0, \\\n nw=nw, sw0=swc, sw_inj=1.0, L=Lx, pv_inj=5.0)\n\nplt.figure()\nplt.plot(xt_prf, sw_prf)\nplt.plot(x.value.squeeze()/(steps*dt), sw.value)\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
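The `upwindValues` helper in the FiPy notebook above picks, for each face, the cell value on the upwind side of the velocity. In one dimension the rule reduces to a simple selection; this dependency-free sketch shows the idea, not FiPy's actual mesh machinery:

```python
def upwind_face_values(cell_values, face_velocities):
    """1-D upwinding: interior face i sits between cells i and i+1.

    If the velocity through the face is non-negative (flow to the right),
    take the value from the left cell; otherwise take the right cell.
    """
    faces = []
    for i, u in enumerate(face_velocities):   # one velocity per interior face
        left, right = cell_values[i], cell_values[i + 1]
        faces.append(left if u >= 0 else right)
    return faces

cells = [1.0, 0.8, 0.3, 0.1]       # e.g. a saturation profile behind a front
velocities = [0.5, 0.5, -0.2]      # one velocity per interior face
print(upwind_face_values(cells, velocities))
```

This is why the sequential loop in the notebook rebuilds `sw_face` from the water flux each step: the face saturation must follow the flow direction, or the scheme loses stability.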
fifabsas/talleresfifabsas
python/Extras/Incertezas/introduccion.ipynb
mit
[ "Taller de Python - Estadística en Física Experimental - 1er día\n\nEsta presentación/notebook está disponible:\nRepositorio Github FIFA BsAs (para descargarlo, usen el botón raw o hagan un fork del repositorio)\nPágina web de talleres FIFA BsAs\nProgramar ¿con qué se come?\nProgramar es dar una lista de tareas concretas a la computadora para que haga. Esencialmente, una computadora sabe:\n\nLeer datos \nEscribir datos \nTransformar datos\n\nY nada más que esto,. Así, la computadora pasa a ser suna gran gran calculadora que permite hacer cualquier tipo de cuenta de las que necesitemos dentro de la Física (y de la vida también) mientras sepamos cómo decirle a la máquina qué cómputos hacer. \nPero, ¿qué es Python?\nPython es un lenguaje para hablarle a la computadora, que se denominan lenguajes de programación. Este lenguaje, que puede ser escrito y entendido por la computadora debe ser transformado a un lenguaje que entieda la computadora (o un intermediario, que se denomina maquina virtual) así se hacen las transformaciones. Todo este modelo de programación lo podemos ver esquematizado en la figura siguiente\n<img src=\"modelo_computacional_python.png\" alt=\"Drawing\" style=\"width: 400px;\"/>\nHistoria\nPython nació en 1991, cuando su creador Guido Van Rossum lo hizo público en su versión 0.9. El lenguaje siempre buscó ser fácil de aprender y poder hacer tareas de todo tipo. Es fácil de aprender por su sintaxis, el tipado dinámico (que vamos a ver de que se trata) y además la gran cantidad de librerías/módulos para todo. \nHerramientas para el taller\nPara trabajar vamos a usar algún editor de texto (recomendamos Visual Studio Code, que viene con Anaconda), una terminal, o directamente el editor Spyder (que pueden buscarlo en las aplicaciones de la computadora si instalaron Anaconda o si lo instalaron en la PC del aula). 
También, si quieren podemos trabajar en un Jupyter Notebook, que permite hacer archivos como este (y hacer informes con código intercalado)\nEsto es a gusto del consumidor, sabemos usar todas esas herramientas. Cada una tiene sus ventajas y desventajas:\n- Escribir y ejecutar en consola no necesita instalar nada más que Python. Aprender a usar la consola da muchos beneficios de productividad\n- El editor o entorno de desarrollo al tener más funcionalidad es más pesado, y probablemente sea más caro (Pycharm, que es el entorno de desarrollo más completo de Python sale alrededor de 200 dolares... auch)\n- Jupyter notebook es un entorno muy interactivo, pero puede traer problemas en la lógica de ejecución. Hay que tener cuidado\nPara instalar Python, conviene descargarse Anaconda. Este proyecto corresponde a una distribución de Python, que al tener una interfaz grafica amigable y manejador de paquetes llamado conda te permite instalar todas las librerías científicas de una. En Linux y macOS instalar Python sin Anaconda es más fácil, en Windows diría que es una necesidad sin meterse en asuntos oscuros de compilación (y además que el soporte en Windows de las librerías no es tan amplio). \nExiste un proyecto llamado pyenv que en Linux y macOS permite instalar cualquier versión de Python. Si lo quieren tener (aunque para empezar Anaconda es mejor) pregunte que lo configuramos rápidamente.\nDatos, memoria y otras yerbas\nPara hacer cuentas, primero necesitamos el medio para guardar o almacenar los datos. El sector este se denomina memoria. Nuestros datos se guardan en espacios de memoria, y esos espacios tienen un nombre, un rótulo con el cual los podremos llamar y pedirle a la computadora que los utilice para operar con ellos, los modifique, etc. 
\nComo esos espacios son capaces de variar al avanzar los datos llegamos a llamarlos variables, y el proceso de llenar la variable con un valor se denomina asignación, que en Python se corresponde con el \"=\".\nHasta ahora sólo tenemos en la cabeza valores numéricos para nuestras variables, considerando la analogía de la super-calculadora. Pero esto no es así, y es más las variables en Python contienen la información adicional del tipo de dato. Este tipo de dato determina las operaciones posibles con la variable (además del tamaño en memoria, pero esto ya era esperable del mismo valor de la variable).\nVeamos un par de ejemplos", "x = 5\ny = 'Hola mundo!'\nz = [1,2,3]", "Aquí hemos guardado en un espacio de memoria llamado por nosotros \"x\" la información de un valor de tipo entero, 5, en otro espacio de memoria, que nosotros llamamos \"y\" guardamos el texto \"Hola mundo!\". En Python, las comillas indican que lo que encerramos con ellas es un texto. x no es un texto, así que Python lo tratará como variable para manipular. \"z\" es el nombre del espacio de memoria donde se almacena una lista con 3 elementos enteros.\nPodemos hacer cosas con esta información. Python es un lenguaje interpretado (a diferencia de otros como Java o C++), eso significa que ni bien nosotros le pedimos algo a Python, éste lo ejecuta. Así es que podremos pedirle por ejemplo que imprima en pantalla el contenido en y, el tipo de valor que es x (entero) entre otras cosas.", "print(y)\nprint(type(x))\nprint(type(y), type(z), len(z))", "Vamos a utilizar mucho la función type() para entender con qué tipo de variables estamos trabajando. type() es una función predeterminada por Python, y lo que hace es pedir como argumento (lo que va entre los paréntesis) una variable y devuelve inmediatamente el tipo de variable que es. \nEjercicio 1\nEn el siguiente bloque cree las variables \"dato1\" y \"dato2\" y guarde en ellas los textos \"estoy programando\" y \"que emocion!\". 
Con la función type() averigue qué tipo de datos se almacena en esas variables.", "# Realice el ejercicio 1", "Para las variables integers(enteros) y floats (flotantes) podemos hacer las operaciones matemáticas usuales y esperables. Veamos un poco las compatibilidades entre estos tipos de variables.", "a = 5\nb = 7\nc = 5.0\nd = 7.0\nprint(a+b, b+c, a*d, a/b, a/d, c**2)", "Ejercicio 2\nCalcule el resultado de $$ \\frac{(2+7.9)^2}{4^{7.4-3.14*9.81}-1} $$ y guárdelo en una variable", "# Realice el ejercicio 2. El resultado esperado es -98.01", "Listas, tuplas y diccionarios\nLas listas son cadenas de datos de cualquier tipo, unidos por estar en una misma variable, con posiciones dentro de esa lista, con las cuales nosotros podemos llamarlas. En Python, las listas se enumeran desde el 0 en adelante.\nEstas listas también tienen algunas operaciones que le son válidas.\nDistintas son las tuplas. Las listas son editables (en jerga, mutables), pero las tuplas no (inmutables). Esto es importante cuando, a lo largo del desarrollo de un código donde necesitamos que ciertas cosas no cambien, no editemos por error valores fundamentales de nuestro problema a resolver.", "lista1 = [1, 2, 'saraza']\nprint(lista1, type(lista1))\nprint(lista1[1], type(lista1[1]))\nprint(lista1[2], type(lista1[2]))\nprint(lista1[-1])\n\nlista2 = [2,3,4]\nlista3 = [5,6,7]\n#print(lista2+lista3)\nprint(lista2[2]+lista3[0])\n\ntupla1 = (1,2,3)\nlista4 = [1,2,3]\nlista4[2] = 0\nprint(lista4)\n#tupla1[0] = 0\nprint(tupla1)", "Hay formas muy cómodas de hacer listas. Presentamos una que utilizaremos mucho, que es usando la función range. 
Esta devuelve como una receta de como hacer los numeros; por lo tanto tenemos que decirle al generador que cree la lista, por medio de otra herramienta incorporada de Python, list", "listilla = list(range(10))\nprint(listilla, type(listilla))", "Cómo en general no se hace seguido esto, no existe una forma \"rápida\" o \"más elegante\" de hacerlo.\nEjercicio 3\n\n\nHaga una lista con los resultados de los últimos dos ejercicios y que la imprima en pantalla\n\n\nSobreescriba en la misma variable la misma lista pero con sus elementos permutados e imprima nuevamente la lista\n\n\nEjemplo de lo que debería mostrarse en pantalla\n['estoy programando', 'que emocion!', -98.01]\n['estoy programando', -98.01, 'que emocion!']", "# Realice el ejercicio 3", "Ejercicio 4\n\n\nHaga una lista con la función range de 15 elementos y sume los elementos 5, 10 y 12\n\n\nCon la misma lista, haga el producto de los primeros 4 elementos de esa lista\n\n\nCon la misma lista, reste el último valor con el primero", "# Realice el ejercicio 4", "Ahora, el titulo hablaba de diccionarios... pero no son los que usamos para buscar el significado de las palabras. ¡Aunque pueden ser parecidos o funcionar igual!.\nUn diccionario es un relación entre una variable llamada llave y otra variable llamado valor. Relación en el sentido de función que veíamos en el secundario, pero usualmente de forma discreta. \nLa magia es que sabiendo la llave, o key, ya tienes el valor, o value, por lo que podés usarlo como una lista pero sin usar indices si no cosas como cadenas. Las keys son únicas, y si quiero crear un diccionario con las mismas keys se van a pisar y queda la última aparición\nVeamos un ejemplo", "d = {\"hola\": 1, \"mundo\": 2, 0: \"numero\", (0, 1): [\"tupla\", 0, 1]} # Las llaves pueden ser casi cualquier cosa (lista no)\n\nprint(d, type(d))\n\nprint(d[\"hola\"])\nprint(d[0])\nprint(d[(0, 1)])\n\n# Podés setear una llave (o key) vieja\nd[0] = 10\n\n# O podes agregar una nueva. 
El orden de las llaves no es algo en qué confiar necesariamente, para eso está OrderedDict\nd[42] = \"La respuesta\"\n\n# Cambiamos el diccionario, así que aparecen nuevas keys y cambios de values\nprint(d)\n\n# Keys repetidas terminan siendo sobreescritas\nrep_d = {0: 1, 0: 2}\nprint(rep_d)\n\n# Otra cosas menor, un diccionario vacío es\nempt_d = {}\nprint(empt_d)", "Es particularmente mágico el diccionario y lo podes usar para muchisimas cosas (y además Python lo usa para casi todo internamente, así que está muy bueno saber usarlos!).\nEl largo de un diccionario es la cantidad de keys que tiene, por ejemplo", "new_d = {0: '0', '0': 0}\n\nprint(len(new_d))\n\n# Diccionario vacío\nprint(len({}))", "Ejercicio 5\nHaga un diccionario con tal que con el siguiente código\nprint(tu_dict[1] + tu_dict[\"FIFA\"] + tu_dict[(3,4)])\nImprima \"Programador, hola mundo!\". Puede tener todas las entradas que quieras, no hay limite de la creatividad acá", "# Realice el ejercicio 5\n\n# Descomente esta línea y a trabajar\n# print(tu_dict[1] + tu_dict[\"FIFA\"] + tu_dict[(3,4)])", "Booleans\nEste tipo de variable tiene sólo dos valores posibles: 1 y 0, o True y False. 
Las utilizaremos escencialmente para que Python reconozca relaciones entre números.", "print(5 > 4)\nprint(4 > 5)\nprint(4 == 5) #La igualdad matemática se escribe con doble ==\nprint(4 != 5) #La desigualdad matemática se escribe con !=\nprint(type(4 > 5))", "También podemos comparar listas, donde todas las entradas deberíán ser iguales", "print([1, 2, 3] == [1, 2, 3])\nprint([1, 2, 3] == [1, 3, 2])", "Lo mismo para tuplas (y aplica para diccionarios)", "print((0, 1) == (0, 1))\nprint((1, 3) == (0, 3))", "Con la función id() podemos ver si dos variables apuntan a la misma dirección de memoria, es decir podemos ver si dos variables tienen exactamente el mismo valor (aunque sea filosófico, en Python la diferencia es importante)", "a = 5\nb = a\nprint(id(a) == id(b))\n\na = 12 # Reutilizamos la variable, con un nuevo valor\nb = 12 \nprint(id(a) == id(b)) # Python cachea números de 16bits\n\na = 66000\nb = 66000\nprint(id(a) == id(b)) \n\n# No cachea listas, ni strings\na = [1, 2, 3]\nb = [1, 2, 3]\nprint(id(a) == id(b))\n\na = \"Python es lo más\"\nb = \"Python es lo más\"\nprint(id(a) == id(b))", "Las listas, tuplas y diccionarios también pueden devolver booleanos cuando se le pregunta si tiene o no algún elemento. Los diccionarios trabajaran sobre las llaves y las listas/tuplas sobre sus indices/valores", "nueva_l = [0, 42, 3]\nnueva_t = (2.3, 4.2); \nnuevo_d = {\"0\": -4, (0, 1): \"tupla\"}\n\n# La frase es \n# >>> x in collection\n# donde collection es una tupla, lista o diccionario. Parece inglés escrito no?\nprint(42 in nueva_l)\nprint(3 in nueva_t)\nprint((0,1) in nuevo_d)", "Ejercicio 6\nAverigue el resultado de 4!=5==1. 
¿Dónde pondría paréntesis para que el resultado fuera distinto?", "# Realice el ejercicio 5", "Control de flujo: condicionales e iteraciones (if y for para los amigos)\nSi en el fondo un programa es una serie de algoritmos que la computadora debe seguir, un conocimiento fundamental para programar es saber cómo pedirle a una computadora que haga operaciones si se cumple una condición y que haga otras si no se cumple. Nos va a permitir hacer programas mucho más complejos. Veamos entonces como aplicar un if.", "parametro = 5\nif parametro > 0: # un if inaugura un nuevo bloque indentado\n print('Tu parametro es {} y es mayor a cero'.format(parametro))\n print('Gracias')\nelse: # el else inaugura otro bloque indentado\n print('Tu parametro es {} y es menor o igual a cero'.format(parametro))\n print('Gracias')\nprint('Vuelva pronto')\nprint(' ')\n\nparametro = -5\nif parametro > 0: # un if inaugura un nuevo bloque indentado\n print('Tu parametro es {} y es mayor a cero'.format(parametro))\n print('Gracias')\nelse: # el else inaugura otro bloque indentado\n print('Tu parametro es {} y es menor o igual a cero'.format(parametro))\n print('Gracias')\nprint('Vuelva pronto')\nprint(' ')", "Ejercicio 7\nHaga un programa con un if que imprima la suma de dos números si un tercero es positivo, y que imprima la resta si el tercero es negativo.", "# Realice el ejercicio 7", "Para que Python repita una misma acción n cantidad de veces, utilizaremos la estructura for. En cada paso, nosotros podemos aprovechar el \"número de iteración\" como una variable. 
Eso nos servirá en la mayoría de los casos.", "nueva_lista = ['nada',1,2,'tres', 'cuatro', 7-2, 2*3, 7/1, 2**3, 3**2]\nfor i in range(10): # i es una variable que inventamos en el for, y que tomará los valores de la \n print(nueva_lista[i]) #lista que se genere con range(10)", "Ejercicio 8\n\nHaga otra lista con 16 elementos, y haga un programa que con un for imprima solo los primeros 7\nModifique el for anterior y haga que imprima solo los elementos pares de su lista", "# Realice el ejercicio 8", "La estructura while es poco recomendada en Python pero es importante saber que existe: consiste en repetir un paso mientras se cumpla una condición. Es como un for mezclado con un if.", "i = 1\nwhile i < 10: # tener cuidado con los while que se cumplen siempre. Eso daría lugar a los loops infinitos.\n i = i+1\n print(i)", "Ejercicio 9\n\nCalcule el factorial de N, siendo N la única variable que recibe la función (Se puede pensar usando for o usando while).\nCalcule la sumatoria de los elementos de una lista.", "# Realice el ejercicio 9", "Funciones\nPero si queremos definir nuestra propia manera de calcular algo, o si queremos agrupar una serie de órdenes bajo un mismo nombre, podemos definirnos nuestras propias funciones, pidiendo la cantidad de argumentos que querramos.\nVamos a usar las funciones lambda (también llamadas anónimas) más que nada para funciones matemáticas, aunque también tenga otros usos.
Definamos el polinomio $f(x) = x^2 - 5x + 6$ que tiene como raíces $x = 3$ y $x = 2$.", "f = lambda x: x**2 - 5*x + 6\nprint(f(3), f(2), f(0))", "Las funciones lambda son necesariamente funciones de una sola linea y también tienen que retornar nada; por eso son candidatas para expresiones matemáticas simples.\nLas otras funciones, las más generales, se las llama funciones def, y tienen la siguiente forma.", "def promedio(a,b,c):\n N = a + b + c # Es importante que toda la función tenga su contenido indentado\n N = N/3.0\n return N\nmipromedio = promedio(5,5,7) # Aquí rompimos la indentación\nprint(mipromedio)", "Algo muy interesante y curioso, es que podemos hacer lo siguiente con las funciones", "def otra_funcion(a, b):\n return a + b * 2\n\n# Es un valor!\notra_f = otra_funcion\nprint(otra_f)\nprint(type(otra_f))\n\nprint(otra_f(2, 3))", "Las funciones pueden ser variables y esto abre la puerta a muchas cosas. Si tienen curiosidad, pregunten que está re bueno esto!\nEjercicio 10\nHacer una función que calcule el promedio de $n$ elementos dados en una lista.\nSugerencia: utilizar las funciones len() y sum() como auxiliares.", "# Realice el ejercicio 9", "Ejercicio 11\nUsando lo que ya sabemos de funciones matemáticas y las bifurcaciones que puede generar un if, hacer una función que reciba los coeficientes $a, b, c$ de la parábola $f(x) = ax^2 + bx + c$ y calcule las raíces si son reales (es decir, usando el discriminante $\\Delta = b^2 - 4ac$ como criterio), y sino que imprima en pantalla una advertencia de que el cálculo no se puede hacer en $\\mathbb{R}$.", "# Realice el ejercicio 10", "Bonus track 1\nModificar la función anterior para que calcule las raíces de todos modos, aunque sean complejas. Python permite usar números complejos escritos de la forma 1 + 4j. Investiguen un poco", "# Bonus track 1", "Ejercicio 12\nRepitan el ejercicio 8, es decir\n1. 
Hacer una función que calcule el factorial de N, siendo N la única variable que recibe la función (Se puede pensar usando for o usando while).\n* Hacer una función que calcule la sumatoria de los elementos de una lista.\n¿Se les ocurre otra forma de hacer el factorial? Piensen la definición matemática y escribanla en Python, y prueben calcular el factorial de 100 con esta definición nueva", "# Realice el ejercicio 12", "Paquetes y módulos\nPero las operaciones básicas de suma, resta, multiplicación y división son todo lo que un lenguaje como Python puede hacer \"nativamente\". Una potencia o un seno es álgebra no lineal, y para hacerlo, habría que inventarse un algoritmo (una serie de pasos) para calcular por ejemplo sen($\\pi$). Pero alguien ya lo hizo, ya lo pensó, ya lo escribió en lenguaje Python y ahora todos podemos usar ese algoritmo sin pensar en él. Solamente hay que decirle a nuestro intérprete de Python dónde está guardado ese algoritmo. Esta posibilidad de usar algoritmos de otros es fundamental en la programación, porque es lo que permite que nuestro problema se limite solamente a entender cómo llamar a estos algoritmos ya pensados y no tener que pensarlos cada vez.\nVamos entonces a llamar a un paquete (como se le llama en Python) llamada math que nos va a extender nuestras posibilididades matemáticas.", "import math # Llamamos a una biblioteca\n\nr1 = math.pow(2,4)\nr2 = math.cos(math.pi)\nr3 = math.log(100,10)\nr4 = math.log(math.e)\n\nprint(r1, r2, r3, r4)", "Para entender cómo funcionan estas funciones, es importante recurrir a su documentation. La de esta biblioteca en particular se encuentra en\nhttps://docs.python.org/2/library/math.html\nEjercicio 13\nUse Python como calculadora y halle los resultados de\n\n$\\log(\\cos(2\\pi))$\n$\\text{atanh}(2^{\\cos(e)} -1) $\n$\\sqrt{x^2+2x+1}$ con $x = 125$", "# Realice el ejercicio 13", "Crear bibliotecas\nBueno, ahora que sabemos como usar bibliotecas, nos queda saber cómo podemos crearlas. 
Pero para saber eso, tenemos que saber que es un módulo en Python y cómo se relaciona con un paquete.\nSe le llama módulo a los archivos de Python, archivos con la extensión *.py, como por ejemplo taller_python.py (como tal vez algunos hicieron ya). En este archivo se agregan funciones, variables, etc, que pueden ser llamadas desde otro módulo con el nombre sin la extensión, es decir", "import taller_python # Vean el repositorio!", "Python para buscar estos módulos revisa si el módulo importado (con el comando import) está presente en la misma carpeta del que importa y luego en una serie de lugares estándares de Python (que se pueden alterar y revisar usando sys.path, importando el paquete sys). Si lo encuentra lo importa y podés usar las funciones, y si no puede salta una excepción", "print(taller_python.func(5, 6))\n\n# Veamos la documentación\nhelp(taller_python.func)", "Traten de importar la función __func_oculta. Se puede, pero es un hack de Python y la idea es que no sepa de ella. Es una forma de ocultar y encapsular código, que es uno de los principios de la programación orientada a objetos.\nFinalmente, un paquete como math es un conjunto de módulos ordenados en una carpeta con el nombre math, con un archivo especial __init__.py, que hace que la carpeta se comporte como un módulo. Python importa lo que vea en el archivo __init__.py y permite además importar los módulos dentro (o submodulos), si no tienen guiones bajos antes.\nUsualmente no es recomendable trabajar en el __init__.py, salvo que se tenga una razón muy necesaria (o simplemente vagancia)\nEjercicio 14\nCreen una libraría llamada mi_taller_python y agregen dos funciones, una que devuelva el resultado de $\\sqrt{x^2+2x+1}$ para cualquier x y otra que resuelva el resultado de $(x^2+2x+1)^{y}$, para cualquier x e y. 
Hagan todas las funciones ocultas que requieran (aunque recomendamos siempre minimizarlas)", "# Realice el ejercicio 14", "Bonus track 2\nAhora que nos animamos a buscar nuevas bibliotecas y definir funciones, buscar la función newton() de la biblioteca scipy.optimize para hallar $x$ tal que se cumpla la siguiente ecuación no lineal $$\\frac{1}{x} = \\ln(x)$$", "#Acá va el bonus track 2, para ya saborear la próxima clase", "Con esto terminamos la primera sesión del taller! Para la próxima vamos a aprender a manejar muchos datos al mismo tiempo, graficarlos y crear datos estadísticos, usando un par de librerías específicas del set científico de Python (numpy, scipy y matplotlib).\nEn nuestro repositorio en Github (https://github.com/fifabsas/talleresfifabsas) está colgado este material así como el de la próxima clase. Además tiene ejemplos (hasta de automatización de instrumental) y otras instancias de talleres que hemos dado a través del tiempo.\nLa importancia de las referencias\nPara más referencias pueden googlear.
Dejamos algunas de referencia:\nhttp://pybonacci.org/2012/06/07/algebra-lineal-en-python-con-numpy-i-operaciones-basicas/\nhttp://relopezbriega.github.io/blog/2015/06/14/algebra-lineal-con-python/\nhttp://pendientedemigracion.ucm.es/info/aocg/python/modulos_cientificos/numpy/index.html\nPero es importantísimo manejarse con la documentación de las bibliotecas que se utilizan\nhttps://docs.python.org/2/library/math.html\nhttp://docs.scipy.org/doc/numpy/reference/routines.linalg.html\nhttp://matplotlib.org/api/pyplot_api.html\nRecursos\nPara seguir profundizando con la programación en Python, ofrecemos distintos recursos\nUn tutorial: http://www.learnpython.org/\nHow to think like a computer scientist (aprendizaje interactivo): http://interactivepython.org/runestone/static/thinkcspy/index.html\nOtro tutorial, en inglés, pero muy completo: http://learnpythonthehardway.org/book\nCoursera, que nunca está de más: https://www.coursera.org/learn/interactive-python-1\nOtro más: https://es.coursera.org/learn/python\nY por fuera del taller, seguimos en contacto. Tenemos un grupo de Facebook donde pueden hacerse consultas y otros chicos que fueron al taller antes o aprendieron por sus medios podrán responderles. El grupo es https://www.facebook.com/groups/303815376436624/?fref=ts\nAgradecimientos\nTodo esto es posible gracias al aporte de mucha gente.\n* A los docentes de la materia, por darnos el espacio para ayudar y que se lleve a cabo este taller.\n* Gente muy copada del DF como Hernán Grecco, Guillermo Frank y Agustín Corbat por hacer aportes a estos talleres de diferentes maneras, desde poner su apellido para que nos presten un labo hasta venir como invitado a un taller.\n* El Departamento de Computación que cuatrimestre a cuatrimestre nos presta los labos desinteresadamente.\n* Pibes de la FIFA que prestan su tiempo a organizar el material y llevan a cabo el taller.\n* Todos los que se acercan y piden que estos talleres se sigan dando y nos siguen llenando los Labos. 
Sí ¡Gracias a todos ustedes!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
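Ejercicio 11 of the tutorial above asks for a function that takes the coefficients of $f(x) = ax^2 + bx + c$, uses the discriminant $\Delta = b^2 - 4ac$ as a guard, and returns the real roots or warns when none exist. One possible solution sketch (the function name and the warning text are this sketch's own choices):

```python
import math

def raices(a, b, c):
    """Real roots of a*x**2 + b*x + c, or None when the discriminant is negative."""
    delta = b ** 2 - 4 * a * c
    if delta < 0:
        print("El cálculo no se puede hacer en R")
        return None
    r1 = (-b + math.sqrt(delta)) / (2 * a)
    r2 = (-b - math.sqrt(delta)) / (2 * a)
    return r1, r2

print(raices(1, -5, 6))   # x**2 - 5x + 6 has roots 3 and 2
print(raices(1, 0, 1))    # x**2 + 1 has no real roots
```

The Bonus track 1 variant would replace the guard with `cmath.sqrt`, which returns complex roots instead of refusing.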
karlstroetmann/Formal-Languages
ANTLR4-Python/Interpreter/Interpreter.ipynb
gpl-2.0
[ "from IPython.core.display import HTML\nwith open (\"../../style.css\", \"r\") as file:\n    css = file.read()\nHTML(css)", "An Interpreter for a Simple Programming Language\nIn this notebook we develop an interpreter for a small programming language.\nThe grammar for this language is stored in the file Pure.g4.", "!type Pure.g4\n\n!cat -n Pure.g4", "The grammar shown above only contains skip actions. The corresponding grammar that is enriched with actions is stored in the file Simple.g4.\nAn example program that conforms to this grammar is stored in the file sum.sl.", "!type sum.sl \n\n!cat sum.sl", "The file Simple.g4 contains a parser for the language described by the grammar Pure.g4. This parser returns\nan abstract syntax tree. This tree is represented as a nested tuple.", "!type Simple.g4 \n\n!cat -n Simple.g4", "The parser shown above will transform the program sum.sl into the nested tuple stored in the file sum.ast.", "!type sum.ast \n\n!cat sum.ast\n\n!antlr4 -Dlanguage=Python3 Simple.g4\n\nfrom SimpleLexer import SimpleLexer\nfrom SimpleParser import SimpleParser\nimport antlr4\n\n%run ../AST-2-Dot.ipynb", "The function main takes one parameter file. This parameter is a string specifying the name of a file containing a program.\nThe function reads this program and executes it.", "def main(file):\n    with open(file, 'r') as handle:\n        program_text = handle.read()\n    input_stream = antlr4.InputStream(program_text)\n    lexer = SimpleLexer(input_stream)\n    token_stream = antlr4.CommonTokenStream(lexer)\n    parser = SimpleParser(token_stream)\n    result = parser.program()\n    Statements = result.stmnt_list\n    ast = tuple2dot(Statements)\n    print(Statements)\n    display(ast)\n    ast.render('ast', view=True)\n    execute_tuple(Statements)", "The function execute_tuple takes two arguments:\n- Statement_List is a list of statements,\n- Values is a dictionary assigning integer values to variable names.\nThe function executes the statements in Statement_List. 
If an assignment statement is executed,\nthe dictionary Values is updated.", "def execute_tuple(Statement_List, Values={}):\n    for stmnt in Statement_List:\n        execute(stmnt, Values)", "The function execute takes two arguments:\n- stmnt is a statement,\n- Values is a dictionary assigning values to variable names.\nThe function executes the single statement stmnt. If an assignment statement is executed,\nthe dictionary Values is updated.\nThe following trick can be used to split a list into its components.", "L = [1, 2, 3, 4, 5]\na, b, *R = L\na, b, R\n\ndef execute(stmnt, Values):\n    op = stmnt[0]\n    if stmnt == 'program':\n        pass\n    elif op == ':=':\n        _, var, value = stmnt\n        Values[var] = evaluate(value, Values)\n    elif op == 'print':\n        _, expr = stmnt\n        print(evaluate(expr, Values))\n    elif op == 'if':\n        _, test, *SL = stmnt\n        if evaluate(test, Values):\n            execute_tuple(SL, Values)\n    elif op == 'while':\n        _, test, *SL = stmnt\n        while evaluate(test, Values):\n            execute_tuple(SL, Values)\n    else:\n        assert False, f'{stmnt} unexpected'", "The function evaluate takes two arguments:\n- expr is a logical expression or an arithmetic expression,\n- Values is a dictionary assigning integer values to variable names.\nThe function evaluates the given expression and returns this value.", "def evaluate(expr, Values):\n    if isinstance(expr, int):\n        return expr\n    if isinstance(expr, str):\n        return Values[expr]\n    op = expr[0]\n    if op == 'read':\n        return int(input('Please enter a natural number:'))\n    if op == '==':\n        _, lhs, rhs = expr\n        return evaluate(lhs, Values) == evaluate(rhs, Values)\n    if op == '<':\n        _, lhs, rhs = expr\n        return evaluate(lhs, Values) < evaluate(rhs, Values)\n    if op == '+':\n        _, lhs, rhs = expr\n        return evaluate(lhs, Values) + evaluate(rhs, Values)\n    if op == '-':\n        _, lhs, rhs = expr\n        return evaluate(lhs, Values) - evaluate(rhs, Values)\n    if op == '*':\n        _, lhs, rhs = expr\n        return evaluate(lhs, Values) * evaluate(rhs, Values)\n    if op == '/':\n        _, lhs, rhs = expr\n        return evaluate(lhs, Values) / evaluate(rhs, Values)\n    assert False, f'{expr} unexpected'\n\n!type sum.sl\n\n!cat sum.sl\n\nmain('sum.sl')\n\n!type factorial.sl\n\n!cat factorial.sl\n\nmain('factorial.sl')\n\n!del *.py *.tokens *.interp\n!del *.pdf\n!del ast\n\n!rmdir /Q /S __pycache__\n\n!dir /B\n\n!rm *.py *.tokens *.interp\n!rm ast\n!rm -r __pycache__/\n!rm *.pdf\n\n!ls" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
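The tuple-walking pattern used by execute and evaluate above can be illustrated in isolation. The following sketch (illustrative names only, independent of the ANTLR-generated parser) evaluates arithmetic and comparison expressions given as nested tuples whose first element names the operator:

```python
# Minimal sketch of the nested-tuple evaluator pattern from the interpreter
# above. All names here are illustrative; nothing depends on ANTLR.

def evaluate(expr, values):
    """Evaluate an expression given as a nested tuple ('op', lhs, rhs)."""
    if isinstance(expr, int):      # integer literal
        return expr
    if isinstance(expr, str):      # variable reference
        return values[expr]
    op, lhs, rhs = expr            # every remaining operator is binary
    ops = {
        '==': lambda a, b: a == b,
        '<':  lambda a, b: a < b,
        '+':  lambda a, b: a + b,
        '-':  lambda a, b: a - b,
        '*':  lambda a, b: a * b,
        '/':  lambda a, b: a / b,
    }
    return ops[op](evaluate(lhs, values), evaluate(rhs, values))

# x * (x + 1) with x = 6
print(evaluate(('*', 'x', ('+', 'x', 1)), {'x': 6}))  # prints 42
```

Dispatching through a dictionary of operator lambdas replaces the chain of `if op == ...` tests without changing behavior.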
brainiak/brainiak
examples/reconstruct/iem2d_example.ipynb
apache-2.0
[ "import numpy as np\nfrom brainiak.reconstruct import iem as IEM\nimport matplotlib.pyplot as plt\nimport scipy.io", "In this example, we will assume that the stimuli are circular checkerboards presented in 2-dimensional visual space. We will build an encoding model that has a grid of 9x5 channels, or basis functions, which also span this 2D space.\nRead the documentation for the module to get further details on the IEM.\nAbout the data\nThe data and content of this notebook are adapted from the IEM tutorial written by Thomas Sprague & John Serences for MATLAB (https://github.com/tommysprague/IEM-tutorial).\n\"Participants viewed peripheral flickering checkerboard stimuli presented at a range of contrasts (0-70%, logarithmically spaced) while performing either a demanding target detection task (contrast change) at the stimulus position (\"attend stimulus\" condition) or at the fixation point (\"attend fixation\" condition). The stimuli appeared randomly on the left or right side of the screen. Targets appeared rarely, and trials in which targets do appear are not included in analyses. Thus, sensory conditions are perfectly equated across the attend stimulus and the attend fixation conditions.\nIn addition to this main attention task, participants also performed a \"spatial mapping\" task in which they viewed small checkerboard discs presented at different positions on the screen while they performed a demanding fixation task (contrast change detection).\"\nThese data were collected by Thomas Sprague & Sirawaj Itthipuripat, for the following paper:\nItthipuripat, S., Sprague, T.C., Serences, J.T. 2019. Functional MRI and EEG Index Complementary Attentional Modulations. J. Neurosci. 31:6162-6179. 
Data available at https://osf.io/savfp/.", "# Load the fMRI data\ndata = scipy.io.loadmat('AL61_Bilat-V1_attnContrast.mat')\ntrn_conds = data['trn_conds'] # position in space for 128 trials\n# flip to cartesian coordinates to make life easier\ntrn_conds[:,1] = trn_conds[:,1]*-1\ntrn = data['trn'] # matrix of (trials, voxels)", "The test data have different conditions than the training data. There are four independent variables in these data based on the values in the following columns: \n- In column 1, whether the stimulus was on the left (1) or right (2) side of the screen. \n- In column 2, the logarithmically spaced stimulus contrast from lowest (1) to highest (6). \n- In column 3, the task instruction to attend to fixation (1) or the stimulus (2).\n- In column 4, whether the target was present (1) or not (0).", "# Note there are several different conditions in the test data.\ntst_conds = data['tst_conds']\ntst = data['tst']\nattn_conds = np.unique(tst_conds[:, 2])\nstim_contrasts = np.unique(tst_conds[:, 1])\n\n# Set up parameters\nn_channels = [9, 5] # channels in the x, y directions\ncos_exponent = 5\nstimx, stimy = [-17/2, 17/2], [-5, 5]\nstim_res = [171, 101]\nnpixels = stim_res[0] * stim_res[1]\nstim_size = 1.449\nchanx, chany = [-6, 6], [-3, 3]\n\niem_obj = IEM.InvertedEncoding2D(stim_xlim=stimx, stim_ylim=stimy,\n                                 stimulus_resolution=stim_res,\n                                 stim_radius=stim_size,\n                                 chan_xlim=chanx, chan_ylim=chany,\n                                 channel_exp=cos_exponent)", "The quality and interpretability of your stimulus reconstructions all depend on how you set up the channels, or basis functions, in the model. In order to ensure that you can accurately reconstruct stimuli at all portions in the area where you have presented stimuli, you will want to evenly space your basis functions in that region. You also will likely want to ensure some overlap between the basis functions.\nThere are two pre-built functions to create a 2D grid of basis functions, to use a rectangular grid or a triangular grid. 
A triangular grid is more space-efficient, but for this example we will use the rectangular grid.\nNote you will need to define these basis functions before you can fit the model. Otherwise it will throw an error.", "basis_fcns, basis_centers = iem_obj.define_basis_functions_sqgrid(n_channels)", "To visualize these, you will need to reshape the second dimension into the 2D pixel space where the stimuli are represented.", "plt.plot(basis_centers[:, 0], basis_centers[:, 1], '.')\nplt.title('Centers of all basis functions')\nplt.xlim(stimx)\nplt.ylim(stimy)\nplt.show()\n\nf, ax = plt.subplots(n_channels[1], n_channels[0], figsize=[18, 8])\ni = 0\nfor ii in range(n_channels[1]):\n    for jj in range(n_channels[0]):\n        ax[ii, jj].imshow(basis_fcns[i, :].reshape(stim_res[1],\n                                                   stim_res[0]),\n                          extent=[stimx[0], stimx[1], stimy[0], stimy[1]])\n        i += 1\nplt.suptitle('Images of each basis function', fontsize=25)\nplt.show()", "To check how well the basis functions cover the stimulus domain, we can sum across all the basis functions.", "sum_fcns = basis_fcns.sum(axis=0).reshape(stim_res[1], stim_res[0])\nplt.imshow(sum_fcns, extent=[stimx[0], stimx[1], stimy[0], stimy[1]])\nplt.title('Spatial coverage of basis functions')\n\nplt.figure()\nplt.plot(iem_obj.yp, sum_fcns[:, 51])\nplt.title('Cross-section of summed coverage')\nplt.show()", "Next, we want to map channel responses for each voxel. To do this, we fit a standard general linear model (GLM), where the design matrix is the channel activations for each trial. Below, you can see the design matrix of these trial activations in the channel domain (x-axis: trials, y-axis: channels, color: activations).", "C = iem_obj._define_trial_activations(trn_conds)\nplt.imshow(C)\nprint(C.shape)", "Whenever you run the fit() function, the trial-wise channel activations will be created automatically, and the GLM will be fit on the training data and feature labels. 
Using this, we can then predict the feature responses on a set of test data.", "iem_obj = iem_obj.fit(trn, trn_conds)\nstim_reconstructions = iem_obj.predict_feature_responses(tst)", "Average feature reconstructions across trials\nIn this experiment, we are not specifically interested in separating trials by whether stimuli were on the left or the right. Instead, we're interested in how the activation in the model-based reconstruction varies with the experimental manipulation of contrast and attended location. For the sake of visualization and quantification, we can simply average across the trials of interest. Below we separated the trials by contrast and attention location, but averaged across trials where the stimulus appeared on the left side of the screen and the target was not present (to ensure that overall contrast is identical across averaged trials).", "vmin, vmax = 0, 0\nmean_recons = np.zeros((stim_contrasts.size, attn_conds.size, npixels))\n\nfor aa, attn_cond in enumerate(attn_conds):\n for ss, contrast in enumerate(stim_contrasts):\n thisidx = np.argwhere((tst_conds[:, 0] == 1) &\n (tst_conds[:, 1] == contrast) &\n (tst_conds[:, 2] == attn_cond) &\n (tst_conds[:, 3] == 0))\n rs = np.mean(stim_reconstructions[:, thisidx], axis=1)\n if rs.min() < vmin:\n vmin = rs.min()\n if rs.max() > vmax:\n vmax = rs.max()\n mean_recons[ss, aa, :] = rs.squeeze()", "Finally, we plot the data as a function of:\n1) whether subjects were attending to the stimulus or fixation, and\n2) the contrast of the stimulus (across six levels).", "f, ax = plt.subplots(6, 2, figsize=(10,16))\nfor aa, attn_cond in enumerate(attn_conds):\n for ss, contrast in enumerate(stim_contrasts):\n ax[ss, aa].imshow(mean_recons[ss, aa, :].\\\n reshape(stim_res[1], stim_res[0]),\n origin='lower', interpolation='none',\n cmap='inferno',\n extent=[stimx[0], stimx[1], stimy[0], stimy[1]],\n vmin=vmin, vmax=vmax)\n if contrast == stim_contrasts[0]:\n if attn_cond == 1:\n ax[ss, aa].set_title('Attend 
fixation')\n elif attn_cond == 2:\n ax[ss, aa].set_title('Attend stimulus')\n if attn_cond == 1:\n ax[ss, aa].set_ylabel('Contrast value {}'.format(contrast))", "These data suggest that increasing the contrast leads to stronger activation of the stimulus. They also suggest that the effect of attention is greatest at low contrast levels -- e.g. at contrast level 3, we see a clear enhancement when the participant is attending to the stimulus compared to when they are attending fixation.\nHowever, since this is single-participant data, these effects should be quantified across a group of subjects.\nFull results from these manipulations across a group of subjects can be seen in Itthipuripat, Sprague, Serences 2019." ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
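The raised-cosine channels used above can be approximated with plain NumPy. The sketch below illustrates the idea only; it is not brainiak's actual implementation, and the grid dimensions, channel size, and exponent are arbitrary assumptions:

```python
import numpy as np

# Sketch of a rectangular grid of 2D raised-cosine basis functions
# (channels), as an illustration of the encoding-model idea above.
# This is NOT brainiak's implementation; all parameters are assumptions.

def make_basis(n_x, n_y, xlim, ylim, res_x, res_y, size, exponent):
    cx = np.linspace(xlim[0], xlim[1], n_x)   # channel centers along x
    cy = np.linspace(ylim[0], ylim[1], n_y)   # channel centers along y
    xs = np.linspace(xlim[0], xlim[1], res_x)
    ys = np.linspace(ylim[0], ylim[1], res_y)
    gx, gy = np.meshgrid(xs, ys)              # pixel grid, shape (res_y, res_x)
    basis = np.zeros((n_x * n_y, res_y * res_x))
    centers = np.zeros((n_x * n_y, 2))
    i = 0
    for y0 in cy:
        for x0 in cx:
            r = np.hypot(gx - x0, gy - y0)    # distance of each pixel to center
            chan = np.where(r < size,
                            (0.5 + 0.5 * np.cos(np.pi * r / size)) ** exponent,
                            0.0)              # zero outside the channel radius
            basis[i] = chan.ravel()
            centers[i] = (x0, y0)
            i += 1
    return basis, centers

basis, centers = make_basis(9, 5, (-6, 6), (-3, 3), 171, 101, 3.0, 5)
print(basis.shape)  # (45, 17271): one flattened image per channel
```

Each row of `basis` is one channel's image, flattened the same way the notebook reshapes `basis_fcns` for plotting.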
kellyrowland/openmc
docs/source/pythonapi/examples/post-processing.ipynb
mit
[ "This notebook demonstrates some basic post-processing tasks that can be performed with the Python API, such as plotting a 2D mesh tally and plotting neutron source sites from an eigenvalue calculation. The problem we will use is a simple reflected pin-cell.", "from IPython.display import Image\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nimport openmc\nfrom openmc.statepoint import StatePoint\nfrom openmc.source import Source\nfrom openmc.stats import Box\n\n%matplotlib inline", "Generate Input Files\nFirst we need to define materials that will be used in the problem. Before defining a material, we must create nuclides that are used in the material.", "# Instantiate some Nuclides\nh1 = openmc.Nuclide('H-1')\nb10 = openmc.Nuclide('B-10')\no16 = openmc.Nuclide('O-16')\nu235 = openmc.Nuclide('U-235')\nu238 = openmc.Nuclide('U-238')\nzr90 = openmc.Nuclide('Zr-90')", "With the nuclides we defined, we will now create three materials for the fuel, water, and cladding of the fuel pin.", "# 1.6 enriched fuel\nfuel = openmc.Material(name='1.6% Fuel')\nfuel.set_density('g/cm3', 10.31341)\nfuel.add_nuclide(u235, 3.7503e-4)\nfuel.add_nuclide(u238, 2.2625e-2)\nfuel.add_nuclide(o16, 4.6007e-2)\n\n# borated water\nwater = openmc.Material(name='Borated Water')\nwater.set_density('g/cm3', 0.740582)\nwater.add_nuclide(h1, 4.9457e-2)\nwater.add_nuclide(o16, 2.4732e-2)\nwater.add_nuclide(b10, 8.0042e-6)\n\n# zircaloy\nzircaloy = openmc.Material(name='Zircaloy')\nzircaloy.set_density('g/cm3', 6.55)\nzircaloy.add_nuclide(zr90, 7.2758e-3)", "With our three materials, we can now create a materials file object that can be exported to an actual XML file.", "# Instantiate a MaterialsFile, add Materials\nmaterials_file = openmc.MaterialsFile()\nmaterials_file.add_material(fuel)\nmaterials_file.add_material(water)\nmaterials_file.add_material(zircaloy)\nmaterials_file.default_xs = '71c'\n\n# Export to \"materials.xml\"\nmaterials_file.export_to_xml()", "Now let's move on to the 
geometry. Our problem will have three regions for the fuel, the clad, and the surrounding coolant. The first step is to create the bounding surfaces -- in this case two cylinders and six reflective planes.", "# Create cylinders for the fuel and clad\nfuel_outer_radius = openmc.ZCylinder(x0=0.0, y0=0.0, R=0.39218)\nclad_outer_radius = openmc.ZCylinder(x0=0.0, y0=0.0, R=0.45720)\n\n# Create boundary planes to surround the geometry\n# All six planes are given reflective boundary conditions\nmin_x = openmc.XPlane(x0=-0.63, boundary_type='reflective')\nmax_x = openmc.XPlane(x0=+0.63, boundary_type='reflective')\nmin_y = openmc.YPlane(y0=-0.63, boundary_type='reflective')\nmax_y = openmc.YPlane(y0=+0.63, boundary_type='reflective')\nmin_z = openmc.ZPlane(z0=-0.63, boundary_type='reflective')\nmax_z = openmc.ZPlane(z0=+0.63, boundary_type='reflective')", "With the surfaces defined, we can now create cells that are defined by intersections of half-spaces created by the surfaces.", "# Create a Universe to encapsulate a fuel pin\npin_cell_universe = openmc.Universe(name='1.6% Fuel Pin')\n\n# Create fuel Cell\nfuel_cell = openmc.Cell(name='1.6% Fuel')\nfuel_cell.fill = fuel\nfuel_cell.region = -fuel_outer_radius\npin_cell_universe.add_cell(fuel_cell)\n\n# Create a clad Cell\nclad_cell = openmc.Cell(name='1.6% Clad')\nclad_cell.fill = zircaloy\nclad_cell.region = +fuel_outer_radius & -clad_outer_radius\npin_cell_universe.add_cell(clad_cell)\n\n# Create a moderator Cell\nmoderator_cell = openmc.Cell(name='1.6% Moderator')\nmoderator_cell.fill = water\nmoderator_cell.region = +clad_outer_radius\npin_cell_universe.add_cell(moderator_cell)", "OpenMC requires that there is a \"root\" universe. 
Let us create a root cell that is filled by the pin cell universe and then assign it to the root universe.", "# Create root Cell\nroot_cell = openmc.Cell(name='root cell')\nroot_cell.fill = pin_cell_universe\n\n# Add boundary planes\nroot_cell.region = +min_x & -max_x & +min_y & -max_y & +min_z & -max_z\n\n# Create root Universe\nroot_universe = openmc.Universe(universe_id=0, name='root universe')\nroot_universe.add_cell(root_cell)", "We now must create a geometry that is assigned a root universe, put the geometry into a geometry file, and export it to XML.", "# Create Geometry and set root Universe\ngeometry = openmc.Geometry()\ngeometry.root_universe = root_universe\n\n# Instantiate a GeometryFile\ngeometry_file = openmc.GeometryFile()\ngeometry_file.geometry = geometry\n\n# Export to \"geometry.xml\"\ngeometry_file.export_to_xml()", "With the geometry and materials finished, we now just need to define simulation parameters. In this case, we will use 10 inactive batches and 90 active batches each with 5000 particles.", "# OpenMC simulation parameters\nbatches = 100\ninactive = 10\nparticles = 5000\n\n# Instantiate a SettingsFile\nsettings_file = openmc.SettingsFile()\nsettings_file.batches = batches\nsettings_file.inactive = inactive\nsettings_file.particles = particles\nsource_bounds = [-0.63, -0.63, -0.63, 0.63, 0.63, 0.63]\nsettings_file.source = Source(space=Box(\n source_bounds[:3], source_bounds[3:]))\n\n# Export to \"settings.xml\"\nsettings_file.export_to_xml()", "Let us also create a plot file that we can use to verify that our pin cell geometry was created successfully.", "# Instantiate a Plot\nplot = openmc.Plot(plot_id=1)\nplot.filename = 'materials-xy'\nplot.origin = [0, 0, 0]\nplot.width = [1.26, 1.26]\nplot.pixels = [250, 250]\nplot.color = 'mat'\n\n# Instantiate a PlotsFile, add Plot, and export to \"plots.xml\"\nplot_file = openmc.PlotsFile()\nplot_file.add_plot(plot)\nplot_file.export_to_xml()", "With the plots.xml file, we can now generate and 
view the plot. OpenMC outputs plots in .ppm format, which can be converted into a compressed format like .png with the convert utility.", "# Run openmc in plotting mode\nexecutor = openmc.Executor()\nexecutor.plot_geometry(output=False)\n\n# Convert OpenMC's funky ppm to png\n!convert materials-xy.ppm materials-xy.png\n\n# Display the materials plot inline\nImage(filename='materials-xy.png')", "As we can see from the plot, we have a nice pin cell with fuel, cladding, and water! Before we run our simulation, we need to tell the code what we want to tally. The following code shows how to create a 2D mesh tally.", "# Instantiate an empty TalliesFile\ntallies_file = openmc.TalliesFile()\n\n# Create mesh which will be used for tally\nmesh = openmc.Mesh()\nmesh.dimension = [100, 100]\nmesh.lower_left = [-0.63, -0.63]\nmesh.upper_right = [0.63, 0.63]\ntallies_file.add_mesh(mesh)\n\n# Create mesh filter for tally\nmesh_filter = openmc.Filter(type='mesh', bins=[1])\nmesh_filter.mesh = mesh\n\n# Create mesh tally to score flux and fission rate\ntally = openmc.Tally(name='flux')\ntally.add_filter(mesh_filter)\ntally.add_score('flux')\ntally.add_score('fission')\ntallies_file.add_tally(tally)\n\n# Export to \"tallies.xml\"\ntallies_file.export_to_xml()", "Now we have a complete set of inputs, so we can go ahead and run our simulation.", "# Run OpenMC!\nexecutor.run_simulation()", "Tally Data Processing\nOur simulation ran successfully and created a statepoint file with all the tally data in it. We begin our analysis by loading the statepoint file and 'reading' the results. By default, data from the statepoint file is only read into memory when it is requested. This helps keep the memory use to a minimum even when a statepoint file may be huge.", "# Load the statepoint file\nsp = StatePoint('statepoint.100.h5')", "Next we need to get the tally, which can be done with the StatePoint.get_tally(...) 
method.", "tally = sp.get_tally(scores=['flux'])\nprint(tally)", "The statepoint file actually stores the sum and sum-of-squares for each tally bin from which the mean and variance can be calculated as described here. The sum and sum-of-squares can be accessed using the sum and sum_sq properties:", "tally.sum", "However, the mean and standard deviation of the mean are usually what you are more interested in. The Tally class also has properties mean and std_dev which automatically calculate these statistics on-the-fly.", "print(tally.mean.shape)\n(tally.mean, tally.std_dev)", "The tally data has three dimensions: one for filter combinations, one for nuclides, and one for scores. We see that there are 10000 filter combinations (corresponding to the 100 x 100 mesh bins), a single nuclide (since none was specified), and two scores. If we only want to look at a single score, we can use the get_slice(...) method as follows.", "flux = tally.get_slice(scores=['flux'])\nfission = tally.get_slice(scores=['fission'])\nprint(flux)", "To get the bins into a form that we can plot, we can simply change the shape of the array since it is a numpy array.", "flux.std_dev.shape = (100, 100)\nflux.mean.shape = (100, 100)\nfission.std_dev.shape = (100, 100)\nfission.mean.shape = (100, 100)\n\nfig = plt.subplot(121)\nfig.imshow(flux.mean)\nfig2 = plt.subplot(122)\nfig2.imshow(fission.mean)", "Now let's say we want to look at the distribution of relative errors of our tally bins for flux. 
First we create a new variable called relative_error and set it to the ratio of the standard deviation and the mean, being careful not to divide by zero in case some bins were never scored to.", "# Determine relative error\nrelative_error = np.zeros_like(flux.std_dev)\nnonzero = flux.mean > 0\nrelative_error[nonzero] = flux.std_dev[nonzero] / flux.mean[nonzero]\n\n# distribution of relative errors\nret = plt.hist(relative_error[nonzero], bins=50)", "Source Sites\nSource sites can be accessed from the source property. As shown below, the source sites are represented as a numpy array with a structured datatype.", "sp.source", "If we want, say, only the energies from the source sites, we can simply index the source array with the name of the field:", "sp.source['E']", "Now, we can look at things like the energy distribution of source sites. Note that we don't directly use the matplotlib.pyplot.hist method since our binning is logarithmic.", "# Create log-spaced energy bins from 1 keV to 10 MeV\nenergy_bins = np.logspace(-3,1)\n\n# Calculate pdf for source energies\nprobability, bin_edges = np.histogram(sp.source['E'], energy_bins, density=True)\n\n# Make sure integrating the PDF gives us unity\nprint(sum(probability*np.diff(energy_bins)))\n\n# Plot source energy PDF\nplt.semilogx(energy_bins[:-1], probability*np.diff(energy_bins), linestyle='steps')\nplt.xlabel('Energy (MeV)')\nplt.ylabel('Probability/MeV')", "Let's also look at the spatial distribution of the sites. To make the plot a little more interesting, we can also include the direction of the particle emitted from the source and color each source by the logarithm of its energy.", "plt.quiver(sp.source['xyz'][:,0], sp.source['xyz'][:,1],\n           sp.source['uvw'][:,0], sp.source['uvw'][:,1],\n           np.log(sp.source['E']), cmap='jet', scale=20.0)\nplt.colorbar()\nplt.xlim((-0.5,0.5))\nplt.ylim((-0.5,0.5))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
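The guarded division used for the tally relative error above generalizes to any array of statistics. A small self-contained sketch with synthetic numbers (standing in for a real statepoint, which would require an actual OpenMC run):

```python
import numpy as np

# Sketch of the relative-error computation from the tally section above,
# using synthetic (hypothetical) tally statistics instead of a statepoint.
mean = np.array([0.0, 2.0, 4.0, 8.0])     # hypothetical tally means
std_dev = np.array([0.0, 0.2, 1.0, 0.4])  # hypothetical standard deviations

# Guard against division by zero for bins that were never scored to:
# those bins keep a relative error of exactly 0.
relative_error = np.zeros_like(std_dev)
nonzero = mean > 0
relative_error[nonzero] = std_dev[nonzero] / mean[nonzero]
print(relative_error)  # zero bin stays 0; the rest are std_dev / mean
```

The boolean mask `nonzero` does the work that a Python-level loop with an `if` test would, but vectorized.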
scotthuang1989/Python-3-Module-of-the-Week
networking/selectors — IO Multiplexing Abstractions.ipynb
apache-2.0
[ "The selectors module provides a platform-independent abstraction layer on top of the platform-specific I/O monitoring functions in select.\nOperating Model\nThe APIs in selectors are event-based, similar to poll() from select. There are several implementations and the module automatically sets the alias DefaultSelector to refer to the most efficient one for the current system configuration.\nA selector object provides methods for specifying what events to look for on a socket, and then lets the caller wait for events in a platform-independent way. Registering interest in an event creates a SelectorKey, which holds the socket, information about the events of interest, and optional application data. The owner of the selector calls its select() method to learn about events. The return value is a sequence of key objects and a bitmask indicating what events have occurred. A program using a selector should repeatedly call select(), then handle the events appropriately.\nEcho Server\nThe echo server example below uses the application data in the SelectorKey to register a callback function to be invoked on the new event. The main loop gets the callback from the key and passes the socket and event mask to it. As the server starts, it registers the accept() function to be called for read events on the main server socket. 
Accepting the connection produces a new socket, which is then registered with the read() function as a callback for read events.", "# %load selectors_echo_server.py\nimport selectors\nimport socket\n\nmysel = selectors.DefaultSelector()\nkeep_running = True\n\n\ndef read(connection, mask):\n \"Callback for read events\"\n global keep_running\n\n client_address = connection.getpeername()\n print('read({})'.format(client_address))\n data = connection.recv(1024)\n if data:\n # A readable client socket has data\n print(' received {!r}'.format(data))\n connection.sendall(data)\n else:\n # Interpret empty result as closed connection\n print(' closing')\n mysel.unregister(connection)\n connection.close()\n # Tell the main loop to stop\n keep_running = False\n\n\ndef accept(sock, mask):\n \"Callback for new connections\"\n new_connection, addr = sock.accept()\n print('accept({})'.format(addr))\n new_connection.setblocking(False)\n mysel.register(new_connection, selectors.EVENT_READ, read)\n\n\nserver_address = ('localhost', 10000)\nprint('starting up on {} port {}'.format(*server_address))\nserver = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\nserver.setblocking(False)\nserver.bind(server_address)\nserver.listen(5)\n\nmysel.register(server, selectors.EVENT_READ, accept)\n\nwhile keep_running:\n print('waiting for I/O')\n for key, mask in mysel.select(timeout=1):\n callback = key.data\n callback(key.fileobj, mask)\n\nprint('shutting down')\nmysel.close()", "When read() receives no data from the socket, it interprets the read event as the other side of the connection being closed instead of sending data. It removes the socket from the selector and closes it. In order to avoid an infinite loop, this server also shuts itself down after it has finished communicating with a single client.\nEcho Client\nThe echo client example below processes all of the I/O events in the main loop, instead of using callbacks. 
It sets up the selector to report read events on the socket, and to report when the socket is ready to send data. Because it is looking at two types of events, the client must check which occurred by examining the mask value. After all of its outgoing data has been sent, it changes the selector configuration to only report when there is data to read.", "# %load selectors_echo_client.py\nimport selectors\nimport socket\n\nmysel = selectors.DefaultSelector()\nkeep_running = True\noutgoing = [\n b'It will be repeated.',\n b'This is the message. ',\n]\nbytes_sent = 0\nbytes_received = 0\n\n# Connecting is a blocking operation, so call setblocking()\n# after it returns.\nserver_address = ('localhost', 10000)\nprint('connecting to {} port {}'.format(*server_address))\nsock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\nsock.connect(server_address)\nsock.setblocking(False)\n\n# Set up the selector to watch for when the socket is ready\n# to send data as well as when there is data to read.\nmysel.register(\n sock,\n selectors.EVENT_READ | selectors.EVENT_WRITE,\n)\n\nwhile keep_running:\n print('waiting for I/O')\n for key, mask in mysel.select(timeout=1):\n connection = key.fileobj\n client_address = connection.getpeername()\n print('client({})'.format(client_address))\n\n if mask & selectors.EVENT_READ:\n print(' ready to read')\n data = connection.recv(1024)\n if data:\n # A readable client socket has data\n print(' received {!r}'.format(data))\n bytes_received += len(data)\n\n # Interpret empty result as closed connection,\n # and also close when we have received a copy\n # of all of the data sent.\n keep_running = not (\n data or\n (bytes_received and\n (bytes_received == bytes_sent))\n )\n\n if mask & selectors.EVENT_WRITE:\n print(' ready to write')\n if not outgoing:\n # We are out of messages, so we no longer need to\n # write anything. 
Change our registration to let\n # us keep reading responses from the server.\n print(' switching to read-only')\n mysel.modify(sock, selectors.EVENT_READ)\n else:\n # Send the next message.\n next_msg = outgoing.pop()\n print(' sending {!r}'.format(next_msg))\n sock.sendall(next_msg)\n bytes_sent += len(next_msg)\n\nprint('shutting down')\nmysel.unregister(connection)\nconnection.close()\nmysel.close()", "The client tracks the amount of data it has sent, and the amount it has received. When those values match and are non-zero, the client exits the processing loop and cleanly shuts down by removing the socket from the selector and closing both the socket and the selector." ]
[ "markdown", "code", "markdown", "code", "markdown" ]
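The register/select/dispatch cycle from the echo server and client above can be exercised without opening a listening port by using a connected socket pair. A minimal sketch of one iteration of the event loop (names are illustrative):

```python
import selectors
import socket

# Minimal, self-contained sketch of the selectors dispatch pattern shown
# above. A socketpair stands in for a real client/server connection, so
# this runs without any network setup.
sel = selectors.DefaultSelector()
a, b = socket.socketpair()
a.setblocking(False)
b.setblocking(False)

received = []

def read(conn, mask):
    "Callback for read events, stored as the SelectorKey's application data."
    received.append(conn.recv(1024))

sel.register(b, selectors.EVENT_READ, read)  # the callback travels in key.data
a.sendall(b'ping')

# One pass of the main loop: select() yields (key, mask) pairs for ready
# file objects; the callback is retrieved from key.data and invoked.
for key, mask in sel.select(timeout=1):
    callback = key.data
    callback(key.fileobj, mask)

sel.unregister(b)
a.close()
b.close()
sel.close()
print(received)  # [b'ping']
```

This is the same structure as the echo server's loop, shrunk to a single event.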
michrawson/nyu_ml_lectures
notebooks/01.3 Data Representation for Machine Learning.ipynb
cc0-1.0
[ "Representation and Visualization of Data\nMachine learning is about creating models from data: for that reason, we'll start by\ndiscussing how data can be represented in order to be understood by the computer. Along\nwith this, we'll build on our matplotlib examples from the previous section and show some\nexamples of how to visualize data.\nData in scikit-learn\nData in scikit-learn, with very few exceptions, is assumed to be stored as a\ntwo-dimensional array, of size [n_samples, n_features]. Many algorithms also accept scipy.sparse matrices of the same shape.\n\nn_samples: The number of samples: each sample is an item to process (e.g. classify).\n  A sample can be a document, a picture, a sound, a video, an astronomical object,\n  a row in a database or CSV file,\n  or whatever you can describe with a fixed set of quantitative traits.\nn_features: The number of features or distinct traits that can be used to describe each\n  item in a quantitative manner. Features are generally real-valued, but may be boolean or\n  discrete-valued in some cases.\n\nThe number of features must be fixed in advance. However, it can be very high dimensional\n(e.g. millions of features) with most of them being zeros for a given sample. This is a case\nwhere scipy.sparse matrices can be useful, in that they are\nmuch more memory-efficient than numpy arrays.\nEach sample (data point) is a row in the data array, and each feature is a column.\nA Simple Example: the Iris Dataset\nAs an example of a simple dataset, we're going to take a look at the iris data stored by scikit-learn.\nThe data consists of measurements of three different species of irises. 
There are three species of iris\nin the dataset, which we can picture here:\nIris Setosa\n<img src=\"figures/iris_setosa.jpg\" width=\"50%\">\nIris Versicolor\n<img src=\"figures/iris_versicolor.jpg\" width=\"50%\">\nIris Virginica\n<img src=\"figures/iris_virginica.jpg\" width=\"50%\">\nQuick Question:\nIf we want to design an algorithm to recognize iris species, what might the data be?\nRemember: we need a 2D array of size [n_samples x n_features].\n\n\nWhat would the n_samples refer to?\n\n\nWhat might the n_features refer to?\n\n\nRemember that there must be a fixed number of features for each sample, and feature\nnumber i must be a similar kind of quantity for each sample.\nLoading the Iris Data with Scikit-learn\nScikit-learn has a very straightforward set of data on these iris species. The data consist of\nthe following:\n\n\nFeatures in the Iris dataset:\n\n\nsepal length in cm\n\nsepal width in cm\npetal length in cm\n\npetal width in cm\n\n\nTarget classes to predict:\n\n\nIris Setosa\n\nIris Versicolour\nIris Virginica\n\n<img src=\"figures/petal_sepal.jpg\" alt=\"Sepal\" style=\"width: 50%;\"/>\n\"Petal-sepal\". 
Licensed under CC BY-SA 3.0 via Wikimedia Commons - https://commons.wikimedia.org/wiki/File:Petal-sepal.jpg#/media/File:Petal-sepal.jpg\nscikit-learn embeds a copy of the iris CSV file along with a helper function to load it into numpy arrays:", "from sklearn.datasets import load_iris\niris = load_iris()", "The resulting dataset is a Bunch object: you can see what's available using\nthe method keys():", "iris.keys()", "The features of each sample flower are stored in the data attribute of the dataset:", "n_samples, n_features = iris.data.shape\nprint(n_samples)\nprint(n_features)\n# the sepal length, sepal width, petal length and petal width of the first sample (first flower)\nprint(iris.data[0])", "The information about the class of each sample is stored in the target attribute of the dataset:", "print(iris.data.shape)\nprint(iris.target.shape)\n\nprint(iris.target)", "The names of the classes are stored in the last attribute, namely target_names:", "print(iris.target_names)", "This data is four dimensional, but we can visualize two of the dimensions\nat a time using a simple scatter-plot. 
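A quick way to sanity-check a target array like the one above is np.bincount. Here is a sketch on a synthetic stand-in for iris.target (so it runs without scikit-learn); the real iris target has exactly this structure:

```python
import numpy as np

# A stand-in for iris.target: 150 labels, 50 per class (0, 1, 2)
target = np.repeat([0, 1, 2], 50)

# Count how many samples fall in each class
counts = np.bincount(target)
print(counts)  # [50 50 50]
```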
Again, we'll start by enabling\nmatplotlib inline mode:", "%matplotlib inline\nimport matplotlib.pyplot as plt\n\nx_index = 3\ny_index = 0\n\n# this formatter will label the colorbar with the correct target names\nformatter = plt.FuncFormatter(lambda i, *args: iris.target_names[int(i)])\n\nplt.scatter(iris.data[:, x_index], iris.data[:, y_index], c=iris.target)\nplt.colorbar(ticks=[0, 1, 2], format=formatter)\nplt.xlabel(iris.feature_names[x_index])\nplt.ylabel(iris.feature_names[y_index])", "Quick Exercise:\nChange x_index and y_index in the above script\nand find a combination of two parameters\nwhich maximally separate the three classes.\nThis exercise is a preview of dimensionality reduction, which we'll see later.\nOther Available Data\nScikit-learn makes available a host of datasets for testing learning algorithms.\nThey come in three flavors:\n\nPackaged Data: these small datasets are packaged with the scikit-learn installation,\n and can be downloaded using the tools in sklearn.datasets.load_*\nDownloadable Data: these larger datasets are available for download, and scikit-learn\n includes tools which streamline this process. These tools can be found in\n sklearn.datasets.fetch_*\nGenerated Data: there are several datasets which are generated from models based on a\n random seed. These are available in the sklearn.datasets.make_*\n\nYou can explore the available dataset loaders, fetchers, and generators using IPython's\ntab-completion functionality. 
After importing the datasets submodule from sklearn,\ntype\ndatasets.load_&lt;TAB&gt;\n\nor\ndatasets.fetch_&lt;TAB&gt;\n\nor\ndatasets.make_&lt;TAB&gt;\n\nto see a list of available functions.", "from sklearn import datasets", "The data downloaded using the fetch_ scripts are stored locally,\nwithin a subdirectory of your home directory.\nYou can use the following to determine where it is:", "from sklearn.datasets import get_data_home\nget_data_home()", "Be warned: many of these datasets are quite large, and can take a long time to download!\n(especially on Conference wifi).\nIf you start a download within the IPython notebook\nand you want to kill it, you can use ipython's \"kernel interrupt\" feature, available in the menu or using\nthe shortcut Ctrl-m i.\nYou can press Ctrl-m h for a list of all ipython keyboard shortcuts.\nLoading Digits Data\nNow we'll take a look at another dataset, one where we have to put a bit\nmore thought into how to represent the data. We can explore the data in\na similar manner as above:", "from sklearn.datasets import load_digits\ndigits = load_digits()\n\ndigits.keys()\n\nn_samples, n_features = digits.data.shape\nprint((n_samples, n_features))\n\nprint(digits.data[0])\nprint(digits.target)", "The target here is just the digit represented by the data. The data is an array of\nlength 64... but what does this data mean?\nThere's a clue in the fact that we have two versions of the data array:\ndata and images. Let's take a look at them:", "print(digits.data.shape)\nprint(digits.images.shape)", "We can see that they're related by a simple reshaping:", "import numpy as np\nprint(np.all(digits.images.reshape((1797, 64)) == digits.data))", "Let's visualize the data. 
It's a little bit more involved than the simple scatter-plot\nwe used above, but we can do it rather quickly.", "# set up the figure\nfig = plt.figure(figsize=(6, 6)) # figure size in inches\nfig.subplots_adjust(left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05)\n\n# plot the digits: each image is 8x8 pixels\nfor i in range(64):\n ax = fig.add_subplot(8, 8, i + 1, xticks=[], yticks=[])\n ax.imshow(digits.images[i], cmap=plt.cm.binary, interpolation='nearest')\n \n # label the image with the target value\n ax.text(0, 7, str(digits.target[i]))", "We see now what the features mean. Each feature is a real-valued quantity representing the\ndarkness of a pixel in an 8x8 image of a hand-written digit.\nEven though each sample has data that is inherently two-dimensional, the data matrix flattens\nthis 2D data into a single vector, which can be contained in one row of the data matrix.\nGenerated Data: the S-Curve\nOne dataset often used as an example of a simple nonlinear dataset is the S-curve:", "from sklearn.datasets import make_s_curve\ndata, colors = make_s_curve(n_samples=1000)\nprint(data.shape)\nprint(colors.shape)\n\nfrom mpl_toolkits.mplot3d import Axes3D\nax = plt.axes(projection='3d')\nax.scatter(data[:, 0], data[:, 1], data[:, 2], c=colors)\nax.view_init(10, -60)", "This example is typically used with an unsupervised learning method called Locally\nLinear Embedding. 
We'll explore unsupervised learning in detail later in the tutorial.\nExercise: working with the faces dataset\nHere we'll take a moment for you to explore the datasets yourself.\nLater on we'll be using the Olivetti faces dataset.\nTake a moment to fetch the data (about 1.4MB), and visualize the faces.\nYou can copy the code used to visualize the digits above, and modify it for this data.", "from sklearn.datasets import fetch_olivetti_faces\n\n# fetch the faces data\n\n\n# Use a script like above to plot the faces image data.\n# hint: plt.cm.bone is a good colormap for this data\n", "Solution:", "# %load solutions/02A_faces_plot.py" ]
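The digits example showed that data is just images flattened row by row; that relationship can be verified on a synthetic stack of 8x8 "images", with no dataset download required:

```python
import numpy as np

# Fake a stack of five 8x8 "images", analogous to digits.images
images = np.arange(5 * 8 * 8).reshape(5, 8, 8)

# Flattening each image gives one length-64 feature vector per sample,
# analogous to digits.data
data = images.reshape(5, 64)

# The first 8 entries of a flattened vector are the first row of that image
print(np.array_equal(data[0, :8], images[0, 0]))  # True
```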
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
dendisuhubdy/tensorflow
tensorflow/contrib/eager/python/examples/notebooks/4_high_level.ipynb
apache-2.0
[ "import tensorflow as tf\ntf.enable_eager_execution()\ntfe = tf.contrib.eager\n", "High level API\nWe recommend using tf.keras as a high-level API for building neural networks. That said, most TensorFlow APIs are usable with eager execution.\nLayers: common sets of useful operations\nMost of the time when writing code for machine learning models you want to operate at a higher level of abstraction than individual operations and manipulation of individual variables.\nMany machine learning models are expressible as the composition and stacking of relatively simple layers, and TensorFlow provides both a set of many common layers as well as easy ways for you to write your own application-specific layers either from scratch or as the composition of existing layers.\nTensorFlow includes the full Keras API in the tf.keras package, and the Keras layers are very useful when building your own models.", "# In the tf.keras.layers package, layers are objects. To construct a layer,\n# simply construct the object. Most layers take as a first argument the number\n# of output dimensions / channels.\nlayer = tf.keras.layers.Dense(100)\n# The number of input dimensions is often unnecessary, as it can be inferred\n# the first time the layer is used, but it can be provided if you want to \n# specify it manually, which is useful in some complex models.\nlayer = tf.keras.layers.Dense(10, input_shape=(None, 5))", "The full list of pre-existing layers can be seen in the documentation. It includes Dense (a fully-connected layer),\nConv2D, LSTM, BatchNormalization, Dropout, and many others.", "# To use a layer, simply call it.\nlayer(tf.zeros([10, 5]))\n\n# Layers have many useful methods. For example, you can inspect all variables\n# in a layer by calling layer.variables. 
In this case a fully-connected layer\n# will have variables for weights and biases.\nlayer.variables\n\n# The variables are also accessible through nice accessors\nlayer.kernel, layer.bias", "Implementing custom layers\nThe best way to implement your own layer is extending the tf.keras.Layer class and implementing:\n * __init__ , where you can do all input-independent initialization\n * build, where you know the shapes of the input tensors and can do the rest of the initialization\n * call, where you do the forward computation\nNote that you don't have to wait until build is called to create your variables, you can also create them in __init__. However, the advantage of creating them in build is that it enables late variable creation based on the shape of the inputs the layer will operate on. On the other hand, creating variables in __init__ would mean that the shapes required to create the variables will need to be explicitly specified.", "class MyDenseLayer(tf.keras.layers.Layer):\n def __init__(self, num_outputs):\n super(MyDenseLayer, self).__init__()\n self.num_outputs = num_outputs\n \n def build(self, input_shape):\n self.kernel = self.add_variable(\"kernel\", \n shape=[input_shape[-1].value, \n self.num_outputs])\n \n def call(self, input):\n return tf.matmul(input, self.kernel)\n \nlayer = MyDenseLayer(10)\nprint(layer(tf.zeros([10, 5])))\nprint(layer.variables)", "Overall, code is easier to read and maintain if it uses standard layers whenever possible, as other readers will be familiar with the behavior of standard layers. If you want to use a layer which is not present in tf.keras.layers or tf.contrib.layers, consider filing a github issue or, even better, sending us a pull request!\nModels: composing layers\nMany interesting layer-like things in machine learning models are implemented by composing existing layers. 
For example, each residual block in a resnet is a composition of convolutions, batch normalizations, and a shortcut.\nThe main class used when creating a layer-like thing which contains other layers is tf.keras.Model. Implementing one is done by inheriting from tf.keras.Model.", "class ResnetIdentityBlock(tf.keras.Model):\n def __init__(self, kernel_size, filters):\n super(ResnetIdentityBlock, self).__init__(name='')\n filters1, filters2, filters3 = filters\n\n self.conv2a = tf.keras.layers.Conv2D(filters1, (1, 1))\n self.bn2a = tf.keras.layers.BatchNormalization()\n\n self.conv2b = tf.keras.layers.Conv2D(filters2, kernel_size, padding='same')\n self.bn2b = tf.keras.layers.BatchNormalization()\n\n self.conv2c = tf.keras.layers.Conv2D(filters3, (1, 1))\n self.bn2c = tf.keras.layers.BatchNormalization()\n\n def call(self, input_tensor, training=False):\n x = self.conv2a(input_tensor)\n x = self.bn2a(x, training=training)\n x = tf.nn.relu(x)\n\n x = self.conv2b(x)\n x = self.bn2b(x, training=training)\n x = tf.nn.relu(x)\n\n x = self.conv2c(x)\n x = self.bn2c(x, training=training)\n\n x += input_tensor\n return tf.nn.relu(x)\n\n \nblock = ResnetIdentityBlock(1, [1, 2, 3])\nprint(block(tf.zeros([1, 2, 3, 3])))\nprint([x.name for x in block.variables])", "Much of the time, however, models which compose many layers simply call one layer after the other. This can be done in very little code using tf.keras.Sequential", " my_seq = tf.keras.Sequential([tf.keras.layers.Conv2D(1, (1, 1)),\n tf.keras.layers.BatchNormalization(),\n tf.keras.layers.Conv2D(2, 1, \n padding='same'),\n tf.keras.layers.BatchNormalization(),\n tf.keras.layers.Conv2D(3, (1, 1)),\n tf.keras.layers.BatchNormalization()])\nmy_seq(tf.zeros([1, 2, 3, 3]))", "Next steps\nNow you can go back to the previous notebook and adapt the linear regression example to use layers and models to be better structured." ]
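The build/call split shown above is essentially lazy initialization: weight creation is deferred until the input shape is known. Here is a framework-free NumPy sketch of the same pattern; the class mimics, but is not, the tf.keras API, and the names are illustrative:

```python
import numpy as np

class TinyDense:
    """A dense layer that defers creating its weights until it sees input."""
    def __init__(self, num_outputs):
        self.num_outputs = num_outputs
        self.kernel = None  # not created yet: we don't know the input size

    def build(self, input_shape):
        # Late variable creation, keyed off the observed input shape
        rng = np.random.default_rng(0)
        self.kernel = rng.normal(size=(input_shape[-1], self.num_outputs))

    def __call__(self, x):
        if self.kernel is None:
            self.build(x.shape)
        return x @ self.kernel

layer = TinyDense(10)
out = layer(np.zeros((4, 5)))
print(out.shape)  # (4, 10)
```

Calling the layer a first time fixes the kernel at shape (5, 10), exactly the inference that tf.keras performs when input_shape is omitted.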
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
Unidata/unidata-python-workshop
notebooks/NumPy/NumPy Broadcasting and Vectorization.ipynb
mit
[ "<a name=\"top\"></a>\n<div style=\"width:1000 px\">\n\n<div style=\"float:right; width:98 px; height:98px;\">\n<img src=\"https://raw.githubusercontent.com/Unidata/MetPy/master/metpy/plots/_static/unidata_150x150.png\" alt=\"Unidata Logo\" style=\"height: 98px;\">\n</div>\n\n<h1>NumPy Broadcasting and Vectorization</h1>\n<h3>Unidata Python Workshop</h3>\n\n<div style=\"clear:both\"></div>\n</div>\n\n<hr style=\"height:2px;\">\n\n<div style=\"float:right; width:250 px\"><img src=\"http://www.contribute.geeksforgeeks.org/wp-content/uploads/numpy-logo1.jpg\" alt=\"NumPy Logo\" style=\"height: 250px;\"></div>\n\nOverview:\n\nTeaching: 20 minutes\nExercises: 15 minutes\n\nQuestions\n\nHow can we work with arrays of differing shapes without needing to manually loop or copy data?\nHow can we reframe operations on data to avoid looping in Python?\n\nObjectives\n\n<a href=\"#broadcasting\">Use broadcasting to implicitly loop over data</a>\n<a href=\"#vectorizing\">Vectorize calculations to avoid explicit loops</a>\n\n<a name=\"broadcasting\"></a>\n1. Using broadcasting to implicitly loop over data\nBroadcasting is a useful NumPy tool that allows us to perform operations between arrays with different shapes, provided that they are compatible with each other in certain ways. To start, we can create an array below and add 5 to it:", "import numpy as np\n\na = np.array([10, 20, 30, 40])\na + 5", "This works even though 5 is not an array; it works just as we would expect, adding 5 to each of the elements in a. This also works if 5 is an array:", "b = np.array([5])\na + b", "This takes the single element in b and adds it to each of the elements in a. This won't work for just any b, though; for instance, the following:\npython\nb = np.array([5, 6, 7])\na + b\nwon't work. It does work if a and b are the same shape:", "b = np.array([5, 5, 10, 10])\na + b", "What if what we really want is pairwise addition of a, b? 
Without broadcasting, we could accomplish this by looping:", "b = np.array([1, 2, 3, 4, 5])\n\nresult = np.empty((5, 4), dtype=np.int32)\nfor row, valb in enumerate(b):\n for col, vala in enumerate(a):\n result[row, col] = vala + valb\nresult", "We can also do this by manually repeating the arrays to the proper shape for the result, using np.tile. This avoids the need to manually loop:", "aa = np.tile(a, (5, 1))\naa\n\n# Turn b into a column array, then tile it\nbb = np.tile(b.reshape(5, 1), (1, 4))\nbb\n\naa + bb", "We can also do this using broadcasting, which is where NumPy implicitly repeats the array without using additional memory. With broadcasting, NumPy takes care of repeating for you, provided dimensions are \"compatible\". This works as:\n1. Check the number of dimensions of the arrays. If they are different, prepend size one dimensions\n2. Check if each of the dimensions are compatible: either the same size, or one of them is 1.", "a.shape\n\nb.shape", "Right now, they have the same number of dimensions, 1, but that dimension is incompatible. We can solve this by appending a dimension using np.newaxis when indexing:", "bb = b[:, np.newaxis]\nbb.shape\n\na + bb", "This can be written more directly in one line:", "a + b[:, np.newaxis]", "This also works 2D and 1D, etc.:", "x = np.array([1, 2])\ny = np.array([3, 4, 5])\nz = np.array([6, 7, 8, 9])\n\nd_2d = x[:, np.newaxis]**2 + y**2\n\nd_2d.shape\n\nd_3d = d_2d[..., np.newaxis] + z**2\n\nd_3d.shape", "Or in one line:", "h = x[:, np.newaxis, np.newaxis]**2 + y[np.newaxis, :, np.newaxis]**2 + z**2", "We can see this one-line result has the same shape and same values as the other multi-step calculation.", "h.shape\n\nnp.all(h == d_3d)", "Broadcasting is often useful when you want to do calculations with coordinate values, which are often given as 1D arrays corresponding to positions along a particular array dimension. 
For example, taking range and azimuth values for radar data (1D separable polar coordinates) and converting to x,y pairs relative to the radar location.\nExercise\nGiven the 3D temperature field and 1-D pressure coordinates below, calculate: $T * exp(P / 1000)$. You will need to use broadcasting to make the arrays compatible.", "# Starting data\npressure = np.array([1000, 850, 500, 300])\ntemps = np.linspace(20, 30, 24).reshape(4, 3, 2)\n\n#\n# YOUR CALCULATION HERE\n#\n\n# %load solutions/broadcasting.py\n", "<a name=\"vectorizing\"></a>\n2. Vectorize calculations to avoid explicit loops\nWhen working with arrays of data, loops over the individual array elements are a fact of life. However, for improved runtime performance, it is important to avoid performing these loops in Python as much as possible, and let NumPy handle the looping for you. Avoiding these loops frequently, but not always, results in shorter and clearer code as well.\nLook ahead/behind\nOne common pattern for vectorizing is in converting loops that work over the current point as well as the previous and/or next point. This comes up when doing finite-difference calculations (e.g. approximating derivatives).", "a = np.linspace(0, 20, 6)\na", "We can calculate the forward difference for this array with a manual loop as:", "d = np.zeros(a.size - 1)\nfor i in range(len(a) - 1):\n d[i] = a[i + 1] - a[i]\nd", "It would be nice to express this calculation without the explicit loop, if possible. To see how to go about this, let's consider the values that are involved in calculating d[i], a[i+1] and a[i]. 
The values over the loop iterations are:\n<table>\n <tr> <td>i</td><td>a[i+1]</td><td>a[i]</td> </tr>\n <tr> <td>0</td><td>4</td><td>0</td> </tr>\n <tr> <td>1</td><td>8</td><td>4</td> </tr>\n <tr> <td>2</td><td>12</td><td>8</td> </tr>\n <tr> <td>3</td><td>16</td><td>12</td> </tr>\n <tr> <td>4</td><td>20</td><td>16</td> </tr>\n</table>\n\nWe can express the series of values for a[i+1] then as:", "a[1:]", "and a[i] as:", "a[:-1]", "This means that we can express the forward difference as:", "a[1:] - a[:-1]", "It should be noted that using slices in this way returns only a view on the original array. This means not only can you use the slices to modify the original data (even accidentally), but that this is also a quick operation that does not involve a copy and does not bloat memory usage.\nExercise: 2nd Derivative\nA finite difference estimate of the 2nd derivative is given by:\n$$f''(x) = 2 f_i - f_{i+1} - f_{i-1}$$\n(we're ignoring $\\Delta x$ here)\n\nWrite vectorized code to calculate this finite difference for a (using slices).\n\nWhat values should we be expecting to get for the 2nd derivative?", "# %load solutions/vectorized_diff.py\n", "Blocking\nAnother application where vectorization comes into play to make operations more efficient is when operating on blocks of data. Let's start by creating some temperature data (rounding to make it easier to see/recognize the values).", "temps = np.round(20 + np.random.randn(10) * 5, 1)\ntemps", "Let's start by writing a loop to take a 3-point running mean of the data. We'll do this by iterating over all the points in the array and averaging the 3 points centered on that point. 
We'll simplify the problem by avoiding dealing with the cases at the edges of the array.", "avg = np.zeros_like(temps)\n# We're just ignoring the edge effects here\nfor i in range(1, len(temps) - 1):\n sub = temps[i - 1:i + 2]\n avg[i] = sub.mean()\n\navg", "As with the case of doing finite differences, we can express this using slices of the original array:", "# i - 1 i i + 1\n(temps[:-2] + temps[1:-1] + temps[2:]) / 3", "Another way to solve this is not with slicing but with a powerful numpy tool: as_strided. This tool can result in some odd behavior, so take care when using it--the tradeoff is that it can be used to do some powerful operations. What we're doing here is altering how NumPy is interpreting the values in the memory that underpins the array. So for this array:", "temps", "we can create a view of the array with a new, bigger shape, with rows made up of overlapping values. We do this by specifying a new shape of 8x3, one row for each of the length 3 blocks we can fit in the original 1D array of data. We then use the strides argument to control how numpy walks between items in each dimension. The last item in the strides tuple is just as normal--it says that the number of bytes to walk between items is just the size of an item. (Increasing this would skip items.) The first item says that when we move to a new row, we only advance the size of a single item. 
This is what gives us overlapping rows.", "block_size = 3\nnew_shape = (len(temps) - block_size + 1, block_size)\nbytes_per_item = temps.dtype.itemsize\ntemps_strided = np.lib.stride_tricks.as_strided(temps,\n shape=new_shape,\n strides=(bytes_per_item, bytes_per_item))\ntemps_strided", "Now that we have this view of the array with the rows representing overlapping blocks, we can operate across the rows with mean and the axis=-1 argument to get our running average:", "temps_strided.mean(axis=-1)", "It should be noted that there are no copies going on here, so if we change a value at a single indexed location, the change actually shows up in multiple locations:", "temps_strided[0, 2] = 2000\ntemps_strided", "Finding the index of a particular value along an axis\nAnother operation that crops up when slicing and dicing data is trying to identify a set of indexes, along a particular axis, within a larger multidimensional array. For instance, say we have a 3D array of temperatures, and want to identify the location of the $-10^oC$ isotherm within each column:", "pressure = np.linspace(1000, 100, 25)\ntemps = np.random.randn(25, 30, 40) * 3 + np.linspace(25, -100, 25).reshape(-1, 1, 1)", "NumPy has the function argmin() which returns the index of the minimum value. We can use this to find the minimum absolute difference between the value and -10:", "# Using axis=0 to tell it to operate along the pressure dimension\ninds = np.argmin(np.abs(temps - -10), axis=0)\ninds\n\ninds.shape", "Great! We have an array representing the index of the point closest to $-10^oC$ in each column of data. 
We could use this to look up into our pressure coordinates to find the pressure level for each column:", "pressure[inds]", "How about using that to find the actual temperature value that was closest?", "temps[inds, :, :].shape", "Unfortunately, this replaced the pressure dimension (size 25) with the shape of our index array (30 x 40), giving us a 30 x 40 x 30 x 40 array (imagine what would have happened with real data!). One solution here would be to loop:", "output = np.empty(inds.shape, dtype=temps.dtype)\nfor (i, j), val in np.ndenumerate(inds):\n output[i, j] = temps[val, i, j]\noutput", "Of course, what we really want to do is avoid the explicit loop. Let's temporarily simplify the problem to a single dimension. If we have a 1D array, we can pass a 1D array of indices (a full range), and get back the same as the original data array:", "pressure[np.arange(pressure.size)]\n\nnp.all(pressure[np.arange(pressure.size)] == pressure)", "We can use this to select all the indices on the other dimensions of our temperature array. We will also need to use the magic of broadcasting to combine arrays of indices across dimensions:\nNow vectorized solution", "y_inds = np.arange(temps.shape[1])[:, np.newaxis]\nx_inds = np.arange(temps.shape[2])\ntemps[inds, y_inds, x_inds]", "We can check that this vectorized lookup gives the same result as the explicit loop:", "np.all(output == temps[inds, y_inds, x_inds])", "Resources\n\nNumPy Broadcasting Documentation\nNumPy Broadcasting Article" ]
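The patterns in this notebook can be checked end-to-end in a few lines. This sketch reproduces the second-derivative exercise (using the formula as stated, with $\Delta x$ ignored), the 3-point running mean, and the per-column argmin lookup, on small synthetic arrays whose sizes are illustrative:

```python
import numpy as np

# Vectorized 2nd difference, as written in the exercise; a linear
# array should give zeros everywhere
a = np.linspace(0, 20, 6)
d2 = 2 * a[1:-1] - a[2:] - a[:-2]
print(d2)  # [0. 0. 0. 0.]

# 3-point running mean via slices, ignoring the edges
temps = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
running = (temps[:-2] + temps[1:-1] + temps[2:]) / 3
print(running)  # [2. 3. 4.]

# Vectorized lookup of one "level" index per column of a small 3D array
vals = np.arange(24).reshape(4, 3, 2)        # (levels, y, x)
inds = np.argmin(np.abs(vals - 10), axis=0)  # level closest to 10
y_inds = np.arange(vals.shape[1])[:, np.newaxis]
x_inds = np.arange(vals.shape[2])
picked = vals[inds, y_inds, x_inds]
print(picked.shape)  # (3, 2) -- one picked value per column
```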
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
bjshaw/phys202-2015-work
days/day12/Integration.ipynb
mit
[ "Numerical Integration\nLearning Objectives: Learn how to numerically integrate 1d and 2d functions that are represented as Python functions or numerical arrays of data using scipy.integrate.\nThis lesson was orginally developed by Jennifer Klay under the terms of the MIT license. The original version is in this repo (https://github.com/Computing4Physics/C4P). Her materials was in turn based on content from the Computational Physics book by Mark Newman at University of Michigan, materials developed by Matt Moelter and Jodi Christiansen for PHYS 202 at Cal Poly, as well as the SciPy tutorials.\nImports", "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport numpy as np", "Introduction\nWe often calculate integrals in physics (electromagnetism, thermodynamics, quantum mechanics, etc.). In calculus, you learned how to evaluate integrals analytically. Some functions are too difficult to integrate analytically and for these we need to use the computer to integrate numerically. A numerical integral goes back to the basic principles of calculus. Given a function $f(x)$, we need to find the area under the curve between two limits, $a$ and $b$:\n$$\nI(a,b) = \\int_a^b f(x) dx\n$$\nThere is no known way to calculate such an area exactly in all cases on a computer, but we can do it approximately by dividing up the area into rectangular slices and adding them all together. Unfortunately, this is a poor approximation, since the rectangles under and overshoot the function:\n<img src=\"rectangles.png\" width=400>\n\nTrapezoidal Rule\nA better approach, which involves very little extra work, is to divide the area into trapezoids rather than rectangles. The area under the trapezoids is a considerably better approximation to the area under the curve, and this approach, though simple, often gives perfectly adequate results.\n<img src=\"trapz.png\" width=420>\nWe can improve the approximation by making the size of the trapezoids smaller. 
Suppose we divide the interval from $a$ to $b$ into $N$ slices or steps, so that each slice has width $h = (b − a)/N$ . Then the right-hand side of the $k$ th slice falls at $a+kh$, and the left-hand side falls at $a+kh−h$ = $a+(k−1)h$ . Thus the area of the trapezoid for this slice is\n$$\nA_k = \\tfrac{1}{2}h[ f(a+(k−1)h)+ f(a+kh) ]\n$$\nThis is the trapezoidal rule. It gives us a trapezoidal approximation to the area under one slice of our function.\nNow our approximation for the area under the whole curve is the sum of the areas of the trapezoids for all $N$ slices\n$$\nI(a,b) \\simeq \\sum\\limits_{k=1}^N A_k = \\tfrac{1}{2}h \\sum\\limits_{k=1}^N [ f(a+(k−1)h)+ f(a+kh) ] = h \\left[ \\tfrac{1}{2}f(a) + \\tfrac{1}{2}f(b) + \\sum\\limits_{k=1}^{N-1} f(a+kh)\\right]\n$$\nNote the structure of the formula: the quantity inside the square brackets is a sum over values of $f(x)$ measured at equally spaced points in the integration domain, and we take a half of the values at the start and end points but one times the value at all the interior points.\nApplying the Trapezoidal rule\nUse the trapezoidal rule to calculate the integral of $x^4 − 2x + 1$ from $x$ = 0 to $x$ = 2.\nThis is an integral we can do by hand, so we can check our work. To define the function, let's use a lambda expression (you learned about these in the advanced python section of CodeCademy). It's basically just a way of defining a function of some variables in one line. For this case, it is just a function of x:", "func = lambda x: x**4 - 2*x + 1\n\nN = 10\na = 0.0\nb = 2.0\nh = (b-a)/N\n\nk = np.arange(1,N)\nI = h*(0.5*func(a) + 0.5*func(b) + func(a+k*h).sum())\n\nprint(I)", "The correct answer is\n$$\nI(0,2) = \\int_0^2 (x^4-2x+1)dx = \\left[\\tfrac{1}{5}x^5-x^2+x\\right]_0^2 = 4.4\n$$\nSo our result is off by about 2%.\nSimpson's Rule\nThe trapezoidal rule estimates the area under a curve by approximating the curve with straight-line segments. 
We can often get a better result if we approximate the function instead with curves of some kind. Simpson's rule uses quadratic curves. In order to specify a quadratic completely one needs three points, not just two as with a straight line. So in this method we take a pair of adjacent slices and fit a quadratic through the three points that mark the boundaries of those slices. \nGiven a function $f(x)$ and spacing between adjacent points $h$, if we fit a quadratic curve $ax^2 + bx + c$ through the points $x$ = $-h$, 0, $+h$, we get\n$$\nf(-h) = ah^2 - bh + c, \\hspace{1cm} f(0) = c, \\hspace{1cm} f(h) = ah^2 +bh +c\n$$\nSolving for $a$, $b$, and $c$ gives:\n$$\na = \\frac{1}{h^2}\\left[\\tfrac{1}{2}f(-h) - f(0) + \\tfrac{1}{2}f(h)\\right], \\hspace{1cm} b = \\frac{1}{2h}\\left[f(h)-f(-h)\\right], \\hspace{1cm} c = f(0)\n$$\nand the area under the curve of $f(x)$ from $-h$ to $+h$ is given approximately by the area under the quadratic:\n$$\nI(-h,h) \\simeq \\int_{-h}^h (ax^2+bx+c)dx = \\tfrac{2}{3}ah^3 + 2ch = \\tfrac{1}{3}h[f(-h)+4f(0)+f(h)]\n$$\nThis is Simpson’s rule. It gives us an approximation to the area under two adjacent slices of our function. Note that the final formula for the area involves only $h$ and the value of the function at evenly spaced points, just as with the trapezoidal rule. So to use Simpson’s rule we don’t actually have to worry about the details of fitting a quadratic—we just plug numbers into this formula and it gives us an answer. 
This makes Simpson’s rule almost as simple to use as the trapezoidal rule, and yet Simpson’s rule often gives much more accurate answers.\nApplying Simpson’s rule involves dividing the domain of integration into many slices and using the rule to separately estimate the area under successive pairs of slices, then adding the estimates for all pairs to get the final answer.\nIf we are integrating from $x = a$ to $x = b$ in slices of width $h$ then Simpson’s rule gives the area under the $k$ th pair, approximately, as\n$$\nA_k = \\tfrac{1}{3}h[f(a+(2k-2)h)+4f(a+(2k-1)h) + f(a+2kh)]\n$$\nWith $N$ slices in total, there are $N/2$ pairs of slices, and the approximate value of the entire integral is given by the sum\n$$\nI(a,b) \\simeq \\sum\\limits_{k=1}^{N/2}A_k = \\tfrac{1}{3}h\\left[f(a)+f(b)+4\\sum\\limits_{k=1}^{N/2}f(a+(2k-1)h)+2\\sum\\limits_{k=1}^{N/2-1}f(a+2kh)\\right]\n$$\nNote that the total number of slices must be even for Simpson's rule to work.\nApplying Simpson's rule\nNow let's code Simpson's rule to compute the integral of the same function from before, $f(x) = x^4 - 2x + 1$ from 0 to 2.", "N = 10\na = 0.0\nb = 2.0\nh = (b-a)/N\n\nk1 = np.arange(1,N/2+1)\nk2 = np.arange(1,N/2)\nI = (1./3.)*h*(func(a) + func(b) + 4.*func(a+(2*k1-1)*h).sum() + 2.*func(a+2*k2*h).sum())\n \nprint(I)", "Adaptive methods and higher order approximations\nIn some cases, particularly for integrands that are rapidly varying, a very large number of steps may be needed to achieve the desired accuracy, which means the calculation can become slow. \nSo how do we choose the number $N$ of steps for our integrals? In our example calculations we just chose round numbers and looked to see if the results seemed reasonable. A more common situation is that we want to calculate the value of an integral to a given accuracy, such as four decimal places, and we would like to know how many steps will be needed. 
So long as the desired accuracy does not exceed the fundamental limit set by the machine precision of our computer— the rounding error that limits all calculations—then it should always be possible to meet our goal by using a large enough number of steps. At the same time, we want to avoid using more steps than are necessary, since more steps take more time and our calculation will be slower. \nIdeally we would like an $N$ that gives us the accuracy we want and no more. A simple way to achieve this is to start with a small value of $N$ and repeatedly double it until we achieve the accuracy we want. This method is an example of an adaptive integration method, one that changes its own parameters to get a desired answer.\nThe trapezoidal rule is based on approximating an integrand $f(x)$ with straight-line segments, while Simpson’s rule uses quadratics. We can create higher-order (and hence potentially more accurate) rules by using higher-order polynomials, fitting $f(x)$ with cubics, quartics, and so forth. The general form of the trapezoidal and Simpson rules is\n$$\n\\int_a^b f(x)dx \\simeq \\sum\\limits_{k=1}^{N}w_kf(x_k)\n$$\nwhere the $x_k$ are the positions of the sample points at which we calculate the integrand and the $w_k$ are some set of weights. In the trapezoidal rule, the first and last weights are $\\tfrac{1}{2}$ and the others are all 1, while in Simpson’s rule the weights are $\\tfrac{1}{3}$ for the first and last slices and alternate between $\\tfrac{4}{3}$ and $\\tfrac{2}{3}$ for the other slices. For higher-order rules the basic form is the same: after fitting to the appropriate polynomial and integrating we end up with a set of weights that multiply the values $f(x_k)$ of the integrand at evenly spaced sample points. \nNotice that the trapezoidal rule is exact if the function being integrated is actually a straight line, because then the straight-line approximation isn’t an approximation at all. 
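The doubling strategy just described can be sketched as follows (our own illustration, using the same test polynomial as before; the tolerance is an arbitrary choice):

```python
import numpy as np

def trapz_N(f, a, b, N):
    # Composite trapezoidal rule with N slices
    x = np.linspace(a, b, N + 1)
    y = f(x)
    h = (b - a) / N
    return h * (0.5 * y[0] + 0.5 * y[-1] + y[1:-1].sum())

def adaptive_trapz(f, a, b, tol=1e-6):
    # Start with one slice and keep doubling N until two
    # successive estimates agree to within the tolerance
    N = 1
    I_old = trapz_N(f, a, b, N)
    while True:
        N *= 2
        I_new = trapz_N(f, a, b, N)
        if abs(I_new - I_old) < tol:
            return I_new, N
        I_old = I_new

f = lambda x: x**4 - 2*x + 1
I, N = adaptive_trapz(f, 0.0, 2.0)
print(I, N)
```

The returned $N$ is the smallest power of two for which successive estimates agree to the requested tolerance.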
Similarly, Simpson’s rule is exact if the function being integrated is a quadratic, and so on for higher order polynomials.\nThere are other more advanced schemes for calculating integrals that can achieve high accuracy while still arriving at an answer quickly. These typically combine the higher order polynomial approximations with adaptive methods for choosing the number of slices, in some cases allowing their sizes to vary over different regions of the integrand. \nOne such method, called Gaussian Quadrature - after its inventor, Carl Friedrich Gauss, uses Legendre polynomials to choose the $x_k$ and $w_k$ such that we can obtain an integration rule accurate to the highest possible order of $2N−1$. It is beyond the scope of this course to derive the Gaussian quadrature method, but you can learn more about it by searching the literature. \nNow that we understand the basics of numerical integration and have even coded our own trapezoidal and Simpson's rules, we can feel justified in using scipy's built-in library of numerical integrators that build on these basic ideas, without coding them ourselves.\nscipy.integrate\nIt is time to look at scipy's built-in functions for integrating functions numerically. Start by importing the library.", "import scipy.integrate as integrate\n\nintegrate?", "An overview of the module is provided by the help command, but it produces a lot of output. 
Here's a quick summary:\nMethods for Integrating Functions given function object.\nquad -- General purpose integration.\ndblquad -- General purpose double integration.\ntplquad -- General purpose triple integration.\nfixed_quad -- Integrate func(x) using Gaussian quadrature of order n.\nquadrature -- Integrate with given tolerance using Gaussian quadrature.\nromberg -- Integrate func using Romberg integration.\n\nMethods for Integrating Functions given fixed samples.\ntrapz -- Use trapezoidal rule to compute integral from samples.\ncumtrapz -- Use trapezoidal rule to cumulatively compute integral.\nsimps -- Use Simpson's rule to compute integral from samples.\nromb -- Use Romberg Integration to compute integral from (2**k + 1) evenly-spaced samples.\n\nSee the <code>special</code> module's orthogonal polynomials (<code>scipy.special</code>) for Gaussian quadrature roots and weights for other weighting factors and regions.\nInterface to numerical integrators of ODE systems.\nodeint -- General integration of ordinary differential equations.\node -- Integrate ODE using VODE and ZVODE routines.\n\nGeneral integration (quad)\nThe scipy function quad is provided to integrate a function of one variable between two points. The points can be $\\pm\\infty$ ($\\pm$ np.infty) to indicate infinite limits. For example, suppose you wish to integrate the following: \n$$\nI = \\int_0^{2\\pi} e^{-x}\\sin(x)dx\n$$\nThis could be computed using quad as:", "fun = lambda x : np.exp(-x)*np.sin(x) \n\nresult,error = integrate.quad(fun, 0, 2*np.pi) \n\nprint(result,error)", "The first argument to quad is a “callable” Python object (i.e a function, method, or class instance). Notice that we used a lambda function in this case as the argument. The next two arguments are the limits of integration. 
The return value is a tuple, with the first element holding the estimated value of the integral and the second element holding an upper bound on the error.\nThe analytic solution to the integral is \n$$\n\\int_0^{2\\pi} e^{-x} \\sin(x) dx = \\frac{1}{2} - e^{-2\\pi} \\simeq \\textrm{0.499066}\n$$\nso that is pretty good.\nHere it is again, integrated from 0 to infinity:", "I = integrate.quad(fun, 0, np.infty)\n\nprint(I)", "In this case the analytic solution is exactly 1/2, so again pretty good.\nWe can calculate the error in the result by looking at the difference between the exact result and the numerical value from quad with", "print(abs(I[0]-0.5))", "In this case, the numerically-computed integral is within $10^{-16}$ of the exact result — well below the reported error bound.\nIntegrating array data\nWhen you want to compute the integral for an array of data (such as our thermistor resistance-temperature data from the Interpolation lesson), you don't have the luxury of varying your choice of $N$, the number of slices (unless you create an interpolated approximation to your data).\nThere are three functions for computing integrals given only samples: trapz , simps, and romb. The trapezoidal rule approximates the function as a straight line between adjacent points while Simpson’s rule approximates the function between three adjacent points as a parabola, as we have already seen. The first two functions can also handle non-equally-spaced samples (something we did not code ourselves) which is a useful extension to these integration rules.\nIf the samples are equally-spaced and the number of samples available is $2^k+1$ for some integer $k$, then Romberg integration can be used to obtain high-precision estimates of the integral using the available samples. 
Romberg integration is an adaptive method that uses the trapezoid rule at step-sizes related by a power of two and then performs something called Richardson extrapolation on these estimates to approximate the integral with a higher-degree of accuracy. (A different interface to Romberg integration useful when the function can be provided is also available as romberg).\nApplying simps to array data\nHere is an example of using simps to compute the integral for some discrete data:", "x = np.arange(0, 20, 2)\ny = np.array([0, 3, 5, 2, 8, 9, 0, -3, 4, 9], dtype = float)\nplt.plot(x,y)\nplt.xlabel('x')\nplt.ylabel('y')\n#Show the integration area as a filled region\nplt.fill_between(x, y, y2=0,color='red',hatch='//',alpha=0.2);\n\nI = integrate.simps(y,x) \nprint(I)", "Multiple Integrals\nMultiple integration can be handled using repeated calls to quad. The mechanics of this for double and triple integration have been wrapped up into the functions dblquad and tplquad. The function dblquad performs double integration. Use the help function to be sure that you define the arguments in the correct order. The limits on all inner integrals are actually functions (which can be constant).\nDouble integrals using dblquad\nSuppose we want to integrate $f(x,y)=y\\sin(x)+x\\cos(y)$ over $\\pi \\le x \\le 2\\pi$ and $0 \\le y \\le \\pi$:\n$$\\int_{x=\\pi}^{2\\pi}\\int_{y=0}^{\\pi} y \\sin(x) + x \\cos(y) dxdy$$\nTo use dblquad we have to provide callable functions for the range of the x-variable. Although here they are constants, the use of functions for the limits enables freedom to integrate over non-constant limits. In this case we create trivial lambda functions that return the constants. Note the order of the arguments in the integrand. 
If you put them in the wrong order you will get the wrong answer.", "from scipy.integrate import dblquad\n\n#NOTE: the order of arguments matters - inner to outer\nintegrand = lambda x,y: y * np.sin(x) + x * np.cos(y)\n\nymin = 0\nymax = np.pi\n\n#The callable functions for the x limits are just constants in this case:\nxmin = lambda y : np.pi\nxmax = lambda y : 2*np.pi\n\n#See the help for correct order of limits\nI, err = dblquad(integrand, ymin, ymax, xmin, xmax)\nprint(I, err)\n\ndblquad?", "Triple integrals using tplquad\nWe can also numerically evaluate a triple integral:\n$$ \\int_{x=0}^{\\pi}\\int_{y=0}^{1}\\int_{z=-1}^{1} y\\sin(x)+z\\cos(x) dxdydz$$", "from scipy.integrate import tplquad\n\n#AGAIN: the order of arguments matters - inner to outer\nintegrand = lambda x,y,z: y * np.sin(x) + z * np.cos(x)\n\nzmin = -1\nzmax = 1\n\nymin = lambda z: 0\nymax = lambda z: 1\n\n#Note the order of these arguments:\nxmin = lambda y,z: 0\nxmax = lambda y,z: np.pi\n\n#Here the order of limits is outer to inner\nI, err = tplquad(integrand, zmin, zmax, ymin, ymax, xmin, xmax)\nprint(I, err)" ]
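Returning to the sample-based integrators discussed earlier, romb requires $2^k+1$ equally spaced samples. Here is a minimal sketch of our own (not from the original text) applying it to the same test polynomial $f(x) = x^4 - 2x + 1$ on $[0, 2]$; the choice $k = 5$ is arbitrary:

```python
import numpy as np
import scipy.integrate as integrate

# romb requires 2**k + 1 equally spaced samples; k = 5 gives 33 points
x = np.linspace(0.0, 2.0, 33)
y = x**4 - 2*x + 1
I_romb = integrate.romb(y, dx=x[1] - x[0])
print(I_romb)  # the exact value of the integral is 4.4
```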
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ledrui/Regression
week2/.ipynb_checkpoints/week-2-multiple-regression-assignment-2-blank-checkpoint.ipynb
mit
[ "Regression Week 2: Multiple Regression (gradient descent)\nIn the first notebook we explored multiple regression using graphlab create. Now we will use graphlab along with numpy to solve for the regression weights with gradient descent.\nIn this notebook we will cover estimating multiple regression weights via gradient descent. You will:\n* Add a constant column of 1's to a graphlab SFrame to account for the intercept\n* Convert an SFrame into a Numpy array\n* Write a predict_output() function using Numpy\n* Write a numpy function to compute the derivative of the regression weights with respect to a single feature\n* Write gradient descent function to compute the regression weights given an initial weight vector, step size and tolerance.\n* Use the gradient descent function to estimate regression weights for multiple features\nFire up graphlab create\nMake sure you have the latest version of graphlab (>= 1.7)", "import graphlab", "Load in house sales data\nDataset is from house sales in King County, the region where the city of Seattle, WA is located.", "sales = graphlab.SFrame('kc_house_data.gl/')", "If we want to do any \"feature engineering\" like creating new features or adjusting existing ones we should do this directly using the SFrames as seen in the other Week 2 notebook. For this notebook, however, we will work with the existing features.\nConvert to Numpy Array\nAlthough SFrames offer a number of benefits to users (especially when using Big Data and built-in graphlab functions) in order to understand the details of the implementation of algorithms it's important to work with a library that allows for direct (and optimized) matrix operations. Numpy is a Python solution to work with matrices (or any multi-dimensional \"array\").\nRecall that the predicted value given the weights and the features is just the dot product between the feature and weight vector. 
Similarly, if we put all of the features row-by-row in a matrix then the predicted value for all the observations can be computed by right multiplying the \"feature matrix\" by the \"weight vector\". \nFirst we need to take the SFrame of our data and convert it into a 2D numpy array (also called a matrix). To do this we use graphlab's built in .to_dataframe() which converts the SFrame into a Pandas (another python library) dataframe. We can then use Panda's .as_matrix() to convert the dataframe into a numpy matrix.", "import numpy as np # note this allows us to refer to numpy as np instead ", "Now we will write a function that will accept an SFrame, a list of feature names (e.g. ['sqft_living', 'bedrooms']) and a target feature (e.g. 'price') and will return two things:\n* A numpy matrix whose columns are the desired features plus a constant column (this is how we create an 'intercept')\n* A numpy array containing the values of the output\nWith this in mind, complete the following function (where there's an empty line you should write a line of code that does what the comment above indicates)\nPlease note you will need GraphLab Create version at least 1.7.1 in order for .to_numpy() to work!", "def get_numpy_data(data_sframe, features, output):\n data_sframe['constant'] = 1 # this is how you add a constant column to an SFrame\n # add the column 'constant' to the front of the features list so that we can extract it along with the others:\n features = ['constant'] + features # this is how you combine two lists\n # select the columns of data_SFrame given by the features list into the SFrame features_sframe (now including constant):\n features_sframe = data_sframe[features]\n # the following line will convert the features_SFrame into a numpy matrix:\n feature_matrix = features_sframe.to_numpy()\n # assign the column of data_sframe associated with the output to the SArray output_sarray\n output_sarray = data_sframe[output]\n # the following will convert the SArray into
a numpy array by first converting it to a list\n output_array = output_sarray.to_numpy()\n return(feature_matrix, output_array)", "For testing let's use the 'sqft_living' feature and a constant as our features and price as our output:", "(example_features, example_output) = get_numpy_data(sales, ['sqft_living'], 'price') # the [] around 'sqft_living' makes it a list\nprint example_features[0,:] # this accesses the first row of the data the ':' indicates 'all columns'\nprint example_output[0] # and the corresponding output", "Predicting output given regression weights\nSuppose we had the weights [1.0, 1.0] and the features [1.0, 1180.0] and we wanted to compute the predicted output 1.0*1.0 + 1.0*1180.0 = 1181.0; this is the dot product between these two arrays. If they're numpy arrays we can use np.dot() to compute this:", "my_weights = np.array([1., 1.]) # the example weights\nmy_features = example_features[0,] # we'll use the first data point\npredicted_value = np.dot(my_features, my_weights)\nprint predicted_value", "np.dot() also works when dealing with a matrix and a vector. Recall that the predictions from all the observations are just the RIGHT (as in weights on the right) dot product between the features matrix and the weights vector.
With this in mind finish the following predict_output function to compute the predictions for an entire matrix of features given the matrix and the weights:", "def predict_output(feature_matrix, weights):\n # assume feature_matrix is a numpy matrix containing the features as columns and weights is a corresponding numpy array\n # create the predictions vector by using np.dot()\n predictions = np.dot(feature_matrix, weights)\n\n return(predictions)", "If you want to test your code run the following cell:", "test_predictions = predict_output(example_features, my_weights)\nprint test_predictions[0] # should be 1181.0\nprint test_predictions[1] # should be 2571.0", "Computing the Derivative\nWe are now going to move to computing the derivative of the regression cost function. Recall that the cost function is the sum over the data points of the squared difference between an observed output and a predicted output.\nSince the derivative of a sum is the sum of the derivatives we can compute the derivative for a single data point and then sum over data points. We can write the squared difference between the observed output and predicted output for a single point as follows:\n(w[0]*[CONSTANT] + w[1]*[feature_1] + ... + w[i] *[feature_i] + ... + w[k]*[feature_k] - output)^2\nWhere we have k features and a constant. So the derivative with respect to weight w[i] by the chain rule is:\n2*(w[0]*[CONSTANT] + w[1]*[feature_1] + ... + w[i] *[feature_i] + ... + w[k]*[feature_k] - output)* [feature_i]\nThe term inside the parentheses is just the error (difference between prediction and output). So we can re-write this as:\n2*error*[feature_i]\nThat is, the derivative for the weight for feature i is the sum (over data points) of 2 times the product of the error and the feature itself. In the case of the constant, this is just twice the sum of the errors!\nRecall that twice the sum of the product of two vectors is just twice the dot product of the two vectors.
Therefore the derivative for the weight for feature_i is just two times the dot product between the values of feature_i and the current errors. \nWith this in mind complete the following derivative function which computes the derivative of the weight given the value of the feature (over all data points) and the errors (over all data points).", "def feature_derivative(errors, feature):\n # Assume that errors and feature are both numpy arrays of the same length (number of data points)\n # compute twice the dot product of these vectors as 'derivative' and return the value\n derivative = 2 * np.dot(errors, feature)\n return(derivative)", "To test your feature derivative run the following:", "(example_features, example_output) = get_numpy_data(sales, ['sqft_living'], 'price') \nmy_weights = np.array([0., 0.]) # this makes all the predictions 0\ntest_predictions = predict_output(example_features, my_weights) \n# just like SFrames 2 numpy arrays can be elementwise subtracted with '-': \nerrors = test_predictions - example_output # prediction errors in this case are just -example_output\nfeature = example_features[:,0] # let's compute the derivative with respect to 'constant', the \":\" indicates \"all rows\"\nderivative = feature_derivative(errors, feature)\nprint derivative\nprint -np.sum(example_output)*2 # should be the same as derivative
We define this by requiring the magnitude (length) of the gradient vector to be smaller than a fixed 'tolerance'.\nWith this in mind, complete the following gradient descent function below using your derivative function above. For each step in the gradient descent we update the weight for each feature before computing our stopping criterion", "from math import sqrt # recall that the magnitude/length of a vector [g[0], g[1], g[2]] is sqrt(g[0]^2 + g[1]^2 + g[2]^2)\n\ndef regression_gradient_descent(feature_matrix, output, initial_weights, step_size, tolerance):\n converged = False \n weights = np.array(initial_weights) # make sure it's a numpy array\n while not converged:\n # compute the predictions based on feature_matrix and weights using your predict_output() function\n predictions = predict_output(feature_matrix, weights)\n # compute the errors as predictions - output\n errors = predictions - output\n gradient_sum_squares = 0 # initialize the gradient sum of squares\n # while we haven't reached the tolerance yet, update each feature's weight\n for i in range(len(weights)): # loop over each weight\n # Recall that feature_matrix[:, i] is the feature column associated with weights[i]\n # compute the derivative for weight[i]:\n derivative = feature_derivative(errors, feature_matrix[:, i])\n # add the squared value of the derivative to the gradient magnitude (for assessing convergence)\n gradient_sum_squares += derivative**2\n # subtract the step size times the derivative from the current weight\n weights[i] -= step_size * derivative\n # compute the square-root of the gradient sum of squares to get the gradient magnitude:\n gradient_magnitude = sqrt(gradient_sum_squares)\n if gradient_magnitude < tolerance:\n converged = True\n return(weights)", "A few things to note before we run the gradient descent. Since the gradient is a sum over all the data points and involves a product of an error and a feature the gradient itself will be very large since the features are large (squarefeet) and the output is large (prices). So while you might expect \"tolerance\" to be small, small is only relative to the size of the features.
\nFor similar reasons the step size will be much smaller than you might expect but this is because the gradient has such large values.\nRunning the Gradient Descent as Simple Regression\nFirst let's split the data into training and test data.", "train_data,test_data = sales.random_split(.8,seed=0)", "Although the gradient descent is designed for multiple regression, since the constant is now a feature we can use the gradient descent function to estimate the parameters in the simple regression on squarefeet. The following cell sets up the feature_matrix, output, initial weights and step size for the first model:", "# let's test out the gradient descent\nsimple_features = ['sqft_living']\nmy_output = 'price'\n(simple_feature_matrix, output) = get_numpy_data(train_data, simple_features, my_output)\ninitial_weights = np.array([-47000., 1.])\nstep_size = 7e-12\ntolerance = 2.5e7
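Before running on the housing data, the update loop described above can be sanity-checked on synthetic data. This is a self-contained sketch of our own (not the assignment's solution); the step size, tolerance, and generating weights are arbitrary choices, and with noise-free data gradient descent should recover the generating weights:

```python
import numpy as np

def gd_regression(X, y, w, step, tol, max_iter=100000):
    # Minimize sum((X w - y)^2) by stepping against the gradient 2 X^T (X w - y)
    for _ in range(max_iter):
        errors = X.dot(w) - y
        grad = 2.0 * X.T.dot(errors)
        if np.sqrt((grad**2).sum()) < tol:
            break
        w = w - step * grad
    return w

rng = np.random.RandomState(0)
X = np.column_stack([np.ones(50), rng.uniform(0.0, 1.0, 50)])  # constant + one feature
true_w = np.array([2.0, -3.0])
y = X.dot(true_w)                # noise-free, so the optimum is exactly true_w
w = gd_regression(X, y, np.zeros(2), step=5e-3, tol=1e-8)
print(w)
```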
Recall that RSS is the sum of the squared errors (difference between prediction and output).\nRunning a multiple regression\nNow we will use more than one actual feature. Use the following code to produce the weights for a second model with the following parameters:", "model_features = ['sqft_living', 'sqft_living15'] # sqft_living15 is the average squarefeet for the nearest 15 neighbors. \nmy_output = 'price'\n(feature_matrix, output) = get_numpy_data(train_data, model_features, my_output)\ninitial_weights = np.array([-100000., 1., 1.])\nstep_size = 4e-12\ntolerance = 1e9", "Use the above parameters to estimate the model weights. Record these values for your quiz.\nUse your newly estimated weights and the predict_output function to compute the predictions on the TEST data. Don't forget to create a numpy array for these features from the test set first!\nQuiz Question: What is the predicted price for the 1st house in the TEST data set for model 2 (round to nearest dollar)?\nWhat is the actual price for the 1st house in the test data set?\nQuiz Question: Which estimate was closer to the true price for the 1st house on the Test data set, model 1 or model 2?\nNow use your predictions and the output to compute the RSS for model 2 on TEST data.\nQuiz Question: Which model (1 or 2) has lowest RSS on all of the TEST data?" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
gfrubi/electrodinamica
notebooks/Ejemplo-quiver-y-odenint.ipynb
gpl-3.0
[ "Plotting 2D fields", "import numpy as np\nimport matplotlib.pyplot as plt\nplt.style.use('classic')", "Here we define the \"mesh\" to be used, with Numpy's meshgrid command", "n = 20 # grid nodes\nmax = 2 # maximum and minimum of the plot\nX = np.linspace(-max,max,n)\nY = np.linspace(-max,max,n)\nx,y = np.meshgrid(X,Y)", "We define the 2 functions $E_x$ and $E_y$ of the 2D vector field:\n$$\n\\vec{E}(x,y)=E_x(x,y)\\hat{x}+E_y(x,y)\\hat{y}\n$$\nExample: \n$$\n\\vec{E}(x,y)=y\\hat{x}-x\\hat{y}\n$$", "def Ex(x,y): \n return y\ndef Ey(x,y):\n return -x\ndef E(x,y): # magnitude of the electric field, for normalization\n return np.sqrt((Ex(x,y))**2+(Ey(x,y))**2)", "Now we plot using quiver from matplotlib.pyplot.\nBy default, quiver represents the magnitude of the field as the length of each arrow", "plt.figure(figsize=(5,5))\nplt.quiver(x,y,Ex(x,y),Ey(x,y), color='blue', scale=50, pivot='middle')\nplt.xlim(-1.1*max,1.1*max)\nplt.ylim(-1.1*max,1.1*max)\nplt.xlabel('$x$')\nplt.ylabel('$y$')\nplt.title('$\\\\vec{E}(\\\\vec{x})$')\nplt.savefig('campo2d.pdf')", "We can also plot the normalized field (so that the arrows only represent the direction of the field)", "plt.figure(figsize=(5,5))\nplt.quiver(x,y,Ex(x,y)/E(x,y),Ey(x,y)/E(x,y), color='blue', pivot='middle')\nplt.xlim(-1.1*max,1.1*max)\nplt.ylim(-1.1*max,1.1*max)\nplt.xlabel('$x$')\nplt.ylabel('$y$')\nplt.title('$\\\\vec{E}(\\\\vec{x})$')\nplt.savefig('campo2d-normalizado.pdf')", "Another alternative is to plot the normalized field and encode the field magnitude in a color scale (\"color map\"), using the cmap option.
See https://matplotlib.org/stable/tutorials/colors/colormaps.html for more details and a list of the available color maps.", "plt.figure(figsize=(7,5))\nplt.quiver(x,y,Ex(x,y)/E(x,y),Ey(x,y)/E(x,y),E(x,y), pivot='middle', cmap='Blues')\nplt.xlim(-1.1*max,1.1*max)\nplt.ylim(-1.1*max,1.1*max)\nplt.xlabel('$x$')\nplt.ylabel('$y$')\nplt.title('$\\\\vec{E}(\\\\vec{x})$')\nplt.colorbar()\nplt.savefig('campo2d-normalizado-colores.pdf')", "Field lines: numerical integration\nWe will now find a numerical solution for the electric field lines. For that we will use the odeint function from scipy.integrate, which integrates a given system of first-order ODEs (nonlinear, coupled).", "from scipy.integrate import odeint", "odeint numerically solves systems of first-order ODEs of the form\n$$\n\\frac{dX}{dt}=f(X,t)\n$$\nwhere $X(t)=[X_0(t),X_1(t), \\dots,X_N(t)]$ is the vector of the $N$ unknowns and $f(X,t)=[f_0(X,t),f_1(X,t),\\dots,f_N(X,t)]$ is the function defining the system.\nIn our two-dimensional case ($N=2$), the system of ODEs has the form\n$$\n\\frac{dx}{dt}(t) = E_x(x(t),y(t))\n$$\n$$\n\\frac{dy}{dt}(t) = E_y(x(t),y(t))\n$$\nWe therefore define the required function $f$ as follows:", "def f(XX,t):\n x, y = XX\n dxdt = Ex(x,y)\n dydt = Ey(x,y)\n return [dxdt,dydt]", "In addition, we need to define the array of $t$ values at which the numerical solution will be found, and an initial condition $X(t=0)=[X_0(0), X_1(0), \\dots, X_N(0)]$", "t = np.linspace(0,5,1000)\nX0 = [1,0]", "With that, we can call the odeint function as follows:", "Xsol = odeint(f,X0,t)\nxsol = Xsol[:,0]\nysol = Xsol[:,1]", "We can add the numerically found curve to our previous plot:", "plt.figure(figsize=(7,5))\nplt.quiver(x,y,Ex(x,y)/E(x,y),Ey(x,y)/E(x,y),E(x,y), scale=30, pivot='middle', 
cmap='Blues')\nplt.xlim(-1.1*max,1.1*max)\nplt.ylim(-1.1*max,1.1*max)\nplt.xlabel('$x$')\nplt.ylabel('$y$')\nplt.title('$\\\\vec{E}(\\\\vec{x})$')\nplt.plot(xsol,ysol)\nplt.colorbar()\nplt.savefig('campo2d-normalizado-colores-y-solucion-numerica.pdf')" ]
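For newer SciPy versions, the same field line can also be computed with scipy.integrate.solve_ivp. This is our own sketch (the tolerances and time span are arbitrary choices) for the same field $\vec{E}=(y,-x)$ and starting point $(1,0)$, whose field line is the unit circle:

```python
import numpy as np
from scipy.integrate import solve_ivp

def field(t, X):
    # Right-hand side dX/dt = (E_x, E_y) for the field E = (y, -x)
    x, y = X
    return [y, -x]

# Integrate over one full period; the trajectory should return to (1, 0)
sol = solve_ivp(field, (0.0, 2.0 * np.pi), [1.0, 0.0], rtol=1e-8, atol=1e-8)
x_end, y_end = sol.y[:, -1]
print(x_end, y_end)
```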
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
bxin/cwfs
examples/AuxTel.ipynb
gpl-3.0
[ "Patrick provided a pair of images from AuxTel.\nLet's look at how those images work with our cwfs code\nload the modules", "from lsst.cwfs.instrument import Instrument\nfrom lsst.cwfs.algorithm import Algorithm\nfrom lsst.cwfs.image import Image, readFile, aperture2image, showProjection\nimport lsst.cwfs.plots as plots\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n%matplotlib inline", "Define the image objects. Input arguments: file name, field coordinates in deg, image type\nThe colorbar() below may produce a warning message if your matplotlib version is older than 1.5.0\n( https://github.com/matplotlib/matplotlib/issues/5209 )", "fieldXY = [0,0]\n\nI1 = Image(readFile('../tests/testImages/AuxTel/I1_intra_20190912_HD21161_z05.fits'), fieldXY, Image.INTRA)\nI2 = Image(readFile('../tests/testImages/AuxTel/I2_extra_20190912_HD21161_z05.fits'), fieldXY, Image.EXTRA)\n\nplots.plotImage(I1.image,'intra')\nplots.plotImage(I2.image,'extra')", "Define the instrument. Input arguments: instrument name, size of image stamps", "inst=Instrument('AuxTel',I1.sizeinPix)", "Define the algorithm being used. Input arguments: baseline algorithm, instrument, debug level", "algo=Algorithm('exp',inst,0)", "Run it", "algo.runIt(inst,I1,I2,'paraxial')", "Print the Zernikes Zn (n>=4)", "print(algo.zer4UpNm)", "plot the Zernikes Zn (n>=4)", "plots.plotZer(algo.zer4UpNm,'nm')", "We check that the optical parameters provided are consistent with the image diameter. Otherwise the numerical solutions themselves do not make much sense.", "print(\"Expected image diameter in pixels = %.0f\"%(inst.offset/inst.fno/inst.pixelSize))\n\nplots.plotImage(I1.image0,'original intra', mask=algo.pMask)\n\nplots.plotImage(I2.image0,'original extra', mask=algo.pMask)", "Patrick asked the question: can we show the results of the fit in intensity space, and also the residual?\nGreat question. 
The short answer is no.\nThe long answer: the current approach implemented is the so-called inversion approach, i.e., to inversely solve the Transport of Intensity Equation with boundary conditions. It is not a forward fit. If you think of the unperturbed image as I0, and the real image as I, we iteratively map I back toward I0 using the estimated wavefront. Upon convergence, our \"residual images\" should have intensity distributions that are nearly uniform. We always have an estimated wavefront, and a residual wavefront. The residual wavefront is obtained from the two residual images.\nHowever, using tools available in the cwfs package, we can easily make forward predictions of the images using the wavefront solution. This is basically to take the slope of the wavefront at any pupil position, and raytrace to the image plane. We will demonstrate this below.", "nanMask = np.ones(I1.image.shape)\nnanMask[I1.pMask==0] = np.nan\nfig, ax = plt.subplots(1,2, figsize=[10,4])\nimg = ax[0].imshow(algo.Wconverge*nanMask, origin='lower')\nax[0].set_title('Final WF = estimated + residual')\nfig.colorbar(img, ax=ax[0])\nimg = ax[1].imshow(algo.West*nanMask, origin='lower')\nax[1].set_title('residual wavefront')\nfig.colorbar(img, ax=ax[1])\n\nfig, ax = plt.subplots(1,2, figsize=[10,4])\nimg = ax[0].imshow(I1.image, origin='lower')\nax[0].set_title('Intra residual image')\nfig.colorbar(img, ax=ax[0])\nimg = ax[1].imshow(I2.image, origin='lower')\nax[1].set_title('Extra residual image')\nfig.colorbar(img, ax=ax[1])", "Now we do the forward raytrace using our wavefront solutions\nThe code is simply borrowed from existing cwfs code.\nWe first set up the pupil grid.
Oversample means how many rays to trace from each grid point on the pupil.", "oversample = 10\nprojSamples = I1.image0.shape[0]*oversample\n\nluty, lutx = np.mgrid[\n -(projSamples / 2 - 0.5):(projSamples / 2 + 0.5),\n -(projSamples / 2 - 0.5):(projSamples / 2 + 0.5)]\nlutx = lutx / (projSamples / 2 / inst.sensorFactor)\nluty = luty / (projSamples / 2 / inst.sensorFactor)", "We now trace the rays to the image plane. Lutxp and Lutyp are image coordinates for each (oversampled) ray. showProjection() makes the intensity image. Then, to downsample the image back to the original resolution, we want to use the function downResolution() which is defined for the image class.", "lutxp, lutyp, J = aperture2image(I1, inst, algo, algo.converge[:,-1], lutx, luty, projSamples, 'paraxial')\nshow_lutxyp = showProjection(lutxp, lutyp, inst.sensorFactor, projSamples, 1)\nI1fit = Image(show_lutxyp, fieldXY, Image.INTRA)\nI1fit.downResolution(oversample, I1.image0.shape[0], I1.image0.shape[1])", "Now do the same thing for the extra-focal image", "luty, lutx = np.mgrid[\n -(projSamples / 2 - 0.5):(projSamples / 2 + 0.5),\n -(projSamples / 2 - 0.5):(projSamples / 2 + 0.5)]\nlutx = lutx / (projSamples / 2 / inst.sensorFactor)\nluty = luty / (projSamples / 2 / inst.sensorFactor)\n\nlutxp, lutyp, J = aperture2image(I2, inst, algo, algo.converge[:,-1], lutx, luty, projSamples, 'paraxial')\nshow_lutxyp = showProjection(lutxp, lutyp, inst.sensorFactor, projSamples, 1)\nI2fit = Image(show_lutxyp, fieldXY, Image.EXTRA)\nI2fit.downResolution(oversample, I2.image0.shape[0], I2.image0.shape[1])\n\n#The atmosphere used here is just a random Gaussian smearing. 
We do not care much about the size at this point\nfrom scipy.ndimage import gaussian_filter\n\natmSigma = .6/3600/180*3.14159*21.6/1.44e-5\nI1fit.image[np.isnan(I1fit.image)]=0\na = gaussian_filter(I1fit.image, sigma=atmSigma)\n\nfig, ax = plt.subplots(1,3, figsize=[15,4])\nimg = ax[0].imshow(I1fit.image, origin='lower')\nax[0].set_title('Forward prediction (no atm) Intra')\nfig.colorbar(img, ax=ax[0])\nimg = ax[1].imshow(a, origin='lower')\nax[1].set_title('Forward prediction (w atm) Intra')\nfig.colorbar(img, ax=ax[1])\n\nimg = ax[2].imshow(I1.image0, origin='lower')\nax[2].set_title('Real Image, Intra')\nfig.colorbar(img, ax=ax[2])\n\nI2fit.image[np.isnan(I2fit.image)]=0\nb = gaussian_filter(I2fit.image, sigma=atmSigma)\n\nfig, ax = plt.subplots(1,3, figsize=[15,4])\nimg = ax[0].imshow(I2fit.image, origin='lower')\nax[0].set_title('Forward prediction (no atm) Extra')\nfig.colorbar(img, ax=ax[0])\nimg = ax[1].imshow(b, origin='lower')\nax[1].set_title('Forward prediction (w atm) Extra')\nfig.colorbar(img, ax=ax[1])\n\nimg = ax[2].imshow(I2.image0, origin='lower')\nax[2].set_title('Real Image, Extra')\nfig.colorbar(img, ax=ax[2])" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
cgrudz/cgrudz.github.io
teaching/stat_775_2021_fall/activities/activity-2021-09-24.ipynb
mit
[ "Introduction to Python part XIII (And a discussion of numerical solutions to O/SDEs)\nActivity 1: Discussion numerical solutions\n\nWhat is the Euler scheme and how is this related to the tangent approximation / Taylor's theorem?\nWhat do we mean by order of convergence of a numerical solution to an ODE? What is the order of convergence of Euler's scheme?\nWhat is the Euler-Maruyama scheme? How does this extend / reduce to the Euler scheme?\nWhat do we mean by strong versus weak convergence of a numerical solution to an SDE? What are the rates of convergence for Euler-Maruyama?\nHow do these sample paths relate to the Fokker-Plank equation? \n\nActivity 2: Test-Driven Development\nAn assertion checks that something is true at a particular point in the program. The next step is to check the overall behavior of a piece of code, i.e., to make sure that it produces the right output when it’s given a particular input. For example, suppose we need to find where two or more time series overlap. The range of each time series is represented as a pair of numbers, which are the time the interval started and ended. The output is the largest range that they all include:\n\nMost novice programmers would solve this problem like this:\n\nWrite a function range_overlap.\nCall it interactively on two or three different inputs.\nIf it produces the wrong answer, fix the function and re-run that test.\n\nThis clearly works — after all, thousands of scientists are doing it right now — but there’s a better way:\n\nWrite a short function for each test.\nWrite a range_overlap function that should pass those tests.\nIf range_overlap produces any wrong answers, fix it and re-run the test functions.\n\nWriting the tests before writing the function they exercise is called test-driven development (TDD). 
Its advocates believe it produces better code faster because:\n\nIf people write tests after writing the thing to be tested, they are subject to confirmation bias, i.e., they subconsciously write tests to show that their code is correct, rather than to find errors.\nWriting tests helps programmers figure out what the function is actually supposed to do.\n\nHere are three test functions for range_overlap:", "assert range_overlap([ (0.0, 1.0) ]) == (0.0, 1.0)\nassert range_overlap([ (2.0, 3.0), (2.0, 4.0) ]) == (2.0, 3.0)\nassert range_overlap([ (0.0, 1.0), (0.0, 2.0), (-1.0, 1.0) ]) == (0.0, 1.0)", "The error is actually reassuring: we haven’t written range_overlap yet, so if the tests passed, it would be a sign that someone else had and that we were accidentally using their function.\nAnd as a bonus of writing these tests, we’ve implicitly defined what our input and output look like: we expect a list of pairs as input, and produce a single pair as output.\nSomething important is missing, though. We don’t have any tests for the case where the ranges don’t overlap at all:", "assert range_overlap([ (0.0, 1.0), (5.0, 6.0) ]) == ???\n", "What should range_overlap do in this case: fail with an error message, produce a special value like (0.0, 0.0) to signal that there’s no overlap, or something else? Any actual implementation of the function will do one of these things; writing the tests first helps us figure out which is best before we’re emotionally invested in whatever we happened to write before we realized there was an issue.\nAnd what about this case?", "assert range_overlap([ (0.0, 1.0), (1.0, 2.0) ]) == ???", "Do two segments that touch at their endpoints overlap or not? Mathematicians usually say “yes”, but engineers usually say “no”. 
The best answer is “whatever is most useful in the rest of our program”, but again, any actual implementation of range_overlap is going to do something, and whatever it is ought to be consistent with what it does when there’s no overlap at all.\nSince we’re planning to use the range this function returns as the X axis in a time series chart, we decide that:\n\nevery overlap has to have non-zero width, and\nwe will return the special value None when there’s no overlap.\n\nNone is built into Python, and means “nothing here”. (Other languages often call the equivalent value null or nil). With that decision made, we can finish writing our last two tests:", "assert range_overlap([ (0.0, 1.0), (5.0, 6.0) ]) == None\nassert range_overlap([ (0.0, 1.0), (1.0, 2.0) ]) == None", "Again, we get an error because we haven’t written our function, but we’re now ready to do so:", "def range_overlap(ranges):\n \"\"\"Return common overlap among a set of [left, right] ranges.\"\"\"\n max_left = 0.0\n min_right = 1.0\n for (left, right) in ranges:\n max_left = max(max_left, left)\n min_right = min(min_right, right)\n return (max_left, min_right)", "Take a moment to think about why we calculate the left endpoint of the overlap as the maximum of the input left endpoints, and the overlap right endpoint as the minimum of the input right endpoints. We’d now like to re-run our tests, but they’re scattered across three different cells. 
To make running them easier, let’s put them all in a function:", "def test_range_overlap():\n assert range_overlap([ (0.0, 1.0), (5.0, 6.0) ]) == None\n assert range_overlap([ (0.0, 1.0), (1.0, 2.0) ]) == None\n assert range_overlap([ (0.0, 1.0) ]) == (0.0, 1.0)\n assert range_overlap([ (2.0, 3.0), (2.0, 4.0) ]) == (2.0, 3.0)\n assert range_overlap([ (0.0, 1.0), (0.0, 2.0), (-1.0, 1.0) ]) == (0.0, 1.0)\n assert range_overlap([]) == None\n\ntest_range_overlap()\n", "The first test that was supposed to produce None fails, so we know something is wrong with our function. We don’t know whether the other tests passed or failed because Python halted the program as soon as it spotted the first error. Still, some information is better than none, and if we trace the behavior of the function with that input, we realize that we’re initializing max_left and min_right to 0.0 and 1.0 respectively, regardless of the input values. This violates another important rule of programming: always initialize from data." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
Naereen/notebooks
Demonstration_of_Pypi_package_OCaml_for_polyglot_notebook_02-2021.ipynb
mit
[ "Demonstration of Pypi package OCaml for polyglot notebook\nTL;DR: this Pypi ocaml package allows you to embed a real and complete OCaml runtime (with full standard library) in an IPython console or Jupyter notebook running IPython kernel (Python 3). OCaml functions and tiny programs can then be called directly from the IPython/Jupyter console, and results can be printed, or assigned to Python variables! It even supports Currying multi-argument functions!\nSee also: note, if you really want to use OCaml in a Jupyter notebook, the best solution is OCaml-jupyter kernel!\n(I opened an issue there to present this to the developer, just for his curiosity)\n\nAuthor: Lilian Besson (@Naereen)\nDate: 23/02/2021\nLicense: MIT License\n\nIntroduction\n\nSee this issue I opened and assigned to myself.\n\nIf you run this:", "!pip3 install ocaml", "Then you can use basic OCaml expressions and the standard library... from IPython or a Jupyter notebook with the IPython kernel, without having to install OCaml yourself on your laptop!\nIt's pre-compiled. I don't know what version though...", "import ocaml\n\nanswer_to_life = %ocaml 40 + 2\n\nprint(answer_to_life)\nprint(type(answer_to_life)) # a real integer!", "It would be a great project to show to students studying Python and C and OCaml: this tiny IPython magic makes the link between Python and OCaml (one way) using OCaml compiled as a dynamic extension to CPython 😍 !\nReference and missing documentation of ocaml Pypi package\nThis Jupyter notebook uses the ocaml v0.0.11 package from Pypi.\nThere is no documentation, I asked the author, who works at JaneStreet and he redirected me to this blog post by JaneStreet.\nApparently, being professional developers doesn't mean they restrain themselves from shipping unfinished and undocumented packages to Pypi. Okay... Why? It seems highly unprofessional! 
I understand being in a hurry, but then just don't publish to Pypi, and let users install it using pip but from a Git repository:\nbash\n$ pip install git+https://github.com/&lt;Username&gt;/&lt;Projects&gt;\nTheir blog post states\n\nNote that this package is not currently very well polished but it should give some ideas of what can be done through this Python-OCaml integration.\n\nBut come on, writing a short README or description in the setup.py is the least that they should do...\nIframe showing the https://pypi.org/project/ocaml/ page:\n<iframe src=\"https://pypi.org/project/ocaml/\"></iframe>\nI don't do that! See my Pypi packages on https://pypi.org/user/Naereen/:\n\n\nlempel-ziv-complexity (Last released on Dec 19, 2019): Fast implementation of the Lempel-Ziv complexity function.\nSMPyBandits (Last released on Oct 25, 2019): SMPyBandits: Open-Source Python package for Single- and Multi-Players multi-armed Bandits algorithms.\nparcoursup (Last released on Nov 14, 2018): ParcourSup.py: a Python 3 clone of ParcoursSup, written for educational purposes.\ngeneratewordcloud (Last released on Oct 16, 2016): A simple Python (2 or 3) script to generate a PNG word-cloud image from a bunch of text files. 
Based on word_cloud.\nansicolortags (Last released on Jul 2, 2016): ansicolortags brings a simple and powerful way to use colours in a terminal application with Python 2 and 3.\n\n\nLet's show the current version of the package, just for reference:", "%load_ext watermark\n\n%watermark -v -p ocaml", "A first example\nAs for other language bindings, we can have either a full cell, starting with %%ocaml :", "%%ocaml\nprint_endline \"Hello world from OCaml running in Jupyter (from IPython)!\";;", "Natively, IPython/Jupyter supports lots of \"magic commands\", and especially the %%bash, %%perl, %%javascript and %%ruby interfaces to famous scripting languages, and a generic %%script one.", "%%bash\necho \"Hello world from Bash running in Jupyter (from IPython)\"\n\n%%script bash\necho \"Hello world from Bash running in Jupyter (from IPython)\"", "Note that it has been possible, for a long time, to call an OCaml toplevel like this!", "%%script ocaml\nprint_endline \"Hello world from OCaml running in Jupyter (from IPython)!\";;", "But it does nothing else than opening a sub-process, running the ocaml command, feeding it the content of the cell, and then exiting.\n\nCons: It's not so pretty, as it prints all these startup messages. I thought we could silence them, like the python -q (quiet) option... 
but we can't!\nPros: it shows the value of each line, so it can be used to quickly check something, if needed...\n\nMore full %%ocaml cells", "%%ocaml\nlet sum : (int list -> int) = List.fold_left (+) 0 in\nlet a_list (n:int) : int list = Array.to_list (Array.init n (fun i -> i*i+30)) in\nfor n = 300000 to 300010 do\n Format.printf \"\\nList of size %2.i had sum = %4.i.%!\" n (sum (a_list n));\ndone;;", "As I was saying, using %%script ocaml allows you to quickly check things, like for instance the interface of a module!", "%%script ocaml\n#show Array;;", "Using OCaml returned values from Python\nThis package allows you to use dynamically defined OCaml functions from Python, the same way it can be done for other languages like Julia or R (see this blog post if you never saw these possibilities, or this one).\nFor instance:", "b = %ocaml true\nprint(type(b), b)\n\ns = %ocaml \"OK ?\"\nprint(type(s), s)\n\ni = %ocaml 2021\nprint(type(i), i)\n\nf = %ocaml 2.99792458\nprint(type(f), f)", "So booleans, strings, integers and floats get perfectly mapped from OCaml values to Python native values.", "l = %ocaml [1, 3, 5]\nprint(type(l), l)\n\na = %ocaml [|2; 4; 6|]\nprint(type(a), a)\n\nt = %ocaml (23, 02, 2021)\nprint(type(t), t)", "So 'a list, 'a array and 'a * 'b * .. 
heterogeneous tuples get perfectly mapped from OCaml values to Python native values!\nBut it's not perfect, as for instance OCaml has a char type (similar to the one in C) but Python only has strings, so this'll fail:\nValueError: ocaml error (Failure\"unknown type char\")", "c = %ocaml 'C'\nprint(type(c), c)", "And for functions:", "sum_ocaml_1 = %ocaml let sum : (int list -> int) = List.fold_left (+) 0 in sum\n\nprint(sum_ocaml_1, type(sum_ocaml_1))\n\nsum_ocaml_1 ([1, 2, 3, 4, 5]) # 15", "Or simply", "sum_ocaml_2 = %ocaml List.fold_left (+) 0\n\nsum_ocaml_2 ([1, 2, 3, 4, 5]) # 15", "What about user defined types?", "%%ocaml\ntype state = TODO | DONE | Unknown of string;;\n\nlet print_state (s:state) =\n match s with\n | TODO -> Format.printf \"TODO%!\"\n | DONE -> Format.printf \"DONE%!\"\n | Unknown status -> Format.printf \"%s%!\" status\n;;\n\nprint_state TODO;;", "It fails:\n```\nSyntaxError: ocaml evaluation error on lines 1:11 to 1:15\nError: \n\n1: let out = (type TODO | DONE);;\nSyntax error: operator expected.\n```", "t = %ocaml type TODO | DONE", "Indeed the %ocaml magic only works for expressions, with no ;;.\nMore datatype conversions to Python?\nWe can still explore:\n- references, let x = ref 1\n- polymorphic functions like let smaller (x: 'a) (y: 'a) : bool = x &lt; y\n- 'a option type\n- functions with labels\n- records\n- polymorphic variants ?\nFor reference, see https://github.com/janestreet/ppx_python#conversions\nReferences - fail", "xref = %ocaml ref 1", "Polymorphic function - works!", "cons = %ocaml fun hd tl -> hd :: tl\nprint(cons, type(cons))\n\ncons(10)([20, 30])\n\ncons(1.0)([2.0, 30])", "Woooo, somehow OCaml accepted a polymorphic list at some point?", "head, tail = %ocaml List.hd, List.tl\n\na_list = [1, 2, 3]\na_list.append(a_list)\nhead(a_list), tail(a_list)", "Another example:", "smaller = %ocaml fun (x: int) (y: int) -> x < y\nprint(smaller)\nhelp(smaller)\n\nsmaller_poly = %ocaml fun (x: 'a) (y: 'a) -> x < 
y\nprint(smaller_poly)\nhelp(smaller_poly)", "Option type", "none = %ocaml None\nprint(none, type(none))\n\nsome_int = %ocaml Some 42\nprint(some_int, type(some_int))\n\n# indistinguishable from None, so that's weird!\nsome_None = %ocaml Some None\nprint(some_None, type(some_None))", "Note that this limitation was explained:\n\nNote that this makes the two OCaml values [Some None] and [None] indistinguishable on the Python side as both are represented using None.\n\nFunctions with labels - labels get erased", "# val fold_left : f:('a -> 'b -> 'a) -> init:'a -> 'b list -> 'a\nfold_left = %ocaml ListLabels.fold_left\n\nprint(fold_left, type(fold_left))\nhelp(fold_left)\n\nfold_left(lambda x: lambda y: x + y)(0)([1, 2, 3, 4, 5])", "Record - fail", "%%ocaml\ntype ratio = {num: int; denom: int};;\nlet add_ratio r1 r2 =\n {num = r1.num * r2.denom + r2.num * r1.denom;\n denom = r1.denom * r2.denom};;\nadd_ratio {num=1; denom=3} {num=2; denom=5};;\n\n%ocaml {num=1; denom=3}", "Of course it fails!\nBut it could be translated to Python dictionaries.\nExceptions - fail", "exc = %ocaml exception Empty_list", "Polymorphic variants - fail\nSee documentation", "%%ocaml\nFormat.printf \"%i%!\" (let value `float = 0 in value `float);;\n\nzero = %ocaml let value `float = 0 in value `float\n\nprint(zero, type(zero))\n\nvariant = %ocaml `float\n\nprint(variant, type(variant))", "Optional arguments - fail\nIt would be difficult to implement them correctly alongside the (awesome) partial application closure feature...", "%%ocaml\nlet bump ?(step = 1) x = x + step;;\nFormat.printf \"\\n%i%!\" (bump 41);;\nFormat.printf \"\\n%i%!\" (bump ~step:12 30);;\n\nbump = %ocaml let bump ?(step = 1) x = x + step in bump", "Recursive list?", "%%ocaml\nlet rec list1 = 0 :: list2\nand list2 = 1 :: list1\nin\nFormat.printf \"%i -> %i -> %i -> %i ...%!\" (List.hd list1) (List.hd list2) (List.hd (List.tl list2)) (List.hd (List.tl list1));;\n\n# don't run\nif False:\n list1, list2 = %ocaml let rec list1 = 
0 :: list2 and list2 = 1 :: list1 in (list1, list2)", "It fails, but takes 100% CPU and freezes. But in Python we can do it:", "list1 = [0]\nlist2 = [1]\nlist1.append(list2)\nlist2.append(list1)\nprint(list1)\nprint(list2)", "From standard library\nWhat about Sets, mapped to set?\nWhat about Hashtbl, mapped to dict?\nAnd Stack, Queue, etc?\nTODO: left as an exercise for the reader.\nMap - fail", "%%ocaml\n\nmodule IntPairs = struct\n type t = int * int\n let compare (x0,y0) (x1,y1) =\n match Stdlib.compare x0 x1 with\n 0 -> Stdlib.compare y0 y1\n | c -> c\nend\n\nmodule PairsMap = Map.Make(IntPairs)\n\nlet m = PairsMap.(empty |> add (0,1) \"hello\" |> add (1,0) \"world\")\n\n(* not an expression, not usable in %ocaml magic *)", "Stack (or any module using a custom type) - fail", "stack = %ocaml Stack.create()", "😮 Curryed functions!\nImagine you define this function in math:\n$p : (x,y,z) \\mapsto x * y * z$, on $\\mathbb{N}^3 \\to \\mathbb{N}$.\nThen the Curryed form states that it is equivalent to\n$p' : x \\mapsto y \\mapsto z \\mapsto x * y * z$, informally defined on $\\mathbb{N} \\to \\mathbb{N} \\to \\mathbb{N} \\to \\mathbb{N}$.\nSo for instance if $x=1$ and $y=2$, $p'(x)(y)$ is $z \\mapsto 1 * 2 * z$, which is also $z \\mapsto p(1, 2, z)$.\nIn Python, that would be this function:", "def product3values(x, y, z):\n return x * y * z", "But you can't directly use it for partial application:", "x = 1\ny = 2\npartial_product = product3values(x, y)\nz = 10\nprint(f\"With x = {x}, y = {y}, and {partial_product} applied to z = {z}, we got {partial_product(z)}\")", "With the Python standard library, it's possible to use functools.partial to obtain partially evaluated functions, which can be viewed as a limited support of Curryed functions.", "import functools\n\npartial_product = functools.partial(product3values, 1, 2)\n\nz = 10\nprint(f\"With x = {x}, y = {y}, and {partial_product} applied to z = {z}, we got {partial_product(z)}\")", "But in OCaml, the 
convention is to directly write functions in Curry form, rather than tuple form:", "%%ocaml\n(* this is advised *)\nlet product_curry (x:int) (y:int) (z:int) : int = x * y * z in\nlet x = 1 and y = 2 in\nlet partial_product = product_curry x y in\nlet z = 10 in\nFormat.printf \"With x = %i, y = %i, and partial_product applied to z = %i, we got %i.\" x y z (partial_product z);;", "Indeed, in most situations, the tuple form is just not \"OCaml\"esque, and tedious to use, and does not allow partial application!", "%%ocaml\n(* this is NOT advised *)\nlet product_curry (xyz : (int * int * int)) : int =\n let x, y, z = xyz in\n x * y * z\nin\nlet x = 1 and y = 2 in\nlet partial_product = product_curry x y in\nlet z = 10 in\nFormat.printf \"With x = %i, y = %i, and partial_product applied to z = %i, we got %i.\" x y z (partial_product z);;", "Well that was some long explanation, but now comes the magic!\nIf you use %ocaml to get in Python the values returned from OCaml, then functions are Curryed functions!", "product_curry = %ocaml let product_curry (x:int) (y:int) (z:int) : int = x * y * z in product_curry", "The only information we have on this function is the OCaml signature, in its docstring:", "help(product_curry)", "So we can't use it as a classical 3-argument Python function:", "product_curry(1, 2, 10)", "But we CAN use it as a Curryed function!", "product_curry(1)(2)(10)", "Which is awesome because now we can do partial evaluation as in OCaml!", "partial_product_1 = product_curry(1)\npartial_product_1(2)(10)\n\npartial_product_2 = product_curry(1)(2)\npartial_product_2_too = partial_product_1(2)\npartial_product_2(10), partial_product_2_too(10)", "What's very cool is that these functions' docstrings keep showing the signature of the underlying OCaml function, even if they were obtained from pure Python cells!", "help(partial_product_1)\nhelp(partial_product_2)\nhelp(partial_product_2_too)", "That's it for this feature, it's cool and interesting.\nSome 
questions\nCan we share variables between two consecutive full cells?\nIn the %ocaml mode, nothing can be shared from the OCaml side, as it's just an expression, let's check:", "%ocaml let x = 1 in x\n\n%ocaml x", "But what about full cell mode, %%ocaml?", "%%ocaml\n(* See https://en.wikipedia.org/wiki/42_(number) *)\nlet answer_to_life = 42 in\nFormat.printf \"\\n... « The answer to life, the universe, and everything is %i »%!\" answer_to_life;;\n\n%%ocaml\nFormat.printf \"\\n... « The answer to life, the universe, and everything is %i »%!\" answer_to_life;;", "==> Answer: no, we cannot share any memory between two consecutive cells.\nWell, too bad, but it's not so important.\nIs there any documentation of the magic or commands?\nNo.", "?%ocaml", "Docstring: &lt;no docstring&gt;\nFile: /usr/local/lib/python3.6/dist-packages/ocaml/__init__.py\n\nMore remarks\nNote that their blog post says that the opttoploop module could be used to compile the OCaml to a faster version (native code), but does not document this.\n\nNote that with the toploop module, the OCaml code is evaluated by compiling to bytecode which is not optimal, switching to the opttoploop module that generates native code should make it even faster.", "\"opttoploop\" in dir(ocaml)", "Also note that the ocaml module is shipped with an example of a tiny module which was written in OCaml and compiled, being made available to Python directly:", "# it doesn't have a docstring, don't try help(<...>) or <...>?\nocaml.ocaml.example_module.approx_pi\n\nocaml.ocaml.example_module.approx_pi(1000000)", "Final note: a tiny benchmark\nLet's compare the speed of naive Python and naive OCaml sum of a list/array of floats, for various input sizes.", "import ocaml\nimport numpy as np\n\npython_sum = sum\nocaml_sum = %ocaml List.fold_left (+.) 
0.\nnumpy_sum = np.sum\n\nprint(python_sum( [1.0, 2.0, 3.0, 4.0, 5.0] ))\nprint(ocaml_sum( [1.0, 2.0, 3.0, 4.0, 5.0] ))\nprint(numpy_sum( [1.0, 2.0, 3.0, 4.0, 5.0] ))", "Now for a \"large\" array, let's use IPython %timeit magic for very quick benchmarking.\n\nScience is about making hypotheses, designing experiments to check them, and concluding.\nMy hypothesis here is that the OCaml version will be between 10 and 50 times slower than the Python one (and the Numpy version is 50-200 times faster than Python).", "sizes = [100, 1000, 10000, 100000, 1000000, 10000000]\nprint(f\"Comparing time of python_sum and ocaml_sum :\")\nfor size in sizes:\n print(f\"\\n- For size = {size}:\")\n X = list(np.random.randn(size))\n \n print(\"\\tFor python sum: \", end='')\n %timeit python_sum(X)\n \n assert np.isclose(python_sum(X), ocaml_sum(X))\n print(\"\\tFor OCaml sum: \", end='')\n %timeit ocaml_sum(X)\n\n assert np.isclose(python_sum(X), numpy_sum(X))\n print(\"\\tFor numpy.sum: \", end='')\n %timeit numpy_sum(X)", "Well that's better than what I expected!\nIt seems that the overhead is constant and not increasing when the size of the input is increasing!\nIt means that if the Python code runs in time $T_1(n)$ for inputs of size $n$, then the OCaml binding code runs in less than $T_2(n) \\leq \\alpha T_1(n) + \\beta$, with two constants $\\alpha, \\beta$.", "import matplotlib.pyplot as plt\n\nµs = 1\nms = 1000*µs\ns = 1000*ms\n\nX = sizes\n# TODO: get this automatically?\n \nY_python = [ 7.27*µs, 72.1*µs, 786*µs, 7.55*ms, 68.2*ms, 677*ms ]\nY_ocaml = [ 16*µs, 157*µs, 1.8*ms, 24.2*ms, 286*ms, 2.92*s ]\nY_numpy = [ 12*µs, 67.7*µs, 615*µs, 6.25*ms, 62.6*ms, 632*ms ]\n\nfig = plt.figure(figsize=(14, 10), dpi=300)\n\nplt.loglog(X, Y_python, color=\"blue\", marker=\"o\", label=\"naive Python\", lw=4, ms=15)\nplt.loglog(X, Y_ocaml, color=\"green\", marker=\"d\", label=\"using OCaml\", lw=4, ms=15)\nplt.loglog(X, Y_numpy, color=\"orange\", marker=\"s\", label=\"using Numpy\", lw=4, 
ms=15)\n\nplt.ylabel(\"Time in micro-seconds\")\nplt.xlabel(\"Size of input list\")\nplt.legend()\nplt.title(\"Tiny benchmark comparing OCaml binding to Python\")\nplt.show()", "Just to check the experimental values of $\\alpha$ and $\\beta$ in my claim above, let's use the numpy.polyfit function:", "np.polyfit(Y_python, Y_ocaml, deg=1)", "So the OCaml binding code runs about 4.5 times slower than the Python one!\nIt means that if you're doing a data analysis or some things in Python, and suddenly you think of an easy way to write an elegant and fast OCaml version, it's definitely viable to write it in OCaml, use ocaml_function = %ocaml ... and then use it to solve your task!\nThe overhead for using OCaml interpreted functions should not be too large.\nConclusion: demonstration of Pypi package OCaml for polyglot notebook\nThis (long) notebook dived into the details of this Pypi ocaml package, which allows you to embed a real and complete OCaml runtime (with full standard library) in an IPython console or Jupyter notebook running IPython kernel (Python 3). OCaml functions and tiny programs can then be called directly from the IPython/Jupyter console, and results can be printed, or assigned to Python variables! It even supports Currying multi-argument functions!\nIt's too bad that the package lacks documentation, and is not open source, I would have liked to improve it a little bit!\nSee also: note, if you really want to use OCaml in a Jupyter notebook, the best solution is OCaml-jupyter kernel!\n(I opened an issue there to present this to the developer, just for his curiosity)\nThat's it for today!\nSee other notebooks" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
christoffkok/auxi.0
src/examples/tools/materialphysicalproperties/core.Dataset.ipynb
lgpl-3.0
[ "Working with auxi's Material Physical Property DataSets\nPurpose\nThe purpose of this example is to introduce and demonstrate the DataSet class in auxi's material physical property tools package.\nBackground\nMaterial physical property models are created based on sets of data that are determined experimentally. One can also test an existing property model against a data set to determine how well the model describes the physical property in question. In auxi these data sets are stored in csv files that have a specific format required by auxi. The DataSet class makes it easy to create, store, read, and use these data sets from inside the Python environment.\nItems Covered\nThe following items in auxi are discussed and demonstrated in this example:\n* auxi.tools.materialproperties.core.DataSet\nExample Scope\nIn this example we will address the following aspects:\n1. Importing the DataSet class.\n2. How to create your own physical property data set csv file with the create_template() method.\n3. Reading your data set csv file into a Dataset object.\n4. How to use the data sets for materials that are included in auxi.\nDemonstrations\nImporting the DataSet Class\nBefore you can use that DataSet class, you need to import it from auxi.", "from auxi.tools.materialphysicalproperties.core import DataSet", "You are now ready to use the DataSet class.\nCreating Your Own Data Set\nThe DataSet's create_template method makes it easy to create your own physical property data set. You only provide the material, path, and show parameters. A csv file will then be generated, and it will be automatically opened in your default csv editor if you set show to True.\nIn this csv file you would see text between &lt; &gt; signs. 
This indicates which text you should edit, like parameter name, units, symbol and values.", "path = '../../temp'\nmaterial = 'auxiite'\n\nDataSet.create_template(material, path, show=True)", "You can now edit the data set and save it for further use.\nReading Your Data Set\nTo read your data set into a DataSet object, you do the following:", "dataset = DataSet('data/dataset-auxiite2.csv')", "We can now use the content of the DataSet object. For this example, we will print the different variables inside the object. These include:\n* Material name\n* Data set description\n* Reference\n* List of names, symbols, display symbols, and units for the columns in the data set", "print('Material: ', dataset.material)\nprint('Description: ', dataset.description)\nprint('References: ', dataset.reference)\nprint('Properties: ', dataset.col_names)\nprint('Symbols: ', dataset.col_symbols)\nprint('Display symbols: ', dataset.display_symbols_dict)\nprint('Units: ', dataset.col_units)\nprint('Data:')\nprint(dataset.data)", "You can also print the content of the data set by simply doing the following:", "print(dataset)", "Using auxi's Own Data Sets\nFor convenience auxi contains a number of material physical property data sets that you can easily access and use. These are contained in the gases and liquids modules in the auxi.tools.materialproperties package.\nLet's look at the data set for air.", "from auxi.tools.materialphysicalproperties.gases import air_dataset\nprint(air_dataset)", "There is also a data set for water.", "from auxi.tools.materialphysicalproperties.liquids import h2o_dataset\nprint(h2o_dataset)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
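The notebook above reads auxi's csv-backed data sets through the DataSet class, which exposes metadata (material, description, reference, column names/units) alongside the numeric table. As a rough sketch of what such a loader does, here is a minimal stand-in built on plain pandas. Note that the two-section file layout and field names used below are assumptions for illustration only, not auxi's actual csv format.

```python
import io
import pandas as pd

# Hypothetical two-section layout: key,value metadata lines, then a header row
# followed by the data table. (This layout is an assumption, not auxi's format.)
RAW = """material,testium
description,demo thermal data
T,rho
298.15,1000.0
398.15,960.0
"""

def load_dataset(text):
    """Split metadata rows from the data table; return (meta dict, DataFrame)."""
    lines = text.strip().splitlines()
    meta = {}
    i = 0
    # Metadata rows are key,value pairs until the column-header row appears.
    while i < len(lines) and lines[i].split(",")[0] in ("material", "description"):
        key, value = lines[i].split(",", 1)
        meta[key] = value
        i += 1
    data = pd.read_csv(io.StringIO("\n".join(lines[i:])))
    return meta, data

meta, data = load_dataset(RAW)
print(meta["material"])    # testium
print(list(data.columns))  # ['T', 'rho']
```

The real DataSet class additionally carries symbols, display symbols, and units per column; the same split-then-parse idea extends to those by adding more metadata rows.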
google/applied-machine-learning-intensive
content/02_data/05_exploratory_data_analysis/colab-part2.ipynb
apache-2.0
[ "<a href=\"https://colab.research.google.com/github/google/applied-machine-learning-intensive/blob/master/content/02_data/05_exploratory_data_analysis/colab-part2.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\nCopyright 2020 Google LLC.", "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Exploratory Data Analysis: Part 2 - Analysis and Visualizations\nIn this second part of our Exploratory Data Analysis journey, we will build upon the cleaned up dataset that we created in part 1.\nIn part 2 we will use visualization libraries to look closer at individual columns and to see how different columns relate to one another.\nPart 1 Recap\nThe Dataset: Chocolate Bar Ratings\nYou should remember that in this lab we are using a chocolate bar ratings dataset from the Flavors of Cacao data. On the Kaggle page for the dataset we can find the documentation for the columns:\nColumn | Data Type | Description\n-------|-----------|-------------\nCompany (Maker-if known) | String | Name of the company manufacturing the bar.\nSpecific Bean Origin or Bar Name | String | The specific geo-region of origin for the bar.\nREF | Number | A value linked to when the review was entered in the database. 
Higher = more recent.\nReview Date | Number | Date of publication of the review.\nCocoa Percent | String | Cocoa percentage (darkness) of the chocolate bar being reviewed.\nCompany Location | String | Manufacturer base country.\nRating | Number | Expert rating for the bar.\nBeanType | String | The variety (breed) of bean used, if provided.\nBroad Bean Origin | String | The broad geo-region of origin for the bean.\nIn part 1 of this unit we modified the columns to be:\nColumn | Data Type | Description\n-------|-----------|------------\nCompany | String | Name of the company manufacturing the bar.\nCompany Location | String | Manufacturer base country.\nBean Type | String | The variety (breed) of bean used, if provided.\nSpecific Bean Origin | String | The specific geo-region of origin for the bar.\nBroad Bean Origin | String | The broad geo-region of origin for the bean.\nCocoa Percent | Number | Cocoa percentage (darkness) of the chocolate bar being reviewed.\nREF | Number | A value linked to when the review was entered in the database. Higher = more recent.\nReview Date | Number | Date of publication of the review.\nRating | Number | Expert rating for the bar. Number between 1.0 and 5.0, inclusive.\nGrade | String | Expert rating for the bar. 'A' - 'Q'. Maps to distinct ratings.\nLet's download the dataset again and apply the changes that we made in part 1.\nAcquiring the Data\nThe data is hosted on Kaggle, so we can use our Kaggle credentials to download the data into the lab. The dataset is located at https://www.kaggle.com/rtatman/chocolate-bar-ratings. We can use the kaggle command line utility to do this.\nFirst off, upload your kaggle.json file into the lab now.\nNext, run the following command to get the credential files set to the right permissions and located in the correct spot.", "!
chmod 600 kaggle.json && (ls ~/.kaggle 2>/dev/null || mkdir ~/.kaggle) && mv kaggle.json ~/.kaggle/ && echo 'Done'", "Now we can run the kaggle command to actually download the data.", "! kaggle datasets download rtatman/chocolate-bar-ratings\n! ls", "We now have our data downloaded to our virtual machine and stored in the file chocolate-bar-ratings.zip.\nPrepping the DataFrame\nWe will now load the data into a DataFrame and apply the data preprocessing that was done in part 1 of this unit. Run the hidden code block below to fill a DataFrame called df with preprocessed data.\nRun this block again if you need to reset the df DataFrame. After you run the code block, there will also be a function that you can use to reset df:\npython\n df = reload_data()", "#@title Part 1 Code: Press Run To Load Preprocessed Data Frame\n\nimport pandas as pd\n\ndef reload_data():\n df = pd.read_csv('chocolate-bar-ratings.zip')\n\n df.columns = [\n 'Company',\n 'Specific Bean Origin',\n 'REF',\n 'Review Date',\n 'Cocoa Percent',\n 'Company Location',\n 'Rating',\n 'Bean Type',\n 'Broad Bean Origin'\n ]\n\n df = df[[\n 'Company',\n 'Company Location',\n 'Bean Type',\n 'Specific Bean Origin',\n 'Broad Bean Origin',\n 'Cocoa Percent',\n 'REF',\n 'Review Date',\n 'Rating',\n ]]\n\n column = 'Company'\n for broken, fixed in {\n 'Shattel': 'Shattell',\n 'Cacao de Origin': 'Cacao de Origen',\n }.items():\n df.loc[df[column] == broken, column] = fixed\n\n column = 'Company Location'\n for broken, fixed in {\n 'Domincan Republic': 'Dominican Republic',\n 'Niacragua': 'Nicaragua',\n 'Eucador': 'Ecuador',\n 'Amsterdam': 'Holland',\n 'U.K.': 'England',\n }.items():\n df.loc[df[column] == broken, column] = fixed\n\n column = 'Bean Type'\n df.loc[df[column].isna(), column] = 'Unknown'\n df.loc[df[column] == chr(0xa0), column] = 'Unknown'\n\n column = 'Specific Bean Origin'\n for broken, fixed in {\n 'Ambolikapkly P.': 'Ambolikapiky P.',\n 'Dominican Republicm, rustic': 'Dominican Republic, 
rustic',\n 'Nicaraqua': 'Nicaragua',\n }.items():\n df.loc[df[column] == broken, column] = fixed\n\n column = 'Broad Bean Origin'\n df.loc[(df['Specific Bean Origin'] == 'Madagascar') &\n (df[column].isna()), column] = 'Madagascar'\n df.loc[df[column] == chr(0xa0), column] = 'Unknown'\n\n column = 'Cocoa Percent'\n df[column] = df[column].apply(lambda s: float(s[:-1]))\n\n def grade(rating):\n letter_grade = 'A'\n numeric_rating = 5.0\n while rating < numeric_rating:\n letter_grade = chr(ord(letter_grade) + 1)\n numeric_rating -= 0.25\n return letter_grade\n\n column = 'Grade'\n df[column] = df['Rating'].apply(grade)\n \n return df\n\ndf = reload_data()\ndf.sample(10)", "Data Analysis\nWe have looked at each column in isolation in order to ensure that the data in that column is complete and seems to make sense. In this section we will take that newly-cleaned data and look at it in a little more depth. We will examine some columns individually. We'll also see how columns might relate to one another.\nVisualizing Ratings\nRatings are very important to our dataset. One interesting visualization might be a line chart of counts of each rating. To do that we can extract a Series containing the ratings, group by that Series, and plot the counts.", "import matplotlib.pyplot as plt\n\nratings = df['Rating'].groupby(df['Rating']).count()\n\nplt.plot(ratings.index, ratings)\nplt.show()", "This shows that ratings seem to mostly be in the 3s with a reasonable tail higher and lower.\nWe can also put these values in a bar chart", "import matplotlib.pyplot as plt\n\ngrades = df['Rating'].groupby(df['Rating']).count()\n\nplt.bar(grades.index, grades)\nplt.show()", "This changes the visualization's form quite a bit. Instead of using 'Rating' for the chart, let's use our 'Grade' column.", "import matplotlib.pyplot as plt\n\ngrades = df['Grade'].groupby(df['Grade']).count()\n\nplt.bar(grades.index, grades)\nplt.show()", "Overall, that's looking better. 
The sorting is opposite of 'Rating', so our best chocolates are to the left instead of the right. And you can see that we are missing 'B', 'C' and other values, which is not ideal.\nExercise 1: Grade Bar Chart\nImprove on the bar chart of Grade values by having the chart include all letter grades, with counts or not, between 'A' and 'Q'.", "df = reload_data()\n\n# Your Solution Goes Here", "Visualizing Cocoa Percentage\nCocoa percentage is another value that might be interesting in our data analysis. For instance, does a higher or lower cocoa percentage seem to correlate with the rating in any manner?\nWe'll get to questions like this, but first let's just create a simple plot.\nFirst let's see if 'Cocoa Percent' is an actual continuous variable.", "sorted(df['Cocoa Percent'].unique())", "This does seem much more like a continuous variable than 'Rating' did. In this case we will want to use some sort of continuous plot.\nFor this particular visualization, we will pull out a new tool, the seaborn.distplot. This plot combines a line chart of kernel density and a histogram to show the distribution of values in the Series.", "import seaborn as sns\n\n_ = sns.distplot(df['Cocoa Percent'])", "As we can tell from this plot, a very large proportion of chocolate bars rated have around 70% cocoa. We may want to ask whether this sample is representative of chocolate bars on the market in general or whether reviewers favor such bars because they expect them to be more enjoyable. (i.e. Is our sample representative of the population?) This, of course, would require research outside of our data set.\nExercise 2: What Is a Distplot?\nWe plotted a \"distplot\" above but didn't get too specific on what was actually being shown. The values on the x-axis make sense; they are the percentage of cocoa used in the bar. 
But there are some other aspects of the chart that are a little less clear at first.\nUsing the seaborn.distplot documentation and other resources, answer the following questions:\nStudent Solution\n\nWhat do the columns on the chart represent?\nYour Answer Goes here\n\n\nWhat does the continuous line on the chart represent?\nYour Answer Goes here\n\n\nWhat is the y-value of the sns.distplot?\nYour Answer Goes here\n\n\n\n\nVisualizing Ratings by Bean Type\nSometimes it is useful to visualize our target, in this case, 'Rating', by different features. One way to do this with a continuous variable like 'Rating' is with a box plot.\nBox plots (a.k.a., box and whisker plots) show the distribution across categories. These plots are quite informative, as they display a five-statistic summary of a dataset, including:\n\nMinimum\nFirst quartile (about 25% of the numbers in the dataset lie below it)\nMedian (splits the dataset in half)\nThird quartile (about 75% of the numbers in the dataset lie below it)\nMaximum\n\nIn the example below, we create our box plot using seaborn.boxplot(). We pass the function our DataFrame, the y-value which determines the number of box plots, and the x-value which is what the statistics are gathered from.\nRemember that seaborn is based on matplotlib, so we can still use features of matplotlib like the figure size to increase the size of seaborn charts.", "import pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\ndf = reload_data()\n\nplt.figure(figsize=[10, 16])\nax = sns.boxplot(\n data=df,\n y='Bean Type',\n x='Rating',\n)\n_ = ax.set_title('Rating Distribution by Bean Type')", "This is a nice chart, but it is difficult to get a snapshot view of the data. 
Sometimes sorting is important.\nAs an example, let's look at a chart that is sorted alphabetically.", "import pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\ndf = reload_data()\n\nplt.figure(figsize=[10, 16])\n\nax = sns.boxplot(\n data=df,\n y='Bean Type',\n x='Rating',\n order=sorted(df['Bean Type'].unique()),\n)\n\n_ = ax.set_title('Rating Distribution by Bean Type')", "That might be even worse!\nWhat sort order might give some meaning to this chart?\nExercise 3: Building Boxplots\nBoxplots are useful, but without curated sorting, the message can get lost in the noise. Experiment with different sorting strategies for the boxplot and make an argument for why you find value in your plot.\nBelow is an example of plots sorted by the mean rating. Look at it and see why sorting by mean might bring value.", "import pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\ndf = reload_data()\n\nplt.figure(figsize=[10, 16])\n\nax = sns.boxplot(\n data=df,\n y='Bean Type',\n x='Rating',\n order=sorted(df['Bean Type'].unique(), \n key=lambda bt: df[df['Bean Type'] == bt]['Rating'].mean()),\n)\n\n_ = ax.set_title('Rating Distribution by Bean Type')", "Some thoughts:\n\nEven sorting by mean, the max ratings can still be held by beans near the median.\nBeans with a low median rating are actually widespread on ratings.\nQuality tightens with beans with higher median rankings.\n\nThese observations may lead us to ask: what are the sample sizes? Are the top-median beans just single samples while more popular beans are weighted down by large sample sizes? Regardless of which sorting criteria we use, we should take a closer look at our bean types. Some of the types with fewer samples might need to be rolled into a larger grouping.\nFor this exercise, try different statistics for sorting. 
Find one that you can interpret and explain why the specific sorting is meaningful.\nStudent Solution", "# Your Solution Goes Here", "Visualizing Ratings by Cocoa Percent\nLet's take a moment to also look at the spread of ratings by 'Cocoa Percent'. A natural way to do this might be a scatter plot.", "import pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\ndf = reload_data()\n\nplt.figure(figsize=[10, 10])\nax = sns.scatterplot(\n data=df,\n x='Cocoa Percent',\n y='Rating',\n)\n\n_ = ax.set_title('Ratings by Cocoa Percent')", "This is a somewhat unfulfilling scatter plot. Since ratings aren't really continuous, we see \"lines\" of dots for the ratings tranches. There are also some lines formed vertically; though 'Cocoa Percent' is continuous, some percentages are more common than others.\nMaybe a boxplot would work?", "import pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\ndf = reload_data()\n\nplt.figure(figsize=[10, 10])\nax = sns.boxplot(\n data=df,\n x='Cocoa Percent',\n y='Rating',\n)\n\n_ = ax.set_title('Ratings by Cocoa Percent')", "That is better. Given this box plot, we can see some trends in the data. However, there are enough different 'Cocoa Percent' values to make this chart a little cluttered.\nWe can clean it up a bit by binning the 'Cocoa Percent' values.\nBinning is the process of taking a large set of continuous values and dividing them into a fixed number of bins where each bin contains a range of values. Each bin typically has the same sized range.\nTo bin data we can use NumPy.histogram_bin_edges to find the edges of our bins.", "import pandas as pd\nimport numpy as np\n\ndf = reload_data()\n\nedges = np.histogram_bin_edges(df['Cocoa Percent'])\n\nfor i in range(len(edges) - 1):\n print(f'{edges[i]:.{1}f} - {edges[i+1]:.{1}f} ({edges[i+1]-edges[i]:.{1}f})')", "In the code above we asked histogram_bin_edges() to find the edges of our bins for 'Cocoa Percent'. 
We then printed the edges for each bin and the size of the range for each bin. You can see that each bin's range is 5.8.\nGiven these bin edges, we can now use another handy NumPy function: digitize. digitize will examine our 'Cocoa Percent' values and return the bin that each value belongs in.", "import pandas as pd\nimport numpy as np\n\ndf = reload_data()\n\nedges = np.histogram_bin_edges(df['Cocoa Percent'])\nbins = np.digitize(df['Cocoa Percent'], edges)\nnp.unique(bins)", "Wait, there are 11 bins when there were only supposed to be 10. What happened?\nIt turns out that histogram_bin_edges and digitize think differently about the list of edges. histogram_bin_edges returns bin_count + 1 values (in this case, 11), so that each bin has a defined stop and start. The final edge serves as an upper limit (in this case, 100).\nOn the other hand, digitize lets the final bin accept any data larger than the start of the bin (or smaller if you are creating decreasing order bins), so there's no need for an edge to define the outside boundary. For that reason, we must not pass the final edge to digitize.
We can now bring it all together and create a binned boxplot.", "import pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport numpy as np\n\ndf = reload_data()\n\nedges = np.histogram_bin_edges(df['Cocoa Percent'], bins=10)\nbins = np.digitize(df['Cocoa Percent'], edges[:-1])\n\nplt.figure(figsize=[16, 10])\n\nax = sns.boxplot(\n    x=bins,\n    y=df['Rating'],\n)\n\nlabels = [f'{edges[i]:.{1}f} - {edges[i+1]:.{1}f}'\n          for i in range(len(edges) - 1)]\n_ = plt.xticks(list(range(10)), labels)\n_ = ax.set_title('Ratings by Cocoa Percent')", "Exercise 4: Interpreting Box Plots\nWe've now created a boxplot that shows the range of ratings for different bins of percentages of cocoa. Use the plot to answer the following questions.\nStudent Solution\n\nWhich bin of cocoa percentages tends to be the lowest rated?\nYour Answer Goes Here\n\n\nWhich bin (or bins) of cocoa percentages tend to get the most consistent ratings?\nYour Answer Goes Here\n\n\n\n\nFinding Correlations\nAnother important check is to see if there are any correlations in your dataset. Correlation is a measure of how well two continuous variables track together.\nPandas provides DataFrame.corr(), which returns a correlation matrix for all of the numeric values in the DataFrame.", "df = reload_data()\n\ndf.corr()", "Exercise 5: Heatmap Correlations\nIt is common for correlations to be shown in a heatmap. In this exercise we will create a heatmap and then attempt to interpret the map.\nStudent Solution\nUse seaborn or matplotlib to create a heatmap representing the correlations in df.", "# Your Solution Goes Here", "Which columns have the strongest correlation?\nYour answer goes here.\n\n\nWhy do you think they have such a strong correlation?\nYour answer goes here.\n\n\n\n\nVisualizing Ratings Across the Globe\nThis dataset seems to really emphasize the source of the cocoa beans used to make a chocolate bar. 
It would be interesting to see if there is a correlation between the geographical source of the beans and the perceived quality of the resulting chocolate.\nIn order to do this, we need to take one of our columns of location data and turn it into geographical coordinates that we can then use to attempt to identify a pattern.\nThe alleged geographical columns that we have in our data are:\nColumn | Data Type | Description\n-------|-----------|------------\nCompany Location | String | Manufacturer base country.\nSpecific Bean Origin | String | The specific geo-region of origin for the bar.\nBroad Bean Origin | String | The broad geo-region of origin for the bean.\nBut we know from our earlier analysis of the data that both of the bean origin columns are pretty messy. We could possibly tie most of the data points down to a geographical location, but it would be a huge undertaking. Also, the bean origins can have multiple locations for mixed blends. Ultimately this would be an interesting undertaking, but it is beyond the scope of this course.\n'Company Location' offers more promise though. We have verified that it contains country information. It doesn't necessarily relate to the source of the bean, but we can at least try to see where the best bars are created!\nSince the 'Company Location' values are all countries, we went ahead and created a DataFrame below that contains the latitude and longitude values for the countries that we have company data for. We have also merged the lat/long data into df.\nRun the hidden code cell below to update df with latitude and longitude data. After you run this cell, a function called reload_data_plus_latlong() will be available to you to restore df to its original state at any time. 
To use it in your code write:\npython\ndf = reload_data_plus_latlong()", "#@title Lat Long Addition: Press Run To Load `df`\n\ndef reload_data_plus_latlong():\n df = reload_data()\n\n country_df = pd.DataFrame([\n ['Argentina', -36.3, -60.0],\n ['Australia', -35.15, 149.08],\n ['Austria', 48.12, 16.22],\n ['Belgium', 50.51, 4.21],\n ['Bolivia', -16.2, -68.1],\n ['Brazil', -15.47, -47.55],\n ['Canada', 45.27, -75.42],\n ['Chile', -33.24, -70.4],\n ['Colombia', 4.34, -74.0],\n ['Costa Rica', 9.55, -84.02],\n ['Czech Republic', 50.05, 14.22],\n ['Denmark', 55.41, 12.34],\n ['Dominican Republic', 18.74, -70.16],\n ['Ecuador', -0.15, -78.35],\n ['England', 52.36, -1.17],\n ['Fiji', -18.06, 178.3],\n ['Finland', 60.15, 25.03],\n ['France', 48.5, 2.2],\n ['Germany', 52.3, 13.25],\n ['Ghana', 5.35, -0.06],\n ['Grenada', 12.12, -61.68],\n ['Guatemala', 14.4, -90.22],\n ['Holland', 52.13, 5.29],\n ['Honduras', 14.05, -87.14],\n ['Hungary', 47.29, 19.05],\n ['Iceland', 64.1, -21.57],\n ['India', 28.37, 77.13],\n ['Ireland', 53.21, -6.15],\n ['Israel', 31.71, -35.1],\n ['Italy', 41.54, 12.29],\n ['Japan', 36.2, 138.25],\n ['Lithuania', 54.38, 25.19],\n ['Madagascar', -18.55, 47.31],\n ['Martinique', 14.36, -61.02],\n ['Mexico', 19.2, -99.1],\n ['Netherlands', 52.23, 4.54],\n ['New Zealand', -41.19, 174.46],\n ['Nicaragua', 12.06, -86.2],\n ['Peru', -12.0, -77.0],\n ['Philippines', 14.4, 121.03],\n ['Poland', 52.13, 21.0],\n ['Portugal', 38.42, -9.1],\n ['Puerto Rico', 18.28, -66.07],\n ['Russia', 61.52, 105.32],\n ['Sao Tome', 0.19, 6.61],\n ['Scotland', 56.49, 4.2],\n ['Singapore', 1.35, 103.82],\n ['South Africa', -25.44, 28.12],\n ['South Korea', 35.91, 127.77],\n ['Spain', 40.25, -3.45],\n ['St. 
Lucia', 13.91, -60.98],\n ['Suriname', 5.5, -55.1],\n ['Sweden', 59.2, 18.03],\n ['Switzerland', 46.57, 7.28],\n ['U.S.A.', 39.91, -77.02],\n ['Venezuela', 10.3, -66.55],\n ['Vietnam', 14.06, 108.28],\n ['Wales', 52.13, -3.78],\n ], columns=['Country', 'Latitude', 'Longitude'])\n\n df = pd.merge(df, country_df, left_on='Company Location', right_on='Country')\n return df\n\ndf = reload_data_plus_latlong()\ndf.sample(10)", "Given this new geographical data, we can now scatter plot the mean ratings data for each country onto the country's latitude and longitude.", "import seaborn as sns\n\ndf = reload_data_plus_latlong()\n\nmean_ratings_df = df.groupby(['Latitude', 'Longitude'],\n as_index=False).mean()\nmean_ratings_df = mean_ratings_df[['Latitude', 'Longitude', 'Rating']]\n\n_ = sns.scatterplot(x='Longitude', y='Latitude', data=mean_ratings_df)", "Exercise 6: More Meaningful Scatter Plots\nWe just added in geographical data in order to scatter plot our ratings onto a \"map\", but the plot isn't very meaningful. From the plots we can somewhat make out Europe and South America, but we don't really know if they produce high-quality chocolates.\nIn this exercise you will add visual cues to the scatter plot to indicate the relative quality of chocolate produced by each country. Check out the seaborn.scatterplot() documentation and find arguments that you can pass to seaborn.scatterplot() that will call out the good chocolate from the bad by changing the size and color of the dots on the plot.\nStudent Solution", "# Your Solution Goes Here", "Finding Facts About the Data\nWe've created some nice visualizations to explore our data. Sometimes, you might just want textual information. In this section we'll use Pandas to answer a few questions about our dataset.\nWe might want to find the companies with the highest average ratings. To do this we can group our data by 'Company', find the mean rating, and print out only the top rated companies. 
We'll do this just using Pandas.", "df.groupby('Company')[['Rating']].mean().sort_values(by='Rating').tail(10)", "We can use grouping and index selection to answer questions like: How many companies produce, on average, worse than the overall mean rated chocolates?", "# Find the overall mean rating\nmean_rating = df['Rating'].mean()\n\n# Find each company's mean rating\nmean_ratings_by_company = df.groupby('Company',\n as_index=False)[['Rating']].mean()\n\n# See how many of those ratings are below the mean\nmean_ratings_by_company[mean_ratings_by_company['Rating'] < mean_rating]['Company'].count()", "Exercise 7: Highest Rated Companies With Many Reviews\nEarlier we found the companies with the highest mean rating, but some of those companies only have one or two reviews. For this exercise find five highest-mean rated companies with at least five reviews.\nStudent Solution", "# Your Code Goes Here", "Conclusion\nCongratulations. You have now taken a dataset with missing and messy values, cleaned it up, and examined the data through visualization and through Pandas queries.\nThis exploratory data analysis and data preprocessing is a very important step in data science and machine learning. Knowing how to use machine learning models is important, but if you want to model data properly, it is just as important that you understand your data well." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
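The histogram_bin_edges/digitize interplay discussed in the notebook above (10 bins need 11 edges, and digitize treats its last edge as the start of an open-ended bin) is easy to verify on a small synthetic sample; the cocoa-like percentages below are made up for illustration:

```python
import numpy as np

# Made-up stand-in for the 'Cocoa Percent' column.
values = np.array([42.0, 55.0, 60.0, 70.0, 70.0, 72.0, 85.0, 100.0])

# 10 bins are described by 11 edges, so each bin has a start and a stop.
edges = np.histogram_bin_edges(values, bins=10)
print(len(edges))  # 11

# Passing all 11 edges lets the maximum value land in an 11th bin, because
# digitize treats everything at or above the final edge as its own bin.
print(np.digitize(values, edges).max())  # 11

# Dropping the final edge keeps everything within bins 1..10: the last bin
# is open-ended on the right.
bins = np.digitize(values, edges[:-1])
print(bins.max())  # 10
```

This is exactly why the notebook passes `edges[:-1]` to `np.digitize` before building the binned boxplot.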
ayushmaskey/ayushmaskey.github.io
jupyter/pandas_resampling.ipynb
mit
[ "import pandas as pd\nimport numpy as np", "Resampling\n\nWe resample a time series when it either has no set frequency and we want one, or when it has a frequency that is not the one we want.", "rng = pd.date_range('1/1/2011', periods=72, freq='H')\nrng[1:4]\n\nts = pd.Series(list(range(len(rng))), index=rng)\nts.head()", "Convert the hourly series to a 45-minute frequency and fill in the new data points:\n\nffill --> forward fill --> use the previous value\nbfill --> backward fill --> use the next value", "converted = ts.asfreq('45Min', method='ffill')\nconverted.head(10)\n\nts.shape\n\nconverted.shape\n\nconverted2 = ts.asfreq('3H')\nconverted2.head()", "When moving to a coarser frequency, resampling is the better option because it aggregates the data instead of dropping it.", "# mean of hours 0 and 1, 2 and 3, etc.\nts.resample('2H').mean()[0:10]\n\n# resampling events in an irregular time series\nirreg_ts = ts[list(np.random.choice(a=list(range(len(ts))), size=10, replace=False))]\nirreg_ts\n\nirreg_ts = irreg_ts.sort_index()\nirreg_ts\n\nirreg_ts.resample('H').fillna(method='ffill', limit=5)\n\nirreg_ts.resample('H').count()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
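The asfreq-versus-resample distinction from the notebook above can be checked on a tiny hourly series: asfreq at a coarser frequency only keeps the rows that fall on the new grid, while resample aggregates every row into its bin, so no data is lost.

```python
import pandas as pd

rng = pd.date_range("2011-01-01", periods=6, freq="H")
ts = pd.Series(range(6), index=rng)  # values 0..5, one per hour

# asfreq selects the existing points on the new 2-hour grid: hours 0, 2, 4.
picked = ts.asfreq("2H")
print(list(picked))  # [0, 2, 4]

# resample keeps all the data, aggregating each 2-hour bin.
means = ts.resample("2H").mean()
print(list(means))  # [0.5, 2.5, 4.5]
```

Note how the odd-hour values (1, 3, 5) vanish entirely under `asfreq` but contribute to every bin mean under `resample` — the "better option to not lose data" point from the notes.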
datacommonsorg/api-python
notebooks/intro_data_science/Regression_Evaluation_and_Interpretation.ipynb
apache-2.0
[ "<a href=\"https://colab.research.google.com/github/datacommonsorg/api-python/blob/master/notebooks/intro_data_science/Regression_Evaluation_and_Interpretation.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\nCopyright 2022 Google LLC.\nSPDX-License-Identifier: Apache-2.0\nRegression: Evaluation and Interpretation\nIn part 1, we saw how powerful regression can be as a tool for prediction. In this colab, we'll take that exploration one step further: what can regression models tell us about the statistical relationships between variables?\nIn particular, this colab will take a more rigorous statistical approach to regressions. We'll look at how to evaluate and interpret our regression models using statistical methods.\nLearning Objectives:\n\nHypothesis testing with regression\nRegression tables\nPearson Correlation coefficient, $r$\n$R^2$ and adjusted $R^2$\nInterpreting weights and intercepts\nHow correlated variables affect models\n\n\nNeed extra help?\nIf you're new to Google Colab, take a look at this getting started tutorial.\nTo build more familiarity with the Data Commons API, check out these Data Commons Tutorials.\nAnd for help with Pandas and manipulating data frames, take a look at the Pandas Documentation.\nWe'll be using the scikit-learn library for implementing our models today. Documentation can be found here. 
\nAs usual, if you have any other questions, please reach out to your course staff!\nGetting Set Up\nRun the following code boxes to load the python libraries and data we'll be using today.", "# Setup/Imports\n!pip install datacommons --upgrade --quiet\n!pip install datacommons_pandas --upgrade --quiet\n\n# Data Commons Python and Pandas APIs\nimport datacommons\nimport datacommons_pandas\n\n# For manipulating data\nimport numpy as np\nimport pandas as pd\n\n# For implementing models and evaluation methods\nfrom sklearn import linear_model\nfrom sklearn.metrics import r2_score, mean_squared_error\nfrom statsmodels import api as sm\n\n\n# For plotting/printing\nfrom matplotlib import pyplot as plt\nimport seaborn as sns", "The Data\nIn this assignment, we'll be returning to the scenario we started in Part 1. As a refresher, we'll be exploring how obesity rates vary with different health or societal factors across US cities.\nOur data science question: What can we learn about the relationship of those health and lifestyle factors to obesity rates?", "# Load the data we'll be using\ncity_dcids = datacommons.get_property_values([\"CDC500_City\"],\n \"member\",\n limit=500)[\"CDC500_City\"]\n\n# We've compiled a list of some nice Data Commons Statistical Variables\n# to use as features for you\nstat_vars_to_query = [\n \"Count_Person\",\n \"Percent_Person_PhysicalInactivity\",\n \"Percent_Person_SleepLessThan7Hours\",\n \"Percent_Person_WithHighBloodPressure\",\n \"Percent_Person_WithMentalHealthNotGood\",\n \"Percent_Person_WithHighCholesterol\",\n \"Percent_Person_Obesity\"\n \n]\n\n# Query Data Commons for the data and remove any NaN values\nraw_features_df = datacommons_pandas.build_multivariate_dataframe(city_dcids,stat_vars_to_query)\nraw_features_df.dropna(inplace=True)\n\n# order columns alphabetically\nraw_features_df = raw_features_df.reindex(sorted(raw_features_df.columns), axis=1)\n\n# Add city name as a column for readability.\n# --- First, we'll get the 
\"name\" property of each dcid\n# --- Then add the returned dictionary to our data frame as a new column\ndf = raw_features_df.copy(deep=True)\ncity_name_dict = datacommons.get_property_values(city_dcids, 'name')\ncity_name_dict = {key:value[0] for key, value in city_name_dict.items()}\ndf.insert(0, 'City Name', pd.Series(city_name_dict))\n\n# Display results\ndisplay(df)", "The Model\nRun the following code box to fit an ordinary least squares regression model to our data.", "# fit a regression model\ndep_var = \"Percent_Person_Obesity\"\ny = df[dep_var].to_numpy().reshape(-1, 1)\nx = df.loc[:, ~df.columns.isin([dep_var, \"City Name\"])]\nx = sm.add_constant(x)\n\nmodel = sm.OLS(y, x)\nresults = model.fit()", "Part 0) Regression Tables\nWhen performing regression analyses, statistical packages will usually provide a regression table, which summarizes the results of the analysis.\nRun the following code box to display the regression table for our original model. In this colab, we'll go over some of the statistics included in the table.", "print(results.summary())", "Part 1) Hypothesis Testing\n1.1) Null Hypotheses\nWhen performing statistical analyses, one usually starts with a statement of the null hypothesis. Typically for regression models, these take the form of the coefficient for a variable equaling zero.\nQ1.1) Write out the null hypotheses for each of our independent variables.\n1.2) T-test\nSo how do we test our null hypotheses? We use the t-test.\nTake a look at the regression table above to answer the following questions:\nQ1.2A) According to the t-test, which variables are statistically significant?\nQ1.2B) For variables that are not statistically significant, should we keep them in our model? Why or why not?\n1.3) F-test\nBeyond testing the significance of our individual variables independently, we can also test the significance of our model overall using the F-test. 
In particular, the F-test compares our model to one without predictors (aka, just an intercept). In other words, can our model do statistically better than just predicting the mean?\nAgain use the regression table above to answer the following questions:\nQ1.3A) What is the null hypothesis for the F-test?\nQ1.3B) Can we reject the null hypothesis for our model?\nPart 2) Statistical Measures\n2.1) Correlation Coefficient $r$\nWe can quantify the predictiveness of variables using a correlation coefficient, a number that represents the degree to which two variables have a statistical relationship. The most common correlation coefficient used is the Pearson correlation coefficient, also known as Pearson's r, which measures the strength of linear relationships between variables.\nMathematically, the correlation coefficient is defined as:\n$$ r = \\frac{\\sum_i (x_i - \\bar{x})(y_i - \\bar{y})}{\\sqrt{\\sum_i (x_i - \\bar{x})^2}\\sqrt{\\sum_i (y_i - \\bar{y})^2}}\n$$\nwhere $x$ and $y$ are the two variables.\nThose of you with a statistics background might recognize this as the ratio of the covariance of $x$ and $y$ to the product of their standard deviations.\nQ2.1A) Either using the mathematical definition or by exploring with code, explain what the correlation coefficient would be in the following cases:\nA) $x = y$\nB) $x = -y$\nC) $x$ and $y$ are both normally distributed variables with mean 0 and variance 1, randomly sampled independently from each other.", "\"\"\"\nOptional cell for Q2.1A\n\"\"\"\n\n# Hint: Try writing code to generate values for x and y, then either write or import\n# a function to calculate the correlation coefficient\n\n# Your code here", "Now run the following code box to use pandas' .corr() function to calculate the correlation coefficient between our variables.
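For the limiting cases in Q2.1A, a quick check with NumPy on synthetic data (independent of our dataframe) behaves exactly as the math predicts:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=10000)

r_pos = np.corrcoef(x, x)[0, 1]   # case A: y = x
r_neg = np.corrcoef(x, -x)[0, 1]  # case B: y = -x

# case C: independent standard normals
y = rng.normal(size=10000)
r_ind = np.corrcoef(x, y)[0, 1]

print('r(x, x) =', round(r_pos, 3))   # 1.0
print('r(x, -x) =', round(r_neg, 3))  # -1.0
print('r(x, y) =', round(r_ind, 3))   # near 0
```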
Note that pandas outputs the results as a matrix.", "# calculate correlation\ndf.corr()", "Q2.1B) Explain why the diagonals of the matrix have the value 1.\nQ2.1C) What is the correlation coefficient between Count_Person and Percent_Person_Obesity? What does the correlation coefficient imply about the relationship between population and obesity rate?\nQ2.1D) What is the correlation coefficient between Percent_Person_PhysicalInactivity and Percent_Person_Obesity? What does the correlation coefficient imply about the relationship between physical inactivity and obesity rate?\nQ2.1E) In general, would you prefer to include features that correlate strongly with the dependent variable, or features with no correlation in a regression model?\nQ2.1F) You find a new feature with correlation coefficient $r=-0.97$ between it and obesity rates. Would it be a good idea to add this new feature to your model?\n2.2) $R^2$ Score\nTo quantify how predictive a linear regression model is overall, we use the coefficient of determination, $R^2$ (pronounced \"R squared\").\nMathematically, the $R^2$ score is defined as:\n$$S_{residuals} = \\sum_i{(y_i - f_i)^2} \\\nS_{total} = \\sum_i{(y_i - \\bar{y})^2}\\\nR^2 = 1 - \\frac{S_{residuals}}{S_{total}}$$\nwhere $y_i$s are the actual dependent variable values, $f_i$ are the predicted dependent variable values, and $\\bar{y}$ is the average of the $y_i$'s.\nConceptually, the $R^2$ score is a measure of explained variance. If $R^2=0.75$, that means that 75% of the variance in the dependent variable has been accounted for by our model, while the remaining 25% of the variability has not.\nQ2.2A) Based on the mathematical definition, what is the range of values possible for R^2?\nQ2.2B) Come up with a situation (e.g.
what would the data look like) where:\nA) $R^2 = 1.0$\nB) $R^2 = 0.0$\nLet's now analyze what the $R^2$ value is for our model.", "# calculate R^2\nprint(\"Model R^2 =\", results.rsquared)", "Q2.2C) Is the model's $R^2$ a \"good\" score?\nQ2.2D) Can you think of any ways we can change our model that would improve the $R^2$ score?\n2.3) Adjusted $R^2$\nThere's an issue with $R^2$ scores that one needs to be aware of when working with multiple independent variables. Namely, the number of independent variables used can affect the $R^2$ score.\nLet's see this in practice. Let's create a new dataframe with an extra 100 dummy variables (randomly sampled from a 0-mean 1-variance normal distribution) tacked on.", "# Pad our dataframe with more random variables\ndf_padded = df.copy()\nnum_rows = len(df.index)\nfor i in range(100):\n var_name = f\"Random Variable {i}\"\n random_data = np.random.normal(size=(num_rows, 1))\n df_padded[var_name] = random_data\ndisplay(df_padded)\n", "Now let's fit a new model to the data and compare R^2 scores.", "# New R^2\ny_padded = df_padded[dep_var].to_numpy().reshape(-1, 1)\nx_padded = df_padded.loc[:, ~df_padded.columns.isin([dep_var, \"City Name\"])]\nx_padded = sm.add_constant(x_padded)\n\npadded_model = sm.OLS(y_padded, x_padded)\npadded_results = padded_model.fit()\n\nprint(\"Original Model R^2 = \", results.rsquared)\nprint(\"Padded Model R^2 =\", padded_results.rsquared)\n", "Q2.3A) Which model had a better $R^2$ score?\nQ2.3B) Think about the variables used in each model. Should one model be much more predictive than another?\nQ2.3C) In general, how would you expect $R^2$ to change as we increase the number of independent variables?\nSo how do we fix this? We can adjust our $R^2$ metric to account for the number of variables.
The most popular way to define the adjusted $R^2$ score is as follows:\n$$R^{2}_{adj}=1-(1-R^{2})\\frac{n-1}{n-p-1}$$\nwhere $n$ is the number of data points and $p$ is the number of independent variables.\nNow let's compare the adjusted $R^2$ of our models.", "# Adjusted R^2\nprint(\"Original Model Adjusted R^2 = \", results.rsquared_adj)\nprint(\"Padded Model Adjusted R^2 =\", padded_results.rsquared_adj)", "Q2.3D) Which model had a better adjusted $R^2$ score?\nQ2.3E) When would you prefer to use adjusted R^2 over R^2 to evaluate model fit?\nPart 3) Interpreting Regression Models\n3.1) Analyzing Weights and Intercepts\nThe parameters of the regression model itself can also yield important insights.\nRun the following code box to display the weights and intercept of our original model.", "# Display weights/coefficients\ndisplay(results.params.round(5))", "Q3.1A) What is the intercept of our model? What are its units?\nQ3.1B) What are the units on each of the model weights (aka coefficients)?\nQ3.1C) Which variables matter most to our model?\nQ3.1D) In words, describe what a weight/coefficient in a linear regression means.\nQ3.1E) Our model is used to generate a predicted obesity rate for a fictional city named Dataopolis.
If we increased Percent_Person_WithMentalHealthNotGood for Dataopolis by 1 unit, while keeping the values for all remaining variables constant, by how much would we expect our predicted obesity rate to change?\n3.2) The effect of correlated variables\nWhen interpreting weights, one thing to look out for is if we have independent variables that are highly correlated with each other.\nLet's illustrate why this might be a problem, by adding a variable that is correlated with one of the existing variables:", "# New variable correlated with Percent_Person_WithMentalHealthNotGood\ncorrelated_df = df.copy()\ntarget_var = \"Percent_Person_WithMentalHealthNotGood\"\nnoise = np.random.normal(size=(len(correlated_df.index),))\ncorrelated_df[\"Correlated Variable\"] = correlated_df[target_var] + noise\n\n# show new data frame\nprint(\"New dataframe to fit:\")\ndisplay(correlated_df)\n\n# Create a new model\ny_corr = correlated_df[dep_var].to_numpy().reshape(-1, 1)\nx_corr = correlated_df.loc[:, ~correlated_df.columns.isin([dep_var, \"City Name\"])]\nx_corr = sm.add_constant(x_corr)\n\ncorrelated_model = sm.OLS(y_corr, x_corr)\ncorrelated_results = correlated_model.fit()\n\nprint(\"Correlated Model Weights and Intercept:\")\ndisplay(correlated_results.params.round(5))", "Q3.2A) Compare the new weights of the correlated model with the weights of our original model. What happened to the weights corresponding to Percent_Person_WithMentalHealthNotGood?\nQ3.2B) Thinking back to your answers for Q3.1C-E, how might correlated variables affect the interpretation of model weights?" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
andmax/gpufilter
python/alg5epe.ipynb
mit
[ "Parallel Recursive Filtering of Infinite Input Extensions\nThis notebook tests alg5epe\nAlgorithm 5 Even-Periodic Extension", "import math\nimport cmath\nimport numpy as np\nfrom scipy import ndimage, linalg\nfrom skimage.color import rgb2gray\nfrom skimage.measure import structural_similarity as ssim\nfrom sklearn.metrics import mean_squared_error\nimport matplotlib.pyplot as plt\n%matplotlib inline\nplt.gray() # to plot gray images using gray scale\n\n%run 'all_functions.ipynb'", "First: load the test image and run Gaussian filter on it", "%%time\nX1 = plt.imread('input.png')\nX1 = rgb2gray(X1)\ns = 16. # sigma for testing filtering\nX2 = np.copy(X1).astype(np.float64)\n# Gaussian filter runs with periodic extension\nX2 = ndimage.filters.gaussian_filter(X1, sigma=s, mode='reflect')", "Second: setup basic parameters from the input image", "%%time\nb = 32 # squared block size (b,b)\nw = [ weights1(s), weights2(s) ] # weights of the recursive filter\nwidth, height = X1.shape[1], X1.shape[0]\nm_size, n_size = get_mn(X1, b)\nblocks = break_blocks(X1, b, m_size, n_size)\n# Pre-computation of matrices and pre-allocation of carries\nalg5m1 = build_alg5_matrices(b, 1, w[0], width, height)\nalg5m2 = build_alg5_matrices(b, 2, w[1], width, height)\nalg5c1 = build_alg5_carries(m_size, n_size, b, 1)\nalg5c2 = build_alg5_carries(m_size, n_size, b, 2)\nalg5epem1 = build_epe_matrices(1, w[0], alg5m1)\nalg5epem2 = build_epe_matrices(2, w[1], alg5m2)", "Third: run alg5epe with filter order 1 then 2", "%%time\n# Running alg5epe with filter order r = 1\nalg5_stage1(m_size, n_size, 1, w[0], alg5m1, alg5c1, blocks)\nalg5_epe_stage23(m_size, n_size, alg5m1, alg5epem1, alg5c1)\nalg5_epe_stage45(m_size, n_size, 1, alg5m1, alg5epem1, alg5c1)\nalg5_stage6(m_size, n_size, w[0], alg5c1, blocks)\n# Running alg5epe with filter order r = 2\nalg5_stage1(m_size, n_size, 2, w[1], alg5m2, alg5c2, blocks)\nalg5_epe_stage23(m_size, n_size, alg5m2, alg5epem2, alg5c2)\nalg5_epe_stage45(m_size, 
n_size, 2, alg5m2, alg5epem2, alg5c2)\nalg5_stage6(m_size, n_size, w[1], alg5c2, blocks)\n# Join blocks back together\nX3 = join_blocks(blocks, b, m_size, n_size, X1.shape)", "Fourth: show both results and error measurements", "fig, (ax2, ax3) = plt.subplots(1, 2)\nfig.set_figheight(9)\nfig.set_figwidth(14)\nax2.imshow(X2)\nax3.imshow(X3)\nprint '[ Mean Squared Error:', mean_squared_error(X2, X3), ' ]',\nprint '[ Structural similarity:', ssim(X2, X3), ' ]'", "Conclusion: direct convolution (left) and recursive filtering (right) present the same result when considering even-periodic extension" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
rebeccabilbro/tiamat
MongoDBTutorial.ipynb
mit
[ "from IPython.core.display import HTML ", "Introduction to MongoDB with PyMongo and NOAA Data\nThis notebook provides a basic walkthrough of how to use MongoDB and is based on a tutorial originally by Alberto Negron.\nWhat is MongoDB?\nMongoDB is a cross-platform document-oriented NoSQL database. Rather than the traditional table-based relational database structure, MongoDB stores JSON-like documents with dynamic schemas (called BSON), making data integration easier and faster for certain types of applications.\nFeatures\nSome of the features include: \nDocument-orientation\nInstead of taking a business subject and breaking it up into multiple relational structures, MongoDB can store the business subject in the minimal number of documents. \nAd hoc queries\nMongoDB supports field, range queries, regular expression searches. Queries can return specific fields of documents and also include user-defined JavaScript functions. \nIndexing\nAny field in a MongoDB document can be indexed – including within arrays and embedded documents. Primary and secondary indices are available. \nAggregation\nAggregation operators can be strung together to form a pipeline – analogous to Unix pipes. \nWhen it makes sense to use MongoDB\nMetadata records are frequently stored as JSON, and almost anything you get from an API will be JSON. For example, check out the metadata records for the National Oceanic and Atmospheric Administration. \nMongoDB is a great tool to use with JSON data because it stores structured data as JSON-like documents, using dynamic rather than predefined schemas. \nIn MongoDB, an element of data is called a document, and documents are stored in collections. One collection may have any number of documents. Collections are a bit like tables in a relational database, and documents are like records. 
But there is one big difference: every record in a table has the same fields (with, usually, differing values) in the same order, while each document in a collection can have completely different fields from the other documents.\nDocuments are Python dictionaries that can have strings as keys and can contain various primitive types (int, float, unicode, datetime) as well as other documents (Python dicts) and arrays (Python lists).\nGetting started\nFirst we need to import json and pymongo.\nNote that the pprint module provides a capability to “pretty-print” arbitrary Python data structures in a form which can be used as input to the interpreter. This is particularly helpful with JSON. You can read more about pprint here.", "import json\nimport pymongo\nfrom pprint import pprint", "Connect\nJust as with the relational database example with sqlite, we need to begin by setting up a connection. With MongoDB, we will be using pymongo, though MongoDB also comes with a console API that uses JavaScript. \nTo make our connection, we will use the PyMongo method MongoClient:", "conn=pymongo.MongoClient()", "Create and access a database\nMongoDB creates databases and collections automatically for you if they don't exist already. A single instance of MongoDB can support multiple independent databases. When working with PyMongo, we access databases using attribute style access, just like we did with sqlite:", "db = conn.mydb\n\nconn.database_names()", "Collections\nA collection is a group of documents stored in MongoDB, and can be thought of as roughly the equivalent of a table in a relational database.
Getting a collection in PyMongo works the same as getting a database:", "collection = db.my_collection\n\ndb.collection_names()", "Insert data\nTo insert some data into MongoDB, all we need to do is create a dict and call insert_one on the collection object:", "doc = {\"class\":\"xbus-502\",\"date\":\"03-05-2016\",\"instructor\":\"bengfort\",\"classroom\":\"C222\",\"roster_count\":\"25\"}\ncollection.insert_one(doc)", "You can put anything in:", "doc = {\"class\":\"xbus-502\",\"date\":\"03-05-2016\",\"teaching_assistant\":\"bilbro\", \"sauce\": \"awesome\"}\ncollection.insert_one(doc)", "A practical example\nAt my job I have been working on a project to help make Commerce datasets easier to find. One of the barriers to searching for records is when the keywords return either too many or too few results. It can also be a problem if the keywords are too technical for lay users. \nOne solution is to use topic modeling to extract latent themes from the metadata records and then probabilistically assign each record a more sensical set of keywords based on its proximity (via kmeans) to the topics.\nIn order to get started, first I had to gather up a bunch of JSON metadata records and store them for analysis and modeling. 
Here's what I did: \n```python\nimport requests\nNOAA_URL = \"https://data.noaa.gov/data.json\"\ndef load_data(URL):\n \"\"\"\n Loads the data from URL and returns data in JSON format.\n \"\"\"\n r = requests.get(URL)\n data = r.json()\n return data\nnoaa = load_data(NOAA_URL)\n```\nBut...this kinda takes a long time, so I've created a file for you that contains a small chunk of the records to use for today's workshop.", "with open(\"data_sample.json\") as data_file: \n noaa = json.load(data_file)\n\nlen(noaa)", "Checking out the data\nNow let's print out just one record to examine the structure.", "pprint(noaa[0])", "Or say we wanted just the \"description\" field:", "pprint(noaa[0]['description'])", "Define the database\nWe will want to enter these records into our database. But first, we'll define a specific database for the NOAA records:", "db = conn.earthwindfire", "Define the collection\nNext we define the collection where we'll insert the NOAA metadata records:", "records = db.records", "Insert data\nThen we loop through each record in the NOAA dataset and insert just the target information for each into the collection.", "# What data fields seem important to you? Add them below following the examples:\n\ndef insert(metadata):\n for dataset in metadata:\n data ={}\n data[\"title\"] = dataset[\"title\"]\n data[\"description\"] = dataset[\"description\"]\n data[\"keywords\"] = dataset[\"keyword\"]\n data[\"accessLevel\"] = dataset[\"accessLevel\"]\n data[\"lang\"] = dataset[\"language\"]\n # choose your own\n # choose your own\n # choose your own \n # choose your own\n\n records.insert_one(data)\n\ninsert(noaa)\n\n# Check to make sure they're all in there\nrecords.count()", "Querying\nQuerying with .findOne( )\nThe find_one() method selects and returns a single document from a collection and returns that document (or None if there are no matches). 
It is useful when you know there is only one matching document, or are only interested in the first match.", "records.find_one()", "Querying with .find( )\nTo get more than a single document as the result of a query, we use the find() method. find() returns a Cursor instance, which allows us to iterate over all matching documents.\n```python\nrecords.find()\n```\nFor example, we can iterate over the first 2 documents (there are a lot in the collection and this is just an example) in the records collection:", "for rec in records.find()[:2]:\n pprint(rec)", "Searching\nMongoDB queries are represented as JSON-like structures just like documents. To build a query, you just need to specify a dictionary with the properties you want the results to match. For example, let's say we were just interested in publicly available satellite data from NESDIS.\nThis query will match all documents in the records collection with the keyword \"NESDIS\".", "records.find({\"keywords\": \"NESDIS\"}).count()", "1117 is probably more than we want to print out in a Jupyter Notebook...
\nWe can further narrow our search by adding more criteria. To match several values within the same keywords array, we use the $all operator (repeating a key in a Python dict would silently keep only the last value):", "records.find({\"keywords\": {\"$all\": [\"NESDIS\", \"Russia\"]}, \"accessLevel\": \"public\"}).count()", "Since there are only two, let's check them out:", "for r in records.find({\"keywords\": {\"$all\": [\"NESDIS\", \"Russia\"]}, \"accessLevel\": \"public\"}):\n pprint(r)", "If you already know SQL...\nThe following table provides an overview of common SQL aggregation terms, functions, and concepts and the corresponding MongoDB aggregation operators: \n| SQL Terms, Functions, and Concepts | MongoDB Aggregation Operators |\n| ---------------------------------- |:-------------------------------|\n| WHERE | \\$match |\n| GROUP BY | \\$group |\n| HAVING | \\$match |\n| SELECT | \\$project |\n| ORDER BY | \\$sort |\n| LIMIT | \\$limit |\n| SUM() | \\$sum |\n| COUNT() | \\$sum |\n| join | \\$lookup |\nBut...thanks to MongoDB's nested data structures, we can also do a lot of things we can't do in a relational database. \nLength\nLet's look for some entries that have way too many keywords:", "cursor = db.records.find({\"$where\": \"this.keywords.length > 100\"}).limit(2)\nfor rec in cursor:\n pprint(rec)", "Full text search with a text index\nOne of the things that makes MongoDB special is that it enables us to create search indexes. Indexes provide high performance read operations for frequently used queries.\nIn particular, a text index will enable us to search for string content in a collection. Keep in mind that a collection can have at most one text index. \nWe will create a text index on the description field so that we can search inside our NOAA records text:", "db.records.create_index([('description', 'text')])", "To test our newly created text index on the description field, we will search documents using the $text operator.
Let's start by looking for all the documents that have the word 'precipitation' in their description field.", "cursor = db.records.find({'$text': {'$search': 'precipitation'}})\nfor rec in cursor:\n print rec\n\ncursor = db.records.find({'$text': {'$search': 'fire'}})\ncursor.count()", "If we want to create a new text index, we can do so by first dropping the first text index:", "db.records.drop_index(\"description_text\") ", "We can also create a wildcard text index for scenarios where we want any text fields in the records to be searchable. In such scenarios you can index all the string fields of your document using the $** wildcard specifier.\nThe query would go something like this:", "db.records.create_index([(\"$**\",\"text\")])\n\ncursor = db.records.find({'$text': {'$search': \"Russia\"}})\nfor rec in cursor:\n pprint(rec)", "Projections\nProjections allow you to pass along the documents with only the specified fields to the next stage in the pipeline. The specified fields can be existing fields from the input documents or newly computed fields.\nFor example, let's redo our fulltext Russia search, but project just the titles of the records:", "cursor = db.records.find({'$text': {'$search': \"Russia\"}}, {\"title\": 1,\"_id\":0 })\nfor rec in cursor:\n print rec", "Limit\n.limit() passes the first n documents unmodified to the pipeline where n is the specified limit. For each input document, this method outputs either one document (for the first n documents) or zero documents (after the first n documents).", "cursor = db.records.find({'$text': {'$search': \"Russia\"}}, {\"title\": 1,\"_id\":0 }).limit(2)\nfor rec in cursor:\n print rec", "Aggregate\nMongoDB can perform aggregation operations with .aggregate(), such as grouping by a specified key and evaluating a total or a count for each distinct group. \nUse the $group stage to group by a specified key using the _id field. 
$group accesses fields by the field path, which is the field name prefixed by a dollar sign. \nFor example, we can use $group to aggregate all the languages of the NOAA records:", "cursor = db.records.aggregate(\n [\n {\"$group\": {\"_id\": \"$lang\", \"count\": {\"$sum\": 1}}}\n ]\n)\nfor document in cursor:\n pprint(document)", "Or we can combine $match and $group to aggregate the titles of just the public access records that match the word 'Russia':", "cursor = db.records.aggregate(\n [\n {\"$match\": {'$text': {'$search': \"Russia\"}, \"accessLevel\": \"public\"}},\n {\"$group\": {\"_id\": \"$title\"}}\n ]\n)\n\nfor document in cursor:\n pprint(document)", "The aggregation pipeline\nThe aggregation pipeline allows MongoDB to provide native aggregation capabilities that correspond to many common data aggregation operations in SQL. Here's where you will put the pieces together to aggregate to get results that you can begin to analyze and perform machine learning on.\nHere's an example of an aggregation pipeline:", "from IPython.display import Image\nImage(filename='images/mongodb_pipeline.png', width=600, height=300)", "Removing data\nIt's easy (almost too easy) to delete projects, collections, and databases in MongoDB. Before we get rid of anything, let's determine what collections we have in our database:", "conn.earthwindfire.collection_names()", "Now let's delete our records collection and check again to see what collections are in our database:", "conn.earthwindfire.drop_collection(\"records\")\nconn.earthwindfire.collection_names()", "We can also just drop a database.
First let's determine what databases we have:", "conn.database_names()", "Now let's remove the earthwindfire database:", "conn.drop_database(\"earthwindfire\")\nconn.database_names()", "Nice work!\nMiscellaneous\nStatistics\nThe dbstats method returns statistics that reflect the use state of a single database:", "db = conn.mydb\ncollection = db.my_collection\ndb.command({'dbstats': 1})", "collStats returns a variety of storage statistics for a given collection. Let's try it out for our NOAA records collection:", "db.command({'collstats': 'my_collection', 'verbose': 'true' })" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
rawrgulmuffins/presentation_notes
pycon2016/tutorials/computation_statistics/hypothesis.ipynb
mit
[ "Hypothesis Testing\nCopyright 2016 Allen Downey\nLicense: Creative Commons Attribution 4.0 International", "%matplotlib inline\nfrom __future__ import print_function, division\n\nimport numpy\nimport scipy.stats\n\nimport matplotlib.pyplot as pyplot\n\nfrom ipywidgets import interact, interactive, fixed\nimport ipywidgets as widgets\n\nimport first\n\n# seed the random number generator so we all get the same results\nnumpy.random.seed(19)\n\n# some nicer colors from http://colorbrewer2.org/\nCOLOR1 = '#7fc97f'\nCOLOR2 = '#beaed4'\nCOLOR3 = '#fdc086'\nCOLOR4 = '#ffff99'\nCOLOR5 = '#386cb0'\n\n", "Part One\nSuppose you observe an apparent difference between two groups and you want to check whether it might be due to chance.\nAs an example, we'll look at differences between first babies and others. The first module provides code to read data from the National Survey of Family Growth (NSFG).", "live, firsts, others = first.MakeFrames()", "We'll look at a couple of variables, including pregnancy length and birth weight. The effect size we'll consider is the difference in the means.\nOther examples might include a correlation between variables or a coefficient in a linear regression. The number that quantifies the size of the effect is called the \"test statistic\".", "def TestStatistic(data):\n group1, group2 = data\n test_stat = abs(group1.mean() - group2.mean())\n return test_stat", "For the first example, I extract the pregnancy length for first babies and others. The results are pandas Series objects.", "group1 = firsts.prglngth\ngroup2 = others.prglngth", "The actual difference in the means is 0.078 weeks, which is only 13 hours.", "actual = TestStatistic((group1, group2))\nactual", "The null hypothesis is that there is no difference between the groups. 
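In miniature, the whole procedure the next few cells build up step by step looks like this (a compact, self-contained sketch with made-up numbers, not the NSFG data):

```python
import numpy as np

rng = np.random.default_rng(19)

# Two made-up groups with a tiny difference in means
group1 = rng.normal(loc=38.6, scale=2.7, size=400)
group2 = rng.normal(loc=38.5, scale=2.7, size=600)
actual = abs(group1.mean() - group2.mean())

# Simulate the null hypothesis: pool, shuffle, re-split, re-measure
pool = np.hstack((group1, group2))
n = len(group1)
test_stats = np.empty(1000)
for i in range(1000):
    rng.shuffle(pool)
    test_stats[i] = abs(pool[:n].mean() - pool[n:].mean())

# p-value: how often chance alone produces a difference at least this big
pvalue = np.mean(test_stats >= actual)
print('actual difference:', round(actual, 3))
print('p-value:', pvalue)
```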
We can model that by forming a pooled sample that includes first babies and others.", "n, m = len(group1), len(group2)\npool = numpy.hstack((group1, group2))", "Then we can simulate the null hypothesis by shuffling the pool and dividing it into two groups, using the same sizes as the actual sample.", "def RunModel():\n numpy.random.shuffle(pool)\n data = pool[:n], pool[n:]\n return data", "The result of running the model is two NumPy arrays with the shuffled pregnancy lengths:", "RunModel()", "Then we compute the same test statistic using the simulated data:", "TestStatistic(RunModel())", "If we run the model 1000 times and compute the test statistic, we can see how much the test statistic varies under the null hypothesis.", "test_stats = numpy.array([TestStatistic(RunModel()) for i in range(1000)])\ntest_stats.shape", "Here's the sampling distribution of the test statistic under the null hypothesis, with the actual difference in means indicated by a gray line.", "pyplot.vlines(actual, 0, 300, linewidth=3, color='0.8')\npyplot.hist(test_stats, color=COLOR5)\npyplot.xlabel('difference in means')\npyplot.ylabel('count')\nNone # It's interesting that these None's are significant to the notebook.", "The p-value is the probability that the test statistic under the null hypothesis exceeds the actual value.", "pvalue = sum(test_stats >= actual) / len(test_stats)\npvalue", "In this case the result is about 15%, which means that even if there is no difference between the groups, it is plausible that we could see a sample difference as big as 0.078 weeks.\nWe conclude that the apparent effect might be due to chance, so we are not confident that it would appear in the general population, or in another sample from the same population.\nSTOP HERE\nPart Two\nWe can take the pieces from the previous section and organize them in a class that represents the structure of a hypothesis test.", "class HypothesisTest(object):\n \"\"\"Represents a hypothesis test.\"\"\"\n\n def 
__init__(self, data):\n \"\"\"Initializes.\n\n data: data in whatever form is relevant\n \"\"\"\n self.data = data\n self.MakeModel()\n self.actual = self.TestStatistic(data)\n self.test_stats = None\n\n def PValue(self, iters=1000):\n \"\"\"Computes the distribution of the test statistic and p-value.\n\n iters: number of iterations\n\n returns: float p-value\n \"\"\"\n self.test_stats = numpy.array([self.TestStatistic(self.RunModel()) \n for _ in range(iters)])\n\n count = sum(self.test_stats >= self.actual)\n return count / iters\n\n def MaxTestStat(self):\n \"\"\"Returns the largest test statistic seen during simulations.\n \"\"\"\n return max(self.test_stats)\n\n def PlotHist(self, label=None):\n \"\"\"Draws a histogram of the test statistics with a vertical line at the observed test stat.\n \"\"\"\n ys, xs, patches = pyplot.hist(self.test_stats, color=COLOR4)\n pyplot.vlines(self.actual, 0, max(ys), linewidth=3, color='0.8')\n pyplot.xlabel('test statistic')\n pyplot.ylabel('count')\n\n def TestStatistic(self, data):\n \"\"\"Computes the test statistic.\n\n data: data in whatever form is relevant \n \"\"\"\n raise UnimplementedMethodException()\n\n def MakeModel(self):\n \"\"\"Build a model of the null hypothesis.\n \"\"\"\n pass\n\n def RunModel(self):\n \"\"\"Run the model of the null hypothesis.\n\n returns: simulated data\n \"\"\"\n raise UnimplementedMethodException()\n", "HypothesisTest is an abstract parent class that encodes the template. Child classes fill in the missing methods.
For example, here's the test from the previous section.", "class DiffMeansPermute(HypothesisTest):\n \"\"\"Tests a difference in means by permutation.\"\"\"\n\n def TestStatistic(self, data):\n \"\"\"Computes the test statistic.\n\n data: data in whatever form is relevant \n \"\"\"\n group1, group2 = data\n test_stat = abs(group1.mean() - group2.mean())\n return test_stat\n\n def MakeModel(self):\n \"\"\"Build a model of the null hypothesis.\n \"\"\"\n group1, group2 = self.data\n self.n, self.m = len(group1), len(group2)\n self.pool = numpy.hstack((group1, group2))\n\n def RunModel(self):\n \"\"\"Run the model of the null hypothesis.\n\n returns: simulated data\n \"\"\"\n numpy.random.shuffle(self.pool)\n data = self.pool[:self.n], self.pool[self.n:]\n return data", "Now we can run the test by instantiating a DiffMeansPermute object:", "data = (firsts.prglngth, others.prglngth)\nht = DiffMeansPermute(data)\np_value = ht.PValue(iters=1000)\nprint('\\nmeans permute pregnancy length')\nprint('p-value =', p_value)\nprint('actual =', ht.actual)\nprint('ts max =', ht.MaxTestStat())", "And we can plot the sampling distribution of the test statistic under the null hypothesis.", "ht.PlotHist()", "Difference in standard deviation\nExercise 1: Write a class named DiffStdPermute that extends DiffMeansPermute and overrides TestStatistic to compute the difference in standard deviations.
Is the difference in standard deviations statistically significant?", "# Solution goes here", "Here's the code to test your solution to the previous exercise.", "data = (firsts.prglngth, others.prglngth)\nht = DiffStdPermute(data)\np_value = ht.PValue(iters=1000)\nprint('\\nstd permute pregnancy length')\nprint('p-value =', p_value)\nprint('actual =', ht.actual)\nprint('ts max =', ht.MaxTestStat())", "Difference in birth weights\nNow let's run DiffMeansPermute again to see if there is a difference in birth weight between first babies and others.", "data = (firsts.totalwgt_lb.dropna(), others.totalwgt_lb.dropna())\nht = DiffMeansPermute(data)\np_value = ht.PValue(iters=1000)\nprint('\\nmeans permute birthweight')\nprint('p-value =', p_value)\nprint('actual =', ht.actual)\nprint('ts max =', ht.MaxTestStat())", "In this case, after 1000 attempts, we never see a sample difference as big as the observed difference, so we conclude that the apparent effect is unlikely under the null hypothesis. Under normal circumstances, we can also make the inference that the apparent effect is unlikely to be caused by random sampling.\nOne final note: in this case I would report that the p-value is less than 1/1000 or less than 0.001. I would not report p=0, because the apparent effect is not impossible under the null hypothesis; just unlikely." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
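The permutation-test pattern used by `DiffMeansPermute` in the notebook above can also be sketched without the `HypothesisTest` scaffolding. The following is a minimal stdlib-only sketch (the function name `permutation_p_value` and the sample data are invented for illustration, not part of the original notebook):

```python
import random

def permutation_p_value(group1, group2, iters=1000, seed=0):
    """Two-sample permutation test for a difference in means.

    Pools the two groups, repeatedly shuffles the pool, re-splits it
    into two groups of the original sizes, and counts how often the
    shuffled difference in means is at least as large as the observed one.
    """
    rng = random.Random(seed)
    mean = lambda xs: sum(xs) / len(xs)
    actual = abs(mean(group1) - mean(group2))
    pool = list(group1) + list(group2)
    n = len(group1)
    count = 0
    for _ in range(iters):
        rng.shuffle(pool)
        if abs(mean(pool[:n]) - mean(pool[n:])) >= actual:
            count += 1
    return count / iters

# Overlapping, similar groups: expect a fairly large p-value.
p = permutation_p_value([1, 2, 3, 4, 5], [2, 3, 4, 5, 6], iters=2000)
print(p)
```

As in the notebook, a small p-value means the observed difference is rarely matched under the shuffled (null) model.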
Yu-Group/scikit-learn-sandbox
jupyter/backup_deprecated_nbs/01_Exploring_Tree_Plots.ipynb
mit
[ "Trees and Forests\nNOTE: This module code was partly taken from Andreas Mueller's Advanced scikit-learn O'Reilly Course\nIt is just used to explore the scikit-learn random forest object in a systematic manner\nI've added more code to it to understand how to generate tree plots for random forests", "%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt", "Decision Tree Classification", "%%bash\npwd\nls\n\nfrom figures import plot_interactive_tree\nplot_interactive_tree.plot_tree_interactive()", "Random Forests", "from figures import plot_interactive_forest\nplot_interactive_forest.plot_forest_interactive()", "Selecting the Optimal Estimator via Cross-Validation", "from sklearn import grid_search\nfrom sklearn import tree\nfrom sklearn.datasets import load_digits\nfrom sklearn.cross_validation import train_test_split\nfrom sklearn.ensemble import RandomForestClassifier\n\ndigits = load_digits()\nX, y = digits.data, digits.target\n\nX_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)\n\nrf = RandomForestClassifier(n_estimators=200, n_jobs=-1)\nparameters = {'max_features':['sqrt', 'log2'],\n              'max_depth':[5, 7, 9]}\n\nclf_grid = grid_search.GridSearchCV(rf, parameters)\nclf_grid.fit(X_train, y_train)\n\nclf_grid.score(X_train, y_train)\n\nclf_grid.score(X_test, y_test)\n\nclf_grid.best_params_\n\nclf_grid.best_estimator_", "Fit the forest manually", "rf = RandomForestClassifier(n_estimators=5, n_jobs=-1)\nrf.fit(X_train, y_train)\n\nrf.score(X_test, y_test)\n\nprint([estimator.tree_.max_depth for estimator in rf.estimators_])\n\nfor idx, dec_tree in enumerate(rf.estimators_):\n    if idx == 0:\n        print(dec_tree.tree_.max_depth)\n    else:\n        pass\n\nfor idx, dec_tree in enumerate(rf.estimators_):\n    if idx == 0:\n        tree.export_graphviz(dec_tree) \n\nfrom sklearn import tree\ni_tree = 0\nfor tree_in_forest in rf.estimators_:\n    if i_tree ==0:\n        with open('tree_' + str(i_tree) + '.png', 'w') as my_file:\n            my_file =
tree.export_graphviz(tree_in_forest, out_file = my_file)\n i_tree = i_tree + 1\n else:\n pass\n\nimport io\nfrom scipy import misc\nfrom sklearn import tree\nimport pydot\n\ndef show_tree(decisionTree, file_path):\n dotfile = io.StringIO()\n tree.export_graphviz(decisionTree, out_file=dotfile)\n (graph,)=pydot.graph_from_dot_data(dotfile.getvalue())\n #pydot.graph_from_dot_data(dotfile.getvalue()).write_png(file_path)\n graph.write_png(file_path)\n i = misc.imread(file_path)\n plt.imshow(i)\n\nfrom sklearn import tree\ni_tree = 0\nfor tree_in_forest in rf.estimators_:\n if i_tree ==0:\n show_tree(tree_in_forest, 'test.png')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
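The random forest explored in the notebook above combines the predictions of its individual trees (`rf.estimators_`). For classification, the combination step is essentially a vote across trees; this stdlib-only sketch illustrates a plain majority vote (note that scikit-learn's `RandomForestClassifier` actually averages predicted class probabilities rather than counting hard votes, and the function name `majority_vote` is invented here):

```python
from collections import Counter

def majority_vote(per_tree_predictions):
    """Combine per-tree class predictions into one prediction per sample.

    per_tree_predictions: list of lists, one inner list of predicted
    labels per tree; all inner lists have the same length.
    """
    n_samples = len(per_tree_predictions[0])
    combined = []
    for i in range(n_samples):
        votes = Counter(tree[i] for tree in per_tree_predictions)
        combined.append(votes.most_common(1)[0][0])
    return combined

# Three "trees" predicting class labels for four samples:
trees = [
    [0, 1, 1, 2],
    [0, 1, 2, 2],
    [1, 1, 2, 0],
]
print(majority_vote(trees))  # [0, 1, 2, 2]
```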
Kaggle/learntools
notebooks/sql/raw/tut3.ipynb
apache-2.0
[ "Introduction\nNow that you can select raw data, you're ready to learn how to group your data and count things within those groups. This can help you answer questions like: \n\nHow many of each kind of fruit has our store sold?\nHow many species of animal has the vet office treated?\n\nTo do this, you'll learn about three new techniques: GROUP BY, HAVING and COUNT(). Once again, we'll use this made-up table of information on pets. \n\nCOUNT()\nCOUNT(), as you may have guessed from the name, returns a count of things. If you pass it the name of a column, it will return the number of entries in that column. \nFor instance, if we SELECT the COUNT() of the ID column in the pets table, it will return 4, because there are 4 ID's in the table.\n\nCOUNT() is an example of an aggregate function, which takes many values and returns one. (Other examples of aggregate functions include SUM(), AVG(), MIN(), and MAX().) As you'll notice in the picture above, aggregate functions introduce strange column names (like f0__). Later in this tutorial, you'll learn how to change the name to something more descriptive.\nGROUP BY\nGROUP BY takes the name of one or more columns, and treats all rows with the same value in that column as a single group when you apply aggregate functions like COUNT().\nFor example, say we want to know how many of each type of animal we have in the pets table. We can use GROUP BY to group together rows that have the same value in the Animal column, while using COUNT() to find out how many ID's we have in each group. \n\nIt returns a table with three rows (one for each distinct animal). We can see that the pets table contains 1 rabbit, 1 dog, and 2 cats.\nGROUP BY ... HAVING\nHAVING is used in combination with GROUP BY to ignore groups that don't meet certain criteria. \nSo this query, for example, will only include groups that have more than one ID in them.\n\nSince only one group meets the specified criterion, the query will return a table with only one row. 
\nExample: Which Hacker News comments generated the most discussion?\nReady to see an example on a real dataset? The Hacker News dataset contains information on stories and comments from the Hacker News social networking site. \nWe'll work with the comments table and begin by printing the first few rows. (We have hidden the corresponding code. To take a peek, click on the \"Code\" button below.)", "#$HIDE_INPUT$\nfrom google.cloud import bigquery\n\n# Create a \"Client\" object\nclient = bigquery.Client()\n\n# Construct a reference to the \"hacker_news\" dataset\ndataset_ref = client.dataset(\"hacker_news\", project=\"bigquery-public-data\")\n\n# API request - fetch the dataset\ndataset = client.get_dataset(dataset_ref)\n\n# Construct a reference to the \"comments\" table\ntable_ref = dataset_ref.table(\"comments\")\n\n# API request - fetch the table\ntable = client.get_table(table_ref)\n\n# Preview the first five lines of the \"comments\" table\nclient.list_rows(table, max_results=5).to_dataframe()", "Let's use the table to see which comments generated the most replies. Since:\n- the parent column indicates the comment that was replied to, and \n- the id column has the unique ID used to identify each comment, \nwe can GROUP BY the parent column and COUNT() the id column in order to figure out the number of comments that were made as responses to a specific comment. (This might not make sense immediately -- take your time here to ensure that everything is clear!)\nFurthermore, since we're only interested in popular comments, we'll look at comments with more than ten replies. 
So, we'll only return groups HAVING more than ten ID's.", "# Query to select comments that received more than 10 replies\nquery_popular = \"\"\"\n SELECT parent, COUNT(id)\n FROM `bigquery-public-data.hacker_news.comments`\n GROUP BY parent\n HAVING COUNT(id) > 10\n \"\"\"", "Now that our query is ready, let's run it and store the results in a pandas DataFrame:", "# Set up the query (cancel the query if it would use too much of \n# your quota, with the limit set to 10 GB)\nsafe_config = bigquery.QueryJobConfig(maximum_bytes_billed=10**10)\nquery_job = client.query(query_popular, job_config=safe_config)\n\n# API request - run the query, and convert the results to a pandas DataFrame\npopular_comments = query_job.to_dataframe()\n\n# Print the first five rows of the DataFrame\npopular_comments.head()", "Each row in the popular_comments DataFrame corresponds to a comment that received more than ten replies. For instance, the comment with ID 801208 received 56 replies.\nAliasing and other improvements\nA couple hints to make your queries even better:\n- The column resulting from COUNT(id) was called f0__. That's not a very descriptive name. You can change the name by adding AS NumPosts after you specify the aggregation. This is called aliasing, and it will be covered in more detail in an upcoming lesson.\n- If you are ever unsure what to put inside the COUNT() function, you can do COUNT(1) to count the rows in each group. Most people find it especially readable, because we know it's not focusing on other columns. 
It also scans less data than if column names are supplied (making it faster and using less of your data access quota).\nUsing these tricks, we can rewrite our query:", "# Improved version of earlier query, now with aliasing & improved readability\nquery_improved = \"\"\"\n                 SELECT parent, COUNT(1) AS NumPosts\n                 FROM `bigquery-public-data.hacker_news.comments`\n                 GROUP BY parent\n                 HAVING COUNT(1) > 10\n                 \"\"\"\n\nsafe_config = bigquery.QueryJobConfig(maximum_bytes_billed=10**10)\nquery_job = client.query(query_improved, job_config=safe_config)\n\n# API request - run the query, and convert the results to a pandas DataFrame\nimproved_df = query_job.to_dataframe()\n\n# Print the first five rows of the DataFrame\nimproved_df.head()", "Now you have the data you want, and it has descriptive names. That's good style.\nNote on using GROUP BY\nNote that because it tells SQL how to apply aggregate functions (like COUNT()), it doesn't make sense to use GROUP BY without an aggregate function. Similarly, if you have any GROUP BY clause, then all variables must be passed to either a\n1. GROUP BY command, or\n2. an aggregation function.\nConsider the query below:", "query_good = \"\"\"\n             SELECT parent, COUNT(id)\n             FROM `bigquery-public-data.hacker_news.comments`\n             GROUP BY parent\n             \"\"\"", "Note that there are two variables: parent and id. \n- parent was passed to a GROUP BY command (in GROUP BY parent), and \n- id was passed to an aggregate function (in COUNT(id)).\nAnd this query won't work, because the author column isn't passed to an aggregate function or a GROUP BY clause:", "query_bad = \"\"\"\n            SELECT author, parent, COUNT(id)\n            FROM `bigquery-public-data.hacker_news.comments`\n            GROUP BY parent\n            \"\"\"", "If you make this error, you'll get the error message SELECT list expression references column (column's name) which is neither grouped nor aggregated at.\nYour turn\nThese aggregations let you write much more interesting queries. Try it yourself with these coding exercises." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
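The GROUP BY / HAVING / COUNT pattern from the BigQuery notebook above can be tried locally without any cloud setup. This is a sketch using Python's stdlib `sqlite3` with a tiny invented `comments` table (the table and column names mirror the tutorial; the rows are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE comments (id INTEGER, parent INTEGER)")
# Parent comment 1 gets three replies, parent comment 2 gets one.
conn.executemany(
    "INSERT INTO comments VALUES (?, ?)",
    [(10, 1), (11, 1), (12, 1), (13, 2)],
)

# Keep only groups (parents) HAVING more than one reply, with an
# alias like the tutorial's COUNT(1) AS NumPosts.
rows = conn.execute("""
    SELECT parent, COUNT(1) AS NumPosts
    FROM comments
    GROUP BY parent
    HAVING COUNT(1) > 1
""").fetchall()
print(rows)  # [(1, 3)]
```

The group for parent 2 is dropped by the HAVING clause, exactly as in the tutorial's example table.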
csiu/100daysofcode
datamining/2017-03-03-day07.ipynb
mit
[ "layout: post\nauthor: csiu\ndate: 2017-03-03\ntitle: \"Day07:\"\ncategories: update\ntags:\n  - 100daysofcode\n  - text-mining\nexcerpt:\n\nDAY 07 - Mar 3, 2017\nYesterday, the Flesch reading ease score got me thinking ...\nFlesch reading ease\nFlesch reading ease is a measure of how difficult a passage in English is to understand. The readability ease measure is calculated as follows:\n$RE = 206.835 - (1.015 \\times \\frac{total\\ words}{total\\ sentences}) - (84.6 \\times \\frac{total\\ syllables}{total\\ words})$\nwhere $\\frac{total\\ words}{total\\ sentences}$ refers to the average sentence length (ASL) and \n$\\frac{total\\ syllables}{total\\ words}$ refers to the average number of syllables per word (ASW).", "def readability_ease(num_sentences, num_words, num_syllables):\n    asl = num_words / num_sentences\n    asw = num_syllables / num_words\n    \n    return(206.835 - (1.015 * asl) - (84.6 * asw))", "The readability ease (RE) score typically ranges from 0 to 100, and higher scores indicate material that is easier to read.", "def readability_ease_interpretation(x):\n    if 90 <= x:\n        res = \"5th grade] \"\n        res += \"Very easy to read. Easily understood by an average 11-year-old student.\"\n        \n    elif 80 <= x < 90:\n        res = \"6th grade] \"\n        res += \"Easy to read. Conversational English for consumers.\"\n        \n    elif 70 <= x < 80:\n        res = \"7th grade] \"\n        res += \"Fairly easy to read.\"\n        \n    elif 60 <= x < 70:\n        res = \"8th & 9th grade] \"\n        res += \"Plain English. Easily understood by 13- to 15-year-old students.\"\n        \n    elif 50 <= x < 60:\n        res = \"10th to 12th grade] \"\n        res += \"Fairly difficult to read.\"\n        \n    elif 30 <= x < 50:\n        res = \"College] \"\n        res += \"Difficult to read.\"\n        \n    elif 0 <= x < 30:\n        res = \"College Graduate] \"\n        res += \"Very difficult to read. Best understood by university graduates.\"\n        \n    print(\"[{:.1f}|{}\".format(x, res))", "Test case", "text = \"Hello world, how are you? I am great.
Thank you for asking!\"", "In this test case, we have 12 words, 14 syllables, and 3 sentences.\nCounting words\nCounting words is easy.", "import nltk\nimport re\n\ntext = text.lower()\n\nwords = nltk.wordpunct_tokenize(re.sub('[^a-zA-Z_ ]', '',text))\nnum_words = len(words)\n\nprint(words)\nprint(num_words)", "Counting syllables\nCounting syllables is a bit more tricky. According to Using Python and the NLTK to Find Haikus in the Public Twitter Stream by Brandon Wood (2013), the Carnegie Mellon University (CMU) Pronouncing Dictionary corpora contain the syllable count for over 125,000 (English) words and thus could be used to count syllables.", "from nltk.corpus import cmudict\nfrom curses.ascii import isdigit\n\nd = cmudict.dict()\n\ndef count_syllables(word):\n return([len(list(y for y in x if isdigit(y[-1]))) for x in d[word.lower()]][0])\n\nprint(\"Number of syllables per word\", \"=\"*28, sep=\"\\n\")\nfor word in words:\n num_syllables = count_syllables(word)\n print(\"{}: {}\".format(word, num_syllables))", "Counting sentences\nThis was already done in Day03.", "sentences = nltk.tokenize.sent_tokenize(text)\nnum_sentences = len(sentences)\n\nprint(\"Number of sentences: {}\".format(num_sentences), \"=\"*25, sep=\"\\n\")\nfor sentence in sentences:\n print(sentence)", "Putting it all together", "def flesch_reading_ease(text):\n ## Preprocessing\n text = text.lower()\n \n sentences = nltk.tokenize.sent_tokenize(text)\n words = nltk.wordpunct_tokenize(re.sub('[^a-zA-Z_ ]', '',text))\n\n ## Count\n num_sentences = len(sentences)\n num_words = len(words)\n num_syllables = sum([count_syllables(word) for word in words])\n\n ## Calculate\n fre = readability_ease(num_sentences, num_words, num_syllables)\n return(fre)\n\nfre = flesch_reading_ease(text)\n\nreadability_ease_interpretation(fre)", "In the example, the sentence was constructed at a 5th grade level. 
Note that the score here is actually above 100: the formula is not strictly bounded to the 0 to 100 range for very simple text.\nWhat about Shakespeare?", "# (As You Like It, Act 2 Scene 7)\ntext = \"\"\"\nAll the world's a stage, \nand all the men and women merely players. \nThey have their exits and their entrances; \nAnd one man in his time plays many parts\n\"\"\"\n\nfre = flesch_reading_ease(text)\nreadability_ease_interpretation(fre)", "... Thus, Shakespeare is doable in high school." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
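One caveat with the cmudict-based `count_syllables` in the notebook above: it raises a `KeyError` for words missing from the CMU dictionary. A common fallback (an approximation invented here, not part of the original post, and the function name is hypothetical) is to count groups of consecutive vowels with a crude adjustment for a trailing silent "e":

```python
def count_syllables_heuristic(word):
    """Approximate syllable count for an English word.

    Counts runs of consecutive vowels as one syllable each, then
    subtracts one for a likely-silent trailing 'e'. Always returns
    at least 1.
    """
    word = word.lower()
    vowels = "aeiouy"
    count = 0
    prev_was_vowel = False
    for ch in word:
        is_vowel = ch in vowels
        if is_vowel and not prev_was_vowel:
            count += 1
        prev_was_vowel = is_vowel
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

for w in ["hello", "asking", "great"]:
    print(w, count_syllables_heuristic(w))
```

This heuristic is wrong for plenty of words, so it is best used only as a backstop when the dictionary lookup fails.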
ContinuumIO/pydata-apps
Section_1_blaze.ipynb
mit
[ "<img src=\"images/continuum_analytics_logo.png\" \n     alt=\"Continuum Logo\",\n     align=\"right\",\n     width=\"30%\">,\nIntroduction to Blaze\nIn this tutorial we'll learn how to use Blaze to discover, migrate, and query data living in other databases. Generally this tutorial will have the following format:\n\nodo - Move data to database\nblaze - Query data in database\n\nInstall\nThis tutorial uses many different libraries that are all available with the Anaconda Distribution. Once you have Anaconda installed, please run these commands from a terminal:\n$ conda install -y blaze\n$ conda install -y bokeh\n$ conda install -y odo\nnbviewer: http://nbviewer.ipython.org/github/ContinuumIO/pydata-apps/blob/master/Section-1_blaze.ipynb\ngithub: https://github.com/ContinuumIO/pydata-apps\n<hr/>\n\nGoal: Accessible, Interactive, Analytic Queries\nNumPy and Pandas provide accessible, interactive, analytic queries; this is valuable.", "import pandas as pd\ndf = pd.read_csv('iris.csv')\ndf.head()\n\ndf.groupby(df.Species).PetalLength.mean()  # Average petal length per species", "<hr/>\n\nBut as data grows and systems become more complex, moving data and querying data become more difficult. Python already has excellent tools for data that fits in memory, but we want to hook up to data that is inconvenient.\nFrom now on, we're going to assume one of the following:\n\nYou have an inconvenient amount of data\nThat data should live someplace other than your computer\n\n<hr/>\n\nDatabases and Python\nWhen in-memory arrays/dataframes cease to be an option, we turn to databases. These live outside of the Python process and so might be less convenient. The open source Python ecosystem includes libraries to interact with these databases and with foreign data in general.
\nExamples:\n\nSQL - sqlalchemy \nHive/Cassandra - pyhive\nImpala - impyla\nRedShift - redshift-sqlalchemy\n...\n\n\nMongoDB - pymongo\nHBase - happybase\nSpark - pyspark\nSSH - paramiko\nHDFS - pywebhdfs\nAmazon S3 - boto\n\nToday we're going to use some of these indirectly with odo (was into) and Blaze. We'll try to point out these libraries as we automate them so that, if you'd like, you can use them independently.\n<hr />\n\n<img src=\"images/continuum_analytics_logo.png\" \n alt=\"Continuum Logo\",\n align=\"right\",\n width=\"30%\">,\nodo (formerly into)\nOdo migrates data between formats and locations.\nBefore we can use a database we need to move data into it. The odo project provides a single consistent interface to move data between formats and between locations.\nWe'll start with local data and eventually move out to remote data.\nodo docs\n<hr/>\n\nExamples\nOdo moves data into a target from a source\n```python\n\n\n\nodo(source, target)\n```\n\n\n\nThe target and source can be either a Python object or a string URI. The following are all valid calls to into\n```python\n\n\n\nodo('iris.csv', pd.DataFrame) # Load CSV file into new DataFrame\nodo(my_df, 'iris.json') # Write DataFrame into JSON file\nodo('iris.csv', 'iris.json') # Migrate data from CSV to JSON\n```\n\n\n\n<hr/>\n\nExercise\nUse odo to load the iris.csv file into a Python list, a np.ndarray, and a pd.DataFrame", "from odo import odo\nimport numpy as np\nimport pandas as pd\n\nodo(\"iris.csv\", pd.DataFrame)", "<hr/>\n\nURI Strings\nOdo refers to foreign data either with a Python object like a sqlalchemy.Table object for a SQL table, or with a string URI, like postgresql://hostname::tablename.\nURI's often take on the following form\nprotocol://path-to-resource::path-within-resource\n\nWhere path-to-resource might point to a file, a database hostname, etc. while path-within-resource might refer to a datapath or table name. 
Note the two main separators\n\n:// separates the protocol on the left (sqlite, mongodb, ssh, hdfs, hive, ...)\n:: separates the path within the database on the right (e.g. tablename)\n\nodo docs on uri strings\n<hr/>\n\nExamples\nHere are some example URIs\nmyfile.json\nmyfiles.*.csv'\npostgresql://hostname::tablename\nmongodb://hostname/db::collection\nssh://user@host:/path/to/myfile.csv\nhdfs://user@host:/path/to/*.csv\n<hr />\n\nExercise\nMigrate your CSV file into a table named iris in a new SQLite database at sqlite:///my.db. Remember to use the :: separator and to separate your database name from your table name.\nodo docs on SQL", "odo(\"iris.csv\", \"sqlite:///my.db::iris\")", "What kind of object did you get receive as output? Call type on your result.", "type(_)", "<hr/>\n\nHow it works\nOdo is a network of fast pairwise conversions between pairs of formats. We when we migrate between two formats we traverse a path of pairwise conversions.\nWe visualize that network below:\n\nEach node represents a data format. Each directed edge represents a function to transform data between two formats. A single call to into may traverse multiple edges and multiple intermediate formats. Red nodes support larger-than-memory data.\nA single call to into may traverse several intermediate formats calling on several conversion functions. 
For example, when we migrate a CSV file to a Mongo database we might take the following route:\n\nLoad into a DataFrame (pandas.read_csv)\nConvert to np.recarray (DataFrame.to_records)\nThen to a Python Iterator (np.ndarray.tolist)\nFinally to Mongo (pymongo.Collection.insert)\n\nAlternatively we could write a special function that uses MongoDB's native CSV\nloader and shortcut this entire process with a direct edge CSV -&gt; Mongo.\nThese functions are chosen because they are fast, often far faster than converting through a central serialization format.\nThis picture is actually from an older version of odo, when the graph was still small enough to visualize pleasantly. See odo docs for a more updated version.\n<hr/>\n\nRemote Data\nWe can interact with remote data in three locations\n\nOn Amazon's S3 (this will be quick)\nOn a remote machine via ssh\nOn the Hadoop File System (HDFS)\n\nFor most of this we'll wait until we've seen Blaze, briefly we'll use S3.\nS3\nFor now, we quickly grab a file from Amazon's S3.\nThis example depends on boto to interact with S3.\nconda install boto\n\nodo docs on aws", "odo('s3://nyqpug/tips.csv', pd.DataFrame)", "<hr/>\n\n<img src=\"images/continuum_analytics_logo.png\" \n     alt=\"Continuum Logo\",\n     align=\"right\",\n     width=\"30%\">,\nBlaze\nBlaze translates a subset of numpy/pandas syntax into database queries. It hides away the database.\nOn simple datasets, like CSV files, Blaze acts like Pandas with slightly different syntax.
In this case Blaze is just using Pandas.\n<hr/>\n\nPandas example", "import pandas as pd\n\ndf = pd.read_csv('iris.csv')\ndf.head(5)\n\ndf.Species.unique()\n\ndf.Species.drop_duplicates()", "<hr/>\n\nBlaze example", "import blaze as bz\n\nd = bz.Data('iris.csv')\nd.head(5)\n\nd.Species.distinct()", "<hr/>\n\nForeign Data\nBlaze does different things under-the-hood on different kinds of data\n\nCSV files: Pandas DataFrames (or iterators of DataFrames)\nSQL tables: SQLAlchemy.\nMongo collections: PyMongo\n...\n\nSQL\nWe'll play with SQL a lot during this tutorial. Blaze translates your query to SQLAlchemy. SQLAlchemy then translates to the SQL dialect of your database, and your database then executes that query intelligently.\n\nBlaze $\\rightarrow$ SQLAlchemy $\\rightarrow$ SQL $\\rightarrow$ Database computation\n\nThis translation process lets analysts interact with a familiar interface while leveraging a potentially powerful database.\nTo keep things local we'll use SQLite, but this works with any database with a SQLAlchemy dialect. Examples in this section use the iris dataset. Exercises use the Lahman Baseball statistics database, year 2013.\nIf you have not downloaded this dataset you could do so here - https://github.com/jknecht/baseball-archive-sqlite/raw/master/lahman2013.sqlite. \n<hr/>", "!ls ", "Examples\nLet's dive into Blaze syntax. For simple queries it looks and feels similar to Pandas", "db = bz.Data('sqlite:///my.db')\n#db.iris\n#db.iris.head()\n\ndb.iris.Species.distinct()\n\ndb.iris[db.iris.Species == 'versicolor'][['Species', 'SepalLength']]", "<hr />\n\nWork happens on the database\nIf we were using pandas we would read the table into pandas, then use pandas' fast in-memory algorithms for computation.
Here we translate your query into SQL and then send that query to the database to do the work.\n\nPandas $\\leftarrow_\\textrm{data}$ SQL, then Pandas computes\nBlaze $\\rightarrow_\\textrm{query}$ SQL, then database computes\n\nIf we want to dive into the internal API we can inspect the query that Blaze transmits.\n<hr />", "# Inspect SQL query\nquery = db.iris[db.iris.Species == 'versicolor'][['Species', 'SepalLength']]\nprint(bz.compute(query))\n\nquery = bz.by(db.iris.Species, longest=db.iris.PetalLength.max(),\n                               shortest=db.iris.PetalLength.min())\nprint(bz.compute(query))\n\nodo(query, list)", "<hr />\n\nExercises\nNow we load the Lahman baseball database and perform similar queries", "# db = bz.Data('postgresql://postgres:postgres@ec2-54-159-160-163.compute-1.amazonaws.com')  # Use Postgres if you don't have the sqlite file\ndb = bz.Data('sqlite:///lahman2013.sqlite')\ndb.dshape\n\n# View the Salaries table\n\n\n# What are the distinct teamIDs in the Salaries table?\n\n\n# What is the minimum and maximum yearID in the Salaries table? \n\n\n# For the Oakland Athletics (teamID OAK), pick out the playerID, salary, and yearID columns\n\n\n# Sort that result by salary. \n# Use the ascending=False keyword argument to the sort function to find the highest paid players\n", "<hr />\n\nExample: Split-apply-combine\nIn Pandas we perform computations on a per-group basis with the groupby operator. In Blaze our syntax is slightly different, using instead the by function.", "import pandas as pd\niris = pd.read_csv('iris.csv')\niris.groupby('Species').PetalLength.min()\n\niris = bz.Data('sqlite:///my.db::iris')\nbz.by(iris.Species, largest=iris.PetalLength.max(), \n                    smallest=iris.PetalLength.min())\nprint(_)", "<hr/>\n\nStore Results\nBy default Blaze only shows us the first ten lines of a result. This provides a more interactive feel and stops us from accidentally crushing our system.
Sometimes we do want to compute all of the results and store them someplace.\nBlaze expressions are valid sources for odo. So we can store our results in any format.", "iris = bz.Data('sqlite:///my.db::iris')\nquery = bz.by(iris.Species, largest=iris.PetalLength.max(), # A lazily evaluated result\n smallest=iris.PetalLength.min()) \n\nodo(query, list) # A concrete result", "<hr/>\n\nExercise: Storage\nThe solution to the first split-apply-combine problem is below. Store that result in a list, a CSV file, and in a new SQL table in our database (use a uri like sqlite://... to specify the SQL table.)", "result = bz.by(db.Salaries.teamID, avg=db.Salaries.salary.mean(), \n max=db.Salaries.salary.max(), \n ratio=db.Salaries.salary.max() / db.Salaries.salary.min()\n ).sort('ratio', ascending=False)\n\nodo(result, list)[:10]" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
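The `bz.by(...)` split-apply-combine in the notebook above compiles down to a SQL GROUP BY with aggregates. As a sketch of the kind of SQL that a query like `bz.by(iris.Species, largest=..., smallest=...)` corresponds to, here it is run directly with stdlib `sqlite3` on a few invented iris rows (the exact SQL Blaze emits may differ):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE iris (Species TEXT, PetalLength REAL)")
conn.executemany(
    "INSERT INTO iris VALUES (?, ?)",
    [("setosa", 1.4), ("setosa", 1.0), ("virginica", 6.0), ("virginica", 5.1)],
)

# One output row per species, with max/min petal length per group.
rows = conn.execute("""
    SELECT Species,
           MAX(PetalLength) AS largest,
           MIN(PetalLength) AS smallest
    FROM iris
    GROUP BY Species
    ORDER BY Species
""").fetchall()
print(rows)  # [('setosa', 1.4, 1.0), ('virginica', 6.0, 5.1)]
```

This is the same shape of result that `odo(query, list)` produced from the Blaze expression.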
tensorflow/workshops
extras/tensorflow_lattice/01_lattice_estimator_basics.ipynb
apache-2.0
[ "TensorFlow Lattice estimators\nIn this tutorial, we will cover basics of TensorFlow Lattice estimators.", "# import libraries\n!pip install tensorflow_lattice\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport tensorflow as tf\nimport tensorflow_lattice as tfl\nimport tempfile\nfrom six.moves import urllib", "Synthetic dataset\nHere we create a synthetic dataset.", "%matplotlib inline\n\n# Training dataset contains one feature, \"distance\".\ntrain_features = {\n    'distance': np.array([1.0, 1.3, 1.5, 2.0, 2.1, 3.0,\n                          4.0, 5.0, 1.3, 1.7, 2.5, 2.8,\n                          4.7, 4.2, 3.5, 4.75, 5.2,\n                          5.8, 5.9]) * 0.1, \n}\ntrain_labels = np.array([4.8, 4.9, 5.0, 5.0,\n                         4.8, 3.3, 2.5, 2.0,\n                         4.7, 4.6, 4.0, 3.2,\n                         2.12, 2.1, 2.5, 2.2,\n                         2.3, 2.34, 2.6])\nplt.scatter(train_features['distance'], train_labels)\nplt.xlabel('distance')\nplt.ylabel('user happiness')\n\n# This function draws two plots.\n# Firstly, we draw the scatter plot of `distance` vs. `label`.\n# Secondly, we generate predictions from `estimator` for distances in\n# [xmin, xmax]. \ndef Plot(distance, label, estimator, xmin=0.0, xmax=10.0):\n    %matplotlib inline\n    test_features = {\n        'distance': np.linspace(xmin, xmax, num=100)\n    }\n    # Estimator accepts an input in the form of input_fn (callable).\n    # numpy_input_fn creates an input function that generates a dictionary where\n    # the key is a feature name ('distance'), and the value is a tensor with\n    # a shape [batch_size, 1].\n    test_input_fn = tf.estimator.inputs.numpy_input_fn(\n        x=test_features,\n        batch_size=1,\n        num_epochs=1,\n        shuffle=False)\n    # Estimator's prediction is a 1d tensor with a shape [batch_size].
Since we\n    # set batch_size == 1 in the above, p['predictions'] will contain only one\n    # element in each batch, and we fetch this value by p['predictions'][0].\n    predictions = [p['predictions'][0]\n                   for p in estimator.predict(input_fn=test_input_fn)]\n    \n    # Plot estimator's response and (distance, label) scatter plot.\n    fig, ax = plt.subplots(1, 1)\n    ax.plot(test_features['distance'], predictions)\n    ax.scatter(distance, label)\n    plt.xlabel('distance')\n    plt.ylabel('user happiness')\n    plt.legend(['prediction', 'data'])", "DNN Estimator\nNow let us define feature columns and use DNN regressor to fit a model.", "# Specify feature.\nfeature_columns = [\n    tf.feature_column.numeric_column('distance'),\n]\n# Define a neural network regressor.\n# The first hidden layer contains 30 hidden units, and the second\n# hidden layer contains 10 hidden units.\ndnn_estimator = tf.estimator.DNNRegressor(\n    feature_columns=feature_columns,\n    hidden_units=[30, 10],\n    optimizer=tf.train.GradientDescentOptimizer(\n        learning_rate=0.01,\n    ),\n)\n\n# Define training input function.\n# mini-batch size is 10, and we iterate the dataset over\n# 1000 times.\ntrain_input_fn = tf.estimator.inputs.numpy_input_fn(\n    x=train_features,\n    y=train_labels,\n    batch_size=10,\n    num_epochs=1000,\n    shuffle=False)\n\ntf.logging.set_verbosity(tf.logging.ERROR)\n# Train this estimator\ndnn_estimator.train(input_fn=train_input_fn)\n\n# Response in [0.0, 1.0] range\nPlot(train_features['distance'], train_labels, dnn_estimator, 0.0, 1.0)\n\n# Now let's increase the prediction range to [0.0, 3.0]\n# Note) On most machines, the prediction goes up.\n# However, DNN training does not have a unique solution, so it's possible\n# not to see this phenomenon.\nPlot(train_features['distance'], train_labels, dnn_estimator, 0.0, 3.0)", "TensorFlow Lattice calibrated linear model\nLet's use a calibrated linear model to fit the data.\nSince we only have one feature, there's no reason to use a lattice.", "# TensorFlow Lattice
needs feature names to specify\n# per-feature parameters.\nfeature_names = [fc.name for fc in feature_columns]\nnum_keypoints = 5\n\nhparams = tfl.CalibratedLinearHParams(\n    feature_names=feature_names,\n    learning_rate=0.1,\n    num_keypoints=num_keypoints)\n\n# input keypoint initializers.\n# init_fns are dict of (feature_name, callable initializer).\nkeypoints_init_fns = {\n    'distance': lambda: tfl.uniform_keypoints_for_signal(num_keypoints,\n                                                         input_min=0.0,\n                                                         input_max=0.7,\n                                                         output_min=-1.0,\n                                                         output_max=1.0)}\n\nnon_monotonic_estimator = tfl.calibrated_linear_regressor(\n    feature_columns=feature_columns,\n    keypoints_initializers_fn=keypoints_init_fns,\n    hparams=hparams)\n\nnon_monotonic_estimator.train(input_fn=train_input_fn)\n\n# The prediction goes up!\nPlot(train_features['distance'], train_labels, non_monotonic_estimator, 0.0, 1.0)", "# Declare distance as a decreasing monotonic input.\nhparams.set_feature_param('distance', 'monotonicity', -1)\nmonotonic_estimator = tfl.calibrated_linear_regressor(\n    feature_columns=feature_columns,\n    keypoints_initializers_fn=keypoints_init_fns,\n    hparams=hparams)\n\nmonotonic_estimator.train(input_fn=train_input_fn)\n\n# Now it's decreasing.\nPlot(train_features['distance'], train_labels, monotonic_estimator, 0.0, 1.0)\n\n# Even if the output range becomes larger, the prediction never goes up!\nPlot(train_features['distance'], train_labels, monotonic_estimator, 0.0, 3.0)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
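An aside on the notebook row above: the monotonic keypoint calibration it demonstrates can be sketched framework-free. This is a toy piecewise-linear calibrator, not the TensorFlow Lattice implementation — `uniform_keypoints` mimics the spirit of `tfl.uniform_keypoints_for_signal`, and the hand-picked decreasing `out_kps` values stand in for trained outputs under the `monotonicity = -1` constraint:

```python
def uniform_keypoints(input_min, input_max, num_keypoints):
    """Evenly spaced input keypoints over [input_min, input_max]."""
    step = (input_max - input_min) / (num_keypoints - 1)
    return [input_min + i * step for i in range(num_keypoints)]

def calibrate(x, in_kps, out_kps):
    """Piecewise-linear interpolation of x through (in_kps, out_kps) pairs,
    clamped to the first/last output outside the keypoint range."""
    if x <= in_kps[0]:
        return out_kps[0]
    if x >= in_kps[-1]:
        return out_kps[-1]
    for i in range(len(in_kps) - 1):
        if x <= in_kps[i + 1]:
            t = (x - in_kps[i]) / (in_kps[i + 1] - in_kps[i])
            return out_kps[i] + t * (out_kps[i + 1] - out_kps[i])

in_kps = uniform_keypoints(0.0, 0.7, 5)  # [0.0, 0.175, 0.35, 0.525, 0.7]
out_kps = [1.0, 0.6, 0.3, 0.1, 0.0]      # non-increasing => decreasing response
```

Because the output keypoints are non-increasing and out-of-range inputs are clamped, the calibrated prediction can never go up as distance grows — the guarantee the monotonic estimator provides even outside the training range.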
palrogg/foundations-homework
Data_and_databases/Homework_3_Paul_Ronga.ipynb
mit
[ "Homework assignment #3\nThese problem sets focus on using the Beautiful Soup library to scrape web pages.\nProblem Set #1: Basic scraping\nI've made a web page for you to scrape. It's available here. The page concerns the catalog of a famous widget company. You'll be answering several questions about this web page. In the cell below, I've written some code so that you end up with a variable called html_str that contains the HTML source code of the page, and a variable document that stores a Beautiful Soup object.", "!pip3 install bs4\nfrom bs4 import BeautifulSoup\nfrom urllib.request import urlopen\nhtml_str = urlopen(\"http://static.decontextualize.com/widgets2016.html\").read()\ndocument = BeautifulSoup(html_str, \"html.parser\")", "Now, in the cell below, use Beautiful Soup to write an expression that evaluates to the number of &lt;h3&gt; tags contained in widgets2016.html.", "h3_tags = document.find_all('h3')\nprint(\"There are\", len(h3_tags), \"“h3” tags in widgets2016.html.\")", "Now, in the cell below, write an expression or series of statements that displays the telephone number beneath the \"Widget Catalog\" header.", "tel = document.find('a', {'class': 'tel'})\nprint(\"The telephone number is\", tel.string)", "In the cell below, use Beautiful Soup to write some code that prints the names of all the widgets on the page. After your code has executed, widget_names should evaluate to a list that looks like this (though not necessarily in this order):\nSkinner Widget\nWidget For Furtiveness\nWidget For Strawman\nJittery Widget\nSilver Widget\nDivided Widget\nManicurist Widget\nInfinite Widget\nYellow-Tipped Widget\nUnshakable Widget\nSelf-Knowledge Widget\nWidget For Cinema", "widget_names = document.find_all('td', {'class': 'wname'})\nfor name in widget_names:\n print(name.string)", "Problem set #2: Widget dictionaries\nFor this problem set, we'll continue to use the HTML page from the previous problem set. 
In the cell below, I've made an empty list and assigned it to a variable called widgets. Write code that populates this list with dictionaries, one dictionary per widget in the source file. The keys of each dictionary should be partno, wname, price, and quantity, and the value for each of the keys should be the value for the corresponding column for each row. After executing the cell, your list should look something like this:\n[{'partno': 'C1-9476',\n 'price': '$2.70',\n 'quantity': u'512',\n 'wname': 'Skinner Widget'},\n {'partno': 'JDJ-32/V',\n 'price': '$9.36',\n 'quantity': '967',\n 'wname': u'Widget For Furtiveness'},\n ...several items omitted...\n {'partno': '5B-941/F',\n 'price': '$13.26',\n 'quantity': '919',\n 'wname': 'Widget For Cinema'}]\nAnd this expression:\nwidgets[5]['partno']\n\n... should evaluate to:\nLH-74/O", "widgets = []\n\n# your code here\n\nwidget_infos = document.find_all('tr', {'class': 'winfo'})\nfor info in widget_infos:\n partno = info.find('td', {'class': 'partno'})\n price = info.find('td', {'class': 'price'})\n quantity = info.find('td', {'class': 'quantity'})\n wname = info.find('td', {'class': 'wname'})\n\n widgets.append({'partno': partno.string, 'price': price.string, 'quantity': quantity.string, 'wname': wname.string})\n\n# end your code\n\nwidgets", "In the cell below, duplicate your code from the previous question. Modify the code to ensure that the values for price and quantity in each dictionary are floating-point numbers and integers, respectively. I.e., after executing the cell, your code should display something like this:\n[{'partno': 'C1-9476',\n 'price': 2.7,\n 'quantity': 512,\n 'widgetname': 'Skinner Widget'},\n {'partno': 'JDJ-32/V',\n 'price': 9.36,\n 'quantity': 967,\n 'widgetname': 'Widget For Furtiveness'},\n ... some items omitted ...\n {'partno': '5B-941/F',\n 'price': 13.26,\n 'quantity': 919,\n 'widgetname': 'Widget For Cinema'}]\n\n(Hint: Use the float() and int() functions. 
You may need to use string slices to convert the price field to a floating-point number.)", "widgets = []\n\n# your code here\n\nwidget_infos = document.find_all('tr', {'class': 'winfo'})\nfor info in widget_infos:\n partno = info.find('td', {'class': 'partno'})\n price = info.find('td', {'class': 'price'})\n quantity = info.find('td', {'class': 'quantity'})\n wname = info.find('td', {'class': 'wname'})\n\n widgets.append({'partno': partno.string, 'price': float(price.string[1:]), 'quantity': int(quantity.string), 'wname': wname.string})\n\n# end your code\n\nwidgets", "Great! I hope you're having fun. In the cell below, write an expression or series of statements that uses the widgets list created in the cell above to calculate the total number of widgets that the factory has in its warehouse.\nExpected output: 7928", "total_nb_widgets = 0\nfor widget in widgets:\n total_nb_widgets += widget['quantity']\nprint(total_nb_widgets)", "In the cell below, write some Python code that prints the names of widgets whose price is above $9.30.\nExpected output:\nWidget For Furtiveness\nJittery Widget\nSilver Widget\nInfinite Widget\nWidget For Cinema", "for widget in widgets:\n if widget['price'] > 9.30:\n print(widget['wname'])", "Problem set #3: Sibling rivalries\nIn the following problem set, you will yet again be working with the data in widgets2016.html. In order to accomplish the tasks in this problem set, you'll need to learn about Beautiful Soup's .find_next_sibling() method. Here's some information about that method, cribbed from the notes:\nOften, the tags we're looking for don't have a distinguishing characteristic, like a class attribute, that allows us to find them using .find() and .find_all(), and the tags also aren't in a parent-child relationship. This can be tricky! 
For example, take the following HTML snippet, (which I've assigned to a string called example_html):", "example_html = \"\"\"\n<h2>Camembert</h2>\n<p>A soft cheese made in the Camembert region of France.</p>\n\n<h2>Cheddar</h2>\n<p>A yellow cheese made in the Cheddar region of... France, probably, idk whatevs.</p>\n\"\"\"\n\n", "If our task was to create a dictionary that maps the name of the cheese to the description that follows in the &lt;p&gt; tag directly afterward, we'd be out of luck. Fortunately, Beautiful Soup has a .find_next_sibling() method, which allows us to search for the next tag that is a sibling of the tag you're calling it on (i.e., the two tags share a parent), that also matches particular criteria. So, for example, to accomplish the task outlined above:", "example_doc = BeautifulSoup(example_html, \"html.parser\")\ncheese_dict = {}\nfor h2_tag in example_doc.find_all('h2'):\n cheese_name = h2_tag.string\n cheese_desc_tag = h2_tag.find_next_sibling('p')\n cheese_dict[cheese_name] = cheese_desc_tag.string\n\ncheese_dict", "With that knowledge in mind, let's go back to our widgets. In the cell below, write code that uses Beautiful Soup, and in particular the .find_next_sibling() method, to print the part numbers of the widgets that are in the table just beneath the header \"Hallowed Widgets.\"\nExpected output:\nMZ-556/B\nQV-730\nT1-9731\n5B-941/F", "hallowed_header = document.find('h3', text='Hallowed widgets')\nsibling_table = hallowed_header.find_next_sibling()\nfor part in sibling_table.find_all('td', {'class': 'partno'}):\n print(part.string)", "Okay, now, the final task. If you can accomplish this, you are truly an expert web scraper. I'll have little web scraper certificates made up and I'll give you one, if you manage to do this thing. And I know you can do it!\nIn the cell below, I've created a variable category_counts and assigned to it an empty dictionary. 
Write code to populate this dictionary so that its keys are \"categories\" of widgets (e.g., the contents of the &lt;h3&gt; tags on the page: \"Forensic Widgets\", \"Mood widgets\", \"Hallowed Widgets\") and the value for each key is the number of widgets that occur in that category. I.e., after your code has been executed, the dictionary category_counts should look like this:\n{'Forensic Widgets': 3,\n 'Hallowed widgets': 4,\n 'Mood widgets': 2,\n 'Wondrous widgets': 3}", "category_counts = {}\n# your code here\n\ncategories = document.find_all('h3')\nfor category in categories:\n table = category.find_next_sibling('table')\n widgets = table.select('td.wname')\n category_counts[category.string] = len(widgets)\n\n# end your code\ncategory_counts", "Congratulations! You're done." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
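A side note on the cleanup step in the homework row above: converting `'$2.70'`-style prices and string quantities is the whole trick. A self-contained sketch with two made-up rows (mirroring the homework's expected output, not scraped from the live page):

```python
# Toy rows shaped like the homework's scraped dictionaries (illustrative data).
raw_widgets = [
    {"partno": "C1-9476", "price": "$2.70", "quantity": "512", "wname": "Skinner Widget"},
    {"partno": "JDJ-32/V", "price": "$9.36", "quantity": "967", "wname": "Widget For Furtiveness"},
]

def clean(widget):
    """Strip the leading '$' from price and cast the numeric fields."""
    out = dict(widget)
    out["price"] = float(out["price"][1:])   # "$2.70" -> 2.7
    out["quantity"] = int(out["quantity"])   # "512" -> 512
    return out

widgets = [clean(w) for w in raw_widgets]
total_quantity = sum(w["quantity"] for w in widgets)           # 1479
expensive = [w["wname"] for w in widgets if w["price"] > 9.30]
```

The same slice-then-cast pattern scales to the full table once the dictionaries are built from the scraped `td` cells.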
materialsvirtuallab/matgenb
notebooks/2018-11-6-Dopant suggestions using Pymatgen.ipynb
bsd-3-clause
[ "Introduction\nThis notebook demonstrates how to predict likely n- and p-type dopant atoms using pymatgen. This example uses the Materials API to download the structure of interest but any Structure object can be used. Two methods for choosing dopants are demonstrated. The first uses a simple Shannon radii comparison, whereas the second is based on the substitution probability of two atoms calculated using the SubstitutionPredictor utility in pymatgen. This code requires knowledge of the oxidation state of all elements in the structure. These can be guessed using pymatgen but should be checked to ensure the validity of the results.\nWritten using:\n- pymatgen==2018.11.Y\nAuthor: Alex Ganose (10/06/18)", "# Imports we need for generating dopant suggestions\n\nfrom pymatgen.analysis.structure_prediction.dopant_predictor import \\\n get_dopants_from_shannon_radii, get_dopants_from_substitution_probabilities\nfrom pymatgen.analysis.local_env import CrystalNN\n\nfrom pymatgen import MPRester\nfrom pprint import pprint\n\n# Establish rester for accessing Materials API\n\napi_key = None # INSERT YOUR OWN API KEY\n\nmpr = MPRester(api_key=api_key) ", "Here we define a variable -- num_dopants for how many dopants you wish to explore.", "num_dopants = 5 # number of highest probability dopants you wish to see ", "Download a structure and add oxidation states\nIn this section, we use the Materials API to download a structure and add information on the oxidation states of the atoms.", "mp_id = 'mp-856' # Materials Project id for rutile SnO2\n\nstructure = mpr.get_structure_by_material_id(mp_id)", "The downloaded structure does not contain oxidation state information. There are two ways to add this information. The first is to specify the oxidation state of the elements manually.", "structure.add_oxidation_state_by_element({\"Sn\": 4, \"O\": -2})", "Alternatively, we can use pymatgen to guess the oxidation states. 
If using this method you should check that the oxidation states are what you expect.", "structure.add_oxidation_state_by_guess()", "Let's check what oxidation states pymatgen guessed.", "species = structure.composition.elements\n\nprint(species)", "Finding dopants by Shannon radii\nIn this section, we use the known Shannon radii to predict likely dopants. We will prefer dopants which have the smallest difference in radius to the host atoms. As the Shannon radii depend on the coordination number of the site, we must first calculate the bonding in the structure. In this example, we do this using the CrystalNN class.", "cnn = CrystalNN()\nbonded_structure = cnn.get_bonded_structure(structure)", "Pymatgen has a function to take a bonded structure with oxidation states and report the closest n- and p-type dopants, sorted by the difference in Shannon radii. Let's run this on our bonded structure:", "dopants = get_dopants_from_shannon_radii(bonded_structure, num_dopants=num_dopants)\n\npprint(dopants)", "The most favoured n-type dopant is U on a Sn site. Unfortunately, this is not a sustainable or safe choice of dopant. The most common industrial n-type dopant for SnO2 is fluorine. While F is present in our list of suggested dopants, it is found way down at suggestion number 4.\nAnother limitation of the Shannon radii approach to choosing dopants is that the radii depend on both the coordination number and charge state. For many elements, the radii for many charge state/coordination number combinations have not been tabulated, meaning this approach is incomplete.\nInstead, we should use a more robust approach to determine possible dopants. \nFinding dopants by substitution probability\nIn this section, we use the statistics provided by SubstitutionPredictor to predict likely dopant substitutions using a data-mined approach from ICSD data. Based on the species in the structure, we get a list of which species are likely to substitute in but have different charge states. 
The substitution prediction methodology is presented in: \nHautier, G., Fischer, C., Ehrlacher, V., Jain, A., and Ceder, G. (2011) Data Mined Ionic Substitutions for the Discovery of New Compounds. Inorganic Chemistry, 50(2), 656-663. doi:10.1021/ic102031h\nHere, we define a variable -- threshold for the threshold probability in making substitution/structure predictions.", "threshold = 0.001 # probability threshold for substitution/structure predictions", "Pymatgen provides a function to filter the predicted substitutions by their charge states and return a list of n- and p-type dopants. Let's run the function on the structure we downloaded earlier:", "dopants = get_dopants_from_substitution_probabilities(\n structure, num_dopants=num_dopants, threshold=threshold)\n\npprint(dopants)", "The function returns a list of potential dopants sorted by their substitution probability. The most likely n-type dopant is F on a O site. Fluorine doped SnO2 (FTO) is one of the most widely used transparent conducting oxides, therefore validating this approach." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
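The Shannon-radii ranking used in the row above reduces to sorting candidate species by radius difference to the host site. A toy version of that sort — the radii below are illustrative values in Å for one fixed charge/coordination, not data pulled from pymatgen:

```python
host_radius = 0.69  # e.g. a Sn4+ site (illustrative value)
candidates = {"Nb5+": 0.64, "Ti4+": 0.605, "Sb5+": 0.60, "Ge4+": 0.53, "F-": 1.33}

def rank_by_radius(host_radius, cands):
    """Sort candidate dopant species by |radius - host_radius|, smallest first."""
    return sorted(cands, key=lambda sp: abs(cands[sp] - host_radius))

ranking = rank_by_radius(host_radius, candidates)
```

As in the notebook, a good radius match need not be the best practical dopant — F- ranks last by this metric despite being the standard n-type dopant for SnO2, which is exactly the limitation the substitution-probability approach addresses.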
quoniammm/mine-tensorflow-examples
fastAI/deeplearning2/DCGAN.ipynb
mit
[ "Generative Adversarial Networks in Keras", "%matplotlib inline\nimport importlib\nimport utils2; importlib.reload(utils2)\nfrom utils2 import *\n\nfrom tqdm import tqdm", "The original GAN!\nSee this paper for details of the approach we'll try first for our first GAN. We'll see if we can generate hand-drawn numbers based on MNIST, so let's load that dataset first.\nWe'll be referring to the discriminator as 'D' and the generator as 'G'.", "from keras.datasets import mnist\n(X_train, y_train), (X_test, y_test) = mnist.load_data()\nX_train.shape\n\nn = len(X_train)\n\nX_train = X_train.reshape(n, -1).astype(np.float32)\nX_test = X_test.reshape(len(X_test), -1).astype(np.float32)\n\nX_train /= 255.; X_test /= 255.", "Train\nThis is just a helper to plot a bunch of generated images.", "def plot_gen(G, n_ex=16):\n plot_multi(G.predict(noise(n_ex)).reshape(n_ex, 28,28), cmap='gray')", "Create some random data for the generator.", "def noise(bs): return np.random.rand(bs,100)", "Create a batch of some real and some generated data, with appropriate labels, for the discriminator.", "def data_D(sz, G):\n real_img = X_train[np.random.randint(0,n,size=sz)]\n X = np.concatenate((real_img, G.predict(noise(sz))))\n return X, [0]*sz + [1]*sz\n\ndef make_trainable(net, val):\n net.trainable = val\n for l in net.layers: l.trainable = val", "Train a few epochs, and return the losses for D and G. 
In each epoch we:\n\nTrain D on one batch from data_D()\nTrain G to create images that the discriminator predicts as real.", "def train(D, G, m, nb_epoch=5000, bs=128):\n dl,gl=[],[]\n for e in tqdm(range(nb_epoch)):\n X,y = data_D(bs//2, G)\n dl.append(D.train_on_batch(X,y))\n make_trainable(D, False)\n gl.append(m.train_on_batch(noise(bs), np.zeros([bs])))\n make_trainable(D, True)\n return dl,gl", "MLP GAN\nWe'll keep things simple by making D & G plain ole' MLPs.", "MLP_G = Sequential([\n Dense(200, input_shape=(100,), activation='relu'),\n Dense(400, activation='relu'),\n Dense(784, activation='sigmoid'),\n])\n\nMLP_D = Sequential([\n Dense(300, input_shape=(784,), activation='relu'),\n Dense(300, activation='relu'),\n Dense(1, activation='sigmoid'),\n])\nMLP_D.compile(Adam(1e-4), \"binary_crossentropy\")\n\nMLP_m = Sequential([MLP_G,MLP_D])\nMLP_m.compile(Adam(1e-4), \"binary_crossentropy\")\n\ndl,gl = train(MLP_D, MLP_G, MLP_m, 8000)", "The loss plots for most GANs are nearly impossible to interpret - which is one of the things that make them hard to train.", "plt.plot(dl[100:])\n\nplt.plot(gl[100:])", "This is what's known in the literature as \"mode collapse\".", "plot_gen(MLP_G)", "OK, so that didn't work. Can we do better?...\nDCGAN\nThere's lots of ideas out there to make GANs train better, since they are notoriously painful to get working. The paper introducing DCGANs is the main basis for our next section. And see https://github.com/soumith/ganhacks for many tips!\nBecause we're using a CNN from now on, we'll reshape our digits into proper images.", "X_train = X_train.reshape(n, 28, 28, 1)\nX_test = X_test.reshape(len(X_test), 28, 28, 1)", "Our generator uses a number of upsampling steps as suggested in the above papers. 
We use nearest neighbor upsampling rather than fractionally strided convolutions, as discussed in our style transfer notebook.", "CNN_G = Sequential([\n Dense(512*7*7, input_dim=100, activation=LeakyReLU()),\n BatchNormalization(mode=2),\n Reshape((7, 7, 512)),\n UpSampling2D(),\n Convolution2D(64, 3, 3, border_mode='same', activation=LeakyReLU()),\n BatchNormalization(mode=2),\n UpSampling2D(),\n Convolution2D(32, 3, 3, border_mode='same', activation=LeakyReLU()),\n BatchNormalization(mode=2),\n Convolution2D(1, 1, 1, border_mode='same', activation='sigmoid')\n])", "The discriminator uses a few downsampling steps through strided convolutions.", "CNN_D = Sequential([\n Convolution2D(256, 5, 5, subsample=(2,2), border_mode='same', \n input_shape=(28, 28, 1), activation=LeakyReLU()),\n Convolution2D(512, 5, 5, subsample=(2,2), border_mode='same', activation=LeakyReLU()),\n Flatten(),\n Dense(256, activation=LeakyReLU()),\n Dense(1, activation = 'sigmoid')\n])\n\nCNN_D.compile(Adam(1e-3), \"binary_crossentropy\")", "We train D a \"little bit\" so it can at least tell a real image from random noise.", "sz = n//200\nx1 = np.concatenate([np.random.permutation(X_train)[:sz], CNN_G.predict(noise(sz))])\nCNN_D.fit(x1, [0]*sz + [1]*sz, batch_size=128, nb_epoch=1, verbose=2)\n\nCNN_m = Sequential([CNN_G, CNN_D])\nCNN_m.compile(Adam(1e-4), \"binary_crossentropy\")\n\nK.set_value(CNN_D.optimizer.lr, 1e-3)\nK.set_value(CNN_m.optimizer.lr, 1e-3)", "Now we can train D & G iteratively.", "dl,gl = train(CNN_D, CNN_G, CNN_m, 2500)\n\nplt.plot(dl[10:])\n\nplt.plot(gl[10:])", "Better than our first effort, but still a lot to be desired:...", "plot_gen(CNN_G)", "End" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
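A note on the labeling convention in the GAN row above: `data_D` marks real images 0 and generated images 1, and G is then trained against zeros (the "real" label) while D is frozen. A framework-free sketch of that batch construction, with integers standing in for images:

```python
import random

def data_batch_for_D(sz, real_pool, generate):
    """Half real samples (label 0) + half generated samples (label 1),
    mirroring the notebook's data_D()."""
    real = random.sample(real_pool, sz)
    fake = [generate() for _ in range(sz)]
    return real + fake, [0] * sz + [1] * sz

random.seed(0)
real_pool = list(range(100))                        # stand-in for X_train rows
X, y = data_batch_for_D(4, real_pool, lambda: -1)   # fake "images" are all -1
```

Training G toward label 0 while D's weights are frozen is what pushes the generator toward samples D cannot distinguish from real data.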
ucsd-ccbb/visJS_2_jupyter
notebooks/url_image_example/url_image_network.ipynb
mit
[ "Using a URL image in visJS2jupyter\n\nAuthors: Brin Rosenthal (sbrosenthal@ucsd.edu), Mikayla Webster (m1webste@ucsd.edu), Julia Len (jlen@ucsd.edu)\n\nImport packages", "import matplotlib as mpl\nimport networkx as nx\nimport visJS2jupyter.visJS_module as visJS_module\nreload(visJS_module)\n\n# create a simple graph\nG = nx.connected_watts_strogatz_graph(30,5,.2)\nnodes = list(G.nodes()) # type cast to list in order to make compatible with networkx 1.11 and 2.0\nedges = list(G.edges()) # for nx 2.0, returns an \"EdgeView\" object rather than an iterable", "Map node attributes to visual properties, and style the nodes and edges\n\nTo map node attributes to properties, simply add the property to the graph as a node-attribute, and use the return_node_to_color function", "degree = dict(G.degree())\nnx.set_node_attributes(G, name = 'degree', values = degree) ", "<a id='interactive_network'></a>\nInteractive network", "# set node initial positions using networkx's spring_layout function\npos = nx.spring_layout(G)\n\n# set per-node attributes\nnodes_dict = [{\"id\":n,\n \"degree\":nx.degree(G,n),\n \"node_shape\": 'image', # must set node shape to \"image\"\n \"x\":pos[n][0]*700,\n \"y\":pos[n][1]*700} for n in nodes\n ]\n\n# map to indices for source/target in edges\nnode_map = dict(zip(nodes,range(len(nodes))))\n\n# set per-edge attributes\nedges_dict = [{\"source\":node_map[edges[i][0]], \"target\":node_map[edges[i][1]], \n \"color\":\"gray\"} for i in range(len(edges))]\n\n# url image to use as node shape\nurl = 'https://cdn0.iconfinder.com/data/icons/kids-paint/512/hedgehog-512.png'\n\n# set network-wide style parameters\nvisJS_module.visjs_network(nodes_dict,edges_dict,\n node_size_multiplier=10,\n node_size_transform = '',\n node_font_size=25,\n edge_arrow_to=True,\n physics_enabled=True,\n edge_color_highlight='#8A324E',\n edge_color_hover='#8BADD3',\n edge_width=3,\n max_velocity=15,\n node_image = url, # specify url here\n min_velocity=1)\n\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
sr320/sr320.github.io
jupyter/Cgigas/Lotterhos BS samples.ipynb
mit
[ "Analysis of two oyster samples where Lotterhos did methylRAD\nThe M2 and M3 samples are here:\nhttp://owl.fish.washington.edu/nightingales/C_gigas/9_GATCAG_L001_R1_001.fastq.gz\nhttp://owl.fish.washington.edu/nightingales/C_gigas/10_TAGCTT_L001_R1_001.fastq.gz", "bsmaploc=\"/Applications/bioinfo/BSMAP/bsmap-2.74/\"\n", "Genome version", "!curl \\\nftp://ftp.ensemblgenomes.org/pub/release-32/metazoa/fasta/crassostrea_gigas/dna/Crassostrea_gigas.GCA_000297895.1.dna_sm.toplevel.fa.gz \\\n> /Volumes/caviar/wd/data/Crassostrea_gigas.GCAz_000297895.1.dna_sm.toplevel.fa.gz \n\n!curl ftp://ftp.ensemblgenomes.org/pub/release-32/metazoa/fasta/crassostrea_gigas/dna/CHECKSUMS \n\n!ls /Volumes/caviar/wd/data/\n\n!md5 /Volumes/caviar/wd/data/Crassostrea_gigas.GCAz_000297895.1.dna_sm.toplevel.fa.gz\n\ncd /Volumes/caviar/wd/\n\nmkdir $(date +%F)\n\nls\n\nls /Volumes/web/nightingales/C\n\n!curl \\\nhttp://owl.fish.washington.edu/nightingales/C_gigas/9_GATCAG_L001_R1_001.fastq.gz \\\n> /Volumes/caviar/wd/2016-10-11/9_GATCAG_L001_R1_001.fastq.gz\n\n!curl \\\nhttp://owl.fish.washington.edu/nightingales/C_gigas/10_TAGCTT_L001_R1_001.fastq.gz \\\n> /Volumes/caviar/wd/2016-10-11/10_TAGCTT_L001_R1_001.fastq.gz\n\ncd 2016-10-11/\n\n!cp 9_GATCAG_L001_R1_001.fastq.gz M2.fastq.gz\n\n!cp 10_TAGCTT_L001_R1_001.fastq.gz M3.fastq.gz\n\nfor i in (\"M2\",\"M3\"):\n !{bsmaploc}bsmap \\\n-a {i}.fastq.gz \\\n-d ../data/Crassostrea_gigas.GCAz_000297895.1.dna_sm.toplevel.fa \\\n-o bsmap_out_{i}.sam \\\n-p 6\n\nfor i in (\"M2\",\"M3\"):\n !python {bsmaploc}methratio.py \\\n-d ../data/Crassostrea_gigas.GCAz_000297895.1.dna_sm.toplevel.fa \\\n-u -z -g \\\n-o methratio_out_{i}.txt \\\n-s {bsmaploc}samtools \\\nbsmap_out_{i}.sam \\\n\n!head /Volumes/caviar/wd/2016-10-11/methratio_out_M2.txt\n\n!curl https://raw.githubusercontent.com/che625/olson-ms-nb/master/scripts/mr3x.awk \\\n> /Users/sr320/git-repos/sr320.github.io/jupyter/scripts/mr3x.awk\n\n!curl 
https://raw.githubusercontent.com/che625/olson-ms-nb/master/scripts/mr_gg.awk.sh \\\n> /Users/sr320/git-repos/sr320.github.io/jupyter/scripts/mr_gg.awk.sh\n\n#first methratio files are converted to filter for CG context, 3x coverage (mr3x.awk), and reformatting (mr_gg.awk.sh).\n#due to issue passing variable to awk, simple scripts were used (included in repository)\nfor i in (\"M2\",\"M3\"):\n !echo {i}\n !grep \"[A-Z][A-Z]CG[A-Z]\" <methratio_out_{i}.txt> methratio_out_{i}CG.txt\n !awk -f /Users/sr320/git-repos/sr320.github.io/jupyter/scripts/mr3x.awk methratio_out_{i}CG.txt \\\n > mr3x.{i}.txt\n !awk -f /Users/sr320/git-repos/sr320.github.io/jupyter/scripts/mr_gg.awk.sh \\\n mr3x.{i}.txt > mkfmt_{i}.txt\n\n#first methratio files are converted to filter for CG context, 3x coverage (mr3x.awk), and reformatting (mr_gg.awk.sh).\n#due to issue passing variable to awk, simple scripts were used (included in repository)\nfor i in (\"M2\",\"M3\"):\n !echo {i}\n !grep -i \"[A-Z][A-Z]CG[A-Z]\" <methratio_out_{i}.txt> methratio_out_{i}CGi.txt\n !awk -f /Users/sr320/git-repos/sr320.github.io/jupyter/scripts/mr3x.awk methratio_out_{i}CGi.txt \\\n > mr3xi.{i}.txt\n !awk -f /Users/sr320/git-repos/sr320.github.io/jupyter/scripts/mr_gg.awk.sh \\\n mr3xi.{i}.txt > mkfmti_{i}.txt\n\n#maybe we need to ignore case\n\n!md5 mkfmt_M2.txt mkfmti_M2.txt | head\n\n#nope\n\n!head -100 mkfmt_M2.txt", "Products", "cd /Users/sr320/git-repos/sr320.github.io/jupyter\n\nmkdir analyses\n\nmkdir analyses/$(date +%F)\n\nfor i in (\"M2\",\"M3\"):\n !cp /Volumes/caviar/wd/2016-10-11/mkfmt_{i}.txt analyses/$(date +%F)/mkfmt_{i}.txt\n\n!head analyses/$(date +%F)/*", "urls\n```\nhttps://raw.githubusercontent.com/sr320/sr320.github.io/master/jupyter/analyses/2016-10-11/mkfmt_M2.txt\nhttps://raw.githubusercontent.com/sr320/sr320.github.io/master/jupyter/analyses/2016-10-11/mkfmt_M3.txt\n```" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
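The grep/awk steps in the methylation row above keep CpG-context positions with at least 3x coverage. A rough Python equivalent of that filter — the tab-separated column layout here is an assumption for illustration, not the exact methratio.py output format:

```python
import re

CPG = re.compile(r"[A-Z][A-Z]CG[A-Z]", re.IGNORECASE)  # the case-insensitive grep pattern

def keep_line(line, min_cov=3):
    """Keep rows whose sequence context is CpG and whose coverage >= min_cov.

    Assumed columns (illustrative): chrom, pos, strand, context, ratio, coverage.
    """
    chrom, pos, strand, context, ratio, cov = line.split("\t")
    return bool(CPG.search(context)) and float(cov) >= min_cov

lines = [
    "scaffold1\t100\t+\tAACGT\t0.80\t5",   # CpG context, 5x -> keep
    "scaffold1\t200\t+\tAACTT\t0.10\t9",   # not CpG -> drop
    "scaffold1\t300\t+\tttcga\t0.50\t2",   # CpG (lowercase), only 2x -> drop
]
kept = [l for l in lines if keep_line(l)]
```

The lowercase third row shows why the notebook reruns the filter with `grep -i`: soft-masked genome regions emit lowercase context strings that the case-sensitive pattern would miss.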
AllenDowney/ThinkStats2
homeworks/homework03.ipynb
gpl-3.0
[ "Homework 3\nVisualizing relationships between variables\nAllen Downey\nMIT License", "%matplotlib inline\n\nimport pandas as pd\nimport numpy as np\n\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nsns.set(style='white')\n\nfrom utils import decorate\nfrom thinkstats2 import Pmf, Cdf\n\nimport thinkstats2\nimport thinkplot", "Loading", "%time brfss = pd.read_hdf('brfss.hdf5', 'brfss')\n\nbrfss.shape\n\nbrfss.head()\n\nbrfss.describe()", "Scatter plot\nScatter plots are a good way to visualize the relationship between two variables, but it is surprisingly hard to make a good one.\nHere's a simple plot of height and weight.", "height = brfss['HTM4']\nweight = brfss['WTKG3']\n\nplt.plot(height, weight, 'o')\n\nplt.xlabel('Height in cm')\nplt.ylabel('Weight in kg');", "The center of this plot is saturated, so it is not as dark as it should be, which means the rest of the plot is relatively darker than it should be. It gives too much visual weight to the outliers and obscures the shape of the relationship.\nExercise: Use keywords alpha and markersize to avoid saturation.", "# Solution goes here", "With transparency and smaller markers, you will be able to see that height and weight are discretized.\nExercise: Use np.random.normal to add enough noise to height and weight so the vertical lines in the scatter plot are blurred out. 
Create variables named height_jitter and weight_jitter.", "# Solution goes here", "Linear regression\nWe can use scipy.stats to find the linear least squares fit to weight as a function of height.", "from scipy.stats import linregress\n\nsubset = brfss.dropna(subset=['WTKG3', 'HTM4'])\nxs = subset['HTM4']\nys = subset['WTKG3']\n\nres = linregress(xs, ys)\nres", "The LinregressResult object contains the estimated parameters and a few other statistics.\nWe can use the estimated slope and intercept to plot the line of best fit.", "# jitter the data\nheight_jitter = height + np.random.normal(0, 2, size=len(height))\nweight_jitter = weight + np.random.normal(0, 2, size=len(weight))\n\n# make the scatter plot\nplt.plot(height_jitter, weight_jitter, 'o', markersize=1, alpha=0.02)\nplt.axis([140, 200, 0, 160])\n\n# plot the line of best fit\nfx = np.array([xs.min(), xs.max()])\nfy = res.intercept + res.slope * fx\nplt.plot(fx, fy, '-', alpha=0.5)\n\n# label the axes\nplt.xlabel('Height in cm')\nplt.ylabel('Weight in kg')\nplt.axis([140, 200, 0, 160]);", "Weight and age\nExercise: Make a scatter plot of weight and age. The variable AGE is discretized in 5-year intervals, so you might want to jitter it. \nAdjust transparency and marker size to generate the best view of the relationship.", "# Solution goes here", "Exercise: Use linregress to estimate the slope and intercept of the line of best fit for this data.\nNote: as in the previous example, use dropna to drop rows that contain NaN for either variable, and use the resulting subset to compute the arguments for linregress.", "# Solution goes here", "Exercise: Generate a plot that shows the estimated line and a scatter plot of the data.", "# Solution goes here", "Box and violin plots\nThe Seaborn package, which is usually imported as sns, provides two functions used to show the distribution of one variable as a function of another variable.\nThe following box plot shows the distribution of weight in each age category. 
Read the documentation so you know what it means.", "data = brfss.dropna(subset=['AGE', 'WTKG3'])\n\nsns.boxplot(x='AGE', y='WTKG3', data=data, whis=10)\n\nsns.despine(left=True, bottom=True)\nplt.xlabel('Age in years')\nplt.ylabel('Weight in kg');", "This figure makes the shape of the relationship clearer; average weight increases between ages 20 and 50, and then decreases.\nA violin plot is another way to show the same thing. Again, read the documentation so you know what it means.", "sns.violinplot(x='AGE', y='WTKG3', data=data, inner=None)\n\nsns.despine(left=True, bottom=True)\nplt.xlabel('Age in years')\nplt.ylabel('Weight in kg');", "Exercise: Make a box plot that shows the distribution of weight as a function of income. The variable INCOME2 contains income codes with 8 levels.\nUse dropna to select the rows with valid income and weight information.", "# Solution goes here", "Exercise: Make a violin plot with the same variables.", "# Solution goes here", "Plotting percentiles\nOne more way to show the relationship between two variables is to break one variable into groups and plot percentiles of the other variable across groups.\nAs a starting place, here's the median weight in each age group.", "grouped = brfss.groupby('AGE')\n\nfor name, group in grouped['WTKG3']:\n print(name, group.median())", "To get the other percentiles, we can use a Cdf.", "ps = [95, 75, 50, 25, 5]\n\nfor name, group in grouped['WTKG3']:\n percentiles = Cdf(group).Percentiles(ps)\n print(name, percentiles)", "Now I'll collect those results in a list of arrays:", "res = []\nfor name, group in grouped['WTKG3']:\n percentiles = Cdf(group).Percentiles(ps)\n res.append(percentiles)\n \nres", "To get the age groups, we can extract the \"keys\" from the groupby object.", "xs = grouped.groups.keys()\nxs", "Now, we want to loop through the columns of the list of arrays; to do that, we want to transpose it.", "rows = np.transpose(res)\nrows", "Now we can plot the percentiles across the 
groups.", "width = [1,2,5,2,1]\n\nfor i, qs in enumerate(rows):\n plt.plot(xs, qs, label=ps[i], linewidth=width[i], color='C4')\n \ndecorate(xlabel='Age (years)',\n ylabel='Weight (kg)')", "In my opinion, this plot shows the shape of the relationship most clearly.\nDiscretizing variables\nBox plots, violin plots, and percentile line plots don't work as well if the number of groups on the x-axis is too big. For example, here's a box plot of weight versus height.", "sns.boxplot(x='HTM4', y='WTKG3', data=data, whis=10)\n\nsns.despine(left=True, bottom=True)\nplt.xlabel('Height in cm')\nplt.ylabel('Weight in kg');", "This would look better and mean more if there were fewer height groups. We can use pd.cut to put people into height groups where each group spans 10 cm.", "bins = np.arange(0, height.max(), 10)\nbrfss['_HTMG10'] = pd.cut(brfss['HTM4'], bins=bins, labels=bins[:-1]).astype(float)", "Now here's what the plot looks like.", "sns.boxplot(x='_HTMG10', y='WTKG3', data=brfss, whis=10)\nplt.xticks(rotation=30)\n\nsns.despine(left=True, bottom=True)\nplt.xlabel('Height in cm')\nplt.ylabel('Weight in kg');", "Exercise: Plot percentiles of weight versus these height groups.", "# Solution goes here\n\n# Solution goes here\n\n# Solution goes here\n\n# Solution goes here", "Vegetables\nExercise: The variable _VEGESU1 contains the self-reported number of servings of vegetables each respondent eats per day. 
Explore relationships between this variable and the other variables in the dataset, and design visualizations that show any relationship you find as clearly as possible.", "# Solution goes here\n\n# Solution goes here\n\n# Solution goes here", "Correlation\nOne way to compute correlations is the Pandas method corr, which returns a correlation matrix.", "subset = brfss[['HTM4', 'WTKG3', 'AGE']]\nsubset.corr()", "Exercise: Compute a correlation matrix for age, income, and vegetable servings.", "subset = brfss[['AGE', 'INCOME2', '_VEGESU1']]\nsubset.corr()", "Correlation calibration\nTo calibrate your sense of correlation, let's look at scatter plots for fake data with different values of rho.\nThe following function generates random normally-distributed data with approximately the given coefficient of correlation.", "def gen_corr(rho):\n means = [0, 0]\n covs = [[1, rho], [rho, 1]]\n m = np.random.multivariate_normal(means, covs, 100)\n return np.transpose(m)", "This function makes a scatter plot and shows the actual value of rho.", "def plot_scatter(rho, seed=1):\n np.random.seed(seed)\n xs, ys = gen_corr(rho)\n rho = np.corrcoef(xs, ys)[0][1]\n\n plt.plot(xs, ys, 'o', alpha=0.5)\n plt.xlabel('x')\n plt.ylabel('y')\n ax = plt.gca()\n label_rho(ax, rho)\n \n return xs, ys\n\ndef label_rho(ax, rho):\n label = 'ρ = %0.2f' % rho\n plt.text(0.05, 0.95, label, \n horizontalalignment='left', \n verticalalignment='top', \n transform=ax.transAxes,\n fontsize=12)", "The following plots show what scatter plots look like with different values of rho.", "res = []\nxs, ys = plot_scatter(0, seed=18)\nres.append((xs, ys))\n\nxs, ys = plot_scatter(0.25, seed=18)\nres.append((xs, ys))\n\nxs, ys = plot_scatter(0.5, seed=18)\nres.append((xs, ys))\n\nxs, ys = plot_scatter(0.75, seed=18)\nres.append((xs, ys))\n\nxs, ys = plot_scatter(0.95, seed=18)\nres.append((xs, ys))", "Here are all the plots side-by-side for comparison.", "fig, axes = plt.subplots(ncols=5, sharey=True,
figsize=(15,3)) \n\nfor ax, (xs, ys) in zip(axes, res):\n ax.plot(xs, ys, 'o', alpha=0.5)\n rho = np.corrcoef(xs, ys)[0][1]\n label_rho(ax, rho)", "Nonlinear relationships\nHere's an example that generates fake data with a nonlinear relationship.", "np.random.seed(18)\nxs = np.linspace(-1, 1)\nys = xs**2 + np.random.normal(0, 0.05, len(xs))\n\nplt.plot(xs, ys, 'o', alpha=0.5)\nplt.xlabel('x')\nplt.ylabel('y');", "This relationship is quite strong, in the sense that we can make a much better guess about y if we know x than if we don't.\nBut if we compute correlations, they don't show the relationship.", "df = pd.DataFrame(dict(xs=xs, ys=ys))\ndf.corr(method='pearson')\n\ndf.corr(method='spearman')\n\ndf.corr(method='kendall')", "Correlation strength\nHere are two fake datasets showing hypothetical relationships between weight and age.", "np.random.seed(18)\nxs = np.linspace(20, 50)\nys1 = 75 + 0.02 * xs + np.random.normal(0, 0.15, len(xs))\n\nplt.plot(xs, ys1, 'o', alpha=0.5)\nplt.xlabel('Age in years')\nplt.ylabel('Weight in kg')\n\nrho = np.corrcoef(xs, ys1)[0][1]\nlabel_rho(plt.gca(), rho)\n\nnp.random.seed(18)\nxs = np.linspace(20, 50)\nys2 = 65 + 0.2 * xs + np.random.normal(0, 3, len(xs))\n\nplt.plot(xs, ys2, 'o', alpha=0.5)\nplt.xlabel('Age in years')\nplt.ylabel('Weight in kg')\n\nrho = np.corrcoef(xs, ys2)[0][1]\nlabel_rho(plt.gca(), rho)", "Which relationship is stronger?\nIt depends on what we mean. Clearly, the first one has a higher coefficient of correlation. In that world, knowing someone's age would allow you to make a better guess about their weight.\nBut look more closely at the y-axis in the two plots.
How much weight do people gain per year in each of these hypothetical worlds?", "from scipy.stats import linregress\n \nres = linregress(xs, ys1)\nres\n\nres = linregress(xs, ys2)\nres", "In fact, the slope for the second data set is almost 10 times higher.\nThe following figures show the same data again, this time with the line of best fit and the estimated slope.", "def label_slope(ax, slope):\n label = 'm = %0.3f' % slope\n plt.text(0.05, 0.95, label, \n horizontalalignment='left', \n verticalalignment='top', \n transform=ax.transAxes,\n fontsize=12)\n\nres = linregress(xs, ys1)\nfx = np.array([xs.min(), xs.max()])\nfy = res.intercept + res.slope * fx\n\nplt.plot(xs, ys1, 'o', alpha=0.5)\nplt.plot(fx, fy, '-', alpha=0.5)\n\nplt.xlabel('Age in years')\nplt.ylabel('Weight in kg')\nlabel_slope(plt.gca(), res.slope)\n\nplt.gca().get_ylim()\n\nres = linregress(xs, ys2)\nfx = np.array([xs.min(), xs.max()])\nfy = res.intercept + res.slope * fx\n\nplt.plot(xs, ys2, 'o', alpha=0.5)\nplt.plot(fx, fy, '-', alpha=0.5)\n\nplt.xlabel('Age in years')\nplt.ylabel('Weight in kg')\nlabel_slope(plt.gca(), res.slope)\n\nplt.gca().get_ylim()", "The difference is not obvious from looking at the figure; you have to look carefully at the y-axis labels and the estimated slope.\nAnd you have to interpret the slope in context. In the first case, people gain about 0.019 kg per year, which works out to less than half a pound per decade. In the second case, they gain almost 4 pounds per decade.\nBut remember that in the first case, the coefficient of correlation is substantially higher.\nExercise: So, in which case is the relationship \"stronger\"? Write a sentence or two below to summarize your thoughts." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ComputationalModeling/spring-2017-danielak
past-semesters/spring_2016/homework_assignments/Numpy_2D_array_tutorial.ipynb
agpl-3.0
[ "Numpy 2D arrays - some examples\nThis notebook demonstrates how to work with 2D numpy arrays, including array slicing, random numbers, and making plots with them. Note that this works with higher-dimensional arrays as well!\nSome useful links:\n\nNumpy quickstart\nA useful numpy tutorial\nNumpy array creation methods\nNumpy array slicing/indexing tutorial\nNumpy array slicing and indexing techniques (more extensive documentation)\nThe Numpy random module", "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport random", "Array creation and basic properties\nCreate an 8x10 array of zeros called a. Note that you can do this with any numpy array method (ones, zeros_like, ones_like, etc.). See this page for a full list of routines for array creation.", "a = np.zeros([8,10])\nprint(a)", "You have already created a 1D numpy array of predetermined values by giving np.array a list. You can make a multi-dimensional numpy array by giving np.array a set of nested lists (i.e., a list of lists). The following will create a 3x3 array with predetermined values:", "b = np.array([[1,2,3],[4,5,6],[7,8,9]])\nprint(b)", "The array .shape property tells you how large the array is in each dimension, .ndim tells you the number of dimensions, and .size tells you the total number of elements in the array. You can access each of the dimensions dim by .shape[dim].", "print(\"the shape of this array is:\", a.shape)\nprint(\"there are:\", a.ndim, \"dimensions\")\nprint(\"there are\", a.size, \"total elements\")\n\nfor i in range(a.ndim):\n print(\"the size of dimension\", i, \"is\", a.shape[i])", "You can manipulate individual cells of an array by:\na[index_1,index_2]\nNote that when you print it, the first index corresponds to rows (counting down from the top) and the second index corresponds to columns (counting from the left).
Indices in both directions start at zero.", "a[2,6]=11\n\n#print entire array\nprint(a)\n\n#print a single element of the array\nprint(a[2,6])", "Slicing arrays\nYou can also use the same type of slicing that you use with lists - in other words, python allows you to select some subset of the elements in a list or an array to manipulate or copy. With slicing, there are three values that can be used along each dimension: start, end, and step, separated by colons. Here are some examples in 1D:\nmyarray[start:end] # items start through end-1\nmyarray[start:] # items start through the end of the array\nmyarray[:end] # items from the beginning of the array through end-1\nmyarray[:] # a copy of the whole array\nmyarray[start:end:step] # every \"step\" item from start to end-1\nmyarray[::step] # every \"step\" item over the whole array, starting with the first element.\nNote that negative indices count from the end of the array, so myarray[-1] is the last element in the array, myarray[-2] is the second-to-last element, etc.
You can also reverse the order of the array by starting at the end and counting to the beginning by negative numbers -- in other words, myarray[-1::-1] starts at the end of the array and goes to the first element by counting down by one each time.", "# create a 1D array with values 0...9\nc = np.arange(0,10)\n\n# note: the '\\n' at the beginning of many of the print statements \n# adds a carriage return (blank line)\n\nprint(\"some elements from the middle of the array:\",c[3:7] )\nprint(\"\\nthe third element through the second-to-last element:\", c[2:-1]) \nprint(\"\\nthe first half of the array:\", c[:5])\nprint(\"\\nthe second half of the array:\", c[5:])\nprint(\"\\nevery other element from 2-8 (inclusive):\",c[2:9:2])\nprint(\"\\nevery third element in the array:\",c[::3])\nprint(\"\\nreverse the array:\",c[-1::-1]) ", "The same sort of technique can be used with a multi-dimensional array, with start, stop, and (optionally) step specified along each dimension, with the dimensions separated by a comma. The syntax would be:\nmy2Darray[start1:stop1:step1, start2:stop2:step2]\nWith the same rules as above. You can also combine slicing with fixed indices to get some or all elements from a single row or column of your array.\nFor example, array b created above is a 3x3 array with the values 1-9 stored in it.
We can do several different things:\nb[0,:] # get the first row\nb[:,2] # get the third column\nb[1,::2] # get every other element of the second row, starting at element 0\nb[:2,:2] # get a square array containing the first two elements along each dimension\nb[-2:,-2:] # get a square array containing the last two elements along each dimension\nb[::2,::2] # get a square array of every other element along each dimension\nb[-1::-1,-1::-1] # original sized array, but reversed along both dimensions", "print(\"the array b:\\n\",b,\"\\n\")\nprint(\"the first row:\", b[0,:])\n\nprint(\"\\nthe third column:\",b[:,2])\n\nprint(\"\\nevery other element of the second row, starting with element 0:\",b[1,::2])\n\nprint(\"\\nsquare array of first two elements along each dimension:\\n\",b[:2,:2])\n\nprint(\"\\nsquare array of last two elements along each dimension:\\n\",b[-2:,-2:])\n\nprint(\"\\nsquare array of every other element along each dimension:\\n\",b[::2,::2])\n\nprint(\"\\nreversed array:\\n\",b[-1::-1,-1::-1])", "Copying arrays\nSo far, we've only shown you how to create arrays and manipulate subsets of arrays. But what about copying arrays? What happens when you create an array c, and set d=c?", "c = np.full((4,4),10.0) # makes an array of shape (4,4) where all elements are value 10.0\n\nd = c\n\nprint(\"c:\\n\",c, \"\\nd:\\n\", d)", "The two arrays are the same, which is what you would expect. But, what happens if we make changes to array d?", "d[:,0] = -1.0 # make column 0 equal to -1\nd[:,2] = -6.0 # make column 2 equal to -6\n\nprint(\"c:\\n\",c, \"\\nd:\\n\", d)", "Arrays c and d are identical, even though you only changed d!\nSo what's going on here? When you equate arrays in Numpy (i.e., d = c), you create a reference, rather than copying the array -- in other words, the array d is not a distinct array, but rather points to the array c in memory.
Any modification to either c or d will be seen by both. To actually make a copy, you have to use the np.copy() method:", "e = np.full((4,4),10.0) # makes an array of shape (4,4) where all elements are value 10.0\n\nf = np.copy(e)\n\nf[:,0] = -1.0 # make column 0 equal to -1\nf[:,2] = -6.0 # make column 2 equal to -6\n\nprint(\"e:\\n\",e, \"\\nf:\\n\", f)", "You can also make a copy of a subset of an array:", "g = np.full((4,4),10.0) # makes an array of shape (4,4) where all elements are value 10.0\n\nh = np.copy(g[0:2,0:2])\n\nprint(\"g:\\n\",g, \"\\nh:\\n\", h)", "Note that you can also create an array that references a subset of another array rather than copies it, and manipulate that in any way you want. The changes will then appear in both the new array and your original array. For example:", "i = np.full((4,4),10.0) # makes an array of shape (4,4) where all elements are value 10.0\n\nj = i[0:2,0:2]\n\nprint(\"\\nunmodified arrays:\\n\")\n\nprint(\"i:\\n\",i, \"\\nj:\\n\", j)\n\nprint(\"\\narrays after modification:\\n\")\n\nj[1,1]=-999.0\n\nprint(\"i:\\n\",i, \"\\nj:\\n\", j)\n", "Numpy and random numbers\nNumpy has a random module that can be used to generate random numbers in a similar way to the standard Python random module, but with the added advantage that it can do so for arrays of values. Two commonly-used methods are:\n\nrandom, which generates an array with user-specified dimensions (1D, 2D, or more dimensions!)
and fills it with random floating-point values in the interval [0,1).\nrandint, which generates an array with user-specified dimensions and fills it with random integers in a user-specified interval.", "random_float_array = np.random.random((5,5))\n\nprint(random_float_array)\n\nrandom_int_array = np.random.randint(0,10,(5,5))\n\nprint(\"\\n\",random_int_array)", "Plotting numpy arrays\nIt's easy to plot 2D Numpy arrays in matplotlib using the pyplot matshow method:", "new_rand_array = np.random.random((100,100))\n\nplt.matshow(new_rand_array)\n\n# uncomment the following line to save the figure to your hard drive!\n#plt.savefig(\"myimage.png\")", "And you can turn off the array axes with the following incantation:", "myplot = plt.matshow(new_rand_array)\nmyplot.axes.get_xaxis().set_visible(False)\nmyplot.axes.get_yaxis().set_visible(False)", "(See this page for a more complex example.) Finally, you can use the pyplot imshow method to control many aspects of a plotted array, including things like the color map, opacity, and the minimum and maximum range.", "# interpolation='none' keeps imshow() from trying to interpolate between values and\n# making it look fuzzy.\n# cmap='mapname' changes the color map.\n# vmin, vmax sets the range of the color bar (from 0.0 - 0.5 in this example)\n\nmyplot = plt.imshow(new_rand_array, interpolation='none',cmap='hot',vmin=0.0,vmax=0.5)\n\n# uncomment the following lines to remove the axis labels\n#myplot.axes.get_xaxis().set_visible(False)\n#myplot.axes.get_yaxis().set_visible(False)\n\n# uncomment the following line to save the figure to your hard drive!\n#plt.savefig(\"myimage.png\")" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
madlib/madlib-examples
Prediction-metrics-demo-1.ipynb
apache-2.0
[ "Prediction metrics\nThis module provides a set of metrics to evaluate the quality of predictions of a model. A typical function will take a set of \"prediction\" and \"observation\" values and use them to calculate the desired metric, unless noted otherwise.", "%load_ext sql\n\n# %sql postgresql://gpdbchina@10.194.10.68:55000/madlib\n%sql postgresql://fmcquillan@localhost:5432/madlib\n\n%sql select madlib.version();", "Continuous variables", "%%sql \nDROP TABLE IF EXISTS test_set;\nCREATE TABLE test_set(\n pred FLOAT8, -- predicted values\n obs FLOAT8 -- actual observed values\n );\nINSERT INTO test_set VALUES\n (37.5,53.1), (12.3,34.2), (74.2,65.4), (91.1,82.1);\n\nSELECT * FROM test_set;\n\n\n%%sql\nDROP TABLE IF EXISTS table_out;\nSELECT madlib.mean_abs_error( 'test_set', 'table_out', 'pred', 'obs');\nSELECT * FROM table_out;\n\n%%sql\nDROP TABLE IF EXISTS table_out;\nSELECT madlib.mean_abs_perc_error( 'test_set', 'table_out', 'pred', 'obs');\nSELECT * FROM table_out;\n\n%%sql\nDROP TABLE IF EXISTS table_out;\nSELECT madlib.mean_perc_error( 'test_set', 'table_out', 'pred', 'obs');\nSELECT * FROM table_out;\n\n%%sql \nDROP TABLE IF EXISTS table_out;\nSELECT madlib.mean_squared_error( 'test_set', 'table_out', 'pred', 'obs');\nSELECT * FROM table_out;\n\n%%sql\nDROP TABLE IF EXISTS table_out;\nSELECT madlib.r2_score( 'test_set', 'table_out', 'pred', 'obs');\nSELECT * FROM table_out;\n\n%%sql\nDROP TABLE IF EXISTS table_out;\nSELECT madlib.adjusted_r2_score( 'test_set', 'table_out', 'pred', 'obs', 3, 100);\nSELECT * FROM table_out;", "Binary classification\nCreate the sample data for binary classifier metrics:", "%%sql\nDROP TABLE IF EXISTS test_set;\nCREATE TABLE test_set AS\n SELECT ((a*8)::integer)/8.0 pred, -- prediction probability TRUE\n ((a*0.5+random()*0.5)>0.5) obs -- actual observations\n FROM (select random() as a from generate_series(1,100)) x;\nSELECT * FROM test_set;", "Run the Binary Classifier metrics function and View the True Positive Rate and 
the False Positive Rate:", "%%sql\nDROP TABLE IF EXISTS table_out;\nSELECT madlib.binary_classifier( 'test_set', 'table_out', 'pred', 'obs');\nSELECT threshold, tpr, fpr FROM table_out ORDER BY threshold;", "View all metrics at a given threshold value:", "%%sql\nSELECT * FROM table_out WHERE threshold=0.5;", "Run the Area Under ROC curve function:", "%%sql\nDROP TABLE IF EXISTS table_out;\nSELECT madlib.area_under_roc( 'test_set', 'table_out', 'pred', 'obs');\nSELECT * FROM table_out;", "Multi-class classification\nCreate the sample data for confusion matrix.", "%%sql\nDROP TABLE IF EXISTS test_set;\nCREATE TABLE test_set AS\n SELECT (x+y)%5+1 AS pred,\n (x*y)%5 AS obs\n FROM generate_series(1,5) x,\n generate_series(1,5) y;\nSELECT * FROM test_set;\n\n%%sql\nDROP TABLE IF EXISTS table_out;\nSELECT madlib.confusion_matrix( 'test_set', 'table_out', 'pred', 'obs');\nSELECT * FROM table_out ORDER BY class;" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
dennys-bd/Coursera-Machine-Learning-Specialization
Course 2 - ML, Regression/week-2-multiple-regression-assignment-2-blank.ipynb
mit
[ "Regression Week 2: Multiple Regression (gradient descent)\nIn the first notebook we explored multiple regression using graphlab create. Now we will use graphlab along with numpy to solve for the regression weights with gradient descent.\nIn this notebook we will cover estimating multiple regression weights via gradient descent. You will:\n* Add a constant column of 1's to a graphlab SFrame to account for the intercept\n* Convert an SFrame into a Numpy array\n* Write a predict_output() function using Numpy\n* Write a numpy function to compute the derivative of the regression weights with respect to a single feature\n* Write gradient descent function to compute the regression weights given an initial weight vector, step size and tolerance.\n* Use the gradient descent function to estimate regression weights for multiple features\nFire up graphlab create\nMake sure you have the latest version of graphlab (>= 1.7)", "import graphlab", "Load in house sales data\nDataset is from house sales in King County, the region where the city of Seattle, WA is located.", "sales = graphlab.SFrame('kc_house_data.gl/')", "If we want to do any \"feature engineering\" like creating new features or adjusting existing ones we should do this directly using the SFrames as seen in the other Week 2 notebook. For this notebook, however, we will work with the existing features.\nConvert to Numpy Array\nAlthough SFrames offer a number of benefits to users (especially when using Big Data and built-in graphlab functions) in order to understand the details of the implementation of algorithms it's important to work with a library that allows for direct (and optimized) matrix operations. Numpy is a Python solution to work with matrices (or any multi-dimensional \"array\").\nRecall that the predicted value given the weights and the features is just the dot product between the feature and weight vector. 
Similarly, if we put all of the features row-by-row in a matrix then the predicted value for all the observations can be computed by right multiplying the \"feature matrix\" by the \"weight vector\". \nFirst we need to take the SFrame of our data and convert it into a 2D numpy array (also called a matrix). To do this we use graphlab's built in .to_dataframe() which converts the SFrame into a Pandas (another python library) dataframe. We can then use Panda's .as_matrix() to convert the dataframe into a numpy matrix.", "import numpy as np # note this allows us to refer to numpy as np instead ", "Now we will write a function that will accept an SFrame, a list of feature names (e.g. ['sqft_living', 'bedrooms']) and an target feature e.g. ('price') and will return two things:\n* A numpy matrix whose columns are the desired features plus a constant column (this is how we create an 'intercept')\n* A numpy array containing the values of the output\nWith this in mind, complete the following function (where there's an empty line you should write a line of code that does what the comment above indicates)\nPlease note you will need GraphLab Create version at least 1.7.1 in order for .to_numpy() to work!", "def get_numpy_data(data_sframe, features, output):\n data_sframe['constant'] = 1 # this is how you add a constant column to an SFrame\n # add the column 'constant' to the front of the features list so that we can extract it along with the others:\n features = ['constant'] + features # this is how you combine two lists\n # select the columns of data_SFrame given by the features list into the SFrame features_sframe (now including constant):\n features_sframe = data_sframe[features]\n # the following line will convert the features_SFrame into a numpy matrix:\n feature_matrix = features_sframe.to_numpy()\n # assign the column of data_sframe associated with the output to the SArray output_sarray\n output_sarray = data_sframe[output]\n # the following will convert the SArray into 
a numpy array\n output_array = output_sarray.to_numpy()\n return(feature_matrix, output_array)", "For testing let's use the 'sqft_living' feature and a constant as our features and price as our output:", "(example_features, example_output) = get_numpy_data(sales, ['sqft_living'], 'price') # the [] around 'sqft_living' makes it a list\nprint example_features[0,:] # this accesses the first row of the data the ':' indicates 'all columns'\nprint example_output[0] # and the corresponding output", "Predicting output given regression weights\nSuppose we had the weights [1.0, 1.0] and the features [1.0, 1180.0] and we wanted to compute the predicted output 1.0*1.0 + 1.0*1180.0 = 1181.0; this is the dot product between these two arrays. If they're numpy arrays we can use np.dot() to compute this:", "my_weights = np.array([1., 1.]) # the example weights\nmy_features = example_features[0,] # we'll use the first data point\npredicted_value = np.dot(my_features, my_weights)\nprint predicted_value", "np.dot() also works when dealing with a matrix and a vector. Recall that the predictions from all the observations are just the RIGHT (as in weights on the right) dot product between the features matrix and the weights vector.
With this in mind, finish the following predict_output function to compute the predictions for an entire matrix of features given the matrix and the weights:", "def predict_output(feature_matrix, weights):\n # assume feature_matrix is a numpy matrix containing the features as columns and weights is a corresponding numpy array\n # create the predictions vector by using np.dot()\n predictions = np.dot(feature_matrix, weights)\n return(predictions)", "If you want to test your code, run the following cell:", "test_predictions = predict_output(example_features, my_weights)\nprint test_predictions[0] # should be 1181.0\nprint test_predictions[1] # should be 2571.0", "Computing the Derivative\nWe are now going to move to computing the derivative of the regression cost function. Recall that the cost function is the sum over the data points of the squared difference between an observed output and a predicted output.\nSince the derivative of a sum is the sum of the derivatives, we can compute the derivative for a single data point and then sum over data points. We can write the squared difference between the observed output and predicted output for a single point as follows:\n(w[0]*[CONSTANT] + w[1]*[feature_1] + ... + w[i] *[feature_i] + ... + w[k]*[feature_k] - output)^2\nWhere we have k features and a constant. So the derivative with respect to weight w[i] by the chain rule is:\n2*(w[0]*[CONSTANT] + w[1]*[feature_1] + ... + w[i] *[feature_i] + ... + w[k]*[feature_k] - output)* [feature_i]\nThe term inside the parentheses is just the error (difference between prediction and output). So we can re-write this as:\n2*error*[feature_i]\nThat is, the derivative for the weight for feature i is the sum (over data points) of 2 times the product of the error and the feature itself. In the case of the constant then this is just twice the sum of the errors!\nRecall that twice the sum of the product of two vectors is just twice the dot product of the two vectors.
Therefore the derivative for the weight for feature_i is just two times the dot product between the values of feature_i and the current errors. \nWith this in mind, complete the following derivative function which computes the derivative of the weight given the value of the feature (over all data points) and the errors (over all data points).", "def feature_derivative(errors, feature):\n # Assume that errors and feature are both numpy arrays of the same length (number of data points)\n # compute twice the dot product of these vectors as 'derivative' and return the value\n derivative = np.dot(errors, feature)*2\n return(derivative)", "To test your feature derivative, run the following:", "(example_features, example_output) = get_numpy_data(sales, ['sqft_living'], 'price') \nmy_weights = np.array([0., 0.]) # this makes all the predictions 0\ntest_predictions = predict_output(example_features, my_weights) \n# just like SFrames 2 numpy arrays can be elementwise subtracted with '-': \nerrors = test_predictions - example_output # prediction errors in this case is just the -example_output\nfeature = example_features[:,0] # let's compute the derivative with respect to 'constant', the \":\" indicates \"all rows\"\nderivative = feature_derivative(errors, feature)\nprint derivative\nprint -np.sum(example_output)*2 # should be the same as derivative", "Gradient Descent\nNow we will write a function that performs a gradient descent. The basic premise is simple. Given a starting point, we update the current weights by moving in the negative gradient direction. Recall that the gradient is the direction of increase and therefore the negative gradient is the direction of decrease and we're trying to minimize a cost function. \nThe amount by which we move in the negative gradient direction is called the 'step size'. We stop when we are 'sufficiently close' to the optimum.
We define this by requiring that the magnitude (length) of the gradient vector be smaller than a fixed 'tolerance'.\nWith this in mind, complete the following gradient descent function below using your derivative function above. For each step in the gradient descent, we update the weight for each feature before computing our stopping criterion.", "from math import sqrt # recall that the magnitude/length of a vector [g[0], g[1], g[2]] is sqrt(g[0]^2 + g[1]^2 + g[2]^2)\n\ndef regression_gradient_descent(feature_matrix, output, initial_weights, step_size, tolerance):\n converged = False \n weights = np.array(initial_weights) # make sure it's a numpy array\n while not converged:\n # compute the predictions based on feature_matrix and weights using your predict_output() function\n predictions = predict_output(feature_matrix, weights)\n # compute the errors as predictions - output\n errors = predictions - output\n gradient_sum_squares = 0 # initialize the gradient sum of squares\n # while we haven't reached the tolerance yet, update each feature's weight\n for i in range(len(weights)): # loop over each weight\n # Recall that feature_matrix[:, i] is the feature column associated with weights[i]\n # compute the derivative for weight[i]:\n derivative = feature_derivative(errors, feature_matrix[:, i])\n # add the squared value of the derivative to the gradient sum of squares (for assessing convergence)\n gradient_sum_squares = gradient_sum_squares + derivative * derivative\n # subtract the step size times the derivative from the current weight\n weights[i] = weights[i] - derivative*step_size \n # compute the square-root of the gradient sum of squares to get the gradient magnitude:\n gradient_magnitude = sqrt(gradient_sum_squares)\n if gradient_magnitude < tolerance:\n converged = True\n return(weights)", "A few things to note before we run the gradient descent.
Since the gradient is a sum over all the data points and involves a product of an error and a feature, the gradient itself will be very large since the features are large (squarefeet) and the output is large (prices). So while you might expect \"tolerance\" to be small, small is only relative to the size of the features. \nFor similar reasons the step size will be much smaller than you might expect but this is because the gradient has such large values.\nRunning the Gradient Descent as Simple Regression\nFirst let's split the data into training and test data.", "train_data,test_data = sales.random_split(.8,seed=0)", "Although the gradient descent is designed for multiple regression, since the constant is now a feature we can use the gradient descent function to estimate the parameters in the simple regression on squarefeet. The following cell sets up the feature_matrix, output, initial weights and step size for the first model:", "# let's test out the gradient descent\nsimple_features = ['sqft_living']\nmy_output = 'price'\n(simple_feature_matrix, output) = get_numpy_data(train_data, simple_features, my_output)\ninitial_weights = np.array([-47000., 1.])\nstep_size = 7e-12\ntolerance = 2.5e7", "Next run your gradient descent with the above parameters.", "weights = regression_gradient_descent(simple_feature_matrix, output, initial_weights, step_size, tolerance)\nprint weights", "How do your weights compare to those achieved in week 1 (don't expect them to be exactly the same)?
\nQuiz Question: What is the value of the weight for sqft_living -- the second element of ‘simple_weights’ (rounded to 1 decimal place)?\nUse your newly estimated weights and your predict_output() function to compute the predictions on all the TEST data (you will need to create a numpy array of the test feature_matrix and test output first):", "(test_simple_feature_matrix, test_output) = get_numpy_data(test_data, simple_features, my_output)", "Now compute your predictions using test_simple_feature_matrix and your weights from above.", "predictions = predict_output(test_simple_feature_matrix, weights)\nprint predictions", "Quiz Question: What is the predicted price for the 1st house in the TEST data set for model 1 (round to nearest dollar)?", "round(predictions[0], 2)", "Now that you have the predictions on test data, compute the RSS on the test data set. Save this value for comparison later. Recall that RSS is the sum of the squared errors (difference between prediction and output).", "residuals = test_output - predictions\nRSS = sum(residuals*residuals)\nprint RSS", "Running a multiple regression\nNow we will use more than one actual feature. Use the following code to produce the weights for a second model with the following parameters:", "model_features = ['sqft_living', 'sqft_living15'] # sqft_living15 is the average squarefeet for the nearest 15 neighbors. \nmy_output = 'price'\n(feature_matrix, output) = get_numpy_data(train_data, model_features, my_output)\ninitial_weights = np.array([-100000., 1., 1.])\nstep_size = 4e-12\ntolerance = 1e9", "Use the above parameters to estimate the model weights. Record these values for your quiz.", "weights_more_features = regression_gradient_descent(feature_matrix, output, initial_weights, step_size, tolerance)", "Use your newly estimated weights and the predict_output function to compute the predictions on the TEST data.
Don't forget to create a numpy array for these features from the test set first!", "(test_feature_matrix, test_output) = get_numpy_data(test_data, model_features, my_output)\npredicted = predict_output(test_feature_matrix, weights_more_features)", "Quiz Question: What is the predicted price for the 1st house in the TEST data set for model 2 (round to nearest dollar)?", "round(predicted[0], 2)", "What is the actual price for the 1st house in the test data set?", "test_data[0]['price']", "Quiz Question: Which estimate was closer to the true price for the 1st house on the TEST data set, model 1 or model 2?\nNow use your predictions and the output to compute the RSS for model 2 on TEST data.", "residuals_2 = test_output - predicted\nRSS_2 = sum(residuals_2**2)\nprint RSS_2", "Quiz Question: Which model (1 or 2) has lowest RSS on all of the TEST data?", "print 'Residual 1: %f e Residual 2: %f' % (RSS, RSS_2)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
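The cells above call `regression_gradient_descent` and `predict_output`, which are defined earlier in the assignment. As a rough sketch of the logic those helpers are assumed to implement (a hypothetical reimplementation, not the course's exact code), batch gradient descent on the squared error stops once the gradient magnitude drops below the tolerance:

```python
import numpy as np

def predict_output(feature_matrix, weights):
    # predictions are the dot product of each feature row with the weights
    return feature_matrix.dot(weights)

def regression_gradient_descent(feature_matrix, output, initial_weights,
                                step_size, tolerance, max_iters=100000):
    # gradient of RSS w.r.t. the weights: 2 * H^T (Hw - y);
    # stop when its Euclidean norm falls below `tolerance`
    weights = np.array(initial_weights, dtype=float)
    for _ in range(max_iters):
        errors = predict_output(feature_matrix, weights) - output
        gradient = 2 * feature_matrix.T.dot(errors)
        if np.sqrt((gradient ** 2).sum()) < tolerance:
            break
        weights -= step_size * gradient
    return weights

# tiny synthetic check: recover y = 1 + 2x (constant column first)
X = np.array([[1.0, x] for x in range(10)])
y = X.dot(np.array([1.0, 2.0]))
w = regression_gradient_descent(X, y, np.array([0.0, 0.0]), 1e-3, 1e-6)
```

On the house data the gradient components are huge (squarefeet times dollar errors), which is why the notebook's step size (7e-12) and tolerance (2.5e7) look so extreme compared with this toy example.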
empirical-org/WikipediaSentences
notebooks/BERT-4.1 Experiments Multilabel-QuillNLP.ipynb
agpl-3.0
[ "Multilabel BERT Experiments\nIn this notebook we do some first experiments with BERT: we finetune a BERT model+classifier on each of our datasets separately and compute the accuracy of the resulting classifier on the test data.\nFor these experiments we use the pytorch_transformers package. It contains a variety of neural network architectures for transfer learning and pretrained models, including BERT and XLNET.\nTwo different BERT models are relevant for our experiments: \n\nBERT-base-uncased: a relatively small BERT model that should already give reasonable results,\nBERT-large-uncased: a larger model for real state-of-the-art results.", "from multilabel import EATINGMEAT_BECAUSE_MAP, EATINGMEAT_BUT_MAP, JUNKFOOD_BECAUSE_MAP, JUNKFOOD_BUT_MAP\n\nLABEL_MAP = JUNKFOOD_BUT_MAP\nBERT_MODEL = 'bert-base-uncased'\nBATCH_SIZE = 16 if \"base\" in BERT_MODEL else 2\nGRADIENT_ACCUMULATION_STEPS = 1 if \"base\" in BERT_MODEL else 8\nMAX_SEQ_LENGTH = 100\nPREFIX = \"junkfood_but\"", "Data\nWe use the same data as for all our previous experiments. 
Here we load the training, development and test data for a particular prompt.", "import ndjson\nimport glob\nfrom collections import Counter\n\ntrain_file = f\"../data/interim/{PREFIX}_train_withprompt.ndjson\"\nsynth_files = glob.glob(f\"../data/interim/{PREFIX}_train_withprompt_allsynth.ndjson\")\ndev_file = f\"../data/interim/{PREFIX}_dev_withprompt.ndjson\"\ntest_file = f\"../data/interim/{PREFIX}_test_withprompt.ndjson\"\n\nwith open(train_file) as i:\n train_data = ndjson.load(i)\n\nsynth_data = []\nfor f in synth_files:\n with open(f) as i:\n synth_data += ndjson.load(i)\n \nwith open(dev_file) as i:\n dev_data = ndjson.load(i)\n \nwith open(test_file) as i:\n test_data = ndjson.load(i)\n \nlabels = Counter([item[\"label\"] for item in train_data])\nprint(labels)\nprint(len(synth_data))", "Next, we build the label vocabulary, which maps every label in the training data to an index.", "def map_to_multilabel(items):\n return [{\"text\": item[\"text\"], \"label\": LABEL_MAP[item[\"label\"]]} for item in items]\n\ntrain_data = map_to_multilabel(train_data)\ndev_data = map_to_multilabel(dev_data)\nsynth_data = map_to_multilabel(synth_data)\ntest_data = map_to_multilabel(test_data)\n\nimport sys\nsys.path.append('../')\n\nfrom quillnlp.models.bert.preprocessing import preprocess, create_label_vocabulary\n\nlabel2idx = create_label_vocabulary(train_data)\nidx2label = {v:k for k,v in label2idx.items()}\ntarget_names = [idx2label[s] for s in range(len(idx2label))]\n\nMAX_SEQ_LENGTH = 100\ntrain_dataloader = preprocess(train_data, BERT_MODEL, label2idx, MAX_SEQ_LENGTH, BATCH_SIZE)\ndev_dataloader = preprocess(dev_data, BERT_MODEL, label2idx, MAX_SEQ_LENGTH, BATCH_SIZE)\ntest_dataloader = preprocess(test_data, BERT_MODEL, label2idx, MAX_SEQ_LENGTH, BATCH_SIZE, shuffle=False)", "Model\nWe load the pretrained model and put it on a GPU if one is available. 
We also put the model in \"training\" mode, so that we can correctly update its internal parameters on the basis of our data sets.", "import sys\nsys.path.append('../')\n\nimport torch\nfrom quillnlp.models.bert.models import get_multilabel_bert_classifier\n\nBERT_MODEL = 'bert-base-uncased'\n\ndevice = \"cuda\" if torch.cuda.is_available() else \"cpu\"\nmodel = get_multilabel_bert_classifier(BERT_MODEL, len(label2idx), device=device)", "Training", "from quillnlp.models.bert.train import train\n\nbatch_size = 16 if \"base\" in BERT_MODEL else 2\ngradient_accumulation_steps = 1 if \"base\" in BERT_MODEL else 8\noutput_model_file = train(model, train_dataloader, dev_dataloader, batch_size, gradient_accumulation_steps, device)", "Evaluation", "from quillnlp.models.bert.train import evaluate\nfrom sklearn.metrics import precision_recall_fscore_support, classification_report\n\nprint(\"Loading model from\", output_model_file)\ndevice=\"cpu\"\n\nmodel = get_multilabel_bert_classifier(BERT_MODEL, len(label2idx), model_file=output_model_file, device=device)\nmodel.eval()\n\n_, test_correct, test_predicted = evaluate(model, test_dataloader, device)\n\nprint(\"Test performance:\", precision_recall_fscore_support(test_correct, test_predicted, average=\"micro\"))\nprint(classification_report(test_correct, test_predicted, target_names=target_names))\n\nall_correct = 0\nfp, fn, tp, tn = 0, 0, 0, 0\nfor c, p in zip(test_correct, test_predicted):\n if sum(c == p) == len(c):\n all_correct +=1\n for ci, pi in zip(c, p):\n if pi == 1 and ci == 1:\n tp += 1\n same = 1\n elif pi == 1 and ci == 0:\n fp += 1\n elif pi == 0 and ci == 1:\n fn += 1\n else:\n tn += 1\n same =1\n \nprecision = tp/(tp+fp)\nrecall = tp/(tp+fn)\nprint(\"P:\", precision)\nprint(\"R:\", recall)\nprint(\"A:\", all_correct/len(test_correct))\n\nfor item, predicted, correct in zip(test_data, test_predicted, test_correct):\n correct_labels = [idx2label[i] for i, l in enumerate(correct) if l == 1]\n predicted_labels = 
[idx2label[i] for i, l in enumerate(predicted) if l == 1]\n print(\"{}#{}#{}\".format(item[\"text\"], \";\".join(correct_labels), \";\".join(predicted_labels)))\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
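The evaluation cell above computes micro-averaged precision/recall and exact-match accuracy with a hand-rolled loop over binary label vectors. The same logic, isolated into a small function (a sketch mirroring that loop, not part of QuillNLP's API):

```python
def multilabel_scores(correct, predicted):
    """Micro-averaged precision/recall plus subset (exact-match) accuracy
    for binary indicator vectors, mirroring the hand-rolled loop above."""
    tp = fp = fn = exact = 0
    for c, p in zip(correct, predicted):
        if c == p:
            exact += 1  # every label of this example matches
        for ci, pi in zip(c, p):
            if pi == 1 and ci == 1:
                tp += 1
            elif pi == 1 and ci == 0:
                fp += 1
            elif pi == 0 and ci == 1:
                fn += 1
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall, exact / len(correct)

p, r, a = multilabel_scores(
    correct=[[1, 0, 1], [0, 1, 0], [1, 1, 0]],
    predicted=[[1, 0, 1], [0, 0, 0], [1, 0, 0]],
)
```

`sklearn.metrics.precision_recall_fscore_support(..., average="micro")` computes the same micro precision/recall directly, which is why the notebook prints both.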
finafina/zendesk_python
Get Email from Customers who have submitted Tickets.ipynb
gpl-3.0
[ "Get Email\nThis report returns the email addresses from customers who have contacted customer service ('custom_fields' value = 'customer_service') in a certain timeframe. Optional slicing by incoming channel.\nImport Tickets", "### Read file where API token is stored\ntoken = open('zd_token').read().strip('\\n')\nemail = open('zd_email').read().strip('\\n')\nsubdomain = open('zd_subdomain').read().strip('\\n')", "Import and save CX tickets in array.", "from zendesk import Zendesk\nimport datetime\nimport time\nimport json\nimport collections \nimport csv\n\n### Zendesk credentials\nzd = Zendesk(subdomain,\n email+'/token',\n token) \n\n### Set the interval you're interested in\n### today = datetime.datetime.today().date()\n### weekStartDate = today\n### weekEndDate = today\n\nweekStartDate = (datetime.datetime.now() - datetime.timedelta(days = 34)).date()\nweekEndDate = (datetime.datetime.now() - datetime.timedelta(days = 30)).date()\n\nprint '---------------------------------------------------------------------'\nprint 'START.'\nprint 'Week Start Date:', weekStartDate,'. 
Week End Date:', weekEndDate, '.'\nprint '---------------------------------------------------------------------'\n\n\n### Build array containing tickets from given time interval\ntickets = []\n\ntry:\n pageNumber=1\n nextPage = True\n while nextPage:\n zdresult = zd.search(query='type:ticket created>='+ str(weekStartDate) +' created<=' + str(weekEndDate), page=pageNumber)\n print 'Zendesk Result Page Number: ', pageNumber\n print zdresult['next_page']\n for result in zdresult['results']:\n if result['custom_fields'][1]['value'] == 'customer_service': #adjust custom field according to your ticket structure\n tickets.append(result)\n if zdresult['next_page'] is None:\n nextPage=False\n else:\n nextPage=True\n pageNumber += 1\nexcept ValueError:\n print(ValueError)\n\nprint '---------------------------------------------------------------------' \nprint 'Tickets array length:', len(tickets) , '.'\nprint 'DONE.'\nprint '---------------------------------------------------------------------'", "Use Print method below that suits best:\n1) Print Emails (All Channels)", "requesterArr = []\nnoEmail = 0\n\nprint '---------------------------------------------------------------------'\nprint 'START.'\nprint '---------------------------------------------------------------------'\n\ntry:\n for ticket in tickets:\n channel = ticket['via']['channel']\n ticketID = ticket['id']\n source = ticket['custom_fields'][9]['value'] #adjust custom field according to your ticket structure\n requesterID = ticket['requester_id'] \n zdresult = zd.search(query='type:ticket user:'+ str(requesterID))\n email = zdresult['results'][0]['email']\n if (email is not None):\n requesterArr.append([ticketID,email,channel])\n# print 'ticket: ', ticketID, ',',email , ',', channel \n print (\"'%s',\" % email)\n else:\n noEmail+=1;\n \nexcept ValueError:\n print(\"Error!\")\n\nprint '---------------------------------------------------------------------' \nprint 'Length: ', len(requesterArr) , ', no email: ', noEmail , '.'\nprint 'DONE.'\nprint
'---------------------------------------------------------------------'", "2) Print Emails (Channel: Voice)", "requesterArr = []\ncallNoEmail = 0\n\nprint '---------------------------------------------------------------------'\nprint 'START.'\nprint '---------------------------------------------------------------------'\n\ntry:\n for ticket in tickets:\n channel = ticket['via']['channel']\n ticketID = ticket['id']\n source = ticket['custom_fields'][9]['value'] #adjust custom field according to your ticket structure\n requesterID = ticket['requester_id'] \n zdresult = zd.search(query='type:ticket user:'+ str(requesterID))\n email = zdresult['results'][0]['email']\n if (email is not None) and (channel =='voice'):\n requesterArr.append([ticketID,email,channel])\n# print 'ticket: ', ticketID, ',',email , ',', channel\n print (\"'%s',\" % email)\n elif (channel =='voice'):\n callNoEmail+=1;\n \nexcept ValueError:\n print(\"Error!\")\n\nprint '---------------------------------------------------------------------'\nprint 'Length: ', len(requesterArr) , ', call but no email: ', callNoEmail, '.'\nprint 'DONE.'\nprint '---------------------------------------------------------------------'\n", "3) Print Emails (Channels: All minus Voice)", "requesterArr = []\nnoEmail = 0\n\nprint '---------------------------------------------------------------------'\nprint 'START.'\nprint '---------------------------------------------------------------------'\n\ntry:\n for ticket in tickets:\n channel = ticket['via']['channel']\n ticketID = ticket['id']\n source = ticket['custom_fields'][9]['value'] #adjust custom field according to your ticket structure\n requesterID = ticket['requester_id'] \n zdresult = zd.search(query='type:ticket user:'+ str(requesterID))\n email = zdresult['results'][0]['email']\n if (email is not None) and (channel !='voice'):\n requesterArr.append([ticketID,email,channel])\n# print 'ticket: ', ticketID, ',',email , ',', channel \n print (\"'%s',\" % email)\n elif 
(channel !='voice'):\n noEmail+=1;\n \nexcept ValueError:\n print(\"Error!\")\n\nprint '---------------------------------------------------------------------'\nprint 'Length: ', len(requesterArr) , ', no email: ', noEmail, '.'\nprint 'DONE.'\nprint '---------------------------------------------------------------------'" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
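The ticket-collection cell above pages through Zendesk search results until `next_page` is None. That pattern can be factored into a generator; `search_fn` here is a stand-in for the real client call (`zd.search`), exercised below with a fake two-page backend:

```python
def iter_search_pages(search_fn, query):
    """Yield every result across pages. search_fn(query, page) must return
    a dict with 'results' and 'next_page' (None on the last page), which is
    the shape the Zendesk search endpoint returns."""
    page = 1
    while True:
        resp = search_fn(query, page)
        for result in resp['results']:
            yield result
        if resp['next_page'] is None:
            break
        page += 1

# fake two-page backend, purely for illustration
pages = {1: {'results': [1, 2], 'next_page': 'page2'},
         2: {'results': [3], 'next_page': None}}
collected = list(iter_search_pages(lambda q, p: pages[p], 'type:ticket'))
```

Keeping the pagination separate from the per-ticket filtering makes the three "Print Emails" variants above differ only in their `if` conditions.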
GoogleCloudPlatform/mlops-on-gcp
immersion/tfx_pipelines/04-metadata/solutions/lab-04.ipynb
apache-2.0
[ "Inspecting TFX metadata\nLearning Objectives\n\nUse a GRPC server to access and analyze pipeline artifacts stored in the ML Metadata service of your AI Platform Pipelines instance.\n\nIn this lab, you will explore TFX pipeline metadata including pipeline and run artifacts. A hosted AI Platform Pipelines instance includes the ML Metadata service. In AI Platform Pipelines, ML Metadata uses MySQL as a database backend and can be accessed using a GRPC server.\nSetup", "import os\nimport json\n\nimport ml_metadata\nimport tensorflow_data_validation as tfdv\nimport tensorflow_model_analysis as tfma\n\n\nfrom ml_metadata.metadata_store import metadata_store\nfrom ml_metadata.proto import metadata_store_pb2\n\nfrom tfx.orchestration import metadata\nfrom tfx.types import standard_artifacts\n\nfrom tensorflow.python.lib.io import file_io\n\n!python -c \"import tfx; print('TFX version: {}'.format(tfx.__version__))\"\n!python -c \"import kfp; print('KFP version: {}'.format(kfp.__version__))\"", "Option 1: Explore metadata from existing TFX pipeline runs from AI Pipelines instance created in lab-02 or lab-03.\n1.1 Configure Kubernetes port forwarding\nTo enable access to the ML Metadata GRPC server, configure Kubernetes port forwarding.\nFrom a JupyterLab terminal, execute the following commands:\ngcloud container clusters get-credentials [YOUR CLUSTER] --zone [YOUR CLUSTER ZONE] \nkubectl port-forward service/metadata-grpc-service --namespace [YOUR NAMESPACE] 7000:8080\nProceed to the next step, \"Connecting to ML Metadata\".\nOption 2: Create new AI Pipelines instance and evaluate metadata on newly triggered pipeline runs.\nHosted AI Pipelines incurs cost for the duration your Kubernetes cluster is running. 
If you deleted your previous lab instance, proceed with the 6 steps below to deploy a new TFX pipeline and trigger runs to inspect its metadata.", "import yaml\n\n# Set `PATH` to include the directory containing TFX CLI.\nPATH=%env PATH\n%env PATH=/home/jupyter/.local/bin:{PATH}", "The pipeline source can be found in the pipeline folder. Switch to the pipeline folder and compile the pipeline.", "%cd pipeline", "2.1 Create AI Platform Pipelines cluster\nNavigate to AI Platform Pipelines page in the Google Cloud Console.\nCreate or select an existing Kubernetes cluster (GKE) and deploy AI Platform. Make sure to select \"Allow access to the following Cloud APIs https://www.googleapis.com/auth/cloud-platform\" to allow for programmatic access to your pipeline by the Kubeflow SDK for the rest of the lab. Also, provide an App instance name such as \"TFX-lab-04\".\n2.2 Configure environment settings\nUpdate the below constants with the settings reflecting your lab environment.\n\nGCP_REGION - the compute region for AI Platform Training and Prediction\nARTIFACT_STORE - the GCS bucket created during installation of AI Platform Pipelines. The bucket name starts with the kubeflowpipelines- prefix. Alternatively, you can create a new storage bucket to write pipeline artifacts to.", "!gsutil ls", "CUSTOM_SERVICE_ACCOUNT - In the GCP console, click on the Navigation Menu. Navigate to IAM &amp; Admin, then to Service Accounts and use the service account starting with the prefix 'tfx-tuner-caip-service-account'. This enables CloudTuner and the Google Cloud AI Platform extensions Tuner component to work together and allows for distributed and parallel tuning backed by AI Platform Vizier's hyperparameter search algorithm. Please see the lab setup README for setup instructions.\n\n\nENDPOINT - set the ENDPOINT constant to the endpoint of your AI Platform Pipelines instance.
The endpoint to the AI Platform Pipelines instance can be found on the AI Platform Pipelines page in the Google Cloud Console.\n\n\nOpen the SETTINGS for your instance\n\nUse the value of the host variable in the Connect to this Kubeflow Pipelines instance from a Python client via Kubeflow Pipelines SDK section of the SETTINGS window.", "#TODO: Set your environment resource settings here for GCP_REGION, ARTIFACT_STORE_URI, ENDPOINT, and CUSTOM_SERVICE_ACCOUNT.\nGCP_REGION = 'us-central1'\nARTIFACT_STORE_URI = 'gs://dougkelly-sandbox-kubeflowpipelines-default' #Change\nENDPOINT = '60ff837483ecde05-dot-us-central2.pipelines.googleusercontent.com' #Change\nCUSTOM_SERVICE_ACCOUNT = 'tfx-tuner-caip-service-account@dougkelly-sandbox.iam.gserviceaccount.com' #Change\n\nPROJECT_ID = !(gcloud config get-value core/project)\nPROJECT_ID = PROJECT_ID[0]\n\n# Set your resource settings as environment variables. These override the default values in pipeline/config.py.\n%env GCP_REGION={GCP_REGION}\n%env ARTIFACT_STORE_URI={ARTIFACT_STORE_URI}\n%env CUSTOM_SERVICE_ACCOUNT={CUSTOM_SERVICE_ACCOUNT}\n%env PROJECT_ID={PROJECT_ID}", "2.3 Compile pipeline", "PIPELINE_NAME = 'tfx_covertype_lab_04'\nMODEL_NAME = 'tfx_covertype_classifier'\nDATA_ROOT_URI = 'gs://workshop-datasets/covertype/small'\nCUSTOM_TFX_IMAGE = 'gcr.io/{}/{}'.format(PROJECT_ID, PIPELINE_NAME)\nRUNTIME_VERSION = '2.3'\nPYTHON_VERSION = '3.7'\nUSE_KFP_SA=False\nENABLE_TUNING=True\n\n%env PIPELINE_NAME={PIPELINE_NAME}\n%env MODEL_NAME={MODEL_NAME}\n%env DATA_ROOT_URI={DATA_ROOT_URI}\n%env KUBEFLOW_TFX_IMAGE={CUSTOM_TFX_IMAGE}\n%env RUNTIME_VERSION={RUNTIME_VERSION}\n%env PYTHON_VERSION={PYTHON_VERSION}\n%env USE_KFP_SA={USE_KFP_SA}\n%env ENABLE_TUNING={ENABLE_TUNING}\n\n!tfx pipeline compile --engine kubeflow --pipeline_path runner.py", "2.4 Deploy pipeline to AI Platform", "!tfx pipeline create \\\n--pipeline_path=runner.py \\\n--endpoint={ENDPOINT} \\\n--build_target_image={CUSTOM_TFX_IMAGE}", "(optional) If you make
local changes to the pipeline, you can update the deployed package on AI Platform with the following command:", "!tfx pipeline update --pipeline_path runner.py --endpoint {ENDPOINT}", "2.5 Create and monitor pipeline run", "!tfx run create --pipeline_name={PIPELINE_NAME} --endpoint={ENDPOINT}", "2.6 Configure Kubernetes port forwarding\nTo enable access to the ML Metadata GRPC server, configure Kubernetes port forwarding.\nFrom a JupyterLab terminal, execute the following commands:\ngcloud container clusters get-credentials [YOUR CLUSTER] --zone [YOUR CLUSTER ZONE] \nkubectl port-forward service/metadata-grpc-service --namespace [YOUR NAMESPACE] 7000:8080\nConnecting to ML Metadata\nConfigure ML Metadata GRPC client", "grpc_host = 'localhost'\ngrpc_port = 7000\nconnection_config = metadata_store_pb2.MetadataStoreClientConfig()\nconnection_config.host = grpc_host\nconnection_config.port = grpc_port", "Connect to ML Metadata service", "store = metadata_store.MetadataStore(connection_config)", "Important\nA full pipeline run without tuning takes about 40-45 minutes to complete. You need to wait until a pipeline run is complete before proceeding with the steps below.\nExploring ML Metadata\nThe Metadata Store uses the following data model:\n\nArtifactType describes an artifact's type and its properties that are stored in the Metadata Store. These types can be registered on-the-fly with the Metadata Store in code, or they can be loaded in the store from a serialized format. Once a type is registered, its definition is available throughout the lifetime of the store.\nArtifact describes a specific instance of an ArtifactType, and its properties that are written to the Metadata Store.\nExecutionType describes a type of component or step in a workflow, and its runtime parameters.\nExecution is a record of a component run or a step in an ML workflow and the runtime parameters. An Execution can be thought of as an instance of an ExecutionType.
Every time a developer runs an ML pipeline or step, executions are recorded for each step.\nEvent is a record of the relationship between an Artifact and Executions. When an Execution happens, Events record every Artifact that was used by the Execution, and every Artifact that was produced. These records allow for provenance tracking throughout a workflow. By looking at all Events, MLMD knows what Executions happened, what Artifacts were created as a result, and can recurse back from any Artifact to all of its upstream inputs.\nContextType describes a type of conceptual group of Artifacts and Executions in a workflow, and its structural properties. For example: projects, pipeline runs, experiments, owners.\nContext is an instance of a ContextType. It captures the shared information within the group. For example: project name, changelist commit id, experiment annotations. It has a user-defined unique name within its ContextType.\nAttribution is a record of the relationship between Artifacts and Contexts.\nAssociation is a record of the relationship between Executions and Contexts.\n\nList the registered artifact types.", "for artifact_type in store.get_artifact_types():\n    print(artifact_type.name)", "Display the registered execution types.", "for execution_type in store.get_execution_types():\n    print(execution_type.name)", "List the registered context types.", "for context_type in store.get_context_types():\n    print(context_type.name)", "Visualizing TFX artifacts\nRetrieve data analysis and validation artifacts", "with metadata.Metadata(connection_config) as store:\n    schema_artifacts = store.get_artifacts_by_type(standard_artifacts.Schema.TYPE_NAME) \n    stats_artifacts = store.get_artifacts_by_type(standard_artifacts.ExampleStatistics.TYPE_NAME)\n    anomalies_artifacts = store.get_artifacts_by_type(standard_artifacts.ExampleAnomalies.TYPE_NAME)\n\nschema_file = os.path.join(schema_artifacts[-1].uri, 'schema.pbtxt')\nprint("Generated schema
file:{}\".format(schema_file))\n\nstats_path = stats_artifacts[-1].uri\ntrain_stats_file = os.path.join(stats_path, 'train', 'stats_tfrecord')\neval_stats_file = os.path.join(stats_path, 'eval', 'stats_tfrecord')\nprint(\"Train stats file:{}, Eval stats file:{}\".format(\n train_stats_file, eval_stats_file))\n\nanomalies_path = anomalies_artifacts[-1].uri\ntrain_anomalies_file = os.path.join(anomalies_path, 'train', 'anomalies.pbtxt')\neval_anomalies_file = os.path.join(anomalies_path, 'eval', 'anomalies.pbtxt')\n\nprint(\"Train anomalies file:{}, Eval anomalies file:{}\".format(\n train_anomalies_file, eval_anomalies_file))", "Visualize schema", "schema = tfdv.load_schema_text(schema_file)\ntfdv.display_schema(schema=schema)", "Visualize statistics\nExercise: looking at the features visualized below, answer the following questions:\n\nWhich feature transformations would you apply to each feature with TF Transform?\nAre there data quality issues with certain features that may impact your model performance? 
How might you deal with it?", "train_stats = tfdv.load_statistics(train_stats_file)\neval_stats = tfdv.load_statistics(eval_stats_file)\ntfdv.visualize_statistics(lhs_statistics=eval_stats, rhs_statistics=train_stats,\n lhs_name='EVAL_DATASET', rhs_name='TRAIN_DATASET')", "Visualize anomalies", "train_anomalies = tfdv.load_anomalies_text(train_anomalies_file)\ntfdv.display_anomalies(train_anomalies)\n\neval_anomalies = tfdv.load_anomalies_text(eval_anomalies_file)\ntfdv.display_anomalies(eval_anomalies)", "Retrieve model artifacts", "with metadata.Metadata(connection_config) as store:\n model_eval_artifacts = store.get_artifacts_by_type(standard_artifacts.ModelEvaluation.TYPE_NAME)\n hyperparam_artifacts = store.get_artifacts_by_type(standard_artifacts.HyperParameters.TYPE_NAME)\n \nmodel_eval_path = model_eval_artifacts[-1].uri\nprint(\"Generated model evaluation result:{}\".format(model_eval_path))\nbest_hparams_path = os.path.join(hyperparam_artifacts[-1].uri, 'best_hyperparameters.txt')\nprint(\"Generated model best hyperparameters result:{}\".format(best_hparams_path))", "Return best hyperparameters", "# Latest pipeline run Tuner search space.\njson.loads(file_io.read_file_to_string(best_hparams_path))['space']\n\n# Latest pipeline run Tuner searched best_hyperparameters artifacts.\njson.loads(file_io.read_file_to_string(best_hparams_path))['values']", "Visualize model evaluations\nExercise: review the model evaluation results below and answer the following questions:\n\nWhich Wilderness Area had the highest accuracy?\nWhich Wilderness Area had the lowest performance? Why do you think that is? What are some steps you could take to improve your next model runs?", "eval_result = tfma.load_eval_result(model_eval_path)\ntfma.view.render_slicing_metrics(\n eval_result, slicing_column='Wilderness_Area')", "Debugging tip: If the TFMA visualization of the Evaluator results do not render, try switching to view in a Classic Jupyter Notebook. 
You do so by clicking Help &gt; Launch Classic Notebook and re-opening the notebook and running the above cell to see the interactive TFMA results.\nLicense\n<font size=-1>Licensed under the Apache License, Version 2.0 (the \\\"License\\\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \\\"AS IS\\\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.</font>" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
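The artifact-retrieval cells above take `artifacts[-1]` as "the most recent run's output". Since MLMD assigns monotonically increasing ids as artifacts are registered, an explicit way to select the latest artifact of a type is to take the maximum id (a sketch with stand-in artifact objects, not the real MLMD client):

```python
from types import SimpleNamespace

def latest_artifact(artifacts):
    # assumption: MLMD artifact ids increase monotonically, so the
    # largest id belongs to the most recently registered artifact
    return max(artifacts, key=lambda a: a.id)

# stand-in artifacts, each carrying only the fields the notebook uses
arts = [SimpleNamespace(id=3, uri='gs://run-a'),
        SimpleNamespace(id=7, uri='gs://run-b'),
        SimpleNamespace(id=5, uri='gs://run-c')]
chosen = latest_artifact(arts)
```

This avoids relying on the ordering of the list returned by `get_artifacts_by_type`.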
gabriel-astudillo/jupyter
Rendimiento Computacional.ipynb
gpl-3.0
[ "Software description\nState diagram\n<img src=\"https://raw.githubusercontent.com/gabriel-astudillo/jupyter/master/diagramaEstados.png\">\nComputational performance\nThe execution times of the implemented program were measured with the omp_get_wtime() function in the blocks indicated in the following flow diagram\n<img src=\"https://raw.githubusercontent.com/gabriel-astudillo/jupyter/master/diag_general.png\">\nFor each grid size (128x128 and 256x256 pixels) and for each \"total simulation time T\", with T={2000, 4000, 8000}, 10 measurements were taken; their averages were stored in the files \"tejec-128x128.txt\" and \"tejec-256x256.txt\". These files are available at https://github.com/gabriel-astudillo/jupyter.\nThe code that produces the comparative plot of execution times for the different grid sizes, simulation times and numbers of threads is the following:", "import pandas as pd\nimport numpy as np\nimport scipy as sp\nimport plotly.plotly as py\nimport plotly.figure_factory as ff\nimport plotly\nfrom plotly.graph_objs import *\nplotly.tools.set_credentials_file(username='gastudillo', api_key='OiqcwUGj4Jmtn1KtY6oR')", "We create the structures (data frames) that store the execution-time data for both grids", "header_names = [\"threads\",\"T2000Mean\",\"T2000Std\",\"T4000Mean\",\"T4000Std\",\"T8000Mean\",\"T8000Std\"]\n\ndf_128x128 = pd.read_csv(\"https://raw.githubusercontent.com/gabriel-astudillo/jupyter/master/tejec-128x128.txt\",\n delim_whitespace = True,\n header = None, \n names = header_names\n )\n\n\ndf_256x256 = pd.read_csv(\"https://raw.githubusercontent.com/gabriel-astudillo/jupyter/master/tejec-256x256.txt\",\n delim_whitespace = True,\n header = None, \n names = header_names\n )", "From the data frames, we build arrays holding the data to be plotted.
The independent parameter is the number of threads, stored in the array \"threads\". The arrays \"time_exec_128x128\" and \"time_exec_256x256\" store the execution times for each simulation time, ordered by number of threads.", "threads = df_128x128.threads\n\ntime_exec_128x128 = [df_128x128.T2000Mean, df_128x128.T4000Mean, df_128x128.T8000Mean]\ntime_exec_256x256 = [df_256x256.T2000Mean, df_256x256.T4000Mean, df_256x256.T8000Mean]\n", "Finally, the \"make_figure\" function plots the execution times stored in the arrays above.", "def make_figure(times_exec_128, times_exec_256):\n trace_2000_128 = {\n \"x\" : threads,\n \"y\" : times_exec_128[0],\n \"name\":'128x128,T=2000',\n \"line\": {\n \"color\": \"rgb(#ff, #cd, #d2)\",\n \"width\": 3\n }\n }\n \n trace_4000_128 = {\n \"x\":threads,\n \"y\" : times_exec_128[1],\n \"name\":'128x128,T=4000',\n \"line\": {\n \"color\": \"rgb(#A2, #D5, #F2)\", \n \"width\": 3\n } \n }\n \n trace_8000_128 = {\n \"x\":threads,\n \"y\" : times_exec_128[2],\n \"name\":'128x128,T=8000',\n \"line\": {\n \"color\": \"rgb(#59, #60, #6D)\",\n \"width\": 3\n }\n }\n \n trace_2000_256 = {\n \"x\" : threads,\n \"y\" : times_exec_256[0],\n \"name\":'256x256,T=2000',\n \"line\": {\n \"color\": \"rgb(#ff, #cd, #d2)\",\n \"width\": 3\n }\n }\n \n trace_4000_256 = {\n \"x\":threads,\n \"y\" : times_exec_256[1],\n \"name\":'256x256,T=4000',\n \"line\": {\n \"color\": \"rgb(#A2, #D5, #F2)\", \n \"width\": 3\n } \n }\n \n trace_8000_256 = {\n \"x\":threads,\n \"y\" : times_exec_256[2],\n \"name\":'256x256,T=8000',\n \"line\": {\n \"color\": \"rgb(#59, #60, #6D)\",\n \"width\": 3\n }\n }\n \n data = [trace_2000_128, trace_4000_128, trace_8000_128,\n trace_2000_256, trace_4000_256, trace_8000_256]\n \n \n layout = Layout(title=\"Tiempo de Ejecución por período de simulación<br>para grillas de 128x128 y 256x256 \",\n xaxis=dict(\n title='Threads',\n autotick=False,\n
ticks='inside',\n tick0=0,\n dtick=1\n ),\n yaxis=dict(\n title='Tiempo de Ejecución (en segundos)',\n showline=True,\n ticks='outside'\n )\n )\n \n fig = Figure(data=data, layout=layout)\n \n return fig\n\nfig_128x128 = make_figure(time_exec_128x128, time_exec_256x256)\n#fig_256x256 = make_figure(time_exec_256x256, \"256x256\")\n\npy.iplot(fig_128x128, filename='time_exec_all')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
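Beyond the raw execution-time curves, OpenMP scaling is usually summarized as speedup S(p) = T(1)/T(p) and parallel efficiency E(p) = S(p)/p. A small helper for deriving both from a timing series (the numbers below are illustrative, not the measured values from the tejec-*.txt files):

```python
def scaling_metrics(threads, times):
    """Speedup and parallel efficiency relative to the single-thread run.
    Assumes threads[0] == 1 so times[0] is the serial baseline."""
    t1 = times[0]
    speedup = [t1 / t for t in times]
    efficiency = [s / p for s, p in zip(speedup, threads)]
    return speedup, efficiency

# illustrative timings for one grid size and simulation length
threads = [1, 2, 4, 8]
times = [12.0, 6.4, 3.5, 2.2]
speedup, efficiency = scaling_metrics(threads, times)
```

Plotting efficiency against thread count makes it easy to see where the grid sizes stop scaling, which the raw-time curves above only show indirectly.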
johanvdw/niche_vlaanderen
docs/advanced_usage.ipynb
mit
[ "Advanced usage\nUsing config files\nInstead of specifying all inputs using set_input, it is possible to use a config file. A config file can be loaded using read_config_file or it can be read and executed immediately by using run_config_file.\nThe syntax of the config file is explained more in detail in Niche Configuration file, but is already introduced here because it will be used in the next examples.\nIf you want to recreate the examples below, the config files can be found under the docs folder, so if you extract all the data the you should be able to run the examples from the notebook. \nComparing Niche classes\nNiche models can be compared using a NicheDelta class. This can be used to compare different scenario's. \nIn our example, we will compare the results of the running Niche two times, once using a simple model and once using a full model.", "import niche_vlaanderen as nv\nimport matplotlib.pyplot as plt\n\nsimple = nv.Niche()\nsimple.run_config_file(\"simple.yml\")\n\nfull = nv.Niche()\nfull.run_config_file(\"full.yml\")\n\ndelta = nv.NicheDelta(simple, full)\nax = delta.plot(7)\nplt.show()", "It is also possible to show the areas in a dataframe by using the table attribute.", "delta.table.head()", "Like Niche, NicheDelta also has a write method, which takes a directory as an argument.", "delta.write(\"comparison_output\", overwrite_files=True)", "Creating deviation maps\nIn many cases, it is not only important to find out which vegetation types are possible given the different input files, but also to find out how much change would be required to mhw or mlw to allow a certain vegetation type.\nTo create deviation maps, it is necessary to run a model with the deviation option.", "dev = nv.Niche()\ndev.set_input(\"mhw\",\"../testcase/zwarte_beek/input/mhw.asc\")\ndev.set_input(\"mlw\",\"../testcase/zwarte_beek/input/mhw.asc\")\ndev.set_input(\"soil_code\",\"../testcase/zwarte_beek/input/soil_code.asc\")\ndev.run(deviation=True, full_model=False)", 
"The deviation maps can be plotted by specifying either mhw or mlw with the vegetation type, e.g. mhw_14 (to show the deviation between mhw and the required mhw for vegetation type 14).\nPositive values indicate that the actual condition is too dry for the vegetation type. Negative values indicate that the actual condition is too wet for the vegetation type.", "dev.plot(\"mlw\")\ndev.plot(\"mlw_14\")\n\nplt.show()", "Creating statistics per shape object\nNiche also contains a helper function that allows one to calculate the possible vegetation by using a vector dataset, such as a .geojson or .shp file.\nThe vegetation is returned as a pandas dataframe, where shapes are identified by their id and the area not covered by any shape gets shape_id -1.", "df = full.zonal_stats(\"../testcase/zwarte_beek/input/study_area_l72.geojson\")\ndf", "Using abiotic grids\nIn certain cases the intermediary grids of Acidity or NutrientLevel need changes to compensate for specific circumstances.\nIn that case it is possible to run a Niche model, make some adjustments to the resulting grids, and then use the adjusted grids as abiotic input.", "import niche_vlaanderen as nv\nimport matplotlib.pyplot as plt\n\nfull = nv.Niche()\nfull.run_config_file(\"full.yml\")\nfull.write(\"output_abiotic\", overwrite_files=True)", "Now it is possible to adapt the acidity and nutrient_level grids outside Niche. For this demo, we will use some Python magic to make all nutrient levels one level lower. Note that there is no need to do this in Python; any other tool could be used as well. 
So if you don't understand this code - don't panic (and ignore the warning)!", "import rasterio\nwith rasterio.open(\"output_abiotic/full_nutrient_level.tif\") as src:\n    nutrient_level = src.read(1)\n    profile = src.profile\n    nodata = src.nodatavals[0]\n\nnutrient_level[nutrient_level != nodata] = nutrient_level[nutrient_level != nodata] - 1\n\n# we cannot have nutrient level 0, so we set all places where this occurs to 1\nnutrient_level[nutrient_level == 0] = 1\n\nwith rasterio.open(\"output_abiotic/adjusted_nutrient.tif\", 'w', **profile) as dst:\n    dst.write(nutrient_level, 1)", "Next we will create a new Niche model using the same options as our previous full model, but we will also add the previously calculated acidity and nutrient level values as input, and run with the abiotic=True option. Note that we use the read_config_file method (and not run_config_file) because we still want to edit the configuration before running.", "adjusted = nv.Niche()\nadjusted.read_config_file(\"full.yml\")\nadjusted.set_input(\"acidity\", \"output_abiotic/full_acidity.tif\")\nadjusted.set_input(\"nutrient_level\", \"output_abiotic/adjusted_nutrient.tif\")\nadjusted.name = \"adjusted\"\nadjusted.run(abiotic=True)\n\nadjusted.plot(7)\nfull.plot(7)\nplt.show()", "Overwriting standard code tables\nOne is free to adapt the standard code tables that are used by NICHE. By specifying the paths to the adapted code tables in a NICHE class object, the standard code tables can be overwritten. In this way, standard model functioning can be tweaked. 
However, it is strongly advised to use ecological data that is reviewed by experts and to have in-depth knowledge of the model functioning.\nThe possible code tables that can be adapted and set within a NICHE object are:\nct_acidity, ct_soil_mlw_class, ct_soil_codes, lnk_acidity, ct_seepage, ct_vegetation, ct_management, ct_nutrient_level and ct_mineralisation\nAfter adapting the vegetation code table for type 7 (Caricion gracilis) on peaty soil (V) by randomly altering the maximum mhw and mlw to 5 and 4 cm respectively (i.e. below ground, instead of the standard values of -28 and -29 cm) and saving the file to ct_vegetation_adj7.csv, the adjusted model can be built and run.", "adjusted_ct = nv.Niche(ct_vegetation=\"ct_vegetation_adj7.csv\")\nadjusted_ct.read_config_file(\"full.yml\")\nadjusted_ct.run()", "Example of the changed potential area of the Caricion gracilis vegetation type because of the changes set in the vegetation code table:", "adjusted_ct.plot(7)\nfull.plot(7)\nplt.show()", "The potential area shrinks because the range of groundwater levels has become narrower (excluding the wettest places)." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
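The nutrient-level adjustment performed with rasterio in the notebook above boils down to a simple elementwise rule: lower every valid cell by one level, leave nodata cells alone, and clamp level 0 back up to 1. A pure-Python sketch of that rule, using a plain list and a hypothetical nodata value of 255 in place of a raster band:

```python
# Sketch of the nutrient-level adjustment from the niche_vlaanderen
# tutorial above, without rasterio or numpy. NODATA is a made-up
# nodata value; the real one comes from src.nodatavals[0].
NODATA = 255

def lower_nutrient_level(grid):
    """Return a copy of `grid` with each valid level reduced by one (min 1)."""
    adjusted = []
    for value in grid:
        if value == NODATA:
            adjusted.append(value)               # nodata passes through unchanged
        else:
            adjusted.append(max(value - 1, 1))   # cannot go below level 1
    return adjusted

example = [4, 3, 1, NODATA, 2]
print(lower_nutrient_level(example))  # → [3, 2, 1, 255, 1]
```

The numpy version in the tutorial applies the same rule to a whole band at once with boolean masks.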
miykael/nipype_tutorial
notebooks/basic_nodes.ipynb
bsd-3-clause
[ "Nodes\nFrom the Interface tutorial, you learned that interfaces are the core pieces of Nipype that run the code of your desire. But to streamline your analysis and to execute multiple interfaces in a sensible order, you have to put them in something that we call a Node.\nIn Nipype, a node is an object that executes a certain function. This function can be anything from a Nipype interface to a user-specified function or an external script. Each node consists of a name, an interface category, at least one input field, and at least one output field.\nFollowing is a simple node from the utility interface, with the name name_of_node, the input field IN and the output field OUT:\n\nOnce you connect multiple nodes to each other, you create a directed graph. In Nipype we call such graphs either workflows or pipelines. Directed connections can only be established from an output field (below node1_out) of a node to an input field (below node2_in) of another node.\n\nThis is all there is to Nipype: connecting specific nodes with certain functions to other specific nodes with other functions. So let us now take a closer look at the different kinds of nodes that exist and see when they should be used.\nExample of a simple node\nFirst, let us take a look at a simple stand-alone node. In general, a node consists of the following elements:\nnodename = Nodetype(interface_function(), name='labelname')\n\n\nnodename: Variable name of the node in the Python environment.\nNodetype: Type of node to be created. This can be a Node, MapNode or JoinNode.\ninterface_function: Function the node should execute. Can be user-specific or come from an Interface.\nlabelname: Label name of the node in the workflow environment (defines the name of the working directory)\n\nLet us take a look at an example: For this, we need the Node module from Nipype, as well as the Function module. The latter only serves as a support function for this example. 
It isn't a prerequisite for a Node.", "# Import Node and Function module\nfrom nipype import Node, Function\n\n# Create a small example function\ndef add_two(x_input):\n    return x_input + 2\n\n# Create Node\naddtwo = Node(Function(input_names=[\"x_input\"],\n                       output_names=[\"val_output\"],\n                       function=add_two),\n              name='add_node')", "As specified before, addtwo is the nodename, Node is the Nodetype, Function(...) is the interface_function and add_node is the labelname of this node. In this particular case, we created an artificial input field called x_input, an artificial output field called val_output, and specified that this node should run the function add_two().\nBut before we can run this node, we need to declare the value of the input field x_input:", "addtwo.inputs.x_input = 4", "After all input fields are specified, we can run the node with run():", "addtwo.run()\n\ntemp_res = addtwo.run()\n\ntemp_res.outputs", "And what is the output of this node?", "addtwo.result.outputs", "Example of a neuroimaging node\nLet's get back to the BET example from the Interface tutorial. The only thing that differs from that example is that we will put the BET() constructor inside a Node and give it a name.", "# Import BET from the FSL interface\nfrom nipype.interfaces.fsl import BET\n\n# Import the Node module\nfrom nipype import Node\n\n# Create Node\nbet = Node(BET(frac=0.3), name='bet_node')", "In the Interface tutorial, we were able to specify the input file with the in_file parameter. This works exactly the same way in this case, where the interface is in a node. The only thing that we have to be careful about when we use a node is to specify where this node should be executed. 
This is only relevant when we execute a node by itself, but not when we use it in a Workflow.", "in_file = '/data/ds000114/sub-01/ses-test/anat/sub-01_ses-test_T1w.nii.gz'\n\n# Specify node inputs\nbet.inputs.in_file = in_file\nbet.inputs.out_file = '/output/node_T1w_bet.nii.gz'\n\nres = bet.run()", "As we know from the Interface tutorial, the skull-stripped output is stored under res.outputs.out_file. So let's take a look at the before and the after:", "from nilearn.plotting import plot_anat\n%matplotlib inline\nimport matplotlib.pyplot as plt\nplot_anat(in_file, title='BET input', cut_coords=(10,10,10),\n          display_mode='ortho', dim=-1, draw_cross=False, annotate=False);\nplot_anat(res.outputs.out_file, title='BET output', cut_coords=(10,10,10),\n          display_mode='ortho', dim=-1, draw_cross=False, annotate=False);", "Exercise 1\nDefine a Node for IsotropicSmooth (from fsl). Run the node on the T1 image of one of the subjects.", "# write your solution here\n\n# Import the Node module\nfrom nipype import Node\n# Import IsotropicSmooth from the FSL interface\nfrom nipype.interfaces.fsl import IsotropicSmooth\n\n# Define a node\nsmooth_node = Node(IsotropicSmooth(), name=\"smoothing\")\nsmooth_node.inputs.in_file = '/data/ds000114/sub-01/ses-test/anat/sub-01_ses-test_T1w.nii.gz'\nsmooth_node.inputs.fwhm = 4\nsmooth_node.inputs.out_file = '/output/node_T1w_smooth.nii.gz'\nsmooth_res = smooth_node.run()", "Exercise 2\nPlot the original image and the image after smoothing.", "# write your solution here\n\nfrom nilearn.plotting import plot_anat\n%pylab inline\nplot_anat(smooth_node.inputs.in_file, title='smooth input', cut_coords=(10,10,10),\n          display_mode='ortho', dim=-1, draw_cross=False, annotate=False);\nplot_anat(smooth_res.outputs.out_file, title='smooth output', cut_coords=(10,10,10),\n          display_mode='ortho', dim=-1, draw_cross=False, annotate=False);" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
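The Node pattern described in the Nipype tutorial above (named input fields set before running, named output fields collected afterwards) can be sketched without Nipype at all. This toy class is not part of Nipype; it only mimics the input/run/output cycle of `Node(Function(...))`:

```python
# A minimal stand-in (not Nipype) illustrating the Node idea from the
# tutorial above: wrap a function, set named inputs, run, read outputs.
class TinyNode:
    def __init__(self, func, input_names, output_names, name):
        self.func = func
        self.input_names = input_names
        self.output_names = output_names
        self.name = name          # label of the node
        self.inputs = {}          # filled in before run()
        self.outputs = None       # filled in by run()

    def run(self):
        args = [self.inputs[n] for n in self.input_names]   # gather inputs in order
        result = self.func(*args)
        if len(self.output_names) == 1:
            result = (result,)                              # normalize to a tuple
        self.outputs = dict(zip(self.output_names, result))
        return self.outputs

def add_two(x_input):
    return x_input + 2

addtwo = TinyNode(add_two, ["x_input"], ["val_output"], name="add_node")
addtwo.inputs["x_input"] = 4
print(addtwo.run())  # → {'val_output': 6}
```

Real Nipype nodes add far more (working directories, caching, failsafes), but the data flow is the same.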
waymo-research/waymo-open-dataset
tutorial/tutorial.ipynb
apache-2.0
[ "Waymo Open Dataset Tutorial\n\nWebsite: https://waymo.com/open\nGitHub: https://github.com/waymo-research/waymo-open-dataset\n\nThis tutorial demonstrates how to use the Waymo Open Dataset with two frames of data. Visit the Waymo Open Dataset Website to download the full dataset.\nTo use, open this notebook in Colab.\nUncheck the box \"Reset all runtimes before running\" if you run this colab directly from the remote kernel. Alternatively, you can make a copy before trying to run it by following \"File > Save copy in Drive ...\".\nInstall waymo_open_dataset package", "!rm -rf waymo-od > /dev/null\n!git clone https://github.com/waymo-research/waymo-open-dataset.git waymo-od\n!cd waymo-od && git branch -a\n!cd waymo-od && git checkout remotes/origin/master\n!pip3 install --upgrade pip\n\n!pip3 install waymo-open-dataset-tf-2-1-0==1.2.0\n\nimport os\nimport tensorflow.compat.v1 as tf\nimport math\nimport numpy as np\nimport itertools\n\ntf.enable_eager_execution()\n\nfrom waymo_open_dataset.utils import range_image_utils\nfrom waymo_open_dataset.utils import transform_utils\nfrom waymo_open_dataset.utils import frame_utils\nfrom waymo_open_dataset import dataset_pb2 as open_dataset", "Read one frame\nEach file in the dataset is a sequence of frames ordered by frame start timestamps. We have extracted two frames from the dataset to demonstrate the dataset format.", "FILENAME = '/content/waymo-od/tutorial/frames'\ndataset = tf.data.TFRecordDataset(FILENAME, compression_type='')\nfor data in dataset:\n frame = open_dataset.Frame()\n frame.ParseFromString(bytearray(data.numpy()))\n break\n\n(range_images, camera_projections,\n _, range_image_top_pose) = frame_utils.parse_range_image_and_camera_projection(\n frame)", "Examine frame context\nRefer to dataset.proto for the data format. 
The context contains shared information among all frames in the scene.", "print(frame.context)", "Visualize Camera Images and Camera Labels", "import matplotlib.pyplot as plt\nimport matplotlib.patches as patches\n\ndef show_camera_image(camera_image, camera_labels, layout, cmap=None):\n \"\"\"Show a camera image and the given camera labels.\"\"\"\n\n ax = plt.subplot(*layout)\n\n # Draw the camera labels.\n for camera_labels in frame.camera_labels:\n # Ignore camera labels that do not correspond to this camera.\n if camera_labels.name != camera_image.name:\n continue\n\n # Iterate over the individual labels.\n for label in camera_labels.labels:\n # Draw the object bounding box.\n ax.add_patch(patches.Rectangle(\n xy=(label.box.center_x - 0.5 * label.box.length,\n label.box.center_y - 0.5 * label.box.width),\n width=label.box.length,\n height=label.box.width,\n linewidth=1,\n edgecolor='red',\n facecolor='none'))\n\n # Show the camera image.\n plt.imshow(tf.image.decode_jpeg(camera_image.image), cmap=cmap)\n plt.title(open_dataset.CameraName.Name.Name(camera_image.name))\n plt.grid(False)\n plt.axis('off')\n\nplt.figure(figsize=(25, 20))\n\nfor index, image in enumerate(frame.images):\n show_camera_image(image, frame.camera_labels, [3, 3, index+1])", "Visualize Range Images", "plt.figure(figsize=(64, 20))\ndef plot_range_image_helper(data, name, layout, vmin = 0, vmax=1, cmap='gray'):\n \"\"\"Plots range image.\n\n Args:\n data: range image data\n name: the image title\n layout: plt layout\n vmin: minimum value of the passed data\n vmax: maximum value of the passed data\n cmap: color map\n \"\"\"\n plt.subplot(*layout)\n plt.imshow(data, cmap=cmap, vmin=vmin, vmax=vmax)\n plt.title(name)\n plt.grid(False)\n plt.axis('off')\n\ndef get_range_image(laser_name, return_index):\n \"\"\"Returns range image given a laser name and its return index.\"\"\"\n return range_images[laser_name][return_index]\n\ndef show_range_image(range_image, layout_index_start = 1):\n 
\"\"\"Shows range image.\n\n Args:\n range_image: the range image data from a given lidar of type MatrixFloat.\n layout_index_start: layout offset\n \"\"\"\n range_image_tensor = tf.convert_to_tensor(range_image.data)\n range_image_tensor = tf.reshape(range_image_tensor, range_image.shape.dims)\n lidar_image_mask = tf.greater_equal(range_image_tensor, 0)\n range_image_tensor = tf.where(lidar_image_mask, range_image_tensor,\n tf.ones_like(range_image_tensor) * 1e10)\n range_image_range = range_image_tensor[...,0] \n range_image_intensity = range_image_tensor[...,1]\n range_image_elongation = range_image_tensor[...,2]\n plot_range_image_helper(range_image_range.numpy(), 'range',\n [8, 1, layout_index_start], vmax=75, cmap='gray')\n plot_range_image_helper(range_image_intensity.numpy(), 'intensity',\n [8, 1, layout_index_start + 1], vmax=1.5, cmap='gray')\n plot_range_image_helper(range_image_elongation.numpy(), 'elongation',\n [8, 1, layout_index_start + 2], vmax=1.5, cmap='gray')\nframe.lasers.sort(key=lambda laser: laser.name)\nshow_range_image(get_range_image(open_dataset.LaserName.TOP, 0), 1)\nshow_range_image(get_range_image(open_dataset.LaserName.TOP, 1), 4)", "Point Cloud Conversion and Visualization", "points, cp_points = frame_utils.convert_range_image_to_point_cloud(\n frame,\n range_images,\n camera_projections,\n range_image_top_pose)\npoints_ri2, cp_points_ri2 = frame_utils.convert_range_image_to_point_cloud(\n frame,\n range_images,\n camera_projections,\n range_image_top_pose,\n ri_index=1)\n\n# 3d points in vehicle frame.\npoints_all = np.concatenate(points, axis=0)\npoints_all_ri2 = np.concatenate(points_ri2, axis=0)\n# camera projection corresponding to each point.\ncp_points_all = np.concatenate(cp_points, axis=0)\ncp_points_all_ri2 = np.concatenate(cp_points_ri2, axis=0)", "Examine number of points in each lidar sensor.\nFirst return.", "print(points_all.shape)\nprint(cp_points_all.shape)\nprint(points_all[0:2])\nfor i in range(5):\n 
print(points[i].shape)\n    print(cp_points[i].shape)", "Second return.", "print(points_all_ri2.shape)\nprint(cp_points_all_ri2.shape)\nprint(points_all_ri2[0:2])\nfor i in range(5):\n    print(points_ri2[i].shape)\n    print(cp_points_ri2[i].shape)", "Show point cloud\n3D point clouds are rendered using an internal tool, which is unfortunately not publicly available yet. Here is an example of what they look like.", "from IPython.display import Image, display\ndisplay(Image('/content/waymo-od/tutorial/3d_point_cloud.png'))", "Visualize Camera Projection", "images = sorted(frame.images, key=lambda i:i.name)\ncp_points_all_concat = np.concatenate([cp_points_all, points_all], axis=-1)\ncp_points_all_concat_tensor = tf.constant(cp_points_all_concat)\n\n# The distance between lidar points and vehicle frame origin.\npoints_all_tensor = tf.norm(points_all, axis=-1, keepdims=True)\ncp_points_all_tensor = tf.constant(cp_points_all, dtype=tf.int32)\n\nmask = tf.equal(cp_points_all_tensor[..., 0], images[0].name)\n\ncp_points_all_tensor = tf.cast(tf.gather_nd(\n    cp_points_all_tensor, tf.where(mask)), dtype=tf.float32)\npoints_all_tensor = tf.gather_nd(points_all_tensor, tf.where(mask))\n\nprojected_points_all_from_raw_data = tf.concat(\n    [cp_points_all_tensor[..., 1:3], points_all_tensor], axis=-1).numpy()\n\ndef rgba(r):\n  \"\"\"Generates a color based on range.\n\n  Args:\n    r: the range value of a given point.\n  Returns:\n    The color for a given range\n  \"\"\"\n  c = plt.get_cmap('jet')((r % 20.0) / 20.0)\n  c = list(c)\n  c[-1] = 0.5  # alpha\n  return c\n\ndef plot_image(camera_image):\n  \"\"\"Plot a camera image.\"\"\"\n  plt.figure(figsize=(20, 12))\n  plt.imshow(tf.image.decode_jpeg(camera_image.image))\n  plt.grid(False)\n\ndef plot_points_on_image(projected_points, camera_image, rgba_func,\n                         point_size=5.0):\n  \"\"\"Plots points on a camera image.\n\n  Args:\n    projected_points: [N, 3] numpy array. 
The inner dims are\n      [camera_x, camera_y, range].\n    camera_image: jpeg encoded camera image.\n    rgba_func: a function that generates a color from a range value.\n    point_size: the point size.\n\n  \"\"\"\n  plot_image(camera_image)\n\n  xs = []\n  ys = []\n  colors = []\n\n  for point in projected_points:\n    xs.append(point[0])  # width, col\n    ys.append(point[1])  # height, row\n    colors.append(rgba_func(point[2]))\n\n  plt.scatter(xs, ys, c=colors, s=point_size, edgecolors=\"none\")\n\nplot_points_on_image(projected_points_all_from_raw_data,\n                     images[0], rgba, point_size=5.0)", "Install from source code\nThe remaining part of this colab covers details of installing the repo from source code, which provides a richer API.\nInstall dependencies", "!sudo apt install build-essential\n!sudo apt-get install --assume-yes pkg-config zip g++ zlib1g-dev unzip python3 python3-pip\n!wget https://github.com/bazelbuild/bazel/releases/download/0.28.0/bazel-0.28.0-installer-linux-x86_64.sh\n!sudo bash ./bazel-0.28.0-installer-linux-x86_64.sh", "Build and test (this can take 10 mins)\nConfigure .bazelrc. This works with/without Tensorflow. This colab machine has Tensorflow installed.", "!cd waymo-od && ./configure.sh && cat .bazelrc && bazel clean\n\n!cd waymo-od && bazel build ... --show_progress_rate_limit=10.0", "Metrics computation\nThe core metrics computation library is written in C++, so it can be extended to other programming languages. It can compute detection metrics (mAP) and tracking metrics (MOTA). See more information about the metrics on the website.\nWe provide command line tools and TensorFlow ops to call the detection metrics library to compute detection metrics. We will provide a similar wrapper for the tracking metrics library in the future. You are welcome to contribute your wrappers.\nCommand line detection metrics computation\nThe command takes a pair of files for prediction and ground truth. 
Read the comment in waymo_open_dataset/metrics/tools/compute_detection_metrics_main.cc for details of the data format.", "!cd waymo-od && bazel-bin/waymo_open_dataset/metrics/tools/compute_detection_metrics_main waymo_open_dataset/metrics/tools/fake_predictions.bin waymo_open_dataset/metrics/tools/fake_ground_truths.bin", "TensorFlow custom op\nA TensorFlow op is defined at metrics/ops/metrics_ops.cc. We provide a python wrapper of the op at metrics/ops/py_metrics_ops.py, and a tf.metrics-like implementation of the op at metrics/python/detection_metrics.py. This library requires TensorFlow to be installed.\nInstall TensorFlow and NumPy.", "!pip3 install numpy tensorflow", "Reconfigure .bazelrc such that you can compile the TensorFlow ops", "!cd waymo-od && ./configure.sh && cat .bazelrc", "Run the op and tf.metrics wrapper unit tests which can be referenced as example usage of the libraries.", "!cd waymo-od && bazel test waymo_open_dataset/metrics/ops/... && bazel test waymo_open_dataset/metrics/python/...", "Run all tests in the repo.", "!cd waymo-od && bazel test ...", "Build local PIP package", "!cd waymo-od && export PYTHON_VERSION=3 && ./pip_pkg_scripts/build.sh", "You can install the locally compiled package or access any c++ binary compiled from this." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
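The masking step in `show_range_image` in the Waymo tutorial above uses `tf.where` to replace lidar cells with no return (negative values) by a large fill value before plotting. The same logic has a simple scalar form; this pure-Python sketch mirrors it without TensorFlow:

```python
# Sketch of the no-return masking from show_range_image above:
# lidar cells with a negative value carry no return, so they are
# replaced with a large sentinel (the tutorial uses 1e10 via tf.where).
MISSING = 1e10

def mask_no_return(range_row):
    """Replace invalid (< 0) range values with the MISSING sentinel."""
    return [v if v >= 0 else MISSING for v in range_row]

row = [12.5, -1.0, 3.2, -1.0]
masked = mask_no_return(row)  # negative cells become the sentinel value
print(masked)
```

In the tutorial this is done on whole `[H, W, channels]` tensors at once, which is what `tf.greater_equal` plus `tf.where` buys over a per-value loop.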
BrownDwarf/ApJdataFrames
notebooks/Grankin2008.ipynb
mit
[ "ApJdataFrames\nGrankin et al. 2008\nTitle: Results of the ROTOR-program \nAuthors: Grankin et al. \nData is from this paper:\nhttp://www.aanda.org/articles/aa/full/2008/09/aa8476-07/aa8476-07.html", "import warnings\nwarnings.filterwarnings(\"ignore\")\n\nimport pandas as pd\n\nfrom astropy.io import ascii, votable, misc", "Download Data", "#! mkdir ../data/Grankin08\n\n#! wget -q -O ../data/Grankin08/table1_orig.tex http://www.aanda.org/articles/aa/full/2008/09/aa8476-07/table1.tex \n\n#! wget -q -O ../data/Grankin08/table3.tex http://www.aanda.org/articles/aa/full/2008/09/aa8476-07/table3.tex \\\n\n! ls ../data/Grankin08/", "Ew, it's in $\\LaTeX$ format!\nTable 1", "#! head -n 30 ../data/Grankin08/table1.tex", "Two problems arise from the LaTeX:\n\nThe values with binaries go onto a new line, which foils data reading.\nThe \\farcs LaTeX command screws up the decimal point.\n\nTo deal with these I had to manually delete (horror!) the carriage returns with Sublime Text, then do this:", "#! cp ../data/Grankin08/table1_orig.tex ../data/Grankin08/table1_mod.tex\n\n#! sed -i 's/\\\\farcs/./g' ../data/Grankin08/table1_mod.tex\n\nnames_1 = ['Name', 'HBC', 'SpT', 'JD_min_max', 'N_seasons', 'V_range', 'N_obs', 'avgB_V', 'avgV_R', 'mult', 'ref']\n\ntab1 = pd.read_csv('../data/Grankin08/table1_mod.tex', sep='&',\n                   skiprows=10, names=names_1, engine='python', skipfooter=8)\n\ntab1.tail()\n\ntab1.to_csv('../data/Grankin08/table1.csv', index=False)", "Table 3", "#! tail -n 15 ../data/Grankin08/table3.tex\n\nnames = ['Name', 'Epochs', 'delta_V_min', 'delta_V_max', 'HJD0-24000000', 'Period', 'Ref1', 'Ref2']\n\ntab3 = pd.read_csv('../data/Grankin08/table3.tex', sep='&', comment='\\\\',\n                   skiprows=10, names=names, engine='python', skipfooter=5)\n\ntab3.tail()\n\ntab3.to_csv('../data/Grankin08/table3.csv', index=False)", "Raw data files\nCopied from Vizier and pasted into a text document...", "#! 
head ../data/Grankin08/grankin08_dat_files.txt\n\n#gr_dat = pd.read_csv('../data/Grankin08/grankin08_dat_files.txt', usecols=[0],\n# delim_whitespace=True, names=['filename'])", "Download the data:", "import os", "Only need to run this once:\n```python\nfor i in range(len(gr_dat)):\nfor i in range(3):\nfn = gr_dat.filename[i]\nweb_addr = 'http://vizier.cfa.harvard.edu/vizier/ftp/cats/J/A+A/479/827/phot/'\ncmd = 'curl '+ web_addr + fn +' &gt; ' + '../data/Grankin08/phot/'+fn \nos.system(cmd)\nprint(cmd)\n\n```", "#! gzip -d ../data/Grankin08/phot/*.dat.gz\n\n#gr_dat['dat_fn'] = gr_dat.filename.str[0:-3]", "Add RA, DEC, and other simbad info", "from astroquery.simbad import Simbad\n\ngr_t1 = tab1\n\ngr_t1['HBC_name'] = 'HBC' + gr_t1.HBC\n\ngr_t1['alt_name'] = gr_t1.HBC_name\n\ngr_t1.alt_name[18] = 'TAP 10'\ngr_t1.alt_name[19] = 'TAP 11'\ngr_t1.alt_name[20] = 'TAP 14'\n\nSimbad.add_votable_fields('sptype', 'otype')\n\ngr_t1['pref_name'] = ''\ngr_t1['RA'] = ''\ngr_t1['DEC'] = ''\ngr_t1['SpT_simbad'] = ''\ngr_t1['Otype_simbad'] = ''\n\nN_sources = len(gr_t1)", "You only have to run this once:", "for i in range(N_sources):\n name = gr_t1.Name[i]\n name_alt = gr_t1.alt_name[i]\n result_table = Simbad.query_object(name)\n try:\n RA, DEC = result_table['RA'].data.data[0], result_table['DEC'].data.data[0]\n SpT, Otype = result_table['SP_TYPE'].data.data[0], result_table['OTYPE'].data.data[0]\n print(\"{} was found in Simbad.\".format(name))\n except TypeError:\n print(\"Attempt 1 did not work for {}, trying HBC name: {}...\".format(name, name_alt), end='')\n result_table = Simbad.query_object(name_alt)\n RA, DEC = result_table['RA'].data.data[0], result_table['DEC'].data.data[0]\n SpT, Otype = result_table['SP_TYPE'].data.data[0], result_table['OTYPE'].data.data[0]\n print(' success!')\n name = name_alt\n gr_t1 = gr_t1.set_value(i, 'pref_name', name)\n gr_t1 = gr_t1.set_value(i, 'RA', RA)\n gr_t1 = gr_t1.set_value(i, 'DEC', DEC)\n gr_t1 = gr_t1.set_value(i, 'SpT_simbad', 
SpT.decode())\n gr_t1 = gr_t1.set_value(i, 'Otype_simbad', Otype.decode())", "python\nfor i in range(N_sources):\n name = gr_t1.Name[i]\n name_alt = gr_t1.alt_name[i]\n result_table = Simbad.query_object(name)\n try:\n RA, DEC = result_table['RA'].data.data[0], result_table['DEC'].data.data[0]\n SpT, Otype = result_table['SP_TYPE'].data.data[0], result_table['OTYPE'].data.data[0]\n print(\"{} was found in Simbad.\".format(name))\n except TypeError:\n print(\"Attempt 1 did not work for {}, trying HBC name: {}...\".format(name, name_alt), end='')\n result_table = Simbad.query_object(name_alt)\n RA, DEC = result_table['RA'].data.data[0], result_table['DEC'].data.data[0]\n SpT, Otype = result_table['SP_TYPE'].data.data[0], result_table['OTYPE'].data.data[0]\n print(' success!')\n name = name_alt\n gr_t1 = gr_t1.set_value(i, 'pref_name', name)\n gr_t1 = gr_t1.set_value(i, 'RA', RA)\n gr_t1 = gr_t1.set_value(i, 'DEC', DEC)\n gr_t1 = gr_t1.set_value(i, 'SpT_simbad', SpT.decode())\n gr_t1 = gr_t1.set_value(i, 'Otype_simbad', Otype.decode())", "gr_t1.tail()\n\ngr_t1.to_csv('../data/Grankin08/table1_plus.csv', index=False)\n\ntab3.to_csv('../data/Grankin08/table3.csv', index=False)", "The end!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
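The `sed` step in the Grankin notebook above strips the LaTeX `\farcs` markup (an arcsecond decimal point) so pandas can parse the table. The same cleanup has a one-line Python equivalent, shown here as a sketch on a made-up table row:

```python
# Python equivalent of the tutorial's `sed -i 's/\\farcs/./g'` step:
# \farcs marks an arcsecond decimal point in LaTeX, so replacing it
# with '.' turns values like 1\farcs5 into plain 1.5 for pandas.
def clean_farcs(line):
    return line.replace(r"\farcs", ".")

# hypothetical table row, for illustration only
print(clean_farcs(r"T Tau & 1\farcs5 & ..."))  # → T Tau & 1.5 & ...
```

Doing this in Python (before `pd.read_csv`) avoids shelling out and keeps the whole pipeline in the notebook.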
mne-tools/mne-tools.github.io
stable/_downloads/299b3deaa8eb66e88d34f06090d06628/evoked_ers_source_power.ipynb
bsd-3-clause
[ "%matplotlib inline", "Compute evoked ERS source power using DICS, LCMV beamformer, and dSPM\nHere we examine 3 ways of localizing event-related synchronization (ERS) of\nbeta band activity in this dataset: somato-dataset using\n:term:DICS, :term:LCMV beamformer, and :term:dSPM applied to active and\nbaseline covariance matrices.", "# Authors: Luke Bloy <luke.bloy@gmail.com>\n# Eric Larson <larson.eric.d@gmail.com>\n#\n# License: BSD-3-Clause\n\nimport os.path as op\n\nimport numpy as np\nimport mne\nfrom mne.cov import compute_covariance\nfrom mne.datasets import somato\nfrom mne.time_frequency import csd_morlet\nfrom mne.beamformer import (make_dics, apply_dics_csd, make_lcmv,\n apply_lcmv_cov)\nfrom mne.minimum_norm import (make_inverse_operator, apply_inverse_cov)\n\nprint(__doc__)", "Reading the raw data and creating epochs:", "data_path = somato.data_path()\nsubject = '01'\ntask = 'somato'\nraw_fname = op.join(data_path, 'sub-{}'.format(subject), 'meg',\n 'sub-{}_task-{}_meg.fif'.format(subject, task))\n\n# crop to 5 minutes to save memory\nraw = mne.io.read_raw_fif(raw_fname).crop(0, 300)\n\n# We are interested in the beta band (12-30 Hz)\nraw.load_data().filter(12, 30)\n\n# The DICS beamformer currently only supports a single sensor type.\n# We'll use the gradiometers in this example.\npicks = mne.pick_types(raw.info, meg='grad', exclude='bads')\n\n# Read epochs\nevents = mne.find_events(raw)\nepochs = mne.Epochs(raw, events, event_id=1, tmin=-1.5, tmax=2, picks=picks,\n preload=True, decim=3)\n\n# Read forward operator and point to freesurfer subject directory\nfname_fwd = op.join(data_path, 'derivatives', 'sub-{}'.format(subject),\n 'sub-{}_task-{}-fwd.fif'.format(subject, task))\nsubjects_dir = op.join(data_path, 'derivatives', 'freesurfer', 'subjects')\n\nfwd = mne.read_forward_solution(fname_fwd)", "Compute covariances\nERS activity starts at 0.5 seconds after stimulus onset. 
Because these\ndata have been processed by MaxFilter directly (rather than MNE-Python's\nversion), we have to be careful to compute the rank with a more conservative\nthreshold in order to get the correct data rank (64). Once this is used in\ncombination with an advanced covariance estimator like \"shrunk\", the rank\nwill be correctly preserved.", "rank = mne.compute_rank(epochs, tol=1e-6, tol_kind='relative')\nactive_win = (0.5, 1.5)\nbaseline_win = (-1, 0)\nbaseline_cov = compute_covariance(epochs, tmin=baseline_win[0],\n tmax=baseline_win[1], method='shrunk',\n rank=rank, verbose=True)\nactive_cov = compute_covariance(epochs, tmin=active_win[0], tmax=active_win[1],\n method='shrunk', rank=rank, verbose=True)\n\n# Weighted averaging is already in the addition of covariance objects.\ncommon_cov = baseline_cov + active_cov\nmne.viz.plot_cov(baseline_cov, epochs.info)", "Compute some source estimates\nHere we will use DICS, LCMV beamformer, and dSPM.\nSee ex-inverse-source-power for more information about DICS.", "def _gen_dics(active_win, baseline_win, epochs):\n freqs = np.logspace(np.log10(12), np.log10(30), 9)\n csd = csd_morlet(epochs, freqs, tmin=-1, tmax=1.5, decim=20)\n csd_baseline = csd_morlet(epochs, freqs, tmin=baseline_win[0],\n tmax=baseline_win[1], decim=20)\n csd_ers = csd_morlet(epochs, freqs, tmin=active_win[0], tmax=active_win[1],\n decim=20)\n filters = make_dics(epochs.info, fwd, csd.mean(), pick_ori='max-power',\n reduce_rank=True, real_filter=True, rank=rank)\n stc_base, freqs = apply_dics_csd(csd_baseline.mean(), filters)\n stc_act, freqs = apply_dics_csd(csd_ers.mean(), filters)\n stc_act /= stc_base\n return stc_act\n\n\n# generate lcmv source estimate\ndef _gen_lcmv(active_cov, baseline_cov, common_cov):\n filters = make_lcmv(epochs.info, fwd, common_cov, reg=0.05,\n noise_cov=None, pick_ori='max-power')\n stc_base = apply_lcmv_cov(baseline_cov, filters)\n stc_act = apply_lcmv_cov(active_cov, filters)\n stc_act /= stc_base\n return 
stc_act\n\n\n# generate mne/dSPM source estimate\ndef _gen_mne(active_cov, baseline_cov, common_cov, fwd, info, method='dSPM'):\n inverse_operator = make_inverse_operator(info, fwd, common_cov)\n stc_act = apply_inverse_cov(active_cov, info, inverse_operator,\n method=method, verbose=True)\n stc_base = apply_inverse_cov(baseline_cov, info, inverse_operator,\n method=method, verbose=True)\n stc_act /= stc_base\n return stc_act\n\n\n# Compute source estimates\nstc_dics = _gen_dics(active_win, baseline_win, epochs)\nstc_lcmv = _gen_lcmv(active_cov, baseline_cov, common_cov)\nstc_dspm = _gen_mne(active_cov, baseline_cov, common_cov, fwd, epochs.info)", "Plot source estimates\nDICS:", "brain_dics = stc_dics.plot(\n hemi='rh', subjects_dir=subjects_dir, subject=subject,\n time_label='DICS source power in the 12-30 Hz frequency band')", "LCMV:", "brain_lcmv = stc_lcmv.plot(\n hemi='rh', subjects_dir=subjects_dir, subject=subject,\n time_label='LCMV source power in the 12-30 Hz frequency band')", "dSPM:", "brain_dspm = stc_dspm.plot(\n hemi='rh', subjects_dir=subjects_dir, subject=subject,\n time_label='dSPM source power in the 12-30 Hz frequency band')" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
sbussmann/sensor-fusion
Code/Vehicle Classification Exercise.ipynb
mit
[ "Goal: Classify vehicle as bus or car based on smartphone sensor data", "import pandas as pd\nfrom scipy.ndimage import gaussian_filter\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.cross_validation import cross_val_score\nfrom sklearn.metrics import accuracy_score\nimport numpy as np\n%matplotlib inline", "Load the processed sensor data for car trip (see \"Process Smartphone Sensor Data\" jupyter notebook). On this trip, I drove my car from home to Censio and back and used SensorLog on my iPhone to track the trip. The total time for the trip was about 15 minutes.", "dfcar = pd.read_csv('../Data/shaneiphone_exp2_processed.csv', index_col='DateTime')", "Load the processed sensor data for bus trip (see \"Process Smartphone Sensor Data\" jupyter notebook). On this trip, I took the 47 bus for about 10 minutes.", "dfbus = pd.read_csv('../Data/shanebus20150827_processed.csv', index_col='DateTime')\n\n# combine into a single dataframe\ndf = pd.concat([dfcar, dfbus])\n\n# Use only userAcceleration and gyroscope data, since these features are expected to generalize well.\nxyz = ['X', 'Y', 'Z']\nmeasures = ['userAcceleration', 'gyroscope']\nbasefeatures = [i + j for i in measures for j in xyz]\nfeatures = [i + j for i in measures for j in xyz]\n\n# Add Gaussian smoothed features\nsmoothfeatures = []\nfor i in features:\n df[i + 'sm'] = gaussian_filter(df[i], 3)\n df[i + '2sm'] = gaussian_filter(df[i], 100)\n smoothfeatures.append(i + 'sm')\n smoothfeatures.append(i + '2sm')\nfeatures.extend(smoothfeatures)\n\n# Generate Jerk signal\njerkfeatures = []\nfor i in features:\n diffsignal = np.diff(df[i])\n df[i + 'jerk'] = np.append(0, diffsignal)\n jerkfeatures.append(i + 'jerk')\nfeatures.extend(jerkfeatures)\n\n# assign class labels\ncar0 = (df.index > '2015-08-25 14:35:00') & \\\n (df.index <= '2015-08-25 14:42:00')\n\ncar1 = (df.index > '2015-08-25 14:43:00') & \\\n (df.index <= '2015-08-25 14:48:00')\n\nbus0 = (df.index > '2015-08-27 10:10:00') & \\\n 
(df.index <= '2015-08-27 10:15:00')\nbus1 = (df.index > '2015-08-27 10:15:00') & \\\n (df.index <= '2015-08-27 10:20:00')\n\nnc = len(df)\ndf['class'] = np.zeros(nc) - 1\ndf['class'][car0] = np.zeros(nc)\ndf['class'][car1] = np.zeros(nc)\ndf['class'][bus0] = np.ones(nc)\ndf['class'][bus1] = np.ones(nc)\n\n# separate into quarters for train and validation\nq1 = df[car0]\nq2 = df[car1]\nq3 = df[bus0]\nq4 = df[bus1]\ntraindf = pd.concat([q2, q4])\nvalidationdf = pd.concat([q1, q3])\n\n# check for NaNs in the dataframes\nprint(traindf.isnull().sum().sum())\nprint(validationdf.isnull().sum().sum())\n\n# drop NaNs\ntraindf = traindf.dropna()\nvalidationdf = validationdf.dropna()\n\n# Make the training and validation sets\nX_train = traindf[features].values\ny_train = traindf['class'].values\nX_test = validationdf[features].values\ny_test = validationdf['class'].values\n\n# train a random forest\nclf = RandomForestClassifier(n_estimators=200)\n\n# get the 5-fold cross-validation score\nscores = cross_val_score(clf, X_train, y_train, cv=5)\nprint(scores, scores.mean(), scores.std())\n\n# apply model to test set\nclf.fit(X_train, y_train)\npredict_y = clf.predict(X_test)\n\n# obtain accuracy score\ntestscore = accuracy_score(y_test, predict_y)\nprint(\"Accuracy score on test set: %6.3f\" % testscore)", "We're not overfitting the data, but we're also not really predicting the vehicle class very well, since we're only right about 65-70% of the time with any prediction we make.", "# Inspect feature importances\nfor i, ifeature in enumerate(features):\n print(ifeature + ': %6.4f' % clf.feature_importances_[i])", "The smoothed gyroscopeZ data is the most useful feature.", "# compare bus gyroscopeZ2sm and car gyroscopeZ2sm\nq1['gyroscopeZ2sm'].plot(color='blue', figsize=(12,6), kind='hist', bins=40, alpha=0.4) # car\nq3['gyroscopeZ2sm'].plot(color='green', kind='hist', bins=40, alpha=0.4) # bus", "Reflecting on this further, it occurs to me that this methodology is identifying 
that the bus trip and the car trip followed different routes and had different numbers and types of turns. A better way to go might be to identify features for each turn (e.g., time to complete turn, average accelerometer and gyroscope signal during turn, etc.) and apply the random forest to those features.\nAnother interesting avenue to pursue is features in Fourier space.", "# Generate Fourier Transform of features\nfftfeatures = []\nfor i in features:\n reals = np.real(np.fft.rfft(df[i]))\n imags = np.imag(np.fft.rfft(df[i]))\n # Interleave the real and imaginary coefficients, then trim to one value\n # per row so the result fits as a DataFrame column\n complexs = np.empty(2 * len(reals))\n complexs[0::2] = reals\n complexs[1::2] = imags\n df['f' + i] = complexs[:len(df)]\n fftfeatures.append('f' + i)\nfeatures.extend(fftfeatures)\n\n# Re-slice the train/validation frames so they pick up the new Fourier columns\ntraindf = pd.concat([df[car1], df[bus1]]).dropna()\nvalidationdf = pd.concat([df[car0], df[bus0]]).dropna()\n\n# Make the training and validation sets\nX_train = traindf[fftfeatures].values\ny_train = traindf['class'].values\nX_test = validationdf[fftfeatures].values\ny_test = validationdf['class'].values\n\n# train a random forest\nclf = RandomForestClassifier(n_estimators=200)\n\n# get the 5-fold cross-validation score\nscores = cross_val_score(clf, X_train, y_train, cv=5)\nprint(scores, scores.mean(), scores.std())\n\n# apply model to test set\nclf.fit(X_train, y_train)\npredict_y = clf.predict(X_test)\n\n# obtain accuracy score\ntestscore = accuracy_score(y_test, predict_y)\nprint(\"Accuracy score on test set: %6.3f\" % testscore)", "Much better accuracy on the test set: 87%. We are definitely overfitting here, since we got 100% accuracy on the training set. We are also probably suffering from the same problem using the time series data, where the classifier learns to classify based on the nature of the route, not the nature of the ride.", "# Inspect feature importances\nfor i, ifeature in enumerate(fftfeatures):\n print(ifeature + ': %6.4f' % clf.feature_importances_[i])", "Interesting that the accelerometer signal is more important here. 
This could be an indication that training in Fourier space helps mitigate the route-based issues that we encountered when using the time series data." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
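The Fourier-space idea in the notebook above can be made shift-invariant by working with magnitude spectra summarized into coarse frequency bands. The sketch below runs on a purely synthetic sensor trace (all numbers are made up; this is not the SensorLog `Results.txt` pipeline), and the band count is an arbitrary illustrative choice:

```python
import numpy as np

rng = np.random.RandomState(0)

# Hypothetical sensor trace: a 5 Hz oscillation sampled at 100 Hz for 5 s,
# plus a little noise (synthetic stand-in for one accelerometer channel).
t = np.arange(500) / 100.0
signal = np.sin(2 * np.pi * 5 * t) + 0.1 * rng.randn(500)

# The magnitude spectrum discards phase, so it is unchanged when events
# shift in time; features built from it reflect the character of the ride
# rather than where along the route things happened.
spectrum = np.abs(np.fft.rfft(signal))

# Summarize into a handful of coarse frequency bands to keep features few.
n_bands = 10
band_energy = np.array([b.mean() for b in np.array_split(spectrum, n_bands)])

print(band_energy.shape)  # (10,)
```

Because the 5 Hz peak falls in the lowest band, `band_energy[0]` dominates; on real data the band edges would need tuning to the frequencies of interest.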
gaufung/Data_Analytics_Learning_Note
DesignPattern/MediatorPattern.ipynb
mit
[ "Mediator Pattern\n1 Code\nConsider a mobile-phone warehouse management system with three kinds of users: sales staff, warehouse managers, and purchasing staff. The requirements: once a sale closes, the salesperson notifies the warehouse subsystem through the sales subsystem; the warehouse subsystem reduces the number of phones available for shipment and informs the purchasing subsystem of the current sales order. When warehouse stock falls below a threshold, it notifies the sales and purchasing subsystems and urges the purchasing subsystem to buy more. Once a purchase completes, the purchasing staff enter the purchase information into the purchasing subsystem, which notifies the sales subsystem that the purchase is done and tells the warehouse subsystem to increase its stock.\nFrom this description, every subsystem communicates with every other subsystem. If we built the operations on the other two subsystems directly into each one, the coupling would be too tight and the design hard to extend. To solve this kind of problem we introduce a new role, the mediator, which reduces the \"mesh\" structure to a \"star\" structure. (To keep the focus on the design pattern, some system details are ignored for now, e.g. how to handle a full warehouse; similar business concerns are also set aside.)\nFirst we build the three subsystems, i.e. three classes (in the mediator pattern these classes are called colleagues):", "class colleague():\n mediator = None\n def __init__(self,mediator):\n self.mediator = mediator\nclass purchaseColleague(colleague):\n def buyStuff(self,num):\n print (\"PURCHASE:Bought %s\"%num)\n self.mediator.execute(\"buy\",num)\n def getNotice(self,content):\n print (\"PURCHASE:Get Notice--%s\"%content)\nclass warehouseColleague(colleague):\n total=0\n threshold=100\n def setThreshold(self,threshold):\n self.threshold=threshold\n def isEnough(self):\n if self.total<self.threshold:\n print (\"WAREHOUSE:Warning...Stock is low... \")\n self.mediator.execute(\"warning\",self.total)\n return False\n else:\n return True\n def inc(self,num):\n self.total+=num\n print (\"WAREHOUSE:Increase %s\"%num)\n self.mediator.execute(\"increase\",num)\n self.isEnough()\n def dec(self,num):\n if num>self.total:\n print (\"WAREHOUSE:Error...Stock is not enough\")\n else:\n self.total-=num\n print (\"WAREHOUSE:Decrease %s\"%num)\n self.mediator.execute(\"decrease\",num)\n self.isEnough()\nclass salesColleague(colleague):\n def sellStuff(self,num):\n print (\"SALES:Sell %s\"%num)\n self.mediator.execute(\"sell\",num)\n def getNotice(self, content):\n print (\"SALES:Get Notice--%s\" % content)", "Each class is given a mediator at initialization, and whenever a class has a change to report it notifies the mediator, which coordinates the operations of all the classes.\nThe mediator is implemented as follows:", "class abstractMediator():\n purchase=\"\"\n sales=\"\"\n warehouse=\"\"\n def setPurchase(self,purchase):\n self.purchase=purchase\n def setWarehouse(self,warehouse):\n self.warehouse=warehouse\n def setSales(self,sales):\n self.sales=sales\n def execute(self,content,num):\n pass\nclass 
stockMediator(abstractMediator):\n def execute(self,content,num):\n print (\"MEDIATOR:Get Info--%s\"%content)\n if content==\"buy\":\n self.warehouse.inc(num)\n self.sales.getNotice(\"Bought %s\"%num)\n elif content==\"increase\":\n self.sales.getNotice(\"Inc %s\"%num)\n self.purchase.getNotice(\"Inc %s\"%num)\n elif content==\"decrease\":\n self.sales.getNotice(\"Dec %s\"%num)\n self.purchase.getNotice(\"Dec %s\"%num)\n elif content==\"warning\":\n self.sales.getNotice(\"Stock is low.%s Left.\"%num)\n self.purchase.getNotice(\"Stock is low. Please Buy More!!! %s Left\"%num)\n elif content==\"sell\":\n self.warehouse.dec(num)\n self.purchase.getNotice(\"Sold %s\"%num)\n else:\n pass", "The execute method is the most important part of the mediator pattern: based on the information passed in by the colleague classes, it directly coordinates each colleague's work.\nIn the client scenario we set the warehouse threshold to 200, purchase 300 units, then sell 120:", "mobile_mediator=stockMediator()  # configure the mediator first\n\nmobile_purchase=purchaseColleague(mobile_mediator)\nmobile_warehouse=warehouseColleague(mobile_mediator)\nmobile_sales=salesColleague(mobile_mediator)\n\nmobile_mediator.setPurchase(mobile_purchase)\nmobile_mediator.setWarehouse(mobile_warehouse)\nmobile_mediator.setSales(mobile_sales)\n\nmobile_warehouse.setThreshold(200)\nmobile_purchase.buyStuff(300)\nmobile_sales.sellStuff(120)", "2 Description\nThe mediator pattern is defined as: encapsulate a set of object interactions in a mediator object. The mediator keeps the objects from interacting with each other explicitly, which loosens their coupling and lets the interactions between them vary independently.\n3 Advantages\n\nReduces dependencies between classes, lowering class-to-class coupling;\nEasy to extend.\n\n4 Usages\n\nWhen a class diagram develops a mesh structure, consider redesigning it as a star, which is exactly where the mediator pattern applies. Examples include airport dispatch systems (coordinating multiple runways, aircraft, and the control tower) and routing systems; in the well-known MVC framework, the C (Controller) is the mediator between the M (Model) and the V (View).\n\n5 Disadvantages\n\nThe mediator itself can become very complex; for example, if the colleague classes have many methods, the execute logic in this example would get complicated." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
dwaithe/ONBI_image_analysis
day4_machineLearning/.ipynb_checkpoints/2015 clustering with ipython practical-checkpoint.ipynb
gpl-2.0
[ "Clustering with ipython\nYou may work with other members of the course if you like. This practical is not assessed although some of the skills will be required for your practical project next week. If you are stuck at any stage please ask a demonstrator.\nDominic Waithe 2015 (c)", "#This line is very important: (It turns on inline the visuals!)\n%pylab inline\nimport csv\n\n#You will need these also. These functions extract the data from the results file.\ndef load_file_return_data(filepath):\n data =[]\n with open(filepath,'r') as f:\n reader=csv.reader(f,delimiter='\\t')\n headers = reader.next()\n for line in reader:\n data.append(line)\n\n name_list = list(enumerate(headers))\n return data, name_list\ndef return_data_with_header(header,data,name_list):\n for idx, name in name_list:\n if name == header:\n ind_to_take = idx\n data_col = []\n for line in data:\n \n data_col.append(float(line[ind_to_take]))\n return np.array(data_col)", "Reading the data from the Results.txt\nThe first stage is to read your Fiji exported data into python.", "#You insert the local path where your exported imageJ \n#where 'Results.txt' is currently written.\n\ndata,name_list = load_file_return_data('Results.txt')\nprint name_list\n#Insert the header you wish to extract here:\nheader1 = 'Area'\ndata_col1 = return_data_with_header(header1,data,name_list)\n#Insert the header you wish to extract here:\nheader2 = 'Mean'\ndata_col2 = return_data_with_header(header2,data,name_list)", "Plotting the data\nIt is always handy to plot relationships", "plot(data_col1,data_col2, 'o')\nxlabel('Area')\nylabel('Mean intensity')", "Looking for clusters in the data.\nIs there structure in the data we can utilise to isolate cells further? 
We want to use the intensity of the cells and their area to further isolate the cells which we segmented.", "#To cluster the data we start by using the kmeans algorithm.\nfrom sklearn.cluster import KMeans,SpectralClustering \n#We initialise a kmeans model in the variable kmeans:\nkmeans = KMeans(n_clusters=2, init='k-means++', n_init=10, max_iter=300, tol=0.0001, precompute_distances='auto')\n#For more information:\n#http://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html\n\n#Now we reorganise the data to a format which is compatible with the kmeans algorithm.\ndata_arr = np.zeros((data_col2.__len__(),2))\ndata_arr[:,0] = np.array(data_col1)\ndata_arr[:,1] = np.array(data_col2)\n#Now we use this data to fit the kmeans model in an unsupervised fashion.\nkmeans.fit(data_arr)\nout = kmeans.predict(data_arr)\n\n#Now we plot the two clusters we have tried to find\nplot(data_col1[out == 1],data_col2[out ==1], 'go')\nplot(data_col1[out == 0],data_col2[out ==0], 'ro')\nxlabel('Area')\nylabel('Mean intensity')\n#The clustering algorithm should have highlighted a good proportion\n#of the cells which are both large and green. These represent the\n#Dendritic cells in the image.", "Special Challenge\nTry and visualise which of your cells is which in the original image. This will involve importing the image and highlighting those areas close to the 'X' and 'Y' coordinates in the saved 'results' table. How did the kmeans perform? There are plenty of clustering algorithms in the sklearn library, so take a look and see if you can improve the classification.\nhttp://scikit-learn.org/stable/modules/clustering.html" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
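The `kmeans.fit` / `kmeans.predict` calls used in the clustering notebook above can be demystified with a from-scratch two-means loop. This is a sketch on synthetic blobs standing in for (area, mean-intensity) pairs, not the actual `Results.txt` measurements:

```python
import numpy as np

rng = np.random.RandomState(42)

# Two synthetic blobs standing in for (area, mean intensity) measurements
# of two cell populations (made-up numbers, not the notebook's data).
group_a = rng.randn(50, 2) + [1.0, 1.0]
group_b = rng.randn(50, 2) + [8.0, 8.0]
data = np.vstack([group_a, group_b])

# Plain two-means loop: assign each point to the nearest centre, move each
# centre to the mean of its assigned points, repeat until centres settle.
centres = data[[0, -1]].astype(float)
for _ in range(50):
    dists = np.linalg.norm(data[:, None, :] - centres[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    new_centres = np.array([data[labels == k].mean(axis=0) for k in range(2)])
    if np.allclose(new_centres, centres):
        break
    centres = new_centres

# Each blob should end up wholly in its own cluster.
print(np.unique(labels[:50]), np.unique(labels[50:]))
```

scikit-learn's `KMeans` adds the k-means++ initialisation and multiple restarts (`n_init`) on top of this basic loop, which is why it is far more robust on less well-separated data.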
mne-tools/mne-tools.github.io
0.22/_downloads/12e915738bb5b40ffcb1157f1c2dee72/plot_receptive_field.ipynb
bsd-3-clause
[ "%matplotlib inline", "Spectro-temporal receptive field (STRF) estimation on continuous data\nThis demonstrates how an encoding model can be fit with multiple continuous\ninputs. In this case, we simulate the model behind a spectro-temporal receptive\nfield (or STRF). First, we create a linear filter that maps patterns in\nspectro-temporal space onto an output, representing neural activity. We fit\na receptive field model that attempts to recover the original linear filter\nthat was used to create this data.\nReferences\nEstimation of spectro-temporal and spatio-temporal receptive fields using\nmodeling with continuous inputs is described in:\n.. [1] Theunissen, F. E. et al. Estimating spatio-temporal receptive\n fields of auditory and visual neurons from their responses to\n natural stimuli. Network 12, 289-316 (2001).\n.. [2] Willmore, B. & Smyth, D. Methods for first-order kernel\n estimation: simple-cell receptive fields from responses to\n natural scenes. Network 14, 553-77 (2003).\n.. [3] Crosse, M. J., Di Liberto, G. M., Bednar, A. & Lalor, E. C. (2016).\n The Multivariate Temporal Response Function (mTRF) Toolbox:\n A MATLAB Toolbox for Relating Neural Signals to Continuous Stimuli.\n Frontiers in Human Neuroscience 10, 604.\n doi:10.3389/fnhum.2016.00604\n.. [4] Holdgraf, C. R. et al. Rapid tuning shifts in human auditory cortex\n enhance speech intelligibility. 
Nature Communications, 7, 13654 (2016).\n doi:10.1038/ncomms13654", "# Authors: Chris Holdgraf <choldgraf@gmail.com>\n# Eric Larson <larson.eric.d@gmail.com>\n#\n# License: BSD (3-clause)\n\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nimport mne\nfrom mne.decoding import ReceptiveField, TimeDelayingRidge\n\nfrom scipy.stats import multivariate_normal\nfrom scipy.io import loadmat\nfrom sklearn.preprocessing import scale\nrng = np.random.RandomState(1337) # To make this example reproducible", "Load audio data\nWe'll read in the audio data from [3]_ in order to simulate a response.\nIn addition, we'll downsample the data along the time dimension in order to\nspeed up computation. Note that depending on the input values, this may\nnot be desired. For example if your input stimulus varies more quickly than\n1/2 the sampling rate to which we are downsampling.", "# Read in audio that's been recorded in epochs.\npath_audio = mne.datasets.mtrf.data_path()\ndata = loadmat(path_audio + '/speech_data.mat')\naudio = data['spectrogram'].T\nsfreq = float(data['Fs'][0, 0])\nn_decim = 2\naudio = mne.filter.resample(audio, down=n_decim, npad='auto')\nsfreq /= n_decim", "Create a receptive field\nWe'll simulate a linear receptive field for a theoretical neural signal. 
This\ndefines how the signal will respond to power in this receptive field space.", "n_freqs = 20\ntmin, tmax = -0.1, 0.4\n\n# To simulate the data we'll create explicit delays here\ndelays_samp = np.arange(np.round(tmin * sfreq),\n np.round(tmax * sfreq) + 1).astype(int)\ndelays_sec = delays_samp / sfreq\nfreqs = np.linspace(50, 5000, n_freqs)\ngrid = np.array(np.meshgrid(delays_sec, freqs))\n\n# We need data to be shaped as n_epochs, n_features, n_times, so swap axes here\ngrid = grid.swapaxes(0, -1).swapaxes(0, 1)\n\n# Simulate a temporal receptive field with a Gabor filter\nmeans_high = [.1, 500]\nmeans_low = [.2, 2500]\ncov = [[.001, 0], [0, 500000]]\ngauss_high = multivariate_normal.pdf(grid, means_high, cov)\ngauss_low = -1 * multivariate_normal.pdf(grid, means_low, cov)\nweights = gauss_high + gauss_low # Combine to create the \"true\" STRF\nkwargs = dict(vmax=np.abs(weights).max(), vmin=-np.abs(weights).max(),\n cmap='RdBu_r', shading='gouraud')\n\nfig, ax = plt.subplots()\nax.pcolormesh(delays_sec, freqs, weights, **kwargs)\nax.set(title='Simulated STRF', xlabel='Time Lags (s)', ylabel='Frequency (Hz)')\nplt.setp(ax.get_xticklabels(), rotation=45)\nplt.autoscale(tight=True)\nmne.viz.tight_layout()", "Simulate a neural response\nUsing this receptive field, we'll create an artificial neural response to\na stimulus.\nTo do this, we'll create a time-delayed version of the receptive field, and\nthen calculate the dot product between this and the stimulus. Note that this\nis effectively doing a convolution between the stimulus and the receptive\nfield. 
See here &lt;https://en.wikipedia.org/wiki/Convolution&gt;_ for more\ninformation.", "# Reshape audio to split into epochs, then make epochs the first dimension.\nn_epochs, n_seconds = 16, 5\naudio = audio[:, :int(n_seconds * sfreq * n_epochs)]\nX = audio.reshape([n_freqs, n_epochs, -1]).swapaxes(0, 1)\nn_times = X.shape[-1]\n\n# Delay the spectrogram according to delays so it can be combined w/ the STRF\n# Lags will now be in axis 1, then we reshape to vectorize\ndelays = np.arange(np.round(tmin * sfreq),\n np.round(tmax * sfreq) + 1).astype(int)\n\n# Iterate through indices and append\nX_del = np.zeros((len(delays),) + X.shape)\nfor ii, ix_delay in enumerate(delays):\n # These arrays will take/put particular indices in the data\n take = [slice(None)] * X.ndim\n put = [slice(None)] * X.ndim\n if ix_delay > 0:\n take[-1] = slice(None, -ix_delay)\n put[-1] = slice(ix_delay, None)\n elif ix_delay < 0:\n take[-1] = slice(-ix_delay, None)\n put[-1] = slice(None, ix_delay)\n X_del[ii][tuple(put)] = X[tuple(take)]\n\n# Now set the delayed axis to the 2nd dimension\nX_del = np.rollaxis(X_del, 0, 3)\nX_del = X_del.reshape([n_epochs, -1, n_times])\nn_features = X_del.shape[1]\nweights_sim = weights.ravel()\n\n# Simulate a neural response to the sound, given this STRF\ny = np.zeros((n_epochs, n_times))\nfor ii, iep in enumerate(X_del):\n # Simulate this epoch and add random noise\n noise_amp = .002\n y[ii] = np.dot(weights_sim, iep) + noise_amp * rng.randn(n_times)\n\n# Plot the first 2 trials of audio and the simulated electrode activity\nX_plt = scale(np.hstack(X[:2]).T).T\ny_plt = scale(np.hstack(y[:2]))\ntime = np.arange(X_plt.shape[-1]) / sfreq\n_, (ax1, ax2) = plt.subplots(2, 1, figsize=(6, 6), sharex=True)\nax1.pcolormesh(time, freqs, X_plt, vmin=0, vmax=4, cmap='Reds',\n shading='gouraud')\nax1.set_title('Input auditory features')\nax1.set(ylim=[freqs.min(), freqs.max()], ylabel='Frequency (Hz)')\nax2.plot(time, y_plt)\nax2.set(xlim=[time.min(), time.max()], 
title='Simulated response',\n xlabel='Time (s)', ylabel='Activity (a.u.)')\nmne.viz.tight_layout()", "Fit a model to recover this receptive field\nFinally, we'll use the :class:mne.decoding.ReceptiveField class to recover\nthe linear receptive field of this signal. Note that properties of the\nreceptive field (e.g. smoothness) will depend on the autocorrelation in the\ninputs and outputs.", "# Create training and testing data\ntrain, test = np.arange(n_epochs - 1), n_epochs - 1\nX_train, X_test, y_train, y_test = X[train], X[test], y[train], y[test]\nX_train, X_test, y_train, y_test = [np.rollaxis(ii, -1, 0) for ii in\n (X_train, X_test, y_train, y_test)]\n# Model the simulated data as a function of the spectrogram input\nalphas = np.logspace(-3, 3, 7)\nscores = np.zeros_like(alphas)\nmodels = []\nfor ii, alpha in enumerate(alphas):\n rf = ReceptiveField(tmin, tmax, sfreq, freqs, estimator=alpha)\n rf.fit(X_train, y_train)\n\n # Now make predictions about the model output, given input stimuli.\n scores[ii] = rf.score(X_test, y_test)\n models.append(rf)\n\ntimes = rf.delays_ / float(rf.sfreq)\n\n# Choose the model that performed best on the held out data\nix_best_alpha = np.argmax(scores)\nbest_mod = models[ix_best_alpha]\ncoefs = best_mod.coef_[0]\nbest_pred = best_mod.predict(X_test)[:, 0]\n\n# Plot the original STRF, and the one that we recovered with modeling.\n_, (ax1, ax2) = plt.subplots(1, 2, figsize=(6, 3), sharey=True, sharex=True)\nax1.pcolormesh(delays_sec, freqs, weights, **kwargs)\nax2.pcolormesh(times, rf.feature_names, coefs, **kwargs)\nax1.set_title('Original STRF')\nax2.set_title('Best Reconstructed STRF')\nplt.setp([iax.get_xticklabels() for iax in [ax1, ax2]], rotation=45)\nplt.autoscale(tight=True)\nmne.viz.tight_layout()\n\n# Plot the actual response and the predicted response on a held out stimulus\ntime_pred = np.arange(best_pred.shape[0]) / sfreq\nfig, ax = plt.subplots()\nax.plot(time_pred, y_test, color='k', alpha=.2, 
lw=4)\nax.plot(time_pred, best_pred, color='r', lw=1)\nax.set(title='Original and predicted activity', xlabel='Time (s)')\nax.legend(['Original', 'Predicted'])\nplt.autoscale(tight=True)\nmne.viz.tight_layout()", "Visualize the effects of regularization\nAbove we fit a :class:mne.decoding.ReceptiveField model for one of many\nvalues for the ridge regularization parameter. Here we will plot the model\nscore as well as the model coefficients for each value, in order to\nvisualize how coefficients change with different levels of regularization.\nThese issues as well as the STRF pipeline are described in detail\nin [1], [2], and [4]_.", "# Plot model score for each ridge parameter\nfig = plt.figure(figsize=(10, 4))\nax = plt.subplot2grid([2, len(alphas)], [1, 0], 1, len(alphas))\nax.plot(np.arange(len(alphas)), scores, marker='o', color='r')\nax.annotate('Best parameter', (ix_best_alpha, scores[ix_best_alpha]),\n (ix_best_alpha, scores[ix_best_alpha] - .1),\n arrowprops={'arrowstyle': '->'})\nplt.xticks(np.arange(len(alphas)), [\"%.0e\" % ii for ii in alphas])\nax.set(xlabel=\"Ridge regularization value\", ylabel=\"Score ($R^2$)\",\n xlim=[-.4, len(alphas) - .6])\nmne.viz.tight_layout()\n\n# Plot the STRF of each ridge parameter\nfor ii, (rf, i_alpha) in enumerate(zip(models, alphas)):\n ax = plt.subplot2grid([2, len(alphas)], [0, ii], 1, 1)\n ax.pcolormesh(times, rf.feature_names, rf.coef_[0], **kwargs)\n plt.xticks([], [])\n plt.yticks([], [])\n plt.autoscale(tight=True)\nfig.suptitle('Model coefficients / scores for many ridge parameters', y=1)\nmne.viz.tight_layout()", "Using different regularization types\nIn addition to the standard ridge regularization, the\n:class:mne.decoding.TimeDelayingRidge class also exposes\nLaplacian &lt;https://en.wikipedia.org/wiki/Laplacian_matrix&gt;_ regularization\nterm as:\n\\begin{align}\\left[\\begin{matrix}\n 1 & -1 & & & & \\\n -1 & 2 & -1 & & & \\\n & -1 & 2 & -1 & & \\\n & & \\ddots & \\ddots & \\ddots & \\\n & & & -1 & 2 & 
-1 \\\n & & & & -1 & 1\\end{matrix}\\right]\\end{align}\nThis imposes a smoothness constraint of nearby time samples and/or features.\nQuoting [3]_:\nTikhonov [identity] regularization (Equation 5) reduces overfitting by\n smoothing the TRF estimate in a way that is insensitive to\n the amplitude of the signal of interest. However, the Laplacian\n approach (Equation 6) reduces off-sample error whilst preserving\n signal amplitude (Lalor et al., 2006). As a result, this approach\n usually leads to an improved estimate of the system’s response (as\n indexed by MSE) compared to Tikhonov regularization.", "scores_lap = np.zeros_like(alphas)\nmodels_lap = []\nfor ii, alpha in enumerate(alphas):\n estimator = TimeDelayingRidge(tmin, tmax, sfreq, reg_type='laplacian',\n alpha=alpha)\n rf = ReceptiveField(tmin, tmax, sfreq, freqs, estimator=estimator)\n rf.fit(X_train, y_train)\n\n # Now make predictions about the model output, given input stimuli.\n scores_lap[ii] = rf.score(X_test, y_test)\n models_lap.append(rf)\n\nix_best_alpha_lap = np.argmax(scores_lap)", "Compare model performance\nBelow we visualize the model performance of each regularization method\n(ridge vs. Laplacian) for different levels of alpha. 
As you can see, the\nLaplacian method performs better in general, because it imposes a smoothness\nconstraint along the time and feature dimensions of the coefficients.\nThis matches the \"true\" receptive field structure and results in a better\nmodel fit.", "fig = plt.figure(figsize=(10, 6))\nax = plt.subplot2grid([3, len(alphas)], [2, 0], 1, len(alphas))\nax.plot(np.arange(len(alphas)), scores_lap, marker='o', color='r')\nax.plot(np.arange(len(alphas)), scores, marker='o', color='0.5', ls=':')\nax.annotate('Best Laplacian', (ix_best_alpha_lap,\n scores_lap[ix_best_alpha_lap]),\n (ix_best_alpha_lap, scores_lap[ix_best_alpha_lap] - .1),\n arrowprops={'arrowstyle': '->'})\nax.annotate('Best Ridge', (ix_best_alpha, scores[ix_best_alpha]),\n (ix_best_alpha, scores[ix_best_alpha] - .1),\n arrowprops={'arrowstyle': '->'})\nplt.xticks(np.arange(len(alphas)), [\"%.0e\" % ii for ii in alphas])\nax.set(xlabel=\"Laplacian regularization value\", ylabel=\"Score ($R^2$)\",\n xlim=[-.4, len(alphas) - .6])\nmne.viz.tight_layout()\n\n# Plot the STRF of each ridge parameter\nxlim = times[[0, -1]]\nfor ii, (rf_lap, rf, i_alpha) in enumerate(zip(models_lap, models, alphas)):\n ax = plt.subplot2grid([3, len(alphas)], [0, ii], 1, 1)\n ax.pcolormesh(times, rf_lap.feature_names, rf_lap.coef_[0], **kwargs)\n ax.set(xticks=[], yticks=[], xlim=xlim)\n if ii == 0:\n ax.set(ylabel='Laplacian')\n ax = plt.subplot2grid([3, len(alphas)], [1, ii], 1, 1)\n ax.pcolormesh(times, rf.feature_names, rf.coef_[0], **kwargs)\n ax.set(xticks=[], yticks=[], xlim=xlim)\n if ii == 0:\n ax.set(ylabel='Ridge')\nfig.suptitle('Model coefficients / scores for laplacian regularization', y=1)\nmne.viz.tight_layout()", "Plot the original STRF, and the one that we recovered with modeling.", "rf = models[ix_best_alpha]\nrf_lap = models_lap[ix_best_alpha_lap]\n_, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(9, 3),\n sharey=True, sharex=True)\nax1.pcolormesh(delays_sec, freqs, weights, **kwargs)\nax2.pcolormesh(times, 
rf.feature_names, rf.coef_[0], **kwargs)\nax3.pcolormesh(times, rf_lap.feature_names, rf_lap.coef_[0], **kwargs)\nax1.set_title('Original STRF')\nax2.set_title('Best Ridge STRF')\nax3.set_title('Best Laplacian STRF')\nplt.setp([iax.get_xticklabels() for iax in [ax1, ax2, ax3]], rotation=45)\nplt.autoscale(tight=True)\nmne.viz.tight_layout()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
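The Laplacian regularization matrix displayed in the receptive-field notebook above is easy to construct directly. Below is a small numpy sketch of the matrix's structure (an illustration, not MNE's internal `TimeDelayingRidge` implementation); the key property is that the quadratic form w'Lw equals the sum of squared first differences of w, which is what makes it a smoothness penalty:

```python
import numpy as np

def laplacian_matrix(n):
    # Tridiagonal second-difference matrix: 2 on the diagonal, -1 off it,
    # with the corner entries reduced to 1 (free boundary conditions).
    mat = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    mat[0, 0] = 1.0
    mat[-1, -1] = 1.0
    return mat

L = laplacian_matrix(5)

# The quadratic form w^T L w equals the sum of squared first differences,
# so penalizing it pushes neighboring coefficients toward similar values.
w = np.array([1.0, 2.0, 4.0, 7.0, 11.0])
penalty = w @ L @ w
print(penalty)  # differences are 1, 2, 3, 4 -> 1 + 4 + 9 + 16 = 30.0
```

Adding `alpha * L` to the normal equations therefore trades data fit against roughness of the coefficients along the delay (and, if applied per feature, frequency) axis.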
necromuralist/machine_learning_studies
machine_learning/udacity/project_1/boston_housing.ipynb
mit
[ "Machine Learning Engineer Nanodegree\nModel Evaluation & Validation\nProject 1: Predicting Boston Housing Prices\nWelcome to the first project of the Machine Learning Engineer Nanodegree! In this notebook, some template code has already been written. You will need to implement additional functionality to successfully answer all of the questions for this project. Unless it is requested, do not modify any of the code that has already been included. In this template code, there are four sections which you must complete to successfully produce a prediction with your model. Each section where you will write code is preceded by a STEP X header with comments describing what must be done. Please read the instructions carefully!\nIn addition to implementing code, there will be questions that you must answer that relate to the project and your implementation. Each section where you will answer a question is preceded by a QUESTION X header. Be sure that you have carefully read each question and provide thorough answers in the text boxes that begin with \"Answer:\". Your project submission will be evaluated based on your answers to each of the questions. \nA description of the dataset can be found here, which is provided by the UCI Machine Learning Repository.\nGetting Started\nTo familiarize yourself with an iPython Notebook, try double clicking on this cell. You will notice that the text changes so that all the formatting is removed. This allows you to make edits to the block of text you see here. This block of text (and mostly anything that's not code) is written using Markdown, which is a way to format text using headers, links, italics, and many other options! Whether you're editing a Markdown text block or a code block (like the one below), you can use the keyboard shortcut Shift + Enter or Shift + Return to execute the code or text block. 
In this case, it will show the formatted text.\nLet's start by setting up some code we will need to get the rest of the project up and running. Use the keyboard shortcut mentioned above on the following code block to execute it. Alternatively, depending on your iPython Notebook program, you can press the Play button in the hotbar. You'll know the code block executes successfully if the message \"Boston Housing dataset loaded successfully!\" is printed.", "# Importing a few necessary libraries\nimport numpy as np\nimport matplotlib.pyplot as pl\nfrom sklearn import datasets\nfrom sklearn.tree import DecisionTreeRegressor\n\n# Make matplotlib show our plots inline (nicely formatted in the notebook)\n%matplotlib inline\n\n# Create our client's feature set for which we will be predicting a selling price\nCLIENT_FEATURES = [[11.95, 0.00, 18.100, 0, 0.6590, 5.6090, 90.00, 1.385, 24, 680.0, 20.20, 332.09, 12.13]]\n\n# Load the Boston Housing dataset into the city_data variable\ncity_data = datasets.load_boston()\n\n# Initialize the housing prices and housing features\nhousing_prices = city_data.target\nhousing_features = city_data.data\n\nprint \"Boston Housing dataset loaded successfully!\"", "Statistical Analysis and Data Exploration\nIn this first section of the project, you will quickly investigate a few basic statistics about the dataset you are working with. In addition, you'll look at the client's feature set in CLIENT_FEATURES and see how this particular sample relates to the features of the dataset. Familiarizing yourself with the data through an explorative process is a fundamental practice to help you better understand your results.\nStep 1\nIn the code block below, use the imported numpy library to calculate the requested statistics. You will need to replace each None you find with the appropriate numpy coding for the proper statistic to be printed. Be sure to execute the code block each time to test if your implementation is working successfully. 
The print statements will show the statistics you calculate!", "import numpy\n\n# Number of houses in the dataset\ntotal_houses = housing_features.shape[0]\n\n# Number of features in the dataset\ntotal_features = housing_features.shape[1]\n\n# Minimum housing value in the dataset\nminimum_price = housing_prices.min()\n\n# Maximum housing value in the dataset\nmaximum_price = housing_prices.max()\n\n# Mean house value of the dataset\nmean_price = housing_prices.mean()\n\n# Median house value of the dataset\nmedian_price = numpy.median(housing_prices)\n\n# Standard deviation of housing values of the dataset\nstd_dev = numpy.std(housing_prices)\n\n# Show the calculated statistics\nprint \"Boston Housing dataset statistics (in $1000's):\\n\"\nprint \"Total number of houses:\", total_houses\nprint \"Total number of features:\", total_features\nprint \"Minimum house price:\", minimum_price\nprint \"Maximum house price:\", maximum_price\nprint \"Mean house price: {0:.3f}\".format(mean_price)\nprint \"Median house price:\", median_price\nprint \"Standard deviation of house price: {0:.3f}\".format(std_dev)", "Question 1
The MEDV attribute relates to the values stored in our housing_prices variable, so we do not consider that a feature of the data.\nOf the features available for each data point, choose three that you feel are significant and give a brief description for each of what they measure.\nRemember, you can double click the text box below to add your answer!", "import pandas\nimport seaborn\ncolumns = ('crime big_lots industrial charles_river nox rooms old distance'\n ' highway_access tax_rate pupil_teacher_ratio blacks lower_status'.split())\nhousing_data = pandas.DataFrame(housing_features, columns=columns)\nhousing_data['median_value'] = housing_prices\nclient_data = pandas.DataFrame(CLIENT_FEATURES, columns=columns)\n\nseaborn.set_style('whitegrid')\nfor column in housing_data.columns:\n grid = seaborn.lmplot(column, 'median_value', data=housing_data, size=8)\n axe = grid.fig.gca()\n title = axe.set_title('{0} vs price'.format(column))", "CRIM - the per-capita crime rate. INDUS - the proportion of non-retail business acres per town. LSTAT - the percentage of the population that is of lower status.\nQuestion 2\nUsing your client's feature set CLIENT_FEATURES, which values correspond with the features you've chosen above?\nHint: Run the code block below to see the client's data.", "print CLIENT_FEATURES\n\nprint(client_data.crime)\nprint(client_data.industrial)\nprint(client_data.lower_status)", "CRIM : 11.95, INDUS: 18.1, LSTAT: 12.13\nEvaluating Model Performance\nIn this second section of the project, you will begin to develop the tools necessary for a model to make a prediction. 
Being able to accurately evaluate each model's performance through the use of these tools helps to greatly reinforce the confidence in your predictions.\nStep 2\nIn the code block below, you will need to implement code so that the shuffle_split_data function does the following:\n- Randomly shuffle the input data X and target labels (housing values) y.\n- Split the data into training and testing subsets, holding 30% of the data for testing.\nIf you use any functions not already accessible from the imported libraries above, remember to include your import statement below as well!\nEnsure that you have executed the code block once you are done. You'll know if the shuffle_split_data function is working if the statement \"Successfully shuffled and split the data!\" is printed.", "# Put any import statements you need for this code block here\nfrom sklearn import cross_validation\ndef shuffle_split_data(X, y):\n \"\"\" Shuffles and splits data into 70% training and 30% testing subsets,\n then returns the training and testing subsets. 
\"\"\"\n\n # Shuffle and split the data\n X_train, X_test, y_train, y_test = cross_validation.train_test_split(X,\n y,\n test_size=.3,\n random_state=0)\n\n # Return the training and testing data subsets\n return X_train, y_train, X_test, y_test\n\n\n# Test shuffle_split_data\ntry:\n X_train, y_train, X_test, y_test = shuffle_split_data(housing_features, housing_prices)\n print \"Successfully shuffled and split the data!\"\nexcept:\n print \"Something went wrong with shuffling and splitting the data.\"", "Question 4\nWhy do we split the data into training and testing subsets for our model?\nSo that we can assess the model using a different data-set than what it was trained on, thus reducing the likelihood of overfitting the model to the training data and increasing the likelihood that it will generalize to other data.\nStep 3\nIn the code block below, you will need to implement code so that the performance_metric function does the following:\n- Perform a total error calculation between the true values of the y labels y_true and the predicted values of the y labels y_predict.\nYou will need to first choose an appropriate performance metric for this problem. See the sklearn metrics documentation to view a list of available metric functions. Hint: Look at the question below to see a list of the metrics that were covered in the supporting course for this project.\nOnce you have determined which metric you will use, remember to include the necessary import statement as well!\nEnsure that you have executed the code block once you are done. You'll know if the performance_metric function is working if the statement \"Successfully performed a metric calculation!\" is printed.", "# Put any import statements you need for this code block here\nfrom sklearn.metrics import mean_squared_error\ndef performance_metric(y_true, y_predict):\n \"\"\" Calculates and returns the total error between true and predicted values\n based on a performance metric chosen by the student. 
\"\"\"\n error = mean_squared_error(y_true, y_predict)\n return error\n\n# Test performance_metric\ntry:\n total_error = performance_metric(y_train, y_train)\n print \"Successfully performed a metric calculation!\"\nexcept:\n print \"Something went wrong with performing a metric calculation.\"", "Question 4\nWhich performance metric below did you find was most appropriate for predicting housing prices and analyzing the total error. Why?\n- Accuracy\n- Precision\n- Recall\n- F1 Score\n- Mean Squared Error (MSE)\n- Mean Absolute Error (MAE)\nMean Squared Error was the most appropriate performance metric for predicting housing prices because we are predicting a numeric value (this is a regression problem) and while Mean Absolute Error could also be used, the MSE emphasizes larger errors more (due to the squaring) and so is preferable.\nStep 4 (Final Step)\nIn the code block below, you will need to implement code so that the fit_model function does the following:\n- Create a scoring function using the same performance metric as in Step 2. See the sklearn make_scorer documentation.\n- Build a GridSearchCV object using regressor, parameters, and scoring_function. See the sklearn documentation on GridSearchCV.\nWhen building the scoring function and GridSearchCV object, be sure that you read the parameters documentation thoroughly. It is not always the case that a default parameter for a function is the appropriate setting for the problem you are working on.\nSince you are using sklearn functions, remember to include the necessary import statements below as well!\nEnsure that you have executed the code block once you are done. 
You'll know if the fit_model function is working if the statement \"Successfully fit a model to the data!\" is printed.", "# Put any import statements you need for this code block\nfrom sklearn.metrics import make_scorer\nfrom sklearn.grid_search import GridSearchCV\n\ndef fit_model(X, y):\n \"\"\" Tunes a decision tree regressor model using GridSearchCV on the input data X \n and target labels y and returns this optimal model. \"\"\"\n\n # Create a decision tree regressor object\n regressor = DecisionTreeRegressor()\n\n # Set up the parameters we wish to tune\n parameters = {'max_depth':(1,2,3,4,5,6,7,8,9,10)}\n\n # Make an appropriate scoring function\n scoring_function = make_scorer(mean_squared_error, greater_is_better=False)\n\n # Make the GridSearchCV object\n reg = GridSearchCV(regressor, param_grid=parameters,\n scoring=scoring_function, cv=10)\n\n # Fit the learner to the data to obtain the optimal model with tuned parameters\n reg.fit(X, y)\n\n # Return the optimal model\n return reg\n\n\n# Test fit_model on entire dataset\ntry:\n reg = fit_model(housing_features, housing_prices)\n print \"Successfully fit a model!\"\nexcept:\n print \"Something went wrong with fitting a model.\"", "Question 5\nWhat is the grid search algorithm and when is it applicable?\nThe GridSearchCV algorithm exhaustively works through the parameters it is given to tune the model. Because it is exhaustive it is appropriate when the parameters are relatively limited and the model-creation is not computationally intensive, otherwise its run-time might be infeasible.\nQuestion 6\nWhat is cross-validation, and how is it performed on a model? Why would cross-validation be helpful when using grid search?\nCross-validation is a method of testing a model by partitioning the data into subsets, with each subset taking a turn as the test set while the data not being used as a test-set is used as the training set. 
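Concretely, the fold rotation can be sketched without sklearn as follows (an illustrative helper, not the library's implementation):

```python
# Minimal k-fold index rotation: each contiguous block of indices takes one
# turn as the test set while the remaining indices form the training set.
def kfold_indices(n, k):
    fold_size = n // k
    idx = list(range(n))
    for i in range(k):
        test = idx[i * fold_size:(i + 1) * fold_size]
        train = idx[:i * fold_size] + idx[(i + 1) * fold_size:]
        yield train, test

for train, test in kfold_indices(6, 3):
    print(train, test)
```

Across the k iterations every sample appears in exactly one test fold, which is why cross-validated scores end up using all of the data; sklearn's KFold additionally supports shuffling before the split.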
This allows the model to be tested against all the data-points, rather than having some data reserved exclusively as training data and the remainder exclusively as testing data.\nBecause grid-search attempts to find the optimal parameters for a model, it's advantageous to use the same training and testing data in each case (case meaning a particular permutation of the parameters) so that the comparisons are equitable. One could simply perform an initial train-validation-test split and use this throughout the grid search, but this then risks the possibility that there was something in the initial split that will bias the outcome. By using all the partitions of the data as both test and training data, as cross-validation does, the chance of a bias in the splitting is reduced and at the same time all the parameter permutations are given the same data to be tested against.\nCheckpoint!\nYou have now successfully completed your last code implementation section. Pat yourself on the back! All of your functions written above will be executed in the remaining sections below, and questions will be asked about various results for you to analyze. To prepare the Analysis and Prediction sections, you will need to initialize the two functions below. Remember, there's no need to implement any more code, so sit back and execute the code blocks! Some code comments are provided if you find yourself interested in the functionality.", "def learning_curves(X_train, y_train, X_test, y_test):\n \"\"\" Calculates the performance of several models with varying sizes of training data.\n The learning and testing error rates for each model are then plotted. \"\"\"\n \n print \"Creating learning curve graphs for max_depths of 1, 3, 6, and 10. . 
.\"\n \n # Create the figure window\n fig = pl.figure(figsize=(10,8))\n\n # We will vary the training set size so that we have 50 different sizes\n sizes = np.round(np.linspace(1, len(X_train), 50))\n train_err = np.zeros(len(sizes))\n test_err = np.zeros(len(sizes))\n\n # Create four different models based on max_depth\n for k, depth in enumerate([1,3,6,10]):\n \n for i, s in enumerate(sizes):\n \n # Setup a decision tree regressor so that it learns a tree with max_depth = depth\n regressor = DecisionTreeRegressor(max_depth = depth)\n \n # Fit the learner to the training data\n regressor.fit(X_train[:s], y_train[:s])\n\n # Find the performance on the training set\n train_err[i] = performance_metric(y_train[:s], regressor.predict(X_train[:s]))\n \n # Find the performance on the testing set\n test_err[i] = performance_metric(y_test, regressor.predict(X_test))\n\n # Subplot the learning curve graph\n ax = fig.add_subplot(2, 2, k+1)\n ax.plot(sizes, test_err, lw = 2, label = 'Testing Error')\n ax.plot(sizes, train_err, lw = 2, label = 'Training Error')\n ax.legend()\n ax.set_title('max_depth = %s'%(depth))\n ax.set_xlabel('Number of Data Points in Training Set')\n ax.set_ylabel('Total Error')\n ax.set_xlim([0, len(X_train)])\n \n # Visual aesthetics\n fig.suptitle('Decision Tree Regressor Learning Performances', fontsize=18, y=1.03)\n fig.tight_layout()\n fig.show()\n\ndef model_complexity(X_train, y_train, X_test, y_test):\n \"\"\" Calculates the performance of the model as model complexity increases.\n The learning and testing errors rates are then plotted. \"\"\"\n \n print \"Creating a model complexity graph. . . 
\"\n\n # We will vary the max_depth of a decision tree model from 1 to 14\n max_depth = np.arange(1, 14)\n train_err = np.zeros(len(max_depth))\n test_err = np.zeros(len(max_depth))\n\n for i, d in enumerate(max_depth):\n # Setup a Decision Tree Regressor so that it learns a tree with depth d\n regressor = DecisionTreeRegressor(max_depth = d)\n\n # Fit the learner to the training data\n regressor.fit(X_train, y_train)\n\n # Find the performance on the training set\n train_err[i] = performance_metric(y_train, regressor.predict(X_train))\n\n # Find the performance on the testing set\n test_err[i] = performance_metric(y_test, regressor.predict(X_test))\n\n # Plot the model complexity graph\n pl.figure(figsize=(7, 5))\n pl.title('Decision Tree Regressor Complexity Performance')\n pl.plot(max_depth, test_err, lw=2, label = 'Testing Error')\n pl.plot(max_depth, train_err, lw=2, label = 'Training Error')\n pl.legend()\n pl.xlabel('Maximum Depth')\n pl.ylabel('Total Error')\n pl.show()", "Analyzing Model Performance\nIn this third section of the project, you'll take a look at several models' learning and testing error rates on various subsets of training data. Additionally, you'll investigate one particular algorithm with an increasing max_depth parameter on the full training set to observe how model complexity affects learning and testing errors. Graphing your model's performance based on varying criteria can be beneficial in the analysis process, such as visualizing behavior that may not have been apparent from the results alone.", "learning_curves(X_train, y_train, X_test, y_test)", "Question 7\nChoose one of the learning curve graphs that are created above. What is the max depth for the chosen model? As the size of the training set increases, what happens to the training error? What happens to the testing error?\nLooking at the model with max-depth of 3, as the size of the training set increases, the training error gradually increases. 
The testing error initially decreases, then seems to more or less stabilize.\nQuestion 8\nLook at the learning curve graphs for the model with a max depth of 1 and a max depth of 10. When the model is using the full training set, does it suffer from high bias or high variance when the max depth is 1? What about when the max depth is 10?\nThe training and testing plots for the model with max-depth 1 move toward convergence with an error near 50, indicating a high bias (the model is too simple, and the additional data isn't improving the generalization of the model). For the model with max-depth 10, the curves haven't converged, and the training error remains near 0, indicating that it suffers from high variance, and should be improved with more data.", "model_complexity(X_train, y_train, X_test, y_test)", "Question 9\nFrom the model complexity graph above, describe the training and testing errors as the max depth increases. Based on your interpretation of the graph, which max depth results in a model that best generalizes the dataset? Why?\nAs max-depth increases the training error improves, while the testing error decreases up until a depth of 5 and then begins a slight increase as the depth is increased. Based on this I would say that the max-depth of 5 created the model that best generalized the dataset, as it minimized the testing error.\nModel Prediction\nIn this final section of the project, you will make a prediction on the client's feature set using an optimized model from fit_model. To answer the following questions, it is recommended that you run the code blocks several times and use the median or mean value of the results.\nQuestion 10\nUsing grid search on the entire dataset, what is the optimal max_depth parameter for your model? 
How does this result compare to your initial intuition?\nHint: Run the code block below to see the max depth produced by your optimized model.", "print \"Final model optimal parameters:\", reg.best_params_\n\nrepetitions = 1000\nmodels = [fit_model(housing_features, housing_prices) for model in range(repetitions)]\nparams_scores = [(model.best_params_, model.best_score_) for model in models]\nparameters = numpy.array([param_score[0]['max_depth'] for param_score in params_scores])\nscores = numpy.array([param_score[1] for param_score in params_scores])\n\nbest_models = pandas.DataFrame.from_dict({'parameter':parameters, 'score': scores})\nx_labels = sorted(best_models.parameter.unique())\nfigure = pl.figure()\naxe = figure.gca()\ngrid = seaborn.boxplot('parameter', 'score', data = best_models,\n order=x_labels, ax=axe)\n\ntitle = axe.set_title(\"Best Parameters vs Best Scores\")\n\nbest_index = np.where(scores==np.max(scores))\nprint(scores[best_index])\nprint(parameters[best_index])\n\nbin_range = best_models.parameter.max() - best_models.parameter.min()\nbins = pandas.cut(best_models.parameter,\n bin_range)\n\n\nparameter_group = pandas.groupby(best_models, 'parameter')\nparameter_group.median()\n\nparameter_group.max()", "While a max-depth of 4 was the most common best-parameter, the max-depth of 5 was the median max-depth, had the highest median score, and had the highest overall score, so I will say that the optimal max_depth parameter is 5. This is in line with what I had guessed, based on the Complexity Performance plot.\nQuestion 11\nWith your parameter-tuned model, what is the best selling price for your client's home? How does this selling price compare to the basic statistics you calculated on the dataset? 
\nHint: Run the code block below to have your parameter-tuned model make a prediction on the client's home.", "best_model = models[best_index[0][0]]\nsale_price = best_model.predict(CLIENT_FEATURES)\npredicted = sale_price[0] * 1000\nactual_median = housing_data.median_value.median() * 1000\nprint (\"Predicted value of client's home: ${0:,.2f}\".format(predicted))\nprint(\"Median Value - predicted: ${0:,.2f}\".format(actual_median - predicted))", "The predicted value of the client's home is $20,968.\nQuestion 12 (Final Question):\nIn a few sentences, discuss whether you would use this model or not to predict the selling price of future clients' homes in the Greater Boston area.\nI think that this model seems to make a reasonable prediction for the given data (Boston Suburbs in 1970), but I'm not sure that I agree that the data for an entire suburb is necessarily generalizable for a specific unit within a suburb. What this model predicts is that a suburb with attributes similar to the client's would have our predicted median value, but within each suburb there would likely be a bit of variance. I would also think that separating out the upper-class houses would give a better model for this particular client, given the sub-median values for his or her features, and the right-skew of the data. If the goal was to predict median prices for suburbs, then I would use this model, but not for individual sales." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
marpat/blog
Dipoles_quad_all_Jupyter.ipynb
gpl-3.0
[ "Natural Dipole Analysis\nThis notebook discusses Python code to analyse Natural dipole moment of planar molecules. In the \"DIPOLE MOMENT ANALYSIS\" section, the NBO output file (.nbo, .out) contains individual x,y,z-components and length of the total molecular dipole moment. Each of the entries is decomposed into individual NLMO and NBO bond contributions. On the example of formamide, we are going to extract those lines and build a vector (NLMO) representation of the total molecular dipole.\nSimilar method using Java-based KNIME platform is described here.\nDetails of the formamide analysis can be found in the excellent book of Weinhold and Landis:\nWeinhold, F. and Landis, C.R. in Discovering Chemistry with Natural Bond Orbitals, pp. 147-152. \n <span style=\"color:blue; margin-left: 290px\">Planar geometry of formamide</span>\nElectric Dipole ##\nElectric dipole moment is the most fundamental quantity related to the charge distribution in a molecule. It reflects the degree to which positive and negative charge is distributed relative to one another. Dipole moment is a vector quantity characterized by its scalar magnitude, sign, and direction. Accordingly, the dipole can be described by its vector μ directed from the negative to the positive charge. Magnitude of the dipole moment is defined as: $$\\mu = |q|R,$$ where q is charge and R is the displacement vector pointing from the negative to positive* charge. The net dipole moment of a molecule can be conceptually described as a vector sum of the individual moments.<br>\nThis is what we will attempt to do in the following cells.\nStyling and Imports\nFirst the custom styling. File *.css inserts contents of custom .css file into the \nheader of the notebook's HTML file. Other variants of .css files are in the ./css directory and differ only in color decoration of iPython notebook cells. This step is optional and the if ... 
else statement can be commented out.", "%%capture\n# suppress output; remove %%capture for debugging\n# to (de)activate line numbering press Esc while in the cell \n# followed by l (lower L)\n\n'''Sanity check since we are changing directories and *.css file path\nwould be incorrect after cell re-run'''\nfrom IPython.core.display import HTML\nimport string,sys,os,os.path,re\ncss_file = './css/blog.css'\nif os.path.isfile(css_file):\n css_file\nelse:\n %cd ..\nHTML(open(css_file, \"r\").read())\n\n# Imports and settings\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom matplotlib.patches import FancyArrowPatch\nfrom pylab import *\n%matplotlib inline\n\n# Pandas options\npd.set_option('display.max_columns', 14)\npd.set_option('display.width', 300)\n#pd.set_option('display.mpl_style', 'default') # enable for nicer plots\npd.set_option(\"max_columns\", 14)\npd.set_option('chained_assignment',None) # suppress warnings for web publishing\n\n# Global parameters\nrcParams['figure.figsize'] = (6, 6)", "csv to Dataframe\nIn the next step, coordinates of NLMO dipoles are read from a *.csv file. This file was generated earlier using the KNIME \"Dipole_v2/3\" workflow. Alternatively, the iPython notebook <span style=\"color:rgba(255,100,0,1)\">ReadNboDip.ipynb</span> at this Github repository creates the identical input file. A \"plain\" Python script (<span style=\"color:rgba(255,100,0,1)\">ReadNboDip.py</span>) that parses NBO output files for the same <i>Dipole moment summary</i> can be downloaded from this Github repository. 
\nIn this example, this file is available from the ./dipole directory.<br>", "%%capture\n# Step into the directory of input .csv files using the magic %cd\n# Make sure that we are not already in the 'dipoles' dir from the previous run\nif os.path.isdir('dipoles'):\n %cd dipoles\nelse:\n %cd ..\n %cd dipoles", "First, a row of arbitrary values (zeroes) is inserted at the top of dataframe df (line 15; press Esc+l (lower L)) and the table values are assigned int, float, or string types, respectively.", "# Input file can be generated in ReadNboDip.ipynb notebook\ninfile = 'form_dip.csv' # in directory ./dipoles\n# Save the file path, name and extension\nfullpath = os.path.abspath(infile)\npath,file=os.path.split(fullpath)\nbasename, extension = os.path.splitext(infile)\n\n# Create Pandas dataframe from the *.csv file\ndf = pd.read_csv(infile)\n# fix datatype for columns\ndf.convert_objects(convert_numeric=True).dtypes \n\n# Prepare first blank row with XYZ=0 to set the point of vector origin\n# set the first row to zeroes\nrowex = df.loc[[0]] \n# Get dataframe column headers\nheaders = df.columns.values.tolist()\n# Fix the data types\nfor item in headers:\n if df.dtypes[item] == 'float64' :\n rowex[item] = 0.\n elif df.dtypes[item] == 'int64' :\n rowex[item] = 0\n else :\n rowex[item] = ''\nrow0 = rowex\n\n# Reassemble data table placing row0 at the top, followed by the rest\ndf2 = pd.concat([row0, df]).reset_index().ix[::]\n# print df2\n\n\"\"\" Identify which column (coordinate) has constant values (orthogonal to the \nmolecular plane).\n\"\"\"\n# Prepare stds of absolute values for each x,y,z column\n# Smallest stds indicate constant dimension perpendicular to 2D plane\nC1 = df2[\"X\"].abs()\nC1std = C1.std()\nC2 = df2[\"Y\"].abs()\nC2std = C2.std()\nC3 = df2[\"Z\"].abs()\nC3std = C3.std()\n# print \"stds are: %.3f %.3f %.3f\" % (C1std, C2std, C3std)\n\n'''Assign X,Y coordinates only, Z=0. 
Any of the X,Y,Z can be constant or \nzero (which one may change every time). \nRemap coordinates arbitrarily to X,Y with Z=[const] '''\ndef coord_headers(C1std, C2std, C3std):\n if C1std > 0.1 and C2std > 0.1 > C3std:\n df2['newX'] = df2[\"X\"]\n df2['newY'] = df2[\"Y\"]\n elif C1std > 0.1 > C2std and C3std > 0.1:\n df2['newX'] = df2[\"X\"]\n df2['newY'] = df2[\"Z\"]\n elif C1std < 0.1 < C2std and C3std > 0.1:\n df2['newX'] = df2[\"Y\"]\n df2['newY'] = df2[\"Z\"]\n else:\n df2['newX'] = df2[\"X\"]\n df2['newY'] = df2[\"Y\"]\n\n# Append columns newX,newY to df2 (values are often same as the original X,Y)\n# Plane defined\ncoord_headers(C1std, C2std, C3std)\n\n# Remove rows with CR (core) orbitals and reindex the dataframe\ndf2 = df2[~df2['Type'].str.contains(\"CR\")]\n\n# Copy dataframe for the intermediate output\ndf2b1 = df2.reset_index().ix[::] # Comment off if re-running the cell\n\nHTML(df2b1.to_html(classes = 'grid', escape=False))\n# table styling requires package qgrid http://nbviewer.ipython.org/github/quantopian/qgrid/\n# blob/master/qgrid_demo.ipynb", "<center><span style=\"color: orange\"><b>Table 1</b>. Current df2 with new columns for consolidated X, Y coordinates.</span></center>\n\nFirst Plot\nNow, to have dipole vectors continue from one to another (as opposed to all starting at the origin), we have to <em>translate coordinates</em> in a way that the origin of a new vector starts at the end of the previous vector. In Python, use the function cumsum().\nLet's try to build a continuous vector graph from the original dataframe df2. This is equivalent to a vector decomposition.\n<span style=\"color:rgba(217,0,126,1)\">The following cell is just a test case</span> (can be removed later). 
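Before running that test cell, here is the cumsum trick in isolation, with made-up components rather than the formamide NLMOs:

```python
import numpy as np

# Made-up dipole x- and y-components (illustrative values only)
dx = np.array([0.0, 1.0, -0.5, 2.0])
dy = np.array([0.0, 0.5, 1.5, -1.0])

# cumsum turns per-orbital components into head-to-tail endpoints:
# each vector now starts where the previous one ended.
x_path = np.cumsum(dx)  # 0.0, 1.0, 0.5, 2.5
y_path = np.cumsum(dy)  # 0.0, 0.5, 2.0, 1.0

# The final point is the tip of the total (vector-summed) dipole.
print(x_path[-1] == dx.sum() and y_path[-1] == dy.sum())
```

This is exactly what the newXa/newYa columns below hold for the real NLMO data.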
You can skip it and go to section \"Cleaning up the Vector Path\".", "# Test area running off the branch df2b1\n# Translate coordinates from X,Y,Z=0,0,0 to assure vector continuity\ndf2b1['newXa'] = df2b1.newX.cumsum()\ndf2b1['newYa'] = df2b1.newY.cumsum()\ndf3b = df2b1\n# print df3b\n\n# Calculate total dipole (compare with column Tot_Dip)\nlastX = df3b.tail(1).newXa # row and value\nlastX = lastX.tolist()[0] # value only\nlastY = df3b.tail(1).newYa\nlastY = lastY.tolist()[0]\ntotal_dipole = sqrt(np.power(lastX, 2)+np.power(lastY, 2))\ntotal_dipole = round(total_dipole, 2)\n\n# Calc geometrical center of the vector polygon\ncenX = df3b['newXa'].mean()\ncenY = df3b['newYa'].mean()\nx = df3b.newXa\ny = df3b.newYa\n\n# Plot\n# Set rectangular plot dimensions to keep lengths proportional\nxlow = x.min()\nxhigh = x.max()\nylow = y.min()\nyhigh = y.max()\n\n\ndef lst_sort(list):\n ''' \n Sort list of floats by values.\n \n :type list: list of floats\n :param list: max and min x,y-coordinates\n \n :rtype: list of floats\n :return: sorted list of floats \n ''' \n abslist = []\n for item in list:\n abslist.append(item)\n return sorted(abslist)\n\nmargins = lst_sort([xlow, ylow, xhigh, yhigh])\n\nplt.figure() # To generate multiple distinct plots.\nplt.suptitle('Dipole Moments (D)')\nax = []\nxmin = margins[0]-1 #x.min()-1\nxmax = margins[3]+1\nymin = margins[0]-1\nymax = margins[3]+1\nplt.ylabel('Y')\nplt.xlabel('X')\nplt.grid(True)\nplt.xlim(xmin,xmax)\nplt.ylim(ymin,ymax)\ncolor='blue'\nplt.scatter(df3b.newXa, df3b.newYa, s=80, c=color, label='NLMOs')\nfor j, txt in enumerate(df3b['NLMO']):\n plt.annotate(txt, (x[j]+0.1,y[j]+0.3))\nax = gca()\nax.add_patch(FancyArrowPatch((0,0),(lastX,lastY),arrowstyle='->',mutation_scale=20, color='red'))\nfor k in range(1,len(x)):\n ax.add_patch(FancyArrowPatch((x[k], y[k]),(x[k-1],y[k-1]),arrowstyle='<-',mutation_scale=20, color='blue'))\n\nplt.annotate(total_dipole,\n xy=(lastX/2*0.9, lastY/2*1.1),\n color='red',\n xycoords='data',\n 
textcoords='offset points')\n\n\nplt.annotate(\"+\",\n xy=(cenX, cenY),\n color='red',\n xycoords='data',\n textcoords='offset points')", "Cleaning up the Vector Path\nThe resulting plot is rather cluttered and thus we will need to arrange dipoles in some way to get a more interpretable view.\nSince most molecules have the center of the coordinate system placed somewhere at the center of the molecule, sorting dipole coordinates by the xy-quadrant in which the dipole origin resides is a sensible approach. The following cartoon indicates how such quadrants are defined.\n\nWe will write a function that assigns each row an arbitrary quadrant depending on the signs of coordinates x and y. To further arrange directions of lines in each quadrant, we will include another function that calculates the slope of the line and we will sort by the quadrant (rank) and the slope.", "# Function to rank XY-points into quadrants 0-3\ndef xyrank(a, b, c, d, e, f):\n ''' \n Assign one of the four supplied quadrant labels.\n :type inputs: floats and str\n :param inputs: X, Y, Q(1), Q(2), Q(3), Q(4) \n :rtype: str\n :return: quadrant number as str \n '''\n if a > 0 and b > 0: # [Q1]\n return c\n elif a > 0 > b: # [Q2]\n return d\n elif a < 0 and b < 0: # [Q3]\n return e\n else: # [Q4] a<0, b>=0 (and remaining cases)\n return f\n \n \n# Function to calculate slope\ndef slope(a, b):\n if a != 0:\n slope = (1.0 * b / a) # expects float\n return slope\n else:\n slope = 0 # for a = zero, set slope arbitrarily to 0\n return slope", "Unfortunately, it is not obvious which sequence of quadrants will lead to the least cluttered dipole diagram. To make sure that we test all possibilities, we will evaluate all permutations of four quadrants, that is, 4! = 4 x 3 x 2 x 1 = 24.\nFor each trial, we record an array of quadrants and the standard deviation of distances of all vector origins from the graph origin (geometric center). Small stds should indicate even distribution of vectors around the origin (circle, ellipsoid, symmetrical polygon?). 
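As a quick standalone check of the 4! = 24 count (note that itertools.permutations of distinct items already yields unique tuples, so the set() wrapper used in the permutation cell below is redundant, though harmless):

```python
import itertools

quadrants = [0, 1, 2, 3]
orderings = list(itertools.permutations(quadrants))
print(len(orderings))  # 4 * 3 * 2 * 1 = 24 orderings, all distinct
```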
The standard deviation of distances, together with the sequence of quadrants, appears at the top of each plot. As we will see later, the lowest stds are not necessarily indicative of the \"clean\" graph shape.\n<span style=\"color:rgba(190,0,0,0.8)\"> <u>By no means do the resulting vector plots indicate atom connectivity!</u></span> It is the relative direction of dipole vectors that helps us to assess the importance and internal balance of NLMO orbitals within the molecule.", "# Get the list of quadrant permutations\n# Four quadrants of a XY plot\nlst = [0, 1, 2, 3]\nset(list(lst)) # Creates a set\nimport itertools\npermlist = set(itertools.permutations(lst))\npermlist = list(permlist) # Get permutation set into a list.", "Plot Survey\nFor each set of quadrants, the code below will sort the dataframe by the quadrant and slope. The corresponding plot will be created and saved as a .png image. The quadrant sequence is part of the filename.", "%%capture\n# remove %%capture magic to see the plot and output\n\n# Iterate through the list of all quadrants\nglobal captarr\ncaptarr = []\ncaptperm = []\nfor i in range(len(permlist)):\n # Apply function to the table values in each row; make a copy first\n df21 = df2.copy()\n df21[\"xyRank\"] = df2.apply(\n lambda row: xyrank(row['newX'], row['newY'], permlist[i][0], \n permlist[i][1], permlist[i][2], permlist[i][3]),\n axis=1)\n\n # Apply function to table values in a row\n df21[\"slope\"] = df21.apply(lambda row: slope(row['newX'], row['newY']), axis=1)\n \n toprow = df21[:1] # row with zero dipole \n # Directly change rank of the first row in the dataframe\n toprow.xyRank[0] = 4\n # Isolate remaining data into rest (drop)\n rest1 = df21.drop(df21.index[[0, 0]])\n # sort rest1 by xyRank, then by slope - descending\n df3 = rest1.sort_index(by=['xyRank', 'slope'], ascending=[False, False])\n # Put it back\n df3 = pd.concat([toprow, df3]).reset_index().ix[::]\n df3 = df3.sort_index(by=['xyRank', 'slope'], ascending=[False, False])\n\n # Translate 
coordinates from xyz=0,0,0\n df3['newXa'] = df3.newX.cumsum()\n df3['newYa'] = df3.newY.cumsum()\n\n # Calculate total dipole\n lastX = df3.tail(1).newXa # row and value\n lastX = lastX.tolist()[0] # value only\n lastY = df3.tail(1).newYa\n lastY = lastY.tolist()[0]\n total_dipole = sqrt(np.power(lastX, 2) + np.power(lastY, 2))\n total_dipole = round(total_dipole, 2)\n\n # Calc distances of x,y and assess even distribution of points \n # around the geom center\n cenX = df3['newXa'].mean()\n cenY = df3['newYa'].mean()\n distX = abs(cenX - df3.newXa)\n distY = abs(cenY - df3.newYa)\n # distance from centroid\n distXY = np.sqrt(distX * distX + distY * distY)\n diststd = distXY.std()\n captarr.append(round(diststd, 1))\n captperm.append(permlist[i])\n print \"++++++++++ Rank list is : \", permlist[i], \"; \\\n std of distances is:\", round(diststd, 2)\n\n # Plot\n plt.figure() # To generate multiple distinct plots.\n plt.suptitle('Dipole Moments (D)')\n ax = []\n x = df3.newXa\n y = df3.newYa\n xmin = x.min() - 1\n xmax = x.max() + 1\n ymin = y.min() - 1\n ymax = y.max() + 1\n plt.ylabel('Y')\n plt.xlabel('X')\n plt.grid(True)\n plt.xlim(xmin, xmax)\n plt.ylim(ymin, ymax)\n color = 'blue'\n plt.scatter(df3.newXa, df3.newYa, s=80, c=color, label='NLMOs')\n # plt.plot(df3.newXa, df3.newYa, c=color)\n # plt.legend(loc=1,borderaxespad=0.)\n for j, txt in enumerate(df3['NLMO']):\n plt.annotate(txt, (x[j] + 0.1, y[j] + 0.3))\n ax = gca()\n ax.add_patch(FancyArrowPatch((0, 0), (lastX, lastY), arrowstyle='->', \\\n mutation_scale=20, color='red'))\n for k in range(1, len(x)):\n ax.add_patch(\n FancyArrowPatch((x[k], y[k]), (x[k - 1], y[k - 1]), arrowstyle='<-', \\\n mutation_scale=20, color='blue'))\n\n plt.annotate(total_dipole,\n xy=(lastX / 2 * 0.9, lastY / 2 * 1.1),\n color='red',\n xycoords='data',\n textcoords='offset points')\n\n plt.annotate(round(diststd, 2),\n xy=(df3['newXa'].min() + 1.5, df3['newYa'].max() + 0.3),\n color='green',\n xycoords='data',\n 
textcoords='offset points')\n\n plt.annotate((permlist[i]), # asumX, asumY, \n xy=(xmin + 0.2, ymax - 0.7),\n color='black',\n xycoords='data',\n textcoords='offset points')\n\n plt.annotate(\"+\",\n xy=(cenX, cenY),\n color='red',\n xycoords='data',\n textcoords='offset points')\n\n # plt.show()\n f_perm = str(permlist[i]).replace(\"(\", \"_\")\n f_perm = f_perm.replace(\")\", \"_\")\n f_perm = f_perm.replace(\",\",\"\")\n f_perm = f_perm.replace(\" \",\"\")\n pic = basename + f_perm + '.png' \n try:\n plt.savefig(pic, ext='png', format='png', dpi=100)\n except IOError:\n print \"Error: can\\'t find the file or read data\"\n\n# %%capture\n# remove %%capture magic to see output\n# print \"\\n\" +('-'*80)+\"\\n\"\n# print \"> Images were saved in \"+ path +\"\\\\\"+basename+\"_q1q2q3q4_.png files\"\n\n# Sort by distance stds and retrieve the first three results\ndata = zip(captperm, captarr)\nsor = sorted(data, key=lambda tup: tup[1])\nprint sor[0:3]", "Inspect graphs above and note the quadrant sequence of the best looking graph. As indicated earlier, it is not the graph with lowest std value.\nFinal Plot\nTo make the final plot, enter the quadrant sequence of the best looking plot above (0,3,2,1) and replace the sequence in variable quad. 
The plot image will be saved in directory ./dipole.\n<span style=\"color:blue\">Enter the quadrant sequence (to line 2) and re-run the last part to get the graph.</span>", "# Enter the best sequence of quadrants\nquad = [0, 3, 2, 1] # original sequence 0,3,2,1 or 1,3,2,0\ndf21 = df2.copy()\ndf21[\"xyRank\"] = df2.apply(lambda row: xyrank(row['newX'], row['newY'],\\\n quad[0], quad[1], quad[2], quad[3]), axis=1)\n\n# Apply function to table values in a row\ndf21[\"slope\"] = df21.apply(lambda row: slope(row['newX'], row['newY']), axis=1)\ntoprow = df21[:1] # row with zero dipole \n# Directly change cell rank in dataframe\ntoprow.xyRank[0] = 4\n# Isolate remaining data into rest (drop)\nrest1 = df21.drop(df21.index[[0, 0]])\n# sort rest1 by xyRank, then by slope - descending\ndf3 = rest1.sort_index(by=['xyRank', 'slope'], ascending=[False, False])\n# Put it back\ndf3 = pd.concat([toprow,df3]).reset_index().ix[::]\ndf3 = df3.sort_index(by=['xyRank', 'slope'], ascending=[False, False])\n\n\n# Translate coordinates from xyz=0,0,0\ndf3['newXa'] = df3.newX.cumsum()\ndf3['newYa'] = df3.newY.cumsum()\n# print df3 # works OK\n\n# Calculate total dipole\nlastX = df3.tail(1).newXa # row and value\nlastX = lastX.tolist()[0] # value only\nlastY = df3.tail(1).newYa\nlastY = lastY.tolist()[0]\ntotal_dipole = sqrt(np.power(lastX, 2)+np.power(lastY, 2))\ntotal_dipole = round(total_dipole, 2)\n\n# Calc distances of x,y and assess fit to a circle\ncenX = df3['newXa'].mean()\ncenY = df3['newYa'].mean()\ndistX = abs(cenX-df3.newXa)\ndistY = abs(cenY-df3.newYa)\n# distance from centroid\ndistXY = np.sqrt(distX*distX + distY*distY)\ndiststd = distXY.std()\n\nx = df3.newXa\ny = df3.newYa\n\n# Set rectangular plot dimensions to keep lengths proportional\nxlow = x.min()\nxhigh = x.max()\nylow = y.min()\nyhigh = y.max()\n\n\ndef lst_sort(list):\n ''' \n Sort list of floats by values.\n \n :type list: list of floats\n :param list: max and min x,y-coordinates\n \n :rtype: list of floats\n 
:return: sorted list of floats \n ''' \n abslist = []\n for item in list:\n abslist.append(item)\n return sorted(abslist)\n\nmargins = lst_sort([xlow, ylow, xhigh, yhigh])\n\n# Plot\nplt.figure() # To generate multiple distinct plots.\nplt.suptitle('Dipole Moments (D)')\nax = []\nxmin = margins[0]-1\nxmax = margins[3]+1\nymin = margins[0]-1\nymax = margins[3]+1\nplt.ylabel('Y')\nplt.xlabel('X')\nplt.grid(True)\nplt.xlim(xmin,xmax)\nplt.ylim(ymin,ymax)\ncolor='blue'\nplt.scatter(df3.newXa, df3.newYa, s=80, c=color, label='NLMOs')\n# plt.plot(df3.newXa, df3.newYa, c=color)\n# plt.legend(loc=1,borderaxespad=0.)\nfor j, txt in enumerate(df3['NLMO']):\n plt.annotate(txt, (x[j]+0.1,y[j]+0.3))\nax = gca()\nax.add_patch(FancyArrowPatch((0,0),(lastX,lastY),arrowstyle='->',mutation_scale=20, \\\n color='red'))\nfor k in range(1,len(x)):\n ax.add_patch(FancyArrowPatch((x[k], y[k]),(x[k-1],y[k-1]),arrowstyle='<-',\\\n mutation_scale=20, color='blue'))\n\nplt.annotate(total_dipole,\n xy=(lastX/2*0.9, lastY/2*1.1),\n color='red',\n xycoords='data',\n textcoords='offset points')\n\n\nplt.annotate((quad), # asumX, asumY, \n xy=(xmin+0.4, ymax-0.6),\n color='black',\n xycoords='data',\n textcoords='offset points')\n\nplt.annotate(\"+\",\n xy=(cenX, cenY),\n color='red',\n xycoords='data',\n textcoords='offset points')\n\ndf3.drop('level_0', axis=1, inplace=True)\ndf3.drop('index', axis=1, inplace=True)\nHTML(df3.to_html(classes = 'grid', escape=False))", "Alternative and acceptable vector decomposition is shown in Figure 4.\n\n<center><span style=\"color: orange\"><b>Figure 4</b>. Alternative vector decomposition.</span></center>\nThis concludes the analysis of NLMO dipole moment data from the NBO output file. 
Several vector paths composed of partial formamide dipoles were created and saved as images.\nThe next step will be the analysis of partial NLMO dipoles in terms of bond and lone-pair contributions.\n\n<span style=\"font-size: 12px\"><i>iPython Notebook Dipoles_quad_all_Jupyter.ipynb:<br>\nversion 1.0 created on Dec 23, 2014<br>\nversion 2.0 updated on Aug 29, 2015</i></span>" ]
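The quadrant-ranking and head-to-tail chaining used in the notebook above can be sketched in a small, self-contained form (Python 3; the sample vectors here are hypothetical, not formamide NLMO dipoles). A useful sanity check is built in: the endpoint of the chain — the resultant (total) dipole — is the same for every quadrant ordering, because vector addition commutes; only the shape of the path changes.

```python
import itertools
import math

# Hypothetical 2-D dipole components (not real NLMO data)
vectors = [(1.2, 0.3), (-0.4, 0.9), (-0.7, -0.5), (0.2, -1.1)]

def quadrant(x, y):
    """Return the XY-plot quadrant index (0..3) of a vector."""
    if x >= 0 and y >= 0:
        return 0
    if x < 0 and y >= 0:
        return 1
    if x < 0 and y < 0:
        return 2
    return 3

def chain(vs, quad_order):
    """Sort vectors by a quadrant preference, then by angle, and
    chain them head-to-tail starting from the origin."""
    rank = {q: r for r, q in enumerate(quad_order)}
    ordered = sorted(vs, key=lambda v: (rank[quadrant(*v)], math.atan2(v[1], v[0])))
    path, x, y = [(0.0, 0.0)], 0.0, 0.0
    for dx, dy in ordered:
        x, y = x + dx, y + dy
        path.append((x, y))
    return path

# The endpoint (the resultant dipole) is identical for all 24 quadrant
# permutations; only the visual head-to-tail path differs.
total_x, total_y = chain(vectors, (0, 3, 2, 1))[-1]
print(round(math.hypot(total_x, total_y), 2))  # → 0.5
```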
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
npuichigo/ttsflow
third_party/tensorflow/tensorflow/examples/tutorials/deepdream/deepdream.ipynb
apache-2.0
[ "DeepDreaming with TensorFlow\n\nLoading and displaying the model graph\nNaive feature visualization\nMultiscale image generation\nLaplacian Pyramid Gradient Normalization\nPlaying with feature visualizations\nDeepDream\n\nThis notebook demonstrates a number of Convolutional Neural Network image generation techniques implemented with TensorFlow for fun and science:\n\nvisualize individual feature channels and their combinations to explore the space of patterns learned by the neural network (see GoogLeNet and VGG16 galleries)\nembed TensorBoard graph visualizations into Jupyter notebooks\nproduce high-resolution images with tiled computation (example)\nuse Laplacian Pyramid Gradient Normalization to produce smooth and colorful visuals at low cost\ngenerate DeepDream-like images with TensorFlow (DogSlugs included)\n\nThe network under examination is the GoogLeNet architecture, trained to classify images into one of 1000 categories of the ImageNet dataset. It consists of a set of layers that apply a sequence of transformations to the input image. The parameters of these transformations were determined during the training process by a variant of the gradient descent algorithm. The internal image representations may seem obscure, but it is possible to visualize and interpret them. In this notebook we are going to present a few tricks that allow us to make these visualizations both efficient to generate and even beautiful. Impatient readers can start by exploring the full galleries of images generated by the method described here for the GoogLeNet and VGG16 architectures.", "# boilerplate code\nfrom __future__ import print_function\nimport os\nfrom io import BytesIO\nimport numpy as np\nfrom functools import partial\nimport PIL.Image\nfrom IPython.display import clear_output, Image, display, HTML\n\nimport tensorflow as tf", "<a id='loading'></a>\nLoading and displaying the model graph\nThe pretrained network can be downloaded here. 
Unpack the tensorflow_inception_graph.pb file from the archive and set its path to model_fn variable. Alternatively you can uncomment and run the following cell to download the network:", "!wget -nc https://storage.googleapis.com/download.tensorflow.org/models/inception5h.zip && unzip -n inception5h.zip\n\nmodel_fn = 'tensorflow_inception_graph.pb'\n\n# creating TensorFlow session and loading the model\ngraph = tf.Graph()\nsess = tf.InteractiveSession(graph=graph)\nwith tf.gfile.FastGFile(model_fn, 'rb') as f:\n graph_def = tf.GraphDef()\n graph_def.ParseFromString(f.read())\nt_input = tf.placeholder(np.float32, name='input') # define the input tensor\nimagenet_mean = 117.0\nt_preprocessed = tf.expand_dims(t_input-imagenet_mean, 0)\ntf.import_graph_def(graph_def, {'input':t_preprocessed})", "To take a glimpse into the kinds of patterns that the network learned to recognize, we will try to generate images that maximize the sum of activations of particular channel of a particular convolutional layer of the neural network. 
The network we explore contains many convolutional layers, each of which outputs tens to hundreds of feature channels, so we have plenty of patterns to explore.", "layers = [op.name for op in graph.get_operations() if op.type=='Conv2D' and 'import/' in op.name]\nfeature_nums = [int(graph.get_tensor_by_name(name+':0').get_shape()[-1]) for name in layers]\n\nprint('Number of layers', len(layers))\nprint('Total number of feature channels:', sum(feature_nums))\n\n\n# Helper functions for TF Graph visualization\n\ndef strip_consts(graph_def, max_const_size=32):\n \"\"\"Strip large constant values from graph_def.\"\"\"\n strip_def = tf.GraphDef()\n for n0 in graph_def.node:\n n = strip_def.node.add() \n n.MergeFrom(n0)\n if n.op == 'Const':\n tensor = n.attr['value'].tensor\n size = len(tensor.tensor_content)\n if size > max_const_size:\n tensor.tensor_content = tf.compat.as_bytes(\"<stripped %d bytes>\"%size)\n return strip_def\n \ndef rename_nodes(graph_def, rename_func):\n res_def = tf.GraphDef()\n for n0 in graph_def.node:\n n = res_def.node.add() \n n.MergeFrom(n0)\n n.name = rename_func(n.name)\n for i, s in enumerate(n.input):\n n.input[i] = rename_func(s) if s[0]!='^' else '^'+rename_func(s[1:])\n return res_def\n \ndef show_graph(graph_def, max_const_size=32):\n \"\"\"Visualize TensorFlow graph.\"\"\"\n if hasattr(graph_def, 'as_graph_def'):\n graph_def = graph_def.as_graph_def()\n strip_def = strip_consts(graph_def, max_const_size=max_const_size)\n code = \"\"\"\n <script>\n function load() {{\n document.getElementById(\"{id}\").pbtxt = {data};\n }}\n </script>\n <link rel=\"import\" href=\"https://tensorboard.appspot.com/tf-graph-basic.build.html\" onload=load()>\n <div style=\"height:600px\">\n <tf-graph-basic id=\"{id}\"></tf-graph-basic>\n </div>\n \"\"\".format(data=repr(str(strip_def)), id='graph'+str(np.random.rand()))\n \n iframe = \"\"\"\n <iframe seamless style=\"width:800px;height:620px;border:0\" srcdoc=\"{}\"></iframe>\n 
\"\"\".format(code.replace('\"', '&quot;'))\n display(HTML(iframe))\n\n# Visualizing the network graph. Be sure to expand the \"mixed\" nodes to see their \n# internal structure. We are going to visualize \"Conv2D\" nodes.\ntmp_def = rename_nodes(graph_def, lambda s:\"/\".join(s.split('_',1)))\nshow_graph(tmp_def)", "<a id='naive'></a>\nNaive feature visualization\nLet's start with a naive way of visualizing these. Image-space gradient ascent!", "# Picking some internal layer. Note that we use outputs before applying the ReLU nonlinearity\n# to have non-zero gradients for features with negative initial activations.\nlayer = 'mixed4d_3x3_bottleneck_pre_relu'\nchannel = 139 # picking some feature channel to visualize\n\n# start with a gray image with a little noise\nimg_noise = np.random.uniform(size=(224,224,3)) + 100.0\n\ndef showarray(a, fmt='jpeg'):\n a = np.uint8(np.clip(a, 0, 1)*255)\n f = BytesIO()\n PIL.Image.fromarray(a).save(f, fmt)\n display(Image(data=f.getvalue()))\n \ndef visstd(a, s=0.1):\n '''Normalize the image range for visualization'''\n return (a-a.mean())/max(a.std(), 1e-4)*s + 0.5\n\ndef T(layer):\n '''Helper for getting layer output tensor'''\n return graph.get_tensor_by_name(\"import/%s:0\"%layer)\n\ndef render_naive(t_obj, img0=img_noise, iter_n=20, step=1.0):\n t_score = tf.reduce_mean(t_obj) # defining the optimization objective\n t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation!\n \n img = img0.copy()\n for i in range(iter_n):\n g, score = sess.run([t_grad, t_score], {t_input:img})\n # normalizing the gradient, so the same step size should work \n g /= g.std()+1e-8 # for different layers and networks\n img += g*step\n print(score, end = ' ')\n clear_output()\n showarray(visstd(img))\n\nrender_naive(T(layer)[:,:,:,channel])
We are going to apply gradient ascent on multiple scales. Details formed on smaller scale will be upscaled and augmented with additional details on the next scale.\nWith multiscale image generation it may be tempting to set the number of octaves to some high value to produce wallpaper-sized images. Storing network activations and backprop values will quickly run out of GPU memory in this case. There is a simple trick to avoid this: split the image into smaller tiles and compute each tile gradient independently. Applying random shifts to the image before every iteration helps avoid tile seams and improves the overall image quality.", "def tffunc(*argtypes):\n '''Helper that transforms TF-graph generating function into a regular one.\n See \"resize\" function below.\n '''\n placeholders = list(map(tf.placeholder, argtypes))\n def wrap(f):\n out = f(*placeholders)\n def wrapper(*args, **kw):\n return out.eval(dict(zip(placeholders, args)), session=kw.get('session'))\n return wrapper\n return wrap\n\n# Helper function that uses TF to resize an image\ndef resize(img, size):\n img = tf.expand_dims(img, 0)\n return tf.image.resize_bilinear(img, size)[0,:,:,:]\nresize = tffunc(np.float32, np.int32)(resize)\n\n\ndef calc_grad_tiled(img, t_grad, tile_size=512):\n '''Compute the value of tensor t_grad over the image in a tiled way.\n Random shifts are applied to the image to blur tile boundaries over \n multiple iterations.'''\n sz = tile_size\n h, w = img.shape[:2]\n sx, sy = np.random.randint(sz, size=2)\n img_shift = np.roll(np.roll(img, sx, 1), sy, 0)\n grad = np.zeros_like(img)\n for y in range(0, max(h-sz//2, sz),sz):\n for x in range(0, max(w-sz//2, sz),sz):\n sub = img_shift[y:y+sz,x:x+sz]\n g = sess.run(t_grad, {t_input:sub})\n grad[y:y+sz,x:x+sz] = g\n return np.roll(np.roll(grad, -sx, 1), -sy, 0)\n\ndef render_multiscale(t_obj, img0=img_noise, iter_n=10, step=1.0, octave_n=3, octave_scale=1.4):\n t_score = tf.reduce_mean(t_obj) # defining the optimization 
objective\n t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation!\n \n img = img0.copy()\n for octave in range(octave_n):\n if octave>0:\n hw = np.float32(img.shape[:2])*octave_scale\n img = resize(img, np.int32(hw))\n for i in range(iter_n):\n g = calc_grad_tiled(img, t_grad)\n # normalizing the gradient, so the same step size should work \n g /= g.std()+1e-8 # for different layers and networks\n img += g*step\n print('.', end = ' ')\n clear_output()\n showarray(visstd(img))\n\nrender_multiscale(T(layer)[:,:,:,channel])", "<a id=\"laplacian\"></a>\nLaplacian Pyramid Gradient Normalization\nThis looks better, but the resulting images mostly contain high frequencies. Can we improve it? One way is to add a smoothness prior into the optimization objective. This will effectively blur the image a little every iteration, suppressing the higher frequencies, so that the lower frequencies can catch up. This will require more iterations to produce a nice image. Why don't we just boost lower frequencies of the gradient instead? One way to achieve this is through the Laplacian pyramid decomposition. 
We call the resulting technique Laplacian Pyramid Gradient Normalization.", "k = np.float32([1,4,6,4,1])\nk = np.outer(k, k)\nk5x5 = k[:,:,None,None]/k.sum()*np.eye(3, dtype=np.float32)\n\ndef lap_split(img):\n '''Split the image into lo and hi frequency components'''\n with tf.name_scope('split'):\n lo = tf.nn.conv2d(img, k5x5, [1,2,2,1], 'SAME')\n lo2 = tf.nn.conv2d_transpose(lo, k5x5*4, tf.shape(img), [1,2,2,1])\n hi = img-lo2\n return lo, hi\n\ndef lap_split_n(img, n):\n '''Build Laplacian pyramid with n splits'''\n levels = []\n for i in range(n):\n img, hi = lap_split(img)\n levels.append(hi)\n levels.append(img)\n return levels[::-1]\n\ndef lap_merge(levels):\n '''Merge Laplacian pyramid'''\n img = levels[0]\n for hi in levels[1:]:\n with tf.name_scope('merge'):\n img = tf.nn.conv2d_transpose(img, k5x5*4, tf.shape(hi), [1,2,2,1]) + hi\n return img\n\ndef normalize_std(img, eps=1e-10):\n '''Normalize image by making its standard deviation = 1.0'''\n with tf.name_scope('normalize'):\n std = tf.sqrt(tf.reduce_mean(tf.square(img)))\n return img/tf.maximum(std, eps)\n\ndef lap_normalize(img, scale_n=4):\n '''Perform the Laplacian pyramid normalization.'''\n img = tf.expand_dims(img,0)\n tlevels = lap_split_n(img, scale_n)\n tlevels = list(map(normalize_std, tlevels))\n out = lap_merge(tlevels)\n return out[0,:,:,:]\n\n# Showing the lap_normalize graph with TensorBoard\nlap_graph = tf.Graph()\nwith lap_graph.as_default():\n lap_in = tf.placeholder(np.float32, name='lap_in')\n lap_out = lap_normalize(lap_in)\nshow_graph(lap_graph)\n\ndef render_lapnorm(t_obj, img0=img_noise, visfunc=visstd,\n iter_n=10, step=1.0, octave_n=3, octave_scale=1.4, lap_n=4):\n t_score = tf.reduce_mean(t_obj) # defining the optimization objective\n t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation!\n # build the laplacian normalization graph\n lap_norm_func = tffunc(np.float32)(partial(lap_normalize, scale_n=lap_n))\n\n img = img0.copy()\n for 
octave in range(octave_n):\n if octave>0:\n hw = np.float32(img.shape[:2])*octave_scale\n img = resize(img, np.int32(hw))\n for i in range(iter_n):\n g = calc_grad_tiled(img, t_grad)\n g = lap_norm_func(g)\n img += g*step\n print('.', end = ' ')\n clear_output()\n showarray(visfunc(img))\n\nrender_lapnorm(T(layer)[:,:,:,channel])", "<a id=\"playing\"></a>\nPlaying with feature visualizations\nWe got a nice smooth image using only 10 iterations per octave. In case of running on GPU this takes just a few seconds. Let's try to visualize another channel from the same layer. The network can generate wide diversity of patterns.", "render_lapnorm(T(layer)[:,:,:,65])", "Lower layers produce features of lower complexity.", "render_lapnorm(T('mixed3b_1x1_pre_relu')[:,:,:,101])", "There are many interesting things one may try. For example, optimizing a linear combination of features often gives a \"mixture\" pattern.", "render_lapnorm(T(layer)[:,:,:,65]+T(layer)[:,:,:,139], octave_n=4)", "<a id=\"deepdream\"></a>\nDeepDream\nNow let's reproduce the DeepDream algorithm with TensorFlow.", "def render_deepdream(t_obj, img0=img_noise,\n iter_n=10, step=1.5, octave_n=4, octave_scale=1.4):\n t_score = tf.reduce_mean(t_obj) # defining the optimization objective\n t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation!\n\n # split the image into a number of octaves\n img = img0\n octaves = []\n for i in range(octave_n-1):\n hw = img.shape[:2]\n lo = resize(img, np.int32(np.float32(hw)/octave_scale))\n hi = img-resize(lo, hw)\n img = lo\n octaves.append(hi)\n \n # generate details octave by octave\n for octave in range(octave_n):\n if octave>0:\n hi = octaves[-octave]\n img = resize(img, hi.shape[:2])+hi\n for i in range(iter_n):\n g = calc_grad_tiled(img, t_grad)\n img += g*(step / (np.abs(g).mean()+1e-7))\n print('.',end = ' ')\n clear_output()\n showarray(img/255.0)", "Let's load some image and populate it with DogSlugs (in case you've missed 
them).", "img0 = PIL.Image.open('pilatus800.jpg')\nimg0 = np.float32(img0)\nshowarray(img0/255.0)\n\nrender_deepdream(tf.square(T('mixed4c')), img0)", "Note that results can differ from the Caffe's implementation, as we are using an independently trained network. Still, the network seems to like dogs and animal-like features due to the nature of the ImageNet dataset.\nUsing an arbitrary optimization objective still works:", "render_deepdream(T(layer)[:,:,:,139], img0)", "Don't hesitate to use higher resolution inputs (also increase the number of octaves)! Here is an example of running the flower dream over the bigger image.\nWe hope that the visualization tricks described here may be helpful for analyzing representations learned by neural networks or find their use in various artistic applications." ]
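The Laplacian-pyramid trick used above rests on a simple invariant worth checking: because the high-frequency band is defined as the residual (`hi = img - lo2`), splitting an image and merging the bands back must reconstruct the input, whatever blur is used. Below is a TensorFlow-free NumPy sketch of that invariant; 2x average pooling and nearest-neighbour upsampling stand in for the notebook's 5x5-kernel `conv2d`/`conv2d_transpose` pair, which they are not equivalent to — only the split/merge round-trip property is the same.

```python
import numpy as np

def downsample(img):
    """2x2 average pooling (crude stand-in for the blurred conv2d)."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(img):
    """Nearest-neighbour 2x upsampling (stand-in for conv2d_transpose)."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

def lap_split(img):
    lo = downsample(img)
    hi = img - upsample(lo)   # residual carries the high frequencies
    return lo, hi

def lap_merge(lo, hi):
    return upsample(lo) + hi  # inverse of lap_split by construction

rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8))
lo, hi = lap_split(img)
reconstructed = lap_merge(lo, hi)
```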
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
uwbmrb/BMRB-API
documentation/notebooks/Vicinal Disulfides.ipynb
gpl-3.0
[ "Example notebook for using the PDB and BMRB APIs for structural biology data science applications\nIntroduction\nThis notebook is designed to walk through some sample queries of both the PDB and BMRB in order to correlate NMR parameters with structure. It is hoped that this will give some guidance as to the utility of the wwPDB APIs as well as to an overall strategy of how to gather data from the different databases. This is not intended to be a tutorial on Python and no claim is made about the efficiency or correctness of the code.\nResearch Problem\nFor this example we will explore vicinal disulfide bonds in proteins - disulfide bonds between adjacent cysteines in a protein. Vicinal disulfide bonds are rare in nature but can be biologically important<sup>1</sup>. As the protein backbone is strained by such a linkage, the hypothetical research question for this notebook is whether there are any abnormal NMR chemical shifts associated with such a structure. \nFigure 1. This illustration shows a comparison of the abnormal dihedral angles observed for vicinal disulfides (right). This figure is from the poster presented at the 46th Experimental NMR Conference in Providence, RI. Susan Fox-Erlich, Heidi J.C. Ellis, Timothy O. Martyn, & Michael R. Gryk. (2005) StrucCheck: a JAVA Application to Derive Geometric Attributes from Arbitrary Subsets of Spatial Coordinates Obtained from the PDB.\n<sup>1</sup>Xiu-hong Wang, Mark Connor, Ross Smith, Mark W. Maciejewski, Merlin E.H. Howden, Graham M. Nicholson, Macdonald J. Christie & Glenn F. King. Discovery and characterization of a family of insecticidal neurotoxins with a rare vicinal disulfide bridge. Nat Struct Mol Biol 7, 505–513 (2000). https://www.nature.com/articles/nsb0600_505 https://doi.org/10.1038/nsb0600_505\nStrategy\nOur overall strategy will be to query the RCSB PDB for all entries which have vicinal disulfide bonds. 
We will then cross-reference those entries with the BMRB in order to get available chemical shifts. Since we are interested in NMR chemical shifts, when we query the PDB it will be useful to limit our search to structures determined by NMR.\nFirst we need to install and import the requests module, which will be required for both the PDB and BMRB.\nhttps://www.rcsb.org/pages/webservices\nhttps://github.com/uwbmrb/BMRB-API", "%%capture\n!pip install requests;\n\nimport requests", "Building the PDB Query (Search API)\nIn order to find all PDB entries with vicinal disulfides, we will first search for all entries with at least one disulfide bond. This is the disulfide_filter portion of the query.\nIn addition, as we are interested in the chemical shifts for vicinal disulfides, we will also restrict the results to only solution NMR studies.\nFinally, as this is an example for illustration purposes and we want to keep the number of results small, we will further restrict the results to structures determined by Glenn King. Hi Glenn!\nThis section makes use of the Search API at the PDB. Later, we will use the Data API.", "pdbAPI = \"https://search.rcsb.org/rcsbsearch/v1/query?json=\"\ndisulfide_filter = '{\"type\": \"terminal\", \"service\": \"text\", \"parameters\": {\"operator\": \"greater_or_equal\", \"value\": 1, \"attribute\": \"rcsb_entry_info.disulfide_bond_count\"}}'\nNMR_filter = '{\"type\": \"terminal\", \"service\": \"text\", \"parameters\": {\"operator\": \"exact_match\", \"value\": \"NMR\", \"attribute\": \"rcsb_entry_info.experimental_method\"}}'\nGK_filter = '{\"type\": \"terminal\", \"service\": \"text\", \"parameters\": {\"operator\": \"exact_match\", \"value\": \"King, G.F.\", \"attribute\": \"audit_author.name\"}}'", "Now we can combine these three filters together using AND", "filters = '{\"type\": \"group\", \"logical_operator\": \"and\", \"nodes\": [' + disulfide_filter + ',' + NMR_filter + ',' + GK_filter + ']}'", "And add the return information. 
Note that we are specifying the polymer_instance IDs, as that is where the disulfide bonds are noted.", "full_query = '{\"query\": ' + filters + ', \"request_options\": {\"return_all_hits\": true}, \"return_type\": \"polymer_instance\"}'", "And finally submit the request to the PDB. The response should be 200 if the query was successful.", "response = requests.get(pdbAPI + full_query)\nprint(response) # should return 200\n\nprint(type(response.json()))\n#print(response.json()) #uncomment this line if you want to see the results", "Next we will extract just the PDB codes from our results and build a list.", "pdb_results = response.json()\n\npdb_list = []\nfor x in pdb_results['result_set']:\n pdb_list.append(x['identifier'])\nprint(pdb_list)", "PDB Data API\nThe basics of the data API are illustrated with this link:\nhttps://data.rcsb.org/rest/v1/core/polymer_entity_instance/1DL0/A\nThis illustrates the REST query string, as well as how we need to append the PDB entry ID and polymer instance to the end.", "data_query_base = \"https://data.rcsb.org/rest/v1/core/polymer_entity_instance/\"\n\ndef swapSymbols(s):\n return s.replace(\".\",\"/\")\npdb_list2 = list(map(swapSymbols, pdb_list))\nprint(pdb_list2)", "Now we can loop through each PDB entry and request the polymer_entity_instance information. 
We will only care about disulfide bridges of adjacent residues", "data_response = requests.get(data_query_base + \"1DL0/A\")\nprint(data_response) # should return 200\n\nvds_list = []\nfor instance in pdb_list2:\n data_response = requests.get(data_query_base + instance)\n if data_response.status_code == 200:\n data_result = data_response.json()\n for x in data_result['rcsb_polymer_struct_conn']:\n if (x['connect_type'] == 'disulfide bridge' and x['connect_partner']['label_seq_id']-x['connect_target']['label_seq_id']==1):\n vds_list.append (data_result['rcsb_polymer_entity_instance_container_identifiers']['entry_id'])\nprint(vds_list)", "Our list is small (intentionally) but we can now use it to fetch chemical shifts from the BMRB.\nBMRB API\nOur first step is to find the corresponding BMRB entries for the PDB entries in our list. The query we want is shown below:\nhttp://api.bmrb.io/v2/search/get_bmrb_ids_from_pdb_id/", "BMRB_LookupString = 'http://api.bmrb.io/v2/search/get_bmrb_ids_from_pdb_id/'\n\nBMRB_ID_List = []\nfor PDB_ID in vds_list:\n BMRB_response = requests.get(BMRB_LookupString + PDB_ID)\n if BMRB_response.status_code == 200:\n BMRB_result = BMRB_response.json()\n for x in BMRB_result:\n for y in x['match_types']:\n if y == 'Author Provided':\n BMRB_ID_List.append (x['bmrb_id'])\nprint(BMRB_ID_List)\n\nchemical_shifts_list = []\nfor ID in BMRB_ID_List:\n x = requests.get(\"http://api.bmrb.io/v2/entry/\" + ID + \"?saveframe_category=assigned_chemical_shifts\")\n chemical_shifts_list.append (x.json())\n#print(chemical_shifts_list)", "Alternate Approach\nLook up through the BMRB adit_nmr_match csv file\nloop_\n _Assembly_db_link.Author_supplied\n _Assembly_db_link.Database_code\n _Assembly_db_link.Accession_code\n _Assembly_db_link.Entry_mol_code\n _Assembly_db_link.Entry_mol_name\n _Assembly_db_link.Entry_experimental_method\n _Assembly_db_link.Entry_structure_resolution\n _Assembly_db_link.Entry_relation_type\n _Assembly_db_link.Entry_details\n 
_Assembly_db_link.Entry_ID\n _Assembly_db_link.Assembly_ID\n\n yes PDB 1AXH . . . . .", "bmrb_link = \"https://bmrb.io/ftp/pub/bmrb/nmr_pdb_integrated_data/adit_nmr_matched_pdb_bmrb_entry_ids.csv\"" ]
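Hand-concatenating JSON strings, as in the query-building cells above, is easy to get wrong (quoting, commas, booleans). A sketch of an alternative is shown below: assembling the same kind of RCSB-style search query from Python dicts and serializing it with `json.dumps`. The attribute names mirror the ones used in the notebook; this builds the payload only and does not submit it.

```python
import json

def text_filter(attribute, operator, value):
    """One 'terminal' node of an RCSB-style full-text search query."""
    return {"type": "terminal", "service": "text",
            "parameters": {"operator": operator, "value": value,
                           "attribute": attribute}}

disulfide = text_filter("rcsb_entry_info.disulfide_bond_count", "greater_or_equal", 1)
nmr = text_filter("rcsb_entry_info.experimental_method", "exact_match", "NMR")
author = text_filter("audit_author.name", "exact_match", "King, G.F.")

query = {
    "query": {"type": "group", "logical_operator": "and",
              "nodes": [disulfide, nmr, author]},
    "request_options": {"return_all_hits": True},
    "return_type": "polymer_instance",
}
# json.dumps handles the quoting and the true/false literals for us
payload = json.dumps(query)
```

One advantage of this style is that Python `True` is serialized to JSON `true` automatically, a detail the string-concatenation version has to get right by hand.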
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
kmclaugh/fastai_courses
kevin_files/lesson4.ipynb
apache-2.0
[ "from theano.sandbox import cuda\n\n%matplotlib inline\nimport utils; reload(utils)\nfrom utils import *\nfrom __future__ import division, print_function\n\n#path = \"data/ml-20m/\"\npath = \"./data/movielens/sample/\"\nmodel_path = path + 'models/'\nif not os.path.exists(model_path): os.mkdir(model_path)\nbatch_size=64", "Set up data\nWe're working with the movielens data, which contains one rating per row, like this:", "ratings = pd.read_csv(path+'ratings.csv')\nratings.head()\n\nlen(ratings)", "Just for display purposes, let's read in the movie names too.", "movie_names = pd.read_csv(path+'movies.csv').set_index('movieId')['title'].to_dict()\n\nusers = ratings.userId.unique()\nmovies = ratings.movieId.unique()\n\nuserid2idx = {o:i for i,o in enumerate(users)}\nmovieid2idx = {o:i for i,o in enumerate(movies)}", "We update the movie and user ids so that they are contiguous integers, which we want when using embeddings.", "\nratings.movieId = ratings.movieId.apply(lambda x: movieid2idx[x])\nratings.userId = ratings.userId.apply(lambda x: userid2idx[x])\nratings.head()\n\nuser_min, user_max, movie_min, movie_max = (ratings.userId.min(), \n ratings.userId.max(), ratings.movieId.min(), ratings.movieId.max())\nuser_min, user_max, movie_min, movie_max\n\nn_users = ratings.userId.nunique()\nn_movies = ratings.movieId.nunique()\nn_users, n_movies", "This is the number of latent factors in each embedding.", "n_factors = 50\n\nnp.random.seed = 42", "Randomly split into training and validation.", "msk = np.random.rand(len(ratings)) < 0.8\ntrn = ratings[msk]\nval = ratings[~msk]", "Create subset for Excel\nWe create a crosstab of the most popular movies and most movie-addicted users which we'll copy into Excel for creating a simple example. 
This isn't necessary for any of the modeling below however.", "g=ratings.groupby('userId')['rating'].count()\ntopUsers=g.sort_values(ascending=False)[:15]\n\ng=ratings.groupby('movieId')['rating'].count()\ntopMovies=g.sort_values(ascending=False)[:15]\n\ntop_r = ratings.join(topUsers, rsuffix='_r', how='inner', on='userId')\n\ntop_r = top_r.join(topMovies, rsuffix='_r', how='inner', on='movieId')\n\npd.crosstab(top_r.userId, top_r.movieId, top_r.rating, aggfunc=np.sum)", "Dot product\nThe most basic model is a dot product of a movie embedding and a user embedding. Let's see how well that works:", "user_in = Input(shape=(1,), dtype='int64', name='user_in')\nu = Embedding(n_users, n_factors, input_length=1, W_regularizer=l2(1e-4))(user_in)\nmovie_in = Input(shape=(1,), dtype='int64', name='movie_in')\nm = Embedding(n_movies, n_factors, input_length=1, W_regularizer=l2(1e-4))(movie_in)\n\nx = merge([u, m], mode='dot')\nx = Flatten()(x)\nmodel = Model([user_in, movie_in], x)\nmodel.compile(Adam(0.001), loss='mse')\n\nmodel.fit([trn.userId, trn.movieId], trn.rating, batch_size=64, nb_epoch=1, \n validation_data=([val.userId, val.movieId], val.rating))\n\nmodel.optimizer.lr=0.01\n\nmodel.fit([trn.userId, trn.movieId], trn.rating, batch_size=64, nb_epoch=3, \n validation_data=([val.userId, val.movieId], val.rating))\n\nmodel.optimizer.lr=0.001\n\nmodel.fit([trn.userId, trn.movieId], trn.rating, batch_size=64, nb_epoch=6, \n validation_data=([val.userId, val.movieId], val.rating))", "The best benchmarks are a bit over 0.9, so this model doesn't seem to be working that well...\nBias\nThe problem is likely to be that we don't have bias terms - that is, a single bias for each user and each movie representing how positive or negative each user is, and how good each movie is. 
We can add that easily by simply creating an embedding with one output for each movie and each user, and adding it to our output.", "def embedding_input(name, n_in, n_out, reg):\n inp = Input(shape=(1,), dtype='int64', name=name)\n return inp, Embedding(n_in, n_out, input_length=1, W_regularizer=l2(reg))(inp)\n\nuser_in, u = embedding_input('user_in', n_users, n_factors, 1e-4)\nmovie_in, m = embedding_input('movie_in', n_movies, n_factors, 1e-4)\n\ndef create_bias(inp, n_in):\n x = Embedding(n_in, 1, input_length=1)(inp)\n return Flatten()(x)\n\nub = create_bias(user_in, n_users)\nmb = create_bias(movie_in, n_movies)\n\nx = merge([u, m], mode='dot')\nx = Flatten()(x)\nx = merge([x, ub], mode='sum')\nx = merge([x, mb], mode='sum')\nmodel = Model([user_in, movie_in], x)\nmodel.compile(Adam(0.001), loss='mse')\n\nmodel.fit([trn.userId, trn.movieId], trn.rating, batch_size=64, nb_epoch=1, \n validation_data=([val.userId, val.movieId], val.rating))\n\nmodel.optimizer.lr=0.01\n\nmodel.fit([trn.userId, trn.movieId], trn.rating, batch_size=64, nb_epoch=6, \n validation_data=([val.userId, val.movieId], val.rating))\n\nmodel.optimizer.lr=0.001\n\nmodel.fit([trn.userId, trn.movieId], trn.rating, batch_size=64, nb_epoch=10, \n validation_data=([val.userId, val.movieId], val.rating))\n\nmodel.fit([trn.userId, trn.movieId], trn.rating, batch_size=64, nb_epoch=5, \n validation_data=([val.userId, val.movieId], val.rating))", "This result is quite a bit better than the best benchmarks that we could find with a quick google search - so looks like a great approach!", "model.save_weights(model_path+'bias.h5')\n\nmodel.load_weights(model_path+'bias.h5')", "We can use the model to generate predictions by passing a pair of ints - a user id and a movie id. 
For instance, this predicts that user #3 would really enjoy movie #6.", "model.predict([np.array([3]), np.array([6])])", "Analyze results\nTo make the analysis of the factors more interesting, we'll restrict it to the top 2000 most popular movies.", "g=ratings.groupby('movieId')['rating'].count()\ntopMovies=g.sort_values(ascending=False)[:2000]\ntopMovies = np.array(topMovies.index)", "First, we'll look at the movie bias term. We create a 'model' - which in keras is simply a way of associating one or more inputs with one or more outputs, using the functional API. Here, our input is the movie id (a single id), and the output is the movie bias (a single float).", "get_movie_bias = Model(movie_in, mb)\nmovie_bias = get_movie_bias.predict(topMovies)\nmovie_ratings = [(b[0], movie_names[movies[i]]) for i,b in zip(topMovies,movie_bias)]", "Now we can look at the top and bottom rated movies. These ratings are corrected for different levels of reviewer sentiment, as well as different types of movies that different reviewers watch.", "sorted(movie_ratings, key=itemgetter(0))[:15]\n\nsorted(movie_ratings, key=itemgetter(0), reverse=True)[:15]", "We can now do the same thing for the embeddings.", "get_movie_emb = Model(movie_in, m)\nmovie_emb = np.squeeze(get_movie_emb.predict([topMovies]))\nmovie_emb.shape", "Because it's hard to interpret 50 embeddings, we use PCA to simplify them down to just 3 vectors.", "from sklearn.decomposition import PCA\npca = PCA(n_components=3)\nmovie_pca = pca.fit(movie_emb.T).components_\n\nfac0 = movie_pca[0]\n\nmovie_comp = [(f, movie_names[movies[i]]) for f,i in zip(fac0, topMovies)]", "Here's the 1st component. 
It seems to be 'critically acclaimed' or 'classic'.", "sorted(movie_comp, key=itemgetter(0), reverse=True)[:10]\n\nsorted(movie_comp, key=itemgetter(0))[:10]\n\nfac1 = movie_pca[1]\n\nmovie_comp = [(f, movie_names[movies[i]]) for f,i in zip(fac1, topMovies)]", "The 2nd is 'hollywood blockbuster'.", "sorted(movie_comp, key=itemgetter(0), reverse=True)[:10]\n\nsorted(movie_comp, key=itemgetter(0))[:10]\n\nfac2 = movie_pca[2]\n\nmovie_comp = [(f, movie_names[movies[i]]) for f,i in zip(fac2, topMovies)]", "The 3rd is 'violent vs happy'.", "sorted(movie_comp, key=itemgetter(0), reverse=True)[:10]\n\nsorted(movie_comp, key=itemgetter(0))[:10]", "We can draw a picture to see how various movies appear on the map of these components. This picture shows the 1st and 3rd components.", "import sys\nstdout, stderr = sys.stdout, sys.stderr # save notebook stdout and stderr\nreload(sys)\nsys.setdefaultencoding('utf-8')\nsys.stdout, sys.stderr = stdout, stderr # restore notebook stdout and stderr\n\nstart=50; end=100\nX = fac0[start:end]\nY = fac2[start:end]\nplt.figure(figsize=(15,15))\nplt.scatter(X, Y)\nfor i, x, y in zip(topMovies[start:end], X, Y):\n plt.text(x,y,movie_names[movies[i]], color=np.random.rand(3)*0.7, fontsize=14)\nplt.show()", "Neural net\nRather than creating a special purpose architecture (like our dot-product with bias earlier), it's often both easier and more accurate to use a standard neural network. Let's try it! 
Here, we simply concatenate the user and movie embeddings into a single vector, which we feed into the neural net.", "user_in, u = embedding_input('user_in', n_users, n_factors, 1e-4)\nmovie_in, m = embedding_input('movie_in', n_movies, n_factors, 1e-4)\n\nx = merge([u, m], mode='concat')\nx = Flatten()(x)\nx = Dropout(0.3)(x)\nx = Dense(70, activation='relu')(x)\nx = Dropout(0.75)(x)\nx = Dense(1)(x)\nnn = Model([user_in, movie_in], x)\nnn.compile(Adam(0.001), loss='mse')\n\nnn.fit([trn.userId, trn.movieId], trn.rating, batch_size=64, nb_epoch=8, \n validation_data=([val.userId, val.movieId], val.rating))", "This improves on our already impressive accuracy even further!" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
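The dot-product-plus-bias recommender built with Keras in the lesson4 cells above can also be sketched framework-free, which makes the model's moving parts explicit. Below is a minimal NumPy SGD version on tiny invented ratings; the toy data, hyperparameters, and variable names are illustrative assumptions, not the notebook's own:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ratings: (user, movie, rating) triples with contiguous ids,
# mirroring the userid2idx / movieid2idx remapping in the notebook.
ratings = [(0, 0, 5.0), (0, 1, 1.0), (1, 0, 4.0),
           (1, 2, 2.0), (2, 1, 1.0), (2, 2, 5.0)]
n_users, n_movies, n_factors = 3, 3, 4

# One embedding row per user/movie, plus one scalar bias each.
U = rng.normal(0, 0.1, (n_users, n_factors))
M = rng.normal(0, 0.1, (n_movies, n_factors))
ub = np.zeros(n_users)
mb = np.zeros(n_movies)

lr, reg = 0.05, 1e-4
for _ in range(2000):
    for u, m, r in ratings:
        pred = U[u] @ M[m] + ub[u] + mb[m]
        err = pred - r
        # Gradients of squared error with L2 on the embeddings,
        # computed before either embedding is updated.
        gu = err * M[m] + reg * U[u]
        gm = err * U[u] + reg * M[m]
        U[u] -= lr * gu
        M[m] -= lr * gm
        ub[u] -= lr * err
        mb[m] -= lr * err

mse = np.mean([(U[u] @ M[m] + ub[u] + mb[m] - r) ** 2
               for u, m, r in ratings])
print(round(mse, 4))
```

The per-rating update is exactly what `Adam` does in batched form in the notebook; the bias terms play the same role as the `create_bias` embeddings there.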
mne-tools/mne-tools.github.io
dev/_downloads/9d142d98d094666b7fd2d94155f8b3ec/decoding_unsupervised_spatial_filter.ipynb
bsd-3-clause
[ "%matplotlib inline", "Analysis of evoked response using ICA and PCA reduction techniques\nThis example computes PCA and ICA of evoked or epochs data. Then the\nPCA / ICA components, a.k.a. spatial filters, are used to transform\nthe channel data to new sources / virtual channels. The output is\nvisualized on the average of all the epochs.", "# Authors: Jean-Remi King <jeanremi.king@gmail.com>\n# Asish Panda <asishrocks95@gmail.com>\n#\n# License: BSD-3-Clause\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nimport mne\nfrom mne.datasets import sample\nfrom mne.decoding import UnsupervisedSpatialFilter\n\nfrom sklearn.decomposition import PCA, FastICA\n\nprint(__doc__)\n\n# Preprocess data\ndata_path = sample.data_path()\n\n# Load and filter data, set up epochs\nmeg_path = data_path / 'MEG' / 'sample'\nraw_fname = meg_path / 'sample_audvis_filt-0-40_raw.fif'\nevent_fname = meg_path / 'sample_audvis_filt-0-40_raw-eve.fif'\ntmin, tmax = -0.1, 0.3\nevent_id = dict(aud_l=1, aud_r=2, vis_l=3, vis_r=4)\n\nraw = mne.io.read_raw_fif(raw_fname, preload=True)\nraw.filter(1, 20, fir_design='firwin')\nevents = mne.read_events(event_fname)\n\npicks = mne.pick_types(raw.info, meg=False, eeg=True, stim=False, eog=False,\n exclude='bads')\n\nepochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=False,\n picks=picks, baseline=None, preload=True,\n verbose=False)\n\nX = epochs.get_data()", "Transform data with PCA computed on the average, i.e. the evoked response", "pca = UnsupervisedSpatialFilter(PCA(30), average=False)\npca_data = pca.fit_transform(X)\nev = mne.EvokedArray(np.mean(pca_data, axis=0),\n mne.create_info(30, epochs.info['sfreq'],\n ch_types='eeg'), tmin=tmin)\nev.plot(show=False, window_title=\"PCA\", time_unit='s')", "Transform data with ICA computed on the raw epochs (no averaging)", "ica = UnsupervisedSpatialFilter(FastICA(30), average=False)\nica_data = ica.fit_transform(X)\nev1 = mne.EvokedArray(np.mean(ica_data, axis=0),\n mne.create_info(30, 
epochs.info['sfreq'],\n ch_types='eeg'), tmin=tmin)\nev1.plot(show=False, window_title='ICA', time_unit='s')\n\nplt.show()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
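The MNE example above wraps a scikit-learn estimator in `UnsupervisedSpatialFilter` to learn one spatial decomposition from all epochs and apply it epoch by epoch. The underlying idea can be sketched with plain NumPy and scikit-learn on synthetic data shaped like `epochs.get_data()`; this is a hedged re-implementation of the concept, not MNE's actual code, and the array sizes are invented:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_epochs, n_channels, n_times, n_components = 20, 10, 50, 3

# Synthetic epochs array: (n_epochs, n_channels, n_times),
# the same layout epochs.get_data() returns.
X = rng.normal(size=(n_epochs, n_channels, n_times))

# Pool epochs and time points into one big (samples, channels) matrix
# so PCA learns a single set of spatial filters over channels.
Xf = np.transpose(X, (0, 2, 1)).reshape(-1, n_channels)
pca = PCA(n_components).fit(Xf)

# Apply the learned filters to every sample, then restore epoch structure:
# (n_epochs, n_components, n_times), i.e. virtual channels per epoch.
sources = pca.transform(Xf).reshape(n_epochs, n_times, n_components)
sources = np.transpose(sources, (0, 2, 1))
print(sources.shape)
```

Averaging `sources` over epochs then gives exactly the kind of array the example feeds to `mne.EvokedArray` for plotting.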
bioinformatica-corso/lezioni
laboratorio/lezione6-7-15-21ott21/esercizio3-soluzione.ipynb
cc0-1.0
[ "Exercise 3\nEMBL (http://www.ebi.ac.uk/cgi-bin/sva/sva.pl/) is a nucleotide sequence database developed by EMBL-EBI (European Bioinformatics Institute, European Molecular Biology Laboratory), in which every nucleotide sequence is stored, together with other information, in a plain-text file (EMBL entry) in a format called EMBL.\nThe EMBL format is made up of records that begin with a two-letter uppercase code indicating their content. The only records that do not begin with a two-letter code are the ones (in the final part) containing the nucleotide sequence.\nThe record // marks the end of the file.\nThe exercise asks you to download (in EMBL format) the nucleotide sequence with identifier M10051, containing an mRNA (that is, a transcript expressed by a gene), and to write a notebook that obtains from that entry:\n\nthe nucleotide sequence in FASTA format\nthe sequence of the corresponding protein in FASTA format\n\n\nRequirements:\n\n\nthe FASTA header of the nucleotide sequence must contain the unique identifier of the sequence and the organism it refers to, in the following format:\n&gt;M10051-HUM\n\n\n\nthe nucleotide sequence must be produced over the alphabet {a,c,g,u}\n\n\nthe FASTA header of the protein sequence must contain the unique identifier of the sequence, the organism, and the length of the protein, in the following format:\n&gt;M10051-HUM; len = 1382\n\n\n\nthe nucleotide sequences must be produced in records of 80 characters (and not on a single line)\n\n\na function format_fasta() must be defined that takes as arguments a string containing the FASTA header and a nucleotide (or protein) sequence and returns the sequence in FASTA format, with the sequence split into lines of 80 characters.\n\n\nuse only regular expressions to extract the information\n\n\n\nProduce the two outputs in two string objects.\n\nRecords to consider for the exercise:\n\n\nThe record that 
begins with ID:\nID M10051; SV 1; linear; mRNA; STD; HUM; 4723 BP.\n\n\ncontains the unique identifier of the sequence (M10051) and the organism (HUM). The fact that the file refers to the nucleotide sequence of a gene is indicated by the presence of the word mRNA.\n\n\nThe set of all records that begin with FT contains the features annotated on the nucleotide sequence. In particular, all the records of the section:\nFT /translation=\"MGTGGRRGAAAAPLLVAVAALLLGAAGHLYPGEVCPGMDIRNNLT\n FT RLHELENCSVIEGHLQILLMFKTRPEDFRDLSFPKLIMITDYLLLFRVYGLESLKDLFP\n FT NLTVIRGSRLFFNYALVIFEMVHLKELGLYNLMNITRGSVRIEKNNELCYLATIDWSRI\n FT LDSVEDNHIVLNKDDNEECGDICPGTAKGKTNCPATVINGQFVERCWTHSHCQKVCPTI\n FT [...]\n FT DGGSSLGFKRSYEEHIPYTHMNGGKKNGRILTLPRSNPS\"\n\n\ncontain the sequence of the protein corresponding to the mRNA sequence in question.\n\nThe record that begins with SQ:\nSQ Sequence 4723 BP; 1068 A; 1298 C; 1311 G; 1046 T; 0 other;\n\n\n\nintroduces the nucleotide sequence section and gives its nucleotide composition as well as its length. \nThe section ends with the record // (end-of-file marker).\nThe nucleotide sequence is represented by a series of records that begin with spaces, and each record contains a piece of sequence 60 bases long. 
Each 60-base piece is in turn split into pieces of 10 bases.\nThe integer at the end of a sequence record gives the total length of the sequence up to that point.\nSQ Sequence 4723 BP; 1068 A; 1298 C; 1311 G; 1046 T; 0 other;\n ggggggctgc gcggccgggt cggtgcgcac acgagaagga cgcgcggccc ccagcgctct 60\n tgggggccgc ctcggagcat gacccccgcg ggccagcgcc gcgcgcctga tccgaggaga 120\n ccccgcgctc ccgcagccat gggcaccggg ggccggcggg gggcggcggc cgcgccgctg 180\n ctggtggcgg tggccgcgct gctactgggc gccgcgggcc acctgtaccc cggagaggtg 240\n tgtcccggca tggatatccg gaacaacctc actaggttgc atgagctgga gaattgctct 300\n gtcatcgaag gacacttgca gatactcttg atgttcaaaa cgaggcccga agatttccga 360\n gacctcagtt tccccaaact catcatgatc actgattact tgctgctctt ccgggtctat 420\n gggctcgaga gcctgaagga cctgttcccc aacctcacgg tcatccgggg atcacgactg 480\n [...]\n tttttcgttc cccccacccg cccccagcag atggaaagaa agcacctgtt tttacaaatt 4620\n cttttttttt tttttttttt tttttttttg ctggtgtctg agcttcagta taaaagacaa 4680\n aacttcctgt ttgtggaaca aaatttcgaa agaaaaaacc aaa 4723\n//\n\nNOTE:\n- the amino-acid alphabet is {ACDEFGHIKLMNPQRSTVWY}\n- the nucleotide sequence reported in the EMBL entry is over the alphabet {a,c,g,t} even though it represents the primary sequence of an mRNA. 
To obtain the sequence over the alphabet {a,c,g,u} it is enough to replace every t symbol with a u symbol.\n\nSolution\nDefinition of the format_fasta() function\nThe function takes as arguments the FASTA header and a sequence and returns the sequence in FASTA format after splitting it into lines (records) of 80 characters.", "def format_fasta(header, sequence):\n    return header + '\\n' + '\\n'.join(re.findall('\\w{,80}', sequence))", "NOTE: assume that the header passed to the function already has the &gt; symbol at the beginning, but no trailing \\n symbol at the end.\nInput parameters", "input_file_name = './M10051.txt'", "Import the re module to use regular expressions.", "import re", "Read the EMBL file into the single string file_str", "with open(input_file_name, 'r') as input_file:\n    file_str = input_file.read()\n\nprint(file_str)", "Extraction of the unique identifier and of the organism for the entry.\nExtract from the ID record:\nID M10051; SV 1; linear; mRNA; STD; HUM; 4723 BP.\n\nthe unique identifier and the organism, and assign them to the variables identifier and organism.\nExtraction of the identifier.", "s = re.search('^ID\\s+(\\w+)', file_str, re.M)\nidentifier = s.group(1)\n\nidentifier", "Extraction of the organism.", "s = re.search('(\\w+);[\\s\\d\\w.]+$', file_str, re.M)\ns = re.search('^ID.+\\s+(\\w+);', file_str, re.M)\norganism = s.group(1)\n\norganism", "Nucleotide sequence in FASTA format\nGet the list of the nucleotide-sequence records, excluding from each of them only the final integer.", "seq_row_list = re.findall('^(\\s+\\D+)\\d+', file_str, re.M)\n\nseq_row_list", "From the list just built, extract the list containing the 10-character pieces of the nucleotide sequence, so that the i-th element is a nested list containing the six pieces of sequence belonging to the i-th record of the list just built.\nNOTE: be careful that the last piece may be shorter than 10 
bases.", "seq_chunk_list = [re.findall('\\w+', row) for row in seq_row_list]\n\nseq_chunk_list", "Concatenate the pieces of the list into a single string.", "nucleotide_sequence = ''\nfor chunk in seq_chunk_list:\n    for element in chunk:\n        nucleotide_sequence += element\n    \nnucleotide_sequence = ''.join([''.join(six_chunk_list) for six_chunk_list in seq_chunk_list])\n\nnucleotide_sequence", "Replace every t symbol with a u symbol.", "nucleotide_sequence = re.sub('t', 'u', nucleotide_sequence)\n\nnucleotide_sequence", "Produce the nucleotide sequence in FASTA format with the following header:\n&gt;M10051-HUM", "header = '>' + identifier + '-' + organism\nnucleotide_fasta_sequence = format_fasta(header, nucleotide_sequence)\n\nprint(nucleotide_fasta_sequence)", "Protein sequence in FASTA format\nExtract the protein prefix contained in the record:\nFT /translation=\"MGTGGRRGAAAAPLLVAVAALLLGAAGHLYPGEVCPGMDIRNNLT", "s = re.search('^FT\\s+/translation=\\\"(\\w+)', file_str, re.M)\nprotein_prefix = s.group(1)\n\nprotein_prefix", "Get the list of the other protein records:\nFT RLHELENCSVIEGHLQILLMFKTRPEDFRDLSFPKLIMITDYLLLFRVYGLESLKDLFP\n\nNOTE: pay attention to the last one:\nFT DGGSSLGFKRSYEEHIPYTHMNGGKKNGRILTLPRSNPS\"\n\nwhich contains trailing double quotes \".", "protein_list = re.findall('^FT\\s+([A-Z]+)\"?$', file_str, re.M)\n\nprotein_list", "Prepend the prefix found above to the list and concatenate all the elements of the list to obtain the protein sequence in a single string.", "protein_list[:0] = [protein_prefix]\nprotein_sequence = ''.join(protein_list)\n\nprotein_sequence", "Produce the protein sequence in FASTA format with the following header:\n&gt;M10051-HUM; len = 1382", "header = '>' + identifier + '-' + organism + '; len = ' + str(len(protein_sequence))\nprotein_fasta_sequence = format_fasta(header, protein_sequence)\n\nprint(protein_fasta_sequence)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
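The `format_fasta()` helper in the exercise above wraps a sequence to 80-character lines with a regex. An equivalent slicing-based sketch makes the wrapping behavior easy to check; the demo sequence, the `width` parameter, and the variable names here are illustrative assumptions, not part of the original exercise:

```python
def format_fasta(header, sequence, width=80):
    # Split the sequence into fixed-width lines under the FASTA header.
    lines = [sequence[i:i + width] for i in range(0, len(sequence), width)]
    return header + '\n' + '\n'.join(lines)

seq = 'acgu' * 50  # 200 bases -> lines of 80, 80, 40 characters
fasta = format_fasta('>M10051-HUM', seq)
print(fasta.splitlines()[0])
print([len(line) for line in fasta.splitlines()[1:]])
```

Unlike `re.findall('\w{,80}', ...)`, slicing never produces a trailing empty match, so no blank line can appear at the end of the record.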