fonnesbeck/scientific-python-workshop
notebooks/Model Building with PyMC.ipynb
cc0-1.0
[ "Building Models in PyMC\nBayesian inference begins with specification of a probability model\nrelating unknown variables to data. PyMC provides three basic building\nblocks for Bayesian probability models: Stochastic, Deterministic\nand Potential.\nA Stochastic object represents a variable whose value is not\ncompletely determined by its parents, and a Deterministic object\nrepresents a variable that is entirely determined by its parents. In\nobject-oriented programming parlance, Stochastic and Deterministic\nare subclasses of the Variable class, which only serves as a template\nfor other classes and is never actually implemented in models.\nThe third basic class, Potential, represents 'factor potentials', which are not variables but simply\nlog-likelihood terms and/or constraints that are multiplied into joint\ndistributions to modify them. Potential and Variable are subclasses\nof Node.\nThe Stochastic class\nA stochastic variable has the following primary attributes:\nvalue\n: The variable's current value.\nlogp\n: The log-probability of the variable's current value given the values\n of its parents.\nA stochastic variable can optionally be endowed with a method called\nrandom, which draws a value for the variable given the values of its\nparents. \nCreation of stochastic variables\nThere are three main ways to create stochastic variables, called the\nautomatic, decorator, and direct interfaces.\nAutomatic\nStochastic variables with standard distributions provided by PyMC can be created in a\nsingle line using special subclasses of Stochastic. 
For example, the uniformly-distributed discrete variable $switchpoint$ in the coal mining disasters model is created using the automatic interface as follows:", "import pymc as pm\nimport numpy as np\nfrom pymc.examples import disaster_model\n\nswitchpoint = pm.DiscreteUniform('switchpoint', lower=0, upper=110)", "Similarly, the rate parameters can automatically be given exponential priors:", "early_mean = pm.Exponential('early_mean', beta=1., value=1)\nlate_mean = pm.Exponential('late_mean', beta=1., value=1)", "Decorator\nUniformly-distributed discrete stochastic variable $switchpoint$ in the disasters model could alternatively be created from a function that computes its log-probability as follows:", "@pm.stochastic\ndef switchpoint(value=1900, t_l=1851, t_h=1962):\n \"\"\"The switchpoint for the rate of disaster occurrence.\"\"\"\n if value > t_h or value < t_l:\n # Invalid values\n return -np.inf\n else:\n # Uniform log-likelihood\n return -np.log(t_h - t_l + 1)", "Note that this is a simple Python function preceded by a Python\nexpression called a decorator, here called\n@stochastic. Generally, decorators enhance functions with\nadditional properties or functionality. The Stochastic object\nproduced by the @stochastic decorator will evaluate its\nlog-probability using the function switchpoint. The value\nargument, which is required, provides an initial value for the\nvariable. The remaining arguments will be assigned as parents of\nswitchpoint (i.e. they will populate the parents dictionary).\nTo emphasize, the Python function decorated by @stochastic should\ncompute the log-density or log-probability of the variable. 
That\nis why the return value in the example above is $-\\log(t_h-t_l+1)$\nrather than $1/(t_h-t_l+1)$.\nDirect\nIt's also possible to instantiate Stochastic directly:", "def switchpoint_logp(value, t_l, t_h):\n if value > t_h or value < t_l:\n return -np.inf\n else:\n return -np.log(t_h - t_l + 1)\n\ndef switchpoint_rand(t_l, t_h):\n return np.round( (t_h - t_l) * np.random.random() ) + t_l\n\nswitchpoint = pm.Stochastic( logp = switchpoint_logp,\n doc = 'The switchpoint for the rate of disaster occurrence.',\n name = 'switchpoint',\n parents = {'t_l': 1851, 't_h': 1962},\n random = switchpoint_rand,\n trace = True,\n value = 1900,\n dtype=int,\n rseed = 1.,\n observed = False,\n cache_depth = 2,\n plot=True,\n verbose = 0)", "Notice that the log-probability and random variate functions are\nspecified externally and passed to Stochastic as arguments. This\nis a rather awkward way to instantiate a stochastic variable;\nconsequently, such implementations should be rare.\nData Stochastics\nData are represented by Stochastic objects whose observed attribute\nis set to True. If a stochastic variable's observed flag is True,\nits value cannot be changed, and it won't be sampled by the fitting\nmethod.\nIn each interface, an optional keyword argument observed can be set to\nTrue. In the decorator interface, the\n@observed decorator is used instead of @stochastic:", "from scipy.stats.distributions import poisson\n\n@pm.observed\ndef likelihood(value=[1, 2, 1, 5], parameter=3):\n return poisson.logpmf(value, parameter).sum()", "In the other interfaces, the observed=True argument is added to the\ninstantiation of the Stochastic, or its subclass:", "disasters = pm.Poisson('disasters', mu=2, \n value=disaster_model.disasters_array, \n observed=True)", "The Deterministic class\nThe Deterministic class represents variables whose values are\ncompletely determined by the values of their parents. 
For example, in\nour disasters model, $rate$ is a deterministic variable.", "@pm.deterministic\ndef rate(s=switchpoint, e=early_mean, l=late_mean):\n ''' Concatenate Poisson means '''\n out = np.empty(len(disaster_model.disasters_array))\n out[:s] = e\n out[s:] = l\n return out", "so rate's value can be computed exactly from the values of its parents\nearly_mean, late_mean and switchpoint.\nA Deterministic variable's most important attribute is value, which\ngives the current value of the variable given the values of its parents.\nLike Stochastic's logp attribute, this attribute is computed\non-demand and cached for efficiency.\nA Deterministic variable has the following additional attributes:\nparents\n: A dictionary containing the variable's parents. The keys of the dictionary correspond to the names assigned to the variable's parents by the variable, and the values correspond to the actual parents.\nchildren\n: A set containing the variable's children, which must be nodes.\nDeterministic variables have no methods.\nCreation of deterministic variables\nDeterministic variables are less complicated than stochastic variables,\nand have similar automatic, decorator, and direct\ninterfaces:\nAutomatic\nA handful of common functions have been wrapped in Deterministic\nobjects. These are brief enough to list:\nLinearCombination\n: Has two parents $x$ and $y$, both of which must be iterable (i.e. vector-valued). This function returns:\n\\[\\sum_i x_i^T y_i\\]\nIndex\n: Has two parents $x$ and index. $x$ must be iterable, index must be valued as an integer. 
\n\\[x[\\text{index}]\\]\nIndex is useful for implementing dynamic models, in which the parent-child connections change.\nLambda\n: Converts an anonymous function (in Python, called lambda functions) to a Deterministic instance on a single line.\nCompletedDirichlet\n: PyMC represents Dirichlet variables of length $k$ by the first $k-1$ elements; since they must sum to 1, the $k^{th}$ element is determined by the others. CompletedDirichlet appends the $k^{th}$ element to the value of its parent $D$.\nLogit, InvLogit, StukelLogit, StukelInvLogit\n: Common link functions for generalized linear models, and their inverses.\nIt's a good idea to use these classes when feasible in order to give hints to step methods.\nCertain elementary operations on variables create deterministic variables. For example:", "x = pm.MvNormal('x', np.ones(3), np.eye(3))\ny = pm.MvNormal('y', np.ones(3), np.eye(3))\nx+y\n\nprint(x[0])\n\nprint(x[0]+y[2])", "All the objects thus created have trace=False and plot=False by default.\nDecorator\nWe have seen in the disasters example how the decorator interface is used to create a deterministic variable. Notice that rather than returning the log-probability, as is the\ncase for Stochastic objects, the function returns the value of the deterministic object, given its parents. Also notice that, unlike for Stochastic objects, there is no value argument\npassed, since the value is calculated deterministically by the\nfunction itself. 
\nDirect\nDeterministic objects can also be instantiated directly:", "def rate_eval(switchpoint=switchpoint, early_mean=early_mean, late_mean=late_mean):\n value = np.zeros(111)\n value[:switchpoint] = early_mean\n value[switchpoint:] = late_mean\n return value\n\nrate = pm.Deterministic(eval = rate_eval,\n name = 'rate',\n parents = {'switchpoint': switchpoint, \n 'early_mean': early_mean, \n 'late_mean': late_mean},\n doc = 'The rate of disaster occurrence.',\n trace = True,\n verbose = 0,\n dtype=float,\n plot=False,\n cache_depth = 2)", "Containers\nIn some situations it would be inconvenient to assign a unique label to\neach parent of some variable. Consider $y$ in the following model:\n$$\\begin{align}\nx_0 &\\sim \\text{N}(0,\\tau_x)\\\\\nx_{i+1}|x_i &\\sim \\text{N}(x_i, \\tau_x)\\\\\n&i=0,\\ldots, N-2\\\\\ny|x &\\sim \\text{N}\\left(\\sum_{i=0}^{N-1}x_i^2,\\tau_y\\right)\n\\end{align}$$\nHere, $y$ depends on every element of the Markov chain $x$, but we\nwouldn't want to manually enter $N$ parent labels x_0,\nx_1, etc.\nThis situation can be handled naturally in PyMC:", "N = 10\nx_0 = pm.Normal('x_0', mu=0, tau=1)\n\nx = np.empty(N, dtype=object)\nx[0] = x_0\n\nfor i in range(1, N):\n\n x[i] = pm.Normal('x_%i' % i, mu=x[i-1], tau=1)\n\n@pm.observed\ndef y(value=1, mu=x, tau=100):\n return pm.normal_like(value, (mu**2).sum(), tau)", "PyMC automatically wraps array $x$ in an appropriate Container class.\nThe expression 'x_%i' % i labels each Normal object in the container\nwith the appropriate index $i$. For example, if i=1, the name of the\ncorresponding element becomes x_1.\nContainers, like variables, have an attribute called value. 
This\nattribute returns a copy of the (possibly nested) iterable that was\npassed into the container function, but with each variable inside\nreplaced with its corresponding value.\nThe Potential class\nFor some applications, we want to be able to modify the joint density by\nincorporating terms that don't correspond to probabilities of variables\nconditional on parents, for example:\n$$\\begin{eqnarray}\np(x_0, x_1, \\ldots x_{N-1}) \\propto \\prod_{i=0}^{N-2} \\psi_i(x_i, x_{i+1}).\n\\end{eqnarray}$$\nIn other cases we may want to add probability terms to existing models.\nFor example, suppose we want to constrain the difference between the early and late means in the disaster model to be less than 1, so that the joint density becomes:\n$$p(y,\\tau,\\lambda_1,\\lambda_2) \\propto p(y|\\tau,\\lambda_1,\\lambda_2) p(\\tau) p(\\lambda_1) p(\\lambda_2) I(|\\lambda_2-\\lambda_1| \\lt 1)$$\nArbitrary factors are implemented by objects of class Potential. Bayesian\nhierarchical notation doesn't accommodate these potentials. \nPotentials have one important attribute, logp, the log of their\ncurrent probability or probability density value given the values of\ntheir parents. The only other additional attribute of interest is\nparents, a dictionary containing the potential's parents. Potentials\nhave no methods. They have no trace attribute, because they are not\nvariables. They cannot serve as parents of variables (for the same\nreason), so they have no children attribute.\nCreation of Potentials\nThere are two ways to create potentials:\nDecorator\nA potential can be created via a decorator in a way very similar to\nDeterministic's decorator interface:", "@pm.potential\ndef rate_constraint(l1=early_mean, l2=late_mean):\n if np.abs(l2 - l1) > 1:\n return -np.inf\n return 0", "The function supplied should return the potential's current\nlog-probability or log-density as a Numpy float. 
The\npotential decorator can take verbose and cache_depth arguments\nlike the stochastic decorator.\nDirect\nThe same potential could be created directly as follows:", "def rate_constraint_logp(l1=early_mean, l2=late_mean):\n if np.abs(l2 - l1) > 1:\n return -np.inf\n return 0\n\nrate_constraint = pm.Potential(logp = rate_constraint_logp,\n name = 'rate_constraint',\n parents = {'l1': early_mean, 'l2': late_mean},\n doc = 'Constraint on rate differences',\n verbose = 0,\n cache_depth = 2)", "Example: Bioassay model\nRecall from a previous lecture the bioassay example, where the number of deaths in a toxicity experiment was modeled as a binomial response, with the logit of the probability of death being a linear function of dose:\n$$\\begin{aligned}\ny_i &\\sim \\text{Bin}(n_i, p_i) \\\\\n\\text{logit}(p_i) &= a + b x_i\n\\end{aligned}$$\nImplement this model in PyMC (we will show you how to fit the model later!)", "# Log dose in each group\nlog_dose = [-.86, -.3, -.05, .73]\n\n# Sample size in each group\nn = 5\n\n# Outcomes\ndeaths = [0, 1, 3, 5]\n\n## Write your answer here", "Fitting Models\nPyMC provides three objects that fit models:\n\n\nMCMC, which coordinates Markov chain Monte Carlo algorithms. The actual work of updating stochastic variables conditional on the rest of the model is done by StepMethod objects.\n\n\nMAP, which computes maximum a posteriori estimates.\n\n\nNormApprox, which approximates the joint distribution of all stochastic variables in a model as normal, using local information at the maximum a posteriori estimate.\n\n\nAll three objects are subclasses of Model, which is PyMC's base class\nfor fitting methods. MCMC and NormApprox, both of which can produce\nsamples from the posterior, are subclasses of Sampler, which is PyMC's\nbase class for Monte Carlo fitting methods. Sampler provides a generic\nsampling loop method and database support for storing large sets of\njoint samples. 
These base classes implement some basic methods that are\ninherited by the three implemented fitting methods, so they are\ndocumented at the end of this section.\nMaximum a posteriori estimates\nThe MAP class sets all stochastic variables to their maximum a\nposteriori values using functions in SciPy's optimize package; hence,\nSciPy must be installed to use it. MAP can only handle variables whose\ndtype is float, so it will not work on, for example, the disaster model. \nWe can fit the bioassay example using MAP:", "from pymc.examples import gelman_bioassay\nM = pm.MAP(gelman_bioassay)\nM.fit(method='fmin_powell')", "This call will cause $M$ to fit the model using Powell's method, which does not require derivatives. The variables in gelman_bioassay have now been set to their maximum a posteriori values:", "M.alpha.value\n\nM.beta.value", "We can also calculate model selection statistics, AIC and BIC:", "M.AIC\n\nM.BIC", "MAP has two useful methods:\nfit(method ='fmin', iterlim=1000, tol=.0001)\n: The optimization method may be fmin, fmin_l_bfgs_b, fmin_ncg,\n fmin_cg, or fmin_powell. See the documentation of SciPy's\n optimize package for the details of these methods. The tol and\n iterlim parameters are passed to the optimization function under\n the appropriate names.\nrevert_to_max()\n: If the values of the constituent stochastic variables change after\n fitting, this function will reset them to their maximum a\n posteriori values.\nThe useful attributes of MAP are:\nlogp\n: The joint log-probability of the model.\nlogp_at_max\n: The maximum joint log-probability of the model.\nAIC\n: Akaike's information criterion for this model.\nBIC\n: The Bayesian information criterion for this model.\nOne use of the MAP class is finding reasonable initial states for MCMC\nchains. 
Note that multiple Model subclasses can handle the same\ncollection of nodes.\nNormal approximations\nThe NormApprox class extends the MAP class by approximating the\nposterior covariance of the model using the Fisher information matrix,\nor the Hessian of the joint log probability at the maximum.", "N = pm.NormApprox(gelman_bioassay)\nN.fit()", "The approximate joint posterior mean and covariance of the variables are\navailable via the attributes mu and C, which give the approximate posterior mean and variance/covariance, respectively:", "N.mu[N.alpha]\n\nN.C[N.alpha, N.beta]", "As with MAP, the variables have been set to their maximum a\nposteriori values (which are also in the mu attribute) and the AIC\nand BIC of the model are available.\nWe can also generate samples from the posterior:", "N.sample(100)\nN.trace('alpha')[:10]", "In addition to the methods and attributes of MAP, NormApprox\nprovides the following methods:\nsample(iter)\n: Samples from the approximate posterior distribution are drawn and stored.\nisample(iter)\n: An 'interactive' version of sample(): sampling can be paused, returning control to the user.\ndraw\n: Sets all variables to random values drawn from the approximate posterior.\nMCMC\nThe MCMC class implements PyMC's core business: producing Markov chain Monte Carlo samples for\na model's variables. Its primary job is to create and coordinate a collection of 'step\nmethods', each of which is responsible for updating one or more\nvariables. \nMCMC provides the following useful methods:\nsample(iter, burn, thin, tune_interval, tune_throughout, save_interval, ...)\n: Runs the MCMC algorithm and produces the traces. The iter argument\ncontrols the total number of MCMC iterations. No tallying will be\ndone during the first burn iterations; these samples will be\nforgotten. After this burn-in period, tallying will be done each\nthin iterations. Tuning will be done each tune_interval\niterations. 
If tune_throughout=False, no more tuning will be done\nafter the burnin period. The model state will be saved every\nsave_interval iterations, if given.\nisample(iter, burn, thin, tune_interval, tune_throughout, save_interval, ...)\n: An interactive version of sample. The sampling loop may be paused\nat any time, returning control to the user.\nuse_step_method(method, *args, **kwargs):\n: Creates an instance of step method class method to handle some\nstochastic variables. The extra arguments are passed to the init\nmethod of method. Assigning a step method to a variable manually\nwill prevent the MCMC instance from automatically assigning one.\nHowever, you may handle a variable with multiple step methods.\nstats():\n: Generate summary statistics for all nodes in the model.\nThe sampler's MCMC algorithms can be accessed via the step_method_dict\nattribute. M.step_method_dict[x] returns a list of the step methods\nM will use to handle the stochastic variable x.\nAfter sampling, the information tallied by M can be queried via\nM.db.trace_names. In addition to the values of variables, tuning\ninformation for adaptive step methods is generally tallied. These\n'traces' can be plotted to verify that tuning has in fact terminated. After sampling ends you can retrieve the trace as\nM.trace['var_name'].\nWe can instantiate an MCMC sampler for the bioassay example as follows:", "M = pm.MCMC(gelman_bioassay, db='sqlite')", "Step methods\nStep method objects handle individual stochastic variables, or sometimes groups \nof them. They are responsible for making the variables they handle take single \nMCMC steps conditional on the rest of the model. Each subclass of \nStepMethod implements a method called step(), which is called by \nMCMC. 
Step methods with adaptive tuning parameters can optionally implement \na method called tune(), which causes them to assess performance (based on \nthe acceptance rates of proposed values for the variable) so far and adjust.\nThe major subclasses of StepMethod are Metropolis and\nAdaptiveMetropolis. PyMC provides several flavors of the \nbasic Metropolis steps.\nMetropolis\nMetropolis and subclasses implement Metropolis-Hastings steps. To tell an \nMCMC object $M$ to handle a variable $x$ with a Metropolis step \nmethod, you might do the following:", "M.use_step_method(pm.Metropolis, M.alpha, proposal_sd=1., proposal_distribution='Normal')", "Metropolis itself handles float-valued variables, and subclasses\nDiscreteMetropolis and BinaryMetropolis handle integer- and\nboolean-valued variables, respectively.\nMetropolis' __init__ method takes the following arguments:\nstochastic\n: The variable to handle.\nproposal_sd\n: A float or array of floats. This sets the proposal standard deviation if the proposal distribution is normal.\nscale\n: A float, defaulting to 1. If the scale argument is provided but not proposal_sd, proposal_sd is computed as follows:\npython\nif all(self.stochastic.value != 0.):\n self.proposal_sd = (ones(shape(self.stochastic.value)) * \n abs(self.stochastic.value) * scale)\nelse:\n self.proposal_sd = ones(shape(self.stochastic.value)) * scale\nproposal_distribution\n: A string indicating which distribution should be used for proposals.\nCurrent options are 'Normal' and 'Prior'. If\nproposal_distribution=None, the proposal distribution is chosen\nautomatically. It is set to 'Prior' if the variable has no\nchildren and has a random method, and to 'Normal' otherwise.\nAlthough the proposal_sd attribute is fixed at creation, Metropolis\nstep methods adjust their initial proposal standard deviations using an\nattribute called adaptive_scale_factor. 
During tuning, the\nacceptance ratio of the step method is examined, and this scale factor\nis updated accordingly. If the proposal distribution is normal,\nproposals will have standard deviation\nself.proposal_sd * self.adaptive_scale_factor.\nBy default, tuning will continue throughout the sampling loop, even\nafter the burnin period is over. This can be changed via the\ntune_throughout argument to MCMC.sample. If an adaptive step\nmethod's tally flag is set (the default for Metropolis), a trace of\nits tuning parameters will be kept. If you allow tuning to continue\nthroughout the sampling loop, it is important to verify that the\n'Diminishing Tuning' condition of Roberts and Rosenthal (2007) is satisfied: the\namount of tuning should decrease to zero, or tuning should become very\ninfrequent.\nIf a Metropolis step method handles an array-valued variable, it\nproposes all elements independently but simultaneously. That is, it\ndecides whether to accept or reject all elements together but it does\nnot attempt to take the posterior correlation between elements into\naccount. The AdaptiveMetropolis class (see below), on the other hand,\ndoes make correlated proposals.\nAdaptiveMetropolis\nThe AdaptiveMetropolis (AM) step method works like a regular\nMetropolis step method, with the exception that its variables are\nblock-updated using a multivariate jump distribution whose covariance is\ntuned during sampling. Although the chain is non-Markovian, it has\ncorrect ergodic properties (Haario et al., 2001).\nAdaptiveMetropolis works on vector-valued, continuous stochastics:", "from pymc.examples import disaster_model_linear\nM = pm.MCMC(disaster_model_linear)\nM.use_step_method(pm.AdaptiveMetropolis, M.params_of_mean)", "AdaptiveMetropolis's init method takes the following arguments:\nstochastics\n: The stochastic variables to handle. These will be updated jointly.\ncov (optional)\n: An initial covariance matrix. 
Defaults to the identity matrix,\nadjusted according to the scales argument.\ndelay (optional)\n: The number of iterations to delay before computing the empirical\ncovariance matrix.\nscales (optional):\n: The initial covariance matrix will be diagonal, and its diagonal\nelements will be set to scales times the stochastics' values,\nsquared.\ninterval (optional):\n: The number of iterations between updates of the covariance matrix.\nDefaults to 1000.\ngreedy (optional):\n: If True, only accepted jumps will be counted toward the delay\nbefore the covariance is first computed. Defaults to True.\nshrink_if_necessary (optional):\n: Whether the proposal covariance should be shrunk if the acceptance\nrate becomes extremely small.\nIn this algorithm, jumps are proposed from a multivariate normal\ndistribution with covariance matrix $\\Sigma$. The algorithm first\niterates until delay samples have been drawn (if greedy is true,\nuntil delay jumps have been accepted). At this point, $\\Sigma$ is\ngiven the value of the empirical covariance of the trace so far and\nsampling resumes. The covariance is then updated each interval\niterations throughout the entire sampling run. It is this constant\nadaptation of the proposal distribution that makes the chain\nnon-Markovian.\nDiscreteMetropolis\nThis class is just like Metropolis, but specialized to handle\nStochastic instances with dtype int. The jump proposal distribution\ncan either be 'Normal', 'Prior' or 'Poisson' (the default). In the\nnormal case, the proposed value is drawn from a normal distribution\ncentered at the current value and then rounded to the nearest integer.\nBinaryMetropolis\nThis class is specialized to handle Stochastic instances with dtype\nbool.\nFor array-valued variables, BinaryMetropolis can be set to propose\nfrom the prior by passing in dist=\"Prior\". Otherwise, the argument\np_jump of the init method specifies how probable a change is. 
Like\nMetropolis' attribute proposal_sd, p_jump is tuned throughout the\nsampling loop via adaptive_scale_factor.\nAutomatic assignment of step methods\nEvery step method subclass (including user-defined ones) that does not\nrequire any __init__ arguments other than the stochastic variable to\nbe handled adds itself to a list called StepMethodRegistry in the PyMC\nnamespace. If a stochastic variable in an MCMC object has not been\nexplicitly assigned a step method, each class in StepMethodRegistry is\nallowed to examine the variable.\nTo do so, each step method implements a class method called\ncompetence(stochastic), whose only argument is a single stochastic\nvariable. These methods return values from 0 to 3; 0 meaning the step\nmethod cannot safely handle the variable and 3 meaning it will most\nlikely perform well for variables like this. The MCMC object assigns\nthe step method that returns the highest competence value to each of its\nstochastic variables.\nRunning MCMC Samplers\nWe can carry out Markov chain Monte Carlo sampling by calling the sample method (or in the terminal, isample) with the appropriate arguments.", "M = pm.MCMC(gelman_bioassay)\nM.sample(10000, burn=5000)\n\n%matplotlib inline\npm.Matplot.plot(M.LD50)" ]
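The accept/reject logic that a Metropolis step method performs on each call to step() can be sketched outside PyMC. The toy sampler below is plain NumPy (not PyMC's implementation): it proposes from a normal centred at the current value, with `proposal_sd` standing in for `proposal_sd * adaptive_scale_factor` (no tuning), and accepts with probability min(1, exp(logp_proposed − logp_current)). The target here is an arbitrary standard normal chosen for illustration.

```python
import numpy as np

def metropolis(logp, x0, proposal_sd=1.0, iters=20000, seed=42):
    """Minimal random-walk Metropolis sampler (toy illustration only)."""
    rng = np.random.default_rng(seed)
    x, lp = x0, logp(x0)
    trace = np.empty(iters)
    for i in range(iters):
        x_prop = x + rng.normal(scale=proposal_sd)  # symmetric normal proposal
        lp_prop = logp(x_prop)
        # accept with probability min(1, exp(lp_prop - lp))
        if np.log(rng.random()) < lp_prop - lp:
            x, lp = x_prop, lp_prop
        trace[i] = x
    return trace

# Target: standard normal, whose log-density is -x^2/2 up to a constant
trace = metropolis(lambda x: -0.5 * x ** 2, x0=0.0)
print(trace[5000:].mean(), trace[5000:].std())
```

After discarding the first 5000 draws as burn-in, the sample mean and standard deviation should be close to 0 and 1 respectively.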
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
y2ee201/Deep-Learning-Nanodegree
my-experiments/Linear Regression.ipynb
mit
[ "Linear regression using Batch Gradient Descent\nBuilding linear regression from ground up", "import numpy as np\nimport matplotlib.pyplot as plt\nimport pandas as pd", "Our Linear Regression Model\nThis python class contains our Linear Regression model/algo. We use the OOP's concepts to impart the model's behaviour to a python class. This class contains methods to train, predict and plot the cost curve.", "class linear_regression():\n \n def __init__(self):\n self.weights = None\n self.learning_rate = None\n self.epochs = None\n self.trainx = None\n self.trainy = None\n self.costcurve = []\n print('Status: Model Initialized')\n \n def train(self, trainx, trainy, learning_rate, epochs):\n # np.random.seed(10)\n self.trainx = trainx\n self.trainy = trainy\n self.learning_rate = learning_rate\n self.epochs = epochs\n \n # self.weights = np.random.randn(self.trainx.shape[1]+1)\n self.weights = np.random.uniform(low=0.0, high=inputx.shape[1]**0.5, size=inputx.shape[1]+1)\n self.trainx = np.append(self.trainx,np.ones((self.trainx.shape[0],1)), axis=1)\n \n for epoch in range(epochs):\n output = np.dot(self.trainx, self.weights)\n output = np.reshape(output, (output.shape[0],1))\n error = np.subtract(self.trainy, output)\n total_error = np.sum(error)\n cost = np.mean(error**2)\n self.costcurve.append([epoch+1, cost])\n gradients = (self.learning_rate / self.trainx.shape[0]) * np.dot(error.T, self.trainx)\n gradients = np.reshape(gradients, (gradients.T.shape[0],))\n self.weights += gradients\n print('step:{0} \\n cost:{1}'.format(epoch+1, cost))\n \n # return self.weights\n \n def plotCostCurve(self):\n costcurvearray = np.array(self.costcurve)\n plt.plot(costcurvearray[:,0],costcurvearray[:,1])\n plt.xlabel('Epochs')\n plt.ylabel('Cost')\n plt.show()\n \n def predict(self, validatex):\n validatex_new = np.append(validatex,np.ones((validatex.shape[0],1)), axis=1)\n predict = np.dot(validatex_new, self.weights)\n return np.reshape(predict, (predict.shape[0],1))\n \n ", 
"Ingesting Sample Data\nWe are using the MPG dataset from UCI Datasets to test our implementation. http://archive.ics.uci.edu/ml/datasets/Auto+MPG", "mpg_data = np.genfromtxt('mpg.txt', delimiter=',', dtype='float')\nprint(mpg_data.shape)\n\nmpg_data = mpg_data[~np.isnan(mpg_data).any(axis=1)]\n\ninputx = mpg_data[:,1:8]\n\nfor i in range(inputx.shape[1]):\n inputx[:,i] = (inputx[:,i]-np.min(inputx[:,i]))/(np.max(inputx[:,i])-np.min(inputx[:,i]))\n\ninputy = np.reshape(mpg_data[:,0],(mpg_data.shape[0],1))\n\nmodel = linear_regression()\nmodel.train(inputx, inputy, 0.01, 1000)\n\nprint(model.weights)", "Visualizing the Cost per Step\nPlotting a cost curve explains to us what is happening in our gradient descent. Whether our model converges or not. It also helps to understand the point after which we need not expend resources to train.", "model.plotCostCurve()", "Predicting Values\nHere we are using our input dataset to predict the Y with our final model. In actual scenarios there is proper process to do this.", "model.predict(inputx)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
empirical-org/WikipediaSentences
notebooks/BERT-2 Experiments One Model.ipynb
agpl-3.0
[ "Bert Experiments: One Model\nIn this notebook, we continue our BERT experiments. We try to finetune one BERT model on several of our data sets. This makes it easier to deploy our solution in production. \nAs a first test, we'll just train a BERT model that takes as input a response from any of several data sets, and outputs probabilities for all labels in all data sets. This is slightly suboptimal (after all, we don't need probabilities for labels that are not relevant to a specific prompt), but as long as we're not working with thousands of different labels, I don't think this is very problematic.\nThe setup and preprocessing procedure is very similar to that in the first \"Bert experiments\" notebook. I'll highlight the areas where it is different.", "import torch\n\nfrom pytorch_transformers.tokenization_bert import BertTokenizer\nfrom pytorch_transformers.modeling_bert import BertForSequenceClassification\n\nBERT_MODEL = 'bert-large-uncased'\nBATCH_SIZE = 16 if \"base\" in BERT_MODEL else 2\nGRADIENT_ACCUMULATION_STEPS = 1 if \"base\" in BERT_MODEL else 8\n\n\ntokenizer = BertTokenizer.from_pretrained(BERT_MODEL)", "Data\nAs we build one \"big\" model, we combine the data from all of our input files. 
We keep the test files separate, because we want to be able to evaluate on every prompt separately.\nIn addition, we also remember which labels are relevant for every prompt, because in the prediction phase, we will only look at the probabilities of the relevant labels.", "import ndjson\nimport glob\n\nfile_prefixes = [\"eatingmeat_but_large\", \"eatingmeat_because_large\",\n \"junkfood_but\", \"junkfood_because\"]\n\ntrain_data = []\ndev_data = []\ntest_data = {}\nlabel2idx = {}\ntarget_names = {}\n\nfor prefix in file_prefixes:\n \n train_files = glob.glob(f\"../data/interim/{prefix}_train_withprompt*.ndjson\")\n dev_file = f\"../data/interim/{prefix}_dev_withprompt.ndjson\"\n test_file = f\"../data/interim/{prefix}_test_withprompt.ndjson\"\n\n target_names[prefix] = []\n for train_file in train_files:\n with open(train_file) as i:\n new_train_data = ndjson.load(i)\n for item in new_train_data:\n if item[\"label\"] not in label2idx:\n target_names[prefix].append(item[\"label\"])\n label2idx[item[\"label\"]] = len(label2idx)\n train_data += new_train_data\n \n with open(dev_file) as i:\n dev_data += ndjson.load(i)\n\n with open(test_file) as i:\n test_data[prefix] = ndjson.load(i)\n\nprint(label2idx)\nprint(target_names)", "Model", "model = BertForSequenceClassification.from_pretrained(BERT_MODEL, num_labels=len(label2idx))\n\ndevice = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\nmodel.to(device)\nmodel.train()", "Preprocessing", "import logging\nimport numpy as np\n\nlogging.basicConfig(format = '%(asctime)s - %(levelname)s - %(name)s - %(message)s',\n datefmt = '%m/%d/%Y %H:%M:%S',\n level = logging.INFO)\nlogger = logging.getLogger(__name__)\n\nMAX_SEQ_LENGTH=100\n\nclass InputFeatures(object):\n \"\"\"A single set of features of data.\"\"\"\n\n def __init__(self, input_ids, input_mask, segment_ids, label_id):\n self.input_ids = input_ids\n self.input_mask = input_mask\n self.segment_ids = segment_ids\n self.label_id = label_id\n \n \ndef 
convert_examples_to_features(examples, label2idx, max_seq_length, tokenizer, verbose=0):\n \"\"\"Loads a data file into a list of `InputBatch`s.\"\"\"\n \n features = []\n for (ex_index, ex) in enumerate(examples):\n \n input_ids = tokenizer.encode(\"[CLS] \" + ex[\"text\"] + \" [SEP]\")\n # Truncate sequences longer than max_seq_length, so the length\n # asserts below hold (a crude fix for over-long sentences).\n input_ids = input_ids[:max_seq_length]\n segment_ids = [0] * len(input_ids)\n \n # The mask has 1 for real tokens and 0 for padding tokens. Only real\n # tokens are attended to.\n input_mask = [1] * len(input_ids)\n\n # Zero-pad up to the sequence length.\n padding = [0] * (max_seq_length - len(input_ids))\n input_ids += padding\n input_mask += padding\n segment_ids += padding\n\n assert len(input_ids) == max_seq_length\n assert len(input_mask) == max_seq_length\n assert len(segment_ids) == max_seq_length\n\n label_id = label2idx[ex[\"label\"]]\n if verbose and ex_index == 0:\n logger.info(\"*** Example ***\")\n logger.info(\"text: %s\" % ex[\"text\"])\n logger.info(\"input_ids: %s\" % \" \".join([str(x) for x in input_ids]))\n logger.info(\"input_mask: %s\" % \" \".join([str(x) for x in input_mask]))\n logger.info(\"segment_ids: %s\" % \" \".join([str(x) for x in segment_ids]))\n logger.info(\"label:\" + str(ex[\"label\"]) + \" id: \" + str(label_id))\n\n features.append(\n InputFeatures(input_ids=input_ids,\n input_mask=input_mask,\n segment_ids=segment_ids,\n label_id=label_id))\n return features\n\ntrain_features = convert_examples_to_features(train_data, label2idx, MAX_SEQ_LENGTH, tokenizer, verbose=0)\ndev_features = convert_examples_to_features(dev_data, label2idx, MAX_SEQ_LENGTH, tokenizer)\n\ntest_features = {}\nfor prefix in test_data:\n test_features[prefix] = convert_examples_to_features(test_data[prefix], label2idx, MAX_SEQ_LENGTH, tokenizer, verbose=1)\n\nimport torch\nfrom torch.utils.data import TensorDataset, DataLoader, RandomSampler\n\ndef get_data_loader(features, max_seq_length, batch_size): \n\n all_input_ids = torch.tensor([f.input_ids 
for f in features], dtype=torch.long)\n all_input_mask = torch.tensor([f.input_mask for f in features], dtype=torch.long)\n all_segment_ids = torch.tensor([f.segment_ids for f in features], dtype=torch.long)\n all_label_ids = torch.tensor([f.label_id for f in features], dtype=torch.long)\n data = TensorDataset(all_input_ids, all_input_mask, all_segment_ids, all_label_ids)\n sampler = RandomSampler(data)\n dataloader = DataLoader(data, sampler=sampler, batch_size=batch_size)\n return dataloader\n\ntrain_dataloader = get_data_loader(train_features, MAX_SEQ_LENGTH, BATCH_SIZE)\ndev_dataloader = get_data_loader(dev_features, MAX_SEQ_LENGTH, BATCH_SIZE)\ntest_dataloaders = {} \nfor prefix in test_features:\n test_dataloaders[prefix] = get_data_loader(test_features[prefix], MAX_SEQ_LENGTH, BATCH_SIZE)", "Evaluation", "def evaluate(model, dataloader):\n\n eval_loss = 0\n nb_eval_steps = 0\n predicted_labels, correct_labels = [], []\n\n for step, batch in enumerate(tqdm(dataloader, desc=\"Evaluation iteration\")):\n batch = tuple(t.to(device) for t in batch)\n input_ids, input_mask, segment_ids, label_ids = batch\n\n with torch.no_grad():\n # pytorch_transformers changed the positional argument order, so\n # pass everything after input_ids by keyword.\n tmp_eval_loss, logits = model(input_ids, attention_mask=input_mask,\n token_type_ids=segment_ids, labels=label_ids)[:2]\n\n outputs = np.argmax(logits.to('cpu'), axis=1)\n label_ids = label_ids.to('cpu').numpy()\n \n predicted_labels += list(outputs)\n correct_labels += list(label_ids)\n \n eval_loss += tmp_eval_loss.mean().item()\n nb_eval_steps += 1\n\n eval_loss = eval_loss / nb_eval_steps\n \n correct_labels = np.array(correct_labels)\n predicted_labels = np.array(predicted_labels)\n \n return eval_loss, correct_labels, predicted_labels", "Training", "from pytorch_transformers.optimization import AdamW, WarmupLinearSchedule\n\nNUM_TRAIN_EPOCHS = 100\nLEARNING_RATE = 1e-5\nWARMUP_PROPORTION = 0.1\n\ndef warmup_linear(x, warmup=0.002):\n if x < warmup:\n return x/warmup\n return 1.0 - x\n\nnum_train_steps = int(len(train_data) / BATCH_SIZE / GRADIENT_ACCUMULATION_STEPS * 
NUM_TRAIN_EPOCHS)\n\nparam_optimizer = list(model.named_parameters())\nno_decay = ['bias', 'LayerNorm.bias', 'LayerNorm.weight']\noptimizer_grouped_parameters = [\n {'params': [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)], 'weight_decay': 0.01},\n {'params': [p for n, p in param_optimizer if any(nd in n for nd in no_decay)], 'weight_decay': 0.0}\n ]\n\noptimizer = AdamW(optimizer_grouped_parameters, lr=LEARNING_RATE, correct_bias=False)\nscheduler = WarmupLinearSchedule(optimizer, warmup_steps=int(WARMUP_PROPORTION * num_train_steps), t_total=num_train_steps)\n\nimport os\n\nOUTPUT_DIR = \"/tmp/\"\nMODEL_FILE_NAME = \"pytorch_model.bin\"\noutput_model_file = os.path.join(OUTPUT_DIR, MODEL_FILE_NAME)\n\nfrom tqdm import trange\nfrom tqdm import tqdm_notebook as tqdm\nfrom sklearn.metrics import classification_report, precision_recall_fscore_support\n\n\nPATIENCE = 5\n\nglobal_step = 0\nmodel.train()\nloss_history = []\nbest_epoch = 0\nfor epoch in trange(int(NUM_TRAIN_EPOCHS), desc=\"Epoch\"):\n tr_loss = 0\n nb_tr_examples, nb_tr_steps = 0, 0\n for step, batch in enumerate(tqdm(train_dataloader, desc=\"Training iteration\")):\n batch = tuple(t.to(device) for t in batch)\n input_ids, input_mask, segment_ids, label_ids = batch\n # pytorch_transformers changed the positional argument order, so\n # pass everything after input_ids by keyword.\n outputs = model(input_ids, attention_mask=input_mask, token_type_ids=segment_ids, labels=label_ids)\n loss = outputs[0]\n \n if GRADIENT_ACCUMULATION_STEPS > 1:\n loss = loss / GRADIENT_ACCUMULATION_STEPS\n\n loss.backward()\n\n tr_loss += loss.item()\n nb_tr_examples += input_ids.size(0)\n nb_tr_steps += 1\n if (step + 1) % GRADIENT_ACCUMULATION_STEPS == 0:\n optimizer.step()\n scheduler.step() # the scheduler handles warmup and linear decay\n optimizer.zero_grad()\n global_step += 1\n\n dev_loss, _, _ = evaluate(model, dev_dataloader)\n \n print(\"Loss history:\", loss_history)\n print(\"Dev loss:\", dev_loss)\n \n if len(loss_history) == 0 or dev_loss < min(loss_history):\n 
model_to_save = model.module if hasattr(model, 'module') else model\n torch.save(model_to_save.state_dict(), output_model_file)\n best_epoch = epoch\n \n if epoch-best_epoch >= PATIENCE: \n print(\"No improvement on development set. Finish training.\")\n break\n \n loss_history.append(dev_loss)", "Results", "from tqdm import tqdm_notebook as tqdm\nfrom sklearn.metrics import classification_report, precision_recall_fscore_support\n\ndevice=\"cpu\"\nprint(\"Loading model from\", output_model_file)\n\nmodel_state_dict = torch.load(output_model_file, map_location=lambda storage, loc: storage)\nmodel = BertForSequenceClassification.from_pretrained(BERT_MODEL, state_dict=model_state_dict, num_labels=len(label2idx))\nmodel.to(device)\n\nmodel.eval()\n\n#_, train_correct, train_predicted = evaluate(model, train_dataloader)\n#_, dev_correct, dev_predicted = evaluate(model, dev_dataloader)\n\n#print(\"Training performance:\", precision_recall_fscore_support(train_correct, train_predicted, average=\"micro\"))\n#print(\"Development performance:\", precision_recall_fscore_support(dev_correct, dev_predicted, average=\"micro\"))\n\nfor prefix in test_dataloaders:\n print(prefix)\n _, test_correct, test_predicted = evaluate(model, test_dataloaders[prefix])\n\n print(\"Test performance:\", precision_recall_fscore_support(test_correct, test_predicted, average=\"micro\"))\n\n print(classification_report(test_correct, test_predicted, target_names=target_names[prefix]))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
jskksj/cv2stuff
cv2stuff/notebooks/.ipynb_checkpoints/ConfigParser-checkpoint.ipynb
isc
[ "import configparser\nimport yaml\nimport io\n\nconfig = configparser.ConfigParser()\n\nconfig['DEFAULT'] = {'ServerAliveInterval': '45', 'Compression': 'yes','CompressionLevel': '9'}\n\nconfig.default_section\n\nconfig['bitbucket.org'] = {}\nconfig['bitbucket.org']['User'] = 'hg'\nconfig['topsecret.server.com'] = {}\ntopsecret = config['topsecret.server.com']\ntopsecret['Port'] = '50022' # mutates the parser\ntopsecret['ForwardX11'] = 'no' # same here\nconfig['DEFAULT']['ForwardX11'] = 'yes'\n\nwith open('example.ini', 'w') as configfile:\n config.write(configfile)", "Now that we have created and saved a configuration file, let’s read it back and explore the data it holds.", "config = configparser.ConfigParser()\nconfig.sections()\n\nconfig.read('example.ini')\n\nconfig.sections()\n\n'bitbucket.org' in config\n\n'bytebong.com' in config\n\nconfig['bitbucket.org']['User']\n\nconfig['DEFAULT']['Compression']\n\ntopsecret = config['topsecret.server.com']\n\ntopsecret['ForwardX11']\n\ntopsecret['Port']\n\nfor key in config['bitbucket.org']:\n print(key)\n\nconfig['bitbucket.org']['ForwardX11']\n\ntype(topsecret['Port'])\n\ntype(int(topsecret['Port']))\n\ntype(topsecret.getint('Port'))\n\ntype(topsecret.getfloat('Port'))\n\nint(topsecret['Port']) - 22.0\n\nint(topsecret['Port']) - 22\n\ntry:\n topsecret.getint('ForwardX11')\nexcept ValueError:\n print(True)\n\ntopsecret.getboolean('ForwardX11')\n\nconfig['bitbucket.org'].getboolean('ForwardX11')\n\nconfig.getboolean('bitbucket.org', 'Compression')\n\ntopsecret.get('Port')\n\ntopsecret.get('CompressionLevel')\n\ntopsecret.get('Cipher')\n\ntopsecret.get('Cipher', '3des-cbc')", "Please note that default values have precedence over fallback values. For instance, in our example the 'CompressionLevel' key was specified only in the 'DEFAULT' section. 
If we try to get it from the section 'topsecret.server.com', we will always get the default, even if we specify a fallback:", "topsecret.get('CompressionLevel', '3')", "The same fallback argument can be used with the getint(), getfloat() and getboolean() methods, for example:", "'BatchMode' in topsecret\n\ntopsecret.getboolean('BatchMode', fallback=True)\n\nconfig['DEFAULT']['BatchMode'] = 'no'\ntopsecret.getboolean('BatchMode', fallback=True)\n\nimport yaml\n\nwith open(\"config.yml\", 'r') as ymlfile:\n cfg = yaml.safe_load(ymlfile)\n\nfor section in cfg:\n print(section)\nprint(cfg['mysql'])\nprint(cfg['other'])\n\n# Load the configuration file\nwith open(\"config.yml\") as f:\n sample_config = f.read()\nconfig = configparser.RawConfigParser(allow_no_value=True)\nconfig.read_string(sample_config) # readfp(io.BytesIO(...)) fails on a str in Python 3\n\n# List all contents\nprint(\"List all contents\")\nfor section in config.sections():\n print(\"Section: %s\" % section)\n for options in config.options(section):\n print(\"x %s:::%s:::%s\" % (options,\n config.get(section, options),\n str(type(options))))\n\n# Print some contents\nprint(\"\\nPrint some contents\")\nprint(config.get('other', 'use_anonymous')) # Just get the value\nprint(config.getboolean('other', 'use_anonymous')) # You know the datatype?" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
TheMitchWorksPro/DataTech_Playground
PY_Basics/TMWP_DictionaryBasics.ipynb
mit
[ "<div align=\"right\">Env: Python [conda env:PY27_Test]</div>\n<div align=\"right\">Env: Python [conda env:PY36] </div>\n\nWorking With Dictionaries - The Basics\nThese exercises come from multiple sources and show the basics of creating, modifying, and sorting dictionaries.\nAll code was created in Python 2.7 and cross-tested in Python 3.6.\nQuick Guide to the basics:\n- Create Dictionary: dictionary = { key1:value, key2:value2 }\n - Example: myDict1 = { 'one':'first thing', 'two':'second thing' }\n - Example: myDict2 = { 1:43, 2:600, 3:-1000.4 }\n - Example: myDict3 = { 1:\"text\", 2:345, 3:'another value' }\n- Add to a dictionary: myDict1['newVal'] = 'something stupid'\n - Resulting Dictionary: myDict1 = { 'one':'first thing', 'two':'second thing', 'newVal':'something stupid' }\n- Remove from dictionary: del myDict1['newVal']\n - now myDict1 is back the way it was: { 'one':'first thing', 'two':'second thing' }\nIn This Document:\n- Immediately Below: modified \"Learn Python The HardWay\" dictionary example (covers much of the basics)\n- .get() - safely retrieve dict value \n- sorting\n- When to use different sorted container options\n- using comprehensions w/ dictionaries\n- over-riding \"missing\" key \n- find nth element\n- nested dictionaries \n- more dictionary and nested dictionary resources\n- browse below for more topics that may not be in this list ...", "# Ex39 in Learn Python the Hard Way:\n# https://learnpythonthehardway.org/book/ex39.html\n# edited, expanded, and made PY3.x compliant by Mitch before inclusion in this notebook\n\n# create a mapping of state to abbreviation\nstates = {\n 'Oregon': 'OR',\n 'Florida': 'FL',\n 'California': 'CA',\n 'New York': 'NY',\n 'Michigan': 'MI'\n}\n\n# create a basic set of states and some cities in them\ncities = {\n 'CA': 'San Francisco',\n 'MI': 'Detroit',\n 'FL': 'Jacksonville'\n}\n\n# add some more cities\ncities['NY'] = 'New York'\ncities['OR'] = 'Portland'\n\n# print out some cities\nprint('-' * 
10)\nprint(\"Two cities:\")\nprint(\"NY State has: %s\" %cities['NY'])\nprint(\"OR State has: %s\" %cities['OR'])\n\n# print some states\nprint('-' * 10)\nprint(\"Abbreviations for Two States:\")\n# PY 2.7 syntax from original code: print \"Michigan's abbreviation is: \", states['Michigan']\nprint(\"Michigan's abbreviation is: %s\" %states['Michigan'])\nprint(\"Florida's abbreviation is: %s\" %states['Florida'])\n\n# do it by using the state then cities dict\nprint('-' * 10)\nprint(\"State Abbreviation extracted from cities dictionary:\")\nprint(\"Michigan has: %s\" %cities[states['Michigan']])\nprint(\"Florida has: %s\" %cities[states['Florida']])\n\n# print every state abbreviation\nprint('-' * 10)\nprint(\"Every State Abbreviation:\")\nfor state, abbrev in states.items():\n print(\"%s is abbreviated %s\" % (state, abbrev))\n\n# print every city in state\nprint('-' * 10)\nprint(\"Every city in Every State:\")\nfor abbrev, city in cities.items():\n print(\"%s has the city %s\" %(abbrev, city))\n\n# now do both at the same time\nprint('-' * 10)\nprint(\"Do Both at Once:\")\nfor state, abbrev in states.items():\n print(\"%s state is abbreviated %s and has city %s\" % (\n state, abbrev, cities[abbrev]))\n\nprint('-' * 10)", "<a id=\"get\" name=\"get\"></a>\n.get() examples", "# ex 39 Python the Hard Way modified code continued ...\n\n# safely get an abbreviation by state that might not be there\nstate = states.get('Texas')\nif not state:\n print(\"Sorry, no Texas.\")\n\n# get a city with a default value\ncity = cities.get('TX', 'Does Not Exist')\nprint(\"The city for the state 'TX' is: %s\" % city)\nprint(\"The abbreviation for 'Florida' is: %s\" % states.get('Florida'))\n\ncity2 = states.get('Hawaii')\nprint(\"city2: %s\" %city2)\nif city2 == None:\n city2 = 'Value == None'\nelif city2 == '':\n city2 = 'Value is empty \"\"'\nelif not city2:\n city2 = 'Value Missing (Passed not test)'\nelse:\n city2 = 'No Such Value'\n \nprint(\"The city for the state 'HI' is: %s\" % 
city2) \nprint(\"These commands used .get() to safely retrieve a value\")\n\n# more tests on above code from Learn Python the Hard Way:\nprint(not city2)\n\n# tests that produce an error - numerical indexing has no meaning in dictionaries:\n# print states[1][1]\n\n# what happens if all keys are not unique?\nfoods = {\n 'fruit': 'banana',\n 'fruit': 'apple',\n 'meat': 'beef'\n}\n\nfor foodType, indivFood in foods.items():\n print(\"%s includes %s\" % (foodType, indivFood))\n\n# answer: does not happen. 2nd attempt to use same key over-writes the first\n# remove elements from a dictionary\ndel foods['meat']\n# add an element to dictionary\nfoods['vegetables'] = 'carrot'\nfoods['meats'] = 'chicken'\n# change an element to a dictionary\nfoods['vegetables'] = 'corn'\n\nfoods\n\n# from MIT Big Data Class:\n# Associative Arrays ==> Called \"Dictionaries\" or \"Maps\" in Python\n# each value has a key that you can use to find it - { Key:Value }\n\nsuper_heroes = {'Spider Man' : 'Peter Parker',\n 'Super Man' : 'Clark Kent',\n 'Wonder Woman': 'Diana Prince',\n 'The Flash' : 'Barry Allen',\n 'Professor X' : 'Charles Xavier',\n 'Wolverine' : 'Logan'}\nprint(\"%s %s\" %(\"len(super_heroes):\", len(super_heroes)))\nprint(\"%s %s\" %(\"Secret Identity for The Flash:\", super_heroes['The Flash']))\ndel super_heroes['Wonder Woman']\nprint(\"%s %s\" %(\"len(super_heroes):\", len(super_heroes)))\nprint(super_heroes)\nsuper_heroes['Wolverine'] = 'John Logan'\nprint(\"%s %s\" %(\"Secret Identity for Wolverine:\", super_heroes.get(\"Wolverine\")))\nprint(\"Keys ... then Values (for super_heroes):\")\nprint(super_heroes.keys())\nprint(super_heroes.values())", "<a id=\"nested\" name=\"nested\"></a>\nnested dictionaries", "# list of dictionaries:\n\nFoodList = [foods, {'meats':'beef', 'fruit':'banana', 'vegetables':'broccoli'}]\nprint(FoodList[0])\nprint(FoodList[1])\n\n# dictionary of dictionaries (sometimes called \"nested dictionary\"):\n# note: this is an example only. 
In the real world, since FoodList is inclusive of foods, you probably would not include both\n# uniform structures (same number of levels across all elements) are also advisable if possible\n\nnestedDict = { 'heroes':super_heroes, 'foods': foods, 'complex_foods':FoodList } \nprint(nestedDict['heroes'])\nprint('-'*72)\nprint(nestedDict['complex_foods'])", "<a id=\"nest2\" name=\"nest2\"></a>\nWorking With Dictionaries and Nested Dictionaries - Helpful Code\nThis section has additional resources for working with dictionaries and nested dictionaries:\n\nFileDataObj code - the FileDataObject stores contents from a file in a nested dictionary and explores sorting and summarising the nested dict\ndictionary and nested dictionary functions in a PY module - merging dictionaries, adding to a nested dictionary, summarizing a nested dictionary, etc. (some of this code was created from the previous example)\n\n<a id=\"sorting\" name=\"sorting\"></a>\nSorting", "# Help on Collections Objects including Counter, OrderedDict, deque, etc:\n# https://docs.python.org/2/library/collections.html\n\n# regular dictionary does not necessarily preserve order (things added in randomly?)\n# original order of how you add elements is preserved in OrderedDict\n\nfrom collections import OrderedDict\n\nmyOrdDict = OrderedDict({'banana': 3, 'apple': 4, 'pear': 1, 'orange': 2})\nprint(myOrdDict)\nmyOrdDict['pork belly'] = 7\nprint(myOrdDict)\nmyOrdDict['sandwich'] = 5\nprint(myOrdDict)\nmyOrdDict['hero'] = 5\nprint(myOrdDict)\n\n# sorting the ordered dictionary ...\n\n# dictionary sorted by key\n# replacing original OrderedDict w/ results\nmyOrdDict = OrderedDict(sorted(myOrdDict.items(), key=lambda t: t[0]))\nprint(\"myOrdDict (sorted by key):\\n %s\" %myOrdDict)\n\n# dictionary sorted by value\nmyOrdDict2 = OrderedDict(sorted(myOrdDict.items(), key=lambda t: t[1]))\nprint(\"myOrdDict2 (sorted by value):\\n %s\" %myOrdDict2)\n\n# dictionary sorted by length of the key string\nmyOrdDict3 = 
OrderedDict(sorted(myOrdDict.items(), key=lambda t: len(t[0])))\nprint(\"myOrdDict3 (sorted by length of key):\\n %s\" %myOrdDict3)\n\n# collections.OrderedDict(sorted(dictionary.items(), reverse=True))\n# pd.Series(OrderedDict(sorted(browser.items(), key=lambda v: v[1])))\n\n# changing sort order to reverse key sort:\nmyOrdDict3 = OrderedDict(sorted(myOrdDict.items(), reverse=True))\nprint(\"myOrdDict3 (reverse key sort):\\n %s\" %myOrdDict3)\n\n# testing of above strategy ... usually works but encountered cases where it failed for no known reason\n# lambda approach may be more reliable:\n\nimport pandas as pd\n\n# value sort as pandas series:\nmyOrdDict4 = pd.Series(OrderedDict(sorted(myOrdDict.items(), key=lambda v: v[1])))\nprint(\"myOrdDict4 (value sort / alternate method):\\n %s\" %myOrdDict4)\n\n# value sort in reverse order:\nmyOrdDict5 = OrderedDict(sorted(myOrdDict.items(), key=lambda t: (-t[1],t[0])))\nprint(\"myOrdDict5 (sorted by value in reverse order):\\n %s\" %myOrdDict5)\n\n# Help on Collections Objects including Counter, OrderedDict, deque, etc:\n# https://docs.python.org/2/library/collections.html\n\n# sample using a list:\n# for word in ['red', 'blue', 'red', 'green', 'blue', 'blue']:\n# cnt[word] += 1\n\nfrom collections import Counter\n\ncnt = Counter()\nfor num in myOrdDict.values():\n cnt[num] +=1\n \nprint(cnt)\n\n\n# http://stackoverflow.com/questions/11089655/sorting-dictionary-python-3\n# another approach proposed in 2013 on Stack Overflow (but this may have been newer than OrderedDict at the time)\n\n''' Help topic recommends this approach:\npip install sortedcontainers\n\nThen:\n\nfrom sortedcontainers import SortedDict\nmyDic = SortedDict({10: 'b', 3:'a', 5:'c'})\nsorted_list = list(myDic.keys())\n\n'''\nprint(\"conda install sortedcontainers is available in Python 2.7 and 3.6 as of April 2017\")\n\n# some dictionaries to work with ...\n\nsuper_heroes # created earlier\n\nsuper_heroes['The Incredible Hulk'] = 'Bruce 
Banner'\n\nsuper_heroes # seems to alpha sort on keys anyway\n\n# quick case study exploring another means of reverse sorting (from Stack Overflow):\nreversed_tst = OrderedDict(list(super_heroes.items())[::-1])\nreversed_tst # note how in this instance, we don't get what we expected\n # this example might not be advisable ...\n\n# however ... if we combine methodologies:\nreversed_tst = OrderedDict(sorted(super_heroes.items(), key=lambda v: v[1])[::-1])\nreversed_tst # now the values are in reverse order ...\n\n# however ... if we combine methodologies:\nreversed_tst = OrderedDict(sorted(super_heroes.items(), key=lambda k: k)[::-1])\nreversed_tst # now the keys are in reverse order ...\n\nfruitDict = {3: 'banana', 4: 'pear', 1: 'apple', 2: 'orange'}\nfruitDict # dictionaries appear to alpha sort at least on output making it hard to spot the effects below\n\n# help on library:\n# http://www.grantjenks.com/docs/sortedcontainers/sorteddict.html\n\n# test sample code from Stack Overflow post:\nfrom sortedcontainers import SortedDict\nmyDic = SortedDict({10: 'b', 3:'a', 5:'c'})\nsorted_list = list(myDic.keys())\n\nprint(myDic)\nprint(sorted_list)\n\nfruitDict = SortedDict(fruitDict)\nsorted_list = list(fruitDict.keys())\n\nprint(fruitDict)\nprint(sorted_list)", "<a id=\"when\" name=\"when\"></a>As per the examples above ...\nSo when to do what?\n- OrderedDict: will store whatever you put into it in whatever order you first record the data (maintaing that order)\n- SortedDict: by default will alpha sort the data (over-riding original order) and maintain it for you in sorted order\n- Dict: Don't care about storing it in order? 
just sort and output the results without storing it in a new container\n**Final note:** only SortedDict allows indexing by numerical order on the data (by-passing keys) under both Python 2.7 and 3.6 (as shown in the next section)\n<a id=\"nth_elem\" name=\"nth_elem\"></a>\nFind the nth element in a dictionary", "# MIT Big Data included a demo of this type of index/access to a dictionary in a Python 2.7 notebook\n# the code is organized in a try-except block here so it won't halt the notebook if converted to Python 3.6\n\ndef print_1st_keyValue(someDict):\n try:\n print(someDict.values()[0]) # only works in Python 2.7\n except Exception as ee:\n print(str(type(ee)) + \": \" + str(ee)) # error from PY 3.6: \n # <class 'TypeError'>: 'dict_values' object does not support indexing\n finally:\n try:\n print(someDict.keys()[0]) # only works in Python 2.7\n except Exception as ee:\n print(str(type(ee)) + \": \" + str(ee)) # error from PY 3.6: \n # <class 'TypeError'>: 'dict_keys' object does not support indexing\n \nprint_1st_keyValue(super_heroes)\n\nprint_1st_keyValue(myOrdDict) # run same test on ordered dictionaries\n # failed in Python 3.6, worked in Python 2.7\n # reminder: syntax is orderedDict.values()[0], orderedDict.keys()[0]\n\nprint_1st_keyValue(fruitDict) # run same test on sorted dictionary - \n # this works in Python 3.6 and 2.7\n # reminder: syntax is sortedDict.values()[0], sortedDict.keys()[0]", "<a id=\"comprehensions\" name=\"comprehensions\"></a>\nDictionary Comprehensions", "# dictionary comprehension\n[ k for k in fruitDict if k > 2 ]\n\n[ fruitDict[k] for k in fruitDict if k > 1 ] \n\nnewDict = { k*2:'fruit - '+fruitDict[k] for k in fruitDict if k > 1 and len(fruitDict[k]) >=6} \nprint(newDict)\ntype(newDict)", "<a id=\"misskey\" name=\"misskey\"></a>\nKeyDict object", "class KeyDict(dict):\n def __missing__(self, key):\n #self[key] = key # uncomment if desired behavior is to add keys when they are not found (w/ key as value)\n #this version returns the 
key that was not found\n return key\n\nkdTst = KeyDict(super_heroes)\nprint(kdTst['The Incredible Hulk'])\nprint(kdTst['Ant Man']) # value not found so it returns itself as per __missing__ over-ride\n\nhelp(SortedDict)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
eds-uga/csci1360e-su17
lectures/L6.ipynb
mit
[ "Lecture 6: Conditionals and Exceptions\nCSCI 1360E: Foundations for Informatics and Analytics\nOverview and Objectives\nIn this lecture, we'll go over how to make \"decisions\" over the course of your code depending on the values certain variables take. We'll also introduce exceptions and how to handle them gracefully. By the end of the lecture, you should be able to\n\nBuild arbitrary conditional hierarchies to test a variety of possible circumstances\nConstruct elementary boolean logic statements\nCatch basic errors and present meaningful error messages in lieu of a Python crash\n\nPart 1: Conditionals\nUp until now, we've been somewhat hobbled in our coding prowess; we've lacked the tools to make different decisions depending on the values our variables take.\nFor example: how do you find the maximum value in a list of numbers?", "x = [51, 65, 56, 19, 11, 49, 81, 59, 45, 73]", "If we want to figure out the maximum value, we'll obviously need a loop to check each element of the list (which we know how to do), and a variable to store the maximum.", "max_val = 0\nfor element in x:\n \n # ... now what?\n\n pass", "We also know we can check relative values, like max_val &lt; element. If this evaluates to True, we know we've found a number in the list that's bigger than our current candidate for maximum value. But how do we execute code until this condition, and this condition alone?\nEnter if / elif / else statements! (otherwise known as \"conditionals\")\nWe can use the keyword if, followed by a statement that evaluates to either True or False, to determine whether or not to execute the code. 
For a straightforward example:", "x = 5\nif x < 5:\n print(\"How did this happen?!\") # Spoiler alert: this won't happen.\n\nif x == 5:\n print(\"Working as intended.\")", "In conjunction with if, we also have an else clause that we can use to execute whenever the if statement doesn't:", "x = 5\nif x < 5:\n print(\"How did this happen?!\") # Spoiler alert: this won't happen.\nelse:\n print(\"Correct.\")", "This is great! We can finally finish computing the maximum element of a list!", "x = [51, 65, 56, 19, 11, 49, 81, 59, 45, 73]\nmax_val = 0\nfor element in x:\n if max_val < element:\n max_val = element\n\nprint(\"The maximum element is: {}\".format(max_val))", "Let's pause here and walk through that code.\nx = [51, 65, 56, 19, 11, 49, 81, 59, 45, 73]\n - This code defines the list we want to look at.\nmax_val = 0\n - And this is a placeholder for the eventual maximum value.\nfor element in x:\n - A standard for loop header: we're iterating over the list x, one at a time storing its elements in the variable element.\nif max_val &lt; element:\n - The first line of the loop body is an if statement. This statement asks: is the value in our current max_val placeholder smaller than the element of the list stored in element?\nmax_val = element\n - If the answer to that if statement is True, then this line executes: it sets our placeholder equal to the current list element.\nLet's look at slightly more complicated but utterly classic example: assigning letter grades from numerical grades.", "student_grades = {\n 'Jen': 82,\n 'Shannon': 75,\n 'Natasha': 94,\n 'Benjamin': 48,\n}", "We know the 90-100 range is an \"A\", 80-89 is a \"B\", and so on. How would we build a conditional to assign letter grades?\nThe third and final component of conditionals is the elif statement (short for \"else if\").\nelif allows us to evaluate as many options as we'd like, all within the same conditional context (this is important). 
So for our grading example, it might look like this:", "letter = ''\nfor student, grade in student_grades.items():\n if grade >= 90:\n letter = \"A\"\n elif grade >= 80:\n letter = \"B\"\n elif grade >= 70:\n letter = \"C\"\n elif grade >= 60:\n letter = \"D\"\n else:\n letter = \"F\"\n \n print(student, letter)", "Ok, that's neat. But there's still one more edge case: what happens if we want to enforce multiple conditions simultaneously?\nTo illustrate, let's go back to our example of finding the maximum value in a list, and this time, let's try to find the second-largest value in the list. For simplicity, let's say we've already found the largest value.", "x = [51, 65, 56, 19, 11, 49, 81, 59, 45, 73]\nmax_val = 81 # We've already found it!\nsecond_largest = 0", "Here's the rub: we now have two constraints to enforce--the second largest value needs to be larger than pretty much everything in the list, but also needs to be smaller than the maximum value. Not something we can encode using if / elif / else.\nInstead, we'll use two more keywords integral to conditionals: and and or.\nYou've already seen and: this is used to join multiple boolean statements together in such a way that, if one of the statements is False, the entire statement is False.", "True and True and True and True and True and True and False", "One False ruins the whole thing.\nHowever, we haven't encountered or before. 
How do you think it works?\nHere are two examples:", "True or True or True or True or True or True or False\n\nFalse or False or False or False or False or False or True", "Figured it out?\nWhereas and needs every statement it joins to be True in order for the whole statement to be True, only one statement among those joined by or needs to be True for everything to be True.\nHow about this example?", "(True and False) or (True or False)", "(Order of operations works the same way!)\nGetting back to conditionals, then: we can use this boolean logic to enforce multiple constraints simultaneously.", "for element in x:\n if second_largest < element and element < max_val:\n second_largest = element\n\nprint(\"The second-largest element is: {}\".format(second_largest))", "Let's step through the code.\nfor element in x:\n if second_largest &lt; element and element &lt; max_val:\n second_largest = element\n\n\nThe first condition, second_largest &lt; element, is the same as before: if our current estimate of the second largest element is smaller than the latest element we're looking at, it's definitely a candidate for second-largest.\n\n\nThe second condition, element &lt; max_val, is what ensures we don't just pick the largest value again. 
This enforces the constraint that the current element we're looking at is also less than the maximum value.\n\n\nThe and keyword glues these two conditions together: it requires that they BOTH be True before the code inside the statement is allowed to execute.\n\n\nIt would be easy to replicate this with \"nested\" conditionals:", "second_largest = 0\nfor element in x:\n if second_largest < element:\n if element < max_val:\n second_largest = element\n\nprint(\"The second-largest element is: {}\".format(second_largest))", "...but your code starts getting a little unwieldy with so many indentations.\nYou can glue as many comparisons as you want together with and; the whole statement will only be True if every single condition evaluates to True. This is what and means: everything must be True.\nThe other side of this coin is or. Like and, you can use it to glue together multiple constraints. Unlike and, the whole statement will evaluate to True as long as at least ONE condition is True. This is far less stringent than and, where ALL conditions had to be True.", "numbers = [1, 2, 5, 6, 7, 9, 10]\nfor num in numbers:\n if num == 2 or num == 4 or num == 6 or num == 8 or num == 10:\n print(\"{} is an even number.\".format(num))", "In this contrived example, I've glued together a bunch of constraints. Obviously, these constraints are mutually exclusive; a number can't be equal to both 2 and 4 at the same time, so num == 2 and num == 4 would never evaluate to True. 
However, using or, only one of them needs to be True for the statement underneath to execute.\nThere's a little bit of intuition to it.\n\n\n"I want this AND this" has the implication of both at once.\n\n\n"I want this OR this" sounds more like either one would be adequate.\n\n\nOne other important tidbit, concerning not only conditionals, but also lists and booleans: the not keyword.\nAn often-important task in data science, when you have a list of things, is querying whether or not some new piece of information you just received is already in your list. You could certainly loop through the list, asking \"is my new_item == list[item i]?\". But, thankfully, there's a better way:", "list_of_numbers = [i for i in range(10)] # The integers 0 through 9.\nif 13 not in list_of_numbers:\n print(\"Aw man, my lucky number isn't here!\")", "Notice a couple things here--\n\n\nList comprehensions make an appearance! Can you parse it out?\n\n\nThe if statement asks if the number 13 is NOT found in list_of_numbers\n\n\nWhen that statement evaluates to True--meaning the number is NOT found--it prints the message.\n\n\nIf you omit the not keyword, then the question becomes: \"is this number in the list?\"", "list_of_numbers = [i for i in range(10)] # The integers 0 through 9.\nif 13 in list_of_numbers:\n print(\"Somehow the number 13 is in a list generated by range(10)\")", "Nothing is printed in this case, since our conditional is asking if the number 13 was in the list. Which it's not.\nBe careful with this. Equality is what gets tested here, so floating-point rounding can bite: if you ask\nif 0.3 in some_list\nand the list was built with arithmetic like 0.1 + 0.2, the check will evaluate to False even though the printed values look identical.\nSimilarly, if you ask if \"shannon\" in name_list, it will look for the precise string \"shannon\" and return False even if the string \"Shannon\" is in the list. 
With great power, etc etc.\nPart 2: Error Handling\nYes, errors: plaguing us since Windows 95 (but really, since well before then).\n\nBy now, I suspect you've likely seen your fair share of Python crashes.\n\n\nNotImplementedError from the homework assignments\n\n\nTypeError from trying to add an integer to a string\n\n\nKeyError from attempting to access a dictionary key that didn't exist\n\n\nIndexError from referencing a list beyond its actual length\n\n\nor any number of other error messages. This is the standard way in which Python (and most other programming languages) signals errors.\nThe error is known as an Exception. Some other terminology here includes:\n\n\nAn exception is raised when such an error occurs. This is why you see the code snippet raise NotImplementedError in your homeworks. In other languages such as Java, an exception is \"thrown\" instead of \"raised\", but the meanings are equivalent.\n\n\nWhen you are writing code that could potentially raise an exception, you can also write code to catch the exception and handle it yourself. When an exception is caught, that means it is handled without crashing the program.\n\n\nHere's a fairly classic example: divide by zero!\n\nLet's say we're designing a simple calculator application that divides two numbers. We'll ask the user for two numbers, divide them, and return the quotient. Seems simple enough, right?", "def divide(x, y):\n return x / y\n\ndivide(11, 0)", "D'oh! The user fed us a 0 for the denominator and broke our calculator. Meanie-face.\nSo we know there's a possibility of the user entering a 0. This could be malicious or simply by accident. Since it's only one value that could crash our app, we could in principle have an if statement that checks if the denominator is 0. 
That would be simple and perfectly valid.\nBut for the sake of this lecture, let's assume we want to try and catch the ZeroDivisionError ourselves and handle it gracefully.\nTo do this, we use something called a try / except block, which is very similar in its structure to if / elif / else blocks.\nFirst, put the code that could potentially crash your program inside a try statement. Under that, have an except statement that defines\n\nA variable for the error you're catching, and\nAny code that dictates how you want to handle the error", "def divide_safe(x, y):\n quotient = 0\n try:\n quotient = x / y\n except ZeroDivisionError:\n print(\"You tried to divide by zero. Why would you do that?!\")\n return quotient", "Now if our user tries to be snarky again--", "divide_safe(11, 0)", "No error, no crash! Just a \"helpful\" error message.\nLike conditionals, you can also create multiple except statements to handle multiple different possible exceptions:", "import random # For generating random exceptions.\nnum = random.randint(0, 1)\ntry:\n # code that can raise multiple exceptions\n pass\nexcept NameError:\n print(\"Caught a NameError!\")\nexcept ValueError:\n print(\"Nope, it was actually a ValueError.\")", "Also like conditionals, you can handle multiple errors simultaneously. 
If, like in the previous example, your code can raise multiple exceptions, but you want to handle them all the same way, you can stack them all in one except statement:", "import random # For generating random exceptions.\nnum = random.randint(0, 1)\ntry:\n if num == 1:\n raise NameError(\"This happens when you use a variable you haven't defined\")\n else:\n raise ValueError(\"This happens when int() can't parse a string\")\nexcept (NameError, ValueError): # MUST have the parentheses!\n print(\"Caught...well, some kinda error, not sure which.\")", "If you're like me, and you're writing code that you know could raise one of several errors, but are too lazy to look up specifically what errors are possible, you can create a \"catch-all\" by just not specifying anything:", "import random # For generating random exceptions.\nnum = random.randint(0, 1)\ntry:\n if num == 1:\n raise NameError(\"This happens when you use a variable you haven't defined\")\n else:\n raise ValueError(\"This happens when int() can't parse a string\")\nexcept:\n print(\"I caught something!\")", "Finally--and this is really getting into what's known as control flow (quite literally: \"controlling the flow\" of your program)--you can tack an else statement onto the very end of your exception-handling block to add some final code to the handler.\nWhy? This is code that is only executed if NO exception occurs. Let's go back to our random number example: instead of raising one of two possible exceptions, we'll raise an exception only if we flip a 1.
Let's assume, instead of elif for the different conditions, you used a bunch of if statements, e.g. if grade &gt;= 90, if grade &gt;= 80, if grade &gt;= 70, and so on; effectively, you didn't use elif at all, but just used if. What would the final output be in this case?\n2: We saw that you can add an else statement to the end of an exception handling block, which will run code in the event that no exception is raised. Why is this useful? Why not add the code you want to run in the try block itself?\n3: With respect to error handling, we discussed try, except, and else statements. There is actually one more: finally, which executes no matter what, regardless of whether an exception occurs or not. Why would this be useful?\n4: There's a whole field of "Boolean Algebra" that is not too different from the "regular" algebra you're familiar with. In this sense, rather than floating-point numbers, variables are either True or False, but everything still works pretty much the same. Take the following equation: a(a + b). Let's say a = True and b = False. If multiplication is the and operator, and addition is the or operator, what's the final result?\n5: How would you write the following constraint as a Python conditional: $10 \lt x \le 20$, and $x$ is even?\nCourse Administrivia\n\n\nAssignment 1 is due tomorrow at 11:59:59pm. Let me know if there are any questions!\n\n\nHow is Assignment 2 going? Getting your fill of loops?\n\n\nAdditional Resources\n\nMatthes, Eric. Python Crash Course. 2016. ISBN-13: 978-1593276034\nGrus, Joel. Data Science from Scratch. 2015. ISBN-13: 978-1491901427" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
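Review question 3 in the lecture notebook above asks about the finally clause. A minimal, self-contained sketch of how try / except / else / finally interact (all names here are illustrative, not from the lecture):

```python
def divide_report(x, y):
    """Divide x by y and record which clauses of the handler ran."""
    events = []
    try:
        result = x / y
    except ZeroDivisionError:
        # Runs only when the try block raised a ZeroDivisionError.
        events.append("except")
        result = None
    else:
        # Runs only when NO exception was raised in the try block.
        events.append("else")
    finally:
        # Runs no matter what -- ideal for cleanup like closing files.
        events.append("finally")
    return result, events

print(divide_report(10, 4))   # (2.5, ['else', 'finally'])
print(divide_report(10, 0))   # (None, ['except', 'finally'])
```

Note that finally fires on both paths, which is exactly why it is used for cleanup.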
molgor/spystats
notebooks/.ipynb_checkpoints/variogram_envelope_by_chunks-checkpoint.ipynb
bsd-2-clause
[ "Here I process the entire region by chunks.", "# Load Biospytial modules and etc.\n%matplotlib inline\nimport sys\nsys.path.append('/apps')\nimport django\ndjango.setup()\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\n## Use the ggplot style\nplt.style.use('ggplot')\n\nfrom external_plugins.spystats import tools\n%run ../testvariogram.py\n\nsection.shape", "Algorithm for processing Chunks\n\nMake a partition given the extent\nProduce a tuple (minx ,maxx,miny,maxy) for each element on the partition\nCalculate the semivariogram for each chunk and save it in a dataframe\nPlot Everything\nDo the same with a Matern kernel", "minx,maxx,miny,maxy = getExtent(new_data)\n\nmaxy\n\n## Let's build the partition\nN = 6\nxp,dx = np.linspace(minx,maxx,N,retstep=True)\nyp,dy = np.linspace(miny,maxy,N,retstep=True)\n\n\ndx\n\nxx,yy = np.meshgrid(xp,yp)\n\ncoordinates_list = [ (xx[i][j],yy[i][j]) for i in range(N) for j in range(N)]\n\nfrom functools import partial\ntuples = map(lambda (x,y) : partial(getExtentFromPoint,x,y,step_sizex=dx,step_sizey=dy)(),coordinates_list)\n\nlen(tuples)\n\nchunks = map(lambda (mx,Mx,my,My) : subselectDataFrameByCoordinates(new_data,'newLon','newLat',mx,Mx,my,My),tuples)\nchunks_sizes = map(lambda df : df.shape[0],chunks)\n\nchunk_w_size = zip(chunks,chunks_sizes)\n\n## Here we can filter based on a threshold\nthreshold = 10\nnonempty_chunks_w_size = filter(lambda (df,n) : df.shape[0] > threshold ,chunk_w_size)\nchunks_non_empty, chunks_sizes = zip(*nonempty_chunks_w_size)\n\nlengths = pd.Series(map(lambda ch : ch.shape[0],chunks_non_empty))\n\nlengths.plot.hist()\n\ncs = chunks_non_empty\nvariograms =map(lambda chunk : tools.Variogram(chunk,'residuals1',using_distance_threshold=600000),cs)\n\n%time vars = map(lambda v : v.calculateEmpirical(),variograms)\n%time vars = map(lambda v : v.calculateEnvelope(num_iterations=1),variograms)\n\n%time lags = map(lambda v : v.lags,variograms)\n\nlags = 
pd.DataFrame(lags).transpose()\n\nlags = lags[[0]]", "Take an average of the empirical variograms also with the envelope.\nWe will use the group by directive on the field lags", "envslow = pd.concat(map(lambda df : df[['envlow']],vars),axis=1)\nenvhigh = pd.concat(map(lambda df : df[['envhigh']],vars),axis=1)\nvariogram = pd.concat(map(lambda df : df[['variogram']],vars),axis=1)\nn_points = pd.DataFrame(map(lambda v : v.n_points,variograms))\n\npoints = n_points.transpose()\n\nprint(variogram.shape)\nprint(points.shape)\n\nejem1 = pd.DataFrame(variogram.values * points.values)\n\n# Chunks (variograms) columns\n# lag rows\nvempchunk = ejem1.sum(axis=1) / points.sum(axis=1)\nplt.plot(lags,vempchunk,'--',color='blue',lw=2.0)\n\n## Cut some values\nvchunk = pd.concat([lags,vempchunk],axis=1)\nvchunk.columns = ['lags','semivariance']\nv = vchunk[vchunk['lags'] < 500000]\nplt.plot(v.lags,v.semivariance,'--',color='blue',lw=2.0)\n#vemp2\n\n", "Let's bring the whole empirical variogram (calculated in HEC)\nFor comparison purposes", "thrs_dist = 1000000\nnt = 30 # num iterations\nfilename = \"../HEC_runs/results/low_q/data_envelope.csv\"\nenvelope_data = pd.read_csv(filename)\ngvg = tools.Variogram(new_data,'logBiomass',using_distance_threshold=thrs_dist)\ngvg.envelope = envelope_data\ngvg.empirical = gvg.envelope.variogram\ngvg.lags = gvg.envelope.lags\nvdata = gvg.envelope.dropna()\ngvg.plot(refresh=False,legend=False,percentage_trunked=20)\nplt.plot(v.lags,v.semivariance,'--',color='blue',lw=2.0)", "Ok, now the thing that I was supposed to do since the beginning\nThe log of the species instead of only species number", "cs = chunks_non_empty\nvariograms =map(lambda chunk : tools.Variogram(chunk,'residuals2',using_distance_threshold=600000),cs)\n%time vars = map(lambda v : v.calculateEmpirical(),variograms)\n%time vars = map(lambda v : v.calculateEnvelope(num_iterations=1),variograms)\n%time lags = map(lambda v : v.lags,variograms)\nlags = 
pd.DataFrame(lags).transpose()\nenvslow = pd.concat(map(lambda df : df[['envlow']],vars),axis=1)\nenvhigh = pd.concat(map(lambda df : df[['envhigh']],vars),axis=1)\nvariogram = pd.concat(map(lambda df : df[['variogram']],vars),axis=1)\nn_points = pd.DataFrame(map(lambda v : v.n_points,variograms))\npoints = n_points.transpose()\nejem1 = pd.DataFrame(variogram.values * points.values)\n# Chunks (variograms) columns\n# lag rows\nvempchunk = ejem1.sum(axis=1) / points.sum(axis=1)\nplt.plot(lags,vempchunk,'--',color='blue',lw=2.0)\nthrs_dist = 1000000\nnt = 30 # num iterations\nfilename = \"../HEC_runs/results/low_q/data_envelope.csv\"\nenvelope_data = pd.read_csv(filename)\ngvg = tools.Variogram(new_data,'logBiomass',using_distance_threshold=thrs_dist)\ngvg.envelope = envelope_data\ngvg.empirical = gvg.envelope.variogram\ngvg.lags = gvg.envelope.lags\nvdata = gvg.envelope.dropna()\ngvg.plot(refresh=False,legend=False,percentage_trunked=20)\n#plt.plot(v.lags,v.semivariance,'--',color='blue',lw=2.0)\n\nvempchunk = ejem1.sum(axis=1) / points.sum(axis=1)\n#plt.plot(lags,vempchunk,'--',color='blue',lw=2.0)\nthrs_dist = 1000000\nnt = 30 # num iterations\nfilename = \"../HEC_runs/results/low_q/data_envelope.csv\"\nenvelope_data = pd.read_csv(filename)\ngvg = tools.Variogram(new_data,'logBiomass',using_distance_threshold=thrs_dist)\ngvg.envelope = envelope_data\ngvg.empirical = gvg.envelope.variogram\ngvg.lags = gvg.envelope.lags\nvdata = gvg.envelope.dropna()\ngvg.plot(refresh=False,legend=False,percentage_trunked=20)\nplt.plot(v.lags,v.semivariance,'--',color='blue',lw=2.0)\n\n## Ok, satisfied. For now I'll move on to the optimization with gls" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
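The chunk-averaging step in the variogram notebook above (variogram.values * points.values summed over chunks, divided by the total number of pairs) is just a per-lag weighted mean. A plain-Python sketch of the same computation, with illustrative names:

```python
def pooled_semivariance(chunk_variograms, chunk_counts):
    """Per-lag weighted mean of chunk-wise empirical semivariances.

    chunk_variograms -- one list of semivariances (one value per lag) per chunk
    chunk_counts     -- matching lists with the number of point pairs per lag
    """
    n_lags = len(chunk_variograms[0])
    pooled = []
    for lag in range(n_lags):
        weighted_sum = sum(v[lag] * n[lag]
                           for v, n in zip(chunk_variograms, chunk_counts))
        total_pairs = sum(n[lag] for n in chunk_counts)
        pooled.append(weighted_sum / total_pairs)
    return pooled

# Two chunks, two lags: chunk 2 has three times the pairs at lag 0.
print(pooled_semivariance([[2.0, 4.0], [6.0, 8.0]], [[1, 1], [3, 1]]))  # [5.0, 6.0]
```

Weighting by pair counts keeps sparse chunks from dominating the pooled curve, which is what the division by points.sum(axis=1) achieves in the notebook.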
tritemio/multispot_paper
realtime kinetics/Convert us-ALEX SM files to Photon-HDF5.ipynb
mit
[ "Convert µs-ALEX SM files to Photon-HDF5\n<p class=\"lead\">This <a href=\"https://jupyter.org/\">Jupyter notebook</a>\nwill guide you through the conversion of a µs-ALEX data file from <b>SM</b> (WeissLab)\nto <a href=\"http://photon-hdf5.org\">Photon-HDF5</a> format. For more info on how to edit\na jupyter notebook refer to <a href=\"http://nbviewer.jupyter.org/github/jupyter/notebook/blob/master/docs/source/examples/Notebook/Notebook%20Basics.ipynb#Overview-of-the-Notebook-UI\">this example</a>.</p>\n\n<i>Please send feedback and report any problems to the \nPhoton-HDF5 google group.</i>\n1. How to run it?\nThe notebook is composed of \"text cells\", such as this paragraph, and \"code cells\"\ncontaining the code to be executed (and identified by an In [ ] prompt). \nTo execute a code cell, select it and press SHIFT+ENTER. \nTo modify a cell, click on it to enter \"edit mode\" (indicated by a green frame), \nthen type.\nYou can run this notebook directly online (for demo purposes), or you can \nrun it on your own desktop. For a local installation please refer to:\n\nJupyter Notebook Quick-Start Guide \n\n<br>\n<div class=\"alert alert-success\">\nPlease run each code cell using <b>SHIFT+ENTER</b>.\n</div>\n\n2. Prepare the data file\n2.1 Upload the data file\nNote: Skip to section 2.2 if you are running the notebook locally.\nBefore starting, you have to upload a data file to be converted to Photon-HDF5.\nYou can use one of our example data files available\non figshare. \nTo upload a file (up to 35 MB) switch to the \"Home\" tab in your browser, \nclick the upload button and select the data file. 
\nWait until the upload completes.\nFor larger files (like some of our example files) please use the \nUpload notebook instead.\nOnce the file is uploaded, come back here and follow the instructions below.\n2.2 Select the file\nSpecify the input data file in the following cell:", "filename = '0023uLRpitc_NTP_20dT_0.5GndCl.sm'", "The next cell will check if the filename location is correct:", "import os\ntry: \n with open(filename): pass\n print('Data file found, you can proceed.')\nexcept IOError:\n print('ATTENTION: Data file not found, please check the filename.\\n'\n ' (current value \"%s\")' % filename)", "In case of file not found, please double check the file name\nand that the file has been uploaded.\n3. Load the data\nWe start by loading the software:", "%matplotlib inline\nimport numpy as np\nimport phconvert as phc\nprint('phconvert version: ' + phc.__version__)", "Then we load the input file:", "d = phc.loader.usalex_sm(filename,\n donor = 0,\n acceptor = 1,\n alex_period = 4000,\n alex_offset = 700,\n alex_period_donor = (2180, 3900),\n alex_period_acceptor = (200, 1800),\n excitation_wavelengths = (532e-9, 635e-9),\n detection_wavelengths = (580e-9, 680e-9))", "And we plot the alternation histogram:", "phc.plotter.alternation_hist(d)", "The previous plot is the alternation histogram for the donor and acceptor channel separately.\nThe shaded areas mark the donor (green) and acceptor (red) excitation periods.\nIf the histogram looks wrong in some aspects (no photons, wrong detectors\nassignment, wrong period selection) please go back to the previous cell \nwhich loads the file and change the parameters until the histogram looks correct.\nYou may also find it useful to see how many different detectors are present\nand their number of photons. 
This information is shown in the next cell:", "detectors = d['photon_data']['detectors']\n\nprint(\"Detector Counts\")\nprint(\"-------- --------\")\nfor det, count in zip(*np.unique(detectors, return_counts=True)):\n print(\"%8d %8d\" % (det, count))", "4. Metadata\nIn the next few cells, we specify some metadata that will be stored \nin the Photon-HDF5 file. Please modify these fields to reflect\nthe content of the data file:", "author = 'John Doe'\nauthor_affiliation = 'Research Institution'\ndescription = 'us-ALEX measurement of a doubly-labeled ssDNA sample.'\nsample_name = '20dt ssDNA oligo doubly labeled with Cy3B and Atto647N'\ndye_names = 'Cy3B, ATTO647N'\nbuffer_name = 'TE50 + 0.5M GndCl'", "5. Conversion\n<br>\n<div class=\"alert alert-success\">\n<p>Once you have finished editing the previous sections you can proceed with\nthe actual conversion. To do that, click on the menu <i>Cells -> Run All Below</i>.\n\n<p>After the execution go to <b>Section 6</b> to download the Photon-HDF5 file.\n</div>\n\nThe cells below contain the code to convert the input file to Photon-HDF5.\n5.1 Add metadata", "d['description'] = description\n\nd['sample'] = dict(\n sample_name=sample_name,\n dye_names=dye_names,\n buffer_name=buffer_name,\n num_dyes = len(dye_names.split(',')))\n\nd['identity'] = dict(\n author=author,\n author_affiliation=author_affiliation)", "5.2 Save to Photon-HDF5\nThis command saves the new file to disk. If the input data does not follow the Photon-HDF5 specification it returns an error (Invalid_PhotonHDF5) printing what violates the specs.", "phc.hdf5.save_photon_hdf5(d, overwrite=True)", "You can check its content by using an HDF5 viewer such as HDFView.\n6. 
Load Photon-HDF5\nWe can load the newly created Photon-HDF5 file to check its content:", "from pprint import pprint\n\nfilename = d['_data_file'].filename\n\nh5data = phc.hdf5.load_photon_hdf5(filename)\n\nphc.hdf5.dict_from_group(h5data.identity)\n\nphc.hdf5.dict_from_group(h5data.setup)\n\npprint(phc.hdf5.dict_from_group(h5data.photon_data))\n\nh5data._v_file.close()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
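The detector-count table in the notebook above is built with np.unique(detectors, return_counts=True); the same tally can be produced with the standard library alone. A small sketch (the function name is illustrative):

```python
from collections import Counter

def detector_counts(detectors):
    """Return (detector_id, photon_count) pairs, sorted by detector ID."""
    return sorted(Counter(detectors).items())

# Print in the same two-column layout as the notebook cell.
print("Detector Counts")
print("-------- --------")
for det, n in detector_counts([0, 1, 0, 1, 1, 0, 1]):
    print("%8d %8d" % (det, n))
```

For the short list above this reports 3 photons on detector 0 and 4 on detector 1, matching what np.unique would return.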
VectorBlox/PYNQ
docs/source/5_programming_onboard.ipynb
bsd-3-clause
[ "Programming PYNQ-Z1's onboard peripherals\nLEDs, switches and buttons\nPYNQ-Z1 has the following on-board LEDs, pushbuttons and switches:\n\n\n4 monochrome LEDs (LD3-LD0)\n\n\n4 push-button switches (BTN3-BTN0)\n\n\n2 RGB LEDs (LD5-LD4)\n\n\n2 Slide-switches (SW1-SW0)\n\n\nThe peripherals are highlighted in the image below. \n\nAll of these peripherals are connected to programmable logic. This means controllers must be implemented in an overlay before these peripherals can be used. The base overlay contains controllers for all of these peripherals. \nNote that there are additional push-buttons and LEDs on the board (e.g. power LED, reset button). They are not user accessible, and are not highlighted in the figure. \nPeripheral Example\nUsing the base overlay, each of the highlighted devices can be controlled using their corresponding pynq classes. \nTo demonstrate this, we will first download the base overlay to ensure it is loaded, and then import the LED, RGBLED, Switch and Button classes from the module pynq.board.", "from pynq import Overlay\nfrom pynq.board import LED\nfrom pynq.board import RGBLED\nfrom pynq.board import Switch\nfrom pynq.board import Button\n\nOverlay(\"base.bit\").download()", "Controlling a single LED\nNow we can instantiate objects of each of these classes and use their methods to manipulate the corresponding peripherals. 
Let’s start by instantiating a single LED and turning it on and off.", "led0 = LED(0)\n\nled0.on()", "Check the board and confirm that LD0 is ON", "led0.off()", "Let’s then toggle led0 repeatedly, with a short time.sleep() delay between toggles, to see the LED flash.", "import time\nfrom pynq.board import LED\nfrom pynq.board import Button\n\nled0 = LED(0)\nfor i in range(20):\n led0.toggle()\n time.sleep(.1)", "Example: Controlling all the LEDs, switches and buttons\nThe example below creates 3 separate lists, called leds, switches and buttons.", "MAX_LEDS = 4\nMAX_SWITCHES = 2\nMAX_BUTTONS = 4\n\nleds = [0] * MAX_LEDS\nswitches = [0] * MAX_SWITCHES\nbuttons = [0] * MAX_BUTTONS\n\nfor i in range(MAX_LEDS):\n leds[i] = LED(i) \nfor i in range(MAX_SWITCHES):\n switches[i] = Switch(i) \nfor i in range(MAX_BUTTONS):\n buttons[i] = Button(i) 
You can execute this cell a few times, changing the position of the switches on the board.\n\nLEDs start in the off state\nIf SW0 is on, LD2 and LD0 will be on\nIf SW1 is on, LD3 and LD1 will be on", "clear_LEDs()\n\nfor i in range(MAX_LEDS): \n if switches[i%2].read(): \n leds[i].on()\n else:\n leds[i].off()", "The last example toggles an LED (on or off) whenever its corresponding push button is pressed, for as long as SW0 is switched on.\nTo end the program, slide SW0 to the off position.", "import time\n\nclear_LEDs()\n\nwhile switches[0].read():\n for i in range(MAX_LEDS):\n if buttons[i].read():\n leds[i].toggle()\n time.sleep(.1)\n \nclear_LEDs()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
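The switch-to-LED loop in the PYNQ notebook above lights LED i according to switch i % 2. That mapping can be checked off-board with plain Python; the function and names below are illustrative stand-ins for the board reads:

```python
def leds_from_switches(switch_states, n_leds=4):
    """Desired LED states given the slide-switch readings: LED i mirrors
    switch i % len(switch_states), as in the notebook's loop."""
    n_sw = len(switch_states)
    return [bool(switch_states[i % n_sw]) for i in range(n_leds)]

# SW0 on, SW1 off -> LD0 and LD2 lit, LD1 and LD3 dark.
print(leds_from_switches([1, 0]))   # [True, False, True, False]
```

Separating the mapping logic from the hardware calls like this makes the pattern easy to test without a board attached.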
ES-DOC/esdoc-jupyterhub
notebooks/dwd/cmip6/models/sandbox-2/seaice.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Seaice\nMIP Era: CMIP6\nInstitute: DWD\nSource ID: SANDBOX-2\nTopic: Seaice\nSub-Topics: Dynamics, Thermodynamics, Radiative Processes. \nProperties: 80 (63 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:53:57\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'dwd', 'sandbox-2', 'seaice')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties --&gt; Model\n2. Key Properties --&gt; Variables\n3. Key Properties --&gt; Seawater Properties\n4. Key Properties --&gt; Resolution\n5. Key Properties --&gt; Tuning Applied\n6. Key Properties --&gt; Key Parameter Values\n7. Key Properties --&gt; Assumptions\n8. Key Properties --&gt; Conservation\n9. Grid --&gt; Discretisation --&gt; Horizontal\n10. Grid --&gt; Discretisation --&gt; Vertical\n11. Grid --&gt; Seaice Categories\n12. Grid --&gt; Snow On Seaice\n13. Dynamics\n14. Thermodynamics --&gt; Energy\n15. Thermodynamics --&gt; Mass\n16. Thermodynamics --&gt; Salt\n17. Thermodynamics --&gt; Salt --&gt; Mass Transport\n18. Thermodynamics --&gt; Salt --&gt; Thermodynamics\n19. Thermodynamics --&gt; Ice Thickness Distribution\n20. Thermodynamics --&gt; Ice Floe Size Distribution\n21. Thermodynamics --&gt; Melt Ponds\n22. Thermodynamics --&gt; Snow Processes\n23. Radiative Processes \n1. 
Key Properties --&gt; Model\nName of seaice model used.\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of sea ice model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.model.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.model.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Variables\nList of prognostic variable in the sea ice model.\n2.1. Prognostic\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of prognostic variables in the sea ice component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.variables.prognostic') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sea ice temperature\" \n# \"Sea ice concentration\" \n# \"Sea ice thickness\" \n# \"Sea ice volume per grid cell area\" \n# \"Sea ice u-velocity\" \n# \"Sea ice v-velocity\" \n# \"Sea ice enthalpy\" \n# \"Internal ice stress\" \n# \"Salinity\" \n# \"Snow temperature\" \n# \"Snow depth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Seawater Properties\nProperties of seawater relevant to sea ice\n3.1. Ocean Freezing Point\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEquation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"TEOS-10\" \n# \"Constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "3.2. Ocean Freezing Point Value\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf using a constant seawater freezing point, specify this value.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Resolution\nResolution of the sea ice grid\n4.1. Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. Canonical Horizontal Resolution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nExpression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.3. Number Of Horizontal Gridpoints\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "5. Key Properties --&gt; Tuning Applied\nTuning applied to sea ice model component\n5.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.2. Target\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.target') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.3. Simulations\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\n*Which simulations had tuning applied, e.g. all, not historical, only pi-control? *", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.4. 
Metrics Used\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList any observed metrics used in tuning model/parameters", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.5. Variables\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nWhich variables were changed during the tuning process?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. Key Properties --&gt; Key Parameter Values\nValues of key parameters\n6.1. Typical Parameters\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nWhat values were specificed for the following parameters if used?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Ice strength (P*) in units of N m{-2}\" \n# \"Snow conductivity (ks) in units of W m{-1} K{-1} \" \n# \"Minimum thickness of ice created in leads (h0) in units of m\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.2. Additional Parameters\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf you have any additional paramterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7. 
Key Properties --&gt; Assumptions\nAssumptions made in the sea ice model\n7.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral overview description of any key assumptions made in this model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.assumptions.description') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. On Diagnostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nNote any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.3. Missing Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList any key processes missing in this model configuration. Provide full details where this affects the CMIP6 diagnostic sea ice variables.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8. Key Properties --&gt; Conservation\nConservation in the sea ice component\n8.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nProvide a general description of conservation methodology.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. 
Properties\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nProperties conserved in sea ice by the numerical schemes.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.properties') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Energy\" \n# \"Mass\" \n# \"Salt\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.3. Budget\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nFor each conserved property, specify the output variables which close the related budgets, as a comma separated list. For example: Conserved property, variable1, variable2, variable3", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.budget') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.4. Was Flux Correction Used\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes conservation involve flux correction?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "8.5. Corrected Conserved Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList any variables which are conserved by more than the numerical scheme alone.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Grid --&gt; Discretisation --&gt; Horizontal\nSea ice discretisation in the horizontal\n9.1. 
Grid\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGrid on which sea ice is horizontally discretised?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Ocean grid\" \n# \"Atmosphere Grid\" \n# \"Own Grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9.2. Grid Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the type of sea ice grid?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Structured grid\" \n# \"Unstructured grid\" \n# \"Adaptive grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9.3. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the advection scheme?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Finite differences\" \n# \"Finite elements\" \n# \"Finite volumes\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9.4. Thermodynamics Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the time step in the sea ice model thermodynamic component in seconds?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "9.5. 
Dynamics Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the time step in the sea ice model dynamic component in seconds.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "9.6. Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify any additional horizontal discretisation details.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Grid --&gt; Discretisation --&gt; Vertical\nSea ice vertical properties\n10.1. Layering\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhat type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Zero-layer\" \n# \"Two-layers\" \n# \"Multi-layers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.2. Number Of Layers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIf using multi-layers specify how many.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "10.3. Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify any additional vertical grid details.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11. Grid --&gt; Seaice Categories\nWhat method is used to represent sea ice categories ?\n11.1. Has Mulitple Categories\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSet to true if the sea ice model has multiple sea ice categories.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "11.2. Number Of Categories\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIf using sea ice categories specify how many.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11.3. Category Limits\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIf using sea ice categories specify each of the category limits.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.4. Ice Thickness Distribution Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the sea ice thickness distribution scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.5. 
Other\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf the sea ice model does not use sea ice categories, specify any additional details. For example, models that parameterise the ice thickness distribution ITD (i.e. there is no explicit ITD) but there is an assumed distribution and fluxes are computed accordingly.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.other') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12. Grid --&gt; Snow On Seaice\nSnow on sea ice details\n12.1. Has Snow On Ice\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs snow on ice represented in this model?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "12.2. Number Of Snow Levels\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of vertical levels of snow on ice?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "12.3. Snow Fraction\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how the snow fraction on sea ice is determined", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12.4. Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify any additional details related to snow on ice.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13. Dynamics\nSea Ice Dynamics\n13.1. Horizontal Transport\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the method of horizontal advection of sea ice?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.horizontal_transport') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Incremental Re-mapping\" \n# \"Prather\" \n# \"Eulerian\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.2. Transport In Thickness Space\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the method of sea ice transport in thickness space (i.e. in thickness categories)?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Incremental Re-mapping\" \n# \"Prather\" \n# \"Eulerian\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.3. Ice Strength Formulation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhich method of sea ice strength formulation is used?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Hibler 1979\" \n# \"Rothrock 1975\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.4. Redistribution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhich processes can redistribute sea ice (including thickness)?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.dynamics.redistribution') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Rafting\" \n# \"Ridging\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.5. Rheology\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nRheology, what is the ice deformation formulation?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.rheology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Free-drift\" \n# \"Mohr-Coloumb\" \n# \"Visco-plastic\" \n# \"Elastic-visco-plastic\" \n# \"Elastic-anisotropic-plastic\" \n# \"Granular\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14. Thermodynamics --&gt; Energy\nProcesses related to energy in sea ice thermodynamics\n14.1. Enthalpy Formulation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the energy formulation?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Pure ice latent heat (Semtner 0-layer)\" \n# \"Pure ice latent and sensible heat\" \n# \"Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)\" \n# \"Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.2. Thermal Conductivity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat type of thermal conductivity is used?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Pure ice\" \n# \"Saline ice\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.3. Heat Diffusion\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the method of heat diffusion?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Conduction fluxes\" \n# \"Conduction and radiation heat fluxes\" \n# \"Conduction, radiation and latent heat transport\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.4. Basal Heat Flux\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod by which basal ocean heat flux is handled?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Heat Reservoir\" \n# \"Thermal Fixed Salinity\" \n# \"Thermal Varying Salinity\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.5. Fixed Salinity Value\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "14.6. 
Heat Content Of Precipitation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method by which the heat content of precipitation is handled.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.7. Precipitation Effects On Salinity\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15. Thermodynamics --&gt; Mass\nProcesses related to mass in sea ice thermodynamics\n15.1. New Ice Formation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method by which new sea ice is formed in open water.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.2. Ice Vertical Growth And Melt\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method that governs the vertical growth and melt of sea ice.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.3. Ice Lateral Melting\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the method of sea ice lateral melting?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Floe-size dependent (Bitz et al 2001)\" \n# \"Virtual thin ice melting (for single-category)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.4. Ice Surface Sublimation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method that governs sea ice surface sublimation.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.5. Frazil Ice\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method of frazil ice formation.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "16. Thermodynamics --&gt; Salt\nProcesses related to salt in sea ice thermodynamics.\n16.1. Has Multiple Sea Ice Salinities\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "16.2. Sea Ice Salinity Thermal Impacts\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes sea ice salinity impact the thermal properties of sea ice?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "17. Thermodynamics --&gt; Salt --&gt; Mass Transport\nMass transport of salt\n17.1. Salinity Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow is salinity determined in the mass transport of salt calculation?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Prescribed salinity profile\" \n# \"Prognostic salinity profile\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.2. Constant Salinity Value\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf using a constant salinity value specify this value in PSU?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "17.3. Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the salinity profile used.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18. Thermodynamics --&gt; Salt --&gt; Thermodynamics\nSalt thermodynamics\n18.1. Salinity Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow is salinity determined in the thermodynamic calculation?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Prescribed salinity profile\" \n# \"Prognostic salinity profile\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18.2. Constant Salinity Value\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf using a constant salinity value specify this value in PSU?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "18.3. Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the salinity profile used.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "19. Thermodynamics --&gt; Ice Thickness Distribution\nIce thickness distribution details.\n19.1. Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow is the sea ice thickness distribution represented?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Virtual (enhancement of thermal conductivity, thin ice melting)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20. Thermodynamics --&gt; Ice Floe Size Distribution\nIce floe-size distribution details.\n20.1. 
Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow is the sea ice floe-size represented?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Parameterised\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20.2. Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nPlease provide further details on any parameterisation of floe-size.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "21. Thermodynamics --&gt; Melt Ponds\nCharacteristics of melt ponds.\n21.1. Are Included\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre melt ponds included in the sea ice model?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "21.2. Formulation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat method of melt pond formulation is used?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Flocco and Feltham (2010)\" \n# \"Level-ice melt ponds\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "21.3. 
Impacts\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhat do melt ponds have an impact on?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Albedo\" \n# \"Freshwater\" \n# \"Heat\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22. Thermodynamics --&gt; Snow Processes\nThermodynamic processes in snow on sea ice\n22.1. Has Snow Aging\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSet to True if the sea ice model has a snow aging scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "22.2. Snow Aging Scheme\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the snow aging scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.3. Has Snow Ice Formation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSet to True if the sea ice model has snow ice formation.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "22.4. Snow Ice Formation Scheme\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the snow ice formation scheme.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.5. Redistribution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the impact of ridging on snow cover?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.6. Heat Diffusion\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the heat diffusion through snow methodology in sea ice thermodynamics?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Single-layered heat diffusion\" \n# \"Multi-layered heat diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23. Radiative Processes\nSea Ice Radiative Processes\n23.1. Surface Albedo\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod used to handle surface albedo.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.radiative_processes.surface_albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Delta-Eddington\" \n# \"Parameterized\" \n# \"Multi-band albedo\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.2. Ice Radiation Transmission\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nMethod by which solar radiation through sea ice is handled.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Delta-Eddington\" \n# \"Exponential attenuation\" \n# \"Ice radiation transmission per category\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
smorton2/think-stats
code/chap03ex.ipynb
gpl-3.0
[ "Examples and Exercises from Think Stats, 2nd Edition\nhttp://thinkstats2.com\nCopyright 2016 Allen B. Downey\nMIT License: https://opensource.org/licenses/MIT", "from __future__ import print_function, division\n\n%matplotlib inline\n\nimport numpy as np\n\nimport nsfg\nimport first\nimport thinkstats2\nimport thinkplot", "Again, I'll load the NSFG pregnancy file and select live births:", "preg = nsfg.ReadFemPreg()\nlive = preg[preg.outcome == 1]", "Here's the histogram of birth weights:", "hist = thinkstats2.Hist(live.birthwgt_lb, label='birthwgt_lb')\nthinkplot.Hist(hist)\nthinkplot.Config(xlabel='Birth weight (pounds)', ylabel='Count')", "To normalize the disrtibution, we could divide through by the total count:", "n = hist.Total()\npmf = hist.Copy()\nfor x, freq in hist.Items():\n pmf[x] = freq / n", "The result is a Probability Mass Function (PMF).", "thinkplot.Hist(pmf)\nthinkplot.Config(xlabel='Birth weight (pounds)', ylabel='PMF')", "More directly, we can create a Pmf object.", "pmf = thinkstats2.Pmf([1, 2, 2, 3, 5])\npmf", "Pmf provides Prob, which looks up a value and returns its probability:", "pmf.Prob(2)", "The bracket operator does the same thing.", "pmf[2]", "The Incr method adds to the probability associated with a given values.", "pmf.Incr(2, 0.2)\npmf[2]", "The Mult method multiplies the probability associated with a value.", "pmf.Mult(2, 0.5)\npmf[2]", "Total returns the total probability (which is no longer 1, because we changed one of the probabilities).", "pmf.Total()", "Normalize divides through by the total probability, making it 1 again.", "pmf.Normalize()\npmf.Total()", "Here's the PMF of pregnancy length for live births.", "pmf = thinkstats2.Pmf(live.prglngth, label='prglngth')", "Here's what it looks like plotted with Hist, which makes a bar graph.", "thinkplot.Hist(pmf)\nthinkplot.Config(xlabel='Pregnancy length (weeks)', ylabel='Pmf')", "Here's what it looks like plotted with Pmf, which makes a step function.", 
"thinkplot.Pmf(pmf)\nthinkplot.Config(xlabel='Pregnancy length (weeks)', ylabel='Pmf')", "We can use MakeFrames to return DataFrames for all live births, first babies, and others.", "live, firsts, others = first.MakeFrames()", "Here are the distributions of pregnancy length.", "first_pmf = thinkstats2.Pmf(firsts.prglngth, label='firsts')\nother_pmf = thinkstats2.Pmf(others.prglngth, label='others')", "And here's the code that replicates one of the figures in the chapter.", "width=0.45\naxis = [27, 46, 0, 0.6]\nthinkplot.PrePlot(2, cols=2)\nthinkplot.Hist(first_pmf, align='right', width=width)\nthinkplot.Hist(other_pmf, align='left', width=width)\nthinkplot.Config(xlabel='Pregnancy length(weeks)', ylabel='PMF', axis=axis)\n\nthinkplot.PrePlot(2)\nthinkplot.SubPlot(2)\nthinkplot.Pmfs([first_pmf, other_pmf])\nthinkplot.Config(xlabel='Pregnancy length(weeks)', axis=axis)", "Here's the code that generates a plot of the difference in probability (in percentage points) between first babies and others, for each week of pregnancy (showing only pregnancies considered \"full term\").", "weeks = range(35, 46)\ndiffs = []\nfor week in weeks:\n p1 = first_pmf.Prob(week)\n p2 = other_pmf.Prob(week)\n diff = 100 * (p1 - p2)\n diffs.append(diff)\n\nthinkplot.Bar(weeks, diffs)\nthinkplot.Config(xlabel='Pregnancy length(weeks)', ylabel='Difference (percentage points)')\n", "Biasing and unbiasing PMFs\nHere's the example in the book showing operations we can perform with Pmf objects.\nSuppose we have the following distribution of class sizes.", "d = { 7: 8, 12: 8, 17: 14, 22: 4, \n 27: 6, 32: 12, 37: 8, 42: 3, 47: 2 }\n\npmf = thinkstats2.Pmf(d, label='actual')", "This function computes the biased PMF we would get if we surveyed students and asked about the size of the classes they are in.", "def BiasPmf(pmf, label):\n new_pmf = pmf.Copy(label=label)\n\n for x, p in pmf.Items():\n new_pmf.Mult(x, x)\n \n new_pmf.Normalize()\n return new_pmf", "The following graph shows the difference 
between the actual and observed distributions.", "biased_pmf = BiasPmf(pmf, label='observed')\nthinkplot.PrePlot(2)\nthinkplot.Pmfs([pmf, biased_pmf])\nthinkplot.Config(xlabel='Class size', ylabel='PMF')", "The observed mean is substantially higher than the actual.", "print('Actual mean', pmf.Mean())\nprint('Observed mean', biased_pmf.Mean())", "If we were only able to collect the biased sample, we could \"unbias\" it by applying the inverse operation.", "def UnbiasPmf(pmf, label=None):\n new_pmf = pmf.Copy(label=label)\n\n for x, p in pmf.Items():\n new_pmf[x] *= 1/x\n \n new_pmf.Normalize()\n return new_pmf", "We can unbias the biased PMF:", "unbiased = UnbiasPmf(biased_pmf, label='unbiased')\nprint('Unbiased mean', unbiased.Mean())", "And plot the two distributions to confirm they are the same.", "thinkplot.PrePlot(2)\nthinkplot.Pmfs([pmf, unbiased])\nthinkplot.Config(xlabel='Class size', ylabel='PMF')", "Pandas indexing\nHere's an example of a small DataFrame.", "import numpy as np\nimport pandas\narray = np.random.randn(4, 2)\ndf = pandas.DataFrame(array)\ndf", "We can specify column names when we create the DataFrame:", "columns = ['A', 'B']\ndf = pandas.DataFrame(array, columns=columns)\ndf", "We can also specify an index that contains labels for the rows.", "index = ['a', 'b', 'c', 'd']\ndf = pandas.DataFrame(array, columns=columns, index=index)\ndf", "Normal indexing selects columns.", "df['A']", "We can use the loc attribute to select rows.", "df.loc['a']", "If you don't want to use the row labels and prefer to access the rows using integer indices, you can use the iloc attribute:", "df.iloc[0]", "loc can also take a list of labels.", "indices = ['a', 'c']\ndf.loc[indices]", "If you provide a slice of labels, DataFrame uses it to select rows.", "df['a':'c']", "If you provide a slice of integers, DataFrame selects rows by integer index.", "df[0:2]", "But notice that one method includes the last elements of the slice and one does not.\nIn general, I 
recommend giving labels to the rows and names to the columns, and using them consistently.\nExercises\nExercise: Something like the class size paradox appears if you survey children and ask how many children are in their family. Families with many children are more likely to appear in your sample, and families with no children have no chance to be in the sample.\nUse the NSFG respondent variable numkdhh to construct the actual distribution for the number of children under 18 in the respondents' households.\nNow compute the biased distribution we would see if we surveyed the children and asked them how many children under 18 (including themselves) are in their household.\nPlot the actual and biased distributions, and compute their means.", "# Create original PMF\n\nresp = nsfg.ReadFemResp()\nnumkdhh_pmf = thinkstats2.Pmf(resp['numkdhh'], label='actual')\nnumkdhh_pmf\n\n# Create copy and confirm values\n\npmf = numkdhh_pmf.Copy()\nprint(pmf)\nprint(pmf.Total())\nprint('mean', pmf.Mean())\n\n# Weight PMF by number of children that would respond with each value\n\ndef BiasPmf(pmf, label):\n child_pmf = pmf.Copy(label=label)\n for x, p in pmf.Items():\n child_pmf.Mult(x, x)\n child_pmf.Normalize()\n return child_pmf\n\nchild_pmf = BiasPmf(pmf, 'childs_view')\nprint(child_pmf)\nprint(child_pmf.Total())\nprint('mean', child_pmf.Mean())\n\n# Plot\n\nthinkplot.PrePlot(2)\nthinkplot.Pmfs([pmf, child_pmf])\nthinkplot.Show(xlabel='Children In Family', ylabel='PMF')\n\n# True mean\n\nprint('True mean', pmf.Mean())\n\n# Mean based on the children's responses\n\nprint('Child view mean', child_pmf.Mean())", "Exercise: Write functions called PmfMean and PmfVar that take a Pmf object and compute the mean and variance. 
To test these methods, check that they are consistent with the methods Mean and Var provided by Pmf.", "def PmfMean(pmf):\n mean=0\n for x, p in pmf.Items():\n mean += x*p\n return mean\n\nPmfMean(child_pmf)\n\ndef PmfVar(pmf):\n variance=0\n pmf_mean=PmfMean(pmf)\n for x, p in pmf.Items():\n variance += p * np.power(x-pmf_mean, 2)\n return variance\n\nPmfVar(child_pmf)\n\nprint('Check Mean =', PmfMean(child_pmf) == thinkstats2.Pmf.Mean(child_pmf))\nprint('Check Variance = ', PmfVar(child_pmf) == thinkstats2.Pmf.Var(child_pmf))", "Exercise: I started this book with the question, \"Are first babies more likely to be late?\" To address it, I computed the difference in means between groups of babies, but I ignored the possibility that there might be a difference between first babies and others for the same woman.\nTo address this version of the question, select respondents who have at least two live births and compute pairwise differences. Does this formulation of the question yield a different result?\nHint: use nsfg.MakePregMap:", "live, firsts, others = first.MakeFrames()\n\npreg.iloc[0:2].prglngth\n\npreg_map = nsfg.MakePregMap(live)\n\nhist = thinkstats2.Hist()\n\nfor case, births in preg_map.items():\n if len(births) >= 2:\n pair = preg.loc[births[0:2]].prglngth\n diff = pair.iloc[1] - pair.iloc[0]\n hist[diff] += 1\n\nthinkplot.Hist(hist)\n\npmf = thinkstats2.Pmf(hist)\nPmfMean(pmf)", "Exercise: In most foot races, everyone starts at the same time. 
If you are a fast runner, you usually pass a lot of people at the beginning of the race, but after a few miles everyone around you is going at the same speed.\nWhen I ran a long-distance (209 miles) relay race for the first time, I noticed an odd phenomenon: when I overtook another runner, I was usually much faster, and when another runner overtook me, he was usually much faster.\nAt first I thought that the distribution of speeds might be bimodal; that is, there were many slow runners and many fast runners, but few at my speed.\nThen I realized that I was the victim of a bias similar to the effect of class size. The race was unusual in two ways: it used a staggered start, so teams started at different times; also, many teams included runners at different levels of ability.\nAs a result, runners were spread out along the course with little relationship between speed and location. When I joined the race, the runners near me were (pretty much) a random sample of the runners in the race.\nSo where does the bias come from? During my time on the course, the chance of overtaking a runner, or being overtaken, is proportional to the difference in our speeds. I am more likely to catch a slow runner, and more likely to be caught by a fast runner. 
But runners at the same speed are unlikely to see each other.\nWrite a function called ObservedPmf that takes a Pmf representing the actual distribution of runners’ speeds, and the speed of a running observer, and returns a new Pmf representing the distribution of runners’ speeds as seen by the observer.\nTo test your function, you can use relay.py, which reads the results from the James Joyce Ramble 10K in Dedham MA and converts the pace of each runner to mph.\nCompute the distribution of speeds you would observe if you ran a relay race at 7 mph with this group of runners.", "import relay\n\nresults = relay.ReadResults()\nspeeds = relay.GetSpeeds(results)\nspeeds = relay.BinData(speeds, 3, 12, 100)\n\npmf = thinkstats2.Pmf(speeds, 'actual speeds')\nthinkplot.Pmf(pmf)\nthinkplot.Config(xlabel='Speed (mph)', ylabel='PMF')\n\ndef ObservedPmf(pmf, speed, label):\n observed_pmf = pmf.Copy(label=label)\n for value in observed_pmf.Values():\n diff = abs(speed - value)\n observed_pmf[value] *= diff\n observed_pmf.Normalize()\n return observed_pmf\n\nobserved = ObservedPmf(pmf, 7, 'observed speeds')\nthinkplot.Hist(observed)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
jasontlam/snorkel
tutorials/intro/Intro_Tutorial_3.ipynb
apache-2.0
[ "Intro. to Snorkel: Extracting Spouse Relations from the News\nPart III: Training an End Extraction Model\nIn this final section of the tutorial, we'll use the noisy training labels we generated in the last tutorial part to train our end extraction model.\nFor this tutorial, we will be training a Bi-LSTM, a state-of-the-art deep neural network implemented in TensorFlow.", "%load_ext autoreload\n%autoreload 2\n%matplotlib inline\nimport os\n\n# TO USE A DATABASE OTHER THAN SQLITE, USE THIS LINE\n# Note that this is necessary for parallel execution amongst other things...\n# os.environ['SNORKELDB'] = 'postgres:///snorkel-intro'\n\nfrom snorkel import SnorkelSession\nsession = SnorkelSession()", "We repeat our definition of the Spouse Candidate subclass:", "from snorkel.models import candidate_subclass\n\nSpouse = candidate_subclass('Spouse', ['person1', 'person2'])", "We reload the probabilistic training labels:", "from snorkel.annotations import load_marginals\n\ntrain_marginals = load_marginals(session, split=0)", "We also reload the candidates:", "train_cands = session.query(Spouse).filter(Spouse.split == 0).order_by(Spouse.id).all()\ndev_cands = session.query(Spouse).filter(Spouse.split == 1).order_by(Spouse.id).all()\ntest_cands = session.query(Spouse).filter(Spouse.split == 2).order_by(Spouse.id).all()", "Finally, we load gold labels for evaluation:", "from snorkel.annotations import load_gold_labels\n\nL_gold_dev = load_gold_labels(session, annotator_name='gold', split=1)\nL_gold_test = load_gold_labels(session, annotator_name='gold', split=2)", "Now we can setup our discriminative model. 
Here we specify the model and learning hyperparameters.\nThey can also be set automatically using a search based on the dev set with a GridSearch object.", "from snorkel.learning.disc_models.rnn import reRNN\n\ntrain_kwargs = {\n 'lr': 0.01,\n 'dim': 50,\n 'n_epochs': 10,\n 'dropout': 0.25,\n 'print_freq': 1,\n 'max_sentence_length': 100\n}\n\nlstm = reRNN(seed=1701, n_threads=None)\nlstm.train(train_cands, train_marginals, X_dev=dev_cands, Y_dev=L_gold_dev, **train_kwargs)", "Now, we get the precision, recall, and F1 score from the discriminative model:", "p, r, f1 = lstm.score(test_cands, L_gold_test)\nprint(\"Prec: {0:.3f}, Recall: {1:.3f}, F1 Score: {2:.3f}\".format(p, r, f1))", "We can also get the candidates returned in sets (true positives, false positives, true negatives, false negatives) as well as a more detailed score report:", "tp, fp, tn, fn = lstm.error_analysis(session, test_cands, L_gold_test)", "Note that if this is the final test set that you will be reporting final numbers on, to avoid biasing results you should not inspect results. However you can run the model on your development set and, as we did in the previous part with the generative labeling function model, inspect examples to do error analysis.\nYou can also improve performance substantially by increasing the number of training epochs!\nFinally, we can save the predictions of the model on the test set back to the database. (This also works for other candidate sets, such as unlabeled candidates.)", "lstm.save_marginals(session, test_cands)", "More importantly, you completed the introduction to Snorkel! Give yourself a pat on the back!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
tensorflow/docs-l10n
site/en-snapshot/probability/examples/Learnable_Distributions_Zoo.ipynb
apache-2.0
[ "Copyright 2019 The TensorFlow Probability Authors.\nLicensed under the Apache License, Version 2.0 (the \"License\");", "#@title Licensed under the Apache License, Version 2.0 (the \"License\"); { display-mode: \"form\" }\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Learnable Distributions Zoo\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/probability/examples/Learnable_Distributions_Zoo\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/Learnable_Distributions_Zoo.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/Learnable_Distributions_Zoo.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/probability/tensorflow_probability/examples/jupyter_notebooks/Learnable_Distributions_Zoo.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n</table>\n\nIn this colab we show various examples of building learnable (\"trainable\") distributions. 
(We make no effort to explain the distributions, only to show how to build them.)", "import numpy as np\nimport tensorflow.compat.v2 as tf\nimport tensorflow_probability as tfp\nfrom tensorflow_probability.python.internal import prefer_static\ntfb = tfp.bijectors\ntfd = tfp.distributions\ntf.enable_v2_behavior()\n\nevent_size = 4\nnum_components = 3", "Learnable Multivariate Normal with Scaled Identity for chol(Cov)", "learnable_mvn_scaled_identity = tfd.Independent(\n tfd.Normal(\n loc=tf.Variable(tf.zeros(event_size), name='loc'),\n scale=tfp.util.TransformedVariable(\n tf.ones([1]),\n bijector=tfb.Exp(),\n name='scale')),\n reinterpreted_batch_ndims=1,\n name='learnable_mvn_scaled_identity')\n\nprint(learnable_mvn_scaled_identity)\nprint(learnable_mvn_scaled_identity.trainable_variables)", "Learnable Multivariate Normal with Diagonal for chol(Cov)", "learnable_mvndiag = tfd.Independent(\n tfd.Normal(\n loc=tf.Variable(tf.zeros(event_size), name='loc'),\n scale=tfp.util.TransformedVariable(\n tf.ones(event_size),\n bijector=tfb.Softplus(), # Use Softplus...cuz why not?\n name='scale')),\n reinterpreted_batch_ndims=1,\n name='learnable_mvn_diag')\n\nprint(learnable_mvndiag)\nprint(learnable_mvndiag.trainable_variables)", "Mixture of Multivariate Normal (spherical)", "learnable_mix_mvn_scaled_identity = tfd.MixtureSameFamily(\n mixture_distribution=tfd.Categorical(\n logits=tf.Variable(\n # Changing the `1.` initializes with a geometric decay.\n -tf.math.log(1.) * tf.range(num_components, dtype=tf.float32),\n name='logits')),\n components_distribution=tfd.Independent(\n tfd.Normal(\n loc=tf.Variable(\n tf.random.normal([num_components, event_size]),\n name='loc'),\n scale=tfp.util.TransformedVariable(\n 10. 
* tf.ones([num_components, 1]),\n bijector=tfb.Softplus(), # Use Softplus...cuz why not?\n name='scale')),\n reinterpreted_batch_ndims=1),\n name='learnable_mix_mvn_scaled_identity')\n\nprint(learnable_mix_mvn_scaled_identity)\nprint(learnable_mix_mvn_scaled_identity.trainable_variables)", "Mixture of Multivariate Normal (spherical) with first mix weight unlearnable", "learnable_mix_mvndiag_first_fixed = tfd.MixtureSameFamily(\n mixture_distribution=tfd.Categorical(\n logits=tfp.util.TransformedVariable(\n # Initialize logits as geometric decay.\n -tf.math.log(1.5) * tf.range(num_components, dtype=tf.float32),\n tfb.Pad(paddings=[[1, 0]], constant_values=0)),\n name='logits'),\n components_distribution=tfd.Independent(\n tfd.Normal(\n loc=tf.Variable(\n # Use Rademacher...cuz why not?\n tfp.random.rademacher([num_components, event_size]),\n name='loc'),\n scale=tfp.util.TransformedVariable(\n 10. * tf.ones([num_components, 1]),\n bijector=tfb.Softplus(), # Use Softplus...cuz why not?\n name='scale')),\n reinterpreted_batch_ndims=1),\n name='learnable_mix_mvndiag_first_fixed')\n\nprint(learnable_mix_mvndiag_first_fixed)\nprint(learnable_mix_mvndiag_first_fixed.trainable_variables)", "Mixture of Multivariate Normal (full Cov)", "learnable_mix_mvntril = tfd.MixtureSameFamily(\n mixture_distribution=tfd.Categorical(\n logits=tf.Variable(\n # Changing the `1.` initializes with a geometric decay.\n -tf.math.log(1.) * tf.range(num_components, dtype=tf.float32),\n name='logits')),\n components_distribution=tfd.MultivariateNormalTriL(\n loc=tf.Variable(tf.zeros([num_components, event_size]), name='loc'),\n scale_tril=tfp.util.TransformedVariable(\n 10. 
* tf.eye(event_size, batch_shape=[num_components]),\n bijector=tfb.FillScaleTriL(),\n name='scale_tril')),\n name='learnable_mix_mvntril')\n\nprint(learnable_mix_mvntril)\nprint(learnable_mix_mvntril.trainable_variables)", "Mixture of Multivariate Normal (full Cov) with unlearnable first mix & first component", "# Make a bijector which pads an eye to what otherwise fills a tril.\nnum_tril_nonzero = lambda num_rows: num_rows * (num_rows + 1) // 2\n\nnum_tril_rows = lambda nnz: prefer_static.cast(\n prefer_static.sqrt(0.25 + 2. * prefer_static.cast(nnz, tf.float32)) - 0.5,\n tf.int32)\n\n# TFP doesn't have a concat bijector, so we roll out our own.\nclass PadEye(tfb.Bijector):\n\n def __init__(self, tril_fn=None):\n if tril_fn is None:\n tril_fn = tfb.FillScaleTriL()\n self._tril_fn = getattr(tril_fn, 'inverse', tril_fn)\n super(PadEye, self).__init__(\n forward_min_event_ndims=2,\n inverse_min_event_ndims=2,\n is_constant_jacobian=True,\n name='PadEye')\n\n def _forward(self, x):\n num_rows = int(num_tril_rows(tf.compat.dimension_value(x.shape[-1])))\n eye = tf.eye(num_rows, batch_shape=prefer_static.shape(x)[:-2])\n return tf.concat([self._tril_fn(eye)[..., tf.newaxis, :], x],\n axis=prefer_static.rank(x) - 2)\n\n def _inverse(self, y):\n return y[..., 1:, :]\n\n def _forward_log_det_jacobian(self, x):\n return tf.zeros([], dtype=x.dtype)\n\n def _inverse_log_det_jacobian(self, y):\n return tf.zeros([], dtype=y.dtype)\n\n def _forward_event_shape(self, in_shape):\n n = prefer_static.size(in_shape)\n return in_shape + prefer_static.one_hot(n - 2, depth=n, dtype=tf.int32)\n\n def _inverse_event_shape(self, out_shape):\n n = prefer_static.size(out_shape)\n return out_shape - prefer_static.one_hot(n - 2, depth=n, dtype=tf.int32)\n\n\ntril_bijector = tfb.FillScaleTriL(diag_bijector=tfb.Softplus())\nlearnable_mix_mvntril_fixed_first = tfd.MixtureSameFamily(\n mixture_distribution=tfd.Categorical(\n logits=tfp.util.TransformedVariable(\n # Changing the `1.` initializes 
with a geometric decay.\n -tf.math.log(1.) * tf.range(num_components, dtype=tf.float32),\n bijector=tfb.Pad(paddings=[(1, 0)]),\n name='logits')),\n components_distribution=tfd.MultivariateNormalTriL(\n loc=tfp.util.TransformedVariable(\n tf.zeros([num_components, event_size]),\n bijector=tfb.Pad(paddings=[(1, 0)], axis=-2),\n name='loc'),\n scale_tril=tfp.util.TransformedVariable(\n 10. * tf.eye(event_size, batch_shape=[num_components]),\n bijector=tfb.Chain([tril_bijector, PadEye(tril_bijector)]),\n name='scale_tril')),\n name='learnable_mix_mvntril_fixed_first')\n\n\nprint(learnable_mix_mvntril_fixed_first)\nprint(learnable_mix_mvntril_fixed_first.trainable_variables)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
xmnlab/pywim
notebooks/WeightEstimation.ipynb
mit
[ "Table of Contents\n<p><div class=\"lev1 toc-item\"><a href=\"#Weight-Estimation\" data-toc-modified-id=\"Weight-Estimation-1\"><span class=\"toc-item-num\">1&nbsp;&nbsp;</span>Weight Estimation</a></div><div class=\"lev2 toc-item\"><a href=\"#Algorithm-Setup\" data-toc-modified-id=\"Algorithm-Setup-11\"><span class=\"toc-item-num\">1.1&nbsp;&nbsp;</span>Algorithm Setup</a></div><div class=\"lev2 toc-item\"><a href=\"#Open-Raw-Data-File-(Synthetic)\" data-toc-modified-id=\"Open-Raw-Data-File-(Synthetic)-12\"><span class=\"toc-item-num\">1.2&nbsp;&nbsp;</span>Open Raw Data File (Synthetic)</a></div><div class=\"lev2 toc-item\"><a href=\"#Data-cleaning\" data-toc-modified-id=\"Data-cleaning-13\"><span class=\"toc-item-num\">1.3&nbsp;&nbsp;</span>Data cleaning</a></div><div class=\"lev2 toc-item\"><a href=\"#Speed-estimation\" data-toc-modified-id=\"Speed-estimation-14\"><span class=\"toc-item-num\">1.4&nbsp;&nbsp;</span>Speed estimation</a></div><div class=\"lev2 toc-item\"><a href=\"#Wave-Curve-extration\" data-toc-modified-id=\"Wave-Curve-extration-15\"><span class=\"toc-item-num\">1.5&nbsp;&nbsp;</span>Wave Curve extration</a></div><div class=\"lev2 toc-item\"><a href=\"#Weight-estimation\" data-toc-modified-id=\"Weight-estimation-16\"><span class=\"toc-item-num\">1.6&nbsp;&nbsp;</span>Weight estimation</a></div><div class=\"lev3 toc-item\"><a href=\"#Estimation-by-Peak-Voltage\" data-toc-modified-id=\"Estimation-by-Peak-Voltage-161\"><span class=\"toc-item-num\">1.6.1&nbsp;&nbsp;</span>Estimation by Peak Voltage</a></div><div class=\"lev3 toc-item\"><a href=\"#Estimation-by-Area-under-the-signal\" data-toc-modified-id=\"Estimation-by-Area-under-the-signal-162\"><span class=\"toc-item-num\">1.6.2&nbsp;&nbsp;</span>Estimation by Area under the signal</a></div><div class=\"lev3 toc-item\"><a href=\"#Estimation-by-Re-sampling-of-area\" data-toc-modified-id=\"Estimation-by-Re-sampling-of-area-163\"><span class=\"toc-item-num\">1.6.3&nbsp;&nbsp;</span>Estimation by 
Re-sampling of area</a></div><div class=\"lev1 toc-item\"><a href=\"#References\" data-toc-modified-id=\"References-2\"><span class=\"toc-item-num\">2&nbsp;&nbsp;</span>References</a></div>\n\n# Weight Estimation\n\nWeight estimation can differ respectively of the technology.\n\n## Algorithm Setup", "from IPython.display import display\nfrom matplotlib import pyplot as plt\nfrom scipy import integrate\n\nimport numpy as np\nimport pandas as pd\nimport peakutils\nimport sys\n\n# local\nsys.path.insert(0, '../')\n\nfrom pywim.estimation.speed import speed_by_peak\nfrom pywim.utils import storage\nfrom pywim.utils.dsp import wave_curve\nfrom pywim.utils.stats import iqr", "Open Raw Data File (Synthetic)", "f = storage.open_file('../data/wim_day_001_01_20170324.h5')\n\ndset = f[list(f.keys())[0]]\ndf = storage.dataset_to_dataframe(dset)\n\n# information on the file\npaddle = len(max(dset.attrs, key=lambda v: len(v)))\n\nprint('METADATA')\nprint('='*80)\nfor k in dset.attrs:\n print('{}:'.format(k).ljust(paddle, ' '), dset.attrs[k], sep='\\t')\n\ndf.plot()\nplt.grid(True)\nplt.show()", "Data cleaning\n## use information from data cleaning report", "data_cleaned = df.copy()\n\nfor k in data_cleaned.keys():\n # use the first 10 points as reference to correct the baseline\n # in this case should work well\n data_cleaned[k] -= data_cleaned[k].values[:10].mean()\n\ndata_cleaned.plot()\nplt.grid(True)\nplt.show()", "Speed estimation", "# calculates the speed for each pair of sensors by axles\nspeed = speed_by_peak.sensors_estimation(\n data_cleaned, dset.attrs['sensors_distance']\n)\n\ndisplay(speed)", "Wave Curve extration", "curves = []\n\nfor k in data_cleaned.keys():\n curves.append(\n wave_curve.select_curve_by_threshold(\n data_cleaned[k], threshold=1, delta_x=5\n )\n )\n \n for c in curves[-1]: \n # plot each axle measured\n c.plot()\n \n plt.grid(True)\n plt.title(k)\n plt.show()", "Weight estimation\n\\cite{kwon2007development} presents three approach about weight 
estimation: \n\nPeak voltage;\nArea under the signal;\nRe-sampling of area (proposed method).\n\nIn this study, these three methods will be implemented.\nEstimation by Peak Voltage\nAccording to \\cite{kwon2007development}, the peak voltage generated by the \nsame vehicle does not change for different speeds; however, this assumption is \nincorrect since the peak will change if tire inflation pressure is not \nconstant. So, this method can be very helpful when accuracy is not important.\nThe equation presented in that study is:\n\\begin{equation}\\label{eq:weigh_by_peak}\nw = \\alpha \\cdot \\mathrm{peak\\_signal\\_voltage}(x_i)\n\\end{equation}\nwhere:\n\npeak_signal_voltage($x_i$) is the peak voltage value of the digitized signal x(t); \nand α is a calibration factor which must be determined using a known axle load.", "def weigh_by_peak_signal_voltage(peaks: [float], cs: [float]):\n \"\"\"\n :param peaks: peak signal voltage array\n :type peaks: np.array\n :param cs: calibration factor array\n :type cs: np.array\n :returns: np.array\n \"\"\"\n return np.array(peaks * cs)\n\nx = data_cleaned.index.values\n\nfor k in data_cleaned.keys():\n y = data_cleaned[k].values\n \n indexes = peakutils.indexes(y, thres=0.5, min_dist=30)\n \n # calibration given by random function\n c = np.random.randint(900, 1100, 1)\n w = weigh_by_peak_signal_voltage(y[indexes], c)\n \n print(k, w)", "Estimation by Area under the signal\n\\cite{kwon2007development} presented the axle load computation method recommended \nby Kistler \\cite{kistler2004installation} that computes the axle loads using the area \nunder the signal curve and the speed of the vehicle traveling. A typical signal curve \ncan be viewed as:\n\\begin{figure}\n\\centerline{\\includegraphics[width=10cm]{img/kistler-signal.png}}\n\\caption{\\label{fig:kistler-signal} Raw data signal illustration. 
Source \\cite{kistler2004installation}}\n\\end{figure}\nThe equation \npresented by Kistler \\cite{kistler2004installation} is:\n\\begin{equation}\\label{eq:weigh_by_area_under_the_signal_1}\n W = \\frac{V . C}{L} . \\int_{t_2+\\Delta t}^{t_1 - \\Delta t} (x(t) − b(t)) dt\n\\end{equation}\nwhere: \n\nt1 and t2 are point where threshold touches on the start and the end of the signal;\n$\\Delta$t is an average value from t1 and t2 to the point when the signal is near to the baseline;\nC is a constant calibration factor;\nL is the sensor width;\nV is the speed (velocity) of the vehicle;\nx(t) is the load signal;\nb(t) is the baseline level. \n\nAlso, there is a digital form as:\n\\begin{equation}\\label{eq:weigh_by_area_under_the_signal_2}\nW = \\frac{V . C}{L} . \\sum(x_i-b_i)\n\\end{equation}", "load = []\n\nt = 1/dset.attrs['sample_rate']\n\nprint('t = ', t)\n\nfor axles_curve in curves:\n # composite trapezoidal rule\n load.append([\n integrate.trapz(v, dx=t)\n for v in axles_curve\n ])\n \nprint('\\nLoad estimation:')\ndisplay(load)\n\n# W = (v/L) * A * C\nv = speed\na = load\nl = 0.053 # sensor width\nc = dset.attrs['calibration_constant']\nw = []\n\nfor i, _load in enumerate(load):\n # sensor data\n _w = []\n for j in range(len(_load)):\n # axle data\n _w.append((v[i][j]/l) * a[i][j] * c[i])\n w.append(_w)\n \nweight = np.matrix(w)\n\nprint('Axle estimated weight by each sensor:')\ndisplay(weight)\n\nweight_axles = []\n\nfor i in range(weight.shape[0]):\n v = pd.Series(weight[:, i].view(np.ndarray).flatten())\n weight_axles.append(iqr.reject_outliers(v).mean())\n \nprint('Axle estimated weight:')\ndisplay(weight_axles)\n\ngvw = sum(weight_axles)\n\nprint('Gross Vehicle Weigh:', gvw)", "Estimation by Re-sampling of area\nReferences\n(<a id=\"cit-kwon2007development\" href=\"#call-kwon2007development\">Kwon and Aryal, 2007</a>) Kwon Taek and Aryal Bibhu, ``Development of a pc-based eight-channel wim system'', , vol. , number , pp. 
, 2007.\n(<a id=\"cit-kistler2004installation\" href=\"#call-kistler2004installation\">Kistler Instrumente, 2004</a>) AG Kistler Instrumente, ``Installation Instructions: Lineas\\textregistered Sensors for Weigh-in-Motion Type 9195E'', 2004." ]
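The calibration step mentioned in the peak-voltage method above (determining α from a known axle load) can be sketched as follows. All numbers here are made up purely for illustration; a real system would use a certified reference axle.

```python
# Hypothetical calibration pass: a vehicle with a known axle load of 5000 kg
# produced a peak signal voltage of 5.2 V on the sensor (assumed values).
known_axle_load = 5000.0  # kg
peak_voltage = 5.2        # V

# From w = alpha * peak_signal_voltage, the calibration factor is:
alpha = known_axle_load / peak_voltage

# Weighing an unknown axle that produced a 4.8 V peak:
estimated_load = alpha * 4.8
print(round(estimated_load, 1))  # roughly 4615.4 kg
```

Because the peak varies with tire pressure, as noted above, a per-installation (and ideally per-season) recalibration of α would be needed in practice.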
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
gchrupala/reimaginet
notes.ipynb
mit
[ "Notes\nDevelopment and evaluation of imaginet and related models.", "%pylab inline\n\nfrom ggplot import *\nimport pandas as pd\n\ndata = pd.DataFrame(\n dict(epoch=range(1,11)+range(1,11)+range(1,11)+range(1,8)+range(1,11)+range(1,11),\n model=hstack([repeat(\"char-3-grow\", 10), \n repeat(\"char-1\", 10),\n repeat(\"char-3\", 10),\n repeat(\"visual\", 7), \n repeat(\"multitask\",10), \n repeat(\"sum\", 10)]),\n recall=[#char-3-grow lw0222.uvt.nl:/home/gchrupala/reimaginet/run-110-phon\n 0.097281087565,\n 0.140863654538,\n 0.161015593762,\n 0.173410635746,\n 0.176969212315,\n 0.175529788085,\n 0.175089964014,\n 0.174010395842,\n 0.173370651739,\n 0.173050779688,\n # char-1 yellow.uvt.nl:/home/gchrupala/repos/reimagine/run-200-phon\n 0.100919632147,\n 0.127588964414,\n 0.140583766493,\n 0.148300679728,\n 0.150739704118,\n 0.153338664534,\n 0.156657337065,\n 0.159016393443,\n 0.159056377449,\n 0.160655737705,\n # char-3 yellow.uvt.nl:/home/gchrupala/repos/reimagine/run-201-phon\n 0.078368652539,\n 0.125789684126,\n 0.148140743703,\n 0.158216713315,\n 0.163694522191,\n 0.168612554978,\n 0.172570971611,\n 0.17181127549,\n 0.171531387445,\n 0.170611755298,\n\n # visual \n 0.160015993603,\n 0.184406237505,\n 0.193202718912,\n 0.19956017593,\n 0.201079568173,\n 0.201719312275,\n 0.19944022391,\n # multitask\n 0.16093562575, \n 0.185525789684,\n 0.194482207117,\n 0.202758896441,\n 0.203558576569,\n 0.20243902439,\n 0.199240303878,\n 0.195361855258,\n 0.193242702919,\n 0.189924030388,\n # sum\n 0.137984806078,\n 0.145581767293,\n 0.149340263894,\n 0.151819272291,\n 0.152898840464,\n 0.154218312675,\n 0.155257896841,\n 0.155697720912,\n 0.15637744902,\n 0.156657337065\n ]))\n\n\ndef standardize(x):\n return (x-numpy.mean(x))/numpy.std(x)", "Image retrieval evaluation\nModels:\n- Sum - additively composed word embeddings (1024 dimensions)\n- Visual - Imaginet with disabled textual pathway (1024 embeddings + 1 x 1024 hidden\n- Multitask - Full Imaginet model (1024 
embeddings + 1 x 1024 hidden)\n- Char-1 - Model similar to imaginet, but trained at the character level. Captions are lowercased, with spaces removed. The model has 256 character embeddings + 1 recurrent (GRU) layer of 1024 units. \n- Char-3 - Like above, but 3 GRU layers\n- Char-3-grow - Like above, but layers >1 initialized to pre-trained approximate identity\nRemarks: \n- Models NOT trained on extra train data (restval)", "ggplot(data.loc[data['model'].isin(['sum','char-1','char-3','char-3-grow','multitask'])], \n aes(x='epoch', y='recall', color='model')) + geom_line(size=3) + theme()\n\n\nggplot(data.loc[data['model'].isin(['visual','multitask','sum'])], \n aes(x='epoch', y='recall', color='model')) + geom_line(size=3) + theme()\n\ndata_grow = pd.DataFrame(dict(epoch=range(1,11)+range(1,11),\n model=hstack([repeat(\"gru-1\", 10),repeat(\"gru-2-grow\", 10)]),\n recall=[#gru-1\n 0.170971611355,\n 0.192163134746,\n 0.206797281088,\n 0.211355457817,\n 0.21331467413,\n 0.218992403039,\n 0.214674130348,\n 0.214634146341,\n 0.214434226309,\n 0.212115153938,\n # gru-2-grow\n 0.173730507797,\n 0.198320671731,\n 0.206117552979,\n 0.211715313874,\n 0.212914834066,\n 0.211915233906,\n 0.209956017593,\n 0.210795681727,\n 0.209076369452,\n 0.208996401439\n ]))\n\n \n", "Models:\n - GRU-1 - Imaginet (1024 emb + 1 x 1024 hidden)\n - GRU-2 grow - Imaginet (1024 emb + 2 x 1024 hidden)\nRemarks:\n - Models trained on extra train data (restval)\n - Layers >1 initialized to pre-trained approximate identity", "ggplot(data_grow, aes(x='epoch', y='recall', color='model')) + geom_line(size=3) + theme()" ]
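The "approximate identity" initialization mentioned in the remarks above can be illustrated with a plain linear recurrent layer. This is only a sketch of the general idea (new layers start out roughly copying their input state through), not the actual GRU initialization used in the experiments.

```python
import numpy as np

def approx_identity_rnn_weights(size, eps=1e-3, seed=0):
    """Weights initialized near the identity, so a freshly added layer
    approximately passes its input through unchanged at the start of training."""
    rng = np.random.RandomState(seed)
    W_in = np.eye(size) + eps * rng.randn(size, size)
    W_rec = np.eye(size) + eps * rng.randn(size, size)
    return W_in, W_rec

W_in, W_rec = approx_identity_rnn_weights(4)
h = np.zeros(4)                       # initial hidden state
x = np.array([1.0, -2.0, 0.5, 3.0])   # input from the layer below
h_new = W_in @ x + W_rec @ h          # linear recurrence step
print(np.allclose(h_new, x, atol=0.1))  # prints True: output is close to the input
```

With such an initialization, stacking an extra layer on a pre-trained model starts from roughly the pre-trained behavior instead of destroying it, which is the intuition behind the "grow" variants above.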
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
EstevaoVieira/udacity_projects
titanic_survival_exploration/titanic_survival_exploration.ipynb
mit
[ "Machine Learning Engineer Nanodegree\nIntroduction and Foundations\nProject: Titanic Survival Exploration\nIn 1912, the ship RMS Titanic struck an iceberg on its maiden voyage and sank, resulting in the deaths of most of its passengers and crew. In this introductory project, we will explore a subset of the RMS Titanic passenger manifest to determine which features best predict whether someone survived or did not survive. To complete this project, you will need to implement several conditional predictions and answer the questions below. Your project submission will be evaluated based on the completion of the code and your responses to the questions.\n\nTip: Quoted sections like this will provide helpful instructions on how to navigate and use an iPython notebook. \n\nGetting Started\nTo begin working with the RMS Titanic passenger data, we'll first need to import the functionality we need, and load our data into a pandas DataFrame.\nRun the code cell below to load our data and display the first few entries (passengers) for examination using the .head() function.\n\nTip: You can run a code cell by clicking on the cell and using the keyboard shortcut Shift + Enter or Shift + Return. Alternatively, a code cell can be executed using the Play button in the hotbar after selecting it. Markdown cells (text cells like this one) can be edited by double-clicking, and saved using these same shortcuts. 
Markdown allows you to write easy-to-read plain text that can be converted to HTML.", "# Import libraries necessary for this project\nimport numpy as np\nimport pandas as pd\nfrom IPython.display import display # Allows the use of display() for DataFrames\n\n# Import supplementary visualizations code visuals.py\nimport visuals as vs\n\n# Pretty display for notebooks\n%matplotlib inline\n\n# Load the dataset\nin_file = 'titanic_data.csv'\nfull_data = pd.read_csv(in_file)\n\n# Print the first few entries of the RMS Titanic data\ndisplay(full_data.head())", "From a sample of the RMS Titanic data, we can see the various features present for each passenger on the ship:\n- Survived: Outcome of survival (0 = No; 1 = Yes)\n- Pclass: Socio-economic class (1 = Upper class; 2 = Middle class; 3 = Lower class)\n- Name: Name of passenger\n- Sex: Sex of the passenger\n- Age: Age of the passenger (Some entries contain NaN)\n- SibSp: Number of siblings and spouses of the passenger aboard\n- Parch: Number of parents and children of the passenger aboard\n- Ticket: Ticket number of the passenger\n- Fare: Fare paid by the passenger\n- Cabin Cabin number of the passenger (Some entries contain NaN)\n- Embarked: Port of embarkation of the passenger (C = Cherbourg; Q = Queenstown; S = Southampton)\nSince we're interested in the outcome of survival for each passenger or crew member, we can remove the Survived feature from this dataset and store it as its own separate variable outcomes. We will use these outcomes as our prediction targets.\nRun the code cell below to remove Survived as a feature of the dataset and store it in outcomes.", "# Store the 'Survived' feature in a new variable and remove it from the dataset\noutcomes = full_data['Survived']\ndata = full_data.drop('Survived', axis = 1)\n\n# Show the new dataset with 'Survived' removed\ndisplay(data.head())", "The very same sample of the RMS Titanic data now shows the Survived feature removed from the DataFrame. 
Note that data (the passenger data) and outcomes (the outcomes of survival) are now paired. That means for any passenger data.loc[i], they have the survival outcome outcomes[i].\nTo measure the performance of our predictions, we need a metric to score our predictions against the true outcomes of survival. Since we are interested in how accurate our predictions are, we will calculate the proportion of passengers where our prediction of their survival is correct. Run the code cell below to create our accuracy_score function and test a prediction on the first five passengers. \nThink: Out of the first five passengers, if we predict that all of them survived, what would you expect the accuracy of our predictions to be?", "def accuracy_score(truth, pred):\n \"\"\" Returns accuracy score for input truth and predictions. \"\"\"\n \n # Ensure that the number of predictions matches number of outcomes\n if len(truth) == len(pred): \n \n # Calculate and return the accuracy as a percent\n return \"Predictions have an accuracy of {:.2f}%.\".format((truth == pred).mean()*100)\n \n else:\n return \"Number of predictions does not match number of outcomes!\"\n \n# Test the 'accuracy_score' function\npredictions = pd.Series(np.ones(5, dtype = int))\nprint accuracy_score(outcomes[:5], predictions)", "Tip: If you save an iPython Notebook, the output from running code blocks will also be saved. However, the state of your workspace will be reset once a new session is started. Make sure that you run all of the code blocks from your previous session to reestablish variables and functions before picking up where you last left off.\n\nMaking Predictions\nIf we were asked to make a prediction about any passenger aboard the RMS Titanic whom we knew nothing about, then the best prediction we could make would be that they did not survive. 
This is because we can assume that a majority of the passengers (more than 50%) did not survive the ship sinking.\nThe predictions_0 function below will always predict that a passenger did not survive.", "def predictions_0(data):\n \"\"\" Model with no features. Always predicts a passenger did not survive. \"\"\"\n\n predictions = []\n for _, passenger in data.iterrows():\n \n # Predict the survival of 'passenger'\n predictions.append(0)\n \n # Return our predictions\n return pd.Series(predictions)\n\n# Make the predictions\npredictions = predictions_0(data)", "Question 1\nUsing the RMS Titanic data, how accurate would a prediction be that none of the passengers survived?\nHint: Run the code cell below to see the accuracy of this prediction.", "print accuracy_score(outcomes, predictions)", "Answer: Predictions have an accuracy of 61.62%.\n\nLet's take a look at whether the feature Sex has any indication of survival rates among passengers using the survival_stats function. This function is defined in the titanic_visualizations.py Python script included with this project. The first two parameters passed to the function are the RMS Titanic data and passenger survival outcomes, respectively. The third parameter indicates which feature we want to plot survival statistics across.\nRun the code cell below to plot the survival outcomes of passengers based on their sex.", "vs.survival_stats(data, outcomes, 'Sex')", "Examining the survival statistics, a large majority of males did not survive the ship sinking. However, a majority of females did survive the ship sinking. Let's build on our previous prediction: If a passenger was female, then we will predict that they survived. Otherwise, we will predict the passenger did not survive.\nFill in the missing code below so that the function will make this prediction.\nHint: You can access the values of each feature for a passenger like a dictionary. 
For example, passenger['Sex'] is the sex of the passenger.", "def predictions_1(data):\n \"\"\" Model with one feature: \n - Predict a passenger survived if they are female. \"\"\"\n \n predictions = []\n for _, passenger in data.iterrows():\n \n # Remove the 'pass' statement below \n # and write your prediction conditions here\n predictions.append(passenger.Sex=='female')\n \n # Return our predictions\n return pd.Series(predictions)\n\n# Make the predictions\npredictions = predictions_1(data)", "Question 2\nHow accurate would a prediction be that all female passengers survived and the remaining passengers did not survive?\nHint: Run the code cell below to see the accuracy of this prediction.", "print accuracy_score(outcomes, predictions)", "Answer: Predictions have an accuracy of 78.68%.\n\nUsing just the Sex feature for each passenger, we are able to increase the accuracy of our predictions by a significant margin. Now, let's consider using an additional feature to see if we can further improve our predictions. For example, consider all of the male passengers aboard the RMS Titanic: Can we find a subset of those passengers that had a higher rate of survival? Let's start by looking at the Age of each male, by again using the survival_stats function. This time, we'll use a fourth parameter to filter out the data so that only passengers with the Sex 'male' will be included.\nRun the code cell below to plot the survival outcomes of male passengers based on their age.", "vs.survival_stats(data, outcomes, 'Age', [\"Sex == 'male'\"])", "Examining the survival statistics, the majority of males younger than 10 survived the ship sinking, whereas most males age 10 or older did not survive the ship sinking. Let's continue to build on our previous prediction: If a passenger was female, then we will predict they survive. If a passenger was male and younger than 10, then we will also predict they survive. 
Otherwise, we will predict they do not survive.\nFill in the missing code below so that the function will make this prediction.\nHint: You can start your implementation of this function using the prediction code you wrote earlier from predictions_1.", "def predictions_2(data):\n \"\"\" Model with two features: \n - Predict a passenger survived if they are female.\n - Predict a passenger survived if they are male and younger than 10. \"\"\"\n \n predictions = []\n for _, passenger in data.iterrows():\n if passenger.Sex=='female':\n predictions.append(1)\n elif passenger.Age < 10: # the first branch caught females, so this one only sees males\n predictions.append(1)\n else:\n predictions.append(0)\n # Return our predictions\n return pd.Series(predictions)\n\n# Make the predictions\npredictions = predictions_2(data)", "Question 3\nHow accurate would a prediction be that all female passengers and all male passengers younger than 10 survived?\nHint: Run the code cell below to see the accuracy of this prediction.", "print accuracy_score(outcomes, predictions)", "Answer: Predictions have an accuracy of 79.35%.\n\nAdding the feature Age as a condition in conjunction with Sex improves the accuracy by a small margin over using the feature Sex alone. Now it's your turn: Find a series of features and conditions to split the data on to obtain an outcome prediction accuracy of at least 80%. This may require multiple features and multiple levels of conditional statements to succeed. You can use the same feature multiple times with different conditions. \nPclass, Sex, Age, SibSp, and Parch are some suggested features to try.\nUse the survival_stats function below to examine various survival statistics.\nHint: To use multiple filter conditions, put each condition in the list passed as the last argument. 
Example: [\"Sex == 'male'\", \"Age &lt; 18\"]", "vs.survival_stats(data, outcomes, 'Embarked', [ \"Sex == 'female'\", 'Pclass == 3','Age < 20'])", "After exploring the survival statistics visualization, fill in the missing code below so that the function will make your prediction.\nMake sure to keep track of the various features and conditions you tried before arriving at your final prediction model.\nHint: You can start your implementation of this function using the prediction code you wrote earlier from predictions_2.", "def predictions_3(data):\n \"\"\" Model with multiple features. Makes a prediction with an accuracy of at least 80%. \"\"\"\n predictions = []\n for _, passenger in data.iterrows():\n if passenger.Pclass == 3:\n if passenger.Sex == 'female' and passenger.Age < 20 and passenger.Embarked != 'S':\n predictions.append(1)\n else:\n predictions.append(0)\n elif passenger.Sex == 'female':\n predictions.append(1)\n elif passenger.Age < 10:\n if passenger.SibSp >= 3:\n predictions.append(0)\n else:\n predictions.append(1)\n else:\n predictions.append(0)\n \n # Return our predictions\n return pd.Series(predictions)\n\n# Make the predictions\npredictions = predictions_3(data)", "Question 4\nDescribe the steps you took to implement the final prediction model so that it got an accuracy of at least 80%. What features did you look at? Were certain features more informative than others? Which conditions did you use to split the survival outcomes in the data? How accurate are your predictions?\nHint: Run the code cell below to see the accuracy of your predictions.", "print accuracy_score(outcomes, predictions)", "Answer: Predictions have an accuracy of 80.81%. Some features were much more informative than others, like Sex and Age. Others, such as the port of embarkation, did not add as much. 
The search for features was based entirely on the graphs: whenever I found a situation with a large difference between the red and green bars, I predicted the outcome of the taller bar for everyone in that category.\nConclusion\nAfter several iterations of exploring and conditioning on the data, you have built a useful algorithm for predicting the survival of each passenger aboard the RMS Titanic. The technique applied in this project is a manual implementation of a simple machine learning model, the decision tree. A decision tree splits a set of data into smaller and smaller groups (called nodes), by one feature at a time. Each time a subset of the data is split, our predictions become more accurate if each of the resulting subgroups is more homogeneous (contains similar labels) than before. The advantage of having a computer do things for us is that it will be more exhaustive and more precise than our manual exploration above. This link provides another introduction to machine learning using a decision tree.\nA decision tree is just one of many models that come from supervised learning. In supervised learning, we attempt to use features of the data to predict or model things with objective outcome labels. That is to say, each of our data points has a known outcome value, such as a categorical, discrete label like 'Survived', or a numerical, continuous value like predicting the price of a house.\nQuestion 5\nThink of a real-world scenario where supervised learning could be applied. What would be the outcome variable that you are trying to predict? Name two features about the data used in this scenario that might be helpful for making the predictions. 
\nAnswer: Supervised learning could be used by banks or credit card companies, using location (city, state), price, and type of store as features, together with many previous cases of fraud (or even some artificially created samples), to detect whether a given credit card transaction may be fraudulent.\n\nNote: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to\nFile -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission." ]
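As a toy illustration of how a decision tree automates the manual splits built above, here is a pure-Python sketch of a one-level "tree" mirroring the logic of predictions_1, applied to made-up passenger records (not the real Titanic data):

```python
def predict(passenger):
    """Hand-built one-level 'decision tree': survived if female, else not."""
    return 1 if passenger["Sex"] == "female" else 0

def accuracy(passengers, outcomes):
    """Fraction of passengers whose predicted outcome matches the true one."""
    correct = sum(predict(p) == o for p, o in zip(passengers, outcomes))
    return correct / len(passengers)

# Made-up records purely for illustration
toy_passengers = [
    {"Sex": "female", "Age": 29},
    {"Sex": "male", "Age": 40},
    {"Sex": "male", "Age": 6},
    {"Sex": "female", "Age": 18},
]
toy_outcomes = [1, 0, 1, 1]

print(accuracy(toy_passengers, toy_outcomes))  # 3 of 4 correct -> 0.75
```

A learned decision tree does exactly this, but it chooses the split feature and threshold at each node automatically, by measuring how homogeneous the resulting subgroups are.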
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
widdowquinn/Teaching-SfAM-ECS
workshop/03a-building.ipynb
mit
[ "<img src=\"images/JHI_STRAP_Web.png\" style=\"width: 150px; float: right;\">\n03a - Building a Reproducible Document\nTable of Contents\n\nPython Imports/Startup\nBiological Motivation\nLoad Sequence\nBuild BLAST database\nRun BLAST query\nLoad BLAST results\nQuery UniProt\nQuery KEGG\n\n<a id=\"python_imports\"></a>\n1. Python Imports/Startup\n<p></p>\n<div class=\"alert-success\">\n<b>It can be very convenient to have all the `Python` library imports at the top of the notebook.</b>\n</div>\n\nThis is very helpful when running the notebook with, e.g. Cell -&gt; Run All or Kernel -&gt; Restart &amp; Run All from the menu bar, all the libraries are available throughout the document.", "# The line below allows the notebooks to show graphics inline\n%pylab inline\n\nimport io # This lets us handle streaming data\nimport os # This lets us communicate with the operating system\n\nimport pandas as pd # This lets us use dataframes\nimport seaborn as sns # This lets us draw pretty graphics\n\n# Biopython is a widely-used library for bioinformatics\n# tasks, and integrating with software\nfrom Bio import SeqIO # This lets us handle sequence data\nfrom Bio.KEGG import REST # This lets us connect to the KEGG databases\n\n# The bioservices library allows connections to common\n# online bioinformatics resources\nfrom bioservices import UniProt # This lets us connect to the UniProt databases\n\nfrom IPython.display import Image # This lets us display images (.png etc) from code", "<p></p>\n<div class=\"alert-success\">\n<b>It can be useful here to create any output directories that will be used throughout the document.</b>\n</div>\n\nThe os.makedirs() function allows us to create a new directory, and the exist_ok option will prevent the notebook code from stopping and throwing an error if the directory already exists.", "# Create a new directory for notebook output\nOUTDIR = os.path.join(\"data\", \"reproducible\", \"output\")\nos.makedirs(OUTDIR, exist_ok=True)", 
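A quick demonstration of the exist_ok behaviour described above, using a temporary directory so nothing is left behind on disk:

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    outdir = os.path.join(tmp, "data", "reproducible", "output")
    os.makedirs(outdir, exist_ok=True)  # creates the nested directories
    os.makedirs(outdir, exist_ok=True)  # second call: no FileExistsError raised
    print(os.path.isdir(outdir))        # prints True
```

Without exist_ok=True, the second call would raise FileExistsError, which is exactly what the notebook avoids on re-runs.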
"<p></p>\n<div class=\"alert-success\">\n<b>It can be useful here to create helper functions that will be used throughout the document.</b>\n</div>\n\nThe to_df() function will turn tabular data into a pandas dataframe", "# A small function to return a Pandas dataframe, given tabular text\ndef to_df(result):\n return pd.read_table(io.StringIO(result), header=None)", "<a id=\"motivation\"></a>\n2. Biological Motivation\n<p></p>\n<div class=\"alert-info\">\n<b>We are working on a project to improve bacterial throughput for biosynthesis, and have been provided with a nucleotide sequence of a gene of interest.\n<br></br><br></br>\nThis gene is overrepresented in populations of bacteria that appear to be associated with enhanced metabolic function relevant to a biosynthetic output (lipid conversion to ethanol).\n<br></br><br></br>\nWe want to find out more about the annotated function and literature associated with this gene, which appears to derive from *Proteus mirabilis*.\n</div>\n\nOur plan is to:\n\nidentify a homologue in a reference isolate of P. mirabilis\nobtain the protein sequence/identifier for the homologue\nget information about the molecular function of this protein from UniProt\nget information about the metabolic function of this protein from KEGG\nvisualise some of the information about this gene/protein\n\n<a id=\"load_sequence\"></a>\n3. Load Sequence\n<p></p>\n<div class=\"alert-success\">\n<b>We first load the sequence from a local `FASTA` file, using the `Biopython` `SeqIO` library.</b>\n</div>\n\n<a id=\"build_blast\"></a>\n4. Build BLAST Database\n<p></p>\n<div class=\"alert-success\">\n<b>We now build a local `BLAST` database from the *P. mirabilis* reference proteins.</b>\n</div>\n\n<a id=\"blast_query\"></a>\n5. Run BLAST Query\n<p></p>\n<div class=\"alert-success\">\n<b>We now query the wildtype sequence against our custom `BLAST` database from the *P. mirabilis* reference proteins.</b>\n</div>\n\n<a id=\"blast_results\"></a>\n6. 
Load BLAST Results\n<p></p>\n<div class=\"alert-success\">\n<b>We now load the `BLASTX` results for inspection and visualisation, using `pandas`</b>\n</div>\n\n<a id=\"uniprot\"></a>\n7. Query UniProt\n<p></p>\n<div class=\"alert-success\">\n<b>We now query the `UniProt` databases for information on our best match</b>\n</div>\n\n<a id=\"kegg\"></a>\n8. Query KEGG\n<p></p>\n<div class=\"alert-success\">\n<b>We now query the `KEGG` databases for information on our best match</b>\n</div>" ]
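To show how the to_df() helper defined earlier in this notebook is meant to be used, here is a small self-contained example with fake tab-separated text; the real notebook would feed it the tabular replies returned by the KEGG/UniProt services.

```python
import io
import pandas as pd

# Same helper as defined above; pd.read_table is equivalent to
# pd.read_csv(..., sep="\t")
def to_df(result):
    return pd.read_table(io.StringIO(result), header=None)

# Fake KEGG-style tabular reply, purely illustrative
result = "gene1\tdescription one\ngene2\tdescription two\n"
df = to_df(result)
print(df.shape)  # (2, 2)
```

Wrapping the raw text in io.StringIO lets pandas treat the in-memory string exactly like a file, which is why the helper works unchanged for any tab-separated service response.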
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
LSSTC-DSFP/LSSTC-DSFP-Sessions
Sessions/Session09/Day2/gps/03-FastGPs.ipynb
mit
[ "Fast GP implementations\nThe data file needed for this tutorial can be downloaded as follows:", "!wget https://raw.githubusercontent.com/rodluger/tutorials/master/gps/data/sample_transit.txt\n!mv *.txt data/", "Benchmarking our implementation\nLet's do some timing tests and compare them to what we get with two handy GP packages: george and celerite. We'll learn how to use both along the way.\nBelow is the code we wrote in the Day 1 tutorials to sample from and compute the likelihood of a GP.", "import numpy as np\nfrom scipy.linalg import cho_factor\n\n\ndef ExpSquaredKernel(t1, t2=None, A=1.0, l=1.0):\n \"\"\"\n Return the ``N x M`` exponential squared\n covariance matrix between time vectors `t1`\n and `t2`. The kernel has amplitude `A` and\n lengthscale `l`.\n \n \"\"\"\n if t2 is None:\n t2 = t1\n T2, T1 = np.meshgrid(t2, t1)\n return A ** 2 * np.exp(-0.5 * (T1 - T2) ** 2 / l ** 2)\n\n\ndef ln_gp_likelihood(t, y, sigma=0, A=1.0, l=1.0):\n \"\"\"\n Return the log of the GP likelihood of the\n data `y(t)` given uncertainty `sigma` and\n an Exponential Squared Kernel with amplitude `A`\n and length scale `l`.\n \n \"\"\"\n # The covariance and its determinant\n npts = len(t)\n kernel = ExpSquaredKernel\n K = kernel(t, A=A, l=l) + sigma ** 2 * np.eye(npts)\n \n # The marginal log likelihood\n log_like = -0.5 * np.dot(y.T, np.linalg.solve(K, y))\n log_like -= 0.5 * np.linalg.slogdet(K)[1]\n log_like -= 0.5 * npts * np.log(2 * np.pi)\n \n return log_like\n\n\ndef draw_from_gaussian(mu, S, ndraws=1, eps=1e-12):\n \"\"\"\n Generate samples from a multivariate gaussian\n specified by covariance ``S`` and mean ``mu``.\n \n (We derived these equations in Day 1, Notebook 01, Exercise 7.)\n \"\"\"\n npts = S.shape[0]\n L, _ = cho_factor(S + eps * np.eye(npts), lower=True)\n L = np.tril(L)\n u = np.random.randn(npts, ndraws)\n x = np.dot(L, u) + mu[:, None]\n return x.T\n\n\ndef compute_gp(t_train, y_train, t_test, sigma=0, A=1.0, l=1.0):\n \"\"\"\n Compute the mean 
vector and covariance matrix of a GP\n at times `t_test` given training points `y_train(t_train)`.\n The training points have uncertainty `sigma` and the\n kernel is assumed to be an Exponential Squared Kernel\n with amplitude `A` and lengthscale `l`.\n \n \"\"\"\n # Compute the required matrices\n kernel = ExpSquaredKernel\n Stt = kernel(t_train, A=A, l=l)\n Stt += sigma ** 2 * np.eye(Stt.shape[0])\n Spp = kernel(t_test, A=A, l=l)\n Spt = kernel(t_test, t_train, A=A, l=l)\n\n # Compute the mean and covariance of the GP\n mu = np.dot(Spt, np.linalg.solve(Stt, y_train))\n S = Spp - np.dot(Spt, np.linalg.solve(Stt, Spt.T))\n \n return mu, S", "<div style=\"background-color: #D6EAF8; border-left: 15px solid #2E86C1;\">\n <h1 style=\"line-height:2.5em; margin-left:1em;\">Exercise 1a</h1>\n</div>\n\nLet's time how long our custom implementation of a GP takes for a rather long dataset. Create a time array of 10,000 points between 0 and 10 and time how long it takes to sample the prior of the GP for the default kernel parameters (unit amplitude and timescale). Add a bit of noise to the sample and then time how long it takes to evaluate the log likelihood for the dataset. 
Make sure to store the value of the log likelihood for later.\n<div style=\"background-color: #D6EAF8; border-left: 15px solid #2E86C1;\">\n <h1 style=\"line-height:2.5em; margin-left:1em;\">Exercise 1b</h1>\n</div>\n\nLet's time how long it takes to do the same operations using the george package (pip install george).\nThe kernel we'll use is\npython\nkernel = amp ** 2 * george.kernels.ExpSquaredKernel(tau ** 2)\nwhere amp = 1 and tau = 1 in this case.\nTo instantiate a GP using george, simply run\npython\ngp = george.GP(kernel)\nThe george package pre-computes a lot of matrices that are re-used in different operations, so before anything else, ask it to compute the GP model for your timeseries:\npython\ngp.compute(t, sigma)\nNote that we've only given it the time array and the uncertainties, so as long as those remain the same, you don't have to re-compute anything. This will save you a lot of time in the long run!\nFinally, the log likelihood is given by gp.log_likelihood(y) and a sample can be drawn by calling gp.sample().\nHow do the speeds compare? Did you get the same value of the likelihood (assuming you computed it for the same sample in both cases)?\n<div style=\"background-color: #D6EAF8; border-left: 15px solid #2E86C1;\">\n <h1 style=\"line-height:2.5em; margin-left:1em;\">Exercise 1c</h1>\n</div>\n\ngeorge offers a fancy GP solver called the HODLR solver, which makes some approximations that dramatically speed up the matrix algebra. Instantiate the GP object again by passing the keyword solver=george.HODLRSolver and re-compute the log likelihood. How long did that take?\n(I wasn't able to draw samples using the HODLR solver; unfortunately this may not be implemented.)\n<div style=\"background-color: #D6EAF8; border-left: 15px solid #2E86C1;\">\n <h1 style=\"line-height:2.5em; margin-left:1em;\">Exercise 2</h1>\n</div>\n\nThe george package is super useful for GP modeling, and I recommend you read over the docs and examples. 
It implements several different kernels that come in handy in different situations, and it has support for multi-dimensional GPs. But if all you care about are GPs in one dimension (in this case, we're only doing GPs in the time domain, so we're good), then celerite is what it's all about:\nbash\npip install celerite\nCheck out the docs here, as well as several tutorials. There is also a paper that discusses the math behind celerite. The basic idea is that for certain families of kernels, there exist extremely efficient methods of factorizing the covariance matrices. Whereas GP fitting typically scales with the number of datapoints $N$ as $N^3$, celerite is able to do everything in order $N$ (!!!) This is a huge advantage, especially for datasets with tens or hundreds of thousands of data points. Using george or any homebuilt GP model for datasets larger than about 10,000 points is simply intractable, but with celerite you can do it in a breeze.\nRepeat the timing tests, but this time using celerite. Note that the Exponential Squared Kernel is not available in celerite, because it doesn't have the special form needed to make its factorization fast. Instead, use the Matern 3/2 kernel, which is qualitatively similar, and which can be approximated quite well in terms of the celerite basis functions:\npython\nkernel = celerite.terms.Matern32Term(np.log(1), np.log(1))\nNote that celerite accepts the log of the amplitude and the log of the timescale. Other than this, you should be able to compute the likelihood and draw a sample with the same syntax as george.\nHow much faster did it run?\n<div style=\"background-color: #D6EAF8; border-left: 15px solid #2E86C1;\">\n <h1 style=\"line-height:2.5em; margin-left:1em;\">Exercise 3</h1>\n</div>\n\nLet's use celerite for a real application: fitting an exoplanet transit model in the presence of correlated noise.\nBelow is a (fictitious) light curve for a star with a transiting planet. 
There is a transit visible to the eye at $t = 0$, which (say) is when you'd expect the planet to transit if its orbit were perfectly periodic. However, a recent paper claims that the planet shows transit timing variations, which are indicative of a second, perturbing planet in the system, and that a transit at $t = 0$ can be ruled out at 3 $\\sigma$. Your task is to verify this claim.\nAssume you have no prior information on the planet other than the transit occurs in the observation window, the depth of the transit is somewhere in the range $(0, 1)$, and the transit duration is somewhere between $0.1$ and $1$ day. You don't know the exact process generating the noise, but you are certain that there's correlated noise in the dataset, so you'll have to pick a reasonable kernel and estimate its hyperparameters.\nFit the transit with a simple inverted Gaussian with three free parameters:\npython\ndef transit_shape(depth, t0, dur):\n return -depth * np.exp(-0.5 * (t - t0) ** 2 / (0.2 * dur) ** 2)\nRead the celerite docs to figure out how to solve this problem efficiently. \nHINT: I borrowed heavily from this tutorial, so you might want to take a look at it...", "import matplotlib.pyplot as plt\nt, y, yerr = np.loadtxt(\"data/sample_transit.txt\", unpack=True)\nplt.errorbar(t, y, yerr=yerr, fmt=\".k\", capsize=0)\nplt.xlabel(\"time\")\nplt.ylabel(\"relative flux\");" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
lpsinger/healpy
doc/healpy_tutorial.ipynb
gpl-2.0
[ "healpy tutorial\nSee the Jupyter Notebook version of this tutorial at https://github.com/healpy/healpy/blob/master/doc/healpy_tutorial.ipynb\nSee an executed version of the notebook with embedded plots at https://gist.github.com/zonca/9c114608e0903a3b8ea0bfe41c96f255\nChoose the inline backend of matplotlib to display the plots inside the Jupyter Notebook", "import matplotlib.pyplot as plt\n\n%matplotlib inline\n\nimport numpy as np\nimport healpy as hp", "NSIDE and ordering\nMaps are simply numpy arrays, where each array element refers to a location in the sky as defined by the Healpix pixelization schemes (see the healpix website).\nNote: Running the code below in a regular Python session will not display the maps; it's recommended to use an IPython shell or a Jupyter notebook.\nThe resolution of the map is defined by the NSIDE parameter, which is generally a power of 2.", "NSIDE = 32\nprint(\n \"Approximate resolution at NSIDE {} is {:.2} deg\".format(\n NSIDE, hp.nside2resol(NSIDE, arcmin=True) / 60\n )\n)", "The function healpy.pixelfunc.nside2npix gives the number of pixels NPIX of the map:", "NPIX = hp.nside2npix(NSIDE)\nprint(NPIX)", "The same pixels in the map can be ordered in 2 ways, either RING, where they are numbered in the array in horizontal rings starting from the North pole:", "m = np.arange(NPIX)\nhp.mollview(m, title=\"Mollview image RING\")\nhp.graticule()", "The standard coordinates are the colatitude $\\theta$, $0$ at the North Pole, $\\pi/2$ at the equator and $\\pi$ at the South Pole and the longitude $\\phi$ between $0$ and $2\\pi$ eastward. In a Mollview projection, $\\phi=0$ is at the center and increases eastward toward the left of the map.\nWe can also use vectors to represent coordinates, for example vec is the normalized vector that points to $\\theta=\\pi/2, \\phi=3/4\\pi$:", "vec = hp.ang2vec(np.pi / 2, np.pi * 3 / 4)\nprint(vec)", "We can find the indices of all the pixels within $10$ degrees of that point and then change the 
value of the map at those indices:", "ipix_disc = hp.query_disc(nside=32, vec=vec, radius=np.radians(10))\n\nm = np.arange(NPIX)\nm[ipix_disc] = m.max()\nhp.mollview(m, title=\"Mollview image RING\")", "We can retrieve colatitude and longitude of each pixel using pix2ang; in this case we notice that the first 4 pixels cover the North Pole with pixel centers just ~$1.5$ degrees South of the Pole, all at the same latitude. The fifth pixel is already part of another ring of pixels.", "theta, phi = np.degrees(hp.pix2ang(nside=32, ipix=[0, 1, 2, 3, 4]))\n\ntheta\n\nphi", "The RING ordering is necessary for the Spherical Harmonics transforms, the other option is NESTED ordering which is very efficient for map domain operations because scaling up and down maps is achieved by just multiplying and rounding pixel indices.\nSee below how pixels are ordered in the NESTED scheme, notice the structure of the 12 HEALPix base pixels (NSIDE 1):", "m = np.arange(NPIX)\nhp.mollview(m, nest=True, title=\"Mollview image NESTED\")", "All healpy routines assume RING ordering, in fact as soon as you read a map with read_map, even if it was stored as NESTED, it is transformed to RING.\nHowever, you can work in NESTED ordering by passing the nest=True argument to most healpy routines.\nReading and writing maps to file\nFor the following section, it is required to download larger maps by executing from the terminal the bash script healpy_get_wmap_maps.sh which should be available in your path.\nThis will download the higher resolution WMAP data into the current directory.", "!healpy_get_wmap_maps.sh\n\nwmap_map_I = hp.read_map(\"wmap_band_iqumap_r9_7yr_W_v4.fits\")", "By default, input maps are converted to RING ordering, if they are in NESTED ordering. 
You can otherwise specify nest=True to retrieve a map in NESTED ordering, or nest=None to keep the ordering unchanged.\nBy default, read_map loads the first column, for reading other columns you can specify the field keyword.\nwrite_map writes a map to disk in FITS format, if the input map is a list of 3 maps, they are written to a single file as I,Q,U polarization components:", "hp.write_map(\"my_map.fits\", wmap_map_I, overwrite=True)", "Visualization\nAs shown above, mollweide projection with mollview is the most common visualization tool for HEALPIX maps. It also supports coordinate transformation, coord does Galactic to ecliptic coordinate transformation, norm='hist' sets a histogram equalized color scale and xsize increases the size of the image. graticule adds meridians and parallels.", "hp.mollview(\n wmap_map_I,\n coord=[\"G\", \"E\"],\n title=\"Histogram equalized Ecliptic\",\n unit=\"mK\",\n norm=\"hist\",\n min=-1,\n max=1,\n)\nhp.graticule()", "gnomview instead provides gnomonic projection around a position specified by rot, for example you can plot a projection of the galactic center, xsize and ysize change the dimension of the sky patch.", "hp.gnomview(wmap_map_I, rot=[0, 0.3], title=\"GnomView\", unit=\"mK\", format=\"%.2g\")", "mollzoom is a powerful tool for interactive inspection of a map, it provides a mollweide projection where you can click to set the center of the adjacent gnomview panel.\nMasked map, partial maps\nBy convention, HEALPIX uses $-1.6375 * 10^{30}$ to mark invalid or unseen pixels. 
This is stored in healpy as the constant UNSEEN.\nAll healpy functions automatically deal with maps with UNSEEN pixels, for example mollview marks in grey those sections of a map.\nThere is an alternative way of dealing with UNSEEN pixels based on the numpy MaskedArray class, hp.ma loads a map as a masked array, by convention the mask is 0 where the data are masked, while numpy defines data masked when the mask is True, so it is necessary to flip the mask.", "mask = hp.read_map(\"wmap_temperature_analysis_mask_r9_7yr_v4.fits\").astype(np.bool_)\nwmap_map_I_masked = hp.ma(wmap_map_I)\nwmap_map_I_masked.mask = np.logical_not(mask)", "Filling a masked array fills in the UNSEEN value and returns a standard array that can be used by mollview.\ncompressed() instead removes all the masked pixels and returns a standard array that can be used for example by the matplotlib hist() function:", "hp.mollview(wmap_map_I_masked.filled())\n\nplt.hist(wmap_map_I_masked.compressed(), bins=1000);", "Spherical Harmonics transforms\nhealpy provides bindings to the C++ HEALPIX library for performing spherical harmonic transforms.\nhp.anafast computes the angular power spectrum of a map:", "LMAX = 1024\ncl = hp.anafast(wmap_map_I_masked.filled(), lmax=LMAX)\nell = np.arange(len(cl))", "therefore we can plot a normalized CMB spectrum and write it to disk:", "plt.figure(figsize=(10, 5))\nplt.plot(ell, ell * (ell + 1) * cl)\nplt.xlabel(\"$\\ell$\")\nplt.ylabel(\"$\\ell(\\ell+1)C_{\\ell}$\")\nplt.grid()\nhp.write_cl(\"cl.fits\", cl, overwrite=True)", "Gaussian beam map smoothing is provided by hp.smoothing:", "wmap_map_I_smoothed = hp.smoothing(wmap_map_I, fwhm=np.radians(1.))\nhp.mollview(wmap_map_I_smoothed, min=-1, max=1, title=\"Map smoothed 1 deg\")", "For more information see the HEALPix primer" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
slundberg/shap
notebooks/tabular_examples/neural_networks/Census income classification with Keras.ipynb
mit
[ "Census income classification with Keras\nTo download a copy of this notebook visit github.", "from sklearn.model_selection import train_test_split\nfrom keras.layers import Input, Dense, Flatten, Concatenate, concatenate, Dropout, Lambda\nfrom keras.models import Model\nfrom keras.layers.embeddings import Embedding\nfrom tqdm import tqdm\nimport shap\n\n# print the JS visualization code to the notebook\nshap.initjs()", "Load dataset", "X,y = shap.datasets.adult()\nX_display,y_display = shap.datasets.adult(display=True)\n\n# normalize data (this is important for model convergence)\ndtypes = list(zip(X.dtypes.index, map(str, X.dtypes)))\nfor k,dtype in dtypes:\n if dtype == \"float32\":\n X[k] -= X[k].mean()\n X[k] /= X[k].std()\n\nX_train, X_valid, y_train, y_valid = train_test_split(X, y, test_size=0.2, random_state=7)", "Train Keras model", "# build model\ninput_els = []\nencoded_els = []\nfor k,dtype in dtypes:\n input_els.append(Input(shape=(1,)))\n if dtype == \"int8\":\n e = Flatten()(Embedding(X_train[k].max()+1, 1)(input_els[-1]))\n else:\n e = input_els[-1]\n encoded_els.append(e)\nencoded_els = concatenate(encoded_els)\nlayer1 = Dropout(0.5)(Dense(100, activation=\"relu\")(encoded_els))\nout = Dense(1)(layer1)\n\n# train model\nregression = Model(inputs=input_els, outputs=[out])\nregression.compile(optimizer=\"adam\", loss='binary_crossentropy')\nregression.fit(\n [X_train[k].values for k,t in dtypes],\n y_train,\n epochs=50,\n batch_size=512,\n shuffle=True,\n validation_data=([X_valid[k].values for k,t in dtypes], y_valid)\n)", "Explain predictions\nHere we take the Keras model trained above and explain why it makes different predictions for different individuals. 
SHAP expects model functions to take a 2D numpy array as input, so we define a wrapper function around the original Keras predict function.", "def f(X):\n return regression.predict([X[:,i] for i in range(X.shape[1])]).flatten()", "Explain a single prediction\nHere we use a selection of 50 samples from the dataset to represent \"typical\" feature values, and then use 500 perturbation samples to estimate the SHAP values for a given prediction. Note that this requires 500 * 50 evaluations of the model.", "explainer = shap.KernelExplainer(f, X.iloc[:50,:])\nshap_values = explainer.shap_values(X.iloc[299,:], nsamples=500)\nshap.force_plot(explainer.expected_value, shap_values, X_display.iloc[299,:])", "Explain many predictions\nHere we repeat the above explanation process for 50 individuals. Since we are using a sampling-based approximation, each explanation can take a couple of seconds depending on your machine setup.", "shap_values50 = explainer.shap_values(X.iloc[280:330,:], nsamples=500)\n\nshap.force_plot(explainer.expected_value, shap_values50, X_display.iloc[280:330,:])" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
alexvmarch/pandas_intro
01_parsing.ipynb
mit
[ "# First let's import some libraries we will use...\nimport numpy as np\nimport scipy as sp\nimport pandas as pd\n\nxyz_path = '100.xyz' # File path\nnframe = 100 # Number of frames (or snapshots)\nnat = 195 # Number of atoms\na = 12.55 # Cell size", "First approach\nWrite a function that reads an xyz trajectory file in. We are going to need to be able to separate numbers from atomic symbols; an XYZ trajectory file looks like:\nnat [unit]\n[first frame]\nsymbol1 x11 y11 z11\nsymbol2 x21 y21 z21\nnat [unit]\n[second frame]\nsymbol1 x12 y12 z12\nsymbol2 x22 y22 z22\nStuff in [ ] is optional (if units are absent, angstroms are assumed; a blank is included if no comments are present).\nHere is an example file parser. All it does is read line by line and return a list of these lines.", "def skeleton_naive_xyz_parser(path):\n '''\n Simple xyz parser.\n '''\n # Read in file\n lines = None\n with open(path) as f: \n lines = f.readlines()\n # Process lines\n # ...\n # Return processed lines\n # ...\n return lines\n\nlines = skeleton_naive_xyz_parser(xyz_path)\nlines", "CODING TIME: Try to expand the skeleton above to convert the line strings \ninto a list of xyz data rows (i.e. convert the strings to floats).\nIf you can't figure out any approach, run the cell below which will print one possible (of many) ways of \napproaching this problem.\nNote that you may have to run \"%load\" cells twice, once to load the code and once to instantiate the function.", "%load -s naive_xyz_parser, snippets/parsing.py\n\ndata = naive_xyz_parser(xyz_path)\ndata", "DataFrames\nPeople spend a lot of time reading code, especially their own code.\nLet's do two things using DataFrames: make our code more readable\nand not reinvent the wheel (i.e. parsers). We have pride in the \ncode we write! 
\nFirst an example of using DataFrames...", "np.random.seed(1)\ndf = pd.DataFrame(np.random.randint(0, 10, size=(6, 4)), columns=['A', 'B', 'C', 'D'])\ndf\n\ndf += 1\ndf\n\ndf.loc[:, 'A'] = [0, 0, 1, 1, 2, 2]\ndf\n\ndf.groupby('A')[['B', 'C', 'D']].apply(lambda f: f.sum())", "Second approach: pandas.read_csv\nLike 99% (my estimate) of all widely established Python packages, pandas is very well \ndocumented.\nLet's use this function of pandas to read in our well structured xyz data. \n\nnames: specifies column names (and implicitly number of columns)\ndelim_whitespace: tab or space separated files\n\nCODING TIME: Figure out what options we need to correctly parse in the XYZ trajectory data using pandas.read_csv", "def skeleton_pandas_xyz_parser(path):\n '''\n Parses xyz files using pandas read_csv function.\n '''\n # Read from disk\n df = pd.read_csv(path, delim_whitespace=True, names=['symbol', 'x', 'y', 'z'])\n # Remove nats and comments\n # ...\n # ...\n return df\n\ndf = skeleton_pandas_xyz_parser(xyz_path)\ndf.head()", "One possible solution (run this only if you have already finished the above!):", "%load -s pandas_xyz_parser, snippets/parsing.py\n\ndf = pandas_xyz_parser(xyz_path)\ndf.head()", "Testing your functions is key\nA couple of quick tests should suffice...though these barely make the cut...", "print(len(df) == nframe * nat) # Make sure that we have the correct number of rows\nprint(df.dtypes) # Make sure that each column's type is correct", "Let's attach a meaningful index\nThis is easy since we know the number of atoms and number of frames...", "df = pandas_xyz_parser(xyz_path)\ndf.index = pd.MultiIndex.from_product((range(nframe), range(nat)), names=['frame', 'atom'])\ndf", "CODING TIME: Put parsing and indexing together into a single function...", "%load -s parse, snippets/parsing.py", "Saving your work!\nWe did all of this work parsing our data, but this Python kernel won't be alive eternally so let's save\nour data so that we can load it later 
(i.e. in the next notebook!).\nWe are going to create an HDF5 store to save our DataFrame(s) to disk. \nHDF is a high performance, portable, binary data storage format designed with scientific data exchange in mind. Use it!\nAlso note that pandas has extensive IO functionality.", "xyz = parse(xyz_path, nframe, nat)\nstore = pd.HDFStore('xyz.hdf5', mode='w')\nstore.put('xyz', xyz)\nstore.close()", "Though there are a bunch of improvements/features we could make to our parse function...\n...let's move on to step two" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
notconfusing/WIGI
chi/PAH analysis.ipynb
mit
[ "import pandas\nfrom collections import defaultdict\n\npah = pandas.read_csv('enwiki_pah_misalignment.tsv',sep='\\t')\n\npah.head()\n\npah.groupby('dissonance').head()\n\nlines = open('english-qid-names2016-03-27.csv','r').readlines()\n\nqidnames = {}\nfor line in lines:\n qid, ennamen = line.split(',', maxsplit=1)\n enname = ennamen.split('\\n')[0]\n if enname:\n qidnames[qid]=enname\n\nimport json\n\njson.dump(qidnames, open('qid_enpage.json','w'))\n\nendf = pandas.DataFrame.from_dict(qidnames,orient='index')\n\nlen(endf)", "Todo:\n + map gender-enwiki-page-id\n + dissonance class priors\n + posteriors by gender\n + posterior for no gender.", "bigdf = pandas.read_csv('/media/notconfusing/9d9b45fc-55f7-428c-a228-1c4c4a1b728c/home/maximilianklein/snapshot_data/2016-01-03/gender-index-data-2016-01-03.csv')\n\ngender_qid_df = bigdf[['qid','gender']]\n\ndef map_gender(x):\n if isinstance(x,float):\n return 'no gender'\n else:\n gen = x.split('|')[0]\n if gen == 'Q6581072':\n return 'female'\n elif gen == 'Q6581097':\n return 'male'\n else:\n return 'nonbin'\ngender_qid_df['gender'] = gender_qid_df['gender'].apply(map_gender)\n\ndef qid2enname(x):\n try:\n return qidnames[x]\n except KeyError:\n return None\ngender_qid_df['enname'] = gender_qid_df['qid'].apply(qid2enname)\n\nenname_id = pandas.read_csv('/home/notconfusing/workspace/wikidumpparse/wikidump/mediawiki-utilities/enname_id.txt',sep='\\t',names=['enname','pageid'])\n\ngender_page_id = pandas.merge(gender_qid_df, enname_id, how='inner',on='enname')\n\npah_gender = pandas.merge(pah, gender_page_id, how='left', on='pageid')\n\npah_gender\n\nlen(pah), len(gender_page_id), len(pah_gender)", "Rel risk. 
P(gender|misaligned)/P(gender)\nWhat proportion of the misaligned dataset is about women?\nFor each gender, what proportion of each misalignment group do they represent?", "pah_gender['gender'] = pah_gender['gender'].fillna('nonbio')\n\nSE = pah_gender[(pah_gender['dissonance'] == 'Moderate negative') | (pah_gender['dissonance'] == 'High negative')]\nNI = pah_gender[(pah_gender['dissonance'] == 'Moderate positive') | (pah_gender['dissonance'] == 'High positive')]\nrel_risk = defaultdict(dict)\nfor risk, risk_name in [(SE,'Spent Effort'), (NI,'Needs Improvement')]:\n for gender in ['female','male','nonbin','nonbio']:\n gen_mis = len(risk[risk['gender'] == gender])\n p_gen_mis = gen_mis/len(risk) #p(gender|misalignment)\n p_gen = len(pah_gender[pah_gender['gender'] == gender]) / len(pah_gender) #p(gender)\n print(p_gen_mis, p_gen)\n rel_risk[gender][risk_name] = p_gen_mis/p_gen # rel risk\n\njava_min_int = -2147483648\nallrecs = pandas.read_csv('/media/notconfusing/9d9b45fc-55f7-428c-a228-1c4c4a1b728c/home/maximilianklein/snapshot_data/2016-01-03/gender-index-data-2016-01-03.csv',na_values=[java_min_int])\n\ndef sum_column(q_str):\n if type(q_str) is str:\n qs = q_str.split('|')\n return len(qs) #cos the format will always end with a |\nfor col in ['site_links']:\n allrecs[col] = allrecs[col].apply(sum_column)\n\nallrecs['site_links'].head(20)\n\nallrecs['gender'] = allrecs['gender'].apply(map_gender)\n\nsl_risk = defaultdict(dict)\nsl_risk['nonbio']['Sitelink Ratio'] = 1\nfor gender in ['female','male','nonbin']:\n gend_df = allrecs[allrecs['gender']==gender]\n gend_df_size = len(gend_df)\n avg_sl = (gend_df['site_links'].sum() / gend_df_size) / 2.6\n sl_risk[gender]['Sitelink Ratio'] = avg_sl\n\nsl_risk_df = pandas.DataFrame.from_dict(sl_risk, orient='index')\n\nrel_risk_df = pandas.DataFrame.from_dict(rel_risk,orient=\"index\")\n\n\nrisk_df = pandas.DataFrame.join(sl_risk_df,rel_risk_df)\n\nrisk_df.index = ['Female','Male','Non-binary','Non-biography']\n\nprint(risk_df.to_latex(columns = ['Needs Improvement','Spent Effort', 'Sitelink Ratio'],float_format=lambda n:'%.2f' %n))" ]
[ "code", "markdown", "code", "markdown", "code" ]
maartenbreddels/ipyvolume
docs/source/pythreejs.ipynb
mit
[ "Integration with pythreejs\nipyvolume uses parts of pythreejs, giving a lot of flexibility to tweak the visualizations or behaviour.\nMaterials\nThe Scatter object has a material and line_material object, both of which are pythreejs ShaderMaterial objects: https://pythreejs.readthedocs.io/en/stable/api/materials/ShaderMaterial_autogen.html.", "import ipywidgets as widgets\nimport numpy as np\nimport ipyvolume as ipv\n\n# a scatter plot\nx, y, z = np.random.normal(size=(3, 100))\nfig = ipv.figure()\nscatter = ipv.scatter(x, y, z, marker='box')\nscatter.connected = True # draw connecting lines\nipv.show()", "Using scatter.material we can tweak the material setting:", "scatter.material.visible = False", "Or even connect a toggle button to a line_material property.", "toggle_lines = widgets.ToggleButton(description=\"Show lines\")\nwidgets.jslink((scatter.line_material, 'visible'), (toggle_lines, 'value'))\ntoggle_lines", "Controls\nipyvolume has builtin controls. For more flexibility, a Controls class from https://pythreejs.readthedocs.io/en/stable/api/controls/index.html can be constructed.", "import pythreejs\nimport ipyvolume as ipv\nimport numpy as np\nfig = ipv.figure()\nscatter = ipv.scatter(x, y, z, marker='box')\nipv.show()\n\ncontrol = pythreejs.OrbitControls(controlling=fig.camera)\n# assigning to fig.controls will overwrite the builtin controls\nfig.controls = control\ncontrol.autoRotate = True\n# the controls object does not update itself, but if we toggle this setting, ipyvolume will update the controls\nfig.render_continuous = True\n\n\n\ncontrol.autoRotate = True\ntoggle_rotate = widgets.ToggleButton(description=\"Rotate\")\nwidgets.jslink((control, 'autoRotate'), (toggle_rotate, 'value'))\ntoggle_rotate", "Camera\nThe camera property of ipyvolume is by default a PerspectiveCamera, but other cameras should also work: https://pythreejs.readthedocs.io/en/stable/api/cameras/index.html", "text = widgets.Text()\nwidgets.jslink((fig.camera, 'position'), (text, 'value'))\ntext" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
davofis/computational_seismology
06_finite_elements/fe_elastic_1d_solution.ipynb
gpl-3.0
[ "<div style='background-image: url(\"../../share/images/header.svg\") ; padding: 0px ; background-size: cover ; border-radius: 5px ; height: 250px'>\n <div style=\"float: right ; margin: 50px ; padding: 20px ; background: rgba(255 , 255 , 255 , 0.7) ; width: 50% ; height: 150px\">\n <div style=\"position: relative ; top: 50% ; transform: translatey(-50%)\">\n <div style=\"font-size: xx-large ; font-weight: 900 ; color: rgba(0 , 0 , 0 , 0.8) ; line-height: 100%\">Computational Seismology</div>\n <div style=\"font-size: large ; padding-top: 20px ; color: rgba(0 , 0 , 0 , 0.5)\">Finite Element Method - 1D Elastic Wave Equation</div>\n </div>\n </div>\n</div>\n\nSeismo-Live: http://seismo-live.org\nAuthors:\n\nDavid Vargas (@dvargas)\nHeiner Igel (@heinerigel)\n\nBasic Equations\nThis notebook presents a finite element code for the 1D elastic wave equation. Additionally, a solution using finite difference scheme is given for comparison.\nThe problem of solving the wave equation\n\\begin{equation}\n\\rho(x) \\partial_t^2 u(x,t) = \\partial_x (\\mu(x) \\partial_x u(x,t)) + f(x,t)\n\\end{equation}\nusing the finite element method is done after a series of steps performed on the above equation.\n1) We first obtain a weak form of the wave equation by integrating over the entire physical domain $D$ and at the same time multiplying by some basis $\\varphi_{i}$. 
\n2) Integration by parts and implementation of the stress-free boundary condition is performed.\n3) We approximate our unknown displacement field $u(x, t)$ by a sum over space-dependent basis functions $\\varphi_i$ weighted by time-dependent coefficients $u_i(t)$.\n\\begin{equation}\nu(x,t) \\ \\approx \\ \\overline{u}(x,t) \\ = \\ \\sum_{i=1}^{n} u_i(t) \\ \\varphi_i(x)\n\\end{equation}\n4) Utilize the same basis functions used to expand $u(x, t)$ as test functions in the weak form, this is the Galerkin principle.\n5) We can turn the continuous weak form into a system of linear equations by considering the approximated displacement field.\n\\begin{equation}\n\\mathbf{M}^T\\partial_t^2 \\mathbf{u} + \\mathbf{K}^T\\mathbf{u} = \\mathbf{f}\n\\end{equation}\n6) For the second time-derivative, we use a standard finite-difference approximation. Finally, we arrive at the explicit time extrapolation scheme.\n\\begin{equation}\n\\mathbf{u}(t + dt) = dt^2 (\\mathbf{M}^T)^{-1}[\\mathbf{f} - \\mathbf{K}^T\\mathbf{u}] + 2\\mathbf{u} - \\mathbf{u}(t-dt).\n\\end{equation}\nwhere $\\mathbf{M}$ is known as the mass matrix, and $\\mathbf{K}$ the stiffness matrix.\n7) As interpolating functions, we choose interpolants such that $\\varphi_{i}(x_{i}) = 1$ and zero elsewhere. Then, we transform the space coordinate into a local system. 
According to $\\xi = x − x_{i}$ and $h_{i} = x_{i+1} − x_{i}$, we have:\n<p style=\"width:35%;float:right;padding-left:50px\">\n<img src=fig_fe_basis_h.png>\n<span style=\"font-size:smaller\">\n</span>\n</p>\n\n\\begin{equation}\n \\varphi_{i}(\\xi) =\n \\begin{cases}\n \\frac{\\xi}{h_{i-1}} + 1 & \\quad \\text{if} \\quad -h_{i-1} \\le \\xi \\le 0\\\n 1 + \\frac{\\xi}{h_{i}} & \\quad \\text{if} \\quad 0 \\le \\xi \\le h_{i}\\\n 0 & \\quad elsewhere\\\n \\end{cases}\n\\end{equation}\nwith the corresponding derivatives\n\\begin{equation}\n \\partial_{\\xi}\\varphi_{i}(\\xi) =\n \\begin{cases}\n \\frac{1}{h_{i-1}} & \\quad \\text{if} \\quad -h_{i-1} \\le \\xi \\le 0\\\n -\\frac{1}{h_{i}} & \\quad \\text{if} \\quad 0 \\le \\xi \\le h_{i}\\\n 0 & \\quad elsewhere\\\n \\end{cases}\n\\end{equation}\nThe figure on the left-hand side illustrates the shape of $\\varphi_{i}(\\xi)$ and $\\partial_{\\xi}\\varphi_{i}(\\xi)$ with varying $h$.\nCode implementation starts with the initialization of a particular setup of our problem. Then, we define the source that introduces perturbations, followed by initialization of the mass and stiffness matrices. Finally, time extrapolation is done.", "# Import all necessary libraries, this is a configuration step for the exercise.\n# Please run it before the simulation code!\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib.animation as animation\n\n# Show the plots in the Notebook\nplt.switch_backend(\"nbagg\")", "1. Initialization of setup", "# Initialization of setup\n# ---------------------------------------------------------------\n# Basic parameters\nnt = 2000 # Number of time steps\nvs = 3000 # Wave velocity [m/s] \nro0 = 2500 # Density [kg/m^3]\nnx = 1000 # Number of grid points \nisx = 500 # Source location [m] \nxmax = 10000. 
# Maximum length\neps = 0.5 # Stability limit\niplot = 20 # Snapshot frequency\n\ndx = xmax/(nx-1) # calculate space increment\nx = np.arange(0, nx)*dx # initialize space coordinates\nx = np.transpose(x)\n\nh = np.diff(x) # Element sizes [m]\n\n# parameters\nro = x*0 + ro0\nmu = x*0 + ro*vs**2\n\n# time step from stability criterion\ndt = 0.5*eps*dx/np.max(np.sqrt(mu/ro))\n# initialize time axis\nt = np.arange(1, nt+1)*dt \n\n# ---------------------------------------------------------------\n# Initialize fields\n# ---------------------------------------------------------------\nu = np.zeros(nx)\nuold = np.zeros(nx)\nunew = np.zeros(nx)\n\np = np.zeros(nx)\npold = np.zeros(nx)\npnew = np.zeros(nx)", "2. Source time function\nIn 1D the propagating signal is an integral of the source time function. As we look for a Gaussian waveform, we initialize the source time function $f(t)$ using the first derivative of a Gaussian function.\n\\begin{equation}\nf(t) = -\\dfrac{2}{\\sigma^2}(t - t_0)e^{-\\dfrac{(t - t_0)^2}{\\sigma^2}}\n\\end{equation}\nExercise 1\nInitialize a source time function called 'src'. Use $\\sigma = 20 dt$ as Gaussian width, and time shift $t_0 = 3\\sigma$. Then, visualize the source in a given plot.", "# Initialization of the source time function\n# ---------------------------------------------------------------\npt = 20*dt # Gaussian width\nt0 = 3*pt # Time shift\nsrc = -2/pt**2 * (t-t0) * np.exp(-1/pt**2 * (t-t0)**2)\n\n# Source vector\n# ---------------------------------------------------------------\nf = np.zeros(nx); f[isx:isx+1] = f[isx:isx+1] + 1.\n\n# ---------------------------------------------------------------\n# Plot source time function\n# ---------------------------------------------------------------\nplt.plot(t, src, color='b', lw=2, label='Source time function')\nplt.ylabel('Amplitude', size=16)\nplt.xlabel('time', size=16)\nplt.legend()\nplt.grid(True)\nplt.show()", "3. 
The Mass Matrix\nHaving implemented the desired source, now we initialize the mass and stiffness matrices. In general, the mass matrix is given by\n\\begin{equation}\nM_{ij} = \\int_{D} \\rho \\varphi_i \\varphi_j dx = \\int_{D_{\\xi}} \\rho \\varphi_i \\varphi_j d\\xi\n\\end{equation}\nnext, the defined basis functions are introduced and some algebraic treatment is done to arrive at the explicit form of the mass matrix\nExercise 2\nImplement the mass matrix \n\\begin{equation}\nM_{ij} = \\frac{\\rho h}{6}\n \\begin{pmatrix}\n \\ddots & & & & 0\\\n 1 & 4 & 1 & & \\\n & 1 & 4 & 1 & \\\n & & 1 & 4 & 1\\\n 0 & & & & \\ddots\n \\end{pmatrix} \n\\end{equation}\nCompute the inverse mass matrix and display your result to visually inspect what it looks like", "# ---------------------------------------------------------------\n# Mass matrix M_ij (Eq 6.56)\n# ---------------------------------------------------------------\nM = np.zeros((nx,nx), dtype=float)\nfor i in range(1, nx-1):\n for j in range (1, nx-1):\n if j==i:\n M[i,j] = (ro[i-1]*h[i-1] + ro[i]*h[i])/3\n elif j==i+1:\n M[i,j] = ro[i]*h[i]/6\n elif j==i-1:\n M[i,j] = ro[i-1]*h[i-1]/6\n else:\n M[i,j] = 0\n \n# Corner elements\nM[0,0] = ro[0]*h[0]/3\nM[nx-1,nx-1] = ro[nx-1]*h[nx-2]/3\n# Invert M\nMinv = np.linalg.inv(M)\n\n# ---------------------------------------------------------------\n# Display inverse mass matrix inv(M)\n# ---------------------------------------------------------------\nplt.figure()\nplt.imshow(Minv)\nplt.title('Inverse Mass Matrix $\\mathbf{M}^{-1}$')\nplt.axis(\"off\")\nplt.tight_layout()\nplt.show()", "4. The Stiffness matrix\nOn the other hand, the general form of the stiffness matrix is\n\\begin{equation}\nK_{ij} = \\int_{D} \\mu \\partial_x\\varphi_i \\partial_x\\varphi_j dx = \\int_{D_{\\xi}} \\mu \\partial_\\xi\\varphi_i \\partial_\\xi\\varphi_j d\\xi\n\\end{equation} \nat this point, the defined basis functions are introduced. 
Again, with the help of some algebraic treatment, we arrive at the explicit form of the stiffness matrix\nExercise 3\nImplement the stiffness matrix \n\\begin{equation}\nK_{ij} = \\frac{\\mu}{h}\n \\begin{pmatrix}\n \\ddots & & & & 0\\\n -1 & 2 & -1 & & \\\n &-1 & 2 & -1 & \\\n & & -1 & 2 & -1\\\n 0 & & & & \\ddots\n \\end{pmatrix} \n\\end{equation}\nDisplay the stiffness matrix to visually inspect what it looks like", "# ---------------------------------------------------------------\n# Stiffness matrix Kij (Eq 6.60)\n# ---------------------------------------------------------------\nK = np.zeros((nx,nx), dtype=float)\nfor i in range(1, nx-1):\n for j in range(1, nx-1):\n if i==j:\n K[i,j] = mu[i-1]/h[i-1] + mu[i]/h[i]\n elif i==j+1:\n K[i,j] = -mu[i-1]/h[i-1]\n elif i+1==j:\n K[i,j] = -mu[i]/h[i]\n else:\n K[i,j] = 0\n\nK[0,0] = mu[0]/h[0]\nK[nx-1,nx-1] = mu[nx-1]/h[nx-2]\n\n# ---------------------------------------------------------------\n# Display stiffness matrix K\n# ---------------------------------------------------------------\nplt.figure()\nplt.imshow(K)\nplt.title('Stiffness Matrix $\\mathbf{K}$')\nplt.axis(\"off\")\n\nplt.tight_layout()\nplt.show()", "5. Finite differences matrices\nWe implement a finite difference scheme in order to compare with the finite elements solution. \nExercise 4\nImplement the finite differences matrices $M$ and $D$. 
Where $M$ is a diagonal mass matrix containing the inverse densities, and $D$ is the differentiation matrix \n\\begin{equation}\nD_{ij} = \\frac{\\mu}{dt^2}\n \\begin{pmatrix}\n -2 & 1 & & & \\\n 1 & -2 & 1 & & \\\n & & \\ddots & & \\\n & & 1 & -2 & 1\\\n & & & 1 & -2\n \\end{pmatrix} \n\\end{equation}\nDisplay both matrices to visually inspect what they look like", "# Initialize finite differences matrices (Eq 6.61)\n# ---------------------------------------------------------------\nMf = np.zeros((nx,nx), dtype=float)\nD = np.zeros((nx,nx), dtype=float)\ndx = h[1]\n\nfor i in range(nx):\n Mf[i,i] = 1./ro[i]\n if i>0:\n if i<nx-1:\n D[i+1,i] = 1\n D[i-1,i] = 1\n D[i,i] = -2\n \nD = ro0 * vs**2 * D/dx**2\n\n# ---------------------------------------------------------------\n# Display differences matrices\n# ---------------------------------------------------------------\nfig, (ax1, ax2) = plt.subplots(1, 2)\nax1.imshow(D)\nax1.set_title('Stiffness Differential Matrix $\\mathbf{D}$')\nax1.axis(\"off\")\n\nax2.imshow(Mf)\nax2.set_title('Mass Matrix $\\mathbf{M_f}$')\nax2.axis(\"off\")\n\nplt.tight_layout()\nplt.show()", "6. 
Finite element solution\nFinally we implement the finite element solution using the computed mass $M$ and stiffness $K$ matrices together with a finite-difference extrapolation scheme\n\\begin{equation}\n\\mathbf{u}(t + dt) = dt^2 (\\mathbf{M}^T)^{-1}[\\mathbf{f} - \\mathbf{K}^T\\mathbf{u}] + 2\\mathbf{u} - \\mathbf{u}(t-dt).\n\\end{equation}", "# Initialize animated plot\n# ---------------------------------------------------------------\nplt.figure(figsize=(12,4))\n\nline1 = plt.plot(x, u, 'k', lw=1.5, label='FEM')\nline2 = plt.plot(x, p, 'r', lw=1.5, label='FDM')\nplt.title('Finite elements 1D Animation', fontsize=16)\nplt.ylabel('Amplitude', fontsize=12)\nplt.xlabel('x (m)', fontsize=12)\n\nplt.ion() # set interactive mode\nplt.show()\n\n# ---------------------------------------------------------------\n# Time extrapolation\n# ---------------------------------------------------------------\nfor it in range(nt):\n # --------------------------------------\n # Finite Element Method\n unew = (dt**2) * Minv @ (f*src[it] - K @ u) + 2*u - uold \n uold, u = u, unew\n \n # --------------------------------------\n # Finite Difference Method\n pnew = (dt**2) * Mf @ (D @ p + f/dx*src[it]) + 2*p - pold\n pold, p = p, pnew\n \n # -------------------------------------- \n # Animation plot. Display both solutions\n if not it % iplot:\n for l in line1:\n l.remove()\n del l\n for l in line2:\n l.remove()\n del l\n line1 = plt.plot(x, u, 'k', lw=1.5, label='FEM')\n line2 = plt.plot(x, p, 'r', lw=1.5, label='FDM')\n plt.legend()\n plt.gcf().canvas.draw()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
RadoslawDryzner/LeRepoDuGuerrier
Homework01/Homework 1.ipynb
mit
[ "Table of Contents\n<p><div class=\"lev1\"><a href=\"#Task-1.-Compiling-Ebola-Data\"><span class=\"toc-item-num\">Task 1.&nbsp;&nbsp;</span>Compiling Ebola Data</a></div>\n <div class=\"lev1\"><a href=\"#Task-2.-RNA-Sequences\"><span class=\"toc-item-num\">Task 2.&nbsp;&nbsp;</span>RNA Sequences</a></div>\n <div class=\"lev1\"><a href=\"#Task-3.-Class-War-in-Titanic\"><span class=\"toc-item-num\">Task 3.&nbsp;&nbsp;</span>Class War in Titanic</a></div></p>", "DATA_FOLDER = 'Data' # Use the data folder provided in Tutorial 02 - Intro to Pandas.\n%matplotlib inline\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport datetime\nfrom dateutil.parser import parse\nfrom os import listdir\nfrom os.path import isfile, join\nsns.set_context('notebook')", "Task 1. Compiling Ebola Data\nThe DATA_FOLDER/ebola folder contains summarized reports of Ebola cases from three countries (Guinea, Liberia and Sierra Leone) during the recent outbreak of the disease in West Africa. For each country, there are daily reports that contain various information about the outbreak in several cities in each country.\nUse pandas to import these data files into a single Dataframe.\nUsing this DataFrame, calculate for each country, the daily average per month of new cases and deaths.\nMake sure you handle all the different expressions for new cases and deaths that are used in the reports.\nFirst, we define some helpful functions that will help us during the parsing of the data.\n- get_files: returns all the .csv files for a given country", "def get_files(country):\n path = DATA_FOLDER + \"/ebola/\" + country + \"_data/\"\n return [f for f in listdir(path) if isfile(join(path, f))]", "sum_row: for a given row, returns the total value for the new cases / deaths. 
We first defined this function as the sum of all new cases / deaths in all provinces, but we discovered some strange data for some provinces, so we decided to only take into account the 'total' column\nsum_rows: sums all the rows given as argument", "def sum_row(row, total_col):\n return float(row[total_col].values[0])\n\ndef sum_rows(rows, total_col):\n tot = 0\n for row in rows:\n tot += sum_row(row, total_col)\n return tot ", "Now, we define for each country a function which, for a given file, returns a dictionary with the country, date, upper and lower bounds for the new cases, and upper and lower bounds for the new deaths.\nAs we don't know if the new cases / deaths for the 'probable' and 'suspected' cases are reliable, we decided to create an upper bound with the sum of the 'confirmed', 'probable' and 'suspected' new cases / deaths, and a lower bound with only the 'confirmed' new cases / deaths.\nThe structure of these functions is the same for each country; only the names in the description of the data change.", "def get_row_guinea(file):\n country = 'guinea'\n date = file[:10]\n raw = pd.read_csv(DATA_FOLDER + \"/ebola/\" + country + \"_data/\" + file)\n total_col = \"Totals\"\n \n new_cases_lower = sum_row(raw[raw.Description == \"New cases of confirmed\"], total_col)\n new_cases_upper = sum_row(raw[raw.Description == \"Total new cases registered so far\"], total_col)\n \n new_deaths_lower = sum_row(raw[(raw.Description == \"New deaths registered today (confirmed)\") | (raw.Description == \"New deaths registered\")], total_col)\n new_deaths_upper = sum_row(raw[(raw.Description == \"New deaths registered today\") | (raw.Description == \"New deaths registered\")], total_col)\n \n return {'Country' : country, 'Date' : parse(date), 'NewCasesLower' : new_cases_lower, 'NewCasesUpper' : new_cases_upper, 'NewDeathsLower' : new_deaths_lower, 'NewDeathsUpper' : new_deaths_upper}\n\n\ndef get_row_liberia(file):\n country = 'liberia'\n date = file[:10]\n raw = 
pd.read_csv(DATA_FOLDER + \"/ebola/\" + country + \"_data/\" + file).fillna(0)\n total_col = \"National\"\n \n new_cases_lower = sum_row(raw[raw.Variable == \"New case/s (confirmed)\"], total_col)\n list_cases_upper = ([\"New Case/s (Suspected)\", \n \"New Case/s (Probable)\",\n \"New case/s (confirmed)\"])\n new_cases_upper = sum_rows([raw[raw.Variable == row] for row in list_cases_upper], total_col)\n \n new_deaths_lower = sum_row(raw[raw.Variable == \"Newly reported deaths\"], total_col)\n new_deaths_upper = new_deaths_lower\n \n return {'Country' : country, 'Date' : parse(date), 'NewCasesLower' : new_cases_lower, 'NewCasesUpper' : new_cases_upper, 'NewDeathsLower' : new_deaths_lower, 'NewDeathsUpper' : new_deaths_upper}", "As the files for the Sierra Leone does not contain data for the new deaths, we first extract the total deaths for each day, and we will process them later to get the new deaths.", "def get_row_sl(file):\n country = 'sl'\n date = file[:10]\n raw = pd.read_csv(DATA_FOLDER + \"/ebola/\" + country + \"_data/\" + file).fillna(0)\n total_col = \"National\"\n \n new_cases_lower = sum_row(raw[raw.variable == \"new_confirmed\"], total_col)\n list_cases_upper = ([\"new_suspected\", \n \"new_probable\",\n \"new_confirmed\"])\n new_cases_upper = sum_rows([raw[raw.variable == row] for row in list_cases_upper], total_col)\n \n list_death_upper = ([\"death_suspected\", \n \"death_probable\",\n \"death_confirmed\"])\n total_death_upper = sum_rows([raw[raw.variable == row] for row in list_death_upper], total_col)\n total_death_lower = sum_row(raw[raw.variable == \"death_confirmed\"], total_col)\n \n return {'Country' : country, 'Date' : parse(date), 'NewCasesLower' : new_cases_lower, 'NewCasesUpper' : new_cases_upper, 'TotalDeathLower' : total_death_lower, 'TotalDeathUpper' : total_death_upper}\n\nrows_guinea = [get_row_guinea(file) for file in get_files(\"guinea\")]\n\nrows_liberia = [get_row_liberia(file) for file in get_files(\"liberia\")]", "We now 
transform the data for the Sierra Leone :\n- we first create a new dictionary for which the keys are date, and the values are the previously extracted values from the .csv files.\n- then for each value in this dictionary, we try to get the value of the day before, and perform the difference to get the new deaths of this day.", "rows_sl_total_deaths = [get_row_sl(file) for file in get_files(\"sl\")]\ndic_sl_total_deaths = {}\nfor row in rows_sl_total_deaths:\n dic_sl_total_deaths[row['Date']] = row\n \nrows_sl = []\nfor date, entry in dic_sl_total_deaths.items():\n date_before = date - datetime.timedelta(days=1)\n if date_before in dic_sl_total_deaths:\n \n if entry['TotalDeathUpper'] != 0 and dic_sl_total_deaths[date_before]['TotalDeathUpper'] != 0 and entry['TotalDeathLower'] != 0 and dic_sl_total_deaths[date_before]['TotalDeathLower'] != 0: \n copy = dict(entry)\n del copy['TotalDeathUpper']\n del copy['TotalDeathLower']\n \n copy['NewDeathsUpper'] = entry['TotalDeathUpper'] - dic_sl_total_deaths[date_before]['TotalDeathUpper']\n copy['NewDeathsLower'] = entry['TotalDeathLower'] - dic_sl_total_deaths[date_before]['TotalDeathLower']\n\n rows_sl.append(copy)", "We can now insert the data in a dataframe. 
For Liberia, December's data is in a completely different format so we dropped it: for instance, for some days, the new cases are the new cases for the day and for some other they are the total cases for this country.", "raw_dataframe = pd.DataFrame(columns=['Country', 'Date', 'NewCasesLower', 'NewCasesUpper', 'NewDeathsLower', 'NewDeathsUpper'])\nfor row in rows_sl, rows_guinea:\n raw_dataframe = raw_dataframe.append(row, ignore_index = True)\nfor row in rows_liberia:\n if row['Date'].month != 12: #December data is erroneous\n raw_dataframe = raw_dataframe.append(row, ignore_index = True)\n \nraw_dataframe\n\ndataframe = raw_dataframe.set_index(['Country', 'Date'])\n\ndataframe_no_day = raw_dataframe\ndataframe_no_day['Year'] = raw_dataframe['Date'].apply(lambda x: x.year)\ndataframe_no_day['Month'] = raw_dataframe['Date'].apply(lambda x: x.month)\nfinal_df = dataframe_no_day[['Country', 'Year', 'Month', 'NewCasesLower', 'NewCasesUpper', 'NewDeathsLower', 'NewDeathsUpper']].groupby(['Country', 'Year', 'Month']).mean()\nfinal_df", "Finally, to have some final general idea for the data, we average the bounds.", "s1 = final_df[['NewCasesLower', 'NewCasesUpper']].mean(axis=1)\ns2 = final_df[['NewDeathsLower', 'NewDeathsUpper']].mean(axis=1)\nfinal = pd.concat([s1, s2], axis=1)\nfinal.columns = ['NewCasesAverage', 'NewDeathsAverage']\nfinal", "Task 2. RNA Sequences\nIn the DATA_FOLDER/microbiome subdirectory, there are 9 spreadsheets of microbiome data that was acquired from high-throughput RNA sequencing procedures, along with a 10<sup>th</sup> file that describes the content of each. \nUse pandas to import the first 9 spreadsheets into a single DataFrame.\nThen, add the metadata information from the 10<sup>th</sup> spreadsheet as columns in the combined DataFrame.\nMake sure that the final DataFrame has a unique index and all the NaN values have been replaced by the tag unknown.\nWe load the first spreadsheet from the file's Sheet 1. 
Then we add a new column that is the same for all the data in this import, which corresponds to the barcode of the code.\nThen we rename the columns for more clarity.", "mid = pd.read_excel(DATA_FOLDER + '/microbiome/MID1.xls', sheetname='Sheet 1', header=None)\nmid.fillna('unknown', inplace=True)\nmid['BARCODE'] = 'MID1'\nmid.columns = ['Taxon', 'Count', 'BARCODE']", "Now we repeat this operation for every other spreadsheet except the metadata. At each iteration we simply concatenate the data at the end of the previous data, this accumulating all the files' data into a single dataframe. We don't care about any index right now since we will use a random one later.", "for i in range(2, 10):\n midi = pd.read_excel(DATA_FOLDER + '/microbiome/MID' + str(i) + '.xls', sheetname='Sheet 1', header=None)\n midi.fillna('unknown', inplace=True)\n midi['BARCODE'] = 'MID' + str(i)\n midi.columns = ['Taxon', 'Count', 'BARCODE']\n mid = pd.concat([mid, midi])", "Finally, we do a merge with the metadata. We join on the BARCODE column. This column will be the index of the metadata when we import it in this case. Finally we set the index for the three columns BARCODE, GROUP and SAMPLE which are all the columns of the metada and are unique.\nThe only NaN value we found was the NA value on the metadata, which may indicate that there is no sample for the first group. We replaced it anyway by unknown.", "metadata = pd.read_excel(DATA_FOLDER + '/microbiome/metadata.xls', sheetname='Sheet1', index_col=0)\nmetadata.fillna('unknown', inplace=True)\nmerged = pd.merge(mid, metadata, right_index=True, left_on='BARCODE')\nmerged = merged.set_index(keys=['BARCODE', 'Taxon'])\nmerged", "Task 3. Class War in Titanic\nUse pandas to import the data file Data/titanic.xls. 
It contains data on all the passengers that travelled on the Titanic.", "from IPython.core.display import HTML\nHTML(filename=DATA_FOLDER+'/titanic.html')", "For each of the following questions state clearly your assumptions and discuss your findings:\n1. Describe the type and the value range of each attribute. Indicate and transform the attributes that can be Categorical. \n2. Plot histograms for the travel class, embarkation port, sex and age attributes. For the latter one, use discrete decade intervals. \n3. Calculate the proportion of passengers by cabin floor. Present your results in a pie chart.\n4. For each travel class, calculate the proportion of the passengers that survived. Present your results in pie charts.\n5. Calculate the proportion of the passengers that survived by travel class and sex. Present your results in a single histogram.\n6. Create 2 equally populated age categories and calculate survival proportions by age category, travel class and sex. Present your results in a DataFrame with unique index.\n1. We start by importing the data from the file.", "titanic = pd.read_excel(DATA_FOLDER + '/titanic.xls', sheetname='titanic')\ntitanic", "Next we can list the data types of each field.", "titanic.dtypes", "When it comes to the object fields, we can be a bit more precise: name, sex, ticket, cabin, embarked, boat and home.dest are all strings.\nNext, we call the describe method to list some statistics on the data. We thus obtain the range of all of the numeric fields of our data.", "titanic.describe()", "Moreover, we can also note the ranges of some other fields. For example, sex has only two possible values, female and male, and embarked can only be S, C or Q.\nFor a better visual result, we decided to replace the travel class, survival and port codes with more readable values. 
As we make them categorical types, the performance stays the same.", "class_dic = {1 : 'First Class', 2 : 'Second Class', 3 : 'Third Class', np.nan : np.nan}\nsurvived_dic = {0 : 'Deceased' , 1 : 'Survived', np.nan : np.nan}\nemarked_dic = {'C' : 'Cherbourg', 'Q' : 'Queenstown', 'S' : 'Southampton', np.nan : np.nan}\ntitanic['pclass'] = titanic['pclass'].apply(lambda x: class_dic[x])\n\ntitanic['survived'] = titanic['survived'].apply(lambda x: survived_dic[x])\n\ntitanic['embarked'] = titanic['embarked'].apply(lambda x: emarked_dic[x])", "Then we make categorical data as actually categorical.", "titanic['pclass'] = titanic.pclass.astype('category')\ntitanic['survived'] = titanic.survived.astype('category')\ntitanic['sex'] = titanic.sex.astype('category')\ntitanic['embarked'] = titanic.embarked.astype('category')\ntitanic['cabin'] = titanic.cabin.astype('category')\ntitanic['boat'] = titanic.boat.astype('category')", "2. We plot the histogram of the travel class.", "titanic.pclass.value_counts(sort=False).plot(kind='bar')", "Next we plot the histogram of the three embark ports.", "titanic.embarked.value_counts().plot(kind='bar')", "Next we plot the histogram of the sex.", "titanic.sex.value_counts().plot(kind='bar')", "Next, we cut the ages data into decades and plot the histogram of the devades.", "pd.cut(titanic.age, range(0,90,10)).value_counts(sort=False).plot(kind='bar')", "3. We plot the cabin floor data as a pie chart.", "titanic.cabin.dropna().apply(lambda x : x[0]).value_counts(sort=False).plot(kind='pie')", "4. 
Here, we plot the proportion of people that survived in the first class.", "titanic[titanic.pclass == \"First Class\"].survived.value_counts(sort=False).plot(kind='pie')", "Next, we plot the proportion of people that survived in the second class.", "titanic[titanic.pclass == \"Second Class\"].survived.value_counts(sort=False).plot(kind='pie')", "Finally, we plot the proportion of people that survived in the third class.", "titanic[titanic.pclass == \"Third Class\"].survived.value_counts(sort=False).plot(kind='pie')", "As we can see, the lower the class, the higher the probability of death.\n5. Here we add new columns that will help us to calculate proportions of survived people in the last part.", "titanic.insert(0, 'alive', 0)\ntitanic.insert(0, 'dead', 0)\ntitanic.insert(0, 'ratio', 0)", "Here we set these new columns to appropriate values. We essentially separate the survived column for easier summing later on. Finally, we slice the data to take only the columns of interest.", "titanic.loc[titanic['survived'] == \"Survived\", 'alive'] = 1\ntitanic.loc[titanic['survived'] == \"Deceased\", 'dead'] = 1\ndf = titanic[['pclass', 'sex', 'alive', 'dead', 'ratio']]", "We group the data by the sex and class of the passengers and sum it. We then have the counts of alive and dead people grouped as we wish, so we can easily calculate the proportion of them that survived, which we plot as a histogram.", "aggregated = df.groupby(['sex', 'pclass']).sum()\n(aggregated['alive'] / (aggregated['alive'] + aggregated['dead'])).plot(kind='bar')", "We can see that there is a huge difference in survival between the classes and sexes: for instance, third-class males have about seven times less chance of survival than first-class females.\n6. Next we insert a new column that will be the age category of each person. Since we want to split the people into two equal groups based on age, we compute the median age of the passengers. 
We also drop the passengers with an unknown age value, to avoid bad results for the median computation.", "titanic.dropna(axis=0, subset=['age'], inplace=True)\ntitanic.insert(0, 'age_category', 0)\nmedian = titanic['age'].median()", "Next, we assign the correct category to people below or above the median age. The people that have exactly the median age are grouped with the people below it. Then we set this column as a categorical column.", "titanic.loc[titanic['age'] > median, 'age_category'] = \"Age > \" + str(median)\ntitanic.loc[titanic['age'] <= median, 'age_category'] = \"Age <= \" + str(median)\ntitanic['age_category'] = titanic.age_category.astype('category')", "Next, we take the columns that are of interest to us and group by age category, sex and travel class. Then we sum over these groups, obtaining the numbers of people that lived and that died, from which we can compute the survival proportion and display it as a dataframe.", "sub = titanic[['pclass', 'sex', 'age_category', 'alive', 'dead', 'ratio']]\nsubagg = sub.groupby(['age_category', 'sex', 'pclass']).sum()\nsubagg['ratio'] = (subagg['alive'] / (subagg['alive'] + subagg['dead']))\nonly_ratio = subagg[['ratio']]\nonly_ratio", "As before, we can see that there is a huge difference in survival between the classes and sexes. On the other hand, age doesn't make a large difference: no matter whether a passenger is above or below 28 years old, their probability of survival is determined more by their sex and class." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
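The daily-average-per-month step in the Ebola notebook above boils down to a pandas `groupby` over (Country, Year, Month) followed by `mean`. The following self-contained sketch mirrors the notebook's column names, but the numbers are invented for illustration:

```python
import pandas as pd

# Toy data standing in for the parsed daily reports (values invented).
df = pd.DataFrame({
    'Country': ['guinea'] * 4,
    'Date': pd.to_datetime(['2014-08-01', '2014-08-02',
                            '2014-09-01', '2014-09-02']),
    'NewCases': [10, 20, 5, 7],
})

# Derive Year/Month columns, then take the mean per (Country, Year, Month),
# i.e. the daily average of new cases in each month.
df['Year'] = df['Date'].dt.year
df['Month'] = df['Date'].dt.month
monthly = df.groupby(['Country', 'Year', 'Month'])['NewCases'].mean()
print(monthly)
```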
saashimi/CPO-datascience
Normalized Dataset - Testing parameters.ipynb
mit
[ "Random Forests Using Full PSU dataset", "#Import required packages\nimport pandas as pd\nimport numpy as np\nimport datetime\nimport matplotlib.pyplot as plt\n\ndef format_date(df_date):\n \"\"\"\n Splits Meeting Times and Dates into datetime objects where applicable using regex.\n \"\"\"\n df_date['Days'] = df_date['Meeting_Times'].str.extract('([^\\s]+)', expand=True)\n df_date['Start_Date'] = df_date['Meeting_Dates'].str.extract('([^\\s]+)', expand=True)\n df_date['Year'] = df_date['Term'].astype(str).str.slice(0,4)\n df_date['Quarter'] = df_date['Term'].astype(str).str.slice(4,6)\n df_date['Term_Date'] = pd.to_datetime(df_date['Year'] + df_date['Quarter'], format='%Y%m')\n df_date['End_Date'] = df_date['Meeting_Dates'].str.extract('(?<=-)(.*)(?= )', expand=True)\n df_date['Start_Time'] = df_date['Meeting_Times'].str.extract('(?<= )(.*)(?=-)', expand=True)\n df_date['Start_Time'] = pd.to_datetime(df_date['Start_Time'], format='%H%M')\n df_date['End_Time'] = df_date['Meeting_Times'].str.extract('((?<=-).*$)', expand=True)\n df_date['End_Time'] = pd.to_datetime(df_date['End_Time'], format='%H%M')\n df_date['Duration_Hr'] = ((df_date['End_Time'] - df_date['Start_Time']).dt.seconds)/3600\n return df_date\n\ndef format_xlist(df_xl):\n \"\"\"\n revises % capacity calculations by using Max Enrollment instead of room capacity. 
\n \"\"\"\n df_xl['Cap_Diff'] = np.where(df_xl['Xlst'] != '', \n df_xl['Max_Enrl'].astype(int) - df_xl['Actual_Enrl'].astype(int), \n df_xl['Room_Capacity'].astype(int) - df_xl['Actual_Enrl'].astype(int)) \n df_xl = df_xl.loc[df_xl['Room_Capacity'].astype(int) < 999]\n\n return df_xl \n ", "Partitioning a dataset in training and test sets", "pd.set_option('display.max_rows', None) \n\n\ndf = pd.read_csv('data/PSU_master_classroom_91-17.csv', dtype={'Schedule': object, 'Schedule Desc': object})\ndf = df.fillna('')\n\ndf = format_date(df)\n# Avoid classes that only occur on a single day\ndf = df.loc[df['Start_Date'] != df['End_Date']]\n\n#terms = [199104, 199204, 199304, 199404, 199504, 199604, 199704, 199804, 199904, 200004, 200104, 200204, 200304, 200404, 200504, 200604, 200704, 200804, 200904, 201004, 201104, 201204, 201304, 201404, 201504, 201604]\n#terms = [200604, 200704, 200804, 200904, 201004, 201104, 201204, 201304, 201404, 201504, 201604]\n#df = df.loc[df['Term'].isin(terms)]\ndf = df.loc[df['Online Instruct Method'] != 'Fully Online']\n#dept_lst = ['MTH', 'CH', 'BI', 'CE', 'CS', 'ECE', 'EMGT' ]\n#df = df.loc[df['Dept'].isin(dept_lst)]\n\n# Calculate number of days per week and treat Sunday condition\ndf['Days_Per_Week'] = df['Days'].str.len()\ndf['Room_Capacity'] = df['Room_Capacity'].apply(lambda x: x if (x != 'No Data Available') else 0)\ndf['Building'] = df['ROOM'].str.extract('([^\\s]+)', expand=True)\n\ndf_cl = format_xlist(df)\ndf_cl['%_Empty'] = df_cl['Cap_Diff'].astype(float) / df_cl['Room_Capacity'].astype(float)\n\n# Normalize the results\ndf_cl['%_Empty'] = df_cl['Actual_Enrl'].astype(np.float32)/df_cl['Room_Capacity'].astype(np.float32)\ndf_cl = df_cl.replace([np.inf, -np.inf], np.nan).dropna()\n\n\nfrom sklearn.preprocessing import LabelEncoder\n\ndf_cl = df_cl.sample(n = 80000)\n\n# Save as a 1D array. 
Otherwise will throw errors.\ny = np.asarray(df_cl['%_Empty'], dtype=\"|S6\")\n\ncols = df_cl[['Dept', 'Days', 'Start_Time', 'ROOM', 'Quarter', 'Room_Capacity', 'Building', 'Class', 'Instructor', 'Schedule', 'Max_Enrl']]\ncat_columns = ['Dept', 'Days', 'Class', 'Start_Time', 'ROOM', 'Building', 'Instructor', 'Schedule']\n\n#cols = df_cl[['Start_Time', 'Class', 'Instructor' ]]\n#cat_columns = ['Start_Time', 'Class', 'Instructor']\n\nfor column in cat_columns:\n categorical_mapping = {label: idx for idx, label in enumerate(np.unique(cols['{0}'.format(column)]))}\n cols['{0}'.format(column)] = cols['{0}'.format(column)].map(categorical_mapping)\n\nfrom distutils.version import LooseVersion as Version\nfrom sklearn import __version__ as sklearn_version\n \nif Version(sklearn_version) < '0.18':\n from sklearn.cross_validation import train_test_split\nelse:\n from sklearn.model_selection import train_test_split\n\nX = cols.iloc[:, :].values\n\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)\n", "Determine Feature Importances\nUtilize Random Forests Method to determine feature importances. On the left, trees are trained independently by recursive binary partitioning of a bootstrapped sample of the input data, X . 
On the right, test data is dropped down through each tree and the response estimate is the average over all the individual predictions in the forest.\nRandom Forests Diagram\n<img src=\"files/Random Forests.png\">\nSource: ResearchGate.net", "from sklearn.ensemble import RandomForestClassifier\n\nfeat_labels = cols.columns[:]\n\nforest = RandomForestClassifier(n_estimators=20,\n random_state=0,\n n_jobs=-1) # -1 sets n_jobs=n_CPU cores\n\nforest.fit(X_train, y_train)\nimportances = forest.feature_importances_\n\nindices = np.argsort(importances)[::-1]\n\nfor f in range(X_train.shape[1]):\n print(\"%2d) %-*s %f\" % (f + 1, 30, \n feat_labels[indices[f]], \n importances[indices[f]]))\n\nplt.title('Feature Importances')\nplt.bar(range(X_train.shape[1]), \n importances[indices],\n color='lightblue', \n align='center')\n\nplt.xticks(range(X_train.shape[1]), \n feat_labels[indices], rotation=90)\nplt.xlim([-1, X_train.shape[1]])\nplt.tight_layout()\nplt.show()", "Feature Importance Results\nClass, Instructor, and Start Times are the three most important factors in predicting the percentage of empty seats expected.\nTest Prediction\nThe machine-generated model achieves an accuracy score of roughly 0.65 to 0.70.", "from sklearn.metrics import accuracy_score\n\nforest = RandomForestClassifier(n_estimators=20,\n random_state=0,\n n_jobs=-1) # -1 sets n_jobs=n_CPU cores\n\nforest.fit(X_train, y_train)\ny_predict = forest.predict(X)\ny_actual = y\n\nprint(accuracy_score(y_actual, y_predict))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
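The feature-importance computation in the notebook above can be illustrated on synthetic data where we know in advance which feature matters. This is a toy sketch, not the PSU dataset: the features are random and the label is driven by feature 0 only, so feature 0 should dominate the importances.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.RandomState(0)
X = rng.rand(500, 3)
y = (X[:, 0] > 0.5).astype(int)   # label depends on feature 0 only

forest = RandomForestClassifier(n_estimators=20, random_state=0)
forest.fit(X, y)

# Importances sum to 1; the signal-carrying feature should rank first.
print(forest.feature_importances_)
```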
cubewise-code/TM1py-samples
Samples/Cubike/cubike_data_integration.ipynb
mit
[ "Data science with IBM Planning Analytics\nCubike example - Part 1\nCubike is a fictional Bike Sharing company that we use in the series of articles about Data Science with TM1 and Planning Analytics:\n\nPart 1: Upload weather data from web services\nPart 2: Explore your TM1 and Planning Analytics data with Pandas and Plotly\nPart 3: Timeseries Forecasting with Facebook Prophet\nIf you are new to TM1py, this article will guide you through setting up TM1py and all the different modules required in the Cubike example.\n\nNote: All the prerequisites to run this sample in your environment are defined in this article:\n* Setup cubike example\nPart 1: Upload weather data from web services into TM1/Planning Analytics\nThe objective in this first part is to upload weather data from the National Oceanic and Atmospheric Administration (NOAA) web service API.\nStep 1: Import libraries\nThe first step in the Jupyter notebook is to import the packages and define the constants we need:\n\n\nrequests: library for HTTP / REST requests against web services\n\n\njson: standard library for JSON parsing, manipulation", "import requests\nimport json\nfrom TM1py import TM1Service", "Constants\nWe are pulling the weather data from the National Oceanic and Atmospheric Administration (NOAA). NOAA has a rich API which allows us to access all kinds of environmental data from the US.\n<b>STATION</b> <a href=\"https://www.ncdc.noaa.gov/cdo-web/datasets/NORMAL_DLY/stations/GHCND:USW00014732/detail\">GHCND:USW00014732</a> (40.7792°, -73.88°) \n<b>FROM, TO</b> Timeframe\n<b>HEADERS</b> Token for Authentication with the API", "STATION = 'GHCND:USW00014732'\nFROM, TO = '2017-01-01', '2017-01-04'\nHEADERS = {\"token\": 'yyqEBOAbHVbtXkfAmZuPNfnSXvdfyhgn'}", "Step 2: Build URL for the Query\nBuild the parametrized URL and print it", "url = 'https://www.ncdc.noaa.gov/cdo-web/api/v2/data?' 
\\\n 'datasetid=GHCND&' \\\n 'startdate=' + FROM + '&' \\\n 'enddate=' + TO + '&' \\\n 'limit=1000&' \\\n 'datatypeid=TMIN&' \\\n 'datatypeid=TAVG&' \\\n 'datatypeid=TMAX&' \\\n 'stationid=' + STATION\n\nprint(url)", "This is the URL we will get the data from.\nStep 3: Query Weather Data\nNow that our URL is ready, we need to send the request to the API:\n\nExecute the URL against the NOAA API to get the results\nPrettyprint first three items from result-set", "response = requests.get(url, headers=HEADERS).json()\nresults = response[\"results\"] \n\nprint(json.dumps(results[0:3], indent=2))", "Step 4: Rearrange Data\nBefore sending the data into TM1, we now need to rearrange the data so it matches our TM1 cube structure:\n\nVersion = Actual\nDate = record['date'][0:10]\nCity = NYC\nDataType = record['datatype']", "cells = dict()\n\nfor record in results:\n value = record['value'] / 10\n coordinates = (\"Actual\", record['date'][0:10], \"NYC\", record['datatype'])\n cells[coordinates] = value\n\nfor coordinate, value in cells.items():\n print(coordinate, value)", "Step 5: Send Data to TM1\nNow that the data is ready, we just need to connect to our TM1 instance and finally write the values into the TM1 cube \"Weather Data\".\nFirst we need to get the authentication parameters of our TM1 instance which are stored in a config.ini file:", "import configparser\nconfig = configparser.ConfigParser()\nconfig.read(r'..\\..\\config.ini')", "With TM1py we can send data to a cube with two lines of code as long as our cell set match dimensions in our cube:", "with TM1Service(**config['tm1srv01']) as tm1:\n tm1.cubes.cells.write_values(\"Weather Data\", cells)", "Next\nTo open next Jupyter Notebook:\n1. Go to File at the top left\n1. Click Open\n1. A new tab will open and then click on cubike_data_science.ipynb" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
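Step 4 of the Cubike notebook above (rearranging NOAA records into cube cells) can be exercised without any TM1 server or API token. The two records below are invented stand-ins for the API response; only the reshaping logic is shown: values arrive in tenths of a degree, and each cube coordinate is (Version, Date, City, DataType).

```python
# Invented sample records mimicking the NOAA response shape.
results = [
    {'date': '2017-01-01T00:00:00', 'datatype': 'TMAX', 'value': 56},
    {'date': '2017-01-01T00:00:00', 'datatype': 'TMIN', 'value': -11},
]

# Build the cellset: divide by 10 to convert tenths of a degree,
# and key each value by its cube coordinates.
cells = {
    ('Actual', record['date'][0:10], 'NYC', record['datatype']): record['value'] / 10
    for record in results
}
print(cells)
```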
ShinjiKatoA16/UCSY-sw-eng
Exclusive OR (XOR).ipynb
mit
[ "Exclusive OR\nThe following symbols are used to denote Exclusive OR in logic and mathematics.\n$A{\\veebar}B \\ A{\\oplus}B$\nLogical operation\n| A | B | $A{\\oplus}B$ |\n|---|---|---|\n| F | F | F |\n| F | T | T |\n| T | F | T |\n| T | T | F |\n$X \\oplus False(0) = X \\ X \\oplus True(1) = \\lnot X$\nCommutative law, Associative law\nLike addition or multiplication, the commutative law and the associative law can be applied to XOR.\nCommutative law\n$A \\oplus B = B \\oplus A$\nAssociative law\n$A \\oplus (B \\oplus C) = (A \\oplus B) \\oplus C$\n| A | B | C | $A{\\oplus}(B{\\oplus}C)$ | $(A{\\oplus}B){\\oplus}C$ |\n|---|---|---|---|---|\n| F | F | F | F | F |\n| F | F | T | T | T |\n| F | T | F | T | T |\n| F | T | T | F | F |\n| T | F | F | T | T |\n| T | F | T | F | F |\n| T | T | F | F | F |\n| T | T | T | T | T |", "# In Python, the XOR operator is ^\nfor a in (False, True):\n for b in (False,True):\n for c in (False, True):\n # print (a, b, c, a^(b^c), (a^b)^c)\n print (\"{!r:5} {!r:5} {!r:5} | {!r:5} {!r:5}\".format(a, b, c, a^(b^c), (a^b)^c))", "Find a general rule from specific cases\n\nWhat will be returned for 1 True $\\oplus$ 100 False?\nWhat will be returned for 2 True $\\oplus$ 100 False?\nWhat will be returned for 3 True $\\oplus$ 100 False?\nWhat will be returned for 101 True $\\oplus$ 10000 False?\n\nWhat is the general rule?\nBit operation\nWhen 2 integers are combined with the XOR operator, each bit is calculated separately. 
(Same as AND, OR, NOT)\n```\n10(0b1010) xor 6(0b0110) = 12 (0b1100)\n10(0b1010) and 6(0b0110) = 2 (0b0010)\n10(0b1010) or 6(0b0110) = 14 (0b1110)\nNOT 10(0b1010) = 5 (0b0101)\n0b1010 0b1010 0b1010 0b1010\n0b0110 0b0110 0b0110\n-(XOR)- -(AND)- -(OR)- -(NOT)-\n0b1100 0b0010 0b1110 0b0101\n```", "# CONSTANT is not supported in Python; an upper-case variable name is conventionally used as a constant\n\nINT10 = 0b1010\nINT06 = 0b0110\nREV_MASK = 0b1111\n\nprint('INT10:', INT10, 'INT06:', INT06)\nprint('Bit XOR:', INT10^INT06)\nprint('Bit AND:', INT10&INT06)\nprint('Bit OR: ', INT10|INT06)\nprint('Bit NOT', INT10, ':', (~INT10)&0x0f)\nprint('Reverse INT10:', INT10^REV_MASK)\nprint('Reverse INT06:', INT06^REV_MASK)", "XOR usage\n\nBit reverse (e.g. in a 32-bit integer, keep the upper 16 bits and reverse the lower 16 bits)\nClearing a CPU register to 0 (often used in assembler): $X \\oplus X = 0$\nEncryption ($Original \\oplus Key = Encrypted \\rightarrow Encrypted \\oplus Key = Original$)\nParity (RAID 5 etc.: N HDDs with (N-1) capacity; if one of the HDDs is broken, its data can be recovered from the other data)\n\nXOR parity\n\n$A \\oplus A = 0$\n$A \\oplus B = 0 \\rightarrow A = B$\n$A \\oplus B = C \\rightarrow A \\oplus B \\oplus C = 0 \\rightarrow A = B \\oplus C$\n$A \\oplus B \\oplus C \\oplus ... 
Z = 0 \\rightarrow X =$ XOR of other DATA", "# XOR data recovery test, Create parity from 3 random data, Recover any data from other data and parity\n\nimport random\n\ndata = [random.randrange(1000000) for i in range(3)]\nprint(data)\n\nparity = 0\nfor x in data:\n parity ^= x\n\nprint('XOR of all data is:', parity)\nprint('DATA[0]:', data[1]^data[2]^parity)\nprint('DATA[1]:', data[0]^data[2]^parity)\nprint('DATA[2]:', data[0]^data[1]^parity)\nprint('Parity: ', data[0]^data[1]^data[2])\nprint('XOR of all data + Parity:', data[0]^data[1]^data[2]^parity)", "XOR parity recalculation\n\nInitialization: XOR of every data\nUpdate:\nCalculate $OldData \\oplus NewData = Diff$\n$NewParity=OldParity \\oplus Diff$", "import random\n\ndata = [random.randrange(1000000) for i in range(5)]\nprint(data)\n\n# Initialize Parity\nparity = 0\nfor x in data:\n parity ^= x\n \n# Update data[2]\nolddata = data[2]\nnewdata = random.randrange(1000000)\nprint('data[2] is updated from:', olddata, 'to', newdata)\ndata[2] = newdata\ndiff = olddata ^ newdata\n\n# Calc new parity\nnewparity = parity ^ diff\nprint('old parity:', parity)\nparity = newparity\nprint('new parity:', parity)\n\n# Check if parity is correct\nchk_parity = 0\nfor x in data:\n chk_parity ^= x\n\nprint('updated parity:', chk_parity)\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
rainyear/pytips
Tips/2016-03-14-Command-Line-tools-in-Python.ipynb
mit
[ "Developing Command-Line Tools in Python\nAs a scripting language, Python is very well suited to developing command-line tools for operating systems (especially *nix systems). Python also ships with several standard libraries dedicated to handling command-line-related tasks.\nGeneral structure of a command-line tool\n\n1. Standard input and output\nOn *nix systems everything is a file, so standard input and output can be treated entirely as operations on files. Standard input can be supplied through a pipe or a redirect:", "# script reverse.py\n#!/usr/bin/env python\nimport sys\nfor l in sys.stdin.readlines():\n sys.stdout.write(l[::-1])", "Save this as reverse.py and feed it input through a pipe |:\n```sh\nchmod +x reverse.py\ncat reverse.py | ./reverse.py\nnohtyp vne/nib/rsu/!#\nsys tropmi\n:)(senildaer.nidts.sys ni l rof\n)]1-::[l(etirw.tuodts.sys\n```\nOr via redirection &lt;:\n```sh\n./reverse.py < reverse.py\n# same output as above\n```\n2. Command-line arguments\nArguments appended after the command can be retrieved through sys.argv, a list whose first element is the filename of the current script:", "# script argv.py\n#!/usr/bin/env python\nimport sys\nprint(sys.argv) # the output below is from running under Jupyter", "Run the script above:\n```sh\nchmod +x argv.py\n./argv.py hello world\npython argv.py hello world\n# both return the same result\n['./argv.py', 'hello', 'world']\n```\nFor more complex command-line arguments, such as option arguments passed via --option, parsing sys.argv item by item becomes tedious. Python provides the standard library argparse (the older optparse is no longer maintained) specifically for parsing command-line arguments:", "# script convert.py\n#!/usr/bin/env python\nimport argparse as apa\ndef loadConfig(config):\n print(\"Load config from: {}\".format(config))\ndef setTheme(theme):\n print(\"Set theme: {}\".format(theme))\ndef main():\n parser = apa.ArgumentParser(prog=\"convert\") # set the program name, used in the help output\n parser.add_argument(\"-c\", \"--config\", required=False, default=\"config.ini\")\n parser.add_argument(\"-t\", \"--theme\", required=False, default=\"default.theme\")\n parser.add_argument(\"-f\") # Accept Jupyter runtime option\n args = parser.parse_args()\n loadConfig(args.config)\n setTheme(args.theme)\n\nif __name__ == \"__main__\":\n main()", "With argparse it is easy to parse option arguments, define attributes for each argument (whether it is required, its default value, etc.), and automatically generate help documentation. Running the script above:\n```sh\n./convert.py -h\nusage: convert [-h] [-c CONFIG] [-t THEME]\noptional arguments:\n -h, --help show this help message and exit\n -c CONFIG, --config CONFIG\n -t THEME, --theme THEME\n```\n3. 
Executing system commands\nOnce Python can accurately interpret the input or the arguments, it can be used to do anything. This section focuses on calling system commands from Python, i.e. replacing shell scripts for system-administration work. My old habit was to run command-line instructions through os.system(command), but the better practice is the subprocess standard library, which exists precisely to replace the older os.system and os.spawn*.\nThe subprocess module provides the convenient call() method for invoking system commands directly, as well as the more sophisticated Popen object, which lets the user interact with system commands in greater depth.", "# script list_files.py\n#!/usr/bin/env python\nimport subprocess as sb\nres = sb.check_output(\"ls -lh ./*.ipynb\", shell=True) # for safety, commands do not go through the system shell by default, so shell=True must be set here\nprint(res.decode()) # the return value is bytes by default and needs to be decoded", "If simply executing system commands does not meet your needs, you can use subprocess.Popen to interact further with the spawned child process:", "import subprocess as sb\n\np = sb.Popen(['grep', 'communicate'], stdin=sb.PIPE, stdout=sb.PIPE)\nres, err = p.communicate(sb.check_output('cat ./*', shell=True))\nif not err:\n print(res.decode())" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tensorflow/examples
courses/udacity_intro_to_tensorflow_for_deep_learning/l08c08_forecasting_with_lstm.ipynb
apache-2.0
[ "Copyright 2018 The TensorFlow Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Forecasting with an LSTM\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_for_deep_learning/l08c08_forecasting_with_lstm.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_for_deep_learning/l08c08_forecasting_with_lstm.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n</table>\n\nSetup", "import numpy as np\nimport matplotlib.pyplot as plt\nimport tensorflow as tf\n\nkeras = tf.keras\n\ndef plot_series(time, series, format=\"-\", start=0, end=None, label=None):\n plt.plot(time[start:end], series[start:end], format, label=label)\n plt.xlabel(\"Time\")\n plt.ylabel(\"Value\")\n if label:\n plt.legend(fontsize=14)\n plt.grid(True)\n \ndef trend(time, slope=0):\n return slope * time\n \n \ndef seasonal_pattern(season_time):\n \"\"\"Just an arbitrary pattern, you can change it if you wish\"\"\"\n return np.where(season_time < 0.4,\n np.cos(season_time * 2 * np.pi),\n 1 / np.exp(3 * season_time))\n\n \ndef seasonality(time, period, amplitude=1, phase=0):\n \"\"\"Repeats the same 
pattern at each period\"\"\"\n season_time = ((time + phase) % period) / period\n return amplitude * seasonal_pattern(season_time)\n \n \ndef white_noise(time, noise_level=1, seed=None):\n rnd = np.random.RandomState(seed)\n return rnd.randn(len(time)) * noise_level\n \n\ndef sequential_window_dataset(series, window_size):\n series = tf.expand_dims(series, axis=-1)\n ds = tf.data.Dataset.from_tensor_slices(series)\n ds = ds.window(window_size + 1, shift=window_size, drop_remainder=True)\n ds = ds.flat_map(lambda window: window.batch(window_size + 1))\n ds = ds.map(lambda window: (window[:-1], window[1:]))\n return ds.batch(1).prefetch(1)\n\ntime = np.arange(4 * 365 + 1)\n\nslope = 0.05\nbaseline = 10\namplitude = 40\nseries = baseline + trend(time, slope) + seasonality(time, period=365, amplitude=amplitude)\n\nnoise_level = 5\nnoise = white_noise(time, noise_level, seed=42)\n\nseries += noise\n\nplt.figure(figsize=(10, 6))\nplot_series(time, series)\nplt.show()\n\nsplit_time = 1000\ntime_train = time[:split_time]\nx_train = series[:split_time]\ntime_valid = time[split_time:]\nx_valid = series[split_time:]\n\nclass ResetStatesCallback(keras.callbacks.Callback):\n def on_epoch_begin(self, epoch, logs):\n self.model.reset_states()", "LSTM RNN Forecasting", "keras.backend.clear_session()\ntf.random.set_seed(42)\nnp.random.seed(42)\n\nwindow_size = 30\ntrain_set = sequential_window_dataset(x_train, window_size)\n\nmodel = keras.models.Sequential([\n keras.layers.LSTM(100, return_sequences=True, stateful=True,\n batch_input_shape=[1, None, 1]),\n keras.layers.LSTM(100, return_sequences=True, stateful=True),\n keras.layers.Dense(1),\n keras.layers.Lambda(lambda x: x * 200.0)\n])\nlr_schedule = keras.callbacks.LearningRateScheduler(\n lambda epoch: 1e-8 * 10**(epoch / 20))\nreset_states = ResetStatesCallback()\noptimizer = keras.optimizers.SGD(lr=1e-8, momentum=0.9)\nmodel.compile(loss=keras.losses.Huber(),\n optimizer=optimizer,\n metrics=[\"mae\"])\nhistory = 
model.fit(train_set, epochs=100,\n callbacks=[lr_schedule, reset_states])\n\nplt.semilogx(history.history[\"lr\"], history.history[\"loss\"])\nplt.axis([1e-8, 1e-4, 0, 30])\n\nkeras.backend.clear_session()\ntf.random.set_seed(42)\nnp.random.seed(42)\n\nwindow_size = 30\ntrain_set = sequential_window_dataset(x_train, window_size)\nvalid_set = sequential_window_dataset(x_valid, window_size)\n\nmodel = keras.models.Sequential([\n keras.layers.LSTM(100, return_sequences=True, stateful=True,\n batch_input_shape=[1, None, 1]),\n keras.layers.LSTM(100, return_sequences=True, stateful=True),\n keras.layers.Dense(1),\n keras.layers.Lambda(lambda x: x * 200.0)\n])\noptimizer = keras.optimizers.SGD(lr=5e-7, momentum=0.9)\nmodel.compile(loss=keras.losses.Huber(),\n optimizer=optimizer,\n metrics=[\"mae\"])\nreset_states = ResetStatesCallback()\nmodel_checkpoint = keras.callbacks.ModelCheckpoint(\n \"my_checkpoint.h5\", save_best_only=True)\nearly_stopping = keras.callbacks.EarlyStopping(patience=50)\nmodel.fit(train_set, epochs=500,\n validation_data=valid_set,\n callbacks=[early_stopping, model_checkpoint, reset_states])\n\nmodel = keras.models.load_model(\"my_checkpoint.h5\")\n\nrnn_forecast = model.predict(series[np.newaxis, :, np.newaxis])\nrnn_forecast = rnn_forecast[0, split_time - 1:-1, 0]\n\nplt.figure(figsize=(10, 6))\nplot_series(time_valid, x_valid)\nplot_series(time_valid, rnn_forecast)\n\nkeras.metrics.mean_absolute_error(x_valid, rnn_forecast).numpy()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
BadWizard/Inflation
Disaggregated-Data/weather-like-plot-HICP-by-item-ver-2.ipynb
mit
[ "Make a plot of HICP inflation by item groups", "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport pandas as pd\nfrom datetime import datetime\nimport numpy as np\n\nfrom matplotlib.ticker import FixedLocator, FixedFormatter\n#import seaborn as sns\n\nfrom matplotlib.ticker import FixedLocator, FixedFormatter\nimport seaborn as sns\n\nls\n\ndf_ind_items = pd.read_csv('raw_data_items.csv',header=0,index_col=0,parse_dates=0)\ndf_ind_items.head()\n\ndf_ind_items.index", "Compute annual inflation rates", "df_infl_items = df_ind_items.pct_change(periods=12)*100\nmask_rows_infl = df_infl_items.index.year >= 2000\ndf_infl_items = df_infl_items[mask_rows_infl]\ndf_infl_items.tail()\n\ntt = df_infl_items.copy() \ntt['month'] = tt.index.month \ntt['year'] = tt.index.year\ntt.head()\n\ntt.to_csv('infl_items.csv')", "df_infl_items.rename(columns = dic)\ntt = df_infl_items.copy()\ntt['month'] = tt.index.month\ntt['year'] = tt.index.year\nmelted_df = pd.melt(tt,id_vars=['month','year'])\nmelted_df.head()", "df_infl_items['min'] = df_infl_items.apply(min,axis=1)\ndf_infl_items['max'] = df_infl_items.apply(max,axis=1)\ndf_infl_items['mean'] = df_infl_items.apply(np.mean,axis=1)\ndf_infl_items['mode'] = df_infl_items.quantile(q=0.5, axis=1)\ndf_infl_items['10th'] = df_infl_items.quantile(q=0.10, axis=1)\ndf_infl_items['90th'] = df_infl_items.quantile(q=0.90, axis=1)\ndf_infl_items['25th'] = df_infl_items.quantile(q=0.25, axis=1)\ndf_infl_items['75th'] = df_infl_items.quantile(q=0.75, axis=1)\n\ndf_infl_items.tail()", "df_infl_items['month'] = df_infl_items.index.month\ndf_infl_items['year'] = df_infl_items.index.year", "df_infl_items.head()\n\nprint(df_infl_items.describe())", "Generate a bunch of histograms of the data to make sure that all of the data\nis in an expected range.\nwith plt.style.context('https://gist.githubusercontent.com/rhiever/d0a7332fe0beebfdc3d5/raw/223d70799b48131d5ce2723cd5784f39d7a3a653/tableau10.mplstyle'):\n for column in 
df_infl_items.columns[:-2]:\n #if column in ['date']:\n # continue\n plt.figure()\n plt.hist(df_infl_items[column].values)\n plt.title(column)\n #plt.savefig('{}.png'.format(column))", "len(df_infl_items)\n\ndf_infl_items.columns\n\ndf_infl_items['month_order'] = range(len(df_infl_items))\nmonth_order = df_infl_items['month_order']\nmax_infl = df_infl_items['max'].values\nmin_infl = df_infl_items['min'].values\nmean_infl = df_infl_items['mean'].values\nmode_infl = df_infl_items['mode'].values\np25th = df_infl_items['25th'].values\np75th = df_infl_items['75th'].values\np10th = df_infl_items['10th'].values\np90th = df_infl_items['90th'].values\ninflEA = df_infl_items['76451'].values\n\nyear_begin_df = df_infl_items[df_infl_items.index.month == 1]\nyear_begin_df;\n\nyear_beginning_indeces = list(year_begin_df['month_order'].values)\nyear_beginning_indeces\n\nyear_beginning_names = list(year_begin_df.index.year)\nyear_beginning_names\n\ninflEA[inflEA.argmin()]\ninflEA[inflEA.argmax()]\n\nhist_low,ind_hist_low = min(inflEA), inflEA.argmin()\nhist_high,ind_hist_high = max(inflEA), inflEA.argmax()\n\nprint(hist_high)\nprint(ind_hist_high)\n\nprint(min(inflEA))\nprint(max(inflEA))\n\nblue3 = tuple(x/255 for x in [24, 116, 205]) # 1874CD\nwheat2 = tuple(x/255 for x in [238, 216, 174])\nwheat3 = tuple(x/255 for x in [205, 186, 150])\nwheat4 = tuple(x/255 for x in [139, 126, 102])\n\nfirebrick3 = tuple(x/255 for x in [205, 38, 38])\ngray30 = tuple(x/255 for x in [77, 77, 77])\n\nidx = month_order\nfig, ax = plt.subplots(figsize=(20, 10), subplot_kw={'axisbg': 'white'},\n facecolor='white')\n# plot the high-low bars\nplt.vlines(idx, p10th, p90th, color=wheat3, alpha=.9,\n linewidth=2.0);\n\n#ax.vlines(idx, past_stats.lower, past_stats.upper, color=wheat3, alpha=.9,\n# linewidth=1.5, zorder=-1)\n\n# plot the confidence interval around the means\nplt.vlines(idx, p25th, p75th, linewidth=2.5,\n color=wheat4, zorder=-1)\n\n\n# plot the present year time-series\nplt.plot(idx,inflEA, 
color='k',linewidth=2, zorder=10);\n\n# plot the made-up 2014 range. don't know what this was supposed to show.\nax.vlines(idx[len(idx) // 8 + 2], -4, -1, linewidth=5, color=wheat2)\nax.vlines(idx[len(idx) // 8 + 2], -3, -2, linewidth=5, color=wheat4)\n#ax.errorbar(len(idx) // 8 + 3, -2.5, yerr=.5, capsize=2, capthick=1,\n# color='black')\nax.text(len(idx) // 8 + 4, -2.5, \"IQR\", verticalalignment='center')\n\nax.text(len(idx) // 8 + 4, -1.2, \"90 %-tile\", verticalalignment='top')\nax.text(len(idx) // 8 + 4, -3.8, \"10 %-tile\", verticalalignment='top')\n#ax.text(len(idx) // 2 - 1, 9, \"2014 Temperature\",\n# horizontalalignment='right')\n\n\nax.plot(ind_hist_high, hist_high, 'ro',markersize=10)\nax.plot(ind_hist_low, hist_low, 'bo',markersize=10)\n\nax.annotate(\"historical low\",\n xy=(ind_hist_low,hist_low), xytext=(50, -45),\n textcoords='offset points', #arrowprops=dict(facecolor='blue',\n # arrowstyle=\"->\",\n #connectionstyle=\"angle3\",\n # width=2,\n # headwidth=0,\n # shrink=.02),\n arrowprops=dict(arrowstyle='->', connectionstyle='arc3, rad=0.2'),\n #arrowprops=dict(arrowstyle='->', lw= 4, color= 'blue')\n color='blue', horizontalalignment='left')\n\nax.annotate(\"historical high\",\n xy=(ind_hist_high,hist_high), xytext=(ind_hist_high + 0, 6),\n textcoords='offset points',\n #arrowprops=dict(facecolor='red',width=2,headwidth=0,shrink=.02),\n arrowprops=dict(arrowstyle='->', connectionstyle='arc3, rad=0.2'), \n color='red', horizontalalignment='center')\n\n\n\n##############\n## formatting\n#\nplt.xticks(year_beginning_indeces,\n year_beginning_names,\n fontsize=12)\n\n\nleft_spine = ax.spines['left']\nleft_spine.set_visible(True)\nleft_spine.set_color(wheat4)\nleft_spine.set_linewidth(2)\n\nax.xaxis.set_ticklabels([])\nax.xaxis.grid(color=wheat3, linestyle='dotted',linewidth=2)\n\nplt.xticks(year_beginning_indeces,\n year_beginning_names,\n fontsize=12)\n\nplt.xlim(-5,200)\n\n\nyticks = (range(-4, 10, 2))\nax.yaxis.set_ticks(yticks)\n \nylabels = 
[str(i) + u\"%\" for i in yticks]\n \nax.yaxis.set_ticklabels(ylabels, fontsize=14)\nax.yaxis.grid(color='white', zorder=1)\n\n\n\nax.set_title(\"Headline and disaggregated inflation, Jan 2000 - May 2016\", loc=\"left\",\n fontsize=23)\nax.text(0, 8.5, \"annual rate of change\", fontsize=15,\n fontdict=dict(weight='bold'))\n\nax.set_xlim(-5,200)\nax.set_ylim(-5, 9)\nfig.savefig(\"Inflation-Items.svg\")\nfig.savefig(\"Inflation-Items.png\", dpi=200)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
nick-youngblut/SIPSim
ipynb/bac_genome/fullCyc/.ipynb_checkpoints/Day1_rep10_justOverlap-checkpoint.ipynb
mit
[ "Goal\n\nSimulating fullCyc Day1 control gradients\nNot simulating incorporation (all 0% isotope incorp.)\nDon't know the true amount of incorporation for the empirical data\n\n\nRichness = genome reference pool size (n=1147)\nSimulating 10 replicate gradients to assess simulation stochasticity.\nUsing a relatively large bandwidth to create fairly smooth distributions\n\nInit", "import os\nimport glob\nimport re\nimport nestly\n\n%load_ext rpy2.ipython\n%load_ext pushnote\n\n%%R\nlibrary(ggplot2)\nlibrary(dplyr)\nlibrary(tidyr)\nlibrary(gridExtra)\nlibrary(phyloseq)\n\n## BD for G+C of 0 or 100\nBD.GCp0 = 0 * 0.098 + 1.66\nBD.GCp100 = 1 * 0.098 + 1.66", "Nestly\n\nassuming fragments have already been simulated", "workDir = '/home/nick/notebook/SIPSim/dev/fullCyc/n1147_frag_norm_9_2.5_n5/'\nbuildDir = os.path.join(workDir, 'Day1_rep10_justOverlap')\nR_dir = '/home/nick/notebook/SIPSim/lib/R/'\n\nfragFile= '/home/nick/notebook/SIPSim/dev/bac_genome1147/validation/ampFrags.pkl'\ntargetFile = '/home/nick/notebook/SIPSim/dev/fullCyc/CD-HIT/target_taxa.txt'\n\nphyseqDir = '/var/seq_data/fullCyc/MiSeq_16SrRNA/515f-806r/lib1-7/phyloseq/'\nphyseq_bulkCore = 'bulk-core'\nphyseq_SIP_core = 'SIP-core_unk'\n\nnreps = 10\nprefrac_comm_abundance = '1e9'\n\nseq_per_fraction = ['lognormal', 9.432, 0.5, 10000, 30000] # dist, mean, scale, min, max\nbulk_days = [1]\nnprocs = 14\n\n# building tree structure\nnest = nestly.Nest()\n\n ## varying params\nnest.add('rep', [x + 1 for x in xrange(nreps)])\n\n## set params\nnest.add('bulk_day', bulk_days, create_dir=False)\nnest.add('abs', [prefrac_comm_abundance], create_dir=False)\nnest.add('percIncorp', [0], create_dir=False)\nnest.add('percTaxa', [0], create_dir=False)\nnest.add('np', [nprocs], create_dir=False)\nnest.add('subsample_dist', [seq_per_fraction[0]], create_dir=False)\nnest.add('subsample_mean', [seq_per_fraction[1]], create_dir=False)\nnest.add('subsample_scale', [seq_per_fraction[2]], create_dir=False)\nnest.add('subsample_min', 
[seq_per_fraction[3]], create_dir=False)\nnest.add('subsample_max', [seq_per_fraction[4]], create_dir=False)\nnest.add('bandwidth', [0.6], create_dir=False)\nnest.add('cmd', ['\"print unless /NA$/\"'], create_dir=False)\n\n### input/output files\nnest.add('buildDir', [buildDir], create_dir=False)\nnest.add('R_dir', [R_dir], create_dir=False)\nnest.add('fragFile', [fragFile], create_dir=False)\nnest.add('targetFile', [targetFile], create_dir=False)\nnest.add('physeqDir', [physeqDir], create_dir=False)\nnest.add('physeq_bulkCore', [physeq_bulkCore], create_dir=False)\n\n\n# building directory tree\nnest.build(buildDir)\n\n# bash file to run\nbashFile = os.path.join(buildDir, 'SIPSimRun.sh')\n\n%%writefile $bashFile\n#!/bin/bash\n\nexport PATH={R_dir}:$PATH\n\n#-- making DNA pool similar to gradient of interest\necho '# Creating comm file from phyloseq'\nphyloseq2comm.r {physeqDir}{physeq_bulkCore} -s 12C-Con -d {bulk_day} > {physeq_bulkCore}_comm.txt\nprintf 'Number of lines: '; wc -l {physeq_bulkCore}_comm.txt\n\necho '## Adding target taxa to comm file'\ncomm_add_target.r {physeq_bulkCore}_comm.txt {targetFile} > {physeq_bulkCore}_comm_target.txt\nprintf 'Number of lines: '; wc -l {physeq_bulkCore}_comm_target.txt\n\necho '## Selecting just target taxa'\nperl -ne {cmd} {physeq_bulkCore}_comm_target.txt | comm_set_abund.r - > tmp.txt\nrm -f {physeq_bulkCore}_comm_target.txt\nmv tmp.txt {physeq_bulkCore}_comm_target.txt\n\n\necho '## parsing out genome fragments to make simulated DNA pool resembling the gradient of interest'\n## all OTUs without an associated reference genome will be assigned a random reference (of the reference genome pool)\n### this is done through --NA-random\nSIPSim fragment_KDE_parse {fragFile} {physeq_bulkCore}_comm_target.txt \\\n --rename taxon_name --NA-random > fragsParsed.pkl\n\n\necho '#-- SIPSim pipeline --#'\necho '# converting fragments to KDE'\nSIPSim fragment_KDE \\\n fragsParsed.pkl \\\n > fragsParsed_KDE.pkl\n \necho '# adding 
diffusion'\nSIPSim diffusion \\\n fragsParsed_KDE.pkl \\\n --bw {bandwidth} \\\n --np {np} \\\n > fragsParsed_KDE_dif.pkl \n\necho '# adding DBL contamination'\nSIPSim DBL \\\n fragsParsed_KDE_dif.pkl \\\n --bw {bandwidth} \\\n --np {np} \\\n > fragsParsed_KDE_dif_DBL.pkl \n \necho '# making incorp file'\nSIPSim incorpConfigExample \\\n --percTaxa {percTaxa} \\\n --percIncorpUnif {percIncorp} \\\n > {percTaxa}_{percIncorp}.config\n\necho '# adding isotope incorporation to BD distribution'\nSIPSim isotope_incorp \\\n fragsParsed_KDE_dif_DBL.pkl \\\n {percTaxa}_{percIncorp}.config \\\n --comm {physeq_bulkCore}_comm_target.txt \\\n --bw {bandwidth} \\\n --np {np} \\\n > fragsParsed_KDE_dif_DBL_inc.pkl\n \n\necho '# simulating gradient fractions'\nSIPSim gradient_fractions \\\n {physeq_bulkCore}_comm_target.txt \\\n > fracs.txt \n\necho '# simulating an OTU table'\nSIPSim OTU_table \\\n fragsParsed_KDE_dif_DBL_inc.pkl \\\n {physeq_bulkCore}_comm_target.txt \\\n fracs.txt \\\n --abs {abs} \\\n --np {np} \\\n > OTU_abs{abs}.txt\n \n#echo '# simulating PCR'\n#SIPSim OTU_PCR \\\n# OTU_abs{abs}.txt \\\n# > OTU_abs{abs}_PCR.txt \n \necho '# subsampling from the OTU table (simulating sequencing of the DNA pool)'\nSIPSim OTU_subsample \\\n --dist {subsample_dist} \\\n --dist_params mean:{subsample_mean},sigma:{subsample_scale} \\\n --min_size {subsample_min} \\\n --max_size {subsample_max} \\\n OTU_abs{abs}.txt \\\n > OTU_abs{abs}_sub.txt\n \necho '# making a wide-formatted table'\nSIPSim OTU_wideLong -w \\\n OTU_abs{abs}_sub.txt \\\n > OTU_abs{abs}_sub_w.txt\n \necho '# making metadata (phyloseq: sample_data)'\nSIPSim OTU_sampleData \\\n OTU_abs{abs}_sub.txt \\\n > OTU_abs{abs}_sub_meta.txt\n\n!chmod 777 $bashFile\n!cd $workDir; \\\n nestrun --template-file $bashFile -d Day1_rep10_justOverlap --log-file log.txt -j 2\n\n%pushnote Day1_rep10_justOverlap complete", "BD min/max\n\nwhat is the min/max BD that we care about?", "%%R\n## min G+C cutoff\nmin_GC = 13.5\n## max G+C 
cutoff\nmax_GC = 80\n## max G+C shift\nmax_13C_shift_in_BD = 0.036\n\n\nmin_BD = min_GC/100.0 * 0.098 + 1.66 \nmax_BD = max_GC/100.0 * 0.098 + 1.66 \n\nmax_BD = max_BD + max_13C_shift_in_BD\n\ncat('Min BD:', min_BD, '\\n')\ncat('Max BD:', max_BD, '\\n')", "Loading data\nEmperical\nSIP data", "%%R \n# simulated OTU table file\nOTU.table.dir = '/home/nick/notebook/SIPSim/dev/fullCyc/frag_norm_9_2.5_n5/Day1_default_run/1e9/'\nOTU.table.file = 'OTU_abs1e9_PCR_sub.txt'\n#OTU.table.file = 'OTU_abs1e9_sub.txt'\n#OTU.table.file = 'OTU_abs1e9.txt'\n\n%%R -i physeqDir -i physeq_SIP_core -i bulk_days\n\n# bulk core samples\nF = file.path(physeqDir, physeq_SIP_core)\nphyseq.SIP.core = readRDS(F) \nphyseq.SIP.core.m = physeq.SIP.core %>% sample_data\n\nphyseq.SIP.core = prune_samples(physeq.SIP.core.m$Substrate == '12C-Con' & \n physeq.SIP.core.m$Day %in% bulk_days, \n physeq.SIP.core) %>%\n filter_taxa(function(x) sum(x) > 0, TRUE)\nphyseq.SIP.core.m = physeq.SIP.core %>% sample_data \n\nphyseq.SIP.core\n\n%%R \n## dataframe\ndf.EMP = physeq.SIP.core %>% otu_table %>%\n as.matrix %>% as.data.frame\ndf.EMP$OTU = rownames(df.EMP)\ndf.EMP = df.EMP %>% \n gather(sample, abundance, 1:(ncol(df.EMP)-1)) \n\ndf.EMP = inner_join(df.EMP, physeq.SIP.core.m, c('sample' = 'X.Sample')) \n\ndf.EMP.nt = df.EMP %>%\n group_by(sample) %>%\n mutate(n_taxa = sum(abundance > 0)) %>%\n ungroup() %>%\n distinct(sample) %>%\n filter(Buoyant_density >= min_BD, \n Buoyant_density <= max_BD)\n \ndf.EMP.nt %>% head(n=3)", "bulk soil samples", "%%R\nphyseq.dir = '/var/seq_data/fullCyc/MiSeq_16SrRNA/515f-806r/lib1-7/phyloseq/'\nphyseq.bulk = 'bulk-core'\nphyseq.file = file.path(physeq.dir, physeq.bulk)\nphyseq.bulk = readRDS(physeq.file)\nphyseq.bulk.m = physeq.bulk %>% sample_data\nphyseq.bulk = prune_samples(physeq.bulk.m$Exp_type == 'microcosm_bulk' &\n physeq.bulk.m$Day %in% bulk_days, physeq.bulk)\n\nphyseq.bulk.m = physeq.bulk %>% sample_data\nphyseq.bulk\n\n%%R\nphyseq.bulk.n = 
transform_sample_counts(physeq.bulk, function(x) x/sum(x))\nphyseq.bulk.n\n\n%%R\n# making long format of each bulk table\nbulk.otu = physeq.bulk.n %>% otu_table %>% as.data.frame\nncol = ncol(bulk.otu)\nbulk.otu$OTU = rownames(bulk.otu)\nbulk.otu = bulk.otu %>%\n gather(sample, abundance, 1:ncol) \n\nbulk.otu = inner_join(physeq.bulk.m, bulk.otu, c('X.Sample' = 'sample')) %>%\n dplyr::select(OTU, abundance) %>%\n rename('bulk_abund' = abundance)\nbulk.otu %>% head(n=3)\n\n%%R\n# joining tables\ndf.EMP.j = inner_join(df.EMP, bulk.otu, c('OTU' = 'OTU')) %>%\n filter(Buoyant_density >= min_BD, \n Buoyant_density <= max_BD) \n \ndf.EMP.j %>% head(n=3)", "Simulated", "OTU_files = !find $buildDir -name \"OTU_abs1e9_sub.txt\"\nOTU_files\n\n%%R -i OTU_files\n# loading files\n\ndf.SIM = list()\nfor (x in OTU_files){\n SIM_rep = gsub('/home/nick/notebook/SIPSim/dev/fullCyc/n1147_frag_norm_9_2.5_n5/Day1_rep10_justOverlap/', '', x)\n SIM_rep = gsub('/OTU_abs1e9_sub.txt', '', SIM_rep)\n df.SIM[[SIM_rep]] = read.delim(x, sep='\\t') \n }\ndf.SIM = do.call('rbind', df.SIM)\ndf.SIM$SIM_rep = gsub('\\\\.[0-9]+$', '', rownames(df.SIM))\nrownames(df.SIM) = 1:nrow(df.SIM)\ndf.SIM %>% head\n\n%%R\n## edit table\ndf.SIM.nt = df.SIM %>%\n filter(count > 0) %>%\n group_by(SIM_rep, library, BD_mid) %>%\n summarize(n_taxa = n()) %>%\n filter(BD_mid >= min_BD, \n BD_mid <= max_BD)\ndf.SIM.nt %>% head ", "'bulk soil' community files", "# loading comm files\ncomm_files = !find $buildDir -name \"bulk-core_comm_target.txt\"\ncomm_files\n\n%%R -i comm_files\n\ndf.comm = list()\nfor (f in comm_files){\n rep = gsub('.+/Day1_rep10_justOverlap/([0-9]+)/.+', '\\\\1', f)\n df.comm[[rep]] = read.delim(f, sep='\\t') %>%\n dplyr::select(library, taxon_name, rel_abund_perc) %>%\n rename('bulk_abund' = rel_abund_perc) %>%\n mutate(bulk_abund = bulk_abund / 100)\n}\n\ndf.comm = do.call('rbind', df.comm)\ndf.comm$SIM_rep = gsub('\\\\.[0-9]+$', '', rownames(df.comm))\nrownames(df.comm) = 
1:nrow(df.comm)\ndf.comm %>% head(n=3)\n\n%%R\n## joining tables\ndf.SIM.j = inner_join(df.SIM, df.comm, c('SIM_rep' = 'SIM_rep',\n 'library' = 'library',\n 'taxon' = 'taxon_name')) %>%\n filter(BD_mid >= min_BD, \n BD_mid <= max_BD)\n \ndf.SIM.j %>% head(n=3)\n\n%%R \n# filtering & combining emperical w/ simulated data\n\n## emperical \nmax_BD_range = max(df.EMP.j$Buoyant_density) - min(df.EMP.j$Buoyant_density)\ndf.EMP.j.f = df.EMP.j %>%\n filter(abundance > 0) %>%\n group_by(OTU) %>%\n summarize(mean_rel_abund = mean(bulk_abund),\n min_BD = min(Buoyant_density),\n max_BD = max(Buoyant_density),\n BD_range = max_BD - min_BD,\n BD_range_perc = BD_range / max_BD_range * 100) %>%\n ungroup() %>%\n mutate(dataset = 'emperical',\n SIM_rep = NA)\n\n## simulated\nmax_BD_range = max(df.SIM.j$BD_mid) - min(df.SIM.j$BD_mid)\ndf.SIM.j.f = df.SIM.j %>%\n filter(count > 0) %>%\n group_by(SIM_rep, taxon) %>%\n summarize(mean_rel_abund = mean(bulk_abund),\n min_BD = min(BD_mid),\n max_BD = max(BD_mid),\n BD_range = max_BD - min_BD,\n BD_range_perc = BD_range / max_BD_range * 100) %>%\n ungroup() %>%\n rename('OTU' = taxon) %>%\n mutate(dataset = 'simulated')\n\n## join\ndf.j = rbind(df.EMP.j.f, df.SIM.j.f) %>%\n filter(BD_range_perc > 0,\n mean_rel_abund > 0)\n\ndf.j$SIM_rep = reorder(df.j$SIM_rep, df.j$SIM_rep %>% as.numeric)\n\ndf.j %>% head(n=3)\n\n%%R -h 400\n## plotting\nggplot(df.j, aes(mean_rel_abund, BD_range_perc, color=SIM_rep)) +\n geom_point(alpha=0.3) +\n scale_x_log10() +\n scale_y_continuous() +\n labs(x='Pre-fractionation abundance', y='% of total BD range') +\n facet_grid(dataset ~ .) 
+\n theme_bw() +\n theme(\n text = element_text(size=16),\n panel.grid = element_blank()#,\n #legend.position = 'none'\n )\n", "BD span of just overlapping taxa (redundant; but consistent wiht other notebooks)\n\nTaxa overlapping between emperical data and genomes in dataset\nThese taxa should have the same relative abundances in both datasets.\nThe comm file was created from the emperical dataset phyloseq file.", "%%R -i targetFile\n\ndf.target = read.delim(targetFile, sep='\\t')\ndf.target %>% nrow %>% print\ndf.target %>% head(n=3)\n\n%%R\n# filtering to just target taxa\ndf.j.t = df.j %>% \n filter(OTU %in% df.target$OTU) \ndf.j %>% nrow %>% print\ndf.j.t %>% nrow %>% print\n\n## plotting\nggplot(df.j.t, aes(mean_rel_abund, BD_range_perc, color=SIM_rep)) +\n geom_point(alpha=0.5, shape='O') +\n scale_x_log10() +\n scale_y_continuous() +\n #scale_color_manual(values=c('blue', 'red')) +\n labs(x='Pre-fractionation abundance', y='% of total BD range') +\n facet_grid(dataset ~ .) +\n theme_bw() +\n theme(\n text = element_text(size=16),\n panel.grid = element_blank()#,\n #legend.position = 'none'\n )", "Check\n\nAre all target (overlapping) taxa the same relative abundances in both datasets?", "%%R -w 600 -h 500\n# formatting data\ndf.1 = df.j.t %>% \n filter(dataset == 'simulated') %>%\n select(SIM_rep, OTU, mean_rel_abund, BD_range, BD_range_perc)\n\ndf.2 = df.j.t %>%\n filter(dataset == 'emperical') %>%\n select(SIM_rep, OTU, mean_rel_abund, BD_range, BD_range_perc)\n\ndf.12 = inner_join(df.1, df.2, c('OTU' = 'OTU')) %>%\n mutate(BD_diff_perc = BD_range_perc.y - BD_range_perc.x)\n\n\ndf.12$SIM_rep.x = reorder(df.12$SIM_rep.x, df.12$SIM_rep.x %>% as.numeric)\n\n## plotting\np1 = ggplot(df.12, aes(mean_rel_abund.x, mean_rel_abund.y)) +\n geom_point(alpha=0.5) +\n scale_x_log10() +\n scale_y_log10() +\n labs(x='Relative abundance (simulated)', y='Relative abundance (emperical)') +\n facet_wrap(~ SIM_rep.x)\n theme_bw() +\n theme(\n text = element_text(size=16),\n 
panel.grid = element_blank(),\n legend.position = 'none'\n )\np1", "Correlation between relative abundance and BD_range diff\n\nAre low abundant taxa more variable in their BD span", "%%R -w 800 -h 500\n\nggplot(df.12, aes(mean_rel_abund.x, BD_diff_perc)) +\n geom_point(alpha=0.5) +\n scale_x_log10() +\n labs(x='Pre-fractionation relative abundance', \n y='Difference in % of gradient spanned\\n(emperical - simulated)',\n title='Overlapping taxa') +\n facet_wrap(~ SIM_rep.x) +\n theme_bw() +\n theme(\n text = element_text(size=16),\n panel.grid = element_blank(),\n legend.position = 'none'\n )\n", "Notes\n\nbetween Day1_rep10, Day1_richFromTarget_rep10, and Day1_add_Rich_rep10:\nDay1_rep10 has the most accurate representation of BD span (% of gradient spanned by taxa).\nAccuracy drops at ~1e-3 to ~5e-4, but this is caused by detection limits (veil-line effect).\n\n\n\nComparing abundance distributions of overlapping taxa", "%%R\n\njoin_abund_dists = function(df.EMP.j, df.SIM.j, df.target){\n ## emperical \n df.EMP.j.f = df.EMP.j %>%\n filter(abundance > 0) %>%\n dplyr::select(OTU, sample, abundance, Buoyant_density, bulk_abund) %>%\n mutate(dataset = 'emperical', SIM_rep = NA) %>%\n filter(OTU %in% df.target$OTU) \n \n ## simulated\n df.SIM.j.f = df.SIM.j %>%\n filter(count > 0) %>%\n dplyr::select(taxon, fraction, count, BD_mid, bulk_abund, SIM_rep) %>%\n rename('OTU' = taxon,\n 'sample' = fraction,\n 'Buoyant_density' = BD_mid,\n 'abundance' = count) %>%\n mutate(dataset = 'simulated') %>%\n filter(OTU %in% df.target$OTU) \n \n ## getting just intersecting OTUs\n OTUs.int = intersect(df.EMP.j.f$OTU, df.SIM.j.f$OTU)\n \n df.j = rbind(df.EMP.j.f, df.SIM.j.f) %>%\n filter(OTU %in% OTUs.int) %>%\n group_by(sample) %>%\n mutate(rel_abund = abundance / sum(abundance))\n \n cat('Number of overlapping OTUs between emperical & simulated:', \n df.j$OTU %>% unique %>% length, '\\n\\n')\n return(df.j)\n }\n\n\ndf.j = join_abund_dists(df.EMP.j, df.SIM.j, df.target)\ndf.j %>% 
head(n=3) %>% as.data.frame \n\n%%R\n# closure operation\ndf.j = df.j %>%\n ungroup() %>%\n mutate(SIM_rep = SIM_rep %>% as.numeric) %>%\n group_by(dataset, SIM_rep, sample) %>%\n mutate(rel_abund_c = rel_abund / sum(rel_abund)) %>%\n ungroup()\n\ndf.j %>% head(n=3) %>% as.data.frame\n\n%%R -h 1500 -w 800\n# plotting \nplot_abunds = function(df){\n p = ggplot(df, aes(Buoyant_density, rel_abund_c, fill=OTU)) +\n geom_area(stat='identity', position='dodge', alpha=0.5) +\n labs(x='Buoyant density', \n y='Subsampled community\\n(relative abundance for subset taxa)') +\n theme_bw() +\n theme( \n text = element_text(size=16),\n legend.position = 'none',\n axis.title.y = element_text(vjust=1), \n axis.title.x = element_blank(),\n plot.margin=unit(c(0.1,1,0.1,1), \"cm\")\n )\n return(p)\n }\n\n\n# simulations\ndf.j.f = df.j %>%\n filter(dataset == 'simulated')\np.SIM = plot_abunds(df.j.f)\np.SIM = p.SIM + facet_grid(SIM_rep ~ .)\n\n# emperical\ndf.j.f = df.j %>%\n filter(dataset == 'emperical')\np.EMP = plot_abunds(df.j.f)\n\n# make figure\ngrid.arrange(p.EMP, p.SIM, ncol=1, heights=c(1,5))", "Check: plotting closure abs-abunds of overlapping taxa\n\nThe overlapping taxa should have the same closure-transformed relative abundances for both:\nabsolute abundances (OTU table)\nrelative abundances (subsampled OTU table; as above)\n\nLoading OTU table (abs abunds)", "OTU_files = !find $buildDir -name \"OTU_abs1e9.txt\"\nOTU_files\n\n%%R -i OTU_files\n# loading files\n\ndf.SIM.abs = list()\nfor (x in OTU_files){\n SIM_rep = gsub('/home/nick/notebook/SIPSim/dev/fullCyc/n1147_frag_norm_9_2.5_n5/Day1_rep10/', '', x)\n SIM_rep = gsub('/OTU_abs1e9.txt', '', SIM_rep)\n df.SIM.abs[[SIM_rep]] = read.delim(x, sep='\\t') \n }\ndf.SIM.abs = do.call('rbind', df.SIM.abs)\ndf.SIM.abs$SIM_rep = gsub('\\\\.[0-9]+$', '', rownames(df.SIM.abs))\nrownames(df.SIM.abs) = 1:nrow(df.SIM.abs)\ndf.SIM.abs %>% head\n\n%%R\n# subset just overlapping taxa\n# & closure operation\ndf.SIM.abs.t = df.SIM.abs 
%>%\n filter(taxon %in% df.target$OTU) %>%\n group_by(SIM_rep, fraction) %>%\n mutate(rel_abund_c = count / sum(count)) %>%\n rename('Buoyant_density' = BD_mid,\n 'OTU' = taxon)\n\ndf.SIM.abs.t %>% head(n=3) %>% as.data.frame\n\n%%R -w 800 -h 1200\n# plotting\np.abs = plot_abunds(df.SIM.abs.t) \np.abs + facet_grid(SIM_rep ~ .)", "Notes\n\nThe abundance distributions of the overlapping OTUs look pretty similar between 'absolute' and 'relative' (post-PCR & post-sequencing simulation). \nThe difference between absolute and relative are probably due to the PCR simulation\n\nCalculating center of mass for overlapping taxa\n\nweighted mean BD, where weights are relative abundances", "%%R\n\ncenter_mass = function(df){\n df = df %>%\n group_by(dataset, SIM_rep, OTU) %>%\n summarize(center_mass = weighted.mean(Buoyant_density, rel_abund_c, na.rm=T)) %>%\n ungroup()\n return(df)\n}\n\ndf.j.cm = center_mass(df.j) \n\n%%R\n# getting mean cm for all SIM_reps\ndf.j.cm.s = df.j.cm %>%\n group_by(dataset, OTU) %>%\n summarize(mean_cm = mean(center_mass, na.rm=T),\n stdev_cm = sd(center_mass)) %>%\n ungroup() %>%\n spread(dataset, mean_cm) %>%\n group_by(OTU) %>%\n summarize(stdev_cm = mean(stdev_cm, na.rm=T),\n emperical = mean(emperical, na.rm=T),\n simulated = mean(simulated, na.rm=T)) %>%\n ungroup()\n\n# check\ncat('Number of OTUs:', df.j.cm.s$OTU %>% unique %>% length, '\\n')\n\n# plotting\nggplot(df.j.cm.s, aes(emperical, simulated,\n ymin = simulated - stdev_cm,\n ymax = simulated + stdev_cm)) +\n geom_pointrange() +\n stat_function(fun = function(x) x, linetype='dashed', alpha=0.5, color='red') +\n scale_x_continuous(limits=c(1.69, 1.74)) +\n scale_y_continuous(limits=c(1.7, 1.75)) +\n labs(title='Center of mass') +\n theme_bw() +\n theme(\n text = element_text(size=16)\n )", "Notes\n\nError bars are stdev of simulation reps\nThe center of mass for most OTUs is shifted heavier in the simulations vs the emperical data\nAlso, most have a mean simulated BD of ~1.73\nThis is 
approximately the middle of the BD range (1.725)\nIt suggests that many of the taxa are equally dispersed across the gradient", "%%R\nBD_MIN = df.j$Buoyant_density %>% min \nBD_MAX = df.j$Buoyant_density %>% max\nBD_AVE = mean(c(BD_MIN, BD_MAX))\nprint(c(BD_MIN, BD_AVE, BD_MAX))", "R^2 for each SIM_rep", "%%R\n# formatting table\ndf.j.cm.j = inner_join(df.j.cm %>% \n filter(dataset == 'simulated') %>%\n rename('cm_SIM' = center_mass),\n df.j.cm %>% \n filter(dataset == 'emperical') %>%\n rename('cm_EMP' = center_mass),\n c('OTU' = 'OTU')) %>%\n select(-starts_with('dataset'))\n\ndf.j.cm.j %>% head\n\n%%R -w 300 -h 400\n# lm()\ndf.j.cm.j.lm = df.j.cm.j %>%\n group_by(SIM_rep.x) %>%\n do(fit = lm(cm_EMP ~ cm_SIM, data = .)) %>%\n mutate(R2 = summary(fit)$coeff[2],\n data = 'simulated')\n\n#df.j.cm.j.lm %>% head\n\n# plotting\nggplot(df.j.cm.j.lm, aes(data, R2)) +\n geom_boxplot() +\n geom_jitter(height=0, width=0.2, color='red') +\n labs(y='R^2', title='simulated ~ emperical') +\n theme_bw() +\n theme(\n text = element_text(size=16)\n )\n", "Notes:\n\nPretty low R^2, with 1 simulation rep at almost 0\n\nPlotting abundances of some outlier taxa\n\nwhy is simulated so different in center of mass vs emperical?", "%%R -h 1100 -w 800\n\n# cutoff on which OTU are major outliers (varying between simulated and emperical)\nBD.diff.cut = 0.02\n\n# which OTU to plot?\ndf.j.cm.s.f = df.j.cm.s %>% \n mutate(cm_diff = abs(emperical - simulated)) %>%\n filter(cm_diff > BD.diff.cut, ! 
is.na(simulated)) \n\nprint('OTUs:')\nprint(df.j.cm.s.f$OTU)\n\n# filtering to just target taxon\n## Simulated\ndf.j.f = df.j %>%\n filter(dataset == 'simulated', \n OTU %in% df.j.cm.s.f$OTU) \n\np.SIM = plot_abunds(df.j.f)\np.SIM = p.SIM + facet_grid(SIM_rep ~ .)\n\n## Emperical\ndf.j.f = df.j %>%\n filter(dataset == 'emperical', \n OTU %in% df.j.cm.s.f$OTU)\np.EMP = plot_abunds(df.j.f)\n\n# make figure\ngrid.arrange(p.EMP, p.SIM, ncol=1, heights=c(1,5))", "Notes\n\nThe center of mass seems to be correct: most taxa are shifted heavier relative to the emperical data\n\nCheck: which taxon is the highly abundant one?\n\nwhy is it shifted so far to 'heavy'\nthe abundance distribution mode is ~1.73, which is a G+C of ~0.71\n\nAbundant taxon: OTU.32\n * rep genome: Pseudonocardia_dioxanivorans_CB1190.fna\n * genome GC = 73.31\n * do the G+C contents of this amplicon region vary among taxa in the genus?\n * No, genome G+C for all 7 genomes in ncbi are ~70 \nCheck: plotting 'absolute' abudance distributions for major CM outliers", "%%R\n# subset outliers\ndf.SIM.abs.t = df.SIM.abs %>%\n filter(taxon %in% df.target$OTU) %>%\n group_by(SIM_rep, fraction) %>%\n mutate(rel_abund_c = count / sum(count)) %>%\n rename('Buoyant_density' = BD_mid,\n 'OTU' = taxon) %>%\n filter(OTU %in% df.j.cm.s.f$OTU) \n\ndf.SIM.abs.t %>% head(n=3) %>% as.data.frame\n\n%%R -w 800 -h 1200\n# plotting\np.abs = plot_abunds(df.SIM.abs.t) \np.abs + facet_grid(SIM_rep ~ .)", "Notes:\n\nsimulated absolute abundances seem similar to the simulated relative abundances: both shifted heavy vs emperical\n\nWhat genomes are these outliers?", "%%R -w 800\ngenomes = df.target %>% \n filter(OTU %in% df.SIM.abs.t$OTU) \n\ndf.genInfo = read.delim('/var/seq_data/ncbi_db/genome/Jan2016/bac_complete_spec-rep1_rn/genome_info.txt')\ndf.genInfo.f = df.genInfo %>% \n filter(seq_name %in% genomes$genome_seqID) %>%\n mutate(genome_ID = gsub('\\\\.fna', '', file_name))\n\ndf.genInfo.f$genome_ID = 
reorder(df.genInfo.f$genome_ID, df.genInfo.f$total_GC)\n\n# plotting\nggplot(df.genInfo.f, aes(genome_ID, total_GC)) +\n geom_point() +\n theme_bw() +\n theme(\n text = element_text(size=16),\n axis.text.x = element_text(angle=50, hjust=1)\n )", "Notes\n\nOdd... these outlier OTUs are simulated from genomes with high G+C.\nI would have expected the OTUs to be shifted right in the emperical data.\nAre the simulations over-shifting the distributions of heavy G+C taxa?\n\nPlotting rep genome G+C of all overlapping taxa", "%%R -w 800\n\ndf.genInfo = read.delim('/var/seq_data/ncbi_db/genome/Jan2016/bac_complete_spec-rep1_rn/genome_info.txt')\ndf.genInfo.f = df.genInfo %>% \n filter(seq_name %in% df.target$genome_seqID) %>%\n mutate(genome_ID = gsub('\\\\.fna', '', file_name))\n\ndf.genInfo.f$genome_ID = reorder(df.genInfo.f$genome_ID, df.genInfo.f$total_GC)\n\n# plotting\nggplot(df.genInfo.f, aes(genome_ID, total_GC)) +\n geom_point() +\n theme_bw() +\n theme(\n text = element_text(size=16),\n axis.text.x = element_blank()\n )" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
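The notebook above defines an R helper `center_mass` that computes each taxon's center of mass as the relative-abundance-weighted mean buoyant density. A minimal Python mirror of that calculation is sketched below; the buoyant densities and abundances are made-up illustration values, not data from the notebook.

```python
# Sketch (in Python) of the weighted-mean center-of-mass calculation that the
# R `center_mass` helper performs: weights are relative abundances, values are
# the buoyant densities of the gradient fractions. Example numbers are made up.

def center_of_mass(buoyant_densities, rel_abunds):
    """Weighted mean buoyant density; weights = relative abundances."""
    total = sum(rel_abunds)
    if total == 0:
        raise ValueError("all abundances are zero")
    return sum(bd * w for bd, w in zip(buoyant_densities, rel_abunds)) / total

# A taxon concentrated in the heavier fractions has a heavier center of mass.
bd = [1.70, 1.71, 1.72, 1.73, 1.74]
abund = [0.0, 0.1, 0.2, 0.4, 0.3]
print(round(center_of_mass(bd, abund), 4))  # → 1.729
```

The same idea extends directly to a grouped dataframe operation (as in the R code) by applying this function per taxon.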
QuantEcon/QuantEcon.notebooks
ddp_ex_MF_7_6_1_py.ipynb
bsd-3-clause
[ "DiscreteDP Example: Mine Management\nDaisuke Oyama\nFaculty of Economics, University of Tokyo\nFrom Miranda and Fackler, <i>Applied Computational Economics and Finance</i>, 2002,\nSection 7.6.1", "%matplotlib inline\n\nimport itertools\nimport numpy as np\nfrom scipy import sparse\nimport matplotlib.pyplot as plt\nfrom quantecon.markov import DiscreteDP", "The model is formulated with finite horizon in Section 7.2.1,\nbut solved with infinite horizon in Section 7.6.1.\nHere we follow the latter.", "price = 1 # Market price of ore\nsbar = 100 # Upper bound of ore stock\nbeta = 0.9 # Discount rate\nn = sbar + 1 # Number of states\nm = sbar + 1 # Number of actions\n\n# Cost function\nc = lambda s, x: x**2 / (1+s)", "Product formulation\nThis approch sets up the reward array R and the transition probability array Q\nas a 2-dimensional array of shape (n, m)\nand a 3-simensional array of shape (n, m, n), respectively,\nwhere the reward is set to $-\\infty$ for infeasible state-action pairs\n(and the transition probability distribution is arbitrary for those pairs).\nReward array:", "R = np.empty((n, m))\nfor s, x in itertools.product(range(n), range(m)):\n R[s, x] = price * x - c(s, x) if x <= s else -np.inf", "(Degenerate) transition probability array:", "Q = np.zeros((n, m, n))\nfor s, x in itertools.product(range(n), range(m)):\n if x <= s:\n Q[s, x, s-x] = 1\n else:\n Q[s, x, 0] = 1 # Arbitrary", "Set up the dynamic program as a DiscreteDP instance:", "ddp = DiscreteDP(R, Q, beta)", "Solve the optimization problem with the solve method,\nwhich by defalut uses the policy iteration algorithm:", "res = ddp.solve()", "The number of iterations:", "res.num_iter", "The controlled Markov chain is stored in res.mc.\nTo simulate:", "nyrs = 15\nspath = res.mc.simulate(nyrs+1, init=sbar)\n\nspath", "Draw the graphs:", "wspace = 0.5\nhspace = 0.3\nfig = plt.figure(figsize=(12, 8+hspace))\nfig.subplots_adjust(wspace=wspace, hspace=hspace)\nax0 = plt.subplot2grid((2, 4), (0, 0), 
colspan=2)\nax1 = plt.subplot2grid((2, 4), (0, 2), colspan=2)\nax2 = plt.subplot2grid((2, 4), (1, 1), colspan=2)\n\nax0.plot(res.v)\nax0.set_xlim(0, sbar)\nax0.set_ylim(0, 60)\nax0.set_xlabel('Stock')\nax0.set_ylabel('Value')\nax0.set_title('Optimal Value Function')\n\nax1.plot(res.sigma)\nax1.set_xlim(0, sbar)\nax1.set_ylim(0, 25)\nax1.set_xlabel('Stock')\nax1.set_ylabel('Extraction')\nax1.set_title('Optimal Extraction Policy')\n\nax2.plot(spath)\nax2.set_xlim(0, nyrs)\nax2.set_ylim(0, sbar)\nax2.set_xticks(np.linspace(0, 15, 4, endpoint=True))\nax2.set_xlabel('Year')\nax2.set_ylabel('Stock')\nax2.set_title('Optimal State Path')\n\nplt.show()", "State-action pairs formulation\nThis approach assigns the rewards and transition probabilities\nonly to feaslbe state-action pairs,\nsetting up R and Q as a 1-dimensional array of length L\nand a 2-dimensional array of shape (L, n), respectively.\nIn particular, this allows us to formulate Q in\nscipy sparse matrix format.\nWe need the arrays of feasible state and action indices:", "S = np.arange(n)\nX = np.arange(m)\n\n# Values of remaining stock in the next period\nS_next = S.reshape(n, 1) - X.reshape(1, m)\n\n# Arrays of feasible state and action indices\ns_indices, a_indices = np.where(S_next >= 0)\n\n# Number of feasible state-action pairs\nL = len(s_indices)\n\nL\n\ns_indices\n\na_indices", "Reward vector:", "R = np.empty(L)\nfor i, (s, x) in enumerate(zip(s_indices, a_indices)):\n R[i] = price * x - c(s, x)", "(Degenerate) transition probability array,\nwhere we use the scipy.sparse.lil_matrix format,\nwhile any format will do\n(internally it will be converted to the scipy.sparse.csr_matrix format):", "Q = sparse.lil_matrix((L, n))\nit = np.nditer((s_indices, a_indices), flags=['c_index'])\nfor s, x in it:\n i = it.index\n Q[i, s-x] = 1", "Alternatively, one can construct Q directly as a scipy.sparse.csr_matrix as follows:", "# data = np.ones(L)\n# indices = s_indices - a_indices\n# indptr = np.arange(L+1)\n# Q = 
sparse.csr_matrix((data, indices, indptr), shape=(L, n))", "Set up the dynamic program as a DiscreteDP instance:", "ddp_sp = DiscreteDP(R, Q, beta, s_indices, a_indices)", "Solve the optimization problem with the solve method,\nwhich by defalut uses the policy iteration algorithm:", "res_sp = ddp_sp.solve()", "Number of iterations:", "res_sp.num_iter", "Simulate the controlled Markov chain:", "nyrs = 15\nspath_sp = res_sp.mc.simulate(nyrs+1, init=sbar)", "Draw the graphs:", "wspace = 0.5\nhspace = 0.3\nfig = plt.figure(figsize=(12, 8+hspace))\nfig.subplots_adjust(wspace=wspace, hspace=hspace)\nax0 = plt.subplot2grid((2, 4), (0, 0), colspan=2)\nax1 = plt.subplot2grid((2, 4), (0, 2), colspan=2)\nax2 = plt.subplot2grid((2, 4), (1, 1), colspan=2)\n\nax0.plot(res_sp.v)\nax0.set_xlim(0, sbar)\nax0.set_ylim(0, 60)\nax0.set_xlabel('Stock')\nax0.set_ylabel('Value')\nax0.set_title('Optimal Value Function')\n\nax1.plot(res_sp.sigma)\nax1.set_xlim(0, sbar)\nax1.set_ylim(0, 25)\nax1.set_xlabel('Stock')\nax1.set_ylabel('Extraction')\nax1.set_title('Optimal Extraction Policy')\n\nax2.plot(spath_sp)\nax2.set_xlim(0, nyrs)\nax2.set_ylim(0, sbar)\nax2.set_xticks(np.linspace(0, 15, 4, endpoint=True))\nax2.set_xlabel('Year')\nax2.set_ylabel('Stock')\nax2.set_title('Optimal State Path')\n\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
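The mine-management model above is solved with `quantecon`'s `DiscreteDP` (policy iteration by default). To make the underlying Bellman equation concrete, here is a minimal pure-Python value-iteration sketch of the same problem, with the stock bound shrunk to 10 so it runs quickly; this is for illustration only and is far less efficient than the `DiscreteDP` solver.

```python
# Value iteration for v(s) = max_{0 <= x <= s} [ price*x - x^2/(1+s) + beta*v(s-x) ],
# the Bellman equation of the mine model above, on a small state space.

def solve_mine(sbar=10, price=1.0, beta=0.9, tol=1e-10, max_iter=10_000):
    n = sbar + 1
    v = [0.0] * n
    for _ in range(max_iter):
        v_new = [
            max(price * x - x**2 / (1 + s) + beta * v[s - x] for x in range(s + 1))
            for s in range(n)
        ]
        diff = max(abs(a - b) for a, b in zip(v, v_new))
        v = v_new
        if diff < tol:
            break
    # Greedy extraction policy read off from the converged value function
    sigma = [
        max(range(s + 1),
            key=lambda x: price * x - x**2 / (1 + s) + beta * v[s - x])
        for s in range(n)
    ]
    return v, sigma

v, sigma = solve_mine()
```

With zero stock the only feasible action is to extract nothing, so `v[0] == 0` and `sigma[0] == 0`; the value function is nondecreasing in the stock level.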
gtzan/mir_book
data_mining_random_variables.ipynb
cc0-1.0
[ "Discrete Random Variables and Sampling\nGeorge Tzanetakis, University of Victoria\nIn this notebook we will explore discrete random variables and sampling. After defining a helper class and associated functions we will be able to create both symbolic and numeric random variables and generate samples from them. \nDefine a helper random variable class based on the scipy discrete random variable functionality providing both numeric and symbolic RVs", "%matplotlib inline \nimport matplotlib.pyplot as plt\nfrom scipy import stats\nimport numpy as np \n\nclass Random_Variable: \n \n def __init__(self, name, values, probability_distribution): \n self.name = name \n self.values = values \n self.probability_distribution = probability_distribution \n if all(type(item) is np.int64 for item in values): \n self.type = 'numeric'\n self.rv = stats.rv_discrete(name = name, values = (values, probability_distribution))\n elif all(type(item) is str for item in values): \n self.type = 'symbolic'\n self.rv = stats.rv_discrete(name = name, values = (np.arange(len(values)), probability_distribution))\n self.symbolic_values = values \n else: \n self.type = 'undefined'\n \n def sample(self,size): \n if (self.type =='numeric'): \n return self.rv.rvs(size=size)\n elif (self.type == 'symbolic'): \n numeric_samples = self.rv.rvs(size=size)\n mapped_samples = [values[x] for x in numeric_samples]\n return mapped_samples \n \n ", "Let's first create some random samples of symbolic random variables corresponding to a coin and a dice", "values = ['H', 'T']\nprobabilities = [0.9, 0.1]\ncoin = Random_Variable('coin', values, probabilities)\nsamples = coin.sample(20)\nprint(samples)\n\nvalues = ['1', '2', '3', '4', '5', '6']\nprobabilities = [1/6.] * 6\ndice = Random_Variable('dice', values, probabilities)\nsamples = dice.sample(10)\nprint(samples);\n[100] * 10\n[1 / 6.] 
* 3\n", "Now let's look at a numeric random variable corresponding to a dice so that we can more easily make plots and histograms", "values = np.arange(1,7)\nprobabilities = [1/6.] * 6\ndice = Random_Variable('dice', values, probabilities)\nsamples = dice.sample(100)\nplt.stem(samples, markerfmt= ' ')", "Let's now look at a histogram of these generated samples. Notice that even with 100 samples the bars are not of equal length, so the calculated frequencies only approximate the probabilities used to generate them", "plt.figure()\nplt.hist(samples,bins=[1,2,3,4,5,6,7],density=True, rwidth=0.5,align='left');", "Let's plot the cumulative histogram of the samples", "plt.hist(samples,bins=[1,2,3,4,5,6,7],density=True, rwidth=0.5,align='left', cumulative=True);", "Let's now estimate the frequency of the event 'roll an even number' in different ways.\nFirst let's count the number of even numbers in the generated samples. Then let's\ntake the sum of the individual estimated probabilities.", "\n# we can also write the predicates directly using lambda notation\nest_even = len([x for x in samples if x%2==0]) / len(samples)\nest_2 = len([x for x in samples if x==2]) / len(samples)\nest_4 = len([x for x in samples if x==4]) / len(samples)\nest_6 = len([x for x in samples if x==6]) / len(samples)\nprint(est_even)\n# Let's print some estimates\nprint('Estimates of 2,4,6 = ', (est_2, est_4, est_6))\nprint('Direct estimate = ', est_even)\nprint('Sum of estimates = ', est_2 + est_4 + est_6)\nprint('Theoretical value = ', 0.5)\n", "Notice that we can always estimate the probability of an event by simply counting how many times it occurs in the samples of an experiment. However, if we are interested in multiple events, it can be easier to calculate the probabilities of the values of individual random variables and then use the rules of probability to estimate the probabilities of more complex events." ]

[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
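The notebook above estimates event probabilities by counting occurrences in samples drawn from a scipy-backed `Random_Variable` class. The same estimation-by-counting idea can be sketched with only the standard library, using `random.randint` in place of the scipy machinery:

```python
# Standard-library analogue of the frequency-estimation idea above: draw fair
# die rolls and estimate P(even) both directly and as a sum of per-face
# estimates. This sketch does not use the notebook's Random_Variable class.
import random

random.seed(42)
n = 100_000
samples = [random.randint(1, 6) for _ in range(n)]

# Direct estimate of the event "roll an even number"
est_even = sum(1 for x in samples if x % 2 == 0) / n

# Per-face estimates; their sum over {2, 4, 6} equals the direct estimate
est_faces = {k: samples.count(k) / n for k in range(1, 7)}
assert abs(est_even - (est_faces[2] + est_faces[4] + est_faces[6])) < 1e-12

print(round(est_even, 3))
```

As `n` grows both estimates converge to the theoretical value 0.5; with 100,000 draws the estimate is already within a percent or so of it.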
saketkc/notebooks
python/Kneedle Algorithm.ipynb
bsd-2-clause
[ "Table of Contents\n<p><div class=\"lev1 toc-item\"><a href=\"#Python-implementation-of-Finding-a-&quot;Kneedle&quot;-in-a-Haystack:-Detecting-Knee-Points-in-System-Behavior\" data-toc-modified-id=\"Python-implementation-of-Finding-a-&quot;Kneedle&quot;-in-a-Haystack:-Detecting-Knee-Points-in-System-Behavior-1\"><span class=\"toc-item-num\">1&nbsp;&nbsp;</span>Python implementation of <a href=\"https://www1.icsi.berkeley.edu/~barath/papers/kneedle-simplex11.pdf\" target=\"_blank\">Finding a \"Kneedle\" in a Haystack: Detecting Knee Points in System Behavior</a></a></div><div class=\"lev2 toc-item\"><a href=\"#Example-1\" data-toc-modified-id=\"Example-1-11\"><span class=\"toc-item-num\">1.1&nbsp;&nbsp;</span>Example 1</a></div><div class=\"lev2 toc-item\"><a href=\"#Example-2\" data-toc-modified-id=\"Example-2-12\"><span class=\"toc-item-num\">1.2&nbsp;&nbsp;</span>Example 2</a></div>\n\n# Python implementation of [Finding a \"Kneedle\" in a Haystack: Detecting Knee Points in System Behavior](https://www1.icsi.berkeley.edu/~barath/papers/kneedle-simplex11.pdf)", "%matplotlib inline\nimport numpy as np\nimport scipy as sp\nimport seaborn as sns\nfrom scipy.interpolate import UnivariateSpline\nimport matplotlib.pyplot as plt\nsns.set_style('white')\n\nnp.random.seed(42)\n\ndef draw_plot(X, Y, knee_point=None):\n plt.plot(X, Y)\n if knee_point:\n plt.axvline(x=knee_point, color='k', linestyle='--')\n\n\nmu = 50\nsigma = 10\nS = 1\nn = 1000", "Example 1\n$$ X \\sim N(50, 10) $$\nKnee point(expected) : $\\mu+\\sigma=60$\nKnee point(simulation) : 66", "X = np.random.normal(mu, sigma, n)\n\nsorted_X = np.sort(X)\nY = np.arange(len(X))/float(len(sorted_X))\n \n\n\ndef _locate(Y_d, T_lm, maxima_ids):\n n = len(Y_d)\n for j in range(0, n):\n for index, i in enumerate(maxima_ids):\n if j <= i:\n continue\n if Y_d[j] <= T_lm[index]:\n return index\n\n\ndef find_knee_point(X, Y, S):\n n = len(X)\n spl = UnivariateSpline(X, Y)\n X_s = np.linspace(np.min(X), np.max(X), n)\n Y_s 
= spl(X_s)\n X_sn = (X_s - np.min(X_s)) / (np.max(X_s) - np.min(X_s))\n Y_sn = (Y_s - np.min(Y_s)) / (np.max(Y_s) - np.min(Y_s))\n X_d = X_sn\n Y_d = Y_sn - X_sn\n X_lm = []\n Y_lm = []\n maxima_ids = []\n for i in range(1, n - 1):\n if (Y_d[i] > Y_d[i - 1] and Y_d[i] > Y_d[i + 1]):\n X_lm.append(X_d[i])\n Y_lm.append(Y_d[i])\n maxima_ids.append(i)\n T_lm = Y_lm - S * np.sum(np.diff(X_sn)) / (n - 1)\n knee_point_index = _locate(Y_d, T_lm, maxima_ids)\n knee_point = X_lm[knee_point_index] * (np.max(X_s) - np.min(X_s)\n ) + np.min(X_s)\n return knee_point, Y_d\n\nknee_point, yd = find_knee_point(sorted_X, Y, S)\ndraw_plot(sorted_X, Y, knee_point)\n\n\nknee_point", "Example 2\n$$ y = -1/x + 5$$\nKnee point(expected): $0.22$\nKnee point(simulation): $0.4$", "x = np.linspace(0.1,1,10)\ny = -1/x+5\nknee_point, _ = find_knee_point(x, y, S)\ndraw_plot(x, y, knee_point)\n\nknee_point" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
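The core of the Kneedle method implemented above is the difference curve: normalize x and y to [0, 1], subtract, and look for the maximum. A compact sketch of just that step is below; note the full algorithm also smooths the curve with a spline and applies the sensitivity threshold S, both of which this sketch deliberately skips.

```python
# Bare-bones difference-curve knee finder: knee = x at the maximum of
# y_normalized - x_normalized. Omits Kneedle's spline smoothing and
# sensitivity threshold, so it is only a sketch of the idea.

def simple_knee(xs, ys):
    x0, x1 = min(xs), max(xs)
    y0, y1 = min(ys), max(ys)
    xn = [(x - x0) / (x1 - x0) for x in xs]
    yn = [(y - y0) / (y1 - y0) for y in ys]
    diffs = [y - x for x, y in zip(xn, yn)]
    i = max(range(len(diffs)), key=diffs.__getitem__)
    return xs[i]

# For y = sqrt(x) on [0, 1], the difference curve sqrt(x) - x peaks at x = 0.25.
xs = [i / 1000 for i in range(1001)]
ys = [x ** 0.5 for x in xs]
print(simple_knee(xs, ys))  # → 0.25
```

For concave increasing curves like the notebook's examples, this maximum-difference point is exactly the "knee" the paper formalizes.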
GoogleCloudPlatform/healthcare
datathon/mimic_eicu/tutorials/BigQuery_ML.ipynb
apache-2.0
[ "Understanding Electronic Health Records with BigQuery ML\nThis tutorial introduces\nBigQuery ML (BQML) in the\ncontext of working with the MIMIC3\ndataset.\nBigQuery ML adds only a few statements to\nstandard SQL.\nThese statements automate the creation and evaluation of statistical models on\nBigQuery datasets. BigQuery ML has several\nadvantages\nover older machine learning tools and workflows. Some highlights are BQML's high\nperformance on massive datasets, support for\nHIPAA compliance, and\nease of use. BQML automatically implements state of the art best practices in\nmachine learning for your dataset.\nMIMIC3 is a 10-year database of health records from the intensive care unit of\nBeth Israel Deaconess Medical Center in Boston. It's full of insights that are\njust begging to be uncovered.\nTable of Contents\nSetup\nCovers importing libraries, and authenticating with Google Cloud in Colab.\nCase complexity & mortality\nNon-technical. Introduces the theme for this tutorial.\nTaking a first look at the data\nCovers basic SQL syntax, how BigQuery integrates with Colab and pandas, and the\nbasics of creating visualizations with seaborn.\nCreating a classification model\nCovers creating and training simple models with BigQuery ML.\nPlotting the predictions \nCovers inference (making predictions) with BigQuery ML models, and how to\ninspect the weights of a parametric model.\nAdding a confounding variable\nCovers creating and training a slightly more complicated model, and introduces\nhow BigQuery ML's model comparison features can be used to address confounding\nrelationships.\nPlotting ROC and precision-recall curves\nCovers how to create ROC and precision-recall curves with BigQuery ML. 
These are\nvisualizations that describe the performance of binary classification models .\nMore complex models\nCreating the models\nCovers creating logistic regression models with many input variables.\nGetting evaluation metrics\nCovers how to get numerical measures of model performance using BigQuery ML.\nExploring our model \nDemonstrates how to interpret models with many variables.\nConclusion\nNon-technical. Looks back on how we have used BigQuery ML to answer a research\nquestion.\nSetup\nFirst, you'll need to sign into your google account to access the Google Cloud\nPlatform (GCP).\nWe're also going to import some standard python data analysis packages that\nwe'll use later to visualize our models.", "from __future__ import print_function\nfrom google.colab import auth\nfrom google.cloud import bigquery\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n\nauth.authenticate_user()", "Next you'll need to enter some information on how to access the data.\nanalysis_project is the project used for processing the queries.\nThe other fields,\nadmissions_table,\nd_icd_diagnoses_table,\ndiagnoses_icd_table,\nand patients_table,\nidentify the BigQuery tables we're going to query. They're written in the form\n\"project_id.dataset_id.table_id\". 
We're going to use a slightly modified\nversion of the %%bigquery cell magic in this tutorial, which replaces these\nvariables with their values whenever they're surrounded by curly-braces.", "#@title Fill out this form then press [shift ⇧]+[enter ⏎] {run: \"auto\"}\nimport subprocess\nimport re\n\nanalysis_project = 'your-analysis-project' #@param {type:\"string\"}\n\nadmissions_table = 'physionet-data.mimiciii_clinical.admissions' # @param {type: \"string\"}\nd_icd_diagnoses_table = 'physionet-data.mimiciii_clinical.d_icd_diagnoses' # @param {type: \"string\"}\ndiagnoses_icd_table = 'physionet-data.mimiciii_clinical.diagnoses_icd' # @param {type: \"string\"}\npatients_table = 'physionet-data.mimiciii_clinical.patients' # @param {type: \"string\"}\n\n# Preprocess queries made with the %%bigquery magic\n# by substituting these values\nsub_dict = {\n 'admissions_table': admissions_table,\n 'd_icd_diagnoses_table': d_icd_diagnoses_table,\n 'diagnoses_icd_table': diagnoses_icd_table,\n 'patients_table': patients_table\n}\n\n# Get a suffix to attach to the names of the models created during this tutorial\n# to avoid collisions between simultaneous users.\naccount = subprocess.check_output(\n ['gcloud', 'config', 'list', 'account', '--format',\n 'value(core.account)']).decode().strip()\nsub_dict['suffix'] = re.sub(r'[^\\w]', '_', account)[:900]\n\n# Set the default project for running queries\nbigquery.magics.context.project = analysis_project\n\n# Set up the substitution preprocessing injection\nif bigquery.magics._run_query.func_name != 'format_and_run_query':\n original_run_query = bigquery.magics._run_query\n\ndef format_and_run_query(client, query, job_config=None):\n query = query.format(**sub_dict)\n return original_run_query(client, query, job_config)\n\nbigquery.magics._run_query = format_and_run_query\n\nprint('analysis_project:', analysis_project)\nprint()\nprint('custom %%bigquery magic substitutions:')\nfor k, v in sub_dict.items():\n print(' ', '{%s}' % k, 
'→', v)\n\n%config InlineBackend.figure_format = 'svg'\n\nbq = bigquery.Client(project=analysis_project)", "Case complexity & mortality\nThis tutorial is a case study. We're going to use BQML and MIMIC3 to answer a\nresearch question.\n\nIn the intensive care unit, are complex cases more or less likely to be\nfatal?\n\nMaybe it's obvious that they would be more fatal. After all, things only get\nworse as you add more comorbidities. Or maybe the exact opposite is true.\nCompare the patient who comes to the ICU with ventricular fibrillation to the\npatient who comes with a laundry list of chronic comorbidities. Especially\nwithin the context of a particular admission, the single acute condition seems\nmore lethal.\nTaking a first look at the data\nDo we have the data to answer this question?\nIf you browse through the\nlist of tables in the MIMIC dataset,\nyou'll find that whether the patient passed away during the course of their\nadmission is recorded. We can also operationalize the definition of case\ncomplexity by counting the number of diagnoses that the patient had during an\nadmission. More diagnoses means greater case complexity.\nWe need to check that we have a sufficiently diverse sample to build a viable\nmodel. First we'll check our dependent variable, which measures whether a\npatient passed away.", "%%bigquery\nSELECT\n COUNT(*) as total,\n SUM(HOSPITAL_EXPIRE_FLAG) as died\nFROM\n `{admissions_table}`", "Clearly the ICU is a very serious place: about 10% of admissions are mortal. As\ndata scientists, this tells us that we have a significant, albeit imbalanced,\nnumber of samples in both categories. The models we're training will easily\nadapt to this class imbalance, but we will need to be cautious when evaluating\nthe performance of our models. 
After all, a model that simply says \"no one dies\"\nwill be right 91% of the time.\nNext we'll look at the distribution of our independent variable: the number of\ndiagnoses assigned to a patient during their admission.", "%%bigquery hist_df\nSELECT\n n_diagnoses, COUNT(*) AS cnt\nFROM (\n SELECT\n COUNT(*) AS n_diagnoses\n FROM\n `{diagnoses_icd_table}`\n GROUP BY\n HADM_ID\n)\nGROUP BY n_diagnoses\nORDER BY n_diagnoses\n\ng = sns.barplot(\n x=hist_df.n_diagnoses, y=hist_df.cnt, color=sns.color_palette()[0])\n# Remove every fifth label on the x-axis for readability\nfor i, label in enumerate(g.get_xticklabels()):\n if i % 5 != 4 and i != 0:\n label.set_visible(False)", "With the exception of the dramatic mode¹, the spread of the diagnosis counts is\nbell-curved shaped. The mathematical explanation of this is called central limit\ntheorem. While this is by no means a deal breaker, the thins tails we see in the\ndistribution can be a challenge for linear-regression models. This is because\nthe extreme points tend to affect the\nlikelihood the most, so\nhaving fewer of them makes your model more sensitive to outliers. Regularization\ncan help with this, but if it becomes too much of a problem we can consider a\ndifferent type of model (such as support-vector machines, or robust regression)\ninstead of generalized linear regression.\n\n¹ Which is sort of fascinating. Comparing the most common diagnoses for\nadmissions with exactly 9 diagnoses to the rest of the cohort seems to suggest\nthat this is due to positive correlations between cardiac diagnoses, e.g.\ncardiac complications NOS, mitral valve disorders, aortic valve disorders,\nsubendocardial infarction etc. Your team might be interested in investigating\nthis more seriously, especially if there is a cardiologist among you.\nCreating a classification model\nCreating a model with BigQuery ML\nis simple. You write a normal query in standard SQL, and each row of the result\nis used as an input to train your model. 
BigQuery ML automatically applies the\nrequired\ntransformations\ndepending on each variable's data type. For example, STRINGs are transformed\ninto one-hot vectors, and TIMESTAMPs\nare\nstandardized.\nThese transformations are necessary to get a valid result, but they're easy to\nforget and a pain to implement. Without BQML, you also have to remember to apply\nthese transformations when you make predictions and plots. It's fantastic that\nBigQuery takes care of all this for you.\nBigQuery ML also automatically performs\nvalidation-based early stopping\nto prevent\noverfitting.\nTo start, we're going to create a\n(regularized) logistic regression\nmodel that uses a single variable, the number of diagnoses a patient had during\nan admission, to predict the probability that a patient will pass away during an\nICU admission.", "%%bigquery\n# BigQuery ML create model statement:\nCREATE OR REPLACE MODEL `mimic_models.complexity_mortality_{suffix}`\nOPTIONS(\n # Use logistic_reg for discrete predictions (classification) and linear_reg\n # for continuous predictions (forecasting).\n model_type = 'logistic_reg',\n # See the below aside (𝜎 = 0.5 ⇒ 𝜆 = 2)\n l2_reg = 2,\n # Identify the column to use as the label (dependent variable)\n input_label_cols = [\"died\"]\n)\nAS\n# standard SQL query to train the model with:\nSELECT\n COUNT(*) AS number_of_diagnoses,\n MAX(HOSPITAL_EXPIRE_FLAG) as died\nFROM\n `{admissions_table}`\n INNER JOIN `{diagnoses_icd_table}`\n USING (HADM_ID)\nGROUP BY HADM_ID", "Optional aside: picking the regularization penalty $(\\lambda)$ with Bayes' Theorem\nFrom the frequentist point of view,\n$l_2$ regularized regression\nminimizes the negative log-likelihood of a model with an added penalty term:\n$\\lambda \\| w \\|^2$. This penalty term reflects our desire for the model to be\nas simple as possible, and it removes the degeneracies caused by\ncollinear input variables.\n$\\lambda$ is called l2_reg in BigQuery ML model options. 
You're given the\nfreedom to set it to anything you want. In general, larger values of lambda\nencourage the model to give simpler explanations¹, and smaller values give the\nmodel more freedom to match the observed data. So what should you set $\\lambda$\n(a.k.a. l2_reg) to?\nA short calculation (see e.g. chapters 4.3.2 and 4.5.1 of\nPattern Recognition and Machine Learning)\nshows that $l_2$ penalized logistic regression is equivalent to Bayesian\nlogistic regression with the prior $ \\omega \\sim \\mathcal{N}(0, \\sigma^2 =\n\\frac{1}{2 \\lambda})$.\nLater on in this tutorial, we'll run an\n$l_1$ regularized regression,\nwhich means the penalty term is $\\lambda \\| \\omega \\|$. The same reasoning\napplies except the corresponding prior is $\\omega \\sim \\text{Laplace}(0, b =\n\\frac{1}{\\lambda})$.\nThis Bayesian perspective gives meaning to the value of $\\lambda$. It reflects\nour prior uncertainty about the strength of the relationship that we're\nmodeling.\nSince BQML automatically standardizes and one-hot encodes its inputs, we can use\nthis interpretation to give some generic advice on choosing $\\lambda$. If you\ndon't have any special information, then any value of $\\lambda$ around $1$ is\nreasonable, and reflects that even a perfect correlation between the input and\nthe output is not too surprising.\nAs long as you choose $\\lambda$ to be much less than your sample size, its exact\nvalue should not influence your results very much. And even very small values of\n$\\lambda$ can remedy problems due to collinear inputs.\n\n¹ Although regularization helps with overfitting, it does not completely solve\nit, and due care should still be taken not to select too many inputs for too\nlittle data.\nPlotting the predictions\nWe can inspect the weights that our model learned using the\nML.WEIGHTS\nstatement. 
The positive weight that we see for number_of_diagnoses is our\nfirst evidence that case complexity is associated with mortality.", "%%bigquery simple_model_weights\nSELECT * FROM ML.WEIGHTS(MODEL `mimic_models.complexity_mortality_{suffix}`)", "By default the weights are automatically translated to their unstandardized\nforms, meaning that we don't have to standardize our inputs before multiplying\nthem with the weights to obtain predictions. You can see the standardized\nweights with ML.WEIGHTS(MODEL ..., STRUCT(true AS standardize)), which can be\nhelpful for answering questions about the relative importance of different\nvariables, regardless of their scale.\nWe can use the unstandardized weights to make a Python function that returns the\npredicted probability of mortality given an ICU admission with a certain number\nof diagnoses\npython\ndef predict(number_of_diagnoses):\n return scipy.special.expit(\n simple_model_weights.weight[0] * number_of_diagnoses\n + simple_model_weights.weight[1])\nbut it's often faster and easier to make predictions with the\nML.PREDICT\nstatement.\nWe'd like to create a plot showing our model's predictions and the underlying\ndata. 
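A self-contained variant of the predict function above, with expit written out explicitly and placeholder weight and intercept values (the real ones come from the ML.WEIGHTS query; these numbers are made up for illustration):

```python
import math

def predict(number_of_diagnoses, weight=0.1, intercept=-3.0):
    # Logistic regression prediction: sigmoid of the linear combination.
    # weight and intercept are illustrative placeholders, not trained values.
    z = weight * number_of_diagnoses + intercept
    return 1.0 / (1.0 + math.exp(-z))

predict(10)  # a probability strictly between 0 and 1
```

With a positive weight, more diagnoses always yields a higher predicted probability, which is the monotone trend the prediction line exhibits.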
We can use ML.PREDICT to get the data to draw the prediction line, and\ncopy-paste the query we fed into CREATE MODEL to get the data points.", "params = {'max_prediction': hist_df.n_diagnoses.max()}\n\n%%bigquery line_df --params $params\nSELECT * FROM\nML.PREDICT(MODEL `mimic_models.complexity_mortality_{suffix}`, (\n SELECT * FROM\n UNNEST(GENERATE_ARRAY(1, @max_prediction)) AS number_of_diagnoses\n))\n\n%%bigquery scatter_df\nSELECT\n COUNT(*) AS num_diag,\n MAX(HOSPITAL_EXPIRE_FLAG) as died\nFROM\n `{admissions_table}` AS adm\n INNER JOIN `{diagnoses_icd_table}` AS diag\n USING (HADM_ID)\nGROUP BY HADM_ID\n\nsns.regplot(\n x='num_diag',\n y='died',\n data=scatter_df,\n fit_reg=False,\n x_bins=np.arange(1,\n scatter_df.num_diag.max() + 1))\nplt.plot(line_df.number_of_diagnoses,\n line_df.predicted_died_probs.apply(lambda x: x[0]['prob']))\nplt.xlabel('Case complexity (number of diagnoses)')\nplt.ylabel('Probability of death during admission')", "Qualitatively, our model fits the data quite well, and the trend is pretty\nclear. We might be tempted to say we've proven that increasing case complexity\nincreases the probability of death during an admission to the ICU. While we've\nprovided some evidence of this, we haven't proven it yet.\nThe biggest problem is we don't know if case complexity is causing the increase\nin deaths, or if it is merely correlated with some other variables that affect the\nprobability of death more directly.\nAdding a confounding variable\nPatient age is a likely candidate for a confounding variable that could be\nmediating the relationship between complexity and mortality. Patients generally\naccrue diagnoses as they age¹ and approach their life expectancy. 
By adding the\npatient's age to our model, we can see how much of the relationship between case\ncomplexity and mortality is explained by the patient's age.\n\n¹ Using the CORR standard SQL function, you can calculate that the Pearson\ncorrelation coefficient between age and number of diagnoses is $0.37$", "%%bigquery\nCREATE OR REPLACE MODEL `mimic_models.complexity_age_mortality_{suffix}`\nOPTIONS(model_type='logistic_reg', l2_reg=2, input_label_cols=[\"died\"])\nAS\nSELECT\n # MIMIC3 sets all ages over 89 to 300 to avoid the possibility of\n # identification.\n IF(DATETIME_DIFF(ADMITTIME, DOB, DAY)/365.25 < 200,\n DATETIME_DIFF(ADMITTIME, DOB, DAY)/365.25,\n # The life expectancy of a 90 year old is approximately 5 years according\n # to actuarial tables. So we'll use 95 as the mean age of 90+'s\n 95) AS age,\n num_diag,\n died\nFROM\n (SELECT\n COUNT(*) AS num_diag,\n MAX(HOSPITAL_EXPIRE_FLAG) as died,\n ANY_VALUE(ADMITTIME) as ADMITTIME,\n SUBJECT_ID\n FROM\n `{admissions_table}` AS adm\n JOIN `{diagnoses_icd_table}` AS diag\n USING (HADM_ID, SUBJECT_ID)\n GROUP BY HADM_ID, SUBJECT_ID\n )\n JOIN `{patients_table}` AS patients\n USING (SUBJECT_ID)", "When we investigate the weights for this model, we see the weight associated\nwith the number of diagnoses is only slightly smaller now. This tells us that\nsome of the effect we saw in the univariate model was due to the confounding\ninfluence of age, but most of it wasn't.", "%%bigquery\nSELECT * FROM ML.WEIGHTS(MODEL `mimic_models.complexity_age_mortality_{suffix}`)", "Another way to understand this relationship is to compare the effectiveness of\nthe model with and without age as an input. This answers the question: given the\nnumber of diagnoses that a patient has received, how much extra information does\ntheir age give us? To be thorough, we could also include a model with just the\npatient's age. 
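The correlation quoted in the footnote can be sanity-checked outside SQL as well; here is a minimal plain-Python Pearson correlation, equivalent in spirit to the CORR function (illustrative only, not how BigQuery computes it):

```python
def pearson_corr(xs, ys):
    # Pearson correlation coefficient: covariance divided by the product
    # of the standard deviations (computed without any libraries).
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

pearson_corr([1, 2, 3, 4], [2, 4, 6, 8])  # perfectly linear -> 1.0
```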
You can add a couple of code cells to this notebook and do this\nas an exercise if you're curious.\nPlotting ROC and precision-recall curves\nOne way to compare the effectiveness of binary classification models is with\nROC curves or precision-recall curves.\nSince ROC curves tend to appear overly optimistic when the data has a\nsignificant class imbalance, we're going to favour precision-recall curves in\nthis tutorial. Precision-recall curves plot the recall (which measures the\nmodel's performance on the positive samples)\n$$\n\\text{Recall} = \\frac{\\text{True Positives}}{\\text{True Positives} +\n\\text{False Negatives}}\n$$\nagainst the precision (which measures the model's performance on the samples it\nclassified as positive examples)\n$$\n\\text{Precision} = \\frac{\\text{True Positives}}{\\text{True Positives} +\n\\text{False Positives}}\n$$\nas the decision threshold\nranges from $0$ (predict everyone dies) to $1$ (predict no one dies)¹.\nTo make these plots, we're going to use the\nML.ROC_CURVE\nBigQuery ML statement. ML.ROC_CURVE returns the data you need to draw both ROC\nand precision-recall curves with your graphing library of choice.\nML.ROC_CURVE defaults to using data from the evaluation dataset. If it\noperated on the training dataset, it would be difficult to distinguish\noverfitting from excellent performance. 
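The two formulas above translate directly into code; for instance, a classifier with 80 true positives, 20 false positives, and 120 false negatives has precision 0.8 and recall 0.4:

```python
def precision_recall(true_positives, false_positives, false_negatives):
    # Direct implementation of the Precision and Recall formulas above.
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return precision, recall

precision_recall(80, 20, 120)  # -> (0.8, 0.4)
```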
If you have your own validation dataset,\nyou can provide it as an optional second argument.\n\n¹ BigQuery ML uses the convention that the threshold is between $0$ and $1$,\nrather than the logit of this value.", "%%bigquery comp_roc\nSELECT * FROM ML.ROC_CURVE(MODEL `mimic_models.complexity_mortality_{suffix}`)\n\n%%bigquery comp_age_roc\nSELECT * FROM\nML.ROC_CURVE(MODEL `mimic_models.complexity_age_mortality_{suffix}`)\n\ndef set_precision(df):\n df['precision'] = df.true_positives / (df.true_positives + df.false_positives)\n\n\ndef plot_precision_recall(df, label=None):\n # manually add the threshold = -∞ point\n df = df[df.true_positives != 0]\n recall = [0] + list(df.recall)\n precision = [1] + list(df.precision)\n # x=recall, y=precision line chart\n plt.plot(recall, precision, label=label)\n\nset_precision(comp_roc)\nset_precision(comp_age_roc)\nplot_precision_recall(comp_age_roc, label='bivariate (age) model')\nplot_precision_recall(comp_roc, label='univariate model')\nplt.plot(\n np.linspace(0, 1, 2), [comp_roc.precision.min()] * 2,\n label='null model',\n linestyle='--')\nplt.legend()\nplt.xlim([0, 1])\nplt.ylim([0, 1])\nplt.xlabel(r'Recall $\\left(\\frac{T_p}{T_p + F_n} \\right)$')\nplt.ylabel(r'Precision $\\left(\\frac{T_p}{T_p + F_p} \\right)$')", "We see that:\n\nBoth these models are significantly better than the zero variable model,\n implying that case complexity has a significant impact on patient mortality.\nAdding the patient's age only marginally improves the model, implying that\n the impact of case complexity is not mediated through age.\n\nOf course, neither of these models is very good when it comes to making\npredictions. 
For our last set of models, we'll try more earnestly to predict\npatient mortality.\nMore complex models\nOne of the main attractions of BigQuery ML is its ability to scale to high\ndimensional models with\nup to millions of variables.\nOur dataset isn't nearly large enough to train this many variables without\nsevere overfitting, but we can still abide training models with hundreds of\nvariables.\nOur strategy will use the $m$ most frequent diagnoses, and a handful of other\nlikely relevant variables as the inputs to our model. Namely, we'll use:\n\nADMISSION_TYPE: reflects the reason for, and seriousness of the admission\nurgent\nemergency\nnewborn\nelective\n\n\nINSURANCE: reflects the patient's socioeconomic status, a well-known\n covariate with patient outcomes\nSelf Pay\nMedicare\nPrivate\nMedicaid\nGovernment\n\n\nGENDER: accounts for both social and physiological differences across\n genders\nAGE: accounts for both social and physiological differences across ages\nnumber of diagnoses: our stand-in for case complexity\n\nin addition to the top $m$ diagnoses. We'll compare models with $m \\in \\left\\{8,\n16, 32, 64, 128, 256, 512 \\right\\}$ to determine the most sensible value of $m$.\nThis will give us valuable information regarding our original question: whether\ncase complexity increases the probability of ICU mortality. We wonder if the\nnumber of diagnoses increases patient risk only because it increases the chances\nof one of their many diagnoses being lethal, or if there is an interactive\neffect¹. 
We'll be able to test this by determining whether\n$\\omega_{n_{\\text{diagnoses}}}$ goes to $0$ as we increase $m$.\nWe'll also get some interesting information on the relative lethality of\ndifferent diagnoses, and how these compare with social determinants.\n\n¹As in the often misattributed quote:\nquantity has a quality all its own, or\ndoes it?\nCreating the models\nWe'll start by getting a list of the most frequent diagnoses", "%%bigquery top_diagnoses\nWITH top_diag AS (\n SELECT COUNT(*) AS count, ICD9_CODE FROM `{diagnoses_icd_table}`\n GROUP BY ICD9_CODE\n)\nSELECT top_diag.ICD9_CODE, icd_lookup.SHORT_TITLE, top_diag.count FROM\ntop_diag JOIN\n `{d_icd_diagnoses_table}` AS icd_lookup\nUSING (ICD9_CODE)\nORDER BY count DESC LIMIT 1024", "which we'll use to create our models. In the CREATE MODEL SELECT statement, we\ncreate one column for each of the $m$ diagnoses and fill it with $1$ if the\npatient had that diagnosis and $0$ otherwise.\nThis time around we're using l1_reg instead of l2_reg because we expect that\nsome of our many variables will not significantly impact the\noutcome, and we would prefer a sparse model if possible.", "top_n_diagnoses = (8, 16, 32, 64, 128, 256, 512)\n\nquery_jobs = list()\nfor m in top_n_diagnoses:\n # The expressions for creating the new columns for each input diagnosis\n diagnosis_columns = list()\n for _, row in top_diagnoses.iloc[:m].iterrows():\n diagnosis_columns.append('MAX(IF(ICD9_CODE = \"{0}\", 1.0, 0.0))'\n ' as `icd9_{0}`'.format(row.ICD9_CODE))\n\n query = \"\"\"\n CREATE OR REPLACE MODEL `mimic_models.predict_mortality_diag_{m}_{suffix}`\n OPTIONS(model_type='logistic_reg', l1_reg=2, input_label_cols=[\"died\"])\n AS\n WITH diagnoses AS (\n SELECT\n HADM_ID,\n COUNT(*) AS num_diag,\n {diag_cols}\n FROM `{diagnoses_icd_table}`\n WHERE ICD9_CODE IS NOT NULL\n GROUP BY HADM_ID\n )\n SELECT\n IF(DATETIME_DIFF(adm.ADMITTIME, patients.DOB, DAY)/365.25 < 200,\n DATETIME_DIFF(adm.ADMITTIME, patients.DOB, 
DAY)/365.25, 95) AS age,\n diagnoses.* EXCEPT (HADM_ID),\n adm.HOSPITAL_EXPIRE_FLAG as died,\n adm.ADMISSION_TYPE as adm_type,\n adm.INSURANCE as insurance,\n patients.GENDER\n FROM\n `{admissions_table}` AS adm\n LEFT JOIN `{patients_table}` AS patients USING (SUBJECT_ID)\n LEFT JOIN diagnoses USING (HADM_ID)\n \"\"\".format(\n m=m, diag_cols=',\\n '.join(diagnosis_columns), **sub_dict)\n # Run the query, and track its progress with query_jobs\n query_jobs.append(bq.query(query))\n\n# Wait for all of the models to finish training\nfor j in query_jobs:\n j.exception()", "Getting evaluation metrics\nTo obtain numerical evaluation metrics on your models, BigQuery ML provides the\nML.EVALUATE\nstatement. Just like ML.ROC_CURVE, ML.EVALUATE defaults to using the\nevaluation dataset that was set aside when the model was created.", "eval_queries = list()\nfor m in top_n_diagnoses:\n eval_queries.append(\n 'SELECT * FROM ML.EVALUATE('\n 'MODEL `mimic_models.predict_mortality_diag_{}_{suffix}`)'\n .format(m, **sub_dict))\neval_query = '\\nUNION ALL\\n'.join(eval_queries)\nbq.query(eval_query).result().to_dataframe()", "And we can also plot the precision-recall curves as we did before.", "for m in top_n_diagnoses:\n df = bq.query('SELECT * FROM ML.ROC_CURVE('\n 'MODEL `mimic_models.predict_mortality_diag_{}_{suffix}`)'\n .format(m, **sub_dict)).result().to_dataframe()\n set_precision(df)\n plot_precision_recall(df, label='{} diagnoses'.format(m))\n\nplt.plot(\n np.linspace(0, 1, 2), [df.precision.min()] * 2,\n label='null model',\n linestyle='--')\nplt.legend()\nplt.xlim([0, 1])\nplt.ylim([0, 1])\nplt.xlabel(r'Recall $\\left(\\frac{T_p}{T_p + F_n} \\right)$')\nplt.ylabel(r'Precision $\\left(\\frac{T_p}{T_p + F_p} \\right)$')", "The model with $m = 512$ seems to be overfitting the data, while somewhere\nbetween $m = 128$ and $m = 256$ seems to be the sweet spot for model\nflexibility. 
Since we've now used the evaluation dataset to determine $m$\n(albeit informally), and when to stop-early during training, dogmatic rigour\nwould demand that we measure our model on a third (validation) dataset before we\nbrag about its efficacy. On the other hand, there isn't a ton of flexibility in\nchoosing between a few different values of $m$, nor in when to stop early. You\ncan use your own judgment.\nActually, the predictive power of our model¹ isn't nearly as interesting as it's\nweights and what they tell us. In the next section, we'll dig into them.\n\n¹Which could be described as approaching respectability, but still a long way\naway from brag worthy.\nExploring our model\nLet's have a look at the weights from the $m = 128$ model.", "%%bigquery weights_128\nSELECT * FROM ML.WEIGHTS(MODEL `mimic_models.predict_mortality_diag_128_{suffix}`)\nORDER BY weight DESC", "First we'll look at the weights for the numerical inputs.", "pd.set_option('max_rows', 150)\nweights_128['ICD9_CODE'] = weights_128.processed_input \\\n .apply(lambda x: x[len('icd9_'):] if x.startswith('icd9_') else x)\nview_df = weights_128.merge(top_diagnoses,how='left', on='ICD9_CODE') \\\n .rename(columns={'ICD9_CODE': 'input'})\nview_df = view_df[~pd.isnull(view_df.weight)]\nview_df[['input', 'SHORT_TITLE', 'weight', 'count']]", "We see have a list of diagnoses, sorted from most fatal to least fatal according\nto our model.\nGoing back to our original question, we can see that the weight for num_diag\n(a.k.a the number of diagnoses) has essentially gone to zero. The average\ndiagnoses weight is also very small:", "view_df[~pd.isnull(view_df.SHORT_TITLE)].weight.mean()", "so we can conclude that given that a patient has been admitted to the ICU, the\nnumber of diagnoses they've been given does not predict their outcome beyond the\nlinear effect of the component diagnoses.\nIt might be surprising that the weight for age is also very small. 
One\nexplanation for this might be that DNR¹ status and falls are among the highest\nweighted diagnoses. These diagnoses are associated with advanced age² ³ and\nthere is literature³ to support that DNR status mediates the effect of age on\nsurvival. One thing we couldn't find much data on was the relationship between\nage and palliative treatment. This could be a good subject for a datathon team\nto tackle.\n\n¹Do not resuscitate\n²Article: Age-Related Changes in Physical Fall Risk Factors: Results from a 3\nYear Follow-up of Community Dwelling Older Adults in Tasmania,\nAustralia\n³Article: Do Not Resuscitate (DNR) Status, Not Age, Affects Outcomes after\nInjury: An Evaluation of 15,227 Consecutive Trauma\nPatients\nNow let's look at the weights for the categorical variables.", "for _, row in weights_128[pd.isnull(weights_128.weight)].iterrows():\n print(row.processed_input)\n print(\n *sorted([tuple(x.values()) for x in row.category_weights],\n key=lambda x: x[1],\n reverse=True),\n sep='\\n',\n end='\\n\\n')", "We see that the patient's insurance has a startlingly large effect in our model.\nFor those of us not familiar with American medical insurance terminology¹:\n\nSelf pay: the patient pays out-of-pocket for their medical care as they\n require it\nMedicare: a government program for people who are over 65 years old or have\n a disability\nPrivate: insurance that is usually paid for by the patient's employer\nMedicaid: a government program for people with low incomes\nGovernment: insurance granted by the government excluding medicare and\n medicaid. This includes government employees and veterans.\n\nThe impact of socioeconomic status on health is on clear display here. 
The\ndifference between the weights for medicare and private insurance is $0.25$,\nwhich is similar to the weight for atrial fibrillation.\nThe outlook for patients paying out of pocket is also grim, and may reflect an\navoidance of hospital care for financial reasons in addition to other\nsocioeconomic factors.\nThe weights for the admission type seem to reflect common sense, as do the\nweights for gender given that females have a longer life expectancy than males.\n\n¹ See https://en.wikipedia.org/wiki/Health_insurance_in_the_United_States\n² There are thousands of articles on this, see e.g. Article: Socioeconomic\nDisparities in Health in the United States: What the Patterns Tell\nUs\n³ See https://en.wikipedia.org/wiki/List_of_countries_by_life_expectancy\nConclusion\nWe've found evidence that case complexity increases the risk of death during an ICU\nadmission, but only through the cumulative effects of the component diagnoses.\nThat's not to say that these nonlinear interactions aren't very powerful in\ncertain cases¹, but that this seems to be the exception rather than the rule.\nWe were able to obtain these results entirely from within BigQuery, with minimal\nmodifications to standard SQL statements, only resorting to Python for\nvisualization.\n\n¹ That is, between certain combinations or cliques of diagnoses." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ThyrixYang/LearningNotes
MOOC/stanford_cnn_cs231n/assignment1/knn.ipynb
gpl-3.0
[ "k-Nearest Neighbor (kNN) exercise\nComplete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the assignments page on the course website.\nThe kNN classifier consists of two stages:\n\nDuring training, the classifier takes the training data and simply remembers it\nDuring testing, kNN classifies every test image by comparing to all training images and transferring the labels of the k most similar training examples\nThe value of k is cross-validated\n\nIn this exercise you will implement these steps and understand the basic Image Classification pipeline, cross-validation, and gain proficiency in writing efficient, vectorized code.", "# Run some setup code for this notebook.\n\nimport random\nimport numpy as np\nfrom cs231n.data_utils import load_CIFAR10\nimport matplotlib.pyplot as plt\n\nfrom __future__ import print_function\n\n# This is a bit of magic to make matplotlib figures appear inline in the notebook\n# rather than in a new window.\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\n# Some more magic so that the notebook will reload external python modules;\n# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython\n%load_ext autoreload\n%autoreload 2\n\n# Load the raw CIFAR-10 data.\ncifar10_dir = 'cs231n/datasets/cifar-10-batches-py'\nX_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)\n\n# As a sanity check, we print out the size of the training and test data.\nprint('Training data shape: ', X_train.shape)\nprint('Training labels shape: ', y_train.shape)\nprint('Test data shape: ', X_test.shape)\nprint('Test labels shape: ', y_test.shape)\n\n# Visualize some examples from the dataset.\n# We show a few examples of training images from each class.\nclasses = ['plane', 'car', 
'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']\nnum_classes = len(classes)\nsamples_per_class = 7\nfor y, cls in enumerate(classes):\n idxs = np.flatnonzero(y_train == y)\n idxs = np.random.choice(idxs, samples_per_class, replace=False)\n for i, idx in enumerate(idxs):\n plt_idx = i * num_classes + y + 1\n plt.subplot(samples_per_class, num_classes, plt_idx)\n plt.imshow(X_train[idx].astype('uint8'))\n plt.axis('off')\n if i == 0:\n plt.title(cls)\nplt.show()\n\n# Subsample the data for more efficient code execution in this exercise\nnum_training = 5000\nmask = list(range(num_training))\nX_train = X_train[mask]\ny_train = y_train[mask]\n\nnum_test = 500\nmask = list(range(num_test))\nX_test = X_test[mask]\ny_test = y_test[mask]\n\n# Reshape the image data into rows\nX_train = np.reshape(X_train, (X_train.shape[0], -1))\nX_test = np.reshape(X_test, (X_test.shape[0], -1))\nprint(X_train.shape, X_test.shape)\n\nfrom cs231n.classifiers import KNearestNeighbor\n\n# Create a kNN classifier instance. \n# Remember that training a kNN classifier is a noop: \n# the Classifier simply remembers the data and does no further processing \nclassifier = KNearestNeighbor()\nclassifier.train(X_train, y_train)", "We would now like to classify the test data with the kNN classifier. Recall that we can break down this process into two steps: \n\nFirst we must compute the distances between all test examples and all train examples. \nGiven these distances, for each test example we find the k nearest examples and have them vote for the label\n\nLets begin with computing the distance matrix between all training and test examples. 
For example, if there are Ntr training examples and Nte test examples, this stage should result in a Nte x Ntr matrix where each element (i,j) is the distance between the i-th test and j-th train example.\nFirst, open cs231n/classifiers/k_nearest_neighbor.py and implement the function compute_distances_two_loops that uses a (very inefficient) double loop over all pairs of (test, train) examples and computes the distance matrix one element at a time.", "# Open cs231n/classifiers/k_nearest_neighbor.py and implement\n# compute_distances_two_loops.\n\n# Test your implementation:\ndists = classifier.compute_distances_two_loops(X_test)\nprint(dists.shape)\n\n# We can visualize the distance matrix: each row is a single test example and\n# its distances to training examples\nplt.imshow(dists, interpolation='none')\nplt.show()", "Inline Question #1: Notice the structured patterns in the distance matrix, where some rows or columns are visible brighter. (Note that with the default color scheme black indicates low distances while white indicates high distances.)\n\nWhat in the data is the cause behind the distinctly bright rows?\nWhat causes the columns?\n\nYour Answer: fill this in.", "# Now implement the function predict_labels and run the code below:\n# We use k = 1 (which is Nearest Neighbor).\ny_test_pred = classifier.predict_labels(dists, k=1)\n\n# Compute and print the fraction of correctly predicted examples\nnum_correct = np.sum(y_test_pred == y_test)\naccuracy = float(num_correct) / num_test\nprint('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))", "You should expect to see approximately 27% accuracy. 
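The only difference between k = 1 and larger values of k is the majority vote over the k nearest training labels. A generic sketch of that vote on toy data (this is not the assignment's required implementation, just the idea):

```python
import numpy as np

def vote(dists_row, y_train, k):
    # Take the labels of the k closest training points; np.bincount + argmax
    # picks the most common label (ties break toward the smaller label).
    closest_y = y_train[np.argsort(dists_row)[:k]]
    return np.bincount(closest_y).argmax()

dists_row = np.array([0.5, 2.0, 0.1, 0.7, 3.0])
y_train = np.array([1, 0, 1, 2, 0])
vote(dists_row, y_train, k=3)  # the three nearest labels are 1, 1, 2 -> 1
```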
Now lets try out a larger k, say k = 5:", "y_test_pred = classifier.predict_labels(dists, k=5)\nnum_correct = np.sum(y_test_pred == y_test)\naccuracy = float(num_correct) / num_test\nprint('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))", "You should expect to see a slightly better performance than with k = 1.", "# Now lets speed up distance matrix computation by using partial vectorization\n# with one loop. Implement the function compute_distances_one_loop and run the\n# code below:\ndists_one = classifier.compute_distances_one_loop(X_test)\n\n# To ensure that our vectorized implementation is correct, we make sure that it\n# agrees with the naive implementation. There are many ways to decide whether\n# two matrices are similar; one of the simplest is the Frobenius norm. In case\n# you haven't seen it before, the Frobenius norm of two matrices is the square\n# root of the squared sum of differences of all elements; in other words, reshape\n# the matrices into vectors and compute the Euclidean distance between them.\ndifference = np.linalg.norm(dists - dists_one, ord='fro')\nprint('Difference was: %f' % (difference, ))\nif difference < 0.001:\n print('Good! The distance matrices are the same')\nelse:\n print('Uh-oh! The distance matrices are different')\n\n# Now implement the fully vectorized version inside compute_distances_no_loops\n# and run the code\ndists_two = classifier.compute_distances_no_loops(X_test)\n\n# check that the distance matrix agrees with the one we computed before:\ndifference = np.linalg.norm(dists - dists_two, ord='fro')\nprint('Difference was: %f' % (difference, ))\nif difference < 0.001:\n print('Good! The distance matrices are the same')\nelse:\n print('Uh-oh! 
The distance matrices are different')\n\n# Let's compare how fast the implementations are\ndef time_function(f, *args):\n \"\"\"\n Call a function f with args and return the time (in seconds) that it took to execute.\n \"\"\"\n import time\n tic = time.time()\n f(*args)\n toc = time.time()\n return toc - tic\n\ntwo_loop_time = time_function(classifier.compute_distances_two_loops, X_test)\nprint('Two loop version took %f seconds' % two_loop_time)\n\none_loop_time = time_function(classifier.compute_distances_one_loop, X_test)\nprint('One loop version took %f seconds' % one_loop_time)\n\nno_loop_time = time_function(classifier.compute_distances_no_loops, X_test)\nprint('No loop version took %f seconds' % no_loop_time)\n\n# you should see significantly faster performance with the fully vectorized implementation", "Cross-validation\nWe have implemented the k-Nearest Neighbor classifier but we set the value k = 5 arbitrarily. We will now determine the best value of this hyperparameter with cross-validation.", "num_folds = 5\nk_choices = [1, 3, 5, 8, 10, 12, 15, 20, 50, 100]\n\nX_train_folds = []\ny_train_folds = []\n################################################################################\n# TODO: #\n# Split up the training data into folds. After splitting, X_train_folds and #\n# y_train_folds should each be lists of length num_folds, where #\n# y_train_folds[i] is the label vector for the points in X_train_folds[i]. #\n# Hint: Look up the numpy array_split function. #\n################################################################################\nX_train_folds = np.array_split(X_train, num_folds)\ny_train_folds = np.array_split(y_train, num_folds)\n################################################################################\n# END OF YOUR CODE #\n################################################################################\n\n# A dictionary holding the accuracies for different values of k that we find\n# when running cross-validation. 
After running cross-validation,\n# k_to_accuracies[k] should be a list of length num_folds giving the different\n# accuracy values that we found when using that value of k.\nk_to_accuracies = {}\n\n\n################################################################################\n# TODO: #\n# Perform k-fold cross validation to find the best value of k. For each #\n# possible value of k, run the k-nearest-neighbor algorithm num_folds times, #\n# where in each case you use all but one of the folds as training data and the #\n# last fold as a validation set. Store the accuracies for all fold and all #\n# values of k in the k_to_accuracies dictionary. #\n################################################################################\nfor k in k_choices:\n print(\"running {}\".format(k))\n k_to_accuracies[k] = []\n for train_id in range(0, num_folds):\n classifier = KNearestNeighbor()\n classifier.train(X_train_folds[train_id], y_train_folds[train_id])\n accuracy = 0\n for test_id in range(0, num_folds):\n if(test_id == train_id):\n continue\n y_test_pred = classifier.predict(X_train_folds[test_id], k)\n num_correct = np.sum(y_test_pred == y_train_folds[test_id])\n accuracy += float(num_correct) / len(y_train_folds[test_id])\n accuracy /= (num_folds - 1)\n k_to_accuracies[k].append(accuracy)\n################################################################################\n# END OF YOUR CODE #\n################################################################################\n\n# Print out the computed accuracies\nfor k in sorted(k_to_accuracies):\n for accuracy in k_to_accuracies[k]:\n print('k = %d, accuracy = %f' % (k, accuracy))\n\n# plot the raw observations\nfor k in k_choices:\n accuracies = k_to_accuracies[k]\n plt.scatter([k] * len(accuracies), accuracies)\n\n# plot the trend line with error bars that correspond to standard deviation\naccuracies_mean = np.array([np.mean(v) for k,v in sorted(k_to_accuracies.items())])\naccuracies_std = np.array([np.std(v) for 
k,v in sorted(k_to_accuracies.items())])\nplt.errorbar(k_choices, accuracies_mean, yerr=accuracies_std)\nplt.title('Cross-validation on k')\nplt.xlabel('k')\nplt.ylabel('Cross-validation accuracy')\nplt.show()\n\n# Based on the cross-validation results above, choose the best value for k, \n# retrain the classifier using all the training data, and test it on the test\n# data. You should be able to get above 28% accuracy on the test data.\nbest_k = 1\n\nclassifier = KNearestNeighbor()\nclassifier.train(X_train, y_train)\ny_test_pred = classifier.predict(X_test, k=best_k)\n\n# Compute and display the accuracy\nnum_correct = np.sum(y_test_pred == y_test)\naccuracy = float(num_correct) / num_test\nprint('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
leoferres/prograUDD
certamenes/Certamen2_A_TI2_2017_1.ipynb
mit
[ "Exam 2A, TI 2, 2017-1\nLeo Ferres & Rodrigo Trigo\nUDD\nQuestion 1\nWrite a function fechaValida(fecha) that returns True if its argument is a real date and False otherwise. For example, \"January 32\" is not valid (do not consider leap years). The date is given in the following format: dd/mm/yyyy. Hint: you may use str's split() function. Check that it runs using your date of birth.", "##write the function here##\n\nfechaValida('02/06/2017')", "Question 2\nGiven the string of your RUT, without the hyphen or the check digit, compute $\\sum_{i=1}^{n}d_i*i$, where $n$ is the length of the string, $d$ is each digit, and $d_1$ is the last digit of the RUT.", "rut = input(\"enter your RUT: \")\n\n##your code goes here##", "Question 3\nWrite two functions: 1) tirarDado(), which returns a random number $x$ with $1\\leq x \\leq 6$, and 2) sumar(), which rolls dice, stopping when the sum of the dice exceeds 10000, and returns how many dice were rolled.", "import random\nrandom.seed(int(rut))\n##your code goes here##", "Question 4\nThat blessed triangle again. Can you do it or not? Write a function that takes the height of an equilateral triangle as its argument and draws it using little stars. For example, for $h=3$:\n```\n  *\n ***\n*****\n```\nQuestion 5\nRead the file puertos.csv, which lists each port and the country it belongs to in the following form: aarhus;dinamarca. Note that the separator is a semicolon (\";\") and that the first line is a header, which must not be included. Build a dictionary puertos in which the key is the country and the value is the number of ports in that country." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ES-DOC/esdoc-jupyterhub
notebooks/snu/cmip6/models/sandbox-1/landice.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Landice\nMIP Era: CMIP6\nInstitute: SNU\nSource ID: SANDBOX-1\nTopic: Landice\nSub-Topics: Glaciers, Ice. \nProperties: 30 (21 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:38\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'snu', 'sandbox-1', 'landice')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Software Properties\n3. Grid\n4. Glaciers\n5. Ice\n6. Ice --&gt; Mass Balance\n7. Ice --&gt; Mass Balance --&gt; Basal\n8. Ice --&gt; Mass Balance --&gt; Frontal\n9. Ice --&gt; Dynamics \n1. Key Properties\nLand ice key properties\n1.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of land surface model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of land surface model code", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. 
Ice Albedo\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSpecify how ice albedo is modelled", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.ice_albedo') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prescribed\" \n# \"function of ice age\" \n# \"function of ice density\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.4. Atmospheric Coupling Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhich variables are passed between the atmosphere and ice (e.g. orography, ice mass)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.5. Oceanic Coupling Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhich variables are passed between the ocean and ice", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.6. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhich variables are prognostically calculated in the ice model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"ice velocity\" \n# \"ice thickness\" \n# \"ice temperature\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Software Properties\nSoftware properties of land ice code\n2.1. 
Repository\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nLocation of code for this component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.2. Code Version\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCode version identifier.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.3. Code Languages\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nCode language(s).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3. Grid\nLand ice grid\n3.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of the grid in the land ice scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.2. Adaptive Grid\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs an adaptive grid being used?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "3.3. 
Base Resolution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe base resolution (in metres), before any adaption", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.base_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.4. Resolution Limit\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf an adaptive grid is being used, what is the limit of the resolution (in metres)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.resolution_limit') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.5. Projection\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe projection of the land ice grid (e.g. albers_equal_area)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.projection') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4. Glaciers\nLand ice glaciers\n4.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of glaciers in the land ice scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.glaciers.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the treatment of glaciers, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.glaciers.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.3. 
Dynamic Areal Extent\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDoes the model include a dynamic glacial extent?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "5. Ice\nIce sheet and ice shelf\n5.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of the ice sheet and ice shelf in the land ice scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.2. Grounding Line Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.grounding_line_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"grounding line prescribed\" \n# \"flux prescribed (Schoof)\" \n# \"fixed grid size\" \n# \"moving grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "5.3. Ice Sheet\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre ice sheets simulated?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.ice_sheet') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "5.4. Ice Shelf\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre ice shelves simulated?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.landice.ice.ice_shelf') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6. Ice --&gt; Mass Balance\nDescription of the surface mass balance treatment\n6.1. Surface Mass Balance\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how and where the surface mass balance (SMB) is calculated. Include the temporal coupling frequency from the atmosphere, whether or not a separate SMB model is used, and if so details of this model, such as its resolution", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7. Ice --&gt; Mass Balance --&gt; Basal\nDescription of basal melting\n7.1. Bedrock\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the implementation of basal melting over bedrock", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. Ocean\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the implementation of basal melting over the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8. Ice --&gt; Mass Balance --&gt; Frontal\nDescription of calving/melting from the ice shelf front\n8.1. Calving\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the implementation of calving from the front of the ice shelf", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Melting\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the implementation of melting from the front of the ice shelf", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Ice --&gt; Dynamics\n**\n9.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of ice sheet and ice shelf dynamics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.2. Approximation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nApproximation type used in modelling ice dynamics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.approximation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"SIA\" \n# \"SAA\" \n# \"full stokes\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9.3. Adaptive Timestep\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there an adaptive time scheme for the ice scheme?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "9.4. Timestep\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTimestep (in seconds) of the ice scheme. 
If the timestep is adaptive, then state a representative timestep.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
massimo-nocentini/simulation-methods
chapter-one.ipynb
mit
[ "<p>\n<img src=\"http://www.cerm.unifi.it/chianti/images/logo%20unifi_positivo.jpg\" \n alt=\"UniFI logo\" style=\"float: left; width: 20%; height: 20%;\">\n<div align=\"right\">\nMassimo Nocentini<br>\n<small>\n<br>October 22, 2016: Horner method, binomial anti-difference\n<br>October 9, 2016: computing sums, Stirling numbers, thms coding\n<br>October 7, 2016: operators and powers theory\n</small>\n</div>\n</p>\n<br>\n<div align=\"center\">\n<b>Abstract</b><br>\nIn this document we collect notes and exercises for the first chapter of the course.\n</div>", "from sympy import *\nfrom sympy.abc import n, i, N, x, k, y\n\ninit_printing()\n\n%run src/commons.py", "", "delta, antidifference, I, E, f = symbols(r'\\Delta \\Delta^{-1} I E f', cls=Function) # combinators\n\nI_eval_rule = define(I(x), x) # identity\nE_eval_rule = define(E(x),x+1) # forward shift\n\nI_eval_rule, E_eval_rule\n\ndelta_eval_rule = define(delta(f(x)), f(x+1)-f(x),) # forward difference def\ndelta_pow_rule = define(delta(f(x))**k, delta(delta(f(x))**(k-1)),) # repeated applications of differentiation\n\ndelta_eval_rule, delta_pow_rule\n\ndelta_EI_conv = define(delta(x), E(x)-I(x)) # conversion via combinators\n\ndelta_EI_conv\n\ndef rewrite(eq, rule, **kwds):\n return eq.replace(query=rule.lhs, value=rule.rhs, **kwds)\n\ns = Wild('s')\nrewrite(delta_eval_rule, define(f, Lambda([s], ff(s,n))))\n\nf_wild = WildFunction('f')\nD=define(delta(f_wild**k), delta(delta(f_wild**(k-1))))\nrewrite(delta_pow_rule, D)\n\nclass ForwardDifference(Function):\n \n def _latex(self, *_):\n if len(self.args) < 2:\n func, args = self.args[0].func, self.args[0].args\n v = args[0]\n else:\n func, args = self.args[0].func, self.args[0].args\n v = self.args[1]\n \n D = Function(r'\\Delta_{{{}}}'.format(latex(v)))\n expr = D(func(*args))\n return latex(expr)\n \n def doit(self):\n \n if len(self.args) < 2:\n func, args = self.args[0].func, self.args[0].args\n v = args[0]\n else:\n func, args = self.args[0].func, 
self.args[0].args\n v = self.args[1]\n \n return func(*map(lambda a: a.subs({v:v+1}, simultaneous=True),args)) - func(*args)\n \n def _eval_power(self, k):\n if k.is_Number:\n return ForwardDifference(Pow(ForwardDifference(self.args), k-1, evaluate=False))\n else:\n return super()._eval_power(k)", "know the difference\nLet ${y_{n}}_{n\\in\\mathbb{N}}$ be a sequence, where $y_{n}=f(n)$ for some function $f$. Assume that each coefficient $y_{n}$ is not known; on the contrary, assume that there exists a known sequence ${g_{n}}_{n\\in\\mathbb{N}}$ which satisfies:\n$$\n\\begin{equation}\n\\Delta y_{n} = y_{n+1}-y_{n}=g_{n}\n\\end{equation}\n$$\nBy finite summation on both sides:\n$$\n\\sum_{n=n_{0}}^{N-1}{\\Delta y_{n}} = \\sum_{n=n_{0}}^{N-1}{g_{n}}\n$$\nmany terms in the lhs cancel by telescoping, so:\n$$\ny_{N}-y_{n_{0}} = \\sum_{n=n_{0}}^{N-1}{g_{n}}\n$$\ntherefore, if the initial term $y_{n_{0}}$ is given, we can compute any term $y_{N}$ by:\n$$\ny_{N} = y_{n_{0}} + \\sum_{n=n_{0}}^{N-1}{g_{n}}\n$$\nsince each term $g_{n}$ is known by hypothesis and the summation can be done.\na little generalization\nConsider an additional known sequence ${p_{n}}_{n\\in\\mathbb{N}}$, and suppose we are required to find a solution of the equation $ y_{n+1} = p_{n}y_{n} + g_{n} $. So define a helper sequence ${P_{n}}_{n\\in\\mathbb{N}}$ such that $P_{n_{0}}=1$ and $P_{n}=p_{n-1}P_{n-1}$; therefore $P_{n}=\\prod_{k=n_{0}}^{n-1}{p_{k}}$ holds by induction. Now study the following:\n$$\n \\frac{y_{n+1}}{P_{n+1}} = \\frac{p_{n}y_{n}}{P_{n+1}} + \\frac{g_{n}}{P_{n+1}} \n = \\frac{y_{n}}{P_{n}} + \\frac{g_{n}}{P_{n+1}} \n$$\ncalling $z_{n}=\\frac{y_{n}}{P_{n}}$ and $q_{n}=\\frac{g_{n}}{P_{n+1}}$, it yields $z_{n+1} = z_{n}+q_{n}$, with initial condition $z_{n_{0}} = \\frac{y_{n_{0}}}{P_{n_{0}}} = y_{n_{0}}$. So we have a recurrence in a simpler form, whose solution is a sequence ${z_{n}}_{n\\in\\mathbb{N}}$ such that $z_{n} = z_{n_{0}} + \\sum_{i=n_{0}}^{n-1}{q_{i}}$. 
By backward substitution:\n$$\n\\begin{split}\n \\frac{y_{n}}{P_{n}} &= y_{n_{0}} + \\sum_{i=n_{0}}^{n-1}{\\frac{g_{i}}{P_{i+1}}} \\\\\n y_{n} &= P_{n}y_{n_{0}} + \\sum_{i=n_{0}}^{n-1}{\\frac{P_{n}g_{i}}{P_{i+1}}} \\\\\n y_{n} &= \\left(\\prod_{k=n_{0}}^{n-1}{p_{k}}\\right)y_{n_{0}} + \n \\sum_{i=n_{0}}^{n-1}{\\left(\\prod_{k=i+1}^{n-1}{p_{k}}\\right)g_{i}} \\\\\n\\end{split}\n$$\nwhich is the closed form for coefficients of solution sequence ${y_{n}}_{n\\in\\mathbb{N}}$.\nHorner method\nLet $p\\in\\Pi_{n}$ be a polynomial over coefficients ${b_{n}\\in\\mathbb{C}}_{n\\in\\mathbb{N}}$, defined as $p(x)=\\sum_{i=0}^{n}{b_{i}x^{n-i}}$. Define the difference equation $y_{i} = xy_{i-1} + b_{i}$, for $i\\in{1,\\ldots,n}$, with initial condition $y_{0}=b_{0}$; therefore, $y_{n}=p(x)$ holds.\nIn order to see this, recognize that we have a recurrence of the last form where $p_{i}=x$ and $g_{i}=b_{i+1}$ for all $i$ in the domain, therefore its solution has the generic coefficient $y_{n}$ which satisfies:\n$$\n y_{n} = \\left(\\prod_{k=n_{0}}^{n-1}{x}\\right)b_{0} + \n \\sum_{i=n_{0}}^{n-1}{\\left(\\prod_{k=i+1}^{n-1}{x}\\right)b_{i+1}}\n = x^{n}b_{0} + \\sum_{i=0}^{n-1}{x^{n-1-(i+1)+1}b_{i+1}}\n = x^{n}b_{0} + \\sum_{i=1}^{n}{x^{n-i}b_{i}} = \\sum_{i=0}^{n}{x^{n-i}b_{i}} = p(x)\n$$\nas required.\n$\\Delta$ operator relations\nOn the other hand, assume that no initial condition $y_{n_{0}}$ is given; we obtain $y_{n}$ on the lhs by applying the anti-difference operator $\\Delta^{-1}$ on the left of both members of $\\Delta y_{n}=g_{n}$, so \n$y_{n} = \\Delta^{-1}g_{n}$. 
Use this identity as a rewriting rule and apply it to the former equation, obtaining $\\Delta\\Delta^{-1}g_{n}=g_{n}$; therefore the relation $\\Delta\\Delta^{-1}=I$ on operators holds.\nMoreover, let ${w_{n}}_{n\\in\\mathbb{N}}$ be a constant sequence, so we can augment:\n$$\ny_{n} = \\Delta^{-1}g_{n} + w_{n} = \\Delta^{-1}\\Delta y_{n} + w_{n}\n$$\nbecause $\\Delta w_{n}=0$; therefore the relation $\\Delta^{-1}\\Delta = I - K$, where $K$ is the constant operator, holds.\ncomputing sums via $\\Delta^{-1}$\nLet $g_{n}=\\Delta y_{n}$ and assume we do not have a closed formula in $n$ for the coefficients $y_{n}$, but we know that $y_{n}=\\Delta^{-1}g_{n}$ holds. Apply summation to both members and manipulate the rhs:\n$$\n\\sum_{n=n_{0}}^{N-1}{g_{n}} = \\sum_{n=n_{0}}^{N-1}{\\Delta y_{n}} \n= y_{N}-y_{n_{0}} = y_{n} \\big|_{n_{0}}^{N} = \\Delta^{-1}g_{n} \\big|_{n_{0}}^{N} = \\Delta^{-1}g_{n} \\big|_{n=N} - \\Delta^{-1}g_{n} \\big|_{n=n_{0}}\n$$\ntherefore, if we have an unknown sequence ${g_{n}}_{n\\in\\mathbb{N}}$ for which a closed form of \n$\\Delta^{-1}g_{n}$ is available as a term which supports substitution of the symbol $n$, then the sum $\\sum_{n=n_{0}}^{N-1}{g_{n}}$ can be easily computed by a difference, as done in the fundamental theorem of calculus.", "g = IndexedBase('g')\nn = IndexedBase('n')\nf = Function('f')\n\nanti_difference = Function('\\Delta^{-1}')\n\ndef accept_replacing(thm_ctor):\n \n def replacing(subs=lambda *args: {}, **kwds):\n \n weq, variables = thm_ctor(**kwds)\n mapping = subs(*variables) if callable(subs) else subs\n for k,v in mapping.items():\n weq = weq.replace(k, v, simultaneous=True)\n \n return weq, [mapping.get(v, v) for v in variables]\n \n return replacing\n \n@accept_replacing\ndef summation_antidifference_thm():\n\n (n, sup), inf = symbols('n N'), IndexedBase('n')[0]\n eq = Eq(Sum(g[n], (n, inf, sup-1)), \n Subs(anti_difference(g[n]), n, sup) - \n Subs(anti_difference(g[n]), n, inf))\n \n return eq, (g, n, inf, sup)\n\n@accept_replacing\ndef 
antidifference_of_ff_thm():\n (n, i), w = symbols('n i'), IndexedBase('w')\n eq = Eq(anti_difference(ff(n, i)), ff(n, i+1)/(i+1)+w[n])\n return eq, (x, n, w)\n\n@accept_replacing\ndef antidifference_of_binomial_thm():\n (n, k), w = symbols('n k'), IndexedBase('w')\n eq = Eq(anti_difference(binomial(n, k)), binomial(n, k+1)+w[n])\n return eq, (x, n, w)\n\n@accept_replacing\ndef constant_sequence_thm():\n variables = w, i, j = IndexedBase('w'), *symbols('i j')\n eq = Eq(w[i], w[j])\n return eq, (w, i, j)\n\ndef doit(thm, lhs=True, rhs=True):\n eq, *variables = thm\n return Eq(eq.lhs.doit() if lhs else eq.lhs, eq.rhs.doit() if rhs else eq.rhs), variables\n\ndef rewrite(thm, rule, include_rule_vars=False):\n\n eq, *rest = thm\n try:\n rw, *others = rule # so, `rule` can be a thm too\n except:\n rw, *others = rule, []\n \n augmented = []\n augmented.extend(*rest)\n if include_rule_vars: augmented.extend(*others)\n return eq.replace(rw.lhs, rw.rhs, simultaneous=True), augmented\n\n\n\nthm = eq, (g, n, inf, sup) = summation_antidifference_thm()\nthm", "$(x)_{i}$ application", "local_thm = Eq(g[n], ff(n,i)), (g, n, i)\nlocal_thm\n\ninst_thm = rewrite(thm, local_thm)\ninst_thm\n\nant_ff_thm = antidifference_of_ff_thm(subs={})\nant_ff_thm\n\nready_thm = eq, *_ = rewrite(inst_thm, ant_ff_thm)\nready_thm\n\ndone_thm = doit(ready_thm, lhs=False)\ndone_thm", "${{n}\\choose{k}}$ application", "local_thm = Eq(g[n], binomial(n,k)), (g, n, k)\nlocal_thm\n\ninst_thm = rewrite(thm, local_thm)\ninst_thm\n\nant_binomial_thm = antidifference_of_binomial_thm(subs={})\nant_binomial_thm", "Previous thm holds by the following argument.\n$$\n\\begin{split}\n \\Delta{{x}\\choose{j}} = {{x+1}\\choose{j}} - {{x}\\choose{j}} &= \\frac{(x+1){j}}{(j){j}}-\\frac{(x){j}}{(j){j}}\\\n &= \\frac{ (x+1)x\\cdots(x-j+2) -x\\cdots(x-j+2)(x-j+1) }{(j){j}}\\\n &= \\frac{ x\\cdots(x-j+2)(x+1 -(x-j+1)) }{(j){j}}\\\n &= \\frac{ x\\cdots(x-j+2) }{(j-1)!} = \\frac{ (x){(j-1)} }{(j-1){(j-1)}} = 
{{x}\\choose{j-1}}\\\n\\end{split}\n$$\ntherefore, to find $\\Delta^{-1}{{x}\\choose{j}}$ we are required to provide a term $t_{x}$ such that application of $\\Delta$ to it yields ${{x}\\choose{j}}$. So choose $t_{x}={{x}\\choose{j+1}}$, according to above identity.", "ready_thm = eq, *_ = rewrite(inst_thm, ant_ff_thm)\nready_thm\n\ndone_thm = doit(ready_thm, lhs=False)\ndone_thm\n\nff(x+1,i)-ff(x,i)", "powers\nin $\\mathbb{R}$\nIn $\\mathbb{R}$ the $n$-th power of the symbol $x$ satisfies:\n$$\n\\begin{split}\nx^{0}&=1 \\\nx \\neq0 &\\rightarrow x^{-n}=\\frac{1}{x^{n}}\\\n\\frac{\\partial x^{n}}{\\partial{x}} &= n x^{n-1} \\\n\\frac{\\partial^{-1} x^{n}}{\\partial{x}} = \\int x^{n}\\partial x &= \\frac{x^{n+1}}{n+1}+c\n\\end{split}\n$$\nfor some $c\\in\\mathbb{R}$. In $\\mathbb{N}$ its counterpart is the falling factorial function in the variable $x$ defined as: \n$$(x){n} = \\underbrace{x(x-1)(x-2)\\cdots(x-n+1)}{n\\text{ terms}}$$\nin $\\mathbb{N}$\nWe apply operator $\\Delta$ to derive an identity about forward differences of $(x){n}$: \n$$\n\\begin{split}\n\\Delta (x){n} &= (x+1){n} - (x){n} \\\n&= (x+1)x(x-1)\\cdots(x-n+2) - x(x-1)\\cdots(x-n+2)(x-n+1) \\\n&= (x){(n-1)}(x+1 -(x-n+1)) \\\n&=n(x){(n-1)}\n\\end{split}\n$$\nPrevious identity allows us to recover the anti-difference of $(x){n}$: it requires to find a sequence ${g{n}}{n\\in\\mathbb{N}}$ such that $\\Delta g{n} = (x){n}$, namely $$g{n}=\\frac{(x){(n+1)}}{n+1}+w{n}=\\Delta^{-1}(x){n}$$ where ${w{n}}_{n\\in\\mathbb{N}}$ is a constant sequence.\nMoreover, in order to provide corresponding identities for the left ones, we reason according to:\n$$\n(x){m+n} = \\underbrace{x(x-1)\\cdots(x-m+1)}{(x){m}}\\underbrace{(x-m)(x-m-1)\\cdots(x-m-n+1)}{(x-m){n}}\n$$\nsubstitution $m=0$ yields $(x){n}=(x){0}(x){n}$ therefore $(x){0}=1$. On the other hand, substitution $m=-n$ yields $(x){0}=(x){-n}(x+n){n}$. 
So:\n$$\n(x+n){n} \\neq 0 \\rightarrow (x){-n} = \\frac{1}{(x+n)_{n}} = \\frac{1}{(x+n)(x+n-1)\\cdots(x+1)}\n$$\nrequiring $x\\not\\in{-1, -2, \\ldots, -n}$.\nproperties\n\n$(x)_{n}$ is monic polynomial of degree $n$ with roots ${0, 1, \\ldots, n-1}$\n$\\Delta(x){n}\\in\\Pi{n-1}$\n$\\Delta^{-1}(x){n}\\in\\Pi{n+1}$\n$k < j \\rightarrow (k)_{j} = k(k-1)\\cdots(k-k)\\cdots(k-j+1)=0$\n$(k)_{k} = k(k-1)\\cdots(k-(k-1)+1)(k-k+1)=k!$\n$(k){k}=(k){(k-1)}$\n\nand, finally:\n$$\\frac{(k){j}}{(j){j}} = \\frac{k(k-1)\\cdots(k-j+1)}{j!}=\\frac{k!}{j!(k-j)!}={{k}\\choose{j}}$$\n$\\mathbb{R} \\leftarrow \\mathbb{N}$, via Stirling numbers of the second type", "from sympy.functions.combinatorial.numbers import stirling", "The following identity links the two kinds of powers:\n$$\nx^{n} = \\sum_{i=1}^{n}{\\mathcal{S}{n,i} (x){i}}\n$$\nwhere coefficients $\\mathcal{S}{n,i}$ are Stirling's numbers of the second kind, defined according to the following recurrence relation $\\mathcal{S}{n+1, i} = \\mathcal{S}{n, i-1} + i\\mathcal{S}{n, i}$, for $i\\in{2,\\ldots,n}$, with initial conditions $\\mathcal{S}{n, 1} = \\mathcal{S}{n, n} = 1$.\nIn the following matrix we report the upper chunk of the infinite matrix generated by the recurrence relation; for the sake of clarity, according to Python indexing which is zero-based, we include the very first row and column, which yields $\\mathcal{S}_{0, 0}=1$ and $0$ everywhere else.", "stirling_matrix_second_kink = Matrix(11,11,lambda i, j: stirling(i,j, kind=2, signed=False))\nstirling_matrix_second_kink\n\nm = Mul(stirling_matrix_second_kink, Matrix(11,1,lambda i, _: ff(x, i, evaluate=False)),evaluate=False)\n#.applyfunc(lambda i: i.as_poly(x).as_expr())\nEq(m, Matrix(11,1,lambda i, _: x**i), evaluate=False)\n\n_.lhs.doit()", "Proof. 
By induction on $n$.\nBase case $n=1$, so $x = \\mathcal{S}{1,1}(x){1}=x$, which holds.\nAssume the theorem true for $n$ and show for $n+1$, so:\n$$\n\\begin{split}\n x^{n+1} = x\\cdot x^{n} &= x\\sum_{i=1}^{n}{\\mathcal{S}{n,i} (x){i}}\n = \\sum_{i=1}^{n}{\\mathcal{S}{n,i} (x-i+i) (x){i}}\n = \\sum_{i=1}^{n}{\\mathcal{S}{n,i}\\left(\\underbrace{(x-i) (x){i}}{(x){(i+1)}} + i (x){i} \\right)}\\\n &= \\sum{i=2}^{n+1}{\\mathcal{S}{n,i-1} (x){i}} + \\sum_{i=1}^{n}{i \\mathcal{S}{n,i} (x){i}}\n = \\underbrace{\\mathcal{S}{n,1}}{\\mathcal{S}{n+1,1}} (x){1} + \n \\sum_{i=2}^{n}{\\underbrace{\\left(\\mathcal{S}{n,i-1}+i\\mathcal{S}{n,i}\\right)}{\\mathcal{S}{n+1,i}} (x){i}} +\n \\underbrace{\\mathcal{S}{n,n}}{\\mathcal{S}{n+1,n+1}} (x){n+1}\n = \\sum{i=1}^{n+1}{\\mathcal{S}{n+1,i} (x){i}}\n\\end{split}\n$$\nas required. $\\blacksquare$\nThe just proved identity allows us to easily compute summations of the form $\\sum_{x=n_{0}}^{N-1}{x^{k}}$, for some $k\\in\\mathbb{N}$. One way to do this is to use the result seen some cells above, where it is required to know $\\Delta^{-1}x^{k}$:\n$$\n \\sum_{x=n_{0}}^{N-1}{x^{k}} = \\Delta^{-1}x^{k} \\big|{x=N} - \\Delta^{-1}x^{k} \\big|{x=n_{0}}\n$$\nbut such anti-difference is unknown. 
Therefore put the last identity in:\n$$\n\\sum_{x=n_{0}}^{N-1}{x^{k}} = \\sum_{x=n_{0}}^{N-1}{\\sum_{i=1}^{k}{\\mathcal{S}{k,i} (x){i}}} =\n\\sum_{i=1}^{k}{\\mathcal{S}{k,i}\\sum{x=n_{0}}^{N-1}{ (x){i}}} = \\sum{i=1}^{k}{\\mathcal{S}{k,i}\\left(\\frac{{\\left(N\\right)}{\\left(i + 1\\right)}}{i + 1} - \\frac{{\\left(n_{0}\\right)}{\\left(i + 1\\right)}}{i + 1}\\right)}\n$$\nwhere we recognize $\\sum{n=n_{0}}^{N - 1} {\\left(n\\right)}{i} = w{N} - w_{n_{0}} + \\frac{{\\left(N\\right)}{\\left(i + 1\\right)}}{i + 1} - \\frac{{\\left(n{0}\\right)}{\\left(i + 1\\right)}}{i + 1}$ where $w{N}-w_{n_{0}}=0$ since ${w_{n}}_{n\\in\\mathbb{n}}$ is a constant sequence.\napplications", "def power_summation_thm():\n n, S, (N, x, k, i) = IndexedBase('n'), IndexedBase('\\mathcal{S}'), symbols('N x k i')\n inf, sup = n[0], N-1\n return (Eq(Sum(x**k, (x, inf, sup)), Sum(s[k,i]*((ff(sup+1, i+1)/(i+1))-(ff(inf, i+1)/(i+1))), (i, 1, k))),\n [n, S, inf, sup, N, x, k, i])\n\ndef expand_Sum(aSumExpr):\n \n generic_sum_coeff, (sum_index, starting_bound, ending_bound) = aSumExpr.args\n \n summands = [generic_sum_coeff.subs(sum_index, n) for n in range(starting_bound, ending_bound+1)]\n result = Add(*summands, evaluate=False)\n \n return result\n\ndef stirling_row(row, indexed=None, *args, **kwds):\n return {indexed[row, i] if indexed else (row,i):stirling(row, i, *args, **kwds) for i in range(row+1)}\n\ndef do_powers_summation(power, bottom=1, top=Symbol('n'), expand=True):\n eq, (n, S, inf, sup, N, x, k, i) = power_summation_thm()\n inst_eq = eq.subs({inf:bottom, k:power, N:top+1}, simultaneous=True)\n rhs = expand_Sum(inst_eq.rhs)\n rhs = rhs.subs(stirling_row(power, S)).factor() if expand else rhs\n return Eq(inst_eq.lhs, rhs)\n \n\npower_summation_thm()\n\ndo_powers_summation(power=1, bottom=1)\n\ndo_powers_summation(power=2, bottom=1)\n\ndo_powers_summation(power=3, bottom=1)\n\ndo_powers_summation(power=3, bottom=1, expand=False)\n\nten_powers = 
do_powers_summation(power=10)\nten_powers\n\nten_powers.replace(Symbol('n'),20).doit()", "$\\mathbb{R} \\rightarrow \\mathbb{N}$, via Stirling numbers of the first kind\nIt is possible to reverse the previous argument and find a characterization of $(x)_{n}$ in terms of the powers $x^{i}$, as follows:\n$$\n\\sum_{i=1}^{n}{\\mathcal{s}_{n,i} x^{i}} = (x)_{n}\n$$\nwhere the coefficients $\\mathcal{s}_{n,i}$ are Stirling's numbers of the first kind, tabulated in the following matrix:", "stirling_matrix_first_kink = Matrix(11,11,lambda i, j: stirling(i,j, kind=1, signed=True))\nstirling_matrix_first_kink", "The two Stirling matrices are inverses of one another, namely:", "stirling_matrix_second_kink**(-1)", "therefore their product yields the identity matrix:", "stirling_matrix_second_kink*stirling_matrix_first_kink", "<a rel=\"license\" href=\"http://creativecommons.org/licenses/by-nc-sa/4.0/\"><img alt=\"Creative Commons License\" style=\"border-width:0\" src=\"https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png\" /></a><br />This work is licensed under a <a rel=\"license\" href=\"http://creativecommons.org/licenses/by-nc-sa/4.0/\">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
SHDShim/pytheos
examples/6_p_scale_test_Dorogokupets2015_MgO.ipynb
apache-2.0
[ "%cat 0Source_Citation.txt\n\n%matplotlib inline\n# %matplotlib notebook # for interactive", "For high dpi displays.", "%config InlineBackend.figure_format = 'retina'", "0. General note\nThis example compares pressure calculated from pytheos and original publication for the MgO scale by Dorogokupets 2015.\n1. Global setup", "import matplotlib.pyplot as plt\nimport numpy as np\nfrom uncertainties import unumpy as unp\nimport pytheos as eos", "3. Compare", "eta = np.linspace(1., 0.6, 9)\nprint(eta)\n\ndorogokupets2015_mgo = eos.periclase.Dorogokupets2015()\n\nhelp(eos.periclase.Dorogokupets2015)\n\ndorogokupets2015_mgo.print_equations()\n\ndorogokupets2015_mgo.print_equations()\n\ndorogokupets2015_mgo.print_parameters()\n\nv0 = 74.698\n\ndorogokupets2015_mgo.three_r\n\nv = v0 * (eta) \ntemp = 3000.\n\np = dorogokupets2015_mgo.cal_p(v, temp * np.ones_like(v))", "Table is not given in this publication.", "print('for T = ', temp)\nfor eta_i, p_i in zip(eta, p):\n print(\"{0: .3f} {1: .2f} \".format(eta_i, p_i))\n\nv = dorogokupets2015_mgo.cal_v(p, temp * np.ones_like(p), min_strain=0.6)\nprint((v/v0))" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mne-tools/mne-tools.github.io
0.15/_downloads/plot_parcellation.ipynb
bsd-3-clause
[ "%matplotlib inline", "Plot a cortical parcellation\nIn this example, we download the HCP-MMP1.0 parcellation [1]_ and show it\non fsaverage.\n<div class=\"alert alert-info\"><h4>Note</h4><p>The HCP-MMP dataset has license terms restricting its use.\n Of particular relevance:\n\n \"I will acknowledge the use of WU-Minn HCP data and data\n derived from WU-Minn HCP data when publicly presenting any\n results or algorithms that benefitted from their use.\"</p></div>\n\nReferences\n.. [1] Glasser MF et al. (2016) A multi-modal parcellation of human\n cerebral cortex. Nature 536:171-178.", "# Author: Eric Larson <larson.eric.d@gmail.com>\n#\n# License: BSD (3-clause)\n\nfrom surfer import Brain\n\nimport mne\n\nsubjects_dir = mne.datasets.sample.data_path() + '/subjects'\nmne.datasets.fetch_hcp_mmp_parcellation(subjects_dir=subjects_dir,\n verbose=True)\nlabels = mne.read_labels_from_annot(\n 'fsaverage', 'HCPMMP1', 'lh', subjects_dir=subjects_dir)\n\nbrain = Brain('fsaverage', 'lh', 'inflated', subjects_dir=subjects_dir,\n cortex='low_contrast', background='white', size=(800, 600))\nbrain.add_annotation('HCPMMP1')\naud_label = [label for label in labels if label.name == 'L_A1_ROI-lh'][0]\nbrain.add_label(aud_label, borders=False)", "We can also plot a combined set of labels (23 per hemisphere).", "brain = Brain('fsaverage', 'lh', 'inflated', subjects_dir=subjects_dir,\n cortex='low_contrast', background='white', size=(800, 600))\nbrain.add_annotation('HCPMMP1_combined')" ]
[ "code", "markdown", "code", "markdown", "code" ]
brooksandrew/simpleblog
_ipynb/2017-11-19-Fifty-states-rural-postman-problem.ipynb
mit
[ "Motivation\nRecently I spent a considerable amount of time writing the postman_problems python library implementing solvers for the Chinese and Rural Postman Problems (CPP and RPP respectively). I wrote about my initial motivation for the project: finding the optimal route through a trail system in a state park here. Although I've still yet to run the 34 mile optimal trail route, I am pleased with the optimization procedure. However, I couldn't help but feel that all those nights and weekends hobbying on this thing deserved a more satisfying visual than my static SVGs and hacky GIF. So to spice it up, I decided to solve the RPP on a graph derived from geodata and visualize on an interactive Leaflet map.\nThe Problem\nIn short, ride all 50 state named avenues in DC end-to-end following the shortest route possible.\nThere happens to be an annual 50 states ride sponsored by our regional bike association, WABA, that takes riders to each of the 50<sup>†</sup> state named avenues in DC. Each state's avenue is touched, but not covered in full. This problem takes it a step further by instituting this requirement. Thus, it boils to the RPP where the required edges are state avenues (end-to-end) and the optional edges are every other road within DC city limits.\nFor those unfamiliar with DC street naming convention, that can (and should) be remedied with a read through the history behind the street system here. Seriously, it's an interesting read. Basically there are 50 state named avenues in DC ranging from 0.3 miles (Indiana Avenue) to 10 miles (Massachusetts Avenue) comprising 115 miles in total.\nThe Solution\nThe data is grabbed from Open Street Maps (OSM). Most of the post is spent wrangling the OSM geodata into shape for the RPP algorithm using NetworkX graphs. 
The final route (and intermediate steps) are visualized using Leaflet maps through mplleaflet, which enables interactivity using tiles from Mapbox and CartoDB among others.\nNote to readers: the rendering of these maps can work the browser pretty hard; allow a couple extra seconds for loading.\nThe Approach\nMost of the heavy lifting leverages functions from the graph.py module in the postman_problems_examples repo. The majority of pre-RPP processing employs heuristics that simplify the computation such that this code can run in a reasonable amount of time. The parameters employed here, which I believe get pretty darn close to the optimal solution, run in about 50 minutes. By tweaking a couple parameters, accuracy can be sacrificed for time to get run time down to ~5 minutes on a standard 4 core laptop.\nVerbose technical details about the guts of each step are omitted from this post for readability. However, the interested reader can find these in the docstrings in graph.py.\nThe table of contents below provides the best high-level summary of the approach. 
All code needed to reproduce this analysis is in the postman_problems_examples repo, including the jupyter notebook used to author this blog post and a conda environment.\n<sup>†</sup> While there are 50 roadways, there are technically only 48 state named avenues: Ohio Drive and California Street are the stubborn exceptions.\nTable of Contents\n\nTable of Contents\n{:toc}", "import mplleaflet\nimport networkx as nx\nimport pandas as pd\nimport matplotlib.pyplot as plt\nfrom collections import Counter\n\n# can be found in https://github.com/brooksandrew/postman_problems_examples\nfrom osm2nx import read_osm, haversine\nfrom graph import (\n states_to_state_avenue_name, subset_graph_by_edge_name, keep_oneway_edges_only, create_connected_components,\n create_unkinked_connected_components, nodewise_distance_connected_components,\n calculate_component_overlap, calculate_redundant_components, create_deduped_state_road_graph, \n create_contracted_edge_graph, shortest_paths_between_components, find_minimum_weight_edges_to_connect_components,\n create_rpp_edgelist\n )\n\n# can be found in https://github.com/brooksandrew/postman_problems\nfrom postman_problems.tests.utils import create_mock_csv_from_dataframe\nfrom postman_problems.solver import rpp, cpp\nfrom postman_problems.stats import calculate_postman_solution_stats", "0: Get the data\nThere are many ways to grab Open Street Map (OSM) data, since it's, well, open. I grabbed the DC map from GeoFabrik here.\n1: Load OSM to NetworkX\nWhile some libraries like OSMnx provide an elegant interface to downloading, transforming and manipulating OSM data in NetworkX, I decided to start with the raw data itself. I adopted an OSM-to-nx parser from a hodge podge of Gists (here and there) to read_osm.\nread_osm creates a directed graph. 
However, for this analysis, we'll use undirected graphs with the assumption that all roads are bidirectional on a bike one way or another.", "%%time\n\n# load OSM to a directed NX\ng = read_osm('district-of-columbia-latest.osm') \n\n# create an undirected graph\ng_ud = g.to_undirected()", "This is a pretty big graph, about 275k edges. It takes about a minute to load on my machine (Macbook w 4 cores)", "print(len(g.edges())) # number of edges", "2: Make Graph w State Avenues only\nGenerate state avenue names", "STATE_STREET_NAMES = [\n 'Alabama','Alaska','Arizona','Arkansas','California','Colorado',\n 'Connecticut','Delaware','Florida','Georgia','Hawaii','Idaho','Illinois',\n 'Indiana','Iowa','Kansas','Kentucky','Louisiana','Maine','Maryland',\n 'Massachusetts','Michigan','Minnesota','Mississippi','Missouri','Montana',\n 'Nebraska','Nevada','New Hampshire','New Jersey','New Mexico','New York',\n 'North Carolina','North Dakota','Ohio','Oklahoma','Oregon','Pennsylvania',\n 'Rhode Island','South Carolina','South Dakota','Tennessee','Texas','Utah',\n 'Vermont','Virginia','Washington','West Virginia','Wisconsin','Wyoming'\n]", "Most state avenues are written in the long form (ex. Connecticut Avenue Northwest). However, some, such as Florida Ave NW, are written in the short form. 
To be safe, we grab any permutation OSM could throw at us.", "candidate_state_avenue_names = states_to_state_avenue_name(STATE_STREET_NAMES)\n\n# two states break the \"Avenue\" pattern\ncandidate_state_avenue_names += ['California Street Northwest', 'Ohio Drive Southwest']\n\n# preview\ncandidate_state_avenue_names[0:20]", "Create graph w state avenues only", "g_st = subset_graph_by_edge_name(g, candidate_state_avenue_names)\n\n# Add state edge attribute from full streetname (with avenue/drive and quandrant)\nfor e in g_st.edges(data=True):\n e[2]['state'] = e[2]['name'].rsplit(' ', 2)[0]", "This is a much smaller graph:", "print(len(g_st.edges()))", "But every state is represented:", "edge_count_by_state = Counter([e[2]['state'] for e in g_st.edges(data=True)])\n\n# number of unique states \nprint(len(edge_count_by_state))", "Here they are by edge count:", "edge_count_by_state", "2.1 Viz state avenues\nAs long as your NetworkX graph has lat and lon node attributes, mplleaflet can be used to pretty effortlessly plot your NetworkX graph on an interactive map. \nHere's the map with all the state avenues...", "fig, ax = plt.subplots(figsize=(1,8))\n\npos = {k: (g_st.node[k]['lon'], g_st.node[k]['lat']) for k in g_st.nodes()} \nnx.draw_networkx_edges(g_st, pos, width=4.0, edge_color='black', alpha=0.7)\n\n# save viz \nmplleaflet.save_html(fig, 'state_avenues_all.html', tiles='cartodb_positron')", "<iframe src=\"https://cdn.rawgit.com/brooksandrew/postman_problems_examples/master/50states/maps/state_avenues_all.html\" height=\"400\" width=\"750\"></iframe>\n\nYou can even customize with your favorite tiles. For example:\nmplleaflet.display(fig=ax.figure, tiles='stamen_wc')\n<img src=\"https://github.com/brooksandrew/postman_problems_examples/raw/master/50states/fig/stamen_wc_state_ave.jpeg\" width=\"700\">\n...But there's a wrinkle. 
Zoom in on bigger avenues, like New York or Rhode Island, and you'll notice that there are two parallel edges representing each direction as a separate one-way road. This usually occurs when there are several lanes of traffic in each direction, or physical dividers between directions. Example below:\nThis is great for OSM and point A to B routing problems, but for the Rural Postman problem it imposes the requirement that each main avenue be cycled twice. We're not into that.\nExample: Rhode Island Ave (parallel edges) vs Florida Ave (single edge)\n\n3. Remove Redundant State Avenues\nAs it turns out, removing these parallel (redundant) edges is a nontrivial problem to solve. My approach is the following:\n1. Build graph with one-way state avenue edges only.\n2. For each state avenue, create list of connected components that represent sequences of OSM ways in the same direction (broken up by intersections and turns).\n3. Compute distance between each node in a component to every other node in the other candidate components.\n4. Identify redundant components as those with the majority of their nodes below some threshold distance away from another component.\n5. Build graph without redundant edges.\n3.1 Create state avenue graph with one-way edges only", "g_st1 = keep_oneway_edges_only(g_st)", "The one-way avenues are plotted in red below. A brief look indicates that 80-90% of the one-way avenues are parallel (redundant). 
A few, like Idaho Avenue NW and Ohio Drive SW, are single one-way roads with no accompanying parallel edge for us to remove.\nNOTE: you'll need to zoom in 3-4 levels to see the parallel edges.", "fig, ax = plt.subplots(figsize=(1,6))\n\npos = {k: (g_st1.node[k]['lon'], g_st1.node[k]['lat']) for k in g_st1.nodes()}\nnx.draw_networkx_edges(g_st1, pos, width=3.0, edge_color='red', alpha=0.7)\n\n# save viz\nmplleaflet.save_html(fig, 'oneway_state_avenues.html', tiles='cartodb_positron')", "<iframe src=\"https://cdn.rawgit.com/brooksandrew/postman_problems_examples/master/50states/maps/oneway_state_avenues.html\" height=\"400\" width=\"750\"></iframe>\n\nCreate connected components with one-way state avenues", "comps = create_connected_components(g_st1)", "There are 163 distinct components in the graph above.", "len(comps)", "3.2 Split connected components\nRemove kinked nodes\nHowever, we need to break some of these components up into smaller ones. Many components, like the one below, have bends or a connected cycle that contain both the parallel edges, where we only want one. My approach is to identify the nodes with sharp angles and remove them. I don't know what the proper name for these is (you can read about angular resolution), but we'll call them \"kinked nodes.\" \nThis will split the connected component below into two, allowing us to determine that one of them is redundant.\n\nI borrow this code from jeromer to calculate the compass bearing (0 to 360) of each edge. 
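A minimal sketch of that compass-bearing calculation (the standard forward-azimuth formula — the same idea as the jeromer snippet, though not his code verbatim):

```python
from math import atan2, cos, degrees, radians, sin

def compass_bearing(pt_a, pt_b):
    """Initial bearing in degrees [0, 360) from pt_a to pt_b.

    Points are (lat, lon) tuples in decimal degrees.
    """
    lat_a, lat_b = radians(pt_a[0]), radians(pt_b[0])
    d_lon = radians(pt_b[1] - pt_a[1])

    x = sin(d_lon) * cos(lat_b)
    y = cos(lat_a) * sin(lat_b) - sin(lat_a) * cos(lat_b) * cos(d_lon)

    # atan2 gives (-180, 180]; normalize to [0, 360)
    return (degrees(atan2(x, y)) + 360) % 360

# due-north and due-east sanity checks
print(compass_bearing((38.90, -77.03), (38.91, -77.03)))  # 0.0 (north)
print(compass_bearing((0.0, 0.0), (0.0, 1.0)))            # 90.0 (east)
```

Each edge gets a bearing like this, and adjacent edges whose bearings disagree by more than the threshold flag their shared node as kinked.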
Wherever the bearing difference between two adjacent edges is greater than bearing_thresh, we call the node shared by both edges a \"kinked node.\" A relatively low bearing_thresh of 60 appeared to work best after some experimentation.", "# create list of comps (graphs) without kinked nodes\ncomps_unkinked = create_unkinked_connected_components(comps=comps, bearing_thresh=60)\n\n# comps in dict form for easy lookup\ncomps_dict = {comp.graph['id']:comp for comp in comps_unkinked} ", "After removing these \"kinked nodes,\" our list of components grows from 163 to 246:", "len(comps_unkinked)", "Viz components without kinked nodes\nExample: Here's the Massachusetts Ave example from above after we remove kinked nodes:\n\nFull map: Zoom in on the map below and you'll see that we split up most of the obvious components that should be split. There are a few corner cases that we miss, but I'd estimate we programmatically split about 95% of the components correctly.", "fig, ax = plt.subplots(figsize=(1,6))\n\nfor comp in comps_unkinked:\n pos = {k: (comp.node[k]['lon'], comp.node[k]['lat']) for k in comp.nodes()}\n nx.draw_networkx_edges(comp, pos, width=3.0, edge_color='orange', alpha=0.7)\n \nmplleaflet.save_html(fig, 'oneway_state_avenues_without_kinked_nodes.html', tiles='cartodb_positron')", "<iframe src=\"https://cdn.rawgit.com/brooksandrew/postman_problems_examples/master/50states/maps/oneway_state_avenues_without_kinked_nodes.html\" height=\"500\" width=\"750\"></iframe>\n\n3.3 & 3.4 Match connected components\nNow that we've crafted the right components, we calculate how close (parallel) each component is to one another.\nThis is a relatively coarse approach, but performs surprisingly well:\n1. Find closest nodes from candidate components to each node in each component (pseudo code below):\nFor each node N in component C:\n For each C_cand in components with same street avenue as C:\n Calculate closest node in C_cand to N.\n2. Calculate overlap between components. 
Using the distances calculated in 1., we say that a node from component C is matched to a component C_cand if the distance is less than thresh_distance specified in calculate_component_overlap. 75 meters seemed to work pretty well. Essentially we're saying these nodes are close enough to be considered interchangeable.\n3. Use the node-wise matching calculated in 2. to calculate which components are redundant. If thresh_pct of nodes in component C are close enough (within thresh_distance) to nodes in component C_cand, we call C redundant and discard it.", "# calculate nodewise distances between each node in comp with closest node in each candidate\ncomp_matches = nodewise_distance_connected_components(comps_unkinked)\n\n# calculate overlap between components\ncomp_overlap = calculate_component_overlap(comp_matches, thresh_distance=75)\n\n# identify redundant and non-redundant components\nremove_comp_ids, keep_comp_ids = calculate_redundant_components(comp_overlap, thresh_pct=0.75)", "Viz redundant component solution\nThe map below visualizes the solution to the redundant parallel edges problem. 
There are some misses, but overall this simple approach works surprisingly well: \n\n<font color='red'>red</font>: redundant one-way edges to remove\n<font color='black'>black</font>: one-way edges to keep\n<font color='blue'>blue</font>: all state avenues", "fig, ax = plt.subplots(figsize=(1,8))\n\n# plot redundant one-way edges\nfor i, road in enumerate(remove_comp_ids):\n for comp_id in remove_comp_ids[road]:\n comp = comps_dict[comp_id]\n posc = {k: (comp.node[k]['lon'], comp.node[k]['lat']) for k in comp.nodes()}\n nx.draw_networkx_edges(comp, posc, width=7.0, edge_color='red')\n\n# plot keeper one-way edges \nfor i, road in enumerate(keep_comp_ids):\n for comp_id in keep_comp_ids[road]:\n comp = comps_dict[comp_id]\n posc = {k: (comp.node[k]['lon'], comp.node[k]['lat']) for k in comp.nodes()}\n nx.draw_networkx_edges(comp, posc, width=3.0, edge_color='black')\n\n# plot all state avenues\npos_st = {k: (g_st.node[k]['lon'], g_st.node[k]['lat']) for k in g_st.nodes()} \nnx.draw_networkx_edges(g_st, pos_st, width=1.0, edge_color='blue', alpha=0.7)\n\nmplleaflet.save_html(fig, 'redundant_edges.html', tiles='cartodb_positron')", "<iframe src=\"https://cdn.rawgit.com/brooksandrew/postman_problems_examples/master/50states/maps/redundant_edges.html\" height=\"500\" width=\"750\"></iframe>\n\n3.5 Build graph without redundant edges\nThis is essentially the graph with just black and <font color='blue'>blue</font> edges from the map above.", "# create a single graph with deduped state roads\ng_st_nd = create_deduped_state_road_graph(g_st, comps_dict, remove_comp_ids)", "After deduping the redundant edges, our connected component count drops from 246 to 96.", "len(list(nx.connected_components(g_st_nd)))", "4. Create Single Connected Component\nThe strategy I employ for solving the Rural Postman Problem (RPP) in postman_problems is simple in that it reuses the machinery from the Chinese Postman Problem (CPP) solver here. 
However, it makes the strong assumption that the graph's required edges form a single connected component. This is obviously not true for our state avenue graph as-is, but it's not too far off. Although there are 96 components, only a couple are more than a few hundred meters from the next closest component. \nSo we hack it a bit by adding required edges to the graph to make it a single connected component. The tricky part is choosing the edges that add as little distance as possible. This was the first computationally intensive step that required some clever tricks and approximations to ensure execution in a reasonable amount of time.\nMy approach:\n\n\nBuild graph with contracted edges only. \n\n\nCalculate haversine distance between each possible pair of components. \n\n\nFind minimum distance connectors: iterate through the data structure created in 2. to calculate shortest paths for top candidates based on haversine distance and add shortest connectors to graph. More details below.\n\n\nBuild single component graph.\n\n\n4.1 Contract edges\nNodes with degree 2 are collapsed into an edge stretching from a dead-end node (degree 1) or intersection (degree >= 3) to another. This achieves two things:\n * Limits the number of distance calculations.\n * Ensures that components are connected at logical points (dead ends and intersections) rather than arbitrary parts of a roadway. This will make for a more continuous route.", "# Create graph with contracted edges only\ng_st_contracted = create_contracted_edge_graph(graph=g_st_nd, \n edge_weight='length')", "This significantly reduces the nodes needed for distance computations by a factor of > 15.", "print('Number of nodes in contracted graph: {}'.format(len(g_st_contracted.nodes())))\nprint('Number of nodes in original graph: {}'.format(len(g_st_nd.nodes())))", "4.2 Calculate haversine distance between components\nThe 345 nodes from the contracted edge graph translate to >100,000 possible node pairings. 
That means >100,000 distance calculations. While applying a shortest path algorithm over the graph would certainly be more exact, it is painfully slow compared to simple haversine distance. This is mainly due to the high number of nodes and edges in the DC OSM map (over 250k edges).\nOn my laptop I averaged about 4 shortest path calculations per second. Not too bad for a handful, but 115k would take about 7 hours. Haversine distance, by comparison, churns through 115k in a couple seconds.", "# create dataframe with shortest paths (haversine distance) between each component\ndfsp = shortest_paths_between_components(g_st_contracted)\n\ndfsp.shape[0] # number of rows (node pairs)", "4.3 Find minimum distance connectors\nThis gets a bit tricky. Basically we iterate through the top (closest) candidate pairs of components and connect them iteration-by-iteration with the shortest path edge. We use pre-calculated haversine distance to get in the right ballpark, then refine with true shortest path for the closest 20 candidates. This helps us avoid the scenario where we naively connect two nodes that are geographically close as the crow flies (haversine), but far away via available roads. Two nodes separated by highways or train tracks, for example.", "# min weight edges that create a single connected component\nconnector_edges = find_minimum_weight_edges_to_connect_components(dfsp=dfsp, \n graph=g_ud, \n edge_weight='length', \n top=20)", "We had 96 components to connect, so it makes sense that we have 95 connectors.", "len(connector_edges)", "4.4 Build single component graph", "# adding connector edges to create one single connected component\nfor e in connector_edges:\n g_st_contracted.add_edge(e[0], e[1], distance=e[2]['distance'], path=e[2]['path'], required=1, connector=True)", "We add about 12 miles with the 95 additional required edges. 
That's not too bad: an average distance of 0.13 miles per each edge added.", "print(sum([e[2]['distance'] for e in g_st_contracted.edges(data=True) if e[2].get('connector')])/1609.34)", "So that leaves us with a single component of 124 miles of required edges to optimize a route through. That means the distance of deduped state avenues alone, without connectors (~112 miles) is just a couple miles away from what Wikipedia reports (115 miles).", "print(sum([e[2]['distance'] for e in g_st_contracted.edges(data=True)])/1609.34)", "Make graph with granular edges (filling in those that were contracted) connecting components:", "g1comp = g_st_contracted.copy()\nfor e in g_st_contracted.edges(data=True):\n if 'path' in e[2]:\n granular_type = 'connector' if 'connector' in e[2] else 'state'\n \n # add granular connector edges to graph \n for pair in list(zip(e[2]['path'][:-1], e[2]['path'][1:])):\n g1comp.add_edge(pair[0], pair[1], granular='True', granular_type=granular_type)\n \n # add granular connector nodes to graph\n for n in e[2]['path']:\n g1comp.add_node(n, lat=g.node[n]['lat'], lon=g.node[n]['lon'])", "4.5 Viz single connected component\nBlack edges represent the deduped state avenues.\n<font color='red'>Red</font> edges represent the 12 miles of connectors that create the single connected component.", "fig, ax = plt.subplots(figsize=(1,6))\n\ng1comp_conn = g1comp.copy()\ng1comp_st = g1comp.copy()\n\nfor e in g1comp.edges(data=True):\n if ('granular_type' not in e[2]) or (e[2]['granular_type'] != 'connector'):\n g1comp_conn.remove_edge(e[0], e[1])\n\nfor e in g1comp.edges(data=True):\n if ('granular_type' not in e[2]) or (e[2]['granular_type'] != 'state'):\n g1comp_st.remove_edge(e[0], e[1])\n \npos = {k: (g1comp_conn.node[k]['lon'], g1comp_conn.node[k]['lat']) for k in g1comp_conn.nodes()} \nnx.draw_networkx_edges(g1comp_conn, pos, width=5.0, edge_color='red')\n\npos_st = {k: (g1comp_st.node[k]['lon'], g1comp_st.node[k]['lat']) for k in g1comp_st.nodes()} 
\nnx.draw_networkx_edges(g1comp_st, pos_st, width=3.0, edge_color='black')\n\n# save viz\nmplleaflet.save_html(fig, 'single_connected_comp.html', tiles='cartodb_positron')\n", "<iframe src=\"https://cdn.rawgit.com/brooksandrew/postman_problems_examples/master/50states/maps/single_connected_comp.html\" height=\"500\" width=\"750\"></iframe>\n\n5. Solve CPP\nI don't expect the Chinese Postman solution to be optimal since it only utilizes the required edges. However, I do expect it to execute quickly and serve as a benchmark for the Rural Postman solution. In the age of \"deep learning,\" I agree with Smerity, baselines need more love.\n5.1 Create CPP edgelist\nThe cpp solver I wrote operates off an edgelist (text file). This feels a bit clunky here, but it works.", "# create list with edge attributes and \"from\" & \"to\" nodes\ntmp = []\nfor e in g_st_contracted.edges(data=True):\n tmpi = e[2].copy() # so we don't mess w original graph\n tmpi['start_node'] = e[0]\n tmpi['end_node'] = e[1]\n tmp.append(tmpi)\n\n# create dataframe w node1 and node2 in order\neldf = pd.DataFrame(tmp) \neldf = eldf[['start_node', 'end_node'] + list(set(eldf.columns)-{'start_node', 'end_node'})]", "The first two columns are interpreted as the from and to nodes; everything else as edge attributes.", "eldf.head(3)", "5.2 CPP solver\nStarting point\nI fix the starting node for the solution to OSM node 49765113 which corresponds to (38.917002, -77.0364987): the intersection of New Hampshire Avenue NW, 16th St NW and U St NW... and also close to my house:\n\nSolve", "# create mockfilename\nelfn = create_mock_csv_from_dataframe(eldf)\n\n# solve\nSTART_NODE = '49765113' # New Hampshire Ave NW & U St NW.\ncircuit_cpp, gcpp = cpp(elfn, start_node=START_NODE)", "5.3: CPP results\nThe CPP solution covers roughly 390,000 meters, about 242 miles.\nThe optimal CPP route doubles the required distance, doublebacking every edge on average... 
definitely not ideal.", "# circuit stats\ncalculate_postman_solution_stats(circuit_cpp)", "6. Solve RPP\nThe RPP should improve the CPP solution as it considers optional edges that can drastically limit the amount of doublebacking. \nWe could add every possible edge that connects the required nodes, but it turns out that computation blows up quickly, and I'm not that patient. The get_shortest_paths_distances is the bottleneck applying dijkstra path length on all possible combinations. There are ~14k pairs to calculate shortest path for (4 per second) which would take almost one hour.\nHowever, we can use some heuristics to speed this up dramatically without sacrificing too much.\n6.1 Create RPP edgelist\nIdeally optional edges will be relatively short, since they are, well, optional. It is unlikely that the RPP algorithm will find that leveraging an optional edge that stretches from one corner of the graph to another will be efficient. Thus we constrain the set of optional edges presented to the RPP solver to include only those less than max_distance.\nI experimented with several thresholds. 3200 meters certainly took longer (~40 minutes), but yielded the best route results. I tried 4000m which ran for about 4 hours and returned a route with the same distance (160 miles) as the 3200m threshold.", "%%time\ndfrpp = create_rpp_edgelist(g_st_contracted=g_st_contracted, \n graph_full=g_ud, \n edge_weight='length', \n max_distance=3200)", "Check how many optional edges are considered (0=optional, 1=required):", "Counter(dfrpp['required'])", "6.2 RPP solver\nApply the RPP solver to the processed dataset.", "%%time\n\n# create mockfilename\nelfn = create_mock_csv_from_dataframe(dfrpp)\n\n# solve\ncircuit_rpp, grpp = rpp(elfn, start_node=START_NODE)", "6.3 RPP results\nAs expected, the RPP route is considerably shorter than the CPP solution. 
The ~242 mile CPP route is cut significantly to ~160 miles with the RPP approach.\n~259,000m (~161 miles) in total with ~59,000m (37 miles) of doublebacking. Not bad... but probably a 2-day ride.", "# RPP route distance (miles)\nprint(sum([e[3]['distance'] for e in circuit_rpp])/1609.34)\n\n# hack to convert 'path' from str back to list. Caused by `create_mock_csv_from_dataframe`\nfor e in circuit_rpp:\n if type(e[3]['path']) == str:\n exec('e[3][\"path\"]=' + e[3][\"path\"])\n\ncalculate_postman_solution_stats(circuit_rpp)", "As seen below, filling the contracted edges back in with the granular nodes adds considerably to the edge count.", "print('Number of edges in RPP circuit (with contracted edges): {}'.format(len(circuit_rpp)))\nprint('Number of edges in RPP circuit (with granular edges): {}'.format(rppdf.shape[0]))", "6.4 Viz RPP graph\nCreate RPP granular graph\nAdd the granular edges (that we contracted for computation) back to the graph.", "# calc shortest path between optional nodes and add to g1comp graph\nfor e in [e for e in circuit_rpp if e[3]['required']==0] :\n \n # add granular optional edges to g1comp\n path = e[3]['path']\n for pair in list(zip(path[:-1], path[1:])):\n if g1comp.has_edge(pair[0], pair[1]):\n continue\n g1comp.add_edge(pair[0], pair[1], granular='True', granular_type='optional')\n \n # add granular nodes from optional edge paths to g1comp\n for n in path:\n g1comp.add_node(n, lat=g.node[n]['lat'], lon=g.node[n]['lon'])", "Visualize RPP solution by edge type\n\n<font color='black'>black</font>: required state avenue edges\n<font color='red'>red</font>: required non-state avenue edges added to form single component\n<font color='blue'>blue</font>: optional non-state avenue roads", "fig, ax = plt.subplots(figsize=(1,12))\n\ng1comp_conn = g1comp.copy()\ng1comp_st = g1comp.copy()\ng1comp_opt = g1comp.copy()\n\nfor e in g1comp.edges(data=True):\n if e[2].get('granular_type') != 'connector':\n g1comp_conn.remove_edge(e[0], e[1])\n\nfor e in 
g1comp.edges(data=True):\n #if e[2].get('name') not in candidate_state_avenue_names:\n if e[2].get('granular_type') != 'state':\n g1comp_st.remove_edge(e[0], e[1])\n\nfor e in g1comp.edges(data=True):\n if e[2].get('granular_type') != 'optional':\n g1comp_opt.remove_edge(e[0], e[1])\n\n \npos = {k: (g1comp_conn.node[k]['lon'], g1comp_conn.node[k]['lat']) for k in g1comp_conn.nodes()} \nnx.draw_networkx_edges(g1comp_conn, pos, width=6.0, edge_color='red')\n\npos_st = {k: (g1comp_st.node[k]['lon'], g1comp_st.node[k]['lat']) for k in g1comp_st.nodes()} \nnx.draw_networkx_edges(g1comp_st, pos_st, width=4.0, edge_color='black')\n\npos_opt = {k: (g1comp_opt.node[k]['lon'], g1comp_opt.node[k]['lat']) for k in g1comp_opt.nodes()} \nnx.draw_networkx_edges(g1comp_opt, pos_opt, width=2.0, edge_color='blue')\n\n# save viz\nmplleaflet.save_html(fig, 'rpp_solution_edge_type.html', tiles='cartodb_positron')", "<iframe src=\"https://cdn.rawgit.com/brooksandrew/postman_problems_examples/master/50states/maps/rpp_solution_edge_type.html\" height=\"500\" width=\"750\"></iframe>\n\nVisualize RPP solution by edge walk count\nEdge walks per color: \n<font color='black'>black</font>: 1 <br>\n<font color='magenta'>magenta</font>: 2 <br>\n<font color='orange'>orange</font>: 3 <br>\nEdges walked more than once are also widened.\nThis solution feels pretty reasonable with surprisingly little doublebacking. 
After staring at this for several minutes, I could think of roads I'd prefer not to cycle on, but no obvious shorter paths.", "## Create graph directly from rpp_circuit and original graph w lat/lon (g_ud)\ncolor_seq = [None, 'black', 'magenta', 'orange', 'yellow']\ngrppviz = nx.Graph()\nedges_cnt = Counter([tuple(sorted([e[0], e[1]])) for e in circuit_rpp])\n\nfor e in circuit_rpp:\n for n1, n2 in zip(e[3]['path'][:-1], e[3]['path'][1:]):\n if grppviz.has_edge(n1, n2):\n grppviz[n1][n2]['linewidth'] += 2\n grppviz[n1][n2]['cnt'] += 1\n else: \n grppviz.add_edge(n1, n2, linewidth=2.5)\n grppviz[n1][n2]['color_st'] = 'black' if g_st.has_edge(n1, n2) else 'red'\n grppviz[n1][n2]['cnt'] = 1\n grppviz.add_node(n1, lat=g_ud.node[n1]['lat'], lon=g_ud.node[n1]['lon'])\n grppviz.add_node(n2, lat=g_ud.node[n2]['lat'], lon=g_ud.node[n2]['lon']) \n\nfor e in grppviz.edges(data=True):\n e[2]['color_cnt'] = color_seq[e[2]['cnt']]\n\nfig, ax = plt.subplots(figsize=(1,12))\n\npos = {k: (grppviz.node[k]['lon'], grppviz.node[k]['lat']) for k in grppviz.nodes()} \ne_width = [e[2]['linewidth'] for e in grppviz.edges(data=True)]\ne_color = [e[2]['color_cnt'] for e in grppviz.edges(data=True)]\nnx.draw_networkx_edges(grppviz, pos, width=e_width, edge_color=e_color, alpha=0.7)\n \n# save viz\nmplleaflet.save_html(fig, 'rpp_solution_edge_cnt.html', tiles='cartodb_positron')", "<iframe src=\"https://cdn.rawgit.com/brooksandrew/postman_problems_examples/master/50states/maps/rpp_solution_edge_cnt.html\" height=\"500\" width=\"750\"></iframe>\n\n6.5 Serialize RPP solution\nCSV\nRemember we contracted the edges in 4.1 for more efficient computation. 
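Each contracted edge keeps the granular nodes it spans in its 'path' attribute, so expanding it back is just a matter of pairing consecutive nodes along that path. A toy sketch (hypothetical node names):

```python
# hypothetical contracted edge: its 'path' attribute stores the granular
# nodes the edge spans in the original graph
path = ['n1', 'n2', 'n3', 'n4']

# pairing consecutive nodes recovers the granular edges to fill back in
granular_edges = list(zip(path[:-1], path[1:]))
print(granular_edges)  # [('n1', 'n2'), ('n2', 'n3'), ('n3', 'n4')]
```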
However, when we visualize the solution, the more granular edges within the larger contracted ones are filled back in, so we can see the exact route to ride with all the bends and squiggles.", "# fill in RPP solution edgelist with granular nodes\nrpplist = []\nfor ee in circuit_rpp:\n path = list(zip(ee[3]['path'][:-1], ee[3]['path'][1:]))\n for e in path:\n rpplist.append({\n 'start_node': e[0],\n 'end_node': e[1],\n 'start_lat': g_ud.node[e[0]]['lat'],\n 'start_lon': g_ud.node[e[0]]['lon'],\n 'end_lat': g_ud.node[e[1]]['lat'],\n 'end_lon': g_ud.node[e[1]]['lon'],\n 'street_name': g_ud[e[0]][e[1]].get('name')\n })\n \n# write solution to disk\nrppdf = pd.DataFrame(rpplist)\nrppdf.to_csv('rpp_solution.csv', index=False)\n", "Geojson\nSimilarly, we create a geojson object of the RPP solution using the time attribute to keep track of the route order. This data structure can be used for fancy js/d3 visualizations. Coming soon, hopefully.", "geojson = {'features':[], 'type': 'FeatureCollection'}\ntime = 0\npath = list(reversed(circuit_rpp[0][3]['path']))\n\nfor e in circuit_rpp:\n if e[3]['path'][0] != path[-1]: \n path = list(reversed(e[3]['path']))\n else:\n path = e[3]['path']\n \n for n in path:\n time += 1\n doc = {'type': 'Feature',\n 'properties': {\n 'latitude': g.node[n]['lat'],\n 'longitude': g.node[n]['lon'],\n 'time': time,\n 'id': e[3].get('id')\n },\n 'geometry':{\n 'type': 'Point',\n 'coordinates': [g.node[n]['lon'], g.node[n]['lat']]\n }\n }\n geojson['features'].append(doc)\n\nwith open('circuit_rpp.geojson','w') as f:\n json.dump(geojson, f)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
gcallah/Indra
notebooks/InteractiveIndra.ipynb
gpl-3.0
[ "Indra2 basic system\ninteractive.py contains some sample code on how to use the new Indra system. This notebook will illustrate its contents.\nRight now we have three types of entity at play here:\n\nEntity: the base \"thingie\" from which all else descends. Operations on entities act like vector operations.\nComposite: an Entity that can hold other entities. Because it is itself an entity, we can nest composites. Operations on composites act like set operations.\nTime: a Composite that loops over its members when acting, giving rise to \"periods\" of action.\n\nLet's import interactive and see what we can do:", "from indra.interactive import *", "Now let's have a look at some of the things we get from that import:", "newton", "newton is an Entity. The base Entity has a name, a lifespan (duration), and an arbitrary number of attributes.\nhardy is another entity. Here's how easy it is to make a group out of two entities:", "great_mathematicians = newton + hardy\ngreat_mathematicians", "Oops, we forgot Leibniz and Ramanujan:", "forgotten = leibniz + ramanujan\nforgotten\ngreat_mathematicians += forgotten\ngreat_mathematicians\n\n# We can also do set intersection:\n\ngreat_mathematicians *= forgotten\ngreat_mathematicians", "And take subsets, using a predicate:", "calc_founders = newton + leibniz\njust_l = calc_founders.subset(max_duration, 25, name=\"Just Leibniz!\")\njust_l", "In that particular case, our predicate selected just the set members with a duration of less than 25, and so we got only Leibniz. The function signature for subset is:\ndef subset(self, predicate, *args, name=None):\n*args is a list of arguments to pass to the predicate function, so it can take an arbitrary number. The optional name parameter will name the new subset.", "calc_founders" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
GoogleCloudPlatform/asl-ml-immersion
notebooks/feature_engineering/labs/3_keras_basic_feat_eng.ipynb
apache-2.0
[ "Basic Feature Engineering in Keras\nLearning Objectives\n\nCreate an input pipeline using tf.data\nEngineer features to create categorical, crossed, and numerical feature columns\n\nOverview\nIn this lab, we utilize feature engineering to improve the prediction of housing prices using a Keras Sequential Model.", "import os\n\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport tensorflow as tf\nfrom sklearn.model_selection import train_test_split\nfrom tensorflow import feature_column as fc\nfrom tensorflow.keras import layers\n\nprint(\"TensorFlow version: \", tf.version.VERSION)", "Many of the Google Machine Learning Courses Programming Exercises use the California Housing Dataset, which contains data drawn from the 1990 U.S. Census. Our lab dataset has been pre-processed so that there are no missing values.", "!ls -l ../data/", "Let's read in the dataset and create a Pandas dataframe.", "housing_df = pd.read_csv(\"../data/housing_pre-proc.csv\", on_bad_lines=\"skip\")\nhousing_df.head()", "We can use .describe() to see some summary statistics for the numeric fields in our dataframe. Note, for example, the count row and corresponding columns. The count shows 20433.000000 for all feature columns. Thus, there are no missing values.", "housing_df.describe()", "Split the dataset for ML\nThe dataset we loaded was a single CSV file. We will split this into train, validation, and test sets.", "train, test = train_test_split(housing_df, test_size=0.2)\ntrain, val = train_test_split(train, test_size=0.2)\n\nprint(len(train), \"train examples\")\nprint(len(val), \"validation examples\")\nprint(len(test), \"test examples\")", "Now, we need to output the split files. We will specifically need the test.csv later for testing. 
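Because the 20% split is applied twice, the overall proportions come out to roughly 64% train, 16% validation and 20% test. A quick sanity check of that arithmetic (a sketch using the row count from .describe() above; exact counts can differ by one from train_test_split's own rounding):

```python
total = 20433                  # rows in the full dataset (from .describe() above)
test = round(total * 0.2)      # first split: 20% held out for test
remainder = total - test
val = round(remainder * 0.2)   # second split: 20% of the remainder for validation
train = remainder - val

print(test)                         # 4087
print(train + val + test == total)  # True
```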
You should see the files appear in the home directory.", "train.to_csv(\"../data/housing-train.csv\", encoding=\"utf-8\", index=False)\n\nval.to_csv(\"../data/housing-val.csv\", encoding=\"utf-8\", index=False)\n\ntest.to_csv(\"../data/housing-test.csv\", encoding=\"utf-8\", index=False)\n\n!head ../data/housing*.csv", "Create an input pipeline using tf.data\nNext, we will wrap the dataframes with tf.data. This will enable us to use feature columns as a bridge to map from the columns in the Pandas dataframe to features used to train the model. \nExercise. Here, we create an input pipeline using tf.data. This function is missing two lines. Correct and run the cell.", "# A utility method to create a tf.data dataset from a Pandas Dataframe\ndef df_to_dataset(dataframe, shuffle=True, batch_size=32):\n dataframe = dataframe.copy()\n\n # TODO: Your code goes here\n\n if shuffle:\n ds = ds.shuffle(buffer_size=len(dataframe))\n ds = ds.batch(batch_size)\n return ds", "Next we initialize the training and validation datasets.", "batch_size = 32\ntrain_ds = df_to_dataset(train)\nval_ds = df_to_dataset(val, shuffle=False, batch_size=batch_size)", "Exercise. Now that we have created the input pipeline, let's call it to see the format of the data it returns. We have used a small batch size to keep the output readable.", "for feature_batch, label_batch in train_ds.take(1):\n print(\"Every feature:\", list(feature_batch.keys()))\n print(\n \"A batch of households:\",\n # TODO: Your code goes here\n )\n print(\n \"A batch of ocean_proximity:\",\n # TODO: Your code goes here\n )\n print(\n \"A batch of targets:\",\n # TODO: Your code goes here\n )", "We can see that the dataset returns a dictionary of column names (from the dataframe) that map to column values from rows in the dataframe.\nNumeric columns\nThe output of a feature column becomes the input to the model. A numeric is the simplest type of column. It is used to represent real valued features. 
When using this column, your model will receive the column value from the dataframe unchanged.\nExercise. In the California housing prices dataset, most columns from the dataframe are numeric. Let's create a variable called numeric_cols to hold only the numerical feature columns.", "numeric_cols = # TODO: Your code goes here", "Min-max scaler function\nIt is very important for numerical variables to get scaled before they are \"fed\" into the neural network. Here we use min-max scaling: we create a function named 'get_scal' which takes a numerical feature and returns a 'minmax' function, which will be used in tf.feature_column.numeric_column() as the normalizer_fn parameter. The 'minmax' function itself takes a number from a particular feature and returns the scaled value of that number. \nExercise. Next, we scale the numerical feature columns that we assigned to the variable \"numeric_cols\".", "# Scalar def get_scal(feature):\ndef get_scal(feature):\n def minmax(x):\n mini = # TODO: Your code goes here\n maxi = # TODO: Your code goes here\n return # TODO: Your code goes here\n return(minmax)\n\nfeature_columns = []\nfor header in numeric_cols:\n scal_input_fn = get_scal(\n # TODO: Your code goes here\n )\n feature_columns.append(\n fc.numeric_column(\n # TODO: Your code goes here\n )\n )", "Next, we should validate the total number of feature columns. 
Compare this number to the number of numeric features you input earlier.", "print(\"Total number of feature columns: \", len(feature_columns))", "Using the Keras Sequential Model\nNext, we will run this cell to compile and fit the Keras Sequential model.", "# Model create\nfeature_layer = tf.keras.layers.DenseFeatures(feature_columns, dtype=\"float64\")\n\nmodel = tf.keras.Sequential(\n [\n feature_layer,\n layers.Dense(12, activation=\"relu\"),\n layers.Dense(8, activation=\"relu\"),\n layers.Dense(1, activation=\"linear\", name=\"median_house_value\"),\n ]\n)\n\n# Model compile\nmodel.compile(optimizer=\"adam\", loss=\"mse\", metrics=[\"mse\"])\n\n# Model Fit\nhistory = model.fit(train_ds, validation_data=val_ds, epochs=32)", "Next we show loss as Mean Squared Error (MSE). Remember that MSE is the most commonly used regression loss function. MSE is the sum of squared distances between our target variable (here, the median house value) and the predicted values.", "loss, mse = model.evaluate(train_ds)\nprint(\"Mean Squared Error\", mse)", "Visualize the model loss curve\nNext, we will use matplotlib to draw the model's loss curves for training and validation. A line plot is also created showing the mean squared error loss over the training epochs for both the train (blue) and validation (orange) sets.", "def plot_curves(history, metrics):\n nrows = 1\n ncols = 2\n fig = plt.figure(figsize=(10, 5))\n\n for idx, key in enumerate(metrics):\n ax = fig.add_subplot(nrows, ncols, idx + 1)\n plt.plot(history.history[key])\n plt.plot(history.history[f\"val_{key}\"])\n plt.title(f\"model {key}\")\n plt.ylabel(key)\n plt.xlabel(\"epoch\")\n plt.legend([\"train\", \"validation\"], loc=\"upper left\");\n\nplot_curves(history, [\"loss\", \"mse\"])", "Load test data\nNext, we read in the test.csv file and validate that there are no null values. \nAgain, we can use .describe() to see some summary statistics for the numeric fields in our dataframe. 
The count shows 4087.000000 for all feature columns. Thus, there are no missing values.", "test_data = pd.read_csv(\"../data/housing-test.csv\")\ntest_data.describe()", "Exercise. Now that we have created an input pipeline using tf.data and compiled a Keras Sequential Model, we create the input function for the test data and initialize the test_predict variable.", "def test_input_fn(features, batch_size=256):\n \"\"\"An input function for prediction.\"\"\"\n # Convert the inputs to a Dataset without labels.\n return tf.data.Dataset.from_tensor_slices(\n # TODO: Your code goes here\n )\n\ntest_predict = test_input_fn(dict(test_data))", "Prediction: Linear Regression\nBefore we begin to feature engineer our feature columns, we should predict the median house value. By predicting the median house value now, we can then compare it with the median house value after feature engineering.\nTo predict with Keras, you simply call model.predict() and pass in the housing features you want to predict the median_house_value for. 
Note: We are running the prediction locally.", "predicted_median_house_value = model.predict(test_predict)", "Next, we run two predictions in separate cells - one where ocean_proximity=INLAND and one where ocean_proximity=NEAR OCEAN.", "# Ocean_proximity is INLAND\nmodel.predict(\n {\n \"longitude\": tf.convert_to_tensor([-121.86]),\n \"latitude\": tf.convert_to_tensor([39.78]),\n \"housing_median_age\": tf.convert_to_tensor([12.0]),\n \"total_rooms\": tf.convert_to_tensor([7653.0]),\n \"total_bedrooms\": tf.convert_to_tensor([1578.0]),\n \"population\": tf.convert_to_tensor([3628.0]),\n \"households\": tf.convert_to_tensor([1494.0]),\n \"median_income\": tf.convert_to_tensor([3.0905]),\n \"ocean_proximity\": tf.convert_to_tensor([\"INLAND\"]),\n },\n steps=1,\n)\n\n# Ocean_proximity is NEAR OCEAN\nmodel.predict(\n {\n \"longitude\": tf.convert_to_tensor([-122.43]),\n \"latitude\": tf.convert_to_tensor([37.63]),\n \"housing_median_age\": tf.convert_to_tensor([34.0]),\n \"total_rooms\": tf.convert_to_tensor([4135.0]),\n \"total_bedrooms\": tf.convert_to_tensor([687.0]),\n \"population\": tf.convert_to_tensor([2154.0]),\n \"households\": tf.convert_to_tensor([742.0]),\n \"median_income\": tf.convert_to_tensor([4.9732]),\n \"ocean_proximity\": tf.convert_to_tensor([\"NEAR OCEAN\"]),\n },\n steps=1,\n)", "Each array returns a predicted value. What do these numbers mean? Let's compare this value to the test set. \nGo to the test.csv you read in a few cells up. Locate the first line and find the median_house_value - which should be 249,000 dollars near the ocean. What value did your model predict for the median_house_value? Was it a solid model performance? Let's see if we can improve this a bit with feature engineering! \nEngineer features to create categorical and numerical features\nExercise. Create a cell that indicates which features will be used in the model.\nNote: Be sure to bucketize 'housing_median_age' and ensure that 'ocean_proximity' is one-hot encoded. 
And, don't forget your numeric values!", "numeric_cols = # TODO: Your code goes here\n\nbucketized_cols = # TODO: Your code goes here\n\n# indicator columns, categorical features\ncategorical_cols = # TODO: Your code goes here", "Next, we scale the numerical, bucketized, and categorical feature columns that we assigned to the variables in the preceding cell.", "# Scalar def get_scal(feature):\ndef get_scal(feature):\n def minmax(x):\n mini = train[feature].min()\n maxi = train[feature].max()\n return (x - mini) / (maxi - mini)\n\n return minmax\n\n# All numerical features - scaling\nfeature_columns = []\nfor header in numeric_cols:\n scal_input_fn = get_scal(header)\n feature_columns.append(\n fc.numeric_column(header, normalizer_fn=scal_input_fn)\n )", "Categorical Feature\nIn this dataset, 'ocean_proximity' is represented as a string. We cannot feed strings directly to a model. Instead, we must first map them to numeric values. The categorical vocabulary columns provide a way to represent strings as a one-hot vector.\nExercise. Next, we create a categorical feature using ocean_proximity.", "for feature_name in categorical_cols:\n vocabulary = # TODO: Your code goes here\n categorical_c = # TODO: Your code goes here\n one_hot = # TODO: Your code goes here\n \n feature_columns.append(one_hot)", "Bucketized Feature\nOften, you don't want to feed a number directly into the model, but instead split its value into different categories based on numerical ranges. Consider our raw data that represents a home's age. Instead of representing the house age as a numeric column, we could split the home age into several buckets using a bucketized column. 
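As a plain-Python illustration of what bucketizing does (the boundary values here are made up for the example, not the ones the exercise expects):

```python
def bucketize(value, boundaries):
    """Return the index of the bucket that `value` falls into."""
    for i, boundary in enumerate(boundaries):
        if value < boundary:
            return i
    return len(boundaries)

boundaries = [10, 20, 30, 40, 50]        # illustrative age boundaries
bucket = bucketize(34.0, boundaries)
print(bucket)                            # 3: falls in the [30, 40) range

# the bucket index is what gets one-hot encoded downstream
one_hot = [0] * (len(boundaries) + 1)
one_hot[bucket] = 1
print(one_hot)                           # [0, 0, 0, 1, 0, 0]
```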
Notice the one-hot values below describe which age range each row matches.\nNext we create a bucketized column using 'housing_median_age'", "age = fc.numeric_column(\"housing_median_age\")\n\n# Bucketized cols\nage_buckets = # TODO: Your code goes here\n\nfeature_columns.append(age_buckets)", "Feature Cross\nCombining features into a single feature, better known as feature crosses, enables a model to learn separate weights for each combination of features.\nExercise. Next, we create a feature cross of 'housing_median_age' and 'ocean_proximity'.", "vocabulary = housing_df['ocean_proximity'].unique()\nocean_proximity = fc.categorical_column_with_vocabulary_list('ocean_proximity',\n vocabulary)\n\ncrossed_feature = # TODO: Your code goes here\ncrossed_feature = fc.indicator_column(crossed_feature)\n\nfeature_columns.append(crossed_feature)", "Next, we should validate the total number of feature columns. Compare this number to the number of numeric features you input earlier.", "print(\"Total number of feature columns: \", len(feature_columns))", "Next, we will run this cell to compile and fit the Keras Sequential model. This is the same model we ran earlier.", "# Model create\nfeature_layer = tf.keras.layers.DenseFeatures(feature_columns, dtype=\"float64\")\n\nmodel = tf.keras.Sequential(\n [\n feature_layer,\n layers.Dense(12, activation=\"relu\"),\n layers.Dense(8, activation=\"relu\"),\n layers.Dense(1, activation=\"linear\", name=\"median_house_value\"),\n ]\n)\n\n# Model compile\nmodel.compile(optimizer=\"adam\", loss=\"mse\", metrics=[\"mse\"])\n\n# Model Fit\nhistory = model.fit(train_ds, validation_data=val_ds, epochs=32)", "Next, we show loss and mean squared error then plot the model.", "loss, mse = model.evaluate(train_ds)\nprint(\"Mean Squared Error\", mse)\n\nplot_curves(history, [\"loss\", \"mse\"])", "Exercise. Get a prediction from the model. 
Note: You may use the same values from the previous prediction.", "model.predict({\n 'longitude': # TODO: Your code goes here,\n 'latitude': # TODO: Your code goes here,\n 'housing_median_age': # TODO: Your code goes here,\n 'total_rooms': # TODO: Your code goes here,\n 'total_bedrooms': # TODO: Your code goes here,\n 'population': # TODO: Your code goes here,\n 'households': # TODO: Your code goes here,\n 'median_income': # TODO: Your code goes here,\n 'ocean_proximity': # TODO: Your code goes here)\n}, steps=1)", "Analysis\nThe array returns a predicted value. Compare this value to the test set you ran earlier. Your predicted value may be a bit better.\nNow that you have your \"feature engineering template\" set up, you can experiment by creating additional features. For example, you can create derived features, such as households per population, and see how they impact the model. You can also experiment with replacing the features you used to create the feature cross.\nCopyright 2021 Google Inc.\nLicensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at\nhttp://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
tensorflow/gan
tensorflow_gan/examples/esrgan/colab_notebooks/ESRGAN_TPU.ipynb
apache-2.0
[ "ESRGAN with TF-GAN on TPU\nOverview\nThis notebook demonstrates the E2E process of data loading, preprocessing, training and evaluation of the ESRGAN model using TF-GAN on TPUs. To understand the basics of TF-GAN and explore more features of the library, please visit TF-GAN tutorial notebook first. Please visit the Google Cloud Tutorial to learn how to create and make use of a cloud storage bucket. \nLearning Objectives\nThrough this Colab notebook you will learn how to :\n* Implement the ESRGAN model and train it\n* Make use of various TF-GAN functions to visualize and evaluate the results. \nSteps to run this notebook\n\nClick on the following icon to open this notebook in Google Colaboratory. \n\n\n\n\n\nCreate a Cloud Storage bucket for storage : http://console.cloud.google.com/storage.\nNavigate to Runtime &gt; Change runtime type tab \nSelect TPU from hardware accelerator and save\nClick Connect in the upper right corner and select Connect to hosted runtime.\n\nTesting out the TPU connection\nFirst, you'll need to enable TPUs for the notebook.\nNavigate to Edit→Notebook Settings, and select TPU from the Hardware Accelerator drop-down (you can also access Notebook Settings via the command palette: cmd/ctrl-shift-P).\nNext, we'll check that we can connect to the TPU.", "import os\nimport tensorflow.compat.v1 as tf\nimport pprint\nassert 'COLAB_TPU_ADDR' in os.environ, 'Did you forget to switch to TPU?'\ntpu_address = 'grpc://' + os.environ['COLAB_TPU_ADDR']\n\nwith tf.Session(tpu_address) as sess:\n devices = sess.list_devices()\npprint.pprint(devices)\ndevice_is_tpu = [True if 'TPU' in str(x) else False for x in devices]\nassert True in device_is_tpu, 'Did you forget to switch to TPU?'", "Authentication\nTo run on Google's free Cloud TPUs, you must set up a Google Cloud Storage bucket to store dataset and model weights during training. New customers to Google Cloud Platform can get $300 in free credits which can come in handy while running this notebook. 
Please visit the Google Cloud Tutorial to learn how to create and make use of a cloud storage bucket. \nFirst, enter the name of the cloud bucket you have created.\nFor authentication you will be redirected to give Google Cloud SDK access to your cloud bucket. Paste the authentication code in the text box below this cell and proceed.", "import json\nimport os\nimport pprint\nimport re\nimport time\nimport tensorflow.compat.v1 as tf\nimport tensorflow_gcs_config\n\n# Google Cloud Storage bucket for storing the training dataset.\nbucket = '' #@param {type:\"string\"}\n\nassert bucket, 'Must specify an existing GCS bucket name'\nprint('Using bucket: {}'.format(bucket))\n\nassert 'COLAB_TPU_ADDR' in os.environ, 'Missing TPU; did you request a TPU in Notebook Settings?'\ntpu_address = 'grpc://{}'.format(os.environ['COLAB_TPU_ADDR'])\n\nfrom google.colab import auth\nauth.authenticate_user()\n\n# Upload credentials to TPU.\ntf.config.experimental_connect_to_host(tpu_address)\ntensorflow_gcs_config.configure_gcs_from_colab_auth()\n# Now credentials are set for all future sessions on this TPU.", "Check imports", "# Check that imports for the rest of the file work.\nimport os\nimport tensorflow as tf\n!pip install tensorflow-gan\nimport tensorflow_gan as tfgan\nfrom tensorflow.keras import layers\nimport tensorflow_datasets as tfds\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom PIL import Image\n# Allow matplotlib images to render immediately.\n%matplotlib inline", "Training ESRGAN\nThe ESRGAN model proposed in the paper ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks (Wang Xintao et al.) performs the task of image super-resolution, which is the process of reconstructing a high resolution (HR) image from a given low resolution (LR) image. Such a task has numerous applications in today's world. 
The Super-Resolution GAN model was a major breakthrough in this field and was capable of generating photorealistic images; however, the model also generated artifacts that reduced the overall visual quality. To overcome this, the ESRGAN model was proposed with three major changes made to the SRGAN model: \n1. Using Residual-in-Residual Dense Block (RRDB) without batch normalization as basic network building unit \n2. Using an improved method to calculate adversarial loss used in RelativisticGAN\n3. Improving perceptual loss function by using features before activation. \nGo to the visualize results cell to see some of the results obtained.\nDefine Parameters", " Params = {\n 'batch_size' : 32, # Number of image samples used in each training step \n 'hr_dimension' : 256, # Dimension of a High Resolution (HR) Image\n 'scale' : 4, # Factor by which Low Resolution (LR) Images will be downscaled.\n 'data_name': 'div2k/bicubic_x4', # Dataset name - loaded using tfds.\n 'trunk_size' : 11, # Number of Residual blocks used in Generator,\n 'init_lr' : 0.00005, # Initial Learning rate for networks. \n 'ph1_steps' : 10000, # Number of steps required for phase-1 training\n 'ph2_steps' : 100000, # Number of steps required for phase-2 training\n 'decay_ph1' : 0.2, # Factor by which learning rates are modified during phase-1 training \n 'decay_ph2' : 0.5, # Factor by which learning rates are modified during phase-2 training \n 'model_dir' : 'gs://{}/SavedModels' # Path to save the model after training. (inside the cloud bucket)\n .format(bucket),\n 'ckpt_dir' : '/content/ckpts/', # Path to save the training checkpoints. (outside the cloud bucket)\n 'lambda' : 0.005, # To balance adversarial loss during phase-2 training. \n 'eta' : 0.01, # To balance L1 loss during phase-2 training.\n 'val_steps' : 100 # Number of steps required for validation.\n}", "Load Training Dataset\nWe have used the DIV2K dataset which is usually used for benchmarking super resolution models. 
The DIV2K dataset provides various kinds of images, from which we are downloading only the HR images and the corresponding LR images downsampled using bicubic downsampling. All the HR images are scaled to 256 x 256 and the LR images to 64 x 64, as set by 'hr_dimension' and 'scale' in the parameters above.", "dataset_dir = 'gs://{}/{}'.format(bucket, 'datasets')\n\ndef input_fn(mode, params):\n assert 'batch_size' in params\n bs = params['batch_size']\n split = 'train' if mode == 'train' else 'validation'\n shuffle = True \n\n def scale(image, *args):\n hr_size = params['hr_dimension']\n scale = params['scale']\n\n hr_image = image\n hr_image = tf.image.resize(hr_image, [hr_size, hr_size])\n lr_image = tf.image.resize(hr_image, [hr_size//scale, hr_size//scale], method='bicubic')\n \n hr_image = tf.clip_by_value(hr_image, 0, 255)\n lr_image = tf.clip_by_value(lr_image, 0, 255)\n \n return lr_image, hr_image\n\n dataset = (tfds.load(params['data_name'], split=split, data_dir=dataset_dir, as_supervised=True)\n .map(scale, num_parallel_calls=4)\n .cache()\n .repeat())\n if shuffle:\n dataset = dataset.shuffle(\n buffer_size=10000, reshuffle_each_iteration=True)\n dataset = (dataset.batch(bs, drop_remainder=True)\n .prefetch(tf.data.experimental.AUTOTUNE))\n \n return dataset\n\ntrain_ds = input_fn(mode='train', params=Params)", "Visualize the dataset", "img_lr, img_hr = next(iter(train_ds))\nlr = Image.fromarray(np.array(img_lr)[0].astype(np.uint8))\nlr = lr.resize([256, 256])\ndisplay(lr)\n\nhr = Image.fromarray(np.array(img_hr)[0].astype(np.uint8))\nhr = hr.resize([256, 256])\ndisplay(hr)", "Network Architecture\nThe basic network building unit of the ESRGAN is the Residual-in-Residual Block (RRDB) without batch normalization. 
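One detail worth noting before the code: the dense blocks scale each residual branch by 0.2 before adding it back (the `Lambda(lambda x: x * 0.2)` layers in the generator code below), which helps keep training stable in the absence of batch normalization. A small numpy sketch of residual scaling, with a stand-in function for the convolutional branch:

```python
import numpy as np

# residual scaling: block output is x + beta * f(x), with beta = 0.2;
# f is a stand-in for the convolutional branch of a dense block
beta = 0.2

def scaled_residual(x, f):
    return x + beta * f(x)

x = np.ones(4)
out = scaled_residual(x, lambda v: v * 10.0)
print(out)  # [3. 3. 3. 3.]
```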
The network implemented is similar to the architecture proposed in the paper.\nGenerator", "def _conv_block(input, filters, activation=True):\n h = layers.Conv2D(filters, kernel_size=[3,3], \n kernel_initializer=\"he_normal\", bias_initializer=\"zeros\", \n strides=[1,1], padding='same', use_bias=True)(input)\n if activation:\n h = layers.LeakyReLU(0.2)(h)\n return h\n\ndef dense_block(input):\n h1 = _conv_block(input, 32)\n h1 = layers.Concatenate()([input, h1])\n\n h2 = _conv_block(h1, 32)\n h2 = layers.Concatenate()([input, h1, h2])\n\n h3 = _conv_block(h2, 32)\n h3 = layers.Concatenate()([input, h1, h2, h3])\n\n h4 = _conv_block(h3, 32)\n h4 = layers.Concatenate()([input, h1, h2, h3, h4]) \n\n h5 = _conv_block(h4, 32, activation=False)\n \n h5 = layers.Lambda(lambda x: x * 0.2)(h5)\n h = layers.Add()([h5, input])\n \n return h\n\ndef rrdb(input):\n h = dense_block(input)\n h = dense_block(h)\n h = dense_block(h)\n h = layers.Lambda(lambda x:x * 0.2)(h)\n out = layers.Add()([h, input])\n return out\n\ndef upsample(x, filters):\n x = layers.Conv2DTranspose(filters, kernel_size=3, \n strides=2, padding='same', \n use_bias = True)(x)\n x = layers.LeakyReLU(alpha=0.2)(x)\n return x\n\ndef generator_network(filter=32, \n trunk_size=Params['trunk_size'], \n out_channels=3):\n lr_input = layers.Input(shape=(None, None, 3))\n \n x = layers.Conv2D(filter, kernel_size=[3,3], strides=[1,1], \n padding='same', use_bias=True)(lr_input)\n x = layers.LeakyReLU(0.2)(x)\n \n ref = x\n for i in range(trunk_size):\n x = rrdb(x)\n\n x = layers.Conv2D(filter, kernel_size=[3,3], strides=[1,1], \n padding='same', use_bias = True)(x)\n x = layers.Add()([x, ref])\n\n x = upsample(x, filter)\n x = upsample(x, filter)\n \n x = layers.Conv2D(filter, kernel_size=3, strides=1, \n padding='same', use_bias=True)(x)\n x = layers.LeakyReLU(0.2)(x)\n hr_output = layers.Conv2D(out_channels, kernel_size=3, strides=1, \n padding='same', use_bias=True)(x)\n\n model = 
tf.keras.models.Model(inputs=lr_input, outputs=hr_output)\n return model", "Discriminator", "def _conv_block_d(x, out_channel):\n x = layers.Conv2D(out_channel, 3,1, padding='same', use_bias=False)(x)\n x = layers.BatchNormalization(momentum=0.8)(x)\n x = layers.LeakyReLU(alpha=0.2)(x)\n\n x = layers.Conv2D(out_channel, 4,2, padding='same', use_bias=False)(x)\n x = layers.BatchNormalization(momentum=0.8)(x)\n x = layers.LeakyReLU(alpha=0.2)(x)\n return x\n\ndef discriminator_network(filters = 64, training=True):\n img = layers.Input(shape = (Params['hr_dimension'], Params['hr_dimension'], 3))\n \n x = layers.Conv2D(filters, [3,3], 1, padding='same', use_bias=False)(img)\n x = layers.BatchNormalization()(x)\n x = layers.LeakyReLU(alpha=0.2)(x)\n\n x = layers.Conv2D(filters, [3,3], 2, padding='same', use_bias=False)(x)\n x = layers.BatchNormalization()(x)\n x = layers.LeakyReLU(alpha=0.2)(x)\n\n x = _conv_block_d(x, filters *2)\n x = _conv_block_d(x, filters *4)\n x = _conv_block_d(x, filters *8)\n \n x = layers.Flatten()(x)\n x = layers.Dense(100)(x)\n x = layers.LeakyReLU(alpha=0.2)(x)\n x = layers.Dense(1)(x)\n\n model = tf.keras.models.Model(inputs = img, outputs = x)\n return model", "Loss Functions\nThe ESRGAN model makes use of three loss functions - pixel loss, perceptual loss (vgg_loss) and adversarial loss. Perceptual loss is calculated using the pre-trained VGG-19 network. Adversarial loss for the model is calculated using relativistic average loss as discussed in the paper. The relativistic_generator_loss and relativistic_discriminator_loss, pre-defined in TF-GAN losses are used for calculating generator and discriminator losses respectively. 
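To build intuition for the relativistic average formulation, here is a numpy sketch of the discriminator side: instead of asking "is this image real?", the discriminator asks "is this real image more realistic than the average fake?". This is illustrative only; the notebook itself uses the TF-GAN implementations.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def relativistic_avg_d_loss(real_logits, fake_logits):
    # each real logit is compared against the mean fake logit, and vice versa
    d_real = sigmoid(real_logits - fake_logits.mean())
    d_fake = sigmoid(fake_logits - real_logits.mean())
    return -np.mean(np.log(d_real)) - np.mean(np.log(1.0 - d_fake))

real = np.array([5.0, 6.0])    # discriminator logits on real images
fake = np.array([-5.0, -6.0])  # discriminator logits on generated images
# loss is near zero when D cleanly separates real from fake
print(relativistic_avg_d_loss(real, fake) < 0.1)  # True
```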
\nThese loss functions ensure a balance between visual quality and metrics such as PSNR, and encourage the generator to produce more realistic images with natural textures.", "def pixel_loss(y_true, y_pred):\n y_true = tf.cast(y_true, tf.float32)\n y_pred = tf.cast(y_pred, tf.float32)\n return tf.reduce_mean(tf.reduce_mean(tf.abs(y_true - y_pred), axis = 0))\n\n# Function for calculating perceptual loss\ndef vgg_loss(weight=None, input_shape=None):\n vgg_model = tf.keras.applications.vgg19.VGG19(\n input_shape=input_shape, weights=weight, include_top=False\n )\n\n for layer in vgg_model.layers:\n layer.trainable = False\n\n vgg_model.get_layer(\"block5_conv4\").activation = lambda x: x\n vgg = tf.keras.Model(\n inputs=[vgg_model.input],\n outputs=[vgg_model.get_layer(\"block5_conv4\").output])\n\n def loss(y_true, y_pred):\n return tf.compat.v1.losses.absolute_difference(vgg(y_true), vgg(y_pred))\n\n return loss", "Training\nThe ESRGAN model is trained in two phases; the first phase trains the generator network individually and aims at improving the PSNR values of generated images by reducing the L1 loss. \nIf starting from scratch, phase-1 training can be completed within an hour on a free Colab TPU, whereas phase-2 can take around 2-3 hours to get good results. As a result, saving the weights/checkpoints is an important step during training. \nTraining of the same generator model is continued in the second phase along with the discriminator network. 
In the second phase, the generator minimizes the L1 loss, the Relativistic average GAN (RaGAN) loss, which indicates how realistic the generated image looks, and the improved perceptual loss proposed in the paper.", "# To display images in the order : LR Image -> Generated Image -> HR Image\ndef visualize_results(image_lr, generated, image_hr):\n size = 128\n resized_lr = tf.image.resize(image_lr, [size, size], method=tf.image.ResizeMethod.BILINEAR)\n resized_gen = tf.image.resize(generated, [size, size], method=tf.image.ResizeMethod.BILINEAR)\n resized_hr = tf.image.resize(image_hr, [size, size], method=tf.image.ResizeMethod.BILINEAR)\n\n stack = tf.stack([resized_lr[0], resized_gen[0], resized_hr[0]])\n\n image_grid = tfgan.eval.python_image_grid(stack, grid_shape=(1, 3))\n result = Image.fromarray(image_grid.astype(np.uint8))\n return result\n\n# Define the TPU strategy\ntpu = tf.distribute.cluster_resolver.TPUClusterResolver() \ntf.config.experimental_connect_to_cluster(tpu)\ntf.tpu.experimental.initialize_tpu_system(tpu)\nstrategy = tf.distribute.experimental.TPUStrategy(tpu)\n\ntrain_ds = iter(strategy.experimental_distribute_dataset(train_ds))", "Phase - 1 Training\nSteps Involved:\n\nDefine the generator and its optimizer. 
\nTake LR, HR image pairs from the training dataset\nInput the LR image to the generator network\nCalculate the L1 loss using the generated image and HR image\nCalculate the gradient value and apply it to the optimizer\nUpdate the learning rate of the optimizer after every decay step for better performance", "with strategy.scope():\n metric = tf.keras.metrics.Mean()\n psnr_metric = tf.keras.metrics.Mean()\n\n generator = generator_network()\n\n g_optimizer = tf.optimizers.Adam(\n learning_rate = 0.0002,\n beta_1 = 0.9,\n beta_2 = 0.99\n )\n\n@tf.function\ndef train_step(image_lr, image_hr):\n with tf.GradientTape() as tape:\n fake = generator(image_lr)\n loss = pixel_loss(image_hr, fake) * (1.0 / Params['batch_size'])\n psnr_value = tf.image.psnr(fake, image_hr, max_val = 256.0)\n \n metric(loss)\n\n gradient = tape.gradient(loss, generator.trainable_variables)\n g_optimizer.apply_gradients(zip(gradient, generator.trainable_variables))\n\n return psnr_value\n\ndef val_steps(image_lr, image_hr):\n fake = generator(image_lr)\n result = visualize_results(image_lr, fake, image_hr)\n display(result)\n\nstep_count = 0\nwhile step_count < Params['ph1_steps']:\n lr, hr = next(train_ds)\n psnr_loss = strategy.run(train_step, args = (lr, hr))\n loss = strategy.reduce(tf.distribute.ReduceOp.MEAN, psnr_loss, axis=None)\n psnr_metric(loss)\n \n if step_count%1000 == 0:\n lr = np.array(lr.values)[0]\n hr = np.array(hr.values)[0]\n print(\"step {} PSNR = {}\".format(step_count, psnr_metric.result()))\n val_steps(lr, hr) \n \n if step_count%5000 == 0:\n g_optimizer.learning_rate.assign(\n g_optimizer.learning_rate * Params['decay_ph1']) \n step_count+=1\n\n# Save the generator network which is then used for phase-2 training\nos.makedirs(Params['model_dir'] + '/Phase_1/generator', exist_ok = True)\ngenerator.save(Params['model_dir'] + '/Phase_1/generator')", "Phase - 2\nDefine optimizers and load networks\n\nThe generator network trained in Phase 1 is loaded.\nCheckpoints are also defined 
which can be useful during training.", "with strategy.scope():\n generator = tf.keras.models.load_model(Params['model_dir'] + '/Phase_1/generator/')\n discriminator = discriminator_network()\n\n # Separate optimizer instances so the two learning rates can decay independently\n g_optimizer = tf.optimizers.Adam(\n learning_rate = 0.00005,\n beta_1 = 0.9,\n beta_2 = 0.99\n )\n d_optimizer = tf.optimizers.Adam(\n learning_rate = 0.0002,\n beta_1 = 0.9,\n beta_2 = 0.99\n )\n\n checkpoint = tf.train.Checkpoint(G=generator,\n D = discriminator,\n G_optimizer=g_optimizer,\n D_optimizer=d_optimizer)\n local_device_option = tf.train.CheckpointOptions(experimental_io_device=\"/job:localhost\")", "Load VGG weights\nThe VGG-19 network pretrained on ImageNet is loaded for calculating the perceptual loss.", "with strategy.scope():\n perceptual_loss = vgg_loss(\n weight = \"imagenet\",\n input_shape = [Params['hr_dimension'], Params['hr_dimension'], 3])\n\nwith strategy.scope():\n gen_metric = tf.keras.metrics.Mean()\n disc_metric = tf.keras.metrics.Mean()\n psnr_metric = tf.keras.metrics.Mean()", "Training step\n\nInput the LR image to the generator network\nCalculate the L1 loss, perceptual loss and adversarial loss for both the generator and the discriminator.\nUpdate the optimizers for both networks using the obtained gradient values\nUpdate the learning rate of the optimizers after every decay step for better performance\nTF-GAN's image grid function is used to display the generated images in the validation steps.", "@tf.function\ndef train_step(image_lr, image_hr):\n with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:\n fake = generator(image_lr)\n \n percep_loss = tf.reduce_mean(perceptual_loss(image_hr, fake))\n l1_loss = pixel_loss(image_hr, fake) \n \n real_logits = discriminator(image_hr) \n fake_logits = discriminator(fake) \n \n loss_RaG = tfgan.losses.losses_impl.relativistic_generator_loss(real_logits,\n fake_logits) \n disc_loss = tfgan.losses.losses_impl.relativistic_discriminator_loss(real_logits,\n fake_logits) \n\n gen_loss = percep_loss + 
Params['lambda'] * loss_RaG + Params['eta'] * l1_loss\n\n gen_loss = gen_loss / Params['batch_size']\n disc_loss = disc_loss / Params['batch_size'] \n psnr_loss = tf.image.psnr(fake, image_hr, max_val = 256.0)\n \n disc_metric(disc_loss) \n gen_metric(gen_loss)\n psnr_metric(psnr_loss)\n \n disc_grad = disc_tape.gradient(disc_loss, discriminator.trainable_variables)\n d_optimizer.apply_gradients(zip(disc_grad, discriminator.trainable_variables))\n\n gen_grad = gen_tape.gradient(gen_loss, generator.trainable_variables) \n g_optimizer.apply_gradients(zip(gen_grad, generator.trainable_variables))\n \n return [disc_loss, gen_loss, psnr_loss]\n\ndef val_step(image_lr, image_hr):\n fake = generator(image_lr)\n result = visualize_results(image_lr, fake, image_hr)\n display(result)\n\nstep_count = 0\ndecay_step = [9000, 30000, 50000]\n\n# Restore the latest checkpoint once, before the loop; restoring on every\n# iteration would discard the progress made since the last save\nif tf.train.latest_checkpoint(Params['ckpt_dir']): \n checkpoint.restore(tf.train.latest_checkpoint(Params['ckpt_dir']))\n\nwhile step_count < Params['ph2_steps']:\n lr, hr = next(train_ds)\n \n disc_loss, gen_loss, psnr_loss = strategy.run(train_step, args = (lr, hr))\n \n if step_count % 1000 == 0:\n print(\"step {}\".format(step_count) + \" Generator Loss = {} \".format(gen_metric.result()) + \n \"Disc Loss = {}\".format(disc_metric.result()) + \" PSNR : {}\".format(psnr_metric.result()))\n \n lr = np.array(lr.values)[0]\n hr = np.array(hr.values)[0]\n val_step(lr, hr)\n \n checkpoint.write(Params['ckpt_dir'], options=local_device_option)\n \n if step_count >= decay_step[0]:\n decay_step.pop(0)\n g_optimizer.learning_rate.assign(\n g_optimizer.learning_rate * Params['decay_ph2'])\n \n d_optimizer.learning_rate.assign(\n d_optimizer.learning_rate * Params['decay_ph2'])\n \n step_count+=1\n\nos.makedirs(Params['model_dir'] + '/Phase_2/generator', exist_ok = True)\nos.makedirs(Params['model_dir'] + '/Phase_2/discriminator', exist_ok = True)\n\ngenerator.save(Params['model_dir'] + 
'/Phase_2/generator')\ndiscriminator.save(Params['model_dir'] + '/Phase_2/discriminator')", "Network Interpolation", "def network_interpolation(alpha=0.2,\n phase_1_path=None,\n phase_2_path=None):\n psnr_gen = tf.keras.models.load_model(phase_1_path)\n gan_gen = tf.keras.models.load_model(phase_2_path)\n\n for var_1, var_2 in zip(gan_gen.trainable_variables, \n psnr_gen.trainable_variables):\n var_1.assign((1 - alpha) * var_2 + alpha * var_1)\n\n return gan_gen\n\ngenerator = network_interpolation(phase_1_path = Params['model_dir'] + '/Phase_1/generator',\n phase_2_path = Params['model_dir'] + '/Phase_2/generator')\ngenerator.save(Params['model_dir'] + '/InterpolatedGenerator/')", "Evaluation", "val_ds = input_fn(mode='validation', params=Params)", "Visualize Generated Images", "def val_steps(image_lr, image_hr):\n fake = generator(image_lr)\n result = visualize_results(image_lr, fake, image_hr)\n display(result)\n\nfor i in range(3):\n lr, hr = next(iter(val_ds))\n val_steps(lr, hr) ", "FID and Inception Score are two common metrics used to evaluate the performance of a GAN model, while the PSNR value quantifies the similarity between two images and is used for benchmarking super-resolution models.", "@tf.function\ndef get_fid_score(real_image, gen_image):\n size = tfgan.eval.INCEPTION_DEFAULT_IMAGE_SIZE\n\n resized_real_images = tf.image.resize(real_image, [size, size], method=tf.image.ResizeMethod.BILINEAR)\n resized_generated_images = tf.image.resize(gen_image, [size, size], method=tf.image.ResizeMethod.BILINEAR)\n \n num_inception_images = 1\n num_batches = Params['batch_size'] // num_inception_images\n \n fid = tfgan.eval.frechet_inception_distance(resized_real_images, resized_generated_images, num_batches=num_batches)\n return fid\n \n@tf.function\ndef get_inception_score(images, gen, num_inception_images = 8):\n size = tfgan.eval.INCEPTION_DEFAULT_IMAGE_SIZE\n resized_images = tf.image.resize(images, [size, size], 
method=tf.image.ResizeMethod.BILINEAR)\n\n num_batches = Params['batch_size'] // num_inception_images\n inc_score = tfgan.eval.inception_score(resized_images, num_batches=num_batches)\n\n return inc_score\n\nwith strategy.scope():\n generator = tf.keras.models.load_model(Params['model_dir'] + '/InterpolatedGenerator')\n\n fid_metric = tf.keras.metrics.Mean()\n inc_metric = tf.keras.metrics.Mean()\n psnr_metric = tf.keras.metrics.Mean()\n\ni = 0\nwhile i < Params['val_steps']: \n lr, hr = next(iter(val_ds))\n\n gen = generator(lr)\n \n fid = strategy.run(get_fid_score, args = (hr, gen))\n real_is = strategy.run(get_inception_score, args=(hr, gen))\n gen_is = strategy.run(get_inception_score, args=(gen, hr))\n val_steps(lr, hr) \n\n fid_metric(fid)\n inc_metric(gen_is)\n psnr_metric(tf.reduce_mean(tf.image.psnr(gen, hr, max_val = 256.0)))\n\n i += 1" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
Salman-H/mars-search-robot
.ipynb_checkpoints/Rover_Project_Test_Notebook-checkpoint.ipynb
bsd-2-clause
[ "Rover Project Test Notebook\nThis notebook contains the functions that provide the scaffolding needed to test out mapping methods. The following steps are taken to test functions and calibrate data for the project:\n\nThe simulator is run in \"Training Mode\" and some data is recorded. Note: the simulator may crash if a large (longer than a few minutes) dataset is recorded; only a small data sample is required i.e. just some example images to work with. \nThe functions are tested with the data.\nFunctions are written and modified to report and map out detections of obstacles and rock samples (yellow rocks).\nprocess_image() function is populated with the appropriate steps/functions to go from a raw image to a worldmap.\nmoviepy functions are used to construct a video output from processed image data.\nOnce it is confirmed that mapping is working, perception.py and decision.py are modified to allow the rover to navigate and map in autonomous mode!\n\nNote: If, at any point, display windows freeze up or other confounding issues are encountered, Kernel should be restarted and output cleared from the \"Kernel\" menu above.\nUncomment and run the next cell to get code highlighting in the markdown cells.", "#%%HTML\n#<style> code {background-color : orange !important;} </style>\n\n%matplotlib inline\n#%matplotlib qt # Choose %matplotlib qt to plot to an interactive window\n\nimport cv2 # OpenCV for perspective transform\nimport numpy as np\nimport matplotlib.image as mpimg\nimport matplotlib.pyplot as plt\nimport scipy.misc # For saving images as needed\nimport glob # For reading in a list of images from a folder", "Quick Look at the Data\nThere's some example data provided in the test_dataset folder. This basic dataset is enough to get you up and running but if you want to hone your methods more carefully you should record some data of your own to sample various scenarios in the simulator. 
\nNext, read in and display a random image from the test_dataset folder", "path = '../test_dataset/IMG/*'\nimg_list = glob.glob(path)\n\n# Grab a random image and display it\nidx = np.random.randint(0, len(img_list)-1)\nimage = mpimg.imread(img_list[idx])\nplt.imshow(image)", "Calibration Data\nRead in and display example grid and rock sample calibration images. The grid is used for perspective transform and the rock image for creating a new color selection that identifies these samples of interest.", "# In the simulator the grid on the ground can be toggled on for calibration.\n# The rock samples can be toggled on with the 0 (zero) key. \n# Here's an example of the grid and one of the rocks\nexample_grid = '../calibration_images/example_grid1.jpg'\nexample_rock = '../calibration_images/example_rock1.jpg'\nexample_rock2 = '../calibration_images/example_rock2.jpg'\ngrid_img = mpimg.imread(example_grid)\nrock_img = mpimg.imread(example_rock)\nrock_img2 = mpimg.imread(example_rock2)\n\nfig = plt.figure(figsize=(12,3))\nplt.subplot(131)\nplt.imshow(grid_img)\nplt.subplot(132)\nplt.imshow(rock_img)\nplt.subplot(133)\nplt.imshow(rock_img2)", "Perspective Transform\nDefine the perspective transform function and test it on an image. \nFour source points are selected which represent a 1 square meter grid in the image viewed from the rover's front camera. These source points are subsequently mapped to four corresponding grid cell points in our \"warped\" image such that a grid cell in it is 10x10 pixels viewed from top-down. Thus, the front_cam image is said to be warped into a top-down view image by the perspective transformation. The example grid image above is used to choose source points for the grid cell which is in front of the rover (each grid cell is 1 square meter in the sim). 
The source and destination points are defined to warp the image to a grid where each 10x10 pixel square represents 1 square meter.\nThe following steps are used to warp an image using a perspective transform:\n\nDefine 4 source points, in this case, the 4 corners of a grid cell in the front camera image above.\nDefine 4 destination points (must be listed in the same order as source points!).\nUse cv2.getPerspectiveTransform() to get M, the transform matrix.\nUse cv2.warpPerspective() to apply M and warp front camera image to a top-down view.\n\nRefer to the following documentation for geometric transformations in OpenCV:\nhttp://docs.opencv.org/trunk/da/d6e/tutorial_py_geometric_transformations.html", "def perspect_transform(input_img, sourc_pts, destn_pts):\n \"\"\"\n Apply a perspective transformation to input 3D image.\n\n Keyword arguments:\n input_img -- 3D numpy image on which perspective transform is applied\n sourc_pts -- numpy array of four source coordinates on input 3D image\n destn_pts -- corresponding destination coordinates on output 2D image\n\n Return value:\n output_img -- 2D numpy image with overhead view\n\n \"\"\"\n transform_matrix = cv2.getPerspectiveTransform(\n sourc_pts,\n destn_pts\n )\n output_img = cv2.warpPerspective(\n input_img,\n transform_matrix,\n (input_img.shape[1], input_img.shape[0]) # keep same size as input_img\n )\n return output_img\n\n\n# Define calibration box in source (actual) and destination (desired)\n# coordinates to warp the image to a grid where each 10x10 pixel square\n# represents 1 square meter and the destination box will be 2*dst_size\n# on each side\ndst_size = 5\n\n# Set a bottom offset (rough estimate) to account for the fact that the\n# bottom of the image is not the position of the rover but a bit in front\n# of it\nbottom_offset = 6\n\nsource = np.float32(\n [[14, 140],\n [301, 140],\n [200, 96],\n [118, 96]]\n)\ndestination = np.float32(\n [\n [image.shape[1]/2 - dst_size,\n image.shape[0] - 
bottom_offset],\n\n [image.shape[1]/2 + dst_size,\n image.shape[0] - bottom_offset],\n\n [image.shape[1]/2 + dst_size,\n image.shape[0] - 2*dst_size - bottom_offset],\n\n [image.shape[1]/2 - dst_size,\n image.shape[0] - 2*dst_size - bottom_offset]\n ]\n)\nwarped = perspect_transform(grid_img, source, destination)\nplt.imshow(warped)\n# scipy.misc.imsave('../output/warped_example.jpg', warped)\n\nwarped_rock = perspect_transform(rock_img, source, destination)\nwarped_rock2 = perspect_transform(rock_img2, source, destination)\n\nfig = plt.figure(figsize=(16,7))\n\nplt.subplot(221)\nplt.imshow(rock_img)\n\nplt.subplot(222)\nplt.imshow(rock_img2)\n\nplt.subplot(223)\nplt.imshow(warped_rock)\n\nplt.subplot(224)\nplt.imshow(warped_rock2)\n\nrock1_pixels = np.copy(rock_img)\nplt.imshow(rock1_pixels[90:112,150:172])", "Color Thresholding\nDefine the color thresholding function for navigable terrain and apply it to the warped image.\nUltimately, the map not only includes navigable terrain but also obstacles and the positions of the rock samples we're searching for. New functions are needed to return the pixel locations of obstacles (areas below the threshold) and rock samples (yellow rocks in calibration images), such that these areas can be mapped into world coordinates as well. 
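To make the above/below-threshold idea concrete before applying it to camera frames, here is a tiny worked example on a hand-made 2x2 RGB array (illustrative values only, not taken from the calibration images):

```python
import numpy as np

def above_thresh(img, t=(160, 160, 160)):
    # True only where all three channels exceed the threshold (navigable terrain)
    return ((img[:, :, 0] > t[0]) &
            (img[:, :, 1] > t[1]) &
            (img[:, :, 2] > t[2])).astype(np.uint8)

# 2x2 toy image: bright sand, dark rock, barely-above, exactly-at-threshold
img = np.array([[[200, 200, 200], [100, 120,  90]],
                [[161, 161, 161], [160, 160, 160]]], dtype=np.uint8)

nav = above_thresh(img)
obs = 1 - nav  # everything not navigable is treated as obstacle in this sketch
print(nav.tolist())  # [[1, 0], [1, 0]]
```

Note the strict `>` comparison: a pixel exactly at the threshold is classified as obstacle, so the two masks always partition the image.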
\nColor thresholding for navigable terrain", "def color_thresh_nav(input_img, rgb_thresh=(160, 160, 160)):\n \"\"\"\n Apply a color threshold to extract only ground terrain pixels.\n\n Keyword arguments:\n input_img -- numpy image on which RGB threshold is applied\n rgb_thresh -- RGB thresh tuple above which only ground pixels are detected\n\n Return value:\n nav_img -- binary image identifying ground/navigable terrain pixels\n\n \"\"\"\n # Create an array of zeros same xy size as input_img, but single channel\n nav_img = np.zeros_like(input_img[:, :, 0])\n\n # Require that each of the R(0), G(1), B(2) pixels be above all three\n # rgb_thresh values such that pix_above_thresh will now contain a\n # boolean array with \"True\" where threshold was met\n pix_above_thresh = (\n (input_img[:, :, 0] > rgb_thresh[0]) &\n (input_img[:, :, 1] > rgb_thresh[1]) &\n (input_img[:, :, 2] > rgb_thresh[2])\n )\n # Index the array of zeros with the boolean array and set to 1 (white)\n # those pixels that are above rgb_thresh for ground/navigable terrain\n nav_img[pix_above_thresh] = 1\n\n # nav_img will now contain white pixels identifying navigable terrain\n return nav_img\n\n\nthreshed = color_thresh_nav(warped)\nplt.imshow(threshed, cmap='gray')\n#scipy.misc.imsave('../output/warped_threshed.jpg', threshed*255)", "Color thresholding for obstacle terrain", "def color_thresh_obst(input_img, rgb_thresh=(160, 160, 160)):\n \"\"\"\n Apply a color threshold to extract only mountain rock pixels.\n\n Keyword arguments:\n input_img -- numpy image on which RGB threshold is applied\n rgb_thresh -- RGB thresh tuple below which only obstacle pixels are detected\n\n Return value:\n obs_img -- binary image identifying rocks/obstacles terrain pixels\n\n \"\"\"\n\n # Create an array of zeros same xy size as input_img, but single channel\n obs_img = np.zeros_like(input_img[:, :, 0])\n\n # Require that each of the R(0), G(1), B(2) pixels be below all three\n # rgb_thresh values such that 
pix_below_thresh will now contain a\n # boolean array with \"True\" where threshold was met\n pix_below_thresh = (\n (np.logical_and(input_img[:, :, 0] > 0, input_img[:, :, 0] <= rgb_thresh[0])) & \n (np.logical_and(input_img[:, :, 1] > 0, input_img[:, :, 1] <= rgb_thresh[1])) & \n (np.logical_and(input_img[:, :, 2] > 0, input_img[:, :, 2] <= rgb_thresh[2]))\n )\n \n # Index the array of zeros with the boolean array and set to 1 (white)\n # those pixels that are below rgb_thresh for rocks/obstacles terrain\n obs_img[pix_below_thresh] = 1\n\n # obs_img will now contain white pixels identifying obstacle terrain\n return obs_img\n\n\nthreshed_obstacles_image = color_thresh_obst(warped)\nplt.imshow(threshed_obstacles_image, cmap='gray')", "Color thresholding for gold rocks", "def color_thresh_rock(input_img, low_bound, upp_bound):\n \"\"\"\n Apply a color threshold using OpenCV to extract pixels for gold rocks.\n\n Keyword arguments:\n input_img -- numpy image on which OpenCV HSV threshold is applied\n low_bound -- tuple defining lower HSV color value for gold rocks\n upp_bound -- tuple defining upper HSV color value for gold rocks\n\n Return value:\n threshed_img -- binary image identifying gold rock pixels\n\n \"\"\"\n\n # Convert BGR to HSV\n hsv_img = cv2.cvtColor(input_img, cv2.COLOR_BGR2HSV)\n\n # Threshold the HSV image to get only colors for gold rocks\n threshed_img = cv2.inRange(hsv_img, low_bound, upp_bound)\n\n return threshed_img\n\n\n# define range of gold rock color in HSV\nlower_bound = (75, 130, 130)\nupper_bound = (255, 255, 255)\n\n# apply rock color threshold to original rocks 1 and 2 images\nthreshed_rock_image = color_thresh_rock(\n rock_img,\n lower_bound,\n upper_bound\n)\nthreshed_rock2_image = color_thresh_rock(\n rock_img2,\n lower_bound,\n upper_bound\n)\n\n# apply rock color threshold to 
warped rocks 1 and 2 images\nthreshed_warped_rock_image = color_thresh_rock(\n warped_rock,\n lower_bound,\n upper_bound\n)\nthreshed_warped_rock2_image = color_thresh_rock(\n warped_rock2,\n lower_bound,\n upper_bound\n)\n\n# verify correctness of gold rock threshold\nfig = plt.figure(figsize=(20,11))\n\nplt.subplot(421)\nplt.imshow(rock_img)\n\nplt.subplot(422)\nplt.imshow(threshed_rock_image, cmap='gray')\n\nplt.subplot(423)\nplt.imshow(warped_rock)\n\nplt.subplot(424)\nplt.imshow(threshed_warped_rock_image, cmap='gray')\n\nplt.subplot(425)\nplt.imshow(rock_img2)\n\nplt.subplot(426)\nplt.imshow(threshed_rock2_image, cmap='gray')\n\nplt.subplot(427)\nplt.imshow(warped_rock2)\n\nplt.subplot(428)\nplt.imshow(threshed_warped_rock2_image, cmap='gray')", "Coordinate Transformations\nDefine the functions used to do coordinate transforms and apply them to an image.", "def to_rover_coords(binary_img):\n \"\"\"Convert all points on img coord-frame to those on rover's frame.\"\"\"\n # Identify nonzero pixels in binary image representing\n # region of interest e.g. 
rocks\n ypos, xpos = binary_img.nonzero()\n\n # Calculate pixel positions with reference to rover's coordinate\n # frame given that rover front cam itself is at center bottom of\n # the photographed image.\n x_pixel = -(ypos - binary_img.shape[0]).astype(np.float)\n y_pixel = -(xpos - binary_img.shape[1]/2).astype(np.float)\n return x_pixel, y_pixel\n\n\ndef to_polar_coords(x_pix, y_pix):\n \"\"\"Convert cartesian coordinates to polar coordinates.\"\"\"\n # compute distance and angle of 'each' pixel from origin and\n # vertical respectively\n distances = np.sqrt(x_pix**2 + y_pix**2)\n angles = np.arctan2(y_pix, x_pix)\n return distances, angles\n\n\ndef rotate_pix(x_pix, y_pix, angle):\n \"\"\"Apply a geometric rotation.\"\"\"\n angle_rad = angle * np.pi / 180 # yaw to radians\n x_pix_rotated = (x_pix * np.cos(angle_rad)) - (y_pix * np.sin(angle_rad))\n y_pix_rotated = (x_pix * np.sin(angle_rad)) + (y_pix * np.cos(angle_rad))\n return x_pix_rotated, y_pix_rotated\n\n\ndef translate_pix(x_pix_rot, y_pix_rot, x_pos, y_pos, scale): \n \"\"\"Apply a geometric translation and scaling.\"\"\"\n x_pix_translated = (x_pix_rot / scale) + x_pos\n y_pix_translated = (y_pix_rot / scale) + y_pos\n return x_pix_translated, y_pix_translated\n\n\ndef pix_to_world(x_pix, y_pix, x_pos, y_pos, yaw, world_size, scale):\n \"\"\"\n Apply a geometric transformation i.e. 
rotation and translation to ROI.\n\n Keyword arguments:\n x_pix, y_pix -- numpy array coords of ROI being converted to world frame\n x_pos, y_pos, yaw -- rover position and yaw angle in world frame\n world_size -- integer length of the square world map (200 x 200 pixels)\n scale -- scale factor between world frame pixels and rover frame pixels\n\n Note:\n Requires functions rotate_pix and translate_pix to work\n\n \"\"\"\n # Apply rotation and translation\n x_pix_rot, y_pix_rot = rotate_pix(\n x_pix,\n y_pix,\n yaw\n )\n x_pix_tran, y_pix_tran = translate_pix(\n x_pix_rot,\n y_pix_rot,\n x_pos,\n y_pos,\n scale\n )\n # Clip pixels to be within world_size\n x_pix_world = np.clip(np.int_(x_pix_tran), 0, world_size - 1)\n y_pix_world = np.clip(np.int_(y_pix_tran), 0, world_size - 1)\n\n return x_pix_world, y_pix_world\n\n\n# Grab another random image\nidx = np.random.randint(0, len(img_list)-1)\nimage = mpimg.imread(img_list[idx])\nwarped = perspect_transform(image, source, destination)\nthreshed = color_thresh_nav(warped)\n\n# Calculate pixel values in rover-centric coords and \n# distance/angle to all pixels\nxpix, ypix = to_rover_coords(threshed)\ndist, angles = to_polar_coords(xpix, ypix)\nmean_dir = np.mean(angles)\n\n######## TESTING ############\nxpix = xpix[dist < 130]\nypix = ypix[dist < 130]\n\n# Do some plotting\nfig = plt.figure(figsize=(12,9))\nplt.subplot(221)\nplt.imshow(image)\nplt.subplot(222)\nplt.imshow(warped)\nplt.subplot(223)\nplt.imshow(threshed, cmap='gray')\nplt.subplot(224)\nplt.plot(xpix, ypix, '.')\nplt.ylim(-160, 160)\nplt.xlim(0, 160)\narrow_length = 100\nx_arrow = arrow_length * np.cos(mean_dir)\ny_arrow = arrow_length * np.sin(mean_dir)\nplt.arrow(0, 0, x_arrow, y_arrow, color='red', zorder=2, head_width=10, width=2)\n", "Testing left and right nav angles", "x_nav_test_pix, y_nav_test_pix = to_rover_coords(threshed)\nnav_test_dists, nav_test_angles = to_polar_coords(x_nav_test_pix, y_nav_test_pix)\nmean_test_angle = 
np.mean(nav_test_angles)\n\n# separate nav_test_angles into left and right angles\nnav_test_left_angles = nav_test_angles[nav_test_angles > 0]\nmean_test_left_angle = np.mean(nav_test_left_angles)\n\nnav_test_right_angles = nav_test_angles[nav_test_angles < 0]\nmean_test_right_angle = np.mean(nav_test_right_angles)\n\nprint('nav_test_angles:')\nprint(nav_test_angles)\nprint('amount: ', len(nav_test_angles))\nprint('mean:', mean_test_angle * 180 / np.pi)\nprint('')\nprint('nav_test_left_angles:')\nprint(nav_test_left_angles)\nprint('amount: ', len(nav_test_left_angles))\nprint('mean:', mean_test_left_angle * 180 / np.pi)\nprint('')\nprint('nav_test_right_angles:')\nprint(nav_test_right_angles)\nprint('amount: ', len(nav_test_right_angles))\nprint('mean:', mean_test_right_angle * 180 / np.pi)\nprint('')\n\n#### do some plotting ######\nfig = plt.figure(figsize=(12,9))\nplt.plot(x_nav_test_pix, y_nav_test_pix, '.')\nplt.ylim(-160, 160)\nplt.xlim(0, 180)\narrow_length = 150\n\n# main test angle\nx_mean_test_angle = arrow_length * np.cos(mean_test_angle)\ny_mean_test_angle = arrow_length * np.sin(mean_test_angle)\nplt.arrow(0, 0, x_mean_test_angle, y_mean_test_angle, color='red', zorder=2, head_width=10, width=2)\n\n# main left test angle\nx_mean_test_left_angle = arrow_length * np.cos(mean_test_left_angle)\ny_mean_test_left_angle = arrow_length * np.sin(mean_test_left_angle)\nplt.arrow(0, 0, x_mean_test_left_angle, y_mean_test_left_angle, color='yellow', zorder=2, head_width=10, width=2)\n\n# main right test angle\nx_mean_test_right_angle = arrow_length * np.cos(mean_test_right_angle)\ny_mean_test_right_angle = arrow_length * np.sin(mean_test_right_angle)\nplt.arrow(0, 0, x_mean_test_right_angle, y_mean_test_right_angle, color='blue', zorder=2, head_width=10, width=2)\n\n\n", "Testing Image Pixels for Improving Fidelity", "nav_x_pixs, nav_y_pixs = to_rover_coords(threshed)\nnav_dists, nav_angles = to_polar_coords(nav_x_pixs, 
nav_y_pixs)\n\nprint('nav_x_pixs:')\nprint(nav_x_pixs)\nprint(nav_x_pixs.shape)\nprint('')\nprint('nav_y_pixs:')\nprint(nav_y_pixs)\nprint(nav_y_pixs.shape)\nprint('')\nprint('nav_dists:')\nprint('len(nav_dists):', len(nav_dists))\nprint(nav_dists[:4])\nprint('mean:', np.mean(nav_dists))\nprint('shape:', nav_dists.shape)\nprint('')\n\n# remove some pixels that are farthest away\n#indexes_to_remove = []\n#trim_nav_x_pixs = np.delete(nav_x_pixs, x )\n\ntrim_nav_x_pixs = nav_x_pixs[nav_dists < 120]\nprint('trim_nav_x_pixs')\nprint(trim_nav_x_pixs)\n\ntrim_nav_y_pixs = nav_y_pixs[nav_dists < 120]\nprint('trim_nav_y_pixs')\nprint(trim_nav_y_pixs)", "Read in saved data and ground truth map of the world\nThe next cell is all setup to read data saved from rover sensors into a pandas dataframe. Here we'll also read in a \"ground truth\" map of the world, where white pixels (pixel value = 1) represent navigable terrain. \nAfter that, we'll define a class to store telemetry data and pathnames to images. When the class (data = SensorData()) is instantiated, we'll have a global variable called data that can be referenced for telemetry and to map data within the process_image() function in the following cell.", "import pandas as pd\n\n# Change the path below to your data directory\n# If you are in a locale (e.g., Europe) that uses ',' as the decimal separator\n# change the '.' 
to ','\n\n# Read in csv log file as dataframe\ndf = pd.read_csv('../test_dataset_2/robot_log.csv', delimiter=';', decimal='.')\ncsv_img_list = df[\"Path\"].tolist() # Create list of image pathnames\n\n# Read in ground truth map and create a 3-channel image with it\nground_truth = mpimg.imread('../calibration_images/map_bw.png')\nground_truth_3d = np.dstack(\n (ground_truth*0,\n ground_truth*255,\n ground_truth*0)\n).astype(np.float)\n\n\nclass SensorData():\n \"\"\"\n Create a class to be a container of rover sensor data from sim.\n\n Reads in saved data from csv sensor log file generated by sim which\n includes saved locations of front camera snapshots and corresponding\n rover position and yaw values in world coordinate frame\n\n \"\"\"\n\n def __init__(self):\n \"\"\"\n Initialize a SensorData instance unique to a single simulation run.\n\n worldmap instance variable is instantiated with a size of 200 square\n grids corresponding to a 200 square meters space which is same size as\n the 200 square pixels ground_truth variable allowing full range\n of output position values in x and y from the sim\n\n \"\"\"\n self.images = csv_img_list\n self.xpos = df[\"X_Position\"].values\n self.ypos = df[\"Y_Position\"].values\n self.yaw = df[\"Yaw\"].values\n # running index set to -1 as hack because moviepy\n # (below) seems to run one extra iteration\n self.count = -1\n self.worldmap = np.zeros((200, 200, 3)).astype(np.float)\n self.ground_truth = ground_truth_3d # Ground truth worldmap\n\n\n# Instantiate a SensorData().. this will be a global variable/object\n# that can be referenced in the process_image() function below\ndata = SensorData()\n", "Write a function to process stored images\nThe process_image() function below is modified by adding in the perception step processes (functions defined above) to perform image analysis and mapping. 
The following cell is all set up to use this process_image() function in conjunction with the moviepy video processing package to create a video from the rover camera image data taken in the simulator. \nIn short, we will be passing individual images into process_image() and building up an image called output_image that will be stored as one frame of the output video. A mosaic of the various steps of the above analysis process and additional text can also be added. \nThe output video ultimately demonstrates our mapping process.", "def process_image(input_img):\n \"\"\"\n Establish ROIs in rover cam image and overlay with ground truth map.\n\n Keyword argument:\n input_img -- 3 channel color image\n\n Return value:\n output_img -- 3 channel color image with ROIs identified\n\n Notes:\n Requires data (a global SensorData object)\n Required by the ImageSequenceClip object from moviepy module\n\n \"\"\"\n # Example of how to use the SensorData() object defined above\n # to print the current x, y and yaw values\n # print(data.xpos[data.count], data.ypos[data.count], data.yaw[data.count])\n\n # 1) Define source and destination points for perspective transform\n # 2) Apply perspective transform\n warped_img = perspect_transform(input_img, source, destination)\n\n # 3) Apply color threshold to identify following ROIs:\n # a. navigable terrain\n # b. obstacles\n # c. 
rock samples\n threshed_img_navigable = color_thresh_nav(warped_img)\n threshed_img_obstacle = color_thresh_obst(warped_img)\n threshed_img_rock = color_thresh_rock(\n warped_img,\n lower_bound,\n upper_bound\n )\n\n # 4) Convert thresholded image pixel values to rover-centric coords\n navigable_x_rover, navigable_y_rover = to_rover_coords(threshed_img_navigable)\n obstacle_x_rover, obstacle_y_rover = to_rover_coords(threshed_img_obstacle)\n rock_x_rover, rock_y_rover = to_rover_coords(threshed_img_rock)\n \n ########################### TESTING ############################\n nav_dists = to_polar_coords(navigable_x_rover, navigable_y_rover)[0]\n navigable_x_rover = navigable_x_rover[nav_dists < 130]\n navigable_y_rover = navigable_y_rover[nav_dists < 130]\n \n\n # 5) Convert rover-centric pixel values to world coords\n my_worldmap = np.zeros((200, 200))\n my_scale = 10 # scale factor assumed between world and rover space pixels\n #curr_rover_xpos = data.xpos[data.count-1]\n #curr_rover_ypos = data.ypos[data.count-1]\n #curr_rover_yaw = data.yaw[data.count-1]\n \n navigable_x_world, navigable_y_world = pix_to_world(\n navigable_x_rover,\n navigable_y_rover,\n data.xpos[data.count],\n data.ypos[data.count],\n data.yaw[data.count],\n #curr_rover_xpos,\n #curr_rover_ypos,\n #curr_rover_yaw,\n my_worldmap.shape[0],\n my_scale\n )\n obstacle_x_world, obstacle_y_world = pix_to_world(\n obstacle_x_rover,\n obstacle_y_rover,\n data.xpos[data.count],\n data.ypos[data.count],\n data.yaw[data.count],\n #curr_rover_xpos,\n #curr_rover_ypos,\n #curr_rover_yaw,\n my_worldmap.shape[0],\n my_scale\n )\n rock_x_world, rock_y_world = pix_to_world(\n rock_x_rover,\n rock_y_rover,\n data.xpos[data.count],\n data.ypos[data.count],\n data.yaw[data.count],\n #curr_rover_xpos,\n #curr_rover_ypos,\n #curr_rover_yaw,\n my_worldmap.shape[0],\n my_scale\n )\n\n # 6) Update worldmap (to be displayed on right side of screen)\n #data.worldmap[obstacle_y_world, obstacle_x_world] = (255,0,0)\n 
#data.worldmap[rock_y_world, rock_x_world] = (255,255,255)\n #data.worldmap[navigable_y_world, navigable_x_world] = (0,0,255)\n data.worldmap[obstacle_y_world, obstacle_x_world, 0] += 1\n data.worldmap[rock_y_world, rock_x_world, 1] += 1\n data.worldmap[navigable_y_world, navigable_x_world, 2] += 1\n\n # 7) Make a mosaic image\n\n # First create a blank image (can be whatever shape)\n output_image = np.zeros(\n (input_img.shape[0] + data.worldmap.shape[0],\n input_img.shape[1]*2,\n 3)\n )\n # Next we populate regions of the image with various output\n # Here we're putting the original image in the upper left hand corner\n output_image[0:input_img.shape[0], 0:input_img.shape[1]] = input_img\n\n # add a warped image to the mosaic\n warped = perspect_transform(input_img, source, destination)\n\n # Add the warped image in the upper right hand corner\n output_image[0:input_img.shape[0], input_img.shape[1]:] = warped\n\n # Overlay worldmap with ground truth map\n map_add = cv2.addWeighted(data.worldmap, 1, data.ground_truth, 0.5, 0)\n\n # Flip map overlay so y-axis points upward and add to output_image\n output_image[\n input_img.shape[0]:,\n 0:data.worldmap.shape[1]\n ] = np.flipud(map_add)\n\n # Then put some text over the image\n cv2.putText(\n output_image,\n \"Populate this image with your analyses to make a video!\",\n (20, 20),\n cv2.FONT_HERSHEY_COMPLEX,\n 0.4,\n (255, 255, 255),\n 1\n )\n data.count += 1 # Keep track of the index in the SensorData() object\n return output_image\n", "Make a video from processed image data\nUse the moviepy library to process images and create a video.", "# Import everything needed to edit/save/watch video clips\nfrom moviepy.editor import VideoFileClip\nfrom moviepy.editor import ImageSequenceClip\n\n\n# Define pathname to save the output video\noutput = '../output/test_mapping.mp4'\n\n# Re-initialize data in case this cell is run multiple times\ndata = SensorData()\n\n# Note: output video will be sped up because recording rate in\n# 
simulator is fps=25\nclip = ImageSequenceClip(data.images, fps=60)\nnew_clip = clip.fl_image(process_image) # process_image expects color images!\n\n%time new_clip.write_videofile(output, audio=False)", "This next cell should function as an inline video player\nIf this fails to render the video, the alternative video rendering method in the following cell can be run. The output video mp4 is saved in the /output folder.", "from IPython.display import HTML\n\nHTML(\"\"\"\n<video width=\"960\" height=\"540\" controls>\n <source src=\"{0}\">\n</video>\n\"\"\".format(output))", "Below is an alternative way to create a video in case the above cell did not work.", "import io\nimport base64\n\nvideo = io.open(output, 'r+b').read()\nencoded_video = base64.b64encode(video)\nHTML(data='''<video alt=\"test\" controls>\n <source src=\"data:video/mp4;base64,{0}\" type=\"video/mp4\" />\n </video>'''.format(encoded_video.decode('ascii')))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
twosigma/beaker-notebook
test/ipynb/python/OutputContainersTest.ipynb
apache-2.0
[ "Output Containers and Layout Managers\nOutput containers are objects that hold a collection of other objects, and display all of their contents, even when they are complex interactive objects or MIME types.\nBy default the contents are just stacked up on the page, but you can configure them to use tabbed, grid, cycling, or other layout methods.\nWithout Output", "# Defining a variable doesn't produce any output\nx = \"some string\"", "Stacked Output Containers", "from beakerx import *\no = OutputContainer()\no.addItem(\"simplest example\")\no.addItem([2, 3, 5, 7]) \no.addItem(HTML(\"<h1>title</h1>\"))\no.addItem(None)\no", "Tabbed Output Containers", "rates = pd.read_csv(\"../../../doc/resources/data/interest-rates.csv\")\nc = Color(120, 120, 120, 100)\nplot1 = Plot(initWidth= 300, initHeight= 400) \nplot1.add(Points(x= rates.y1, y=rates.y30, size= 3, displayName=\"y1 vs y30\"))\nplot1.add(Points(x= rates.m3, y=rates.y5, size= 3, displayName=\"m3 vs y5\"))\nplot1.add(Line(x= rates.y1, y=rates.y30, color= c))\nplot1.add(Line(x= rates.m3, y=rates.y5, color= c))\nplot1.setShowLegend(False)\n\nplot2 = SimpleTimePlot(rates, [\"m3\", \"y1\"], showLegend=False, initWidth= 300, initHeight= 400)\nplot3 = SimpleTimePlot(rates, [\"y5\", \"y10\"], showLegend=False, initWidth= 300, initHeight= 400)\n\ntable = pd.DataFrame(rates.head(n=10), columns=[\"m3\", \"y1\", \"y5\", \"y10\"])\n\nl = TabbedOutputContainerLayoutManager()\nl.setBorderDisplayed(False)\no = OutputContainer()\no.setLayoutManager(l)\no.addItem(plot1, \"Scatter with History\")\no.addItem(plot2, \"Short Term\")\no.addItem(plot3, \"Long Term\")\no.addItem(table, \"1990/01\")\no\n", "Grid Output Containers", "plot1.setShowLegend(False)\nbars = CategoryPlot(initWidth= 300, initHeight= 400)\nbars.add(CategoryBars(value= [[1.1, 2.4, 3.8], [1, 3, 5]]))\n\nlg = GridOutputContainerLayoutManager(3)\n\nog = OutputContainer()\nog.setLayoutManager(lg)\nog.addItem(plot1, \"Scatter with History\")\nog.addItem(plot2, \"Short 
Term\")\nog.addItem(plot3, \"Long Term1\")\nog.addItem(plot3, \"Long Term2\")\nog.addItem(table, \"1990/01\")\nog.addItem(bars, \"Bar Chart\")\n\nog", "Cycling Output Container", "l = CyclingOutputContainerLayoutManager()\nl.setPeriod(3000); # milliseconds\nl.setBorderDisplayed(False);\no = OutputContainer()\no.setLayoutManager(l)\no.addItem(plot1, \"Scatter with History\")\no.addItem(plot2, \"Short Term\")\no.addItem(table, \"1990/01\")\no.addItem(plot3, \"Long Term\")\no" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
gutouyu/cs231n
cs231n/assignment/assignment2/BatchNormalization.ipynb
mit
[ "Batch Normalization\nOne way to make deep networks easier to train is to use more sophisticated optimization procedures such as SGD+momentum, RMSProp, or Adam. Another strategy is to change the architecture of the network to make it easier to train. One idea along these lines is batch normalization which was recently proposed by [3].\nThe idea is relatively straightforward. Machine learning methods tend to work better when their input data consists of uncorrelated features with zero mean and unit variance. When training a neural network, we can preprocess the data before feeding it to the network to explicitly decorrelate its features; this will ensure that the first layer of the network sees data that follows a nice distribution. However even if we preprocess the input data, the activations at deeper layers of the network will likely no longer be decorrelated and will no longer have zero mean or unit variance since they are output from earlier layers in the network. Even worse, during the training process the distribution of features at each layer of the network will shift as the weights of each layer are updated.\nThe authors of [3] hypothesize that the shifting distribution of features inside deep neural networks may make training deep networks more difficult. To overcome this problem, [3] proposes to insert batch normalization layers into the network. At training time, a batch normalization layer uses a minibatch of data to estimate the mean and standard deviation of each feature. These estimated means and standard deviations are then used to center and normalize the features of the minibatch. 
A running average of these means and standard deviations is kept during training, and at test time these running averages are used to center and normalize features.\nIt is possible that this normalization strategy could reduce the representational power of the network, since it may sometimes be optimal for certain layers to have features that are not zero-mean or unit variance. To this end, the batch normalization layer includes learnable shift and scale parameters for each feature dimension.\n[3] Sergey Ioffe and Christian Szegedy, \"Batch Normalization: Accelerating Deep Network Training by Reducing\nInternal Covariate Shift\", ICML 2015.", "# As usual, a bit of setup\nfrom __future__ import print_function\nimport time\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom cs231n.classifiers.fc_net import *\nfrom cs231n.data_utils import get_CIFAR10_data\nfrom cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array\nfrom cs231n.solver import Solver\n\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\n# for auto-reloading external modules\n# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython\n%load_ext autoreload\n%autoreload 2\n\ndef rel_error(x, y):\n \"\"\" returns relative error \"\"\"\n return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))\n\n# Load the (preprocessed) CIFAR10 data.\n\ndata = get_CIFAR10_data()\nfor k, v in data.items():\n print('%s: ' % k, v.shape)", "Batch normalization: Forward\nIn the file cs231n/layers.py, implement the batch normalization forward pass in the function batchnorm_forward. 
Once you have done so, run the following to test your implementation.", "# Check the training-time forward pass by checking means and variances\n# of features both before and after batch normalization\n\n# Simulate the forward pass for a two-layer network\nnp.random.seed(231)\nN, D1, D2, D3 = 200, 50, 60, 3\nX = np.random.randn(N, D1)\nW1 = np.random.randn(D1, D2)\nW2 = np.random.randn(D2, D3)\na = np.maximum(0, X.dot(W1)).dot(W2)\n\nprint('Before batch normalization:')\nprint(' means: ', a.mean(axis=0))\nprint(' stds: ', a.std(axis=0))\n\n# Means should be close to zero and stds close to one\nprint('After batch normalization (gamma=1, beta=0)')\na_norm, _ = batchnorm_forward(a, np.ones(D3), np.zeros(D3), {'mode': 'train'})\nprint(' mean: ', a_norm.mean(axis=0))\nprint(' std: ', a_norm.std(axis=0))\n\n# Now means should be close to beta and stds close to gamma\ngamma = np.asarray([1.0, 2.0, 3.0])\nbeta = np.asarray([11.0, 12.0, 13.0])\na_norm, _ = batchnorm_forward(a, gamma, beta, {'mode': 'train'})\nprint('After batch normalization (nontrivial gamma, beta)')\nprint(' means: ', a_norm.mean(axis=0))\nprint(' stds: ', a_norm.std(axis=0))\n\n# Check the test-time forward pass by running the training-time\n# forward pass many times to warm up the running averages, and then\n# checking the means and variances of activations after a test-time\n# forward pass.\nnp.random.seed(231)\nN, D1, D2, D3 = 200, 50, 60, 3\nW1 = np.random.randn(D1, D2)\nW2 = np.random.randn(D2, D3)\n\nbn_param = {'mode': 'train'}\ngamma = np.ones(D3)\nbeta = np.zeros(D3)\nfor t in range(50):\n X = np.random.randn(N, D1)\n a = np.maximum(0, X.dot(W1)).dot(W2)\n batchnorm_forward(a, gamma, beta, bn_param)\nbn_param['mode'] = 'test'\nX = np.random.randn(N, D1)\na = np.maximum(0, X.dot(W1)).dot(W2)\na_norm, _ = batchnorm_forward(a, gamma, beta, bn_param)\n\n# Means should be close to zero and stds close to one, but will be\n# noisier than training-time forward passes.\nprint('After batch normalization 
(test-time):')\nprint(' means: ', a_norm.mean(axis=0))\nprint(' stds: ', a_norm.std(axis=0))", "Batch Normalization: backward\nNow implement the backward pass for batch normalization in the function batchnorm_backward.\nTo derive the backward pass you should write out the computation graph for batch normalization and backprop through each of the intermediate nodes. Some intermediates may have multiple outgoing branches; make sure to sum gradients across these branches in the backward pass.\nOnce you have finished, run the following to numerically check your backward pass.", "# Gradient check batchnorm backward pass\nnp.random.seed(231)\nN, D = 4, 5\nx = 5 * np.random.randn(N, D) + 12\ngamma = np.random.randn(D)\nbeta = np.random.randn(D)\ndout = np.random.randn(N, D)\n\nbn_param = {'mode': 'train'}\nfx = lambda x: batchnorm_forward(x, gamma, beta, bn_param)[0]\nfg = lambda a: batchnorm_forward(x, a, beta, bn_param)[0]\nfb = lambda b: batchnorm_forward(x, gamma, b, bn_param)[0]\n\ndx_num = eval_numerical_gradient_array(fx, x, dout)\nda_num = eval_numerical_gradient_array(fg, gamma.copy(), dout)\ndb_num = eval_numerical_gradient_array(fb, beta.copy(), dout)\n\n_, cache = batchnorm_forward(x, gamma, beta, bn_param)\ndx, dgamma, dbeta = batchnorm_backward(dout, cache)\nprint('dx error: ', rel_error(dx_num, dx))\nprint('dbeta error: ', rel_error(db_num, dbeta))\nprint('dgamma error: ', rel_error(da_num, dgamma))", "Batch Normalization: alternative backward (OPTIONAL, +3 points extra credit)\nIn class we talked about two different implementations for the sigmoid backward pass. One strategy is to write out a computation graph composed of simple operations and backprop through all intermediate values. Another strategy is to work out the derivatives on paper. 
For the sigmoid function, it turns out that you can derive a very simple formula for the backward pass by simplifying gradients on paper.\nSurprisingly, it turns out that you can also derive a simple expression for the batch normalization backward pass if you work out derivatives on paper and simplify. After doing so, implement the simplified batch normalization backward pass in the function batchnorm_backward_alt and compare the two implementations by running the following. Your two implementations should compute nearly identical results, but the alternative implementation should be a bit faster.\nNOTE: This part of the assignment is entirely optional, but we will reward 3 points of extra credit if you can complete it.", "np.random.seed(231)\nN, D = 100, 500\nx = 5 * np.random.randn(N, D) + 12\ngamma = np.random.randn(D)\nbeta = np.random.randn(D)\ndout = np.random.randn(N, D)\n\nbn_param = {'mode': 'train'}\nout, cache = batchnorm_forward(x, gamma, beta, bn_param)\n\nt1 = time.time()\ndx1, dgamma1, dbeta1 = batchnorm_backward(dout, cache)\nt2 = time.time()\ndx2, dgamma2, dbeta2 = batchnorm_backward_alt(dout, cache)\nt3 = time.time()\n\nprint('dx difference: ', rel_error(dx1, dx2))\nprint('dgamma difference: ', rel_error(dgamma1, dgamma2))\nprint('dbeta difference: ', rel_error(dbeta1, dbeta2))\nprint('speedup: %.2fx' % ((t2 - t1) / (t3 - t2)))", "Fully Connected Nets with Batch Normalization\nNow that you have a working implementation for batch normalization, go back to your FullyConnectedNet in the file cs231n/classifiers/fc_net.py. Modify your implementation to add batch normalization.\nConcretely, when the flag use_batchnorm is True in the constructor, you should insert a batch normalization layer before each ReLU nonlinearity. The outputs from the last layer of the network should not be normalized. 
Once you are done, run the following to gradient-check your implementation.\nHINT: You might find it useful to define an additional helper layer similar to those in the file cs231n/layer_utils.py. If you decide to do so, do it in the file cs231n/classifiers/fc_net.py.", "np.random.seed(231)\nN, D, H1, H2, C = 2, 15, 20, 30, 10\nX = np.random.randn(N, D)\ny = np.random.randint(C, size=(N,))\n\nfor reg in [0, 3.14]:\n print('Running check with reg = ', reg)\n model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,\n reg=reg, weight_scale=5e-2, dtype=np.float64,\n use_batchnorm=True)\n\n loss, grads = model.loss(X, y)\n print('Initial loss: ', loss)\n\n for name in sorted(grads):\n f = lambda _: model.loss(X, y)[0]\n grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)\n print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))\n if reg == 0: print()", "Batchnorm for deep networks\nRun the following to train a six-layer network on a subset of 1000 training examples both with and without batch normalization.", "np.random.seed(231)\n# Try training a very deep net with batchnorm\nhidden_dims = [100,100,100,100,100]\n\nnum_train = 1000\nsmall_data = {\n 'X_train': data['X_train'][:num_train],\n 'y_train': data['y_train'][:num_train],\n 'X_val': data['X_val'],\n 'y_val': data['y_val'],\n}\n\nweight_scale = 2e-2\nbn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=True)\nmodel = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=False)\n\nbn_solver = Solver(bn_model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=True, print_every=200)\nbn_solver.train()\n\nsolver = Solver(model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=True, print_every=200)\nsolver.train()", "Run the following to visualize the results from two 
networks trained above. You should find that using batch normalization helps the network to converge much faster.", "plt.subplot(3, 1, 1)\nplt.title('Training loss')\nplt.xlabel('Iteration')\n\nplt.subplot(3, 1, 2)\nplt.title('Training accuracy')\nplt.xlabel('Epoch')\n\nplt.subplot(3, 1, 3)\nplt.title('Validation accuracy')\nplt.xlabel('Epoch')\n\nplt.subplot(3, 1, 1)\nplt.plot(solver.loss_history, 'o', label='baseline')\nplt.plot(bn_solver.loss_history, 'o', label='batchnorm')\n\nplt.subplot(3, 1, 2)\nplt.plot(solver.train_acc_history, '-o', label='baseline')\nplt.plot(bn_solver.train_acc_history, '-o', label='batchnorm')\n\nplt.subplot(3, 1, 3)\nplt.plot(solver.val_acc_history, '-o', label='baseline')\nplt.plot(bn_solver.val_acc_history, '-o', label='batchnorm')\n \nfor i in [1, 2, 3]:\n plt.subplot(3, 1, i)\n plt.legend(loc='upper center', ncol=4)\nplt.gcf().set_size_inches(15, 15)\nplt.show()", "Batch normalization and initialization\nWe will now run a small experiment to study the interaction of batch normalization and weight initialization.\nThe first cell will train 8-layer networks both with and without batch normalization using different scales for weight initialization. 
The second layer will plot training accuracy, validation set accuracy, and training loss as a function of the weight initialization scale.", "np.random.seed(232) #231\n# Try training a very deep net with batchnorm\nhidden_dims = [50, 50, 50, 50, 50, 50, 50]\n\nnum_train = 1000\nsmall_data = {\n 'X_train': data['X_train'][:num_train],\n 'y_train': data['y_train'][:num_train],\n 'X_val': data['X_val'],\n 'y_val': data['y_val'],\n}\n\nbn_solvers = {}\nsolvers = {}\nweight_scales = np.logspace(-4, 0, num=20)\nfor i, weight_scale in enumerate(weight_scales):\n print('Running weight scale %d / %d' % (i + 1, len(weight_scales)))\n bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=True)\n model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=False)\n\n bn_solver = Solver(bn_model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=False, print_every=200)\n bn_solver.train()\n bn_solvers[weight_scale] = bn_solver\n\n solver = Solver(model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=False, print_every=200)\n solver.train()\n solvers[weight_scale] = solver\n\n# Plot results of weight scale experiment\nbest_train_accs, bn_best_train_accs = [], []\nbest_val_accs, bn_best_val_accs = [], []\nfinal_train_loss, bn_final_train_loss = [], []\n\nfor ws in weight_scales:\n best_train_accs.append(max(solvers[ws].train_acc_history))\n bn_best_train_accs.append(max(bn_solvers[ws].train_acc_history))\n \n best_val_accs.append(max(solvers[ws].val_acc_history))\n bn_best_val_accs.append(max(bn_solvers[ws].val_acc_history))\n \n final_train_loss.append(np.mean(solvers[ws].loss_history[-100:]))\n bn_final_train_loss.append(np.mean(bn_solvers[ws].loss_history[-100:]))\n \nplt.subplot(3, 1, 1)\nplt.title('Best val accuracy vs weight initialization scale')\nplt.xlabel('Weight 
initialization scale')\nplt.ylabel('Best val accuracy')\nplt.semilogx(weight_scales, best_val_accs, '-o', label='baseline')\nplt.semilogx(weight_scales, bn_best_val_accs, '-o', label='batchnorm')\nplt.legend(ncol=2, loc='lower right')\n\nplt.subplot(3, 1, 2)\nplt.title('Best train accuracy vs weight initialization scale')\nplt.xlabel('Weight initialization scale')\nplt.ylabel('Best training accuracy')\nplt.semilogx(weight_scales, best_train_accs, '-o', label='baseline')\nplt.semilogx(weight_scales, bn_best_train_accs, '-o', label='batchnorm')\nplt.legend()\n\nplt.subplot(3, 1, 3)\nplt.title('Final training loss vs weight initialization scale')\nplt.xlabel('Weight initialization scale')\nplt.ylabel('Final training loss')\nplt.semilogx(weight_scales, final_train_loss, '-o', label='baseline')\nplt.semilogx(weight_scales, bn_final_train_loss, '-o', label='batchnorm')\nplt.legend()\nplt.gca().set_ylim(1.0, 3.5)\n\nplt.gcf().set_size_inches(10, 15)\nplt.show()", "Question:\nDescribe the results of this experiment, and try to give a reason why the experiment gave the results that it did.\nAnswer:\nAcross the whole weight_scale range, both training and validation accuracy are higher with batch normalization than without it. When weight_scale is very small, batch normalization is also more robust: it prevents a large drop in accuracy. When weight_scale is large, batch normalization is again more robust and avoids large fluctuations in the loss.\nIn short, batch normalization:\n1. is more robust to bad weight initialization\n2. reduces overfitting\n3. mitigates the vanishing-gradient and exploding-gradient problems" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
samgoodgame/sf_crime
iterations/misc/sf_crime-Sarah.ipynb
mit
[ "SF Crime\nW207 Final Project\nBasic Modeling\nEnvironment and Data", "# Import relevant libraries:\nimport time\nimport numpy as np\nimport pandas as pd\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn import preprocessing\nfrom sklearn.preprocessing import MinMaxScaler\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.naive_bayes import BernoulliNB\nfrom sklearn.naive_bayes import MultinomialNB\nfrom sklearn.naive_bayes import GaussianNB\nfrom sklearn.grid_search import GridSearchCV\nfrom sklearn.metrics import classification_report\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn import svm\nfrom sklearn.neural_network import MLPClassifier\nfrom sklearn.ensemble import RandomForestClassifier\n\n# Set random seed and format print output:\nnp.random.seed(0)\nnp.set_printoptions(precision=3)", "DDL to construct table for SQL transformations:\nsql\nCREATE TABLE kaggle_sf_crime (\ndates TIMESTAMP, \ncategory VARCHAR,\ndescript VARCHAR,\ndayofweek VARCHAR,\npd_district VARCHAR,\nresolution VARCHAR,\naddr VARCHAR,\nX FLOAT,\nY FLOAT);\nGetting training data into a locally hosted PostgreSQL database:\nsql\n\\copy kaggle_sf_crime FROM '/Users/Goodgame/Desktop/MIDS/207/final/sf_crime_train.csv' DELIMITER ',' CSV HEADER;\nSQL Query used for transformations:\nsql\nSELECT\n category,\n date_part('hour', dates) AS hour_of_day,\n CASE\n WHEN dayofweek = 'Monday' then 1\n WHEN dayofweek = 'Tuesday' THEN 2\n WHEN dayofweek = 'Wednesday' THEN 3\n WHEN dayofweek = 'Thursday' THEN 4\n WHEN dayofweek = 'Friday' THEN 5\n WHEN dayofweek = 'Saturday' THEN 6\n WHEN dayofweek = 'Sunday' THEN 7\n END AS dayofweek_numeric,\n X,\n Y,\n CASE\n WHEN pd_district = 'BAYVIEW' THEN 1\n ELSE 0\n END AS bayview_binary,\n CASE\n WHEN pd_district = 'INGLESIDE' THEN 1\n ELSE 0\n END AS ingleside_binary,\n CASE\n WHEN pd_district = 'NORTHERN' THEN 1\n ELSE 0\n END AS northern_binary,\n CASE\n WHEN pd_district = 'CENTRAL' THEN 1\n ELSE 0\n END AS 
central_binary,\n CASE\n WHEN pd_district = 'BAYVIEW' THEN 1\n ELSE 0\n END AS pd_bayview_binary,\n CASE\n WHEN pd_district = 'MISSION' THEN 1\n ELSE 0\n END AS mission_binary,\n CASE\n WHEN pd_district = 'SOUTHERN' THEN 1\n ELSE 0\n END AS southern_binary,\n CASE\n WHEN pd_district = 'TENDERLOIN' THEN 1\n ELSE 0\n END AS tenderloin_binary,\n CASE\n WHEN pd_district = 'PARK' THEN 1\n ELSE 0\n END AS park_binary,\n CASE\n WHEN pd_district = 'RICHMOND' THEN 1\n ELSE 0\n END AS richmond_binary,\n CASE\n WHEN pd_district = 'TARAVAL' THEN 1\n ELSE 0\n END AS taraval_binary\nFROM kaggle_sf_crime;\nLoad the data into training, development, and test:", "data_path = \"./data/train_transformed.csv\"\n\ndf = pd.read_csv(data_path, header=0)\nx_data = df.drop('category', 1)\ny = df.category.as_matrix()\n\n\n## read in zip code data\n\ndata_path_zip = \"./data/2016_zips.csv\"\nzips = pd.read_csv(data_path_zip, header=0, sep ='\\t', usecols = [0,5,6], names = [\"GEOID\", \"INTPTLAT\", \"INTPTLONG\"], dtype ={'GEOID': int, 'INTPTLAT': float, 'INTPTLONG': float})\n\nsf_zips = zips[(zips['GEOID'] > 94000) & (zips['GEOID'] < 94189)]\n\n\nlen(sf_zips)\n\n###mapping longitude/latitude to zipcodes\n\ndef dist(lat1, long1, lat2, long2):\n #return np.sqrt((lat1-lat2)**2+(long1-long2)**2)\n return abs(lat1-lat2)+abs(long1-long2)\n\ndef find_zipcode(lat, long):\n \n distances = sf_zips.apply(lambda row: dist(lat, long, row[\"INTPTLAT\"], row[\"INTPTLONG\"]), axis=1)\n return sf_zips.loc[distances.idxmin(), \"GEOID\"]\n\n#x_data['zipcode'] = 0\n#for i in range(0, 1):\n# x_data['zipcode'][i] = x_data.apply(lambda row: find_zipcode(row['x'], row['y']), axis=1)\nx_data['zipcode']= x_data.apply(lambda row: find_zipcode(row['x'], row['y']), axis=1)\n\n\nx_data[:10]\n\n### read in school data\ndata_path_schools = \"./data/pubschls.csv\"\nschools = pd.read_csv(data_path_schools,header=0, sep ='\\t', usecols = [\"CDSCode\",\"StatusType\", \"School\", \"EILCode\", \"EILName\", \"Zip\", \"Latitude\", 
\"Longitude\"], dtype ={'CDSCode': str, 'StatusType': str, 'School': str, 'EILCode': str,'EILName': str,'Zip': str, 'Latitude': float, 'Longitude': float})\nschools = schools[(schools[\"StatusType\"] == 'Active')]\n\nx_data_sub= x_data[0:5]\n\n### find closest school\n\ndef dist(lat1, long1, lat2, long2):\n return np.sqrt((lat1-lat2)**2+(long1-long2)**2)\n\ndef find_closest_school(lat, long):\n \n distances = schools.apply(lambda row: dist(lat, long, row[\"Latitude\"], row[\"Longitude\"]), axis=1)\n return min(distances)\n\nx_data['closest_school'] = x_data_sub.apply(lambda row: find_closest_school(row['y'], row['x']), axis=1)\n\n# Impute missing values with mean values:\nx_complete = x_data.fillna(x_data.mean())\nX_raw = x_complete.as_matrix()\nX = X_raw\n# Scale the data between 0 and 1:\n#X = MinMaxScaler().fit_transform(X_raw)\n\n# Shuffle data to remove any underlying pattern that may exist:\nshuffle = np.random.permutation(np.arange(X.shape[0]))\nX, y = X[shuffle], y[shuffle]\n\n# Separate training, dev, and test data:\ntest_data, test_labels = X[800000:], y[800000:]\ndev_data, dev_labels = X[700000:800000], y[700000:800000]\ntrain_data, train_labels = X[:700000], y[:700000]\n\nmini_train_data, mini_train_labels = X[:75000], y[:75000]\nmini_dev_data, mini_dev_labels = X[75000:100000], y[75000:100000]\n\n#the submission format requires that we list the ID of each example?\n#this is to remember the order of the IDs after shuffling\n#(not used for anything right now)\nallIDs = np.array(list(df.axes[0]))\nallIDs = allIDs[shuffle]\n\ntestIDs = allIDs[800000:]\ndevIDs = allIDs[700000:800000]\ntrainIDs = allIDs[:700000]\n\n#this is for extracting the column names for the required submission format\nsampleSubmission_path = \"./data/sampleSubmission.csv\"\nsampleDF = pd.read_csv(sampleSubmission_path)\nallColumns = list(sampleDF.columns)\nfeatureColumns = allColumns[1:]\n\n#this is for extracting the test data for our baseline submission\nreal_test_path = 
\"./data/test_transformed.csv\"\ntestDF = pd.read_csv(real_test_path, header=0)\nreal_test_data = testDF\n\ntest_complete = real_test_data.fillna(real_test_data.mean())\nTest_raw = test_complete.as_matrix()\n\nTestData = MinMaxScaler().fit_transform(Test_raw)\n\n#here we remember the ID of each test data point\n#(in case we ever decide to shuffle the test data for some reason)\ntestIDs = list(testDF.axes[0])\n\ntrain_data[:5]", "Note: the code above will shuffle data differently every time it's run, so model accuracies will vary accordingly.", "## Data sanity checks\nprint(train_data[:1])\nprint(train_labels[:1])", "Model Prototyping\nRapidly assessing the viability of different model forms:", "##Neural Network\n\nimport theano \nfrom theano import tensor as T\nfrom theano.sandbox.rng_mrg import MRG_RandomStreams as RandomStreams\nprint (theano.config.device) # We're using CPUs (for now)\nprint (theano.config.floatX )# Should be 64 bit for CPUs\n\nnp.random.seed(0)\n\nfrom IPython.display import display, clear_output \n\nnumFeatures = train_data[1].size\nnumTrainExamples = train_data.shape[0]\nnumTestExamples = test_data.shape[0]\nprint ('Features = %d' %(numFeatures))\nprint ('Train set = %d' %(numTrainExamples))\nprint ('Test set = %d' %(numTestExamples))\n\nclass_labels = list(set(train_labels))\nprint(class_labels)\nnumClasses = len(class_labels)\n\nprint(train_labels[:5])\n\n##binarize the class labels\n\ndef binarizeY(data):\n binarized_data = np.zeros((data.size,39))\n for j in range(0,data.size):\n feature = data[j]\n i = class_labels.index(feature)\n binarized_data[j,i]=1\n return binarized_data\n\ntrain_labels_b = binarizeY(train_labels)\ntest_labels_b = binarizeY(test_labels)\nnumClasses = train_labels_b[1].size\n\nprint ('Classes = %d' %(numClasses))\n\nprint ('\\n', train_labels_b[:5, :], '\\n')\nprint (train_labels[:10], '\\n')\n\n#1) Parameters\nnumFeatures = train_data.shape[1]\n\nnumHiddenNodeslayer1 = 50\nnumHiddenNodeslayer2 = 30\n\nw_1 = 
theano.shared(np.asarray((np.random.randn(*(numFeatures, numHiddenNodeslayer1))*0.01)))\nw_2 = theano.shared(np.asarray((np.random.randn(*(numHiddenNodeslayer1, numHiddenNodeslayer2))*0.01)))\nw_3 = theano.shared(np.asarray((np.random.randn(*(numHiddenNodeslayer2, numClasses))*0.01)))\nparams = [w_1, w_2, w_3]\n\n\n#2) Model\nX = T.matrix()\nY = T.matrix()\n\nsrng = RandomStreams()\ndef dropout(X, p=0.):\n if p > 0:\n X *= srng.binomial(X.shape, p=1 - p)\n X /= 1 - p\n return X\n\ndef model(X, w_1, w_2, w_3, p_1, p_2, p_3):\n return T.nnet.softmax(T.dot(dropout(T.nnet.sigmoid(T.dot(dropout(T.nnet.sigmoid(T.dot(dropout(X, p_1), w_1)),p_2), w_2)),p_3),w_3))\ny_hat_train = model(X, w_1, w_2, w_3, 0.2, 0.5,0.5)\ny_hat_predict = model(X, w_1, w_2, w_3, 0., 0., 0.)\n\n## (3) Cost function\n#cost = T.mean(T.sqr(y_hat - Y))\ncost = T.mean(T.nnet.categorical_crossentropy(y_hat_train, Y))\n\n## (4) Objective (and solver)\n\nalpha = 0.01\ndef backprop(cost, w):\n grads = T.grad(cost=cost, wrt=w)\n updates = []\n for wi, grad in zip(w, grads):\n updates.append([wi, wi - grad * alpha])\n return updates\n\nupdate = backprop(cost, params)\ntrain = theano.function(inputs=[X, Y], outputs=cost, updates=update, allow_input_downcast=True)\ny_pred = T.argmax(y_hat_predict, axis=1)\npredict = theano.function(inputs=[X], outputs=y_pred, allow_input_downcast=True)\n\nminiBatchSize = 10 \n\ndef gradientDescent(epochs):\n for i in range(epochs):\n for start, end in zip(range(0, len(train_data), miniBatchSize), range(miniBatchSize, len(train_data), miniBatchSize)):\n cc = train(train_data[start:end], train_labels_b[start:end])\n clear_output(wait=True)\n print ('%d) accuracy = %.4f' %(i+1, np.mean(np.argmax(test_labels_b, axis=1) == predict(test_data))) )\n\ngradientDescent(50)\n\n### How to decide what # to use for epochs? epochs in this case are how many rounds?\n### plot costs for each of the 50 iterations and see how much it decline.. 
if it's still declining steeply, you should\n### do more iterations; otherwise, if it looks like it's flattening out, you can stop\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
openradar/AMS_radar_in_the_cloud
notebooks/Introduction_to_SciPy.ipynb
bsd-2-clause
[ "Overview\nThe goal of this tutorial is to provide an example of the use of SciPy. SciPy is a collection of many different algorithms, so there's no way we can cover everything here. For more information, try looking at:\n- SciPy Reference Guide\n- SciPy Lectures\nSciPy is a library that wraps general-purpose, scientific algorithms. These algorithms are frequently written in FORTRAN, so SciPy gives you the ability to work with these performant algorithms without dealing with the compiled languages.\n\nIntegration\nOptimization\nSpatial Algorithms\nODE Solvers\nInterpolation\nStatistics\nLinear Algebra\nSpecial Functions\nSignal Processing\nFFT\n\nAn Example Using Spatial\nThis example walks through using the spatial algorithms library in SciPy to reduce some point data.", "# Set-up to have matplotlib use its IPython notebook backend\n%matplotlib inline\n\n# Convention for import of the pyplot interface\nimport matplotlib.pyplot as plt\nimport numpy as np", "Let's create some data, using normally distributed locations.", "# Create some example data\nimport scipy.stats\n\n# Initialize the RandomState so that this is repeatable\nrs = np.random.RandomState(seed=20170122)\n\n# Set up the distribution\ndist = scipy.stats.norm(loc=5, scale=2)\n\n# Request a bunch of random values from this distribution\nx, y = dist.rvs(size=(2, 100000), random_state=rs)\n\n# Go ahead and explicitly create a figure and an axes\nfig, ax = plt.subplots(figsize=(10, 6), dpi=100)\n\n# Do a scatter plot of our locations\nax.scatter(x, y)", "Now let's create some more data to analyze.", "# Some exponentially distributed values to make things interesting\nsize = scipy.stats.expon(loc=10, scale=10).rvs(size=100000, random_state=rs)\nstrength = scipy.stats.expon(loc=5).rvs(size=100000, random_state=rs)\n\n# Make the scatter plot more complex--change the color of markers by strength,\n# and scale their size by the size variable\nfig, ax = plt.subplots(figsize=(10, 6), dpi=100)\n\n# c specifies 
what to color by, s what to scale by\nax.scatter(x, y, c=strength, s=size**2, alpha=0.7)", "So we have a messy dataset, and we'd like to reduce the number of points. For this exercise, let's pick points and clear out the radius around them. We can do this by favoring certain points; in this case, we'll favor those with higher strength values.", "import scipy.spatial\n\n# Put the x and y values together--so that this is (N, 2)\nxy = np.vstack((x, y)).T\n\n# Create a mask--all True values initially. We keep values where this is True.\nkeep = np.ones(x.shape, dtype=np.bool)\n\n# Get the indices that would sort the strength array--and can be used to sort\n# the point locations by strength\nsorted_indices = np.argsort(strength)[::-1]\n\n# Create a kdTree--a data structure that makes it easy to do search in nD space\ntree = scipy.spatial.cKDTree(xy)\n\n# Loop over all the potential points\nfor sort_index in sorted_indices:\n # Check if this point is being kept\n if keep[sort_index]:\n # Use the kdTree to find the neighbors around the current point\n neighbors = tree.query_ball_point(xy[sort_index], r=1)\n \n # Eliminate the points within that radius--but not the current point\n for index in neighbors:\n if index != sort_index:\n keep[index] = False\n\n# Make the scatter plot more complex--change the color of markers by strength,\n# and scale their size by the size variable\nfig, ax = plt.subplots(figsize=(10, 6), dpi=100)\n\n# c specifies what to color by, s what to scale by\nax.scatter(x[keep], y[keep], c=strength[keep], s=size[keep]**2, alpha=0.7)", "Exercise\nGive it a try yourself. Try to modify the code above to:\n\nFilter by size instead of strength\nChange the radius" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
tomfaulkenberry/MT_flanker
exp2/results/.ipynb_checkpoints/SqueakIntro-checkpoint.ipynb
gpl-2.0
[ "Update: I've moved the code from this post, along with resources for designing mouse tracking experiments, and some example data, to a GitHub repository. The best way to learn how to use squeak is to play around with this repository, which also includes the content of this post.\n\nA while ago, I gathered up the python code I've been using to process mouse trajectory data\ninto a package and gave it the jaunty title squeak.\nHowever, as this was mostly for my own use, I never got around to properly documenting it.\nRecently, a few people have asked me for advice on analysing mouse data not collected using MouseTracker - for instance, data generated using my OpenSesame implementation. In response, I've gone through a full example for this post, and written a script that should be able to preprocess any data collected using my OpenSesame implementation. To use any of this, you'll need to have the python language installed, along with some specific scientific packages, and of course squeak itself, which is available using the pip command:\npip install squeak\nIn this post, I go through the code bit by bit, explaining what specifically is going on.\nIf you're not used to using python, you don't have to worry too much about understanding all of the syntax,\nalthough python is relatively easy to read as if it were plain English. 
The full, downloadable script is included at the bottom of the page.\n\nData Processing", "# For reading data files\nimport os \nimport glob\n\nimport numpy as np # Numeric calculation\nimport pandas as pd # General purpose data analysis library\nimport squeak # For mouse data\n\n# For plotting\nimport matplotlib.pyplot as plt \n%matplotlib inline\n# Prettier default settings for plots (optional)\nimport seaborn\nseaborn.set_style('darkgrid')\nfrom pylab import rcParams\nrcParams['figure.figsize'] = 8, 5", "First, we need to load our data.\nI'll show how to do this using the .csv files saved by OpenSesame,\nas this gives us a chance to see how you can use squeak to handle\ntrajectory data that's been saved in this exchangable format.\nWe can combine all of our files into a single data structure\nby reading them one at a time, using pd.read_csv,\nstoring them in a list,\nand then merging this list using pd.concat.", "results = []\nfor datafile in glob.glob('data/*.csv'):\n this_data = pd.read_csv(datafile)\n results.append(this_data)\ndata = pd.concat(results)", "A faster and more concise alternative, using python's list comprehension abilities, would look like this instead:", "data = pd.concat(\n [pd.DataFrame(pd.read_csv(datafile)) \n for datafile in glob.glob('data/*.csv')])", "Either way, we end up with data in the form shown below.", "print data.head()", "As you can see, there's one row per trial,\nand each of the coding variables we recorded in OpenSesame occupy a single column.\nThe trajectory data, though, is stored in three columns, \n\"tTrajectory\",\n\"xTrajectory\",\nand \"yTrajectory\",\ncorresponding to time elapsed, x-axis position, and y-axis position, respectively.\nEach cell here actually contains a string representation of the list of values in each case, in the form\n\"[time1, time2, time3, ..., timeN]\"\n\nWe can parse these using squeak's list_from_string function.", "data['t'] = data.tTrajectory.map(squeak.list_from_string)\ndata['x'] = 
data.xTrajectory.map(squeak.list_from_string)\ndata['y'] = data.yTrajectory.map(squeak.list_from_string)", "At this stage, we have our data in a format python can understand, and it looks like this.", "for i in range(len(data)):\n x = data.x.iloc[i]\n y = data.y.iloc[i]\n plt.plot(x, y, color='blue', alpha=.5) # alpha controls the transparency\nplt.show()", "We still need to do some preprocessing of the trajectories - OpenSesame logs y-axis coordinates upside down from what we would want, and more importantly, it's conventional to standardise trajectories so they start at [0,0] and end at [1,1.5], and to flip the trials where the left hand side response was chosen the other way around for comparison. Let's do that now.", "data['y'] = data.y * -1 # Reverse y axis\ndata['x'] = data.x.map(squeak.remap_right) # Flip the leftward responses\ndata['x'] = data.x.map(squeak.normalize_space)\ndata['y'] = data.y.map(squeak.normalize_space) * 1.5\n\nfor i in range(len(data)):\n x = data.x.iloc[i]\n y = data.y.iloc[i]\n plt.plot(x, y, color='blue', alpha=.5)\nplt.text(0, 0, 'START', horizontalalignment='center')\nplt.text(1, 1.5, 'END', horizontalalignment='center')\nplt.show()", "Our next problem is that all of our trials last for different amounts of time.", "for i in range(len(data)):\n x = data.x.iloc[i]\n t = data.t.iloc[i]\n plt.plot(t, x, color='blue', alpha=.3)\nplt.xlabel('Time (msec)')\nplt.ylabel('x axis position')\nplt.show()", "We can deal with this in one of two ways, both of which I'll demonstrate.\nMost analyses standardize the trajectories into 101 time slices, for comparison,\nmeaning that for every trajectory, sample 50 is halfway through, regardless of how long that actually takes.\n(the code looks a little intimidating, and future versions of squeak should include a more concise way of doing this. 
You don't need to worry too much about what's happening here).", "data['nx'], data['ny'] = zip(*[squeak.even_time_steps(x, y, t) for x, y, t, in zip(data.x, data.y, data.t)])\n\nfor i, x in data.nx.iteritems():\n plt.plot(x, color='blue', alpha=.3)\nplt.xlabel('Normalized time step')\nplt.ylabel('x axis position')\nplt.show()", "An alternative approach is to keep the actual timestamp for each sample, so you can analyse the development of the trajectories in real time. To do this, you need to \"extend\" the data for all of the trials so that they all last for the same amount of time. In this example, we'll extend every trial to 5 seconds (5000 milliseconds).\nThis can be done by treating all of the time after the participant has clicked on their response as if they instead just kept the cursor right on top of the response until they reach 5 seconds. Again, you can copy this code literally, so don't worry about the details of the syntax here.", "max_time = 5000 # Alternatively, max_time = data.rt.max()\ndata['rx'] = [squeak.uniform_time(x, t, max_duration=5000) for x, t in zip(data.x, data.t)]\ndata['ry'] = [squeak.uniform_time(y, t, max_duration=5000) for y, t in zip(data.y, data.t)] \n\nfor i in range(len(data)):\n x = data.rx.iloc[i]\n plt.plot(x.index, x, color='blue', alpha=.3)\nplt.xlabel('Time (msec)')\nplt.ylabel('x axis position')\nplt.show()", "With all of this done, you're ready to calculate the statistics you'll be using in your analyses. 
Again, don't worry too much about the syntax here.\nThe most popular measures, calculated here, are:\n\nMaximum Deviation (MD): The size of the largest distance achieved between the actual trajectory and what it would have looked like if it was perfectly straight.\nArea Under the Curve (AUC): The area bounded between the trajectory and the ideal straight line path\nX-flips: changes of direction on the x axis\nInitiation time: The time taken for the participant to start moving the cursor.", "# Mouse Stats\ndata['md'] = data.apply(lambda trial: squeak.max_deviation(trial['nx'], trial['ny']), axis=1)\ndata['auc'] = data.apply(lambda trial: squeak.auc(trial['nx'], trial['ny']), axis=1)\ndata['xflips'] = data.nx.map(squeak.count_x_flips)\ndata['init_time'] = data.ry.map(lambda y: y.index[np.where(y > .05)][0])\n\n# Taking a look at condition means\nprint data.groupby('condition')['md', 'auc', 'xflips', 'init_time', 'rt'].mean()", "Finally, we'll save our processed data. First, we split off our processed mouse trajectory columns into separate data structures, which I'll explore a little more below.\nThe normalized time data are labelled nx and ny, and are formatted so that each row corresponds to a single trial, and each column is a time point, from 0 to 101. The real time data, rx and ry, are structured analogously, with each column corresponding to a timestamp. 
By default, these are broken up into 20 msec intervals, and the column headings (20, 40, 60, etc) reflect the actual timestamp.", "nx = pd.concat(list(data.nx), axis=1).T\nny = pd.concat(list(data.ny), axis=1).T\n\nrx = pd.concat(list(data.rx), axis=1).T\nry = pd.concat(list(data.ry), axis=1).T", "With that done, we can delete this information from our main data frame, so that it's compact enough to use easily in your data analysis package of choice, before finally saving everything as csv files.", "redundant = ['xTrajectory', 'yTrajectory', 'tTrajectory',\n 'x', 'y', 't', 'nx', 'ny', 'rx', 'ry']\ndata = data.drop(redundant, axis=1)\n\ndata.head()\n\n# Save data\ndata.to_csv('processed.csv', index=False)\nnx.to_csv('nx.csv', index=False)\nny.to_csv('ny.csv', index=False)\nrx.to_csv('rx.csv', index=False)\nry.to_csv('ry.csv', index=False)", "The commands in full, minus the unnecessary plotting commands,\nare below.\nYou can also download it as a script for running on your own data here.\nTo use it, you'll need to have python installed, along with the SciPy scientific library for python.\nYou should have your files arranged so that \nthe script is in the main folder\n(i.e. 
C:\\username\\Desktop\\MyResults),\nand your .csv files in a folder within that called data\n(C:\\username\\Desktop\\MyResults\\data),\nalthough obviously you can change this.\n```python\n!/usr/bin/env python\nimport os \nimport glob\ntry:\n import numpy as np # Numeric calculation\n import pandas as pd # General purpose data analysis library\n import squeak # For mouse data\nexcept:\n raise Exception(\"\\\nWhoops, you're missing some of the dependencies you need to run this script.\\n\\\nYou need to have numpy, pandas, and squeak installed.\")\nthis_dir = os.path.abspath('.')\nprint \"Running in %s\\n\\\nChecking for .csv files in %s\" % (this_dir, os.path.join(this_dir, 'data'))\ndatafiles = glob.glob('data/*.csv')\nprint \"%i files found:\" % len(datafiles)\nprint '\\n'.join(datafiles)\nprint \"\\nProcessing...\"\ndata = pd.concat(\n [pd.DataFrame(pd.read_csv(datafile)) \n for datafile in datafiles])\ndata['t'] = data.tTrajectory.map(squeak.list_from_string)\ndata['x'] = data.xTrajectory.map(squeak.list_from_string)\ndata['y'] = data.yTrajectory.map(squeak.list_from_string)\ndata['y'] = data.y * -1 # Reverse y axis\ndata['x'] = data.x.map(squeak.remap_right) # Flip the leftward responses\ndata['x'] = data.x.map(squeak.normalize_space)\ndata['y'] = data.y.map(squeak.normalize_space) * 1.5\nNormalized time\ndata['nx'], data['ny'] = zip(*[squeak.even_time_steps(x, y, t) \n for x, y, t, in zip(data.x, data.y, data.t)])\nReal time\nmax_time = 5000 # Alternatively, max_time = data.rt.max()\ndata['rx'] = [squeak.uniform_time(x, t, max_duration=5000) for x, t in zip(data.x, data.t)]\ndata['ry'] = [squeak.uniform_time(y, t, max_duration=5000) for y, t in zip(data.y, data.t)] \nMouse Stats\ndata['md'] = data.apply(lambda trial: squeak.max_deviation(trial['nx'], trial['ny']), axis=1)\ndata['auc'] = data.apply(lambda trial: squeak.auc(trial['nx'], trial['ny']), axis=1)\ndata['xflips'] = data.nx.map(squeak.count_x_flips)\ndata['init_time'] = data.ry.map(lambda y: 
y.index[np.where(y > .05)][0])\nSeparate data frames\nnx = pd.concat(list(data.nx), axis=1).T\nny = pd.concat(list(data.ny), axis=1).T\nrx = pd.concat(list(data.rx), axis=1).T\nry = pd.concat(list(data.ry), axis=1).T\nredundant = ['xTrajectory', 'yTrajectory', 'tTrajectory',\n 'x', 'y', 't', 'nx', 'ny', 'rx', 'ry']\ndata = data.drop(redundant, axis=1)\nprint \"Done!\\n\"\nSave data\ndata.to_csv('processed.csv', index=False)\nprint \"Summary statistics saved to %s\" % os.path.join(this_dir, 'processed.csv')\nnx.to_csv('nx.csv', index=False)\nny.to_csv('ny.csv', index=False)\nrx.to_csv('rx.csv', index=False)\nry.to_csv('ry.csv', index=False)\nfor n in ['nx', 'ny', 'rx', 'ry']:\n print \"Mouse trajectories saved to %s.csv\" % os.path.join(this_dir, n)\n```" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mraty/applied-data-science
course-2_applied_plotting/Assignment4.ipynb
mit
[ "Assignment 4\nBefore working on this assignment please read these instructions fully. In the submission area, you will notice that you can click the link to Preview the Grading for each step of the assignment. These are the criteria that will be used for peer grading. Please familiarize yourself with the criteria before beginning the assignment.\nThis assignment requires that you find at least two datasets on the web which are related, and that you visualize these datasets to answer a question with the broad topic of religious events or traditions (see below) for the region of Ann Arbor, Michigan, United States, or United States more broadly.\nYou can merge these datasets with data from different regions if you like! For instance, you might want to compare Ann Arbor, Michigan, United States to Ann Arbor, USA. In that case at least one source file must be about Ann Arbor, Michigan, United States.\nYou are welcome to choose datasets at your discretion, but keep in mind they will be shared with your peers, so choose appropriate datasets. Sensitive, confidential, illicit, and proprietary materials are not good choices for datasets for this assignment. You are welcome to upload datasets of your own as well, and link to them using a third party repository such as github, bitbucket, pastebin, etc. Please be aware of the Coursera terms of service with respect to intellectual property.\nAlso, you are welcome to preserve data in its original language, but for the purposes of grading you should provide English translations. 
You are welcome to provide multiple visuals in different languages if you would like!\nAs this assignment is for the whole course, you must incorporate principles discussed in the first week, such as having as high data-ink ratio (Tufte) and aligning with Cairo’s principles of truth, beauty, function, and insight.\nHere are the assignment instructions:\n\nState the region and the domain category that your data sets are about (e.g., Ann Arbor, Michigan, United States and religious events or traditions).\nYou must state a question about the domain category and region that you identified as being interesting.\nYou must provide at least two links to available datasets. These could be links to files such as CSV or Excel files, or links to websites which might have data in tabular form, such as Wikipedia pages.\nYou must upload an image which addresses the research question you stated. In addition to addressing the question, this visual should follow Cairo's principles of truthfulness, functionality, beauty, and insightfulness.\nYou must contribute a short (1-2 paragraph) written justification of how your visualization addresses your stated research question.\n\nWhat do we mean by religious events or traditions? For this category you might consider calendar events, demographic data about religion in the region and neighboring regions, participation in religious events, or how religious events relate to political events, social movements, or historical events.\nTips\n\nWikipedia is an excellent source of data, and I strongly encourage you to explore it for new data sources.\nMany governments run open data initiatives at the city, region, and country levels, and these are wonderful resources for localized data sources.\nSeveral international agencies, such as the United Nations, the World Bank, the Global Open Data Index are other great places to look for data.\nThis assignment requires you to convert and clean datafiles. 
Check out the discussion forums for tips on how to do this from various sources, and share your successes with your fellow students!\n\nExample\nLooking for an example? Here's what our course assistant put together for the Ann Arbor, MI, USA area using sports and athletics as the topic. Example Solution File\nSolution\nIn the solution personal food expenditure in 2016 is compared to the number of religious (Catholic) holidays in the US. Target is to see if there is a correlation between number of holidays and food expenditure in the US in 2016.\nDatasets\nPersonal consumption expenditure\nU.S. Bureau of Economic Analysis, Personal consumption expenditures: Food [DFXARC1M027SBEA]\nFiltered dataset for 2016 expenditure in CSV format\nCatholic holidays\nA List of Catholic Holidays in the 2016 Year.\nData preprocessing", "import pandas as pd\nimport numpy as np\n\ndata_url = \"https://fred.stlouisfed.org/graph/fredgraph.csv?chart_type=line&recession_bars=on&log_scales=&bgcolor=%23e1e9f0&graph_bgcolor=%23ffffff&fo=Open+Sans&ts=12&tts=12&txtcolor=%23444444&show_legend=yes&show_axis_titles=yes&drp=0&cosd=2015-12-26&coed=2017-01-30&height=450&stacking=&range=Custom&mode=fred&id=DFXARC1M027SBEA&transformation=lin&nd=1959-01-01&ost=-99999&oet=99999&lsv=&lev=&mma=0&fml=a&fgst=lin&fgsnd=2009-06-01&fq=Monthly&fam=avg&vintage_date=&revision_date=&line_color=%234572a7&line_style=solid&lw=2&scale=left&mark_type=none&mw=2&width=1168\"\ndf_expenditure = pd.read_csv(data_url)\n\ndf_expenditure.head()\n\n# Find string between two strings\ndef find_between( s, first, last ):\n try:\n start = s.index( first ) + len( first )\n end = s.index( last, start )\n return s[start:end]\n except ValueError:\n return \"\"\n\nfrom urllib.request import urlopen\n\nlink = \"http://www.calendar-12.com/catholic_holidays/2016\"\nresponse = urlopen(link)\ncontent = response.read().decode(\"utf-8\") \n\n# 'Poor mans' way of parsing days from HTML.\n# Using this approach since learning environment does 
not\n# have proper packages installed for html parsing.\ntable = find_between(content, \"<tbody>\",\"</tbody>\");\nrows = table.split(\"/tr\")\n\ncsv = \"Day\\n\"\nfor row in rows:\n day = find_between(row, '\">', \"</t\")\n day = find_between(day, \"> \", \"</\")\n csv = csv + day + \"\\n\"\n\nprint(csv)\n\nimport sys\nif sys.version_info[0] < 3: \n from StringIO import StringIO\nelse:\n from io import StringIO\n\n \ndf_catholic = pd.read_csv(StringIO(csv), sep=\";\")\ndf_catholic.head()\n\nfrom datetime import datetime\n\n# Strip out weekday name\ndf_catholic[\"Date\"] = df_catholic.apply(lambda row:row[\"Day\"][row[\"Day\"].find(\",\")+1:], axis=1)\n# Convert to date\ndf_catholic[\"Date\"] = df_catholic.apply(lambda row: datetime.strptime(row[\"Date\"], \" %B %d, %Y\"), axis=1)\ndf_catholic[\"Holiday\"] = 1\ndf_catholic.head()\n\n# Convert the expenditure dates to date type as well\ndf_expenditure[\"Date\"] = df_expenditure.apply(lambda row: datetime.strptime(row[\"DATE\"], \"%Y-%m-%d\"), axis=1)\ndf_expenditure.head()", "Visualise correlation\nLet's see how the correlation looks when comparing the number of Catholic holidays to monthly spending in the US.\nTarget for the visualisation is to show Catholic holidays for a month and project those onto the expenditure curve. 
This helps us to see if there is a direct correlation between expenditure and holidays or not.", "import matplotlib.pyplot as plt\nimport matplotlib.dates as mdates\n\n%matplotlib notebook\n\n\nfig, ax = plt.subplots()\na = ax.plot(list(df_expenditure[\"Date\"]), df_expenditure[\"DFXARC1M027SBEA\"], label=\"Expenditure\", zorder=10)\n\nax.spines['top'].set_visible(False)\nax.spines['right'].set_visible(False)\n\nplt.xlabel(\"Date\")\nplt.ylabel(\"Billions of Dollars\")\nplt.title(\"Catholic holiday effect on US food expenditure, 2016\")\n\nax2 = ax.twinx()\n#b = ax2.scatter(list(df_catholic[\"Date\"]),df_catholic[\"Holiday\"], s=60, c=\"red\", alpha=0.7, label=\"Holiday\")\n\nb = ax2.bar(list(df_catholic[\"Date\"]),df_catholic[\"Holiday\"], alpha=0.2, label=\"Holiday\", color=\"Red\")\n\nax2 = plt.gca()\nax2.spines['top'].set_visible(False)\nax2.spines['right'].set_visible(False)\n\n#my_xticks = ['','','Holiday','', '']\n#plt.yticks(list(df_catholic[\"Holiday\"]), my_xticks)\n\nax2 = plt.gca()\nax2.yaxis.set_visible(False)\n\n# Combine legend\nh1, l1 = ax.get_legend_handles_labels()\nh2, l2 = ax2.get_legend_handles_labels()\nax.legend(h1+h2, l1+l2, loc=4, frameon = False)\n\nmonths = ['Jan','Feb','Mar','Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct' ,'Nov', 'Dec', 'Jan']\nplt.xticks(list(df_expenditure[\"Date\"]), months, rotation='vertical')\nfig.autofmt_xdate()\n\nplt.show()", "Conclusion\nThis visualisation aims to find an answer to a question about the correlation between food expenditure in 2016 and Catholic holidays in the US in 2016. Two different web sites were scraped for information. For Catholic holidays Calendar-12.com and for food expenditure in the US FRED Economic Data were utilized. 
Food expenditure and holidays were plotted in a single graph, where expenditure is shown as a line graph and holidays are projected onto it as thin bars.\nThe plot shows the change in food expenditure and the holidays in each month. From the graph it is clear that there is no correlation between the number of holidays and the change in food expenditure. However, we can see that some months, April and December, show a sharp rise in food expenditure. This could be because of holidays, especially in December, when Catholic Christmas and New Year fall. But based only on the data used in this paper, we cannot draw conclusions about the relationship between food expenditure and Catholic holidays." ]
[ "markdown", "code", "markdown", "code", "markdown" ]
junhwanjang/DataSchool
Lecture/02. 파이썬 프로그래밍/8) pandas 패키지의 소개.ipynb
mit
[ "Introduction to the pandas Package\nThe pandas package\n\n\nA Python implementation of R's data.frame, a data type that carries an Index\n\n\nReferences\n\nhttp://pandas.pydata.org/\nhttp://pandas.pydata.org/pandas-docs/stable/10min.html\nhttp://pandas.pydata.org/pandas-docs/stable/tutorials.html\n\npandas data types\n\nSeries\ntime-series data\n\na 1-dimensional NumPy Array with an Index\n\n\nDataFrame\n\nmulti-field time-series data or tabular data\n\na 2-dimensional NumPy Array with an Index\n\n\nIndex\n\nLabel: a name for each Row/Column \nName: a name for the index itself\n\n<img src=\"https://docs.google.com/drawings/d/12FKb94RlpNp7hZNndpnLxmdMJn3FoLfGwkUAh33OmOw/pub?w=602&h=446\" style=\"width:60%; margin:0 auto 0 auto;\">\nSeries\n\n\nA sequence of data with a Row Index\n\n\nCreation\n\nAdding/Deleting\nIndexing\n\nSeries without an explicit Index", "s = pd.Series([4, 7, -5, 3])\ns\n\ns.values\n\ntype(s.values)\n\ns.index\n\ntype(s.index)", "Vectorized Operations", "s * 2\n\nnp.exp(s)", "Series with an explicit Index\n\nspecify the Index with the index argument at creation\neach Index element is a Label that serves as the key for its data\ndict", "s2 = pd.Series([4, 7, -5, 3], index=[\"d\", \"b\", \"a\", \"c\"])\ns2\n\ns2.index", "Series Indexing 1: Label Indexing\n\nSingle Label\nLabel Slicing\nincludes the last element\na List whose elements are Labels (List Fancy Indexing using Labels)\nrearranged in the given order", "s2['a']\n\ns2[\"b\":\"c\"]\n\ns2[['a', 'b']]", "Series Indexing 2: Integer Indexing\n\nSingle Integer\nInteger Slicing\nordinary Slicing, which excludes the last element\nInteger List Indexing (List Fancy Indexing)\nBoolean Fancy Indexing", "s2[2]\n\ns2[1:4]\n\ns2[[2, 1]]\n\ns2[s2 > 0]", "dict-style operations", "\"a\" in s2, \"e\" in s2\n\nfor k, v in s2.iteritems():\n print(k, v)\n\ns2[\"d\":\"a\"]", "Creating a Series from dict data\n\nif a separate index is specified, only the specified entries are used", "sdata = {'Ohio': 35000, 'Texas': 71000, 'Oregon': 16000, 'Utah': 5000}\ns3 = pd.Series(sdata)\ns3\n\nstates = ['California', 'Ohio', 'Oregon', 'Texas']\ns4 = pd.Series(sdata, index=states)\ns4\n\npd.isnull(s4)\n\npd.notnull(s4)\n\ns4.isnull()\n\ns4.notnull()", "Index-aligned operations", "print(s3.values, s4.values)\ns3.values + s4.values\n\ns3 + s4", "Index names", "s4\n\ns4.name = \"population\"\ns4\n\ns4.index.name = 
\"state\"\ns4", "Changing the Index", "s\n\ns.index\n\ns.index = ['Bob', 'Steve', 'Jeff', 'Ryan']\ns\n\ns.index", "DataFrame\n\nMulti-Series\nmultiple Series that share the same Row index\n\na dict whose values are Series\n\n\na 2-dimensional matrix\n\n\nif you think of a DataFrame as a matrix, each Series plays the role of a Column of the matrix\n\n\nDifferences from a NumPy Array \n\n\neach Column (Series) may have a different type.\n\n\nColumn Index\n\nit has both a (Row) Index and a Column Index.\na Label can be assigned to each Column (Series)\ndata can be accessed using the (Row) Index and a Column Label together", "data = {\n 'state': ['Ohio', 'Ohio', 'Ohio', 'Nevada', 'Nevada'],\n 'year': [2000, 2001, 2002, 2001, 2002],\n 'pop': [1.5, 1.7, 3.6, 2.4, 2.9]\n}\ndf = pd.DataFrame(data)\ndf\n\npd.DataFrame(data, columns=['year', 'state', 'pop'])\n\ndf.dtypes", "DataFrame with explicit Column/Row Indexes", "df2 = pd.DataFrame(data, \n columns=['year', 'state', 'pop', 'debt'],\n index=['one', 'two', 'three', 'four', 'five'])\ndf2", "Single Column Access", "df[\"state\"]\n\ntype(df[\"state\"])\n\ndf.state", "Column Data Update", "df2['debt'] = 16.5\ndf2\n\ndf2['debt'] = np.arange(5)\ndf2\n\ndf2['debt'] = pd.Series([-1.2, -1.5, -1.7], index=['two', 'four', 'five'])\ndf2", "Add Column", "df2['eastern'] = df2.state == 'Ohio'\ndf2", "Delete Column", "del df2['eastern']\ndf2", "The inplace option\n\nfunctions/methods come in two kinds:\nthose that modify the object itself\n\nthose that leave the object as-is and return a new, modified object\n\n\nmost DataFrame methods take an inplace option\n\nwith inplace=True the output is None and the object itself is modified\nwith inplace=False the object itself is preserved and a new, modified object is returned", "x = [3, 6, 1, 4]\nsorted(x)\n\nx\n\nx.sort()\nx", "Deleting Rows/Columns with the drop method\n\nthe del function \nan inplace operation\nthe drop method \nreturns the Series/DataFrame with the items deleted\nfor a Series, Rows are deleted\nfor a DataFrame, the axis argument selects Row/Column\naxis=0 (default): Row\naxis=1: Column", "s = pd.Series(np.arange(5.), index=['a', 'b', 'c', 'd', 'e'])\ns\n\ns2 = s.drop('c')\ns2\n\ns\n\ns.drop([\"b\", \"c\"])\n\ndf = pd.DataFrame(np.arange(16).reshape((4, 4)),\n index=['Ohio', 'Colorado', 'Utah', 'New York'],\n columns=['one', 'two', 'three', 'four'])\ndf\n\ndf.drop(['Colorado', 'Ohio'])\n\ndf.drop('two', axis=1)\n\ndf.drop(['two', 'four'], axis=1)", 
"Creating a DataFrame from a nested dict", "pop = {\n 'Nevada': {\n 2001: 2.4, \n 2002: 2.9\n },\n 'Ohio': {\n 2000: 1.5, \n 2001: 1.7, \n 2002: 3.6\n }\n}\n\n\ndf3 = pd.DataFrame(pop)\ndf3", "Creating a DataFrame from a dict of Series", "pdata = {\n 'Ohio': df3['Ohio'][:-1],\n 'Nevada': df3['Nevada'][:2]\n}\npd.DataFrame(pdata)", "Converting to a NumPy array", "df3.values\n\ndf2.values", "Column Indexing of a DataFrame\n\nSingle Label key\nSingle Label attribute\nLabel List Fancy Indexing", "df2\n\ndf2[\"year\"]\n\ndf2.year\n\ndf2[[\"state\", \"debt\", \"year\"]]\n\ndf2[[\"year\"]]" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
guyk1971/deep-learning
sentiment-rnn/Sentiment_RNN_Solution.ipynb
mit
[ "Sentiment Analysis with an RNN\nIn this notebook, you'll implement a recurrent neural network that performs sentiment analysis. Using an RNN rather than a feedforward network is more accurate since we can include information about the sequence of words. Here we'll use a dataset of movie reviews, accompanied by labels.\nThe architecture for this network is shown below.\n<img src=\"assets/network_diagram.png\" width=400px>\nHere, we'll pass in words to an embedding layer. We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the word2vec lesson. You can actually train up an embedding with word2vec and use it here. But it's good enough to just have an embedding layer and let the network learn the embedding table on its own.\nFrom the embedding layer, the new representations will be passed to LSTM cells. These will add recurrent connections to the network so we can include information about the sequence of words in the data. Finally, the LSTM cells will go to a sigmoid output layer here. We're using the sigmoid because we're trying to predict if this text has positive or negative sentiment. The output layer will just be a single unit then, with a sigmoid activation function.\nWe don't care about the sigmoid outputs except for the very last one, we can ignore the rest. We'll calculate the cost from the output of the last step and the training label.", "import numpy as np\nimport tensorflow as tf\n\nwith open('../sentiment-network/reviews.txt', 'r') as f:\n reviews = f.read()\nwith open('../sentiment-network/labels.txt', 'r') as f:\n labels = f.read()\n\nreviews[:2000]", "Data preprocessing\nThe first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. 
We'll also want to clean it up a bit.\nYou can see an example of the reviews data above. We'll want to get rid of those periods. Also, you might notice that the reviews are delimited with newlines \\n. To deal with those, I'm going to split the text into each review using \\n as the delimiter. Then I can combine all the reviews back together into one big string.\nFirst, let's remove all punctuation. Then get all the text without the newlines and split it into individual words.", "from string import punctuation\nall_text = ''.join([c for c in reviews if c not in punctuation])\nreviews = all_text.split('\\n')\n\nall_text = ' '.join(reviews)\nwords = all_text.split()\n\nall_text[:2000]\n\nwords[:100]", "Encoding the words\nThe embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network.\n\nExercise: Now you're going to encode the words with integers. Build a dictionary that maps words to integers. Later we're going to pad our input vectors with zeros, so make sure the integers start at 1, not 0.\nAlso, convert the reviews to integers and store the reviews in a new list called reviews_ints.", "from collections import Counter\ncounts = Counter(words)\nvocab = sorted(counts, key=counts.get, reverse=True)\nvocab_to_int = {word: ii for ii, word in enumerate(vocab, 1)}\n\nreviews_ints = []\nfor each in reviews:\n reviews_ints.append([vocab_to_int[word] for word in each.split()])", "Encoding the labels\nOur labels are \"positive\" or \"negative\". 
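A toy version of the vocabulary encoding above, on a made-up sentence, shows why starting the integers at 1 matters — 0 stays free for the padding added later:

```python
from collections import Counter

words = "the movie was the best movie ever".split()

counts = Counter(words)
vocab = sorted(counts, key=counts.get, reverse=True)
# enumerate(vocab, 1) starts the integer ids at 1, reserving 0 for padding
vocab_to_int = {word: ii for ii, word in enumerate(vocab, 1)}

assert vocab_to_int[vocab[0]] == 1       # most frequent word -> smallest id
assert min(vocab_to_int.values()) == 1   # 0 never appears as a word id
encoded = [vocab_to_int[w] for w in words]
assert len(encoded) == len(words)
```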
To use these labels in our network, we need to convert them to 0 and 1.\n\nExercise: Convert labels from positive and negative to 1 and 0, respectively.", "labels = labels.split('\\n')\nlabels = np.array([1 if each == 'positive' else 0 for each in labels])\n\nreview_lens = Counter([len(x) for x in reviews_ints])\nprint(\"Zero-length reviews: {}\".format(review_lens[0]))\nprint(\"Maximum review length: {}\".format(max(review_lens)))", "Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 words.\n\nExercise: First, remove the review with zero length from the reviews_ints list.", "non_zero_idx = [ii for ii, review in enumerate(reviews_ints) if len(review) != 0]\nlen(non_zero_idx)\n\nreviews_ints[-1]", "Turns out it's the final review that has zero length. But that might not always be the case, so let's make it more general.", "reviews_ints = [reviews_ints[ii] for ii in non_zero_idx]\nlabels = np.array([labels[ii] for ii in non_zero_idx])", "Exercise: Now, create an array features that contains the data we'll pass to the network. The data should come from reviews_ints, since we want to feed integers to the network. Each row should be 200 elements long. For reviews shorter than 200 words, left pad with 0s. That is, if the review is ['best', 'movie', 'ever'], [117, 18, 128] as integers, the row will look like [0, 0, 0, ..., 0, 117, 18, 128]. For reviews longer than 200, use only the first 200 words as the feature vector.\n\nThis isn't trivial and there are a bunch of ways to do this. 
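One of those ways — a single left-padding/truncating slice assignment — works because NumPy clips out-of-range slice starts, which handles the too-long case for free. A shrunk-down sketch (seq_len=5 instead of 200, made-up reviews):

```python
import numpy as np

seq_len = 5  # stands in for 200
reviews_ints = [[117, 18, 128],           # short review -> left-padded
                [1, 2, 3, 4, 5, 6, 7]]    # long review  -> truncated

features = np.zeros((len(reviews_ints), seq_len), dtype=int)
for i, row in enumerate(reviews_ints):
    # For the long review, -len(row) is out of range, so the slice clips
    # to the full row, while [:seq_len] truncates the data to fit
    features[i, -len(row):] = np.array(row)[:seq_len]

assert features[0].tolist() == [0, 0, 117, 18, 128]
assert features[1].tolist() == [1, 2, 3, 4, 5]
```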
But, if you're going to be building your own deep learning networks, you're going to have to get used to preparing your data.", "seq_len = 200\nfeatures = np.zeros((len(reviews_ints), seq_len), dtype=int)\nfor i, row in enumerate(reviews_ints):\n features[i, -len(row):] = np.array(row)[:seq_len]\n\nfeatures[:10,:100]", "Training, Validation, Test\nWith our data in nice shape, we'll split it into training, validation, and test sets.\n\nExercise: Create the training, validation, and test sets here. You'll need to create sets for the features and the labels, train_x and train_y for example. Define a split fraction, split_frac as the fraction of data to keep in the training set. Usually this is set to 0.8 or 0.9. The rest of the data will be split in half to create the validation and testing data.", "split_frac = 0.8\nsplit_idx = int(len(features)*0.8)\ntrain_x, val_x = features[:split_idx], features[split_idx:]\ntrain_y, val_y = labels[:split_idx], labels[split_idx:]\n\ntest_idx = int(len(val_x)*0.5)\nval_x, test_x = val_x[:test_idx], val_x[test_idx:]\nval_y, test_y = val_y[:test_idx], val_y[test_idx:]\n\nprint(\"\\t\\t\\tFeature Shapes:\")\nprint(\"Train set: \\t\\t{}\".format(train_x.shape), \n \"\\nValidation set: \\t{}\".format(val_x.shape),\n \"\\nTest set: \\t\\t{}\".format(test_x.shape))", "With train, validation, and test fractions of 0.8, 0.1, 0.1, the final shapes should look like:\nFeature Shapes:\nTrain set: (20000, 200) \nValidation set: (2500, 200) \nTest set: (2500, 200)\nBuild the graph\nHere, we'll build the graph. First up, defining the hyperparameters.\n\nlstm_size: Number of units in the hidden layers in the LSTM cells. Usually larger is better performance-wise. Common values are 128, 256, 512, etc.\nlstm_layers: Number of LSTM layers in the network. I'd start with 1, then add more if I'm underfitting.\nbatch_size: The number of reviews to feed the network in one training pass. 
Typically this should be set as high as you can go without running out of memory.\nlearning_rate: Learning rate", "lstm_size = 256\nlstm_layers = 1\nbatch_size = 500\nlearning_rate = 0.001", "For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.\n\nExercise: Create the inputs_, labels_, and dropout keep_prob placeholders using tf.placeholder. labels_ needs to be two-dimensional to work with some functions later. Since keep_prob is a scalar (a 0-dimensional tensor), you shouldn't provide a size to tf.placeholder.", "n_words = len(vocab_to_int) + 1 # Adding 1 because we use 0's for padding, dictionary started at 1\n\n# Create the graph object\ngraph = tf.Graph()\n# Add nodes to the graph\nwith graph.as_default():\n inputs_ = tf.placeholder(tf.int32, [None, None], name='inputs')\n labels_ = tf.placeholder(tf.int32, [None, None], name='labels')\n keep_prob = tf.placeholder(tf.float32, name='keep_prob')", "Embedding\nNow we'll add an embedding layer. We need to do this because there are 74000 words in our vocabulary. It is massively inefficient to one-hot encode our classes here. You should remember dealing with this problem from the word2vec lesson. Instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using word2vec, then load it here. But, it's fine to just make a new layer and let the network learn the weights.\n\nExercise: Create the embedding lookup matrix as a tf.Variable. Use that embedding matrix to get the embedded vectors to pass to the LSTM cell with tf.nn.embedding_lookup. This function takes the embedding matrix and an input tensor, such as the review vectors. Then, it'll return another tensor with the embedded vectors. 
So, if the embedding layer has 300 units and the input sequences are 200 steps long, the function will return a tensor with size [batch_size, 200, 300].", "# Size of the embedding vectors (number of units in the embedding layer)\nembed_size = 300 \n\nwith graph.as_default():\n embedding = tf.Variable(tf.random_uniform((n_words, embed_size), -1, 1))\n embed = tf.nn.embedding_lookup(embedding, inputs_)", "LSTM cell\n<img src=\"assets/network_diagram.png\" width=400px>\nNext, we'll create our LSTM cells to use in the recurrent network (TensorFlow documentation). Here we are just defining what the cells look like. This isn't actually building the graph, just defining the type of cells we want in our graph.\nTo create a basic LSTM cell for the graph, you'll want to use tf.contrib.rnn.BasicLSTMCell. Looking at the function documentation:\ntf.contrib.rnn.BasicLSTMCell(num_units, forget_bias=1.0, input_size=None, state_is_tuple=True, activation=&lt;function tanh at 0x109f1ef28&gt;)\nyou can see it takes a parameter called num_units, the number of units in the cell, called lstm_size in this code. So then, you can write something like \nlstm = tf.contrib.rnn.BasicLSTMCell(num_units)\nto create an LSTM cell with num_units. Next, you can add dropout to the cell with tf.contrib.rnn.DropoutWrapper. This just wraps the cell in another cell, but with dropout added to the inputs and/or outputs. It's a really convenient way to make your network better with almost no effort! So you'd do something like\ndrop = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)\nMost of the time, your network will have better performance with more layers. That's sort of the magic of deep learning, adding more layers allows the network to learn really complex relationships. Again, there is a simple way to create multiple layers of LSTM cells with tf.contrib.rnn.MultiRNNCell:\ncell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)\nHere, [drop] * lstm_layers creates a list of cells (drop) that is lstm_layers long. 
The MultiRNNCell wrapper builds this into multiple layers of RNN cells, one for each cell in the list.\nSo the final cell you're using in the network is actually multiple (or just one) LSTM cells with dropout. But it all works the same from an architectural viewpoint, just a more complicated graph in the cell.\n\nExercise: Below, use tf.contrib.rnn.BasicLSTMCell to create an LSTM cell. Then, add dropout to it with tf.contrib.rnn.DropoutWrapper. Finally, create multiple LSTM layers with tf.contrib.rnn.MultiRNNCell.\n\nHere is a tutorial on building RNNs that will help you out.", "with graph.as_default():\n # Your basic LSTM cell\n lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)\n \n # Add dropout to the cell\n drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)\n \n # Stack up multiple LSTM layers, for deep learning\n cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)\n \n # Getting an initial state of all zeros\n initial_state = cell.zero_state(batch_size, tf.float32)", "RNN forward pass\n<img src=\"assets/network_diagram.png\" width=400px>\nNow we need to actually run the data through the RNN nodes. You can use tf.nn.dynamic_rnn to do this. You'd pass in the RNN cell you created (our multiple layered LSTM cell for instance), and the inputs to the network.\noutputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state)\nAbove I created an initial state, initial_state, to pass to the RNN. This is the cell state that is passed between the hidden layers in successive time steps. tf.nn.dynamic_rnn takes care of most of the work for us. We pass in our cell and the input to the cell, then it does the unrolling and everything else for us. It returns outputs for each time step and the final_state of the hidden layer.\n\nExercise: Use tf.nn.dynamic_rnn to add the forward pass through the RNN. 
Remember that we're actually passing in vectors from the embedding layer, embed.", "with graph.as_default():\n outputs, final_state = tf.nn.dynamic_rnn(cell, embed,\n initial_state=initial_state)", "Output\nWe only care about the final output, we'll be using that as our sentiment prediction. So we need to grab the last output with outputs[:, -1], then calculate the cost from that and labels_.", "with graph.as_default():\n predictions = tf.contrib.layers.fully_connected(outputs[:, -1], 1, activation_fn=tf.sigmoid)\n cost = tf.losses.mean_squared_error(labels_, predictions)\n \n optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)", "Validation accuracy\nHere we can add a few nodes to calculate the accuracy which we'll use in the validation pass.", "with graph.as_default():\n correct_pred = tf.equal(tf.cast(tf.round(predictions), tf.int32), labels_)\n accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))", "Batching\nThis is a simple function for returning batches from our data. First it removes data such that we only have full batches. Then it iterates through the x and y arrays and returns slices out of those arrays with size [batch_size].", "def get_batches(x, y, batch_size=100):\n \n n_batches = len(x)//batch_size\n x, y = x[:n_batches*batch_size], y[:n_batches*batch_size]\n for ii in range(0, len(x), batch_size):\n yield x[ii:ii+batch_size], y[ii:ii+batch_size]", "Training\nBelow is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. 
Before you run this, make sure the checkpoints directory exists.", "epochs = 10\n\nwith graph.as_default():\n saver = tf.train.Saver()\n\nwith tf.Session(graph=graph) as sess:\n sess.run(tf.global_variables_initializer())\n iteration = 1\n for e in range(epochs):\n state = sess.run(initial_state)\n \n for ii, (x, y) in enumerate(get_batches(train_x, train_y, batch_size), 1):\n feed = {inputs_: x,\n labels_: y[:, None],\n keep_prob: 0.5,\n initial_state: state}\n loss, state, _ = sess.run([cost, final_state, optimizer], feed_dict=feed)\n \n if iteration%5==0:\n print(\"Epoch: {}/{}\".format(e, epochs),\n \"Iteration: {}\".format(iteration),\n \"Train loss: {:.3f}\".format(loss))\n\n if iteration%25==0:\n val_acc = []\n val_state = sess.run(cell.zero_state(batch_size, tf.float32))\n for x, y in get_batches(val_x, val_y, batch_size):\n feed = {inputs_: x,\n labels_: y[:, None],\n keep_prob: 1,\n initial_state: val_state}\n batch_acc, val_state = sess.run([accuracy, final_state], feed_dict=feed)\n val_acc.append(batch_acc)\n print(\"Val acc: {:.3f}\".format(np.mean(val_acc)))\n iteration +=1\n saver.save(sess, \"checkpoints/sentiment.ckpt\")", "Testing", "test_acc = []\nwith tf.Session(graph=graph) as sess:\n saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))\n test_state = sess.run(cell.zero_state(batch_size, tf.float32))\n for ii, (x, y) in enumerate(get_batches(test_x, test_y, batch_size), 1):\n feed = {inputs_: x,\n labels_: y[:, None],\n keep_prob: 1,\n initial_state: test_state}\n batch_acc, test_state = sess.run([accuracy, final_state], feed_dict=feed)\n test_acc.append(batch_acc)\n print(\"Test accuracy: {:.3f}\".format(np.mean(test_acc)))" ]
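As a quick sanity check of the get_batches generator defined above, it can be run on plain Python lists with toy sizes: the trailing partial batch is dropped, and every yielded batch has exactly batch_size items.

```python
def get_batches(x, y, batch_size=100):
    # Same logic as the notebook's generator
    n_batches = len(x) // batch_size
    x, y = x[:n_batches * batch_size], y[:n_batches * batch_size]
    for ii in range(0, len(x), batch_size):
        yield x[ii:ii + batch_size], y[ii:ii + batch_size]

x = list(range(23))
y = [i % 2 for i in x]
batches = list(get_batches(x, y, batch_size=10))

assert len(batches) == 2                # 23 // 10 = 2 full batches; 3 leftovers dropped
assert batches[0][0] == list(range(10))
assert all(len(bx) == 10 for bx, by in batches)
```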
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
Startupsci/data-science-notebooks
.ipynb_checkpoints/titanic-data-science-solutions-checkpoint.ipynb
mit
[ "Titanic Data Science Solutions\nThis notebook is a companion to the book Data Science Solutions. The notebook walks us through a typical workflow for solving data science competitions at sites like Kaggle.\nThere are several excellent notebooks to study data science competition entries. However many will skip some of the explanation on how the solution is developed as these notebooks are developed by experts for experts. The objective of this notebook is to follow a step-by-step workflow, explaining each step and rationale for every decision we take during solution development.\nWorkflow stages\nThe competition solution workflow goes through seven stages described in the Data Science Solutions book's sample chapter online here.\n\nQuestion or problem definition.\nAcquire training and testing data.\nWrangle, prepare, cleanse the data.\nAnalyze, identify patterns, and explore the data.\nModel, predict and solve the problem.\nVisualize, report, and present the problem solving steps and final solution.\nSupply or submit the results.\n\nThe workflow indicates the general sequence of how each stage may follow the other. However there are use cases with exceptions.\n\nWe may combine multiple workflow stages. We may analyze by visualizing data.\nPerform a stage earlier than indicated. We may analyze data before and after wrangling.\nPerform a stage multiple times in our workflow. Visualize stage may be used multiple times.\nDrop a stage altogether. We may not need supply stage to productize or service enable our dataset for a competition.\n\nQuestion and problem definition\nCompetition sites like Kaggle define the problem to solve or questions to ask while providing the datasets for training your data science model and testing the model results against a test dataset. 
The question or problem definition for the Titanic Survival competition is described here at Kaggle.\n\nKnowing from a training set of samples listing passengers who survived or did not survive the Titanic disaster, can our model determine based on a given test dataset not containing the survival information, if these passengers in the test dataset survived or not.\n\nWe may also want to develop some early understanding about the domain of our problem. This is described on the Kaggle competition description page here. Here are the highlights to note.\n\nOn April 15, 1912, during her maiden voyage, the Titanic sank after colliding with an iceberg, killing 1502 out of 2224 passengers and crew. That translates to a 32% survival rate.\nOne of the reasons that the shipwreck led to such loss of life was that there were not enough lifeboats for the passengers and crew.\nAlthough there was some element of luck involved in surviving the sinking, some groups of people were more likely to survive than others, such as women, children, and the upper-class.\n\nWorkflow goals\nThe data science solutions workflow solves for seven major goals.\nClassifying. We may want to classify or categorize our samples. We may also want to understand the implications or correlation of different classes with our solution goal.\nCorrelating. One can approach the problem based on available features within the training dataset. Which features within the dataset contribute significantly to our solution goal? Statistically speaking, is there a correlation between a feature and the solution goal? As the feature values change does the solution state change as well, and vice versa? This can be tested both for numerical and categorical features in the given dataset. We may also want to determine correlation among features other than survival for subsequent goals and workflow stages. Correlating certain features may help in creating, completing, or correcting features.\nConverting. 
For the modeling stage, one needs to prepare the data. Depending on the choice of model algorithm one may require all features to be converted to numerical equivalent values. So for instance converting text categorical values to numeric values.\nCompleting. Data preparation may also require us to estimate any missing values within a feature. Model algorithms may work best when there are no missing values.\nCorrecting. We may also analyze the given training dataset for errors or possibly inaccurate values within features and try to correct these values or exclude the samples containing the errors. One way to do this is to detect any outliers among our samples or features. We may also completely discard a feature if it is not contributing to the analysis or may significantly skew the results.\nCreating. Can we create new features based on an existing feature or a set of features, such that the new feature follows the correlation, conversion, completeness goals.\nCharting. How to select the right visualization plots and charts depending on the nature of the data and the solution goals. A good start is to read the Tableau paper on Which chart or graph is right for you?.", "# data analysis and wrangling\nimport pandas as pd\nimport numpy as np\nimport random as rnd\n\n# visualization\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\n# machine learning\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.svm import SVC, LinearSVC\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn.naive_bayes import GaussianNB\nfrom sklearn.linear_model import Perceptron\nfrom sklearn.linear_model import SGDClassifier\nfrom sklearn.tree import DecisionTreeClassifier", "Acquire data\nThe Python Pandas package helps us work with our datasets. 
We start by acquiring the training and testing datasets into Pandas DataFrames.", "# read titanic training & test csv files as a pandas DataFrame\ntrain_df = pd.read_csv('data/titanic-kaggle/train.csv')\ntest_df = pd.read_csv('data/titanic-kaggle/test.csv')", "Analyze by describing data\nPandas also helps describe the datasets, answering the following questions early in our project.\nWhich features are available in the dataset?\nNoting the feature names for directly manipulating or analyzing these. These feature names are described on the Kaggle data page here.", "print(train_df.columns.values)", "Which features are categorical?\nThese values classify the samples into sets of similar samples. Within categorical features are the values nominal, ordinal, ratio, or interval based? Among other things this helps us select the appropriate plots for visualization.\n\nCategorical: Survived, Sex, and Embarked. Ordinal: Pclass.\n\nWhich features are numerical?\nThese values change from sample to sample. Within numerical features are the values discrete, continuous, or timeseries based? Among other things this helps us select the appropriate plots for visualization.\n\nContinuous: Age, Fare. Discrete: SibSp, Parch.", "# preview the data\ntrain_df.head()", "Which features are mixed data types?\nNumerical, alphanumeric data within same feature. These are candidates for correcting goal.\n\nTicket is a mix of numeric and alphanumeric data types. 
Cabin is alphanumeric.\n\nWhich features may contain errors or typos?\nThis is harder to review for a large dataset, however reviewing a few samples from a smaller dataset may just tell us outright which features may require correcting.\n\nName feature may contain errors or typos as there are several ways used to describe a name including titles, round brackets, and quotes used for alternative or short names.", "train_df.tail()", "Which features contain blank, null or empty values?\nThese will require correcting.\n\nCabin > Age > Embarked features contain a number of null values in that order for the training dataset.\nCabin > Age are incomplete in case of test dataset.\n\nWhat are the data types for various features?\nHelping us during converting goal.\n\nSeven features are integer or floats. Six in case of test dataset.\nFive features are strings (object).", "train_df.info()\nprint('_'*40)\ntest_df.info()", "What is the distribution of numerical feature values across the samples?\nThis helps us determine, among other early insights, how representative the training dataset is of the actual problem domain.\n\nTotal samples are 891 or 40% of the actual number of passengers on board the Titanic (2,224).\nSurvived is a categorical feature with 0 or 1 values.\nAround 38% of samples survived, representative of the actual survival rate.\nMost passengers (> 75%) did not travel with parents or children.\nMore than 35% of passengers had a sibling on board.\nFares varied significantly with few passengers (<1%) paying as high as $512.\nFew elderly passengers (<1%) within age range 65-80.", "train_df.describe(percentiles=[.25, .5, .75])\n# Review survived rate using `percentiles=[.61, .62]` knowing our problem description mentions 38% survival rate.\n# Review Parch distribution using `percentiles=[.75, .8]`\n# Sibling distribution `[.65, .7]`\n# Age and Fare `[.1, .2, .3, .4, .5, .6, .7, .8, .9, .99]`", "What is the distribution of categorical features?\n\nNames are unique across 
the dataset (count=unique=891)\nSex variable has two possible values with 65% male (top=male, freq=577/count=891).\nCabin values have several duplicates across samples. Alternatively several passengers shared a cabin.\nEmbarked takes three possible values. S port used by most passengers (top=S)\nTicket feature has high ratio (22%) of duplicate values (unique=681). Possibly an error as two passengers may not travel on the same ticket.", "train_df.describe(include=['O'])", "Assumptions based on data analysis\nWe arrive at the following assumptions based on data analysis done so far. We may validate these assumptions further before taking appropriate actions.\nCompleting.\n\nWe may want to complete Age feature as it is definitely correlated to survival.\nWe may want to complete the Embarked feature as it may also correlate with survival or another important feature.\n\nCorrecting.\n\nTicket feature may be dropped from our analysis as it contains high ratio of duplicates (22%) and there may not be a correlation between Ticket and survival.\nCabin feature may be dropped as it is highly incomplete or contains many null values both in training and test dataset.\nPassengerId may be dropped from training dataset as it does not contribute to survival.\nName feature is relatively non-standard, may not contribute directly to survival, so it may be dropped.\n\nCreating.\n\nWe may want to create a new feature called Family based on Parch and SibSp to get total count of family members on board.\nWe may want to engineer the Name feature to extract Title as a new feature.\nWe may want to create new feature for Age bands. 
This turns a continuous numerical feature into an ordinal categorical feature.\nWe may also want to create a Fare range feature if it helps our analysis.\n\nCorrelating.\n\nDoes port of embarkation (Embarked) correlate with survival?\nDoes fare paid (range) correlate with survival?\n\nWe may also add to our assumptions based on the problem description noted earlier.\nClassifying.\n\nWomen (Sex=female) were more likely to have survived.\nChildren (Age<?) were more likely to have survived. \nThe upper-class passengers (Pclass=1) were more likely to have survived.\n\nAnalyze by visualizing data\nNow we can start confirming some of our assumptions using visualizations for analyzing the data.\nCorrelating numerical features\nLet us start by understanding correlations between numerical features and our solution goal (Survived).\nA histogram chart is useful for analyzing continuous numerical variables like Age where banding or ranges will help identify useful patterns. The histogram can indicate distribution of samples using automatically defined bins or equally ranged bands. This helps us answer questions relating to specific bands (Did infants have better survival rate?)\nNote that the y-axis in the histogram visualizations represents the count of samples or passengers.\nObservations.\n\nInfants (Age <=4) had high survival rate.\nOldest passengers (Age = 80) survived.\nLarge number of 15-25 year olds did not survive.\nMost passengers are in 15-35 age range.\n\nDecisions.\nThis simple analysis confirms our assumptions as decisions for subsequent workflow stages.\n\nWe should consider Age (our assumption classifying #2) in our model training.\nComplete the Age feature for null values (completing #1).", "g = sns.FacetGrid(train_df, col='Survived')\ng.map(plt.hist, 'Age', bins=20)", "We can combine multiple features for identifying correlations using a single plot. 
This can be done with numerical and categorical features which have numeric values.\nObservations.\n\nPclass=3 had most passengers, however most did not survive. Confirms our classifying assumption #2.\nInfant passengers in Pclass=2 mostly survived. Further qualifies our classifying assumption #2.\nMost passengers in Pclass=1 survived. Confirms our classifying assumption #3.\nPclass varies in terms of Age distribution of passengers.\n\nDecisions.\n\nConsider Pclass for model training.", "grid = sns.FacetGrid(train_df, col='Pclass', hue='Survived')\ngrid.map(plt.hist, 'Age', alpha=.5, bins=20)\ngrid.add_legend();", "Correlating categorical features\nNow we can correlate categorical features with our solution goal.\nObservations.\n\nFemale passengers had much better survival rate than males. Confirms classifying (#1).\nException in Embarked=C where males had higher survival rate.\nMales had better survival rate in Pclass=3 when compared with Pclass=2 for C and Q ports. Completing (#2).\nPorts of embarkation have varying survival rates for Pclass=3 and among male passengers. Correlating (#1).\n\nDecisions.\n\nAdd Sex feature to model training.\nComplete and add Embarked feature to model training.", "grid = sns.FacetGrid(train_df, col='Embarked')\ngrid.map(sns.pointplot, 'Pclass', 'Survived', 'Sex', palette='deep')\ngrid.add_legend()", "Correlating categorical and numerical features\nWe may also want to correlate categorical features (with non-numeric values) and numeric features. We can consider correlating Embarked (Categorical non-numeric), Sex (Categorical non-numeric), Fare (Numeric continuous), with Survived (Categorical numeric).\nObservations.\n\nHigher fare paying passengers had better survival. Confirms our assumption for creating (#4) fare ranges.\nPort of embarkation correlates with survival rates. 
Confirms correlating (#1) and completing (#2).\n\nDecisions.\n\nConsider banding Fare feature.", "grid = sns.FacetGrid(train_df, col='Embarked', hue='Survived', palette={0: 'k', 1: 'w'})\ngrid.map(sns.barplot, 'Sex', 'Fare', alpha=.5, ci=None)\ngrid.add_legend()", "Wrangle data\nWe have collected several assumptions and decisions regarding our datasets and solution requirements. So far we did not have to change a single feature or value to arrive at these. Let us now execute our decisions and assumptions for correcting, creating, and completing goals.\nCorrecting by dropping features\nThis is a good starting goal to execute. By dropping features we are dealing with fewer data points. Speeds up our notebook and eases the analysis.\nBased on our assumptions and decisions we want to drop the Cabin (correcting #2) and Ticket (correcting #1) features.\nNote that where applicable we perform operations on both training and testing datasets together to stay consistent.", "train_df = train_df.drop(['Ticket', 'Cabin'], axis=1)\ntest_df = test_df.drop(['Ticket', 'Cabin'], axis=1)", "Creating new feature extracting from existing\nWe want to analyze if Name feature can be engineered to extract titles and test correlation between titles and survival, before dropping Name and PassengerId features.\nIn the following code we extract Title feature using regular expressions. The RegEx pattern (\\w+\\.) matches the first word which ends with a dot character within Name feature. The expand=False flag returns a DataFrame.\nObservations.\nWhen we plot Title, Age, and Survived, we note the following observations.\n\nMost titles band Age groups accurately. 
For example: Master title has Age mean of 5 years.\nSurvival among Title Age bands varies slightly.\nCertain titles mostly survived (Mme, Lady, Sir) or did not (Don, Rev, Jonkheer).\n\nDecision.\n\nWe decide to retain the new Title feature for model training.", "train_df['Title'] = train_df.Name.str.extract('(\\w+\\.)', expand=False)\nsns.barplot(hue=\"Survived\", x=\"Age\", y=\"Title\", data=train_df, ci=False)", "Let us extract the Title feature for the test dataset as well.\nThen we can safely drop the Name feature from training and testing datasets and the PassengerId feature from the training dataset.", "test_df['Title'] = test_df.Name.str.extract('(\\w+\\.)', expand=False)\n\ntrain_df = train_df.drop(['Name', 'PassengerId'], axis=1)\ntest_df = test_df.drop(['Name'], axis=1)\ntest_df.describe(include=['O'])", "Converting a categorical feature\nNow we can convert features which contain strings to numerical values. This is required by most model algorithms. Doing so will also help us in achieving the feature completing goal.\nLet us start by converting the Sex feature to a new feature called Gender where female=1 and male=0.", "train_df['Gender'] = train_df['Sex'].map( {'female': 1, 'male': 0} ).astype(int)\ntrain_df.loc[:, ['Gender', 'Sex']].head()", "We do this both for training and test datasets.", "test_df['Gender'] = test_df['Sex'].map( {'female': 1, 'male': 0} ).astype(int)\ntest_df.loc[:, ['Gender', 'Sex']].head()", "We can now drop the Sex feature from our datasets.", "train_df = train_df.drop(['Sex'], axis=1)\ntest_df = test_df.drop(['Sex'], axis=1)\ntrain_df.head()", "Completing a numerical continuous feature\nNow we should start estimating and completing features with missing or null values. 
We will first do this for the Age feature.\nWe can consider three methods to complete a numerical continuous feature.\n\n\nA simple way is to generate random numbers between the mean minus the standard deviation and the mean plus the standard deviation.\n\n\nA more accurate way of guessing missing values is to use other correlated features. In our case we note correlation among Age, Gender, and Pclass. Guess Age values using median values for Age across sets of Pclass and Gender feature combinations. So, median Age for Pclass=1 and Gender=0, Pclass=1 and Gender=1, and so on...\n\n\nCombine methods 1 and 2. So instead of guessing age values based on the median, use random numbers between the mean and standard deviation, based on sets of Pclass and Gender combinations.\n\n\nMethods 1 and 3 will introduce random noise into our models. The results from multiple executions might vary. We will prefer method 2.", "grid = sns.FacetGrid(train_df, col='Pclass', hue='Gender')\ngrid.map(plt.hist, 'Age', alpha=.5, bins=20)\ngrid.add_legend();", "Let us start by preparing an empty array to contain guessed Age values based on Pclass x Gender combinations.", "guess_ages = np.zeros((2,3))\nguess_ages", "Now we iterate over Gender (0 or 1) and Pclass (1, 2, 3) to calculate guessed values of Age for the six combinations.\nNote that we also tried creating the AgeFill feature using method 3, and realized during the model stage that the correlation coefficient of AgeFill was better with method 2.", "for i in range(0, 2):\n for j in range(0, 3):\n guess_df = train_df[(train_df['Gender'] == i) & \\\n (train_df['Pclass'] == j+1)]['Age'].dropna()\n \n # Correlation of AgeFill is -0.014850\n # age_mean = guess_df.mean()\n # age_std = guess_df.std()\n # age_guess = rnd.uniform(age_mean - age_std, age_mean + age_std)\n \n # Correlation of AgeFill is -0.011304\n age_guess = guess_df.median()\n\n # Convert random age float to nearest .5 age\n guess_ages[i,j] = int( age_guess/0.5 + 0.5 ) * 0.5\n \nguess_ages\n\ntrain_df['AgeFill'] = 
train_df['Age']\n\nfor i in range(0, 2):\n for j in range(0, 3):\n train_df.loc[ (train_df.Age.isnull()) & (train_df.Gender == i) & (train_df.Pclass == j+1),\\\n 'AgeFill'] = guess_ages[i,j]\n\ntrain_df[train_df['Age'].isnull()][['Gender','Pclass','Age','AgeFill']].head(10)", "We repeat the feature completing goal for the test dataset.", "guess_ages = np.zeros((2,3))\n\nfor i in range(0, 2):\n for j in range(0, 3):\n guess_df = test_df[(test_df['Gender'] == i) & \\\n (test_df['Pclass'] == j+1)]['Age'].dropna()\n\n # Correlation of AgeFill is -0.014850\n # age_mean = guess_df.mean()\n # age_std = guess_df.std()\n # age_guess = rnd.uniform(age_mean - age_std, age_mean + age_std)\n\n # Correlation of AgeFill is -0.011304\n age_guess = guess_df.median()\n\n guess_ages[i,j] = int( age_guess/0.5 + 0.5 ) * 0.5\n\ntest_df['AgeFill'] = test_df['Age']\n\nfor i in range(0, 2):\n for j in range(0, 3):\n test_df.loc[ (test_df.Age.isnull()) & (test_df.Gender == i) & (test_df.Pclass == j+1),\\\n 'AgeFill'] = guess_ages[i,j]\n\ntest_df[test_df['Age'].isnull()][['Gender','Pclass','Age','AgeFill']].head(10)", "We can now drop the Age feature from our datasets.", "train_df = train_df.drop(['Age'], axis=1)\ntest_df = test_df.drop(['Age'], axis=1)\ntrain_df.head()", "Create new feature combining existing features\nWe can create a new feature for FamilySize which combines Parch and SibSp. This will enable us to drop Parch and SibSp from our datasets.\nNote that we commented out this code as we realized during model stage that the combined feature is reducing the confidence score of our dataset instead of improving it. 
The correlation score of the separate Parch feature is also better than that of the combined FamilySize feature.", "# Logistic Regression Score is 0.81032547699214363\n# Parch correlation is -0.065878 and SibSp correlation is -0.370618\n\n# Decision: Retain Parch and SibSp as separate features\n\n# Logistic Regression Score is 0.80808080808080807\n# FamilySize correlation is -0.233974\n\n# train_df['FamilySize'] = train_df['SibSp'] + train_df['Parch']\n# test_df['FamilySize'] = test_df['SibSp'] + test_df['Parch']\n# train_df.loc[:, ['Parch', 'SibSp', 'FamilySize']].head(10)\n\n# train_df = train_df.drop(['Parch', 'SibSp'], axis=1)\n# test_df = test_df.drop(['Parch', 'SibSp'], axis=1)\n# train_df.head()", "We can also create an artificial feature combining Pclass and AgeFill.", "test_df['Age*Class'] = test_df.AgeFill * test_df.Pclass\ntrain_df['Age*Class'] = train_df.AgeFill * train_df.Pclass\ntrain_df.loc[:, ['Age*Class', 'AgeFill', 'Pclass']].head(10)", "Completing a categorical feature\nThe Embarked feature takes S, Q, C values based on port of embarkation. Our training dataset has two missing values. 
We simply fill these with the most common occurrence.", "freq_port = train_df.Embarked.dropna().mode()[0]\nfreq_port\n\ntrain_df['EmbarkedFill'] = train_df['Embarked']\ntrain_df.loc[train_df['Embarked'].isnull(), 'EmbarkedFill'] = freq_port\ntrain_df[train_df['Embarked'].isnull()][['Embarked','EmbarkedFill']].head(10)", "We can now drop the Embarked feature from our datasets.", "test_df['EmbarkedFill'] = test_df['Embarked']\ntrain_df = train_df.drop(['Embarked'], axis=1)\ntest_df = test_df.drop(['Embarked'], axis=1)\ntrain_df.head()", "Converting categorical feature to numeric\nWe can now convert the EmbarkedFill feature by creating a new numeric Port feature.", "Ports = list(enumerate(np.unique(train_df['EmbarkedFill'])))\nPorts_dict = { name : i for i, name in Ports } \ntrain_df['Port'] = train_df.EmbarkedFill.map( lambda x: Ports_dict[x]).astype(int)\n\nPorts = list(enumerate(np.unique(test_df['EmbarkedFill'])))\nPorts_dict = { name : i for i, name in Ports }\ntest_df['Port'] = test_df.EmbarkedFill.map( lambda x: Ports_dict[x]).astype(int)\n\ntrain_df[['EmbarkedFill', 'Port']].head(10)", "Similarly we can convert the Title feature to a numeric enumeration TitleBand, banding age groups with titles.", "Titles = list(enumerate(np.unique(train_df['Title'])))\nTitles_dict = { name : i for i, name in Titles } \ntrain_df['TitleBand'] = train_df.Title.map( lambda x: Titles_dict[x]).astype(int)\n\nTitles = list(enumerate(np.unique(test_df['Title'])))\nTitles_dict = { name : i for i, name in Titles } \ntest_df['TitleBand'] = test_df.Title.map( lambda x: Titles_dict[x]).astype(int)\n\ntrain_df[['Title', 'TitleBand']].head(10)", "Now we can safely drop the EmbarkedFill and Title features. 
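A caveat on the enumeration approach above (this note and the snippet are our addition, not part of the original workflow): because a mapping dictionary is built separately from the unique values of each dataset, train and test could receive different codes for the same category whenever their value sets differ. A safer sketch builds one mapping from the union of both datasets; the frames and values below are toy stand-ins, not the real Titanic data:

```python
import pandas as pd

# Toy stand-ins for train_df / test_df (hypothetical values)
train_toy = pd.DataFrame({'EmbarkedFill': ['S', 'C', 'Q', 'S']})
test_toy = pd.DataFrame({'EmbarkedFill': ['Q', 'S', 'C']})

# Build ONE mapping from the union of values so both datasets share codes
values = sorted(set(train_toy['EmbarkedFill']) | set(test_toy['EmbarkedFill']))
port_map = {name: i for i, name in enumerate(values)}

train_toy['Port'] = train_toy['EmbarkedFill'].map(port_map).astype(int)
test_toy['Port'] = test_toy['EmbarkedFill'].map(port_map).astype(int)
print(port_map)  # {'C': 0, 'Q': 1, 'S': 2}
```

For Embarked the two value sets happen to coincide, so the notebook's per-dataset dictionaries agree; for Title (where the test set can contain titles absent from training) a shared mapping is the more robust pattern.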
With this we now have a dataset that only contains numerical values, a requirement for the model stage in our workflow.", "train_df = train_df.drop(['EmbarkedFill', 'Title'], axis=1)\ntest_df = test_df.drop(['EmbarkedFill', 'Title'], axis=1)\ntrain_df.head()", "Quick completing and converting a numeric feature\nWe can now complete the Fare feature for the single missing value in the test dataset using the median value for this feature. We do this in a single line of code.\nNote that we are not creating an intermediate new feature or doing any further correlation analysis to guess the missing value as we are replacing only a single value. The completion goal achieves the desired requirement for the model algorithm to operate on non-null values.\nWe may also want to round off the fare to two decimals as it represents currency.", "test_df['Fare'].fillna(test_df['Fare'].dropna().median(), inplace=True)\n\ntrain_df['Fare'] = train_df['Fare'].round(2)\ntest_df['Fare'] = test_df['Fare'].round(2)\n\ntest_df.head(10)", "Model, predict and solve\nNow we are ready to train a model and predict the required solution. There are 60+ predictive modelling algorithms to choose from. We must understand the type of problem and solution requirement to narrow down to a select few models which we can evaluate. Our problem is a classification and regression problem. We want to identify the relationship between the output (Survived or not) and other variables or features (Gender, Age, Port...). We are also performing a category of machine learning which is called supervised learning as we are training our model with a given dataset. With these two criteria - Supervised Learning plus Classification and Regression, we can narrow down our choice of models to a few. 
These include:\n\nLogistic Regression\nKNN or k-Nearest Neighbors\nSupport Vector Machines\nNaive Bayes classifier\nDecision Tree\nRandom Forest\nPerceptron\nArtificial neural network\nRVM or Relevance Vector Machine", "X_train = train_df.drop(\"Survived\", axis=1)\nY_train = train_df[\"Survived\"]\nX_test = test_df.drop(\"PassengerId\", axis=1).copy()\nX_train.shape, Y_train.shape, X_test.shape", "Logistic Regression is a useful model to run early in the workflow. Logistic regression measures the relationship between the categorical dependent variable (feature) and one or more independent variables (features) by estimating probabilities using a logistic function, which is the cumulative logistic distribution. Reference Wikipedia.\nNote the confidence score generated by the model based on our training dataset.", "# Logistic Regression\n\nlogreg = LogisticRegression()\nlogreg.fit(X_train, Y_train)\nY_pred = logreg.predict(X_test)\nacc_log = round(logreg.score(X_train, Y_train) * 100, 2)\nacc_log", "We can use Logistic Regression to validate our assumptions and decisions for feature creating and completing goals. This can be done by calculating the correlation coefficient for all features as these relate to survival.\n\nGender as expected has the highest correlation with Survived.\nSurprisingly Fare ranks higher than Age.\nOur decision to extract the TitleBand feature from Name is a good one.\nThe artificial feature Age*Class scores well against existing features.\nWe tried creating a feature combining Parch and SibSp into FamilySize. 
Parch ended up with a better correlation coefficient and FamilySize reduced our LogisticRegression confidence score.\nAnother surprise is that Pclass contributes least to our model, even worse than Port of embarkation, or the artificial feature Age*Class.", "coeff_df = pd.DataFrame(train_df.columns.delete(0))\ncoeff_df.columns = ['Feature']\ncoeff_df[\"Correlation\"] = pd.Series(logreg.coef_[0])\n\ncoeff_df.sort_values(by='Correlation', ascending=False)", "Next we model using Support Vector Machines which are supervised learning models with associated learning algorithms that analyze data used for classification and regression analysis. Given a set of training samples, each marked as belonging to one or the other of two categories, an SVM training algorithm builds a model that assigns new test samples to one category or the other, making it a non-probabilistic binary linear classifier. Reference Wikipedia.\nNote that the model generates a confidence score which is higher than the Logistic Regression model.", "# Support Vector Machines\n\nsvc = SVC()\nsvc.fit(X_train, Y_train)\nY_pred = svc.predict(X_test)\nacc_svc = round(svc.score(X_train, Y_train) * 100, 2)\nacc_svc", "In pattern recognition, the k-Nearest Neighbors algorithm (or k-NN for short) is a non-parametric method used for classification and regression. A sample is classified by a majority vote of its neighbors, with the sample being assigned to the class most common among its k nearest neighbors (k is a positive integer, typically small). If k = 1, then the object is simply assigned to the class of that single nearest neighbor. 
Reference Wikipedia.\nThe KNN confidence score is better than Logistic Regression but worse than SVM.", "knn = KNeighborsClassifier(n_neighbors = 3)\nknn.fit(X_train, Y_train)\nY_pred = knn.predict(X_test)\nacc_knn = round(knn.score(X_train, Y_train) * 100, 2)\nacc_knn", "In machine learning, naive Bayes classifiers are a family of simple probabilistic classifiers based on applying Bayes' theorem with strong (naive) independence assumptions between the features. Naive Bayes classifiers are highly scalable, requiring a number of parameters linear in the number of variables (features) in a learning problem. Reference Wikipedia.\nThe model-generated confidence score is the lowest among the models evaluated so far.", "# Gaussian Naive Bayes\n\ngaussian = GaussianNB()\ngaussian.fit(X_train, Y_train)\nY_pred = gaussian.predict(X_test)\nacc_gaussian = round(gaussian.score(X_train, Y_train) * 100, 2)\nacc_gaussian", "The perceptron is an algorithm for supervised learning of binary classifiers (functions that can decide whether an input, represented by a vector of numbers, belongs to some specific class or not). It is a type of linear classifier, i.e. a classification algorithm that makes its predictions based on a linear predictor function combining a set of weights with the feature vector. The algorithm allows for online learning, in that it processes elements in the training set one at a time. 
Reference Wikipedia.", "# Perceptron\n\nperceptron = Perceptron()\nperceptron.fit(X_train, Y_train)\nY_pred = perceptron.predict(X_test)\nacc_perceptron = round(perceptron.score(X_train, Y_train) * 100, 2)\nacc_perceptron\n\n# Linear SVC\n\nlinear_svc = LinearSVC()\nlinear_svc.fit(X_train, Y_train)\nY_pred = linear_svc.predict(X_test)\nacc_linear_svc = round(linear_svc.score(X_train, Y_train) * 100, 2)\nacc_linear_svc\n\n# Stochastic Gradient Descent\n\nsgd = SGDClassifier()\nsgd.fit(X_train, Y_train)\nY_pred = sgd.predict(X_test)\nacc_sgd = round(sgd.score(X_train, Y_train) * 100, 2)\nacc_sgd", "This model uses a decision tree as a predictive model which maps features (tree branches) to conclusions about the target value (tree leaves). Tree models where the target variable can take a finite set of values are called classification trees; in these tree structures, leaves represent class labels and branches represent conjunctions of features that lead to those class labels. Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. Reference Wikipedia.\nThe model confidence score is the highest among models evaluated so far.", "# Decision Tree\n\ndecision_tree = DecisionTreeClassifier()\ndecision_tree.fit(X_train, Y_train)\nY_pred = decision_tree.predict(X_test)\nacc_decision_tree = round(decision_tree.score(X_train, Y_train) * 100, 2)\nacc_decision_tree", "The next model Random Forests is one of the most popular. Random forests or random decision forests are an ensemble learning method for classification, regression and other tasks, that operate by constructing a multitude of decision trees (n_estimators=100) at training time and outputting the class that is the mode of the classes (classification) or mean prediction (regression) of the individual trees. Reference Wikipedia.\nThe model confidence score is the highest among models evaluated so far. 
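One caveat worth adding here (our note, not part of the original workflow): all of these confidence scores are accuracy on the training data, which flatters high-capacity models such as decision trees and random forests. A hedged sketch of k-fold cross-validation with scikit-learn, run on synthetic data so the snippet is self-contained:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in data (not the Titanic features)
rng = np.random.RandomState(0)
X = rng.randn(200, 5)
y = (X[:, 0] + 0.5 * rng.randn(200) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
train_acc = clf.fit(X, y).score(X, y)             # accuracy on the data we trained on
cv_acc = cross_val_score(clf, X, y, cv=5).mean()  # accuracy on held-out folds

print(train_acc, cv_acc)
```

On real data the gap between the two numbers is a quick overfitting check; ranking models by cross-validated accuracy is usually more trustworthy than ranking by training accuracy.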
We decide to use this model's output (Y_pred) to create our competition submission.", "# Random Forest\n\nrandom_forest = RandomForestClassifier(n_estimators=100)\nrandom_forest.fit(X_train, Y_train)\nY_pred = random_forest.predict(X_test)\nrandom_forest.score(X_train, Y_train)\nacc_random_forest = round(random_forest.score(X_train, Y_train) * 100, 2)\nacc_random_forest", "Model evaluation\nWe can now rank our evaluation of all the models to choose the best one for our problem. While both Decision Tree and Random Forest score the same, we choose to use Random Forest as it corrects for decision trees' habit of overfitting to their training set.", "models = pd.DataFrame({\n 'Model': ['Support Vector Machines', 'KNN', 'Logistic Regression', \n 'Random Forest', 'Naive Bayes', 'Perceptron', \n 'Stochastic Gradient Descent', 'Linear SVC', \n 'Decision Tree'],\n 'Score': [acc_svc, acc_knn, acc_log, \n acc_random_forest, acc_gaussian, acc_perceptron, \n acc_sgd, acc_linear_svc, acc_decision_tree]})\nmodels.sort_values(by='Score', ascending=False)\n\nsubmission = pd.DataFrame({\n \"PassengerId\": test_df[\"PassengerId\"],\n \"Survived\": Y_pred\n })\nsubmission.to_csv('data/titanic-kaggle/submission.csv', index=False)", "Our submission to the competition site Kaggle ranks 3,883rd of 6,082 competition entries. This result is indicative while the competition is running. This result only accounts for part of the submission dataset. Not bad for our first attempt. Any suggestions to improve our score are most welcome.\nReferences\nThis notebook has been created based on great work done solving the Titanic competition and other sources.\n\nA journey through Titanic\nGetting Started with Pandas: Kaggle's Titanic Competition" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ThyrixYang/LearningNotes
MOOC/stanford_cnn_cs231n/assignment2/.ipynb_checkpoints/BatchNormalization-checkpoint.ipynb
gpl-3.0
[ "Batch Normalization\nOne way to make deep networks easier to train is to use more sophisticated optimization procedures such as SGD+momentum, RMSProp, or Adam. Another strategy is to change the architecture of the network to make it easier to train. One idea along these lines is batch normalization which was recently proposed by [3].\nThe idea is relatively straightforward. Machine learning methods tend to work better when their input data consists of uncorrelated features with zero mean and unit variance. When training a neural network, we can preprocess the data before feeding it to the network to explicitly decorrelate its features; this will ensure that the first layer of the network sees data that follows a nice distribution. However even if we preprocess the input data, the activations at deeper layers of the network will likely no longer be decorrelated and will no longer have zero mean or unit variance since they are output from earlier layers in the network. Even worse, during the training process the distribution of features at each layer of the network will shift as the weights of each layer are updated.\nThe authors of [3] hypothesize that the shifting distribution of features inside deep neural networks may make training deep networks more difficult. To overcome this problem, [3] proposes to insert batch normalization layers into the network. At training time, a batch normalization layer uses a minibatch of data to estimate the mean and standard deviation of each feature. These estimated means and standard deviations are then used to center and normalize the features of the minibatch. 
A running average of these means and standard deviations is kept during training, and at test time these running averages are used to center and normalize features.\nIt is possible that this normalization strategy could reduce the representational power of the network, since it may sometimes be optimal for certain layers to have features that are not zero-mean or unit variance. To this end, the batch normalization layer includes learnable shift and scale parameters for each feature dimension.\n[3] Sergey Ioffe and Christian Szegedy, \"Batch Normalization: Accelerating Deep Network Training by Reducing\nInternal Covariate Shift\", ICML 2015.", "# As usual, a bit of setup\nfrom __future__ import print_function\nimport time\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom cs231n.classifiers.fc_net import *\nfrom cs231n.data_utils import get_CIFAR10_data\nfrom cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array\nfrom cs231n.solver import Solver\n\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\n# for auto-reloading external modules\n# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython\n%load_ext autoreload\n%autoreload 2\n\ndef rel_error(x, y):\n \"\"\" returns relative error \"\"\"\n return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))\n\n# Load the (preprocessed) CIFAR10 data.\n\ndata = get_CIFAR10_data()\nfor k, v in data.items():\n print('%s: ' % k, v.shape)", "Batch normalization: Forward\nIn the file cs231n/layers.py, implement the batch normalization forward pass in the function batchnorm_forward. 
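If it helps, the train-time computation described above can be written out in a few lines of NumPy. The sketch below is illustrative only; it is not the assignment's batchnorm_forward (it omits the cache and the running-average bookkeeping needed for test time):

```python
import numpy as np

def batchnorm_train_sketch(x, gamma, beta, eps=1e-5):
    # Per-feature statistics estimated from the minibatch (axis 0)
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)  # centered, unit variance
    return gamma * x_hat + beta            # learnable scale and shift

np.random.seed(0)
x = 3.0 * np.random.randn(100, 4) + 7.0
out = batchnorm_train_sketch(x, np.ones(4), np.zeros(4))
print(out.mean(axis=0), out.std(axis=0))  # means near 0, stds near 1
```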
Once you have done so, run the following to test your implementation.", "# Check the training-time forward pass by checking means and variances\n# of features both before and after batch normalization\n\n# Simulate the forward pass for a two-layer network\nnp.random.seed(231)\nN, D1, D2, D3 = 200, 50, 60, 3\nX = np.random.randn(N, D1)\nW1 = np.random.randn(D1, D2)\nW2 = np.random.randn(D2, D3)\na = np.maximum(0, X.dot(W1)).dot(W2)\n\nprint('Before batch normalization:')\nprint(' means: ', a.mean(axis=0))\nprint(' stds: ', a.std(axis=0))\n\n# Means should be close to zero and stds close to one\nprint('After batch normalization (gamma=1, beta=0)')\na_norm, _ = batchnorm_forward(a, np.ones(D3), np.zeros(D3), {'mode': 'train'})\nprint(' mean: ', a_norm.mean(axis=0))\nprint(' std: ', a_norm.std(axis=0))\n\n# Now means should be close to beta and stds close to gamma\ngamma = np.asarray([1.0, 2.0, 3.0])\nbeta = np.asarray([11.0, 12.0, 13.0])\na_norm, _ = batchnorm_forward(a, gamma, beta, {'mode': 'train'})\nprint('After batch normalization (nontrivial gamma, beta)')\nprint(' means: ', a_norm.mean(axis=0))\nprint(' stds: ', a_norm.std(axis=0))\n\n# Check the test-time forward pass by running the training-time\n# forward pass many times to warm up the running averages, and then\n# checking the means and variances of activations after a test-time\n# forward pass.\nnp.random.seed(231)\nN, D1, D2, D3 = 200, 50, 60, 3\nW1 = np.random.randn(D1, D2)\nW2 = np.random.randn(D2, D3)\n\nbn_param = {'mode': 'train'}\ngamma = np.ones(D3)\nbeta = np.zeros(D3)\nfor t in range(50):\n X = np.random.randn(N, D1)\n a = np.maximum(0, X.dot(W1)).dot(W2)\n batchnorm_forward(a, gamma, beta, bn_param)\nbn_param['mode'] = 'test'\nX = np.random.randn(N, D1)\na = np.maximum(0, X.dot(W1)).dot(W2)\na_norm, _ = batchnorm_forward(a, gamma, beta, bn_param)\n\n# Means should be close to zero and stds close to one, but will be\n# noisier than training-time forward passes.\nprint('After batch normalization 
(test-time):')\nprint(' means: ', a_norm.mean(axis=0))\nprint(' stds: ', a_norm.std(axis=0))", "Batch Normalization: backward\nNow implement the backward pass for batch normalization in the function batchnorm_backward.\nTo derive the backward pass you should write out the computation graph for batch normalization and backprop through each of the intermediate nodes. Some intermediates may have multiple outgoing branches; make sure to sum gradients across these branches in the backward pass.\nOnce you have finished, run the following to numerically check your backward pass.", "# Gradient check batchnorm backward pass\nnp.random.seed(231)\nN, D = 4, 5\nx = 5 * np.random.randn(N, D) + 12\ngamma = np.random.randn(D)\nbeta = np.random.randn(D)\ndout = np.random.randn(N, D)\n\nbn_param = {'mode': 'train'}\nfx = lambda x: batchnorm_forward(x, gamma, beta, bn_param)[0]\nfg = lambda a: batchnorm_forward(x, a, beta, bn_param)[0]\nfb = lambda b: batchnorm_forward(x, gamma, b, bn_param)[0]\n\ndx_num = eval_numerical_gradient_array(fx, x, dout)\nda_num = eval_numerical_gradient_array(fg, gamma.copy(), dout)\ndb_num = eval_numerical_gradient_array(fb, beta.copy(), dout)\n\n_, cache = batchnorm_forward(x, gamma, beta, bn_param)\ndx, dgamma, dbeta = batchnorm_backward(dout, cache)\nprint('dx error: ', rel_error(dx_num, dx))\nprint('dgamma error: ', rel_error(da_num, dgamma))\nprint('dbeta error: ', rel_error(db_num, dbeta))", "Batch Normalization: alternative backward (OPTIONAL, +3 points extra credit)\nIn class we talked about two different implementations for the sigmoid backward pass. One strategy is to write out a computation graph composed of simple operations and backprop through all intermediate values. Another strategy is to work out the derivatives on paper. 
For the sigmoid function, it turns out that you can derive a very simple formula for the backward pass by simplifying gradients on paper.\nSurprisingly, it turns out that you can also derive a simple expression for the batch normalization backward pass if you work out derivatives on paper and simplify. After doing so, implement the simplified batch normalization backward pass in the function batchnorm_backward_alt and compare the two implementations by running the following. Your two implementations should compute nearly identical results, but the alternative implementation should be a bit faster.\nNOTE: This part of the assignment is entirely optional, but we will reward 3 points of extra credit if you can complete it.", "np.random.seed(231)\nN, D = 100, 500\nx = 5 * np.random.randn(N, D) + 12\ngamma = np.random.randn(D)\nbeta = np.random.randn(D)\ndout = np.random.randn(N, D)\n\nbn_param = {'mode': 'train'}\nout, cache = batchnorm_forward(x, gamma, beta, bn_param)\n\nt1 = time.time()\ndx1, dgamma1, dbeta1 = batchnorm_backward(dout, cache)\nt2 = time.time()\ndx2, dgamma2, dbeta2 = batchnorm_backward_alt(dout, cache)\nt3 = time.time()\n\nprint('dx difference: ', rel_error(dx1, dx2))\nprint('dgamma difference: ', rel_error(dgamma1, dgamma2))\nprint('dbeta difference: ', rel_error(dbeta1, dbeta2))\nprint('speedup: %.2fx' % ((t2 - t1) / (t3 - t2)))", "Fully Connected Nets with Batch Normalization\nNow that you have a working implementation for batch normalization, go back to your FullyConnectedNet in the file cs231n/classifiers/fc_net.py. Modify your implementation to add batch normalization.\nConcretely, when the flag use_batchnorm is True in the constructor, you should insert a batch normalization layer before each ReLU nonlinearity. The outputs from the last layer of the network should not be normalized. 
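As a reminder of what the numerical gradient check that follows is doing, here is a generic centered-difference sketch (our illustration, not the course's eval_numerical_gradient): each partial derivative is approximated by (f(x+h) - f(x-h)) / 2h, one coordinate at a time.

```python
import numpy as np

def num_grad(f, x, h=1e-5):
    # Centered-difference approximation of df/dx, one coordinate at a time
    grad = np.zeros_like(x)
    it = np.nditer(x, flags=['multi_index'])
    while not it.finished:
        ix = it.multi_index
        old = x[ix]
        x[ix] = old + h
        fp = f(x)          # f(x + h) in this coordinate
        x[ix] = old - h
        fm = f(x)          # f(x - h) in this coordinate
        x[ix] = old        # restore
        grad[ix] = (fp - fm) / (2.0 * h)
        it.iternext()
    return grad

x = np.array([1.0, 2.0, 3.0])
g = num_grad(lambda v: (v ** 2).sum(), x)  # analytic gradient is 2*v, i.e. [2, 4, 6]
print(g)
```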
Once you are done, run the following to gradient-check your implementation.\nHINT: You might find it useful to define an additional helper layer similar to those in the file cs231n/layer_utils.py. If you decide to do so, do it in the file cs231n/classifiers/fc_net.py.", "np.random.seed(231)\nN, D, H1, H2, C = 2, 15, 20, 30, 10\nX = np.random.randn(N, D)\ny = np.random.randint(C, size=(N,))\n\nfor reg in [0, 3.14]:\n print('Running check with reg = ', reg)\n model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,\n reg=reg, weight_scale=5e-2, dtype=np.float64,\n use_batchnorm=True)\n\n loss, grads = model.loss(X, y)\n print('Initial loss: ', loss)\n\n for name in sorted(grads):\n f = lambda _: model.loss(X, y)[0]\n grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)\n print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))\n if reg == 0: print()", "Batchnorm for deep networks\nRun the following to train a six-layer network on a subset of 1000 training examples both with and without batch normalization.", "np.random.seed(231)\n# Try training a very deep net with batchnorm\nhidden_dims = [100, 100, 100, 100, 100]\n\nnum_train = 1000\nsmall_data = {\n 'X_train': data['X_train'][:num_train],\n 'y_train': data['y_train'][:num_train],\n 'X_val': data['X_val'],\n 'y_val': data['y_val'],\n}\n\nweight_scale = 2e-2\nbn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=True)\nmodel = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=False)\n\nbn_solver = Solver(bn_model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=True, print_every=200)\nbn_solver.train()\n\nsolver = Solver(model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=True, print_every=200)\nsolver.train()", "Run the following to visualize the results from 
the two networks trained above. You should find that using batch normalization helps the network to converge much faster.", "plt.subplot(3, 1, 1)\nplt.title('Training loss')\nplt.xlabel('Iteration')\n\nplt.subplot(3, 1, 2)\nplt.title('Training accuracy')\nplt.xlabel('Epoch')\n\nplt.subplot(3, 1, 3)\nplt.title('Validation accuracy')\nplt.xlabel('Epoch')\n\nplt.subplot(3, 1, 1)\nplt.plot(solver.loss_history, 'o', label='baseline')\nplt.plot(bn_solver.loss_history, 'o', label='batchnorm')\n\nplt.subplot(3, 1, 2)\nplt.plot(solver.train_acc_history, '-o', label='baseline')\nplt.plot(bn_solver.train_acc_history, '-o', label='batchnorm')\n\nplt.subplot(3, 1, 3)\nplt.plot(solver.val_acc_history, '-o', label='baseline')\nplt.plot(bn_solver.val_acc_history, '-o', label='batchnorm')\n \nfor i in [1, 2, 3]:\n plt.subplot(3, 1, i)\n plt.legend(loc='upper center', ncol=4)\nplt.gcf().set_size_inches(15, 15)\nplt.show()", "Batch normalization and initialization\nWe will now run a small experiment to study the interaction of batch normalization and weight initialization.\nThe first cell will train 8-layer networks both with and without batch normalization using different scales for weight initialization. 
The second cell will plot training accuracy, validation set accuracy, and training loss as a function of the weight initialization scale.", "np.random.seed(231)\n# Try training a very deep net with batchnorm\nhidden_dims = [50, 50, 50, 50, 50, 50, 50]\n\nnum_train = 1000\nsmall_data = {\n 'X_train': data['X_train'][:num_train],\n 'y_train': data['y_train'][:num_train],\n 'X_val': data['X_val'],\n 'y_val': data['y_val'],\n}\n\nbn_solvers = {}\nsolvers = {}\nweight_scales = np.logspace(-4, 0, num=20)\nfor i, weight_scale in enumerate(weight_scales):\n print('Running weight scale %d / %d' % (i + 1, len(weight_scales)))\n bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=True)\n model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=False)\n\n bn_solver = Solver(bn_model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=False, print_every=200)\n bn_solver.train()\n bn_solvers[weight_scale] = bn_solver\n\n solver = Solver(model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=False, print_every=200)\n solver.train()\n solvers[weight_scale] = solver\n\n# Plot results of weight scale experiment\nbest_train_accs, bn_best_train_accs = [], []\nbest_val_accs, bn_best_val_accs = [], []\nfinal_train_loss, bn_final_train_loss = [], []\n\nfor ws in weight_scales:\n best_train_accs.append(max(solvers[ws].train_acc_history))\n bn_best_train_accs.append(max(bn_solvers[ws].train_acc_history))\n \n best_val_accs.append(max(solvers[ws].val_acc_history))\n bn_best_val_accs.append(max(bn_solvers[ws].val_acc_history))\n \n final_train_loss.append(np.mean(solvers[ws].loss_history[-100:]))\n bn_final_train_loss.append(np.mean(bn_solvers[ws].loss_history[-100:]))\n \nplt.subplot(3, 1, 1)\nplt.title('Best val accuracy vs weight initialization scale')\nplt.xlabel('Weight initialization scale')\nplt.ylabel('Best val accuracy')\nplt.semilogx(weight_scales, best_val_accs, '-o', label='baseline')\nplt.semilogx(weight_scales, bn_best_val_accs, '-o', label='batchnorm')\nplt.legend(ncol=2, loc='lower right')\n\nplt.subplot(3, 1, 2)\nplt.title('Best train accuracy vs weight initialization scale')\nplt.xlabel('Weight initialization scale')\nplt.ylabel('Best training accuracy')\nplt.semilogx(weight_scales, best_train_accs, '-o', label='baseline')\nplt.semilogx(weight_scales, bn_best_train_accs, '-o', label='batchnorm')\nplt.legend()\n\nplt.subplot(3, 1, 3)\nplt.title('Final training loss vs weight initialization scale')\nplt.xlabel('Weight initialization scale')\nplt.ylabel('Final training loss')\nplt.semilogx(weight_scales, final_train_loss, '-o', label='baseline')\nplt.semilogx(weight_scales, bn_final_train_loss, '-o', label='batchnorm')\nplt.legend()\nplt.gca().set_ylim(1.0, 3.5)\n\nplt.gcf().set_size_inches(10, 15)\nplt.show()", "Question:\nDescribe the results of this experiment, and try to give a reason why the experiment gave the results that it did.\nAnswer:" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
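The robustness to weight-initialization scale explored in the record above comes from each feature being re-centered and re-scaled over the batch. A minimal NumPy sketch of the batch-normalization forward pass (my own illustration, not the cs231n assignment's implementation):

```python
import numpy as np

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    # Normalize each feature over the batch, then scale and shift.
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=3.0, size=(100, 4))  # badly scaled activations
out = batchnorm_forward(x, gamma=np.ones(4), beta=np.zeros(4))
print(out.mean(axis=0), out.std(axis=0))
```

With `gamma=1` and `beta=0` the output has roughly zero mean and unit standard deviation per feature regardless of the input scale, which is why a poorly chosen weight scale hurts the batchnorm network far less.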
darkomen/TFG
modelado/temperatura/modelado.ipynb
cc0-1.0
[ "Modeling a system with IPython\nFor the filament extruder to work correctly, the temperature of the barrel must be regulated properly. To do this we will use a system consisting of a resistor that dissipates heat and a PT100 temperature sensor, so that we can close the loop and control the system. Below, we describe the process used.", "# Import the libraries used\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\n\n# Print the version of each library used\nprint (\"Numpy v{}\".format(np.__version__))\nprint (\"Pandas v{}\".format(pd.__version__))\nprint (\"Seaborn v{}\".format(sns.__version__))\n\n# Show all plots inside the notebook\n%pylab inline\n\n# Open the csv file with the sample data\ndatos = pd.read_csv('datos.csv')\n\n# Store in a list the file columns we will work with\n#columns = ['temperatura', 'entrada']\ncolumns = ['temperatura', 'entrada']", "System response\nThe first step is to apply an open-loop step input to the system to observe its time response. 
As it heats up, we record the data so that we can plot it afterwards.", "# Show in several plots the information obtained from the test\nax = datos[columns].plot(secondary_y=['entrada'],figsize=(10,5), ylim=(20,60),title='Mathematical model of the system')\nax.set_xlabel('Time')\nax.set_ylabel('Temperature [ºC]')\n#datos_filtrados['RPM TRAC'].plot(secondary_y=True,style='g',figsize=(20,20)).set_ylabel=('RPM')\n", "Computing the polynomial\nWe run a regression with a polynomial of order 2 to find the equation that best fits the trend of our data.", "# Find the order-2 polynomial that describes the distribution of the data\nreg = np.polyfit(datos['time'],datos['temperatura'],2)\n# Compute the y values from the regression\nry = np.polyval(reg,datos['time'])\nprint (reg)\n\nplt.plot(datos['time'],datos['temperatura'],'b^', label=('Experimental data'))\nplt.plot(datos['time'],ry,'ro', label=('Polynomial regression'))\nplt.legend(loc=0)\nplt.grid(True)\nplt.xlabel('Time')\nplt.ylabel('Temperature [ºC]')\n", "The characteristic polynomial of our system is:\n$$P_x= 25.9459 -1.5733·10^{-4}·X - 8.18174·10^{-9}·X^2$$\nLaplace transform\nTaking the Laplace transform of the system, we obtain the following result:\n$$G_s = \\frac{25.95·S^2 - 0.00015733·S - 1.63635·10^{-8}}{S^3}$$\nComputing the PID with Octave\nApplying the Ziegler-Nichols tuning method, we will compute a PID controller that regulates the system correctly. This method quickly gives us indicative values of $K_p$, $K_i$ and $K_d$ from which we can fine-tune the controller. 
This method is based on computing three characteristic parameters, from which the controller is obtained:\n$$G_s=K_p(1+\\frac{1}{T_i·S}+T_d·S)=K_p+\\frac{K_i}{S}+K_d·S$$\nThe characteristic parameters of the method are computed with Octave, using the following code:\n~~~\npkg load control\n% the arguments to tf() must be the numerator and denominator of our system.\nH=tf([25.95 0.000157333 1.63635E-8],[1 0 0 0]);\nstep(H);\ndt=0.150;\nt=0:dt:65;\ny=step(H,t);\ndy=diff(y)/dt;\n[m,p]=max(dy);\nyi=y(p);\nti=t(p);\nL=ti-yi/m\nTao=(y(end)-yi)/m+ti-L\nKp=1.2*Tao/L\nTi=2*L;\nTd=0.5*L;\nKi=Kp/ti;\nKd=Kp*Td;\n~~~\nIn this first iteration, the values obtained are the following:\n$K_p = 6082.6$ $K_i=93.868$ $K_d=38.9262$\nOur controller therefore has the following characteristic equation:\n$$G_s = \\frac{38.9262·S^2 + 6082.6·S + 93.868}{S}$$\nController iteration 1", "# Load the first controller test data and select the columns to work with\ndatos_it1 = pd.read_csv('Regulador1.csv')\ncolumns = ['temperatura']\n\n# Show in several plots the information obtained from the test\nax = datos_it1[columns].plot(figsize=(10,5), ylim=(20,100),title='Mathematical model of the system with controller',)\nax.set_xlabel('Time')\nax.set_ylabel('Temperature [ºC]')\nax.hlines([80],0,3500,colors='r')\n# Compute the overshoot Mp\nTmax = datos_it1.describe().loc['max','temperatura'] # Maximum temperature reached during the test\n\nSp=80.0 # Setpoint value\nMp= ((Tmax-Sp)/(Sp))*100\nprint(\"The overshoot is: {:.2f}%\".format(Mp))\n# Compute the steady-state error\nErrp = datos_it1.describe().loc['75%','temperatura'] # Steady-state temperature value\nEregimen = abs(Sp-Errp)\nprint(\"The steady-state error is: {:.2f}\".format(Eregimen))", "In this case we set a setpoint of 80 ºC. As we can see, once the controller is introduced the temperature tends to stabilize; however, it shows a lot of 
overshoot. We will therefore increase the values of $K_i$ and $K_d$; the values for this second iteration are the following:\n$K_p = 6082.6$ $K_i=103.25$ $K_d=51.425$\nController iteration 2", "# Load the second controller test data and select the columns to work with\ndatos_it2 = pd.read_csv('Regulador2.csv')\ncolumns = ['temperatura']\n\n# Show in several plots the information obtained from the test\nax2 = datos_it2[columns].plot(figsize=(10,5), ylim=(20,100),title='Mathematical model of the system with controller',)\nax2.set_xlabel('Time')\nax2.set_ylabel('Temperature [ºC]')\nax2.hlines([80],0,3500,colors='r')\n# Compute the overshoot Mp\nTmax = datos_it2.describe().loc['max','temperatura'] # Maximum temperature reached during the test\n\nSp=80.0 # Setpoint value\nMp= ((Tmax-Sp)/(Sp))*100\nprint(\"The overshoot is: {:.2f}%\".format(Mp))\n# Compute the steady-state error\nErrp = datos_it2.describe().loc['75%','temperatura'] # Steady-state temperature value\nEregimen = abs(Sp-Errp)\nprint(\"The steady-state error is: {:.2f}\".format(Eregimen))", "In this second iteration we managed to lower the initial overshoot, but the steady-state error is larger. 
We therefore increase the values of $K_i$ and $K_d$ again; the values for this third iteration are the following:\n$K_p = 6082.6$ $K_i=121.64$ $K_d=60$\nController iteration 3", "# Load the third controller test data and select the columns to work with\ndatos_it3 = pd.read_csv('Regulador3.csv')\ncolumns = ['temperatura']\n\n# Show in several plots the information obtained from the test\nax3 = datos_it3[columns].plot(figsize=(10,5), ylim=(20,180),title='Mathematical model of the system with controller',)\nax3.set_xlabel('Time')\nax3.set_ylabel('Temperature [ºC]')\nax3.hlines([160],0,6000,colors='r')\n# Compute the overshoot Mp\nTmax = datos_it3.describe().loc['max','temperatura'] # Maximum temperature reached during the test\n\nSp=160.0 # Setpoint value\nMp= ((Tmax-Sp)/(Sp))*100\nprint(\"The overshoot is: {:.2f}%\".format(Mp))\n# Compute the steady-state error\nErrp = datos_it3.describe().loc['75%','temperatura'] # Steady-state temperature value\nEregimen = abs(Sp-Errp)\nprint(\"The steady-state error is: {:.2f}\".format(Eregimen))", "In this case the setpoint was 160 ºC. As we can see, the initial overshoot has decreased compared with the previous iteration and the steady-state error is smaller. To try to minimize the error further, we will increase only the value of $K_d$. 
The values for this fourth iteration of the controller are the following:\n $K_p = 6082.6$ $K_i=121.64$ $K_d=150$\nIteration 4", "# Load the fourth controller test data and select the columns to work with\ndatos_it4 = pd.read_csv('Regulador4.csv')\ncolumns = ['temperatura']\n\n# Show in several plots the information obtained from the test\nax4 = datos_it4[columns].plot(figsize=(10,5), ylim=(20,180),title='Mathematical model of the system with controller',)\nax4.set_xlabel('Time')\nax4.set_ylabel('Temperature [ºC]')\nax4.hlines([160],0,7000,colors='r')\n# Compute the overshoot Mp\nTmax = datos_it4.describe().loc['max','temperatura'] # Maximum temperature reached during the test\nprint (\" {:.2f}\".format(Tmax))\nSp=160.0 # Setpoint value\nMp= ((Tmax-Sp)/(Sp))*100\nprint(\"The overshoot is: {:.2f}%\".format(Mp))\n# Compute the steady-state error\nErrp = datos_it4.describe().loc['75%','temperatura'] # Steady-state temperature value\nEregimen = abs(Sp-Errp)\nprint(\"The steady-state error is: {:.2f}\".format(Eregimen))", "The controller that meets the desired specifications therefore has the following characteristic equation:\n$$G_s = \\frac{150·S^2 + 6082.6·S + 121.64}{S}$$" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
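The Octave reaction-curve recipe in the record above translates almost line by line to Python. The sketch below replays the same arithmetic on a toy first-order-plus-dead-time step response (the curve, its constants `L0`/`T0`, and the numbers recovered are illustrative assumptions of mine, not the notebook's system):

```python
import numpy as np

# Toy step response with dead time L0 and time constant T0:
# y(t) = 0 for t < L0, else 1 - exp(-(t - L0)/T0)
dt = 0.01
t = np.arange(0, 20, dt)
L0, T0 = 1.0, 4.0
y = np.where(t < L0, 0.0, 1.0 - np.exp(-(t - L0) / T0))

# Ziegler-Nichols reaction-curve parameters from the steepest tangent
dy = np.diff(y) / dt
p = np.argmax(dy)
m, yi, ti = dy[p], y[p], t[p]
L = ti - yi / m                  # apparent dead time
Tau = (y[-1] - yi) / m + ti - L  # apparent time constant

# Classic PID rules (same formulas as the Octave script above)
Kp = 1.2 * Tau / L
Ti = 2.0 * L
Td = 0.5 * L
Ki = Kp / Ti
Kd = Kp * Td
print(L, Tau, Kp, Ki, Kd)
```

On this toy curve the method recovers the dead time and time constant it was given (L ≈ 1, Tau ≈ 4), which is a quick way to check the tangent bookkeeping before trusting it on measured data.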
timkpaine/lantern
experimental/widgets/applications/Lorenz Differential Equations.ipynb
apache-2.0
[ "Exploring the Lorenz System of Differential Equations\nIn this Notebook we explore the Lorenz system of differential equations:\n$$\n\\begin{aligned}\n\\dot{x} & = \\sigma(y-x) \\\n\\dot{y} & = \\rho x - y - xz \\\n\\dot{z} & = -\\beta z + xy\n\\end{aligned}\n$$\nThis is one of the classic systems in non-linear differential equations. It exhibits a range of different behaviors as the parameters (\\(\\sigma\\), \\(\\beta\\), \\(\\rho\\)) are varied.\nImports\nFirst, we import the needed things from IPython, NumPy, Matplotlib and SciPy.", "%matplotlib inline\n\nfrom ipywidgets import interact, interactive\nfrom IPython.display import clear_output, display, HTML\n\nimport numpy as np\nfrom scipy import integrate\n\nfrom matplotlib import pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\nfrom matplotlib.colors import cnames\nfrom matplotlib import animation", "Computing the trajectories and plotting the result\nWe define a function that can integrate the differential equations numerically and then plot the solutions. 
This function has arguments that control the parameters of the differential equation (\\(\\sigma\\), \\(\\beta\\), \\(\\rho\\)), the numerical integration (N, max_time) and the visualization (angle).", "def solve_lorenz(N=10, angle=0.0, max_time=4.0, sigma=10.0, beta=8./3, rho=28.0):\n\n fig = plt.figure()\n ax = fig.add_axes([0, 0, 1, 1], projection='3d')\n ax.axis('off')\n\n # prepare the axes limits\n ax.set_xlim((-25, 25))\n ax.set_ylim((-35, 35))\n ax.set_zlim((5, 55))\n \n def lorenz_deriv(x_y_z, t0, sigma=sigma, beta=beta, rho=rho):\n \"\"\"Compute the time-derivative of a Lorenz system.\"\"\"\n x, y, z = x_y_z\n return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]\n\n # Choose random starting points, uniformly distributed from -15 to 15\n np.random.seed(1)\n x0 = -15 + 30 * np.random.random((N, 3))\n\n # Solve for the trajectories\n t = np.linspace(0, max_time, int(250*max_time))\n x_t = np.asarray([integrate.odeint(lorenz_deriv, x0i, t)\n for x0i in x0])\n \n # choose a different color for each trajectory\n colors = plt.cm.viridis(np.linspace(0, 1, N))\n\n for i in range(N):\n x, y, z = x_t[i,:,:].T\n lines = ax.plot(x, y, z, '-', c=colors[i])\n plt.setp(lines, linewidth=2)\n\n ax.view_init(30, angle)\n plt.show()\n\n return t, x_t", "Let's call the function once to view the solutions. For this set of parameters, we see the trajectories swirling around two points, called attractors.", "t, x_t = solve_lorenz(angle=0, N=10)", "Using IPython's interactive function, we can explore how the trajectories behave as we change the various parameters.", "w = interactive(solve_lorenz, angle=(0.,360.), max_time=(0.1, 4.0), \n N=(0,50), sigma=(0.0,50.0), rho=(0.0,50.0))\ndisplay(w)", "The object returned by interactive is a Widget object and it has attributes that contain the current result and arguments:", "t, x_t = w.result\n\nw.kwargs", "After interacting with the system, we can take the result and perform further computations. 
In this case, we compute the average positions in \\(x\\), \\(y\\) and \\(z\\).", "xyz_avg = x_t.mean(axis=1)\n\nxyz_avg.shape", "Creating histograms of the average positions (across different trajectories) show that on average the trajectories swirl about the attractors.", "plt.hist(xyz_avg[:,0])\nplt.title('Average $x(t)$');\n\nplt.hist(xyz_avg[:,1])\nplt.title('Average $y(t)$');" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
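A quick property worth checking about the Lorenz system above: for ρ > 1 it has two nonzero fixed points at (±√(β(ρ−1)), ±√(β(ρ−1)), ρ−1) — the centers of the two lobes the trajectories swirl around. A small stand-alone sketch verifying this from the same derivative function:

```python
import math

def lorenz_deriv(xyz, sigma=10.0, beta=8.0 / 3, rho=28.0):
    # Same right-hand side as the notebook's lorenz_deriv (time argument dropped).
    x, y, z = xyz
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

# The two lobe centers: x = y = +/- sqrt(beta*(rho-1)), z = rho - 1
r = math.sqrt(8.0 / 3 * 27.0)
for fp in [(r, r, 27.0), (-r, -r, 27.0), (0.0, 0.0, 0.0)]:
    print(fp, lorenz_deriv(fp))
```

All three derivatives vanish at these points, matching the pair of attractor centers visible in the plot for the default parameters.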
ramalho/jupyter101
en/Pi simulation.ipynb
gpl-3.0
[ "Computing π by simulation\n<img src=\"../img/cannon-circle.png\" width=\"500\">\nConsider a cannon firing random shots on a square field enclosing a circle. If the radius of the circle is 1, then its area is π, and the area of the field is 4.\nAfter n shots, the number c of shots inside the circle will be proportional to π:\n$$\n\\frac{π}{4}=\\frac{c}{n}\n$$\nThen the value of π can be computed like this:\n$$\nπ = \\frac{4 \\cdot c}{n}\n$$\nTo get started, let's generate coordinates for the shots:", "import random\n\ndef rnd(n):\n return [random.uniform(-1, 1) for _ in range(n)]\n\nSHOTS = 5000\nx = rnd(SHOTS)\ny = rnd(SHOTS)", "Now we can select coordinate pairs inside the circle:", "def pairs(seq1, seq2):\n yes1, yes2, no1, no2 = [], [], [], []\n for a, b in zip(seq1, seq2):\n if (a*a + b*b)**.5 <= 1:\n yes1.append(a)\n yes2.append(b)\n else:\n no1.append(a)\n no2.append(b)\n return yes1, yes2, no1, no2\n\nx_sim, y_sim, x_nao, y_nao = pairs(x, y)", "We now plot the shots inside the circle in blue, outside in red:", "%matplotlib inline\nimport matplotlib.pyplot as plt\n\nplt.figure()\nplt.axes().set_aspect('equal')\nplt.grid()\nplt.scatter(x_sim, y_sim, 3, color='b')\nplt.scatter(x_nao, y_nao, 3, color='r')", "We can now see the π approximation:\n$$\nπ = \\frac{4 \\cdot c}{n}\n$$", "4 * len(x_sim) / SHOTS", "The next function abstracts the process so far. 
Given n, pi(n) will compute an approximation of π by generating random coordinates and counting those that fall inside the circle:", "def pi(n):\n uni = random.uniform\n c = 0\n i = 0\n while i < n:\n if abs(complex(uni(-1, 1), uni(-1, 1))) <= 1:\n c += 1\n i += 1\n return c * 4.0 / n", "Using this loop, I tried the pi() function with n at different orders of magnitude:\n```python\nres = []\nfor i in range(10):\n n = 10**i\n res.append((n, pi(n)))\nres\n```\nMy notebook took more than 25 minutes to compute these results:", "res = [\n (1, 4.0),\n (10, 2.8),\n (100, 3.24),\n (1000, 3.096),\n (10000, 3.1248),\n (100000, 3.14144),\n (1000000, 3.142716),\n (10000000, 3.1410784),\n (100000000, 3.14149756),\n (1000000000, 3.141589804)\n]", "Now we can graph how the results of pi() approach the actual π (the red line):", "import math\n\nplt.figure()\nx, y = zip(*res)\nx = [round(math.log(n, 10)) for n in x]\nplt.plot(x, y)\nplt.axhline(math.pi, color='r')\nplt.grid()\nplt.xticks(x, ['10**%1.0f' % a for a in x])\nx" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
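The per-shot Python loop in the record above is what made the billion-shot run take 25 minutes. A vectorized NumPy variant of the same estimator (my own sketch, assuming NumPy is available) draws all shots at once:

```python
import numpy as np

def pi_vectorized(n, seed=0):
    # Draw all n shots at once instead of looping in Python.
    rng = np.random.default_rng(seed)
    pts = rng.uniform(-1, 1, size=(n, 2))
    inside = (pts ** 2).sum(axis=1) <= 1.0
    return 4.0 * inside.sum() / n

print(pi_vectorized(1_000_000))
```

The statistical error still shrinks only like 1/√n, but the constant factor per shot drops by orders of magnitude compared with the pure-Python loop.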
vpw/AndroidForBeginners
Ass2.5/END3_MNIST_addnumber_ass2_5.ipynb
apache-2.0
[ "<a href=\"https://colab.research.google.com/github/vpw/AndroidForBeginners/blob/master/Ass2.5/END3_MNIST_addnumber_ass2_5.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\nQuick Start PyTorch - MNIST\nTo run a Code Cell you can click on the ⏯ Run button in the Navigation Bar above or type Shift + Enter", "#pip install --force-reinstall torch==1.2.0 torchvision==0.4.0 -f https://download.pytorch.org/whl/torch_stable.html\n\n%pylab inline\nimport numpy as np\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom torch.utils.data import Dataset\nimport torch.utils.data.dataloader as dataloader\nimport torch.optim as optim\n\nfrom torch.utils.data import TensorDataset\nfrom torch.autograd import Variable\nfrom torchvision import transforms\nfrom torchvision.datasets import MNIST\nfrom torchsummary import summary\n\nSEED = 1\n\n# CUDA?\nuse_cuda = torch.cuda.is_available()\n\n# For reproducibility\ntorch.manual_seed(SEED)\n\nif use_cuda:\n torch.cuda.manual_seed(SEED)\n\ndevice = torch.device(\"cuda\" if use_cuda else \"cpu\")", "Preparing the Dataset\nThe \nMNIST dataset is normalized to [0,1] range as per the explanation here: https://stackoverflow.com/questions/63746182/correct-way-of-normalizing-and-scaling-the-mnist-dataset\nThe dataset has the input MNIST image, a onehot encoding of a random number generated per test case, and the output which consists of the concatenation of the MNIST number (onehot) and a binary (decimal to binary) encoding of the sum. As the sum can vary from 0 to 19, 5 binary digits are needed, the dimension of the model output is hence 10+5 = 15 nodes. 
The target for MNIST is the MNIST label (instead of the one hot encoding) so it is appended to the 5 digit sum.", "class MNIST_add_dataset(Dataset):\n def __init__(self, trainset=True):\n self.trainset=trainset\n self.mnist_set = MNIST('./data', train=self.trainset, download=True, \n transform=transforms.Compose([\n transforms.ToTensor(), # ToTensor does min-max normalization.\n transforms.Normalize((0.1307,), (0.3081,))\n ]), )\n # list of random integers for adding - part of training set\n self.rand_int_num_list = [np.random.randint(0,10) for i in range(len(self.mnist_set))]\n #print(\"Rand int \",self.rand_int_num_list[0])\n # convert it to 1 hot encoding for use in training\n self.num_onehot = np.identity(10)[self.rand_int_num_list] \n #print(\"Rand int one hot \",self.num_onehot[0])\n # get the list of MNIST digits\n self.digit_list = list(map(lambda x:[x[1]], self.mnist_set))\n #print(\"MNIST digit \",self.digit_list[0], len(self.digit_list))\n # get the final train target label by summing the MNIST label and the random number\n # and get the binary (5 digit) representation of the sum of the MNIST digit and the random number\n self.bin_sum_list = list(map(lambda x,y: list(map(int,f'{x[1]+y:05b}')), self.mnist_set, self.rand_int_num_list))\n #print(\"Binary of sum \",self.bin_sum_list[0])\n # set the target as a concatenation of the MNIST label and the binary encoding of the sum of the \n # MNIST number and the random number\n self.target = list(map(lambda x,y:np.concatenate((x,y)),self.digit_list,self.bin_sum_list))\n #print(\"Target \",self.target[0])\n def __getitem__(self, index):\n # MNIST image input\n image = self.mnist_set[index][0]\n # One hot encoding of the random number\n oh_num = torch.as_tensor(self.num_onehot[index],dtype=torch.float32)\n # concatenated target\n target = torch.tensor(self.target[index])\n return ([image, oh_num],target) \n def __len__(self):\n return len(self.mnist_set) ", "Make the train and test datasets", "train_set = 
MNIST_add_dataset(trainset=True)\ntest_set = MNIST_add_dataset(trainset=False)", "Visualization\nData loader", "dataloader_args = dict(shuffle=True, batch_size=256,num_workers=4, pin_memory=True) if use_cuda else dict(shuffle=True, batch_size=64)\ntrain_loader = dataloader.DataLoader(train_set, **dataloader_args)\ntest_loader = dataloader.DataLoader(test_set, **dataloader_args)", "Model\nThe model consists of 3 convolutional blocks, followed by 3 linear blocks, with ReLU activation in between. The MNIST image is fed to the 1st Conv block, and the random number (one hot) to the first linear block (along with the output of the 3rd Conv block, concatenated).", "class Model(nn.Module):\n def __init__(self):\n super(Model, self).__init__()\n # conv layer 1\n self.conv1 = nn.Sequential(\n \n nn.Conv2d(1,16,5), # 16x24x24\n nn.ReLU(),\n nn.MaxPool2d(2,2) # 16x12x12\n )\n self.conv2 = nn.Sequential(\n nn.Conv2d(16,32,5), # 32x8x8\n nn.ReLU(),\n nn.MaxPool2d(2,2) # 32x4x4\n )\n self.conv3 = nn.Sequential(\n nn.Conv2d(32,10,3), # 10x2x2\n nn.MaxPool2d(2,2) # 10x1x1\n )\n self.relu = nn.ReLU()\n self.fc1 = nn.Linear(10+10, 60) # adding random number one hot to the 1x10 MNIST output\n self.fc2 = nn.Linear(60, 30)\n self.fc3 = nn.Linear(30, 15) # 10 for MNIST 1-hot coding, and 5 for binary repr of sum of digits\n\n def forward(self, image, number):\n #print(\"0 \",image.shape)\n x = self.conv1(image)\n #print(\"1 \",x.shape)\n x = self.conv2(x)\n #print(\"2 \",x.shape)\n x = self.conv3(x)\n #print(\"3 \",x.shape)\n x = x.view(-1,10)\n #print(\"after \",x.shape)\n # concatenate the number\n x = torch.cat((x,number),1)\n x = self.fc1(x)\n x = self.relu(x)\n x = self.fc2(x)\n x = self.relu(x)\n x = self.fc3(x)\n x = x.view(-1,15)\n #print(\"In forward x shape \",x.shape)\n \n # The first 10 outputs should be the onehot encoding of the MNIST digit\n # using a Log softmax (with NLL Loss) for this\n o1 = F.log_softmax(x[:,:10])\n #print(\"In forward o1 shape \",o1.shape)\n\n # for 
the 5 digit sum outout - as it is a multi-label classification, I am using a Sigmoid and not a softmax as there\n # will be multiple 1's in the output\n # used Hardsigmoid as it has a more sharp curve\n sig = nn.Hardsigmoid()\n o2 = sig(x[:,10:])\n #print(\"In forward o2 shape \",o2.shape)\n return torch.cat((o1,o2),1)\n\nmodel = Model().to(device)\nprint(model)\n# random test\nmodel.forward(torch.rand((1, 1, 28, 28)).to(device), torch.rand((1, 10)).to(device))\n\n\ndef train(model, device, train_loader, optimizer, epoch, losses):\n print(f\"EPOCH - {epoch}\")\n model.train()\n for batch_idx, (input, target) in enumerate(train_loader):\n image, number, target = input[0].to(device), input[1].to(device), target.to(device)\n # clear the grad computation\n optimizer.zero_grad()\n y_pred = model(image, number) # Passing batch\n #print(\"Input shape \", input.shape)\n #print(len(image))\n #print(\"Image shape \", image.shape)\n #print(\"Number shape \", number.shape)\n #print(\"Target shape \",target.shape)\n #print(\"Ypred shape \",y_pred.shape)\n \n # Calculate loss\n #print(target[:,0].shape)\n #print(y_pred[:,:10])\n # using 2 losses - one for the MNIST prediction and one for the sum (binary)\n # using Negative log likelihood for the MNIST prediction as we used Log Softmax for the activation\n loss_nll = nn.NLLLoss()\n loss1 = loss_nll(y_pred[:,:10],target[:,0])\n \n \n # Using Binary cross entropy for the binary sum representation\n loss_bce = torch.nn.BCELoss()\n loss2 = loss_bce(y_pred[:,10:].float(),target[:,1:].float())\n\n # Total loss\n loss=loss1+loss2\n #print(\"Loss1 \",loss1.cpu().data.item())\n #print(\"Loss2 \",loss2.cpu().data.item())\n #print(\"Loss \",loss.cpu().data.item())\n losses.append(loss.cpu().data.item())\n \n # Backpropagation\n loss.backward()\n optimizer.step()\n # Display\n if batch_idx % 100 == 0:\n print('\\r Train Epoch: {}/{} \\\n [{}/{} ({:.0f}%)]\\\n \\tAvg Loss: {:.6f}'.format(\n epoch+1,\n EPOCHS,\n batch_idx * len(image), \n 
len(train_loader.dataset),\n 100. * batch_idx / len(train_loader), \n loss.cpu().data.item()/256), \n end='')\n\ndef test(model, device, test_loader):\n model.eval()\n test_loss = 0\n correct_MNIST = 0\n correct_sum = 0\n correct = 0\n with torch.no_grad(): # dont compute gradients \n for data, target in test_loader:\n image, number, target = data[0].to(device), data[1].to(device), target.to(device)\n # get prediction\n output = model(image, number)\n #print(output.shape, target.shape)\n #print(\"Output \",output,\"\\nTarget \", target)\n \n # compute loss\n loss_nll = nn.NLLLoss()\n loss1 = loss_nll(output[:,:10],target[:,0]).item()\n \n loss_bce = torch.nn.BCELoss()\n loss2 = loss_bce(output[:,10:].float(),target[:,1:].float()).item()\n \n loss = loss1+loss2\n \n #print(\"Loss1 \",loss1.cpu().data.item())\n #print(\"Loss2 \",loss2.cpu().data.item())\n #print(\"Loss \",loss.cpu().data.item())\n\n test_loss += loss # sum up batch loss\n\n #pred = output.argmax(dim=1, keepdim=True) # get the index of the max log-probability\n #correct += pred.eq(target.view_as(pred)).sum().item()\n\n pred_MNIST = output[:,:10].argmax(dim=1, keepdim=True) # get the index of the max log-probability\n correct_MNIST_list = pred_MNIST.eq(target[:,0].view_as(pred_MNIST))\n correct_MNIST += correct_MNIST_list.sum().item()\n pred_sum = output[:,10:]\n correct_sum_list = pred_sum.eq(target[:,1:].view_as(pred_sum))\n correct_sum += correct_sum_list.sum().item()\n correct_list = torch.logical_and(correct_MNIST_list, correct_sum_list)\n correct += correct_list.sum().item()\n\n test_loss /= len(test_loader.dataset)\n\n print('\\nTest set: Average loss: {:.4f}, \\\n Accuracy-MNIST: {}/{} ({:.0f}%)\\t\\\n Accuracy-sum: {}/{} ({:.0f}%)\\t\\\n Accuracy-total: {}/{} ({:.0f}%)\\n'.format(\n test_loss, \n correct_MNIST, len(test_loader.dataset),\n 100. * correct_MNIST / len(test_loader.dataset),\n correct_sum, len(test_loader.dataset),\n 100. 
* correct_sum / len(test_loader.dataset),\n correct, len(test_loader.dataset),\n 100. * correct / len(test_loader.dataset)))\n\nmodel = Model().to(device)\noptimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)\n\nEPOCHS=20\nlosses = []\n\nfor epoch in range(EPOCHS):\n train(model, device, train_loader, optimizer, epoch, losses)\n test(model, device, test_loader)\n\n\nplot(losses)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
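The 5-bit binary sum target used by the dataset in the record above is built with an f-string (`f'{x+y:05b}'`). A tiny stand-alone check of that encoding (the helper name `to_bits` is mine, not the notebook's):

```python
def to_bits(n, width=5):
    # Same encoding the dataset uses: f'{n:05b}' split into a list of ints.
    return [int(d) for d in f'{n:0{width}b}']

print(to_bits(0))
print(to_bits(18))  # largest possible sum of two digits, 9 + 9
```

Five bits comfortably cover every possible sum of an MNIST digit and a random digit, so the 10 + 5 = 15-node output head is sufficient.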
philferriere/tfoptflow
tfoptflow/pwcnet_finetune_sm-6-2-cyclic-chairsthingsmix.ipynb
mit
[ "PWC-Net-small model finetuning (with cyclical learning rate schedule)\nIn this notebook we:\n- Use a small model (no dense or residual connections), 6 level pyramid, uspample level 2 by 4 as the final flow prediction\n- Train the PWC-Net-small model on a mix of the FlyingChairs and FlyingThings3DHalfRes dataset using a Cyclic<sub>short</sub> schedule of our own\n- Let the Cyclic<sub>short</sub> schedule oscillate between 2e-05 and 1e-06 for 200,000 steps\n- Switch to the \"robust\" loss described in the paper, instead of the \"multiscale\" loss used during training\nBelow, look for TODO references and customize this notebook based on your own needs.\nReference\n[2018a]<a name=\"2018a\"></a> Sun et al. 2018. PWC-Net: CNNs for Optical Flow Using Pyramid, Warping, and Cost Volume. [arXiv] [web] [PyTorch (Official)] [Caffe (Official)]", "\"\"\"\npwcnet_finetune.ipynb\n\nPWC-Net model finetuning.\n\nWritten by Phil Ferriere\n\nLicensed under the MIT License (see LICENSE for details)\n\nTensorboard:\n [win] tensorboard --logdir=E:\\\\repos\\\\tf-optflow\\\\tfoptflow\\\\pwcnet-sm-6-2-cyclic-chairsthingsmix_finetuned\n [ubu] tensorboard --logdir=/media/EDrive/repos/tf-optflow/tfoptflow/pwcnet-sm-6-2-cyclic-chairsthingsmix_finetuned\n\"\"\"\nfrom __future__ import absolute_import, division, print_function\nimport sys\nfrom copy import deepcopy\n\nfrom dataset_base import _DEFAULT_DS_TUNE_OPTIONS\nfrom dataset_flyingchairs import FlyingChairsDataset\nfrom dataset_flyingthings3d import FlyingThings3DHalfResDataset\nfrom dataset_mixer import MixedDataset\nfrom model_pwcnet import ModelPWCNet, _DEFAULT_PWCNET_FINETUNE_OPTIONS", "TODO: Set this first!", "# TODO: You MUST set dataset_root to the correct path on your machine!\nif sys.platform.startswith(\"win\"):\n _DATASET_ROOT = 'E:/datasets/'\nelse:\n _DATASET_ROOT = '/media/EDrive/datasets/'\n_FLYINGCHAIRS_ROOT = _DATASET_ROOT + 'FlyingChairs_release'\n_FLYINGTHINGS3DHALFRES_ROOT = _DATASET_ROOT + 'FlyingThings3D_HalfRes'\n 
\n# TODO: You MUST adjust the settings below based on the number of GPU(s) used for training\n# Set controller device and devices\n# A one-gpu setup would be something like controller='/device:GPU:0' and gpu_devices=['/device:GPU:0']\n# Here, we use a dual-GPU setup, as shown below\n# gpu_devices = ['/device:GPU:0', '/device:GPU:1']\n# controller = '/device:CPU:0'\ngpu_devices = ['/device:GPU:0']\ncontroller = '/device:GPU:0'\n\n# TODO: You MUST adjust this setting below based on the amount of memory on your GPU(s)\n# Batch size\nbatch_size = 8", "Finetune on FlyingChairs+FlyingThings3DHalfRes mix\nLoad the dataset", "# TODO: You MUST set the batch size based on the capabilities of your GPU(s) \n# Load train dataset\nds_opts = deepcopy(_DEFAULT_DS_TUNE_OPTIONS)\nds_opts['in_memory'] = False # Too many samples to keep in memory at once, so don't preload them\nds_opts['aug_type'] = 'heavy' # Apply all supported augmentations\nds_opts['batch_size'] = batch_size * len(gpu_devices) # Use a multiple of 8; here, 16 for dual-GPU mode (Titan X & 1080 Ti)\nds_opts['crop_preproc'] = (256, 448) # Crop to a smaller input size\nds1 = FlyingChairsDataset(mode='train_with_val', ds_root=_FLYINGCHAIRS_ROOT, options=ds_opts)\nds_opts['type'] = 'into_future'\nds2 = FlyingThings3DHalfResDataset(mode='train_with_val', ds_root=_FLYINGTHINGS3DHALFRES_ROOT, options=ds_opts)\nds = MixedDataset(mode='train_with_val', datasets=[ds1, ds2], options=ds_opts)\n\n# Display dataset configuration\nds.print_config()", "Configure the finetuning", "# Start from the default options\nnn_opts = deepcopy(_DEFAULT_PWCNET_FINETUNE_OPTIONS)\nnn_opts['verbose'] = True\nnn_opts['ckpt_path'] = './models/pwcnet-sm-6-2-cyclic-chairsthingsmix/pwcnet.ckpt-49000'\nnn_opts['ckpt_dir'] = './pwcnet-sm-6-2-cyclic-chairsthingsmix_finetuned/'\nnn_opts['batch_size'] = ds_opts['batch_size']\nnn_opts['x_shape'] = [2, ds_opts['crop_preproc'][0], ds_opts['crop_preproc'][1], 3]\nnn_opts['y_shape'] = [ds_opts['crop_preproc'][0], 
ds_opts['crop_preproc'][1], 2]\nnn_opts['use_tf_data'] = True # Use tf.data reader\nnn_opts['gpu_devices'] = gpu_devices\nnn_opts['controller'] = controller\n\n# Use the PWC-Net-small model in quarter-resolution mode\nnn_opts['use_dense_cx'] = False\nnn_opts['use_res_cx'] = False\nnn_opts['pyr_lvls'] = 6\nnn_opts['flow_pred_lvl'] = 2\n\n# Robust loss as described doesn't work, so try the following:\nnn_opts['loss_fn'] = 'loss_multiscale' # 'loss_multiscale' # 'loss_robust' # 'loss_robust'\nnn_opts['q'] = 1. # 0.4 # 1. # 0.4 # 1.\nnn_opts['epsilon'] = 0. # 0.01 # 0. # 0.01 # 0.\n\n# Set the learning rate schedule. This schedule is for a single GPU using a batch size of 8.\n# Below,we adjust the schedule to the size of the batch and the number of GPUs.\nnn_opts['lr_policy'] = 'multisteps'\nnn_opts['init_lr'] = 1e-05\nnn_opts['lr_boundaries'] = [80000, 120000, 160000, 200000]\nnn_opts['lr_values'] = [1e-05, 5e-06, 2.5e-06, 1.25e-06, 6.25e-07]\nnn_opts['max_steps'] = 200000\n\n# Below,we adjust the schedule to the size of the batch and our number of GPUs (2).\nnn_opts['max_steps'] = int(nn_opts['max_steps'] * 8 / ds_opts['batch_size'])\nnn_opts['cyclic_lr_stepsize'] = int(nn_opts['cyclic_lr_stepsize'] * 8 / ds_opts['batch_size'])\n\n# Instantiate the model and display the model configuration\nnn = ModelPWCNet(mode='train_with_val', options=nn_opts, dataset=ds)\nnn.print_config()", "Finetune the model", "# Train the model\nnn.train()", "Training log\nHere are the training curves for the run above:\n\n\n\nHere are the predictions issued by the model for a few validation samples:" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
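The finetuning options above rescale `max_steps` (and the cyclic step size) by `8 / batch_size` so that a schedule written for one GPU at batch size 8 still covers the same number of training samples on other setups. A minimal sketch of that rescaling logic, using the notebook's multistep boundaries; the helper name `scale_schedule` is my own, not part of the notebook's codebase:

```python
def scale_schedule(max_steps, boundaries, base_batch=8, batch_size=8, num_gpus=1):
    """Rescale a step-based LR schedule so the total number of samples seen
    during training stays constant when the effective batch size changes."""
    factor = base_batch / (batch_size * num_gpus)
    return int(max_steps * factor), [int(b * factor) for b in boundaries]

# Schedule from the notebook: 200k steps at batch size 8 on a single GPU,
# rescaled here for a hypothetical dual-GPU run at batch size 8 per GPU.
max_steps, boundaries = scale_schedule(
    200000, [80000, 120000, 160000, 200000], batch_size=8, num_gpus=2)
print(max_steps, boundaries)  # → 100000 [40000, 60000, 80000, 100000]
```

Note that for a multistep policy the `lr_boundaries` arguably need the same scaling as `max_steps`; the sketch applies it to both.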
Kaggle/learntools
notebooks/deep_learning/raw/tut4_transfer_learning.ipynb
apache-2.0
[ "Intro\nAt the end of this lesson, you will be able to use transfer learning to build highly accurate computer vision models for your custom purposes, even when you have relatively little data.\nLesson", "from IPython.display import YouTubeVideo\nYouTubeVideo('mPFq5KMxKVw', width=800, height=450)", "Sample Code\nSpecify Model", "from tensorflow.python.keras.applications import ResNet50\nfrom tensorflow.python.keras.models import Sequential\nfrom tensorflow.python.keras.layers import Dense, Flatten, GlobalAveragePooling2D\n\nnum_classes = 2\nresnet_weights_path = '../input/resnet50/resnet50_weights_tf_dim_ordering_tf_kernels_notop.h5'\n\nmy_new_model = Sequential()\nmy_new_model.add(ResNet50(include_top=False, pooling='avg', weights=resnet_weights_path))\nmy_new_model.add(Dense(num_classes, activation='softmax'))\n\n# Say not to train first layer (ResNet) model. It is already trained\nmy_new_model.layers[0].trainable = False", "Compile Model", "my_new_model.compile(optimizer='sgd', loss='categorical_crossentropy', metrics=['accuracy'])", "Fit Model", "from tensorflow.python.keras.applications.resnet50 import preprocess_input\nfrom tensorflow.python.keras.preprocessing.image import ImageDataGenerator\n\nimage_size = 224\ndata_generator = ImageDataGenerator(preprocessing_function=preprocess_input)\n\n\ntrain_generator = data_generator.flow_from_directory(\n '../input/urban-and-rural-photos/train',\n target_size=(image_size, image_size),\n batch_size=24,\n class_mode='categorical')\n\nvalidation_generator = data_generator.flow_from_directory(\n '../input/urban-and-rural-photos/val',\n target_size=(image_size, image_size),\n class_mode='categorical')\n\nmy_new_model.fit_generator(\n train_generator,\n steps_per_epoch=3,\n validation_data=validation_generator,\n validation_steps=1)", "Note on Results:\nThe printed validation accuracy can be meaningfully better than the training accuracy at this stage. 
This can be puzzling at first.\nIt occurs because the training accuracy was calculated at multiple points as the network was improving (the numbers in the convolutions were being updated to make the model more accurate). The network was inaccurate when the model saw the first training images, since the weights hadn't been trained/improved much yet. Those first training results were averaged into the measure above.\nThe validation loss and accuracy measures were calculated after the model had gone through all the data. So the network had been fully trained when these scores were calculated.\nThis isn't a serious issue in practice, and we tend not to worry about it.\nYour Turn\nTry transfer learning yourself." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
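The "Note on Results" above explains why validation accuracy can beat training accuracy: the training number is an average taken while the weights were still improving, whereas the validation number is measured once with the final weights. A toy numeric illustration of that effect (the per-batch accuracies below are made up):

```python
# Hypothetical per-batch training accuracies over one epoch of an improving model.
batch_accuracies = [0.50, 0.65, 0.75, 0.85, 0.90]

# Keras-style "training accuracy": the average over all batches in the epoch,
# including early batches scored with barely-trained weights.
train_accuracy = sum(batch_accuracies) / len(batch_accuracies)

# Validation accuracy: computed once, after the epoch, with the final weights.
val_accuracy = 0.88  # assumed end-of-epoch score

print(round(train_accuracy, 2))      # → 0.73
print(val_accuracy > train_accuracy) # → True
```

The gap closes in later epochs, once the early-epoch batches are no longer dragging the average down.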
SoftwareHeritage/swh-web-ui
swh/web/tests/resources/contents/other/extensions/word2vec.ipynb
agpl-3.0
[ "Learning word embeddings - word2vec\n- Saurabh Mathur\nThe aim of this experiment is to use the algorithm developed by Tomas Mikolov et al. to learn high quality vector representations of text.\nThe skip-gram model\nGiven a sequence of words $ w_1, w_2, .., w_T $, predict the surrounding context words.\nThe objective is to maximize the average log probability.\n$$ AverageLogProbability = \\frac{1}{T} \\sum_{t=1}^{T} \\sum_{-c \\leqslant j\\leqslant c, j \\neq 0} \\log p (w_{t+j} | w_t) $$\nwhere $ c $ is the length of the context.\nBasic skip-gram model\nThe basic skip-gram formulation defines $ p (w_{t+j} | w_t) $ in terms of softmax as -\n$$ p (w_O \\mid w_I) = \\frac{ \\exp({v'_{w_O}}^{T} v_{w_I}) }{ \\sum_{w=1}^{W} \\exp({v'_{w}}^{T} v_{w_I}) } $$\nwhere $v_{w_I}$ and $v'_{w_O}$ are the input and output word vectors.\nThis is extremely costly and thus impractical, as $W$ is huge ( ~ $10^5-10^7$ terms ). \nThere are three proposed methods to get around this limitation.\n- Hierarchical softmax\n- Negative sampling\n- Subsampling of frequent words\nI'm using Google's Tensorflow library for the implementation", "import tensorflow as tf", "For the data, I'm using the text8 dataset which is a 100MB sample of cleaned English Wikipedia dump on Mar.
3, 2006", "import os\nimport urllib.request\n\n\ndef fetch_data(url):\n\n    filename = url.split(\"/\")[-1]\n    datadir = os.path.join(os.getcwd(), \"data\")\n    filepath = os.path.join(datadir, filename)\n\n    if not os.path.exists(datadir):\n        os.makedirs(datadir)\n    if not os.path.exists(filepath):\n        urllib.request.urlretrieve(url, filepath)\n\n    return filepath\n\nurl = \"http://mattmahoney.net/dc/text8.zip\"\nfilepath = fetch_data(url)\nprint(\"Data at {0}.\".format(filepath))\n\n\nimport zipfile\n\ndef read_data(filename):\n    with zipfile.ZipFile(filename) as f:\n        data = tf.compat.as_str(f.read(f.namelist()[0])).split()\n    return data\n\n\nwords = read_data(filepath)", "Take only the `vocabulary_size` most common words; mark the rest as UNK (unknown).", "import collections\n\ndef build_dataset(words, vocabulary_size):\n    # Note: list.extend returns None, so build the list in two steps\n    count = [[\"UNK\", -1]]\n    count.extend(collections.Counter(words).most_common(vocabulary_size - 1))\n    word_to_index = {word: i for i, (word, _) in enumerate(count)}\n    data = [word_to_index.get(word, 0) for word in words]  # map unknown words to 0\n    unk_count = data.count(0)  # Number of unknown words\n    count[0][1] = unk_count\n    index_to_word = dict(zip(word_to_index.values(), word_to_index.keys()))\n\n    return data, count, word_to_index, index_to_word\n\nvocabulary_size = 50000\ndata, count, word_to_index, index_to_word = build_dataset(words, vocabulary_size)\nprint(\"data: {0}\".format(data[:5]))\nprint(\"count: {0}\".format(count[:5]))\nprint(\"word_to_index: {0}\".format(list(word_to_index.items())[:5]))\nprint(\"index_to_word: {0}\".format(list(index_to_word.items())[:5]))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
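The full-softmax definition in the word2vec notebook above is exactly what makes the basic model impractical: the denominator sums over every word in the vocabulary. A pure-Python sketch of that computation on a toy three-word vocabulary (the vector values are made up; `skipgram_softmax` is my own name, not a TensorFlow API):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def skipgram_softmax(v_in, out_vectors):
    """p(w_o | w_i) for every output word.

    The denominator sums over the whole vocabulary, which is the O(W)
    cost that hierarchical softmax and negative sampling avoid."""
    scores = [math.exp(dot(v_out, v_in)) for v_out in out_vectors]
    total = sum(scores)  # the expensive normalization term
    return [s / total for s in scores]

# Toy vocabulary of three words with 2-d vectors.
v_in = [1.0, 0.0]
out_vectors = [[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]]
probs = skipgram_softmax(v_in, out_vectors)
print([round(p, 3) for p in probs])  # probabilities sum to 1
```

With a real vocabulary of $10^5$–$10^7$ terms, this normalization would be recomputed for every training pair, hence the approximations listed in the notebook.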
qutip/qutip-notebooks
examples/optomechanical-steadystate.ipynb
lgpl-3.0
[ "QuTiP Example: Steady State for an Optomechanical System in the Single-Photon Strong-Coupling Regime\nP.D. Nation and J.R. Johansson\nFor more information about QuTiP see http://qutip.org", "%matplotlib inline\n\nimport matplotlib.pyplot as plt\n\nimport numpy as np\n\nfrom IPython.display import Image\n\nfrom qutip import *", "Optomechanical Hamiltonian\nThe optomechanical Hamiltonian arises from the radiation pressure interaction of light in an optical cavity where one of the cavity mirrors is mechanically compliant.", "Image(filename='images/optomechanical_setup.png', width=500, embed=True)", "Assuming that $a^{+}$, $a$ and $b^{+}$, $b$ are the raising and lowering operators for the cavity and mechanical oscillator, respectively, the Hamiltonian for an optomechanical system driven by a classical monochromatic pump term can be written as \n\\begin{equation}\n\\frac{\\hat{H}}{\\hbar}=-\\Delta\\hat{a}^{+}\\hat{a}+\\omega_{m}\\hat{b}^{+}\\hat{b}+g_{0}(\\hat{b}+\\hat{b}^{+})\\hat{a}^{+}\\hat{a}+E\\left(\\hat{a}+\\hat{a}^{+}\\right),\n\\end{equation}\nwhere $\\Delta=\\omega_{p}-\\omega_{c}$ is the detuning between the pump ($\\omega_{p}$) and cavity ($\\omega_{c}$) mode frequencies, $g_{0}$ is the single-photon coupling strength, and $E$ is the amplitude of the pump mode. It is known that in the single-photon strong-coupling regime, where the cavity frequency shift per phonon is larger than the cavity line width, $g_{0}/\\kappa \\gtrsim 1$, where $\\kappa$ is the decay rate of the cavity, and a single photon displaces the mechanical oscillator by more than its zero-point amplitude, $g_{0}/\\omega_{m} \\gtrsim 1$, or equivalently, $g^{2}_{0}/\\kappa\\omega_{m} \\gtrsim 1$, the mechanical oscillator can be driven into a nonclassical steady state of the system$+$environment dynamics.
Here, we will use the steady state solvers in QuTiP to explore such a state and compare the various solvers.\nSolving for the Steady State Density Matrix\nThe steady state density matrix of the optomechanical system plus the environment can be found from the Liouvillian superoperator $\\mathcal{L}$ via\n\\begin{equation}\n\\frac{d\\rho}{dt}=\\mathcal{L}\\rho=0,\n\\end{equation}\nwhere $\\mathcal{L}$ is typically given in Lindblad form\n\\begin{align}\n\\mathcal{L}[\\hat{\\rho}]=&-i[\\hat{H},\\hat{\\rho}]+\\kappa \\mathcal{D}\\left[\\hat{a},\\hat{\\rho}\\right]\\\\\n&+\\Gamma_{m}(1+n_{\\rm th})\\mathcal{D}[\\hat{b},\\hat{\\rho}]+\\Gamma_{m}n_{\\rm th}\\mathcal{D}[\\hat{b}^{+},\\hat{\\rho}], \\nonumber\n\\end{align}\nwhere $\\Gamma_{m}$ is the coupling strength of the mechanical oscillator to its thermal environment with average occupation number $n_{\\rm th}$. As is customary, here we assume that the cavity mode is coupled to the vacuum.\nAlthough the steady state solution is nothing but an eigenvalue equation, the numerical solution to this equation is anything but trivial due to the non-Hermitian structure of $\\mathcal{L}$ and its worsening condition number as the dimensionality of the truncated Hilbert space increases.\nSteady State Solvers in QuTiP v3.0+\nAs of QuTiP version 3.0, the following steady state solvers are available:\n\ndirect: Direct LU factorization\neigen: Calculates the eigenvector associated with the zero eigenvalue of $\\mathcal{L}\\rho$.\npower: Finds the zero eigenvector using the inverse-power method.\niterative-gmres: Iterative solution via the GMRES solver.\niterative-lgmres: Iterative solution via the LGMRES solver.\niterative-bicgstab: Iterative solution via the BICGSTAB solver.\nsvd: Solution via SVD decomposition (dense matrices only).\n\nSetup and Solution\nSystem Parameters", "# System Parameters (in units of wm)\n#-----------------------------------\nNc = 4 # Number of cavity states\nNm = 80 # Number of mech states\nkappa = 0.3 # Cavity 
damping rate\nE = 0.1 # Driving Amplitude \ng0 = 2.4*kappa # Coupling strength\nQm = 1e4 # Mech quality factor\ngamma = 1/Qm # Mech damping rate\nn_th = 1 # Mech bath temperature\ndelta = -0.43 # Detuning", "Build Hamiltonian and Collapse Operators", "# Operators\n#----------\na = tensor(destroy(Nc), qeye(Nm))\nb = tensor(qeye(Nc), destroy(Nm))\nnum_b = b.dag()*b\nnum_a = a.dag()*a\n\n# Hamiltonian\n#------------\nH = -delta*(num_a)+num_b+g0*(b.dag()+b)*num_a+E*(a.dag()+a)\n\n# Collapse operators\n#-------------------\ncc = np.sqrt(kappa)*a\ncm = np.sqrt(gamma*(1.0 + n_th))*b\ncp = np.sqrt(gamma*n_th)*b.dag()\nc_ops = [cc, cm, cp]", "Run Steady State Solvers", "solvers = ['direct','eigen','power','iterative-gmres','iterative-bicgstab']\nmech_dms = []\n\nfor ss in solvers:\n    if ss in ['iterative-gmres','iterative-bicgstab']:\n        use_rcm = True\n    else:\n        use_rcm = False\n    rho_ss, info = steadystate(H, c_ops, method=ss, use_precond=True, \n                               use_rcm=use_rcm, tol=1e-15, return_info=True)\n    print(ss, 'solution time =', info['solution_time'])\n    rho_mech = ptrace(rho_ss, 1)\n    mech_dms.append(rho_mech)\nmech_dms = np.asarray(mech_dms)", "Check Consistency of Solutions\nWe can check whether the solutions are the same by looking at the number of nonzero elements (NNZ) in the difference between the mechanical density matrices.", "for kk in range(len(mech_dms)):\n    print((mech_dms[kk]-mech_dms[0]).data.nnz)", "It seems that the eigensolver solution is not exactly the same. Let's check the magnitude of the elements to see if they are small.", "for kk in range(len(mech_dms)):\n    print(any(abs((mech_dms[kk] - mech_dms[0]).data.data)>1e-11))", "Plot the Mechanical Oscillator Wigner Function\nIt is known that the density matrix for the mechanical oscillator is diagonal in the Fock basis due to phase diffusion. 
However, some small off-diagonal terms show up during the factorization process.", "fig = plt.figure(figsize=(8,6))\nplt.spy(rho_mech.data, ms=1);", "Therefore, to remove this error, let us explicitly take the diagonal elements and form a new operator out of them.", "diag = rho_mech.diag()\nrho_mech2 = qdiags(diag, 0, dims=rho_mech.dims, shape=rho_mech.shape)\nfig = plt.figure(figsize=(8,6))\nplt.spy(rho_mech2.data, ms=1);", "Now let's compute the oscillator Wigner function and plot it to see if there are any regions of negativity.", "xvec = np.linspace(-20, 20, 256)\nW = wigner(rho_mech2, xvec, xvec)\nwmap = wigner_cmap(W, shift=-1e-5)\n\nfig, ax = plt.subplots(figsize=(8,6))\nc = ax.contourf(xvec, xvec, W, 256, cmap=wmap)\nax.set_xlim([-10, 10])\nax.set_ylim([-10, 10])\nplt.colorbar(c, ax=ax);", "Versions", "from qutip.ipynbtools import version_table\n\nversion_table()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
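The Lindblad dissipators in the QuTiP notebook above attach a decay channel with rate $\Gamma_m(1+n_{\rm th})$ and an excitation channel with rate $\Gamma_m n_{\rm th}$ to the mechanical mode. For a bare two-level system with those rates, the steady state has a closed form, which makes a handy sanity check on what a numerical steady-state solver should reproduce in simple limits. This helper is my own sketch, not part of QuTiP:

```python
def two_level_steady_state(gamma, n_th):
    """Steady-state populations of a two-level system coupled to a thermal bath.

    Rate down: gamma * (1 + n_th); rate up: gamma * n_th.
    Detailed balance gives p_excited = n_th / (2 * n_th + 1),
    independent of gamma."""
    down = gamma * (1 + n_th)
    up = gamma * n_th
    p_excited = up / (up + down)
    return 1 - p_excited, p_excited

# With the notebook's n_th = 1, the excited-state population is 1/3.
p_ground, p_excited = two_level_steady_state(gamma=1e-4, n_th=1)
print(p_ground, p_excited)
```

For the full optomechanical problem no such closed form exists, which is why the notebook compares several numerical solvers instead.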
AllenDowney/ModSimPy
notebooks/hopper.ipynb
mit
[ "Modeling and Simulation in Python\nCase study: Hopper optimization\nCopyright 2017 Allen Downey\nLicense: Creative Commons Attribution 4.0 International", "# If you want the figures to appear in the notebook, \n# and you want to interact with them, use\n# %matplotlib notebook\n\n# If you want the figures to appear in the notebook, \n# and you don't want to interact with them, use\n# %matplotlib inline\n\n# If you want the figures to appear in separate windows, use\n# %matplotlib qt5\n\n# To switch from one to another, you have to select Kernel->Restart\n\n%matplotlib inline\n\nfrom modsim import *\n\nkg = UNITS.kilogram\nm = UNITS.meter\ns = UNITS.second\nN = UNITS.newton\n\ncondition = Condition(mass = 0.03 * kg,\n                      fraction = 1 / 3,\n                      k = 9810.0 * N / m,\n                      duration = 0.3 * s,\n                      L = 0.05 * m,\n                      d = 0.005 * m,\n                      v1 = 0 * m / s,\n                      v2 = 0 * m / s,\n                      g = 9.8 * m / s**2)\n\ncondition = Condition(mass = 0.03,\n                      fraction = 1 / 3,\n                      k = 9810.0,\n                      duration = 0.3,\n                      L = 0.05,\n                      d = 0.005,\n                      v1 = 0,\n                      v2 = 0,\n                      g = 9.8)\n\ndef make_system(condition):\n    \"\"\"Make a system object.\n    \n    condition: Condition with hopper parameters\n    \n    returns: System with init\n    \"\"\"\n    unpack(condition)\n    \n    x1 = L - d    # upper mass\n    x2 = 0        # lower mass \n    \n    init = State(x1=x1, x2=x2, v1=v1, v2=v2)\n    \n    m1, m2 = fraction*mass, (1-fraction)*mass\n    ts = linspace(0, duration, 1001)\n    \n    return System(init=init, m1=m1, m2=m2, k=k, L=L, g=g, ts=ts)", "Testing make_system", "system = make_system(condition)\nsystem\n\nsystem.init\n\ndef slope_func(state, t, system):\n    \"\"\"Computes the derivatives of the state variables.\n    \n    state: State object with x1, x2, v1, v2\n    t: time\n    system: System object with m1, m2, k, L, g\n    \n    returns: sequence of derivatives\n    \"\"\"\n    x1, x2, v1, v2 = state\n    unpack(system)\n    \n    dx = x1 - x2\n    f_spring = k * (L - dx)\n    \n    a1 = f_spring/m1 - g\n    a2 = -f_spring/m2 - g\n    \n    if t < 0.003 and a2 < 0:\n        a2 = 0\n    \n    return v1, v2, a1, a2", "Testing slope_func", "slope_func(system.init, 0, system)", 
"Now we can run the simulation.", "run_odeint(system, slope_func)\n\nsystem.results.tail()\n\nplot(system.results.x1)\n\nplot(system.results.x2)\n\nplot(system.results.x1 - system.results.x2)\n\nplot(ys, color='green', label='y')\n\ndecorate(xlabel='Time (s)',\n ylabel='Length (m)')", "Plotting r", "plot(rs, color='red', label='r')\n\ndecorate(xlabel='Time (s)',\n ylabel='Radius (mm)')", "We can also see the relationship between y and r, which I derive analytically in the book.", "plot(rs, ys, color='purple')\n\ndecorate(xlabel='Radius (mm)',\n ylabel='Length (m)',\n legend=False)", "And here's the figure from the book.", "subplot(3, 1, 1)\nplot(thetas, label='theta')\ndecorate(ylabel='Angle (rad)')\n\nsubplot(3, 1, 2)\nplot(ys, color='green', label='y')\ndecorate(ylabel='Length (m)')\n\nsubplot(3, 1, 3)\nplot(rs, color='red', label='r')\n\ndecorate(xlabel='Time(s)',\n ylabel='Radius (mm)')\n\nsavefig('chap11-fig01.pdf')", "We can use interpolation to find the time when y is 47 meters.", "T = interp_inverse(ys, kind='cubic')\nt_end = T(47)\nt_end", "At that point r is 55 mm, which is Rmax, as expected.", "R = interpolate(rs, kind='cubic')\nR(t_end)", "The total amount of rotation is 1253 rad.", "THETA = interpolate(thetas, kind='cubic')\nTHETA(t_end)", "Unrolling\nFor unrolling the paper, we need more units:", "kg = UNITS.kilogram\nN = UNITS.newton", "And a few more parameters in the Condition object.", "condition = Condition(Rmin = 0.02 * m,\n Rmax = 0.055 * m,\n Mcore = 15e-3 * kg,\n Mroll = 215e-3 * kg,\n L = 47 * m,\n tension = 2e-4 * N,\n duration = 180 * s)", "make_system computes rho_h, which we'll need to compute moment of inertia, and k, which we'll use to compute r.", "def make_system(condition):\n \"\"\"Make a system object.\n \n condition: Condition with Rmin, Rmax, Mcore, Mroll,\n L, tension, and duration\n \n returns: System with init, k, rho_h, Rmin, Rmax,\n Mcore, Mroll, ts\n \"\"\"\n unpack(condition)\n \n init = State(theta = 0 * radian,\n omega = 
0 * radian/s,\n y = L)\n \n area = pi * (Rmax**2 - Rmin**2)\n rho_h = Mroll / area\n k = (Rmax**2 - Rmin**2) / 2 / L / radian \n ts = linspace(0, duration, 101)\n \n return System(init=init, k=k, rho_h=rho_h,\n Rmin=Rmin, Rmax=Rmax,\n Mcore=Mcore, Mroll=Mroll, \n ts=ts)", "Testing make_system", "system = make_system(condition)\nsystem\n\nsystem.init", "Here's how we compute I as a function of r:", "def moment_of_inertia(r, system):\n \"\"\"Moment of inertia for a roll of toilet paper.\n \n r: current radius of roll in meters\n system: System object with Mcore, rho, Rmin, Rmax\n \n returns: moment of inertia in kg m**2\n \"\"\"\n unpack(system)\n Icore = Mcore * Rmin**2 \n Iroll = pi * rho_h / 2 * (r**4 - Rmin**4)\n return Icore + Iroll", "When r is Rmin, I is small.", "moment_of_inertia(system.Rmin, system)", "As r increases, so does I.", "moment_of_inertia(system.Rmax, system)", "Here's the slope function.", "def slope_func(state, t, system):\n \"\"\"Computes the derivatives of the state variables.\n \n state: State object with theta, omega, y\n t: time\n system: System object with Rmin, k, Mcore, rho_h, tension\n \n returns: sequence of derivatives\n \"\"\"\n theta, omega, y = state\n unpack(system)\n \n r = sqrt(2*k*y + Rmin**2)\n I = moment_of_inertia(r, system)\n tau = r * tension\n alpha = tau / I\n dydt = -r * omega\n \n return omega, alpha, dydt ", "Testing slope_func", "slope_func(system.init, 0*s, system)", "Now we can run the simulation.", "run_odeint(system, slope_func)", "And look at the results.", "system.results.tail()", "Extrating the time series", "thetas = system.results.theta\nomegas = system.results.omega\nys = system.results.y", "Plotting theta", "plot(thetas, label='theta')\n\ndecorate(xlabel='Time (s)',\n ylabel='Angle (rad)')", "Plotting omega", "plot(omegas, color='orange', label='omega')\n\ndecorate(xlabel='Time (s)',\n ylabel='Angular velocity (rad/s)')", "Plotting y", "plot(ys, color='green', label='y')\n\ndecorate(xlabel='Time (s)',\n 
ylabel='Length (m)')", "Here's the figure from the book.", "subplot(3, 1, 1)\nplot(thetas, label='theta')\ndecorate(ylabel='Angle (rad)')\n\nsubplot(3, 1, 2)\nplot(omegas, color='orange', label='omega')\ndecorate(ylabel='Angular velocity (rad/s)')\n\nsubplot(3, 1, 3)\nplot(ys, color='green', label='y')\n\ndecorate(xlabel='Time(s)',\n ylabel='Length (m)')\n\nsavefig('chap11-fig02.pdf')", "Yo-yo\nExercise: Simulate the descent of a yo-yo. How long does it take to reach the end of the string.\nI provide a Condition object with the system parameters:\n\n\nRmin is the radius of the axle. Rmax is the radius of the axle plus rolled string.\n\n\nRout is the radius of the yo-yo body. mass is the total mass of the yo-yo, ignoring the string. \n\n\nL is the length of the string.\n\n\ng is the acceleration of gravity.", "condition = Condition(Rmin = 8e-3 * m,\n Rmax = 16e-3 * m,\n Rout = 35e-3 * m,\n mass = 50e-3 * kg,\n L = 1 * m,\n g = 9.8 * m / s**2,\n duration = 1 * s)", "Here's a make_system function that computes I and k based on the system parameters.\nI estimated I by modeling the yo-yo as a solid cylinder with uniform density (see here). In reality, the distribution of weight in a yo-yo is often designed to achieve desired effects. 
But we'll keep it simple.", "def make_system(condition):\n \"\"\"Make a system object.\n \n condition: Condition with Rmin, Rmax, Rout, \n mass, L, g, duration\n \n returns: System with init, k, Rmin, Rmax, mass,\n I, g, ts\n \"\"\"\n unpack(condition)\n \n init = State(theta = 0 * radian,\n omega = 0 * radian/s,\n y = L,\n v = 0 * m / s)\n \n I = mass * Rout**2 / 2\n k = (Rmax**2 - Rmin**2) / 2 / L / radian \n ts = linspace(0, duration, 101)\n \n return System(init=init, k=k,\n Rmin=Rmin, Rmax=Rmax,\n mass=mass, I=I, g=g,\n ts=ts)", "Testing make_system", "system = make_system(condition)\nsystem\n\nsystem.init", "Write a slope function for this system, using these results from the book:\n$ r = \\sqrt{2 k y + R_{min}^2} $ \n$ T = m g I / I^* $\n$ a = -m g r^2 / I^* $\n$ \\alpha = m g r / I^* $\nwhere $I^*$ is the augmented moment of inertia, $I + m r^2$.\nHint: If y is less than 0, it means you have reached the end of the string, so the equation for r is no longer valid. In this case, the simplest thing to do it return the sequence of derivatives 0, 0, 0, 0", "# Solution goes here", "Test your slope function with the initial conditions.", "slope_func(system.init, 0*s, system)", "Then run the simulation.", "run_odeint(system, slope_func)", "Check the final conditions. If things have gone according to plan, the final value of y should be close to 0.", "system.results.tail()", "Plot the results.", "thetas = system.results.theta\nys = system.results.y", "theta should increase and accelerate.", "plot(thetas, label='theta')\n\ndecorate(xlabel='Time (s)',\n ylabel='Angle (rad)')", "y should decrease and accelerate down.", "plot(ys, color='green', label='y')\n\ndecorate(xlabel='Time (s)',\n ylabel='Length (m)')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
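The hopper notebook's `slope_func` turns a Hooke's-law spring force between the two masses into a pair of accelerations. A stripped-down, unit-free version of that calculation, with the parameter values copied from the notebook's `Condition` (the standalone function and driver are my own):

```python
def hopper_derivatives(x1, x2, v1, v2, m1, m2, k, L, g=9.8):
    """Derivatives for the two-mass spring hopper: (dx1, dx2, dv1, dv2)."""
    dx = x1 - x2               # current spring length
    f_spring = k * (L - dx)    # positive when the spring is compressed
    a1 = f_spring / m1 - g     # upper mass: spring pushes up
    a2 = -f_spring / m2 - g    # lower mass: spring pushes down
    return v1, v2, a1, a2

# Notebook values: 0.03 kg split 1/3 : 2/3, k = 9810 N/m,
# rest length L = 0.05 m, initial compression d = 0.005 m.
mass, fraction, k, L, d = 0.03, 1 / 3, 9810.0, 0.05, 0.005
m1, m2 = fraction * mass, (1 - fraction) * mass

# At the initial state the spring is compressed by d, so the upper
# mass gets a large upward acceleration (~4900 m/s^2).
print(hopper_derivatives(L - d, 0.0, 0.0, 0.0, m1, m2, k, L))
```

This is the same arithmetic the ODE solver evaluates at every step; running it once by hand is a quick check that the spring force has the right sign.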
jeffzhengye/pylearn
google_cloud/google_cloud.ipynb
unlicense
[ "Call dialogflow with python api\nprecondition\nyou have to download your json key, and export GOOGLE_APPLICATION_CREDENTIALS=\"/mnt/d/code/sabala/weather-e6aad-0371e7c946bc.json\"", "# check firewall\n!rm index.html*\n!wget www.google.com\n\nimport uuid\nfrom google.cloud import dialogflow\n\nsession_client = dialogflow.SessionsClient()\n\n# session format: 'projects/*/locations/*/agent/environments/*/users/*/sessions/*'.\ndef get_session(project_id, session_id, env=None):\n \"\"\"\n Using the same `session_id` between requests allows continuation\n of the conversation.\n :return: session is a str\n \"\"\"\n if env is None:\n session = session_client.session_path(project_id, session_id)\n return session\n else:\n assert isinstance(env, str)\n return f\"projects/{project_id}/agent/environments/{env}/sessions/{session_id}\"\n \n\n# [START dialogflow_detect_intent_text]\ndef detect_intent_texts(project_id, session_id, text, language_code, env=None):\n \"\"\"Returns the result of detect intent with texts as inputs.\n\n Using the same `session_id` between requests allows continuation\n of the conversation.\"\"\"\n session= get_session(project_id, session_id, env=env)\n print(\"Session path: {}\\n\".format(session))\n\n text_input = dialogflow.TextInput(text=text, language_code=language_code)\n query_input = dialogflow.QueryInput(text=text_input)\n\n response = session_client.detect_intent(\n request={\"session\": session, \"query_input\": query_input}\n ) \n\n return response\n\n\n# set credentials, this is a must\n\nimport os\nos.environ[\"GOOGLE_APPLICATION_CREDENTIALS\"] =\"/mnt/d/code/sabala/mega-sabala-9ibe-940e7527ac9b.json\"", "examples of making session url", "from google.cloud import dialogflow\n\nproject_id = \"sabala-348110\"\n\nsession_id = str(uuid.uuid4())\n# texts = [\"Me toque uma música\", \"toca nos 80\", \"Tocar música clássica\", \"parar música\"][-1:]\nlanguage_code = \"en-US\"\n\nsession = get_session(project_id, session_id, 
env='new')\nsession\n", "visit Mega Agent and print", "project_id = \"mega-sabala-9ibe\"\n\nsession_id = str(uuid.uuid4())\ntexts = [\"how's the weather today\", \"Você pode me dizer a maneira mais fácil de ganhar dinheiro?\"]\ntexts = [\"parar música\"]\nlanguage_code = \"pt-BR\"\nresponse = detect_intent_texts(\n project_id, session_id, texts[0], language_code\n)\n\nresponse.output_audio = \"None\"\nprint(\"=\" * 20)\nprint(\"Query text: {}\".format(response.query_result.query_text))\nprint(\n \"Detected intent: {} (confidence: {})\\n\".format(\n response.query_result.intent.display_name,\n response.query_result.intent_detection_confidence,\n )\n)\nprint(\"Fulfillment text: {}\\n\".format(response.query_result.fulfillment_text))\nprint(\"Fulfillment Full: \\n{}\\n\".format(response))\nprint(type(response))", "visit media/Music Agent\nvisit the default one: draft", "project_id = \"sabala-348110\"\n\nsession_id = str(uuid.uuid4())\ntexts = [\"Me toque uma música\", \"toca nos 80\", \"Tocar música clássica\", \"parar música\"]\nlanguage_code = \"en-US\"\n\nresponse = detect_intent_texts(\n project_id, session_id, texts[-1], language_code, env='before_tf'\n)\n\nprint( [(_, type(_)) for _ in dir(response) if not _.startswith(\"_\")] )\nresponse.output_audio = \"None\"\nprint(response.query_result.intent)", "visit active version in a specified environment", "response = detect_intent_texts(\n project_id, session_id, texts[-1], language_code, env='before_tf'\n)", "Webhook handle\n\nhttps://github.com/googleapis/python-dialogflow/blob/HEAD/samples/snippets/webhook.py", "# TODO: change the default Entry Point text to handleWebhook\n\ndef handleWebhook(request):\n\n req = request.get_json()\n\n responseText = \"\"\n intent = req[\"queryResult\"][\"intent\"][\"displayName\"]\n\n if intent == \"Default Welcome Intent\":\n responseText = \"Hello from a GCF Webhook\"\n elif intent == \"get-agent-name\":\n responseText = \"My name is Flowhook\"\n else:\n responseText = f\"There 
are no fulfillment responses defined for Intent {intent}\"\n\n # You can also use the google.cloud.dialogflowcx_v3.types.WebhookRequest protos instead of manually writing the json object\n res = {\"fulfillmentMessages\": [{\"text\": {\"text\": [responseText]}}]}\n\n return res\n\nhandleWebhook(response)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
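The `handleWebhook` function in the Dialogflow notebook above only needs a dict shaped like Dialogflow's webhook request to be exercised, so it can be tested without any Google Cloud setup. A framework-free check of the same dispatch logic with a minimal hand-built request (the intent names are the ones used in the notebook):

```python
def handle_webhook(req):
    """Build a Dialogflow fulfillment response from a webhook request dict."""
    intent = req["queryResult"]["intent"]["displayName"]

    if intent == "Default Welcome Intent":
        response_text = "Hello from a GCF Webhook"
    elif intent == "get-agent-name":
        response_text = "My name is Flowhook"
    else:
        response_text = f"There are no fulfillment responses defined for Intent {intent}"

    # Same response shape as google.cloud.dialogflow's WebhookResponse JSON
    return {"fulfillmentMessages": [{"text": {"text": [response_text]}}]}

req = {"queryResult": {"intent": {"displayName": "get-agent-name"}}}
print(handle_webhook(req)["fulfillmentMessages"][0]["text"]["text"][0])  # → My name is Flowhook
```

Exercising the handler locally like this is a quick sanity check before deploying it as a Cloud Function entry point.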
csaladenes/csaladenes.github.io
present/mcc/jupyter/1-DimensionalityReduction-PCA.ipynb
mit
[ "Dénes Csala, MCC, Kolozsvár, 2021\n<small><i>This notebook was put together by Jake Vanderplas. Source and license info is on GitHub.</i></small>\nDimensionality Reduction: Principal Component Analysis in-depth\nHere we'll explore Principal Component Analysis, which is an extremely useful linear dimensionality reduction technique.\nWe'll start with our standard set of initial imports:", "from __future__ import print_function, division\n\n%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy import stats\n\nplt.style.use('seaborn')", "Introducing Principal Component Analysis\nPrincipal Component Analysis is a very powerful unsupervised method for dimensionality reduction in data. It's easiest to visualize by looking at a two-dimensional dataset:", "np.random.seed(1)\nX = np.dot(np.random.random(size=(2, 2)), np.random.normal(size=(2, 200))).T\nplt.plot(X[:, 0], X[:, 1], 'o')\nplt.axis('equal');", "We can see that there is a definite trend in the data. What PCA seeks to do is to find the Principal Axes in the data, and explain how important those axes are in describing the data distribution:", "from sklearn.decomposition import PCA\npca = PCA(n_components=2)\npca.fit(X)\nprint(pca.explained_variance_)\nprint(pca.components_)", "To see what these numbers mean, let's view them as vectors plotted on top of the data:", "plt.plot(X[:, 0], X[:, 1], 'o', alpha=0.5)\nfor length, vector in zip(pca.explained_variance_, pca.components_):\n v = vector * 3 * np.sqrt(length)\n plt.plot([0, v[0]], [0, v[1]], '-k', lw=3)\nplt.axis('equal');", "Notice that one vector is longer than the other. In a sense, this tells us that that direction in the data is somehow more \"important\" than the other direction.\nThe explained variance quantifies this measure of \"importance\" in direction.\nAnother way to think of it is that the second principal component could be completely ignored without much loss of information! 
Let's see what our data look like if we only keep 95% of the variance:", "clf = PCA(0.95) # keep 95% of variance\nX_trans = clf.fit_transform(X)\nprint(X.shape)\nprint(X_trans.shape)", "By specifying that we want to throw away 5% of the variance, the data is now compressed by a factor of 50%! Let's see what the data look like after this compression:", "X_new = clf.inverse_transform(X_trans)\nplt.plot(X[:, 0], X[:, 1], 'o', alpha=0.2)\nplt.plot(X_new[:, 0], X_new[:, 1], 'ob', alpha=0.8)\nplt.axis('equal');", "The light points are the original data, while the dark points are the projected version. We see that after truncating 5% of the variance of this dataset and then reprojecting it, the \"most important\" features of the data are maintained, and we've compressed the data by 50%!\nThis is the sense in which \"dimensionality reduction\" works: if you can approximate a data set in a lower dimension, you can often have an easier time visualizing it or fitting complicated models to the data.\nApplication of PCA to Digits\nThe dimensionality reduction might seem a bit abstract in two dimensions, but the projection and dimensionality reduction can be extremely useful when visualizing high-dimensional data. Let's take a quick look at the application of PCA to the digits data we looked at before:", "from sklearn.datasets import load_digits\ndigits = load_digits()\nX = digits.data\ny = digits.target\n\nprint(X[0][:8])\nprint(X[0][8:16])\nprint(X[0][16:24])\nprint(X[0][24:32])\nprint(X[0][32:40])\nprint(X[0][40:48])\n\npca = PCA(2) # project from 64 to 2 dimensions\nXproj = pca.fit_transform(X)\nprint(X.shape)\nprint(Xproj.shape)\n\n(1797*2)/(1797*64)\n\nplt.scatter(Xproj[:, 0], Xproj[:, 1], c=y, edgecolor='none', alpha=0.5,\n cmap=plt.cm.get_cmap('nipy_spectral', 10))\nplt.colorbar();", "This gives us an idea of the relationship between the digits. 
Essentially, we have found the optimal stretch and rotation in 64-dimensional space that allows us to see the layout of the digits, without reference to the labels.\nWhat do the Components Mean?\nPCA is a very useful dimensionality reduction algorithm, because it has a very intuitive interpretation via eigenvectors.\nThe input data is represented as a vector: in the case of the digits, our data is\n$$\nx = [x_1, x_2, x_3 \\cdots]\n$$\nbut what this really means is\n$$\nimage(x) = x_1 \\cdot{\\rm (pixel~1)} + x_2 \\cdot{\\rm (pixel~2)} + x_3 \\cdot{\\rm (pixel~3)} \\cdots\n$$\nIf we reduce the dimensionality in the pixel space to (say) 6, we recover only a partial image:", "import numpy as np\nimport matplotlib.pyplot as plt\nimport warnings\nwarnings.filterwarnings(\"ignore\")\n\ndef plot_image_components(x, coefficients=None, mean=0, components=None,\n imshape=(8, 8), n_components=6, fontsize=12):\n if coefficients is None:\n coefficients = x\n \n if components is None:\n components = np.eye(len(coefficients), len(x))\n \n mean = np.zeros_like(x) + mean\n \n\n fig = plt.figure(figsize=(1.2 * (5 + n_components), 1.2 * 2))\n g = plt.GridSpec(2, 5 + n_components, hspace=0.3)\n\n def show(i, j, x, title=None):\n ax = fig.add_subplot(g[i, j], xticks=[], yticks=[])\n ax.imshow(x.reshape(imshape), interpolation='nearest')\n if title:\n ax.set_title(title, fontsize=fontsize)\n\n show(slice(2), slice(2), x, \"True\")\n\n approx = mean.copy()\n show(0, 2, np.zeros_like(x) + mean, r'$\\mu$')\n show(1, 2, approx, r'$1 \\cdot \\mu$')\n\n for i in range(0, n_components):\n approx = approx + coefficients[i] * components[i]\n show(0, i + 3, components[i], r'$c_{0}$'.format(i + 1))\n show(1, i + 3, approx,\n r\"${0:.2f} \\cdot c_{1}$\".format(coefficients[i], i + 1))\n plt.gca().text(0, 1.05, '$+$', ha='right', va='bottom',\n transform=plt.gca().transAxes, fontsize=fontsize)\n\n show(slice(2), slice(-2, None), approx, \"Approx\")\n\nwith plt.style.context('seaborn-white'):\n 
plot_image_components(digits.data[0])", "But the pixel-wise representation is not the only choice. We can also use other basis functions, and write something like\n$$\nimage(x) = {\\rm mean} + x_1 \\cdot{\\rm (basis~1)} + x_2 \\cdot{\\rm (basis~2)} + x_3 \\cdot{\\rm (basis~3)} \\cdots\n$$\nWhat PCA does is to choose optimal basis functions so that only a few are needed to get a reasonable approximation.\nThe low-dimensional representation of our data is the coefficients of this series, and the approximate reconstruction is the result of the sum:", "def plot_pca_interactive(data, n_components=6):\n from sklearn.decomposition import PCA\n from ipywidgets import interact\n\n pca = PCA(n_components=n_components)\n Xproj = pca.fit_transform(data)\n\n def show_decomp(i=0):\n plot_image_components(data[i], Xproj[i],\n pca.mean_, pca.components_)\n \n interact(show_decomp, i=(0, data.shape[0] - 1));\n\nplot_pca_interactive(digits.data)", "Here we see that with only six PCA components, we recover a reasonable approximation of the input!\nThus we see that PCA can be viewed from two angles. It can be viewed as dimensionality reduction, or it can be viewed as a form of lossy data compression where the loss favors noise. In this way, PCA can be used as a filtering process as well.\nChoosing the Number of Components\nBut how much information have we thrown away? We can figure this out by looking at the explained variance as a function of the components:", "pca = PCA().fit(X)\nplt.plot(np.cumsum(pca.explained_variance_ratio_))\nplt.xlabel('number of components')\nplt.ylabel('cumulative explained variance');", "Here we see that our two-dimensional projection loses a lot of information (as measured by the explained variance) and that we'd need about 20 components to retain 90% of the variance. 
Looking at this plot for a high-dimensional dataset can help you understand the level of redundancy present in multiple observations.\nPCA as data compression\nAs we mentioned, PCA can be used as a sort of data compression. Using a small n_components allows you to represent a high-dimensional point as a sum of just a few principal vectors.\nHere's what a single digit looks like as you change the number of components:", "fig, axes = plt.subplots(8, 8, figsize=(8, 8))\nfig.subplots_adjust(hspace=0.1, wspace=0.1)\n\nfor i, ax in enumerate(axes.flat):\n pca = PCA(i + 1).fit(X)\n im = pca.inverse_transform(pca.transform(X[25:26]))\n\n ax.imshow(im.reshape((8, 8)), cmap='binary')\n ax.text(0.95, 0.05, 'n = {0}'.format(i + 1), ha='right',\n transform=ax.transAxes, color='green')\n ax.set_xticks([])\n ax.set_yticks([])", "Let's take another look at this by using IPython's interact functionality to view the reconstruction of several images at once:", "from ipywidgets import interact\n\ndef plot_digits(n_components):\n fig = plt.figure(figsize=(8, 8))\n plt.subplot(1, 1, 1, frameon=False, xticks=[], yticks=[])\n nside = 10\n \n pca = PCA(n_components).fit(X)\n Xproj = pca.inverse_transform(pca.transform(X[:nside ** 2]))\n Xproj = np.reshape(Xproj, (nside, nside, 8, 8))\n total_var = pca.explained_variance_ratio_.sum()\n \n im = np.vstack([np.hstack([Xproj[i, j] for j in range(nside)])\n for i in range(nside)])\n plt.imshow(im)\n plt.grid(False)\n plt.title(\"n = {0}, variance = {1:.2f}\".format(n_components, total_var),\n size=18)\n plt.clim(0, 16)\n \ninteract(plot_digits, n_components=[1, 15, 20, 25, 32, 40, 64], nside=[1, 8]);", "Other Dimensionality Reduction Routines\nNote that scikit-learn contains many other unsupervised dimensionality reduction routines; some you might wish to try include:\n\nsklearn.decomposition.PCA: \n Principal Component Analysis\nsklearn.decomposition.RandomizedPCA:\n 
extremely fast approximate PCA implementation based on a randomized algorithm\nsklearn.decomposition.SparsePCA:\n PCA variant including L1 penalty for sparsity\nsklearn.decomposition.FastICA:\n Independent Component Analysis\nsklearn.decomposition.NMF:\n non-negative matrix factorization\nsklearn.manifold.LocallyLinearEmbedding:\n nonlinear manifold learning technique based on local neighborhood geometry\nsklearn.manifold.Isomap:\n nonlinear manifold learning technique based on a sparse graph algorithm\n\nEach of these has its own strengths & weaknesses, and areas of application. You can read about them on the scikit-learn website." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
tensorflow/docs-l10n
site/ko/lattice/tutorials/shape_constraints.ipynb
apache-2.0
[ "Copyright 2020 The TensorFlow Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Tensorflow Lattice와 형상 제약 조건\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td><a target=\"_blank\" href=\"https://www.tensorflow.org/lattice/tutorials/shape_constraints\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\">TensorFlow.org에서 보기</a></td>\n <td><a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/lattice/tutorials/shape_constraints.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\">Google Colab에서 실행하기</a></td>\n <td><a target=\"_blank\" href=\"https://github.com/tensorflow/docs-l10n/blob/master/site/ko/lattice/tutorials/shape_constraints.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\">GitHub에서소스 보기</a></td>\n <td><a href=\"https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/lattice/tutorials/shape_constraints.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\">노트북 다운로드하기</a></td>\n</table>\n\n개요\n이 튜토리얼은 TensorFlow Lattice(TFL) 라이브러리에서 제공하는 제약 조건 및 regularizer에 대한 개요입니다. 
여기서는 합성 데이터세트에 TFL canned estimator를 사용하지만, 해당 튜토리얼의 모든 내용은 TFL Keras 레이어로 구성된 모델로도 수행될 수 있습니다.\n계속하기 전에 런타임에 필요한 모든 패키지가 아래 코드 셀에서 가져온 대로 설치되어 있는지 먼저 확인하세요.\n설정\nTF Lattice 패키지 설치하기", "#@test {\"skip\": true}\n!pip install tensorflow-lattice", "필수 패키지 가져오기", "import tensorflow as tf\n\nfrom IPython.core.pylabtools import figsize\nimport itertools\nimport logging\nimport matplotlib\nfrom matplotlib import pyplot as plt\nimport numpy as np\nimport pandas as pd\nimport sys\nimport tensorflow_lattice as tfl\nlogging.disable(sys.maxsize)", "이 가이드에서 사용되는 기본값", "NUM_EPOCHS = 1000\nBATCH_SIZE = 64\nLEARNING_RATE=0.01", "레스토랑 순위 지정을 위한 훈련 데이터세트\n사용자가 레스토랑 검색 결과를 클릭할지 여부를 결정하는 단순화된 시나리오를 상상해봅니다. 이 작업은 주어진 입력 특성에 따른 클릭률(CTR)을 예측하는 것입니다.\n\n평균 평점(avg_rating): [1,5] 범위의 값을 가진 숫자 특성입니다.\n리뷰 수(num_reviews): 200개로 제한되는 값이 있는 숫자 특성으로, 트렌드를 측정하는 데 사용됩니다.\n달러 등급(dollar_rating): { \"D\", \"DD\", \"DDD\", \"DDDD\"} 세트에 문자열 값이 있는 범주형 특성입니다.\n\n실제 CTR이 공식으로 제공되는 합성 데이터세트를 만듭니다. $$ CTR = 1 / (1 + \exp(\mbox{b(dollar_rating)} - \mbox{avg_rating} \times \log(\mbox{num_reviews}) / 4)) $$ 여기서 $b(\cdot)$는 각 dollar_rating을 기준값으로 변환합니다. $$ \mbox{D}\to 3,\ \mbox{DD}\to 2,\ \mbox{DDD}\to 4,\ \mbox{DDDD}\to 4.5. $$\n이 공식은 일반적인 사용자 패턴을 반영합니다. 
예를 들어 다른 모든 사항이 수정된 경우 사용자는 별표 평점이 더 높은 식당을 선호하며 '$$'식당은 '$'보다 더 많은 클릭을 받고 '$$$' 및 '$$$'가 이어집니다.", "def click_through_rate(avg_ratings, num_reviews, dollar_ratings):\n dollar_rating_baseline = {\"D\": 3, \"DD\": 2, \"DDD\": 4, \"DDDD\": 4.5}\n return 1 / (1 + np.exp(\n np.array([dollar_rating_baseline[d] for d in dollar_ratings]) -\n avg_ratings * np.log1p(num_reviews) / 4))", "이 CTR 함수의 등고선도를 살펴보겠습니다.", "def color_bar():\n bar = matplotlib.cm.ScalarMappable(\n norm=matplotlib.colors.Normalize(0, 1, True),\n cmap=\"viridis\",\n )\n bar.set_array([0, 1])\n return bar\n\n\ndef plot_fns(fns, split_by_dollar=False, res=25):\n \"\"\"Generates contour plots for a list of (name, fn) functions.\"\"\"\n num_reviews, avg_ratings = np.meshgrid(\n np.linspace(0, 200, num=res),\n np.linspace(1, 5, num=res),\n )\n if split_by_dollar:\n dollar_rating_splits = [\"D\", \"DD\", \"DDD\", \"DDDD\"]\n else:\n dollar_rating_splits = [None]\n if len(fns) == 1:\n fig, axes = plt.subplots(2, 2, sharey=True, tight_layout=False)\n else:\n fig, axes = plt.subplots(\n len(dollar_rating_splits), len(fns), sharey=True, tight_layout=False)\n axes = axes.flatten()\n axes_index = 0\n for dollar_rating_split in dollar_rating_splits:\n for title, fn in fns:\n if dollar_rating_split is not None:\n dollar_ratings = np.repeat(dollar_rating_split, res**2)\n values = fn(avg_ratings.flatten(), num_reviews.flatten(),\n dollar_ratings)\n title = \"{}: dollar_rating={}\".format(title, dollar_rating_split)\n else:\n values = fn(avg_ratings.flatten(), num_reviews.flatten())\n subplot = axes[axes_index]\n axes_index += 1\n subplot.contourf(\n avg_ratings,\n num_reviews,\n np.reshape(values, (res, res)),\n vmin=0,\n vmax=1)\n subplot.title.set_text(title)\n subplot.set(xlabel=\"Average Rating\")\n subplot.set(ylabel=\"Number of Reviews\")\n subplot.set(xlim=(1, 5))\n\n _ = fig.colorbar(color_bar(), cax=fig.add_axes([0.95, 0.2, 0.01, 0.6]))\n\n\nfigsize(11, 11)\nplot_fns([(\"CTR\", click_through_rate)], 
split_by_dollar=True)", "데이터 준비하기\n이제 합성 데이터세트를 만들어야 합니다. 레스토랑과 해당 특징의 시뮬레이션된 데이터세트를 생성하는 것으로 작업을 시작합니다.", "def sample_restaurants(n):\n avg_ratings = np.random.uniform(1.0, 5.0, n)\n num_reviews = np.round(np.exp(np.random.uniform(0.0, np.log(200), n)))\n dollar_ratings = np.random.choice([\"D\", \"DD\", \"DDD\", \"DDDD\"], n)\n ctr_labels = click_through_rate(avg_ratings, num_reviews, dollar_ratings)\n return avg_ratings, num_reviews, dollar_ratings, ctr_labels\n\n\nnp.random.seed(42)\navg_ratings, num_reviews, dollar_ratings, ctr_labels = sample_restaurants(2000)\n\nfigsize(5, 5)\nfig, axs = plt.subplots(1, 1, sharey=False, tight_layout=False)\nfor rating, marker in [(\"D\", \"o\"), (\"DD\", \"^\"), (\"DDD\", \"+\"), (\"DDDD\", \"x\")]:\n plt.scatter(\n x=avg_ratings[np.where(dollar_ratings == rating)],\n y=num_reviews[np.where(dollar_ratings == rating)],\n c=ctr_labels[np.where(dollar_ratings == rating)],\n vmin=0,\n vmax=1,\n marker=marker,\n label=rating)\nplt.xlabel(\"Average Rating\")\nplt.ylabel(\"Number of Reviews\")\nplt.legend()\nplt.xlim((1, 5))\nplt.title(\"Distribution of restaurants\")\n_ = fig.colorbar(color_bar(), cax=fig.add_axes([0.95, 0.2, 0.01, 0.6]))", "훈련, 검증 및 테스트 데이터세트를 생성해 보겠습니다. 검색 결과에 레스토랑이 표시되면 사용자의 참여(클릭 또는 클릭 없음)를 샘플 포인트로 기록할 수 있습니다.\n실제로 사용자가 모든 검색 결과를 확인하지 않는 경우가 많습니다. 즉, 사용자는 현재 사용 중인 순위 모델에서 이미 '좋은' 것으로 간주되는 식당만 볼 수 있습니다. 결과적으로 '좋은' 레스토랑은 훈련 데이터세트에서 더 자주 좋은 인상을 남기고 더 과장되게 표현됩니다. 더 많은 특성을 사용할 때 훈련 데이터세트에서는 특성 공간의 '나쁜' 부분에 큰 간격이 생길 수 있습니다.\n모델이 순위 지정에 사용되면 훈련 데이터세트로 잘 표현되지 않는 보다 균일한 분포로 모든 관련 결과에 대해 평가되는 경우가 많습니다. 이 경우 과도하게 표현된 데이터 포인트에 과대 적합이 발생하여 일반화될 수 없기 때문에, 유연하고 복잡한 모델은 실패할 수 있습니다. 이 문제는 도메인 지식을 적용하여 모델이 훈련 데이터세트에서 선택할 수 없을 때 합리적인 예측을 할 수 있도록 안내하는 형상 제약 조건을 추가함으로써 처리합니다.\n이 예에서 훈련 데이터세트는 대부분 우수하고 인기 있는 음식점과의 사용자 상호 작용으로 구성됩니다. 테스트 데이터세트에는 위에서 설명한 평가 설정을 시뮬레이션하도록 균일한 분포가 있습니다. 
해당 테스트 데이터세트는 실제 문제 설정에서는 사용할 수 없습니다.", "def sample_dataset(n, testing_set):\n (avg_ratings, num_reviews, dollar_ratings, ctr_labels) = sample_restaurants(n)\n if testing_set:\n # Testing has a more uniform distribution over all restaurants.\n num_views = np.random.poisson(lam=3, size=n)\n else:\n # Training/validation datasets have more views on popular restaurants.\n num_views = np.random.poisson(lam=ctr_labels * num_reviews / 50.0, size=n)\n\n return pd.DataFrame({\n \"avg_rating\": np.repeat(avg_ratings, num_views),\n \"num_reviews\": np.repeat(num_reviews, num_views),\n \"dollar_rating\": np.repeat(dollar_ratings, num_views),\n \"clicked\": np.random.binomial(n=1, p=np.repeat(ctr_labels, num_views))\n })\n\n\n# Generate datasets.\nnp.random.seed(42)\ndata_train = sample_dataset(500, testing_set=False)\ndata_val = sample_dataset(500, testing_set=False)\ndata_test = sample_dataset(500, testing_set=True)\n\n# Plotting dataset densities.\nfigsize(12, 5)\nfig, axs = plt.subplots(1, 2, sharey=False, tight_layout=False)\nfor ax, data, title in [(axs[0], data_train, \"training\"),\n (axs[1], data_test, \"testing\")]:\n _, _, _, density = ax.hist2d(\n x=data[\"avg_rating\"],\n y=data[\"num_reviews\"],\n bins=(np.linspace(1, 5, num=21), np.linspace(0, 200, num=21)),\n density=True,\n cmap=\"Blues\",\n )\n ax.set(xlim=(1, 5))\n ax.set(ylim=(0, 200))\n ax.set(xlabel=\"Average Rating\")\n ax.set(ylabel=\"Number of Reviews\")\n ax.title.set_text(\"Density of {} examples\".format(title))\n _ = fig.colorbar(density, ax=ax)", "훈련 및 평가에 사용되는 input_fns 정의하기", "train_input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(\n x=data_train,\n y=data_train[\"clicked\"],\n batch_size=BATCH_SIZE,\n num_epochs=NUM_EPOCHS,\n shuffle=False,\n)\n\n# feature_analysis_input_fn is used for TF Lattice estimators.\nfeature_analysis_input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(\n x=data_train,\n y=data_train[\"clicked\"],\n batch_size=BATCH_SIZE,\n num_epochs=1,\n 
shuffle=False,\n)\n\nval_input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(\n x=data_val,\n y=data_val[\"clicked\"],\n batch_size=BATCH_SIZE,\n num_epochs=1,\n shuffle=False,\n)\n\ntest_input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(\n x=data_test,\n y=data_test[\"clicked\"],\n batch_size=BATCH_SIZE,\n num_epochs=1,\n shuffle=False,\n)", "그래디언트 Boosted 트리 적합화하기\navg_rating과 num_reviews 두 가지 특성으로 시작하겠습니다.\n검증 및 테스트 메트릭을 플롯하고 계산하기 위한 몇 가지 보조 함수를 만듭니다.", "def analyze_two_d_estimator(estimator, name):\n # Extract validation metrics.\n metric = estimator.evaluate(input_fn=val_input_fn)\n print(\"Validation AUC: {}\".format(metric[\"auc\"]))\n metric = estimator.evaluate(input_fn=test_input_fn)\n print(\"Testing AUC: {}\".format(metric[\"auc\"]))\n\n def two_d_pred(avg_ratings, num_reviews):\n results = estimator.predict(\n tf.compat.v1.estimator.inputs.pandas_input_fn(\n x=pd.DataFrame({\n \"avg_rating\": avg_ratings,\n \"num_reviews\": num_reviews,\n }),\n shuffle=False,\n ))\n return [x[\"logistic\"][0] for x in results]\n\n def two_d_click_through_rate(avg_ratings, num_reviews):\n return np.mean([\n click_through_rate(avg_ratings, num_reviews,\n np.repeat(d, len(avg_ratings)))\n for d in [\"D\", \"DD\", \"DDD\", \"DDDD\"]\n ],\n axis=0)\n\n figsize(11, 5)\n plot_fns([(\"{} Estimated CTR\".format(name), two_d_pred),\n (\"CTR\", two_d_click_through_rate)],\n split_by_dollar=False)\n", "TensorFlow 그래디언트 boosted 결정 트리를 데이터세트에 적합하도록 맞출 수 있습니다.", "feature_columns = [\n tf.feature_column.numeric_column(\"num_reviews\"),\n tf.feature_column.numeric_column(\"avg_rating\"),\n]\ngbt_estimator = tf.estimator.BoostedTreesClassifier(\n feature_columns=feature_columns,\n # Hyper-params optimized on validation set.\n n_batches_per_layer=1,\n max_depth=2,\n n_trees=50,\n learning_rate=0.05,\n config=tf.estimator.RunConfig(tf_random_seed=42),\n)\ngbt_estimator.train(input_fn=train_input_fn)\nanalyze_two_d_estimator(gbt_estimator, \"GBT\")", "모델이 실제 CTR의 일반적인 형상을 포착하고 
적절한 검증 메트릭을 가지고 있지만, 입력 공간의 여러 부분에서 반직관적인 동작을 보입니다. 평균 평점 또는 리뷰 수가 증가하면 예상 CTR이 감소하는데, 이는 훈련 데이터세트에서 잘 다루지 않는 영역에 샘플 포인트가 부족하기 때문입니다. 모델은 데이터에서만 올바른 동작을 추론할 방법이 없습니다.\n이 문제를 해결하기 위해 모델이 평균 평점과 리뷰 수에 대해 단조롭게 증가하는 값을 출력해야 한다는 형상 제약 조건을 적용합니다. 나중에 TFL에서 이를 구현하는 방법을 살펴보겠습니다.\nDNN 적합화하기\nDNN 분류자로 같은 단계를 반복할 수 있습니다. 여기서 비슷한 패턴이 관찰되는데 리뷰 수가 적은 샘플 포인트가 충분하지 않으면 무의미한 외삽이 발생합니다. 검증 메트릭이 트리 솔루션보다 우수하더라도 테스트 메트릭은 훨씬 나쁘다는 점을 유의하세요.", "feature_columns = [\n tf.feature_column.numeric_column(\"num_reviews\"),\n tf.feature_column.numeric_column(\"avg_rating\"),\n]\ndnn_estimator = tf.estimator.DNNClassifier(\n feature_columns=feature_columns,\n # Hyper-params optimized on validation set.\n hidden_units=[16, 8, 8],\n optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),\n config=tf.estimator.RunConfig(tf_random_seed=42),\n)\ndnn_estimator.train(input_fn=train_input_fn)\nanalyze_two_d_estimator(dnn_estimator, \"DNN\")", "형상 제약 조건\nTensorFlow Lattice(TFL)는 훈련 데이터 이상의 모델 동작을 보호하기 위해 형상 제약 조건을 적용하는 데 중점을 둡니다. 이러한 형상 제약 조건은 TFL Keras 레이어에 적용됩니다. 자세한 내용은 JMLR 논문에서 찾을 수 있습니다.\n이 튜토리얼에서는 다양한 형상 제약을 다루기 위해 준비된 TF estimator를 사용하지만, 해당 모든 단계는 TFL Keras 레이어에서 생성된 모델로 수행할 수 있습니다.\n다른 TensorFlow estimator와 마찬가지로 준비된 TFL estimator는 특성 열을 사용하여 입력 형식을 정의하고 훈련 input_fn을 사용하여 데이터를 전달합니다. 준비된 TFL estimator을 사용하려면 다음이 필요합니다.\n\n모델 구성: 모델 아키텍처 및 특성별 형상 제약 조건 및 regularizer를 정의합니다.\n특성 분석 input_fn: TFL 초기화를 위해 데이터를 전달하는 TF input_fn.\n\n자세한 설명은 준비된 estimator 튜토리얼 또는 API 설명서를 참조하세요.\n단조\n먼저 두 특성에 단조 형상 제약 조건을 추가하여 단조 문제를 해결합니다.\nTFL에 형상 제약 조건을 적용하기 위해 특성 구성에 제약 조건을 지정합니다. 
다음 코드는 monotonicity=\"increasing\"을 설정하여 num_reviews 및 avg_rating 모두에 대해 출력이 단조롭게 증가하도록 요구할 수 있는 방법을 보여줍니다.", "feature_columns = [\n tf.feature_column.numeric_column(\"num_reviews\"),\n tf.feature_column.numeric_column(\"avg_rating\"),\n]\nmodel_config = tfl.configs.CalibratedLatticeConfig(\n feature_configs=[\n tfl.configs.FeatureConfig(\n name=\"num_reviews\",\n lattice_size=2,\n monotonicity=\"increasing\",\n pwl_calibration_num_keypoints=20,\n ),\n tfl.configs.FeatureConfig(\n name=\"avg_rating\",\n lattice_size=2,\n monotonicity=\"increasing\",\n pwl_calibration_num_keypoints=20,\n )\n ])\ntfl_estimator = tfl.estimators.CannedClassifier(\n feature_columns=feature_columns,\n model_config=model_config,\n feature_analysis_input_fn=feature_analysis_input_fn,\n optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),\n config=tf.estimator.RunConfig(tf_random_seed=42),\n)\ntfl_estimator.train(input_fn=train_input_fn)\nanalyze_two_d_estimator(tfl_estimator, \"TF Lattice\")", "CalibratedLatticeConfig를 사용하면 먼저 calibrator를 각 입력(숫자 특성에 대한 부분 선형 함수)에 적용한 다음 격자 레이어를 적용하여 보정된 특성을 비선형적으로 융합하는 준비된 분류자를 생성합니다. tfl.visualization을 사용하여 모델을 시각화할 수 있습니다. 특히 다음 플롯은 미리 준비된 estimator에 포함된 두 개의 훈련된 calibrator를 보여줍니다.", "def save_and_visualize_lattice(tfl_estimator):\n saved_model_path = tfl_estimator.export_saved_model(\n \"/tmp/TensorFlow_Lattice_101/\",\n tf.estimator.export.build_parsing_serving_input_receiver_fn(\n feature_spec=tf.feature_column.make_parse_example_spec(\n feature_columns)))\n model_graph = tfl.estimators.get_model_graph(saved_model_path)\n figsize(8, 8)\n tfl.visualization.draw_model_graph(model_graph)\n return model_graph\n\n_ = save_and_visualize_lattice(tfl_estimator)", "제약 조건이 추가되면 평균 평점이 증가하거나 리뷰 수가 증가함에 따라 예상 CTR이 항상 증가합니다. 이것은 calibrator와 격자가 단조로운지 확인하여 수행됩니다.\n감소 수익\n감소 수익은 특정 특성값을 증가시키는 한계 이득이 값이 증가함에 따라 감소한다는 것을 의미합니다. 해당 경우에는 num_reviews 특성이 이 패턴을 따를 것으로 예상하므로 그에 따라 calibrator를 구성할 수 있습니다. 
감소하는 수익률은 두 가지 충분한 조건으로 분해할 수 있습니다.\n\ncalibrator가 단조롭게 증가하고 있으며\ncalibrator는 오목합니다.", "feature_columns = [\n tf.feature_column.numeric_column(\"num_reviews\"),\n tf.feature_column.numeric_column(\"avg_rating\"),\n]\nmodel_config = tfl.configs.CalibratedLatticeConfig(\n feature_configs=[\n tfl.configs.FeatureConfig(\n name=\"num_reviews\",\n lattice_size=2,\n monotonicity=\"increasing\",\n pwl_calibration_convexity=\"concave\",\n pwl_calibration_num_keypoints=20,\n ),\n tfl.configs.FeatureConfig(\n name=\"avg_rating\",\n lattice_size=2,\n monotonicity=\"increasing\",\n pwl_calibration_num_keypoints=20,\n )\n ])\ntfl_estimator = tfl.estimators.CannedClassifier(\n feature_columns=feature_columns,\n model_config=model_config,\n feature_analysis_input_fn=feature_analysis_input_fn,\n optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),\n config=tf.estimator.RunConfig(tf_random_seed=42),\n)\ntfl_estimator.train(input_fn=train_input_fn)\nanalyze_two_d_estimator(tfl_estimator, \"TF Lattice\")\n_ = save_and_visualize_lattice(tfl_estimator)", "오목 제약 조건을 추가하여 테스트 메트릭이 어떻게 향상되는지 확인하세요. 예측 플롯은 또한 지상 진실과 더 유사합니다.\n2D 형상 제약 조건: 신뢰\n리뷰가 한두 개밖에 없는 레스토랑의 별 5개는 신뢰할 수 없는 평가일 가능성이 높지만(실제 레스토랑 경험은 나쁠 수 있습니다), 수백 개의 리뷰가 있는 레스토랑에 대한 4성급은 훨씬 더 신뢰할 수 있습니다(이 경우에 레스토랑 경험은 좋을 것입니다). 레스토랑 리뷰 수는 평균 평점에 대한 신뢰도에 영향을 미친다는 것을 알 수 있습니다.\nTFL 신뢰 제약 조건을 실행하여 한 특성의 더 큰(또는 더 작은) 값이 다른 특성에 대한 더 많은 신뢰 또는 신뢰를 나타냄을 모델에 알릴 수 있습니다. 
이는 특성 구성에서 reflects_trust_in 구성을 설정하여 수행됩니다.", "feature_columns = [\n tf.feature_column.numeric_column(\"num_reviews\"),\n tf.feature_column.numeric_column(\"avg_rating\"),\n]\nmodel_config = tfl.configs.CalibratedLatticeConfig(\n feature_configs=[\n tfl.configs.FeatureConfig(\n name=\"num_reviews\",\n lattice_size=2,\n monotonicity=\"increasing\",\n pwl_calibration_convexity=\"concave\",\n pwl_calibration_num_keypoints=20,\n # Larger num_reviews indicating more trust in avg_rating.\n reflects_trust_in=[\n tfl.configs.TrustConfig(\n feature_name=\"avg_rating\", trust_type=\"edgeworth\"),\n ],\n ),\n tfl.configs.FeatureConfig(\n name=\"avg_rating\",\n lattice_size=2,\n monotonicity=\"increasing\",\n pwl_calibration_num_keypoints=20,\n )\n ])\ntfl_estimator = tfl.estimators.CannedClassifier(\n feature_columns=feature_columns,\n model_config=model_config,\n feature_analysis_input_fn=feature_analysis_input_fn,\n optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),\n config=tf.estimator.RunConfig(tf_random_seed=42),\n)\ntfl_estimator.train(input_fn=train_input_fn)\nanalyze_two_d_estimator(tfl_estimator, \"TF Lattice\")\nmodel_graph = save_and_visualize_lattice(tfl_estimator)", "다음 플롯은 훈련된 격자 함수를 나타냅니다. 신뢰 제약 조건으로 인해, 보정된 num_reviews의 큰 값이 보정된 avg_rating에 대한 경사를 더 높여서 격자 출력에서 더 중요한 이동이 있을 것을 예상합니다.", "lat_mesh_n = 12\nlat_mesh_x, lat_mesh_y = tfl.test_utils.two_dim_mesh_grid(\n lat_mesh_n**2, 0, 0, 1, 1)\nlat_mesh_fn = tfl.test_utils.get_hypercube_interpolation_fn(\n model_graph.output_node.weights.flatten())\nlat_mesh_z = [\n lat_mesh_fn([lat_mesh_x.flatten()[i],\n lat_mesh_y.flatten()[i]]) for i in range(lat_mesh_n**2)\n]\ntrust_plt = tfl.visualization.plot_outputs(\n (lat_mesh_x, lat_mesh_y),\n {\"Lattice Lookup\": lat_mesh_z},\n figsize=(6, 6),\n)\ntrust_plt.title(\"Trust\")\ntrust_plt.xlabel(\"Calibrated avg_rating\")\ntrust_plt.ylabel(\"Calibrated num_reviews\")\ntrust_plt.show()", "Smoothing Calibrator\n이제 avg_rating의 calibrator를 살펴보겠습니다. 
단조롭게 증가하지만 기울기의 변화는 갑작스럽고 해석하기 어렵습니다. 이는 regularizer_configs의 regularizer 설정으로 이 calibrator를 스무딩하는 것을 고려해볼 수 있음을 의미합니다.\n여기에서는 곡률의 변화를 줄이기 위해 wrinkle regularizer를 적용합니다. 또한 laplacian regularizer를 사용하여 calibrator를 평면화하고 hessian regularizer를 사용하여 보다 선형적으로 만들 수 있습니다.", "feature_columns = [\n tf.feature_column.numeric_column(\"num_reviews\"),\n tf.feature_column.numeric_column(\"avg_rating\"),\n]\nmodel_config = tfl.configs.CalibratedLatticeConfig(\n feature_configs=[\n tfl.configs.FeatureConfig(\n name=\"num_reviews\",\n lattice_size=2,\n monotonicity=\"increasing\",\n pwl_calibration_convexity=\"concave\",\n pwl_calibration_num_keypoints=20,\n regularizer_configs=[\n tfl.configs.RegularizerConfig(name=\"calib_wrinkle\", l2=1.0),\n ],\n reflects_trust_in=[\n tfl.configs.TrustConfig(\n feature_name=\"avg_rating\", trust_type=\"edgeworth\"),\n ],\n ),\n tfl.configs.FeatureConfig(\n name=\"avg_rating\",\n lattice_size=2,\n monotonicity=\"increasing\",\n pwl_calibration_num_keypoints=20,\n regularizer_configs=[\n tfl.configs.RegularizerConfig(name=\"calib_wrinkle\", l2=1.0),\n ],\n )\n ])\ntfl_estimator = tfl.estimators.CannedClassifier(\n feature_columns=feature_columns,\n model_config=model_config,\n feature_analysis_input_fn=feature_analysis_input_fn,\n optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),\n config=tf.estimator.RunConfig(tf_random_seed=42),\n)\ntfl_estimator.train(input_fn=train_input_fn)\nanalyze_two_d_estimator(tfl_estimator, \"TF Lattice\")\n_ = save_and_visualize_lattice(tfl_estimator)", "이제 calibrator가 매끄럽고 전체 예상 CTR이 실제와 더 잘 일치합니다. 해당 적용은 테스트 메트릭과 등고선 플롯 모두에 반영됩니다.\n범주형 보정을 위한 부분 단조\n지금까지 모델에서 숫자 특성 중 두 가지만 사용했습니다. 여기에서는 범주형 보정 레이어를 사용하여 세 번째 특성을 추가합니다. 
다시 플롯 및 메트릭 계산을 위한 도우미 함수를 설정하는 것으로 시작합니다.", "def analyze_three_d_estimator(estimator, name):\n # Extract validation metrics.\n metric = estimator.evaluate(input_fn=val_input_fn)\n print(\"Validation AUC: {}\".format(metric[\"auc\"]))\n metric = estimator.evaluate(input_fn=test_input_fn)\n print(\"Testing AUC: {}\".format(metric[\"auc\"]))\n\n def three_d_pred(avg_ratings, num_reviews, dollar_rating):\n results = estimator.predict(\n tf.compat.v1.estimator.inputs.pandas_input_fn(\n x=pd.DataFrame({\n \"avg_rating\": avg_ratings,\n \"num_reviews\": num_reviews,\n \"dollar_rating\": dollar_rating,\n }),\n shuffle=False,\n ))\n return [x[\"logistic\"][0] for x in results]\n\n figsize(11, 22)\n plot_fns([(\"{} Estimated CTR\".format(name), three_d_pred),\n (\"CTR\", click_through_rate)],\n split_by_dollar=True)\n ", "세 번째 특성인 dollar_rating을 포함하려면 범주형 특성이 특성 열과 특성 구성 모두에서 TFL 내에서 약간 다른 처리가 필요하다는 점을 기억해야 합니다. 여기서 다른 모든 입력이 고정될 때 'DD' 레스토랑의 출력이 'D' 레스토랑보다 커야 한다는 부분 단조 제약 조건을 적용합니다. 해당 적용은 특성 구성에서 monotonicity 설정을 사용하여 수행됩니다.", "feature_columns = [\n tf.feature_column.numeric_column(\"num_reviews\"),\n tf.feature_column.numeric_column(\"avg_rating\"),\n tf.feature_column.categorical_column_with_vocabulary_list(\n \"dollar_rating\",\n vocabulary_list=[\"D\", \"DD\", \"DDD\", \"DDDD\"],\n dtype=tf.string,\n default_value=0),\n]\nmodel_config = tfl.configs.CalibratedLatticeConfig(\n feature_configs=[\n tfl.configs.FeatureConfig(\n name=\"num_reviews\",\n lattice_size=2,\n monotonicity=\"increasing\",\n pwl_calibration_convexity=\"concave\",\n pwl_calibration_num_keypoints=20,\n regularizer_configs=[\n tfl.configs.RegularizerConfig(name=\"calib_wrinkle\", l2=1.0),\n ],\n reflects_trust_in=[\n tfl.configs.TrustConfig(\n feature_name=\"avg_rating\", trust_type=\"edgeworth\"),\n ],\n ),\n tfl.configs.FeatureConfig(\n name=\"avg_rating\",\n lattice_size=2,\n monotonicity=\"increasing\",\n pwl_calibration_num_keypoints=20,\n regularizer_configs=[\n 
tfl.configs.RegularizerConfig(name=\"calib_wrinkle\", l2=1.0),\n ],\n ),\n tfl.configs.FeatureConfig(\n name=\"dollar_rating\",\n lattice_size=2,\n pwl_calibration_num_keypoints=4,\n # Here we only specify one monotonicity:\n # `D` resturants has smaller value than `DD` restaurants\n monotonicity=[(\"D\", \"DD\")],\n ),\n ])\ntfl_estimator = tfl.estimators.CannedClassifier(\n feature_columns=feature_columns,\n model_config=model_config,\n feature_analysis_input_fn=feature_analysis_input_fn,\n optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),\n config=tf.estimator.RunConfig(tf_random_seed=42),\n)\ntfl_estimator.train(input_fn=train_input_fn)\nanalyze_three_d_estimator(tfl_estimator, \"TF Lattice\")\n_ = save_and_visualize_lattice(tfl_estimator)", "범주형 calibrator는 모델 출력의 선호도를 보여줍니다. DD &gt; D &gt; DDD &gt; DDDD는 설정과 일치합니다. 결측값에 대한 열도 있습니다. 훈련 및 테스트 데이터에는 누락된 특성이 없지만, 모델은 다운스트림 모델 제공 중에 발생하는 누락된 값에 대한 대체 값을 제공합니다.\ndollar_rating을 조건으로 이 모델의 예상 CTR도 플롯합니다. 필요한 모든 제약 조건이 각 슬라이스에서 충족됩니다.\n출력 보정\n지금까지 훈련한 모든 TFL 모델의 경우 격자 레이어(모델 그래프에서 'Lattice'로 표시됨)가 모델 예측을 직접 출력합니다. 때때로 격자 출력이 모델 출력을 내도록 재조정되어야 하는지는 확실하지 않습니다.\n\n특성은 $log$ 카운트이고 레이블은 카운트입니다.\n격자는 매우 적은 수의 꼭짓점을 갖도록 구성되지만 레이블 분포는 비교적 복잡합니다.\n\n이러한 경우 격자 출력과 모델 출력 사이에 또 다른 calibrator를 추가하여 모델 유연성을 높일 수 있습니다. 방금 구축한 모델에 5개의 키포인트가 있는 보정 레이어를 추가하겠습니다. 
또한 함수를 원활하게 유지하기 위해 출력 calibrator용 regularizer를 추가합니다.", "feature_columns = [\n tf.feature_column.numeric_column(\"num_reviews\"),\n tf.feature_column.numeric_column(\"avg_rating\"),\n tf.feature_column.categorical_column_with_vocabulary_list(\n \"dollar_rating\",\n vocabulary_list=[\"D\", \"DD\", \"DDD\", \"DDDD\"],\n dtype=tf.string,\n default_value=0),\n]\nmodel_config = tfl.configs.CalibratedLatticeConfig(\n output_calibration=True,\n output_calibration_num_keypoints=5,\n regularizer_configs=[\n tfl.configs.RegularizerConfig(name=\"output_calib_wrinkle\", l2=0.1),\n ],\n feature_configs=[\n tfl.configs.FeatureConfig(\n name=\"num_reviews\",\n lattice_size=2,\n monotonicity=\"increasing\",\n pwl_calibration_convexity=\"concave\",\n pwl_calibration_num_keypoints=20,\n regularizer_configs=[\n tfl.configs.RegularizerConfig(name=\"calib_wrinkle\", l2=1.0),\n ],\n reflects_trust_in=[\n tfl.configs.TrustConfig(\n feature_name=\"avg_rating\", trust_type=\"edgeworth\"),\n ],\n ),\n tfl.configs.FeatureConfig(\n name=\"avg_rating\",\n lattice_size=2,\n monotonicity=\"increasing\",\n pwl_calibration_num_keypoints=20,\n regularizer_configs=[\n tfl.configs.RegularizerConfig(name=\"calib_wrinkle\", l2=1.0),\n ],\n ),\n tfl.configs.FeatureConfig(\n name=\"dollar_rating\",\n lattice_size=2,\n pwl_calibration_num_keypoints=4,\n # Here we only specify one monotonicity:\n # `D` resturants has smaller value than `DD` restaurants\n monotonicity=[(\"D\", \"DD\")],\n ),\n])\ntfl_estimator = tfl.estimators.CannedClassifier(\n feature_columns=feature_columns,\n model_config=model_config,\n feature_analysis_input_fn=feature_analysis_input_fn,\n optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),\n config=tf.estimator.RunConfig(tf_random_seed=42),\n)\ntfl_estimator.train(input_fn=train_input_fn)\nanalyze_three_d_estimator(tfl_estimator, \"TF Lattice\")\n_ = save_and_visualize_lattice(tfl_estimator)", "최종 테스트 메트릭과 플롯은 상식적인 제약 조건을 사용하는 것이 모델이 예기치 않은 동작을 방지하고 전체 입력 공간으로 더 잘 외삽하는 
데 어떻게 도움이 되는지 보여줍니다." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
unnati-xyz/intro-python-data-science
onion/6-Insight.ipynb
mit
[ "Share the Insight\nThere are two main insights we want to communicate. \n- Bangalore is the largest market for Onion Arrivals. \n- Onion Price variation has increased in the recent years.\nLet us explore how we can communicate these insight visually.\nPreprocessing to get the data", "# Import the library we need, which is Pandas and Matplotlib\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport seaborn as sns\n\n# Set some parameters to get good visuals - style to ggplot and size to 15,10\nplt.style.use('ggplot')\nplt.rcParams['figure.figsize'] = (15, 10)\n\n# Read the csv file of Monthwise Quantity and Price csv file we have.\ndf = pd.read_csv('MonthWiseMarketArrivals_clean.csv')\n\n# Change the index to the date column\ndf.index = pd.PeriodIndex(df.date, freq='M')\n\n# Sort the data frame by date\ndf = df.sort_values(by = \"date\")\n\n# Get the data for year 2015\ndf2015 = df[df.year == 2015]\n\n# Groupby on City to get the sum of quantity\ndf2015City = df2015.groupby(['city'], as_index=False)['quantity'].sum()\n\ndf2015City = df2015City.sort_values(by = \"quantity\", ascending = False)\n\ndf2015City.head()", "Let us plot the Cities in a Geographic Map", "# Load the geocode file\ndfGeo = pd.read_csv('city_geocode.csv')\n\ndfGeo.head()", "PRINCIPLE: Joining two data frames\nThere will be many cases in which your data is in two different dataframe and you would like to merge them in to one dataframe. 
Let us look at one example of this, called a left join", "dfCityGeo = pd.merge(df2015City, dfGeo, how='left', on='city')\n\ndfCityGeo.head()\n\ndfCityGeo.plot(kind = 'scatter', x = 'lon', y = 'lat', s = 100)", "We can do a crude aspect-ratio adjustment to make the Cartesian coordinate system appear like a Mercator map", "dfCityGeo.plot(kind = 'scatter', x = 'lon', y = 'lat', s = 100, figsize = [10,11])\n\n# Let us use quantity as the size of the bubble\ndfCityGeo.plot(kind = 'scatter', x = 'lon', y = 'lat', s = dfCityGeo.quantity, figsize = [10,11])\n\n# Let us scale down the quantity variable\ndfCityGeo.plot(kind = 'scatter', x = 'lon', y = 'lat', s = dfCityGeo.quantity/1000, figsize = [10,11])\n\n# Reduce the opacity of the color, so that we can see overlapping values\ndfCityGeo.plot(kind = 'scatter', x = 'lon', y = 'lat', s = dfCityGeo.quantity/1000, alpha = 0.5, figsize = [10,11])", "Exercise - Can you plot all the States by quantity in a (pseudo) geographic map?" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
GoogleCloudPlatform/training-data-analyst
courses/machine_learning/deepdive2/image_classification/solutions/4_tpu_training.ipynb
apache-2.0
[ "Transfer Learning on TPUs\nIn the <a href=\"3_tf_hub_transfer_learning.ipynb\">previous notebook</a>, we learned how to do transfer learning with TensorFlow Hub. In this notebook, we're going to kick up our training speed with TPUs.\nLearning Objectives\n\nKnow how to set up a TPU strategy for training\nKnow how to use a TensorFlow Hub Module when training on a TPU\nKnow how to create and specify a TPU for training\n\nFirst things first. Configure the parameters below to match your own Google Cloud project details.\nEach learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.", "import os\nos.environ[\"BUCKET\"] = \"your-bucket-here\"", "Packaging the Model\nIn order to train on a TPU, we'll need to set up a python module for training. The skeleton for this has already been built out in tpu_models with the data processing functions from the previous lab copied into <a href=\"tpu_models/trainer/util.py\">util.py</a>.\nSimilarly, the model building and training functions are pulled into <a href=\"tpu_models/trainer/model.py\">model.py</a>. This is almost entirely the same as before, except the hub module path is now a variable to be provided by the user. We'll get into why in a bit, but first, let's take a look at the new task.py file.\nWe've added five command line arguments which are standard for cloud training of a TensorFlow model: epochs, steps_per_epoch, train_path, eval_path, and job-dir. There are two new arguments for TPU training: tpu_address and hub_path\ntpu_address is going to be our TPU name as it appears in Compute Engine Instances. We can specify this name with the ctpu up command.\nhub_path is going to be a Google Cloud Storage path to a downloaded TensorFlow Hub module.\nThe other big difference is some code to deploy our model on a TPU. 
To begin, we'll set up a TPU Cluster Resolver, which will help tensorflow communicate with the hardware to set up workers for training (more on TensorFlow Cluster Resolvers). Once the resolver connects to and initializes the TPU system, our Tensorflow Graphs can be initialized within a TPU distribution strategy, allowing our TensorFlow code to take full advantage of the TPU hardware capabilities.\nTODO #1: Set up a TPU strategy", "%%writefile tpu_models/trainer/task.py\nimport argparse\nimport json\nimport os\nimport sys\n\nimport tensorflow as tf\n\nfrom . import model\nfrom . import util\n\n\ndef _parse_arguments(argv):\n \"\"\"Parses command-line arguments.\"\"\"\n parser = argparse.ArgumentParser()\n parser.add_argument(\n '--epochs',\n help='The number of epochs to train',\n type=int, default=5)\n parser.add_argument(\n '--steps_per_epoch',\n help='The number of steps per epoch to train',\n type=int, default=500)\n parser.add_argument(\n '--train_path',\n help='The path to the training data',\n type=str, default=\"gs://cloud-ml-data/img/flower_photos/train_set.csv\")\n parser.add_argument(\n '--eval_path',\n help='The path to the evaluation data',\n type=str, default=\"gs://cloud-ml-data/img/flower_photos/eval_set.csv\")\n parser.add_argument(\n '--tpu_address',\n help='The path to the TPUs we will use in training',\n type=str, required=True)\n parser.add_argument(\n '--hub_path',\n help='The path to TF Hub module to use in GCS',\n type=str, required=True)\n parser.add_argument(\n '--job-dir',\n help='Directory where to save the given model',\n type=str, required=True)\n return parser.parse_known_args(argv)\n\n\ndef main():\n \"\"\"Parses command line arguments and kicks off model training.\"\"\"\n args = _parse_arguments(sys.argv[1:])[0]\n \n # TODO: define a TPU strategy\n resolver = tf.distribute.cluster_resolver.TPUClusterResolver(\n tpu=args.tpu_address)\n tf.config.experimental_connect_to_cluster(resolver)\n 
tf.tpu.experimental.initialize_tpu_system(resolver)\n strategy = tf.distribute.TPUStrategy(resolver)\n \n with strategy.scope():\n train_data = util.load_dataset(args.train_path)\n eval_data = util.load_dataset(args.eval_path, training=False)\n image_model = model.build_model(args.job_dir, args.hub_path)\n\n model_history = model.train_and_evaluate(\n image_model, args.epochs, args.steps_per_epoch,\n train_data, eval_data, args.job_dir)\n\n\nif __name__ == '__main__':\n main()\n", "The TPU server\nBefore we can start training with this code, we need a way to pull in MobileNet. When working with TPUs in the cloud, the TPU will not have access to the VM's local file directory since the TPU worker acts as a server. Because of this all data used by our model must be hosted on an outside storage system such as Google Cloud Storage. This makes caching our dataset especially critical in order to speed up training time.\nTo access MobileNet with these restrictions, we can download a compressed saved version of the model by using the wget command. Adding ?tf-hub-format=compressed at the end of our module handle gives us a download URL.", "!wget https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/feature_vector/4?tf-hub-format=compressed", "This model is still compressed, so lets uncompress it with the tar command below and place it in our tpu_models directory.", "%%bash\nrm -r tpu_models/hub\nmkdir tpu_models/hub\ntar xvzf 4?tf-hub-format=compressed -C tpu_models/hub/", "Finally, we need to transfer our materials to the TPU. We'll use GCS as a go-between, using gsutil cp to copy everything.", "!gsutil rm -r gs://$BUCKET/tpu_models\n!gsutil cp -r tpu_models gs://$BUCKET/tpu_models", "Spinning up a TPU\nTime to wake up a TPU! Open the Google Cloud Shell and copy the gcloud compute command below. 
Say 'Yes' to the prompts to spin up the TPU.\ngcloud compute tpus execution-groups create \\\n --name=my-tpu \\\n --zone=us-central1-b \\\n --tf-version=2.3.2 \\\n --machine-type=n1-standard-1 \\\n --accelerator-type=v3-8\nIt will take about five minutes to wake up. Then, it should automatically SSH into the TPU, but alternatively the Compute Engine Interface can be used to SSH in. You'll know you're running on a TPU when the command line starts with your-username@your-tpu-name.\nThis is a fresh TPU and still needs our code. Run the below cell and copy the output into your TPU terminal to copy your model from your GCS bucket. Don't forget to include the . at the end as it tells gsutil to copy data into the current directory.", "!echo \"gsutil cp -r gs://$BUCKET/tpu_models .\"", "Time to shine, TPU! Run the below cell and copy the output into your TPU terminal. Training will be slow at first, but it will pick up speed after a few minutes once the TensorFlow graph has been built out.\nTODO #2 and #3: Specify the tpu_address and hub_path", "%%bash\nexport TPU_NAME=my-tpu\necho \"export TPU_NAME=\"$TPU_NAME\necho \"python3 -m tpu_models.trainer.task \\\n --tpu_address=\\$TPU_NAME \\\n --hub_path=gs://$BUCKET/tpu_models/hub/ \\\n --job-dir=gs://$BUCKET/flowers_tpu_$(date -u +%y%m%d_%H%M%S)\"", "How did it go? In the previous lab, it took about 2-3 minutes to get through 25 images. On the TPU, it took 5-6 minutes to get through 2500. That's more than 40x faster! And now our accuracy is over 90%! Congratulations!\nTime to pack up shop. Run exit in the TPU terminal to close the SSH connection, and gcloud compute tpus execution-groups delete my-tpu --zone=us-central1-b in the Cloud Shell to delete the Cloud TPU and Compute Engine instances. 
Alternatively, they can be deleted through the Compute Engine Interface, but don't forget to separately delete the TPU too!\nCopyright 2022 Google Inc.\nLicensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at\nhttp://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License." ]
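The "more than 40x faster" claim in the notebook follows directly from the quoted throughput numbers. A quick sanity check, assuming the midpoints of the quoted time ranges (2.5 and 5.5 minutes are assumptions, not measurements from the lab):

```python
# Sanity-check the "more than 40x faster" claim using assumed midpoints
# of the quoted time ranges.
base_images, base_minutes = 25, 2.5    # previous lab: ~25 images in 2-3 min
tpu_images, tpu_minutes = 2500, 5.5    # this lab: ~2500 images in 5-6 min

base_throughput = base_images / base_minutes  # images per minute
tpu_throughput = tpu_images / tpu_minutes

speedup = tpu_throughput / base_throughput
print(f"~{speedup:.0f}x faster")  # ~45x faster
```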
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
a301-teaching/a301_code
notebooks/vancouver_zoom.ipynb
mit
[ "Take a look at channel 1 and NDVI at full resolution for Vancouver\n\n\nModis channel listing\n\n\nBand 1 centered at 0.645 microns (red)\n\n\ndata acquired at 250 m resolution, August 11, 2016\n\n\nI've written the measurements between -125 -> -120 deg lon and 45-50 degrees lat to\n an hdf: vancouver_hires.h5, download in the cell below\n\n\nsee what channel 1 and the ndvi look like at 250 meter resolution", "from a301utils.a301_readfile import download\nimport numpy as np\nimport h5py\nimport sys\nimport a301lib\nfrom a301lib.geolocate import fast_hist, fast_avg, fast_count,make_plot \nfrom a301lib.radiation import planckInvert\nfrom matplotlib import pyplot as plt\nfrom a301lib.geolocate import find_corners\n#\n# use hdfview to see the structure of this file\n#\nfilename = 'vancouver_hires.h5'\ndownload(filename)\nh5_file=h5py.File(filename)\n\ndef subsample(*datalist,lats=None,lons=None,llcrnr=None,urcrnr=None):\n \"\"\"\n return a list of satellite scene variables (radiances, reflectivities, ndvi, etc.)\n each cropped to a subset of lats, lons and data where:\n llcrnr['lat'] < lats < urcrnr['lat'] and\n llcrnr['lon'] < lon < urcrnr['lon']\n \n Parameters\n ----------\n datalist: list of vectors or arrays of pixel values to be clipped,\n each is same size as lats and lons\n \n lats: vector or array of pixel lats\n units: degrees N\n \n lons: vector or array of pixel lons\n units: degrees E\n \n \n \n llcrnr: dictionary\n lower left corner dictionary with keys ['lat','lon']\n containing the latitude and longitude of the lower left corner\n \n urcrnr: dictionary\n upper right corner dictionary with keys ['lat','lon']\n containing the latitude and longitude of the upper right corner\n \n Returns\n -------\n \n list: lats and lons cropped to the corners, followed by all\n variables in datalist, similarly cropped\n\n \n \"\"\"\n hit_lat = np.logical_and(lats >= llcrnr['lat'], lats <= urcrnr['lat'])\n hit_lon = np.logical_and(lons >= llcrnr['lon'], lons <= urcrnr['lon'])\n 
selected=np.logical_and(hit_lat,hit_lon)\n outlist=[lats[selected],lons[selected]]\n for item in datalist:\n outlist.append(item[selected])\n return outlist\n\n\n\nlat_data=h5_file['latlon']['lat'][...]\nlon_data=h5_file['latlon']['lon'][...]\nchan1_refl=h5_file['data_fields']['chan1'][...]\nchan2_refl=h5_file['data_fields']['chan2'][...]\nndvi = (chan2_refl - chan1_refl)/(chan2_refl + chan1_refl)\nllcrnr = dict(lat=49.,lon=-123.5)\nurcrnr = dict(lat=49.5,lon=-122.5)\nsublats, sublons,subchan1, subchan2, subndvi = subsample(chan1_refl,chan2_refl,ndvi,\n lats=lat_data,lons=lon_data,\n llcrnr=llcrnr,urcrnr=urcrnr)\ncorners=find_corners(sublats,sublons)\n\nlon_min= llcrnr['lon']\nlon_max = urcrnr['lon']\n\nlat_min = llcrnr['lat']\nlat_max = urcrnr['lat']\nbinsize = 0.006\n\nlon_hist = fast_hist(sublons.ravel(),lon_min,lon_max,binsize=binsize)\nlat_hist = fast_hist(sublats.ravel(),lat_min,lat_max,binsize=binsize)\ngridded_ndvi = fast_avg(lat_hist,lon_hist,subndvi.ravel(),bad_neg=False)\ngridded_chan1 = fast_avg(lat_hist,lon_hist,subchan1.ravel(),bad_neg=False)\n\nlat_centers=lat_hist['centers_vec']\nlon_centers=lon_hist['centers_vec']\nlon_array,lat_array=np.meshgrid(lon_centers,lat_centers)\nprint(lon_array.shape)\nmasked_reflects = np.ma.masked_invalid(gridded_chan1)\nmasked_ndvi = np.ma.masked_invalid(gridded_ndvi)", "Here is an example of a seaborn colormap with five xkcd colors", "import seaborn as sns\nfrom matplotlib.colors import ListedColormap, LinearSegmentedColormap \ndef five_colors():\n colors = [\"royal blue\", \"baby blue\", \"eggshell\", \"burnt red\", \"soft pink\"]\n #print([the_color for the_color in colors])\n colors=[sns.xkcd_rgb[the_color] for the_color in colors]\n cmap=ListedColormap(colors,N=5)\n cmap.set_over('w')\n cmap.set_under('k') #black\n cmap.set_bad('0.75') #75% grey\n return cmap\n", "and here is our previous colormap", "from matplotlib import cm\nfrom matplotlib.colors import Normalize\ndef continuous_colors():\n cmap=cm.YlGnBu_r 
#see http://wiki.scipy.org/Cookbook/Matplotlib/Show_colormaps\n cmap.set_over('r')\n cmap.set_under('orange',alpha=0.8)\n cmap.set_bad('pink') #75% grey\n return cmap\n\n\n%matplotlib inline\n#\n# choose one of the two\n#\ncmap = continuous_colors()\n#cmap = five_colors()\n\n#\n# set the range over which the pallette extends so I use\n# use all my colors \n#\nvmin= 0.0\nvmax= 0.10\nthe_norm=Normalize(vmin=vmin,vmax=vmax,clip=False)\nfig,ax = plt.subplots(1,1,figsize=(14,18))\ncorners['ax'] = ax\ncorners['resolution']='h'\ncorners['projection']='lcc'\ncorners['urcrnrlon'] = urcrnr['lon']\ncorners['urcrnrlat'] = urcrnr['lat']\ncorners['llcrnrlat'] = llcrnr['lat']\ncorners['llcrnrlon'] = llcrnr['lon']\nproj = make_plot(corners,lat_sep=0.1,lon_sep=0.25)\nlon_array,lat_array=np.meshgrid(lon_centers,lat_centers)\n#\n# translate every lat,lon pair in the scene to x,y plotting coordinates \n# for th Lambert projection\n#\nx,y=proj(lon_array,lat_array)\nCS=proj.pcolormesh(x, y,masked_reflects, cmap=cmap, norm=the_norm)\nCBar=proj.colorbar(CS, 'right', size='5%', pad='5%',extend='both')\nCBar.set_label('Channel 1 reflectance',\n rotation=270,verticalalignment='bottom',size=18)\n_=ax.set_title('Modis Channel 1, August 11, 2016 Vancouver',size=22)\n\n\nimport seaborn as sns\nsns.choose_diverging_palette(as_cmap=True)\n\ncmap=sns.diverging_palette(261, 153,sep=6, s=85, l=66,as_cmap=True)\nvmin= -0.9\nvmax= 0.9\ncmap.set_over('c')\ncmap.set_under('k',alpha=0.8)\ncmap.set_bad('pink')\nthe_norm=Normalize(vmin=vmin,vmax=vmax,clip=False)\nfig,ax2 = plt.subplots(1,1,figsize=(14,18))\ncorners['ax']=ax2\nproj = make_plot(corners,lat_sep=0.1,lon_sep=0.25)\nCS=proj.pcolormesh(x, y,masked_ndvi, cmap=cmap, norm=the_norm)\nCBar=proj.colorbar(CS, 'right', size='5%', pad='5%',extend='both')\nCBar.set_label('NDVI',\n rotation=270,verticalalignment='bottom',size=18)\n_=ax2.set_title('Modis NDVI, August 11, 2016 Vancouver',size=22)\n" ]
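The NDVI mapped above is simply the normalized difference of the near-infrared (channel 2) and red (channel 1) reflectances. A tiny sketch of the calculation, using made-up reflectance values rather than values from the hdf file:

```python
# NDVI = (NIR - red) / (NIR + red), matching the channel 2 / channel 1
# calculation above. The reflectance values here are made up for illustration.

def ndvi(red_refl, nir_refl):
    denom = nir_refl + red_refl
    if denom == 0.0:
        return float('nan')  # undefined where both reflectances are zero
    return (nir_refl - red_refl) / denom

# Dense vegetation reflects strongly in the near-IR and absorbs red light,
# so NDVI approaches +1; water reflects more red than near-IR, so NDVI < 0.
print(ndvi(0.05, 0.45))   # vegetation-like pixel: 0.8
print(ndvi(0.08, 0.02))   # water-like pixel: -0.6
```

Since NDVI is bounded between -1 and +1, the diverging palette above, centered near 0, makes the vegetation/water contrast easy to read.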
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
brandoncgay/deep-learning
batch-norm/Batch_Normalization_Exercises.ipynb
mit
[ "Batch Normalization – Practice\nBatch normalization is most useful when building deep neural networks. To demonstrate this, we'll create a convolutional neural network with 20 convolutional layers, followed by a fully connected layer. We'll use it to classify handwritten digits in the MNIST dataset, which should be familiar to you by now.\nThis is not a good network for classifying MNIST digits. You could create a much simpler network and get better results. However, to give you hands-on experience with batch normalization, we had to make an example that was:\n1. Complicated enough that training would benefit from batch normalization.\n2. Simple enough that it would train quickly, since this is meant to be a short exercise just to give you some practice adding batch normalization.\n3. Simple enough that the architecture would be easy to understand without additional resources.\nThis notebook includes two versions of the network that you can edit. The first uses higher level functions from the tf.layers package. The second is the same network, but uses only lower level functions in the tf.nn package.\n\nBatch Normalization with tf.layers.batch_normalization\nBatch Normalization with tf.nn.batch_normalization\n\nThe following cell loads TensorFlow, downloads the MNIST dataset if necessary, and loads it into an object named mnist. You'll need to run this cell before running anything else in the notebook.", "import tensorflow as tf\nfrom tensorflow.examples.tutorials.mnist import input_data\nmnist = input_data.read_data_sets(\"MNIST_data/\", one_hot=True, reshape=False)", "Batch Normalization using tf.layers.batch_normalization<a id=\"example_1\"></a>\nThis version of the network uses tf.layers for almost everything, and expects you to implement batch normalization using tf.layers.batch_normalization \nWe'll use the following function to create fully connected layers in our network. 
We'll create them with the specified number of neurons and a ReLU activation function.\nThis version of the function does not include batch normalization.", "\"\"\"\nDO NOT MODIFY THIS CELL\n\"\"\"\ndef fully_connected(prev_layer, num_units):\n \"\"\"\n Create a fully connected layer with the given layer as input and the given number of neurons.\n \n :param prev_layer: Tensor\n The Tensor that acts as input into this layer\n :param num_units: int\n The size of the layer. That is, the number of units, nodes, or neurons.\n :returns Tensor\n A new fully connected layer\n \"\"\"\n layer = tf.layers.dense(prev_layer, num_units, activation=tf.nn.relu)\n return layer", "We'll use the following function to create convolutional layers in our network. They are very basic: we're always using a 3x3 kernel, ReLU activation functions, strides of 2x2 on layers whose depth is a multiple of 3, and strides of 1x1 otherwise. We aren't bothering with pooling layers at all in this network.\nThis version of the function does not include batch normalization.", "\"\"\"\nDO NOT MODIFY THIS CELL\n\"\"\"\ndef conv_layer(prev_layer, layer_depth):\n \"\"\"\n Create a convolutional layer with the given layer as input.\n \n :param prev_layer: Tensor\n The Tensor that acts as input into this layer\n :param layer_depth: int\n We'll set the strides and number of feature maps based on the layer's depth in the network.\n This is *not* a good way to make a CNN, but it helps us create this example with very little code.\n :returns Tensor\n A new convolutional layer\n \"\"\"\n strides = 2 if layer_depth % 3 == 0 else 1\n conv_layer = tf.layers.conv2d(prev_layer, layer_depth*4, 3, strides, 'same', activation=tf.nn.relu)\n return conv_layer", "Run the following cell, along with the earlier cells (to load the dataset and define the necessary functions). \nThis cell builds the network without batch normalization, then trains it on the MNIST dataset. 
It displays loss and accuracy data periodically while training.", "\"\"\"\nDO NOT MODIFY THIS CELL\n\"\"\"\ndef train(num_batches, batch_size, learning_rate):\n # Build placeholders for the input samples and labels \n inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])\n labels = tf.placeholder(tf.float32, [None, 10])\n \n # Feed the inputs into a series of 20 convolutional layers \n layer = inputs\n for layer_i in range(1, 20):\n layer = conv_layer(layer, layer_i)\n\n # Flatten the output from the convolutional layers \n orig_shape = layer.get_shape().as_list()\n layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])\n\n # Add one fully connected layer\n layer = fully_connected(layer, 100)\n\n # Create the output layer with 1 node for each \n logits = tf.layers.dense(layer, 10)\n \n # Define loss and training operations\n model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))\n train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)\n \n # Create operations to test accuracy\n correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))\n accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n \n # Train and test the network\n with tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n for batch_i in range(num_batches):\n batch_xs, batch_ys = mnist.train.next_batch(batch_size)\n\n # train this batch\n sess.run(train_opt, {inputs: batch_xs, labels: batch_ys})\n \n # Periodically check the validation or training loss and accuracy\n if batch_i % 100 == 0:\n loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,\n labels: mnist.validation.labels})\n print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))\n elif batch_i % 25 == 0:\n loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys})\n print('Batch: {:>2}: Training loss: {:>3.5f}, 
Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))\n\n # At the end, score the final accuracy for both the validation and test sets\n acc = sess.run(accuracy, {inputs: mnist.validation.images,\n labels: mnist.validation.labels})\n print('Final validation accuracy: {:>3.5f}'.format(acc))\n acc = sess.run(accuracy, {inputs: mnist.test.images,\n labels: mnist.test.labels})\n print('Final test accuracy: {:>3.5f}'.format(acc))\n \n # Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.\n correct = 0\n for i in range(100):\n correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],\n labels: [mnist.test.labels[i]]})\n\n print(\"Accuracy on 100 samples:\", correct/100)\n\n\nnum_batches = 800\nbatch_size = 64\nlearning_rate = 0.002\n\ntf.reset_default_graph()\nwith tf.Graph().as_default():\n train(num_batches, batch_size, learning_rate)", "With this many layers, it's going to take a lot of iterations for this network to learn. By the time you're done training these 800 batches, your final test and validation accuracies probably won't be much better than 10%. (It will be different each time, but will most likely be less than 15%.)\nUsing batch normalization, you'll be able to train this same network to over 90% in that same number of batches.\nAdd batch normalization\nWe've copied the previous three cells to get you started. Edit these cells to add batch normalization to the network. For this exercise, you should use tf.layers.batch_normalization to handle most of the math, but you'll need to make a few other changes to your network to integrate batch normalization. You may want to refer back to the lesson notebook to remind yourself of important things, like how your graph operations need to know whether or not you are performing training or inference. 
\nIf you get stuck, you can check out the Batch_Normalization_Solutions notebook to see how we did things.\nTODO: Modify fully_connected to add batch normalization to the fully connected layers it creates. Feel free to change the function's parameters if it helps.", "def fully_connected(prev_layer, num_units):\n \"\"\"\n Create a fully connectd layer with the given layer as input and the given number of neurons.\n \n :param prev_layer: Tensor\n The Tensor that acts as input into this layer\n :param num_units: int\n The size of the layer. That is, the number of units, nodes, or neurons.\n :returns Tensor\n A new fully connected layer\n \"\"\"\n layer = tf.layers.dense(prev_layer, num_units, use_bias=False, activation=None)\n layer = tf.layers.batch_normalization(layer, training=is_training)\n layer = tf.nn.relu(layer)\n return layer", "TODO: Modify conv_layer to add batch normalization to the convolutional layers it creates. Feel free to change the function's parameters if it helps.", "def conv_layer(prev_layer, layer_depth):\n \"\"\"\n Create a convolutional layer with the given layer as input.\n \n :param prev_layer: Tensor\n The Tensor that acts as input into this layer\n :param layer_depth: int\n We'll set the strides and number of feature maps based on the layer's depth in the network.\n This is *not* a good way to make a CNN, but it helps us create this example with very little code.\n :returns Tensor\n A new convolutional layer\n \"\"\"\n strides = 2 if layer_depth % 3 == 0 else 1\n conv_layer = tf.layers.conv2d(prev_layer, layer_depth*4, 3, strides, 'same', activation=tf.nn.relu)\n return conv_layer", "TODO: Edit the train function to support batch normalization. 
You'll need to make sure the network knows whether or not it is training, and you'll need to make sure it updates and uses its population statistics correctly.", "def train(num_batches, batch_size, learning_rate):\n # Build placeholders for the input samples and labels \n inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])\n labels = tf.placeholder(tf.float32, [None, 10])\n \n # Feed the inputs into a series of 20 convolutional layers \n layer = inputs\n for layer_i in range(1, 20):\n layer = conv_layer(layer, layer_i)\n\n # Flatten the output from the convolutional layers \n orig_shape = layer.get_shape().as_list()\n layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])\n\n # Add one fully connected layer\n layer = fully_connected(layer, 100)\n\n # Create the output layer with 1 node for each \n logits = tf.layers.dense(layer, 10)\n \n # Define loss and training operations\n model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))\n train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)\n \n # Create operations to test accuracy\n correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))\n accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n \n # Train and test the network\n with tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n for batch_i in range(num_batches):\n batch_xs, batch_ys = mnist.train.next_batch(batch_size)\n\n # train this batch\n sess.run(train_opt, {inputs: batch_xs, labels: batch_ys})\n \n # Periodically check the validation or training loss and accuracy\n if batch_i % 100 == 0:\n loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,\n labels: mnist.validation.labels})\n print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))\n elif batch_i % 25 == 0:\n loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: 
batch_ys})\n print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))\n\n # At the end, score the final accuracy for both the validation and test sets\n acc = sess.run(accuracy, {inputs: mnist.validation.images,\n labels: mnist.validation.labels})\n print('Final validation accuracy: {:>3.5f}'.format(acc))\n acc = sess.run(accuracy, {inputs: mnist.test.images,\n labels: mnist.test.labels})\n print('Final test accuracy: {:>3.5f}'.format(acc))\n \n # Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.\n correct = 0\n for i in range(100):\n correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],\n labels: [mnist.test.labels[i]]})\n\n print(\"Accuracy on 100 samples:\", correct/100)\n\n\nnum_batches = 800\nbatch_size = 64\nlearning_rate = 0.002\n\ntf.reset_default_graph()\nwith tf.Graph().as_default():\n train(num_batches, batch_size, learning_rate)", "With batch normalization, you should now get an accuracy over 90%. Notice also the last line of the output: Accuracy on 100 samples. If this value is low while everything else looks good, that means you did not implement batch normalization correctly. Specifically, it means you either did not calculate the population mean and variance while training, or you are not using those values during inference.\nBatch Normalization using tf.nn.batch_normalization<a id=\"example_2\"></a>\nMost of the time you will be able to use higher level functions exclusively, but sometimes you may want to work at a lower level. 
For example, if you ever want to implement a new feature – something new enough that TensorFlow does not already include a high-level implementation of it, like batch normalization in an LSTM – then you may need to know these sorts of things.\nThis version of the network uses tf.nn for almost everything, and expects you to implement batch normalization using tf.nn.batch_normalization.\nOptional TODO: You can run the next three cells before you edit them just to see how the network performs without batch normalization. However, the results should be pretty much the same as you saw with the previous example before you added batch normalization. \nTODO: Modify fully_connected to add batch normalization to the fully connected layers it creates. Feel free to change the function's parameters if it helps.\nNote: For convenience, we continue to use tf.layers.dense for the fully_connected layer. By this point in the class, you should have no problem replacing that with matrix operations between the prev_layer and explicit weights and biases variables.", "def fully_connected(prev_layer, num_units):\n \"\"\"\n Create a fully connectd layer with the given layer as input and the given number of neurons.\n \n :param prev_layer: Tensor\n The Tensor that acts as input into this layer\n :param num_units: int\n The size of the layer. That is, the number of units, nodes, or neurons.\n :returns Tensor\n A new fully connected layer\n \"\"\"\n layer = tf.layers.dense(prev_layer, num_units, activation=tf.nn.relu)\n return layer", "TODO: Modify conv_layer to add batch normalization to the fully connected layers it creates. 
Feel free to change the function's parameters if it helps.\nNote: Unlike in the previous example that used tf.layers, adding batch normalization to these convolutional layers does require some slight differences to what you did in fully_connected.", "def conv_layer(prev_layer, layer_depth):\n \"\"\"\n Create a convolutional layer with the given layer as input.\n \n :param prev_layer: Tensor\n The Tensor that acts as input into this layer\n :param layer_depth: int\n We'll set the strides and number of feature maps based on the layer's depth in the network.\n This is *not* a good way to make a CNN, but it helps us create this example with very little code.\n :returns Tensor\n A new convolutional layer\n \"\"\"\n strides = 2 if layer_depth % 3 == 0 else 1\n\n in_channels = prev_layer.get_shape().as_list()[3]\n out_channels = layer_depth*4\n \n weights = tf.Variable(\n tf.truncated_normal([3, 3, in_channels, out_channels], stddev=0.05))\n \n bias = tf.Variable(tf.zeros(out_channels))\n\n conv_layer = tf.nn.conv2d(prev_layer, weights, strides=[1,strides, strides, 1], padding='SAME')\n conv_layer = tf.nn.bias_add(conv_layer, bias)\n conv_layer = tf.nn.relu(conv_layer)\n\n return conv_layer", "TODO: Edit the train function to support batch normalization. 
You'll need to make sure the network knows whether or not it is training.", "def train(num_batches, batch_size, learning_rate):\n # Build placeholders for the input samples and labels \n inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])\n labels = tf.placeholder(tf.float32, [None, 10])\n \n # Feed the inputs into a series of 20 convolutional layers \n layer = inputs\n for layer_i in range(1, 20):\n layer = conv_layer(layer, layer_i)\n\n # Flatten the output from the convolutional layers \n orig_shape = layer.get_shape().as_list()\n layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])\n\n # Add one fully connected layer\n layer = fully_connected(layer, 100)\n\n # Create the output layer with 1 node for each \n logits = tf.layers.dense(layer, 10)\n \n # Define loss and training operations\n model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))\n train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)\n \n # Create operations to test accuracy\n correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))\n accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n \n # Train and test the network\n with tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n for batch_i in range(num_batches):\n batch_xs, batch_ys = mnist.train.next_batch(batch_size)\n\n # train this batch\n sess.run(train_opt, {inputs: batch_xs, labels: batch_ys})\n \n # Periodically check the validation or training loss and accuracy\n if batch_i % 100 == 0:\n loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,\n labels: mnist.validation.labels})\n print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))\n elif batch_i % 25 == 0:\n loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys})\n print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: 
{:>3.5f}'.format(batch_i, loss, acc))\n\n # At the end, score the final accuracy for both the validation and test sets\n acc = sess.run(accuracy, {inputs: mnist.validation.images,\n labels: mnist.validation.labels})\n print('Final validation accuracy: {:>3.5f}'.format(acc))\n acc = sess.run(accuracy, {inputs: mnist.test.images,\n labels: mnist.test.labels})\n print('Final test accuracy: {:>3.5f}'.format(acc))\n \n # Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.\n correct = 0\n for i in range(100):\n correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],\n labels: [mnist.test.labels[i]]})\n\n print(\"Accuracy on 100 samples:\", correct/100)\n\n\nnum_batches = 800\nbatch_size = 64\nlearning_rate = 0.002\n\ntf.reset_default_graph()\nwith tf.Graph().as_default():\n train(num_batches, batch_size, learning_rate)", "Once again, the model with batch normalization should reach an accuracy over 90%. There are plenty of details that can go wrong when implementing at this low level, so if you got it working - great job! If not, do not worry, just look at the Batch_Normalization_Solutions notebook to see what went wrong." ]
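The normalization that the exercise asks you to add with `tf.nn.batch_normalization` boils down to normalizing each feature over the batch and then applying a learned scale and shift. The training-time computation can be sketched in plain NumPy (an illustrative sketch only — the `batch_norm` helper below is not part of TensorFlow):

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Normalize a batch to zero mean / unit variance per feature,
    then scale by gamma and shift by beta. This mirrors the
    training-time math behind tf.nn.batch_normalization when fed
    batch statistics from tf.nn.moments."""
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=3.0, size=(64, 10))
out = batch_norm(x, gamma=np.ones(10), beta=np.zeros(10))
# With gamma=1 and beta=0 the output has ~zero mean and ~unit variance
# for every feature, regardless of the input's mean and scale.
```

At inference time you would instead normalize with running averages of the batch statistics, which is exactly why the `train` function needs to know whether the network is training.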
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
dchandan/rebound
ipython_examples/OrbitalElements.ipynb
gpl-3.0
[ "Orbital Elements\nWe can add particles to a simulation by specifying Cartesian components:", "import rebound\nsim = rebound.Simulation()\nsim.add(m=1., x=1., vz = 2.)", "Any components not passed automatically default to 0. REBOUND can also accept orbital elements. \nReference bodies\nAs a reminder, there is a one-to-one mapping between (x,y,z,vx,vy,vz) and orbital elements, and one should always specify what the orbital elements are referenced against (e.g., the central star, the system's barycenter, etc.). Orbital elements referenced to these different centers differ by $\\sim$ the mass ratio of the largest body to the central mass. By default, REBOUND always uses Jacobi elements, which for each particle are always referenced to the center of mass of all particles with lower index in the simulation. This is a useful set for theoretical calculations, and gives a logical behavior as the mass ratio increases, e.g., in the case of a circumbinary planet. Let's set up a binary,", "sim.add(m=1., a=1.)\nsim.status()", "We always have to pass a semimajor axis (to set a length scale), but any other elements are by default set to 0. Notice that our second star has the same vz as the first one due to the default Jacobi elements. Now we could add a distant planet on a circular orbit,", "sim.add(m=1.e-3, a=100.)", "This planet is set up relative to the binary center of mass (again due to the Jacobi coordinates), which is probably what we want. But imagine we now want to place a test mass in a tight orbit around the second star. If we passed things as above, the orbital elements would be referenced to the binary/outer-planet center of mass. We can override the default by explicitly passing a primary (any instance of the Particle class):", "sim.add(primary=sim.particles[1], a=0.01)", "All simulations are performed in Cartesian coordinates, so to avoid the overhead, REBOUND does not update particles' orbital elements as the simulation progresses. 
However, we can always calculate them when required with sim.calculate_orbits(). Note that REBOUND will always output angles in the range $[-\\pi,\\pi]$, except the inclination which is always in $[0,\\pi]$.", "orbits = sim.calculate_orbits()\nfor orbit in orbits:\n print(orbit)", "Notice that there is always one less orbit than there are particles, since orbits are only defined between pairs of particles. We see that we got the first two orbits right, but the last one is way off. The reason is that again the REBOUND default is that we always get Jacobi elements. But we initialized the last particle relative to the second star, rather than the center of mass of all the previous particles.\nTo get orbital elements relative to a specific body, you can manually use the calculate_orbit method of the Particle class:", "print(sim.particles[3].calculate_orbit(sim, primary=sim.particles[1]))", "though we could have simply avoided this problem by adding bodies from the inside out (second star, test mass, first star, circumbinary planet).\nEdge cases and orbital element sets\nDifferent orbital elements lose meaning in various limits, e.g., a planar orbit and a circular orbit. REBOUND therefore allows initialization with several different types of variables that are appropriate in different cases. It's important to keep in mind that the procedure to initialize particles from orbital elements is not exactly invertible, so one can expect discrepant results for elements that become ill defined. For example,", "sim = rebound.Simulation()\nsim.add(m=1.)\nsim.add(a=1., e=0., inc=0.1, Omega=0.3, omega=0.1)\norbits = sim.calculate_orbits()\nprint(orbits[0])", "The problem here is that $\\omega$ (the angle from the ascending node to pericenter) is ill-defined for a circular orbit, so it's not clear what we mean when we pass it, and we get spurious results (i.e., $\\omega = 0$ rather than 0.1, and $f=0.1$ rather than the default 0). 
Similarly, $f$, the angle from pericenter to the particle's position, is undefined. However, the true longitude $\\theta$, the broken angle from the $x$ axis to the ascending node = $\\Omega + \\omega + f$, and then to the particle's position, is always well defined:", "print(orbits[0].theta)", "To be clearer and ensure we get the results we expect, we could instead pass theta to specify the longitude we want, e.g.", "sim = rebound.Simulation()\nsim.add(m=1.)\nsim.add(a=1., e=0., inc=0.1, Omega=0.3, theta = 0.4)\norbits = sim.calculate_orbits()\nprint(orbits[0].theta)\n\nimport rebound\nsim = rebound.Simulation()\nsim.add(m=1.)\nsim.add(a=1., e=0.2, Omega=0.1)\norbits = sim.calculate_orbits()\nprint(orbits[0])", "Here we have a planar orbit, in which case the line of nodes becomes ill defined, so $\\Omega$ is not a good variable, but we pass it anyway! In this case, $\\omega$ is also undefined since it is referenced to the ascending node. Here we get that now these two ill-defined variables get flipped. The appropriate variable is pomega ($\\varpi = \\Omega + \\omega$), which is the angle from the $x$ axis to pericenter:", "print(orbits[0].pomega)", "We can specify the pericenter of the orbit with either $\\omega$ or $\\varpi$:", "import rebound\nsim = rebound.Simulation()\nsim.add(m=1.)\nsim.add(a=1., e=0.2, pomega=0.1)\norbits = sim.calculate_orbits()\nprint(orbits[0])", "Note that if the inclination is exactly zero, REBOUND sets $\\Omega$ (which is undefined) to 0, so $\\omega = \\varpi$. \nFinally, we can initialize particles using mean, rather than true, longitudes or anomalies (for example, this might be useful for resonances). 
We can either use the mean anomaly $M$, which is referenced to pericenter (again ill-defined for circular orbits), or its better-defined counterpart the mean longitude l $= \\lambda = \\Omega + \\omega + M$, which is analogous to $\\theta$ above,", "sim = rebound.Simulation()\nsim.add(m=1.)\nsim.add(a=1., e=0.1, Omega=0.3, M = 0.1)\nsim.add(a=1., e=0.1, Omega=0.3, l = 0.4)\norbits = sim.calculate_orbits()\nprint(orbits[0].l)\nprint(orbits[1].l)\n\nimport rebound\nsim = rebound.Simulation()\nsim.add(m=1.)\nsim.add(a=1., e=0.1, omega=1.)\norbits = sim.calculate_orbits()\nprint(orbits[0])", "Accuracy\nAs a test of accuracy and demonstration of issues related to the last section, let's test the numerical stability by intializing particles with small eccentricities and true anomalies, computing their orbital elements back, and comparing the relative error. We choose the inclination and node longitude randomly:", "import random\nimport numpy as np\n\ndef simulation(par):\n e,f = par\n e = 10**e\n f = 10**f\n sim = rebound.Simulation()\n sim.add(m=1.)\n a = 1.\n inc = random.random()*np.pi\n Omega = random.random()*2*np.pi\n sim.add(m=0.,a=a,e=e,inc=inc,Omega=Omega, f=f)\n o=sim.calculate_orbits()[0]\n if o.f < 0: # avoid wrapping issues\n o.f += 2*np.pi\n err = max(np.fabs(o.e-e)/e, np.fabs(o.f-f)/f)\n return err\n\nrandom.seed(1)\nN = 100\nes = np.linspace(-16.,-1.,N)\nfs = np.linspace(-16.,-1.,N)\nparams = [(e,f) for e in es for f in fs]\n\npool=rebound.InterruptiblePool()\nres = pool.map(simulation, params)\nres = np.array(res).reshape(N,N)\nres = np.nan_to_num(res)\n\n%matplotlib inline\nimport matplotlib.pyplot as plt\nfrom matplotlib import ticker\nfrom matplotlib.colors import LogNorm\nimport matplotlib\n\nf,ax = plt.subplots(1,1,figsize=(7,5))\nextent=[fs.min(), fs.max(), es.min(), es.max()]\n\nax.set_xlim(extent[0], extent[1])\nax.set_ylim(extent[2], extent[3])\nax.set_xlabel(r\"true anomaly (f)\")\nax.set_ylabel(r\"eccentricity\")\n\nim = ax.imshow(res, 
norm=LogNorm(), vmax=1., vmin=1.e-16, aspect='auto', origin=\"lower\", interpolation='nearest', cmap=\"RdYlGn_r\", extent=extent)\ncb = plt.colorbar(im, ax=ax)\ncb.solids.set_rasterized(True)\ncb.set_label(\"Relative Error\")", "We see that the behavior is poor, which is physically due to $f$ becoming poorly defined at low $e$. If instead we initialize the orbits with the true longitude $\\theta$ as discussed above, we get much better results:", "def simulation(par):\n e,theta = par\n e = 10**e\n theta = 10**theta\n sim = rebound.Simulation()\n sim.add(m=1.)\n a = 1.\n inc = random.random()*np.pi\n Omega = random.random()*2*np.pi\n omega = random.random()*2*np.pi\n sim.add(m=0.,a=a,e=e,inc=inc,Omega=Omega, theta=theta)\n o=sim.calculate_orbits()[0]\n if o.theta < 0:\n o.theta += 2*np.pi\n err = max(np.fabs(o.e-e)/e, np.fabs(o.theta-theta)/theta)\n return err\n\nrandom.seed(1)\nN = 100\nes = np.linspace(-16.,-1.,N)\nthetas = np.linspace(-16.,-1.,N)\nparams = [(e,theta) for e in es for theta in thetas]\n\npool=rebound.InterruptiblePool()\nres = pool.map(simulation, params)\nres = np.array(res).reshape(N,N)\nres = np.nan_to_num(res)\n\nf,ax = plt.subplots(1,1,figsize=(7,5))\nextent=[thetas.min(), thetas.max(), es.min(), es.max()]\n\nax.set_xlim(extent[0], extent[1])\nax.set_ylim(extent[2], extent[3])\nax.set_xlabel(r\"true longitude (\\theta)\")\nax.set_ylabel(r\"eccentricity\")\n\nim = ax.imshow(res, norm=LogNorm(), vmax=1., vmin=1.e-16, aspect='auto', origin=\"lower\", interpolation='nearest', cmap=\"RdYlGn_r\", extent=extent)\ncb = plt.colorbar(im, ax=ax)\ncb.solids.set_rasterized(True)\ncb.set_label(\"Relative Error\")", "Hyperbolic & Parabolic Orbits\nREBOUND can also handle hyperbolic orbits, which have negative $a$ and $e>1$:", "sim.add(a=-0.2, e=1.4)\nsim.status()", "Currently there is no support for exactly parabolic orbits, but we can get a close approximation by passing a nearby hyperbolic orbit where we can specify the pericenter = $|a|(e-1)$ with $a$ and 
$e$. For example, for a 0.1 AU pericenter,", "sim = rebound.Simulation()\nsim.add(m=1.)\nq = 0.1\na=-1.e14\ne=1.+q/np.fabs(a)\nsim.add(a=a, e=e)\nprint(sim.calculate_orbits()[0])", "Retrograde Orbits\nOrbital elements can be counterintuitive for retrograde orbits, but REBOUND tries to sort them out consistently. This can lead to some initially surprising results. For example,", "sim = rebound.Simulation()\nsim.add(m=1.)\nsim.add(a=1.,inc=np.pi,e=0.1, Omega=0., pomega=1.)\nprint(sim.calculate_orbits()[0])", "We passed $\\Omega=0$ and $\\varpi=1.$. For prograde orbits, $\\varpi = \\Omega + \\omega$, so we'd expect $\\omega = 1$, but instead we get $\\omega=-1$. If we think about things physically, $\\varpi$ is the angle from the $x$ axis to pericenter, measured in the positive direction (counterclockwise) defined by $z$. $\\Omega$ is always measured in this same sense, but $\\omega$ is always measured in the orbital plane in the direction of the orbit. For retrograde orbits, this means that $\\omega$ is measured in the opposite sense to $\\Omega$, so $\\varpi = \\Omega - \\omega$, which is why we got $\\omega = -1$. \nSimilarly, the retrograde version of $\\theta = \\Omega + \\omega + f$ is $\\theta = \\Omega - \\omega - f$, and l = $\\lambda = \\Omega + \\omega + M$ becomes $\\lambda = \\Omega - \\omega - M$. REBOUND chooses these conventions based on whether $i < \\pi/2$, which means that if you were tracking $\\varpi$ for nearly polar orbits, you would get unphysical jumps if the orbits crossed back and forth between prograde and retrograde. Of course, $\\varpi$ is not a good angle at such high inclinations, and only has physical meaning when the orbital plane nearly coincides with the reference plane.\nExceptions\nAdding a particle or getting orbital elements should never yield NaNs in any of the structure fields. Please let us know if you find a case that does. \nIn cases where it would return a NaN, rebound will raise a ValueError. 
The only cases that should do so when adding a particle are 1) passing an eccentricity of exactly 1. 2) passing a negative eccentricity. 3) Passing $e>1$ if $a>0$. 4) Passing $e<1$ if $a<0$. 5) Passing a longitude or anomaly for a hyperbolic orbit that's beyond the range allowed by the asymptotes defined by the hyperbola.\nWhen obtaining orbital elements from a Particle structure, REBOUND will raise a ValueError if 1) the primary's mass is zero, or 2) the particle's and primary's position are the same.\nNegative inclinations\nWhile inclinations are only defined in the range $[0,\\pi]$, you can also pass negative inclinations when adding particles in REBOUND. This is interpreted as referencing $\\Omega$ and $\\omega$ to the descending, rather than the ascending node. So for example, if one set up particles with the same $\\Omega$ and a range of inclinations distributed around zero, one would obtain what one might expect, i.e. a set of orbits that are all rotated around the same line of nodes." ]
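The angle conventions above are easy to get wrong, so here is a small self-contained sketch (hypothetical helper functions, not REBOUND's API) of how $\theta$ and $\varpi$ are composed for prograde versus retrograde orbits, with results wrapped into REBOUND's output range $[-\pi, \pi]$:

```python
import numpy as np

def wrap(angle):
    # Wrap an angle into REBOUND's output range [-pi, pi]
    return (angle + np.pi) % (2 * np.pi) - np.pi

def true_longitude(inc, Omega, omega, f):
    # theta = Omega + omega + f for prograde orbits (inc < pi/2),
    # theta = Omega - omega - f for retrograde ones
    sign = 1.0 if inc < np.pi / 2 else -1.0
    return wrap(Omega + sign * (omega + f))

def pomega(inc, Omega, omega):
    # varpi = Omega + omega (prograde) or Omega - omega (retrograde)
    sign = 1.0 if inc < np.pi / 2 else -1.0
    return wrap(Omega + sign * omega)

# Prograde case from above: Omega=0.3, omega=0.1, f=0 gives theta=0.4
theta = true_longitude(0.1, 0.3, 0.1, 0.0)
# Retrograde case from above: inc=pi, Omega=0, varpi=1 implies omega=-1
omega_retro = -1.0
```

This reproduces the example in the text: with $i = \pi$, $\Omega = 0$ and $\omega = -1$, the retrograde convention gives $\varpi = \Omega - \omega = 1$.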
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
hparik11/Deep-Learning-Nanodegree-Foundation-Repository
autoencoder/Simple_Autoencoder.ipynb
mit
[ "A Simple Autoencoder\nWe'll start off by building a simple autoencoder to compress the MNIST dataset. With autoencoders, we pass input data through an encoder that makes a compressed representation of the input. Then, this representation is passed through a decoder to reconstruct the input data. Generally the encoder and decoder will be built with neural networks, then trained on example data.\n\nIn this notebook, we'll build a simple network architecture for the encoder and decoder. Let's get started by importing our libraries and getting the dataset.", "%matplotlib inline\n\nimport numpy as np\nimport tensorflow as tf\nimport matplotlib.pyplot as plt\n\nfrom tensorflow.examples.tutorials.mnist import input_data\nmnist = input_data.read_data_sets('MNIST_data', validation_size=0)", "Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits.", "img = mnist.train.images[2]\nplt.imshow(img.reshape((28, 28)), cmap='Greys_r')", "We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input.\n\n\nExercise: Build the graph for the autoencoder in the cell below. The input images will be flattened into 784 length vectors. The targets are the same as the inputs. And there should be one hidden layer with a ReLU activation and an output layer with a sigmoid activation. Feel free to use TensorFlow's higher level API, tf.layers. 
For instance, you would use tf.layers.dense(inputs, units, activation=tf.nn.relu) to create a fully connected layer with a ReLU activation. The loss should be calculated with the cross-entropy loss; there is a convenient TensorFlow function for this, tf.nn.sigmoid_cross_entropy_with_logits (documentation). You should note that tf.nn.sigmoid_cross_entropy_with_logits takes the logits, but to get the reconstructed images you'll need to pass the logits through the sigmoid function.", "# Size of the encoding layer (the hidden layer)\nencoding_dim = 64 # feel free to change this value\n\n# Input and target placeholders\ninputs_ = tf.placeholder(tf.float32, shape=(None, 784), name=\"inputs\")\ntargets_ = tf.placeholder(tf.float32, shape=(None, 784), name=\"targets\")\n\n# Output of hidden layer, single fully connected layer here with ReLU activation\nencoded = tf.layers.dense(inputs_, encoding_dim, activation=tf.nn.relu)\n\n# Output layer logits, fully connected layer with no activation\nlogits = tf.layers.dense(encoded, 784)\n# Sigmoid output from the logits\ndecoded = tf.nn.sigmoid(logits, name='output')\n\nloss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)\ncost = tf.reduce_mean(loss)\nopt = tf.train.AdamOptimizer(0.001).minimize(cost)", "Training", "# Create the session\nsess = tf.Session()", "Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss. \nCalling mnist.train.next_batch(batch_size) will return a tuple of (images, labels). We're not concerned with the labels here, we just need the images. Otherwise this is pretty straightforward training with TensorFlow. We initialize the variables with sess.run(tf.global_variables_initializer()). 
Then, run the optimizer and get the loss with batch_cost, _ = sess.run([cost, opt], feed_dict=feed).", "epochs = 20\nbatch_size = 200\nsess.run(tf.global_variables_initializer())\nfor e in range(epochs):\n for ii in range(mnist.train.num_examples//batch_size):\n batch = mnist.train.next_batch(batch_size)\n feed = {inputs_: batch[0], targets_: batch[0]}\n batch_cost, _ = sess.run([cost, opt], feed_dict=feed)\n\n print(\"Epoch: {}/{}...\".format(e+1, epochs),\n \"Training loss: {:.4f}\".format(batch_cost))", "Checking out the results\nBelow I've plotted some of the test images along with their reconstructions. For the most part these look pretty good except for some blurriness in some parts.", "fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))\nin_imgs = mnist.test.images[:10]\nreconstructed, compressed = sess.run([decoded, encoded], feed_dict={inputs_: in_imgs})\n\nfor images, row in zip([in_imgs, reconstructed], axes):\n for img, ax in zip(images, row):\n ax.imshow(img.reshape((28, 28)), cmap='Greys_r')\n ax.get_xaxis().set_visible(False)\n ax.get_yaxis().set_visible(False)\n\nfig.tight_layout(pad=0.1)\n\nsess.close()", "Up Next\nWe're dealing with images here, so we can (usually) get better performance using convolution layers. So, next we'll build a better autoencoder with convolutional layers.\nIn practice, autoencoders aren't actually better at compression compared to typical methods like JPEGs and MP3s. But, they are being used for noise reduction, which you'll also build." ]
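The cost above is built from `tf.nn.sigmoid_cross_entropy_with_logits`, which works on raw logits for numerical stability. Its documented stable formula, `max(x, 0) - x*z + log(1 + exp(-|x|))`, can be sketched in NumPy (the helper name below is just for illustration):

```python
import numpy as np

def sigmoid_xent_with_logits(labels, logits):
    # Numerically stable elementwise sigmoid cross-entropy from logits:
    # max(x, 0) - x*z + log(1 + exp(-|x|))
    x, z = logits, labels
    return np.maximum(x, 0.0) - x * z + np.log1p(np.exp(-np.abs(x)))

logits = np.array([0.0, 2.0, -2.0])
labels = np.array([0.5, 1.0, 0.0])
loss = sigmoid_xent_with_logits(labels, logits)

# The naive form agrees where it is numerically safe, but would overflow
# for large |logits|, which is why the op takes logits rather than sigmoids.
sig = 1.0 / (1.0 + np.exp(-logits))
naive = -labels * np.log(sig) - (1 - labels) * np.log(1 - sig)
```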
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
QuantEcon/QuantEcon.notebooks
game_theory_py.ipynb
bsd-3-clause
[ "Tools for Game Theory in QuantEcon.py\nDaisuke Oyama\nFaculty of Economics, University of Tokyo\nThis notebook demonstrates the functionalities of the game_theory module\nin QuantEcon.py.", "import numpy as np\nimport quantecon.game_theory as gt", "Normal Form Games\nAn $N$-player normal form game is a triplet $g = (I, (A_i)_{i \\in I}, (u_i)_{i \\in I})$ where\n\n$I = \\{0, \\ldots, N-1\\}$ is the set of players,\n$A_i = \\{0, \\ldots, n_i-1\\}$ is the set of actions of player $i \\in I$, and\n$u_i \\colon A_i \\times A_{i+1} \\times \\cdots \\times A_{i+N-1} \\to \\mathbb{R}$\n is the payoff function of player $i \\in I$,\n\nwhere $i+j$ is understood modulo $N$.\nNote that we adopt the convention that the $0$-th argument of the payoff function $u_i$ is\nplayer $i$'s own action\nand the $j$-th argument, $j = 1, \\ldots, N-1$, is player ($i+j$)'s action (modulo $N$).\nIn our module,\na normal form game and a player are represented by\nthe classes NormalFormGame and Player, respectively.\nA Player carries the player's payoff function and implements in particular\na method that returns the best response action(s) given an action of the opponent player,\nor a profile of actions of the opponents if there are more than one.\nA NormalFormGame is in effect a container of Player instances.\nCreating a NormalFormGame\nThere are several ways to create a NormalFormGame instance.\nThe first is to pass an array of payoffs for all the players, i.e.,\nan $(N+1)$-dimensional array of shape $(n_0, \\ldots, n_{N-1}, N)$\nwhose $(a_0, \\ldots, a_{N-1})$-entry contains an array of the $N$ payoff values\nfor the action profile $(a_0, \\ldots, a_{N-1})$.\nAs an example, consider the following game (\"Matching Pennies\"):\n$\n\\begin{bmatrix}\n1, -1 & -1, 1 \\\\\n-1, 1 & 1, -1\n\\end{bmatrix}\n$", "matching_pennies_bimatrix = [[(1, -1), (-1, 1)],\n [(-1, 1), (1, -1)]]\ng_MP = gt.NormalFormGame(matching_pennies_bimatrix)\n\nprint(g_MP)\n\nprint(g_MP.players[0]) # Player instance for player 
0\n\nprint(g_MP.players[1]) # Player instance for player 1\n\ng_MP.players[0].payoff_array # Player 0's payoff array\n\ng_MP.players[1].payoff_array # Player 1's payoff array\n\ng_MP[0, 0] # payoff profile for action profile (0, 0)", "If a square matrix (2-dimensional array) is given,\nthen it is considered to be a symmetric two-player game.\nConsider the following game (symmetric $2 \\times 2$ \"Coordination Game\"):\n$\n\\begin{bmatrix}\n4, 4 & 0, 3 \\\\\n3, 0 & 2, 2\n\\end{bmatrix}\n$", "coordination_game_matrix = [[4, 0],\n [3, 2]] # square matrix\ng_Coo = gt.NormalFormGame(coordination_game_matrix)\n\nprint(g_Coo)\n\ng_Coo.players[0].payoff_array # Player 0's payoff array\n\ng_Coo.players[1].payoff_array # Player 1's payoff array", "Another example (\"Rock-Paper-Scissors\"):\n$\n\\begin{bmatrix}\n 0, 0 & -1, 1 & 1, -1 \\\\\n 1, -1 & 0, 0 & -1, 1 \\\\\n-1, 1 & 1, -1 & 0, 0\n\\end{bmatrix}\n$", "RPS_matrix = [[ 0, -1, 1],\n [ 1, 0, -1],\n [-1, 1, 0]]\ng_RPS = gt.NormalFormGame(RPS_matrix)\n\nprint(g_RPS)", "The second is to specify the sizes of the action sets of the players\nto create a NormalFormGame instance filled with payoff zeros,\nand then set the payoff values to each entry.\nLet us construct the following game (\"Prisoners' Dilemma\"):\n$\n\\begin{bmatrix}\n1, 1 & -2, 3 \\\\\n3, -2 & 0, 0\n\\end{bmatrix}\n$", "g_PD = gt.NormalFormGame((2, 2)) # There are 2 players, each of whom has 2 actions\ng_PD[0, 0] = 1, 1\ng_PD[0, 1] = -2, 3\ng_PD[1, 0] = 3, -2\ng_PD[1, 1] = 0, 0\n\nprint(g_PD)", "Finally, a NormalFormGame instance can be constructed by giving an array of Player instances,\nas explained in the next section.\nCreating a Player\nA Player instance is created by passing an array of dimension $N$\nthat represents the player's payoff function (\"payoff array\").\nConsider the following game (a variant of \"Battle of the Sexes\"):\n$\n\\begin{bmatrix}\n3, 2 & 1, 1 \\\\\n0, 0 & 2, 3\n\\end{bmatrix}\n$", "player0 = gt.Player([[3, 1],\n [0, 2]])\nplayer1 = 
gt.Player([[2, 0],\n [1, 3]])", "Beware that in payoff_array[h, k], h refers to the player's own action,\nwhile k refers to the opponent player's action.", "player0.payoff_array\n\nplayer1.payoff_array", "Passing an array of Player instances is the third way to create a NormalFormGame instance:", "g_BoS = gt.NormalFormGame((player0, player1))\n\nprint(g_BoS)", "More than two players\nThe game_theory module also supports games with more than two players.\nLet us consider the following version of $N$-player Cournot Game.\nThere are $N$ firms (players) which produce a homogeneous good\nwith common constant marginal cost $c \\geq 0$.\nEach firm $i$ simultaneously determines the quantity $q_i \\geq 0$ (action) of the good to produce.\nThe inverse demand function is given by the linear function $P(Q) = a - Q$, $a > 0$,\nwhere $Q = q_0 + \\cdots + q_{N-1}$ is the aggregate supply.\nThen the profit (payoff) for firm $i$ is given by\n$$\nu_i(q_i, q_{i+1}, \\ldots, q_{i+N-1})\n= P(Q) q_i - c q_i\n= \\left(a - c - \\sum_{j \\neq i} q_j - q_i\\right) q_i.\n$$\nTheoretically, the set of actions, i.e., available quantities, may be\nthe set of all nonnegative real numbers $\\mathbb{R}_+$\n(or a bounded interval $[0, \\bar{q}]$ with some upper bound $\\bar{q}$),\nbut for computation on a computer we have to discretize the action space\nand only allow for finitely many grid points.\nThe following script creates a NormalFormGame instance of the Cournot game as described above,\nassuming that the (common) grid of possible quantity values is stored in an array q_grid.", "from quantecon import cartesian\n\n\ndef cournot(a, c, N, q_grid):\n \"\"\"\n Create a `NormalFormGame` instance for the symmetric N-player Cournot game\n with linear inverse demand a - Q and constant marginal cost c.\n\n Parameters\n ----------\n a : scalar\n Intercept of the demand curve\n\n c : scalar\n Common constant marginal cost\n\n N : scalar(int)\n Number of firms\n\n q_grid : array_like(scalar)\n Array 
containing the set of possible quantities\n\n Returns\n -------\n NormalFormGame\n NormalFormGame instance representing the Cournot game\n\n \"\"\"\n q_grid = np.asarray(q_grid)\n payoff_array = \\\n cartesian([q_grid]*N).sum(axis=-1).reshape([len(q_grid)]*N) * (-1) + \\\n (a - c)\n payoff_array *= q_grid.reshape([len(q_grid)] + [1]*(N-1))\n payoff_array += 0 # To get rid of the minus sign of -0\n\n player = gt.Player(payoff_array)\n return gt.NormalFormGame([player for i in range(N)])", "Here's a simple example with three firms,\nmarginal cost $20$, and inverse demand function $80 - Q$,\nwhere the feasible quantity values are assumed to be $10$ and $15$.", "a, c = 80, 20\nN = 3\nq_grid = [10, 15] # [1/3 of Monopoly quantity, Nash equilibrium quantity]\n\ng_Cou = cournot(a, c, N, q_grid)\n\nprint(g_Cou)\n\nprint(g_Cou.players[0])\n\ng_Cou.nums_actions", "Nash Equilibrium\nA Nash equilibrium of a normal form game is a profile of actions\nwhere the action of each player is a best response to the others'.\nThe Player object has a method best_response.\nConsider the Matching Pennies game g_MP defined above.\nFor example, player 0's best response to the opponent's action 1 is:", "g_MP.players[0].best_response(1)", "Player 0's best responses to the opponent's mixed action [0.5, 0.5]\n(we know they are 0 and 1):", "# By default, returns the best response action with the smallest index\ng_MP.players[0].best_response([0.5, 0.5])\n\n# With tie_breaking='random', returns randomly one of the best responses\ng_MP.players[0].best_response([0.5, 0.5], tie_breaking='random') # Try several times\n\n# With tie_breaking=False, returns an array of all the best responses\ng_MP.players[0].best_response([0.5, 0.5], tie_breaking=False)", "For this game, we know that ([0.5, 0.5], [0.5, 0.5]) is a (unique) Nash equilibrium.", "g_MP.is_nash(([0.5, 0.5], [0.5, 0.5]))\n\ng_MP.is_nash((0, 0))\n\ng_MP.is_nash((0, [0.5, 0.5]))", "Finding Nash equilibria\nThere are several algorithms implemented 
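Before searching for equilibria it is worth sanity-checking the payoffs by hand. With $a = 80$, $c = 20$, $N = 3$, and the grid $\{10, 15\}$, the profit formula $u_i = \left(a - c - \sum_{j \neq i} q_j - q_i\right) q_i$ can be evaluated directly (a standalone check, independent of the QuantEcon code above):

```python
import numpy as np

def cournot_payoff(q_i, q_others, a=80, c=20):
    # Profit of one firm: (a - c - rivals' total - own quantity) * own quantity
    return (a - c - np.sum(q_others) - q_i) * q_i

# If both rivals produce 15, producing 15 yourself earns (60 - 30 - 15) * 15 = 225,
# while deviating to 10 earns (60 - 30 - 10) * 10 = 200, so q = 15 is a best
# response on this grid -- consistent with it being labeled the Nash equilibrium
# quantity in the example above.
stay = cournot_payoff(15, [15, 15])
deviate = cournot_payoff(10, [15, 15])
```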
to compute Nash equilibria:\n\nBrute force\n Find all pure-action Nash equilibria of an $N$-player game (if any).\nSequential best response\n Find one pure-action Nash equilibrium of an $N$-player game (if any).\nSupport enumeration\n Find all mixed-action Nash equilibria of a two-player nondegenerate game.\nVertex enumeration\n Find all mixed-action Nash equilibria of a two-player nondegenerate game.\nLemke-Howson\n Find one mixed-action Nash equilibrium of a two-player game.\nMcLennan-Tourky\n Find one mixed-action Nash equilibrium of an $N$-player game.\n\nFor more variety of algorithms, one should look at Gambit.\nBrute force\nFor small games, we can find pure-action Nash equilibria by brute force,\nby calling the routine pure_nash_brute.\nIt visits all the action profiles and checks whether each is a Nash equilibrium\nby the is_nash method.", "def print_pure_nash_brute(g):\n \"\"\"\n Print all pure Nash equilibria of a normal form game found by brute force.\n \n Parameters\n ----------\n g : NormalFormGame\n \n \"\"\"\n NEs = gt.pure_nash_brute(g)\n num_NEs = len(NEs)\n if num_NEs == 0:\n msg = 'no pure Nash equilibrium'\n elif num_NEs == 1:\n msg = '1 pure Nash equilibrium:\\n{0}'.format(NEs)\n else:\n msg = '{0} pure Nash equilibria:\\n{1}'.format(num_NEs, NEs)\n\n print('The game has ' + msg)", "Matching Pennies:", "print_pure_nash_brute(g_MP)", "Coordination game:", "print_pure_nash_brute(g_Coo)", "Rock-Paper-Scissors:", "print_pure_nash_brute(g_RPS)", "Battle of the Sexes:", "print_pure_nash_brute(g_BoS)", "Prisoners' Dilemma:", "print_pure_nash_brute(g_PD)", "Cournot game:", "print_pure_nash_brute(g_Cou)", "Sequential best response\nIn some games, such as \"supermodular games\" and \"potential games\",\nthe process of sequential best responses converges to a Nash equilibrium.\nHere's a script to find one pure Nash equilibrium by sequential best response, if it converges.", "def sequential_best_response(g, init_actions=None, tie_breaking='smallest',\n 
verbose=True):\n \"\"\"\n Find a pure Nash equilibrium of a normal form game by sequential best\n response.\n\n Parameters\n ----------\n g : NormalFormGame\n\n init_actions : array_like(int), optional(default=[0, ..., 0])\n The initial action profile.\n\n tie_breaking : {'smallest', 'random'}, optional(default='smallest')\n\n verbose: bool, optional(default=True)\n If True, print the intermediate process.\n\n \"\"\"\n N = g.N # Number of players\n a = np.empty(N, dtype=int) # Action profile\n if init_actions is None:\n init_actions = [0] * N\n a[:] = init_actions\n\n if verbose:\n print('init_actions: {0}'.format(a))\n\n new_a = np.empty(N, dtype=int)\n max_iter = np.prod(g.nums_actions)\n\n for t in range(max_iter):\n new_a[:] = a\n for i, player in enumerate(g.players):\n if N == 2:\n a_except_i = new_a[1-i]\n else:\n a_except_i = new_a[np.arange(i+1, i+N) % N]\n new_a[i] = player.best_response(a_except_i,\n tie_breaking=tie_breaking)\n if verbose:\n print('player {0}: {1}'.format(i, new_a))\n if np.array_equal(new_a, a):\n return a\n else:\n a[:] = new_a\n\n print('No pure Nash equilibrium found')\n return None", "A Cournot game with linear demand is known to be a potential game,\nfor which sequential best response converges to a Nash equilibrium.\nLet us try a bigger instance:", "a, c = 80, 20\nN = 3\nq_grid = np.linspace(0, a-c, 13) # [0, 5, 10, ..., 60]\ng_Cou = cournot(a, c, N, q_grid)\n\na_star = sequential_best_response(g_Cou) # By default, start with (0, 0, 0)\nprint('Nash equilibrium indices: {0}'.format(a_star))\nprint('Nash equilibrium quantities: {0}'.format(q_grid[a_star]))\n\n# Start with the largest actions (12, 12, 12)\nsequential_best_response(g_Cou, init_actions=(12, 12, 12))", "The limit action profile is indeed a Nash equilibrium:", "g_Cou.is_nash(a_star)", "In fact, the game has other Nash equilibria\n(because of our choice of grid points and parameter values):", "print_pure_nash_brute(g_Cou)", "Make it bigger:", "N = 4\nq_grid = 
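The logic of `sequential_best_response` can be seen on the $2 \times 2$ Coordination Game from earlier without any library code. In the sketch below (hypothetical standalone helpers, not the module's API), each player in turn plays a pure best response until the action profile stops changing:

```python
import numpy as np

# Symmetric coordination game from above: rows are the player's own actions,
# columns are the opponent's actions.
A = np.array([[4, 0],
              [3, 2]])

def best_response(opponent_action):
    # Pure best response: the row maximizing the payoff against the
    # opponent's action (smallest index wins ties, as with np.argmax).
    return int(np.argmax(A[:, opponent_action]))

def sequential_br(a0, a1, max_iter=10):
    # Alternate pure best responses; stop when the profile repeats,
    # i.e. when it is a pure Nash equilibrium.
    for _ in range(max_iter):
        new0 = best_response(a1)
        new1 = best_response(new0)
        if (new0, new1) == (a0, a1):
            return a0, a1
        a0, a1 = new0, new1
    return None
```

Starting from (0, 0) the dynamic stays at the payoff-dominant equilibrium (0, 0), while starting from (1, 1) or (0, 1) it settles at (1, 1), illustrating that which pure equilibrium is found depends on the initial actions.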
np.linspace(0, a-c, 61) # [0, 1, 2, ..., 60]\ng_Cou = cournot(a, c, N, q_grid)\n\nsequential_best_response(g_Cou)\n\nsequential_best_response(g_Cou, init_actions=(0, 0, 0, 30))", "Sequential best response does not converge in all games:", "print(g_MP) # Matching Pennies\n\nsequential_best_response(g_MP)", "Support enumeration\nThe routine support_enumeration,\nwhich is for two-player games,\nvisits all equal-size support pairs and\nchecks whether each pair has a Nash equilibrium (in mixed actions)\nby the indifference condition.\n(This should thus be used only for small games.)\nFor nondegenerate games, this routine returns all the Nash equilibria.\nMatching Pennies:", "gt.support_enumeration(g_MP)", "The output list contains a pair of mixed actions as a tuple of two NumPy arrays,\nwhich constitues the unique Nash equilibria of this game.\nCoordination game:", "print(g_Coo)\n\ngt.support_enumeration(g_Coo)", "The output contains three tuples of mixed actions,\nwhere the first two correspond to the two pure action equilibria,\nwhile the last to the unique totally mixed action equilibrium.\nRock-Paper-Scissors:", "print(g_RPS)\n\ngt.support_enumeration(g_RPS)", "Consider the $6 \\times 6$ game by\nvon Stengel (1997), page 12:", "player0 = gt.Player(\n [[ 9504, -660, 19976, -20526, 1776, -8976],\n [ -111771, 31680, -130944, 168124, -8514, 52764],\n [ 397584, -113850, 451176, -586476, 29216, -178761],\n [ 171204, -45936, 208626, -263076, 14124, -84436],\n [ 1303104, -453420, 1227336, -1718376, 72336, -461736],\n [ 737154, -227040, 774576, -1039236, 48081, -300036]]\n)\n\nplayer1 = gt.Player(\n [[ 72336, -461736, 1227336, -1718376, 1303104, -453420],\n [ 48081, -300036, 774576, -1039236, 737154, -227040],\n [ 29216, -178761, 451176, -586476, 397584, -113850],\n [ 14124, -84436, 208626, -263076, 171204, -45936],\n [ 1776, -8976, 19976, -20526, 9504, -660],\n [ -8514, 52764, -130944, 168124, -111771, 31680]]\n)\n\ng_vonStengel = gt.NormalFormGame((player0, 
player1))\n\nlen(gt.support_enumeration(g_vonStengel))", "Note that the $n \times n$ game where the payoff matrices are given by the identity matrix\nhas $2^n - 1$ equilibria.\nIt had been conjectured that this is the maximum number of equilibria of\nany nondegenerate $n \times n$ game.\nThe game above, the number of whose equilibria is $75 > 2^6 - 1 = 63$,\nwas presented as a counter-example to this conjecture.\nNext, let us study the All-Pay Auction,\nwhere, unlike standard auctions,\nbidders pay their bids regardless of whether or not they win.\nSituations modeled as all-pay auctions include\njob promotion, R&D, and rent-seeking competitions, among others.\nHere we consider a version of the All-Pay Auction with complete information,\nsymmetric bidders, discrete bids, bid caps, and \"full dissipation\"\n(where the prize is awarded if and only if\nthere is exactly one bidder who makes the highest bid).\nSpecifically, each of $N$ players simultaneously bids an integer from $\{0, 1, \ldots, c\}$,\nwhere $c$ is the common (integer) bid cap.\nIf player $i$'s bid is higher than all the other players',\nthen he receives the prize, whose value is $r$, common to all players,\nand pays his bid $b_i$.\nOtherwise, he pays $b_i$ and receives nothing (zero value).\nIn particular, if more than one player makes the highest bid,\nthe prize gets fully dissipated and all the players receive nothing.\nThus, player $i$'s payoff function is\n$$\nu_i(b_i, b_{i+1}, \ldots, b_{i+N-1}) =\n\begin{cases}\nr - b_i & \text{if $b_i > b_j$ for all $j \neq i$}, \cr\n- b_i & \text{otherwise}.\n\end{cases}\n$$\nThe following is a script to construct a NormalFormGame instance\nfor the All-Pay Auction game,\nwhere we use Numba\nto speed up the loops:", "from numba import jit\n\n\ndef all_pay_auction(r, c, N, dissipation=True):\n \"\"\"\n Create a `NormalFormGame` instance for the symmetric N-player\n All-Pay Auction game with common reward `r` and common bid cap `c`.\n\n Parameters\n 
----------\n r : scalar(float)\n Common reward value.\n\n c : scalar(int)\n Common bid cap.\n\n N : scalar(int)\n Number of players.\n\n dissipation : bool, optional(default=True)\n If True, the prize fully dissipates in case of a tie. If False,\n the prize is equally split among the highest bidders (or given\n to one of the highest bidders with equal probabilities).\n\n Returns\n -------\n NormalFormGame\n NormalFormGame instance representing the All-Pay Auction game.\n\n \"\"\"\n player = gt.Player(np.empty((c+1,)*N))\n populate_APA_payoff_array(r, dissipation, player.payoff_array)\n return gt.NormalFormGame((player,)*N)\n\n\n@jit(nopython=True)\ndef populate_APA_payoff_array(r, dissipation, out):\n \"\"\"\n Populate the payoff array for a player in an N-player All-Pay\n Auction game.\n\n Parameters\n ----------\n r : scalar(float)\n Reward value.\n\n dissipation : bool, optional(default=True)\n If True, the prize fully dissipates in case of a tie. If False,\n the prize is equally split among the highest bidders (or given\n to one of the highest bidders with equal probabilities).\n\n out : ndarray(float, ndim=N)\n NumPy array to store the payoffs. Modified in place.\n\n Returns\n -------\n out : ndarray(float, ndim=N)\n View of `out`.\n\n \"\"\"\n nums_actions = out.shape\n N = out.ndim\n for bids in np.ndindex(nums_actions):\n out[bids] = -bids[0]\n num_ties = 1\n for j in range(1, N):\n if bids[j] > bids[0]:\n num_ties = 0\n break\n elif bids[j] == bids[0]:\n if dissipation:\n num_ties = 0\n break\n else:\n num_ties += 1\n if num_ties > 0:\n out[bids] += r / num_ties\n return out", "Consider the two-player case with the following parameter values:", "N = 2\nc = 5 # odd\nr = 8\n\ng_APA_odd = all_pay_auction(r, c, N)\nprint(g_APA_odd)", "Clearly, this game has no pure-action Nash equilibrium.\nIndeed:", "gt.pure_nash_brute(g_APA_odd)", "As pointed out by Dechenaux et al. 
(2006),\nthere are three Nash equilibria when the bid cap c is odd\n(so that there are an even number of actions for each player):", "gt.support_enumeration(g_APA_odd)", "In addition to a symmetric, totally mixed equilibrium (the third),\nthere are two asymmetric, \"alternating\" equilibria (the first and the second).\nIf c is even, there is a unique equilibrium, which is symmetric and totally mixed.\nFor example:", "c = 6 # even\ng_APA_even = all_pay_auction(r, c, N)\ngt.support_enumeration(g_APA_even)", "Vertex enumeration\nThe routine vertex_enumeration\ncomputes mixed-action Nash equilibria of a 2-player normal form game\nby enumeration and matching of vertices of the best response polytopes.\nFor a non-degenerate game input, these are all the Nash equilibria.\nInternally,\nscipy.spatial.ConvexHull\nis used to compute vertex enumeration of the best response polytopes,\nor equivalently, facet enumeration of their polar polytopes.\nThen, for each vertex of the polytope for player 0,\nvertices of the polytope for player 1 are searched to find a completely labeled pair.", "gt.vertex_enumeration(g_MP)\n\nlen(gt.vertex_enumeration(g_vonStengel))\n\ngt.vertex_enumeration(g_APA_odd)\n\ngt.vertex_enumeration(g_APA_even)", "support_enumeration and vertex_enumeration provide the same functionality\n(i.e., enumeration of Nash equilibria of a two-player game),\nbut the latter seems to run faster than the former.\nLemke-Howson\nThe routine lemke_howson\nimplements the Lemke-Howson algorithm (Lemke and Howson 1964),\nwhich returns one Nash equilibrium of a two-player normal form game.\nFor the details of the algorithm, see, e.g., von Stengel (2007).\nMatching Pennies:", "gt.lemke_howson(g_MP)", "Coordination game:", "gt.lemke_howson(g_Coo)", "The initial pivot can be specified by init_pivot,\nwhich should be an integer $k$ such that $0 \leq k \leq n_1 + n_2 - 1$ (defaults to 0),\nwhere $0, \ldots, n_1-1$ correspond to player 0's actions,\nwhile $n_1, \ldots, n_1+n_2-1$ to player 
1's actions.", "gt.lemke_howson(g_Coo, init_pivot=1)", "All-Pay Auction:", "gt.lemke_howson(g_APA_odd, init_pivot=0)\n\ngt.lemke_howson(g_APA_odd, init_pivot=1)", "Additional information is returned if the option full_output is set True:", "NE, res = gt.lemke_howson(g_APA_odd, full_output=True)\nres", "lemke_howson runs fast,\nfinishing in a reasonable amount of time even for games with several hundred actions.\n(In fact, this is the only routine among the Nash equilibrium computation routines in the game_theory submodule\nthat scales to large-size games.)\nFor example:", "N = 2\nc = 200 # 201 actions\nr = 500\ng_APA200 = all_pay_auction(r, c, N)\n\ngt.lemke_howson(g_APA200)", "McLennan-Tourky\nThe routine mclennan_tourky\ncomputes one approximate Nash equilibrium of an $N$-player normal form game\nby the fixed-point algorithm of McLennan and Tourky (2006) applied to the best response correspondence.\nConsider the symmetric All-Pay Auction with full dissipation as above, but this time with three players:", "N = 3\nr = 16\nc = 5\ng_APA3 = all_pay_auction(r, c, N)", "Run mclennan_tourky:", "NE = gt.mclennan_tourky(g_APA3)\nNE", "This output is an $\varepsilon$-Nash equilibrium of the game,\nwhich is a profile of mixed actions $(x^*_0, \ldots, x^*_{N-1})$ such that\nfor all $i$, $u_i(x^*_i, x^*_{-i}) \geq u_i(x_i, x^*_{-i}) - \varepsilon$ for all $x_i$,\nwhere the value of $\varepsilon$ is specified by the option epsilon (defaults to 1e-3).", "g_APA3.is_nash(NE, tol=1e-3)\n\nepsilon = 1e-4\nNE = gt.mclennan_tourky(g_APA3, epsilon=epsilon)\nNE\n\ng_APA3.is_nash(NE, tol=epsilon)", "Additional information is returned by setting the full_output option to True:", "NE, res = gt.mclennan_tourky(g_APA3, full_output=True)\nres", "For this game, mclennan_tourky returned a symmetric, totally mixed action profile\n(cf. 
Rapoport and Amaldoss 2004)\nwith the default initial condition (0, 0, 0) (profile of pure actions).\nLet's try a different initial condition:", "init = (\n [1/2, 0, 0, 1/2, 0, 0], [0, 1/2, 0, 0, 1/2, 0], [0, 0, 0, 0, 0, 1]\n) # profile of mixed actions\ngt.mclennan_tourky(g_APA3, init=init)", "We obtained an asymmetric \"alternating\" mixed action profile.\nWhile this is just an approximate Nash equilibrium,\nit suggests that there is an (exact) Nash equilibrium of the form\n$((p_0, 0, p_2, 0, p_4, 0), (p_0, 0, p_2, 0, p_4, 0), (0, q_1, 0, q_3, 0, q_5))$.\nIn fact, a simple calculation shows that there is one such that\n$$\np_0 = \left(\frac{r-4}{r}\right)^{\frac{1}{2}},\np_0 + p_2 = \left(\frac{r-2}{r}\right)^{\frac{1}{2}},\np_4 = 1 - (p_0 + p_2),\n$$\nand\n$$\nq_1 = \frac{2}{r p_0},\nq_1 + q_3 = \frac{4}{r (p_0+p_2)},\nq_5 = 1 - (q_1 + q_3).\n$$\nTo verify:", "p0 = ((r-4)/r)**(1/2)\np02 = ((r-2)/r)**(1/2)\np2 = p02 - p0\np4 = 1 - p02\nq1 = (2/r)/p0\nq13 = (4/r)/(p02)\nq3 = q13 - q1\nq5 = 1 - q13\na = ([p0, 0, p2, 0, p4, 0], [p0, 0, p2, 0, p4, 0], [0, q1, 0, q3, 0, q5])\na\n\ng_APA3.is_nash(a)", "References\n\n\nE. Dechenaux, D. Kovenock, and V. Lugovskyy (2006),\n \"Caps on bidding in all-pay auctions:\n Comments on the experiments of A. Rapoport and W. Amaldoss,\"\n Journal of Economic Behavior and Organization 61, 276-283.\n\n\nC. E. Lemke and J. T. Howson (1964),\n \"Equilibrium Points of Bimatrix Games,\"\n Journal of the Society for Industrial and Applied Mathematics 12, 413-423.\n\n\nA. McLennan and R. Tourky (2006),\n \"From Imitation Games to Kakutani.\"\n\n\nA. Rapoport and W. Amaldoss (2004),\n \"Mixed-strategy play in single-stage first-price all-pay auctions with symmetric players,\"\n Journal of Economic Behavior and Organization 54, 585-607.\n\n\nB. von Stengel (1997),\n \"New Lower Bounds for the Number of Equilibria in Bimatrix Games.\"\n\n\nB. 
von Stengel (2007),\n \"Equilibrium Computation for Two-Player Games in Strategic and Extensive Form,\"\n Chapter 3, N. Nisan, T. Roughgarden, E. Tardos, and V. Vazirani eds., Algorithmic Game Theory." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
aitatanit/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers
Chapter2_MorePyMC/Ch2_MorePyMC_PyMC2.ipynb
mit
[ "Chapter 2\n\nThis chapter introduces more PyMC syntax and design patterns, and ways to think about how to model a system from a Bayesian perspective. It also contains tips and data visualization techniques for assessing goodness-of-fit for your Bayesian model.\nA little more on PyMC\nParent and Child relationships\nTo assist with describing Bayesian relationships, and to be consistent with PyMC's documentation, we introduce parent and child variables. \n\n\nparent variables are variables that influence another variable. \n\n\nchild variables are variables that are affected by other variables, i.e. are the subject of parent variables. \n\n\nA variable can be both a parent and child. For example, consider the PyMC code below.", "import pymc as pm\n\n\nparameter = pm.Exponential(\"poisson_param\", 1)\ndata_generator = pm.Poisson(\"data_generator\", parameter)\ndata_plus_one = data_generator + 1", "parameter controls the parameter of data_generator, hence influences its values. The former is a parent of the latter. By symmetry, data_generator is a child of parameter.\nLikewise, data_generator is a parent to the variable data_plus_one (hence making data_generator both a parent and child variable). Although it does not look like one, data_plus_one should be treated as a PyMC variable as it is a function of another PyMC variable, hence is a child variable to data_generator.\nThis nomenclature is introduced to help us describe relationships in PyMC modeling. You can access a variable's children and parent variables using the children and parents attributes attached to variables.", "print(\"Children of `parameter`: \")\nprint(parameter.children)\nprint(\"\\nParents of `data_generator`: \")\nprint(data_generator.parents)\nprint(\"\\nChildren of `data_generator`: \")\nprint(data_generator.children)", "Of course a child can have more than one parent, and a parent can have many children.\nPyMC Variables\nAll PyMC variables also expose a value attribute. 
This attribute produces the current (possibly random) internal value of the variable. If the variable is a child variable, its value changes given the variable's parents' values. Using the same variables from before:", "print(\"parameter.value =\", parameter.value)\nprint(\"data_generator.value =\", data_generator.value)\nprint(\"data_plus_one.value =\", data_plus_one.value)", "PyMC is concerned with two types of programming variables: stochastic and deterministic.\n\n\nstochastic variables are variables that are not deterministic, i.e., even if you knew all the values of the variables' parents (if it even has any parents), it would still be random. Included in this category are instances of classes Poisson, DiscreteUniform, and Exponential.\n\n\ndeterministic variables are variables that are not random if the variables' parents were known. This might be confusing at first: a quick mental check is if I knew all of variable foo's parent variables, I could determine what foo's value is. \n\n\nWe will detail each below.\nInitializing Stochastic variables\nInitializing a stochastic variable requires a name argument, plus additional parameters that are class specific. For example:\nsome_variable = pm.DiscreteUniform(\"discrete_uni_var\", 0, 4)\nwhere 0, 4 are the DiscreteUniform-specific lower and upper bounds on the random variable. The PyMC docs contain the specific parameters for stochastic variables. (Or use object??, for example pm.DiscreteUniform?? if you are using IPython!)\nThe name attribute is used to retrieve the posterior distribution later in the analysis, so it is best to use a descriptive name. Typically, I use the Python variable's name as the name.\nFor multivariable problems, rather than creating a Python array of stochastic variables, using the size keyword in the call to a Stochastic variable creates a multivariate array of (independent) stochastic variables. 
The array behaves like a Numpy array when used like one, and references to its value attribute return Numpy arrays. \nThe size argument also solves the annoying case where you may have many variables $\\beta_i, \\; i = 1,...,N$ you wish to model. Instead of creating arbitrary names and variables for each one, like:\nbeta_1 = pm.Uniform(\"beta_1\", 0, 1)\nbeta_2 = pm.Uniform(\"beta_2\", 0, 1)\n...\n\nwe can instead wrap them into a single variable:\nbetas = pm.Uniform(\"betas\", 0, 1, size=N)\n\nCalling random()\nWe can also call on a stochastic variable's random() method, which (given the parent values) will generate a new, random value. Below we demonstrate this using the texting example from the previous chapter.", "lambda_1 = pm.Exponential(\"lambda_1\", 1) # prior on first behaviour\nlambda_2 = pm.Exponential(\"lambda_2\", 1) # prior on second behaviour\ntau = pm.DiscreteUniform(\"tau\", lower=0, upper=10) # prior on behaviour change\n\nprint(\"lambda_1.value = %.3f\" % lambda_1.value)\nprint(\"lambda_2.value = %.3f\" % lambda_2.value)\nprint(\"tau.value = %.3f\" % tau.value, \"\\n\")\n\nlambda_1.random(), lambda_2.random(), tau.random()\n\nprint(\"After calling random() on the variables...\")\nprint(\"lambda_1.value = %.3f\" % lambda_1.value)\nprint(\"lambda_2.value = %.3f\" % lambda_2.value)\nprint(\"tau.value = %.3f\" % tau.value)", "The call to random stores a new value into the variable's value attribute. In fact, this new value is stored in the computer's cache for faster recall and efficiency.\nWarning: Don't update stochastic variables' values in-place.\nStraight from the PyMC docs, we quote [4]:\n\nStochastic objects' values should not be updated in-place. This confuses PyMC's caching scheme... 
The only way a stochastic variable's value should be updated is using statements of the following form:\n\n A.value = new_value\n\n\nThe following are in-place updates and should never be used:\n\n A.value += 3\n A.value[2,1] = 5\n A.value.attribute = new_attribute_value\n\nDeterministic variables\nSince most variables you will be modeling are stochastic, we distinguish deterministic variables with a pymc.deterministic wrapper. (If you are unfamiliar with Python wrappers (also called decorators), that's no problem. Just prepend the pymc.deterministic decorator before the variable declaration and you're good to go. No need to know more. ) The declaration of a deterministic variable uses a Python function:\n@pm.deterministic\ndef some_deterministic_var(v1=v1,):\n #jelly goes here.\n\nFor all purposes, we can treat the object some_deterministic_var as a variable and not a Python function. \nPrepending with the wrapper is the easiest way, but not the only way, to create deterministic variables: elementary operations, like addition, exponentials etc. implicitly create deterministic variables. For example, the following returns a deterministic variable:", "type(lambda_1 + lambda_2)", "The use of the deterministic wrapper was seen in the previous chapter's text-message example. Recall the model for $\\lambda$ looked like: \n$$\n\\lambda = \n\\begin{cases}\n\\lambda_1 & \\text{if } t \\lt \\tau \\cr\n\\lambda_2 & \\text{if } t \\ge \\tau\n\\end{cases}\n$$\nAnd in PyMC code:", "import numpy as np\nn_data_points = 5 # in CH1 we had ~70 data points\n\n\n@pm.deterministic\ndef lambda_(tau=tau, lambda_1=lambda_1, lambda_2=lambda_2):\n out = np.zeros(n_data_points)\n out[:tau] = lambda_1 # lambda before tau is lambda1\n out[tau:] = lambda_2 # lambda after tau is lambda2\n return out", "Clearly, if $\\tau, \\lambda_1$ and $\\lambda_2$ are known, then $\\lambda$ is known completely, hence it is a deterministic variable. 
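To see this determinism concretely, here is a plain-NumPy sketch of the same mapping outside of PyMC (the function name lambda_vector and the parameter values below are purely illustrative, not part of the model): fixing the parents tau, lambda_1 and lambda_2 always yields the same lambda vector.

```python
import numpy as np

def lambda_vector(tau, lambda_1, lambda_2, n_data_points=5):
    # Deterministic mapping from the parents (tau, lambda_1, lambda_2)
    # to the rate vector: once the parents are fixed, no randomness remains.
    out = np.zeros(n_data_points)
    out[:tau] = lambda_1  # rate before the switchpoint
    out[tau:] = lambda_2  # rate at and after the switchpoint
    return out

# Repeated calls with the same parent values give identical output.
print(lambda_vector(2, 1.0, 3.0))  # -> [1. 1. 3. 3. 3.]
```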
\nInside the deterministic decorator, the Stochastic variables passed in behave like scalars or Numpy arrays (if multivariable), and not like Stochastic variables. For example, running the following:\n@pm.deterministic\ndef some_deterministic(stoch=some_stochastic_var):\n return stoch.value**2\n\nwill return an AttributeError detailing that stoch does not have a value attribute. It simply needs to be stoch**2. During the learning phase, it's the variable's value that is repeatedly passed in, not the actual variable. \nNotice in the creation of the deterministic function we added defaults to each variable used in the function. This is a necessary step, and all variables must have default values. \nIncluding observations in the Model\nAt this point, it may not look like it, but we have fully specified our priors. For example, we can ask and answer questions like \"What does my prior distribution of $\lambda_1$ look like?\"", "%matplotlib inline\nfrom IPython.core.pylabtools import figsize\nfrom matplotlib import pyplot as plt\nfigsize(12.5, 4)\n\n\nsamples = [lambda_1.random() for i in range(20000)]\nplt.hist(samples, bins=70, normed=True, histtype=\"stepfilled\")\nplt.title(\"Prior distribution for $\lambda_1$\")\nplt.xlim(0, 8);", "To frame this in the notation of the first chapter, though this is a slight abuse of notation, we have specified $P(A)$. Our next goal is to include data/evidence/observations $X$ into our model. \nPyMC stochastic variables have a keyword argument observed which accepts a boolean (False by default). The keyword observed has a very simple role: fix the variable's current value, i.e. make value immutable. We have to specify an initial value in the variable's creation, equal to the observations we wish to include, typically an array (and it should be a NumPy array for speed). 
For example:", "data = np.array([10, 5])\nfixed_variable = pm.Poisson(\"fxd\", 1, value=data, observed=True)\nprint(\"value: \", fixed_variable.value)\nprint(\"calling .random()\")\nfixed_variable.random()\nprint(\"value: \", fixed_variable.value)", "This is how we include data into our models: initializing a stochastic variable to have a fixed value. \nTo complete our text message example, we fix the PyMC variable observations to the observed dataset.", "# We're using some fake data here\ndata = np.array([10, 25, 15, 20, 35])\nobs = pm.Poisson(\"obs\", lambda_, value=data, observed=True)\nprint(obs.value)", "Finally...\nWe wrap all the created variables into a pm.Model class. With this Model class, we can analyze the variables as a single unit. This is an optional step, as the fitting algorithms can be sent an array of the variables rather than a Model class. I may or may not use this class in future examples ;)", "model = pm.Model([obs, lambda_, lambda_1, lambda_2, tau])", "Modeling approaches\nA good starting point in Bayesian modeling is to think about how your data might have been generated. Put yourself in an omniscient position, and try to imagine how you would recreate the dataset. \nIn the last chapter we investigated text message data. We begin by asking how our observations may have been generated:\n\n\nWe started by thinking \"what is the best random variable to describe this count data?\" A Poisson random variable is a good candidate because it can represent count data. So we model the number of sms's received as sampled from a Poisson distribution.\n\n\nNext, we think, \"Ok, assuming sms's are Poisson-distributed, what do I need for the Poisson distribution?\" Well, the Poisson distribution has a parameter $\\lambda$. \n\n\nDo we know $\\lambda$? No. In fact, we have a suspicion that there are two $\\lambda$ values, one for the earlier behaviour and one for the later behaviour. 
We don't know when the behaviour switches though, but call the switchpoint $\tau$.\n\n\nWhat is a good distribution for the two $\lambda$s? The exponential is good, as it assigns probabilities to positive real numbers. Well the exponential distribution has a parameter too, call it $\alpha$.\n\n\nDo we know what the parameter $\alpha$ might be? No. At this point, we could continue and assign a distribution to $\alpha$, but it's better to stop once we reach a set level of ignorance: whereas we have a prior belief about $\lambda$, (\"it probably changes over time\", \"it's likely between 10 and 30\", etc.), we don't really have any strong beliefs about $\alpha$. So it's best to stop here. \nWhat is a good value for $\alpha$ then? We think that the $\lambda$s are between 10-30, so if we set $\alpha$ really low (which corresponds to larger probability on high values) we are not reflecting our prior well. Similarly, a too-high $\alpha$ misses our prior belief as well. A good way to choose $\alpha$ to reflect our belief is to set its value so that the mean of $\lambda$, given $\alpha$, is equal to our observed mean. This was shown in the last chapter.\n\n\nWe have no expert opinion of when $\tau$ might have occurred. So we will suppose $\tau$ is from a discrete uniform distribution over the entire timespan.\n\n\nBelow we give a graphical visualization of this, where arrows denote parent-child relationships. (provided by the Daft Python library )\n<img src=\"http://i.imgur.com/7J30oCG.png\" width = 700/>\nPyMC, and other probabilistic programming languages, have been designed to tell these data-generation stories. More generally, B. Cronin writes [5]:\n\nProbabilistic programming will unlock narrative explanations of data, one of the holy grails of business analytics and the unsung hero of scientific persuasion. People think in terms of stories - thus the unreasonable power of the anecdote to drive decision-making, well-founded or not. 
But existing analytics largely fails to provide this kind of story; instead, numbers seemingly appear out of thin air, with little of the causal context that humans prefer when weighing their options.\n\nSame story; different ending.\nInterestingly, we can create new datasets by retelling the story.\nFor example, if we reverse the above steps, we can simulate a possible realization of the dataset.\n1. Specify when the user's behaviour switches by sampling from $\text{DiscreteUniform}(0, 80)$:", "tau = pm.rdiscrete_uniform(0, 80)\nprint(tau)", "2. Draw $\lambda_1$ and $\lambda_2$ from an $\text{Exp}(\alpha)$ distribution:", "alpha = 1. / 20.\nlambda_1, lambda_2 = pm.rexponential(alpha, 2)\nprint(lambda_1, lambda_2)", "3. For days before $\tau$, represent the user's received SMS count by sampling from $\text{Poi}(\lambda_1)$, and sample from $\text{Poi}(\lambda_2)$ for days after $\tau$. For example:", "data = np.r_[pm.rpoisson(lambda_1, tau), pm.rpoisson(lambda_2, 80 - tau)]", "4. Plot the artificial dataset:", "plt.bar(np.arange(80), data, color=\"#348ABD\")\nplt.bar(tau - 1, data[tau - 1], color=\"r\", label=\"user behaviour changed\")\nplt.xlabel(\"Time (days)\")\nplt.ylabel(\"count of text-msgs received\")\nplt.title(\"Artificial dataset\")\nplt.xlim(0, 80)\nplt.legend();", "It is okay that our fictional dataset does not look like our observed dataset: the probability that it would is incredibly small. PyMC's engine is designed to find good parameters, $\lambda_i, \tau$, that maximize this probability. \nThe ability to generate artificial datasets is an interesting side effect of our modeling, and we will see that this ability is a very important method of Bayesian inference. We produce a few more datasets below:", "def plot_artificial_sms_dataset():\n tau = pm.rdiscrete_uniform(0, 80)\n alpha = 1. 
/ 20.\n lambda_1, lambda_2 = pm.rexponential(alpha, 2)\n data = np.r_[pm.rpoisson(lambda_1, tau), pm.rpoisson(lambda_2, 80 - tau)]\n plt.bar(np.arange(80), data, color=\"#348ABD\")\n plt.bar(tau - 1, data[tau - 1], color=\"r\", label=\"user behaviour changed\")\n plt.xlim(0, 80)\n\nfigsize(12.5, 5)\nplt.suptitle(\"More examples of artificial datasets\", fontsize=14)\nfor i in range(1, 5):\n plt.subplot(4, 1, i)\n plot_artificial_sms_dataset()", "Later we will see how we use this to make predictions and test the appropriateness of our models.\nExample: Bayesian A/B testing\nA/B testing is a statistical design pattern for determining the difference of effectiveness between two different treatments. For example, a pharmaceutical company is interested in the effectiveness of drug A vs drug B. The company will test drug A on some fraction of their trials, and drug B on the other fraction (this fraction is often 1/2, but we will relax this assumption). After performing enough trials, the in-house statisticians sift through the data to determine which drug yielded better results. \nSimilarly, front-end web developers are interested in which design of their website yields more sales or some other metric of interest. They will route some fraction of visitors to site A, and the other fraction to site B, and record if the visit yielded a sale or not. The data is recorded (in real-time), and analyzed afterwards. \nOften, the post-experiment analysis is done using something called a hypothesis test like difference of means test or difference of proportions test. This involves often misunderstood quantities like a \"Z-score\" and even more confusing \"p-values\" (please don't ask). If you have taken a statistics course, you have probably been taught this technique (though not necessarily learned this technique). And if you were like me, you may have felt uncomfortable with their derivation -- good: the Bayesian approach to this problem is much more natural. 
\nA Simple Case\nAs this is a hacker book, we'll continue with the web-dev example. For the moment, we will focus on the analysis of site A only. Assume that there is some true $0 \lt p_A \lt 1$ probability that a user, upon being shown site A, eventually purchases from the site. This is the true effectiveness of site A. Currently, this quantity is unknown to us. \nSuppose site A was shown to $N$ people, and $n$ people purchased from the site. One might conclude hastily that $p_A = \frac{n}{N}$. Unfortunately, the observed frequency $\frac{n}{N}$ does not necessarily equal $p_A$ -- there is a difference between the observed frequency and the true frequency of an event. The true frequency can be interpreted as the probability of an event occurring. For example, the true frequency of rolling a 1 on a 6-sided die is $\frac{1}{6}$. Knowing the true frequency of events like:\n\nfraction of users who make purchases, \nfrequency of social attributes, \npercent of internet users with cats etc. \n\nis a common request we ask of Nature. Unfortunately, often Nature hides the true frequency from us and we must infer it from observed data.\nThe observed frequency is then the frequency we observe: say rolling the die 100 times you may observe 20 rolls of 1. The observed frequency, 0.2, differs from the true frequency, $\frac{1}{6}$. We can use Bayesian statistics to infer probable values of the true frequency using an appropriate prior and observed data.\nWith respect to our A/B example, we are interested in using what we know, $N$ (the total trials administered) and $n$ (the number of conversions), to estimate what $p_A$, the true frequency of buyers, might be. \nTo set up a Bayesian model, we need to assign prior distributions to our unknown quantities. A priori, what do we think $p_A$ might be? 
For this example, we have no strong conviction about $p_A$, so for now, let's assume $p_A$ is uniform over [0,1]:", "import pymc as pm\n\n# The parameters are the bounds of the Uniform.\np = pm.Uniform('p', lower=0, upper=1)", "Had we had stronger beliefs, we could have expressed them in the prior above.\nFor this example, consider $p_A = 0.05$, and $N = 1500$ users shown site A, and we will simulate whether the user made a purchase or not. To simulate this from $N$ trials, we will use a Bernoulli distribution: if $X\\ \\sim \\text{Ber}(p)$, then $X$ is 1 with probability $p$ and 0 with probability $1 - p$. Of course, in practice we do not know $p_A$, but we will use it here to simulate the data.", "# set constants\np_true = 0.05 # remember, this is unknown.\nN = 1500\n\n# sample N Bernoulli random variables from Ber(0.05).\n# each random variable has a 0.05 chance of being a 1.\n# this is the data-generation step\noccurrences = pm.rbernoulli(p_true, N)\n\nprint(occurrences) # Remember: Python treats True == 1, and False == 0\nprint(occurrences.sum())", "The observed frequency is:", "# Occurrences.mean is equal to n/N.\nprint(\"What is the observed frequency in Group A? %.4f\" % occurrences.mean())\nprint(\"Does this equal the true frequency? 
%s\" % (occurrences.mean() == p_true))", "We combine the observations into the PyMC observed variable, and run our inference algorithm:", "# include the observations, which are Bernoulli\nobs = pm.Bernoulli(\"obs\", p, value=occurrences, observed=True)\n\n# To be explained in chapter 3\nmcmc = pm.MCMC([p, obs])\nmcmc.sample(18000, 1000)", "We plot the posterior distribution of the unknown $p_A$ below:", "figsize(12.5, 4)\nplt.title(\"Posterior distribution of $p_A$, the true effectiveness of site A\")\nplt.vlines(p_true, 0, 90, linestyle=\"--\", label=\"true $p_A$ (unknown)\")\nplt.hist(mcmc.trace(\"p\")[:], bins=25, histtype=\"stepfilled\", normed=True)\nplt.legend();", "Our posterior distribution puts most weight near the true value of $p_A$, but also some weight in the tails. This is a measure of how uncertain we should be, given our observations. Try changing the number of observations, N, and observe how the posterior distribution changes.\nA and B Together\nA similar analysis can be done for site B's response data to determine the analogous $p_B$. But what we are really interested in is the difference between $p_A$ and $p_B$. Let's infer $p_A$, $p_B$, and $\text{delta} = p_A - p_B$, all at once. We can do this using PyMC's deterministic variables. 
(We'll assume for this exercise that $p_B = 0.04$, so $\\text{delta} = 0.01$, $N_B = 750$ (significantly less than $N_A$) and we will simulate site B's data like we did for site A's data )", "import pymc as pm\nfigsize(12, 4)\n\n# these two quantities are unknown to us.\ntrue_p_A = 0.05\ntrue_p_B = 0.04\n\n# notice the unequal sample sizes -- no problem in Bayesian analysis.\nN_A = 1500\nN_B = 750\n\n# generate some observations\nobservations_A = pm.rbernoulli(true_p_A, N_A)\nobservations_B = pm.rbernoulli(true_p_B, N_B)\nprint(\"Obs from Site A: \", observations_A[:30].astype(int), \"...\")\nprint(\"Obs from Site B: \", observations_B[:30].astype(int), \"...\")\n\nprint(observations_A.mean())\nprint(observations_B.mean())\n\n# Set up the pymc model. Again assume Uniform priors for p_A and p_B.\np_A = pm.Uniform(\"p_A\", 0, 1)\np_B = pm.Uniform(\"p_B\", 0, 1)\n\n\n# Define the deterministic delta function. This is our unknown of interest.\n@pm.deterministic\ndef delta(p_A=p_A, p_B=p_B):\n return p_A - p_B\n\n# Set of observations, in this case we have two observation datasets.\nobs_A = pm.Bernoulli(\"obs_A\", p_A, value=observations_A, observed=True)\nobs_B = pm.Bernoulli(\"obs_B\", p_B, value=observations_B, observed=True)\n\n# To be explained in chapter 3.\nmcmc = pm.MCMC([p_A, p_B, delta, obs_A, obs_B])\nmcmc.sample(20000, 1000)", "Below we plot the posterior distributions for the three unknowns:", "p_A_samples = mcmc.trace(\"p_A\")[:]\np_B_samples = mcmc.trace(\"p_B\")[:]\ndelta_samples = mcmc.trace(\"delta\")[:]\n\nfigsize(12.5, 10)\n\n# histogram of posteriors\n\nax = plt.subplot(311)\n\nplt.xlim(0, .1)\nplt.hist(p_A_samples, histtype='stepfilled', bins=25, alpha=0.85,\n label=\"posterior of $p_A$\", color=\"#A60628\", normed=True)\nplt.vlines(true_p_A, 0, 80, linestyle=\"--\", label=\"true $p_A$ (unknown)\")\nplt.legend(loc=\"upper right\")\nplt.title(\"Posterior distributions of $p_A$, $p_B$, and delta unknowns\")\n\nax = plt.subplot(312)\n\nplt.xlim(0, 
.1)\nplt.hist(p_B_samples, histtype='stepfilled', bins=25, alpha=0.85,\n label=\"posterior of $p_B$\", color=\"#467821\", normed=True)\nplt.vlines(true_p_B, 0, 80, linestyle=\"--\", label=\"true $p_B$ (unknown)\")\nplt.legend(loc=\"upper right\")\n\nax = plt.subplot(313)\nplt.hist(delta_samples, histtype='stepfilled', bins=30, alpha=0.85,\n label=\"posterior of delta\", color=\"#7A68A6\", normed=True)\nplt.vlines(true_p_A - true_p_B, 0, 60, linestyle=\"--\",\n label=\"true delta (unknown)\")\nplt.vlines(0, 0, 60, color=\"black\", alpha=0.2)\nplt.legend(loc=\"upper right\");", "Notice that as a result of N_B < N_A, i.e. we have less data from site B, our posterior distribution of $p_B$ is fatter, implying we are less certain about the true value of $p_B$ than we are of $p_A$. \nWith respect to the posterior distribution of $\text{delta}$, we can see that the majority of the distribution is above $\text{delta}=0$, implying that site A's response is likely better than site B's response. The probability this inference is incorrect is easily computable:", "# Count the number of samples less than 0, i.e. the area under the curve\n# before 0, which represents the probability that site A is worse than site B.\nprint(\"Probability site A is WORSE than site B: %.3f\" % \\\n (delta_samples < 0).mean())\n\nprint(\"Probability site A is BETTER than site B: %.3f\" % \\\n (delta_samples > 0).mean())", "If this probability is too high for comfortable decision-making, we can perform more trials on site B (as site B has fewer samples to begin with, each additional data point for site B contributes more inferential \"power\" than each additional data point for site A). \nTry playing with the parameters true_p_A, true_p_B, N_A, and N_B, to see what the posterior of $\text{delta}$ looks like. 
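Because the $\text{Uniform}(0,1)$ prior is a $\text{Beta}(1,1)$ distribution and the Bernoulli likelihood is conjugate to it, this particular A/B model also admits a closed-form posterior that can be used to sanity-check the MCMC output. The sketch below assumes illustrative success counts standing in for the simulated data (75/1500 for site A, 30/750 for site B); it is a cross-check, not part of the original analysis:

```python
import numpy as np
from scipy import stats

# Uniform(0,1) prior = Beta(1,1); with a Bernoulli likelihood the posterior
# of each rate is Beta(1 + successes, 1 + failures) in closed form.
n_A, obs_A = 1500, 75   # assumed counts: ~5% conversion on site A
n_B, obs_B = 750, 30    # assumed counts: ~4% conversion on site B

post_A = stats.beta(1 + obs_A, 1 + n_A - obs_A)
post_B = stats.beta(1 + obs_B, 1 + n_B - obs_B)

# Monte Carlo estimate of P(p_A > p_B) from independent posterior draws.
rng = np.random.RandomState(0)
samples_A = post_A.rvs(size=100000, random_state=rng)
samples_B = post_B.rvs(size=100000, random_state=rng)
prob_A_better = (samples_A > samples_B).mean()
print("P(site A better than site B): %.3f" % prob_A_better)
```

With flat priors the conjugate answer and the MCMC answer should agree closely; the closed form is a handy sanity check before reaching for a sampler.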
Notice in all this, the difference in sample sizes between site A and site B was never mentioned: it naturally fits into Bayesian analysis.\nI hope the readers feel this style of A/B testing is more natural than hypothesis testing, which has probably confused more than helped practitioners. Later in this book, we will see two extensions of this model: the first to help dynamically adjust for bad sites, and the second will improve the speed of this computation by reducing the analysis to a single equation. \nAn algorithm for human deceit\nSocial data has an additional layer of interest as people are not always honest with responses, which adds a further complication into inference. For example, simply asking individuals \"Have you ever cheated on a test?\" will surely contain some rate of dishonesty. What you can say for certain is that the true rate is less than your observed rate (assuming individuals lie only about not cheating; I cannot imagine one who would admit \"Yes\" to cheating when in fact they hadn't cheated). \nTo present an elegant solution to circumventing this dishonesty problem, and to demonstrate Bayesian modeling, we first need to introduce the binomial distribution.\nThe Binomial Distribution\nThe binomial distribution is one of the most popular distributions, mostly because of its simplicity and usefulness. Unlike the other distributions we have encountered thus far in the book, the binomial distribution has 2 parameters: $N$, a positive integer representing $N$ trials or number of instances of potential events, and $p$, the probability of an event occurring in a single trial. Like the Poisson distribution, it is a discrete distribution, but unlike the Poisson distribution, it only weighs integers from $0$ to $N$. 
Its probability mass function is:\n$$P( X = k ) = {{N}\choose{k}} p^k(1-p)^{N-k}$$\nIf $X$ is a binomial random variable with parameters $p$ and $N$, denoted $X \sim \text{Bin}(N,p)$, then $X$ is the number of events that occurred in the $N$ trials (obviously $0 \le X \le N$), and $p$ is the probability of a single event. The larger $p$ is (while still remaining between 0 and 1), the more events are likely to occur. The expected value of a binomial is equal to $Np$. Below we plot the probability mass function for varying parameters.", "figsize(12.5, 4)\n\nimport scipy.stats as stats\nbinomial = stats.binom\n\nparameters = [(10, .4), (10, .9)]\ncolors = [\"#348ABD\", \"#A60628\"]\n\nfor i in range(2):\n N, p = parameters[i]\n _x = np.arange(N + 1)\n plt.bar(_x - 0.5, binomial.pmf(_x, N, p), color=colors[i],\n edgecolor=colors[i],\n alpha=0.6,\n label=\"$N$: %d, $p$: %.1f\" % (N, p),\n linewidth=3)\n\nplt.legend(loc=\"upper left\")\nplt.xlim(0, 10.5)\nplt.xlabel(\"$k$\")\nplt.ylabel(\"$P(X = k)$\")\nplt.title(\"Probability mass distributions of binomial random variables\");", "The special case when $N = 1$ corresponds to the Bernoulli distribution. There is another connection between Bernoulli and Binomial random variables. If we have $X_1, X_2, ... , X_N$ Bernoulli random variables with the same $p$, then $Z = X_1 + X_2 + ... + X_N \sim \text{Binomial}(N, p )$.\nThe expected value of a Bernoulli random variable is $p$. This can be seen by noting the more general Binomial random variable has expected value $Np$ and setting $N=1$.\nExample: Cheating among students\nWe will use the binomial distribution to determine the frequency of students cheating during an exam. If we let $N$ be the total number of students who took the exam, and assuming each student is interviewed post-exam (answering without consequence), we will receive integer $X$ \"Yes I did cheat\" answers. 
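As a quick numerical check of the binomial mass function above, we can evaluate the formula directly with binomial coefficients and compare it against scipy's built-in pmf (a sanity check, not part of the model):

```python
import numpy as np
from scipy import stats
from scipy.special import comb

# Evaluate P(X = k) = C(N, k) p^k (1-p)^(N-k) directly,
# then compare with scipy's binomial pmf.
N, p = 10, 0.4
k = np.arange(N + 1)
pmf_by_hand = comb(N, k) * p**k * (1 - p)**(N - k)

assert np.allclose(pmf_by_hand, stats.binom.pmf(k, N, p))
assert np.isclose(pmf_by_hand.sum(), 1.0)          # probabilities sum to one
assert np.isclose((k * pmf_by_hand).sum(), N * p)  # expected value is Np
print(pmf_by_hand.round(3))
```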
We then find the posterior distribution of $p$, given $N$, some specified prior on $p$, and observed data $X$. \nThis is a completely absurd model. No student, even with a free-pass against punishment, would admit to cheating. What we need is a better algorithm to ask students if they had cheated. Ideally the algorithm should encourage individuals to be honest while preserving privacy. The following proposed algorithm is a solution I greatly admire for its ingenuity and effectiveness:\n\nIn the interview process for each student, the student flips a coin, hidden from the interviewer. The student agrees to answer honestly if the coin comes up heads. Otherwise, if the coin comes up tails, the student (secretly) flips the coin again, and answers \"Yes, I did cheat\" if the coin flip lands heads, and \"No, I did not cheat\", if the coin flip lands tails. This way, the interviewer does not know if a \"Yes\" was the result of a guilty plea, or a Heads on a second coin toss. Thus privacy is preserved and the researchers receive honest answers. \n\nI call this the Privacy Algorithm. One could of course argue that the interviewers are still receiving false data since some Yes's are not confessions but instead randomness, but an alternative perspective is that the researchers are discarding approximately half of their original dataset since half of the responses will be noise. But they have gained a systematic data generation process that can be modeled. Furthermore, they do not have to incorporate (perhaps somewhat naively) the possibility of deceitful answers. We can use PyMC to dig through this noisy model, and find a posterior distribution for the true frequency of liars. \nSuppose 100 students are being surveyed for cheating, and we wish to find $p$, the proportion of cheaters. There are a few ways we can model this in PyMC. I'll demonstrate the most explicit way, and later show a simplified version. Both versions arrive at the same inference. 
In our data-generation model, we sample $p$, the true proportion of cheaters, from a prior. Since we are quite ignorant about $p$, we will assign it a $\text{Uniform}(0,1)$ prior.", "import pymc as pm\n\nN = 100\np = pm.Uniform(\"freq_cheating\", 0, 1)", "Again, thinking of our data-generation model, we assign Bernoulli random variables to the 100 students: 1 implies they cheated and 0 implies they did not.", "true_answers = pm.Bernoulli(\"truths\", p, size=N)", "If we carry out the algorithm, the next step that occurs is the first coin-flip each student makes. This can be modeled again by sampling 100 Bernoulli random variables with $p=1/2$: denote a 1 as a Heads and 0 a Tails.", "first_coin_flips = pm.Bernoulli(\"first_flips\", 0.5, size=N)\nprint(first_coin_flips.value)", "Although not everyone flips a second time, we can still model the possible realization of second coin-flips:", "second_coin_flips = pm.Bernoulli(\"second_flips\", 0.5, size=N)", "Using these variables, we can return a possible realization of the observed proportion of \"Yes\" responses. We do this using a PyMC deterministic variable:", "@pm.deterministic\ndef observed_proportion(t_a=true_answers,\n fc=first_coin_flips,\n sc=second_coin_flips):\n\n observed = fc * t_a + (1 - fc) * sc\n return observed.sum() / float(N)", "The line fc*t_a + (1-fc)*sc contains the heart of the Privacy Algorithm. Elements in this array are 1 if and only if i) the first toss is heads and the student cheated or ii) the first toss is tails, and the second is heads, and are 0 otherwise. Finally, the last line sums this vector and divides by float(N), producing a proportion.", "observed_proportion.value", "Next we need a dataset. After performing our coin-flipped interviews the researchers received 35 \"Yes\" responses. 
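Before looking at the data, it is worth convincing ourselves that the coin-flip mechanism behaves as advertised. Below is a pymc-free Monte Carlo sketch of the Privacy Algorithm; the cheating rate `p_true` is an assumed value for illustration only:

```python
import numpy as np

rng = np.random.RandomState(42)
N, p_true = 100000, 0.2   # p_true: assumed "true" cheating rate, for illustration

cheated = rng.rand(N) < p_true     # hidden truth for each respondent
first_flip = rng.rand(N) < 0.5     # heads -> answer honestly
second_flip = rng.rand(N) < 0.5    # heads -> say "Yes" regardless of the truth

# Same expression as the deterministic variable above: "Yes" iff
# (heads and cheated) or (tails and second flip is heads).
says_yes = first_flip * cheated + (1 - first_flip) * second_flip
observed_proportion = says_yes.mean()
print(observed_proportion)
```

With many simulated respondents the observed proportion settles near $p/2 + 1/4$ (0.35 here), matching the closed form derived later in the chapter.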
To put this into a relative perspective, if there truly were no cheaters, we should expect to see on average 1/4 of all responses being a \"Yes\" (half chance of having first coin land Tails, and another half chance of having second coin land Heads), so about 25 responses in a cheat-free world. On the other hand, if all students cheated, we should expect to see approximately 3/4 of all responses be \"Yes\". \nThe researchers observe a Binomial random variable, with N = 100 and p = observed_proportion with value = 35:", "X = 35\n\nobservations = pm.Binomial(\"obs\", N, observed_proportion, observed=True,\n value=X)", "Below we add all the variables of interest to a Model container and run our black-box algorithm over the model.", "model = pm.Model([p, true_answers, first_coin_flips,\n second_coin_flips, observed_proportion, observations])\n\n# To be explained in Chapter 3!\nmcmc = pm.MCMC(model)\nmcmc.sample(40000, 15000)\n\nfigsize(12.5, 3)\np_trace = mcmc.trace(\"freq_cheating\")[:]\nplt.hist(p_trace, histtype=\"stepfilled\", normed=True, alpha=0.85, bins=30,\n label=\"posterior distribution\", color=\"#348ABD\")\nplt.vlines([.05, .35], [0, 0], [5, 5], alpha=0.3)\nplt.xlim(0, 1)\nplt.legend();", "With regards to the above plot, we are still pretty uncertain about what the true frequency of cheaters might be, but we have narrowed it down to a range between 0.05 to 0.35 (marked by the solid lines). This is pretty good, as a priori we had no idea how many students might have cheated (hence the uniform distribution for our prior). On the other hand, it is also pretty bad since there is a .3 length window the true value most likely lives in. Have we even gained anything, or are we still too uncertain about the true frequency? \nI would argue, yes, we have discovered something. It is implausible, according to our posterior, that there are no cheaters, i.e. the posterior assigns low probability to $p=0$. 
We started with a uniform prior, treating all values of $p$ as equally plausible, but the data ruled out $p=0$ as a possibility, so we can be confident that there were cheaters. \nThis kind of algorithm can be used to gather private information from users and be reasonably confident that the data, though noisy, is truthful. \nAlternative PyMC Model\nGiven a value for $p$ (which from our god-like position we know), we can find the probability the student will answer yes: \n\begin{align}\nP(\text{\"Yes\"}) &= P( \text{Heads on first coin} )P( \text{cheater} ) + P( \text{Tails on first coin} )P( \text{Heads on second coin} ) \\\n& = \frac{1}{2}p + \frac{1}{2}\frac{1}{2}\\\n& = \frac{p}{2} + \frac{1}{4}\n\end{align}\nThus, knowing $p$ we know the probability a student will respond \"Yes\". In PyMC, we can create a deterministic function to evaluate the probability of responding \"Yes\", given $p$:", "p = pm.Uniform(\"freq_cheating\", 0, 1)\n\n\n@pm.deterministic\ndef p_skewed(p=p):\n return 0.5 * p + 0.25", "I could have typed p_skewed = 0.5*p + 0.25 instead for a one-liner, as the elementary operations of addition and scalar multiplication will implicitly create a deterministic variable, but I wanted to make the deterministic boilerplate explicit for clarity's sake. \nIf we know the probability of respondents saying \"Yes\", which is p_skewed, and we have $N=100$ students, the number of \"Yes\" responses is a binomial random variable with parameters N and p_skewed.\nThis is where we include our observed 35 \"Yes\" responses. 
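The closed form above also lets us cross-check the entire inference with a simple grid approximation, no MCMC required. This is a sketch under the same assumptions (uniform prior on $p$, 35 "Yes" responses out of 100):

```python
import numpy as np
from scipy import stats

# Grid approximation: uniform prior on p, likelihood Binomial(N, p/2 + 1/4).
N, X = 100, 35
p_grid = np.linspace(0, 1, 2001)
p_skewed = 0.5 * p_grid + 0.25

likelihood = stats.binom.pmf(X, N, p_skewed)
weights = likelihood / likelihood.sum()  # flat prior: normalized likelihood

posterior_mean = (p_grid * weights).sum()
print("grid-approximate posterior mean of p: %.3f" % posterior_mean)
```

The grid posterior concentrates around $p \approx 0.2$ (since the observed proportion 0.35 maps back through $p = 2(0.35 - 0.25)$), and its histogram should closely match the MCMC trace of `freq_cheating`.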
In the declaration of the pm.Binomial, we include value = 35 and observed = True.", "yes_responses = pm.Binomial(\"number_cheaters\", 100, p_skewed,\n value=35, observed=True)", "Below we add all the variables of interest to a Model container and run our black-box algorithm over the model.", "model = pm.Model([yes_responses, p_skewed, p])\n\n# To Be Explained in Chapter 3!\nmcmc = pm.MCMC(model)\nmcmc.sample(25000, 2500)\n\nfigsize(12.5, 3)\np_trace = mcmc.trace(\"freq_cheating\")[:]\nplt.hist(p_trace, histtype=\"stepfilled\", normed=True, alpha=0.85, bins=30,\n label=\"posterior distribution\", color=\"#348ABD\")\nplt.vlines([.05, .35], [0, 0], [5, 5], alpha=0.2)\nplt.xlim(0, 1)\nplt.legend();", "More PyMC Tricks\nProtip: Lighter deterministic variables with Lambda class\nSometimes writing a deterministic function using the @pm.deterministic decorator can seem like a chore, especially for a small function. I have already mentioned that elementary math operations can produce deterministic variables implicitly, but what about operations like indexing or slicing? Built-in Lambda functions can handle this with the elegance and simplicity required. For example, \nbeta = pm.Normal(\"coefficients\", 0, size=(N, 1))\nx = np.random.randn((N, 1))\nlinear_combination = pm.Lambda(lambda x=x, beta=beta: np.dot(x.T, beta))\n\nProtip: Arrays of PyMC variables\nThere is no reason why we cannot store multiple heterogeneous PyMC variables in a Numpy array. Just remember to set the dtype of the array to object upon initialization. For example:", "N = 10\nx = np.empty(N, dtype=object)\nfor i in range(0, N):\n x[i] = pm.Exponential('x_%i' % i, (i + 1) ** 2)", "The remainder of this chapter examines some practical examples of PyMC and PyMC modeling:\nExample: Challenger Space Shuttle Disaster <span id=\"challenger\"/>\nOn January 28, 1986, the twenty-fifth flight of the U.S. 
space shuttle program ended in disaster when one of the rocket boosters of the Shuttle Challenger exploded shortly after lift-off, killing all seven crew members. The presidential commission on the accident concluded that it was caused by the failure of an O-ring in a field joint on the rocket booster, and that this failure was due to a faulty design that made the O-ring unacceptably sensitive to a number of factors including outside temperature. Of the previous 24 flights, data were available on failures of O-rings on 23, (one was lost at sea), and these data were discussed on the evening preceding the Challenger launch, but unfortunately only the data corresponding to the 7 flights on which there was a damage incident were considered important and these were thought to show no obvious trend. The data are shown below (see [1]):", "figsize(12.5, 3.5)\nnp.set_printoptions(precision=3, suppress=True)\nchallenger_data = np.genfromtxt(\"data/challenger_data.csv\", skip_header=1,\n usecols=[1, 2], missing_values=\"NA\",\n delimiter=\",\")\n# drop the NA values\nchallenger_data = challenger_data[~np.isnan(challenger_data[:, 1])]\n\n# plot it, as a function of temperature (the first column)\nprint(\"Temp (F), O-Ring failure?\")\nprint(challenger_data)\n\nplt.scatter(challenger_data[:, 0], challenger_data[:, 1], s=75, color=\"k\",\n alpha=0.5)\nplt.yticks([0, 1])\nplt.ylabel(\"Damage Incident?\")\nplt.xlabel(\"Outside temperature (Fahrenheit)\")\nplt.title(\"Defects of the Space Shuttle O-Rings vs temperature\");", "It looks clear that the probability of damage incidents occurring increases as the outside temperature decreases. We are interested in modeling the probability here because it does not look like there is a strict cutoff point between temperature and a damage incident occurring. The best we can do is ask \"At temperature $t$, what is the probability of a damage incident?\". 
The goal of this example is to answer that question.\nWe need a function of temperature, call it $p(t)$, that is bounded between 0 and 1 (so as to model a probability) and changes from 1 to 0 as we increase temperature. There are actually many such functions, but the most popular choice is the logistic function.\n$$p(t) = \frac{1}{ 1 + e^{ \;\beta t } } $$\nIn this model, $\beta$ is the variable we are uncertain about. Below is the function plotted for $\beta = 1, 3, -5$.", "figsize(12, 3)\n\n\ndef logistic(x, beta):\n return 1.0 / (1.0 + np.exp(beta * x))\n\nx = np.linspace(-4, 4, 100)\nplt.plot(x, logistic(x, 1), label=r\"$\beta = 1$\")\nplt.plot(x, logistic(x, 3), label=r\"$\beta = 3$\")\nplt.plot(x, logistic(x, -5), label=r\"$\beta = -5$\")\nplt.title(\"Logistic function plotted for several values of $\\beta$ parameter\", fontsize=14)\nplt.legend();", "But something is missing. In the plot of the logistic function, the probability changes only near zero, but in our data above the probability changes around 65 to 70. 
We need to add a bias term to our logistic function:\n$$p(t) = \frac{1}{ 1 + e^{ \;\beta t + \alpha } } $$\nSome plots are below, with differing $\alpha$.", "def logistic(x, beta, alpha=0):\n return 1.0 / (1.0 + np.exp(np.dot(beta, x) + alpha))\n\nx = np.linspace(-4, 4, 100)\n\nplt.plot(x, logistic(x, 1), label=r\"$\beta = 1$\", ls=\"--\", lw=1)\nplt.plot(x, logistic(x, 3), label=r\"$\beta = 3$\", ls=\"--\", lw=1)\nplt.plot(x, logistic(x, -5), label=r\"$\beta = -5$\", ls=\"--\", lw=1)\n\nplt.plot(x, logistic(x, 1, 1), label=r\"$\beta = 1, \alpha = 1$\",\n color=\"#348ABD\")\nplt.plot(x, logistic(x, 3, -2), label=r\"$\beta = 3, \alpha = -2$\",\n color=\"#A60628\")\nplt.plot(x, logistic(x, -5, 7), label=r\"$\beta = -5, \alpha = 7$\",\n color=\"#7A68A6\")\n\nplt.title(\"Logistic function with bias, plotted for several values of $\\alpha$ bias parameter\", fontsize=14)\nplt.legend(loc=\"lower left\");", "Adding a constant term $\alpha$ amounts to shifting the curve left or right (hence why it is called a bias).\nLet's start modeling this in PyMC. The $\beta, \alpha$ parameters have no reason to be positive, bounded or relatively large, so they are best modeled by a Normal random variable, introduced next.\nNormal distributions\nA Normal random variable, denoted $X \sim N(\mu, 1/\tau)$, has a distribution with two parameters: the mean, $\mu$, and the precision, $\tau$. Those already familiar with the Normal distribution have probably seen $\sigma^2$ instead of $\tau^{-1}$. They are in fact reciprocals of each other. The change was motivated by simpler mathematical analysis and is an artifact of older Bayesian methods. Just remember: the smaller $\tau$, the larger the spread of the distribution (i.e. we are more uncertain); the larger $\tau$, the tighter the distribution (i.e. we are more certain). Regardless, $\tau$ is always positive. 
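A quick simulation makes the precision parameterization concrete: draws from $N(\mu, 1/\tau)$ should show variance near $1/\tau$. This is a plain numpy sketch with arbitrarily chosen numbers:

```python
import numpy as np

# tau = 1 / sigma^2, so the standard deviation is 1 / sqrt(tau).
rng = np.random.RandomState(0)
mu, tau = 3.0, 2.8
samples = rng.normal(mu, 1.0 / np.sqrt(tau), size=200000)

print(samples.mean())   # close to mu
print(samples.var())    # close to 1/tau
```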
\nThe probability density function of a $N( \mu, 1/\tau)$ random variable is:\n$$ f(x | \mu, \tau) = \sqrt{\frac{\tau}{2\pi}} \exp\left( -\frac{\tau}{2} (x-\mu)^2 \right) $$\nWe plot some different density functions below.", "import scipy.stats as stats\n\nnor = stats.norm\nx = np.linspace(-8, 7, 150)\nmu = (-2, 0, 3)\ntau = (.7, 1, 2.8)\ncolors = [\"#348ABD\", \"#A60628\", \"#7A68A6\"]\nparameters = zip(mu, tau, colors)\n\nfor _mu, _tau, _color in parameters:\n plt.plot(x, nor.pdf(x, _mu, scale=1. / np.sqrt(_tau)),\n label=\"$\mu = %d,\;\\tau = %.1f$\" % (_mu, _tau), color=_color)\n plt.fill_between(x, nor.pdf(x, _mu, scale=1. / np.sqrt(_tau)), color=_color,\n alpha=.33)\n\nplt.legend(loc=\"upper right\")\nplt.xlabel(\"$x$\")\nplt.ylabel(\"density function at $x$\")\nplt.title(\"Probability distribution of three different Normal random \\nvariables\");", "A Normal random variable can take on any real number, but the variable is very likely to be relatively close to $\mu$. In fact, the expected value of a Normal is equal to its $\mu$ parameter:\n$$ E[ X | \mu, \tau] = \mu$$\nand its variance is equal to the inverse of $\tau$:\n$$Var( X | \mu, \tau ) = \frac{1}{\tau}$$\nBelow we continue our modeling of the Challenger spacecraft:", "import pymc as pm\n\ntemperature = challenger_data[:, 0]\nD = challenger_data[:, 1] # defect or not?\n\n# notice the `value` here. We explain why below.\nbeta = pm.Normal(\"beta\", 0, 0.001, value=0)\nalpha = pm.Normal(\"alpha\", 0, 0.001, value=0)\n\n\n@pm.deterministic\ndef p(t=temperature, alpha=alpha, beta=beta):\n return 1.0 / (1. + np.exp(beta * t + alpha))", "We have our probabilities, but how do we connect them to our observed data? A Bernoulli random variable with parameter $p$, denoted $\text{Ber}(p)$, is a random variable that takes value 1 with probability $p$, and 0 otherwise. 
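This Bernoulli likelihood is what ties $p(t)$ to the observed defects. Written out in plain numpy, the log-likelihood that the sampler repeatedly evaluates looks like the sketch below; the temperatures and defect indicators here are assumed toy values, not the real Challenger table:

```python
import numpy as np

def logistic(t, beta, alpha):
    # Same convention as the model: p decreases in t when beta > 0.
    return 1.0 / (1.0 + np.exp(beta * t + alpha))

def bernoulli_loglik(beta, alpha, t, d):
    """sum_i  d_i log p(t_i) + (1 - d_i) log(1 - p(t_i))."""
    p = logistic(t, beta, alpha)
    return np.sum(d * np.log(p) + (1 - d) * np.log(1 - p))

# Assumed toy data: failures cluster at low temperatures.
t = np.array([53., 57., 63., 66., 70., 70., 72., 75., 78., 81.])
d = np.array([1., 1., 1., 0., 1., 0., 0., 0., 0., 0.])

print(bernoulli_loglik(0.3, -20.0, t, d))  # a reasonable fit
print(bernoulli_loglik(0.0, 0.0, t, d))    # coin-flip model: 10 * log(0.5)
```

Parameter values with the right sign and location earn a much higher log-likelihood than the coin-flip model $\beta = \alpha = 0$, which is exactly the structure MCMC exploits when exploring the posterior.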
Thus, our model can look like:\n$$ \\text{Defect Incident, $D_i$} \\sim \\text{Ber}( \\;p(t_i)\\; ), \\;\\; i=1..N$$\nwhere $p(t)$ is our logistic function and $t_i$ are the temperatures we have observations about. Notice in the above code we had to set the values of beta and alpha to 0. The reason for this is that if beta and alpha are very large, they make p equal to 1 or 0. Unfortunately, pm.Bernoulli does not like probabilities of exactly 0 or 1, though they are mathematically well-defined probabilities. So by setting the coefficient values to 0, we set the variable p to be a reasonable starting value. This has no effect on our results, nor does it mean we are including any additional information in our prior. It is simply a computational caveat in PyMC.", "p.value\n\n# connect the probabilities in `p` with our observations through a\n# Bernoulli random variable.\nobserved = pm.Bernoulli(\"bernoulli_obs\", p, value=D, observed=True)\n\nmodel = pm.Model([observed, beta, alpha])\n\n# Mysterious code to be explained in Chapter 3\nmap_ = pm.MAP(model)\nmap_.fit()\nmcmc = pm.MCMC(model)\nmcmc.sample(120000, 100000, 2)", "We have trained our model on the observed data, now we can sample values from the posterior. Let's look at the posterior distributions for $\\alpha$ and $\\beta$:", "alpha_samples = mcmc.trace('alpha')[:, None] # best to make them 1d\nbeta_samples = mcmc.trace('beta')[:, None]\n\nfigsize(12.5, 6)\n\n# histogram of the samples:\nplt.subplot(211)\nplt.title(r\"Posterior distributions of the variables $\\alpha, \\beta$\")\nplt.hist(beta_samples, histtype='stepfilled', bins=35, alpha=0.85,\n label=r\"posterior of $\\beta$\", color=\"#7A68A6\", normed=True)\nplt.legend()\n\nplt.subplot(212)\nplt.hist(alpha_samples, histtype='stepfilled', bins=35, alpha=0.85,\n label=r\"posterior of $\\alpha$\", color=\"#A60628\", normed=True)\nplt.legend();", "All samples of $\\beta$ are greater than 0. 
If instead the posterior was centered around 0, we may suspect that $\\beta = 0$, implying that temperature has no effect on the probability of defect. \nSimilarly, all $\\alpha$ posterior values are negative and far away from 0, implying that it is correct to believe that $\\alpha$ is significantly less than 0. \nRegarding the spread of the data, we are very uncertain about what the true parameters might be (though considering the low sample size and the large overlap of defects-to-nondefects this behaviour is perhaps expected). \nNext, let's look at the expected probability for a specific value of the temperature. That is, we average over all samples from the posterior to get a likely value for $p(t_i)$.", "t = np.linspace(temperature.min() - 5, temperature.max() + 5, 50)[:, None]\np_t = logistic(t.T, beta_samples, alpha_samples)\n\nmean_prob_t = p_t.mean(axis=0)\n\nfigsize(12.5, 4)\n\nplt.plot(t, mean_prob_t, lw=3, label=\"average posterior \\nprobability \\\nof defect\")\nplt.plot(t, p_t[0, :], ls=\"--\", label=\"realization from posterior\")\nplt.plot(t, p_t[-2, :], ls=\"--\", label=\"realization from posterior\")\nplt.scatter(temperature, D, color=\"k\", s=50, alpha=0.5)\nplt.title(\"Posterior expected value of probability of defect; \\\nplus realizations\")\nplt.legend(loc=\"lower left\")\nplt.ylim(-0.1, 1.1)\nplt.xlim(t.min(), t.max())\nplt.ylabel(\"probability\")\nplt.xlabel(\"temperature\");", "Above we also plotted two possible realizations of what the actual underlying system might be. Both are equally likely as any other draw. The blue line is what occurs when we average all the 20000 possible dotted lines together.\nAn interesting question to ask is for what temperatures are we most uncertain about the defect-probability? 
Below we plot the expected value line and the associated 95% intervals for each temperature.", "from scipy.stats.mstats import mquantiles\n\n# vectorized bottom and top 2.5% quantiles for \"confidence interval\"\nqs = mquantiles(p_t, [0.025, 0.975], axis=0)\nplt.fill_between(t[:, 0], *qs, alpha=0.7,\n color=\"#7A68A6\")\n\nplt.plot(t[:, 0], qs[0], label=\"95% CI\", color=\"#7A68A6\", alpha=0.7)\n\nplt.plot(t, mean_prob_t, lw=1, ls=\"--\", color=\"k\",\n label=\"average posterior \\nprobability of defect\")\n\nplt.xlim(t.min(), t.max())\nplt.ylim(-0.02, 1.02)\nplt.legend(loc=\"lower left\")\nplt.scatter(temperature, D, color=\"k\", s=50, alpha=0.5)\nplt.xlabel(\"temp, $t$\")\n\nplt.ylabel(\"probability estimate\")\nplt.title(\"Posterior probability estimates given temp. $t$\");", "The 95% credible interval, or 95% CI, painted in purple, represents the interval, for each temperature, that contains 95% of the distribution. For example, at 65 degrees, we can be 95% sure that the probability of defect lies between 0.25 and 0.75.\nMore generally, we can see that as the temperature nears 60 degrees, the CI's spread out over [0,1] quickly. As we pass 70 degrees, the CI's tighten again. This can give us insight about how to proceed next: we should probably test more O-rings around 60-65 temperature to get a better estimate of probabilities in that range. Similarly, when reporting to scientists your estimates, you should be very cautious about simply telling them the expected probability, as we can see this does not reflect how wide the posterior distribution is.\nWhat about the day of the Challenger disaster?\nOn the day of the Challenger disaster, the outside temperature was 31 degrees Fahrenheit. What is the posterior distribution of a defect occurring, given this temperature? The distribution is plotted below. 
It looks almost guaranteed that the Challenger was going to be subject to defective O-rings.", "figsize(12.5, 2.5)\n\nprob_31 = logistic(31, beta_samples, alpha_samples)\n\nplt.xlim(0.995, 1)\nplt.hist(prob_31, bins=1000, normed=True, histtype='stepfilled')\nplt.title(\"Posterior distribution of probability of defect, given $t = 31$\")\nplt.xlabel(\"probability of defect occurring in O-ring\");", "Is our model appropriate?\nThe skeptical reader will say \"You deliberately chose the logistic function for $p(t)$ and the specific priors. Perhaps other functions or priors will give different results. How do I know I have chosen a good model?\" This is absolutely true. To consider an extreme situation, what if I had chosen the function $p(t) = 1,\; \forall t$, which guarantees a defect always occurring: I would have again predicted disaster on January 28th. Yet this is clearly a poorly chosen model. On the other hand, if I did choose the logistic function for $p(t)$, but specified all my priors to be very tight around 0, likely we would have very different posterior distributions. How do we know our model is an expression of the data? This encourages us to measure the model's goodness of fit.\nWe can think: how can we test whether our model is a bad fit? An idea is to compare observed data (which if we recall is a fixed stochastic variable) with an artificial dataset which we can simulate. The rationale is that if the simulated dataset does not appear similar, statistically, to the observed dataset, then our model likely does not accurately represent the observed data. \nPreviously in this Chapter, we simulated artificial datasets for the SMS example. To do this, we sampled values from the priors. We saw how varied the resulting datasets looked, and rarely did they mimic our observed dataset. In the current example, we should sample from the posterior distributions to create very plausible datasets. Luckily, our Bayesian framework makes this very easy. 
We only need to create a new Stochastic variable, that is exactly the same as our variable that stored the observations, but minus the observations themselves. If you recall, our Stochastic variable that stored our observed data was:\nobserved = pm.Bernoulli( \"bernoulli_obs\", p, value=D, observed=True)\n\nHence we create:\nsimulated_data = pm.Bernoulli(\"simulation_data\", p)\n\nLet's simulate 10 000:", "simulated = pm.Bernoulli(\"bernoulli_sim\", p)\nN = 10000\n\nmcmc = pm.MCMC([simulated, alpha, beta, observed])\nmcmc.sample(N)\n\nfigsize(12.5, 5)\n\nsimulations = mcmc.trace(\"bernoulli_sim\")[:]\nprint(simulations.shape)\n\nplt.title(\"Simulated dataset using posterior parameters\")\nfigsize(12.5, 6)\nfor i in range(4):\n ax = plt.subplot(4, 1, i + 1)\n plt.scatter(temperature, simulations[1000 * i, :], color=\"k\",\n s=50, alpha=0.6)", "Note that the above plots are different (if you can think of a cleaner way to present this, please send a pull request and answer here!).\nWe wish to assess how good our model is. \"Good\" is a subjective term of course, so results must be relative to other models. \nWe will be doing this graphically as well, which may seem like an even less objective method. The alternative is to use Bayesian p-values. These are still subjective, as the proper cutoff between good and bad is arbitrary. Gelman emphasises that the graphical tests are more illuminating [7] than p-value tests. We agree.\nThe following graphical test is a novel data-viz approach to logistic regression. The plots are called separation plots[8]. For a suite of models we wish to compare, each model is plotted on an individual separation plot. I leave most of the technical details about separation plots to the very accessible original paper, but I'll summarize their use here.\nFor each model, we calculate the proportion of times the posterior simulation proposed a value of 1 for a particular temperature, i.e. 
compute $P( \\;\\text{Defect} = 1 | t, \\alpha, \\beta )$ by averaging. This gives us the posterior probability of a defect at each data point in our dataset. For example, for the model we used above:", "posterior_probability = simulations.mean(axis=0)\nprint(\"posterior prob of defect | realized defect \")\nfor i in range(len(D)):\n print(\"%.2f | %d\" % (posterior_probability[i], D[i]))", "Next we sort each column by the posterior probabilities:", "ix = np.argsort(posterior_probability)\nprint(\"prob | defect \")\nfor i in range(len(D)):\n print(\"%.2f | %d\" % (posterior_probability[ix[i]], D[ix[i]]))", "We can present the above data better in a figure: I've wrapped this up into a separation_plot function.", "from separation_plot import separation_plot\n\n\nfigsize(11., 1.5)\nseparation_plot(posterior_probability, D)", "The snaking line is the sorted probabilities, blue bars denote defects, and empty space (or grey bars for the optimistic readers) denote non-defects. As the probability rises, we see more and more defects occur. On the right-hand side, the plot suggests that when the posterior probability is large (line close to 1), more defects are realized. This is good behaviour. Ideally, all the blue bars should be close to the right-hand side, and deviations from this reflect missed predictions. 
The best choice for $c$ is the observed frequency of defects, in this case 7/23.", "figsize(11., 1.25)\n\n# Our temperature-dependent model\nseparation_plot(posterior_probability, D)\nplt.title(\"Temperature-dependent model\")\n\n# Perfect model\n# i.e. the probability of defect equals whether a defect occurred or not.\np = D\nseparation_plot(p, D)\nplt.title(\"Perfect model\")\n\n# random predictions\np = np.random.rand(23)\nseparation_plot(p, D)\nplt.title(\"Random model\")\n\n# constant model\nconstant_prob = 7. / 23 * np.ones(23)\nseparation_plot(constant_prob, D)\nplt.title(\"Constant-prediction model\");", "In the random model, we can see that as the probability increases there is no clustering of defects to the right-hand side. Similarly for the constant model.\nIn the perfect model, the probability line is not well shown, as it is stuck to the bottom and top of the figure. Of course the perfect model is only for demonstration, and we cannot draw any scientific inference from it.\nExercises\n1. Try putting in extreme values for our observations in the cheating example. What happens if we observe 25 affirmative responses? 10? 50? \n2. Try plotting $\\alpha$ samples versus $\\beta$ samples. Why might the resulting plot look like this?", "# type your code here.\nfigsize(12.5, 4)\n\nplt.scatter(alpha_samples, beta_samples, alpha=0.1)\nplt.title(\"Why does the plot look like this?\")\nplt.xlabel(r\"$\\alpha$\")\nplt.ylabel(r\"$\\beta$\");", "References\n\n[1] Dalal, Fowlkes and Hoadley (1989), JASA, 84, 945-957.\n[2] German Rodriguez. Datasets. In WWS509. Retrieved 30/01/2013, from http://data.princeton.edu/wws509/datasets/#smoking.\n[3] McLeish, Don, and Cyntha Struthers. STATISTICS 450/850 Estimation and Hypothesis Testing. Winter 2012. Waterloo, Ontario: 2012. Print.\n[4] Fonnesbeck, Christopher. \"Building Models.\" PyMC-Devs. N.p., n.d. Web. 26 Feb 2013. http://pymc-devs.github.com/pymc/modelbuilding.html.\n[5] Cronin, Beau. 
\"Why Probabilistic Programming Matters.\" 24 Mar 2013. Google, Online Posting to Google . Web. 24 Mar. 2013. https://plus.google.com/u/0/107971134877020469960/posts/KpeRdJKR6Z1.\n[6] S.P. Brooks, E.A. Catchpole, and B.J.T. Morgan. Bayesian animal survival estimation. Statistical Science, 15: 357–376, 2000\n[7] Gelman, Andrew. \"Philosophy and the practice of Bayesian statistics.\" British Journal of Mathematical and Statistical Psychology. (2012): n. page. Web. 2 Apr. 2013.\n[8] Greenhill, Brian, Michael D. Ward, and Audrey Sacks. \"The Separation Plot: A New Visual Method for Evaluating the Fit of Binary Models.\" American Journal of Political Science. 55.No.4 (2011): n. page. Web. 2 Apr. 2013.", "from IPython.core.display import HTML\n\n\ndef css_styling():\n styles = open(\"../styles/custom.css\", \"r\").read()\n return HTML(styles)\ncss_styling()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
d00d/quantNotebooks
Notebooks/quantopian_research_public/notebooks/lectures/VaR_and_CVaR/notebook.ipynb
unlicense
[ "Portfolio Value at Risk and Conditional Value at Risk\nBy Jonathan Larkin and Delaney Granizo-Mackenzie.\nPart of the Quantopian Lecture Series:\n\nwww.quantopian.com/lectures\ngithub.com/quantopian/research_public\n\nNotebook released under the Creative Commons Attribution 4.0 License.\n\nValue at Risk (VaR) is a key concept in portfolio risk management. It uses the past observed distribution of portfolio returns to estimate what your future losses might be at different likelihood levels. Let's demonstrate this concept through an example.", "import numpy as np\nimport pandas as pd\n\nfrom scipy.stats import norm\nimport time\n\nimport matplotlib.pyplot as plt", "Simulated Data Example\nLet's simulate some returns of 10 hypothetical assets.\nNOTE\nWe use normal distributions to simulate the returns; in practice, real returns will almost never follow normal distributions and usually have weird behavior including fat tails. We'll discuss this more later.", "# mu = 0.01, std = 0.10, 1000 bars, 10 assets\nmu = 0.01\nsigma = 0.10\nbars = 1000\nnum_assets = 10\n\nreturns = np.random.normal(mu, sigma, (bars, num_assets))\n\n# Fake asset names\nnames = ['Asset %s' %i for i in range(num_assets)]\n\n# Put in a pandas dataframe\nreturns = pd.DataFrame(returns, columns=names)\n\n# Plot the first 50 bars\nplt.plot(returns.head(50))\nplt.xlabel('Time')\nplt.ylabel('Return');", "The Value at Risk (VaR) for coverage $\\alpha$ is defined as the maximum amount we could expect to lose with likelihood $p = 1 - \\alpha$. Put another way, on no more than $100 \\times p \\%$ of days should we expect to lose more than the VaR. There are many ways to estimate VaR and none of them are perfect. In fact, you should not put complete trust in VaR; it is rather intended as a way to get a sense of how much might be lost in different levels of extreme scenarios, and provide this info to people responsible for risk management.\nVaR for a high $\\alpha$ is a measure of worst-case outcomes. 
For example, one might track their $\\alpha = 0.999$ VaR to understand how a 1/1000 crisis event might affect them. Because real distributions tend to diverge and become less and less consistent the further along the tail we go, extreme VaR should be taken with a grain of salt.\nRelationship to Confidence Intervals\nFor those familiar with confidence intervals, VaR is very similar. The idea of trying to cover a set of possible values with an interval specified by $\\alpha$ is similar to how VaR tries to cover a set of possible losses. For those unfamiliar there is a lecture available here.\nHistorical (Non-Parametric) VaR\nWe'll use historical VaR, which looks at previous returns distributions and uses that to compute the $p$ percentile. This percentile is the amount of loss you could reasonably expect to experience with probability $p$, assuming future returns are close to past returns. Again, this isn't perfect, and requires that there is no regime change in which the returns distribution changes. For instance, if your historical window doesn't include any crisis events, your VaR estimate will be far lower than it should be.\nTo compute historical VaR for coverage $\\alpha$ we simply take the $100 \\times (1 - \\alpha)$ percentile of lowest observed returns and multiply that by our total value invested.\nNow let's compute the VaR of this set of 10 assets. To do this we need a set of portfolio weights. 
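As a feel for the percentile rule, here is a toy sketch using only the standard library. The sample returns are invented for illustration, and a simple order-statistic convention is used (numpy's `percentile`, used in the notebook's own function, interpolates between order statistics, so its values can differ slightly):

```python
# Toy historical VaR: take the (1 - alpha) empirical quantile of observed
# returns and scale it by the value invested. Sample returns are invented.
observed_returns = [0.02, -0.01, 0.00, -0.04, 0.01,
                    -0.08, 0.03, -0.02, 0.01, 0.00]
value_invested = 1000000
alpha = 0.9  # coverage; we look at the worst 10% of days

ordered = sorted(observed_returns)
# Size of the lower tail we keep (at least one observation)
tail_size = max(round(len(ordered) * (1 - alpha)), 1)
idx = tail_size - 1
var_return = ordered[idx]          # the VaR expressed as a return
print(var_return * value_invested)  # the VaR as an absolute amount
```

With 10 observations and alpha = 0.9, the VaR return is simply the single worst observed day.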
We'll start super simple.", "weights = np.ones((10, 1))\n# Normalize\nweights = weights / np.sum(weights)\n\ndef value_at_risk(value_invested, returns, weights, alpha=0.95, lookback_days=520):\n returns = returns.fillna(0.0)\n # Multiply asset returns by weights to get one weighted portfolio return\n portfolio_returns = returns.iloc[-lookback_days:].dot(weights)\n # Compute the correct percentile loss and multiply by value invested\n return np.percentile(portfolio_returns, 100 * (1-alpha)) * value_invested", "We'll compute the VaR for $\\alpha = 0.95$.", "value_invested = 1000000\n\nvalue_at_risk(value_invested, returns, weights, alpha=0.95)", "Interpreting this, we say that historically no more than $5\\%$ of days resulted in losses more extreme than this, or that on each day your probability of losing this much is less than $5\\%$. Keep in mind that any forecast like this is just an estimate.\nNormal vs. Non-Parametric Historical VaR\nNormal Case\nA special case of VaR is when you assume that the returns follow a given distribution rather than non-parametrically estimating it historically. In this case a normal VaR would fit our data, because all our returns were simulated from a normal distribution. We can check this by using a normal distribution Cumulative Distribution Function (CDF), which sums the area under a normal curve to figure out how likely certain values are. We'll use an inverse CDF, or PPF, which for a given likelihood will tell us to which value that likelihood corresponds.\nSpecifically, the closed-form formula for Normal VaR is\n$$VaR_{\\alpha}(x) = \\mu - \\sigma N^{-1}(\\alpha)$$", "# Portfolio mean return is unchanged, but std has to be recomputed\n# This is because independent variances sum, but std is sqrt of variance\nportfolio_std = np.sqrt( np.power(sigma, 2) * num_assets ) / num_assets\n\n# manually \n(mu - portfolio_std * norm.ppf(0.95)) * value_invested", "Seems close enough, to within some random variance. 
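The closed-form number can also be cross-checked without scipy: since Python 3.8 the standard library's `statistics.NormalDist` exposes the inverse normal CDF. The parameters below mirror the simulation cells above:

```python
from statistics import NormalDist

# Parameters mirroring the simulation above
mu = 0.01
sigma = 0.10
num_assets = 10
value_invested = 1000000

# Equal-weighted portfolio of independent assets:
# variance scales as sigma^2 * N / N^2, so std = sqrt(N * sigma^2) / N
portfolio_std = (num_assets * sigma ** 2) ** 0.5 / num_assets

# Inverse normal CDF: the same quantity scipy computes as norm.ppf(0.95)
z = NormalDist().inv_cdf(0.95)

# VaR_alpha = mu - sigma_p * N^{-1}(alpha), scaled to an absolute amount
normal_var = (mu - portfolio_std * z) * value_invested
print(round(normal_var))
```

This reproduces the deterministic closed-form value; the notebook's sampled VaR only agrees with it up to sampling noise.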
Let's visualize the continuous normal case. Notice that the VaR is expressed as a return rather than an absolute loss. To get absolute loss we just need to multiply by value invested.", "def value_at_risk_N(mu=0, sigma=1.0, alpha=0.95):\n return mu - sigma*norm.ppf(alpha)\n\n\nx = np.linspace(-3*sigma,3*sigma,1000)\ny = norm.pdf(x, loc=mu, scale=portfolio_std)\nplt.plot(x,y);\nplt.axvline(value_at_risk_N(mu = 0.01, sigma = portfolio_std, alpha=0.95), color='red', linestyle='solid');\nplt.legend(['Return Distribution', 'VaR for Specified Alpha as a Return'])\nplt.title('VaR in Closed Form for a Normal Distribution');", "Historical (Non-Parametric) Case\nHistorical VaR instead uses historical data to draw a discrete Probability Density Function, or histogram. It then finds the point at which only $100 \\times (1-\\alpha)\\%$ of the points are below that return. It returns that return as the VaR return for coverage $\\alpha$.", "lookback_days = 520\nalpha = 0.95\n\n# Multiply asset returns by weights to get one weighted portfolio return\nportfolio_returns = returns.fillna(0.0).iloc[-lookback_days:].dot(weights)\n\nportfolio_VaR = value_at_risk(value_invested, returns, weights, alpha=0.95)\n# Need to express it as a return rather than absolute loss\nportfolio_VaR_return = portfolio_VaR / value_invested\n\nplt.hist(portfolio_returns, bins=20)\nplt.axvline(portfolio_VaR_return, color='red', linestyle='solid');\nplt.legend(['VaR for Specified Alpha as a Return', 'Historical Returns Distribution'])\nplt.title('Historical VaR');", "Underlying Distributions Are Not Always Normal\nIn real financial data the underlying distributions are rarely normal. This is why we prefer historical VaR as opposed to an assumption of an underlying distribution. 
Historical VaR is also non-parametric, so we aren't at risk of overfitting distribution parameters to some data set.\nReal Data Example\nWe'll show this on some real financial data.", "# OEX components as of 3/31/16\n# http://www.cboe.com/products/indexcomponents.aspx?DIR=OPIndexComp&FILE=snp100.doc\noex = ['MMM','T','ABBV','ABT','ACN','ALL','GOOGL','GOOG','MO','AMZN','AXP','AIG','AMGN','AAPL','BAC',\n 'BRK-B','BIIB','BLK','BA','BMY','CVS','COF','CAT','CELG','CVX','CSCO','C','KO','CL','CMCSA',\n 'COP','COST','DHR','DOW','DUK','DD','EMC','EMR','EXC','XOM','FB','FDX','F','GD','GE','GM','GILD',\n 'GS','HAL','HD','HON','INTC','IBM','JPM','JNJ','KMI','LLY','LMT','LOW','MA','MCD','MDT','MRK',\n 'MET','MSFT','MDLZ','MON','MS','NKE','NEE','OXY','ORCL','PYPL','PEP','PFE','PM','PG','QCOM',\n 'RTN','SLB','SPG','SO','SBUX','TGT','TXN','BK','PCLN','TWX','FOXA','FOX','USB','UNP','UPS','UTX',\n 'UNH','VZ','V','WMT','WBA','DIS','WFC']\ntickers = symbols(oex)\nnum_stocks = len(tickers)\n\nstart = time.time()\ndata = get_pricing(tickers, fields='close_price', start_date='2014-01-01', end_date='2016-04-04')\nend = time.time()\nprint \"Time: %0.2f seconds.\" % (end - start)\n\nreturns = data.pct_change()\nreturns = returns - returns.mean(skipna=True) # de-mean the returns\n\ndata.plot(legend=None);\nreturns.plot(legend=None); ", "Now we need to generate some weights.", "def scale(x):\n return x / np.sum(np.abs(x))\n\nweights = scale(np.random.random(num_stocks))\nplt.bar(np.arange(num_stocks),weights);", "Now let's compute the VaR for $\\alpha = 0.95$. 
We'll write this as $VaR_{\\alpha=0.95}$ from now on.", "value_at_risk(value_invested, returns, weights, alpha=0.95, lookback_days=520)", "Let's visualize this.", "lookback_days = 520\nalpha = 0.95\n\n# Multiply asset returns by weights to get one weighted portfolio return\nportfolio_returns = returns.fillna(0.0).iloc[-lookback_days:].dot(weights)\n\nportfolio_VaR = value_at_risk(value_invested, returns, weights, alpha=0.95)\n# Need to express it as a return rather than absolute loss\nportfolio_VaR_return = portfolio_VaR / value_invested\n\nplt.hist(portfolio_returns, bins=20)\nplt.axvline(portfolio_VaR_return, color='red', linestyle='solid');\nplt.legend(['VaR for Specified Alpha as a Return', 'Historical Returns Distribution'])\nplt.title('Historical VaR');\nplt.xlabel('Return');\nplt.ylabel('Observation Frequency');", "The distribution looks visibly non-normal, but let's confirm that the returns are non-normal using a statistical test. We'll use Jarque-Bera, and our p-value cutoff is 0.05.", "from statsmodels.stats.stattools import jarque_bera\n\n_, pvalue, _, _ = jarque_bera(portfolio_returns)\n\nif pvalue > 0.05:\n print 'The portfolio returns are likely normal.'\nelse:\n print 'The portfolio returns are likely not normal.'", "Sure enough, they're likely not normal, so it would be a big mistake to use a normal distribution to underlie a VaR computation here.\nWe Lied About 'Non-Parametric'\nYou'll notice the VaR computation conspicuously uses a lookback window. This is a parameter to the otherwise 'non-parametric' historical VaR. Keep in mind that because the lookback window affects VaR, it's important to pick a lookback window that's long enough for the VaR to converge. To check whether our value has seemingly converged, let's run an experiment.\nAlso keep in mind that even if something has converged on, say, a 500-day window, that may be ignoring a financial collapse that happened 1000 days ago, and therefore is ignoring crucial data. 
On the other hand, using all-time data may be useless for reasons of non-stationarity in returns variance. Basically, as returns variance changes over time, older measurements may reflect state that is no longer accurate. For more information on non-stationarity you can check out this lecture.", "N = 1000\nVaRs = np.zeros((N, 1))\nfor i in range(N):\n VaRs[i] = value_at_risk(value_invested, returns, weights, lookback_days=i)\n\nplt.plot(VaRs)\nplt.xlabel('Lookback Window')\nplt.ylabel('VaR');", "We can see here that VaR does appear to converge within a 400-600 day lookback window. Therefore our 520-day parameter should be fine. In fact, 1000 may be better as it uses strictly more information, but it is more computationally intensive and prone to stationarity concerns.\nIt can be useful to do analyses like this when evaluating whether a VaR is meaningful. Another check we'll do is for stationarity of the portfolio returns over this time period.", "from statsmodels.tsa.stattools import adfuller\n\nresults = adfuller(portfolio_returns)\npvalue = results[1]\n\nif pvalue < 0.05:\n print 'Process is likely stationary.'\nelse:\n print 'Process is likely non-stationary.'", "Conditional Value at Risk (CVaR)\nCVaR is what many consider an improvement on VaR, as it takes into account the shape of the returns distribution. It is also known as Expected Shortfall (ES), as it is an expectation over all the different possible losses greater than VaR and their corresponding estimated likelihoods.\nIf you are not familiar with expectations, much content is available online. However we will provide a brief refresher.\nExpected Value\nSay you have a fair six-sided die. Each number is equally likely. The notion of an expectation, written as $\\mathrm{E}(X)$, is what you should expect to happen out of all the possible outcomes. To get this you multiply each event by the probability of that event and add that up; think of it as a probability-weighted average. 
With a die we get\n$$1/6 \\times 1 + 1/6 \\times 2 + 1/6 \\times 3 + 1/6 \\times 4 + 1/6 \\times 5 + 1/6 \\times 6 = 3.5$$\nWhen the probabilities are unequal it gets more complicated, and when the outcomes are continuous we have to use integration in closed form equations. Here is the formula for CVaR.\n$$CVaR_{\\alpha}(x) \\approx \\frac{1}{(1-\\alpha)} \\int_{f(x,y) \\geq VaR_{\\alpha}(x)} f(x,y)p(y)dy dx$$", "def cvar(value_invested, returns, weights, alpha=0.95, lookback_days=520):\n # Call out to our existing function\n var = value_at_risk(value_invested, returns, weights, alpha, lookback_days=lookback_days)\n returns = returns.fillna(0.0)\n portfolio_returns = returns.iloc[-lookback_days:].dot(weights)\n \n # Get back to a return rather than an absolute loss\n var_pct_loss = var / value_invested\n \n return value_invested * np.nanmean(portfolio_returns[portfolio_returns < var_pct_loss])", "Let's compute CVaR on our data and see how it compares with VaR.", "cvar(value_invested, returns, weights, lookback_days=500)\n\nvalue_at_risk(value_invested, returns, weights, lookback_days=500)", "CVaR is higher because it is capturing more information about the shape of the distribution, AKA the moments of the distribution. If the tails have more mass, this will capture that. 
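Both the probability-weighted average and the tail-mean idea behind CVaR can be checked with a small pure-Python sketch. The toy return sample below is invented for illustration; it is not the notebook's data:

```python
from fractions import Fraction

# Expected value of a fair die: probability-weighted average
die_ev = sum(face * Fraction(1, 6) for face in range(1, 7))
print(die_ev)  # 7/2, i.e. 3.5

# Empirical CVaR: the mean of the returns at or below the VaR cutoff
def empirical_var_cvar(sample_returns, alpha=0.95):
    ordered = sorted(sample_returns)
    tail_size = max(round(len(ordered) * (1 - alpha)), 1)
    var = ordered[tail_size - 1]                 # VaR return (a low quantile)
    cvar = sum(ordered[:tail_size]) / tail_size  # mean of the losses beyond it
    return var, cvar

# 100 invented returns: five losses of varying size, the rest small gains
toy = [-0.30, -0.20, -0.10, -0.05, -0.02] + [0.01] * 95
var, cvar = empirical_var_cvar(toy, alpha=0.95)
print(var, cvar)  # CVaR is a worse (more negative) return than VaR
```

Because CVaR averages over the whole tail, the large -0.30 and -0.20 losses pull it well below the VaR cutoff itself, which is exactly the shape information VaR throws away.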
In general it is considered to be a far superior metric compared with VaR and you should use it over VaR in most cases.\nLet's visualize what it's capturing.", "lookback_days = 520\nalpha = 0.95\n\n# Multiply asset returns by weights to get one weighted portfolio return\nportfolio_returns = returns.fillna(0.0).iloc[-lookback_days:].dot(weights)\n\nportfolio_VaR = value_at_risk(value_invested, returns, weights, alpha=0.95)\n# Need to express it as a return rather than absolute loss\nportfolio_VaR_return = portfolio_VaR / value_invested\n\nportfolio_CVaR = cvar(value_invested, returns, weights, alpha=0.95)\n# Need to express it as a return rather than absolute loss\nportfolio_CVaR_return = portfolio_CVaR / value_invested\n\n# Plot only the observations > VaR on the main histogram so the plot comes out\n# nicely and doesn't overlap.\nplt.hist(portfolio_returns[portfolio_returns > portfolio_VaR_return], bins=20)\nplt.hist(portfolio_returns[portfolio_returns < portfolio_VaR_return], bins=10)\nplt.axvline(portfolio_VaR_return, color='red', linestyle='solid');\nplt.axvline(portfolio_CVaR_return, color='red', linestyle='dashed');\nplt.legend(['VaR for Specified Alpha as a Return',\n 'CVaR for Specified Alpha as a Return',\n 'Historical Returns Distribution', \n 'Returns < VaR'])\nplt.title('Historical VaR and CVaR');\nplt.xlabel('Return');\nplt.ylabel('Observation Frequency');", "Checking for Convergence Again\nFinally, we'll check for convergence.", "N = 1000\nCVaRs = np.zeros((N, 1))\nfor i in range(N):\n CVaRs[i] = cvar(value_invested, returns, weights, lookback_days=i)\n\nplt.plot(CVaRs)\nplt.xlabel('Lookback Window')\nplt.ylabel('CVaR');", "Sources\n\nhttp://www.wiley.com/WileyCDA/WileyTitle/productCd-1118445597.html\nhttp://www.ise.ufl.edu/uryasev/publications/\nhttp://www.ise.ufl.edu/uryasev/files/2011/11/VaR_vs_CVaR_CARISMA_conference_2010.pdf\nhttp://faculty.washington.edu/ezivot/econ589/me20-1-4.pdf\n\nThis presentation is for informational purposes only and does 
not constitute an offer to sell, a solicitation to buy, or a recommendation for any security; nor does it constitute an offer to provide investment advisory or other services by Quantopian, Inc. (\"Quantopian\"). Nothing contained herein constitutes investment advice or offers any opinion with respect to the suitability of any security, and any views expressed herein should not be taken as advice to buy, sell, or hold any security or as an endorsement of any security or company. In preparing the information contained herein, Quantopian, Inc. has not taken into account the investment needs, objectives, and financial circumstances of any particular investor. Any views expressed and data illustrated herein were prepared based upon information, believed to be reliable, available to Quantopian, Inc. at the time of publication. Quantopian makes no guarantees as to their accuracy or completeness. All information is subject to change and may quickly become unreliable for various reasons, including changes in market conditions or economic circumstances." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
awilky/NLTK-TwitterMixer
NLTK TwitterMixer.ipynb
mit
[ "NLTK TwitterMixer\nby Afton Wilky\nThe NLTK TwitterMixer generates new text based on tweets scraped from the Twitter API. \nTwitter data is collected by a simple search query. \nWriting functions scrape part-of-speech patterns from sentences and compile words from the text into a part-of-speech Python dictionary. They create new sentences by looping through the part-of-speech pattern and making a random choice from the part-of-speech dictionary.\nWords used in the new text can be restricted to only those which contain a specified list of phonemes (i.e. particular sounds, like the consonant 'R' and/or the vowel, 'O'). Because the functions use the CMU Pronouncing Dictionary rather than regex search, differences in spelling don't affect the results (e.g. 'ER' will return both hurt, HH ER T, and heard, HH ER D).\nPackages required:\nNatural Language Toolkit (NLTK) (including the CMU Pronouncing Dict Corpora, CMUDict) installed on your local machine (see installation / getting started instructions at http://www.nltk.org/install.html). If you are using Anaconda or Miniconda, the NLTK package for Anaconda/Miniconda must also be installed.\nTweepy", "# NLTK TwitterMixer, Copyright (C) 2017 Afton Wilky\n# Author: Afton Wilky <aftonwilky.com>\n# License: MIT", "Twitter\nText remixed by the TwitterMixer is scraped from the Twitter API using the Tweepy (3.5.0) module. Tweepy Documentation: http://docs.tweepy.org/en/v3.5.0/\nTwitter Import Statements", "import json\nimport tweepy", "Access Twitter API: keys and tokens\nUse the Tweepy module to get access to the Twitter API. 
Store consumer key and secret, access token and secret in a local file called 'twconfig.py'.\nInstructions\nSign up for or log into a Twitter Developer account:\nhttps://dev.twitter.com/\nNavigate to or create an app to get OAuth credentials, tokens and keys:\nhttps://apps.twitter.com/\nAccess tokens and keys are available in the \"Manage Keys and Access Tokens.\" \nCreate a local file 'twconfig.py'. Assign the values from your Twitter app to the following variables:\nconsumer_key, \nconsumer_secret, \naccess_token, \naccess_secret\nSave the local file and import it.", "# Import local file that stores tokens and keys\n\nfrom twconfig import * ", "Tweepy documentation on using your tokens and keys:\nhttp://docs.tweepy.org/en/v3.5.0/auth_tutorial.html#auth-tutorial", "# Get Twitter API Access\n\nauth = tweepy.OAuthHandler(consumer_key, consumer_secret)\nauth.set_access_token(access_token, access_secret)\n\napi = tweepy.API(auth)", "Query Twitter API and Process the Data\nThe following are very basic functions to query the Twitter API and store the results in variables and .txt files, using plain-text / string and Python-dict formats so the data is easy to work with.\nCurrently, Tweepy does not support limiting searches by date or the number of results (unless they are paginated). \nAn alternative to Tweepy is the Twython module, which is more full-featured. 
However, because of its dependencies (requests-mock), the package can no longer be used with Anaconda / Miniconda.", "#######################\n# Query Twitter API #\n#######################\n\ndef twitter_search(query):\n \"\"\"Query the Twitter API\"\"\"\n tweets = []\n for tweet in api.search(q = query):\n tweets.append(tweet._json)\n return tweets\n \n \n##################\n# Process Data #\n##################\n\ndef process_tweets(data):\n \"\"\"\n Process the results of a Twitter API query.\n Adds 'text', 'created_at' and 'hashtags' fields\n to Python dictionary.\n \"\"\"\n # get the text and date fields\n processed_data = []\n for tweet in data:\n entry = {}\n entry['text'], entry['created_at'] = tweet['text'], tweet['created_at']\n if tweet['entities']['hashtags'] != []:\n hashtags = []\n for hashtag in tweet['entities']['hashtags']:\n hashtags.append(hashtag['text'])\n entry['hashtags'] = hashtags\n else:\n entry['hashtags'] = []\n processed_data.append(entry)\n return processed_data\n\n\ndef get_string(tweets):\n \"\"\"\n Converts unprocessed Twitter data into a string.\n \"\"\"\n text = ''\n for tweet in process_tweets(tweets):\n text = text + tweet['text'] + '\\n'\n return text \n\n\n##########################\n# Write Data to Files #\n##########################\n\ndef write_file_tweets(processed_data, filepath, filename):\n with open(filepath + filename + '.txt', 'wb') as f:\n for tweet in processed_data:\n f.write(bytes(tweet['text'] + '\\n', 'utf-8'))\n\n\ndef write_file_hashtags(processed_data, filepath, filename):\n with open(filepath + filename + '.txt', 'wb') as f:\n for tweet in processed_data:\n for hashtag in tweet['hashtags']:\n f.write(bytes(hashtag + '\\n', 'utf-8'))", "Call functions to create variables and files storing raw and processed Twitter data.", "########################\n# Query Twitter API #\n########################\n\nhello_raw = twitter_search('hello')\n\n\n##############################################################\n# Convert 
raw Twitter query data into a Python dictionary. #\n##############################################################\n\nhello = process_tweets(hello_raw)\n\n\n###################################################\n# Convert raw Twitter query data into a string #\n# that can be used by NLTK. #\n###################################################\n\nhello_text = get_string(hello_raw)\n\n\n######################################################\n# Write processed Twitter hashtag data to a file # \n# Note: replace argument #2 with the appropriate #\n# filepath for your computer #\n######################################################\n\n# write_file_hashtags(hello, '/Users/USERNAME/', 'testing')\n\n\n######################################################\n# Write processed Twitter tweets data to a file #\n# Note: replace argument #2 with the appropriate #\n# filepath for your computer #\n######################################################\n\n# write_file_tweets(hello, '/Users/USERNAME/', 'testing')", "NLTK\nThe Natural Language Toolkit (NLTK) provides users with access to a series of corpora and functions useful for parsing and working with text. 
Information about the toolkit and documentation is available at http://www.nltk.org/.\nNLTK functions require plaintext files (.txt) or variables assigned to values processed by the PlaintextCorpusReader NLTK function.\ne.g.\ntweets_all = PlaintextCorpusReader(FILEPATH, 'tweets_all.txt')\nhashtags_all = PlaintextCorpusReader(FILEPATH, 'hashtags_all.txt')\nNLTK Import Statements", "import nltk\nimport nltk.data\n\n\nimport random\nfrom collections import defaultdict, OrderedDict\n\n\nfrom nltk.corpus import PlaintextCorpusReader\nfrom nltk.corpus import CategorizedPlaintextCorpusReader\nfrom nltk.corpus import cmudict\nfrom nltk.corpus import wordnet\n\n\nfrom nltk import load_parser\nfrom nltk.tokenize import *\nfrom nltk.probability import *\nfrom nltk.misc.wordfinder import wordfinder\nfrom nltk.text import Text", "Variables\nSet variable to access the CMU Pronunciation Dictionary", "prondict = nltk.corpus.cmudict.dict()", "Basic Functions", "def write_file(x, filepath):\n \"\"\"Writes a file and generates filename from first 20 characters\"\"\"\n bad_file_chars = ['>', '<', ':', '\"', '/', '\\\\', \"|\", '*', ' ', '?', '\\u2014', '\\u2019']\n filename = str(x[:20])\n for char in bad_file_chars:\n filename = filename.replace(char, '_')\n with open(filepath + filename + '.txt', 'wb') as f:\n f.write(x.encode('utf-8'))\n\n\ndef write_json_file(data, filepath):\n with open(filepath + '.json', 'w') as f:\n json.dump(data, f)\n\n\ndef read_file(filepath, filename):\n \"\"\"Reads file\"\"\"\n return PlaintextCorpusReader(filepath, filename)\n\n\ndef tokenize(text):\n sentences = nltk.sent_tokenize(text.raw())\n words = [nltk.word_tokenize(sentence.lower()) for sentence in sentences]\n return words\n\n\ndef process(text):\n words = tokenize(text)\n tagged_words = [dict(nltk.pos_tag(word)) for word in words]\n return tagged_words ", "Basic NLTK Functions", "def get_words_longer_than(length, text):\n \"\"\"Prints words from a text that are longer than a length.\"\"\"\n longer_words = 
set([w for w in text.words() if len(w) > length])\n print(longer_words)\n\n\ndef get_words_by_char(text, ch):\n \"\"\"Returns words in a text that contain specified character or string.\"\"\"\n ch_words = [w.lower() for w in text.words() if ch in w]\n return ch_words", "CMU Pronouncing Dictionary Functions\nThe CMU Pronouncing Dictionary is a corpus of words (English) and their pronunciation, available through the Natural Language Toolkit. \nPronunciations are broken into a series of 1- and 2-letter 'phones' that represent the sound of each syllable. Each word is also labeled with information about stressed and unstressed syllables (0: No stress; 1: Primary stress; 2: Secondary stress). \nInformation about the dictionary is available at http://www.speech.cs.cmu.edu/cgi-bin/cmudict", "######################################################\n# Helper Functions - CMU Pronunciation Dictionary #\n######################################################\n\ndef get_safe_cmudict_list(text):\n \"\"\"\n Returns a LIST of words in a text that can be \n processed by the CMU Pronunciation Dictionary.\n \"\"\"\n # get_cmudict_error_words(text)\n cmu_dict_words = sorted([w.lower() for w in text.words() \n if w.lower() in prondict])\n return cmu_dict_words\n\n\ndef get_safe_cmudict_set(text):\n \"\"\"\n Returns a SET of words in a text that can be \n processed by the CMU Pronunciation Dictionary.\n \"\"\"\n return set(get_safe_cmudict_list(text))", "Phone-based Functions (CMU Pronouncing Dictionary)\nFor information about 'phones', see the \"Phoneme Set\" section of the CMU Dict information page: http://www.speech.cs.cmu.edu/cgi-bin/cmudict", "def get_phones(text):\n \"\"\"Returns a list of CMU Dictionary phones in a text.\"\"\"\n cmu_dict_words = get_safe_cmudict_list(text)\n return sorted([ph for w in cmu_dict_words for ph in prondict[w][0]])\n\n\ndef get_phones_by_freq(text):\n \"\"\"Returns a list of CMU Dictionary phones in a text, sorted by frequency.\"\"\"\n return 
get_by_freq(get_phones(text))\n\n\ndef get_words_by_phone(phone, text):\n    \"\"\"Returns a list of words in a text that contain a specified phone.\"\"\"\n    cmu_dict_words = get_safe_cmudict_set(text)\n    return sorted(set([w.lower() for w in cmu_dict_words for \n                       ph in prondict[w][0] if phone[:2] in ph]))\n\n\ndef get_phone_words(phone_list, text):\n    return set([word for phone in phone_list for \n                word in get_words_by_phone(phone, text)])", "Lexical / Grammatical Functions", "def get_grammar(text):\n    \"\"\"Returns an OrderedDict containing the part-of-speech order of the sentences in a text.\"\"\"\n    processed_text = process(text)\n    grammar = OrderedDict()\n    for i in range(0, len(processed_text)):\n        grammar[i] = []\n        for k, v in processed_text[i].items():\n            # print grammar[i]\n            grammar[i].append(v)\n    return grammar", "Dictionary Functions", "def make_pos_dict(text):\n    \"\"\"Returns a dictionary of words sorted by their part of speech.\"\"\"\n    tagged_words = process(text)\n    pos = defaultdict(set)\n    for i in range(0, len(tagged_words)):\n        for k, v in tagged_words[i].items():\n            pos[v].add(k)\n    return pos\n\n\ndef make_word_pos_dict(text):\n    \"\"\"Returns a flat dict of words and parts of speech in text.\"\"\"\n    tagged_words = process(text)\n    word_pos_dict = {}\n    for i in range(0, len(tagged_words)):\n        for word, pos in tagged_words[i].items():\n            word_pos_dict[word] = pos\n    return word_pos_dict\n\n\ndef make_phone_pos_dict(phone_list, text):\n    \"\"\"\n    Returns a dictionary of words containing the specified phones, \n    sorted by their part of speech\n    \"\"\"\n    pos = make_pos_dict(text)\n    phone_words = get_phone_words(phone_list, text)\n    phone_pos_dict = defaultdict(list)\n    for k, v in pos.items():\n        for word in v:\n            if word in phone_words:\n                phone_pos_dict[k].append(word)\n    return phone_pos_dict", "Writing Functions", "def write_sentences(text):\n    \"\"\"\n    Writes sentences using the part of speech patterns and words \n    from the input text.\n    \"\"\"\n    # scrape part of speech patterns from 
input text\n grammar = get_grammar(text)\n # compile a dictionary, associating each word with a part of speech\n pos_dict = make_pos_dict(text)\n new_text = []\n # go through each sentence in grammar\n for i in range(0, len(grammar)): \n # for each pos in grammar[i] make a random choice from the pos_dict\n for pos in grammar[i]:\n word = random.choice(list(pos_dict[pos]))\n # append that choice to new_text\n new_text.append(word)\n return ' '.join(new_text)\n\n\ndef get_new_text(word_dict, grammar):\n \"\"\"\n Writes new text based on an input grammar and dictionary of words \n associated with their part of speech.\n \"\"\"\n new_text = []\n # go through each sentence in grammar\n for i in range(0, len(grammar)): \n # for each pos in grammar[i] make a random choice from the phone_pos_dict\n for pos in grammar[i]:\n if word_dict[pos]:\n word = random.choice(list(word_dict[pos]))\n # append that choice to new_text\n new_text.append(word)\n return ' '.join(new_text)\n\n\ndef write_phone_sentences(phone_list, text):\n \"\"\"\n Writes sentences using the part of speech patterns and words \n from the input text that contain specified phones.\n \"\"\"\n grammar = get_grammar(text)\n phone_pos_dict = make_phone_pos_dict(phone_list, text)\n return get_new_text(phone_pos_dict, grammar)", "Examples\nWrite new text and save it to a file, reading text from file", "##############################\n# Read text from .txt file #\n##############################\n\n# new_file = read_file('FILEPATH', 'FILENAME.txt')\n\n### OR ###\n\n# tweets = PlaintextCorpusReader('FILEPATH', 'FILENAME.txt')\n\n\n################################################################\n# Write sentences based on the file read in the previous step #\n################################################################\n\n# Write new sentences based on words and grammar in a text\n# new_text = write_sentences(new_file)\n\n### OR ###\n\n# Write new sentences with specified phones\n# based on words and grammar in a 
text\n# new_text = write_phone_sentences(['OW', 'OY', 'UW', 'AO'], new_file)\n\n\n#############################################\n#   Create a new file containing the        #\n#   sentences written in the previous step  #\n#############################################\n\n# write_file(new_text, 'FILEPATH')\n# print(new_text)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
bjodah/aqchem
examples/aqueous_radiolysis.ipynb
bsd-2-clause
[ "Aqueous radiolysis\nIn this notebook we will look at the reactive species in water subjected to ionizing radiation.\nThe reaction set has simply been taken from the literature and we will not pay too much attention\nto it, but rather focus on the analysis of dominant reactions toward the end.", "from collections import defaultdict\nimport numpy as np\nimport sympy\nimport matplotlib.pyplot as plt\nimport chempy\nfrom chempy import Reaction, Substance, ReactionSystem\nfrom chempy.kinetics.ode import get_odesys\nfrom chempy.kinetics.analysis import plot_reaction_contributions\nfrom chempy.printing.tables import UnimolecularTable, BimolecularTable\nimport pyodesys\nfrom pyodesys.symbolic import ScaledSys\nfrom pyneqsys.plotting import mpl_outside_legend\nsympy.init_printing()\n%matplotlib inline\n{k: globals()[k].__version__ for k in 'sympy pyodesys chempy'.split()}\n\n# Never mind the next row, it contains all the reaction data of aqueous radiolysis at room temperature\n_species, _reactions = ['H', 'H+', 'H2', 'e-(aq)', 'HO2-', 'HO2', 'H2O2', 'HO3', 'O-', 'O2', 'O2-', 'O3', 'O3-', 'OH', 'OH-', 'H2O'], [({1: 1, 14: 1}, {15: 1}, 140000000000.0, {}), ({15: 1}, {1: 1, 14: 1}, 2.532901323662559e-05, {}), ({6: 1}, {1: 1, 4: 1}, 0.11193605692841689, {}), ({1: 1, 4: 1}, {6: 1}, 50000000000.0, {}), ({6: 1, 14: 1}, {4: 1, 15: 1}, 13000000000.0, {}), ({4: 1, 15: 0}, {6: 1, 14: 1}, 58202729.542927094, {15: 1}), ({3: 1, 15: 0}, {0: 1, 14: 1}, 19.0, {15: 1}), ({0: 1, 14: 1}, {3: 1, 15: 1}, 18000000.0, {}), ({0: 1}, {1: 1, 3: 1}, 3.905960400662016, {}), ({1: 1, 3: 1}, {0: 1}, 23000000000.0, {}), ({13: 1, 14: 1}, {8: 1, 15: 1}, 13000000000.0, {}), ({8: 1, 15: 0}, {13: 1, 14: 1}, 103500715.55425139, {15: 1}), ({13: 1}, {8: 1, 1: 1}, 0.12589254117941662, {}), ({8: 1, 1: 1}, {13: 1}, 100000000000.0, {}), ({5: 1}, {1: 1, 10: 1}, 1345767.401963457, {}), ({1: 1, 10: 1}, {5: 1}, 50000000000.0, {}), ({5: 1, 14: 1}, {10: 1, 15: 1}, 50000000000.0, {}), ({10: 1, 15: 0}, {5: 1, 14: 1}, 
18.619585312728415, {15: 1}), ({3: 1, 13: 1}, {14: 1}, 30000000000.0, {}), ({3: 1, 6: 1}, {13: 1, 14: 1}, 14000000000.0, {}), ({10: 1, 3: 1, 15: 0}, {4: 1, 14: 1}, 13000000000.0, {15: 1}), ({3: 1, 5: 1}, {4: 1}, 20000000000.0, {}), ({9: 1, 3: 1}, {10: 1}, 22200000000.0, {}), ({3: 2, 15: 0}, {2: 1, 14: 2}, 5000000000.0, {15: 2}), ({0: 1, 3: 1, 15: 0}, {2: 1, 14: 1}, 25000000000.0, {15: 1}), ({3: 1, 4: 1}, {8: 1, 14: 1}, 3500000000.0, {}), ({8: 1, 3: 1, 15: 0}, {14: 2}, 22000000000.0, {15: 1}), ({3: 1, 12: 1, 15: 0}, {9: 1, 14: 2}, 16000000000.0, {15: 1}), ({3: 1, 11: 1}, {12: 1}, 36000000000.0, {}), ({0: 1, 15: 0}, {2: 1, 13: 1}, 11.0, {15: 1}), ({0: 1, 8: 1}, {14: 1}, 10000000000.0, {}), ({0: 1, 4: 1}, {13: 1, 14: 1}, 90000000.0, {}), ({0: 1, 12: 1}, {9: 1, 14: 1}, 10000000000.0, {}), ({0: 2}, {2: 1}, 7750000000.0, {}), ({0: 1, 13: 1}, {15: 1}, 7000000000.0, {}), ({0: 1, 6: 1}, {13: 1, 15: 1}, 90000000.0, {}), ({0: 1, 9: 1}, {5: 1}, 21000000000.0, {}), ({0: 1, 5: 1}, {6: 1}, 10000000000.0, {}), ({0: 1, 10: 1}, {4: 1}, 20000000000.0, {}), ({0: 1, 11: 1}, {7: 1}, 38000000000.0, {}), ({13: 2}, {6: 1}, 3600000000.0, {}), ({5: 1, 13: 1}, {9: 1, 15: 1}, 6000000000.0, {}), ({10: 1, 13: 1}, {9: 1, 14: 1}, 8200000000.0, {}), ({2: 1, 13: 1}, {0: 1, 15: 1}, 40000000.0, {}), ({13: 1, 6: 1}, {5: 1, 15: 1}, 30000000.0, {}), ({8: 1, 13: 1}, {4: 1}, 20000000000.0, {}), ({4: 1, 13: 1}, {5: 1, 14: 1}, 7500000000.0, {}), ({12: 1, 13: 1}, {11: 1, 14: 1}, 2550000000.0, {}), ({12: 1, 13: 1}, {1: 1, 10: 2}, 5950000000.0, {}), ({11: 1, 13: 1}, {9: 1, 5: 1}, 110000000.0, {}), ({10: 1, 5: 1}, {9: 1, 4: 1}, 80000000.0, {}), ({5: 2}, {9: 1, 6: 1}, 800000.0, {}), ({8: 1, 5: 1}, {9: 1, 14: 1}, 6000000000.0, {}), ({5: 1, 6: 1}, {9: 1, 13: 1, 15: 1}, 0.5, {}), ({4: 1, 5: 1}, {9: 1, 13: 1, 14: 1}, 0.5, {}), ({12: 1, 5: 1}, {9: 2, 14: 1}, 6000000000.0, {}), ({11: 1, 5: 1}, {9: 1, 7: 1}, 500000000.0, {}), ({10: 2, 15: 0}, {9: 1, 6: 1, 14: 2}, 100.0, {15: 2}), ({8: 1, 10: 1, 15: 0}, {9: 1, 14: 2}, 
600000000.0, {15: 1}), ({10: 1, 6: 1}, {9: 1, 13: 1, 14: 1}, 0.13, {}), ({10: 1, 4: 1}, {8: 1, 9: 1, 14: 1}, 0.13, {}), ({10: 1, 12: 1, 15: 0}, {9: 2, 14: 2}, 10000.0, {15: 1}), ({10: 1, 11: 1}, {9: 1, 12: 1}, 1500000000.0, {}), ({8: 2, 15: 0}, {4: 1, 14: 1}, 1000000000.0, {15: 1}), ({8: 1, 9: 1}, {12: 1}, 3600000000.0, {}), ({8: 1, 2: 1}, {0: 1, 14: 1}, 80000000.0, {}), ({8: 1, 6: 1}, {10: 1, 15: 1}, 500000000.0, {}), ({8: 1, 4: 1}, {10: 1, 14: 1}, 400000000.0, {}), ({8: 1, 12: 1}, {10: 2}, 700000000.0, {}), ({8: 1, 11: 1}, {9: 1, 10: 1}, 5000000000.0, {}), ({12: 1}, {8: 1, 9: 1}, 300.0, {}), ({1: 1, 12: 1}, {9: 1, 13: 1}, 90000000000.0, {}), ({12: 1, 6: 1}, {9: 1, 10: 1, 15: 1}, 1600000.0, {}), ({12: 1, 4: 1}, {9: 1, 10: 1, 14: 1}, 890000.0, {}), ({2: 1, 12: 1}, {0: 1, 9: 1, 14: 1}, 250000.0, {}), ({7: 1}, {9: 1, 13: 1}, 110000.0, {}), ({13: 1, 7: 1}, {9: 1, 6: 1}, 5000000000.0, {}), ({7: 2}, {9: 2, 6: 1}, 5000000000.0, {}), ({10: 1, 7: 1}, {9: 2, 14: 1}, 10000000000.0, {}), ({7: 1}, {1: 1, 12: 1}, 328.097819129701, {}), ({1: 1, 12: 1}, {7: 1}, 52000000000.0, {}), ({11: 1, 14: 1}, {10: 1, 5: 1}, 70.0, {}), ({11: 1, 4: 1}, {9: 1, 10: 1, 13: 1}, 2800000.0, {})]\n\nspecies = [Substance.from_formula(s) for s in _species]\nreactions = [\n Reaction({_species[k]: v for k, v in reac.items()},\n {_species[k]: v for k, v in prod.items()}, param,\n {_species[k]: v for k, v in inact_reac.items()})\n for reac, prod, param, inact_reac in _reactions\n]\n\n# radiolytic yields for gamma radiolysis of neat water\nC_H2O = 55.5\nYIELD_CONV = 1.0364e-07 # mol * eV / (J * molecules)\nprod_rxns = [\n Reaction({'H2O': 1}, {'H+': 1, 'OH-': 1}, 0.5 * YIELD_CONV / C_H2O),\n Reaction({'H2O': 1}, {'H+': 1, 'e-(aq)': 1, 'OH': 1}, 2.6 * YIELD_CONV / C_H2O),\n Reaction({'H2O': 1}, {'H': 2, 'H2O2': 1}, 0.66 * YIELD_CONV / C_H2O, {'H2O': 1}),\n Reaction({'H2O': 1}, {'H2': 1, 'H2O2': 1}, 0.74 * YIELD_CONV / C_H2O, {'H2O': 1}),\n Reaction({'H2O': 1}, {'H2': 1, 'OH': 2}, 0.1 * YIELD_CONV / C_H2O, 
{'H2O': 1}),\n    Reaction({'H2O': 1}, {'H2': 3, 'HO2': 2}, 0.04 * YIELD_CONV / C_H2O, {'H2O': 3}),\n]\n\n# The production reactions have hardcoded rates corresponding to 1 Gy/s\nrsys = ReactionSystem(prod_rxns + reactions, species)\nrsys\n\nuni, not_uni = UnimolecularTable.from_ReactionSystem(rsys)\nbi, not_bi = BimolecularTable.from_ReactionSystem(rsys)\nassert not (not_uni & not_bi), \"There are only uni- & bi-molecular reactions in this set\"\n\nuni\n\nbi\n\nodesys, extra = get_odesys(rsys)\nodesys.exprs[:3]  # take a look at the first three ODEs in the system\n\nj = odesys.get_jac()\nj.shape\n\ndef integrate_and_plot(odesys, c0_dict, integrator, ax=None, zero_conc=0, log10_t0=-12, tout=None,\n                       print_info=False, **kwargs):\n    if tout is None:\n        tout = np.logspace(log10_t0, 6)\n    c0 = [c0_dict.get(k, zero_conc)+zero_conc for k in _species]\n    result = odesys.integrate(tout, c0, integrator=integrator, **kwargs)\n    if ax is None:\n        ax = plt.subplot(1, 1, 1)\n    result.plot(ax=ax, title_info=2)\n    ax.set_xscale('log'); ax.set_yscale('log')\n    mpl_outside_legend(ax, prop={'size': 9})\n    if print_info:\n        print({k: v for k, v in result.info.items() if not k.startswith('internal')})\n    return result\n\nc0_dict = defaultdict(float, {'H+': 1e-7, 'OH-': 1e-7, 'H2O': 55.5})\nplt.figure(figsize=(16,6))\nintegrate_and_plot(odesys, c0_dict, integrator='cvode', first_step=1e-14, atol=1e-10, rtol=1e-10, autorestart=5)\nplt.legend()\n_ = plt.ylim([1e-16, 60])\n\ndef integrate_and_plot_scaled(rsys, dep_scaling, *args, **kwargs):\n    integrate_and_plot(get_odesys(\n        rsys, SymbolicSys=ScaledSys, dep_scaling=dep_scaling)[0], *args, **kwargs)", "Let's see how different solvers behave when integrating this problem:", "fig, axes = plt.subplots(2, 2, figsize=(14, 14))\nsolvers = 'cvode', 'odeint', 'gsl', 'scipy'\nfor idx, ax in enumerate(np.ravel(axes)):\n    #if idx == 1:\n    #    continue\n    kw = {'method': 'vode'} if solvers[idx] == 'scipy' else {}\n    integrate_and_plot_scaled(rsys, 1e3, c0_dict, 
solvers[idx], ax=ax, atol=1e-6, rtol=1e-6,\n                              nsteps=4000, first_step=1e-10*extra['max_euler_step_cb'](0, c0_dict), **kw)\n    ax.set_title(solvers[idx])\n\nintegrate_and_plot(odesys, c0_dict, 'cvode', first_step=1e-12)\n\nintegrate_and_plot(odesys, c0_dict, 'scipy')", "One way to avoid negative concentrations is to solve for the logarithm of the concentration instead of the concentration directly:", "from pyodesys.symbolic import symmetricsys\nlogexp = (sympy.log, sympy.exp)\nLogLogSys = symmetricsys(logexp, logexp, exprs_process_cb=lambda exprs: [\n    sympy.powsimp(expr.expand(), force=True) for expr in exprs])\nloglogsys, _ = get_odesys(rsys, SymbolicSys=LogLogSys)\n\nloglogsys.exprs[:3]\n\nintegrate_and_plot(loglogsys, c0_dict, 'gsl', zero_conc=1e-26, first_step=1e-3, log10_t0=-13)", "It works, but at the cost of significant overhead (a much larger number of function evaluations was needed). Another option is to scale the problem:", "ssys, _ = get_odesys(rsys, SymbolicSys=ScaledSys, dep_scaling=1e16)", "For speed, we will use natively compiled C++ code for the numerical evaluations:", "from chempy.kinetics._native import get_native\nnsys = get_native(rsys, ssys, 'cvode')\n\nintegrate_and_plot(nsys, c0_dict, 'cvode', nsteps=96000,\n                   error_outside_bounds=True, get_dx_max_factor=-1.0, autorestart=2, atol=1e-9, rtol=1e-10)\n\nintegrate_and_plot(nsys, c0_dict, 'cvode', nsteps=96000, tout=(1e-16, 1e6),\n                   error_outside_bounds=True, get_dx_max_factor=-1.0, autorestart=2, atol=1e-9, rtol=1e-10,\n                   print_info=True, first_step=1e-16)", "We can also access the generated C++ code (which can be handy if it needs to be run where Python is not available)", "print(''.join(open(next(filter(lambda s: s.endswith('.cpp'), nsys._native._written_files))).readlines()[:42]))\n\ninit_conc_H2O2 = c0_dict.copy()\ninit_conc_H2O2['H2O2'] = 0.01\nodesys2 = ScaledSys.from_other(odesys, lower_bounds=0, dep_scaling=1e8, indep_scaling=1e-6)\nres = integrate_and_plot(odesys2, init_conc_H2O2, 
integrator='cvode', nsteps=500,\n first_step=1e-16, atol=1e-5, rtol=1e-10, autorestart=1)", "In order to plot the most important reactions vs. time we can use a function provided by ChemPy:", "from chempy.kinetics.analysis import plot_reaction_contributions\n\nsks = ['H2O2', 'OH', 'HO2']\nfig, axes = plt.subplots(1, len(sks), figsize=(16, 10))\nselection = res.xout > 1e0\nplot_reaction_contributions(res, rsys, extra['rate_exprs_cb'], selection=selection,\n substance_keys=sks, axes=axes, combine_equilibria=True, total=True)\nplt.tight_layout()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
willettk/insight
notebooks/panama_papers.ipynb
apache-2.0
[ "Exploring the ICIJ Offshore Leaks database\nKyle Willett\n20 Jul 2016\nOne basic question is to assess the demographics of the countries involved, many of which fit historical definitions of tax havens. In the past, these include countries with small populations, overseas territories, and unstable or non-democratic governments. \nI examined the data in the Offshore ICIJ database to see if the companies involved are associated with countries fitting these characteristics, or whether there's a broader set of locations involved. Matching the address nodes against population and business count identifies a group of roughly a dozen nations responsible for a disproportionate fraction of offshore businesses, particularly focusing on small island nations and British overseas dependencies. The British Virgin Islands had by far the highest ratio of offshore business to total population. Institutions in these countries should be high-priority candidates for follow-up investigations into illegal offshore activity. \nWhile processing the data, I also devised a method to better clean the data and merge nodes that were originally designated as separate locations. Using text-similarity measures, 3% of the address nodes can be merged in the ICIJ database.", "%matplotlib inline\n\n# Import packages\n\nimport pandas as pd\nimport numpy as np\nimport seaborn as sns\nfrom matplotlib import pyplot as plt\n\n# Load in data from the Panama Papers ICIJ site (set of CSV files)\n\naddresses = pd.read_csv(\"Addresses.csv\",\n                 dtype={'address':str,\n                        'icij_id':str,\n                        'valid_until':str,\n                        'country_codes':str,\n                        'countries':str,\n                        'node_id':np.int64,\n                        'sourceID':str})", "Look briefly at structure and content of the addresses file.", "addresses.head()", "While the number of addresses and companies at each location can be counted, that doesn't give any context as to whether the smaller countries (with more distinct characteristics) are represented at unusual frequencies. 
As an initial estimate, we'll compare it to the country's total population (future metrics could include things like GDP, international memberships, numbers of banks, etc).", "# Load 2015 population data from World Bank\n\npopfile = \"API_SP.POP.TOTL_DS2_en_csv_v2.csv\"\ncountry_name = []\ncountry_code = []\npop_2015 = []\nwith open(popfile,'r') as f:\n    for idx,line in enumerate(f):\n        if idx > 5:\n            s = line.split(\",\")\n            country_name.append(s[0].strip()[1:-1])\n            country_code.append(s[1].strip()[1:-1])\n            pop_2015.append(int((s[-2].strip()[1:-1]).split(\".\")[0]))", "Link the two datasets (Panama Papers and population)", "# Join the dataframes\n\ndfpop = pd.DataFrame(index=country_code,data={\"country_name\":country_name,\"population\":pop_2015})\n\n# Add a new column that counts the number of addresses in each country\n\ncounted = addresses.groupby('country_codes').count()['countries']\njoined = dfpop.join(counted)\njoined.rename(columns = {'countries':'address_count'}, inplace = True)", "Now that we have all the data in a single frame, let's look at the results. Start by simply plotting the population against the number of addresses per country. If there's no strong preference toward tax havens, the two values should be strongly correlated.", "# Plot as a scatter plot\n\ngood = np.isfinite(joined.address_count)\njoined[good].plot(x='population',y='address_count',\n                  kind='scatter',\n                  loglog=True,\n                  figsize=(8,8));", "There's definitely a trend (even within log space), but quite a lot of scatter and outliers. For example, one country in the top 10 for overall address count is in the bottom 5 for population.", "# Check the actual correlation value\n\njoined.corr()", "This value (0.561) tells us that the two quantities are only moderately correlated; population alone does not explain which countries host the most offshore addresses. 
\nThat gives the overall distribution; let's look at the countries dominating the high and low ends of the data.", "print \"Countries with the highest number of addresses\\n\\n\",joined.sort_values(\n    by=\"address_count\",ascending=False).head()\nprint \"\"\nprint \"Countries with the lowest number of addresses\\n\\n\",joined.sort_values(\n    by=\"address_count\",ascending=True).head()", "So the top five correlate with three of the most populous countries in the world (US, China, Russia) and two smaller countries that are known for being business and financial capitals (Hong Kong, Singapore). The bottom five include a US dependency, two of the smallest countries on the African mainland, a European micronation, and the relatively isolated Asian nation of Bhutan.\nSo let's clarify the question further, and move away from absolute numbers to a ratio. What are the countries that are statistically over-represented in terms of their business-to-population ratio?", "ratio = joined.address_count / joined.population\njoined['pop_bus_ratio'] = ratio\n\n# Plot the results (in log space) as a histogram\n\nfig,ax = plt.subplots(1,1,figsize=(8,6))\np = sns.distplot(np.log10(joined[good].pop_bus_ratio), kde=False, color=\"b\",ax=ax)\np.set_xlabel('log ratio of number of addresses to population')\np.set_ylabel('Count');", "So even by eye, there's a clear skew and a long tail to the right of the distribution. 
The high value here indicates countries with many more addresses/businesses per capita than expected.", "# What are the names of these countries?\n\njoined.sort_values(\"pop_bus_ratio\",ascending=False)[:15]", "This is more like the definition above; numerous dependencies, lots of micronations and small island countries, and very small average populations overall.", "# Plot the total population versus the ratio constructed above.\n\njoined[good].plot(x='population',y='pop_bus_ratio',\n                  kind='scatter',\n                  loglog=True,\n                  figsize=(8,8));", "So now the data are separating more clearly. Instead of picking a sample by eye, let's try a clustering algorithm on these features and see if the extremes select a similar set of data.", "# Use the k-means clustering algorithm from scikit-learn\n\nfrom sklearn.cluster import KMeans\n\nn_clusters = 5\nX = np.log10(np.array(joined[good][['population','pop_bus_ratio']]))\ny_pred = KMeans(n_clusters=n_clusters).fit_predict(X)\n\nplt.figure(figsize=(8, 8))\nplt.subplot(111)\nplt.scatter(X[:, 0], X[:, 1], c=y_pred,cmap=plt.cm.Set1,s=40)\nplt.xlabel('population')\nplt.ylabel('pop_bus_ratio');\n\n# Look at the countries corresponding to the top left cluster\n\ncluster_topleft = 0\nxmean = max(X[:,0])\nfor i in range(n_clusters):\n    avgpop = np.mean(X[y_pred == i,0])\n    if avgpop < xmean:\n        xmean = avgpop\n        cluster_topleft = i\n        \nprint joined[good][y_pred == cluster_topleft]", "Almost uniformly, these are countries with very small populations (< 100,000). 11/14 are small island countries, and 6/14 are dependencies of the United Kingdom (but with different tax and financial status, which presumably strongly affects the likelihood of choosing to incorporate there). The latter category is really over-represented; the UN List of Non-Self-Governing Territories has only 17 total entries, of which 6 appear in this smaller list of 14. 
The odds of that happening independently are essentially zero ($p\\simeq10^{-20}$). \nThe (British) Virgin Islands deserve special mention; their ratio of addresses to population is roughly a factor of 10 higher than that of any other country in the database, with 1 Offshore Leaks business for roughly every 7.3 inhabitants. There could be many reasons for this (including geographical proximity to the Mossack Fonseca offices in Panama), but it's a very clear starting point for further investigations into a wide range of companies.", "# Here are the top 15 countries in the list above colored in bright orange on a world map. Spot any?\n\n# Made with https://www.amcharts.com/visited_countries/#MC,LI,IM,GI,KN,TC,VG,BM,DM,KY,SC,MH,NR\n\nfrom IPython.display import Image\nImage(url=\"http://i.imgur.com/hpgZsx1.png\")\n\n# Zooming way into the Caribbean, a few of the entries are finally visible.\n\nImage(url=\"http://i.imgur.com/dt5anpZ.png\")", "Address disambiguation\nIterating a little bit more on the addresses would potentially be very useful; small differences between the strings can point to the same place and cause potential overcounting (or make it difficult to merge records that should point to the same node). 
\nCan we clean the data any further beyond the original state?", "import jellyfish\n\n# Group by country; there shouldn't be any need to compare addresses in different countries,\n# since that would be computationally expensive and we assume that would require a large shift\n# in the difference between text strings (ie, extremely unlikely to change 1 letter and go from a real address\n# in North Korea to one in South Africa).\n\ngrouped = addresses.groupby(\"country_codes\")\n\n# As a test case, let's look at addresses in the last country in the group: Zimbabwe.\n\nzw = list(addresses[addresses['country_codes'] == 'ZWE'].address)\nprint len(zw)\n\ndl = []\nfor idx,z in enumerate(zw):\n    for z2 in zw[idx+1:]:\n        try:\n            dl_ = jellyfish.damerau_levenshtein_distance(unicode(z,'utf-8'),unicode(z2,'utf-8'))\n            dl.append(dl_)\n        except:\n            pass\n\nfig,ax = plt.subplots(1,1,figsize=(8,6))\np = sns.distplot(dl, kde=False, color=\"b\",ax=ax)\np.set_xlabel('Damerau-Levenshtein distance')\np.set_ylabel('Count of Zimbabwean addresses');", "There seems to be a clear peak of duplicates at a D-L distance of only a couple ($<3$). Try a few different thresholds and manually evaluate the true positive and false positive rates.", "for n in (1,2,3,4,5):\n    print \"\\n DL <= {}\".format(n)\n    for idx,z in enumerate(zw):\n        for z2 in zw[idx+1:]:\n            try:\n                dl_ = jellyfish.damerau_levenshtein_distance(unicode(z,'utf-8'),unicode(z2,'utf-8'))\n                if dl_ == n:\n                    print \"{} -> {}\".format(z,z2)\n                    break\n            except:\n                pass", "Assess each address pair manually; simple transpositions or errors are a true positive. A situation where the house number changed, though, could be either a transcription error OR two truly different locations on the same street. To be conservative, the latter will be counted as a false positive.\nDL &lt;= 1: TP = 5, FP = 0\nDL &lt;= 2: TP = 9, FP = 1\nDL &lt;= 3: TP = 12, FP = 4\nDL &lt;= 4: TP = 12, FP = 6\nDL &lt;= 5: TP = 17, FP = 7\nThe false positive rate increases sharply at DL <= 3. 
Acting conservatively, we'll only use a DL-distance of 2 for the sample.", "# Run algorithm on the entire sample. How many nodes can we merge in the set?\n\ndl_threshold = 2\n\nduplicate_addresses = {}\nfailed_to_parse = 0\nfor g in grouped:\n name = g[0]\n print \"Processing {}\".format(name)\n duplicate_addresses[name] = 0\n lst = list(g[1].address)\n if len(lst) > 1:\n for idx,add1 in enumerate(lst):\n # Don't compare twice and double-count.\n for add2 in lst[idx+1:]:\n try:\n dl = jellyfish.damerau_levenshtein_distance(unicode(add1,'utf-8'),unicode(add2,'utf-8'))\n if dl <= dl_threshold:\n duplicate_addresses[name] += 1\n break\n except:\n failed_to_parse += 1\n\nprint duplicate_addresses\n\nprint \"Able to merge {} addresses in system ({:.1f}%).\".format(sum(duplicate_addresses.values()),\n sum(duplicate_addresses.values())*100./len(addresses))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ankoorb/scipy2015_tutorial
notebooks/3. Fitting Regression Models.ipynb
cc0-1.0
[ "%matplotlib inline\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nfrom scipy.optimize import fmin\n\nnp.random.seed(1789)\n\nfrom IPython.core.display import HTML\ndef css_styling():\n styles = open(\"styles/custom.css\", \"r\").read()\n return HTML(styles)\ncss_styling()", "Regression modeling\nA general, primary goal of many statistical data analysis tasks is to relate the influence of one variable on another. \nFor example: \n\nhow different medical interventions influence the incidence or duration of disease\nhow baseball player's performance varies as a function of age.\nhow test scores are correlated with tissue LSD concentration", "from io import StringIO\n\ndata_string = \"\"\"\nDrugs\tScore\n0\t1.17\t78.93\n1\t2.97\t58.20\n2\t3.26\t67.47\n3\t4.69\t37.47\n4\t5.83\t45.65\n5\t6.00\t32.92\n6\t6.41\t29.97\n\"\"\"\n\nlsd_and_math = pd.read_table(StringIO(data_string), sep='\\t', index_col=0)\nlsd_and_math", "Taking LSD was a profound experience, one of the most important things in my life --Steve Jobs", "lsd_and_math.plot(x='Drugs', y='Score', style='ro', legend=False, xlim=(0,8))", "We can build a model to characterize the relationship between $X$ and $Y$, recognizing that additional factors other than $X$ (the ones we have measured or are interested in) may influence the response variable $Y$.\n\n$M(Y|X) = E(Y|X)$\n$M(Y|X) = Pr(Y=1|X)$\n\nIn general,\n$$M(Y|X) = f(X)$$\nfor linear regression\n$$M(Y|X) = f(X\\beta)$$\nwhere $f$ is some function, for example a linear function:\n<div style=\"font-size: 150%;\"> \n$y_i = \\beta_0 + \\beta_1 x_{1i} + \\ldots + \\beta_k x_{ki} + \\epsilon_i$\n</div>\n\nRegression is a weighted sum of independent predictors\nand $\\epsilon_i$ accounts for the difference between the observed response $y_i$ and its prediction from the model $\\hat{y_i} = \\beta_0 + \\beta_1 x_i$. 
This is sometimes referred to as process uncertainty.\nInterpretation: coefficients represent the change in Y for a unit increment of the predictor X.\nTwo important regression assumptions:\n\nnormal errors\nhomoscedasticity\n\nParameter estimation\nWe would like to select $\\beta_0, \\beta_1$ so that the difference between the predictions and the observations is zero, but this is not usually possible. Instead, we choose a reasonable criterion: the smallest sum of the squared differences between $\\hat{y}$ and $y$.\n<div style=\"font-size: 120%;\"> \n$$R^2 = \\sum_i (y_i - [\\beta_0 + \\beta_1 x_i])^2 = \\sum_i \\epsilon_i^2 $$ \n</div>\n\nSquaring serves two purposes: \n\nto prevent positive and negative values from cancelling each other out\nto strongly penalize large deviations. \n\nWhether or not the latter is desired depends on the goals of the analysis.\nIn other words, we will select the parameters that minimize the squared error of the model.", "sum_of_squares = lambda theta, x, y: np.sum((y - theta[0] - theta[1]*x) ** 2)\n\nsum_of_squares([0,1], lsd_and_math.Drugs, lsd_and_math.Score)\n\nx, y = lsd_and_math.T.values\nb0, b1 = fmin(sum_of_squares, [0,1], args=(x,y))\nb0, b1\n\nax = lsd_and_math.plot(x='Drugs', y='Score', style='ro', legend=False, xlim=(0,8))\nax.plot([0,10], [b0, b0+b1*10])\n\nax = lsd_and_math.plot(x='Drugs', y='Score', style='ro', legend=False, xlim=(0,8), ylim=(20, 90))\nax.plot([0,10], [b0, b0+b1*10])\nfor xi, yi in zip(x,y):\n    ax.plot([xi]*2, [yi, b0+b1*xi], 'k:')", "Alternative loss functions\nMinimizing the sum of squares is not the only criterion we can use; it is just a very popular (and successful) one. 
For example, we can try to minimize the sum of absolute differences:", "sum_of_absval = lambda theta, x, y: np.sum(np.abs(y - theta[0] - theta[1]*x))\n\nb0, b1 = fmin(sum_of_absval, [0,0], args=(x,y))\nprint('\\nintercept: {0:.2}, slope: {1:.2}'.format(b0,b1))\nax = lsd_and_math.plot(x='Drugs', y='Score', style='ro', legend=False, xlim=(0,8))\nax.plot([0,10], [b0, b0+b1*10])", "We are not restricted to a straight-line regression model; we can represent a curved relationship between our variables by introducing polynomial terms. For example, a quadratic model:\n<div style=\"font-size: 150%;\"> \n$y_i = \\beta_0 + \\beta_1 x_i + \\beta_2 x_i^2 + \\epsilon_i$\n</div>", "sum_squares_quad = lambda theta, x, y: np.sum((y - theta[0] - theta[1]*x - theta[2]*(x**2)) ** 2)\n\nb0,b1,b2 = fmin(sum_squares_quad, [1,1,-1], args=(x,y))\nprint('\\nintercept: {0:.2}, x: {1:.2}, x2: {2:.2}'.format(b0,b1,b2))\nax = lsd_and_math.plot(x='Drugs', y='Score', style='ro', legend=False, xlim=(0,8))\nxvals = np.linspace(0, 8, 100)\nax.plot(xvals, b0 + b1*xvals + b2*(xvals**2))", "Although a polynomial model characterizes a nonlinear relationship, it is a linear problem in terms of estimation. That is, the regression model $f(y | x)$ is linear in the parameters.\nFor some data, it may be reasonable to consider polynomials of order > 2. 
For example, consider the relationship between the number of spawning salmon and the number of juveniles recruited into the population the following year; one would expect the relationship to be positive, but not necessarily linear.", "sum_squares_cubic = lambda theta, x, y: np.sum((y - theta[0] - theta[1]*x - theta[2]*(x**2) \n                                                - theta[3]*(x**3)) ** 2)\n\nsalmon = pd.read_table(\"../data/salmon.dat\", delim_whitespace=True, index_col=0)\nplt.plot(salmon.spawners, salmon.recruits, 'r.')\nb0,b1,b2,b3 = fmin(sum_squares_cubic, [0,1,-1,0], args=(salmon.spawners, salmon.recruits))\nxvals = np.arange(500)\nplt.plot(xvals, b0 + b1*xvals + b2*(xvals**2) + b3*(xvals**3))", "Linear Regression with scikit-learn\nIn practice, we need not fit least squares models by hand because they are implemented generally in packages such as scikit-learn and statsmodels. For example, the scikit-learn package implements least squares models in its LinearRegression class:", "from sklearn import linear_model\n\nstraight_line = linear_model.LinearRegression()\nstraight_line.fit(x[:, np.newaxis], y)\n\nstraight_line.coef_\n\nstraight_line.intercept_\n\nplt.plot(x, y, 'ro')\nplt.plot(x, straight_line.predict(x[:, np.newaxis]), color='blue',\n         linewidth=3)", "For more general regression model building, it's helpful to use a tool for describing statistical models, called patsy. With patsy, it is easy to specify the desired combinations of variables for any particular analysis, using an \"R-like\" syntax. 
patsy parses the formula string, and uses it to construct the appropriate design matrix for the model.\nFor example, the quadratic model specified by hand above can be coded as:", "from patsy import dmatrix\n\nX = dmatrix('salmon.spawners + I(salmon.spawners**2)')", "The dmatrix function returns the design matrix, which can be passed directly to the LinearRegression fitting method.", "poly_line = linear_model.LinearRegression(fit_intercept=False)\npoly_line.fit(X, salmon.recruits)\n\npoly_line.coef_\n\nplt.plot(salmon.spawners, salmon.recruits, 'ro')\nfrye_range = np.arange(500)\nplt.plot(frye_range, poly_line.predict(dmatrix('frye_range + I(frye_range**2)')), color='blue')", "Generalized linear models\nOften our data violates one or more of the linear regression assumptions:\n\nnon-linear\nnon-normal error distribution\nheteroskedasticity\n\nthis forces us to generalize the regression model in order to account for these characteristics.\nAs a motivating example, consider the Olympic medals data that we compiled earlier in the tutorial.", "medals = pd.read_csv('../data/medals.csv')\nmedals.head()", "We expect a positive relationship between population and awarded medals, but the data in their raw form are clearly not amenable to linear regression.", "medals.plot(x='population', y='medals', kind='scatter')", "Part of the issue is the scale of the variables. For example, countries' populations span several orders of magnitude. 
We can correct this by using the logarithm of population, which we have already calculated.", "medals.plot(x='log_population', y='medals', kind='scatter')", "This is an improvement, but the relationship is still not adequately modeled by least-squares regression.", "linear_medals = linear_model.LinearRegression()\nX = medals.log_population[:, np.newaxis]\nlinear_medals.fit(X, medals.medals)\n\nax = medals.plot(x='log_population', y='medals', kind='scatter')\nax.plot(medals.log_population, linear_medals.predict(X), color='red',\n linewidth=2)", "This is due to the fact that the response data are counts. As a result, they tend to have characteristic properties. \n\ndiscrete\npositive\nvariance grows with mean\n\nto account for this, we can do two things: (1) model the medal count on the log scale and (2) assume Poisson errors, rather than normal.\nRecall the Poisson distribution from the previous section:\n$$p(y)=\\frac{e^{-\\lambda}\\lambda^y}{y!}$$\n\n$Y={0,1,2,\\ldots}$\n$\\lambda > 0$\n\n$$E(Y) = \\text{Var}(Y) = \\lambda$$\nSo, we will model the logarithm of the expected value as a linear function of our predictors:\n$$\\log(\\lambda) = X\\beta$$\nIn this context, the log function is called a link function. 
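One practical detail about the Poisson log-likelihood coded next: the log(Y_i!) term does not involve the coefficients, so dropping it (keeping only the "kernel") shifts the objective by a constant and leaves the optimum unchanged. A numerical sketch on synthetic data (scipy assumed available; the data here are made up):

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(0)
X = np.column_stack([rng.normal(size=50), np.ones(50)])
y = rng.poisson(np.exp(X @ np.array([0.3, 1.0])))

# kernel negative log-likelihood, with log(y!) dropped (as in the text)
nll_kernel = lambda b: -(-np.exp(X @ b) + y * (X @ b)).sum()
# full negative log-likelihood via scipy
nll_full = lambda b: -poisson.logpmf(y, np.exp(X @ b)).sum()

# the gap between the two is sum(log(y!)) -- identical for any beta
b_a, b_b = np.array([0.3, 1.0]), np.array([-0.2, 0.5])
gap_a = nll_full(b_a) - nll_kernel(b_a)
gap_b = nll_full(b_b) - nll_kernel(b_b)
print(gap_a, gap_b)
```

Because the gap does not depend on the coefficients, minimizing the kernel finds the same estimates as minimizing the full negative log-likelihood.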
This transformation implies the mean of the Poisson is:\n$$\\lambda = \\exp(X\\beta)$$\nWe can plug this into the Poisson likelihood and use maximum likelihood to estimate the regression covariates $\\beta$.\n$$\\log L = \\sum_{i=1}^n -\\exp(X_i\\beta) + Y_i (X_i \\beta) - \\log(Y_i!)$$\nAs we have already done, we just need to code the kernel of this likelihood, and optimize!", "# Poisson negative log-likelihood\npoisson_loglike = lambda beta, X, y: -(-np.exp(X.dot(beta)) + y*X.dot(beta)).sum()", "Let's use the assign method to add a column of ones to the design matrix.", "poisson_loglike([0,1], medals[['log_population']].assign(intercept=1), medals.medals)", "We will use Nelder-Mead to minimize the negative log-likelihood.", "b1, b0 = fmin(poisson_loglike, [0,1], args=(medals[['log_population']].assign(intercept=1).values, \n medals.medals.values))\n\nb0, b1", "The resulting fit looks reasonable.", "ax = medals.plot(x='log_population', y='medals', kind='scatter')\nxvals = np.arange(12, 22)\nax.plot(xvals, np.exp(b0 + b1*xvals), 'r--')", "Exercise: Multivariate GLM\nAdd the OECD indicator variable to the model, and estimate the model coefficients.", "# Write your answer here", "Interactions among variables\nInteractions imply that the effect of one covariate $X_1$ on $Y$ depends on the value of another covariate $X_2$.\n$$M(Y|X) = \\beta_0 + \\beta_1 X_1 + \\beta_2 X_2 +\\beta_3 X_1 X_2$$\nthe effect of a unit increase in $X_1$:\n$$M(Y|X_1+1, X_2) - M(Y|X_1, X_2)$$\n$$\\begin{align}\n&= \\beta_0 + \\beta_1 (X_1 + 1) + \\beta_2 X_2 +\\beta_3 (X_1 + 1) X_2\n- [\\beta_0 + \\beta_1 X_1 + \\beta_2 X_2 +\\beta_3 X_1 X_2] \\\n&= \\beta_1 + \\beta_3 X_2\n\\end{align}$$", "ax = medals[medals.oecd==1].plot(x='log_population', y='medals', kind='scatter', alpha=0.8)\nmedals[medals.oecd==0].plot(x='log_population', y='medals', kind='scatter', color='red', alpha=0.5, ax=ax)", "Interaction can be interpreted as:\n\n$X_1$ interacts with $X_2$\n$X_1$ modifies the effect of 
$X_2$\n$X_2$ modifies the effect of $X_1$\n$X_1$ and $X_2$ are non-additive or synergistic\n\nLet's construct a model that predicts medal count based on population size and OECD status, as well as the interaction. We can use patsy to set up the design matrix.", "y = medals.medals\nX = dmatrix('log_population * oecd', data=medals)\nX", "Now, fit the model.", "interaction_params = fmin(poisson_loglike, [0,1,1,0], args=(X, y))\n\ninteraction_params", "Notice anything odd about these estimates?\nThe main effect of OECD is negative, which seems counter-intuitive. This is because the variable is interpreted as the OECD effect when the log-population is zero. This is not particularly meaningful.\nWe can improve the interpretability of this parameter by centering the log-population variable prior to entering it into the model. This will result in the OECD main effect being interpreted as the marginal effect of being an OECD country for an average-sized country.", "y = medals.medals\nX = dmatrix('center(log_population) * oecd', data=medals)\nX\n\nfmin(poisson_loglike, [0,1,1,0], args=(X, y))", "Model Selection\nHow do we choose among competing models for a given dataset? More parameters are not necessarily better, from the standpoint of model fit. For example, fitting a 6th order polynomial to the LSD example certainly results in an overfit.", "def calc_poly(params, data):\n x = np.c_[[data**i for i in range(len(params))]]\n return np.dot(params, x)\n\nx, y = lsd_and_math.T.values\n \nsum_squares_poly = lambda theta, x, y: np.sum((y - calc_poly(theta, x)) ** 2)\nbetas = fmin(sum_squares_poly, np.zeros(7), args=(x,y), maxiter=1e6)\nplt.plot(x, y, 'ro')\nxvals = np.linspace(0, max(x), 100)\nplt.plot(xvals, calc_poly(betas, xvals))", "One approach is to use an information-theoretic criterion to select the most appropriate model. 
For example, Akaike's Information Criterion (AIC) balances the fit of the model (in terms of the likelihood) with the number of parameters required to achieve that fit. We can easily calculate AIC as:\n$$AIC = n \\log(\\hat{\\sigma}^2) + 2p$$\nwhere $p$ is the number of parameters in the model and $\\hat{\\sigma}^2 = RSS/(n-p-1)$.\nNotice that as the number of parameters increases, the residual sum of squares goes down, but the second term (a penalty) increases.\nAIC is a metric of information distance between a given model and a notional \"true\" model. Since we don't know the true model, the AIC value itself is not meaningful in an absolute sense, but is useful as a relative measure of model quality.\nTo apply AIC to model selection, we choose the model that has the lowest AIC value.", "n = len(x)\n\naic = lambda rss, p, n: n * np.log(rss/(n-p-1)) + 2*p\n\nRSS1 = sum_of_squares(fmin(sum_of_squares, [0,1], args=(x,y)), x, y)\nRSS2 = sum_squares_quad(fmin(sum_squares_quad, [1,1,-1], args=(x,y)), x, y)\n\nprint('\\nModel 1: {0}\\nModel 2: {1}'.format(aic(RSS1, 2, n), aic(RSS2, 3, n)))", "Hence, on the basis of \"information distance\", we would select the 2-parameter (linear) model.\nExercise: Olympic medals model selection\nUse AIC to select the best model from the following set of Olympic medal prediction models:\n\npopulation only\npopulation and OECD\ninteraction model\n\nFor these models, use the alternative form of AIC, which uses the log-likelihood rather than the residual sum of squares:\n$$AIC = -2 \\log(L) + 2p$$", "# Write your answer here", "Logistic Regression\nFitting a line to the relationship between two variables using the least squares approach is sensible when the variable we are trying to predict is continuous, but what about when the data are dichotomous?\n\nmale/female\npass/fail\ndied/survived\n\nLet's consider the problem of predicting survival in the Titanic disaster, based on our available information. 
For example, let's say that we want to predict survival as a function of the fare paid for the journey.", "titanic = pd.read_excel(\"../data/titanic.xls\", \"titanic\")\ntitanic.name\n\njitter = np.random.normal(scale=0.02, size=len(titanic))\n\nplt.scatter(np.log(titanic.fare), titanic.survived + jitter, alpha=0.3)\nplt.yticks([0,1])\nplt.ylabel(\"survived\")\nplt.xlabel(\"log(fare)\")", "I have added random jitter on the y-axis to help visualize the density of the points, and have plotted fare on the log scale.\nClearly, fitting a line through this data makes little sense, for several reasons. First, for most values of the predictor variable, the line would predict values that are not zero or one. Second, it would seem odd to choose least squares (or similar) as a criterion for selecting the best line.", "x = np.log(titanic.fare[titanic.fare>0])\ny = titanic.survived[titanic.fare>0]\nbetas_titanic = fmin(sum_of_squares, [1,1], args=(x,y))\n\njitter = np.random.normal(scale=0.02, size=len(titanic))\nplt.scatter(np.log(titanic.fare), titanic.survived + jitter, alpha=0.3)\nplt.yticks([0,1])\nplt.ylabel(\"survived\")\nplt.xlabel(\"log(fare)\")\nplt.plot([0,7], [betas_titanic[0], betas_titanic[0] + betas_titanic[1]*7.])", "If we look at this data, we can see that for most values of fare, there are some individuals that survived and some that did not. However, notice that the cloud of points is denser on the \"survived\" (y=1) side for larger values of fare than on the \"died\" (y=0) side.\nStochastic model\nRather than model the binary outcome explicitly, it makes sense instead to model the probability of death or survival in a stochastic model. Probabilities are measured on a continuous [0,1] scale, which may be more amenable for prediction using a regression line. 
We need to consider a different probability model for this exercise, however; let's consider the Bernoulli distribution as a generative model for our data:\n<div style=\"font-size: 120%;\"> \n$$f(y|p) = p^{y} (1-p)^{1-y}$$ \n</div>\n\nwhere $y = {0,1}$ and $p \\in [0,1]$. So, this model predicts whether $y$ is zero or one as a function of the probability $p$. Notice that when $y=1$, the $1-p$ term disappears, and when $y=0$, the $p$ term disappears.\nSo, the model we want to fit should look something like this:\n<div style=\"font-size: 120%;\"> \n$$p_i = \\beta_0 + \\beta_1 x_i + \\epsilon_i$$\n</div>\n\nHowever, since $p$ is constrained to be between zero and one, it is easy to see where a linear (or polynomial) model might predict values outside of this range. As with the Poisson regression, we can modify this model slightly by using a link function to transform the probability to have an unbounded range on a new scale. Specifically, we can use a logit transformation as our link function:\n<div style=\"font-size: 120%;\"> \n$$\\text{logit}(p) = \\log\\left[\\frac{p}{1-p}\\right] = x$$\n</div>\n\nHere's a plot of $p/(1-p)$", "logit = lambda p: np.log(p/(1.-p))\nunit_interval = np.linspace(0,1)\nplt.plot(unit_interval/(1-unit_interval), unit_interval)\nplt.xlabel(r'$p/(1-p)$')\nplt.ylabel('p');", "And here's the logit function:", "plt.plot(logit(unit_interval), unit_interval)\nplt.xlabel('logit(p)')\nplt.ylabel('p');", "The inverse of the logit transformation is:\n<div style=\"font-size: 150%;\"> \n$$p = \\frac{1}{1 + \\exp(-x)}$$\n</div>", "invlogit = lambda x: 1. / (1 + np.exp(-x))", "So, now our model is:\n<div style=\"font-size: 120%;\"> \n$$\\text{logit}(p_i) = \\beta_0 + \\beta_1 x_i + \\epsilon_i$$\n</div>\n\nWe can fit this model using maximum likelihood. 
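Before writing down the likelihood, a quick sanity check that the `invlogit` function just defined really inverts `logit` across the unit interval:

```python
import numpy as np

logit = lambda p: np.log(p / (1. - p))
invlogit = lambda x: 1. / (1 + np.exp(-x))

# round-trip a grid of probabilities through logit and back
p = np.linspace(0.01, 0.99, 9)
roundtrip = invlogit(logit(p))
print(roundtrip)
```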
Our likelihood, again based on the Bernoulli model, is:\n<div style=\"font-size: 120%;\"> \n$$L(y|p) = \\prod_{i=1}^n p_i^{y_i} (1-p_i)^{1-y_i}$$\n</div>\n\nwhich, on the log scale, is:\n<div style=\"font-size: 120%;\"> \n$$l(y|p) = \\sum_{i=1}^n y_i \\log(p_i) + (1-y_i)\\log(1-p_i)$$\n</div>\n\nWe can easily implement this in Python, keeping in mind that fmin minimizes, rather than maximizes, functions:", "def logistic_like(theta, x, y):\n \n p = invlogit(theta[0] + theta[1] * x)\n \n # Return negative of log-likelihood\n return -np.sum(y * np.log(p) + (1-y) * np.log(1 - p))", "Remove null values from variables (a bad idea, which we will show later) ...", "x, y = titanic[titanic.fare.notnull()][['fare', 'survived']].values.T", "... and fit the model.", "b0, b1 = fmin(logistic_like, [0.5,0], args=(x,y))\nb0, b1\n\njitter = np.random.normal(scale=0.01, size=len(x))\nplt.plot(x, y+jitter, 'r.', alpha=0.3)\nplt.yticks([0,.25,.5,.75,1])\nxvals = np.linspace(0, 600)\nplt.plot(xvals, invlogit(b0+b1*xvals))", "As with our least squares model, we can easily fit logistic regression models in scikit-learn, in this case using the LogisticRegression class.", "logistic = linear_model.LogisticRegression()\nlogistic.fit(x[:, np.newaxis], y)\n\nlogistic.coef_", "Exercise: multivariate logistic regression\nWhich other variables might be relevant for predicting the probability of surviving the Titanic? Generalize the model likelihood to include 2 or 3 other covariates from the dataset.", "# Write your answer here" ]
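One possible starting point for the exercise is to generalize `logistic_like` to accept a design matrix with one column per covariate. The sketch below checks the idea on synthetic data (the covariates and coefficients here are made up, not Titanic columns):

```python
import numpy as np

invlogit = lambda x: 1. / (1 + np.exp(-x))

def logistic_like_multi(theta, X, y):
    # X: (n, k) design matrix (first column of ones for the intercept),
    # theta: (k,) coefficient vector; returns the negative log-likelihood
    p = invlogit(X @ theta)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

# synthetic check: the generating coefficients should score a lower
# negative log-likelihood than an all-zero coefficient vector
rng = np.random.default_rng(2)
X = np.column_stack([np.ones(200), rng.normal(size=200), rng.normal(size=200)])
beta_true = np.array([0.5, 1.5, -1.0])
y = (rng.uniform(size=200) < invlogit(X @ beta_true)).astype(float)

nll_true = logistic_like_multi(beta_true, X, y)
nll_null = logistic_like_multi(np.zeros(3), X, y)
print(nll_true, nll_null)
```

The same function can then be handed to `fmin` with the real design matrix, just as in the univariate fit.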
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
Diyago/Machine-Learning-scripts
DEEP LEARNING/NLP/LSTM RNN/Toxic multiclass prediction Glove + Bidirection LSTM.ipynb
apache-2.0
[ "import sys, os, re, csv, codecs, numpy as np, pandas as pd\n\nfrom keras.preprocessing.text import Tokenizer\nfrom keras.preprocessing.sequence import pad_sequences\nfrom keras.layers import Dense, Input, LSTM, Embedding, Dropout, Activation\nfrom keras.layers import Bidirectional, GlobalMaxPool1D\nfrom keras.models import Model\nfrom keras import initializers, regularizers, constraints, optimizers, layers\nfrom livelossplot import PlotLossesKeras\n", "We include the GloVe word vectors in our input files. To include these in your kernel, simply click 'input files' at the top of the notebook, and search 'glove' in the 'datasets' section.", "EMBEDDING_FILE = f'glove.6B.50d.txt'\nTRAIN_DATA_FILE = f'train.csv'\nTEST_DATA_FILE = f'test.csv'", "Set some basic config parameters:", "embed_size = 50 # how big is each word vector\nmax_features = 20000 # how many unique words to use (i.e. num rows in embedding matrix)\nmaxlen = 100 # max number of words in a comment to use", "Read in our data and replace missing values:", "train = pd.read_csv(TRAIN_DATA_FILE)\ntest = pd.read_csv(TEST_DATA_FILE)\n\nlist_sentences_train = train[\"comment_text\"].fillna(\"_na_\").values\nlist_classes = [\"toxic\", \"severe_toxic\", \"obscene\", \"threat\", \"insult\", \"identity_hate\"]\ny = train[list_classes].values\nlist_sentences_test = test[\"comment_text\"].fillna(\"_na_\").values", "Standard keras preprocessing, to turn each comment into a list of word indexes of equal length (with truncation or padding as needed).", "tokenizer = Tokenizer(num_words=max_features)\ntokenizer.fit_on_texts(list(list_sentences_train))\nlist_tokenized_train = tokenizer.texts_to_sequences(list_sentences_train)\nlist_tokenized_test = tokenizer.texts_to_sequences(list_sentences_test)\nX_t = pad_sequences(list_tokenized_train, maxlen=maxlen)\nX_te = pad_sequences(list_tokenized_test, maxlen=maxlen)", "Read the glove word vectors (space delimited strings) into a dictionary from word->vector.", "def 
get_coefs(word,*arr): \n return word, np.asarray(arr, dtype='float32')\nembeddings_index = dict(get_coefs(*o.strip().split()) for o in open(EMBEDDING_FILE))", "Use these vectors to create our embedding matrix, with random initialization for words that aren't in GloVe. We'll use the same mean and stdev of embeddings the GloVe has when generating the random init.", "all_embs = np.stack(embeddings_index.values())\nemb_mean,emb_std = all_embs.mean(), all_embs.std()\nemb_mean,emb_std\n\nword_index = tokenizer.word_index\nnb_words = min(max_features, len(word_index))\nembedding_matrix = np.random.normal(emb_mean, emb_std, (nb_words, embed_size))\nfor word, i in word_index.items():\n if i >= max_features: continue\n embedding_vector = embeddings_index.get(word)\n if embedding_vector is not None: embedding_matrix[i] = embedding_vector", "Simple bidirectional LSTM with two fully connected layers. We add some dropout to the LSTM since even 2 epochs is enough to overfit.", "inp = Input(shape=(maxlen,))\nx = Embedding(max_features, embed_size, weights=[embedding_matrix])(inp)\nx = Bidirectional(LSTM(50, return_sequences=True, dropout=0.1, recurrent_dropout=0.1))(x)\nx = GlobalMaxPool1D()(x)\nx = Dense(50, activation=\"relu\")(x)\nx = Dropout(0.1)(x)\nx = Dense(6, activation=\"sigmoid\")(x)\nmodel = Model(inputs=inp, outputs=x)\nmodel.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])\n\nprint(model.summary())", "Now we're ready to fit out model! Use validation_split when not submitting.", "model.fit(X_t, y, batch_size=128, epochs=2, validation_split=0.35)", "And finally, get predictions for the test set and prepare a submission CSV:", "y_test = model.predict([X_te], batch_size=1024, verbose=1)\nsample_submission = pd.read_csv('sample_submission.csv')\nsample_submission[list_classes] = y_test\nsample_submission.to_csv('submission.csv', index=False)" ]
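The embedding-matrix recipe above — random rows drawn with GloVe's own mean and std, then overwritten wherever the word is known — can be sketched in plain NumPy with a toy vocabulary (the vectors below are hypothetical, not real GloVe entries):

```python
import numpy as np

embed_size, max_features = 4, 4
# toy "GloVe" dictionary (hypothetical vectors)
embeddings_index = {'the': np.array([.1, .2, .3, .4], dtype='float32'),
                    'cat': np.array([.5, .6, .7, .8], dtype='float32')}
# toy tokenizer output: word -> integer index (1-based, as in Keras)
word_index = {'the': 1, 'cat': 2, 'zzzunknown': 3}

all_embs = np.stack(list(embeddings_index.values()))
emb_mean, emb_std = all_embs.mean(), all_embs.std()

# random init with matching statistics, then overwrite known words
embedding_matrix = np.random.normal(emb_mean, emb_std, (max_features, embed_size))
for word, i in word_index.items():
    if i >= max_features:
        continue
    vec = embeddings_index.get(word)
    if vec is not None:
        embedding_matrix[i] = vec
print(embedding_matrix.shape)
```

Known words keep their pretrained vector, while out-of-vocabulary rows stay random with the same scale, so they don't stand out to the downstream layers.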
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
daniel-koehn/Theory-of-seismic-waves-II
04_FD_stability_dispersion/4_general_fd_taylor_operators.ipynb
gpl-3.0
[ "Content under Creative Commons Attribution license CC-BY 4.0, code under BSD 3-Clause License © 2018 parts of this notebook are from Derivative Approximation by Finite Differences by David Eberly, additional text and SymPy examples by D. Koehn, notebook style sheet by L.A. Barba, N.C. Clementi", "# Execute this cell to load the notebook's style sheet, then ignore it\nfrom IPython.core.display import HTML\ncss_file = '../style/custom.css'\nHTML(open(css_file, \"r\").read())", "Generalization of Taylor FD operators\nIn the last lesson, we learned how to derive a high order FD approximation for the second derivative using Taylor series expansion. In the next step we derive a general equation to compute FD operators, where I use a detailed derivation based on \"Derivative Approximation by Finite Differences\" by David Eberly.\nEstimation of arbitrary FD operators by Taylor series expansion\nWe can approximate the $d-th$ order derivative of a function $f(x)$ with an order of error $p>0$ by a general finite-difference approximation:\n\\begin{equation}\n\\frac{h^d}{d!}f^{(d)}(x) = \\sum_{i=i_{min}}^{i_{max}} C_i f(x+ih) + \\cal{O}(h^{d+p})\n\\end{equation}\nwhere h is an equidistant grid point distance. By choosing the extreme indices $i_{min}$ and $i_{max}$, you can define forward, backward or central operators. The accuracy of the FD operator is defined by its length and therefore also the number of \nweighting coefficients $C_i$ incorporated in the approximation. $\\cal{O}(h^{d+p})$ terms are neglected. 
\nFormally, we can approximate $f(x+ih)$ by a Taylor series expansion:\n\\begin{equation}\nf(x+ih) = \\sum_{n=0}^{\\infty} i^n \\frac{h^n}{n!}f^{(n)}(x)\\nonumber\n\\end{equation}\nInserting into eq.(1) yields\n\\begin{align}\n\\frac{h^d}{d!}f^{(d)}(x) &= \\sum_{i=i_{min}}^{i_{max}} C_i \\sum_{n=0}^{\\infty} i^n \\frac{h^n}{n!}f^{(n)}(x) + \\cal{O}(h^{d+p})\\nonumber\\\n\\end{align}\nWe can move the second sum on the RHS to the front\n\\begin{align}\n\\frac{h^d}{d!}f^{(d)}(x) &= \\sum_{n=0}^{\\infty} \\left(\\sum_{i=i_{min}}^{i_{max}} i^n C_i\\right) \\frac{h^n}{n!}f^{(n)}(x) + \\cal{O}(h^{d+p})\\nonumber\\\n\\end{align}\nIn the FD approximation we only expand the Taylor series up to the term $n=(d+p)-1$\n\\begin{align}\n\\frac{h^d}{d!}f^{(d)}(x) &= \\sum_{n=0}^{(d+p)-1} \\left(\\sum_{i=i_{min}}^{i_{max}} i^n C_i\\right) \\frac{h^n}{n!}f^{(n)}(x) + \\cal{O}(h^{d+p})\\nonumber\\\n\\end{align}\nand neglect the $\\cal{O}(h^{d+p})$ terms\n\\begin{align}\n\\frac{h^d}{d!}f^{(d)}(x) &= \\sum_{n=0}^{(d+p)-1} \\left(\\sum_{i=i_{min}}^{i_{max}} i^n C_i\\right) \\frac{h^n}{n!}f^{(n)}(x)\\\n\\end{align}\nMultiplying by $\\frac{d!}{h^d}$ leads to the desired approximation for the $d-th$ derivative of the function f(x):\n\\begin{align}\nf^{(d)}(x) &= \\frac{d!}{h^d}\\sum_{n=0}^{(d+p)-1} \\left(\\sum_{i=i_{min}}^{i_{max}} i^n C_i\\right) \\frac{h^n}{n!}f^{(n)}(x)\\\n\\end{align}\nTreating the approximation in eq.(2) as an equality, the only term in the sum on the right-hand side of the approximation that contains $\\frac{h^d}{d!}f^{(d)}(x)$ occurs when $n = d$, so the coefficient of that term must be 1. 
The other terms must vanish for there to be equality, so the coefficients of those terms must be 0; therefore, it is necessary that\n\\begin{equation}\n\\sum_{i=i_{min}}^{i_{max}} i^n C_i=\n\\begin{cases}\n0, ~~ 0 \\le n \\le (d+p)-1 ~ \\text{and} ~ n \\ne d\\\n1, ~~ n = d\n\\end{cases}\\nonumber\\\n\\end{equation}\nThis is a set of $d + p$ linear equations in $i_{max} − i_{min} + 1$ unknowns. If we constrain the number of unknowns to be $d+p$, the linear system has a unique solution. \n\n\nA forward difference approximation occurs if we set $i_{min} = 0$\nand $i_{max} = d + p − 1$. \n\n\nA backward difference approximation can be implemented by setting $i_{max} = 0$ and $i_{min} = −(d + p − 1)$.\n\n\nA centered difference approximation occurs if we set $i_{max} = −i_{min} = (d + p − 1)/2$ where it appears that $d + p$ is necessarily an odd number. As it turns out, $p$ can be chosen to be even regardless of the parity of $d$ and $i_{max} = (d + p − 1)/2$.\n\n\nWe could either implement the resulting linear system as matrix equation as in the previous lesson, or simply use a SymPy function which gives us the FD operators right away.", "# import SymPy libraries\nfrom sympy import symbols, differentiate_finite, Function\n\n# Define symbols\nx, h = symbols('x h')\nf = Function('f')\n\n# 1st order forward operator for 1st derivative\nforward_1st_fx = differentiate_finite(f(x), x, points=[x+h, x]).simplify()\nprint(\"1st order forward operator 1st derivative:\")\nprint(forward_1st_fx)\nprint(\" \")\n\n# 1st order backward operator for 1st derivative\nbackward_1st_fx = differentiate_finite(f(x), x, points=[x, x-h]).simplify()\nprint(\"1st order backward operator 1st derivative:\")\nprint(backward_1st_fx)\nprint(\" \")\n\n# 2nd order centered operator for 1st derivative\ncenter_1st_fx = differentiate_finite(f(x), x, points=[x+h, x-h]).simplify()\nprint(\"2nd order center operator 1st derivative:\")\nprint(center_1st_fx)\nprint(\" \")\n\n# 2nd order FD operator for 2nd 
derivative\ncenter_2nd_fxx = differentiate_finite(f(x), x, 2, points=[x+h, x, x-h]).simplify()\nprint(\"2nd order center operator 2nd derivative:\")\nprint(center_2nd_fxx)\nprint(\" \")\n\n# 4th order FD operator for 2nd derivative\ncenter_4th_fxx = differentiate_finite(f(x), x, 2, points=[x+2*h, x+h, x, x-h, x-2*h]).simplify()\nprint(\"4th order center operator 2nd derivative:\")\nprint(center_4th_fxx)\nprint(\" \")\n", "Actually, the underlying algorithm also supports variable grid spacings, because it is not based on Taylor series expansion but Lagrange polynomials. For more details, I refer to the paper \"Calculation of weights in finite difference formulas\" by Bengt Fornberg.\nAn alternative to using SymPy for the calculation of FD operator weights is the DEVITO package:\n\"https://www.devitoproject.org/\"\nIt not only calculates FD stencils, but also automatically optimizes the performance for the given hardware.\nWhat we learned:\n\nHow to compute Finite-Difference operators of arbitrary derivative and error order\nSymbolic computation of FD operators with SymPy" ]
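The Fornberg algorithm referenced above is also exposed directly in SymPy as `finite_diff_weights`, which returns the stencil coefficients themselves rather than a symbolic expression. A quick sketch recovering the classic 4th-order centered stencil for the first derivative (unit grid spacing assumed):

```python
from sympy import Rational, finite_diff_weights

# weights[d][k] holds the weights for the d-th derivative using the
# first k+1 stencil points; take all five centered points around x0 = 0
weights = finite_diff_weights(1, [-2, -1, 0, 1, 2], 0)
w = weights[1][-1]
print(w)
```

The result matches the textbook coefficients (1, -8, 0, 8, -1)/12 for the 4th-order centered first-derivative operator.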
[ "markdown", "code", "markdown", "code", "markdown" ]
scikit-optimize/scikit-optimize.github.io
dev/notebooks/auto_examples/plots/visualizing-results.ipynb
bsd-3-clause
[ "%matplotlib inline", "Visualizing optimization results\nTim Head, August 2016.\nReformatted by Holger Nahrstaedt 2020\n.. currentmodule:: skopt\nBayesian optimization or sequential model-based optimization uses a surrogate\nmodel to model the expensive-to-evaluate objective function func. It is\nthis model that is used to determine at which points to evaluate the expensive\nobjective next.\nTo help understand why the optimization process is proceeding the way it is,\nit is useful to plot the location and order of the points at which the\nobjective is evaluated. If everything is working as expected, early samples\nwill be spread over the whole parameter space and later samples should\ncluster around the minimum.\nThe :class:plots.plot_evaluations function helps with visualizing the location and\norder in which samples are evaluated for objectives with an arbitrary\nnumber of dimensions.\nThe :class:plots.plot_objective function plots the partial dependence of the objective,\nas represented by the surrogate model, for each dimension and as pairs of the\ninput dimensions.\nAll of the minimizers implemented in skopt return an OptimizeResult\ninstance that can be inspected. Both :class:plots.plot_evaluations and :class:plots.plot_objective\nare helpers that do just that.", "print(__doc__)\nimport numpy as np\nnp.random.seed(123)\n\nimport matplotlib.pyplot as plt", "Toy models\nWe will use two different toy models to demonstrate how :class:plots.plot_evaluations\nworks.\nThe first model is the :class:benchmarks.branin function which has two dimensions and three\nminima.\nThe second model is the hart6 function which has six dimensions, which makes\nit hard to visualize. 
This will show off the utility of\n:class:plots.plot_evaluations.", "from skopt.benchmarks import branin as branin\nfrom skopt.benchmarks import hart6 as hart6_\n\n\n# redefined `hart6` to allow adding arbitrary \"noise\" dimensions\ndef hart6(x):\n return hart6_(x[:6])", "Starting with branin\nTo start let's take advantage of the fact that :class:benchmarks.branin is a simple\nfunction which can be visualised in two dimensions.", "from matplotlib.colors import LogNorm\n\n\ndef plot_branin():\n fig, ax = plt.subplots()\n\n x1_values = np.linspace(-5, 10, 100)\n x2_values = np.linspace(0, 15, 100)\n x_ax, y_ax = np.meshgrid(x1_values, x2_values)\n vals = np.c_[x_ax.ravel(), y_ax.ravel()]\n fx = np.reshape([branin(val) for val in vals], (100, 100))\n\n cm = ax.pcolormesh(x_ax, y_ax, fx,\n norm=LogNorm(vmin=fx.min(),\n vmax=fx.max()),\n cmap='viridis_r')\n\n minima = np.array([[-np.pi, 12.275], [+np.pi, 2.275], [9.42478, 2.475]])\n ax.plot(minima[:, 0], minima[:, 1], \"r.\", markersize=14,\n lw=0, label=\"Minima\")\n\n cb = fig.colorbar(cm)\n cb.set_label(\"f(x)\")\n\n ax.legend(loc=\"best\", numpoints=1)\n\n ax.set_xlabel(\"$X_0$\")\n ax.set_xlim([-5, 10])\n ax.set_ylabel(\"$X_1$\")\n ax.set_ylim([0, 15])\n\n\nplot_branin()", "Evaluating the objective function\nNext we use an extra trees based minimizer to find one of the minima of the\n:class:benchmarks.branin function. Then we visualize at which points the objective is being\nevaluated using :class:plots.plot_evaluations.", "from functools import partial\nfrom skopt.plots import plot_evaluations\nfrom skopt import gp_minimize, forest_minimize, dummy_minimize\n\n\nbounds = [(-5.0, 10.0), (0.0, 15.0)]\nn_calls = 160\n\nforest_res = forest_minimize(branin, bounds, n_calls=n_calls,\n base_estimator=\"ET\", random_state=4)\n\n_ = plot_evaluations(forest_res, bins=10)", ":class:plots.plot_evaluations creates a grid of size n_dims by n_dims.\nThe diagonal shows histograms for each of the dimensions. 
In the lower\ntriangle (just one plot in this case) a two dimensional scatter plot of all\npoints is shown. The order in which points were evaluated is encoded in the\ncolor of each point. Darker/purple colors correspond to earlier samples and\nlighter/yellow colors correspond to later samples. A red point shows the\nlocation of the minimum found by the optimization process.\nYou should be able to see that points start clustering around the location\nof the true minimum. The histograms show that the objective is evaluated\nmore often at locations near one of the three minima.\nUsing :class:plots.plot_objective we can visualise the one dimensional partial\ndependence of the surrogate model for each dimension. The contour plot in\nthe bottom left corner shows the two dimensional partial dependence. In this\ncase this is the same as simply plotting the objective as it only has two\ndimensions.\nPartial dependence plots\nPartial dependence plots were proposed by\n[Friedman (2001)]\nas a method for interpreting the importance of input features used in\ngradient boosting machines. Given a function of $k$ variables\n$y=f\\left(x_1, x_2, ..., x_k\\right)$, the\npartial dependence of $f$ on the $i$-th variable $x_i$ is calculated as:\n$\\phi\\left( x_i \\right) = \\frac{1}{N} \\sum_{j=0}^N f\\left(x_{1,j}, x_{2,j}, ..., x_i, ..., x_{k,j}\\right)$\nwith the sum running over a set of $N$ points drawn at random from the\nsearch space.\nThe idea is to visualize how the value of $x_j$ influences the function\n$f$ after averaging out the influence of all other variables.", "from skopt.plots import plot_objective\n\n_ = plot_objective(forest_res)", "The two dimensional partial dependence plot can look like the true\nobjective but it does not have to. 
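Friedman's average is easy to reproduce by hand for a toy objective, which makes clear what `plot_objective` is displaying (the two-variable function below is hypothetical, chosen so the answer is known):

```python
import numpy as np

# toy two-variable objective (hypothetical)
f = lambda x0, x1: np.sin(x0) + 0.1 * x1

# random sample of the search space, as in Friedman's definition
rng = np.random.default_rng(0)
samples = rng.uniform(-2, 2, size=(500, 2))

def partial_dependence_x0(x0):
    # hold x0 fixed, average f over the sampled values of the other variable
    return np.mean(f(x0, samples[:, 1]))

# the 0.1*x1 contribution averages out over the symmetric sample,
# so the partial dependence on x0 tracks sin(x0)
pd_vals = [partial_dependence_x0(x0) for x0 in (-1.0, 0.0, 1.0)]
print(pd_vals)
```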
As points at which the objective function\nis being evaluated are concentrated around the suspected minimum, the\nsurrogate model sometimes is not a good representation of the objective far\naway from the minima.\nRandom sampling\nCompare this to a minimizer which picks points at random. There is no\nstructure visible in the order in which it evaluates the objective. Because\nthere is no model involved in the process of picking sample points at\nrandom, we can not plot the partial dependence of the model.", "dummy_res = dummy_minimize(branin, bounds, n_calls=n_calls, random_state=4)\n\n_ = plot_evaluations(dummy_res, bins=10)", "Working in six dimensions\nVisualising what happens in two dimensions is easy, where\n:class:plots.plot_evaluations and :class:plots.plot_objective start to be useful is when the\nnumber of dimensions grows. They take care of many of the more mundane\nthings needed to make good plots of all combinations of the dimensions.\nThe next example uses :class:benchmarks.hart6 which has six dimensions and shows both\n:class:plots.plot_evaluations and :class:plots.plot_objective.", "bounds = [(0., 1.),] * 6\n\nforest_res = forest_minimize(hart6, bounds, n_calls=n_calls,\n base_estimator=\"ET\", random_state=4)\n\n_ = plot_evaluations(forest_res)\n_ = plot_objective(forest_res, n_samples=40)", "Going from 6 to 6+2 dimensions\nTo make things more interesting let's add two dimensions to the problem.\nAs :class:benchmarks.hart6 only depends on six dimensions we know that for this problem\nthe new dimensions will be \"flat\" or uninformative. This is clearly visible\nin both the placement of samples and the partial dependence plots.", "bounds = [(0., 1.),] * 8\nn_calls = 200\n\nforest_res = forest_minimize(hart6, bounds, n_calls=n_calls,\n base_estimator=\"ET\", random_state=4)\n\n_ = plot_evaluations(forest_res)\n_ = plot_objective(forest_res, n_samples=40)\n\n# .. 
[Friedman (2001)] `doi:10.1214/aos/1013203451 section 8.2 <http://projecteuclid.org/euclid.aos/1013203451>`" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
martinggww/lucasenlights
MachineLearning/DataScience-Python3/CovarianceCorrelation.ipynb
cc0-1.0
[ "Covariance and Correlation\nCovariance measures how two variables vary in tandem from their means.\nFor example, let's say we work for an e-commerce company, and they are interested in finding a correlation between page speed (how fast each web page renders for a customer) and how much a customer spends.\nnumpy offers covariance methods, but we'll do it the \"hard way\" to show what happens under the hood. Basically we treat each variable as a vector of deviations from the mean, and compute the \"dot product\" of both vectors. Geometrically this can be thought of as the angle between the two vectors in a high-dimensional space, but you can just think of it as a measure of similarity between the two variables.\nFirst, let's just make page speed and purchase amount totally random and independent of each other; a very small covariance will result as there is no real correlation:", "%matplotlib inline\n\nimport numpy as np\nfrom pylab import *\n\ndef de_mean(x):\n xmean = mean(x)\n return [xi - xmean for xi in x]\n\ndef covariance(x, y):\n n = len(x)\n return dot(de_mean(x), de_mean(y)) / (n-1)\n\npageSpeeds = np.random.normal(3.0, 1.0, 1000)\npurchaseAmount = np.random.normal(50.0, 10.0, 1000)\n\nscatter(pageSpeeds, purchaseAmount)\n\ncovariance (pageSpeeds, purchaseAmount)\n", "Now we'll make our fabricated purchase amounts an actual function of page speed, making a very real correlation. The negative value indicates an inverse relationship; pages that render in less time result in more money spent:", "purchaseAmount = np.random.normal(50.0, 10.0, 1000) / pageSpeeds\n\nscatter(pageSpeeds, purchaseAmount)\n\ncovariance (pageSpeeds, purchaseAmount)", "But, what does this value mean? Covariance is sensitive to the units used in the variables, which makes it difficult to interpret. 
Correlation normalizes everything by their standard deviations, giving you an easier to understand value that ranges from -1 (for a perfect inverse correlation) to 1 (for a perfect positive correlation):", "def correlation(x, y):\n stddevx = x.std(ddof=1) # ddof=1 matches the (n-1) divisor used in our covariance function\n stddevy = y.std(ddof=1)\n return covariance(x,y) / stddevx / stddevy #In real life you'd check for divide by zero here\n\ncorrelation(pageSpeeds, purchaseAmount)", "numpy can do all this for you with numpy.corrcoef. It returns a matrix of the correlation coefficients between every combination of the arrays passed in:", "np.corrcoef(pageSpeeds, purchaseAmount)", "(Any tiny remaining mismatch is just due to the floating-point precision available on a computer. Note that we pass ddof=1 to std() above; the default ddof=0 divides by n rather than n-1, which would not match the divisor in our covariance function and would skew the result by a factor of n/(n-1).)\nWe can force a perfect correlation by fabricating a totally linear relationship (again, it's not exactly -1 just due to precision errors, but it's close enough to tell us there's a really good correlation here):", "purchaseAmount = 100 - pageSpeeds * 3\n\nscatter(pageSpeeds, purchaseAmount)\n\ncorrelation (pageSpeeds, purchaseAmount)", "Remember, correlation does not imply causality!\nActivity\nnumpy also has a numpy.cov function that can compute Covariance for you. Try using it for the pageSpeeds and purchaseAmounts data above. Interpret its results, and compare it to the results from our own covariance function above." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
nproctor/phys202-2015-work
assignments/assignment05/InteractEx02.ipynb
mit
[ "Interact Exercise 2\nImports", "%matplotlib inline\nfrom matplotlib import pyplot as plt\nimport numpy as np\n\nfrom IPython.html.widgets import interact, interactive, fixed\nfrom IPython.display import display", "Plotting with parameters\nWrite a plot_sin1(a, b) function that plots $sin(ax+b)$ over the interval $[0,4\\pi]$.\n\nCustomize your visualization to make it effective and beautiful.\nCustomize the box, grid, spines and ticks to match the requirements of this data.\nUse enough points along the x-axis to get a smooth plot.\nFor the x-axis tick locations use integer multiples of $\\pi$.\nFor the x-axis tick labels use multiples of pi using LaTeX: $3\\pi$.", "def plot_sine1(a,b):\n #style graph\n plt.figure(figsize=(10,5))\n plt.rc('xtick', labelsize=14)\n plt.rc('ytick', labelsize=12)\n ax = plt.gca()\n ax.spines['right'].set_color('none')\n ax.spines['top'].set_color('none')\n \n #Set X(input array) and Y(output array)\n x = np.linspace(0.0, 4*np.pi, 500)\n y = np.sin(a*x+b)\n \n #Label Axis/ Set Ticks\n plt.xlabel(\"X\", fontsize = 14)\n plt.ylabel(\"Y\", fontsize = 14)\n plt.title(\"y(x) = sin(%sx + %s)\" %(a, b), fontsize=16)\n plt.xticks(np.linspace(0.0, 4*np.pi, 5), [r'$0$', r'$\\pi$', r'$2\\pi$', r'$3\\pi$', r'$4\\pi$'])\n plt.plot(x, y)\n\nplot_sine1(5, 3.4)", "Then use interact to create a user interface for exploring your function:\n\na should be a floating point slider over the interval $[0.0,5.0]$ with steps of $0.1$.\nb should be a floating point slider over the interval $[-5.0,5.0]$ with steps of $0.1$.", "interact(plot_sine1, a=(0.0,5.0,0.1), b=(-5.0,5.0,0.1));\n\nassert True # leave this for grading the plot_sine1 exercise", "In matplotlib, the line style and color can be set with a third argument to plot. Examples of this argument:\n\ndashed red: r--\nblue circles: bo\ndotted black: k.\n\nWrite a plot_sine2(a, b, style) function that has a third style argument that allows you to set the line style of the plot. 
The style should default to a blue line.", "def plot_sine2(a, b, style='b-'):\n #Style Graph\n plt.figure(figsize=(10,5))\n plt.rc('xtick', labelsize=14)\n plt.rc('ytick', labelsize=12)\n ax = plt.gca()\n ax.spines['right'].set_color('none')\n ax.spines['top'].set_color('none')\n \n #Set x(input array) and y(output array)\n x = np.linspace(0.0, 4*np.pi, 500)\n y = np.sin(a*x+b)\n \n #More styling (Labels)\n plt.xlabel(\"X\", fontsize = 14)\n plt.ylabel(\"Y\", fontsize = 14)\n plt.title(\"y(x) = sin(%sx + %s)\" %(a, b), fontsize=16)\n plt.xticks(np.linspace(0.0, 4*np.pi, 5), [r'$0$', r'$\\pi$', r'$2\\pi$', r'$3\\pi$', r'$4\\pi$'])\n \n # Now we include a style argument! \n plt.plot(x, y, style)\n\nplot_sine2(4.0, -1.0, 'b-')", "Use interact to create a UI for plot_sine2.\n\nUse a slider for a and b as above.\nUse a drop down menu for selecting the line style between a dotted blue line line, black circles and red triangles.", "interact(plot_sine2, a=(0.0,5.0,0.1), b=(-5.0,5.0,0.1), style= {\"blue dotted line\": 'b.', \"black circles\": 'ko', \"red triangles\": 'r^'});\n\nassert True # leave this for grading the plot_sine2 exercise", "Used \"Lev Levitsky\"'s idea from StackOverFlow to set tick numbers\nUsed \"unutbu\"'s method for using latex in matplotlib" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
cliburn/sta-663-2017
notebook/13B_LinearAlgebra2.ipynb
mit
[ "import os\nimport sys\nimport glob\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\n%matplotlib inline\n%precision 4\nplt.style.use('ggplot')\n", "Reference\nSciPy's official tutorial on Linear algebra\nMatrix Decompositions\nMatrix decompositions are an important step in solving linear systems in a computationally efficient manner. \nLU Decomposition and Gaussian Elimination\nLU stands for 'Lower Upper', and so an LU decomposition of a matrix $A$ is a decomposition so that \n$$A= LU$$\nwhere $L$ is lower triangular and $U$ is upper triangular.\nNow, LU decomposition is essentially gaussian elimination, but we work only with the matrix $A$ (as opposed to the augmented matrix). \nLet's review how gaussian elimination (ge) works. We will deal with a $3\\times 3$ system of equations for conciseness, but everything here generalizes to the $n\\times n$ case. Consider the following equation:\n$$\\left(\\begin{matrix}a_{11}&a_{12} & a_{13}\\a_{21}&a_{22}&a_{23}\\a_{31}&a_{32}&a_{33}\\end{matrix}\\right)\\left(\\begin{matrix}x_1\\x_2\\x_3\\end{matrix}\\right) = \\left(\\begin{matrix}b_1\\b_2\\b_3\\end{matrix}\\right)$$\nFor simplicity, let us assume that the leftmost matrix $A$ is non-singular. To solve the system using ge, we start with the 'augmented matrix':\n$$\\left(\\begin{array}{ccc|c}a_{11}&a_{12} & a_{13}& b_1 \\a_{21}&a_{22}&a_{23}&b_2\\a_{31}&a_{32}&a_{33}&b_3\\end{array}\\right)$$\nWe begin at the first entry, $a_{11}$. If $a_{11} \\neq 0$, then we divide the first row by $a_{11}$ and then subtract the appropriate multiple of the first row from each of the other rows, zeroing out the first entry of all rows. (If $a_{11}$ is zero, we need to permute rows. We will not go into detail of that here.) 
The result is as follows:\n$$\left(\begin{array}{ccc|c}\n1 & \frac{a_{12}}{a_{11}} & \frac{a_{13}}{a_{11}} & \frac{b_1}{a_{11}} \\\n0 & a_{22} - a_{21}\frac{a_{12}}{a_{11}} & a_{23} - a_{21}\frac{a_{13}}{a_{11}} & b_2 - a_{21}\frac{b_1}{a_{11}}\\\n0&a_{32}-a_{31}\frac{a_{12}}{a_{11}} & a_{33} - a_{31}\frac{a_{13}}{a_{11}} &b_3- a_{31}\frac{b_1}{a_{11}}\end{array}\right)$$\nWe repeat the procedure for the second row, first dividing by the leading entry, then subtracting the appropriate multiple of the resulting row from each of the third and first rows, so that the second entry in row 1 and in row 3 are zero. We could continue until the matrix on the left is the identity. In that case, we can then just 'read off' the solution: i.e., the vector $x$ is the resulting column vector on the right. Usually, it is more efficient to stop at row echelon form (upper triangular, with ones on the diagonal), and then use back substitution to obtain the final answer.\nNote that in some cases, it is necessary to permute rows to obtain row echelon form. This is called partial pivoting. If we also manipulate columns, that is called full pivoting.\nIt should be mentioned that we may obtain the inverse of a matrix using ge, by reducing the matrix $A$ to the identity, with the identity matrix as the augmented portion. \nNow, this is all fine when we are solving a system one time, for one outcome $b$. Many applications involve solutions to multiple problems, where the left-hand-side of our matrix equation does not change, but there are many outcome vectors $b$. In this case, it is more efficient to decompose $A$.\nFirst, we start just as in ge, but we 'keep track' of the various multiples required to eliminate entries. 
For example, consider the matrix\n$$A = \\left(\\begin{matrix} 1 & 3 & 4 \\\n 2& 1& 3\\\n 4&1&2\n \\end{matrix}\\right)$$\nWe need to multiply row $1$ by $2$ and subtract from row $2$ to eliminate the first entry in row $2$, and then multiply row $1$ by $4$ and subtract from row $3$. Instead of entering zeroes into the first entries of rows $2$ and $3$, we record the multiples required for their elimination, as so:\n$$\\left(\\begin{matrix} 1 & 3 & 4 \\\n (2)& -5 & -5\\\n (4)&-11&-14\n \\end{matrix}\\right)$$\nAnd then we eliminate the second entry in the third row:\n$$\\left(\\begin{matrix} 1 & 3 & 4 \\\n (2)& -5 & -5\\\n (4)&(\\frac{11}{5})&-3\n \\end{matrix}\\right)$$\nAnd now we have the decomposition:\n$$L= \\left(\\begin{matrix} 1 & 0 & 0 \\\n 2& 1 & 0\\\n 4&\\frac{11}5&1\n \\end{matrix}\\right)\n U = \\left(\\begin{matrix} 1 & 3 & 4 \\\n 0& -5 & -5\\\n 0&0&-3\n \\end{matrix}\\right)$$", "import numpy as np\nimport scipy.linalg as la\nnp.set_printoptions(suppress=True) \n\nA = np.array([[1,3,4],[2,1,3],[4,1,2]])\n\nL = np.array([[1,0,0],[2,1,0],[4,11/5,1]])\nU = np.array([[1,3,4],[0,-5,-5],[0,0,-3]])\nprint(L.dot(U))\nprint(L)\nprint(U)", "We can solve the system by solving two back-substitution problems:\n$$Ly = b$$ and\n$$Ux=y$$\nThese are both $O(n^2)$, so it is more efficient to decompose when there are multiple outcomes to solve for.\nLet do this with numpy:", "import numpy as np\nimport scipy.linalg as la\nnp.set_printoptions(suppress=True) \n\nA = np.array([[1,3,4],[2,1,3],[4,1,2]])\n\nprint(A)\n\nP, L, U = la.lu(A)\nprint(np.dot(P.T, A))\nprint\nprint(np.dot(L, U))\nprint(P)\nprint(L)\nprint(U)", "Note that the numpy decomposition uses partial pivoting (matrix rows are permuted to use the largest pivot). This is because small pivots can lead to numerical instability. 
Another reason why one should use library functions whenever possible!\nCholesky Decomposition\nRecall that a square matrix $A$ is positive definite if\n$$u^TA u > 0$$\nfor any non-zero n-dimensional vector $u$,\nand a symmetric, positive-definite matrix $A$ is a positive-definite matrix such that\n$$A = A^T$$\nLet $A$ be a symmetric, positive-definite matrix. There is a unique decomposition such that\n$$A = L L^T$$\nwhere $L$ is lower-triangular with positive diagonal elements and $L^T$ is its transpose. This decomposition is known as the Cholesky decomposition, and $L$ may be interpreted as the 'square root' of the matrix $A$. \nAlgorithm:\nLet $A$ be an $n\times n$ matrix. We find the matrix $L$ using the following iterative procedure:\n$$A = \left(\begin{matrix}a_{11}&A_{12}\A_{12}&A_{22}\end{matrix}\right) =\n\left(\begin{matrix}\ell_{11}&0\\nL_{12}&L_{22}\end{matrix}\right)\n\left(\begin{matrix}\ell_{11}&L_{12}\0&L_{22}\end{matrix}\right)\n$$\n1.) Let $\ell_{11} = \sqrt{a_{11}}$\n2.) $L_{12} = \frac{1}{\ell_{11}}A_{12}$\n3.) 
Solve $A_{22} - L_{12}L_{12}^T = L_{22}L_{22}^T$ for $L_{22}$\nExample:\n$$A = \left(\begin{matrix}1&3&5\3&13&23\5&23&42\end{matrix}\right)$$\n$$\ell_{11} = \sqrt{a_{11}} = 1$$\n$$L_{12} = \frac{1}{\ell_{11}} A_{12} = A_{12}$$\n$\begin{eqnarray}\nA_{22} - L_{12}L_{12}^T &=& \left(\begin{matrix}13&23\23&42\end{matrix}\right) - \left(\begin{matrix}9&15\15&25\end{matrix}\right)\\n&=& \left(\begin{matrix}4&8\8&17\end{matrix}\right)\\n&=& \left(\begin{matrix}2&0\4&\ell_{33}\end{matrix}\right) \left(\begin{matrix}2&4\0&\ell_{33}\end{matrix}\right)\\n&=& \left(\begin{matrix}4&8\8&16+\ell_{33}^2\end{matrix}\right)\n\end{eqnarray}$\nAnd so we conclude that $\ell_{33}=1$.\nThis yields the decomposition:\n$$\left(\begin{matrix}1&3&5\3&13&23\5&23&42\end{matrix}\right) = \n\left(\begin{matrix}1&0&0\3&2&0\5&4&1\end{matrix}\right)\left(\begin{matrix}1&3&5\0&2&4\0&0&1\end{matrix}\right)$$\nNow, with numpy:", "A = np.array([[1,3,5],[3,13,23],[5,23,42]])\nL = la.cholesky(A) # scipy returns the upper-triangular factor by default, so A = L.T @ L\nprint(np.dot(L.T, L))\n\nprint(L)\nprint(A)", "Cholesky decomposition is about twice as fast as LU decomposition (though both scale as $n^3$).\nMatrix Decompositions for PCA and Least Squares\nEigendecomposition\nEigenvectors and Eigenvalues\nFirst recall that an eigenvector of a matrix $A$ is a non-zero vector $v$ such that\n$$Av = \lambda v$$\nfor some scalar $\lambda$.\nThe value $\lambda$ is called an eigenvalue of $A$.\nIf an $n\times n$ matrix $A$ has $n$ linearly independent eigenvectors, then $A$ may be decomposed in the following manner:\n$$A = B\Lambda B^{-1}$$\nwhere $\Lambda$ is a diagonal matrix whose diagonal entries are the eigenvalues of $A$ and the columns of $B$ are the corresponding eigenvectors of $A$.\nFacts: \n\nAn $n\times n$ matrix is diagonalizable $\iff$ it has $n$ linearly independent eigenvectors.\nA symmetric, positive definite matrix has only positive eigenvalues and its eigendecomposition \n$$A=B\Lambda 
B^{-1}$$\n\nis via an orthogonal transformation $B$. (I.e. its eigenvectors are an orthonormal set)\nCalculating Eigenvalues\nIt is easy to see from the definition that if $v$ is an eigenvector of an $n\times n$ matrix $A$ with eigenvalue $\lambda$, then\n$$\left(A - \lambda I\right)v = \bf{0}$$\nwhere $I$ is the identity matrix of dimension $n$ and $\bf{0}$ is an n-dimensional zero vector. Therefore, the eigenvalues of $A$ satisfy:\n$$\det\left(A-\lambda I\right)=0$$\nThe left-hand side above is a polynomial in $\lambda$, and is called the characteristic polynomial of $A$. Thus, to find the eigenvalues of $A$, we find the roots of the characteristic polynomial.\nComputationally, however, computing the characteristic polynomial and then solving for the roots is prohibitively expensive. Therefore, in practice, numerical methods are used - both to find eigenvalues and their corresponding eigenvectors. We won't go into the specifics of the algorithms used to calculate eigenvalues, but here is a numpy example:", "A = np.array([[0,1,1],[2,1,0],[3,4,5]])\n\nu, V = la.eig(A)\nprint(np.dot(V,np.dot(np.diag(u), la.inv(V))))\nprint(u)\n", "NB: Many matrices are not diagonalizable, and many have complex eigenvalues (even if all entries are real).", "A = np.array([[0,1],[-1,0]])\nprint(A)\n\nu, V = la.eig(A)\nprint(np.dot(V,np.dot(np.diag(u), la.inv(V))))\nprint(u)\n\n# If you know the eigenvalues must be real \n# because A is a positive definite (e.g. covariance) matrix \n# use real_if_close\n\nA = np.array([[0,1,1],[2,1,0],[3,4,5]])\nu, V = la.eig(A)\nprint(u)\nprint(np.real_if_close(u))", "Singular Values\nFor any $m\times n$ matrix $A$, we define its singular values to be the square root of the eigenvalues of $A^TA$. These are well-defined as $A^TA$ is always symmetric, positive semi-definite, so its eigenvalues are real and non-negative. Singular values are important properties of a matrix. Geometrically, a matrix $A$ maps the unit sphere in $\mathbb{R}^n$ to an ellipse. 
The singular values are the lengths of the semi-axes. \nSingular values also provide a measure of the stability of a matrix. We'll revisit this at the end of the lecture.\nQR decomposition\nAs with the previous decompositions, $QR$ decomposition is a method to write a matrix $A$ as the product of two matrices of simpler form. In this case, we want:\n$$ A= QR$$\nwhere $Q$ is an $m\times n$ matrix with $Q^T Q = I$ (i.e. $Q$ has orthonormal columns) and $R$ is an $n\times n$ upper-triangular matrix.\nThis is really just the matrix form of the Gram-Schmidt orthogonalization of the columns of $A$. The G-S algorithm itself is unstable, so various other methods have been developed to compute the QR decomposition. We won't cover those in detail as they are a bit beyond our scope.\nThe first $k$ columns of $Q$ are an orthonormal basis for the column space of the first $k$ columns of $A$. \nIterative QR decomposition is often used in the computation of eigenvalues.\nSingular Value Decomposition\nAnother important matrix decomposition is singular value decomposition or SVD. For any $m\times n$ matrix $A$, we may write:\n$$A= UDV$$\nwhere $U$ is a unitary (orthogonal in the real case) $m\times m$ matrix, $D$ is a rectangular, diagonal $m\times n$ matrix with diagonal entries $d_1,...,d_m$ all non-negative. $V$ is a unitary (orthogonal) $n\times n$ matrix. SVD is used in principal component analysis and in the computation of the Moore-Penrose pseudo-inverse.\nStability and Condition Number\nIt is important that numerical algorithms be stable and efficient. 
Efficiency is a property of an algorithm, but stability can be a property of the system itself.\nExample\n$$\left(\begin{matrix}8&6&4&1\1&4&5&1\8&4&1&1\1&4&3&6\end{matrix}\right)x = \left(\begin{matrix}19\11\14\14\end{matrix}\right)$$", "A = np.array([[8,6,4,1],[1,4,5,1],[8,4,1,1],[1,4,3,6]])\nb = np.array([19,11,14,14])\nla.solve(A,b)\n\nb = np.array([19.01,11.05,14.07,14.05])\nla.solve(A,b)", "Note that the tiny perturbations in the outcome vector $b$ cause large differences in the solution! When this happens, we say that the matrix $A$ is ill-conditioned. This happens when a matrix is 'close' to being singular (i.e. non-invertible).\nCondition Number\nA measure of this type of behavior is called the condition number. It is defined as:\n$$ cond(A) = ||A||\cdot ||A^{-1}|| $$\nIn general, it is difficult to compute.\nFact: \n$$cond(A) = \frac{\lambda_1}{\lambda_n}$$\nwhere $\lambda_1$ is the maximum singular value of $A$ and $\lambda_n$ is the smallest. The higher the condition number, the more unstable the system. In general, if there is a large discrepancy between minimal and maximal singular values, the condition number is large.\nExample", "U, s, V = np.linalg.svd(A)\nprint(s)\nprint(max(s)/min(s))", "Preconditioning\nWe can sometimes improve on this behavior by 'pre-conditioning'. Instead of solving\n$$Ax=b$$\nwe solve\n$$D^{-1}Ax=D^{-1}b$$\nwhere $D^{-1}A$ has a lower condition number than $A$ itself. \nPreconditioning is a very involved topic, quite out of the range of this course. It is mentioned here only to make you aware that such a thing exists, should you ever run into an ill-conditioned problem!\n<font color=red>Exercises</font>\n1. Compute the LU decomposition of the following matrix by hand and using numpy\n$$\left(\begin{matrix}1&2&3\2&-4&6\3&-9&-3\end{matrix}\right)$$\nSolution:\nFirst by hand:\n2. 
Compute the Cholesky decomposition of the following matrix by hand and using numpy\n$$\\left(\\begin{matrix}1&2&3\\2&-4&6\\3&6&-3\\end{matrix}\\right)$$", "# Your code here", "3. Write a function in Python to solve a system\n$$Ax = b$$\nusing SVD decomposition. Your function should take $A$ and $b$ as input and return $x$.\nYour function should include the following:\n\nFirst, check that $A$ is invertible - return error message if it is not\nInvert $A$ using SVD and solve\nreturn $x$\n\nTest your function for correctness.", "# Your code here\n\ndef svdsolver(A,b):\n U, s, V = np.linalg.svd(A)\n if np.prod(s) == 0:\n print(\"Matrix is singular\")\n else:\n return np.dot(np.dot((V.T).dot(np.diag(s**(-1))), U.T),b)\n \n\nA = np.array([[1,1],[1,2]])\nb = np.array([3,1])\nprint(np.linalg.solve(A,b))\nprint(svdsolver(A,b))\n" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
sysid/nbs
lstm/LTSM_BasicStockMarket.ipynb
mit
[ "LSTM Basic\nhttp://www.jakob-aungiers.com/articles/a/LSTM-Neural-Network-for-Time-Series-Prediction", "dpath = 'data/basic/'\n#path_to_dataset = dpath + 'household_power_consumption.txt'\n\n!mkdir -p $dpath\n!wget -P $dpath https://raw.githubusercontent.com/jaungiers/LSTM-Neural-Network-for-Time-Series-Prediction/master/sinwave.csv\n!wget -P $dpath https://raw.githubusercontent.com/jaungiers/LSTM-Neural-Network-for-Time-Series-Prediction/master/sp500.csv\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nimport time\nimport warnings\nimport numpy as np\nfrom numpy import newaxis\nfrom keras.layers.core import Dense, Activation, Dropout\nfrom keras.layers.recurrent import LSTM\nfrom keras.models import Sequential\n\nwarnings.filterwarnings(\"ignore\")\n\nimport pandas as pd\n\ndef load_ts(filename):\n df = pd.read_csv(filename, header=None)\n data = df[0].tolist()\n return data\n\n#filename = 'sinwave.csv'\nfilename = 'sp500.csv'\nprint('> Loading data...: ', dpath+filename)\n#X_train, y_train, X_test, y_test = load_data(dpath+'sp500.csv', seq_len, True)\nts = load_ts(dpath + filename)\nts[:10]\n\nplt.plot(ts)\n\nseq_len = 50 \n\ndef load_data(ts, seq_len, normalise_window):\n\n sequence_length = seq_len + 1\n result = []\n # create gliding window\n for index in range(len(ts) - sequence_length):\n result.append(ts[index: index + sequence_length])\n \n if normalise_window:\n result = normalise_windows(result)\n\n result = np.array(result)\n\n print(\"Data shape: \", result.shape)\n print(result[:4, :])\n row = round(0.9 * result.shape[0])\n train = result[:row, :]\n test = result[row:, :]\n print(\"Test shape: \", test.shape)\n #np.random.shuffle(train)\n x_train = train[:, :-1]\n y_train = train[:, -1]\n x_test = test[:, :-1]\n print(\"xtest shape: \", x_test.shape)\n y_test = test[:, -1]\n\n x_train = np.reshape(x_train, (x_train.shape[0], x_train.shape[1], 1))\n x_test = np.reshape(x_test, (x_test.shape[0], x_test.shape[1], 1)) \n\n return [x_train, 
y_train, x_test, y_test]\n\ndef normalise_windows(window_data):\n normalised_data = []\n for window in window_data:\n normalised_window = [((float(p) / float(window[0])) - 1) for p in window]\n normalised_data.append(normalised_window)\n return normalised_data\n\nprint('> Loading data...')\n\n#X_train, y_train, X_test, y_test = load_data(ts, seq_len, False)\nX_train, y_train, X_test, y_test = load_data(ts, seq_len, True)\nX_train.shape, y_train.shape, X_test.shape, y_test.shape\nX_train[0,:seq_len, 0]\nX_train[1,:seq_len, 0]\ny_train[:5]\n\n#plt.plot(X_train[0,:,0])\n#plt.plot(y_train)\nfig = plt.figure(facecolor='white')\nax = fig.add_subplot(211)\nax.plot(X_train[0, :, 0], label='X')\nax.legend()\nax = fig.add_subplot(212)\nax.plot(y_train, label='y')\nax.legend()\n\ndef build_model(layers):\n model = Sequential()\n\n model.add(LSTM(\n input_dim=layers[0],\n output_dim=layers[1],\n return_sequences=True))\n model.add(Dropout(0.2))\n\n model.add(LSTM(\n layers[2],\n return_sequences=False))\n model.add(Dropout(0.2))\n\n model.add(Dense(\n output_dim=layers[3]))\n model.add(Activation(\"linear\"))\n\n start = time.time()\n model.compile(loss=\"mse\", optimizer=\"rmsprop\")\n print(\"Compilation Time : \", time.time() - start)\n return model\n\nprint('> Data Loaded. Compiling...')\nmodel = build_model([1, 50, 100, 1])\nmodel.summary()\n\nstart_time = time.time()\nepochs = 50\n\nmodel.fit(\n X_train,\n y_train,\n batch_size=512,\n nb_epoch=epochs,\n validation_split=0.05)\n\nprint(\"Training time: \", time.time() - start_time)\n\ndef plot_results(predicted_data, true_data, figsize=(12,6)):\n fig = plt.figure(facecolor='white', figsize=figsize)\n ax = fig.add_subplot(111)\n ax.plot(true_data, label='True Data')\n plt.plot(predicted_data, label='Prediction')\n plt.legend()\n plt.show()", "If you’re observant you’ll have noticed in our load_data() function above we split the data in to train/test sets as is standard practice for machine learning problems. 
However what we need to watch out for here is what we actually want to achieve in the prediction of the time series.\nIf we were to use the test set as it is, we would be running each window full of the true data to predict the next time step. This is fine if we are only looking to predict one time step ahead. However if we’re looking to predict more than one time step ahead, maybe looking to predict any emergent trends or functions (e.g. the sin function in this case), using the full test set would mean we would be predicting the next time step but then disregarding that prediction when it comes to subsequent time steps, using only the true data for each time step.\nYou can see below the graph of using this approach to predict only one time step ahead at each step in time:", "def predict_point_by_point(model, data):\n #Predict each timestep given the last sequence of true data, in effect only predicting 1 step ahead each time\n predicted = model.predict(data)\n predicted = np.reshape(predicted, (predicted.size,))\n return predicted\n\nstart_time = time.time()\npredicted = predict_point_by_point(model, X_test) \npredicted[:5]\nprint(\"Prediction time: \", time.time() - start_time)\n\nplot_results(predicted, y_test)", "If however we want to do real magic and predict many time steps ahead we only use the first window from the testing data as an initiation window. At each time step we then pop the oldest entry out of the rear of the window and append the prediction for the next time step to the front of the window, in essence shifting the window along so it slowly builds itself with predictions, until the window is full of only predicted values (in our case, as our window is of size 50 this would occur after 50 time steps). 
We then keep this up indefinitely, predicting the next time step on the predictions of the previous future time steps, to hopefully see an emerging trend.", "def predict_sequence_full(model, data, window_size):\n #Shift the window by 1 new prediction each time, re-run predictions on new window\n curr_frame = data[0]\n predicted = []\n \n # loop over entire testdata\n for i in range(len(data)):\n predicted.append(model.predict(curr_frame[newaxis,:,:])[0,0]) #get element from shape(1,1)\n curr_frame = curr_frame[1:] #move window\n curr_frame = np.insert(curr_frame, [window_size-1], predicted[-1], axis=0) #fill frame with prediction\n return predicted\n\nstart_time = time.time()\npredicted = predict_sequence_full(model, X_test, seq_len)\npredicted[:5]\nprint(\"Prediction time: \", time.time() - start_time)\n\nplot_results(predicted, y_test)", "Overlaid with the true data we can see that with just 1 epoch and a reasonably small training set of data the LSTM has already done a pretty damn good job of predicting the sin function. You can see that as we predict more and more into the future the error margin increases as errors in the prior predictions are amplified more and more when they are used for future predictions. As such we see that the LSTM hasn’t got the frequency quite right and it drifts the more we try to predict it. However as the sin function is a very easy oscillating function with zero noise it can predict it to a good degree.\nA NOT-SO-SIMPLE STOCK MARKET\nWe predicted several hundred time steps of a sin wave on an accurate point-by-point basis. So we can now just do the same on a stock market time series and make a shit load of money right?\nWell, no.\nA stock time series is unfortunately not a function that can be mapped. It can best be described as a random walk, which makes the whole prediction thing considerably harder. But what about the LSTM identifying any underlying hidden trends? 
Well, let’s take a look.\nHere is a CSV file where I have taken the adjusted daily closing price of the S&P 500 equity index from January 2000 – August 2016. I’ve stripped out everything to make it in the exact same format as our sin wave data and we will now run it through the same model we used on the sin wave with the same train/test split.\nThere is one slight change we need to make to our data, however. Because a sin wave is already a nicely normalized repeating pattern, it works well running the raw data points through the network. However running the adjusted returns of a stock index through a network would make the optimization process shit itself and not converge to any sort of optimums for such large numbers.\nSo to combat this we will take each n-sized window of training/testing data and normalize each one to reflect percentage changes from the start of that window (so the data at point i=0 will always be 0). We’ll use the following equations to normalise and subsequently de-normalise at the end of the prediction process to get a real world number out of the prediction:\nn = normalised list [window] of price changes\np = raw list [window] of adjusted daily return prices\nNormalisation: $n_i = \frac{p_i}{p_0} - 1$\nDe-normalisation: $p_i = p_0\left(n_i + 1\right)$", "start_time = time.time()\npredicted = predict_point_by_point(model, X_test) \npredicted[:5]\nprint(\"Prediction time: \", time.time() - start_time)\n\nplot_results(predicted, y_test, figsize=(20,10))", "Running the data on a single point-by-point prediction as mentioned above gives something that matches the returns pretty closely. But this is deceptive! Why? Well if you look more closely, the prediction line is made up of singular prediction points that have had the whole prior true history window behind them. Because of that, the network doesn’t need to know much about the time series itself other than that each next point most likely won’t be too far from the last point. 
So even if it gets the prediction for the point wrong, the next prediction will then factor in the true history and disregard the incorrect prediction, yet again allowing for an error to be made.\nWe can’t see what is happening in the brain of the LSTM, but I would make a strong case that for this prediction of what is essentially a random walk (and as a matter of point, I have made a completely random walk of data that mimics the look of a stock index, and the exact same thing holds true there as well!) is “predicting” the next point with essentially a Gaussian distribution, allowing the essentially random prediction to not stray too wildly from the true data.\nSo what would we look at if we wanted to see whether there truly was some underlying pattern discernable in just the price movements? Well we would do the same as for the sin wave problem and let the network predict a sequence of points rather than just the next one.\nDoing that we can now see that unlike the sin wave which carried on as a sin wave sequence that was almost identical to the true data, our stock data predictions converge very quickly into some sort of equilibrium.", "start_time = time.time()\npredicted = predict_sequence_full(model, X_test, seq_len)\npredicted[:5]\nprint(\"Prediction time: \", time.time() - start_time)\n\nplot_results(predicted, y_test, figsize=(20,10))\n\ndef plot_results_multiple(predicted_data, true_data, prediction_len, figsize=(12,6)):\n fig = plt.figure(facecolor='white', figsize=figsize)\n ax = fig.add_subplot(111)\n ax.plot(true_data, label='True Data')\n #Pad the list of predictions to shift it in the graph to it's correct start\n for i, data in enumerate(predicted_data):\n padding = [None for p in range(i * prediction_len)]\n plt.plot(padding + data, label='Prediction'+str(i))\n plt.legend()\n plt.show()\n\ndef predict_sequences_multiple(model, data, window_size, prediction_len):\n #Predict sequence of 50 steps before shifting prediction run forward by 50 steps\n 
prediction_seqs = []\n    for i in range(len(data)//prediction_len):\n        curr_frame = data[i*prediction_len]\n        predicted = []\n        for j in range(prediction_len):\n            predicted.append(model.predict(curr_frame[np.newaxis,:,:])[0,0])\n            curr_frame = curr_frame[1:]\n            curr_frame = np.insert(curr_frame, [window_size-1], predicted[-1], axis=0)\n        prediction_seqs.append(predicted)\n    return prediction_seqs\n\nstart_time = time.time()\npredictions = predict_sequences_multiple(model, X_test, seq_len, 50)\n#predicted = predict_sequence_full(model, X_test, seq_len)\n#predicted = predict_point_by_point(model, X_test)\nprint(\"Prediction time: \", time.time() - start_time)\n\nplot_results_multiple(predictions, y_test, 50, figsize=(20,10))", "In fact, when we take a look at the graph above of the same run but with the epochs increased to 400 (which should make the model more accurate), we see that it now just tries to predict an upwards momentum for almost every time period!" ]
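To make the windowed normalisation concrete, here is a minimal plain-Python sketch of the normalise/de-normalise equations described above (the helper names are illustrative, not the article's actual functions):

```python
def normalise_windows(windows):
    # n_i = (p_i / p_0) - 1: each window becomes the percentage change
    # from its first value, so every window starts at 0.
    return [[(p / window[0]) - 1 for p in window] for window in windows]

def denormalise_window(p0, normalised):
    # Invert the normalisation: p_i = p_0 * (n_i + 1), where p0 is the
    # raw price at the start of the window.
    return [p0 * (n + 1) for n in normalised]

windows = [[100.0, 102.0, 99.0], [99.0, 101.0, 103.0]]
norm = normalise_windows(windows)            # percentage changes per window
prices = denormalise_window(100.0, norm[0])  # recovers the raw first window
```

This is why a window that starts at 100 and a window that starts at 2000 both feed the network values near zero, which keeps the optimisation well behaved.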
GoogleCloudPlatform/vertex-ai-samples
notebooks/community/gapic/automl/showcase_automl_text_entity_extraction_online.ipynb
apache-2.0
[ "# Copyright 2020 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Vertex client library: AutoML text entity extraction model for online prediction\n<table align=\"left\">\n <td>\n <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/automl/showcase_automl_text_entity_extraction_online.ipynb\">\n <img src=\"https://cloud.google.com/ml-engine/images/colab-logo-32px.png\" alt=\"Colab logo\"> Run in Colab\n </a>\n </td>\n <td>\n <a href=\"https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/automl/showcase_automl_text_entity_extraction_online.ipynb\">\n <img src=\"https://cloud.google.com/ml-engine/images/github-logo-32px.png\" alt=\"GitHub logo\">\n View on GitHub\n </a>\n </td>\n</table>\n<br/><br/><br/>\nOverview\nThis tutorial demonstrates how to use the Vertex client library for Python to create text entity extraction models and do online prediction using Google Cloud's AutoML.\nDataset\nThe dataset used for this tutorial is the NCBI Disease Research Abstracts dataset from National Center for Biotechnology Information. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket.\nObjective\nIn this tutorial, you create an AutoML text entity extraction model and deploy for online prediction from a Python script using the Vertex client library. 
You can alternatively create and deploy models using the gcloud command-line tool or online using the Google Cloud Console.\nThe steps performed include:\n\nCreate a Vertex Dataset resource.\nTrain the model.\nView the model evaluation.\nDeploy the Model resource to a serving Endpoint resource.\nMake a prediction.\nUndeploy the Model.\n\nCosts\nThis tutorial uses billable components of Google Cloud (GCP):\n\nVertex AI\nCloud Storage\n\nLearn about Vertex AI\npricing and Cloud Storage\npricing, and use the Pricing\nCalculator\nto generate a cost estimate based on your projected usage.\nInstallation\nInstall the latest version of Vertex client library.", "import os\nimport sys\n\n# Google Cloud Notebook\nif os.path.exists(\"/opt/deeplearning/metadata/env_version\"):\n USER_FLAG = \"--user\"\nelse:\n USER_FLAG = \"\"\n\n! pip3 install -U google-cloud-aiplatform $USER_FLAG", "Install the latest GA version of google-cloud-storage library as well.", "! pip3 install -U google-cloud-storage $USER_FLAG", "Restart the kernel\nOnce you've installed the Vertex client library and Google cloud-storage, you need to restart the notebook kernel so it can find the packages.", "if not os.getenv(\"IS_TESTING\"):\n # Automatically restart kernel after installs\n import IPython\n\n app = IPython.Application.instance()\n app.kernel.do_shutdown(True)", "Before you begin\nGPU runtime\nMake sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU\nSet up your Google Cloud project\nThe following steps are required, regardless of your notebook environment.\n\n\nSelect or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.\n\n\nMake sure that billing is enabled for your project.\n\n\nEnable the Vertex APIs and Compute Engine APIs.\n\n\nThe Google Cloud SDK is already installed in Google Cloud Notebook.\n\n\nEnter your project ID in the cell below. 
Then run the cell to make sure the\nCloud SDK uses the right project for all the commands in this notebook.\n\n\nNote: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.", "PROJECT_ID = \"[your-project-id]\" # @param {type:\"string\"}\n\nif PROJECT_ID == \"\" or PROJECT_ID is None or PROJECT_ID == \"[your-project-id]\":\n # Get your GCP project id from gcloud\n shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null\n PROJECT_ID = shell_output[0]\n print(\"Project ID:\", PROJECT_ID)\n\n! gcloud config set project $PROJECT_ID", "Region\nYou can also change the REGION variable, which is used for operations\nthroughout the rest of this notebook. Below are regions supported for Vertex. We recommend that you choose the region closest to you.\n\nAmericas: us-central1\nEurope: europe-west4\nAsia Pacific: asia-east1\n\nYou may not use a multi-regional bucket for training with Vertex. Not all regions provide support for all Vertex services. For the latest support per region, see the Vertex locations documentation", "REGION = \"us-central1\" # @param {type: \"string\"}", "Timestamp\nIf you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.", "from datetime import datetime\n\nTIMESTAMP = datetime.now().strftime(\"%Y%m%d%H%M%S\")", "Authenticate your Google Cloud account\nIf you are using Google Cloud Notebook, your environment is already authenticated. 
Skip this step.\nIf you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.\nOtherwise, follow these steps:\nIn the Cloud Console, go to the Create service account key page.\nClick Create service account.\nIn the Service account name field, enter a name, and click Create.\nIn the Grant this service account access to project section, click the Role drop-down list. Type \"Vertex\" into the filter box, and select Vertex Administrator. Type \"Storage Object Admin\" into the filter box, and select Storage Object Admin.\nClick Create. A JSON file that contains your key downloads to your local environment.\nEnter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.", "# If you are running this notebook in Colab, run this cell and follow the\n# instructions to authenticate your GCP account. This provides access to your\n# Cloud Storage bucket and lets you submit training jobs and prediction\n# requests.\n\n# If on Google Cloud Notebook, then don't execute this code\nif not os.path.exists(\"/opt/deeplearning/metadata/env_version\"):\n if \"google.colab\" in sys.modules:\n from google.colab import auth as google_auth\n\n google_auth.authenticate_user()\n\n # If you are running this notebook locally, replace the string below with the\n # path to your service account key and run this cell to authenticate your GCP\n # account.\n elif not os.getenv(\"IS_TESTING\"):\n %env GOOGLE_APPLICATION_CREDENTIALS ''", "Set up variables\nNext, set up some variables used throughout the tutorial.\nImport libraries and define constants\nImport Vertex client library\nImport the Vertex client library into our Python environment.", "import time\n\nfrom google.cloud.aiplatform import gapic as aip\nfrom google.protobuf import json_format\nfrom google.protobuf.json_format import MessageToJson, ParseDict\nfrom google.protobuf.struct_pb2 import Struct, Value", "Vertex 
constants\nSet up the following constants for Vertex:\n\nAPI_ENDPOINT: The Vertex API service endpoint for dataset, model, job, pipeline and endpoint services.\nPARENT: The Vertex location root path for dataset, model, job, pipeline and endpoint resources.", "# API service endpoint\nAPI_ENDPOINT = \"{}-aiplatform.googleapis.com\".format(REGION)\n\n# Vertex location root path for your dataset, model and endpoint resources\nPARENT = \"projects/\" + PROJECT_ID + \"/locations/\" + REGION", "AutoML constants\nSet constants unique to AutoML datasets and training:\n\nDataset Schemas: Tells the Dataset resource service which type of dataset it is.\nData Labeling (Annotations) Schemas: Tells the Dataset resource service how the data is labeled (annotated).\nDataset Training Schemas: Tells the Pipeline resource service the task (e.g., classification) to train the model for.", "# Text Dataset type\nDATA_SCHEMA = \"gs://google-cloud-aiplatform/schema/dataset/metadata/text_1.0.0.yaml\"\n# Text Labeling type\nLABEL_SCHEMA = \"gs://google-cloud-aiplatform/schema/dataset/ioformat/text_extraction_io_format_1.0.0.yaml\"\n# Text Training task\nTRAINING_SCHEMA = \"gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_text_extraction_1.0.0.yaml\"", "Tutorial\nNow you are ready to start creating your own AutoML text entity extraction model.\nSet up clients\nThe Vertex client library uses a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex server.\nYou will use different clients in this tutorial for different steps in the workflow. 
So set them all up upfront.\n\nDataset Service for Dataset resources.\nModel Service for Model resources.\nPipeline Service for training.\nEndpoint Service for deployment.\nPrediction Service for serving.", "# client options same for all services\nclient_options = {\"api_endpoint\": API_ENDPOINT}\n\n\ndef create_dataset_client():\n client = aip.DatasetServiceClient(client_options=client_options)\n return client\n\n\ndef create_model_client():\n client = aip.ModelServiceClient(client_options=client_options)\n return client\n\n\ndef create_pipeline_client():\n client = aip.PipelineServiceClient(client_options=client_options)\n return client\n\n\ndef create_endpoint_client():\n client = aip.EndpointServiceClient(client_options=client_options)\n return client\n\n\ndef create_prediction_client():\n client = aip.PredictionServiceClient(client_options=client_options)\n return client\n\n\nclients = {}\nclients[\"dataset\"] = create_dataset_client()\nclients[\"model\"] = create_model_client()\nclients[\"pipeline\"] = create_pipeline_client()\nclients[\"endpoint\"] = create_endpoint_client()\nclients[\"prediction\"] = create_prediction_client()\n\nfor client in clients.items():\n print(client)", "Dataset\nNow that your clients are ready, your first step in training a model is to create a managed dataset instance, and then upload your labeled data to it.\nCreate Dataset resource instance\nUse the helper function create_dataset to create the instance of a Dataset resource. 
This function does the following:\n\nUses the dataset client service.\nCreates a Vertex Dataset resource (aip.Dataset), with the following parameters:\ndisplay_name: The human-readable name you choose to give it.\nmetadata_schema_uri: The schema for the dataset type.\nCalls the client dataset service method create_dataset, with the following parameters:\nparent: The Vertex location root path for your Dataset, Model and Endpoint resources.\ndataset: The Vertex dataset object instance you created.\nThe method returns an operation object.\n\nAn operation object is how Vertex handles asynchronous calls for long running operations. While this step usually goes fast, when you first use it in your project, there is a longer delay due to provisioning.\nYou can use the operation object to get status on the operation (e.g., create Dataset resource) or to cancel the operation, by invoking an operation method:\n| Method | Description |\n| ----------- | ----------- |\n| result() | Waits for the operation to complete and returns a result object in JSON format. |\n| running() | Returns True/False on whether the operation is still running. |\n| done() | Returns True/False on whether the operation is completed. |\n| canceled() | Returns True/False on whether the operation was canceled. |\n| cancel() | Cancels the operation (this may take up to 30 seconds). 
|", "TIMEOUT = 90\n\n\ndef create_dataset(name, schema, labels=None, timeout=TIMEOUT):\n    start_time = time.time()\n    try:\n        dataset = aip.Dataset(\n            display_name=name, metadata_schema_uri=schema, labels=labels\n        )\n\n        operation = clients[\"dataset\"].create_dataset(parent=PARENT, dataset=dataset)\n        print(\"Long running operation:\", operation.operation.name)\n        result = operation.result(timeout=timeout)\n        print(\"time:\", time.time() - start_time)\n        print(\"response\")\n        print(\" name:\", result.name)\n        print(\" display_name:\", result.display_name)\n        print(\" metadata_schema_uri:\", result.metadata_schema_uri)\n        print(\" metadata:\", dict(result.metadata))\n        print(\" create_time:\", result.create_time)\n        print(\" update_time:\", result.update_time)\n        print(\" etag:\", result.etag)\n        print(\" labels:\", dict(result.labels))\n        return result\n    except Exception as e:\n        print(\"exception:\", e)\n        return None\n\n\nresult = create_dataset(\"biomedical-\" + TIMESTAMP, DATA_SCHEMA)", "Now save the unique dataset identifier for the Dataset resource instance you created.", "# The full unique ID for the dataset\ndataset_id = result.name\n# The short numeric ID for the dataset\ndataset_short_id = dataset_id.split(\"/\")[-1]\n\nprint(dataset_id)", "Data preparation\nThe Vertex Dataset resource for text has a couple of requirements for your text entity extraction data.\n\nText examples must be stored in a JSONL file. 
Unlike text classification and sentiment analysis, a CSV index file is not supported.\nThe examples must be either inline text or reference text files that are in Cloud Storage buckets.\n\nJSONL\nFor text entity extraction, the JSONL file has a few requirements:\n\nEach data item is a separate JSON object, on a separate line.\nThe key/value pair text_segment_annotations is a list of character start/end positions in the text per entity with the corresponding label.\ndisplay_name: The label.\nstart_offset/end_offset: The character offsets of the start/end of the entity.\n\nThe key/value pair text_content is the text.\n{'text_segment_annotations': [{'end_offset': value, 'start_offset': value, 'display_name': label}, ...], 'text_content': text}\n\n\nNote: The dictionary key fields may alternatively be in camelCase. For example, 'display_name' can also be 'displayName'.\nLocation of Cloud Storage training data.\nNow set the variable IMPORT_FILE to the location of the JSONL index file in Cloud Storage.", "IMPORT_FILE = \"gs://ucaip-test-us-central1/dataset/ucaip_ten_dataset.jsonl\"", "Quick peek at your data\nYou will use a version of the NCBI Biomedical dataset that is stored in a public Cloud Storage bucket, using a JSONL index file.\nStart by doing a quick peek at the data. You count the number of examples by counting the number of objects in a JSONL index file (wc -l) and then peek at the first few rows.", "if \"IMPORT_FILES\" in globals():\n FILE = IMPORT_FILES[0]\nelse:\n FILE = IMPORT_FILE\n\ncount = ! gsutil cat $FILE | wc -l\nprint(\"Number of Examples\", int(count[0]))\n\nprint(\"First 10 rows\")\n! gsutil cat $FILE | head", "Import data\nNow, import the data into your Vertex Dataset resource. Use this helper function import_data to import the data. 
The function does the following:\n\nUses the Dataset client.\nCalls the client method import_data, with the following parameters:\nname: The Vertex fully qualified identifier of the Dataset resource.\n\nimport_configs: The import configuration.\n\n\nimport_configs: A Python list containing a dictionary, with the key/value entries:\n\ngcs_sources: A list of URIs to the paths of the one or more index files.\nimport_schema_uri: The schema identifying the labeling type.\n\nThe import_data() method returns a long running operation object. This will take a few minutes to complete. If you are in a live tutorial, this would be a good time to ask questions, or take a personal break.", "def import_data(dataset, gcs_sources, schema):\n    config = [{\"gcs_source\": {\"uris\": gcs_sources}, \"import_schema_uri\": schema}]\n    print(\"dataset:\", dataset)\n    start_time = time.time()\n    try:\n        operation = clients[\"dataset\"].import_data(\n            name=dataset, import_configs=config\n        )\n        print(\"Long running operation:\", operation.operation.name)\n\n        result = operation.result()\n        print(\"result:\", result)\n        print(\"time:\", int(time.time() - start_time), \"secs\")\n        print(\"error:\", operation.exception())\n        print(\"meta :\", operation.metadata)\n        print(\n            \"after: running:\",\n            operation.running(),\n            \"done:\",\n            operation.done(),\n            \"cancelled:\",\n            operation.cancelled(),\n        )\n\n        return operation\n    except Exception as e:\n        print(\"exception:\", e)\n        return None\n\n\nimport_data(dataset_id, [IMPORT_FILE], LABEL_SCHEMA)", "Train the model\nNow train an AutoML text entity extraction model using your Vertex Dataset resource. To train the model, do the following steps:\n\nCreate a Vertex training pipeline for the Dataset resource.\nExecute the pipeline to start the training.\n\nCreate a training pipeline\nYou may ask, what do we use a pipeline for? 
You typically use pipelines when the job (such as training) has multiple steps, generally in sequential order: do step A, do step B, etc. By putting the steps into a pipeline, we gain the benefits of:\n\nIt is reusable for subsequent training jobs.\nIt can be containerized and run as a batch job.\nIt can be distributed.\nAll the steps are associated with the same pipeline job for tracking progress.\n\nUse this helper function create_pipeline, which takes the following parameters:\n\npipeline_name: A human readable name for the pipeline job.\nmodel_name: A human readable name for the model.\ndataset: The Vertex fully qualified dataset identifier.\nschema: The dataset labeling (annotation) training schema.\ntask: A dictionary describing the requirements for the training job.\n\nThe helper function calls the Pipeline client service's method create_pipeline, which takes the following parameters:\n\nparent: The Vertex location root path for your Dataset, Model and Endpoint resources.\ntraining_pipeline: The full specification for the pipeline training job.\n\nLet's now look deeper into the minimal requirements for constructing a training_pipeline specification:\n\ndisplay_name: A human readable name for the pipeline job.\ntraining_task_definition: The dataset labeling (annotation) training schema.\ntraining_task_inputs: A dictionary describing the requirements for the training job.\nmodel_to_upload: A human readable name for the model.\ninput_data_config: The dataset specification.\ndataset_id: The Vertex dataset identifier only (non-fully qualified) -- this is the last part of the fully qualified identifier.\nfraction_split: If specified, the percentages of the dataset to use for training, test and validation. 
Otherwise, the percentages are automatically selected by AutoML.", "def create_pipeline(pipeline_name, model_name, dataset, schema, task):\n\n dataset_id = dataset.split(\"/\")[-1]\n\n input_config = {\n \"dataset_id\": dataset_id,\n \"fraction_split\": {\n \"training_fraction\": 0.8,\n \"validation_fraction\": 0.1,\n \"test_fraction\": 0.1,\n },\n }\n\n training_pipeline = {\n \"display_name\": pipeline_name,\n \"training_task_definition\": schema,\n \"training_task_inputs\": task,\n \"input_data_config\": input_config,\n \"model_to_upload\": {\"display_name\": model_name},\n }\n\n try:\n pipeline = clients[\"pipeline\"].create_training_pipeline(\n parent=PARENT, training_pipeline=training_pipeline\n )\n print(pipeline)\n except Exception as e:\n print(\"exception:\", e)\n return None\n return pipeline", "Construct the task requirements\nNext, construct the task requirements. Unlike other parameters which take a Python (JSON-like) dictionary, the task field takes a Google protobuf Struct, which is very similar to a Python dictionary. 
Use the json_format.ParseDict method for the conversion.\nThe minimal fields you need to specify are:\n\nmulti_label: Whether True/False this is a multi-label (vs single) classification.\nbudget_milli_node_hours: The maximum time to budget (billed) for training the model, where 1000 = 1 hour.\nmodel_type: The type of deployed model:\nCLOUD: For deploying to Google Cloud.\ndisable_early_stopping: Whether True/False to let AutoML use its judgement to stop training early or train for the entire budget.\n\nFinally, you create the pipeline by calling the helper function create_pipeline, which returns an instance of a training pipeline object.", "PIPE_NAME = \"biomedical_pipe-\" + TIMESTAMP\nMODEL_NAME = \"biomedical_model-\" + TIMESTAMP\n\ntask = json_format.ParseDict(\n {\n \"multi_label\": False,\n \"budget_milli_node_hours\": 8000,\n \"model_type\": \"CLOUD\",\n \"disable_early_stopping\": False,\n },\n Value(),\n)\n\nresponse = create_pipeline(PIPE_NAME, MODEL_NAME, dataset_id, TRAINING_SCHEMA, task)", "Now save the unique identifier of the training pipeline you created.", "# The full unique ID for the pipeline\npipeline_id = response.name\n# The short numeric ID for the pipeline\npipeline_short_id = pipeline_id.split(\"/\")[-1]\n\nprint(pipeline_id)", "Get information on a training pipeline\nNow get pipeline information for just this training pipeline instance. 
The helper function gets the job information for just this job by calling the pipeline client service's get_training_pipeline method, with the following parameter:\n\nname: The Vertex fully qualified pipeline identifier.\n\nWhen the model is done training, the pipeline state will be PIPELINE_STATE_SUCCEEDED.", "def get_training_pipeline(name, silent=False):\n    response = clients[\"pipeline\"].get_training_pipeline(name=name)\n    if silent:\n        return response\n\n    print(\"pipeline\")\n    print(\" name:\", response.name)\n    print(\" display_name:\", response.display_name)\n    print(\" state:\", response.state)\n    print(\" training_task_definition:\", response.training_task_definition)\n    print(\" training_task_inputs:\", dict(response.training_task_inputs))\n    print(\" create_time:\", response.create_time)\n    print(\" start_time:\", response.start_time)\n    print(\" end_time:\", response.end_time)\n    print(\" update_time:\", response.update_time)\n    print(\" labels:\", dict(response.labels))\n    return response\n\n\nresponse = get_training_pipeline(pipeline_id)", "Deployment\nTraining the above model may take upwards of 120 minutes.\nOnce your model is done training, you can calculate the actual time it took to train the model by subtracting end_time from start_time. For your model, you will need to know the fully qualified Vertex Model resource identifier, which the pipeline service assigned to it. 
You can get this from the returned pipeline instance as the field model_to_upload.name.", "while True:\n    response = get_training_pipeline(pipeline_id, True)\n    if response.state != aip.PipelineState.PIPELINE_STATE_SUCCEEDED:\n        print(\"Training job has not completed:\", response.state)\n        model_to_deploy_id = None\n        if response.state == aip.PipelineState.PIPELINE_STATE_FAILED:\n            raise Exception(\"Training Job Failed\")\n    else:\n        model_to_deploy = response.model_to_upload\n        model_to_deploy_id = model_to_deploy.name\n        print(\"Training Time:\", response.end_time - response.start_time)\n        break\n    time.sleep(60)\n\nprint(\"model to deploy:\", model_to_deploy_id)", "Model information\nNow that your model is trained, you can get some information on your model.\nEvaluate the Model resource\nNow find out how good the model service believes your model is. As part of training, some portion of the dataset was set aside as the test (holdout) data, which is used by the pipeline service to evaluate the model.\nList evaluations for all slices\nUse this helper function list_model_evaluations, which takes the following parameter:\n\nname: The Vertex fully qualified model identifier for the Model resource.\n\nThis helper function uses the model client service's list_model_evaluations method, which takes the same parameter. 
The response object from the call is a list, where each element is an evaluation metric.\nFor each evaluation -- you probably only have one -- we then print all the key names for each metric in the evaluation, and for a small set (confusionMatrix and confidenceMetrics) you will print the result.", "def list_model_evaluations(name):\n    response = clients[\"model\"].list_model_evaluations(parent=name)\n    for evaluation in response:\n        print(\"model_evaluation\")\n        print(\" name:\", evaluation.name)\n        print(\" metrics_schema_uri:\", evaluation.metrics_schema_uri)\n        metrics = json_format.MessageToDict(evaluation._pb.metrics)\n        for metric in metrics.keys():\n            print(metric)\n        print(\"confusionMatrix\", metrics[\"confusionMatrix\"])\n        print(\"confidenceMetrics\", metrics[\"confidenceMetrics\"])\n\n    return evaluation.name\n\n\nlast_evaluation = list_model_evaluations(model_to_deploy_id)", "Deploy the Model resource\nNow deploy the trained Vertex Model resource you created with AutoML. This requires two steps:\n\n\nCreate an Endpoint resource for deploying the Model resource to.\n\n\nDeploy the Model resource to the Endpoint resource.\n\n\nCreate an Endpoint resource\nUse this helper function create_endpoint to create an endpoint to deploy the model to for serving predictions, with the following parameter:\n\ndisplay_name: A human readable name for the Endpoint resource.\n\nThe helper function uses the endpoint client service's create_endpoint method, which takes the following parameter:\n\ndisplay_name: A human readable name for the Endpoint resource.\n\nCreating an Endpoint resource returns a long running operation, since it may take a few moments to provision the Endpoint resource for serving. You call response.result(), which is a synchronous call and will return when the Endpoint resource is ready. 
The helper function returns the Vertex fully qualified identifier for the Endpoint resource: response.name.", "ENDPOINT_NAME = \"biomedical_endpoint-\" + TIMESTAMP\n\n\ndef create_endpoint(display_name):\n endpoint = {\"display_name\": display_name}\n response = clients[\"endpoint\"].create_endpoint(parent=PARENT, endpoint=endpoint)\n print(\"Long running operation:\", response.operation.name)\n\n result = response.result(timeout=300)\n print(\"result\")\n print(\" name:\", result.name)\n print(\" display_name:\", result.display_name)\n print(\" description:\", result.description)\n print(\" labels:\", result.labels)\n print(\" create_time:\", result.create_time)\n print(\" update_time:\", result.update_time)\n return result\n\n\nresult = create_endpoint(ENDPOINT_NAME)", "Now get the unique identifier for the Endpoint resource you created.", "# The full unique ID for the endpoint\nendpoint_id = result.name\n# The short numeric ID for the endpoint\nendpoint_short_id = endpoint_id.split(\"/\")[-1]\n\nprint(endpoint_id)", "Compute instance scaling\nYou have several choices on scaling the compute instances for handling your online prediction requests:\n\nSingle Instance: The online prediction requests are processed on a single compute instance.\n\nSet the minimum (MIN_NODES) and maximum (MAX_NODES) number of compute instances to one.\n\n\nManual Scaling: The online prediction requests are split across a fixed number of compute instances that you manually specified.\n\n\nSet the minimum (MIN_NODES) and maximum (MAX_NODES) number of compute instances to the same number of nodes. 
When a model is first deployed to the instance, the fixed number of compute instances are provisioned and online prediction requests are evenly distributed across them.\n\n\nAuto Scaling: The online prediction requests are split across a scalable number of compute instances.\n\nSet the minimum (MIN_NODES) number of compute instances to provision when a model is first deployed and to de-provision, and set the maximum (MAX_NODES) number of compute instances to provision, depending on load conditions.\n\nThe minimum number of compute instances corresponds to the field min_replica_count and the maximum number of compute instances corresponds to the field max_replica_count, in your subsequent deployment request.", "MIN_NODES = 1\nMAX_NODES = 1", "Deploy Model resource to the Endpoint resource\nUse this helper function deploy_model to deploy the Model resource to the Endpoint resource you created for serving predictions, with the following parameters:\n\nmodel: The Vertex fully qualified model identifier of the model to upload (deploy) from the training pipeline.\ndeploy_model_display_name: A human readable name for the deployed model.\nendpoint: The Vertex fully qualified endpoint identifier to deploy the model to.\n\nThe helper function calls the Endpoint client service's method deploy_model, which takes the following parameters:\n\nendpoint: The Vertex fully qualified Endpoint resource identifier to deploy the Model resource to.\ndeployed_model: The requirements specification for deploying the model.\ntraffic_split: Percent of traffic at the endpoint that goes to this model, which is specified as a dictionary of one or more key/value pairs.\nIf only one model, then specify as { \"0\": 100 }, where \"0\" refers to this model being uploaded and 100 means 100% of the traffic.\nIf there are existing models on the endpoint, for which the traffic will be split, then use model_id to specify as { \"0\": percent, model_id: percent, ... 
}, where model_id is the model id of an existing model deployed to the endpoint. The percents must add up to 100.\n\nLet's now dive deeper into the deployed_model parameter. This parameter is specified as a Python dictionary with the minimum required fields:\n\nmodel: The Vertex fully qualified model identifier of the (upload) model to deploy.\ndisplay_name: A human readable name for the deployed model.\ndisable_container_logging: This disables logging of container events, such as execution failures (by default, container logging is enabled). Container logging is typically enabled when debugging the deployment and then disabled when deployed for production.\nautomatic_resources: This refers to how many redundant compute instances (replicas). For this example, we set it to one (no replication).\n\nTraffic Split\nLet's now dive deeper into the traffic_split parameter. This parameter is specified as a Python dictionary. This might at first be a tad bit confusing. Let me explain: you can deploy more than one instance of your model to an endpoint, and then set how much (percent) goes to each instance.\nWhy would you do that? Perhaps you already have a previous version deployed in production -- let's call that v1. You got better model evaluation on v2, but you don't know for certain that it is really better until you deploy to production. So in the case of traffic split, you might want to deploy v2 to the same endpoint as v1, but it only gets, say, 10% of the traffic. That way, you can monitor how well it does without disrupting the majority of users -- until you make a final decision.\nResponse\nThe method returns a long running operation response. We will wait synchronously for the operation to complete by calling response.result(), which will block until the model is deployed. 
If this is the first time a model is deployed to the endpoint, it may take a few additional minutes to complete provisioning of resources.", "DEPLOYED_NAME = \"biomedical_deployed-\" + TIMESTAMP\n\n\ndef deploy_model(\n model, deployed_model_display_name, endpoint, traffic_split={\"0\": 100}\n):\n\n deployed_model = {\n \"model\": model,\n \"display_name\": deployed_model_display_name,\n \"automatic_resources\": {\n \"min_replica_count\": MIN_NODES,\n \"max_replica_count\": MAX_NODES,\n },\n }\n\n response = clients[\"endpoint\"].deploy_model(\n endpoint=endpoint, deployed_model=deployed_model, traffic_split=traffic_split\n )\n\n print(\"Long running operation:\", response.operation.name)\n result = response.result()\n print(\"result\")\n deployed_model = result.deployed_model\n print(\" deployed_model\")\n print(\" id:\", deployed_model.id)\n print(\" model:\", deployed_model.model)\n print(\" display_name:\", deployed_model.display_name)\n print(\" create_time:\", deployed_model.create_time)\n\n return deployed_model.id\n\n\ndeployed_model_id = deploy_model(model_to_deploy_id, DEPLOYED_NAME, endpoint_id)", "Make an online prediction request\nNow make an online prediction to your deployed model.\nMake test item\nYou will use synthetic data as a test data item. Don't be concerned that we are using synthetic data -- we just want to demonstrate how to make a prediction.", "test_item = 'Molecular basis of hexosaminidase A deficiency and pseudodeficiency in the Berks County Pennsylvania Dutch.\\tFollowing the birth of two infants with Tay-Sachs disease ( TSD ) , a non-Jewish , Pennsylvania Dutch kindred was screened for TSD carriers using the biochemical assay . A high frequency of individuals who appeared to be TSD heterozygotes was detected ( Kelly et al . , 1975 ) . Clinical and biochemical evidence suggested that the increased carrier frequency was due to at least two altered alleles for the hexosaminidase A alpha-subunit . 
We now report two mutant alleles in this Pennsylvania Dutch kindred , and one polymorphism . One allele , reported originally in a French TSD patient ( Akli et al . , 1991 ) , is a GT-- > AT transition at the donor splice-site of intron 9 . The second , a C-- > T transition at nucleotide 739 ( Arg247Trp ) , has been shown by Triggs-Raine et al . ( 1992 ) to be a clinically benign \" pseudodeficient \" allele associated with reduced enzyme activity against artificial substrate . Finally , a polymorphism [ G-- > A ( 759 ) ] , which leaves valine at codon 253 unchanged , is described'", "Make a prediction\nNow you have a test item. Use this helper function predict_item, which takes the following parameters:\n\nfilename: The Cloud Storage path to the test item.\nendpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model resource was deployed.\nparameters_dict: Additional filtering parameters for serving prediction results.\n\nThis function calls the prediction client service's predict method with the following parameters:\n\nendpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model resource was deployed.\ninstances: A list of instances (text files) to predict.\nparameters: Additional filtering parameters for serving prediction results. Note, text models do not support additional parameters.\n\nRequest\nThe format of each instance is:\n{ 'content': text_item }\n\nSince the predict() method can take multiple items (instances), you send your single test item as a list of one test item. As a final step, you package the instances list into Google's protobuf format -- which is what we pass to the predict() method.\nResponse\nThe response object returns a list, where each element in the list corresponds to the corresponding data item in the request. 
You will see in the output for each prediction -- in our case there is just one:\n\nprediction: A list of IDs assigned to each entity extracted from the text.\nconfidences: The confidence level between 0 and 1 for each entity.\ndisplay_names: The label name for each entity.\ntextSegmentStartOffsets: The character start location of the entity in the text.\ntextSegmentEndOffsets: The character end location of the entity in the text.", "def predict_item(data, endpoint, parameters_dict):\n\n parameters = json_format.ParseDict(parameters_dict, Value())\n\n # The format of each instance should conform to the deployed model's prediction input schema.\n instances_list = [{\"content\": data}]\n instances = [json_format.ParseDict(s, Value()) for s in instances_list]\n\n response = clients[\"prediction\"].predict(\n endpoint=endpoint, instances=instances, parameters=parameters\n )\n print(\"response\")\n print(\" deployed_model_id:\", response.deployed_model_id)\n predictions = response.predictions\n print(\"predictions\")\n for prediction in predictions:\n print(\" prediction:\", dict(prediction))\n return response\n\n\nresponse = predict_item(test_item, endpoint_id, None)", "Undeploy the Model resource\nNow undeploy your Model resource from the serving Endpoint resource. 
Use this helper function undeploy_model, which takes the following parameters:\n\ndeployed_model_id: The model deployment identifier returned by the endpoint service when the Model resource was deployed.\nendpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model resource is deployed.\n\nThis function calls the endpoint client service's method undeploy_model, with the following parameters:\n\ndeployed_model_id: The model deployment identifier returned by the endpoint service when the Model resource was deployed.\nendpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model resource is deployed.\ntraffic_split: How to split traffic among the remaining deployed models on the Endpoint resource.\n\nSince this is the only deployed model on the Endpoint resource, you can simply leave traffic_split empty by setting it to {}.", "def undeploy_model(deployed_model_id, endpoint):\n response = clients[\"endpoint\"].undeploy_model(\n endpoint=endpoint, deployed_model_id=deployed_model_id, traffic_split={}\n )\n print(response)\n\n\nundeploy_model(deployed_model_id, endpoint_id)", "Cleaning up\nTo clean up all GCP resources used in this project, you can delete the GCP\nproject you used for the tutorial.\nOtherwise, you can delete the individual resources you created in this tutorial:\n\nDataset\nPipeline\nModel\nEndpoint\nBatch Job\nCustom Job\nHyperparameter Tuning Job\nCloud Storage Bucket", "delete_dataset = True\ndelete_pipeline = True\ndelete_model = True\ndelete_endpoint = True\ndelete_batchjob = True\ndelete_customjob = True\ndelete_hptjob = True\ndelete_bucket = True\n\n# Delete the dataset using the Vertex fully qualified identifier for the dataset\ntry:\n if delete_dataset and \"dataset_id\" in globals():\n clients[\"dataset\"].delete_dataset(name=dataset_id)\nexcept Exception as e:\n print(e)\n\n# Delete the training pipeline using the Vertex fully qualified identifier for the pipeline\ntry:\n if delete_pipeline 
and \"pipeline_id\" in globals():\n clients[\"pipeline\"].delete_training_pipeline(name=pipeline_id)\nexcept Exception as e:\n print(e)\n\n# Delete the model using the Vertex fully qualified identifier for the model\ntry:\n if delete_model and \"model_to_deploy_id\" in globals():\n clients[\"model\"].delete_model(name=model_to_deploy_id)\nexcept Exception as e:\n print(e)\n\n# Delete the endpoint using the Vertex fully qualified identifier for the endpoint\ntry:\n if delete_endpoint and \"endpoint_id\" in globals():\n clients[\"endpoint\"].delete_endpoint(name=endpoint_id)\nexcept Exception as e:\n print(e)\n\n# Delete the batch job using the Vertex fully qualified identifier for the batch job\ntry:\n if delete_batchjob and \"batch_job_id\" in globals():\n clients[\"job\"].delete_batch_prediction_job(name=batch_job_id)\nexcept Exception as e:\n print(e)\n\n# Delete the custom job using the Vertex fully qualified identifier for the custom job\ntry:\n if delete_customjob and \"job_id\" in globals():\n clients[\"job\"].delete_custom_job(name=job_id)\nexcept Exception as e:\n print(e)\n\n# Delete the hyperparameter tuning job using the Vertex fully qualified identifier for the hyperparameter tuning job\ntry:\n if delete_hptjob and \"hpt_job_id\" in globals():\n clients[\"job\"].delete_hyperparameter_tuning_job(name=hpt_job_id)\nexcept Exception as e:\n print(e)\n\nif delete_bucket and \"BUCKET_NAME\" in globals():\n ! gsutil rm -r $BUCKET_NAME" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
eds-uga/csci1360-fa16
assignments/A7/A7_Q2.ipynb
mit
[ "Q2\nIn this question, you'll implement methods that are crucial for linear algebra: vector and matrix multiplication.\nA\nYou've done this before, but you'll do it once more: write a function named dot_product which takes two 1D NumPy arrays (vectors) and computes their pairwise dot product. The function will take two arguments: two 1D NumPy arrays. It will return one floating-point number: the dot product.\nRecall that the dot product is a sum of products: the corresponding elements of two vectors are multiplied with each other, then all those products are summed up into one final number.\nFor example, dot_product([1, 2, 3], [4, 5, 6]) will perform the operation:\n(1 * 4) + (2 * 5) + (3 * 6) = 4 + 10 + 18 = 32\nso dot_product([1, 2, 3], [4, 5, 6]) should return 32.\nIf the vectors have different lengths, this function should return None.\nYou can use numpy for arrays and the numpy.sum function, but no others (especially not the numpy.dot function, which does exactly this).", "try:\n dot_product\nexcept:\n assert False\nelse:\n assert True\n\nimport numpy as np\nnp.random.seed(56985)\n\nx = np.random.random(48)\ny = np.random.random(48)\nnp.testing.assert_allclose(14.012537210130272, dot_product(x, y))\n\nx = np.random.random(48)\ny = np.random.random(49)\nassert dot_product(x, y) is None", "B\nWrite a function mv_multiply which takes a 2D NumPy matrix as the first argument, and a 1D NumPy vector as the second, and multiplies them together, returning the resulting vector.\nThis function will specifically perform the operation $\vec{y} = A * \vec{x}$, where $A$ and $\vec{x}$ are the function arguments. Remember how to perform this multiplication:\n\n\nFirst, you need to check that the number of columns of $A$ is the same as the length of $\vec{x}$. 
If not, you should print an error message and return None.\n\n\nSecond, you'll compute the dot product of each row of $A$ with the entire vector $\\vec{x}$.\n\n\nThird, the result of the dot product from the $i^{th}$ row of $A$ will go in the $i^{th}$ element of the solution vector, $\\vec{y}$. Therefore, $\\vec{y}$ will have the same number of elements as rows of $A$.\n\n\nYou can use numpy for arrays, and your dot_product function from Part A, but no other functions.", "try:\n mv_multiply\nexcept:\n assert False\nelse:\n assert True\n\nimport numpy as np\nnp.random.seed(487543)\n\nA = np.random.random((92, 458))\nv = np.random.random(458)\nnp.testing.assert_allclose(mv_multiply(A, v), np.dot(A, v))\n\nimport numpy as np\nnp.random.seed(49589)\n\nA = np.random.random((83, 75))\nv = np.random.random(83)\nassert mv_multiply(A, v) is None", "C\nWrite a function mm_multiply which takes two 2D NumPy matrices as arguments, and returns their matrix product.\nThis function will perform the operation $Z = X \\times Y$, where $X$ and $Y$ are the function arguments. Remember how to perform matrix-matrix multiplication:\n\n\nFirst, you need to make sure the matrix dimensions line up. For computing $X \\times Y$, this means the number of columns of $X$ (first matrix) should match the number of rows of $Y$ (second matrix). These are referred to as the \"inner dimensions\"--matrix dimensions are usually cited as \"rows by columns\", so the second dimension of the first operand $X$ is on the \"inside\" of the operation; same with the first dimension of the second operand, $Y$. If the operation were instead $Y \\times X$, you would need to make sure that the number of columns of $Y$ matches the number of rows of $X$. If these numbers don't match, you should return None.\n\n\nSecond, you'll need to create your output matrix, $Z$. 
The dimensions of this matrix will be the \"outer dimensions\" of the two operands: if we're computing $X \\times Y$, then $Z$'s dimensions will have the same number of rows as $X$ (the first matrix), and the same number of columns as $Y$ (the second matrix).\n\n\nThird, you'll need to compute pairwise dot products. If the operation is $X \\times Y$, then these dot products will be between the $i^{th}$ row of $X$ with the $j^{th}$ column of $Y$. This resulting dot product will then go in Z[i][j]. So first, you'll find the dot product of row 0 of $X$ with column 0 of $Y$, and put that in Z[0][0]. Then you'll find the dot product of row 0 of $X$ with column 1 of $Y$, and put that in Z[0][1]. And so on, until all rows and columns of $X$ and $Y$ have been dot-product-ed with each other.\n\n\nYou can use numpy, but no functions associated with computing matrix products (and definitely not the @ operator).\nHint: you can make use of your mv_multiply and/or dot_product methods from previous questions to help simplify your code.", "try:\n mm_multiply\nexcept:\n assert False\nelse:\n assert True\n\nimport numpy as np\nnp.random.seed(489547)\n\nA = np.random.random((48, 683))\nB = np.random.random((683, 58))\nnp.testing.assert_allclose(mm_multiply(A, B), A @ B)\n\nA = np.random.random((359, 45))\nB = np.random.random((83, 495))\nassert mm_multiply(A, B) is None\n\nimport numpy as np\nnp.random.seed(466525)\n\nA = np.random.random((58, 683))\nB = np.random.random((683, 58))\nnp.testing.assert_allclose(mm_multiply(B, A), B @ A)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
ES-DOC/esdoc-jupyterhub
notebooks/fio-ronm/cmip6/models/sandbox-1/ocnbgchem.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Ocnbgchem\nMIP Era: CMIP6\nInstitute: FIO-RONM\nSource ID: SANDBOX-1\nTopic: Ocnbgchem\nSub-Topics: Tracers. \nProperties: 65 (37 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:01\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'fio-ronm', 'sandbox-1', 'ocnbgchem')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Time Stepping Framework --&gt; Passive Tracers Transport\n3. Key Properties --&gt; Time Stepping Framework --&gt; Biology Sources Sinks\n4. Key Properties --&gt; Transport Scheme\n5. Key Properties --&gt; Boundary Forcing\n6. Key Properties --&gt; Gas Exchange\n7. Key Properties --&gt; Carbon Chemistry\n8. Tracers\n9. Tracers --&gt; Ecosystem\n10. Tracers --&gt; Ecosystem --&gt; Phytoplankton\n11. Tracers --&gt; Ecosystem --&gt; Zooplankton\n12. Tracers --&gt; Disolved Organic Matter\n13. Tracers --&gt; Particules\n14. Tracers --&gt; Dic Alkalinity \n1. Key Properties\nOcean Biogeochemistry key properties\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of ocean biogeochemistry model", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocnbgchem.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of ocean biogeochemistry model code (PISCES 2.0,...)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Model Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of ocean biogeochemistry model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.model_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Geochemical\" \n# \"NPZD\" \n# \"PFT\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.4. Elemental Stoichiometry\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe elemental stoichiometry (fixed, variable, mix of the two)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Fixed\" \n# \"Variable\" \n# \"Mix of both\" \n# TODO - please enter value(s)\n", "1.5. Elemental Stoichiometry Details\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe which elements have fixed/variable stoichiometry", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.6. 
Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of all prognostic tracer variables in the ocean biogeochemistry component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.7. Diagnostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of all diagnostic tracer variables in the ocean biogeochemistry component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.8. Damping\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe any tracer damping used (such as artificial correction or relaxation to climatology,...)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.damping') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Time Stepping Framework --&gt; Passive Tracers Transport\nTime stepping method for passive tracers transport in ocean biogeochemistry\n2.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime stepping framework for passive tracers", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"use ocean model transport time step\" \n# \"use specific time step\" \n# TODO - please enter value(s)\n", "2.2. 
Timestep If Not From Ocean\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTime step for passive tracers (if different from ocean)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Time Stepping Framework --&gt; Biology Sources Sinks\nTime stepping framework for biology sources and sinks in ocean biogeochemistry\n3.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime stepping framework for biology sources and sinks", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"use ocean model transport time step\" \n# \"use specific time step\" \n# TODO - please enter value(s)\n", "3.2. Timestep If Not From Ocean\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTime step for biology sources and sinks (if different from ocean)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Transport Scheme\nTransport scheme in ocean biogeochemistry\n4.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of transport scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Offline\" \n# \"Online\" \n# TODO - please enter value(s)\n", "4.2. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTransport scheme used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Use that of ocean model\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "4.3. Use Different Scheme\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe transport scheme if different from that of ocean model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5. Key Properties --&gt; Boundary Forcing\nProperties of biogeochemistry boundary forcing\n5.1. Atmospheric Deposition\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how atmospheric deposition is modeled", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"from file (climatology)\" \n# \"from file (interannual variations)\" \n# \"from Atmospheric Chemistry model\" \n# TODO - please enter value(s)\n", "5.2. River Input\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how river input is modeled", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"from file (climatology)\" \n# \"from file (interannual variations)\" \n# \"from Land Surface model\" \n# TODO - please enter value(s)\n", "5.3. Sediments From Boundary Conditions\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList which sediments are specified from boundary condition", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.4. Sediments From Explicit Model\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList which sediments are specified from explicit sediment model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. Key Properties --&gt; Gas Exchange\n*Properties of gas exchange in ocean biogeochemistry*\n6.1. CO2 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs CO2 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.2. CO2 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe CO2 gas exchange", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OMIP protocol\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.3. O2 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs O2 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.4. O2 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe O2 gas exchange", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OMIP protocol\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.5. DMS Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs DMS gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.6. DMS Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify DMS gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.7. 
N2 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs N2 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.8. N2 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify N2 gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.9. N2O Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs N2O gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.10. N2O Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify N2O gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.11. CFC11 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs CFC11 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.12. 
CFC11 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify CFC11 gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.13. CFC12 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs CFC12 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.14. CFC12 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify CFC12 gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.15. SF6 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs SF6 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.16. SF6 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify SF6 gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.17. 
13CO2 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs 13CO2 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.18. 13CO2 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify 13CO2 gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.19. 14CO2 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs 14CO2 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.20. 14CO2 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify 14CO2 gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.21. Other Gases\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify any other gas exchange", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7. 
Key Properties --&gt; Carbon Chemistry\nProperties of carbon chemistry biogeochemistry\n7.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how carbon chemistry is modeled", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OMIP protocol\" \n# \"Other protocol\" \n# TODO - please enter value(s)\n", "7.2. PH Scale\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf NOT OMIP protocol, describe pH scale.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sea water\" \n# \"Free\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "7.3. Constants If Not OMIP\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf NOT OMIP protocol, list carbon chemistry constants.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8. Tracers\nOcean biogeochemistry tracers\n8.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of tracers in ocean biogeochemistry", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Sulfur Cycle Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs sulfur cycle modeled ?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "8.3. Nutrients Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList nutrient species present in ocean biogeochemistry model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Nitrogen (N)\" \n# \"Phosphorous (P)\" \n# \"Silicium (S)\" \n# \"Iron (Fe)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.4. Nitrous Species If N\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf nitrogen present, list nitrous species.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Nitrates (NO3)\" \n# \"Amonium (NH4)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.5. Nitrous Processes If N\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf nitrogen present, list nitrous processes.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Dentrification\" \n# \"N fixation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9. Tracers --&gt; Ecosystem\nEcosystem properties in ocean biogeochemistry\n9.1. Upper Trophic Levels Definition\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDefinition of upper trophic level (e.g. based on size) ?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.2. Upper Trophic Levels Treatment\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDefine how upper trophic level are treated", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Tracers --&gt; Ecosystem --&gt; Phytoplankton\nPhytoplankton properties in ocean biogeochemistry\n10.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of phytoplankton", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Generic\" \n# \"PFT including size based (specify both below)\" \n# \"Size based only (specify below)\" \n# \"PFT only (specify below)\" \n# TODO - please enter value(s)\n", "10.2. Pft\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nPhytoplankton functional types (PFT) (if applicable)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Diatoms\" \n# \"Nfixers\" \n# \"Calcifiers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.3. Size Classes\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nPhytoplankton size classes (if applicable)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Microphytoplankton\" \n# \"Nanophytoplankton\" \n# \"Picophytoplankton\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11. Tracers --&gt; Ecosystem --&gt; Zooplankton\nZooplankton properties in ocean biogeochemistry\n11.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of zooplankton", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Generic\" \n# \"Size based (specify below)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11.2. Size Classes\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nZooplankton size classes (if applicable)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Microzooplankton\" \n# \"Mesozooplankton\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12. Tracers --&gt; Disolved Organic Matter\nDisolved organic matter properties in ocean biogeochemistry\n12.1. Bacteria Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there bacteria representation ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "12.2. 
Lability\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe treatment of lability in dissolved organic matter", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Labile\" \n# \"Semi-labile\" \n# \"Refractory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13. Tracers --&gt; Particules\nParticulate carbon properties in ocean biogeochemistry\n13.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow is particulate carbon represented in ocean biogeochemistry?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Diagnostic\" \n# \"Diagnostic (Martin profile)\" \n# \"Diagnostic (Balast)\" \n# \"Prognostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.2. Types If Prognostic\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf prognostic, type(s) of particulate matter taken into account", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"POC\" \n# \"PIC (calcite)\" \n# \"PIC (aragonite)\" \n# \"BSi\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.3. Size If Prognostic\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf prognostic, describe if a particule size spectrum is used to represent distribution of particules in water volume", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"No size spectrum used\" \n# \"Full size spectrum\" \n# \"Discrete size classes (specify which below)\" \n# TODO - please enter value(s)\n", "13.4. Size If Discrete\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf prognostic and discrete size, describe which size classes are used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13.5. Sinking Speed If Prognostic\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf prognostic, method for calculation of sinking speed of particules", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Function of particule size\" \n# \"Function of particule type (balast)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14. Tracers --&gt; Dic Alkalinity\nDIC and alkalinity properties in ocean biogeochemistry\n14.1. Carbon Isotopes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhich carbon isotopes are modelled (C13, C14)?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"C13\" \n# \"C14\" \n# TODO - please enter value(s)\n", "14.2. Abiotic Carbon\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs abiotic carbon modelled ?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "14.3. Alkalinity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow is alkalinity modelled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Prognostic\" \n# \"Diagnostic\" \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
cypherai/PySyft
notebooks/Syft - Paillier Homomorphic Encryption Example.ipynb
apache-2.0
[ "Paillier Homomorphic Encryption Example\nDISCLAIMER: This is a proof-of-concept implementation. It does not represent a remotely product ready implementation or follow proper conventions for security, convenience, or scalability. It is part of a broader proof-of-concept demonstrating the vision of the OpenMined project, its major moving parts, and how they might work together.", "from syft.he.Paillier import KeyPair\nimport numpy as np", "Basic Ops", "pubkey,prikey = KeyPair().generate()\n\nx = pubkey.encrypt(np.array([1.,2.,3.,4.,5.]))\n\nprikey.decrypt(x)\n\nprikey.decrypt(x+x[0])\n\nprikey.decrypt(x*5)\n\nprikey.decrypt(x+x/5)", "Key SerDe", "pubkey,prikey = KeyPair().generate()\n\nx = pubkey.encrypt(np.array([1.,2.,3.,4.,5.]))\n\npubkey_str = pubkey.serialize()\nprikey_str = prikey.serialize()\n\npubkey2,prikey2 = KeyPair().deserialize(pubkey_str,prikey_str)\n\nprikey2.decrypt(x)\n\ny = pubkey2.encrypt(np.ones(5))/2\n\nprikey.decrypt(y)", "Value SerDe", "import pickle\n\ny_str = pickle.dumps(y)\n\ny2 = pickle.loads(y_str)\n\nprikey.decrypt(y)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
vipmunot/Data-Science-Course
Data Visualization/Lab 7/w07_lab_Vipul_Munot.ipynb
mit
[ "W7 Lab Assignment", "import matplotlib.pyplot as plt\nimport pandas as pd\nimport seaborn as sns\nimport numpy as np\nimport random\nsns.set_style('white')\n\n%matplotlib inline", "Cumulative histogram and CDF\nHow can we plot a cumulative histogram?", "# TODO: Load IMDB data into movie_df using pandas\nmovie_df = pd.read_csv('imdb.csv', delimiter='\\t')\nmovie_df.head()\n\n# TODO: draw a cumulative histogram of movie ratings with 20 bins. Hint: use plt.hist()\nn, bins, patches = plt.hist(movie_df['Rating'], bins = 20, cumulative=True)\n\n# TODO: same histogram, but with normalization \nn, bins, patches = plt.hist(movie_df['Rating'], bins = 20,normed=1, histtype='step', cumulative=True)\n\n# TODO: same histogram, but with normalization \nn, bins, patches = plt.hist(movie_df['Rating'], bins = 20,normed=1, cumulative=True)", "Does it reach 1.0? Why should it become 1.0 at the right end? Also you can do the plot with pandas.", "# TODO: same plot, but call directly from dataframe movie_df\nmovie_df['Rating'].hist( bins = 20,normed=1, cumulative=True)", "CDF\nLet's make it CDF rather than cumulative histogram. You can sort a Series with the sort_values function. You can use np.linspace to generate a list of evenly spaced values.", "# TODO: plot CDF (not cumulative histogram) of movie ratings. \nratings = movie_df['Rating'].sort_values()\ncum_dist = np.linspace( 1/len(ratings), 1, num=len(ratings))\nplt.plot(ratings,cum_dist)", "The main advantage of CDF is that we can directly observe percentiles from the plot. Given the number of movies we have, can you estimate the following statistics by observing the plot? Compare your estimation to the precise results calculated from movie_df.\n\nThe number of movies with rating <= 7\nThe median rating of movies\nThe rating which 90% of movies are under or equal to", "#TODO: provide your estimations.\n#1. 0.65 * len(ratings) = 203457.15.\n#2. 6.5.\n#3. 
8.2.\n\n#TODO: calculate the statistics from movie_df.\nseven = movie_df['Rating'][movie_df['Rating'] <= 7].values\nprint(len(seven))\nprint(np.median(movie_df['Rating']))\nprint(np.percentile(movie_df['Rating'], [90])[0])", "Bootstrap Resampling\nLet's imagine that we only have a sample of the IMDB data, say 50 movies. How much can we infer about the original data from this small sample? This is a question that we encounter very often in statistical analysis.\nIn such situations, we can seek help from the bootstrapping method. This is a family of statistical methods that relies on random sampling with replacement. Unlike traditional methods, it does not assume that our data follows a particular distribution, and so is very flexible to use.", "#create a random sample from the movie table.\nmovie_df_sample = movie_df.sample(50)\n\nlen(movie_df_sample)", "Now we have a sample with size = 50. We can compute, for example, the mean of movie ratings in this sample:", "print('Mean of sample: ', movie_df_sample.Rating.mean())", "But we only have one statistic. How can we know if this correctly represents the mean of the actual data? We need to compute a confidence interval. This is when we can use bootstrapping.\nFirst, let's create a function that does the resampling with replacement. It should create a list of the same length as the sample (50 in this case), in which each element is taken randomly from the sample. In this way, some elements may appear more than once, and some none. Then we calculate the mean value of this list.", "def bootstrap_resample(rating_list):\n    resampled_list = []\n    #todo: write the function that returns the mean of resampled list.\n    for i in range(len(rating_list)):\n        resampled_list.append(random.choice(rating_list))\n    return np.mean(resampled_list) ", "We don't usually just do this once: the typical minimal resample number is 1000. 
We can create a new list to keep these 1000 mean values.", "sampled_means = []\n\n#todo: call the function 1000 times and populate the list with its returned values.\nfor i in range(1000):\n    mean = bootstrap_resample(movie_df_sample['Rating'].values)\n    sampled_means.append(mean)\n", "Now we can compute the confidence interval. Say we want 90% confidence; then we only need to pick out the .95 and .05 critical values.", "print(1000*0.05, 1000*0.95)", "That is, we need to pick the 50th and 950th smallest values from the list. We can name them x_a and x_b.", "#todo: sort the list in ascending order and pick out the 50th and 950th values. \nsampled_means.sort()\nx_a = sampled_means[49]\nx_b = sampled_means[949]\nprint(x_a, x_b)", "Let x be the mean value of the sample; we have:", "x = movie_df_sample.Rating.mean()", "The confidence interval will then be: [x - (x - x_a), x + (x_b - x)].", "#todo: calculate the confidence interval. \n#Does the mean of the original data fall within this interval? Show your statistics.\nprint([x - (x - x_a), x + (x_b - x)])\n\nnp.mean(movie_df['Rating'])" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
magnusax/ml-meta-wrapper
examples/gazer-neuralnet-architecture-search-demo.ipynb
mit
[ "import sys\nsys.path.insert(0, \"C:/Users/magaxels/AutoML\")\n\nfrom gazer import GazerMetaLearner\n\nimport numpy as np\nimport pandas as pd\n\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.datasets import load_digits", "Load some toy dataset and split into train and validation", "X, y = load_digits(return_X_y=True)\n\nX_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)\n\nprint(X_train.shape, y_train.shape, X_val.shape, y_val.shape)", "Define a learner object using method='select' and estimators=['neuralnet']", "learner = GazerMetaLearner(method='select', estimators=['neuralnet'], verbose=1)", "The entry point to network optimization is found in the optimization module", "from gazer.optimization import grid_search", "It expects the data to be shipped in the following format:", "data = {'train': (X_train, y_train), 'val': (X_val, y_val)}", "We also provide a dictionary of iterables to iterate over.", "params = {\n 'batch_norm': (True, False),\n 'batch_size': 16,\n 'dropout': True,\n 'epochs': (10, 20),\n 'input_units': np.linspace(250, 500, 6, dtype=int),\n 'n_hidden': (2,3),\n 'p': (0.1, 0.5),\n 'validation_split': 0.0,\n}", "Perform grid search over \"architectures\"", "config, df = grid_search(learner, params, data)", "Have a look at results", "df.head()", "The best estimator parameters are found in the 'config' dictionary:", "config", "End of demo" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
jpn--/larch
book/example/001_mnl.ipynb
gpl-3.0
[ "1: MTC MNL Mode Choice", "import pandas as pd\nimport larch.numba as lx\n\n# TEST\npd.set_option(\"display.max_columns\", 999)\npd.set_option('expand_frame_repr', False)\npd.set_option('display.precision', 3)\nimport larch\nlarch._doctest_mode_ = True\nfrom pytest import approx", "This example is a mode choice model built using the MTC example dataset.\nFirst we create the Dataset and Model objects:", "d = lx.examples.MTC(format='dataset')\nd\n\nm = lx.Model(d)", "Then we can build up the utility function. We'll use some :ref:idco data first, using\nthe Model.utility.co attribute. This attribute is a dict-like object, to which\nwe can assign :class:LinearFunction objects for each alternative code.", "from larch import P, X, PX\nm.utility_co[2] = P(\"ASC_SR2\") + P(\"hhinc#2\") * X(\"hhinc\")\nm.utility_co[3] = P(\"ASC_SR3P\") + P(\"hhinc#3\") * X(\"hhinc\")\nm.utility_co[4] = P(\"ASC_TRAN\") + P(\"hhinc#4\") * X(\"hhinc\")\nm.utility_co[5] = P(\"ASC_BIKE\") + P(\"hhinc#5\") * X(\"hhinc\")\nm.utility_co[6] = P(\"ASC_WALK\") + P(\"hhinc#6\") * X(\"hhinc\")", "Next we'll use some idca data, with the utility_ca attribute. This attribute\nis only a single :class:LinearFunction that is applied across all alternatives\nusing :ref:idca data. Because the data is structured to vary across alternatives,\nthe parameters (and thus the structure of the :class:LinearFunction) does not need\nto vary across alternatives.", "m.utility_ca = PX(\"tottime\") + PX(\"totcost\")", "Lastly, we need to identify :ref:idca data that gives the availability for each\nalternative, as well as the number of times each alternative is chosen. 
(In traditional\ndiscrete choice analysis, this is often 0 or 1, but it need not be binary, or even integral.)", "m.availability_var = 'avail'\nm.choice_ca_var = 'chose'", "And let's give our model a descriptive title.", "m.title = \"MTC Example 1 (Simple MNL)\"", "We can view a summary of the choices and alternative \navailabilities to make sure the model is set up \ncorrectly.", "m.choice_avail_summary()\n\n# TEST\ns = ''' name chosen available\naltid \n1 DA 3637 4755\n2 SR2 517 5029\n3 SR3+ 161 5029\n4 Transit 498 4003\n5 Bike 50 1738\n6 Walk 166 1479\n< Total All Alternatives > 5029 \n'''\nimport re\nmash = lambda x: re.sub('\\s+', ' ', x).strip()\nassert mash(s) == mash(str(m.choice_avail_summary()))", "Having created this model, we can then estimate it:", "# TEST\nassert dict(m.required_data()) == {\n 'ca': ['totcost', 'tottime'],\n 'co': ['hhinc'],\n 'choice_ca': 'chose',\n 'avail_ca': 'avail',\n}\nassert m.loglike() == approx(-7309.600971749634)\n\nm.maximize_loglike()\n\n# TEST\nresult = _\nassert result.loglike == approx(-3626.18625551293)\nassert result.logloss == approx(0.7210551313408093)\nassert result.message == 'Optimization terminated successfully.'\n\nm.calculate_parameter_covariance()\n\nm.parameter_summary()\n\n# TEST\nsummary = _\nassert (summary.data.to_markdown()) == '''\n| | Value | Std Err | t Stat | Signif | Null Value |\n|:---------|----------:|----------:|---------:|:---------|-------------:|\n| ASC_BIKE | -2.38 | 0.305 | -7.8 | *** | 0 |\n| ASC_SR2 | -2.18 | 0.105 | -20.81 | *** | 0 |\n| ASC_SR3P | -3.73 | 0.178 | -20.96 | *** | 0 |\n| ASC_TRAN | -0.671 | 0.133 | -5.06 | *** | 0 |\n| ASC_WALK | -0.207 | 0.194 | -1.07 | | 0 |\n| hhinc#2 | -0.00217 | 0.00155 | -1.4 | | 0 |\n| hhinc#3 | 0.000358 | 0.00254 | 0.14 | | 0 |\n| hhinc#4 | -0.00529 | 0.00183 | -2.89 | ** | 0 |\n| hhinc#5 | -0.0128 | 0.00532 | -2.41 | * | 0 |\n| hhinc#6 | -0.00969 | 0.00303 | -3.19 | ** | 0 |\n| totcost | -0.00492 | 0.000239 | -20.6 | *** | 0 |\n| tottime | 
-0.0513 | 0.0031 | -16.57 | *** | 0 |\n'''[1:-1]", "It is a little tough to read this report because the parameters can show up \nin pretty much any order, as they are not sorted\nwhen they are automatically discovered by Larch.\nWe can use the reorder method to fix this:", "m.ordering = (\n (\"LOS\", \"totcost\", \"tottime\", ),\n (\"ASCs\", \"ASC.*\", ),\n (\"Income\", \"hhinc.*\", ),\n)\n\nm.parameter_summary()\n\n# TEST\nsummary2 = _\nassert summary2.data.to_markdown() == '''\n| | Value | Std Err | t Stat | Signif | Null Value |\n|:----------------------|----------:|----------:|---------:|:---------|-------------:|\n| ('LOS', 'totcost') | -0.00492 | 0.000239 | -20.6 | *** | 0 |\n| ('LOS', 'tottime') | -0.0513 | 0.0031 | -16.57 | *** | 0 |\n| ('ASCs', 'ASC_BIKE') | -2.38 | 0.305 | -7.8 | *** | 0 |\n| ('ASCs', 'ASC_SR2') | -2.18 | 0.105 | -20.81 | *** | 0 |\n| ('ASCs', 'ASC_SR3P') | -3.73 | 0.178 | -20.96 | *** | 0 |\n| ('ASCs', 'ASC_TRAN') | -0.671 | 0.133 | -5.06 | *** | 0 |\n| ('ASCs', 'ASC_WALK') | -0.207 | 0.194 | -1.07 | | 0 |\n| ('Income', 'hhinc#2') | -0.00217 | 0.00155 | -1.4 | | 0 |\n| ('Income', 'hhinc#3') | 0.000358 | 0.00254 | 0.14 | | 0 |\n| ('Income', 'hhinc#4') | -0.00529 | 0.00183 | -2.89 | ** | 0 |\n| ('Income', 'hhinc#5') | -0.0128 | 0.00532 | -2.41 | * | 0 |\n| ('Income', 'hhinc#6') | -0.00969 | 0.00303 | -3.19 | ** | 0 |\n'''[1:-1]\n\n\nm.estimation_statistics()\n\n# TEST\nestats = _\nfrom xmle.elem import Elem\nassert isinstance(estats, Elem)\nassert m._cached_loglike_best == approx(-3626.18625551293)\nassert m._cached_loglike_null == approx(-7309.600971749634)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
UWSEDS/LectureNotes
PreFall2018/Visualization-in-Depth/Visualization-in-depth.ipynb
bsd-2-clause
[ "Visualization in Depth With Bokeh\nFor details on bokeh, see http://bokeh.pydata.org/en/latest/docs/user_guide.html#userguide\nSetup", "from bokeh.plotting import figure, output_file, show, output_notebook, vplot\nimport random\nimport numpy as np\nimport pandas as pd\noutput_notebook() # Use to see output in the Jupyter notebook\n\nimport bokeh\nbokeh.__version__", "Objectives\n\nDescribe how to create interactive visualizations using bokeh\nRunning example and visualization goals\nHow to approach a new system\nHover\nWidgets\nStudy bokeh as a system\nFunction specification - what it provides programmers\nDesign (which has client and server parts)\n\nHow to Learn a New System\nSteps\n- Find an example close to what you want\n- Create an environment that runs the example\n- Abstract the key concepts of how it works\n- Transform the example into what you want\nRunning Example - Biological Data", "from IPython.display import Image\nImage(filename='biological_data.png') \n\ndf_bio = pd.read_csv(\"biological_data.csv\")\ndf_bio.head()", "Desired visualization\n- Scatterplot of rate vs. yield\n- Hover shows the evolutionary \"line\"\n- Widgets can specify color (and legend) for values of line\nStep 1: Find something close", "plot = figure(plot_width=400, plot_height=400)\nplot.circle(df_bio['rate'], df_bio['yield'])\nplot.xaxis.axis_label = 'rate'\nplot.yaxis.axis_label = 'yield'\nshow(plot)", "Step 1a: Distinguish \"evolutionary lines\" by color\nLet's distinguish the lines with colors. 
First, how many lines are there?", "# What are the possible colors\ndf_bio['line'].unique()\n\n# Generate a plot with a different color for each line\ncolors = {'HA': 'red', 'HR': 'green', 'UA': 'blue', 'WT': 'purple'}\nplot = figure(plot_width=700, plot_height=800)\nplot.title.text = 'Phenotypes for evolutionary lines.'\nfor line in list(colors.keys()):\n    df = df_bio[df_bio.line == line]\n    color = colors[line]\n    plot.circle(df['rate'], df['yield'], color=color, legend=line)\nplot.legend.location = \"top_right\"\nshow(plot)", "What colors are possible to use? Check out bokeh.palettes", "import bokeh.palettes as palettes\nprint(palettes.__doc__)\n#palettes.magma(4)", "Exercise: Handle colors for the plot for an arbitrary number of evolutionary lines. (Hint: construct the colors dictionary using the values of 'line' and a palette.)", "# Generate the colors dictionary\n# Fill this in....\n\n# Plot with the generated palette\n# Fill this in ...", "Bokeh tools\nTools can be specified and positioned when the Figure is created. 
The interaction workflow is (a) select a tool (identified by vertical blue line), (b) perform gesture for tool.", "TOOLS = 'box_zoom,box_select,resize,reset'\nplot = figure(plot_width=200, plot_height=200, title=None, tools=TOOLS)\nplot.scatter(range(10), range(10))\nshow(plot)\n\nfrom bokeh.models import HoverTool, BoxSelectTool\nTOOLS = [HoverTool(), BoxSelectTool()]\nplot = figure(plot_width=200, plot_height=200, title=None, tools=TOOLS)\nshow(plot)", "Synthesizing Bokeh Concepts (Classes)\nFigure\n- Created using the figure()\n- Controls the size of the plot\n- Allows other elements to be added\n- Has properties for title, x-axis label, y-axis label\nGlyph\n- Mark that's added to the plot - circle, line, polygon\n- Created using Figure methods plot.circle(df['rate'], df['yield'], color=color, legend=line)\nTool\n- Provides user interactions with the graph using gestures\n- Created using a separate constructor (e.g., HoverTool())\nAdding a Hover Tool\nBased on our knowledge of Bokeh concepts, is a Tool associated with Figure or Glyph?\nWhich classes will be involved in hovering:\n- Plot & Tool only\n- Glyph only\n- Tool and Glyph\nStart with some examples. 
First, simple hovering.", "from bokeh.plotting import figure, output_file, show\nfrom bokeh.models import HoverTool, BoxSelectTool\n\noutput_file(\"toolbar.html\")\nTOOLS = [BoxSelectTool(), HoverTool()]\n\np = figure(plot_width=400, plot_height=400, title=None, tools=TOOLS)\n\np.circle([1, 2, 3, 4, 5], [2, 5, 8, 2, 7], size=10)\n\nshow(p)", "Now add ad-hoc data", "from bokeh.plotting import figure, output_file, show, ColumnDataSource\nfrom bokeh.models import HoverTool\n\noutput_file(\"toolbar.html\")\n\n\n\nhover = HoverTool(\n tooltips=[\n (\"index\", \"$index\"),\n (\"(x,y)\", \"(@x, @y)\"),\n (\"desc\", \"@desc\"),\n ]\n )\n\np = figure(plot_width=400, plot_height=400, tools=[hover],\n title=\"Mouse over the dots\")\n\n\nsource = ColumnDataSource(\n data={\n 'x': [1, 2, 3, 4, 5],\n 'y': [2, 5, 8, 2, 7],\n 'desc': ['A', 'b', 'C', 'd', 'E'],\n }\n )\n\n\np.circle('x', 'y', size=20, source=source)\n\nshow(p)", "Exercise: Plot the biological data with colors and a hover that shows the evolutionary line.\nBokeh Widgets\nSee widget.py and my_app.py\nBokeh Server", "Image(filename='BokehArchitecture.png') " ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mne-tools/mne-tools.github.io
0.20/_downloads/4e37370a8ca815e788684add8304ce44/plot_ssp_projs_sensitivity_map.ipynb
bsd-3-clause
[ "%matplotlib inline", "Sensitivity map of SSP projections\nThis example shows the sources that have a forward field\nsimilar to the first SSP vector correcting for ECG.", "# Author: Alexandre Gramfort <alexandre.gramfort@inria.fr>\n#\n# License: BSD (3-clause)\n\nimport matplotlib.pyplot as plt\n\nfrom mne import read_forward_solution, read_proj, sensitivity_map\n\nfrom mne.datasets import sample\n\nprint(__doc__)\n\ndata_path = sample.data_path()\n\nsubjects_dir = data_path + '/subjects'\nfname = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'\necg_fname = data_path + '/MEG/sample/sample_audvis_ecg-proj.fif'\n\nfwd = read_forward_solution(fname)\n\nprojs = read_proj(ecg_fname)\n# take only one projection per channel type\nprojs = projs[::2]\n\n# Compute sensitivity map\nssp_ecg_map = sensitivity_map(fwd, ch_type='grad', projs=projs, mode='angle')", "Show sensitivity map", "plt.hist(ssp_ecg_map.data.ravel())\nplt.show()\n\nargs = dict(clim=dict(kind='value', lims=(0.2, 0.6, 1.)), smoothing_steps=7,\n hemi='rh', subjects_dir=subjects_dir)\nssp_ecg_map.plot(subject='sample', time_label='ECG SSP sensitivity', **args)" ]
[ "code", "markdown", "code", "markdown", "code" ]
joelau/joelau.github.io
notes/index.ipynb
mit
[ "Introduction\nThis example demonstrates using a network pretrained on ImageNet for classification. The model used was converted from the VGG_CNN_S model (http://arxiv.org/abs/1405.3531) in Caffe's Model Zoo. \nFor details of the conversion process, see the example notebook \"Using a Caffe Pretrained Network - CIFAR10\".\nLicense\nThe model is licensed for non-commercial use only\nDownload the model (393 MB)", "!wget https://s3.amazonaws.com/lasagne/recipes/pretrained/imagenet/vgg_cnn_s.pkl", "Setup", "import numpy as np\nimport matplotlib.pyplot as plt\n\n%matplotlib inline\n\nimport lasagne\nfrom lasagne.layers import InputLayer, DenseLayer, DropoutLayer\nfrom lasagne.layers.dnn import Conv2DDNNLayer as ConvLayer\nfrom lasagne.layers import MaxPool2DLayer as PoolLayer\nfrom lasagne.layers import LocalResponseNormalization2DLayer as NormLayer\nfrom lasagne.utils import floatX", "Define the network", "net = {}\nnet['input'] = InputLayer((None, 3, 224, 224))\nnet['conv1'] = ConvLayer(net['input'], num_filters=96, filter_size=7, stride=2, flip_filters=False)\nnet['norm1'] = NormLayer(net['conv1'], alpha=0.0001) # caffe has alpha = alpha * pool_size\nnet['pool1'] = PoolLayer(net['norm1'], pool_size=3, stride=3, ignore_border=False)\nnet['conv2'] = ConvLayer(net['pool1'], num_filters=256, filter_size=5, flip_filters=False)\nnet['pool2'] = PoolLayer(net['conv2'], pool_size=2, stride=2, ignore_border=False)\nnet['conv3'] = ConvLayer(net['pool2'], num_filters=512, filter_size=3, pad=1, flip_filters=False)\nnet['conv4'] = ConvLayer(net['conv3'], num_filters=512, filter_size=3, pad=1, flip_filters=False)\nnet['conv5'] = ConvLayer(net['conv4'], num_filters=512, filter_size=3, pad=1, flip_filters=False)\nnet['pool5'] = PoolLayer(net['conv5'], pool_size=3, stride=3, ignore_border=False)\nnet['fc6'] = DenseLayer(net['pool5'], num_units=4096)\nnet['drop6'] = DropoutLayer(net['fc6'], p=0.5)\nnet['fc7'] = DenseLayer(net['drop6'], num_units=4096)\nnet['drop7'] = 
DropoutLayer(net['fc7'], p=0.5)\nnet['fc8'] = DenseLayer(net['drop7'], num_units=1000, nonlinearity=lasagne.nonlinearities.softmax)\noutput_layer = net['fc8']", "Load the model parameters and metadata", "import pickle\n\nmodel = pickle.load(open('vgg_cnn_s.pkl'))\nCLASSES = model['synset words']\nMEAN_IMAGE = model['mean image']\n\nlasagne.layers.set_all_param_values(output_layer, model['values'])", "Trying it out\nGet some test images\nWe'll download the ILSVRC2012 validation URLs and pick a few at random", "import urllib\n\nindex = urllib.urlopen('http://www.image-net.org/challenges/LSVRC/2012/ori_urls/indexval.html').read()\nimage_urls = index.split('<br>')\n\nnp.random.seed(23)\nnp.random.shuffle(image_urls)\nimage_urls = image_urls[:5]", "Helper to fetch and preprocess images", "import io\nimport skimage.transform\n\ndef prep_image(url):\n ext = url.split('.')[-1]\n im = plt.imread(io.BytesIO(urllib.urlopen(url).read()), ext)\n # Resize so smallest dim = 256, preserving aspect ratio\n h, w, _ = im.shape\n if h < w:\n im = skimage.transform.resize(im, (256, w*256/h), preserve_range=True)\n else:\n im = skimage.transform.resize(im, (h*256/w, 256), preserve_range=True)\n\n # Central crop to 224x224\n h, w, _ = im.shape\n im = im[h//2-112:h//2+112, w//2-112:w//2+112]\n \n rawim = np.copy(im).astype('uint8')\n \n # Shuffle axes to c01\n im = np.swapaxes(np.swapaxes(im, 1, 2), 0, 1)\n \n # Convert to BGR\n im = im[::-1, :, :]\n\n im = im - MEAN_IMAGE\n return rawim, floatX(im[np.newaxis])", "Process test images and print top 5 predicted labels", "for url in image_urls:\n try:\n rawim, im = prep_image(url)\n\n prob = np.array(lasagne.layers.get_output(output_layer, im, deterministic=True).eval())\n top5 = np.argsort(prob[0])[-1:-6:-1]\n\n plt.figure()\n plt.imshow(rawim.astype('uint8'))\n plt.axis('off')\n for n, label in enumerate(top5):\n plt.text(250, 70 + n * 20, '{}. {}'.format(n+1, CLASSES[label]), fontsize=14)\n except IOError:\n print('bad url: ' + url)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
google/trax
trax/tf_numpy_and_keras.ipynb
apache-2.0
[ "Using Trax with TensorFlow NumPy and Keras\nThis notebook (run it in colab) shows how you can run Trax directly with TensorFlow NumPy. You will also see how to use Trax layers and models inside Keras so you can use Trax in production, e.g., with TensorFlow.js or TensorFlow Serving.\n\nTrax with TensorFlow NumPy: use Trax with TensorFlow NumPy without any code changes\nConvert Trax to Keras: how to get a Keras layer for your Trax model and use it\nExporting Trax Models for Deployment: how to export Trax models to TensorFlow SavedModel\n\n1. Trax with TensorFlow NumPy\nIn Trax, all computations rely on accelerated math operations happening in the fastmath module. This module can use different backends for acceleration. One of them is TensorFlow NumPy, which uses TensorFlow 2 to accelerate the computations.\nThe backend can be set using a call to trax.fastmath.set_backend as you'll see below. Currently available backends are jax (default), tensorflow-numpy and numpy (for debugging). The tensorflow-numpy backend uses TensorFlow NumPy for executing fastmath functions on TensorFlow, while the jax backend calls JAX, which lowers to TensorFlow XLA.\nYou may see that tensorflow-numpy and jax backends show different speed and memory characteristics. You may also see different error messages when debugging since it might expose you to the internals of the backends. 
However for the most part, users can choose a backend and not worry about the internal details of these backends.\nLet's train the sentiment analysis model from the Trax intro using TensorFlow NumPy to see how it works.\nGeneral Setup\nExecute the following few cells (once) before running any of the code samples.", "#@title\n# Copyright 2020 Google LLC.\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n\n# https://www.apache.org/licenses/LICENSE-2.0\n\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\n\n\n# Install and import Trax\n!pip install -q -U git+https://github.com/google/trax@master\n\nimport os\nimport numpy as np\nimport trax", "Here is how you can set the fastmath backend to tensorflow-numpy and verify that it's been set.", "# Use the tensorflow-numpy backend.\ntrax.fastmath.set_backend('tensorflow-numpy')\nprint(trax.fastmath.backend_name())\n\n# Create data streams.\ntrain_stream = trax.data.TFDS('imdb_reviews', keys=('text', 'label'), train=True)()\neval_stream = trax.data.TFDS('imdb_reviews', keys=('text', 'label'), train=False)()\n\ndata_pipeline = trax.data.Serial(\n trax.data.Tokenize(vocab_file='en_8k.subword', keys=[0]),\n trax.data.Shuffle(),\n trax.data.FilterByLength(max_length=2048, length_keys=[0]),\n trax.data.BucketByLength(boundaries=[ 32, 128, 512, 2048],\n batch_sizes=[512, 128, 32, 8, 1],\n length_keys=[0]),\n trax.data.AddLossWeights()\n )\ntrain_batches_stream = data_pipeline(train_stream)\neval_batches_stream = data_pipeline(eval_stream)\n\n# Print example shapes.\nexample_batch = next(train_batches_stream)\nprint(f'batch shapes 
= {[x.shape for x in example_batch]}')\n\n# Create the model.\nfrom trax import layers as tl\n\nmodel = tl.Serial(\n    tl.Embedding(vocab_size=8192, d_feature=256),\n    tl.Mean(axis=1),  # Average on axis 1 (length of sentence).\n    tl.Dense(2),      # Classify 2 classes.\n)\n\n# You can print model structure.\nprint(model)\n\n# Train the model.\nfrom trax.supervised import training\n\n# Training task.\ntrain_task = training.TrainTask(\n    labeled_data=train_batches_stream,\n    loss_layer=tl.WeightedCategoryCrossEntropy(),\n    optimizer=trax.optimizers.Adam(0.01),\n    n_steps_per_checkpoint=500,\n)\n\n# Evaluation task.\neval_task = training.EvalTask(\n    labeled_data=eval_batches_stream,\n    metrics=[tl.WeightedCategoryCrossEntropy(), tl.WeightedCategoryAccuracy()],\n    n_eval_batches=20  # For less variance in eval numbers.\n)\n\n# Training loop saves checkpoints to output_dir.\noutput_dir = os.path.expanduser('~/output_dir/')\ntraining_loop = training.Loop(model,\n                              train_task,\n                              eval_tasks=[eval_task],\n                              output_dir=output_dir)\n\n# Run 2000 steps (batches).\ntraining_loop.run(2000)\n\n# Run on an example.\nexample_input = next(eval_batches_stream)[0][0]\nexample_input_str = trax.data.detokenize(example_input, vocab_file='en_8k.subword')\nprint(f'example input_str: {example_input_str}')\nsentiment_activations = model(example_input[None, :])  # Add batch dimension.\nprint(f'Model returned sentiment activations: {np.asarray(sentiment_activations)}')", "2. Convert Trax to Keras\nThanks to TensorFlow NumPy, you can convert the model you just trained into a Keras layer using trax.AsKeras. This allows you to:\n\nuse Trax layers inside Keras models\nrun Trax models with existing Keras input pipelines\nexport Trax models to TensorFlow SavedModel\n\nWhen creating a Keras layer from a Trax one, the Keras layer weights will get initialized to the ones the Trax layer had at the moment of creation. 
In this way, you can create Keras layers from pre-trained Trax models and save them as SavedModel as shown below.", "# Convert the model into a Keras layer, using the weights from model.\nkeras_layer = trax.AsKeras(model)\nprint(keras_layer)\n\n# Run the Keras layer to verify it returns the same result.\nsentiment_activations = keras_layer(example_input[None, :])\nprint(f'Keras returned sentiment activations: {np.asarray(sentiment_activations)}')\n\nimport tensorflow as tf\n\n# Create a full Keras model using the layer from Trax.\ninputs = tf.keras.Input(shape=(None,), dtype='int32')\nhidden = keras_layer(inputs)\n# You can add other Keras layers here operating on hidden.\noutputs = hidden\nkeras_model = tf.keras.Model(inputs=inputs, outputs=outputs)\nprint(keras_model)\n\n# Run the Keras model to verify it returns the same result.\nsentiment_activations = keras_model(example_input[None, :])\nprint(f'Keras returned sentiment activations: {np.asarray(sentiment_activations)}')", "3. Exporting Trax Models for Deployment\nYou can export the Keras model to disk as TensorFlow SavedModel. It's as simple as calling keras_model.save, and it allows you to use models with TF tools such as TensorFlow.js, TensorFlow Serving and TensorFlow Lite. 
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]