{"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef ilsr_pairwise(\n        n_items, data, alpha=0.0, initial_params=None, max_iter=100, tol=1e-8):\n    \"\"\"Compute the ML estimate of model parameters using I-LSR.\n\n    This function computes the maximum-likelihood (ML) estimate of model\n    parameters given pairwise-comparison data (see :ref:`data-pairwise`), using\n    the iterative Luce Spectral Ranking algorithm [MG15]_.\n\n    The transition rates of the LSR Markov chain are initialized with\n    ``alpha``. When ``alpha > 0``, this corresponds to a form of regularization\n    (see :ref:`regularization` for details).\n\n    Parameters\n    ----------\n    n_items : int\n        Number of distinct items.\n    data : list of lists\n        Pairwise-comparison data.\n    alpha : float, optional\n        Regularization parameter.\n    initial_params : array_like, optional\n        Parameters used to initialize the iterative procedure.\n    max_iter : int, optional\n        Maximum number of iterations allowed.\n    tol : float, optional\n        Maximum L1-norm of the difference between successive iterates to\n        declare convergence.\n\n    Returns\n    -------\n    params : numpy.ndarray\n        The ML estimate of model parameters.\n    \"\"\"\n    fun = functools.partial(\n        lsr_pairwise, n_items=n_items, data=data, alpha=alpha)\n    return _ilsr(fun, initial_params, max_iter, tol)", "response": "Compute the maximum-likelihood estimate of the model parameters using I-LSR."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef lsr_pairwise_dense(comp_mat, alpha=0.0, initial_params=None):\n    n_items = comp_mat.shape[0]\n    ws, chain = _init_lsr(n_items, alpha, initial_params)\n    denom = np.tile(ws, (n_items, 1))\n    chain += comp_mat.T / (denom + denom.T)\n    chain -= np.diag(chain.sum(axis=1))\n    return log_transform(statdist(chain))", "response": "Compute the LSR estimate of model parameters given dense data."} {"SOURCE": "codesearchnet", "instruction": "Can 
you write a function in Python 3 where it\ncomputes the maximum-likelihood estimate of model parameters given a dense pairwise-comparison matrix.", "response": "def ilsr_pairwise_dense(\n        comp_mat, alpha=0.0, initial_params=None, max_iter=100, tol=1e-8):\n    \"\"\"Compute the ML estimate of model parameters given dense data.\n\n    This function computes the maximum-likelihood (ML) estimate of model\n    parameters given dense pairwise-comparison data.\n\n    The data is described by a pairwise-comparison matrix ``comp_mat`` such\n    that ``comp_mat[i,j]`` contains the number of times that item ``i`` wins\n    against item ``j``.\n\n    In comparison to :func:`~choix.ilsr_pairwise`, this function is\n    particularly efficient for dense pairwise-comparison datasets (i.e.,\n    containing many comparisons for a large fraction of item pairs).\n\n    The transition rates of the LSR Markov chain are initialized with\n    ``alpha``. When ``alpha > 0``, this corresponds to a form of regularization\n    (see :ref:`regularization` for details).\n\n    Parameters\n    ----------\n    comp_mat : np.array\n        2D square matrix describing the pairwise-comparison outcomes.\n    alpha : float, optional\n        Regularization parameter.\n    initial_params : array_like, optional\n        Parameters used to initialize the iterative procedure.\n    max_iter : int, optional\n        Maximum number of iterations allowed.\n    tol : float, optional\n        Maximum L1-norm of the difference between successive iterates to\n        declare convergence.\n\n    Returns\n    -------\n    params : numpy.ndarray\n        The ML estimate of model parameters.\n    \"\"\"\n    fun = functools.partial(\n        lsr_pairwise_dense, comp_mat=comp_mat, alpha=alpha)\n    return _ilsr(fun, initial_params, max_iter, tol)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef rank_centrality(n_items, data, alpha=0.0):\n    _, chain = _init_lsr(n_items, alpha, None)\n    for winner, loser in data:\n        chain[loser, winner] += 1.0\n    # Transform the counts into ratios.\n    idx = 
chain > 0 # Indices (i,j) of non-zero entries.\n chain[idx] = chain[idx] / (chain + chain.T)[idx]\n # Finalize the Markov chain by adding the self-transition rate.\n chain -= np.diag(chain.sum(axis=1))\n return log_transform(statdist(chain))", "response": "Compute the rank centrality estimate of model parameters."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef lsr_rankings(n_items, data, alpha=0.0, initial_params=None):\n weights, chain = _init_lsr(n_items, alpha, initial_params)\n for ranking in data:\n sum_ = weights.take(ranking).sum()\n for i, winner in enumerate(ranking[:-1]):\n val = 1.0 / sum_\n for loser in ranking[i+1:]:\n chain[loser, winner] += val\n sum_ -= weights[winner]\n chain -= np.diag(chain.sum(axis=1))\n return log_transform(statdist(chain))", "response": "Compute the LSR estimate of model parameters."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef ilsr_rankings(\n n_items, data, alpha=0.0, initial_params=None, max_iter=100, tol=1e-8):\n \"\"\"Compute the ML estimate of model parameters using I-LSR.\n\n This function computes the maximum-likelihood (ML) estimate of model\n parameters given ranking data (see :ref:`data-rankings`), using the\n iterative Luce Spectral Ranking algorithm [MG15]_.\n\n The transition rates of the LSR Markov chain are initialized with\n ``alpha``. 
When ``alpha > 0``, this corresponds to a form of regularization\n    (see :ref:`regularization` for details).\n\n    Parameters\n    ----------\n    n_items : int\n        Number of distinct items.\n    data : list of lists\n        Ranking data.\n    alpha : float, optional\n        Regularization parameter.\n    initial_params : array_like, optional\n        Parameters used to initialize the iterative procedure.\n    max_iter : int, optional\n        Maximum number of iterations allowed.\n    tol : float, optional\n        Maximum L1-norm of the difference between successive iterates to\n        declare convergence.\n\n    Returns\n    -------\n    params : numpy.ndarray\n        The ML estimate of model parameters.\n    \"\"\"\n    fun = functools.partial(\n        lsr_rankings, n_items=n_items, data=data, alpha=alpha)\n    return _ilsr(fun, initial_params, max_iter, tol)", "response": "Compute the maximum-likelihood estimate of the model parameters using I-LSR."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ncomputing the LSR estimate of model parameters for top-1 data.", "response": "def lsr_top1(n_items, data, alpha=0.0, initial_params=None):\n    \"\"\"Compute the LSR estimate of model parameters.\n\n    This function implements the Luce Spectral Ranking inference algorithm\n    [MG15]_ for top-1 data (see :ref:`data-top1`).\n\n    The argument ``initial_params`` can be used to iteratively refine an\n    existing parameter estimate (see the implementation of\n    :func:`~choix.ilsr_top1` for an idea on how this works). If it is set to\n    `None` (the default), the all-ones vector is used.\n\n    The transition rates of the LSR Markov chain are initialized with\n    ``alpha``. 
When ``alpha > 0``, this corresponds to a form of regularization\n    (see :ref:`regularization` for details).\n\n    Parameters\n    ----------\n    n_items : int\n        Number of distinct items.\n    data : list of lists\n        Top-1 data.\n    alpha : float\n        Regularization parameter.\n    initial_params : array_like\n        Parameters used to build the transition rates of the LSR Markov chain.\n\n    Returns\n    -------\n    params : numpy.ndarray\n        An estimate of model parameters.\n    \"\"\"\n    weights, chain = _init_lsr(n_items, alpha, initial_params)\n    for winner, losers in data:\n        val = 1 / (weights.take(losers).sum() + weights[winner])\n        for loser in losers:\n            chain[loser, winner] += val\n    chain -= np.diag(chain.sum(axis=1))\n    return log_transform(statdist(chain))"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ncomputes the maximum-likelihood estimate of model parameters using I-LSR.", "response": "def ilsr_top1(\n        n_items, data, alpha=0.0, initial_params=None, max_iter=100, tol=1e-8):\n    \"\"\"Compute the ML estimate of model parameters using I-LSR.\n\n    This function computes the maximum-likelihood (ML) estimate of model\n    parameters given top-1 data (see :ref:`data-top1`), using the\n    iterative Luce Spectral Ranking algorithm [MG15]_.\n\n    The transition rates of the LSR Markov chain are initialized with\n    ``alpha``. 
When ``alpha > 0``, this corresponds to a form of regularization\n (see :ref:`regularization` for details).\n\n Parameters\n ----------\n n_items : int\n Number of distinct items.\n data : list of lists\n Top-1 data.\n alpha : float, optional\n Regularization parameter.\n initial_params : array_like, optional\n Parameters used to initialize the iterative procedure.\n max_iter : int, optional\n Maximum number of iterations allowed.\n tol : float, optional\n Maximum L1-norm of the difference between successive iterates to\n declare convergence.\n\n Returns\n -------\n params : numpy.ndarray\n The ML estimate of model parameters.\n \"\"\"\n fun = functools.partial(lsr_top1, n_items=n_items, data=data, alpha=alpha)\n return _ilsr(fun, initial_params, max_iter, tol)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef ep_pairwise(n_items, data, alpha, model=\"logit\", max_iter=100,\n initial_state=None):\n \"\"\"Compute a distribution of model parameters using the EP algorithm.\n\n This function computes an approximate Bayesian posterior probability\n distribution over model parameters, given pairwise-comparison data (see\n :ref:`data-pairwise`). It uses the expectation propagation algorithm, as\n presented, e.g., in [CG05]_.\n\n The prior distribution is assumed to be isotropic Gaussian with variance\n ``1 / alpha``. The posterior is approximated by a a general multivariate\n Gaussian distribution, described by a mean vector and a covariance matrix.\n\n Two different observation models are available. ``logit`` (default) assumes\n that pairwise-comparison outcomes follow from a Bradley-Terry model.\n ``probit`` assumes that the outcomes follow from Thurstone's model.\n\n Parameters\n ----------\n n_items : int\n Number of distinct items.\n data : list of lists\n Pairwise-comparison data.\n alpha : float\n Inverse variance of the (isotropic) prior.\n model : str, optional\n Observation model. 
Either \"logit\" or \"probit\".\n max_iter : int, optional\n Maximum number of iterations allowed.\n initial_state : tuple of array_like, optional\n Natural parameters used to initialize the EP algorithm.\n\n Returns\n -------\n mean : numpy.ndarray\n The mean vector of the approximate Gaussian posterior.\n cov : numpy.ndarray\n The covariance matrix of the approximate Gaussian posterior.\n\n Raises\n ------\n ValueError\n If the observation model is not \"logit\" or \"probit\".\n \"\"\"\n if model == \"logit\":\n match_moments = _match_moments_logit\n elif model == \"probit\":\n match_moments = _match_moments_probit\n else:\n raise ValueError(\"unknown model '{}'\".format(model))\n return _ep_pairwise(\n n_items, data, alpha, match_moments, max_iter, initial_state)", "response": "Compute a distribution of model parameters using the EP algorithm."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ncomputing a pairwise version of the EP algorithm.", "response": "def _ep_pairwise(\n n_items, comparisons, alpha, match_moments, max_iter, initial_state):\n \"\"\"Compute a distribution of model parameters using the EP algorithm.\n\n Raises\n ------\n RuntimeError\n If the algorithm does not converge after ``max_iter`` iterations.\n \"\"\"\n # Static variable that allows to check the # of iterations after the call.\n _ep_pairwise.iterations = 0\n m = len(comparisons)\n prior_inv = alpha * np.eye(n_items)\n if initial_state is None:\n # Initially, mean and covariance come from the prior.\n mean = np.zeros(n_items)\n cov = (1 / alpha) * np.eye(n_items)\n # Initialize the natural params in the function space.\n tau = np.zeros(m)\n nu = np.zeros(m)\n # Initialize the natural params in the space of thetas.\n prec = np.zeros((n_items, n_items))\n xs = np.zeros(n_items)\n else:\n tau, nu = initial_state\n mean, cov, xs, prec = _init_ws(\n n_items, comparisons, prior_inv, tau, nu)\n for _ in range(max_iter):\n _ep_pairwise.iterations += 1\n # Keep a copy of 
the old parameters for convergence testing.\n tau_old = np.array(tau, copy=True)\n nu_old = np.array(nu, copy=True)\n for i in nprand.permutation(m):\n a, b = comparisons[i]\n # Update mean and variance in function space.\n f_var = cov[a,a] + cov[b,b] - 2 * cov[a,b]\n f_mean = mean[a] - mean[b]\n # Cavity distribution.\n tau_tot = 1.0 / f_var\n nu_tot = tau_tot * f_mean\n tau_cav = tau_tot - tau[i]\n nu_cav = nu_tot - nu[i]\n cov_cav = 1.0 / tau_cav\n mean_cav = cov_cav * nu_cav\n # Moment matching.\n logpart, dlogpart, d2logpart = match_moments(mean_cav, cov_cav)\n # Update factor params in the function space.\n tau[i] = -d2logpart / (1 + d2logpart / tau_cav)\n delta_tau = tau[i] - tau_old[i]\n nu[i] = ((dlogpart - (nu_cav / tau_cav) * d2logpart)\n / (1 + d2logpart / tau_cav))\n delta_nu = nu[i] - nu_old[i]\n # Update factor params in the weight space.\n prec[(a, a, b, b), (a, b, a, b)] += delta_tau * MAT_ONE_FLAT\n xs[a] += delta_nu\n xs[b] -= delta_nu\n # Update mean and covariance.\n if abs(delta_tau) > 0:\n phi = -1.0 / ((1.0 / delta_tau) + f_var) * MAT_ONE\n upd_mat = cov.take([a, b], axis=0)\n cov = cov + upd_mat.T.dot(phi).dot(upd_mat)\n mean = cov.dot(xs)\n # Recompute the global parameters for stability.\n cov = inv_posdef(prior_inv + prec)\n mean = cov.dot(xs)\n if _converged((tau, nu), (tau_old, nu_old)):\n return mean, cov\n raise RuntimeError(\n \"EP did not converge after {} iterations\".format(max_iter))"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _log_phi(z):\n # Adapted from the GPML function `logphi.m`.\n if z * z < 0.0492:\n # First case: z close to zero.\n coef = -z / SQRT2PI\n val = functools.reduce(lambda acc, c: coef * (c + acc), CS, 0)\n res = -2 * val - log(2)\n dres = exp(-(z * z) / 2 - res) / SQRT2PI\n elif z < -11.3137:\n # Second case: z very small.\n num = functools.reduce(\n lambda acc, r: -z * acc / SQRT2 + r, RS, 0.5641895835477550741)\n den = functools.reduce(lambda acc, q: -z * 
acc / SQRT2 + q, QS, 1.0)\n res = log(num / (2 * den)) - (z * z) / 2\n dres = abs(den / num) * sqrt(2.0 / pi)\n else:\n res = log(normal_cdf(z))\n dres = exp(-(z * z) / 2 - res) / SQRT2PI\n return res, dres", "response": "Stable computation of the log of the Normal CDF and its derivative."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ninitialize parameters in the weight space.", "response": "def _init_ws(n_items, comparisons, prior_inv, tau, nu):\n \"\"\"Initialize parameters in the weight space.\"\"\"\n prec = np.zeros((n_items, n_items))\n xs = np.zeros(n_items)\n for i, (a, b) in enumerate(comparisons):\n prec[(a, a, b, b), (a, b, a, b)] += tau[i] * MAT_ONE_FLAT \n xs[a] += nu[i]\n xs[b] -= nu[i]\n cov = inv_posdef(prior_inv + prec)\n mean = cov.dot(xs)\n return mean, cov, xs , prec"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef exp_transform(params):\n weights = np.exp(np.asarray(params) - np.mean(params))\n return (len(weights) / weights.sum()) * weights", "response": "Transform parameters into exp - scale weights."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef softmax(xs):\n ys = xs - np.max(xs)\n exps = np.exp(ys)\n return exps / exps.sum(axis=0)", "response": "Stable implementation of the softmax function."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef inv_posdef(mat):\n # See:\n # - http://www.seas.ucla.edu/~vandenbe/103/lectures/chol.pdf\n # - http://scicomp.stackexchange.com/questions/3188\n chol = np.linalg.cholesky(mat)\n ident = np.eye(mat.shape[0])\n res = solve_triangular(chol, ident, lower=True, overwrite_b=True)\n return np.transpose(res).dot(res)", "response": "Stable inverse of a positive definite matrix."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for 
the following Python 3 code\ndef footrule_dist(params1, params2=None):\n    assert params2 is None or len(params1) == len(params2)\n    ranks1 = rankdata(params1, method=\"average\")\n    if params2 is None:\n        ranks2 = np.arange(1, len(params1) + 1, dtype=float)\n    else:\n        ranks2 = rankdata(params2, method=\"average\")\n    return np.sum(np.abs(ranks1 - ranks2))", "response": "Compute Spearman's footrule distance between two models."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef kendalltau_dist(params1, params2=None):\n    assert params2 is None or len(params1) == len(params2)\n    ranks1 = rankdata(params1, method=\"ordinal\")\n    if params2 is None:\n        ranks2 = np.arange(1, len(params1) + 1, dtype=float)\n    else:\n        ranks2 = rankdata(params2, method=\"ordinal\")\n    tau, _ = kendalltau(ranks1, ranks2)\n    n_items = len(params1)\n    n_pairs = n_items * (n_items - 1) / 2\n    return round((n_pairs - n_pairs * tau) / 2)", "response": "Compute the Kendall tau distance between two models."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef log_likelihood_pairwise(data, params):\n    loglik = 0\n    for winner, loser in data:\n        loglik -= np.logaddexp(0, -(params[winner] - params[loser]))\n    return loglik", "response": "Compute the log-likelihood of model parameters."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncomputes the log-likelihood of model parameters.", "response": "def log_likelihood_rankings(data, params):\n    \"\"\"Compute the log-likelihood of model parameters.\"\"\"\n    loglik = 0\n    params = np.asarray(params)\n    for ranking in data:\n        for i, winner in enumerate(ranking[:-1]):\n            loglik -= logsumexp(params.take(ranking[i:]) - params[winner])\n    return loglik"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ncomputing the log-likelihood of model parameters.", "response": "def 
log_likelihood_top1(data, params):\n \"\"\"Compute the log-likelihood of model parameters.\"\"\"\n loglik = 0\n params = np.asarray(params)\n for winner, losers in data:\n idx = np.append(winner, losers)\n loglik -= logsumexp(params.take(idx) - params[winner])\n return loglik"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef log_likelihood_network(\n digraph, traffic_in, traffic_out, params, weight=None):\n \"\"\"\n Compute the log-likelihood of model parameters.\n\n If ``weight`` is not ``None``, the log-likelihood is correct only up to a\n constant (independent of the parameters).\n \"\"\"\n loglik = 0\n for i in range(len(traffic_in)):\n loglik += traffic_in[i] * params[i]\n if digraph.out_degree(i) > 0:\n neighbors = list(digraph.successors(i))\n if weight is None:\n loglik -= traffic_out[i] * logsumexp(params.take(neighbors))\n else:\n weights = [digraph[i][j][weight] for j in neighbors]\n loglik -= traffic_out[i] * logsumexp(\n params.take(neighbors), b=weights)\n return loglik", "response": "Compute the log - likelihood of the model parameters."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncompute the stationary distribution of a Markov chain.", "response": "def statdist(generator):\n \"\"\"Compute the stationary distribution of a Markov chain.\n\n Parameters\n ----------\n generator : array_like\n Infinitesimal generator matrix of the Markov chain.\n\n Returns\n -------\n dist : numpy.ndarray\n The unnormalized stationary distribution of the Markov chain.\n\n Raises\n ------\n ValueError\n If the Markov chain does not have a unique stationary distribution.\n \"\"\"\n generator = np.asarray(generator)\n n = generator.shape[0]\n with warnings.catch_warnings():\n # The LU decomposition raises a warning when the generator matrix is\n # singular (which it, by construction, is!).\n warnings.filterwarnings(\"ignore\")\n lu, piv = spl.lu_factor(generator.T, 
check_finite=False)\n # The last row contains 0's only.\n left = lu[:-1,:-1]\n right = -lu[:-1,-1]\n # Solves system `left * x = right`. Assumes that `left` is\n # upper-triangular (ignores lower triangle).\n try:\n res = spl.solve_triangular(left, right, check_finite=False)\n except:\n # Ideally we would like to catch `spl.LinAlgError` only, but there seems\n # to be a bug in scipy, in the code that raises the LinAlgError (!!).\n raise ValueError(\n \"stationary distribution could not be computed. \"\n \"Perhaps the Markov chain has more than one absorbing class?\")\n res = np.append(res, 1.0)\n return (n / res.sum()) * res"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ngenerate pairwise comparisons from a Bradley - Terry model.", "response": "def generate_pairwise(params, n_comparisons=10):\n \"\"\"Generate pairwise comparisons from a Bradley--Terry model.\n\n This function samples comparisons pairs independently and uniformly at\n random over the ``len(params)`` choose 2 possibilities, and samples the\n corresponding comparison outcomes from a Bradley--Terry model parametrized\n by ``params``.\n\n Parameters\n ----------\n params : array_like\n Model parameters.\n n_comparisons : int\n Number of comparisons to be returned.\n\n Returns\n -------\n data : list of (int, int)\n Pairwise-comparison samples (see :ref:`data-pairwise`).\n \"\"\"\n n = len(params)\n items = tuple(range(n))\n params = np.asarray(params)\n data = list()\n for _ in range(n_comparisons):\n # Pick the pair uniformly at random.\n a, b = random.sample(items, 2)\n if compare((a, b), params) == a:\n data.append((a, b))\n else:\n data.append((b, a))\n return tuple(data)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef generate_rankings(params, n_rankings, size=3):\n n = len(params)\n items = tuple(range(n))\n params = np.asarray(params)\n data = list()\n for _ in range(n_rankings):\n # Pick the alternatives 
uniformly at random.\n        alts = random.sample(items, size)\n        ranking = compare(alts, params, rank=True)\n        data.append(ranking)\n    return tuple(data)", "response": "Generates a random set of partial rankings from a Plackett-Luce model."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef compare(items, params, rank=False):\n    probs = probabilities(items, params)\n    if rank:\n        return np.random.choice(items, size=len(items), replace=False, p=probs)\n    else:\n        return np.random.choice(items, p=probs)", "response": "Generate a comparison outcome that follows Luce's axiom."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef probabilities(items, params):\n    params = np.asarray(params)\n    return softmax(params.take(items))", "response": "Compute the comparison outcome probabilities given a subset of items."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef opt_pairwise(n_items, data, alpha=1e-6, method=\"Newton-CG\",\n        initial_params=None, max_iter=None, tol=1e-5):\n    \"\"\"Compute the ML estimate of model parameters using ``scipy.optimize``.\n\n    This function computes the maximum-likelihood estimate of model parameters\n    given pairwise-comparison data (see :ref:`data-pairwise`), using optimizers\n    provided by the ``scipy.optimize`` module.\n\n    If ``alpha > 0``, the function returns the maximum a-posteriori (MAP)\n    estimate under an isotropic Gaussian prior with variance ``1 / alpha``. See\n    :ref:`regularization` for details.\n\n    Parameters\n    ----------\n    n_items : int\n        Number of distinct items.\n    data : list of lists\n        Pairwise-comparison data.\n    alpha : float, optional\n        Regularization strength.\n    method : str, optional\n        Optimization method. 
Either \"BFGS\" or \"Newton-CG\".\n    initial_params : array_like, optional\n        Parameters used to initialize the iterative procedure.\n    max_iter : int, optional\n        Maximum number of iterations allowed.\n    tol : float, optional\n        Tolerance for termination (method-specific).\n\n    Returns\n    -------\n    params : numpy.ndarray\n        The (penalized) ML estimate of model parameters.\n\n    Raises\n    ------\n    ValueError\n        If the method is not \"BFGS\" or \"Newton-CG\".\n    \"\"\"\n    fcts = PairwiseFcts(data, alpha)\n    return _opt(n_items, fcts, method, initial_params, max_iter, tol)", "response": "Compute the maximum-likelihood estimate of model parameters given pairwise-comparison data."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef opt_rankings(n_items, data, alpha=1e-6, method=\"Newton-CG\",\n        initial_params=None, max_iter=None, tol=1e-5):\n    \"\"\"Compute the ML estimate of model parameters using ``scipy.optimize``.\n\n    This function computes the maximum-likelihood estimate of model parameters\n    given ranking data (see :ref:`data-rankings`), using optimizers provided by\n    the ``scipy.optimize`` module.\n\n    If ``alpha > 0``, the function returns the maximum a-posteriori (MAP)\n    estimate under an isotropic Gaussian prior with variance ``1 / alpha``. See\n    :ref:`regularization` for details.\n\n    Parameters\n    ----------\n    n_items : int\n        Number of distinct items.\n    data : list of lists\n        Ranking data.\n    alpha : float, optional\n        Regularization strength.\n    method : str, optional\n        Optimization method. 
Either \"BFGS\" or \"Newton-CG\".\n initial_params : array_like, optional\n Parameters used to initialize the iterative procedure.\n max_iter : int, optional\n Maximum number of iterations allowed.\n tol : float, optional\n Tolerance for termination (method-specific).\n\n Returns\n -------\n params : numpy.ndarray\n The (penalized) ML estimate of model parameters.\n\n Raises\n ------\n ValueError\n If the method is not \"BFGS\" or \"Newton-CG\".\n \"\"\"\n fcts = Top1Fcts.from_rankings(data, alpha)\n return _opt(n_items, fcts, method, initial_params, max_iter, tol)", "response": "Compute the maximum likelihood estimate of model parameters using the Scipy Optimization module."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncomputes the maximum likelihood estimate of model parameters using the Scipy Optimization module.", "response": "def opt_top1(n_items, data, alpha=1e-6, method=\"Newton-CG\",\n initial_params=None, max_iter=None, tol=1e-5):\n \"\"\"Compute the ML estimate of model parameters using ``scipy.optimize``.\n\n This function computes the maximum-likelihood estimate of model parameters\n given top-1 data (see :ref:`data-top1`), using optimizers provided by the\n ``scipy.optimize`` module.\n\n If ``alpha > 0``, the function returns the maximum a-posteriori (MAP)\n estimate under an isotropic Gaussian prior with variance ``1 / alpha``. See\n :ref:`regularization` for details.\n\n Parameters\n ----------\n n_items : int\n Number of distinct items.\n data : list of lists\n Top-1 data.\n alpha : float, optional\n Regularization strength.\n method : str, optional\n Optimization method. 
Either \"BFGS\" or \"Newton-CG\".\n initial_params : array_like, optional\n Parameters used to initialize the iterative procedure.\n max_iter : int, optional\n Maximum number of iterations allowed.\n tol : float, optional\n Tolerance for termination (method-specific).\n\n Returns\n -------\n params : numpy.ndarray\n The (penalized) ML estimate of model parameters.\n\n Raises\n ------\n ValueError\n If the method is not \"BFGS\" or \"Newton-CG\".\n \"\"\"\n fcts = Top1Fcts(data, alpha)\n return _opt(n_items, fcts, method, initial_params, max_iter, tol)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncompute the negative penalized log - likelihood.", "response": "def objective(self, params):\n \"\"\"Compute the negative penalized log-likelihood.\"\"\"\n val = self._penalty * np.sum(params**2)\n for win, los in self._data:\n val += np.logaddexp(0, -(params[win] - params[los]))\n return val"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncompute the negative penalized log - likelihood.", "response": "def objective(self, params):\n \"\"\"Compute the negative penalized log-likelihood.\"\"\"\n val = self._penalty * np.sum(params**2)\n for winner, losers in self._data:\n idx = np.append(winner, losers)\n val += logsumexp(params.take(idx) - params[winner])\n return val"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef mm_pairwise(\n n_items, data, initial_params=None, alpha=0.0,\n max_iter=10000, tol=1e-8):\n \"\"\"Compute the ML estimate of model parameters using the MM algorithm.\n\n This function computes the maximum-likelihood (ML) estimate of model\n parameters given pairwise-comparison data (see :ref:`data-pairwise`), using\n the minorization-maximization (MM) algorithm [Hun04]_, [CD12]_.\n\n If ``alpha > 0``, the function returns the maximum a-posteriori (MAP)\n estimate under a (peaked) Dirichlet prior. 
See :ref:`regularization` for\n details.\n\n Parameters\n ----------\n n_items : int\n Number of distinct items.\n data : list of lists\n Pairwise-comparison data.\n initial_params : array_like, optional\n Parameters used to initialize the iterative procedure.\n alpha : float, optional\n Regularization parameter.\n max_iter : int, optional\n Maximum number of iterations allowed.\n tol : float, optional\n Maximum L1-norm of the difference between successive iterates to\n declare convergence.\n\n Returns\n -------\n params : numpy.ndarray\n The ML estimate of model parameters.\n \"\"\"\n return _mm(\n n_items, data, initial_params, alpha, max_iter, tol, _mm_pairwise)", "response": "Compute the ML estimate of model parameters using the MM algorithm."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _mm_rankings(n_items, data, params):\n weights = exp_transform(params)\n wins = np.zeros(n_items, dtype=float)\n denoms = np.zeros(n_items, dtype=float)\n for ranking in data:\n sum_ = weights.take(ranking).sum()\n for i, winner in enumerate(ranking[:-1]):\n wins[winner] += 1\n val = 1.0 / sum_\n for item in ranking[i:]:\n denoms[item] += val\n sum_ -= weights[winner]\n return wins, denoms", "response": "Inner loop of MM algorithm for ranking data."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ncomputing the ML estimate of model parameters given a list of items and ranking data.", "response": "def mm_rankings(n_items, data, initial_params=None, alpha=0.0,\n max_iter=10000, tol=1e-8):\n \"\"\"Compute the ML estimate of model parameters using the MM algorithm.\n\n This function computes the maximum-likelihood (ML) estimate of model\n parameters given ranking data (see :ref:`data-rankings`), using the\n minorization-maximization (MM) algorithm [Hun04]_, [CD12]_.\n\n If ``alpha > 0``, the function returns the maximum a-posteriori (MAP)\n estimate under a (peaked) Dirichlet prior. 
See :ref:`regularization` for\n details.\n\n Parameters\n ----------\n n_items : int\n Number of distinct items.\n data : list of lists\n Ranking data.\n initial_params : array_like, optional\n Parameters used to initialize the iterative procedure.\n alpha : float, optional\n Regularization parameter.\n max_iter : int, optional\n Maximum number of iterations allowed.\n tol : float, optional\n Maximum L1-norm of the difference between successive iterates to\n declare convergence.\n\n Returns\n -------\n params : numpy.ndarray\n The ML estimate of model parameters.\n \"\"\"\n return _mm(n_items, data, initial_params, alpha, max_iter, tol,\n _mm_rankings)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef mm_top1(\n n_items, data, initial_params=None, alpha=0.0,\n max_iter=10000, tol=1e-8):\n \"\"\"Compute the ML estimate of model parameters using the MM algorithm.\n\n This function computes the maximum-likelihood (ML) estimate of model\n parameters given top-1 data (see :ref:`data-top1`), using the\n minorization-maximization (MM) algorithm [Hun04]_, [CD12]_.\n\n If ``alpha > 0``, the function returns the maximum a-posteriori (MAP)\n estimate under a (peaked) Dirichlet prior. 
See :ref:`regularization` for\n details.\n\n Parameters\n ----------\n n_items : int\n Number of distinct items.\n data : list of lists\n Top-1 data.\n initial_params : array_like, optional\n Parameters used to initialize the iterative procedure.\n alpha : float, optional\n Regularization parameter.\n max_iter : int, optional\n Maximum number of iterations allowed.\n tol : float, optional\n Maximum L1-norm of the difference between successive iterates to\n declare convergence.\n\n Returns\n -------\n params : numpy.ndarray\n The ML estimate of model parameters.\n \"\"\"\n return _mm(n_items, data, initial_params, alpha, max_iter, tol, _mm_top1)", "response": "Compute the ML estimate of model parameters given a top - 1 data set."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ncomputing the MAP estimate of a network choice model.", "response": "def choicerank(\n digraph, traffic_in, traffic_out, weight=None,\n initial_params=None, alpha=1.0, max_iter=10000, tol=1e-8):\n \"\"\"Compute the MAP estimate of a network choice model's parameters.\n\n This function computes the maximum-a-posteriori (MAP) estimate of model\n parameters given a network structure and node-level traffic data (see\n :ref:`data-network`), using the ChoiceRank algorithm [MG17]_, [KTVV15]_.\n\n The nodes are assumed to be labeled using consecutive integers starting\n from 0.\n\n Parameters\n ----------\n digraph : networkx.DiGraph\n Directed graph representing the network.\n traffic_in : array_like\n Number of arrivals at each node.\n traffic_out : array_like\n Number of departures at each node.\n weight : str, optional\n The edge attribute that holds the numerical value used for the edge\n weight. 
If None (default) then all edge weights are 1.\n initial_params : array_like, optional\n Parameters used to initialize the iterative procedure.\n alpha : float, optional\n Regularization parameter.\n max_iter : int, optional\n Maximum number of iterations allowed.\n tol : float, optional\n Maximum L1-norm of the difference between successive iterates to\n declare convergence.\n\n Returns\n -------\n params : numpy.ndarray\n The MAP estimate of model parameters.\n\n Raises\n ------\n ImportError\n If the NetworkX library cannot be imported.\n \"\"\"\n import networkx as nx\n # Compute the (sparse) adjacency matrix.\n n_items = len(digraph)\n nodes = np.arange(n_items)\n adj = nx.to_scipy_sparse_matrix(digraph, nodelist=nodes, weight=weight)\n adj_t = adj.T.tocsr()\n # Process the data into a standard form.\n traffic_in = np.asarray(traffic_in)\n traffic_out = np.asarray(traffic_out)\n data = (adj, adj_t, traffic_in, traffic_out)\n return _mm(\n n_items, data, initial_params, alpha, max_iter, tol, _choicerank)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ndecorates a function so that it is called whenever a call is made.", "response": "def decorate(cls, function):\n '''\n Decorate a function so the function call is checked whenever\n a call is made. 
The calls that do not need any checks are skipped.\n\n The `function` argument is the function to be decorated.\n\n The return value will be either\n\n * the function itself, if there is nothing to validate, or\n * a proxy function that will execute the validation.\n '''\n\n should_skip = getattr(function, 'typesafety_skip', False)\n if cls.is_function_validated(function) or should_skip:\n return function\n\n validator = cls(function)\n\n if not validator.need_validate_arguments and \\\n not validator.need_validate_return_value:\n return function\n\n @functools.wraps(function)\n def __wrapper(*args, **kwargs):\n return validator(*args, **kwargs)\n\n __wrapper.__validator__ = validator\n\n return __wrapper"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef undecorate(cls, function):\n '''\n Remove validator decoration from a function.\n\n The `function` argument is the function to be cleaned from\n the validator decorator.\n '''\n\n if cls.is_function_validated(function):\n return cls.get_function_validator(function).function\n\n return function", "response": "Removes validator decoration from a function."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef validate_arguments(self, locals_dict):\n '''\n Validate the arguments passed to a function. If an error occurred,\n the function will throw a TypesafetyError.\n\n The `locals_dict` argument should be the local value dictionary of\n the function. 
An example call would be like:\n '''\n\n for key, value, validator in self.__map_arguments(locals_dict):\n if not self.__is_valid(value, validator):\n key_name = repr(key)\n func_name = self.__function.__name__\n annotation = self.__argument_annotation.get(key)\n message = self.ARG_TYPE_ERROR_MESSAGE.format(\n key_name,\n func_name,\n self.__format_expectation(annotation),\n value.__class__.__name__)\n raise TypesafetyError(message)", "response": "Validate the arguments passed to a function."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nvalidate the return value of a function call.", "response": "def validate_return_value(self, retval):\n '''\n Validate the return value of a function call. If an error occurred,\n the function will throw a TypesafetyError.\n\n The `retval` should contain the return value of the function call.\n '''\n\n if self.__return_annotation is None:\n return\n\n if not self.__is_valid(retval, self.__return_annotation):\n func_name = self.__function.__name__\n msg = self.RET_TYPE_ERROR_MESSAGE.format(\n func_name,\n self.__format_expectation(self.__return_annotation),\n retval.__class__.__name__\n )\n raise TypesafetyError(msg)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef uninstall(self):\n '''\n Uninstall the module finder. 
If not installed, this will do nothing.\n After uninstallation, none of the newly loaded modules will be\n decorated (that is, everything will be back to normal).\n '''\n\n if self.installed:\n sys.meta_path.remove(self)\n\n # Reload all decorated items\n import_list = []\n for name in self.__loaded_modules:\n del sys.modules[name]\n import_list.append(name)\n\n for name in import_list:\n __import__(name)\n\n self.__reset()", "response": "Uninstall the module finder."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef find_module(self, fullname, path=None):\n '''\n Find the module. Required for the Python meta-loading mechanism.\n\n This will do nothing, since we use the system to locate a module.\n '''\n\n loader = None\n if self.__filter is None or self.__filter(fullname):\n loader = ModuleLoader(self, fullname, path)\n\n return loader", "response": "Find the module. Required for the Python meta-loading mechanism.\n\n This will do nothing, since we use the system to locate a module."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef load_module(self, loader):\n '''\n Load the module. Required for the Python meta-loading mechanism.\n '''\n\n modfile, pathname, description = loader.info\n module = imp.load_module(\n loader.fullname,\n modfile,\n pathname,\n description\n )\n sys.modules[loader.fullname] = module\n self.__loaded_modules.add(loader.fullname)\n\n autodecorator.decorate_module(module, decorator=self.__decorator)\n\n return module", "response": "Load the module. 
Required for the Python meta - loading mechanism."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nregistering the event listeners for the given DB class.", "response": "def _register_event_listeners(self, cls):\n \"\"\"\n :param cls: DB class\n \"\"\"\n event.listen(cls, 'after_insert', self._new_listener)\n event.listen(cls, 'after_update', self._update_listener)\n event.listen(cls, 'after_delete', self._delete_listener)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef after_commit_listener(self, session):\n log.info('Commiting indexing orders for session %s' % session)\n try:\n if not any((session.index_new[self.cls_name],\n session.index_dirty[self.cls_name],\n session.index_deleted[self.cls_name])):\n return\n if session.redis is not None:\n self._queue_job(session.redis,\n self.settings['redis.queue_name'],\n self.index_operation_name,\n session.index_new[self.cls_name],\n session.index_dirty[self.cls_name],\n session.index_deleted[self.cls_name],\n self.settings)\n else:\n log.info('Redis not found, falling back to indexing synchronously without redis')\n self.index_operation(\n session.index_new[self.cls_name],\n session.index_dirty[self.cls_name],\n session.index_deleted[self.cls_name],\n self.settings\n )\n session.index_new[self.cls_name].clear()\n session.index_dirty[self.cls_name].clear()\n session.index_deleted[self.cls_name].clear()\n except AttributeError:\n log.warning('Trying to commit indexing orders, but indexing sets are not present.')", "response": "This method is called when the session is committed."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncreates a new job on the specified queue.", "response": "def _queue_job(redis, queue_name, delegate, *args):\n \"\"\"\n creates a new job on the queue\n :param redis: redis\n :param delegate: method to be executed by the queue. 
Use fully qualified method name as String.\n :param args: arguments of the method\n :return: job_id\n \"\"\"\n log.info('Queuing job...')\n with Connection(redis):\n q = Queue(queue_name)\n job = q.enqueue(delegate, *args)\n return job.id"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncall when the transaction is terminated.", "response": "def after_rollback_listener(self, session):\n \"\"\"\n Rollback of the transaction, undo the indexes.\n If our transaction is terminated, we will reset the\n indexing assignments.\n \"\"\"\n log.info('Removing indexing orders.')\n try:\n session.index_new[self.cls_name].clear()\n session.index_dirty[self.cls_name].clear()\n session.index_deleted[self.cls_name].clear()\n except (AttributeError, KeyError):\n log.warning('Trying to remove indexing orders, but indexing sets are not present.')"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nremove a session from the database.", "response": "def remove_session(self, session):\n \"\"\"\n :param sqlalchemy.session.Session session: Database session to remove\n \"\"\"\n try:\n del session.redis\n except AttributeError:\n pass\n try:\n del session.index_new[self.cls_name]\n del session.index_dirty[self.cls_name]\n del session.index_deleted[self.cls_name]\n except (AttributeError, KeyError):\n log.warning('Removing a session that has no indexing sets.')\n self.sessions.remove(session)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef parse_sort_string(sort):\n if sort is None:\n return ['_score']\n l = sort.rsplit(',')\n sortlist = []\n for se in l:\n se = se.strip()\n order = 'desc' if se[0:1] == '-' else 'asc'\n field = se[1:] if se[0:1] in ['-', '+'] else se\n field = field.strip()\n sortlist.append({field: {\"order\": order, \"unmapped_type\": \"string\", \"missing\": \"_last\"}})\n sortlist.append('_score')\n return sortlist", "response": "Parse a sort string 
for use with elasticsearch\n "} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef parse_filter_params(query_params, filterable):\n if query_params is not None:\n filter_params = {}\n for fq in query_params.mixed():\n if fq in filterable:\n filter_params[fq] = query_params.mixed().get(fq)\n return filter_params\n else:\n return {}", "response": "Parse query_params to a filter params dict. Merge multiple values for one key to a list."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nlinks the given feed to the given sibling.", "response": "def link_to_sibling(self, feed, sibling_type, atom_feed):\n \"\"\"\n Adding previous or next links to the given feed\n self._link_to_sibling(feed, 'previous', atom_feed)\n self._link_to_sibling(feed, 'next', atom_feed)\n\n :param feed: a feed object\n :param sibling_type: 'previous' or 'next'\n :param atom_feed: an atom feed like `feedgen.feed.FeedGenerator`\n \"\"\"\n sibling = self.atom_feed_manager.get_sibling(feed.id, sibling_type)\n if sibling:\n rel = \"prev-archive\" if sibling_type == \"previous\" else \"next-archive\"\n atom_feed.link(href=self.request.route_url(self.get_atom_feed_url, id=sibling.id),\n rel=rel)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef init_atom_feed(self, feed):\n atom_feed = FeedGenerator()\n atom_feed.id(id=self.request.route_url(self.get_atom_feed_url, id=feed.id))\n atom_feed.link(href=self.request.route_url(self.get_atom_feed_url, id=feed.id), rel='self')\n atom_feed.language('nl-BE')\n self.link_to_sibling(feed, 'previous', atom_feed)\n self.link_to_sibling(feed, 'next', atom_feed)\n return atom_feed", "response": "Initializes an atom feed object given a feed object"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _generate_atom_feed(self, feed):\n 
atom_feed = self.init_atom_feed(feed)\n atom_feed.title(\"Feed\")\n return atom_feed", "response": "A function returning a feed like feedgen. feed. FeedGenerator."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef from_private_key(account_name, private_key=None, private_key_path=None,\n storage=None, storage_path=None, api_version=\"v3\",\n readonly=False, http_client=None, ga_hook=None):\n \"\"\"Create a client for a service account.\n\n Create a client with an account name and a private key.\n\n Args:\n account_name: str, the account identifier (probably the account email).\n private_key: str, the private key as a string.\n private_key_path: str, path to a file with the private key in.\n storage: oauth2client.client.Storage, a Storage implementation to store\n credentials.\n storage_path: str, path to a file storage.\n readonly: bool, default False, if True only readonly access is requested\n from GA.\n http_client: httplib2.Http, Override the default http client used.\n ga_hook: function, a hook that is called every time a query is made\n against GA.\n \"\"\"\n if not private_key:\n if not private_key_path:\n raise GapyError(\n \"Must provide either a private_key or a private_key_file\")\n if isinstance(private_key_path, basestring):\n private_key_path = open(private_key_path)\n private_key = private_key_path.read()\n\n storage = _get_storage(storage, storage_path)\n\n scope = GOOGLE_API_SCOPE_READONLY if readonly else GOOGLE_API_SCOPE\n credentials = SignedJwtAssertionCredentials(account_name, private_key,\n scope)\n credentials.set_store(storage)\n\n return Client(_build(credentials, api_version, http_client), ga_hook)", "response": "Create a client for a service account."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncreate a client with a client secrets file.", "response": "def from_secrets_file(client_secrets, storage=None, flags=None,\n storage_path=None, api_version=\"v3\", 
readonly=False,\n http_client=None, ga_hook=None):\n \"\"\"Create a client for a web or installed application.\n\n Create a client with a client secrets file.\n\n Args:\n client_secrets: str, path to the client secrets file (downloadable from\n Google API Console)\n storage: oauth2client.client.Storage, a Storage implementation to store\n credentials.\n storage_path: str, path to a file storage.\n readonly: bool, default False, if True only readonly access is requested\n from GA.\n http_client: httplib2.Http, Override the default http client used.\n ga_hook: function, a hook that is called every time a query is made\n against GA.\n \"\"\"\n scope = GOOGLE_API_SCOPE_READONLY if readonly else GOOGLE_API_SCOPE\n flow = flow_from_clientsecrets(client_secrets,\n scope=scope)\n storage = _get_storage(storage, storage_path)\n credentials = storage.get()\n if credentials is None or credentials.invalid:\n credentials = run_flow(flow, storage, flags)\n\n return Client(_build(credentials, api_version, http_client), ga_hook)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncreates a new client with a credentials stored in stagecraft db.", "response": "def from_credentials_db(client_secrets, storage, api_version=\"v3\",\n readonly=False, http_client=None, ga_hook=None):\n \"\"\"Create a client for a web or installed application.\n\n Create a client with a credentials stored in stagecraft db.\n\n Args:\n client_secrets: dict, client secrets (downloadable from\n Google API Console)\n storage: stagecraft.apps.collectors.libs.ga.CredentialStorage,\n a Storage implementation to store credentials.\n readonly: bool, default False, if True only readonly access is requested\n from GA.\n http_client: httplib2.Http, Override the default http client used.\n ga_hook: function, a hook that is called every time a query is made\n against GA.\n \"\"\"\n credentials = storage.get()\n\n return Client(_build(credentials, api_version, http_client), ga_hook)"} 
{"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nbuild the client object.", "response": "def _build(credentials, api_version, http_client=None):\n \"\"\"Build the client object.\"\"\"\n if not http_client:\n http_client = httplib2.Http()\n\n authorised_client = credentials.authorize(http_client)\n\n return build(\"analytics\", api_version, http=authorised_client)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nprefix a string with ga prefix.", "response": "def _prefix_ga(value):\n \"\"\"Prefix a string with 'ga:' if it is not already\n\n Sort values may be prefixed with '-' to indicate negative sort.\n\n >>> _prefix_ga('foo')\n 'ga:foo'\n >>> _prefix_ga('ga:foo')\n 'ga:foo'\n >>> _prefix_ga('-foo')\n '-ga:foo'\n >>> _prefix_ga('-ga:foo')\n '-ga:foo'\n \"\"\"\n prefix = ''\n if value[0] == '-':\n value = value[1:]\n prefix = '-'\n if not value.startswith('ga:'):\n prefix += 'ga:'\n return prefix + value"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncomputes the unifinf distribution of a given statistic.", "response": "def ad_unif_inf(statistic):\n \"\"\"\n Approximates the limiting distribution to about 5 decimal digits.\n \"\"\"\n z = statistic\n if z < 2:\n return (exp(-1.2337141 / z) / sqrt(z) *\n (2.00012 + (.247105 - (.0649821 - (.0347962 -\n (.011672 - .00168691 * z) * z) * z) * z) * z))\n else:\n return exp(-exp(1.0776 - (2.30695 - (.43424 - (.082433 -\n (.008056 - .0003146 * z) * z) * z) * z) * z))"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef ad_unif_fix(samples, pinf):\n n = samples\n c = .01265 + .1757 / n\n if pinf < c:\n return (((.0037 / n + .00078) / n + .00006) / n) * g1(pinf / c)\n elif pinf < .8:\n return ((.01365 / n + .04213) / n) * g2((pinf - c) / (.8 - c))\n else:\n return g3(pinf) / n", "response": "Corrects the limiting distribution for a finite sample size."} {"SOURCE": 
"codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef check_metadata(self):\n metadata = self.distribution.metadata\n\n missing = []\n for attr in (\"name\", \"version\", \"url\"):\n if not (hasattr(metadata, attr) and getattr(metadata, attr)):\n missing.append(attr)\n\n # https://www.python.org/dev/peps/pep-0345/\n # author or maintainer must be specified\n # author is preferred; if identifcal, specify only author\n if not metadata.author and not metadata.maintainer:\n missing.append(\"author\")\n if self.enforce_email:\n missing.append(\"author_email\")\n else:\n # one or both of author or maintainer specified\n if (\n metadata.author\n and self.enforce_email\n and not metadata.author_email\n ):\n missing.append(\"author_email\")\n if (\n metadata.maintainer\n and self.enforce_email\n and not metadata.maintainer_email\n ):\n missing.append(\"maintainer_email\")\n if (\n metadata.author\n and metadata.maintainer\n and metadata.author == metadata.maintainer\n ):\n self.warn(\n \"Maintainer should be omitted if identical to Author.\\n\"\n \"See https://www.python.org/dev/peps/pep-0345/\"\n \"#maintainer-email-optional\"\n )\n if (\n metadata.author_email\n and metadata.maintainer_email\n and metadata.author_email == metadata.maintainer_email\n ):\n self.warn(\n \"Maintainer Email should be omitted if\"\n \"identical to Author's.\\n\"\n \"See https://www.python.org/dev/peps/pep-0345/\"\n \"#maintainer-email-optional\"\n )\n\n if missing:\n self.warn(\"missing required meta-data: %s\" % \", \".join(missing))", "response": "Checks that all required meta - data is supplied."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nconstructs a PostgresConfig from a URL.", "response": "def from_url(cls, url):\n \"\"\"\n Construct a PostgresConfig from a URL.\n \"\"\"\n parsed = urlparse(url)\n return cls(\n username=parsed.username,\n password=parsed.password,\n hostname=parsed.hostname,\n 
port=parsed.port,\n database=parsed.path.lstrip('/'),\n # Like parse_qs, but produces a scalar per key, instead of a list:\n query_params=dict(param.split('=')\n for param in parsed.query.split('&'))\n if parsed.query else {},\n )"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nallowing using a method or a function to vectorize a single object.", "response": "def vectorize(*args, **kwargs):\n \"\"\"\n Allows using `@vectorize` as well as `@vectorize()`.\n \"\"\"\n if args and callable(args[0]):\n # Guessing the argument is the method.\n return _vectorize(args[0])\n else:\n # Wait for the second call.\n return lambda m: _vectorize(m, *args, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef conditional_http_tween_factory(handler, registry):\n settings = registry.settings if hasattr(registry, 'settings') else {}\n not_cacheble_list = []\n if 'not.cachable.list' in settings:\n not_cacheble_list = settings.get('not.cachable.list').split()\n\n def conditional_http_tween(request):\n response = handler(request)\n\n if request.path not in not_cacheble_list:\n\n # If the Last-Modified header has been set, we want to enable the\n # conditional response processing.\n if response.last_modified is not None:\n response.conditional_response = True\n\n # We want to only enable the conditional machinery if either we\n # were given an explicit ETag header by the view or we have a\n # buffered response and can generate the ETag header ourself.\n if response.etag is not None:\n response.conditional_response = True\n elif (isinstance(response.app_iter, Sequence) and\n len(response.app_iter) == 1) and response.body is not None:\n response.conditional_response = True\n response.md5_etag()\n\n return response\n\n return conditional_http_tween", "response": "Returns a Tween that adds ETag headers and tells Pyramid to enable \n conditional responses where appropriate."} {"SOURCE": 
"codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngets the last modified date of a given file", "response": "def get_last_modified_date(filename):\n \"\"\"\n Get the last modified date of a given file\n\n :param filename: string: pathname of a file\n :return: Date\n \"\"\"\n if os.path.isfile(filename):\n t = os.path.getmtime(filename)\n return datetime.date.fromtimestamp(t).strftime('%d/%m/%Y')\n return None"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nget the file size of a given file", "response": "def get_file_size(filename):\n \"\"\"\n Get the file size of a given file\n\n :param filename: string: pathname of a file\n :return: human readable filesize\n \"\"\"\n if os.path.isfile(filename):\n return convert_size(os.path.getsize(filename))\n return None"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ntransforming bytesize to a human readable filesize.", "response": "def convert_size(size_bytes):\n \"\"\"\n Transform bytesize to a human readable filesize\n\n :param size_bytes: bytesize\n :return: human readable filesize\n \"\"\"\n if size_bytes == 0:\n return \"0B\"\n size_name = (\"B\", \"KB\", \"MB\", \"GB\", \"TB\", \"PB\", \"EB\", \"ZB\", \"YB\")\n i = int(math.floor(math.log(size_bytes, 1024)))\n p = math.pow(1024, i)\n s = round(size_bytes / p, 2)\n return \"%s %s\" % (s, size_name[i])"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nforcing validation of all attributes of all traits.", "response": "def validate_all_attributes(self):\n \"\"\"\n Force validation of all traits.\n\n Useful for circumstances where an attribute won't be accessed until\n well after construction, but we want to fail eagerly if that attribute\n is passed incorrectly.\n\n Consider using ``StrictSerializable`` for classes where you always want\n this called on construction.\n\n See Also\n --------\n StrictSerializable\n \"\"\"\n errors = {}\n for name in 
self.trait_names():\n try:\n getattr(self, name)\n except TraitError as e:\n errors[name] = e\n if errors:\n raise MultipleTraitErrors(errors)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngenerating an example instance of a Serializable subclass.", "response": "def example_instance(cls, skip=()):\n \"\"\"\n Generate an example instance of a Serializable subclass.\n\n If traits have been tagged with an `example` value, then we use that\n value. Otherwise we fall back the default_value for the instance.\n\n Traits with names in ``skip`` will not have example values set.\n \"\"\"\n kwargs = {}\n for name, trait in iteritems(cls.class_traits()):\n if name in skip:\n continue\n value = trait.example_value\n if value is Undefined:\n continue\n kwargs[name] = value\n\n return cls(**kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngenerates an example yaml string for a Serializable subclass.", "response": "def example_yaml(cls, skip=()):\n \"\"\"\n Generate an example yaml string for a Serializable subclass.\n\n If traits have been tagged with an `example` value, then we use that\n value. 
Otherwise we fall back the default_value for the instance.\n \"\"\"\n return cls.example_instance(skip=skip).to_yaml(skip=skip)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nwrite a file containing an example yaml string for a Serializable subclass.", "response": "def write_example_yaml(cls, dest, skip=()):\n \"\"\"\n Write a file containing an example yaml string for a Serializable\n subclass.\n \"\"\"\n # Make sure we can make an instance before we open a file.\n inst = cls.example_instance(skip=skip)\n with open(dest, 'w') as f:\n inst.to_yaml(stream=f, skip=skip)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef to_base64(self, skip=()):\n return base64.b64encode(\n ensure_bytes(\n self.to_json(skip=skip),\n encoding='utf-8',\n )\n )", "response": "Construct from base64 - encoded JSON."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nserialize and write self into environ.", "response": "def to_environ(self, environ, skip=()):\n \"\"\"\n Serialize and write self to environ[self._envvar].\n\n Parameters\n ----------\n environ : dict-like\n Dict-like object (e.g. 
os.environ) into which to write ``self``.\n \"\"\"\n environ[ensure_unicode(type(self).__name__)] = (\n ensure_unicode(self.to_base64(skip=skip))\n )"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nretrieve an object by its object_id", "response": "def get_one(self, object_id):\n \"\"\"\n Retrieve an object by its object_id\n\n :param object_id: the objects id.\n :return: the requested object\n :raises: :class: NoResultFound when the object could not be found\n \"\"\"\n return self.session.query(self.cls).filter_by(id=object_id).one()"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_one_for_update(self, object_id):\n return self.session.query(self.cls).with_for_update().filter_by(id=object_id).one()", "response": "Retrieve an object by its object_id"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get(self, object_id, cls=None):\n cls = self.cls if cls is None else cls\n return self.session.query(cls).get(object_id)", "response": "Retrieve an object by its object_id and class."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_for_update(self, object_id, cls=None):\n cls = self.cls if cls is None else cls\n return self.session.query(cls).with_for_update().get(object_id)", "response": "Retrieve an object by its object_id and select for update on the row level lock."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ndeletes an object by its id.", "response": "def delete(self, object_id):\n \"\"\"\n Delete an object by its id\n\n :param object_id: the objects id.\n :return: the deleted object\n :raises: :class: NoResultFound when the object could not be found\n \"\"\"\n obj = self.session.query(self.cls).filter_by(id=object_id).one()\n self.session.delete(obj)\n return obj"} {"SOURCE": 
"codesearchnet", "instruction": "Create a Python 3 function for\nsaving an object to the session", "response": "def save(self, obj):\n \"\"\"\n save an object\n\n :param obj: the object\n :return: the saved object\n \"\"\"\n if obj not in self.session:\n self.session.add(obj)\n else:\n obj = self.session.merge(obj)\n self.session.flush()\n self.session.refresh(obj)\n return obj"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngetting a previous or next sibling from a feed", "response": "def get_sibling(self, feed_id, sibling_type):\n \"\"\"\n get a previous/next sibling from a feed\n :param feed_id: id of the feed\n :param sibling_type: sibling type ('previous', 'next')\n :return: the sibling\n \"\"\"\n if sibling_type == 'previous':\n query = self.session.query(self.feed_model).filter(self.feed_model.id < feed_id)\n order_clause = desc(self.feed_model.id)\n elif sibling_type == 'next':\n query = self.session.query(self.feed_model).filter(self.feed_model.id > feed_id)\n order_clause = self.feed_model.id\n else:\n raise Exception('Unhandled sibling relation type') # pragma no cover\n if query.count() > 0:\n return query.order_by(order_clause).first()\n else:\n return None"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nsaves an object to the db", "response": "def save_object(self, obj):\n \"\"\"\n Save an object to the db\n :param obj: the object to save\n :return: the saved object\n \"\"\"\n self.session.add(obj)\n self.session.flush()\n return obj"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_from_archive(self, feed_id):\n file_path = self.feed_repository + '/' + str(feed_id) + '.xml'\n if not os.path.isfile(file_path):\n raise FeedArchiveNotFound()\n with open(file_path, 'r') as rec_file:\n return rec_file.read()", "response": "Retrieves the atom feed from the feed archive."} {"SOURCE": "codesearchnet", "instruction": 
"Can you generate a brief explanation for the following Python 3 code\ndef get_atom_feed_entry(self, feedentry_id):\n return self.session.query(self.feedentry_model).filter(\n self.feedentry_model.id == feedentry_id\n ).one()", "response": "Get a specific feed entry by ID"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\narchiving a feed :param feed_id: the feed id of the feed to archive :param feed: the feed to archive", "response": "def archive_feed(self, feed_id, feed):\n \"\"\"\n Archive a feed\n :param feed_id: the feed id of the feed to archive\n :param feed: the feed to archive\n \"\"\"\n with open(self.feed_repository + '/' + str(feed_id) + '.xml', 'w') as rec_file:\n rec_file.write(feed.atom_str(pretty=True))\n if feed_id == self.current_feed.id:\n self._current_feed = None"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nadds named concept filters to the internal list of named concept resources.", "response": "def add_named_concept_filters(self, named_filter_concepts):\n \"\"\"\n Adds named concept filters\n\n :param named_filter_concepts: dict with named filter concepts which will be mapped as the key as query param and the value as search string\n \"\"\"\n for concept_key, concept_name in named_filter_concepts.items():\n self.add_concept_filter(concept_key, concept_name=concept_name)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nadd a concept filter to the search term", "response": "def add_concept_filter(self, concept, concept_name=None):\n \"\"\"\n Add a concept filter\n\n :param concept: concept which will be used as lowercase string in a search term\n :param concept_name: name of the place where there will be searched for\n \"\"\"\n if concept in self.query_params.keys():\n if not concept_name:\n concept_name = concept\n if isinstance(self.query_params[concept], list):\n if self.es_version == '1':\n es_filter = {'or': []}\n for or_filter in 
self.query_params[concept]:\n es_filter['or'].append(self._build_concept_term(concept_name, or_filter))\n else:\n es_filter = {\"bool\": {\"should\": []}}\n for or_filter in self.query_params[concept]:\n es_filter[\"bool\"][\"should\"].append(self._build_concept_term(concept_name, or_filter))\n else:\n es_filter = self._build_concept_term(concept_name, self.query_params[concept])\n self.filters.append(es_filter)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef build(self):\n if self.es_version == '1':\n if len(self.filters) > 0:\n return {\n 'filtered': {\n 'query': self.query,\n 'filter': {\n 'and': self.filters\n }\n }\n }\n else:\n return self.query\n else:\n query = {\n 'bool': {\n 'must': self.query\n }\n }\n if len(self.filters) > 0:\n query[\"bool\"][\"filter\"] = self.filters\n return query", "response": "Builds the query string which can be used for a search query"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nparse the range headers into a range object.", "response": "def parse(cls, request, default_start=0, default_end=9, max_end=50):\n '''\n Parse the range headers into a range object. 
When there are no range headers,\n        check for a page 'pagina' parameter, otherwise use the defaults\n\n        :param request: a request object\n        :param default_start: default start for paging (optional, default is 0)\n        :param default_end: default end for paging (optional, default is 9)\n        :param max_end: maximum end for paging (optional, default is 50,\n            no limits in case of None)\n        :return: :class: 'oe_utils.range_parser.Range'\n        '''\n        settings = request.registry.settings\n        page_param = settings.get('oe.paging.page.queryparam', 'pagina')\n        if 'Range' in request.headers and request.headers['Range'] != '':\n            match = re.match('^items=([0-9]+)-([0-9]+)$', request.headers['Range'])\n\n            if match:\n                start = int(match.group(1))\n                end = int(match.group(2))\n\n                if end < start:\n                    end = start\n                if max_end and end > start + max_end:\n                    end = start + max_end\n                return cls(start, end)\n            else:\n                raise RangeParseException('range header does not match expected format')\n        elif page_param in request.params:\n            per_page_param = settings.get('oe.paging.per_page.queryparam',\n                                          'per_pagina')\n            page = int(request.params.get(page_param))\n            items_per_page = int(\n                request.params.get(per_page_param,\n                                   default_end - default_start + 1)\n            )\n            start = default_start + items_per_page * (page - 1)\n            end = start + items_per_page - 1\n            return cls(start, end, page)\n        else:\n            return cls(default_start, default_end)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nset the correct range headers on the response object.", "response": "def set_response_headers(self, request, total_count):\n        '''\n        Set the correct range headers on the response\n\n        :param request: a request object\n        :param total_count: the total number of results\n        '''\n        response = request.response\n        response.headerlist.append(('Access-Control-Expose-Headers', 'Content-Range, X-Content-Range'))\n        response.accept_ranges = 'items'\n        if total_count is None:\n            raise RangeParseException('Provided length value is
null')\n        if total_count > 0:\n            response.content_range = self.content_range(total_count)\n\n        self.set_link_headers(request, total_count)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef set_link_headers(self, request, total_count):\n        response = request.response\n        if request.headers.get('Range'):\n            # Don't set the Link headers when custom ranges were used.\n            return\n\n        settings = request.registry.settings\n        page_param = settings.get('oe.paging.page.queryparam', 'pagina')\n        per_page_param = settings.get('oe.paging.per_page.queryparam',\n                                      'per_pagina')\n        url = request.path_url\n        try:\n            queryparams = request.params.mixed()\n        except AttributeError:\n            queryparams = request.params\n        page_size = self.get_page_size()\n        current_page = self.start // page_size + 1\n        queryparams[per_page_param] = page_size\n        links = {\n            'first': 1,\n            'last': int(math.ceil(float(total_count) / page_size))\n        }\n        if current_page != links['first']:\n            links['prev'] = current_page - 1\n        if current_page != links['last']:\n            links['next'] = current_page + 1\n\n        response.headers['Link'] = self._make_link_headers(links, page_param,\n                                                           queryparams, url)", "response": "Sets the Link headers on the response."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef validator(node, rrn):\n    '''\n    Colander validator that checks whether a given value is a valid\n    Belgian national ID number.\n    :raises colander.Invalid: If the value is not a valid Belgian national ID number.\n    '''\n\n    def _valid(_rrn):\n        x = 97 - (int(_rrn[:-2]) - (int(_rrn[:-2]) // 97) * 97)\n        return int(_rrn[-2:]) == x\n\n    if len(rrn) != 11:\n        raise colander.Invalid(\n            node,\n            'Een rijksregisternummer moet 11 cijfers lang zijn.'\n        )\n    elif not _valid(rrn):\n        valid_rrn = False\n        if rrn[:1] == '0' or rrn[:1] == '1':\n            rrn = '2' + rrn\n            valid_rrn = _valid(rrn)\n        if not valid_rrn:\n            raise colander.Invalid(\n                node,\n                'Dit is geen correct
rijksregisternummer.'\n )", "response": "Colander validator that checks whether a given value is a valid Belgian national ID number."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nedits a value to a value that can be validated as a kbo number.", "response": "def preparer(value):\n '''\n Edit a value to a value that can be validated as a\n kbo number.\n '''\n if value is None or value == colander.null:\n return colander.null\n return value.strip().replace('.', '')"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef preparer(telefoon):\n '''\n Edit a phone value to a value that can be validated as a\n phone number.\n This takes the incoming value and :\n Removes all whitespace ( space, tab , newline , ... ) characters\n Removes the following characters: \" / - . \"\n If no + is present at frond, add the country code\n In short: just add a + at the beginning of the country code.\n '''\n if telefoon is None or telefoon == colander.null:\n return colander.null\n if 'landcode' in telefoon and telefoon.get('landcode') is not None:\n landcode = telefoon.get('landcode')\n value = re.sub(r'\\s+', '', landcode).replace('.', '').replace('/', '').replace(',', '').replace('-', ''). \\\n lstrip('0')\n telefoon['landcode'] = '+' + value if value[0] != '+' else value\n if 'nummer' in telefoon and telefoon.get('nummer') is not None:\n nummer = telefoon.get('nummer')\n value = re.sub(r'\\s+', '', nummer).replace('.', '').replace('/', '').replace(',', '').replace('-', ''). 
\\\n            lstrip('0')\n        telefoon['nummer'] = value\n    return telefoon", "response": "This function takes a telefoon dict and returns it with values that can be validated as a phone number."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nvalidate international phone number.", "response": "def validator(node, telefoon):\n    ''' A valid international phone number looks like this: + followed by the country code and the number.\n    It is up to 15 digits long. The country code consists of 1 to 4 digits.\n    The actual number may cover at most 15 digits, including the country code. We keep the phone numbers including a + to indicate the international call prefix.\n    A valid number always begins with a +. The shortest valid number that we want to keep is +11 (3 characters). The longest is +123123456789123 (16 characters).\n    Additional validation (after preparation): if a number starts with +32 (a Belgian number), it must be followed by 8 or 9 digits, and the first digit after +32 may not be 0.\n    '''\n    if telefoon.get('nummer', '') != \"\":\n        msg = \"Invalid phone number\"\n        landcode = telefoon.get('landcode', '')\n        nummer = telefoon.get('nummer', '')\n        if landcode[0] != '+' or len(landcode[1:]) > 4 or not is_number(landcode[1:]):\n            raise colander.Invalid(node,\n                                   msg + ': Een geldige landcode begint met een + gevolgd door maximaal 4 cijfers')\n        if landcode[0] != '+' or len(landcode[1:]) + len(nummer) > 15 or not is_number(nummer):\n            raise colander.Invalid(node,\n                                   msg + ': Een geldige nummer begint met een + gevolgd door maximaal 15 cijfers')\n        # validatie van Belgische telefoonnummers\n        if landcode == '+32':\n            if len(nummer) not in [8, 9]:\n                raise colander.Invalid(node, msg + ': Na +32 moeten er 8 of 9 cijfers volgen')"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncalculates the limiting distribution of the Cramer - von Mises statistic.", "response": "def cvm_unif_inf(statistic):\n    \"\"\"\n    Calculates the limiting
distribution of the Cramer-von Mises statistic.\n\n After the second line of equation 1.3 from the Csorgo and Faraway paper.\n \"\"\"\n args = inf_args / statistic\n return (inf_cs * exp(-args) * kv(.25, args)).sum() / statistic ** .5"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef cvm_unif_fix1(statistic):\n args = fix1_args / statistic\n kvs = kv((.25, .75, 1.25), args[:, :, newaxis])\n gs, hs = exp(-args) * tensordot(((1, 1, 0), (2, 3, -1)), kvs, axes=(1, 2))\n a = dot((7, 16, 7), fix1_csa * gs).sum() / statistic ** 1.5\n b = dot((1, 0, 24), fix1_csb * hs).sum() / statistic ** 2.5\n return cvm_unif_inf(statistic) / 12 - a - b", "response": "Compute the first term of the small sample count Gotze expansion."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef merge(*ds):\n if not ds:\n raise ValueError(\"Must provide at least one dict to merge().\")\n out = {}\n for d in ds:\n out.update(d)\n return out", "response": "Merge together a sequence of dictionaries."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nvalidate a sqlalchemy object or identifier.", "response": "def sqlalchemy_validator(node, value, model, session=None, msg='%s is ongeldig'):\n \"\"\"\n Validate a sqlalchemy object or identifier.\n Function checks if value is an instance of sqlalchemy model\n or an id of one.\n :param node: A colander SchemaNode.\n :param value: The value to be validated. Is the key of the\n SQLAlchemy model.\n :param model: A SQLAlchemy model.\n :param session: A SQLAlchemy session. 
Is required if value is not an\n instance of a SQLAlchemy model.\n :param msg: A msg to attach to a `colander.Invalid` exception.\n :raises colander.Invalid: If not a valid sqlalchemy model.\n \"\"\"\n\n m = session.query(model).get(value)\n\n if not m:\n raise colander.Invalid(\n node,\n msg % value\n )\n\n return m"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ncalculating the Kolmogorov - Smirnov statistic for sorted values from U 0 1.", "response": "def ks_stat(data):\n \"\"\"\n Calculates the Kolmogorov-Smirnov statistic for sorted values from U(0, 1).\n \"\"\"\n samples = len(data)\n uniform = arange(0, samples + 1) / samples\n d_plus = (uniform[1:] - data).max()\n d_minus = (data - uniform[:-1]).max()\n return max(d_plus, d_minus)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef cvm_stat(data):\n samples2 = 2 * len(data)\n minuends = arange(1, samples2, 2) / samples2\n return 1 / (6 * samples2) + ((minuends - data) ** 2).sum()", "response": "Calculate the Cramer - von Mises statistic for sorted values from U 0 1."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ncalculating the Anderson - Darling statistic for sorted values from U 0 1.", "response": "def ad_stat(data):\n \"\"\"\n Calculates the Anderson-Darling statistic for sorted values from U(0, 1).\n\n The statistic is not defined if any of the values is exactly 0 or 1. 
You\n will get infinity as a result and a divide-by-zero warning for such values.\n The warning can be silenced or raised using numpy.errstate(divide=...).\n \"\"\"\n samples = len(data)\n factors = arange(1, 2 * samples, 2)\n return -samples - (factors * log(data * (1 - data[::-1]))).sum() / samples"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef ks_unif_durbin_matrix(samples, statistic):\n # Construct the Durbin matrix.\n h, k = modf(samples * statistic)\n k = int(k)\n h = 1 - h\n m = 2 * k + 1\n A = tri(m, k=1)\n hs = h ** arange(1, m + 1)\n A[:, 0] -= hs\n A[-1] -= hs[::-1]\n if h > .5:\n A[-1, 0] += (2 * h - 1) ** m\n A /= fromfunction(lambda i, j: gamma(fmax(1, i - j + 2)), (m, m))\n # Calculate A ** n, expressed as P * 2 ** eP to avoid overflows.\n P = identity(m)\n s = samples\n eA, eP = 0, 0\n while s != 1:\n s, b = divmod(s, 2)\n if b == 1:\n P = dot(P, A)\n eP += eA\n if P[k, k] > factor:\n P /= factor\n eP += shift\n A = dot(A, A)\n eA *= 2\n if A[k, k] > factor:\n A /= factor\n eA += shift\n P = dot(P, A)\n eP += eA\n # Calculate n! 
/ n ** n * P[k, k].\n x = P[k, k]\n for i in arange(1, samples + 1):\n x *= i / samples\n if x < factorr:\n x *= factor\n eP -= shift\n return x * 2 ** eP", "response": "This function calculates the probability that the statistic is less than the given value."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef ks_unif_durbin_recurrence_rational(samples, statistic):\n t = statistic * samples\n # Python 3: int()s can be skipped.\n ft1 = int(floor(t)) + 1\n fmt1 = int(floor(-t)) + 1\n fdt1 = int(floor(2 * t)) + 1\n qs = [Fraction(i ** i, factorial(i)) for i in range(ft1)]\n qs.extend(Fraction(i ** i, factorial(i)) - 2 * t *\n sum((t + j) ** (j - 1) / factorial(j) *\n (i - t - j) ** (i - j) / factorial(i - j)\n for j in range(i + fmt1))\n for i in range(ft1, fdt1))\n qs.extend(-sum((-1) ** j * (2 * t - j) ** j / factorial(j) * qs[i - j]\n for j in range(1, fdt1))\n for i in range(fdt1, samples + 1))\n return qs[samples] * factorial(samples) / samples ** samples", "response": "Calculates the probability that the statistic is less than the given value."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\napproximate the statistic distribution by a transformed Li-Chien formula. This ought to be a bit more accurate than using the Kolmogorov limit, but should only be used with large squared sample count times statistic. 
See: doi:10.18637/jss.v039.i11 and http://www.jstor.org/stable/2985019.", "response": "def ks_unif_pelz_good(samples, statistic):\n \"\"\"\n Approximates the statistic distribution by a transformed Li-Chien formula.\n\n This ought to be a bit more accurate than using the Kolmogorov limit, but\n should only be used with large squared sample count times statistic.\n See: doi:10.18637/jss.v039.i11 and http://www.jstor.org/stable/2985019.\n \"\"\"\n x = 1 / statistic\n r2 = 1 / samples\n rx = sqrt(r2) * x\n r2x = r2 * x\n r2x2 = r2x * x\n r4x = r2x * r2\n r4x2 = r2x2 * r2\n r4x3 = r2x2 * r2x\n r5x3 = r4x2 * rx\n r5x4 = r4x3 * rx\n r6x3 = r4x2 * r2x\n r7x5 = r5x4 * r2x\n r9x6 = r7x5 * r2x\n r11x8 = r9x6 * r2x2\n a1 = rx * (-r6x3 / 108 + r4x2 / 18 - r4x / 36 - r2x / 3 + r2 / 6 + 2)\n a2 = pi2 / 3 * r5x3 * (r4x3 / 8 - r2x2 * 5 / 12 - r2x * 4 / 45 + x + 1 / 6)\n a3 = pi4 / 9 * r7x5 * (-r4x3 / 6 + r2x2 / 4 + r2x * 53 / 90 - 1 / 2)\n a4 = pi6 / 108 * r11x8 * (r2x2 / 6 - 1)\n a5 = pi2 / 18 * r5x3 * (r2x / 2 - 1)\n a6 = -pi4 * r9x6 / 108\n w = -pi2 / 2 * r2x2\n return hpi1d2 * ((a1 + (a2 + (a3 + a4 * hs2) * hs2) * hs2) * exp(w * hs2) +\n (a5 + a6 * is2) * is2 * exp(w * is2)).sum()"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef activate(self, *, filter_func=None):\n '''\n Activate the type safety checker. 
After the call all functions\n that need to be checked will be.\n '''\n\n if self.active:\n raise RuntimeError(\"Type safety check already active\")\n\n self.__module_finder = ModuleFinder(Validator.decorate)\n if filter_func is not None:\n self.__module_finder.set_filter(filter_func)\n self.__module_finder.install()", "response": "Activate the type safety checker."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nusing this decorator to audit an operation", "response": "def audit(**kwargs):\n \"\"\"\n use this decorator to audit an operation\n \"\"\"\n\n def wrap(fn):\n\n @functools.wraps(fn)\n def advice(parent_object, *args, **kw):\n request = parent_object.request\n wijziging = request.audit_manager.create_revision()\n\n result = fn(parent_object, *args, **kw)\n\n if hasattr(request, 'user') and request.user is not None and 'actor' in request.user:\n actor = request.user['actor']\n attributes = request.user['attributes']\n wijziging.updated_by = actor.get('uri', None)\n if actor.get('uri') == actor.get('instantie_actor_uri'):\n wijziging.updated_by_omschrijving = (\n attributes.get('displayname')\n or attributes.get('mail')\n or actor.get('omschrijving'))\n else:\n wijziging.updated_by_omschrijving = actor.get(\n 'omschrijving')\n else:\n wijziging.updated_by = 'publiek'\n wijziging.updated_by_omschrijving = 'publiek'\n\n r_id = request.matchdict.get('id')\n wijziging.resource_object_id = r_id\n if result is not None:\n try:\n renderer_name = request.registry.settings.get(\n 'audit.pyramid.json.renderer',\n 'jsonrenderer')\n json_string = renderers.render(renderer_name, result,\n request=request)\n result_object_json = json.loads(json_string)\n wijziging.resource_object_json = result_object_json\n wijziging.resource_object_id = _get_id_from_result(r_id, result_object_json, kwargs)\n except Exception as e:\n log.error(e)\n\n wijziging.versie = _get_versie_hash(wijziging)\n wijziging.actie = kwargs.get('actie') if kwargs.get('actie') else 
_action_from_request(request)\n\n request.audit_manager.save(wijziging)\n\n return result\n\n return advice\n\n return wrap"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nuses this decorator to audit an operation with a request as input variable", "response": "def audit_with_request(**kwargs):\n \"\"\"\n use this decorator to audit an operation with a request as input variable\n \"\"\"\n def wrap(fn):\n @audit(**kwargs)\n def operation(parent_object, *args, **kw):\n return fn(parent_object.request, *args, **kw)\n\n @functools.wraps(fn)\n def advice_with_request(the_request, *args, **kw):\n class ParentObject:\n request = the_request\n\n return operation(ParentObject(), *args, **kw)\n\n return advice_with_request\n\n return wrap"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns the instance of the class in which this instance is stored.", "response": "def example_value(self):\n \"\"\"\n If we're an instance of a Serializable, fall back to its\n `example_instance()` method.\n \"\"\"\n from .serializable import Serializable\n inst = self._static_example_value()\n if inst is tr.Undefined and issubclass(self.klass, Serializable):\n return self.klass.example_instance()\n return inst"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef copy_and_replace(file_in, file_out, mapping, **kwargs):\n '''\n Copy a file and replace some placeholders with new values.\n '''\n separator = '@@'\n if 'separator' in kwargs:\n separator = kwargs['separator']\n file_in = open(file_in, 'r')\n file_out = open(file_out, 'w')\n s = file_in.read()\n for find, replace in mapping:\n find = separator + find + separator\n print(u'Replacing {0} with {1}'.format(find, replace))\n s = s.replace(find, replace)\n file_out.write(s)", "response": "Copy a file and replace some placeholders with new values."} {"SOURCE": "codesearchnet", "instruction": 
"Can you create a Python 3 function that\nconverts plain list to MutableList.", "response": "def coerce(cls, key, value):\n \"\"\"\n Convert plain list to MutableList.\n \"\"\"\n if not isinstance(value, MutableList):\n if isinstance(value, list):\n return MutableList(value)\n # this call will raise ValueError\n return Mutable.coerce(key, value)\n else:\n return value"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nregister a new Worker under the given descriptive name.", "response": "def registerWorker(self, name, worker):\n\t\t\"\"\"\n\t\tRegister a new Worker, under the given descriptive name.\n\n\t\tTrying to register multiple workers under the same name will raise an Exception.\n\n\t\tParameters\n\t\t----------\n\t\tname: string\n\t\t\tName to register the given worker under.\n\t\tworker: multiprocessing.Process, or a subclass\n\t\t\tProcess object to register.\n\t\t\"\"\"\n\t\tif not isinstance(worker, multiprocessing.Process):\n\t\t\tself.logger.error(\"Process {0} is not actually a Process!\".format(name))\n\t\t\traise Exception(\"Process {0} is not actually a Process!\".format(name))\n\n\t\tif name in self.worker_list:\n\t\t\tself.logger.error(\"Process {0} already registered!\".format(name))\n\t\t\traise Exception(\"Process {0} already registered!\".format(name))\n\n\t\tself.worker_list[name] = worker\n\t\tself.logger.debug(\"Registered worker {0}\".format(name))\n\n\t\treturn worker"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nretrieve the Worker registered under the given name.", "response": "def getWorker(self, name):\n\t\t\"\"\"\n\t\tRetrieve the Worker registered under the given name.\n\n\t\tIf the given name does not exists in the Worker list, an Exception is raised.\n\n\t\tParameters\n\t\t----------\n\t\tname: string\n\t\t\tName of the Worker to retrieve\n\t\t\"\"\"\n\t\tif not name in self.worker_list:\n\t\t\tself.logger.error(\"Worker {0} is not 
registered!\".format(name))\n\t\t\traise Exception(\"Worker {0} is not registered!\".format(name))\n\n\t\treturn self.worker_list[name]"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef unregisterWorker(self, name):\n\t\tif not name in self.worker_list:\n\t\t\tself.logger.error(\"Worker {0} is not registered!\".format(name))\n\t\t\traise Exception(\"Worker {0} is not registered!\".format(name))\n\n\t\tdel self.worker_list[name]\n\t\tself.logger.debug(\"Unregistered worker {0}\".format(name))", "response": "Unregister the Worker with the given name."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef startAll(self):\n\t\tself.logger.info(\"Starting all workers...\")\n\n\t\tfor worker in self.getWorkers():\n\t\t\tprocess = self.getWorker(worker)\n\t\t\tself.logger.debug(\"Starting {0}\".format(process.name))\n\t\t\tprocess.start()\n\n\t\tself.logger.info(\"Started all workers\")", "response": "Start all registered Workers."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef stopAll(self, timeout=10, stop=False):\n\t\tself.logger.info(\"Stopping all workers...\")\n\n\t\tfor worker in self.getWorkers():\n\t\t\tprocess = self.getWorker(worker)\n\t\t\tself.logger.debug(\"Stopping {0}\".format(process.name))\n\t\t\tif process.is_alive():\n\t\t\t\tprocess.join(timeout)\n\t\t\t\tif process.is_alive():\n\t\t\t\t\tself.logger.warning(\"Failed to stop {0}, terminating\".format(process.name))\n\t\t\t\t\tprocess.terminate()\n\t\t\tself.unregisterWorker(worker)\n\n\t\tself.logger.info(\"Stopped all workers\")\n\n\t\tif stop:\n\t\t\tself.logger.fatal(\"Committing suicide\")\n\t\t\tos._exit(0)", "response": "Stop all registered Workers."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_cached_data(self, url):\n        \n        key =
self.get_key_from_url(url)\n \n with self.conn as conn:\n cursor = conn.cursor()\n cursor.execute(\"SELECT * FROM ensembl WHERE key=? AND genome_build=?\",\n (key, self.genome_build))\n row = cursor.fetchone()\n \n # if the data has been cached, check that it is not out of date, and\n # the data was generated from the same Ensembl API version\n if row is not None:\n api_version = row[\"api_version\"]\n data = zlib.decompress(row[\"data\"])\n if IS_PYTHON3:\n data = data.decode(\"utf-8\")\n \n date = datetime.strptime(row[\"cache_date\"], \"%Y-%m-%d\")\n diff = self.today - date\n \n if diff.days < 180 and self.api_version == api_version:\n return data\n \n return None", "response": "get the cached data for a given url"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef cache_url_data(self, url, data, attempt=0):\n \n if attempt > 5:\n raise ValueError('too many attempts at writing to the cache')\n \n key = self.get_key_from_url(url)\n \n # don't cache the ensembl version check\n if key == \"info.rest\":\n return\n \n current_date = datetime.strftime(self.today, \"%Y-%m-%d\")\n \n # python3 zlib requires encoded strings\n if IS_PYTHON3:\n data = data.encode(\"utf-8\")\n \n compressed = zlib.compress(data)\n \n # python2 sqlite3 can't write \"8-bit bytestrings\", but it can handle\n # buffer versions of the bytestrings\n if IS_PYTHON2:\n compressed = buffer(compressed)\n \n t = (key, self.genome_build, current_date, self.api_version, compressed)\n \n cmd = \"INSERT OR REPLACE INTO ensembl \" \\\n \"(key, genome_build, cache_date, api_version, data) VALUES (?,?,?,?,?)\"\n try:\n with self.conn as cursor:\n cursor.execute(cmd, t)\n except sqlite3.OperationalError:\n # if we hit a sqlite locking error, wait a random time so conflicting\n # instances are less likely to reconflict, then retry\n time.sleep(random.uniform(1, 10))\n self.cache_url_data(url, data.decode('utf-8'), attempt + 1)", "response": "Cache 
the data retrieved from the given URL and store it in the cache."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_key_from_url(self, url):\n \n key = url.split(\"/\")[3:]\n \n # fix the final bit of the url, none of which uniquely define the data\n suffix = key.pop()\n suffix = suffix.split(\";\")[0]\n \n # convert \"LONG_ID?feature=transcript\" to ['LONG_ID', \"transcript\"] etc\n id = suffix.split(\"?\", 1)\n suffix = id.pop()\n if \"=\" in suffix:\n _, suffix = suffix.split(\"=\")\n \n key += id + [suffix]\n \n # replace characters not tolerated in keys and remove blank entries\n key = ( x.replace(':', '_') for x in key )\n key = ( x for x in key if x != '' )\n \n return \".\".join(key)", "response": "This function parses the url into a list of folder locations\n and returns it as a string"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ncasting a datetime filter value to a datetime object.", "response": "def cast_datetime_filter(value):\n \"\"\"Cast a datetime filter value.\n\n :param value: string representation of a value that needs to be casted to\n a `datetime` object.\n\n \"\"\"\n if isinstance(value, str):\n dtime = parse_datetime(value)\n\n elif isinstance(value, datetime):\n dtime = value\n else:\n raise ValueError('Received value of type {0}'.format(type(value)))\n\n return dtime.isoformat()"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef filter_args_to_dict(filter_dict, accepted_filter_keys=[]):\n out_dict = {}\n for k, v in filter_dict.items():\n # make sure that the filter k is acceptable\n # and that there is a value associated with the key\n if k not in accepted_filter_keys or v is None:\n logger.debug(\n 'Filter was not in accepted_filter_keys or value is None.')\n # skip it\n continue\n filter_type = filter_type_map.get(k, None)\n\n if filter_type is None:\n logger.debug('Filter key not foud in 
map.')\n # hmm, this was an acceptable filter type but not in the map...\n # Going to skip it.\n continue\n\n # map of casting funcitons to filter types\n filter_cast_map = {\n 'int': cast_integer_filter,\n 'datetime': cast_datetime_filter\n }\n cast_function = filter_cast_map.get(filter_type, None)\n\n # if we get a cast function, call it with v. If not, just use v.\n if cast_function:\n out_value = cast_function(v)\n else:\n out_value = v\n out_dict[k] = out_value\n\n return out_dict", "response": "Cast and validate filter args."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef bencode(obj):\n if isinstance(obj, int):\n return \"i\" + str(obj) + \"e\"\n\n if isinstance(obj, str):\n if not obj:\n return None\n return str(len(obj)) + \":\" + obj\n\n if isinstance(obj, list):\n res = \"l\"\n for elem in obj:\n elem = bencode(elem)\n if elem:\n res += elem\n return res + \"e\"\n\n if isinstance(obj, dict):\n res = \"d\"\n for key in sorted(obj.keys()):\n if key in obj:\n value = bencode(obj[key])\n key = bencode(key)\n if key and value:\n res += key + value\n\n return res + \"e\"\n\n if isinstance(obj, unicode):\n return bencode(obj.encode('utf-8'))\n\n if isinstance(obj, collections.OrderedDict):\n return bencode(dict(obj))\n raise Exception(\"Unknown object: %s (%s)\" % (repr(obj), repr(type(obj))))", "response": "Bencodes obj and returns it as a string"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ndecode a bencoded bytearray and returns it as a python object", "response": "def bdecode(text):\n \"\"\"Decodes a bencoded bytearray and returns it as a python object\"\"\"\n text = text.decode('utf-8')\n\n def bdecode_next(start):\n \"\"\"bdecode helper function\"\"\"\n if text[start] == 'i':\n end = text.find('e', start)\n return int(text[start+1:end], 10), end + 1\n\n if text[start] == 'l':\n res = []\n start += 1\n while text[start] != 'e':\n elem, start = 
bdecode_next(start)\n res.append(elem)\n return res, start + 1\n\n if text[start] == 'd':\n res = {}\n start += 1\n while text[start] != 'e':\n key, start = bdecode_next(start)\n value, start = bdecode_next(start)\n res[key] = value\n return res, start + 1\n\n lenend = text.find(':', start)\n length = int(text[start:lenend], 10)\n end = lenend + length + 1\n return text[lenend+1:end], end\n return bdecode_next(0)[0]"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nchanges the standard password from neo4j to testing to be able to run the test suite.", "response": "def change_password():\n \"\"\"\n Changes the standard password from neo4j to testing to be able to run the test suite.\n \"\"\"\n basic_auth = '%s:%s' % (DEFAULT_USERNAME, DEFAULT_PASSWORD)\n try: # Python 2\n auth = base64.encodestring(basic_auth)\n except TypeError: # Python 3\n auth = base64.encodestring(bytes(basic_auth, 'utf-8')).decode()\n\n headers = {\n \"Content-Type\": \"application/json\",\n \"Accept\": \"application/json\",\n \"Authorization\": \"Basic %s\" % auth.strip()\n }\n \n response = None\n retry = 0\n while not response: # Retry if the server is not ready yet\n sleep(1)\n con = http.HTTPConnection('localhost:7474', timeout=10)\n try:\n con.request('GET', 'http://localhost:7474/user/neo4j', headers=headers)\n response = json.loads(con.getresponse().read().decode('utf-8'))\n except ValueError:\n con.close()\n retry += 1\n if retry > 10:\n print(\"Could not change password for user neo4j\")\n break\n if response and response.get('password_change_required', None):\n payload = json.dumps({'password': 'testing'})\n con.request('POST', 'http://localhost:7474/user/neo4j/password', payload, headers)\n print(\"Password changed for user neo4j\")\n con.close()"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncreates an ArgumentParser which listens for common options and adds them to the config file.", "response": 
"def get_config_argparse(suppress=None):\n\t\"\"\"\n\tCreate an ArgumentParser which listens for the following common options:\n\t- --config\n\t- --help\n\t- --quiet\n\t- --verbose\n\t- --version\n\n\tArguments\n\t---------\n\tsuppress_help: list of strings\n\t\tDefines what options to suppress from being added to the ArgumentParser. Provide\n\t\tthe name of the options to suppress, without \"--\". Of note in particular is the help option.\n\n\t\tIf help is suppressed, help information is suppressed in the returned ArgumentParser\n\t\tIf help is not suppressed, help information is automatically output by ArgumentParser \n\t\t\twhen -h/--help is passed, and the program is exited upon parsing the arguments.\n\t\tIn both cases, -h/--help is not actually suppressed, only the behaviour of the ArgumentParser\n\t\twhen encountering the option.\n\t\"\"\"\n\n\tif suppress is None:\n\t\tsuppress = []\n\n\tconfig_parser = ArgumentParser(description=\"Looking for config\", add_help=(not \"help\" in suppress))\n\tif not \"config\" in suppress:\n\t\tconfig_parser.add_argument(\"--config\",\tmetavar=\"CFG\",\t\t type=str,\t\thelp=\"Config file to load\")\n\tif \"help\" in suppress:\n\t\tconfig_parser.add_argument(\"--help\",\taction=\"store_true\", default=False, help=\"Display usage information and exit\")\n\tif not \"quiet\" in suppress:\n\t\tconfig_parser.add_argument(\"--quiet\",\taction=\"store_true\", default=False, help=\"Don't print messages to stdout\")\n\tif not \"verbose\" in suppress:\n\t\tconfig_parser.add_argument(\"--verbose\", action=\"store_true\", default=False, help=\"Output debug messages\")\n\tif not \"version\" in suppress:\n\t\tconfig_parser.add_argument(\"--version\", action=\"store_true\", default=False, help=\"Display version information and exit\")\n\n\treturn config_parser"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_config_arguments():\n\n\tlogger = 
logging.getLogger(__name__)\n\tlogger.debug(\"Parsing configuration arguments\")\n\n\treturn get_config_argparse(suppress=[\"help\"]).parse_known_args()", "response": "This method parses the command line arguments and returns a tuple containing parsed variables and unknown variables."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_default_config_file(argparser, suppress=None, default_override=None):\n\tif not suppress:\n\t\tsuppress = []\n\tif not default_override:\n\t\tdefault_override = {}\n\n\tlines = []\n\tseen_arguments = []\n\tfor arg in argparser._actions:\n\t\tif arg.dest in suppress:\n\t\t\tcontinue\n\t\tif arg.dest in seen_arguments:\n\t\t\tcontinue\n\t\tdefault = arg.default\n\t\tif arg.dest in default_override.keys():\n\t\t\tdefault = default_override[arg.dest]\n\t\tlines.append(\"# {0}\\n{1}={2}\\n\".format(arg.help, arg.dest, default))\n\t\tseen_arguments.append(arg.dest)\n\n\treturn \"\".join(lines)", "response": "This method will take an ArgumentParser and return a ConfigObj compatible configuration file."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _perform_binds(self, binds):\n\t\tfor bind in binds:\n\t\t\tself.logger.debug(\"Binding queue {0} to exchange {1} with key {2}\".format(bind['queue'], bind['exchange'], bind['routing_key']))\n\t\t\tself.channel.queue_bind(**bind)", "response": "Perform the binds on the queue"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _perform_unbinds(self, binds):\n\t\tfor bind in binds:\n\t\t\tself.logger.debug(\"Unbinding queue {0} from exchange {1} with key {2}\".format(bind['queue'], bind['exchange'], bind['routing_key']))\n\t\t\tself.channel.queue_unbind(**bind)", "response": "Unbinds queues from exchanges."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write 
the documentation\ndef close(self):\n\t\tself.cancel()\n\t\tself.logger.debug(\"Closing AMQP connection\")\n\t\ttry:\n\t\t\tself.connection.close()\n\t\texcept Exception as eee:\n\t\t\tself.logger.warning(\"Received an error while trying to close AMQP connection: \" + str(eee))", "response": "Closes the internal connection."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef cancel(self, consumer_tag=None):\n\t\tif not consumer_tag:\n\t\t\tif not hasattr(self, \"consumer_tag\"):\n\t\t\t\treturn\n\t\t\tconsumer_tag = self.consumer_tag\n\t\tself.channel.basic_cancel(consumer_tag)", "response": "Cancels the current consuming action by using the stored consumer_tag."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef consume(self, consumer_callback, exclusive=False, recover=False):\n\t\tif recover:\n\t\t\tself.logger.info(\"Asking server to requeue all unacknowledged messages\")\n\t\t\tself.channel.basic_recover(requeue=True)\n\t\tself.consumer_tag = self.channel.basic_consume(consumer_callback=consumer_callback, queue=self.queue_name, exclusive=exclusive)\n\t\treturn self.consumer_tag", "response": "This method is used to start consuming messages from an AMQP queue."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nretrieve the key from a parsed_json dictionary", "response": "def json_get(parsed_json, key):\n    \"\"\"\n    Retrieves the key from a parsed_json dictionary, or raises an exception if the\n    key is not present\n    \"\"\"\n    if key not in parsed_json:\n        raise ValueError(\"JSON does not contain a {} field\".format(key))\n    return parsed_json[key]"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nread a markdown file and returns the contents formatted as rst", "response": "def readme(fname):\n    \"\"\"Reads a markdown file and returns the contents formatted as rst\"\"\"\n    md = 
open(os.path.join(os.path.dirname(__file__), fname)).read()\n output = md\n try:\n import pypandoc\n output = pypandoc.convert(md, 'rst', format='md')\n except ImportError:\n pass\n return output"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_config(config_base, custom_file=None, configspec=None):\n\n\tlogger = logging.getLogger(__name__)\n\n\tlogger.debug(\"Expanding variables\")\n\thome = os.path.expanduser(\"~\")\n\tloc = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))\n\n\tlogger.debug(\"Create empty config\")\n\tconfig = ConfigObj()\n\n\t# Merge in config file in program dir\n\tif os.path.isfile(os.path.join(loc, \"%s.config\" % config_base)):\n\t\tlogger.debug(\"Loading config from workingdir\")\n\t\tcfg = ConfigObj(os.path.join(loc, \"%s.config\" % config_base), configspec=configspec)\n\t\tif configspec:\n\t\t\tcfg.validate(Validator())\n\t\tconfig.merge(cfg)\n\n\t# Merge in system-wide config (Unix specific)\n\tif os.path.isfile(\"/etc/%s.config\" % config_base):\n\t\tlogger.debug(\"Loading config from /etc\")\n\t\tcfg = ConfigObj(\"/etc/%s.config\" % config_base, configspec=configspec)\n\t\tif configspec:\n\t\t\tcfg.validate(Validator())\n\t\tconfig.merge(cfg)\n\n\t# Merge in user specific config\n\tif os.path.isfile(os.path.join(home, \".%s.config\" % config_base)):\n\t\tlogger.debug(\"Loading config from homedir\")\n\t\tcfg = ConfigObj(os.path.join(home, \".%s.config\" % config_base), configspec=configspec)\n\t\tif configspec:\n\t\t\tcfg.validate(Validator())\n\t\tconfig.merge(cfg)\n\n\t# Config file provided on command line has preference\n\tif custom_file:\n\t\tlogger.debug(\"Loading custom config file\")\n\t\tcfg = ConfigObj(custom_file, configspec=configspec)\n\t\tif configspec:\n\t\t\tcfg.validate(Validator())\n\t\tconfig.merge(cfg)\n\n\treturn config", "response": "Load a configuration file from multiple locations and merge the results into one."} 
{"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nload an entire directory of configuration files, merging them into one. This function will load multiple configuration files matching the given pattern, in the given path, and merge them. The found files are first sorted alphabetically, and then loaded and merged. A good practice is to use ConfigObj sections, for easy loading of information like per-host configuration. Parameters ---------- path: string Absolute path to a directory of ConfigObj files pattern: string Globbing pattern used to find files. Defaults to *.config. configspec: ConfigObj Used to sanitize the values in the resulting ConfigObj. Validation errors are currently not exposed to the caller. allow_errors: boolean If False, errors raised by ConfigObj are not caught. If True, errors raised by ConfigObj are caught, and an error is logged using logger.", "response": "def get_config_dir(path, pattern=\"*.config\", configspec=None, allow_errors=False):\n\t\"\"\"\n\tLoad an entire directory of configuration files, merging them into one.\n\n\tThis function will load multiple configuration files matching the given pattern,\n\tin the given path, and merge them. The found files are first sorted alphabetically,\n\tand then loaded and merged. A good practice is to use ConfigObj sections, for easy\n\tloading of information like per-host configuration.\n\n\tParameters\n\t----------\n\tpath: string\n\t\tAbsolute path to a directory of ConfigObj files\n\tpattern: string\n\t\tGlobbing pattern used to find files. Defaults to *.config.\n\tconfigspec: ConfigObj\n\t\tUsed to sanitize the values in the resulting ConfigObj. 
Validation errors are currently\n\t\tnot exposed to the caller.\n\tallow_errors: boolean\n\t\tIf False, errors raised by ConfigObj are not caught.\n\t\tIf True, errors raised by ConfigObj are caught, and an error is logged using logger.\n\t\"\"\"\n\n\tlogger = logging.getLogger(__name__)\n\n\tlogger.debug(\"Loading all files matching {0} in {1}\".format(pattern, path))\n\tfiles = Globber(path, include=[pattern], recursive=False).glob()\n\tfiles = sorted(files)\n\n\tconfig = ConfigObj()\n\n\tfor filename in files:\n\t\tlogger.debug(\"- Loading config for {0}\".format(filename))\n\t\ttry:\n\t\t\tconf = ConfigObj(filename, configspec=configspec)\n\t\texcept ConfigObjError as coe:\n\t\t\tlogger.error(\"An error occurred while parsing {0}: {1}\".format(filename, str(coe)))\n\t\t\tcontinue\n\t\tif configspec:\n\t\t\tconf.validate(Validator())\n\t\tconfig.merge(conf)\n\n\treturn config"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _try_then(self):\n    if self._cached is not None and self._callback is not None:\n        self._callback(*self._cached[0], **self._cached[1])", "response": "Check to see if self has been resolved and, if so, invoke then."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef open(path_or_url):\n    is_url = re.compile(\n        r'^(?:http|ftp)s?://' # http:// or https://\n        r'(?:(?:[A-Z0-9](?:[A-Z0-9-]{0,61}[A-Z0-9])?\\.)+(?:[A-Z]{2,6}\\.?|[A-Z0-9-]{2,}\\.?)|' # domain...\n        r'localhost|' # localhost...\n        r'\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3})' # ...or ip\n        r'(?::\\d+)?' 
# optional port\n r'(?:/?|[/?]\\S+)$', re.IGNORECASE)\n\n if re.match(is_url, path_or_url):\n return urllib.request.urlopen(path_or_url)\n else:\n return builtins.open(path_or_url)", "response": "Wrapper for opening an IO object to a local file or URL."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _is_primitive(thing):\n primitive = (int, str, bool, float)\n return isinstance(thing, primitive)", "response": "Determine if the value is a primitive"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _guess_type(val):\n if isinstance(val, bool):\n return \"choice\"\n elif isinstance(val, int):\n return \"number\"\n elif isinstance(val, float):\n return \"number\"\n elif isinstance(val, str):\n return \"text\"\n elif hasattr(val, 'read'):\n return \"file\"\n else:\n return \"text\"", "response": "Guess the type of the parameter based off the default value if unknown use text"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nread params values and annotations from the signature and return a list of dictionaries.", "response": "def _params(sig):\n \"\"\"Read params, values and annotations from the signature\"\"\"\n params = []\n for p in sig.parameters:\n param = sig.parameters[p]\n optional = param.default != inspect.Signature.empty\n default = UIBuilder._safe_default(param.default) if param.default != inspect.Signature.empty else ''\n annotation = param.annotation if param.annotation != inspect.Signature.empty else ''\n type = UIBuilder._guess_type(default)\n\n # Create the parameter attribute dict\n p_attr = {\n \"name\": param.name,\n \"label\": param.name,\n \"optional\": optional,\n \"default\": default,\n \"description\": annotation,\n \"hide\": False,\n \"type\": type,\n \"kinds\": None,\n \"choices\": [],\n \"id\": None,\n \"events\": None\n }\n\n # Special choices handling for boolean parameters\n if 
isinstance(default, bool):\n p_attr['choices'] = {\n 'True': 'true',\n 'False': 'false'\n }\n\n # Append it to the list\n params.append(p_attr)\n return params"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns the namespace path to the function", "response": "def _import(func):\n \"\"\"Return the namespace path to the function\"\"\"\n func_name = func.__name__\n\n # from foo.bar import func // func()\n # WARNING: May be broken in IPython, in which case the widget will use a fallback\n if func_name in globals():\n return func_name\n\n # import foo.bar // foo.bar.func()\n module_name = func.__module__\n submodules = module_name.split('.')\n\n if submodules[0] in globals():\n return module_name + '.' + func_name\n\n # from foo import bar // bar.func()\n for i in range(len(submodules)):\n m = submodules[i]\n if m in globals():\n return '.'.join(submodules[i:]) + '.' + func_name\n\n # import foo.bar as fb // fb.func()\n module_ref = sys.modules[func.__module__]\n all_globals = globals()\n\n for n in all_globals:\n if all_globals[n] == module_ref:\n return n + '.' + func_name\n\n # Not Found, return function name\n return func_name"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns a Python logging Logger object for the given name and level.", "response": "def get_logger(name=None, level=logging.NOTSET, handlers=None):\n\t\"\"\"\n\tCreate a Python logging Logger for the given name. A special case is\n\twhen the name is None, as this will represent the root Logger object.\n\n\tWhen handlers are specified, the currently configured handlers for this name\n\tare removed, and the specified handlers are set.\n\n\tParameters\n\t----------\n\tname: string\n\t\tName of the Logger to create. Specify None to designate the root Logger.\n\tlevel: string\n\t\tOne of: CRITICAL, ERROR, WARNING, INFO or DEBUG. 
Alternatively, use the `logging`\n\t\tconstants: logging.CRITICAL, logging.ERROR, etc.\n\thandlers: dict\n\t\tKeys specifies the handler, value may optionally contain configuration,\n\t\tor be specified as None.\n\n\t\tSupported handlers are:\n\t\t- console: logging to stdout. Optionally specify a custom Handler using 'handler'.\n\t\t- file: logging to a specific file. Specify the file as 'logfile'.\n\t\t- syslog: logging to syslog.\n\n\t\tAll handlers support custom output formats by specifying a 'format'.\n\t\"\"\"\n\tlogger = logging.getLogger(name)\n\n\tif name is None:\n\t\tname = \"root\"\n\tif handlers is None:\n\t\thandlers = []\n\n\tlogger.setLevel(level)\n\n\tif len(handlers) != 0:\n\t\tlogger.handlers = []\n\n\tif \"console\" in handlers:\n\t\tif not isinstance(handlers['console'], collections.Iterable):\n\t\t\thandlers['console'] = {}\n\n\t\tif \"handler\" in handlers['console']:\n\t\t\tstrm = handlers['console']['handler']\n\t\telse:\n\t\t\tstrm = logging.StreamHandler()\n\n\t\tif \"format\" in handlers['console']:\n\t\t\tfmt = logging.Formatter(handlers['console']['format'])\n\t\telse:\n\t\t\tfmt = logging.Formatter('%(message)s')\n\n\t\tstrm.setLevel(level)\n\t\tstrm.setFormatter(fmt)\n\t\tlogger.addHandler(strm)\n\n\tif \"file\" in handlers:\n\t\tif not isinstance(handlers['file'], collections.Iterable):\n\t\t\traise TypeError(\"file handler config must be a dict\")\n\t\tif \"logfile\" not in handlers['file']:\n\t\t\traise ValueError(\"file handler config must contain logfile path name\")\n\n\t\tfil = logging.handlers.WatchedFileHandler(handlers['file']['logfile'])\n\n\t\tif \"format\" in handlers['file']:\n\t\t\tfmt = logging.Formatter(handlers['file']['format'])\n\t\telse:\n\t\t\tfmt = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')\n\n\t\tfil.setLevel(level)\n\t\tfil.setFormatter(fmt)\n\t\tlogger.addHandler(fil)\n\n\tif \"syslog\" in handlers:\n\t\tif not isinstance(handlers['syslog'], 
collections.Iterable):\n\t\t\thandlers['syslog'] = {}\n\n\t\tsysl = logging.handlers.SysLogHandler(address='/dev/log', facility=logging.handlers.SysLogHandler.LOG_SYSLOG)\n\n\t\tif \"format\" in handlers['syslog']:\n\t\t\tfmt = logging.Formatter(handlers['syslog']['format'])\n\t\telse:\n\t\t\tfmt = logging.Formatter('%(name)s[%(process)s] %(levelname)-8s: %(message)s')\n\n\t\tsysl.setLevel(level)\n\t\tsysl.setFormatter(fmt)\n\t\tlogger.addHandler(sysl)\n\n\treturn logger"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef load_de_novos(path, exclude_indels=True):\n \n genes = {}\n with open(path, \"r\") as handle:\n header = handle.readline().strip().split(\"\\t\")\n for line in handle:\n \n line = line.rstrip().split(\"\\t\")\n gene = line[0]\n position = int(line[2]) - 1\n consequence = line[3]\n var_type = line[4]\n \n # ignore indels (some splice_acceptor_variants (in the\n # functional_consequences) are indels\n if exclude_indels and \"indel\" in var_type.lower():\n continue\n \n # trim out variants that are missing data\n if gene == \"\" or gene == \".\" or position == \"NA\":\n continue\n \n if gene not in genes:\n genes[gene] = {\"missense\": [], \"nonsense\": []}\n \n if consequence in missense:\n genes[gene][\"missense\"].append(position)\n elif consequence in lof:\n genes[gene][\"nonsense\"].append(position)\n \n return genes", "response": "Load the de novo data into a dictionary indexed by HGNC ID."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nregisters a new Thread , under the given descriptive name. Trying to register multiple threads under the same name will raise an Exception. Parameters ---------- name: string Name to register the given thread under. 
thread: threading.Thread, or a subclass Thread object to register.", "response": "def registerThread(self, name, thread):\n\t\t\"\"\"\n\t\tRegister a new Thread, under the given descriptive name.\n\n\t\tTrying to register multiple threads under the same name will raise an Exception.\n\n\t\tParameters\n\t\t----------\n\t\tname: string\n\t\t\tName to register the given thread under.\n\t\tthread: threading.Thread, or a subclass\n\t\t\tThread object to register.\n\t\t\"\"\"\n\t\tif not isinstance(thread, threading.Thread):\n\t\t\tself.logger.error(\"Thread {0} is not actually a Thread!\".format(name))\n\t\t\traise Exception(\"Thread {0} is not actually a Thread!\".format(name))\n\n\t\tif name in self.thread_list:\n\t\t\tself.logger.error(\"Thread {0} already registered!\".format(name))\n\t\t\traise Exception(\"Thread {0} already registered!\".format(name))\n\n\t\tself.thread_list[name] = thread\n\t\tself.logger.debug(\"Registered thread {0}\".format(name))\n\n\t\treturn thread"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef getThread(self, name):\n\t\tif not name in self.thread_list:\n\t\t\tself.logger.error(\"Thread {0} is not registered!\".format(name))\n\t\t\traise Exception(\"Thread {0} is not registered!\".format(name))\n\n\t\treturn self.thread_list[name]", "response": "Retrieve the Thread registered under the given name."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef unregisterThread(self, name):\n\t\tif not name in self.thread_list:\n\t\t\tself.logger.error(\"Thread {0} is not registered!\".format(name))\n\t\t\traise Exception(\"Thread {0} is not registered!\".format(name))\n\n\t\tdel self.thread_list[name]\n\t\tself.logger.debug(\"Unregistered thread {0}\".format(name))", "response": "Unregister the Thread with the given name."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 
function\ndef startAll(self):\n\t\tself.logger.info(\"Starting all threads...\")\n\n\t\tfor thread in self.getThreads():\n\t\t\tthr = self.getThread(thread)\n\t\t\tself.logger.debug(\"Starting {0}\".format(thr.name))\n\t\t\tthr.start()\n\n\t\tself.logger.info(\"Started all threads\")", "response": "Start all registered Threads."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nstops all registered Threads.", "response": "def stopAll(self, stop=False):\n\t\t\"\"\"\n\t\tStop all registered Threads. This method assumes that the Thread is using\n\t\tan internal variable called stop to control its main loop. Stopping a Thread\n\t\tis achieved as follows:\n\n\t\t1. The Thread is retrieved.\n\t\t2. $thread.stop is set to True.\n\t\t3. The Thread is joined, and will wait until the Thread exits.\n\t\t4. The Thread is unregistered.\n\t\t5. If $stop = True, the main process is killed.\n\n\t\tEnsure that any registered Thread responds to having its stop property set to True,\n\t\telse calling stopAll() will result in a hung process.\n\t\t\"\"\"\n\t\tself.logger.info(\"Stopping all threads...\")\n\n\t\tfor thread in self.getThreads():\n\t\t\tthr = self.getThread(thread)\n\t\t\tself.logger.debug(\"Stopping {0}\".format(thr.name))\n\t\t\tthr.stop = True\n\t\t\tthr.join()\n\t\t\tself.unregisterThread(thread)\n\n\t\tself.logger.info(\"Stopped all threads\")\n\n\t\tif stop:\n\t\t\tself.logger.fatal(\"Committing suicide\")\n\t\t\tos._exit(0)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef usersettings(request):\n    if hasattr(request, 'usersettings'):\n        usersettings = request.usersettings\n    else:\n        from .shortcuts import get_current_usersettings\n        usersettings = get_current_usersettings()\n\n    return {\n        'usersettings': usersettings\n    }", "response": "Returns the current UserSettings based on the SITE_ID in the project's settings as context variables\n    "} 
{"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nchecking the ensembl api version matches a currently working version", "response": "def check_ensembl_api_version(self):\n \"\"\" check the ensembl api version matches a currently working version\n \n This function is included so when the api version changes, we notice the\n change, and we can manually check the responses for the new version.\n \"\"\"\n \n self.attempt = 0\n headers = {\"content-type\": \"application/json\"}\n ext = \"/info/rest\"\n r = self.ensembl_request(ext, headers)\n \n response = json.loads(r)\n self.cache.set_ensembl_api_version(response[\"release\"])"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef open_url(self, url, headers):\n \n data = self.cache.get_cached_data(url)\n if data is not None:\n return data, 200, headers\n \n self.rate_limit_ensembl_requests()\n req = request.Request(url, headers=headers)\n \n try:\n handler = request.urlopen(req)\n except HTTPError as error:\n # if we get a http error, we still process the status code, since a\n # later step deals with different status codes differently.\n handler = error\n except (URLError, ConnectionResetError, TimeoutError):\n # if we get a ConnectionResetError, assume something has gone wrong\n # with the server. 
Later code will wait before retrying.\n return '', 500, headers\n \n status_code = handler.getcode()\n response = handler.read()\n if IS_PYTHON3:\n response = response.decode(\"utf-8\")\n \n # parse the headers into a key, value dictionary\n headers = dict(zip(map(str.lower, handler.headers.keys()), handler.headers.values()))\n \n now = time.strftime(\"%Y-%m-%d %H:%M:%S\", time.gmtime())\n logging.warning(\"{}\\t{}\\t{}\".format(now, status_code, url))\n \n return response, status_code, headers", "response": "open url with python libraries and return the response and status code and headers"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nobtaining sequence via the ensembl REST API", "response": "def ensembl_request(self, ext, headers):\n \"\"\" obtain sequence via the ensembl REST API\n \"\"\"\n \n self.attempt += 1\n if self.attempt > 5:\n raise ValueError(\"too many attempts, figure out why its failing\")\n \n response, status, requested_headers = self.open_url(self.server + ext, headers=headers)\n \n # we might end up passing too many simultaneous requests, or too many\n # requests per hour, just wait until the period is finished before\n # retrying\n if status == 429:\n if \"retry-after\" in requested_headers:\n time.sleep(float(requested_headers[\"retry-after\"]))\n elif \"x-ratelimit-reset\" in requested_headers:\n time.sleep(int(requested_headers[\"x-ratelimit-reset\"]))\n \n return self.ensembl_request(ext, headers)\n # retry after 30 seconds if we get service unavailable error\n elif status in [500, 503, 504]:\n time.sleep(30)\n return self.ensembl_request(ext, headers)\n elif status != 200:\n raise ValueError(\"Invalid Ensembl response for {}\\nheaders: {}\\nresponse: {}\".format(\\\n self.server + ext, requested_headers, response))\n \n # sometimes ensembl returns odd data. I don't know what it is, but the\n # json interpreter can't handle it. 
Rather than trying to catch it,\n # simply re-request the data\n if requested_headers[\"content-type\"] == \"application/json\":\n try:\n json.loads(response)\n except ValueError:\n now = time.strftime(\"%Y-%m-%d %H:%M:%S\", time.gmtime())\n logging.warning(\"{}\\t{}\\t{}\\t{}\\t{}\".format(now,\n status, self.server + ext,\n \"cannot obtain json output\"))\n return self.ensembl_request(ext, requested_headers)\n \n self.cache.cache_url_data(self.server + ext, response)\n \n return response"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngetting the ensembl gene IDs that correspond to a HGNC symbol", "response": "def get_genes_for_hgnc_id(self, hgnc_symbol):\n \"\"\" obtain the ensembl gene IDs that correspond to a HGNC symbol\n \"\"\"\n \n headers = {\"content-type\": \"application/json\"}\n \n # http://grch37.rest.ensembl.org/xrefs/symbol/homo_sapiens/KMT2A?content-type=application/json\n \n self.attempt = 0\n ext = \"/xrefs/symbol/homo_sapiens/{}\".format(hgnc_symbol)\n r = self.ensembl_request(ext, headers)\n \n genes = []\n for item in json.loads(r):\n if item[\"type\"] == \"gene\":\n genes.append(item[\"id\"])\n \n return genes"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_previous_symbol(self, hgnc_symbol):\n \n ensembl_server = self.server\n gene_names_server = \"http://rest.genenames.org\"\n \n self.server = gene_names_server\n headers = {\"accept\": \"application/json\", \"content-type\": \"application/json\"}\n ext = \"/fetch/symbol/{}\".format(hgnc_symbol)\n try:\n r = self.ensembl_request(ext, headers)\n finally:\n self.server = ensembl_server\n \n gene_json = json.loads(r)\n \n prev_gene = []\n docs = gene_json[\"response\"][\"docs\"]\n \n # strip out any gene entries that have been invalidated\n docs = [ x for x in docs if x[\"status\"] != \"Entry Withdrawn\"]\n \n if len(docs) == 0:\n pass\n elif len(docs) > 1:\n raise ValueError(\"{0} has 
more than one alternate symbol, which I haven't accounted for.\".format(hgnc_symbol))\n elif \"prev_symbol\" in docs[0]:\n prev_gene = docs[0][\"prev_symbol\"]\n \n return prev_gene", "response": "This function is used to get the previous HGNC symbol from the server."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nfetches the ensembl transcript IDs for a given list of ensembl gene IDs", "response": "def get_transcript_ids_for_ensembl_gene_ids(self, gene_ids, hgnc_symbols):\n \"\"\" fetch the ensembl transcript IDs for a given ensembl gene ID.\n \n Args:\n gene_ids: list of Ensembl gene IDs for the gene\n hgnc_symbols: list of possible HGNC symbols for gene\n \"\"\"\n \n chroms = {\"1\", \"2\", \"3\", \"4\", \"5\", \"6\", \"7\", \"8\", \"9\", \"10\", \"11\", \\\n \"12\", \"13\", \"14\", \"15\", \"16\", \"17\", \"18\", \"19\", \"20\", \"21\", \"22\", \\\n \"X\", \"Y\"}\n \n headers = {\"content-type\": \"application/json\"}\n \n transcript_ids = []\n for gene_id in gene_ids:\n self.attempt = 0\n ext = \"/overlap/id/{}?feature=transcript\".format(gene_id)\n r = self.ensembl_request(ext, headers)\n \n for item in json.loads(r):\n # ignore non-coding transcripts\n if item[\"biotype\"] not in [\"protein_coding\", \"polymorphic_pseudogene\"]:\n continue\n \n # ignore transcripts not on the standard chromosomes\n # (non-default chroms fail to map the known de novo variants\n # to the gene location\n if item[\"Parent\"] != gene_id or item[\"seq_region_name\"] not in \\\n chroms or \\\n all([symbol not in item[\"external_name\"] for symbol in hgnc_symbols]):\n continue\n transcript_ids.append(item[\"id\"])\n \n return transcript_ids"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngets the genomic sequence for a given transcript", "response": "def get_genomic_seq_for_transcript(self, transcript_id, expand):\n \"\"\" obtain the sequence for a transcript from ensembl\n \"\"\"\n \n headers 
= {\"content-type\": \"application/json\"}\n \n self.attempt = 0\n ext = \"/sequence/id/{0}?type=genomic;expand_3prime={1};expand_5prime={1}\".format(transcript_id, expand)\n r = self.ensembl_request(ext, headers)\n \n gene = json.loads(r)\n \n seq = gene[\"seq\"]\n seq_id = gene[\"id\"]\n \n if seq_id != transcript_id:\n raise ValueError(\"ensembl gave the wrong transcript\")\n \n desc = gene[\"desc\"].split(\":\")\n chrom = desc[2]\n start = int(desc[3]) + expand\n end = int(desc[4]) - expand\n strand_temp = int(desc[5])\n \n strand = \"+\"\n if strand_temp == -1:\n strand = \"-\"\n \n return (chrom, start, end, strand, seq)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_cds_seq_for_transcript(self, transcript_id):\n \n headers = {\"content-type\": \"text/plain\"}\n \n self.attempt = 0\n ext = \"/sequence/id/{}?type=cds\".format(transcript_id)\n \n return self.ensembl_request(ext, headers)", "response": "get the sequence for a transcript"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_protein_seq_for_transcript(self, transcript_id):\n \n headers = {\"content-type\": \"text/plain\"}\n \n self.attempt = 0\n ext = \"/sequence/id/{}?type=protein\".format(transcript_id)\n \n return self.ensembl_request(ext, headers)", "response": "get the sequence for a given transcript"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_genomic_seq_for_region(self, chrom, start_pos, end_pos):\n \n headers = {\"content-type\": \"text/plain\"}\n \n self.attempt = 0\n ext = \"/sequence/region/human/{}:{}..{}:1\".format(chrom, start_pos, end_pos)\n \n return self.ensembl_request(ext, headers)", "response": "get the genomic sequence for a region"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_chrom_for_transcript(self, transcript_id, hgnc_id):\n \n 
headers = {\"content-type\": \"application/json\"}\n    \n    self.attempt = 0\n    ext = \"/overlap/id/{}?feature=gene\".format(transcript_id)\n    r = self.ensembl_request(ext, headers)\n    \n    for gene in json.loads(r):\n        if gene[\"external_name\"] == hgnc_id:\n            return gene[\"seq_region_name\"]\n    \n    return None", "response": "obtain the chromosome for a given transcript"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_exon_ranges_for_transcript(self, transcript_id):\n    \n    headers = {\"content-type\": \"application/json\"}\n    \n    self.attempt = 0\n    ext = \"/overlap/id/{}?feature=exon\".format(transcript_id)\n    r = self.ensembl_request(ext, headers)\n    \n    exon_ranges = []\n    for exon in json.loads(r):\n        if exon[\"Parent\"] != transcript_id:\n            continue\n        \n        start = exon[\"start\"]\n        end = exon[\"end\"]\n        \n        exon_ranges.append((start, end))\n    \n    return exon_ranges", "response": "get the ranges of the exons for a given transcript"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_cds_ranges_for_transcript(self, transcript_id):\n    \n    headers = {\"content-type\": \"application/json\"}\n    \n    self.attempt = 0\n    ext = \"/overlap/id/{}?feature=cds\".format(transcript_id)\n    r = self.ensembl_request(ext, headers)\n    \n    cds_ranges = []\n    for cds_range in json.loads(r):\n        if cds_range[\"Parent\"] != transcript_id:\n            continue\n        \n        start = cds_range[\"start\"]\n        end = cds_range[\"end\"]\n        \n        cds_ranges.append((start, end))\n    \n    return cds_ranges", "response": "get the CDS ranges for a given transcript"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef rate_limit_ensembl_requests(self):\n    \n    current_time = time.time()\n    diff_time = current_time - self.prior_time\n    \n    # sleep until the current rate limit period is finished\n    if diff_time < self.rate_limit:\n        time.sleep(self.rate_limit - diff_time)\n    \n    # set the 
access time to now, for later checks\n self.prior_time = time.time()", "response": "Limit Ensembl requests to one per 0.0067 s."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nregister a NBTool object with the NBToolManager.", "response": "def register(nbtool):\n \"\"\"\n Register the provided NBTool object\n \"\"\"\n global _py_funcs\n _lazy_init()\n\n # Save references to the tool's load() and render() functions\n load_key = nbtool.origin + '|' + nbtool.id + '|load'\n render_key = nbtool.origin + '|' + nbtool.id + '|render'\n _py_funcs[load_key] = nbtool.load\n _py_funcs[render_key] = nbtool.render\n\n # Clean optional metadata for inclusion in JavaScript\n clean_description = \"null\" if nbtool.description is None else '\"' + nbtool.description.replace('\"','\\\\\"') + '\"'\n clean_version = \"null\" if nbtool.version is None else '\"' + nbtool.version.replace('\"','\\\\\"') + '\"'\n clean_tags = \"null\" if nbtool.tags is None else json.dumps(nbtool.tags)\n clean_attributes = \"null\" if nbtool.attributes is None else json.dumps(nbtool.attributes)\n\n # Pass the metadata to JavaScript\n IPython.display.display_javascript(\"\"\"\n console.log('ok');\n NBToolManager.instance().register(new NBToolManager.NBTool({\n origin: \"%s\",\n id: \"%s\",\n name: \"%s\",\n description: %s,\n version: %s,\n tags: %s,\n attributes: %s,\n load: function() {\n var x = Jupyter.notebook.kernel.execute('nbtools._py_funcs[\"%s\"]()',\n {\n iopub: {\n output: function(response) {\n // Print the return value of the Python code to the console\n console.log(response.content.data[\"text/plain\"]);\n }\n }\n },\n {\n silent: false,\n store_history: false,\n stop_on_error: true\n });\n return true;\n },\n render: function() {\n var x = Jupyter.notebook.kernel.execute('nbtools._py_funcs[\"%s\"]()',\n {\n iopub: {\n output: function(response) {\n // Print the return value of the Python code to the console\n 
console.log(response.content.data[\"text/plain\"]);\n }\n }\n },\n {\n silent: false,\n store_history: false,\n stop_on_error: true\n });\n return null;\n },\n }));\n \"\"\" % (nbtool.origin, nbtool.id, nbtool.name,\n clean_description, clean_version, clean_tags, clean_attributes,\n load_key, render_key), raw=True)\n return True"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef call_simple_cli(command, cwd=None, universal_newlines=False, redirect_stderr=False):\n\treturn SimpleCliTool()._call_cli(command, cwd, universal_newlines, redirect_stderr)", "response": "Wrapper around SimpleCliTool._call_cli."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nexecutes a command in the specified working directory and returns the output of the command.", "response": "def _call_cli(self, command, cwd=None, universal_newlines=False, redirect_stderr=False):\n\t\t\"\"\"\n\t\tExecutes the given command, internally using Popen. The output of\n\t\tstdout and stderr are returned as a tuple. 
The returned tuple looks\n\t\tlike: (stdout, stderr, returncode)\n\n\t\tParameters\n\t\t----------\n\t\tcommand: string\n\t\t\tThe command to execute.\n\t\tcwd: string\n\t\t\tChange the working directory of the program to the specified path.\n\t\tuniversal_newlines: boolean\n\t\t\tEnable the universal_newlines feature of Popen.\n\t\tredirect_stderr: boolean\n\t\t\tIf True, redirect stderr into stdout\n\t\t\"\"\"\n\t\tcommand = str(command.encode(\"utf-8\").decode(\"ascii\", \"ignore\"))\n\t\tenv = os.environ.copy()\n\t\tenv.update(self.envvars)\n\t\tstderr = STDOUT if redirect_stderr else PIPE\n\t\tproc = Popen(shlex.split(command), stdout=PIPE, stderr=stderr, cwd=cwd, universal_newlines=universal_newlines, env=env)\n\t\tstdout, stderr = proc.communicate()\n\n\t\treturn (stdout, stderr, proc.returncode)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _on_msg(self, msg):\n data = msg['content']['data']\n\n # If the message is a call invoke, run the function and send\n # the results.\n if 'callback' in data:\n guid = data['callback']\n callback = callback_registry[guid]\n args = data['arguments']\n args = [self.deserialize(a) for a in args]\n index = data['index']\n\n results = callback(*args)\n return self.serialize(self._send('return', index=index, results=results))\n\n # The message is not a call invoke, it must be an object\n # that is a response to a Python request.\n else:\n index = data['index']\n immutable = data['immutable']\n value = data['value']\n if index in self._callbacks:\n self._callbacks[index].resolve({\n 'immutable': immutable,\n 'value': value\n })\n del self._callbacks[index]", "response": "Handle messages from the front - end."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nserialize an object for sending to the front - end.", "response": "def serialize(self, obj):\n \"\"\"Serialize an object for sending to the front-end.\"\"\"\n if hasattr(obj, '_jsid'):\n return 
{'immutable': False, 'value': obj._jsid}\n else:\n obj_json = {'immutable': True}\n try:\n json.dumps(obj)\n obj_json['value'] = obj\n except:\n pass\n if callable(obj):\n guid = str(uuid.uuid4())\n callback_registry[guid] = obj\n obj_json['callback'] = guid\n return obj_json"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef deserialize(self, obj):\n if obj['immutable']:\n return obj['value']\n else:\n guid = obj['value']\n if not guid in object_registry:\n instance = JSObject(self, guid)\n object_registry[guid] = instance\n return object_registry[guid]", "response": "Deserialize an object from the front - end."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _send(self, method, **parameters):\n msg = {\n 'index': self._calls,\n 'method': method,\n }\n msg.update(parameters)\n\n promise = SimplePromise()\n self._callbacks[self._calls] = promise\n\n self._calls += 1\n self._comm.send(msg)\n\n return promise", "response": "Sends a message to the front - end and returns a promise."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreply to a RPC request. This function will use the default exchange, to directly contact the reply_to queue. Parameters ---------- channel: object Properly initialized AMQP channel to use. original_headers: dict The headers of the originating message that caused this reply. message: string Message to reply with properties: dict Properties to set on message. This parameter is optional, but if set, at least the following options must be set: content_type: string - what content_type to specify, default is 'text/plain'. delivery_mode: int - what delivery_mode to use. By default message are not persistent, but this can be set by specifying PERSISTENT_MESSAGE .", "response": "def rpc_reply(channel, original_headers, message, properties=None):\n\t\"\"\"\n\tReply to a RPC request. 
This function will use the default exchange, to directly contact the reply_to queue.\n\n\tParameters\n\t----------\n\tchannel: object\n\t\tProperly initialized AMQP channel to use.\n\toriginal_headers: dict\n\t\tThe headers of the originating message that caused this reply.\n\tmessage: string\n\t\tMessage to reply with\n\tproperties: dict\n\t\tProperties to set on message. This parameter is optional, but if set, at least the following options must be set:\n\t\t\tcontent_type: string - what content_type to specify, default is 'text/plain'.\n\t\t\tdelivery_mode: int - what delivery_mode to use. By default message are not persistent, but this can be\n\t\t\t\tset by specifying PERSISTENT_MESSAGE .\n\t\"\"\"\n\tif not properties:\n\t\tproperties = {}\n\tproperties['correlation_id'] = original_headers.correlation_id\n\n\tpublish_message(channel, '', original_headers.reply_to, message, properties)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef consume(self, consumer_callback=None, exclusive=False):\n\t\tif not hasattr(self, \"queue_name\") and consumer_callback:\n\t\t\traise ValueError(\"Trying to set a callback, while no general purpose queue was declared.\")\n\n\t\tself.rpc_consumer_tag = self.channel.basic_consume(consumer_callback=self._rpc_response_callback, queue=self.rpc_queue_name, exclusive=False)\n\n\t\tif consumer_callback:\n\t\t\tsuper(Rpc, self).consume(consumer_callback, exclusive, True)", "response": "Initialize consuming of messages from an AMQP RPC queue."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _rpc_response_callback(self, channel, method_frame, header_frame, body):\n\t\tself.logger.debug(\"Received RPC response with correlation_id: {0}\".format(header_frame.correlation_id))\n\t\tif header_frame.correlation_id in self.responses:\n\t\t\tself.responses[header_frame.correlation_id] = {\n\t\t\t\t\"method_frame\": 
method_frame,\n\t\t\t\t\"header_frame\": header_frame,\n\t\t\t\t\"body\": body\n\t\t\t}\n\t\tchannel.basic_ack(method_frame.delivery_tag)", "response": "Callback that handles an RPC response received from the server."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef register_response(self, correlation_id=None):\n\t\tif not correlation_id:\n\t\t\tcorrelation_id = str(uuid.uuid1())\n\n\t\tif correlation_id in self.responses:\n\t\t\traise KeyError(\"Correlation_id {0} was already registered, and therefore not unique.\".format(correlation_id))\n\n\t\tself.responses[correlation_id] = None\n\t\treturn correlation_id", "response": "Register the receiving of an RPC response."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nretrieving a registered RPC response.", "response": "def retrieve_response(self, correlation_id):\n\t\t\"\"\"\n\t\tRetrieve a registered RPC response. If the correlation_id was not registered, a KeyError will be raised. If no value has been\n\t\treceived yet, None will be returned. 
After retrieving the response, the value will be unset internally.\n\n\t\tThe returned value will include the entire RabbitMQ message, consisting of a dict with the following keys:\n\n\t\t\tmethod_frame: dict\n\t\t\t\tInformation about the message\n\t\t\theader_frame: dict\n\t\t\t\tHeaders of the message\n\t\t\tbody: string\n\t\t\t\tBody of the message\n\n\t\tParameters\n\t\t----------\n\t\tcorrelation_id: string\n\t\t\tIdentifier to retrieve the RPC response for\n\t\t\"\"\"\n\t\tif correlation_id not in self.responses:\n\t\t\traise KeyError(\"Given RPC response correlation_id was not registered.\")\n\t\tif not self.responses[correlation_id]:\n\t\t\treturn None\n\n\t\tresponse = self.responses[correlation_id]\n\t\tdel(self.responses[correlation_id])\n\t\treturn response"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef publish(self, exchange, routing_key, message, properties=None, mandatory=False):\n\t\treturn publish_message(self.channel, exchange, routing_key, message, properties, mandatory)", "response": "Publish a message to an AMQP exchange."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreplies to a RPC request. This function will use the default exchange, to directly contact the reply_to queue. Parameters ---------- original_headers: dict The headers of the originating message that caused this reply. message: string Message to reply with properties: dict Properties to set on message. This parameter is optional, but if set, at least the following options must be set: content_type: string - what content_type to specify, default is 'text/plain'. delivery_mode: int - what delivery_mode to use. By default message are not persistent, but this can be set by specifying PERSISTENT_MESSAGE .", "response": "def reply(self, original_headers, message, properties=None):\n\t\t\"\"\"\n\t\tReply to a RPC request. 
This function will use the default exchange, to directly contact the reply_to queue.\n\n\t\tParameters\n\t\t----------\n\t\toriginal_headers: dict\n\t\t\tThe headers of the originating message that caused this reply.\n\t\tmessage: string\n\t\t\tMessage to reply with\n\t\tproperties: dict\n\t\t\tProperties to set on message. This parameter is optional, but if set, at least the following options must be set:\n\t\t\t\tcontent_type: string - what content_type to specify, default is 'text/plain'.\n\t\t\t\tdelivery_mode: int - what delivery_mode to use. By default message are not persistent, but this can be\n\t\t\t\t\tset by specifying PERSISTENT_MESSAGE .\n\t\t\"\"\"\n\t\trpc_reply(self.channel, original_headers, message, properties)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef clear_usersettings_cache(sender, **kwargs):\n instance = kwargs['instance']\n try:\n del USERSETTINGS_CACHE[instance.site.pk]\n except KeyError:\n pass", "response": "Clear the USERSETTINGS_CACHE for the current site."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns the current UserSettings object based on the SITE_ID in the project s settings.", "response": "def get_current(self):\n \"\"\"\n Returns the current ``UserSettings`` based on the SITE_ID in the\n project's settings. The ``UserSettings`` object is cached the first\n time it's retrieved from the database.\n \"\"\"\n from django.conf import settings\n try:\n site_id = settings.SITE_ID\n except AttributeError:\n raise ImproperlyConfigured(\n 'You\\'re using the Django \"sites framework\" without having '\n 'set the SITE_ID setting. 
Create a site in your database and '\n 'set the SITE_ID setting to fix this error.')\n\n try:\n current_usersettings = USERSETTINGS_CACHE[site_id]\n except KeyError:\n current_usersettings = self.get(site_id=site_id)\n USERSETTINGS_CACHE[site_id] = current_usersettings\n return current_usersettings"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef append_url(base_url, path):\n if base_url[-1] != \"/\":\n base_url += \"/\"\n if path[0] == \"/\":\n path = path[1:]\n return urljoin(base_url, path)", "response": "Append path to base_url"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef include_frameshift_rates(path):\n \n with open(path) as handle:\n lines = [ x.strip().split('\\t') for x in handle ]\n \n nonsense = sum([ 10**(float(x[4])) for x in lines[1:] if x[4] != 'NA' ])\n length = sum([ int(x[2]) for x in lines[1:] if x[2] != 'NA' ])\n \n # add the frameshift rates to each line in turn, while writing the output\n # back to the output path\n frameshift_sum = nonsense * 1.25\n with open(path, \"w\") as handle:\n for line in lines:\n if line[0] == \"transcript_id\":\n line.append(\"frameshift_rate\")\n elif line[4] == 'NA':\n line.append('NA')\n else:\n # estimate the frameshift rate for the gene\n frameshift = (float(line[2])/length) * frameshift_sum\n frameshift = math.log10(frameshift)\n line.append(str(frameshift))\n \n line = \"\\t\".join(line) +\"\\n\"\n handle.write(line)", "response": "This function adds per - gene frameshift mutation rates to the output file."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _randomString():\n\n return ''.join(\n random.choice(string.ascii_uppercase + string.digits)\n for x in range(10))", "response": "Random string for message signing"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncalls custom cjdns admin 
function", "response": "def _callFunc(session, funcName, password, args):\n \"\"\"Call custom cjdns admin function\"\"\"\n\n txid = _randomString()\n sock = session.socket\n sock.send(bytearray('d1:q6:cookie4:txid10:%se' % txid, 'utf-8'))\n msg = _getMessage(session, txid)\n cookie = msg['cookie']\n txid = _randomString()\n tohash = (password + cookie).encode('utf-8')\n req = {\n 'q': funcName,\n 'hash': hashlib.sha256(tohash).hexdigest(),\n 'cookie': cookie,\n 'args': args,\n 'txid': txid\n }\n\n if password:\n req['aq'] = req['q']\n req['q'] = 'auth'\n reqBenc = bencode(req).encode('utf-8')\n req['hash'] = hashlib.sha256(reqBenc).hexdigest()\n\n reqBenc = bencode(req)\n sock.send(bytearray(reqBenc, 'utf-8'))\n return _getMessage(session, txid)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreceives messages from cjdns admin server", "response": "def _receiverThread(session):\n \"\"\"Receiving messages from cjdns admin server\"\"\"\n\n timeOfLastSend = time.time()\n timeOfLastRecv = time.time()\n try:\n while True:\n if timeOfLastSend + KEEPALIVE_INTERVAL_SECONDS < time.time():\n if timeOfLastRecv + 10 < time.time():\n raise exceptions.PingTimeout()\n session.socket.send(\n b'd1:q18:Admin_asyncEnabled4:txid8:keepalive')\n timeOfLastSend = time.time()\n\n try:\n data = session.socket.recv(BUFFER_SIZE)\n except socket.timeout:\n continue\n\n try:\n benc = bdecode(data)\n except (KeyError, ValueError):\n logger.error(\"error decoding [%s]\", data)\n continue\n\n if benc['txid'] == 'keepaliv':\n if benc['asyncEnabled'] == 0:\n raise exceptions.SessionLost()\n timeOfLastRecv = time.time()\n else:\n session.queue.put(benc)\n\n except KeyboardInterrupt:\n logger.exception(\"interrupted\")\n import thread\n thread.interrupt_main()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ngetting a message from the queue and store it in session. 
messages.", "response": "def _getMessage(session, txid):\n \"\"\"Getting message associated with txid\"\"\"\n\n while True:\n if txid in session.messages:\n msg = session.messages[txid]\n del session.messages[txid]\n return msg\n else:\n try:\n # apparently any timeout at all allows the thread to be\n # stopped but none make it unstoppable with ctrl+c\n nextMessage = session.queue.get(timeout=100)\n except queue.Empty:\n continue\n if 'txid' in nextMessage:\n session.messages[nextMessage['txid']] = nextMessage\n else:\n logger.info(\"message with no txid: %s\" % nextMessage)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nfunction fabric for Session class", "response": "def _functionFabric(func_name, argList, oargList, password):\n \"\"\"Function fabric for Session class\"\"\"\n\n def functionHandler(self, *args, **kwargs):\n call_args = {}\n\n for (key, value) in oargList.items():\n call_args[key] = value\n\n for i, arg in enumerate(argList):\n if i < len(args):\n call_args[arg] = args[i]\n\n for (key, value) in kwargs.items():\n call_args[key] = value\n\n return _callFunc(self, func_name, password, call_args)\n\n functionHandler.__name__ = str(func_name)\n return functionHandler"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nconnects to cjdns admin with this attributes", "response": "def connect(ipAddr, port, password):\n \"\"\"Connect to cjdns admin with this attributes\"\"\"\n\n sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)\n sock.connect((ipAddr, port))\n sock.settimeout(2)\n\n # Make sure it pongs.\n sock.send(b'd1:q4:pinge')\n data = sock.recv(BUFFER_SIZE)\n if not data.endswith(b'1:q4:ponge'):\n raise exceptions.NotACjdnsAdminSocket(\"Looks like %s:%d is to a non-cjdns socket.\",\n (ipAddr, port))\n\n # Get the functions and make the object\n page = 0\n availableFunctions = {}\n while True:\n sock.send(bytearray(\n 'd1:q24:Admin_availableFunctions4:argsd4:pagei%seee' 
% page,\n 'utf-8'))\n data = sock.recv(BUFFER_SIZE)\n benc = bdecode(data)\n for func in benc['availableFunctions']:\n availableFunctions[func] = benc['availableFunctions'][func]\n if 'more' not in benc:\n break\n page = page+1\n\n funcArgs = {}\n funcOargs = {}\n\n for (i, func) in availableFunctions.items():\n items = func.items()\n\n # grab all the required args first\n # append all the optional args\n rargList = [arg for arg, atts in items if atts['required']]\n argList = rargList + [\n arg for arg, atts in items if not atts['required']]\n\n # for each optional arg setup a default value with\n # a type which will be ignored by the core.\n oargList = {}\n for (arg, atts) in items:\n if not atts['required']:\n oargList[arg] = (\n \"''\" if func[arg]['type'] == 'Int'\n else \"\")\n\n setattr(Session, i, _functionFabric(\n i, argList, oargList, password))\n\n funcArgs[i] = rargList\n funcOargs[i] = oargList\n\n session = Session(sock)\n\n kat = threading.Thread(target=_receiverThread, args=[session])\n kat.setDaemon(True)\n kat.start()\n\n # Check our password.\n ret = _callFunc(session, \"ping\", password, {})\n if 'error' in ret:\n raise exceptions.InvalidAdminPassword(ret.get(\"error\"))\n\n session._functions = \"\"\n\n funcOargs_c = {}\n for func in funcOargs:\n funcOargs_c[func] = list([\n key + \"=\" + str(value) for (key, value) in funcOargs[func].items()\n ])\n\n for func in availableFunctions:\n session._functions += (\n func + \"(\" + ', '.join(funcArgs[func] + funcOargs_c[func]) + \")\\n\")\n\n return session"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef connectWithAdminInfo(path=None):\n\n if path is None:\n path = os.path.expanduser('~/.cjdnsadmin')\n try:\n with open(path, 'r') as adminInfo:\n data = json.load(adminInfo)\n except IOError:\n logger.info('~/.cjdnsadmin not found; using default credentials', file=sys.stderr)\n data = {\n 'password': 'NONE',\n 'addr': '127.0.0.1',\n 'port': 
11234,\n }\n\n return connect(data['addr'], data['port'], data['password'])", "response": "Connect to cjdns admin with data from user file"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\npublish a message to an AMQP exchange.", "response": "def publish_message(channel, exchange, routing_key, message, properties=None, mandatory=False):\n\t\"\"\"\n\tPublish a message to an AMQP exchange.\n\n\tParameters\n\t----------\n\tchannel: object\n\t\tProperly initialized AMQP channel to use.\n\texchange: string\n\t\tExchange to publish to.\n\trouting_key: string\n\t\tRouting key to use for this message.\n\tmessage: string\n\t\tMessage to publish.\n\tproperties: dict\n\t\tProperties to set on message. This parameter is optional, but if set, at least the following options must be set:\n\t\t\tcontent_type: string - what content_type to specify, default is 'text/plain'.\n\t\t\tdelivery_mode: int - what delivery_mode to use. By default message are not persistent, but this can be\n\t\t\t\tset by specifying PERSISTENT_MESSAGE .\n\t\tThe following options are also available:\n\t\t\trouting_key: string - what routing_key to use. Will override the one set in the parameters.\n\t\t\texchange: string - what exchange to use. 
Will override the one set in the parameters.\n\tmandatory: boolean\n\t\tIf set to True, the mandatory bit will be set on the published message.\n\n\tReturns\n\t-------\n\tDepending on the mode of the Channel, the return value can signify different things:\n\n\tbasic_Confirm is active:\n\t\tTrue means that the message has been delivered to a queue, False means it hasn't.\n\tmandatory bit was set on message:\n\t\tTrue means that the message has been delivered to a consumer, False means that it has been returned.\n\tNo special bit or mode has been set:\n\t\tNone is returned.\n\t\"\"\"\n\tif properties is None:\n\t\tproperties = {}\n\tif properties and \"routing_key\" in properties:\n\t\trouting_key = properties[\"routing_key\"]\n\t\tdel(properties[\"routing_key\"])\n\tif properties and \"exchange\" in properties:\n\t\texchange = properties[\"exchange\"]\n\t\tdel(properties[\"exchange\"])\n\n\tif not routing_key:\n\t\traise ValueError(\"routing_key was not specified\")\n\tif not exchange and not exchange == \"\":\n\t\traise ValueError(\"exchange was not specified\")\n\n\tlogging.getLogger(__name__ + \".publish_message\").debug(\"Publishing message to exchange {0} with routing_key {1}\".format(exchange, routing_key))\n\n\treturn channel.basic_publish(exchange, routing_key, message, pika.BasicProperties(**properties), mandatory)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef publish(self, message, properties=None, mandatory=False):\n\t\treturn publish_message(self.channel, self.exchange_name, self.default_routing_key, message, properties, mandatory)", "response": "Publish a message to an AMQP exchange."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nloads a file listing gene and transcript IDs and return a dictionary of transcripts", "response": "def load_genes(path):\n \"\"\" load a file listing gene and transcript IDs\n \n Args:\n path: path to file containing gene IDs and 
transcript IDs e.g.\n gene_1 transcript_1.1 length_1 denovo_count\n gene_2 transcript_2.1 length_3 denovo_count\n \n Returns:\n dict of transcripts eg {'CTC1': [\"ENST00000315684\", \"ENST00000485511\"]}\n \"\"\"\n \n with open(path, 'rt') as f:\n lines = [ x.split('\\t')[:2] for x in f if not x.startswith('hgnc') ]\n \n transcripts = {}\n for symbol, tx in lines:\n if symbol not in transcripts:\n transcripts[symbol] = []\n transcripts[symbol].append(tx)\n \n return transcripts"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_mutation_rates(transcripts, mut_dict, ensembl):\n \n rates = {'missense': 0, 'nonsense': 0, 'splice_lof': 0,\n 'splice_region': 0, 'synonymous': 0}\n combined = None\n \n for tx_id in transcripts:\n try:\n tx = construct_gene_object(ensembl, tx_id)\n except ValueError:\n continue\n \n if len(tx.get_cds_sequence()) % 3 != 0:\n raise ValueError(\"anomalous_coding_sequence\")\n \n # ignore mitochondrial genes\n if tx.get_chrom() == \"MT\":\n continue\n \n sites = SiteRates(tx, mut_dict, masked_sites=combined)\n combined = tx + combined\n \n for cq in ['missense', 'nonsense', 'splice_lof', 'splice_region', 'synonymous']:\n rates[cq] += sites[cq].get_summed_rate()\n \n if combined is None:\n raise ValueError('no tx found')\n \n length = combined.get_coding_distance(combined.get_cds_end())['pos']\n \n return rates, combined, length", "response": "This function determines the mutation rates per functional category for a list of transcripts and the ensembl request object."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_options():\n \n parser = argparse.ArgumentParser(description='denovonear cli interface')\n \n ############################################################################\n # CLI options in common\n parent = argparse.ArgumentParser(add_help=False)\n parent.add_argument(\"--out\", help=\"output filename\")\n 
parent.add_argument(\"--rates\",\n help=\"Path to file containing sequence context-based mutation rates.\")\n parent.add_argument(\"--genome-build\", choices=[\"grch37\",\n \"GRCh37\", \"grch38\", \"GRCh38\"], default=\"grch37\", help=\"Genome build \"\n \"that the de novo coordinates are based on (GRCh37 or GRCh38)\")\n parent.add_argument(\"--cache-folder\",\n default=os.path.join(os.path.expanduser('~'), \".cache\", 'denovonear'),\n help=\"where to cache Ensembl data (default is ~/.cache/denovonear)\")\n \n subparsers = parser.add_subparsers()\n \n ############################################################################\n # CLI options for clustering\n cluster = subparsers.add_parser('cluster', parents=[parent],\n description=\"Tests the proximity of de novo mutations in genes.\")\n cluster.add_argument(\"--in\", dest=\"input\", required=True, help=\"Path to \"\n \"file listing known mutations in genes. See example file in data folder \"\n \"for format.\")\n \n cluster.set_defaults(func=clustering)\n \n ############################################################################\n # CLI options for identifying transcripts to use\n transcripts = subparsers.add_parser('transcripts', parents=[parent],\n description=\"Identify transcripts for a gene containing de novo events.\")\n transcripts.add_argument(\"--de-novos\", required=True, help=\"Path to \"\n \"file listing de novo variants in genes.\")\n transcripts.set_defaults(func=find_transcripts)\n group = transcripts.add_mutually_exclusive_group(required=True)\n group.add_argument(\"--all-transcripts\", action=\"store_true\", default=False,\n help=\"Flag if you want to identify all transcripts with more than \"\n \"one de novo on it.\")\n group.add_argument(\"--minimise-transcripts\", action=\"store_true\",\n default=False, help=\"Flag if you want to identify the minimal set of \"\n \"transcripts to contain all de novos.\")\n \n ############################################################################\n # 
CLI options for determining mutation rates\n rater = subparsers.add_parser(\"rates\", parents=[parent],\n description=\"determine mutation rates for genes given transcript IDs.\")\n rater.add_argument(\"--genes\", help=\"Path to file \"\n \"listing HGNC symbols, with one or more transcript IDs per gene. \"\n \"The tab-separated input format is gene symbol followed by transcript \"\n \"ID. Alternative transcripts are listed on separate lines.\")\n rater.set_defaults(func=gene_rates)\n \n args = parser.parse_args()\n if 'func' not in args:\n print('Use one of the subcommands: cluster, rates, or transcripts\\n')\n parser.print_help()\n sys.exit()\n \n return args", "response": "Get the command line options for the denovonear CLI."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nget IPv6 address from a public key.", "response": "def to_ipv6(key):\n \"\"\"Get IPv6 address from a public key.\"\"\"\n\n if key[-2:] != '.k':\n raise ValueError('Key does not end with .k')\n\n key_bytes = base32.decode(key[:-2])\n hash_one = sha512(key_bytes).digest()\n hash_two = sha512(hash_one).hexdigest()\n\n return ':'.join([hash_two[i:i+4] for i in range(0, 32, 4)])"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef glob(self):\n\t\tmatches = []\n\t\tfor root, dirnames, filenames in os.walk(self.path):\n\t\t\tif not self.recursive:\n\t\t\t\twhile len(dirnames) > 0:\n\t\t\t\t\tdirnames.pop()\n\t\t\tfor include in self.include:\n\t\t\t\tfor filename in fnmatch.filter(filenames, include):\n\t\t\t\t\tmatches.append(os.path.join(root, filename))\n\t\treturn matches", "response": "Return all absolute filenames of files that match the globbing patterns."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ndisplays a choice form to select which site to add settings.", "response": "def select_site_view(self, request, form_url=''):\n \"\"\"\n Display a choice 
form to select which site to add settings.\n \"\"\"\n if not self.has_add_permission(request):\n raise PermissionDenied\n\n extra_qs = ''\n if request.META['QUERY_STRING']:\n extra_qs = '&' + request.META['QUERY_STRING']\n\n site_choices = self.get_site_choices()\n\n if len(site_choices) == 1:\n return HttpResponseRedirect('?site_id={0}{1}'.format(site_choices[0][0], extra_qs))\n\n # Create form\n form = self.select_site_form(\n data=request.POST if request.method == 'POST' else None,\n initial={'site': site_choices[0][0]}\n )\n\n form.fields['site'].choices = site_choices\n\n if form.is_valid():\n return HttpResponseRedirect(\n '?site_id={0}{1}'.format(form.cleaned_data['site'], extra_qs))\n\n # Wrap in all admin layout\n fieldsets = ((None, {'fields': ('site',)}),)\n adminForm = AdminForm(form, fieldsets, {}, model_admin=self)\n media = self.media + adminForm.media\n\n context = {\n 'title': _('Add %s') % force_text(self.opts.verbose_name),\n 'adminform': adminForm,\n 'is_popup': '_popup' in request.GET,\n 'media': mark_safe(media),\n 'errors': AdminErrorList(form, ()),\n 'app_label': self.opts.app_label,\n }\n\n return self.render_select_site_form(request, context, form_url)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef render_select_site_form(self, request, context, form_url=''):\n app_label = self.opts.app_label\n context.update({\n 'has_change_permission': self.has_change_permission(request),\n 'form_url': mark_safe(form_url),\n 'opts': self.opts,\n 'add': True,\n 'save_on_top': self.save_on_top,\n })\n\n return render_to_response(self.select_site_form_template or [\n 'admin/%s/%s/select_site_form.html' % (app_label, self.opts.object_name.lower()),\n 'admin/%s/select_site_form.html' % app_label,\n 'admin/usersettings/select_site_form.html', # added default here\n 'admin/select_site_form.html'\n ], context)", "response": "Render the site choice form."} {"SOURCE": "codesearchnet", "instruction": "Explain what 
the following Python 3 code does\ndef xauth(base_url_template=DEFAULT_READER_URL_TEMPLATE, **xargs):\n consumer_key = xargs.get('consumer_key') or required_from_env('READABILITY_CONSUMER_KEY')\n consumer_secret = xargs.get('consumer_secret') or required_from_env('READABILITY_CONSUMER_SECRET')\n username = xargs.get('username') or required_from_env('READABILITY_USERNAME')\n password = xargs.get('password') or required_from_env('READABILITY_PASSWORD')\n\n client = Client(consumer_key, client_secret=consumer_secret, signature_type='BODY')\n url = base_url_template.format(ACCESS_TOKEN_URL)\n headers = {'Content-Type': 'application/x-www-form-urlencoded'}\n params = {\n 'x_auth_username': username,\n 'x_auth_password': password,\n 'x_auth_mode': 'client_auth'\n }\n\n uri, headers, body = client.sign(url,\n http_method='POST',\n body=urlencode(params),\n headers=headers)\n\n response = requests.post(uri, data=body)\n logger.debug('POST to %s.', uri)\n\n token = parse_qs(response.content)\n try:\n # The indexes below are a little weird. parse_qs above gives us\n # back a dict where each value is a list. We want the first value\n # in those lists.\n token = (token[b'oauth_token'][0].decode(), token[b'oauth_token_secret'][0].decode())\n except KeyError:\n raise ValueError('Invalid Credentials.')\n\n return token", "response": "Generate an OAuth token tuple that can be used with clients. 
ReaderClient."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef log_transform(rates):\n \n transformed = []\n for key in ['missense', 'nonsense', 'splice_lof', 'splice_region',\n 'synonymous']:\n \n try:\n value = math.log10(rates[key])\n except ValueError:\n value = \"NA\"\n except KeyError:\n continue\n \n transformed.append(value)\n \n return '\\t'.join(map(str, transformed))", "response": "log transform a numeric value unless it is zero or negative\n "} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_usersettings_model():\n try:\n from django.apps import apps\n get_model = apps.get_model\n except ImportError:\n from django.db.models.loading import get_model\n\n try:\n app_label, model_name = settings.USERSETTINGS_MODEL.split('.')\n except ValueError:\n raise ImproperlyConfigured('USERSETTINGS_MODEL must be of the '\n 'form \"app_label.model_name\"')\n usersettings_model = get_model(app_label, model_name)\n if usersettings_model is None:\n raise ImproperlyConfigured('USERSETTINGS_MODEL refers to model \"%s\" that has '\n 'not been installed' % settings.USERSETTINGS_MODEL)\n return usersettings_model", "response": "Returns the UserSettings model that is active in this project."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_current_usersettings():\n USERSETTINGS_MODEL = get_usersettings_model()\n try:\n current_usersettings = USERSETTINGS_MODEL.objects.get_current()\n except USERSETTINGS_MODEL.DoesNotExist:\n current_usersettings = USERSETTINGS_MODEL.get_default()\n return current_usersettings", "response": "Returns the current UserSettings based on the SITE_ID in the project s settings\n "} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_p_value(transcript, rates, iterations, consequence, de_novos):\n \n if len(de_novos) < 2:\n 
return (float('nan'), float('nan'))\n \n rename = {"lof": "loss_of_function"}\n if consequence in rename:\n consequence = rename[consequence]\n \n weights = rates[consequence]\n \n cds_positions = [ transcript.get_coding_distance(x)['pos'] for x in de_novos ]\n distances = get_distances(cds_positions)\n observed = geomean(distances)\n \n # call a cython wrapped C++ library to handle the simulations\n sim_prob = analyse_de_novos(weights, iterations, len(de_novos), observed)\n \n observed = \"{0:0.1f}\".format(observed)\n \n return (observed, sim_prob)", "response": "This function calculates the probability of seeing de novos with the observed geometric mean distance between them, given the mutation rates."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_options():\n \n parser = argparse.ArgumentParser(description=\"Script to batch process de\"\n \"novo clustering.\")\n parser.add_argument(\"--in\", dest=\"input\", required=True, help=\"Path to\"\n \"file listing known mutations in genes. 
See example file in data folder\"\n \"for format.\")\n parser.add_argument(\"--temp-dir\", required=True, help=\"path to hold intermediate files\")\n parser.add_argument(\"--out\", required=True, help=\"Path to output file.\")\n \n args = parser.parse_args()\n \n return args", "response": "get the command line options"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncount the number of missense variants in each gene.", "response": "def count_missense_per_gene(lines):\n \"\"\" count the number of missense variants in each gene.\n \"\"\"\n \n counts = {}\n for x in lines:\n x = x.split(\"\\t\")\n gene = x[0]\n consequence = x[3]\n \n if gene not in counts:\n counts[gene] = 0\n \n if consequence != \"missense_variant\":\n continue\n \n counts[gene] += 1\n \n return counts"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef split_denovos(denovo_path, temp_dir):\n \n # open the de novos, drop the header line, then sort them (which will be by\n # HGNC symbol, as the first element of each line)\n with open(denovo_path, \"r\") as handle:\n lines = handle.readlines()\n header = lines.pop(0)\n \n basename = os.path.basename(denovo_path)\n \n # only include genes with 2+ missense SNVs\n counts = count_missense_per_gene(lines)\n counts = dict((k, v) for k, v in counts.items() if v > 1 )\n \n genes = set([])\n for line in sorted(lines):\n gene = line.split(\"\\t\")[0]\n \n # open a new output file whenever we hit a different gene\n if gene not in genes and gene in counts:\n genes.add(gene)\n path = os.path.join(temp_dir, \"tmp.{}.txt\".format(len(genes)))\n output = open(path, \"w\")\n output.write(header)\n \n if gene in counts:\n output.write(line)\n \n return len(genes)", "response": "split de novos from an input file into files, one for each gene"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_random_string():\n 
\n # set up a random string to associate with the run\n hash_string = \"%8x\" % random.getrandbits(32)\n hash_string = hash_string.strip()\n \n # don't allow the random strings to be equivalent to a number, since\n # the LSF cluster interprets those differently from letter-containing\n # strings\n while is_number(hash_string):\n hash_string = \"%8x\" % random.getrandbits(32)\n hash_string = hash_string.strip()\n \n return hash_string", "response": "make a random string that we can use for bsub job IDs"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef submit_bsub_job(command, job_id=None, dependent_id=None, memory=None, requeue_code=None, logfile=None):\n \n if job_id is None:\n job_id = get_random_string()\n \n job = \"-J \\\"{0}\\\"\".format(job_id)\n \n mem = \"\"\n if memory is not None:\n mem = \"-R 'select[mem>{0}] rusage[mem={0}]' -M {0}\".format(memory)\n \n requeue = \"\"\n if requeue_code is not None:\n requeue = \"-Q 'EXCLUDE({0})'\".format(requeue_code)\n \n dependent = \"\"\n if dependent_id is not None:\n if type(dependent_id) == list:\n dependent_id = \" && \".join(dependent_id)\n dependent = \"-w '{0}'\".format(dependent_id)\n \n log = \"bjob_output.txt\"\n if logfile is not None:\n log = logfile\n \n preamble = [\"bsub\", job, dependent, requeue, \"-q\", \"normal\", \"-o\", log, mem]\n command = [\"bash\", \"-c\", \"\\\"\"] + command + [\"\\\"\"]\n \n command = \" \".join(preamble + command)\n subprocess.call(command, shell=True)", "response": "Submits a job to the LSF cluster via bsub."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef batch_process(de_novo_path, temp_dir, output_path):\n \n temp_dir = tempfile.mkdtemp(dir=temp_dir)\n count = split_denovos(de_novo_path, temp_dir)\n \n # set up run parameters\n job_name = \"denovonear\"\n job_id = \"{0}[1-{1}]%20\".format(job_name, count)\n \n basename = os.path.basename(de_novo_path)\n 
infile = os.path.join(temp_dir, \"tmp.\\$LSB_JOBINDEX\\.txt\")\n outfile = os.path.join(temp_dir, \"tmp.\\$LSB_JOBINDEX\\.output\")\n \n command = [\"denovonear\", \"cluster\",\n \"--in\", infile,\n \"--out\", outfile]\n submit_bsub_job(command, job_id, memory=3500, requeue_code=134, logfile=\"clustering.bjob\")\n time.sleep(2)\n \n # merge the array output after the array finishes\n merge_id = \"{}_merge\".format(job_name)\n command = [\"head\", \"-n\", \"1\", os.path.join(temp_dir, \"tmp.1.output\"), \">\", output_path, \\\n \"; tail\", \"-q\", \"-n\", \"+2\", os.path.join(temp_dir, \"tmp.*.output\"), \"|\", \"sort\", \">>\", output_path]\n submit_bsub_job(command, merge_id, memory=100, dependent_id=job_id, logfile=\"clustering.bjob\")\n time.sleep(2)\n \n # submit a cleanup job to the cluster\n submit_bsub_job([\"rm\", \"-r\", temp_dir], job_id=\"{}_cleanup\".format(job_name), \\\n memory=100, dependent_id=merge_id, logfile=\"clustering.bjob\")", "response": "This function batch-processes de novo clustering by splitting the input into per-gene files, submitting them as an LSF job array, then merging the output."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nload the per-trinucleotide rates from a file", "response": "def load_mutation_rates(path=None):\n \"\"\" load sequence context-based mutation rates\n \n Args:\n path: path to table of sequence context-based mutation rates. If None,\n this defaults to per-trinucleotide rates provided by Kaitlin Samocha\n (Broad Institute).\n \n Returns:\n list of [initial, changed, rate] lists e.g. 
[['AGA', 'ATA', '5e-8']]\n \"\"\"\n \n if path is None:\n path = resource_filename(__name__, \"data/rates.txt\")\n \n rates = []\n with open(path) as handle:\n for line in handle:\n if line.startswith(\"from\"): # ignore the header line\n continue\n \n line = [ x.encode('utf8') for x in line.strip().split() ]\n rates.append(line)\n \n return rates"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef authenticated_user(self, auth):\n response = self.get(\"/user\", auth=auth)\n return GogsUser.from_json(response.json())", "response": "Returns the authenticated user by the provided authentication"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns the list of tokens owned by the specified user.", "response": "def get_tokens(self, auth, username=None):\n \"\"\"\n Returns the tokens owned by the specified user. If no user is specified,\n uses the user authenticated by ``auth``.\n\n :param auth.Authentication auth: authentication for user to retrieve.\n Must be a username-password authentication,\n due to a restriction of the Gogs API\n :param str username: username of owner of tokens\n\n :return: list of tokens\n :rtype: List[Token]\n :raises NetworkFailure: if there is an error communicating with the server\n :raises ApiFailure: if the request cannot be serviced\n \"\"\"\n if username is None:\n username = self.authenticated_user(auth).username\n response = self.get(\"/users/{u}/tokens\".format(u=username), auth=auth)\n return [Token.from_json(o) for o in response.json()]"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncreates a new token with the specified name for the specified user.", "response": "def create_token(self, auth, name, username=None):\n \"\"\"\n Creates a new token with the specified name for the specified user.\n If no user is specified, uses user authenticated by ``auth``.\n\n :param 
auth.Authentication auth: authentication for user to retrieve.\n Must be a username-password authentication,\n due to a restriction of the Gogs API\n :param str name: name of new token\n :param str username: username of owner of new token\n\n :return: new token representation\n :rtype: Token\n :raises NetworkFailure: if there is an error communicating with the server\n :raises ApiFailure: if the request cannot be serviced\n \"\"\"\n if username is None:\n username = self.authenticated_user(auth).username\n data = {\"name\": name}\n response = self.post(\"/users/{u}/tokens\".format(u=username), auth=auth, data=data)\n return Token.from_json(response.json())"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef ensure_token(self, auth, name, username=None):\n if username is None:\n username = self.authenticated_user(auth).username\n tokens = [token for token in self.get_tokens(auth, username) if token.name == name]\n if len(tokens) > 0:\n return tokens[0]\n return self.create_token(auth, name, username)", "response": "Ensures that a token with the specified name exists for the specified user. 
Creates a new token if none exists."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef create_repo(self, auth, name, description=None, private=False, auto_init=False,\n gitignore_templates=None, license_template=None, readme_template=None,\n organization=None):\n \"\"\"\n Creates a new repository, and returns the created repository.\n\n :param auth.Authentication auth: authentication object\n :param str name: name of the repository to create\n :param str description: description of the repository to create\n :param bool private: whether the create repository should be private\n :param bool auto_init: whether the created repository should be auto-initialized with an initial commit\n :param list[str] gitignore_templates: collection of ``.gitignore`` templates to apply\n :param str license_template: license template to apply\n :param str readme_template: README template to apply\n :param str organization: organization under which repository is created\n :return: a representation of the created repository\n :rtype: GogsRepo\n :raises NetworkFailure: if there is an error communicating with the server\n :raises ApiFailure: if the request cannot be serviced\n \"\"\"\n gitignores = None if gitignore_templates is None \\\n else \",\".join(gitignore_templates)\n data = {\n \"name\": name,\n \"description\": description,\n \"private\": private,\n \"auto_init\": auto_init,\n \"gitignores\": gitignores,\n \"license\": license_template,\n \"readme\": readme_template\n }\n data = {k: v for (k, v) in data.items() if v is not None}\n url = \"/org/{0}/repos\".format(organization) if organization else \"/user/repos\"\n response = self.post(url, auth=auth, data=data)\n return GogsRepo.from_json(response.json())", "response": "Creates a new repository and returns it."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef repo_exists(self, auth, username, repo_name):\n path = 
\"/repos/{u}/{r}\".format(u=username, r=repo_name)\n return self._get(path, auth=auth).ok", "response": "Returns whether a repository with name repo_name owned by the user with username username exists."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn a repository with name repo_name owned by username.", "response": "def get_repo(self, auth, username, repo_name):\n \"\"\"\n Returns the repository with name ``repo_name`` owned by\n the user with username ``username``.\n\n :param auth.Authentication auth: authentication object\n :param str username: username of owner of repository\n :param str repo_name: name of repository\n :return: a representation of the retrieved repository\n :rtype: GogsRepo\n :raises NetworkFailure: if there is an error communicating with the server\n :raises ApiFailure: if the request cannot be serviced\n \"\"\"\n path = \"/repos/{u}/{r}\".format(u=username, r=repo_name)\n response = self.get(path, auth=auth)\n return GogsRepo.from_json(response.json())"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns the list of repositories owned by the user with username username.", "response": "def get_user_repos(self, auth, username):\n \"\"\"\n Returns the repositories owned by\n the user with username ``username``.\n\n :param auth.Authentication auth: authentication object\n :param str username: username of owner of repository\n :return: a list of repositories\n :rtype: List[GogsRepo]\n :raises NetworkFailure: if there is an error communicating with the server\n :raises ApiFailure: if the request cannot be serviced\n \"\"\"\n path = \"/users/{u}/repos\".format(u=username)\n response = self.get(path, auth=auth)\n return [GogsRepo.from_json(repo_json) for repo_json in response.json()]"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn the branch with name branch_name in the repository with name repo_name owned by 
the user with username username.", "response": "def get_branch(self, auth, username, repo_name, branch_name):\n \"\"\"\n Returns the branch with name ``branch_name`` in the repository with name ``repo_name``\n owned by the user with username ``username``.\n\n :param auth.Authentication auth: authentication object \n :param str username: username of owner of repository containing the branch\n :param str repo_name: name of the repository with the branch\n :param str branch_name: name of the branch to return\n :return: a branch\n :rtype: GogsBranch\n :raises NetworkFailure: if there is an error communicating with the server\n :raises ApiFailure: if the request cannot be serviced\n \"\"\"\n path = \"/repos/{u}/{r}/branches/{b}\".format(u=username, r=repo_name, b=branch_name)\n response = self.get(path, auth=auth)\n return GogsBranch.from_json(response.json())"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_branches(self, auth, username, repo_name):\n path = \"/repos/{u}/{r}/branches\".format(u=username, r=repo_name)\n response = self.get(path, auth=auth)\n return [GogsBranch.from_json(branch_json) for branch_json in response.json()]", "response": "Returns the branches owned by the user with username username."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef delete_repo(self, auth, username, repo_name):\n path = \"/repos/{u}/{r}\".format(u=username, r=repo_name)\n self.delete(path, auth=auth)", "response": "Deletes the repository with name repo_name owned by the user with username username."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nmigrates a repository from another Git hosting source for the authenticated user. 
:param auth.Authentication auth: authentication object :param str clone_addr: Remote Git address (HTTP/HTTPS URL or local path) :param int uid: user ID of repository owner :param str repo_name: Repository name :param bool mirror: Repository will be a mirror. Default is false :param bool private: Repository will be private. Default is false :param str description: Repository description :return: a representation of the migrated repository :rtype: GogsRepo :raises NetworkFailure: if there is an error communicating with the server :raises ApiFailure: if the request cannot be serviced", "response": "def migrate_repo(self, auth, clone_addr,\n uid, repo_name, auth_username=None, auth_password=None,\n mirror=False, private=False, description=None):\n \"\"\"\n Migrate a repository from another Git hosting source for the authenticated user.\n\n :param auth.Authentication auth: authentication object\n :param str clone_addr: Remote Git address (HTTP/HTTPS URL or local path)\n :param int uid: user ID of repository owner\n :param str repo_name: Repository name\n :param bool mirror: Repository will be a mirror. Default is false\n :param bool private: Repository will be private. 
Default is false\n :param str description: Repository description\n :return: a representation of the migrated repository\n :rtype: GogsRepo\n :raises NetworkFailure: if there is an error communicating with the server\n :raises ApiFailure: if the request cannot be serviced\n \"\"\"\n # \"auth_username\": auth_username,\n # \"auth_password\": auth_password,\n\n data = {\n \"clone_addr\": clone_addr,\n \"uid\": uid,\n \"repo_name\": repo_name,\n \"mirror\": mirror,\n \"private\": private,\n \"description\": description,\n }\n data = {k: v for (k, v) in data.items() if v is not None}\n url = \"/repos/migrate\"\n response = self.post(url, auth=auth, data=data)\n return GogsRepo.from_json(response.json())"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef create_user(self, auth, login_name, username, email, password, send_notify=False):\n # Since the source_id parameter was not well-documented at the time this method was\n # written, force the default value\n data = {\n \"login_name\": login_name,\n \"username\": username,\n \"email\": email,\n \"password\": password,\n \"send_notify\": send_notify\n }\n response = self.post(\"/admin/users\", auth=auth, data=data)\n return GogsUser.from_json(response.json())", "response": "Creates a new user and returns the created user."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef user_exists(self, username):\n path = \"/users/{}\".format(username)\n return self._get(path).ok", "response": "Returns whether a user with the specified username exists."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef search_users(self, username_keyword, limit=10):\n params = {\"q\": username_keyword, \"limit\": limit}\n response = self.get(\"/users/search\", params=params)\n return [GogsUser.from_json(user_json) for user_json in response.json()[\"data\"]]", 
"response": "Search for users whose username matches username_keyword and returns\n a list of matched users."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_user(self, auth, username):\n path = \"/users/{}\".format(username)\n response = self.get(path, auth=auth)\n return GogsUser.from_json(response.json())", "response": "Returns a representation of the user with username username."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nupdating the user with username according to update.", "response": "def update_user(self, auth, username, update):\n \"\"\"\n Updates the user with username ``username`` according to ``update``.\n\n :param auth.Authentication auth: authentication object, must be admin-level\n :param str username: username of user to update\n :param GogsUserUpdate update: a ``GogsUserUpdate`` object describing the requested update\n :return: the updated user\n :rtype: GogsUser\n :raises NetworkFailure: if there is an error communicating with the server\n :raises ApiFailure: if the request cannot be serviced\n \"\"\"\n path = \"/admin/users/{}\".format(username)\n response = self.patch(path, auth=auth, data=update.as_dict())\n return GogsUser.from_json(response.json())"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef delete_user(self, auth, username):\n path = \"/admin/users/{}\".format(username)\n self.delete(path, auth=auth)", "response": "Deletes the user with username username."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning all hooks of a repository owned by the user with username username.", "response": "def get_repo_hooks(self, auth, username, repo_name):\n \"\"\"\n Returns all hooks of repository with name ``repo_name`` owned by\n the user with username ``username``.\n\n :param auth.Authentication auth: authentication object\n :param str username: username of owner of 
repository\n :param str repo_name: name of repository\n :return: a list of hooks for the specified repository\n :rtype: List[GogsRepo.Hooks]\n :raises NetworkFailure: if there is an error communicating with the server\n :raises ApiFailure: if the request cannot be serviced\n \"\"\"\n path = \"/repos/{u}/{r}/hooks\".format(u=username, r=repo_name)\n response = self.get(path, auth=auth)\n return [GogsRepo.Hook.from_json(hook) for hook in response.json()]"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncreate a new hook and returns it.", "response": "def create_hook(self, auth, repo_name, hook_type, config, events=None, organization=None, active=False):\n \"\"\"\n Creates a new hook, and returns the created hook.\n\n :param auth.Authentication auth: authentication object, must be admin-level\n :param str repo_name: the name of the repo for which we create the hook\n :param str hook_type: The type of webhook, either \"gogs\" or \"slack\"\n :param dict config: Settings for this hook (possible keys are\n ``\"url\"``, ``\"content_type\"``, ``\"secret\"``)\n :param list events: Determines what events the hook is triggered for. Default: [\"push\"]\n :param str organization: Organization of the repo\n :param bool active: Determines whether the hook is actually triggered on pushes. 
Default is false\n :return: a representation of the created hook\n :rtype: GogsRepo.Hook\n :raises NetworkFailure: if there is an error communicating with the server\n :raises ApiFailure: if the request cannot be serviced\n \"\"\"\n if events is None:\n events = [\"push\"] # default value is mutable, so assign inside body\n\n data = {\n \"type\": hook_type,\n \"config\": config,\n \"events\": events,\n \"active\": active\n }\n\n url = \"/repos/{o}/{r}/hooks\".format(o=organization, r=repo_name) if organization is not None \\\n else \"/repos/{r}/hooks\".format(r=repo_name)\n response = self.post(url, auth=auth, data=data)\n return GogsRepo.Hook.from_json(response.json())"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef update_hook(self, auth, repo_name, hook_id, update, organization=None):\n if organization is not None:\n path = \"/repos/{o}/{r}/hooks/{i}\".format(o=organization, r=repo_name, i=hook_id)\n else:\n path = \"/repos/{r}/hooks/{i}\".format(r=repo_name, i=hook_id)\n response = self._patch(path, auth=auth, data=update.as_dict())\n return GogsRepo.Hook.from_json(response.json())", "response": "Updates the hook with id hook_id according to update."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef delete_hook(self, auth, username, repo_name, hook_id):\n path = \"/repos/{u}/{r}/hooks/{i}\".format(u=username, r=repo_name, i=hook_id)\n self.delete(path, auth=auth)", "response": "Deletes the hook with id hook_id for repo with name repo_name owned by the user with username username."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncreate a new organization and returns the created organization.", "response": "def create_organization(self, auth, owner_name, org_name, full_name=None, description=None,\n website=None, location=None):\n \"\"\"\n Creates a new organization, and returns the created 
organization.\n\n :param auth.Authentication auth: authentication object, must be admin-level\n :param str owner_name: Username of organization owner\n :param str org_name: Organization name\n :param str full_name: Full name of organization \n :param str description: Description of the organization\n :param str website: Official website\n :param str location: Organization location\n :return: a representation of the created organization\n :rtype: GogsOrg\n :raises NetworkFailure: if there is an error communicating with the server\n :raises ApiFailure: if the request cannot be serviced\n \"\"\"\n data = {\n \"username\": org_name,\n \"full_name\": full_name,\n \"description\": description,\n \"website\": website,\n \"location\": location\n }\n\n url = \"/admin/users/{u}/orgs\".format(u=owner_name)\n response = self.post(url, auth=auth, data=data)\n return GogsOrg.from_json(response.json())"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ncreating a new organization team.", "response": "def create_organization_team(self, auth, org_name, name, description=None, permission=\"read\"):\n \"\"\"\n Creates a new team of the organization.\n\n :param auth.Authentication auth: authentication object, must be admin-level\n :param str org_name: Organization user name\n :param str name: Full name of the team\n :param str description: Description of the team\n :param str permission: Team permission, can be read, write or admin, default is read\n :return: a representation of the created team\n :rtype: GogsTeam\n :raises NetworkFailure: if there is an error communicating with the server\n :raises ApiFailure: if the request cannot be serviced\n \"\"\"\n data = {\n \"name\": name,\n \"description\": description,\n \"permission\": permission\n }\n\n url = \"/admin/orgs/{o}/teams\".format(o=org_name)\n response = self.post(url, auth=auth, data=data)\n return GogsTeam.from_json(response.json())"} {"SOURCE": "codesearchnet", "instruction": "Implement a 
Python 3 function for\nadding user to team.", "response": "def add_team_membership(self, auth, team_id, username):\n \"\"\"\n Add user to team.\n\n :param auth.Authentication auth: authentication object, must be admin-level\n :param str team_id: Team's id\n :param str username: Username of the user to be added to team\n :raises NetworkFailure: if there is an error communicating with the server\n :raises ApiFailure: if the request cannot be serviced\n \"\"\"\n url = \"/admin/teams/{t}/members/{u}\".format(t=team_id, u=username)\n self.put(url, auth=auth)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nremoving a user from a team.", "response": "def remove_team_membership(self, auth, team_id, username):\n \"\"\"\n Remove user from team.\n\n :param auth.Authentication auth: authentication object, must be admin-level\n :param str team_id: Team's id\n :param str username: Username of the user to be removed from the team\n :raises NetworkFailure: if there is an error communicating with the server\n :raises ApiFailure: if the request cannot be serviced\n \"\"\"\n url = \"/admin/teams/{t}/members/{u}\".format(t=team_id, u=username)\n self.delete(url, auth=auth)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nadds or updates a repo for a team.", "response": "def add_repo_to_team(self, auth, team_id, repo_name):\n \"\"\"\n Add or update repo from team.\n\n :param auth.Authentication auth: authentication object, must be admin-level\n :param str team_id: Team's id\n :param str repo_name: Name of the repo to be added to the team\n :raises NetworkFailure: if there is an error communicating with the server\n :raises ApiFailure: if the request cannot be serviced\n \"\"\"\n url = \"/admin/teams/{t}/repos/{r}\".format(t=team_id, r=repo_name)\n self.put(url, auth=auth)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef 
remove_repo_from_team(self, auth, team_id, repo_name):\n url = \"/admin/teams/{t}/repos/{r}\".format(t=team_id, r=repo_name)\n self.delete(url, auth=auth)", "response": "Remove repo from team."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef list_deploy_keys(self, auth, username, repo_name):\n response = self.get(\"/repos/{u}/{r}/keys\".format(u=username, r=repo_name), auth=auth)\n return [GogsRepo.DeployKey.from_json(key_json) for key_json in response.json()]", "response": "List deploy keys for the specified repo."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_deploy_key(self, auth, username, repo_name, key_id):\n response = self.get(\"/repos/{u}/{r}/keys/{k}\".format(u=username, r=repo_name, k=key_id), auth=auth)\n return GogsRepo.DeployKey.from_json(response.json())", "response": "Get a deploy key for the specified repo."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef add_deploy_key(self, auth, username, repo_name, title, key_content):\n data = {\n \"title\": title,\n \"key\": key_content\n }\n response = self.post(\"/repos/{u}/{r}/keys\".format(u=username, r=repo_name), auth=auth, data=data)\n return GogsRepo.DeployKey.from_json(response.json())", "response": "Adds a deploy key to the specified repo."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef delete_deploy_key(self, auth, username, repo_name, key_id):\n self.delete(\"/repos/{u}/{r}/keys/{k}\".format(u=username, r=repo_name, k=key_id), auth=auth)", "response": "Remove deploy key for the specified repo."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef delete(self, path, auth=None, **kwargs):\n return self._check_ok(self._delete(path, auth=auth, **kwargs))", "response": 
"Manually make a DELETE request."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get(self, path, auth=None, **kwargs):\n return self._check_ok(self._get(path, auth=auth, **kwargs))", "response": "Manually make a GET request."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef patch(self, path, auth=None, **kwargs):\n return self._check_ok(self._patch(path, auth=auth, **kwargs))", "response": "Manually make a PATCH request."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef put(self, path, auth=None, **kwargs):\n return self._check_ok(self._put(path, auth=auth, **kwargs))", "response": "Manually make a PUT request."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nraises an ApiFailure pertaining to the given response", "response": "def _fail(response):\n \"\"\"\n Raise an ApiFailure pertaining to the given response\n \"\"\"\n message = \"Status code: {}-{}, url: {}\".format(response.status_code, response.reason, response.url)\n try:\n message += \", message:{}\".format(response.json()[\"message\"])\n except (ValueError, KeyError):\n pass\n raise ApiFailure(message, response.status_code)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef setStartAction(self, action, *args, **kwargs):\n\n\t\tself.init_action = action\n\t\tself.init_args = args\n\t\tself.init_kwargs = kwargs", "response": "set the start action for the current session"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nset a function to call when run() is stopping, after the main action is called. Parameters ---------- action: function pointer The function to call. *args Positional arguments to pass to action. 
**kwargs: Keyword arguments to pass to action.", "response": "def setStopAction(self, action, *args, **kwargs):\n\t\t\"\"\"\n\t\tSet a function to call when run() is stopping, after the main action is called.\n\n\t\tParameters\n\t\t----------\n\t\taction: function pointer\n\t\t\tThe function to call.\n\t\t*args\n\t\t\tPositional arguments to pass to action.\n\t\t**kwargs:\n\t\t\tKeyword arguments to pass to action.\n\t\t\"\"\"\n\t\tself.stop_action = action\n\t\tself.stop_args = args\n\t\tself.stop_kwargs = kwargs"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef run(self):\n\t\tself.logger.debug(\"Thread {0} is entering main loop\".format(self.name))\n\n\t\tif hasattr(self, \"init_action\"):\n\t\t\tself.logger.debug(\"Thread {0} is calling its init action\".format(self.name))\n\t\t\tself.init_action(*self.init_args, **self.init_kwargs)\n\n\t\twhile not self.stop:\n\t\t\tself.logger.debug(\"Delay is {0}\".format(self.delay))\n\t\t\tif (datetime.datetime.now() - self.lastRun).total_seconds() > self.delay:\n\t\t\t\tself.logger.debug(\"Thread {0} is running\".format(self.name))\n\t\t\t\ttry:\n\t\t\t\t\tself.main_action(*self.main_args, **self.main_kwargs)\n\t\t\t\texcept Exception:\n\t\t\t\t\tself.logger.exception(\"Thread {0} generated an exception!\".format(self.name))\n\n\t\t\t\tself.lastRun = datetime.datetime.now()\n\t\t\t\tself.logger.debug(\"Thread {0} is done\".format(self.name))\n\t\t\ttime.sleep(1)\n\n\t\tif hasattr(self, \"stop_action\"):\n\t\t\tself.logger.debug(\"Thread {0} is calling its stop action\".format(self.name))\n\t\t\tself.stop_action(*self.stop_args, **self.stop_kwargs)\n\n\t\tself.logger.debug(\"Thread {0} is exiting main loop\".format(self.name))", "response": "This method runs the main loop."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef Printer(open_file=sys.stdout, closing=False):\n    try:\n        while True:\n            logstr = (yield)\n            open_file.write(logstr)\n            
open_file.write('\\n') # new line\n    except GeneratorExit:\n        if closing:\n            try: open_file.close()\n            except: pass", "response": "Write each log string to an open file, one per line."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nopening a file and returns a Printer object.", "response": "def FilePrinter(filename, mode='a', closing=True):\n    \"\"\" Opens the given file and returns a printer to it. \"\"\"\n    path = os.path.abspath(os.path.expanduser(filename))\n    f = open(path, mode)\n    return Printer(f, closing)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef SocketWriter(host, port, af=None, st=None):\n    import socket\n    if af is None:\n        af = socket.AF_INET\n    if st is None:\n        st = socket.SOCK_STREAM\n    message = '({0}): {1}'\n    s = socket.socket(af, st)\n    s.connect((host, port))\n    try:\n        while True:\n            logstr = (yield)\n            s.send(logstr)\n    except GeneratorExit:\n        s.close()", "response": "Writes messages to a socket."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef Emailer(recipients, sender=None):\n    import smtplib\n    hostname = socket.gethostname()\n    if not sender:\n        sender = 'lggr@{0}'.format(hostname)\n    smtp = smtplib.SMTP('localhost')\n    try:\n        while True:\n            logstr = (yield)\n            try:\n                smtp.sendmail(sender, recipients, logstr)\n            except smtplib.SMTPException:\n                pass\n    except GeneratorExit:\n        smtp.quit()", "response": "Send messages as emails to the given list of recipients."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef GMailer(recipients, username, password, subject='Log message from lggr.py'):\n    import smtplib\n    srvr = smtplib.SMTP('smtp.gmail.com', 587)\n    srvr.ehlo()\n    srvr.starttls()\n    srvr.ehlo()\n    srvr.login(username, password)\n\n    if not (isinstance(recipients, list) or isinstance(recipients, tuple)):\n        recipients = [recipients]\n\n    gmail_sender = 
'{0}@gmail.com'.format(username)\n\n    msg = 'To: {0}\\nFrom: '+gmail_sender+'\\nSubject: '+subject+'\\n'\n    msg = msg + '\\n{1}\\n\\n'\n\n    try:\n        while True:\n            logstr = (yield)\n            for rcp in recipients:\n                message = msg.format(rcp, logstr)\n                srvr.sendmail(gmail_sender, rcp, message)\n    except GeneratorExit:\n        srvr.quit()", "response": "Send messages as emails to the given list of recipients from a GMail account."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that,\ngiven a list or tuple of logging levels, adds a logger instance to each level.", "response": "def add(self, levels, logger):\n        \"\"\" Given a list or tuple of logging levels,\n        add a logger instance to each. \"\"\"\n        if isinstance(levels, (list, tuple)):\n            for lvl in levels:\n                self.config[lvl].add(logger)\n        else:\n            self.config[levels].add(logger)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef remove(self, level, logger):\n        self.config[level].discard(logger)\n        logger.close()", "response": "Given a level, remove a given logger function if it is a member of that level, and close the logger function either way."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef clear(self, level):\n        for item in self.config[level]:\n            item.close()\n        self.config[level].clear()", "response": "Remove all logger functions from a given level."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _make_record(self,\n                     level,\n                     fmt,\n                     args,\n                     extra,\n                     exc_info,\n                     inc_stackinfo,\n                     inc_multiproc):\n        \"\"\" Create a 'record' (a dictionary) with information to be logged. 
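The `add`, `remove`, and `clear` methods above all manipulate one mapping from log levels to sets of logger functions. A minimal sketch of that bookkeeping, with plain strings standing in for logger coroutines (the class name `LevelConfig` is invented for illustration):

```python
from collections import defaultdict


class LevelConfig:
    # Sketch of the level -> set-of-sinks bookkeeping behind add()/remove().
    def __init__(self):
        self.config = defaultdict(set)

    def add(self, levels, sink):
        # Accept either a single level or a list/tuple of levels.
        if isinstance(levels, (list, tuple)):
            for lvl in levels:
                self.config[lvl].add(sink)
        else:
            self.config[levels].add(sink)

    def remove(self, level, sink):
        # discard() is safe even if the sink was never added at this level.
        self.config[level].discard(sink)

    def clear(self, level):
        self.config[level].clear()
```

Using a `set` per level means re-adding the same sink is a no-op, and `discard` avoids the `KeyError` that `set.remove` would raise for an unknown sink.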
\"\"\"\n\n fn = fname = '(unknown file)'\n lno = 0\n func = '(unknown function)'\n code = '(code not available)'\n cc = []\n sinfo = None\n module = '(unknown module)'\n if _srcfile and inc_stackinfo:\n #IronPython doesn't track Python frames, so _find_caller throws an\n #exception on some versionf of IronPython. We trap it here so that\n #IronPython can use logging.\n try:\n fn, lno, func, code, cc, sinfo = self._find_caller()\n fname = os.path.basename(fn)\n module = os.path.splitext(fname)[0]\n except ValueError:\n pass\n\n if not exc_info or not isinstance(exc_info, tuple):\n # Allow passed in exc_info, but supply it if it isn't\n exc_info = sys.exc_info()\n\n log_record = { # This is available information for logging functions.\n #TODO: proc_name, thread_name\n # see http://hg.python.org/cpython/file/74fa415dc715/Lib/logging/__init__.py#l279\n 'asctime': time.asctime(), # TODO: actual specifier for format\n 'code': code,\n 'codecontext': ''.join(cc),\n 'excinfo' : exc_info,\n 'filename' : fname,\n 'funcname' : func,\n 'levelname' : level,\n 'levelno' : ALL.index(level),\n 'lineno' : lno,\n 'logmessage' : None,\n 'messagefmt' : fmt,\n 'module' : module,\n 'pathname' : fn,\n 'process' : os.getpid(),\n 'processname' : None,\n 'stackinfo' : sinfo,\n 'threadid' : None,\n 'threadname' : None,\n 'time' : time.time(),\n # The custom `extra` information can only be used to format the\n # default format. The `logmessage` can only be passed a dictionary\n # or a list (as `args`).\n 'defaultfmt' : self.config['defaultfmt']\n }\n # If the user passed a single dict, use that with format. If we're\n # passed a tuple or list, dereference its contents as args to format,\n # too. 
Otherwise, leave the log message as None.\n        if args:\n            if (isinstance(args, (tuple, list)) and\n                    len(args) == 1 and\n                    isinstance(args[0], dict)):\n                log_record['logmessage'] = fmt.format(**args[0])\n            else:\n                log_record['logmessage'] = fmt.format(*args)\n        else:\n            log_record['logmessage'] = fmt\n\n        if extra:\n            log_record.update(extra) # add custom variables to record\n\n        if threading: # check to use threading\n            curthread = threading.current_thread()\n            log_record.update({\n                'threadid' : curthread.ident,\n                'threadname' : curthread.name\n            })\n\n        if not inc_multiproc: # check to use multiprocessing\n            procname = None\n        else:\n            procname = 'MainProcess'\n            if mp:\n                try:\n                    procname = mp.current_process().name\n                except Exception:\n                    pass\n        log_record['processname'] = procname\n\n        return log_record", "response": "Create a record from the given parameters and return it."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nsend a log message to all of the logging functions that are currently open, and add the message to the logger instance's history.", "response": "def _log(self,\n             level,\n             fmt,\n             args=None,\n             extra=None,\n             exc_info=None,\n             inc_stackinfo=False,\n             inc_multiproc=False):\n        \"\"\" Send a log message to all of the logging functions\n        for a given level as well as adding the\n        message to this logger instance's history. 
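`_log` above pushes the formatted string into each registered coroutine with `send()`, and prunes any coroutine that has already finished (sending to it raises `StopIteration`). A runnable sketch of that dispatch loop, with a simple list-backed sink (names invented for illustration):

```python
def printer_sink(out):
    # Generator-based sink: receives log strings via send() and
    # appends them to the supplied list until it is closed.
    try:
        while True:
            out.append((yield))
    except GeneratorExit:
        pass


def dispatch(sinks, logstr):
    # Send to every sink; sinks that have already finished raise
    # StopIteration and are pruned afterwards, mirroring _log() above.
    dead = []
    for sink in sinks:
        try:
            sink.send(logstr)
        except StopIteration:
            dead.append(sink)
    for sink in dead:
        sinks.remove(sink)
```

The pruning happens in a second pass so the sink list is not mutated while it is being iterated.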
\"\"\"\n if not self.enabled:\n return # Fail silently so that logging can easily be removed\n\n log_record = self._make_record(\n level, fmt, args, extra, exc_info, inc_stackinfo, inc_multiproc)\n\n logstr = log_record['defaultfmt'].format(**log_record) #whoah.\n\n if self.keep_history:\n self.history.append(logstr)\n\n log_funcs = self.config[level]\n to_remove = []\n for lf in log_funcs:\n try:\n lf.send(logstr)\n except StopIteration:\n # in the case that the log function is already closed, add it\n # to the list of functions to be deleted.\n to_remove.append(lf)\n for lf in to_remove:\n self.remove(level, lf)\n self.info('Logging function {} removed from level {}', lf, level)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef log(self, *args, **kwargs):\n if self.suppress_errors:\n try:\n self._log(*args, **kwargs)\n return True\n except:\n return False\n else:\n self._log(*args, **kwargs)\n return True", "response": "Do logging but handle error suppression."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef debug(self, msg, *args, **kwargs):\n kwargs.setdefault('inc_stackinfo', True)\n self.log(DEBUG, msg, args, **kwargs)", "response": "Log a message with DEBUG level."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nlogging a message at ERROR level.", "response": "def error(self, msg, *args, **kwargs):\n \"\"\" Log a message with ERROR level. Automatically includes stack and\n process info unless they are specifically not included. \"\"\"\n kwargs.setdefault('inc_stackinfo', True)\n kwargs.setdefault('inc_multiproc', True)\n self.log(ERROR, msg, args, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nlog a message with CRITICAL level.", "response": "def critical(self, msg, *args, **kwargs):\n \"\"\" Log a message with CRITICAL level. 
Automatically includes stack and\n process info unless they are specifically not included. \"\"\"\n kwargs.setdefault('inc_stackinfo', True)\n kwargs.setdefault('inc_multiproc', True)\n self.log(CRITICAL, msg, args, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef multi(self, lvl_list, msg, *args, **kwargs):\n for level in lvl_list:\n self.log(level, msg, args, **kwargs)", "response": "Log a message at multiple levels."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef all(self, msg, *args, **kwargs):\n self.multi(ALL, msg, args, **kwargs)", "response": "Log a message at every known log level"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nfind the stack frame of the caller so that we can note the source file and line number and function name.", "response": "def _find_caller(self):\n \"\"\"\n Find the stack frame of the caller so that we can note the source file\n name, line number, and function name.\n \"\"\"\n rv = ('(unknown file)',\n 0,\n '(unknown function)',\n '(code not available)',\n [],\n None)\n f = inspect.currentframe()\n while hasattr(f, 'f_code'):\n co = f.f_code\n filename = os.path.normcase(co.co_filename)\n # When lggr is imported as a module, the `_src_file` filename ends\n # in '.pyc', while the filename grabbed from inspect will end in\n # '.py'. 
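The level helpers above (`debug`, `error`, `critical`) differ only in which keyword defaults they apply before delegating to `log`; `kwargs.setdefault` lets an explicit caller argument override the level's default. A small sketch of the pattern — the recording `log` function here is a hypothetical stand-in for the real dispatcher:

```python
calls = []


def log(level, msg, **kwargs):
    # Hypothetical core logger: records each call instead of printing.
    calls.append((level, msg, kwargs))


def error(msg, **kwargs):
    # ERROR-level defaults: include stack and process info unless the
    # caller explicitly said otherwise (mirrors kwargs.setdefault above).
    kwargs.setdefault("inc_stackinfo", True)
    kwargs.setdefault("inc_multiproc", True)
    log("ERROR", msg, **kwargs)
```

`setdefault` only fills keys the caller left unset, so `error("quiet", inc_stackinfo=False)` still suppresses the stack info while keeping the process-info default.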
We use splitext here to compare absolute paths without the\n            # extension, which restores the intended behavior of dropping down\n            # the callstack until we reach the first file not part of this\n            # library.\n            if os.path.splitext(filename)[0] == os.path.splitext(_srcfile)[0]:\n                f = f.f_back # get out of this logging file\n                continue\n            sinfo = traceback.extract_stack(f)\n            fname, lno, fnc, cc, i = inspect.getframeinfo(f, context=10)\n\n            # Mark the calling line with a >\n            cc = list(map(lambda info: ('> ' if info[0] == i else '| ') + info[1],\n                          enumerate(cc)))\n            code = '>' + cc[i]\n            rv = (fname, lno, fnc, code, cc, sinfo)\n            break\n        return rv"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _map_value(self, value):\n        ''' Maps a raw deserialized row to proper types '''\n        # TODO: Once we've gotten here, we've done the following:\n        # -> Receive the full response, copy it from the network buffer into a ByteBuffer (copy 1)\n        # -> Copy all the data into a String (copy 2)\n        # -> Deserialize that string (copy 3)\n        # -> Map the deserialized JSON to our response format (copy 4, what we are doing in this method)\n        # This should not be needed. 
Technically, this mapping from transport format to python types should require exactly one copy, \n # from the network buffer into the python VM.\n if isinstance(value, list):\n out = []\n for c in value:\n out.append(self._map_value( c ))\n return out\n elif isinstance(value, dict) and 'metadata' in value and 'labels' in value['metadata'] and 'self' in value:\n return neo4j.Node(ustr(value['metadata']['id']), value['metadata']['labels'], value['data'])\n elif isinstance(value, dict) and 'metadata' in value and 'type' in value and 'self' in value:\n return neo4j.Relationship(ustr(value['metadata']['id']), value['type'], value['start'].split('/')[-1], value['end'].split('/')[-1], value['data'])\n elif isinstance(value, dict):\n out = {}\n for k,v in value.items():\n out[k] = self._map_value( v )\n return out\n elif isinstance(value, str):\n return ustr(value)\n else:\n return value", "response": "Maps a raw deserialized row to proper types"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngets a dict of the gene IDs used during in DDD datasets that have been deprecated in favour of other gene IDs", "response": "def get_deprecated_gene_ids(filename):\n \"\"\" gets a dict of the gene IDs used during in DDD datasets that have been\n deprecated in favour of other gene IDs\n \"\"\"\n \n deprecated = {}\n with open(filename) as handle:\n for line in handle:\n line = line.strip().split()\n old = line[0]\n new = line[1]\n deprecated[old] = new\n \n return deprecated"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_transcript_lengths(ensembl, transcript_ids):\n \n transcripts = {}\n for transcript_id in transcript_ids:\n # get the transcript's protein sequence via the ensembl REST API\n try:\n seq = ensembl.get_protein_seq_for_transcript(transcript_id)\n except ValueError:\n continue\n \n transcripts[transcript_id] = len(seq)\n \n return transcripts", "response": 
"returns a dictionary of length for each transcript in the list transcript_ids"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef construct_gene_object(ensembl, transcript_id):\n \n # get the sequence for the identified transcript\n (chrom, start, end, strand, genomic_sequence) = ensembl.get_genomic_seq_for_transcript(transcript_id, expand=10)\n cds_sequence = ensembl.get_cds_seq_for_transcript(transcript_id)\n \n # get the locations of the exons and cds from ensembl\n cds_ranges = ensembl.get_cds_ranges_for_transcript(transcript_id)\n exon_ranges = ensembl.get_exon_ranges_for_transcript(transcript_id)\n \n # start a Transcript object with the locations and sequence\n transcript = Transcript(transcript_id, chrom, start, end, strand)\n transcript.set_exons(exon_ranges, cds_ranges)\n transcript.set_cds(cds_ranges)\n \n transcript.add_cds_sequence(cds_sequence)\n transcript.add_genomic_sequence(genomic_sequence, offset=10)\n \n return transcript", "response": "constructs a Transcript object from the database"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ngets the de novo positions within a given transcript", "response": "def get_de_novos_in_transcript(transcript, de_novos):\n \"\"\" get the de novos within the coding sequence of a transcript\n \n Args:\n transcript: Transcript object, which defines the transcript coordinates\n de_novos: list of chromosome sequence positions for de novo events\n \n Returns:\n list of de novo positions found within the transcript\n \"\"\"\n \n in_transcript = []\n for de_novo in de_novos:\n # we check if the de novo is within the transcript by converting the\n # chromosomal position to a CDS-based position. Variants outside the CDS\n # will raise an error, which we catch and pass on. 
It's better to do\n        # this, rather than use the function in_coding_region(), since that\n        # function does not allow for splice site variants.\n        site = transcript.get_coding_distance(de_novo)\n        cds_length = transcript.get_coding_distance(transcript.get_cds_end())\n        within_cds = site['pos'] >= 0 and site['pos'] < cds_length['pos']\n        if within_cds and (transcript.in_coding_region(de_novo) or abs(site['offset']) < 9):\n            in_transcript.append(de_novo)\n    \n    return in_transcript"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ngetting transcript IDs for a given gene", "response": "def get_transcript_ids(ensembl, gene_id):\n    \"\"\" gets transcript IDs for a gene.\n    \n    Args:\n        ensembl: EnsemblRequest object to request data from ensembl\n        gene_id: HGNC symbol for gene\n    \n    Returns:\n        dictionary of transcript ID: transcript lengths for all transcripts\n        for a given HGNC symbol.\n    \"\"\"\n    \n    ensembl_genes = ensembl.get_genes_for_hgnc_id(gene_id)\n    transcript_ids = ensembl.get_transcript_ids_for_ensembl_gene_ids(ensembl_genes, [gene_id])\n    \n    # sometimes we get HGNC symbols that do not match the ensembl rest version\n    # that we are currently using. 
We can look for earlier HGNC symbols for\n # the gene using the service at rest.genenames.org\n alt_symbols = []\n if len(transcript_ids) == 0:\n alt_symbols = ensembl.get_previous_symbol(gene_id)\n genes = [ensembl.get_genes_for_hgnc_id(symbol) for symbol in alt_symbols]\n genes = [item for sublist in genes for item in sublist]\n ensembl_genes += genes\n symbols = [gene_id] + alt_symbols\n \n transcript_ids = ensembl.get_transcript_ids_for_ensembl_gene_ids(ensembl_genes, symbols)\n \n return get_transcript_lengths(ensembl, transcript_ids)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nloading all the necessary sequences and positions for a gene", "response": "def load_gene(ensembl, gene_id, de_novos=[]):\n \"\"\" sort out all the necessary sequences and positions for a gene\n \n Args:\n ensembl: EnsemblRequest object to request data from ensembl\n gene_id: HGNC symbol for gene\n de_novos: list of de novo positions, so we can check they all fit in\n the gene transcript\n \n Returns:\n list of Transcript objects for gene, including genomic ranges and sequences\n \"\"\"\n \n transcripts = minimise_transcripts(ensembl, gene_id, de_novos)\n \n genes = []\n for transcript_id in transcripts:\n gene = construct_gene_object(ensembl, transcript_id)\n genes.append(gene)\n \n if len(genes) == 0:\n raise IndexError(\"{0}: no suitable transcripts\".format(gene_id))\n \n return genes"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef count_de_novos_per_transcript(ensembl, gene_id, de_novos=[]):\n \n transcripts = get_transcript_ids(ensembl, gene_id)\n \n # TODO: allow for genes without any coding sequence.\n if len(transcripts) == 0:\n raise IndexError(\"{0} lacks coding transcripts\".format(gene_id))\n \n # count the de novos observed in all transcripts\n counts = {}\n for key in transcripts:\n try:\n gene = construct_gene_object(ensembl, key)\n total = len(get_de_novos_in_transcript(gene, 
de_novos))\n if total > 0:\n counts[key] = {}\n counts[key][\"n\"] = total\n counts[key][\"len\"] = transcripts[key]\n except ValueError:\n pass\n \n return counts", "response": "Count the number of de novo positions in a single gene."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngets a set of minimal transcripts to contain all the de novos. We identify the minimal number of transcripts to contain all de novos. This allows for de novos on mutually exclusive transcripts. The transcripts are selected on the basis of containing the most number of de novos, while also being the longest possible transcript for the gene. Args: ensembl: EnsemblRequest object to request data from ensembl gene_id: HGNC symbol for gene de_novos: set of de novo positions Returns: dictionary of lengths and de novo counts, indexed by transcript ID for the set of minimal transcripts necessary to contain all de novos.", "response": "def minimise_transcripts(ensembl, gene_id, de_novos):\n \"\"\" get a set of minimal transcripts to contain all the de novos.\n \n We identify the minimal number of transcripts to contain all de novos. This\n allows for de novos on mutually exclusive transcripts. 
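`minimise_transcripts` repeatedly picks the transcript with the most de novos, breaking ties by transcript length. The selection step can be sketched without any Ensembl access — the `pick_best` helper and the toy counts dictionary are invented for illustration:

```python
def pick_best(counts):
    # counts: {transcript_id: {"n": de_novo_count, "len": length}},
    # the same shape returned by count_de_novos_per_transcript above.
    # Choose the transcript(s) with the most de novos, then the longest.
    max_n = max(v["n"] for v in counts.values())
    best = [k for k, v in counts.items() if v["n"] == max_n]
    max_len = max(counts[k]["len"] for k in best)
    return sorted(k for k in best if counts[k]["len"] == max_len)
```

In the real function this selection is applied, the covered de novos are removed, and the process recurses on the leftovers, so mutually exclusive transcripts each get a turn.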
The transcripts are\n    selected on the basis of containing the most number of de novos, while also\n    being the longest possible transcript for the gene.\n    \n    Args:\n        ensembl: EnsemblRequest object to request data from ensembl\n        gene_id: HGNC symbol for gene\n        de_novos: set of de novo positions\n    \n    Returns:\n        dictionary of lengths and de novo counts, indexed by transcript ID for\n        the set of minimal transcripts necessary to contain all de novos.\n    \"\"\"\n    \n    if len(de_novos) == 0:\n        return {}\n    \n    counts = count_de_novos_per_transcript(ensembl, gene_id, de_novos)\n    \n    if len(counts) == 0:\n        return {}\n    \n    # find the transcripts with the most de novos\n    max_count = max( val[\"n\"] for key, val in counts.items() )\n    transcripts = [ key for key, val in counts.items() if val[\"n\"] == max_count ]\n    \n    # find the transcripts with the greatest length, and the most de novos\n    max_length = max( counts[key][\"len\"] for key in transcripts )\n    tx_ids = [ key for key in transcripts if counts[key][\"len\"] == max_length ]\n    max_transcripts = {x: counts[x] for x in counts if x in tx_ids}\n    \n    # find which de novos occur in the transcript with the most de novos\n    gene = construct_gene_object(ensembl, next(iter(max_transcripts)))\n    denovos_in_gene = get_de_novos_in_transcript(gene, de_novos)\n    \n    # trim the de novos to the ones not in the current transcript\n    leftovers = [ x for x in de_novos if x not in denovos_in_gene ]\n    \n    # and recursively return the transcripts in the current transcript, along\n    # with transcripts for the remaining de novos\n    max_transcripts.update(minimise_transcripts(ensembl, gene_id, leftovers))\n    \n    return max_transcripts"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ndumps a SimpleDb as string in the following format: key: value", "response": "def dump_simple_db(path):\n\t\"\"\"\n\tDumps a SimpleDb as string in the following format:\n\tkey: value\n\t\"\"\"\n\toutput = []\n\tsimpledb = SimpleDb(path, mode=\"r\", 
sync=False)\n\twith simpledb as db:\n\t\tfor key in db:\n\t\t\toutput.append(\"{0}: {1}\".format(key, db.dumpvalue(key)))\n\treturn \"\\n\".join(output)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef required_from_env(key):\n val = os.environ.get(key)\n if not val:\n raise ValueError(\n \"Required argument '{}' not supplied and not found in environment variables\".format(key))\n return val", "response": "Retrieve a required variable from the current environment variables. Raises a ValueError if the required variable is not supplied."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nmake a HTTP GET request to the Reader API.", "response": "def get(self, url):\n \"\"\"\n Make a HTTP GET request to the Reader API.\n\n :param url: url to which to make a GET request.\n \"\"\"\n logger.debug('Making GET request to %s', url)\n return self.oauth_session.get(url)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef post(self, url, post_params=None):\n params = urlencode(post_params)\n logger.debug('Making POST request to %s with body %s', url, params)\n return self.oauth_session.post(url, data=params)", "response": "Make a HTTP POST request to the Reader API."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef delete(self, url):\n logger.debug('Making DELETE request to %s', url)\n return self.oauth_session.delete(url)", "response": "Make a HTTP DELETE request to the Readability API."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngetting a single article represented by article_id.", "response": "def get_article(self, article_id):\n \"\"\"\n Get a single article represented by `article_id`.\n\n :param article_id: ID of the article to retrieve.\n \"\"\"\n url = self._generate_url('articles/{0}'.format(article_id))\n 
return self.get(url)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nget all bookmarks for the current user.", "response": "def get_bookmarks(self, **filters):\n \"\"\"\n Get Bookmarks for the current user.\n\n Filters:\n\n :param archive: Filter Bookmarks returned by archived status.\n :param favorite: Filter Bookmarks returned by favorite status.\n :param domain: Filter Bookmarks returned by a domain.\n :param added_since: Filter bookmarks by date added (since this date).\n :param added_until: Filter bookmarks by date added (until this date).\n :param opened_since: Filter bookmarks by date opened (since this date).\n :param opened_until: Filter bookmarks by date opened (until this date).\n :param archived_since: Filter bookmarks by date archived (since this date.)\n :param archived_until: Filter bookmarks by date archived (until this date.)\n :param updated_since: Filter bookmarks by date updated (since this date.)\n :param updated_until: Filter bookmarks by date updated (until this date.)\n :param page: What page of results to return. Default is 1.\n :param per_page: How many results to return per page. 
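`get_bookmarks` above funnels its keyword filters through `filter_args_to_dict` before building a query string. A sketch of that flow — the accepted-filter set, base URL, and helper bodies are assumptions for illustration, not the library's actual internals:

```python
from urllib.parse import urlencode

# Subset of filter names for illustration (the real list is larger).
ACCEPTED_FILTERS = {"archive", "favorite", "domain", "page", "per_page", "tags"}


def filter_args_to_dict(filters, accepted):
    # Keep only recognised filter names; coerce booleans to 0/1 as the
    # API's integer flags expect (assumption based on the wrapper above).
    out = {}
    for key, value in filters.items():
        if key not in accepted:
            continue
        if isinstance(value, bool):
            value = int(value)
        out[key] = value
    return out


def generate_url(resource, query_params=None):
    # Hypothetical base URL; parameters sorted for deterministic output.
    url = "https://example.invalid/api/rest/{}".format(resource)
    if query_params:
        url += "?" + urlencode(sorted(query_params.items()))
    return url
```

Dropping unrecognised keys before URL construction keeps typos in caller kwargs from silently becoming bogus query parameters.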
Default is 20, max is 50.\n        :param only_deleted: Return only bookmarks that this user has deleted.\n        :param tags: Comma separated string of tags to filter bookmarks.\n        \"\"\"\n        filter_dict = filter_args_to_dict(filters, ACCEPTED_BOOKMARK_FILTERS)\n        url = self._generate_url('bookmarks', query_params=filter_dict)\n        return self.get(url)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nget a single bookmark represented by bookmark_id.", "response": "def get_bookmark(self, bookmark_id):\n        \"\"\"\n        Get a single bookmark represented by `bookmark_id`.\n\n        The requested bookmark must belong to the current user.\n\n        :param bookmark_id: ID of the bookmark to retrieve.\n        \"\"\"\n        url = self._generate_url('bookmarks/{0}'.format(bookmark_id))\n        return self.get(url)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef add_bookmark(self, url, favorite=False, archive=False, allow_duplicates=True):\n        rdb_url = self._generate_url('bookmarks')\n        params = {\n            \"url\": url,\n            \"favorite\": int(favorite),\n            \"archive\": int(archive),\n            \"allow_duplicates\": int(allow_duplicates)\n        }\n        return self.post(rdb_url, params)", "response": "Adds the given bookmark to the authenticated user's list of bookmarks."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef update_bookmark(self, bookmark_id, favorite=None, archive=None, read_percent=None):\n        rdb_url = self._generate_url('bookmarks/{0}'.format(bookmark_id))\n        params = {}\n        if favorite is not None:\n            params['favorite'] = 1 if favorite == True else 0\n        if archive is not None:\n            params['archive'] = 1 if archive == True else 0\n        if read_percent is not None:\n            try:\n                params['read_percent'] = float(read_percent)\n            except ValueError:\n                pass\n        return self.post(rdb_url, params)", "response": "Updates the given bookmark."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in 
Python 3, explain what it does\ndef delete_bookmark(self, bookmark_id):\n url = self._generate_url('bookmarks/{0}'.format(bookmark_id))\n return self.delete(url)", "response": "Delete a single bookmark represented by bookmark_id."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_bookmark_tags(self, bookmark_id):\n url = self._generate_url('bookmarks/{0}/tags'.format(bookmark_id))\n return self.get(url)", "response": "Retrieves the tags that have been applied to a bookmark."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nadd tags to a bookmark.", "response": "def add_tags_to_bookmark(self, bookmark_id, tags):\n \"\"\"\n Add tags to to a bookmark.\n\n The identified bookmark must belong to the current user.\n\n :param bookmark_id: ID of the bookmark to delete.\n :param tags: Comma separated tags to be applied.\n \"\"\"\n url = self._generate_url('bookmarks/{0}/tags'.format(bookmark_id))\n params = dict(tags=tags)\n return self.post(url, params)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nremove a single tag from a bookmark.", "response": "def delete_tag_from_bookmark(self, bookmark_id, tag_id):\n \"\"\"\n Remove a single tag from a bookmark.\n\n The identified bookmark must belong to the current user.\n\n :param bookmark_id: ID of the bookmark to delete.\n \"\"\"\n url = self._generate_url('bookmarks/{0}/tags/{1}'.format(\n bookmark_id, tag_id))\n return self.delete(url)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_tag(self, tag_id):\n url = self._generate_url('tags/{0}'.format(tag_id))\n return self.get(url)", "response": "Get a single tag represented by tag_id."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nmakes an HTTP POST request to the Parser API.", "response": "def post(self, url, 
post_params=None):\n \"\"\"\n Make an HTTP POST request to the Parser API.\n\n :param url: url to which to make the request\n :param post_params: POST data to send along. Expected to be a dict.\n \"\"\"\n if post_params is None:\n post_params = {}\n post_params['token'] = self.token\n params = urlencode(post_params)\n logger.debug('Making POST request to %s with body %s', url, params)\n return requests.post(url, data=params)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _generate_url(self, resource, query_params=None):\n resource = '{resource}?token={token}'.format(resource=resource, token=self.token)\n if query_params:\n resource += \"&{}\".format(urlencode(query_params))\n return self.base_url_template.format(resource)", "response": "Build the url to the given resource."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_article(self, url=None, article_id=None, max_pages=25):\n query_params = {}\n if url is not None:\n query_params['url'] = url\n if article_id is not None:\n query_params['article_id'] = article_id\n query_params['max_pages'] = max_pages\n url = self._generate_url('parser', query_params=query_params)\n return self.get(url)", "response": "Send a GET request to the parser API to get the representation of an article."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nposts the content to the Parser API.", "response": "def post_article_content(self, content, url, max_pages=25):\n \"\"\"\n POST content to be parsed to the Parser API.\n\n Note: Even when POSTing content, a url must still be provided.\n\n :param content: the content to be parsed\n :param url: the url that represents the content\n :param max_pages (optional): the maximum number of pages to parse\n and combine. 
Default is 25.\n \"\"\"\n params = {\n 'doc': content,\n 'max_pages': max_pages\n }\n url = self._generate_url('parser', {\"url\": url})\n return self.post(url, post_params=params)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_article_status(self, url=None, article_id=None):\n query_params = {}\n if url is not None:\n query_params['url'] = url\n if article_id is not None:\n query_params['article_id'] = article_id\n url = self._generate_url('parser', query_params=query_params)\n return self.head(url)", "response": "Send a HEAD request to the parser API to get the article's status."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_confidence(self, url=None, article_id=None):\n query_params = {}\n if url is not None:\n query_params['url'] = url\n if article_id is not None:\n query_params['article_id'] = article_id\n url = self._generate_url('confidence', query_params=query_params)\n return self.get(url)", "response": "Send a GET request to the Confidence endpoint of the Parser API."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef save(f, arr, vocab):\n itr = iter(vocab)\n # Avoid empty line at the end\n word, idx = next(itr)\n _write_line(f, arr[idx], word)\n for word, idx in itr:\n f.write(b'\\n')\n _write_line(f, arr[idx], word)", "response": "Save the word embedding file."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nconverting pretrained word embedding file in one format to another.", "response": "def convert(outputfile, inputfile, to_format, from_format):\n \"\"\"\n Convert pretrained word embedding file in one format to another.\n \"\"\"\n emb = word_embedding.WordEmbedding.load(\n inputfile, format=_input_choices[from_format][1],\n binary=_input_choices[from_format][2])\n emb.save(outputfile, 
format=_output_choices[to_format][1],\n binary=_output_choices[to_format][2])"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nchecks format of inputfile.", "response": "def check_format(inputfile):\n \"\"\"\n Check format of inputfile.\n \"\"\"\n t = word_embedding.classify_format(inputfile)\n if t == word_embedding._glove:\n _echo_format_result('glove')\n elif t == word_embedding._word2vec_bin:\n _echo_format_result('word2vec-binary')\n elif t == word_embedding._word2vec_text:\n _echo_format_result('word2vec-text')\n else:\n assert not \"Should not get here!\""} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ndoes the default action of twitter command.", "response": "def main():\n \"\"\"Do the default action of `twitter` command.\"\"\"\n from twitter.cmdline import Action, OPTIONS\n twitter = Twitter.from_oauth_file()\n Action()(twitter, OPTIONS)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ncreating an instance of TwitterAPIAuth using an OAuth file.", "response": "def from_oauth_file(cls, filepath=None):\n \"\"\"Get an object bound to the Twitter API using your own credentials.\n\n The `twitter` library ships with a `twitter` command that uses PIN\n OAuth. Generate your own OAuth credentials by running `twitter` from\n the shell, which will open a browser window to authenticate you. Once\n successfully run, even just one time, you will have a credential file\n at ~/.twitter_oauth.\n\n This factory function reuses your credential file to get a `Twitter`\n object. 
(Really, this code is just lifted from the `twitter.cmdline`\n module to minimize OAuth dancing.)\n \"\"\"\n if filepath is None:\n # Use default OAuth filepath from `twitter` command-line program.\n home = os.environ.get('HOME', os.environ.get('USERPROFILE', ''))\n filepath = os.path.join(home, '.twitter_oauth')\n\n oauth_token, oauth_token_secret = read_token_file(filepath)\n\n twitter = cls(\n auth=OAuth(\n oauth_token, oauth_token_secret, CONSUMER_KEY, CONSUMER_SECRET),\n api_version='1.1',\n domain='api.twitter.com')\n\n return twitter"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nsearches twitter for q, returning results directly from twitter.search.tweets", "response": "def search(self, q=None, **kw):\n \"\"\"Search twitter for ``q``, return `results`__ directly from twitter.\n\n __ https://dev.twitter.com/rest/reference/get/search/tweets\n \"\"\"\n if q is None:\n raise ValueError('No search query provided for `q` keyword.')\n return self.twitter.search.tweets(q=q, **kw)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nperforms :meth:`lookup` on the return value of :meth:`search`.", "response": "def lookup_search_result(self, result, **kw):\n \"\"\"Perform :meth:`lookup` on return value of :meth:`search`.\"\"\"\n return self.lookup((s['id_str'] for s in result['statuses']), **kw)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef lookup(self, id_list, **kw):\n result_id_pack = ','.join([str(_id) for _id in id_list])\n if not result_id_pack:\n return []\n return self.twitter.statuses.lookup(_id=result_id_pack)", "response": "Look up a list of statuses, returning results directly from twitter."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef plot_probe(mea, ax=None, xlim=None, ylim=None, color_currents=False, top=None, bottom=None,\n cmap='viridis', 
type='shank', alpha_elec=0.7, alpha_prb=0.3):\n '''\n \n Parameters\n ----------\n mea\n axis\n xlim\n ylim\n\n Returns\n -------\n\n '''\n\n\n if ax is None:\n fig = plt.figure()\n ax = fig.add_subplot(111)\n\n n_elec = mea.positions.shape[0]\n elec_size = mea.size\n\n if mea.type == 'mea':\n mea_pos = np.array([np.dot(mea.positions, mea.main_axes[0]), np.dot(mea.positions, mea.main_axes[1])]).T\n\n min_x, max_x = [np.min(np.dot(mea.positions, mea.main_axes[0])),\n np.max(np.dot(mea.positions, mea.main_axes[0]))]\n center_x = (min_x + max_x)/2.\n min_y, max_y = [np.min(np.dot(mea.positions, mea.main_axes[1])),\n np.max(np.dot(mea.positions, mea.main_axes[1]))]\n center_y = (min_y + max_y)/2.\n\n if type == 'shank':\n if top is None:\n probe_height = 200\n else:\n probe_height = top\n probe_top = max_y + probe_height\n if bottom is None:\n probe_bottom = min_y - probe_height\n else:\n probe_bottom = min_y - bottom\n probe_corner = min_y - 0.1*probe_height\n probe_left = min_x - 0.1*probe_height\n probe_right = max_x + 0.1*probe_height\n\n verts = [\n (min_x - 2*elec_size, probe_top), # left, bottom\n (min_x - 2*elec_size, probe_corner), # left, top\n (center_x, probe_bottom), # right, top\n (max_x + 2*elec_size, probe_corner), # right, bottom\n (max_x + 2*elec_size, probe_top),\n (min_x - 2 * elec_size, max_y + 2 * elec_size) # ignored\n ]\n\n codes = [Path.MOVETO,\n Path.LINETO,\n Path.LINETO,\n Path.LINETO,\n Path.LINETO,\n Path.CLOSEPOLY,\n ]\n\n elif type == 'planar':\n probe_top = max_y + 2 * elec_size\n probe_bottom = min_y - 2 * elec_size\n probe_left = min_x - 2 * elec_size\n probe_right = max_x + 2 * elec_size\n\n verts = [\n (min_x - 2 * elec_size, max_y + 2 * elec_size), # left, bottom\n (min_x - 2 * elec_size, min_y - 2 * elec_size), # left, top\n (max_x + 2 * elec_size, min_y - 2 * elec_size), # right, bottom\n (max_x + 2 * elec_size, max_y + 2 * elec_size), # ignored\n (max_x + 2 * elec_size, max_y + 2 * elec_size) # ignored\n ]\n\n codes = 
[Path.MOVETO,\n Path.LINETO,\n Path.LINETO,\n Path.LINETO,\n Path.CLOSEPOLY,\n ]\n else:\n raise AttributeError(\"'type' can be 'shank' or 'planar'\")\n\n path = Path(verts, codes)\n\n patch = patches.PathPatch(path, facecolor='green', edgecolor='k', lw=0.5, alpha=alpha_prb)\n ax.add_patch(patch)\n\n if color_currents:\n norm_curr = mea.currents / np.max(np.abs(mea.currents))\n colormap = plt.get_cmap(cmap)\n elec_colors = colormap(norm_curr)\n else:\n elec_colors = ['orange'] * mea.number_electrodes\n\n if mea.shape == 'square':\n for e in range(n_elec):\n elec = patches.Rectangle((mea_pos[e, 0] - elec_size, mea_pos[e, 1] - elec_size), 2*elec_size,\n 2*elec_size, alpha=alpha_elec, facecolor=elec_colors[e],\n edgecolor=[0.3, 0.3, 0.3], lw=0.5)\n\n ax.add_patch(elec)\n elif mea.shape == 'circle':\n for e in range(n_elec):\n elec = patches.Circle((mea_pos[e, 0], mea_pos[e, 1]), elec_size, alpha=alpha_elec,\n facecolor=elec_colors[e], edgecolor=[0.3, 0.3, 0.3], lw=0.5)\n\n ax.add_patch(elec)\n else:\n raise NotImplementedError('Wire type plotting not implemented')\n\n ax.axis('equal')\n\n if xlim:\n ax.set_xlim(xlim)\n else:\n ax.set_xlim(probe_left - 5 * elec_size, probe_right + 5 * elec_size)\n if ylim:\n ax.set_ylim(ylim)\n else:\n ax.set_ylim(probe_bottom - 5 * elec_size, probe_top + 5 * elec_size)\n\n return ax", "response": "Plots the probe for a single MEA."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_version(filepath='src/birding/version.py'):\n with open(get_abspath(filepath)) as version_file:\n return re.search(\n r\"\"\"__version__\\s+=\\s+(['\"])(?P.+?)\\1\"\"\",\n version_file.read()).group('version')", "response": "Get version without import which avoids dependency issues."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef search(self, q, **kw):\n url = 
'{base_url}/search/{stream}'.format(**vars(self))\n\n params = {\n 'q': q,\n }\n params.update(self.params)\n params.update(kw)\n\n response = self.session.get(url, params=params)\n response.raise_for_status()\n return response.json()", "response": "Search Gnip for given query returning deserialized response."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ndumping result into a string useful for debugging.", "response": "def dump(result):\n \"\"\"Dump result into a string, useful for debugging.\"\"\"\n if isinstance(result, dict):\n # Result is a search result.\n statuses = result['results']\n else:\n # Result is a lookup result.\n statuses = result\n status_str_list = []\n for status in statuses:\n status_str_list.append(textwrap.dedent(u\"\"\"\n @{screen_name} -- https://twitter.com/{screen_name}\n {text}\n \"\"\").strip().format(\n screen_name=status['actor']['preferredUsername'],\n text=status['body']))\n return u'\\n\\n'.join(status_str_list)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ngets a Shelf instance dynamically based on config.", "response": "def shelf_from_config(config, **default_init):\n \"\"\"Get a `Shelf` instance dynamically based on config.\n\n `config` is a dictionary containing ``shelf_*`` keys as defined in\n :mod:`birding.config`.\n \"\"\"\n shelf_cls = import_name(config['shelf_class'], default_ns='birding.shelf')\n init = {}\n init.update(default_init)\n init.update(config['shelf_init'])\n shelf = shelf_cls(**init)\n if hasattr(shelf, 'set_expiration') and 'shelf_expiration' in config:\n shelf.set_expiration(config['shelf_expiration'])\n return shelf"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef unpack(self, key, value):\n value, freshness = value\n if not self.is_fresh(freshness):\n raise KeyError('{} (stale)'.format(key))\n return value", "response": "Unpack and return value only if it is fresh."} 
{"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning True if given freshness value has expired else False.", "response": "def is_fresh(self, freshness):\n \"\"\"Return False if given freshness value has expired, else True.\"\"\"\n if self.expire_after is None:\n return True\n return self.freshness() - freshness <= self.expire_after"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns a dictionary of Django form errors for the given form instance.", "response": "def get_form_errors(form):\n \"\"\"\n Django form errors do not obey natural field order,\n this template tag returns non-field and field-specific errors\n :param form: the form instance\n \"\"\"\n return {\n 'non_field': form.non_field_errors(),\n 'field_specific': OrderedDict(\n (field, form.errors[field.name])\n for field in form\n if field.name in form.errors\n )\n }"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef sub_nav(context):\n request = context.get('request')\n return {\n 'alt_urls': make_alternate_language_urls(request),\n 'breadcrumbs': context.get('breadcrumbs'),\n 'breadcrumbs_back': context.get('breadcrumbs_back'),\n }", "response": "Return a dict with the sub - nav data."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns True if the current instance is the first instance in the ASG group sorted by instance_id.", "response": "def is_first_instance_aws():\n \"\"\"\n Returns True if the current instance is the first instance in the ASG group,\n sorted by instance_id.\n \"\"\"\n try:\n # get instance id and aws region\n instance_details = requests.get('http://169.254.169.254/latest/dynamic/instance-identity/document',\n timeout=5).json()\n instance_id = instance_details['instanceId']\n instance_region = instance_details['region']\n except (requests.RequestException, ValueError, KeyError) as e:\n raise 
StackInterrogationException(e)\n\n try:\n # get instance's autoscaling group\n autoscaling_client = boto3.client('autoscaling', region_name=instance_region)\n response = autoscaling_client.describe_auto_scaling_instances(InstanceIds=[instance_id])\n assert len(response['AutoScalingInstances']) == 1\n autoscaling_group = response['AutoScalingInstances'][0]['AutoScalingGroupName']\n except ClientError as e:\n raise StackInterrogationException(e)\n except AssertionError:\n raise InstanceNotInAsgException()\n\n try:\n # list in-service instances in autoscaling group\n # instances being launched or terminated should not be considered\n response = autoscaling_client.describe_auto_scaling_groups(AutoScalingGroupNames=[autoscaling_group])\n assert len(response['AutoScalingGroups']) == 1\n autoscaling_group_instance_ids = sorted(\n instance['InstanceId']\n for instance in response['AutoScalingGroups'][0]['Instances']\n if instance['LifecycleState'] == 'InService'\n )\n except (ClientError, AssertionError) as e:\n raise StackInterrogationException(e)\n\n return bool(autoscaling_group_instance_ids and autoscaling_group_instance_ids[0] == instance_id)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef is_first_instance_k8s(current_pod_name=None):\n current_pod_name = current_pod_name or os.environ.get('POD_NAME')\n if not current_pod_name:\n raise StackInterrogationException('Pod name not known')\n\n namespace = 'money-to-prisoners-%s' % settings.ENVIRONMENT\n try:\n load_incluster_config()\n except ConfigException as e:\n raise StackInterrogationException(e)\n try:\n response = k8s_client.CoreV1Api().list_namespaced_pod(\n namespace=namespace,\n label_selector='app=%s' % settings.APP,\n watch=False,\n )\n except ApiException as e:\n raise StackInterrogationException(e)\n pod_names = sorted(pod.metadata.name for pod in filter(lambda pod: pod.status.phase == 'Running', response.items))\n return 
bool(pod_names and pod_names[0] == current_pod_name)", "response": "Returns True if the current pod is the first instance of Kubernetes cluster."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef splitarg(args):\n '''\n This function will split arguments separated by spaces or commas \n to be backwards compatible with the original ArcGet command line tool\n '''\n if not args:\n return args\n split = list()\n for arg in args:\n if ',' in arg:\n split.extend([x for x in arg.split(',') if x])\n elif arg:\n split.append(arg)\n return split", "response": "This function will split arguments separated by spaces or commas \n "} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nhandling the login action.", "response": "def login(request, template_name=None,\n redirect_field_name=REDIRECT_FIELD_NAME,\n authentication_form=AuthenticationForm,\n current_app=None, extra_context=None):\n \"\"\"\n Displays the login form and handles the login action.\n \"\"\"\n redirect_to = request.POST.get(\n redirect_field_name,\n request.GET.get(redirect_field_name, '')\n )\n\n def get_redirect_to():\n # Ensure the user-originating redirection url is safe.\n if not is_safe_url(url=redirect_to, host=request.get_host()):\n return resolve_url(settings.LOGIN_REDIRECT_URL)\n return redirect_to\n\n if request.user.is_authenticated:\n return HttpResponseRedirect(get_redirect_to())\n\n if request.method == \"POST\":\n form = authentication_form(request=request, data=request.POST)\n if form.is_valid():\n\n # Okay, security check complete. 
Log the user in.\n auth_login(request, form.get_user())\n\n return HttpResponseRedirect(get_redirect_to())\n else:\n form = authentication_form(request=request)\n\n current_site = get_current_site(request)\n\n context = {\n 'form': form,\n redirect_field_name: redirect_to,\n 'site': current_site,\n 'site_name': current_site.name,\n }\n if extra_context is not None:\n context.update(extra_context)\n\n if current_app is not None:\n request.current_app = current_app\n\n return TemplateResponse(request, template_name, context)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef logout(request, template_name=None,\n next_page=None,\n redirect_field_name=REDIRECT_FIELD_NAME,\n current_app=None, extra_context=None):\n \"\"\"\n Logs out the user.\n \"\"\"\n auth_logout(request)\n\n if next_page is not None:\n next_page = resolve_url(next_page)\n\n if (redirect_field_name in request.POST or\n redirect_field_name in request.GET):\n next_page = request.POST.get(redirect_field_name,\n request.GET.get(redirect_field_name))\n # Security check -- don't allow redirection to a different host.\n if not is_safe_url(url=next_page, host=request.get_host()):\n next_page = request.path\n\n if next_page:\n # Redirect to this page until the session has been cleared.\n return HttpResponseRedirect(next_page)\n\n current_site = get_current_site(request)\n context = {\n 'site': current_site,\n 'site_name': current_site.name,\n 'title': _('Logged out')\n }\n if extra_context is not None:\n context.update(extra_context)\n\n if current_app is not None:\n request.current_app = current_app\n\n return TemplateResponse(request, template_name, context)", "response": "Logs out the user."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ngetting a dictionary of current birding configuration.", "response": "def get_config(filepath=None, default_loader=None, on_missing=None):\n \"\"\"Get a dict for the current birding 
configuration.\n\n The resulting dictionary is fully populated with defaults, such that all\n valid keys will resolve to valid values. Invalid and extra values in the\n configuration result in an exception.\n\n See :ref:`config` (module-level docstring) for discussion on how birding\n configuration works, including filepath loading. Note that a non-default\n filepath set via env results in a :py:exc:`OSError` when the file is\n missing, but the default filepath is ignored when missing.\n\n This function caches its return values as to only parse configuration once\n per set of inputs. As such, treat the resulting dictionary as read-only as\n not to accidentally write values which will be seen by other handles of the\n dictionary.\n\n Args:\n filepath (str): path to birding configuration YAML file.\n default_loader (callable):\n callable which returns file descriptor with YAML data of default\n configuration values\n on_missing (callable): callback to call when file is missing.\n Returns:\n dict: dict of current birding configuration; treat as read-only.\n\n \"\"\"\n # Handle cache lookup explicitly in order to support keyword arguments.\n cache_key = (filepath, default_loader, on_missing)\n if CACHE.get(cache_key) is not None:\n return CACHE.get(cache_key)\n\n logger = logging.getLogger('birding')\n\n if filepath is None:\n filepath = BIRDING_CONF\n if default_loader is None:\n default_loader = get_defaults_file\n if on_missing is None:\n on_missing = logger.info\n\n logger.info(\n 'Looking for configuration file: {}'.format(os.path.abspath(filepath)))\n if not os.path.exists(filepath):\n # Log a message if filepath is default; raise error if not default.\n on_missing('No {} configuration file found.'.format(filepath))\n if filepath != BIRDING_CONF_DEFAULT:\n # Stat the missing file to result in OSError.\n os.stat(filepath)\n\n config = yaml.safe_load(default_loader())\n tv.validate(SCHEMA, config)\n if os.path.exists(filepath):\n file_config = 
yaml.safe_load(open(filepath))\n if file_config:\n config = overlay(file_config, config)\n tv.validate(SCHEMA, config)\n\n CACHE.put(cache_key, config)\n\n return config"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_defaults_file(*a, **kw):\n fd = StringIO()\n fd.write(get_defaults_str(*a, **kw))\n fd.seek(0)\n return fd", "response": "Get a file object with YAML data of configuration defaults. Arguments are passed through to get_defaults_str."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nget the string YAML representation of configuration defaults.", "response": "def get_defaults_str(raw=None, after='Defaults::'):\n \"\"\"Get the string YAML representation of configuration defaults.\"\"\"\n if raw is None:\n raw = __doc__\n return unicode(textwrap.dedent(raw.split(after)[-1]).strip())"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef overlay(upper, lower):\n result = {}\n for key in upper:\n if is_mapping(upper[key]):\n lower_value = lower.get(key, {})\n if not is_mapping(lower_value):\n msg = 'Attempting to overlay a mapping on a non-mapping: {}'\n raise ValueError(msg.format(key))\n result[key] = overlay(upper[key], lower_value)\n else:\n result[key] = upper[key]\n for key in lower:\n if key in result:\n continue\n result[key] = lower[key]\n return result", "response": "Overlays the dict onto the lower dict."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nimports an object based on the dotted string.", "response": "def import_name(name, default_ns=None):\n \"\"\"Import an object based on the dotted string.\n\n >>> import_name('textwrap') # doctest: +ELLIPSIS\n \n >>> import_name('birding.config') # doctest: +ELLIPSIS\n \n >>> import_name('birding.config.get_config') # doctest: +ELLIPSIS\n \n >>>\n\n If `ns` is provided, use it as the namespace if `name` does 
not have a dot.\n\n >>> ns = 'birding.config'\n >>> x = import_name('birding.config.get_config')\n >>> x # doctest: +ELLIPSIS\n \n >>> x == import_name('get_config', default_ns=ns)\n True\n >>> x == import_name('birding.config.get_config', default_ns=ns)\n True\n >>>\n \"\"\"\n if '.' not in name:\n if default_ns is None:\n return importlib.import_module(name)\n else:\n name = default_ns + '.' + name\n module_name, object_name = name.rsplit('.', 1)\n module = importlib.import_module(module_name)\n return getattr(module, object_name)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nread kafka config then dispatch to follow_topic.", "response": "def follow_topic_from_config():\n \"\"\"Read kafka config, then dispatch to `follow_topic`.\"\"\"\n config = get_config()['ResultTopicBolt']\n kafka_class = import_name(config['kafka_class'])\n return follow_topic(kafka_class, config['topic'], **config['kafka_init'])"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ndumping each message from kafka topic to stdio.", "response": "def follow_topic(kafka_class, name, retry_interval=1, **kafka_init):\n \"\"\"Dump each message from kafka topic to stdio.\"\"\"\n while True:\n try:\n client = kafka_class(**kafka_init)\n topic = client.topics[name]\n consumer = topic.get_simple_consumer(reset_offset_on_start=True)\n except Exception as e:\n if not should_try_kafka_again(e):\n raise\n with flushing(sys.stderr):\n print(\n 'Failed attempt to connect to Kafka. 
Will retry ...',\n file=sys.stderr)\n sleep(retry_interval)\n else:\n with flushing(sys.stdout):\n print('Connected to Kafka.')\n break\n\n dump = Dump()\n for message in consumer:\n with flushing(sys.stdout, sys.stderr):\n status = load(message.value)\n if status:\n dump(status)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ndumps each line of input to stdio.", "response": "def follow_fd(fd):\n \"\"\"Dump each line of input to stdio.\"\"\"\n dump = Dump()\n for line in fd:\n if not line.strip():\n continue\n\n with flushing(sys.stdout, sys.stderr):\n status = load(line)\n if status:\n dump(status)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef should_try_kafka_again(error):\n msg = 'Unable to retrieve'\n return isinstance(error, KafkaException) and str(error).startswith(msg)", "response": "Determine if the error means to retry or fail; returns True to retry."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef check_valid(line0, line1):\n data = line0.strip().split(b' ')\n if len(data) <= 2:\n return False\n # check if data[2:] are float values; list() forces evaluation in Python 3\n try:\n list(map(float, data[2:]))\n except:\n return False\n return True", "response": "Check if a file is valid Glove format."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef load_with_vocab(fin, vocab, dtype=np.float32):\n arr = None\n for line in fin:\n try:\n token, v = _parse_line(line, dtype)\n except (ValueError, IndexError):\n raise ParseError(b'Parsing error in line: ' + line)\n if token in vocab:\n if arr is None:\n arr = np.empty((len(vocab), len(v)), dtype=dtype)\n arr.fill(np.NaN)\n elif arr.shape[1] != len(v):\n raise ParseError(b'Vector size did not match in line: ' + line)\n arr[vocab[token], :] = np.array(v, dtype=dtype).reshape(1, -1)\n return arr", "response": "Load word embedding file with 
predefined vocabulary"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nloads word embedding file into numpy array.", "response": "def load(fin, dtype=np.float32, max_vocab=None):\n \"\"\"\n Load word embedding file.\n\n Args:\n fin (File): File object to read. File should be open for reading ascii.\n dtype (numpy.dtype): Element data type to use for the array.\n max_vocab (int): Number of vocabulary to read.\n\n Returns:\n numpy.ndarray: Word embedding representation vectors\n dict: Mapping from words to vector indices.\n\n \"\"\"\n vocab = {}\n arr = None\n i = 0\n for line in fin:\n if max_vocab is not None and i >= max_vocab:\n break\n try:\n token, v = _parse_line(line, dtype)\n except (ValueError, IndexError):\n raise ParseError(b'Parsing error in line: ' + line)\n if token in vocab:\n parse_warn(b'Duplicated vocabulary ' + token)\n continue\n if arr is None:\n arr = np.array(v, dtype=dtype).reshape(1, -1)\n else:\n if arr.shape[1] != len(v):\n raise ParseError(b'Vector size did not match in line: ' + line)\n arr = np.append(arr, [v], axis=0)\n vocab[token] = i\n i += 1\n return arr, vocab"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef python_dependencies(context: Context, extras=None):\n with mock.patch('setuptools.setup'):\n from setup import install_requires, extras_require\n\n requirements = install_requires.copy()\n if extras:\n requirements.extend(extras_require[extras])\n return context.pip_command('install', *requirements)", "response": "Updates python dependencies with pip install"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef set_version(context: Context, version=None, bump=False):\n if bump and version:\n raise TaskError('You cannot bump and set a specific version')\n if bump:\n from mtp_common import VERSION\n\n version = list(VERSION)\n version[-1] += 1\n else:\n try:\n 
version = list(map(int, version.split('.')))\n assert len(version) == 3\n except (AttributeError, ValueError, AssertionError):\n raise TaskError('Version must be in the form N.N.N')\n\n dotted_version = '.'.join(map(str, version))\n replacements = [\n (r'^VERSION =.*$',\n 'VERSION = (%s)' % ', '.join(map(str, version)),\n 'mtp_common/__init__.py'),\n (r'^ \"version\":.*$',\n ' \"version\": \"%s\",' % dotted_version,\n 'package.json'),\n ]\n for search, replacement, path in replacements:\n with open(os.path.join(root_path, path)) as f:\n content = f.read()\n content = re.sub(search, replacement, content, flags=re.MULTILINE)\n with open(os.path.join(root_path, path), 'w') as f:\n f.write(content)\n context.debug('Updated version to %s' % dotted_version)", "response": "Updates the version of the MTP - common item."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nauthenticates the user with the given username and password.", "response": "def authenticate(self, username=None, password=None):\n \"\"\"\n Returns a valid `MojUser` if the authentication is successful\n or None if the credentials were wrong.\n \"\"\"\n data = api_client.authenticate(username, password)\n if not data:\n return\n\n return User(\n data.get('pk'), data.get('token'), data.get('user_data')\n )"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_client_token():\n if getattr(settings, 'NOMIS_API_CLIENT_TOKEN', ''):\n return settings.NOMIS_API_CLIENT_TOKEN\n global client_token\n if not client_token or client_token['expires'] and client_token['expires'] - now() < datetime.timedelta(days=1):\n session = None\n try:\n session = api_client.get_authenticated_api_session(settings.TOKEN_RETRIEVAL_USERNAME,\n settings.TOKEN_RETRIEVAL_PASSWORD)\n client_token = session.get('/tokens/nomis/').json()\n except (requests.RequestException, HttpNotFoundError, ValueError, AttributeError):\n logger.exception('Cannot load NOMIS API 
client token')\n return None\n finally:\n if session and getattr(session, 'access_token', None):\n api_client.revoke_token(session.access_token)\n if client_token.get('expires'):\n client_token['expires'] = parse_datetime(client_token['expires'])\n if client_token['expires'] < now():\n logger.error('NOMIS API client token from mtp-api had expired')\n return None\n return client_token['token']", "response": "Requests and stores the NOMIS API client token from mtp-api."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef include(self, *fields, **kwargs):\n clone = self._clone()\n\n # Preserve the stickiness of related querysets\n # NOTE: by default _clone will clear this attribute\n # .include does not modify the actual query, so we\n # should not clear `filter_is_sticky`\n if self.query.filter_is_sticky:\n clone.query.filter_is_sticky = True\n\n clone._include_limit = kwargs.pop('limit_includes', None)\n assert not kwargs, '\"limit_includes\" is the only accepted kwargs. 
Eat your heart out 2.7'\n\n # Copy the behavior of .select_related(None)\n if fields == (None, ):\n for field in clone._includes.keys():\n clone.query._annotations.pop('__{}'.format(field.name), None)\n clone._includes.clear()\n return clone\n\n # Parse everything the way django handles joins/select related\n # Including multiple child fields ie .include(field1__field2, field1__field3)\n # turns into {field1: {field2: {}, field3: {}}\n for name in fields:\n ctx, model = clone._includes, clone.model\n for spl in name.split('__'):\n field = model._meta.get_field(spl)\n if isinstance(field, ForeignObjectRel) and field.is_hidden():\n raise ValueError('Hidden field \"{!r}\" has no descriptor and therefore cannot be included'.format(field))\n model = field.related_model\n ctx = ctx.setdefault(field, OrderedDict())\n\n for field in clone._includes.keys():\n clone._include(field)\n\n return clone", "response": "Returns a new QuerySet instance that will include related objects."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nsearching for DICOM files within a given directory and receive back a dictionary of DICOM files.", "response": "def search(d, recursive=True, store_meta=True):\n '''\n Search for DICOM files within a given directory and receive back a \n dictionary of {StudyInstanceUID: {SeriesNumber: [files]}}\n \n Example usage::\n >>> import yaxil.dicom\n >>> yaxil.dicom.search(\"~/dicoms\").keys()\n ['1.2.340.500067.8.9.10.11012.13000001401516017181900000200']\n \n :param d: Directory name\n :type d: str\n :param recursive: Search recursively\n :type recursive: bool\n :param store_meta: Read and store metadata for each file for fast lookups\n :type store_meta: bool\n :returns: Dictionary of {StudyInstanceUID: {SeriesNumber: [files]}}\n :rtype: dict\n '''\n # say this fast three times\n scans = col.defaultdict(lambda: col.defaultdict(lambda: col.defaultdict(list)))\n for dirpath,dirnames,filenames in os.walk(os.path.expanduser(d)):\n 
for f in filenames:\n fullfile = os.path.join(dirpath, f)\n try:\n d = pydicom.read_file(fullfile, stop_before_pixels=True)\n except pydicom.filereader.InvalidDicomError:\n continue\n meta = {k: getattr(d, k, None) for k in d.dir()} if store_meta else None\n scans[d.StudyInstanceUID][d.SeriesNumber][d.InstanceNumber].append(DicomFile(meta=meta, file=fullfile))\n if not recursive:\n del dirnames[:]\n return scans"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreplace Django s generic error messages with MTP - specific versions", "response": "def replace_default_error_messages():\n \"\"\"\n Replace Django's generic error messages with MTP-specific versions\n NB: avoid trailing full stops visually, they are added for screen readers in templates\n \"\"\"\n forms.Field.default_error_messages['required'] = _('This field is required')\n forms.CharField.default_error_messages['min_length'] = _('You\u2019ve entered too few characters')\n forms.CharField.default_error_messages['max_length'] = _('You\u2019ve entered too many characters')\n forms.IntegerField.default_error_messages['invalid'] = _('Enter a whole number')\n forms.FloatField.default_error_messages['invalid'] = _('Enter a number')\n forms.DecimalField.default_error_messages['invalid'] = _('Enter a number')\n forms.DateField.default_error_messages['invalid'] = _('Enter a valid date')\n forms.TimeField.default_error_messages['invalid'] = _('Enter a valid time')\n forms.DateTimeField.default_error_messages['invalid'] = _('Enter a valid date and time')\n forms.FileField.default_error_messages.update({\n 'invalid': _('No file was submitted'),\n 'missing': _('No file was submitted'),\n 'empty': _('The submitted file is empty'),\n })\n forms.SplitDateTimeField.default_error_messages['invalid_date'] = _('Enter a valid date')\n forms.SplitDateTimeField.default_error_messages['invalid_time'] = _('Enter a valid time')\n validators.EmailValidator.message = _('Enter a valid email address')"} 
{"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nload vocabulary from a file created by word2vec with the -save-vocab option.", "response": "def load_vocab(fin):\n \"\"\"\n Load vocabulary from vocab file created by word2vec with\n ``-save-vocab `` option.\n\n Args:\n fin (File): File-like object to read from.\n encoding (bytes): Encoding of the input file as defined in ``codecs``\n module of Python standard library.\n errors (bytes): Set the error handling scheme. The default error\n handler is 'strict' meaning that encoding errors raise ValueError.\n Refer to ``codecs`` module for more information.\n\n Returns:\n OrderedDict: Mapping from a word (``bytes``) to the number of\n appearance in the original text (``int``). Order is preserved from\n the original vocab file.\n \"\"\"\n vocab = OrderedDict()\n for line in fin:\n v, c = line.strip().split()\n vocab[v] = int(c)\n return vocab"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef authenticate(username, password):\n session = MoJOAuth2Session(\n client=LegacyApplicationClient(\n client_id=settings.API_CLIENT_ID\n )\n )\n\n token = session.fetch_token(\n token_url=get_request_token_url(),\n username=username,\n password=password,\n auth=HTTPBasicAuth(settings.API_CLIENT_ID, settings.API_CLIENT_SECRET),\n timeout=15,\n encoding='utf-8'\n )\n\n user_data = session.get('/users/{username}/'.format(username=username)).json()\n\n return {\n 'pk': user_data.get('pk'),\n 'token': token,\n 'user_data': user_data\n }", "response": "Authenticate the user with the API."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nremove an access token and associated refresh token", "response": "def revoke_token(access_token):\n \"\"\"\n Instructs the API to delete this access token and associated refresh token\n \"\"\"\n response = requests.post(\n get_revoke_token_url(),\n data={\n 'token': access_token,\n 
'client_id': settings.API_CLIENT_ID,\n 'client_secret': settings.API_CLIENT_SECRET,\n },\n timeout=15\n )\n return response.status_code == 200"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning an authenticated api session", "response": "def get_authenticated_api_session(username, password):\n \"\"\"\n :return: an authenticated api session\n \"\"\"\n session = MoJOAuth2Session(\n client=LegacyApplicationClient(\n client_id=settings.API_CLIENT_ID\n )\n )\n\n session.fetch_token(\n token_url=get_request_token_url(),\n username=username,\n password=password,\n auth=HTTPBasicAuth(settings.API_CLIENT_ID, settings.API_CLIENT_SECRET),\n timeout=15,\n encoding='utf-8'\n )\n return session"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncheck if the line0 and line1 are valid.", "response": "def check_valid(line0, line1):\n \"\"\"\n Check :func:`word_embedding_loader.loader.glove.check_valid` for the API.\n \"\"\"\n data0 = line0.split(b' ')\n if len(data0) != 2:\n return False\n # check if data0 is int values\n try:\n map(int, data0)\n _parse_line(line1, float)\n except:\n return False\n return True"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef load_with_vocab(fin, vocab, dtype=np.float32):\n line = next(fin)\n data = line.strip().split(b' ')\n assert len(data) == 2\n size = int(data[1])\n arr = np.empty((len(vocab), size), dtype=dtype)\n arr.fill(np.NaN)\n for line in fin:\n token, v = _load_line(line, dtype, size)\n if token in vocab:\n arr[vocab[token], :] = v\n if np.any(np.isnan(arr)):\n raise ParseError(b\"Some of vocab was not found in word embedding file\")\n return arr", "response": "Load a word embedding file with a vocabulary."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef load(fin, dtype=np.float32, max_vocab=None):\n vocab = {}\n line = next(fin)\n 
data = line.strip().split(b' ')\n assert len(data) == 2\n words = int(data[0])\n if max_vocab is not None:\n words = min(max_vocab, words)\n size = int(data[1])\n arr = np.empty((words, size), dtype=dtype)\n i = 0\n for n_line, line in enumerate(fin):\n if i >= words:\n break\n token, v = _load_line(line, dtype, size)\n if token in vocab:\n parse_warn(b'Duplicated vocabulary ' + token)\n continue\n arr[i, :] = v\n vocab[token] = i\n i += 1\n if i != words:\n # Use + instead of formatting because python 3.4.* does not allow\n # format with bytes\n parse_warn(\n b'EOF before the defined size (read ' + bytes(i) + b', expected '\n + bytes(words) + b')'\n )\n arr = arr[:i, :]\n return arr, vocab", "response": "Load a sequence of word embedding data from a file - like object."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef auth(alias=None, url=None, cfg=\"~/.xnat_auth\"):\n '''\n Read connection details from an xnat_auth XML file\n\n Example:\n >>> import yaxil\n >>> auth = yaxil.auth('xnatastic')\n >>> auth.url, auth.username, auth.password\n ('https://www.xnatastic.org/', 'username', '********')\n\n :param alias: XNAT alias\n :type alias: str\n :param url: XNAT URL\n :type url: str\n :param cfg: Configuration file\n :type cfg: str\n :returns: Named tuple of (url, username, password)\n :rtype: :mod:`yaxil.XnatAuth`\n '''\n if not alias and not url:\n raise ValueError('you must provide an alias or url argument')\n if alias and url:\n raise ValueError('cannot provide both alias and url arguments')\n # check and parse config file\n cfg = os.path.expanduser(cfg)\n if not os.path.exists(cfg):\n raise AuthError(\"could not locate auth file %s\" % cfg)\n tree = etree.parse(os.path.expanduser(cfg))\n # search by alias or url\n res = None\n if alias:\n res = tree.findall(\"./%s\" % alias)\n if url:\n res = tree.findall(\"./*/[url='%s']\" % url)\n if not res:\n raise AuthError(\"failed to locate xnat 
credentials within %s\" % cfg)\n elif len(res) > 1:\n raise AuthError(\"found too many sets of credentials within %s\" % cfg)\n res = res.pop()\n # get url\n url = res.findall(\"url\")\n if not url:\n raise AuthError(\"no url for %s in %s\" % (alias, cfg))\n elif len(url) > 1:\n raise AuthError(\"too many urls for %s in %s\" % (alias, cfg))\n # get username\n username = res.findall(\"username\")\n if not username:\n raise AuthError(\"no username for %s in %s\" % (alias, cfg))\n elif len(username) > 1:\n raise AuthError(\"too many usernames for %s in %s\" % (alias, cfg))\n # get password\n password = res.findall(\"password\")\n if not password:\n raise AuthError(\"no password for %s in %s\" % (alias, cfg))\n elif len(password) > 1:\n raise AuthError(\"too many passwords for %s in %s\" % (alias, cfg))\n return XnatAuth(url=url.pop().text, username=username.pop().text, \n password=password.pop().text)", "response": "Read connection details from an xnat_auth XML file and return a named tuple of url username password"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef subjects(auth, label=None, project=None):\n '''\n Retrieve Subject tuples for subjects returned by this function.\n \n Example:\n >>> import yaxil\n >>> auth = yaxil.XnatAuth(url='...', username='...', password='...')\n >>> yaxil.subjects(auth, 'AB1234C')\n Subject(uri=u'/data/experiments/XNAT_S0001', label=u'AB1234C', id=u'XNAT_S0001', \n project=u'MyProject')\n\n :param auth: XNAT authentication\n :type auth: :mod:`yaxil.XnatAuth`\n :param label: XNAT Subject label\n :type label: str\n :param project: XNAT Subject Project\n :type project: str\n :returns: Subject objects\n :rtype: :mod:`yaxil.Subject`\n '''\n url = '{0}/data/subjects'.format(auth.url.rstrip('/'))\n logger.debug('issuing http request %s', url)\n # compile query string\n columns = [\n 'ID',\n 'label',\n 'project'\n ]\n payload = {\n 'columns': ','.join(columns)\n }\n if 
label:\n payload['label'] = label\n if project:\n payload['project'] = project\n # submit the request\n r = requests.get(url, params=payload, auth=(auth.username, auth.password), \n verify=CHECK_CERTIFICATE)\n # validate response\n if r.status_code != requests.codes.ok:\n raise AccessionError('response not ok ({0}) from {1}'.format(r.status_code, r.url))\n try:\n results = r.json()\n __quick_validate(results)\n except ResultSetError as e:\n raise ResultSetError('{0} from {1}'.format(e.message, r.url))\n results = results['ResultSet']\n if int(results['totalRecords']) == 0:\n raise NoSubjectsError('no records returned from {0}'.format(r.url))\n # start generating consumable results for the caller\n for item in results['Result']:\n yield Subject(uri=item['URI'],\n id=item['ID'],\n project=item['project'],\n label=item['label'])", "response": "Retrieve Subject tuples for subjects returned by this function."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nretrieve experiments for a specific subject.", "response": "def experiments(auth, label=None, project=None, subject=None):\n '''\n Retrieve Experiment tuples for experiments returned by this function.\n \n Example:\n >>> import yaxil\n >>> auth = yaxil.XnatAuth(url='...', username='...', password='...')\n >>> yaxil.experiment(auth, 'AB1234C')\n Experiment(uri=u'/data/experiments/XNAT_E0001', label=u'AB1234C', id=u'XNAT_E0001', \n project=u'MyProject', subject_id=u'XNAT_S0001', subject_label='ABC')\n\n :param auth: XNAT authentication\n :type auth: :mod:`yaxil.XnatAuth`\n :param label: XNAT Experiment label\n :type label: str\n :param project: XNAT Experiment Project\n :type project: str\n :param subject: YAXIL Subject\n :type subject: :mod:`yaxil.Subject`\n :returns: Experiment object\n :rtype: :mod:`yaxil.Experiment`\n '''\n if subject and (label or project):\n raise ValueError('cannot provide subject with label or project')\n url = 
'{0}/data/experiments'.format(auth.url.rstrip('/'))\n logger.debug('issuing http request %s', url)\n # compile query string\n columns = [\n 'ID',\n 'label',\n 'project',\n 'xnat:subjectassessordata/subject_id',\n 'subject_label',\n 'insert_date'\n ]\n payload = {\n 'columns': ','.join(columns)\n }\n if label:\n payload['label'] = label\n if project:\n payload['project'] = project\n if subject:\n payload['project'] = subject.project\n payload['xnat:subjectassessordata/subject_id'] = subject.id\n # submit request\n r = requests.get(url, params=payload, auth=(auth.username, auth.password), \n verify=CHECK_CERTIFICATE)\n # validate response\n if r.status_code != requests.codes.ok:\n raise AccessionError('response not ok ({0}) from {1}'.format(r.status_code, r.url))\n try:\n results = r.json()\n __quick_validate(results)\n except ResultSetError as e:\n raise ResultSetError('{0} from {1}'.format(e.message, r.url))\n results = results['ResultSet']\n if int(results['totalRecords']) == 0:\n raise NoExperimentsError('no records returned for {0}'.format(r.url))\n for item in results['Result']:\n yield Experiment(uri=item['URI'], \n id=item['ID'], \n project=item['project'], \n label=item['label'], \n subject_id=item['subject_ID'], \n subject_label=item['subject_label'], \n archived_date=item['insert_date'])"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nget the Accession ID for any label.", "response": "def accession(auth, label, project=None):\n '''\n Get the Accession ID for any Experiment label.\n\n Example:\n >>> import yaxil\n >>> auth = yaxil.XnatAuth(url='...', username='...', password='...')\n >>> yaxil.accession(auth, 'AB1234C')\n u'XNAT_E00001'\n\n :param auth: XNAT authentication\n :type auth: :mod:`yaxil.XnatAuth`\n :param label: XNAT Experiment label\n :type label: str\n :param project: XNAT Experiment Project\n :type project: str\n :returns: Accession ID\n :rtype: str\n '''\n return list(experiments(auth, label, project))[0].id"} 
{"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef download(auth, label, scan_ids=None, project=None, aid=None,\n out_dir='.', in_mem=True, progress=False, attempts=1):\n '''\n Download scan data from XNAT.\n \n Example:\n >>> import yaxil\n >>> auth = yaxil.XnatAuth(url='...', username='...', password='...')\n >>> yaxil.download(auth, 'AB1234C', ['1', '2'], out_dir='./data')\n\n :param auth: XNAT authentication object\n :type auth: :mod:`yaxil.XnatAuth`\n :param label: XNAT MR Session label\n :type label: str\n :param scan_ids: Scan numbers to return; use None for all\n :type scan_ids: list\n :param project: XNAT MR Session project\n :type project: str\n :param aid: XNAT Accession ID\n :type aid: str\n :param out_dir: Output directory\n :type out_dir: str\n :param in_mem: Keep download content in memory; faster but uses more memory\n :type in_mem: bool\n :param progress: Show download progress every N bytes\n :type progress: int\n :param attempts: Number of download attempts\n :type attempts: int\n ''' \n if not scan_ids:\n scan_ids = ['ALL']\n if not aid:\n aid = accession(auth, label, project)\n # build the url\n url = \"%s/data/experiments/%s/scans/%s/files?format=zip\" % (auth.url.rstrip('/'), \n aid, ','.join([str(x) for x in scan_ids]))\n # issue the http request, with exponential backoff retry behavior\n backoff = 10\n for _ in range(attempts): \n logger.debug(\"issuing http request %s\", url)\n r = requests.get(url, stream=True, auth=(auth.username, auth.password), verify=CHECK_CERTIFICATE)\n logger.debug(\"response headers %s\", r.headers)\n if r.status_code == requests.codes.ok:\n break\n fuzz = random.randint(0, 10)\n logger.warn(\"download unsuccessful (%s), retrying in %s seconds\", r.status_code, \n backoff + fuzz)\n time.sleep(backoff + fuzz)\n backoff *= 2\n # if we still have a not-ok status at this point, the download failed\n if r.status_code != requests.codes.ok:\n raise DownloadError(\"response not 
ok (%s) from %s\" % (r.status_code, r.url))\n # create output directory\n if not os.path.exists(out_dir):\n os.makedirs(out_dir)\n # keep response content in memory or write to a file (memory is obviously faster, but limited)\n if in_mem:\n content = io.BytesIO()\n logger.debug(\"response content will be read into memory\")\n else:\n content = tf.NamedTemporaryFile(dir=out_dir, prefix=\"xnat\", suffix=\".zip\")\n logger.debug(\"response content will be stored on disk %s\", content.name)\n # progress indicator setup\n if progress:\n sys.stdout.write(\"reading response data: \")\n sys.stdout.flush()\n # read response content in chunks\n meter = 0\n chunk_size = 1024\n for chunk in r.iter_content(chunk_size=chunk_size):\n if progress and meter >= progress:\n sys.stdout.write(next(commons.spinner)); sys.stdout.flush()\n sys.stdout.write('\\b')\n meter = 0\n content.write(chunk)\n meter += chunk_size\n # progress indicator shut down\n if progress:\n sys.stdout.write('done.\\n'); sys.stdout.flush()\n # load reponse content into a zipfile object\n try:\n zf = zipfile.ZipFile(content, allowZip64=True)\n except zipfile.BadZipfile:\n with tf.NamedTemporaryFile(dir=out_dir, prefix=\"xnat\",\n suffix=\".zip\", delete=False) as fo:\n content.seek(0)\n fo.write(content.read())\n raise DownloadError(\"bad zip file, written to %s\" % fo.name)\n # finally extract the zipfile (with various nasty edge cases handled)\n logger.debug(\"extracting zip archive to %s\", out_dir)\n extract(zf, content, out_dir)", "response": "Download scan data from XNAT MR Session."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef extract(zf, content, out_dir='.'):\n '''\n Extracting a Java 1.6 XNAT ZIP archive in Python.\n\n :param zf: ZipFile object\n :type zf: zipfile.ZipFile\n :param out_dir: Output directory\n :type out_dir: str\n '''\n previous_header_offset = 0\n compensation = Namespace(value=2**32, factor=0)\n for i,member in 
enumerate(zf.infolist()):\n '''\n Right... so when Java 1.6 produces a Zip filesystem that exceeds 2^32\n bytes, the Central Directory local file header offsets after the 2^32 \n byte appear to overflow. The Python zipfile module then adds any \n unexpected bytes to each header offset thereafter. This attempts to fix \n that. My guess is that this comment might make perfect sense now, but \n will make absolutely no sense in about a year.\n '''\n # undo concat padding added from zipfile.py:819\n if i == 0:\n concat = member.header_offset\n member.header_offset -= concat\n # if a header offset moves backward, add 2^32 bytes * factor\n if previous_header_offset > member.header_offset:\n compensation.factor += 1\n previous_header_offset = member.header_offset\n member.header_offset += compensation.value * compensation.factor\n # read the archive member into a bytes file-like object\n try:\n bio = io.BytesIO(zf.read(member.filename))\n except zipfile.BadZipfile:\n with tf.NamedTemporaryFile(dir=out_dir, prefix=\"xnat\",\n suffix=\".zip\", delete=False) as fo:\n content.seek(0)\n fo.write(content.read())\n raise DownloadError(\"bad zip file, written to %s\" % fo.name)\n # xnat archives may contain files that are gzipped without the .gz\n if not member.filename.endswith(\".gz\"):\n try:\n gz = gzip.GzipFile(fileobj=bio, mode=\"rb\")\n gz.read()\n bio = gz\n except IOError:\n pass\n # write the file out to the filesystem\n bio.seek(0)\n f = os.path.join(out_dir, os.path.basename(member.filename))\n with open(f, \"wb\") as fo:\n fo.write(bio.read())", "response": "Extract a Java 1.6 XNAT ZIP archive to an output directory."}
{"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef scansearch(auth, label, filt, project=None, aid=None):\n '''\n Search for scans by supplying a set of SQL-based conditionals.\n\n Example:\n >>> import yaxil\n >>> auth = yaxil.XnatAuth(url='...', username='...', password='...')\n >>> query = {\n ... 'eor1': \"note LIKE %EOR1%\",\n ... 'eor2': \"note LIKE %EOR2%\",\n ... 'mpr': \"series_description='T1_MEMPRAGE RMS' OR note LIKE %ANAT%\"\n ... }\n >>> yaxil.scansearch(auth, 'AB1234C', query)\n {\"mpr\": [4], \"eor1\": [13], \"eor2\": [14]}\n\n :param auth: XNAT authentication object\n :type auth: :mod:`yaxil.XnatAuth`\n :param label: XNAT MR Session label\n :type label: str\n :param filt: Scan search filter/query\n :type filt: dict\n :param project: XNAT MR Session project\n :type project: str\n :param aid: XNAT Accession ID\n :type aid: str\n :returns: Same dictionary that was passed in, but values are now matching scans\n :rtype: dict\n '''\n if not aid:\n aid = accession(auth, label, project)\n # get scans for accession as a csv\n url = \"%s/data/experiments/%s/scans?format=csv\" % (auth.url.rstrip('/'), aid)\n logger.debug(\"issuing http request %s\", url)\n r = requests.get(url, auth=(auth.username, auth.password), verify=CHECK_CERTIFICATE)\n if r.status_code != requests.codes.ok:\n raise ScanSearchError(\"response not ok (%s) from %s\" % (r.status_code, r.url))\n if not r.content:\n raise ScanSearchError(\"response is empty from %s\" % r.url)\n # read the result into a csv reader\n reader = csv.reader(io.StringIO(r.content.decode()))\n columns = next(reader)\n # create an in-memory database\n conn = sqlite3.connect(\":memory:\")\n c = conn.cursor()\n # create scans table and insert data\n c.execute(\"CREATE TABLE scans (%s)\" % ','.join(columns))\n query = \"INSERT INTO scans VALUES (%s)\" % ','.join('?' 
* len(columns))\n for row in reader:\n c.execute(query, [x for x in row])\n conn.commit()\n # run the user supplied filters and return result\n result = col.defaultdict(list)\n for token,filt in iter(filt.items()):\n try:\n result[token] = [x[0] for x in c.execute(\"SELECT ID FROM scans where %s\" % filt)]\n except sqlite3.OperationalError:\n logger.critical(\"something is wrong with the filter: %s\", filt)\n raise\n return result", "response": "Search for scans by supplying a set of SQL - based conditionals."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef scans(auth, label=None, scan_ids=None, project=None, experiment=None):\n '''\n Get scan information for a MR Session as a sequence of dictionaries.\n\n Example:\n >>> import yaxil\n >>> import json\n >>> auth = yaxil.XnatAuth(url='...', username='...', password='...')\n >>> for scan in yaxil.scans2(auth, 'AB1234C'):\n ... print(json.dumps(scan, indent=2))\n\n :param auth: XNAT authentication object\n :type auth: :mod:`yaxil.XnatAuth`\n :param label: XNAT MR Session label\n :type label: str\n :param scan_ids: Scan numbers to return\n :type scan_ids: list\n :param project: XNAT MR Session project\n :type project: str\n :param experiment: YAXIL Experiment\n :type experiment: :mod:`yaxil.Experiment`\n :returns: Generator of scan data dictionaries\n :rtype: dict\n '''\n if experiment and (label or project):\n raise ValueError('cannot supply experiment with label or project')\n if experiment:\n label,project = experiment.label,experiment.project\n aid = accession(auth, label, project)\n path = '/data/experiments'\n params = {\n 'xsiType': 'xnat:mrSessionData',\n 'columns': ','.join(scans.columns.keys())\n }\n params['xnat:mrSessionData/ID'] = aid\n _,result = _get(auth, path, 'json', autobox=True, params=params)\n for result in result['ResultSet']['Result']:\n if scan_ids == None or result['xnat:mrscandata/id'] in scan_ids:\n data = dict()\n for k,v in 
iter(scans.columns.items()):\n data[v] = result[k]\n yield data", "response": "Get information about a MR Session as a sequence of dictionaries."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngetting ExtendedBOLDQC data as a sequence of dictionaries.", "response": "def extendedboldqc(auth, label, scan_ids=None, project=None, aid=None):\n '''\n Get ExtendedBOLDQC data as a sequence of dictionaries.\n\n Example:\n >>> import yaxil\n >>> import json\n >>> auth = yaxil.XnatAuth(url='...', username='...', password='...')\n >>> for eqc in yaxil.extendedboldqc2(auth, 'AB1234C')\n ... print(json.dumps(eqc, indent=2))\n\n :param auth: XNAT authentication object\n :type auth: :mod:`yaxil.XnatAuth`\n :param label: XNAT MR Session label\n :type label: str\n :param scan_ids: Scan numbers to return\n :type scan_ids: list\n :param project: XNAT MR Session project\n :type project: str\n :param aid: XNAT Accession ID\n :type aid: str\n :returns: Generator of scan data dictionaries\n :rtype: :mod:`dict`\n '''\n if not aid:\n aid = accession(auth, label, project)\n path = '/data/experiments'\n params = {\n 'xsiType': 'neuroinfo:extendedboldqc',\n 'columns': ','.join(extendedboldqc.columns.keys())\n }\n if project:\n params['project'] = project\n params['xnat:mrSessionData/ID'] = aid\n _,result = _get(auth, path, 'json', autobox=True, params=params)\n for result in result['ResultSet']['Result']:\n if scan_ids == None or result['neuroinfo:extendedboldqc/scan/scan_id'] in scan_ids:\n data = dict()\n for k,v in iter(extendedboldqc.columns.items()):\n data[v] = result[k]\n yield data"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _get(auth, path, fmt, autobox=True, params=None):\n '''\n Issue a GET request to the XNAT REST API and box the response content.\n \n Example:\n >>> import yaxil\n >>> from yaxil import Format\n >>> auth = yaxil.XnatAuth(url='...', username='...', password='...')\n >>> yaxil.get(auth, 
'/data/experiments', Format.JSON)\n\n :param auth: XNAT authentication\n :type auth: :mod:`yaxil.XnatAuth`\n :param path: API URL path\n :type path: str\n :param fmt: API result format\n :type fmt: :mod:`yaxil.Format`\n :param autobox: Autobox response content into an appropriate reader or other data structure\n :type autobox: bool\n :param params: Additional query parameters\n :type params: dict\n :returns: Tuple of (URL, :mod:`dict` | :mod:`xml.etree.ElementTree` | :mod:`csv.reader` | :mod:`str`)\n :rtype: tuple\n '''\n if not params:\n params = {}\n url = \"%s/%s\" % (auth.url.rstrip('/'), path.lstrip('/'))\n params[\"format\"] = fmt\n logger.debug(\"issuing http request %s\", url)\n logger.debug(\"query parameters %s\", params)\n r = requests.get(url, params=params, auth=(auth.username, auth.password), verify=CHECK_CERTIFICATE)\n if r.status_code != requests.codes.ok:\n raise RestApiError(\"response not ok (%s) from %s\" % (r.status_code, r.url))\n if not r.content:\n raise RestApiError(\"response is empty from %s\" % r.url)\n if autobox:\n return r.url,_autobox(r.text, fmt)\n else:\n return r.url,r.content", "response": "Issue a GET request to the XNAT REST API and box the response content."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ntest if a project contains any items of a particular xsi type.", "response": "def has(auth, xsitype, project=None):\n '''\n Test if a project contains any items of a particular xsi:type.\n\n Example:\n >>> import yaxil\n >>> auth = yaxil.XnatAuth(url='...', username='...', password='...')\n >>> yaxil.has(auth, 'neuroinfo:extendedboldqc', project='MyProject')\n\n :param auth: XNAT authentication\n :type auth: :mod:`yaxil.XnatAuth`\n :param xsitype: XNAT xsi:type\n :param xsitype: str\n :param project: XNAT Project\n :type project: str\n :returns: True or False\n :rtype: bool\n '''\n path = \"/data/experiments\"\n params = {\n \"xsiType\": xsitype,\n \"columns\": 'ID'\n }\n if 
project:\n params[\"project\"] = project\n url,result = _get(auth, path, Format.JSON, autobox=True, params=params)\n try:\n __quick_validate(result)\n except ResultSetError as e:\n raise ResultSetError(\"%s in response from %s\" % (e.message, url))\n if int(result[\"ResultSet\"][\"totalRecords\"]) == 0:\n return False\n return True"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_user(request):\n if not hasattr(request, '_cached_user'):\n request._cached_user = auth_get_user(request)\n return request._cached_user", "response": "Returns a cached copy of the user if it exists."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef ensure_compatible_admin(view):\n\n def wrapper(request, *args, **kwargs):\n user_roles = request.user.user_data.get('roles', [])\n if len(user_roles) != 1:\n context = {\n 'message': 'I need to be able to manage user accounts. '\n 'My username is %s' % request.user.username\n }\n return render(request, 'mtp_common/user_admin/incompatible-admin.html', context=context)\n return view(request, *args, **kwargs)\n\n return wrapper", "response": "Decorator that ensures the requesting user has exactly one role before allowing access to the view."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef fault_barrier(fn):\n @functools.wraps(fn)\n def process(self, tup):\n try:\n return fn(self, tup)\n except Exception as e:\n if isinstance(e, KeyboardInterrupt):\n return\n print(str(e), file=sys.stderr)\n self.fail(tup)\n return process", "response": "Method decorator to catch and log errors then send fail message."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef initialize(self, conf, ctx):\n self.manager = get_search_manager()\n config = get_config()['TwitterSearchBolt']\n self.term_shelf = 
shelf_from_config(config)", "response": "Initialization steps:\n\n 1. Get :func:`~birding.search.search_manager_from_config`.\n 2. Prepare to track searched terms as to avoid redundant searches."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nprocesses the term_shelf and emit the results.", "response": "def process(self, tup):\n \"\"\"Process steps:\n\n 1. Stream in (term, timestamp).\n 2. Perform :meth:`~birding.search.SearchManager.search` on term.\n 3. Emit (term, timestamp, search_result).\n \"\"\"\n term, timestamp = tup.values\n if term not in self.term_shelf:\n self.log(\n 'search: {term}, {timestamp}'\n .format(term=term, timestamp=timestamp))\n search_result = self.manager.search(q=term)\n self.emit([term, timestamp, search_result])\n self.term_shelf[term] = timestamp"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef process(self, tup):\n term, timestamp, search_result = tup.values\n self.log(\n 'lookup: {term}, {timestamp}'\n .format(term=term, timestamp=timestamp))\n lookup_result = self.manager.lookup_search_result(search_result)\n self.emit([term, timestamp, lookup_result])", "response": "Process the tuple and emit the result."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef initialize(self, conf, ctx):\n config = get_config()['ElasticsearchIndexBolt']\n elasticsearch_class = import_name(config['elasticsearch_class'])\n self.es = elasticsearch_class(**config['elasticsearch_init'])\n self.index = config['index']\n self.doc_type = config['doc_type']", "response": "Initialization steps:\n\n 1. 
Prepare elasticsearch connection, including details for indexing."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef process(self, tup):\n self.es.bulk(\n self.generate_bulk_body(tup.values[2]),\n index=self.index,\n doc_type=self.doc_type)", "response": "Bulk-index the third positional value of the tuple into Elasticsearch."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef initialize(self, conf, ctx):\n config = get_config()['ResultTopicBolt']\n kafka_class = import_name(config['kafka_class'])\n self.client = kafka_class(**config['kafka_init'])\n self.topic = self.client.topics[config['topic']]\n self.producer = self.topic.get_producer()\n\n # Use own default index value while still allowing user config.\n self.tweet_shelf = shelf_from_config(config, index='pre_kafka_shelf')", "response": "Initialize the object with the given configuration."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nprocessing steps: 1. Stream third positional value from input into Kafka topic.", "response": "def process(self, tup):\n \"\"\"Process steps:\n\n 1. 
Stream third positional value from input into Kafka topic.\n \"\"\"\n status_seq = self.iter_using_shelf(tup.values[2], self.tweet_shelf)\n # This could be more efficient by passing the result from twitter\n # straight through to the producer, instead of deserializing and\n # reserializing json.\n self.producer.produce(json.dumps(status) for status in status_seq)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef search_manager_from_config(config, **default_init):\n manager_cls = import_name(config['class'], default_ns='birding.search')\n init = {}\n init.update(default_init)\n init.update(config['init'])\n manager = manager_cls(**init)\n return manager", "response": "Return a SearchManager instance dynamically based on config."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef bids_from_config(sess, scans_metadata, config, out_base):\n '''\n Create a BIDS output directory from configuration file\n '''\n # get session and subject labels from scan metadata\n _item = next(iter(scans_metadata))\n session,subject = _item['session_label'],_item['subject_label']\n # bids and sourcedata base directories\n sourcedata_base = os.path.join(\n out_base,\n 'sourcedata',\n 'sub-{0}'.format(legal.sub('', subject)),\n 'ses-{0}'.format(legal.sub('', session))\n )\n bids_base = os.path.join(\n out_base,\n 'sub-{0}'.format(legal.sub('', subject)),\n 'ses-{0}'.format(legal.sub('', session))\n )\n # put arguments in a struct\n args = commons.struct(\n xnat=sess,\n subject=subject,\n session=session,\n bids=bids_base,\n sourcedata=sourcedata_base\n )\n # process func, anat, and fmap\n func_refs = proc_func(config, args)\n anat_refs = proc_anat(config, args)\n fmap_refs = proc_fmap(config, args, func_refs)", "response": "Create a BIDS output directory from a configuration file."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 
function, write the documentation\ndef proc_anat(config, args):\n '''\n Download anatomical data and convert to BIDS\n '''\n refs = dict()\n for scan in iterconfig(config, 'anat'):\n ref = scan.get('id', None)\n templ = 'sub-${sub}_ses-${ses}'\n if 'acquisition' in scan:\n templ += '_acq-${acquisition}'\n if 'run' in scan:\n templ += '_run-${run}'\n templ += '_${modality}'\n templ = string.Template(templ)\n fbase = templ.safe_substitute(\n sub=legal.sub('', args.subject),\n ses=legal.sub('', args.session),\n acquisition=scan.get('acquisition', None),\n run=scan.get('run', None),\n modality=scan.get('modality', None),\n )\n # download data to bids sourcedata directory\n sourcedata_dir = os.path.join(args.sourcedata, scan['type'])\n if not os.path.exists(sourcedata_dir):\n os.makedirs(sourcedata_dir)\n dicom_dir = os.path.join(sourcedata_dir, '{0}.dicom'.format(fbase))\n logger.info('downloading session=%s, scan=%s, loc=%s', args.session, scan['scan'], dicom_dir)\n args.xnat.download(args.session, [scan['scan']], out_dir=dicom_dir)\n # convert to nifti\n fname = '{0}.nii.gz'.format(fbase)\n refs[ref] = os.path.join(scan['type'], fname)\n fullfile = os.path.join(args.bids, scan['type'], fname)\n logger.info('converting %s to %s', dicom_dir, fullfile)\n convert(dicom_dir, fullfile)\n return refs", "response": "Download anatomical data and convert to BIDS"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef iterconfig(config, scan_type):\n '''\n Iterate over BIDS configuration file\n '''\n if scan_type in config:\n for modality,scans in iter(config[scan_type].items()):\n for scan in scans:\n scan.update({\n 'type': scan_type,\n 'modality': modality\n })\n yield scan", "response": "Iterate over BIDS configuration file"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nrenames magnitude fieldmap file to BIDS specification", "response": "def rename_fmapm(bids_base, 
basename):\n '''\n Rename magnitude fieldmap file to BIDS specification\n '''\n files = dict()\n for ext in ['nii.gz', 'json']:\n for echo in [1, 2]:\n fname = '{0}_e{1}.{2}'.format(basename, echo, ext)\n src = os.path.join(bids_base, 'fmap', fname)\n if os.path.exists(src):\n dst = src.replace(\n 'magnitude_e{0}'.format(echo),\n 'magnitude{0}'.format(echo)\n )\n logger.debug('renaming %s to %s', src, dst)\n os.rename(src, dst)\n files[ext] = dst\n return files"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nruns dcm2niix on input file and write to output file", "response": "def convert(input, output):\n '''\n Run dcm2niix on input file\n '''\n dirname = os.path.dirname(output)\n if not os.path.exists(dirname):\n os.makedirs(dirname)\n basename = os.path.basename(output)\n basename = re.sub('.nii(.gz)?', '', basename)\n dcm2niix = commons.which('dcm2niix')\n cmd = [\n 'dcm2niix',\n '-s', 'y',\n '-b', 'y',\n '-z', 'y',\n '-f', basename,\n '-o', dirname,\n input\n ]\n logger.debug(cmd)\n sp.check_output(cmd)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nadds messages for each error returned from the MTP API", "response": "def api_errors_to_messages(request, error, fallback_text):\n \"\"\"\n Adds messages for each error returned from the MTP apis\n Useful for displaying errors when there's no form on the page\n :param request: HttpRequest\n :param error: HttpClientError\n :param fallback_text: fallback error text\n \"\"\"\n try:\n response_body = json.loads(error.content.decode('utf-8'))\n for field, errors in response_body.items():\n if isinstance(errors, list):\n for error in errors:\n messages.error(request, error)\n else:\n messages.error(request, errors)\n except (AttributeError, ValueError, KeyError):\n messages.error(request, fallback_text)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef 
retrieve_all_pages(api_endpoint, **kwargs):\n page_size = getattr(settings, 'REQUEST_PAGE_SIZE', 20)\n loaded_results = []\n\n offset = 0\n while True:\n response = api_endpoint(limit=page_size, offset=offset,\n **kwargs)\n count = response.get('count', 0)\n loaded_results += response.get('results', [])\n if len(loaded_results) >= count:\n break\n offset += page_size\n\n return loaded_results", "response": "This method loads all pages from the API endpoint and returns them as a list"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef retrieve_all_pages_for_path(session, path, **params):\n page_size = getattr(settings, 'REQUEST_PAGE_SIZE', 20)\n loaded_results = []\n\n offset = 0\n while True:\n response = session.get(\n path,\n params=dict(limit=page_size, offset=offset, **params)\n )\n content = response.json()\n count = content.get('count', 0)\n loaded_results += content.get('results', [])\n if len(loaded_results) >= count:\n break\n offset += page_size\n\n return loaded_results", "response": "This method loads all pages for a given path and returns a list of all the results."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ndispatch a single object to the appropriate spout class based on config.", "response": "def DispatchSpout(*a, **kw):\n \"\"\"Factory to dispatch spout class based on config.\"\"\"\n spout_class_name = get_config()['Spout']\n spout_class = import_name(spout_class_name, default_ns='birding.spout')\n return spout_class(*a, **kw)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ninitialize the term cycle sequence based on config.", "response": "def initialize(self, stormconf, context):\n \"\"\"Initialization steps:\n\n 1. 
Prepare sequence of terms based on config: TermCycleSpout/terms.\n \"\"\"\n self.terms = get_config()['TermCycleSpout']['terms']\n self.term_seq = itertools.cycle(self.terms)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef next_tuple(self):\n term = next(self.term_seq)\n timestamp = datetime.datetime.utcnow().isoformat()\n self.emit([term, timestamp], tup_id=self.pack_tup_id(term, timestamp))", "response": "Next tuple steps:\n\n 1. Emit (term, timestamp) for next term in sequence w/current UTC time."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ncast a basestring to a more appropriate type.", "response": "def cast(s):\n '''\n Cast a basestring to a more appropriate type.\n Example::\n >>> from yaxil import cast\n >>> type(cast(\"999\"))\n \n \n :param s: String\n :type s: basestring\n :returns: Casted string\n :rtype: int|float|str\n '''\n if not isinstance(s, basestring):\n raise TypeError(\"argument must be a string\")\n for test in [int, float, str]:\n try:\n return test(s)\n except ValueError:\n continue\n return str(s)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nwrite a file atomically by writing the content to a temporary location.", "response": "def atomic_write(filename, content, overwrite=True, permissions=0o0644, encoding='utf-8'):\n '''\n Write a file atomically by writing the file content to a\n temporary location first, then renaming the file. \n \n TODO: this relies pretty heavily on os.rename to ensure atomicity, but \n os.rename does not silently overwrite files that already exist on \n Windows natively. 
For now, the functionality provided here can only be \n supported under Windows Subsystem for Linux on Windows 10 version 1607 \n and later.\n\n :param filename: Filename\n :type filename: str\n :param content: File content\n :type content: str\n :param overwrite: Overwrite\n :type overwrite: bool\n :param permissions: Octal permissions\n :type permissions: octal\n '''\n filename = os.path.expanduser(filename)\n if not overwrite and os.path.exists(filename):\n raise WriteError('file already exists: {0}'.format(filename))\n dirname = os.path.dirname(filename)\n with tf.NamedTemporaryFile(dir=dirname, prefix='.', delete=False) as tmp:\n if isinstance(content, six.string_types):\n tmp.write(content.decode(encoding))\n else:\n tmp.write(content)\n os.chmod(tmp.name, permissions)\n os.rename(tmp.name, filename)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef which(x):\n '''\n Same as which command on Linux\n '''\n for p in os.environ.get('PATH').split(os.pathsep):\n p = os.path.join(p, x)\n if os.path.exists(p):\n return os.path.abspath(p)\n return None", "response": "Same as which command on Linux\n "} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef serve(context: Context, port=8000, browsersync_port=3000, browsersync_ui_port=3030):\n try:\n from watchdog.observers import Observer\n except ImportError:\n context.pip_command('install', 'watchdog>0.8,<0.9')\n from watchdog.observers import Observer\n\n from watchdog.events import PatternMatchingEventHandler\n\n class RebuildHandler(PatternMatchingEventHandler):\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self._patterns = ['*.js', '*.scss', '*.html']\n self._ignore_directories = True\n self.builder = None\n self.rebuild_javascript = threading.Event()\n self.rebuild_stylesheets = threading.Event()\n\n def on_any_event(self, event):\n if self.builder:\n self.builder.cancel()\n extension = 
event.src_path.rsplit('.', 1)[-1].lower()\n if extension == 'js':\n self.rebuild_javascript.set()\n elif extension == 'scss':\n self.rebuild_stylesheets.set()\n self.builder = threading.Timer(3, self.rebuild)\n self.builder.start()\n\n def rebuild(self):\n if self.rebuild_javascript.is_set():\n self.rebuild_javascript.clear()\n context.debug('Triggering javascript build')\n bundle_javascript(context)\n if self.rebuild_stylesheets.is_set():\n self.rebuild_stylesheets.clear()\n context.debug('Triggering stylesheet build')\n bundle_stylesheets(context)\n context.debug('Reloading browsers')\n context.node_tool('browser-sync', 'reload', '--port=%s' % browsersync_port)\n\n context.info('Watching sources')\n observer = Observer()\n paths = [\n context.app.common_asset_source_path,\n context.app.asset_source_path,\n context.app.common_templates_path,\n context.app.templates_path,\n ]\n handler = RebuildHandler()\n for path in paths:\n observer.schedule(handler, path, recursive=True)\n observer.setDaemon(True)\n observer.start()\n\n context.info('Starting browser sync')\n browsersync_args = ['start', '--host=localhost', '--no-open',\n '--logLevel', {0: 'silent', 1: 'info', 2: 'debug'}[context.verbosity],\n '--port=%s' % browsersync_port, '--proxy=localhost:%s' % port,\n '--ui-port=%s' % browsersync_ui_port]\n browsersync = functools.partial(context.node_tool, 'browser-sync', *browsersync_args)\n threading.Thread(target=browsersync, daemon=True).start()\n\n context.info('Starting web server')\n return start(context, port=port)", "response": "Starts a development server with auto - rebuilding and live - reloading."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef create_build_paths(context: Context):\n paths = [context.app.asset_build_path, context.app.screenshots_build_path, context.app.collected_assets_path]\n for path in filter(None, paths):\n os.makedirs(path, exist_ok=True)", "response": "Creates directories needed for build 
outputs"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nupdating node. js dependencies", "response": "def node_dependencies(context: Context):\n \"\"\"\n Updates node.js dependencies\n \"\"\"\n args = ['--loglevel', {0: 'silent', 1: 'warn', 2: 'info'}[context.verbosity]]\n if not context.use_colour:\n args.append('--color false')\n args.append('install')\n return context.shell('npm', *args)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nrunning the app in a docker container ; for local development only!", "response": "def local_docker(context: Context):\n \"\"\"\n Runs the app in a docker container; for local development only!\n Once performed, `docker-compose up` can be used directly\n \"\"\"\n output = io.StringIO()\n with contextlib.redirect_stdout(output):\n context.shell('docker-machine', 'ip', 'default')\n host_machine_ip = output.getvalue().strip() or socket.gethostbyname(socket.gethostname())\n args = ()\n if context.verbosity > 1:\n args += ('--verbose',)\n args += ('up', '--build', '--remove-orphans')\n if not context.use_colour:\n args += ('--no-color',)\n context.shell('docker-compose', *args, environment={'HOST_MACHINE_IP': host_machine_ip})"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncompile stylesheets and returns the code to be used in the bundle.", "response": "def bundle_stylesheets(context: Context):\n \"\"\"\n Compiles stylesheets\n \"\"\"\n args = [\n '--output', context.app.scss_build_path,\n '--output-style', 'compressed',\n ]\n if context.verbosity == 0:\n args.append('--quiet')\n if not context.use_colour:\n args.append('--no-color')\n for path in context.app.scss_include_paths:\n args.append('--include-path')\n args.append(path)\n return_code = 0\n for source_file in context.app.scss_source_file_set.paths_for_shell(separator=None):\n return_code = context.node_tool('node-sass', *args + [source_file]) or return_code\n return return_code"} 
{"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef lint_javascript(context: Context):\n args = [\n '--config', os.path.join(context.app.common_templates_path, 'mtp_common', 'build_tasks', 'eslintrc.json'),\n '--format', 'stylish',\n ]\n if context.verbosity == 0:\n args.append('--quiet')\n if not context.use_colour:\n args.append('--no-color')\n args.append(context.app.javascript_source_path)\n return context.node_tool('eslint', *args)", "response": "Tests javascript for code and style errors"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef lint_stylesheets(context: Context):\n args = [\n '--config', os.path.join(context.app.common_templates_path, 'mtp_common', 'build_tasks', 'sass-lint.yml'),\n '--format', 'stylish',\n '--syntax', 'scss',\n ]\n if context.verbosity > 1:\n args.append('--verbose')\n args.append(os.path.join(context.app.scss_source_path, '**', '*.scss'))\n return context.node_tool('sass-lint', *args)", "response": "Tests stylesheets for code and style errors"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ninstalling GOV. 
UK template", "response": "def govuk_template(context: Context, version='0.23.0', replace_fonts=True):\n \"\"\"\n Installs GOV.UK template\n \"\"\"\n if FileSet(os.path.join(context.app.govuk_templates_path, 'base.html')):\n # NB: check is only on main template and not the assets included\n return\n url = 'https://github.com/alphagov/govuk_template/releases' \\\n '/download/v{0}/django_govuk_template-{0}.tgz'.format(version)\n try:\n context.shell('curl --location %(silent)s --output govuk_template.tgz %(url)s' % {\n 'silent': '--silent' if context.verbosity == 0 else '',\n 'url': url,\n })\n context.shell('tar xzf govuk_template.tgz ./govuk_template')\n rsync_flags = '-avz' if context.verbosity == 2 else '-az'\n context.shell('rsync %s govuk_template/static/ %s/' % (rsync_flags, context.app.asset_build_path))\n context.shell('rsync %s govuk_template/templates/ %s/' % (rsync_flags, context.app.templates_path))\n finally:\n context.shell('rm -rf govuk_template.tgz ./govuk_template')\n if replace_fonts:\n # govuk_template includes .eot font files for IE, but they load relative to current URL not the stylesheet\n # this option removes these files so common's fonts-ie8.css override is used\n context.shell('rm -rf %s/stylesheets/fonts-ie8.css'\n ' %s/stylesheets/fonts/' % (context.app.asset_build_path, context.app.asset_build_path))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ncollect assets from GOV. 
UK frontend toolkit", "response": "def additional_assets(context: Context):\n \"\"\"\n Collects assets from GOV.UK frontend toolkit\n \"\"\"\n rsync_flags = '-avz' if context.verbosity == 2 else '-az'\n for path in context.app.additional_asset_paths:\n context.shell('rsync %s %s %s/' % (rsync_flags, path, context.app.asset_build_path))"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef precompile_python_code(context: Context):\n from compileall import compile_dir\n\n kwargs = {}\n if context.verbosity < 2:\n kwargs['quiet'] = True\n compile_dir(context.app.django_app_name, **kwargs)", "response": "Pre - compiles python modules"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nmake all the messages in the current language.", "response": "def make_messages(context: Context, javascript=False, fuzzy=False):\n \"\"\"\n Collects text into translation source files\n \"\"\"\n kwargs = {\n 'all': True,\n 'keep_pot': True,\n 'no_wrap': True,\n }\n if fuzzy:\n kwargs['allow_fuzzy'] = True\n if javascript:\n kwargs.update(domain='djangojs', ignore_patterns=['*.bundle.js'])\n with in_dir(context.app.django_app_name):\n return context.management_command('makemessages', **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nsynchronising translations with transifex. 
com", "response": "def translations(context: Context, pull=False, push=False):\n \"\"\"\n Synchronises translations with transifex.com\n \"\"\"\n if not (pull or push):\n raise TaskError('Specify whether to push or pull translations')\n if pull:\n context.shell('tx', 'pull')\n make_messages(context, javascript=False)\n make_messages(context, javascript=True)\n if push:\n context.shell('tx', 'push', '--source', '--no-interactive')"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef csrf_failure(request, reason=''):\n if _csrf_failed_view.no_moj_csrf:\n from django.views.csrf import csrf_failure\n\n return csrf_failure(request, reason=reason)\n\n # present a sensible error message to users\n if reason == REASON_NO_CSRF_COOKIE:\n reason = _('Please try again.') + ' ' + \\\n _('Make sure you haven\u2019t disabled cookies.')\n elif reason == REASON_NO_REFERER:\n reason = _('Please try again.') + ' ' + \\\n _('Make sure you are using a modern web browser '\n 'such as Firefox or Google Chrome.')\n else:\n reason = _('Your browser failed a security check.') + ' ' + \\\n _('Please try again.')\n messages.error(request, reason)\n\n # convert into GET request and show view again\n request.method = 'GET'\n request.POST = QueryDict()\n\n # call the original view but set response status to forbidden\n response = _csrf_failed_view.callback(request, *_csrf_failed_view.args, **_csrf_failed_view.kwargs)\n if response.status_code == 200:\n response.status_code = 403\n return response", "response": "CSRF failure view which converts the failed POST request into GET and calls the original view with a sensible error message presented\n to the user."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ncomputing the positions of the elctrodes based on the elinfo dictionary.", "response": "def get_positions(elinfo, center=True):\n '''Computes the positions of the elctrodes based on the elinfo\n\n 
Parameters\n ----------\n elinfo: dict\n Contains electrode information from yaml file (dim, pitch, sortlist, plane, pos)\n\n Returns\n -------\n positions: np.array\n 3d points with the centers of the electrodes\n\n '''\n electrode_pos = False\n # method 1: positions in elinfo\n if 'pos' in elinfo.keys():\n pos = np.array(elinfo['pos'])\n nelec = pos.shape[0]\n if len(pos.shape) == 1:\n if len(pos) == 2:\n pos2d = np.array([pos])\n if 'plane' not in elinfo.keys():\n # print(\"'plane' field with 2D dimensions assumed to be 'yz\")\n plane = 'yz'\n else:\n plane = elinfo['plane']\n if 'offset' not in elinfo.keys():\n offset = 0\n else:\n offset = elinfo['offset']\n pos = add_3dim(pos2d, plane, offset)\n elif len(pos) == 3:\n pos = np.array([pos])\n elif len(pos) != 3:\n raise AttributeError('pos attribute should be one or a list of 2D or 3D points')\n elif len(pos.shape) == 2:\n if pos.shape[1] == 2:\n pos2d = pos\n if 'plane' not in elinfo.keys():\n # print(\"'plane' field with 2D dimensions assumed to be 'yz\")\n plane = 'yz'\n else:\n plane = elinfo['plane']\n if 'offset' not in elinfo.keys():\n offset = 0\n else:\n offset = elinfo['offset']\n pos = add_3dim(pos2d, plane, offset)\n elif pos.shape[1] != 3:\n raise AttributeError('pos attribute should be a list of 2D or 3D points')\n electrode_pos = True\n\n # method 2: dim, pithch, stagger\n if 'dim' in elinfo.keys():\n dim = elinfo['dim']\n if dim == 1:\n if 'plane' not in elinfo.keys():\n # print(\"'plane' field with 2D dimensions assumed to be 'yz\")\n plane = 'yz'\n else:\n plane = elinfo['plane']\n if 'offset' not in elinfo.keys():\n offset = 0\n else:\n offset = elinfo['offset']\n pos2d = np.array([[0, 0]])\n pos = add_3dim(pos2d, plane, offset)\n else:\n if 'pitch' not in elinfo.keys():\n raise AttributeError(\"When 'dim' is used, also 'pitch' should be specified.\")\n else:\n pitch = elinfo['pitch']\n\n if isinstance(dim, (int, np.integer)):\n dim = [dim, dim]\n if isinstance(pitch, (int, np.integer)) or 
isinstance(pitch, (float, np.float)):\n pitch = [pitch, pitch]\n if len(dim) == 2:\n d1 = np.array([])\n d2 = np.array([])\n if 'stagger' in elinfo.keys():\n stagger = elinfo['stagger']\n else:\n stagger = None\n for d_i in np.arange(dim[1]):\n if stagger is not None:\n if isinstance(stagger, (int, np.integer)) or isinstance(stagger, (float, np.float)):\n if np.mod(d_i, 2):\n d1new = np.arange(dim[0]) * pitch[0] + stagger\n else:\n d1new = np.arange(dim[0]) * pitch[0]\n elif len(stagger) == len(dim):\n d1new = np.arange(dim[0]) * pitch[0] + stagger[d_i]\n else:\n d1new = np.arange(dim[0]) * pitch[0]\n else:\n d1new = np.arange(dim[0]) * pitch[0]\n d1 = np.concatenate((d1, d1new))\n d2 = np.concatenate((d2, dim[0] * [pitch[1] * d_i]))\n pos2d = np.vstack((d2, d1)).T\n if 'plane' not in elinfo.keys():\n # print(\"'plane' field with 2D dimensions assumed to be 'yz\")\n plane = 'yz'\n else:\n plane = elinfo['plane']\n if 'offset' not in elinfo.keys():\n offset = 0\n else:\n offset = elinfo['offset']\n pos2d = np.concatenate((np.reshape(d2.T, (d1.size, 1)),\n np.reshape(d1.T, (d2.size, 1))), axis=1)\n pos = add_3dim(pos2d, plane, offset)\n\n elif len(dim) >= 3:\n d1 = np.array([])\n d2 = np.array([])\n if 'stagger' in elinfo.keys():\n stagger = elinfo['stagger']\n else:\n stagger = None\n for d_i, d in enumerate(dim):\n if stagger is not None:\n if isinstance(stagger, (int, np.integer)) or isinstance(stagger, (float, np.float)):\n if np.mod(d_i, 2):\n d1new = np.arange(d) * pitch[0] + stagger\n else:\n d1new = np.arange(d) * pitch[0]\n elif len(stagger) == len(dim):\n d1new = np.arange(d) * pitch[0] + stagger[d_i]\n else:\n d1new = np.arange(d) * pitch[0]\n else:\n d1new = np.arange(d) * pitch[0]\n d1 = np.concatenate((d1, d1new))\n d2 = np.concatenate((d2, d * [pitch[1] * d_i]))\n pos2d = np.vstack((d2, d1)).T\n if 'plane' not in elinfo.keys():\n # print(\"'plane' field with 2D dimensions assumed to be 'yz\")\n plane = 'yz'\n else:\n plane = elinfo['plane']\n if 
'offset' not in elinfo.keys():\n offset = 0\n else:\n offset = elinfo['offset']\n pos = add_3dim(pos2d, plane, offset)\n electrode_pos = True\n\n if electrode_pos and center:\n centered_pos = center_mea(pos)\n # resort electrodes in case\n centered_pos_sorted = copy.deepcopy(centered_pos)\n if 'sortlist' in elinfo.keys() and elinfo['sortlist'] is not None:\n sortlist = elinfo['sortlist']\n for i, si in enumerate(sortlist):\n centered_pos_sorted[si] = centered_pos[i]\n else:\n centered_pos_sorted = centered_pos\n return centered_pos_sorted\n elif electrode_pos and not center:\n # resort electrodes in case\n pos_sorted = copy.deepcopy(pos)\n if 'sortlist' in elinfo.keys() and elinfo['sortlist'] is not None:\n sortlist = elinfo['sortlist']\n for i, si in enumerate(sortlist):\n pos_sorted[si] = pos[i]\n else:\n pos_sorted = pos\n return pos_sorted\n else:\n print(\"Define either a list of positions 'pos' or 'dim' and 'pitch'\")\n return None"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nadd the mea design defined by the yaml file in the install folder", "response": "def add_mea(mea_yaml_path):\n '''Adds the mea design defined by the yaml file in the install folder\n\n Parameters\n ----------\n mea_yaml_file\n\n Returns\n -------\n\n '''\n path = os.path.abspath(mea_yaml_path)\n\n if path.endswith('.yaml') or path.endswith('.yml') and os.path.isfile(path):\n with open(path, 'r') as meafile:\n if use_loader:\n elinfo = yaml.load(meafile, Loader=yaml.FullLoader)\n else:\n elinfo = yaml.load(meafile)\n if 'pos' not in elinfo.keys():\n if 'dim' in elinfo.keys():\n if elinfo['dim'] != 1 and 'pitch' not in elinfo.keys():\n raise AttributeError(\"The yaml file should contain either a list of 3d or 2d positions 'pos' or \"\n \"information about dimension and pitch ('dim' and 'pitch')\")\n else:\n raise AttributeError(\"The yaml file should contain either a list of 3d or 2d positions 'pos' or \"\n \"information about dimension and pitch ('dim' 
and 'pitch') - unless dim=1\")\n\n this_dir, this_filename = os.path.split(__file__)\n shutil.copy(path, os.path.join(this_dir, 'electrodes'))\n if path.endswith('.yaml'):\n electrodes = [f[:-5] for f in os.listdir(os.path.join(this_dir, \"electrodes\"))]\n elif path.endswith('.yml'):\n electrodes = [f[:-4] for f in os.listdir(os.path.join(this_dir, \"electrodes\"))]\n print('Available MEA: \\n', electrodes)\n return"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nremove the mea design defined by the yaml file in the install folder", "response": "def remove_mea(mea_name):\n '''Removes the mea design defined by the yaml file in the install folder\n\n Parameters\n ----------\n mea_name\n\n Returns\n -------\n\n '''\n this_dir, this_filename = os.path.split(__file__)\n electrodes = [f for f in os.listdir(os.path.join(this_dir, \"electrodes\"))]\n for e in electrodes:\n if mea_name in e:\n if os.path.isfile(os.path.join(this_dir, \"electrodes\", mea_name + '.yaml')):\n os.remove(os.path.join(this_dir, \"electrodes\", mea_name + '.yaml'))\n print(\"Removed: \", os.path.join(this_dir, \"electrodes\", mea_name + '.yaml'))\n elif os.path.isfile(os.path.join(this_dir, \"electrodes\", mea_name + '.yml')):\n os.remove(os.path.join(this_dir, \"electrodes\", mea_name + '.yml'))\n print(\"Removed: \", os.path.join(this_dir, \"electrodes\", mea_name + '.yml'))\n electrodes = [f[:-5] for f in os.listdir(os.path.join(this_dir, \"electrodes\"))]\n print('Available MEA: \\n', electrodes)\n return"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _get_two_lines(f):\n l0 = f.readline()\n l1 = f.readline()\n return l0, l1", "response": "Get the first two lines of a file."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ndetermining the format of word embedding file by their content. 
This operation only looks at the first two lines and does not check the sanity of input file. Args: f (Filelike): Returns: class", "response": "def classify_format(f):\n \"\"\"\n Determine the format of word embedding file by their content. This operation\n only looks at the first two lines and does not check the sanity of input\n file.\n\n Args:\n f (Filelike):\n\n Returns:\n class\n\n \"\"\"\n l0, l1 = _get_two_lines(f)\n if loader.glove.check_valid(l0, l1):\n return _glove\n elif loader.word2vec_text.check_valid(l0, l1):\n return _word2vec_text\n elif loader.word2vec_bin.check_valid(l0, l1):\n return _word2vec_bin\n else:\n raise OSError(b\"Invalid format\")"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nloads a pretrained word embedding from a file.", "response": "def load(cls, path, vocab=None, dtype=np.float32, max_vocab=None,\n format=None, binary=False):\n \"\"\"\n Load pretrained word embedding from a file.\n\n Args:\n path (str): Path of file to load.\n vocab (str or None): Path to vocabulary file created by word2vec\n with ``-save-vocab `` option. If vocab is given,\n :py:attr:`~vectors` and :py:attr:`~vocab` is ordered in\n descending order of frequency.\n dtype (numpy.dtype): Element data type to use for the array.\n max_vocab (int): Number of vocabulary to read.\n format (str or None): Format of the file. ``'word2vec'`` for file\n that was implemented in\n `word2vec `_,\n by Mikolov et al.. ``'glove'`` for file that was implemented in\n `GloVe `_, Global\n Vectors for Word Representation, by Jeffrey Pennington,\n Richard Socher, Christopher D. Manning from Stanford NLP group.\n If ``None`` is given, the format is guessed from the content.\n binary (bool): Load file as binary file as in word embedding file\n created by\n `word2vec `_ with\n ``-binary 1`` option. 
If ``format`` is ``'glove'`` or ``None``,\n this argument is simply ignored\n\n Returns:\n :class:`~word_embedding_loader.word_embedding.WordEmbedding`\n \"\"\"\n freqs = None\n if vocab is not None:\n with open(vocab, mode='rb') as f:\n freqs = loader.vocab.load_vocab(f)\n # Create vocab from freqs\n # [:None] gives all the list member\n vocab = {k: i for i, (k, v) in enumerate(\n sorted(six.iteritems(freqs),\n key=lambda k_v: k_v[1], reverse=True)[:max_vocab])}\n\n with open(path, mode='rb') as f:\n if format is None:\n mod = classify_format(f)\n else:\n mod = _select_module(format, binary)\n\n with open(path, mode='rb') as f:\n if vocab is not None:\n arr = mod.loader.load_with_vocab(f, vocab, dtype=dtype)\n v = vocab\n else:\n arr, v = mod.loader.load(f, max_vocab=max_vocab, dtype=dtype)\n\n obj = cls(arr, v, freqs)\n obj._load_cond = mod\n return obj"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef save(self, path, format, binary=False, use_load_condition=False):\n if use_load_condition:\n if self._load_cond is None:\n raise ValueError(\n b\"use_load_condition was specified but the object is not \"\n b\"loaded from a file\")\n # Use load condition\n mod = self._load_cond\n else:\n mod = _select_module(format, binary)\n\n\n if self.freqs is None:\n itr = list(\n sorted(six.iteritems(self.vocab), key=lambda k_v: k_v[1]))\n else:\n itr = list(\n sorted(six.iteritems(self.vocab),\n key=lambda k_v: self.freqs[k_v[0]], reverse=True)\n )\n\n with open(path, mode='wb') as f:\n mod.saver.save(f, self.vectors, itr)", "response": "Save the object to a file."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef paths_for_shell(paths, separator=' '):\n paths = filter(None, paths)\n paths = map(shlex.quote, paths)\n if separator is None:\n return paths\n return separator.join(paths)", "response": "Converts a list of paths for use in 
shell commands"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef register(self, *dependencies, default=False, hidden=False, ignore_return_code=False):\n\n def outer(func):\n task = Task(func, *dependencies, default=default, hidden=hidden, ignore_return_code=ignore_return_code)\n overidden_task = self._tasks.pop(task.name, None)\n if overidden_task:\n self._overidden_tasks[task.name].append(overidden_task)\n self[task.name] = task\n return task\n\n return outer", "response": "Decorator to turn a callable into a task\n "} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nlooking up a task by name or callable.", "response": "def lookup_task(self, task):\n \"\"\"\n Looks up a task by name or by callable\n \"\"\"\n if isinstance(task, str):\n try:\n return self[task]\n except KeyError:\n pass\n raise TaskError('Unknown task %s' % task)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_default_task(self):\n default_tasks = list(filter(lambda task: task.default, self.values()))\n if len(default_tasks) == 1:\n return default_tasks[0]", "response": "Returns the default task if there is only one"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nread a function or method signature to produce a set of parameters", "response": "def from_callable(cls, func, ignored_parameters=set()):\n \"\"\"\n Reads a function or method signature to produce a set of parameters\n \"\"\"\n group = cls()\n signature = inspect.signature(func)\n for parameter in signature.parameters.values():\n if parameter.name.startswith('_') or parameter.name in ignored_parameters:\n continue\n parameter = Parameter.from_callable_parameter(parameter)\n group[parameter.name] = parameter\n return group"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nproducing a set of parameters from a mapping.", "response": 
"def from_mapping(cls, mapping):\n \"\"\"\n Produces a set of parameters from a mapping\n \"\"\"\n group = cls()\n for name, value in mapping.items():\n if name.startswith('_'):\n continue\n group[name] = Parameter(\n name=name,\n value=value,\n constraint=Parameter.constraint_from_type(value),\n )\n return group"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nconverting the set of parameters into a dict.", "response": "def to_dict(self):\n \"\"\"\n Converts the set of parameters into a dict\n \"\"\"\n return dict((parameter.name, parameter.value) for parameter in self.values())"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ntakes arguments from a list while there are parameters that can accept them", "response": "def consume_arguments(self, argument_list):\n \"\"\"\n Takes arguments from a list while there are parameters that can accept them\n \"\"\"\n while True:\n argument_count = len(argument_list)\n for parameter in self.values():\n argument_list = parameter.consume_arguments(argument_list)\n if len(argument_list) == argument_count:\n return argument_list"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef update_from(self, mapping):\n for key, value in mapping.items():\n if key in self:\n if isinstance(value, Parameter):\n value = value.value\n self[key].value = value", "response": "Updates the set of parameters from a mapping."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef from_callable_parameter(cls, parameter):\n if parameter.kind == parameter.KEYWORD_ONLY or \\\n parameter.kind == parameter.POSITIONAL_OR_KEYWORD and parameter.default is not parameter.empty:\n if parameter.annotation is not parameter.empty:\n constraint = parameter.annotation\n else:\n constraint = Parameter.constraint_from_type(parameter.default)\n return cls(\n name=parameter.name,\n 
value=parameter.default,\n constraint=constraint,\n )\n else:\n raise ParameterError('Only keyword parameters are supported')", "response": "Produces a parameter from a callable parameter."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef constraint_from_type(cls, value):\n if value is None:\n return None\n value_type = type(value)\n if value_type in (str, int, bool):\n return value_type\n raise ParameterError('Parameter type cannot be %s' % value_type)", "response": "Returns the constraint callable given a value"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef constraint_from_choices(cls, value_type: type, choices: collections.Sequence):\n choices_str = ', '.join(map(str, choices))\n\n def constraint(value):\n value = value_type(value)\n if value not in choices:\n raise ParameterError('Argument must be one of %s' % choices_str)\n return value\n\n constraint.__name__ = 'choices_%s' % value_type.__name__\n constraint.__doc__ = 'choice of %s' % choices_str\n return constraint", "response": "Returns a constraint callable based on choices of a given type"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef arg_name(self):\n if self.constraint is bool and self.value:\n return '--no-%s' % self.name.replace('_', '-')\n return '--%s' % self.name.replace('_', '-')", "response": "Returns the name of the parameter as a command line flag"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef consume_arguments(self, argument_list):\n if len(argument_list) == 0:\n return []\n if argument_list[0] == self.arg_name:\n argument_list = argument_list[1:]\n if self.constraint is bool:\n self.value = not self.value\n else:\n try:\n value = argument_list.pop(0)\n except IndexError:\n raise ParameterError('Argument %s expects a value' % self.arg_name)\n self.value = 
value\n return argument_list", "response": "Takes arguments from a list while this parameter can accept them\n "} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef pip_command(self, command, *args):\n try:\n from pip._internal import main as pip_main\n except ImportError:\n from pip import main as pip_main\n\n args = [command] + list(args)\n if self.verbosity == 0:\n args.insert(0, '--quiet')\n elif self.verbosity == 2:\n args.insert(0, '--verbose')\n return pip_main(args)", "response": "Runs a pip command"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef shell(self, command, *args, environment=None):\n command += ' ' + ' '.join(args)\n command = command.strip()\n self.debug(self.yellow_style('$ %s' % command))\n env = self.env.copy()\n env.update(environment or {})\n return subprocess.call(command, shell=True, env=env)", "response": "Runs a shell command"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nrunning a Django management command", "response": "def management_command(self, command, *args, **kwargs):\n \"\"\"\n Runs a Django management command\n \"\"\"\n self.setup_django()\n if 'verbosity' not in kwargs:\n kwargs['verbosity'] = self.verbosity\n if not self.use_colour:\n kwargs['no_color'] = False\n self.debug(self.yellow_style('$ manage.py %s' % command))\n return call_command(command, *args, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef help(self, context):\n context.info('%s\\n%s [global options] [task] [task options]...\\n' % (self.name, sys.argv[0]))\n\n def print_parameter(prn, p):\n if p.description:\n suffix = ' - {0.description}'.format(p)\n else:\n suffix = ''\n if p.constraint is bool:\n prn(' {0.arg_name}'.format(p) + suffix)\n else:\n prn(' {0.arg_name} [{0.value}]'.format(p) + suffix)\n\n context.info('Global 
options:')\n for parameter in self.context_parameters.values():\n print_parameter(context.info, parameter)\n context.info()\n\n context.info('Commands:')\n name_template = ' {0.name:<%d}' % min(max(map(len, self.available_tasks)), 20)\n for task in self.available_tasks.values():\n printer = context.debug if task.hidden else context.info\n if task.description:\n printer((name_template + ' - {0.description}').format(task))\n else:\n printer(name_template.format(task))\n for parameter in task.parameters.values():\n print_parameter(printer, parameter)", "response": "Prints help for the current SEO class."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef login(request, user):\n session_auth_hash = ''\n if user is None:\n user = request.user\n if hasattr(user, 'get_session_auth_hash'):\n session_auth_hash = user.get_session_auth_hash()\n\n if SESSION_KEY in request.session:\n session_key = request.session[SESSION_KEY]\n if session_key != user.pk or (\n session_auth_hash and\n request.session.get(HASH_SESSION_KEY) != session_auth_hash):\n # To avoid reusing another user's session, create a new, empty\n # session if the existing session corresponds to a different\n # authenticated user.\n request.session.flush()\n else:\n request.session.cycle_key()\n request.session[SESSION_KEY] = user.pk\n request.session[BACKEND_SESSION_KEY] = user.backend\n request.session[USER_DATA_SESSION_KEY] = user.user_data\n request.session[HASH_SESSION_KEY] = session_auth_hash\n\n update_token_in_session(request.session, user.token)\n\n if hasattr(request, 'user'):\n request.user = user\n rotate_token(request)\n user_logged_in.send(sender=user.__class__, request=request, user=user)", "response": "Log in a user."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning the user model instance associated with the given request session.", "response": "def get_user(request):\n \"\"\"\n Returns the user 
model instance associated with the given request session.\n If no user is retrieved an instance of `MojAnonymousUser` is returned.\n \"\"\"\n user = None\n try:\n user_id = request.session[SESSION_KEY]\n token = request.session[AUTH_TOKEN_SESSION_KEY]\n user_data = request.session[USER_DATA_SESSION_KEY]\n backend_path = request.session[BACKEND_SESSION_KEY]\n except KeyError:\n pass\n else:\n if backend_path in settings.AUTHENTICATION_BACKENDS:\n backend = load_backend(backend_path)\n user = backend.get_user(user_id, token, user_data)\n # Verify the session\n if hasattr(user, 'get_session_auth_hash'):\n session_hash = request.session.get(HASH_SESSION_KEY)\n session_hash_verified = session_hash and constant_time_compare(\n session_hash,\n user.get_session_auth_hash()\n )\n if not session_hash_verified:\n request.session.flush()\n user = None\n\n return user or MojAnonymousUser()"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef logout(request):\n # Dispatch the signal before the user is logged out so the receivers have a\n # chance to find out *who* logged out.\n user = getattr(request, 'user', None)\n if hasattr(user, 'is_authenticated') and not user.is_authenticated:\n user = None\n user_logged_out.send(sender=user.__class__, request=request, user=user)\n\n # remember language choice saved to session\n language = request.session.get(LANGUAGE_SESSION_KEY)\n\n request.session.flush()\n\n if language is not None:\n request.session[LANGUAGE_SESSION_KEY] = language\n\n if hasattr(request, 'user'):\n request.user = MojAnonymousUser()", "response": "Removes the authenticated user's ID from the request and flushes their\n session data."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nsaves a word embedding to a file.", "response": "def save(f, arr, vocab):\n \"\"\"\n Save word embedding file.\n Check :func:`word_embedding_loader.saver.glove.save` for the API.\n 
\"\"\"\n f.write(('%d %d' % (arr.shape[0], arr.shape[1])).encode('utf-8'))\n for word, idx in vocab:\n _write_line(f, arr[idx], word)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef spoolable(*, pre_condition=True, body_params=()):\n\n def decorator(func):\n context_name = None\n keyword_kinds = {inspect.Parameter.POSITIONAL_OR_KEYWORD, inspect.Parameter.KEYWORD_ONLY}\n invalid_body_params = set(body_params)\n for name, parameter in inspect.signature(func).parameters.items():\n if parameter.kind not in keyword_kinds:\n continue\n if not context_name and parameter.annotation is Context:\n context_name = name\n elif name in invalid_body_params:\n invalid_body_params.remove(name)\n if invalid_body_params:\n raise TypeError('Spoolable task body_params must be keyword arguments')\n\n task = Task(func, context_name=context_name, pre_condition=pre_condition, body_params=body_params)\n spooler.register(task)\n return task\n\n return decorator", "response": "Decorator that makes a function spoolable using uWSGI."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef fields(self):\n return {k:getattr(self, k, None) for k in self.schema.fields}", "response": "Returns all the fields and their raw values for this Orm instance."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef create(cls, fields=None, **fields_kwargs):\n # NOTE -- you cannot use hydrate/populate here because populate alters modified fields\n instance = cls(fields, **fields_kwargs)\n instance.save()\n return instance", "response": "create an instance of cls with the passed in fields and set it into the db\n "} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef populate(self, fields=None, **fields_kwargs):\n\n # this will run all the fields of the Orm, not just 
the fields in fields\n # dict, another name would be hydrate\n pop_fields = {}\n fields = self.make_dict(fields, fields_kwargs)\n for k in self.schema.fields.keys():\n pop_fields[k] = fields.get(k, None)\n\n self._populate(pop_fields)", "response": "take the passed in fields combine them with missing fields that should\n be there and then run all those through appropriate methods to hydrate this object"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef depopulate(self, is_update):\n fields = {}\n schema = self.schema\n for k, field in schema.fields.items():\n is_modified = k in self.modified_fields\n orig_v = getattr(self, k)\n v = field.iset(\n self,\n orig_v,\n is_update=is_update,\n is_modified=is_modified\n )\n\n if is_modified or v is not None:\n if is_update and field.is_pk() and v == orig_v:\n continue\n\n else:\n fields[k] = v\n\n if not is_update:\n for field_name in schema.required_fields.keys():\n if field_name not in fields:\n raise KeyError(\"Missing required field {}\".format(field_name))\n\n return fields", "response": "Get all the fields that need to be saved by the user."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef insert(self):\n ret = True\n\n schema = self.schema\n fields = self.depopulate(False)\n\n q = self.query\n q.set_fields(fields)\n pk = q.insert()\n if pk:\n fields = q.fields\n fields[schema.pk.name] = pk\n self._populate(fields)\n\n else:\n ret = False\n\n return ret", "response": "persist the field values of this orm"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef update(self):\n ret = True\n fields = self.depopulate(True)\n q = self.query\n q.set_fields(fields)\n\n pk = self.pk\n if pk:\n q.is_field(self.schema.pk.name, pk)\n\n else:\n raise ValueError(\"You cannot update without a primary key\")\n\n if q.update():\n fields = q.fields\n 
self._populate(fields)\n\n else:\n ret = False\n\n return ret", "response": "re - persist the updated field values of this orm that has a primary key"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef save(self):\n ret = False\n\n # we will only use the primary key if it hasn't been modified\n pk = None\n if self.schema.pk.name not in self.modified_fields:\n pk = self.pk\n\n if pk:\n ret = self.update()\n else:\n ret = self.insert()\n\n return ret", "response": "persist the fields in this object into the db"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ndeletes the object from the db", "response": "def delete(self):\n \"\"\"delete the object from the db if pk is set\"\"\"\n ret = False\n q = self.query\n pk = self.pk\n if pk:\n pk_name = self.schema.pk.name\n self.query.is_field(pk_name, pk).delete()\n setattr(self, pk_name, None)\n\n # mark all the fields that still exist as modified\n self.reset_modified()\n for field_name in self.schema.fields:\n if getattr(self, field_name, None) != None:\n self.modified_fields.add(field_name)\n\n ret = True\n\n return ret"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreset modified_fields to the set of all the passed in fields", "response": "def reset_modified(self):\n \"\"\"\n reset field modification tracking\n\n this is handy for when you are loading a new Orm with the results from a query and\n you don't want set() to do anything, you can Orm(**fields) and then orm.reset_modified() to\n clear all the passed in fields from the modified list\n \"\"\"\n self.modified_fields = set()\n\n # compensate for us not having knowledge of certain fields changing\n for field_name, field in self.schema.normal_fields.items():\n if isinstance(field, ObjectField):\n self.modified_fields.add(field_name)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nmodifying the 
fields of this instance", "response": "def modify(self, fields=None, **fields_kwargs):\n \"\"\"update the fields of this instance with the values in dict fields\n\n this should rarely be messed with, if you would like to manipulate the\n fields you should override _modify()\n\n :param fields: dict, the fields in a dict\n :param **fields_kwargs: dict, if you would like to pass the fields as key=val\n this picks those up and combines them with fields\n :returns: set, all the names of the fields that were modified\n \"\"\"\n modified_fields = set()\n fields = self.make_dict(fields, fields_kwargs)\n fields = self._modify(fields)\n for field_name, field_val in fields.items():\n in_schema = field_name in self.schema.fields\n if in_schema:\n setattr(self, field_name, field_val)\n modified_fields.add(field_name)\n\n return modified_fields"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn a public version of this object that can be jsonified by the user.", "response": "def jsonable(self, *args, **options):\n \"\"\"\n return a public version of this instance that can be jsonified\n\n Note that this does not return _id, _created, _updated, the reason why is\n because lots of times you have a different name for _id (like if it is a \n user object, then you might want to call it user_id instead of _id) and I\n didn't want to make assumptions\n\n note 2, I'm not crazy about the name, but I didn't like to_dict() and pretty\n much any time I need to convert the object to a dict is for json, I kind of\n like dictify() though, but I've already used this method in so many places\n \"\"\"\n d = {}\n for field_name, field in self.schema.normal_fields.items():\n field_val = getattr(self, field_name, None)\n field_val = field.jsonable(self, field_val)\n if field_val is not None:\n d[field_name] = field_val\n\n return d"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nmerge 2 EMIR images.", "response": "def 
merge2images(hdu1, hdu2, debugplot):\n \"\"\"Merge 2 EMIR images, averaging the common region.\n\n Parameters\n ----------\n hdu1 : HDUList object\n Input image #1.\n hdu2 : HDUList object\n Input image #2.\n debugplot : int\n Determines whether intermediate computations and/or plots\n are displayed. The valid codes are defined in\n numina.array.display.pause_debugplot.\n\n Returns\n -------\n image_merged : HDUList object\n Output image.\n\n \"\"\"\n\n # check image dimensions\n image_header = hdu1[0].header\n image_header_ = hdu2[0].header\n #\n naxis = image_header['naxis']\n naxis_ = image_header_['naxis']\n if naxis != naxis_:\n raise ValueError('Incompatible NAXIS values: {}, {}'.format(\n naxis, naxis_\n ))\n naxis1 = image_header['naxis1']\n naxis1_ = image_header_['naxis1']\n if naxis1 != naxis1_:\n raise ValueError('Incompatible NAXIS1 values: {}, {}'.format(\n naxis1, naxis1_\n ))\n if naxis == 1:\n naxis2 = 1\n elif naxis == 2:\n naxis2 = image_header['naxis2']\n naxis2_ = image_header_['naxis2']\n if naxis2 != naxis2_:\n raise ValueError('Incompatible NAXIS2 values: {}, {}'.format(\n naxis2, naxis2_\n ))\n else:\n raise ValueError('Unexpected NAXIS value: {}'.format(naxis))\n\n # initialize output array\n image2d_merged = np.zeros((naxis2, naxis1))\n\n # main loop\n data1 = hdu1[0].data\n data2 = hdu2[0].data\n for i in range(naxis2):\n sp1 = data1[i, :]\n sp2 = data2[i, :]\n jmin1, jmax1 = find_pix_borders(sp1, sought_value=0)\n jmin2, jmax2 = find_pix_borders(sp2, sought_value=0)\n image2d_merged[i, :] = sp1 + sp2\n useful1 = (jmin1 != -1) and (jmax1 != naxis1)\n useful2 = (jmin2 != -1) and (jmax2 != naxis1)\n if useful1 and useful2:\n jmineff = max(jmin1, jmin2)\n jmaxeff = min(jmax1, jmax2)\n image2d_merged[i, jmineff:(jmaxeff+1)] /= 2\n\n # return result\n image_merged = fits.PrimaryHDU(data=image2d_merged.astype(np.float32),\n header=image_header)\n\n return image_merged"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the 
following Python 3 function doing\ndef configure_environ(dsn_env_name='PROM_DSN', connection_class=DsnConnection):\n inters = []\n cs = dsnparse.parse_environs(dsn_env_name, parse_class=connection_class)\n for c in cs:\n inter = c.interface\n set_interface(inter, c.name)\n inters.append(inter)\n\n return inters", "response": "configure interfaces based on environment variables PROM_DSN and PROM_DSN_N"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nconfigures an interface to be used to query a backend", "response": "def configure(dsn, connection_class=DsnConnection):\n \"\"\"\n configure an interface to be used to query a backend\n\n you use this function to configure an Interface using a dsn, then you can get\n that interface using the get_interface() method\n\n dsn -- string -- a properly formatted prom dsn, see DsnConnection for how to format the dsn\n \"\"\"\n c = dsnparse.parse(dsn, parse_class=connection_class)\n inter = c.interface\n set_interface(inter, c.name)\n return inter"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nsets the interface for the specified name.", "response": "def set_interface(interface, name=''):\n \"\"\"\n don't want to bother with a dsn? 
Use this method to make an interface available\n \"\"\"\n global interfaces\n\n if not interface: raise ValueError('interface is empty')\n\n # close down the interface before we discard it\n if name in interfaces:\n interfaces[name].close()\n\n interfaces[name] = interface"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\nasync def open_websocket_server(sock, filter=None): # pylint: disable=W0622\n ws = await create_websocket_server(sock, filter=filter)\n try:\n yield ws\n finally:\n await ws.close()", "response": "A context manager which serves this websocket."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncreates a new websocket server.", "response": "async def create_websocket_server(sock, filter=None): # pylint: disable=W0622\n \"\"\"\n A more low-level form of open_websocket_server.\n You are responsible for closing this websocket.\n \"\"\"\n ws = Websocket()\n await ws.start_server(sock, filter=filter)\n return ws"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef normal_fields(self):\n return {f:v for f, v in self.fields.items() if not f.startswith('_')}", "response": "fields that aren t magic"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef required_fields(self):\n return {f:v for f, v in self.normal_fields.items() if v.required}", "response": "The normal required fields."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef magic_fields(self):\n return {f:v for f, v in self.fields.items() if f.startswith('_')}", "response": "the magic fields for the schema"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef set_index(self, index_name, index):\n if not index_name:\n raise ValueError(\"index_name must have a value\")\n if 
index_name in self.indexes:\n raise ValueError(\"index_name {} has already been defined on {}\".format(\n index_name, str(self.indexes[index_name].fields)\n ))\n if not isinstance(index, Index): raise ValueError(\"{} is not an Index instance\".format(type(index)))\n\n index.name = index_name\n self.indexes[index_name] = index\n return self", "response": "Add an index to the schema"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef schema(self):\n if not hasattr(self, \"_schema\"):\n ret = None\n o = self._type\n if isinstance(o, type):\n ret = getattr(o, \"schema\", None)\n\n elif isinstance(o, Schema):\n ret = o\n\n else:\n module, klass = utils.get_objects(o)\n ret = klass.schema\n\n self._schema = ret\n\n return self._schema", "response": "return the schema instance if this is reference to another table"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn the raw value that this property is holding internally for instance", "response": "def fval(self, instance):\n \"\"\"return the raw value that this property is holding internally for instance\"\"\"\n try:\n val = instance.__dict__[self.instance_field_name]\n except KeyError as e:\n #raise AttributeError(str(e))\n val = None\n\n return val"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nset the default reference for a method.", "response": "def set_default_reference(self, method, reference):\n \"\"\"\n Set the default reference for a method.\n\n :arg method: name of a method\n :type method: :class:`str`\n\n {reference}\n \"\"\"\n if method not in self._available_methods:\n raise ValueError('Unknown method: {0}'.format(method))\n self._default_references[method] = reference"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_default_reference(self, method):\n if method not in self._available_methods:\n 
raise ValueError('Unknown method: {0}'.format(method))\n return self._default_references.get(method)", "response": "Returns the default reference for a method."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nprints all x - ray transitions for an element with their different notations and energy.", "response": "def print_element_xray_transitions(self, element, file=sys.stdout, tabulate_kwargs=None):\n \"\"\"\n Prints all x-ray transitions for an element, with their different\n notations and energy.\n\n {element}\n\n :arg file: file for output, default to standard out\n \"\"\"\n header = ['IUPAC', 'Siegbahn', 'Energy (eV)', 'Probability']\n\n rows = []\n for xraytransition in self.element_xray_transitions(element):\n try:\n iupac = self.xray_transition_notation(xraytransition, 'iupac')\n except:\n iupac = ''\n\n try:\n siegbahn = self.xray_transition_notation(xraytransition, 'siegbahn')\n except:\n siegbahn = ''\n\n try:\n energy_eV = self.xray_transition_energy_eV(element, xraytransition)\n except:\n energy_eV = ''\n\n try:\n probability = self.xray_transition_probability(element, xraytransition)\n except:\n probability = ''\n\n rows.append([iupac, siegbahn, energy_eV, probability])\n\n rows.sort(key=operator.itemgetter(2))\n\n if tabulate_kwargs is None:\n tabulate_kwargs = {}\n file.write(tabulate.tabulate(rows, header, **tabulate_kwargs))"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning all found modules at modulepath including modulepath module", "response": "def get_modules(modulepath):\n \"\"\"return all found modules at modulepath (eg, foo.bar) including modulepath module\"\"\"\n m = importlib.import_module(modulepath)\n mpath = m.__file__\n ret = set([m])\n\n if \"__init__.\" in mpath.lower():\n mpath = os.path.dirname(mpath)\n\n # https://docs.python.org/2/library/pkgutil.html#pkgutil.iter_modules\n for module_info in pkgutil.iter_modules([mpath]):\n submodulepath = 
\".\".join([modulepath, module_info[1]])\n if module_info[2]:\n # module is a package\n submodules = get_modules(submodulepath)\n ret.update(submodules)\n else:\n ret.add(importlib.import_module(submodulepath))\n\n return ret"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn all the parent_class subclasses that are found in a given module and any submodules.", "response": "def get_subclasses(modulepath, parent_class):\n \"\"\"given a module return all the parent_class subclasses that are found in\n that module and any submodules.\n\n :param modulepath: string, a path like foo.bar.che\n :param parent_class: object, the class whose children you are looking for\n :returns: set, all the found child classes in modulepath of parent_class\n \"\"\"\n if isinstance(modulepath, ModuleType):\n modules = get_modules(modulepath.__name__)\n else:\n modules = get_modules(modulepath)\n\n ret = set()\n for m in modules:\n cs = inspect.getmembers(m, lambda v: inspect.isclass(v) and issubclass(v, parent_class))\n for class_name, klass in cs:\n ret.add(klass)\n\n return ret"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef build_dump_order(orm_class, orm_classes):\n if orm_class in orm_classes: return\n\n for field_name, field_val in orm_class.schema.fields.items():\n if field_val.is_ref():\n build_dump_order(field_val.schema.orm_class, orm_classes)\n\n if orm_class not in orm_classes:\n orm_classes.append(orm_class)", "response": "build the dump order of the given orm_class"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef main_dump(paths, directory, dry_run):\n table_map = get_table_map(paths)\n\n for conn_name, conn_info in table_map.items():\n inter = conn_info[\"interface\"]\n conn = inter.connection_config\n table_names = conn_info[\"table_names\"]\n\n cmd = get_base_cmd(\"backup\", inter, 
directory)\n cmd.extend(table_names)\n\n if dry_run:\n echo.out(\" \".join(cmd))\n\n else:\n run_cmd(cmd)", "response": "dump all or part of the prom data"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nrestoring your database dumped with the dump command just a wrapper around `dump restore` https://github.com/Jaymon/dump", "response": "def main_restore(directory, conn_name):\n \"\"\"Restore your database dumped with the dump command\n\n just a wrapper around `dump restore` https://github.com/Jaymon/dump\n \"\"\"\n inter = get_interface(conn_name)\n conn = inter.connection_config\n cmd = get_base_cmd(\"restore\", inter, directory)\n run_cmd(cmd)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning all the fields for the given schema", "response": "def _get_fields(self, table_name, **kwargs):\n \"\"\"return all the fields for the given schema\"\"\"\n ret = {}\n query_str = []\n query_args = ['f', table_name]\n\n # I had to brush up on my join knowledge while writing this query\n # https://en.wikipedia.org/wiki/Join_(SQL)\n #\n # other helpful links\n # https://wiki.postgresql.org/wiki/Retrieve_primary_key_columns\n # https://www.postgresql.org/docs/9.4/static/catalog-pg-attribute.html\n # https://www.postgresql.org/docs/9.3/static/catalog-pg-type.html\n # \n # another approach\n # http://dba.stackexchange.com/questions/22362/how-do-i-list-all-columns-for-a-specified-table\n # http://gis.stackexchange.com/questions/94049/how-to-get-the-data-type-of-each-column-from-a-postgis-table\n query_str.append('SELECT')\n query_str.append(', '.join([\n 'a.attnum',\n 'a.attname',\n 'a.attnotnull',\n 't.typname',\n 'i.indisprimary',\n #'s.conname',\n #'pg_get_constraintdef(s.oid, true) as condef',\n 'c.relname AS confrelname',\n ]))\n query_str.append('FROM')\n query_str.append(' pg_attribute a')\n query_str.append('JOIN pg_type t ON a.atttypid = t.oid')\n query_str.append('LEFT JOIN pg_index i ON a.attrelid = 
i.indrelid')\n query_str.append(' AND a.attnum = any(i.indkey)')\n query_str.append('LEFT JOIN pg_constraint s ON a.attrelid = s.conrelid')\n query_str.append(' AND s.contype = {} AND a.attnum = any(s.conkey)'.format(self.val_placeholder))\n query_str.append('LEFT JOIN pg_class c ON s.confrelid = c.oid')\n query_str.append('WHERE')\n query_str.append(' a.attrelid = {}::regclass'.format(self.val_placeholder))\n query_str.append(' AND a.attisdropped = False')\n query_str.append(' AND a.attnum > 0')\n query_str.append('ORDER BY a.attnum ASC')\n\n query_str = os.linesep.join(query_str)\n fields = self.query(query_str, *query_args, **kwargs)\n\n pg_types = {\n \"float8\": float,\n \"timestamp\": datetime.datetime,\n \"int2\": int,\n \"int4\": int,\n \"int8\": long,\n \"numeric\": decimal.Decimal,\n \"text\": str,\n \"bpchar\": str,\n \"varchar\": str,\n \"bool\": bool,\n \"date\": datetime.date,\n \"blob\": bytearray,\n }\n\n # the rows we can set: field_type, name, field_required, min_size, max_size,\n # size, unique, pk, \n # These keys will roughly correspond with schema.Field\n for row in fields:\n field = {\n \"name\": row[\"attname\"],\n \"field_type\": pg_types[row[\"typname\"]],\n \"field_required\": row[\"attnotnull\"],\n \"pk\": bool(row[\"indisprimary\"]),\n }\n\n if row[\"confrelname\"]:\n # TODO -- I can't decide which name I like\n field[\"schema_table_name\"] = row[\"confrelname\"]\n field[\"ref_table_name\"] = row[\"confrelname\"]\n\n ret[field[\"name\"]] = field\n\n return ret"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _get_indexes(self, schema, **kwargs):\n ret = {}\n query_str = []\n query_str.append('SELECT')\n query_str.append(' tbl.relname AS table_name, i.relname AS index_name, a.attname AS field_name,')\n query_str.append(' ix.indkey AS index_order, a.attnum AS field_num')\n query_str.append('FROM')\n query_str.append(' pg_class tbl, pg_class i, pg_index ix, 
pg_attribute a')\n query_str.append('WHERE')\n query_str.append(' tbl.oid = ix.indrelid AND i.oid = ix.indexrelid AND a.attrelid = tbl.oid')\n query_str.append(' AND a.attnum = ANY(ix.indkey) AND tbl.relkind = %s AND tbl.relname = %s')\n query_str.append('ORDER BY')\n query_str.append(' tbl.relname, i.relname')\n query_str = os.linesep.join(query_str)\n\n indexes = self.query(query_str, 'r', str(schema), **kwargs)\n\n # massage the data into more readable {index_name: fields} format\n for idict in indexes:\n if idict['index_name'] not in ret:\n ret[idict['index_name']] = list(map(int, idict['index_order'].split(' ')))\n\n i = ret[idict['index_name']].index(idict['field_num'])\n ret[idict['index_name']][i] = idict['field_name']\n\n return ret", "response": "return all the indexes for the given schema"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nsets the index name for the current record.", "response": "def _set_index(self, schema, name, fields, **index_options):\n \"\"\"\n NOTE -- we set the index name using _ format since indexes have to have\n a globally unique name in postgres\n\n http://www.postgresql.org/docs/9.1/static/sql-createindex.html\n \"\"\"\n index_fields = []\n for field_name in fields:\n field = schema.fields[field_name]\n if issubclass(field.type, basestring):\n if field.options.get('ignore_case', False):\n field_name = 'UPPER({})'.format(self._normalize_name(field_name))\n index_fields.append(field_name)\n\n query_str = 'CREATE {}INDEX {} ON {} USING BTREE ({})'.format(\n 'UNIQUE ' if index_options.get('unique', False) else '',\n self._normalize_name(\"{}_{}\".format(schema, name)),\n self._normalize_table_name(schema),\n ', '.join(index_fields)\n )\n\n return self.query(query_str, ignore_result=True, **index_options)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ndeals with client authentication", "response": "def client_authentication(self, 
request, auth=None, **kwargs):\n \"\"\"\n Deal with client authentication\n\n :param request: The refresh access token request\n :param auth: Client authentication information\n :param kwargs: Extra keyword arguments\n :return: dictionary containing client id, client authentication method\n and possibly access token.\n \"\"\"\n\n try:\n auth_info = verify_client(self.endpoint_context, request, auth)\n except Exception as err:\n msg = \"Failed to verify client due to: {}\".format(err)\n logger.error(msg)\n return self.error_cls(error=\"unauthorized_client\",\n error_description=msg)\n else:\n if 'client_id' not in auth_info:\n logger.error('No client_id, authentication failed')\n return self.error_cls(error=\"unauthorized_client\",\n error_description='unknown client')\n\n return auth_info"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _post_parse_request(self, request, client_id='', **kwargs):\n\n if 'state' in request:\n try:\n sinfo = self.endpoint_context.sdb[request['code']]\n except KeyError:\n logger.error('Code not present in SessionDB')\n return self.error_cls(error=\"unauthorized_client\")\n else:\n state = sinfo['authn_req']['state']\n\n if state != request['state']:\n logger.error('State value mismatch')\n return self.error_cls(error=\"unauthorized_client\")\n\n if \"client_id\" not in request: # Optional for access token request\n request[\"client_id\"] = client_id\n\n logger.debug(\"%s: %s\" % (request.__class__.__name__, sanitize(request)))\n\n return request", "response": "This is where clients come to get their access tokens"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nregistering outputs and metadata.", "response": "def register(self, new_outputs, *args, **kwargs):\n \"\"\"\n Register outputs and metadata.\n\n * ``initial_value`` - used in dynamic calculations\n * ``size`` - number of elements per timestep\n * ``uncertainty`` - in percent of nominal 
value\n * ``variance`` - dictionary of covariances, diagonal is square of\n uncertianties, no units\n * ``jacobian`` - dictionary of sensitivities dxi/dfj\n * ``isconstant`` - ``True`` if constant, ``False`` if periodic\n * ``isproperty`` - ``True`` if output stays at last value during\n thresholds, ``False`` if reverts to initial value\n * ``timeseries`` - name of corresponding time series output, ``None`` if\n no time series\n * ``output_source`` - name\n\n :param new_outputs: new outputs to register.\n \"\"\"\n kwargs.update(zip(self.meta_names, args))\n # call super method\n super(OutputRegistry, self).register(new_outputs, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nclassifying input frames into a list of images and images_info.", "response": "def initial_classification(self, obresult, target_is_sky=False):\n \"\"\"Classify input frames, \"\"\"\n # lists of targets and sky frames\n\n with obresult.frames[0].open() as baseimg:\n # Initial checks\n has_bpm_ext = 'BPM' in baseimg\n self.logger.debug('images have BPM extension: %s', has_bpm_ext)\n\n images_info = []\n for f in obresult.frames:\n with f.open() as img:\n # Getting some metadata from FITS header\n hdr = img[0].header\n\n iinfo = ImageInfo(f)\n\n finfo = {}\n iinfo.metadata = finfo\n\n finfo['uuid'] = hdr['UUID']\n finfo['exposure'] = hdr['EXPTIME']\n # frame.baseshape = get_image_shape(hdr)\n finfo['airmass'] = hdr['airmass']\n finfo['mjd'] = hdr['tstamp']\n\n iinfo.label = 'result_image_{}'.format(finfo['uuid'])\n iinfo.mask = nfcom.Extension(\"BPM\")\n # Insert pixel offsets between frames\n iinfo.objmask_data = None\n iinfo.valid_target = False\n iinfo.valid_sky = False\n\n # FIXME: hardcode itype for the moment\n iinfo.itype = 'TARGET'\n if iinfo.itype == 'TARGET':\n iinfo.valid_target = True\n #targetframes.append(iinfo)\n if target_is_sky:\n iinfo.valid_sky = True\n #skyframes.append(iinfo)\n if iinfo.itype == 'SKY':\n iinfo.valid_sky = True\n 
#skyframes.append(iinfo)\n images_info.append(iinfo)\n\n return images_info"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef read_csu_2(barmodel, hdr):\n conf = CSUConf(barmodel)\n\n conf.conf_f = hdr.get('CSUCONFF', 'UNKNOWN')\n # Read CSUPOS and set position in model\n # The bars\n for idx in conf.bars:\n key = \"CSUP{}\".format(idx)\n # UNIT of CSUPOS?\n # who knows\n\n # set CSUPOS for BAR\n # Using the model, this sets the X,Y coordinate\n conf.bars[idx].csupos = hdr[key]\n # print('read_csu {}, position is {}'.format(idx, conf.bars[idx].current_pos))\n\n # Slits. We dont have keywords to fuse slitlets into larger logical slits\n # so there is one slitslet for each pair of bars\n for idx in range(1, EMIR_NBARS + 1):\n l_ids = [idx]\n r_ids = [idx + EMIR_NBARS]\n\n lbars = {lid: conf.bars[lid] for lid in l_ids}\n rbars = {rid: conf.bars[rid] for rid in r_ids}\n\n this = LogicalSlit(idx, lbars=lbars, rbars=rbars)\n # References from header\n try:\n slit_t = hdr[\"SLIFL%d\" % idx]\n this.target_type = TargetType(slit_t)\n except KeyError as err:\n # print('warning', err)\n this.target_type = TargetType.UNKNOWN\n\n xref = hdr.get(\"XRSLI%d\" % this.idx, -100) - 1\n yref = hdr.get(\"YRSLI%d\" % this.idx, -100) - 1\n this.target_coordinates = (xref, yref)\n\n xref = hdr.get(\"XVSLI%d\" % this.idx, -100) - 1\n yref = hdr.get(\"YVSLI%d\" % this.idx, -100) - 1\n this.target_coordinates_v = (xref, yref)\n conf.slits[idx] = this\n\n return conf", "response": "Read CSU information and slits from header"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef read_csu_3(barmodel, hdr):\n conf = CSUConf(barmodel)\n conf.set_state(hdr)\n return conf", "response": "Read CSU information and slits from header"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef set_state(self, hdr):\n\n self.conf_f = 
hdr.get('CSUCONFF', 'UNKNOWN')\n # Read CSUPOS and set position in model\n # The bars\n for idx in self.bars:\n key = \"CSUP{}\".format(idx)\n # UNIT is mm\n\n # set CSUPOS for BAR\n # Using the model, this sets the X,Y coordinate\n self.bars[idx].csupos = hdr[key]\n\n self._update_slits(hdr)", "response": "Read CSU information and slits from header and set position in model"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _update_slits(self, hdr):\n\n # Slits.\n # For the moment we will fuse only reference slits\n\n # clean existing slits\n self.slits = {}\n mm = []\n for idx in self.LBARS:\n\n # References from header\n try:\n slit_t = hdr[\"SLIFL%d\" % idx]\n target_type = TargetType(slit_t)\n except KeyError:\n target_type = TargetType.UNKNOWN\n\n xref = hdr.get(\"XRSLI%d\" % idx, -100) - 1\n yref = hdr.get(\"YRSLI%d\" % idx, -100) - 1\n target_coordinates = (xref, yref)\n\n xref = hdr.get(\"XVSLI%d\" % idx, -100) - 1\n yref = hdr.get(\"YVSLI%d\" % idx, -100) - 1\n target_coordinates_v = (xref, yref)\n\n mm.append((idx, target_type, target_coordinates, target_coordinates_v))\n\n bag = merge_slits(mm, max_slits=3, tol=1e-2)\n\n for idx, l_ids in bag.items():\n r_ids = [lid + EMIR_NBARS for lid in l_ids]\n\n lbars = {lid: self.bars[lid] for lid in l_ids}\n rbars = {rid: self.bars[rid] for rid in r_ids}\n\n this = LogicalSlit(idx, lbars=lbars, rbars=rbars)\n # References from header\n ref = mm[l_ids[0] - 1]\n this.target_type = ref[1]\n this.target_coordinates = ref[2]\n this.target_coordinates_v = ref[3]\n self.slits[idx] = this", "response": "Recreate the list of all the slits from header."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef update(self, duration):\n if duration >= 0:\n self.histogram.update(duration)\n self.meter.mark()", "response": "Add a recorded duration."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a 
brief explanation for the following Python 3 code\ndef _build_g5k_conf(vmong5k_conf):\n clusters = [m.cluster for m in vmong5k_conf.machines]\n sites = g5k_api_utils.get_clusters_sites(clusters)\n site_names = set(sites.values())\n if len(site_names) > 1:\n raise Exception(\"Multisite deployment not supported yet\")\n site = site_names.pop()\n return _do_build_g5k_conf(vmong5k_conf, site)", "response": "Build the g5k conf from the vmong5k conf."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef copy_model_instance(obj):\n meta = getattr(obj, '_meta') # make pycharm happy\n # dictionary of model values excluding auto created and related fields\n return {f.name: getattr(obj, f.name)\n for f in meta.get_fields(include_parents=False)\n if not f.auto_created}", "response": "Copy a Django model instance as a dictionary, excluding auto-created and many-to-one reverse relations."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef load_data(self, *args, **kwargs):\n # get positional argument names from parameters and apply them to args\n # update data with additional kwargs\n argpos = {\n v['extras']['argpos']: k for k, v in self.parameters.iteritems()\n if 'argpos' in v['extras']\n }\n data = dict(\n {argpos[n]: a for n, a in enumerate(args)}, **kwargs\n )\n return self.apply_units_to_cache(data)", "response": "Loads the data from the parameters and applies units to the cache."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\napplying units to data when a proxy reader is used.", "response": "def apply_units_to_cache(self, data):\n \"\"\"\n Applies units to data when a proxy reader is used. 
For example if the\n data is cached as JSON and retrieved using the\n :class:`~simkit.core.data_readers.JSONReader`, then units can be\n applied from the original parameter schema.\n\n :param data: Data read by proxy reader.\n :return: data with units applied\n \"\"\"\n # if units key exists then apply\n for k, v in self.parameters.iteritems():\n if v and v.get('units'):\n data[k] = Q_(data[k], v.get('units'))\n return data"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\napplying units to model.", "response": "def load_data(self, model_instance, *args, **kwargs):\n \"\"\"\n Apply units to model.\n :return: data\n \"\"\"\n model_dict = copy_model_instance(model_instance)\n return super(DjangoModelReader, self).load_data(**model_dict)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef cached(f):\n @functools.wraps(f)\n def wrapped(*args, **kwargs):\n with _cache_lock:\n identifier = (f.__name__, args, kwargs)\n key = pickle.dumps(identifier)\n value = cache.get(key)\n if value is not None:\n logger.debug(\"HIT for %s -> %s\" % (str(identifier), value))\n else:\n logger.debug(\"MISS for %s\" % str(identifier))\n value = f(*args, **kwargs)\n cache[key] = value\n return value\n return wrapped", "response": "Decorator for caching API call requests.\n "} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngets the reference to the API client singleton.", "response": "def get_api_client():\n \"\"\"Gets the reference to the API client (singleton).\"\"\"\n with _api_lock:\n global _api_client\n if not _api_client:\n conf_file = os.path.join(os.environ.get(\"HOME\"),\n \".python-grid5000.yaml\")\n _api_client = Client.from_yaml(conf_file)\n\n return _api_client"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreloads all running or pending jobs with a given name.", "response": "def 
grid_reload_from_name(job_name):\n \"\"\"Reload all running or pending jobs of Grid'5000 with a given name.\n\n By default all the sites will be searched for jobs with the name\n ``job_name``. Using EnOSlib there can be only one job per site with name\n ``job_name``.\n\n Note that it honors the ``exluded_sites`` attribute of the client so the\n scan can be reduced.\n\n Args:\n job_name (str): the job name\n\n\n Returns:\n The list of the python-grid5000 jobs retrieved.\n\n Raises:\n EnosG5kDuplicateJobsError: if there's several jobs with the same name\n on a site.\n \"\"\"\n gk = get_api_client()\n sites = get_all_sites_obj()\n jobs = []\n for site in [s for s in sites if s.uid not in gk.excluded_site]:\n logger.info(\"Reloading %s from %s\" % (job_name, site.uid))\n _jobs = site.jobs.list(name=job_name,\n state=\"waiting,launching,running\")\n if len(_jobs) == 1:\n logger.info(\"Reloading %s from %s\" % (_jobs[0].uid, site.uid))\n jobs.append(_jobs[0])\n elif len(_jobs) > 1:\n raise EnosG5kDuplicateJobsError(site, job_name)\n return jobs"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreloads all running or pending jobs of Grid'5000 from their ids", "response": "def grid_reload_from_ids(oargrid_jobids):\n \"\"\"Reload all running or pending jobs of Grid'5000 from their ids\n\n Args:\n oargrid_jobids (list): list of ``(site, oar_jobid)`` identifying the\n jobs on each site\n\n Returns:\n The list of python-grid5000 jobs retrieved\n \"\"\"\n gk = get_api_client()\n jobs = []\n for site, job_id in oargrid_jobids:\n jobs.append(gk.sites[site].jobs[job_id])\n return jobs"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ndestroys all the jobs with a given name.", "response": "def grid_destroy_from_name(job_name):\n \"\"\"Destroy all the jobs with a given name.\n\n Args:\n job_name (str): the job name\n \"\"\"\n jobs = grid_reload_from_name(job_name)\n for job in jobs:\n 
job.delete()\n logger.info(\"Killing the job (%s, %s)\" % (job.site, job.uid))"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ndestroys all the jobs with corresponding ids", "response": "def grid_destroy_from_ids(oargrid_jobids):\n \"\"\"Destroy all the jobs with corresponding ids\n\n Args:\n oargrid_jobids (list): the ``(site, oar_job_id)`` list of tuple\n identifying the jobs for each site. \"\"\"\n jobs = grid_reload_from_ids(oargrid_jobids)\n for job in jobs:\n job.delete()\n logger.info(\"Killing the jobs %s\" % oargrid_jobids)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef submit_jobs(job_specs):\n gk = get_api_client()\n jobs = []\n try:\n for site, job_spec in job_specs:\n logger.info(\"Submitting %s on %s\" % (job_spec, site))\n jobs.append(gk.sites[site].jobs.create(job_spec))\n except Exception as e:\n logger.error(\"An error occurred during the job submissions\")\n logger.error(\"Cleaning the jobs created\")\n for job in jobs:\n job.delete()\n raise(e)\n\n return jobs", "response": "Submit a list of jobs to the Grid5000 API."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nwaits for all the jobs to be running.", "response": "def wait_for_jobs(jobs):\n \"\"\"Waits for all the jobs to be running.\n\n Args:\n jobs(list): list of the python-grid5000 jobs to wait for\n\n\n Raises:\n Exception: if one of the jobs gets in error state.\n \"\"\"\n\n all_running = False\n while not all_running:\n all_running = True\n time.sleep(5)\n for job in jobs:\n job.refresh()\n scheduled = getattr(job, \"scheduled_at\", None)\n if scheduled is not None:\n logger.info(\"Waiting for %s on %s [%s]\" % (job.uid,\n job.site,\n _date2h(scheduled)))\n all_running = all_running and job.state == \"running\"\n if job.state == \"error\":\n raise Exception(\"The job %s is in error state\" % job)\n logger.info(\"All jobs are Running !\")"} 
{"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef grid_deploy(site, nodes, options):\n gk = get_api_client()\n environment = options.pop(\"env_name\")\n options.update(environment=environment)\n options.update(nodes=nodes)\n key_path = DEFAULT_SSH_KEYFILE\n options.update(key=key_path.read_text())\n logger.info(\"Deploying %s with options %s\" % (nodes, options))\n deployment = gk.sites[site].deployments.create(options)\n while deployment.status not in [\"terminated\", \"error\"]:\n deployment.refresh()\n print(\"Waiting for the end of deployment [%s]\" % deployment.uid)\n time.sleep(10)\n\n deploy = []\n undeploy = []\n if deployment.status == \"terminated\":\n deploy = [node for node, v in deployment.result.items()\n if v[\"state\"] == \"OK\"]\n undeploy = [node for node, v in deployment.result.items()\n if v[\"state\"] == \"KO\"]\n elif deployment.status == \"error\":\n undeploy = nodes\n return deploy, undeploy", "response": "Deploy and wait for the deployment to complete."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nsets the interface of the nodes in a specific vlan.", "response": "def set_nodes_vlan(site, nodes, interface, vlan_id):\n \"\"\"Set the interface of the nodes in a specific vlan.\n\n It is assumed that the same interface name is available on the node.\n\n Args:\n site(str): site to consider\n nodes(list): nodes to consider\n interface(str): the network interface to put in the vlan\n vlan_id(str): the id of the vlan\n \"\"\"\n def _to_network_address(host):\n \"\"\"Translate a host to a network address\n e.g:\n paranoia-20.rennes.grid5000.fr -> paranoia-20-eth2.rennes.grid5000.fr\n \"\"\"\n splitted = host.split('.')\n splitted[0] = splitted[0] + \"-\" + interface\n return \".\".join(splitted)\n\n gk = get_api_client()\n network_addresses = [_to_network_address(n) for n in nodes]\n gk.sites[site].vlans[str(vlan_id)].submit({\"nodes\": 
network_addresses})"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngets all the corresponding sites of the passed clusters.", "response": "def clusters_sites_obj(clusters):\n \"\"\"Get all the corresponding sites of the passed clusters.\n\n Args:\n clusters(list): list of string uid of sites (e.g 'rennes')\n\n Return:\n dict corresponding to the mapping cluster uid to python-grid5000 site\n \"\"\"\n result = {}\n all_clusters = get_all_clusters_sites()\n clusters_sites = {c: s for (c, s) in all_clusters.items()\n if c in clusters}\n for cluster, site in clusters_sites.items():\n\n # here we want the site python-grid5000 site object\n result.update({cluster: get_site_obj(site)})\n return result"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nget all the cluster of all the sites", "response": "def get_all_clusters_sites():\n \"\"\"Get all the cluster of all the sites.\n\n Returns:\n dict corresponding to the mapping cluster uid to python-grid5000 site\n \"\"\"\n result = {}\n gk = get_api_client()\n sites = gk.sites.list()\n for site in sites:\n clusters = site.clusters.list()\n result.update({c.uid: site.uid for c in clusters})\n return result"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nget all the nodes of a given cluster", "response": "def get_nodes(cluster):\n \"\"\"Get all the nodes of a given cluster.\n\n Args:\n cluster(string): uid of the cluster (e.g 'rennes')\n \"\"\"\n gk = get_api_client()\n site = get_cluster_site(cluster)\n return gk.sites[site].clusters[cluster].nodes.list()"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngetting the network interfaces that are part of a cluster.", "response": "def get_cluster_interfaces(cluster, extra_cond=lambda nic: True):\n \"\"\"Get the network interfaces names corresponding to a criteria.\n\n Note that the cluster is passed (not the individual node names), thus 
it is\n assumed that all nodes in a cluster have the same interface names and the same\n configuration. In addition to ``extra_cond``, only the mountable and\n Ethernet interfaces are returned.\n\n Args:\n cluster(str): the cluster to consider\n extra_cond(lambda): boolean lambda that takes the nic(dict) as\n parameter\n \"\"\"\n nics = get_nics(cluster)\n # NOTE(msimonin): Since 05/18 nics on g5k nodes have predictable names but\n # the api description keep the legacy name (device key) and the new\n # predictable name (key name). The legacy names is still used for api\n # request to the vlan endpoint This should be fixed in\n # https://intranet.grid5000.fr/bugzilla/show_bug.cgi?id=9272\n # When its fixed we should be able to only use the new predictable name.\n nics = [(nic['device'], nic['name']) for nic in nics\n if nic['mountable']\n and nic['interface'] == 'Ethernet'\n and not nic['management']\n and extra_cond(nic)]\n nics = sorted(nics)\n return nics"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning for each cluster the available cluster interfaces", "response": "def get_clusters_interfaces(clusters, extra_cond=lambda nic: True):\n \"\"\" Returns for each cluster the available cluster interfaces\n\n Args:\n clusters (str): list of the clusters\n extra_cond (lambda): extra predicate to filter network card retrieved\n from the API. E.g lambda nic: not nic['mounted'] will retrieve all the\n usable network cards that are not mounted by default.\n\n Returns:\n dict of cluster with their associated nic names\n\n Examples:\n .. 
code-block:: python\n\n # pseudo code\n actual = get_clusters_interfaces([\"paravance\"])\n expected = {\"paravance\": [\"eth0\", \"eth1\"]}\n assertDictEquals(expected, actual)\n \"\"\"\n\n interfaces = {}\n for cluster in clusters:\n nics = get_cluster_interfaces(cluster, extra_cond=extra_cond)\n interfaces.setdefault(cluster, nics)\n\n return interfaces"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncheck if a given set of nodes can be started on a given cluster.", "response": "def can_start_on_cluster(nodes_status,\n nodes,\n start,\n walltime):\n \"\"\"Check if #nodes can be started on a given cluster.\n\n This is intended to give a good enough approximation.\n This can be used to prefilter possible reservation dates before submitting\n them on oar.\n \"\"\"\n candidates = []\n for node, status in nodes_status.items():\n reservations = status.get(\"reservations\", [])\n # we search for the overlapping reservations\n overlapping_reservations = []\n for reservation in reservations:\n queue = reservation.get(\"queue\")\n if queue == \"besteffort\":\n # ignoring any besteffort reservation\n continue\n r_start = reservation.get(\"started_at\",\n reservation.get(\"scheduled_at\"))\n if r_start is None:\n break\n r_start = int(r_start)\n r_end = r_start + int(reservation[\"walltime\"])\n # compute segment intersection\n _intersect = min(r_end, start + walltime) - max(r_start, start)\n if _intersect > 0:\n overlapping_reservations.append(reservation)\n if len(overlapping_reservations) == 0:\n # this node can be accounted for a potential reservation\n candidates.append(node)\n if len(candidates) >= nodes:\n return True\n return False"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef chunked(iterable, n):\n return iter(functools.partial(take, n, iter(iterable)), [])", "response": "Break an iterable into lists of length n."} {"SOURCE": "codesearchnet", "instruction": "Here you 
have a function in Python 3, explain what it does\ndef add_data(self, rawdata):\n\n for data in rawdata:\n try:\n item = data[0]\n if item[0] == 2:\n # Flush messages but we got nothing to flush\n continue\n if item[0] != 0:\n warnings.warn(f\"Unknown message type '{item[0]}'\", Warning)\n continue\n item = item[1]\n # convert target to string because error codes are ints\n target = str(item[0])\n try:\n data = item[1]\n except IndexError:\n data = dict()\n try:\n method = getattr(self, self.__head + target)\n method(data)\n except AttributeError:\n self._handle_unhandled(target, data)\n except IndexError:\n LOGGER.warning(\"Wrongly constructed message received: %r\", data)\n\n self.conn.process_queues()", "response": "Add data to the given room's state"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nregistering a callback that we will have to wait for", "response": "def register_callback(self):\n \"\"\"Register callback that we will have to wait for\"\"\"\n\n cid = str(self.__cid)\n self.__cid += 1\n event = queue.Queue()\n self.__callbacks[cid] = event\n return cid, event"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nhandle Lain being helpful", "response": "def _handle_401(self, data):\n \"\"\"Handle Lain being helpful\"\"\"\n\n ex = ConnectionError(\n \"Can't login to a protected room without a proper password\", data\n )\n self.conn.reraise(ex)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nhandling Lain being helpful", "response": "def _handle_429(self, data):\n \"\"\"Handle Lain being helpful\"\"\"\n\n ex = IOError(\"Too fast\", data)\n self.conn.reraise(ex)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _handle_callback(self, data):\n\n cb_id = data.get(\"id\")\n args = data.get(\"args\")\n event = self.__callbacks.pop(cb_id, None)\n if not event:\n return\n if not args:\n event.put(args)\n return\n err, 
info = args\n if err is None:\n event.put(info)\n else:\n LOGGER.warning(\"Callback returned error of %s\", str(err))\n event.put(err)", "response": "Handle a callback. Only used with getFileinfo so far."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _handle_userCount(self, data):\n\n self.room.user_count = data\n self.conn.enqueue_data(\"user_count\", self.room.user_count)", "response": "Handle user count changes"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _handle_userInfo(self, data):\n\n for k, v in data.items():\n if k == \"nick\":\n if v == \"None\":\n v = \"Volaphile\"\n setattr(self.room.user, k, v)\n self.conn.enqueue_data(k, self.room.user.nick)\n elif k != \"profile\":\n if not hasattr(self.room, k):\n warnings.warn(f\"Skipping unset property {k}\", ResourceWarning)\n continue\n setattr(self.room, k, v)\n self.conn.enqueue_data(k, getattr(self.room, k))\n self.room.user_info = k, v\n self.conn.enqueue_data(\"user_info\", self.room.user_info)", "response": "Handle user info from the server"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nhandling initial config push and config changes", "response": "def _handle_config(self, data):\n \"\"\"Handle initial config push and config changes\"\"\"\n\n self.room.config.update(data)\n self.conn.enqueue_data(\"config\", data)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nhandling new files being uploaded", "response": "def _handle_files(self, data):\n \"\"\"Handle new files being uploaded\"\"\"\n\n initial = data.get(\"set\", False)\n files = data[\"files\"]\n for f in files:\n try:\n fobj = File(\n self.room,\n self.conn,\n f[0],\n f[1],\n type=f[2],\n size=f[3],\n expire_time=int(f[4]) / 1000,\n uploader=f[6].get(\"nick\") or f[6].get(\"user\"),\n )\n self.room.filedict = fobj.fid, fobj\n if not initial:\n 
self.conn.enqueue_data(\"file\", fobj)\n except Exception:\n import pprint\n\n LOGGER.exception(\"bad file\")\n pprint.pprint(f)\n if initial:\n self.conn.enqueue_data(\"initial_files\", self.room.files)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nhandles files being removed", "response": "def _handle_delete_file(self, data):\n \"\"\"Handle files being removed\"\"\"\n\n file = self.room.filedict.get(data)\n if file:\n self.room.filedict = data, None\n self.conn.enqueue_data(\"delete_file\", file)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _handle_chat_name(self, data):\n\n self.room.user.nick = data\n self.conn.enqueue_data(\"user\", self.room.user)", "response": "Handle user name changes"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngets the corrector for intensity flat", "response": "def get_corrector_f(rinput, meta, ins, datamodel):\n \"\"\"Corrector for intensity flat\"\"\"\n from emirdrp.processing.flatfield import FlatFieldCorrector\n flat_info = meta['master_flat']\n with rinput.master_flat.open() as hdul:\n _logger.info('loading intensity flat')\n _logger.debug('flat info: %s', flat_info)\n mflat = hdul[0].data\n # Check NaN and Ceros\n mask1 = mflat < 0\n mask2 = ~numpy.isfinite(mflat)\n if numpy.any(mask1):\n _logger.warning('flat has %d values below 0', mask1.sum())\n if numpy.any(mask2):\n _logger.warning('flat has %d NaN', mask2.sum())\n flat_corrector = FlatFieldCorrector(mflat,\n datamodel=datamodel,\n calibid=datamodel.get_imgid(hdul))\n\n return flat_corrector"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nlogging the contents of Recipe Input", "response": "def loginfo(method):\n \"\"\"Log the contents of Recipe Input\"\"\"\n\n\n def loginfo_method(self, rinput):\n\n klass = rinput.__class__\n\n for key in klass.stored():\n val = getattr(rinput, key)\n if 
isinstance(val, DataFrame):\n self.logger.debug(\"DataFrame %s\", info.gather_info_dframe(val))\n elif isinstance(val, ObservationResult):\n for f in val.images:\n self.logger.debug(\"OB DataFrame %s\" , info.gather_info_dframe(f))\n else:\n pass\n\n result = method(self, rinput)\n\n return result\n\n return loginfo_method"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncopy or coerce to a Host.", "response": "def to_host(self):\n \"\"\"Copy or coerce to a Host.\"\"\"\n return Host(self.address,\n alias=self.alias,\n user=self.user,\n keyfile=self.keyfile,\n port=self.port,\n extra=self.extra)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_table_info(*table_names):\n ret = {}\n if table_names:\n for table_name in table_names:\n for name, inter in get_interfaces().items():\n if inter.has_table(table_name):\n yield table_name, inter, inter.get_fields(table_name)\n\n else:\n for name, inter in get_interfaces().items():\n table_names = inter.get_tables()\n for table_name in table_names:\n yield table_name, inter, inter.get_fields(table_name)", "response": "Returns a dict with table_name keys mapped to the Interface that table exists in\n "} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nconverts virtual pixel to real pixel", "response": "def exvp(pos_x, pos_y):\n \"\"\"Convert virtual pixel to real pixel\"\"\"\n pos_x = numpy.asarray(pos_x)\n pos_y = numpy.asarray(pos_y)\n # convert virtual pixel to real pixel\n # convert world coordinate to pixel\n center = [1024.5, 1024.5]\n cf = EMIR_PLATESCALE_RADS\n pos_base_x = pos_x - center[0]\n pos_base_y = pos_y - center[1]\n ra = numpy.hypot(pos_base_x, pos_base_y)\n thet = numpy.arctan2(pos_base_y, pos_base_x)\n\n r = cf * ra\n\n rr1 = 1 + 14606.7 * r**2 + 1739716115.1 * r**4\n\n nx1 = rr1 * ra * numpy.cos(thet) + center[0]\n ny1 = rr1 * ra * numpy.sin(thet) + 
center[1]\n return nx1, ny1"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef match_sp_sep(first, second):\n if isinstance(first, list):\n one = [set(v.split(\" \")) for v in first]\n else:\n one = [{v} for v in first.split(\" \")]\n\n if isinstance(second, list):\n other = [set(v.split(\" \")) for v in second]\n else:\n other = [{v} for v in second.split(\" \")]\n\n # all values in one must appear in other\n if any(rt not in other for rt in one):\n return False\n return True", "response": "Verify that all the values in first appear in second."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nverifying that the sector identifier_uri is reachable and that it contains redirect_uri s.", "response": "def _verify_sector_identifier(self, request):\n \"\"\"\n Verify `sector_identifier_uri` is reachable and that it contains \n `redirect_uri`s.\n\n :param request: Provider registration request\n :return: si_redirects, sector_id\n :raises: InvalidSectorIdentifier\n \"\"\"\n si_url = request[\"sector_identifier_uri\"]\n try:\n res = self.endpoint_context.httpc.get(si_url)\n except Exception as err:\n logger.error(err)\n res = None\n\n if not res:\n raise InvalidSectorIdentifier(\"Couldn't read from sector_identifier_uri\")\n\n logger.debug(\"sector_identifier_uri => %s\", sanitize(res.text))\n\n try:\n si_redirects = json.loads(res.text)\n except ValueError:\n raise InvalidSectorIdentifier(\n \"Error deserializing sector_identifier_uri content\")\n\n if \"redirect_uris\" in request:\n logger.debug(\"redirect_uris: %s\", request[\"redirect_uris\"])\n for uri in request[\"redirect_uris\"]:\n if uri not in si_redirects:\n raise InvalidSectorIdentifier(\n \"redirect_uri missing from sector_identifiers\")\n\n return si_redirects, si_url"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nread and analyse the next line of the SExtractor file and return a dictionary.", 
"response": "def readline(self):\n \"\"\"\n Read and analyse the next line of the SExtractor catalog\n and return a dictionary {'param1': value, 'param2': value, ...}.\n \"\"\"\n if not(self._firstline):\n self._line = self._file.readline()\n\n self._firstline = False\n\n if not(self._line):\n return None\n __ll = (self._line).replace('\\n', '')\n __values = __ll.split()\n self._outdict.update(dict(zip(self._keys, __values)))\n for i in self._keys:\n self._outdict[i] = (\n SExtractorfile._SE_keys[i][\"infunc\"](self._outdict[i]))\n\n return self._outdict.copy()"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef read(self):\n __result = []\n\n __ll = self.readline()\n while __ll:\n __result.append(__ll)\n __ll = self.readline()\n\n return list(__result)", "response": "Read the file until EOF and return a list of dictionaries."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef close(self):\n if self._file:\n if not(self._file.closed):\n self._file.close()\n self.closed = True", "response": "Close the SExtractor file."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nbuilds an instance of the driver to interact with G5K", "response": "def get_driver(configuration):\n \"\"\"Build an instance of the driver to interact with G5K\n \"\"\"\n resources = configuration[\"resources\"]\n machines = resources[\"machines\"]\n networks = resources[\"networks\"]\n oargrid_jobids = configuration.get(\"oargrid_jobids\")\n\n if oargrid_jobids:\n logger.debug(\"Loading the OargridStaticDriver\")\n return OargridStaticDriver(oargrid_jobids)\n else:\n job_name = configuration.get(\"job_name\", DEFAULT_JOB_NAME)\n walltime = configuration.get(\"walltime\", DEFAULT_WALLTIME)\n job_type = configuration.get(\"job_type\", JOB_TYPE_DEPLOY)\n reservation_date = configuration.get(\"reservation\", False)\n # NOTE(msimonin): some time ago 
asimonet proposes to auto-detect\n # the queues and it was quiet convenient\n # see https://github.com/BeyondTheClouds/enos/pull/62\n queue = configuration.get(\"queue\", None)\n logger.debug(\"Loading the OargridDynamicDriver\")\n\n return OargridDynamicDriver(\n job_name,\n walltime,\n job_type,\n reservation_date,\n queue,\n machines,\n networks\n )"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef ver_dec_content(parts, sign_key=None, enc_key=None, sign_alg='SHA256'):\n\n if parts is None:\n return None\n elif len(parts) == 3:\n # verify the cookie signature\n timestamp, load, b64_mac = parts\n mac = base64.b64decode(b64_mac)\n verifier = HMACSigner(algorithm=sign_alg)\n if verifier.verify(load.encode('utf-8') + timestamp.encode('utf-8'),\n mac, sign_key.key):\n return load, timestamp\n else:\n raise VerificationError()\n elif len(parts) == 4:\n b_timestamp = parts[0]\n iv = base64.b64decode(parts[1])\n ciphertext = base64.b64decode(parts[2])\n tag = base64.b64decode(parts[3])\n\n decrypter = AES_GCMEncrypter(key=enc_key.key)\n msg = decrypter.decrypt(ciphertext, iv, tag=tag)\n p = lv_unpack(msg.decode('utf-8'))\n load = p[0]\n timestamp = p[1]\n if len(p) == 3:\n verifier = HMACSigner(algorithm=sign_alg)\n if verifier.verify(load.encode('utf-8') + timestamp.encode('utf-8'),\n base64.b64decode(p[2]), sign_key.key):\n return load, timestamp\n else:\n return load, timestamp\n return None", "response": "Verifies the value of a cookie and returns the load and timestamp."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncreate and return a SimpleCookie instance for the cookie.", "response": "def make_cookie_content(name, load, sign_key, domain=None, path=None,\n timestamp=\"\", enc_key=None, max_age=0,\n sign_alg='SHA256'):\n \"\"\"\n Create and return a cookies content\n\n If you only provide a `seed`, a HMAC gets added to the cookies value\n and this is checked, when the cookie is 
parsed again.\n\n If you provide both `seed` and `enc_key`, the cookie gets protected\n by using AEAD encryption. This provides both a MAC over the whole cookie\n and encrypts the `load` in a single step.\n\n The `seed` and `enc_key` parameters should be byte strings of at least\n 16 bytes length each. Those are used as cryptographic keys.\n\n :param name: Cookie name\n :type name: text\n :param load: Cookie load\n :type load: text\n :param sign_key: A sign_key key for payload signing\n :type sign_key: A :py:class:`cryptojwt.jwk.hmac.SYMKey` instance\n :param domain: The domain of the cookie\n :param path: The path specification for the cookie\n :param timestamp: A time stamp\n :type timestamp: text\n :param enc_key: The key to use for payload encryption.\n :type enc_key: A :py:class:`cryptojwt.jwk.hmac.SYMKey` instance\n :param max_age: The time in seconds for when a cookie will be deleted\n :type max_age: int\n :return: A SimpleCookie instance\n \"\"\"\n if not timestamp:\n timestamp = str(int(time.time()))\n\n _cookie_value = sign_enc_payload(load, timestamp, sign_key=sign_key,\n enc_key=enc_key, sign_alg=sign_alg)\n\n content = {name: {\"value\": _cookie_value}}\n if path is not None:\n content[name][\"path\"] = path\n if domain is not None:\n content[name][\"domain\"] = domain\n\n content[name]['httponly'] = True\n\n if max_age:\n content[name][\"expires\"] = in_a_while(seconds=max_age)\n\n return content"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns a list of parts of the cookie payload", "response": "def cookie_parts(name, kaka):\n \"\"\"\n Give me the parts of the cookie payload\n\n :param name: A name of a cookie object\n :param kaka: The cookie\n :return: A list of parts or None if there is no cookie object with the\n given name\n \"\"\"\n cookie_obj = SimpleCookie(as_unicode(kaka))\n morsel = cookie_obj.get(name)\n if morsel:\n return morsel.value.split(\"|\")\n else:\n return None"} {"SOURCE": 
"codesearchnet", "instruction": "Can you write a function in Python 3 where it\nparses and verifies a cookie value.", "response": "def parse_cookie(name, sign_key, kaka, enc_key=None, sign_alg='SHA256'):\n \"\"\"Parses and verifies a cookie value\n\n Parses a cookie created by `make_cookie` and verifies\n it has not been tampered with.\n\n You need to provide the same `sign_key` and `enc_key`\n used when creating the cookie, otherwise the verification\n fails. See `make_cookie` for details about the verification.\n\n :param sign_key: A signing key used to create the signature\n :type sign_key: A :py:class:`cryptojwt.jwk.hmac.SYMKey` instance\n :param kaka: The cookie\n :param enc_key: The encryption key used.\n :type enc_key: A :py:class:`cryptojwt.jwk.hmac.SYMKey` instance or None\n :raises InvalidCookieSign: When verification fails.\n :return: A tuple consisting of (payload, timestamp) or None if parsing fails\n \"\"\"\n if not kaka:\n return None\n\n parts = cookie_parts(name, kaka)\n\n return ver_dec_content(parts, sign_key, enc_key, sign_alg)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef delete_cookie(self, cookie_name=None):\n if cookie_name is None:\n cookie_name = self.default_value['name']\n\n return self.create_cookie(\"\", \"\", cookie_name=cookie_name, kill=True)", "response": "Create a cookie that will immediately expire, effectively deleting the cookie with the given name."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_cookie_value(self, cookie=None, cookie_name=None):\n if cookie_name is None:\n cookie_name = self.default_value['name']\n\n if cookie is None or cookie_name is None:\n return None\n else:\n try:\n info, timestamp = parse_cookie(cookie_name, self.sign_key, cookie,\n self.enc_key, self.sign_alg)\n except (TypeError, AssertionError):\n return None\n else:\n value, _ts, typ = info.split(\"::\")\n if timestamp == _ts:\n return value, _ts, typ\n 
return None", "response": "Get the value of a cookie."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nadds a cookie to a SimpleCookie instance.", "response": "def append_cookie(self, cookie, name, payload, typ, domain=None, path=None,\n timestamp=\"\", max_age=0):\n \"\"\"\n Adds a cookie to a SimpleCookie instance\n\n :param cookie:\n :param name:\n :param payload:\n :param typ:\n :param domain:\n :param path:\n :param timestamp:\n :param max_age:\n :return:\n \"\"\"\n timestamp = str(int(time.time()))\n\n # create cookie payload\n try:\n _payload = \"::\".join([payload, timestamp, typ])\n except TypeError:\n _payload = \"::\".join([payload[0], timestamp, typ])\n\n content = make_cookie_content(name, _payload, self.sign_key, domain=domain,\n path=path, timestamp=timestamp,\n enc_key=self.enc_key, max_age=max_age,\n sign_alg=self.sign_alg)\n\n for name, args in content.items():\n cookie[name] = args['value']\n for key, value in args.items():\n if key == 'value':\n continue\n cookie[name][key] = value\n\n return cookie"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncalculates AC power of a node.", "response": "def f_ac_power(inverter, v_mp, p_mp):\n \"\"\"\n Calculate AC power\n\n :param inverter:\n :param v_mp:\n :param p_mp:\n :return: AC power [W]\n \"\"\"\n return pvlib.pvsystem.snlinverter(v_mp, p_mp, inverter).flatten()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ncalculating DC power using Sandia Performance model", "response": "def f_dc_power(effective_irradiance, cell_temp, module):\n \"\"\"\n Calculate DC power using Sandia Performance model\n\n :param effective_irradiance: effective irradiance [suns]\n :param cell_temp: PV cell temperature [degC]\n :param module: PV module dictionary or pandas data frame\n :returns: i_sc, i_mp, v_oc, v_mp, p_mp\n \"\"\"\n dc = pvlib.pvsystem.sapm(effective_irradiance, cell_temp, module)\n fields = 
('i_sc', 'i_mp', 'v_oc', 'v_mp', 'p_mp')\n return tuple(dc[field] for field in fields)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef f_effective_irradiance(poa_direct, poa_diffuse, am_abs, aoi, module):\n Ee = pvlib.pvsystem.sapm_effective_irradiance(poa_direct, poa_diffuse,\n am_abs, aoi, module)\n return Ee.reshape(1, -1)", "response": "Calculates effective irradiance for Sandia Performance model"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncalculate cell temperature. :param poa_global: plane of array global irradiance [W/m**2] :param wind_speed: wind speed [m/s] :param air_temp: ambient dry bulb air temperature [degC] :return: cell temperature [degC]", "response": "def f_cell_temp(poa_global, wind_speed, air_temp):\n \"\"\"\n Calculate cell temperature.\n\n :param poa_global: plane of array global irradiance [W/m**2]\n :param wind_speed: wind speed [m/s]\n :param air_temp: ambient dry bulb air temperature [degC]\n :return: cell temperature [degC]\n \"\"\"\n temps = pvlib.pvsystem.sapm_celltemp(poa_global, wind_speed, air_temp)\n return temps['temp_cell'].values, temps['temp_module'].values"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef f_aoi(surface_tilt, surface_azimuth, solar_zenith, solar_azimuth):\n return pvlib.irradiance.aoi(surface_tilt, surface_azimuth,\n solar_zenith, solar_azimuth)", "response": "Calculate the angle of incidence of sunlight on the array surface."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef find_version(filename):\n with open(filename, \"r\") as filep:\n version_file = filep.read()\n version_match = re.search(\n r\"^__version__ = ['\\\"]([^'\\\"]*)['\\\"]\", version_file, re.M\n )\n if version_match:\n return version_match.group(1)\n raise RuntimeError(\"Unable to find version string.\")", "response": "Find the version string in given file 
and return it."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef save_ndarray_to_fits(array=None, file_name=None,\n main_header=None,\n cast_to_float=True,\n crpix1=None, crval1=None, cdelt1=None,\n overwrite=True):\n \"\"\"Save numpy array(s) into a FITS file with the provided filename.\n\n Parameters\n ----------\n array : numpy array or list of numpy arrays\n Array(s) to be exported as the FITS file. If the input is\n a list, a multi-extension FITS file is generated assuming\n that the list contains a list of arrays.\n file_name : string\n File name for the output FITS file.\n main_header : astropy FITS header\n Header to be introduced in the primary HDU.\n cast_to_float : bool or list of booleans\n If True, the array(s) data are save as float. If a list of\n arrays has been provided, this parameter must be either a list\n (with the same length) of booleans or None.\n crpix1 : float, list of floats or None\n If not None, this value is used for the keyword CRPIX1.\n If a list of arrays has been provided, this paramater must be\n either a list (with the same length) of floats or None.\n crval1 : float, list of floats or None\n If not None, this value is used for the keyword CRVAL1.\n If a list of arrays has been provided, this paramater must be\n either a list (with the same length) of floats or None.\n cdelt1 : float, list of floats or None\n If not None, this value is used for the keyword CDELT1.\n If a list of arrays has been provided, this paramater must be\n either a list (with the same length) of floats or None.\n overwrite : bool\n If True, the file is overwritten (in the case it already\n exists).\n\n \"\"\"\n\n # protections\n if file_name is None:\n raise ValueError(\"File_name is not defined in save_ndarray_to_fits\")\n\n if type(array) is list:\n list_of_arrays = array\n narrays = len(list_of_arrays)\n # cast_to_float must be a list of bools\n if type(cast_to_float) is not list:\n 
raise ValueError(\"Expected list of cast_to_float not found!\")\n else:\n if len(cast_to_float) != narrays:\n raise ValueError(\"Unexpected length of cast_to_float\")\n list_cast_to_float = cast_to_float\n # check that the additional associated lists have been provided\n # and that they have the expected length (or they are None)\n for ldum, cdum in zip([crpix1, crval1, cdelt1],\n ['crpix1', 'crval1', 'cdelt1']):\n if ldum is not None:\n if type(ldum) is not list:\n raise ValueError(\"Expected list of \" + cdum +\n \" not found!\")\n else:\n if len(ldum) != narrays:\n raise ValueError(\"Unexpected length of \" + cdum)\n if crpix1 is None:\n list_crpix1 = [None] * narrays\n else:\n list_crpix1 = crpix1\n if crval1 is None:\n list_crval1 = [None] * narrays\n else:\n list_crval1 = crval1\n if cdelt1 is None:\n list_cdelt1 = [None] * narrays\n else:\n list_cdelt1 = cdelt1\n else:\n list_of_arrays = [array]\n list_cast_to_float = [cast_to_float]\n list_crpix1 = [crpix1]\n list_crval1 = [crval1]\n list_cdelt1 = [cdelt1]\n\n hdulist = fits.HDUList()\n\n for ihdu, tmp_array in enumerate(list_of_arrays):\n if type(tmp_array) is not np.ndarray:\n raise ValueError(\"Array#\" + str(ihdu) + \"=\" + str(tmp_array) +\n \" must be a numpy.ndarray\")\n if ihdu == 0:\n if list_cast_to_float[ihdu]:\n hdu = fits.PrimaryHDU(data=tmp_array.astype(np.float),\n header=main_header)\n else:\n hdu = fits.PrimaryHDU(data=tmp_array, header=main_header)\n else:\n if list_cast_to_float[ihdu]:\n hdu = fits.ImageHDU(data=tmp_array.astype(np.float))\n else:\n hdu = fits.ImageHDU(data=tmp_array)\n\n # set additional FITS keywords if requested\n tmp_crpix1 = list_crpix1[ihdu]\n if tmp_crpix1 is not None:\n hdu.header.set('CRPIX1', tmp_crpix1, 'Reference pixel')\n tmp_crval1 = list_crval1[ihdu]\n if tmp_crval1 is not None:\n hdu.header.set('CRVAL1', tmp_crval1,\n 'Reference wavelength corresponding to CRPIX1')\n tmp_cdelt1 = list_cdelt1[ihdu]\n if tmp_cdelt1 is not None:\n hdu.header.set('CDELT1', 
tmp_cdelt1,\n 'Linear dispersion (angstrom/pixel)')\n\n # add HDU to HDUList\n hdulist.append(hdu)\n\n # write output file\n hdulist.writeto(file_name, overwrite=overwrite)", "response": "Save numpy array into a FITS file with the provided filename."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef find_position(edges, prow, bstart, bend, total=5):\n\n nt = total // 2\n\n # This bar is too near the border\n if prow-nt < 0 or prow + nt >= edges.shape[0]:\n return []\n\n s2edges = edges[prow-nt:prow+nt+1, bstart:bend]\n\n structure = morph.generate_binary_structure(2,2) # 8 way conection\n har, num_f = mes.label(s2edges, structure=structure)\n\n cen_of_mass = mes.center_of_mass(s2edges, labels=har, index=range(1, num_f + 1))\n\n # center_of_mass returns y, x coordinates\n\n cen_of_mass_off = [(x + bstart, prow-nt + y) for y,x in cen_of_mass]\n\n return cen_of_mass_off", "response": "Find a EMIR CSU bar position in a 2d edge image."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncomputes the FWHM of the maximum image in the given axis", "response": "def calc_fwhm(img, region, fexpand=3, axis=0):\n \"\"\"Compute the FWHM in the direction given by axis\"\"\"\n\n # We compute know the FWHM of the slit\n # Given the computed position of the slit\n # Expand 'fexpand' pixels around\n # and cut an slice in the median filtered image\n\n xpregion = expand_region(region, fexpand, fexpand)\n cslit = img[xpregion]\n\n # Collapse it\n pslit = cslit.mean(axis=axis)\n\n # Estimate the background as a flat line\n # starting in pslit[0] and ending in pslit[-1]\n x2 = len(pslit)\n y1, y2 = pslit[0], pslit[-1]\n mslope = (y2-y1) / x2\n # background estimation\n backstim = mslope*numpy.arange(x2) + y1\n\n # We subtract background\n qslit = pslit-backstim\n # and find the pixel of the maximum\n pidx = numpy.argmax(qslit)\n peak, fwhm = fmod.compute_fwhm_1d_simple(qslit, pidx)\n return fwhm"} 
{"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nfind the first peak to the right of start", "response": "def simple_prot(x, start):\n \"\"\"Find the first peak to the right of start\"\"\"\n\n # start must b >= 1\n\n for i in range(start,len(x)-1):\n a,b,c = x[i-1], x[i], x[i+1]\n if b - a > 0 and b -c >= 0:\n return i\n else:\n return None"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nfinding the position where the value is half of the peak", "response": "def position_half_h(pslit, cpix, backw=4):\n \"\"\"Find the position where the value is half of the peak\"\"\"\n\n # Find the first peak to the right of cpix\n next_peak = simple_prot(pslit, cpix)\n\n if next_peak is None:\n raise ValueError\n\n dis_peak = next_peak - cpix\n\n wpos2 = cpix - dis_peak\n wpos1 = wpos2 - backw\n\n # Compute background in a window of width backw\n # in a position simetrical to the peak\n # around cpix\n left_background = pslit[wpos1:wpos2].min()\n\n # height of the peak\n height = pslit[next_peak] - left_background\n\n half_height = left_background + 0.5 * height\n\n # Position at halg peak, linear interpolation\n vv = pslit[wpos1:next_peak+1] - half_height\n\n res1, = numpy.nonzero(numpy.diff(vv > 0))\n i1 = res1[0]\n\n xint = wpos1 + i1 + (0 - vv[i1]) / (vv[i1+1] - vv[i1])\n\n return xint, next_peak, wpos1, wpos2, left_background, half_height"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nfine position of the left CSU bar", "response": "def locate_bar_l(icut, epos):\n \"\"\"Fine position of the left CSU bar\"\"\"\n def swap_coor(x):\n return x\n\n def swap_line(tab):\n return tab\n\n return _locate_bar_gen(icut, epos,\n transform1=swap_coor,\n transform2=swap_line\n )"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nfining position of the right CSU bar", "response": "def locate_bar_r(icut, epos):\n \"\"\"Fine position of the right CSU bar\"\"\"\n sm = 
len(icut)\n\n def swap_coor(x):\n return sm - 1 - x\n\n def swap_line(tab):\n return tab[::-1]\n\n return _locate_bar_gen(icut, epos, transform1=swap_coor,\n transform2=swap_line)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef overlap(intv1, intv2):\n return max(0, min(intv1[1], intv2[1]) - max(intv1[0], intv2[0]))", "response": "Overlaps of two intervals"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nconvert virtual pixel to real pixel.", "response": "def exvp_scalar(x, y, x0, y0, c2, c4, theta0, ff):\n \"\"\"Convert virtual pixel to real pixel.\n\n Parameters\n ----------\n x : array-like of floats\n X coordinate (pixel).\n y : array-like of floats\n Y coordinate (pixel).\n x0 : float\n X coordinate of reference pixel, in units of 1E3.\n y0 : float\n Y coordinate of reference pixel, in units of 1E3.\n c2 : float\n Coefficient corresponding to the term r**2 in distortion\n equation, in units of 1E4.\n c4 : float\n Coefficient corresponding to the term r**4 in distortion\n equation, in units of 1E9\n theta0 : float\n Additional rotation angle (radians).\n ff : float\n Scaling factor to be applied to the Y axis.\n\n Returns\n -------\n xdist, ydist : tuple of floats\n Distorted coordinates.\n\n \"\"\"\n\n # plate scale: 0.1944 arcsec/pixel\n # conversion factor (in radian/pixel)\n factor = 0.1944 * np.pi/(180.0*3600)\n # distance from image center (pixels)\n r_pix = np.sqrt((x - x0*1000)**2 + (y - y0*1000)**2)\n # distance from imagen center (radians)\n r_rad = factor * r_pix\n # radial distortion: this number is 1.0 for r=0 and increases\n # slightly (reaching values around 1.033) for r~sqrt(2)*1024\n # (the distance to the corner of the detector measured from the\n # center)\n rdist = (1 +\n c2 * 1.0E4 * r_rad**2 +\n c4 * 1.0E9 * r_rad**4)\n # angle measured from the Y axis towards the X axis\n theta = np.arctan((x - x0*1000)/(y - y0*1000))\n if y < 
y0*1000:\n theta = theta - np.pi\n # distorted coordinates\n xdist = (rdist * r_pix * np.sin(theta+theta0)) + x0*1000\n ydist = (ff * rdist * r_pix * np.cos(theta+theta0)) + y0*1000\n\n return xdist, ydist"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef exvp(x, y, x0, y0, c2, c4, theta0, ff):\n\n if all([np.isscalar(x), np.isscalar(y)]):\n xdist, ydist = exvp_scalar(x, y, x0=x0, y0=y0,\n c2=c2, c4=c4, theta0=theta0, ff=ff)\n return xdist, ydist\n elif any([np.isscalar(x), np.isscalar(y)]):\n raise ValueError(\"invalid mixture of scalars and arrays\")\n else:\n xdist = []\n ydist = []\n for x_, y_ in zip(x, y):\n xdist_, ydist_ = exvp_scalar(x_, y_, x0=x0, y0=y0,\n c2=c2, c4=c4, theta0=theta0, ff=ff)\n xdist.append(xdist_)\n ydist.append(ydist_)\n return np.array(xdist), np.array(ydist)", "response": "This function converts virtual pixel to real pixel."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef return_params(islitlet, csu_bar_slit_center, params, parmodel):\n\n if parmodel == \"longslit\":\n # set each variable in EXPECTED_PARAMETER_LIST to the value\n # transferred through 'params'\n c2 = params['c2'].value\n c4 = params['c4'].value\n ff = params['ff'].value\n slit_gap = params['slit_gap'].value\n slit_height = params['slit_height'].value\n theta0_origin = params['theta0_origin'].value\n theta0_slope = params['theta0_slope'].value\n x0 = params['x0'].value\n y0 = params['y0'].value\n y_baseline = params['y_baseline'].value\n else:\n # set each variable in EXPECTED_PARAMETER_LIST_EXTENDED to the value\n # transferred through 'params'\n c2_a0s = params['c2_a0s'].value\n c2_a1s = params['c2_a1s'].value / 1E3\n c2_a2s = params['c2_a2s'].value / 1E6\n c2 = c2_a0s + \\\n c2_a1s * csu_bar_slit_center + \\\n c2_a2s * csu_bar_slit_center ** 2\n # ---\n c4_a0s = params['c4_a0s'].value\n c4_a1s = params['c4_a1s'].value / 1E3\n c4_a2s = 
params['c4_a2s'].value / 1E6\n c4 = c4_a0s + \\\n c4_a1s * csu_bar_slit_center + \\\n c4_a2s * csu_bar_slit_center ** 2\n # ---\n ff_a0s = params['ff_a0s'].value\n ff_a1s = params['ff_a1s'].value / 1E3\n ff_a2s = params['ff_a2s'].value / 1E6\n ff = ff_a0s + \\\n ff_a1s * csu_bar_slit_center + \\\n ff_a2s * csu_bar_slit_center ** 2\n # ---\n slit_gap_a0s = params['slit_gap_a0s'].value\n slit_gap_a1s = params['slit_gap_a1s'].value / 1E3\n slit_gap_a2s = params['slit_gap_a2s'].value / 1E6\n slit_gap = slit_gap_a0s + \\\n slit_gap_a1s * csu_bar_slit_center + \\\n slit_gap_a2s * csu_bar_slit_center ** 2\n # ---\n slit_height_a0s = params['slit_height_a0s'].value\n slit_height_a1s = params['slit_height_a1s'].value / 1E3\n slit_height_a2s = params['slit_height_a2s'].value / 1E6\n slit_height = slit_height_a0s + \\\n slit_height_a1s * csu_bar_slit_center + \\\n slit_height_a2s * csu_bar_slit_center ** 2\n # ---\n theta0_origin_a0s = params['theta0_origin_a0s'].value\n theta0_origin_a1s = params['theta0_origin_a1s'].value / 1E3\n theta0_origin_a2s = params['theta0_origin_a2s'].value / 1E6\n theta0_origin = theta0_origin_a0s + \\\n theta0_origin_a1s * csu_bar_slit_center + \\\n theta0_origin_a2s * csu_bar_slit_center ** 2\n # ---\n theta0_slope_a0s = params['theta0_slope_a0s'].value\n theta0_slope_a1s = params['theta0_slope_a1s'].value / 1E3\n theta0_slope_a2s = params['theta0_slope_a2s'].value / 1E6\n theta0_slope = theta0_slope_a0s + \\\n theta0_slope_a1s * csu_bar_slit_center + \\\n theta0_slope_a2s * csu_bar_slit_center ** 2\n # ---\n x0_a0s = params['x0_a0s'].value\n x0_a1s = params['x0_a1s'].value / 1E3\n x0_a2s = params['x0_a2s'].value / 1E6\n x0 = x0_a0s + \\\n x0_a1s * csu_bar_slit_center + \\\n x0_a2s * csu_bar_slit_center ** 2\n # ---\n y0_a0s = params['y0_a0s'].value\n y0_a1s = params['y0_a1s'].value / 1E3\n y0_a2s = params['y0_a2s'].value / 1E6\n y0 = y0_a0s + \\\n y0_a1s * csu_bar_slit_center + \\\n y0_a2s * csu_bar_slit_center ** 2\n # ---\n y_baseline_a0s = 
params['y_baseline_a0s'].value\n y_baseline_a1s = params['y_baseline_a1s'].value / 1E3\n y_baseline_a2s = params['y_baseline_a2s'].value / 1E6\n y_baseline = y_baseline_a0s + \\\n y_baseline_a1s * csu_bar_slit_center + \\\n y_baseline_a2s * csu_bar_slit_center ** 2\n\n theta0 = theta0_origin / 1E3 + theta0_slope / 1E4 * islitlet\n\n return c2, c4, ff, slit_gap, slit_height, theta0, x0, y0, y_baseline", "response": "Return individual model parameters from object of type Parameters."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns the expected distorted boundaries of a given set of parameters.", "response": "def expected_distorted_boundaries(islitlet, csu_bar_slit_center,\n borderlist, params, parmodel,\n numpts, deg, debugplot=0):\n \"\"\"Return expected SpectrumTrail instances associated to a given slitlet.\n\n Several SpectrumTrail objects can be computed for the considered\n slitlet. The parameter borderlist is a list of floats, ranging\n from 0 to 1, indicating the spatial location of the spectrum trail\n within the slitlet: 0 means the lower boundary and 1 corresponds\n to the upper boundary. Any value in (0,1) will provide the\n spectrum trail located in between accordingly.\n\n Parameters\n ----------\n islitlet : int\n Number of slitlet.\n csu_bar_slit_center : float\n CSU bar slit center, in mm.\n borderlist : list of floats\n Each float provides the fractional vertical location of the\n spectrum trail relative to the lower boundary. In other words,\n 0.0 corresponds to the lower boundary, 1.0 to the upper\n boundary, and any number in the interval (0,1) will be a\n spectral trail in between.\n params : :class:`~lmfit.parameter.Parameters`\n Parameters to be employed in the prediction of the distorted\n boundaries.\n parmodel : str\n Model to be assumed. 
Allowed values are 'longslit' and\n 'multislit'.\n numpts : int\n Number of points in which the X-range interval is subdivided\n before fitting the returned polynomial(s).\n deg : int\n Degree of the fitted polynomial.\n debugplot : int\n Debugging level for messages and plots. For details see\n 'numina.array.display.pause_debugplot.py'.\n\n Returns\n -------\n list_spectrails : list of SpectrumTrail objects\n List containing the fitted spectrum trails.\n\n \"\"\"\n\n c2, c4, ff, slit_gap, slit_height, theta0, x0, y0, y_baseline = \\\n return_params(islitlet, csu_bar_slit_center, params, parmodel)\n\n xp = np.linspace(1, EMIR_NAXIS1, numpts)\n slit_dist = (slit_height * 10) + slit_gap\n\n # undistorted (constant) y-coordinate of the lower and upper boundaries\n ybottom = y_baseline * 100 + (islitlet - 1) * slit_dist\n ytop = ybottom + (slit_height * 10)\n\n list_spectrails = []\n for borderval in borderlist:\n yvalue = ybottom + borderval * (ytop - ybottom)\n # undistorted boundary\n yp_value = np.ones(numpts) * yvalue\n # distorted boundary\n xdist, ydist = exvp(xp, yp_value, x0=x0, y0=y0,\n c2=c2, c4=c4, theta0=theta0, ff=ff)\n spectrail = SpectrumTrail() # declare SpectrumTrail instance\n spectrail.fit(x=xdist, y=ydist, deg=deg, debugplot=debugplot)\n list_spectrails.append(spectrail)\n\n return list_spectrails"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nfunction to be minimised.", "response": "def fun_residuals(params, parmodel, bounddict,\n shrinking_factor, numresolution,\n islitmin, islitmax, debugplot):\n \"\"\"Function to be minimised.\n\n Parameters\n ----------\n params : :class:`~lmfit.parameter.Parameters`\n Parameters to be employed in the prediction of the distorted\n boundaries.\n parmodel : str\n Model to be assumed. 
Allowed values are 'longslit' and\n 'multislit'.\n bounddict : JSON structure\n Structure employed to store bounddict information.\n shrinking_factor : float\n Fraction of the detected X range (spectral) to be employed\n in the fit. This must be a number verifying\n 0 < shrinking_factor <= 1. The resulting interval will be\n centered within the original one.\n numresolution : int\n Number of points in which the X-range interval is subdivided\n before computing the residuals.\n islitmin : int\n Minimum slitlet number.\n islitmax : int\n Maximum slitlet number.\n debugplot : int\n Debugging level for messages and plots. For details see\n 'numina.array.display.pause_debugplot.py'.\n\n Returns\n -------\n global_residual : float\n Square root of the averaged sum of squared residuals.\n\n \"\"\"\n\n global FUNCTION_EVALUATIONS\n global_residual = 0.0\n nsummed = 0\n\n read_slitlets = list(bounddict['contents'].keys())\n # read_slitlets.sort() # this is not really necessary\n for tmp_slitlet in read_slitlets:\n islitlet = int(tmp_slitlet[7:])\n if islitmin <= islitlet <= islitmax:\n read_dateobs = list(bounddict['contents'][tmp_slitlet].keys())\n # read_dateobs.sort() # this is not really necessary\n for tmp_dateobs in read_dateobs:\n tmp_dict = bounddict['contents'][tmp_slitlet][tmp_dateobs]\n csu_bar_slit_center = tmp_dict['csu_bar_slit_center']\n # expected boundaries using provided parameters\n list_spectrails = expected_distorted_boundaries(\n islitlet, csu_bar_slit_center,\n [0, 1], params, parmodel,\n numpts=numresolution, deg=5, debugplot=0\n )\n poly_lower_expected = list_spectrails[0].poly_funct\n poly_upper_expected = list_spectrails[1].poly_funct\n # measured lower boundary\n poly_lower_measured = np.polynomial.Polynomial(\n tmp_dict['boundary_coef_lower']\n )\n xmin_lower_bound = tmp_dict['boundary_xmin_lower']\n xmax_lower_bound = tmp_dict['boundary_xmax_lower']\n dx = (xmax_lower_bound - xmin_lower_bound) * \\\n (1 - shrinking_factor) / 2\n xdum_lower = 
np.linspace(xmin_lower_bound + dx,\n xmax_lower_bound - dx,\n num=numresolution)\n # distance between expected and measured polynomials\n poly_diff = poly_lower_expected - poly_lower_measured\n global_residual += np.sum(poly_diff(xdum_lower)**2)\n nsummed += numresolution\n # measured upper boundary\n poly_upper_measured = np.polynomial.Polynomial(\n tmp_dict['boundary_coef_upper']\n )\n xmin_upper_bound = tmp_dict['boundary_xmin_upper']\n xmax_upper_bound = tmp_dict['boundary_xmax_upper']\n dx = (xmax_upper_bound - xmin_upper_bound) * \\\n (1 - shrinking_factor) / 2\n xdum_upper = np.linspace(xmin_upper_bound + dx,\n xmax_upper_bound - dx,\n num=numresolution)\n # distance between expected and measured polynomials\n poly_diff = poly_upper_expected - poly_upper_measured\n global_residual += np.sum(poly_diff(xdum_upper)**2)\n nsummed += numresolution\n\n if nsummed > 0:\n global_residual = np.sqrt(global_residual/nsummed)\n if debugplot >= 10:\n FUNCTION_EVALUATIONS += 1\n print('-' * 79)\n print('>>> Number of function evaluations:', FUNCTION_EVALUATIONS)\n print('>>> global residual...............:', global_residual)\n params.pretty_print()\n return global_residual"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef overplot_boundaries_from_bounddict(ax, bounddict, micolors, linetype='-'):\n\n for islitlet in range(1, EMIR_NBARS + 1):\n tmpcolor = micolors[islitlet % 2]\n tmp_slitlet = 'slitlet' + str(islitlet).zfill(2)\n if tmp_slitlet in bounddict['contents'].keys():\n read_dateobs = list(bounddict['contents'][tmp_slitlet].keys())\n read_dateobs.sort()\n for tmp_dateobs in read_dateobs:\n tmp_dict = bounddict['contents'][tmp_slitlet][tmp_dateobs]\n # lower boundary\n pol_lower_measured = np.polynomial.Polynomial(\n tmp_dict['boundary_coef_lower']\n )\n xdum = np.linspace(1, EMIR_NAXIS1, num=EMIR_NAXIS1)\n ydum = pol_lower_measured(xdum)\n ax.plot(xdum, ydum, tmpcolor + linetype)\n pol_upper_measured = 
np.polynomial.Polynomial(\n tmp_dict['boundary_coef_upper']\n )\n ydum = pol_upper_measured(xdum)\n ax.plot(xdum, ydum, tmpcolor + linetype)", "response": "Overplot boundaries on current plot."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef save_boundaries_from_bounddict_ds9(bounddict, ds9_filename, numpix=100):\n\n ds9_file = open(ds9_filename, 'w')\n\n ds9_file.write('# Region file format: DS9 version 4.1\\n')\n ds9_file.write('global color=green dashlist=2 4 width=2 '\n 'font=\"helvetica 10 normal roman\" select=1 '\n 'highlite=1 dash=1 fixed=0 edit=1 '\n 'move=1 delete=1 include=1 source=1\\n')\n ds9_file.write('physical\\n')\n\n uuid = bounddict['uuid']\n spfilter = bounddict['tags']['filter']\n grism = bounddict['tags']['grism']\n\n ds9_file.write('#\\n# uuid (boundict file): {0}\\n'.format(uuid))\n ds9_file.write('# filter..............: {0}\\n'.format(spfilter))\n ds9_file.write('# grism...............: {0}\\n'.format(grism))\n\n colorbox = ['green', 'green']\n for islitlet in range(1, EMIR_NBARS + 1):\n tmp_slitlet = 'slitlet' + str(islitlet).zfill(2)\n if tmp_slitlet in bounddict['contents'].keys():\n ds9_file.write('#\\n# islitlet: {0}\\n'.format(tmp_slitlet))\n read_dateobs = list(bounddict['contents'][tmp_slitlet].keys())\n read_dateobs.sort()\n for tmp_dateobs in read_dateobs:\n ds9_file.write('#\\n# date-obs: {0}\\n'.format(tmp_dateobs))\n tmp_dict = bounddict['contents'][tmp_slitlet][tmp_dateobs]\n # lower boundary\n pol_lower_measured = np.polynomial.Polynomial(\n tmp_dict['boundary_coef_lower']\n )\n xmin_lower = tmp_dict['boundary_xmin_lower']\n xmax_lower = tmp_dict['boundary_xmax_lower']\n xdum = np.linspace(xmin_lower, xmax_lower, num=numpix)\n ydum = pol_lower_measured(xdum)\n for i in range(len(xdum) - 1):\n ds9_file.write(\n 'line {0} {1} {2} {3}'.format(xdum[i], ydum[i],\n xdum[i + 1], ydum[i + 1])\n )\n ds9_file.write(\n ' # color={0}\\n'.format(colorbox[islitlet % 2]))\n # upper boundary\n 
pol_upper_measured = np.polynomial.Polynomial(\n tmp_dict['boundary_coef_upper']\n )\n xmin_upper = tmp_dict['boundary_xmin_upper']\n xmax_upper = tmp_dict['boundary_xmax_upper']\n xdum = np.linspace(xmin_upper, xmax_upper, num=numpix)\n ydum = pol_upper_measured(xdum)\n for i in range(len(xdum) - 1):\n ds9_file.write(\n 'line {0} {1} {2} {3}'.format(xdum[i], ydum[i],\n xdum[i + 1], ydum[i + 1])\n )\n ds9_file.write(\n ' # color={0}\\n'.format(colorbox[islitlet % 2]))\n # slitlet label\n xlabel = xmax_lower + xmax_upper + xmin_lower + xmin_upper\n xlabel /= 4\n yc_lower = pol_lower_measured(xlabel)\n yc_upper = pol_upper_measured(xlabel)\n ds9_file.write('text {0} {1} {{{2}}} # color={3} '\n 'font=\"helvetica 10 bold '\n 'roman\"\\n'.format(xlabel,\n (yc_lower + yc_upper) / 2,\n islitlet,\n colorbox[islitlet % 2]))\n\n ds9_file.close()", "response": "Export to ds9 region file the boundaries in bounddict."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef save_boundaries_from_params_ds9(params, parmodel,\n list_islitlet,\n list_csu_bar_slit_center,\n uuid, grism, spfilter,\n ds9_filename, numpix=100,\n global_offset_x_pix=0,\n global_offset_y_pix=0):\n \"\"\"Export to ds9 region file the boundaries parametrised with params.\n\n Parameters\n ----------\n params : :class:`~lmfit.parameter.Parameters`\n Parameters to be employed in the prediction of the distorted\n boundaries.\n parmodel : str\n Model to be assumed. 
Allowed values are 'longslit' and\n 'multislit'.\n list_islitlet : list (integers)\n Slitlet numbers to be considered.\n list_csu_bar_slit_center : list of floats\n CSU bar slit centers of the considered slitlets.\n uuid: str\n UUID associated to the parameters 'params'.\n grism : str\n Employed grism.\n spfilter : str\n Employed filter.\n ds9_filename : str\n Output file name for the ds9 region file.\n numpix : int\n Number of points in which the X-range interval is subdivided\n in order to save each boundary as a connected set of line\n segments.\n global_offset_x_pix : int or float\n Global offset in the X direction to be applied after computing\n the expected location.\n global_offset_y_pix : int or float\n Global offset in the Y direction to be applied after computing\n the expected location.\n\n \"\"\"\n\n ds9_file = open(ds9_filename, 'w')\n\n ds9_file.write('# Region file format: DS9 version 4.1\\n')\n ds9_file.write('global color=green dashlist=2 4 width=2 '\n 'font=\"helvetica 10 normal roman\" select=1 '\n 'highlite=1 dash=1 fixed=0 edit=1 '\n 'move=1 delete=1 include=1 source=1\\n')\n ds9_file.write('physical\\n#\\n')\n\n ds9_file.write('#\\n# uuid..: {0}\\n'.format(uuid))\n ds9_file.write('# filter: {0}\\n'.format(spfilter))\n ds9_file.write('# grism.: {0}\\n'.format(grism))\n\n ds9_file.write('#\\n# global_offset_x_pix: {0}\\n'.format(\n global_offset_x_pix))\n ds9_file.write('# global_offset_y_pix: {0}\\n#\\n'.format(\n global_offset_y_pix))\n\n # duplicate to shorten the variable names\n xoff = float(global_offset_x_pix)\n yoff = float(global_offset_y_pix)\n\n if parmodel == \"longslit\":\n for dumpar in EXPECTED_PARAMETER_LIST:\n parvalue = params[dumpar].value\n ds9_file.write('# {0}: {1}\\n'.format(dumpar, parvalue))\n else:\n for dumpar in EXPECTED_PARAMETER_LIST_EXTENDED:\n parvalue = params[dumpar].value\n ds9_file.write('# {0}: {1}\\n'.format(dumpar, parvalue))\n\n for islitlet, csu_bar_slit_center in \\\n zip(list_islitlet, 
list_csu_bar_slit_center):\n if islitlet % 2 == 0:\n colorbox = '#ff00ff' # '#ff77ff'\n else:\n colorbox = '#00ffff' # '#4444ff'\n\n ds9_file.write(\n '#\\n# islitlet...........: {0}\\n'.format(islitlet)\n )\n ds9_file.write(\n '# csu_bar_slit_center: {0}\\n'.format(csu_bar_slit_center)\n )\n pol_lower_expected = expected_distorted_boundaries(\n islitlet, csu_bar_slit_center, [0], params, parmodel,\n numpts=101, deg=5, debugplot=0\n )[0].poly_funct\n pol_upper_expected = expected_distorted_boundaries(\n islitlet, csu_bar_slit_center, [1], params, parmodel,\n numpts=101, deg=5, debugplot=0\n )[0].poly_funct\n xdum = np.linspace(1, EMIR_NAXIS1, num=numpix)\n ydum = pol_lower_expected(xdum)\n for i in range(len(xdum)-1):\n ds9_file.write(\n 'line {0} {1} {2} {3}'.format(xdum[i]+xoff, ydum[i]+yoff,\n xdum[i+1]+xoff, ydum[i+1]+yoff)\n )\n ds9_file.write(' # color={0}\\n'.format(colorbox))\n ydum = pol_upper_expected(xdum)\n for i in range(len(xdum)-1):\n ds9_file.write(\n 'line {0} {1} {2} {3}'.format(xdum[i]+xoff, ydum[i]+yoff,\n xdum[i+1]+xoff, ydum[i+1]+yoff)\n )\n ds9_file.write(' # color={0}\\n'.format(colorbox))\n # slitlet label\n yc_lower = pol_lower_expected(EMIR_NAXIS1 / 2 + 0.5)\n yc_upper = pol_upper_expected(EMIR_NAXIS1 / 2 + 0.5)\n ds9_file.write('text {0} {1} {{{2}}} # color={3} '\n 'font=\"helvetica 10 bold '\n 'roman\"\\n'.format(EMIR_NAXIS1 / 2 + 0.5 + xoff,\n (yc_lower + yc_upper) / 2 + yoff,\n islitlet,\n colorbox))\n\n ds9_file.close()", "response": "Export to ds9 region file the boundaries parametrised with params."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ndefining ~lmfit. parameter. 
Parameters object from dictionary.", "response": "def bound_params_from_dict(bound_param_dict):\n \"\"\"Define `~lmfit.parameter.Parameters` object from dictionary.\n\n Parameters\n ----------\n bound_param_dict : dictionary\n Dictionary containing the JSON contents of a boundary\n parameter file.\n\n Returns\n -------\n params : :class:`~lmfit.parameter.Parameters`\n Parameters object.\n\n \"\"\"\n\n params = Parameters()\n for mainpar in EXPECTED_PARAMETER_LIST:\n if mainpar not in bound_param_dict['contents'].keys():\n raise ValueError('Parameter ' + mainpar + ' not found!')\n if bound_param_dict['meta_info']['parmodel'] == \"longslit\":\n dumdict = bound_param_dict['contents'][mainpar]\n params.add(mainpar, value=dumdict[\"value\"],\n vary=dumdict[\"vary\"])\n elif bound_param_dict['meta_info']['parmodel'] == 'multislit':\n for subpar in ['a0s', 'a1s', 'a2s']:\n if subpar not in bound_param_dict['contents'][mainpar].keys():\n raise ValueError('Subparameter ' + subpar + ' not found' +\n ' under parameter ' + mainpar)\n cpar = mainpar + '_' + subpar\n dumdict = bound_param_dict['contents'][mainpar][subpar]\n params.add(cpar, value=dumdict[\"value\"],\n vary=dumdict[\"vary\"])\n else:\n print('parmodel: ', bound_param_dict['meta_info']['parmodel'])\n raise ValueError('Unexpected parmodel')\n return params"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef set_base_headers(self, hdr):\n hdr = super(StareImageRecipe2, self).set_base_headers(hdr)\n # Set EXP to 0\n hdr['EXP'] = 0\n return hdr", "response": "Set metadata in FITS headers."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nset metadata in FITS headers.", "response": "def set_base_headers(self, hdr):\n \"\"\"Set metadata in FITS headers.\"\"\"\n hdr = super(StareImageBaseRecipe, self).set_base_headers(hdr)\n # Update EXP to 0\n hdr['EXP'] = 0\n return hdr"} {"SOURCE": "codesearchnet", "instruction": 
"Can you generate a brief explanation for the following Python 3 code\ndef apply_rectwv_coeff(reduced_image,\n rectwv_coeff,\n args_resampling=2,\n args_ignore_dtu_configuration=True,\n debugplot=0):\n \"\"\"Compute rectification and wavelength calibration coefficients.\n\n Parameters\n ----------\n reduced_image : HDUList object\n Image with preliminary basic reduction: bpm, bias, dark and\n flatfield.\n rectwv_coeff : RectWaveCoeff instance\n Rectification and wavelength calibration coefficients for the\n particular CSU configuration.\n args_resampling : int\n 1: nearest neighbour, 2: flux preserving interpolation.\n args_ignore_dtu_configuration : bool\n If True, ignore differences in DTU configuration.\n debugplot : int\n Debugging level for messages and plots. For details see\n 'numina.array.display.pause_debugplot.py'.\n\n Returns\n -------\n rectwv_image : HDUList object\n Rectified and wavelength calibrated image.\n\n \"\"\"\n\n logger = logging.getLogger(__name__)\n\n # header and data array (use deepcopy to avoid modifying\n # reduced_image[0].header as a side effect)\n header = copy.deepcopy(reduced_image[0].header)\n image2d = reduced_image[0].data\n\n # apply global offsets\n image2d = apply_integer_offsets(\n image2d=image2d,\n offx=rectwv_coeff.global_integer_offset_x_pix,\n offy=rectwv_coeff.global_integer_offset_y_pix\n )\n\n # check grism and filter\n filter_name = header['filter']\n logger.info('Filter: ' + filter_name)\n if filter_name != rectwv_coeff.tags['filter']:\n raise ValueError('Filter name does not match!')\n grism_name = header['grism']\n logger.info('Grism: ' + grism_name)\n if grism_name != rectwv_coeff.tags['grism']:\n raise ValueError('Grism name does not match!')\n\n # read the DTU configuration from the image header\n dtu_conf = DtuConfiguration.define_from_header(header)\n\n # retrieve DTU configuration from RectWaveCoeff object\n dtu_conf_calib = DtuConfiguration.define_from_dictionary(\n 
rectwv_coeff.meta_info['dtu_configuration']\n )\n # check that the DTU configuration employed to obtain the calibration\n # corresponds to the DTU configuration in the input FITS file\n if dtu_conf != dtu_conf_calib:\n if args_ignore_dtu_configuration:\n logger.warning('DTU configuration differences found!')\n else:\n logger.warning('DTU configuration from image header:')\n logger.warning(dtu_conf)\n logger.warning('DTU configuration from master calibration:')\n logger.warning(dtu_conf_calib)\n raise ValueError(\"DTU configurations do not match!\")\n else:\n logger.info('DTU configuration match!')\n\n # valid slitlet numbers\n list_valid_islitlets = list(range(1, EMIR_NBARS + 1))\n for idel in rectwv_coeff.missing_slitlets:\n list_valid_islitlets.remove(idel)\n logger.debug('Valid slitlet numbers:\\n' + str(list_valid_islitlets))\n\n # ---\n\n # relevant wavelength calibration parameters for rectified and wavelength\n # calibrated image\n wv_parameters = set_wv_parameters(filter_name, grism_name)\n crpix1_enlarged = wv_parameters['crpix1_enlarged']\n crval1_enlarged = wv_parameters['crval1_enlarged']\n cdelt1_enlarged = wv_parameters['cdelt1_enlarged']\n naxis1_enlarged = wv_parameters['naxis1_enlarged']\n\n # initialize rectified and wavelength calibrated image\n naxis2_enlarged = EMIR_NBARS * EMIR_NPIXPERSLIT_RECTIFIED\n image2d_rectwv = np.zeros((naxis2_enlarged, naxis1_enlarged),\n dtype='float32')\n\n # main loop\n\n logger.info('Applying rectification and wavelength calibration')\n logger.info('RectWaveCoeff uuid={}'.format(rectwv_coeff.uuid))\n\n cout = '0'\n for islitlet in range(1, EMIR_NBARS + 1):\n\n if islitlet in list_valid_islitlets:\n\n # define Slitlet2D object\n slt = Slitlet2D(islitlet=islitlet,\n rectwv_coeff=rectwv_coeff,\n debugplot=debugplot)\n\n # extract (distorted) slitlet from the initial image\n slitlet2d = slt.extract_slitlet2d(image2d)\n\n # rectify slitlet\n slitlet2d_rect = slt.rectify(slitlet2d, resampling=args_resampling)\n\n # 
wavelength calibration of the rectified slitlet\n slitlet2d_rect_wv = resample_image2d_flux(\n image2d_orig=slitlet2d_rect,\n naxis1=naxis1_enlarged,\n cdelt1=cdelt1_enlarged,\n crval1=crval1_enlarged,\n crpix1=crpix1_enlarged,\n coeff=slt.wpoly\n )\n\n # minimum and maximum useful row in the full 2d rectified image\n # (starting from 0)\n i1 = slt.iminslt - 1\n i2 = slt.imaxslt\n\n # minimum and maximum scan in the rectified slitlet\n # (in pixels, from 1 to NAXIS2)\n ii1 = slt.min_row_rectified\n ii2 = slt.max_row_rectified + 1\n\n # save rectified slitlet in its corresponding location within\n # the full 2d rectified image\n image2d_rectwv[i1:i2, :] = slitlet2d_rect_wv[ii1:ii2, :]\n\n # include scan range in FITS header\n header['imnslt' + str(islitlet).zfill(2)] = \\\n slt.iminslt, 'minimum Y pixel of useful slitlet region'\n header['imxslt' + str(islitlet).zfill(2)] = \\\n slt.imaxslt, 'maximum Y pixel of useful slitlet region'\n\n # determine useful channel region in each spectrum and include\n # that information in FITS header\n jminslt = []\n jmaxslt = []\n for idum in range(ii1, ii2 + 1):\n jminmax = find_pix_borders(\n slitlet2d_rect_wv[idum, :],\n sought_value=0\n )\n if jminmax != (-1, naxis1_enlarged):\n jminslt.append(jminmax[0])\n jmaxslt.append(jminmax[1])\n if len(jminslt) > 0:\n slt.jminslt = min(jminslt) + 1\n slt.jmaxslt = max(jmaxslt) + 1\n header['jmnslt' + str(islitlet).zfill(2)] = \\\n slt.jminslt, 'minimum X pixel of useful slitlet region'\n header['jmxslt' + str(islitlet).zfill(2)] = \\\n slt.jmaxslt, 'maximum X pixel of useful slitlet region'\n\n cout += '.'\n\n else:\n\n # include scan and channel range in FITS header\n header['imnslt' + str(islitlet).zfill(2)] = \\\n 0, 'minimum Y pixel of useful slitlet region'\n header['imxslt' + str(islitlet).zfill(2)] = \\\n 0, 'maximum Y pixel of useful slitlet region'\n header['jmnslt' + str(islitlet).zfill(2)] = \\\n 0, 'minimum X pixel of useful slitlet region'\n header['jmxslt' + 
str(islitlet).zfill(2)] = \\\n 0, 'maximum X pixel of useful slitlet region'\n\n cout += 'i'\n\n if islitlet % 10 == 0:\n if cout != 'i':\n cout = str(islitlet // 10)\n\n logger.info(cout)\n\n # update wavelength calibration in FITS header\n logger.info('Updating image header')\n for keyword in ['crval1', 'crpix1', 'crval2', 'crpix2']:\n if keyword in header:\n header.remove(keyword)\n header['crpix1'] = (crpix1_enlarged, 'reference pixel')\n header['crval1'] = (crval1_enlarged, 'central wavelength at crpix1')\n header['cdelt1'] = (cdelt1_enlarged, 'linear dispersion (Angstrom/pixel)')\n header['cunit1'] = ('Angstrom', 'units along axis1')\n header['ctype1'] = 'WAVELENGTH'\n header['crpix2'] = (0.0, 'reference pixel')\n header['crval2'] = (0.0, 'central value at crpix2')\n header['cdelt2'] = (1.0, 'increment')\n header['ctype2'] = 'PIXEL'\n header['cunit2'] = ('Pixel', 'units along axis2')\n for keyword in ['cd1_1', 'cd1_2', 'cd2_1', 'cd2_2',\n 'PCD1_1', 'PCD1_2', 'PCD2_1', 'PCD2_2',\n 'PCRPIX1', 'PCRPIX2']:\n if keyword in header:\n header.remove(keyword)\n\n # update history in FITS header\n header['history'] = 'Boundary parameters uuid:' + \\\n rectwv_coeff.meta_info['origin']['bound_param'][4:]\n if 'master_rectwv' in rectwv_coeff.meta_info['origin']:\n header['history'] = \\\n 'MasterRectWave uuid:' + \\\n rectwv_coeff.meta_info['origin']['master_rectwv'][4:]\n header['history'] = 'RectWaveCoeff uuid:' + rectwv_coeff.uuid\n header['history'] = 'Rectification and wavelength calibration time ' \\\n + datetime.now().isoformat()\n\n logger.info('Generating rectified and wavelength calibrated image')\n\n rectwv_image = fits.PrimaryHDU(data=image2d_rectwv, header=header)\n\n return fits.HDUList([rectwv_image])", "response": "Compute rectification and wavelength calibration coefficients for a given image."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nstarting a transaction this will increment transaction semaphore and pass it to 
_transaction_start()", "response": "def transaction_start(self, name):\n \"\"\"\n start a transaction\n\n this will increment transaction semaphore and pass it to _transaction_start()\n \"\"\"\n if not name:\n raise ValueError(\"Transaction name cannot be empty\")\n #uid = id(self)\n\n self.transaction_count += 1\n logger.debug(\"{}. Start transaction {}\".format(self.transaction_count, name))\n if self.transaction_count == 1:\n self._transaction_start()\n else:\n self._transaction_started(name)\n\n return self.transaction_count"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nstops a transaction if ready", "response": "def transaction_stop(self):\n \"\"\"stop/commit a transaction if ready\"\"\"\n if self.transaction_count > 0:\n logger.debug(\"{}. Stop transaction\".format(self.transaction_count))\n if self.transaction_count == 1:\n self._transaction_stop()\n\n self.transaction_count -= 1\n\n return self.transaction_count"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef connect(self, connection_config=None, *args, **kwargs):\n if self.connected: return self.connected\n\n if connection_config: self.connection_config = connection_config\n\n self.connected = True\n try:\n self._connect(self.connection_config)\n\n except Exception as e:\n self.connected = False\n self.raise_error(e)\n\n self.log(\"Connected {}\", self.connection_config.interface_name)\n return self.connected", "response": "connect to the interface"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nclose an open connection", "response": "def close(self):\n \"\"\"close an open connection\"\"\"\n if not self.connected: return True\n\n self._close()\n self.connected = False\n self.log(\"Closed Connection {}\", self.connection_config.interface_name)\n return True"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef 
query(self, query_str, *query_args, **query_options):\n with self.connection(**query_options) as connection:\n query_options['connection'] = connection\n return self._query(query_str, query_args, **query_options)", "response": "run a raw query on the database"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef transaction(self, connection=None, **kwargs):\n with self.connection(connection) as connection:\n name = connection.transaction_name()\n connection.transaction_start(name)\n try:\n yield connection\n connection.transaction_stop()\n\n except Exception as e:\n connection.transaction_fail(name)\n self.raise_error(e)", "response": "A context manager that creates a transaction for a bunch of db calls."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nadds the table to the database", "response": "def set_table(self, schema, **kwargs):\n \"\"\"\n add the table to the db\n\n schema -- Schema() -- contains all the information about the table\n \"\"\"\n with self.connection(**kwargs) as connection:\n kwargs['connection'] = connection\n if self.has_table(str(schema), **kwargs): return True\n\n try:\n with self.transaction(**kwargs):\n self._set_table(schema, **kwargs)\n\n for index_name, index in schema.indexes.items():\n self.set_index(\n schema,\n name=index.name,\n fields=index.fields,\n connection=connection,\n **index.options\n )\n\n except InterfaceError:\n # check to see if this table now exists, it might have been created\n # in another thread\n if not self.has_table(schema, **kwargs):\n raise"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nchecking to see if a table exists in the db", "response": "def has_table(self, table_name, **kwargs):\n \"\"\"\n check to see if a table is in the db\n\n table_name -- string -- the table to check\n return -- boolean -- True if the table exists, false otherwise\n \"\"\"\n with 
self.connection(kwargs.get('connection', None)) as connection:\n kwargs['connection'] = connection\n tables = self.get_tables(table_name, **kwargs)\n return len(tables) > 0"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nget all the tables of the currently connected db", "response": "def get_tables(self, table_name=\"\", **kwargs):\n \"\"\"\n get all the tables of the currently connected db\n\n table_name -- string -- if you would like to filter the tables list to only include matches with this name\n return -- list -- a list of table names\n \"\"\"\n with self.connection(**kwargs) as connection:\n kwargs['connection'] = connection\n return self._get_tables(str(table_name), **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ndeletes a table from the database", "response": "def delete_table(self, schema, **kwargs):\n \"\"\"\n remove a table matching schema from the db\n\n schema -- Schema()\n \"\"\"\n with self.connection(**kwargs) as connection:\n kwargs['connection'] = connection\n if not self.has_table(str(schema), **kwargs): return True\n with self.transaction(**kwargs):\n self._delete_table(schema, **kwargs)\n\n return True"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef delete_tables(self, **kwargs):\n if not kwargs.get('disable_protection', False):\n raise ValueError('In order to delete all the tables, pass in disable_protection=True')\n\n with self.connection(**kwargs) as connection:\n kwargs['connection'] = connection\n self._delete_tables(**kwargs)", "response": "Delete all the tables in the database"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_indexes(self, schema, **kwargs):\n with self.connection(**kwargs) as connection:\n kwargs['connection'] = connection\n return self._get_indexes(schema, **kwargs)", "response": "get all the indexes 
in a specific schema"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nadds an index to the table", "response": "def set_index(self, schema, name, fields, **index_options):\n \"\"\"\n add an index to the table\n\n schema -- Schema()\n name -- string -- the name of the index\n fields -- array -- the fields the index should be on\n **index_options -- dict -- any index options that might be useful to create the index\n \"\"\"\n with self.transaction(**index_options) as connection:\n index_options['connection'] = connection\n self._set_index(schema, name, fields, **index_options)\n\n return True"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\npersist d into the db", "response": "def insert(self, schema, fields, **kwargs):\n \"\"\"\n Persist d into the db\n\n schema -- Schema()\n fields -- dict -- the values to persist\n\n return -- int -- the primary key of the row just inserted\n \"\"\"\n r = 0\n\n with self.connection(**kwargs) as connection:\n kwargs['connection'] = connection\n try:\n with self.transaction(**kwargs):\n r = self._insert(schema, fields, **kwargs)\n\n except Exception as e:\n exc_info = sys.exc_info()\n if self.handle_error(schema, e, **kwargs):\n r = self._insert(schema, fields, **kwargs)\n else:\n self.raise_error(e, exc_info)\n\n return r"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nupdating the fields of the record that match query. 
fields_where", "response": "def update(self, schema, fields, query, **kwargs):\n \"\"\"\n Persist the query.fields into the db that match query.fields_where\n\n schema -- Schema()\n fields -- dict -- the values to persist\n query -- Query() -- will be used to create the where clause\n\n return -- int -- how many rows where updated\n \"\"\"\n with self.connection(**kwargs) as connection:\n kwargs['connection'] = connection\n try:\n with self.transaction(**kwargs):\n r = self._update(schema, fields, query, **kwargs)\n\n except Exception as e:\n exc_info = sys.exc_info()\n if self.handle_error(schema, e, **kwargs):\n r = self._update(schema, fields, query, **kwargs)\n else:\n self.raise_error(e, exc_info)\n\n return r"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _get_query(self, callback, schema, query=None, *args, **kwargs):\n if not query: query = Query()\n\n ret = None\n with self.connection(**kwargs) as connection:\n kwargs['connection'] = connection\n try:\n if connection.in_transaction():\n # we wrap SELECT queries in a transaction if we are in a transaction because\n # it could cause data loss if it failed by causing the db to discard\n # anything in the current transaction if the query isn't wrapped,\n # go ahead, ask me how I know this\n with self.transaction(**kwargs):\n ret = callback(schema, query, *args, **kwargs)\n\n else:\n ret = callback(schema, query, *args, **kwargs)\n\n except Exception as e:\n exc_info = sys.exc_info()\n if self.handle_error(schema, e, **kwargs):\n ret = callback(schema, query, *args, **kwargs)\n else:\n self.raise_error(e, exc_info)\n\n return ret", "response": "wrapper around all the get queries"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nget one row from the db", "response": "def get_one(self, schema, query=None, **kwargs):\n \"\"\"\n get one row from the db matching filters set in query\n\n schema -- Schema()\n query -- 
Query()\n\n return -- dict -- the matching row\n \"\"\"\n ret = self._get_query(self._get_one, schema, query, **kwargs)\n if not ret: ret = {}\n return ret"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nget matching rows from the db", "response": "def get(self, schema, query=None, **kwargs):\n \"\"\"\n get matching rows from the db matching filters set in query\n\n schema -- Schema()\n query -- Query()\n\n return -- list -- a list of matching dicts\n \"\"\"\n ret = self._get_query(self._get, schema, query, **kwargs)\n if not ret: ret = []\n return ret"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nwrap around the module s logger. log", "response": "def log(self, format_str, *format_args, **log_options):\n \"\"\"\n wrapper around the module's logger\n\n format_str -- string -- the message to log\n *format_args -- list -- if format_str is a string containing {}, then format_str.format(*format_args) is ran\n **log_options -- \n level -- something like logging.DEBUG\n \"\"\"\n if isinstance(format_str, Exception):\n logger.exception(format_str, *format_args)\n else:\n log_level = log_options.get('level', logging.DEBUG)\n if logger.isEnabledFor(log_level):\n if format_args:\n logger.log(log_level, format_str.format(*format_args))\n else:\n logger.log(log_level, format_str)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef raise_error(self, e, exc_info=None):\n if not exc_info:\n exc_info = sys.exc_info()\n\n if not isinstance(e, InterfaceError):\n # allow python's built in errors to filter up through\n # https://docs.python.org/2/library/exceptions.html\n #if not hasattr(exceptions, e.__class__.__name__):\n if not hasattr(builtins, e.__class__.__name__):\n e = self._create_error(e, exc_info)\n\n reraise(e.__class__, e, exc_info[2])", "response": "this is just a wrapper to make the passed in exception an InterfaceError"} 
{"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _query(self, query_str, query_args=None, **query_options):\n ret = True\n # http://stackoverflow.com/questions/6739355/dictcursor-doesnt-seem-to-work-under-psycopg2\n connection = query_options.get('connection', None)\n with self.connection(connection) as connection:\n cur = connection.cursor()\n ignore_result = query_options.get('ignore_result', False)\n count_result = query_options.get('count_result', False)\n one_result = query_options.get('fetchone', query_options.get('one_result', False))\n cursor_result = query_options.get('cursor_result', False)\n\n try:\n if query_args:\n self.log(\"{}{}{}\", query_str, os.linesep, query_args)\n cur.execute(query_str, query_args)\n else:\n self.log(query_str)\n cur.execute(query_str)\n\n if cursor_result:\n ret = cur\n\n elif not ignore_result:\n if one_result:\n ret = self._normalize_result_dict(cur.fetchone())\n elif count_result:\n ret = cur.rowcount\n else:\n ret = self._normalize_result_list(cur.fetchall())\n\n except Exception as e:\n self.log(e)\n raise\n\n return ret", "response": "Query the database for the related object."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_SQL(self, schema, query, **sql_options):\n only_where_clause = sql_options.get('only_where_clause', False)\n symbol_map = {\n 'in': {'symbol': 'IN', 'list': True},\n 'nin': {'symbol': 'NOT IN', 'list': True},\n 'is': {'symbol': '=', 'none_symbol': 'IS'},\n 'not': {'symbol': '!=', 'none_symbol': 'IS NOT'},\n 'gt': {'symbol': '>'},\n 'gte': {'symbol': '>='},\n 'lt': {'symbol': '<'},\n 'lte': {'symbol': '<='},\n # https://www.tutorialspoint.com/postgresql/postgresql_like_clause.htm\n # https://www.tutorialspoint.com/sqlite/sqlite_like_clause.htm\n 'like': {'symbol': 'LIKE'},\n 'nlike': {'symbol': 'NOT LIKE'},\n }\n\n query_args = []\n query_str = []\n\n if not only_where_clause:\n 
query_str.append('SELECT')\n select_fields = query.fields_select\n if select_fields:\n distinct = \"DISTINCT \" if select_fields.options.get(\"unique\", False) else \"\"\n select_fields_str = distinct + ',{}'.format(os.linesep).join(\n (self._normalize_name(f) for f in select_fields.names())\n )\n else:\n select_fields_str = \"*\"\n\n if sql_options.get('count_query', False):\n query_str.append(' count({}) as ct'.format(select_fields_str))\n\n else:\n query_str.append(' {}'.format(select_fields_str))\n\n query_str.append('FROM')\n query_str.append(\" {}\".format(self._normalize_table_name(schema)))\n\n if query.fields_where:\n query_str.append('WHERE')\n\n for i, field in enumerate(query.fields_where):\n if i > 0: query_str.append('AND')\n\n field_str = ''\n field_args = []\n sd = symbol_map[field[0]]\n\n # field[0], field[1], field[2], field[3]\n _, field_name, field_val, field_kwargs = field\n field_str, field_args = self._normalize_val_SQL(\n schema,\n sd,\n field_name,\n field_val,\n field_kwargs\n )\n\n query_str.append(' {}'.format(field_str))\n query_args.extend(field_args)\n\n if query.fields_sort:\n query_sort_str = []\n query_str.append('ORDER BY')\n for field in query.fields_sort:\n sort_dir_str = 'ASC' if field[0] > 0 else 'DESC'\n if field[2]:\n field_sort_str, field_sort_args = self._normalize_sort_SQL(field[1], field[2], sort_dir_str)\n query_sort_str.append(field_sort_str)\n query_args.extend(field_sort_args)\n\n else:\n query_sort_str.append(' {} {}'.format(field[1], sort_dir_str))\n\n query_str.append(',{}'.format(os.linesep).join(query_sort_str))\n\n if query.bounds:\n offset = query.bounds.offset\n limit = 1 if sql_options.get('one_query', False) else query.bounds.limit\n query_str.append('LIMIT {} OFFSET {}'.format(\n limit,\n offset\n ))\n\n query_str = os.linesep.join(query_str)\n return query_str, query_args", "response": "convert the generic Query instance into SQL"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief 
explanation for the following Python 3 code\ndef _set_all_tables(self, schema, **kwargs):\n with self.transaction(**kwargs) as connection:\n kwargs['connection'] = connection\n # go through and make sure all foreign key referenced tables exist\n for field_name, field_val in schema.fields.items():\n s = field_val.schema\n if s:\n self._set_all_tables(s, **kwargs)\n\n # now that we know all fk tables exist, create this table\n self.set_table(schema, **kwargs)\n\n return True", "response": "This method will go through all tables referenced by a \n and create all tables that are not referenced by any foreign key referenced by a \n ."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ngenerating a random path for uploads.", "response": "def random_path(instance, filename):\n \"\"\" Random path generator for uploads, specify this for upload_to= argument of FileFields\n \"\"\"\n # Split the uuid into two parts so that we won't run into subdirectory count limits. First part has 3 hex chars,\n # thus 4k possible values.\n uuid_hex = get_uuid()\n return os.path.join(uuid_hex[:3], uuid_hex[3:], filename)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ninitializes metrics, must be invoked at least once prior to invoking any other method.", "response": "def initialize(signal_number=DEFAULT_TIMER_SIGNAL_NUMBER,\n update_period_s=DEFAULT_UPDATE_PERIOD_S):\n \"\"\"Initialize metrics, must be invoked at least once prior to invoking any\n other method.\"\"\"\n global initialized\n if initialized:\n return\n initialized = True\n uwsgi.add_timer(signal_number, update_period_s)\n uwsgi.register_signal(signal_number, MULE, emit)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef emit(_):\n if not initialized:\n raise NotInitialized\n\n view = {\n 'version': __version__,\n 'counters': {},\n 'gauges': {},\n 'histograms': {},\n 'meters': {},\n 'timers': {},\n 
}\n\n for (ty, module, name), metric in six.iteritems(all_metrics):\n view[ty]['%s.%s' % (module, name)] = metric.view()\n\n marshalled_view = marshal.dumps(view)\n if len(marshalled_view) > MAX_MARSHALLED_VIEW_SIZE:\n log.warn(\n 'Marshalled length too large, got %d, max %d. '\n 'Try recording fewer metrics or increasing '\n 'MAX_MARSHALLED_VIEW_SIZE'\n % (len(marshalled_view), MAX_MARSHALLED_VIEW_SIZE))\n return\n marshalled_metrics_mmap.seek(0)\n try:\n # Reading and writing to/from an mmap'ed buffer is not guaranteed\n # to be atomic, so we must serialize access to it.\n uwsgi.lock()\n marshalled_metrics_mmap.write(marshalled_view)\n finally:\n uwsgi.unlock()", "response": "Serialize metrics to the memory mapped buffer."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngets a dictionary representation of current metrics.", "response": "def view():\n \"\"\"Get a dictionary representation of current metrics.\"\"\"\n if not initialized:\n raise NotInitialized\n\n marshalled_metrics_mmap.seek(0)\n try:\n uwsgi.lock()\n marshalled_view = marshalled_metrics_mmap.read(\n MAX_MARSHALLED_VIEW_SIZE)\n finally:\n uwsgi.unlock()\n return marshal.loads(marshalled_view)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef timer(module, name, delta, duration_units='milliseconds'):\n timer = get_metric('timers', module, name, Timer(duration_units))\n timer.update(delta)", "response": "Record a timing delta"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nrecording a value in a histogram", "response": "def histogram(module, name, value):\n \"\"\"\n Record a value in a histogram:\n ::\n\n histogram(__name__, 'my_histogram', len(queue))\n \"\"\"\n histogram = get_metric('histograms', module, name, Histogram())\n histogram.update(value)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef 
counter(module, name, count=1):\n counter = get_metric('counters', module, name, Counter())\n counter.inc(count)", "response": "Record an event's occurrence in a counter"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef meter(module, name, count=1):\n meter = get_metric('meters', module, name, Meter())\n meter.mark(count)", "response": "Record a meter event rate"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef exec_command_on_nodes(nodes, cmd, label, conn_params=None):\n\n if isinstance(nodes, BASESTRING):\n nodes = [nodes]\n\n if conn_params is None:\n conn_params = DEFAULT_CONN_PARAMS\n\n logger.debug(\"Running %s on %s \", label, nodes)\n remote = ex.get_remote(cmd, nodes, conn_params)\n remote.run()\n if not remote.finished_ok:\n raise Exception('An error occcured during remote execution')\n return remote", "response": "Execute a command on a set of nodes."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef convert_out(self, obj):\n newobj = super(ProcessedImageProduct, self).convert_out(obj)\n if newobj:\n hdulist = newobj.open()\n hdr = hdulist[0].header\n if 'EMIRUUID' not in hdr:\n hdr['EMIRUUID'] = str(uuid.uuid1())\n return newobj", "response": "Write EMIRUUID header on reduction"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef verify_client(endpoint_context, request, authorization_info):\n\n if not authorization_info:\n if 'client_id' in request and 'client_secret' in request:\n auth_info = ClientSecretPost(endpoint_context).verify(request)\n auth_info['method'] = 'client_secret_post'\n elif 'client_assertion' in request:\n auth_info = JWSAuthnMethod(endpoint_context).verify(request)\n # If symmetric key was used\n # auth_method = 'client_secret_jwt'\n # If asymmetric key was used\n auth_info['method'] = 'private_key_jwt'\n 
elif 'access_token' in request:\n auth_info = BearerBody(endpoint_context).verify(request)\n auth_info['method'] = 'bearer_body'\n else:\n raise UnknownOrNoAuthnMethod()\n else:\n if authorization_info.startswith('Basic '):\n auth_info = ClientSecretBasic(endpoint_context).verify(\n request, authorization_info)\n auth_info['method'] = 'client_secret_basic'\n elif authorization_info.startswith('Bearer '):\n auth_info = BearerHeader(endpoint_context).verify(\n request, authorization_info)\n auth_info['method'] = 'bearer_header'\n else:\n raise UnknownOrNoAuthnMethod(authorization_info)\n\n try:\n client_id = auth_info['client_id']\n except KeyError:\n client_id = ''\n try:\n _token = auth_info['token']\n except KeyError:\n logger.warning('Unknown client ID')\n else:\n sinfo = endpoint_context.sdb[_token]\n auth_info['client_id'] = sinfo['authn_req']['client_id']\n else:\n try:\n _cinfo = endpoint_context.cdb[client_id]\n except KeyError:\n raise ValueError('Unknown Client ID')\n else:\n if isinstance(_cinfo,str):\n try:\n _cinfo = endpoint_context.cdb[_cinfo]\n except KeyError:\n raise ValueError('Unknown Client ID')\n\n try:\n valid_client_info(_cinfo)\n except KeyError:\n logger.warning('Client registration has timed out')\n raise ValueError('Not valid client')\n else:\n # check that the expected authz method was used\n try:\n endpoint_context.cdb[client_id]['auth_method'][\n request.__class__.__name__] = auth_info['method']\n except KeyError:\n try:\n endpoint_context.cdb[client_id]['auth_method'] = {\n request.__class__.__name__: auth_info['method']}\n except KeyError:\n pass\n\n return auth_info", "response": "Verify the client ID client ID and client authentication method and access token."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ngenerate a random ID of given length", "response": "def random_id(length):\n \"\"\"Generates a random ID of given length\"\"\"\n\n def char():\n \"\"\"Generate single random char\"\"\"\n\n 
return random.choice(string.ascii_letters + string.digits)\n\n return \"\".join(char() for _ in range(length))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef delayed_close(closable):\n\n close = getattr(closable, \"close\", None)\n if close:\n # we do not want the library to close file in case we need to\n # resume, hence make close a no-op\n # pylint: disable=unused-argument\n def replacement_close(*args, **kw):\n \"\"\" No op \"\"\"\n pass\n\n # pylint: enable=unused-argument\n setattr(closable, \"close\", replacement_close)\n try:\n yield closable\n finally:\n if close:\n setattr(closable, \"close\", close)\n closable.close()", "response": "Context manager that closes the closable object."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef map_sid2uid(self, sid, uid):\n self.set('sid2uid', sid, uid)\n self.set('uid2sid', uid, sid)", "response": "Store the connection between a Session ID and a User ID"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef map_sid2sub(self, sid, sub):\n self.set('sid2sub', sid, sub)\n self.set('sub2sid', sub, sid)", "response": "Store the connection between a Session ID and a Subject ID."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nfind all subject identifiers that are connected to a User ID.", "response": "def get_subs_by_uid(self, uid):\n \"\"\"\n Find all subject identifiers that is connected to a User ID.\n\n :param uid: A User ID\n :return: A set of subject identifiers\n \"\"\"\n res = set()\n for sid in self.get('uid2sid', uid):\n res |= set(self.get('sid2sub', sid))\n return res"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nremove the connection between a session ID and a Subject ID.", "response": "def remove_sid2sub(self, sid, sub):\n \"\"\"\n Remove the connection 
between a session ID and a Subject\n\n :param sid: Session ID\n :param sub: Subject identifier\n \"\"\"\n self.remove('sub2sid', sub, sid)\n self.remove('sid2sub', sid, sub)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef remove_sid2uid(self, sid, uid):\n self.remove('uid2sid', uid, sid)\n self.remove('sid2uid', sid, uid)", "response": "Removes the connection between a session ID and a User ID."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef remove_session_id(self, sid):\n for uid in self.get('sid2uid', sid):\n self.remove('uid2sid', uid, sid)\n self.delete('sid2uid', sid)\n\n for sub in self.get('sid2sub', sid):\n self.remove('sub2sid', sub, sid)\n self.delete('sid2sub', sid)", "response": "Remove all references to a specific Session ID"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nremoving all references to a specific User ID", "response": "def remove_uid(self, uid):\n \"\"\"\n Remove all references to a specific User ID\n\n :param uid: A User ID\n \"\"\"\n for sid in self.get('uid2sid', uid):\n self.remove('sid2uid', sid, uid)\n self.delete('uid2sid', uid)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nremove all references to a specific Subject ID", "response": "def remove_sub(self, sub):\n \"\"\"\n Remove all references to a specific Subject ID\n\n :param sub: A Subject ID\n \"\"\"\n for _sid in self.get('sub2sid', sub):\n self.remove('sid2sub', _sid, sub)\n self.delete('sub2sid', sub)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nstart a new experiment", "response": "def up(force=True, env=None, **kwargs):\n \"Starts a new experiment\"\n inventory = os.path.join(os.getcwd(), \"hosts\")\n conf = Configuration.from_dictionnary(provider_conf)\n provider = Enos_vagrant(conf)\n roles, networks = provider.init()\n check_networks(roles, networks)\n env[\"roles\"] = 
roles\n env[\"networks\"] = networks"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn public attributes of class cls.", "response": "def get_public_attributes(cls, as_list=True):\n \"\"\"\n Return class attributes that are neither private nor magic.\n\n :param cls: class\n :param as_list: [True] set to False to return generator\n :return: only public attributes of class\n \"\"\"\n attrs = (a for a in dir(cls) if not a.startswith('_'))\n if as_list:\n return list(attrs)\n return attrs"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nregister new items in the internal registry.", "response": "def register(self, newitems, *args, **kwargs):\n \"\"\"\n Register newitems in registry.\n\n :param newitems: New items to add to registry. When registering new\n items, keys are not allowed to override existing keys in the\n registry.\n :type newitems: mapping\n :param args: Positional arguments with meta data corresponding to order\n of meta names class attributes\n :param kwargs: Maps of corresponding meta for new keys. 
Each set of\n meta keys must be a subset of the new item keys.\n :raises:\n :exc:`~simkit.core.exceptions.DuplicateRegItemError`,\n :exc:`~simkit.core.exceptions.MismatchRegMetaKeysError`\n \"\"\"\n newkeys = newitems.viewkeys() # set of the new item keys\n if any(self.viewkeys() & newkeys): # duplicates\n raise DuplicateRegItemError(self.viewkeys() & newkeys)\n self.update(newitems) # register new item\n # update meta fields\n kwargs.update(zip(self.meta_names, args))\n for k, v in kwargs.iteritems():\n meta = getattr(self, k) # get the meta attribute\n if v:\n if not v.viewkeys() <= newkeys:\n raise MismatchRegMetaKeysError(newkeys - v.viewkeys())\n meta.update(v)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nremoves items from the registry.", "response": "def unregister(self, items):\n \"\"\"\n Remove items from registry.\n\n :param items:\n \"\"\"\n items = _listify(items)\n # get all members of Registry except private, special or class\n meta_names = (m for m in vars(self).iterkeys()\n if (not m.startswith('_') and m not in dir(Registry)))\n # check that meta names matches\n # FIXME: this is so lame. 
replace this with something more robust\n for m in meta_names:\n if m not in self.meta_names:\n raise AttributeError('Meta name %s not listed.')\n # pop items from Registry and from meta\n for it in items:\n if it in self:\n self.pop(it)\n for m in (getattr(self, m_) for m_ in self.meta_names):\n if it in m:\n m.pop(it)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef set_meta(mcs, bases, attr):\n # pop the meta class from the attributes\n meta = attr.pop(mcs._meta_cls, types.ClassType(mcs._meta_cls, (), {}))\n # get a list of the meta public class attributes\n meta_attrs = get_public_attributes(meta)\n # check all bases for meta\n for base in bases:\n base_meta = getattr(base, mcs._meta_cls, None)\n # skip if base has no meta\n if base_meta is None:\n continue\n # loop over base meta\n for a in get_public_attributes(base_meta, as_list=False):\n # skip if already in meta\n if a in meta_attrs:\n continue\n # copy meta-option attribute from base\n setattr(meta, a, getattr(base_meta, a))\n attr[mcs._meta_attr] = meta # set _meta combined from bases\n return attr", "response": "Sets the meta attribute of the class with the meta class from bases and adds the meta - option attributes to the attributes of the class with the meta class from attributes."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef set_param_file_or_parameters(mcs, attr):\n meta = attr[mcs._meta_attr] # look for parameter file path in meta\n cls_path = getattr(meta, mcs._path_attr, None)\n cls_file = getattr(meta, mcs._file_attr, None)\n # read parameters\n attr[mcs._param_attr] = {}\n attr[mcs._param_file] = None\n # read parameters from file\n if None not in [cls_path, cls_file]:\n param_file = os.path.join(cls_path, cls_file)\n attr[mcs._param_file] = param_file\n # read and load JSON parameter map file as \"parameters\"\n with open(param_file, 'r') as param_file:\n file_params 
= json.load(param_file)\n # update meta from file\n for k, v in file_params.pop(mcs._meta_cls, {}).iteritems():\n setattr(meta, k, v)\n # dictionary of parameters for reading source file\n attr[mcs._param_attr] = {\n k: mcs._param_cls(**v) for k, v in file_params.iteritems()\n }\n # get parameters from class\n parameters = dict.fromkeys(\n k for k, v in attr.iteritems() if isinstance(v, Parameter)\n )\n # update parameters\n for k in parameters:\n attr[mcs._param_attr][k] = attr.pop(k)\n return attr", "response": "Set parameters from a parameter file or from a parameter file."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef factory(cls, **kwargs):\n for name, obj in inspect.getmembers(sys.modules[__name__]):\n if inspect.isclass(obj) and issubclass(obj, UserAuthnMethod):\n try:\n if obj.__name__ == cls:\n return obj(**kwargs)\n except AttributeError:\n pass", "response": "Returns an instance of the class cls if it is known otherwise None."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef register(self, new_formulas, *args, **kwargs):\n kwargs.update(zip(self.meta_names, args))\n # call super method, meta must be passed as kwargs!\n super(FormulaRegistry, self).register(new_formulas, **kwargs)", "response": "Register new formulas and meta data."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nimports formulas specified in parameters.", "response": "def import_formulas(self):\n \"\"\"\n Import formulas specified in :attr:`parameters`.\n\n :returns: formulas\n :rtype: dict\n \"\"\"\n # TODO: unit tests!\n # TODO: move this to somewhere else and call it \"importy\", maybe\n # core.__init__.py since a lot of modules might use it.\n module = self.meta.module # module read from parameters\n package = getattr(self.meta, 'package', None) # package read from meta\n name = package + module if package else 
module # concat pkg + name\n path = getattr(self.meta, 'path', None) # path read from parameters\n # import module using module and package\n mod = None\n # SEE ALSO: http://docs.python.org/2/library/imp.html#examples\n try:\n # fast path: see if module was already imported\n mod = sys.modules[name]\n except KeyError:\n try:\n # import module specified in parameters\n mod = importlib.import_module(module, package)\n except ImportError as err:\n if not path:\n msg = ('%s could not be imported either because it was not '\n 'on the PYTHONPATH or path was not given.')\n LOGGER.exception(msg, name)\n raise err\n else:\n # import module using path\n # expand ~, environmental variables and make path absolute\n if not os.path.isabs(path):\n path = os.path.expanduser(os.path.expandvars(path))\n path = os.path.abspath(path)\n # paths must be a list\n paths = [path]\n # imp does not find hierarchical module names, find and load\n # packages recursively, then load module, see last paragraph\n # https://docs.python.org/2/library/imp.html#imp.find_module\n pname = '' # full dotted name of package to load\n # traverse namespace\n while name:\n # if dot in name get first package\n if '.' 
in name:\n pkg, name = name.split('.', 1)\n else:\n pkg, name = name, None # pkg is the module\n # Find package or module by name and path\n fp, filename, desc = imp.find_module(pkg, paths)\n # full dotted name of package to load\n pname = pkg if not pname else '%s.%s' % (pname, pkg)\n LOGGER.debug('package name: %s', pname)\n # try to load the package or module\n try:\n mod = imp.load_module(pname, fp, filename, desc)\n finally:\n if fp:\n fp.close()\n # append package paths for imp.find_module\n if name:\n paths = mod.__path__\n formulas = {} # an empty list of formulas\n formula_param = self.parameters # formulas key\n # FYI: iterating over dictionary is equivalent to iterkeys()\n if isinstance(formula_param, (list, tuple, dict)):\n # iterate through formulas\n for f in formula_param:\n formulas[f] = getattr(mod, f)\n elif isinstance(formula_param, basestring):\n # only one formula\n # FYI: use basestring to test for str and unicode\n # SEE: http://docs.python.org/2/library/functions.html#basestring\n formulas[formula_param] = getattr(mod, formula_param)\n else:\n # autodetect formulas assuming first letter is f\n formulas = {f: getattr(mod, f) for f in dir(mod) if f[:2] == 'f_'}\n if not len(formulas):\n for f in dir(mod):\n mod_attr = getattr(mod, f)\n if inspect.isfunction(mod_attr):\n formulas[f] = mod_attr\n return formulas"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef create_blazar_client(config, session):\n return blazar_client.Client(session=session,\n service_type=\"reservation\",\n region_name=os.environ[\"OS_REGION_NAME\"])", "response": "Create a new client for the reservation if nescessary."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef reconnecting(count=None, backoff=None):\n # we get trixxy here so we can manipulate these values in the wrapped function,\n # this is one of the first times I wish we were on 
Python 3\n # http://stackoverflow.com/a/9264845/5006\n reconn_params = {\n \"count\": count,\n \"backoff\": backoff\n }\n\n def retry_decorator(func):\n @wraps(func)\n def wrapper(self, *args, **kwargs):\n\n count = reconn_params[\"count\"]\n backoff = reconn_params[\"backoff\"]\n\n if count is None:\n count = self.connection_config.options.get('reconnect_attempts', 3)\n\n if backoff is None:\n backoff = self.connection_config.options.get('reconnect_backoff', 1.0)\n\n count = int(count)\n backoff = float(backoff)\n\n for attempt in range(1, count + 1):\n try:\n backoff_seconds = float(attempt - 1) * backoff\n if backoff_seconds:\n logger.debug(\"sleeping {} seconds before attempt {}\".format(\n backoff_seconds,\n attempt\n ))\n time.sleep(backoff_seconds)\n\n return func(self, *args, **kwargs)\n\n except InterfaceError as e:\n e_msg = str(e.e)\n # TODO -- this gets us by SQLite and Postgres, but might not\n # work in the future, so this needs to be a tad more robust\n if \"closed\" in e_msg.lower():\n if attempt == count:\n logger.debug(\"all {} attempts failed\".format(count))\n raise\n else:\n logger.debug(\"attempt {}/{} failed, retrying\".format(\n attempt,\n count\n ))\n\n else:\n raise\n\n return wrapper\n\n return retry_decorator", "response": "This is a very specific decorator that can be used to be used on Interface classes. 
It will attempt to reconnect if the connection is closed and run the same \n method again."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef escape_header(val):\n\n if val is None:\n return None\n try:\n return quote(val, encoding=\"ascii\", safe=\"/ \")\n except ValueError:\n return \"utf-8''\" + quote(val, encoding=\"utf-8\", safe=\"/ \")", "response": "Escapes a value so that it can be used in a mime header"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ngenerate one or more streams for each name value pair.", "response": "def make_streams(name, value, boundary, encoding):\n \"\"\"Generates one or more streams for each name, value pair\"\"\"\n\n filename = None\n mime = None\n\n # user passed in a special dict.\n if isinstance(value, collections.Mapping) and \"name\" in value and \"value\" in value:\n filename = value[\"name\"]\n try:\n mime = value[\"mime\"]\n except KeyError:\n pass\n value = value[\"value\"]\n\n if not filename:\n filename = getattr(value, \"name\", None)\n if filename:\n filename = os.path.split(filename)[1]\n\n mime = mime or \"application/octet-stream\"\n\n name, filename, mime = [escape_header(v) for v in (name, filename, mime)]\n\n stream = BytesIO()\n stream.write(\"--{}\\r\\n\".format(boundary).encode(encoding))\n if not filename:\n stream.write(\n 'Content-Disposition: form-data; name=\"{}\"\\r\\n'.format(name).encode(\n encoding\n )\n )\n else:\n stream.write(\n 'Content-Disposition: form-data; name=\"{}\"; filename=\"{}\"\\r\\n'.format(\n name, filename\n ).encode(encoding)\n )\n if mime:\n stream.write(\"Content-Type: {}\\r\\n\".format(mime).encode(encoding))\n stream.write(b\"\\r\\n\")\n\n if hasattr(value, \"read\"):\n stream.seek(0)\n return stream, value, BytesIO(\"\\r\\n\".encode(encoding))\n\n # not a file-like object, encode headers and value in one go\n value = value if isinstance(value, (str, bytes)) else json.dumps(value)\n if 
isinstance(value, bytes):\n stream.write(value)\n else:\n stream.write(value.encode(encoding))\n stream.write(b\"\\r\\n\")\n stream.seek(0)\n return (stream,)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef len(self):\n # The len property is needed for requests.\n # requests checks __len__, then len\n # Since we cannot implement __len__ because python 32-bit uses 32-bit\n # sizes, we implement this instead.\n def stream_len(stream):\n \"\"\"Stream length\"\"\"\n cur = stream.tell()\n try:\n stream.seek(0, 2)\n return stream.tell() - cur\n finally:\n stream.seek(cur)\n\n return sum(stream_len(s) for s in self.streams)", "response": "Returns the length of the data stream."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nindexing a thing by the keyfnc to construct pools of things.", "response": "def mk_pools(things, keyfnc=lambda x: x):\n \"Indexes a thing by the keyfnc to construct pools of things.\"\n pools = {}\n sthings = sorted(things, key=keyfnc)\n for key, thingz in groupby(sthings, key=keyfnc):\n pools.setdefault(key, []).extend(list(thingz))\n return pools"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\npicking a maximum of n things in a dict of indexed pool of things.", "response": "def pick_things(pools, key, n):\n \"Picks a maximum of n things in a dict of indexed pool of things.\"\n pool = pools.get(key)\n if not pool:\n return []\n things = pool[:n]\n del pool[:n]\n return things"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef listen(room):\n def onmessage(m):\n \"\"\"Print the new message and respond to it.\"\"\"\n print(m)\n if m.admin or m.nick == r.user.name:\n return\n if \"parrot\" in m.msg.lower():\n r.post_chat(\"ayy lmao\")\n elif m.msg.lower() in (\"lol\", \"lel\", \"kek\"):\n r.post_chat(\"*kok\")\n else:\n r.post_chat(re.sub(r\"\\blain\\b\", \"purpleadmin\", 
m.msg, re.I))\n\n with Room(room) as r:\n r.user.change_nick(\"DumbParrot\")\n r.add_listener(\"chat\", onmessage)\n r.listen()", "response": "Open a volafile room and start listening to it."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncomputes valid scan range for provided y0_frontier lower and upper frontiers.", "response": "def nscan_minmax_frontiers(y0_frontier_lower, y0_frontier_upper,\n resize=False):\n \"\"\"Compute valid scan range for provided y0_frontier values.\n Parameters\n ----------\n y0_frontier_lower : float\n Ordinate of the lower frontier.\n y0_frontier_upper : float\n Ordinate of the upper frontier.\n resize : bool\n If True, when the limits are beyond the expected values\n [1,EMIR_NAXIS2], the values are truncated.\n Returns\n -------\n nscan_min : int\n Minimum useful scan for the image.\n nscan_max : int\n Maximum useful scan for the image.\n \"\"\"\n\n fraction_pixel = y0_frontier_lower - int(y0_frontier_lower)\n if fraction_pixel > 0.0:\n nscan_min = int(y0_frontier_lower) + 1\n else:\n nscan_min = int(y0_frontier_lower)\n if nscan_min < 1:\n if resize:\n nscan_min = 1\n else:\n raise ValueError(\"nscan_min=\" + str(nscan_min) + \" is < 1\")\n\n fraction_pixel = y0_frontier_upper - int(y0_frontier_upper)\n if fraction_pixel > 0.0:\n nscan_max = int(y0_frontier_upper)\n else:\n nscan_max = int(y0_frontier_upper) - 1\n if nscan_max > EMIR_NAXIS2:\n if resize:\n nscan_max = EMIR_NAXIS2\n else:\n raise ValueError(\"nscan_max=\" + str(nscan_max) +\n \" is > NAXIS2_EMIR=\" + str(EMIR_NAXIS2))\n\n return nscan_min, nscan_max"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning the average DtuConfiguration instance with averaged values.", "response": "def average_dtu_configurations(list_of_objects):\n \"\"\"Return DtuConfiguration instance with averaged values.\n\n Parameters\n ----------\n list_of_objects : python list\n List of DtuConfiguration instances to be 
averaged.\n\n Returns\n -------\n result : DtuConfiguration instance\n Object with averaged values.\n\n \"\"\"\n\n result = DtuConfiguration()\n\n if len(list_of_objects) == 0:\n return result\n\n list_of_members = result.__dict__.keys()\n\n # compute average of all the members of the class\n for member in list_of_members:\n result.__dict__[member] = np.mean(\n [tmp_dtu.__dict__[member] for tmp_dtu in list_of_objects]\n )\n\n return result"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef maxdiff_dtu_configurations(list_of_objects):\n\n result = DtuConfiguration()\n\n if len(list_of_objects) == 0:\n return result\n\n list_of_members = result.__dict__.keys()\n\n # compute maximum difference for each member\n for member in list_of_members:\n tmp_array = np.array(\n [tmp_dtu.__dict__[member] for tmp_dtu in list_of_objects]\n )\n minval = tmp_array.min()\n maxval = tmp_array.max()\n result.__dict__[member] = maxval - minval\n\n return result", "response": "Returns a new DtuConfiguration instance with maximum differences."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef define_from_fits(cls, fitsobj, extnum=0):\n\n # read input FITS file\n with fits.open(fitsobj) as hdulist:\n image_header = hdulist[extnum].header\n return cls.define_from_header(image_header)", "response": "Define class object from header information in FITS file."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef define_from_dictionary(cls, inputdict):\n\n self = DtuConfiguration()\n for item in self.__dict__:\n self.__dict__[item] = inputdict[item]\n return self", "response": "Define class object from dictionary."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ndefines class object from provided values.", "response": "def define_from_values(cls, xdtu, ydtu, zdtu, xdtu_0, ydtu_0, zdtu_0):\n 
\"\"\"Define class object from provided values.\n\n Parameters\n ----------\n xdtu : float\n XDTU fits keyword value.\n ydtu : float\n YDTU fits keyword value.\n zdtu : float\n ZDTU fits keyword value.\n xdtu_0 : float\n XDTU_0 fits keyword value.\n ydtu_0 : float\n YDTU_0 fits keyword value.\n zdtu_0 : float\n ZDTU_0 fits keyword value.\n\n \"\"\"\n\n self = DtuConfiguration()\n # define DTU variables\n self.xdtu = xdtu\n self.ydtu = ydtu\n self.zdtu = zdtu\n self.xdtu_0 = xdtu_0\n self.ydtu_0 = ydtu_0\n self.zdtu_0 = zdtu_0\n return self"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nchecking that all the members are within the specified absolute error.", "response": "def closeto(self, other, abserror):\n \"\"\"Check that all the members are equal within provided absolute error.\n\n Parameters\n ----------\n other : DtuConfiguration object\n DTU configuration instance to be compared with self.\n abserror : float\n Absolute maximum allowed error.\n\n Returns\n -------\n result : bool\n True if all members are within the specified maximum\n absolute error\n\n \"\"\"\n\n result = \\\n (abs(self.xdtu - other.xdtu) <= abserror) and \\\n (abs(self.ydtu - other.ydtu) <= abserror) and \\\n (abs(self.zdtu - other.zdtu) <= abserror) and \\\n (abs(self.xdtu_0 - other.xdtu_0) <= abserror) and \\\n (abs(self.ydtu_0 - other.ydtu_0) <= abserror) and \\\n (abs(self.zdtu_0 - other.zdtu_0) <= abserror)\n return result"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef outdict(self, ndigits=3):\n\n output = self.__dict__.copy()\n for item in output:\n output[item] = round(output[item], ndigits)\n return output", "response": "Return dictionary structure rounded to a given precision."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef build_endpoints(conf, endpoint_context, client_authn_method, issuer):\n\n if issuer.endswith('/'):\n 
_url = issuer[:-1]\n else:\n _url = issuer\n\n endpoint = {}\n for name, spec in conf.items():\n try:\n kwargs = spec['kwargs']\n except KeyError:\n kwargs = {}\n\n if isinstance(spec['class'], str):\n _instance = importer(spec['class'])(\n endpoint_context=endpoint_context, **kwargs)\n else:\n _instance = spec['class'](endpoint_context=endpoint_context,\n **kwargs)\n\n _instance.endpoint_path = spec['path']\n _instance.full_path = '{}/{}'.format(_url, spec['path'])\n if 'provider_info' in spec:\n _instance.provider_info = spec['provider_info']\n\n try:\n _client_authn_method = kwargs['client_authn_method']\n except KeyError:\n _instance.client_auth_method = client_authn_method\n else:\n _instance.client_auth_method = _client_authn_method\n\n endpoint[name] = _instance\n\n return endpoint", "response": "Builds the list of endpoints for the given configuration."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef lv_pack(*args):\n s = []\n for a in args:\n s.append('{}:{}'.format(len(a), a))\n return ''.join(s)", "response": "Serializes using length : value format\n "} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef lv_unpack(txt):\n txt = txt.strip()\n res = []\n while txt:\n l, v = txt.split(':', 1)\n res.append(v[:int(l)])\n txt = v[int(l):]\n return res", "response": "Deserializes a string of the length : value format\n "} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreserves and deploys the vagrant boxes.", "response": "def init(self, force_deploy=False):\n \"\"\"Reserve and deploys the vagrant boxes.\n\n Args:\n force_deploy (bool): True iff new machines should be started\n \"\"\"\n machines = self.provider_conf.machines\n networks = self.provider_conf.networks\n _networks = []\n for network in networks:\n ipnet = IPNetwork(network.cidr)\n _networks.append({\n \"netpool\": list(ipnet)[10:-10],\n \"cidr\": 
network.cidr,\n \"roles\": network.roles,\n \"gateway\": ipnet.ip\n })\n\n vagrant_machines = []\n vagrant_roles = {}\n j = 0\n for machine in machines:\n for _ in range(machine.number):\n vagrant_machine = {\n \"name\": \"enos-%s\" % j,\n \"cpu\": machine.flavour_desc[\"core\"],\n \"mem\": machine.flavour_desc[\"mem\"],\n \"ips\": [n[\"netpool\"].pop() for n in _networks],\n }\n vagrant_machines.append(vagrant_machine)\n # Assign the machines to the right roles\n for role in machine.roles:\n vagrant_roles.setdefault(role, []).append(vagrant_machine)\n j = j + 1\n\n logger.debug(vagrant_roles)\n\n loader = FileSystemLoader(searchpath=TEMPLATE_DIR)\n env = Environment(loader=loader, autoescape=True)\n template = env.get_template('Vagrantfile.j2')\n vagrantfile = template.render(machines=vagrant_machines,\n provider_conf=self.provider_conf)\n vagrantfile_path = os.path.join(os.getcwd(), \"Vagrantfile\")\n with open(vagrantfile_path, 'w') as f:\n f.write(vagrantfile)\n\n # Build env for Vagrant with a copy of env variables (needed by\n # subprocess opened by vagrant\n v_env = dict(os.environ)\n v_env['VAGRANT_DEFAULT_PROVIDER'] = self.provider_conf.backend\n\n v = vagrant.Vagrant(root=os.getcwd(),\n quiet_stdout=False,\n quiet_stderr=False,\n env=v_env)\n if force_deploy:\n v.destroy()\n v.up()\n v.provision()\n roles = {}\n for role, machines in vagrant_roles.items():\n for machine in machines:\n keyfile = v.keyfile(vm_name=machine['name'])\n port = v.port(vm_name=machine['name'])\n address = v.hostname(vm_name=machine['name'])\n roles.setdefault(role, []).append(\n Host(address,\n alias=machine['name'],\n user=self.provider_conf.user,\n port=port,\n keyfile=keyfile))\n\n networks = [{\n 'cidr': str(n[\"cidr\"]),\n 'start': str(n[\"netpool\"][0]),\n 'end': str(n[\"netpool\"][-1]),\n 'dns': '8.8.8.8',\n 'gateway': str(n[\"gateway\"]),\n 'roles': n[\"roles\"]\n } for n in _networks]\n logger.debug(roles)\n logger.debug(networks)\n\n return (roles, networks)"} 
{"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ndestroys all vagrant boxes involved in the deployment.", "response": "def destroy(self):\n \"\"\"Destroy all vagrant boxes involved in the deployment.\"\"\"\n v = vagrant.Vagrant(root=os.getcwd(),\n quiet_stdout=False,\n quiet_stderr=True)\n v.destroy()"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef tick(self):\n instant_rate = self.count / float(self.tick_interval_s)\n self.count = 0\n if self.initialized:\n self.rate += (self.alpha * (instant_rate - self.rate))\n else:\n self.rate = instant_rate\n self.initialized = True", "response": "Mark the passage of time and decay the current rate accordingly."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nmerges odd- and even-numbered CSU configurations into one CSU configuration.", "response": "def merge_odd_even_csu_configurations(conf_odd, conf_even):\n \"\"\"Merge CSU configuration using odd- and even-numbered values.\n\n The returned CSU configuration includes the odd-numbered values\n from 'conf_odd' and the even-numbered values from 'conf_even'.\n\n Parameters\n ----------\n conf_odd : CsuConfiguration instance\n CSU configuration corresponding to odd-numbered slitlets.\n conf_even : CsuConfiguration instance\n CSU configuration corresponding to even-numbered slitlets.\n\n Returns\n -------\n merged_conf : CsuConfiguration instance\n CSU configuration resulting from the merging process.\n\n \"\"\"\n\n # initialize resulting CsuConfiguration instance using one of the\n # input configuration corresponding to the odd-numbered slitlets\n merged_conf = deepcopy(conf_odd)\n\n # update the resulting configuration with the values corresponding\n # to the even-numbered slitlets\n for i in range(EMIR_NBARS):\n ibar = i + 1\n if ibar % 2 == 0:\n merged_conf._csu_bar_left[i] = conf_even._csu_bar_left[i]\n merged_conf._csu_bar_right[i] = 
conf_even._csu_bar_right[i]\n merged_conf._csu_bar_slit_center[i] = \\\n conf_even._csu_bar_slit_center[i]\n merged_conf._csu_bar_slit_width[i] = \\\n conf_even._csu_bar_slit_width[i]\n\n # return merged configuration\n return merged_conf"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef define_from_header(cls, image_header):\n\n self = CsuConfiguration()\n\n # declare lists to store configuration of CSU bars\n self._csu_bar_left = []\n self._csu_bar_right = []\n self._csu_bar_slit_center = []\n self._csu_bar_slit_width = []\n\n for i in range(EMIR_NBARS):\n ibar = i + 1\n keyword = 'CSUP{}'.format(ibar)\n if keyword in image_header:\n self._csu_bar_left.append(image_header[keyword])\n else:\n raise ValueError(\"Expected keyword \" + keyword + \" not found!\")\n keyword = 'CSUP{}'.format(ibar + EMIR_NBARS)\n if keyword in image_header:\n # set the same origin as the one employed for _csu_bar_left\n self._csu_bar_right.append(341.5 - image_header[keyword])\n else:\n raise ValueError(\"Expected keyword \" + keyword + \" not found!\")\n self._csu_bar_slit_center.append(\n (self._csu_bar_left[i] + self._csu_bar_right[i]) / 2\n )\n self._csu_bar_slit_width.append(\n self._csu_bar_right[i] - self._csu_bar_left[i]\n )\n\n return self", "response": "Define class members directly from FITS header."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn dictionary structure rounded to a given precision.", "response": "def outdict(self, ndigits=3):\n \"\"\"Return dictionary structure rounded to a given precision.\"\"\"\n\n outdict = {}\n for i in range(EMIR_NBARS):\n ibar = i + 1\n cbar = 'slitlet' + str(ibar).zfill(2)\n outdict[cbar] = {}\n outdict[cbar]['_csu_bar_left'] = \\\n round(self._csu_bar_left[i], ndigits)\n outdict[cbar]['_csu_bar_right'] = \\\n round(self._csu_bar_right[i], ndigits)\n outdict[cbar]['_csu_bar_slit_center'] = \\\n round(self._csu_bar_slit_center[i], ndigits)\n 
outdict[cbar]['_csu_bar_slit_width'] = \\\n round(self._csu_bar_slit_width[i], ndigits)\n\n return outdict"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef widths_in_range_mm(\n self,\n minwidth=EMIR_MINIMUM_SLITLET_WIDTH_MM,\n maxwidth=EMIR_MAXIMUM_SLITLET_WIDTH_MM\n ):\n \"\"\"Return list of slitlets which width is within given range\n\n Parameters\n ----------\n minwidth : float\n Minimum slit width (mm).\n maxwidth : float\n Maximum slit width (mm).\n\n Returns\n -------\n list_ok : list\n List of booleans indicating whether the corresponding\n slitlet width is within range\n\n \"\"\"\n\n list_ok = []\n for i in range(EMIR_NBARS):\n slitlet_ok = minwidth <= self._csu_bar_slit_width[i] <= maxwidth\n if slitlet_ok:\n list_ok.append(i + 1)\n\n return list_ok", "response": "Return list of slitlets which width is within given range."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nenclosing function backported from old photutils", "response": "def encloses_annulus(x_min, x_max, y_min, y_max, nx, ny, r_in, r_out):\n '''Encloses function backported from old photutils'''\n\n gout = circular_overlap_grid(x_min, x_max, y_min, y_max, nx, ny, r_out, 1, 1)\n gin = circular_overlap_grid(x_min, x_max, y_min, y_max, nx, ny, r_in, 1, 1)\n return gout - gin"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef comp_back_with_annulus(img, xc, yc, r_in, r_out, frac=0.1):\n '''\n center: [x,y], center of first pixel is [0,0]\n '''\n\n x_min = -0.5 - xc\n x_max = img.shape[1] - 0.5 - xc\n y_min = -0.5 - yc\n y_max = img.shape[1] - 0.5 - yc\n mm = encloses_annulus(x_min, x_max, y_min, y_max,\n img.shape[1], img.shape[0],\n r_in, r_out)\n\n valid = mm > frac\n rr = img[valid]\n\n if rr.size == 0:\n raise ValueError(\"Not enough points to compute background\")\n\n # mode?\n bck = mode_half_sample(rr)\n\n return bck, mm", "response": "Compute 
background using an annulus"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef main(args=None):\n\n # parse command-line options\n parser = argparse.ArgumentParser(prog='rect_wpoly_for_mos')\n # required arguments\n parser.add_argument(\"input_list\",\n help=\"TXT file with list JSON files derived from \"\n \"longslit data\")\n parser.add_argument(\"--fitted_bound_param\", required=True,\n help=\"Input JSON with fitted boundary parameters\",\n type=argparse.FileType('rt'))\n parser.add_argument(\"--out_MOSlibrary\", required=True,\n help=\"Output JSON file with results\",\n type=lambda x: arg_file_is_new(parser, x))\n # optional arguments\n parser.add_argument(\"--debugplot\",\n help=\"Integer indicating plotting & debugging options\"\n \" (default=0)\",\n default=0, type=int,\n choices=DEBUGPLOT_CODES)\n parser.add_argument(\"--echo\",\n help=\"Display full command line\",\n action=\"store_true\")\n args = parser.parse_args(args)\n\n if args.echo:\n print('\\033[1m\\033[31m% ' + ' '.join(sys.argv) + '\\033[0m\\n')\n\n # ---\n\n # Read input TXT file with list of JSON files\n list_json_files = list_fileinfo_from_txt(args.input_list)\n nfiles = len(list_json_files)\n if abs(args.debugplot) >= 10:\n print('>>> Number of input JSON files:', nfiles)\n for item in list_json_files:\n print(item)\n if nfiles < 2:\n raise ValueError(\"Insufficient number of input JSON files\")\n\n # read fitted boundary parameters and check that all the longslit JSON\n # files have been computed using the same fitted boundary parameters\n refined_boundary_model = RefinedBoundaryModelParam._datatype_load(\n args.fitted_bound_param.name)\n for ifile in range(nfiles):\n coef_rect_wpoly = RectWaveCoeff._datatype_load(\n list_json_files[ifile].filename)\n uuid_tmp = coef_rect_wpoly.meta_info['origin']['bound_param']\n if uuid_tmp[4:] != refined_boundary_model.uuid:\n print('Expected uuid:', refined_boundary_model.uuid)\n print('uuid 
for ifile #' + str(ifile + 1) + \": \" + uuid_tmp)\n raise ValueError(\"Fitted boundary parameter uuid's do not match\")\n\n # check consistency of grism, filter, DTU configuration and list of\n # valid slitlets\n coef_rect_wpoly_first_longslit = RectWaveCoeff._datatype_load(\n list_json_files[0].filename)\n filter_name = coef_rect_wpoly_first_longslit.tags['filter']\n grism_name = coef_rect_wpoly_first_longslit.tags['grism']\n dtu_conf = DtuConfiguration.define_from_dictionary(\n coef_rect_wpoly_first_longslit.meta_info['dtu_configuration']\n )\n list_valid_islitlets = list(range(1, EMIR_NBARS + 1))\n for idel in coef_rect_wpoly_first_longslit.missing_slitlets:\n list_valid_islitlets.remove(idel)\n for ifile in range(1, nfiles):\n coef_rect_wpoly = RectWaveCoeff._datatype_load(\n list_json_files[ifile].filename)\n filter_tmp = coef_rect_wpoly.tags['filter']\n if filter_name != filter_tmp:\n print(filter_name)\n print(filter_tmp)\n raise ValueError(\"Unexpected different filter found\")\n grism_tmp = coef_rect_wpoly.tags['grism']\n if grism_name != grism_tmp:\n print(grism_name)\n print(grism_tmp)\n raise ValueError(\"Unexpected different grism found\")\n coef_rect_wpoly = RectWaveCoeff._datatype_load(\n list_json_files[ifile].filename)\n dtu_conf_tmp = DtuConfiguration.define_from_dictionary(\n coef_rect_wpoly.meta_info['dtu_configuration']\n )\n if dtu_conf != dtu_conf_tmp:\n print(dtu_conf)\n print(dtu_conf_tmp)\n raise ValueError(\"Unexpected different DTU configurations found\")\n list_valid_islitlets_tmp = list(range(1, EMIR_NBARS + 1))\n for idel in coef_rect_wpoly.missing_slitlets:\n list_valid_islitlets_tmp.remove(idel)\n if list_valid_islitlets != list_valid_islitlets_tmp:\n print(list_valid_islitlets)\n print(list_valid_islitlets_tmp)\n raise ValueError(\"Unexpected different list of valid slitlets\")\n\n # check consistency of horizontal bounding box limits (bb_nc1_orig and\n # bb_nc2_orig) and ymargin_bb, and store the values for each slitlet\n 
dict_bb_param = {}\n print(\"Checking horizontal bounding box limits and ymargin_bb:\")\n for islitlet in list_valid_islitlets:\n islitlet_progress(islitlet, EMIR_NBARS)\n cslitlet = 'slitlet' + str(islitlet).zfill(2)\n dict_bb_param[cslitlet] = {}\n for par in ['bb_nc1_orig', 'bb_nc2_orig', 'ymargin_bb']:\n value_initial = \\\n coef_rect_wpoly_first_longslit.contents[islitlet - 1][par]\n for ifile in range(1, nfiles):\n coef_rect_wpoly = RectWaveCoeff._datatype_load(\n list_json_files[ifile].filename)\n value_tmp = coef_rect_wpoly.contents[islitlet - 1][par]\n if value_initial != value_tmp:\n print(islitlet, value_initial, value_tmp)\n print(value_tmp)\n raise ValueError(\"Unexpected different \" + par)\n dict_bb_param[cslitlet][par] = value_initial\n print('OK!')\n\n # ---\n\n # Read and store all the longslit data\n list_coef_rect_wpoly = []\n for ifile in range(nfiles):\n coef_rect_wpoly = RectWaveCoeff._datatype_load(\n list_json_files[ifile].filename)\n list_coef_rect_wpoly.append(coef_rect_wpoly)\n\n # ---\n\n # Initialize structure to save results into an ouptut JSON file\n outdict = {}\n outdict['refined_boundary_model'] = refined_boundary_model.__getstate__()\n outdict['instrument'] = 'EMIR'\n outdict['meta_info'] = {}\n outdict['meta_info']['creation_date'] = datetime.now().isoformat()\n outdict['meta_info']['description'] = \\\n 'rectification and wavelength calibration polynomial coefficients ' \\\n 'as a function of csu_bar_slit_center for MOS'\n outdict['meta_info']['recipe_name'] = 'undefined'\n outdict['meta_info']['origin'] = {}\n outdict['meta_info']['origin']['wpoly_longslits'] = {}\n for ifile in range(nfiles):\n cdum = 'longslit_' + str(ifile + 1).zfill(3) + '_uuid'\n outdict['meta_info']['origin']['wpoly_longslits'][cdum] = \\\n list_coef_rect_wpoly[ifile].uuid\n outdict['tags'] = {}\n outdict['tags']['grism'] = grism_name\n outdict['tags']['filter'] = filter_name\n outdict['dtu_configuration'] = dtu_conf.outdict()\n outdict['uuid'] = 
str(uuid4())\n outdict['contents'] = {}\n\n # include bb_nc1_orig, bb_nc2_orig and ymargin_bb for each slitlet\n # (note that the values of bb_ns1_orig and bb_ns2_orig cannot be\n # computed at this stage because they depend on csu_bar_slit_center)\n for islitlet in list_valid_islitlets:\n cslitlet = 'slitlet' + str(islitlet).zfill(2)\n outdict['contents'][cslitlet] = dict_bb_param[cslitlet]\n\n # check that order for rectification transformations is the same for all\n # the slitlets and longslit configurations\n order_check_list = []\n for ifile in range(nfiles):\n tmpdict = list_coef_rect_wpoly[ifile].contents\n for islitlet in list_valid_islitlets:\n ttd_order = tmpdict[islitlet - 1]['ttd_order']\n if ttd_order is not None:\n order_check_list.append(ttd_order)\n ttd_order_modeled = \\\n tmpdict[islitlet - 1]['ttd_order_longslit_model']\n order_check_list.append(ttd_order_modeled)\n # remove duplicates in list\n order_no_duplicates = list(set(order_check_list))\n if len(order_no_duplicates) != 1:\n print('order_no_duplicates:', order_no_duplicates)\n raise ValueError('tdd_order is not constant!')\n ttd_order = int(order_no_duplicates[0])\n ncoef_rect = ncoef_fmap(ttd_order)\n if abs(args.debugplot) >= 10:\n print('>>> ttd_order........:', ttd_order)\n print('>>> ncoef_rect.......:', ncoef_rect)\n\n # check that polynomial degree in frontiers and spectrails are the same\n poldeg_check_list = []\n for ifile in range(nfiles):\n tmpdict = list_coef_rect_wpoly[ifile].contents\n for islitlet in list_valid_islitlets:\n tmppoly = tmpdict[islitlet - 1]['frontier']['poly_coef_lower']\n poldeg_check_list.append(len(tmppoly) - 1)\n tmppoly = tmpdict[islitlet - 1]['frontier']['poly_coef_upper']\n poldeg_check_list.append(len(tmppoly) - 1)\n tmppoly = tmpdict[islitlet - 1]['spectrail']['poly_coef_lower']\n poldeg_check_list.append(len(tmppoly) - 1)\n tmppoly = tmpdict[islitlet - 1]['spectrail']['poly_coef_middle']\n poldeg_check_list.append(len(tmppoly) - 1)\n tmppoly = 
tmpdict[islitlet - 1]['spectrail']['poly_coef_upper']\n poldeg_check_list.append(len(tmppoly) - 1)\n # remove duplicates in list\n poldeg_no_duplicates = list(set(poldeg_check_list))\n if len(poldeg_no_duplicates) != 1:\n print('poldeg_no_duplicates:', poldeg_no_duplicates)\n raise ValueError('poldeg is not constant in frontiers and '\n 'spectrails!')\n poldeg_spectrails = int(poldeg_no_duplicates[0])\n if abs(args.debugplot) >= 10:\n print('>>> poldeg spectrails:', poldeg_spectrails)\n\n # check that polynomial degree of wavelength calibration is the same for\n # all the slitlets\n poldeg_check_list = []\n for ifile in range(nfiles):\n tmpdict = list_coef_rect_wpoly[ifile].contents\n for islitlet in list_valid_islitlets:\n tmppoly = tmpdict[islitlet - 1]['wpoly_coeff']\n poldeg_check_list.append(len(tmppoly) - 1)\n tmppoly = tmpdict[islitlet - 1]['wpoly_coeff_longslit_model']\n poldeg_check_list.append(len(tmppoly) - 1)\n # remove duplicates in list\n poldeg_no_duplicates = list(set(poldeg_check_list))\n if len(poldeg_no_duplicates) != 1:\n print('poldeg_no_duplicates:', poldeg_no_duplicates)\n raise ValueError('poldeg is not constant in wavelength calibration '\n 'polynomials!')\n poldeg_wavecal = int(poldeg_no_duplicates[0])\n if abs(args.debugplot) >= 10:\n print('>>> poldeg wavecal...:', poldeg_wavecal)\n\n # ---\n\n # csu_bar_slit_center values for each slitlet\n print(\"CSU_bar_slit_center values:\")\n for islitlet in list_valid_islitlets:\n islitlet_progress(islitlet, EMIR_NBARS)\n cslitlet = 'slitlet' + str(islitlet).zfill(2)\n list_csu_bar_slit_center = []\n for ifile in range(nfiles):\n tmpdict = list_coef_rect_wpoly[ifile].contents[islitlet - 1]\n csu_bar_slit_center = tmpdict['csu_bar_slit_center']\n list_csu_bar_slit_center.append(csu_bar_slit_center)\n # check that list_csu_bar_slit_center is properly sorted\n if not np.all(list_csu_bar_slit_center[:-1] <=\n list_csu_bar_slit_center[1:]):\n print('cslitlet: ', cslitlet)\n 
print('list_csu_bar_slit_center: ', list_csu_bar_slit_center)\n raise ValueError('Unsorted list_csu_bar_slit_center')\n outdict['contents'][cslitlet]['list_csu_bar_slit_center'] = \\\n list_csu_bar_slit_center\n print('OK!')\n\n # ---\n\n # rectification polynomial coefficients\n\n # note: when aij and bij have not been computed, we use the modeled\n # version aij_longslit_model and bij_longslit_model\n print(\"Rectification polynomial coefficients:\")\n for islitlet in list_valid_islitlets:\n islitlet_progress(islitlet, EMIR_NBARS)\n cslitlet = 'slitlet' + str(islitlet).zfill(2)\n outdict['contents'][cslitlet]['ttd_order'] = ttd_order\n outdict['contents'][cslitlet]['ncoef_rect'] = ncoef_rect\n for keycoef in ['ttd_aij', 'ttd_bij', 'tti_aij', 'tti_bij']:\n for icoef in range(ncoef_rect):\n ccoef = str(icoef).zfill(2)\n list_cij = []\n for ifile in range(nfiles):\n tmpdict = \\\n list_coef_rect_wpoly[ifile].contents[islitlet - 1]\n cij = tmpdict[keycoef]\n if cij is not None:\n list_cij.append(cij[icoef])\n else:\n cij_modeled = tmpdict[keycoef + '_longslit_model']\n if cij_modeled is None:\n raise ValueError(\"Unexpected cij_modeled=None!\")\n else:\n list_cij.append(cij_modeled[icoef])\n if abs(args.debugplot) >= 10:\n print(\"Warning: using \" + keycoef +\n \"_longslit_model for \" + cslitlet +\n \" in file \" +\n list_json_files[ifile].filename)\n cdum = 'list_' + keycoef + '_' + ccoef\n outdict['contents'][cslitlet][cdum] = list_cij\n print('OK!')\n\n # ---\n\n # wavelength calibration polynomial coefficients\n\n # note: when wpoly_coeff have not been computed, we use the\n # wpoly_coeff_longslit_model\n print(\"Wavelength calibration polynomial coefficients:\")\n for islitlet in list_valid_islitlets:\n islitlet_progress(islitlet, EMIR_NBARS)\n cslitlet = 'slitlet' + str(islitlet).zfill(2)\n outdict['contents'][cslitlet]['wpoly_degree'] = poldeg_wavecal\n for icoef in range(poldeg_wavecal + 1):\n ccoef = str(icoef).zfill(2)\n list_cij = []\n for ifile in 
range(nfiles):\n tmpdict = list_coef_rect_wpoly[ifile].contents[islitlet - 1]\n cij = tmpdict['wpoly_coeff']\n if cij is not None:\n list_cij.append(cij[icoef])\n else:\n cij_modeled = tmpdict['wpoly_coeff_longslit_model']\n if cij_modeled is None:\n raise ValueError(\"Unexpected cij_modeled=None!\")\n else:\n list_cij.append(cij_modeled[icoef])\n if abs(args.debugplot) >= 10:\n print(\"Warning: using wpoly_coeff_longslit_model\" +\n \" for \" + cslitlet +\n \" in file \" +\n list_json_files[ifile].filename)\n outdict['contents'][cslitlet]['list_wpoly_coeff_' + ccoef] = \\\n list_cij\n print('OK!')\n\n # ---\n\n # OBSOLETE\n # Save resulting JSON structure\n '''\n with open(args.out_MOSlibrary.name + '_old', 'w') as fstream:\n json.dump(outdict, fstream, indent=2, sort_keys=True)\n print('>>> Saving file ' + args.out_MOSlibrary.name + '_old')\n '''\n\n # --\n\n # Create object of type MasterRectWave with library of coefficients\n # for rectification and wavelength calibration\n master_rectwv = MasterRectWave(instrument='EMIR')\n master_rectwv.quality_control = numina.types.qc.QC.GOOD\n master_rectwv.tags['grism'] = grism_name\n master_rectwv.tags['filter'] = filter_name\n master_rectwv.meta_info['dtu_configuration'] = outdict['dtu_configuration']\n master_rectwv.meta_info['refined_boundary_model'] = {\n 'parmodel': refined_boundary_model.meta_info['parmodel']\n }\n master_rectwv.meta_info['refined_boundary_model'].update(\n outdict['refined_boundary_model']['contents']\n )\n master_rectwv.total_slitlets = EMIR_NBARS\n master_rectwv.meta_info['origin'] = {\n 'bound_param': 'uuid' + refined_boundary_model.uuid,\n 'longslit_frames': ['uuid:' + list_coef_rect_wpoly[ifile].uuid\n for ifile in range(nfiles)]\n }\n for i in range(EMIR_NBARS):\n islitlet = i + 1\n dumdict = {'islitlet': islitlet}\n cslitlet = 'slitlet' + str(islitlet).zfill(2)\n if cslitlet in outdict['contents']:\n dumdict.update(outdict['contents'][cslitlet])\n else:\n dumdict.update({\n 'bb_nc1_orig': 
0,\n 'bb_nc2_orig': 0,\n 'ymargin_bb': 0,\n 'list_csu_bar_slit_center': [],\n 'ttd_order': 0,\n 'ncoef_rect': 0,\n 'wpolydegree': 0\n })\n master_rectwv.missing_slitlets.append(islitlet)\n master_rectwv.contents.append(dumdict)\n master_rectwv.writeto(args.out_MOSlibrary.name)\n print('>>> Saving file ' + args.out_MOSlibrary.name)", "response": "Main function for the rect_wpoly_for_mos command line tool."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef display_slitlet_arrangement(fileobj,\n grism=None,\n spfilter=None,\n bbox=None,\n adjust=None,\n geometry=None,\n debugplot=0):\n \"\"\"Display slitlet arrangement from CSUP keywords in FITS header.\n\n Parameters\n ----------\n fileobj : file object\n FITS or TXT file object.\n grism : str\n Grism.\n spfilter : str\n Filter.\n bbox : tuple of 4 floats\n If not None, values for xmin, xmax, ymin and ymax.\n adjust : bool\n Adjust X range according to minimum and maximum csu_bar_left\n and csu_bar_right (note that this option is overridden by 'bbox')\n geometry : tuple (4 integers) or None\n x, y, dx, dy values employed to set the Qt backend geometry.\n debugplot : int\n Determines whether intermediate computations and/or plots\n are displayed. 
The valid codes are defined in\n numina.array.display.pause_debugplot\n\n Returns\n -------\n csu_bar_left : list of floats\n Location (mm) of the left bar for each slitlet.\n csu_bar_right : list of floats\n Location (mm) of the right bar for each slitlet, using the\n same origin employed for csu_bar_left (which is not the\n value stored in the FITS keywords).\n csu_bar_slit_center : list of floats\n Middle point (mm) in between the two bars defining a slitlet.\n csu_bar_slit_width : list of floats\n Slitlet width (mm), computed as the distance between the two\n bars defining the slitlet.\n\n \"\"\"\n\n if fileobj.name[-4:] == \".txt\":\n if grism is None:\n raise ValueError(\"Undefined grism!\")\n if spfilter is None:\n raise ValueError(\"Undefined filter!\")\n # define CsuConfiguration object\n csu_config = CsuConfiguration()\n csu_config._csu_bar_left = []\n csu_config._csu_bar_right = []\n csu_config._csu_bar_slit_center = []\n csu_config._csu_bar_slit_width = []\n\n # since the input filename has been opened with argparse in binary\n # mode, it is necessary to close it and open it in text mode\n fileobj.close()\n # read TXT file\n with open(fileobj.name, mode='rt') as f:\n file_content = f.read().splitlines()\n next_id_bar = 1\n for line in file_content:\n if len(line) > 0:\n if line[0] not in ['#']:\n line_contents = line.split()\n id_bar = int(line_contents[0])\n position = float(line_contents[1])\n if id_bar == next_id_bar:\n if id_bar <= EMIR_NBARS:\n csu_config._csu_bar_left.append(position)\n next_id_bar = id_bar + EMIR_NBARS\n else:\n csu_config._csu_bar_right.append(341.5 - position)\n next_id_bar = id_bar - EMIR_NBARS + 1\n else:\n raise ValueError(\"Unexpected id_bar:\" + str(id_bar))\n\n # compute slit width and center\n for i in range(EMIR_NBARS):\n csu_config._csu_bar_slit_center.append(\n (csu_config._csu_bar_left[i] + csu_config._csu_bar_right[i])/2\n )\n csu_config._csu_bar_slit_width.append(\n csu_config._csu_bar_right[i] - 
csu_config._csu_bar_left[i]\n )\n\n else:\n # read input FITS file\n hdulist = fits.open(fileobj.name)\n image_header = hdulist[0].header\n hdulist.close()\n\n # additional info from header\n grism = image_header['grism']\n spfilter = image_header['filter']\n\n # define slitlet arrangement\n csu_config = CsuConfiguration.define_from_fits(fileobj)\n\n # determine calibration\n if grism in [\"J\", \"OPEN\"] and spfilter == \"J\":\n wv_parameters = set_wv_parameters(\"J\", \"J\")\n elif grism in [\"H\", \"OPEN\"] and spfilter == \"H\":\n wv_parameters = set_wv_parameters(\"H\", \"H\")\n elif grism in [\"K\", \"OPEN\"] and spfilter == \"Ksp\":\n wv_parameters = set_wv_parameters(\"Ksp\", \"K\")\n elif grism in [\"LR\", \"OPEN\"] and spfilter == \"YJ\":\n wv_parameters = set_wv_parameters(\"YJ\", \"LR\")\n elif grism in [\"LR\", \"OPEN\"] and spfilter == \"HK\":\n wv_parameters = set_wv_parameters(\"HK\", \"LR\")\n else:\n raise ValueError(\"Invalid grism + filter configuration\")\n\n crval1 = wv_parameters['poly_crval1_linear']\n cdelt1 = wv_parameters['poly_cdelt1_linear']\n\n wvmin_useful = wv_parameters['wvmin_useful']\n wvmax_useful = wv_parameters['wvmax_useful']\n\n # display arrangement\n if abs(debugplot) >= 10:\n print(\"slit left right center width min.wave max.wave\")\n print(\"==== ======= ======= ======= ===== ======== ========\")\n for i in range(EMIR_NBARS):\n ibar = i + 1\n csu_crval1 = crval1(csu_config.csu_bar_slit_center(ibar))\n csu_cdelt1 = cdelt1(csu_config.csu_bar_slit_center(ibar))\n csu_crvaln = csu_crval1 + (EMIR_NAXIS1 - 1) * csu_cdelt1\n if wvmin_useful is not None:\n csu_crval1 = np.amax([csu_crval1, wvmin_useful])\n if wvmax_useful is not None:\n csu_crvaln = np.amin([csu_crvaln, wvmax_useful])\n print(\"{0:4d} {1:8.3f} {2:8.3f} {3:8.3f} {4:7.3f} \"\n \"{5:8.2f} {6:8.2f}\".format(\n ibar, csu_config.csu_bar_left(ibar),\n csu_config.csu_bar_right(ibar),\n csu_config.csu_bar_slit_center(ibar),\n csu_config.csu_bar_slit_width(ibar),\n 
csu_crval1, csu_crvaln)\n )\n print(\n \"---> {0:8.3f} {1:8.3f} {2:8.3f} {3:7.3f} <- mean (all)\".format(\n np.mean(csu_config._csu_bar_left),\n np.mean(csu_config._csu_bar_right),\n np.mean(csu_config._csu_bar_slit_center),\n np.mean(csu_config._csu_bar_slit_width)\n )\n )\n print(\n \"---> {0:8.3f} {1:8.3f} {2:8.3f} {3:7.3f} <- mean (odd)\".format(\n np.mean(csu_config._csu_bar_left[::2]),\n np.mean(csu_config._csu_bar_right[::2]),\n np.mean(csu_config._csu_bar_slit_center[::2]),\n np.mean(csu_config._csu_bar_slit_width[::2])\n )\n )\n print(\n \"---> {0:8.3f} {1:8.3f} {2:8.3f} {3:7.3f} <- mean (even)\".format(\n np.mean(csu_config._csu_bar_left[1::2]),\n np.mean(csu_config._csu_bar_right[1::2]),\n np.mean(csu_config._csu_bar_slit_center[1::2]),\n np.mean(csu_config._csu_bar_slit_width[1::2])\n )\n )\n\n # display slit arrangement\n if abs(debugplot) % 10 != 0:\n fig = plt.figure()\n set_window_geometry(geometry)\n ax = fig.add_subplot(111)\n if bbox is None:\n if adjust:\n xmin = min(csu_config._csu_bar_left)\n xmax = max(csu_config._csu_bar_right)\n dx = xmax - xmin\n if dx == 0:\n dx = 1\n xmin -= dx/20\n xmax += dx/20\n ax.set_xlim(xmin, xmax)\n else:\n ax.set_xlim(0., 341.5)\n ax.set_ylim(0, 56)\n else:\n ax.set_xlim(bbox[0], bbox[1])\n ax.set_ylim(bbox[2], bbox[3])\n ax.set_xlabel('csu_bar_position (mm)')\n ax.set_ylabel('slit number')\n for i in range(EMIR_NBARS):\n ibar = i + 1\n ax.add_patch(patches.Rectangle(\n (csu_config.csu_bar_left(ibar), ibar-0.5),\n csu_config.csu_bar_slit_width(ibar), 1.0))\n ax.plot([0., csu_config.csu_bar_left(ibar)], [ibar, ibar],\n '-', color='gray')\n ax.plot([csu_config.csu_bar_right(ibar), 341.5],\n [ibar, ibar], '-', color='gray')\n plt.title(\"File: \" + fileobj.name + \"\\ngrism=\" + grism +\n \", filter=\" + spfilter)\n pause_debugplot(debugplot, pltshow=True)\n\n # return results\n return csu_config._csu_bar_left, csu_config._csu_bar_right, \\\n csu_config._csu_bar_slit_center, csu_config._csu_bar_slit_width", 
"response": "Display a slitlet arrangement from CSUP keywords in FITS header."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef extract_slitlet2d(self, image_2k2k):\n\n # protections\n naxis2, naxis1 = image_2k2k.shape\n if naxis1 != EMIR_NAXIS1:\n raise ValueError('Unexpected naxis1')\n if naxis2 != EMIR_NAXIS2:\n raise ValueError('Unexpected naxis2')\n\n # extract slitlet region\n slitlet2d = image_2k2k[(self.bb_ns1_orig - 1):self.bb_ns2_orig,\n (self.bb_nc1_orig - 1):self.bb_nc2_orig]\n\n # transform to float\n slitlet2d = slitlet2d.astype(np.float)\n\n # display slitlet2d with boundaries and middle spectrum trail\n if abs(self.debugplot) in [21, 22]:\n self.ximshow_unrectified(slitlet2d)\n\n # return slitlet image\n return slitlet2d", "response": "Extract slitlet 2d image from image with original EMIR dimensions."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef rectify(self, slitlet2d, resampling, inverse=False):\n\n if resampling not in [1, 2]:\n raise ValueError(\"Unexpected resampling value=\" + str(resampling))\n\n # check image dimension\n naxis2, naxis1 = slitlet2d.shape\n if naxis1 != self.bb_nc2_orig - self.bb_nc1_orig + 1:\n raise ValueError(\"Unexpected slitlet2d_rect naxis1\")\n if naxis2 != self.bb_ns2_orig - self.bb_ns1_orig + 1:\n raise ValueError(\"Unexpected slitlet2d_rect naxis2\")\n\n if inverse:\n aij = self.tti_aij\n bij = self.tti_bij\n else:\n aij = self.ttd_aij\n bij = self.ttd_bij\n\n # rectify image\n slitlet2d_rect = rectify2d(\n image2d=slitlet2d,\n aij=aij,\n bij=bij,\n resampling=resampling\n )\n\n if abs(self.debugplot % 10) != 0:\n if inverse:\n self.ximshow_unrectified(slitlet2d_rect)\n else:\n self.ximshow_rectified(slitlet2d_rect)\n\n return slitlet2d_rect", "response": "Rectify a 2d slitlet image."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ndisplaying unrectified image with spectrails 
and frontiers.", "response": "def ximshow_unrectified(self, slitlet2d):\n \"\"\"Display unrectified image with spectrails and frontiers.\n\n Parameters\n ----------\n slitlet2d : numpy array\n Array containing the unrectified slitlet image.\n\n \"\"\"\n\n title = \"Slitlet#\" + str(self.islitlet)\n ax = ximshow(slitlet2d, title=title,\n first_pixel=(self.bb_nc1_orig, self.bb_ns1_orig),\n show=False)\n xdum = np.linspace(1, EMIR_NAXIS1, num=EMIR_NAXIS1)\n ylower = self.list_spectrails[0](xdum)\n ax.plot(xdum, ylower, 'b-')\n ymiddle = self.list_spectrails[1](xdum)\n ax.plot(xdum, ymiddle, 'b--')\n yupper = self.list_spectrails[2](xdum)\n ax.plot(xdum, yupper, 'b-')\n ylower_frontier = self.list_frontiers[0](xdum)\n ax.plot(xdum, ylower_frontier, 'b:')\n yupper_frontier = self.list_frontiers[1](xdum)\n ax.plot(xdum, yupper_frontier, 'b:')\n pause_debugplot(debugplot=self.debugplot, pltshow=True)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef ximshow_rectified(self, slitlet2d_rect):\n\n title = \"Slitlet#\" + str(self.islitlet) + \" (rectify)\"\n ax = ximshow(slitlet2d_rect, title=title,\n first_pixel=(self.bb_nc1_orig, self.bb_ns1_orig),\n show=False)\n # grid with fitted transformation: spectrum trails\n xx = np.arange(0, self.bb_nc2_orig - self.bb_nc1_orig + 1,\n dtype=np.float)\n for spectrail in self.list_spectrails:\n yy0 = self.corr_yrect_a + \\\n self.corr_yrect_b * spectrail(self.x0_reference)\n yy = np.tile([yy0 - self.bb_ns1_orig], xx.size)\n ax.plot(xx + self.bb_nc1_orig, yy + self.bb_ns1_orig, \"b\")\n for spectrail in self.list_frontiers:\n yy0 = self.corr_yrect_a +\\\n self.corr_yrect_b * spectrail(self.x0_reference)\n yy = np.tile([yy0 - self.bb_ns1_orig], xx.size)\n ax.plot(xx + self.bb_nc1_orig, yy + self.bb_ns1_orig, \"b:\")\n # show plot\n pause_debugplot(self.debugplot, pltshow=True)", "response": "Display rectified image with spectrails and frontiers."} {"SOURCE": "codesearchnet", 
"instruction": "Given the following Python 3 function, write the documentation\ndef info(self, token):\n verifier = JWT(key_jar=self.key_jar, allowed_sign_algs=[self.alg])\n _payload = verifier.unpack(token)\n if is_expired(_payload['exp']):\n raise ToOld('Token has expired')\n return _payload['sid'], _payload['ttype']", "response": "Return the type of Token and session id."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef is_expired(self, token, when=0):\n verifier = JWT(key_jar=self.key_jar, allowed_sign_algs=[self.alg])\n _payload = verifier.unpack(token)\n return is_expired(_payload['exp'], when)", "response": "Evaluate whether the token has expired or not."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\nasync def create_websocket(url: str,\n ssl: Optional[SSLContext] = None,\n headers: Optional[list] = None,\n subprotocols: Optional[list] = None):\n \"\"\"\n A more low-level form of open_websocket.\n You are responsible for closing this websocket.\n \"\"\"\n url = yarl.URL(url)\n args = {}\n if headers:\n args[\"headers\"] = headers\n\n # automatically use ssl if it's websocket secure\n if ssl is None:\n ssl = url.scheme == \"wss\"\n if ssl:\n if ssl is True:\n ssl = SSLContext()\n args[\"ssl_context\"] = ssl\n args[\"autostart_tls\"] = True\n args[\"tls_standard_compatible\"] = False\n\n addr = (url.host, int(url.port))\n ws = Websocket()\n await ws.__ainit__(\n addr=addr, path=url.path_qs, subprotocols=subprotocols, **args)\n return ws", "response": "Create a new websocket."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nopen a websocket on top of a socket.", "response": "async def open_websocket_client(sock: anyio.abc.SocketStream,\n addr,\n path: str,\n headers: Optional[list] = None,\n subprotocols: Optional[list] = None):\n \"\"\"Create a websocket on top of a socket.\"\"\"\n ws = await create_websocket_client(\n sock, 
addr=addr, path=path, headers=headers, subprotocols=subprotocols)\n try:\n yield ws\n finally:\n await ws.close()"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncreates a new websocket client.", "response": "async def create_websocket_client(sock: anyio.abc.SocketStream,\n addr,\n path: str,\n headers: Optional[List] = None,\n subprotocols: Optional[List[str]] = None):\n \"\"\"\n A more low-level form of create_websocket_client.\n You are responsible for closing this websocket.\n \"\"\"\n ws = Websocket()\n await ws.start_client(\n sock, addr=addr, path=path, headers=headers, subprotocols=subprotocols)\n return ws"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\niterate over the MAC addresses given as string.", "response": "def range_mac(mac_start, mac_end, step=1):\n \"\"\"Iterate over mac addresses (given as string).\"\"\"\n start = int(EUI(mac_start))\n end = int(EUI(mac_end))\n for i_mac in range(start, end, step):\n mac = EUI(int(EUI(i_mac)) + 1)\n ip = ['10'] + [str(int(i, 2)) for i in mac.bits().split('-')[-3:]]\n yield str(mac).replace('-', ':'), '.'.join(ip)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngets all rows of the query. Raises a NotFoundError if there are 0 rows.", "response": "def _all(self, query):\n \"\"\"Gets all rows of the query. Raises a NotFoundError if there are 0 rows.\"\"\"\n if query.count() > 0:\n results = query.all()\n for result in results:\n if result.has_expired(self._expirations):\n raise NotFoundError\n return results\n else:\n raise NotFoundError"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nput an item into the database. Updates lastUpdate column", "response": "def _put(self, item: SQLBaseObject):\n \"\"\"Puts an item into the database. 
Updates lastUpdate column\"\"\"\n if item._dto_type in self._expirations and self._expirations[item._dto_type] == 0:\n # The expiration time has been set to 0 -> should not be cached\n return\n item.updated()\n self._session().merge(item)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _put_many(self, items: Iterable[DtoObject], cls):\n if cls._dto_type in self._expirations and self._expirations[cls._dto_type] == 0:\n # The expiration time has been set to 0 -> should not be cached\n return\n session = self._session\n for item in items:\n item = cls(**item)\n item.updated()\n session.merge(item)", "response": "Puts many items into the database. Updates lastUpdate column for each of them."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning only those claims that are asked for.", "response": "def filter(self, userinfo, user_info_claims=None):\n \"\"\"\n Return only those claims that are asked for.\n It's a best effort task; if essential claims are not present\n no error is flagged.\n\n :param userinfo: A dictionary containing the available info for one user\n :param user_info_claims: A dictionary specifying the asked for claims\n :return: A dictionary of filtered claims.\n \"\"\"\n\n if user_info_claims is None:\n return copy.copy(userinfo)\n else:\n result = {}\n missing = []\n optional = []\n for key, restr in user_info_claims.items():\n try:\n result[key] = userinfo[key]\n except KeyError:\n if restr == {\"essential\": True}:\n missing.append(key)\n else:\n optional.append(key)\n return result"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef from_data(room, conn, data):\n files = list()\n rooms = dict()\n msg = str()\n\n for part in data[\"message\"]:\n ptype = part[\"type\"]\n if ptype == \"text\":\n val = part[\"value\"]\n msg += val\n elif ptype == \"break\":\n msg += \"\\n\"\n elif 
ptype == \"file\":\n fileid = part[\"id\"]\n fileobj = room.filedict.get(fileid)\n if fileobj:\n files += (fileobj,)\n fileid = f\"@{fileid}\"\n msg += fileid\n elif ptype == \"room\":\n roomid = part[\"id\"]\n rooms[roomid] = part[\"name\"]\n roomid = f\"#{roomid}\"\n msg += roomid\n elif ptype == \"url\":\n msg += part[\"text\"]\n elif ptype == \"raw\":\n msg += html_to_text(part[\"value\"])\n else:\n import warnings\n\n warnings.warn(f\"unknown message type '{ptype}'\", Warning)\n\n nick = data.get(\"nick\") or data.get(\"user\")\n options = data.get(\"options\", dict())\n data = data.get(\"data\", dict())\n\n message = ChatMessage(\n room,\n conn,\n nick,\n msg,\n roles=Roles.from_options(options),\n options=options,\n data=data,\n files=files,\n rooms=rooms,\n )\n return message", "response": "Construct a ChatMessage instance from raw protocol data."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef create_rot2d(angle):\n ca = math.cos(angle)\n sa = math.sin(angle)\n return np.array([[ca, -sa], [sa, ca]])", "response": "Create 2D rotation matrix"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef comp_centroid(data, bounding_box, debug_plot=False, plot_reference=None, logger=None):\n\n from matplotlib.patches import Ellipse\n\n if logger is None:\n logger = logging.getLogger(__name__)\n\n region = bounding_box.slice\n ref_x = region[1].start\n ref_y = region[0].start\n logger.debug('region offset is %s, %s', ref_x, ref_y)\n subimage = data[region].copy()\n bkg = sep.Background(subimage)\n data_sub = subimage - bkg\n objects = sep.extract(data_sub, 1.5, err=bkg.globalrms)\n # Select brightest object\n logger.debug('%d objects found', len(objects))\n\n if len(objects) == 0:\n # print('No objects')\n return None\n\n iadx = objects['flux'].argmax()\n # plot background-subtracted image\n maxflux = objects[iadx]\n\n if debug_plot:\n fig, ax = plt.subplots()\n m, s = 
np.mean(data_sub), np.std(data_sub)\n ax.imshow(data_sub, interpolation='nearest', cmap='gray',\n vmin=m - s, vmax=m + s, origin='lower',\n extent=bounding_box.extent)\n if plot_reference:\n e = Ellipse(xy=(plot_reference[0], plot_reference[1]),\n width=6,\n height=6,\n angle=0)\n e.set_facecolor('none')\n e.set_edgecolor('green')\n ax.add_artist(e)\n\n # plot an ellipse for each object\n for idx, obj in enumerate(objects):\n e = Ellipse(xy=(obj['x'] + ref_x, obj['y'] + ref_y),\n width=6 * obj['a'],\n height=6 * obj['b'],\n angle=obj['theta'] * 180. / np.pi)\n e.set_facecolor('none')\n if idx == iadx:\n e.set_edgecolor('blue')\n else:\n e.set_edgecolor('red')\n ax.add_artist(e)\n return maxflux['x'], maxflux['y'], ax\n else:\n return maxflux['x'], maxflux['y']", "response": "Detect objects in a region and return the brightest one."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef sign_encrypt(self, session_info, client_id, code=None,\n access_token=None, user_info=None, sign=True,\n encrypt=False, extra_claims=None):\n \"\"\"\n Sign and/or encrypt an IDToken\n\n :param session_info: Session information\n :param client_id: Client ID\n :param code: Access grant\n :param access_token: Access Token\n :param user_info: User information\n :param sign: If the JWT should be signed\n :param encrypt: If the JWT should be encrypted\n :param extra_claims: Extra claims to be added to the ID Token\n :return: IDToken as a signed and/or encrypted JWT\n \"\"\"\n\n _cntx = self.endpoint_context\n\n client_info = _cntx.cdb[client_id]\n alg_dict = get_sign_and_encrypt_algorithms(_cntx, client_info,\n 'id_token', sign=sign,\n encrypt=encrypt)\n\n _authn_event = session_info['authn_event']\n\n _idt_info = self.payload(session_info,\n acr=_authn_event[\"authn_info\"],\n alg=alg_dict['sign_alg'], code=code,\n access_token=access_token, user_info=user_info,\n auth_time=_authn_event[\"authn_time\"],\n extra_claims=extra_claims)\n\n _jwt = 
JWT(_cntx.keyjar, iss=_cntx.issuer,\n lifetime=_idt_info['lifetime'], **alg_dict)\n\n return _jwt.pack(_idt_info['payload'], recv=client_id)", "response": "Sign and encrypt an IDToken"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef add(self, src_cls, module, package=None):\n # import module containing the layer class\n mod = importlib.import_module(module, package)\n # get layer class definition from the module\n self.sources[src_cls] = getattr(mod, src_cls)", "response": "Add layer class to model."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nadd a data source to the model.", "response": "def add(self, data_source, module, package=None):\n \"\"\"\n Add data_source to model. Tries to import module, then looks for data\n source class definition.\n\n :param data_source: Name of data source to add.\n :type data_source: str\n :param module: Module in which data source resides. Can be absolute or\n relative. See :func:`importlib.import_module`\n :type module: str\n :param package: Optional, but must be used if module is relative.\n :type package: str\n\n .. 
seealso::\n :func:`importlib.import_module`\n \"\"\"\n super(Data, self).add(data_source, module, package)\n # only update layer info if it is missing!\n if data_source not in self.layer:\n # copy data source parameters to :attr:`Layer.layer`\n self.layer[data_source] = {'module': module, 'package': package}\n # add a place holder for the data source object when it's constructed\n self.objects[data_source] = None"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nopen filename to get data for a specific data source.", "response": "def open(self, data_source, *args, **kwargs):\n \"\"\"\n Open filename to get data for data_source.\n\n :param data_source: Data source for which the file contains data.\n :type data_source: str\n\n Positional and keyword arguments can contain either the data to use for\n the data source or the full path of the file which contains data for the\n data source.\n \"\"\"\n if self.sources[data_source]._meta.data_reader.is_file_reader:\n filename = kwargs.get('filename')\n path = kwargs.get('path', '')\n rel_path = kwargs.get('rel_path', '')\n if len(args) > 0:\n filename = args[0]\n if len(args) > 1:\n path = args[1]\n if len(args) > 2:\n rel_path = args[2]\n args = ()\n kwargs = {'filename': os.path.join(rel_path, path, filename)}\n LOGGER.debug('filename: %s', kwargs['filename'])\n # call constructor of data source with filename argument\n self.objects[data_source] = self.sources[data_source](*args, **kwargs)\n # register data and uncertainty in registry\n data_src_obj = self.objects[data_source]\n meta = [getattr(data_src_obj, m) for m in self.reg.meta_names]\n self.reg.register(data_src_obj.data, *meta)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef load(self, rel_path=None):\n for k, v in self.layer.iteritems():\n self.add(k, v['module'], v.get('package'))\n filename = v.get('filename')\n path = v.get('path')\n if filename:\n # 
default path for data is in ../data\n if not path:\n path = rel_path\n else:\n path = os.path.join(rel_path, path)\n # filename can be a list or a string, concatenate list with\n # os.pathsep and append the full path to strings.\n if isinstance(filename, basestring):\n filename = os.path.join(path, filename)\n else:\n file_list = [os.path.join(path, f) for f in filename]\n filename = os.path.pathsep.join(file_list)\n self.open(k, filename)", "response": "Load all data sources from the layer and open files with data for the data_source."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef edit(self, data_src, value):\n # check if opening file\n if 'filename' in value:\n items = [k for k, v in self.reg.data_source.iteritems() if\n v == data_src]\n self.reg.unregister(items) # remove items from Registry\n # open file and register new data\n self.open(data_src, value['filename'], value.get('path'))\n self.layer[data_src].update(value)", "response": "Edit data layer.\n\n :param data_src: Name of :class:`DataSource` to edit.\n :type data_src: str\n :param value: Values to edit.\n :type value: dict"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef add(self, formula, module, package=None):\n super(Formulas, self).add(formula, module, package)\n # only update layer info if it is missing!\n if formula not in self.layer:\n # copy formula source parameters to :attr:`Layer.layer`\n self.layer[formula] = {'module': module, 'package': package}\n self.objects[formula] = self.sources[formula]()\n # register formula and linearity in registry\n formula_src_obj = self.objects[formula]\n meta = [getattr(formula_src_obj, m) for m in self.reg.meta_names]\n self.reg.register(formula_src_obj.formulas, *meta)", "response": "Adds a formula to the registry."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef 
load(self, _=None):\n for k, v in self.layer.iteritems():\n self.add(k, v['module'], v.get('package'))", "response": "Load the current set of class names from the layer."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef add(self, calc, module, package=None):\n super(Calculations, self).add(calc, module, package)\n # only update layer info if it is missing!\n if calc not in self.layer:\n # copy calc source parameters to :attr:`Layer.layer`\n self.layer[calc] = {'module': module, 'package': package}\n # instantiate the calc object\n self.objects[calc] = self.sources[calc]()\n # register calc and dependencies in registry\n calc_src_obj = self.objects[calc]\n meta = [getattr(calc_src_obj, m) for m in self.reg.meta_names]\n self.reg.register(calc_src_obj.calcs, *meta)", "response": "Add a new calc to the layer."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nadds output to the internal dictionary.", "response": "def add(self, output, module, package=None):\n \"\"\"\n Add output to model.\n \"\"\"\n super(Outputs, self).add(output, module, package)\n # only update layer info if it is missing!\n if output not in self.layer:\n # copy output source parameters to :attr:`Layer.layer`\n self.layer[output] = {'module': module, 'package': package}\n # instantiate the output object\n self.objects[output] = self.sources[output]()\n # register outputs and meta-data in registry\n out_src_obj = self.objects[output]\n meta = [getattr(out_src_obj, m) for m in self.reg.meta_names]\n self.reg.register(out_src_obj.outputs, *meta)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef add(self, sim, module, package=None):\n super(Simulations, self).add(sim, module, package)\n # only update layer info if it is missing!\n if sim not in self.layer:\n # copy simulation source parameters to :attr:`Layer.layer`\n self.layer[sim] = {'module': module, 'package': 
package}", "response": "Add simulation to layer."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef load(self, rel_path=None):\n for k, v in self.layer.iteritems():\n self.add(k, v['module'], v.get('package'))\n filename = v.get('filename')\n path = v.get('path')\n if filename:\n warnings.warn(DeprecationWarning(SIMFILE_LOAD_WARNING))\n # default path for data is in ../simulations\n if not path:\n path = rel_path\n else:\n path = os.path.join(rel_path, path)\n filename = os.path.join(path, filename)\n self.open(k, filename)", "response": "Load all the sim_src files from the file system."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ncalculating solar position for lat / long at times.", "response": "def f_solpos(times, latitude, longitude):\n \"\"\"\n Calculate solar position for lat/long at times.\n\n :param times: Python :class:`datetime.datetime` object.\n :type times: list\n :param latitude: latitude [degrees]\n :type latitude: float\n :param longitude: longitude [degrees]\n :type longitude: float\n :returns: apparent zenith, azimuth\n \"\"\"\n # pvlib converts Python datetime objects to pandas DatetimeIndex\n solpos = pvlib.solarposition.get_solarposition(times, latitude, longitude)\n # solpos is a pandas DataFrame, so unpack the desired values\n # return shape is (2, NOBS), so unc_wrapper sees 2 dependent variables\n return solpos['apparent_zenith'].values, solpos['azimuth'].values"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef f_total_irrad(times, surface_tilt, surface_azimuth, solar_zenith,\n solar_azimuth, dni, ghi, dhi, dni_extra, am_abs,\n model='haydavies'):\n \"\"\"\n Calculate total irradiance\n\n :param times: timestamps\n :param surface_tilt: panel tilt from horizontal [deg]\n :param surface_azimuth: panel azimuth from north [deg]\n :param solar_zenith: refracted solar zenith angle 
[deg]\n :param solar_azimuth: solar azimuth [deg]\n :param dni: direct normal irradiance [W/m**2]\n :param ghi: global horizontal irradiance [W/m**2]\n :param dhi: diffuse horizontal irradiance [W/m**2]\n :param dni_extra: extraterrestrial irradiance [W/m**2]\n :param am_abs: absolute airmass [dimensionless]\n :param model: irradiance model name, default is ``'haydavies'``\n :type model: str\n :return: global, direct and diffuse plane of array irradiance [W/m**2]\n \"\"\"\n am_abs = am_abs.squeeze()\n # make a DataFrame for time series arguments\n df = pd.DataFrame(\n {'solar_zenith': solar_zenith, 'solar_azimuth': solar_azimuth,\n 'dni': dni, 'ghi': ghi, 'dhi': dhi, 'dni_extra': dni_extra,\n 'am_abs': am_abs},\n index=times\n )\n # calculate total irradiance using PVLIB\n total_irrad = pvlib.irradiance.total_irrad(\n surface_tilt, surface_azimuth, df['solar_zenith'], df['solar_azimuth'],\n df['dni'], df['ghi'], df['dhi'], dni_extra=df['dni_extra'],\n airmass=df['am_abs'], model=model\n ).fillna(0.0)\n # convert to ndarrays\n poa_global = total_irrad['poa_global'].values\n poa_direct = total_irrad['poa_direct'].values\n poa_diffuse = total_irrad['poa_diffuse'].values\n return poa_global, poa_direct, poa_diffuse", "response": "Calculate global, direct and diffuse plane-of-array irradiance."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef rescale_array_to_z1z2(array, z1z2=(-1.0, 1.0)):\n\n if type(array) is not np.ndarray:\n raise ValueError(\"array=\" + str(array) + \" must be a numpy.ndarray\")\n\n array_min = array.min()\n array_max = array.max()\n\n z1, z2 = z1z2\n delta = array_max - array_min\n b_flux = (z2 - z1) / delta\n c_flux = (z2 * array_min - z1 * array_max) / delta\n\n array_rs = b_flux * array - c_flux\n\n return array_rs, (b_flux, c_flux)", "response": 
"Rescale the values in a numpy array to the [z1,z2] interval."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nrestoring the values in a numpy array rescaled to the [z1,z2] interval.", "response": "def rescale_array_from_z1z2(array_rs, coef_rs=None):\n \"\"\"Restore the values in a numpy array rescaled to the [z1,z2] interval.\n\n The transformation is carried out following the relation\n array = (array_rs + c_flux)/b_flux\n as explained in Appendix B1 of Cardiel (2009, MNRAS, 396, 680)\n\n Parameters\n ----------\n array_rs : numpy array\n Numpy array previously rescaled to the [z1,z2] interval with\n the function rescale_array_to_z1z2().\n coef_rs : tuple, floats\n Coefficients b_flux and c_flux previously employed in the\n rescaling operation. This tuple is one of the parameters\n returned by the function rescale_array_to_z1z2().\n\n Returns\n -------\n array : numpy array\n Array with restored values.\n\n \"\"\"\n\n if type(array_rs) is not np.ndarray:\n raise ValueError(\n \"array_rs=\" + str(array_rs) + \" must be a numpy.ndarray\")\n\n b_flux, c_flux = coef_rs\n\n array = (array_rs + c_flux) / b_flux\n\n return array"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef with_meta(class_to_decorate=None, add_init_kwargs=True):\n\n if class_to_decorate is None:\n return functools.partial(with_meta, add_init_kwargs=add_init_kwargs)\n\n if add_init_kwargs:\n def get_extra_args_function(self):\n return {k: v for k, v in self.get_meta().items() if not k.startswith('_')}\n\n add_args_to_init_call(class_to_decorate, get_extra_args_function)\n\n setattr(class_to_decorate, 'get_meta', classmethod(get_meta))\n\n return class_to_decorate", "response": "A class decorator to add Meta class attributes to the base class."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_meta(cls):\n merged_attributes = Struct()\n for class_ in 
reversed(cls.mro()):\n if hasattr(class_, 'Meta'):\n for key, value in class_.Meta.__dict__.items():\n merged_attributes[key] = value\n return merged_attributes", "response": "Returns a Struct containing all members of any contained meta class declarations from the given class or any of its base classes."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nclassing decorator that ensures that instances will be ordered after creation order when sorted. :type class_to_decorate: class :rtype: class", "response": "def creation_ordered(class_to_decorate):\n \"\"\"\n Class decorator that ensures that instances will be ordered after creation order when sorted.\n\n :type class_to_decorate: class\n :rtype: class\n \"\"\"\n\n next_index = functools.partial(next, itertools.count())\n\n __init__orig = class_to_decorate.__init__\n\n @functools.wraps(__init__orig, assigned=['__doc__'])\n def __init__(self, *args, **kwargs):\n object.__setattr__(self, '_index', next_index())\n __init__orig(self, *args, **kwargs)\n\n setattr(class_to_decorate, '__init__', __init__)\n\n # noinspection PyProtectedMember\n def __lt__(self, other):\n return self._index < other._index # pragma: no mutate\n\n setattr(class_to_decorate, '__lt__', __lt__)\n\n class_to_decorate = functools.total_ordering(class_to_decorate)\n\n return class_to_decorate"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn an ordered list of all class level attributes that match the given criteria.", "response": "def get_members(cls, member_class=None, is_member=None, sort_key=None, _parameter=None):\n \"\"\"\n Collect all class level attributes matching the given criteria.\n\n :param class member_class: Class(es) to collect\n :param is_member: Function to determine if an object should be collected\n :param sort_key: Function to invoke on members to obtain ordering (Default is to use ordering from `creation_ordered`)\n\n :type is_member: (object) -> bool\n :type 
sort_key: (object) -> object\n \"\"\"\n if member_class is None and is_member is None:\n raise TypeError(\"get_members either needs a member_class parameter or an is_member check function (or both)\")\n\n members = OrderedDict()\n for base in cls.__bases__:\n if _parameter is None:\n inherited_members = get_members(base, member_class=member_class, is_member=is_member, sort_key=sort_key)\n else:\n # When user by @declarative, only traverse up the class inheritance to the decorated class.\n inherited_members = get_declared(base, _parameter)\n members.update(inherited_members)\n\n def generate_member_bindings():\n for name in cls.__dict__:\n if name.startswith('__'):\n continue\n obj = getattr(cls, name)\n if member_class is not None and isinstance(obj, member_class):\n yield name, obj\n elif is_member is not None and is_member(obj):\n yield name, obj\n elif type(obj) is tuple and len(obj) == 1 and isinstance(obj[0], member_class):\n raise TypeError(\"'%s' is a one-tuple containing what we are looking for. Trailing comma much? Don't... just don't.\" % name) # pragma: no mutate\n\n bindings = generate_member_bindings()\n\n if sort_key is not None:\n try:\n sorted_bindings = sorted(bindings, key=lambda x: sort_key(x[1]))\n except AttributeError:\n if sort_key is default_sort_key:\n raise TypeError('Missing member ordering definition. 
Use @creation_ordered or specify sort_key')\n else: # pragma: no coverage\n raise\n members.update(sorted_bindings)\n else:\n members.update(bindings)\n\n return members"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef declarative(member_class=None, parameter='members', add_init_kwargs=True, sort_key=default_sort_key, is_member=None):\n if member_class is None and is_member is None:\n raise TypeError(\"The @declarative decorator needs either a member_class parameter or an is_member check function (or both)\")\n\n def decorator(class_to_decorate):\n class DeclarativeMeta(class_to_decorate.__class__):\n # noinspection PyTypeChecker\n def __init__(cls, name, bases, dict):\n members = get_members(cls, member_class=member_class, is_member=is_member, sort_key=sort_key, _parameter=parameter)\n set_declared(cls, members, parameter)\n super(DeclarativeMeta, cls).__init__(name, bases, dict)\n\n new_class = DeclarativeMeta(class_to_decorate.__name__,\n class_to_decorate.__bases__,\n {k: v for k, v in class_to_decorate.__dict__.items() if k not in ['__dict__', '__weakref__']})\n\n def get_extra_args_function(self):\n declared = get_declared(self, parameter)\n\n def copy_declared():\n for k, v in declared.items():\n try:\n v = copy(v)\n except TypeError:\n pass # Not always possible to copy methods\n yield (k, v)\n\n copied_members = OrderedDict(copy_declared())\n self.__dict__.update(copied_members)\n return {parameter: copied_members}\n\n if add_init_kwargs:\n add_args_to_init_call(new_class, get_extra_args_function)\n else:\n add_init_call_hook(new_class, get_extra_args_function)\n\n setattr(new_class, 'get_declared', classmethod(get_declared))\n setattr(new_class, 'set_declared', classmethod(set_declared))\n\n return new_class\n\n return decorator", "response": "A class decorator that collects declared class-level members and passes them to the class constructor as a keyword argument."} {"SOURCE": "codesearchnet", "instruction": "Can you 
implement a function in Python 3 that\nreturns the signature of the function.", "response": "def get_signature(func):\n \"\"\"\n :type func: Callable\n :rtype: str\n \"\"\"\n try:\n return object.__getattribute__(func, '__tri_declarative_signature')\n except AttributeError:\n pass\n\n try:\n if sys.version_info[0] < 3: # pragma: no mutate\n names, _, varkw, defaults = inspect.getargspec(func) # pragma: no mutate\n else:\n names, _, varkw, defaults, _, _, _ = inspect.getfullargspec(func) # pragma: no coverage\n except TypeError:\n return None\n\n first_arg_index = 1 if inspect.ismethod(func) else 0 # Skip self argument on methods\n\n number_of_defaults = len(defaults) if defaults else 0\n if number_of_defaults > 0:\n required = ','.join(sorted(names[first_arg_index:-number_of_defaults]))\n optional = ','.join(sorted(names[-number_of_defaults:]))\n else:\n required = ','.join(sorted(names[first_arg_index:]))\n optional = ''\n wildcard = '*' if varkw is not None else ''\n\n signature = '|'.join((required, optional, wildcard))\n try:\n object.__setattr__(func, '__tri_declarative_signature', signature)\n except TypeError:\n # For classes\n type.__setattr__(func, '__tri_declarative_signature', signature)\n except AttributeError:\n pass\n return signature"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef collect_namespaces(values):\n namespaces = {}\n result = dict(values)\n for key, value in values.items():\n parts = key.split('__', 1) # pragma: no mutate\n if len(parts) == 2:\n prefix, name = parts\n if prefix not in namespaces:\n initial_namespace = values.get(prefix)\n if initial_namespace is None:\n initial_namespace = {}\n elif not isinstance(initial_namespace, dict):\n initial_namespace = {initial_namespace: True}\n namespaces[prefix] = initial_namespace\n namespaces[prefix][name] = result.pop(key)\n for prefix, namespace in namespaces.items():\n result[prefix] = namespace\n return result", 
"response": "Collect the namespace mappings from a dictionary of values."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef extract_subkeys(kwargs, prefix, defaults=None):\n\n prefix += '__'\n result = {k[len(prefix):]: v for k, v in kwargs.items() if k.startswith(prefix)}\n if defaults is not None:\n return setdefaults(result, defaults)\n else:\n return result", "response": "Extract mappings of the shape foo__bar baz 3 etc."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nsetting the default values of the object d to the values of d2.", "response": "def setdefaults(d, d2):\n \"\"\"\n :type d: dict\n :type d2: dict\n :rtype: dict\n \"\"\"\n for k, v in d2.items():\n d.setdefault(k, v)\n return d"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef getattr_path(obj, path):\n path = path.split('__')\n for name in path:\n obj = getattr(obj, name)\n if obj is None:\n return None\n return obj", "response": "Get an attribute path as defined by a string separated by __."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nsetting an attribute path as defined by a string separated by __.", "response": "def setattr_path(obj, path, value):\n \"\"\"\n Set an attribute path, as defined by a string separated by '__'.\n setattr_path(foo, 'a__b__c', value) is equivalent to \"foo.a.b.c = value\".\n \"\"\"\n path = path.split('__')\n o = obj\n for name in path[:-1]:\n o = getattr(o, name)\n setattr(o, path[-1], value)\n return obj"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ngenerate the. 
rst files for the current node.", "response": "def generate_rst_docs(directory, classes, missing_objects=None):\n \"\"\"\n Generate documentation for tri.declarative APIs\n\n :param directory: directory to write the .rst files into\n :param classes: list of classes to generate documentation for\n :param missing_objects: tuple of objects to count as missing markers, if applicable\n \"\"\"\n\n doc_by_filename = _generate_rst_docs(classes=classes, missing_objects=missing_objects) # pragma: no mutate\n for filename, doc in doc_by_filename: # pragma: no mutate\n with open(directory + filename, 'w') as f2: # pragma: no mutate\n f2.write(doc)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef reg_copy(reg, keys=None):\n if keys is None:\n keys = reg.keys()\n reg_cls = type(reg)\n new_reg = reg_cls()\n mk = {} # empty dictionary for meta keys\n # loop over registry meta names\n for m in reg_cls.meta_names:\n mstar = getattr(reg, m, None) # current value of metakey in registry\n if not mstar:\n # if there is no value, the value is empty or None, set it to None\n # it's never false or zero, should be dictionary of reg items\n mk[m] = None\n continue\n mk[m] = {} # empty dictionary of registry meta\n # loop over keys to copy and set values of meta keys for each reg item\n for k in keys:\n kstar = mstar.get(k)\n # if key exists in registry meta and is not None, then copy it\n if kstar is not None:\n mk[m][k] = kstar\n new_reg.register({k: reg[k] for k in keys}, **mk)\n return new_reg", "response": "Make a copy of a subset of a registry."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngetting the covariance matrix for the given data and output variables.", "response": "def get_covariance(datargs, outargs, vargs, datvar, outvar):\n \"\"\"\n Get covariance matrix.\n\n :param datargs: data arguments\n :param outargs: output arguments\n :param vargs: variable arguments\n :param datvar: variance 
of data arguments\n :param outvar: variance of output arguments\n :return: covariance\n \"\"\"\n # number of formula arguments that are not constant\n argn = len(vargs)\n # number of observations must be the same for all vargs\n nobs = 1\n c = []\n # FIXME: can just loop ver varg, don't need indices I think, do we?\n for m in xrange(argn):\n a = vargs[m] # get the variable formula arg in vargs at idx=m\n try:\n a = datargs[a] # get the calculation data arg\n except (KeyError, TypeError):\n a = outargs[a] # get the calculation output arg\n if not isinstance(a, basestring):\n # calculation arg might be sequence (arg, idx, [unit])\n a = a[0] # if a is a sequence, get just the arg from a[0]\n LOGGER.debug('using output variance key: %r', a)\n avar = outvar[a] # get variance from output registry\n else:\n if not isinstance(a, basestring):\n # calculation arg might be sequence (arg, idx, [unit])\n a = a[0] # if a is a sequence, get just the arg from a[0]\n LOGGER.debug('using data variance key: %r', a)\n avar = datvar[a] # get variance from data registry\n d = []\n # avar is a dictionary with the variance of \"a\" vs all other vargs\n for n in xrange(argn):\n # FIXME: just get all of the calculation args one time\n b = vargs[n] # get the variable formula arg in vargs at idx=n\n try:\n b = datargs[b] # get the calculation data arg\n except (KeyError, TypeError):\n b = outargs[b] # get variance from output registry\n if not isinstance(b, basestring):\n # calculation arg might be sequence (arg, idx, [unit])\n b = b[0] # if a is a sequence, get just the arg from b[0]\n LOGGER.debug('using variance key: %r', b)\n d.append(avar.get(b, 0.0)) # add covariance to sequence\n # figure out number of observations from longest covariance\n # only works if nobs is in one of the covariance\n # fails if nobs > 1, but covariance shape doesn't have nobs!!!\n # eg: if variance for data is uniform for all observations!!!\n try:\n nobs = max(nobs, len(d[-1]))\n except (TypeError, 
ValueError):\n LOGGER.debug('c of %s vs %s = %g', a, b, d[-1])\n LOGGER.debug('d:\\n%r', d)\n c.append(d)\n # covariance matrix is initially zeros\n cov = np.zeros((nobs, argn, argn))\n # loop over arguments in both directions, fill in covariance\n for m in xrange(argn):\n d = c.pop()\n LOGGER.debug('pop row %d:\\n%r', argn-1-m, d)\n for n in xrange(argn):\n LOGGER.debug('pop col %d:\\n%r', argn - 1 - n, d[-1])\n cov[:, argn-1-m, argn-1-n] = d.pop()\n if nobs == 1:\n cov = cov.squeeze() # squeeze out any extra dimensions\n LOGGER.debug('covariance:\\n%r', cov)\n return cov"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncalculate the repeat arguments for a specific time period.", "response": "def calculate(self, calc, formula_reg, data_reg, out_reg,\n timestep=None, idx=None):\n \"\"\"\n Calculate looping over specified repeat arguments.\n\n :param calc: Calculation to loop over.\n :param formula_reg: Formula registry\n :param data_reg: Data registry\n :param out_reg: Outputs registry\n :param timestep: timestep used for dynamic calcs\n :param idx: index used in dynamic calcs\n \"\"\"\n # the superclass Calculator.calculate() method\n base_calculator = super(LazyLoopingCalculator, self).calculate\n # call base calculator and return if there are no repeat args\n if not self.repeat_args:\n base_calculator(calc, formula_reg, data_reg, out_reg, timestep, idx)\n return\n # make dictionaries of the calculation data and outputs argument maps\n # this maps what the formulas and registries call the repeats arguments\n data_rargs, out_rargs = {}, {} # allocate dictionaries for repeat args\n calc_data = calc['args'].get('data')\n calc_outs = calc['args'].get('outputs')\n # get dictionaries of repeat args from calculation data and outputs\n for rarg in self.repeat_args:\n # rarg could be either data or output so try both\n try:\n data_rargs[rarg] = calc_data[rarg]\n except (KeyError, TypeError):\n out_rargs[rarg] = calc_outs[rarg]\n 
# get values of repeat data and outputs from registries\n rargs = dict(index_registry(data_rargs, data_reg, timestep, idx),\n **index_registry(out_rargs, out_reg, timestep, idx))\n rargkeys, rargvals = zip(*rargs.iteritems()) # split keys and values\n rargvals = zip(*rargvals) # reshuffle values, should be same size?\n # allocate dictionary of empty numpy arrays for each return value\n returns = calc['returns'] # return keys\n retvals = {rv: [] for rv in returns} # empty dictionary of return vals\n retvalu = {rv: None for rv in returns} # dictionary of return units\n ret_var = {rv: {rv: [] for rv in returns}\n for rv in returns} # variances\n ret_unc = {rv: {rv: [] for rv in returns}\n for rv in returns} # uncertainty\n ret_jac = dict.fromkeys(returns) # jacobian\n # get calc data and outputs keys to copy from registries\n try:\n calc_data_keys = calc_data.values()\n except (AttributeError, TypeError):\n calc_data_keys = [] # if there are no data, leave it empty\n try:\n calc_outs_keys = calc_outs.values()\n except (AttributeError, TypeError):\n calc_outs_keys = [] # if there are no outputs, leave it empty\n # copy returns and this calculations output arguments from output reg\n data_reg_copy = reg_copy(data_reg, calc_data_keys)\n out_reg_copy = reg_copy(out_reg, returns + calc_outs_keys)\n # loop over first repeat arg values and enumerate numpy indices as n\n for vals in rargvals:\n rargs_keys = dict(zip(rargkeys, vals))\n # this is the magic or garbage depending on how you look at it,\n # change the registry copies to only contain the values for this\n # iteration of the repeats\n # TODO: instead of using copies rewrite index_registry to do this\n # copies means that calculations can't use a registry backend that\n # uses shared memory, which will limit ability to run asynchronously\n for k, v in data_rargs.iteritems():\n data_reg_copy[v] = rargs_keys[k]\n for k, v in out_rargs.iteritems():\n out_reg_copy[v] = rargs_keys[k]\n # run base calculator to get retvals, 
var, unc and jac\n base_calculator(calc, formula_reg, data_reg_copy, out_reg_copy,\n timestep, idx)\n # re-assign retvals for this index of repeats\n for rv, rval in retvals.iteritems():\n rval.append(out_reg_copy[rv].m) # append magnitude to returns\n retvalu[rv] = out_reg_copy[rv].u # save units for this repeat\n # re-assign variance for this index of repeats\n if out_reg_copy.variance.get(rv) is None:\n continue\n for rv2, rval2 in ret_var.iteritems():\n rval2[rv].append(out_reg_copy.variance[rv2][rv])\n # uncertainty only on diagonal of variance\n if rv == rv2:\n ret_unc[rv][rv2].append(out_reg_copy.uncertainty[rv][rv2])\n else:\n # FIXME: inefficient to get length every iteration!\n unc_size = len(out_reg_copy.uncertainty[rv][rv])\n ret_unc[rv][rv2].append(Q_([0.]*unc_size, 'percent'))\n # jacobian is dictionary of returns versus arguments\n if ret_jac[rv] is None:\n # first time through create dictionary of sensitivities\n ret_jac[rv] = {o: v for o, v in\n out_reg_copy.jacobian[rv].iteritems()}\n else:\n # next time through, vstack the sensitivities to existing\n for o, v in out_reg_copy.jacobian[rv].iteritems():\n ret_jac[rv][o] = np.vstack((ret_jac[rv][o], v))\n LOGGER.debug('ret_jac:\\n%r', ret_jac)\n # TODO: handle jacobian for repeat args and for dynamic simulations\n # apply units if they were\n for k in retvals:\n if retvalu[k] is not None:\n if retvalu[k] == out_reg[k].u:\n retvals[k] = Q_(retvals[k], retvalu[k])\n else:\n retvals[k] = Q_(retvals[k], retvalu[k]).to(out_reg[k].u)\n # put return values into output registry\n if idx is None:\n out_reg.update(retvals)\n out_reg.variance.update(ret_var)\n out_reg.uncertainty.update(ret_unc)\n out_reg.jacobian.update(ret_jac)\n else:\n for k, v in retvals:\n out_reg[k][idx] = v"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nlistens for changes in all registered listeners in all specified rooms", "response": "def listen_many(*rooms):\n \"\"\"Listen for 
changes in all registered listeners in all specified rooms\"\"\"\n\n rooms = set(r.conn for r in rooms)\n for room in rooms:\n room.validate_listeners()\n with ARBITRATOR.condition:\n while any(r.connected for r in rooms):\n ARBITRATOR.condition.wait()\n rooms = [r for r in rooms if r.run_queues()]\n if not rooms:\n return"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef connect(self, username, checksum, password=None, key=None):\n\n ws_url = (\n f\"{BASE_WS_URL}?room={self.room.room_id}&cs={checksum}&nick={username}\"\n f\"&rn={random_id(6)}&t={int(time.time() * 1000)}&transport=websocket&EIO=3\"\n )\n if password:\n ws_url += f\"&password={password}\"\n elif key:\n ws_url += f\"&key={key}\"\n\n ARBITRATOR.create_connection(\n self.proto, ws_url, self.headers[\"User-Agent\"], self.cookies\n )\n self.__conn_barrier.wait()", "response": "Connect to the websocket through asyncio http interface"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef send_ack(self):\n\n if self.last_ack == self.proto.max_id:\n return\n LOGGER.debug(\"ack (%d)\", self.proto.max_id)\n self.last_ack = self.proto.max_id\n self.send_message(f\"4{to_json([self.proto.max_id])}\")", "response": "Send an ack message to the broker."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef make_call(self, fun, *args):\n\n obj = {\"fn\": fun, \"args\": list(args)}\n obj = [self.proto.max_id, [[0, [\"call\", obj]], self.proto.send_count]]\n self.send_message(f\"4{to_json(obj)}\")\n self.proto.send_count += 1", "response": "Makes a regular API call"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef make_call_with_cb(self, fun, *args):\n\n cid, event = self.handler.register_callback()\n argscp = list(args)\n argscp.append(cid)\n self.make_call(fun, 
*argscp)\n return event", "response": "Makes an API call with a callback to wait for"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef make_api_call(self, call, params):\n\n if not isinstance(params, dict):\n raise ValueError(\"params argument must be a dictionary\")\n kw = dict(\n params=params,\n headers={\"Origin\": BASE_URL, \"Referer\": f\"{BASE_URL}/r/{self.room.name}\"},\n )\n return self.get(BASE_REST_URL + call, **kw).json()", "response": "Make a REST API call"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef reraise(self, ex):\n\n self.exception = ex\n self.process_queues(forced=True)", "response": "Reraise an exception passed by the event thread"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef on_message(self, new_data):\n # https://github.com/socketio/engine.io-protocol\n\n LOGGER.debug(\"new frame [%r]\", new_data)\n try:\n what = int(new_data[0])\n data = new_data[1:]\n data = data and from_json(data)\n if what == 0:\n self.ping_interval = float(data[\"pingInterval\"]) / 1000\n LOGGER.debug(\"adjusted ping interval\")\n return\n\n if what == 1:\n LOGGER.debug(\"received close\")\n self.reraise(IOError(\"Connection closed remotely\"))\n return\n\n if what == 3:\n self.__lastpong = time.time()\n LOGGER.debug(\"received a pong\")\n return\n\n if what == 4:\n self._on_frame(data)\n return\n\n if what == 6:\n LOGGER.debug(\"received noop\")\n self.send_message(\"5\")\n return\n\n LOGGER.debug(\"unhandled message: [%d] [%r]\", what, data)\n except Exception as ex:\n self.reraise(ex)", "response": "Process incoming messages according to engine - io rules."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nadd a listener for specific event type.", "response": "def add_listener(self, event_type, callback):\n \"\"\"Add a listener for specific event type.\n 
You'll need to actually listen for changes using the listen method\"\"\"\n\n if not self.connected:\n # wait for errors set by reraise method\n time.sleep(1)\n if self.exception:\n # pylint: disable=raising-bad-type\n raise self.exception\n raise ConnectionError(f\"{self.room} is not connected\")\n\n thread = get_thread_ident()\n with self.lock:\n listener = self.listeners[thread]\n listener.add(event_type, callback)\n # use \"initial_files\" event to listen for whole filelist on room join\n self.process_queues()"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nprocesses queues if any have data queued", "response": "def process_queues(self, forced=False):\n \"\"\"Process queues if any have data queued\"\"\"\n\n with self.lock:\n if (not forced and not self.must_process) or not self.queues_enabled:\n return\n self.must_process = False\n ARBITRATOR.awaken()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nvalidating that some listeners are actually registered", "response": "def validate_listeners(self):\n \"\"\"Validates that some listeners are actually registered\"\"\"\n\n if self.exception:\n # pylint: disable=raising-bad-type\n raise self.exception\n\n listeners = self.__listeners_for_thread\n if not sum(len(l) for l in listeners):\n raise ValueError(\"No active listeners\")"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nlistens for changes in all registered listeners.", "response": "def listen(self):\n \"\"\"Listen for changes in all registered listeners.\"\"\"\n\n self.validate_listeners()\n with ARBITRATOR.condition:\n while self.connected:\n ARBITRATOR.condition.wait()\n if not self.run_queues():\n break"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef run_queues(self):\n\n if self.exception:\n # pylint: disable=raising-bad-type\n raise self.exception\n listeners = 
self.__listeners_for_thread\n return sum(l.process() for l in listeners) > 0", "response": "Run all queues that have data queued"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nadd gettable and settable room config property during runtime", "response": "def __add_prop(self, key, admin=False):\n \"\"\"Add gettable and settable room config property during runtime\"\"\"\n\n def getter(self):\n return self.config[key]\n\n def setter(self, val):\n if admin and not self.admin:\n raise RuntimeError(\n f\"You can't set the {key} key without mod privileges\"\n )\n self.__set_config_value(self.config.get_real_key(key), val)\n\n setattr(self.__class__, key, property(getter, setter))"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef __set_config_value(self, key, value):\n\n self.check_owner()\n params = {\"room\": self.room_id, \"config\": to_json({key: value})}\n resp = self.conn.make_api_call(\"setRoomConfig\", params)\n if \"error\" in resp:\n raise RuntimeError(f\"{resp['error'].get('message') or resp['error']}\")\n return resp", "response": "Sets a value for a room config"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef listen(self, once=False):\n\n if once:\n # we listen for time event and return false so our\n # run_queues function will be also falsy and break the loop\n self.add_listener(\"time\", lambda _: False)\n return self.conn.listen()", "response": "Listen for changes in all registered listeners."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef __expire_files(self):\n\n self.__files = OrderedDict(\n item for item in self.__files.items() if not item[1].expired\n )", "response": "This method is used to expire the files in the cache."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn 
data about the given user. Returns None if the user does not exist.", "response": "def get_user_stats(self, name):\n \"\"\"Return data about the given user. Returns None if user\n does not exist.\"\"\"\n\n req = self.conn.get(BASE_URL + \"/user/\" + name)\n if req.status_code != 200 or not name:\n return None\n\n return self.conn.make_api_call(\"getUserInfo\", {\"name\": name})"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nposting a msg to this room s chat. Set me = True.", "response": "def post_chat(self, msg, is_me=False, is_a=False):\n \"\"\"Posts a msg to this room's chat. Set me=True if you want to /me\"\"\"\n\n if len(msg) > self.config.max_message:\n raise ValueError(\n f\"Chat message must be at most {self.config.max_message} characters.\"\n )\n while not self.user.nick:\n with ARBITRATOR.condition:\n ARBITRATOR.condition.wait()\n if is_a:\n if not self.admin and not self.staff:\n raise RuntimeError(\"Can't modchat if you're not a mod or trusted\")\n self.conn.make_call(\"command\", self.user.nick, \"a\", msg)\n return\n if is_me:\n self.conn.make_call(\"command\", self.user.nick, \"me\", msg)\n return\n\n self.conn.make_call(\"chat\", self.user.nick, msg)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nuploading a file to this room.", "response": "def upload_file(\n self,\n filename,\n upload_as=None,\n blocksize=None,\n callback=None,\n information_callback=None,\n allow_timeout=False,\n ):\n \"\"\"Uploads a file with given filename to this room.\n You may specify upload_as to change the name it is uploaded as.\n You can also specify a blocksize and a callback if you wish.\n Returns the file's id on success and None on failure.\"\"\"\n\n with delayed_close(\n filename if hasattr(filename, \"read\") else open(filename, \"rb\")\n ) as file:\n filename = upload_as or os.path.split(filename)[1]\n try:\n file.seek(0, 2)\n if file.tell() > 
self.config.max_file:\n raise ValueError(\n f\"File must be at most {self.config.max_file >> 30} GB\"\n )\n finally:\n try:\n file.seek(0)\n except Exception:\n pass\n\n files = Data(\n {\"file\": {\"name\": filename, \"value\": file}},\n blocksize=blocksize,\n callback=callback,\n )\n\n headers = {\"Origin\": BASE_URL}\n headers.update(files.headers)\n\n while True:\n key, server, file_id = self._generate_upload_key(\n allow_timeout=allow_timeout\n )\n info = dict(\n key=key,\n server=server,\n file_id=file_id,\n room=self.room_id,\n filename=filename,\n len=files.len,\n resumecount=0,\n )\n if information_callback:\n if information_callback(info) is False:\n continue\n break\n\n params = {\"room\": self.room_id, \"key\": key, \"filename\": filename}\n\n if self.key:\n params[\"roomKey\"] = self.key\n if self.password:\n params[\"password\"] = self.password\n\n while True:\n try:\n post = self.conn.post(\n f\"https://{server}/upload\",\n params=params,\n data=files,\n headers=headers,\n )\n post.raise_for_status()\n break\n\n except requests.exceptions.ConnectionError as ex:\n if \"aborted\" not in repr(ex): # ye, that's nasty but \"compatible\"\n raise\n try:\n resume = self.conn.get(\n f\"https://{server}/rest/uploadStatus\",\n params={\"key\": key, \"c\": 1},\n ).text\n resume = from_json(resume)\n resume = resume[\"receivedBytes\"]\n if resume <= 0:\n raise ConnectionError(\"Cannot resume\")\n file.seek(resume)\n files = Data(\n {\"file\": {\"name\": filename, \"value\": file}},\n blocksize=blocksize,\n callback=callback,\n logical_offset=resume,\n )\n headers.update(files.headers)\n params[\"startAt\"] = resume\n info[\"resumecount\"] += 1\n if information_callback:\n information_callback(info)\n except requests.exceptions.ConnectionError as iex:\n # ye, that's nasty but \"compatible\"\n if \"aborted\" not in repr(iex):\n raise\n continue # another day, another try\n return file_id"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the 
following Python 3 function doing\ndef close(self):\n\n if hasattr(self, \"conn\"):\n self.conn.close()\n del self.conn\n if hasattr(self, \"user\"):\n del self.user", "response": "Close connection to this room"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef user_info(self, kv):\n\n key, value = kv\n self.__user_info[key] = value", "response": "Sets the user_info dict entry through a tuple."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nasks lain about what he knows about a given file.", "response": "def fileinfo(self, fid):\n \"\"\"Ask lain about what he knows about given file. If the given file\n exists in the file dict, it will get updated.\"\"\"\n\n if not isinstance(fid, str):\n raise TypeError(\"Your file ID must be a string\")\n try:\n info = self.conn.make_call_with_cb(\"getFileinfo\", fid).get(timeout=5)\n if not info:\n warnings.warn(\n f\"Your query for file with ID: '{fid}' failed.\", RuntimeWarning\n )\n elif fid in self.__files and not self.__files[fid].updated:\n self.__files[fid].fileupdate(info)\n except queue.Empty as ex:\n raise ValueError(\n \"lain didn't produce a callback!\\n\"\n \"Are you sure your query wasn't malformed?\"\n ) from ex\n return info"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _generate_upload_key(self, allow_timeout=False):\n\n # Wait for server to set username if not set already.\n while not self.user.nick:\n with ARBITRATOR.condition:\n ARBITRATOR.condition.wait()\n while True:\n params = {\n \"name\": self.user.nick,\n \"room\": self.room_id,\n \"c\": self.__upload_count,\n }\n if self.key:\n params[\"roomKey\"] = self.key\n if self.password:\n params[\"password\"] = self.password\n info = self.conn.make_api_call(\"getUploadKey\", params)\n self.__upload_count += 1\n try:\n return info[\"key\"], info[\"server\"], info[\"file_id\"]\n except 
Exception:\n to = int(info.get(\"error\", {}).get(\"info\", {}).get(\"timeout\", 0))\n if to <= 0 or not allow_timeout:\n raise IOError(f\"Failed to retrieve key {info}\")\n time.sleep(to / 10000)", "response": "Generate a new upload key"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nremove one or more files", "response": "def delete_files(self, ids):\n \"\"\"Remove one or more files\"\"\"\n\n self.check_owner()\n if not isinstance(ids, list):\n raise TypeError(\"You must specify list of files to delete!\")\n self.conn.make_call(\"deleteFiles\", ids)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef transfer_owner(self, new_owner):\n\n if not self.owner and not self.admin:\n raise RuntimeError(\"You need more street creed to do this\")\n\n new_owner = new_owner.strip().lower()\n if not new_owner:\n raise ValueError(\"Empty strings cannot be new owners\")\n\n self.__set_config_value(\"owner\", new_owner)", "response": "Transfer owner to new owner"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nadd a janitor to the room", "response": "def add_janitor(self, janitor):\n \"\"\"Add janitor to the room\"\"\"\n\n if not self.owner and not self.admin:\n raise RuntimeError(\"Not enough street creed to do this\")\n\n janitor = janitor.strip().lower()\n if not janitor:\n raise ValueError(\"Empty strings cannot be janitors\")\n\n if janitor in self.config.janitors:\n return\n\n self.config.janitors.append(janitor)\n self.__set_config_value(\"janitors\", self.config.janitors)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef remove_janitor(self, janitor):\n\n if not self.owner and not self.admin:\n raise RuntimeError(\"Not enough street creed to do this\")\n\n janitor = janitor.strip().lower()\n if not janitor:\n raise ValueError(\"Empty strings cannot be 
janitors\")\n\n if janitor not in self.config.janitors:\n return\n\n self.config.janitors.remove(janitor)\n self.__set_config_value(\"janitors\", self.config.janitors)", "response": "Remove a janitor from the room"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _make_env(resultdir=None):\n env = {\n \"config\": {}, # The config\n \"resultdir\": \"\", # Path to the result directory\n \"config_file\": \"\", # The initial config file\n \"nodes\": {}, # Roles with nodes\n \"phase\": \"\", # Last phase that have been run\n \"user\": \"\", # User id for this job\n \"cwd\": os.getcwd() # Current Working Directory\n }\n\n if resultdir:\n env_path = os.path.join(resultdir, \"env\")\n if os.path.isfile(env_path):\n with open(env_path, \"r\") as f:\n env.update(yaml.load(f))\n logger.debug(\"Loaded environment %s\", env_path)\n\n if \"config_file\" in env and env[\"config_file\"] is not None:\n # Resets the configuration of the environment\n if os.path.isfile(env[\"config_file\"]):\n with open(env[\"config_file\"], \"r\") as f:\n env[\"config\"].update(yaml.load(f))\n logger.debug(\"Reloaded config %s\", env[\"config\"])\n\n return env", "response": "Loads the environment from the resultdir if not None and returns it."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _save_env(env):\n env_path = os.path.join(env[\"resultdir\"], \"env\")\n\n if os.path.isdir(env[\"resultdir\"]):\n with open(env_path, \"w\") as f:\n yaml.dump(env, f)", "response": "Saves one environment.\n\n Args:\n env (dict): the env dict to save."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _check_env(fn):\n def decorator(*args, **kwargs):\n # If no directory is provided, set the default one\n resultdir = kwargs.get(\"--env\", SYMLINK_NAME)\n # Check if the env file exists\n env_path = os.path.join(resultdir, \"env\")\n if not 
os.path.isfile(env_path):\n raise Exception(\"The file %s does not exist.\" % env_path)\n\n # Proceeds with the function execution\n return fn(*args, **kwargs)\n return decorator", "response": "Decorator for an Enos Task.\n This decorator checks if an environment file exists."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _set_resultdir(name=None):\n # Compute file path of results directory\n resultdir_name = name or \"enos_\" + datetime.today().isoformat()\n resultdir_path = os.path.abspath(resultdir_name)\n\n # Raise error if a related file exists\n if os.path.isfile(resultdir_path):\n raise EnosFilePathError(resultdir_path,\n \"Result directory cannot be created due \"\n \"to existing file %s\" % resultdir_path)\n\n # Create the result directory if it does not exist\n if not os.path.isdir(resultdir_path):\n os.mkdir(resultdir_path)\n logger.info(\"Generate results directory %s\" % resultdir_path)\n\n # Symlink the result directory with the \"cwd/current\" directory\n link_path = SYMLINK_NAME\n if os.path.lexists(link_path):\n os.remove(link_path)\n try:\n os.symlink(resultdir_path, link_path)\n logger.info(\"Symlink %s to %s\" % (resultdir_path, link_path))\n except OSError:\n # An harmless error can occur due to a race condition when\n # multiple regions are simultaneously deployed\n logger.warning(\"Symlink %s to %s failed\" %\n (resultdir_path, link_path))\n\n return resultdir_path", "response": "Set or get the directory to store experiment results."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nregisters new data in the data store.", "response": "def register(self, newdata, *args, **kwargs):\n \"\"\"\n Register data in registry. Meta for each data is specified by positional\n or keyword arguments after the new data and consists of the following:\n\n * ``uncertainty`` - Map of uncertainties in percent corresponding to new\n keys. 
The uncertainty keys must be a subset of the new data keys.\n * ``variance`` - Square of the uncertainty (no units).\n * ``isconstant``: Map corresponding to new keys whose values are``True``\n if constant or ``False`` if periodic. These keys must be a subset of\n the new data keys.\n * ``timeseries``: Name of corresponding time series data, ``None`` if no\n time series. _EG_: DNI data ``timeseries`` attribute might be set to a\n date/time data that it corresponds to. More than one data can have the\n same ``timeseries`` data.\n * ``data_source``: the\n :class:`~simkit.core.data_sources.DataSource` superclass that\n was used to acquire this data. This can be used to group data from a\n specific source together.\n\n :param newdata: New data to add to registry. When registering new data,\n keys are not allowed to override existing keys in the data\n registry.\n :type newdata: mapping\n :raises:\n :exc:`~simkit.core.exceptions.UncertaintyPercentUnitsError`\n \"\"\"\n kwargs.update(zip(self.meta_names, args))\n # check uncertainty has units of percent\n uncertainty = kwargs['uncertainty']\n variance = kwargs['variance']\n isconstant = kwargs['isconstant']\n # check uncertainty is percent\n if uncertainty:\n for k0, d in uncertainty.iteritems():\n for k1, v01 in d.iteritems():\n units = v01.units\n if units != UREG('percent'):\n keys = '%s-%s' % (k0, k1)\n raise UncertaintyPercentUnitsError(keys, units)\n # check variance is square of uncertainty\n if variance and uncertainty:\n for k0, d in variance.iteritems():\n for k1, v01 in d.iteritems():\n keys = '%s-%s' % (k0, k1)\n missing = k1 not in uncertainty[k0]\n v2 = np.asarray(uncertainty[k0][k1].to('fraction').m) ** 2.0\n if missing or not np.allclose(np.asarray(v01), v2):\n raise UncertaintyVarianceError(keys, v01)\n # check that isconstant is boolean\n if isconstant:\n for k, v in isconstant.iteritems():\n if not isinstance(v, bool):\n classname = self.__class__.__name__\n error_msg = ['%s meta \"isconstant\" should 
be' % classname,\n 'boolean, but it was \"%s\" for \"%s\".' % (v, k)]\n raise TypeError(' '.join(error_msg))\n # call super method, meta must be passed as kwargs!\n super(DataRegistry, self).register(newdata, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ndetermines if the filename is cached using the given extension.", "response": "def _is_cached(self, ext='.json'):\n \"\"\"\n Determine if ``filename`` is cached using extension ``ex`` a string.\n\n :param ext: extension used to cache ``filename``, default is '.json'\n :type ext: str\n :return: True if ``filename`` is cached using extensions ``ex``\n :rtype: bool\n \"\"\"\n # extension must start with a dot\n if not ext.startswith('.'):\n # prepend extension with a dot\n ext = '.%s' % ext\n # cache file is filename with extension\n cache_file = '%s%s' % (self.filename, ext)\n # if filename already ends with extension or there's a file with the\n # extension, then assume the data is cached\n return self.filename.endswith(ext) or os.path.exists(cache_file)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef saveas_json(self, save_name):\n # make pycharm by defining inferred objects\n meta = getattr(self, DataSourceBase._meta_attr)\n param_file = getattr(self, DataSourceBase._param_file)\n # JSONEncoder removes units and converts arrays to lists\n # save last time file was modified\n utc_mod_time = list(time.gmtime(os.path.getmtime(save_name)))\n json_data = {'data': self.data, 'utc_mod_time': utc_mod_time,\n 'param_file': param_file,\n 'data_reader': meta.data_reader.__name__,\n 'data_source': self.__class__.__name__}\n if not save_name.endswith('.json'):\n save_name += '.json'\n with open(save_name, 'w') as fp:\n json.dump(json_data, fp, cls=SimKitJSONEncoder)\n # TODO: test file save successful\n # TODO: need to update model\n self._is_saved = True", "response": "Save data as JSON file."} 
{"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nedits data in data_reg.", "response": "def edit(self, edits, data_reg):\n \"\"\"\n Edit data in :class:`Data_Source`. Sets :attr:`issaved` to ``False``.\n \"\"\"\n data_reg.update(edits)\n self._is_saved = False"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef list_slitlets_from_string(s, islitlet_min, islitlet_max):\n\n # protection\n if not isinstance(s, str):\n print('type(s): ', type(s))\n print('ERROR: function expected a string parameter')\n\n # initialize empty output\n set_slitlets = set()\n\n # remove leading blank spaces\n s = re.sub('^ *', '', s)\n # remove trailing blank spaces\n s = re.sub(' *$', '', s)\n # remove blank space before ','\n s = re.sub(' *,', ',', s)\n # remove blank spaces after ','\n s = re.sub(', *', ',', s)\n # remove many blank spaces by a single blank space\n s = re.sub(' +', ' ', s)\n stuples = s.split()\n for item in stuples:\n subitems = item.split(',')\n nsubitems = len(subitems)\n if nsubitems == 1:\n n1 = int(subitems[0])\n n2 = n1\n step = 1\n elif nsubitems == 2:\n n1 = int(subitems[0])\n n2 = int(subitems[1])\n step = 1\n elif nsubitems == 3:\n n1 = int(subitems[0])\n n2 = int(subitems[1])\n step = int(subitems[2])\n else:\n raise ValueError('Unexpected slitlet range:', s)\n for i in range(n1, n2 + 1, step):\n if islitlet_min <= i <= islitlet_max:\n set_slitlets.add(i)\n else:\n print('islitlet_min: ', islitlet_min)\n print('islitlet_max: ', islitlet_max)\n print('i...........: ', i)\n raise ValueError(\"Slitlet number out of range!\")\n\n list_slitlets = list(set_slitlets)\n list_slitlets.sort()\n\n # return result\n return list_slitlets", "response": "Return list of slitlets from string specification."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef ref(self, orm_classpath, cls_pk=None):\n # split orm from 
module path\n orm_module, orm_class = get_objects(orm_classpath)\n# if orm_classpath.startswith(\".\"):\n# # we handle relative classpaths by using the orm_class and its parents\n# # to find the relative import\n# if self.orm_class:\n# try:\n# orm_module, orm_class = get_objects(\n# orm_classpath,\n# self.orm_class.__module__\n# )\n# except ImportError:\n# parents = inspect.getmro(self.orm_class)\n# if parents:\n# for pc in parents[1:-1]:\n# try:\n# orm_module, orm_class = get_objects(\n# orm_classpath,\n# pc.__module__\n# )\n# except ImportError:\n# pass\n# \n# if not orm_module or not orm_class:\n# raise ImportError(\n# \"Unable to resolve relative ref using {}\".format(\n# self.orm_class.__module__\n# )\n# )\n# \n# else:\n# raise ImportError(\"trying relative ref without orm_class\")\n# \n# else:\n# orm_module, orm_class = get_objects(orm_classpath)\n\n\n# if isinstance(orm_classpath, basestring):\n# orm_module, orm_class = get_objects(orm_classpath)\n# else:\n# orm_module, orm_class = get_objects(orm_classpath[0], orm_classpath[1])\n\n q = orm_class.query\n if cls_pk:\n found = False\n for fn, f in orm_class.schema.fields.items():\n cls_ref_s = f.schema\n if cls_ref_s and self.schema == cls_ref_s:\n q.is_field(fn, cls_pk)\n found = True\n break\n\n if not found:\n raise ValueError(\"Did not find a foreign key field for [{}] in [{}]\".format(\n self.orm_class.table_name,\n orm_class.table_name,\n ))\n\n return q", "response": "This function takes a string path to allow query -ing from another Orm class and returns a Query object that represents the object in the class A."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef unique_field(self, field_name):\n self.fields_set.options[\"unique\"] = True\n return self.select_field(field_name)", "response": "set a unique field to be selected"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef 
select_fields(self, *fields):\n if fields:\n if not isinstance(fields[0], basestring): \n fields = list(fields[0]) + list(fields)[1:]\n\n for field_name in fields:\n field_name = self._normalize_field_name(field_name)\n self.select_field(field_name)\n return self", "response": "set multiple fields to be selected"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nsetting a field into the internal list of fields that will be inserted into the db", "response": "def set_field(self, field_name, field_val=None):\n \"\"\"\n set a field into .fields attribute\n\n n insert/update queries, these are the fields that will be inserted/updated into the db\n \"\"\"\n field_name = self._normalize_field_name(field_name)\n self.fields_set.append(field_name, [field_name, field_val])\n return self"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef in_field(self, field_name, *field_vals, **field_kwargs):\n field_name = self._normalize_field_name(field_name)\n fv = make_list(field_vals[0]) if field_vals else None\n if field_kwargs:\n for k in field_kwargs:\n if not field_kwargs[k]:\n raise ValueError(\"Cannot IN an empty list\")\n\n field_kwargs[k] = make_list(field_kwargs[k])\n\n else:\n if not fv: self.can_get = False\n\n self.fields_where.append(field_name, [\"in\", field_name, fv, field_kwargs])\n return self", "response": "Add a field to the where clause of the resource."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nperform a field_name NOT LIKE field_val query on the resource object", "response": "def nlike_field(self, field_name, *field_val, **field_kwargs):\n \"\"\"Perform a field_name NOT LIKE field_val query\n\n :param field_name: string, the field we are filtering on\n :param field_val: string, the like query: %val, %val%, val%\n :returns: self, for fluid interface\n \"\"\"\n if not field_val:\n raise ValueError(\"Cannot NOT LIKE nothing\")\n field_name = 
self._normalize_field_name(field_name)\n fv = field_val[0]\n self.fields_where.append(field_name, [\"nlike\", field_name, fv, field_kwargs])\n return self"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef sort_field(self, field_name, direction, field_vals=None):\n field_name = self._normalize_field_name(field_name)\n if direction > 0:\n direction = 1\n elif direction < 0:\n direction = -1\n else:\n raise ValueError(\"direction {} is undefined\".format(direction))\n\n self.fields_sort.append(field_name, [direction, field_name, list(field_vals) if field_vals else field_vals])\n return self", "response": "sort this query by field_name in directrion\n "} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nget results from the dbget", "response": "def get(self, limit=None, page=None):\n \"\"\"\n get results from the db\n\n return -- Iterator()\n \"\"\"\n has_more = False\n self.bounds.paginate = True\n limit_paginate, offset = self.bounds.get(limit, page)\n self.default_val = []\n results = self._query('get')\n\n if limit_paginate:\n self.bounds.paginate = False\n if len(results) == limit_paginate:\n has_more = True\n results.pop(-1)\n\n it = ResultsIterator(results, orm_class=self.orm_class, has_more=has_more, query=self)\n return self.iterator_class(it)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_one(self):\n self.default_val = None\n o = self.default_val\n d = self._query('get_one')\n if d:\n o = self.orm_class(d, hydrate=True)\n return o", "response": "get one row from the db"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngets all the values from the query", "response": "def values(self, limit=None, page=None):\n \"\"\"\n convenience method to get just the values from the query (same as get().values())\n\n if you want to get all values, you can use: 
self.all().values()\n \"\"\"\n return self.get(limit=limit, page=page).values()"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef pks(self, limit=None, page=None):\n self.fields_set.reset()\n return self.select_pk().values(limit, page)", "response": "return a list of the primary keys of the current record set"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_pk(self, field_val):\n field_name = self.schema.pk.name\n return self.is_field(field_name, field_val).get_one()", "response": "convenience method for running is_pk. get_one"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn the count of the criteria", "response": "def count(self):\n \"\"\"return the count of the criteria\"\"\"\n\n # count queries shouldn't care about sorting\n fields_sort = self.fields_sort\n self.fields_sort = self.fields_sort_class()\n\n self.default_val = 0\n ret = self._query('count')\n\n # restore previous values now that count is done\n self.fields_sort = fields_sort\n\n return ret"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef update(self):\n self.default_val = 0\n #fields = self.fields\n #fields = self.orm_class.depart(self.fields, is_update=True)\n #self.set_fields(fields)\n return self.interface.update(\n self.schema,\n self.fields,\n self\n )", "response": "persist the. fields using. 
fields_where"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nquery the internal cache for the current entry set", "response": "def raw(self, query_str, *query_args, **query_options):\n \"\"\"\n use the interface.query() method to pass in your own raw query without\n any processing\n\n NOTE -- This will allow you to make any raw query and will usually return\n raw results, it won't wrap those results in a prom.Orm iterator instance\n like other methods like .all() and .get()\n\n query_str -- string -- the raw query for whatever the backend interface is\n query_args -- list -- if you have named parameters, you can pass in the values\n **query_options -- dict -- key:val options for the backend, these are very backend specific\n return -- mixed -- depends on the backend and the type of query\n \"\"\"\n i = self.interface\n return i.query(query_str, *query_args, **query_options)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef reduce(self, target_map, target_reduce, threads=0):\n if not threads:\n threads = multiprocessing.cpu_count()\n\n # we subtract one for the main process\n map_threads = threads - 1 if threads > 1 else 1\n\n q = self.copy()\n limit = q.bounds.limit\n offset = q.bounds.offset\n\n total_count = limit if limit else q.count()\n limit_count = int(math.ceil(float(total_count) / float(map_threads)))\n logger.info(\"{} processes will handle {} rows each for a total of {}\".format(\n map_threads,\n limit_count,\n total_count\n ))\n\n queue = multiprocessing.JoinableQueue()\n\n # close all open db global connections just in case, because we can't be sure\n # what the target_map methods are going to do, we want them to re-open connections\n # that they need\n interfaces = get_interfaces()\n for name, inter in interfaces.items():\n inter.close()\n\n # just in case we also close the query connection since it can in theory\n # be non-global\n 
q.interface.close()\n\n ts = []\n for page in range(map_threads):\n q = self.copy()\n q.limit(limit_count).offset(offset + (limit_count * page))\n t = ReduceThread(\n target=target_map,\n query=q,\n queue=queue,\n )\n t.start()\n ts.append(t)\n\n while ts or not queue.empty():\n try:\n val = queue.get(True, 1.0)\n target_reduce(val)\n\n except queues.Empty:\n pass\n\n else:\n queue.task_done()\n\n # faster than using any((t.is_alive() for t in mts))\n ts = [t for t in ts if t.is_alive()]", "response": "map and reduce the tables in the database"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ndecide if this query is cacheable returns a key if it is otherwise empty", "response": "def cache_key(self, method_name):\n \"\"\"decides if this query is cacheable, returns a key if it is, otherwise empty\"\"\"\n key = \"\"\n method = getattr(self, \"cache_key_{}\".format(method_name), None)\n if method:\n key = method()\n\n return key"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef normalize(self):\n\n logger.debug(\"normalize: {}\".format(type(self).__name__))\n\n normalize_headers = self._normalize_headers()\n\n return TableData(\n self.__normalize_table_name(),\n normalize_headers,\n self._normalize_rows(normalize_headers),\n dp_extractor=self._tabledata.dp_extractor,\n type_hints=self._type_hints,\n )", "response": "Returns a sanitized version of the object."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _apply_units_to_numpy_data_readers(parameters, data):\n # apply header units\n header_param = parameters.get('header') # default is None\n # check for headers\n if header_param:\n fields = header_param['fields'] # header fields\n # dictionary of header field parameters\n header_fields = {field[0]: field[1:] for field in fields}\n # loop over fieldnames\n for k, val in header_fields.iteritems():\n # check for 
units in header field parameters\n if len(val) > 1:\n data[k] *= UREG(str(val[1])) # apply units\n # apply other data units\n data_units = parameters['data'].get('units') # default is None\n if data_units:\n for k, val in data_units.iteritems():\n data[k] *= UREG(str(val)) # apply units\n return data", "response": "Apply units to data originally loaded by NumPyLoadTxtReader or NumPyGenFromTxtReader."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nread and parse data from 1st line of a file.", "response": "def _read_header(f, header_param):\n \"\"\"\n Read and parse data from 1st line of a file.\n\n :param f: :func:`file` or :class:`~StringIO.StringIO` object from which to\n read 1st line.\n :type f: file\n :param header_param: Parameters used to parse the data from the header.\n Contains \"delimiter\" and \"fields\".\n :type header_param: dict\n :returns: Dictionary of data read from header.\n :rtype: dict\n :raises: :exc:`~simkit.core.exceptions.UnnamedDataError`\n\n The **header_param** argument contains keys to read the 1st line of **f**.\n If \"delimiter\" is ``None`` or missing, the default delimiter is a comma,\n otherwise \"delimiter\" can be any single character, integer or sequence of\n ``int``.\n\n * single character -- a delimiter\n * single integer -- uniform fixed width\n * sequence of ``int`` -- fixed widths, the number of fields should \\\n correspond to the length of the sequence.\n\n The \"fields\" key is a list of (parameter-name, parameter-type[, parameter-\n units]) lists.\n \"\"\"\n # default delimiter is a comma, can't be None\n header_delim = str(header_param.get('delimiter', ','))\n # don't allow unnamed fields\n if 'fields' not in header_param:\n raise UnnamedDataError(f.name)\n header_fields = {field[0]: field[1:] for field in header_param['fields']}\n # header_names can't be generator b/c DictReader needs list, and can't be\n # dictionary b/c must be same order as 'fields' to match data readby 
csv\n header_names = [field[0] for field in header_param['fields']]\n # read header\n header_str = StringIO(f.readline()) # read the 1st line\n # use csv because it will preserve quoted fields with commas\n # make a csv.DictReader from header string, use header names for\n # fieldnames and set delimiter to header delimiter\n header_reader = csv.DictReader(header_str, header_names,\n delimiter=header_delim,\n skipinitialspace=True)\n data = header_reader.next() # parse the header dictionary\n # iterate over items in data\n for k, v in data.iteritems():\n header_type = header_fields[k][0] # spec'd type\n # whitelist header types\n if isinstance(header_type, basestring):\n if header_type.lower().startswith('int'):\n header_type = int # coerce to integer\n elif header_type.lower().startswith('long'):\n header_type = long # coerce to long integer\n elif header_type.lower().startswith('float'):\n header_type = float # to floating decimal point\n elif header_type.lower().startswith('str'):\n header_type = str # coerce to string\n elif header_type.lower().startswith('bool'):\n header_type = bool # coerce to boolean\n else:\n raise TypeError('\"%s\" is not a supported type.' % header_type)\n # WARNING! Use of `eval` considered harmful. 
`header_type` is read\n # from JSON file, not secure input, could be used to exploit system\n data[k] = header_type(v) # cast v to type\n # check for units in 3rd element\n if len(header_fields[k]) > 1:\n units = UREG(str(header_fields[k][1])) # spec'd units\n data[k] = data[k] * units # apply units\n return data"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\napply units to data.", "response": "def _apply_units(data_data, data_units, fname):\n \"\"\"\n Apply units to data.\n\n :param data_data: NumPy structured array with data from fname.\n :type data_data: :class:`numpy.ndarray`\n :param data_units: Units of fields in data_data.\n :type data_units: dict\n :param fname: Name of file from which data_data was read.\n :type fname: str\n :returns: Dictionary of data with units applied.\n :rtype: dict\n :raises: :exc:`~simkit.core.exceptions.UnnamedDataError`\n \"\"\"\n data_names = data_data.dtype.names\n # raise error if NumPy data doesn't have names\n if not data_names:\n raise UnnamedDataError(fname)\n data = dict.fromkeys(data_names) # dictionary of data read by NumPy\n # iterate over data read by NumPy\n for data_name in data_names:\n if data_name in data_units:\n # if units specified in parameters, then convert to string\n units = str(data_units[data_name])\n data[data_name] = data_data[data_name] * UREG(units)\n elif np.issubdtype(data_data[data_name].dtype, str):\n # if no units specified and is string\n data[data_name] = data_data[data_name].tolist()\n else:\n data[data_name] = data_data[data_name]\n return data"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _utf8_list_to_ascii_tuple(utf8_list):\n for n, utf8 in enumerate(utf8_list):\n utf8_list[n][0] = str(utf8[0])\n utf8_list[n][1] = str(utf8[1])\n utf8_list[n] = tuple(utf8)", "response": "Convert unicode strings in a list of lists to ascii in a list of tuples."} {"SOURCE": "codesearchnet", "instruction": 
"How would you explain what the following Python 3 function does\ndef load_data(self, filename, *args, **kwargs):\n # append .json extension if needed\n if not filename.endswith('.json'):\n filename += '.json' # append \"json\" to filename\n # open file and load JSON data\n with open(filename, 'r') as fid:\n json_data = json.load(fid)\n # if JSONReader is the original reader then apply units and return\n if (not self.orig_data_reader or\n isinstance(self, self.orig_data_reader)):\n return self.apply_units_to_cache(json_data['data'])\n # last modification since JSON file was saved\n utc_mod_time = json_data.get('utc_mod_time')\n # instance of original data reader with original parameters\n orig_data_reader_obj = self.orig_data_reader(self.parameters, self.meta)\n # check if file has been modified since saved as JSON file\n if utc_mod_time:\n # convert to ordered tuple\n utc_mod_time = time.struct_time(utc_mod_time)\n orig_filename = filename[:-5] # original filename\n # use original file if it's been modified since JSON file saved\n if utc_mod_time < time.gmtime(os.path.getmtime(orig_filename)):\n os.remove(filename) # delete JSON file\n return orig_data_reader_obj.load_data(orig_filename)\n # use JSON file if original file hasn't been modified\n return orig_data_reader_obj.apply_units_to_cache(json_data['data'])", "response": "Load JSON data.\n\n :param filename: name of JSON file with data\n :type filename: str\n :return: data\n :rtype: dict"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\napplies units to data read using JSONReader.", "response": "def apply_units_to_cache(self, data):\n \"\"\"\n Apply units to data read using :class:`JSONReader`.\n\n :param data: cached data\n :return: data with units applied\n :rtype: :class:`~pint.unit.Quantity`\n \"\"\"\n for k, val in self.parameters.iteritems():\n if 'units' in val:\n data[k] = Q_(data[k], val.get('units'))\n return data"} {"SOURCE": "codesearchnet", "instruction": 
"How would you code a function in Python 3 to\nload data from Excel file.", "response": "def load_data(self, filename, *args, **kwargs):\n \"\"\"\n Load parameters from Excel spreadsheet.\n\n :param filename: Name of Excel workbook with data.\n :type filename: str\n :returns: Data read from Excel workbook.\n :rtype: dict\n \"\"\"\n # workbook read from file\n workbook = open_workbook(filename, verbosity=True)\n data = {} # an empty dictionary to store data\n # iterate through sheets in parameters\n # iterate through the parameters on each sheet\n for param, pval in self.parameters.iteritems():\n sheet = pval['extras']['sheet']\n # get each worksheet from the workbook\n worksheet = workbook.sheet_by_name(sheet)\n # split the parameter's range elements\n prng0, prng1 = pval['extras']['range']\n # missing \"units\", json ``null`` and Python ``None`` all OK!\n # convert to str from unicode, None to '' (dimensionless)\n punits = str(pval.get('units') or '')\n # replace None with empty list\n if prng0 is None:\n prng0 = []\n if prng1 is None:\n prng1 = []\n # FIXME: Use duck-typing here instead of type-checking!\n # if both elements in range are `int` then parameter is a cell\n if isinstance(prng0, int) and isinstance(prng1, int):\n datum = worksheet.cell_value(prng0, prng1)\n # if the either element is a `list` then parameter is a slice\n elif isinstance(prng0, list) and isinstance(prng1, int):\n datum = worksheet.col_values(prng1, *prng0)\n elif isinstance(prng0, int) and isinstance(prng1, list):\n datum = worksheet.row_values(prng0, *prng1)\n # if both elements are `list` then parameter is 2-D\n else:\n datum = []\n for col in xrange(prng0[1], prng1[1]):\n datum.append(worksheet.col_values(col, prng0[0],\n prng1[0]))\n # duck typing that datum is real\n try:\n npdatum = np.array(datum, dtype=np.float)\n except ValueError as err:\n # check for iterable:\n # if `datum` can't be coerced to float, then it must be\n # *string* & strings *are* iterables, so don't check!\n # 
check for strings:\n # data must be real or *all* strings!\n # empty string, None or JSON null also OK\n # all([]) == True but any([]) == False\n if not datum:\n data[param] = None # convert empty to None\n elif all(isinstance(_, basestring) for _ in datum):\n data[param] = datum # all str is OK (EG all 'TMY')\n elif all(not _ for _ in datum):\n data[param] = None # convert list of empty to None\n else:\n raise err # raise ValueError if not all real or str\n else:\n data[param] = npdatum * UREG(punits)\n # FYI: only put one statement into try-except test otherwise\n # might catch different error than expected. use ``else`` as\n # option to execute only if exception *not* raised.\n return data"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef apply_units_to_cache(self, data):\n # iterate through sheets in parameters\n # iterate through the parameters on each sheet\n for param, pval in self.parameters.iteritems():\n # try to apply units\n try:\n data[param] *= UREG(str(pval.get('units') or ''))\n except TypeError:\n continue\n return data", "response": "Apply units to cached data read using JSONReader."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef load_data(self, filename, *args, **kwargs):\n # header keys\n header_param = self.parameters.get('header') # default is None\n # data keys\n data_param = self.parameters['data'] # raises KeyError if no 'data'\n dtype = data_param['dtype'] # raises KeyError if no 'dtype'\n # convert to tuple and normal ASCII\n _utf8_list_to_ascii_tuple(dtype) if dtype else None # -> tuple of str\n delimiter = data_param.get('delimiter') # default is None\n skiprows = data_param.get('skiprows') # default is None\n data_units = data_param.get('units', {}) # default is an empty dict\n data = {} # a dictionary for data\n # open file for reading\n with open(filename, 'r') as fid:\n # read header\n if header_param:\n 
data.update(_read_header(fid, header_param))\n fid.seek(0) # move cursor back to beginning\n # read data\n data_data = np.loadtxt(fid, dtype, delimiter=delimiter,\n skiprows=skiprows)\n # apply units\n data.update(_apply_units(data_data, data_units, fid.name))\n return data", "response": "Load data from a text file using numpy.loadtxt."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nloads data from a text file using numpy.genfromtxt", "response": "def load_data(self, filename, *args, **kwargs):\n \"\"\"\n Load data from a text file.\n\n :param filename: name of file to read\n :type filename: str\n :returns: data read from file using :func:`numpy.genfromtxt`\n :rtype: dict\n :raises: :exc:`~simkit.core.exceptions.UnnamedDataError`\n \"\"\"\n # header keys\n header_param = self.parameters.get('header') # default is None\n # data keys\n data_param = self.parameters['data'] # raises KeyError if no 'data'\n dtype = data_param.get('dtype') # default is None\n # if not None convert to tuple and normal ASCII\n _utf8_list_to_ascii_tuple(dtype) if dtype else None # -> tuple of str\n delimiter = data_param.get('delimiter') # default is None\n skip_header = data_param.get('skip_header') # default is None\n usecols = data_param.get('usecols') # default is None\n names = data_param.get('names') # default is None\n names = [str(_) for _ in names] if names else None # -> str\n excludelist = data_param.get('excludelist') # default is None\n deletechars = data_param.get('deletechars') # default is None\n data_units = data_param.get('units', {}) # default is an empty dict\n # either dtype or names must be specified\n if not (dtype or names):\n raise UnnamedDataError(filename)\n data = {} # a dictionary for data\n # open file for reading\n with open(filename, 'r') as fid:\n # read header\n if header_param:\n data.update(_read_header(fid, header_param))\n fid.seek(0) # move cursor back to beginning\n # data\n data_data = np.genfromtxt(fid, dtype, 
delimiter=delimiter,\n skip_header=skip_header, usecols=usecols,\n names=names, excludelist=excludelist,\n deletechars=deletechars)\n # apply units\n data.update(_apply_units(data_data, data_units, fid.name))\n return data"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nloading parameterized data from different sheets.", "response": "def load_data(self, filename, *args, **kwargs):\n \"\"\"\n Load parameterized data from different sheets.\n \"\"\"\n # load parameterized data\n data = super(ParameterizedXLS, self).load_data(filename)\n # add parameter to data\n parameter_name = self.parameterization['parameter']['name']\n parameter_values = self.parameterization['parameter']['values']\n parameter_units = str(self.parameterization['parameter']['units'])\n data[parameter_name] = parameter_values * UREG(parameter_units)\n # number of sheets\n num_sheets = len(self.parameterization['parameter']['sheets'])\n # parse and concatenate parameterized data\n for key in self.parameterization['data']:\n units = str(self.parameterization['data'][key].get('units')) or ''\n datalist = []\n for n in xrange(num_sheets):\n k = key + '_' + str(n)\n datalist.append(data[k].reshape((1, -1)))\n data.pop(k) # remove unused data keys\n data[key] = np.concatenate(datalist, axis=0) * UREG(units)\n return data"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef apply_units_to_cache(self, data):\n # parameter\n parameter_name = self.parameters['parameter']['name']\n parameter_units = str(self.parameters['parameter']['units'])\n data[parameter_name] *= UREG(parameter_units)\n # data\n self.parameters.pop('parameter')\n return super(ParameterizedXLS, self).apply_units_to_cache(data)", "response": "Apply units to data reader."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef load_data(self, filename, *args, **kwargs):\n # load text data\n 
data = super(MixedTextXLS, self).load_data(filename)\n # iterate through sheets in parameters\n for sheet_params in self.parameters.itervalues():\n # iterate through the parameters on each sheet\n for param, pval in sheet_params.iteritems():\n pattern = pval.get('pattern', EFG_PATTERN) # get pattern\n re_meth = pval.get('method', 'search') # get re method\n # whitelist re methods, getattr could be considered harmful\n if re_meth in RE_METH:\n re_meth = getattr(re, pval.get('method', 'search'))\n else:\n msg = 'Only', '\"%s\", ' * len(RE_METH) % tuple(RE_METH)\n msg += 'regex methods are allowed.'\n raise AttributeError(msg)\n # if not isinstance(data[param], basestring):\n # re_meth = lambda p, dp: [re_meth(p, d) for d in dp]\n match = re_meth(pattern, data[param]) # get matches\n if match:\n try:\n match = match.groups()\n except AttributeError:\n match = [m.groups() for m in match]\n npdata = np.array(match, dtype=float).squeeze()\n data[param] = npdata * UREG(str(pval.get('units') or ''))\n else:\n raise MixedTextNoMatchError(re_meth, pattern, data[param])\n return data", "response": "Load text data from different sheets."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef mark(self, n=1):\n self.tick_if_necessary()\n self.count += n\n self.m1_rate.update(n)\n self.m5_rate.update(n)\n self.m15_rate.update(n)", "response": "Mark the occurrence of a given number of events."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nloads or update all or part of model.", "response": "def _load(self, layer=None):\n \"\"\"\n Load or update all or part of :attr:`model`.\n\n :param layer: Optionally load only specified layer.\n :type layer: str\n \"\"\"\n # open model file for reading and convert JSON object to dictionary\n # read and load JSON parameter map file as \"parameters\"\n with open(self.param_file, 'r') as param_file:\n file_params = json.load(param_file)\n for 
layer, params in file_params.iteritems():\n # update parameters from file\n self.parameters[layer] = ModelParameter(**params)\n # if layer argument spec'd then only update/load spec'd layer\n if not layer or not self.model:\n # update/load model if layer not spec'd or if no model exists yet\n # TODO: separate model and parameters according to comments in #78\n self.model = copy.deepcopy(self.parameters)\n else:\n # convert non-sequence to tuple\n layers = _listify(layer)\n # update/load layers\n for layer in layers:\n self.model[layer] = copy.deepcopy(self.parameters[layer])"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nupdates the internal state of the object.", "response": "def _update(self, layer=None):\n \"\"\"\n Update layers in model.\n \"\"\"\n meta = getattr(self, ModelBase._meta_attr)\n if not layer:\n layers = self.layers\n else:\n # convert non-sequence to tuple\n layers = _listify(layer)\n for layer in layers:\n # relative path to layer files from model file\n path = os.path.abspath(os.path.join(meta.modelpath, layer))\n getattr(self, layer).load(path)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ninitialize model and layers.", "response": "def _initialize(self):\n \"\"\"\n Initialize model and layers.\n \"\"\"\n meta = getattr(self, ModelBase._meta_attr)\n # read modelfile, convert JSON and load/update model\n if self.param_file is not None:\n self._load()\n LOGGER.debug('model:\\n%r', self.model)\n # initialize layers\n # FIXME: move import inside loop for custom layers in different modules\n mod = importlib.import_module(meta.layers_mod, meta.layers_pkg)\n src_model = {}\n for layer, value in self.model.iteritems():\n # from layers module get the layer's class definition\n layer_cls = getattr(mod, meta.layer_cls_names[layer]) # class def\n self.layers[layer] = layer_cls # add layer class def to model\n # check if model layers are classes\n src_value = {} # layer value 
generated from source classes\n for src in value['sources']:\n # check if source has keyword arguments\n try:\n src, kwargs = src\n except (TypeError, ValueError):\n kwargs = {} # no key work arguments\n # skip if not a source class\n if isinstance(src, basestring):\n continue\n # generate layer value from source class\n src_value[src.__name__] = {'module': src.__module__,\n 'package': None}\n # update layer keyword arguments\n src_value[src.__name__].update(kwargs)\n # use layer values generated from source class\n if src_value:\n value = src_model[layer] = src_value\n else:\n srcmod, srcpkg = value.get('module'), value.get('package')\n try:\n value = dict(value['sources'])\n except ValueError:\n value = dict.fromkeys(value['sources'], {})\n for src in value.viewkeys():\n if srcmod is not None:\n value[src]['module'] = srcmod\n if srcpkg is not None:\n value[src]['package'] = srcpkg\n # set layer attribute with model data\n setattr(self, layer, layer_cls(value))\n # update model with layer values generated from source classes\n if src_model:\n self.model.update(src_model)\n self._update()\n self._state = 'initialized'"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef load(self, modelfile, layer=None):\n # read modelfile, convert JSON and load/update model\n self.param_file = modelfile\n self._load(layer)\n self._update(layer)", "response": "Load or update a model or layers in a model."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef edit(self, layer, item, delete=False):\n # get layer attribute with model data\n if hasattr(self, layer):\n layer_obj = getattr(self, layer)\n else:\n raise AttributeError('missing layer: %s', layer)\n if delete:\n return layer_obj\n # iterate over items and edit layer\n for k, v in item.iteritems():\n if k in layer_obj.layer:\n layer_obj.edit(k, v) # edit layer\n else:\n raise AttributeError('missing 
layer item: %s', k)\n # update model data\n if k in self.model[layer]:\n self.model[layer][k].update(v)\n else:\n raise AttributeError('missing model layer item: %s', k)", "response": "Edit model.\n\n :param layer: Layer of model to edit\n :type layer: str\n :param item: Items to edit.\n :type item: dict\n :param delete: Flag to return\n :class:`~simkit.core.layers.Layer` to delete item.\n :type delete: bool"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nadding items in model.", "response": "def add(self, layer, items):\n \"\"\"\n Add items in model.\n \"\"\"\n for k in items.iterkeys():\n if k in self.model[layer]:\n raise Exception('item %s is already in layer %s' % (k, layer))\n self.model[layer].update(items)\n # this should also update Layer.layer, the layer data\n # same as calling layer constructor\n # so now just need to add items to the layer\n for k, v in items.iteritems():\n getattr(self, layer).add(k, v['module'], v.get('package'))"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef delete(self, layer, items):\n # Use edit to get the layer obj containing item\n items = _listify(items) # make items a list if it's not\n layer_obj = self.edit(layer, dict.fromkeys(items), delete=True)\n for k in items:\n if k in layer_obj.layer:\n layer_obj.delete(k)\n else:\n raise AttributeError('item %s missing from layer %s' %\n (k, layer))", "response": "Delete items in model."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef save(self, modelfile, layer=None):\n if layer:\n obj = {layer: self.model[layer]}\n else:\n obj = self.model\n with open(modelfile, 'w') as fp:\n json.dump(obj, fp, indent=2, sort_keys=True)", "response": "Save a model file."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef command(self, cmd, progress_hook=None, *args, **kwargs):\n cmds = 
cmd.split(None, 1) # split commands and simulations\n sim_names = cmds[1:] # simulations\n if not sim_names:\n sim_names = self.cmd_layer.reg.iterkeys()\n for sim_name in sim_names:\n sim_cmd = getattr(self.cmd_layer.reg[sim_name], cmd)\n sim_cmd(self, progress_hook=progress_hook, *args, **kwargs)", "response": "Execute a command in the command layer."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef login(self, password):\n\n if self.logged_in:\n raise RuntimeError(\"User already logged in!\")\n\n params = {\"name\": self.nick, \"password\": password}\n resp = self.conn.make_api_call(\"login\", params)\n if \"error\" in resp:\n raise RuntimeError(\n f\"Login failed: {resp['error'].get('message') or resp['error']}\"\n )\n self.session = resp[\"session\"]\n self.conn.make_call(\"useSession\", self.session)\n self.conn.cookies.update({\"session\": self.session})\n self.logged_in = True\n return True", "response": "Attempts to log in as the current user with given password"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef login_transplant(self, other):\n\n if not other.logged_in:\n raise ValueError(\"Other room is not logged in\")\n cookie = other.session\n if not cookie:\n raise ValueError(\"Other room has no cookie\")\n self.conn.cookies.update({\"session\": cookie})\n self.session = cookie\n self.logged_in = True\n return True", "response": "Attempts to carry over the login state from another room"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef logout(self):\n\n if not self.logged_in:\n raise RuntimeError(\"User is not logged in\")\n if self.conn.connected:\n params = {\"room\": self.conn.room.room_id}\n resp = self.conn.make_api_call(\"logout\", params)\n if not resp.get(\"success\", False):\n raise RuntimeError(\n f\"Logout unsuccessful: \"\n f\"{resp['error'].get('message') or resp['error']}\"\n )\n 
self.conn.make_call(\"logout\", params)\n self.conn.cookies.pop(\"session\")\n self.logged_in = False", "response": "Logs your user out"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef change_nick(self, new_nick):\n\n if self.logged_in:\n raise RuntimeError(\"User must be logged out\")\n self.__verify_username(new_nick)\n\n self.conn.make_call(\"command\", self.nick, \"nick\", new_nick)\n self.nick = new_nick", "response": "Change the name of your user"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef register(self, password):\n\n if len(password) < 8:\n raise ValueError(\"Password must be at least 8 characters.\")\n\n params = {\"name\": self.nick, \"password\": password}\n resp = self.conn.make_api_call(\"register\", params)\n\n if \"error\" in resp:\n raise RuntimeError(f\"{resp['error'].get('message') or resp['error']}\")\n\n self.conn.make_call(\"useSession\", resp[\"session\"])\n self.conn.cookies.update({\"session\": resp[\"session\"]})\n self.logged_in = True", "response": "Registers the current user with the given password."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef __verify_username(self, username):\n\n if len(username) > self.__max_length or len(username) < 3:\n raise ValueError(\n f\"Username must be between 3 and {self.__max_length} characters.\"\n )\n if any(c not in string.ascii_letters + string.digits for c in username):\n raise ValueError(\"Usernames can only contain alphanumeric characters.\")", "response": "Raises an exception if the given username is not valid."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nadds a value to the reservoir.", "response": "def update(self, value, timestamp=None):\n \"\"\"Add a value to the reservoir.\n\n :param value: the value to be added\n :param timestamp: the 
epoch timestamp of the value in seconds, defaults\n to the current timestamp if not specified.\n \"\"\"\n\n if timestamp is None:\n timestamp = self.current_time_in_fractional_seconds()\n self.rescale_if_needed()\n priority = self.weight(timestamp - self.start_time) / random.random()\n self.values[priority] = value\n if len(self.values) > self.size:\n self.values.remove_min()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef rescale(self, now):\n\n old_start_time = self.start_time\n self.start_time = self.current_time_in_fractional_seconds()\n self.next_scale_time = now + RESCALE_THRESHOLD_S\n new_values = treap.treap()\n for key in self.values.keys():\n new_key = key * math.exp(\n -self.alpha * (self.start_time - old_start_time))\n new_values[new_key] = self.values[key]\n self.values = new_values", "response": "Rescale the reservoir to a new start time by multiplying every stored priority key by the exponential decay factor, so that decayed weights remain comparable over time."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns module and class objects given a classpath like foo.bar.Baz, where Baz is the class defined in the foo.bar module.", "response": "def get_objects(classpath, calling_classpath=\"\"):\n \"\"\"\n given a classpath like foo.bar.Baz return module foo.bar and class Baz\n objects\n\n ..
seealso::\n https://docs.python.org/2.5/whatsnew/pep-328.html\n https://www.python.org/dev/peps/pep-0328/\n\n :param classpath: string, the full python class path (includes modules), a classpath\n is something like foo.bar.Che where Che is the class defined in the foo.bar\n module\n :param calling_classpath: string, if classpath is relative (eg, ..foo.Bar) then\n this is needed to resolve the relative classpath, it is usually the path of\n the module that is calling get_objects()\n :returns: tuple, (module, class)\n \"\"\"\n# if classpath.startswith(\".\"):\n# rel_count = len(re.match(\"^\\.+\", classpath).group(0))\n# if calling_classpath:\n# calling_count = calling_classpath.count(\".\")\n# if rel_count > calling_count:\n# raise ValueError(\n# \"Attempting relative import passed calling_classpath {}\".format(\n# calling_classpath\n# )\n# )\n# \n# bits = calling_classpath.rsplit('.', rel_count)\n# parent_classpath = bits[0]\n# classpath = \".\".join([parent_classpath, classpath[rel_count:]])\n# \n# else:\n# raise ValueError(\"Attempting relative import without calling_classpath\")\n# \n module_name, class_name = classpath.rsplit('.', 1)\n module = importlib.import_module(module_name, calling_classpath)\n try:\n klass = getattr(module, class_name)\n except AttributeError:\n raise AttributeError(\"module {} has no attribute {} parsing {}\".format(\n module.__name__,\n class_name,\n classpath\n ))\n\n return module, klass"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef make_dict(fields, fields_kwargs):\n ret = {}\n if fields:\n ret.update(fields)\n\n if fields_kwargs:\n ret.update(fields_kwargs)\n\n return ret", "response": "Merge the fields dict and the fields_kwargs keyword arguments into a single dict that can be used to create a new object."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef write_line(self, line, count=1):\n self.write(line)\n 
self.write_newlines(count)", "response": "writes the line and count newlines after the line"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef add(self, key, val, priority=None):\n\n if key in self.item_finder:\n self.remove(key)\n\n else:\n # keep the queue contained\n if self.full():\n raise OverflowError(\"Queue is full\")\n\n if priority is None:\n priority = next(self.counter)\n\n item = [priority, key, val]\n self.item_finder[key] = item\n heapq.heappush(self.pq, item)", "response": "add a value to the queue with priority using the key to know uniqueness"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nremove the value found at key from the queue", "response": "def remove(self, key):\n \"\"\"remove the value found at key from the queue\"\"\"\n item = self.item_finder.pop(key)\n item[-1] = None\n self.removed_count += 1"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nremoving the next prioritized key val priority and return it", "response": "def popitem(self):\n \"\"\"remove the next prioritized [key, val, priority] and return it\"\"\"\n pq = self.pq\n while pq:\n priority, key, val = heapq.heappop(pq)\n if val is None:\n self.removed_count -= 1\n\n else:\n del self.item_finder[key]\n return key, val, priority\n\n raise KeyError(\"pop from an empty priority queue\")"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning True if the queue is full False otherwise", "response": "def full(self):\n \"\"\"Return True if the queue is full\"\"\"\n if not self.size: return False\n return len(self.pq) == (self.size + self.removed_count)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nextracts slitlet 2d image from image with original EMIR dimensions.", "response": "def extract_slitlet2d(image_2k2k, sltlim):\n \"\"\"Extract slitlet 2d image from image with original EMIR 
dimensions.\n\n Parameters\n ----------\n image_2k2k : 2d numpy array, float\n Original image (dimensions NAXIS1 * NAXIS2)\n sltlim : instance of SlitLimits class\n Object containing relevant information concerning the slitlet\n region to be extracted.\n\n Returns\n -------\n slitlet2d : 2d numpy array, float\n Image corresponding to the slitlet region defined by its\n bounding box.\n\n \"\"\"\n\n # extract slitlet region\n slitlet2d = image_2k2k[(sltlim.bb_ns1_orig - 1):sltlim.bb_ns2_orig,\n (sltlim.bb_nc1_orig - 1):sltlim.bb_nc2_orig]\n\n # transform to float\n slitlet2d = slitlet2d.astype(np.float)\n\n # return slitlet image\n return slitlet2d"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ncomputing the slitlet boundaries using a 2D image.", "response": "def compute_slitlet_boundaries(\n filename, grism, spfilter, list_slitlets,\n size_x_medfilt, size_y_savgol,\n times_sigma_threshold,\n bounddict, debugplot=0):\n \"\"\"Compute slitlet boundaries using continuum lamp images.\n\n Parameters\n ----------\n filename : string\n Input continumm lamp image.\n grism : string\n Grism name. It must be one in EMIR_VALID_GRISMS.\n spfilter : string\n Filter name. 
It must be one in EMIR_VALID_FILTERS.\n list_slitlets : list of integers\n Number of slitlets to be updated.\n size_x_medfilt : int\n Window in the X (spectral) direction, in pixels, to apply\n the 1d median filter in order to remove bad pixels.\n size_y_savgol : int\n Window in the Y (spatial) direction to be used when using the\n 1d Savitzky-Golay filter.\n times_sigma_threshold : float\n Times sigma to detect peaks in derivatives.\n bounddict : dictionary of dictionaries\n Structure to store the boundaries.\n debugplot : int\n Determines whether intermediate computations and/or plots are\n displayed.\n\n \"\"\"\n\n # read 2D image\n hdulist = fits.open(filename)\n image_header = hdulist[0].header\n image2d = hdulist[0].data\n naxis2, naxis1 = image2d.shape\n hdulist.close()\n if debugplot >= 10:\n print('>>> NAXIS1:', naxis1)\n print('>>> NAXIS2:', naxis2)\n\n\n # ToDo: replace this by application of cosmetic defect mask!\n for j in range(1024):\n image2d[1024, j] = (image2d[1023, j] + image2d[1025, j]) / 2\n image2d[1023, j + 1024] = (image2d[1022, j + 1024] +\n image2d[1024, j + 1024]) / 2\n \n # remove path from filename\n sfilename = os.path.basename(filename)\n\n # check that the FITS file has been obtained with EMIR\n instrument = image_header['instrume']\n if instrument != 'EMIR':\n raise ValueError(\"INSTRUME keyword is not 'EMIR'!\")\n\n # read CSU configuration from FITS header\n csu_config = CsuConfiguration.define_from_fits(filename)\n\n # read DTU configuration from FITS header\n dtu_config = DtuConfiguration.define_from_fits(filename)\n\n # read grism\n grism_in_header = image_header['grism']\n if grism != grism_in_header:\n raise ValueError(\"GRISM keyword=\" + grism_in_header +\n \" is not the expected value=\" + grism)\n\n # read filter\n spfilter_in_header = image_header['filter']\n if spfilter != spfilter_in_header:\n raise ValueError(\"FILTER keyword=\" + spfilter_in_header +\n \" is not the expected value=\" + spfilter)\n\n # read rotator 
position angle\n rotang = image_header['rotang']\n\n # read date-obs\n date_obs = image_header['date-obs']\n\n for islitlet in list_slitlets:\n if debugplot < 10:\n sys.stdout.write('.')\n sys.stdout.flush()\n\n sltlim = SlitletLimits(grism, spfilter, islitlet)\n # extract slitlet2d\n slitlet2d = extract_slitlet2d(image2d, sltlim)\n if debugplot % 10 != 0:\n ximshow(slitlet2d,\n title=sfilename + \" [original]\"\n \"\\nslitlet=\" + str(islitlet) +\n \", grism=\" + grism +\n \", filter=\" + spfilter +\n \", rotang=\" + str(round(rotang, 2)),\n first_pixel=(sltlim.bb_nc1_orig, sltlim.bb_ns1_orig),\n debugplot=debugplot)\n\n # apply 1d median filtering (along the spectral direction)\n # to remove bad pixels\n size_x = size_x_medfilt\n size_y = 1\n slitlet2d_smooth = ndimage.filters.median_filter(\n slitlet2d, size=(size_y, size_x))\n\n if debugplot % 10 != 0:\n ximshow(slitlet2d_smooth,\n title=sfilename + \" [smoothed]\"\n \"\\nslitlet=\" + str(islitlet) +\n \", grism=\" + grism +\n \", filter=\" + spfilter +\n \", rotang=\" + str(round(rotang, 2)),\n first_pixel=(sltlim.bb_nc1_orig, sltlim.bb_ns1_orig),\n debugplot=debugplot)\n\n # apply 1d Savitzky-Golay filter (along the spatial direction)\n # to compute first derivative\n slitlet2d_savgol = savgol_filter(\n slitlet2d_smooth, window_length=size_y_savgol, polyorder=2,\n deriv=1, axis=0)\n\n # compute basic statistics\n q25, q50, q75 = np.percentile(slitlet2d_savgol, q=[25.0, 50.0, 75.0])\n sigmag = 0.7413 * (q75 - q25) # robust standard deviation\n if debugplot >= 10:\n print(\"q50, sigmag:\", q50, sigmag)\n\n if debugplot % 10 != 0:\n ximshow(slitlet2d_savgol,\n title=sfilename + \" [S.-G.filt.]\"\n \"\\nslitlet=\" + str(islitlet) +\n \", grism=\" + grism +\n \", filter=\" + spfilter +\n \", rotang=\" + str(round(rotang, 2)),\n first_pixel=(sltlim.bb_nc1_orig, sltlim.bb_ns1_orig),\n z1z2=(q50-times_sigma_threshold*sigmag,\n q50+times_sigma_threshold*sigmag),\n debugplot=debugplot)\n\n # identify objects in 
slitlet2d_savgol: pixels with positive\n # derivatives are identify independently from pixels with\n # negative derivaties; then the two set of potential features\n # are merged; this approach avoids some problems when, in\n # nearby regions, there are pixels with positive and negative\n # derivatives (in those circumstances a single search as\n # np.logical_or(\n # slitlet2d_savgol < q50 - times_sigma_threshold * sigmag,\n # slitlet2d_savgol > q50 + times_sigma_threshold * sigmag)\n # led to erroneous detections!)\n #\n # search for positive derivatives\n labels2d_objects_pos, no_objects_pos = ndimage.label(\n slitlet2d_savgol > q50 + times_sigma_threshold * sigmag)\n # search for negative derivatives\n labels2d_objects_neg, no_objects_neg = ndimage.label(\n slitlet2d_savgol < q50 - times_sigma_threshold * sigmag)\n # merge both sets\n non_zero_neg = np.where(labels2d_objects_neg > 0)\n labels2d_objects = np.copy(labels2d_objects_pos)\n labels2d_objects[non_zero_neg] += \\\n labels2d_objects_neg[non_zero_neg] + no_objects_pos\n no_objects = no_objects_pos + no_objects_neg\n\n if debugplot >= 10:\n print(\"Number of objects with positive derivative:\",\n no_objects_pos)\n print(\"Number of objects with negative derivative:\",\n no_objects_neg)\n print(\"Total number of objects initially found...:\", no_objects)\n\n if debugplot % 10 != 0:\n ximshow(labels2d_objects,\n z1z2=(0, no_objects),\n title=sfilename + \" [objects]\"\n \"\\nslitlet=\" + str(islitlet) +\n \", grism=\" + grism +\n \", filter=\" + spfilter +\n \", rotang=\" + str(round(rotang, 2)),\n first_pixel=(sltlim.bb_nc1_orig, sltlim.bb_ns1_orig),\n cbar_label=\"Object number\",\n debugplot=debugplot)\n\n # select boundaries as the largest objects found with\n # positive and negative derivatives\n n_der_pos = 0 # number of pixels covered by the object with deriv > 0\n i_der_pos = 0 # id of the object with deriv > 0\n n_der_neg = 0 # number of pixels covered by the object with deriv < 0\n i_der_neg = 0 # 
id of the object with deriv < 0\n for i in range(1, no_objects+1):\n xy_tmp = np.where(labels2d_objects == i)\n n_pix = len(xy_tmp[0])\n if i <= no_objects_pos:\n if n_pix > n_der_pos:\n i_der_pos = i\n n_der_pos = n_pix\n else:\n if n_pix > n_der_neg:\n i_der_neg = i\n n_der_neg = n_pix\n\n # determine which boundary is lower and which is upper\n y_center_mass_der_pos = ndimage.center_of_mass(\n slitlet2d_savgol, labels2d_objects, [i_der_pos])[0][0]\n y_center_mass_der_neg = ndimage.center_of_mass(\n slitlet2d_savgol, labels2d_objects, [i_der_neg])[0][0]\n if y_center_mass_der_pos < y_center_mass_der_neg:\n i_lower = i_der_pos\n i_upper = i_der_neg\n if debugplot >= 10:\n print(\"-> lower boundary has positive derivatives\")\n else:\n i_lower = i_der_neg\n i_upper = i_der_pos\n if debugplot >= 10:\n print(\"-> lower boundary has negative derivatives\")\n list_slices_ok = [i_lower, i_upper]\n\n # adjust individual boundaries passing the selection:\n # - select points in the image belonging to a given boundary\n # - compute weighted mean of the pixels of the boundary, column\n # by column (this reduces dramatically the number of points\n # to be fitted to determine the boundary)\n list_boundaries = []\n for k in range(2): # k=0 lower boundary, k=1 upper boundary\n # select points to be fitted for a particular boundary\n # (note: be careful with array indices and pixel\n # coordinates)\n xy_tmp = np.where(labels2d_objects == list_slices_ok[k])\n xmin = xy_tmp[1].min() # array indices (integers)\n xmax = xy_tmp[1].max() # array indices (integers)\n xfit = []\n yfit = []\n # fix range for fit\n if k == 0:\n xmineff = max(sltlim.xmin_lower_boundary_fit, xmin)\n xmaxeff = min(sltlim.xmax_lower_boundary_fit, xmax)\n else:\n xmineff = max(sltlim.xmin_upper_boundary_fit, xmin)\n xmaxeff = min(sltlim.xmax_upper_boundary_fit, xmax)\n # loop in columns of the image belonging to the boundary\n for xdum in range(xmineff, xmaxeff + 1): # array indices (integer)\n iok = 
np.where(xy_tmp[1] == xdum)\n y_tmp = xy_tmp[0][iok] + sltlim.bb_ns1_orig # image pixel\n weight = slitlet2d_savgol[xy_tmp[0][iok], xy_tmp[1][iok]]\n y_wmean = sum(y_tmp * weight) / sum(weight)\n xfit.append(xdum + sltlim.bb_nc1_orig)\n yfit.append(y_wmean)\n xfit = np.array(xfit)\n yfit = np.array(yfit)\n # declare new SpectrumTrail instance\n boundary = SpectrumTrail()\n # define new boundary\n boundary.fit(x=xfit, y=yfit, deg=sltlim.deg_boundary,\n times_sigma_reject=10,\n title=\"slit:\" + str(sltlim.islitlet) +\n \", deg=\" + str(sltlim.deg_boundary),\n debugplot=0)\n list_boundaries.append(boundary)\n\n if debugplot % 10 != 0:\n for tmp_img, tmp_label in zip(\n [slitlet2d_savgol, slitlet2d],\n [' [S.-G.filt.]', ' [original]']\n ):\n ax = ximshow(tmp_img,\n title=sfilename + tmp_label +\n \"\\nslitlet=\" + str(islitlet) +\n \", grism=\" + grism +\n \", filter=\" + spfilter +\n \", rotang=\" + str(round(rotang, 2)),\n first_pixel=(sltlim.bb_nc1_orig,\n sltlim.bb_ns1_orig),\n show=False,\n debugplot=debugplot)\n for k in range(2):\n xpol, ypol = list_boundaries[k].linspace_pix(\n start=1, stop=EMIR_NAXIS1)\n ax.plot(xpol, ypol, 'b--', linewidth=1)\n for k in range(2):\n xpol, ypol = list_boundaries[k].linspace_pix()\n ax.plot(xpol, ypol, 'g--', linewidth=4)\n # show plot\n pause_debugplot(debugplot, pltshow=True)\n\n # update bounddict\n tmp_dict = {\n 'boundary_coef_lower':\n list_boundaries[0].poly_funct.coef.tolist(),\n 'boundary_xmin_lower': list_boundaries[0].xlower_line,\n 'boundary_xmax_lower': list_boundaries[0].xupper_line,\n 'boundary_coef_upper':\n list_boundaries[1].poly_funct.coef.tolist(),\n 'boundary_xmin_upper': list_boundaries[1].xlower_line,\n 'boundary_xmax_upper': list_boundaries[1].xupper_line,\n 'csu_bar_left': csu_config.csu_bar_left(islitlet),\n 'csu_bar_right': csu_config.csu_bar_right(islitlet),\n 'csu_bar_slit_center': csu_config.csu_bar_slit_center(islitlet),\n 'csu_bar_slit_width': csu_config.csu_bar_slit_width(islitlet),\n 'rotang': 
rotang,\n 'xdtu': dtu_config.xdtu,\n 'ydtu': dtu_config.ydtu,\n 'zdtu': dtu_config.zdtu,\n 'xdtu_0': dtu_config.xdtu_0,\n 'ydtu_0': dtu_config.ydtu_0,\n 'zdtu_0': dtu_config.zdtu_0,\n 'zzz_info1': os.getlogin() + '@' + socket.gethostname(),\n 'zzz_info2': datetime.now().isoformat()\n }\n slitlet_label = \"slitlet\" + str(islitlet).zfill(2)\n if slitlet_label not in bounddict['contents']:\n bounddict['contents'][slitlet_label] = {}\n bounddict['contents'][slitlet_label][date_obs] = tmp_dict\n\n if debugplot < 10:\n print(\"\")"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nindex into a :class:`~simkit.core.Registry` to return arguments from :class:`~simkit.core.data_sources.DataRegistry` and :class:`~simkit.core.outputs.OutputRegistry` based on the calculation parameter file. :param args: Arguments field from the calculation parameter file. :param reg: Registry in which to index to get the arguments. :type reg: :class:`~simkit.core.data_sources.DataRegistry`, :class:`~simkit.core.outputs.OutputRegistry` :param ts: Time step [units of time]. :param idx: [None] Index of current time step for dynamic calculations. Required arguments for static and dynamic calculations are specified in the calculation parameter file by the \"args\" key. Arguments can be from either the data registry or the outputs registry, which is denoted by the \"data\" and \"outputs\" keys. Each argument is a dictionary whose key is the name of the argument in the formula specified and whose value can be one of the following: * The name of the argument in the registry :: {\"args\": {\"outputs\": {\"T_bypass\": \"T_bypass_diode\"}}} maps the formula argument \"T_bypass\" to the outputs registry item \"T_bypass_diode\". 
* A list with the name of the argument in the registry as the first element and a negative integer denoting the index relative to the current timestep as the second element :: {\"args\": {\"data\": {\"T_cell\": [\"Tcell\", -1]}}} indexes the previous timestep of \"Tcell\" from the data registry. * A list with the name of the argument in the registry as the first element and a list of positive integers denoting the index into the item from the registry as the second element :: {\"args\": {\"data\": {\"cov\": [\"bypass_diode_covariance\", [2]]}}} indexes the third element of \"bypass_diode_covariance\". * A list with the name of the argument in the registry as the first element, a negative real number denoting the time relative to the current timestep as the second element, and the units of the time as the third :: {\"args\": {\"data\": {\"T_cell\": [\"Tcell\", -1, 'day']}}} indexes the entire previous day of \"Tcell\".", "response": "def index_registry(args, reg, ts=None, idx=None):\n \"\"\"\n Index into a :class:`~simkit.core.Registry` to return arguments\n from :class:`~simkit.core.data_sources.DataRegistry` and\n :class:`~simkit.core.outputs.OutputRegistry` based on the\n calculation parameter file.\n\n :param args: Arguments field from the calculation parameter file.\n :param reg: Registry in which to index to get the arguments.\n :type reg: :class:`~simkit.core.data_sources.DataRegistry`,\n :class:`~simkit.core.outputs.OutputRegistry`\n :param ts: Time step [units of time].\n :param idx: [None] Index of current time step for dynamic calculations.\n\n Required arguments for static and dynamic calculations are specified in the\n calculation parameter file by the \"args\" key. Arguments can be from\n either the data registry or the outputs registry, which is denoted by the\n \"data\" and \"outputs\" keys. 
Each argument is a dictionary whose key is the\n name of the argument in the formula specified and whose value can be one of\n the following:\n\n * The name of the argument in the registry ::\n\n {\"args\": {\"outputs\": {\"T_bypass\": \"T_bypass_diode\"}}}\n\n maps the formula argument \"T_bypass\" to the outputs registry item\n \"T_bypass_diode\".\n\n * A list with the name of the argument in the registry as the first element\n and a negative integer denoting the index relative to the current\n timestep as the second element ::\n\n {\"args\": {\"data\": {\"T_cell\": [\"Tcell\", -1]}}}\n\n indexes the previous timestep of \"Tcell\" from the data registry.\n\n * A list with the name of the argument in the registry as the first element\n and a list of positive integers denoting the index into the item from the\n registry as the second element ::\n\n {\"args\": {\"data\": {\"cov\": [\"bypass_diode_covariance\", [2]]}}}\n\n indexes the third element of \"bypass_diode_covariance\".\n\n * A list with the name of the argument in the registry as the first\n element, a negative real number denoting the time relative to the current\n timestep as the second element, and the units of the time as the third ::\n\n {\"args\": {\"data\": {\"T_cell\": [\"Tcell\", -1, 'day']}}}\n\n indexes the entire previous day of \"Tcell\".\n \"\"\"\n # TODO: move this to new Registry method or __getitem__\n # TODO: replace idx with datetime object and use timeseries to interpolate\n # into data, not necessary for outputs since that will conform to idx\n rargs = dict.fromkeys(args) # make dictionary from arguments\n # iterate over arguments\n for k, v in args.iteritems():\n # var ------------------ states ------------------\n # idx ===== not None ===== ======= None =======\n # isconstant True False None True False None\n # is_dynamic no yes yes no no no\n is_dynamic = idx and not reg.isconstant.get(v)\n # switch based on string type instead of sequence\n if isinstance(v, basestring):\n # the 
default assumes the current index\n rargs[k] = reg[v][idx] if is_dynamic else reg[v]\n elif len(v) < 3:\n if reg.isconstant[v[0]]:\n # only get indices specified by v[1]\n # tuples interpreted as a list of indices, see\n # NumPy basic indexing: Dealing with variable\n # numbers of indices within programs\n rargs[k] = reg[v[0]][tuple(v[1])]\n elif v[1] < 0:\n # specified offset from current index\n rargs[k] = reg[v[0]][idx + v[1]]\n else:\n # get indices specified by v[1] at current index\n rargs[k] = reg[v[0]][idx][tuple(v[1])]\n else:\n # specified timedelta from current index\n dt = 1 + (v[1] * UREG(str(v[2])) / ts).item()\n # TODO: deal with fractions of timestep\n rargs[k] = reg[v[0]][(idx + dt):(idx + 1)]\n return rargs"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_covariance(datargs, outargs, vargs, datvar, outvar):\n # number of formula arguments that are not constant\n argn = len(vargs)\n # number of observations must be the same for all vargs\n nobs = 1\n for m in xrange(argn):\n a = vargs[m]\n try:\n a = datargs[a]\n except (KeyError, TypeError):\n a = outargs[a]\n avar = outvar[a]\n else:\n avar = datvar[a]\n for n in xrange(argn):\n b = vargs[n]\n try:\n b = datargs[b]\n except (KeyError, TypeError):\n b = outargs[b]\n c = avar.get(b, 0.0)\n try:\n nobs = max(nobs, len(c))\n except (TypeError, ValueError):\n LOGGER.debug('c of %s vs %s = %g', a, b, c)\n # covariance matrix is initially zeros\n cov = np.zeros((nobs, argn, argn))\n # loop over arguments in both directions, fill in covariance\n for m in xrange(argn):\n a = vargs[m]\n try:\n a = datargs[a]\n except (KeyError, TypeError):\n a = outargs[a]\n avar = outvar[a]\n else:\n avar = datvar[a]\n for n in xrange(argn):\n b = vargs[n]\n try:\n b = datargs[b]\n except (KeyError, TypeError):\n b = outargs[b]\n cov[:, m, n] = avar.get(b, 0.0)\n if nobs == 1:\n cov = cov.squeeze() # squeeze out any extra dimensions\n 
LOGGER.debug('covariance:\\n%r', cov)\n return cov", "response": "Get the covariance matrix of the data and output."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef calculate(cls, calc, formula_reg, data_reg, out_reg,\n timestep=None, idx=None):\n \"\"\"\n Execute calculation\n\n :param calc: calculation, with formula, args and return keys\n :type calc: dict\n :param formula_reg: Registry of formulas.\n :type formula_reg: :class:`~simkit.core.FormulaRegistry`\n :param data_reg: Data registry.\n :type data_reg: :class:`~simkit.core.data_sources.DataRegistry`\n :param out_reg: Outputs registry.\n :type out_reg: :class:`~simkit.core.outputs.OutputRegistry`\n :param timestep: simulation interval length [time], default is ``None``\n :param idx: interval index, default is ``None``\n :type idx: int\n \"\"\"\n # get the formula-key from each static calc\n formula = calc['formula'] # name of formula in calculation\n func = formula_reg[formula] # formula function object\n fargs = formula_reg.args.get(formula, []) # formula arguments\n constants = formula_reg.isconstant.get(formula) # constant args\n # formula arguments that are not constant\n vargs = [] if constants is None else [a for a in fargs if a not in constants]\n args = calc['args'] # calculation arguments\n # separate data and output arguments\n datargs, outargs = args.get('data', {}), args.get('outputs', {})\n data = index_registry(datargs, data_reg, timestep, idx)\n outputs = index_registry(outargs, out_reg, timestep, idx)\n kwargs = dict(data, **outputs) # combined data and output args\n args = [kwargs.pop(a) for a in fargs if a in kwargs]\n returns = calc['returns'] # return arguments\n # if constants is None then the covariance should also be None\n # TODO: except other values, eg: \"all\" to indicate no covariance\n if constants is None:\n cov = None # do not propagate uncertainty\n else:\n # get covariance matrix\n cov = 
cls.get_covariance(datargs, outargs, vargs,\n data_reg.variance, out_reg.variance)\n # update kwargs with covariance if it exists\n kwargs['__covariance__'] = cov\n retval = func(*args, **kwargs) # calculate function\n # update output registry with covariance and jacobian\n if cov is not None:\n # split uncertainty and jacobian from return values\n cov, jac = retval[-2:]\n retval = retval[:-2]\n # scale covariance\n scale = np.asarray(\n [1 / r.m if isinstance(r, UREG.Quantity) else 1 / r\n for r in retval]\n ) # use magnitudes if quantities\n cov = (np.swapaxes((cov.T * scale), 0, 1) * scale).T\n nret = len(retval) # number of return outputs\n for m in xrange(nret):\n a = returns[m] # name in output registry\n out_reg.variance[a] = {}\n out_reg.uncertainty[a] = {}\n out_reg.jacobian[a] = {}\n for n in xrange(nret):\n b = returns[n]\n out_reg.variance[a][b] = cov[:, m, n]\n if a == b:\n unc = np.sqrt(cov[:, m, n]) * 100 * UREG.percent\n out_reg.uncertainty[a][b] = unc\n for n in xrange(len(vargs)):\n b = vargs[n]\n try:\n b = datargs[b]\n except (KeyError, TypeError):\n b = outargs[b]\n out_reg.jacobian[a][b] = jac[:, m, n]\n LOGGER.debug('%s cov:\\n%r', a, out_reg.variance[a])\n LOGGER.debug('%s jac:\\n%r', a, out_reg.jacobian[a])\n LOGGER.debug('%s unc:\\n%r', a, out_reg.uncertainty[a])\n # if there's only one return value, squeeze out extra dimensions\n if len(retval) == 1:\n retval = retval[0]\n # put return values into output registry\n if len(returns) > 1:\n # more than one return, zip them up\n if idx is None:\n out_reg.update(zip(returns, retval))\n else:\n for k, v in zip(returns, retval):\n out_reg[k][idx] = v\n else:\n # only one return, get it by index at 0\n if idx is None:\n out_reg[returns[0]] = retval\n else:\n out_reg[returns[0]][idx] = retval", "response": "Execute a calculation for a given time interval, propagating uncertainty when covariance is available, and store the results in the output registry."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nput every point in its category. 
levels must be sorted.", "response": "def check_photometry_categorize(x, y, levels, tags=None):\n '''Put every point in its category.\n\n levels must be sorted.'''\n x = numpy.asarray(x)\n y = numpy.asarray(y)\n ys = y.copy()\n ys.sort()\n # Mean of the upper half\n m = ys[len(ys) // 2:].mean()\n y /= m\n m = 1.0\n s = ys[len(ys) // 2:].std()\n result = []\n\n if tags is None:\n tags = list(six.moves.range(len(levels) + 1))\n\n for l, t in zip(levels, tags):\n indc = y < l\n if indc.any():\n x1 = x[indc]\n y1 = y[indc]\n result.append((x1, y1, t))\n\n x = x[~indc]\n y = y[~indc]\n else:\n result.append((x, y, tags[-1]))\n\n return result, (m, s)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef client_authentication(self, request, auth=None, **kwargs):\n\n return verify_client(self.endpoint_context, request, auth)", "response": "Do client authentication\n\n :param endpoint_context: A\n :py:class:`oidcendpoint.endpoint_context.SrvInfo` instance\n :param request: Parsed request, a self.request_cls class instance\n :param authn: Authorization info\n :return: client_id or raise and exception"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nconstructs the response object for the response_args request and kwargs.", "response": "def construct(self, response_args, request, **kwargs):\n \"\"\"\n Construct the response\n\n :param response_args: response arguments\n :param request: The parsed request, a self.request_cls class instance\n :param kwargs: Extra keyword arguments\n :return: An instance of the self.response_cls class\n \"\"\"\n response_args = self.do_pre_construct(response_args, request, **kwargs)\n\n # logger.debug(\"kwargs: %s\" % sanitize(kwargs))\n response = self.response_cls(**response_args)\n\n return self.do_post_construct(response, request, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef 
set_wv_parameters(filter_name, grism_name):\n\n # protections\n if filter_name not in EMIR_VALID_FILTERS:\n raise ValueError('Unexpected filter_name:', filter_name)\n if grism_name not in EMIR_VALID_GRISMS:\n raise ValueError('Unexpected grism_name:', grism_name)\n\n # initialize output\n wv_parameters = {}\n # set parameters\n wv_parameters['crpix1_enlarged'] = 1.0\n if grism_name == \"J\" and filter_name == \"J\":\n wv_parameters['islitlet_min'] = 2\n wv_parameters['islitlet_max'] = 54\n wv_parameters['nbrightlines'] = [18]\n wv_parameters['poly_crval1_linear'] = np.polynomial.Polynomial([\n 1.25122231e+04,\n -4.83412417e+00,\n 5.31657015e-04\n ])\n wv_parameters['poly_cdelt1_linear'] = np.polynomial.Polynomial([\n 7.73411692e-01,\n -3.28055653e-05,\n -1.20813896e-08\n ])\n wv_parameters['crval1_enlarged'] = 11200.0000 # Angstroms\n wv_parameters['cdelt1_enlarged'] = 0.77 # Angstroms/pixel\n wv_parameters['naxis1_enlarged'] = EMIR_NAXIS1_ENLARGED # pixels\n wv_parameters['wvmin_expected'] = None\n wv_parameters['wvmax_expected'] = None\n wv_parameters['wvmin_useful'] = None\n wv_parameters['wvmax_useful'] = None\n elif grism_name == \"H\" and filter_name == \"H\":\n wv_parameters['islitlet_min'] = 2\n wv_parameters['islitlet_max'] = 54\n wv_parameters['nbrightlines'] = [12]\n wv_parameters['poly_crval1_linear'] = np.polynomial.Polynomial([\n 1.65536274e+04,\n -7.63517173e+00,\n 7.74790265e-04\n ])\n wv_parameters['poly_cdelt1_linear'] = np.polynomial.Polynomial([\n 1.21327515e+00,\n 1.42140078e-05,\n -1.27489119e-07\n ])\n wv_parameters['crval1_enlarged'] = 14500.0000 # Angstroms\n wv_parameters['cdelt1_enlarged'] = 1.2200 # Angstroms/pixel\n wv_parameters['naxis1_enlarged'] = EMIR_NAXIS1_ENLARGED # pixels\n wv_parameters['wvmin_expected'] = None\n wv_parameters['wvmax_expected'] = None\n wv_parameters['wvmin_useful'] = None\n wv_parameters['wvmax_useful'] = None\n elif grism_name == \"K\" and filter_name == \"Ksp\":\n wv_parameters['islitlet_min'] = 2\n 
wv_parameters['islitlet_max'] = 54\n wv_parameters['nbrightlines'] = [12]\n wv_parameters['poly_crval1_linear'] = np.polynomial.Polynomial([\n 2.21044741e+04,\n -1.08737529e+01,\n 9.05081653e-04\n ])\n wv_parameters['poly_cdelt1_linear'] = np.polynomial.Polynomial([\n 1.72696857e+00,\n 2.35009351e-05,\n -1.02164228e-07\n ])\n wv_parameters['crval1_enlarged'] = 19100.0000 # Angstroms\n wv_parameters['cdelt1_enlarged'] = 1.7300 # Angstroms/pixel\n wv_parameters['naxis1_enlarged'] = EMIR_NAXIS1_ENLARGED # pixels\n wv_parameters['wvmin_expected'] = None\n wv_parameters['wvmax_expected'] = None\n wv_parameters['wvmin_useful'] = None\n wv_parameters['wvmax_useful'] = None\n elif grism_name == \"LR\" and filter_name == \"YJ\":\n wv_parameters['islitlet_min'] = 4\n wv_parameters['islitlet_max'] = 55\n wv_parameters['nbrightlines'] = [20]\n wv_parameters['poly_crval1_linear'] = np.polynomial.Polynomial([\n 1.04272465e+04,\n -2.33176855e+01,\n 6.55101267e-03\n ])\n wv_parameters['poly_cdelt1_linear'] = np.polynomial.Polynomial([\n 3.49037727e+00,\n 1.26008332e-03,\n -4.66149678e-06\n ])\n wv_parameters['crval1_enlarged'] = 8900.0000 # Angstroms\n wv_parameters['cdelt1_enlarged'] = 3.5600 # Angstroms/pixel\n wv_parameters['naxis1_enlarged'] = 1270 # pixels\n wv_parameters['wvmin_expected'] = 4000\n wv_parameters['wvmax_expected'] = 15000\n wv_parameters['wvmin_useful'] = 8900\n wv_parameters['wvmax_useful'] = 13400\n elif grism_name == \"LR\" and filter_name == \"HK\":\n wv_parameters['islitlet_min'] = 4\n wv_parameters['islitlet_max'] = 55\n wv_parameters['nbrightlines'] = [25]\n wv_parameters['poly_crval1_linear'] = np.polynomial.Polynomial([\n 2.00704978e+04,\n -4.07702886e+01,\n -5.95247468e-03\n ])\n wv_parameters['poly_cdelt1_linear'] = np.polynomial.Polynomial([\n 6.54247758e+00,\n 2.09061196e-03,\n -2.48206609e-06\n ])\n wv_parameters['crval1_enlarged'] = 14500.0000 # Angstroms\n wv_parameters['cdelt1_enlarged'] = 6.83 # Angstroms/pixel\n 
wv_parameters['naxis1_enlarged'] = 1435 # pixels\n wv_parameters['wvmin_expected'] = 8000\n wv_parameters['wvmax_expected'] = 30000\n wv_parameters['wvmin_useful'] = 14500\n wv_parameters['wvmax_useful'] = 24300\n else:\n print(\"filter_name..:\", filter_name)\n print(\"grism_name...:\", grism_name)\n raise ValueError(\"invalid filter_name and grism_name combination\")\n\n return wv_parameters", "response": "Set wavelength calibration parameters for rectified images."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef inputs(form_args):\n element = []\n for name, value in form_args.items():\n element.append(\n ''.format(name, value))\n return \"\\n\".join(element)", "response": "Creates list of input elements"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef verify_uri(endpoint_context, request, uri_type, client_id=None):\n try:\n _cid = request[\"client_id\"]\n except KeyError:\n _cid = client_id\n\n if not _cid:\n logger.error('No client id found')\n raise UnknownClient('No client_id provided')\n\n _redirect_uri = unquote(request[uri_type])\n\n part = urlparse(_redirect_uri)\n if part.fragment:\n raise URIError(\"Contains fragment\")\n\n (_base, _query) = splitquery(_redirect_uri)\n if _query:\n _query = parse_qs(_query)\n\n match = False\n try:\n values = endpoint_context.cdb[_cid]['{}s'.format(uri_type)]\n except KeyError:\n raise ValueError('No registered {}'.format(uri_type))\n else:\n for regbase, rquery in values:\n # The URI MUST exactly match one of the Redirection URI\n if _base == regbase:\n # every registered query component must exist in the uri\n if rquery:\n if not _query:\n raise ValueError('Missing query part')\n\n for key, vals in rquery.items():\n if key not in _query:\n raise ValueError('\"{}\" not in query part'.format(key))\n\n for val in vals:\n if val not in _query[key]:\n raise ValueError('{}={} value not in query part'.format(key, 
val))\n\n # and vice versa, every query component in the uri\n # must be registered\n if _query:\n if not rquery:\n raise ValueError('No registered query part')\n\n for key, vals in _query.items():\n if key not in rquery:\n raise ValueError('\"{}\" extra in query part'.format(key))\n for val in vals:\n if val not in rquery[key]:\n raise ValueError('Extra {}={} value in query part'.format(key, val))\n match = True\n break\n if not match:\n raise RedirectURIError(\"Doesn't match any registered uris\")", "response": "Verify that the redirect URI in the request exactly matches one of the URIs registered for the client, raising an error otherwise."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_uri(endpoint_context, request, uri_type):\n if uri_type in request:\n verify_uri(endpoint_context, request, uri_type)\n uri = request[uri_type]\n else:\n try:\n _specs = endpoint_context.cdb[\n str(request[\"client_id\"])][\"{}s\".format(uri_type)]\n except KeyError:\n raise ParameterError(\n \"Missing {} and none registered\".format(uri_type))\n else:\n if len(_specs) > 1:\n raise ParameterError(\n \"Missing {} and more than one registered\".format(uri_type))\n else:\n uri = join_query(*_specs[0])\n\n return uri", "response": "Get the redirect URI, either verified from the request or taken from the client registration."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef post_authentication(self, user, request, sid, **kwargs):\n\n response_info = {}\n\n # Do the authorization\n try:\n permission = self.endpoint_context.authz(\n user, client_id=request['client_id'])\n except ToOld as err:\n return self.error_response(\n response_info, 'access_denied',\n 'Authentication too old {}'.format(err.args))\n except Exception as err:\n return self.error_response(response_info, 'access_denied',\n '{}'.format(err.args))\n else:\n try:\n self.endpoint_context.sdb.update(sid, permission=permission)\n except Exception as err:\n return self.error_response(response_info, 'server_error',\n '{}'.format(err.args))\n\n
logger.debug(\"response type: %s\" % request[\"response_type\"])\n\n if self.endpoint_context.sdb.is_session_revoked(sid):\n return self.error_response(response_info, \"access_denied\",\n \"Session is revoked\")\n\n response_info = create_authn_response(self, request, sid)\n\n try:\n redirect_uri = get_uri(self.endpoint_context, request, 'redirect_uri')\n except (RedirectURIError, ParameterError) as err:\n return self.error_response(response_info,\n 'invalid_request',\n '{}'.format(err.args))\n else:\n response_info['return_uri'] = redirect_uri\n\n # Must not use HTTP unless implicit grant type and native application\n # info = self.aresp_check(response_info['response_args'], request)\n # if isinstance(info, ResponseMessage):\n # return info\n\n _cookie = new_cookie(\n self.endpoint_context, sub=user, sid=sid, state=request['state'],\n client_id=request['client_id'],\n cookie_name=self.endpoint_context.cookie_name['session'])\n\n # Now about the response_mode. Should not be set if it's obvious\n # from the response_type. 
Knows about 'query', 'fragment' and\n # 'form_post'.\n\n if \"response_mode\" in request:\n try:\n response_info = self.response_mode(request, **response_info)\n except InvalidRequest as err:\n return self.error_response(response_info,\n 'invalid_request',\n '{}'.format(err.args))\n\n response_info['cookie'] = [_cookie]\n\n return response_info", "response": "Things that are done after a successful authentication.\n\n :param user:\n :param request:\n :param sid:\n :param kwargs:\n :return: A dictionary with 'response_args'"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef authz_part2(self, user, authn_event, request, **kwargs):\n sid = setup_session(self.endpoint_context, request, user,\n authn_event=authn_event)\n\n try:\n resp_info = self.post_authentication(user, request, sid, **kwargs)\n except Exception as err:\n return self.error_response({}, 'server_error', err)\n\n if \"check_session_iframe\" in self.endpoint_context.provider_info:\n ec = self.endpoint_context\n salt = rndstr()\n if ec.sdb.is_session_revoked(sid):\n pass\n else:\n authn_event = ec.sdb.get_authentication_event(sid) # use the last session\n _state = json.dumps({'authn_time': authn_event['authn_time']})\n\n session_cookie = ec.cookie_dealer.create_cookie(\n json.dumps(_state), typ=\"session\",\n cookie_name=ec.cookie_name['session_management'])\n\n opbs = session_cookie[ec.cookie_name['session_management']]\n\n _session_state = compute_session_state(opbs.value, salt,\n request[\"client_id\"],\n resp_info['return_uri'])\n\n if 'cookie' in resp_info:\n if isinstance(resp_info['cookie'], list):\n resp_info['cookie'].append(session_cookie)\n else:\n append_cookie(resp_info['cookie'], session_cookie)\n else:\n resp_info['cookie'] = session_cookie\n\n resp_info['response_args']['session_state'] = _session_state\n\n # Mix-Up mitigation\n resp_info['response_args']['iss'] = self.endpoint_context.issuer\n 
resp_info['response_args']['client_id'] = request['client_id']\n\n return resp_info", "response": "Second part of the authorization: runs on the server after the user has been authenticated and builds the authorization response."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef process_request(self, request_info=None, **kwargs):\n\n if isinstance(request_info, AuthorizationErrorResponse):\n return request_info\n\n _cid = request_info[\"client_id\"]\n cinfo = self.endpoint_context.cdb[_cid]\n try:\n cookie = kwargs['cookie']\n except KeyError:\n cookie = ''\n else:\n del kwargs['cookie']\n\n if proposed_user(request_info):\n kwargs['req_user'] = proposed_user(request_info)\n else:\n try:\n _login_hint = request_info['login_hint']\n except KeyError:\n pass\n else:\n if self.endpoint_context.login_hint_lookup:\n kwargs['req_user'] = self.endpoint_context.login_hint_lookup[\n _login_hint]\n\n info = self.setup_auth(request_info, request_info[\"redirect_uri\"],\n cinfo, cookie, **kwargs)\n\n if 'error' in info:\n return info\n\n try:\n _function = info['function']\n except KeyError: # already authenticated\n logger.debug(\"- authenticated -\")\n logger.debug(\"AREQ keys: %s\" % request_info.keys())\n\n res = self.authz_part2(info['user'], info['authn_event'],\n request_info, cookie=cookie)\n\n return res\n else:\n try:\n # Run the authentication function\n return {\n 'http_response': _function(**info['args']),\n 'return_uri': request_info[\"redirect_uri\"]\n }\n except Exception as err:\n logger.exception(err)\n return {'http_response': 'Internal error: {}'.format(err)}", "response": "Process a parsed authorization request, running authentication if needed, and return the response as a dictionary."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nsetting up a user session :param endpoint_context: :param areq: :param uid: :param acr: :param client_id: :param salt: :param authn_event: A already made AuthnEvent :return:", "response": "def 
setup_session(endpoint_context, areq, uid, client_id='', acr='', salt='salt',\n authn_event=None):\n \"\"\"\n Setting up a user session\n\n :param endpoint_context:\n :param areq:\n :param uid:\n :param acr:\n :param client_id:\n :param salt:\n :param authn_event: A already made AuthnEvent\n :return:\n \"\"\"\n if authn_event is None and acr:\n authn_event = AuthnEvent(uid=uid, salt=salt, authn_info=acr,\n authn_time=time.time())\n\n if not client_id:\n client_id = areq['client_id']\n\n sid = endpoint_context.sdb.create_authz_session(authn_event, areq,\n client_id=client_id,\n uid=uid)\n endpoint_context.sdb.do_sub(sid, uid, '')\n return sid"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nchecks if all attribute value pairs in a also appears in b.", "response": "def dict_match(a, b):\n \"\"\"\n Check if all attribute/value pairs in a also appears in b\n\n :param a: A dictionary\n :param b: A dictionary\n :return: True/False\n \"\"\"\n res = []\n for k, v in a.items():\n try:\n res.append(b[k] == v)\n except KeyError:\n pass\n return all(res)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nminting a new sub - subject identifier", "response": "def mint_sub(client_salt, sector_id=\"\", subject_type=\"public\",\n uid='', user_salt=''):\n \"\"\"\n Mint a new sub (subject identifier)\n\n :param authn_event: Authentication event information\n :param client_salt: client specific salt - used in pairwise\n :param sector_id: Possible sector identifier\n :param subject_type: 'public'/'pairwise'\n :return: Subject identifier\n \"\"\"\n\n if subject_type == \"public\":\n sub = hashlib.sha256(\n \"{}{}\".format(uid, user_salt).encode(\"utf-8\")).hexdigest()\n else:\n sub = pairwise_id(uid, sector_id,\n \"{}{}\".format(client_salt, user_salt))\n return sub"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef update(self, sid, **kwargs):\n item = 
self[sid]\n for attribute, value in kwargs.items():\n item[attribute] = value\n self[sid] = item", "response": "Update a specific session with the given attributes."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef update_by_token(self, token, **kwargs):\n _sid = self.handler.sid(token)\n return self.update(_sid, **kwargs)", "response": "Update the session info by token"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreplaces an old refresh_token with a new one", "response": "def replace_token(self, sid, sinfo, token_type):\n \"\"\"\n Replace an old refresh_token with a new one\n\n :param sid: session ID\n :param sinfo: session info\n :param token_type: What type of tokens should be replaced\n :return: Updated session info\n \"\"\"\n try:\n # Mint a new one\n refresh_token = self.handler[token_type](sid, sinfo=sinfo)\n except KeyError:\n pass\n else:\n # blacklist the old if there is one\n try:\n self.handler[token_type].black_list(sinfo[token_type])\n except KeyError:\n pass\n\n sinfo[token_type] = refresh_token\n\n return sinfo"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nissuing a new access token using a valid refresh token", "response": "def refresh_token(self, token, new_refresh=False):\n \"\"\"\n Issue a new access token using a valid refresh token\n\n :param token: Refresh token\n :param new_refresh: Whether a new refresh token should be minted or not\n :return: Dictionary with session info\n :raises: ExpiredToken for invalid refresh token\n WrongTokenType for wrong token type\n \"\"\"\n\n try:\n _tinfo = self.handler['refresh_token'].info(token)\n except KeyError:\n return False\n\n if is_expired(int(_tinfo['exp'])) or _tinfo['black_listed']:\n raise ExpiredToken()\n\n _sid = _tinfo['sid']\n session_info = self[_sid]\n\n session_info = self.replace_token(_sid, session_info, 'access_token')\n\n 
session_info[\"token_type\"] = self.handler['access_token'].token_type\n\n if new_refresh:\n session_info = self.replace_token(_sid, session_info,\n 'refresh_token')\n\n self[_sid] = session_info\n return session_info"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef is_token_valid(self, token):\n\n try:\n _tinfo = self.handler.info(token)\n except KeyError:\n return False\n\n if is_expired(int(_tinfo['exp'])) or _tinfo['black_listed']:\n return False\n\n # Dependent on what state the session is in.\n session_info = self[_tinfo['sid']]\n\n if session_info[\"oauth_state\"] == \"authz\":\n if _tinfo['handler'] != self.handler['code']:\n return False\n elif session_info[\"oauth_state\"] == \"token\":\n if _tinfo['handler'] != self.handler['access_token']:\n return False\n\n return True", "response": "Checks validity of a given token"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nrevoking the token from the cache.", "response": "def revoke_token(self, token, token_type=''):\n \"\"\"\n Revokes access token\n\n :param token: access token\n \"\"\"\n if token_type:\n self.handler[token_type].black_list(token)\n else:\n self.handler.black_list(token)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nrevoking a session by explicitly revoking all issued tokens", "response": "def revoke_session(self, sid='', token=''):\n \"\"\"\n Mark session as revoked but also explicitly revoke all issued tokens\n\n :param token: any token connected to the session\n :param sid: Session identifier\n \"\"\"\n if not sid:\n if token:\n sid = self.handler.sid(token)\n else:\n raise ValueError('Need one of \"sid\" or \"token\"')\n\n for typ in ['access_token', 'refresh_token', 'code']:\n try:\n self.revoke_token(self[sid][typ], typ)\n except KeyError: # If no such token has been issued\n pass\n\n self.update(sid, revoked=True)"} {"SOURCE": "codesearchnet", "instruction": "Write 
a Python 3 script for\ncomputing the longslit model coefficients for a specific RectWaveCoeff instance.", "response": "def rectwv_coeff_add_longslit_model(rectwv_coeff, geometry, debugplot=0):\n \"\"\"Compute longslit_model coefficients for RectWaveCoeff object.\n\n Parameters\n ----------\n rectwv_coeff : RectWaveCoeff instance\n Rectification and wavelength calibration coefficients for a\n particular CSU configuration corresponding to a longslit\n observation.\n geometry : TBD\n debugplot : int\n Debugging level for messages and plots. For details see\n 'numina.array.display.pause_debugplot.py'.\n\n Returns\n -------\n rectwv_coeff : RectWaveCoeff instance\n Updated object with longslit_model coefficients computed.\n\n \"\"\"\n\n logger = logging.getLogger(__name__)\n\n # check grism and filter\n grism_name = rectwv_coeff.tags['grism']\n logger.info('Grism: ' + grism_name)\n filter_name = rectwv_coeff.tags['filter']\n logger.info('Filter: ' + filter_name)\n\n # list of slitlets to be computed\n list_valid_islitlets = list(range(1, EMIR_NBARS + 1))\n for idel in rectwv_coeff.missing_slitlets:\n list_valid_islitlets.remove(idel)\n if abs(debugplot) >= 10:\n print('>>> valid slitlet numbers:\\n', list_valid_islitlets)\n\n # ---\n\n # check that the CSU configuration corresponds to longslit\n csu_bar_slit_center_list = []\n for islitlet in list_valid_islitlets:\n csu_bar_slit_center_list.append(\n rectwv_coeff.contents[islitlet - 1]['csu_bar_slit_center']\n )\n if abs(debugplot) >= 10:\n logger.debug('Checking csu_bar_slit_center values:')\n summary(np.array(csu_bar_slit_center_list), debug=True)\n pause_debugplot(debugplot)\n\n # ---\n\n # polynomial coefficients corresponding to the wavelength calibration\n\n # step 0: determine poldeg_refined, checking that it is the same for\n # all the slitlets\n poldeg_refined_list = []\n for islitlet in list_valid_islitlets:\n poldeg_refined_list.append(\n len(rectwv_coeff.contents[islitlet - 1]['wpoly_coeff']) - 1\n )\n # 
remove duplicates\n poldeg_refined_list = list(set(poldeg_refined_list))\n if len(poldeg_refined_list) != 1:\n raise ValueError('Unexpected different poldeg_refined found: ' +\n str(poldeg_refined_list))\n poldeg_refined = poldeg_refined_list[0]\n\n # step 1: compute variation of each coefficient as a function of\n # y0_reference_middle of each slitlet\n list_poly = []\n for i in range(poldeg_refined + 1):\n xp = []\n yp = []\n for islitlet in list_valid_islitlets:\n tmp_dict = rectwv_coeff.contents[islitlet - 1]\n wpoly_coeff = tmp_dict['wpoly_coeff']\n if wpoly_coeff is not None:\n xp.append(tmp_dict['y0_reference_middle'])\n yp.append(wpoly_coeff[i])\n poly, yres, reject = polfit_residuals_with_sigma_rejection(\n x=np.array(xp),\n y=np.array(yp),\n deg=2,\n times_sigma_reject=5,\n xlabel='y0_rectified',\n ylabel='coeff[' + str(i) + ']',\n title=\"Fit to refined wavelength calibration coefficients\",\n geometry=geometry,\n debugplot=debugplot\n )\n list_poly.append(poly)\n\n # step 2: use the variation of each polynomial coefficient with\n # y0_reference_middle to infer the expected wavelength calibration\n # polynomial for each rectified slitlet\n for islitlet in list_valid_islitlets:\n tmp_dict = rectwv_coeff.contents[islitlet - 1]\n y0_reference_middle = tmp_dict['y0_reference_middle']\n list_new_coeff = []\n for i in range(poldeg_refined + 1):\n new_coeff = list_poly[i](y0_reference_middle)\n list_new_coeff.append(new_coeff)\n tmp_dict['wpoly_coeff_longslit_model'] = list_new_coeff\n\n # ---\n\n # rectification transformation coefficients aij and bij\n\n # step 0: determine order_fmap, checking that it is the same for\n # all the slitlets\n order_fmap_list = []\n for islitlet in list_valid_islitlets:\n order_fmap_list.append(\n rectwv_coeff.contents[islitlet - 1]['ttd_order']\n )\n # remove duplicates\n order_fmap_list = list(set(order_fmap_list))\n if len(order_fmap_list) != 1:\n raise ValueError('Unexpected different order_fmap found')\n order_fmap = 
order_fmap_list[0]\n\n # step 1: compute variation of each coefficient as a function of\n # y0_reference_middle of each slitlet\n list_poly_ttd_aij = []\n list_poly_ttd_bij = []\n list_poly_tti_aij = []\n list_poly_tti_bij = []\n ncoef_ttd = ncoef_fmap(order_fmap)\n for i in range(ncoef_ttd):\n xp = []\n yp_ttd_aij = []\n yp_ttd_bij = []\n yp_tti_aij = []\n yp_tti_bij = []\n for islitlet in list_valid_islitlets:\n tmp_dict = rectwv_coeff.contents[islitlet - 1]\n ttd_aij = tmp_dict['ttd_aij']\n ttd_bij = tmp_dict['ttd_bij']\n tti_aij = tmp_dict['tti_aij']\n tti_bij = tmp_dict['tti_bij']\n if ttd_aij is not None:\n xp.append(tmp_dict['y0_reference_middle'])\n yp_ttd_aij.append(ttd_aij[i])\n yp_ttd_bij.append(ttd_bij[i])\n yp_tti_aij.append(tti_aij[i])\n yp_tti_bij.append(tti_bij[i])\n poly, yres, reject = polfit_residuals_with_sigma_rejection(\n x=np.array(xp),\n y=np.array(yp_ttd_aij),\n deg=5,\n times_sigma_reject=5,\n xlabel='y0_rectified',\n ylabel='ttd_aij[' + str(i) + ']',\n geometry=geometry,\n debugplot=debugplot\n )\n list_poly_ttd_aij.append(poly)\n poly, yres, reject = polfit_residuals_with_sigma_rejection(\n x=np.array(xp),\n y=np.array(yp_ttd_bij),\n deg=5,\n times_sigma_reject=5,\n xlabel='y0_rectified',\n ylabel='ttd_bij[' + str(i) + ']',\n geometry=geometry,\n debugplot=debugplot\n )\n list_poly_ttd_bij.append(poly)\n poly, yres, reject = polfit_residuals_with_sigma_rejection(\n x=np.array(xp),\n y=np.array(yp_tti_aij),\n deg=5,\n times_sigma_reject=5,\n xlabel='y0_rectified',\n ylabel='tti_aij[' + str(i) + ']',\n geometry=geometry,\n debugplot=debugplot\n )\n list_poly_tti_aij.append(poly)\n poly, yres, reject = polfit_residuals_with_sigma_rejection(\n x=np.array(xp),\n y=np.array(yp_tti_bij),\n deg=5,\n times_sigma_reject=5,\n xlabel='y0_rectified',\n ylabel='tti_bij[' + str(i) + ']',\n geometry=geometry,\n debugplot=debugplot\n )\n list_poly_tti_bij.append(poly)\n\n # step 2: use the variation of each coefficient with y0_reference_middle\n # to 
infer the expected rectification transformation for each slitlet\n for islitlet in list_valid_islitlets:\n tmp_dict = rectwv_coeff.contents[islitlet - 1]\n y0_reference_middle = tmp_dict['y0_reference_middle']\n tmp_dict['ttd_order_longslit_model'] = order_fmap\n ttd_aij_longslit_model = []\n ttd_bij_longslit_model = []\n tti_aij_longslit_model = []\n tti_bij_longslit_model = []\n for i in range(ncoef_ttd):\n new_coeff = list_poly_ttd_aij[i](y0_reference_middle)\n ttd_aij_longslit_model.append(new_coeff)\n new_coeff = list_poly_ttd_bij[i](y0_reference_middle)\n ttd_bij_longslit_model.append(new_coeff)\n new_coeff = list_poly_tti_aij[i](y0_reference_middle)\n tti_aij_longslit_model.append(new_coeff)\n new_coeff = list_poly_tti_bij[i](y0_reference_middle)\n tti_bij_longslit_model.append(new_coeff)\n tmp_dict['ttd_aij_longslit_model'] = ttd_aij_longslit_model\n tmp_dict['ttd_bij_longslit_model'] = ttd_bij_longslit_model\n tmp_dict['tti_aij_longslit_model'] = tti_aij_longslit_model\n tmp_dict['tti_bij_longslit_model'] = tti_bij_longslit_model\n\n # ---\n\n # update uuid and meta_info in output JSON structure\n rectwv_coeff.uuid = str(uuid4())\n rectwv_coeff.meta_info['creation_date'] = datetime.now().isoformat()\n\n # return updated object\n return rectwv_coeff"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\npick the authentication method and its authn class ref from the AuthN broker.", "response": "def pick_auth(endpoint_context, areq, all=False):\n \"\"\"\n Pick authentication method\n\n :param areq: AuthorizationRequest instance\n :return: A dictionary with the authentication method and its authn class ref\n \"\"\"\n\n acrs = []\n try:\n if len(endpoint_context.authn_broker) == 1:\n return endpoint_context.authn_broker.default()\n\n if \"acr_values\" in areq:\n if not isinstance(areq[\"acr_values\"], list):\n areq[\"acr_values\"] = [areq[\"acr_values\"]]\n acrs = areq[\"acr_values\"]\n else: # same as any\n try:\n acrs = 
areq[\"claims\"][\"id_token\"][\"acr\"][\"values\"]\n except KeyError:\n try:\n _ith = areq[verified_claim_name(\"id_token_hint\")]\n except KeyError:\n try:\n _hint = areq['login_hint']\n except KeyError:\n pass\n else:\n if endpoint_context.login_hint2acrs:\n acrs = endpoint_context.login_hint2acrs(_hint)\n else:\n try:\n acrs = [_ith['acr']]\n except KeyError:\n pass\n\n if not acrs:\n return endpoint_context.authn_broker.default()\n\n for acr in acrs:\n res = endpoint_context.authn_broker.pick(acr)\n logger.debug(\"Picked AuthN broker for ACR %s: %s\" % (\n str(acr), str(res)))\n if res:\n if all:\n return res\n else:\n # Return the first guess by pick.\n return res[0]\n\n except KeyError as exc:\n logger.debug(\n \"An error occurred while picking the authN broker: %s\" % str(exc))\n\n return None"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_method(self, cls_name):\n for id, spec in self.db.items():\n if spec[\"method\"].__class__.__name__ == cls_name:\n yield spec[\"method\"]", "response": "Generator that returns all registered authenticators based on a specific authentication class."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef pick(self, acr=None):\n\n if acr is None:\n # Anything else doesn't make sense\n return self.db.values()\n else:\n return self._pick_by_class_ref(acr)", "response": "Given the authentication context find zero or more authn methods\n that could be used."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nlooks for SExtractor program ('sextractor', or 'sex'). If a full path is provided, only this path is checked. Raise a SExtractorException if it failed. 
Return program and version if it succeed.", "response": "def setup(self, path=None):\n \"\"\"\n Look for SExtractor program ('sextractor', or 'sex').\n If a full path is provided, only this path is checked.\n Raise a SExtractorException if it failed.\n Return program and version if it succeed.\n \"\"\"\n\n # -- Finding sextractor program and its version\n # first look for 'sextractor', then 'sex'\n\n candidates = ['sextractor', 'sex']\n\n if (path):\n candidates = [path]\n\n selected = None\n for candidate in candidates:\n try:\n p = subprocess.Popen(candidate, shell=True,\n stdin=subprocess.PIPE,\n stdout=subprocess.PIPE,\n stderr=subprocess.STDOUT,\n close_fds=True)\n (_out_err, _in) = (p.stdout, p.stdin)\n versionline = _out_err.read()\n if (versionline.find(\"SExtractor\") != -1):\n selected = candidate\n break\n except IOError:\n continue\n\n if not(selected):\n raise SExtractorException(\n \"\"\"\n Cannot find SExtractor program. Check your PATH,\n or provide the SExtractor program path in the constructor.\n \"\"\"\n )\n\n _program = selected\n\n # print versionline\n _version_match = re.search(\"[Vv]ersion ([0-9\\.])+\", versionline)\n if not _version_match:\n raise SExtractorException(\n \"Cannot determine SExtractor version.\"\n )\n\n _version = _version_match.group()[8:]\n if not _version:\n raise SExtractorException(\n \"Cannot determine SExtractor version.\"\n )\n\n # print \"Use \" + self.program + \" [\" + self.version + \"]\"\n\n return _program, _version"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef update_config(self):\n\n # -- Write filter configuration file\n\n # First check the filter itself\n\n filter = self.config['FILTER_MASK']\n rows = len(filter)\n cols = len(filter[0]) # May raise ValueError, OK\n\n filter_f = __builtin__.open(self.config['FILTER_NAME'], 'w')\n filter_f.write(\"CONV NORM\\n\")\n filter_f.write(\"# %dx%d Generated from sextractor.py module.\\n\" %\n (rows, cols))\n for row 
in filter:\n filter_f.write(\" \".join(map(repr, row)))\n filter_f.write(\"\\n\")\n\n filter_f.close()\n\n # -- Write parameter list file\n\n parameters_f = __builtin__.open(self.config['PARAMETERS_NAME'], 'w')\n for parameter in self.config['PARAMETERS_LIST']:\n print(parameter, file=parameters_f)\n\n parameters_f.close()\n\n # -- Write NNW configuration file\n\n nnw_f = __builtin__.open(self.config['STARNNW_NAME'], 'w')\n nnw_f.write(nnw_config)\n nnw_f.close()\n\n # -- Write main configuration file\n\n main_f = __builtin__.open(self.config['CONFIG_FILE'], 'w')\n\n for key in self.config.keys():\n if (key in SExtractor._SE_config_special_keys):\n continue\n\n if (key == \"PHOT_AUTOPARAMS\"): # tuple instead of a single value\n value = \" \".join(map(str, self.config[key]))\n else:\n value = str(self.config[key])\n\n print((\"%-16s %-16s # %s\" % (key, value, SExtractor._SE_config[key]['comment'])), file=main_f)\n\n main_f.close()", "response": "Update the in - memory SExtractor configuration files according to the current in - memory SExtractor configuration."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nrun the SExtractor command line and return the current version of the SExtractor.", "response": "def run(self, file, updateconfig=True, clean=False, path=None):\n \"\"\"\n Run SExtractor.\n\n If updateconfig is True (default), the configuration\n files will be updated before running SExtractor.\n\n If clean is True (default: False), configuration files\n (if any) will be deleted after SExtractor terminates.\n\n \"\"\"\n\n if updateconfig:\n self.update_config()\n\n # Try to find SExtractor program\n # This will raise an exception if it failed\n\n self.program, self.version = self.setup(path)\n\n commandline = (\n self.program + \" -c \" + self.config['CONFIG_FILE'] + \" \" + file)\n # print commandline\n\n rcode = os.system(commandline)\n\n if (rcode):\n raise SExtractorException(\n \"SExtractor command [%s] failed.\" % 
commandline\n )\n\n if clean:\n self.clean()"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef catalog(self):\n\n output_f = SExtractorfile(self.config['CATALOG_NAME'], 'r')\n c = output_f.read()\n output_f.close()\n\n return c", "response": "Read the output catalog produced by the last SExtractor run."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nremoving the generated SExtractor files and the generated configuration files.", "response": "def clean(self, config=True, catalog=False, check=False):\n \"\"\"\n Remove the generated SExtractor files (if any).\n If config is True, remove generated configuration files.\n If catalog is True, remove the output catalog.\n If check is True, remove output check image.\n \"\"\"\n\n try:\n if (config):\n os.unlink(self.config['FILTER_NAME'])\n os.unlink(self.config['PARAMETERS_NAME'])\n os.unlink(self.config['STARNNW_NAME'])\n os.unlink(self.config['CONFIG_FILE'])\n if (catalog):\n os.unlink(self.config['CATALOG_NAME'])\n if (check):\n os.unlink(self.config['CHECKIMAGE_NAME'])\n\n except OSError:\n pass"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef refine_rectwv_coeff(input_image, rectwv_coeff,\n refine_wavecalib_mode,\n minimum_slitlet_width_mm,\n maximum_slitlet_width_mm,\n save_intermediate_results=False,\n debugplot=0):\n \"\"\"Refine RectWaveCoeff object using a catalogue of lines\n\n One and only one among refine_with_oh_lines_mode and\n refine_with_arc_lines must be different from zero.\n\n Parameters\n ----------\n input_image : HDUList object\n Input 2D image.\n rectwv_coeff : RectWaveCoeff instance\n Rectification and wavelength calibration coefficients for the\n particular CSU configuration.\n refine_wavecalib_mode : int\n Integer, indicating the type of refinement:\n 0 : no refinement\n 1 : apply the same global offset to all the slitlets (using ARC lines)\n 2 : apply individual offset to 
each slitlet (using ARC lines)\n 11 : apply the same global offset to all the slitlets (using OH lines)\n 12 : apply individual offset to each slitlet (using OH lines)\n minimum_slitlet_width_mm : float\n Minimum slitlet width (mm) for a valid slitlet.\n maximum_slitlet_width_mm : float\n Maximum slitlet width (mm) for a valid slitlet.\n save_intermediate_results : bool\n If True, save plots in PDF files\n debugplot : int\n Determines whether intermediate computations and/or plots\n are displayed. The valid codes are defined in\n numina.array.display.pause_debugplot.\n\n Returns\n -------\n refined_rectwv_coeff : RectWaveCoeff instance\n Refined rectification and wavelength calibration coefficients\n for the particular CSU configuration.\n expected_cat_image : HDUList object\n Output 2D image with the expected catalogued lines.\n\n \"\"\"\n\n logger = logging.getLogger(__name__)\n\n if save_intermediate_results:\n from matplotlib.backends.backend_pdf import PdfPages\n pdf = PdfPages('crosscorrelation.pdf')\n else:\n pdf = None\n\n # image header\n main_header = input_image[0].header\n filter_name = main_header['filter']\n grism_name = main_header['grism']\n\n # protections\n if refine_wavecalib_mode not in [1, 2, 11, 12]:\n logger.error('Wavelength calibration refinement mode={}'. 
format(\n refine_wavecalib_mode\n ))\n raise ValueError(\"Invalid wavelength calibration refinement mode\")\n\n # read tabulated lines\n if refine_wavecalib_mode in [1, 2]: # ARC lines\n if grism_name == 'LR':\n catlines_file = 'lines_argon_neon_xenon_empirical_LR.dat'\n else:\n catlines_file = 'lines_argon_neon_xenon_empirical.dat'\n dumdata = pkgutil.get_data('emirdrp.instrument.configs', catlines_file)\n arc_lines_tmpfile = StringIO(dumdata.decode('utf8'))\n catlines = np.genfromtxt(arc_lines_tmpfile)\n # define wavelength and flux as separate arrays\n catlines_all_wave = catlines[:, 0]\n catlines_all_flux = catlines[:, 1]\n mode = refine_wavecalib_mode\n elif refine_wavecalib_mode in [11, 12]: # OH lines\n dumdata = pkgutil.get_data(\n 'emirdrp.instrument.configs',\n 'Oliva_etal_2013.dat'\n )\n oh_lines_tmpfile = StringIO(dumdata.decode('utf8'))\n catlines = np.genfromtxt(oh_lines_tmpfile)\n # define wavelength and flux as separate arrays\n catlines_all_wave = np.concatenate((catlines[:, 1], catlines[:, 0]))\n catlines_all_flux = np.concatenate((catlines[:, 2], catlines[:, 2]))\n mode = refine_wavecalib_mode - 10\n else:\n raise ValueError('Unexpected mode={}'.format(refine_wavecalib_mode))\n\n # initialize output\n refined_rectwv_coeff = deepcopy(rectwv_coeff)\n\n logger.info('Computing median spectrum')\n # compute median spectrum and normalize it\n sp_median = median_slitlets_rectified(\n input_image,\n mode=2,\n minimum_slitlet_width_mm=minimum_slitlet_width_mm,\n maximum_slitlet_width_mm=maximum_slitlet_width_mm\n )[0].data\n sp_median /= sp_median.max()\n\n # determine minimum and maximum useful wavelength\n jmin, jmax = find_pix_borders(sp_median, 0)\n naxis1 = main_header['naxis1']\n naxis2 = main_header['naxis2']\n crpix1 = main_header['crpix1']\n crval1 = main_header['crval1']\n cdelt1 = main_header['cdelt1']\n xwave = crval1 + (np.arange(naxis1) + 1.0 - crpix1) * cdelt1\n if grism_name == 'LR':\n wv_parameters = set_wv_parameters(filter_name, 
grism_name)\n wave_min = wv_parameters['wvmin_useful']\n wave_max = wv_parameters['wvmax_useful']\n else:\n wave_min = crval1 + (jmin + 1 - crpix1) * cdelt1\n wave_max = crval1 + (jmax + 1 - crpix1) * cdelt1\n logger.info('Setting wave_min to {}'.format(wave_min))\n logger.info('Setting wave_max to {}'.format(wave_max))\n\n # extract subset of catalogue lines within current wavelength range\n lok1 = catlines_all_wave >= wave_min\n lok2 = catlines_all_wave <= wave_max\n catlines_reference_wave = catlines_all_wave[lok1*lok2]\n catlines_reference_flux = catlines_all_flux[lok1*lok2]\n catlines_reference_flux /= catlines_reference_flux.max()\n\n # estimate sigma to broaden catalogue lines\n csu_config = CsuConfiguration.define_from_header(main_header)\n # segregate slitlets\n list_useful_slitlets = csu_config.widths_in_range_mm(\n minwidth=minimum_slitlet_width_mm,\n maxwidth=maximum_slitlet_width_mm\n )\n # remove missing slitlets\n if len(refined_rectwv_coeff.missing_slitlets) > 0:\n for iremove in refined_rectwv_coeff.missing_slitlets:\n if iremove in list_useful_slitlets:\n list_useful_slitlets.remove(iremove)\n\n list_not_useful_slitlets = [i for i in list(range(1, EMIR_NBARS + 1))\n if i not in list_useful_slitlets]\n logger.info('list of useful slitlets: {}'.format(\n list_useful_slitlets))\n logger.info('list of not useful slitlets: {}'.format(\n list_not_useful_slitlets))\n tempwidths = np.array([csu_config.csu_bar_slit_width(islitlet)\n for islitlet in list_useful_slitlets])\n widths_summary = summary(tempwidths)\n logger.info('Statistics of useful slitlet widths (mm):')\n logger.info('- npoints....: {0:d}'.format(widths_summary['npoints']))\n logger.info('- mean.......: {0:7.3f}'.format(widths_summary['mean']))\n logger.info('- median.....: {0:7.3f}'.format(widths_summary['median']))\n logger.info('- std........: {0:7.3f}'.format(widths_summary['std']))\n logger.info('- robust_std.: {0:7.3f}'.format(widths_summary['robust_std']))\n # empirical transformation 
of slit width (mm) to pixels\n sigma_broadening = cdelt1 * widths_summary['median']\n\n # convolve location of catalogue lines to generate expected spectrum\n xwave_reference, sp_reference = convolve_comb_lines(\n catlines_reference_wave, catlines_reference_flux, sigma_broadening,\n crpix1, crval1, cdelt1, naxis1\n )\n sp_reference /= sp_reference.max()\n\n # generate image2d with expected lines\n image2d_expected_lines = np.tile(sp_reference, (naxis2, 1))\n hdu = fits.PrimaryHDU(data=image2d_expected_lines, header=main_header)\n expected_cat_image = fits.HDUList([hdu])\n\n if (abs(debugplot) % 10 != 0) or (pdf is not None):\n ax = ximplotxy(xwave, sp_median, 'C1-',\n xlabel='Wavelength (Angstroms, in vacuum)',\n ylabel='Normalized number of counts',\n title='Median spectrum',\n label='observed spectrum', show=False)\n # overplot reference catalogue lines\n ax.stem(catlines_reference_wave, catlines_reference_flux, 'C4-',\n markerfmt=' ', basefmt='C4-', label='tabulated lines')\n # overplot convolved reference lines\n ax.plot(xwave_reference, sp_reference, 'C0-',\n label='expected spectrum')\n ax.legend()\n if pdf is not None:\n pdf.savefig()\n else:\n pause_debugplot(debugplot=debugplot, pltshow=True)\n\n # compute baseline signal in sp_median\n baseline = np.percentile(sp_median[sp_median > 0], q=10)\n if (abs(debugplot) % 10 != 0) or (pdf is not None):\n fig = plt.figure()\n ax = fig.add_subplot(111)\n ax.hist(sp_median, bins=1000, log=True)\n ax.set_xlabel('Normalized number of counts')\n ax.set_ylabel('Number of pixels')\n ax.set_title('Median spectrum')\n ax.axvline(float(baseline), linestyle='--', color='grey')\n if pdf is not None:\n pdf.savefig()\n else:\n geometry = (0, 0, 640, 480)\n set_window_geometry(geometry)\n plt.show()\n # subtract baseline to sp_median (only pixels with signal above zero)\n lok = np.where(sp_median > 0)\n sp_median[lok] -= baseline\n\n # compute global offset through periodic correlation\n logger.info('Computing global offset')\n 
global_offset, fpeak = periodic_corr1d(\n sp_reference=sp_reference,\n sp_offset=sp_median,\n fminmax=None,\n naround_zero=50,\n plottitle='Median spectrum (cross-correlation)',\n pdf=pdf,\n debugplot=debugplot\n )\n logger.info('Global offset: {} pixels'.format(-global_offset))\n\n missing_slitlets = rectwv_coeff.missing_slitlets\n\n if mode == 1:\n # apply computed offset to obtain refined_rectwv_coeff_global\n for islitlet in range(1, EMIR_NBARS + 1):\n if islitlet not in missing_slitlets:\n i = islitlet - 1\n dumdict = refined_rectwv_coeff.contents[i]\n dumdict['wpoly_coeff'][0] -= global_offset*cdelt1\n\n elif mode == 2:\n # compute individual offset for each slitlet\n logger.info('Computing individual offsets')\n median_55sp = median_slitlets_rectified(input_image, mode=1)\n offset_array = np.zeros(EMIR_NBARS)\n xplot = []\n yplot = []\n xplot_skipped = []\n yplot_skipped = []\n cout = '0'\n for islitlet in range(1, EMIR_NBARS + 1):\n if islitlet in list_useful_slitlets:\n i = islitlet - 1\n sp_median = median_55sp[0].data[i, :]\n lok = np.where(sp_median > 0)\n if np.any(lok):\n baseline = np.percentile(sp_median[lok], q=10)\n sp_median[lok] -= baseline\n sp_median /= sp_median.max()\n offset_array[i], fpeak = periodic_corr1d(\n sp_reference=sp_reference,\n sp_offset=median_55sp[0].data[i, :],\n fminmax=None,\n naround_zero=50,\n plottitle='slitlet #{0} (cross-correlation)'.format(\n islitlet),\n pdf=pdf,\n debugplot=debugplot\n )\n else:\n offset_array[i] = 0.0\n dumdict = refined_rectwv_coeff.contents[i]\n dumdict['wpoly_coeff'][0] -= offset_array[i]*cdelt1\n xplot.append(islitlet)\n yplot.append(-offset_array[i])\n # second correction\n wpoly_coeff_refined = check_wlcalib_sp(\n sp=median_55sp[0].data[i, :],\n crpix1=crpix1,\n crval1=crval1-offset_array[i]*cdelt1,\n cdelt1=cdelt1,\n wv_master=catlines_reference_wave,\n coeff_ini=dumdict['wpoly_coeff'],\n naxis1_ini=EMIR_NAXIS1,\n title='slitlet #{0} (after applying offset)'.format(\n islitlet),\n 
ylogscale=False,\n pdf=pdf,\n debugplot=debugplot\n )\n dumdict['wpoly_coeff'] = wpoly_coeff_refined\n cout += '.'\n\n else:\n xplot_skipped.append(islitlet)\n yplot_skipped.append(0)\n cout += 'i'\n\n if islitlet % 10 == 0:\n if cout != 'i':\n cout = str(islitlet // 10)\n\n logger.info(cout)\n\n # show offsets with opposite sign\n stat_summary = summary(np.array(yplot))\n logger.info('Statistics of individual slitlet offsets (pixels):')\n logger.info('- npoints....: {0:d}'.format(stat_summary['npoints']))\n logger.info('- mean.......: {0:7.3f}'.format(stat_summary['mean']))\n logger.info('- median.....: {0:7.3f}'.format(stat_summary['median']))\n logger.info('- std........: {0:7.3f}'.format(stat_summary['std']))\n logger.info('- robust_std.: {0:7.3f}'.format(stat_summary[\n 'robust_std']))\n if (abs(debugplot) % 10 != 0) or (pdf is not None):\n ax = ximplotxy(xplot, yplot,\n linestyle='', marker='o', color='C0',\n xlabel='slitlet number',\n ylabel='-offset (pixels) = offset to be applied',\n title='cross-correlation result',\n show=False, **{'label': 'individual slitlets'})\n if len(xplot_skipped) > 0:\n ax.plot(xplot_skipped, yplot_skipped, 'mx')\n ax.axhline(-global_offset, linestyle='--', color='C1',\n label='global offset')\n ax.legend()\n if pdf is not None:\n pdf.savefig()\n else:\n pause_debugplot(debugplot=debugplot, pltshow=True)\n else:\n raise ValueError('Unexpected mode={}'.format(mode))\n\n # close output PDF file\n if pdf is not None:\n pdf.close()\n\n # return result\n return refined_rectwv_coeff, expected_cat_image", "response": "Refine rectification and wavelength calibration coefficients for a particular CSU configuration."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef select_unrectified_slitlet(image2d, islitlet, csu_bar_slit_center,\n params, parmodel, maskonly):\n \"\"\"Returns image with the indicated slitlet (zero anywhere else).\n\n Parameters\n ----------\n image2d 
: numpy array\n Initial image from which the slitlet data will be extracted.\n islitlet : int\n Slitlet number.\n csu_bar_slit_center : float\n CSU bar slit center.\n params : :class:`~lmfit.parameter.Parameters`\n Parameters to be employed in the prediction of the distorted\n boundaries.\n parmodel : str\n Model to be assumed. Allowed values are 'longslit' and\n 'multislit'.\n maskonly : bool\n If True, returns simply a mask (1 in the slitlet region and\n zero anywhere else).\n\n Returns\n -------\n image2d_output : numpy array\n 2D image with the pixel information corresponding to the\n selected slitlet and zero everywhere else.\n\n \"\"\"\n\n # protection\n if image2d.shape != (EMIR_NAXIS2, EMIR_NAXIS1):\n raise ValueError(\"NAXIS1, NAXIS2 unexpected for EMIR detector\")\n\n # initialize image output\n image2d_output = np.zeros_like(image2d)\n\n # expected slitlet frontiers\n list_expected_frontiers = expected_distorted_frontiers(\n islitlet, csu_bar_slit_center,\n params, parmodel, numpts=101, deg=5, debugplot=0\n )\n pol_lower_expected = list_expected_frontiers[0].poly_funct\n pol_upper_expected = list_expected_frontiers[1].poly_funct\n\n # main loop: compute for each channel the minimum and maximum scan\n for j in range(EMIR_NAXIS1):\n xchannel = j + 1\n y0_lower = pol_lower_expected(xchannel)\n y0_upper = pol_upper_expected(xchannel)\n n1, n2 = nscan_minmax_frontiers(y0_frontier_lower=y0_lower,\n y0_frontier_upper=y0_upper,\n resize=True)\n # note that n1 and n2 are scans (ranging from 1 to NAXIS2)\n if maskonly:\n image2d_output[(n1 - 1):n2, j] = np.repeat(\n [1.0], (n2 - n1 + 1)\n )\n else:\n image2d_output[(n1 - 1):n2, j] = image2d[(n1 - 1):n2, j]\n\n return image2d_output", "response": "Select unrectified slitlet from the image2d."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nevaluates rect.+wavecal. 
coefficients from arc image.", "response": "def rectwv_coeff_from_arc_image(reduced_image,\n bound_param,\n lines_catalog,\n args_nbrightlines=None,\n args_ymargin_bb=2,\n args_remove_sp_background=True,\n args_times_sigma_threshold=10,\n args_order_fmap=2,\n args_sigma_gaussian_filtering=2,\n args_margin_npix=50,\n args_poldeg_initial=3,\n args_poldeg_refined=5,\n args_interactive=False,\n args_threshold_wv=0,\n args_ylogscale=False,\n args_pdf=None,\n args_geometry=(0,0,640,480),\n debugplot=0):\n \"\"\"Evaluate rect.+wavecal. coefficients from arc image\n\n Parameters\n ----------\n reduced_image : HDUList object\n Image with preliminary basic reduction: bpm, bias, dark and\n flatfield.\n bound_param : RefinedBoundaryModelParam instance\n Refined boundary model.\n lines_catalog : Numpy array\n 2D numpy array with the contents of the master file with the\n expected arc line wavelengths.\n args_nbrightlines : int\n TBD\n args_ymargin_bb : int\n TBD\n args_remove_sp_background : bool\n TBD\n args_times_sigma_threshold : float\n TBD\n args_order_fmap : int\n TBD\n args_sigma_gaussian_filtering : float\n TBD\n args_margin_npix : int\n TBD\n args_poldeg_initial : int\n TBD\n args_poldeg_refined : int\n TBD\n args_interactive : bool\n TBD\n args_threshold_wv : float\n TBD\n args_ylogscale : bool\n TBD\n args_pdf : TBD\n args_geometry : TBD\n debugplot : int\n Debugging level for messages and plots. 
For details see\n        'numina.array.display.pause_debugplot.py'.\n\n    Returns\n    -------\n    rectwv_coeff : RectWaveCoeff instance\n        Rectification and wavelength calibration coefficients for the\n        particular CSU configuration of the input arc image.\n    reduced_55sp : HDUList object\n        Image with 55 spectra corresponding to the median spectrum for\n        each slitlet, employed to derive the wavelength calibration\n        polynomial.\n\n    \"\"\"\n\n    logger = logging.getLogger(__name__)\n\n    # protections\n    if args_interactive and args_pdf is not None:\n        logger.error('--interactive and --pdf are incompatible options')\n        raise ValueError('--interactive and --pdf are incompatible options')\n\n    # header and data array\n    header = reduced_image[0].header\n    image2d = reduced_image[0].data\n\n    # check grism and filter\n    filter_name = header['filter']\n    logger.info('Filter: ' + filter_name)\n    if filter_name != bound_param.tags['filter']:\n        raise ValueError('Filter name does not match!')\n    grism_name = header['grism']\n    logger.info('Grism: ' + grism_name)\n    if grism_name != bound_param.tags['grism']:\n        raise ValueError('Grism name does not match!')\n\n    # read the CSU configuration from the image header\n    csu_conf = CsuConfiguration.define_from_header(header)\n    logger.debug(csu_conf)\n\n    # read the DTU configuration from the image header\n    dtu_conf = DtuConfiguration.define_from_header(header)\n    logger.debug(dtu_conf)\n\n    # set boundary parameters\n    parmodel = bound_param.meta_info['parmodel']\n    params = bound_params_from_dict(bound_param.__getstate__())\n    if abs(debugplot) >= 10:\n        print('-' * 83)\n        print('* FITTED BOUND PARAMETERS')\n        params.pretty_print()\n        pause_debugplot(debugplot)\n\n    # determine parameters according to grism+filter combination\n    wv_parameters = set_wv_parameters(filter_name, grism_name)\n    islitlet_min = wv_parameters['islitlet_min']\n    islitlet_max = wv_parameters['islitlet_max']\n    if args_nbrightlines is None:\n        nbrightlines = wv_parameters['nbrightlines']\n    else:\n        
nbrightlines = [int(idum) for idum in args_nbrightlines.split(',')]\n poly_crval1_linear = wv_parameters['poly_crval1_linear']\n poly_cdelt1_linear = wv_parameters['poly_cdelt1_linear']\n wvmin_expected = wv_parameters['wvmin_expected']\n wvmax_expected = wv_parameters['wvmax_expected']\n wvmin_useful = wv_parameters['wvmin_useful']\n wvmax_useful = wv_parameters['wvmax_useful']\n\n # list of slitlets to be computed\n logger.info('list_slitlets: [' + str(islitlet_min) + ',... ' +\n str(islitlet_max) + ']')\n\n # read master arc line wavelengths (only brightest lines)\n wv_master = read_wv_master_from_array(\n master_table=lines_catalog, lines='brightest', debugplot=debugplot\n )\n\n # read master arc line wavelengths (whole data set)\n wv_master_all = read_wv_master_from_array(\n master_table=lines_catalog, lines='all', debugplot=debugplot\n )\n\n # check that the arc lines in the master file are properly sorted\n # in ascending order\n for i in range(len(wv_master_all) - 1):\n if wv_master_all[i] >= wv_master_all[i + 1]:\n logger.error('>>> wavelengths: ' +\n str(wv_master_all[i]) + ' ' +\n str(wv_master_all[i+1]))\n raise ValueError('Arc lines are not sorted in master file')\n\n # ---\n\n image2d_55sp = np.zeros((EMIR_NBARS, EMIR_NAXIS1))\n\n # compute rectification transformation and wavelength calibration\n # polynomials\n\n measured_slitlets = []\n\n cout = '0'\n for islitlet in range(1, EMIR_NBARS + 1):\n\n if islitlet_min <= islitlet <= islitlet_max:\n\n # define Slitlet2dArc object\n slt = Slitlet2dArc(\n islitlet=islitlet,\n csu_conf=csu_conf,\n ymargin_bb=args_ymargin_bb,\n params=params,\n parmodel=parmodel,\n debugplot=debugplot\n )\n\n # extract 2D image corresponding to the selected slitlet, clipping\n # the image beyond the unrectified slitlet (in order to isolate\n # the arc lines of the current slitlet; otherwise there are\n # problems with arc lines from neighbour slitlets)\n image2d_tmp = select_unrectified_slitlet(\n image2d=image2d,\n 
islitlet=islitlet,\n csu_bar_slit_center=csu_conf.csu_bar_slit_center(islitlet),\n params=params,\n parmodel=parmodel,\n maskonly=False\n )\n slitlet2d = slt.extract_slitlet2d(image2d_tmp)\n\n # subtract smooth background computed as follows:\n # - median collapsed spectrum of the whole slitlet2d\n # - independent median filtering of the previous spectrum in the\n # two halves in the spectral direction\n if args_remove_sp_background:\n spmedian = np.median(slitlet2d, axis=0)\n naxis1_tmp = spmedian.shape[0]\n jmidpoint = naxis1_tmp // 2\n sp1 = medfilt(spmedian[:jmidpoint], [201])\n sp2 = medfilt(spmedian[jmidpoint:], [201])\n spbackground = np.concatenate((sp1, sp2))\n slitlet2d -= spbackground\n\n # locate unknown arc lines\n slt.locate_unknown_arc_lines(\n slitlet2d=slitlet2d,\n times_sigma_threshold=args_times_sigma_threshold)\n\n # continue working with current slitlet only if arc lines have\n # been detected\n if slt.list_arc_lines is not None:\n\n # compute intersections between spectrum trails and arc lines\n slt.xy_spectrail_arc_intersections(slitlet2d=slitlet2d)\n\n # compute rectification transformation\n slt.estimate_tt_to_rectify(order=args_order_fmap,\n slitlet2d=slitlet2d)\n\n # rectify image\n slitlet2d_rect = slt.rectify(slitlet2d,\n resampling=2,\n transformation=1)\n\n # median spectrum and line peaks from rectified image\n sp_median, fxpeaks = slt.median_spectrum_from_rectified_image(\n slitlet2d_rect,\n sigma_gaussian_filtering=args_sigma_gaussian_filtering,\n nwinwidth_initial=5,\n nwinwidth_refined=5,\n times_sigma_threshold=5,\n npix_avoid_border=6,\n nbrightlines=nbrightlines\n )\n\n image2d_55sp[islitlet - 1, :] = sp_median\n\n # determine expected wavelength limits prior to the wavelength\n # calibration\n csu_bar_slit_center = csu_conf.csu_bar_slit_center(islitlet)\n crval1_linear = poly_crval1_linear(csu_bar_slit_center)\n cdelt1_linear = poly_cdelt1_linear(csu_bar_slit_center)\n expected_wvmin = crval1_linear - \\\n args_margin_npix * 
cdelt1_linear\n naxis1_linear = sp_median.shape[0]\n crvaln_linear = crval1_linear + \\\n (naxis1_linear - 1) * cdelt1_linear\n expected_wvmax = crvaln_linear + \\\n args_margin_npix * cdelt1_linear\n # override previous estimates when necessary\n if wvmin_expected is not None:\n expected_wvmin = wvmin_expected\n if wvmax_expected is not None:\n expected_wvmax = wvmax_expected\n\n # clip initial master arc line list with bright lines to\n # the expected wavelength range\n lok1 = expected_wvmin <= wv_master\n lok2 = wv_master <= expected_wvmax\n lok = lok1 * lok2\n wv_master_eff = wv_master[lok]\n\n # perform initial wavelength calibration\n solution_wv = wvcal_spectrum(\n sp=sp_median,\n fxpeaks=fxpeaks,\n poly_degree_wfit=args_poldeg_initial,\n wv_master=wv_master_eff,\n wv_ini_search=expected_wvmin,\n wv_end_search=expected_wvmax,\n wvmin_useful=wvmin_useful,\n wvmax_useful=wvmax_useful,\n geometry=args_geometry,\n debugplot=slt.debugplot\n )\n # store initial wavelength calibration polynomial in current\n # slitlet instance\n slt.wpoly = np.polynomial.Polynomial(solution_wv.coeff)\n pause_debugplot(debugplot)\n\n # clip initial master arc line list with all the lines to\n # the expected wavelength range\n lok1 = expected_wvmin <= wv_master_all\n lok2 = wv_master_all <= expected_wvmax\n lok = lok1 * lok2\n wv_master_all_eff = wv_master_all[lok]\n\n # clip master arc line list to useful region\n if wvmin_useful is not None:\n lok = wvmin_useful <= wv_master_all_eff\n wv_master_all_eff = wv_master_all_eff[lok]\n if wvmax_useful is not None:\n lok = wv_master_all_eff <= wvmax_useful\n wv_master_all_eff = wv_master_all_eff[lok]\n\n # refine wavelength calibration\n if args_poldeg_refined > 0:\n plottitle = '[slitlet#{}, refined]'.format(islitlet)\n poly_refined, yres_summary = refine_arccalibration(\n sp=sp_median,\n poly_initial=slt.wpoly,\n wv_master=wv_master_all_eff,\n poldeg=args_poldeg_refined,\n ntimes_match_wv=1,\n interactive=args_interactive,\n 
threshold=args_threshold_wv,\n plottitle=plottitle,\n ylogscale=args_ylogscale,\n geometry=args_geometry,\n pdf=args_pdf,\n debugplot=slt.debugplot\n )\n # store refined wavelength calibration polynomial in\n # current slitlet instance\n slt.wpoly = poly_refined\n\n # compute approximate linear values for CRVAL1 and CDELT1\n naxis1_linear = sp_median.shape[0]\n crmin1_linear = slt.wpoly(1)\n crmax1_linear = slt.wpoly(naxis1_linear)\n slt.crval1_linear = crmin1_linear\n slt.cdelt1_linear = \\\n (crmax1_linear - crmin1_linear) / (naxis1_linear - 1)\n\n # check that the trimming of wv_master and wv_master_all has\n # preserved the wavelength range [crmin1_linear, crmax1_linear]\n if crmin1_linear < expected_wvmin:\n logger.warning(\">>> islitlet: \" +str(islitlet))\n logger.warning(\"expected_wvmin: \" + str(expected_wvmin))\n logger.warning(\"crmin1_linear.: \" + str(crmin1_linear))\n logger.warning(\"WARNING: Unexpected crmin1_linear < \"\n \"expected_wvmin\")\n if crmax1_linear > expected_wvmax:\n logger.warning(\">>> islitlet: \" +str(islitlet))\n logger.warning(\"expected_wvmax: \" + str(expected_wvmax))\n logger.warning(\"crmax1_linear.: \" + str(crmax1_linear))\n logger.warning(\"WARNING: Unexpected crmax1_linear > \"\n \"expected_wvmax\")\n\n cout += '.'\n\n else:\n\n cout += 'x'\n\n if islitlet % 10 == 0:\n if cout != 'x':\n cout = str(islitlet // 10)\n\n if debugplot != 0:\n pause_debugplot(debugplot)\n\n else:\n\n # define Slitlet2dArc object\n slt = Slitlet2dArc(\n islitlet=islitlet,\n csu_conf=csu_conf,\n ymargin_bb=args_ymargin_bb,\n params=None,\n parmodel=None,\n debugplot=debugplot\n )\n\n cout += 'i'\n\n # store current slitlet in list of measured slitlets\n measured_slitlets.append(slt)\n\n logger.info(cout)\n\n # ---\n\n # generate FITS file structure with 55 spectra corresponding to the\n # median spectrum for each slitlet\n reduced_55sp = fits.PrimaryHDU(data=image2d_55sp)\n reduced_55sp.header['crpix1'] = (0.0, 'reference pixel')\n 
reduced_55sp.header['crval1'] = (0.0, 'central value at crpix1')\n    reduced_55sp.header['cdelt1'] = (1.0, 'increment')\n    reduced_55sp.header['ctype1'] = 'PIXEL'\n    reduced_55sp.header['cunit1'] = ('Pixel', 'units along axis1')\n    reduced_55sp.header['crpix2'] = (0.0, 'reference pixel')\n    reduced_55sp.header['crval2'] = (0.0, 'central value at crpix2')\n    reduced_55sp.header['cdelt2'] = (1.0, 'increment')\n    reduced_55sp.header['ctype2'] = 'PIXEL'\n    reduced_55sp.header['cunit2'] = ('Pixel', 'units along axis2')\n\n    # ---\n\n    # Generate structure to store intermediate results\n    outdict = {}\n    outdict['instrument'] = 'EMIR'\n    outdict['meta_info'] = {}\n    outdict['meta_info']['creation_date'] = datetime.now().isoformat()\n    outdict['meta_info']['description'] = \\\n        'computation of rectification and wavelength calibration polynomial ' \\\n        'coefficients for a particular CSU configuration'\n    outdict['meta_info']['recipe_name'] = 'undefined'\n    outdict['meta_info']['origin'] = {}\n    outdict['meta_info']['origin']['bound_param_uuid'] = \\\n        bound_param.uuid\n    outdict['meta_info']['origin']['arc_image_uuid'] = 'undefined'\n    outdict['tags'] = {}\n    outdict['tags']['grism'] = grism_name\n    outdict['tags']['filter'] = filter_name\n    outdict['tags']['islitlet_min'] = islitlet_min\n    outdict['tags']['islitlet_max'] = islitlet_max\n    outdict['dtu_configuration'] = dtu_conf.outdict()\n    outdict['uuid'] = str(uuid4())\n    outdict['contents'] = {}\n\n    missing_slitlets = []\n    for slt in measured_slitlets:\n\n        islitlet = slt.islitlet\n\n        if islitlet_min <= islitlet <= islitlet_max:\n\n            # avoid error when creating a python list of coefficients from\n            # numpy polynomials when the polynomials do not exist (note that\n            # the JSON format doesn't handle numpy arrays and such arrays must\n            # be transformed into native python lists)\n            if slt.wpoly is None:\n                wpoly_coeff = None\n            else:\n                wpoly_coeff = slt.wpoly.coef.tolist()\n            if slt.wpoly_longslit_model is None:\n                wpoly_coeff_longslit_model = None\n            else:\n                
wpoly_coeff_longslit_model = \\\n                    slt.wpoly_longslit_model.coef.tolist()\n\n            # avoid similar error when creating a python list of coefficients\n            # when the numpy array does not exist; note that this problem\n            # does not happen with tt?_aij_longslit_model and\n            # tt?_bij_longslit_model because the latter have already been\n            # created as native python lists\n            if slt.ttd_aij is None:\n                ttd_aij = None\n            else:\n                ttd_aij = slt.ttd_aij.tolist()\n            if slt.ttd_bij is None:\n                ttd_bij = None\n            else:\n                ttd_bij = slt.ttd_bij.tolist()\n            if slt.tti_aij is None:\n                tti_aij = None\n            else:\n                tti_aij = slt.tti_aij.tolist()\n            if slt.tti_bij is None:\n                tti_bij = None\n            else:\n                tti_bij = slt.tti_bij.tolist()\n\n            # creating temporary dictionary with the information corresponding\n            # to the current slitlet that will be saved in the JSON file\n            tmp_dict = {\n                'csu_bar_left': slt.csu_bar_left,\n                'csu_bar_right': slt.csu_bar_right,\n                'csu_bar_slit_center': slt.csu_bar_slit_center,\n                'csu_bar_slit_width': slt.csu_bar_slit_width,\n                'x0_reference': slt.x0_reference,\n                'y0_reference_lower': slt.y0_reference_lower,\n                'y0_reference_middle': slt.y0_reference_middle,\n                'y0_reference_upper': slt.y0_reference_upper,\n                'y0_reference_lower_expected':\n                    slt.y0_reference_lower_expected,\n                'y0_reference_middle_expected':\n                    slt.y0_reference_middle_expected,\n                'y0_reference_upper_expected':\n                    slt.y0_reference_upper_expected,\n                'y0_frontier_lower': slt.y0_frontier_lower,\n                'y0_frontier_upper': slt.y0_frontier_upper,\n                'y0_frontier_lower_expected': slt.y0_frontier_lower_expected,\n                'y0_frontier_upper_expected': slt.y0_frontier_upper_expected,\n                'corr_yrect_a': slt.corr_yrect_a,\n                'corr_yrect_b': slt.corr_yrect_b,\n                'min_row_rectified': slt.min_row_rectified,\n                'max_row_rectified': slt.max_row_rectified,\n                'ymargin_bb': slt.ymargin_bb,\n                'bb_nc1_orig': slt.bb_nc1_orig,\n                'bb_nc2_orig': slt.bb_nc2_orig,\n                'bb_ns1_orig': slt.bb_ns1_orig,\n                'bb_ns2_orig': slt.bb_ns2_orig,\n                'spectrail': {\n                    'poly_coef_lower':\n                        
slt.list_spectrails[\n slt.i_lower_spectrail].poly_funct.coef.tolist(),\n 'poly_coef_middle':\n slt.list_spectrails[\n slt.i_middle_spectrail].poly_funct.coef.tolist(),\n 'poly_coef_upper':\n slt.list_spectrails[\n slt.i_upper_spectrail].poly_funct.coef.tolist(),\n },\n 'frontier': {\n 'poly_coef_lower':\n slt.list_frontiers[0].poly_funct.coef.tolist(),\n 'poly_coef_upper':\n slt.list_frontiers[1].poly_funct.coef.tolist(),\n },\n 'ttd_order': slt.ttd_order,\n 'ttd_aij': ttd_aij,\n 'ttd_bij': ttd_bij,\n 'tti_aij': tti_aij,\n 'tti_bij': tti_bij,\n 'ttd_order_longslit_model': slt.ttd_order_longslit_model,\n 'ttd_aij_longslit_model': slt.ttd_aij_longslit_model,\n 'ttd_bij_longslit_model': slt.ttd_bij_longslit_model,\n 'tti_aij_longslit_model': slt.tti_aij_longslit_model,\n 'tti_bij_longslit_model': slt.tti_bij_longslit_model,\n 'wpoly_coeff': wpoly_coeff,\n 'wpoly_coeff_longslit_model': wpoly_coeff_longslit_model,\n 'crval1_linear': slt.crval1_linear,\n 'cdelt1_linear': slt.cdelt1_linear\n }\n else:\n missing_slitlets.append(islitlet)\n tmp_dict = {\n 'csu_bar_left': slt.csu_bar_left,\n 'csu_bar_right': slt.csu_bar_right,\n 'csu_bar_slit_center': slt.csu_bar_slit_center,\n 'csu_bar_slit_width': slt.csu_bar_slit_width,\n 'x0_reference': slt.x0_reference,\n 'y0_frontier_lower_expected': slt.y0_frontier_lower_expected,\n 'y0_frontier_upper_expected': slt.y0_frontier_upper_expected\n }\n slitlet_label = \"slitlet\" + str(islitlet).zfill(2)\n outdict['contents'][slitlet_label] = tmp_dict\n\n # ---\n\n # OBSOLETE\n '''\n # save JSON file needed to compute the MOS model\n with open(args.out_json.name, 'w') as fstream:\n json.dump(outdict, fstream, indent=2, sort_keys=True)\n print('>>> Saving file ' + args.out_json.name)\n '''\n\n # ---\n\n # Create object of type RectWaveCoeff with coefficients for\n # rectification and wavelength calibration\n rectwv_coeff = RectWaveCoeff(instrument='EMIR')\n rectwv_coeff.quality_control = numina.types.qc.QC.GOOD\n 
rectwv_coeff.tags['grism'] = grism_name\n    rectwv_coeff.tags['filter'] = filter_name\n    rectwv_coeff.meta_info['origin']['bound_param'] = \\\n        'uuid' + bound_param.uuid\n    rectwv_coeff.meta_info['dtu_configuration'] = outdict['dtu_configuration']\n    rectwv_coeff.total_slitlets = EMIR_NBARS\n    rectwv_coeff.missing_slitlets = missing_slitlets\n    for i in range(EMIR_NBARS):\n        islitlet = i + 1\n        dumdict = {'islitlet': islitlet}\n        cslitlet = 'slitlet' + str(islitlet).zfill(2)\n        if cslitlet in outdict['contents']:\n            dumdict.update(outdict['contents'][cslitlet])\n        else:\n            raise ValueError(\"Unexpected error\")\n        rectwv_coeff.contents.append(dumdict)\n    # debugging __getstate__ and __setstate__\n    # rectwv_coeff.writeto(args.out_json.name)\n    # print('>>> Saving file ' + args.out_json.name)\n    # check_setstate_getstate(rectwv_coeff, args.out_json.name)\n    logger.info('Generating RectWaveCoeff object with uuid=' +\n                rectwv_coeff.uuid)\n\n    return rectwv_coeff, reduced_55sp"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef offsets_from_wcs(frames, pixref):\n    '''Compute offsets between frames using WCS information.\n\n    :parameter frames: sequence of FITS filenames or file descriptors\n    :parameter pixref: numpy array used as reference pixel\n\n    The sky world coordinates are computed on *pixref* using\n    the WCS of the first frame in the sequence. Then, the\n    pixel coordinates of the reference sky world-coordinates\n    are computed for the rest of the frames.\n\n    The result is a numpy array with the difference between the\n    computed pixel value and the reference pixel. 
The first line\n    of the array is [0, 0], being the offset from the first image\n    to itself.\n\n    '''\n\n    result = numpy.zeros((len(frames), pixref.shape[1]))\n\n    with frames[0].open() as hdulist:\n        wcsh = wcs.WCS(hdulist[0].header)\n        skyref = wcsh.wcs_pix2world(pixref, 1)\n\n    for idx, frame in enumerate(frames[1:]):\n        with frame.open() as hdulist:\n            wcsh = wcs.WCS(hdulist[0].header)\n            pixval = wcsh.wcs_world2pix(skyref, 1)\n            result[idx + 1] = -(pixval[0] - pixref[0])\n\n    return result", "response": "Compute offsets between frames using WCS information."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef reference_pix_from_wcs(frames, pixref, origin=1):\n\n    result = []\n\n    with frames[0].open() as hdulist:\n        wcsh = wcs.WCS(hdulist[0].header)\n        skyref = wcsh.wcs_pix2world([pixref], origin)\n        result.append(pixref)\n\n    for idx, frame in enumerate(frames[1:]):\n        with frame.open() as hdulist:\n            wcsh = wcs.WCS(hdulist[0].header)\n            pixval = wcsh.wcs_world2pix(skyref, origin)\n            result.append(tuple(pixval[0]))\n\n    return result", "response": "Compute reference pixels between frames using WCS information."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef reference_pix_from_wcs_imgs(imgs, pixref, origin=1):\n\n    result = []\n\n    refimg = imgs[0]\n\n    wcsh = wcs.WCS(refimg[0].header)\n    skyref = wcsh.wcs_pix2world([pixref], origin)\n    result.append(pixref)\n\n    for idx, img in enumerate(imgs[1:]):\n        wcsh = wcs.WCS(img[0].header)\n        pixval = wcsh.wcs_world2pix(skyref, origin)\n        result.append(tuple(pixval[0]))\n\n    return result", "response": "Compute reference pixels between frames using WCS information."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nprocesses a logout request and returns the redirect location for the user?", "response": "def process_request(self, request=None, cookie=None, **kwargs):\n        \"\"\"\n        Perform user logout\n\n        :param 
request:\n :param cookie:\n :param kwargs:\n :return:\n \"\"\"\n _cntx = self.endpoint_context\n _sdb = _cntx.sdb\n\n if 'post_logout_redirect_uri' in request:\n if 'id_token_hint' not in request:\n raise InvalidRequest(\n \"If post_logout_redirect_uri then id_token_hint is a MUST\")\n _cookie_name = self.endpoint_context.cookie_name['session']\n try:\n part = self.endpoint_context.cookie_dealer.get_cookie_value(\n cookie, cookie_name=_cookie_name)\n except IndexError:\n raise InvalidRequest('Cookie error')\n except KeyError:\n part = None\n\n if part:\n # value is a base64 encoded JSON document\n _cookie_info = json.loads(as_unicode(b64d(as_bytes(part[0]))))\n logger.debug('Cookie info: {}'.format(_cookie_info))\n _sid = _cookie_info['sid']\n else:\n logger.debug('No relevant cookie')\n _sid = ''\n _cookie_info = {}\n\n if 'id_token_hint' in request:\n logger.debug(\n 'ID token hint: {}'.format(\n request[verified_claim_name(\"id_token_hint\")]))\n\n auds = request[verified_claim_name(\"id_token_hint\")]['aud']\n _ith_sid = ''\n _sids = _sdb.sso_db.get_sids_by_sub(\n request[verified_claim_name(\"id_token_hint\")]['sub'])\n\n if _sids is None:\n raise ValueError('Unknown subject identifier')\n\n for _isid in _sids:\n if _sdb[_isid]['authn_req']['client_id'] in auds:\n _ith_sid = _isid\n break\n\n if not _ith_sid:\n raise ValueError('Unknown subject')\n\n if _sid:\n if _ith_sid != _sid: # someone's messing with me\n raise ValueError('Wrong ID Token hint')\n else:\n _sid = _ith_sid\n else:\n auds = []\n\n try:\n session = _sdb[_sid]\n except KeyError:\n raise ValueError(\"Can't find any corresponding session\")\n\n client_id = session['authn_req']['client_id']\n # Does this match what's in the cookie ?\n if _cookie_info:\n if client_id != _cookie_info['client_id']:\n logger.warning(\n 'Client ID in authz request and in cookie does not match')\n raise ValueError(\"Wrong Client\")\n\n if auds:\n if client_id not in auds:\n raise ValueError('Incorrect ID Token 
hint')\n\n        _cinfo = _cntx.cdb[client_id]\n\n        # verify that the post_logout_redirect_uri if present are among the ones\n        # registered\n\n        try:\n            _uri = request['post_logout_redirect_uri']\n        except KeyError:\n            if _cntx.issuer.endswith('/'):\n                _uri = \"{}{}\".format(_cntx.issuer,\n                                     self.kwargs['post_logout_uri_path'])\n            else:\n                _uri = \"{}/{}\".format(_cntx.issuer,\n                                      self.kwargs['post_logout_uri_path'])\n            plur = False\n        else:\n            plur = True\n            verify_uri(_cntx, request, 'post_logout_redirect_uri',\n                       client_id=client_id)\n\n        payload = {\n            'sid': _sid, 'client_id': client_id,\n            'user': session['authn_event']['uid']\n        }\n\n        # redirect user to OP logout verification page\n        if plur and 'state' in request:\n            _uri = '{}?{}'.format(_uri, urlencode({'state': request['state']}))\n            payload['state'] = request['state']\n\n        payload['redirect_uri'] = _uri\n\n        logger.debug('JWS payload: {}'.format(payload))\n        # From me to me\n        _jws = JWT(_cntx.keyjar, iss=_cntx.issuer, lifetime=86400,\n                   sign_alg=self.kwargs['signing_alg'])\n        sjwt = _jws.pack(payload=payload, recv=_cntx.issuer)\n\n        location = '{}?{}'.format(self.kwargs['logout_verify_url'],\n                                  urlencode({'sjwt': sjwt}))\n        return {'redirect_location': location}"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nevaluate rect.+wavecal. coefficients from MOS library.", "response": "def rectwv_coeff_from_mos_library(reduced_image,\n                                  master_rectwv,\n                                  ignore_dtu_configuration=True,\n                                  debugplot=0):\n    \"\"\"Evaluate rect.+wavecal. 
coefficients from MOS library\n\n    Parameters\n    ----------\n    reduced_image : HDUList object\n        Image with preliminary basic reduction: bpm, bias, dark and\n        flatfield.\n    master_rectwv : MasterRectWave instance\n        Rectification and Wavelength Calibration Library product.\n        Contains the library of polynomial coefficients necessary\n        to generate an instance of RectWaveCoeff with the rectification\n        and wavelength calibration coefficients for the particular\n        CSU configuration.\n    ignore_dtu_configuration : bool\n        If True, ignore differences in DTU configuration.\n    debugplot : int\n        Debugging level for messages and plots. For details see\n        'numina.array.display.pause_debugplot.py'.\n\n    Returns\n    -------\n    rectwv_coeff : RectWaveCoeff instance\n        Rectification and wavelength calibration coefficients for the\n        particular CSU configuration.\n\n    \"\"\"\n\n    logger = logging.getLogger(__name__)\n    logger.info('Computing expected RectWaveCoeff from CSU configuration')\n\n    # header\n    header = reduced_image[0].header\n\n    # read the CSU configuration from the image header\n    csu_conf = CsuConfiguration.define_from_header(header)\n\n    # read the DTU configuration from the image header\n    dtu_conf = DtuConfiguration.define_from_header(header)\n\n    # retrieve DTU configuration from MasterRectWave object\n    dtu_conf_calib = DtuConfiguration.define_from_dictionary(\n        master_rectwv.meta_info['dtu_configuration']\n    )\n    # check that the DTU configuration employed to obtain the calibration\n    # corresponds to the DTU configuration in the input FITS file\n    if dtu_conf != dtu_conf_calib:\n        if ignore_dtu_configuration:\n            logger.warning('DTU configuration differences found!')\n        else:\n            logger.info('DTU configuration from image header:')\n            logger.info(dtu_conf)\n            logger.info('DTU configuration from master calibration:')\n            logger.info(dtu_conf_calib)\n            raise ValueError(\"DTU configurations do not match!\")\n    else:\n        logger.info('DTU configuration match!')\n\n    # check grism and filter\n    filter_name = 
header['filter']\n logger.debug('Filter: ' + filter_name)\n if filter_name != master_rectwv.tags['filter']:\n raise ValueError('Filter name does not match!')\n grism_name = header['grism']\n logger.debug('Grism: ' + grism_name)\n if grism_name != master_rectwv.tags['grism']:\n raise ValueError('Grism name does not match!')\n\n # valid slitlet numbers\n list_valid_islitlets = list(range(1, EMIR_NBARS + 1))\n for idel in master_rectwv.missing_slitlets:\n list_valid_islitlets.remove(idel)\n logger.debug('valid slitlet numbers: ' + str(list_valid_islitlets))\n\n # initialize intermediate dictionary with relevant information\n # (note: this dictionary corresponds to an old structure employed to\n # store the information in a JSON file; this is no longer necessary,\n # but here we reuse that dictionary for convenience)\n outdict = {}\n outdict['instrument'] = 'EMIR'\n outdict['meta_info'] = {}\n outdict['meta_info']['creation_date'] = datetime.now().isoformat()\n outdict['meta_info']['description'] = \\\n 'computation of rectification and wavelength calibration polynomial ' \\\n 'coefficients for a particular CSU configuration from a MOS model '\n outdict['meta_info']['recipe_name'] = 'undefined'\n outdict['meta_info']['origin'] = {}\n outdict['meta_info']['origin']['fits_frame_uuid'] = 'TBD'\n outdict['meta_info']['origin']['rect_wpoly_mos_uuid'] = \\\n master_rectwv.uuid\n outdict['meta_info']['origin']['fitted_boundary_param_uuid'] = \\\n master_rectwv.meta_info['origin']['bound_param']\n outdict['tags'] = {}\n outdict['tags']['grism'] = grism_name\n outdict['tags']['filter'] = filter_name\n outdict['dtu_configuration'] = dtu_conf.outdict()\n outdict['uuid'] = str(uuid4())\n outdict['contents'] = {}\n\n # compute rectification and wavelength calibration coefficients for each\n # slitlet according to its csu_bar_slit_center value\n for islitlet in list_valid_islitlets:\n cslitlet = 'slitlet' + str(islitlet).zfill(2)\n\n # csu_bar_slit_center of current slitlet in 
initial FITS image\n csu_bar_slit_center = csu_conf.csu_bar_slit_center(islitlet)\n\n # input data structure\n tmpdict = master_rectwv.contents[islitlet - 1]\n list_csu_bar_slit_center = tmpdict['list_csu_bar_slit_center']\n\n # check extrapolations\n if csu_bar_slit_center < min(list_csu_bar_slit_center):\n logger.warning('extrapolating table with ' + cslitlet)\n logger.warning('minimum tabulated value: ' +\n str(min(list_csu_bar_slit_center)))\n logger.warning('sought value...........: ' +\n str(csu_bar_slit_center))\n if csu_bar_slit_center > max(list_csu_bar_slit_center):\n logger.warning('extrapolating table with ' + cslitlet)\n logger.warning('maximum tabulated value: ' +\n str(max(list_csu_bar_slit_center)))\n logger.warning('sought value...........: ' +\n str(csu_bar_slit_center))\n\n # rectification coefficients\n ttd_order = tmpdict['ttd_order']\n ncoef = ncoef_fmap(ttd_order)\n outdict['contents'][cslitlet] = {}\n outdict['contents'][cslitlet]['ttd_order'] = ttd_order\n outdict['contents'][cslitlet]['ttd_order_longslit_model'] = None\n for keycoef in ['ttd_aij', 'ttd_bij', 'tti_aij', 'tti_bij']:\n coef_out = []\n for icoef in range(ncoef):\n ccoef = str(icoef).zfill(2)\n list_cij = tmpdict['list_' + keycoef + '_' + ccoef]\n funinterp_coef = interp1d(list_csu_bar_slit_center,\n list_cij,\n kind='linear',\n fill_value='extrapolate')\n # note: funinterp_coef expects a numpy array\n dum = funinterp_coef([csu_bar_slit_center])\n coef_out.append(dum[0])\n outdict['contents'][cslitlet][keycoef] = coef_out\n outdict['contents'][cslitlet][keycoef + '_longslit_model'] = None\n\n # wavelength calibration coefficients\n ncoef = tmpdict['wpoly_degree'] + 1\n wpoly_coeff = []\n for icoef in range(ncoef):\n ccoef = str(icoef).zfill(2)\n list_cij = tmpdict['list_wpoly_coeff_' + ccoef]\n funinterp_coef = interp1d(list_csu_bar_slit_center,\n list_cij,\n kind='linear',\n fill_value='extrapolate')\n # note: funinterp_coef expects a numpy array\n dum = 
funinterp_coef([csu_bar_slit_center])\n wpoly_coeff.append(dum[0])\n outdict['contents'][cslitlet]['wpoly_coeff'] = wpoly_coeff\n outdict['contents'][cslitlet]['wpoly_coeff_longslit_model'] = None\n\n # update cdelt1_linear and crval1_linear\n wpoly_function = np.polynomial.Polynomial(wpoly_coeff)\n crmin1_linear = wpoly_function(1)\n crmax1_linear = wpoly_function(EMIR_NAXIS1)\n cdelt1_linear = (crmax1_linear - crmin1_linear) / (EMIR_NAXIS1 - 1)\n crval1_linear = crmin1_linear\n outdict['contents'][cslitlet]['crval1_linear'] = crval1_linear\n outdict['contents'][cslitlet]['cdelt1_linear'] = cdelt1_linear\n\n # update CSU keywords\n outdict['contents'][cslitlet]['csu_bar_left'] = \\\n csu_conf.csu_bar_left(islitlet)\n outdict['contents'][cslitlet]['csu_bar_right'] = \\\n csu_conf.csu_bar_right(islitlet)\n outdict['contents'][cslitlet]['csu_bar_slit_center'] = \\\n csu_conf.csu_bar_slit_center(islitlet)\n outdict['contents'][cslitlet]['csu_bar_slit_width'] = \\\n csu_conf.csu_bar_slit_width(islitlet)\n\n # for each slitlet compute spectrum trails and frontiers using the\n # fitted boundary parameters\n fitted_bound_param_json = {\n 'contents': master_rectwv.meta_info['refined_boundary_model']\n }\n parmodel = fitted_bound_param_json['contents']['parmodel']\n fitted_bound_param_json.update({'meta_info': {'parmodel': parmodel}})\n params = bound_params_from_dict(fitted_bound_param_json)\n if abs(debugplot) >= 10:\n logger.debug('Fitted boundary parameters:')\n logger.debug(params.pretty_print())\n for islitlet in list_valid_islitlets:\n cslitlet = 'slitlet' + str(islitlet).zfill(2)\n # csu_bar_slit_center of current slitlet in initial FITS image\n csu_bar_slit_center = csu_conf.csu_bar_slit_center(islitlet)\n # compute and store x0_reference value\n x0_reference = float(EMIR_NAXIS1) / 2.0 + 0.5\n outdict['contents'][cslitlet]['x0_reference'] = x0_reference\n # compute spectrum trails (lower, middle and upper)\n list_spectrails = expected_distorted_boundaries(\n 
islitlet, csu_bar_slit_center,\n [0, 0.5, 1], params, parmodel,\n numpts=101, deg=5, debugplot=0\n )\n # store spectrails in output JSON file\n outdict['contents'][cslitlet]['spectrail'] = {}\n for idum, cdum in zip(range(3), ['lower', 'middle', 'upper']):\n outdict['contents'][cslitlet]['spectrail']['poly_coef_' + cdum] = \\\n list_spectrails[idum].poly_funct.coef.tolist()\n outdict['contents'][cslitlet]['y0_reference_' + cdum] = \\\n list_spectrails[idum].poly_funct(x0_reference)\n # compute frontiers (lower, upper)\n list_frontiers = expected_distorted_frontiers(\n islitlet, csu_bar_slit_center,\n params, parmodel,\n numpts=101, deg=5, debugplot=0\n )\n # store frontiers in output JSON\n outdict['contents'][cslitlet]['frontier'] = {}\n for idum, cdum in zip(range(2), ['lower', 'upper']):\n outdict['contents'][cslitlet]['frontier']['poly_coef_' + cdum] = \\\n list_frontiers[idum].poly_funct.coef.tolist()\n outdict['contents'][cslitlet]['y0_frontier_' + cdum] = \\\n list_frontiers[idum].poly_funct(x0_reference)\n\n # store bounding box parameters for each slitlet\n xdum = np.linspace(1, EMIR_NAXIS1, num=EMIR_NAXIS1)\n for islitlet in list_valid_islitlets:\n cslitlet = 'slitlet' + str(islitlet).zfill(2)\n # parameters already available in the input JSON file\n for par in ['bb_nc1_orig', 'bb_nc2_orig', 'ymargin_bb']:\n outdict['contents'][cslitlet][par] = \\\n master_rectwv.contents[islitlet - 1][par]\n # estimate bb_ns1_orig and bb_ns2_orig using the already computed\n # frontiers and the value of ymargin_bb, following the same approach\n # employed in Slitlet2dArc.__init__()\n poly_lower_frontier = np.polynomial.Polynomial(\n outdict['contents'][cslitlet]['frontier']['poly_coef_lower']\n )\n poly_upper_frontier = np.polynomial.Polynomial(\n outdict['contents'][cslitlet]['frontier']['poly_coef_upper']\n )\n ylower = poly_lower_frontier(xdum)\n yupper = poly_upper_frontier(xdum)\n ymargin_bb = master_rectwv.contents[islitlet - 1]['ymargin_bb']\n bb_ns1_orig = 
int(ylower.min() + 0.5) - ymargin_bb\n if bb_ns1_orig < 1:\n bb_ns1_orig = 1\n bb_ns2_orig = int(yupper.max() + 0.5) + ymargin_bb\n if bb_ns2_orig > EMIR_NAXIS2:\n bb_ns2_orig = EMIR_NAXIS2\n outdict['contents'][cslitlet]['bb_ns1_orig'] = bb_ns1_orig\n outdict['contents'][cslitlet]['bb_ns2_orig'] = bb_ns2_orig\n\n # additional parameters (see Slitlet2dArc.__init__)\n for islitlet in list_valid_islitlets:\n cslitlet = 'slitlet' + str(islitlet).zfill(2)\n # define expected frontier ordinates at x0_reference for the rectified\n # image imposing the vertical length of the slitlet to be constant\n # and equal to EMIR_NPIXPERSLIT_RECTIFIED\n outdict['contents'][cslitlet]['y0_frontier_lower_expected'] = \\\n expected_y0_lower_frontier(islitlet)\n outdict['contents'][cslitlet]['y0_frontier_upper_expected'] = \\\n expected_y0_upper_frontier(islitlet)\n # compute linear transformation to place the rectified slitlet at\n # the center of the current slitlet bounding box\n tmpdict = outdict['contents'][cslitlet]\n xdum1 = tmpdict['y0_frontier_lower']\n ydum1 = tmpdict['y0_frontier_lower_expected']\n xdum2 = tmpdict['y0_frontier_upper']\n ydum2 = tmpdict['y0_frontier_upper_expected']\n corr_yrect_b = (ydum2 - ydum1) / (xdum2 - xdum1)\n corr_yrect_a = ydum1 - corr_yrect_b * xdum1\n # compute expected location of rectified boundaries\n y0_reference_lower_expected = \\\n corr_yrect_a + corr_yrect_b * tmpdict['y0_reference_lower']\n y0_reference_middle_expected = \\\n corr_yrect_a + corr_yrect_b * tmpdict['y0_reference_middle']\n y0_reference_upper_expected = \\\n corr_yrect_a + corr_yrect_b * tmpdict['y0_reference_upper']\n # shift transformation to center the rectified slitlet within the\n # slitlet bounding box\n ydummid = (ydum1 + ydum2) / 2\n ioffset = int(\n ydummid - (tmpdict['bb_ns1_orig'] + tmpdict['bb_ns2_orig']) / 2.0)\n corr_yrect_a -= ioffset\n # minimum and maximum row in the rectified slitlet encompassing\n # EMIR_NPIXPERSLIT_RECTIFIED pixels\n # a) scan number (in 
pixels, from 1 to NAXIS2)\n xdum1 = corr_yrect_a + \\\n corr_yrect_b * tmpdict['y0_frontier_lower']\n xdum2 = corr_yrect_a + \\\n corr_yrect_b * tmpdict['y0_frontier_upper']\n # b) row number (starting from zero)\n min_row_rectified = \\\n int((round(xdum1 * 10) + 5) / 10) - tmpdict['bb_ns1_orig']\n max_row_rectified = \\\n int((round(xdum2 * 10) - 5) / 10) - tmpdict['bb_ns1_orig']\n # save previous results in outdict\n outdict['contents'][cslitlet]['y0_reference_lower_expected'] = \\\n y0_reference_lower_expected\n outdict['contents'][cslitlet]['y0_reference_middle_expected'] = \\\n y0_reference_middle_expected\n outdict['contents'][cslitlet]['y0_reference_upper_expected'] = \\\n y0_reference_upper_expected\n outdict['contents'][cslitlet]['corr_yrect_a'] = corr_yrect_a\n outdict['contents'][cslitlet]['corr_yrect_b'] = corr_yrect_b\n outdict['contents'][cslitlet]['min_row_rectified'] = min_row_rectified\n outdict['contents'][cslitlet]['max_row_rectified'] = max_row_rectified\n\n # ---\n\n # Create object of type RectWaveCoeff with coefficients for\n # rectification and wavelength calibration\n rectwv_coeff = RectWaveCoeff(instrument='EMIR')\n rectwv_coeff.quality_control = numina.types.qc.QC.GOOD\n rectwv_coeff.tags['grism'] = grism_name\n rectwv_coeff.tags['filter'] = filter_name\n rectwv_coeff.meta_info['origin']['bound_param'] = \\\n master_rectwv.meta_info['origin']['bound_param']\n rectwv_coeff.meta_info['origin']['master_rectwv'] = \\\n 'uuid' + master_rectwv.uuid\n rectwv_coeff.meta_info['dtu_configuration'] = outdict['dtu_configuration']\n rectwv_coeff.total_slitlets = EMIR_NBARS\n for i in range(EMIR_NBARS):\n islitlet = i + 1\n dumdict = {'islitlet': islitlet}\n cslitlet = 'slitlet' + str(islitlet).zfill(2)\n if cslitlet in outdict['contents']:\n dumdict.update(outdict['contents'][cslitlet])\n else:\n dumdict.update({\n 'csu_bar_left': csu_conf.csu_bar_left(islitlet),\n 'csu_bar_right': csu_conf.csu_bar_right(islitlet),\n 'csu_bar_slit_center': 
csu_conf.csu_bar_slit_center(islitlet),\n 'csu_bar_slit_width': csu_conf.csu_bar_slit_width(islitlet),\n 'x0_reference': float(EMIR_NAXIS1) / 2.0 + 0.5,\n 'y0_frontier_lower_expected':\n expected_y0_lower_frontier(islitlet),\n 'y0_frontier_upper_expected':\n expected_y0_upper_frontier(islitlet)\n })\n rectwv_coeff.missing_slitlets.append(islitlet)\n rectwv_coeff.contents.append(dumdict)\n # debugging __getstate__ and __setstate__\n # rectwv_coeff.writeto(args.out_rect_wpoly.name)\n # print('>>> Saving file ' + args.out_rect_wpoly.name)\n # check_setstate_getstate(rectwv_coeff, args.out_rect_wpoly.name)\n logger.info('Generating RectWaveCoeff object with uuid=' +\n rectwv_coeff.uuid)\n\n return rectwv_coeff"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef median_slitlets_rectified(\n input_image,\n mode=0,\n minimum_slitlet_width_mm=EMIR_MINIMUM_SLITLET_WIDTH_MM,\n maximum_slitlet_width_mm=EMIR_MAXIMUM_SLITLET_WIDTH_MM,\n debugplot=0\n ):\n \"\"\"Compute median spectrum for each slitlet.\n\n Parameters\n ----------\n input_image : HDUList object\n Input 2D image.\n mode : int\n Indicate desired result:\n 0 : image with the same size as the input image, with the\n median spectrum of each slitlet spanning all the spectra\n of the corresponding slitlet\n 1 : image with 55 spectra, containing the median spectra of\n each slitlet\n 2 : single collapsed median spectrum, using exclusively the\n useful slitlets from the input image\n minimum_slitlet_width_mm : float\n Minimum slitlet width (mm) for a valid slitlet.\n maximum_slitlet_width_mm : float\n Maximum slitlet width (mm) for a valid slitlet.\n debugplot : int\n Determines whether intermediate computations and/or plots\n are displayed. 
The valid codes are defined in\n numina.array.display.pause_debugplot.\n\n Returns\n -------\n image_median : HDUList object\n Output image.\n\n \"\"\"\n\n image_header = input_image[0].header\n image2d = input_image[0].data\n\n # check image dimensions\n naxis2_expected = EMIR_NBARS * EMIR_NPIXPERSLIT_RECTIFIED\n\n naxis2, naxis1 = image2d.shape\n if naxis2 != naxis2_expected:\n raise ValueError(\"NAXIS2={0} should be {1}\".format(\n naxis2, naxis2_expected\n ))\n\n # check that the FITS file has been obtained with EMIR\n instrument = image_header['instrume']\n if instrument != 'EMIR':\n raise ValueError(\"INSTRUME keyword is not 'EMIR'!\")\n\n # initialize output image\n if mode == 0:\n image2d_median = np.zeros((naxis2, naxis1))\n else:\n image2d_median = np.zeros((EMIR_NBARS, naxis1))\n\n # main loop\n for i in range(EMIR_NBARS):\n ns1 = i * EMIR_NPIXPERSLIT_RECTIFIED + 1\n ns2 = ns1 + EMIR_NPIXPERSLIT_RECTIFIED - 1\n sp_median = np.median(image2d[(ns1-1):ns2, :], axis=0)\n\n if mode == 0:\n image2d_median[(ns1-1):ns2, :] = np.tile(\n sp_median, (EMIR_NPIXPERSLIT_RECTIFIED, 1)\n )\n else:\n image2d_median[i] = np.copy(sp_median)\n\n if mode == 2:\n # get CSU configuration from FITS header\n csu_config = CsuConfiguration.define_from_header(image_header)\n\n # define wavelength calibration parameters\n crpix1 = image_header['crpix1']\n crval1 = image_header['crval1']\n cdelt1 = image_header['cdelt1']\n\n # segregate slitlets\n list_useful_slitlets = csu_config.widths_in_range_mm(\n minwidth=minimum_slitlet_width_mm,\n maxwidth=maximum_slitlet_width_mm\n )\n list_not_useful_slitlets = [i for i in list(range(1, EMIR_NBARS + 1))\n if i not in list_useful_slitlets]\n if abs(debugplot) != 0:\n print('>>> list_useful_slitlets....:', list_useful_slitlets)\n print('>>> list_not_useful_slitlets:', list_not_useful_slitlets)\n\n # define mask from array data\n mask2d, borders = define_mask_borders(image2d_median, sought_value=0)\n if abs(debugplot) % 10 != 0:\n 
ximshow(mask2d.astype(int), z1z2=(-.2, 1.2), crpix1=crpix1,\n crval1=crval1, cdelt1=cdelt1, debugplot=debugplot)\n\n # update mask with unused slitlets\n for islitlet in list_not_useful_slitlets:\n mask2d[islitlet - 1, :] = np.array([True] * naxis1)\n if abs(debugplot) % 10 != 0:\n ximshow(mask2d.astype(int), z1z2=(-.2, 1.2), crpix1=crpix1,\n crval1=crval1, cdelt1=cdelt1, debugplot=debugplot)\n\n # useful image pixels\n image2d_masked = image2d_median * (1 - mask2d.astype(int))\n if abs(debugplot) % 10 != 0:\n ximshow(image2d_masked, crpix1=crpix1, crval1=crval1,\n cdelt1=cdelt1, debugplot=debugplot)\n\n # masked image\n image2d_masked = np.ma.masked_array(image2d_median, mask=mask2d)\n # median spectrum\n image1d_median = np.ma.median(image2d_masked, axis=0).data\n\n image_median = fits.PrimaryHDU(data=image1d_median,\n header=image_header)\n\n else:\n image_median = fits.PrimaryHDU(data=image2d_median,\n header=image_header)\n\n return fits.HDUList([image_median])", "response": "Compute the median spectrum of each slitlet in the input image."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngets the networks associated with the resource description.", "response": "def get_networks(self):\n \"\"\"Get the networks associated with the resource description.\n\n Returns\n list of tuple roles, network\n \"\"\"\n networks = self.c_resources[\"networks\"]\n result = []\n for net in networks:\n _c_network = net.get(\"_c_network\")\n if _c_network is None:\n continue\n roles = utils.get_roles_as_list(net)\n result.append((roles, _c_network))\n return result"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_roles(self):\n\n machines = self.c_resources[\"machines\"]\n result = {}\n for desc in machines:\n roles = utils.get_roles_as_list(desc)\n hosts = self._denormalize(desc)\n for role in roles:\n result.setdefault(role, [])\n result[role].extend(hosts)\n return result", "response": "Get 
the roles associated with the hosts."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nstarting a client WS connection.", "response": "async def start_client(self,\n sock: anyio.abc.SocketStream,\n addr,\n path: str,\n headers: Optional[List] = None,\n subprotocols: Optional[List[str]] = None):\n \"\"\"Start a client WS connection on this socket.\n\n Returns: the AcceptConnection message.\n \"\"\"\n self._sock = sock\n self._connection = WSConnection(ConnectionType.CLIENT)\n if headers is None:\n headers = []\n if subprotocols is None:\n subprotocols = []\n data = self._connection.send(\n Request(\n host=addr[0],\n target=path,\n extra_headers=headers,\n subprotocols=subprotocols))\n await self._sock.send_all(data)\n\n assert self._scope is None\n self._scope = True\n try:\n event = await self._next_event()\n if not isinstance(event, AcceptConnection):\n raise ConnectionError(\"Failed to establish a connection\",\n event)\n return event\n finally:\n self._scope = None"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nstarts a server WS connection on this socket.", "response": "async def start_server(self, sock: anyio.abc.SocketStream, filter=None): # pylint: disable=W0622\n \"\"\"Start a server WS connection on this socket.\n\n Filter: an async callable that gets passed the initial Request.\n It may return an AcceptConnection message, a bool, or a string (the\n subprotocol to use).\n Returns: the Request message.\n \"\"\"\n assert self._scope is None\n self._scope = True\n self._sock = sock\n self._connection = WSConnection(ConnectionType.SERVER)\n\n try:\n event = await self._next_event()\n if not isinstance(event, Request):\n raise ConnectionError(\"Failed to establish a connection\",\n event)\n msg = None\n if filter is not None:\n msg = await filter(event)\n if not msg:\n msg = RejectConnection()\n elif msg is True:\n msg = None\n elif isinstance(msg, str):\n msg = 
AcceptConnection(subprotocol=msg)\n if not msg:\n msg = AcceptConnection(subprotocol=event.subprotocols[0])\n data = self._connection.send(msg)\n await self._sock.send_all(data)\n if not isinstance(msg, AcceptConnection):\n raise ConnectionError(\"Not accepted\", msg)\n finally:\n self._scope = None"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngets the next event from the queue.", "response": "async def _next_event(self):\n \"\"\"\n Gets the next event.\n \"\"\"\n while True:\n for event in self._connection.events():\n if isinstance(event, Message):\n # check if we need to buffer\n if event.message_finished:\n return self._wrap_data(self._gather_buffers(event))\n self._buffer(event)\n break # exit for loop\n else:\n return event\n\n data = await self._sock.receive_some(4096)\n if not data:\n return CloseConnection(code=500, reason=\"Socket closed\")\n self._connection.receive_data(data)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\nasync def send(self, data: Union[bytes, str], final: bool = True):\n MsgType = TextMessage if isinstance(data, str) else BytesMessage\n data = MsgType(data=data, message_finished=final)\n data = self._connection.send(event=data)\n await self._sock.send_all(data)", "response": "Sends some data down the connection."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _buffer(self, event: Message):\n if isinstance(event, BytesMessage):\n self._byte_buffer.write(event.data)\n elif isinstance(event, TextMessage):\n self._string_buffer.write(event.data)", "response": "Buffers an event to the internal buffer."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _gather_buffers(self, event: Message):\n if isinstance(event, BytesMessage):\n buf = self._byte_buffer\n else:\n buf = self._string_buffer\n\n # 
yay for code shortening\n buf.write(event.data)\n buf.seek(0)\n data = buf.read()\n buf.seek(0)\n buf.truncate()\n return data", "response": "Gathers all the data from a buffer."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _wrap_data(data: Union[str, bytes]):\n MsgType = TextMessage if isinstance(data, str) else BytesMessage\n return MsgType(data=data, frame_finished=True, message_finished=True)", "response": "Wraps data into the right event."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn the SQL for a given field with full type information.", "response": "def get_field_SQL(self, field_name, field):\n \"\"\"\n returns the SQL for a given field with full type information\n\n http://www.sqlite.org/datatype3.html\n\n field_name -- string -- the field's name\n field -- Field() -- the set options for the field\n\n return -- string -- the field type (eg, foo BOOL NOT NULL)\n \"\"\"\n field_type = \"\"\n is_pk = field.options.get('pk', False)\n\n if issubclass(field.type, bool):\n field_type = 'BOOLEAN'\n\n elif issubclass(field.type, long):\n if is_pk:\n field_type = 'INTEGER PRIMARY KEY'\n else:\n field_type = 'BIGINT'\n\n elif issubclass(field.type, int):\n field_type = 'INTEGER'\n if is_pk:\n field_type += ' PRIMARY KEY'\n\n elif issubclass(field.type, basestring):\n fo = field.options\n if field.is_ref():\n # TODO -- 7-8-17 - this isn't a great way to do this, ideally the Field instance\n # would combine all the options of both the current field and the\n # foreign key field and return all those when Field.options is called\n # (with the current field's options taking precedence) but there are\n # lots of circular dependency things that happen when one field is\n # trying to get the schema of another field and I don't have time\n # to sort it all out right now\n ref_s = field.schema\n fo = ref_s.pk.options\n\n if 'size' in fo:\n field_type = 
'CHARACTER({})'.format(fo['size'])\n elif 'max_size' in fo:\n field_type = 'VARCHAR({})'.format(fo['max_size'])\n else:\n field_type = 'TEXT'\n\n if fo.get('ignore_case', False):\n field_type += ' COLLATE NOCASE'\n\n if is_pk:\n field_type += ' PRIMARY KEY'\n\n elif issubclass(field.type, datetime.datetime):\n #field_type = 'DATETIME'\n field_type = 'TIMESTAMP'\n\n elif issubclass(field.type, datetime.date):\n field_type = 'DATE'\n\n elif issubclass(field.type, float):\n field_type = 'REAL'\n size = field.options.get('size', field.options.get('max_size', 0))\n if size > 6:\n field_type = 'DOUBLE PRECISION'\n\n elif issubclass(field.type, decimal.Decimal):\n field_type = 'NUMERIC'\n\n elif issubclass(field.type, bytearray):\n field_type = 'BLOB'\n\n else:\n raise ValueError('unknown python type: {}'.format(field.type.__name__))\n\n if field.required:\n field_type += ' NOT NULL'\n else:\n field_type += ' NULL'\n\n if not is_pk:\n if field.is_ref():\n ref_s = field.schema\n if field.required: # strong ref, it deletes on fk row removal\n field_type += ' REFERENCES {} ({}) ON UPDATE CASCADE ON DELETE CASCADE'.format(\n ref_s,\n ref_s.pk.name\n )\n\n else: # weak ref, it sets column to null on fk row removal\n field_type += ' REFERENCES {} ({}) ON UPDATE CASCADE ON DELETE SET NULL'.format(\n ref_s,\n ref_s.pk.name\n )\n\n return '{} {}'.format(self._normalize_name(field_name), field_type)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _set_table(self, schema, **kwargs):\n query_str = []\n query_str.append(\"CREATE TABLE {} (\".format(self._normalize_table_name(schema)))\n\n query_fields = []\n for field_name, field in schema.fields.items():\n query_fields.append(' {}'.format(self.get_field_SQL(field_name, field)))\n\n query_str.append(\",{}\".format(os.linesep).join(query_fields))\n query_str.append(')')\n query_str = os.linesep.join(query_str)\n ret = self._query(query_str, ignore_result=True, **kwargs)", "response": 
"Set the table in the database."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncreates an index for the current record.", "response": "def _set_index(self, schema, name, fields, **index_options):\n \"\"\"\n https://www.sqlite.org/lang_createindex.html\n \"\"\"\n query_str = \"CREATE {}INDEX IF NOT EXISTS '{}_{}' ON {} ({})\".format(\n 'UNIQUE ' if index_options.get('unique', False) else '',\n schema,\n name,\n self._normalize_table_name(schema),\n ', '.join((self._normalize_name(f) for f in fields))\n )\n\n return self._query(query_str, ignore_result=True, **index_options)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _get_indexes(self, schema, **kwargs):\n # http://www.sqlite.org/pragma.html#schema\n # http://www.mail-archive.com/sqlite-users@sqlite.org/msg22055.html\n # http://stackoverflow.com/questions/604939/\n ret = {}\n rs = self._query('PRAGMA index_list({})'.format(self._normalize_table_name(schema)), **kwargs)\n if rs:\n for r in rs:\n iname = r['name']\n ret.setdefault(iname, [])\n indexes = self._query('PRAGMA index_info({})'.format(r['name']), **kwargs)\n for idict in indexes:\n ret[iname].append(idict['name'])\n\n return ret", "response": "return all the indexes for the given schema"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn all the fields for the given table", "response": "def _get_fields(self, table_name, **kwargs):\n \"\"\"return all the fields for the given table\"\"\"\n ret = {}\n query_str = 'PRAGMA table_info({})'.format(self._normalize_table_name(table_name))\n fields = self._query(query_str, **kwargs)\n #pout.v([dict(d) for d in fields])\n\n query_str = 'PRAGMA foreign_key_list({})'.format(self._normalize_table_name(table_name))\n fks = {f[\"from\"]: f for f in self._query(query_str, **kwargs)}\n #pout.v([dict(d) for d in fks.values()])\n\n pg_types = {\n \"INTEGER\": int,\n 
\"BIGINT\": long,\n \"DOUBLE PRECISION\": float,\n \"FLOAT\": float,\n \"REAL\": float,\n \"NUMERIC\": decimal.Decimal,\n \"BOOLEAN\": bool,\n \"DATE\": datetime.date,\n \"TIMESTAMP\": datetime.datetime,\n \"CHARACTER\": str,\n \"VARCHAR\": str,\n \"TEXT\": str,\n \"BLOB\": bytearray,\n }\n\n # the rows we can set: field_type, name, field_required, min_size, max_size,\n # size, unique, pk, \n # These keys will roughly correspond with schema.Field\n # TODO -- we could actually use \"type\" to get the size because SQLite returns\n # a value like VARCHAR[32]\n for row in fields:\n field = {\n \"name\": row[\"name\"],\n \"field_required\": bool(row[\"notnull\"]) or bool(row[\"pk\"]),\n \"pk\": bool(row[\"pk\"]),\n }\n\n for tname, ty in pg_types.items():\n if row[\"type\"].startswith(tname):\n field[\"field_type\"] = ty\n break\n\n if field[\"pk\"] and field[\"field_type\"] is int:\n # we compensate for SQLite internally setting pk to int\n field[\"field_type\"] = long\n\n if row[\"name\"] in fks:\n field[\"schema_table_name\"] = fks[row[\"name\"]][\"table\"]\n field[\"ref_table_name\"] = fks[row[\"name\"]][\"table\"]\n\n ret[field[\"name\"]] = field\n\n return ret"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _normalize_date_SQL(self, field_name, field_kwargs, symbol):\n fstrs = []\n k_opts = {\n 'day': \"CAST(strftime('%d', {}) AS integer)\",\n 'hour': \"CAST(strftime('%H', {}) AS integer)\",\n 'doy': \"CAST(strftime('%j', {}) AS integer)\", # day of year\n 'julian_day': \"strftime('%J', {})\", # YYYY-MM-DD\n 'month': \"CAST(strftime('%m', {}) AS integer)\",\n 'minute': \"CAST(strftime('%M', {}) AS integer)\",\n 'dow': \"CAST(strftime('%w', {}) AS integer)\", # day of week 0 = sunday\n 'week': \"CAST(strftime('%W', {}) AS integer)\",\n 'year': \"CAST(strftime('%Y', {}) AS integer)\"\n }\n\n for k, v in field_kwargs.items():\n fstrs.append([k_opts[k].format(self._normalize_name(field_name)), 
self.val_placeholder, v])\n\n return fstrs", "response": "allow extracting information from date\n http://www.sqlite.org/lang_datefunc.html\n "} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _normalize_sort_SQL(self, field_name, field_vals, sort_dir_str):\n fvi = None\n if sort_dir_str == 'ASC':\n fvi = (t for t in enumerate(field_vals)) \n\n else:\n fvi = (t for t in enumerate(reversed(field_vals))) \n\n query_sort_str = [' CASE {}'.format(self._normalize_name(field_name))]\n query_args = []\n for i, v in fvi:\n query_sort_str.append(' WHEN {} THEN {}'.format(self.val_placeholder, i))\n query_args.append(v)\n\n query_sort_str.append(' END')\n query_sort_str = \"\\n\".join(query_sort_str)\n return query_sort_str, query_args", "response": "This function returns a SQL sort string and a list of query arguments."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nregister new calculations and meta data.", "response": "def register(self, new_calc, *args, **kwargs):\n \"\"\"\n Register calculations and meta data.\n\n * ``dependencies`` - list of prerequisite calculations\n * ``always_calc`` - ``True`` if calculation ignores thresholds\n * ``frequency`` - frequency of calculation in intervals or units of time\n\n :param new_calc: register new calculation\n \"\"\"\n kwargs.update(zip(self.meta_names, args))\n # dependencies should be a list of other calculations\n if isinstance(kwargs['dependencies'], basestring):\n kwargs['dependencies'] = [kwargs['dependencies']]\n # call super method, now meta can be passed as args or kwargs.\n super(CalcRegistry, self).register(new_calc, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef slits_to_ds9_reg(ds9reg, slits):\n\n # open output file and insert header\n\n ds9reg.write('# Region file format: DS9 version 4.1\\n')\n ds9reg.write(\n 'global 
color=green dashlist=8 3 width=1 font=\"helvetica 10 '\n 'normal roman\" select=1 highlite=1 dash=0 fixed=0 edit=1 '\n 'move=1 delete=1 include=1 source=1\\n')\n ds9reg.write('physical\\n')\n\n for idx, slit in enumerate(slits, 1):\n xpos1, y2, xpos2, y2, xpos2, y1, xpos1, y1 = slit\n xc = 0.5 * (xpos1 + xpos2) + 1\n yc = 0.5 * (y1 + y2) + 1\n xd = (xpos2 - xpos1)\n yd = (y2 - y1)\n ds9reg.write('box({0},{1},{2},{3},0)\\n'.format(xc, yc, xd, yd))\n ds9reg.write('# text({0},{1}) color=red text={{{2}}}\\n'.format(xpos1 - 5, yc, idx))\n ds9reg.write('# text({0},{1}) color=red text={{{2}}}\\n'.format(xpos2 + 5, yc, idx + EMIR_NBARS))", "response": "Transform slits to ds9-region format."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef id_maker(obj):\n dtfmt = '%Y%m%d-%H%M%S'\n return '%s-%s' % (obj.__class__.__name__, datetime.now().strftime(dtfmt))", "response": "Makes an ID from the object's class name and the current datetime."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef register(self, sim, *args, **kwargs):\n kwargs.update(zip(self.meta_names, args))\n # call super method, now meta can be passed as args or kwargs.\n super(SimRegistry, self).register(sim, **kwargs)", "response": "register simulation and metadata.\n "} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nchecking if data loaded for all sources in data layer.", "response": "def check_data(self, data):\n \"\"\"\n Check if data loaded for all sources in data layer.\n\n :param data: data layer from model\n :type data: :class:`~simkit.core.layer.Data`\n :return: dictionary of data sources and objects or `None` if not loaded\n \"\"\"\n data_objs = {\n data_src: data.objects.get(data_src) for data_src in data.layer\n }\n self._is_data_loaded = all(data_objs.values())\n return data_objs"} {"SOURCE": "codesearchnet", "instruction": 
"Make a summary of the following Python 3 code\ndef initialize(self, calc_reg):\n self._isinitialized = True\n # TODO: if calculations are edited, loaded, added, etc. then reset\n self.calc_order = topological_sort(calc_reg.dependencies)", "response": "Initializes the simulation by dependency."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef index_iterator(self):\n idx = 0 # index\n while idx < self.number_intervals:\n new_idx = yield idx\n idx += 1\n if new_idx:\n idx = new_idx - 1", "response": "Generator that returns the index of the items in the index."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nstarts the simulation from time zero.", "response": "def start(self, model, progress_hook=None):\n \"\"\"\n Start the simulation from time zero.\n\n :param model: Model with layers and registries containing parameters\n :type: :class:`~simkit.core.models.Model`\n :param progress_hook: A function that receives either a string or a\n list containing the index followed by tuples of the data or outputs\n names and values specified by ``write_fields`` in the simfile.\n :type progress_hook: function\n\n\n The model registries should contain the following layer registries:\n * :class:`~simkit.core.data_sources.DataRegistry`,\n * :class:`~simkit.core.formulas.FormulaRegistry`,\n * :class:`~simkit.core.outputs.OutputRegistry`,\n * :class:`~simkit.core.calculation.CalcRegistry`\n \"\"\"\n # check if data loaded\n data_objs = self.check_data(model.data)\n if not self.is_data_loaded:\n raise MissingDataError([ds for ds in data_objs if ds is None])\n # get layer registries\n data_reg = model.registries['data']\n formula_reg = model.registries['formulas']\n out_reg = model.registries['outputs']\n calc_reg = model.registries['calculations']\n # initialize\n if not self.isinitialized:\n self.initialize(calc_reg)\n # default progress hook\n if not progress_hook:\n progress_hook 
= functools.partial(\n sim_progress_hook, display_header=True\n )\n # start, resume or restart\n if self.ispaused:\n # if paused, then resume, do not resize outputs again.\n self._ispaused = False # change pause state\n progress_hook('resume simulation')\n elif self.iscomplete:\n # if complete, then restart, do not resize outputs again.\n self._iscomplete = False # change pause state\n progress_hook('restart simulation')\n self.idx_iter = self.index_iterator()\n else:\n # resize outputs\n # assumes that self.write_frequency is immutable\n # TODO: allow self.write_frequency to be changed\n # only resize outputs first time simulation is started\n # repeat output rows to self.write_frequency\n # put initial conditions of outputs last so it's copied when\n # idx == 0\n progress_hook('resize outputs') # display progress\n for k in out_reg:\n if out_reg.isconstant[k]:\n continue\n # repeat rows (axis=0)\n out_reg[k] = out_reg[k].repeat(self.write_frequency, 0)\n _initial_value = out_reg.initial_value[k]\n if not _initial_value:\n continue\n if isinstance(_initial_value, basestring):\n # initial value is from data registry\n # assign in a scalar to a vector fills in the vector, yes!\n out_reg[k][-1] = data_reg[_initial_value]\n else:\n out_reg[k][-1] = _initial_value * out_reg[k].units\n progress_hook('start simulation')\n # check and/or make SimKit_Simulations and simulation ID folders\n mkdir_p(self.path)\n sim_id_path = os.path.join(self.path, self.ID)\n mkdir_p(sim_id_path)\n # header & units for save files\n data_fields = self.write_fields.get('data', []) # any data fields\n out_fields = self.write_fields.get('outputs', []) # any outputs fields\n save_header = tuple(data_fields + out_fields) # concatenate fields\n # get units as strings from data & outputs\n data_units = [str(data_reg[f].dimensionality) for f in data_fields]\n out_units = [str(out_reg[f].dimensionality) for f in out_fields]\n save_units = tuple(data_units + out_units) # concatenate units\n # string 
format for header & units\n save_str = ('%s' + ',%s' * (len(save_header) - 1)) + '\\n' # format\n save_header = (save_str * 2) % (save_header + save_units) # header\n save_header = save_header[:-1] # remove trailing new line\n # ===================\n # Static calculations\n # ===================\n progress_hook('static calcs')\n for calc in self.calc_order:\n if not calc_reg.is_dynamic[calc]:\n calc_reg.calculator[calc].calculate(\n calc_reg[calc], formula_reg, data_reg, out_reg\n )\n # ====================\n # Dynamic calculations\n # ====================\n progress_hook('dynamic calcs')\n # TODO: assumes that interval size and indices are same, but should\n # interpolate for any size interval or indices\n for idx_tot in self.idx_iter:\n self.interval_idx = idx_tot # update simulation interval counter\n idx = idx_tot % self.write_frequency\n # update properties\n for k, v in out_reg.isproperty.iteritems():\n # set properties from previous interval at night\n if v:\n out_reg[k][idx] = out_reg[k][idx - 1]\n # night if any threshold exceeded\n if self.thresholds:\n night = not all(limits[0] < data_reg[data][idx] < limits[1] for\n data, limits in self.thresholds.iteritems())\n else:\n night = None\n # daytime or always calculated outputs\n for calc in self.calc_order:\n # Determine if calculation is scheduled for this timestep\n # TODO: add ``start_at`` parameter combined with ``frequency``\n freq = calc_reg.frequency[calc]\n if not freq.dimensionality:\n is_scheduled = (idx_tot % freq) == 0\n else:\n # Frequency with units of time\n is_scheduled = ((idx_tot * self.interval) % freq) == 0\n is_scheduled = (\n is_scheduled and (not night or calc_reg.always_calc[calc])\n )\n if calc_reg.is_dynamic[calc] and is_scheduled:\n calc_reg.calculator[calc].calculate(\n calc_reg[calc], formula_reg, data_reg, out_reg,\n timestep=self.interval, idx=idx\n )\n # display progress\n if not (idx % self.display_frequency):\n progress_hook(self.format_progress(idx, data_reg, out_reg))\n # 
disp_head = False\n # create an index for the save file, 0 if not saving\n if not ((idx_tot + 1) % self.write_frequency):\n savenum = (idx_tot + 1) / self.write_frequency\n elif idx_tot == self.number_intervals - 1:\n # save file index should be integer!\n savenum = int(np.ceil((idx_tot + 1) /\n float(self.write_frequency)))\n else:\n savenum = 0 # not saving this iteration\n # save file to disk\n if savenum:\n savename = self.ID + '_' + str(savenum) + '.csv' # filename\n savepath = os.path.join(sim_id_path, savename) # path\n # create array of all data & outputs to save\n save_array = self.format_write(data_reg, out_reg, idx + 1)\n # save as csv using default format & turn comments off\n np.savetxt(savepath, save_array, delimiter=',',\n header=save_header, comments='')\n try:\n cmd = self.cmd_queue.get_nowait()\n except Queue.Empty:\n continue\n if cmd == 'pause':\n self._ispaused = True\n return\n self._iscomplete = True"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\npauses the simulation. How is this different from stopping it? Maintain info sufficient to restart simulation. Sets ``is_paused`` to True. Will this state allow analysis? changing parameters? What can you do with a paused simulation? Should be capable of saving paused simulation for loading/resuming later, that is the main usage. EG: someone else need computer, or power goes out, so on battery backup quickly pause simulation, and save. Is save automatic? Should there be a parameter for auto save changed?", "response": "def pause(self, progress_hook=None):\n \"\"\"\n Pause the simulation. How is this different from stopping it? Maintain\n info sufficient to restart simulation. Sets ``is_paused`` to True.\n Will this state allow analysis? changing parameters? What can you do\n with a paused simulation?\n Should be capable of saving paused simulation for loading/resuming\n later, that is the main usage. 
EG: someone else need computer, or power\n goes out, so on battery backup quickly pause simulation, and save.\n Is save automatic? Should there be a parameter for auto save changed?\n \"\"\"\n # default progress hook\n if progress_hook is None:\n progress_hook = sim_progress_hook\n progress_hook('simulation paused')\n self.cmd_queue.put('pause')\n self._ispaused = True"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nsaving ds9 region output into filename.", "response": "def save_ds9(output, filename):\n \"\"\"Save ds9 region output into filename.\n\n Parameters\n ----------\n output : str\n String containing the full output to be exported as a ds9 region\n file.\n filename : str\n Output file name.\n\n \"\"\"\n\n ds9_file = open(filename, 'wt')\n ds9_file.write(output)\n ds9_file.close()"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef save_four_ds9(rectwv_coeff, debugplot=0):\n\n for limits, rectified, suffix in zip(\n ['frontiers', 'frontiers', 'boundaries', 'boundaries'],\n [False, True, False, True],\n ['rawimage', 'rectified', 'rawimage', 'rectified']\n ):\n output = rectwv_coeff_to_ds9(rectwv_coeff=rectwv_coeff,\n limits=limits,\n rectified=rectified)\n filename = 'ds9_' + limits + '_' + suffix + '.reg'\n if abs(debugplot) >= 10:\n print('>>> Saving: ', filename)\n save_ds9(output, filename)", "response": "Save the 4 possible ds9 region files."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef rectwv_coeff_to_ds9(rectwv_coeff,\n limits=None,\n rectified=False,\n numpix=100):\n \"\"\"Generate ds9 region output with requested slitlet limits\n\n Parameters\n ----------\n rectwv_coeff : RectWaveCoeff instance\n Rectification and wavelength calibration coefficients for the\n particular CSU configuration.\n limits : str\n Region to be saved: the only two possibilities are 'boundaries'\n and 'frontiers'.\n rectified : 
bool\n If True, the regions correspond to the rectified image.\n numpix : int\n Number of points in which the X-range interval is subdivided\n in order to save each boundary as a connected set of line\n segments. This option is only relevant when rectified=False.\n\n Returns\n -------\n output : str\n String containing the full output to be exported as a ds9 region\n file.\n \"\"\"\n\n # protections\n if limits not in ['boundaries', 'frontiers']:\n raise ValueError('Unexpect limits=' + str(limits))\n\n # retrieve relevant wavelength calibration parameters\n grism_name = rectwv_coeff.tags['grism']\n filter_name = rectwv_coeff.tags['filter']\n wv_parameters = set_wv_parameters(filter_name, grism_name)\n naxis1_enlarged = wv_parameters['naxis1_enlarged']\n crpix1_enlarged = wv_parameters['crpix1_enlarged']\n crval1_enlarged = wv_parameters['crval1_enlarged']\n cdelt1_enlarged = wv_parameters['cdelt1_enlarged']\n\n ds9_output = '# Region file format: DS9 version 4.1\\n' \\\n 'global color=green dashlist=2 4 width=2 ' \\\n 'font=\"helvetica 10 normal roman\" select=1 ' \\\n 'highlite=1 dash=1 fixed=0 edit=1 ' \\\n 'move=1 delete=1 include=1 source=1\\nphysical\\n#\\n'\n\n ds9_output += '#\\n# uuid (rectwv_coeff): {0}\\n'.format(rectwv_coeff.uuid)\n ds9_output += \\\n '# grism...............: {0}\\n'.format(rectwv_coeff.tags['grism'])\n ds9_output += \\\n '# filter..............: {0}\\n'.format(rectwv_coeff.tags['filter'])\n\n for islitlet in range(1, EMIR_NBARS + 1):\n if islitlet not in rectwv_coeff.missing_slitlets:\n dumdict = rectwv_coeff.contents[islitlet - 1]\n if islitlet % 2 == 0:\n if limits == 'frontiers':\n colorbox = '#0000ff' # '#ff77ff'\n else:\n colorbox = '#ff00ff' # '#ff77ff'\n else:\n if limits == 'frontiers':\n colorbox = '#0000ff' # '#4444ff'\n else:\n colorbox = '#00ffff' # '#4444ff'\n\n ds9_output += '#\\n# islitlet...........: {0}\\n'.format(islitlet)\n ds9_output += '# csu_bar_slit_center: {0}\\n'.format(\n dumdict['csu_bar_slit_center']\n )\n if 
rectified:\n crpix1_linear = 1.0\n crval1_linear = dumdict['crval1_linear']\n cdelt1_linear = dumdict['cdelt1_linear']\n if limits == 'frontiers':\n ydum_lower = dumdict['y0_frontier_lower_expected']\n ydum_upper = dumdict['y0_frontier_upper_expected']\n else:\n ydum_lower = dumdict['y0_reference_lower_expected']\n ydum_upper = dumdict['y0_reference_upper_expected']\n wave_ini = crval1_linear + \\\n (0.5 - crpix1_linear) * cdelt1_linear\n xdum_ini = (wave_ini - crval1_enlarged) / cdelt1_enlarged\n xdum_ini += crpix1_enlarged\n wave_end = crval1_linear + \\\n (EMIR_NAXIS1 + 0.5 - crpix1_linear) * cdelt1_linear\n xdum_end = (wave_end - crval1_enlarged) / cdelt1_enlarged\n xdum_end += crpix1_enlarged\n for ydum in [ydum_lower, ydum_upper]:\n ds9_output += \\\n 'line {0} {1} {2} {3}'.format(\n xdum_ini, ydum,\n xdum_end, ydum\n )\n ds9_output += ' # color={0}\\n'.format(colorbox)\n # slitlet label\n ydum_label = (ydum_lower + ydum_upper) / 2.0\n xdum_label = EMIR_NAXIS1 / 2 + 0.5\n wave_center = crval1_linear + \\\n (xdum_label - crpix1_linear) * cdelt1_linear\n xdum_label = (wave_center - crval1_enlarged) / cdelt1_enlarged\n xdum_label += crpix1_enlarged\n ds9_output += 'text {0} {1} {{{2}}} # color={3} ' \\\n 'font=\"helvetica 10 bold ' \\\n 'roman\"\\n'.format(xdum_label, ydum_label,\n islitlet, colorbox)\n else:\n if limits == 'frontiers':\n pol_lower = Polynomial(\n dumdict['frontier']['poly_coef_lower']\n )\n pol_upper = Polynomial(\n dumdict['frontier']['poly_coef_upper']\n )\n else:\n pol_lower = Polynomial(\n dumdict['spectrail']['poly_coef_lower']\n )\n pol_upper = Polynomial(\n dumdict['spectrail']['poly_coef_upper']\n )\n xdum = np.linspace(1, EMIR_NAXIS1, num=numpix)\n ydum = pol_lower(xdum)\n for i in range(len(xdum) - 1):\n ds9_output += \\\n 'line {0} {1} {2} {3}'.format(\n xdum[i], ydum[i],\n xdum[i + 1], ydum[i + 1]\n )\n ds9_output += ' # color={0}\\n'.format(colorbox)\n ydum = pol_upper(xdum)\n for i in range(len(xdum) - 1):\n ds9_output += \\\n 
'line {0} {1} {2} {3}'.format(\n xdum[i], ydum[i],\n xdum[i + 1], ydum[i + 1]\n )\n ds9_output += ' # color={0}\\n'.format(colorbox)\n # slitlet label\n xdum_label = EMIR_NAXIS1 / 2 + 0.5\n ydum_lower = pol_lower(xdum_label)\n ydum_upper = pol_upper(xdum_label)\n ydum_label = (ydum_lower + ydum_upper) / 2.0\n ds9_output += 'text {0} {1} {{{2}}} # color={3} ' \\\n 'font=\"helvetica 10 bold ' \\\n 'roman\"\\n'.format(xdum_label, ydum_label,\n islitlet, colorbox)\n\n return ds9_output", "response": "Generate a DS9 region file from a RectWaveCoeff instance."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nsaves expected location of arc OH airglow to ds9 region files.", "response": "def save_spectral_lines_ds9(rectwv_coeff, debugplot=0):\n \"\"\"Save expected location of arc and OH airglow to ds9 region files.\n\n Parameters\n ----------\n rectwv_coeff : RectWaveCoeff instance\n Rectification and wavelength calibration coefficients for the\n particular CSU configuration.\n debugplot : int\n Debugging level for messages and plots. 
For details see\n 'numina.array.display.pause_debugplot.py'.\n\n \"\"\"\n\n for spectral_lines, rectified, suffix in zip(\n ['arc', 'arc', 'oh', 'oh'],\n [False, True, False, True],\n ['rawimage', 'rectified', 'rawimage', 'rectified']\n ):\n output = spectral_lines_to_ds9(rectwv_coeff=rectwv_coeff,\n spectral_lines=spectral_lines,\n rectified=rectified)\n filename = 'ds9_' + spectral_lines + '_' + suffix + '.reg'\n if abs(debugplot) >= 10:\n print('>>> Saving: ', filename)\n save_ds9(output, filename)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngenerates a ds9 region output with requested spectral lines.", "response": "def spectral_lines_to_ds9(rectwv_coeff,\n spectral_lines=None,\n rectified=False):\n \"\"\"Generate ds9 region output with requested spectral lines\n\n Parameters\n ----------\n rectwv_coeff : RectWaveCoeff instance\n Rectification and wavelength calibration coefficients for the\n particular CSU configuration.\n spectral_lines : str\n Spectral lines to be saved: the only two possibilities are 'arc'\n and 'oh'.\n rectified : bool\n If True, the regions correspond to the rectified image.\n\n Returns\n -------\n output : str\n String containing the full output to be exported as a ds9 region\n file.\n \"\"\"\n\n # protections\n if spectral_lines not in ['arc', 'oh']:\n raise ValueError('Unexpected spectral lines=' + str(spectral_lines))\n\n if spectral_lines == 'arc':\n grism_name = rectwv_coeff.tags['grism']\n if grism_name == 'LR':\n catlines_file = 'lines_argon_neon_xenon_empirical_LR.dat'\n else:\n catlines_file = 'lines_argon_neon_xenon_empirical.dat'\n dumdata = pkgutil.get_data('emirdrp.instrument.configs',\n catlines_file)\n arc_lines_tmpfile = StringIO(dumdata.decode('utf8'))\n catlines = np.genfromtxt(arc_lines_tmpfile)\n # define wavelength and flux as separate arrays\n catlines_all_wave = catlines[:, 0]\n catlines_all_flux = catlines[:, 1]\n elif spectral_lines == 'oh':\n dumdata = 
pkgutil.get_data('emirdrp.instrument.configs',\n 'Oliva_etal_2013.dat')\n oh_lines_tmpfile = StringIO(dumdata.decode('utf8'))\n catlines = np.genfromtxt(oh_lines_tmpfile)\n # define wavelength and flux as separate arrays\n catlines_all_wave = np.concatenate((catlines[:, 1],\n catlines[:, 0]))\n catlines_all_flux = np.concatenate((catlines[:, 2],\n catlines[:, 2]))\n else:\n raise ValueError('This should not happen!')\n\n # retrieve relevant wavelength calibration parameters\n grism_name = rectwv_coeff.tags['grism']\n filter_name = rectwv_coeff.tags['filter']\n wv_parameters = set_wv_parameters(filter_name, grism_name)\n naxis1_enlarged = wv_parameters['naxis1_enlarged']\n crpix1_enlarged = wv_parameters['crpix1_enlarged']\n crval1_enlarged = wv_parameters['crval1_enlarged']\n cdelt1_enlarged = wv_parameters['cdelt1_enlarged']\n\n ds9_output = '# Region file format: DS9 version 4.1\\n' \\\n 'global color=#00ffff dashlist=0 0 width=2 ' \\\n 'font=\"helvetica 10 normal roman\" select=1 ' \\\n 'highlite=1 dash=0 fixed=0 edit=1 move=1 delete=1 ' \\\n 'include=1 source=1\\nphysical\\n'\n\n ds9_output += '#\\n# uuid (rectwv_coeff): {0}\\n'.format(\n rectwv_coeff.uuid)\n ds9_output += \\\n '# grism...............: {0}\\n'.format(rectwv_coeff.tags['grism'])\n ds9_output += \\\n '# filter..............: {0}\\n'.format(rectwv_coeff.tags['filter'])\n\n global_integer_offset_x_pix = rectwv_coeff.global_integer_offset_x_pix\n global_integer_offset_y_pix = rectwv_coeff.global_integer_offset_y_pix\n\n for islitlet in range(1, EMIR_NBARS + 1):\n if islitlet not in rectwv_coeff.missing_slitlets:\n dumdict = rectwv_coeff.contents[islitlet - 1]\n if islitlet % 2 == 0:\n colorbox = '#ff00ff' # '#ff77ff'\n else:\n colorbox = '#00ffff' # '#4444ff'\n\n ds9_output += '#\\n# islitlet...........: {0}\\n'.format(\n islitlet)\n ds9_output += '# csu_bar_slit_center: {0}\\n'.format(\n dumdict['csu_bar_slit_center']\n )\n crpix1_linear = 1.0\n crval1_linear = dumdict['crval1_linear']\n 
cdelt1_linear = dumdict['cdelt1_linear']\n wave_ini = crval1_linear + \\\n (0.5 - crpix1_linear) * cdelt1_linear\n wave_end = crval1_linear + \\\n (EMIR_NAXIS1 + 0.5 - crpix1_linear) * cdelt1_linear\n if rectified:\n ydum_lower = dumdict['y0_reference_lower_expected']\n ydum_upper = dumdict['y0_reference_upper_expected']\n # spectral lines\n for wave in catlines_all_wave:\n if wave_ini <= wave <= wave_end:\n xdum = (wave - crval1_enlarged) / cdelt1_enlarged\n xdum += crpix1_enlarged\n ds9_output += \\\n 'line {0} {1} {2} {3}'.format(\n xdum, ydum_lower,\n xdum, ydum_upper\n )\n ds9_output += ' # color={0}\\n'.format(colorbox)\n # slitlet label\n ydum_label = (ydum_lower + ydum_upper) / 2.0\n xdum_label = EMIR_NAXIS1 / 2 + 0.5\n wave_center = crval1_linear + \\\n (xdum_label - crpix1_linear) * cdelt1_linear\n xdum_label = (wave_center - crval1_enlarged) / cdelt1_enlarged\n xdum_label += crpix1_enlarged\n ds9_output += 'text {0} {1} {{{2}}} # color={3} ' \\\n 'font=\"helvetica 10 bold ' \\\n 'roman\"\\n'.format(xdum_label, ydum_label,\n islitlet, colorbox)\n else:\n bb_ns1_orig = dumdict['bb_ns1_orig']\n ttd_order = dumdict['ttd_order']\n aij = dumdict['ttd_aij']\n bij = dumdict['ttd_bij']\n min_row_rectified = float(dumdict['min_row_rectified'])\n max_row_rectified = float(dumdict['max_row_rectified'])\n mean_row_rectified = (min_row_rectified + max_row_rectified)/2\n wpoly_coeff = dumdict['wpoly_coeff']\n x0 = []\n y0 = []\n x1 = []\n y1 = []\n x2 = []\n y2 = []\n # spectral lines\n for wave in catlines_all_wave:\n if wave_ini <= wave <= wave_end:\n tmp_coeff = np.copy(wpoly_coeff)\n tmp_coeff[0] -= wave\n tmp_xroots = np.polynomial.Polynomial(\n tmp_coeff).roots()\n for dum in tmp_xroots:\n if np.isreal(dum):\n dum = dum.real\n if 1 <= dum <= EMIR_NAXIS1:\n x0.append(dum)\n y0.append(mean_row_rectified)\n x1.append(dum)\n y1.append(min_row_rectified)\n x2.append(dum)\n y2.append(max_row_rectified)\n pass\n xx0, yy0 = fmap(ttd_order, aij, bij, np.array(x0),\n 
np.array(y0))\n xx0 -= global_integer_offset_x_pix\n yy0 += bb_ns1_orig\n yy0 -= global_integer_offset_y_pix\n xx1, yy1 = fmap(ttd_order, aij, bij, np.array(x1),\n np.array(y1))\n xx1 -= global_integer_offset_x_pix\n yy1 += bb_ns1_orig\n yy1 -= global_integer_offset_y_pix\n xx2, yy2 = fmap(ttd_order, aij, bij, np.array(x2),\n np.array(y2))\n xx2 -= global_integer_offset_x_pix\n yy2 += bb_ns1_orig\n yy2 -= global_integer_offset_y_pix\n for xx1_, xx2_, yy1_, yy2_ in zip(xx1, xx2, yy1, yy2):\n ds9_output += \\\n 'line {0} {1} {2} {3}'.format(\n xx1_, yy1_, xx2_, yy2_\n )\n ds9_output += ' # color={0}\\n'.format(colorbox)\n # slitlet label\n pol_lower = Polynomial(dumdict['spectrail']['poly_coef_lower'])\n pol_upper = Polynomial(dumdict['spectrail']['poly_coef_upper'])\n xdum_label = EMIR_NAXIS1 / 2 + 0.5\n ydum_lower = pol_lower(xdum_label)\n ydum_upper = pol_upper(xdum_label)\n ydum_label = (ydum_lower + ydum_upper) / 2.0\n ds9_output += 'text {0} {1} {{{2}}} # color={3} ' \\\n 'font=\"helvetica 10 bold ' \\\n 'roman\"\\n'.format(xdum_label, ydum_label,\n islitlet, colorbox)\n\n return ds9_output"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef create_providerinfo(self, capabilities):\n\n _pinfo = self.package_capabilities()\n not_supported = {}\n for key, val in capabilities.items():\n try:\n allowed = _pinfo[key]\n except KeyError:\n _pinfo[key] = val\n else:\n if isinstance(allowed, bool):\n if allowed is False:\n if val is True:\n not_supported[key] = True\n else:\n _pinfo[key] = val\n elif isinstance(allowed, str):\n if val != allowed:\n not_supported[key] = val\n elif isinstance(allowed, list):\n if isinstance(val, str):\n sv = {val}\n else:\n try:\n sv = set(val)\n except TypeError:\n if key == 'response_types_supported':\n sv = set()\n for v in val:\n v.sort()\n sv.add(' '.join(v))\n else:\n raise\n else:\n sv = set()\n for v in val:\n vs = v.split(' ')\n vs.sort()\n sv.add(' '.join(vs))\n\n sa = 
set(allowed)\n\n if (sv & sa) == sv:\n _pinfo[key] = list(sv)\n else:\n not_supported[key] = list(sv - sa)\n\n if not_supported:\n _msg = \"Server doesn't support the following features: {}\".format(\n not_supported)\n logger.error(_msg)\n raise ConfigurationError(_msg)\n\n if self.jwks_uri and self.keyjar:\n _pinfo[\"jwks_uri\"] = self.jwks_uri\n\n for name, instance in self.endpoint.items():\n if name not in ['webfinger', 'provider_info']:\n _pinfo['{}_endpoint'.format(name)] = instance.full_path\n\n return _pinfo", "response": "Dynamically create the provider info response from the capabilities."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nwrite a length-prefixed UTF-8 string.", "response": "def _write_utf8(write, value):\n \"\"\"Writes a length-prefixed UTF-8 string.\"\"\"\n write('h', len(value))\n write.io.write(value.encode('utf-8'))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nread the tag in using the reader rd.", "response": "def read(cls, read, has_name=True):\n \"\"\"\n Read the tag in using the reader `rd`.\n If `has_name` is `False`, skip reading the tag name.\n \"\"\"\n name = cls._read_utf8(read) if has_name else None\n\n if cls is TAG_Compound:\n # A TAG_Compound is almost identical to Python's native dict()\n # object, or a Java HashMap.\n final = {}\n while True:\n # Find the type of each tag in a compound in turn.\n tag = read('b', 1)[0]\n if tag == 0:\n # A tag of 0 means we've reached TAG_End, used to terminate\n # a TAG_Compound.\n break\n # We read in each tag in turn, using its name as the key in\n # the dict (Since a compound cannot have repeating names,\n # this works fine).\n tmp = _tags[tag].read(read)\n final[tmp.name] = tmp\n return cls(final, name=name)\n elif cls is TAG_List:\n # A TAG_List is a very simple homogeneous array, similar to\n # Python's native list() object, but restricted to a single type.\n tag_type, length = read('bi', 5)\n tag_read = 
_tags[tag_type].read\n return cls(\n _tags[tag_type],\n [tag_read(read, has_name=False) for x in range(0, length)],\n name=name\n )\n elif cls is TAG_String:\n # A simple length-prefixed UTF-8 string.\n value = cls._read_utf8(read)\n return cls(value, name=name)\n elif cls is TAG_Byte_Array:\n # A simple array of (signed) bytes.\n length = read('i', 4)[0]\n return cls(read('{0}b'.format(length), length), name=name)\n elif cls is TAG_Int_Array:\n # A simple array of (signed) 4-byte integers.\n length = read('i', 4)[0]\n return cls(read('{0}i'.format(length), length * 4), name=name)\n elif cls is TAG_Long_Array:\n # A simple array of (signed) 8-byte longs.\n length = read('i', 4)[0]\n return cls(read('{0}q'.format(length), length * 8), name=name)\n elif cls is TAG_Byte:\n # A single (signed) byte.\n return cls(read('b', 1)[0], name=name)\n elif cls is TAG_Short:\n # A single (signed) short.\n return cls(read('h', 2)[0], name=name)\n elif cls is TAG_Int:\n # A single (signed) 4-byte int.\n return cls(read('i', 4)[0], name=name)\n elif cls is TAG_Long:\n # A single (signed) 8-byte long.\n return cls(read('q', 8)[0], name=name)\n elif cls is TAG_Float:\n # A single single-precision floating point value.\n return cls(read('f', 4)[0], name=name)\n elif cls is TAG_Double:\n # A single double-precision floating point value.\n return cls(read('d', 8)[0], name=name)\n elif cls is TAG_End:\n # An end-of-compound tag.\n return cls(read('2b', 2)[0], name=name)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef save(self, io, little_endian=False):\n if little_endian:\n write = lambda fmt, *args: io.write(pack('<' + fmt, *args))\n else:\n write = lambda fmt, *args: io.write(pack('>' + fmt, *args))\n write.io = io\n\n self.write(write)", "response": "Saves the NBTFile to io."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ncreating a token handler", "response": "def factory(ec, code=None, token=None, 
refresh=None, **kwargs):\n \"\"\"\n Create a token handler\n\n :param code:\n :param token:\n :param refresh:\n :return: TokenHandler instance\n \"\"\"\n\n TTYPE = {'code': 'A', 'token': 'T', 'refresh': 'R'}\n\n args = {}\n\n if code:\n args['code_handler'] = init_token_handler(ec, code, TTYPE['code'])\n if token:\n args['access_token_handler'] = init_token_handler(ec, token, TTYPE['token'])\n if refresh:\n args['refresh_token_handler'] = init_token_handler(ec, refresh, TTYPE['refresh'])\n\n return TokenHandler(**args)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning a key that can be used to access the session id.", "response": "def key(self, user=\"\", areq=None):\n \"\"\"\n Return a key (the session id)\n\n :param user: User id\n :param areq: The authorization request\n :return: An ID\n \"\"\"\n csum = hashlib.new('sha224')\n csum.update(rndstr(32).encode('utf-8'))\n return csum.hexdigest()"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef info(self, token):\n _res = dict(zip(['_id', 'type', 'sid', 'exp'],\n self.split_token(token)))\n if _res['type'] != self.type:\n raise WrongTokenType(_res['type'])\n else:\n _res['handler'] = self\n _res['black_listed'] = self.is_black_listed(token)\n return _res", "response": "Return token information.\n\n :param token: A token\n :return: dictionary with info about the token"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef process_wildcard(fractions):\n wildcard_zs = set()\n total_fraction = 0.0\n for z, fraction in fractions.items():\n if fraction == '?':\n wildcard_zs.add(z)\n else:\n total_fraction += fraction\n\n if not wildcard_zs:\n return fractions\n\n balance_fraction = (1.0 - total_fraction) / len(wildcard_zs)\n for z in wildcard_zs:\n fractions[z] = balance_fraction\n\n return fractions", "response": "Processes element with a wildcard weight fraction and 
returns a balanced fraction."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nconverting a mass fraction to an atomic fraction.", "response": "def convert_mass_to_atomic_fractions(mass_fractions):\n \"\"\"\n Converts a mass fraction :class:`dict` to an atomic fraction :class:`dict`.\n\n Args:\n mass_fractions (dict): mass fraction :class:`dict`.\n The composition is specified by a dictionary.\n The keys are atomic numbers and the values weight fractions.\n No wildcard are accepted.\n \"\"\"\n atomic_fractions = {}\n\n for z, mass_fraction in mass_fractions.items():\n atomic_fractions[z] = mass_fraction / pyxray.element_atomic_weight(z)\n\n total_fraction = sum(atomic_fractions.values())\n\n for z, fraction in atomic_fractions.items():\n try:\n atomic_fractions[z] = fraction / total_fraction\n except ZeroDivisionError:\n atomic_fractions[z] = 0.0\n\n return atomic_fractions"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef convert_atomic_to_mass_fractions(atomic_fractions):\n # Calculate total atomic mass\n atomic_masses = {}\n total_atomic_mass = 0.0\n for z, atomic_fraction in atomic_fractions.items():\n atomic_mass = pyxray.element_atomic_weight(z)\n atomic_masses[z] = atomic_mass\n total_atomic_mass += atomic_fraction * atomic_mass\n\n # Create mass fraction\n mass_fractions = {}\n for z, atomic_fraction in atomic_fractions.items():\n mass_fractions[z] = atomic_fraction * atomic_masses[z] / total_atomic_mass\n\n return mass_fractions", "response": "Converts an atomic fraction to a mass fraction."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef convert_formula_to_atomic_fractions(formula):\n mole_fractions = {}\n total_mole_fraction = 0.0\n\n for match in CHEMICAL_FORMULA_PATTERN.finditer(formula):\n symbol, mole_fraction = match.groups()\n\n z = pyxray.element_atomic_number(symbol.strip())\n\n if 
mole_fraction == '':\n            mole_fraction = 1.0\n        mole_fraction = float(mole_fraction)\n        mole_fractions[z] = mole_fraction\n        total_mole_fraction += mole_fraction\n\n    # Calculate atomic fractions\n    atomic_fractions = {}\n    for z, mole_fraction in mole_fractions.items():\n        atomic_fractions[z] = mole_fraction / total_mole_fraction\n\n    return atomic_fractions", "response": "Converts a chemical formula to atomic fractions."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef generate_name(atomic_fractions):\n if not atomic_fractions:\n return ''\n\n if len(atomic_fractions) == 1:\n z = list(atomic_fractions.keys())[0]\n return pyxray.element_symbol(z)\n\n symbols = []\n fractions = []\n for z in sorted(atomic_fractions.keys(), reverse=True):\n symbols.append(pyxray.element_symbol(z))\n fractions.append(Fraction(atomic_fractions[z]).limit_denominator())\n\n # Find gcd of the fractions\n gcds = []\n for a, b in itertools.combinations(fractions, 2):\n gcds.append(math.gcd(a.denominator, b.denominator))\n smallest_gcd = min(gcds)\n\n # Write formula\n name = ''\n for symbol, fraction in zip(symbols, fractions):\n mole_fraction = int(fraction * smallest_gcd)\n if mole_fraction == 0:\n continue\n elif mole_fraction == 1:\n name += \"%s\" % symbol\n else:\n name += '%s%i' % (symbol, mole_fraction)\n\n return name", "response": "Generates a name from the composition."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ncreates a pure composition.", "response": "def from_pure(cls, z):\n \"\"\"\n Creates a pure composition.\n\n Args:\n z (int): atomic number\n \"\"\"\n return cls(cls._key, {z: 1.0}, {z: 1.0}, pyxray.element_symbol(z))"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ncreating a new composition from mass fractions.", "response": "def from_mass_fractions(cls, mass_fractions, formula=None):\n \"\"\"\n Creates a 
composition from a mass fraction :class:`dict`.\n\n Args:\n mass_fractions (dict): mass fraction :class:`dict`.\n The keys are atomic numbers and the values weight fractions.\n Wildcard are accepted, e.g. ``{5: '?', 25: 0.4}`` where boron\n will get a mass fraction of 0.6.\n formula (str): optional chemical formula for the composition.\n If ``None``, a formula will be generated for the composition.\n \"\"\"\n mass_fractions = process_wildcard(mass_fractions)\n atomic_fractions = convert_mass_to_atomic_fractions(mass_fractions)\n if not formula:\n formula = generate_name(atomic_fractions)\n return cls(cls._key, mass_fractions, atomic_fractions, formula)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef from_atomic_fractions(cls, atomic_fractions, formula=None):\n atomic_fractions = process_wildcard(atomic_fractions)\n mass_fractions = convert_atomic_to_mass_fractions(atomic_fractions)\n if not formula:\n formula = generate_name(atomic_fractions)\n return cls(cls._key, mass_fractions, atomic_fractions, formula)", "response": "Creates a composition from an atomic fractions dictionary."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\nasync def _collect_sample(self, url, url_pattern):\n samples = []\n\n urls = [self.path_generator.generate_url(url, url_pattern) for _ in range(self.confirmation_factor)]\n\n iterator = asyncio.as_completed([self._fetch_sample(url) for url in urls])\n for promise in iterator:\n try:\n sig = await promise\n if sig:\n samples.append(sig)\n except RejectRequest as e:\n pass\n\n if not samples:\n raise StopRequest(\"Impossible to obtain sample\")\n else:\n return samples", "response": "Collect the sample from the server."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_pattern(self, url):\n path = urlparse(url).path\n directories, filename = os.path.split(path)\n if len(filename) > 0:\n pattern = 
self.get_pattern_for_filename(filename)\n if directories[-1] != \"/\":\n directories += \"/\"\n return directories + pattern\n else:\n return self.get_pattern_for_directory(directories)", "response": "Return the path part of the URL with the last element replaced with its pattern."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreads the bounddict json file and return the polynomial boundaries.", "response": "def get_boundaries(bounddict_file, slitlet_number):\n \"\"\"Read the bounddict json file and return the polynomial boundaries.\n\n Parameters\n ----------\n bounddict_file : file handler\n File containing the bounddict JSON data.\n slitlet_number : int\n Number of slitlet.\n\n Returns\n -------\n pol_lower_boundary : numpy polynomial\n Polynomial defining the lower boundary of the slitlet.\n pol_upper_boundary : numpy polynomial\n Polynomial defining the upper boundary of the slitlet.\n xmin_lower : float\n Minimum abscissae for the lower boundary.\n xmax_lower : float\n Maximum abscissae for the lower boundary.\n xmin_upper : float\n Minimum abscissae for the upper boundary.\n xmax_upper : float\n Maximum abscissae for the upper boundary.\n csu_bar_slit_center : float\n CSU bar slit center (in mm)\n\n \"\"\"\n\n bounddict = json.loads(open(bounddict_file.name).read())\n\n # return values in case the requested slitlet number is not defined\n pol_lower_boundary = None\n pol_upper_boundary = None\n xmin_lower = None\n xmax_lower = None\n xmin_upper = None\n xmax_upper = None\n csu_bar_slit_center = None\n\n # search the slitlet number in bounddict\n slitlet_label = \"slitlet\" + str(slitlet_number).zfill(2)\n if slitlet_label in bounddict['contents'].keys():\n list_date_obs = list(bounddict['contents'][slitlet_label].keys())\n list_date_obs.sort()\n num_date_obs = len(list_date_obs)\n if num_date_obs == 1:\n date_obs = list_date_obs[0]\n tmp_dict = bounddict['contents'][slitlet_label][date_obs]\n pol_lower_boundary = 
Polynomial(tmp_dict['boundary_coef_lower'])\n pol_upper_boundary = Polynomial(tmp_dict['boundary_coef_upper'])\n xmin_lower = tmp_dict['boundary_xmin_lower']\n xmax_lower = tmp_dict['boundary_xmax_lower']\n xmin_upper = tmp_dict['boundary_xmin_upper']\n xmax_upper = tmp_dict['boundary_xmax_upper']\n csu_bar_slit_center = tmp_dict['csu_bar_slit_center']\n else:\n raise ValueError(\"num_date_obs = \" + str(num_date_obs) +\n \" (must be 1)\")\n else:\n print(\"WARNING: slitlet number \" + str(slitlet_number) +\n \" is not available in \" + bounddict_file.name)\n\n # return result\n return pol_lower_boundary, pol_upper_boundary, \\\n xmin_lower, xmax_lower, xmin_upper, xmax_upper, \\\n csu_bar_slit_center"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ntransform the roles to use enoslib.host.Host hosts.", "response": "def _to_enos_roles(roles):\n \"\"\"Transform the roles to use enoslib.host.Host hosts.\n\n Args:\n roles (dict): roles returned by\n :py:func:`enoslib.infra.provider.Provider.init`\n \"\"\"\n\n def to_host(h):\n extra = {}\n # create extra_vars for the nics\n # network_role = ethX\n for nic, roles in h[\"nics\"]:\n for role in roles:\n extra[role] = nic\n\n return Host(h[\"host\"], user=\"root\", extra=extra)\n\n enos_roles = {}\n for role, hosts in roles.items():\n enos_roles[role] = [to_host(h) for h in hosts]\n logger.debug(enos_roles)\n return enos_roles"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _to_enos_networks(networks):\n nets = []\n for roles, network in networks:\n nets.append(network.to_enos(roles))\n logger.debug(nets)\n return nets", "response": "Transform the networks returned by deploy5k to enos."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef init(self, force_deploy=False, client=None):\n _force_deploy = self.provider_conf.force_deploy\n self.provider_conf.force_deploy = 
_force_deploy or force_deploy\n self._provider_conf = self.provider_conf.to_dict()\n r = api.Resources(self._provider_conf, client=client)\n r.launch()\n roles = r.get_roles()\n networks = r.get_networks()\n\n return (_to_enos_roles(roles),\n _to_enos_networks(networks))", "response": "Reserve and deploys the nodes according to the resources section."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nfinding all Wikipedia pages referring to the specified name in English and return a dictionary where the keys are the language code and the values are the titles of the corresponding pages.", "response": "def _find_wikipedia_names(self, name_en):\n \"\"\"\n Finds all Wikipedia pages referring to the specified name in English and\n returns a dictionary where the keys are the language code and the values\n are the titles of the corresponding pages.\n \"\"\"\n url = 'https://en.wikipedia.org/w/api.php'\n params = {'action': 'query',\n 'titles': name_en,\n 'prop': 'langlinks',\n 'lllimit': 500,\n 'format': 'json'}\n r = requests.get(url, params=params)\n if not r:\n raise ValueError('Could not find wikipedia page: {0}'.format(name_en))\n out = r.json()\n\n names = {}\n pages = out['query']['pages']\n for page in pages:\n for langlink in pages[page].get('langlinks', []):\n names[langlink['lang']] = langlink['*']\n\n return names"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef append_response(self, response):\n self._responses.append(response)\n if 'Warning' in response.headers:\n LOGGER.warning(\n 'HTTP %s %s Warning (%s): %s (attempt %s)',\n response.request.method, response.request.url,\n response.code, response.headers['Warning'],\n len(self._responses))", "response": "Append the response to the stack of responses."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef body(self):\n if not self._responses:\n return None\n if 
self._responses[-1].code >= 400:\n return self._error_message()\n return self._deserialize()", "response": "Returns the HTTP response body deserialized if possible."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns the parsed link header if it was set, returning a list of dicts.", "response": "def links(self):\n \"\"\"Return the parsed link header if it was set, returning a list of\n the links as a dict.\n\n :rtype: list(dict()) or None\n\n \"\"\"\n if not self._responses:\n return None\n if 'Link' in self._responses[-1].headers:\n links = []\n for l in headers.parse_link(self._responses[-1].headers['Link']):\n link = {'target': l.target}\n link.update({k: v for (k, v) in l.parameters})\n links.append(link)\n return links"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ndecodes bytes to UTF-8 strings as a single value, list, or dict.", "response": "def _decode(self, value):\n \"\"\"Decode bytes to UTF-8 strings as a single value, list, or dict.\n\n :param mixed value: The value to decode\n :rtype: mixed\n\n \"\"\"\n if isinstance(value, list):\n return [self._decode(v) for v in value]\n elif isinstance(value, dict):\n return {self._decode(k): self._decode(v)\n for k, v in value.items()}\n elif isinstance(value, bytes):\n return value.decode('utf-8')\n return value"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _deserialize(self):\n if not self._responses or not self._responses[-1].body:\n return None\n if 'Content-Type' not in self._responses[-1].headers:\n return self._responses[-1].body\n try:\n content_type = algorithms.select_content_type(\n [headers.parse_content_type(\n self._responses[-1].headers['Content-Type'])],\n AVAILABLE_CONTENT_TYPES)\n except errors.NoMatch:\n return self._responses[-1].body\n\n if content_type[0] == CONTENT_TYPE_JSON:\n return self._decode(\n 
self._json.loads(self._decode(self._responses[-1].body)))\n elif content_type[0] == CONTENT_TYPE_MSGPACK: # pragma: nocover\n return self._decode(\n self._msgpack.unpackb(self._responses[-1].body))", "response": "Try and deserialize a response body based upon the specified\n content type."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ntry and extract the error message from a HTTP error response.", "response": "def _error_message(self):\n \"\"\"Try and extract the error message from a HTTP error response.\n\n :rtype: str\n\n \"\"\"\n body = self._deserialize()\n return body.get('message', body) if isinstance(body, dict) else body"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\nasync def http_fetch(self, url,\n method='GET',\n request_headers=None,\n body=None,\n content_type=CONTENT_TYPE_MSGPACK,\n follow_redirects=False,\n max_redirects=MAX_REDIRECTS,\n connect_timeout=DEFAULT_CONNECT_TIMEOUT,\n request_timeout=DEFAULT_REQUEST_TIMEOUT,\n max_http_attempts=MAX_HTTP_RETRIES,\n auth_username=None,\n auth_password=None,\n user_agent=None,\n validate_cert=True,\n allow_nonstandard_methods=False,\n dont_retry=None):\n \"\"\"Perform a HTTP request\n\n Will retry up to ``self.MAX_HTTP_RETRIES`` times.\n\n :param str url: The URL for the request\n :param str method: The HTTP request method, defaults to ``GET``\n :param dict request_headers: Headers to include in the HTTP request\n :param mixed body: The HTTP request body to send with the request\n :param content_type: The mime type to use for requests & responses.\n Defaults to ``application/msgpack``\n :type content_type: :py:class:`~ietfparse.datastructures.ContentType`\n or str\n :param bool follow_redirects: Follow HTTP redirects when received\n :param int max_redirects: Maximum number of redirects to follow,\n default is 5\n :param float connect_timeout: Timeout for initial connection in\n seconds, default 20 seconds\n 
:param float request_timeout: Timeout for entire request in seconds,\n default 20 seconds\n :param int max_http_attempts: Maximum number of times to retry\n a request, default is 3 attempts\n :param str auth_username: Username for HTTP authentication\n :param str auth_password: Password for HTTP authentication\n :param str user_agent: The str used for the ``User-Agent`` header,\n default used if unspecified.\n :param bool validate_cert: For HTTPS requests, validate the server's\n certificate? Default is True\n :param bool allow_nonstandard_methods: Allow methods that don't adhere\n to the HTTP spec.\n :param set dont_retry: A list of status codes that will not be retried\n if an error is returned. Default: set({})\n :rtype: HTTPResponse\n\n \"\"\"\n response = HTTPResponse()\n\n request_headers = self._http_req_apply_default_headers(\n request_headers, content_type, body)\n\n if body:\n body = self._http_req_body_serialize(\n body, request_headers['Content-Type'])\n\n if not dont_retry:\n dont_retry = set({})\n\n client = httpclient.AsyncHTTPClient()\n\n # Workaround for Tornado defect.\n if hasattr(client, 'max_clients') and os.getenv('HTTP_MAX_CLIENTS'):\n client.max_clients = int(os.getenv('HTTP_MAX_CLIENTS'))\n\n for attempt in range(0, max_http_attempts):\n LOGGER.debug('%s %s (Attempt %i of %i) %r',\n method, url, attempt + 1, max_http_attempts,\n request_headers)\n if attempt > 0:\n request_headers['X-Retry-Attempt'] = str(attempt + 1)\n try:\n resp = await client.fetch(\n url,\n method=method,\n headers=request_headers,\n body=body,\n auth_username=auth_username,\n auth_password=auth_password,\n connect_timeout=connect_timeout,\n request_timeout=request_timeout,\n user_agent=user_agent or self._http_req_user_agent(),\n follow_redirects=follow_redirects,\n max_redirects=max_redirects,\n raise_error=False,\n validate_cert=validate_cert,\n allow_nonstandard_methods=allow_nonstandard_methods)\n except (ConnectionError,\n CurlError,\n OSError,\n socket.gaierror) 
as error:\n response.append_exception(error)\n LOGGER.warning(\n 'HTTP Request Error for %s to %s attempt %i of %i: %s',\n method, url, attempt + 1, max_http_attempts, error)\n continue\n\n # Keep track of each response\n response.append_response(resp)\n\n # If the response is ok, finish and exit\n if response.ok:\n response.finish()\n return response\n elif resp.code in dont_retry:\n break\n elif resp.code in {423, 429}:\n await self._http_resp_rate_limited(resp)\n elif resp.code < 500:\n LOGGER.debug('HTTP Response Error for %s to %s'\n 'attempt %i of %i (%s): %s',\n method, url, resp.code, attempt + 1,\n max_http_attempts, response.body)\n response.finish()\n return response\n\n LOGGER.warning(\n 'HTTP Error for %s to %s, attempt %i of %i (%s): %s',\n method, url, attempt + 1, max_http_attempts, resp.code,\n response.body)\n\n LOGGER.warning('HTTP %s to %s failed after %i attempts', method, url,\n max_http_attempts)\n response.finish()\n return response", "response": "Perform a HTTP request."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nset default values for common HTTP request headers.", "response": "def _http_req_apply_default_headers(self, request_headers,\n content_type, body):\n \"\"\"Set default values for common HTTP request headers\n\n :param dict request_headers: The HTTP request headers\n :param content_type: The mime-type used in the request/response\n :type content_type: :py:class:`ietfparse.datastructures.ContentType`\n or str\n :param mixed body: The request body\n :rtype: dict\n\n \"\"\"\n if not request_headers:\n request_headers = {}\n request_headers.setdefault(\n 'Accept', ', '.join([str(ct) for ct in AVAILABLE_CONTENT_TYPES]))\n if body:\n request_headers.setdefault(\n 'Content-Type', str(content_type) or str(CONTENT_TYPE_MSGPACK))\n if hasattr(self, 'correlation_id'):\n request_headers.setdefault(\n 'Correlation-Id', self.correlation_id)\n elif hasattr(self, 'request') and \\\n 
self.request.headers.get('Correlation-Id'):\n request_headers.setdefault(\n 'Correlation-Id', self.request.headers['Correlation-Id'])\n return request_headers"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _http_req_body_serialize(self, body, content_type):\n if not body or not isinstance(body, (dict, list)):\n return body\n content_type = headers.parse_content_type(content_type)\n if content_type == CONTENT_TYPE_JSON:\n return self.__hcm_json.dumps(body)\n elif content_type == CONTENT_TYPE_MSGPACK:\n return self.__hcm_msgpack.packb(body)\n raise ValueError('Unsupported Content Type')", "response": "Conditionally serialize the request body value if mime_type is set\n and it is serializable."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _http_req_user_agent(self):\n # Tornado Request Handler\n try:\n return '{}/{}'.format(\n self.settings['service'], self.settings['version'])\n except (AttributeError, KeyError):\n pass\n\n # Rejected Consumer\n if hasattr(self, '_process'):\n try:\n return '{}/{}'.format(\n self._process.consumer_name,\n self._process.consumer_version)\n except AttributeError:\n pass\n return DEFAULT_USER_AGENT", "response": "Return the User-Agent value to use in HTTP requests."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nextract the Retry-After header value if the request was rate limited and return a future to sleep for the specified duration.", "response": "def _http_resp_rate_limited(response):\n \"\"\"Extract the ``Retry-After`` header value if the request was rate\n limited and return a future to sleep for the specified duration.\n\n :param tornado.httpclient.HTTPResponse response: The response\n :rtype: tornado.concurrent.Future\n\n \"\"\"\n parsed = parse.urlparse(response.request.url)\n duration = int(response.headers.get('Retry-After', 3))\n LOGGER.warning('Rate 
Limited by %s, retrying in %i seconds',\n parsed.netloc, duration)\n return asyncio.sleep(duration)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_session():\n # NOTE(msimonin): We provide only a basic support which focus\n # Chameleon cloud and its rc files\n if os.environ.get(\"OS_IDENTITY_API_VERSION\") == \"3\":\n logging.info(\"Creating a v3 Keystone Session\")\n auth = v3.Password(\n auth_url=os.environ[\"OS_AUTH_URL\"],\n username=os.environ[\"OS_USERNAME\"],\n password=os.environ[\"OS_PASSWORD\"],\n project_id=os.environ[\"OS_PROJECT_ID\"],\n user_domain_name=os.environ[\"OS_USER_DOMAIN_NAME\"]\n )\n\n else:\n logging.info(\"Creating a v2 Keystone Session\")\n auth = v2.Password(\n auth_url=os.environ[\"OS_AUTH_URL\"],\n username=os.environ[\"OS_USERNAME\"],\n password=os.environ[\"OS_PASSWORD\"],\n tenant_id=os.environ[\"OS_TENANT_ID\"])\n\n return session.Session(auth=auth)", "response": "Build the session object."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nchecks that the base image is available.", "response": "def check_glance(session, image_name):\n \"\"\"Check that the base image is available.\n\n Fails if the base image isn't added.\n This means the image should be added manually.\n \"\"\"\n gclient = glance.Client(GLANCE_VERSION, session=session,\n region_name=os.environ['OS_REGION_NAME'])\n images = gclient.images.list()\n name_ids = [{'name': i['name'], 'id': i['id']} for i in images]\n if image_name not in list(map(itemgetter('name'), name_ids)):\n logger.error(\"[glance]: Image %s is missing\" % image_name)\n raise Exception(\"Image %s is missing\" % image_name)\n else:\n image = [i for i in name_ids if i['name'] == image_name]\n image_id = image[0]['id']\n logger.info(\"[glance]: Using image %s:%s\" % (image_name, image_id))\n return image_id"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nbuilding the 
flavors mapping, returning the mappings id <-> flavor", "response": "def check_flavors(session):\n \"\"\"Build the flavors mapping\n\n returns the mappings id <-> flavor\n \"\"\"\n nclient = nova.Client(NOVA_VERSION, session=session,\n region_name=os.environ['OS_REGION_NAME'])\n flavors = nclient.flavors.list()\n to_id = dict(list(map(lambda n: [n.name, n.id], flavors)))\n to_flavor = dict(list(map(lambda n: [n.id, n.name], flavors)))\n return to_id, to_flavor"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef check_network(session, configure_network, network, subnet,\n dns_nameservers=None, allocation_pool=None):\n \"\"\"Check the network status for the deployment.\n\n If needed, it creates a dedicated :\n * private network /subnet\n * router between this network and the external network\n \"\"\"\n nclient = neutron.Client('2', session=session,\n region_name=os.environ['OS_REGION_NAME'])\n\n # Check the security groups\n secgroups = nclient.list_security_groups()['security_groups']\n secgroup_name = SECGROUP_NAME\n if secgroup_name not in list(map(itemgetter('name'), secgroups)):\n secgroup = {'name': secgroup_name,\n 'description': 'Enos Security Group'}\n res = nclient.create_security_group({'security_group': secgroup})\n secgroup = res['security_group']\n logger.info(\"[neutron]: %s security group created\" % secgroup_name)\n # create the rules\n tcp = {'protocol': 'tcp',\n 'direction': 'ingress',\n 'port_range_min': '1',\n 'port_range_max': '65535',\n 'security_group_id': secgroup['id']}\n icmp = {'protocol': 'icmp',\n 'direction': 'ingress',\n 'security_group_id': secgroup['id']}\n nclient.create_security_group_rule({'security_group_rule': tcp})\n logger.info(\"[neutron]: %s rule (all tcp) created\" % secgroup_name)\n nclient.create_security_group_rule({'security_group_rule': icmp})\n logger.info(\"[neutron]: %s rule (all icmp) created\" % secgroup_name)\n else:\n logger.info(\"[neutron]: 
Reusing security-groups %s \" % secgroup_name)\n\n networks = nclient.list_networks()['networks']\n network_name = network['name']\n if network_name not in list(map(itemgetter('name'), networks)):\n network = {'name': network_name}\n res = nclient.create_network({'network': network})\n network = res['network']\n logger.info(\"[neutron]: %s network created\" % network_name)\n else:\n network = [n for n in networks if n['name'] == network_name][0]\n logger.info(\"[neutron]: Reusing existing %s network\", network)\n\n # find ext_net\n ext_net = [n for n in networks if n['router:external']]\n if len(ext_net) < 1:\n raise Exception(\"No external network found\")\n ext_net = ext_net[0]\n\n subnets = nclient.list_subnets()['subnets']\n subnet_name = subnet['name']\n if (subnet_name not in list(map(itemgetter('name'), subnets))\n and configure_network):\n subnet = {'name': subnet['name'],\n 'network_id': network['id'],\n # NOTE(msimonin): using the dns of chameleon\n # for a generic openstack provider we should think to\n # parameteried this or use public available dns\n 'cidr': subnet['cidr'],\n 'ip_version': 4}\n if dns_nameservers is not None:\n subnet.update({'dns_nameservers': dns_nameservers})\n if allocation_pool is not None:\n subnet.update({'allocation_pools': [allocation_pool]})\n\n s = nclient.create_subnet({'subnet': subnet})\n logger.debug(s)\n subnet = s['subnet']\n logger.info(\"[neutron]: %s subnet created\" % subnet_name)\n else:\n subnet = [s for s in subnets if s['name'] == subnet_name][0]\n logger.info(\"[neutron]: Reusing %s subnet\", subnet)\n\n # create a router\n routers = nclient.list_routers()\n router_name = ROUTER_NAME\n logger.debug(routers)\n router_present = router_name not in [r['name'] for r in routers['routers']]\n if (router_present and configure_network):\n router = {\n 'name': router_name,\n 'external_gateway_info': {\n 'network_id': ext_net['id']\n }\n }\n r = nclient.create_router({'router': router})\n logger.info(\"[neutron] %s 
router created\" % router_name)\n # NOTE(msimonin): We should check the interface outside this block\n # in case the router is created but the interface is not added\n interface = {\n 'subnet_id': subnet['id'].encode('UTF-8')\n }\n nclient.add_interface_router(str(r['router']['id']), interface)\n\n return (ext_net, network, subnet)", "response": "Check the status of the network and create a dedicated network if needed"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nwaits for the servers to be ready.", "response": "def wait_for_servers(session, servers):\n \"\"\"Wait for the servers to be ready.\n\n Note(msimonin): we don't garantee the SSH connection to be ready.\n \"\"\"\n nclient = nova.Client(NOVA_VERSION, session=session,\n region_name=os.environ['OS_REGION_NAME'])\n while True:\n deployed = []\n undeployed = []\n for server in servers:\n c = nclient.servers.get(server.id)\n if c.addresses != {} and c.status == 'ACTIVE':\n deployed.append(server)\n if c.status == 'ERROR':\n undeployed.append(server)\n logger.info(\"[nova]: Polling the Deployment\")\n logger.info(\"[nova]: %s deployed servers\" % len(deployed))\n logger.info(\"[nova]: %s undeployed servers\" % len(undeployed))\n if len(deployed) + len(undeployed) >= len(servers):\n break\n time.sleep(3)\n return deployed, undeployed"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncheck the status of the servers for the deployment.", "response": "def check_servers(session, machines, extra_prefix=\"\",\n force_deploy=False, key_name=None, image_id=None,\n flavors='m1.medium', network=None, ext_net=None,\n scheduler_hints=None):\n \"\"\"Checks the servers status for the deployment.\n\n If needed, it creates new servers and add a floating ip to one of them.\n This server can be used as a gateway to the others.\n \"\"\"\n\n scheduler_hints = scheduler_hints or []\n nclient = nova.Client(NOVA_VERSION, session=session,\n 
region_name=os.environ['OS_REGION_NAME'])\n servers = nclient.servers.list(\n search_opts={'name': '-'.join([DEFAULT_PREFIX, extra_prefix])})\n wanted = _get_total_wanted_machines(machines)\n if force_deploy:\n for server in servers:\n server.delete()\n servers = []\n\n if len(servers) == wanted:\n logger.info(\"[nova]: Reusing existing servers : %s\", servers)\n return servers\n elif len(servers) > 0 and len(servers) < wanted:\n raise Exception(\"Only %s/%s servers found\" % (servers, wanted))\n\n # starting the servers\n total = 0\n for machine in machines:\n number = machine.number\n roles = machine.roles\n logger.info(\"[nova]: Starting %s servers\" % number)\n logger.info(\"[nova]: for roles %s\" % roles)\n logger.info(\"[nova]: with extra hints %s\" % scheduler_hints)\n for _ in range(number):\n flavor = machine.flavour\n if isinstance(flavors, str):\n flavor = flavors\n else:\n flavor_to_id, _ = flavors\n flavor = flavor_to_id[flavor]\n\n if scheduler_hints:\n _scheduler_hints = \\\n scheduler_hints[total % len(scheduler_hints)]\n else:\n _scheduler_hints = []\n\n server = nclient.servers.create(\n name='-'.join([DEFAULT_PREFIX, extra_prefix, str(total)]),\n image=image_id,\n flavor=flavor,\n nics=[{'net-id': network['id']}],\n key_name=key_name,\n security_groups=[SECGROUP_NAME],\n scheduler_hints=_scheduler_hints)\n servers.append(server)\n total = total + 1\n return servers"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef is_in_current_deployment(server, extra_prefix=\"\"):\n return re.match(r\"^%s\" % '-'.join([DEFAULT_PREFIX, extra_prefix]),\n server.name) is not None", "response": "Check if a server is in the current deployment"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nallow several interfaces to be added and accessed from other machines.", "response": "def allow_address_pairs(session, network, subnet):\n \"\"\"Allow several interfaces to be 
added and accessed from the other machines.\n\n This is particularly useful when working with virtual ips.\n \"\"\"\n nclient = neutron.Client('2', session=session,\n region_name=os.environ['OS_REGION_NAME'])\n ports = nclient.list_ports()\n ports_to_update = filter(\n lambda p: p['network_id'] == network['id'],\n ports['ports'])\n logger.info('[nova]: Allowing address pairs for ports %s' %\n list(map(lambda p: p['fixed_ips'], ports_to_update)))\n for port in ports_to_update:\n try:\n nclient.update_port(port['id'], {\n 'port': {\n 'allowed_address_pairs': [{\n 'ip_address': subnet\n }]\n }\n })\n except Exception:\n # NOTE(msimonin): dhcp and router interface port\n # seems to have enabled_sec_groups = False which\n # prevents them from being updated, just throw a warning\n # and skip them\n logger.warn(\"Can't update port %s\" % port)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ncheck all resources needed by Enos.", "response": "def check_environment(provider_conf):\n \"\"\"Check all resources needed by Enos.\"\"\"\n session = get_session()\n image_id = check_glance(session, provider_conf.image)\n flavor_to_id, id_to_flavor = check_flavors(session)\n ext_net, network, subnet = check_network(\n session,\n provider_conf.configure_network,\n provider_conf.network,\n subnet=provider_conf.subnet,\n dns_nameservers=provider_conf.dns_nameservers,\n allocation_pool=provider_conf.allocation_pool)\n\n return {\n 'session': session,\n 'image_id': image_id,\n 'flavor_to_id': flavor_to_id,\n 'id_to_flavor': id_to_flavor,\n 'ext_net': ext_net,\n 'network': network,\n 'subnet': subnet\n }"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ndetermining if a claimspec matches a single value.", "response": "def claims_match(value, claimspec):\n \"\"\"\n Implements matching according to section 5.5.1 of\n http://openid.net/specs/openid-connect-core-1_0.html\n The lack of value is not checked here.\n Also the text doesn't 
prohibit having both 'value' and 'values'.\n\n :param value: single value\n :param claimspec: None or dictionary with 'essential', 'value' or 'values'\n as key\n :return: Boolean\n \"\"\"\n if claimspec is None: # match anything\n return True\n\n matched = False\n for key, val in claimspec.items():\n if key == \"value\":\n if value == val:\n matched = True\n elif key == \"values\":\n if value in val:\n matched = True\n elif key == 'essential':\n # Whether it's essential or not doesn't change anything here\n continue\n\n if matched:\n break\n\n if matched is False:\n if list(claimspec.keys()) == ['essential']:\n return True\n\n return matched"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncollect user info from a session.", "response": "def collect_user_info(endpoint_context, session, userinfo_claims=None):\n \"\"\"\n Collect information about a user.\n This can happen in two cases, either when constructing an IdToken or\n when returning user info through the UserInfo endpoint\n\n :param session: Session information\n :param userinfo_claims: user info claims\n :return: User info\n \"\"\"\n authn_req = session['authn_req']\n\n if userinfo_claims is None:\n uic = scope2claims(authn_req[\"scope\"])\n\n # Get only keys allowed by user and update the dict if such info\n # is stored in session\n perm_set = session.get('permission')\n if perm_set:\n uic = {key: uic[key] for key in uic if key in perm_set}\n\n uic = update_claims(session, \"userinfo\", uic)\n\n if uic:\n userinfo_claims = Claims(**uic)\n else:\n userinfo_claims = None\n\n logger.debug(\n \"userinfo_claim: %s\" % sanitize(userinfo_claims.to_dict()))\n\n logger.debug(\"Session info: %s\" % sanitize(session))\n\n authn_event = session['authn_event']\n if authn_event:\n uid = authn_event[\"uid\"]\n else:\n uid = session['uid']\n\n info = endpoint_context.userinfo(uid, authn_req['client_id'],\n userinfo_claims)\n\n if \"sub\" in userinfo_claims:\n if not 
claims_match(session[\"sub\"], userinfo_claims[\"sub\"]):\n raise FailedAuthentication(\"Unmatched sub claim\")\n\n info[\"sub\"] = session[\"sub\"]\n try:\n logger.debug(\"user_info_response: {}\".format(info))\n except UnicodeEncodeError:\n try:\n logger.debug(\n \"user_info_response: {}\".format(info.encode('utf-8')))\n except Exception:\n pass\n\n return info"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef userinfo_in_id_token_claims(endpoint_context, session, def_itc=None):\n if def_itc:\n itc = def_itc\n else:\n itc = {}\n\n itc.update(id_token_claims(session))\n\n if not itc:\n return None\n\n _claims = by_schema(endpoint_context.id_token_schema, **itc)\n\n if _claims:\n return collect_user_info(endpoint_context, session, _claims)\n else:\n return None", "response": "Collect user info claims that are to be placed in the id token."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning |list| or |tuple| of rows of tabular data.", "response": "def value_matrix(self):\n \"\"\"Converted rows of tabular data.\n\n Returns:\n |list| or |tuple|: Table rows.\n \"\"\"\n\n if self.__value_matrix:\n return self.__value_matrix\n\n self.__value_matrix = [\n [value_dp.data for value_dp in value_dp_list] for value_dp_list in self.value_dp_matrix\n ]\n\n return self.__value_matrix"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef value_dp_matrix(self):\n\n if self.__value_dp_matrix is None:\n self.__value_dp_matrix = self.__dp_extractor.to_dp_matrix(\n to_value_matrix(self.headers, self.rows)\n )\n\n return self.__value_dp_matrix", "response": "Returns the value DP matrix for the current entry."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nvalidates that the rows of the table are valid.", "response": "def validate_rows(self):\n \"\"\"\n :raises ValueError:\n \"\"\"\n\n 
invalid_row_idx_list = []\n\n for row_idx, row in enumerate(self.rows):\n if isinstance(row, (list, tuple)) and len(self.headers) != len(row):\n invalid_row_idx_list.append(row_idx)\n\n if isinstance(row, dict):\n if not all([header in row for header in self.headers]):\n invalid_row_idx_list.append(row_idx)\n\n if not invalid_row_idx_list:\n return\n\n for invalid_row_idx in invalid_row_idx_list:\n logger.debug(\n \"invalid row (line={}): {}\".format(invalid_row_idx, self.rows[invalid_row_idx])\n )\n\n raise ValueError(\n \"table header length and row length mismatch:\\n\"\n + \" header(len={}): {}\\n\".format(len(self.headers), self.headers)\n + \" # of mismatched rows: {} out of {}\\n\".format(\n len(invalid_row_idx_list), self.num_rows\n )\n )"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning the table data as a |dict| instance.", "response": "def as_dict(self):\n \"\"\"\n :return: Table data as a |dict| instance.\n :rtype: dict\n\n :Sample Code:\n .. code:: python\n\n from tabledata import TableData\n\n TableData(\n \"sample\",\n [\"a\", \"b\"],\n [[1, 2], [3.3, 4.4]]\n ).as_dict()\n\n :Output:\n .. 
code:: json\n\n {'sample': [OrderedDict([('a', 1), ('b', 2)]), OrderedDict([('a', 3.3), ('b', 4.4)])]}\n \"\"\"\n\n dict_body = []\n for row in self.rows:\n if not row:\n continue\n\n values = [\n (header, value) for header, value in zip(self.headers, row) if value is not None\n ]\n\n if not values:\n continue\n\n dict_body.append(OrderedDict(values))\n\n return {self.table_name: dict_body}"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef as_tuple(self):\n\n Row = namedtuple(\"Row\", self.headers)\n\n for value_dp_list in self.value_dp_matrix:\n if typepy.is_empty_sequence(value_dp_list):\n continue\n\n row = Row(*[value_dp.data for value_dp in value_dp_list])\n\n yield row", "response": "Return the rows of the table."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning the value matrix as a pandas. DataFrame instance.", "response": "def as_dataframe(self):\n \"\"\"\n :return: Table data as a ``pandas.DataFrame`` instance.\n :rtype: pandas.DataFrame\n\n :Sample Code:\n .. code-block:: python\n\n from tabledata import TableData\n\n TableData(\n \"sample\",\n [\"a\", \"b\"],\n [[1, 2], [3.3, 4.4]]\n ).as_dataframe()\n\n :Output:\n .. code-block:: none\n\n a b\n 0 1 2\n 1 3.3 4.4\n\n :Dependency Packages:\n - `pandas `__\n \"\"\"\n\n import pandas\n\n dataframe = pandas.DataFrame(self.value_matrix)\n if not self.is_empty_header():\n dataframe.columns = self.headers\n\n return dataframe"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef from_dataframe(dataframe, table_name=\"\"):\n\n return TableData(table_name, list(dataframe.columns.values), dataframe.values.tolist())", "response": "Initialize a TableData instance from a pandas. 
DataFrame instance."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ndecorates a function to be called async on the loop thread", "response": "def call_async(func):\n \"\"\"Decorates a function to be called async on the loop thread\"\"\"\n\n @wraps(func)\n def wrapper(self, *args, **kw):\n \"\"\"Wraps instance method to be called on loop thread\"\"\"\n\n def call():\n \"\"\"Calls function on loop thread\"\"\"\n try:\n func(self, *args, **kw)\n except Exception:\n logger.exception(\n \"failed to call async [%r] with [%r] [%r]\", func, args, kw\n )\n\n self.loop.call_soon_threadsafe(call)\n\n return wrapper"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef call_sync(func):\n\n @wraps(func)\n def wrapper(self, *args, **kw):\n \"\"\"Wraps instance method to be called on loop thread\"\"\"\n\n # Just return when already on the event thread\n if self.thread.ident == get_ident():\n return func(self, *args, **kw)\n\n barrier = Barrier(2)\n result = None\n ex = None\n\n def call():\n \"\"\"Calls function on loop thread\"\"\"\n nonlocal result, ex\n try:\n result = func(self, *args, **kw)\n except Exception as exc:\n ex = exc\n finally:\n barrier.wait()\n\n self.loop.call_soon_threadsafe(call)\n barrier.wait()\n if ex:\n raise ex or Exception(\"Unknown error\")\n return result\n\n return wrapper", "response": "Decorator to call a function on the loop thread."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef target(self):\n while self.event.wait():\n self.event.clear()\n while True:\n with self.lock:\n if not self.count:\n break\n self.count -= 1\n with self.condition:\n self.condition.notify_all()", "response": "Thread routine that returns when the count of the items in the queue is reached."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef create_connection(self, 
room, ws_url, agent, cookies):\n\n urlparts = urlsplit(ws_url)\n req = Request(\"GET\", ws_url)\n cookies = get_cookie_header(cookies, req)\n if cookies:\n headers = dict(Cookie=cookies)\n else:\n headers = None\n\n factory = WebSocketClientFactory(ws_url, headers=headers, loop=self.loop)\n factory.useragent = agent\n factory.protocol = lambda: room\n conn = self.loop.create_connection(\n factory,\n host=urlparts.netloc,\n port=urlparts.port or 443,\n ssl=urlparts.scheme == \"wss\",\n )\n asyncio.ensure_future(conn, loop=self.loop)", "response": "Creates a new connection to the given room."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef __send_message(self, proto, payload):\n # pylint: disable=no-self-use\n \"\"\"Sends a message\"\"\"\n\n try:\n if not isinstance(payload, bytes):\n payload = payload.encode(\"utf-8\")\n if not proto.connected:\n raise IOError(\"not connected\")\n proto.sendMessage(payload)\n logger.debug(\"sent: %r\", payload)\n except Exception as ex:\n logger.exception(\"Failed to send message\")\n proto.reraise(ex)", "response": "Sends a message to the specified protocol."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef close(self, proto):\n # pylint: disable=no-self-use\n \"\"\"Closes a connection\"\"\"\n\n try:\n proto.sendClose()\n except Exception as ex:\n logger.exception(\"Failed to send close\")\n proto.reraise(ex)", "response": "Closes a connection to the specified protocol."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef process(self):\n\n with self.lock, self.enlock:\n queue = copy(self.queue)\n self.queue.clear()\n callbacks = copy(self.callbacks)\n\n with self.lock:\n rm_cb = False\n for ki, vi in queue.items():\n if ki in self.callbacks:\n for item in vi:\n for cb in self.callbacks[ki]:\n if cb(item) is False:\n callbacks[ki].remove(cb)\n 
if not callbacks[ki]:\n del callbacks[ki]\n rm_cb = True\n\n with self.lock:\n if rm_cb:\n self.callbacks.clear()\n for k, v in callbacks.items():\n self.callbacks[k].extend(v)\n return len(self.callbacks)", "response": "Process queue for these listeners. Only the items with type that\n matches."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nadds a new listener", "response": "def add(self, callback_type, callback):\n \"\"\"Add a new listener\"\"\"\n\n with self.lock:\n self.callbacks[callback_type].append(callback)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nqueues a new data item, make item iterable", "response": "def enqueue(self, item_type, item):\n \"\"\"Queue a new data item, make item iterable\"\"\"\n\n with self.enlock:\n self.queue[item_type].append(item)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nextracting slitlet 2d image from image with original EMIR dimensions.", "response": "def extract_slitlet2d(self, image_2k2k):\n \"\"\"Extract slitlet 2d image from image with original EMIR dimensions.\n\n Parameters\n ----------\n image_2k2k : numpy array\n Original image (dimensions EMIR_NAXIS1 * EMIR_NAXIS2)\n\n Returns\n -------\n slitlet2d : numpy array\n Image corresponding to the slitlet region defined by its\n bounding box.\n\n \"\"\"\n\n # protections\n naxis2, naxis1 = image_2k2k.shape\n if naxis1 != EMIR_NAXIS1:\n raise ValueError('Unexpected naxis1')\n if naxis2 != EMIR_NAXIS2:\n raise ValueError('Unexpected naxis2')\n\n # extract slitlet region\n slitlet2d = image_2k2k[(self.bb_ns1_orig - 1):self.bb_ns2_orig,\n (self.bb_nc1_orig - 1):self.bb_nc2_orig]\n\n # transform to float\n slitlet2d = slitlet2d.astype(np.float)\n\n # display slitlet2d with boundaries and middle spectrum trail\n if abs(self.debugplot) in [21, 22]:\n ax = ximshow(slitlet2d, title=\"Slitlet#\" + str(self.islitlet),\n first_pixel=(self.bb_nc1_orig, 
self.bb_ns1_orig),\n                         show=False)\n            xdum = np.linspace(1, EMIR_NAXIS1, num=EMIR_NAXIS1)\n            ylower = \\\n                self.list_spectrails[self.i_lower_spectrail].poly_funct(xdum)\n            ax.plot(xdum, ylower, 'b-')\n            ymiddle = \\\n                self.list_spectrails[self.i_middle_spectrail].poly_funct(xdum)\n            ax.plot(xdum, ymiddle, 'b--')\n            yupper = \\\n                self.list_spectrails[self.i_upper_spectrail].poly_funct(xdum)\n            ax.plot(xdum, yupper, 'b-')\n            ylower_frontier = self.list_frontiers[0].poly_funct(xdum)\n            ax.plot(xdum, ylower_frontier, 'b:')\n            yupper_frontier = self.list_frontiers[1].poly_funct(xdum)\n            ax.plot(xdum, yupper_frontier, 'b:')\n            pause_debugplot(debugplot=self.debugplot, pltshow=True)\n\n        # return slitlet image\n        return slitlet2d"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef locate_unknown_arc_lines(self, slitlet2d,\n                                 times_sigma_threshold=15,\n                                 minimum_threshold=None,\n                                 delta_x_max=30,\n                                 delta_y_min=30,\n                                 min_dist_from_middle=15):\n        \"\"\"Determine the location of unknown arc lines in slitlet.\n\n        Parameters\n        ----------\n        slitlet2d : numpy array\n            Image containing the 2d slitlet image.\n        times_sigma_threshold : float\n            Times (robust) sigma above the median of the image to look\n            for arc lines.\n        minimum_threshold : float or None\n            Minimum threshold to look for arc lines.\n        delta_x_max : float\n            Maximum size of potential arc line in the X direction.\n        delta_y_min : float\n            Minimum size of potential arc line in the Y direction.\n        min_dist_from_middle : float\n            Minimum Y distance from the middle spectrum trail to the\n            extreme of the potential arc line.
 This constraint avoids\n            detecting arc line reflections as bona fide arc lines.\n\n        \"\"\"\n\n        # smooth denoising of slitlet2d\n        slitlet2d_rs, coef_rs = rescale_array_to_z1z2(slitlet2d, z1z2=(-1, 1))\n        slitlet2d_dn = restoration.denoise_nl_means(slitlet2d_rs,\n                                                    patch_size=3,\n                                                    patch_distance=2,\n                                                    multichannel=False)\n        slitlet2d_dn = rescale_array_from_z1z2(slitlet2d_dn, coef_rs)\n\n        # compute basic statistics\n        q25, q50, q75 = np.percentile(slitlet2d_dn, q=[25.0, 50.0, 75.0])\n        sigmag = 0.7413 * (q75 - q25)  # robust standard deviation\n        if abs(self.debugplot) >= 10:\n            q16, q84 = np.percentile(slitlet2d_dn, q=[15.87, 84.13])\n            print('>>> q16...:', q16)\n            print('>>> q25...:', q25)\n            print('>>> q50...:', q50)\n            print('>>> q75...:', q75)\n            print('>>> q84...:', q84)\n            print('>>> sigmaG:', sigmag)\n        if abs(self.debugplot) in [21, 22]:\n            # display initial image with zscale cuts\n            title = "Slitlet#" + str(self.islitlet) + \\\n                " (locate_unknown_arc_lines, step #1)"\n            ximshow(slitlet2d, title=title,\n                    first_pixel=(self.bb_nc1_orig, self.bb_ns1_orig),\n                    debugplot=self.debugplot)\n            # display denoised image with zscale cuts\n            title = "Slitlet#" + str(self.islitlet) + \\\n                " (locate_unknown_arc_lines, step #2)"\n            ximshow(slitlet2d_dn, title=title,\n                    first_pixel=(self.bb_nc1_orig, self.bb_ns1_orig),\n                    debugplot=self.debugplot)\n            # display image with different cuts\n            z1z2 = (q50 + times_sigma_threshold * sigmag,\n                    q50 + 2 * times_sigma_threshold * sigmag)\n            title = "Slitlet#" + str(self.islitlet) + \\\n                " (locate_unknown_arc_lines, step #3)"\n            ximshow(slitlet2d_dn, title=title, z1z2=z1z2,\n                    first_pixel=(self.bb_nc1_orig, self.bb_ns1_orig),\n                    debugplot=self.debugplot)\n\n        # determine threshold (using the maximum of q50 + t * sigmag or\n        # minimum_threshold)\n        threshold = q50 + times_sigma_threshold * sigmag\n        if minimum_threshold is not None:\n            if minimum_threshold > threshold:\n                threshold = minimum_threshold\n\n        # identify objects in slitlet2d above threshold\n        
labels2d_objects, no_objects = ndimage.label(slitlet2d_dn > threshold)\n if abs(self.debugplot) >= 10:\n print(\"Number of objects initially found:\", no_objects)\n if abs(self.debugplot) in [21, 22]:\n # display all objects identified in the image\n title = \"Slitlet#\" + str(self.islitlet) + \\\n \" (locate_unknown_arc_lines, step #4)\"\n z1z2 = (labels2d_objects.min(), labels2d_objects.max())\n ximshow(labels2d_objects, title=title,\n first_pixel=(self.bb_nc1_orig, self.bb_ns1_orig),\n cbar_label=\"Object number\",\n z1z2=z1z2, cmap=\"nipy_spectral\",\n debugplot=self.debugplot)\n\n # select arc lines by imposing the criteria based on the\n # dimensions of the detected objects and the intersection with\n # the middle spectrum trail\n slices_possible_arc_lines = ndimage.find_objects(labels2d_objects)\n slices_ok = np.repeat([False], no_objects) # flag\n for i in range(no_objects):\n if abs(self.debugplot) >= 10:\n print('object', i + 1,\n '[in np.array coordinates]:',\n slices_possible_arc_lines[i])\n slice_x = slices_possible_arc_lines[i][1]\n slice_y = slices_possible_arc_lines[i][0]\n # note that the width computation doesn't require to\n # add +1 since slice_x.stop (and slice_y.stop) is\n # already the upper limit +1 (in np.array coordinates)\n delta_x = slice_x.stop - slice_x.start\n delta_y = slice_y.stop - slice_y.start\n # dimensions criterion\n if delta_x <= delta_x_max and delta_y >= delta_y_min:\n # intersection with middle spectrum trail criterion;\n # note that slice_x and slice_y are given in np.array\n # coordinates and are transformed into image coordinates;\n # in addition, -0.5 shift the origin to the lower left\n # corner of the pixel\n xini_slice = slice_x.start + self.bb_nc1_orig - 0.5\n xmiddle_slice = xini_slice + delta_x / 2\n polydum = \\\n self.list_spectrails[self.i_middle_spectrail].poly_funct\n ymiddle_slice = polydum(xmiddle_slice)\n yini_slice = slice_y.start + self.bb_ns1_orig - 0.5\n yend_slice = yini_slice + delta_y\n if 
yini_slice + min_dist_from_middle <= ymiddle_slice <= \\\n yend_slice - min_dist_from_middle:\n slices_ok[i] = True\n\n # generate list with ID of arc lines (note that first object is\n # number 0 and not 1)\n list_slices_ok = []\n for i in range(no_objects):\n if slices_ok[i]:\n list_slices_ok.append(i + 1)\n number_arc_lines = len(list_slices_ok)\n if abs(self.debugplot) >= 10:\n print(\"\\nNumber of arc lines initially identified is:\",\n number_arc_lines)\n if number_arc_lines > 0:\n print(\"Slice ID of lines passing the selection:\\n\",\n list_slices_ok)\n if number_arc_lines == 0:\n return\n\n # display arc lines\n if abs(self.debugplot) in [21, 22]:\n # display all objects identified in the image\n title = \"Slitlet#\" + str(self.islitlet) + \\\n \" (locate_unknown_arc_lines, step #5)\"\n z1z2 = (labels2d_objects.min(),\n labels2d_objects.max())\n ax = ximshow(labels2d_objects, show=False, title=title,\n first_pixel=(self.bb_nc1_orig, self.bb_ns1_orig),\n cbar_label=\"Object number\",\n z1z2=z1z2, cmap=\"nipy_spectral\",\n debugplot=self.debugplot)\n # plot rectangle around identified arc lines\n for i in range(no_objects):\n if slices_ok[i]:\n slice_x = slices_possible_arc_lines[i][1]\n slice_y = slices_possible_arc_lines[i][0]\n # note that slice_x and slice_y are given in np.array\n # coordinates and are transformed into image coordinates;\n # in addition, -0.5 shift the origin to the lower left\n # corner of the pixel\n xini_slice = slice_x.start + self.bb_nc1_orig - 0.5\n yini_slice = slice_y.start + self.bb_ns1_orig - 0.5\n # note that the width computation doesn't require to\n # add +1 since slice_x.stop (and slice_y.stop) is\n # already the upper limit +1 (in np.array coordinates)\n xwidth_slice = slice_x.stop - slice_x.start\n ywidth_slice = slice_y.stop - slice_y.start\n rect = Rectangle((xini_slice, yini_slice),\n xwidth_slice, ywidth_slice,\n edgecolor='w', facecolor='none')\n ax.add_patch(rect)\n # show plot\n pause_debugplot(self.debugplot, 
pltshow=True)\n\n # adjust individual arc lines passing the initial selection\n self.list_arc_lines = [] # list of ArcLines\n for k in range(number_arc_lines): # fit each arc line\n # select points to be fitted for a particular arc line\n xy_tmp = np.where(labels2d_objects == list_slices_ok[k])\n x_tmp = xy_tmp[1] + self.bb_nc1_orig # use image coordinates\n y_tmp = xy_tmp[0] + self.bb_ns1_orig # use image coordinates\n w_tmp = slitlet2d_dn[xy_tmp]\n # declare new ArcLine instance\n arc_line = ArcLine()\n # define new ArcLine using a weighted fit\n # (note that it must be X vs Y)\n arc_line.fit(x=x_tmp, y=y_tmp, deg=1, w=w_tmp, y_vs_x=False)\n if len(arc_line.poly_funct.coef) == 2:\n # update list with identified ArcLines\n self.list_arc_lines.append(arc_line)\n else:\n # ignore (sometimes the arc_line.fit returns a constant!)\n pass\n\n # recompute number_arc_lines just in case in the previous fits\n # some lines have just given a zero degree polynomial\n number_arc_lines = len(self.list_arc_lines)\n\n # remove arc lines with unexpected slopes\n yfit = np.array([self.list_arc_lines[k].poly_funct.coef[1]\n for k in range(number_arc_lines)])\n xfit = np.zeros(number_arc_lines)\n # intersection between middle spectrum trail and arc line\n for k in range(number_arc_lines):\n arcline = self.list_arc_lines[k]\n xfit[k], ydum = intersection_spectrail_arcline(\n self.list_spectrails[self.i_middle_spectrail], arcline\n )\n\n # fit slope versus x-coordinate of the intersection of the arc line\n # with the middle spectrum trail\n if len(yfit) > 5:\n degeff = 5\n else:\n degeff = len(yfit) - 1\n polydum, residum, rejected = polfit_residuals_with_sigma_rejection(\n x=xfit, y=yfit, deg=degeff, times_sigma_reject=4.0,\n xlabel='arc line center (islitlet #' + str(self.islitlet) + ')',\n ylabel='arc line slope', debugplot=0\n )\n # remove rejected arc lines\n if len(rejected) > 0:\n if abs(self.debugplot) >= 10:\n print('Rejecting', sum(rejected),\n 'arc lines with suspicious 
slopes: Slice ID',\n [list_slices_ok[k] for k in range(number_arc_lines)\n if rejected[k]])\n self.list_arc_lines = \\\n [self.list_arc_lines[k] for k in range(number_arc_lines)\n if not rejected[k]]\n # recompute number of arc lines\n number_arc_lines = len(self.list_arc_lines)\n if abs(self.debugplot) >= 10:\n print(\"\\nNumber of arc lines finally identified is:\",\n number_arc_lines)\n\n if abs(self.debugplot) >= 20:\n # print list of arc lines\n print('\\nlist_arc_lines:')\n for k in range(number_arc_lines):\n print(k, '->', self.list_arc_lines[k], '\\n')\n\n # display results\n if abs(self.debugplot) in [21, 22]:\n # generate mask with all the arc-line points passing the selection\n mask_arc_lines = np.zeros_like(slitlet2d_dn)\n for k in list_slices_ok:\n mask_arc_lines[labels2d_objects == k] = 1\n # compute image with only the arc lines passing the selection\n labels2d_arc_lines = labels2d_objects * mask_arc_lines\n # display background image with filtered arc lines\n title = \"Slitlet#\" + str(self.islitlet) + \\\n \" (locate_unknown_arc_lines, step #6)\"\n z1z2 = (labels2d_arc_lines.min(),\n labels2d_arc_lines.max())\n ax = ximshow(labels2d_arc_lines, show=False,\n first_pixel=(self.bb_nc1_orig, self.bb_ns1_orig),\n cbar_label=\"Object number\",\n title=title, z1z2=z1z2, cmap=\"nipy_spectral\",\n debugplot=self.debugplot)\n # plot weighted fit for each arc line (note that the fit is\n # X vs Y)\n for k in range(number_arc_lines):\n xpol, ypol = self.list_arc_lines[k].linspace_pix()\n ax.plot(xpol, ypol, 'g--')\n # display lower and upper points of each arc line\n x_tmp = [arc_line.xlower_line for arc_line in self.list_arc_lines]\n y_tmp = [arc_line.ylower_line for arc_line in self.list_arc_lines]\n ax.plot(x_tmp, y_tmp, 'w+')\n x_tmp = [arc_line.xupper_line for arc_line in self.list_arc_lines]\n y_tmp = [arc_line.yupper_line for arc_line in self.list_arc_lines]\n ax.plot(x_tmp, y_tmp, 'w+')\n # show plot\n pause_debugplot(self.debugplot, pltshow=True)", 
"response": "Locate unknown arc lines in a 2d slitlet image."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ncomputing the intersection points of spectrum trails with arc lines.", "response": "def xy_spectrail_arc_intersections(self, slitlet2d=None):\n        \"\"\"Compute intersection points of spectrum trails with arc lines.\n\n        The member list_arc_lines is updated with new keyword:keyval\n        values for each arc line.\n\n        Parameters\n        ----------\n        slitlet2d : numpy array\n            Slitlet image to be displayed with the computed boundaries\n            and intersecting points overplotted. This argument is\n            optional.\n\n        \"\"\"\n\n        # protections\n        if self.list_arc_lines is None:\n            raise ValueError("Arc lines not sought")\n        number_spectrum_trails = len(self.list_spectrails)\n        if number_spectrum_trails == 0:\n            raise ValueError("Number of available spectrum trails is 0")\n        number_arc_lines = len(self.list_arc_lines)\n        if number_arc_lines == 0:\n            raise ValueError("Number of available arc lines is 0")\n\n        # intersection of the arc lines with the spectrum trails\n        # (note: the coordinates are computed using pixel values,\n        # ranging from 1 to EMIR_NAXIS1, as given in the original\n        # image reference system ---not in the slitlet image reference\n        # system---)\n        self.x_inter_rect = np.array([])  # rectified image coordinates\n        self.y_inter_rect = np.array([])  # rectified image coordinates\n        for arcline in self.list_arc_lines:\n            # middle spectrum trail\n            spectrail = self.list_spectrails[self.i_middle_spectrail]\n            xroot, yroot = intersection_spectrail_arcline(\n                spectrail=spectrail, arcline=arcline\n            )\n            arcline.x_rectified = xroot\n            self.x_inter_rect = np.append(\n                self.x_inter_rect, [xroot] * number_spectrum_trails\n            )\n            for spectrail in self.list_spectrails:\n                # compute expected ordinate y_expected in the rectified\n                # image\n                y_expected = self.corr_yrect_a + self.corr_yrect_b * \\\n                    spectrail.y_rectified\n                self.y_inter_rect = 
np.append(self.y_inter_rect, y_expected)\n if abs(self.debugplot) >= 10:\n print('>>> y0_frontier_lower_expected........: ',\n self.y0_frontier_lower_expected)\n print('>>> y0_frontier_upper_expected........: ',\n self.y0_frontier_upper_expected)\n print('>>> shifted y0_frontier_upper_expected: ',\n self.corr_yrect_a +\n self.corr_yrect_b * self.y0_frontier_lower)\n print('>>> shifted y0_frontier_lower_expected: ',\n self.corr_yrect_a +\n self.corr_yrect_b * self.y0_frontier_upper)\n #\n self.x_inter_orig = np.array([]) # original image coordinates\n self.y_inter_orig = np.array([]) # original image coordinates\n for arcline in self.list_arc_lines:\n for spectrail in self.list_spectrails:\n xroot, yroot = intersection_spectrail_arcline(\n spectrail=spectrail, arcline=arcline\n )\n self.x_inter_orig = np.append(self.x_inter_orig, xroot)\n self.y_inter_orig = np.append(self.y_inter_orig, yroot)\n\n # display intersection points\n if abs(self.debugplot % 10) != 0 and slitlet2d is not None:\n # display image with zscale cuts\n title = \"Slitlet#\" + str(self.islitlet) + \\\n \" (xy_spectrail_arc_intersections)\"\n ax = ximshow(slitlet2d, title=title,\n first_pixel=(self.bb_nc1_orig, self.bb_ns1_orig),\n show=False)\n # spectrum trails\n for spectrail in self.list_spectrails:\n xdum, ydum = spectrail.linspace_pix(start=self.bb_nc1_orig,\n stop=self.bb_nc2_orig)\n ax.plot(xdum, ydum, 'g')\n # arc lines\n for arcline in self.list_arc_lines:\n xdum, ydum = arcline.linspace_pix(start=self.bb_ns1_orig,\n stop=self.bb_ns2_orig)\n ax.plot(xdum, ydum, 'g')\n # intersection points\n ax.plot(self.x_inter_orig, self.y_inter_orig, 'co')\n ax.plot(self.x_inter_rect, self.y_inter_rect, 'bo')\n # show plot\n pause_debugplot(self.debugplot, pltshow=True)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nestimates the polynomial transformation to rectify the image.", "response": "def estimate_tt_to_rectify(self, order, slitlet2d=None):\n 
\"\"\"Estimate the polynomial transformation to rectify the image.\n\n Parameters\n ----------\n order : int\n Order of the polynomial transformation.\n slitlet2d : numpy array\n Slitlet image to be displayed with the computed boundaries\n and intersecting points overplotted. This argument is\n optional.\n\n \"\"\"\n\n # protections\n if self.x_inter_orig is None \\\n or self.y_inter_orig is None \\\n or self.x_inter_rect is None \\\n or self.y_inter_rect is None:\n raise ValueError('Intersection points not computed')\n\n npoints = len(self.x_inter_orig)\n if len(self.y_inter_orig) != npoints \\\n or len(self.x_inter_rect) != npoints \\\n or len(self.y_inter_rect) != npoints:\n raise ValueError('Unexpected different number of points')\n\n # IMPORTANT: correct coordinates from origin in order to manipulate\n # coordinates corresponding to image indices\n x_inter_orig_shifted = self.x_inter_orig - self.bb_nc1_orig\n y_inter_orig_shifted = self.y_inter_orig - self.bb_ns1_orig\n x_inter_rect_shifted = self.x_inter_rect - self.bb_nc1_orig\n y_inter_rect_shifted = self.y_inter_rect - self.bb_ns1_orig\n\n # compute 2D transformation\n self.ttd_order = order\n self.ttd_aij, self.ttd_bij = compute_distortion(\n x_inter_orig_shifted, y_inter_orig_shifted,\n x_inter_rect_shifted, y_inter_rect_shifted,\n order,\n self.debugplot\n )\n self.tti_aij, self.tti_bij = compute_distortion(\n x_inter_rect_shifted, y_inter_rect_shifted,\n x_inter_orig_shifted, y_inter_orig_shifted,\n order,\n self.debugplot\n )\n\n # display slitlet with intersection points and grid indicating\n # the fitted transformation\n if abs(self.debugplot % 10) != 0 and slitlet2d is not None:\n # display image with zscale cuts\n title = \"Slitlet#\" + str(self.islitlet) + \\\n \" (estimate_tt_to_rectify)\"\n ax = ximshow(slitlet2d, title=title,\n first_pixel=(self.bb_nc1_orig, self.bb_ns1_orig),\n show=False)\n # intersection points\n ax.plot(self.x_inter_orig, self.y_inter_orig, 'co')\n 
ax.plot(self.x_inter_rect, self.y_inter_rect, 'bo')\n # grid with fitted transformation: spectrum trails\n xx = np.arange(0, self.bb_nc2_orig - self.bb_nc1_orig + 1,\n dtype=np.float)\n for spectrail in self.list_spectrails:\n yy0 = self.corr_yrect_a + \\\n self.corr_yrect_b * spectrail.y_rectified\n yy = np.tile([yy0 - self.bb_ns1_orig], xx.size)\n ax.plot(xx + self.bb_nc1_orig, yy + self.bb_ns1_orig, \"b\")\n xxx, yyy = fmap(self.ttd_order, self.ttd_aij, self.ttd_bij,\n xx, yy)\n ax.plot(xxx + self.bb_nc1_orig, yyy + self.bb_ns1_orig, \"g\")\n # grid with fitted transformation: arc lines\n ylower_line = \\\n self.list_spectrails[self.i_lower_spectrail].y_rectified\n ylower_line = self.corr_yrect_a + self.corr_yrect_b * ylower_line\n yupper_line = \\\n self.list_spectrails[self.i_upper_spectrail].y_rectified\n yupper_line = self.corr_yrect_a + self.corr_yrect_b * yupper_line\n n_points = int(yupper_line - ylower_line + 0.5) + 1\n yy = np.linspace(ylower_line - self.bb_ns1_orig,\n yupper_line - self.bb_ns1_orig,\n num=n_points,\n dtype=np.float)\n for arc_line in self.list_arc_lines:\n xline = arc_line.x_rectified - self.bb_nc1_orig\n xx = np.array([xline] * n_points)\n ax.plot(xx + self.bb_nc1_orig,yy + self.bb_ns1_orig, \"b\" )\n xxx, yyy = fmap(self.ttd_order, self.ttd_aij, self.ttd_bij,\n xx, yy)\n ax.plot(xxx + self.bb_nc1_orig, yyy + self.bb_ns1_orig, \"c\")\n # show plot\n pause_debugplot(self.debugplot, pltshow=True)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef rectify(self, slitlet2d, resampling, transformation, inverse=False):\n\n if resampling not in [1, 2]:\n raise ValueError(\"Unexpected resampling value=\" + str(resampling))\n\n if transformation not in [1, 2]:\n raise ValueError(\"Unexpected transformation value=\" +\n str(transformation))\n\n # verify image dimension\n naxis2, naxis1 = slitlet2d.shape\n if naxis1 != self.bb_nc2_orig - self.bb_nc1_orig + 1:\n raise 
ValueError(\"Unexpected slitlet2d_rect naxis1\")\n if naxis2 != self.bb_ns2_orig - self.bb_ns1_orig + 1:\n raise ValueError(\"Unexpected slitlet2d_rect naxis2\")\n\n # transformation to be employed (and direction)\n if transformation == 1:\n if inverse:\n aij = self.tti_aij\n bij = self.tti_bij\n else:\n aij = self.ttd_aij\n bij = self.ttd_bij\n else:\n if inverse:\n aij = self.tti_aij_longslit_model\n bij = self.tti_bij_longslit_model\n else:\n aij = self.ttd_aij_longslit_model\n bij = self.ttd_bij_longslit_model\n\n # rectify image\n slitlet2d_rect = rectify2d(\n image2d=slitlet2d,\n aij=aij,\n bij=bij,\n resampling=resampling\n )\n\n if abs(self.debugplot % 10) != 0:\n title = \"Slitlet#\" + str(self.islitlet) + \" (rectify)\"\n ax = ximshow(slitlet2d_rect, title=title,\n first_pixel=(self.bb_nc1_orig, self.bb_ns1_orig),\n show=False)\n if self.list_arc_lines is not None:\n # intersection points\n ax.plot(self.x_inter_rect, self.y_inter_rect, 'bo')\n # grid with fitted transformation: spectrum trails\n xx = np.arange(0, self.bb_nc2_orig - self.bb_nc1_orig + 1,\n dtype=np.float)\n for spectrail in self.list_spectrails:\n yy0 = self.corr_yrect_a + \\\n self.corr_yrect_b * spectrail.y_rectified\n yy = np.tile([yy0 - self.bb_ns1_orig], xx.size)\n ax.plot(xx + self.bb_nc1_orig, yy + self.bb_ns1_orig, \"b\")\n for spectrail in self.list_frontiers:\n yy0 = self.corr_yrect_a + \\\n self.corr_yrect_b * spectrail.y_rectified\n yy = np.tile([yy0 - self.bb_ns1_orig], xx.size)\n ax.plot(xx + self.bb_nc1_orig, yy + self.bb_ns1_orig, \"b:\")\n # grid with fitted transformation: arc lines\n ylower_line = \\\n self.list_spectrails[self.i_lower_spectrail].y_rectified\n ylower_line = self.corr_yrect_a + self.corr_yrect_b * ylower_line\n yupper_line = \\\n self.list_spectrails[self.i_upper_spectrail].y_rectified\n yupper_line = self.corr_yrect_a + self.corr_yrect_b * yupper_line\n n_points = int(yupper_line - ylower_line + 0.5) + 1\n yy = np.linspace(ylower_line - 
self.bb_ns1_orig,\n yupper_line - self.bb_ns1_orig,\n num=n_points,\n dtype=np.float)\n if self.list_arc_lines is not None:\n for arc_line in self.list_arc_lines:\n xline = arc_line.x_rectified - self.bb_nc1_orig\n xx = np.array([xline] * n_points)\n ax.plot(xx + self.bb_nc1_orig, yy + self.bb_ns1_orig, \"b\")\n # show plot\n pause_debugplot(self.debugplot, pltshow=True)\n\n return slitlet2d_rect", "response": "Rectify a 2d slitlet image using computed transformation."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef median_spectrum_from_rectified_image(self, slitlet2d_rect,\n sigma_gaussian_filtering=0,\n nwinwidth_initial=5,\n nwinwidth_refined=7,\n times_sigma_threshold=5,\n minimum_threshold=None,\n npix_avoid_border=0,\n nbrightlines=None):\n \"\"\"Median spectrum and line peaks from rectified image.\n\n In order to avoid the line ghosts, the line peaks are identified\n independently in the upper and lower halves of the rectified\n image. The final peaks correspond to lines that appear in both\n spectra.\n\n Parameters\n ----------\n slitlet2d_rect : numpy array\n Rectified slitlet image.\n sigma_gaussian_filtering : float\n Sigma of the gaussian filter to be applied to the spectrum\n in order to avoid problems with saturated lines. This\n filtering is skipped when this parameter is <= 0.\n nwinwidth_initial : int\n Width of the window where each peak must be found using\n the initial method (approximate)\n nwinwidth_refined : int\n Width of the window where each peak location will be\n refined.\n times_sigma_threshold : float\n Times (robust) sigma above the median of the image to set\n the minimum threshold when searching for line peaks.\n minimum_threshold : float or None\n Minimum value of the threshold.\n npix_avoid_border : int\n Number of pixels at the borders of the spectrum where peaks\n are not considered. 
If zero, the actual number will be\n given by nwinwidth_initial.\n nbrightlines : list\n List with maximum number of brightest lines to be employed\n in the wavelength calibration. The length of the list\n indicates the number of equal-size subintervals in the\n wavelength calibration direction to be considered.\n If this value is [0] or None, all the detected lines will be\n employed.\n\n Returns\n -------\n sp0 : numpy array\n Median spectrum.\n fxpeaks : numpy array\n Refined location of arc lines (in array index scale).\n\n \"\"\"\n\n # protections\n naxis2, naxis1 = slitlet2d_rect.shape\n if naxis1 != self.bb_nc2_orig - self.bb_nc1_orig + 1:\n raise ValueError(\"Unexpected slitlet2d_rect naxis1\")\n if naxis2 != self.bb_ns2_orig - self.bb_ns1_orig + 1:\n raise ValueError(\"Unexpected slitlet2d_rect naxis2\")\n\n # lower, middle and upper spectrum trails\n ylower_line = \\\n self.list_spectrails[self.i_lower_spectrail].y_rectified\n ymiddle_line = \\\n self.list_spectrails[self.i_middle_spectrail].y_rectified\n yupper_line = \\\n self.list_spectrails[self.i_upper_spectrail].y_rectified\n\n ilower = int(ylower_line + 0.5) - self.bb_ns1_orig\n imiddle = int(ymiddle_line + 0.5) - self.bb_ns1_orig\n iupper = int(yupper_line + 0.5) - self.bb_ns1_orig\n\n # median spectra using different image regions\n sp0_ini = np.median(slitlet2d_rect[ilower:(iupper + 1), :], axis=0)\n sp1_ini = np.median(slitlet2d_rect[ilower:(imiddle + 1), :], axis=0)\n sp2_ini = np.median(slitlet2d_rect[imiddle:(iupper + 1), :], axis=0)\n\n # gaussian filtering when requested (to avoid line saturation)\n if sigma_gaussian_filtering > 0:\n sp0 = ndimage.filters.gaussian_filter(\n sp0_ini,\n sigma=sigma_gaussian_filtering\n )\n sp1 = ndimage.filters.gaussian_filter(\n sp1_ini,\n sigma=sigma_gaussian_filtering\n )\n sp2 = ndimage.filters.gaussian_filter(\n sp2_ini,\n sigma=sigma_gaussian_filtering\n )\n else:\n sp0 = np.copy(sp0_ini)\n sp1 = np.copy(sp1_ini)\n sp2 = np.copy(sp2_ini)\n\n # compute 
threshold\n        q25, q50, q75 = np.percentile(sp0, q=[25.0, 50.0, 75.0])\n        sigma_g = 0.7413 * (q75 - q25)  # robust standard deviation\n        threshold = q50 + times_sigma_threshold * sigma_g\n        if abs(self.debugplot) >= 10:\n            print("median...........:", q50)\n            print("robust std.......:", sigma_g)\n            print("threshold........:", threshold)\n        if minimum_threshold is not None:\n            if minimum_threshold > threshold:\n                threshold = minimum_threshold\n                if abs(self.debugplot) >= 10:\n                    print("minimum threshold:", minimum_threshold)\n                    print("final threshold..:", threshold)\n\n        # initial location of the peaks (integer values)\n        ixpeaks0 = find_peaks_spectrum(sp0, nwinwidth=nwinwidth_initial,\n                                       threshold=threshold,\n                                       debugplot=self.debugplot)\n\n        # peaks in the lower and upper regions\n        ixpeaks1 = find_peaks_spectrum(sp1, nwinwidth=nwinwidth_initial,\n                                       threshold=threshold,\n                                       debugplot=self.debugplot)\n        ixpeaks2 = find_peaks_spectrum(sp2, nwinwidth=nwinwidth_initial,\n                                       threshold=threshold,\n                                       debugplot=self.debugplot)\n\n        # the peaks are valid if they are also found in the lower and\n        # upper regions (with a tolerance of +1 or -1 pixel)\n        ixpeaks = []\n        for ixpeak in ixpeaks0:\n            l1 = ixpeak in np.concatenate((ixpeaks1, ixpeaks1+1, ixpeaks1-1))\n            l2 = ixpeak in np.concatenate((ixpeaks2, ixpeaks2+1, ixpeaks2-1))\n            if l1 and l2:\n                ixpeaks.append(ixpeak)\n        ixpeaks = np.array(ixpeaks)\n        if abs(self.debugplot) >= 10:\n            print("Merged initial list of peaks:\\n", ixpeaks)\n\n        # remove peaks too close to any of the borders of the spectrum\n        if npix_avoid_border > 0:\n            lok_ini = ixpeaks >= npix_avoid_border\n            lok_end = ixpeaks <= len(sp0) - 1 - npix_avoid_border\n            ixpeaks = ixpeaks[lok_ini * lok_end]\n\n        # select a maximum number of brightest lines in each region\n        if nbrightlines is None:\n            pass\n        elif len(nbrightlines) == 1 and nbrightlines[0] == 0:\n            pass\n        else:\n            if abs(self.debugplot) >= 10:\n                print('nbrightlines =', nbrightlines)\n                print('ixpeaks in whole spectrum:\\n', ixpeaks)\n            
region_size = (naxis1-1)/len(nbrightlines)\n ixpeaks_filtered = np.array([], dtype=int)\n for iregion, nlines_in_region in enumerate(nbrightlines):\n if nlines_in_region > 0:\n imin = int(iregion * region_size)\n imax = int((iregion + 1) * region_size)\n if iregion > 0:\n imin += 1\n ixpeaks_region = \\\n ixpeaks[np.logical_and(ixpeaks >= imin,\n ixpeaks <= imax)]\n if len(ixpeaks_region) > 0:\n peak_fluxes = sp0[ixpeaks_region]\n spos = peak_fluxes.argsort()\n ixpeaks_tmp = ixpeaks_region[spos[-nlines_in_region:]]\n ixpeaks_tmp.sort() # in-place sort\n if abs(self.debugplot) >= 10:\n print('ixpeaks in region........:\\n', ixpeaks_tmp)\n ixpeaks_filtered = np.concatenate((ixpeaks_filtered,\n ixpeaks_tmp))\n ixpeaks = ixpeaks_filtered\n if abs(self.debugplot) >= 10:\n print('ixpeaks filtered.........:\\n', ixpeaks)\n\n # refined location of the peaks (float values)\n fxpeaks, sxpeaks = refine_peaks_spectrum(sp0, ixpeaks,\n nwinwidth=nwinwidth_refined,\n method=\"gaussian\")\n\n if abs(self.debugplot) % 10 != 0:\n x = np.arange(self.bb_nc1_orig, self.bb_nc2_orig + 1)\n title = \"Slitlet#\" + str(self.islitlet) + \" (median spectrum)\"\n ax = ximplotxy(x, sp1, show=False, title=title,\n xlabel='pixel coordinate (from 1 to NAXIS1)',\n ylabel='number of counts',\n **{'marker': ' ', 'label': 'lower region'})\n ax.plot(x, sp2, label='upper region')\n ax.plot(x, sp0, label='whole region')\n # mark peak location\n ax.plot(ixpeaks + self.bb_nc1_orig,\n sp0[ixpeaks], 'o', label=\"initial location\")\n ax.plot(fxpeaks + self.bb_nc1_orig,\n sp0[ixpeaks], 'o', label=\"refined location\")\n ax.legend()\n pause_debugplot(self.debugplot, pltshow=True, tight_layout=False)\n\n # return median spectrum and refined peak location\n return sp0, fxpeaks", "response": "This function takes a rectified image and returns the median spectrum and line peaks from the image."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nintersect slices with a 
region.", "response": "def clip_slices(r, region, scale=1):\n '''Intersect slices with a region.'''\n t = []\n for ch in r:\n a1 = intersection(ch[0], region[0], scale=scale)\n if a1 is None:\n continue\n a2 = intersection(ch[1], region[1], scale=scale)\n if a2 is None:\n continue\n\n t.append((a1, a2))\n\n return t"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef set_base_headers(self, hdr):\n hdr = super(JoinDitheredImagesRecipe, self).set_base_headers(hdr)\n hdr['IMGOBBL'] = 0\n hdr['OBSMODE'] = 'DITHERED_IMAGE'\n return hdr", "response": "Set metadata in FITS headers."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _load_defaults(inventory_path=None, roles=None, extra_vars=None, tags=None,\n basedir=False):\n \"\"\"Load common defaults data structures.\n\n For factorization purpose.\"\"\"\n\n extra_vars = extra_vars or {}\n tags = tags or []\n loader = DataLoader()\n if basedir:\n loader.set_basedir(basedir)\n\n inventory = EnosInventory(loader=loader,\n sources=inventory_path, roles=roles)\n\n variable_manager = VariableManager(loader=loader,\n inventory=inventory)\n\n # seems mandatory to load group_vars variable\n if basedir:\n variable_manager.safe_basedir = True\n\n if extra_vars:\n variable_manager.extra_vars = extra_vars\n\n # NOTE(msimonin): The ansible api is \"low level\" in the\n # sense that we are redefining here all the default values\n # that are usually enforce by ansible called from the cli\n Options = namedtuple(\"Options\", [\"listtags\", \"listtasks\",\n \"listhosts\", \"syntax\",\n \"connection\", \"module_path\",\n \"forks\", \"private_key_file\",\n \"ssh_common_args\",\n \"ssh_extra_args\",\n \"sftp_extra_args\",\n \"scp_extra_args\", \"become\",\n \"become_method\", \"become_user\",\n \"remote_user\", \"verbosity\",\n \"check\", \"tags\",\n \"diff\", \"basedir\"])\n\n options = Options(listtags=False, 
listtasks=False,\n listhosts=False, syntax=False, connection=\"ssh\",\n module_path=None, forks=100,\n private_key_file=None, ssh_common_args=None,\n ssh_extra_args=None, sftp_extra_args=None,\n scp_extra_args=None, become=None,\n become_method=\"sudo\", become_user=\"root\",\n remote_user=None, verbosity=2, check=False,\n tags=tags, diff=None, basedir=basedir)\n\n return inventory, variable_manager, loader, options", "response": "Load common defaults data structures."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef run_play(play_source, inventory_path=None, roles=None,\n extra_vars=None, on_error_continue=False):\n \"\"\"Run a play.\n\n Args:\n pattern_hosts (str): pattern to describe ansible hosts to target.\n see https://docs.ansible.com/ansible/latest/intro_patterns.html\n play_source (dict): ansible task\n inventory_path (str): inventory to use\n extra_vars (dict): extra_vars to use\n on_error_continue(bool): Don't throw any exception in case a host is\n unreachable or the playbooks run with errors\n\n Raises:\n :py:class:`enoslib.errors.EnosFailedHostsError`: if a task returns an\n error on a host and ``on_error_continue==False``\n :py:class:`enoslib.errors.EnosUnreachableHostsError`: if a host is\n unreachable (through ssh) and ``on_error_continue==False``\n\n Returns:\n List of all the results\n \"\"\"\n # NOTE(msimonin): inventory could be infered from a host list (maybe)\n results = []\n inventory, variable_manager, loader, options = _load_defaults(\n inventory_path=inventory_path,\n roles=roles,\n extra_vars=extra_vars)\n callback = _MyCallback(results)\n passwords = {}\n tqm = task_queue_manager.TaskQueueManager(\n inventory=inventory,\n variable_manager=variable_manager,\n loader=loader,\n options=options,\n passwords=passwords,\n stdout_callback=callback)\n\n # create play\n play_inst = play.Play().load(play_source,\n variable_manager=variable_manager,\n loader=loader)\n\n # actually run 
it\n try:\n tqm.run(play_inst)\n finally:\n tqm.cleanup()\n\n # Handling errors\n failed_hosts = []\n unreachable_hosts = []\n for r in results:\n if r.status == STATUS_UNREACHABLE:\n unreachable_hosts.append(r)\n if r.status == STATUS_FAILED:\n failed_hosts.append(r)\n\n if len(failed_hosts) > 0:\n logger.error(\"Failed hosts: %s\" % failed_hosts)\n if not on_error_continue:\n raise EnosFailedHostsError(failed_hosts)\n if len(unreachable_hosts) > 0:\n logger.error(\"Unreachable hosts: %s\" % unreachable_hosts)\n if not on_error_continue:\n raise EnosUnreachableHostsError(unreachable_hosts)\n\n return results", "response": "Run a play.\n\n Args:\n pattern_hosts (str): pattern to describe ansible hosts to target.\n see https://docs.ansible.com/ansible/latest/intro_patterns.html\n play_source (dict): ansible task\n inventory_path (str): inventory to use\n extra_vars (dict): extra_vars to use\n on_error_continue(bool): Don't throw any exception in case a host is\n unreachable or the playbooks run with errors\n\n Raises:\n :py:class:`enoslib.errors.EnosFailedHostsError`: if a task returns an\n error on a host and ``on_error_continue==False``\n :py:class:`enoslib.errors.EnosUnreachableHostsError`: if a host is\n unreachable (through ssh) and ``on_error_continue==False``\n\n Returns:\n List of all the results"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nruns a command on some remote hosts.", "response": "def run_command(pattern_hosts, command, inventory_path=None, roles=None,\n extra_vars=None,\n on_error_continue=False):\n \"\"\"Run a shell command on some remote hosts.\n\n Args:\n pattern_hosts (str): pattern to describe ansible hosts to target.\n see https://docs.ansible.com/ansible/latest/intro_patterns.html\n command (str): the command to run\n inventory_path (str): inventory to use\n extra_vars (dict): extra_vars to use\n on_error_continue(bool): Don't throw any exception in case a host is\n unreachable or the playbooks run 
with errors\n\n Raises:\n :py:class:`enoslib.errors.EnosFailedHostsError`: if a task returns an\n error on a host and ``on_error_continue==False``\n :py:class:`enoslib.errors.EnosUnreachableHostsError`: if a host is\n unreachable (through ssh) and ``on_error_continue==False``\n\n Returns:\n Dict combining the stdout and stderr of ok and failed hosts and every\n results of tasks executed (this may include the fact gathering tasks)\n\n Example:\n\n .. code-block:: python\n\n # Inventory\n [control1]\n enos-0\n [control2]\n enos-1\n\n # Python\n result = run_command(\"control*\", \"date\", inventory)\n\n # Result\n {\n 'failed': {},\n 'ok':\n {\n u'enos-0':\n {\n 'stderr': u'',\n 'stdout': u'Tue Oct 31 04:53:04 GMT 2017'\n },\n u'enos-1':\n {\n 'stderr': u'',\n 'stdout': u'Tue Oct 31 04:53:05 GMT 2017'}\n },\n 'results': [...]\n }\n\n If facts are gathers this is possible to use ansible templating\n\n .. code-block:: python\n\n result = run_command(\"control*\", \"ping -c 1\n {{hostvars['enos-1']['ansible_' + n1].ipv4.address}}\", inventory)\n \"\"\"\n\n def filter_results(results, status):\n _r = [r for r in results\n if r.status == status and r.task == COMMAND_NAME]\n s = dict([[r.host, {\"stdout\": r.payload.get(\"stdout\"),\n \"stderr\": r.payload.get(\"stderr\")}]\n for r in _r])\n return s\n\n play_source = {\n \"hosts\": pattern_hosts,\n \"tasks\": [{\n \"name\": COMMAND_NAME,\n \"shell\": command,\n }]\n }\n results = run_play(play_source,\n inventory_path=inventory_path,\n roles=roles,\n extra_vars=extra_vars)\n ok = filter_results(results, STATUS_OK)\n failed = filter_results(results, STATUS_FAILED)\n return {\"ok\": ok, \"failed\": failed, \"results\": results}"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nrun Ansible. 
Args: playbooks (list): list of paths to the playbooks to run inventory_path (str): path to the hosts file (inventory) extra_var (dict): extra vars to pass tags (list): list of tags to run on_error_continue(bool): Don't throw any exception in case a host is unreachable or the playbooks run with errors Raises: :py:class:`enoslib.errors.EnosFailedHostsError`: if a task returns an error on a host and ``on_error_continue==False`` :py:class:`enoslib.errors.EnosUnreachableHostsError`: if a host is unreachable (through ssh) and ``on_error_continue==False``", "response": "def run_ansible(playbooks, inventory_path=None, roles=None, extra_vars=None,\n tags=None, on_error_continue=False, basedir='.'):\n \"\"\"Run Ansible.\n\n Args:\n playbooks (list): list of paths to the playbooks to run\n inventory_path (str): path to the hosts file (inventory)\n extra_var (dict): extra vars to pass\n tags (list): list of tags to run\n on_error_continue(bool): Don't throw any exception in case a host is\n unreachable or the playbooks run with errors\n\n Raises:\n :py:class:`enoslib.errors.EnosFailedHostsError`: if a task returns an\n error on a host and ``on_error_continue==False``\n :py:class:`enoslib.errors.EnosUnreachableHostsError`: if a host is\n unreachable (through ssh) and ``on_error_continue==False``\n \"\"\"\n\n inventory, variable_manager, loader, options = _load_defaults(\n inventory_path=inventory_path,\n roles=roles,\n extra_vars=extra_vars,\n tags=tags,\n basedir=basedir\n )\n passwords = {}\n for path in playbooks:\n logger.info(\"Running playbook %s with vars:\\n%s\" % (path, extra_vars))\n pbex = PlaybookExecutor(\n playbooks=[path],\n inventory=inventory,\n variable_manager=variable_manager,\n loader=loader,\n options=options,\n passwords=passwords\n )\n\n code = pbex.run()\n stats = pbex._tqm._stats\n hosts = stats.processed.keys()\n result = [{h: stats.summarize(h)} for h in hosts]\n results = {\"code\": code, \"result\": result, \"playbook\": path}\n print(results)\n\n 
failed_hosts = []\n unreachable_hosts = []\n\n for h in hosts:\n t = stats.summarize(h)\n if t[\"failures\"] > 0:\n failed_hosts.append(h)\n\n if t[\"unreachable\"] > 0:\n unreachable_hosts.append(h)\n\n if len(failed_hosts) > 0:\n logger.error(\"Failed hosts: %s\" % failed_hosts)\n if not on_error_continue:\n raise EnosFailedHostsError(failed_hosts)\n if len(unreachable_hosts) > 0:\n logger.error(\"Unreachable hosts: %s\" % unreachable_hosts)\n if not on_error_continue:\n raise EnosUnreachableHostsError(unreachable_hosts)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef discover_networks(roles, networks, fake_interfaces=None,\n fake_networks=None):\n \"\"\"Checks the network interfaces on the nodes.\n\n This enables to auto-discover the mapping interface name <-> network role.\n Beware, this has a side effect on each Host in roles.\n\n Args:\n roles (dict): role->hosts mapping as returned by\n :py:meth:`enoslib.infra.provider.Provider.init`\n networks (list): network list as returned by\n :py:meth:`enoslib.infra.provider.Provider.init`\n fake_interfaces (list): names of optionnal dummy interfaces to create\n fake_networks (list): names of the roles to associate with the fake\n interfaces. Like reguilar network interfaces, the mapping will be\n added to the host vars. 
Internally this will be zipped with the\n fake_interfaces to produce the mapping.\n\n If the command is successful each host will be added some variables.\n Assuming that one network whose role is `mynetwork` has been declared, the\n following variables will be available through the ansible hostvars:\n\n - ``mynetwork=eth1``, `eth1` has been discovered has the interface in the\n network `mynetwork`.\n - ``mynetwork_dev=eth1``, same as above with a different accessor names\n - ``mynetwork_ip=192.168.42.42``, this indicates the ip in the network\n `mynetwork` for this node\n\n All of this variable can then be accessed by the other nodes through the\n hostvars: ``hostvars[remote_node][\"mynetwork_ip\"]``\n \"\"\"\n\n def get_devices(facts):\n \"\"\"Extract the network devices information from the facts.\"\"\"\n devices = []\n for interface in facts['ansible_interfaces']:\n ansible_interface = 'ansible_' + interface\n # filter here (active/ name...)\n if 'ansible_' + interface in facts:\n interface = facts[ansible_interface]\n devices.append(interface)\n return devices\n\n wait_ssh(roles)\n tmpdir = os.path.join(os.getcwd(), TMP_DIRNAME)\n _check_tmpdir(tmpdir)\n fake_interfaces = fake_interfaces or []\n fake_networks = fake_networks or []\n\n utils_playbook = os.path.join(ANSIBLE_DIR, 'utils.yml')\n facts_file = os.path.join(tmpdir, 'facts.json')\n options = {\n 'enos_action': 'check_network',\n 'facts_file': facts_file,\n 'fake_interfaces': fake_interfaces\n }\n run_ansible([utils_playbook], roles=roles,\n extra_vars=options,\n on_error_continue=False)\n\n # Read the file\n # Match provider networks to interface names for each host\n with open(facts_file) as f:\n facts = json.load(f)\n for _, host_facts in facts.items():\n host_nets = _map_device_on_host_networks(networks,\n get_devices(host_facts))\n # Add the mapping : networks <-> nic name\n host_facts['networks'] = host_nets\n\n # Finally update the env with this information\n # generate the extra_mapping for the 
fake interfaces\n extra_mapping = dict(zip(fake_networks, fake_interfaces))\n _update_hosts(roles, facts, extra_mapping=extra_mapping)", "response": "This function returns a list of hosts that are connected to the given networks."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef generate_inventory(roles, networks, inventory_path, check_networks=False,\n fake_interfaces=None, fake_networks=None):\n \"\"\"Generate an inventory file in the ini format.\n\n The inventory is generated using the ``roles`` in the ``ini`` format. If\n ``check_network == True``, the function will try to discover which networks\n interfaces are available and map them to one network of the ``networks``\n parameters. Note that this auto-discovery feature requires the servers to\n have their IP set.\n\n Args:\n roles (dict): role->hosts mapping as returned by\n :py:meth:`enoslib.infra.provider.Provider.init`\n networks (list): network list as returned by\n :py:meth:`enoslib.infra.provider.Provider.init`\n inventory_path (str): path to the inventory to generate\n check_networks (bool): True to enable the auto-discovery of the mapping\n interface name <-> network role\n fake_interfaces (list): names of optionnal dummy interfaces to create\n on the nodes\n fake_networks (list): names of the roles to associate with the fake\n interfaces. Like reguilar network interfaces, the mapping will be\n added to the host vars. Internally this will be zipped with the\n fake_interfaces to produce the mapping. 
\"\"\"\n\n with open(inventory_path, \"w\") as f:\n f.write(_generate_inventory(roles))\n\n if check_networks:\n discover_networks(\n roles,\n networks,\n fake_interfaces=fake_interfaces,\n fake_networks=fake_networks\n )\n with open(inventory_path, \"w\") as f:\n f.write(_generate_inventory(roles))", "response": "Generate an inventory file in the ini format."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nemulate network links. Read ``network_constraints`` and apply ``tc`` rules on all the nodes. Constraints are applied between groups of machines. Theses groups are described in the ``network_constraints`` variable and must be found in the inventory file. The newtwork constraints support ``delay``, ``rate`` and ``loss``. Args: network_constraints (dict): network constraints to apply roles (dict): role->hosts mapping as returned by :py:meth:`enoslib.infra.provider.Provider.init` inventory_path(string): path to an inventory extra_vars (dict): extra_vars to pass to ansible Examples: * Using defaults The following will apply the network constraints between every groups. For instance the constraints will be applied for the communication between \"n1\" and \"n3\" but not between \"n1\" and \"n2\". Note that using default leads to symetric constraints. .. code-block:: python roles = { \"grp1\": [\"n1\", \"n2\"], \"grp2\": [\"n3\", \"n4\"], \"grp3\": [\"n3\", \"n4\"], } tc = { \"enable\": True, \"default_delay\": \"20ms\", \"default_rate\": \"1gbit\", } emulate_network(roles, tc) If you want to control more precisely which groups need to be taken into account, you can use ``except`` or ``groups`` key .. code-block:: python tc = { \"enable\": True, \"default_delay\": \"20ms\", \"default_rate\": \"1gbit\", \"except\": \"grp3\" } emulate_network(roles, tc) is equivalent to .. 
code-block:: python tc = { \"enable\": True, \"default_delay\": \"20ms\", \"default_rate\": \"1gbit\", \"groups\": [\"grp1\", \"grp2\"] } emulate_network(roles, inventory, tc) * Using ``src`` and ``dst`` The following will enforce a symetric constraint between ``grp1`` and ``grp2``. .. code-block:: python tc = { \"enable\": True, \"default_delay\": \"20ms\", \"default_rate\": \"1gbit\", \"constraints\": [{ \"src\": \"grp1\" \"dst\": \"grp2\" \"delay\": \"10ms\" \"symetric\": True }] } emulate_network(roles, inventory, tc)", "response": "def emulate_network(network_constraints,\n roles=None,\n inventory_path=None,\n extra_vars=None):\n \"\"\"Emulate network links.\n\n Read ``network_constraints`` and apply ``tc`` rules on all the nodes.\n Constraints are applied between groups of machines. Theses groups are\n described in the ``network_constraints`` variable and must be found in the\n inventory file. The newtwork constraints support ``delay``, ``rate`` and\n ``loss``.\n\n Args:\n network_constraints (dict): network constraints to apply\n roles (dict): role->hosts mapping as returned by\n :py:meth:`enoslib.infra.provider.Provider.init`\n inventory_path(string): path to an inventory\n extra_vars (dict): extra_vars to pass to ansible\n\n Examples:\n\n * Using defaults\n\n The following will apply the network constraints between every groups.\n For instance the constraints will be applied for the communication\n between \"n1\" and \"n3\" but not between \"n1\" and \"n2\". Note that using\n default leads to symetric constraints.\n\n .. code-block:: python\n\n roles = {\n \"grp1\": [\"n1\", \"n2\"],\n \"grp2\": [\"n3\", \"n4\"],\n \"grp3\": [\"n3\", \"n4\"],\n }\n\n tc = {\n \"enable\": True,\n \"default_delay\": \"20ms\",\n \"default_rate\": \"1gbit\",\n }\n emulate_network(roles, tc)\n\n If you want to control more precisely which groups need to be taken\n into account, you can use ``except`` or ``groups`` key\n\n .. 
code-block:: python\n\n tc = {\n \"enable\": True,\n \"default_delay\": \"20ms\",\n \"default_rate\": \"1gbit\",\n \"except\": \"grp3\"\n }\n emulate_network(roles, tc)\n\n is equivalent to\n\n .. code-block:: python\n\n tc = {\n \"enable\": True,\n \"default_delay\": \"20ms\",\n \"default_rate\": \"1gbit\",\n \"groups\": [\"grp1\", \"grp2\"]\n }\n emulate_network(roles, inventory, tc)\n\n * Using ``src`` and ``dst``\n\n The following will enforce a symetric constraint between ``grp1`` and\n ``grp2``.\n\n .. code-block:: python\n\n tc = {\n \"enable\": True,\n \"default_delay\": \"20ms\",\n \"default_rate\": \"1gbit\",\n \"constraints\": [{\n \"src\": \"grp1\"\n \"dst\": \"grp2\"\n \"delay\": \"10ms\"\n \"symetric\": True\n }]\n }\n emulate_network(roles, inventory, tc)\n\n \"\"\"\n # 1) Retrieve the list of ips for all nodes (Ansible)\n # 2) Build all the constraints (Python)\n # {source:src, target: ip_dest, device: if, rate:x, delay:y}\n # 3) Enforce those constraints (Ansible)\n\n if not network_constraints:\n return\n\n if roles is None and inventory_path is None:\n raise ValueError(\"roles and inventory_path can't be None\")\n\n if not extra_vars:\n extra_vars = {}\n\n # 1. 
getting ips/devices information\n logger.debug('Getting the ips of all nodes')\n tmpdir = os.path.join(os.getcwd(), TMP_DIRNAME)\n _check_tmpdir(tmpdir)\n utils_playbook = os.path.join(ANSIBLE_DIR, 'utils.yml')\n ips_file = os.path.join(tmpdir, 'ips.txt')\n options = {'enos_action': 'tc_ips',\n 'ips_file': ips_file}\n run_ansible([utils_playbook], roles=roles, extra_vars=options)\n\n # 2.a building the group constraints\n logger.debug('Building all the constraints')\n constraints = _build_grp_constraints(roles, network_constraints)\n # 2.b Building the ip/device level constaints\n with open(ips_file) as f:\n ips = yaml.safe_load(f)\n # will hold every single constraint\n ips_with_constraints = _build_ip_constraints(roles,\n ips,\n constraints)\n # dumping it for debugging purpose\n ips_with_constraints_file = os.path.join(tmpdir,\n 'ips_with_constraints.yml')\n with open(ips_with_constraints_file, 'w') as g:\n yaml.dump(ips_with_constraints, g)\n\n # 3. Enforcing those constraints\n logger.info('Enforcing the constraints')\n # enabling/disabling network constraints\n enable = network_constraints.setdefault('enable', True)\n utils_playbook = os.path.join(ANSIBLE_DIR, 'utils.yml')\n options = {\n 'enos_action': 'tc_apply',\n 'ips_with_constraints': ips_with_constraints,\n 'tc_enable': enable,\n }\n options.update(extra_vars)\n run_ansible([utils_playbook],\n roles=roles,\n inventory_path=inventory_path,\n extra_vars=options)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nvalidate the network parameters.", "response": "def validate_network(roles=None,\n inventory_path=None,\n output_dir=None,\n extra_vars=None):\n \"\"\"Validate the network parameters (latency, bandwidth ...)\n\n Performs flent, ping tests to validate the constraints set by\n :py:func:`emulate_network`. 
Reports are available in the tmp directory used\n by enos.\n\n Args:\n roles (dict): role->hosts mapping as returned by\n :py:meth:`enoslib.infra.provider.Provider.init`\n inventory_path (str): path to an inventory\n output_dir (str): directory where validation files will be stored.\n Default to :py:const:`enoslib.constants.TMP_DIRNAME`.\n \"\"\"\n logger.debug('Checking the constraints')\n if not output_dir:\n output_dir = os.path.join(os.getcwd(), TMP_DIRNAME)\n\n if not extra_vars:\n extra_vars = {}\n\n output_dir = os.path.abspath(output_dir)\n _check_tmpdir(output_dir)\n utils_playbook = os.path.join(ANSIBLE_DIR, 'utils.yml')\n options = {'enos_action': 'tc_validate',\n 'tc_output_dir': output_dir}\n\n options.update(extra_vars)\n run_ansible([utils_playbook],\n roles=roles,\n inventory_path=inventory_path,\n extra_vars=options)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef reset_network(roles, extra_vars=None):\n logger.debug('Reset the constraints')\n\n if not extra_vars:\n extra_vars = {}\n\n tmpdir = os.path.join(os.getcwd(), TMP_DIRNAME)\n\n _check_tmpdir(tmpdir)\n utils_playbook = os.path.join(ANSIBLE_DIR, 'utils.yml')\n options = {'enos_action': 'tc_reset',\n 'tc_output_dir': tmpdir}\n options.update(extra_vars)\n run_ansible([utils_playbook], roles=roles, extra_vars=options)", "response": "Reset the network constraints."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef wait_ssh(roles, retries=100, interval=30):\n utils_playbook = os.path.join(ANSIBLE_DIR, 'utils.yml')\n options = {'enos_action': 'ping'}\n\n for i in range(0, retries):\n try:\n run_ansible([utils_playbook],\n roles=roles,\n extra_vars=options,\n on_error_continue=False)\n break\n except EnosUnreachableHostsError as e:\n logger.info(\"Hosts unreachable: %s \" % e.hosts)\n logger.info(\"Retrying... 
%s/%s\" % (i + 1, retries))\n time.sleep(interval)\n else:\n raise EnosSSHNotReady('Maximum retries reached')", "response": "Wait for all the machines to be ssh - reachable"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef expand_groups(grp):\n p = re.compile(r\"(?P.+)\\[(?P\\d+)-(?P\\d+)\\]\")\n m = p.match(grp)\n if m is not None:\n s = int(m.group('start'))\n e = int(m.group('end'))\n n = m.group('name')\n return list(map(lambda x: n + str(x), range(s, e + 1)))\n else:\n return [grp]", "response": "Expand group names.\n\n Args:\n grp (string): group names to expand\n\n Returns:\n list of groups\n\n Examples:\n\n * grp[1-3] will be expanded to [grp1, grp2, grp3]\n * grp1 will be expanded to [grp1]"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _expand_description(desc):\n srcs = expand_groups(desc['src'])\n dsts = expand_groups(desc['dst'])\n descs = []\n for src in srcs:\n for dst in dsts:\n local_desc = desc.copy()\n local_desc['src'] = src\n local_desc['dst'] = dst\n descs.append(local_desc)\n\n return descs", "response": "Expand the description given the group names and patterns."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _generate_default_grp_constraints(roles, network_constraints):\n default_delay = network_constraints.get('default_delay')\n default_rate = network_constraints.get('default_rate')\n default_loss = network_constraints.get('default_loss', 0)\n except_groups = network_constraints.get('except', [])\n grps = network_constraints.get('groups', roles.keys())\n # expand each groups\n grps = [expand_groups(g) for g in grps]\n # flatten\n grps = [x for expanded_group in grps for x in expanded_group]\n # building the default group constraints\n return [{'src': grp1,\n 'dst': grp2,\n 'delay': default_delay,\n 'rate': default_rate,\n 'loss': default_loss}\n for grp1 in grps 
for grp2 in grps\n if ((grp1 != grp2\n or _src_equals_dst_in_constraints(network_constraints, grp1))\n and grp1 not in except_groups and grp2 not in except_groups)]", "response": "Generate default symetric grp constraints."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _generate_actual_grp_constraints(network_constraints):\n if 'constraints' not in network_constraints:\n return []\n\n constraints = network_constraints['constraints']\n actual = []\n for desc in constraints:\n descs = _expand_description(desc)\n for desc in descs:\n actual.append(desc)\n if 'symetric' in desc:\n sym = desc.copy()\n sym['src'] = desc['dst']\n sym['dst'] = desc['src']\n actual.append(sym)\n return actual", "response": "Generate the user specified constraints\n "} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _merge_constraints(constraints, overrides):\n for o in overrides:\n i = 0\n while i < len(constraints):\n c = constraints[i]\n if _same(o, c):\n constraints[i].update(o)\n break\n i = i + 1", "response": "Merge the constraints avoiding duplicates\n Change constraints in place."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _build_grp_constraints(roles, network_constraints):\n # generate defaults constraints\n constraints = _generate_default_grp_constraints(roles,\n network_constraints)\n # Updating the constraints if necessary\n if 'constraints' in network_constraints:\n actual = _generate_actual_grp_constraints(network_constraints)\n _merge_constraints(constraints, actual)\n\n return constraints", "response": "Generate constraints at the group level and deal with symetric constraints."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nbuilding the constraints at the ip level.", "response": "def _build_ip_constraints(roles, ips, constraints):\n \"\"\"Generate the constraints at the ip/device 
level.\n Those constraints are those used by ansible to enforce tc/netem rules.\n \"\"\"\n local_ips = copy.deepcopy(ips)\n for constraint in constraints:\n gsrc = constraint['src']\n gdst = constraint['dst']\n gdelay = constraint['delay']\n grate = constraint['rate']\n gloss = constraint['loss']\n for s in roles[gsrc]:\n # one possible source\n # Get all the active devices for this source\n active_devices = filter(lambda x: x[\"active\"],\n local_ips[s.alias]['devices'])\n # Get only the devices specified in the network constraint\n if 'network' in constraint:\n active_devices = filter(\n lambda x:\n x['device'] == s.extra[constraint['network']],\n active_devices)\n # Get only the name of the active devices\n sdevices = map(lambda x: x['device'], active_devices)\n for sdevice in sdevices:\n # one possible device\n for d in roles[gdst]:\n # one possible destination\n dallips = local_ips[d.alias]['all_ipv4_addresses']\n # Let's keep docker bridge out of this\n dallips = filter(lambda x: x != '172.17.0.1', dallips)\n for dip in dallips:\n local_ips[s.alias].setdefault('tc', []).append({\n 'source': s.alias,\n 'target': dip,\n 'device': sdevice,\n 'delay': gdelay,\n 'rate': grate,\n 'loss': gloss\n })\n return local_ips"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _map_device_on_host_networks(provider_nets, devices):\n networks = copy.deepcopy(provider_nets)\n for network in networks:\n for device in devices:\n network.setdefault('device', None)\n ip_set = IPSet([network['cidr']])\n if 'ipv4' not in device:\n continue\n ips = device['ipv4']\n if not isinstance(ips, list):\n ips = [ips]\n if len(ips) < 1:\n continue\n ip = IPAddress(ips[0]['address'])\n if ip in ip_set:\n network['device'] = device['device']\n continue\n return networks", "response": "Decorate each network with the corresponding nic name."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns a list of 
dates in a frequency range.", "response": "def f_daterange(freq, tz='UTC', *args, **kwargs):\n \"\"\"\n Use ``dateutil.rrule`` to create a range of dates. The frequency must be a\n string in the following list: YEARLY, MONTHLY, WEEKLY, DAILY, HOURLY,\n MINUTELY or SECONDLY.\n\n See `dateutil rrule`_ documentation for more detail.\n\n .. _dateutil rrule: https://dateutil.readthedocs.org/en/latest/rrule.html\n\n :param freq: One of the ``dateutil.rrule`` frequencies\n :type freq: str\n :param tz: One of the ``pytz`` timezones, defaults to UTC\n :type tz: str\n :param args: start date <datetime>, interval between each frequency <int>,\n max number of recurrences <int>, end date <datetime>\n :param kwargs: ``dtstart``, ``interval``, ``count``, ``until``\n :return: range of dates\n :rtype: list\n \"\"\"\n tz = pytz.timezone(tz)\n freq = getattr(rrule, freq.upper()) # get frequency enumeration from rrule\n return [tz.localize(dt) for dt in rrule.rrule(freq, *args, **kwargs)]"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncalculates the total energy accumulated from the end of each timestep between the given times.", "response": "def f_energy(ac_power, times):\n \"\"\"\n Calculate the total energy accumulated from AC power at the end of each\n timestep between the given times.\n\n :param ac_power: AC Power [W]\n :param times: times\n :type times: np.datetime64[s]\n :return: energy [W*h] and energy times\n \"\"\"\n dt = np.diff(times) # calculate timesteps\n # convert timedeltas to quantities\n dt = dt.astype('timedelta64[s]').astype('float') / sc_const.hour\n # energy accumulate during timestep\n energy = dt * (ac_power[:-1] + ac_power[1:]) / 2\n return energy, times[1:]"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef groupby_freq(items, times, freq, wkst='SU'):\n timeseries = zip(times, items) # timeseries map of items\n # create a key lambda to group timeseries by\n if freq.upper() 
== 'DAILY':\n def key(ts_): return ts_[0].day\n elif freq.upper() == 'WEEKLY':\n weekday = getattr(rrule, wkst.upper()) # weekday start\n # generator that searches times for weekday start\n days = (day for day in times if day.weekday() == weekday.weekday)\n day0 = next(days) # first weekday start of all times\n\n def key(ts_): return (ts_[0] - day0).days // 7\n else:\n def key(ts_): return getattr(ts_[0], freq.lower()[:-2])\n for k, ts in itertools.groupby(timeseries, key):\n yield k, ts", "response": "Group items in a timeseries by frequency."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nusing :func:`groupby_freq` to rollup items :param items: items in timeseries :param times: times corresponding to items :param freq: One of the ``dateutil.rrule`` frequency constants :type freq: str", "response": "def f_rollup(items, times, freq):\n \"\"\"\n Use :func:`groupby_freq` to rollup items\n\n :param items: items in timeseries\n :param times: times corresponding to items\n :param freq: One of the ``dateutil.rrule`` frequency constants\n :type freq: str\n \"\"\"\n rollup = [np.sum(item for __, item in ts)\n for _, ts in groupby_freq(items, times, freq)]\n return np.array(rollup)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef fileupdate(self, data):\n\n self.name = data[\"name\"]\n add = self.__additional\n add[\"filetype\"] = \"other\"\n for filetype in (\"book\", \"image\", \"video\", \"audio\", \"archive\"):\n if filetype in data:\n add[\"filetype\"] = filetype\n break\n if add[\"filetype\"] in (\"image\", \"video\", \"audio\"):\n add[\"thumb\"] = data.get(\"thumb\", dict())\n # checksum is md5\n add[\"checksum\"] = data[\"checksum\"]\n add[\"expire_time\"] = data[\"expires\"] / 1000\n add[\"size\"] = data[\"size\"]\n add[\"info\"] = data.get(add[\"filetype\"], dict())\n add[\"uploader\"] = data[\"user\"]\n if self.room.admin:\n add[\"info\"].update({\"room\": 
data.get(\"room\")})\n add[\"info\"].update({\"uploader_ip\": data.get(\"uploader_ip\")})\n self.updated = True", "response": "Method to update extra metadata fields with dict obtained\n through fileinfo"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn the thumbnail s url for this image audio or video file.", "response": "def thumbnail(self):\n \"\"\"Returns the thumbnail's url for this image, audio, or video file.\n Returns empty string if the file has no thumbnail\"\"\"\n\n if self.filetype not in (\"video\", \"image\", \"audio\"):\n raise RuntimeError(\"Only video, audio and image files can have thumbnails\")\n thumb_srv = self.thumb.get(\"server\")\n url = f\"https://{thumb_srv}\" if thumb_srv else None\n return f\"{url}/asset/{self.fid}/thumb\" if url else \"\""} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef duration(self):\n\n if self.filetype not in (\"video\", \"audio\"):\n raise RuntimeError(\"Only videos and audio have durations\")\n return self.info.get(\"length\") or self.info.get(\"duration\")", "response": "Returns the duration in seconds of this audio or video file"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef delete(self):\n self.room.check_owner()\n self.conn.make_call(\"deleteFiles\", [self.fid])", "response": "Remove this file from the room"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef set_gid(self):\n if self.group:\n gid = getgrnam(self.group).gr_gid\n try:\n os.setgid(gid)\n except Exception:\n message = (\"Unable to switch ownership to {0}:{1}. 
\" +\n \"Did you start the daemon as root?\")\n print(message.format(self.user, self.group))\n sys.exit(1)", "response": "Change the group of the running process"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nchanging the user of the running process", "response": "def set_uid(self):\n \"\"\"Change the user of the running process\"\"\"\n if self.user:\n uid = getpwnam(self.user).pw_uid\n try:\n os.setuid(uid)\n except Exception:\n message = ('Unable to switch ownership to {0}:{1}. ' +\n 'Did you start the daemon as root?')\n print(message.format(self.user, self.group))\n sys.exit(1)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef setup_logging(self):\n self.logger = logging.getLogger()\n\n if os.path.exists('/dev/log'):\n handler = SysLogHandler('/dev/log')\n else:\n handler = SysLogHandler()\n self.logger.addHandler(handler)", "response": "Set up logging to syslog, using /dev/log when it exists."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ndoing the UNIX double-fork magic, see Stevens' \"Advanced Programming in the UNIX Environment\" for details (ISBN 0201563177) http://www.erlenstar.demon.co.uk/unix/faq_2.html#SEC16", "response": "def daemonize(self):\n \"\"\"\n Do the UNIX double-fork magic, see Stevens' \"Advanced\n Programming in the UNIX Environment\" for details (ISBN 0201563177)\n http://www.erlenstar.demon.co.uk/unix/faq_2.html#SEC16\n \"\"\"\n try:\n pid = os.fork()\n if pid > 0:\n sys.exit(0)\n except OSError as e:\n sys.stderr.write('fork #1 failed: %d (%s)\\n' %\n (e.errno, e.strerror))\n sys.exit(1)\n os.chdir('/')\n os.setsid()\n os.umask(0)\n try:\n pid = os.fork()\n if pid > 0:\n sys.exit(0)\n except OSError as e:\n sys.stderr.write('fork #2 failed: %d (%s)\\n' %\n (e.errno, e.strerror))\n sys.exit(1)\n # to properly close the fd's, we need to use sys and os:\n # http://chromano.me/2011/05/23/starting-python-daemons-via-ssh.html\n 
sys.stdin.close()\n sys.stdout.close()\n sys.stderr.close()\n os.closerange(0, 3)\n\n pid = os.getpid()\n with open(self.pid_file, 'w+') as fp:\n fp.write(\"%s\\n\" % pid)\n\n def handler(*args):\n raise BaseException(\"SIGTERM was caught\")\n signal.signal(signal.SIGTERM, handler)\n # atexit functions are \"not called when the program is killed by a\n # signal not handled by Python\". But since SIGTERM is now handled, the\n # atexit functions do get called.\n atexit.register(self._shutdown)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ndetermining the status of the daemon", "response": "def status(self):\n \"\"\"Determine the status of the daemon\"\"\"\n my_name = os.path.basename(sys.argv[0])\n if self._already_running():\n message = \"{0} (pid {1}) is running...\\n\".format(my_name, self.pid)\n sys.stdout.write(message)\n return 0\n sys.stdout.write(\"{0} is stopped\\n\".format(my_name))\n return 3"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nretrieve embeddable resource from given uri.", "response": "def fetch_resources(uri, rel):\n \"\"\"\n Retrieves embeddable resource from given ``uri``.\n\n For now only local resources (images, fonts) are supported.\n\n :param str uri: path or url to image or font resource\n :returns: path to local resource file.\n :rtype: str\n :raises: :exc:`~easy_pdf.exceptions.UnsupportedMediaPathException`\n \"\"\"\n if settings.STATIC_URL and uri.startswith(settings.STATIC_URL):\n path = os.path.join(settings.STATIC_ROOT, uri.replace(settings.STATIC_URL, \"\"))\n elif settings.MEDIA_URL and uri.startswith(settings.MEDIA_URL):\n path = os.path.join(settings.MEDIA_ROOT, uri.replace(settings.MEDIA_URL, \"\"))\n else:\n path = os.path.join(settings.STATIC_ROOT, uri)\n\n if not os.path.isfile(path):\n raise UnsupportedMediaPathException(\n \"media urls must start with {} or {}\".format(\n settings.MEDIA_ROOT, settings.STATIC_ROOT\n )\n )\n\n return path.replace(\"\\\\\", 
\"/\")"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef html_to_pdf(content, encoding=\"utf-8\",\n link_callback=fetch_resources, **kwargs):\n \"\"\"\n Converts html ``content`` into PDF document.\n\n :param unicode content: html content\n :returns: PDF content\n :rtype: :class:`bytes`\n :raises: :exc:`~easy_pdf.exceptions.PDFRenderingError`\n \"\"\"\n src = BytesIO(content.encode(encoding))\n dest = BytesIO()\n\n pdf = pisa.pisaDocument(src, dest, encoding=encoding,\n link_callback=link_callback, **kwargs)\n if pdf.err:\n logger.error(\"Error rendering PDF document\")\n for entry in pdf.log:\n if entry[0] == xhtml2pdf.default.PML_ERROR:\n logger_x2p.error(\"line %s, msg: %s, fragment: %s\", entry[1], entry[2], entry[3])\n raise PDFRenderingError(\"Errors rendering PDF\", content=content, log=pdf.log)\n\n if pdf.warn:\n for entry in pdf.log:\n if entry[0] == xhtml2pdf.default.PML_WARNING:\n logger_x2p.warning(\"line %s, msg: %s, fragment: %s\", entry[1], entry[2], entry[3])\n\n return dest.getvalue()", "response": "Converts html content into PDF document."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef make_response(content, filename=None, content_type=\"application/pdf\"):\n response = HttpResponse(content, content_type=content_type)\n if filename is not None:\n response[\"Content-Disposition\"] = \"attachment; %s\" % encode_filename(filename)\n return response", "response": "Creates a response object with content."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nrenders a Django html template and return a PDF document.", "response": "def render_to_pdf(template, context, using=None, request=None, encoding=\"utf-8\", **kwargs):\n \"\"\"\n Create PDF document from Django html template.\n\n :param str template: Path to Django template\n :param dict context: Template context\n :param using: Optional Django template engine\n 
:param request: Django HTTP request\n :type request: :class:`django.http.HttpRequest`\n\n :returns: rendered PDF\n :rtype: :class:`bytes`\n\n :raises: :exc:`~easy_pdf.exceptions.PDFRenderingError`, :exc:`~easy_pdf.exceptions.UnsupportedMediaPathException`\n \"\"\"\n content = loader.render_to_string(template, context, request=request, using=using)\n return html_to_pdf(content, encoding, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nrender a PDF response using given request template and context.", "response": "def render_to_pdf_response(request, template, context, using=None, filename=None,\n encoding=\"utf-8\", **kwargs):\n \"\"\"\n Renders a PDF response using given ``request``, ``template`` and ``context``.\n\n If ``filename`` param is specified then the response ``Content-Disposition``\n header will be set to ``attachment`` making the browser display\n a \"Save as..\" dialog.\n\n :param request: Django HTTP request\n :type request: :class:`django.http.HttpRequest`\n :param str template: Path to Django template\n :param dict context: Template context\n :param using: Optional Django template engine\n :rtype: :class:`django.http.HttpResponse`\n \"\"\"\n try:\n pdf = render_to_pdf(template, context, using=using, encoding=encoding, **kwargs)\n return make_response(pdf, filename)\n except PDFRenderingError as e:\n logger.exception(e.message)\n return HttpResponse(e.message)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_pdf_response(self, context, **response_kwargs):\n return render_to_pdf_response(\n request=self.request,\n template=self.get_template_names(),\n context=context,\n using=self.template_engine,\n filename=self.get_pdf_filename(),\n **self.get_pdf_kwargs()\n )", "response": "Renders PDF document and prepares response."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreplace all strings in dictionary with compiled version 
of themselves and return dictionary.", "response": "def compile_patterns_in_dictionary(dictionary):\n \"\"\"\n Replace all strings in dictionary with compiled\n version of themselves and return dictionary.\n \"\"\"\n for key, value in dictionary.items():\n if isinstance(value, str):\n dictionary[key] = re.compile(value)\n elif isinstance(value, dict):\n compile_patterns_in_dictionary(value)\n return dictionary"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef filter(self, source_file, encoding): # noqa A001\n\n with codecs.open(source_file, 'r', encoding=encoding) as f:\n text = f.read()\n\n return [filters.SourceText(self._filter(text), source_file, encoding, 'context')]", "response": "Parse file and return list of source text."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _filter(self, text):\n\n if self.line_endings:\n text = self.norm_nl(text)\n\n new_text = []\n index = 0\n last = 0\n end = len(text)\n while index < end:\n m = self.escapes.match(text, pos=index) if self.escapes else None\n if m:\n index = m.end(0)\n continue\n handled = False\n for delimiter in self.delimiters:\n m = delimiter[0].match(text, pos=index)\n if m:\n if self.context_visible_first is True:\n new_text.append(text[last:m.start(0)])\n else:\n new_text.append(m.group(delimiter[1]))\n index = m.end(0)\n last = index\n handled = True\n break\n if handled:\n continue\n index += 1\n if last < end and self.context_visible_first is True:\n new_text.append(text[last:end])\n\n return ' '.join(new_text)", "response": "Filter the text for context delimiter."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncreate a new connection which can later be upgraded to use TLS.", "response": "def create_starttls_connection(\n loop,\n protocol_factory,\n host=None,\n port=None,\n *,\n sock=None,\n ssl_context_factory=None,\n 
use_starttls=False,\n local_addr=None,\n **kwargs):\n \"\"\"\n Create a connection which can later be upgraded to use TLS.\n\n .. versionchanged:: 0.4\n\n The `local_addr` argument was added.\n\n :param loop: The event loop to use.\n :type loop: :class:`asyncio.BaseEventLoop`\n :param protocol_factory: Factory for the protocol for the connection\n :param host: The host name or address to connect to\n :type host: :class:`str` or :data:`None`\n :param port: The port to connect to\n :type port: :class:`int` or :data:`None`\n :param sock: A socket to wrap (conflicts with `host` and `port`)\n :type sock: :class:`socket.socket`\n :param ssl_context_factory: Function which returns a\n :class:`OpenSSL.SSL.Context` to use for TLS operations\n :param use_starttls: Flag to control whether TLS is negotiated right away\n or deferredly.\n :type use_starttls: :class:`bool`\n :param local_addr: Address to bind to\n\n This is roughly a copy of the asyncio implementation of\n :meth:`asyncio.BaseEventLoop.create_connection`. It returns a pair\n ``(transport, protocol)``, where `transport` is a newly created\n :class:`STARTTLSTransport` instance. Further keyword arguments are\n forwarded to the constructor of :class:`STARTTLSTransport`.\n\n `loop` must be a :class:`asyncio.BaseEventLoop`, with support for\n :meth:`asyncio.BaseEventLoop.add_reader` and the corresponding writer and\n removal functions for sockets. This is typically a selector type event\n loop.\n\n `protocol_factory` must be a callable which (without any arguments) returns\n a :class:`asyncio.Protocol` which will be connected to the STARTTLS\n transport.\n\n `host` and `port` must be a hostname and a port number, or both\n :data:`None`. Both must be :data:`None`, if and only if `sock` is not\n :data:`None`. In that case, `sock` is used instead of a newly created\n socket. 
`sock` is put into non-blocking mode and must be a stream socket.\n\n If `use_starttls` is :data:`True`, no TLS handshake will be performed\n initially. Instead, the connection is established without any\n transport-layer security. It is expected that the\n :meth:`STARTTLSTransport.starttls` method is used when the application\n protocol requires TLS. If `use_starttls` is :data:`False`, the TLS\n handshake is initiated right away.\n\n `local_addr` may be an address to bind this side of the socket to. If\n omitted or :data:`None`, the local address is assigned by the operating\n system.\n\n This coroutine returns when the stream is established. If `use_starttls` is\n :data:`False`, this means that the full TLS handshake has to be finished\n for this coroutine to return. Otherwise, no TLS handshake takes place. It\n must be invoked using the :meth:`STARTTLSTransport.starttls` coroutine.\n \"\"\"\n\n if host is not None and port is not None:\n host_addrs = yield from loop.getaddrinfo(\n host, port,\n type=socket.SOCK_STREAM)\n\n exceptions = []\n\n for family, type, proto, cname, address in host_addrs:\n sock = None\n try:\n sock = socket.socket(family=family, type=type, proto=proto)\n sock.setblocking(False)\n if local_addr is not None:\n sock.bind(local_addr)\n yield from loop.sock_connect(sock, address)\n except OSError as exc:\n if sock is not None:\n sock.close()\n exceptions.append(exc)\n else:\n break\n else:\n if len(exceptions) == 1:\n raise exceptions[0]\n\n model = str(exceptions[0])\n if all(str(exc) == model for exc in exceptions):\n raise exceptions[0]\n\n try:\n from aioxmpp.errors import MultiOSError\n except ImportError:\n MultiOSError = OSError\n\n exc = MultiOSError(\n \"could not connect to [{}]:{}\".format(host, port),\n exceptions)\n raise exc\n elif sock is None:\n raise ValueError(\"sock must not be None if host and/or port are None\")\n else:\n sock.setblocking(False)\n\n protocol = protocol_factory()\n waiter = asyncio.Future(loop=loop)\n 
transport = STARTTLSTransport(loop, sock, protocol,\n ssl_context_factory=ssl_context_factory,\n waiter=waiter,\n use_starttls=use_starttls,\n **kwargs)\n yield from waiter\n\n return transport, protocol"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _call_connection_lost_and_clean_up(self, exc):\n\n self._state = _State.CLOSED\n try:\n self._protocol.connection_lost(exc)\n finally:\n self._rawsock.close()\n if self._tls_conn is not None:\n self._tls_conn.set_app_data(None)\n self._tls_conn = None\n self._rawsock = None\n self._protocol = None\n self._loop = None", "response": "Clean up all resources and call the protocol connection lost method."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nabort the current session.", "response": "def abort(self):\n \"\"\"\n Immediately close the stream, without sending remaining buffers or\n performing a proper shutdown.\n \"\"\"\n if self._state == _State.CLOSED:\n self._invalid_state(\"abort() called\")\n return\n\n self._force_close(None)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncloses the current session.", "response": "def close(self):\n \"\"\"\n Close the stream. This performs a proper stream shutdown, except if the\n stream is currently performing a TLS handshake. 
In that case, calling\n :meth:`close` is equivalent to calling :meth:`abort`.\n\n Otherwise, the transport waits until all buffers are transmitted.\n \"\"\"\n\n if self._state == _State.CLOSED:\n self._invalid_state(\"close() called\")\n return\n\n if self._state == _State.TLS_HANDSHAKING:\n # hard-close\n self._force_close(None)\n elif self._state == _State.TLS_SHUTTING_DOWN:\n # shut down in progress, nothing to do\n pass\n elif self._buffer:\n # there is data to be send left, first wait for it to transmit ...\n self._closing = True\n elif self._state.tls_started:\n # normal TLS state, nothing left to transmit, shut down\n self._tls_shutdown()\n else:\n # normal non-TLS state, nothing left to transmit, close\n self._raw_shutdown()"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nstarting a TLS stream on top of the socket.", "response": "def starttls(self, ssl_context=None,\n post_handshake_callback=None):\n \"\"\"\n Start a TLS stream on top of the socket. This is an invalid operation\n if the stream is not in RAW_OPEN state.\n\n If `ssl_context` is set, it overrides the `ssl_context` passed to the\n constructor. If `post_handshake_callback` is set, it overrides the\n `post_handshake_callback` passed to the constructor.\n\n .. 
versionchanged:: 0.4\n\n This method is now a barrier with respect to reads and writes:\n before the handshake is completed (including the post handshake\n callback, if any), no data is received or sent.\n \"\"\"\n if self._state != _State.RAW_OPEN or self._closing:\n raise self._invalid_state(\"starttls() called\")\n\n if ssl_context is not None:\n self._ssl_context = ssl_context\n self._extra.update(\n sslcontext=ssl_context\n )\n else:\n self._ssl_context = self._ssl_context_factory(self)\n\n if post_handshake_callback is not None:\n self._tls_post_handshake_callback = post_handshake_callback\n\n self._waiter = asyncio.Future()\n self._waiter.add_done_callback(self._waiter_done)\n self._initiate_tls()\n try:\n yield from self._waiter\n finally:\n self._waiter = None"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nwrite data to the transport.", "response": "def write(self, data):\n \"\"\"\n Write data to the transport. This is an invalid operation if the stream\n is not writable, that is, if it is closed. 
During TLS negotiation, the\n data is buffered.\n \"\"\"\n if not isinstance(data, (bytes, bytearray, memoryview)):\n raise TypeError('data argument must be byte-ish (%r)',\n type(data))\n\n if not self._state.is_writable or self._closing:\n raise self._invalid_state(\"write() called\")\n\n if not data:\n return\n\n if not self._buffer:\n self._loop.add_writer(self._raw_fd, self._write_ready)\n\n self._buffer.extend(data)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef explode(prefix: str):\n def _app(i, e=None):\n if i is not None:\n return {k: v for (k, v) in iter_fields(i)}, None\n return i, e\n\n def iter_fields(event_field: Union[dict, list]):\n if type(event_field) is dict:\n for key, val in event_field.items():\n yield (key, val)\n elif type(event_field) is list:\n for i, value in enumerate(event_field):\n for key, val in value.items():\n if not i == 0:\n yield (\"{}_{}\".format(key, i), val)\n else:\n yield (key, val)\n\n return compose(_app, add_column_prefix(prefix))", "response": "Given an array of objects, de-normalize it into prefixed fields."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nchecking if the content contains XML encoding.", "response": "def _has_xml_encode(self, content):\n \"\"\"Check XML encoding.\"\"\"\n\n encode = None\n\n m = RE_XML_START.match(content)\n if m:\n if m.group(1):\n m2 = RE_XML_ENCODE.match(m.group(1))\n\n if m2:\n enc = m2.group(2).decode('ascii')\n\n try:\n codecs.getencoder(enc)\n encode = enc\n except LookupError:\n pass\n else:\n if m.group(2):\n enc = 'utf-32-be'\n text = m.group(2)\n elif m.group(3):\n enc = 'utf-32-le'\n text = m.group(3)\n elif m.group(4):\n enc = 'utf-16-be'\n text = m.group(4)\n elif m.group(5):\n enc = 'utf-16-le'\n text = m.group(5)\n try:\n m2 = RE_XML_ENCODE_U.match(text.decode(enc))\n except Exception: # pragma: no cover\n m2 = None\n\n if m2:\n enc = m2.group(2)\n\n try:\n codecs.getencoder(enc)\n 
encode = enc\n except Exception:\n pass\n\n return encode"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef is_break_tag(self, el):\n\n name = el.name\n return name in self.break_tags or name in self.user_break_tags", "response": "Check if tag is an element we should break on."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nstoring the text as desired.", "response": "def store_blocks(self, el, blocks, text, force_root):\n \"\"\"Store the text as desired.\"\"\"\n\n if force_root or el.parent is None or self.is_break_tag(el):\n content = ''.join(text)\n if content:\n blocks.append((content, self.construct_selector(el)))\n text = []\n return text"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef construct_selector(self, el, attr=''):\n\n selector = deque()\n\n ancestor = el\n while ancestor and ancestor.parent:\n if ancestor is not el:\n selector.appendleft(ancestor.name)\n else:\n tag = ancestor.name\n prefix = ancestor.prefix\n sel = ''\n if prefix:\n sel += prefix + '|'\n sel = tag\n if attr:\n sel += '[%s]' % attr\n selector.appendleft(sel)\n ancestor = ancestor.parent\n return '>'.join(selector)", "response": "Construct an selector for context."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef to_text(self, tree, force_root=False):\n\n self.extract_tag_metadata(tree)\n\n text = []\n attributes = []\n comments = []\n blocks = []\n\n if not (self.ignores.match(tree) if self.ignores else None):\n # The root of the document is the BeautifulSoup object\n capture = self.captures.match(tree) if self.captures is not None else None\n # Check attributes for normal tags\n if capture:\n for attr in self.attributes:\n value = tree.attrs.get(attr, '').strip()\n if value:\n sel = self.construct_selector(tree, attr=attr)\n attributes.append((value, sel))\n\n # Walk 
children\n for child in tree.children:\n string = str(child).strip()\n is_comment = isinstance(child, bs4.Comment)\n if isinstance(child, bs4.element.Tag):\n t, b, a, c = self.to_text(child)\n text.extend(t)\n attributes.extend(a)\n comments.extend(c)\n blocks.extend(b)\n # Get content if not the root and not a comment (unless we want comments).\n elif not isinstance(child, NON_CONTENT) and (not is_comment or self.comments):\n string = str(child).strip()\n if string:\n if is_comment:\n sel = self.construct_selector(tree) + ''\n comments.append((string, sel))\n elif capture:\n text.append(string)\n text.append(' ')\n elif self.comments:\n for child in tree.descendants:\n if isinstance(child, bs4.Comment):\n string = str(child).strip()\n if string:\n sel = self.construct_selector(tree) + ''\n comments.append((string, sel))\n\n text = self.store_blocks(tree, blocks, text, force_root)\n\n if tree.parent is None or force_root:\n return blocks, attributes, comments\n else:\n return text, blocks, attributes, comments", "response": "Extract text from tags and attributes."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nfilter the source text.", "response": "def _filter(self, text, context, encoding):\n \"\"\"Filter the source text.\"\"\"\n\n content = []\n blocks, attributes, comments = self.to_text(bs4.BeautifulSoup(text, self.parser))\n if self.comments:\n for c, desc in comments:\n content.append(filters.SourceText(c, context + ': ' + desc, encoding, self.type + 'comment'))\n if self.attributes:\n for a, desc in attributes:\n content.append(filters.SourceText(a, context + ': ' + desc, encoding, self.type + 'attribute'))\n for b, desc in blocks:\n content.append(filters.SourceText(b, context + ': ' + desc, encoding, self.type + 'content'))\n return content"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nfiltering the source text.", "response": "def sfilter(self, source):\n \"\"\"Filter.\"\"\"\n\n return 
self._filter(source.text, source.context, source.encoding)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef lazy_import(func):\n try:\n f = sys._getframe(1)\n except Exception: # pragma: no cover\n namespace = None\n else:\n namespace = f.f_locals\n return _LazyImport(func.__name__, func, namespace)", "response": "Decorator for declaring a lazy import."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nconvert csv to dict", "response": "def csv_to_map(fields, delimiter=','):\n \"\"\"\n Convert csv to dict\n\n :param delimiter:\n :param fields:\n :return:\n \"\"\"\n\n def _csv_to_list(csv_input):\n \"\"\"\n Util function to overcome the use of files by in-memory io buffer\n\n :param csv_input:\n :return:\n \"\"\"\n io_file = io.StringIO(csv_input)\n return next(csv.reader(io_file, delimiter=delimiter))\n\n def _app(current_tuple, e=None):\n if current_tuple is None or len(current_tuple) == 0:\n return None, \"no input\"\n csv_list = _csv_to_list(current_tuple)\n if len(csv_list) != len(fields):\n e = {\"input\": \"unexpected number of fields {} obtained {} expected\".format(len(csv_list), len(fields))}\n return None, e\n return {k: v for (k, v) in zip(fields, csv_list)}, e\n if fields is None or len(fields) == 0:\n return fixed_input(None, \"no fields provided, cannot proceed without order\")\n return _app"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nconverts dict to csv", "response": "def map_to_csv(fields, delimiter=\",\"):\n \"\"\"\n Convert dict to csv\n\n :param fields:\n :return:\n \"\"\"\n\n def _list_to_csv(l):\n \"\"\"\n Util function to overcome the use of files by in-memory io buffer\n\n :param l:\n :return:\n \"\"\"\n io_file = io.StringIO()\n writer = csv.writer(io_file, quoting=csv.QUOTE_NONNUMERIC, lineterminator='', delimiter=delimiter)\n writer.writerow(l)\n return io_file.getvalue()\n\n def _app(current_tuple, 
e=None):\n if e is not None:\n return None, e\n csv_list = []\n for f in fields:\n if f in current_tuple:\n csv_list.append(current_tuple[f])\n else:\n e.update({\"output\": \"expected field {} not found\".format(f)})\n return None, e\n return _list_to_csv(csv_list), e\n if fields is None or len(fields) == 0:\n return fixed_input(None, \"no fields provided, cannot proceed without order\")\n return _app"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nmake a function compatible with xarray. DataArray.", "response": "def xarrayfunc(func):\n \"\"\"Make a function compatible with xarray.DataArray.\n\n This function is intended to be used as a decorator like::\n\n >>> @dc.xarrayfunc\n >>> def func(array):\n ... # do something\n ... return newarray\n >>>\n >>> result = func(array)\n\n Args:\n func (function): Function to be wrapped. The first argument\n of the function must be an array to be processed.\n\n Returns:\n wrapper (function): Wrapped function.\n \"\"\"\n @wraps(func)\n def wrapper(*args, **kwargs):\n if any(isinstance(arg, xr.DataArray) for arg in args):\n newargs = []\n for arg in args:\n if isinstance(arg, xr.DataArray):\n newargs.append(arg.values)\n else:\n newargs.append(arg)\n\n return dc.full_like(args[0], func(*newargs, **kwargs))\n else:\n return func(*args, **kwargs)\n\n return wrapper"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nmaking a function compatible with multicore chunk processing.", "response": "def chunk(*argnames, concatfunc=None):\n \"\"\"Make a function compatible with multicore chunk processing.\n\n This function is intended to be used as a decorator like::\n\n >>> @dc.chunk('array')\n >>> def func(array):\n ... # do something\n ... 
return newarray\n >>>\n >>> result = func(array, timechunk=10)\n\n or you can set a global chunk parameter outside the function::\n\n >>> timechunk = 10\n >>> result = func(array)\n \"\"\"\n def _chunk(func):\n depth = [s.function for s in stack()].index('')\n f_globals = getframe(depth).f_globals\n\n # original (unwrapped) function\n orgname = '_original_' + func.__name__\n orgfunc = dc.utils.copy_function(func, orgname)\n f_globals[orgname] = orgfunc\n\n @wraps(func)\n def wrapper(*args, **kwargs):\n depth = [s.function for s in stack()].index('')\n f_globals = getframe(depth).f_globals\n\n # parse args and kwargs\n params = signature(func).parameters\n for i, (key, val) in enumerate(params.items()):\n if not val.kind == Parameter.POSITIONAL_OR_KEYWORD:\n break\n\n try:\n kwargs.update({key: args[i]})\n except IndexError:\n kwargs.setdefault(key, val.default)\n\n # n_chunks and n_processes\n n_chunks = DEFAULT_N_CHUNKS\n n_processes = MAX_WORKERS\n\n if argnames:\n length = len(kwargs[argnames[0]])\n\n if 'numchunk' in kwargs:\n n_chunks = kwargs.pop('numchunk')\n elif 'timechunk' in kwargs:\n n_chunks = round(length / kwargs.pop('timechunk'))\n elif 'numchunk' in f_globals:\n n_chunks = f_globals['numchunk']\n elif 'timechunk' in f_globals:\n n_chunks = round(length / f_globals['timechunk'])\n\n if 'n_processes' in kwargs:\n n_processes = kwargs.pop('n_processes')\n elif 'n_processes' in f_globals:\n n_processes = f_globals['n_processes']\n\n # make chunked args\n chunks = {}\n for name in argnames:\n arg = kwargs.pop(name)\n try:\n chunks.update({name: np.array_split(arg, n_chunks)})\n except TypeError:\n chunks.update({name: np.tile(arg, n_chunks)})\n\n # run the function\n futures = []\n results = []\n with dc.utils.one_thread_per_process(), Pool(n_processes) as p:\n for i in range(n_chunks):\n chunk = {key: val[i] for key, val in chunks.items()}\n futures.append(p.submit(orgfunc, **{**chunk, **kwargs}))\n\n for future in futures:\n 
results.append(future.result())\n\n # make an output\n if concatfunc is not None:\n return concatfunc(results)\n\n try:\n return xr.concat(results, 't')\n except TypeError:\n return np.concatenate(results, 0)\n\n return wrapper\n\n return _chunk"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef format_sec(s):\n\n prefixes = [\"\", \"m\", \"u\", \"n\"]\n unit = 0\n\n while s < 1 and unit + 1 < len(prefixes):\n s *= 1000\n unit += 1\n\n return \"{:.1f} {}s\".format(s, prefixes[unit])", "response": "Format seconds in a more human readable way. It supports units down to\n nanoseconds."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef create_entity(self, **kwargs):\n data = kwargs\n headers = kwargs.get('headers', {})\n if 'headers' in data:\n del data['headers']\n return self._create(\"/entities\", data=data, coerce=self.get_entity, headers=headers)", "response": "Create a new entity."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef make_gtfs(source_path, target_path, buffer, ndigits):\n pfeed = pf.read_protofeed(source_path)\n feed = m.build_feed(pfeed, buffer=buffer)\n gt.write_gtfs(feed, target_path, ndigits=ndigits)", "response": "Create a GTFS feed from the source file and write it to the target file."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef psd(data, dt, ndivide=1, window=hanning, overlap_half=False):\n logger = getLogger('decode.utils.ndarray.psd')\n\n if overlap_half:\n step = int(len(data) / (ndivide + 1))\n size = step * 2\n else:\n step = int(len(data) / ndivide)\n size = step\n\n if bin(len(data)).count('1') != 1:\n logger.warning('warning: length of data is not power of 2: {}'.format(len(data)))\n size = int(len(data) / ndivide)\n if bin(size).count('1') != 1.:\n if overlap_half:\n logger.warning('warning: ((length of data) / (ndivide+1)) 
* 2 is not power of 2: {}'.format(size))\n else:\n logger.warning('warning: (length of data) / ndivide is not power of 2: {}'.format(size))\n psd = np.zeros(size)\n T = (size - 1) * dt\n vs = 1 / dt\n vk_ = fftfreq(size, dt)\n vk = vk_[np.where(vk_ >= 0)]\n\n for i in range(ndivide):\n d = data[i * step:i * step + size]\n if window is None:\n w = np.ones(size)\n corr = 1.0\n else:\n w = window(size)\n corr = np.mean(w**2)\n psd = psd + 2 * (np.abs(fft(d * w)))**2 / size * dt / corr\n\n return vk, psd[:len(vk)] / ndivide", "response": "Calculate the power spectrum density of a set of data."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncalculate the Allan variance.", "response": "def allan_variance(data, dt, tmax=10):\n \"\"\"Calculate Allan variance.\n\n Args:\n data (np.ndarray): Input data.\n dt (float): Time between each data.\n tmax (float): Maximum time.\n\n Returns:\n vk (np.ndarray): Frequency.\n allanvar (np.ndarray): Allan variance.\n \"\"\"\n allanvar = []\n nmax = len(data) if len(data) < tmax / dt else int(tmax / dt)\n for i in range(1, nmax+1):\n databis = data[len(data) % i:]\n y = databis.reshape(len(data)//i, i).mean(axis=1)\n allanvar.append(((y[1:] - y[:-1])**2).mean() / 2)\n return dt * np.arange(1, nmax+1), np.array(allanvar)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef discover(cls, path, depth=\"0\"):\n\n # Path should already be sanitized\n attributes = _get_attributes_from_path(path)\n\n try:\n if len(attributes) == 3:\n # If an item, create a collection for the item.\n item = attributes.pop()\n path = \"/\".join(attributes)\n collection = cls(path, _is_principal(path))\n yield collection.get(item)\n return\n\n collection = cls(path, _is_principal(path))\n except api.exceptions.DoesNotExist:\n return\n\n yield collection\n\n if depth == \"0\":\n return\n\n if len(attributes) == 0:\n yield cls(posixpath.join(path, cls.user), principal=True)\n elif 
len(attributes) == 1:\n for journal in cls.etesync.list():\n if journal.collection.TYPE in (api.AddressBook.TYPE, api.Calendar.TYPE, api.TaskList.TYPE):\n yield cls(posixpath.join(path, journal.uid), principal=False)\n elif len(attributes) == 2:\n for item in collection.list():\n yield collection.get(item)\n\n elif len(attributes) > 2:\n raise RuntimeError(\"Found more than one attribute. Shouldn't happen\")", "response": "Discover a list of collections under the given path."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nencode as quoted - string ( see RFC 2616 ).", "response": "def etag(self):\n \"\"\"Encoded as quoted-string (see RFC 2616).\"\"\"\n if self.is_fake:\n return\n\n entry = None\n for entry in self.journal.list():\n pass\n\n return entry.uid if entry is not None else hashlib.sha256(b\"\").hexdigest()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef create_collection(cls, href, collection=None, props=None):\n # Path should already be sanitized\n attributes = _get_attributes_from_path(href)\n if len(attributes) <= 1:\n raise PrincipalNotAllowedError\n\n # Try to infer tag\n if not props:\n props = {}\n if not props.get(\"tag\") and collection:\n props[\"tag\"] = collection[0].name\n\n # Try first getting the collection if exists, or create a new one otherwise.\n try:\n self = cls(href, principal=False, tag=props.get(\"tag\"))\n except api.exceptions.DoesNotExist:\n user_path = posixpath.join('/', cls.user)\n collection_name = hashlib.sha256(str(time.time()).encode()).hexdigest()\n sane_path = posixpath.join(user_path, collection_name)\n\n if props.get(\"tag\") == \"VCALENDAR\":\n inst = api.Calendar.create(cls.etesync, collection_name, None)\n elif props.get(\"tag\") == \"VADDRESSBOOK\":\n inst = api.AddressBook.create(cls.etesync, collection_name, None)\n else:\n raise RuntimeError(\"Bad tag.\")\n\n inst.save()\n self = cls(sane_path, principal=False)\n\n 
self.set_meta(props)\n\n if collection:\n if props.get(\"tag\") == \"VCALENDAR\":\n collection, = collection\n items = []\n for content in (\"vevent\", \"vtodo\", \"vjournal\"):\n items.extend(\n getattr(collection, \"%s_list\" % content, []))\n items_by_uid = groupby(sorted(items, key=get_uid), get_uid)\n vobject_items = {}\n for uid, items in items_by_uid:\n new_collection = vobject.iCalendar()\n for item in items:\n new_collection.add(item)\n href = self._find_available_file_name(\n vobject_items.get)\n vobject_items[href] = new_collection\n self.upload_all_nonatomic(vobject_items)\n elif props.get(\"tag\") == \"VADDRESSBOOK\":\n vobject_items = {}\n for card in collection:\n href = self._find_available_file_name(\n vobject_items.get)\n vobject_items[href] = card\n self.upload_all_nonatomic(vobject_items)\n\n return self", "response": "Create a new collection."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngets the current sync token and changed items for synchronization.", "response": "def sync(self, old_token=None):\n \"\"\"Get the current sync token and changed items for synchronization.\n\n ``old_token`` an old sync token which is used as the base of the\n delta update. 
If sync token is missing, all items are returned.\n ValueError is raised for invalid or old tokens.\n \"\"\"\n # FIXME: Actually implement\n token = \"http://radicale.org/ns/sync/%s\" % self.etag.strip(\"\\\"\")\n if old_token:\n raise ValueError(\"Sync token are not supported (you can ignore this warning)\")\n return token, self.list()"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nfetch a single item.", "response": "def get(self, href):\n \"\"\"Fetch a single item.\"\"\"\n if self.is_fake:\n return\n\n uid = _trim_suffix(href, ('.ics', '.ical', '.vcf'))\n etesync_item = self.collection.get(uid)\n if etesync_item is None:\n return None\n\n try:\n item = vobject.readOne(etesync_item.content)\n except Exception as e:\n raise RuntimeError(\"Failed to parse item %r in %r\" %\n (href, self.path)) from e\n # FIXME: Make this sensible\n last_modified = time.strftime(\n \"%a, %d %b %Y %H:%M:%S GMT\",\n time.gmtime(time.time()))\n return EteSyncItem(self, item, href, last_modified=last_modified, etesync_item=etesync_item)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nuploads a new or replace an existing item.", "response": "def upload(self, href, vobject_item):\n \"\"\"Upload a new or replace an existing item.\"\"\"\n if self.is_fake:\n return\n\n content = vobject_item.serialize()\n try:\n item = self.get(href)\n etesync_item = item.etesync_item\n etesync_item.content = content\n except api.exceptions.DoesNotExist:\n etesync_item = self.collection.get_content_class().create(self.collection, content)\n\n etesync_item.save()\n\n return self.get(href)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ndelete an item. 
When ``href`` is ``None``, delete the collection.", "response": "def delete(self, href=None):\n \"\"\"Delete an item.\n\n When ``href`` is ``None``, delete the collection.\n\n \"\"\"\n if self.is_fake:\n return\n\n if href is None:\n self.collection.delete()\n return\n\n item = self.get(href)\n if item is None:\n raise ComponentNotFoundError(href)\n\n item.etesync_item.delete()"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nget metadata value for collection.", "response": "def get_meta(self, key=None):\n \"\"\"Get metadata value for collection.\"\"\"\n if self.is_fake:\n return {}\n\n if key == \"tag\":\n return self.tag\n elif key is None:\n ret = {}\n for key in self.journal.info.keys():\n ret[key] = self.meta_mappings.map_get(self.journal.info, key)[1]\n return ret\n else:\n key, value = self.meta_mappings.map_get(self.journal.info, key)\n return value"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef set_meta(self, _props):\n if self.is_fake:\n return\n\n props = {}\n for key, value in _props.items():\n key, value = self.meta_mappings.map_set(key, value)\n props[key] = value\n\n # Pop out tag which we don't want\n props.pop(\"tag\", None)\n\n self.journal.update_info({})\n self.journal.update_info(props)\n self.journal.save()", "response": "Set metadata values for collection."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef set_meta_all(self, props):\n delta_props = self.get_meta()\n for key in delta_props.keys():\n if key not in props:\n delta_props[key] = None\n delta_props.update(props)\n self.set_meta(delta_props)", "response": "Set metadata values for all elements in the collection."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nget the HTTP - datetime of when the collection was modified.", "response": "def last_modified(self):\n \"\"\"Get the 
HTTP-datetime of when the collection was modified.\"\"\"\n # FIXME: Make this sensible\n last_modified = time.strftime(\n \"%a, %d %b %Y %H:%M:%S GMT\",\n time.gmtime(time.time()))\n return last_modified"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nget the unicode string representing the whole collection.", "response": "def serialize(self):\n \"\"\"Get the unicode string representing the whole collection.\"\"\"\n import datetime\n items = []\n time_begin = datetime.datetime.now()\n for href in self.list():\n items.append(self.get(href).item)\n time_end = datetime.datetime.now()\n self.logger.info(\n \"Collection read %d items in %s sec from %s\", len(items),\n (time_end - time_begin).total_seconds(), self.path)\n if self.get_meta(\"tag\") == \"VCALENDAR\":\n collection = vobject.iCalendar()\n for item in items:\n for content in (\"vevent\", \"vtodo\", \"vjournal\"):\n if content in item.contents:\n for item_part in getattr(item, \"%s_list\" % content):\n collection.add(item_part)\n break\n return collection.serialize()\n elif self.get_meta(\"tag\") == \"VADDRESSBOOK\":\n return \"\".join([item.serialize() for item in items])\n return \"\""} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nacquires a lock for the entire storage.", "response": "def acquire_lock(cls, mode, user=None):\n \"\"\"Set a context manager to lock the whole storage.\n\n ``mode`` must either be \"r\" for shared access or \"w\" for exclusive\n access.\n\n ``user`` is the name of the logged in user or empty.\n\n \"\"\"\n if not user:\n return\n\n with EteSyncCache.lock:\n cls.user = user\n\n cls.etesync = cls._get_etesync_for_user(cls.user)\n\n if cls._should_sync():\n cls._mark_sync()\n cls.etesync.get_or_create_user_info(force_fetch=True)\n cls.etesync.sync_journal_list()\n for journal in cls.etesync.list():\n cls.etesync.pull_journal(journal.uid)\n yield\n if cls.etesync.journal_list_is_dirty():\n 
cls.etesync.sync_journal_list()\n for journal in cls.etesync.list():\n if cls.etesync.journal_is_dirty(journal.uid):\n cls.etesync.sync_journal(journal.uid)\n\n cls.etesync = None\n cls.user = None"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef setup(self):\n\n self.blocks = self.config['block_comments']\n self.lines = self.config['line_comments']\n self.group_comments = self.config['group_comments']\n self.prefix = self.config['prefix']\n self.generic_mode = self.config['generic_mode']\n self.strings = self.config['strings']\n self.trigraphs = self.config['trigraphs']\n self.decode_escapes = self.config['decode_escapes']\n self.charset_size = self.config['charset_size']\n self.wide_charset_size = self.config['wide_charset_size']\n self.exec_charset = self.get_encoding_name(self.config['exec_charset'])\n self.wide_exec_charset = self.get_encoding_name(self.config['wide_exec_charset'])\n self.string_types, self.wild_string_types = self.eval_string_type(self.config['string_types'])\n if not self.generic_mode:\n self.pattern = RE_CPP", "response": "Initialize the object variables that are not yet set."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef evaluate_inline_tail(self, groups):\n\n if self.lines:\n self.line_comments.append([groups['line'][2:].replace('\\\\\\n', ''), self.line_num, self.current_encoding])", "response": "Evaluate inline comments at the tail of source code."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nevaluate inline comments on their own lines.", "response": "def evaluate_inline(self, groups):\n \"\"\"Evaluate inline comments on their own lines.\"\"\"\n\n # Consecutive lines with only comments with same leading whitespace\n # will be captured as a single block.\n if self.lines:\n if (\n self.group_comments and\n self.line_num == self.prev_line + 1 and\n groups['leading_space'] 
== self.leading\n ):\n self.line_comments[-1][0] += '\\n' + groups['line'][2:].replace('\\\\\\n', '')\n else:\n self.line_comments.append(\n [groups['line'][2:].replace('\\\\\\n', ''), self.line_num, self.current_encoding]\n )\n self.leading = groups['leading_space']\n self.prev_line = self.line_num"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nsearch for strings code and inline comments.", "response": "def evaluate(self, m):\n \"\"\"Search for comments.\"\"\"\n\n g = m.groupdict()\n if g[\"strings\"]:\n self.evaluate_strings(g)\n self.line_num += g['strings'].count('\\n')\n elif g[\"code\"]:\n self.line_num += g[\"code\"].count('\\n')\n else:\n if g['block']:\n self.evaluate_block(g)\n elif g['start'] is None:\n self.evaluate_inline_tail(g)\n else:\n self.evaluate_inline(g)\n self.line_num += g['comments'].count('\\n')"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef extend_src_text(self, content, context, text_list, category):\n\n prefix = self.prefix + '-' if self.prefix else ''\n\n for comment, line, encoding in text_list:\n content.append(\n filters.SourceText(\n textwrap.dedent(comment),\n \"%s (%d)\" % (context, line),\n encoding,\n prefix + category\n )\n )", "response": "Extend the source text list with the gathered text data."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef cube(data, xcoords=None, ycoords=None, chcoords=None, scalarcoords=None, datacoords=None, attrs=None, name=None):\n # initialize coords with default values\n cube = xr.DataArray(data, dims=('x', 'y', 'ch'), attrs=attrs, name=name)\n cube.dcc._initcoords()\n\n # update coords with input values (if any)\n if xcoords is not None:\n cube.coords.update({key: ('x', xcoords[key]) for key in xcoords})\n\n if ycoords is not None:\n cube.coords.update({key: ('y', ycoords[key]) for key in ycoords})\n\n if chcoords is not 
None:\n cube.coords.update({key: ('ch', chcoords[key]) for key in chcoords})\n\n if datacoords is not None:\n cube.coords.update({key: (('x', 'y', 'ch'), datacoords[key]) for key in datacoords})\n\n if scalarcoords is not None:\n cube.coords.update(scalarcoords)\n\n return cube", "response": "Create a cube as an instance of xarray. DataArray with default values."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef tocube(array, **kwargs):\n # pick up kwargs\n unit = kwargs.pop('unit', 'deg')\n unit2deg = getattr(u, unit).to('deg')\n\n xc = kwargs.pop('xc', float(array.xref)) * unit2deg\n yc = kwargs.pop('yc', float(array.yref)) * unit2deg\n xarr = kwargs.pop('xarr', None)\n yarr = kwargs.pop('yarr', None)\n xmin = kwargs.pop('xmin', None)\n xmax = kwargs.pop('xmax', None)\n ymin = kwargs.pop('ymin', None)\n ymax = kwargs.pop('ymax', None)\n gx = kwargs.pop('gx', None)\n gy = kwargs.pop('gy', None)\n nx = kwargs.pop('nx', None)\n ny = kwargs.pop('ny', None)\n if None not in [xarr, yarr]:\n x_grid = xr.DataArray(xarr * unit2deg, dims='grid')\n y_grid = xr.DataArray(yarr * unit2deg, dims='grid')\n else:\n if None not in [xmin, xmax, ymin, ymax]:\n xmin = xmin * unit2deg\n xmax = xmax * unit2deg\n ymin = ymin * unit2deg\n ymax = ymax * unit2deg\n else:\n xmin = array.x.min()\n xmax = array.x.max()\n ymin = array.y.min()\n ymax = array.y.max()\n logger.info('xmin xmax ymin ymax')\n logger.info('{} {} {} {}'.format(xmin, xmax, ymin, ymax))\n\n if None not in [gx, gy]:\n gx = gx * unit2deg\n gy = gy * unit2deg\n logger.info('xc yc gx gy')\n logger.info('{} {} {} {}'.format(xc, yc, gx, gy))\n\n gxmin = np.floor((xmin - xc) / gx)\n gxmax = np.ceil((xmax - xc) / gx)\n gymin = np.floor((ymin - yc) / gy)\n gymax = np.ceil((ymax - yc) / gy)\n xmin = gxmin * gx + xc\n xmax = gxmax * gx + xc\n ymin = gymin * gy + yc\n ymax = gymax * gy + yc\n\n x_grid = xr.DataArray(np.arange(xmin, xmax+gx, gx), dims='grid')\n y_grid = 
xr.DataArray(np.arange(ymin, ymax+gy, gy), dims='grid')\n elif None not in [nx, ny]:\n logger.info('nx ny')\n logger.info('{} {}'.format(nx, ny))\n ### nx/ny does not support xc/yc\n xc = 0\n yc = 0\n\n x_grid = xr.DataArray(np.linspace(xmin, xmax, nx), dims='grid')\n y_grid = xr.DataArray(np.linspace(ymin, ymax, ny), dims='grid')\n else:\n raise KeyError('Arguments are wrong.')\n\n # reverse the direction of x when coordsys == 'RADEC'\n if array.coordsys == 'RADEC':\n x_grid = x_grid[::-1]\n\n # compute gridding\n nx, ny, nch = len(x_grid), len(y_grid), len(array.ch)\n i = np.abs(array.x - x_grid).argmin('grid').compute()\n j = np.abs(array.y - y_grid).argmin('grid').compute()\n index = i + nx*j\n\n array.coords.update({'index': index})\n groupedarray = array.groupby('index')\n groupedones = dc.ones_like(array).groupby('index')\n\n gridarray = groupedarray.mean('t')\n stdarray = groupedarray.std('t')\n numarray = groupedones.sum('t')\n\n logger.info('Gridding started.')\n gridarray = gridarray.compute()\n noisearray = (stdarray / numarray**0.5).compute()\n logger.info('Gridding finished.')\n\n # create cube\n mask = gridarray.index.values\n\n temp = np.full([ny*nx, nch], np.nan)\n temp[mask] = gridarray.values\n data = temp.reshape((ny, nx, nch)).swapaxes(0, 1)\n\n temp = np.full([ny*nx, nch], np.nan)\n temp[mask] = noisearray.values\n noise = temp.reshape((ny, nx, nch)).swapaxes(0, 1)\n\n xcoords = {'x': x_grid.values}\n ycoords = {'y': y_grid.values}\n chcoords = {'masterid': array.masterid.values,\n 'kidid': array.kidid.values,\n 'kidfq': array.kidfq.values,\n 'kidtp': array.kidtp.values}\n scalarcoords = {'coordsys': array.coordsys.values,\n 'datatype': array.datatype.values,\n 'xref': array.xref.values,\n 'yref': array.yref.values}\n datacoords = {'noise': noise}\n\n return dc.cube(data, xcoords=xcoords, ycoords=ycoords, chcoords=chcoords,\n scalarcoords=scalarcoords, datacoords=datacoords)", "response": "Convert a decode array to decode cube."} {"SOURCE": 
"codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef makecontinuum(cube, **kwargs):\n ### pick up kwargs\n inchs = kwargs.pop('inchs', None)\n exchs = kwargs.pop('exchs', None)\n weight = kwargs.pop('weight', None)\n\n if (inchs is not None) or (exchs is not None):\n raise KeyError('Inchs and exchs are no longer supported. Use weight instead.')\n\n # if inchs is not None:\n # logger.info('inchs')\n # logger.info('{}'.format(inchs))\n # subcube = cube[:, :, inchs]\n # else:\n # mask = np.full(len(cube.ch), True)\n # if exchs is not None:\n # logger.info('exchs')\n # logger.info('{}'.format(exchs))\n # mask[exchs] = False\n # subcube = cube[:, :, mask]\n\n if weight is None:\n weight = 1.\n # else:\n # cont = (subcube * (1 / subcube.noise**2)).sum(dim='ch') / (1 / subcube.noise**2).sum(dim='ch')\n # cont = cont.expand_dims(dim='ch', axis=2)\n cont = (cube * (1 / weight**2)).sum(dim='ch') / (1 / weight**2).sum(dim='ch')\n\n ### define coordinates\n xcoords = {'x': cube.x.values}\n ycoords = {'y': cube.y.values}\n chcoords = {'masterid': np.array([0]), # np.array([int(subcube.masterid.mean(dim='ch'))]),\n 'kidid': np.array([0]), # np.array([int(subcube.kidid.mean(dim='ch'))]),\n 'kidfq': np.array([0]), # np.array([float(subcube.kidfq.mean(dim='ch'))]),\n 'kidtp': np.array([1])} # np.array([1])}\n scalarcoords = {'coordsys': cube.coordsys.values, 'datatype': cube.datatype.values,\n 'xref': cube.xref.values, 'yref': cube.yref.values}\n\n return dc.cube(cont.values, xcoords=xcoords, ycoords=ycoords, chcoords=chcoords,\n scalarcoords=scalarcoords)", "response": "Make a continuum array."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _verify_encoding(self, enc):\n\n enc = PYTHON_ENCODING_NAMES.get(enc, enc)\n try:\n codecs.getencoder(enc)\n encoding = enc\n except LookupError:\n encoding = None\n return encoding", "response": "Verify encoding is okay."} {"SOURCE": "codesearchnet", 
"instruction": "Can you write a function in Python 3 where it\nchecks for UTF8 UTF16 and UTF32 BOMs.", "response": "def has_bom(self, f):\n \"\"\"Check for UTF8, UTF16, and UTF32 BOMs.\"\"\"\n\n content = f.read(4)\n encoding = None\n m = RE_UTF_BOM.match(content)\n if m is not None:\n if m.group(1):\n encoding = 'utf-8-sig'\n elif m.group(2):\n encoding = 'utf-32'\n elif m.group(3):\n encoding = 'utf-32'\n elif m.group(4):\n encoding = 'utf-16'\n elif m.group(5):\n encoding = 'utf-16'\n return encoding"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _utf_strip_bom(self, encoding):\n\n if encoding is None:\n pass\n elif encoding.lower() == 'utf-8':\n encoding = 'utf-8-sig'\n elif encoding.lower().startswith('utf-16'):\n encoding = 'utf-16'\n elif encoding.lower().startswith('utf-32'):\n encoding = 'utf-32'\n return encoding", "response": "Return an encoding that will ignore the BOM."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _detect_buffer_encoding(self, f):\n\n encoding = None\n with contextlib.closing(mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)) as m:\n encoding = self._analyze_file(m)\n return encoding", "response": "Guess by checking BOM and checking _special_encode_check and using memory map."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nanalyze the file and return the content of the object.", "response": "def _analyze_file(self, f):\n \"\"\"Analyze the file.\"\"\"\n\n f.seek(0)\n # Check for BOMs\n if self.CHECK_BOM:\n encoding = self.has_bom(f)\n f.seek(0)\n else:\n util.warn_deprecated(\n \"'CHECK_BOM' attribute is deprecated. 
\"\n \"Please override 'has_bom` function to control or avoid BOM detection.\"\n )\n # Check file extensions\n if encoding is None:\n encoding = self._utf_strip_bom(self.header_check(f.read(1024)))\n f.seek(0)\n if encoding is None:\n encoding = self._utf_strip_bom(self.content_check(f))\n f.seek(0)\n\n return encoding"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nruns on as first in chain.", "response": "def _run_first(self, source_file):\n \"\"\"Run on as first in chain.\"\"\"\n\n self.reset()\n self.current_encoding = self.default_encoding\n encoding = None\n try:\n encoding = self._detect_encoding(source_file)\n content = self.filter(source_file, encoding)\n except UnicodeDecodeError:\n if not encoding or encoding != self.default_encoding:\n content = self.filter(source_file, self.default_encoding)\n else:\n raise\n return content"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nguessing the encoding and decode the content of the file.", "response": "def _guess(self, filename):\n \"\"\"Guess the encoding and decode the content of the file.\"\"\"\n\n encoding = None\n\n file_size = os.path.getsize(filename)\n # If the file is really big, lets just call it binary.\n # We don't have time to let Python chug through a massive file.\n if not self._is_very_large(file_size):\n with open(filename, \"rb\") as f:\n if file_size == 0:\n encoding = 'ascii'\n else:\n encoding = self._detect_buffer_encoding(f)\n if encoding is None:\n raise UnicodeDecodeError('None', b'', 0, 0, 'Unicode cannot be detected.')\n if encoding != BINARY_ENCODE:\n encoding = self._verify_encoding(encoding)\n else: # pragma: no cover\n raise UnicodeDecodeError('None', b'', 0, 0, 'Unicode detection is not applied to very large files!')\n\n return encoding"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nrunning the spellcheck command.", "response": "def run(config, **kwargs):\n 
\"\"\"Run.\"\"\"\n\n names = kwargs.get('names', [])\n groups = kwargs.get('groups', [])\n binary = kwargs.get('binary', '')\n spellchecker = kwargs.get('spellchecker', '')\n verbose = kwargs.get('verbose', 0)\n sources = kwargs.get('sources', [])\n debug = kwargs.get('debug', False)\n\n fail = False\n count = 0\n for results in spellcheck(\n config,\n names=names,\n groups=groups,\n binary=binary,\n checker=spellchecker,\n sources=sources,\n verbose=verbose,\n debug=debug\n ):\n count += 1\n if results.error:\n fail = True\n print('ERROR: %s -- %s' % (results.context, results.error))\n elif results.words:\n fail = True\n print('Misspelled words:\\n<%s> %s' % (results.category, results.context))\n print('-' * 80)\n for word in results.words:\n print(word)\n print('-' * 80)\n print('')\n\n if fail:\n print('!!!Spelling check failed!!!')\n else:\n print('Spelling check passed :)')\n\n return fail"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef deprecated(message): # pragma: no cover\n\n def deprecated_decorator(func):\n \"\"\"Deprecation decorator.\"\"\"\n\n @wraps(func)\n def deprecated_func(*args, **kwargs):\n \"\"\"Display deprecation warning.\"\"\"\n\n warnings.warn(\n \"'{}' is deprecated. 
{}\".format(func.__name__, message),\n category=DeprecationWarning,\n stacklevel=2\n )\n return func(*args, **kwargs)\n return deprecated_func\n return deprecated_decorator", "response": "Decorator to mark a function as deprecated."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_process(cmd):\n\n if sys.platform.startswith('win'):\n startupinfo = subprocess.STARTUPINFO()\n startupinfo.dwFlags |= subprocess.STARTF_USESHOWWINDOW\n process = subprocess.Popen(\n cmd,\n startupinfo=startupinfo,\n stdout=subprocess.PIPE,\n stderr=subprocess.STDOUT,\n stdin=subprocess.PIPE,\n shell=False\n )\n else:\n process = subprocess.Popen(\n cmd,\n stdout=subprocess.PIPE,\n stderr=subprocess.STDOUT,\n stdin=subprocess.PIPE,\n shell=False\n )\n return process", "response": "Get a command process."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_process_output(process, encoding=None):\n\n output = process.communicate()\n returncode = process.returncode\n\n if not encoding:\n try:\n encoding = sys.stdout.encoding\n except Exception:\n encoding = locale.getpreferredencoding()\n\n if returncode != 0:\n raise RuntimeError(\"Runtime Error: %s\" % (output[0].rstrip().decode(encoding, errors='replace')))\n\n return output[0].decode(encoding, errors='replace')", "response": "Get the output from the process."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef call_spellchecker(cmd, input_text=None, encoding=None):\n\n process = get_process(cmd)\n\n # A buffer has been provided\n if input_text is not None:\n for line in input_text.splitlines():\n # Hunspell truncates lines at `0x1fff` (at least on Windows this has been observed)\n # Avoid truncation by chunking the line on white space and inserting a new line to break it.\n offset = 0\n end = len(line)\n while True:\n chunk_end = offset + 0x1fff\n m = None if 
chunk_end >= end else RE_LAST_SPACE_IN_CHUNK.search(line, offset, chunk_end)\n if m:\n chunk_end = m.start(1)\n chunk = line[offset:m.start(1)]\n offset = m.end(1)\n else:\n chunk = line[offset:chunk_end]\n offset = chunk_end\n # Avoid wasted calls to empty strings\n if chunk and not chunk.isspace():\n process.stdin.write(chunk + b'\\n')\n if offset >= end:\n break\n\n return get_process_output(process, encoding)", "response": "Call spell checker with arguments."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngenerates a random python attribute name.", "response": "def random_name_gen(size=6):\n \"\"\"Generate a random python attribute name.\"\"\"\n\n return ''.join(\n [random.choice(string.ascii_uppercase)] +\n [random.choice(string.ascii_uppercase + string.digits) for i in range(size - 1)]\n ) if size > 0 else ''"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nload a YAML file into a dictionary.", "response": "def yaml_load(source, loader=yaml.Loader):\n \"\"\"\n Wrap PyYaml's loader so we can extend it to suit our needs.\n\n Load all strings as Unicode: http://stackoverflow.com/a/2967461/3609487.\n \"\"\"\n\n def construct_yaml_str(self, node):\n \"\"\"Override the default string handling function to always return Unicode objects.\"\"\"\n return self.construct_scalar(node)\n\n class Loader(loader):\n \"\"\"Define a custom loader to leave the global loader unaltered.\"\"\"\n\n # Attach our Unicode constructor to our custom loader ensuring all strings\n # will be Unicode on translation.\n Loader.add_constructor('tag:yaml.org,2002:str', construct_yaml_str)\n\n return yaml.load(source, Loader)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef fix_timedelta_repr(func):\n # We don't need to do anything if we're not on 3.7 or above\n if version < (3, 7):\n return func\n\n def fix_timedelta(match):\n values = match.group(1).split(\", 
\")\n param_repr = \", \".join(\n \"{}={}\".format(param, value)\n for param, value in zip((\"days\", \"seconds\", \"microseconds\"), values)\n if value != \"0\"\n )\n\n # If we have a zero length timedelta it should be represented as\n # timedelta(0), i.e. without named parameters\n if not param_repr:\n param_repr = \"0\"\n\n return \"timedelta({})\".format(param_repr)\n\n func.__doc__ = re.sub(r\"timedelta\\(([^)]+)\\)\", fix_timedelta, func.__doc__)\n return func", "response": "Fix timedelta in docstrings."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nchecking if url is available and returns given tag type", "response": "def _get_tag(self, url, tag, print_warn=True):\n \"\"\"Check if url is available and returns given tag type\n :param url: url to content relative to base url\n :param anchor: anchor type to return\n :param print_warn: if True print warn when url is unavailable\n \"\"\"\n # construct complete url\n complete_url = urljoin(self.base_url, url)\n # check if exists\n if requests.get('http:' + complete_url).ok:\n # construct tag\n if tag == 'script':\n return '<script src=\"{0}\"></script>'.format(complete_url)\n elif tag == 'stylesheet':\n return '<link rel=\"stylesheet\" href=\"{0}\">'.format(complete_url)\n else:\n warnings.warn('Given tag is not valid')\n elif print_warn:\n warnings.warn('Url {0} not valid'.format(complete_url))\n return None"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef include_codemirror(self):\n contents = []\n # base\n js = self._get_tag('codemirror.js', 'script')\n css = self._get_tag('codemirror.css', 'stylesheet')\n if js and css:\n contents.append(js)\n contents.append(css)\n # languages\n for language in self.languages:\n url = self.__class__.LANGUAGE_REL_URL.format(language)\n js = self._get_tag(url, 'script')\n if js:\n contents.append(js)\n # theme\n if self.theme:\n url = self.__class__.THEME_REL_URL.format(self.theme)\n css = self._get_tag(url, 'stylesheet')\n if css:\n contents.append(css)\n # addons\n 
if self.addons:\n # add to list\n for addon_type, name in self.addons:\n url = self.__class__.ADDON_REL_URL.format(addon_type, name)\n js = self._get_tag(url, 'script')\n if js:\n contents.append(js)\n # if there is a css file relative to this addon\n url = self.__class__.ADDON_CSS_REL_URL.format(addon_type, name)\n css = self._get_tag(url, 'stylesheet', False)\n if css:\n contents.append(css)\n # return html\n return Markup('\\n'.join(contents))", "response": "Include resources in pages in codemirror. js and codemirror. css."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef init_app(self, app):\n if not hasattr(app, 'extensions'):\n app.extensions = {}\n app.extensions['codemirror'] = CodeMirrorHeaders(app.config)\n app.context_processor(self.context_processor)", "response": "Register CodeMirror as a Flask extension\n "} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ndump the class data in the format of a. 
netrc file.", "response": "def format(self):\n \"\"\"Dump the class data in the format of a .netrc file.\"\"\"\n self._netrc.hosts = dedictify_machines(self.machines)\n rep = \"\"\n for host in self._netrc.hosts.keys():\n attrs = self._netrc.hosts[host]\n rep += \"machine {host}\\n\\tlogin {attrs[0]}\\n\".format(host=host,\n attrs=attrs)\n if attrs[1]:\n rep += \"\\taccount {attrs[1]}\\n\".format(attrs=attrs)\n rep += \"\\tpassword {attrs[2]}\\n\".format(attrs=attrs)\n for macro in self._netrc.macros.keys():\n rep += \"macdef {macro}\\n\".format(macro=macro)\n for line in self._netrc.macros[macro]:\n rep += line\n rep += \"\\n\"\n return rep"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef neo_get_scripthash_from_address(address):\n hash_bytes = binascii.hexlify(base58.b58decode_check(address))\n return reverse_hex(hash_bytes[2:].decode('utf-8'))", "response": "Convert a Public Address String to a ScriptHash String."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef num2hexstring(number, size=1, little_endian=False):\n # if (type(number) != = 'number') throw new Error('num must be numeric')\n # if (num < 0) throw new RangeError('num is unsigned (>= 0)')\n # if (size % 1 !== 0) throw new Error('size must be a whole integer')\n # if (!Number.isSafeInteger(num)) throw new RangeError(`num (${num}) must be a safe integer`)\n size = size * 2\n hexstring = hex(number)[2:]\n if len(hexstring) % size != 0:\n hexstring = ('0' * size + hexstring)[len(hexstring):]\n if little_endian:\n hexstring = reverse_hex(hexstring)\n return hexstring", "response": "Converts a number to a big endian hexstring of a suitable size optionally little endian"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef num2varint(num):\n # if (typeof num !== 'number') throw new Error('VarInt must be 
numeric')\n # if (num < 0) throw new RangeError('VarInts are unsigned (> 0)')\n # if (!Number.isSafeInteger(num)) throw new RangeError('VarInt must be a safe integer')\n if num < 0xfd:\n return num2hexstring(num)\n elif num <= 0xffff:\n # uint16\n return 'fd' + num2hexstring(number=num, size=2, little_endian=True)\n elif num <= 0xffffffff:\n # uint32\n return 'fe' + num2hexstring(number=num, size=4, little_endian=True)\n else:\n # uint64\n return 'ff' + num2hexstring(number=num, size=8, little_endian=True)", "response": "Convert a number to a variable length Int. Used for array length header."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nperforms a GET request on the object store", "response": "def get(self, path, params=None):\n \"\"\"Perform GET request\"\"\"\n r = requests.get(url=self.url + path, params=params, timeout=self.timeout)\n r.raise_for_status()\n return r.json()"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef post(self, path, data=None, json_data=None, params=None):\n r = requests.post(url=self.url + path, data=data, json=json_data, params=params, timeout=self.timeout)\n try:\n r.raise_for_status()\n except requests.exceptions.HTTPError:\n raise SwitcheoApiException(r.json()['error_code'], r.json()['error_message'], r.json()['error'])\n return r.json()", "response": "Perform a POST request to the Switcheo API."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nchecks if tag is an element we should break on.", "response": "def is_break_tag(self, el):\n \"\"\"Check if tag is an element we should break on.\"\"\"\n\n name = sv.util.lower(el.name) if self.type != 'xhtml' else el.name\n return name in self.break_tags or name in self.user_break_tags"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef construct_selector(self, el, attr=''):\n\n 
selector = deque()\n        ancestor = el\n        while ancestor and ancestor.parent:\n            if ancestor is not el:\n                selector.appendleft(ancestor.name)\n            else:\n                tag = ancestor.name\n                prefix = ancestor.prefix\n                classes = self.get_classes(ancestor)\n                tag_id = ancestor.attrs.get('id', '').strip()\n                sel = ''\n                if prefix:\n                    sel += prefix + '|'\n                sel += tag\n                if tag_id:\n                    sel += '#' + tag_id\n                if classes:\n                    sel += '.' + '.'.join(classes)\n                if attr:\n                    sel += '[%s]' % attr\n                selector.appendleft(sel)\n            ancestor = ancestor.parent\n        return '>'.join(selector)", "response": "Construct a selector for context."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncheck if the file has BOM.", "response": "def has_bom(self, filestream):\n        \"\"\"Check if has BOM.\"\"\"\n\n        content = filestream.read(4)\n        if content == b'PK\\x03\\x04':\n            # Zip file found.\n            # Return `BINARY_ENCODE` as content is binary type,\n            # but don't return None which means we don't know what we have.\n            return filters.BINARY_ENCODE\n        # We only handle zip files, so if we are checking this, we've already failed.\n        return None"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_context(self, filename):\n\n        if self.type == 'pptx':\n            context = '{}: '.format(RE_SLIDE.search(filename).group(1))\n        elif self.type == 'docx':\n            context = '{}: '.format(RE_DOCS.match(filename).group(1))\n        else:\n            context = ''\n        return context", "response": "Get the context of the current file."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef read_protofeed(path):\n    path = Path(path)\n\n    service_windows = pd.read_csv(\n      path/'service_windows.csv')\n\n    meta = pd.read_csv(path/'meta.csv',\n      dtype={'start_date': str, 'end_date': str})\n\n    shapes = gpd.read_file(str(path/'shapes.geojson'), driver='GeoJSON')\n\n    if (path/'stops.csv').exists():\n        stops = (\n            pd.read_csv(path/'stops.csv', dtype={\n                'stop_id': str,\n                'stop_code': 
str,\n 'zone_id': str,\n 'location_type': int,\n 'parent_station': str,\n 'stop_timezone': str,\n 'wheelchair_boarding': int,\n })\n .drop_duplicates(subset=['stop_lon', 'stop_lat'])\n .dropna(subset=['stop_lon', 'stop_lat'], how='any')\n )\n else:\n stops = None\n\n frequencies = pd.read_csv(path/'frequencies.csv', dtype={\n 'route_short_name': str,\n 'service_window_id': str,\n 'shape_id': str,\n 'direction': int,\n 'frequency': int,\n })\n\n pfeed = ProtoFeed(frequencies, meta, service_windows, shapes, stops)\n\n # Validate\n v = vd.validate(pfeed)\n if 'error' in v.type.values:\n raise ValueError(\n \"Invalid ProtoFeed files:\\n\\n\" + v.to_string(justify='left'))\n\n return pfeed", "response": "Reads the data files in the given directory path and builds a ProtoFeed from them."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn a copy of this ProtoFeed with all the same attributes.", "response": "def copy(self):\n \"\"\"\n Return a copy of this ProtoFeed, that is, a feed with all the\n same attributes.\n \"\"\"\n other = ProtoFeed()\n for key in cs.PROTOFEED_ATTRS:\n value = getattr(self, key)\n if isinstance(value, pd.DataFrame):\n # Pandas copy DataFrame\n value = value.copy()\n setattr(other, key, value)\n\n return other"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef mapped_jsontree_class(mapping):\n mapper = mapping\n if not callable(mapping):\n if not isinstance(mapping, collections.Mapping):\n raise TypeError(\"Argument mapping is not collable or an instance \"\n \"of collections.Mapping\")\n mapper = lambda name: mapping.get(name, name)\n class mapped_jsontree(collections.defaultdict):\n def __init__(self, *args, **kwdargs):\n super(mapped_jsontree, self).__init__(mapped_jsontree, *args, **kwdargs)\n def __getattribute__(self, name):\n mapped_name = mapper(name)\n if not isinstance(mapped_name, basestring):\n return self[mapped_name]\n try:\n return 
object.__getattribute__(self, mapped_name)\n            except AttributeError:\n                return self[mapped_name]\n        def __setattr__(self, name, value):\n            mapped_name = mapper(name)\n            self[mapped_name] = value\n            return value\n    return mapped_jsontree", "response": "Return a class which is a jsontree but with a supplied attribute name mapping."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ncloning an object by first serializing out and then loading it back in.", "response": "def clone(root, jsontreecls=jsontree, datetimeencoder=_datetimeencoder,\n          datetimedecoder=_datetimedecoder):\n    \"\"\"Clone an object by first serializing out and then loading it back in.\n    \"\"\"\n    return json.loads(json.dumps(root, cls=JSONTreeEncoder,\n                                 datetimeencoder=datetimeencoder),\n                      cls=JSONTreeDecoder, jsontreecls=jsontreecls,\n                      datetimedecoder=datetimedecoder)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef loads(s, encoding=None, cls=JSONTreeDecoder, object_hook=None,\n          parse_float=None, parse_int=None, parse_constant=None,\n          object_pairs_hook=None, **kargs):\n    \"\"\"JSON load from string function that defaults the loading class to be\n    JSONTreeDecoder\n    \"\"\"\n    return json.loads(s, encoding, cls, object_hook,\n                      parse_float, parse_int, parse_constant,\n                      object_pairs_hook, **kargs)", "response": "JSON load from string function that defaults the loading class to be\n    JSONTreeDecoder"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ninitialize the object variables that are only set once.", "response": "def setup(self):\n        \"\"\"Setup.\"\"\"\n\n        self.comments = self.config['comments']\n        self.docstrings = self.config['docstrings']\n        self.strings = self.config['strings']\n        self.group_comments = self.config['group_comments']\n        self.string_types, self.wild_string_types = self.eval_string_type(self.config['string_types'])\n        self.decode_escapes = self.config['decode_escapes']"} {"SOURCE": 
"codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef replace_unicode(self, m):\n\n groups = m.groupdict()\n esc = m.group(0)\n if groups.get('fesc'):\n value = m.group(0)\n elif groups.get('format'):\n value = ' '\n elif groups.get('special'):\n value = BACK_SLASH_TRANSLATION[esc]\n elif groups.get('char'):\n try:\n value = chr(int(esc[2:], 16))\n except Exception:\n value = esc\n elif groups.get('oct'):\n value = chr(int(esc[1:], 8))\n elif groups.get('name'):\n try:\n value = unicodedata.lookup(esc[3:-1])\n except Exception:\n value = esc\n return value.replace('\\x00', '\\n')", "response": "Replace escapes with unicode."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _filter(self, text, context, encoding):\n\n docstrings = []\n strings = []\n comments = []\n prev_token_type = tokenize.NEWLINE\n indent = ''\n name = None\n stack = [(context, 0, self.MODULE)]\n last_comment = False\n possible_fmt_str = None\n\n src = io.StringIO(text)\n\n for token in tokenizer(src.readline):\n token_type = token[0]\n value = token[1]\n line = str(token[2][0])\n line_num = token[2][0]\n\n value = token[1]\n line = str(token[2][0])\n\n # Track function and class ancestry\n if token_type == tokenize.NAME:\n possible_fmt_str = None\n if value in ('def', 'class'):\n name = value\n elif name:\n parent = stack[-1][2]\n prefix = ''\n if parent != self.MODULE:\n prefix = '.' 
if parent == self.CLASS else ', '\n if name == 'class':\n stack.append(('%s%s' % (prefix, value), len(indent), self.CLASS))\n elif name == 'def':\n stack.append(('%s%s()' % (prefix, value), len(indent), self.FUNCTION))\n name = None\n elif not F_SUPPORT and value in FMT_STR:\n possible_fmt_str = (prev_token_type, value)\n elif token_type != tokenize.STRING:\n possible_fmt_str = None\n\n if token_type == tokenize.COMMENT and self.comments:\n # Capture comments\n if (\n self.group_comments and\n last_comment and\n comments and (comments[-1][2] + 1) == line_num\n ):\n # Group multiple consecutive comments\n comments[-1][0] += '\\n' + value[1:]\n comments[-1][2] = line_num\n else:\n if len(stack) > 1:\n loc = \"%s(%s): %s\" % (stack[0][0], line, ''.join([crumb[0] for crumb in stack[1:]]))\n else:\n loc = \"%s(%s)\" % (stack[0][0], line)\n comments.append([value[1:], loc, line_num])\n if token_type == tokenize.STRING:\n # Capture docstrings.\n # If we captured an `INDENT` or `NEWLINE` previously we probably have a docstring.\n # `NL` means end of line, but not the end of the Python code line (line continuation).\n if (\n (prev_token_type in PREV_DOC_TOKENS) or\n (possible_fmt_str and possible_fmt_str[0] in PREV_DOC_TOKENS)\n ):\n if self.docstrings:\n value = value.strip()\n if possible_fmt_str and value.startswith((\"'\", \"\\\"\")):\n value = possible_fmt_str[1] + value\n string, is_bytes = self.process_strings(value, docstrings=True)\n if string:\n loc = \"%s(%s): %s\" % (stack[0][0], line, ''.join([crumb[0] for crumb in stack[1:]]))\n docstrings.append(\n filters.SourceText(string, loc, 'utf-8', 'py-docstring')\n )\n elif self.strings:\n value = value.strip()\n if possible_fmt_str and value.startswith((\"'\", \"\\\"\")):\n value = possible_fmt_str[1] + value\n string, is_bytes = self.process_strings(value)\n if string:\n loc = \"%s(%s): %s\" % (stack[0][0], line, ''.join([crumb[0] for crumb in stack[1:]]))\n strings.append(filters.SourceText(string, loc, 'utf-8', 
'py-string'))\n\n if token_type == tokenize.INDENT:\n indent = value\n elif token_type == tokenize.DEDENT:\n indent = indent[:-4]\n if len(stack) > 1 and len(indent) <= stack[-1][1]:\n stack.pop()\n\n # We purposefully avoid storing comments as comments can come before docstrings,\n # and that can mess up our logic. So if the token is a comment we won't track it,\n # and comments are always followed with `NL` so we ignore that as well.\n # We only care that docstrings are preceded by `NEWLINE`.\n if token_type != tokenize.COMMENT and not (last_comment and token_type == tokenize.NL):\n prev_token_type = token_type\n last_comment = False\n else:\n last_comment = True\n\n final_comments = []\n for comment in comments:\n final_comments.append(filters.SourceText(textwrap.dedent(comment[0]), comment[1], encoding, 'py-comment'))\n\n return docstrings + final_comments + strings", "response": "Filter the Python docstrings and strings."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef find_slots(cls):\n\n slots = set()\n for c in cls.__mro__:\n cslots = getattr(c, \"__slots__\", tuple())\n\n if not cslots:\n continue\n elif isinstance(cslots, (bstr, ustr)):\n cslots = (cslots,)\n\n slots.update(cslots)\n\n return slots", "response": "Return a set of all slots for a given class and its parents"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ncheck if an object is an instance of date or datetime.", "response": "def _is_valid_date(obj, accept_none=True):\n \"\"\"\n Check if an object is an instance of, or a subclass deriving from, a\n ``date``. 
However, it does not consider ``datetime`` or subclasses thereof\n as valid dates.\n\n :param obj: Object to test as date.\n :param accept_none: If True None is considered as a valid date object.\n \"\"\"\n\n if accept_none and obj is None:\n return True\n return isinstance(obj, date) and not isinstance(obj, datetime)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns an empty set.", "response": "def empty(cls):\n \"\"\"\n Returns an empty set. An empty set is unbounded and only contain the\n empty set.\n\n >>> intrange.empty() in intrange.empty()\n True\n\n It is unbounded but the boundaries are not infinite. Its boundaries are\n returned as ``None``. Every set contains the empty set.\n \"\"\"\n\n self = cls.__new__(cls)\n self._range = _empty_internal_range\n return self"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef replace(self, *args, **kwargs):\n\n replacements = {\n \"lower\" : self.lower,\n \"upper\" : self.upper,\n \"lower_inc\" : self.lower_inc,\n \"upper_inc\" : self.upper_inc\n }\n replacements.update(\n dict(zip((\"lower\", \"upper\", \"lower_inc\", \"upper_inc\"), args)))\n replacements.update(kwargs)\n\n return self.__class__(**replacements)", "response": "Returns a new instance of this instance with the given arguments replaced."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef contains(self, other):\n\n if self.is_valid_range(other):\n if not self:\n return not other\n elif not other or other.startsafter(self) and other.endsbefore(self):\n return True\n else:\n return False\n elif self.is_valid_scalar(other):\n # If the lower bounary is not unbound we can safely perform the\n # comparison. 
Otherwise we'll try to compare a scalar to None, which\n # is bad\n is_within_lower = True\n if not self.lower_inf:\n lower_cmp = operator.le if self.lower_inc else operator.lt\n is_within_lower = lower_cmp(self.lower, other)\n\n # If the upper bounary is not unbound we can safely perform the\n # comparison. Otherwise we'll try to compare a scalar to None, which\n # is bad\n is_within_upper = True\n if not self.upper_inf:\n upper_cmp = operator.ge if self.upper_inc else operator.gt\n is_within_upper = upper_cmp(self.upper, other)\n\n return is_within_lower and is_within_upper\n else:\n raise TypeError(\n \"Unsupported type to test for inclusion '{0.__class__.__name__}'\".format(\n other))", "response": "Return True if this object contains other."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ntests if this range is completely within the given range.", "response": "def within(self, other):\n \"\"\"\n Tests if this range is within `other`.\n\n >>> a = intrange(1, 10)\n >>> b = intrange(3, 8)\n >>> a.contains(b)\n True\n >>> b.within(a)\n True\n\n This is the same as the ``self <@ other`` in PostgreSQL. One difference\n however is that unlike PostgreSQL ``self`` in this can't be a scalar\n value.\n\n :param other: Range to test against.\n :return: ``True`` if this range is completely within the given range,\n otherwise ``False``.\n :raises TypeError: If given range is of the wrong type.\n\n .. 
seealso::\n This method is the inverse of :meth:`~spans.types.Range.contains`\n \"\"\"\n\n if not self.is_valid_range(other):\n raise TypeError(\n \"Unsupported type to test for inclusion '{0.__class__.__name__}'\".format(\n other))\n return other.contains(self)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning True if both ranges share any points.", "response": "def overlap(self, other):\n \"\"\"\n Returns True if both ranges share any points.\n\n >>> intrange(1, 10).overlap(intrange(5, 15))\n True\n >>> intrange(1, 5).overlap(intrange(5, 10))\n False\n\n This is the same as the ``&&`` operator for two ranges in PostgreSQL.\n\n :param other: Range to test against.\n :return: ``True`` if ranges overlap, otherwise ``False``.\n :raises TypeError: If `other` is of another type than this range.\n\n .. seealso::\n If you need to know which part that overlapped, consider using\n :meth:`~spans.types.Range.intersection`.\n \"\"\"\n\n # Special case for empty ranges\n if not self or not other:\n return False\n\n if self < other:\n a, b = self, other\n else:\n a, b = other, self\n\n # We need to explicitly handle unbounded ranges since a.upper and b.lower\n # make the intervals seem adjacent even though they are not\n if a.upper_inf or b.lower_inf:\n return True\n return a.upper > b.lower or a.upper == b.lower and a.upper_inc and b.lower_inc"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning True if this range is adjacent with other.", "response": "def adjacent(self, other):\n \"\"\"\n Returns True if ranges are directly next to each other but does not\n overlap.\n\n >>> intrange(1, 5).adjacent(intrange(5, 10))\n True\n >>> intrange(1, 5).adjacent(intrange(10, 15))\n False\n\n The empty set is not adjacent to any set.\n\n This is the same as the ``-|-`` operator for two ranges in PostgreSQL.\n\n :param other: Range to test against.\n :return: ``True`` if this range is adjacent with `other`, otherwise\n 
``False``.\n :raises TypeError: If given argument is of invalid type\n \"\"\"\n\n if not self.is_valid_range(other):\n raise TypeError(\n \"Unsupported type to test for inclusion '{0.__class__.__name__}'\".format(\n other))\n # Must return False if either is an empty set\n elif not self or not other:\n return False\n return (\n (self.lower == other.upper and self.lower_inc != other.upper_inc) or\n (self.upper == other.lower and self.upper_inc != other.lower_inc))"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning the union of this and other.", "response": "def union(self, other):\n \"\"\"\n Merges this range with a given range.\n\n >>> intrange(1, 5).union(intrange(5, 10))\n intrange([1,10))\n >>> intrange(1, 10).union(intrange(5, 15))\n intrange([1,15))\n\n Two ranges can not be merged if the resulting range would be split in\n two. This happens when the two sets are neither adjacent nor overlaps.\n\n >>> intrange(1, 5).union(intrange(10, 15))\n Traceback (most recent call last):\n File \"\", line 1, in \n ValueError: Ranges must be either adjacent or overlapping\n\n This does not modify the range in place.\n\n This is the same as the ``+`` operator for two ranges in PostgreSQL.\n\n :param other: Range to merge with.\n :return: A new range that is the union of this and `other`.\n :raises ValueError: If `other` can not be merged with this range.\n \"\"\"\n\n if not self.is_valid_range(other):\n msg = \"Unsupported type to test for union '{.__class__.__name__}'\"\n raise TypeError(msg.format(other))\n\n # Optimize empty ranges\n if not self:\n return other\n elif not other:\n return self\n\n # Order ranges to simplify checks\n if self < other:\n a, b = self, other\n else:\n a, b = other, self\n\n if (a.upper < b.lower or a.upper == b.lower and not\n a.upper_inc and not b.lower_inc) and not a.adjacent(b):\n raise ValueError(\"Ranges must be either adjacent or overlapping\")\n\n # a.lower is guaranteed to be the lower bound, but 
either a.upper or\n        # b.upper can be the upper bound\n        if a.upper == b.upper:\n            upper = a.upper\n            upper_inc = a.upper_inc or b.upper_inc\n        elif a.upper < b.upper:\n            upper = b.upper\n            upper_inc = b.upper_inc\n        else:\n            upper = a.upper\n            upper_inc = a.upper_inc\n\n        return self.__class__(a.lower, upper, a.lower_inc, upper_inc)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef difference(self, other):\n\n        if not self.is_valid_range(other):\n            msg = \"Unsupported type to test for difference '{.__class__.__name__}'\"\n            raise TypeError(msg.format(other))\n\n        # Consider empty ranges or no overlap\n        if not self or not other or not self.overlap(other):\n            return self\n        # If self is contained within other, the result is empty\n        elif self in other:\n            return self.empty()\n        elif other in self and not (self.startswith(other) or self.endswith(other)):\n            raise ValueError(\"Other range must not be within this range\")\n        elif self.endsbefore(other):\n            return self.replace(upper=other.lower, upper_inc=not other.lower_inc)\n        elif self.startsafter(other):\n            return self.replace(lower=other.upper, lower_inc=not other.upper_inc)\n        else:\n            return self.empty()", "response": "Compute the difference between this range and another range."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning a new range containing all points shared by both ranges.", "response": "def intersection(self, other):\n        \"\"\"\n        Returns a new range containing all points shared by both ranges. 
If no\n points are shared an empty range is returned.\n\n >>> intrange(1, 5).intersection(intrange(1, 10))\n intrange([1,5))\n >>> intrange(1, 5).intersection(intrange(5, 10))\n intrange(empty)\n >>> intrange(1, 10).intersection(intrange(5, 10))\n intrange([5,10))\n\n This is the same as the ``+`` operator for two ranges in PostgreSQL.\n\n :param other: Range to interect with.\n :return: A new range that is the intersection between this and `other`.\n \"\"\"\n\n if not self.is_valid_range(other):\n msg = \"Unsupported type to test for intersection '{.__class__.__name__}'\"\n raise TypeError(msg.format(other))\n\n # Handle ranges not intersecting\n if not self or not other or not self.overlap(other):\n return self.empty()\n\n lower_end_span = self if self.startsafter(other) else other\n upper_end_span = self if self.endsbefore(other) else other\n\n return lower_end_span.replace(\n upper=upper_end_span.upper,\n upper_inc=upper_end_span.upper_inc)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ntest if this range starts with other.", "response": "def startswith(self, other):\n \"\"\"\n Test if this range starts with `other`. 
`other` may be either range or\n scalar.\n\n >>> intrange(1, 5).startswith(1)\n True\n >>> intrange(1, 5).startswith(intrange(1, 10))\n True\n\n :param other: Range or scalar to test.\n :return: ``True`` if this range starts with `other`, otherwise ``False``\n :raises TypeError: If `other` is of the wrong type.\n \"\"\"\n\n if self.is_valid_range(other):\n if self.lower_inc == other.lower_inc:\n return self.lower == other.lower\n else:\n return False\n elif self.is_valid_scalar(other):\n if self.lower_inc:\n return self.lower == other\n else:\n return False\n else:\n raise TypeError(\n \"Unsupported type to test for starts with '{}'\".format(\n other.__class__.__name__))"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ntest if this range ends with other.", "response": "def endswith(self, other):\n \"\"\"\n Test if this range ends with `other`. `other` may be either range or\n scalar.\n\n >>> intrange(1, 5).endswith(4)\n True\n >>> intrange(1, 10).endswith(intrange(5, 10))\n True\n\n :param other: Range or scalar to test.\n :return: ``True`` if this range ends with `other`, otherwise ``False``\n :raises TypeError: If `other` is of the wrong type.\n \"\"\"\n\n if self.is_valid_range(other):\n if self.upper_inc == other.upper_inc:\n return self.upper == other.upper\n else:\n return False\n elif self.is_valid_scalar(other):\n if self.upper_inc:\n return self.upper == other\n else:\n return False\n else:\n raise TypeError(\n \"Unsupported type to test for ends with '{}'\".format(\n other.__class__.__name__))"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef startsafter(self, other):\n\n if self.is_valid_range(other):\n if self.lower == other.lower:\n return other.lower_inc or not self.lower_inc\n elif self.lower_inf:\n return False\n elif other.lower_inf:\n return True\n else:\n return self.lower > other.lower\n elif self.is_valid_scalar(other):\n return self.lower >= 
other\n else:\n raise TypeError(\n \"Unsupported type to test for starts after '{}'\".format(\n other.__class__.__name__))", "response": "Test if this range starts after other."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ntests if this range ends before other.", "response": "def endsbefore(self, other):\n \"\"\"\n Test if this range ends before `other`. `other` may be either range or\n scalar. This only takes the upper end of the ranges into consideration.\n If the scalar or the upper end of the given range is less than or equal\n to this range's upper end, ``True`` is returned.\n\n >>> intrange(1, 5).endsbefore(5)\n True\n >>> intrange(1, 5).endsbefore(intrange(1, 5))\n True\n\n :param other: Range or scalar to test.\n :return: ``True`` if this range ends before `other`, otherwise ``False``\n :raises TypeError: If `other` is of the wrong type.\n \"\"\"\n\n if self.is_valid_range(other):\n if self.upper == other.upper:\n return not self.upper_inc or other.upper_inc\n elif self.upper_inf:\n return False\n elif other.upper_inf:\n return True\n else:\n return self.upper <= other.upper\n elif self.is_valid_scalar(other):\n return self.upper <= other\n else:\n raise TypeError(\n \"Unsupported type to test for ends before '{}'\".format(\n other.__class__.__name__))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef left_of(self, other):\n\n if not self.is_valid_range(other):\n msg = (\n \"Left of is not supported for '{}', provide a proper range \"\n \"class\").format(other.__class__.__name__)\n raise TypeError(msg)\n\n return self < other and not self.overlap(other)", "response": "Test if this range is strictly left of other."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef offset(self, offset):\n\n # If range is empty it can't be offset\n if not self:\n return self\n\n offset_type = self.type 
if self.offset_type is None else self.offset_type\n\n if offset is not None and not isinstance(offset, offset_type):\n raise TypeError((\n \"Invalid type for offset '{offset_type.__name__}'\"\n \" expected '{expected_type.__name__}'\").format(\n expected_type=offset_type,\n offset_type=offset.__class__))\n\n lower = None if self.lower is None else self.lower + offset\n upper = None if self.upper is None else self.upper + offset\n\n return self.replace(lower=lower, upper=upper)", "response": "Shift the range to the left or right with the given offset."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ncreate a day long daterange from a given date and optional period.", "response": "def from_date(cls, date, period=None):\n \"\"\"\n Create a day long daterange from for the given date.\n\n >>> daterange.from_date(date(2000, 1, 1))\n daterange([datetime.date(2000, 1, 1),datetime.date(2000, 1, 2)))\n\n :param date: A date to convert.\n :param period: The period to normalize date to. A period may be one of:\n ``day`` (default), ``week``, ``american_week``,\n ``month``, ``quarter`` or ``year``.\n :return: A new range that contains the given date.\n\n\n .. seealso::\n\n There are convenience methods for most period types:\n :meth:`~spans.types.daterange.from_week`,\n :meth:`~spans.types.daterange.from_month`,\n :meth:`~spans.types.daterange.from_quarter` and\n :meth:`~spans.types.daterange.from_year`.\n\n :class:`~spans.types.PeriodRange` has the same interface but is\n period aware. This means it is possible to get things like next week\n or month.\n\n .. 
versionchanged:: 0.4.0\n Added the period parameter.\n \"\"\"\n\n if period is None or period == \"day\":\n return cls(date, date, upper_inc=True)\n elif period == \"week\":\n start = date - timedelta(date.weekday())\n return cls(start, start + timedelta(7))\n elif period == \"american_week\":\n start = date - timedelta((date.weekday() + 1) % 7)\n return cls(start, start + timedelta(7))\n elif period == \"month\":\n start = date.replace(day=1)\n return cls(start, (start + timedelta(31)).replace(day=1))\n elif period == \"quarter\":\n start = date.replace(month=(date.month - 1) // 3 * 3 + 1, day=1)\n return cls(start, (start + timedelta(93)).replace(day=1))\n elif period == \"year\":\n start = date.replace(month=1, day=1)\n return cls(start, (start + timedelta(366)).replace(day=1))\n else:\n raise ValueError(\"Unexpected period, got {!r}\".format(period))"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ncreates a daterange based on a year and an ISO week number.", "response": "def from_week(cls, year, iso_week):\n \"\"\"\n Create ``daterange`` based on a year and an ISO week\n\n :param year: Year as an integer\n :param iso_week: ISO week number\n :return: A new ``daterange`` for the given week\n\n .. 
versionadded:: 0.4.0\n \"\"\"\n\n first_day = date_from_iso_week(year, iso_week)\n return cls.from_date(first_day, period=\"week\")"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef from_month(cls, year, month):\n\n first_day = date(year, month, 1)\n return cls.from_date(first_day, period=\"month\")", "response": "Create a daterange based on a year and month."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ncreating a daterange from a year and quarter.", "response": "def from_quarter(cls, year, quarter):\n \"\"\"\n Create ``daterange`` based on a year and quarter.\n\n A quarter is considered to be:\n\n - January through March (Q1),\n - April through June (Q2),\n - July through September (Q3) or,\n - October through December (Q4)\n\n :param year: Year as an integer\n :param quarter: Quarter as an integer between 1 and 4\n :return: A new ``daterange`` for the given quarter\n\n .. versionadded:: 0.4.0\n \"\"\"\n\n quarter_months = {\n 1: 1,\n 2: 4,\n 3: 7,\n 4: 10,\n }\n\n if quarter not in quarter_months:\n error_msg = (\n \"quarter is not a valid quarter. 
Expected a value between 1 \"\n \"and 4 got {!r}\")\n raise ValueError(error_msg.format(quarter))\n\n first_day = date(year, quarter_months[quarter], 1)\n return cls.from_date(first_day, period=\"quarter\")"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef from_year(cls, year):\n\n first_day = date(year, 1, 1)\n return cls.from_date(first_day, period=\"year\")", "response": "Create a daterange based on a year."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns a naive version of the daterange.", "response": "def daterange(self):\n \"\"\"\n This ``PeriodRange`` represented as a naive\n :class:`~spans.types.daterange`.\n \"\"\"\n\n return daterange(\n lower=self.lower,\n upper=self.upper,\n lower_inc=self.lower_inc,\n upper_inc=self.upper_inc)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef offset(self, offset):\n\n span = self\n if offset > 0:\n for i in iter_range(offset):\n span = span.next_period()\n elif offset < 0:\n for i in iter_range(-offset):\n span = span.prev_period()\n return span", "response": "Offset the date range by the given amount of periods."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef prev_period(self):\n\n return self.from_date(self.prev(self.lower), period=self.period)", "response": "Returns a new PeriodRange for the period before this range."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_package_hashes(filename):\n log.debug('Getting package hashes')\n filename = os.path.abspath(filename)\n with open(filename, 'rb') as f:\n data = f.read()\n\n _hash = hashlib.sha256(data).hexdigest()\n log.debug('Hash for file %s: %s', filename, _hash)\n return _hash", "response": "Provides hash of given filename."} {"SOURCE": "codesearchnet", "instruction": "Can you generate 
the documentation for the following Python 3 function\ndef setup(self):\n\n self.blocks = self.config['block_comments']\n self.lines = self.config['line_comments']\n self.group_comments = self.config['group_comments']\n # If the style isn't found, just go with CSS, then use the appropriate prefix.\n self.stylesheets = STYLESHEET_TYPE.get(self.config['stylesheets'].lower(), CSS)\n self.prefix = [k for k, v in STYLESHEET_TYPE.items() if v == SASS][0]\n self.pattern = RE_CSS if self.stylesheets == CSS else RE_SCSS", "response": "Setup the object variables."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nloads a decode array from a DFITS file.", "response": "def loaddfits(fitsname, coordtype='azel', loadtype='temperature', starttime=None, endtime=None,\n pixelids=None, scantypes=None, mode=0, **kwargs):\n \"\"\"Load a decode array from a DFITS file.\n\n Args:\n fitsname (str): Name of DFITS file.\n coordtype (str): Coordinate type included into a decode array.\n 'azel': Azimuth / elevation.\n 'radec': Right ascension / declination.\n loadtype (str): Data unit of xarray.\n 'Tsignal': Temperature [K].\n 'Psignal': Power [W].\n 'amplitude': Amplitude.\n 'phase': Phase.\n 'linphase': Linear phase.\n starttime (int, str or numpy.datetime64): Start time of loaded data.\n It can be specified by the start index (int), the time compatible with numpy.datetime64 (str),\n or numpy.datetime64 (numpy.datetime64). Default is None and it means the data will be loaded\n from the first record.\n endtime (int, str or numpy.datetime64): End time of loaded data.\n It can be specified by the end index (int), the time compatible with numpy.datetime64 (str),\n or numpy.datetime64 (numpy.datetime64). 
Default is None and it means the data will be loaded\n until the last record.\n pixelids (int or list): Under development.\n scantypes (list(str)): Scan types, such as 'GRAD', 'SCAN', 'OFF', 'R', and so on.\n mode (int): Loading mode.\n 0: Relative coordinates with cosine projection (RECOMMENDED).\n 1: Relative coordinates without cosine projection.\n 2: Absolute coordinates.\n kwargs (optional):\n findR (bool): Automatically find R positions.\n ch (int): Representative channel id used for finding R.\n Rth (float): Threshold of R.\n skyth (float): Threshold of sky.\n cutnum (int): The number of points of unused data at the edge.\n still (bool): When it is true, scantypes of on/off are manually assigned.\n period (float): On/off period in second for still data.\n shuttle (bool): For shuttle observations.\n xmin_off (float): Minimum x of off-point data.\n xmax_off (float): Maximum x of off-point data.\n xmin_on (float): Minimum x of on-point data.\n xmax_on (float): Maximum x of on-point data.\n\n Returns:\n decode array (decode.array): Loaded decode array.\n \"\"\"\n if mode not in [0, 1, 2]:\n raise KeyError(mode)\n\n logger.info('coordtype starttime endtime mode loadtype')\n logger.info('{} {} {} {} {}'.format(coordtype, starttime, endtime, mode, loadtype))\n\n ### pick up kwargs\n # for findR\n findR = kwargs.pop('findR', False)\n ch = kwargs.pop('ch', 0)\n Rth = kwargs.pop('Rth', 280)\n skyth = kwargs.pop('skyth', 150)\n cutnum = kwargs.pop('cutnum', 1)\n # for still\n still = kwargs.pop('still', False)\n period = kwargs.pop('period', 2)\n # for shuttle\n shuttle = kwargs.pop('shuttle', False)\n xmin_off = kwargs.pop('xmin_off', 0)\n xmax_off = kwargs.pop('xmax_off', 0)\n xmin_on = kwargs.pop('xmin_on', 0)\n xmax_on = kwargs.pop('xmax_on', 0)\n\n ### load data\n fitsname = str(Path(fitsname).expanduser())\n\n with fits.open(fitsname) as hdulist:\n obsinfo = hdulist['OBSINFO'].data\n obshdr = hdulist['OBSINFO'].header\n antlog = hdulist['ANTENNA'].data\n readout 
= hdulist['READOUT'].data\n wealog = hdulist['WEATHER'].data\n\n ### obsinfo\n masterids = obsinfo['masterids'][0].astype(np.int64)\n kidids = obsinfo['kidids'][0].astype(np.int64)\n kidfreqs = obsinfo['kidfreqs'][0].astype(np.float64)\n kidtypes = obsinfo['kidtypes'][0].astype(np.int64)\n\n ### parse start/end time\n t_ant = np.array(antlog['time']).astype(np.datetime64)\n t_out = np.array(readout['starttime']).astype(np.datetime64)\n t_wea = np.array(wealog['time']).astype(np.datetime64)\n\n if starttime is None:\n startindex = 0\n elif isinstance(starttime, int):\n startindex = starttime\n elif isinstance(starttime, str):\n startindex = np.searchsorted(t_out, np.datetime64(starttime))\n elif isinstance(starttime, np.datetime64):\n startindex = np.searchsorted(t_out, starttime)\n else:\n raise ValueError(starttime)\n\n if endtime is None:\n endindex = t_out.shape[0]\n elif isinstance(endtime, int):\n endindex = endtime\n elif isinstance(endtime, str):\n endindex = np.searchsorted(t_out, np.datetime64(endtime), 'right')\n elif isinstance(endtime, np.datetime64):\n endindex = np.searchsorted(t_out, endtime, 'right')\n else:\n raise ValueError(endtime)\n\n if t_out[endindex-1] > t_ant[-1]:\n logger.warning('Endtime of readout is adjusted to that of ANTENNA HDU.')\n endindex = np.searchsorted(t_out, t_ant[-1], 'right')\n\n t_out = t_out[startindex:endindex]\n\n ### readout\n pixelid = readout['pixelid'][startindex:endindex]\n if loadtype == 'temperature':\n response = readout['Tsignal'][startindex:endindex].astype(np.float64)\n elif loadtype == 'power':\n response = readout['Psignal'][startindex:endindex].astype(np.float64)\n elif loadtype == 'amplitude':\n response = readout['amplitude'][startindex:endindex].astype(np.float64)\n elif loadtype == 'phase':\n response = readout['phase'][startindex:endindex].astype(np.float64)\n elif loadtype == 'linphase':\n response = readout['line_phase'][startindex:endindex].astype(np.float64)\n else:\n raise 
KeyError(loadtype)\n\n ### antenna\n if coordtype == 'azel':\n x = antlog['az'].copy()\n y = antlog['el'].copy()\n xref = np.median(antlog['az_center'])\n yref = np.median(antlog['el_center'])\n if mode in [0, 1]:\n x -= antlog['az_center']\n y -= antlog['el_center']\n if mode == 0:\n x *= np.cos(np.deg2rad(antlog['el']))\n elif coordtype == 'radec':\n x = antlog['ra'].copy()\n y = antlog['dec'].copy()\n xref = obshdr['RA']\n yref = obshdr['DEC']\n if mode in [0, 1]:\n x -= xref\n y -= yref\n if mode == 0:\n x *= np.cos(np.deg2rad(antlog['dec']))\n else:\n raise KeyError(coordtype)\n scantype = antlog['scantype']\n\n ### weatherlog\n temp = wealog['temperature']\n pressure = wealog['pressure']\n vpressure = wealog['vapor-pressure']\n windspd = wealog['windspd']\n winddir = wealog['winddir']\n\n ### interpolation\n dt_out = (t_out - t_out[0]) / np.timedelta64(1, 's')\n dt_ant = (t_ant - t_out[0]) / np.timedelta64(1, 's')\n dt_wea = (t_wea - t_out[0]) / np.timedelta64(1, 's')\n x_i = np.interp(dt_out, dt_ant, x)\n y_i = np.interp(dt_out, dt_ant, y)\n\n temp_i = np.interp(dt_out, dt_wea, temp)\n pressure_i = np.interp(dt_out, dt_wea, pressure)\n vpressure_i = np.interp(dt_out, dt_wea, vpressure)\n windspd_i = np.interp(dt_out, dt_wea, windspd)\n winddir_i = np.interp(dt_out, dt_wea, winddir)\n\n scandict = {t: n for n, t in enumerate(np.unique(scantype))}\n scantype_v = np.zeros(scantype.shape[0], dtype=int)\n for k, v in scandict.items():\n scantype_v[scantype == k] = v\n scantype_vi = interp1d(dt_ant, scantype_v, kind='nearest', bounds_error=False,\n fill_value=(scantype_v[0], scantype_v[-1]))(dt_out)\n scantype_i = np.full_like(scantype_vi, 'GRAD', dtype='<U8')\n for k, v in scandict.items():\n scantype_i[scantype_vi == v] = k\n Rindex = np.where(response[:, ch] >= Rth)\n scantype_i[Rindex] = 'R'\n movemask = np.hstack([[False] * cutnum, scantype_i[cutnum:] != scantype_i[:-cutnum]]) | \\\n np.hstack([scantype_i[:-cutnum] != scantype_i[cutnum:], [False] * cutnum]) & (scantype_i == 'R')\n scantype_i[movemask] = 'JUNK'\n scantype_i[(response[:, ch] > skyth) & (scantype_i != 
'R')] = 'JUNK'\n scantype_i[(response[:, ch] <= skyth) & (scantype_i == 'R')] = 'JUNK'\n skyindex = np.where(response[:, ch] <= skyth)\n scantype_i_temp = scantype_i.copy()\n scantype_i_temp[skyindex] = 'SKY'\n movemask = np.hstack([[False] * cutnum, scantype_i_temp[cutnum:] != scantype_i_temp[:-cutnum]]) | \\\n np.hstack([scantype_i_temp[:-cutnum] != scantype_i_temp[cutnum:], [False] * cutnum]) & (scantype_i_temp == 'SKY')\n scantype_i[movemask] = 'JUNK'\n\n ### scanid\n scanid_i = np.cumsum(np.hstack([False, scantype_i[1:] != scantype_i[:-1]]))\n\n ### coordinates\n tcoords = {'x': x_i, 'y': y_i, 'time': t_out, 'temp': temp_i, 'pressure': pressure_i,\n 'vapor-pressure': vpressure_i, 'windspd': windspd_i, 'winddir': winddir_i,\n 'scantype': scantype_i, 'scanid': scanid_i}\n chcoords = {'masterid': masterids, 'kidid': kidids, 'kidfq': kidfreqs, 'kidtp': kidtypes}\n scalarcoords = {'coordsys': coordtype.upper(), 'datatype': loadtype, 'xref': xref, 'yref': yref}\n\n ### make array\n array = dc.array(response, tcoords=tcoords, chcoords=chcoords, scalarcoords=scalarcoords)\n if scantypes is not None:\n mask = np.full(array.shape[0], False)\n for scantype in scantypes:\n mask |= (array.scantype == scantype)\n array = array[mask]\n\n return array"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nsave a 3D - cube FITS file.", "response": "def savefits(cube, fitsname, **kwargs):\n \"\"\"Save a cube to a 3D-cube FITS file.\n\n Args:\n cube (xarray.DataArray): Cube to be saved.\n fitsname (str): Name of output FITS file.\n kwargs (optional): Other arguments common with astropy.io.fits.writeto().\n \"\"\"\n ### pick up kwargs\n dropdeg = kwargs.pop('dropdeg', False)\n ndim = len(cube.dims)\n\n ### load yaml\n FITSINFO = get_data('decode', 'data/fitsinfo.yaml')\n hdrdata = yaml.load(FITSINFO, dc.utils.OrderedLoader)\n\n ### default header\n if ndim == 2:\n header = fits.Header(hdrdata['dcube_2d'])\n data = cube.values.T\n elif ndim == 3:\n if dropdeg:\n 
header = fits.Header(hdrdata['dcube_2d'])\n data = cube.values[:, :, 0].T\n else:\n header = fits.Header(hdrdata['dcube_3d'])\n\n kidfq = cube.kidfq.values\n freqrange = ~np.isnan(kidfq)\n orderedfq = np.argsort(kidfq[freqrange])\n newcube = cube[:, :, orderedfq]\n data = newcube.values.T\n else:\n raise TypeError(ndim)\n\n ### update Header\n if cube.coordsys == 'AZEL':\n header.update({'CTYPE1': 'dAZ', 'CTYPE2': 'dEL'})\n elif cube.coordsys == 'RADEC':\n header.update({'OBSRA': float(cube.xref), 'OBSDEC': float(cube.yref)})\n else:\n pass\n header.update({'CRVAL1': float(cube.x[0]),\n 'CDELT1': float(cube.x[1] - cube.x[0]),\n 'CRVAL2': float(cube.y[0]),\n 'CDELT2': float(cube.y[1] - cube.y[0]),\n 'DATE': datetime.now(timezone('UTC')).isoformat()})\n if (ndim == 3) and (not dropdeg):\n header.update({'CRVAL3': float(newcube.kidfq[0]),\n 'CDELT3': float(newcube.kidfq[1] - newcube.kidfq[0])})\n\n fitsname = str(Path(fitsname).expanduser())\n fits.writeto(fitsname, data, header, **kwargs)\n logger.info('{} has been created.'.format(fitsname))"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nloading a dataarray from a NetCDF file.", "response": "def loadnetcdf(filename, copy=True):\n \"\"\"Load a dataarray from a NetCDF file.\n\n Args:\n filename (str): Filename (*.nc).\n copy (bool): If True, dataarray is copied in memory. Default is True.\n\n Returns:\n dataarray (xarray.DataArray): Loaded dataarray.\n \"\"\"\n filename = str(Path(filename).expanduser())\n\n if copy:\n dataarray = xr.open_dataarray(filename).copy()\n else:\n dataarray = xr.open_dataarray(filename, chunks={})\n\n if dataarray.name is None:\n dataarray.name = filename.rstrip('.nc')\n\n for key, val in dataarray.coords.items():\n if val.dtype.kind == 'S':\n dataarray[key] = val.astype('U')\n elif val.dtype == np.int32:\n dataarray[key] = val.astype('i8')\n\n return dataarray"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nsaving a xarray. 
DataArray to a NetCDF file.", "response": "def savenetcdf(dataarray, filename=None):\n \"\"\"Save a dataarray to a NetCDF file.\n\n Args:\n dataarray (xarray.DataArray): Dataarray to be saved.\n filename (str): Filename (used as .nc).\n If not spacified, random 8-character name will be used.\n \"\"\"\n if filename is None:\n if dataarray.name is not None:\n filename = dataarray.name\n else:\n filename = uuid4().hex[:8]\n else:\n filename = str(Path(filename).expanduser())\n\n if not filename.endswith('.nc'):\n filename += '.nc'\n\n dataarray.to_netcdf(filename)\n logger.info('{} has been created.'.format(filename))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nfunction to sign the parameters required to create a cancellation request from the Switcheo Exchange. Execution of this function is as follows:: sign_create_cancellation(cancellation_params=signable_params, private_key=eth_private_key) The expected return result for this function is as follows:: { 'order_id': '3125550a-04f9-4475-808b-42b5f89d6693', 'timestamp': 1542088842108, 'address': '0x32c46323b51c977814e05ef5e258ee4da0e4c3c3', 'signature': 'dac70ca711bcfbeefbdead2158ef8b15fab1a1....' } :param cancellation_params: Dictionary with Order ID and timestamp to sign for creating the cancellation. :type cancellation_params: dict :param private_key: The Ethereum private key to sign the deposit parameters. 
:type private_key: str :return: Dictionary of signed message to send to the Switcheo API.", "response": "def sign_create_cancellation(cancellation_params, private_key):\n \"\"\"\n Function to sign the parameters required to create a cancellation request from the Switcheo Exchange.\n Execution of this function is as follows::\n\n sign_create_cancellation(cancellation_params=signable_params, private_key=eth_private_key)\n\n The expected return result for this function is as follows::\n\n {\n 'order_id': '3125550a-04f9-4475-808b-42b5f89d6693',\n 'timestamp': 1542088842108,\n 'address': '0x32c46323b51c977814e05ef5e258ee4da0e4c3c3',\n 'signature': 'dac70ca711bcfbeefbdead2158ef8b15fab1a1....'\n }\n\n :param cancellation_params: Dictionary with Order ID and timestamp to sign for creating the cancellation.\n :type cancellation_params: dict\n :param private_key: The Ethereum private key to sign the deposit parameters.\n :type private_key: str\n :return: Dictionary of signed message to send to the Switcheo API.\n \"\"\"\n hash_message = defunct_hash_message(text=stringify_message(cancellation_params))\n hex_message = binascii.hexlify(hash_message).decode()\n signed_message = binascii.hexlify(Account.signHash(hex_message, private_key=private_key)['signature']).decode()\n create_params = cancellation_params.copy()\n create_params['address'] = to_normalized_address(Account.privateKeyToAccount(private_key=private_key).address)\n create_params['signature'] = signed_message\n return create_params"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef sign_execute_cancellation(cancellation_params, private_key):\n cancellation_sha256 = cancellation_params['transaction']['sha256']\n signed_sha256 = binascii.hexlify(\n Account.signHash(cancellation_sha256, private_key=private_key)['signature']).decode()\n return {'signature': '0x' + signed_sha256}", "response": "This function is used to sign the parameters required to execute 
a cancellation request on the Switcheo Exchange."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef sign_execute_deposit(deposit_params, private_key, infura_url):\n create_deposit_upper = deposit_params.copy()\n create_deposit_upper['transaction']['from'] = to_checksum_address(create_deposit_upper['transaction']['from'])\n create_deposit_upper['transaction']['to'] = to_checksum_address(create_deposit_upper['transaction']['to'])\n create_deposit_upper['transaction'].pop('sha256')\n signed_create_txn = Account.signTransaction(create_deposit_upper['transaction'], private_key=private_key)\n execute_signed_txn = binascii.hexlify(signed_create_txn['hash']).decode()\n\n # Broadcast transaction to Ethereum Network.\n Web3(HTTPProvider(infura_url)).eth.sendRawTransaction(signed_create_txn.rawTransaction)\n\n return {'transaction_hash': '0x' + execute_signed_txn}", "response": "This function is used to sign the deposit request by sending the transaction to the Switcheo API."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef sign_execute_order(order_params, private_key):\n execute_params = {\n 'signatures': {\n 'fill_groups': sign_txn_array(messages=order_params['fill_groups'], private_key=private_key),\n 'fills': {},\n 'makes': sign_txn_array(messages=order_params['makes'], private_key=private_key),\n }\n }\n return execute_params", "response": "This function is used to sign the execute order request by signing the transaction generated by the create order function."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nfunctions to execute the withdrawal request by signing the transaction generated from the create withdrawal function. 
Execution of this function is as follows:: sign_execute_withdrawal(withdrawal_params=signable_params, private_key=eth_private_key) The expected return result for this function is as follows:: { 'signature': '0x33656f88b364d344e5b04f6aead01cdd3ac084489c39a9efe88c9873249bf1954525b1....' } :param withdrawal_params: The parameters generated by the create function that now require signing. :type withdrawal_params: dict :param private_key: The Ethereum private key to sign the deposit parameters. :type private_key: str :return: Dictionary of the signed transaction hash and initiate the withdrawal of ETH via the Switcheo API.", "response": "def sign_execute_withdrawal(withdrawal_params, private_key):\n \"\"\"\n Function to execute the withdrawal request by signing the transaction generated from the create withdrawal function.\n Execution of this function is as follows::\n\n sign_execute_withdrawal(withdrawal_params=signable_params, private_key=eth_private_key)\n\n The expected return result for this function is as follows::\n\n {\n 'signature': '0x33656f88b364d344e5b04f6aead01cdd3ac084489c39a9efe88c9873249bf1954525b1....'\n }\n\n :param withdrawal_params: The parameters generated by the create function that now require signing.\n :type withdrawal_params: dict\n :param private_key: The Ethereum private key to sign the deposit parameters.\n :type private_key: str\n :return: Dictionary of the signed transaction hash and initiate the withdrawal of ETH via the Switcheo API.\n \"\"\"\n withdrawal_sha256 = withdrawal_params['transaction']['sha256']\n signed_sha256 = binascii.hexlify(Account.signHash(withdrawal_sha256, private_key=private_key)['signature']).decode()\n return {'signature': '0x' + signed_sha256}"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_zip_content(self, filename):\n\n with zipfile.ZipFile(filename, 'r') as z:\n self.determine_file_type(z)\n for item in z.infolist():\n if 
glob.globmatch(item.filename, self.filepattern, flags=self.FLAGS):\n yield z.read(item.filename), item.filename", "response": "Get the content of the file in the zip."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_content(self, zipbundle):\n\n for content, filename in self.get_zip_content(zipbundle):\n with io.BytesIO(content) as b:\n encoding = self._analyze_file(b)\n if encoding is None:\n encoding = self.default_encoding\n b.seek(0)\n text = b.read().decode(encoding)\n yield text, filename, encoding", "response": "Get content from the given zip."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef content_break(self, el):\n\n should_break = False\n if self.type == 'odp':\n if el.name == 'page' and el.namespace and el.namespace == self.namespaces['draw']:\n should_break = True\n return should_break", "response": "Break on specified boundaries."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\napply soft break if needed.", "response": "def soft_break(self, el, text):\n \"\"\"Apply soft break if needed.\"\"\"\n\n if el.name == 'p' and el.namespace and el.namespace == self.namespaces[\"text\"]:\n text.append('\\n')"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nstore the text as desired.", "response": "def store_blocks(self, el, blocks, text, force_root):\n \"\"\"Store the text as desired.\"\"\"\n\n self.soft_break(el, text)\n\n if force_root or el.parent is None or self.content_break(el):\n content = html.unescape(''.join(text))\n if content:\n blocks.append((content, self.additional_context + self.construct_selector(el)))\n text = []\n return text"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nextracting meta data from an element.", "response": "def extract_tag_metadata(self, el):\n \"\"\"Extract meta data.\"\"\"\n\n if self.type == 
'odp':\n if el.namespace and el.namespace == self.namespaces['draw'] and el.name == 'page-thumbnail':\n name = el.attrs.get('draw:page-number', '')\n self.additional_context = 'slide{}:'.format(name)\n super().extract_tag_metadata(el)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_sub_node(self, node):\n\n subnode = node.find('office:document')\n if subnode:\n mimetype = subnode.attrs['office:mimetype']\n self.type = MIMEMAP[mimetype]\n node = node.find('office:body')\n return node", "response": "Extract node from document if desired."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef array(data, tcoords=None, chcoords=None, scalarcoords=None, datacoords=None, attrs=None, name=None):\n # initialize coords with default values\n array = xr.DataArray(data, dims=('t', 'ch'), attrs=attrs, name=name)\n array.dca._initcoords()\n\n # update coords with input values (if any)\n if tcoords is not None:\n array.coords.update({key: ('t', tcoords[key]) for key in tcoords})\n\n if chcoords is not None:\n array.coords.update({key: ('ch', chcoords[key]) for key in chcoords})\n\n if scalarcoords is not None:\n array.coords.update(scalarcoords)\n\n if datacoords is not None:\n array.coords.update({key: (('t', 'ch'), datacoords[key]) for key in datacoords})\n\n return array", "response": "Create an array as an instance of xarray. 
DataArray with accessor."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncreates an array of given shape and type filled with zeros.", "response": "def zeros(shape, dtype=None, **kwargs):\n \"\"\"Create an array of given shape and type, filled with zeros.\n\n Args:\n shape (sequence of ints): 2D shape of the array.\n dtype (data-type, optional): Desired data-type for the array.\n kwargs (optional): Other arguments of the array (*coords, attrs, and name).\n\n Returns:\n array (decode.array): Decode array filled with zeros.\n \"\"\"\n data = np.zeros(shape, dtype)\n return dc.array(data, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ncreating an array of given shape and type filled with ones.", "response": "def ones(shape, dtype=None, **kwargs):\n \"\"\"Create an array of given shape and type, filled with ones.\n\n Args:\n shape (sequence of ints): 2D shape of the array.\n dtype (data-type, optional): Desired data-type for the array.\n kwargs (optional): Other arguments of the array (*coords, attrs, and name).\n\n Returns:\n array (decode.array): Decode array filled with ones.\n \"\"\"\n data = np.ones(shape, dtype)\n return dc.array(data, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncreates an array of given shape and type filled with fill_value.", "response": "def full(shape, fill_value, dtype=None, **kwargs):\n \"\"\"Create an array of given shape and type, filled with `fill_value`.\n\n Args:\n shape (sequence of ints): 2D shape of the array.\n fill_value (scalar or numpy.ndarray): Fill value or array.\n dtype (data-type, optional): Desired data-type for the array.\n kwargs (optional): Other arguments of the array (*coords, attrs, and name).\n\n Returns:\n array (decode.array): Decode array filled with `fill_value`.\n \"\"\"\n return (dc.zeros(shape, **kwargs) + fill_value).astype(dtype)"} {"SOURCE": 
"codesearchnet", "instruction": "Create a Python 3 function for\ncreating an array of given shape and type without initializing entries.", "response": "def empty(shape, dtype=None, **kwargs):\n \"\"\"Create an array of given shape and type, without initializing entries.\n\n Args:\n shape (sequence of ints): 2D shape of the array.\n dtype (data-type, optional): Desired data-type for the array.\n kwargs (optional): Other arguments of the array (*coords, attrs, and name).\n\n Returns:\n array (decode.array): Decode array without initializing entries.\n \"\"\"\n data = np.empty(shape, dtype)\n return dc.array(data, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef zeros_like(array, dtype=None, keepmeta=True):\n if keepmeta:\n return xr.zeros_like(array, dtype)\n else:\n return dc.zeros(array.shape, dtype)", "response": "Create an array of zeros with the same shape and type as the input array."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef ones_like(array, dtype=None, keepmeta=True):\n if keepmeta:\n return xr.ones_like(array, dtype)\n else:\n return dc.ones(array.shape, dtype)", "response": "Create an array of ones with the same shape and type as the input array."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncreates an array of fill_value with the same shape and type as the input array.", "response": "def full_like(array, fill_value, reverse=False, dtype=None, keepmeta=True):\n \"\"\"Create an array of `fill_value` with the same shape and type as the input array.\n\n Args:\n array (xarray.DataArray): The shape and data-type of it define\n these same attributes of the output array.\n fill_value (scalar or numpy.ndarray): Fill value or array.\n dtype (data-type, optional): If specified, this function overrides\n the data-type of the output array.\n keepmeta (bool, optional): Whether *coords, 
attrs, and name of the input\n array are kept in the output one. Default is True.\n\n Returns:\n array (decode.array): Decode array filled with `fill_value`.\n \"\"\"\n if keepmeta:\n return (dc.zeros_like(array) + fill_value).astype(dtype)\n else:\n return dc.full(array.shape, fill_value, dtype)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ncreating an array of empty with the same shape and type as the input array.", "response": "def empty_like(array, dtype=None, keepmeta=True):\n \"\"\"Create an array of empty with the same shape and type as the input array.\n\n Args:\n array (xarray.DataArray): The shape and data-type of it define\n these same attributes of the output array.\n dtype (data-type, optional): If specified, this function overrides\n the data-type of the output array.\n keepmeta (bool, optional): Whether *coords, attrs, and name of the input\n array are kept in the output one. Default is True.\n\n Returns:\n array (decode.array): Decode array without initializing entries.\n \"\"\"\n if keepmeta:\n return dc.empty(array.shape, dtype,\n tcoords=array.dca.tcoords, chcoords=array.dca.chcoords,\n scalarcoords=array.dca.scalarcoords, attrs=array.attrs, name=array.name\n )\n else:\n return dc.empty(array.shape, dtype)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\niterates tasks in the order of their names and groups.", "response": "def iter_tasks(matrix, names, groups):\n \"\"\"Iterate tasks.\"\"\"\n\n # Build name index\n name_index = dict([(task.get('name', ''), index) for index, task in enumerate(matrix)])\n\n for index, task in enumerate(matrix):\n name = task.get('name', '')\n group = task.get('group', '')\n hidden = task.get('hidden', False)\n if names and name in names and index == name_index[name]:\n yield task\n elif groups and group in groups and not hidden:\n yield task\n elif not names and not groups and not hidden:\n yield task"} {"SOURCE": "codesearchnet", "instruction": "How 
would you explain what the following Python 3 function does\ndef spellcheck(config_file, names=None, groups=None, binary='', checker='', sources=None, verbose=0, debug=False):\n\n hunspell = None\n aspell = None\n spellchecker = None\n config = util.read_config(config_file)\n if sources is None:\n sources = []\n\n matrix = config.get('matrix', [])\n preferred_checker = config.get('spellchecker', 'aspell')\n if not matrix:\n matrix = config.get('documents', [])\n if matrix:\n util.warn_deprecated(\"'documents' key in config is deprecated. 'matrix' should be used going forward.\")\n\n groups = set() if groups is None else set(groups)\n names = set() if names is None else set(names)\n\n # Sources are only recognized when requesting a single name.\n if (len(names) != 1 and len(sources)):\n sources = []\n\n for task in iter_tasks(matrix, names, groups):\n\n if not checker:\n checker = preferred_checker\n\n if checker == \"hunspell\": # pragma: no cover\n if hunspell is None:\n hunspell = Hunspell(config, binary, verbose, debug)\n spellchecker = hunspell\n\n elif checker == \"aspell\":\n if aspell is None:\n aspell = Aspell(config, binary, verbose, debug)\n spellchecker = aspell\n else:\n raise ValueError('%s is not a valid spellchecker!' 
% checker)\n\n spellchecker.log('Using %s to spellcheck %s' % (checker, task.get('name', '')), 1)\n for result in spellchecker.run_task(task, source_patterns=sources):\n spellchecker.log('Context: %s' % result.context, 2)\n yield result\n spellchecker.log(\"\", 1)", "response": "Yields a list of spellcheck results."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ngetting the error message.", "response": "def get_error(self, e):\n \"\"\"Get the error.\"\"\"\n\n import traceback\n\n return traceback.format_exc() if self.debug else str(e)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _walk_src(self, targets, flags, pipeline):\n\n for target in targets:\n for f in glob.iglob(target, flags=flags | glob.S):\n if not os.path.isdir(f):\n self.log('', 2)\n self.log('> Processing: %s' % f, 1)\n if pipeline:\n try:\n yield pipeline[0]._run_first(f)\n except Exception as e:\n err = self.get_error(e)\n yield [filters.SourceText('', f, '', '', err)]\n else:\n try:\n if self.default_encoding:\n encoding = filters.PYTHON_ENCODING_NAMES.get(\n self.default_encoding, self.default_encoding\n ).lower()\n encoding = codecs.lookup(encoding).name\n else:\n encoding = self.default_encoding\n yield [filters.SourceText('', f, encoding, 'file')]\n except Exception as e:\n err = self.get_error(e)\n yield [filters.SourceText('', f, '', '', err)]", "response": "Walk source files and parse files."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _build_pipeline(self, task):\n\n self.pipeline_steps = []\n kwargs = {}\n if self.default_encoding:\n kwargs[\"default_encoding\"] = self.default_encoding\n\n steps = task.get('pipeline', [])\n if steps is None:\n self.pipeline_steps = None\n else:\n if not steps:\n steps = task.get('filters', [])\n if steps:\n util.warn_deprecated(\n \"'filters' key in config is deprecated. 
'pipeline' should be used going forward.\"\n )\n\n if not steps:\n steps.append('pyspelling.filters.text')\n\n for step in steps:\n # Retrieve module and module options\n if isinstance(step, dict):\n name, options = list(step.items())[0]\n else:\n name = step\n options = {}\n if options is None:\n options = {}\n\n module = self._get_module(name)\n if issubclass(module, filters.Filter):\n self.pipeline_steps.append(module(options, **kwargs))\n elif issubclass(module, flow_control.FlowControl):\n if self.pipeline_steps:\n self.pipeline_steps.append(module(options))\n else:\n raise ValueError(\"Pipeline cannot start with a 'Flow Control' plugin!\")\n else:\n raise ValueError(\"'%s' is not a valid plugin!\" % name)", "response": "Build up the pipeline."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _to_flags(self, text):\n\n flags = 0\n for x in text.split('|'):\n value = x.strip().upper()\n if value:\n flags |= self.GLOB_FLAG_MAP.get(value, 0)\n return flags", "response": "Convert text representation of flags to actual flags."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef run_task(self, task, source_patterns=None):\n\n # Perform spell check\n self.log('Running Task: %s...' 
% task.get('name', ''), 1)\n\n # Setup filters and variables for the spell check\n self.default_encoding = task.get('default_encoding', '')\n options = self.setup_spellchecker(task)\n personal_dict = self.setup_dictionary(task)\n glob_flags = self._to_flags(task.get('glob_flags', \"N|B|G\"))\n self._build_pipeline(task)\n\n if not source_patterns:\n source_patterns = task.get('sources', [])\n\n for sources in self._walk_src(source_patterns, glob_flags, self.pipeline_steps):\n\n if self.pipeline_steps is not None:\n yield from self._spelling_pipeline(sources, options, personal_dict)\n else:\n yield from self.spell_check_no_pipeline(sources, options, personal_dict)", "response": "Walk source and initiate spell check."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncompile the user dictionary.", "response": "def compile_dictionary(self, lang, wordlists, encoding, output):\n \"\"\"Compile user dictionary.\"\"\"\n\n cmd = [\n self.binary,\n '--lang', lang,\n '--encoding', codecs.lookup(filters.PYTHON_ENCODING_NAMES.get(encoding, encoding).lower()).name,\n 'create',\n 'master', output\n ]\n\n wordlist = ''\n\n try:\n output_location = os.path.dirname(output)\n if not os.path.exists(output_location):\n os.makedirs(output_location)\n if os.path.exists(output):\n os.remove(output)\n\n self.log(\"Compiling Dictionary...\", 1)\n # Read word lists and create a unique set of words\n words = set()\n for wordlist in wordlists:\n with open(wordlist, 'rb') as src:\n for word in src.read().split(b'\\n'):\n words.add(word.replace(b'\\r', b''))\n\n # Compile wordlist against language\n util.call(\n [\n self.binary,\n '--lang', lang,\n '--encoding=utf-8',\n 'create',\n 'master', output\n ],\n input_text=b'\\n'.join(sorted(words)) + b'\\n'\n )\n except Exception:\n self.log(cmd, 0)\n self.log(\"Current wordlist: '%s'\" % wordlist, 0)\n self.log(\"Problem compiling dictionary. 
Check the binary path and options.\", 0)\n raise"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nspell check without the pipeline.", "response": "def spell_check_no_pipeline(self, sources, options, personal_dict):\n \"\"\"Spell check without the pipeline.\"\"\"\n\n for source in sources:\n if source._has_error(): # pragma: no cover\n yield Results([], source.context, source.category, source.error)\n\n cmd = self.setup_command(source.encoding, options, personal_dict, source.context)\n self.log('', 3)\n self.log(\"Command: \" + str(cmd), 4)\n try:\n wordlist = util.call_spellchecker(cmd, input_text=None, encoding=source.encoding)\n yield Results(\n [w for w in sorted(set(wordlist.replace('\\r', '').split('\\n'))) if w],\n source.context,\n source.category\n )\n except Exception as e: # pragma: no cover\n err = self.get_error(e)\n yield Results([], source.context, source.category, err)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef fusion_append(fields, error_field, etl_func):\n return compose(read_fields(fields),\n etl_func, dict_enforcer, write_error_field(error_field))", "response": "Compose an ETL step that reads the given fields, applies etl_func, enforces dict output, and writes any error to error_field."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nmatch the category with the pattern.", "response": "def match(self, category, pattern):\n \"\"\"Match the category.\"\"\"\n\n return fnmatch.fnmatch(category, pattern, flags=self.FNMATCH_FLAGS)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nadjusting the flow of source control objects.", "response": "def adjust_flow(self, category):\n \"\"\"Adjust the flow of source control objects.\"\"\"\n\n status = flow_control.SKIP\n for allow in self.allow:\n if self.match(category, allow):\n status = flow_control.ALLOW\n for skip in self.skip:\n if self.match(category, skip):\n status = flow_control.SKIP\n for halt in self.halt:\n if self.match(category, 
halt):\n status = flow_control.HALT\n if status != flow_control.ALLOW:\n break\n return status"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef plot_tcoords(array, coords, scantypes=None, ax=None, **kwargs):\n if ax is None:\n ax = plt.gca()\n\n if scantypes is None:\n ax.plot(array[coords[0]], array[coords[1]], label='ALL', **kwargs)\n else:\n for scantype in scantypes:\n ax.plot(array[coords[0]][array.scantype == scantype],\n array[coords[1]][array.scantype == scantype], label=scantype, **kwargs)\n ax.set_xlabel(coords[0])\n ax.set_ylabel(coords[1])\n ax.set_title('{} vs {}'.format(coords[1], coords[0]))\n ax.legend()\n\n logger.info('{} vs {} has been plotted.'.format(coords[1], coords[0]))", "response": "Plot the coordinates related to the time axis."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef plot_timestream(array, kidid, xtick='time', scantypes=None, ax=None, **kwargs):\n if ax is None:\n ax = plt.gca()\n\n index = np.where(array.kidid == kidid)[0]\n if len(index) == 0:\n raise KeyError('Such a kidid does not exist.')\n index = int(index)\n\n if scantypes is None:\n if xtick == 'time':\n ax.plot(array.time, array[:, index], label='ALL', **kwargs)\n elif xtick == 'index':\n ax.plot(np.ogrid[:len(array.time)], array[:, index], label='ALL', **kwargs)\n else:\n for scantype in scantypes:\n if xtick == 'time':\n ax.plot(array.time[array.scantype == scantype],\n array[:, index][array.scantype == scantype], label=scantype, **kwargs)\n elif xtick == 'index':\n ax.plot(np.ogrid[:len(array.time[array.scantype == scantype])],\n array[:, index][array.scantype == scantype], label=scantype, **kwargs)\n ax.set_xlabel('{}'.format(xtick))\n ax.set_ylabel(str(array.datatype.values))\n ax.legend()\n\n kidtpdict = {0: 'wideband', 1: 'filter', 2: 'blind'}\n try:\n kidtp = kidtpdict[int(array.kidtp[index])]\n except KeyError:\n kidtp = 
'filter'\n ax.set_title('ch #{} ({})'.format(kidid, kidtp))\n\n logger.info('timestream data (ch={}) has been plotted.'.format(kidid))", "response": "Plot timestream data.\n\n Args:\n array (xarray.DataArray): Array which the timestream data are included.\n kidid (int): Kidid.\n xtick (str): Type of x axis.\n 'time': Time.\n 'index': Time index.\n scantypes (list): Scantypes. If None, all scantypes are used.\n ax (matplotlib.axes): Axis you want to plot on.\n kwargs (optional): Plot options passed to ax.plot()."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nplot a spectrum on the specified axis.", "response": "def plot_spectrum(cube, xtick, ytick, aperture, ax=None, **kwargs):\n \"\"\"Plot a spectrum.\n\n Args:\n cube (xarray.DataArray): Cube which the spectrum information is included.\n xtick (str): Type of x axis.\n 'freq': Frequency [GHz].\n 'id': Kid id.\n ytick (str): Type of y axis.\n 'max': Maximum.\n 'sum': Summation.\n 'mean': Mean.\n aperture (str): The shape of aperture.\n 'box': Box.\n 'circle': Circle.\n ax (matplotlib.axes): Axis you want to plot on.\n kwargs (optional):\n When 'box' is specified as shape,\n xc: Center of x.\n yc: Center of y.\n width: Width.\n height: Height.\n xmin: Minimum of x.\n xmax: Maximum of x.\n ymin: Minimum of y.\n ymax: Maximum of y.\n When 'circle' is specified as shape,\n xc: Center of x.\n yc: Center of y.\n radius: Radius.\n Remaining kwargs are passed to ax.step().\n\n Notes:\n All kwargs should be specified as pixel coordinates.\n \"\"\"\n if ax is None:\n ax = plt.gca()\n\n ### pick up kwargs\n xc = kwargs.pop('xc', None)\n yc = kwargs.pop('yc', None)\n width = kwargs.pop('width', None)\n height = kwargs.pop('height', None)\n xmin = kwargs.pop('xmin', None)\n xmax = kwargs.pop('xmax', None)\n ymin = kwargs.pop('ymin', None)\n ymax = kwargs.pop('ymax', None)\n radius = kwargs.pop('radius', None)\n exchs = kwargs.pop('exchs', None)\n\n ### labels\n xlabeldict = {'freq': 
'frequency [GHz]', 'id': 'kidid'}\n\n cube = cube.copy()\n datatype = cube.datatype\n if aperture == 'box':\n if None not in [xc, yc, width, height]:\n xmin, xmax = int(xc - width / 2), int(xc + width / 2)\n ymin, ymax = int(yc - height / 2), int(yc + height / 2)\n elif None not in [xmin, xmax, ymin, ymax]:\n pass\n else:\n raise KeyError('Invalid arguments.')\n value = getattr(cube[xmin:xmax, ymin:ymax, :], ytick)(dim=('x', 'y'))\n elif aperture == 'circle':\n if None not in [xc, yc, radius]:\n pass\n else:\n raise KeyError('Invalid arguments.')\n x, y = np.ogrid[0:len(cube.x), 0:len(cube.y)]\n mask = ((x - xc)**2 + (y - yc)**2 < radius**2)\n mask = np.broadcast_to(mask[:, :, np.newaxis], cube.shape)\n masked = np.ma.array(cube.values, mask=~mask)\n value = getattr(np, 'nan'+ytick)(masked, axis=(0, 1))\n else:\n raise KeyError(aperture)\n\n if xtick == 'freq':\n kidfq = cube.kidfq.values\n freqrange = ~np.isnan(kidfq)\n if exchs is not None:\n freqrange[exchs] = False\n x = kidfq[freqrange]\n y = value[freqrange]\n ax.step(x[np.argsort(x)], y[np.argsort(x)], where='mid', **kwargs)\n elif xtick == 'id':\n ax.step(cube.kidid.values, value, where='mid', **kwargs)\n else:\n raise KeyError(xtick)\n ax.set_xlabel('{}'.format(xlabeldict[xtick]))\n ax.set_ylabel('{} ({})'.format(datatype.values, ytick))\n ax.set_title('spectrum')"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nplots an intensity map.", "response": "def plot_chmap(cube, kidid, ax=None, **kwargs):\n \"\"\"Plot an intensity map.\n\n Args:\n cube (xarray.DataArray): Cube which the spectrum information is included.\n kidid (int): Kidid.\n ax (matplotlib.axes): Axis the figure is plotted on.\n kwargs (optional): Plot options passed to ax.pcolormesh().\n \"\"\"\n if ax is None:\n ax = plt.gca()\n\n index = np.where(cube.kidid == kidid)[0]\n if len(index) == 0:\n raise KeyError('Such a kidid does not exist.')\n index = int(index)\n\n im = ax.pcolormesh(cube.x, cube.y, 
cube[:, :, index].T, **kwargs)\n ax.set_xlabel('x')\n ax.set_ylabel('y')\n ax.set_title('intensity map ch #{}'.format(kidid))\n return im"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef plotpsd(data, dt, ndivide=1, window=hanning, overlap_half=False, ax=None, **kwargs):\n if ax is None:\n ax = plt.gca()\n vk, psddata = psd(data, dt, ndivide, window, overlap_half)\n ax.loglog(vk, psddata, **kwargs)\n ax.set_xlabel('Frequency [Hz]')\n ax.set_ylabel('PSD')\n ax.legend()", "response": "Plot a power - spectrum density."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nplot Allan variance. Args: data (np.ndarray): Input data. dt (float): Time between each data. tmax (float): Maximum time. ax (matplotlib.axes): Axis the figure is plotted on. kwargs (optional): Plot options passed to ax.plot().", "response": "def plotallanvar(data, dt, tmax=10, ax=None, **kwargs):\n \"\"\"Plot Allan variance.\n\n Args:\n data (np.ndarray): Input data.\n dt (float): Time between each data.\n tmax (float): Maximum time.\n ax (matplotlib.axes): Axis the figure is plotted on.\n kwargs (optional): Plot options passed to ax.plot().\n \"\"\"\n if ax is None:\n ax = plt.gca()\n tk, allanvar = allan_variance(data, dt, tmax)\n ax.loglog(tk, allanvar, **kwargs)\n ax.set_xlabel('Time [s]')\n ax.set_ylabel('Allan Variance')\n ax.legend()"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nfunction to create a cancellation request for the order ID from the open orders on the order book. 
Execution of this function is as follows:: create_cancellation(order_id=order['id'], private_key=kp) The expected return result for this function is as follows:: { 'id': '6b9f40de-f9bb-46b6-9434-d281f8c06b74', 'transaction': { 'hash': '50d99ebd7e57dbdceb7edc2014da5f446c8f44cc0a0b6d9c762a29e8a74bb051', 'sha256': '509edb9888fa675988fa71a27600b2655e63fe979424f13f5c958897b2e99ed8', 'type': 209, 'version': 1, 'attributes': [ { 'usage': 32, 'data': '592c8a46a0d06c600f06c994d1f25e7283b8a2fe' } ], 'inputs': [ { 'prevHash': '9eaca1adcbbc0669a936576cb9ad03c11c99c356347aae3037ce1f0e4d330d85', 'prevIndex': 0 }, { 'prevHash': 'c858e4d2af1e1525fa974fb2b1678caca1f81a5056513f922789594939ff713d', 'prevIndex': 37 } ], 'outputs': [ { 'assetId': '602c79718b16e442de58778e148d0b1084e3b2dffd5de6b7b16cee7969282de7', 'scriptHash': 'e707714512577b42f9a011f8b870625429f93573', 'value': 1e-08 } ], 'scripts': [], 'script': '....', 'gas': 0 }, 'script_params': { 'scriptHash': 'a195c1549e7da61b8da315765a790ac7e7633b82', 'operation': 'cancelOffer', 'args': [ '923ebd865a6c5db39bd67403c1677f29d3949d9d8d9a2eb36a56dabd2dd83068' ] } } :param order_id: The order ID of the open transaction on the order book that you want to cancel. :type order_id: str :param private_key: The KeyPair that will be used to sign the transaction sent to the blockchain. 
:type private_key: KeyPair :return: Dictionary that contains the cancellation request along with blockchain transaction information.", "response": "def create_cancellation(self, order_id, private_key):\n \"\"\"\n Function to create a cancellation request for the order ID from the open orders on the order book.\n Execution of this function is as follows::\n\n create_cancellation(order_id=order['id'], private_key=kp)\n\n The expected return result for this function is as follows::\n\n {\n 'id': '6b9f40de-f9bb-46b6-9434-d281f8c06b74',\n 'transaction': {\n 'hash': '50d99ebd7e57dbdceb7edc2014da5f446c8f44cc0a0b6d9c762a29e8a74bb051',\n 'sha256': '509edb9888fa675988fa71a27600b2655e63fe979424f13f5c958897b2e99ed8',\n 'type': 209,\n 'version': 1,\n 'attributes': [\n {\n 'usage': 32,\n 'data': '592c8a46a0d06c600f06c994d1f25e7283b8a2fe'\n }\n ],\n 'inputs': [\n {\n 'prevHash': '9eaca1adcbbc0669a936576cb9ad03c11c99c356347aae3037ce1f0e4d330d85',\n 'prevIndex': 0\n }, {\n 'prevHash': 'c858e4d2af1e1525fa974fb2b1678caca1f81a5056513f922789594939ff713d',\n 'prevIndex': 37\n }\n ],\n 'outputs': [\n {\n 'assetId': '602c79718b16e442de58778e148d0b1084e3b2dffd5de6b7b16cee7969282de7',\n 'scriptHash': 'e707714512577b42f9a011f8b870625429f93573',\n 'value': 1e-08\n }\n ],\n 'scripts': [],\n 'script': '....',\n 'gas': 0\n },\n 'script_params': {\n 'scriptHash': 'a195c1549e7da61b8da315765a790ac7e7633b82',\n 'operation': 'cancelOffer',\n 'args': [\n '923ebd865a6c5db39bd67403c1677f29d3949d9d8d9a2eb36a56dabd2dd83068'\n ]\n }\n }\n\n :param order_id: The order ID of the open transaction on the order book that you want to cancel.\n :type order_id: str\n :param private_key: The KeyPair that will be used to sign the transaction sent to the blockchain.\n :type private_key: KeyPair\n :return: Dictionary that contains the cancellation request along with blockchain transaction information.\n \"\"\"\n cancellation_params = {\n \"order_id\": order_id,\n \"timestamp\": get_epoch_milliseconds()\n }\n 
api_params = self.sign_create_cancellation_function[self.blockchain](cancellation_params, private_key)\n return self.request.post(path='/cancellations', json_data=api_params)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef create_deposit(self, asset, amount, private_key):\n signable_params = {\n 'blockchain': self.blockchain,\n 'asset_id': asset,\n 'amount': str(self.blockchain_amount[self.blockchain](amount)),\n 'timestamp': get_epoch_milliseconds(),\n 'contract_hash': self.contract_hash\n }\n api_params = self.sign_create_deposit_function[self.blockchain](signable_params, private_key)\n return self.request.post(path='/deposits', json_data=api_params)", "response": "This function generates a deposit request for a specific asset."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef execute_deposit(self, deposit_params, private_key):\n deposit_id = deposit_params['id']\n api_params = self.sign_execute_deposit_function[self.blockchain](deposit_params, private_key)\n return self.request.post(path='/deposits/{}/broadcast'.format(deposit_id), json_data=api_params)", "response": "Execute the deposit request by signing the transaction generated by the create deposit function."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nfunction to create an order for the trade pair and details requested. 
Execution of this function is as follows:: create_order(pair=\"SWTH_NEO\", side=\"buy\", price=0.0002, quantity=100, private_key=kp, use_native_token=True, order_type=\"limit\") The expected return result for this function is as follows:: { 'id': '4e6a59fd-d750-4332-aaf0-f2babfa8ad67', 'blockchain': 'neo', 'contract_hash': 'a195c1549e7da61b8da315765a790ac7e7633b82', 'address': 'fea2b883725ef2d194c9060f606cd0a0468a2c59', 'side': 'buy', 'offer_asset_id': 'c56f33fc6ecfcd0c225c4ab356fee59390af8560be0e930faebe74a6daff7c9b', 'want_asset_id': 'ab38352559b8b203bde5fddfa0b07d8b2525e132', 'offer_amount': '2000000', 'want_amount': '10000000000', 'transfer_amount': '0', 'priority_gas_amount': '0', 'use_native_token': True, 'native_fee_transfer_amount': 0, 'deposit_txn': None, 'created_at': '2018-08-05T10:38:37.714Z', 'status': 'pending', 'fills': [], 'makes': [ { 'id': 'e30a7fdf-779c-4623-8f92-8a961450d843', 'offer_hash': None, 'available_amount': None, 'offer_asset_id': 'c56f33fc6ecfcd0c225c4ab356fee59390af8560be0e930faebe74a6daff7c9b', 'offer_amount': '2000000', 'want_asset_id': 'ab38352559b8b203bde5fddfa0b07d8b2525e132', 'want_amount': '10000000000', 'filled_amount': None, 'txn': { 'offerHash': 'b45ddfb97ade5e0363d9e707dac9ad1c530448db263e86494225a0025006f968', 'hash': '5c4cb1e73b9f2e608b6e768e0654649a4d15e08a7fe63fc536c454fa563a2f0f', 'sha256': 'f0b70640627947584a2976edeb055a124ae85594db76453532b893c05618e6ca', 'invoke': { 'scriptHash': 'a195c1549e7da61b8da315765a790ac7e7633b82', 'operation': 'makeOffer', 'args': [ '592c8a46a0d06c600f06c994d1f25e7283b8a2fe', '9b7cffdaa674beae0f930ebe6085af9093e5fe56b34a5c220ccdcf6efc336fc5', 2000000, '32e125258b7db0a0dffde5bd03b2b859253538ab', 10000000000, '65333061376664662d373739632d343632332d386639322d386139363134353064383433' ] }, 'type': 209, 'version': 1, 'attributes': [ { 'usage': 32, 'data': '592c8a46a0d06c600f06c994d1f25e7283b8a2fe' } ], 'inputs': [ { 'prevHash': '0fcfd792a9d20a7795255d1d3d3927f5968b9953e80d16ffd222656edf8fedbc', 
'prevIndex': 0 }, { 'prevHash': 'c858e4d2af1e1525fa974fb2b1678caca1f81a5056513f922789594939ff713d', 'prevIndex': 35 } ], 'outputs': [ { 'assetId': '602c79718b16e442de58778e148d0b1084e3b2dffd5de6b7b16cee7969282de7', 'scriptHash': 'e707714512577b42f9a011f8b870625429f93573', 'value': 1e-08 } ], 'scripts': [], 'script': '....', 'gas': 0 }, 'cancel_txn': None, 'price': '0.0002', 'status': 'pending', 'created_at': '2018-08-05T10:38:37.731Z', 'transaction_hash': '5c4cb1e73b9f2e608b6e768e0654649a4d15e08a7fe63fc536c454fa563a2f0f', 'trades': [] } ] } :param pair: The trading pair this order is being submitted for. :type pair: str :param side: The side of the trade being submitted i.e. buy or sell :type side: str :param price: The price target for this trade. :type price: float :param quantity: The amount of the asset being exchanged in the trade. :type quantity: float :param private_key: The Private Key (ETH) or KeyPair (NEO) for the wallet being used to sign deposit message. :type private_key: KeyPair or str :param use_native_token: Flag to indicate whether or not to pay fees with the Switcheo native token. :type use_native_token: bool :param order_type: The type of order being submitted, currently this can only be a limit order. :type order_type: str :param otc_address: The address to trade with for Over the Counter exchanges. 
:type otc_address: str :return: Dictionary of order details to specify which parts of the trade will be filled (taker) or open (maker)", "response": "def create_order(self, pair, side, price, quantity, private_key, use_native_token=True, order_type=\"limit\",\n otc_address=None):\n \"\"\"\n Function to create an order for the trade pair and details requested.\n Execution of this function is as follows::\n\n create_order(pair=\"SWTH_NEO\", side=\"buy\", price=0.0002, quantity=100, private_key=kp,\n use_native_token=True, order_type=\"limit\")\n\n The expected return result for this function is as follows::\n\n {\n 'id': '4e6a59fd-d750-4332-aaf0-f2babfa8ad67',\n 'blockchain': 'neo',\n 'contract_hash': 'a195c1549e7da61b8da315765a790ac7e7633b82',\n 'address': 'fea2b883725ef2d194c9060f606cd0a0468a2c59',\n 'side': 'buy',\n 'offer_asset_id': 'c56f33fc6ecfcd0c225c4ab356fee59390af8560be0e930faebe74a6daff7c9b',\n 'want_asset_id': 'ab38352559b8b203bde5fddfa0b07d8b2525e132',\n 'offer_amount': '2000000',\n 'want_amount': '10000000000',\n 'transfer_amount': '0',\n 'priority_gas_amount': '0',\n 'use_native_token': True,\n 'native_fee_transfer_amount': 0,\n 'deposit_txn': None,\n 'created_at': '2018-08-05T10:38:37.714Z',\n 'status': 'pending',\n 'fills': [],\n 'makes': [\n {\n 'id': 'e30a7fdf-779c-4623-8f92-8a961450d843',\n 'offer_hash': None,\n 'available_amount': None,\n 'offer_asset_id': 'c56f33fc6ecfcd0c225c4ab356fee59390af8560be0e930faebe74a6daff7c9b',\n 'offer_amount': '2000000',\n 'want_asset_id': 'ab38352559b8b203bde5fddfa0b07d8b2525e132',\n 'want_amount': '10000000000',\n 'filled_amount': None,\n 'txn': {\n 'offerHash': 'b45ddfb97ade5e0363d9e707dac9ad1c530448db263e86494225a0025006f968',\n 'hash': '5c4cb1e73b9f2e608b6e768e0654649a4d15e08a7fe63fc536c454fa563a2f0f',\n 'sha256': 'f0b70640627947584a2976edeb055a124ae85594db76453532b893c05618e6ca',\n 'invoke': {\n 'scriptHash': 'a195c1549e7da61b8da315765a790ac7e7633b82',\n 'operation': 'makeOffer',\n 'args': [\n 
'592c8a46a0d06c600f06c994d1f25e7283b8a2fe',\n '9b7cffdaa674beae0f930ebe6085af9093e5fe56b34a5c220ccdcf6efc336fc5',\n 2000000,\n '32e125258b7db0a0dffde5bd03b2b859253538ab',\n 10000000000,\n '65333061376664662d373739632d343632332d386639322d386139363134353064383433'\n ]\n },\n 'type': 209,\n 'version': 1,\n 'attributes': [\n {\n 'usage': 32,\n 'data': '592c8a46a0d06c600f06c994d1f25e7283b8a2fe'\n }\n ],\n 'inputs': [\n {\n 'prevHash': '0fcfd792a9d20a7795255d1d3d3927f5968b9953e80d16ffd222656edf8fedbc',\n 'prevIndex': 0\n }, {\n 'prevHash': 'c858e4d2af1e1525fa974fb2b1678caca1f81a5056513f922789594939ff713d',\n 'prevIndex': 35\n }\n ],\n 'outputs': [\n {\n 'assetId': '602c79718b16e442de58778e148d0b1084e3b2dffd5de6b7b16cee7969282de7',\n 'scriptHash': 'e707714512577b42f9a011f8b870625429f93573',\n 'value': 1e-08\n }\n ],\n 'scripts': [],\n 'script': '....',\n 'gas': 0\n },\n 'cancel_txn': None,\n 'price': '0.0002',\n 'status': 'pending',\n 'created_at': '2018-08-05T10:38:37.731Z',\n 'transaction_hash': '5c4cb1e73b9f2e608b6e768e0654649a4d15e08a7fe63fc536c454fa563a2f0f',\n 'trades': []\n }\n ]\n }\n\n :param pair: The trading pair this order is being submitted for.\n :type pair: str\n :param side: The side of the trade being submitted i.e. 
buy or sell\n :type side: str\n :param price: The price target for this trade.\n :type price: float\n :param quantity: The amount of the asset being exchanged in the trade.\n :type quantity: float\n :param private_key: The Private Key (ETH) or KeyPair (NEO) for the wallet being used to sign deposit message.\n :type private_key: KeyPair or str\n :param use_native_token: Flag to indicate whether or not to pay fees with the Switcheo native token.\n :type use_native_token: bool\n :param order_type: The type of order being submitted, currently this can only be a limit order.\n :type order_type: str\n :param otc_address: The address to trade with for Over the Counter exchanges.\n :type otc_address: str\n :return: Dictionary of order details to specify which parts of the trade will be filled (taker) or open (maker)\n \"\"\"\n if side.lower() not in [\"buy\", \"sell\"]:\n raise ValueError(\"Allowed trade types are buy or sell, you entered {}\".format(side.lower()))\n if order_type.lower() not in [\"limit\", \"market\", \"otc\"]:\n raise ValueError(\"Allowed order types are limit, market or otc, you entered {}\".format(order_type.lower()))\n if order_type.lower() == \"otc\" and otc_address is None:\n raise ValueError(\"OTC Address is required when trade type is otc (over the counter).\")\n order_params = {\n \"blockchain\": self.blockchain,\n \"pair\": pair,\n \"side\": side,\n \"price\": '{:.8f}'.format(price) if order_type.lower() != \"market\" else None,\n \"quantity\": str(self.blockchain_amount[self.blockchain](quantity)),\n \"use_native_tokens\": use_native_token,\n \"order_type\": order_type,\n \"timestamp\": get_epoch_milliseconds(),\n \"contract_hash\": self.contract_hash\n }\n api_params = self.sign_create_order_function[self.blockchain](order_params, private_key)\n return self.request.post(path='/orders', json_data=api_params)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nexecutes the order and returns the order record.", "response": "def 
execute_order(self, order_params, private_key):\n \"\"\"\n This function executes the order created before it and signs the transaction to be submitted to the blockchain.\n Execution of this function is as follows::\n\n execute_order(order_params=create_order, private_key=kp)\n\n The expected return result for this function is the same as the execute_order function::\n\n {\n 'id': '4e6a59fd-d750-4332-aaf0-f2babfa8ad67',\n 'blockchain': 'neo',\n 'contract_hash': 'a195c1549e7da61b8da315765a790ac7e7633b82',\n 'address': 'fea2b883725ef2d194c9060f606cd0a0468a2c59',\n 'side': 'buy',\n 'offer_asset_id': 'c56f33fc6ecfcd0c225c4ab356fee59390af8560be0e930faebe74a6daff7c9b',\n 'want_asset_id': 'ab38352559b8b203bde5fddfa0b07d8b2525e132',\n 'offer_amount': '2000000',\n 'want_amount': '10000000000',\n 'transfer_amount': '0',\n 'priority_gas_amount': '0',\n 'use_native_token': True,\n 'native_fee_transfer_amount': 0,\n 'deposit_txn': None,\n 'created_at': '2018-08-05T10:38:37.714Z',\n 'status': 'processed',\n 'fills': [],\n 'makes': [\n {\n 'id': 'e30a7fdf-779c-4623-8f92-8a961450d843',\n 'offer_hash': 'b45ddfb97ade5e0363d9e707dac9ad1c530448db263e86494225a0025006f968',\n 'available_amount': '2000000',\n 'offer_asset_id': 'c56f33fc6ecfcd0c225c4ab356fee59390af8560be0e930faebe74a6daff7c9b',\n 'offer_amount': '2000000',\n 'want_asset_id': 'ab38352559b8b203bde5fddfa0b07d8b2525e132',\n 'want_amount': '10000000000',\n 'filled_amount': '0.0',\n 'txn': None,\n 'cancel_txn': None,\n 'price': '0.0002',\n 'status': 'confirming',\n 'created_at': '2018-08-05T10:38:37.731Z',\n 'transaction_hash': '5c4cb1e73b9f2e608b6e768e0654649a4d15e08a7fe63fc536c454fa563a2f0f',\n 'trades': []\n }\n ]\n }\n\n :param order_params: Dictionary generated from the create order function.\n :type order_params: dict\n :param private_key: The Private Key (ETH) or KeyPair (NEO) for the wallet being used to sign deposit message.\n :type private_key: KeyPair or str\n :return: Dictionary of the transaction on the order 
book.\n \"\"\"\n order_id = order_params['id']\n api_params = self.sign_execute_order_function[self.blockchain](order_params, private_key)\n return self.request.post(path='/orders/{}/broadcast'.format(order_id), json_data=api_params)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef withdrawal(self, asset, amount, private_key):\n create_withdrawal = self.create_withdrawal(asset=asset, amount=amount, private_key=private_key)\n return self.execute_withdrawal(withdrawal_params=create_withdrawal, private_key=private_key)", "response": "This function creates a new withdrawal request for the asset and amount of coins and tokens."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nfunctioning to create a withdrawal request by generating a withdrawal ID request from the Switcheo API. Execution of this function is as follows:: create_withdrawal(asset=\"SWTH\", amount=1.1, private_key=kp) The expected return result for this function is as follows:: { 'id': 'a5a4d396-fa9f-4191-bf50-39a3d06d5e0d' } :param asset: Script Hash of asset ID from the available products. :type asset: str :param amount: The amount of coins/tokens to be withdrawn. :type amount: float :param private_key: The Private Key (ETH) or KeyPair (NEO) for the wallet being used to sign deposit message. 
:type private_key: KeyPair or str :return: Dictionary with the withdrawal ID generated by the Switcheo API.", "response": "def create_withdrawal(self, asset, amount, private_key):\n \"\"\"\n Function to create a withdrawal request by generating a withdrawal ID request from the Switcheo API.\n Execution of this function is as follows::\n\n create_withdrawal(asset=\"SWTH\", amount=1.1, private_key=kp)\n\n The expected return result for this function is as follows::\n\n {\n 'id': 'a5a4d396-fa9f-4191-bf50-39a3d06d5e0d'\n }\n\n :param asset: Script Hash of asset ID from the available products.\n :type asset: str\n :param amount: The amount of coins/tokens to be withdrawn.\n :type amount: float\n :param private_key: The Private Key (ETH) or KeyPair (NEO) for the wallet being used to sign deposit message.\n :type private_key: KeyPair or str\n :return: Dictionary with the withdrawal ID generated by the Switcheo API.\n \"\"\"\n signable_params = {\n 'blockchain': self.blockchain,\n 'asset_id': asset,\n 'amount': str(self.blockchain_amount[self.blockchain](amount)),\n 'timestamp': get_epoch_milliseconds(),\n 'contract_hash': self.contract_hash\n }\n api_params = self.sign_create_withdrawal_function[self.blockchain](signable_params, private_key)\n return self.request.post(path='/withdrawals', json_data=api_params)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn an adaptive rejection sampling of the given logpdf and target distribution.", "response": "def adaptive_rejection_sampling(logpdf: callable,\n a: float, b: float,\n domain: Tuple[float, float],\n n_samples: int,\n random_stream=None):\n \"\"\"\n Adaptive rejection sampling samples exactly (all samples are i.i.d) and efficiently from any univariate log-concave distribution. 
The basic idea is to successively determine an envelope of straight-line segments to construct an increasingly accurate approximation of the logarithm.\n It does not require any normalization of the target distribution.\n\n Parameters\n ----------\n logpdf: callable\n Univariate function that computes :math:`log(f(u))`\n for a given :math:`u`, where :math:`f(u)` is proportional\n to the target density to sample from.\n\n a: float\n Lower starting point used to initialize the hulls.\n Must lie in the domain of the logpdf and it\n must hold: :math:`a < b`.\n\n b: float\n Upper starting point used to initialize the hulls.\n Must lie in the domain of the logpdf and it\n must hold: :math:`a < b`.\n\n domain : Tuple[float, float]\n Domain of `logpdf`.\n May be unbounded on either or both sides,\n in which case `(float(\"-inf\"), float(\"inf\"))`\n would be passed.\n If this domain is unbounded to the left,\n the derivative of the logpdf\n for x<= a must be positive.\n If this domain is unbounded to the right the derivative of the logpdf for x>=b\n must be negative.\n\n n_samples: int\n Number of samples to draw.\n\n random_stream : RandomState, optional\n Seeded random number generator object with same interface as a NumPy\n RandomState object. 
Defaults to `None` in which case a NumPy\n RandomState seeded from `/dev/urandom` if available or the clock if not\n will be used.\n\n Returns\n ----------\n samples : list\n A list of samples drawn from the\n target distribution :math:`f`\n with the given `logpdf`.\n\n Examples\n ----------\n Sampling from a simple gaussian, adaptive rejection sampling style.\n We use the logpdf of a standard gaussian and this small code snippet\n demonstrates that our sample approximation accurately approximates the mean:\n\n >>> from math import isclose\n >>> from numpy import log, exp, mean\n >>> gaussian_logpdf = lambda x, sigma=1: log(exp(-x ** 2 / sigma))\n >>> a, b = -2, 2 # a < b must hold\n >>> domain = (float(\"-inf\"), float(\"inf\"))\n >>> n_samples = 10000\n >>> samples = adaptive_rejection_sampling(logpdf=gaussian_logpdf, a=a, b=b, domain=domain, n_samples=n_samples)\n >>> isclose(mean(samples), 0.0, abs_tol=1e-02)\n True\n\n \"\"\"\n assert(hasattr(logpdf, \"__call__\"))\n assert(len(domain) == 2), \"Domain must be two-element iterable.\"\n assert(domain[1] >= domain[0]), \"Invalid domain, it must hold: domain[1] >= domain[0].\"\n assert(n_samples >= 0), \"Number of samples must be >= 0.\"\n\n if random_stream is None:\n random_stream = RandomState()\n\n if a >= b or isinf(a) or isinf(b) or a < domain[0] or b > domain[1]:\n raise ValueError(\"invalid a and b\")\n\n n_derivative_steps = 1e-3 * (b - a)\n\n S = (a, a + n_derivative_steps, b - n_derivative_steps, b)\n\n if domain[0] == float(\"-inf\"):\n # ensure positive derivative at 'a'\n derivative_sign = sign(logpdf(a + n_derivative_steps) - logpdf(a))\n positive_derivative = derivative_sign > 0\n\n assert(positive_derivative), \"derivative at 'a' must be positive, since the domain is unbounded to the left\"\n\n if domain[1] == float(\"inf\"):\n # ensure negative derivative at 'b'\n derivative_sign = sign(logpdf(b) - logpdf(b - n_derivative_steps))\n negative_derivative = derivative_sign < 0\n\n 
assert(negative_derivative), \"derivative at 'b' must be negative, since the domain is unbounded to the right\"\n\n # initialize a mesh on which to create upper & lower hulls\n n_initial_mesh_points = 3\n\n S = unique(\n (S[0], *(linspace(S[1], S[2], num=n_initial_mesh_points + 2)), S[3])\n )\n\n fS = tuple(logpdf(s) for s in S)\n\n lower_hull, upper_hull = compute_hulls(S=S, fS=fS, domain=domain)\n\n samples = []\n\n while len(samples) < n_samples:\n\n mesh_changed = False\n\n x = sample_upper_hull(upper_hull, random_stream=random_stream)\n\n lh_val, uh_val = evaluate_hulls(x, lower_hull, upper_hull)\n\n U = random_stream.rand()\n\n if log(U) <= lh_val - uh_val:\n # accept u is below lower bound\n samples.append(x)\n\n elif log(U) <= logpdf(x) - uh_val:\n # accept, u is between lower bound and f\n samples.append(x)\n\n mesh_changed = True\n else:\n # reject, u is between f and upper_bound\n mesh_changed = True\n\n if mesh_changed:\n S = sorted([*S, x])\n fS = tuple(logpdf(s) for s in S)\n lower_hull, upper_hull = compute_hulls(S=S, fS=fS, domain=domain)\n return samples"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef sign_create_cancellation(cancellation_params, key_pair):\n encoded_message = encode_message(cancellation_params)\n create_params = cancellation_params.copy()\n create_params['address'] = neo_get_scripthash_from_private_key(private_key=key_pair.PrivateKey).ToString()\n create_params['signature'] = sign_message(encoded_message=encoded_message,\n private_key_hex=private_key_to_hex(key_pair=key_pair))\n return create_params", "response": "This function is used to sign the parameters required to create a cancellation request from Switcheo Exchange."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nfunctioning to sign the parameters required to execute a cancellation request on the Switcheo Exchange. 
Execution of this function is as follows:: sign_execute_cancellation(cancellation_params=signable_params, key_pair=key_pair) The expected return result for this function is as follows:: { 'signature': '6a40d6c011b7517f8fd3f2d0de32dd486adfd1d424d06d56c80eb....' } :param cancellation_params: Parameters the Switcheo Exchange returns from the create cancellation. :type cancellation_params: dict :param key_pair: The KeyPair for the wallet being used to sign deposit message. :type key_pair: KeyPair :return: Dictionary of signed message to send to the Switcheo API.", "response": "def sign_execute_cancellation(cancellation_params, key_pair):\n \"\"\"\n Function to sign the parameters required to execute a cancellation request on the Switcheo Exchange.\n Execution of this function is as follows::\n\n sign_execute_cancellation(cancellation_params=signable_params, key_pair=key_pair)\n\n The expected return result for this function is as follows::\n\n {\n 'signature': '6a40d6c011b7517f8fd3f2d0de32dd486adfd1d424d06d56c80eb....'\n }\n\n :param cancellation_params: Parameters the Switcheo Exchange returns from the create cancellation.\n :type cancellation_params: dict\n :param key_pair: The KeyPair for the wallet being used to sign deposit message.\n :type key_pair: KeyPair\n :return: Dictionary of signed message to send to the Switcheo API.\n \"\"\"\n signature = sign_transaction(transaction=cancellation_params['transaction'],\n private_key_hex=private_key_to_hex(key_pair=key_pair))\n return {'signature': signature}"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef sign_create_deposit(deposit_params, key_pair):\n encoded_message = encode_message(deposit_params)\n create_params = deposit_params.copy()\n create_params['address'] = neo_get_scripthash_from_private_key(private_key=key_pair.PrivateKey).ToString()\n create_params['signature'] = sign_message(encoded_message=encoded_message,\n 
private_key_hex=private_key_to_hex(key_pair=key_pair))\n return create_params", "response": "This function generates a transaction request from the Switcheo API and signs it."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef sign_execute_deposit(deposit_params, key_pair):\n signature = sign_transaction(transaction=deposit_params['transaction'],\n private_key_hex=private_key_to_hex(key_pair=key_pair))\n return {'signature': signature}", "response": "This function is used to execute the deposit request by signing the transaction generated by the create deposit function."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef sign_create_order(order_params, key_pair):\n encoded_message = encode_message(order_params)\n create_params = order_params.copy()\n create_params['address'] = neo_get_scripthash_from_private_key(private_key=key_pair.PrivateKey).ToString()\n create_params['signature'] = sign_message(encoded_message=encoded_message,\n private_key_hex=private_key_to_hex(key_pair=key_pair))\n return create_params", "response": "This function is used to sign the create order parameters and send to Switcheo API."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef sign_execute_order(order_params, key_pair):\n execute_params = {\n 'signatures': {\n 'fill_groups': {},\n 'fills': sign_txn_array(messages=order_params['fills'],\n private_key_hex=private_key_to_hex(key_pair=key_pair)),\n 'makes': sign_txn_array(messages=order_params['makes'],\n private_key_hex=private_key_to_hex(key_pair=key_pair))\n }\n }\n return execute_params", "response": "This function is used to sign the order request by signing the transaction generated by the create order function."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nfunction to create the withdrawal request by signing the parameters necessary for withdrawal. 
Execution of this function is as follows:: sign_create_withdrawal(withdrawal_params=signable_params, private_key=eth_private_key) The expected return result for this function is as follows:: { 'blockchain': 'neo', 'asset_id': 'SWTH', 'amount': '100', 'timestamp': 1542090737236, 'contract_hash': 'a195c1549e7da61b8da315765a790ac7e7633b82', 'address': 'fea2b883725ef2d194c9060f606cd0a0468a2c59', 'signature': 'f66d604c0a80940bf70ce9e13c0fd47bc79de....' } :param withdrawal_params: Dictionary specifications for withdrawal from the Switcheo Smart Contract. :type withdrawal_params: dict :param key_pair: The NEO key pair to be used to sign messages for the NEO Blockchain. :type key_pair: KeyPair :return: Dictionary of parameters to be sent to the Switcheo API", "response": "def sign_create_withdrawal(withdrawal_params, key_pair):\n \"\"\"\n Function to create the withdrawal request by signing the parameters necessary for withdrawal.\n Execution of this function is as follows::\n\n sign_create_withdrawal(withdrawal_params=signable_params, private_key=eth_private_key)\n\n The expected return result for this function is as follows::\n\n {\n 'blockchain': 'neo',\n 'asset_id': 'SWTH',\n 'amount': '100',\n 'timestamp': 1542090737236,\n 'contract_hash': 'a195c1549e7da61b8da315765a790ac7e7633b82',\n 'address': 'fea2b883725ef2d194c9060f606cd0a0468a2c59',\n 'signature': 'f66d604c0a80940bf70ce9e13c0fd47bc79de....'\n }\n\n :param withdrawal_params: Dictionary specifications for withdrawal from the Switcheo Smart Contract.\n :type withdrawal_params: dict\n :param key_pair: The NEO key pair to be used to sign messages for the NEO Blockchain.\n :type key_pair: KeyPair\n :return: Dictionary of parameters to be sent to the Switcheo API\n \"\"\"\n encoded_message = encode_message(withdrawal_params)\n create_params = withdrawal_params.copy()\n create_params['address'] = neo_get_scripthash_from_private_key(private_key=key_pair.PrivateKey).ToString()\n create_params['signature'] = 
sign_message(encoded_message=encoded_message,\n                                              private_key_hex=private_key_to_hex(key_pair=key_pair))\n    return create_params"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef sign_execute_withdrawal(withdrawal_params, key_pair):\n withdrawal_id = withdrawal_params['id']\n signable_params = {\n 'id': withdrawal_id,\n 'timestamp': get_epoch_milliseconds()\n }\n encoded_message = encode_message(signable_params)\n execute_params = signable_params.copy()\n execute_params['signature'] = sign_message(encoded_message=encoded_message,\n private_key_hex=private_key_to_hex(key_pair=key_pair))\n return execute_params", "response": "This function is used to sign the withdrawal request for the NEO Blockchain."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef register(cls, range_mixin):\n\n def decorator(range_set_mixin):\n cls.add(range_mixin, range_set_mixin)\n return range_set_mixin\n return decorator", "response": "A decorator for registering range set mixins for global use. This works the same way as the meta range set 
add decorator."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef pca(onarray, offarray, n=10, exchs=None, pc=False, mode='mean'):\n logger = getLogger('decode.models.pca')\n logger.info('n_components exchs mode')\n if exchs is None:\n exchs = [16, 44, 46]\n logger.info('{} {} {}'.format(n, exchs, mode))\n\n offid = np.unique(offarray.scanid)\n onid = np.unique(onarray.scanid)\n\n onarray = onarray.copy() # Xarray\n onarray[:, exchs] = 0\n onvalues = onarray.values\n onscanid = onarray.scanid.values\n offarray = offarray.copy() # Xarray\n offarray[:, exchs] = 0\n offvalues = offarray.values\n offscanid = offarray.scanid.values\n\n Ps, Cs = [], []\n Xatm = dc.full_like(onarray, onarray)\n Xatmvalues = Xatm.values\n model = TruncatedSVD(n_components=n)\n for i in onid:\n leftid = np.searchsorted(offid, i) - 1\n rightid = np.searchsorted(offid, i)\n\n Xon = onvalues[onscanid == i]\n if leftid == -1:\n Xoff = offvalues[offscanid == offid[rightid]]\n Xoff_m = getattr(np, 'nan'+mode)(Xoff, axis=0) if mode is not None else 0\n Xon_m = Xoff_m\n model.fit(Xoff - Xoff_m)\n elif rightid == len(offid):\n Xoff = offvalues[offscanid == offid[leftid]]\n Xoff_m = getattr(np, 'nan'+mode)(Xoff, axis=0) if mode is not None else 0\n Xon_m = Xoff_m\n model.fit(Xoff - Xoff_m)\n else:\n Xoff_l = offvalues[offscanid == offid[leftid]]\n Xoff_lm = getattr(np, 'nan'+mode)(Xoff_l, axis=0) if mode is not None else 0\n Xoff_r = offvalues[offscanid == offid[rightid]]\n Xoff_rm = getattr(np, 'nan'+mode)(Xoff_r, axis=0) if mode is not None else 0\n Xon_m = getattr(np, 'nan'+mode)(np.vstack([Xoff_l, Xoff_r]), axis=0) if mode is not None else 0\n model.fit(np.vstack([Xoff_l - Xoff_lm, Xoff_r - Xoff_rm]))\n P = model.components_\n C = model.transform(Xon - Xon_m)\n\n Xatmvalues[onscanid == i] = C @ P + Xon_m\n # Xatms.append(dc.full_like(Xon, C @ P + Xon_m.values))\n Ps.append(P)\n Cs.append(C)\n\n if pc:\n return Xatm, Ps, Cs\n 
else:\n return Xatm", "response": "This function calculates the principal components at each time."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef r_division(onarray, offarray, rarray, mode='mean'):\n logger = getLogger('decode.models.r_division')\n logger.info('mode')\n logger.info('{}'.format(mode))\n\n offid = np.unique(offarray.scanid)\n onid = np.unique(onarray.scanid)\n rid = np.unique(rarray.scanid)\n\n onarray = onarray.copy() # Xarray\n onvalues = onarray.values\n onscanid = onarray.scanid.values\n offarray = offarray.copy() # Xarray\n offvalues = offarray.values\n offscanid = offarray.scanid.values\n rarray = rarray.copy() # Xarray\n rvalues = rarray.values\n rscanid = rarray.scanid.values\n for i in onid:\n rleftid = np.searchsorted(rid, i) - 1\n rrightid = np.searchsorted(rid, i)\n\n if rleftid == -1:\n Xr = rvalues[rscanid == rid[rrightid]]\n Xr_m = getattr(np, 'nan'+mode)(Xr, axis=0)\n elif rrightid == len(rid):\n Xr = rvalues[rscanid == rid[rleftid]]\n Xr_m = getattr(np, 'nan'+mode)(Xr, axis=0)\n else:\n Xr_l = rvalues[rscanid == rid[rleftid]]\n Xr_r = rvalues[rscanid == rid[rrightid]]\n Xr_m = getattr(np, 'nan'+mode)(np.vstack([Xr_l, Xr_r]), axis=0)\n onvalues[onscanid == i] /= Xr_m\n\n for j in offid:\n rleftid = np.searchsorted(rid, j) - 1\n rrightid = np.searchsorted(rid, j)\n\n Xoff_m = getattr(np, 'nan'+mode)(offvalues[offscanid == j], axis=0)\n if rleftid == -1:\n Xr = rvalues[rscanid == rid[rrightid]]\n Xr_m = getattr(np, 'nan'+mode)(Xr, axis=0)\n elif rrightid == len(rid):\n Xr = rvalues[rscanid == rid[rleftid]]\n Xr_m = getattr(np, 'nan'+mode)(Xr, axis=0)\n else:\n Xr_l = rvalues[rscanid == rid[rleftid]]\n Xr_r = rvalues[rscanid == rid[rrightid]]\n Xr_m = getattr(np, 'nan'+mode)(np.vstack([Xr_l, Xr_r]), axis=0)\n offvalues[offscanid == j] /= Xr_m\n\n Xon_rdiv = dc.full_like(onarray, onarray)\n Xoff_rdiv = dc.full_like(offarray, offarray)\n Xonoff_rdiv = dc.concat([Xon_rdiv, 
Xoff_rdiv], dim='t')\n Xonoff_rdiv_sorted = Xonoff_rdiv[np.argsort(Xonoff_rdiv.time.values)]\n\n scantype = Xonoff_rdiv_sorted.scantype.values\n newscanid = np.cumsum(np.hstack([False, scantype[1:] != scantype[:-1]]))\n onmask = np.in1d(Xonoff_rdiv_sorted.scanid, onid)\n offmask = np.in1d(Xonoff_rdiv_sorted.scanid, offid)\n Xon_rdiv = Xonoff_rdiv_sorted[onmask]\n Xoff_rdiv = Xonoff_rdiv_sorted[offmask]\n Xon_rdiv.coords.update({'scanid': ('t', newscanid[onmask])})\n Xoff_rdiv.coords.update({'scanid': ('t', newscanid[offmask])})\n\n return Xon_rdiv, Xoff_rdiv", "response": "Apply R division to the on - point and off - point observations."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef gauss_fit(map_data, chs=None, mode='deg', amplitude=1, x_mean=0, y_mean=0, x_stddev=None, y_stddev=None, theta=None, cov_matrix=None, noise=0, **kwargs):\n\n if chs is None:\n chs = np.ogrid[0:63] # the number of channels would be changed\n\n if len(chs) > 1:\n for n,ch in enumerate(chs):\n subdata = np.transpose(np.full_like(map_data[:, :, ch], map_data.values[:, :, ch]))\n subdata[np.isnan(subdata)] = 0\n\n if mode == 'deg':\n mX, mY = np.meshgrid(map_data.x, map_data.y)\n\n elif mode == 'pix':\n mX, mY = np.mgrid[0:len(map_data.y), 0:len(map_data.x)]\n\n g_init = models.Gaussian2D(amplitude=np.nanmax(subdata),\n x_mean=x_mean, y_mean=y_mean,\n x_stddev=x_stddev, y_stddev=y_stddev, theta=theta,\n cov_matrix=cov_matrix, **kwargs) + models.Const2D(noise)\n fit_g = fitting.LevMarLSQFitter()\n g = fit_g(g_init, mX, mY, subdata)\n\n g_init2 = models.Gaussian2D(amplitude=np.nanmax(subdata-g.amplitude_1),\n x_mean=x_mean, y_mean=y_mean,\n x_stddev=x_stddev, y_stddev=y_stddev, theta=theta,\n cov_matrix=cov_matrix, **kwargs)\n fit_g2 = fitting.LevMarLSQFitter()\n g2 = fit_g2(g_init2, mX, mY, subdata)\n\n if n == 0:\n results = np.array([g2(mX,mY)])\n peaks = np.array([g2.amplitude.value])\n x_means = np.array([g2.x_mean.value])\n y_means = 
np.array([g2.y_mean.value])\n x_stddevs = np.array([g2.x_stddev.value])\n y_stddevs = np.array([g2.y_stddev.value])\n thetas = np.array([g2.theta.value])\n if fit_g2.fit_info['param_cov'] is None:\n uncerts = np.array([0])\n else:\n error = np.diag(fit_g2.fit_info['param_cov'])**0.5\n uncerts = np.array([error[0]])\n\n else:\n results = np.append(results, [g2(mX,mY)], axis=0)\n peaks = np.append(peaks, [g2.amplitude.value], axis=0)\n x_means = np.append(x_means, [g2.x_mean.value], axis=0)\n y_means = np.append(y_means, [g2.y_mean.value], axis=0)\n x_stddevs = np.append(x_stddevs, [g2.x_stddev.value], axis=0)\n y_stddevs = np.append(y_stddevs, [g2.y_stddev.value], axis=0)\n thetas = np.append(thetas, [g2.theta.value], axis=0)\n\n if fit_g2.fit_info['param_cov'] is None:\n uncerts = np.append(uncerts, [0], axis=0)\n else:\n error = np.diag(fit_g2.fit_info['param_cov'])**0.5\n uncerts = np.append(uncerts, [error[0]], axis=0)\n\n result = map_data.copy()\n result.values = np.transpose(results)\n result.attrs.update({'peak': peaks, 'x_mean': x_means, 'y_mean': y_means,\n 'x_stddev': x_stddevs, 'y_stddev': y_stddevs,\n 'theta': thetas, 'uncert': uncerts})\n\n else:\n subdata = np.transpose(np.full_like(map_data[:, :, 0], map_data.values[:, :, 0]))\n subdata[np.isnan(subdata)] = 0\n\n if mode == 'deg':\n mX, mY = np.meshgrid(map_data.x, map_data.y)\n\n elif mode == 'pix':\n mX, mY = np.mgrid[0:len(map_data.y), 0:len(map_data.x)]\n\n g_init = models.Gaussian2D(amplitude=np.nanmax(subdata),\n x_mean=x_mean, y_mean=y_mean,\n x_stddev=x_stddev, y_stddev=y_stddev, theta=theta,\n cov_matrix=cov_matrix, **kwargs) + models.Const2D(noise)\n fit_g = fitting.LevMarLSQFitter()\n g = fit_g(g_init, mX, mY, subdata)\n\n g_init2 = models.Gaussian2D(amplitude=np.nanmax(subdata-g.amplitude_1),\n x_mean=x_mean, y_mean=y_mean,\n x_stddev=x_stddev, y_stddev=y_stddev, theta=theta,\n cov_matrix=cov_matrix, **kwargs)\n fit_g2 = fitting.LevMarLSQFitter()\n g2 = fit_g2(g_init2, mX, mY, 
subdata)\n\n results = np.array([g2(mX, mY)])\n peaks = np.array([g2.amplitude.value])\n x_means = np.array([g2.x_mean.value])\n y_means = np.array([g2.y_mean.value])\n x_stddevs = np.array([g2.x_stddev.value])\n y_stddevs = np.array([g2.y_stddev.value])\n thetas = np.array([g2.theta.value])\n error = np.diag(fit_g2.fit_info['param_cov'])**0.5\n uncerts = np.array(error[0])\n\n result = map_data.copy()\n result.values = np.transpose(results)\n result.attrs.update({'peak': peaks, 'x_mean': x_means, 'y_mean': y_means,\n 'x_stddev': x_stddevs, 'y_stddev': y_stddevs,\n 'theta': thetas, 'uncert': uncerts})\n\n return result", "response": "make a 2D Gaussian model and fit the observed data with the model."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _filter(self, text):\n\n if self.urls:\n text = RE_LINK.sub('', text)\n if self.emails:\n text = RE_MAIL.sub('', text)\n return text", "response": "Filter out the URL and email addresses."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn parent directory of mac. 
app", "response": "def get_mac_dot_app_dir(directory):\n    \"\"\"Returns parent directory of mac .app\n\n    Args:\n\n        directory (str): Current directory\n\n    Returns:\n\n        (str): Parent directory of mac .app\n    \"\"\"\n    return os.path.dirname(os.path.dirname(os.path.dirname(directory)))"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef copy_function(func, name=None):\n code = func.__code__\n newname = name or func.__name__\n newcode = CodeType(\n code.co_argcount,\n code.co_kwonlyargcount,\n code.co_nlocals,\n code.co_stacksize,\n code.co_flags,\n code.co_code,\n code.co_consts,\n code.co_names,\n code.co_varnames,\n code.co_filename,\n newname,\n code.co_firstlineno,\n code.co_lnotab,\n code.co_freevars,\n code.co_cellvars,\n )\n newfunc = FunctionType(\n newcode,\n func.__globals__,\n newname,\n func.__defaults__,\n func.__closure__,\n )\n newfunc.__dict__.update(func.__dict__)\n return newfunc", "response": "Copy a function object with a different name."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn a context manager where only one thread is allocated to a process.", "response": "def one_thread_per_process():\n \"\"\"Return a context manager where only one thread is allocated to a process.\n\n This function is intended to be used as a with statement like::\n\n >>> with one_thread_per_process():\n ... 
do_something() # one thread per process\n\n Notes:\n This function only works when MKL (Intel Math Kernel Library)\n is installed and used in, for example, NumPy and SciPy.\n Otherwise this function does nothing.\n\n \"\"\"\n try:\n import mkl\n is_mkl = True\n except ImportError:\n is_mkl = False\n\n if is_mkl:\n n_threads = mkl.get_max_threads()\n mkl.set_num_threads(1)\n try:\n # block nested in the with statement\n yield\n finally:\n # revert to the original value\n mkl.set_num_threads(n_threads)\n else:\n yield"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef compute_hulls(S, fS, domain):\n assert(len(S) == len(fS))\n assert(len(domain) == 2)\n\n lower_hull = []\n\n for li in range(len(S) - 1):\n m = (fS[li + 1] - fS[li]) / (S[li + 1] - S[li])\n b = fS[li] - m * S[li]\n left = S[li]\n right = S[li + 1]\n lower_hull.append(HullNode(m=m, b=b, left=left, right=right))\n\n # compute upper piecewise-linear hull\n\n # expected final length of upper hull after full computation\n n_upper_segments = 2 * (len(S) - 2) + isinf(domain[0]) + isinf(domain[1])\n\n upper_hull = []\n\n if isinf(domain[0]):\n # first line (from -infinity)\n\n m = (fS[1] - fS[0]) / (S[1] - S[0])\n b = fS[0] - m * S[0]\n pr = compute_segment_log_prob(float(\"-inf\"), S[0], m, b)\n\n upper_hull.append(\n HullNode(m=m, b=b, pr=pr, left=float(\"-inf\"), right=S[0])\n )\n\n # second line\n m = (fS[2] - fS[1]) / (S[2] - S[1])\n b = fS[1] - m * S[1]\n pr = compute_segment_log_prob(S[0], S[1], m, b)\n\n upper_hull.append(HullNode(m=m, b=b, pr=pr, left=S[0], right=S[1]))\n\n # interior lines\n # there are two lines between each abscissa\n for li in range(1, len(S) - 2):\n\n m1 = (fS[li] - fS[li - 1]) / (S[li] - S[li - 1])\n b1 = fS[li] - m1 * S[li]\n\n m2 = (fS[li + 2] - fS[li + 1]) / (S[li + 2] - S[li + 1])\n b2 = fS[li + 1] - m2 * S[li + 1]\n\n if isinf(m1) and isinf(m2):\n raise ValueError(\"both hull slopes are infinite\")\n\n 
dx1 = S[li] - S[li - 1]\n df1 = fS[li] - fS[li - 1]\n dx2 = S[li + 2] - S[li + 1]\n df2 = fS[li + 2] - fS[li + 1]\n\n f1 = fS[li]\n f2 = fS[li + 1]\n x1 = S[li]\n x2 = S[li + 1]\n\n # more numerically stable than above\n ix = ((f1 * dx1 - df1 * x1) * dx2 - (f2 * dx2 - df2 * x2) * dx1) / (df2 * dx1 - df1 * dx2)\n\n if isinf(m1) or abs(m1 - m2) < 10.0 ** 8 * eps(m1):\n ix = S[li]\n pr1 = float(\"-inf\")\n pr2 = compute_segment_log_prob(ix, S[li + 1], m2, b2)\n elif isinf(m2):\n ix = S[li + 1]\n pr1 = compute_segment_log_prob(S[li], ix, m1, b1)\n pr2 = float(\"-inf\")\n else:\n if isinf(ix):\n raise ValueError(\"Non finite intersection\")\n\n if abs(ix - S[li]) < 10.0 ** 12 * eps(S[li]):\n ix = S[li]\n elif abs(ix - S[li + 1]) < 10.0**12 * eps(S[li + 1]):\n ix = S[li + 1]\n\n if ix < S[li] or ix > S[li + 1]:\n raise ValueError(\"Intersection out of bounds -- logpdf is not concave\")\n\n pr1 = compute_segment_log_prob(S[li], ix, m1, b1)\n pr2 = compute_segment_log_prob(ix, S[li + 1], m2, b2)\n\n upper_hull.append(HullNode(m=m1, b=b1, pr=pr1, left=S[li], right=ix))\n upper_hull.append(HullNode(m=m2, b=b2, pr=pr2, left=ix, right=S[li + 1]))\n\n # second last line\n m = (fS[-2] - fS[-3]) / float(S[-2] - S[-3])\n b = fS[-2] - m * S[-2]\n pr = compute_segment_log_prob(S[-2], S[-1], m, b)\n\n upper_hull.append(HullNode(m=m, b=b, pr=pr, left=S[-2], right=S[-1]))\n\n if isinf(domain[1]):\n # last line (to infinity)\n m = (fS[-1] - fS[-2]) / (S[-1] - S[-2])\n b = fS[-1] - m * S[-1]\n pr = compute_segment_log_prob(S[-1], float(\"inf\"), m, b)\n\n upper_hull.append(\n HullNode(m=m, b=b, pr=pr, left=S[-1], right=float(\"inf\"))\n )\n\n # normalize probabilities\n normalized_probabilities = exp_normalize(\n asarray([node.pr for node in upper_hull])\n )\n\n for node, probability in zip(upper_hull, normalized_probabilities):\n node.pr = probability\n\n assert(len(lower_hull) == len(S) - 1)\n assert(len(upper_hull) == n_upper_segments)\n\n return lower_hull, upper_hull", "response": 
"Compute the upper and lower hulls given the segment points S with function values fS and the domain of the logpdf."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef sample_upper_hull(upper_hull, random_stream):\n cdf = cumsum([node.pr for node in upper_hull])\n\n # randomly choose a line segment\n U = random_stream.rand()\n\n node = next(\n (node for node, cdf_value in zip(upper_hull, cdf) if U < cdf_value),\n upper_hull[-1] # default is last line segment\n )\n\n # sample along that line segment\n U = random_stream.rand()\n\n m, left, right = node.m, node.left, node.right\n\n M = max(m * right, m * left)\n x = (log(U * (exp(m * right - M) - exp(m * left - M)) + exp(m * left - M)) + M) / m\n\n assert(x >= left and x <= right)\n\n if isinf(x) or isnan(x):\n raise ValueError(\"sampled an infinite or 'nan' x\")\n\n return x", "response": "Return a single value randomly sampled from the given upper_hull."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef check_for_required_columns(problems, table, df):\n r = cs.PROTOFEED_REF\n req_columns = r.loc[(r['table'] == table) & r['column_required'],\n 'column'].values\n for col in req_columns:\n if col not in df.columns:\n problems.append(['error', 'Missing column {!s}'.format(col),\n table, []])\n\n return problems", "response": "Checks that the given ProtoFeed table has the required columns."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef check_for_invalid_columns(problems, table, df):\n r = cs.PROTOFEED_REF\n valid_columns = r.loc[r['table'] == table, 'column'].values\n for col in df.columns:\n if col not in valid_columns:\n problems.append(['warning',\n 'Unrecognized column {!s}'.format(col),\n table, []])\n\n return problems", "response": "Checks if the given ProtoFeed DataFrame contains invalid columns."} {"SOURCE": "codesearchnet", "instruction": "How would you 
explain what the following Python 3 function does\ndef check_frequencies(pfeed, *, as_df=False, include_warnings=False):\n table = 'frequencies'\n problems = []\n\n # Preliminary checks\n if pfeed.frequencies is None:\n problems.append(['error', 'Missing table', table, []])\n else:\n f = pfeed.frequencies.copy()\n problems = check_for_required_columns(problems, table, f)\n if problems:\n return gt.format_problems(problems, as_df=as_df)\n\n if include_warnings:\n problems = check_for_invalid_columns(problems, table, f)\n\n # Check route_short_name and route_long_name\n for column in ['route_short_name', 'route_long_name']:\n problems = gt.check_column(problems, table, f, column, gt.valid_str,\n column_required=False)\n\n cond = ~(f['route_short_name'].notnull() | f['route_long_name'].notnull())\n problems = gt.check_table(problems, table, f, cond,\n 'route_short_name and route_long_name both empty')\n\n # Check route_type\n v = lambda x: x in range(8)\n problems = gt.check_column(problems, table, f, 'route_type', v)\n\n # Check service window ID\n problems = gt.check_column_linked_id(problems, table, f,\n 'service_window_id', pfeed.service_windows)\n\n # Check direction\n v = lambda x: x in range(3)\n problems = gt.check_column(problems, table, f, 'direction', v)\n\n # Check frequency\n v = lambda x: isinstance(x, int)\n problems = gt.check_column(problems, table, f, 'frequency', v)\n\n # Check speed\n problems = gt.check_column(problems, table, f, 'speed', valid_speed,\n column_required=False)\n\n # Check shape ID\n problems = gt.check_column_linked_id(problems, table, f, 'shape_id',\n pfeed.shapes)\n\n return gt.format_problems(problems, as_df=as_df)", "response": "Check that the frequency of the current feed is well - formed."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nchecking the meta data for the current feed.", "response": "def check_meta(pfeed, *, as_df=False, include_warnings=False):\n \"\"\"\n Analog of 
:func:`check_frequencies` for ``pfeed.meta``\n \"\"\"\n table = 'meta'\n problems = []\n\n # Preliminary checks\n if pfeed.meta is None:\n problems.append(['error', 'Missing table', table, []])\n else:\n f = pfeed.meta.copy()\n problems = check_for_required_columns(problems, table, f)\n if problems:\n return gt.format_problems(problems, as_df=as_df)\n\n if include_warnings:\n problems = check_for_invalid_columns(problems, table, f)\n\n if f.shape[0] > 1:\n problems.append(['error', 'Meta must have only one row',\n table, list(range(1, f.shape[0]))])\n\n # Check agency_name\n problems = gt.check_column(problems, table, f, 'agency_name', gt.valid_str)\n\n # Check agency_url\n problems = gt.check_column(problems, table, f, 'agency_url', gt.valid_url)\n\n # Check agency_timezone\n problems = gt.check_column(problems, table, f, 'agency_timezone',\n gt.valid_timezone)\n\n # Check start_date and end_date\n for col in ['start_date', 'end_date']:\n problems = gt.check_column(problems, table, f, col, gt.valid_date)\n\n # Check default_route_speed\n problems = gt.check_column(problems, table, f, 'default_route_speed',\n valid_speed)\n\n return gt.format_problems(problems, as_df=as_df)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef check_service_windows(pfeed, *, as_df=False, include_warnings=False):\n table = 'service_windows'\n problems = []\n\n # Preliminary checks\n if pfeed.service_windows is None:\n problems.append(['error', 'Missing table', table, []])\n else:\n f = pfeed.service_windows.copy()\n problems = check_for_required_columns(problems, table, f)\n if problems:\n return gt.format_problems(problems, as_df=as_df)\n\n if include_warnings:\n problems = check_for_invalid_columns(problems, table, f)\n\n # Check service window ID\n problems = gt.check_column_id(problems, table, f,\n 'service_window_id')\n\n # Check start_time and end_time\n for column in ['start_time', 'end_time']:\n problems = 
gt.check_column(problems, table, f, column, gt.valid_time)\n\n # Check weekday columns\n v = lambda x: x in range(2)\n for col in ['monday', 'tuesday', 'wednesday', 'thursday', 'friday',\n 'saturday', 'sunday']:\n problems = gt.check_column(problems, table, f, col, v)\n\n return gt.format_problems(problems, as_df=as_df)", "response": "Checks that the service windows are valid and returns a list of problems."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncheck that all required columns are present in the feed.", "response": "def check_shapes(pfeed, *, as_df=False, include_warnings=False):\n \"\"\"\n Analog of :func:`check_frequencies` for ``pfeed.shapes``\n \"\"\"\n table = 'shapes'\n problems = []\n\n # Preliminary checks\n if pfeed.shapes is None:\n return problems\n\n f = pfeed.shapes.copy()\n problems = check_for_required_columns(problems, table, f)\n if problems:\n return gt.format_problems(problems, as_df=as_df)\n\n if include_warnings:\n problems = check_for_invalid_columns(problems, table, f)\n\n # Check shape_id\n problems = gt.check_column(problems, table, f, 'shape_id', gt.valid_str)\n\n # Check geometry\n v = lambda x: isinstance(x, sg.LineString) and not x.is_empty\n problems = gt.check_column(problems, table, f, 'geometry', v)\n\n return gt.format_problems(problems, as_df=as_df)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nchecking that all stops in a given feed are in a given distribution.", "response": "def check_stops(pfeed, *, as_df=False, include_warnings=False):\n \"\"\"\n Analog of :func:`check_frequencies` for ``pfeed.stops``\n \"\"\"\n # Use gtfstk's stop validator\n if pfeed.stops is not None:\n stop_times = pd.DataFrame(columns=['stop_id'])\n feed = gt.Feed(stops=pfeed.stops, stop_times=stop_times,\n dist_units='km')\n return gt.check_stops(feed, as_df=as_df, include_warnings=False)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the 
following Python 3 code\ndef validate(pfeed, *, as_df=True, include_warnings=True):\n problems = []\n\n # Check for invalid columns and check the required tables\n checkers = [\n 'check_frequencies',\n 'check_meta',\n 'check_service_windows',\n 'check_shapes',\n 'check_stops',\n ]\n for checker in checkers:\n problems.extend(globals()[checker](pfeed,\n include_warnings=include_warnings))\n\n return gt.format_problems(problems, as_df=as_df)", "response": "Validate a ProtoFeed object."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef youtube(keyword=None):\n if keyword is None:\n web.open('https://www.youtube.com/watch?v=L_mBVT2jBFw')\n else:\n web.open(quote('https://www.youtube.com/results?search_query={}'.format(keyword), RESERVED))", "response": "Open youtube.\n\n Args:\n keyword (optional): Search word."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef replace_escapes(self, m):\n\n groups = m.groupdict()\n esc = m.group(0)\n if groups.get('special'):\n value = BACK_SLASH_TRANSLATION[esc]\n elif groups.get('char'):\n try:\n if esc.endswith('}'):\n value = chr(int(esc[3:-1], 16))\n else:\n value = chr(int(esc[2:], 16))\n except Exception:\n value = esc\n elif groups.get('oct'):\n # JavaScript only supports hex range.\n # So \\400 would be \\40 + '0'\n value = int(esc[1:], 8)\n overflow = ''\n if value > 255:\n value = int(esc[1:-1], 8)\n overflow = esc[-1]\n value = chr(value) + overflow\n elif groups.get('other'):\n value = esc[1:]\n return value.replace('\\x00', '\\n')", "response": "Replace escapes in a message."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreplacing surrogates in a string.", "response": "def replace_surrogates(self, m):\n \"\"\"Replace surrogates.\"\"\"\n\n high, low = ord(m.group(1)), ord(m.group(2))\n return chr((high - 0xD800) * 0x400 + low - 0xDC00 + 0x10000)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 
function, write the documentation\ndef find_content(self, text, index=0, backtick=False):\n\n curly_count = 0\n last = '\n'\n self.line_num = 1\n length = len(text)\n while index < length:\n start_index = index\n c = text[index]\n if c == '{' and backtick:\n curly_count += 1\n elif c == '}' and backtick:\n if curly_count:\n curly_count -= 1\n else:\n index += 1\n return index\n elif c == '`':\n done = False\n first = True\n backtick_content = []\n start_index = index\n while not done:\n m = (RE_TEMPLATE_START if first else RE_TEMPLATE_MIDDLE_END).match(text, index)\n first = False\n if m:\n self.line_num += m.group(0).count('\n')\n content = self.evaluate_strings(m.group(1), True)\n if content:\n backtick_content.append(content)\n index = m.end(0)\n if m.group(2) == '${':\n index = self.find_content(text, index, True)\n else:\n done = True\n else:\n done = True\n if backtick_content:\n self.quoted_strings.append([' '.join(backtick_content), self.line_num, 'utf-8'])\n elif c in ('\'', '\"'):\n m = RE_STRING.match(text, index)\n if m:\n self.evaluate_strings(m.group(0)[1:-1])\n self.line_num += m.group(0).count('\n')\n index = m.end(0)\n elif c == '\n':\n self.line_num += 1\n elif last == '\n' or c == '/':\n m = RE_COMMENT.match(text, index)\n if m:\n g = m.groupdict()\n if g['start'] is None:\n self.evaluate_inline_tail(g)\n else:\n self.evaluate_inline(g)\n index = m.end(0)\n elif c == '/':\n m = RE_BLOCK_COMMENT.match(text, index)\n if m:\n self.evaluate_block(m.group(0))\n self.line_num += m.group(0).count('\n')\n index = m.end(0)\n\n if index == start_index:\n index += 1\n last = text[index - 1]", "response": "Find content in text."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef override_config(self, options):\n\n for k, v in options.items():\n # Reject names not in the default configuration\n if k not in self.config:\n raise KeyError(\"'{}' is not a valid option for '{}'\".format(k,
self.__class__.__name__))\n self.validate_options(k, v)\n self.config[k] = v", "response": "Override the default configuration."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns the duration of the time period between the first and second time string in the given units.", "response": "def get_duration(timestr1, timestr2, units='s'):\n \"\"\"\n Return the duration of the time period between the first and second\n time string in the given units.\n Allowable units are 's' (seconds), 'min' (minutes), 'h' (hours).\n Assume ``timestr1 < timestr2``.\n \"\"\"\n valid_units = ['s', 'min', 'h']\n assert units in valid_units,\\\n \"Units must be one of {!s}\".format(valid_units)\n\n duration = (\n gt.timestr_to_seconds(timestr2) - gt.timestr_to_seconds(timestr1)\n )\n\n if units == 's':\n return duration\n elif units == 'min':\n return duration/60\n else:\n return duration/3600"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nbuild a list of stop IDs based on the given shape ID.", "response": "def build_stop_ids(shape_id):\n \"\"\"\n Create a pair of stop IDs based on the given shape ID.\n \"\"\"\n return [cs.SEP.join(['stp', shape_id, str(i)]) for i in range(2)]"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script that,\ngiven a ProtoFeed, returns a DataFrame representing ``agency.txt``.", "response": "def build_agency(pfeed):\n \"\"\"\n Given a ProtoFeed, return a DataFrame representing ``agency.txt``\n \"\"\"\n return pd.DataFrame({\n 'agency_name': pfeed.meta['agency_name'].iat[0],\n 'agency_url': pfeed.meta['agency_url'].iat[0],\n 'agency_timezone': pfeed.meta['agency_timezone'].iat[0],\n }, index=[0])"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function that,\ngiven a ProtoFeed, returns a DataFrame representing ``calendar.txt``
and a dictionary mapping each service window ID to its service ID.", "response": "def build_calendar_etc(pfeed):\n \"\"\"\n Given a ProtoFeed, return a DataFrame representing ``calendar.txt``\n and a dictionary of the form <service window ID> -> <service ID>,\n respectively.\n \"\"\"\n windows = pfeed.service_windows.copy()\n\n # Create a service ID for each distinct days_active field and map the\n # service windows to those service IDs\n def get_sid(bitlist):\n return 'srv' + ''.join([str(b) for b in bitlist])\n\n weekdays = ['monday', 'tuesday', 'wednesday', 'thursday', 'friday',\n 'saturday', 'sunday']\n bitlists = set()\n\n # Create a dictionary <service window ID> -> <service ID>\n d = dict()\n for index, window in windows.iterrows():\n bitlist = window[weekdays].tolist()\n d[window['service_window_id']] = get_sid(bitlist)\n bitlists.add(tuple(bitlist))\n service_by_window = d\n\n # Create calendar\n start_date = pfeed.meta['start_date'].iat[0]\n end_date = pfeed.meta['end_date'].iat[0]\n F = []\n for bitlist in bitlists:\n F.append([get_sid(bitlist)] + list(bitlist) +\n [start_date, end_date])\n calendar = pd.DataFrame(F, columns=(\n ['service_id'] + weekdays + ['start_date', 'end_date']))\n\n return calendar, service_by_window"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef build_routes(pfeed):\n f = pfeed.frequencies[['route_short_name', 'route_long_name',\n 'route_type', 'shape_id']].drop_duplicates().copy()\n\n # Create route IDs\n f['route_id'] = 'r' + f['route_short_name'].map(str)\n\n del f['shape_id']\n\n return f", "response": "Given a ProtoFeed, return a DataFrame representing ``routes.txt``."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function that,\ngiven a ProtoFeed, returns a DataFrame representing the shapes.
txt.", "response": "def build_shapes(pfeed):\n \"\"\"\n Given a ProtoFeed, return DataFrame representing ``shapes.txt``.\n Only use shape IDs that occur in both ``pfeed.shapes`` and\n ``pfeed.frequencies``.\n Create reversed shapes where routes traverse shapes in both\n directions.\n \"\"\"\n rows = []\n for shape, geom in pfeed.shapes[['shape_id',\n 'geometry']].itertuples(index=False):\n if shape not in pfeed.shapes_extra:\n continue\n if pfeed.shapes_extra[shape] == 2:\n # Add shape and its reverse\n shid = shape + '-1'\n new_rows = [[shid, i, lon, lat]\n for i, (lon, lat) in enumerate(geom.coords)]\n rows.extend(new_rows)\n shid = shape + '-0'\n new_rows = [[shid, i, lon, lat]\n for i, (lon, lat) in enumerate(reversed(geom.coords))]\n rows.extend(new_rows)\n else:\n # Add shape\n shid = '{}{}{}'.format(shape, cs.SEP, pfeed.shapes_extra[shape])\n new_rows = [[shid, i, lon, lat]\n for i, (lon, lat) in enumerate(geom.coords)]\n rows.extend(new_rows)\n\n return pd.DataFrame(rows, columns=['shape_id', 'shape_pt_sequence',\n 'shape_pt_lon', 'shape_pt_lat'])"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef build_stops(pfeed, shapes=None):\n if pfeed.stops is not None:\n stops = pfeed.stops.copy()\n else:\n if shapes is None:\n raise ValueError('Must input shapes built by build_shapes()')\n\n geo_shapes = gt.geometrize_shapes(shapes)\n rows = []\n for shape, geom in geo_shapes[['shape_id',\n 'geometry']].itertuples(index=False):\n stop_ids = build_stop_ids(shape)\n stop_names = build_stop_names(shape)\n for i in range(2):\n stop_id = stop_ids[i]\n stop_name = stop_names[i]\n stop_lon, stop_lat = geom.interpolate(i,\n normalized=True).coords[0]\n rows.append([stop_id, stop_name, stop_lon, stop_lat])\n\n stops = (\n pd.DataFrame(rows, columns=['stop_id', 'stop_name',\n 'stop_lon', 'stop_lat'])\n .drop_duplicates(subset=['stop_lon', 'stop_lat'])\n )\n\n return stops", "response": "Given a ProtoFeed return a 
DataFrame representing ``stops.txt``."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef build_trips(pfeed, routes, service_by_window):\n # Put together the route and service data\n routes = pd.merge(routes[['route_id', 'route_short_name']],\n pfeed.frequencies)\n routes = pd.merge(routes, pfeed.service_windows)\n\n # For each row in routes, add trips at the specified frequency in\n # the specified direction\n rows = []\n for index, row in routes.iterrows():\n shape = row['shape_id']\n route = row['route_id']\n window = row['service_window_id']\n start, end = row[['start_time', 'end_time']].values\n duration = get_duration(start, end, 'h')\n frequency = row['frequency']\n if not frequency:\n # No trips during this service window\n continue\n # Rounding down occurs here if the duration isn't integral\n # (bad input)\n num_trips_per_direction = int(frequency*duration)\n service = service_by_window[window]\n direction = row['direction']\n if direction == 2:\n directions = [0, 1]\n else:\n directions = [direction]\n for direction in directions:\n # Warning: this shape-ID-making logic needs to match that\n # in ``build_shapes``\n shid = '{}{}{}'.format(shape, cs.SEP, direction)\n rows.extend([[\n route,\n cs.SEP.join(['t', route, window, start,\n str(direction), str(i)]),\n direction,\n shid,\n service\n ] for i in range(num_trips_per_direction)])\n\n return pd.DataFrame(rows, columns=['route_id', 'trip_id', 'direction_id',\n 'shape_id', 'service_id'])", "response": "Given a ProtoFeed and its corresponding routes and service-by-window dictionary, return a DataFrame representing ``trips.txt``."}
{"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 that,\ngiven a Shapely LineString, a side of the LineString, and a buffer size in the distance units of the LineString, buffers the LineString on that side and returns the resulting Shapely polygon.", "response": "def buffer_side(linestring, side, buffer):\n \"\"\"\n Given a Shapely LineString, a side of the LineString\n (string; 'left' = left hand side of LineString,\n 'right' = right hand side of LineString, or\n 'both' = both sides), and a buffer size in the distance units of\n the LineString, buffer the LineString on the given side by\n the buffer size and return the resulting Shapely polygon.\n \"\"\"\n b = linestring.buffer(buffer, cap_style=2)\n if side in ['left', 'right'] and buffer > 0:\n # Make a tiny buffer to split the normal-size buffer\n # in half across the linestring\n eps = min(buffer/2, 0.001)\n b0 = linestring.buffer(eps, cap_style=3)\n diff = b.difference(b0)\n polys = so.polygonize(diff)\n # Buffer sides slightly to include original linestring\n if side == 'left':\n b = list(polys)[0].buffer(1.1*eps)\n else:\n b = list(polys)[-1].buffer(1.1*eps)\n\n return b"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function that,\ngiven a GeoDataFrame of stops, a Shapely LineString, and a side of the LineString, returns a GeoDataFrame of all the stops that lie within a buffer of that side.", "response": "def get_nearby_stops(geo_stops, linestring, side, buffer=cs.BUFFER):\n \"\"\"\n Given a GeoDataFrame of stops, a Shapely LineString in the\n same coordinate system, a side of the LineString\n (string; 'left' = left hand side of LineString,\n 'right' = right hand side of LineString, or\n 'both' = both sides), and a buffer in the distance units of that\n coordinate system, do the following.\n Return a GeoDataFrame of all the stops that lie within\n ``buffer`` distance units to the ``side`` of the LineString.\n \"\"\"\n b = buffer_side(linestring, side, buffer)\n\n # Collect stops\n return
geo_stops.loc[geo_stops.intersects(b)].copy()"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef build_stop_times(pfeed, routes, shapes, stops, trips, buffer=cs.BUFFER):\n # Get the table of trips and add frequency and service window details\n routes = (\n routes\n .filter(['route_id', 'route_short_name'])\n .merge(pfeed.frequencies.drop(['shape_id'], axis=1))\n )\n trips = (\n trips\n .assign(service_window_id=lambda x: x.trip_id.map(\n lambda y: y.split(cs.SEP)[2]))\n .merge(routes)\n )\n\n # Get the geometries of ``shapes`` and not ``pfeed.shapes``\n geometry_by_shape = dict(\n gt.geometrize_shapes(shapes, use_utm=True)\n .filter(['shape_id', 'geometry'])\n .values\n )\n\n # Save on distance computations by memoizing\n dist_by_stop_by_shape = {shape: {} for shape in geometry_by_shape}\n\n def compute_stops_dists_times(geo_stops, linestring, shape,\n start_time, end_time):\n \"\"\"\n Given a GeoDataFrame of stops on one side of a given Shapely\n LineString with given shape ID, compute distances and departure\n times of a trip traversing the LineString from start to end\n at the given start and end times (in seconds past midnight)\n and stopping at the stops encountered along the way.\n Do not assume that the stops are ordered by trip encounter.\n Return three lists of the same length: the stop IDs in order\n that the trip encounters them, the shape distances traveled\n along distances at the stops, and the times the stops are\n encountered, respectively.\n \"\"\"\n g = geo_stops.copy()\n dists_and_stops = []\n for i, stop in enumerate(g['stop_id'].values):\n if stop in dist_by_stop_by_shape[shape]:\n d = dist_by_stop_by_shape[shape][stop]\n else:\n d = gt.get_segment_length(linestring,\n g.geometry.iat[i])/1000 # km\n dist_by_stop_by_shape[shape][stop] = d\n dists_and_stops.append((d, stop))\n dists, stops = zip(*sorted(dists_and_stops))\n D = linestring.length/1000\n dists_are_reasonable = all([d < D + 100 for d in 
dists])\n if not dists_are_reasonable:\n # Assume equal distances between stops :-(\n n = len(stops)\n delta = D/(n - 1)\n dists = [i*delta for i in range(n)]\n\n # Compute times using distances, start and end stop times,\n # and linear interpolation\n t0, t1 = start_time, end_time\n d0, d1 = dists[0], dists[-1]\n # Interpolate\n times = np.interp(dists, [d0, d1], [t0, t1])\n return stops, dists, times\n\n # Iterate through trips and set stop times based on stop ID\n # and service window frequency.\n # Remember that every trip has a valid shape ID.\n # Gather stops geographically from ``stops``.\n rows = []\n geo_stops = gt.geometrize_stops(stops, use_utm=True)\n # Look on the side of the traffic side of street for this timezone\n side = cs.traffic_by_timezone[pfeed.meta.agency_timezone.iat[0]]\n for index, row in trips.iterrows():\n shape = row['shape_id']\n geom = geometry_by_shape[shape]\n stops = get_nearby_stops(geo_stops, geom, side, buffer=buffer)\n # Don't make stop times for trips without nearby stops\n if stops.empty:\n continue\n length = geom.length/1000 # km\n speed = row['speed'] # km/h\n duration = int((length/speed)*3600) # seconds\n frequency = row['frequency']\n if not frequency:\n # No stop times for this trip/frequency combo\n continue\n headway = 3600/frequency # seconds\n trip = row['trip_id']\n __, route, window, base_timestr, direction, i = (\n trip.split(cs.SEP))\n direction = int(direction)\n base_time = gt.timestr_to_seconds(base_timestr)\n start_time = base_time + headway*int(i)\n end_time = start_time + duration\n stops, dists, times = compute_stops_dists_times(stops, geom, shape,\n start_time, end_time)\n new_rows = [[trip, stop, j, time, time, dist]\n for j, (stop, time, dist) in enumerate(zip(stops, times, dists))]\n rows.extend(new_rows)\n\n g = pd.DataFrame(rows, columns=['trip_id', 'stop_id', 'stop_sequence',\n 'arrival_time', 'departure_time', 'shape_dist_traveled'])\n\n # Convert seconds back to time strings\n g[['arrival_time', 
'departure_time']] =\\\n g[['arrival_time', 'departure_time']].applymap(\n lambda x: gt.timestr_to_seconds(x, inverse=True))\n\n return g", "response": "Builds a DataFrame representing stop times for a given ProtoFeed."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _generate_content(self):\n # concat into a dict\n dic = self.config\n dic['mode'] = self.language\n if self.theme:\n dic['theme'] = self.theme\n # dumps with json\n return json.dumps(dic, indent=8, separators=(',', ': '))", "response": "Dumps content using JSON to send to CodeMirror"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_token_details(self, show_listing_details=False, show_inactive=False):\n api_params = {\n \"show_listing_details\": show_listing_details,\n \"show_inactive\": show_inactive\n }\n return self.request.get(path='/exchange/tokens', params=api_params)", "response": "This method returns the available tokens for the Switcheo exchange."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nfunctions to fetch trading metrics from the past 24 hours for all trading pairs offered on the exchange. Execution of this function is as follows:: get_candlesticks(pair=\"SWTH_NEO\", start_time=round(time.time()) - 350000, end_time=round(time.time()), interval=360)) The expected return result for this function is as follows:: [{ 'time': '1533168000', 'open': '0.00046835', 'close': '0.00046835', 'high': '0.00046835', 'low': '0.00046835', 'volume': '240315335.0', 'quote_volume': '513110569018.0' },{ 'time': '1533081600', 'open': '0.00046835', 'close': '0.00046835', 'high': '0.00046835', 'low': '0.00046835', 'volume': '1170875.0', 'quote_volume': '2500000000.0' }, ... ] :param pair: The trading pair used to request candle statistics. 
:type pair: str :param start_time: The start time (in epoch seconds) range for collecting candle statistics. :type start_time: int :param end_time: The end time (in epoch seconds) range for collecting candle statistics. :type end_time: int :param interval: The time interval (in minutes) for candle statistics. Allowed values: 1, 5, 30, 60, 360, 1440 :type interval: int :return: List of dictionaries containing the candles statistics based on the parameter filters.", "response": "def get_candlesticks(self, pair, start_time, end_time, interval):\n \"\"\"\n Function to fetch trading metrics from the past 24 hours for all trading pairs offered on the exchange.\n Execution of this function is as follows::\n\n get_candlesticks(pair=\"SWTH_NEO\",\n start_time=round(time.time()) - 350000,\n end_time=round(time.time()),\n interval=360))\n\n The expected return result for this function is as follows::\n\n [{\n 'time': '1533168000',\n 'open': '0.00046835',\n 'close': '0.00046835',\n 'high': '0.00046835',\n 'low': '0.00046835',\n 'volume': '240315335.0',\n 'quote_volume': '513110569018.0'\n },{\n 'time': '1533081600',\n 'open': '0.00046835',\n 'close': '0.00046835',\n 'high': '0.00046835',\n 'low': '0.00046835',\n 'volume': '1170875.0',\n 'quote_volume': '2500000000.0'\n },\n ...\n ]\n\n :param pair: The trading pair used to request candle statistics.\n :type pair: str\n :param start_time: The start time (in epoch seconds) range for collecting candle statistics.\n :type start_time: int\n :param end_time: The end time (in epoch seconds) range for collecting candle statistics.\n :type end_time: int\n :param interval: The time interval (in minutes) for candle statistics. 
Allowed values: 1, 5, 30, 60, 360, 1440\n :type interval: int\n :return: List of dictionaries containing the candles statistics based on the parameter filters.\n \"\"\"\n api_params = {\n \"pair\": pair,\n \"interval\": interval,\n \"start_time\": start_time,\n \"end_time\": end_time,\n \"contract_hash\": self.contract_hash\n }\n return self.request.get(path='/tickers/candlesticks', params=api_params)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_last_price(self, symbols=None, bases=None):\n api_params = {}\n if symbols is not None:\n api_params['symbols'] = symbols\n if bases is not None:\n api_params['bases'] = bases\n return self.request.get(path='/tickers/last_price', params=api_params)", "response": "This function returns the last price of the most recently executed trade on the order book for each trading pair."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nfunction to fetch the open orders on the order book for the trade pair requested. Execution of this function is as follows:: get_offers(pair=\"SWTH_NEO\") The expected return result for this function is as follows:: [{ 'id': '2716c0ca-59bb-4c86-8ee4-6b9528d0e5d2', 'offer_asset': 'GAS', 'want_asset': 'NEO', 'available_amount': 9509259, 'offer_amount': 30000000, 'want_amount': 300000000, 'address': '7f345d1a031c4099540dbbbc220d4e5640ab2b6f' }, { .... }] :param pair: The trading pair that will be used to request open offers on the order book. 
:type pair: str :return: List of dictionaries consisting of the open offers for the requested trading pair.", "response": "def get_offers(self, pair=\"SWTH_NEO\"):\n \"\"\"\n Function to fetch the open orders on the order book for the trade pair requested.\n Execution of this function is as follows::\n\n get_offers(pair=\"SWTH_NEO\")\n\n The expected return result for this function is as follows::\n\n [{\n 'id': '2716c0ca-59bb-4c86-8ee4-6b9528d0e5d2',\n 'offer_asset': 'GAS',\n 'want_asset': 'NEO',\n 'available_amount': 9509259,\n 'offer_amount': 30000000,\n 'want_amount': 300000000,\n 'address': '7f345d1a031c4099540dbbbc220d4e5640ab2b6f'\n }, {\n ....\n }]\n\n :param pair: The trading pair that will be used to request open offers on the order book.\n :type pair: str\n :return: List of dictionaries consisting of the open offers for the requested trading pair.\n \"\"\"\n api_params = {\n \"pair\": pair,\n \"contract_hash\": self.contract_hash\n }\n return self.request.get(path='/offers', params=api_params)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nfunctioning to fetch a list of filled trades for the parameters requested. Execution of this function is as follows:: get_trades(pair=\"SWTH_NEO\", limit=3) The expected return result for this function is as follows:: [{ 'id': '15bb16e2-7a80-4de1-bb59-bcaff877dee0', 'fill_amount': 100000000, 'take_amount': 100000000, 'event_time': '2018-08-04T15:00:12.634Z', 'is_buy': True }, { 'id': 'b6f9e530-60ff-46ff-9a71-362097a2025e', 'fill_amount': 47833882, 'take_amount': 97950000000, 'event_time': '2018-08-03T02:44:47.706Z', 'is_buy': True }, { 'id': '7a308ccc-b7f5-46a3-bf6b-752ab076cc9f', 'fill_amount': 1001117, 'take_amount': 2050000000, 'event_time': '2018-08-03T02:32:50.703Z', 'is_buy': True }] :param pair: The trading pair that will be used to request filled trades. :type pair: str :param start_time: Only return trades after this time (in epoch seconds). 
:type start_time: int :param end_time: Only return trades before this time (in epoch seconds). :type end_time: int :param limit: The number of filled trades to return. Min: 1, Max: 10000, Default: 5000 :type limit: int :return: List of dictionaries consisting of filled orders that meet requirements of the parameters passed to it.", "response": "def get_trades(self, pair=\"SWTH_NEO\", start_time=None, end_time=None, limit=5000):\n \"\"\"\n Function to fetch a list of filled trades for the parameters requested.\n Execution of this function is as follows::\n\n get_trades(pair=\"SWTH_NEO\", limit=3)\n\n The expected return result for this function is as follows::\n\n [{\n 'id': '15bb16e2-7a80-4de1-bb59-bcaff877dee0',\n 'fill_amount': 100000000,\n 'take_amount': 100000000,\n 'event_time': '2018-08-04T15:00:12.634Z',\n 'is_buy': True\n }, {\n 'id': 'b6f9e530-60ff-46ff-9a71-362097a2025e',\n 'fill_amount': 47833882,\n 'take_amount': 97950000000,\n 'event_time': '2018-08-03T02:44:47.706Z',\n 'is_buy': True\n }, {\n 'id': '7a308ccc-b7f5-46a3-bf6b-752ab076cc9f',\n 'fill_amount': 1001117,\n 'take_amount': 2050000000,\n 'event_time': '2018-08-03T02:32:50.703Z',\n 'is_buy': True\n }]\n\n :param pair: The trading pair that will be used to request filled trades.\n :type pair: str\n :param start_time: Only return trades after this time (in epoch seconds).\n :type start_time: int\n :param end_time: Only return trades before this time (in epoch seconds).\n :type end_time: int\n :param limit: The number of filled trades to return. 
Min: 1, Max: 10000, Default: 5000\n :type limit: int\n :return: List of dictionaries consisting of filled orders that meet requirements of the parameters passed to it.\n \"\"\"\n if limit > 10000 or limit < 1:\n raise ValueError(\"Attempting to request more trades than allowed by the API.\")\n api_params = {\n \"blockchain\": self.blockchain,\n \"pair\": pair,\n \"contract_hash\": self.contract_hash\n }\n if start_time is not None:\n api_params['from'] = start_time\n if end_time is not None:\n api_params['to'] = end_time\n if limit != 5000:\n api_params['limit'] = limit\n return self.request.get(path='/trades', params=api_params)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nfunction to fetch a list of the 20 most recently filled trades for the parameters requested. Execution of this function is as follows:: get_recent_trades(pair=\"SWTH_NEO\") The expected return result for this function is as follows:: [{ 'id': '15bb16e2-7a80-4de1-bb59-bcaff877dee0', 'fill_amount': 100000000, 'take_amount': 100000000, 'event_time': '2018-08-04T15:00:12.634Z', 'is_buy': True }, { 'id': 'b6f9e530-60ff-46ff-9a71-362097a2025e', 'fill_amount': 47833882, 'take_amount': 97950000000, 'event_time': '2018-08-03T02:44:47.706Z', 'is_buy': True }, ...., { 'id': '7a308ccc-b7f5-46a3-bf6b-752ab076cc9f', 'fill_amount': 1001117, 'take_amount': 2050000000, 'event_time': '2018-08-03T02:32:50.703Z', 'is_buy': True }] :param pair: The trading pair that will be used to request filled trades. 
:type pair: str :return: List of 20 dictionaries consisting of filled orders for the trade pair.", "response": "def get_recent_trades(self, pair=\"SWTH_NEO\"):\n \"\"\"\n Function to fetch a list of the 20 most recently filled trades for the parameters requested.\n Execution of this function is as follows::\n\n get_recent_trades(pair=\"SWTH_NEO\")\n\n The expected return result for this function is as follows::\n\n [{\n 'id': '15bb16e2-7a80-4de1-bb59-bcaff877dee0',\n 'fill_amount': 100000000,\n 'take_amount': 100000000,\n 'event_time': '2018-08-04T15:00:12.634Z',\n 'is_buy': True\n }, {\n 'id': 'b6f9e530-60ff-46ff-9a71-362097a2025e',\n 'fill_amount': 47833882,\n 'take_amount': 97950000000,\n 'event_time': '2018-08-03T02:44:47.706Z',\n 'is_buy': True\n }, ...., {\n 'id': '7a308ccc-b7f5-46a3-bf6b-752ab076cc9f',\n 'fill_amount': 1001117,\n 'take_amount': 2050000000,\n 'event_time': '2018-08-03T02:32:50.703Z',\n 'is_buy': True\n }]\n\n :param pair: The trading pair that will be used to request filled trades.\n :type pair: str\n :return: List of 20 dictionaries consisting of filled orders for the trade pair.\n \"\"\"\n api_params = {\n \"pair\": pair\n }\n return self.request.get(path='/trades/recent', params=api_params)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_pairs(self, base=None, show_details=False):\n api_params = {}\n if show_details:\n api_params[\"show_details\"] = show_details\n if base is not None and base in [\"NEO\", \"GAS\", \"SWTH\", \"USD\", \"ETH\"]:\n api_params[\"bases\"] = [base]\n return self.request.get(path='/exchange/pairs', params=api_params)", "response": "This function returns a list of trading pairs offered on Switcheo decentralized exchange."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_orders(self, address, chain_name='NEO', contract_version='V2', pair=None, from_epoch_time=None,\n order_status=None, 
before_id=None, limit=50):\n \"\"\"\n Function to fetch the order history of the given address.\n Execution of this function is as follows::\n\n get_orders(address=neo_get_scripthash_from_address(address=address))\n\n The expected return result for this function is as follows::\n\n [{\n 'id': '7cbdf481-6acf-4bf3-a1ed-4773f31e6931',\n 'blockchain': 'neo',\n 'contract_hash': 'a195c1549e7da61b8da315765a790ac7e7633b82',\n 'address': 'fea2b883725ef2d194c9060f606cd0a0468a2c59',\n 'side': 'buy',\n 'offer_asset_id': 'c56f33fc6ecfcd0c225c4ab356fee59390af8560be0e930faebe74a6daff7c9b',\n 'want_asset_id': 'ab38352559b8b203bde5fddfa0b07d8b2525e132',\n 'offer_amount': '53718500',\n 'want_amount': '110000000000',\n 'transfer_amount': '0',\n 'priority_gas_amount': '0',\n 'use_native_token': True,\n 'native_fee_transfer_amount': 0,\n 'deposit_txn': None,\n 'created_at': '2018-08-03T02:44:47.692Z',\n 'status': 'processed',\n 'fills': [{\n 'id': 'b6f9e530-60ff-46ff-9a71-362097a2025e',\n 'offer_hash': '95b3b03be0bff8f58aa86a8dd599700bbaeaffc05078329d5b726b6b995f4cda',\n 'offer_asset_id': 'c56f33fc6ecfcd0c225c4ab356fee59390af8560be0e930faebe74a6daff7c9b',\n 'want_asset_id': 'ab38352559b8b203bde5fddfa0b07d8b2525e132',\n 'fill_amount': '47833882',\n 'want_amount': '97950000000',\n 'filled_amount': '',\n 'fee_asset_id': 'ab38352559b8b203bde5fddfa0b07d8b2525e132',\n 'fee_amount': '73462500',\n 'price': '0.00048835',\n 'txn': None,\n 'status': 'success',\n 'created_at': '2018-08-03T02:44:47.706Z',\n 'transaction_hash': '694745a09e33845ec008cfb79c73986a556e619799ec73274f82b30d85bda13a'\n }],\n 'makes': [{\n 'id': '357088a0-cc80-49ab-acdd-980589c2d7d8',\n 'offer_hash': '420cc85abf02feaceb1bcd91489a0c1949c972d2a9a05ae922fa15d79de80c00',\n 'available_amount': '0',\n 'offer_asset_id': 'c56f33fc6ecfcd0c225c4ab356fee59390af8560be0e930faebe74a6daff7c9b',\n 'offer_amount': '5884618',\n 'want_asset_id': 'ab38352559b8b203bde5fddfa0b07d8b2525e132',\n 'want_amount': '12050000000',\n 'filled_amount': 
'0.0',\n 'txn': None,\n 'cancel_txn': None,\n 'price': '0.000488350041493775933609958506224066390041494',\n 'status': 'cancelled',\n 'created_at': '2018-08-03T02:44:47.708Z',\n 'transaction_hash': '1afa946546550151bbbd19f197a87cec92e9be58c44ec431cae42076298548b7',\n 'trades': []\n }]\n }, {\n ....\n }]\n\n :param address: The ScriptHash of the address to filter orders for.\n :type address: str\n :param pair: The trading pair to filter order requests on.\n :type pair: str\n :param chain_name: The name of the chain to find orders against.\n :type chain_name: str\n :param contract_version: The version of the contract to find orders against.\n :type contract_version: str\n :param from_epoch_time: Only return orders that are last updated at or after this time.\n :type from_epoch_time: int\n :param order_status: Only return orders have this status. Possible values are open, cancelled, completed.\n :type order_status: str\n :param before_id: Only return orders that are created before the order with this id.\n :type before_id: str\n :param limit: Only return up to this number of orders (min: 1, max: 200, default: 50).\n :type limit: int\n :return: List of dictionaries containing the orders for the given NEO address and (optional) trading pair.\n \"\"\"\n api_params = {\n \"address\": address,\n \"contract_hash\": self.get_contracts()[chain_name.upper()][contract_version.upper()],\n \"limit\": limit\n }\n if pair is not None:\n api_params['pair'] = pair\n if from_epoch_time is not None:\n api_params['from_epoch_time'] = from_epoch_time\n if order_status is not None:\n api_params['order_status'] = order_status\n if before_id is not None:\n api_params['before_id'] = before_id\n return self.request.get(path='/orders', params=api_params)", "response": "This function returns the order history for the given address."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_balance(self, addresses, contracts):\n 
api_params = {\n \"addresses[]\": addresses,\n \"contract_hashes[]\": contracts\n }\n return self.request.get(path='/balances', params=api_params)", "response": "This function returns the current balance for the given address in the Switcheo smart contract."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nconfigure a file extension to be ignored.", "response": "def ignore_file_extension(self, extension):\n \"\"\"\n Configure a file extension to be ignored.\n\n :param extension: file extension to be ignored\n (ex. .less, .scss, etc)\n \"\"\"\n logger.info('Ignoring file extension: {}'.format(extension))\n self.watcher.ignore_file_extension(extension)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nadding the given filepath to the watcher list.", "response": "def watch(self, filepath, func=None, delay=None):\n \"\"\"Add the given filepath to the watcher list.\n\n Once you have initialized a server, watch file changes before\n serving the server::\n\n server.watch('static/*.stylus', 'make static')\n def alert():\n print('foo')\n server.watch('foo.txt', alert)\n server.serve()\n\n :param filepath: files to be watched, it can be a filepath,\n a directory, or a glob pattern\n :param func: the function to be called, it can be a string of\n shell command, or any callable object without\n parameters\n :param delay: Delay sending the reload message. Use 'forever' to\n not send it.
This is useful to compile sass files to\n css, but reload on changed css files then only.\n \"\"\"\n if isinstance(func, string_types):\n func = shell(func)\n\n self.watcher.watch(filepath, func, delay)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef serve(self, liveport=None, host=None, restart_delay=2):\n host = host or '127.0.0.1'\n logger.info('Serving on http://%s:%s' % (host, liveport))\n\n self.application(host, liveport=liveport)\n\n try:\n self.watcher._changes.append(('__livereload__', restart_delay))\n LiveReloadHandler.start_tasks()\n IOLoop.instance().start()\n except KeyboardInterrupt:\n logger.info('Shutting down...')", "response": "Start serving the application with the given port."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nchecking if there are changes run the given task.", "response": "def examine(self):\n \"\"\"Check if there are changes, if true, run the given task.\"\"\"\n if self._changes:\n return self._changes.pop()\n\n # clean filepath\n self.filepath = None\n delays = set([0])\n for path in self._tasks:\n item = self._tasks[path]\n if self.is_changed(path, item['ignore']):\n func = item['func']\n func and func()\n delay = item['delay']\n if delay:\n delays.add(delay)\n\n if 'forever' in delays:\n delay = 'forever'\n else:\n delay = max(delays)\n return self.filepath, delay"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nperform the LiveReload request.", "response": "def livereload_request(self, **options):\n \"\"\"\n Performs the LiveReload request.\n \"\"\"\n style = color_style()\n verbosity = int(options['verbosity'])\n host = '%s:%d' % (\n options['livereload_host'],\n options['livereload_port'],\n )\n try:\n urlopen('http://%s/forcereload' % host)\n self.message('LiveReload request emitted.\\n',\n verbosity, style.HTTP_INFO)\n except IOError:\n pass"} {"SOURCE": "codesearchnet", "instruction": 
"Implement a function in Python 3 to\nget the handler for this command.", "response": "def get_handler(self, *args, **options):\n \"\"\"\n Entry point to plug the LiveReload feature.\n \"\"\"\n handler = super(Command, self).get_handler(*args, **options)\n if options['use_livereload']:\n threading.Timer(1, self.livereload_request, kwargs=options).start()\n return handler"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning the unsigned value along with its timestamp the time when it got dumped.", "response": "def loads_with_timestamp(value, salt):\n \"\"\"Returns the unsigned value along with its timestamp, the time when it\n got dumped.\"\"\"\n try:\n signing.loads(value, salt=salt, max_age=-999999)\n except signing.SignatureExpired as e:\n age = float(str(e).split('Signature age ')[1].split(' >')[0])\n timestamp = timezone.now() - datetime.timedelta(seconds=age)\n return timestamp, signing.loads(value, salt=salt)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nsets TCP keepalive on an open socket.", "response": "def set_keepalive(sock, after_idle_sec=1, interval_sec=3, max_fails=5):\n \"\"\"Set TCP keepalive on an open socket.\n\n It activates after 1 second (after_idle_sec) of idleness,\n then sends a keepalive ping once every 3 seconds (interval_sec),\n and closes the connection after 5 failed ping (max_fails), or 15 seconds\n \"\"\"\n if hasattr(socket, \"SO_KEEPALIVE\"):\n sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)\n if hasattr(socket, \"TCP_KEEPIDLE\"):\n sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, after_idle_sec)\n if hasattr(socket, \"TCP_KEEPINTVL\"):\n sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval_sec)\n if hasattr(socket, \"TCP_KEEPCNT\"):\n sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, max_fails)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ningest the veron catalogue into the 
catalogues database", "response": "def ingest(self):\n \"\"\"ingest the veron catalogue into the catalogues database\n\n See class docstring for usage.\n \"\"\"\n self.log.debug('starting the ``get`` method')\n\n dictList = self._create_dictionary_of_veron()\n\n tableName = self.dbTableName\n createStatement = \"\"\"\n CREATE TABLE `%(tableName)s` (\n `primaryId` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'An internal counter',\n `B_V` float DEFAULT NULL,\n `U_B` float DEFAULT NULL,\n `abs_magnitude` float DEFAULT NULL,\n `dateCreated` datetime DEFAULT CURRENT_TIMESTAMP,\n `decDeg` double DEFAULT NULL,\n `magnitude` float DEFAULT NULL,\n `raDeg` double DEFAULT NULL,\n `class` varchar(10) COLLATE utf8_unicode_ci DEFAULT NULL,\n `name` varchar(100) COLLATE utf8_unicode_ci DEFAULT NULL,\n `redshift` float DEFAULT NULL,\n `not_radio` varchar(10) COLLATE utf8_unicode_ci DEFAULT NULL,\n `magnitude_filter` varchar(10) COLLATE utf8_unicode_ci DEFAULT 'V',\n `htm16ID` bigint(20) DEFAULT NULL,\n `redshift_flag` varchar(100) COLLATE utf8_unicode_ci DEFAULT NULL,\n `spectral_classification` varchar(100) COLLATE utf8_unicode_ci DEFAULT NULL,\n `dateLastModified` datetime DEFAULT CURRENT_TIMESTAMP,\n `updated` varchar(45) DEFAULT '0',\n `htm10ID` bigint(20) DEFAULT NULL,\n `htm13ID` bigint(20) DEFAULT NULL,\n PRIMARY KEY (`primaryId`),\n UNIQUE KEY `radeg_decdeg` (`raDeg`,`decDeg`),\n KEY `idx_htm16ID` (`htm16ID`),\n KEY `idx_htm10ID` (`htm10ID`),\n KEY `idx_htm13ID` (`htm13ID`)\n ) ENGINE=MyISAM AUTO_INCREMENT=168945 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;\n\"\"\" % locals()\n\n self.add_data_to_database_table(\n dictList=dictList,\n createStatement=createStatement\n )\n\n self.log.debug('completed the ``get`` method')\n return None"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _create_dictionary_of_veron(\n self):\n \"\"\"create a list of dictionaries containing all the rows in the veron 
catalogue\n\n **Return:**\n - ``dictList`` - a list of dictionaries containing all the rows in the veron catalogue\n\n .. todo ::\n\n - update key arguments values and definitions with defaults\n - update return values and definitions\n - update usage examples and text\n - update docstring text\n - check sublime snippet exists\n - clip any useful text to docs mindmap\n - regenerate the docs and check redendering of this docstring\n \"\"\"\n self.log.debug(\n 'starting the ``_create_dictionary_of_veron`` method')\n\n dictList = []\n lines = string.split(self.catData, '\\n')\n\n totalCount = len(lines)\n count = 0\n switch = 0\n for line in lines:\n if (len(line) == 0 or line[0] in [\"#\", \" \"]) and switch == 0:\n continue\n else:\n switch = 1\n count += 1\n if count > 1:\n # Cursor up one line and clear line\n sys.stdout.write(\"\\x1b[1A\\x1b[2K\")\n print \"%(count)s / %(totalCount)s veron data added to memory\" % locals()\n\n if count == 1:\n theseKeys = []\n someKeys = string.split(line, '|')\n for key in someKeys:\n if key == \"_RAJ2000\":\n key = \"raDeg\"\n if key == \"_DEJ2000\":\n key = \"decDeg\"\n if key == \"Cl\":\n key = \"class\"\n if key == \"nR\":\n key = \"not_radio\"\n if key == \"Name\":\n key = \"name\"\n if key == \"l_z\":\n key = \"redshift_flag\"\n if key == \"z\":\n key = \"redshift\"\n if key == \"Sp\":\n key = \"spectral_classification\"\n if key == \"n_Vmag\":\n key = \"magnitude_filter\"\n if key == \"Vmag\":\n key = \"magnitude\"\n if key == \"B-V\":\n key = \"B_V\"\n if key == \"U-B\":\n key = \"U_B\"\n if key == \"Mabs\":\n key = \"abs_magnitude\"\n theseKeys.append(key)\n continue\n\n if count in [2, 3]:\n continue\n\n thisDict = {}\n theseValues = string.split(line, '|')\n\n for k, v in zip(theseKeys, theseValues):\n v = v.strip()\n if len(v) == 0 or v == \"-\":\n v = None\n thisDict[k] = v\n dictList.append(thisDict)\n\n self.log.debug(\n 'completed the ``_create_dictionary_of_veron`` method')\n return dictList", "response": 
"create a list of dictionaries containing all the rows in the catalogue"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef main(arguments=None):\n # setup the command-line util settings\n\n su = tools(\n arguments=arguments,\n docString=__doc__,\n logLevel=\"WARNING\",\n options_first=False,\n projectName=\"sherlock\"\n )\n arguments, settings, log, dbConn = su.setup()\n\n # unpack remaining cl arguments using `exec` to setup the variable names\n # automatically\n for arg, val in arguments.iteritems():\n if arg[0] == \"-\":\n varname = arg.replace(\"-\", \"\") + \"Flag\"\n else:\n varname = arg.replace(\"<\", \"\").replace(\">\", \"\")\n if varname == \"import\":\n varname = \"iimport\"\n if isinstance(val, str) or isinstance(val, unicode):\n exec(varname + \" = '%s'\" % (val,))\n else:\n exec(varname + \" = %s\" % (val,))\n if arg == \"--dbConn\":\n dbConn = val\n log.debug('%s = %s' % (varname, val,))\n\n ## START LOGGING ##\n startTime = times.get_now_sql_datetime()\n log.debug(\n '--- STARTING TO RUN THE cl_utils.py AT %s' %\n (startTime,))\n\n # call the worker function\n # x-if-settings-or-database-credientials\n if init:\n from os.path import expanduser\n home = expanduser(\"~\")\n filepath = home + \"/.config/sherlock/sherlock.yaml\"\n cmd = \"\"\"open %(filepath)s\"\"\" % locals()\n p = Popen(cmd, stdout=PIPE, stderr=PIPE, shell=True)\n try:\n cmd = \"\"\"open %(filepath)s\"\"\" % locals()\n p = Popen(cmd, stdout=PIPE, stderr=PIPE, shell=True)\n except:\n pass\n try:\n cmd = \"\"\"start %(filepath)s\"\"\" % locals()\n p = Popen(cmd, stdout=PIPE, stderr=PIPE, shell=True)\n except:\n pass\n\n if match or dbmatch:\n if verboseFlag:\n verbose = 2\n else:\n verbose = 1\n\n if skipNedUpdateFlag:\n updateNed = False\n else:\n updateNed = True\n\n if skipMagUpdateFlag:\n updatePeakMags = False\n else:\n updatePeakMags = True\n\n classifier = transient_classifier.transient_classifier(\n log=log,\n settings=settings,\n 
ra=ra,\n dec=dec,\n name=False,\n verbose=verbose,\n update=updateFlag,\n updateNed=updateNed,\n updatePeakMags=updatePeakMags\n )\n classifier.classify()\n\n if clean:\n cleaner = database_cleaner(\n log=log,\n settings=settings\n )\n cleaner.clean()\n if wiki:\n updateWiki = update_wiki_pages(\n log=log,\n settings=settings\n )\n updateWiki.update()\n\n if iimport and ned:\n ned = nedStreamImporter(\n log=log,\n settings=settings,\n coordinateList=[\"%(ra)s %(dec)s\" % locals()],\n radiusArcsec=radiusArcsec\n )\n ned.ingest()\n if iimport and cat:\n\n if cat_name == \"veron\":\n catalogue = veronImporter(\n log=log,\n settings=settings,\n pathToDataFile=pathToDataFile,\n version=cat_version,\n catalogueName=cat_name\n )\n catalogue.ingest()\n\n if \"ned_d\" in cat_name:\n catalogue = nedImporter(\n log=log,\n settings=settings,\n pathToDataFile=pathToDataFile,\n version=cat_version,\n catalogueName=cat_name\n )\n catalogue.ingest()\n if iimport and stream:\n if \"marshall\" in stream_name:\n stream = marshallImporter(\n log=log,\n settings=settings,\n )\n stream.ingest()\n if \"ifs\" in stream_name:\n stream = ifsImporter(\n log=log,\n settings=settings\n )\n stream.ingest()\n if not init and not match and not clean and not wiki and not iimport and ra:\n\n classifier = transient_classifier.transient_classifier(\n log=log,\n settings=settings,\n ra=ra,\n dec=dec,\n name=False,\n verbose=verboseFlag\n )\n classifier.classify()\n\n if info:\n print \"sherlock-catalogues\"\n wiki = update_wiki_pages(\n log=log,\n settings=settings\n )\n table = list(wiki._get_table_infos(trimmed=True))\n\n dataSet = list_of_dictionaries(\n log=log,\n listOfDictionaries=table\n )\n tableData = dataSet.reST(filepath=None)\n print tableData\n print\n\n print \"Crossmatch Streams\"\n table = list(wiki._get_stream_view_infos(trimmed=True))\n dataSet = list_of_dictionaries(\n log=log,\n listOfDictionaries=table\n )\n tableData = dataSet.reST(filepath=None)\n print tableData\n print\n\n 
print \"Views on Catalogues and Streams\"\n\n table = list(wiki._get_view_infos(trimmed=True))\n dataSet = list_of_dictionaries(\n log=log,\n listOfDictionaries=table\n )\n tableData = dataSet.reST(filepath=None)\n print tableData\n\n if \"dbConn\" in locals() and dbConn:\n dbConn.commit()\n dbConn.close()\n ## FINISH LOGGING ##\n endTime = times.get_now_sql_datetime()\n runningTime = times.calculate_time_difference(startTime, endTime)\n log.debug('-- FINISHED ATTEMPT TO RUN THE cl_utils.py AT %s (RUNTIME: %s) --' %\n (endTime, runningTime, ))\n\n return", "response": "This is the main function used by the cl_utils. py script."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ndisplays the effect of this filter", "response": "def display(self, display_before=False):\n \"\"\"shows the effect of this filter\"\"\"\n try:\n copy = self.image.copy()\n except AttributeError:\n raise Exception(\"You need to set the Filter.image attribute for displaying\")\n copy = BrightnessProcessor(brightness=0.6).process(copy)\n s, g = self._input, self.good_segments_indexes\n draw_segments(copy, s[g], (0, 255, 0))\n draw_segments(copy, s[True ^ g], (0, 0, 255))\n show_image_and_wait_for_key(copy, \"segments filtered by \" + self.__class__.__name__)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nparsing the output analysis files from MIP for adding info to trend database", "response": "def parse_mip_analysis(mip_config_raw: dict, qcmetrics_raw: dict, sampleinfo_raw: dict) -> dict:\n \"\"\"Parse the output analysis files from MIP for adding info\n to trend database\n\n Args:\n mip_config_raw (dict): raw YAML input from MIP analysis config file\n qcmetrics_raw (dict): raw YAML input from MIP analysis qc metric file\n sampleinfo_raw (dict): raw YAML input from MIP analysis qc sample info file\n Returns:\n dict: parsed data\n \"\"\"\n outdata = _define_output_dict()\n\n _config(mip_config_raw, outdata)\n 
_qc_metrics(outdata, qcmetrics_raw)\n    _qc_sample_info(outdata, sampleinfo_raw)\n\n    return outdata"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ncleaning up and run some maintenance tasks on the crossmatch catalogue helper tables and views", "response": "def clean(self):\n        \"\"\"*clean up and run some maintenance tasks on the crossmatch catalogue helper tables*\n\n        .. todo ::\n\n            - update key arguments values and definitions with defaults\n            - update return values and definitions\n            - update usage examples and text\n            - update docstring text\n            - check sublime snippet exists\n            - clip any useful text to docs mindmap\n            - regenerate the docs and check rendering of this docstring\n        \"\"\"\n        self.log.debug('starting the ``get`` method')\n\n        self._update_tcs_helper_catalogue_tables_info_with_new_tables()\n        self._updated_row_counts_in_tcs_helper_catalogue_tables_info()\n        self._clean_up_columns()\n        self._update_tcs_helper_catalogue_views_info_with_new_views()\n        self._clean_up_columns()\n        self._updated_row_counts_in_tcs_helper_catalogue_tables_info()\n\n        print "`tcs_helper_catalogue_tables_info` & `tcs_helper_catalogue_views_info` database tables updated"\n\n        self.log.debug('completed the ``get`` method')\n        return None"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _updated_row_counts_in_tcs_helper_catalogue_tables_info(\n            self):\n        \"\"\" updated row counts in tcs catalogue tables\n\n        .. 
todo ::\n\n - update key arguments values and definitions with defaults\n - update return values and definitions\n - update usage examples and text\n - update docstring text\n - check sublime snippet exists\n - clip any useful text to docs mindmap\n - regenerate the docs and check redendering of this docstring\n \"\"\"\n self.log.debug(\n 'starting the ``_updated_row_counts_in_tcs_helper_catalogue_tables_info`` method')\n\n sqlQuery = u\"\"\"\n select * from tcs_helper_catalogue_tables_info where table_name like \"%%stream\" or (number_of_rows is null and legacy_table = 0)\n \"\"\" % locals()\n rows = readquery(\n log=self.log,\n sqlQuery=sqlQuery,\n dbConn=self.cataloguesDbConn,\n quiet=False\n )\n\n for row in rows:\n tbName = row[\"table_name\"]\n\n sqlQuery = u\"\"\"\n update tcs_helper_catalogue_tables_info set number_of_rows = (select count(*) as count from %(tbName)s) where table_name = \"%(tbName)s\"\n \"\"\" % locals()\n writequery(\n log=self.log,\n sqlQuery=sqlQuery,\n dbConn=self.cataloguesDbConn,\n )\n\n sqlQuery = u\"\"\"\n select * from tcs_helper_catalogue_views_info where (number_of_rows is null and legacy_view = 0)\n \"\"\" % locals()\n rows = readquery(\n log=self.log,\n sqlQuery=sqlQuery,\n dbConn=self.cataloguesDbConn,\n quiet=False\n )\n\n for row in rows:\n tbName = row[\"view_name\"]\n print tbName\n\n sqlQuery = u\"\"\"\n update tcs_helper_catalogue_views_info set number_of_rows = (select count(*) as count from %(tbName)s) where view_name = \"%(tbName)s\"\n \"\"\" % locals()\n writequery(\n log=self.log,\n sqlQuery=sqlQuery,\n dbConn=self.cataloguesDbConn,\n )\n\n self.log.debug(\n 'completed the ``_updated_row_counts_in_tcs_helper_catalogue_tables_info`` method')\n return None", "response": "update row counts in tcs catalogue tables info"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nupdates tcs helper catalogue tables info with new tables", "response": "def 
_update_tcs_helper_catalogue_tables_info_with_new_tables(\n self):\n \"\"\"update tcs helper catalogue tables info with new tables\n\n .. todo ::\n\n - update key arguments values and definitions with defaults\n - update return values and definitions\n - update usage examples and text\n - update docstring text\n - check sublime snippet exists\n - clip any useful text to docs mindmap\n - regenerate the docs and check redendering of this docstring\n \"\"\"\n self.log.debug(\n 'starting the ``_update_tcs_helper_catalogue_tables_info_with_new_tables`` method')\n\n sqlQuery = u\"\"\"\n SELECT max(id) as thisId FROM tcs_helper_catalogue_tables_info;\n \"\"\" % locals()\n thisId = readquery(\n log=self.log,\n sqlQuery=sqlQuery,\n dbConn=self.cataloguesDbConn,\n quiet=False\n )\n try:\n highestId = thisId[0][\"thisId\"] + 1\n except:\n highestId = 1\n\n sqlQuery = u\"\"\"\n SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_TYPE='BASE TABLE' AND TABLE_SCHEMA like '%%catalogues%%' and TABLE_NAME like \"tcs_cat%%\";\n \"\"\" % locals()\n tablesInDatabase = readquery(\n log=self.log,\n sqlQuery=sqlQuery,\n dbConn=self.cataloguesDbConn,\n quiet=False\n )\n\n sqlQuery = u\"\"\"\n SELECT table_name, old_table_name FROM tcs_helper_catalogue_tables_info;\n \"\"\" % locals()\n tableList = readquery(\n log=self.log,\n sqlQuery=sqlQuery,\n dbConn=self.cataloguesDbConn,\n quiet=False\n )\n tbList = []\n oldList = []\n for tb in tableList:\n oldList.append(tb[\"old_table_name\"])\n\n for tb in tableList:\n if tb[\"old_table_name\"] not in tbList:\n tbList.append(tb[\"old_table_name\"])\n if tb[\"table_name\"] not in tbList:\n tbList.append(tb[\"table_name\"])\n\n for tb in tablesInDatabase:\n if tb[\"TABLE_NAME\"] not in tbList:\n thisTableName = tb[\"TABLE_NAME\"]\n print \"`%(thisTableName)s` added to `tcs_helper_catalogue_tables_info` database table\" % locals()\n sqlQuery = u\"\"\"\n INSERT INTO tcs_helper_catalogue_tables_info (\n id,\n table_name\n )\n VALUES (\n 
%(highestId)s,\n \"%(thisTableName)s\"\n )\"\"\" % locals()\n writequery(\n log=self.log,\n sqlQuery=sqlQuery,\n dbConn=self.cataloguesDbConn,\n )\n highestId += 1\n\n self.log.debug(\n 'completed the ``_update_tcs_helper_catalogue_tables_info_with_new_tables`` method')\n return None"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nclean up the columns of the tcs_helper_catalogue_tables_info table_name and views_name.", "response": "def _clean_up_columns(\n self):\n \"\"\"clean up columns\n\n .. todo ::\n\n - update key arguments values and definitions with defaults\n - update return values and definitions\n - update usage examples and text\n - update docstring text\n - check sublime snippet exists\n - clip any useful text to docs mindmap\n - regenerate the docs and check redendering of this docstring\n \"\"\"\n self.log.debug('starting the ``_clean_up_columns`` method')\n\n sqlQueries = [\n \"update tcs_helper_catalogue_tables_info set old_table_name = table_name where old_table_name is null;\",\n \"update tcs_helper_catalogue_tables_info set version_number = 'stream' where table_name like '%%stream' and version_number is null;\",\n \"\"\"update tcs_helper_catalogue_tables_info set in_ned = 0 where table_name like '%%stream' and in_ned is null;\"\"\",\n \"\"\"update tcs_helper_catalogue_tables_info set vizier_link = 0 where table_name like '%%stream' and vizier_link is null;\"\"\",\n \"update tcs_helper_catalogue_views_info set old_view_name = view_name where old_view_name is null;\",\n ]\n\n for sqlQuery in sqlQueries:\n writequery(\n log=self.log,\n sqlQuery=sqlQuery,\n dbConn=self.cataloguesDbConn,\n )\n\n # VIEW OBJECT TYPES\n sqlQuery = u\"\"\"\n SELECT view_name FROM tcs_helper_catalogue_views_info where legacy_view = 0 and object_type is null;\n \"\"\" % locals()\n rows = readquery(\n log=self.log,\n sqlQuery=sqlQuery,\n dbConn=self.cataloguesDbConn,\n quiet=False\n )\n\n for row in rows:\n view_name = row[\"view_name\"]\n 
object_type = view_name.replace(\"tcs_view_\", \"\").split(\"_\")[0]\n\n sqlQuery = u\"\"\"\n update tcs_helper_catalogue_views_info set object_type = \"%(object_type)s\" where view_name = \"%(view_name)s\"\n \"\"\" % locals()\n writequery(\n log=self.log,\n sqlQuery=sqlQuery,\n dbConn=self.cataloguesDbConn,\n )\n\n # MASTER TABLE ID FOR VIEWS\n sqlQuery = u\"\"\"\n SELECT view_name FROM tcs_helper_catalogue_views_info where legacy_view = 0 and table_id is null;\n \"\"\" % locals()\n rows = readquery(\n log=self.log,\n sqlQuery=sqlQuery,\n dbConn=self.cataloguesDbConn,\n quiet=False\n )\n\n for row in rows:\n view_name = row[\"view_name\"]\n table_name = view_name.replace(\"tcs_view_\", \"\").split(\"_\")[1:]\n table_name = (\"_\").join(table_name)\n table_name = \"tcs_cat_%(table_name)s\" % locals()\n\n sqlQuery = u\"\"\"\n update tcs_helper_catalogue_views_info set table_id = (select id from tcs_helper_catalogue_tables_info where table_name = \"%(table_name)s\") where view_name = \"%(view_name)s\"\n \"\"\" % locals()\n writequery(\n log=self.log,\n sqlQuery=sqlQuery,\n dbConn=self.cataloguesDbConn,\n )\n\n self.log.debug('completed the ``_clean_up_columns`` method')\n return None"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _update_tcs_helper_catalogue_views_info_with_new_views(\n self):\n \"\"\" update tcs helper catalogue tables info with new tables\n\n .. 
todo ::\n\n - update key arguments values and definitions with defaults\n - update return values and definitions\n - update usage examples and text\n - update docstring text\n - check sublime snippet exists\n - clip any useful text to docs mindmap\n - regenerate the docs and check redendering of this docstring\n \"\"\"\n self.log.debug(\n 'starting the ``_update_tcs_helper_catalogue_views_info_with_new_views`` method')\n\n sqlQuery = u\"\"\"\n SELECT max(id) as thisId FROM tcs_helper_catalogue_views_info;\n \"\"\" % locals()\n thisId = readquery(\n log=self.log,\n sqlQuery=sqlQuery,\n dbConn=self.cataloguesDbConn,\n quiet=False\n )\n try:\n highestId = thisId[0][\"thisId\"] + 1\n except:\n highestId = 1\n\n sqlQuery = u\"\"\"\n SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_TYPE='VIEW' AND TABLE_SCHEMA like '%%catalogues%%' and TABLE_NAME like \"tcs_view%%\" and TABLE_NAME not like \"%%helper%%\";\n \"\"\" % locals()\n tablesInDatabase = readquery(\n log=self.log,\n sqlQuery=sqlQuery,\n dbConn=self.cataloguesDbConn,\n quiet=False\n )\n\n sqlQuery = u\"\"\"\n SELECT view_name FROM tcs_helper_catalogue_views_info;\n \"\"\" % locals()\n tableList = readquery(\n log=self.log,\n sqlQuery=sqlQuery,\n dbConn=self.cataloguesDbConn,\n quiet=False\n )\n tbList = []\n for tb in tableList:\n tbList.append(tb[\"view_name\"])\n\n for tb in tablesInDatabase:\n if tb[\"TABLE_NAME\"] not in tbList:\n thisViewName = tb[\"TABLE_NAME\"]\n print \"`%(thisViewName)s` added to `tcs_helper_catalogue_views_info` database table\" % locals()\n sqlQuery = u\"\"\"\n INSERT INTO tcs_helper_catalogue_views_info (\n id,\n view_name\n )\n VALUES (\n %(highestId)s,\n \"%(thisViewName)s\"\n )\"\"\" % locals()\n writequery(\n log=self.log,\n sqlQuery=sqlQuery,\n dbConn=self.cataloguesDbConn,\n )\n highestId += 1\n\n self.log.debug(\n 'completed the ``_update_tcs_helper_catalogue_views_info_with_new_views`` method')\n return None", "response": "update tcs helper catalogue tables info 
with new tables"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef restart_listener(self, topics):\n '''Restart listener after configuration update.\n '''\n if self.listener is not None:\n if self.listener.running:\n self.stop()\n self.__init__(topics=topics)", "response": "Restart listener after configuration update."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nstops the listener thread.", "response": "def stop(self):\n '''Stop listener.'''\n self.logger.debug(\"Stopping listener.\")\n self.listener.stop()\n if self.thread is not None:\n self.thread.join()\n self.thread = None\n self.logger.debug(\"Listener stopped.\")"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef create_subscriber(self):\n '''Create a subscriber instance using specified addresses and\n message types.\n '''\n if self.subscriber is None:\n if self.topics:\n self.subscriber = NSSubscriber(self.services, self.topics,\n addr_listener=True,\n addresses=self.addresses,\n nameserver=self.nameserver)\n self.recv = self.subscriber.start().recv", "response": "Create a subscriber instance using specified addresses and message types."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef stop(self):\n '''Stop subscriber and delete the instance\n '''\n self.running = False\n time.sleep(1)\n if self.subscriber is not None:\n self.subscriber.stop()\n self.subscriber = None", "response": "Stop subscriber and delete the instance"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngives a dictionary of all version data available determine the current version", "response": "def get_current_semver(data):\n \"\"\"Given a dictionary of all version data available, determine the current version\"\"\"\n # get the not-none values from data\n known = {\n key: 
data.get(alias)\n for key, alias in config._forward_aliases.items()\n if data.get(alias) is not None\n }\n\n # prefer the strict field, if available\n potentials = [\n known.pop(Constants.VERSION_STRICT_FIELD, None),\n known.pop(Constants.VERSION_FIELD, None),\n ]\n\n from_components = [known.get(k) for k in SemVerSigFig._fields if k in known]\n if len(from_components) == 3:\n potentials.append(\".\".join(from_components))\n\n versions = set()\n for potential in potentials:\n if not potential:\n continue\n match = re_semver.match(potential)\n if match:\n parts = match.groupdict()\n parts.pop(\"tail\")\n versions.add(SemVer(**parts))\n\n if len(versions) > 1:\n raise ValueError(\"conflicting versions within project: %s\" % versions)\n\n if not versions:\n _LOG.debug(\"key pairs found: \\n%r\", known)\n raise ValueError(\"could not find existing semver\")\n return versions.pop()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ndefining how to increment semver based on which significant figure is triggered", "response": "def make_new_semver(current_semver, all_triggers, **overrides):\n \"\"\"Defines how to increment semver based on which significant figure is triggered\"\"\"\n new_semver = {}\n bumped = False\n for sig_fig in SemVerSigFig: # iterate sig figs in order of significance\n value = getattr(current_semver, sig_fig)\n override = overrides.get(sig_fig)\n if override is not None:\n new_semver[sig_fig] = override\n if int(override) > int(value):\n bumped = True\n elif bumped:\n new_semver[sig_fig] = \"0\"\n elif sig_fig in all_triggers:\n new_semver[sig_fig] = str(int(value) + 1)\n bumped = True\n else:\n new_semver[sig_fig] = value\n return SemVer(**new_semver)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nparsing the output of a traceroute execution into a tree structure.", "response": "def loads(data):\n \"\"\"Parser entry point. 
Parses the output of a traceroute execution\"\"\"\n data += \"\\n_EOS_\" # Append EOS token. Helps to match last RE_HOP\n\n # Get headers\n match_dest = RE_HEADER.search(data)\n dest_name = match_dest.group(1)\n dest_ip = match_dest.group(2)\n\n # The Traceroute is the root of the tree.\n traceroute = Traceroute(dest_name, dest_ip)\n\n # Get hops\n matches_hop = RE_HOP.findall(data)\n\n for match_hop in matches_hop:\n # Initialize a hop\n idx = int(match_hop[0])\n if match_hop[1]:\n asn = int(match_hop[1])\n else:\n asn = None\n hop = Hop(idx, asn)\n\n # Parse probes data: | <(IP)> | | 'ms' | '*'\n probes_data = match_hop[2].split()\n # Get rid of 'ms': | <(IP)> | | '*'\n probes_data = filter(lambda s: s.lower() != 'ms', probes_data)\n\n i = 0\n while i < len(probes_data):\n # For each hop parse probes\n name = None\n ip = None\n rtt = None\n anno = ''\n\n # RTT check comes first because RE_PROBE_NAME can confuse rtt with an IP as name\n # The regex RE_PROBE_NAME can be improved\n if RE_PROBE_RTT.match(probes_data[i]):\n # Matched rtt, so name and IP have been parsed before\n rtt = float(probes_data[i])\n i += 1\n elif RE_PROBE_NAME.match(probes_data[i]):\n # Matched a name, so next elements are IP and rtt\n name = probes_data[i]\n ip = probes_data[i+1].strip('()')\n rtt = float(probes_data[i+2])\n i += 3\n elif RE_PROBE_TIMEOUT.match(probes_data[i]):\n # Its a timeout, so maybe name and IP have been parsed before\n # or maybe not. 
But it's Hop job to deal with it\n rtt = None\n i += 1\n else:\n ext = \"i: %d\\nprobes_data: %s\\nname: %s\\nip: %s\\nrtt: %s\\nanno: %s\" % (i, probes_data, name, ip, rtt, anno)\n raise ParseError(\"Parse error \\n%s\" % ext)\n # Check for annotation\n try:\n if RE_PROBE_ANNOTATION.match(probes_data[i]):\n anno = probes_data[i]\n i += 1\n except IndexError:\n pass\n\n probe = Probe(name, ip, rtt, anno)\n hop.add_probe(probe)\n\n traceroute.add_hop(hop)\n\n return traceroute"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nadding a Probe instance to this hop s results.", "response": "def add_probe(self, probe):\n \"\"\"Adds a Probe instance to this hop's results.\"\"\"\n if self.probes:\n probe_last = self.probes[-1]\n if not probe.ip:\n probe.ip = probe_last.ip\n probe.name = probe_last.name\n self.probes.append(probe)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nfunction decorator to make function synchronized on self. _lock.", "response": "def synchronized(func, *args, **kwargs):\n \"\"\"Function decorator to make function synchronized on ``self._lock``.\n\n If the first argument to the function (hopefully self) does not have a _lock\n attribute, then this decorator does nothing.\n \"\"\"\n if not (args and hasattr(args[0], '_lock')):\n return func(*args, **kwargs)\n with args[0]._lock: # pylint: disable=W0212\n return func(*args, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef normalize_job_id(job_id):\n\tif not isinstance(job_id, uuid.UUID):\n\t\tjob_id = uuid.UUID(job_id)\n\treturn job_id", "response": "Normalizes a value to a job id."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns a : py : class : datetime. 
datetime instance representing the current time.", "response": "def now(self):\n\t\t\"\"\"\n\t\tReturn a :py:class:`datetime.datetime` instance representing the current time.\n\n\t\t:rtype: :py:class:`datetime.datetime`\n\t\t\"\"\"\n\t\tif self.use_utc:\n\t\t\treturn datetime.datetime.utcnow()\n\t\telse:\n\t\t\treturn datetime.datetime.now()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nstart the JobManager thread.", "response": "def start(self):\n\t\t\"\"\"\n\t\tStart the JobManager thread.\n\t\t\"\"\"\n\t\tif self._thread_running.is_set():\n\t\t\traise RuntimeError('the JobManager has already been started')\n\t\tself._thread.start()\n\t\tself._thread_running.wait()\n\t\treturn"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nstops the JobManager thread.", "response": "def stop(self):\n\t\t\"\"\"\n\t\tStop the JobManager thread.\n\t\t\"\"\"\n\t\tself.logger.debug('stopping the job manager')\n\t\tself._thread_running.clear()\n\t\tself._thread_shutdown.wait()\n\t\tself._job_lock.acquire()\n\t\tself.logger.debug('waiting on ' + str(len(self._jobs)) + ' job threads')\n\t\tfor job_desc in self._jobs.values():\n\t\t\tif job_desc['job'] is None:\n\t\t\t\tcontinue\n\t\t\tif not job_desc['job'].is_alive():\n\t\t\t\tcontinue\n\t\t\tjob_desc['job'].join()\n\n\t\t# the job lock must be released before the thread can be joined because the thread routine acquires it before\n\t\t# checking if it should exit, see https://github.com/zeroSteiner/smoke-zephyr/issues/4 for more details\n\t\tself._job_lock.release()\n\t\tself._thread.join()\n\n\t\tself.logger.info('the job manager has been stopped')\n\t\treturn"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nadding a job and run it once immediately.", "response": "def job_run(self, callback, parameters=None):\n\t\t\"\"\"\n\t\tAdd a job and run it once immediately.\n\n\t\t:param function callback: The function to run 
asynchronously.\n\t\t:param parameters: The parameters to be provided to the callback.\n\t\t:type parameters: list, tuple\n\t\t:return: The job id.\n\t\t:rtype: :py:class:`uuid.UUID`\n\t\t\"\"\"\n\t\tif not self._thread_running.is_set():\n\t\t\traise RuntimeError('the JobManager is not running')\n\t\tparameters = (parameters or ())\n\t\tif not isinstance(parameters, (list, tuple)):\n\t\t\tparameters = (parameters,)\n\t\tjob_desc = {}\n\t\tjob_desc['job'] = JobRun(callback, parameters)\n\t\tjob_desc['last_run'] = None\n\t\tjob_desc['run_every'] = datetime.timedelta(0, 1)\n\t\tjob_desc['callback'] = callback\n\t\tjob_desc['parameters'] = parameters\n\t\tjob_desc['enabled'] = True\n\t\tjob_desc['tolerate_exceptions'] = False\n\t\tjob_desc['run_count'] = 0\n\t\tjob_desc['expiration'] = 0\n\t\tjob_id = uuid.uuid4()\n\t\tself.logger.info('adding new job with id: ' + str(job_id) + ' and callback function: ' + callback.__name__)\n\t\twith self._job_lock:\n\t\t\tself._jobs[job_id] = job_desc\n\t\t\tself._job_execute(job_id)\n\t\treturn job_id"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nadding a job to the job manager.", "response": "def job_add(self, callback, parameters=None, hours=0, minutes=0, seconds=0, tolerate_exceptions=True, expiration=None):\n\t\t\"\"\"\n\t\tAdd a job to the job manager.\n\n\t\t:param function callback: The function to run asynchronously.\n\t\t:param parameters: The parameters to be provided to the callback.\n\t\t:type parameters: list, tuple\n\t\t:param int hours: Number of hours to sleep between running the callback.\n\t\t:param int minutes: Number of minutes to sleep between running the callback.\n\t\t:param int seconds: Number of seconds to sleep between running the callback.\n\t\t:param bool tolerate_exceptions: Whether to continue running a job after it has thrown an exception.\n\t\t:param expiration: When to expire and remove the job. If an integer\n\t\t\tis provided, the job will be executed that many times. 
If a\n\t\t\tdatetime or timedelta instance is provided, then the job will\n\t\t\tbe removed after the specified time.\n\t\t:type expiration: int, :py:class:`datetime.timedelta`, :py:class:`datetime.datetime`\n\t\t:return: The job id.\n\t\t:rtype: :py:class:`uuid.UUID`\n\t\t\"\"\"\n\t\tif not self._thread_running.is_set():\n\t\t\traise RuntimeError('the JobManager is not running')\n\t\tparameters = (parameters or ())\n\t\tif not isinstance(parameters, (list, tuple)):\n\t\t\tparameters = (parameters,)\n\t\tjob_desc = {}\n\t\tjob_desc['job'] = JobRun(callback, parameters)\n\t\tjob_desc['last_run'] = None\n\t\tjob_desc['run_every'] = datetime.timedelta(0, ((hours * 60 * 60) + (minutes * 60) + seconds))\n\t\tjob_desc['callback'] = callback\n\t\tjob_desc['parameters'] = parameters\n\t\tjob_desc['enabled'] = True\n\t\tjob_desc['tolerate_exceptions'] = tolerate_exceptions\n\t\tjob_desc['run_count'] = 0\n\t\tif isinstance(expiration, int):\n\t\t\tjob_desc['expiration'] = expiration\n\t\telif isinstance(expiration, datetime.timedelta):\n\t\t\tjob_desc['expiration'] = self.now() + expiration\n\t\telif isinstance(expiration, datetime.datetime):\n\t\t\tjob_desc['expiration'] = expiration\n\t\telse:\n\t\t\tjob_desc['expiration'] = None\n\t\tjob_id = uuid.uuid4()\n\t\tself.logger.info('adding new job with id: ' + str(job_id) + ' and callback function: ' + callback.__name__)\n\t\twith self._job_lock:\n\t\t\tself._jobs[job_id] = job_desc\n\t\treturn job_id"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning the number of enabled jobs.", "response": "def job_count_enabled(self):\n\t\t\"\"\"\n\t\tReturn the number of enabled jobs.\n\n\t\t:return: The number of jobs that are enabled.\n\t\t:rtype: int\n\t\t\"\"\"\n\t\tenabled = 0\n\t\tfor job_desc in self._jobs.values():\n\t\t\tif job_desc['enabled']:\n\t\t\t\tenabled += 1\n\t\treturn enabled"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the 
documentation\ndef job_enable(self, job_id):\n\t\tjob_id = normalize_job_id(job_id)\n\t\twith self._job_lock:\n\t\t\tjob_desc = self._jobs[job_id]\n\t\t\tjob_desc['enabled'] = True", "response": "Enable a job.\n\n\t\t:param job_id: Job identifier to enable.\n\t\t:type job_id: :py:class:`uuid.UUID`"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef job_disable(self, job_id):\n\t\tjob_id = normalize_job_id(job_id)\n\t\twith self._job_lock:\n\t\t\tjob_desc = self._jobs[job_id]\n\t\t\tjob_desc['enabled'] = False", "response": "Disable a job. Disabled jobs will not be executed.\n\n\t\t:param job_id: Job identifier to disable.\n\t\t:type job_id: :py:class:`uuid.UUID`"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ndeletes a job. :param job_id: Job identifier to delete. :type job_id: :py:class:`uuid.UUID` :param bool wait: If the job is currently running, wait for it to complete before deleting it.", "response": "def job_delete(self, job_id, wait=True):\n\t\t\"\"\"\n\t\tDelete a job.\n\n\t\t:param job_id: Job identifier to delete.\n\t\t:type job_id: :py:class:`uuid.UUID`\n\t\t:param bool wait: If the job is currently running, wait for it to complete before deleting it.\n\t\t\"\"\"\n\t\tjob_id = normalize_job_id(job_id)\n\t\tself.logger.info('deleting job with id: ' + str(job_id) + ' and callback function: ' + self._jobs[job_id]['callback'].__name__)\n\t\tjob_desc = self._jobs[job_id]\n\t\twith self._job_lock:\n\t\t\tjob_desc['enabled'] = False\n\t\t\tif wait and self.job_is_running(job_id):\n\t\t\t\tjob_desc['job'].join()\n\t\t\tdel self._jobs[job_id]"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncheck if a job is enabled.", "response": "def job_is_enabled(self, job_id):\n\t\t\"\"\"\n\t\tCheck if a job is enabled.\n\n\t\t:param job_id: Job identifier to check the status of.\n\t\t:type job_id: :py:class:`uuid.UUID`\n\t\t:rtype: 
bool\n\t\t\"\"\"\n\t\tjob_id = normalize_job_id(job_id)\n\t\tjob_desc = self._jobs[job_id]\n\t\treturn job_desc['enabled']"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nchecks if a job is currently running. False is returned if the job does not exist.", "response": "def job_is_running(self, job_id):\n\t\t\"\"\"\n\t\tCheck if a job is currently running. False is returned if the job does\n\t\tnot exist.\n\n\t\t:param job_id: Job identifier to check the status of.\n\t\t:type job_id: :py:class:`uuid.UUID`\n\t\t:rtype: bool\n\t\t\"\"\"\n\t\tjob_id = normalize_job_id(job_id)\n\t\tif job_id not in self._jobs:\n\t\t\treturn False\n\t\tjob_desc = self._jobs[job_id]\n\t\tif job_desc['job']:\n\t\t\treturn job_desc['job'].is_alive()\n\t\treturn False"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef bin_b64_type(arg):\n\ttry:\n\t\targ = base64.standard_b64decode(arg)\n\texcept (binascii.Error, TypeError):\n\t\traise argparse.ArgumentTypeError(\"{0} is invalid base64 data\".format(repr(arg)))\n\treturn arg", "response": "An argparse type representing binary data encoded in base64."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef bin_hex_type(arg):\n\tif re.match(r'^[a-f0-9]{2}(:[a-f0-9]{2})+$', arg, re.I):\n\t\targ = arg.replace(':', '')\n\telif re.match(r'^(\\\\x[a-f0-9]{2})+$', arg, re.I):\n\t\targ = arg.replace('\\\\x', '')\n\ttry:\n\t\targ = binascii.a2b_hex(arg)\n\texcept (binascii.Error, TypeError):\n\t\traise argparse.ArgumentTypeError(\"{0} is invalid hex data\".format(repr(arg)))\n\treturn arg", "response": "An argparse type representing binary data encoded in hex."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef dir_type(arg):\n\tif not os.path.isdir(arg):\n\t\traise argparse.ArgumentTypeError(\"{0} is not a valid 
directory\".format(repr(arg)))\n\treturn arg", "response": "An argparse type representing a valid directory."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef email_type(arg):\n\tif not is_valid_email_address(arg):\n\t\traise argparse.ArgumentTypeError(\"{0} is not a valid email address\".format(repr(arg)))\n\treturn arg", "response": "An argparse type representing an email address."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef log_level_type(arg):\n\tif not arg.upper() in ('NOTSET', 'DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL'):\n\t\traise argparse.ArgumentTypeError(\"{0} is not a valid log level\".format(repr(arg)))\n\treturn getattr(logging, arg.upper())", "response": "An argparse type representing a logging level."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef port_type(arg):\n\terror_msg = \"{0} is not a valid port\".format(repr(arg))\n\ttry:\n\t\targ = ast.literal_eval(arg)\n\texcept ValueError:\n\t\traise argparse.ArgumentTypeError(error_msg)\n\tif arg < 0 or arg > 65535:\n\t\traise argparse.ArgumentTypeError(error_msg)\n\treturn arg", "response": "An argparse type representing a tcp or udp port number."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef timespan_type(arg):\n\ttry:\n\t\targ = parse_timespan(arg)\n\texcept ValueError:\n\t\traise argparse.ArgumentTypeError(\"{0} is not a valid time span\".format(repr(arg)))\n\treturn arg", "response": "An argparse type representing a timespan such as 6h for 6 hours."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef search(self):\n self.log.debug('starting the ``search`` method')\n\n # ACCOUNT FOR TYPE OF SEARCH\n sqlWhere = False\n magnitudeLimitFilter = self.magnitudeLimitFilter\n disCols = [\"zColName\",\n 
\"distanceColName\", \"semiMajorColName\"]\n        sqlWhere = \"\"\n\n        if self.physicalSearch == True:\n            for d in disCols:\n                colName = self.colMaps[self.tableName][d]\n                if colName:\n                    sqlWhere += \" or `%(colName)s` is not null\" % locals()\n            if len(sqlWhere):\n                sqlWhere = \" and (\" + sqlWhere[4:] + \")\"\n\n        if self.upperMagnitudeLimit != False and self.upperMagnitudeLimit and not self.lowerMagnitudeLimit:\n            mag = self.upperMagnitudeLimit\n            sqlWhere += \" and `%(magnitudeLimitFilter)s` > %(mag)s\" % locals()\n        if self.lowerMagnitudeLimit != False and self.lowerMagnitudeLimit and not self.upperMagnitudeLimit:\n            mag = self.lowerMagnitudeLimit\n            sqlWhere += \" and `%(magnitudeLimitFilter)s` < %(mag)s\" % locals()\n        # THE GENERAL (INBETWEEN) CASE\n        if self.lowerMagnitudeLimit != False and self.lowerMagnitudeLimit and self.upperMagnitudeLimit != False and self.upperMagnitudeLimit:\n            upperMagnitudeLimit = self.upperMagnitudeLimit\n            lowerMagnitudeLimit = self.lowerMagnitudeLimit\n            sqlWhere += \" and ((`%(magnitudeLimitFilter)s` > %(upperMagnitudeLimit)s and `%(magnitudeLimitFilter)s` < %(lowerMagnitudeLimit)s) or `%(magnitudeLimitFilter)s` is null)\" % locals()\n\n        if sqlWhere and \" and\" == sqlWhere[0:4]:\n            sqlWhere = sqlWhere[5:]\n\n        # THE COLUMN MAP LIFTED FROM ``tcs_helper_catalogue_tables_info`` TABLE\n        # IN CATALOGUE DATABASE (COLUMN NAMES ENDING WITH 'ColName')\n        columns = {}\n        for k, v in self.colMaps[self.tableName].items():\n            name = k.replace(\"ColName\", \"\")\n            if \"colname\" in k.lower() and v:\n                columns[k] = \"`%(v)s` as `%(name)s`\" % locals()\n        columns = \", \".join(columns.values())\n\n        cs = hmptyConesearch(\n            log=self.log,\n            dbConn=self.dbConn,\n            tableName=self.tableName,\n            columns=columns,\n            ra=self.ra,\n            dec=self.dec,\n            radiusArcsec=self.radius,\n            separations=True,\n            distinct=False,\n            sqlWhere=sqlWhere,\n            closest=self.nearestOnly,\n            raCol=\"ra\",\n            decCol=\"dec\"\n        )\n        matchIndies, matches = cs.search()\n\n        # MATCHES ARE NOT NECESSARILY UNIQUE IF MANY 
TRANSIENTS MATCH ONE SOURCE\n        uniqueMatchDicts = []\n        uniqueMatchDicts[:] = [copy.copy(d) for d in matches.list]\n\n        self.log.debug('completed the ``search`` method')\n        return matchIndies, uniqueMatchDicts", "response": "search the catalogue around the given transient locations and return the match indices and the list of unique matched catalogue sources"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nperform an angular crossmatch against a given catalogue", "response": "def angular_crossmatch_against_catalogue(\n            self,\n            objectList,\n            searchPara={},\n            search_name=\"\",\n            brightnessFilter=False,\n            physicalSearch=False,\n            classificationType=False\n    ):\n        \"\"\"*perform an angular separation crossmatch against a given catalogue in the database and annotate the crossmatch with some value added parameters (distances, physical separations, sub-type of transient etc)*\n\n        **Key Arguments:**\n            - ``objectList`` -- the list of transient locations to match against the crossmatch catalogue\n            - ``searchPara`` -- the search parameters for this individual search as lifted from the search algorithm in the sherlock settings file\n            - ``search_name`` -- the name of the search as given in the sherlock settings file\n            - ``brightnessFilter`` -- is this search to be constrained by magnitude of the catalogue sources? Default *False*. [bright|faint|general]\n            - ``physicalSearch`` -- is this angular search a sub-part of a physical separation search\n            - ``classificationType`` -- synonym, association or annotation. Default *False*\n\n        **Return:**\n            - matchedObjects -- any sources matched against the object\n\n        **Usage:**\n\n            Take a list of transients from somewhere\n\n            .. 
code-block:: python \n\n\n transients = [\n {'ps1_designation': u'PS1-14aef',\n 'name': u'4L3Piiq',\n 'detection_list_id': 2,\n 'local_comments': u'',\n 'ra': 0.02548233704918263,\n 'followup_id': 2065412L,\n 'dec': -4.284933417540423,\n 'id': 1000006110041705700L,\n 'object_classification': 0L\n },\n\n {'ps1_designation': u'PS1-13dcr',\n 'name': u'3I3Phzx',\n 'detection_list_id': 2,\n 'local_comments': u'',\n 'ra': 4.754236999477372,\n 'followup_id': 1140386L,\n 'dec': 28.276703631398625,\n 'id': 1001901011281636100L,\n 'object_classification': 0L\n },\n\n {'ps1_designation': u'PS1-13dhc',\n 'name': u'3I3Pixd',\n 'detection_list_id': 2,\n 'local_comments': u'',\n 'ra': 1.3324973428505413,\n 'followup_id': 1202386L,\n 'dec': 32.98869220595689,\n 'id': 1000519791325919200L,\n 'object_classification': 0L\n }\n ]\n\n Then run the ``angular_crossmatch_against_catalogue`` method to crossmatch against the catalogues and return results:\n\n .. code-block:: python \n\n # ANGULAR CONESEARCH ON CATALOGUE\n search_name = \"ned_d spec sn\"\n searchPara = self.settings[\"search algorithm\"][search_name]\n matchedObjects = xmatcher.angular_crossmatch_against_catalogue(\n objectList=transients,\n searchPara=searchPara,\n search_name=search_name\n )\n\n .. 
todo ::\n\n - update key arguments values and definitions with defaults\n - update return values and definitions\n - update usage examples and text\n - update docstring text\n - check sublime snippet exists\n - clip any useful text to docs mindmap\n - regenerate the docs and check redendering of this docstring\n \"\"\"\n self.log.debug(\n 'starting the ``angular_crossmatch_against_catalogue`` method')\n\n self.log.info(\"STARTING %s SEARCH\" %\n (search_name,))\n\n start_time = time.time()\n\n # DEFAULTS\n\n # print search_name, classificationType\n\n magnitudeLimitFilter = None\n upperMagnitudeLimit = False\n lowerMagnitudeLimit = False\n\n catalogueName = searchPara[\"database table\"]\n\n if not \"mag column\" in searchPara:\n searchPara[\"mag column\"] = None\n\n if brightnessFilter:\n if \"mag column\" in searchPara and searchPara[\"mag column\"]:\n magnitudeLimitFilter = self.colMaps[\n catalogueName][searchPara[\"mag column\"] + \"ColName\"]\n theseSearchPara = searchPara[brightnessFilter]\n else:\n theseSearchPara = searchPara\n\n # EXTRACT PARAMETERS FROM ARGUMENTS & SETTINGS FILE\n if classificationType == \"synonym\":\n radius = self.settings[\"synonym radius arcsec\"]\n matchedType = theseSearchPara[\"synonym\"]\n elif classificationType == \"association\":\n radius = theseSearchPara[\"angular radius arcsec\"]\n matchedType = theseSearchPara[\"association\"]\n elif classificationType == \"annotation\":\n radius = theseSearchPara[\"angular radius arcsec\"]\n matchedType = theseSearchPara[\"annotation\"]\n\n if brightnessFilter == \"faint\":\n upperMagnitudeLimit = theseSearchPara[\"mag limit\"]\n elif brightnessFilter == \"bright\":\n lowerMagnitudeLimit = theseSearchPara[\"mag limit\"]\n elif brightnessFilter == \"general\":\n if \"faint\" in searchPara:\n lowerMagnitudeLimit = searchPara[\"faint\"][\"mag limit\"]\n if \"bright\" in searchPara:\n upperMagnitudeLimit = searchPara[\"bright\"][\"mag limit\"]\n\n # VARIABLES\n matchedObjects = []\n 
matchSubset = []\n\n transRAs = []\n transRAs[:] = [t['ra'] for t in objectList]\n transDecs = []\n transDecs[:] = [t['dec'] for t in objectList]\n\n if len(transRAs) == 0:\n return []\n\n cs = catalogue_conesearch(\n log=self.log,\n ra=transRAs,\n dec=transDecs,\n radiusArcsec=radius,\n colMaps=self.colMaps,\n tableName=catalogueName,\n dbConn=self.dbConn,\n nearestOnly=False,\n physicalSearch=physicalSearch,\n upperMagnitudeLimit=upperMagnitudeLimit,\n lowerMagnitudeLimit=lowerMagnitudeLimit,\n magnitudeLimitFilter=magnitudeLimitFilter\n )\n\n # catalogueMatches ARE ORDERED BY ANGULAR SEPARATION\n indices, catalogueMatches = cs.search()\n count = 1\n annotatedcatalogueMatches = []\n\n for i, xm in zip(indices, catalogueMatches):\n\n # CALCULATE PHYSICAL PARAMETERS ... IF WE CAN\n if \"cmSepArcsec\" in xm:\n xm[\"separationArcsec\"] = xm[\"cmSepArcsec\"]\n # CALCULATE SEPARATION IN ARCSEC\n\n calculator = separations(\n log=self.log,\n ra1=objectList[i][\"ra\"],\n dec1=objectList[i][\"dec\"],\n ra2=xm[\"ra\"],\n dec2=xm[\"dec\"]\n )\n angularSeparation, north, east = calculator.get()\n\n xm[\"northSeparationArcsec\"] = north\n xm[\"eastSeparationArcsec\"] = east\n del xm[\"cmSepArcsec\"]\n\n xm[\"association_type\"] = matchedType\n xm[\"catalogue_view_name\"] = catalogueName\n xm[\"transient_object_id\"] = objectList[i][\"id\"]\n xm[\"catalogue_table_name\"] = self.colMaps[\n catalogueName][\"description\"]\n xm[\"catalogue_table_id\"] = self.colMaps[\n catalogueName][\"table_id\"]\n xm[\"catalogue_view_id\"] = self.colMaps[\n catalogueName][\"id\"]\n if classificationType == \"synonym\":\n xm[\"classificationReliability\"] = 1\n elif classificationType == \"association\":\n xm[\"classificationReliability\"] = 2\n elif classificationType == \"annotation\":\n xm[\"classificationReliability\"] = 3\n\n xm = self._annotate_crossmatch_with_value_added_parameters(\n crossmatchDict=xm,\n catalogueName=catalogueName,\n searchPara=theseSearchPara,\n 
search_name=search_name\n )\n annotatedcatalogueMatches.append(xm)\n\n catalogueMatches = annotatedcatalogueMatches\n\n # IF BRIGHT STAR SEARCH\n if brightnessFilter == \"bright\" and \"star\" in search_name:\n catalogueMatches = self._bright_star_match(\n matchedObjects=catalogueMatches,\n catalogueName=catalogueName,\n lowerMagnitudeLimit=lowerMagnitudeLimit,\n magnitudeLimitFilter=searchPara[\"mag column\"]\n )\n\n if brightnessFilter == \"general\" and \"galaxy\" in search_name and \"galaxy-like\" not in search_name and \"physical radius kpc\" not in theseSearchPara:\n catalogueMatches = self._galaxy_association_cuts(\n matchedObjects=catalogueMatches,\n catalogueName=catalogueName,\n lowerMagnitudeLimit=lowerMagnitudeLimit,\n upperMagnitudeLimit=upperMagnitudeLimit,\n magnitudeLimitFilter=searchPara[\"mag column\"]\n )\n\n if \"match nearest source only\" in theseSearchPara and theseSearchPara[\"match nearest source only\"] == True and len(catalogueMatches):\n nearestMatches = []\n transList = []\n for c in catalogueMatches:\n if c[\"transient_object_id\"] not in transList:\n transList.append(c[\"transient_object_id\"])\n nearestMatches.append(c)\n catalogueMatches = nearestMatches\n\n self.log.debug(\n 'completed the ``angular_crossmatch_against_catalogue`` method')\n\n self.log.debug(\"FINISHED %s SEARCH IN %0.5f s\" %\n (search_name, time.time() - start_time,))\n\n return catalogueMatches"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _annotate_crossmatch_with_value_added_parameters(\n self,\n crossmatchDict,\n catalogueName,\n searchPara,\n search_name):\n \"\"\"*annotate each crossmatch with physical parameters such are distances etc*\n\n **Key Arguments:**\n - ``crossmatchDict`` -- the crossmatch dictionary\n - ``catalogueName`` -- the name of the catalogue the crossmatch results from\n - ``searchPara`` -- the search parameters for this individual search as lifted from the search 
algorithm in the sherlock settings file\n - ``search_name`` -- the name of the search as given in the sherlock settings file\n\n **Return:**\n - ``crossmatchDict`` -- the annotated crossmatch dictionary\n\n .. todo ::\n\n - update key arguments values and definitions with defaults\n - update return values and definitions\n - update usage examples and text\n - update docstring text\n - check sublime snippet exists\n - clip any useful text to docs mindmap\n - regenerate the docs and check redendering of this docstring\n \"\"\"\n self.log.debug(\n 'starting the ``_annotate_crossmatch_with_value_added_parameters`` method')\n\n redshift = None\n z = None\n scale = None\n distance = None\n distance_modulus = None\n major_axis_arcsec = None\n direct_distance = None\n direct_distance_scale = None\n direct_distance_modulus = None\n\n # IF THERE'S A REDSHIFT, CALCULATE PHYSICAL PARAMETERS\n if 'z' in crossmatchDict:\n # THE CATALOGUE HAS A REDSHIFT COLUMN\n redshift = crossmatchDict['z']\n elif 'photoZ' in crossmatchDict:\n redshift = crossmatchDict['photoZ']\n if redshift and redshift > 0.0:\n # CALCULATE DISTANCE MODULUS, ETC\n c = converter(log=self.log)\n dists = c.redshift_to_distance(\n z=redshift,\n WM=0.3,\n WV=0.7,\n H0=70.0\n )\n\n if dists:\n z = dists['z']\n scale = dists[\"da_scale\"]\n distance = dists[\"dl_mpc\"]\n distance_modulus = dists[\"dmod\"]\n # ADD MAJOR AXIS VALUE\n if \"or within semi major axis\" in searchPara and searchPara[\"or within semi major axis\"] == True and \"semiMajor\" in crossmatchDict and crossmatchDict[\"semiMajor\"]:\n major_axis_arcsec = crossmatchDict[\n \"semiMajor\"] * self.colMaps[catalogueName][\"semiMajorToArcsec\"]\n\n if \"semiMajor\" in crossmatchDict:\n del crossmatchDict[\"semiMajor\"]\n # ADD DISTANCE VALUES\n if \"distance\" in crossmatchDict and crossmatchDict[\"distance\"]:\n direct_distance = crossmatchDict[\"distance\"]\n direct_distance_scale = direct_distance / 206.264806\n direct_distance_modulus = 5 * \\\n 
math.log10(direct_distance * 1e6) - 5\n # crossmatchDict['z'] = z\n crossmatchDict['scale'] = scale\n crossmatchDict['distance'] = distance\n crossmatchDict['distance_modulus'] = distance_modulus\n crossmatchDict['major_axis_arcsec'] = major_axis_arcsec\n crossmatchDict['direct_distance'] = direct_distance\n crossmatchDict['direct_distance_scale'] = direct_distance_scale\n crossmatchDict['direct_distance_modulus'] = direct_distance_modulus\n\n crossmatchDict['catalogue_object_type'] = self.colMaps[\n catalogueName][\"object_type\"]\n crossmatchDict[\"search_name\"] = search_name\n crossmatchDict[\"raDeg\"] = crossmatchDict[\"ra\"]\n\n crossmatchDict[\"decDeg\"] = crossmatchDict[\"dec\"]\n del crossmatchDict[\"ra\"]\n del crossmatchDict[\"dec\"]\n crossmatchDict[\"original_search_radius_arcsec\"] = searchPara[\n \"angular radius arcsec\"]\n\n physical_separation_kpc = None\n # CALCULATE MOST ACCURATE PHYSICAL SEPARATION\n if crossmatchDict[\"direct_distance_scale\"]:\n physical_separation_kpc = crossmatchDict[\n \"direct_distance_scale\"] * crossmatchDict[\"separationArcsec\"]\n elif crossmatchDict[\"scale\"]:\n physical_separation_kpc = crossmatchDict[\n \"scale\"] * crossmatchDict[\"separationArcsec\"]\n\n crossmatchDict[\"physical_separation_kpc\"] = physical_separation_kpc\n\n self.log.debug(\n 'completed the ``_annotate_crossmatch_with_value_added_parameters`` method')\n return crossmatchDict", "response": "This method is used to annotate each crossmatch with physical parameters."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nperforms a bright star match on the matched objects", "response": "def _bright_star_match(\n self,\n matchedObjects,\n catalogueName,\n magnitudeLimitFilter,\n lowerMagnitudeLimit):\n \"\"\"*perform a bright star match on the crossmatch results if required by the catalogue search*\n\n **Key Arguments:**\n - ``matchedObjects`` -- the list of matched sources from the catalogue crossmatch\n 
- ``catalogueName`` -- the name of the catalogue the crossmatch results from\n - ``magnitudeLimitFilter`` -- the name of the column containing the magnitude to filter on\n - ``lowerMagnitudeLimit`` -- the lower magnitude limit to match bright stars against\n\n **Return:**\n - ``brightStarMatches`` -- the trimmed matched sources (bright stars associations only)\n\n .. todo ::\n\n - update key arguments values and definitions with defaults\n - update return values and definitions\n - update usage examples and text\n - update docstring text\n - check sublime snippet exists\n - clip any useful text to docs mindmap\n - regenerate the docs and check redendering of this docstring\n \"\"\"\n self.log.debug('starting the ``_bright_star_match`` method')\n\n import decimal\n decimal.getcontext().prec = 10\n\n # MATCH BRIGHT STAR ASSOCIATIONS\n brightStarMatches = []\n for row in matchedObjects:\n mag = decimal.Decimal(row[magnitudeLimitFilter])\n if mag and mag < lowerMagnitudeLimit:\n sep = decimal.Decimal(row[\"separationArcsec\"])\n if sep < decimal.Decimal(decimal.Decimal(10)**(-decimal.Decimal(0.2) * mag + decimal.Decimal(3.7))) and sep < 20.:\n brightStarMatches.append(row)\n\n self.log.debug('completed the ``_bright_star_match`` method')\n return brightStarMatches"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nperforming a bright star match on the matched objects and return a list of galaxies that match the given values.", "response": "def _galaxy_association_cuts(\n self,\n matchedObjects,\n catalogueName,\n magnitudeLimitFilter,\n upperMagnitudeLimit,\n lowerMagnitudeLimit):\n \"\"\"*perform a bright star match on the crossmatch results if required by the catalogue search*\n\n **Key Arguments:**\n - ``matchedObjects`` -- the list of matched sources from the catalogue crossmatch\n - ``catalogueName`` -- the name of the catalogue the crossmatch results from\n - ``magnitudeLimitFilter`` -- the name of the column containing the magnitude to 
filter on\n - ``lowerMagnitudeLimit`` -- the lower magnitude limit to match general galaxies against\n - ``upperMagnitudeLimit`` -- the upper magnitude limit to match general galaxies against\n\n **Return:**\n - ``galaxyMatches`` -- the trimmed matched sources (associated galaxies only)\n\n .. todo ::\n\n - update key arguments values and definitions with defaults\n - update return values and definitions\n - update usage examples and text\n - update docstring text\n - check sublime snippet exists\n - clip any useful text to docs mindmap\n - regenerate the docs and check redendering of this docstring\n \"\"\"\n self.log.debug('starting the ``_galaxy_association_cuts`` method')\n\n import decimal\n decimal.getcontext().prec = 10\n\n # MATCH BRIGHT STAR ASSOCIATIONS\n galaxyMatches = []\n for row in matchedObjects:\n if not magnitudeLimitFilter or row[magnitudeLimitFilter] == None:\n galaxyMatches.append(row)\n else:\n mag = decimal.Decimal(row[magnitudeLimitFilter])\n if mag and mag < lowerMagnitudeLimit and mag > upperMagnitudeLimit:\n sep = decimal.Decimal(row[\"separationArcsec\"])\n if sep < decimal.Decimal(decimal.Decimal(10)**(decimal.Decimal((decimal.Decimal(25.) 
- mag) / decimal.Decimal(6.)))):\n galaxyMatches.append(row)\n\n self.log.debug('completed the ``_galaxy_association_cuts`` method')\n return galaxyMatches"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nperforming an physical separation crossmatch against a given catalogue", "response": "def physical_separation_crossmatch_against_catalogue(\n self,\n objectList,\n searchPara,\n search_name,\n brightnessFilter=False,\n classificationType=False\n ):\n \"\"\"*perform an physical separation crossmatch against a given catalogue in the database*\n\n This search is basically the same as the angular separation search except extra filtering is done to exclude sources outside the physical search radius (matched sources require distance info to calulate physical separations)\n\n **Key Arguments:**\n - ``objectList`` -- transients to be crossmatched\n - ``searchPara`` -- parameters of the search (from settings file)\n - ``search_name`` -- the name of the search\n - ``brightnessFilter`` -- is this search to be constrained by magnitude of the catalogue sources? Default *False*. [bright|faint|general]\n - ``classificationType`` -- synonym, association or annotation. Default *False*\n\n **Return:**\n - matchedObjects -- any sources matched against the object\n\n To run a physical separation crossmatch, run in a similar way to the angular separation crossmatch:\n\n **Usage:**\n\n .. code-block:: python \n\n search_name = \"ned spec sn\"\n searchPara = self.settings[\"search algorithm\"][search_name]\n matchedObjects = xmatcher.physical_separation_crossmatch_against_catalogue(\n objectList=transients,\n searchPara=searchPara,\n search_name=search_name\n )\n\n .. 
todo ::\n\n - update key arguments values and definitions with defaults\n - update return values and definitions\n - update usage examples and text\n - update docstring text\n - check sublime snippet exists\n - clip any useful text to docs mindmap\n - regenerate the docs and check redendering of this docstring\n \"\"\"\n self.log.debug(\n 'starting the ``physical_separation_crossmatch_against_catalogue`` method')\n\n start_time = time.time()\n\n bf = brightnessFilter\n # SETUP PARAMETERS\n tableName = searchPara[\"database table\"]\n if bf:\n angularRadius = searchPara[bf][\"angular radius arcsec\"]\n physicalRadius = searchPara[bf][\"physical radius kpc\"]\n matchedType = searchPara[bf][classificationType]\n if \"match nearest source only\" in searchPara[bf]:\n nearestOnly = searchPara[bf][\"match nearest source only\"]\n else:\n nearestOnly = False\n else:\n angularRadius = searchPara[\"angular radius arcsec\"]\n physicalRadius = searchPara[\"physical radius kpc\"]\n matchedType = searchPara[classificationType]\n if \"match nearest source only\" in searchPara:\n nearestOnly = searchPara[\"match nearest source only\"]\n else:\n nearestOnly = False\n\n matchedObjects = []\n matchSubset = []\n\n # RETURN ALL ANGULAR MATCHES BEFORE RETURNING NEAREST PHYSICAL SEARCH\n tmpSearchPara = dict(searchPara)\n tmpSearchPara[\"match nearest source only\"] = False\n\n # ANGULAR CONESEARCH ON CATALOGUE - RETURN ALL MATCHES\n catalogueMatches = self.angular_crossmatch_against_catalogue(\n objectList=objectList,\n searchPara=tmpSearchPara,\n search_name=search_name,\n physicalSearch=True,\n brightnessFilter=brightnessFilter,\n classificationType=classificationType\n )\n\n # OK - WE HAVE SOME ANGULAR SEPARATION MATCHES. 
NOW SEARCH THROUGH THESE FOR MATCHES WITH\n # A PHYSICAL SEPARATION WITHIN THE PHYSICAL RADIUS.\n if catalogueMatches:\n for row in catalogueMatches:\n thisMatch = False\n physical_separation_kpc = row[\"physical_separation_kpc\"]\n newsearch_name = search_name\n\n # FIRST CHECK FOR MAJOR AXIS MEASUREMENT\n # BYPASS NED FAULTY AXES MEASUREMENTS:\n # https://gist.github.com/search?utf8=%E2%9C%93&q=user%3Athespacedoctor+ned\n if row[\"major_axis_arcsec\"] and (\"ned\" not in search_name or (row[\"unkMag\"] and row[\"unkMag\"] < 20.)):\n if row[\"separationArcsec\"] < row[\"major_axis_arcsec\"] * self.settings[\"galaxy radius stetch factor\"]:\n thisMatch = True\n newsearch_name = newsearch_name + \\\n \" (within %s * major axis)\" % (\n self.settings[\"galaxy radius stetch factor\"],)\n newAngularSep = row[\n \"major_axis_arcsec\"] * self.settings[\"galaxy radius stetch factor\"]\n else:\n continue\n # NOW CHECK FOR A DIRECT DISTANCE MEASUREMENT\n elif row[\"direct_distance_scale\"] and physical_separation_kpc < physicalRadius:\n if row[\"separationArcsec\"] > 300.:\n continue\n thisMatch = True\n newsearch_name = newsearch_name + \" (direct distance)\"\n newAngularSep = physicalRadius / \\\n row[\"direct_distance_scale\"]\n # NEW CHECK FOR A REDSHIFT DISTANCE\n elif row[\"scale\"] and physical_separation_kpc < physicalRadius:\n thisMatch = True\n newsearch_name = newsearch_name + \" (redshift distance)\"\n newAngularSep = physicalRadius / row[\"scale\"]\n\n if thisMatch == True:\n row[\"physical_separation_kpc\"] = physical_separation_kpc\n row[\"original_search_radius_arcsec\"] = newAngularSep\n if physical_separation_kpc:\n self.log.debug(\n \"\\t\\tPhysical separation = %.2f kpc\" % (physical_separation_kpc,))\n row[\"search_name\"] = newsearch_name\n matchSubset.append(row)\n\n if matchSubset:\n\n from operator import itemgetter\n matchSubset = sorted(matchSubset, key=itemgetter(\n 'physical_separation_kpc'), reverse=False)\n\n if nearestOnly == True:\n 
theseMatches = matchSubset[0]\n else:\n theseMatches = matchSubset\n\n matchedObjects = matchSubset\n self.log.debug(\n 'completed the ``physical_separation_crossmatch_against_catalogue`` method')\n\n self.log.debug(\"FINISHED %s SEARCH IN %0.5f s\" %\n (search_name, time.time() - start_time,))\n\n return matchedObjects"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngets the host s ip number.", "response": "def get_own_ip():\n \"\"\"Get the host's ip number.\n \"\"\"\n sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)\n try:\n sock.connect((\"8.8.8.8\", 80))\n except socket.gaierror:\n ip_ = \"127.0.0.1\"\n else:\n ip_ = sock.getsockname()[0]\n finally:\n sock.close()\n return ip_"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nsending a message to the related user.", "response": "def send(self, msg):\n \"\"\"Send the given message.\n \"\"\"\n with self._pub_lock:\n self.publish.send_string(msg)\n return self"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef heartbeat(self, min_interval=0):\n if not self._heartbeat:\n self._heartbeat = _PublisherHeartbeat(self)\n self._heartbeat(min_interval)", "response": "Send a heartbeat... 
but only if min_interval seconds have passed since last beat."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef light_curve(self, t, texp=0.0, tol=1e-8, maxdepth=4):\n        t = np.atleast_1d(t)\n        return CythonSolver().simple_light_curve(self._get_params(),\n                                                 t, texp, tol, maxdepth)", "response": "Simple light curve function."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef light_curve_gradient(self, t, texp=0.0, tol=1e-8, maxdepth=4):\n        t = np.atleast_1d(t)\n        f, df = CythonSolver().simple_gradient(self._get_params(),\n                                               t, texp, tol, maxdepth)\n\n        return f, df", "response": "Calculates the gradient of the light curve at a list of times using the current object's model."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef configure_stream_logger(logger='', level=None, formatter='%(levelname)-8s %(message)s'):\n\tlevel = level or logging.WARNING\n\tif isinstance(level, str):\n\t\tlevel = getattr(logging, level, None)\n\t\tif level is None:\n\t\t\traise ValueError('invalid log level: ' + level)\n\troot_logger = logging.getLogger('')\n\tfor handler in root_logger.handlers:\n\t\troot_logger.removeHandler(handler)\n\n\tlogging.getLogger(logger).setLevel(logging.DEBUG)\n\tconsole_log_handler = logging.StreamHandler()\n\tconsole_log_handler.setLevel(level)\n\tif isinstance(formatter, str):\n\t\tformatter = logging.Formatter(formatter)\n\telif not isinstance(formatter, logging.Formatter):\n\t\traise TypeError('formatter must be an instance of logging.Formatter')\n\tconsole_log_handler.setFormatter(formatter)\n\tlogging.getLogger(logger).addHandler(console_log_handler)\n\tlogging.captureWarnings(True)\n\treturn console_log_handler", "response": "Configure the default stream handler for logging messages to the console."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 
3 that\ndownloads a file from a url and save it to disk.", "response": "def download(url, filename=None):\n\t\"\"\"\n\tDownload a file from a url and save it to disk.\n\n\t:param str url: The URL to fetch the file from.\n\t:param str filename: The destination file to write the data to.\n\t\"\"\"\n\t# requirements os, shutil, urllib.parse, urllib.request\n\tif not filename:\n\t\turl_parts = urllib.parse.urlparse(url)\n\t\tfilename = os.path.basename(url_parts.path)\n\turl_h = urllib.request.urlopen(url)\n\twith open(filename, 'wb') as file_h:\n\t\tshutil.copyfileobj(url_h, file_h)\n\turl_h.close()\n\treturn"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ntakes a number of bytes and convert it to a human readable number.", "response": "def format_bytes_size(val):\n\t\"\"\"\n\tTake a number of bytes and convert it to a human readable number.\n\n\t:param int val: The number of bytes to format.\n\t:return: The size in a human readable format.\n\t:rtype: str\n\t\"\"\"\n\tif not val:\n\t\treturn '0 bytes'\n\tfor sz_name in ['bytes', 'KB', 'MB', 'GB', 'TB', 'PB', 'EB']:\n\t\tif val < 1024.0:\n\t\t\treturn \"{0:.2f} {1}\".format(val, sz_name)\n\t\tval /= 1024.0\n\traise OverflowError()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef grep(expression, file, flags=0, invert=False):\n\t# requirements = re\n\tif isinstance(file, str):\n\t\tfile = open(file)\n\tlines = []\n\tfor line in file:\n\t\tif bool(re.search(expression, line, flags=flags)) ^ invert:\n\t\t\tlines.append(line)\n\treturn lines", "response": "Search a file and return a list of all the lines that match a regular expression."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nopen a URI in a platform intelligent way.", "response": "def open_uri(uri):\n\t\"\"\"\n\tOpen a URI in a platform intelligent way. 
On Windows this will use\n\t'cmd.exe /c start' and on Linux this will use gvfs-open or xdg-open\n\tdepending on which is available. If no suitable application can be\n\tfound to open the URI, a RuntimeError will be raised.\n\n\t.. versionadded:: 1.3.0\n\n\t:param str uri: The URI to open.\n\t\"\"\"\n\tclose_fds = True\n\tstartupinfo = None\n\tproc_args = []\n\tif sys.platform.startswith('win'):\n\t\tproc_args.append(which('cmd.exe'))\n\t\tproc_args.append('/c')\n\t\tproc_args.append('start')\n\t\turi = uri.replace('&', '^&')\n\t\tclose_fds = False\n\t\tstartupinfo = subprocess.STARTUPINFO()\n\t\tstartupinfo.dwFlags |= subprocess.STARTF_USESHOWWINDOW\n\t\tstartupinfo.wShowWindow = subprocess.SW_HIDE\n\telif which('gvfs-open'):\n\t\tproc_args.append(which('gvfs-open'))\n\telif which('xdg-open'):\n\t\tproc_args.append(which('xdg-open'))\n\telse:\n\t\traise RuntimeError('could not find suitable application to open uri')\n\tproc_args.append(uri)\n\tproc_h = subprocess.Popen(proc_args, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, close_fds=close_fds, startupinfo=startupinfo)\n\treturn proc_h.wait() == 0"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nconverts a string from snake_case to CamelCase.", "response": "def parse_case_snake_to_camel(snake, upper_first=True):\n\t\"\"\"\n\tConvert a string from snake_case to CamelCase.\n\n\t:param str snake: The snake_case string to convert.\n\t:param bool upper_first: Whether or not to capitalize the first\n\t\tcharacter of the string.\n\t:return: The CamelCase version of string.\n\t:rtype: str\n\t\"\"\"\n\tsnake = snake.split('_')\n\tfirst_part = snake[0]\n\tif upper_first:\n\t\tfirst_part = first_part.title()\n\treturn first_part + ''.join(word.title() for word in snake[1:])"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef parse_server(server, default_port):\n\tserver = server.rsplit(':', 
1)\n\thost = server[0]\n\tif host.startswith('[') and host.endswith(']'):\n\t\thost = host[1:-1]\n\tif len(server) == 1:\n\t\treturn (host, default_port)\n\tport = server[1]\n\tif not port:\n\t\tport = default_port\n\telse:\n\t\tport = int(port)\n\treturn (host, port)", "response": "parse_server - Convert a server string to a tuple suitable for passing to connect for the connection."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef parse_timespan(timedef):\n\tif isinstance(timedef, int):\n\t\treturn timedef\n\tconverter_order = ('w', 'd', 'h', 'm', 's')\n\tconverters = {\n\t\t'w': 604800,\n\t\t'd': 86400,\n\t\t'h': 3600,\n\t\t'm': 60,\n\t\t's': 1\n\t}\n\ttimedef = timedef.lower()\n\tif timedef.isdigit():\n\t\treturn int(timedef)\n\telif len(timedef) == 0:\n\t\treturn 0\n\tseconds = -1\n\tfor spec in converter_order:\n\t\ttimedef = timedef.split(spec)\n\t\tif len(timedef) == 1:\n\t\t\ttimedef = timedef[0]\n\t\t\tcontinue\n\t\telif len(timedef) > 2 or not timedef[0].isdigit():\n\t\t\tseconds = -1\n\t\t\tbreak\n\t\tadjustment = converters[spec]\n\t\tseconds = max(seconds, 0)\n\t\tseconds += (int(timedef[0]) * adjustment)\n\t\ttimedef = timedef[1]\n\t\tif not len(timedef):\n\t\t\tbreak\n\tif seconds < 0:\n\t\traise ValueError('invalid time format')\n\treturn seconds", "response": "parse_timespan - Convert a string timespan definition to seconds."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nparsing a string into a slug format suitable for use in URLs and other characters restricted applications.", "response": "def parse_to_slug(words, maxlen=24):\n\t\"\"\"\n\tParse a string into a slug format suitable for use in URLs and other\n\tcharacter restricted applications. 
Only utf-8 strings are supported at this\n\ttime.\n\n\t:param str words: The words to parse.\n\t:param int maxlen: The maximum length of the slug.\n\t:return: The parsed words as a slug.\n\t:rtype: str\n\t\"\"\"\n\tslug = ''\n\tmaxlen = min(maxlen, len(words))\n\tfor c in words:\n\t\tif len(slug) == maxlen:\n\t\t\tbreak\n\t\tc = ord(c)\n\t\tif c == 0x27:\n\t\t\tcontinue\n\t\telif c >= 0x30 and c <= 0x39:\n\t\t\tslug += chr(c)\n\t\telif c >= 0x41 and c <= 0x5a:\n\t\t\tslug += chr(c + 0x20)\n\t\telif c >= 0x61 and c <= 0x7a:\n\t\t\tslug += chr(c)\n\t\telif len(slug) and slug[-1] != '-':\n\t\t\tslug += '-'\n\tif len(slug) and slug[-1] == '-':\n\t\tslug = slug[:-1]\n\treturn slug"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ngenerate a random string of *size* length consisting of mixed case letters and numbers. This function is not meant for cryptographic purposes. :param int size: The length of the string to return. :return: A string consisting of random characters. :rtype: str", "response": "def random_string_alphanumeric(size):\n\t\"\"\"\n\tGenerate a random string of *size* length consisting of mixed case letters\n\tand numbers. This function is not meant for cryptographic purposes.\n\n\t:param int size: The length of the string to return.\n\t:return: A string consisting of random characters.\n\t:rtype: str\n\t\"\"\"\n\t# requirements = random, string\n\treturn ''.join(random.choice(string.ascii_letters + string.digits) for x in range(size))"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ngenerate a random string of *size* length consisting of lowercase letters and numbers. This function is not meant for cryptographic purposes. :param int size: The length of the string to return. :return: A string consisting of random characters. :rtype: str", "response": "def random_string_lower_numeric(size):\n\t\"\"\"\n\tGenerate a random string of *size* length consisting of lowercase letters\n\tand numbers. 
This function is not meant for cryptographic purposes.\n\n\t:param int size: The length of the string to return.\n\t:return: A string consisting of random characters.\n\t:rtype: str\n\t\"\"\"\n\t# requirements = random, string\n\treturn ''.join(random.choice(string.ascii_lowercase + string.digits) for x in range(size))"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncalculates the probability that two random values selected from an arbitrarily sized pool of unique values will be equal. This is commonly known as the Birthday Problem.", "response": "def selection_collision(selections, poolsize):\n\t\"\"\"\n\tCalculate the probability that two random values selected from an arbitrary\n\tsized pool of unique values will be equal. This is commonly known as the\n\t\"Birthday Problem\".\n\n\t:param int selections: The number of random selections.\n\t:param int poolsize: The number of unique random values in the pool to choose from.\n\t:rtype: float\n\t:return: The chance that a collision will occur as a percentage.\n\t\"\"\"\n\t# requirements = sys\n\tprobability = 100.0\n\tpoolsize = float(poolsize)\n\tfor i in range(selections):\n\t\tprobability = probability * (poolsize - i) / poolsize\n\tprobability = (100.0 - probability)\n\treturn probability"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncreates a list or tuple from a provided list or tuple and preserve the order of the unique items in the list or tuple.", "response": "def unique(seq, key=None):\n\t\"\"\"\n\tCreate a unique list or tuple from a provided list or tuple and preserve the\n\torder.\n\n\t:param seq: The list or tuple to preserve unique items from.\n\t:type seq: list, tuple\n\t:param key: If key is provided it will be called during the\n\t\tcomparison process.\n\t:type key: function, None\n\t\"\"\"\n\tif key is None:\n\t\tkey = lambda x: x\n\tpreserved_type = type(seq)\n\tif preserved_type not in (list, 
tuple):\n\t\traise TypeError(\"unique argument 1 must be list or tuple, not {0}\".format(preserved_type.__name__))\n\tseen = []\n\tresult = []\n\tfor item in seq:\n\t\tmarker = key(item)\n\t\tif marker in seen:\n\t\t\tcontinue\n\t\tseen.append(marker)\n\t\tresult.append(item)\n\treturn preserved_type(result)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef weighted_choice(choices, weight):\n\t# requirements = random\n\tweights = []\n\t# get weight values for each of the choices\n\tfor choice in choices:\n\t\tchoice_weight = weight(choice)\n\t\tif not (isinstance(choice_weight, int) and choice_weight > 0):\n\t\t\traise TypeError('weight results must be positive integers')\n\t\tweights.append(choice_weight)\n\n\t# make a selection within the acceptable range\n\tselection = random.randint(0, sum(weights) - 1)\n\n\t# find and return the corresponding choice\n\tfor idx, choice in enumerate(choices):\n\t\tif selection < sum(weights[:idx + 1]):\n\t\t\treturn choice\n\traise RuntimeError('no selection could be made')", "response": "Make a randomly selected choice from the provided choices."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\niterate through an arithmetic progression.", "response": "def xfrange(start, stop=None, step=1):\n\t\"\"\"\n\tIterate through an arithmetic progression.\n\n\t:param start: Starting number.\n\t:type start: float, int, long\n\t:param stop: Stopping number.\n\t:type stop: float, int, long\n\t:param step: Stepping size.\n\t:type step: float, int, long\n\t\"\"\"\n\tif stop is None:\n\t\tstop = start\n\t\tstart = 0.0\n\tstart = float(start)\n\twhile start < stop:\n\t\tyield start\n\t\tstart += step"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nremoving expired items from the cache.", "response": "def cache_clean(self):\n\t\t\"\"\"\n\t\tRemove expired items from the cache.\n\t\t\"\"\"\n\t\tnow = time.time()\n\t\tkeys_for_removal = 
collections.deque()\n\t\tfor key, (_, expiration) in self.__cache.items():\n\t\t\tif expiration < now:\n\t\t\t\tkeys_for_removal.append(key)\n\t\tfor key in keys_for_removal:\n\t\t\tdel self.__cache[key]"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef set(self, option, value):\n\t\tself.config_parser.set(self.section_name, option, value)", "response": "Set an option to an arbitrary value."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreformats an exception by adding a message to it and reporting the original exception name and the exception message.", "response": "def reformat_exception(cls, message, err, *format_args, **format_kwds):\n \"\"\"\n Reformats an exception by adding a message to it and reporting the\n original exception name and message\n \"\"\"\n final_message = message.format(*format_args, **format_kwds)\n final_message = \"{} -- {}: {}\".format(\n final_message,\n type(err).__name__,\n str(err),\n )\n final_message = cls.sanitize_errstr(final_message)\n return final_message"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nhandling errors in the exception and re - packaged exceptions.", "response": "def handle_errors(\n cls, message, *format_args,\n re_raise=True, exception_class=Exception,\n do_finally=None, do_except=None, do_else=None,\n **format_kwds\n ):\n \"\"\"\n provides a context manager that will intercept exceptions and repackage\n them as Buzz instances with a message attached:\n\n .. code-block:: python\n with Buzz.handle_errors(\"It didn't work\"):\n some_code_that_might_raise_an_exception()\n\n :param: message: The message to attach to the raised Buzz\n :param: format_args: Format arguments. Follows str.format conv.\n :param: format_kwds: Format keyword args. 
Follows str.format conv.\n :param: re_raise: If true, the re-packaged exception will be\n raised\n :param: exception_class: Limits the class of exceptions that will be\n re-packaged as a Buzz exception.\n Any other exception types will not be caught\n and re-packaged.\n Defaults to Exception (will handle all\n exceptions)\n :param: do_finally: A function that should always be called at the\n end of the block. Should take no parameters\n :param: do_except: A function that should be called only if there\n was an exception. Should take the raised\n exception as its first parameter, the final\n message for the exception that will be raised\n as its second, and the traceback as its third\n :param: do_else: A function taht should be called only if there\n were no exceptions encountered\n \"\"\"\n try:\n yield\n except exception_class as err:\n try:\n final_message = cls.reformat_exception(\n message, err, *format_args, **format_kwds\n )\n except Exception as msg_err:\n raise cls(\n \"Failed while formatting message: {}\".format(repr(msg_err))\n )\n\n trace = cls.get_traceback()\n\n if do_except is not None:\n do_except(err, final_message, trace)\n if re_raise:\n raise cls(final_message).with_traceback(trace)\n else:\n if do_else is not None:\n do_else()\n finally:\n if do_finally is not None:\n do_finally()"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef require_condition(cls, expr, message, *format_args, **format_kwds):\n if not expr:\n raise cls(message, *format_args, **format_kwds)", "response": "Raise an exception if expr is False"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef activate_user(self, username, activation_key):\n if SHA1_RE.search(activation_key):\n try:\n user = self.get(activation_key=activation_key)\n except self.model.DoesNotExist:\n return False\n if not user.activation_key_expired():\n user.activation_key = defaults.ACCOUNTS_ACTIVATED\n 
user.is_active = True\n                user.save(using=self._db)\n                # Send the activation_complete signal\n                accounts_signals.activation_complete.send(sender=None, \n                                                          user=user)\n                return user\n        return False", "response": "Activates a user by supplying a valid activation key."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncheck for expired users and delete the User associated with it. Skips the user if is_staff", "response": "def delete_expired_users(self):\n        \"\"\"\n        Checks for expired users and deletes the ``User`` associated with\n        it. Skips if the user ``is_staff``.\n\n        :return: A list containing the deleted users.\n\n        \"\"\"\n        deleted_users = []\n        for user in self.filter(is_staff=False, is_active=False):\n            if user.activation_key_expired():\n                deleted_users.append(user)\n                user.delete()\n        return deleted_users"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nconfirms an email address by checking a confirmation key.", "response": "def confirm_email(self, username, confirmation_key):\n        \"\"\"\n        Confirm an email address by checking a ``confirmation_key``.\n\n        A valid ``confirmation_key`` will set the newly wanted email address\n        as the current email address. 
Returns the user after success or\n ``False`` when the confirmation key is invalid.\n\n :param confirmation_key:\n String containing the secret SHA1 that is used for verification.\n\n :return:\n The verified :class:`User` or ``False`` if not successful.\n\n \"\"\"\n if SHA1_RE.search(confirmation_key):\n try:\n user = self.select_related().get(username=username,\n email_confirmation_key=confirmation_key,\n email_unconfirmed__isnull=False)\n except self.model.DoesNotExist:\n return False\n else:\n user.email = user.email_unconfirmed\n user.email_unconfirmed, user.email_confirmation_key = '',''\n user.save(using=self._db)\n # Send the confirmation_complete signal\n accounts_signals.confirmation_complete.send(sender=None, \n user=user)\n return user\n return False"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_visible_profiles(self, user=None):\n profiles = self.select_related().all()\n\n filter_kwargs = {'is_active': True}\n\n profiles = profiles.filter(**filter_kwargs)\n if user and isinstance(user, AnonymousUser):\n profiles = []\n return profiles", "response": "Returns all the visible profiles for this user."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ngenerating html elements and schematic json elements and schematic json elements", "response": "def visit_html(self, node):\n \"\"\"\n Generate html elements and schematic json\n \"\"\"\n parentClsNode = node.parent.parent\n assert parentClsNode.attributes['objtype'] == 'class'\n assert parentClsNode.attributes['domain'] == 'py'\n sign = node.parent.parent.children[0]\n assert isinstance(sign, desc_signature)\n absolute_name = sign.attributes['ids'][0]\n _construct = node[\"constructor_fn \"] \n serialno = node[\"serialno\"]\n\n try:\n if _construct is None:\n unitCls = generic_import(absolute_name)\n if not issubclass(unitCls, Unit):\n raise AssertionError(\n \"Can not use hwt-schematic sphinx directive and 
create schematic\"\n \" for %s because it is not subclass of %r\" % (absolute_name, Unit))\n u = unitCls()\n else:\n assert len(_construct) > 0 and RE_IS_ID.match(_construct), _construct\n _absolute_name = []\n assert \"..\" not in absolute_name, absolute_name\n for n in absolute_name.split(sep=\".\")[:-1]:\n if n != \"\":\n _absolute_name.append(n)\n _absolute_name.append(_construct)\n \n constructor_fn = generic_import(_absolute_name)\n u = constructor_fn()\n if not isinstance(u, Unit):\n raise AssertionError(\n \"Can not use hwt-schematic sphinx directive and create schematic\"\n \" for %s because function did not returned instance of %r, (%r)\" % (\n _absolute_name, Unit, u))\n\n schem_file = SchematicPaths.get_sch_file_name_absolute(\n self.document, absolute_name, serialno)\n makedirs(path.dirname(schem_file), exist_ok=True)\n\n with open(schem_file, \"w\") as f:\n synthesised(u, DEFAULT_PLATFORM)\n g = UnitToLNode(u, optimizations=DEFAULT_LAYOUT_OPTIMIZATIONS)\n idStore = ElkIdStore()\n data = g.toElkJson(idStore)\n json.dump(data, f)\n\n viewer = SchematicPaths.get_sch_viewer_link(self.document)\n sch_name = SchematicPaths.get_sch_file_name(\n self.document, absolute_name, serialno)\n ref = nodes.reference(text=_(\"schematic\"), # internal=False,\n refuri=\"%s?schematic=%s\" % (\n viewer,\n path.join(SchematicPaths.SCHEMATIC_DIR_PREFIX,\n sch_name)))\n node += ref\n except Exception as e:\n logging.error(e, exc_info=True)\n raise Exception(\n \"Error occured while processing of %s\" % absolute_name)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef run(self):\n has_npm = npm_installation_check()\n if has_npm:\n run_npm_install()\n else:\n print(\"Warning: npm not installed using prebuilded js files!\",\n file=sys.stderr)\n \"\"\"\n Download npm packages required by package.json and extract required\n files from them\n \"\"\"\n for js in JS_FILES:\n downloaded_js_name = os.path.join(TOP_DIR, 
js)\n            installed_js_name = os.path.join(TOP_DIR, \"sphinx_hwt\", \"html\", js)\n            if has_npm:\n                assert os.path.exists(downloaded_js_name), downloaded_js_name\n                os.makedirs(os.path.dirname(installed_js_name), exist_ok=True)\n                copyfile(downloaded_js_name, installed_js_name)\n                print(\"copy generated from NPM packages\", installed_js_name)\n            else:\n                if os.path.exists(installed_js_name):\n                    print(\"using prebuilt\", installed_js_name)\n                else:\n                    raise Exception(\"Cannot find npm,\"\n                                    \" which is required for the installation \"\n                                    \"and this package has no prebuilt js\")", "response": "Run the main script."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning the stellar density in CGS units", "response": "def density(self):\n        \"\"\"Stellar density in CGS units\n        \"\"\"\n        r = self.radius * _Rsun\n        m = self.mass * _Msun\n        return 0.75 * m / (np.pi * r * r * r)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns the approximate duration of the transit :math:`T_\\mathrm{tot}` from Equation (14) in Winn (2010).", "response": "def duration(self):\n        \"\"\"\n        The approximate duration of the transit :math:`T_\\mathrm{tot}` from\n        Equation (14) in Winn (2010).\n\n        \"\"\"\n        self._check_ps()\n\n        rstar = self.system.central.radius\n        k = self.r/rstar\n        dur = self.period / np.pi\n        arg = rstar/self.a * np.sqrt((1+k)**2 - self.b**2)\n        arg /= np.sin(np.radians(self.incl))\n        dur *= np.arcsin(arg)\n        if self.e > 0.0:\n            dur *= np.sqrt(1 - self.e**2) / (1 + self.e * np.sin(self.omega))\n        return dur"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef add_body(self, body):\n        body.system = self\n        self.bodies.append(body)\n        self.unfrozen = np.concatenate((\n            self.unfrozen[:-2], np.zeros(7, dtype=bool), self.unfrozen[-2:]\n        ))", "response": "Adds a body to the system."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the 
documentation\ndef light_curve(self, t, texp=0.0, tol=1e-8, maxdepth=4, use_batman=False):\n        t = np.atleast_1d(t)\n        if len(self.bodies) == 0:\n            return self.central.flux + np.zeros_like(t)\n\n        return CythonSolver().kepler_light_curve(len(self.bodies),\n                                                 self._get_params(),\n                                                 t, texp, tol, maxdepth,\n                                                 use_batman=use_batman)", "response": "Return the light curve evaluated at a list of times using the current object's model."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncomputes the gradient of the light curve at a list of times using the current object's model.", "response": "def light_curve_gradient(self, t, texp=0.0, tol=1e-8, maxdepth=4):\n        \"\"\"\n        Get the light curve evaluated at a list of times using the current\n        model.\n\n        :param t:\n            The times where the light curve should be evaluated (in days).\n\n        :param tol:\n            The stopping criterion for the exposure time integration.\n\n        :param maxdepth:\n            The maximum recursion depth of the exposure time integrator.\n\n        \"\"\"\n        t = np.atleast_1d(t)\n        if len(self.bodies) == 0:\n            grad = np.zeros((len(t), 5), dtype=float)\n            grad[:, 0] = 1.0\n            return self.central.flux + np.zeros_like(t), grad[:, self.unfrozen]\n\n        f, df = CythonSolver().kepler_gradient(len(self.bodies),\n                                               self._get_params(),\n                                               t, texp, tol, maxdepth)\n        return f, df[:, self.unfrozen].T"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef main():\n    # see:\n    # https://packaging.python.org/tutorials/distributing-packages/#uploading-your-project-to-pypi\n    twine_repo = os.getenv('TWINE_REPOSITORY_URL') or os.getenv('TWINE_REPOSITORY')\n    print('tagging and releasing to %s as %s' % (\n        twine_repo,\n        os.getenv('TWINE_USERNAME')\n    ))\n\n    if not twine_repo:\n        raise Exception('cannot release to implicit pypi repository. 
explicitly set the repo/url.')\n\n version = subprocess.check_output(['pipenv', 'run', 'python', 'setup.py', '--version']).decode().strip()\n if 'dev' in version:\n raise Exception('cannot release unversioned project: %s' % version)\n\n print('Preparing environment')\n subprocess.check_call(['git', 'config', '--global', 'user.name', 'monty-bot'])\n subprocess.check_call(['git', 'config', '--global', 'user.email', 'monty-bot@arm.com'])\n url = subprocess.check_output(['git', 'remote', 'get-url', 'origin'])\n branch_name = subprocess.check_output(['git', 'rev-parse', '--abbrev-ref', 'HEAD'])\n new_url = git_url_ssh_to_https(url.decode())\n subprocess.check_call(['git', 'remote', 'set-url', 'origin', new_url])\n branch_spec = 'origin/%s' % branch_name.decode('utf-8').strip()\n subprocess.check_call(['git', 'branch', '--set-upstream-to', branch_spec])\n print('Committing the changelog & version')\n subprocess.check_call(['git', 'add', 'src/auto_version/__version__.py'])\n subprocess.check_call(['git', 'add', 'CHANGELOG.md'])\n subprocess.check_call(['git', 'add', 'docs/news/*'])\n message = ':checkered_flag: :newspaper: Releasing version %s\\n[skip ci]' % version\n subprocess.check_call(['git', 'commit', '-m', message])\n print('Tagging the project')\n subprocess.check_call(['git', 'tag', '-a', version, '-m', 'Release %s' % version])\n print('Pushing changes back to GitHub')\n subprocess.check_call(['git', 'push', '--follow-tags'])\n print('Marking this commit as latest')\n subprocess.check_call(['git', 'tag', '-f', 'latest'])\n subprocess.check_call(['git', 'push', '-f', '--tags'])\n print('Generating a release package')\n subprocess.check_call(\n ['pipenv', 'run', 'python', 'setup.py', 'clean', '--all', 'bdist_wheel', '--dist-dir', 'release-dist'])\n print('Uploading to PyPI')\n subprocess.check_call(['pipenv', 'run', 'python', '-m', 'twine', 'upload', 'release-dist/*'])\n print('Done.')", "response": "Tag the current repository\n and commits changes to news files\n 
"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_context():\n pid = os.getpid()\n if pid not in context:\n context[pid] = zmq.Context()\n logger.debug('renewed context for PID %d', pid)\n return context[pid]", "response": "Provide the context to use."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ndecodes an ISO formatted string to a datetime object.", "response": "def strp_isoformat(strg):\n \"\"\"Decode an ISO formatted string to a datetime object.\n Allow a time-string without microseconds.\n\n We handle input like: 2011-11-14T12:51:25.123456\n \"\"\"\n if isinstance(strg, datetime):\n return strg\n if len(strg) < 19 or len(strg) > 26:\n if len(strg) > 30:\n strg = strg[:30] + '...'\n raise ValueError(\"Invalid ISO formatted time string '%s'\"%strg)\n if strg.find(\".\") == -1:\n strg += '.000000'\n if sys.version[0:3] >= '2.6':\n return datetime.strptime(strg, \"%Y-%m-%dT%H:%M:%S.%f\")\n else:\n dat, mis = strg.split(\".\")\n dat = datetime.strptime(dat, \"%Y-%m-%dT%H:%M:%S\")\n mis = int(float('.' 
+ mis)*1000000)\n return dat.replace(microsecond=mis)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef delete(context, force, yes, analysis_id):\n analysis_obj = context.obj['store'].analysis(analysis_id)\n if analysis_obj is None:\n print(click.style('analysis log not found', fg='red'))\n context.abort()\n\n print(click.style(f\"{analysis_obj.family}: {analysis_obj.status}\"))\n\n if analysis_obj.is_temp:\n if yes or click.confirm(f\"remove analysis log?\"):\n analysis_obj.delete()\n context.obj['store'].commit()\n print(click.style(f\"analysis deleted: {analysis_obj.family}\", fg='blue'))\n else:\n if analysis_obj.is_deleted:\n print(click.style(f\"{analysis_obj.family}: already deleted\", fg='red'))\n context.abort()\n\n if Path(analysis_obj.out_dir).exists():\n root_dir = context.obj['store'].families_dir\n family_dir = analysis_obj.out_dir\n if not force and (len(family_dir) <= len(root_dir) or root_dir not in family_dir):\n print(click.style(f\"unknown analysis output dir: {analysis_obj.out_dir}\", fg='red'))\n print(click.style(\"use '--force' to override\"))\n context.abort()\n\n if yes or click.confirm(f\"remove analysis output: {analysis_obj.out_dir}?\"):\n shutil.rmtree(analysis_obj.out_dir, ignore_errors=True)\n analysis_obj.is_deleted = True\n context.obj['store'].commit()\n print(click.style(f\"analysis deleted: {analysis_obj.family}\", fg='blue'))\n else:\n print(click.style(f\"analysis output doesn't exist: {analysis_obj.out_dir}\", fg='red'))\n context.abort()", "response": "Delete an analysis log from the database."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef add_data_to_database_table(\n self,\n dictList,\n createStatement=False):\n \"\"\"*Import data in the list of dictionaries in the requested database table*\n\n Also adds HTMIDs and updates the sherlock-catalogue database helper table with the 
time-stamp of when the imported catalogue was last updated\n\n **Key Arguments:**\n - ``dictList`` - a list of dictionaries containing all the rows in the catalogue to be imported\n - ``createStatement`` - the table's mysql create statement (used to generate table if it does not yet exist in database). Default *False*\n\n **Usage:**\n\n .. code-block:: python \n\n self.add_data_to_database_table(\n dictList=dictList,\n createStatement=createStatement\n )\n\n .. todo ::\n\n - Write a checklist for creating a new sherlock database importer\n \"\"\"\n self.log.debug('starting the ``add_data_to_database_table`` method')\n\n if len(dictList) == 0:\n return\n\n myPid = self.myPid\n dbTableName = self.dbTableName\n\n if createStatement:\n writequery(\n log=self.log,\n sqlQuery=createStatement,\n dbConn=self.cataloguesDbConn,\n )\n\n insert_list_of_dictionaries_into_database_tables(\n dbConn=self.cataloguesDbConn,\n log=self.log,\n dictList=dictList,\n dbTableName=dbTableName,\n uniqueKeyList=[],\n dateModified=True,\n dateCreated=True,\n batchSize=10000,\n replace=True,\n dbSettings=self.settings[\"database settings\"][\n \"static catalogues\"]\n )\n\n self._add_htmids_to_database_table()\n\n cleaner = database_cleaner(\n log=self.log,\n settings=self.settings\n )\n cleaner._update_tcs_helper_catalogue_tables_info_with_new_tables()\n\n self._update_database_helper_table()\n\n print \"\"\"Now:\n\n- [ ] edit the `%(dbTableName)s` row in the sherlock catalogues database adding relevant column mappings, catalogue version number etc\n- [ ] retire any previous version of this catalogue in the database. 
Renaming the catalogue-table by appending `legacy_` and also change the name in the `tcs_helper_catalogue_tables_info` table\n- [ ] duplicate views from the previous catalogue version to point towards the new version and then delete the old views\n- [ ] run the command `sherlock clean [-s ]` to clean up helper tables\n- [ ] switch out the old catalogue table/views in your sherlock search algorithms in the yaml settings files\n- [ ] run a test batch of transients to make sure catalogue is installed as expected\n\n\"\"\" % locals()\n\n self.log.debug('completed the ``add_data_to_database_table`` method')\n return None", "response": "Add the data in the list of dictionaries to the database table."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nadd HTMIDs to the database table once all the data has been imported.", "response": "def _add_htmids_to_database_table(\n self):\n \"\"\"*Add HTMIDs to database table once all the data has been imported (HTM Levels 10,13,16)*\n\n **Usage:**\n\n .. code-block:: python \n\n self._add_htmids_to_database_table()\n \"\"\"\n self.log.debug('starting the ``add_htmids_to_database_table`` method')\n\n tableName = self.dbTableName\n\n self.log.info(\"Adding HTMIds to %(tableName)s\" % locals())\n\n add_htm_ids_to_mysql_database_table(\n raColName=self.raColName,\n declColName=self.declColName,\n tableName=self.dbTableName,\n dbConn=self.cataloguesDbConn,\n log=self.log,\n primaryIdColumnName=self.primaryIdColumnName\n )\n\n self.log.debug('completed the ``add_htmids_to_database_table`` method')\n return None"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _update_database_helper_table(\n self):\n \"\"\"*Update the sherlock catalogues database helper table with the time-stamp of when this catalogue was last updated*\n\n **Usage:**\n\n .. 
code-block:: python \n\n self._update_database_helper_table()\n \"\"\"\n self.log.debug('starting the ``_update_database_helper_table`` method')\n\n tableName = self.dbTableName\n\n sqlQuery = u\"\"\"\n update tcs_helper_catalogue_tables_info set last_updated = now() where table_name = \"%(tableName)s\";\n \"\"\" % locals()\n\n writequery(\n log=self.log,\n sqlQuery=sqlQuery,\n dbConn=self.cataloguesDbConn,\n )\n\n self.log.debug(\n 'completed the ``_update_database_helper_table`` method')\n return None", "response": "Update the sherlock catalogues database helper table with the time-stamp of when this catalogue was last updated."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nattempting to resolve all internal links in the resources.", "response": "def resolve_links(self):\n \"\"\"Attempt to resolve all internal links (locally).\n\n In case the linked resources are found either as members of the array or within\n the `includes` element, those will be replaced and reference the actual resources.\n No network calls will be performed.\n \"\"\"\n for resource in self.items_mapped['Entry'].values():\n for dct in [getattr(resource, '_cf_cda', {}), resource.fields]:\n for k, v in dct.items():\n if isinstance(v, ResourceLink):\n resolved = self._resolve_resource_link(v)\n if resolved is not None:\n dct[k] = resolved\n elif isinstance(v, (MultipleAssets, MultipleEntries, list)):\n for idx, ele in enumerate(v):\n if not isinstance(ele, ResourceLink):\n break\n\n resolved = self._resolve_resource_link(ele)\n if resolved is not None:\n v[idx] = resolved"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nauthenticate a user through the combination email/username with the given password.", "response": "def authenticate(self, request, identification, password=None, \n check_password=True):\n \"\"\"\n Authenticates a user through the combination email/username with\n password.\n\n :param identification:\n A string 
containing the username or email of the user that is\n trying to authenticate.\n\n :param password:\n Optional string containing the password for the user.\n\n :param check_password:\n Boolean that defines if the password should be checked for this\n user. Always keep this ``True``. This is only used by accounts at\n activation when a user opens a page with a secret hash.\n\n :return: The logged in :class:`User`.\n\n \"\"\"\n UserModel = get_user_model()\n try:\n validators.validate_email(identification)\n try: \n user = UserModel.objects.get(email__iexact=identification)\n except UserModel.DoesNotExist: \n return None\n except validators.ValidationError:\n try: \n user = UserModel.objects.get(username__iexact=identification)\n except UserModel.DoesNotExist: \n return None\n if check_password:\n if user.check_password(password):\n return user\n return None\n else: \n return user"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nfinding a single analysis.", "response": "def find_analysis(self, family, started_at, status):\n \"\"\"Find a single analysis.\"\"\"\n query = self.Analysis.query.filter_by(\n family=family,\n started_at=started_at,\n status=status,\n )\n return query.first()"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef analyses(self, *, family: str=None, query: str=None, status: str=None, deleted: bool=None,\n temp: bool=False, before: dt.datetime=None, is_visible: bool=None):\n \"\"\"Fetch analyses from the database.\"\"\"\n analysis_query = self.Analysis.query\n if family:\n analysis_query = analysis_query.filter_by(family=family)\n elif query:\n analysis_query = analysis_query.filter(sqa.or_(\n self.Analysis.family.like(f\"%{query}%\"),\n self.Analysis.status.like(f\"%{query}%\"),\n ))\n if status:\n analysis_query = analysis_query.filter_by(status=status)\n if isinstance(deleted, bool):\n analysis_query = analysis_query.filter_by(is_deleted=deleted)\n if temp:\n 
analysis_query = analysis_query.filter(self.Analysis.status.in_(TEMP_STATUSES))\n if before:\n analysis_query = analysis_query.filter(self.Analysis.started_at < before)\n if is_visible is not None:\n analysis_query = analysis_query.filter_by(is_visible=is_visible)\n return analysis_query.order_by(self.Analysis.started_at.desc())", "response": "Fetch analyses from the database."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef analysis(self, analysis_id: int) -> models.Analysis:\n return self.Analysis.query.get(analysis_id)", "response": "Get a single analysis."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nupdate the last updated date in the database.", "response": "def track_update(self):\n \"\"\"Update the latest updated date in the database.\"\"\"\n metadata = self.info()\n metadata.updated_at = dt.datetime.now()\n self.commit()"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef is_running(self, family: str) -> bool:\n latest_analysis = self.analyses(family=family).first()\n return latest_analysis and latest_analysis.status in TEMP_STATUSES", "response": "Check if an analysis is currently running for a family."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef add_pending(self, family: str, email: str=None) -> models.Analysis:\n started_at = dt.datetime.now()\n new_log = self.Analysis(family=family, status='pending', started_at=started_at)\n new_log.user = self.user(email) if email else None\n self.add_commit(new_log)\n return new_log", "response": "Add a pending entry for an analysis."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nadding a new user to the database.", "response": "def add_user(self, name: str, email: str) -> models.User:\n \"\"\"Add a new user to the database.\"\"\"\n new_user = self.User(name=name, email=email)\n 
self.add_commit(new_user)\n return new_user"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nfetch a user from the database.", "response": "def user(self, email: str) -> models.User:\n \"\"\"Fetch a user from the database.\"\"\"\n return self.User.query.filter_by(email=email).first()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef aggregate_failed(self) -> List:\n categories = self.session.query(\n self.Job.name.label('name'),\n sqa.func.count(self.Job.id).label('count')\n ).filter(self.Job.status != 'cancelled').group_by(self.Job.name).all()\n data = [{'name': category.name, 'count': category.count} for category in categories]\n return data", "response": "Aggregate the number of failed jobs per category."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ningest NED data from the online NED database and store the results in the sherlock-database table.", "response": "def ingest(self):\n \"\"\"*Perform conesearches of the online NED database and import the results into the sherlock-database*\n\n The code:\n\n 1. uses the list of transient coordinates and queries NED for the results within the given search radius\n 2. Creates the `tcs_cat_ned_stream` table if it doesn't exist\n 3. Adds the resulting NED IDs/Names to the `tcs_cat_ned_stream` table\n 4. Updates the NED query history table\n 5. Queries NED via NED IDs for the remaining source metadata to be added to the `tcs_cat_ned_stream` table\n\n **Usage:**\n\n Having setup the NED object with a coordinate list and cone-search radius, run the `ingest()` method\n\n .. code-block:: python\n\n stream.ingest()\n\n .. 
todo ::\n\n - check sublime snippet exists\n - clip any useful text to docs mindmap\n - regenerate the docs and check redendering of this docstring\n \"\"\"\n self.log.debug('starting the ``ingest`` method')\n\n if not self.radiusArcsec:\n self.log.error(\n 'please give a radius in arcsec with which to preform the initial NED conesearch' % locals())\n sys.exit(0)\n\n # VARIABLES\n # SIZE OF NUMBER OF ROWS TO INSERT INTO DATABASE TABLE AT ANY ONE GO\n self.databaseInsertbatchSize = 10000\n\n # THE DATABASE TABLE TO STREAM THE NED DATA INTO\n self.dbTableName = \"tcs_cat_ned_stream\"\n\n dictList = self._create_dictionary_of_ned()\n\n tableName = self.dbTableName\n\n createStatement = \"\"\"CREATE TABLE IF NOT EXISTS `%(tableName)s` (\n `primaryId` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'An internal counter',\n `ned_name` varchar(150) NOT NULL,\n `redshift` double DEFAULT NULL,\n `dateCreated` datetime DEFAULT CURRENT_TIMESTAMP,\n `dateLastModified` datetime DEFAULT CURRENT_TIMESTAMP,\n `updated` varchar(45) DEFAULT '0',\n `major_diameter_arcmin` double DEFAULT NULL,\n `ned_notes` varchar(700) DEFAULT NULL,\n `object_type` varchar(100) DEFAULT NULL,\n `redshift_err` double DEFAULT NULL,\n `redshift_quality` varchar(100) DEFAULT NULL,\n `magnitude_filter` varchar(10) DEFAULT NULL,\n `minor_diameter_arcmin` double DEFAULT NULL,\n `morphology` varchar(50) DEFAULT NULL,\n `hierarchy` varchar(50) DEFAULT NULL,\n `galaxy_morphology` varchar(50) DEFAULT NULL,\n `radio_morphology` varchar(50) DEFAULT NULL,\n `activity_type` varchar(50) DEFAULT NULL,\n `raDeg` double DEFAULT NULL,\n `decDeg` double DEFAULT NULL,\n `eb_v` double DEFAULT NULL,\n `htm16ID` bigint(20) DEFAULT NULL,\n `download_error` tinyint(1) DEFAULT '0',\n `htm10ID` bigint(20) DEFAULT NULL,\n `htm13ID` bigint(20) DEFAULT NULL,\n PRIMARY KEY (`primaryId`),\n UNIQUE KEY `ned_name` (`ned_name`),\n KEY `idx_htm16ID` (`htm16ID`),\n KEY `raDeg` (`raDeg`),\n KEY `downloadError` (`download_error`),\n KEY 
`idx_htm10ID` (`htm10ID`),\n KEY `idx_htm13ID` (`htm13ID`)\n) ENGINE=MyISAM AUTO_INCREMENT=0 DEFAULT CHARSET=latin1;\n\"\"\" % locals()\n\n self.add_data_to_database_table(\n dictList=dictList,\n createStatement=createStatement\n )\n\n self._update_ned_query_history()\n self._download_ned_source_metadata()\n\n self.log.debug('completed the ``ingest`` method')\n return None"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _create_dictionary_of_ned(\n self):\n \"\"\"*Create a list of dictionaries containing all the object ids (NED names) in the ned stream*\n\n **Return:**\n - ``dictList`` - a list of dictionaries containing all the object ids (NED names) in the ned stream\n\n **Usage:**\n\n .. code-block:: python\n\n dictList = stream._create_dictionary_of_ned()\n \"\"\"\n self.log.debug(\n 'starting the ``_create_dictionary_of_ned`` method')\n\n # GET THE NAMES (UNIQUE IDS) OF THE SOURCES WITHIN THE CONESEARCH FROM\n # NED\n names, searchParams = conesearch(\n log=self.log,\n radiusArcsec=self.radiusArcsec,\n nearestOnly=False,\n unclassified=True,\n quiet=False,\n listOfCoordinates=self.coordinateList,\n outputFilePath=False,\n verbose=False\n ).get_crossmatch_names()\n\n dictList = []\n dictList[:] = [{\"ned_name\": n} for n in names]\n\n self.log.debug(\n 'completed the ``_create_dictionary_of_ned`` method')\n return dictList", "response": "Create a list of dictionaries containing all the object ids in the ned stream"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nupdates the database helper table to give details of the ned cone searches performed", "response": "def _update_ned_query_history(\n self):\n \"\"\"*Update the database helper table to give details of the ned cone searches performed*\n\n *Usage:*\n\n .. 
code-block:: python\n\n stream._update_ned_query_history()\n \"\"\"\n self.log.debug('starting the ``_update_ned_query_history`` method')\n\n myPid = self.myPid\n\n # ASTROCALC UNIT CONVERTER OBJECT\n converter = unit_conversion(\n log=self.log\n )\n\n # UPDATE THE DATABASE HELPER TABLE TO GIVE DETAILS OF THE NED CONE\n # SEARCHES PERFORMED\n dataList = []\n for i, coord in enumerate(self.coordinateList):\n if isinstance(coord, str):\n ra = coord.split(\" \")[0]\n dec = coord.split(\" \")[1]\n elif isinstance(coord, tuple) or isinstance(coord, list):\n ra = coord[0]\n dec = coord[1]\n\n dataList.append(\n {\"raDeg\": ra,\n \"decDeg\": dec,\n \"arcsecRadius\": self.radiusArcsec}\n )\n\n if len(dataList) == 0:\n return None\n\n # CREATE TABLE IF NOT EXIST\n createStatement = \"\"\"CREATE TABLE IF NOT EXISTS `tcs_helper_ned_query_history` (\n `primaryId` bigint(20) NOT NULL AUTO_INCREMENT,\n `raDeg` double DEFAULT NULL,\n `decDeg` double DEFAULT NULL,\n `dateCreated` datetime DEFAULT CURRENT_TIMESTAMP,\n `dateLastModified` datetime DEFAULT CURRENT_TIMESTAMP,\n `updated` varchar(45) DEFAULT '0',\n `arcsecRadius` int(11) DEFAULT NULL,\n `dateQueried` datetime DEFAULT CURRENT_TIMESTAMP,\n `htm16ID` bigint(20) DEFAULT NULL,\n `htm13ID` int(11) DEFAULT NULL,\n `htm10ID` int(11) DEFAULT NULL,\n PRIMARY KEY (`primaryId`),\n KEY `idx_htm16ID` (`htm16ID`),\n KEY `dateQueried` (`dateQueried`),\n KEY `dateHtm16` (`dateQueried`,`htm16ID`),\n KEY `idx_htm10ID` (`htm10ID`),\n KEY `idx_htm13ID` (`htm13ID`)\n) ENGINE=MyISAM AUTO_INCREMENT=0 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;\n \"\"\"\n writequery(\n log=self.log,\n sqlQuery=createStatement,\n dbConn=self.cataloguesDbConn\n )\n\n # USE dbSettings TO ACTIVATE MULTIPROCESSING\n insert_list_of_dictionaries_into_database_tables(\n dbConn=self.cataloguesDbConn,\n log=self.log,\n dictList=dataList,\n dbTableName=\"tcs_helper_ned_query_history\",\n uniqueKeyList=[],\n dateModified=True,\n batchSize=10000,\n replace=True,\n 
dbSettings=self.settings[\"database settings\"][\n \"static catalogues\"]\n )\n\n # INDEX THE TABLE FOR LATER SEARCHES\n add_htm_ids_to_mysql_database_table(\n raColName=\"raDeg\",\n declColName=\"decDeg\",\n tableName=\"tcs_helper_ned_query_history\",\n dbConn=self.cataloguesDbConn,\n log=self.log,\n primaryIdColumnName=\"primaryId\"\n )\n\n self.log.debug('completed the ``_update_ned_query_history`` method')\n return None"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ndownloading NED metadata for the specified source.", "response": "def _download_ned_source_metadata(\n self):\n \"\"\"*Query NED using the names of the NED sources in our local database to retrieve extra metadata*\n\n *Usage:*\n\n .. code-block:: python\n\n stream._download_ned_source_metadata()\n \"\"\"\n self.log.debug('starting the ``_download_ned_source_metadata`` method')\n\n self.dbTableName = \"tcs_cat_ned_stream\"\n\n total, batches = self._count_ned_sources_in_database_requiring_metadata()\n\n self.log.info(\n \"%(total)s galaxies require metadata. 
Need to send %(batches)s batch requests to NED.\" % locals())\n\n totalBatches = self.batches\n thisCount = 0\n\n # FOR EACH BATCH, GET THE GALAXY IDs, QUERY NED AND UPDATE THE DATABASE\n # THEN RECOUNT TO DETERMINE IF THERE ARE REMAINING SOURCES TO GRAB\n # METADATA FOR\n while self.total:\n thisCount += 1\n self._get_ned_sources_needing_metadata()\n self._do_ned_namesearch_queries_and_add_resulting_metadata_to_database(\n thisCount)\n self._count_ned_sources_in_database_requiring_metadata()\n\n self.log.debug(\n 'completed the ``_download_ned_source_metadata`` method')\n return None"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _get_ned_sources_needing_metadata(\n self):\n \"\"\"*Get the names of 50000 or fewer NED sources that still require metadata in the database*\n\n **Return:**\n - ``len(self.theseIds)`` -- the number of NED IDs returned\n\n *Usage:*\n\n .. code-block:: python\n\n numberSources = stream._get_ned_sources_needing_metadata()\n \"\"\"\n self.log.debug(\n 'starting the ``_get_ned_sources_needing_metadata`` method')\n\n tableName = self.dbTableName\n\n # SELECT THE DATA FROM NED TABLE\n sqlQuery = u\"\"\"\n select ned_name from %(tableName)s where raDeg is null and (download_error != 1 or download_error is null) limit 50000;\n \"\"\" % locals()\n rows = readquery(\n log=self.log,\n sqlQuery=sqlQuery,\n dbConn=self.cataloguesDbConn,\n quiet=False\n )\n\n self.theseIds = []\n self.theseIds[:] = [r[\"ned_name\"] for r in rows]\n\n self.log.debug(\n 'completed the ``_get_ned_sources_needing_metadata`` method')\n\n return len(self.theseIds)", "response": "Get the names of NED sources that still require metadata in the database"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _do_ned_namesearch_queries_and_add_resulting_metadata_to_database(\n self,\n batchCount):\n \"\"\"*Query NED via name search and add result 
metadata to database*\n\n **Key Arguments:**\n - ``batchCount`` - the index number of the batch sent to NED (only needed for printing to STDOUT to give user idea of progress)\n\n *Usage:*\n\n .. code-block:: python\n\n numberSources = stream._do_ned_namesearch_queries_and_add_resulting_metadata_to_database(batchCount=10)\n \"\"\"\n self.log.debug(\n 'starting the ``_do_ned_namesearch_queries_and_add_resulting_metadata_to_database`` method')\n\n # ASTROCALC UNIT CONVERTER OBJECT\n converter = unit_conversion(\n log=self.log\n )\n tableName = self.dbTableName\n\n # QUERY NED WITH BATCH\n totalCount = len(self.theseIds)\n print \"requesting metadata from NED for %(totalCount)s galaxies (batch %(batchCount)s)\" % locals()\n # QUERY THE ONLINE NED DATABASE USING NEDDY'S NAMESEARCH METHOD\n search = namesearch(\n log=self.log,\n names=self.theseIds,\n quiet=True\n )\n results = search.get()\n print \"results returned from ned -- starting to add to database\" % locals()\n\n # CLEAN THE RETURNED DATA AND UPDATE DATABASE\n totalCount = len(results)\n count = 0\n sqlQuery = \"\"\n dictList = []\n for thisDict in results:\n thisDict[\"tableName\"] = tableName\n count += 1\n for k, v in thisDict.iteritems():\n if not v or len(v) == 0:\n thisDict[k] = \"null\"\n if k in [\"major_diameter_arcmin\", \"minor_diameter_arcmin\"] and (\":\" in v or \"?\" in v or \"<\" in v):\n thisDict[k] = v.replace(\":\", \"\").replace(\n \"?\", \"\").replace(\"<\", \"\")\n if isinstance(v, str) and '\"' in v:\n thisDict[k] = v.replace('\"', '\\\\\"')\n if \"Input name not\" not in thisDict[\"input_note\"] and \"Same object as\" not in thisDict[\"input_note\"]:\n try:\n thisDict[\"raDeg\"] = converter.ra_sexegesimal_to_decimal(\n ra=thisDict[\"ra\"]\n )\n thisDict[\"decDeg\"] = converter.dec_sexegesimal_to_decimal(\n dec=thisDict[\"dec\"]\n )\n except:\n name = thisDict[\"input_name\"]\n self.log.warning(\n \"Could not convert the RA & DEC for the %(name)s NED source\" % locals())\n continue\n 
thisDict[\"eb_v\"] = thisDict[\"eb-v\"]\n thisDict[\"ned_name\"] = thisDict[\"input_name\"]\n row = {}\n for k in [\"redshift_quality\", \"redshift\", \"hierarchy\", \"object_type\", \"major_diameter_arcmin\", \"morphology\", \"magnitude_filter\", \"ned_notes\", \"eb_v\", \"raDeg\", \"radio_morphology\", \"activity_type\", \"minor_diameter_arcmin\", \"decDeg\", \"redshift_err\", \"ned_name\"]:\n if thisDict[k] == \"null\":\n row[k] = None\n else:\n row[k] = thisDict[k]\n\n dictList.append(row)\n\n self.add_data_to_database_table(\n dictList=dictList,\n createStatement=\"\"\"SET SESSION sql_mode=\"\";\"\"\"\n )\n\n theseIds = (\"\\\", \\\"\").join(self.theseIds)\n\n sqlQuery = u\"\"\"\n update %(tableName)s set download_error = 1 where ned_name in (\"%(theseIds)s\");\n \"\"\" % locals()\n writequery(\n log=self.log,\n sqlQuery=sqlQuery,\n dbConn=self.cataloguesDbConn,\n )\n\n print \"%(count)s/%(totalCount)s galaxy metadata batch entries added to database\" % locals()\n if count < totalCount:\n # Cursor up one line and clear line\n sys.stdout.write(\"\\x1b[1A\\x1b[2K\")\n\n sqlQuery = u\"\"\"\n update tcs_helper_catalogue_tables_info set last_updated = now() where table_name = \"%(tableName)s\"\n \"\"\" % locals()\n writequery(\n log=self.log,\n sqlQuery=sqlQuery,\n dbConn=self.cataloguesDbConn,\n )\n\n self.log.debug(\n 'completed the ``_do_ned_namesearch_queries_and_add_resulting_metadata_to_database`` method')\n return None", "response": "Query NED via name searcha and add result metadata to database"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _count_ned_sources_in_database_requiring_metadata(\n self):\n \"\"\"*Count the sources in the NED table requiring metadata*\n\n **Return:**\n - ``self.total``, ``self.batches`` -- total number of galaxies needing metadata & the number of batches required to be sent to NED\n\n *Usage:*\n\n .. 
code-block:: python\n\n totalRemaining, numberOfBatches = stream._count_ned_sources_in_database_requiring_metadata()\n \"\"\"\n self.log.debug(\n 'starting the ``_count_ned_sources_in_database_requiring_metadata`` method')\n\n tableName = self.dbTableName\n\n sqlQuery = u\"\"\"\n select count(*) as count from %(tableName)s where raDeg is null and (download_error != 1 or download_error is null)\n \"\"\" % locals()\n rows = readquery(\n log=self.log,\n sqlQuery=sqlQuery,\n dbConn=self.cataloguesDbConn,\n quiet=False\n )\n self.total = rows[0][\"count\"]\n self.batches = int(self.total / 50000.) + 1\n\n if self.total == 0:\n self.batches = 0\n\n self.log.debug(\n 'completed the ``_count_ned_sources_in_database_requiring_metadata`` method')\n return self.total, self.batches", "response": "Counts the number of sources in the NED table requiring metadata"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nget the address list.", "response": "def get(self, name=\"\"):\n \"\"\"Get the address(es).\n \"\"\"\n addrs = []\n\n with self._address_lock:\n for metadata in self._addresses.values():\n if (name == \"\" or\n (name and name in metadata[\"service\"])):\n mda = copy.copy(metadata)\n mda[\"receive_time\"] = mda[\"receive_time\"].isoformat()\n addrs.append(mda)\n LOGGER.debug('return address %s', str(addrs))\n return addrs"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nchecks the age of the receiver.", "response": "def _check_age(self, pub, min_interval=timedelta(seconds=0)):\n \"\"\"Check the age of the receiver.\n \"\"\"\n now = datetime.utcnow()\n if (now - self._last_age_check) <= min_interval:\n return\n\n LOGGER.debug(\"%s - checking addresses\", str(datetime.utcnow()))\n self._last_age_check = now\n to_del = []\n with self._address_lock:\n for addr, metadata in self._addresses.items():\n atime = metadata[\"receive_time\"]\n if now - atime > self._max_age:\n mda = {'status': False,\n 
'URI': addr,\n 'service': metadata['service']}\n msg = Message('/address/' + metadata['name'], 'info', mda)\n to_del.append(addr)\n LOGGER.info(\"publish remove '%s'\", str(msg))\n pub.send(msg.encode())\n for addr in to_del:\n del self._addresses[addr]"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef approximate_size(size, a_kilobyte_is_1024_bytes=True):\n '''\n Humansize.py from Dive into Python3\n Mark Pilgrim - http://www.diveintopython3.net/\n Copyright (c) 2009, Mark Pilgrim, All rights reserved.\n\n Convert a file size to human-readable form.\n Keyword arguments:\n size -- file size in bytes\n a_kilobyte_is_1024_bytes -- if True (default), use multiples of 1024\n if False, use multiples of 1000\n Returns: string\n '''\n\n size = float(size)\n\n if size < 0:\n raise ValueError('number must be non-negative')\n\n multiple = 1024 if a_kilobyte_is_1024_bytes else 1000\n for suffix in SUFFIXES[multiple]:\n size /= multiple\n if size < multiple:\n return '{0:.1f}{1}'.format(size, suffix)\n\n raise ValueError('number too large')", "response": "A function to convert a file size in bytes into a human - readable form."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_console_width():\n\n if os.name == 'nt':\n STD_INPUT_HANDLE = -10\n STD_OUTPUT_HANDLE = -11\n STD_ERROR_HANDLE = -12\n\n # get console handle\n from ctypes import windll, Structure, byref\n try:\n from ctypes.wintypes import SHORT, WORD, DWORD\n except ImportError:\n # workaround for missing types in Python 2.5\n from ctypes import (\n c_short as SHORT, c_ushort as WORD, c_ulong as DWORD)\n console_handle = windll.kernel32.GetStdHandle(STD_OUTPUT_HANDLE)\n\n # CONSOLE_SCREEN_BUFFER_INFO Structure\n class COORD(Structure):\n _fields_ = [(\"X\", SHORT), (\"Y\", SHORT)]\n\n class SMALL_RECT(Structure):\n _fields_ = [(\"Left\", SHORT), (\"Top\", SHORT),\n (\"Right\", SHORT), 
(\"Bottom\", SHORT)]\n\n class CONSOLE_SCREEN_BUFFER_INFO(Structure):\n _fields_ = [(\"dwSize\", COORD),\n (\"dwCursorPosition\", COORD),\n (\"wAttributes\", WORD),\n (\"srWindow\", SMALL_RECT),\n (\"dwMaximumWindowSize\", DWORD)]\n\n sbi = CONSOLE_SCREEN_BUFFER_INFO()\n ret = windll.kernel32.GetConsoleScreenBufferInfo(\n console_handle, byref(sbi))\n if ret == 0:\n return 0\n return sbi.srWindow.Right + 1\n\n elif os.name == 'posix':\n from fcntl import ioctl\n from termios import TIOCGWINSZ\n from array import array\n\n winsize = array(\"H\", [0] * 4)\n try:\n ioctl(sys.stdout.fileno(), TIOCGWINSZ, winsize)\n except IOError:\n pass\n return (winsize[1], winsize[0])[0]\n\n return 80", "response": "Return the width of the console area. Autodetection works for Windows and POSIX platforms. Returns 80 for others."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning the filename from the url", "response": "def filename_from_url(url):\n \"\"\":return: detected filename or None\"\"\"\n fname = os.path.basename(urlparse.urlparse(url).path)\n if len(fname.strip(\" \\n\\t.\")) == 0:\n return None\n return fname"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ndetect filename from Content - Disposition header if present.", "response": "def filename_from_headers(headers):\n \"\"\"Detect filename from Content-Disposition headers if present.\n http://greenbytes.de/tech/tc2231/\n\n :param: headers as dict, list or string\n :return: filename from content-disposition header or None\n \"\"\"\n if type(headers) == str:\n headers = headers.splitlines()\n if type(headers) == list:\n headers = dict([x.split(':', 1) for x in headers])\n cdisp = headers.get(\"Content-Disposition\")\n if not cdisp:\n return None\n cdtype = cdisp.split(';')\n if len(cdtype) == 1:\n return None\n if cdtype[0].strip().lower() not in ('inline', 'attachment'):\n return None\n # several filename params is illegal, but just in case\n fnames = 
[x for x in cdtype[1:] if x.strip().startswith('filename=')]\n if len(fnames) > 1:\n return None\n name = fnames[0].split('=')[1].strip(' \\t\"')\n name = os.path.basename(name)\n if not name:\n return None\n return name"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nexpands name portion of filename with numeric x suffix to return filename that doesn t exist already.", "response": "def filename_fix_existing(filename, dirname):\n \"\"\"Expands name portion of filename with numeric ' (x)' suffix to\n return filename that doesn't exist already.\n \"\"\"\n name, ext = filename.rsplit('.', 1)\n names = [x for x in os.listdir(dirname) if x.startswith(name)]\n names = [x.rsplit('.', 1)[0] for x in names]\n suffixes = [x.replace(name, '') for x in names]\n # filter suffixes that match ' (x)' pattern\n suffixes = [x[2:-1] for x in suffixes\n if x.startswith(' (') and x.endswith(')')]\n indexes = [int(x) for x in suffixes\n if set(x) <= set('0123456789')]\n idx = 1\n if indexes:\n idx += sorted(indexes)[-1]\n return '{0}({1}).{2}'.format(name, idx, ext)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef report_onlysize(bytes_so_far, total_size, speed, eta):\n '''\n This callback for the download function is used when console width\n is not enough to print the bar.\n It prints only the sizes\n '''\n percent = int(bytes_so_far * 100 / total_size)\n current = approximate_size(bytes_so_far).center(10)\n total = approximate_size(total_size).center(10)\n sys.stdout.write('D: {0}% -{1}/{2}'.format(percent, current, total) + \"eta {0}\".format(eta))\n sys.stdout.write(\"\\r\")\n sys.stdout.flush()", "response": "This callback is used to print only the sizes of the available bytes so far."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef download(link, outdir='.', chunk_size=4096):\n '''\n This is the Main function, 
which downloads a given link\n and saves on outdir (default = current directory)\n '''\n url = None\n fh = None\n eta = 'unknown '\n bytes_so_far = 0\n filename = filename_from_url(link) or \".\"\n cj = cjar.CookieJar()\n\n # get filename for temp file in current directory\n (fd_tmp, tmpfile) = tempfile.mkstemp(\n \".tmp\", prefix=filename + \".\", dir=outdir)\n os.close(fd_tmp)\n os.unlink(tmpfile)\n\n try:\n opener = ulib.build_opener(ulib.HTTPCookieProcessor(cj))\n url = opener.open(link)\n fh = open(tmpfile, mode='wb')\n\n headers = url.info()\n try:\n total_size = int(headers['Content-Length'])\n except (ValueError, KeyError, TypeError):\n total_size = 'unknown'\n\n try:\n md5_header = headers['Content-MD5']\n except (ValueError, KeyError, TypeError):\n md5_header = None\n\n # Define which callback we're gonna use\n if total_size != 'unknown':\n if CONSOLE_WIDTH > 57:\n reporthook = report_bar\n else:\n reporthook = report_onlysize\n else:\n reporthook = report_unknown\n\n # Below are the registers to calculate network transfer rate\n time_register = time()\n speed = 0.0\n speed_list = []\n bytes_register = 0.0\n eta = 'unknown '\n\n # Loop that reads in chunks, calculates speed and does the callback to\n # print the progress\n while True:\n chunk = url.read(chunk_size)\n # Update Download Speed every 1 second\n if time() - time_register > 0.5:\n speed = (bytes_so_far - bytes_register) / \\\n (time() - time_register)\n speed_list.append(speed)\n\n # Set register properly for future use\n time_register = time()\n bytes_register = bytes_so_far\n\n # Estimative of remaining download time\n if total_size != 'unknown' and len(speed_list) == 3:\n speed_mean = sum(speed_list) / 3\n eta_sec = int((total_size - bytes_so_far) / speed_mean)\n eta = str(datetime.timedelta(seconds=eta_sec))\n speed_list = []\n\n bytes_so_far += len(chunk)\n\n if not chunk:\n sys.stdout.write('\\n')\n break\n\n fh.write(chunk)\n reporthook(bytes_so_far, total_size, speed, eta)\n except 
KeyboardInterrupt:\n print('\\n\\nCtrl + C: Download aborted by user')\n print('Partial downloaded file:\\n{0}'.format(os.path.abspath(tmpfile)))\n sys.exit(1)\n finally:\n if url:\n url.close()\n if fh:\n fh.close()\n\n filenamealt = filename_from_headers(headers)\n if filenamealt:\n filename = filenamealt\n\n # add numeric '(x)' suffix if filename already exists\n if os.path.exists(os.path.join(outdir, filename)):\n filename = filename_fix_existing(filename, outdir)\n filename = os.path.join(outdir, filename)\n\n shutil.move(tmpfile, filename)\n\n # Check if sizes matches\n if total_size != 'unknown' and total_size != bytes_so_far:\n print(\n '\\n\\nWARNING!! Downloaded file size mismatches... Probably corrupted...')\n\n # Check md5 if it was in html header\n if md5_header:\n print('\\nValidating MD5 checksum...')\n if md5_header == md5sum(filename):\n print('MD5 checksum passed!')\n else:\n print('MD5 checksum do NOT passed!!!')\n\n return filename", "response": "This function downloads a given link and saves it on outdir."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ndisplays recent logs for analyses.", "response": "def ls_cmd(context, before, status):\n \"\"\"Display recent logs for analyses.\"\"\"\n runs = context.obj['store'].analyses(\n status=status,\n deleted=False,\n before=parse_date(before) if before else None,\n ).limit(30)\n for run_obj in runs:\n if run_obj.status == 'pending':\n message = f\"{run_obj.id} | {run_obj.family} [{run_obj.status.upper()}]\"\n else:\n message = (f\"{run_obj.id} | {run_obj.family} {run_obj.started_at.date()} \"\n f\"[{run_obj.type.upper()}/{run_obj.status.upper()}]\")\n if run_obj.status == 'running':\n message = click.style(f\"{message} - {run_obj.progress * 100}/100\", fg='blue')\n elif run_obj.status == 'completed':\n message = click.style(f\"{message} - {run_obj.completed_at}\", fg='green')\n elif run_obj.status == 'failed':\n message = click.style(message, fg='red')\n 
print(message)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef analytics(account=None, *args, **kwargs):\n if not account:\n try:\n account = settings.GOOGLE_ANALYTICS_ACCOUNT\n except:\n raise template.TemplateSyntaxError( \n \"Analytics account could not found either \"\n \"in tag parameters or settings\")\n return {'account': account, 'params':kwargs }", "response": "Simple Google Analytics integration."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ndisplaying a single analysis.", "response": "def analysis(analysis_id):\n \"\"\"Display a single analysis.\"\"\"\n analysis_obj = store.analysis(analysis_id)\n if analysis_obj is None:\n return abort(404)\n\n if request.method == 'PUT':\n analysis_obj.update(request.json)\n store.commit()\n\n data = analysis_obj.to_dict()\n data['failed_jobs'] = [job_obj.to_dict() for job_obj in analysis_obj.failed_jobs]\n data['user'] = analysis_obj.user.to_dict() if analysis_obj.user else None\n return jsonify(**data)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nget s a Gravatar for a user s email address.", "response": "def get_gravatar(email, size=80, default='identicon'):\n \"\"\" Get's a Gravatar for a email address.\n\n :param size:\n The size in pixels of one side of the Gravatar's square image.\n Optional, if not supplied will default to ``80``.\n\n :param default:\n Defines what should be displayed if no image is found for this user.\n Optional argument which defaults to ``identicon``. 
The argument can be\n a URI to an image or one of the following options:\n\n ``404``\n Do not load any image if none is associated with the email\n hash, instead return an HTTP 404 (File Not Found) response.\n\n ``mm``\n Mystery-man, a simple, cartoon-style silhouetted outline of a\n person (does not vary by email hash).\n\n ``identicon``\n A geometric pattern based on an email hash.\n\n ``monsterid``\n A generated 'monster' with different colors, faces, etc.\n\n ``wavatar``\n Generated faces with differing features and backgrounds\n\n :return: The URI pointing to the Gravatar.\n\n \"\"\"\n if defaults.ACCOUNTS_GRAVATAR_SECURE:\n base_url = 'https://secure.gravatar.com/avatar/'\n else: base_url = 'http://www.gravatar.com/avatar/'\n\n gravatar_url = '%(base_url)s%(gravatar_id)s?' % \\\n {'base_url': base_url,\n 'gravatar_id': hashlib.md5(email.lower().encode('utf-8')).hexdigest()}\n\n gravatar_url += urllib.parse.urlencode({'s': str(size),\n 'd': default})\n return gravatar_url"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef login_redirect(redirect=None, user=None):\n if redirect: return redirect\n elif user is not None:\n return defaults.ACCOUNTS_LOGIN_REDIRECT_URL % \\\n {'username': user.username}\n else: return settings.LOGIN_REDIRECT_URL", "response": "Returns the URI to redirect to after successful sign in."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef generate_sha1(string, salt=None):\n if not salt:\n salt = hashlib.sha1(str(random.random()).encode('utf-8')).hexdigest()[:5]\n hash = hashlib.sha1((salt+str(string)).encode('utf-8')).hexdigest()\n\n return (salt, hash)", "response": "Generates a sha1 hash for the supplied string."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_datetime_now():\n try:\n from django.utils import timezone\n return 
timezone.now()\n except ImportError:\n return datetime.datetime.now()", "response": "Returns a datetime object with current point in time."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef complex_handle_errors():\n print(\"Demonstrating complex handle_errors example\")\n\n def _handler_function(err, final_message, trace):\n \"\"\"\n This function is a helper function for handling an exception from\n the handle_errors ctx_mgr.\n \"\"\"\n print(\"Handling exception\")\n print(\"==================\")\n print(\"Final Message\")\n print(\"-------------\")\n print(final_message)\n print(\"Traceback\")\n print(\"---------\")\n print(''.join(format_tb(trace)))\n\n with HandleError.handle_errors(\n \"something went wrong ({example_type})\",\n example_type='complex example',\n re_raise=False,\n do_except=_handler_function,\n do_else=lambda: print('No exception occurred'),\n do_finally=lambda: print('Finished handling exceptions'),\n ):\n print(\"we are fine\")\n raise ValueError(\"here we die\")\n print(\"we should not get here\")", "response": "This function demonstrates a more complex usage of the HandleError. 
handle_errors method."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef is_valid_data(obj):\n if obj:\n try:\n tmp = json.dumps(obj, default=datetime_encoder)\n del tmp\n except (TypeError, UnicodeDecodeError):\n return False\n return True", "response": "Check if data is JSON serializable."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ndecode datetimes to python objects.", "response": "def datetime_decoder(dct):\n \"\"\"Decode datetimes to python objects.\n \"\"\"\n if isinstance(dct, list):\n pairs = enumerate(dct)\n elif isinstance(dct, dict):\n pairs = dct.items()\n result = []\n for key, val in pairs:\n if isinstance(val, six.string_types):\n try:\n val = strp_isoformat(val)\n except ValueError:\n pass\n elif isinstance(val, (dict, list)):\n val = datetime_decoder(val)\n result.append((key, val))\n if isinstance(dct, list):\n return [x[1] for x in result]\n elif isinstance(dct, dict):\n return dict(result)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nconvert a raw string to a Message.", "response": "def _decode(rawstr):\n \"\"\"Convert a raw string to a Message.\n \"\"\"\n # Check for the magick word.\n try:\n rawstr = rawstr.decode('utf-8')\n except (AttributeError, UnicodeEncodeError):\n pass\n except (UnicodeDecodeError):\n try:\n rawstr = rawstr.decode('iso-8859-1')\n except (UnicodeDecodeError):\n rawstr = rawstr.decode('utf-8', 'ignore')\n if not rawstr.startswith(_MAGICK):\n raise MessageError(\"This is not a '%s' message (wrong magick word)\"\n % _MAGICK)\n rawstr = rawstr[len(_MAGICK):]\n\n # Check for element count and version\n raw = re.split(r\"\\s+\", rawstr, maxsplit=6)\n if len(raw) < 5:\n raise MessageError(\"Could node decode raw string: '%s ...'\"\n % str(rawstr[:36]))\n version = raw[4][:len(_VERSION)]\n if not _is_valid_version(version):\n raise MessageError(\"Invalid Message version: '%s'\" % str(version))\n\n # 
Start to build message\n msg = dict((('subject', raw[0].strip()),\n ('type', raw[1].strip()),\n ('sender', raw[2].strip()),\n ('time', strp_isoformat(raw[3].strip())),\n ('version', version)))\n\n # Data part\n try:\n mimetype = raw[5].lower()\n data = raw[6]\n except IndexError:\n mimetype = None\n\n if mimetype is None:\n msg['data'] = ''\n msg['binary'] = False\n elif mimetype == 'application/json':\n try:\n msg['data'] = json.loads(raw[6], object_hook=datetime_decoder)\n msg['binary'] = False\n except ValueError:\n raise MessageError(\"JSON decode failed on '%s ...'\" % raw[6][:36])\n elif mimetype == 'text/ascii':\n msg['data'] = str(data)\n msg['binary'] = False\n elif mimetype == 'binary/octet-stream':\n msg['data'] = data\n msg['binary'] = True\n else:\n raise MessageError(\"Unknown mime-type '%s'\" % mimetype)\n\n return msg"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nconverts a Message to a raw string.", "response": "def _encode(msg, head=False, binary=False):\n \"\"\"Convert a Message to a raw string.\n \"\"\"\n rawstr = str(_MAGICK) + u\"{0:s} {1:s} {2:s} {3:s} {4:s}\".format(\n msg.subject, msg.type, msg.sender, msg.time.isoformat(), msg.version)\n\n if not head and msg.data:\n if not binary and isinstance(msg.data, six.string_types):\n return (rawstr + ' ' +\n 'text/ascii' + ' ' + msg.data)\n elif not binary:\n return (rawstr + ' ' +\n 'application/json' + ' ' +\n json.dumps(msg.data, default=datetime_encoder))\n else:\n return (rawstr + ' ' +\n 'binary/octet-stream' + ' ' + msg.data)\n return rawstr"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _getsender():\n import os\n import pwd\n import socket\n host = socket.gethostname()\n user = pwd.getpwuid(os.getuid())[0]\n return \"%s@%s\" % (user, host)", "response": "Return local sender.\n Don't use the getpass module, it looks at various environment variables\n and is unreliable."} 
{"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _validate(self):\n if not is_valid_subject(self.subject):\n raise MessageError(\"Invalid subject: '%s'\" % self.subject)\n if not is_valid_type(self.type):\n raise MessageError(\"Invalid type: '%s'\" % self.type)\n if not is_valid_sender(self.sender):\n raise MessageError(\"Invalid sender: '%s'\" % self.sender)\n if not self.binary and not is_valid_data(self.data):\n raise MessageError(\"Invalid data: data is not JSON serializable: %s\"\n % str(self.data))", "response": "Validate a messages attributes."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef __generateSPK(self):\n\n key = self.__KeyPair.generate()\n\n key_serialized = self.__PublicKeyEncoder.encodePublicKey(\n key.pub,\n self.__curve\n )\n\n signature = self.__XEdDSA(mont_priv = self.__ik.priv).sign(key_serialized)\n\n self.__spk = {\n \"key\": key,\n \"signature\": signature,\n \"timestamp\": time.time()\n }", "response": "Generate a new PK and sign its public key using the IK and add the timestamp aswell\n to allow for periodic rotations."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef __generateOTPKs(self, num_otpks = None):\n\n if num_otpks == None:\n num_otpks = self.__max_num_otpks\n \n otpks = []\n\n for _ in range(num_otpks):\n otpks.append(self.__KeyPair.generate())\n\n try:\n self.__otpks.extend(otpks)\n except AttributeError:\n self.__otpks = otpks", "response": "Generates the given amount of OTPKs."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef __kdf(self, secret_key_material):\n\n salt = b\"\\x00\" * self.__hash_function().digest_size\n\n if self.__curve == \"25519\":\n input_key_material = b\"\\xFF\" * 32\n if self.__curve == \"448\":\n input_key_material = b\"\\xFF\" * 57\n\n input_key_material += 
secret_key_material\n\n hkdf = HKDF(\n algorithm=self.__hash_function(),\n length=32,\n salt=salt,\n info=self.__info_string,\n backend=self.__class__.CRYPTOGRAPHY_BACKEND\n )\n\n return hkdf.derive(input_key_material)", "response": "Derive the shared secret key from the secret key material."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ncheck whether the SPK is too old and generate a new one if it is too old.", "response": "def __checkSPKTimestamp(self):\n \"\"\"\n Check whether the SPK is too old and generate a new one in that case.\n \"\"\"\n\n if time.time() - self.__spk[\"timestamp\"] > self.__spk_timeout:\n self.__generateSPK()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nrefill the OTPKs of the current class if they are not longer than the minimum.", "response": "def __refillOTPKs(self):\n \"\"\"\n If the amount of available OTPKs fell under the minimum, refills the OTPKs up to\n the maximum limit again.\n \"\"\"\n\n remainingOTPKs = len(self.__otpks)\n\n if remainingOTPKs < self.__min_num_otpks:\n self.__generateOTPKs(self.__max_num_otpks - remainingOTPKs)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nhides a one - time pre key from the public bundle.", "response": "def hideFromPublicBundle(self, otpk_pub):\n \"\"\"\n Hide a one-time pre key from the public bundle.\n\n :param otpk_pub: The public key of the one-time pre key to hide, encoded as a\n bytes-like object.\n \"\"\"\n\n self.__checkSPKTimestamp()\n\n for otpk in self.__otpks:\n if otpk.pub == otpk_pub:\n self.__otpks.remove(otpk)\n self.__hidden_otpks.append(otpk)\n self.__refillOTPKs()"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef deleteOTPK(self, otpk_pub):\n\n self.__checkSPKTimestamp()\n\n for otpk in self.__otpks:\n if otpk.pub == otpk_pub:\n self.__otpks.remove(otpk)\n\n for otpk in self.__hidden_otpks:\n if 
otpk.pub == otpk_pub:\n self.__hidden_otpks.remove(otpk)\n\n self.__refillOTPKs()", "response": "Delete a one - time pre key."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn an instance of PublicBundle filled with the public data of this State.", "response": "def getPublicBundle(self):\n \"\"\"\n Fill a PublicBundle object with the public bundle data of this State.\n\n :returns: An instance of PublicBundle, filled with the public data of this State.\n \"\"\"\n\n self.__checkSPKTimestamp()\n\n ik_pub = self.__ik.pub\n spk_pub = self.__spk[\"key\"].pub\n spk_sig = self.__spk[\"signature\"]\n otpk_pubs = [ otpk.pub for otpk in self.__otpks ]\n\n return PublicBundle(ik_pub, spk_pub, spk_sig, otpk_pubs)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef changed(self):\n\n self.__checkSPKTimestamp()\n\n changed = self._changed\n self._changed = False\n return changed", "response": "Returns True if the state has changed since the last flag was loaded or cleared."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef getSharedSecretActive(\n self,\n other_public_bundle,\n allow_zero_otpks = False\n ):\n \"\"\"\n Do the key exchange, as the active party. This involves selecting keys from the\n passive parties' public bundle.\n\n :param other_public_bundle: An instance of PublicBundle, filled with the public\n data of the passive party.\n :param allow_zero_otpks: A flag indicating whether bundles with no one-time pre\n keys are allowed or throw an error. 
False is the recommended default.\n :returns: A dictionary containing the shared secret, the shared associated data\n and the data the passive party needs to finalize the key exchange.\n\n The returned structure looks like this::\n \n {\n \"to_other\": {\n # The public key of the active parties' identity key pair\n \"ik\": bytes,\n\n # The public key of the active parties' ephemeral key pair\n \"ek\": bytes,\n\n # The public key of the used passive parties' one-time pre key or None\n \"otpk\": bytes or None,\n\n # The public key of the passive parties' signed pre key pair\n \"spk\": bytes\n },\n \"ad\": bytes, # The shared associated data\n \"sk\": bytes # The shared secret\n }\n\n :raises KeyExchangeException: If an error occurs during the key exchange. The\n exception message will contain (human-readable) details.\n \"\"\"\n\n self.__checkSPKTimestamp()\n\n other_ik = self.__KeyPair(pub = other_public_bundle.ik)\n\n other_spk = {\n \"key\": self.__KeyPair(pub = other_public_bundle.spk),\n \"signature\": other_public_bundle.spk_signature\n }\n\n other_otpks = [\n self.__KeyPair(pub = otpk) for otpk in other_public_bundle.otpks\n ]\n\n if len(other_otpks) == 0 and not allow_zero_otpks:\n raise KeyExchangeException(\n \"The other public bundle does not contain any OTPKs, which is not \" +\n \"allowed.\"\n )\n\n other_spk_serialized = self.__PublicKeyEncoder.encodePublicKey(\n other_spk[\"key\"].pub,\n self.__curve\n )\n\n if not self.__XEdDSA(mont_pub = other_ik.pub).verify(\n other_spk_serialized,\n other_spk[\"signature\"]\n ):\n raise KeyExchangeException(\n \"The signature of this public bundle's spk could not be verifified.\"\n )\n\n ek = self.__KeyPair.generate()\n\n dh1 = self.__ik.getSharedSecret(other_spk[\"key\"])\n dh2 = ek.getSharedSecret(other_ik)\n dh3 = ek.getSharedSecret(other_spk[\"key\"])\n dh4 = b\"\"\n\n otpk = None\n if len(other_otpks) > 0:\n otpk_index = ord(os.urandom(1)) % len(other_otpks)\n otpk = other_otpks[otpk_index]\n\n dh4 = 
ek.getSharedSecret(otpk)\n\n sk = self.__kdf(dh1 + dh2 + dh3 + dh4)\n\n ik_pub_serialized = self.__PublicKeyEncoder.encodePublicKey(\n self.__ik.pub,\n self.__curve\n )\n\n other_ik_pub_serialized = self.__PublicKeyEncoder.encodePublicKey(\n other_ik.pub,\n self.__curve\n )\n\n ad = ik_pub_serialized + other_ik_pub_serialized\n\n return {\n \"to_other\": {\n \"ik\": self.__ik.pub,\n \"ek\": ek.pub,\n \"otpk\": otpk.pub if otpk else None,\n \"spk\": other_spk[\"key\"].pub\n },\n \"ad\": ad,\n \"sk\": sk\n }", "response": "This method returns the shared secret for the active party."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef getSharedSecretPassive(\n self,\n passive_exchange_data,\n allow_no_otpk = False,\n keep_otpk = False\n ):\n \"\"\"\n Do the key exchange, as the passive party. This involves retrieving data about the\n key exchange from the active party.\n\n :param passive_exchange_data: A structure generated by the active party, which\n contains data requried to complete the key exchange. See the \"to_other\" part\n of the structure returned by \"getSharedSecretActive\".\n :param allow_no_otpk: A boolean indicating whether to allow key exchange, even if\n the active party did not use a one-time pre key. The recommended default is\n False.\n :param keep_otpk: Keep the one-time pre key after using it, instead of deleting\n it. 
See the notes below.\n :returns: A dictionary containing the shared secret and the shared associated\n data.\n\n The returned structure looks like this::\n \n {\n \"ad\": bytes, # The shared associated data\n \"sk\": bytes # The shared secret\n }\n\n The specification of X3DH dictates to delete one-time pre keys as soon as they are\n used.\n\n This behaviour provides security but may lead to considerable usability downsides\n in some environments.\n\n For that reason the keep_otpk flag exists.\n If set to True, the one-time pre key is not automatically deleted.\n USE WITH CARE, THIS MAY INTRODUCE SECURITY LEAKS IF USED INCORRECTLY.\n If you decide to set the flag and to keep the otpks, you have to manage deleting\n them yourself, e.g. by subclassing this class and overriding this method.\n\n :raises KeyExchangeException: If an error occurs during the key exchange. The\n exception message will contain (human-readable) details.\n \"\"\"\n\n self.__checkSPKTimestamp()\n\n other_ik = self.__KeyPair(pub = passive_exchange_data[\"ik\"])\n other_ek = self.__KeyPair(pub = passive_exchange_data[\"ek\"])\n\n if self.__spk[\"key\"].pub != passive_exchange_data[\"spk\"]:\n raise KeyExchangeException(\n \"The SPK used for this key exchange has been rotated, the key exchange \" +\n \"can not be completed.\"\n )\n\n my_otpk = None\n if \"otpk\" in passive_exchange_data:\n for otpk in self.__otpks:\n if otpk.pub == passive_exchange_data[\"otpk\"]:\n my_otpk = otpk\n break\n\n for otpk in self.__hidden_otpks:\n if otpk.pub == passive_exchange_data[\"otpk\"]:\n my_otpk = otpk\n break\n\n if not my_otpk:\n raise KeyExchangeException(\n \"The OTPK used for this key exchange has been deleted, the key \" +\n \"exchange can not be completed.\"\n )\n elif not allow_no_otpk:\n raise KeyExchangeException(\n \"This key exchange data does not contain an OTPK, which is not allowed.\"\n )\n\n dh1 = self.__spk[\"key\"].getSharedSecret(other_ik)\n dh2 = self.__ik.getSharedSecret(other_ek)\n dh3 = 
self.__spk[\"key\"].getSharedSecret(other_ek)\n dh4 = b\"\"\n \n if my_otpk:\n dh4 = my_otpk.getSharedSecret(other_ek)\n\n sk = self.__kdf(dh1 + dh2 + dh3 + dh4)\n\n other_ik_pub_serialized = self.__PublicKeyEncoder.encodePublicKey(\n other_ik.pub,\n self.__curve\n )\n\n ik_pub_serialized = self.__PublicKeyEncoder.encodePublicKey(\n self.__ik.pub,\n self.__curve\n )\n\n ad = other_ik_pub_serialized + ik_pub_serialized\n\n if my_otpk and not keep_otpk:\n self.deleteOTPK(my_otpk.pub)\n\n return {\n \"ad\": ad,\n \"sk\": sk\n }", "response": "This method is used to get the shared secret for a specific key exchange."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nserializes the config data to the user home directory.", "response": "def save(self, mode=0o600):\n \"\"\"\n Serialize the config data to the user home directory.\n\n :param mode: The octal Unix mode (permissions) for the config file.\n \"\"\"\n if self._parent is not None:\n self._parent.save(mode=mode)\n else:\n config_dir = os.path.dirname(os.path.abspath(self.config_files[-1]))\n try:\n os.makedirs(config_dir)\n except OSError as e:\n if not (e.errno == errno.EEXIST and os.path.isdir(config_dir)):\n raise\n with open(self.config_files[-1], \"wb\" if sys.version_info < (3, 0) else \"w\") as fh:\n self._dump(fh)\n os.chmod(self.config_files[-1], mode)\n self._logger.debug(\"Saved config to %s\", self.config_files[-1])"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncall solver in order to calculate chemical potential.", "response": "def getChemicalPotential(self, solution):\n \"\"\"Call solver in order to calculate chemical potential.\n \"\"\"\n if isinstance(solution, Solution):\n solution = solution.getSolution()\n\n self.mu = self.solver.chemicalPotential(solution)\n return self.mu"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef store(self, filename=None, 
label=None, desc=None, date=None):\n date = date if date else datetime.now()\n date = date.replace(microsecond=0).isoformat()\n filename = filename if filename else date + '.mat'\n\n matfile = {\n 'model': str(type(self)),\n 'date': date,\n 'dim': len(self.init_sol.shape),\n 'dimlesses': self.coeffs,\n 'init_solution': self.init_sol,\n 'num_iters': self.num_iters,\n 'num_nodes': self.num_nodes,\n 'order': self.order,\n 'originals': self.originals,\n 'pumping': self.getPumping(),\n 'spatial_step': self.dx,\n 'time_step': self.dt,\n }\n\n if desc:\n matfile['desc'] = desc\n if label:\n matfile['label'] = label\n\n savemat(filename, matfile)", "response": "Store object to mat - file."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nrestores object from mat - file.", "response": "def restore(self, filename):\n \"\"\"Restore object from mat-file. TODO: determine format specification\n \"\"\"\n matfile = loadmat(filename)\n matfile['originals'] = matfile['originals'][0, 0]\n\n if matfile['dim'] == 1:\n matfile['init_solution'] = matfile['init_solution'][0, :]\n matfile['pumping'] = matfile['pumping'][0, :]\n\n self.coeffs = matfile['dimlesses'][0, :]\n self.init_sol = matfile['init_solution']\n self.num_nodes = matfile['num_nodes'][0, 0]\n self.num_iters = matfile['num_iters'][0, 0]\n self.pumping = GridPumping(matfile['pumping'])\n self.dx = matfile['spatial_step'][0, 0]\n self.dt = matfile['time_step'][0, 0]\n\n types = matfile['originals'].dtype\n values = matfile['originals']\n\n self.originals = dict(zip(types.names, (value[0, 0] for value in values)))\n\n if 'desc' in matfile:\n self.desc = str(matfile['desc'][0])\n if 'label' in matfile:\n self.label = str(matfile['label'][0])\n\n return self"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncalculate integral of damping terms of hamiltonian using rectangular method.", "response": "def getDampingIntegral(self):\n \"\"\"Calculate integral of 
damping terms of hamiltonian using rectangular method.\n \"\"\"\n reservoir = self.getReservoir()\n density = self.getDensity()\n length = self.model.getSpatialStep()\n\n if self.solution.ndim == 1:\n nodes = self.model.getNumberOfNodes()\n radius = linspace(0, nodes * self.model.getSpatialStep(), nodes)\n integral = 2 * pi * sum((reservoir - 1.0) * density * radius * length)\n elif self.solution.ndim == 2:\n area = length ** 2\n integral = sum(sum((reservoir - 1.0) * density * area))\n\n return integral"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef store(self, filename=None, label=None, desc=None, date=None):\n date = datetime.now() if date is None else date\n filename = filename if filename else date.replace(microsecond=0).isoformat() + '.mat'\n\n def storeWithFileLikeObject(file_like):\n content = {\n 'elapsed_time': self.elapsed_time,\n 'solution': self.solution,\n 'version': version(),\n }\n self.model.store(file_like, label, desc, date)\n savemat(file_like, content, appendmat=True)\n\n if isinstance(filename, file):\n storeWithFileLikeObject(filename)\n else:\n with open(filename, 'wb') as f:\n storeWithFileLikeObject(f)", "response": "Store object to mat - file."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef restore(self, filename):\n matfile = loadmat(filename)\n\n if matfile['dim'] == 1:\n matfile['solution'] = matfile['solution'][0, :]\n\n self.elapsed_time = matfile['elapsed_time'][0, 0]\n self.solution = matfile['solution']\n\n return self", "response": "Restore object from mat - file."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nformats a value for inclusion in an informative text string.", "response": "def fmt_val(val, shorten=True):\n \"\"\"Format a value for inclusion in an \n informative text string.\n \"\"\"\n val = repr(val)\n max = 50\n if shorten:\n if len(val) > max:\n close 
= val[-1]\n val = val[0:max-4] + \"...\"\n if close in (\">\", \"'\", '\"', ']', '}', ')'):\n val = val + close\n return val"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef fmt_dict_vals(dict_vals, shorten=True):\n items = dict_vals.items()\n if not items:\n return [fmt_val(None, shorten=shorten)]\n return [\"%s=%s\" % (k, fmt_val(v, shorten=shorten)) for k,v in items]", "response": "Formats a dictionary of key = val pairs for inclusion in an informative text string."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nbuilds the command to execute MIP.", "response": "def build_command(self, config, **kwargs):\n \"\"\"Builds the command to execute MIP.\"\"\"\n command = ['perl', self.script, CLI_OPTIONS['config']['option'], config]\n for key, value in kwargs.items():\n # enable passing in flags as \"False\" - shouldn't add command\n if value:\n command.append(CLI_OPTIONS[key]['option'])\n if value is True:\n command.append(CLI_OPTIONS[key].get('default', '1'))\n else:\n command.append(value)\n return command"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nstart a new MIP run.", "response": "def execute(self, command):\n \"\"\"Start a new MIP run.\"\"\"\n process = subprocess.Popen(\n command,\n preexec_fn=lambda: signal.signal(signal.SIGPIPE, signal.SIG_DFL)\n )\n return process"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nupdating wiki pages and github tables", "response": "def update(self):\n \"\"\"\n Update wiki pages\n\n See class docstring for usage\n \"\"\"\n self.log.debug('starting the ``update`` method')\n\n if \"sherlock wiki root\" not in self.settings:\n print \"Sherlock wiki settings not found in settings file\"\n return\n\n staticTableInfo = self._get_table_infos()\n viewInfo = self._get_view_infos()\n streamedTableInfo = self._get_stream_view_infos()\n self._create_md_tables(\n 
tableData=staticTableInfo,\n viewData=viewInfo,\n streamData=streamedTableInfo\n )\n self._write_wiki_pages()\n self._update_github()\n\n self.log.debug('completed the ``update`` method')\n return"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nget the table info from the crossmatch - catalogues database", "response": "def _get_table_infos(\n self,\n trimmed=False):\n \"\"\"query the sherlock-catalogues database table metadata\n \"\"\"\n self.log.debug('starting the ``_get_table_infos`` method')\n\n sqlQuery = u\"\"\"\n SELECT * FROM crossmatch_catalogues.tcs_helper_catalogue_tables_info where legacy_table = 0 and table_name not like \"legacy%%\" and table_name not like \"%%stream\" order by number_of_rows desc;\n \"\"\" % locals()\n tableInfo = readquery(\n log=self.log,\n sqlQuery=sqlQuery,\n dbConn=self.cataloguesDbConn,\n quiet=False\n )\n\n if trimmed:\n cleanTable = []\n for r in tableInfo:\n orow = collections.OrderedDict(sorted({}.items()))\n for c in self.basicColumns:\n if c in r:\n orow[c] = r[c]\n cleanTable.append(orow)\n tableInfo = cleanTable\n\n self.log.debug('completed the ``_get_table_infos`` method')\n return tableInfo"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _get_view_infos(\n self,\n trimmed=False):\n \"\"\"query the sherlock-catalogues database view metadata\n \"\"\"\n self.log.debug('starting the ``_get_view_infos`` method')\n\n sqlQuery = u\"\"\"\n SELECT v.*, t.description as \"master table\" FROM crossmatch_catalogues.tcs_helper_catalogue_views_info as v, crossmatch_catalogues.tcs_helper_catalogue_tables_info AS t where v.legacy_view = 0 and v.view_name not like \"legacy%%\" and t.id=v.table_id order by number_of_rows desc\n \"\"\" % locals()\n viewInfo = readquery(\n log=self.log,\n sqlQuery=sqlQuery,\n dbConn=self.cataloguesDbConn,\n quiet=False\n )\n\n if trimmed:\n cleanTable = []\n for r in viewInfo:\n orow = 
collections.OrderedDict(sorted({}.items()))\n for c in self.basicColumns:\n if c in r:\n orow[c] = r[c]\n cleanTable.append(orow)\n viewInfo = cleanTable\n\n self.log.debug('completed the ``_get_view_infos`` method')\n return viewInfo", "response": "get the view info for the current user"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _get_stream_view_infos(\n self,\n trimmed=False):\n \"\"\"query the sherlock-catalogues database streamed data tables' metadata\n \"\"\"\n self.log.debug('starting the ``_get_stream_view_infos`` method')\n\n sqlQuery = u\"\"\"\n SELECT * FROM crossmatch_catalogues.tcs_helper_catalogue_tables_info where legacy_table = 0 and table_name not like \"legacy%%\" and table_name like \"%%stream\" order by number_of_rows desc;\n \"\"\" % locals()\n streamInfo = readquery(\n log=self.log,\n sqlQuery=sqlQuery,\n dbConn=self.cataloguesDbConn,\n quiet=False\n )\n\n if trimmed:\n cleanTable = []\n for r in streamInfo:\n orow = collections.OrderedDict(sorted({}.items()))\n for c in self.basicColumns:\n if c in r:\n orow[c] = r[c]\n cleanTable.append(orow)\n streamInfo = cleanTable\n\n self.log.debug('completed the ``_get_stream_view_infos`` method')\n return streamInfo", "response": "query the sherlock - catalogues database streamed data tables info"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _create_md_tables(\n self,\n tableData,\n viewData,\n streamData\n ):\n \"\"\"generate markdown format tables from the database query results\n\n **Key Arguments:**\n - ``tableData`` -- the sherlock-catalogues database table metadata.\n - ``viewData`` -- the sherlock-catalogues database view metadata.\n - ``streamData`` -- the sherlock-catalogues database streamed data tables' metadata.\n\n **Return:**\n - None\n \"\"\"\n self.log.debug('starting the ``_create_md_tables`` method')\n\n header = u\"\"\"\n| Table Name | Description | Reference | Number 
Rows | Vizier | NED | Objects | Weight (1-10) |\n| :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- |\"\"\"\n\n rows = u\"\"\n for ti in tableData:\n table_name = ti[\"table_name\"]\n description = ti[\"description\"]\n url = ti[\"url\"]\n number_of_rows = ti[\"number_of_rows\"]\n reference_url = ti[\"reference_url\"]\n reference_text = ti[\"reference_text\"]\n notes = ti[\"notes\"]\n vizier_link = ti[\"vizier_link\"]\n in_ned = ti[\"in_ned\"]\n object_types = ti[\"object_types\"]\n version_number = ti[\"version_number\"]\n last_updated = ti[\"last_updated\"]\n legacy_table = ti[\"legacy_table\"]\n old_table_name = ti[\"old_table_name\"]\n weight = ti[\"object_type_accuracy\"]\n\n number_of_rows = str(number_of_rows)\n thisLen = len(number_of_rows)\n newNumber = \"\"\n count = 0\n while count < thisLen:\n count += 1\n newNumber = number_of_rows[-count] + newNumber\n if count % 3 == 0:\n newNumber = \",\" + newNumber\n if newNumber[0] == \",\":\n newNumber = newNumber[1:]\n\n if vizier_link and len(vizier_link) and vizier_link != 0 and vizier_link != \"0\":\n vizier_link = u\"[\u2713](%(vizier_link)s)\" % locals()\n else:\n vizier_link = u\"\"\n\n if in_ned:\n in_ned = u\"\u2713\"\n else:\n in_ned = u\"\"\n\n rows += u\"\"\"\n| %(table_name)s | [%(description)s](%(url)s) | [%(reference_text)s](%(reference_url)s) | %(newNumber)s | %(vizier_link)s | %(in_ned)s | %(object_types)s | %(weight)s |\"\"\" % locals()\n\n self.mdTables = header + rows\n\n header = u\"\"\"\n| View Name | Number Rows | Object Type |\n| :--- | :--- | :--- |\"\"\"\n\n rows = u\"\"\n for ti in viewData:\n view_name = ti[\"view_name\"]\n number_of_rows = ti[\"number_of_rows\"]\n object_type = ti[\"object_type\"]\n\n number_of_rows = str(number_of_rows)\n thisLen = len(number_of_rows)\n newNumber = \"\"\n count = 0\n while count < thisLen:\n count += 1\n newNumber = number_of_rows[-count] + newNumber\n if count % 3 == 0:\n newNumber = \",\" + newNumber\n if newNumber[0] == \",\":\n newNumber = 
newNumber[1:]\n\n rows += u\"\"\"\n| %(view_name)s | %(newNumber)s | %(object_type)s |\"\"\" % locals()\n\n self.mdViews = header + rows\n\n header = u\"\"\"\n| Table Name | Description | Reference | Number Rows | Objects | \n| :--- | :--- | :--- | :--- | :--- | \"\"\"\n\n rows = u\"\"\n for ti in streamData:\n table_name = ti[\"table_name\"]\n description = ti[\"description\"]\n url = ti[\"url\"]\n number_of_rows = ti[\"number_of_rows\"]\n reference_url = ti[\"reference_url\"]\n reference_text = ti[\"reference_text\"]\n notes = ti[\"notes\"]\n vizier_link = ti[\"vizier_link\"]\n in_ned = ti[\"in_ned\"]\n object_types = ti[\"object_types\"]\n version_number = ti[\"version_number\"]\n last_updated = ti[\"last_updated\"]\n legacy_table = ti[\"legacy_table\"]\n old_table_name = ti[\"old_table_name\"]\n\n number_of_rows = str(number_of_rows)\n thisLen = len(number_of_rows)\n newNumber = \"\"\n count = 0\n while count < thisLen:\n count += 1\n newNumber = number_of_rows[-count] + newNumber\n if count % 3 == 0:\n newNumber = \",\" + newNumber\n if newNumber[0] == \",\":\n newNumber = newNumber[1:]\n\n if len(vizier_link) and vizier_link != 0 and vizier_link != \"0\":\n vizier_link = u\"[\u2713](%(vizier_link)s)\" % locals()\n else:\n vizier_link = u\"\"\n\n if in_ned:\n in_ned = u\"\u2713\"\n else:\n in_ned = u\"\"\n\n rows += u\"\"\"\n| %(table_name)s | [%(description)s](%(url)s) | [%(reference_text)s](%(reference_url)s) | %(newNumber)s | %(object_types)s |\"\"\" % locals()\n\n self.mdStreams = header + rows\n\n self.log.debug('completed the ``_create_md_tables`` method')\n return", "response": "generate markdown format tables from the database query results"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _write_wiki_pages(\n self):\n \"\"\"write the markdown formated content of the database tables' metadata to local wiki pages\n \"\"\"\n self.log.debug('starting the ``_write_wiki_pages`` method')\n\n 
pathToWriteFile = self.settings[\n \"sherlock wiki root\"] + \"/Crossmatch-Catalogue Tables.md\"\n writeFile = codecs.open(pathToWriteFile, encoding='utf-8', mode='w')\n now = datetime.now()\n now = now.strftime(\"%Y-%m-%d %H:%M\")\n lastUpdated = \"\"\"Last Updated %(now)s\n\"\"\" % locals()\n\n writeFile.write(lastUpdated + self.mdTables)\n writeFile.close()\n\n pathToWriteFile = self.settings[\n \"sherlock wiki root\"] + \"/Crossmatch-Catalogue Views.md\"\n writeFile = codecs.open(pathToWriteFile, encoding='utf-8', mode='w')\n now = datetime.now()\n now = now.strftime(\"%Y-%m-%d %H:%M\")\n lastUpdated = \"\"\"Last Updated %(now)s\n\"\"\" % locals()\n\n writeFile.write(lastUpdated + self.mdViews)\n writeFile.close()\n\n pathToWriteFile = self.settings[\n \"sherlock wiki root\"] + \"/Crossmatch-Catalogue Streams.md\"\n writeFile = codecs.open(pathToWriteFile, encoding='utf-8', mode='w')\n now = datetime.now()\n now = now.strftime(\"%Y-%m-%d %H:%M\")\n lastUpdated = \"\"\"Last Updated %(now)s\n\"\"\" % locals()\n\n writeFile.write(lastUpdated + self.mdStreams)\n writeFile.close()\n\n self.log.debug('completed the ``_write_wiki_pages`` method')\n return None", "response": "write the markdown formated content of the database tables views and streams to local wiki pages\n "} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _update_github(\n self):\n \"\"\"commit the changes and push them to github\n \"\"\"\n self.log.debug('starting the ``_update_github`` method')\n\n from subprocess import Popen, PIPE, STDOUT\n gdir = self.settings[\"sherlock wiki root\"]\n cmd = \"\"\"cd %(gdir)s && git add --all && git commit -m \"x\" && git pull origin master && git push origin master\"\"\" % locals()\n p = Popen(cmd, stdout=PIPE, stdin=PIPE, shell=True)\n output = p.communicate()[0]\n print output\n self.log.debug('output: %(output)s' % locals())\n\n self.log.debug('completed the ``_update_github`` method')\n 
return None", "response": "commit the changes and push them to github"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef email_user(\n self, subject, message, from_email=settings.DEFAULT_FROM_EMAIL, **kwargs\n ):\n \"\"\"\n Sends an email to this User.\n If settings.EMAIL_OVERRIDE_ADDRESS is set, this mail will be redirected to the alternate mail address.\n\n \"\"\"\n receiver = self.email\n if settings.EMAIL_OVERRIDE_ADDRESS:\n receiver = settings.EMAIL_OVERRIDE_ADDRESS\n\n send_mail(subject, message, from_email, [receiver], **kwargs)", "response": "Sends an email to the user with the specified subject and message."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nclean up files from old analyses runs.", "response": "def clean(context, days_ago, yes):\n \"\"\"Clean up files from \"old\" analyses runs.\"\"\"\n number_of_days_ago = dt.datetime.now() - dt.timedelta(days=days_ago)\n analyses = context.obj['store'].analyses(\n status='completed',\n before=number_of_days_ago,\n deleted=False,\n )\n for analysis_obj in analyses:\n LOG.debug(f\"checking analysis: {analysis_obj.family} ({analysis_obj.id})\")\n latest_analysis = context.obj['store'].analyses(family=analysis_obj.family).first()\n if analysis_obj != latest_analysis:\n print(click.style(f\"{analysis_obj.family}: family has been re-started\", fg='yellow'))\n else:\n print(f\"delete analysis: {analysis_obj.family} ({analysis_obj.id})\")\n context.invoke(delete, analysis_id=analysis_obj.id, yes=yes)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef job_ids(log_stream):\n id_rows = [line for line in log_stream if 'scancel' in line]\n jobs = [id_row.strip()[-7:-1] for id_row in id_rows]\n return jobs", "response": "Grep out all lines with scancel example."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nload or create a 
config from a file or create it if it doesn t exist.", "response": "def get_or_create_config(path, config):\n \"\"\"Using TOML format, load config from given path, or write out example based on defaults\"\"\"\n if os.path.isfile(path):\n with open(path) as fh:\n _LOG.debug(\"loading config from %s\", os.path.abspath(path))\n config._inflate(toml.load(fh))\n else:\n try:\n os.makedirs(os.path.dirname(path))\n except OSError:\n pass\n with open(path, \"w\") as fh:\n toml.dump(config._deflate(), fh)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _deflate(cls):\n data = {k: v for k, v in vars(cls).items() if not k.startswith(\"_\")}\n return {Constants.CONFIG_KEY: data}", "response": "Prepare for serialisation - returns a dictionary"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nupdate config by deserialising input dictionary", "response": "def _inflate(cls, data):\n \"\"\"Update config by deserialising input dictionary\"\"\"\n for k, v in data[Constants.CONFIG_KEY].items():\n setattr(cls, k, v)\n return cls._deflate()"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef with_patched_object(obj, attr_name, patched_value):\n def patcher(method):\n @wraps(method)\n def method_call(*m_args, **m_kw):\n patched_obj = patch_object(obj, attr_name, patched_value)\n try:\n return method(*m_args, **m_kw)\n finally:\n patched_obj.restore()\n return method_call\n return patcher", "response": "Decorator that patches an object before the decorated method is called and restores it afterwards."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\npatches an object and returns an instance of fudge. patcher. 
PatchHandler for later restoration.", "response": "def patch_object(obj, attr_name, patched_value):\n \"\"\"Patches an object and returns an instance of :class:`fudge.patcher.PatchHandler` for later restoration.\n\n Note that if *obj* is not an object but a path to a module then it will be imported.\n\n You may want to use a more convenient wrapper :func:`with_patched_object` or :func:`patched_context`\n\n Example::\n\n >>> from fudge import patch_object\n >>> class Session:\n ... state = 'clean'\n ...\n >>> patched_session = patch_object(Session, \"state\", \"dirty\")\n >>> Session.state\n 'dirty'\n >>> patched_session.restore()\n >>> Session.state\n 'clean'\n\n Here is another example showing how to patch multiple objects at once::\n\n >>> class Session:\n ... state = 'clean'\n ...\n >>> class config:\n ... session_strategy = 'database'\n ...\n >>> patches = [\n ... patch_object(config, \"session_strategy\", \"filesystem\"),\n ... patch_object(Session, \"state\", \"dirty\")\n ... ]\n >>> try:\n ... # your app under test would run here ...\n ... print \"(while patched)\"\n ... print \"config.session_strategy=%r\" % config.session_strategy\n ... print \"Session.state=%r\" % Session.state\n ... finally:\n ... for p in patches:\n ... p.restore()\n ... 
print \"(patches restored)\"\n (while patched)\n config.session_strategy='filesystem'\n Session.state='dirty'\n (patches restored)\n >>> config.session_strategy\n 'database'\n >>> Session.state\n 'clean'\n\n \"\"\"\n if isinstance(obj, (str, unicode)):\n obj_path = adjusted_path = obj\n done = False\n exc = None\n at_top_level = False\n while not done:\n try:\n obj = __import__(adjusted_path)\n done = True\n except ImportError:\n # Handle paths that traveerse object attributes.\n # Such as: smtplib.SMTP.connect\n # smtplib <- module to import\n adjusted_path = adjusted_path.rsplit('.', 1)[0]\n if not exc:\n exc = sys.exc_info()\n if at_top_level:\n # We're at the top level module and it doesn't exist.\n # Raise the first exception since it will make more sense:\n etype, val, tb = exc\n raise etype, val, tb\n if not adjusted_path.count('.'):\n at_top_level = True\n for part in obj_path.split('.')[1:]:\n obj = getattr(obj, part)\n\n handle = PatchHandler(obj, attr_name)\n handle.patch(patched_value)\n return handle"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nsetting a new value for the attribute of the object.", "response": "def patch(self, patched_value):\n \"\"\"Set a new value for the attribute of the object.\"\"\"\n try:\n if self.getter:\n setattr(self.getter_class, self.attr_name, patched_value)\n else:\n setattr(self.orig_object, self.attr_name, patched_value)\n except TypeError:\n # Workaround for patching builtin objects:\n proxy_name = 'fudge_proxy_%s_%s_%s' % (\n self.orig_object.__module__,\n self.orig_object.__name__,\n patched_value.__class__.__name__\n )\n self.proxy_object = type(proxy_name, (self.orig_object,),\n {self.attr_name: patched_value})\n mod = sys.modules[self.orig_object.__module__]\n setattr(mod, self.orig_object.__name__, self.proxy_object)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef restore(self):\n if self.proxy_object is None:\n if self.getter:\n 
setattr(self.getter_class, self.attr_name, self.getter)\n elif self.is_local:\n setattr(self.orig_object, self.attr_name, self.orig_value)\n else:\n # Was not a local, safe to delete:\n delattr(self.orig_object, self.attr_name)\n else:\n setattr(sys.modules[self.orig_object.__module__],\n self.orig_object.__name__,\n self.orig_object)", "response": "Restore the saved value for the attribute of the object."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef parse_config(data: dict) -> dict:\n return {\n 'email': data.get('email'),\n 'family': data['family_id'],\n 'samples': [{\n 'id': sample_id,\n 'type': analysis_type,\n } for sample_id, analysis_type in data['analysis_type'].items()],\n 'config_path': data['config_file_analysis'],\n 'is_dryrun': True if 'dry_run_all' in data else False,\n 'log_path': data['log_file'],\n 'out_dir': data['outdata_dir'],\n 'priority': data['slurm_quality_of_service'],\n 'sampleinfo_path': data['sample_info_file'],\n }", "response": "Parse MIP config file."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef parse_sampleinfo(data: dict) -> dict:\n\n genome_build = data['human_genome_build']\n genome_build_str = f\"{genome_build['source']}{genome_build['version']}\"\n if 'svdb' in data['program']:\n svdb_outpath = (f\"{data['program']['svdb']['path']}\")\n else:\n svdb_outpath = ''\n outdata = {\n 'date': data['analysis_date'],\n 'family': data['family'],\n 'genome_build': genome_build_str,\n 'rank_model_version': data['program']['genmod']['rank_model']['version'],\n 'is_finished': True if data['analysisrunstatus'] == 'finished' else False,\n 'pedigree_path': data['pedigree_minimal'],\n 'peddy': {\n 'ped': (data['program']['peddy']['peddy']['path'] if\n 'peddy' in data['program'] else None),\n 'ped_check': (data['program']['peddy']['ped_check']['path'] if\n 'peddy' in data['program'] else None),\n 'sex_check': 
(data['program']['peddy']['sex_check']['path'] if\n 'peddy' in data['program'] else None),\n },\n 'qcmetrics_path': data['program']['qccollect']['path'],\n 'samples': [],\n 'snv': {\n 'bcf': data['most_complete_bcf']['path'],\n 'clinical_vcf': data['vcf_binary_file']['clinical']['path'],\n 'gbcf': data['gbcf_file']['path'],\n 'research_vcf': data['vcf_binary_file']['research']['path'],\n },\n 'svdb_outpath': svdb_outpath,\n 'sv': {\n 'bcf': data.get('sv_bcf_file', {}).get('path'),\n 'clinical_vcf': (data['sv_vcf_binary_file']['clinical']['path'] if\n 'sv_vcf_binary_file' in data else None),\n 'merged': svdb_outpath,\n 'research_vcf': (data['sv_vcf_binary_file']['research']['path'] if\n 'sv_vcf_binary_file' in data else None),\n },\n 'version': data['mip_version'],\n }\n\n for sample_id, sample_data in data['sample'].items():\n sample = {\n 'id': sample_id,\n 'bam': sample_data['most_complete_bam']['path'],\n 'sambamba': list(sample_data['program']['sambamba_depth'].values())[0]['path'],\n 'sex': sample_data['sex'],\n # subsample mt is only for wgs data\n 'subsample_mt': (list(sample_data['program']['samtools_subsample_mt'].values())[0]['path'] if\n 'samtools_subsample_mt' in sample_data['program'] else None),\n 'vcf2cytosure': list(sample_data['program']['vcf2cytosure'].values())[0]['path'],\n }\n chanjo_sexcheck = list(sample_data['program']['chanjo_sexcheck'].values())[0]\n sample['chanjo_sexcheck'] = chanjo_sexcheck['path']\n outdata['samples'].append(sample)\n\n return outdata", "response": "Parse MIP sample info file."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nparses MIP qc metrics file.", "response": "def parse_qcmetrics(metrics: dict) -> dict:\n \"\"\"Parse MIP qc metrics file.\n Args:\n metrics (dict): raw YAML input from MIP qc metrics file\n\n Returns:\n dict: parsed data\n \"\"\"\n data = {\n 'versions': {\n 'freebayes': metrics['program']['freebayes']['version'],\n 'gatk': 
metrics['program']['gatk']['version'],\n 'manta': metrics['program'].get('manta', {}).get('version'),\n 'bcftools': metrics['program']['bcftools']['version'],\n 'vep': metrics['program']['varianteffectpredictor']['version'],\n },\n 'samples': [],\n }\n\n plink_samples = {}\n plink_sexcheck = metrics['program'].get('plink_sexcheck', {}).get('sample_sexcheck')\n if isinstance(plink_sexcheck, str):\n sample_id, sex_number = plink_sexcheck.strip().split(':', 1)\n plink_samples[sample_id] = PED_SEX_MAP.get(int(sex_number))\n elif isinstance(plink_sexcheck, list):\n for sample_raw in plink_sexcheck:\n sample_id, sex_number = sample_raw.split(':', 1)\n plink_samples[sample_id] = PED_SEX_MAP.get(int(sex_number))\n\n for sample_id, sample_metrics in metrics['sample'].items():\n\n ## Bam stats metrics\n bam_stats = [values['bamstats'] for key, values in sample_metrics.items()\n if key[:-1].endswith('.lane')]\n total_reads = sum(int(bam_stat['raw_total_sequences']) for bam_stat in bam_stats)\n total_mapped = sum(int(bam_stat['reads_mapped']) for bam_stat in bam_stats)\n\n ## Picard metrics\n main_key = [key for key in sample_metrics.keys() if '_lanes_' in key][0]\n\n hs_metrics = sample_metrics[main_key]['collecthsmetrics']['header']['data']\n multiple_inst_metrics = sample_metrics[main_key]['collectmultiplemetricsinsertsize']['header']['data']\n multiple_metrics = sample_metrics[main_key]['collectmultiplemetrics']['header']['pair']\n\n sample_data = {\n 'at_dropout': hs_metrics['AT_DROPOUT'],\n 'completeness_target': {\n 10: hs_metrics['PCT_TARGET_BASES_10X'],\n 20: hs_metrics['PCT_TARGET_BASES_20X'],\n 50: hs_metrics['PCT_TARGET_BASES_50X'],\n 100: hs_metrics['PCT_TARGET_BASES_100X'],\n },\n 'duplicates': float(sample_metrics[main_key]['markduplicates']['fraction_duplicates']),\n 'gc_dropout': hs_metrics['GC_DROPOUT'],\n 'id': sample_id,\n 'median_insert_size': multiple_inst_metrics['MEDIAN_INSERT_SIZE'],\n 'mapped': total_mapped / total_reads,\n 'plink_sex': 
plink_samples.get(sample_id),\n 'predicted_sex': sample_metrics[main_key]['chanjo_sexcheck']['gender'],\n 'reads': total_reads,\n 'insert_size_standard_deviation': float(multiple_inst_metrics['STANDARD_DEVIATION']),\n 'strand_balance': float(multiple_metrics['STRAND_BALANCE']),\n 'target_coverage': float(hs_metrics['MEAN_TARGET_COVERAGE']),\n }\n data['samples'].append(sample_data)\n return data"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef parse_peddy_sexcheck(handle: TextIO):\n data = {}\n samples = csv.DictReader(handle)\n for sample in samples:\n data[sample['sample_id']] = {\n 'predicted_sex': sample['predicted_sex'],\n 'het_ratio': float(sample['het_ratio']),\n 'error': True if sample['error'] == 'True' else False,\n }\n return data", "response": "Parse Peddy sexcheck output."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nparsing Chanjo sex - check output.", "response": "def parse_chanjo_sexcheck(handle: TextIO):\n \"\"\"Parse Chanjo sex-check output.\"\"\"\n samples = csv.DictReader(handle, delimiter='\\t')\n for sample in samples:\n return {\n 'predicted_sex': sample['sex'],\n 'x_coverage': float(sample['#X_coverage']),\n 'y_coverage': float(sample['Y_coverage']),\n }"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef import_util(imp):\n '''\n Lazily imports a utils (class,\n function,or variable) from a module) from\n a string.\n @param imp:\n '''\n\n mod_name, obj_name = imp.rsplit('.', 1)\n mod = importlib.import_module(mod_name)\n return getattr(mod, obj_name)", "response": "Lazily imports a utils class from a module or function or variable."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ndraws segments on image", "response": "def draw_segments(image, segments, color=(255, 0, 0), line_width=1):\n \"\"\"draws segments on image\"\"\"\n for segment in 
segments:\n x, y, w, h = segment\n cv2.rectangle(image, (x, y), (x + w, y + h), color, line_width)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nconverts time in string format to seconds.", "response": "def time_to_sec(time_str: str) -> int:\n \"\"\"Convert time in string format to seconds.\n\n Skipping seconds since sometimes the last column is truncated\n for entries where >10 days.\n \"\"\"\n total_sec = 0\n if '-' in time_str:\n # parse out days\n days, time_str = time_str.split('-')\n total_sec += (int(days) * 24 * 60 * 60)\n\n # parse out the hours and mins (skip seconds)\n hours_min_raw = time_str.split(':')[:-1]\n time_parts = [int(round(float(val))) for val in hours_min_raw]\n total_sec += time_parts[-1] * 60 # minutes\n if len(time_parts) > 1:\n total_sec += time_parts[-2] * 60 * 60 # hours\n return total_sec"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nconverting sacct row to dict.", "response": "def convert_job(row: list) -> dict:\n \"\"\"Convert sacct row to dict.\"\"\"\n state = row[-2]\n start_time_raw = row[-4]\n end_time_raw = row[-3]\n if state not in ('PENDING', 'CANCELLED'):\n start_time = datetime.strptime(start_time_raw, '%Y-%m-%dT%H:%M:%S')\n if state != 'RUNNING':\n end_time = datetime.strptime(end_time_raw, '%Y-%m-%dT%H:%M:%S')\n else:\n end_time = None\n else:\n start_time = end_time = None\n\n # parse name of job\n job_name = row[1]\n step_name, step_context = job_name.rstrip('_BOTH').rstrip('_SV').rsplit('_', 1)\n\n return {\n 'id': int(row[0]),\n 'name': job_name,\n 'step': step_name,\n 'context': step_context,\n 'state': state,\n 'start': start_time,\n 'end': end_time,\n 'elapsed': time_to_sec(row[-5]),\n 'cpu': time_to_sec(row[-6]),\n 'is_completed': state == 'COMPLETED',\n }"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nparse out information from sacct status output.", "response": "def parse_sacct(sacct_stream):\n 
\"\"\"Parse out information from sacct status output.\"\"\"\n rows = (line.split() for line in sacct_stream)\n # filter rows that begin with a SLURM job id\n relevant_rows = (row for row in rows if row[0].isdigit())\n jobs = [convert_job(row) for row in relevant_rows]\n return jobs"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nfilters jobs that have a FAILED etc. status.", "response": "def filter_jobs(sacct_jobs, failed=True):\n \"\"\"Filter jobs that have a FAILED etc. status.\"\"\"\n categories = FAILED_CATEGORIES if failed else NORMAL_CATEGORIES\n filtered_jobs = [job for job in sacct_jobs if job['state'] in categories]\n return filtered_jobs"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngives a list of segments outputs a array of line numbers or - 1 if it doesn t belong to any segment", "response": "def guess_segments_lines(segments, lines, nearline_tolerance=5.0):\n \"\"\"\n given segments, outputs a array of line numbers, or -1 if it\n doesn't belong to any\n \"\"\"\n ys = segments[:, 1]\n closeness = numpy.abs(numpy.subtract.outer(ys, lines)) # each row a y, each collumn a distance to each line\n line_of_y = numpy.argmin(closeness, axis=1)\n distance = numpy.min(closeness, axis=1)\n bad = distance > numpy.mean(distance) + nearline_tolerance * numpy.std(distance)\n line_of_y[bad] = -1\n return line_of_y"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef contained_segments_matrix(segments):\n x1, y1 = segments[:, 0], segments[:, 1]\n x2, y2 = x1 + segments[:, 2], y1 + segments[:, 3]\n n = len(segments)\n\n x1so, x2so, y1so, y2so = list(map(numpy.argsort, (x1, x2, y1, y2)))\n x1soi, x2soi, y1soi, y2soi = list(map(numpy.argsort, (x1so, x2so, y1so, y2so))) # inverse transformations\n # let rows be x1 and collumns be x2. 
this array represents where x1x2\n o2 = numpy.tril(numpy.ones((n, n)), k=0).astype(bool)\n a_inside_b_x = o2[x1soi][:, x1soi] * o1[x2soi][:, x2soi] # (x1[a]>x1[b] and x2[a]y1[b] and y2[a] expected_call_len:\n # only show the first extra call since this\n # will be triggered before all calls are finished:\n error = \"#%s %s was unexpected\" % (\n expected_call_len+1,\n self._actual_calls[expected_call_len]\n )\n\n if error:\n msg = \"%s; Expected: %s\" % (\n error, self._repr_call_list(self._call_order))\n raise AssertionError(msg)", "response": "assert that all calls have been made in the right order."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nredefines a call. The fake method will execute your function.", "response": "def calls(self, call):\n \"\"\"Redefine a call.\n\n The fake method will execute your function. I.E.::\n\n >>> f = Fake().provides('hello').calls(lambda: 'Why, hello there')\n >>> f.hello()\n 'Why, hello there'\n\n \"\"\"\n exp = self._get_current_call()\n exp.call_replacement = call\n return self"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nexpects a call. .. doctest:: :hide: >>> import fudge >>> fudge.clear_expectations() >>> fudge.clear_calls() If the method *call_name* is never called, then raise an error. I.E.:: >>> session = Fake('session').expects('open').expects('close') >>> session.open() >>> fudge.verify() Traceback (most recent call last): ... AssertionError: fake:session.close() was not called .. note:: If you want to also verify the order these calls are made in, use :func:`fudge.Fake.remember_order`. When using :func:`fudge.Fake.next_call` after ``expects(...)``, each new call will be part of the expected order Declaring ``expects()`` multiple times is the same as declaring :func:`fudge.Fake.next_call`", "response": "def expects(self, call_name):\n \"\"\"Expect a call.\n\n .. 
doctest::\n :hide:\n\n >>> import fudge\n >>> fudge.clear_expectations()\n >>> fudge.clear_calls()\n\n If the method *call_name* is never called, then raise an error. I.E.::\n\n >>> session = Fake('session').expects('open').expects('close')\n >>> session.open()\n >>> fudge.verify()\n Traceback (most recent call last):\n ...\n AssertionError: fake:session.close() was not called\n\n .. note::\n If you want to also verify the order these calls are made in,\n use :func:`fudge.Fake.remember_order`. When using :func:`fudge.Fake.next_call`\n after ``expects(...)``, each new call will be part of the expected order\n\n Declaring ``expects()`` multiple times is the same as\n declaring :func:`fudge.Fake.next_call`\n \"\"\"\n if call_name in self._declared_calls:\n return self.next_call(for_method=call_name)\n\n self._last_declared_call_name = call_name\n c = ExpectedCall(self, call_name, call_order=self._expected_call_order)\n self._declare_call(call_name, c)\n return self"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef next_call(self, for_method=None):\n last_call_name = self._last_declared_call_name\n if for_method:\n if for_method not in self._declared_calls:\n raise FakeDeclarationError(\n \"next_call(for_method=%r) is not possible; \"\n \"declare expects(%r) or provides(%r) first\" % (\n for_method, for_method, for_method))\n else:\n # set this for the local function:\n last_call_name = for_method\n # reset this for subsequent methods:\n self._last_declared_call_name = last_call_name\n\n if last_call_name:\n exp = self._declared_calls[last_call_name]\n elif self._callable:\n exp = self._callable\n else:\n raise FakeDeclarationError('next_call() must follow provides(), '\n 'expects() or is_callable()')\n\n if getattr(exp, 'expected_times_called', None) is not None:\n raise FakeDeclarationError(\"Cannot use next_call() in combination with times_called()\")\n\n if not isinstance(exp, CallStack):\n # lazily 
create a stack with the last defined\n # expected call as the first on the stack:\n stack = CallStack(self, initial_calls=[exp],\n expected=isinstance(exp, ExpectedCall),\n call_name=exp.call_name)\n\n # replace the old call obj using the same name:\n if last_call_name:\n self._declare_call(last_call_name, stack)\n elif self._callable:\n self._callable = stack\n else:\n stack = exp\n\n # hmm, we need a copy here so that the last call\n # falls off the stack.\n if stack.expected:\n next_call = ExpectedCall(self, call_name=exp.call_name, call_order=self._expected_call_order)\n else:\n next_call = Call(self, call_name=exp.call_name)\n stack.add_call(next_call)\n return self", "response": "Start expecting or provides multiple calls."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nproviding a call. The call acts as a stub -- no error is raised if it is not called.:: >>> session = Fake('session').provides('open').provides('close') >>> import fudge >>> fudge.clear_expectations() # from any previously declared fakes >>> fudge.clear_calls() >>> session.open() >>> fudge.verify() # close() not called but no error Declaring ``provides()`` multiple times is the same as declaring :func:`fudge.Fake.next_call`", "response": "def provides(self, call_name):\n \"\"\"Provide a call.\n\n The call acts as a stub -- no error is raised if it is not called.::\n\n >>> session = Fake('session').provides('open').provides('close')\n >>> import fudge\n >>> fudge.clear_expectations() # from any previously declared fakes\n >>> fudge.clear_calls()\n >>> session.open()\n >>> fudge.verify() # close() not called but no error\n\n Declaring ``provides()`` multiple times is the same as\n declaring :func:`fudge.Fake.next_call`\n\n \"\"\"\n if call_name in self._declared_calls:\n return self.next_call(for_method=call_name)\n\n self._last_declared_call_name = call_name\n c = Call(self, call_name)\n self._declare_call(call_name, c)\n return self"} {"SOURCE": "codesearchnet", 
"instruction": "Can you generate a brief explanation for the following Python 3 code\ndef raises(self, exc):\n exp = self._get_current_call()\n exp.exception_to_raise = exc\n return self", "response": "Set the last call to raise an exception class or instance."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nverifying that all subsequent calls to get_id are in the right order.", "response": "def remember_order(self):\n \"\"\"Verify that subsequent :func:`fudge.Fake.expects` are called in the right order.\n\n For example::\n\n >>> import fudge\n >>> db = fudge.Fake('db').remember_order().expects('insert').expects('update')\n >>> db.update()\n Traceback (most recent call last):\n ...\n AssertionError: Call #1 was fake:db.update(); Expected: #1 fake:db.insert(), #2 fake:db.update(), end\n >>> fudge.clear_expectations()\n\n When declaring multiple calls using :func:`fudge.Fake.next_call`, each subsequent call will be added\n to the expected order of calls ::\n\n >>> import fudge\n >>> sess = fudge.Fake(\"session\").remember_order().expects(\"get_id\").returns(1)\n >>> sess = sess.expects(\"set_id\").with_args(5)\n >>> sess = sess.next_call(for_method=\"get_id\").returns(5)\n\n Multiple calls to ``get_id()`` are now expected ::\n\n >>> sess.get_id()\n 1\n >>> sess.set_id(5)\n >>> sess.get_id()\n 5\n >>> fudge.verify()\n >>> fudge.clear_expectations()\n\n \"\"\"\n if self._callable:\n raise FakeDeclarationError(\n \"remember_order() cannot be used for Fake(callable=True) or Fake(expect_call=True)\")\n self._expected_call_order = ExpectedCallOrder(self)\n registry.remember_expected_call_order(self._expected_call_order)\n return self"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nsets the last call to return a value.", "response": "def returns(self, val):\n \"\"\"Set the last call to return a value.\n\n Set a static value to return when a method is called. 
I.E.::\n\n >>> f = Fake().provides('get_number').returns(64)\n >>> f.get_number()\n 64\n\n \"\"\"\n exp = self._get_current_call()\n exp.return_val = val\n return self"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef returns_fake(self, *args, **kwargs):\n exp = self._get_current_call()\n endpoint = kwargs.get('name', exp.call_name)\n name = self._endpoint_name(endpoint)\n kwargs['name'] = '%s()' % name\n fake = self.__class__(*args, **kwargs)\n exp.return_val = fake\n return fake", "response": "Set the last call to return a fake object."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef times_called(self, n):\n if self._last_declared_call_name:\n actual_last_call = self._declared_calls[self._last_declared_call_name]\n if isinstance(actual_last_call, CallStack):\n raise FakeDeclarationError(\"Cannot use times_called() in combination with next_call()\")\n # else: # self._callable is in effect\n\n exp = self._get_current_call()\n exp.expected_times_called = n\n return self", "response": "Sets the number of times an object can be called."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef with_args(self, *args, **kwargs):\n exp = self._get_current_call()\n if args:\n exp.expected_args = args\n if kwargs:\n exp.expected_kwargs = kwargs\n return self", "response": "Set the last call to expect specific argument values."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef with_matching_args(self, *args, **kwargs):\n exp = self._get_current_call()\n if args:\n exp.expected_matching_args = args\n if kwargs:\n exp.expected_matching_kwargs = kwargs\n return self", "response": "Set the last call to expect specific argument values if those arguments exist."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the 
documentation\ndef without_args(self, *args, **kwargs):\n exp = self._get_current_call()\n if args:\n exp.unexpected_args = args\n if kwargs:\n exp.unexpected_kwargs = kwargs\n return self", "response": "This method is used to set the last call to expect that certain arguments will not exist."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef with_arg_count(self, count):\n exp = self._get_current_call()\n exp.expected_arg_count = count\n return self", "response": "Set the last call to expect an exact argument count."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef with_kwarg_count(self, count):\n exp = self._get_current_call()\n exp.expected_kwarg_count = count\n return self", "response": "Set the last call to expect an exact count of keyword arguments."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _setup_tunnel(\n self,\n tunnelParameters):\n \"\"\"\n *setup a ssh tunnel for a database connection to port through*\n\n **Key Arguments:**\n - ``tunnelParameters`` -- the tunnel parameters found associated with the database settings\n\n **Return:**\n - ``sshPort`` -- the port the ssh tunnel is connected via\n\n .. 
todo ::\n\n - update key arguments values and definitions with defaults\n - update return values and definitions\n - update usage examples and text\n - update docstring text\n - check sublime snippet exists\n - clip any useful text to docs mindmap\n - regenerate the docs and check rendering of this docstring\n \"\"\"\n self.log.debug('starting the ``_setup_tunnel`` method')\n\n # TEST TUNNEL DOES NOT ALREADY EXIST\n sshPort = tunnelParameters[\"port\"]\n connected = self._checkServer(\n \"127.0.0.1\", sshPort)\n if connected:\n self.log.debug('ssh tunnel already exists - moving on')\n else:\n # GRAB TUNNEL SETTINGS FROM SETTINGS FILE\n ru = tunnelParameters[\"remote user\"]\n rip = tunnelParameters[\"remote ip\"]\n rh = tunnelParameters[\"remote datbase host\"]\n\n cmd = \"ssh -fnN %(ru)s@%(rip)s -L %(sshPort)s:%(rh)s:3306\" % locals()\n p = Popen(cmd, shell=True, close_fds=True)\n output = p.communicate()[0]\n self.log.debug('output: %(output)s' % locals())\n\n # TEST CONNECTION - QUIT AFTER SO MANY TRIES\n connected = False\n count = 0\n while not connected:\n connected = self._checkServer(\n \"127.0.0.1\", sshPort)\n time.sleep(1)\n count += 1\n if count == 5:\n self.log.error(\n 'could not setup tunnel to remote database')\n sys.exit(0)\n return sshPort", "response": "setup a ssh tunnel for a single database connection"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _checkServer(self, address, port):\n self.log.debug('starting the ``_checkServer`` method')\n\n # CREATE A TCP SOCKET\n import socket\n s = socket.socket()\n self.log.debug(\n \"\"\"Attempting to connect to `%(address)s` on port `%(port)s`\"\"\" % locals())\n try:\n s.connect((address, port))\n self.log.debug(\n \"\"\"Connected to `%(address)s` on port `%(port)s`\"\"\" % locals())\n return True\n except socket.error as e:\n self.log.warning(\n \"\"\"Connection to `%(address)s` on port `%(port)s` failed - try again: 
%(e)s\"\"\" % locals())\n return False\n\n return None", "response": "Checks that the TCP Port we ve decided to use for tunnelling is available."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nvalidate that the username is unique and not listed in defaults. ACCOUNTS_FORBIDDEN_USERNAMES list.", "response": "def clean_username(self):\n \"\"\"\n Validate that the username is unique and not listed \n in ``defaults.ACCOUNTS_FORBIDDEN_USERNAMES`` list.\n \n \"\"\"\n try: \n user = get_user_model().objects.get(username=self.cleaned_data[\"username\"])\n except get_user_model().DoesNotExist: \n pass\n else: \n raise forms.ValidationError(\n self.error_messages['duplicate_username'])\n\n if self.cleaned_data['username'].lower() \\\n in defaults.ACCOUNTS_FORBIDDEN_USERNAMES:\n raise forms.ValidationError(_(u'This username is not allowed.'))\n return self.cleaned_data['username']"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nvalidates that the email address is unique.", "response": "def clean_email(self):\n \"\"\" \n Validate that the email address is unique. \n \n \"\"\"\n if get_user_model().objects.filter(\n Q(email__iexact=self.cleaned_data['email']) |\n Q(email_unconfirmed__iexact=self.cleaned_data['email'])):\n raise forms.ValidationError(_(u'This email address is already '\n 'in use. 
Please supply a different email.'))\n return self.cleaned_data['email']"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef save(self):\n username, email, password = (self.cleaned_data['username'],\n self.cleaned_data['email'],\n self.cleaned_data['password1'])\n\n user = get_user_model().objects.create_user(username, email, password,\n not defaults.ACCOUNTS_ACTIVATION_REQUIRED, \n defaults.ACCOUNTS_ACTIVATION_REQUIRED)\n return user", "response": "Creates a new user and returns the newly created user."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef clean(self):\n identification = self.cleaned_data.get('identification')\n password = self.cleaned_data.get('password')\n\n if identification and password:\n self.user_cache = authenticate(identification=identification, \n password=password)\n if self.user_cache is None:\n raise forms.ValidationError(_(u\"Please enter a correct \"\n \"username or email address and password. \"\n \"Note that both fields are case-sensitive.\"))\n return self.cleaned_data", "response": "Checks for the combination of the identification and password and returns the cleaned_data."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef clean_email(self):\n if self.cleaned_data['email'].lower() == self.user.email:\n raise forms.ValidationError(_(u\"You're already known under \"\n \"this email address.\"))\n if get_user_model().objects.filter(\n email__iexact=self.cleaned_data['email']).exclude(\n email__iexact=self.user.email):\n raise forms.ValidationError(_(u'This email address is already '\n 'in use. 
Please supply a different email address.'))\n return self.cleaned_data['email']", "response": "Validate that the email is not already registered with another user."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nvalidating that the uploaded picture is valid and returns the url of the image.", "response": "def clean_picture(self):\n \"\"\"\n Validates format and file size of uploaded profile picture.\n \n \"\"\"\n if self.cleaned_data.get('picture'):\n picture_data = self.cleaned_data['picture']\n if 'error' in picture_data:\n raise forms.ValidationError(_(u'Upload a valid image. '\n 'The file you uploaded was either not an image '\n 'or a corrupted image.'))\n \n content_type = picture_data.content_type\n if content_type:\n main, sub = content_type.split('/')\n if not (main == 'image' \n and sub in defaults.ACCOUNTS_PICTURE_FORMATS):\n raise forms.ValidationError(_(u'%s only.' % \n defaults.ACCOUNTS_PICTURE_FORMATS))\n \n if picture_data.size > int(defaults.ACCOUNTS_PICTURE_MAX_FILE):\n raise forms.ValidationError(_(u'Image size is too big.'))\n return self.cleaned_data['picture']"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef visit_grouping(self, grouping, asfrom=False, **kwargs):\n return {\n 'type': 'grouping',\n 'grouping': grouping.element._compiler_dispatch(self, **kwargs)\n }", "response": "Visit a grouping element."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nvisits a select and return a JSON object that represents the result.", "response": "def visit_select(self, select, asfrom=False, parens=True,\n fromhints=None,\n compound_index=0,\n nested_join_translation=False,\n select_wraps_for=None,\n **kwargs):\n jsn = {'command': 'advanced/search', 'type': 'select'}\n needs_nested_translation = \\\n select.use_labels and \\\n not nested_join_translation and \\\n not self.stack and \\\n not self.dialect.supports_right_nested_joins\n\n if 
needs_nested_translation:\n transformed_select = self._transform_select_for_nested_joins(\n select)\n text = self.visit_select(\n transformed_select, asfrom=asfrom, parens=parens,\n fromhints=fromhints,\n compound_index=compound_index,\n nested_join_translation=True, **kwargs\n )\n\n toplevel = not self.stack\n entry = self._default_stack_entry if toplevel else self.stack[-1]\n\n populate_result_map = toplevel or \\\n (\n compound_index == 0 and entry.get(\n 'need_result_map_for_compound', False)\n ) or entry.get('need_result_map_for_nested',\n False)\n\n # this was first proposed as part of #3372; however, it is not\n # reached in current tests and could possibly be an assertion\n # instead.\n if not populate_result_map and 'add_to_result_map' in kwargs:\n del kwargs['add_to_result_map']\n\n if needs_nested_translation:\n if populate_result_map:\n self._transform_result_map_for_nested_joins(\n select, transformed_select)\n return jsn\n\n froms = self._setup_select_stack(select, entry, asfrom)\n\n column_clause_args = kwargs.copy()\n column_clause_args.update({\n 'within_label_clause': False,\n 'within_columns_clause': False\n })\n\n if select._hints:\n hint_text, byfrom = self._setup_select_hints(select)\n if hint_text:\n text += hint_text + \" \"\n else:\n byfrom = None\n\n\n \"\"\"\n if select._prefixes:\n text += self._generate_prefixes(\n select, select._prefixes, **kwargs)\n \"\"\"\n if select._distinct:\n jsn['distinct'] = True\n\n # the actual list of columns to print in the SELECT column list.\n inner_columns = [\n c for c in [\n self._label_select_column(\n select,\n column,\n populate_result_map, asfrom,\n column_clause_args,\n name=name)\n for name, column in select._columns_plus_names\n ]\n if c is not None\n ]\n\n if populate_result_map and select_wraps_for is not None:\n # if this select is a compiler-generated wrapper,\n # rewrite the targeted columns in the result map\n wrapped_inner_columns = set(select_wraps_for.inner_columns)\n translate = 
dict(\n (outer, inner.pop()) for outer, inner in [\n (\n outer,\n outer.proxy_set.intersection(wrapped_inner_columns))\n for outer in select.inner_columns\n ] if inner\n )\n self._result_columns = [\n (key, name, tuple(translate.get(o, o) for o in obj), type_)\n for key, name, obj, type_ in self._result_columns\n ]\n\n jsn = self._compose_select_body(\n jsn, select, inner_columns, froms, byfrom, kwargs)\n\n if select._statement_hints:\n per_dialect = [\n ht for (dialect_name, ht)\n in select._statement_hints\n if dialect_name in ('*', self.dialect.name)\n ]\n if per_dialect:\n text += \" \" + self.get_statement_hint_text(per_dialect)\n\n if self.ctes and self._is_toplevel_select(select):\n text = self._render_cte_clause() + text\n\n if select._suffixes:\n text += \" \" + self._generate_prefixes(\n select, select._suffixes, **kwargs)\n\n self.stack.pop(-1)\n\n if asfrom and parens:\n return jsn # \"(\" + text + \")\"\n else:\n return jsn"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nproduce labeled columns present in a select statement.", "response": "def _label_select_column(self, select, column,\n populate_result_map,\n asfrom, column_clause_args,\n name=None,\n within_columns_clause=True):\n \"\"\"produce labeled columns present in a select().\"\"\"\n\n if column.type._has_column_expression and \\\n populate_result_map:\n col_expr = column.type.column_expression(column)\n add_to_result_map = lambda keyname, name, objects, type_: \\\n self._add_to_result_map(\n keyname, name,\n (column,) + objects, type_)\n else:\n col_expr = column\n if populate_result_map:\n add_to_result_map = self._add_to_result_map\n else:\n add_to_result_map = None\n\n if not within_columns_clause:\n result_expr = col_expr\n elif isinstance(column, elements.Label):\n if col_expr is not column:\n result_expr = compiler._CompileLabel(\n col_expr,\n column.name,\n alt_names=(column.element,)\n )\n else:\n result_expr = col_expr\n\n elif select is not None 
and name:\n result_expr = compiler._CompileLabel(\n col_expr,\n name,\n alt_names=(column._key_label,)\n )\n\n elif \\\n asfrom and \\\n isinstance(column,\n elements.ColumnClause) and \\\n not column.is_literal and \\\n column.table is not None and \\\n not isinstance(column.table, selectable.Select):\n result_expr = compiler._CompileLabel(col_expr,\n elements._as_truncated(\n column.name),\n alt_names=(column.key,))\n elif (\n not isinstance(column, elements.TextClause) and\n (\n not isinstance(column,\n elements.UnaryExpression) or\n column.wraps_column_expression\n ) and\n (\n not hasattr(column, 'name') or\n isinstance(column, functions.Function)\n )\n ):\n result_expr = compiler._CompileLabel(col_expr, column.anon_label)\n elif col_expr is not column:\n # TODO: are we sure \"column\" has a .name and .key here ?\n # assert isinstance(column, elements.ColumnClause)\n result_expr = compiler._CompileLabel(col_expr,\n elements._as_truncated(\n column.name),\n alt_names=(column.key,))\n else:\n result_expr = col_expr\n\n column_clause_args.update(\n within_columns_clause=within_columns_clause,\n add_to_result_map=add_to_result_map\n )\n return result_expr._compiler_dispatch(\n self,\n **column_clause_args\n )"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nstops all started tasks on system exit.", "response": "def cleanup():\n \"\"\"Stop all started tasks on system exit.\n\n Note: This only handles signals caught by the atexit module by default.\n SIGKILL, for instance, will not be caught, so cleanup is not guaranteed in\n all cases.\n \"\"\"\n to_stop = STARTED_TASKS.copy()\n if to_stop:\n print(\"Cleaning up...\")\n for task in to_stop:\n try:\n task.stop()\n except: # pylint: disable=W0702\n etype, value, trace = sys.exc_info()\n # Disregard no such process exceptions, print out the rest\n if not (isinstance(value, OSError) and value.errno == 3):\n print(''.join(format_exception(etype, value, trace, None)))\n 
continue"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nstarts a task. This function depends on the underlying implementation of _start, which any subclass of ``Task`` should implement. Args: wait (bool): Whether or not to wait on the task to finish before returning from this function. Default `False`. Raises: RuntimeError: If the task has already been started without a subsequent call to ``reset()``.", "response": "def start(self, wait=False):\n \"\"\"Start a task.\n\n This function depends on the underlying implementation of _start, which\n any subclass of ``Task`` should implement.\n\n Args:\n wait (bool): Whether or not to wait on the task to finish before\n returning from this function. Default `False`.\n\n Raises:\n RuntimeError: If the task has already been started without a\n subsequent call to ``reset()``.\n \"\"\"\n if self._status is not TaskStatus.IDLE:\n raise RuntimeError(\"Cannot start %s in state %s\" %\n (self, self._status))\n self._status = TaskStatus.STARTED\n STARTED_TASKS.add(self)\n self._start()\n\n if wait:\n self.wait()\n\n return self.return_values"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef wait(self):\n if self._status is not TaskStatus.STARTED:\n raise RuntimeError(\"Cannot wait on %s in state %s\" %\n (self, self._status))\n self._wait()\n self.stop()\n return self.return_values", "response": "Wait on a task to finish and stop it when it has finished."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef stop(self):\n if self._status is TaskStatus.STOPPED:\n return\n\n if self._status is not TaskStatus.STARTED:\n raise RuntimeError(\"Cannot stop %s in state %s\" %\n (self, self._status))\n self._stop()\n\n STARTED_TASKS.remove(self)\n self._status = TaskStatus.STOPPED", "response": "Stops a task immediately."} {"SOURCE": "codesearchnet", "instruction": "Create a 
Python 3 function to\nreset the internal state of the task.", "response": "def reset(self):\n \"\"\"Reset a task.\n\n Allows a task to be started again, clears the ``return_values``.\n\n Raises:\n RuntimeError: If the task has not been stopped.\n \"\"\"\n if self._status is not TaskStatus.STOPPED:\n raise RuntimeError(\"Cannot reset %s in state %s\" %\n (self, self._status))\n self._reset()\n self.return_values = {}\n self._status = TaskStatus.IDLE"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _aggregate(self):\n # pylint: disable=W0212\n nonremote = [t for t in self._tasks if not isinstance(t, RemoteTask)]\n remote = [t for t in self._tasks if isinstance(t, RemoteTask)]\n\n host_dict = defaultdict(list)\n for task in remote:\n host_dict[task.host].append(task)\n\n aggregated = []\n for task_group in host_dict.values():\n # Build up combined command\n combined_cmd = []\n for task in task_group:\n if combined_cmd:\n combined_cmd.append('&')\n combined_cmd.append(' '.join(task._remote_command))\n\n # Now, generated aggregate task\n t0 = task_group[0] # pylint: disable=C0103\n task = RemoteTask(\n t0.host, combined_cmd, t0._quiet, t0._return_output,\n t0._kill_remote, t0._identity_file)\n\n aggregated.append(task)\n\n self._tasks = nonremote + aggregated", "response": "Helper method to aggregate RemoteTasks into single ssh session."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nuploading a picture for a user to the ACCOUNTS_PICTURE_PATH and saves it under unique hash for the image.", "response": "def upload_to_picture(instance, filename):\n \"\"\"\n Uploads a picture for a user to the ``ACCOUNTS_PICTURE_PATH`` and \n saving it under unique hash for the image. 
This is for privacy \n reasons so others can't just browse through the picture directory.\n\n \"\"\"\n extension = filename.split('.')[-1].lower()\n salt, hash = generate_sha1(instance.id)\n return '%(path)s/%(hash)s.%(extension)s' % {\n 'path': getattr(defaults, \n 'ACCOUNTS_PICTURE_PATH','%s/%s' % (\n str(instance._meta.app_label), \n str(instance._meta.model_name))),\n 'hash': hash[:10],\n 'extension': extension}"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef activation_key_expired(self):\n expiration_days = datetime.timedelta(\n days=defaults.ACCOUNTS_ACTIVATION_DAYS)\n expiration_date = self.date_joined + expiration_days\n if self.activation_key == defaults.ACCOUNTS_ACTIVATED:\n return True\n if get_datetime_now() >= expiration_date:\n return True\n return False", "response": "Checks if activation key is expired."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef send_activation_email(self):\n context= {'user': self,\n 'protocol': get_protocol(),\n 'activation_days': defaults.ACCOUNTS_ACTIVATION_DAYS,\n 'activation_key': self.activation_key,\n 'site': Site.objects.get_current()}\n\n subject = ''.join(render_to_string(\n 'accounts/emails/activation_email_subject.txt',\n context).splitlines())\n\n message = render_to_string(\n 'accounts/emails/activation_email_message.txt',\n context)\n send_mail(subject,\n message,\n settings.DEFAULT_FROM_EMAIL,\n [self.email,])", "response": "Sends an activation email to the user."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef change_email(self, email):\n self.email_unconfirmed = email\n\n salt, hash = generate_sha1(self.username)\n self.email_confirmation_key = hash\n self.email_confirmation_key_created = get_datetime_now()\n self.save()\n\n # Send email for activation\n self.send_confirmation_email()\n \n return self", "response": 
"Changes the email address for a user."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef send_confirmation_email(self):\n context= {'user': self,\n 'new_email': self.email_unconfirmed,\n 'protocol': get_protocol(),\n 'confirmation_key': self.email_confirmation_key,\n 'site': Site.objects.get_current()}\n\n\n # Email to the old address\n subject_old = ''.join(render_to_string(\n 'accounts/emails/confirmation_email_subject_old.txt',\n context).splitlines())\n \n message_old = render_to_string(\n 'accounts/emails/confirmation_email_message_old.txt',\n context)\n\n send_mail(subject_old,\n message_old,\n settings.DEFAULT_FROM_EMAIL,\n [self.email])\n\n # Email to the new address\n subject_new = ''.join(render_to_string(\n 'accounts/emails/confirmation_email_subject_new.txt',\n context).splitlines())\n\n message_new = render_to_string(\n 'accounts/emails/confirmation_email_message_new.txt',\n context)\n\n send_mail(subject_new,\n message_new,\n settings.DEFAULT_FROM_EMAIL,\n [self.email_unconfirmed,])", "response": "This method sends an email to confirm the new email address."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns the url of the picture for the user.", "response": "def get_picture_url(self):\n \"\"\"\n Returns the image containing the picture for the user.\n\n The picture can be a uploaded image or a Gravatar.\n\n Gravatar functionality will only be used when\n ``ACCOUNTS_GRAVATAR_PICTURE`` is set to ``True``.\n\n :return:\n ``None`` when Gravatar is not used and no default image is \n supplied by ``ACCOUNTS_GRAVATAR_DEFAULT``.\n\n \"\"\"\n # First check for a picture and if any return that.\n if self.picture:\n return self.picture.url\n\n # Use Gravatar if the user wants to.\n if defaults.ACCOUNTS_GRAVATAR_PICTURE:\n return get_gravatar(self.email,\n defaults.ACCOUNTS_GRAVATAR_SIZE,\n defaults.ACCOUNTS_GRAVATAR_DEFAULT)\n\n # Gravatar not used, 
check for a default image.\n else:\n if defaults.ACCOUNTS_GRAVATAR_DEFAULT not in ['404', 'mm',\n 'identicon',\n 'monsterid',\n 'wavatar']:\n return defaults.ACCOUNTS_GRAVATAR_DEFAULT\n else: return None"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_full_name_or_username(self):\n if self.first_name or self.last_name:\n # We will return this as translated string. Maybe there are some\n # countries that first display the last name.\n name = _(u\"%(first_name)s %(last_name)s\") % \\\n {'first_name': self.first_name,\n 'last_name': self.last_name}\n else:\n # Fallback to the username if usernames are used\n if not defaults.ACCOUNTS_WITHOUT_USERNAMES:\n name = \"%(username)s\" % {'username': self.username}\n else:\n name = \"%(email)s\" % {'email': self.email}\n return name.strip()", "response": "Returns the full name of the user or the username if none is supplied will return the username or email address depending on the ACCOUNTS_WITHOUT_USERNAMES settings."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef validate_config(config):\n non_null_params = ['space_id', 'access_token']\n for param in non_null_params:\n if getattr(config, param) is None:\n raise Exception('Configuration for \\\"{0}\\\" must not be empty.'.format(param))\n\n for clazz in config.custom_entries:\n if not issubclass(clazz, Entry):\n raise Exception(\n 'Provided class \\\"{0}\\\" must be a subclass of Entry.'.format(clazz.__name__))\n elif clazz is Entry:\n raise Exception('Cannot register \"Entry\" as a custom entry class.')", "response": "Verify sanity for a : class :. 
Config instance."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef fetch(self, resource_class):\n if issubclass(resource_class, Entry):\n params = None\n content_type = getattr(resource_class, '__content_type__', None)\n if content_type is not None:\n params = {'content_type': resource_class.__content_type__}\n return RequestArray(self.dispatcher, utils.path_for_class(resource_class), self.config.resolve_links,\n params=params)\n\n else:\n remote_path = utils.path_for_class(resource_class)\n if remote_path is None:\n raise Exception('Invalid resource type \\\"{0}\\\".'.format(resource_class))\n\n return RequestArray(self.dispatcher, remote_path, self.config.resolve_links)", "response": "Construct a :class:`.Request` instance for the given resource type."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef resolve(self, link_resource_type, resource_id, array=None):\n result = None\n\n if array is not None:\n container = array.items_mapped.get(link_resource_type)\n result = container.get(resource_id)\n\n if result is None:\n clz = utils.class_for_type(link_resource_type)\n result = self.fetch(clz).where({'sys.id': resource_id}).first()\n\n return result", "response": "Resolve a link to a CDA resource."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef resolve_dict_link(self, dct, array=None):\n sys = dct.get('sys')\n return self.resolve(sys['linkType'], sys['id'], array) if sys is not None else None", "response": "Convenience method for resolving links given a dictionary object."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef invoke(self, request):\n url = '{0}/{1}'.format(self.base_url, request.remote_path)\n r = self.httpclient.get(url, params=request.params, headers=self.get_headers())\n if 200 <= r.status_code < 
300:\n return self.resource_factory.from_json(r.json())\n else:\n if r.status_code in ErrorMapping.mapping:\n raise ErrorMapping.mapping[r.status_code](r)\n else:\n raise ApiError(r)", "response": "Invoke the given request using the associated Dispatcher."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nattempt to retrieve all available resources matching this request.", "response": "def all(self):\n \"\"\"Attempt to retrieve all available resources matching this request.\n\n :return: Result instance as returned by the :class:`.Dispatcher`.\n \"\"\"\n result = self.invoke()\n if self.resolve_links:\n result.resolve_links()\n\n return result"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nattempts to retrieve only the first resource matching this request.", "response": "def first(self):\n \"\"\"Attempt to retrieve only the first resource matching this request.\n\n :return: Result instance, or `None` if there are no matching resources.\n \"\"\"\n self.params['limit'] = 1\n result = self.all()\n return result.items[0] if result.total > 0 else None"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nset a dict of parameters to be passed to the API when invoking this request.", "response": "def where(self, params):\n \"\"\"Set a dict of parameters to be passed to the API when invoking this request.\n\n :param params: (dict) query parameters.\n :return: this :class:`.RequestArray` instance for convenience.\n \"\"\"\n self.params = dict(self.params, **params) # params overrides self.params\n return self"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nhighlights menu item based on url tag.", "response": "def active_url(context, urls, css=None):\n \"\"\"\n Highlight menu item based on url tag.\n \n Returns a css class if ``request.path`` is in given ``url``.\n \n :param url:\n Django url to be reversed.\n\n :param 
css:\n Css class to be returned for highlighting. Return active if none set.\n\n \"\"\"\n request = context['request']\n if request.get_full_path in (reverse(url) for url in urls.split()):\n return css if css else 'active'\n return ''"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef active_path(context, pattern, css=None):\n request = context['request']\n #pattern = \"^\" + pattern + \"$\"\n if re.search(pattern, request.path):\n return css if css else 'active'\n return ''", "response": "Highlight menu item based on request.path."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get(self, item_name):\n\t\tif self.prefix:\n\t\t\titem_name = self.prefix + self.seperator + item_name\n\t\titem_names = item_name.split(self.seperator)\n\t\tnode = self._storage\n\t\tfor item_name in item_names:\n\t\t\tnode = node[item_name]\n\t\treturn node", "response": "Retrieve the value of an option in the configuration."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nretrieve the value of an option if it exists, otherwise return default_value instead of raising an error.", "response": "def get_if_exists(self, item_name, default_value=None):\n\t\t\"\"\"\n\t\tRetrieve the value of an option if it exists, otherwise\n\t\treturn *default_value* instead of raising an error:\n\n\t\t:param str item_name: The name of the option to retrieve.\n\t\t:param default_value: The value to return if *item_name* does not exist.\n\t\t:return: The value of *item_name* in the configuration.\n\t\t\"\"\"\n\t\tif self.has_option(item_name):\n\t\t\treturn self.get(item_name)\n\t\treturn default_value"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncheck that an option exists in the configuration.", "response": "def has_option(self, option_name):\n\t\t\"\"\"\n\t\tCheck that an option exists.\n\n\t\t:param str option_name: The name of the 
option to check.\n\t\t:return: True if the option exists in the configuration.\n\t\t:rtype: bool\n\t\t\"\"\"\n\t\tif self.prefix:\n\t\t\toption_name = self.prefix + self.seperator + option_name\n\t\titem_names = option_name.split(self.seperator)\n\t\tnode = self._storage\n\t\tfor item_name in item_names:\n\t\t\tif node is None:\n\t\t\t\treturn False\n\t\t\tif not item_name in node:\n\t\t\t\treturn False\n\t\t\tnode = node[item_name]\n\t\treturn True"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef has_section(self, section_name):\n\t\tif not self.has_option(section_name):\n\t\t\treturn False\n\t\treturn isinstance(self.get(section_name), dict)", "response": "Checks that an option exists and that it contains sub-options."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nsetting the value of an option in the configuration.", "response": "def set(self, item_name, item_value):\n\t\t\"\"\"\n\t\tSets the value of an option in the configuration.\n\n\t\t:param str item_name: The name of the option to set.\n\t\t:param item_value: The value of the option to set.\n\t\t\"\"\"\n\t\tif self.prefix:\n\t\t\titem_name = self.prefix + self.seperator + item_name\n\t\titem_names = item_name.split(self.seperator)\n\t\titem_last = item_names.pop()\n\t\tnode = self._storage\n\t\tfor item_name in item_names:\n\t\t\tif not item_name in node:\n\t\t\t\tnode[item_name] = {}\n\t\t\tnode = node[item_name]\n\t\tnode[item_last] = item_value\n\t\treturn"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_missing(self, verify_file):\n\t\tvconf = Configuration(verify_file)\n\t\tmissing = {}\n\t\tfor setting, setting_type in vconf.get('settings').items():\n\t\t\tif not self.has_option(setting):\n\t\t\t\tmissing['missing'] = missing.get('settings', [])\n\t\t\t\tmissing['missing'].append(setting)\n\t\t\telif not type(self.get(setting)).__name__ == 
setting_type:\n\t\t\t\tmissing['incompatible'] = missing.get('incompatible', [])\n\t\t\t\tmissing['incompatible'].append((setting, setting_type))\n\t\treturn missing", "response": "This method returns a dictionary of missing and incompatible settings."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nsaves the current configuration to disk.", "response": "def save(self):\n\t\t\"\"\"\n\t\tSave the current configuration to disk.\n\t\t\"\"\"\n\t\twith open(self.configuration_file, 'w') as file_h:\n\t\t\tfile_h.write(self._serializer('dumps', self._storage))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nusing regexer to perform replacements on lines of text", "response": "def replace_lines(regexer, handler, lines):\n \"\"\"Uses replacement handler to perform replacements on lines of text\n\n First we strip off all whitespace\n We run the replacement on a clean 'content' string\n Finally we replace the original content with the replaced version\n This ensures that we retain the correct whitespace from the original line\n \"\"\"\n result = []\n for line in lines:\n content = line.strip()\n replaced = regexer.sub(handler, content)\n result.append(line.replace(content, replaced, 1))\n return result"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nwrite version info into version file", "response": "def write_targets(targets, **params):\n \"\"\"Writes version info into version file\"\"\"\n handler = ReplacementHandler(**params)\n for target, regexer in regexer_for_targets(targets):\n with open(target) as fh:\n lines = fh.readlines()\n lines = replace_lines(regexer, handler, lines)\n with open(target, \"w\") as fh:\n fh.writelines(lines)\n if handler.missing:\n raise Exception(\n \"Failed to complete all expected replacements: %r\" % handler.missing\n )"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef 
regexer_for_targets(targets):\n for target in targets:\n path, file_ext = os.path.splitext(target)\n regexer = config.regexers[file_ext]\n yield target, regexer", "response": "Yields each target file paired with its matching regexer"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef extract_keypairs(lines, regexer):\n updates = {}\n for line in lines:\n # for consistency we must match the replacer and strip whitespace / newlines\n match = regexer.match(line.strip())\n if not match:\n continue\n k_v = match.groupdict()\n updates[k_v[Constants.KEY_GROUP]] = k_v[Constants.VALUE_GROUP]\n return updates", "response": "Given some lines of text, extract key-value pairs from them"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreads generic key-value pairs from input files", "response": "def read_targets(targets):\n \"\"\"Reads generic key-value pairs from input files\"\"\"\n results = {}\n for target, regexer in regexer_for_targets(targets):\n with open(target) as fh:\n results.update(extract_keypairs(fh.readlines(), regexer))\n return results"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ndetect which files will trigger a version bump", "response": "def detect_file_triggers(trigger_patterns):\n \"\"\"The existence of files matching configured globs will trigger a version bump\"\"\"\n triggers = set()\n for trigger, pattern in trigger_patterns.items():\n matches = glob.glob(pattern)\n if matches:\n _LOG.debug(\"trigger: %s bump from %r\\n\\t%s\", trigger, pattern, matches)\n triggers.add(trigger)\n else:\n _LOG.debug(\"trigger: no match on %r\", pattern)\n return triggers"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_all_triggers(bump, file_triggers):\n triggers = set()\n if file_triggers:\n triggers = triggers.union(detect_file_triggers(config.trigger_patterns))\n if 
bump:\n _LOG.debug(\"trigger: %s bump requested\", bump)\n triggers.add(bump)\n return triggers", "response": "Get all significant figures to bump"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngets the lock behaviour for the current version of the current page.", "response": "def get_lock_behaviour(triggers, all_data, lock):\n \"\"\"Binary state lock protects from version increments if set\"\"\"\n updates = {}\n lock_key = config._forward_aliases.get(Constants.VERSION_LOCK_FIELD)\n # if we are explicitly setting or locking the version, then set the lock field True anyway\n if lock:\n updates[Constants.VERSION_LOCK_FIELD] = config.VERSION_LOCK_VALUE\n elif (\n triggers\n and lock_key\n and str(all_data.get(lock_key)) == str(config.VERSION_LOCK_VALUE)\n ):\n triggers.clear()\n updates[Constants.VERSION_LOCK_FIELD] = config.VERSION_UNLOCK_VALUE\n return updates"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngenerates the final version string for the current version string.", "response": "def get_final_version_string(release_mode, semver, commit_count=0):\n \"\"\"Generates update dictionary entries for the version string\"\"\"\n version_string = \".\".join(semver)\n maybe_dev_version_string = version_string\n updates = {}\n if release_mode:\n # in production, we have something like `1.2.3`, as well as a flag e.g. PRODUCTION=True\n updates[Constants.RELEASE_FIELD] = config.RELEASED_VALUE\n else:\n # in dev mode, we have a dev marker e.g. 
`1.2.3.dev678`\n maybe_dev_version_string = config.DEVMODE_TEMPLATE.format(\n version=version_string, count=commit_count\n )\n\n # make available all components of the semantic version including the full string\n updates[Constants.VERSION_FIELD] = maybe_dev_version_string\n updates[Constants.VERSION_STRICT_FIELD] = version_string\n return updates"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_dvcs_info():\n cmd = \"git rev-list --count HEAD\"\n commit_count = str(\n int(subprocess.check_output(shlex.split(cmd)).decode(\"utf8\").strip())\n )\n cmd = \"git rev-parse HEAD\"\n commit = str(subprocess.check_output(shlex.split(cmd)).decode(\"utf8\").strip())\n return {Constants.COMMIT_FIELD: commit, Constants.COMMIT_COUNT_FIELD: commit_count}", "response": "Gets current repository info from git"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nlogging an analysis. CONFIG: MIP config file for an analysis", "response": "def log_cmd(context, sampleinfo, sacct, quiet, config):\n \"\"\"Log an analysis.\n\n CONFIG: MIP config file for an analysis\n \"\"\"\n log_analysis = LogAnalysis(context.obj['store'])\n try:\n new_run = log_analysis(config, sampleinfo=sampleinfo, sacct=sacct)\n except MissingFileError as error:\n click.echo(click.style(f\"Skipping, missing Sacct file: {error.message}\", fg='yellow'))\n return\n except KeyError as error:\n print(click.style(f\"unexpected output, missing key: {error.args[0]}\", fg='yellow'))\n return\n if new_run is None:\n if not quiet:\n click.echo(click.style('Analysis already logged', fg='yellow'))\n else:\n message = f\"New log added: {new_run.family} ({new_run.id}) - {new_run.status}\"\n click.echo(click.style(message, fg='green'))"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef start(context, mip_config, email, priority, dryrun, command, start_with, family):\n mip_cli = MipCli(context.obj['script'])\n 
mip_config = mip_config or context.obj['mip_config']\n email = email or environ_email()\n kwargs = dict(config=mip_config, family=family, priority=priority, email=email, dryrun=dryrun, start_with=start_with)\n if command:\n mip_command = mip_cli.build_command(**kwargs)\n click.echo(' '.join(mip_command))\n else:\n try:\n mip_cli(**kwargs)\n if not dryrun:\n context.obj['store'].add_pending(family, email=email)\n except MipStartError as error:\n click.echo(click.style(error.message, fg='red'))", "response": "Start a new analysis."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nscan a directory for analyses.", "response": "def scan(context, root_dir):\n \"\"\"Scan a directory for analyses.\"\"\"\n root_dir = root_dir or context.obj['root']\n config_files = Path(root_dir).glob('*/analysis/*_config.yaml')\n for config_file in config_files:\n LOG.debug(\"found analysis config: %s\", config_file)\n with config_file.open() as stream:\n context.invoke(log_cmd, config=stream, quiet=True)\n\n context.obj['store'].track_update()"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef user(context, name, email):\n existing_user = context.obj['store'].user(email)\n if existing_user:\n click.echo(existing_user.to_dict())\n elif name:\n new_user = context.obj['store'].add_user(name, email)\n click.echo(click.style(f\"New user added: {email} ({new_user.id})\", fg='green'))\n else:\n click.echo(click.style('User not found', fg='yellow'))", "response": "Add a new user or display information about an existing user."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ncanceling all jobs in a run.", "response": "def cancel(context, jobs, analysis_id):\n \"\"\"Cancel all jobs in a run.\"\"\"\n analysis_obj = context.obj['store'].analysis(analysis_id)\n if analysis_obj is None:\n click.echo('analysis not found')\n context.abort()\n elif analysis_obj.status != 'running':\n 
click.echo(f\"analysis not running: {analysis_obj.status}\")\n context.abort()\n\n config_path = Path(analysis_obj.config_path)\n with config_path.open() as config_stream:\n config_raw = ruamel.yaml.safe_load(config_stream)\n config_data = parse_config(config_raw)\n\n log_path = Path(f\"{config_data['log_path']}\")\n if not log_path.exists():\n click.echo(f\"missing MIP log file: {log_path}\")\n context.abort()\n\n with log_path.open() as log_stream:\n all_jobs = job_ids(log_stream)\n\n if jobs:\n for job_id in all_jobs:\n click.echo(job_id)\n else:\n for job_id in all_jobs:\n LOG.debug(f\"cancelling job: {job_id}\")\n process = subprocess.Popen(['scancel', job_id])\n process.wait()\n\n analysis_obj.status = 'canceled'\n context.obj['store'].commit()\n click.echo('cancelled analysis successfully!')"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_pub_addresses(names=None, timeout=10, nameserver=\"localhost\"):\n addrs = []\n if names is None:\n names = [\"\", ]\n for name in names:\n then = datetime.now() + timedelta(seconds=timeout)\n while datetime.now() < then:\n addrs += get_pub_address(name, nameserver=nameserver)\n if addrs:\n break\n time.sleep(.5)\n return addrs", "response": "Get the addresses of a given list of names."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_pub_address(name, timeout=10, nameserver=\"localhost\"):\n\n # Socket to talk to server\n socket = get_context().socket(REQ)\n try:\n socket.setsockopt(LINGER, timeout * 1000)\n socket.connect(\"tcp://\" + nameserver + \":\" + str(PORT))\n logger.debug('Connecting to %s',\n \"tcp://\" + nameserver + \":\" + str(PORT))\n poller = Poller()\n poller.register(socket, POLLIN)\n\n message = Message(\"/oper/ns\", \"request\", {\"service\": name})\n socket.send_string(six.text_type(message))\n\n # Get the reply.\n sock = poller.poll(timeout=timeout * 1000)\n if sock:\n if sock[0][0] == 
socket:\n message = Message.decode(socket.recv_string(NOBLOCK))\n return message.data\n else:\n raise TimeoutError(\"Didn't get an address after %d seconds.\"\n % timeout)\n finally:\n socket.close()", "response": "Get the address of a given publisher name from the nameserver."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_active_address(name, arec):\n addrs = arec.get(name)\n if addrs:\n return Message(\"/oper/ns\", \"info\", addrs)\n else:\n return Message(\"/oper/ns\", \"info\", \"\")", "response": "Get the addresses of the active modules for a given publisher name."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nrunning the listener and answer to requests.", "response": "def run(self, *args):\n \"\"\"Run the listener and answer to requests.\n \"\"\"\n del args\n\n arec = AddressReceiver(max_age=self._max_age,\n multicast_enabled=self._multicast_enabled)\n arec.start()\n port = PORT\n\n try:\n with nslock:\n self.listener = get_context().socket(REP)\n self.listener.bind(\"tcp://*:\" + str(port))\n logger.debug('Listening on port %s', str(port))\n poller = Poller()\n poller.register(self.listener, POLLIN)\n while self.loop:\n with nslock:\n socks = dict(poller.poll(1000))\n if socks:\n if socks.get(self.listener) == POLLIN:\n msg = self.listener.recv_string()\n else:\n continue\n logger.debug(\"Replying to request: \" + str(msg))\n msg = Message.decode(msg)\n self.listener.send_unicode(six.text_type(get_active_address(\n msg.data[\"service\"], arec)))\n except KeyboardInterrupt:\n # Needed to stop the nameserver.\n pass\n finally:\n arec.stop()\n self.stop()"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef stop(self):\n self.listener.setsockopt(LINGER, 1)\n self.loop = False\n with nslock:\n self.listener.close()", "response": "Stop the name server."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what 
the following Python 3 function does\ndef check_requirements(requirements, ignore=None):\n\tignore = (ignore or [])\n\tnot_satisfied = []\n\tworking_set = pkg_resources.working_set\n\tinstalled_packages = dict((p.project_name, p) for p in working_set) # pylint: disable=E1133\n\n\tif isinstance(requirements, str):\n\t\twith open(requirements, 'r') as file_h:\n\t\t\trequirements = file_h.readlines()\n\telif hasattr(requirements, 'readlines'):\n\t\trequirements = requirements.readlines()\n\telif not isinstance(requirements, (list, tuple)):\n\t\traise TypeError('invalid type for argument requirements')\n\n\tfor req_line in requirements:\n\t\treq_line = req_line.strip()\n\t\tparts = re.match(r'^([\\w\\-]+)(([<>=]=)(\\d+(\\.\\d+)*))?$', req_line)\n\t\tif not parts:\n\t\t\traise ValueError(\"requirement '{0}' is in an invalid format\".format(req_line))\n\t\treq_pkg = parts.group(1)\n\t\tif req_pkg in ignore:\n\t\t\tcontinue\n\t\tif req_pkg not in installed_packages:\n\t\t\ttry:\n\t\t\t\tfind_result = working_set.find(pkg_resources.Requirement.parse(req_line))\n\t\t\texcept pkg_resources.ResolutionError:\n\t\t\t\tfind_result = False\n\t\t\tif not find_result:\n\t\t\t\tnot_satisfied.append(req_pkg)\n\t\t\tcontinue\n\t\tif not parts.group(2):\n\t\t\tcontinue\n\t\treq_version = distutils.version.StrictVersion(parts.group(4))\n\t\tinstalled_pkg = installed_packages[req_pkg]\n\t\tinstalled_version = re.match(r'^((\\d+\\.)*\\d+)', installed_pkg.version)\n\t\tif not installed_version:\n\t\t\tnot_satisfied.append(req_pkg)\n\t\t\tcontinue\n\t\tinstalled_version = distutils.version.StrictVersion(installed_version.group(0))\n\t\tif parts.group(3) == '==' and installed_version != req_version:\n\t\t\tnot_satisfied.append(req_pkg)\n\t\telif parts.group(3) == '>=' and installed_version < req_version:\n\t\t\tnot_satisfied.append(req_pkg)\n\t\telif parts.group(3) == '<=' and installed_version > req_version:\n\t\t\tnot_satisfied.append(req_pkg)\n\treturn not_satisfied", "response": "Parse 
the requirements file for package information to determine if all requirements are met."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngiven a list of 4-element tuples, transforms it into a numpy array", "response": "def segments_to_numpy(segments):\n \"\"\"given a list of 4-element tuples, transforms it into a numpy array\"\"\"\n segments = numpy.array(segments, dtype=SEGMENT_DATATYPE, ndmin=2) # each segment in a row\n segments = segments if SEGMENTS_DIRECTION == 0 else numpy.transpose(segments)\n return segments"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ngiven a rectangle and an image, returns its corresponding subimage", "response": "def region_from_segment(image, segment):\n \"\"\"given a segment (rectangle) and an image, returns its corresponding subimage\"\"\"\n x, y, w, h = segment\n return image[y:y + h, x:x + w]"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef ingest(self):\n self.log.debug('starting the ``get`` method')\n\n # A YAML DICTIONARY OF sherlock-catalogues TABLE NAME AND THE SELECT\n # QUERY TO LIFT THE DATA FROM THE MARSHALL\n yamlFilePath = '/'.join(string.split(__file__, '/')\n [:-1]) + \"/resources/pessto_marshall_table_selects.yaml\"\n stream = file(yamlFilePath, 'r')\n marshallQueries = yaml.load(stream)\n stream.close()\n self.primaryIdColumnName = \"primaryId\"\n self.raColName = \"raDeg\"\n self.declColName = \"decDeg\"\n\n # CREATE THE MARSHALL IMPORT CATALOGUE TABLES (IF THEY DO NOT EXIST)\n directory_script_runner(\n log=self.log,\n pathToScriptDirectory='/'.join(string.split(__file__,\n '/')[:-1]) + \"/resources\",\n databaseName=self.settings[\"database settings\"][\n \"static catalogues\"][\"db\"],\n loginPath=self.settings[\"database settings\"][\n \"static catalogues\"][\"loginPath\"],\n successRule=\"delete\",\n failureRule=\"failed\"\n )\n\n for k, v in marshallQueries[\"pessto 
queries\"].iteritems():\n self.dbTableName = k\n self.databaseInsertbatchSize = 500\n dictList = self._create_dictionary_of_marshall(\n marshallQuery=v[\"query\"],\n marshallTable=v[\"marshallTable\"]\n )\n\n tableName = self.dbTableName\n self.add_data_to_database_table(\n dictList=dictList\n )\n\n self.log.debug('completed the ``get`` method')\n return None", "response": "Ingest the ePESSTO Marshall transient stream into the sherlock - catalogues database"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _create_dictionary_of_marshall(\n self,\n marshallQuery,\n marshallTable):\n \"\"\"create a list of dictionaries containing all the rows in the marshall stream\n\n **Key Arguments:**\n - ``marshallQuery`` -- the query used to lift the required data from the marshall database.\n - ``marshallTable`` -- the name of the marshall table we are lifting the data from.\n\n **Return:**\n - ``dictList`` - a list of dictionaries containing all the rows in the marshall stream\n \"\"\"\n self.log.debug(\n 'starting the ``_create_dictionary_of_marshall`` method')\n\n dictList = []\n tableName = self.dbTableName\n\n rows = readquery(\n log=self.log,\n sqlQuery=marshallQuery,\n dbConn=self.pmDbConn,\n quiet=False\n )\n\n totalCount = len(rows)\n count = 0\n\n for row in rows:\n if \"dateCreated\" in row:\n del row[\"dateCreated\"]\n count += 1\n if count > 1:\n # Cursor up one line and clear line\n sys.stdout.write(\"\\x1b[1A\\x1b[2K\")\n print \"%(count)s / %(totalCount)s `%(tableName)s` data added to memory\" % locals()\n dictList.append(dict(row))\n\n self.log.debug(\n 'completed the ``_create_dictionary_of_marshall`` method')\n return dictList", "response": "create a list of dictionaries containing all the rows in the marshall stream"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nimports the ned_d catalogue into the catalogues database and creates the HTMIDs for that 
table.", "response": "def ingest(self):\n \"\"\"Import the ned_d catalogue into the catalogues database\n\n The method first generates a list of python dictionaries from the ned_d datafile, imports this list of dictionaries into a database table and then generates the HTMIDs for that table. \n\n **Usage:**\n\n See class docstring for usage\n\n .. todo ::\n\n\n - update docstring text\n - check sublime snippet exists\n - clip any useful text to docs mindmap\n - regenerate the docs and check redendering of this docstring\n \"\"\"\n self.log.debug('starting the ``get`` method')\n\n dictList = self._create_dictionary_of_ned_d()\n self.primaryIdColumnName = \"primaryId\"\n self.raColName = \"raDeg\"\n self.declColName = \"decDeg\"\n\n tableName = self.dbTableName\n createStatement = u\"\"\"\n CREATE TABLE `%(tableName)s` (\n `primaryId` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'An internal counter',\n `Method` varchar(150) DEFAULT NULL,\n `dateCreated` datetime DEFAULT CURRENT_TIMESTAMP,\n `dateLastModified` datetime DEFAULT CURRENT_TIMESTAMP,\n `updated` varchar(45) DEFAULT '0',\n `dist_derived_from_sn` varchar(150) DEFAULT NULL,\n `dist_in_ned_flag` varchar(10) DEFAULT NULL,\n `dist_index_id` mediumint(9) DEFAULT NULL,\n `dist_mod` double DEFAULT NULL,\n `dist_mod_err` double DEFAULT NULL,\n `dist_mpc` double DEFAULT NULL,\n `galaxy_index_id` mediumint(9) DEFAULT NULL,\n `hubble_const` double DEFAULT NULL,\n `lmc_mod` double DEFAULT NULL,\n `notes` varchar(500) DEFAULT NULL,\n `primary_ned_id` varchar(150) DEFAULT NULL,\n `redshift` double DEFAULT NULL,\n `ref` varchar(150) DEFAULT NULL,\n `ref_date` int(11) DEFAULT NULL,\n `master_row` tinyint(4) DEFAULT '0',\n `major_diameter_arcmin` double DEFAULT NULL,\n `ned_notes` varchar(700) DEFAULT NULL,\n `object_type` varchar(100) DEFAULT NULL,\n `redshift_err` double DEFAULT NULL,\n `redshift_quality` varchar(100) DEFAULT NULL,\n `magnitude_filter` varchar(10) DEFAULT NULL,\n `minor_diameter_arcmin` double DEFAULT 
NULL,\n `morphology` varchar(50) DEFAULT NULL,\n `hierarchy` varchar(50) DEFAULT NULL,\n `galaxy_morphology` varchar(50) DEFAULT NULL,\n `radio_morphology` varchar(50) DEFAULT NULL,\n `activity_type` varchar(50) DEFAULT NULL,\n `in_ned` tinyint(4) DEFAULT NULL,\n `raDeg` double DEFAULT NULL,\n `decDeg` double DEFAULT NULL,\n `eb_v` double DEFAULT NULL,\n `sdss_coverage` TINYINT DEFAULT NULL,\n PRIMARY KEY (`primaryId`),\n UNIQUE KEY `galaxy_index_id_dist_index_id` (`galaxy_index_id`,`dist_index_id`)\n ) ENGINE=MyISAM AUTO_INCREMENT=0 DEFAULT CHARSET=latin1;\n DROP VIEW IF EXISTS `view_%(tableName)s_master_recorders`;\n CREATE\n VIEW `view_%(tableName)s_master_recorders` AS\n (SELECT \n `%(tableName)s`.`primary_ned_id` AS `primary_ned_id`,\n `%(tableName)s`.`object_type` AS `object_type`,\n `%(tableName)s`.`raDeg` AS `raDeg`,\n `%(tableName)s`.`decDeg` AS `decDeg`,\n `%(tableName)s`.`dist_mpc` AS `dist_mpc`,\n `%(tableName)s`.`dist_mod` AS `dist_mod`,\n `%(tableName)s`.`dist_mod_err` AS `dist_mod_err`,\n `%(tableName)s`.`Method` AS `dist_measurement_method`,\n `%(tableName)s`.`redshift` AS `redshift`,\n `%(tableName)s`.`redshift_err` AS `redshift_err`,\n `%(tableName)s`.`redshift_quality` AS `redshift_quality`,\n `%(tableName)s`.`major_diameter_arcmin` AS `major_diameter_arcmin`,\n `%(tableName)s`.`minor_diameter_arcmin` AS `minor_diameter_arcmin`,\n `%(tableName)s`.`magnitude_filter` AS `magnitude_filter`,\n `%(tableName)s`.`eb_v` AS `gal_eb_v`,\n `%(tableName)s`.`hierarchy` AS `hierarchy`,\n `%(tableName)s`.`morphology` AS `morphology`,\n `%(tableName)s`.`radio_morphology` AS `radio_morphology`,\n `%(tableName)s`.`activity_type` AS `activity_type`,\n `%(tableName)s`.`ned_notes` AS `ned_notes`,\n `%(tableName)s`.`in_ned` AS `in_ned`,\n `%(tableName)s`.`primaryId` AS `primaryId`\n FROM\n `%(tableName)s`\n WHERE\n (`%(tableName)s`.`master_row` = 1));\n \"\"\" % locals()\n\n self.add_data_to_database_table(\n dictList=dictList,\n createStatement=createStatement\n 
)\n\n self._clean_up_columns()\n self._get_metadata_for_galaxies()\n self._update_sdss_coverage()\n\n self.log.debug('completed the ``get`` method')\n return None"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ncreate a list of dictionaries containing all the rows in the ned_d catalogue", "response": "def _create_dictionary_of_ned_d(\n self):\n \"\"\"create a list of dictionaries containing all the rows in the ned_d catalogue\n\n **Return:**\n - ``dictList`` - a list of dictionaries containing all the rows in the ned_d catalogue\n\n .. todo ::\n\n - update key arguments values and definitions with defaults\n - update return values and definitions\n - update usage examples and text\n - update docstring text\n - check sublime snippet exists\n - clip any useful text to docs mindmap\n - regenerate the docs and check redendering of this docstring\n \"\"\"\n self.log.debug(\n 'starting the ``_create_dictionary_of_ned_d`` method')\n\n count = 0\n with open(self.pathToDataFile, 'rb') as csvFile:\n csvReader = csv.reader(\n csvFile, dialect='excel', delimiter=',', quotechar='\"')\n totalRows = sum(1 for row in csvReader)\n csvFile.close()\n totalCount = totalRows\n\n with open(self.pathToDataFile, 'rb') as csvFile:\n csvReader = csv.reader(\n csvFile, dialect='excel', delimiter=',', quotechar='\"')\n theseKeys = []\n dictList = []\n for row in csvReader:\n if len(theseKeys) == 0:\n totalRows -= 1\n if \"Exclusion Code\" in row and \"Hubble const.\" in row:\n for i in row:\n if i == \"redshift (z)\":\n theseKeys.append(\"redshift\")\n elif i == \"Hubble const.\":\n theseKeys.append(\"hubble_const\")\n elif i == \"G\":\n theseKeys.append(\"galaxy_index_id\")\n elif i == \"err\":\n theseKeys.append(\"dist_mod_err\")\n elif i == \"D (Mpc)\":\n theseKeys.append(\"dist_mpc\")\n elif i == \"Date (Yr. 
- 1980)\":\n theseKeys.append(\"ref_date\")\n elif i == \"REFCODE\":\n theseKeys.append(\"ref\")\n elif i == \"Exclusion Code\":\n theseKeys.append(\"dist_in_ned_flag\")\n elif i == \"Adopted LMC modulus\":\n theseKeys.append(\"lmc_mod\")\n elif i == \"m-M\":\n theseKeys.append(\"dist_mod\")\n elif i == \"Notes\":\n theseKeys.append(\"notes\")\n elif i == \"SN ID\":\n theseKeys.append(\"dist_derived_from_sn\")\n elif i == \"method\":\n theseKeys.append(\"dist_method\")\n elif i == \"Galaxy ID\":\n theseKeys.append(\"primary_ned_id\")\n elif i == \"D\":\n theseKeys.append(\"dist_index_id\")\n else:\n theseKeys.append(i)\n continue\n\n if len(theseKeys):\n count += 1\n if count > 1:\n # Cursor up one line and clear line\n sys.stdout.write(\"\\x1b[1A\\x1b[2K\")\n if count > totalCount:\n count = totalCount\n percent = (float(count) / float(totalCount)) * 100.\n print \"%(count)s / %(totalCount)s (%(percent)1.1f%%) rows added to memory\" % locals()\n rowDict = {}\n for t, r in zip(theseKeys, row):\n rowDict[t] = r\n if t == \"ref_date\":\n try:\n rowDict[t] = int(r) + 1980\n except:\n rowDict[t] = None\n\n if rowDict[\"dist_index_id\"] != \"999999\":\n dictList.append(rowDict)\n\n csvFile.close()\n\n self.log.debug(\n 'completed the ``_create_dictionary_of_ned_d`` method')\n return dictList"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ncleaning up the columns of the NED table", "response": "def _clean_up_columns(\n self):\n \"\"\"clean up columns of the NED table\n\n .. 
todo ::\n\n - update key arguments values and definitions with defaults\n - update return values and definitions\n - update usage examples and text\n - update docstring text\n - check sublime snippet exists\n - clip any useful text to docs mindmap\n - regenerate the docs and check redendering of this docstring\n \"\"\"\n self.log.debug('starting the ``_clean_up_columns`` method')\n\n tableName = self.dbTableName\n\n print \"cleaning up %(tableName)s columns\" % locals()\n\n sqlQuery = u\"\"\"\n set sql_mode=\"STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION\";\n \"\"\" % locals()\n writequery(\n log=self.log,\n sqlQuery=sqlQuery,\n dbConn=self.cataloguesDbConn,\n )\n\n sqlQuery = u\"\"\"\n update %(tableName)s set dist_mod_err = null where dist_mod_err = 0;\n update %(tableName)s set dist_in_ned_flag = null where dist_in_ned_flag = \"\";\n update %(tableName)s set notes = null where notes = \"\";\n update %(tableName)s set redshift = null where redshift = 0;\n update %(tableName)s set dist_derived_from_sn = null where dist_derived_from_sn = \"\";\n update %(tableName)s set hubble_const = null where hubble_const = 0;\n update %(tableName)s set lmc_mod = null where lmc_mod = 0;\n update %(tableName)s set master_row = 0;\n update %(tableName)s set master_row = 1 where primaryId in (select * from (select distinct primaryId from %(tableName)s group by galaxy_index_id) as alias);\n \"\"\" % locals()\n writequery(\n log=self.log,\n sqlQuery=sqlQuery,\n dbConn=self.cataloguesDbConn,\n )\n\n self.log.debug('completed the ``_clean_up_columns`` method')\n return None"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngetting metadata for a set of galaxies", "response": "def _get_metadata_for_galaxies(\n self):\n \"\"\"get metadata for galaxies\n\n .. 
todo ::\n\n - update key arguments values and definitions with defaults\n - update return values and definitions\n - update usage examples and text\n - update docstring text\n - check sublime snippet exists\n - clip any useful text to docs mindmap\n - regenerate the docs and check redendering of this docstring\n \"\"\"\n self.log.debug('starting the ``_get_metadata_for_galaxies`` method')\n\n total, batches = self._count_galaxies_requiring_metadata()\n print \"%(total)s galaxies require metadata. Need to send %(batches)s batch requests to NED.\" % locals()\n\n totalBatches = self.batches\n thisCount = 0\n\n # FOR EACH BATCH, GET THE GALAXY IDs, QUERY NED AND UPDATE THE DATABASE\n while self.total:\n thisCount += 1\n self._get_3000_galaxies_needing_metadata()\n dictList = self._query_ned_and_add_results_to_database(thisCount)\n\n self.add_data_to_database_table(\n dictList=dictList,\n createStatement=False\n )\n\n self._count_galaxies_requiring_metadata()\n\n self.log.debug('completed the ``_get_metadata_for_galaxies`` method')\n return None"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nget 3000 galaxies needing metadata", "response": "def _get_3000_galaxies_needing_metadata(\n self):\n \"\"\" get 3000 galaxies needing metadata\n\n **Return:**\n - ``len(self.theseIds)`` -- the number of NED IDs returned\n\n .. 
todo ::\n\n - update key arguments values and definitions with defaults\n - update return values and definitions\n - update usage examples and text\n - update docstring text\n - check sublime snippet exists\n - clip any useful text to docs mindmap\n - regenerate the docs and check redendering of this docstring\n \"\"\"\n self.log.debug(\n 'starting the ``_get_3000_galaxies_needing_metadata`` method')\n\n tableName = self.dbTableName\n\n # SELECT THE DATA FROM NED TABLE\n self.theseIds = {}\n sqlQuery = u\"\"\"\n select primaryId, primary_ned_id from %(tableName)s where master_row = 1 and in_ned is null limit 3000;\n \"\"\" % locals()\n rows = readquery(\n log=self.log,\n sqlQuery=sqlQuery,\n dbConn=self.cataloguesDbConn,\n quiet=False\n )\n for row in rows:\n self.theseIds[row[\"primary_ned_id\"]] = row[\"primaryId\"]\n\n self.log.debug(\n 'completed the ``_get_3000_galaxies_needing_metadata`` method')\n\n return len(self.theseIds)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _query_ned_and_add_results_to_database(\n self,\n batchCount):\n \"\"\" query ned and add results to database\n\n **Key Arguments:**\n - ``batchCount`` - the index number of the batch sent to NED\n\n .. 
todo ::\n\n - update key arguments values and definitions with defaults\n - update return values and definitions\n - update usage examples and text\n - update docstring text\n - check sublime snippet exists\n - clip any useful text to docs mindmap\n - regenerate the docs and check redendering of this docstring\n \"\"\"\n self.log.debug(\n 'starting the ``_query_ned_and_add_results_to_database`` method')\n\n tableName = self.dbTableName\n # ASTROCALC UNIT CONVERTER OBJECT\n converter = unit_conversion(\n log=self.log\n )\n\n # QUERY NED WITH BATCH\n totalCount = len(self.theseIds)\n print \"requesting metadata from NED for %(totalCount)s galaxies (batch %(batchCount)s)\" % locals()\n search = namesearch(\n log=self.log,\n names=self.theseIds.keys(),\n quiet=True\n )\n results = search.get()\n print \"results returned from ned -- starting to add to database\" % locals()\n\n # CLEAN THE RETURNED DATA AND UPDATE DATABASE\n totalCount = len(results)\n count = 0\n sqlQuery = \"\"\n dictList = []\n\n colList = [\"redshift_quality\", \"redshift\", \"hierarchy\", \"object_type\", \"major_diameter_arcmin\", \"morphology\", \"magnitude_filter\",\n \"ned_notes\", \"eb_v\", \"raDeg\", \"radio_morphology\", \"activity_type\", \"minor_diameter_arcmin\", \"decDeg\", \"redshift_err\", \"in_ned\"]\n\n if not len(results):\n for k, v in self.theseIds.iteritems():\n dictList.append({\n \"in_ned\": 0,\n \"primaryID\": v\n })\n for thisDict in results:\n\n thisDict[\"tableName\"] = tableName\n count += 1\n for k, v in thisDict.iteritems():\n if not v or len(v) == 0:\n thisDict[k] = \"null\"\n if k in [\"major_diameter_arcmin\", \"minor_diameter_arcmin\"] and (\":\" in v or \"?\" in v or \"<\" in v):\n thisDict[k] = v.replace(\":\", \"\").replace(\n \"?\", \"\").replace(\"<\", \"\")\n if isinstance(v, str) and '\"' in v:\n thisDict[k] = v.replace('\"', '\\\\\"')\n if \"Input name not\" not in thisDict[\"input_note\"] and \"Same object as\" not in thisDict[\"input_note\"]:\n if 
thisDict[\"ra\"] != \"null\" and thisDict[\"dec\"] != \"null\":\n thisDict[\"raDeg\"] = converter.ra_sexegesimal_to_decimal(\n ra=thisDict[\"ra\"]\n )\n thisDict[\"decDeg\"] = converter.dec_sexegesimal_to_decimal(\n dec=thisDict[\"dec\"]\n )\n else:\n thisDict[\"raDeg\"] = None\n thisDict[\"decDeg\"] = None\n thisDict[\"in_ned\"] = 1\n thisDict[\"eb_v\"] = thisDict[\"eb-v\"]\n\n row = {}\n row[\"primary_ned_id\"] = thisDict[\"input_name\"]\n\n try:\n row[\"primaryID\"] = self.theseIds[thisDict[\"input_name\"]]\n for c in colList:\n if thisDict[c] == \"null\":\n row[c] = None\n else:\n row[c] = thisDict[c]\n dictList.append(row)\n except:\n g = thisDict[\"input_name\"]\n self.log.error(\n \"Cannot find database table %(tableName)s primaryID for '%(g)s'\\n\\n\" % locals())\n dictList.append({\n \"in_ned\": 0,\n \"primary_ned_id\": thisDict[\"input_name\"]\n })\n\n else:\n dictList.append({\n \"primary_ned_id\": thisDict[\"input_name\"],\n \"in_ned\": 0,\n \"primaryID\": self.theseIds[thisDict[\"input_name\"]]\n })\n\n self.log.debug(\n 'completed the ``_query_ned_and_add_results_to_database`` method')\n return dictList", "response": "query ned and add results to database"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nupdate SDSS coverage for the current key - item", "response": "def _update_sdss_coverage(\n self):\n \"\"\" update sdss coverage\n\n .. 
todo ::\n\n - update key arguments values and definitions with defaults\n - update return values and definitions\n - update usage examples and text\n - update docstring text\n - check sublime snippet exists\n - clip any useful text to docs mindmap\n - regenerate the docs and check redendering of this docstring\n \"\"\"\n self.log.debug('starting the ``_update_sdss_coverage`` method')\n\n tableName = self.dbTableName\n\n # SELECT THE LOCATIONS NEEDING TO BE CHECKED\n sqlQuery = u\"\"\"\n select primary_ned_id, primaryID, raDeg, decDeg, sdss_coverage from %(tableName)s where sdss_coverage is null and master_row = 1 and in_ned = 1 order by dist_mpc;\n \"\"\" % locals()\n rows = readquery(\n log=self.log,\n sqlQuery=sqlQuery,\n dbConn=self.cataloguesDbConn,\n quiet=False\n )\n\n totalCount = len(rows)\n count = 0\n for row in rows:\n count += 1\n if count > 1:\n # Cursor up three lines and clear\n sys.stdout.write(\"\\x1b[1A\\x1b[2K\")\n sys.stdout.write(\"\\x1b[1A\\x1b[2K\")\n sys.stdout.write(\"\\x1b[1A\\x1b[2K\")\n\n if count > totalCount:\n count = totalCount\n percent = (float(count) / float(totalCount)) * 100.\n\n primaryID = row[\"primaryID\"]\n raDeg = float(row[\"raDeg\"])\n decDeg = float(row[\"decDeg\"])\n primary_ned_id = row[\"primary_ned_id\"]\n\n # SDSS CAN ONLY ACCEPT 60 QUERIES/MIN\n time.sleep(1.1)\n print \"%(count)s / %(totalCount)s (%(percent)1.1f%%) NED galaxies checked for SDSS coverage\" % locals()\n print \"NED NAME: \", primary_ned_id\n\n # covered = True | False | 999 (i.e. 
not sure)\n sdss_coverage = check_coverage(\n log=self.log,\n ra=raDeg,\n dec=decDeg\n ).get()\n\n if sdss_coverage == 999:\n sdss_coverage_flag = \"null\"\n elif sdss_coverage == True:\n sdss_coverage_flag = 1\n elif sdss_coverage == False:\n sdss_coverage_flag = 0\n else:\n self.log.error('cound not get sdss coverage' % locals())\n sys.exit(0)\n\n # UPDATE THE DATABASE FLAG\n sqlQuery = u\"\"\"\n update %(tableName)s set sdss_coverage = %(sdss_coverage_flag)s where primaryID = %(primaryID)s\n \"\"\" % locals()\n writequery(\n log=self.log,\n sqlQuery=sqlQuery,\n dbConn=self.cataloguesDbConn,\n )\n\n self.log.debug('completed the ``_update_sdss_coverage`` method')\n return None"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef run(self, command, **kwargs):\n return RemoteTask(self.hostname, command,\n identity_file=self._identity_file, **kwargs)", "response": "Run a command on the remote host."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef send_file(self, file_name, remote_destination=None, **kwargs):\n if not remote_destination:\n remote_destination = file_name\n\n return SubprocessTask(\n self._rsync_cmd() +\n ['-ut', file_name, '%s:%s' % (self.hostname, remote_destination)],\n **kwargs)", "response": "Sends a file to a remote host with rsync."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_file(self, file_name, local_destination=None, **kwargs):\n if not local_destination:\n local_destination = file_name\n\n return SubprocessTask(\n self._rsync_cmd() +\n ['-ut', '%s:%s' % (self.hostname, file_name), local_destination],\n **kwargs)", "response": "Get a file from a remote host with rsync."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef make_config(self, data: dict):\n self.validate_config(data)\n config_data = self.prepare_config(data)\n 
return config_data", "response": "Make a MIP config."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef validate_config(data: dict) -> dict:\n errors = ConfigSchema().validate(data)\n if errors:\n for field, messages in errors.items():\n if isinstance(messages, dict):\n for level, sample_errors in messages.items():\n sample_id = data['samples'][level]['sample_id']\n for sub_field, sub_messages in sample_errors.items():\n LOG.error(f\"{sample_id} -> {sub_field}: {', '.join(sub_messages)}\")\n else:\n LOG.error(f\"{field}: {', '.join(messages)}\")\n raise ConfigError('invalid config input', errors=errors)", "response": "Convert to MIP config format."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\npreparing the config data.", "response": "def prepare_config(data: dict) -> dict:\n \"\"\"Prepare the config data.\"\"\"\n data_copy = deepcopy(data)\n # handle single sample cases with 'unknown' phenotype\n if len(data_copy['samples']) == 1:\n if data_copy['samples'][0]['phenotype'] == 'unknown':\n LOG.info(\"setting 'unknown' phenotype to 'unaffected'\")\n data_copy['samples'][0]['phenotype'] = 'unaffected'\n # set the mother/father to '0' if they are not set for a sample\n for sample_data in data_copy['samples']:\n sample_data['mother'] = sample_data.get('mother') or '0'\n sample_data['father'] = sample_data.get('father') or '0'\n if sample_data['analysis_type'] == 'wgs' and sample_data.get('capture_kit') is None:\n sample_data['capture_kit'] = DEFAULT_CAPTURE_KIT\n return data_copy"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef save_config(self, data: dict) -> Path:\n out_dir = Path(self.families_dir) / data['family']\n out_dir.mkdir(parents=True, exist_ok=True)\n out_path = out_dir / 'pedigree.yaml'\n dump = ruamel.yaml.round_trip_dump(data, indent=4, block_seq_indent=2)\n out_path.write_text(dump)\n 
return out_path", "response": "Save a config to the expected location."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _broadcast(src_processor, src_atr_name, dest_processors, dest_atr_name, transform_function):\n value = getattr(src_processor, src_atr_name)\n value = transform_function(value)\n for d in dest_processors:\n setattr(d, dest_atr_name, value)", "response": "This function is used exclusively by create_broadcast."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncreate a function intended to be called as a Processor posthook that copies some of the processor s attributes to other processors", "response": "def create_broadcast(src_atr_name, dest_processors, dest_atr_name=None, transform_function=lambda x: x):\n \"\"\"\n This method creates a function, intended to be called as a\n Processor posthook, that copies some of the processor's attributes\n to other processors\n \"\"\"\n from functools import partial\n if dest_atr_name == None:\n dest_atr_name = src_atr_name\n if not hasattr(dest_processors, \"__iter__\"): # a single processor was given instead\n dest_processors = [dest_processors]\n return partial(_broadcast, src_atr_name=src_atr_name, dest_processors=dest_processors, dest_atr_name=dest_atr_name,\n transform_function=transform_function)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_parameters(self):\n parameter_names = self.PARAMETERS.keys()\n # TODO: Unresolved reference for processor\n parameter_values = [getattr(processor, n) for n in parameter_names]\n return dict(zip(parameter_names, parameter_values))", "response": "returns a dictionary with the processor s stored parameters"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef set_parameters(self, **args):\n for k, v in self.PARAMETERS.items():\n new_value = args.get(k)\n if new_value 
!= None:\n if not _same_type(new_value, v):\n raise Exception(\n \"On processor {0}, argument {1} takes something like {2}, but {3} was given\".format(self, k, v,\n new_value))\n setattr(self, k, new_value)\n not_used = set(args.keys()).difference(set(self.PARAMETERS.keys()))\n not_given = set(self.PARAMETERS.keys()).difference(set(args.keys()))\n return not_used, not_given", "response": "sets the processor stored parameters"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_parameters(self):\n d = {}\n for p in self.processors:\n parameter_names = list(p.PARAMETERS.keys())\n parameter_values = [getattr(p, n) for n in parameter_names]\n d.update(dict(zip(parameter_names, parameter_values)))\n return d", "response": "gets from all wrapped processors"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nsetting to all wrapped processors", "response": "def set_parameters(self, **args):\n \"\"\"sets to all wrapped processors\"\"\"\n not_used = set()\n not_given = set()\n for p in self.processors:\n nu, ng = p.set_parameters(**args)\n not_used = not_used.union(nu)\n not_given = not_given.union(ng)\n return not_used, not_given"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef render(self, filename):\n self.elapsed_time = -time()\n dpi = 100\n fig = figure(figsize=(16, 9), dpi=dpi)\n with self.writer.saving(fig, filename, dpi):\n for frame_id in xrange(self.frames + 1):\n self.renderFrame(frame_id)\n self.writer.grab_frame()\n self.elapsed_time += time()", "response": "Render the content of the current page."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nprint in standard output report about animation rendering.", "response": "def report(self):\n \"\"\"Prints in standard output report about animation rendering. 
Namely, it prints seconds spent, number of\n frames and step size that is used in functional animation.\n \"\"\"\n message = 'Elapsed in {0} seconds with {1} frames and {2} step.'\n print(message.format(self.elapsed_time, self.frames, self.step))"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nadd a new address to the subscribing list for topics.", "response": "def add(self, address, topics=None):\n \"\"\"Add *address* to the subscribing list for *topics*.\n\n If topics is None we will subscribe to already specified topics.\n \"\"\"\n with self._lock:\n if address in self.addresses:\n return False\n\n topics = self._magickfy_topics(topics) or self._topics\n LOGGER.info(\"Subscriber adding address %s with topics %s\",\n str(address), str(topics))\n subscriber = get_context().socket(SUB)\n for t__ in topics:\n subscriber.setsockopt_string(SUBSCRIBE, six.text_type(t__))\n subscriber.connect(address)\n self.sub_addr[subscriber] = address\n self.addr_sub[address] = subscriber\n if self.poller:\n self.poller.register(subscriber, POLLIN)\n return True"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef remove(self, address):\n with self._lock:\n try:\n subscriber = self.addr_sub[address]\n except KeyError:\n return False\n LOGGER.info(\"Subscriber removing address %s\", str(address))\n if self.poller:\n self.poller.unregister(subscriber)\n del self.addr_sub[address]\n del self.sub_addr[subscriber]\n subscriber.close()\n return True", "response": "Remove *address* from the subscribing list."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nupdating with a set of addresses.", "response": "def update(self, addresses):\n \"\"\"Updating with a set of addresses.\n \"\"\"\n if isinstance(addresses, six.string_types):\n addresses = [addresses, ]\n s0_, s1_ = set(self.addresses), set(addresses)\n sr_, sa_ = s0_.difference(s1_), s1_.difference(s0_)\n for a__ in 
sr_:\n self.remove(a__)\n for a__ in sa_:\n self.add(a__)\n return bool(sr_ or sa_)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef add_hook_sub(self, address, topics, callback):\n LOGGER.info(\"Subscriber adding SUB hook %s for topics %s\",\n str(address), str(topics))\n socket = get_context().socket(SUB)\n for t__ in self._magickfy_topics(topics):\n socket.setsockopt_string(SUBSCRIBE, six.text_type(t__))\n socket.connect(address)\n self._add_hook(socket, callback)", "response": "Add a SUB socket to receive the specified topics."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef add_hook_pull(self, address, callback):\n LOGGER.info(\"Subscriber adding PULL hook %s\", str(address))\n socket = get_context().socket(PULL)\n socket.connect(address)\n self._add_hook(socket, callback)", "response": "Add a PULL socket connected to *address* and register *callback* for its incoming messages."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _add_hook(self, socket, callback):\n self._hooks.append(socket)\n self._hooks_cb[socket] = callback\n if self.poller:\n self.poller.register(socket, POLLIN)", "response": "Generic hook. 
The passed socket has to be \"receive only\"."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef recv(self, timeout=None):\n if timeout:\n timeout *= 1000.\n\n for sub in list(self.subscribers) + self._hooks:\n self.poller.register(sub, POLLIN)\n self._loop = True\n try:\n while self._loop:\n sleep(0)\n try:\n socks = dict(self.poller.poll(timeout=timeout))\n if socks:\n for sub in self.subscribers:\n if sub in socks and socks[sub] == POLLIN:\n m__ = Message.decode(sub.recv_string(NOBLOCK))\n if not self._filter or self._filter(m__):\n if self._translate:\n url = urlsplit(self.sub_addr[sub])\n host = url[1].split(\":\")[0]\n m__.sender = (m__.sender.split(\"@\")[0]\n + \"@\" + host)\n yield m__\n\n for sub in self._hooks:\n if sub in socks and socks[sub] == POLLIN:\n m__ = Message.decode(sub.recv_string(NOBLOCK))\n self._hooks_cb[sub](m__)\n else:\n # timeout\n yield None\n except ZMQError as err:\n LOGGER.exception(\"Receive failed: %s\", str(err))\n finally:\n for sub in list(self.subscribers) + self._hooks:\n self.poller.unregister(sub)", "response": "Receive a message from all the subscribers."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncloses the subscriber and close the local subscribers.", "response": "def close(self):\n \"\"\"Close the subscriber: stop it and close the local subscribers.\n \"\"\"\n self.stop()\n for sub in list(self.subscribers) + self._hooks:\n try:\n sub.setsockopt(LINGER, 1)\n sub.close()\n except ZMQError:\n pass"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nadding the magick to the topics if missing.", "response": "def _magickfy_topics(topics):\n \"\"\"Add the magick to the topics if missing.\n \"\"\"\n # If topic does not start with messages._MAGICK (pytroll:/), it will be\n # prepended.\n if topics is None:\n return None\n if isinstance(topics, six.string_types):\n topics = [topics, ]\n ts_ = 
[]\n for t__ in topics:\n if not t__.startswith(_MAGICK):\n if t__ and t__[0] == '/':\n t__ = _MAGICK + t__\n else:\n t__ = _MAGICK + '/' + t__\n ts_.append(t__)\n return ts_"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef handle_msg(self, msg):\n addr_ = msg.data[\"URI\"]\n status = msg.data.get('status', True)\n if status:\n service = msg.data.get('service')\n for service in self.services:\n if not service or service in service:\n LOGGER.debug(\"Adding address %s %s\", str(addr_),\n str(service))\n self.subscriber.add(addr_)\n break\n else:\n LOGGER.debug(\"Removing address %s\", str(addr_))\n self.subscriber.remove(addr_)", "response": "handle the message *msg*."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nclassify the transients in the database", "response": "def classify(self):\n \"\"\"\n *classify the transients selected from the transient selection query in the settings file or passed in via the CL or other code*\n\n **Return:**\n - ``crossmatches`` -- list of dictionaries of crossmatched associated sources\n - ``classifications`` -- the classifications assigned to the transients post-crossmatches (dictionary of rank ordered list of classifications)\n\n See class docstring for usage.\n\n .. 
todo ::\n\n - update key arguments values and definitions with defaults\n - update return values and definitions\n - update usage examples and text\n - update docstring text\n - check sublime snippet exists\n - clip any useful text to docs mindmap\n - regenerate the docs and check redendering of this docstring\n \"\"\"\n\n global theseBatches\n global crossmatchArray\n\n self.log.debug('starting the ``classify`` method')\n\n remaining = 1\n\n # THE COLUMN MAPS - WHICH COLUMNS IN THE CATALOGUE TABLES = RA, DEC,\n # REDSHIFT, MAG ETC\n colMaps = get_crossmatch_catalogues_column_map(\n log=self.log,\n dbConn=self.cataloguesDbConn\n )\n\n self._create_tables_if_not_exist()\n\n import time\n start_time = time.time()\n\n # COUNT SEARCHES\n sa = self.settings[\"search algorithm\"]\n searchCount = 0\n brightnessFilters = [\"bright\", \"faint\", \"general\"]\n for search_name, searchPara in sa.iteritems():\n for bf in brightnessFilters:\n if bf in searchPara:\n searchCount += 1\n\n cpuCount = psutil.cpu_count()\n if searchCount > cpuCount:\n searchCount = cpuCount\n\n largeBatchSize = 5000\n miniBatchSize = 100\n self.largeBatchSize = largeBatchSize\n\n # print \"mini batch size \", str(miniBatchSize)\n\n while remaining:\n\n # IF A TRANSIENT HAS NOT BEEN PASSED IN VIA THE COMMAND-LINE, THEN\n # QUERY THE TRANSIENT DATABASE\n if not self.ra and not self.dec:\n\n # COUNT REMAINING TRANSIENTS\n from fundamentals.mysql import readquery\n sqlQuery = self.settings[\"database settings\"][\n \"transients\"][\"transient count\"]\n thisInt = randint(0, 100)\n if \"where\" in sqlQuery:\n sqlQuery = sqlQuery.replace(\n \"where\", \"where %(thisInt)s=%(thisInt)s and \" % locals())\n\n if remaining == 1 or remaining < largeBatchSize:\n rows = readquery(\n log=self.log,\n sqlQuery=sqlQuery,\n dbConn=self.transientsDbConn,\n )\n remaining = rows[0][\"count(*)\"]\n else:\n remaining = remaining - largeBatchSize\n\n print \"%(remaining)s transient sources requiring a classification remain\" 
% locals()\n\n # START THE TIME TO TRACK CLASSIFICATION SPPED\n start_time = time.time()\n\n # A LIST OF DICTIONARIES OF TRANSIENT METADATA\n transientsMetadataList = self._get_transient_metadata_from_database_list()\n\n count = len(transientsMetadataList)\n print \" now classifying the next %(count)s transient sources\" % locals()\n\n # EXAMPLE OF TRANSIENT METADATA\n # { 'name': 'PS17gx',\n # 'alt_id': 'PS17gx',\n # 'object_classification': 'SN',\n # 'dec': '+43:25:44.1',\n # 'id': 1,\n # 'ra': '08:57:57.19'}\n # TRANSIENT PASSED VIA COMMAND-LINE\n else:\n if not self.name:\n name = \"transient\"\n else:\n name = self.name\n transient = {\n 'name': name,\n 'object_classification': None,\n 'dec': self.dec,\n 'id': name,\n 'ra': self.ra\n }\n transientsMetadataList = [transient]\n remaining = 0\n\n if self.oneRun:\n remaining = 0\n\n if len(transientsMetadataList) == 0:\n if self.daemonMode == False:\n remaining = 0\n print \"No transients need classified\"\n return None, None\n else:\n print \"No remaining transients need classified, will try again in 5 mins\"\n time.sleep(\"10\")\n\n # FROM THE LOCATIONS OF THE TRANSIENTS, CHECK IF OUR LOCAL NED DATABASE\n # NEEDS UPDATED\n if self.updateNed:\n\n self._update_ned_stream(\n transientsMetadataList=transientsMetadataList\n )\n\n # SOME TESTING SHOWED THAT 25 IS GOOD\n total = len(transientsMetadataList)\n batches = int((float(total) / float(miniBatchSize)) + 1.)\n\n if batches == 0:\n batches = 1\n\n start = 0\n end = 0\n theseBatches = []\n for i in range(batches):\n end = end + miniBatchSize\n start = i * miniBatchSize\n thisBatch = transientsMetadataList[start:end]\n theseBatches.append(thisBatch)\n\n print \"BATCH SIZE = %(total)s\" % locals()\n print \"MINI BATCH SIZE = %(batches)s x %(miniBatchSize)s\" % locals()\n\n # DEFINE AN INPUT ARRAY\n # cores = psutil.cpu_count()\n # if cores > 8:\n # cores = 8\n\n start_time2 = time.time()\n\n print \"START CROSSMATCH\"\n\n crossmatchArray = 
fmultiprocess(log=self.log, function=self._crossmatch_transients_against_catalogues,\n inputArray=range(len(theseBatches)), poolSize=None, colMaps=colMaps)\n\n print \"FINISH CROSSMATCH/START RANKING: %d\" % (time.time() - start_time2,)\n start_time2 = time.time()\n\n classifications = {}\n crossmatches = []\n\n for sublist in crossmatchArray:\n sublist = sorted(\n sublist, key=itemgetter('transient_object_id'))\n\n # REORGANISE INTO INDIVIDUAL TRANSIENTS FOR RANKING AND\n # TOP-LEVEL CLASSIFICATION EXTRACTION\n\n batch = []\n if len(sublist) != 0:\n transientId = sublist[0]['transient_object_id']\n for s in sublist:\n if s['transient_object_id'] != transientId:\n # RANK TRANSIENT CROSSMATCH BATCH\n cl, cr = self._rank_classifications(\n batch, colMaps)\n crossmatches.extend(cr)\n classifications = dict(\n classifications.items() + cl.items())\n\n transientId = s['transient_object_id']\n batch = [s]\n else:\n batch.append(s)\n\n # RANK FINAL BATCH\n cl, cr = self._rank_classifications(\n batch, colMaps)\n classifications = dict(\n classifications.items() + cl.items())\n crossmatches.extend(cr)\n\n for t in transientsMetadataList:\n if t[\"id\"] not in classifications:\n classifications[t[\"id\"]] = [\"ORPHAN\"]\n\n if self.cl:\n self._print_results_to_stdout(\n classifications=classifications,\n crossmatches=crossmatches\n )\n\n # UPDATE THE TRANSIENT DATABASE IF UPDATE REQUESTED (ADD DATA TO\n # tcs_crossmatch_table AND A CLASSIFICATION TO THE ORIGINAL TRANSIENT\n # TABLE)\n print \"FINISH RANKING/START UPDATING TRANSIENT DB: %d\" % (time.time() - start_time2,)\n start_time2 = time.time()\n if self.update and not self.ra:\n self._update_transient_database(\n crossmatches=crossmatches,\n classifications=classifications,\n transientsMetadataList=transientsMetadataList,\n colMaps=colMaps\n )\n\n print \"FINISH UPDATING TRANSIENT DB/START ANNOTATING TRANSIENT DB: %d\" % (time.time() - start_time2,)\n start_time2 = time.time()\n\n if self.ra:\n return classifications, 
crossmatches\n\n if self.updatePeakMags and self.settings[\"database settings\"][\"transients\"][\"transient peak magnitude query\"]:\n self.update_peak_magnitudes()\n self.update_classification_annotations_and_summaries(\n self.updatePeakMags)\n\n print \"FINISH ANNOTATING TRANSIENT DB: %d\" % (time.time() - start_time2,)\n start_time2 = time.time()\n\n classificationRate = count / (time.time() - start_time)\n print \"Sherlock is classifying at a rate of %(classificationRate)2.1f transients/sec\" % locals()\n\n self.log.debug('completed the ``classify`` method')\n return None, None"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _get_transient_metadata_from_database_list(\n self):\n \"\"\"use the transient query in the settings file to generate a list of transients to crossmatch and classify\n\n **Return:**\n - ``transientsMetadataList``\n\n .. todo ::\n\n - update key arguments values and definitions with defaults\n - update return values and definitions\n - update usage examples and text\n - update docstring text\n - check sublime snippet exists\n - clip any useful text to docs mindmap\n - regenerate the docs and check rendering of this docstring\n \"\"\"\n self.log.debug(\n 'starting the ``_get_transient_metadata_from_database_list`` method')\n\n sqlQuery = self.settings[\"database settings\"][\n \"transients\"][\"transient query\"] + \" limit \" + str(self.largeBatchSize)\n\n thisInt = randint(0, 100)\n if \"where\" in sqlQuery:\n sqlQuery = sqlQuery.replace(\n \"where\", \"where %(thisInt)s=%(thisInt)s and \" % locals())\n\n transientsMetadataList = readquery(\n log=self.log,\n sqlQuery=sqlQuery,\n dbConn=self.transientsDbConn,\n quiet=False\n )\n\n self.log.debug(\n 'completed the ``_get_transient_metadata_from_database_list`` method')\n return transientsMetadataList", "response": "use the transient query to generate a list of transients to crossmatch and classify\n "} {"SOURCE": 
"codesearchnet", "instruction": "Implement a Python 3 function for\nupdating the NED stream for the given transient metadata", "response": "def _update_ned_stream(\n self,\n transientsMetadataList\n ):\n \"\"\" update the NED stream within the catalogues database at the locations of the transients\n\n **Key Arguments:**\n - ``transientsMetadataList`` -- the list of transient metadata lifted from the database.\n\n .. todo ::\n\n - update key arguments values and definitions with defaults\n - update return values and definitions\n - update usage examples and text\n - update docstring text\n - check sublime snippet exists\n - clip any useful text to docs mindmap\n - regenerate the docs and check redendering of this docstring\n \"\"\"\n self.log.debug('starting the ``_update_ned_stream`` method')\n\n coordinateList = []\n for i in transientsMetadataList:\n # thisList = str(i[\"ra\"]) + \" \" + str(i[\"dec\"])\n thisList = (i[\"ra\"], i[\"dec\"])\n coordinateList.append(thisList)\n\n coordinateList = self._remove_previous_ned_queries(\n coordinateList=coordinateList\n )\n\n # MINIMISE COORDINATES IN LIST TO REDUCE NUMBER OF REQUIRE NED QUERIES\n coordinateList = self._consolidate_coordinateList(\n coordinateList=coordinateList\n )\n\n stream = ned(\n log=self.log,\n settings=self.settings,\n coordinateList=coordinateList,\n radiusArcsec=self.settings[\"ned stream search radius arcec\"]\n )\n stream.ingest()\n\n sqlQuery = \"\"\"SET session sql_mode = \"\";\"\"\" % locals(\n )\n writequery(\n log=self.log,\n sqlQuery=sqlQuery,\n dbConn=self.cataloguesDbConn\n )\n\n sqlQuery = \"\"\"update tcs_cat_ned_stream set magnitude = CAST(`magnitude_filter` AS DECIMAL(5,2)) where magnitude is null;\"\"\" % locals(\n )\n writequery(\n log=self.log,\n sqlQuery=sqlQuery,\n dbConn=self.cataloguesDbConn\n )\n\n self.log.debug('completed the ``_update_ned_stream`` method')\n return None"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef 
_remove_previous_ned_queries(\n self,\n coordinateList):\n \"\"\"iterate through the transient locations to see if we have recent local NED coverage of that area already\n\n **Key Arguments:**\n - ``coordinateList`` -- set of coordinate to check for previous queries\n\n **Return:**\n - ``updatedCoordinateList`` -- coordinate list with previous queries removed\n\n .. todo ::\n\n - update key arguments values and definitions with defaults\n - update return values and definitions\n - update usage examples and text\n - update docstring text\n - check sublime snippet exists\n - clip any useful text to docs mindmap\n - regenerate the docs and check redendering of this docstring\n \"\"\"\n self.log.debug('starting the ``_remove_previous_ned_queries`` method')\n\n # 1 DEGREE QUERY RADIUS\n radius = 60. * 60.\n updatedCoordinateList = []\n keepers = []\n\n # CALCULATE THE OLDEST RESULTS LIMIT\n now = datetime.now()\n td = timedelta(\n days=self.settings[\"ned stream refresh rate in days\"])\n refreshLimit = now - td\n refreshLimit = refreshLimit.strftime(\"%Y-%m-%d %H:%M:%S\")\n\n raList = []\n raList[:] = [c[0] for c in coordinateList]\n decList = []\n decList[:] = [c[1] for c in coordinateList]\n\n # MATCH COORDINATES AGAINST PREVIOUS NED SEARCHES\n cs = conesearch(\n log=self.log,\n dbConn=self.cataloguesDbConn,\n tableName=\"tcs_helper_ned_query_history\",\n columns=\"*\",\n ra=raList,\n dec=decList,\n radiusArcsec=radius,\n separations=True,\n distinct=True,\n sqlWhere=\"dateQueried > '%(refreshLimit)s'\" % locals(),\n closest=False\n )\n matchIndies, matches = cs.search()\n\n # DETERMINE WHICH COORDINATES REQUIRE A NED QUERY\n curatedMatchIndices = []\n curatedMatches = []\n for i, m in zip(matchIndies, matches.list):\n match = False\n row = m\n row[\"separationArcsec\"] = row[\"cmSepArcsec\"]\n raStream = row[\"raDeg\"]\n decStream = row[\"decDeg\"]\n radiusStream = row[\"arcsecRadius\"]\n dateStream = row[\"dateQueried\"]\n angularSeparation = 
row[\"separationArcsec\"]\n\n if angularSeparation + self.settings[\"first pass ned search radius arcec\"] < radiusStream:\n curatedMatchIndices.append(i)\n curatedMatches.append(m)\n\n # NON MATCHES\n for i, v in enumerate(coordinateList):\n if i not in curatedMatchIndices:\n updatedCoordinateList.append(v)\n\n self.log.debug('completed the ``_remove_previous_ned_queries`` method')\n return updatedCoordinateList", "response": "remove previous NED queries for the given transient locations"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nrun the transients through the crossmatch algorithm in the settings file and return the list of dictionaries of the associated sources that are crossmatched from the database", "response": "def _crossmatch_transients_against_catalogues(\n self,\n transientsMetadataListIndex,\n colMaps):\n \"\"\"run the transients through the crossmatch algorithm in the settings file\n\n **Key Arguments:**\n - ``transientsMetadataListIndex`` -- the list of transient metadata lifted from the database.\n - ``colMaps`` -- dictionary of dictionaries with the name of the database-view (e.g. `tcs_view_agn_milliquas_v4_5`) as the key and the column-name dictary map as value (`{view_name: {columnMap}}`).\n\n\n **Return:**\n - ``crossmatches`` -- a list of dictionaries of the associated sources crossmatched from the catalogues database\n\n .. 
todo ::\n\n - update key arguments values and definitions with defaults\n - update return values and definitions\n - update usage examples and text\n - update docstring text\n - check sublime snippet exists\n - clip any useful text to docs mindmap\n - regenerate the docs and check rendering of this docstring\n \"\"\"\n\n global theseBatches\n\n self.log.debug(\n 'starting the ``_crossmatch_transients_against_catalogues`` method')\n\n # SETUP ALL DATABASE CONNECTIONS\n\n transientsMetadataList = theseBatches[transientsMetadataListIndex]\n\n dbConn = database(\n log=self.log,\n dbSettings=self.settings[\"database settings\"][\"static catalogues\"]\n ).connect()\n\n self.allClassifications = []\n\n cm = transient_catalogue_crossmatch(\n log=self.log,\n dbConn=dbConn,\n transients=transientsMetadataList,\n settings=self.settings,\n colMaps=colMaps\n )\n crossmatches = cm.match()\n\n self.log.debug(\n 'completed the ``_crossmatch_transients_against_catalogues`` method')\n\n return crossmatches"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _update_transient_database(\n self,\n crossmatches,\n classifications,\n transientsMetadataList,\n colMaps):\n \"\"\" update transient database with classifications and crossmatch results\n\n **Key Arguments:**\n - ``crossmatches`` -- the crossmatches and associations resulting from the catalogue crossmatches\n - ``classifications`` -- the classifications assigned to the transients post-crossmatches (dictionary of rank ordered list of classifications)\n - ``transientsMetadataList`` -- the list of transient metadata lifted from the database.\n - ``colMaps`` -- maps of the important column names for each table/view in the crossmatch-catalogues database\n\n .. 
todo ::\n\n - update key arguments values and definitions with defaults\n - update return values and definitions\n - update usage examples and text\n - update docstring text\n - check sublime snippet exists\n - clip any useful text to docs mindmap\n - regenerate the docs and check redendering of this docstring\n \"\"\"\n\n self.log.debug('starting the ``_update_transient_database`` method')\n\n import time\n start_time = time.time()\n print \"UPDATING TRANSIENTS DATABASE WITH RESULTS\"\n print \"DELETING OLD RESULTS\"\n\n now = datetime.now()\n now = now.strftime(\"%Y-%m-%d_%H-%M-%S-%f\")\n\n transientTable = self.settings[\"database settings\"][\n \"transients\"][\"transient table\"]\n transientTableClassCol = self.settings[\"database settings\"][\n \"transients\"][\"transient classification column\"]\n transientTableIdCol = self.settings[\"database settings\"][\n \"transients\"][\"transient primary id column\"]\n\n # COMBINE ALL CROSSMATCHES INTO A LIST OF DICTIONARIES TO DUMP INTO\n # DATABASE TABLE\n transientIDs = [str(c)\n for c in classifications.keys()]\n transientIDs = \",\".join(transientIDs)\n\n # REMOVE PREVIOUS MATCHES\n sqlQuery = \"\"\"delete from sherlock_crossmatches where transient_object_id in (%(transientIDs)s);\"\"\" % locals(\n )\n writequery(\n log=self.log,\n sqlQuery=sqlQuery,\n dbConn=self.transientsDbConn,\n )\n sqlQuery = \"\"\"delete from sherlock_classifications where transient_object_id in (%(transientIDs)s);\"\"\" % locals(\n )\n writequery(\n log=self.log,\n sqlQuery=sqlQuery,\n dbConn=self.transientsDbConn,\n )\n\n print \"FINISHED DELETING OLD RESULTS/ADDING TO CROSSMATCHES: %d\" % (time.time() - start_time,)\n start_time = time.time()\n\n if len(crossmatches):\n insert_list_of_dictionaries_into_database_tables(\n dbConn=self.transientsDbConn,\n log=self.log,\n dictList=crossmatches,\n dbTableName=\"sherlock_crossmatches\",\n dateModified=True,\n batchSize=10000,\n replace=True,\n dbSettings=self.settings[\"database settings\"][\n 
\"transients\"]\n )\n\n print \"FINISHED ADDING TO CROSSMATCHES/UPDATING CLASSIFICATIONS IN TRANSIENT TABLE: %d\" % (time.time() - start_time,)\n start_time = time.time()\n\n sqlQuery = \"\"\n inserts = []\n for k, v in classifications.iteritems():\n thisInsert = {\n \"transient_object_id\": k,\n \"classification\": v[0]\n }\n inserts.append(thisInsert)\n\n print \"FINISHED UPDATING CLASSIFICATIONS IN TRANSIENT TABLE/UPDATING sherlock_classifications TABLE: %d\" % (time.time() - start_time,)\n start_time = time.time()\n\n insert_list_of_dictionaries_into_database_tables(\n dbConn=self.transientsDbConn,\n log=self.log,\n dictList=inserts,\n dbTableName=\"sherlock_classifications\",\n dateModified=True,\n batchSize=10000,\n replace=True,\n dbSettings=self.settings[\"database settings\"][\n \"transients\"]\n )\n\n print \"FINISHED UPDATING sherlock_classifications TABLE: %d\" % (time.time() - start_time,)\n start_time = time.time()\n\n self.log.debug('completed the ``_update_transient_database`` method')\n return None", "response": "update the transient database with the given crossmatches classifications transientsMetadataList and colMaps"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _rank_classifications(\n self,\n crossmatchArray,\n colMaps):\n \"\"\"*rank the classifications returned from the catalogue crossmatcher, annotate the results with a classification rank-number (most likely = 1) and a rank-score (weight of classification)*\n\n **Key Arguments:**\n - ``crossmatchArrayIndex`` -- the index of list of unranked crossmatch classifications\n - ``colMaps`` -- dictionary of dictionaries with the name of the database-view (e.g. 
`tcs_view_agn_milliquas_v4_5`) as the key and the column-name dictary map as value (`{view_name: {columnMap}}`).\n\n **Return:**\n - ``classifications`` -- the classifications assigned to the transients post-crossmatches\n - ``crossmatches`` -- the crossmatches annotated with rankings and rank-scores\n\n .. todo ::\n\n - update key arguments values and definitions with defaults\n - update return values and definitions\n - update usage examples and text\n - update docstring text\n - check sublime snippet exists\n - clip any useful text to docs mindmap\n - regenerate the docs and check redendering of this docstring\n \"\"\"\n self.log.debug('starting the ``_rank_classifications`` method')\n\n crossmatches = crossmatchArray\n\n # GROUP CROSSMATCHES INTO DISTINCT SOURCES (DUPLICATE ENTRIES OF THE\n # SAME ASTROPHYSICAL SOURCE ACROSS MULTIPLE CATALOGUES)\n ra, dec = zip(*[(r[\"raDeg\"], r[\"decDeg\"]) for r in crossmatches])\n\n from HMpTy.htm import sets\n xmatcher = sets(\n log=self.log,\n ra=ra,\n dec=dec,\n radius=3 / (60. 
* 60.), # in degrees\n sourceList=crossmatches\n )\n groupedMatches = xmatcher.match\n\n associatationTypeOrder = [\"AGN\", \"CV\", \"NT\", \"SN\", \"VS\", \"BS\"]\n\n # ADD DISTINCT-SOURCE KEY\n dupKey = 0\n distinctMatches = []\n for x in groupedMatches:\n dupKey += 1\n mergedMatch = copy.deepcopy(x[0])\n mergedMatch[\"merged_rank\"] = int(dupKey)\n\n # ADD OTHER ESSENTIAL KEYS\n for e in ['z', 'photoZ', 'photoZErr']:\n if e not in mergedMatch:\n mergedMatch[e] = None\n\n bestDirectDistance = {\n \"direct_distance\": mergedMatch[\"direct_distance\"],\n \"direct_distance_modulus\": mergedMatch[\"direct_distance_modulus\"],\n \"direct_distance_scale\": mergedMatch[\"direct_distance\"],\n \"qual\": colMaps[mergedMatch[\"catalogue_view_name\"]][\"object_type_accuracy\"]\n }\n bestSpecz = {\n \"z\": mergedMatch[\"z\"],\n \"distance\": mergedMatch[\"distance\"],\n \"distance_modulus\": mergedMatch[\"distance_modulus\"],\n \"scale\": mergedMatch[\"scale\"],\n \"qual\": colMaps[mergedMatch[\"catalogue_view_name\"]][\"object_type_accuracy\"]\n }\n bestPhotoz = {\n \"photoZ\": mergedMatch[\"photoZ\"],\n \"photoZErr\": mergedMatch[\"photoZErr\"],\n \"distance\": mergedMatch[\"distance\"],\n \"distance_modulus\": mergedMatch[\"distance_modulus\"],\n \"scale\": mergedMatch[\"scale\"],\n \"qual\": colMaps[mergedMatch[\"catalogue_view_name\"]][\"object_type_accuracy\"]\n }\n\n for i, m in enumerate(x):\n m[\"merged_rank\"] = int(dupKey)\n if i > 0:\n # MERGE ALL BEST MAGNITUDE MEASUREMENTS\n for f in self.filterPreference:\n if f in m and m[f] and (f not in mergedMatch or (f + \"Err\" in mergedMatch and (f + \"Err\" not in m or (mergedMatch[f + \"Err\"] > m[f + \"Err\"])))):\n mergedMatch[f] = m[f]\n try:\n mergedMatch[f + \"Err\"] = m[f + \"Err\"]\n except:\n pass\n mergedMatch[\"original_search_radius_arcsec\"] = \"multiple\"\n mergedMatch[\"catalogue_object_subtype\"] = \"multiple\"\n mergedMatch[\"catalogue_view_name\"] = \"multiple\"\n\n # MERGE SEARCH NAMES\n snippet = 
m[\"search_name\"].split(\" \")[0].upper()\n\n if \"/\" not in mergedMatch[\"search_name\"] and snippet not in mergedMatch[\"search_name\"].upper():\n mergedMatch[\"search_name\"] = mergedMatch[\"search_name\"].split(\n \" \")[0].upper() + \"/\" + m[\"search_name\"].split(\" \")[0].upper()\n elif snippet not in mergedMatch[\"search_name\"].upper():\n mergedMatch[\n \"search_name\"] += \"/\" + m[\"search_name\"].split(\" \")[0].upper()\n elif \"/\" not in mergedMatch[\"search_name\"]:\n mergedMatch[\"search_name\"] = mergedMatch[\"search_name\"].split(\n \" \")[0].upper()\n\n mergedMatch[\"catalogue_table_name\"] = mergedMatch[\n \"search_name\"]\n\n # MERGE CATALOGUE SOURCE NAMES\n mergedMatch[\"catalogue_object_id\"] = str(\n mergedMatch[\"catalogue_object_id\"])\n m[\"catalogue_object_id\"] = str(m[\"catalogue_object_id\"])\n if m[\"catalogue_object_id\"].replace(\" \", \"\").lower() not in mergedMatch[\"catalogue_object_id\"].replace(\" \", \"\").lower():\n mergedMatch[\"catalogue_object_id\"] += \"/\" + \\\n m[\"catalogue_object_id\"]\n\n # DETERMINE BEST CLASSIFICATION\n if mergedMatch[\"classificationReliability\"] == 3 and m[\"classificationReliability\"] < 3:\n mergedMatch[\"association_type\"] = m[\"association_type\"]\n mergedMatch[\"classificationReliability\"] = m[\n \"classificationReliability\"]\n\n if m[\"classificationReliability\"] != 3 and m[\"association_type\"] in associatationTypeOrder and (mergedMatch[\"association_type\"] not in associatationTypeOrder or associatationTypeOrder.index(m[\"association_type\"]) < associatationTypeOrder.index(mergedMatch[\"association_type\"])):\n mergedMatch[\"association_type\"] = m[\"association_type\"]\n mergedMatch[\"classificationReliability\"] = m[\n \"classificationReliability\"]\n\n # FIND BEST DISTANCES\n if \"direct_distance\" in m and m[\"direct_distance\"] and colMaps[m[\"catalogue_view_name\"]][\"object_type_accuracy\"] > bestDirectDistance[\"qual\"]:\n bestDirectDistance = {\n \"direct_distance\": 
m[\"direct_distance\"],\n \"direct_distance_modulus\": m[\"direct_distance_modulus\"],\n \"direct_distance_scale\": m[\"direct_distance_scale\"],\n \"catalogue_object_type\": m[\"catalogue_object_type\"],\n \"qual\": colMaps[m[\"catalogue_view_name\"]][\"object_type_accuracy\"]\n }\n # FIND BEST SPEC-Z\n if \"z\" in m and m[\"z\"] and colMaps[m[\"catalogue_view_name\"]][\"object_type_accuracy\"] > bestSpecz[\"qual\"]:\n bestSpecz = {\n \"z\": m[\"z\"],\n \"distance\": m[\"distance\"],\n \"distance_modulus\": m[\"distance_modulus\"],\n \"scale\": m[\"scale\"],\n \"catalogue_object_type\": m[\"catalogue_object_type\"],\n \"qual\": colMaps[m[\"catalogue_view_name\"]][\"object_type_accuracy\"]\n }\n # FIND BEST PHOT-Z\n if \"photoZ\" in m and m[\"photoZ\"] and colMaps[m[\"catalogue_view_name\"]][\"object_type_accuracy\"] > bestPhotoz[\"qual\"]:\n bestPhotoz = {\n \"photoZ\": m[\"photoZ\"],\n \"photoZErr\": m[\"photoZErr\"],\n \"distance\": m[\"distance\"],\n \"distance_modulus\": m[\"distance_modulus\"],\n \"scale\": m[\"scale\"],\n \"catalogue_object_type\": m[\"catalogue_object_type\"],\n \"qual\": colMaps[m[\"catalogue_view_name\"]][\"object_type_accuracy\"]\n }\n # CLOSEST ANGULAR SEP & COORDINATES\n if m[\"separationArcsec\"] < mergedMatch[\"separationArcsec\"]:\n mergedMatch[\"separationArcsec\"] = m[\"separationArcsec\"]\n mergedMatch[\"raDeg\"] = m[\"raDeg\"]\n mergedMatch[\"decDeg\"] = m[\"decDeg\"]\n\n # MERGE THE BEST RESULTS\n for l in [bestPhotoz, bestSpecz, bestDirectDistance]:\n for k, v in l.iteritems():\n if k != \"qual\":\n mergedMatch[k] = v\n\n mergedMatch[\"catalogue_object_id\"] = str(mergedMatch[\n \"catalogue_object_id\"]).replace(\" \", \"\")\n\n # RECALULATE PHYSICAL DISTANCE SEPARATION\n if mergedMatch[\"direct_distance_scale\"]:\n mergedMatch[\"physical_separation_kpc\"] = mergedMatch[\n \"direct_distance_scale\"] * mergedMatch[\"separationArcsec\"]\n elif mergedMatch[\"scale\"]:\n mergedMatch[\"physical_separation_kpc\"] = mergedMatch[\n 
\"scale\"] * mergedMatch[\"separationArcsec\"]\n\n if \"/\" in mergedMatch[\"search_name\"]:\n mergedMatch[\"search_name\"] = \"multiple\"\n\n distinctMatches.append(mergedMatch)\n\n crossmatches = []\n for xm, gm in zip(distinctMatches, groupedMatches):\n\n # SPEC-Z GALAXIES\n if (xm[\"physical_separation_kpc\"] is not None and xm[\"physical_separation_kpc\"] != \"null\" and xm[\"physical_separation_kpc\"] < 20. and xm[\"association_type\"] == \"SN\" and ((\"z\" in xm and xm[\"z\"] is not None) or \"photoZ\" not in xm or xm[\"photoZ\"] is None or xm[\"photoZ\"] < 0.)):\n rankScore = xm[\"classificationReliability\"] * 1000 + 2. - \\\n xm[\"physical_separation_kpc\"] / 10\n # PHOTO-Z GALAXIES\n elif (xm[\"physical_separation_kpc\"] is not None and xm[\"physical_separation_kpc\"] != \"null\" and xm[\"physical_separation_kpc\"] < 20. and xm[\"association_type\"] == \"SN\"):\n rankScore = xm[\"classificationReliability\"] * 1000 + 2.2 - \\\n xm[\"physical_separation_kpc\"] / 10\n # NOT SPEC-Z, NON PHOTO-Z GALAXIES\n elif (xm[\"association_type\"] == \"SN\"):\n rankScore = xm[\"classificationReliability\"] * 1000 + 5.\n # VS\n elif (xm[\"association_type\"] == \"VS\"):\n rankScore = xm[\"classificationReliability\"] * \\\n 1000 + xm[\"separationArcsec\"] + 2.\n # BS\n elif (xm[\"association_type\"] == \"BS\"):\n rankScore = xm[\"classificationReliability\"] * \\\n 1000 + xm[\"separationArcsec\"]\n else:\n rankScore = xm[\"classificationReliability\"] * \\\n 1000 + xm[\"separationArcsec\"] + 10.\n xm[\"rankScore\"] = rankScore\n crossmatches.append(xm)\n if len(gm) > 1:\n for g in gm:\n g[\"rankScore\"] = rankScore\n\n crossmatches = sorted(\n crossmatches, key=itemgetter('rankScore'), reverse=False)\n crossmatches = sorted(\n crossmatches, key=itemgetter('transient_object_id'))\n\n transient_object_id = None\n uniqueIndexCheck = []\n classifications = {}\n crossmatchesKeep = []\n rank = 0\n transClass = []\n for xm in crossmatches:\n rank += 1\n if rank == 1:\n 
transClass.append(xm[\"association_type\"])\n classifications[xm[\"transient_object_id\"]] = transClass\n xm[\"rank\"] = rank\n crossmatchesKeep.append(xm)\n crossmatches = crossmatchesKeep\n\n crossmatchesKeep = []\n for xm in crossmatches:\n group = groupedMatches[xm[\"merged_rank\"] - 1]\n xm[\"merged_rank\"] = None\n crossmatchesKeep.append(xm)\n\n if len(group) > 1:\n groupKeep = []\n uniqueIndexCheck = []\n for g in group:\n g[\"merged_rank\"] = xm[\"rank\"]\n g[\"rankScore\"] = xm[\"rankScore\"]\n index = \"%(catalogue_table_name)s%(catalogue_object_id)s\" % g\n # IF WE HAVE HIT A NEW SOURCE\n if index not in uniqueIndexCheck:\n uniqueIndexCheck.append(index)\n crossmatchesKeep.append(g)\n\n crossmatches = crossmatchesKeep\n\n self.log.debug('completed the ``_rank_classifications`` method')\n\n return classifications, crossmatches", "response": "rank the classifications returned from the catalogue crossmatcher"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _print_results_to_stdout(\n self,\n classifications,\n crossmatches):\n \"\"\"*print the classification and crossmatch results for a single transient object to stdout*\n\n **Key Arguments:**\n - ``crossmatches`` -- the unranked crossmatch classifications\n - ``classifications`` -- the classifications assigned to the transients post-crossmatches (dictionary of rank ordered list of classifications)\n\n .. 
todo ::\n\n - update key arguments values and definitions with defaults\n - update return values and definitions\n - update usage examples and text\n - update docstring text\n - check sublime snippet exists\n - clip any useful text to docs mindmap\n - regenerate the docs and check redendering of this docstring\n \"\"\"\n self.log.debug('starting the ``_print_results_to_stdout`` method')\n\n if self.verbose == 0:\n return\n\n if self.name in classifications:\n headline = self.name + \"'s Predicted Classification: \" + \\\n classifications[self.name][0]\n else:\n headline = self.name + \"'s Predicted Classification: ORPHAN\"\n print headline\n print\n print \"Suggested Associations:\"\n\n # REPORT ONLY THE MOST PREFERED MAGNITUDE VALUE\n basic = [\"association_type\", \"rank\", \"rankScore\", \"catalogue_table_name\", \"catalogue_object_id\", \"catalogue_object_type\", \"catalogue_object_subtype\",\n \"raDeg\", \"decDeg\", \"separationArcsec\", \"physical_separation_kpc\", \"direct_distance\", \"distance\", \"z\", \"photoZ\", \"photoZErr\", \"Mag\", \"MagFilter\", \"MagErr\", \"classificationReliability\", \"merged_rank\"]\n verbose = [\"search_name\", \"catalogue_view_name\", \"original_search_radius_arcsec\", \"direct_distance_modulus\", \"distance_modulus\", \"direct_distance_scale\", \"major_axis_arcsec\", \"scale\", \"U\", \"UErr\",\n \"B\", \"BErr\", \"V\", \"VErr\", \"R\", \"RErr\", \"I\", \"IErr\", \"J\", \"JErr\", \"H\", \"HErr\", \"K\", \"KErr\", \"_u\", \"_uErr\", \"_g\", \"_gErr\", \"_r\", \"_rErr\", \"_i\", \"_iErr\", \"_z\", \"_zErr\", \"_y\", \"G\", \"GErr\", \"_yErr\", \"unkMag\"]\n dontFormat = [\"decDeg\", \"raDeg\", \"rank\",\n \"catalogue_object_id\", \"catalogue_object_subtype\", \"merged_rank\"]\n\n if self.verbose == 2:\n basic = basic + verbose\n\n for c in crossmatches:\n for f in self.filterPreference:\n if f in c and c[f]:\n c[\"Mag\"] = c[f]\n c[\"MagFilter\"] = f.replace(\"_\", \"\").replace(\"Mag\", \"\")\n if f + \"Err\" in c:\n 
c[\"MagErr\"] = c[f + \"Err\"]\n else:\n c[\"MagErr\"] = None\n break\n\n allKeys = []\n for c in crossmatches:\n for k, v in c.iteritems():\n if k not in allKeys:\n allKeys.append(k)\n\n for c in crossmatches:\n for k in allKeys:\n if k not in c:\n c[k] = None\n\n printCrossmatches = []\n for c in crossmatches:\n ordDict = collections.OrderedDict(sorted({}.items()))\n for k in basic:\n if k in c:\n if k == \"catalogue_table_name\":\n c[k] = c[k].replace(\"tcs_cat_\", \"\").replace(\"_\", \" \")\n if k == \"classificationReliability\":\n if c[k] == 1:\n c[\"classification reliability\"] = \"synonym\"\n elif c[k] == 2:\n c[\"classification reliability\"] = \"association\"\n elif c[k] == 3:\n c[\"classification reliability\"] = \"annotation\"\n k = \"classification reliability\"\n if k == \"catalogue_object_subtype\" and \"sdss\" in c[\"catalogue_table_name\"]:\n if c[k] == 6:\n c[k] = \"galaxy\"\n elif c[k] == 3:\n c[k] = \"star\"\n columnName = k.replace(\"tcs_cat_\", \"\").replace(\"_\", \" \")\n value = c[k]\n if k not in dontFormat:\n\n try:\n ordDict[columnName] = \"%(value)0.2f\" % locals()\n except:\n ordDict[columnName] = value\n else:\n ordDict[columnName] = value\n\n printCrossmatches.append(ordDict)\n\n from fundamentals.renderer import list_of_dictionaries\n dataSet = list_of_dictionaries(\n log=self.log,\n listOfDictionaries=printCrossmatches\n )\n tableData = dataSet.table(filepath=None)\n\n print tableData\n\n self.log.debug('completed the ``_print_results_to_stdout`` method')\n return None", "response": "Prints the classification and crossmatch results for a single transient object to stdout."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _consolidate_coordinateList(\n self,\n coordinateList):\n \"\"\"*match the coordinate list against itself with the parameters of the NED search queries to minimise duplicated NED queries*\n\n **Key Arguments:**\n - ``coordinateList`` -- the original 
coordinateList.\n\n **Return:**\n - ``updatedCoordinateList`` -- the coordinate list with duplicated search areas removed\n\n **Usage:**\n .. todo::\n\n - add usage info\n - create a sublime snippet for usage\n - update package tutorial if needed\n\n .. code-block:: python\n\n usage code\n\n .. todo ::\n\n - update key arguments values and definitions with defaults\n - update return values and definitions\n - update usage examples and text\n - update docstring text\n - check sublime snippet exists\n - clip any useful text to docs mindmap\n - regenerate the docs and check redendering of this docstring\n\n \"\"\"\n self.log.debug('starting the ``_consolidate_coordinateList`` method')\n\n raList = []\n raList[:] = np.array([c[0] for c in coordinateList])\n decList = []\n decList[:] = np.array([c[1] for c in coordinateList])\n\n nedStreamRadius = self.settings[\n \"ned stream search radius arcec\"] / (60. * 60.)\n firstPassNedSearchRadius = self.settings[\n \"first pass ned search radius arcec\"] / (60. 
* 60.)\n radius = nedStreamRadius - firstPassNedSearchRadius\n\n # LET'S BE CONSERVATIVE\n # radius = radius * 0.9\n\n xmatcher = sets(\n log=self.log,\n ra=raList,\n dec=decList,\n radius=radius, # in degrees\n sourceList=coordinateList,\n convertToArray=False\n )\n allMatches = xmatcher.match\n\n updatedCoordinateList = []\n for aSet in allMatches:\n updatedCoordinateList.append(aSet[0])\n\n self.log.debug('completed the ``_consolidate_coordinateList`` method')\n return updatedCoordinateList", "response": "consolidate the coordinate list against itself with the parameters of the NED search queries to minimise duplicated NED queries"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nadding a detailed classification annotation to each crossmatch in the sherlock_classifications table", "response": "def classification_annotations(\n self):\n \"\"\"*add a detailed classification annotation to each classification in the sherlock_classifications table*\n\n **Key Arguments:**\n # -\n\n **Return:**\n - None\n\n **Usage:**\n .. todo::\n\n - add usage info\n - create a sublime snippet for usage\n - write a command-line tool for this method\n - update package tutorial with command-line tool info if needed\n\n .. code-block:: python\n\n usage code\n\n .. 
todo ::\n\n - update key arguments values and definitions with defaults\n - update return values and definitions\n - update usage examples and text\n - update docstring text\n - check sublime snippet exists\n - clip any useful text to docs mindmap\n - regenerate the docs and check redendering of this docstring\n\n \"\"\"\n self.log.debug('starting the ``classification_annotations`` method')\n\n from fundamentals.mysql import readquery\n sqlQuery = u\"\"\"\n select * from sherlock_classifications cl, sherlock_crossmatches xm where cl.transient_object_id=xm.transient_object_id and cl.annotation is null\n \"\"\" % locals()\n topXMs = readquery(\n log=self.log,\n sqlQuery=sqlQuery,\n dbConn=self.transientsDbConn\n )\n\n for xm in topXMs:\n annotation = []\n classType = xm[\"classificationReliability\"]\n if classType == 1:\n annotation.append(\"is synonymous with\")\n elif classType in [2, 3]:\n annotation.append(\"is possibly associated with\")\n\n print xm[\"catalogue_object_id\"]\n\n self.log.debug('completed the ``classification_annotations`` method')\n return None"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nupdating the classification annotations and summaries of the given object", "response": "def update_classification_annotations_and_summaries(\n self,\n updatePeakMagnitudes=True):\n \"\"\"*update classification annotations and summaries*\n\n **Key Arguments:**\n - ``updatePeakMagnitudes`` -- update the peak magnitudes in the annotations to give absolute magnitudes. Default *True*\n\n **Return:**\n - None\n\n **Usage:**\n .. todo::\n\n - add usage info\n - create a sublime snippet for usage\n - write a command-line tool for this method\n - update package tutorial with command-line tool info if needed\n\n .. code-block:: python\n\n usage code\n\n\n .. 
todo ::\n\n - update key arguments values and definitions with defaults\n - update return values and definitions\n - update usage examples and text\n - update docstring text\n - check sublime snippet exists\n - clip any useful text to docs mindmap\n - regenerate the docs and check rendering of this docstring\n\n \"\"\"\n self.log.debug(\n 'starting the ``update_classification_annotations_and_summaries`` method')\n\n # import time\n # start_time = time.time()\n # print \"COLLECTING TRANSIENTS WITH NO ANNOTATIONS\"\n\n if updatePeakMagnitudes:\n sqlQuery = u\"\"\"\n SELECT * from sherlock_crossmatches cm, sherlock_classifications cl where rank =1 and cl.transient_object_id=cm.transient_object_id and (cl.annotation is null or cl.dateLastModified is null or cl.dateLastModified > DATE_SUB(NOW(), INTERVAL 30 DAY)) order by cl.dateLastModified asc limit 100000\n \"\"\" % locals()\n else:\n sqlQuery = u\"\"\"\n SELECT * from sherlock_crossmatches cm, sherlock_classifications cl where rank =1 and cl.transient_object_id=cm.transient_object_id and cl.summary is null\n \"\"\" % locals()\n\n rows = readquery(\n log=self.log,\n sqlQuery=sqlQuery,\n dbConn=self.transientsDbConn,\n quiet=False\n )\n\n # print \"FINISHED COLLECTING TRANSIENTS WITH NO ANNOTATIONS/GENERATING ANNOTATIONS: %d\" % (time.time() - start_time,)\n # start_time = time.time()\n\n from astrocalc.coords import unit_conversion\n # ASTROCALC UNIT CONVERTER OBJECT\n converter = unit_conversion(\n log=self.log\n )\n\n updates = []\n\n for row in rows:\n\n catalogue = row[\"catalogue_table_name\"]\n objectId = row[\"catalogue_object_id\"]\n objectType = row[\"catalogue_object_type\"]\n objectSubtype = row[\"catalogue_object_subtype\"]\n catalogueString = catalogue\n if \"catalogue\" not in catalogueString.lower():\n catalogueString = catalogue + \" catalogue\"\n if \"/\" in catalogueString:\n catalogueString += \"s\"\n\n if \"ned\" in catalogue.lower() and \"/\" not in catalogue:\n objectId = '''%(objectId)s''' % 
locals()\n elif \"sdss\" in catalogue.lower() and \"/\" not in catalogue:\n objectId = \"http://skyserver.sdss.org/dr12/en/tools/explore/Summary.aspx?id=%(objectId)s\" % locals(\n )\n\n ra = converter.ra_decimal_to_sexegesimal(\n ra=row[\"raDeg\"],\n delimiter=\"\"\n )\n dec = converter.dec_decimal_to_sexegesimal(\n dec=row[\"decDeg\"],\n delimiter=\"\"\n )\n betterName = \"SDSS J\" + ra[0:9] + dec[0:9]\n objectId = '''%(betterName)s''' % locals()\n elif \"milliquas\" in catalogue.lower() and \"/\" not in catalogue:\n thisName = objectId\n objectId = objectId.replace(\" \", \"+\")\n objectId = '''%(thisName)s''' % locals()\n\n if objectSubtype and objectSubtype.lower() in [\"uvs\", \"radios\", \"xray\", \"qso\", \"irs\", 'uves', 'viss', 'hii', 'gclstr', 'ggroup', 'gpair', 'gtrpl']:\n objectType = objectSubtype\n\n if objectType == \"star\":\n objectType = \"stellar source\"\n elif objectType == \"agn\":\n objectType = \"AGN\"\n elif objectType == \"cb\":\n objectType = \"CV\"\n elif objectType == \"unknown\":\n objectType = \"unclassified source\"\n\n sep = row[\"separationArcsec\"]\n if row[\"classificationReliability\"] == 1:\n classificationReliability = \"synonymous\"\n psep = row[\"physical_separation_kpc\"]\n if psep:\n location = '%(sep)0.1f\" (%(psep)0.1f Kpc) from the %(objectType)s core' % locals(\n )\n else:\n location = '%(sep)0.1f\" from the %(objectType)s core' % locals(\n )\n elif row[\"classificationReliability\"] in (2, 3):\n classificationReliability = \"possibly associated\"\n n = row[\"northSeparationArcsec\"]\n if n > 0:\n nd = \"S\"\n else:\n nd = \"N\"\n e = row[\"eastSeparationArcsec\"]\n if e > 0:\n ed = \"W\"\n else:\n ed = \"E\"\n n = math.fabs(n)\n e = math.fabs(e)\n psep = row[\"physical_separation_kpc\"]\n if psep:\n location = '%(n)0.2f\" %(nd)s, %(e)0.2f\" %(ed)s (%(psep)0.1f Kpc) from the %(objectType)s centre' % locals(\n )\n else:\n location = '%(n)0.2f\" %(nd)s, %(e)0.2f\" %(ed)s from the %(objectType)s centre' % locals(\n )\n 
location = location.replace(\"unclassified\", \"object's\")\n\n best_mag = None\n best_mag_error = None\n best_mag_filter = None\n filters = [\"R\", \"V\", \"B\", \"I\", \"J\", \"G\", \"H\", \"K\", \"U\",\n \"_r\", \"_g\", \"_i\", \"_g\", \"_z\", \"_y\", \"_u\", \"unkMag\"]\n for f in filters:\n if row[f] and not best_mag:\n best_mag = row[f]\n best_mag_error = row[f + \"Err\"]\n subfilter = f.replace(\n \"_\", \"\").replace(\"Mag\", \"\")\n best_mag_filter = f.replace(\n \"_\", \"\").replace(\"Mag\", \"\") + \"=\"\n if \"unk\" in best_mag_filter:\n best_mag_filter = \"\"\n subfilter = ''\n\n if not best_mag_filter:\n if str(best_mag).lower() in (\"8\", \"11\", \"18\"):\n best_mag_filter = \"an \"\n else:\n best_mag_filter = \"a \"\n else:\n if str(best_mag_filter)[0].lower() in (\"r\", \"i\", \"h\"):\n best_mag_filter = \"an \" + best_mag_filter\n else:\n best_mag_filter = \"a \" + best_mag_filter\n if not best_mag:\n best_mag = \"an unknown-\"\n best_mag_filter = \"\"\n else:\n best_mag = \"%(best_mag)0.2f \" % locals()\n\n distance = None\n if row[\"direct_distance\"]:\n d = row[\"direct_distance\"]\n distance = \"distance of %(d)0.1f Mpc\" % locals()\n\n if row[\"z\"]:\n z = row[\"z\"]\n distance += \"(z=%(z)0.3f)\" % locals()\n elif row[\"z\"]:\n z = row[\"z\"]\n distance = \"z=%(z)0.3f\" % locals()\n elif row[\"photoZ\"]:\n z = row[\"photoZ\"]\n zErr = row[\"photoZErr\"]\n distance = \"photoZ=%(z)0.3f (±%(zErr)0.3f)\" % locals()\n\n if distance:\n distance = \"%(distance)s\" % locals()\n\n distance_modulus = None\n if row[\"direct_distance_modulus\"]:\n distance_modulus = row[\"direct_distance_modulus\"]\n elif row[\"distance_modulus\"]:\n distance_modulus = row[\"distance_modulus\"]\n\n if updatePeakMagnitudes:\n if distance:\n absMag = row[\"transientAbsMag\"]\n absMag = \"\"\" A host %(distance)s implies a transient M = %(absMag)s.\"\"\" % locals(\n )\n else:\n absMag = \"\"\n else:\n if distance and distance_modulus:\n absMag = \"%(distance_modulus)0.2f\" 
% locals()\n absMag = \"\"\" A host %(distance)s implies a m - M = %(absMag)s.\"\"\" % locals(\n )\n else:\n absMag = \"\"\n\n annotation = \"The transient is %(classificationReliability)s with %(objectId)s; %(best_mag_filter)s%(best_mag)smag %(objectType)s found in the %(catalogueString)s. It's located %(location)s.%(absMag)s\" % locals()\n summary = '%(sep)0.1f\" from %(objectType)s in %(catalogue)s' % locals(\n )\n\n update = {\n \"transient_object_id\": row[\"transient_object_id\"],\n \"annotation\": annotation,\n \"summary\": summary,\n \"separationArcsec\": sep\n }\n updates.append(update)\n\n # print \"FINISHED GENERATING ANNOTATIONS/ADDING ANNOTATIONS TO TRANSIENT DATABASE: %d\" % (time.time() - start_time,)\n # start_time = time.time()\n\n insert_list_of_dictionaries_into_database_tables(\n dbConn=self.transientsDbConn,\n log=self.log,\n dictList=updates,\n dbTableName=\"sherlock_classifications\",\n dateModified=True,\n batchSize=10000,\n replace=True,\n dbSettings=self.settings[\"database settings\"][\"transients\"]\n )\n\n # print \"FINISHED ADDING ANNOTATIONS TO TRANSIENT DATABASE/UPDATING ORPHAN ANNOTATIONS: %d\" % (time.time() - start_time,)\n # start_time = time.time()\n\n sqlQuery = \"\"\"update sherlock_classifications set annotation = \"The transient location is not matched against any known catalogued source\", summary = \"No catalogued match\" where classification = 'ORPHAN' and summary is null \"\"\" % locals()\n writequery(\n log=self.log,\n sqlQuery=sqlQuery,\n dbConn=self.transientsDbConn,\n )\n\n # print \"FINISHED UPDATING ORPHAN ANNOTATIONS: %d\" % (time.time() - start_time,)\n # start_time = time.time()\n\n self.log.debug(\n 'completed the ``update_classification_annotations_and_summaries`` method')\n return None"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef update_peak_magnitudes(\n self):\n \"\"\"*update peak magnitudes*\n\n **Key Arguments:**\n # -\n\n **Return:**\n - 
None\n\n **Usage:**\n .. todo::\n\n - add usage info\n - create a sublime snippet for usage\n - write a command-line tool for this method\n - update package tutorial with command-line tool info if needed\n\n .. code-block:: python\n\n usage code\n\n\n .. todo ::\n\n - update key arguments values and definitions with defaults\n - update return values and definitions\n - update usage examples and text\n - update docstring text\n - check sublime snippet exists\n - clip any useful text to docs mindmap\n - regenerate the docs and check rendering of this docstring\n\n \"\"\"\n self.log.debug('starting the ``update_peak_magnitudes`` method')\n\n sqlQuery = self.settings[\"database settings\"][\n \"transients\"][\"transient peak magnitude query\"]\n\n sqlQuery = \"\"\"UPDATE sherlock_crossmatches s,\n (%(sqlQuery)s) t\n SET\n s.transientAbsMag = ROUND(t.mag - IFNULL(direct_distance_modulus,\n distance_modulus),\n 2)\n WHERE\n IFNULL(direct_distance_modulus,\n distance_modulus) IS NOT NULL\n AND t.id = s.transient_object_id;\"\"\" % locals()\n\n writequery(\n log=self.log,\n sqlQuery=sqlQuery,\n dbConn=self.transientsDbConn,\n )\n\n self.log.debug('completed the ``update_peak_magnitudes`` method')\n return None", "response": "*update peak magnitudes*\n\n **Key Arguments:**\n # -\n\n **Return:**\n - None\n\n **Usage:**\n .. todo::\n\n - add usage info\n - create a sublime snippet for usage\n - write a command-line tool for this method\n - update package tutorial with command-line tool info if needed\n\n .. code-block:: python\n\n usage code\n\n\n .. 
todo ::\n\n - update key arguments values and definitions with defaults\n - update return values and definitions\n - update usage examples and text\n - update docstring text\n - check sublime snippet exists\n - clip any useful text to docs mindmap\n - regenerate the docs and check rendering of this docstring"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _create_tables_if_not_exist(\n self):\n \"\"\"*create the sherlock helper tables if they don't yet exist*\n\n **Key Arguments:**\n # -\n\n **Return:**\n - None\n\n **Usage:**\n .. todo::\n\n - add usage info\n - create a sublime snippet for usage\n - write a command-line tool for this method\n - update package tutorial with command-line tool info if needed\n\n .. code-block:: python\n\n usage code\n\n .. todo ::\n\n - update key arguments values and definitions with defaults\n - update return values and definitions\n - update usage examples and text\n - update docstring text\n - check sublime snippet exists\n - clip any useful text to docs mindmap\n - regenerate the docs and check rendering of this docstring\n\n \"\"\"\n self.log.debug('starting the ``_create_tables_if_not_exist`` method')\n\n transientTable = self.settings[\"database settings\"][\n \"transients\"][\"transient table\"]\n transientTableClassCol = self.settings[\"database settings\"][\n \"transients\"][\"transient classification column\"]\n transientTableIdCol = self.settings[\"database settings\"][\n \"transients\"][\"transient primary id column\"]\n\n crossmatchTable = \"sherlock_crossmatches\"\n createStatement = \"\"\"\nCREATE TABLE IF NOT EXISTS `%(crossmatchTable)s` (\n `transient_object_id` bigint(20) unsigned DEFAULT NULL,\n `catalogue_object_id` varchar(30) DEFAULT NULL,\n `catalogue_table_id` smallint(5) unsigned DEFAULT NULL,\n `separationArcsec` double DEFAULT NULL,\n `northSeparationArcsec` DOUBLE DEFAULT NULL,\n `eastSeparationArcsec` DOUBLE DEFAULT NULL,\n 
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,\n `z` double DEFAULT NULL,\n `scale` double DEFAULT NULL,\n `distance` double DEFAULT NULL,\n `distance_modulus` double DEFAULT NULL,\n `photoZ` double DEFAULT NULL,\n `photoZErr` double DEFAULT NULL,\n `association_type` varchar(45) DEFAULT NULL,\n `dateCreated` datetime DEFAULT CURRENT_TIMESTAMP,\n `physical_separation_kpc` double DEFAULT NULL,\n `catalogue_object_type` varchar(45) DEFAULT NULL,\n `catalogue_object_subtype` varchar(45) DEFAULT NULL,\n `association_rank` int(11) DEFAULT NULL,\n `catalogue_table_name` varchar(100) DEFAULT NULL,\n `catalogue_view_name` varchar(100) DEFAULT NULL,\n `rank` int(11) DEFAULT NULL,\n `rankScore` double DEFAULT NULL,\n `search_name` varchar(100) DEFAULT NULL,\n `major_axis_arcsec` double DEFAULT NULL,\n `direct_distance` double DEFAULT NULL,\n `direct_distance_scale` double DEFAULT NULL,\n `direct_distance_modulus` double DEFAULT NULL,\n `raDeg` double DEFAULT NULL,\n `decDeg` double DEFAULT NULL,\n `original_search_radius_arcsec` double DEFAULT NULL,\n `catalogue_view_id` int(11) DEFAULT NULL,\n `U` double DEFAULT NULL,\n `UErr` double DEFAULT NULL,\n `B` double DEFAULT NULL,\n `BErr` double DEFAULT NULL,\n `V` double DEFAULT NULL,\n `VErr` double DEFAULT NULL,\n `R` double DEFAULT NULL,\n `RErr` double DEFAULT NULL,\n `I` double DEFAULT NULL,\n `IErr` double DEFAULT NULL,\n `J` double DEFAULT NULL,\n `JErr` double DEFAULT NULL,\n `H` double DEFAULT NULL,\n `HErr` double DEFAULT NULL,\n `K` double DEFAULT NULL,\n `KErr` double DEFAULT NULL,\n `_u` double DEFAULT NULL,\n `_uErr` double DEFAULT NULL,\n `_g` double DEFAULT NULL,\n `_gErr` double DEFAULT NULL,\n `_r` double DEFAULT NULL,\n `_rErr` double DEFAULT NULL,\n `_i` double DEFAULT NULL,\n `_iErr` double DEFAULT NULL,\n `_z` double DEFAULT NULL,\n `_zErr` double DEFAULT NULL,\n `_y` double DEFAULT NULL,\n `_yErr` double DEFAULT NULL,\n `G` double DEFAULT NULL,\n `GErr` double DEFAULT NULL,\n `unkMag` double DEFAULT 
NULL,\n `unkMagErr` double DEFAULT NULL,\n `dateLastModified` datetime DEFAULT CURRENT_TIMESTAMP,\n `updated` TINYINT NULL DEFAULT 0,\n `classificationReliability` TINYINT NULL DEFAULT NULL,\n `transientAbsMag` DOUBLE NULL DEFAULT NULL,\n PRIMARY KEY (`id`),\n KEY `key_transient_object_id` (`transient_object_id`),\n KEY `key_catalogue_object_id` (`catalogue_object_id`),\n KEY `idx_separationArcsec` (`separationArcsec`),\n KEY `idx_rank` (`rank`)\n) ENGINE=MyISAM AUTO_INCREMENT=0 DEFAULT CHARSET=latin1 ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;\n\n\nCREATE TABLE IF NOT EXISTS `sherlock_classifications` (\n `transient_object_id` bigint(20) NOT NULL,\n `classification` varchar(45) DEFAULT NULL,\n `annotation` TEXT COLLATE utf8_unicode_ci DEFAULT NULL,\n `summary` VARCHAR(50) COLLATE utf8_unicode_ci DEFAULT NULL,\n `separationArcsec` DOUBLE DEFAULT NULL,\n `matchVerified` TINYINT NULL DEFAULT NULL,\n `developmentComment` VARCHAR(100) NULL,\n `dateLastModified` datetime DEFAULT CURRENT_TIMESTAMP,\n `dateCreated` datetime DEFAULT CURRENT_TIMESTAMP,\n `updated` varchar(45) DEFAULT '0',\n PRIMARY KEY (`transient_object_id`),\n KEY `key_transient_object_id` (`transient_object_id`),\n KEY `idx_summary` (`summary`),\n KEY `idx_classification` (`classification`)\n) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;\n\n\"\"\" % locals()\n\n # A FIX FOR MYSQL VERSIONS < 5.6\n triggers = []\n if float(self.dbVersions[\"transients\"][:3]) < 5.6:\n createStatement = createStatement.replace(\n \"`dateLastModified` datetime DEFAULT CURRENT_TIMESTAMP,\", \"`dateLastModified` datetime DEFAULT NULL,\")\n createStatement = createStatement.replace(\n \"`dateCreated` datetime DEFAULT CURRENT_TIMESTAMP,\", \"`dateCreated` datetime DEFAULT NULL,\")\n\n triggers.append(\"\"\"\nCREATE TRIGGER dateCreated\nBEFORE INSERT ON `%(crossmatchTable)s`\nFOR EACH ROW\nBEGIN\n IF NEW.dateCreated IS NULL THEN\n SET NEW.dateCreated = NOW();\n SET NEW.dateLastModified = NOW();\n END IF;\nEND\"\"\" % 
locals())\n\n try:\n writequery(\n log=self.log,\n sqlQuery=createStatement,\n dbConn=self.transientsDbConn,\n Force=True\n )\n except:\n self.log.info(\n \"Could not create table (`%(crossmatchTable)s`). Probably already exist.\" % locals())\n\n sqlQuery = u\"\"\"\n SHOW TRIGGERS;\n \"\"\" % locals()\n rows = readquery(\n log=self.log,\n sqlQuery=sqlQuery,\n dbConn=self.transientsDbConn,\n )\n\n # DON'T ADD TRIGGERS IF THEY ALREADY EXIST\n for r in rows:\n if r[\"Trigger\"] in (\"sherlock_classifications_BEFORE_INSERT\", \"sherlock_classifications_AFTER_INSERT\"):\n return None\n\n triggers.append(\"\"\"CREATE TRIGGER `sherlock_classifications_BEFORE_INSERT` BEFORE INSERT ON `sherlock_classifications` FOR EACH ROW\nBEGIN\n IF new.classification = \"ORPHAN\" THEN\n SET new.annotation = \"The transient location is not matched against any known catalogued source\", new.summary = \"No catalogued match\";\n END IF;\nEND\"\"\" % locals())\n\n triggers.append(\"\"\"CREATE TRIGGER `sherlock_classifications_AFTER_INSERT` AFTER INSERT ON `sherlock_classifications` FOR EACH ROW\nBEGIN\n update `%(transientTable)s` set `%(transientTableClassCol)s` = new.classification\n where `%(transientTableIdCol)s` = new.transient_object_id;\nEND\"\"\" % locals())\n\n for t in triggers:\n try:\n writequery(\n log=self.log,\n sqlQuery=t,\n dbConn=self.transientsDbConn,\n Force=True\n )\n except:\n self.log.info(\n \"Could not create trigger (`%(crossmatchTable)s`). 
Probably already exist.\" % locals())\n\n self.log.debug('completed the ``_create_tables_if_not_exist`` method')\n return None", "response": "create the sherlock helper tables if they don't yet exist"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef check(context: click.Context, family: str):\n analysis_obj = context.obj['store'].analyses(family=family).first()\n if analysis_obj is None:\n LOG.error('no analysis found')\n context.abort()\n\n config_path = Path(analysis_obj.config_path)\n if not config_path.exists():\n LOG.error(f\"analysis config not found: {config_path}\")\n context.abort()\n config_raw = ruamel.yaml.safe_load(config_path.open())\n config_data = files.parse_config(config_raw)\n\n sampleinfo_raw = ruamel.yaml.safe_load(Path(config_data['sampleinfo_path']).open())\n sampleinfo_data = files.parse_sampleinfo(sampleinfo_raw)\n\n qcmetrics_path = Path(sampleinfo_data['qcmetrics_path'])\n if not qcmetrics_path.exists():\n LOG.error(f\"qc metrics not found: {str(qcmetrics_path)}\")\n context.abort()\n qcmetrics_raw = ruamel.yaml.safe_load(qcmetrics_path.open())\n qcmetrics_data = files.parse_qcmetrics(qcmetrics_raw)\n\n samples = {\n 'sample': [],\n 'type': [],\n 'ped': [],\n 'chanjo': [],\n 'peddy': [],\n 'plink': [],\n 'duplicates': [],\n }\n for sample_data in config_data['samples']:\n LOG.debug(f\"{sample_data['id']}: parse analysis config\")\n samples['sample'].append(sample_data['id'])\n samples['type'].append(sample_data['type'])\n\n for sample_data in sampleinfo_data['samples']:\n LOG.debug(f\"{sample_data['id']}: parse sample info\")\n samples['ped'].append(sample_data['sex'])\n\n with Path(sample_data['chanjo_sexcheck']).open() as chanjo_handle:\n sexcheck_data = files.parse_chanjo_sexcheck(chanjo_handle)\n\n predicted_sex = sexcheck_data['predicted_sex']\n xy_ratio = sexcheck_data['y_coverage'] / sexcheck_data['x_coverage']\n samples['chanjo'].append(f\"{predicted_sex} 
({xy_ratio:.3f})\")\n\n for sample_data in qcmetrics_data['samples']:\n LOG.debug(f\"{sample_data['id']}: parse qc metrics\")\n samples['plink'].append(sample_data['plink_sex'])\n duplicates_percent = sample_data['duplicates'] * 100\n samples['duplicates'].append(f\"{duplicates_percent:.3f}%\")\n\n peddy_path = Path(sampleinfo_data['peddy']['sex_check'])\n if peddy_path.exists():\n with peddy_path.open() as sexcheck_handle:\n peddy_data = files.parse_peddy_sexcheck(sexcheck_handle)\n\n for sample_id in samples['sample']:\n LOG.debug(f\"{sample_id}: parse peddy\")\n predicted_sex = peddy_data[sample_id]['predicted_sex']\n het_ratio = peddy_data[sample_id]['het_ratio']\n samples['peddy'].append(f\"{predicted_sex} ({het_ratio})\")\n else:\n LOG.warning(f\"missing peddy output: {peddy_path}\")\n\n print(tabulate(samples, headers='keys', tablefmt='psql'))", "response": "Check the latest analysis for a family by tabulating, per sample, the sex predictions from the pedigree, chanjo sexcheck, peddy and plink outputs alongside the duplicate rates from the QC metrics."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef name_file(lane: int, flowcell: str, sample: str, read: int,\n undetermined: bool=False, date: dt.datetime=None, index: str=None) -> str:\n \"\"\"Name a FASTQ file following MIP conventions.\"\"\"\n flowcell = f\"{flowcell}-undetermined\" if undetermined else flowcell\n date_str = date.strftime('%y%m%d') if date else '171015'\n index = index if index else 'XXXXXX'\n return f\"{lane}_{date_str}_{flowcell}_{sample}_{index}_{read}.fastq.gz\"", "response": "Name a FASTQ file following MIP conventions."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef link(self, family: str, sample: str, analysis_type: str, files: List[str]):\n root_dir = Path(self.families_dir) / family / analysis_type / sample / 'fastq'\n root_dir.mkdir(parents=True, exist_ok=True)\n for fastq_data in files:\n fastq_path = Path(fastq_data['path'])\n fastq_name = self.name_file(\n lane=fastq_data['lane'],\n flowcell=fastq_data['flowcell'],\n 
sample=sample,\n read=fastq_data['read'],\n undetermined=fastq_data['undetermined'],\n )\n dest_path = root_dir / fastq_name\n if not dest_path.exists():\n log.info(f\"linking: {fastq_path} -> {dest_path}\")\n dest_path.symlink_to(fastq_path)\n else:\n log.debug(f\"destination path already exists: {dest_path}\")", "response": "Link FASTQ files for a sample."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngetting the path to the python package", "response": "def getpackagepath():\n \"\"\"\n *Get the root path for this python package - used in unit testing code*\n \"\"\"\n moduleDirectory = os.path.dirname(__file__)\n packagePath = os.path.dirname(__file__) + \"/../\"\n\n return packagePath"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef mcast_receiver(port, mcgroup=MC_GROUP):\n\n if _is_broadcast_group(mcgroup):\n group = None\n else:\n group = mcgroup\n\n # Create a socket\n sock = socket(AF_INET, SOCK_DGRAM)\n\n # Allow multiple copies of this program on one machine\n # (not strictly needed)\n sock.setsockopt(SOL_SOCKET, SO_REUSEADDR, 1)\n if group:\n sock.setsockopt(SOL_IP, IP_MULTICAST_TTL, TTL_LOCALNET) # default\n sock.setsockopt(SOL_IP, IP_MULTICAST_LOOP, 1) # default\n\n # Bind it to the port\n sock.bind(('', port))\n\n # Look up multicast group address in name server\n # (doesn't hurt if it is already in ddd.ddd.ddd.ddd format)\n if group:\n group = gethostbyname(group)\n\n # Construct binary group address\n bytes_ = [int(b) for b in group.split(\".\")]\n grpaddr = 0\n for byte in bytes_:\n grpaddr = (grpaddr << 8) | byte\n\n # Construct struct mreq from grpaddr and ifaddr\n ifaddr = INADDR_ANY\n mreq = struct.pack('!LL', grpaddr, ifaddr)\n\n # Add group membership\n sock.setsockopt(IPPROTO_IP, IP_ADD_MEMBERSHIP, mreq)\n\n return sock, group or ''", "response": "Open a UDP socket bind it to a port and select a multicast group."} {"SOURCE": "codesearchnet", 
"instruction": "Write a Python 3 function for\ndeleting temporary logs for the current family.", "response": "def _delete_temp_logs(self, family_name: str):\n \"\"\"Delete temporary logs for the current family.\"\"\"\n for temp_log in self.store.analyses(family=family_name, temp=True):\n log.debug(f\"delete temporary log: {temp_log.id} - {temp_log.status}\")\n temp_log.delete()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef parse(cls, config_data: dict, sampleinfo_data: dict, sacct_jobs: List[dict],\n jobs: int) -> dict:\n \"\"\"Parse information about a run.\"\"\"\n analysis_types = [sample['type'] for sample in config_data['samples']]\n run_data = {\n 'user': config_data['email'],\n 'family': config_data['family'],\n 'priority': config_data['priority'],\n 'started_at': sampleinfo_data['date'],\n 'version': sampleinfo_data['version'],\n 'out_dir': config_data['out_dir'],\n 'config_path': config_data['config_path'],\n 'type': cls._get_analysis_type(analysis_types),\n }\n\n sacct_data, last_job_end = cls._parse_sacct(sacct_jobs, jobs_count=jobs)\n run_data.update(sacct_data)\n\n run_data['status'] = cls.get_status(sampleinfo_data['is_finished'],\n len(run_data['failed_jobs']))\n if run_data['status'] == 'completed':\n run_data['completed_at'] = last_job_end\n\n return run_data", "response": "Parse information about a run."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nparsing out info from Sacct log.", "response": "def _parse_sacct(sacct_jobs: List[dict], jobs_count: int=None):\n \"\"\"Parse out info from Sacct log.\"\"\"\n failed_jobs = sacct_api.filter_jobs(sacct_jobs, failed=True)\n completed_jobs = [job for job in sacct_jobs if job['is_completed']]\n last_job_end = completed_jobs[-1]['end'] if len(completed_jobs) > 0 else None\n data = {\n 'jobs': jobs_count,\n 'completed_jobs': len(completed_jobs),\n 'progress': (len(completed_jobs) / jobs_count) if jobs_count 
else None,\n 'failed_jobs': [{\n 'slurm_id': job['id'],\n 'started_at': job['start'],\n 'elapsed': job['elapsed'],\n 'status': job['state'].lower(),\n 'name': job['step'],\n 'context': job['context'],\n } for job in failed_jobs]\n }\n return data, last_job_end"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ndetermining the overall analysis type.", "response": "def _get_analysis_type(analysis_types: List[str]) -> str:\n \"\"\"Determine the overall analysis type.\"\"\"\n types_set = set(analysis_types)\n return types_set.pop() if len(types_set) == 1 else 'wgs'"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef build(self, run_data: dict) -> models.Analysis:\n existing_run = self.store.find_analysis(family=run_data['family'],\n started_at=run_data['started_at'],\n status=run_data['status'])\n if existing_run:\n return None\n\n run_data['user'] = self.store.user(run_data['user'])\n new_failed_jobs = [self.store.Job(**job) for job in run_data['failed_jobs']]\n del run_data['failed_jobs']\n new_run = self.store.Analysis(**run_data)\n new_run.failed_jobs = new_failed_jobs\n return new_run", "response": "Build a new Analysis object."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_default_value(self):\n default = self.default_value\n if isinstance(default, collections.Callable):\n default = default()\n return default", "response": "get the default value"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _init_compiled(cls, dialect, connection, dbapi_connection,\n compiled, parameters):\n \"\"\"Initialize execution context for a Compiled construct.\"\"\"\n\n self = cls.__new__(cls)\n self.root_connection = connection\n self._dbapi_connection = dbapi_connection\n self.dialect = connection.dialect\n\n self.compiled = compiled\n\n # this should 
be caught in the engine before\n # we get here\n assert compiled.can_execute\n\n self.execution_options = compiled.execution_options.union(\n connection._execution_options)\n\n self.result_column_struct = (\n compiled._result_columns, compiled._ordered_columns,\n compiled._textual_ordered_columns)\n\n self.unicode_statement = util.text_type(compiled)\n if not dialect.supports_unicode_statements:\n self.statement = self.unicode_statement.encode(\n self.dialect.encoding)\n else:\n self.statement = self.unicode_statement\n\n self.isinsert = compiled.isinsert\n self.isupdate = compiled.isupdate\n self.isdelete = compiled.isdelete\n self.is_text = compiled.isplaintext\n\n if not parameters:\n self.compiled_parameters = [compiled.construct_params()]\n else:\n self.compiled_parameters = \\\n [compiled.construct_params(m, _group_number=grp) for\n grp, m in enumerate(parameters)]\n\n self.executemany = len(parameters) > 1\n\n self.cursor = self.create_cursor()\n\n if self.isinsert or self.isupdate or self.isdelete:\n self.is_crud = True\n self._is_explicit_returning = bool(compiled.statement._returning)\n self._is_implicit_returning = bool(\n compiled.returning and not compiled.statement._returning)\n\n if self.compiled.insert_prefetch or self.compiled.update_prefetch:\n if self.executemany:\n self._process_executemany_defaults()\n else:\n self._process_executesingle_defaults()\n\n processors = compiled._bind_processors\n\n # Convert the dictionary of bind parameter values\n # into a dict or list to be sent to the DBAPI's\n # execute() or executemany() method.\n parameters = []\n if dialect.positional:\n for compiled_params in self.compiled_parameters:\n param = []\n for key in self.compiled.positiontup:\n if key in processors:\n param.append(processors[key](compiled_params[key]))\n else:\n param.append(compiled_params[key])\n parameters.append(dialect.execute_sequence_format(param))\n else:\n encode = not dialect.supports_unicode_statements\n for compiled_params in 
self.compiled_parameters:\n\n if encode:\n param = dict(\n (\n dialect._encoder(key)[0],\n processors[key](compiled_params[key])\n if key in processors\n else compiled_params[key]\n )\n for key in compiled_params\n )\n else:\n param = dict(\n (\n key,\n processors[key](compiled_params[key])\n if key in processors\n else compiled_params[key]\n )\n for key in compiled_params\n )\n\n parameters.append(param)\n self.parameters = dialect.execute_sequence_format(parameters)\n\n self.statement = compiled\n return self", "response": "Initialize the Execution Context for a Compiled construct."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef split_sentences(text):\n sentence_delimiters = re.compile(u'[.!?,;:\\t\\\\\\\\\"\\\\(\\\\)\\\\\\'\\u2019\\u2013]|\\\\s\\\\-\\\\s')\n sentences = sentence_delimiters.split(text)\n return sentences", "response": "Utility function to return a list of sentences."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _get_action_urls(self):\n actions = {}\n\n model_name = self.model._meta.model_name\n # e.g.: polls_poll\n base_url_name = '%s_%s' % (self.model._meta.app_label, model_name)\n # e.g.: polls_poll_actions\n model_actions_url_name = '%s_actions' % base_url_name\n\n self.tools_view_name = 'admin:' + model_actions_url_name\n\n # WISHLIST use get_change_actions and get_changelist_actions\n # TODO separate change and changelist actions\n for action in chain(self.change_actions, self.changelist_actions):\n actions[action] = getattr(self, action)\n return [\n # change, supports the same pks the admin does\n # https://github.com/django/django/blob/stable/1.10.x/django/contrib/admin/options.py#L555\n url(r'^(?P<pk>.+)/actions/(?P<tool>\\w+)/$',\n self.admin_site.admin_view( # checks permissions\n ChangeActionView.as_view(\n model=self.model,\n actions=actions,\n back='admin:%s_change' % base_url_name,\n 
current_app=self.admin_site.name,\n )\n ),\n name=model_actions_url_name),\n # changelist\n url(r'^actions/(?P<tool>\\w+)/$',\n self.admin_site.admin_view( # checks permissions\n ChangeListActionView.as_view(\n model=self.model,\n actions=actions,\n back='admin:%s_changelist' % base_url_name,\n current_app=self.admin_site.name,\n )\n ),\n # Dupe name is fine. https://code.djangoproject.com/ticket/14259\n name=model_actions_url_name),\n ]", "response": "Get the url patterns that route each action to a view."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _get_tool_dict(self, tool_name):\n tool = getattr(self, tool_name)\n standard_attrs, custom_attrs = self._get_button_attrs(tool)\n return dict(\n name=tool_name,\n label=getattr(tool, 'label', tool_name),\n standard_attrs=standard_attrs,\n custom_attrs=custom_attrs,\n )", "response": "Represents the tool as a dict with extra meta."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngetting the HTML attributes associated with a tool.", "response": "def _get_button_attrs(self, tool):\n \"\"\"\n Get the HTML attributes associated with a tool.\n\n There are some standard attributes (class and title) that the template\n will always want. Any number of additional attributes can be specified\n and passed on. This is kinda awkward and due for a refactor for\n readability.\n \"\"\"\n attrs = getattr(tool, 'attrs', {})\n # href is not allowed to be set. should an exception be raised instead?\n if 'href' in attrs:\n attrs.pop('href')\n # title is not allowed to be set. 
should an exception be raised instead?\n # `short_description` should be set instead to parallel django admin\n # actions\n if 'title' in attrs:\n attrs.pop('title')\n default_attrs = {\n 'class': attrs.get('class', ''),\n 'title': getattr(tool, 'short_description', ''),\n }\n standard_attrs = {}\n custom_attrs = {}\n for k, v in dict(default_attrs, **attrs).items():\n if k in default_attrs:\n standard_attrs[k] = v\n else:\n custom_attrs[k] = v\n return standard_attrs, custom_attrs"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nadd a question mark.", "response": "def question_mark(self, request, obj):\n \"\"\"Add a question mark.\"\"\"\n obj.question = obj.question + '?'\n obj.save()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef build_link(href, text, cls=None, icon_class=None, **attrs):\n return build_html_element(tag='a',\n text=text,\n href=href,\n cls=cls,\n icon_class=icon_class,\n **attrs)", "response": "Builds an html link."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nbuilds an html element.", "response": "def build_html_element(tag, text=None, icon_class=None, cls=None, **kwargs):\n \"\"\"Builds an html element.\n\n :param tag: the html tag to build ('a', 'div', 'img', etc)\n :param icon_class: the class to apply to an icon element. This only\n applies to elements that allow a closing tag.\n :param cls: the css class to apply to the tag. 
This can also be passed\n in as a kwarg as \"class\".\n\n >>> build_html_element(tag='a', href='someurl.com', text='hello')\n '<a href=\"someurl.com\">hello</a>'\n \"\"\"\n if cls is not None:\n kwargs['class'] = cls\n\n tag_attrs = ' '.join(['{0}=\"{1}\"'.format(k, v) for k, v in kwargs.items()])\n\n tag_content = '{tag} {tag_attrs}'.format(tag=tag, tag_attrs=tag_attrs)\n\n if tag in ('img', 'input', 'hr', 'br'):\n return mark_safe('<{tag_content} />'.format(tag_content=tag_content))\n\n icon = '<i class=\"{0}\"></i> '.format(icon_class) if icon_class else ''\n\n if not text:\n text = ''\n elif not isinstance(text, SafeText):\n text = escape(text)\n\n return mark_safe('<{tag_content}>{icon}{text}</{tag}>'.format(\n tag_content=tag_content,\n icon=icon,\n tag=tag,\n text=text)\n )"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef dump_migration_session_state(raw):\n class BlockStyle(str): pass\n class SessionDumper(yaml.SafeDumper): pass\n def str_block_formatter(dumper, data):\n return dumper.represent_scalar(u'tag:yaml.org,2002:str', data, style='|')\n SessionDumper.add_representer(BlockStyle, str_block_formatter)\n\n raw = deepcopy(raw)\n for step in raw:\n step['output'] = BlockStyle(step['output'])\n step['traceback'] = BlockStyle(step['traceback'])\n return yaml.dump(raw, Dumper=SessionDumper)", "response": "Serialize a migration session state to yaml using nicer formatting"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef add_migrations(self, migrations):\n if self.__closed:\n raise MigrationSessionError(\"Can't change applied session\")\n self._to_apply.extend(migrations)", "response": "Adds migrations to be applied."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _get_unapplied_migrations(self, loader):\n unapplied = []\n graph = loader.graph\n plan = []\n seen = set()\n\n # Generate the plan, in the order that migrations have 
been/should be applied.\n for target in graph.leaf_nodes():\n for migration in graph.forwards_plan(target):\n if migration not in seen:\n plan.append(graph.nodes[migration])\n seen.add(migration)\n\n # Remove the migrations that have already been applied.\n for migration in plan:\n if not (migration.app_label, migration.name) in loader.applied_migrations:\n # NOTE: Unicode Django application names are unsupported.\n unapplied.append([migration.app_label, str(migration.name)])\n return unapplied", "response": "Return a list of unapplied migrations in the form [[app_label, migration_name], ...]"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _get_current_migration_state(self, loader, apps):\n # Only care about applied migrations for the passed-in apps.\n apps = set(apps)\n relevant_applied = [migration for migration in loader.applied_migrations if migration[0] in apps]\n # Sort them by the most recent migration and convert to a dictionary,\n # leaving apps as keys and most recent migration as values.\n # NB: this is a dirty trick\n most_recents = dict(sorted(relevant_applied, key=lambda m: m[1]))\n # Fill in the apps with no migrations with 'zero'.\n # NOTE: Unicode Django application names are unsupported.\n most_recents = [[app, 'zero' if app not in most_recents else str(most_recents[app])] for app in apps]\n return most_recents", "response": "Extract the most recent applied migration for each of the given apps and return a list of [app, migration_name] pairs, using 'zero' for apps with no applied migrations."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns a tuple of unapplied current", "response": "def list_migrations(self):\n \"\"\"\n Returns a tuple of unapplied, current\n\n \"Unapplied\" is a list of unapplied migrations. 
\"Current\" is a list of the current migration\n states for apps with unapplied migrations.\n\n Both are tuples of the form (app, migration_name).\n \"\"\"\n connection = connections[self._database_name]\n loader = MigrationLoader(connection, ignore_no_migrations=True)\n unapplied = self._get_unapplied_migrations(loader)\n currents = self._get_current_migration_state(loader, [u[0] for u in unapplied])\n return unapplied, currents"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _parse_migrate_output(self, output):\n failed = None\n succeeded = []\n\n # Mark migrations:\n # - before exception migration as success\n # - exception migration as failed\n for line in output.split('\\n'):\n line = _remove_escape_characters(line).strip()\n line_match = self.migration_regex.match(line)\n if line_match:\n migration = (line_match.group('app_name'), line_match.group('migration_name'))\n if line_match.group('success') == 'OK':\n # The migration succeeded\n succeeded.append(migration)\n else:\n # The migration failed\n failed = migration\n break\n return succeeded, failed", "response": "Parse the output of manage. 
py migrate and return a list of tuples."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef __apply(self, migration=None, run_all=False):\n out = StringIO()\n trace = None\n\n migrate_kwargs = {\n 'interactive': False,\n 'stdout': out,\n 'database': self._database_name,\n }\n if migration is not None:\n migrate_kwargs.update({\n 'app_label': migration[0],\n 'migration_name': migration[1],\n })\n elif not run_all:\n raise ValueError('Either a migration must be provided or \"run_all\" must be True')\n\n start = self._timer()\n try:\n call_command(\"migrate\", **migrate_kwargs)\n except Exception:\n trace = ''.join(traceback.format_exception(*sys.exc_info()))\n finally:\n end = self._timer()\n successes, failure = self._parse_migrate_output(out.getvalue())\n\n self._migration_state.append({\n 'database': self._database_name,\n 'migration': 'all' if run_all else (migration[0], migration[1]),\n 'duration': end - start,\n 'output': _remove_escape_characters(out.getvalue()),\n 'succeeded_migrations': successes, # [(app, migration), ...]\n 'failed_migration': failure, # (app, migration)\n 'traceback': trace,\n 'succeeded': failure is None and trace is None,\n })\n\n if failure is not None:\n raise CommandError(\"Migration failed for app '{}' - migration '{}'.\\n\".format(*failure))\n elif trace is not None:\n raise CommandError(\"Migrations failed unexpectedly. 
See self.state['traceback'] for details.\")", "response": "Applies a migration to the state."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\napplying all migrations that have been added to the session.", "response": "def apply(self):\n \"\"\"\n Applies all migrations that have been added.\n Note that some migrations depend on others, so you might end up\n running more than one.\n \"\"\"\n if self.__closed:\n raise MigrationSessionError(\"Can't apply applied session\")\n try:\n while self._to_apply:\n self.__apply(migration=self._to_apply.pop(0))\n except:\n raise\n finally:\n self.__closed = True"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef apply_all(self):\n if self.__closed:\n raise MigrationSessionError(\"Can't apply applied session\")\n if self._to_apply:\n raise MigrationSessionError(\"Can't apply_all with migrations added to session\")\n\n try:\n self.__apply(run_all=True)\n except:\n raise\n finally:\n self.__closed = True", "response": "Applies all Django model migrations at once recording the result."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nsending an email from a template.", "response": "def send_email_from_template(to_email, from_email, subject,\n markdown_template=None,\n text_template=None, html_template=None,\n fail_silently=False, context=None,\n **kwargs):\n \"\"\"Send an email from a template.\n\n :param to_email: the email address to send the email to\n :param from_email: the email address the email will be from\n :param subject: the subject of the email\n :param markdown_template: the markdown syntax template to use for the\n email. If provided, this will generate both the text and html versions\n of the email. You must have the \"markdown\" library installed in order\n to use this. pip install markdown.\n :param text_template: the template for the text version of the email. 
This\n can be omitted if the markdown_template is provided.\n :param html_template: the template for the html version of the email. This\n can be omitted if the markdown_template is provided.\n :param context: the context for the email templates\n \"\"\"\n return send_emails_from_template(\n to_emails=[to_email],\n from_email=from_email,\n subject=subject,\n markdown_template=markdown_template,\n text_template=text_template,\n html_template=html_template,\n fail_silently=fail_silently,\n context=context,\n **kwargs\n )"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nsends many emails from a single template.", "response": "def send_emails_from_template(to_emails, from_email, subject,\n markdown_template=None, text_template=None,\n html_template=None, fail_silently=False,\n context=None, attachments=None, **kwargs):\n \"\"\"Send many emails from single template. Each email address listed in the\n ``to_emails`` will receive an separate email.\n\n :param to_emails: list of email address to send the email to\n :param from_email: the email address the email will be from\n :param subject: the subject of the email\n :param markdown_template: the markdown syntax template to use for the\n email. If provided, this will generate both the text and html versions\n of the email. You must have the \"markdown\" library installed in order\n to use this. pip install markdown.\n :param text_template: the template for the text version of the email. This\n can be omitted if the markdown_template is provided.\n :param html_template: the template for the html version of the email. This\n can be omitted if the markdown_template is provided.\n :param context: the context for the email templates\n :param attachments: list of additional attachments to add to the email\n (example: email.mime.image.MIMEImage object). 
The attachments will be\n added to each email sent.\n \"\"\"\n if not to_emails:\n return\n\n if context is None:\n context = {}\n\n if markdown_template:\n try:\n from markdown import markdown\n except ImportError:\n raise ImportError(\n 'The application is attempting to send an email by using the '\n '\"markdown\" library, but markdown is not installed. Please '\n 'install it. See: '\n 'http://pythonhosted.org/Markdown/install.html'\n )\n\n base_html_template = getattr(settings,\n 'CORE_BASE_HTML_EMAIL_TEMPLATE',\n 'django_core/mail/base_email.html')\n\n text_content = render_to_string(markdown_template, context)\n context['email_content'] = markdown(text_content)\n html_content = render_to_string(base_html_template, context)\n else:\n text_content = render_to_string(text_template, context)\n html_content = render_to_string(html_template, context)\n\n emails = []\n\n for email_address in to_emails:\n email = EmailMultiAlternatives(\n subject=subject,\n body=text_content,\n from_email=from_email,\n to=[email_address],\n alternatives=[(html_content, 'text/html')]\n )\n\n if attachments:\n email.mixed_subtype = 'related'\n\n for attachment in attachments:\n email.attach(attachment)\n\n emails.append(email)\n\n connection = mail.get_connection()\n\n connection.open()\n connection.send_messages(emails)\n connection.close()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nfinding all the possible interpretations given a set of symbols", "response": "def all_models(self, alphabet: _Alphabet) -> Set[PLInterpretation]:\n \"\"\"Find all the possible interpretations given a set of symbols\"\"\"\n\n all_possible_interpretations = alphabet.powerset().symbols\n all_models = set()\n for i in all_possible_interpretations:\n # compute current Interpretation, considering False\n # all propositional symbols not present in current interpretation\n current_interpretation = PLInterpretation(i)\n if self.truth(current_interpretation):\n 
all_models.add(current_interpretation)\n\n self._all_models = all_models\n return all_models"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef minimal_models(self, alphabet: _Alphabet) -> Set[PLInterpretation]:\n models = self.all_models(alphabet)\n\n minimal_models = set()\n for m in models:\n min_m = m\n for m1 in models:\n if min_m.true_propositions.issuperset(m1.true_propositions):\n min_m = m1\n minimal_models.add(min_m)\n\n return minimal_models", "response": "Find models of minimal size."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef save(self, *args, **kwargs):\n self.save_prep(instance_or_instances=self)\n return super(AbstractTokenModel, self).save(*args, **kwargs)", "response": "Make sure token is added."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef save_prep(cls, instance_or_instances):\n instances = make_obj_list(instance_or_instances)\n\n tokens = set(cls.objects.get_available_tokens(\n count=len(instances),\n token_length=cls.token_length\n ))\n\n for instance in instances:\n if not instance.token:\n instance.token = tokens.pop()\n\n super(AbstractTokenModel, cls).save_prep(\n instance_or_instances=instances\n )", "response": "Preprocess the object before the object is saved."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _popup(self):\n res = ()\n for child in self.formulas:\n if type(child) == type(self):\n superchilds = child.formulas\n res += superchilds\n else:\n res += (child, )\n return tuple(res)", "response": "recursively find commutative binary operators\n among child formulas and pop them up to the same level"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nget the absolute url for the content object.", "response": "def 
get_content_object_url(self):\n \"\"\"Gets the absolute url for the content object.\"\"\"\n if (self.content_object and\n hasattr(self.content_object, 'get_absolute_url')):\n return self.content_object.get_absolute_url()\n\n return None"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef t_ATOM(self, t):\n r'[a-zA-Z_][a-zA-Z_0-9]*'\n t.type = LDLfLexer.reserved.get(t.value, 'ATOM') # Check for reserved words\n return t", "response": "t_ATOM is a bit of a hack to handle reserved words"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns the url that should be used to redirect to the current url.", "response": "def get_redirect_url(current_url, config, https=None):\n \"\"\"\n priorities are (primary domain and aliases are treated the same):\n exact redirect match > exact alias match > pattern redirect match > pattern alias match\n :param current_url: the url that is being called\n :param config: redirect configuration for this url\n :param want_https: whether redirects should go to https (None keeps the current scheme)\n :return: None for no redirect or an url to redirect to\n \"\"\"\n primary_domain = config['domain']\n domains = set(config.get('aliases', [])) | set((primary_domain,))\n domain_patterns = compile_regexes(domains)\n redirect_domains = set(config.get('redirects', []))\n redirect_domain_patterns = compile_regexes(redirect_domains)\n url = yurl.URL(current_url)\n if https is None:\n target_scheme = url.scheme\n else:\n target_scheme = 'https' if https else 'http'\n redirect_url = None\n if url.is_host_ip() or url.is_host_ipv4():\n # don't redirect for ips\n return\n if url.host in domains and url.scheme == target_scheme:\n # exact host and scheme match: Nothing to do\n return\n if url.host in domains and url.scheme != target_scheme:\n # exact alias match, but scheme mismatch: redirect to changed scheme\n redirect_url = 
url.replace(scheme=target_scheme)\n elif url.host in redirect_domains:\n # exact redirect match: redirect\n redirect_url = url.replace(scheme=target_scheme, host=primary_domain)\n elif url.host in domains:\n # exact alias match: nothing to do\n return\n elif match_any(redirect_domain_patterns, url.host):\n # pattern redirect match: redirect\n redirect_url = url.replace(scheme=target_scheme, host=primary_domain)\n elif match_any(domain_patterns, url.host):\n # pattern alias match\n if url.scheme != target_scheme:\n # pattern alias match and scheme mismatch: redirect\n redirect_url = url.replace(scheme=target_scheme)\n else:\n return\n\n if redirect_url:\n return '{}'.format(redirect_url)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef invoke(self, ctx):\n if self.callback is not None:\n loop = asyncio.get_event_loop()\n return loop.run_until_complete(self.async_invoke(ctx))", "response": "Invoke the callback on the given context."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_help_option(self, ctx):\n help_options = self.get_help_option_names(ctx)\n if not help_options or not self.add_help_option:\n return\n\n def show_help(ctx, param, value):\n import abot.bot\n if value and not ctx.resilient_parsing:\n event: abot.bot.MessageEvent = abot.bot.current_event.get()\n tbd_tasks.append(event.reply(ctx.get_help()))\n ctx.exit()\n\n return click.core.Option(help_options, is_flag=True,\n is_eager=True, expose_value=False,\n callback=show_help,\n help='Show this message and exit.')", "response": "Returns the help option object."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_widget_css_class(self, attrs):\n size_class = 'size-{0}'.format(self.num_inputs)\n\n if 'class' in attrs:\n attrs['class'] += ' {0}'.format(size_class)\n else:\n attrs['class'] = size_class", "response": "Gets the class 
for the widget."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef p_formula_atom(self, p):\n if p[1]==Symbols.TRUE.value:\n p[0] = PLTrue()\n elif p[1]==Symbols.FALSE.value:\n p[0] = PLFalse()\n else:\n p[0] = PLAtomic(Symbol(p[1]))", "response": "formula : ATOM\n | TRUE\n | FALSE"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_or_none(self, prefetch_related=None, select_related=False,\n **kwargs):\n \"\"\"Gets a single object based on kwargs or None if one is not found.\n\n :param prefetch_related: list or tuple of fields to prefetch for an\n object. This takes precedence over select_related.\n\n Example:\n\n >> get_or_none(prefetch_related=['some_field',\n ... 'some_field__some_fields_field'])\n\n See:\n https://docs.djangoproject.com/en/dev/ref/models/querysets/#prefetch-related\n\n :param select_related: boolean when set to True will follow foreign-key\n relationships to prevent many db queries when looping over foreign\n keys. If this value is boolean True, the immediate foreign keys\n will be selected, but not foreign keys of foreign keys. If this\n value is set to a list or tuple, then those will be the fields to\n to follow and select.\n\n Example:\n\n >> # Both of the following are valid\n >> get_or_none(select_related=True)\n >> get_or_none(select_related=['some_field',\n ... 
'some_field__some_fields_field'])\n\n See: https://docs.djangoproject.com/en/dev/ref/models/querysets/\n :param kwargs: list of fields and their values to retrieve.\n\n \"\"\"\n try:\n if prefetch_related:\n query_set = self.prefetch_related(*prefetch_related)\n elif select_related == True:\n query_set = self.select_related()\n elif isinstance(select_related, (list, tuple)):\n query_set = self.select_related(*select_related)\n else:\n query_set = self\n\n return query_set.get(**kwargs)\n except self.model.DoesNotExist:\n return None", "response": "Gets a single object based on kwargs or None if one is not found."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_by_id_or_404(self, id, **kwargs):\n obj = self.get_by_id(id=id, **kwargs)\n\n if obj:\n return obj\n\n raise Http404", "response": "Gets a single instance; raises a 404 if one isn't found."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ninsert many object at once.", "response": "def bulk_create(self, objs, *args, **kwargs):\n \"\"\"Insert many object at once.\"\"\"\n if hasattr(self.model, 'save_prep'):\n # Method from AbstractBaseModel. If the model class doesn't\n # subclass AbstractBaseModel, then don't call this.\n self.model.save_prep(instance_or_instances=objs)\n\n return super(CommonManager, self).bulk_create(objs=objs,\n *args,\n **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ndelete objects by ids.", "response": "def delete_by_ids(self, ids):\n \"\"\"Delete objects by ids.\n\n :param ids: list of objects ids to delete.\n :return: True if objects were deleted. 
Otherwise, return False if no\n objects were found or the delete was not successful.\n \"\"\"\n try:\n self.filter(id__in=ids).delete()\n return True\n except self.model.DoesNotExist:\n return False"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncheck to see if a slug is available.", "response": "def is_slug_available(self, slug, **kwargs):\n \"\"\"Checks to see if a slug is available. If the slug is already being used\n this method returns False. Otherwise, return True.\n \"\"\"\n try:\n self.get(slug=slug, **kwargs)\n return False\n except self.model.DoesNotExist:\n return True"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_next_slug(self, slug, **kwargs):\n original_slug = slug = slugify(slug)\n count = 0\n\n while not self.is_slug_available(slug=slug, **kwargs):\n count += 1\n slug = '{0}-{1}'.format(original_slug, count)\n\n return slug", "response": "Gets the next available slug."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngetting the next available token.", "response": "def get_next_token(self, length=15, **kwargs):\n \"\"\"Gets the next available token.\n\n :param length: length of the token\n :param kwargs: additional filter criteria to check for when looking for\n a unique token.\n\n \"\"\"\n return self.get_available_tokens(count=1,\n token_length=length,\n **kwargs)[0]"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_available_tokens(self, count=10, token_length=15, **kwargs):\n # This is the number of extra tokens to try and retrieve so calls to\n # the db can be limited\n token_buffer = int(math.ceil(count * .05))\n\n if token_buffer < 5:\n token_buffer = 5\n\n available = set([])\n\n while True:\n tokens = [random_alphanum(length=token_length)\n for t in range(count + token_buffer)]\n db_tokens = self.filter(token__in=tokens).values_list('token',\n 
flat=True)\n available.update(set(tokens).difference(db_tokens))\n\n if len(available) >= count:\n return list(available)[:count]", "response": "Gets a list of available tokens."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef create_generic(self, content_object=None, **kwargs):\n if content_object:\n kwargs['content_type'] = ContentType.objects.get_for_model(\n content_object\n )\n kwargs['object_id'] = content_object.id\n\n return self.create(**kwargs)", "response": "Create a generic object."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nfilters by a generic object.", "response": "def filter_generic(self, content_object=None, **kwargs):\n \"\"\"Filter by a generic object.\n\n :param content_object: the content object to filter on.\n \"\"\"\n if content_object:\n kwargs['content_type'] = ContentType.objects.get_for_model(\n content_object\n )\n kwargs['object_id'] = content_object.id\n\n return self.filter(**kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngetting or creates a generic object.", "response": "def get_or_create_generic(self, content_object=None, **kwargs):\n \"\"\"Gets or creates a generic object. This is a wrapper for\n get_or_create(...) 
when you need to get or create a generic object.\n\n :param obj: the object to get or create\n :param kwargs: any other kwargs that the model accepts.\n \"\"\"\n if content_object:\n kwargs['content_type'] = ContentType.objects.get_for_model(\n content_object\n )\n kwargs['object_id'] = content_object.id\n\n return self.get_or_create(**kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngetting all object by a specific model.", "response": "def get_by_model(self, model):\n \"\"\"Gets all object by a specific model.\"\"\"\n content_type = ContentType.objects.get_for_model(model)\n return self.filter(content_type=content_type)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef attr(obj, attr):\n if not obj or not hasattr(obj, attr):\n return ''\n\n return getattr(obj, attr, '')", "response": "Returns the attribute of the object."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nmakes an object iterable.", "response": "def make_iterable(obj):\n \"\"\"Make an object iterable.\n\n >>> make_iterable(obj='hello')\n ('hello',)\n >>> make_iterable(obj=None)\n ()\n \"\"\"\n if not obj:\n return tuple()\n\n if isinstance(obj, (list, tuple, set)):\n return obj\n\n return (obj,)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_form_kwargs(self):\n kwargs = super(ApiFormView, self).get_form_kwargs()\n kwargs['data'] = kwargs.get('initial')\n return kwargs", "response": "Add the data to the form args so you can validate the form s data on a get request."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef form_invalid(self, form, context=None, **kwargs):\n if not context:\n context = {}\n\n context['errors'] = form.errors\n return super(ApiFormView, self).render_to_response(context=context,\n status=400)", "response": 
"This will return the request with form errors as well as any additional context."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef is_valid_rgb_color(value):\n if not value:\n return False\n\n regex = re.compile(RGB_COLOR_REGEX)\n return bool(regex.match(value))", "response": "Checks whether the value is a valid rgb or rgba color string."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nvalidates that a password is at least 7 characters long and has at least 1 digit and 1 letter.", "response": "def validate_password_strength(value):\n \"\"\"Validates that a password is at least 7 characters long and has at least\n 1 digit and 1 letter.\n \"\"\"\n min_length = 7\n\n if len(value) < min_length:\n raise ValidationError(_('Password must be at least {0} characters '\n 'long.').format(min_length))\n\n # check for digit\n if not any(char.isdigit() for char in value):\n raise ValidationError(_('Password must contain at least 1 digit.'))\n\n # check for letter\n if not any(char.isalpha() for char in value):\n raise ValidationError(_('Password must contain at least 1 letter.'))"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef add_metaclass(metaclass):\n def wrapper(cls):\n orig_vars = cls.__dict__.copy()\n orig_vars.pop('__dict__', None)\n orig_vars.pop('__weakref__', None)\n return metaclass(cls.__name__, cls.__bases__, orig_vars)\n return wrapper", "response": "Class decorator for creating a class with a metaclass."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef clean(self, value):\n if not value:\n return None\n\n # if any value exists, then add \"0\" as a placeholder to the remaining\n # values.\n if isinstance(value, list) and any(value):\n for i, item in enumerate(value):\n if not item:\n value[i] = '0'\n\n return super(MultipleDecimalField, 
self).clean(value)", "response": "Validates that the input can be converted to a list of decimals."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef to_python(self, value):\n if not value:\n return None\n\n if isinstance(value, list):\n for index, position_val in enumerate(value):\n val = super(MultipleDecimalField, self).to_python(position_val)\n value[index] = val\n\n return value", "response": "Validates that the input can be converted to a list of decimals."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef random_alphanum(length=10, lower_only=False):\n character_set = ALPHANUM_LOWER if lower_only else ALPHANUM\n sample_size = 5\n chars = random.sample(character_set, sample_size)\n\n while len(chars) < length:\n chars += random.sample(character_set, sample_size)\n\n random.shuffle(chars)\n return ''.join(chars[:length])", "response": "Returns a random alphanumeric string of the given length."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef generate_key(low=7, high=10, lower_only=False):\n return random_alphanum(length=randint(low, high), lower_only=lower_only)", "response": "Generates a random alphanumeric key between low and high characters in\n length."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nget the html link for the object.", "response": "def get_absolute_url_link(self, text=None, cls=None, icon_class=None,\n **attrs):\n \"\"\"Gets the html link for the object.\"\"\"\n if text is None:\n text = self.get_link_text()\n\n return build_link(href=self.get_absolute_url(),\n text=text,\n cls=cls,\n icon_class=icon_class,\n **attrs)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_edit_url_link(self, text=None, cls=None, icon_class=None,\n 
**attrs):\n \"\"\"Gets the html edit link for the object.\"\"\"\n if text is None:\n text = 'Edit'\n\n return build_link(href=self.get_edit_url(),\n text=text,\n cls=cls,\n icon_class=icon_class,\n **attrs)", "response": "Gets the html edit link for the object."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_delete_url_link(self, text=None, cls=None, icon_class=None,\n **attrs):\n \"\"\"Gets the html delete link for the object.\"\"\"\n if text is None:\n text = 'Delete'\n\n return build_link(href=self.get_delete_url(),\n text=text,\n cls=cls,\n icon_class=icon_class,\n **attrs)", "response": "Gets the html delete link for the object."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nexpire all tokens for an email address or list of emails.", "response": "def expire_by_email(self, email_address, **kwargs):\n \"\"\"Expires tokens for an email address or email addresses.\n\n :param email_address: the string email address or emails addresses to\n expire tokens for.\n :param reason: the codified reason for the tokens. If explicitly set\n to None, this will expire all tokens for the email provided.\n \"\"\"\n if not email_address:\n # no email(s) provided. 
Nothing to do.\n return None\n\n if isinstance(email_address, (set, list, tuple)):\n email_address = [e.strip() for e in set(email_address)\n if e and e.strip()]\n\n # make sure there's at least 1 valid email address\n if len(email_address) <= 0:\n # no valid emails\n return None\n\n kwargs['email_address__in'] = email_address\n else:\n kwargs['email_address'] = email_address\n\n # try setting the reason default if one exists (in the case of proxy\n # models)\n if 'reason' not in kwargs and self.model.reason_default:\n kwargs['reason'] = self.model.reason_default\n\n if 'reason' in kwargs and kwargs.get('reason') is None:\n # explicitly setting the reason to None will expire all tokens for\n # a user regardless of the reason.\n del kwargs['reason']\n\n self.filter(**kwargs).update(expires=datetime(1970, 1, 1))"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nmap the GET query string params to the mapped keys and updates the request s GET QueryDict with the mapped keys.", "response": "def map_query_string(self):\n \"\"\"Maps the GET query string params the the query_key_mapper dict and\n updates the request's GET QueryDict with the mapped keys.\n \"\"\"\n if (not self.query_key_mapper or\n self.request.method == 'POST'):\n # Nothing to map, don't do anything.\n # return self.request.POST\n return {}\n\n keys = list(self.query_key_mapper.keys())\n\n return {self.query_key_mapper.get(k) if k in keys else k: v.strip()\n for k, v in self.request.GET.items()}"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef bump_repos_version(module_name, new_version, local_only):\n # Make the cloning directory and change directories into it.\n tmp_dir = tempfile.mkdtemp(dir=os.getcwd())\n\n # Iterate through each repository.\n for owner, repo_name in REPOS_TO_CHANGE:\n repo_url = REPO_URL_FORMAT.format(owner, repo_name)\n\n gh = GitHubApiUtils(owner, repo_name)\n\n os.chdir(tmp_dir)\n\n # 
Clone the repo.\n ret_code = subprocess.call(['git', 'clone', '{}.git'.format(repo_url)])\n if ret_code:\n logging.error('Failed to clone repo {}'.format(repo_url))\n continue\n\n # Change into the cloned repo dir.\n os.chdir(repo_name)\n\n # Create a branch, using the version number.\n branch_name = '{}/{}'.format(module_name, new_version)\n ret_code = subprocess.call(['git', 'checkout', '-b', branch_name])\n if ret_code:\n logging.error('Failed to create branch in repo {}'.format(repo_url))\n continue\n\n # Search through all TXT files to find all lines with the module name, changing the pinned version.\n files_changed = False\n for root, _dirs, files in os.walk('.'):\n for file in files:\n if file.endswith('.txt') and (('requirements' in file) or ('requirements' in root)):\n found = False\n filepath = os.path.join(root, file)\n with open(filepath) as f:\n if '{}=='.format(module_name) in f.read():\n found = True\n if found:\n files_changed = True\n # Change the file in-place.\n for line in fileinput.input(filepath, inplace=True):\n if '{}=='.format(module_name) in line:\n print '{}=={}'.format(module_name, new_version)\n else:\n print line,\n\n if not files_changed:\n # Module name wasn't found in the requirements files.\n logging.info(\"Module name '{}' not found in repo {} - skipping.\".format(module_name, repo_url))\n continue\n\n # Add/commit the files.\n ret_code = subprocess.call(['git', 'commit', '-am', 'Updating {} requirement to version {}'.format(module_name, new_version)])\n if ret_code:\n logging.error(\"Failed to add and commit changed files to repo {}\".format(repo_url))\n continue\n\n if local_only:\n # For local_only, don't push the branch to the remote and create the PR - leave all changes local for review.\n continue\n\n # Push the branch.\n ret_code = subprocess.call(['git', 'push', '--set-upstream', 'origin', branch_name])\n if ret_code:\n logging.error(\"Failed to push branch {} upstream for repo {}\".format(branch_name, repo_url))\n 
continue\n\n # Create a PR with an automated message.\n rollback_branch_push = False\n try:\n # The GitHub \"mention\" below does not work via the API - unfortunately...\n response = gh.create_pull(\n title='Change {} version.'.format(module_name),\n body='Change the required version of {} to {}.\\n\\n@edx-ops/pipeline-team Please review and tag appropriate parties.'.format(module_name, new_version),\n head=branch_name,\n base='master'\n )\n except:\n logging.error('Failed to create PR for repo {} - did you set GITHUB_TOKEN?'.format(repo_url))\n rollback_branch_push = True\n else:\n logging.info('Created PR #{} for repo {}: {}'.format(response.number, repo_url, response.html_url))\n\n if rollback_branch_push:\n # Since the PR creation failed, delete the branch in the remote repo as well.\n ret_code = subprocess.call(['git', 'push', 'origin', '--delete', branch_name])\n if ret_code:\n logging.error(\"ROLLBACK: Failed to delete upstream branch {} for repo {}\".format(branch_name, repo_url))\n\n if not local_only:\n # Remove the temp directory containing all the cloned repos.\n shutil.rmtree(tmp_dir)", "response": "Bump the version number of a Python module in all repos."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ncall an Slack Web API method with parameters.", "response": "async def call(self, method, **params):\n \"\"\"\n Call an Slack Web API method\n\n :param method: Slack Web API method to call\n :param params: {str: object} parameters to method\n :return: dict()\n \"\"\"\n url = self.SLACK_RPC_PREFIX + method\n data = FormData()\n data.add_fields(MultiDict(token=self.bot_token, charset='utf-8', **params))\n response_body = await self.request(\n method='POST',\n url=url,\n data=data\n )\n if 'warning' in response_body:\n logger.warning(f'Warnings received from API call {method}: {response_body[\"warning\"]}')\n if 'ok' not in response_body:\n logger.error(f'No ok marker in slack API call {method} {params} => 
{response_body}')\n raise SlackCallException('There is no ok marker, ... strange', method=method)\n if not response_body['ok']:\n logger.error(f'Slack API call failed {method} {params} => {response_body}')\n raise SlackCallException(f'No OK response returned', method=method)\n return response_body"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef rtm_handler(self, ws_message):\n message = json.loads(ws_message.data)\n if 'reply_to' in message:\n reply_to = message['reply_to']\n future = self.response_futures.pop(reply_to, None)\n if future is None:\n logger.error(f'This should not happen, received reply to unknown message! {message}')\n return None\n future.set_result(message)\n return None\n if 'type' not in message:\n logger.error(f'No idea what this could be {message}')\n return\n message_type = message['type']\n\n if hasattr(self, f'handle_{message_type}'):\n function = getattr(self, f'handle_{message_type}')\n return function(message)\n\n if message_type in self.SLACK_RTM_EVENTS:\n logger.debug(f'Unhandled {message_type}. {message}')\n else:\n logger.warning(f'Unknown {message_type}. {message}')\n return message", "response": "Handle a message from the Slack RTM."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ngets the default prefix for this form.", "response": "def get_default_prefix(self, instance=None):\n \"\"\"Gets the prefix for this form.\n\n :param instance: the form model instance. 
When calling this method\n directly this should almost always stay None so it looks for\n self.instance.\n \"\"\"\n if instance is None and hasattr(self, 'instance'):\n instance = self.instance\n\n if instance and instance.id is not None:\n # it's an existing instance, use the instance prefix\n instance_prefix = self.default_instance_prefix\n if instance_prefix is None:\n instance_prefix = self.__class__.__name__.lower() + 'i-'\n\n return '{0}{1}'.format(instance_prefix,\n instance.id)\n\n if self.default_new_prefix is not None:\n return self.default_new_prefix\n\n return self.__class__.__name__.lower() + 'new-'"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef authenticate(self, username, password):\n # If username is an email address, then try to pull it up\n user = self.get_by_username_or_email(username)\n\n if not user:\n return None\n\n if user.check_password(password):\n return user\n\n return None", "response": "Checks the given username (or email) and password and returns the matching user if the credentials are valid, otherwise None."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn the powerset of the elements of iterable as a frozenset of frozensets.", "response": "def powerset(iterable) -> FrozenSet:\n \"powerset([1,2,3]) --> () (1,) (2,) (3,) (1,2) (1,3) (2,3) (1,2,3)\"\n combs = _powerset(iterable)\n res = frozenset(frozenset(x) for x in combs)\n # res = map(frozenset, combs)\n return res"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef edit_url_link(obj, **kwargs):\n if hasattr(obj, 'get_edit_url_link'):\n return obj.get_edit_url_link(**kwargs)\n\n edit_url = obj.get_edit_url()\n return build_link(href=edit_url, **kwargs)", "response": "This method returns the edit url for the object"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef delete_url_link(obj, **kwargs):\n if hasattr(obj, 
'get_delete_url_link'):\n return obj.get_delete_url_link(**kwargs)\n\n delete_url = obj.get_delete_url()\n return build_link(href=delete_url, **kwargs)", "response": "This method returns the delete url link for the object"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef formfield(self, form_class=None, choices_form_class=None, **kwargs):\n defaults = {\n 'form_class': form_class or self.get_form_class()\n }\n defaults.update(kwargs)\n\n return super(ListField, self).formfield(**defaults)", "response": "Make the default formfield a CommaSeparatedListField."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef validate(self, value, model_instance, **kwargs):\n self.get_choices_form_class().validate(value, model_instance, **kwargs)", "response": "This follows the validate rules for the choices_form_class field used."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nconverting the input JSON value into python structures. 
Raises a ValidationError if the input JSON value can't be converted.", "response": "def to_python(self, value):\n \"\"\"\n Convert the input JSON value into python structures, raises\n django.core.exceptions.ValidationError if the data can't be converted.\n \"\"\"\n if isinstance(value, dict):\n return value\n\n if self.blank and not value:\n return None\n\n if isinstance(value, string_types):\n try:\n return json.loads(value)\n except Exception as e:\n raise ValidationError(str(e))\n\n return value"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_prep_value(self, value):\n try:\n return json.dumps(value, cls=DjangoJSONEncoder)\n except Exception as e:\n raise ValidationError(str(e))", "response": "Convert value to JSON string before save"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _gorg(a):\n assert isinstance(a, GenericMeta)\n while a.__origin__ is not None:\n a = a.__origin__\n return a", "response": "Return the farthest origin of a generic class."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _next_in_mro(cls):\n next_in_mro = object\n # Look for the last occurrence of Generic or Generic[...].\n for i, c in enumerate(cls.__mro__[:-1]):\n if isinstance(c, GenericMeta) and _gorg(c) is Generic:\n next_in_mro = cls.__mro__[i + 1]\n return next_in_mro", "response": "Helper for Generic.__new__. Returns the class after the last occurrence of Generic or Generic[...
] in cls."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nconstruct a __subclasshook__ callable that incorporates the associated __extra__ class in subclass checks performed against cls.", "response": "def _make_subclasshook(cls):\n \"\"\"Construct a __subclasshook__ callable that incorporates\n the associated __extra__ class in subclass checks performed\n against cls.\n \"\"\"\n if isinstance(cls.__extra__, abc.ABCMeta):\n # The logic mirrors that of ABCMeta.__subclasscheck__.\n # Registered classes need not be checked here because\n # cls and its extra share the same _abc_registry.\n def __extrahook__(cls, subclass):\n res = cls.__extra__.__subclasshook__(subclass)\n if res is not NotImplemented:\n return res\n if cls.__extra__ in getattr(subclass, '__mro__', ()):\n return True\n for scls in cls.__extra__.__subclasses__():\n # If we have fake typing and typing detect by module name\n if scls.__class__.__module__ == \"typing\":\n continue\n if isinstance(scls, GenericMeta):\n continue\n if issubclass(subclass, scls):\n return True\n return NotImplemented\n else:\n # For non-ABC extras we'll just call issubclass().\n def __extrahook__(cls, subclass):\n if cls.__extra__ and issubclass(subclass, cls.__extra__):\n return True\n return NotImplemented\n return classmethod(__extrahook__)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ngets the authorization object for the view.", "response": "def get_authorization(self, **kwargs):\n \"\"\"Gets the authorization object for the view.\"\"\"\n if self.authorization is not None:\n return self.authorization\n\n auth_class = self.get_authorization_class()\n auth_user = self.get_authorization_user()\n auth_kwargs = {\n 'token': self.get_authorization_token(**kwargs)\n }\n\n if auth_user and auth_user.is_authenticated():\n auth_kwargs['created_user'] = self.get_authorization_user()\n\n self.authorization = 
auth_class.objects.get_by_token_or_404(\n **auth_kwargs\n )\n return self.authorization"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngetting the user the authorization object is for.", "response": "def get_authorization_user(self, **kwargs):\n \"\"\"Gets the user the authorization object is for.\"\"\"\n if self.authorization_user is not None:\n return self.authorization_user\n\n self.authorization_user = self.request.user\n return self.request.user"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef to_dict(self, dict_cls=FrozenOrderedDict):\n # type: (Type[dict]) -> Dict[str, Any]\n \"\"\"Create a dictionary representation of object attributes\n\n Returns:\n OrderedDict serialised version of self\n \"\"\"\n pairs = tuple((k, serialize_object(getattr(self, k), dict_cls))\n for k in self.call_types)\n if self.typeid:\n d = dict_cls(((\"typeid\", self.typeid),) + pairs)\n else:\n d = dict_cls(pairs)\n return d", "response": "Create a dictionary representation of object attributes."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nmaking sure it s a legit site to redirect to.", "response": "def safe_redirect(next_url, default=None):\n \"\"\"Makes sure it's a legit site to redirect to.\n\n :param default: this is the default url or named url to redirect to in the\n event where next_url is not legit.\n\n \"\"\"\n if is_legit_next_url(next_url):\n return redirect(next_url)\n\n if default:\n return redirect(default)\n\n return redirect('/')"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef replace_url_query_values(url, replace_vals):\n if '?' 
not in url:\n return url\n\n parsed_url = urlparse(url)\n query = dict(parse_qsl(parsed_url.query))\n query.update(replace_vals)\n return '{0}?{1}'.format(url.split('?')[0], urlencode(query))", "response": "Replace querystring values in a url string."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_query_values_from_url(url, keys=None):\n if not url or '?' not in url:\n # no query params\n return None\n\n parsed_url = urlparse(url)\n query = dict(parse_qsl(parsed_url.query))\n\n if keys is None:\n return query\n\n if isinstance(keys, string_types):\n return query.get(keys)\n\n return {k: query.get(k) for k in keys}", "response": "Gets query string values from a url."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_json_response(self, content, **kwargs):\n\n # Don't care to return a django form or view in the response here.\n # Remove those from the context.\n if isinstance(content, dict):\n response_content = {k: deepcopy(v) for k, v in content.items()\n if k not in ('form', 'view') or k in ('form', 'view')\n and not isinstance(v, (Form, View))}\n else:\n response_content = content\n\n return HttpResponse(content=json.dumps(response_content),\n content_type='application/json; charset=utf-8',\n **kwargs)", "response": "Returns a json response object."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ndeletes the object and then redirects to the success URL.", "response": "def delete(self, request, *args, **kwargs):\n \"\"\"\n Calls the delete() method on the fetched object and then\n redirects to the success URL.\n \"\"\"\n self.object = self.get_object()\n success_url = self.get_success_url()\n self.object.delete()\n\n if self.request.is_ajax():\n return JSONResponseMixin.render_to_response(self, context={})\n\n return HttpResponseRedirect(success_url)"} {"SOURCE": "codesearchnet", "instruction": "Given the 
following Python 3 function, write the documentation\ndef find_atomics(formula: Formula) -> Set[PLAtomic]:\n f = formula\n res = set()\n if isinstance(formula, PLFormula):\n res = formula.find_atomics()\n # elif isinstance(f, PLNot):\n # res = res.union(find_atomics(f.f))\n # elif isinstance(f, PLBinaryOperator):\n # for subf in f.formulas:\n # res = res.union(find_atomics(subf))\n else:\n res.add(f)\n return res", "response": "Finds all the atomic subformulas of the given formula."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _transform_delta(f:Formula, formula2AtomicFormula):\n t = type(f)\n if t == PLNot:\n # recurse on the inner formula, not on f itself (avoids infinite recursion)\n return PLNot(_transform_delta(f.f, formula2AtomicFormula))\n # elif isinstance(f, PLBinaryOperator): #PLAnd, PLOr, PLImplies, PLEquivalence\n elif t == PLAnd or t == PLOr or t == PLImplies or t == PLEquivalence:\n return t([_transform_delta(subf, formula2AtomicFormula) for subf in f.formulas])\n elif t == PLTrue or t == PLFalse:\n return f\n else:\n return formula2AtomicFormula[f]", "response": "Transforms a propositional formula into a propositional formula with non-propositional subformulas replaced by frozen atomic formulas."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef to_automaton_(f, labels:Set[Symbol]=None):\n\n nnf = f.to_nnf()\n\n if labels is None:\n # if the labels of the formula are not specified in input,\n # retrieve them from the formula\n labels = nnf.find_labels()\n\n # the alphabet is the powerset of the set of fluents\n alphabet = powerset(labels)\n initial_state = MacroState({nnf})\n final_states = {MacroState()}\n delta = set()\n\n d = f.delta(PLFalseInterpretation(), epsilon=True)\n if d.truth(d):\n final_states.add(initial_state)\n\n states = {MacroState(), initial_state}\n\n states_changed, delta_changed = True, True\n\n while states_changed or delta_changed:\n\n states_changed, delta_changed = False, False\n for actions_set in alphabet:\n states_list = 
list(states)\n for q in states_list:\n\n # delta function applied to every formula in the macro state Q\n delta_formulas = [f.delta(actions_set) for f in q]\n\n # find the list of atoms, which are \"true\" atoms (i.e. propositional atoms) or LDLf formulas\n atomics = [s for subf in delta_formulas for s in find_atomics(subf)]\n\n # \"freeze\" the found atoms as symbols and build a mapping from symbols to formulas\n symbol2formula = {Symbol(str(f)): f for f in atomics if f != PLTrue() and f != PLFalse()}\n\n # build a map from formula to a \"freezed\" propositional Atomic Formula\n formula2atomic_formulas = {\n f: PLAtomic(Symbol(str(f)))\n if f != PLTrue() and f != PLFalse()# and not isinstance(f, PLAtomic)\n else f for f in atomics\n }\n\n # the final list of Propositional Atomic Formulas, one for each formula in the original macro state Q\n transformed_delta_formulas = [_transform_delta(f, formula2atomic_formulas) for f in delta_formulas]\n\n # the empty conjunction stands for true\n if len(transformed_delta_formulas) == 0:\n conjunctions = PLTrue()\n elif len(transformed_delta_formulas) == 1:\n conjunctions = transformed_delta_formulas[0]\n else:\n conjunctions = PLAnd(transformed_delta_formulas)\n\n # the model in this case is the smallest set of symbols s.t. 
the conjunction of \"freezed\" atomic formula\n # is true.\n models = frozenset(conjunctions.minimal_models(Alphabet(symbol2formula)))\n\n if len(models) == 0:\n continue\n for min_model in models:\n q_prime = MacroState(\n {symbol2formula[s] for s in min_model.true_propositions})\n\n len_before = len(states)\n states.add(q_prime)\n if len(states) == len_before + 1:\n states_list.append(q_prime)\n states_changed = True\n\n len_before = len(delta)\n delta.add((q, actions_set, q_prime))\n if len(delta) == len_before + 1:\n delta_changed = True\n\n # check if q_prime should be added as final state\n if len(q_prime) == 0:\n final_states.add(q_prime)\n else:\n subf_deltas = [subf.delta(PLFalseInterpretation(), epsilon=True) for subf in q_prime]\n if len(subf_deltas)==1:\n q_prime_delta_conjunction = subf_deltas[0]\n else:\n q_prime_delta_conjunction = PLAnd(subf_deltas)\n\n if q_prime_delta_conjunction.truth(PLFalseInterpretation()):\n final_states.add(q_prime)\n\n\n alphabet = PythomataAlphabet({PLInterpretation(set(sym)) for sym in alphabet})\n delta = frozenset((i, PLInterpretation(set(a)), o) for i, a, o in delta)\n\n\n nfa = NFA.fromTransitions(\n alphabet=alphabet,\n states=frozenset(states),\n initial_state=initial_state,\n accepting_states=frozenset(final_states),\n transitions=delta\n )\n\n return nfa", "response": "DEPRECATED - Builds a NFA from a LDLfFormula."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngetting a setting from the settings file.", "response": "def get_setting(key, **kwargs):\n \"\"\"Gets a settings key or raises an improperly configured error.\n\n :param key: the settings key to get.\n :param default: the default value to return if no value is found\n \"\"\"\n has_default = 'default' in kwargs\n default_val = kwargs.get('default')\n\n try:\n if has_default:\n return getattr(settings, key, default_val)\n else:\n return getattr(settings, key)\n except Exception as e:\n raise ImproperlyConfigured(\n _('\"{0}\" 
setting has not been properly set. {1}').format(key, e)\n )"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_class_from_settings(settings_key):\n cls_path = getattr(settings, settings_key, None)\n if not cls_path:\n raise NotImplementedError()\n\n try:\n # First check to see if it's an installed model\n return get_model_from_settings(settings_key=settings_key)\n except:\n try:\n # Next, check from installed apps\n return get_class_from_settings_from_apps(settings_key=settings_key)\n except:\n # Last, try to load from the full path\n return get_class_from_settings_full_path(settings_key)", "response": "Gets a class from a setting key."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_model_from_settings(settings_key):\n cls_path = getattr(settings, settings_key, None)\n if not cls_path:\n raise NotImplementedError()\n\n try:\n app_label, model_name = cls_path.split('.')\n except ValueError:\n raise ImproperlyConfigured(\"{0} must be of the form \"\n \"'app_label.model_name'\".format(settings_key))\n\n model = apps.get_model(app_label, model_name)\n\n if model is None:\n raise ImproperlyConfigured(\"{0} refers to model '%s' that has not \"\n \"been installed\".format(settings_key))\n\n return model", "response": "Return the django model from a settings key."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ntrying and get a class from a settings key in the installed apps.", "response": "def get_class_from_settings_from_apps(settings_key):\n \"\"\"Try and get a class from a settings path by lookin in installed apps.\n \"\"\"\n cls_path = getattr(settings, settings_key, None)\n\n if not cls_path:\n raise NotImplementedError()\n\n try:\n app_label = cls_path.split('.')[-2]\n model_name = cls_path.split('.')[-1]\n except ValueError:\n raise ImproperlyConfigured(\"{0} must be of the form \"\n 
\"'app_label.model_name'\".format(\n settings_key))\n\n app = apps.get_app_config(app_label).models_module\n\n if not app:\n raise ImproperlyConfigured(\"{0} setting refers to an app that has not \"\n \"been installed\".format(settings_key))\n\n return getattr(app, model_name)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nget a class from it s full path.", "response": "def get_class_from_settings_full_path(settings_key):\n \"\"\"Get a class from it's full path.\n\n Example:\n\n some.path.module.MyClass\n \"\"\"\n cls_path = getattr(settings, settings_key, None)\n\n if not cls_path:\n raise NotImplementedError()\n\n try:\n module_name, class_name = cls_path.rsplit('.', 1)\n except ValueError:\n raise ImproperlyConfigured(\"{0} must be of the form \"\n \"'some.path.module.MyClass'\".format(\n settings_key))\n\n manager_module = importlib.import_module(module_name)\n\n if not manager_module:\n raise ImproperlyConfigured(\"{0} refers to a module that has not been \"\n \"installed\".format(settings_key))\n\n return getattr(manager_module, class_name)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nget a function from the string path defined in a settings file.", "response": "def get_function_from_settings(settings_key):\n \"\"\"Gets a function from the string path defined in a settings file.\n\n Example:\n\n # my_app/my_file.py\n def some_function():\n # do something\n pass\n\n # settings.py\n SOME_FUNCTION = 'my_app.my_file.some_function'\n\n > get_function_from_settings('SOME_FUNCTION')\n \n \"\"\"\n\n renderer_func_str = getattr(settings, settings_key, None)\n if not renderer_func_str:\n return None\n\n module_str, renderer_func_name = renderer_func_str.rsplit('.', 1)\n\n try:\n mod = importlib.import_module(module_str)\n return getattr(mod, renderer_func_name)\n except Exception:\n return None"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 
code\ndef caller_locals():\n # type: () -> Dict\n \"\"\"Return the frame object for the caller's stack frame.\"\"\"\n try:\n raise ValueError\n except ValueError:\n _, _, tb = sys.exc_info()\n assert tb, \"Can't get traceback, this shouldn't happen\"\n caller_frame = tb.tb_frame.f_back.f_back\n return caller_frame.f_locals", "response": "Return the frame object for the caller s stack frame."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef make_repr(inst, attrs):\n # type: (object, Sequence[str]) -> str\n \"\"\"Create a repr from an instance of a class\n\n Args:\n inst: The class instance we are generating a repr of\n attrs: The attributes that should appear in the repr\n \"\"\"\n arg_str = \", \".join(\n \"%s=%r\" % (a, getattr(inst, a)) for a in attrs if hasattr(inst, a))\n repr_str = \"%s(%s)\" % (inst.__class__.__name__, arg_str)\n return repr_str", "response": "Create a repr from an instance of a class\n "} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef default_rdo_tr(mod):\n pkg = mod.rsplit('-python')[0]\n pkg = pkg.replace('_', '-').replace('.', '-').lower()\n if not pkg.startswith('python-'):\n pkg = 'python-' + pkg\n py2pkg = pkg\n py3pkg = re.sub('python', 'python3', pkg)\n return (pkg, py2pkg, py3pkg)", "response": "Default translation function for Fedora - RDO based systems"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef default_ubuntu_tr(mod):\n pkg = 'python-%s' % mod.lower()\n py2pkg = pkg\n py3pkg = 'python3-%s' % mod.lower()\n return (pkg, py2pkg, py3pkg)", "response": "Default translation function for Ubuntu based systems"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ndefault translation function for openSUSE SLES and otherSUSE based systems", "response": "def default_suse_tr(mod):\n \"\"\"\n Default translation function for openSUSE, SLES, and other\n SUSE 
based systems\n\n Returns a tuple of 3 elements - the unversioned name, the python2 versioned\n name and the python3 versioned name.\n \"\"\"\n pkg = 'python-%s' % mod\n py2pkg = 'python2-%s' % mod\n py3pkg = 'python3-%s' % mod\n return (pkg, py2pkg, py3pkg)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef module2package(mod, dist, pkg_map=None, py_vers=('py',)):\n if not pkg_map:\n pkg_map = get_pkg_map(dist)\n for rule in pkg_map:\n pkglist = rule(mod, dist)\n if pkglist:\n break\n else:\n tr_func = get_default_tr_func(dist)\n pkglist = tr_func(mod)\n\n output = []\n for v in py_vers:\n if v == 'py':\n output.append(pkglist[0])\n elif v == 'py2':\n output.append(pkglist[1])\n elif v == 'py3':\n output.append(pkglist[2])\n else:\n raise Exception('Invalid version \"%s\"' % (v))\n\n if len(output) == 1:\n # just return a single value (backwards compatible)\n return output[0]\n else:\n return output", "response": "Return a corresponding package name for a python module."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn a corresponding OpenStack upstream name for a python module.", "response": "def module2upstream(mod):\n \"\"\"Return a corresponding OpenStack upstream name for a python module.\n\n mod -- python module name\n \"\"\"\n for rule in OPENSTACK_UPSTREAM_PKG_MAP:\n pkglist = rule(mod, dist=None)\n if pkglist:\n return pkglist[0]\n return mod"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef main():\n parser = argparse.ArgumentParser(description='Python module name to'\n 'package name')\n group = parser.add_mutually_exclusive_group()\n group.add_argument('--dist', help='distribution style '\n '(default: %(default)s)',\n default=platform.linux_distribution()[0])\n group.add_argument('--upstream', help='map to OpenStack project name',\n action='store_true')\n parser.add_argument('--pyver', help='Python versions to 
return. \"py\" is '\n 'the unversioned name',\n action='append', choices=['py', 'py2', 'py3'],\n default=[])\n parser.add_argument('modulename', help='python module name')\n args = vars(parser.parse_args())\n\n pyversions = args['pyver'] if args['pyver'] else ['py']\n\n if args['upstream']:\n print(module2upstream(args['modulename']))\n else:\n pylist = module2package(args['modulename'], args['dist'],\n py_vers=pyversions)\n # When only 1 version is requested, it will be returned as a string,\n # for backwards compatibility. Else, it will be a list.\n if type(pylist) is list:\n print(' '.join(pylist))\n else:\n print(pylist)", "response": "for resolving names from command line"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_paging(self):\n orig_page_num = self.page_num\n orig_page_size = self.page_size\n\n try:\n\n page_num = int(self.request.GET.get(self.page_kwarg or 'p'))\n\n if page_num < 1:\n page_num = orig_page_num\n except:\n page_num = orig_page_num\n\n try:\n orig_page_size = self.page_size\n page_size = int(self.request.GET.get(self.page_size_kwarg or 'ps'))\n\n if page_size < 1:\n page_size = orig_page_size\n except:\n page_size = orig_page_size\n\n return page_num, page_size", "response": "Gets the paging values passed through the query string params."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nmaking a call_types dictionary that describes what arguments to pass to f Taxonomy", "response": "def make_call_types(f, globals_d):\n # type: (Callable, Dict) -> Tuple[Dict[str, Anno], Anno]\n \"\"\"Make a call_types dictionary that describes what arguments to pass to f\n\n Args:\n f: The function to inspect for argument names (without self)\n globals_d: A dictionary of globals to lookup annotation definitions in\n \"\"\"\n arg_spec = getargspec(f)\n args = [k for k in arg_spec.args if k != \"self\"]\n\n defaults = {} # type: Dict[str, Any]\n if 
arg_spec.defaults:\n default_args = args[-len(arg_spec.defaults):]\n for a, default in zip(default_args, arg_spec.defaults):\n defaults[a] = default\n\n if not getattr(f, \"__annotations__\", None):\n # Make string annotations from the type comment if there is one\n annotations = make_annotations(f, globals_d)\n else:\n annotations = f.__annotations__\n\n call_types = OrderedDict() # type: Dict[str, Anno]\n for a in args:\n anno = anno_with_default(annotations[a], defaults.get(a, NO_DEFAULT))\n assert isinstance(anno, Anno), \\\n \"Argument %r has type %r which is not an Anno\" % (a, anno)\n call_types[a] = anno\n\n return_type = anno_with_default(annotations.get(\"return\", None))\n if return_type is Any:\n return_type = Anno(\"Any return value\", Any, \"return\")\n assert return_type is None or isinstance(return_type, Anno), \\\n \"Return has type %r which is not an Anno\" % (return_type,)\n\n return call_types, return_type"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ncreate an annotations dictionary from Python 2 type comments and type idents.", "response": "def make_annotations(f, globals_d=None):\n # type: (Callable, Dict) -> Dict[str, Any]\n \"\"\"Create an annotations dictionary from Python2 type comments\n\n http://mypy.readthedocs.io/en/latest/python2.html\n\n Args:\n f: The function to examine for type comments\n globals_d: The globals dictionary to get type idents from. If not\n specified then make the annotations dict contain strings rather\n than the looked up objects\n \"\"\"\n locals_d = {} # type: Dict[str, Any]\n if globals_d is None:\n # If not given a globals_d then we should just populate annotations with\n # the strings in the type comment.\n globals_d = {}\n # The current approach is to use eval, which means manufacturing a\n # dict like object that will just echo the string back to you. 
This\n # has a number of complexities for somthing like numpy.number or\n # Callable[..., int], which are handled in EchoStr above, so it might be\n # better off as an ast.parse in the future...\n locals_d = EchoDict()\n lines, _ = inspect.getsourcelines(f)\n arg_spec = getargspec(f)\n args = list(arg_spec.args)\n if arg_spec.varargs is not None:\n args.append(arg_spec.varargs)\n if arg_spec.keywords is not None:\n args.append(arg_spec.keywords)\n it = iter(lines)\n types = [] # type: List\n found = None\n for token in tokenize.generate_tokens(lambda: next(it)):\n typ, string, start, end, line = token\n if typ == tokenize.COMMENT:\n found = type_re.match(string)\n if found:\n parts = found.groups()\n # (...) is used to represent all the args so far\n if parts[0] != \"(...)\":\n expr = parts[0].replace(\"*\", \"\")\n try:\n ob = eval(expr, globals_d, locals_d)\n except Exception as e:\n raise ValueError(\n \"Error evaluating %r: %s\" % (expr, e))\n if isinstance(ob, tuple):\n # We got more than one argument\n types += list(ob)\n else:\n # We got a single argument\n types.append(ob)\n if parts[1]:\n # Got a return, done\n try:\n ob = eval(parts[2], globals_d, locals_d)\n except Exception as e:\n raise ValueError(\n \"Error evaluating %r: %s\" % (parts[2], e))\n if args and args[0] in [\"self\", \"cls\"]:\n # Allow the first argument to be inferred\n if len(args) == len(types) + 1:\n args = args[1:]\n assert len(args) == len(types), \\\n \"Args %r Types %r length mismatch\" % (args, types)\n ret = dict(zip(args, types))\n ret[\"return\"] = ob\n return ret\n if found:\n # If we have ever found a type comment, but not the return value, error\n raise ValueError(\"Got to the end of the function without seeing ->\")\n return {}"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\noverrides get_object to set the object in the object attribute.", "response": "def get_object(self, **kwargs):\n \"\"\"Sometimes preprocessing of a view 
need to happen before the object\n attribute has been set for a view. In this case, just return the\n object if it has already been set when it's called down the road since\n there's no need to make another query.\n \"\"\"\n if hasattr(self, 'object') and self.object:\n return self.object\n\n obj = super(CommonSingleObjectViewMixin, self).get_object(**kwargs)\n self.object = obj\n return obj"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nconvert a hex timestamp to a datetime object.", "response": "def hex_timestamp_to_datetime(hex_timestamp):\n \"\"\"Converts hex timestamp to a datetime object.\n\n >>> hex_timestamp_to_datetime('558BBCF9')\n datetime.datetime(2015, 6, 25, 8, 34, 1)\n >>> hex_timestamp_to_datetime('0x558BBCF9')\n datetime.datetime(2015, 6, 25, 8, 34, 1)\n >>> datetime.fromtimestamp(0x558BBCF9)\n datetime.datetime(2015, 6, 25, 8, 34, 1)\n \"\"\"\n if not hex_timestamp.startswith('0x'):\n hex_timestamp = '0x{0}'.format(hex_timestamp)\n\n return datetime.fromtimestamp(int(hex_timestamp, 16))"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef now_by_tz(tz='US/Central', ignoretz=True):\n if isinstance(tz, string_types):\n tz = pytz.timezone(tz)\n\n if ignoretz:\n return datetime.now(tz).replace(tzinfo=None)\n return datetime.now(tz)", "response": "Gets the current datetime object by timezone."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef tz_to_utc(dt, tz, ignoretz=True):\n if isinstance(tz, string_types):\n tz = pytz.timezone(tz)\n\n dt = tz.localize(dt)\n dt = datetime.astimezone(dt, pytz.timezone('UTC'))\n\n if ignoretz:\n return dt.replace(tzinfo=None)\n return dt", "response": "Converts a datetime object from the specified timezone to a UTC datetime."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef utc_to_tz(dt, tz, ignoretz=True):\n if isinstance(tz, 
string_types):\n tz = pytz.timezone(tz)\n\n dt = pytz.utc.localize(dt)\n dt = dt.astimezone(tz)\n\n if ignoretz:\n return dt.replace(tzinfo=None)\n return dt", "response": "Converts the given UTC datetime object to the specific timezone."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef parse_date(dt, ignoretz=True, as_tz=None):\n dttm = parse_datetime(dt, ignoretz=ignoretz)\n return None if dttm is None else dttm.date()", "response": "Parse a date string into a date object."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nparses a string datetime into a datetime object.", "response": "def parse_datetime(dt, ignoretz=True, **kwargs):\n \"\"\"\n :param dt: string datetime to convert into datetime object.\n :return: datetime object if the string can be parsed into a datetime.\n Otherwise, return None.\n\n :see: http://labix.org/python-dateutil\n\n Examples:\n\n >>> parse_datetime('2011-12-30 13:45:12 CDT')\n 2011-12-30 13:45:12\n >>> parse_datetime('12/30/2011 13:45:12 CDT')\n 2011-12-30 13:45:12\n >>> parse_datetime('2011-12-30 13:45:12 CDT', ignoretz=False)\n 2011-12-30 13:45:12-06:00\n >>> parse_datetime('12/30/2011 13:45:12 CDT', ignoretz=False)\n 2011-12-30 13:45:12-06:00\n\n \"\"\"\n try:\n return parse(dt, ignoretz=ignoretz, **kwargs)\n except:\n return None"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef turn_emails_off(view_func):\n\n # Dummy email backend so no emails are sent.\n EMAIL_BACKEND_DUMMY = 'django.core.mail.backends.dummy.EmailBackend'\n\n def decorated(request, *args, **kwargs):\n orig_email_backend = settings.EMAIL_BACKEND\n settings.EMAIL_BACKEND = EMAIL_BACKEND_DUMMY\n\n response = view_func(request, *args, **kwargs)\n settings.EMAIL_BACKEND = orig_email_backend\n return response\n\n return decorated", "response": "Turns emails off so no emails will be sent."} {"SOURCE": 
"codesearchnet", "instruction": "Implement a Python 3 function for\ngetting the md5 hash for a file.", "response": "def get_md5_for_file(file):\n \"\"\"Get the md5 hash for a file.\n\n :param file: the file to get the md5 hash for\n \"\"\"\n md5 = hashlib.md5()\n\n while True:\n data = file.read(md5.block_size)\n\n if not data:\n break\n\n md5.update(data)\n\n return md5.hexdigest()"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_dict_from_json_file(path, encoding='utf-8'):\n with open(path, encoding=encoding) as data_file:\n return json.loads(data_file.read())", "response": "Gets a dict of data form a json file."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef linebreaks_safe(value, autoescape=True):\n if isinstance(value, string_types) and '\\n' in value:\n return linebreaks_filter(value, autoescape=autoescape)\n\n return value", "response": "Adds linebreaks only for text that has a newline character."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef add_sub_resource(self, relative_id, sub_resource):\n existing_sub_resources = self.resources.get(sub_resource.RELATIVE_PATH_TEMPLATE, defaultdict(list))\n existing_sub_resources[relative_id].append(sub_resource)\n self.resources.update({sub_resource.RELATIVE_PATH_TEMPLATE: existing_sub_resources})", "response": "Add a sub resource to the resource list"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns the name of the CloudShell model", "response": "def cloudshell_model_name(self):\n \"\"\"Return the name of the CloudShell model\"\"\"\n if self.shell_name:\n return \"{shell_name}.{resource_model}\".format(shell_name=self.shell_name,\n resource_model=self.RESOURCE_MODEL.replace(\" \", \"\"))\n else:\n return self.RESOURCE_MODEL"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 
function for\nslicing a correlation matrix of a y - dimensional dataset and a p - value matrix of a y - dimensional dataset", "response": "def slice_yx(r, pval, ydim=1):\n \"\"\"slice a correlation and p-value matrix of a (y,X) dataset\n into a (y,x_i) vector and (x_j, x_k) matrices\n\n Parameters\n ----------\n r : ndarray\n Correlation Matrix of a (y,X) dataset\n\n pval : ndarray\n p-values\n\n ydim : int\n Number of target variables y, i.e. the first ydim-th columns\n and rows are (y, x_i) correlations\n\n Returns\n -------\n y_r : ndarray\n 1D vector or ydim-column array with (y,x) correlations\n\n y_pval : ndarray\n 1D vector or ydim-column array with (y,x) p-values\n\n x_r : ndarray\n correlation matrix (x_j, x_k)\n\n x_pval : ndarray\n matrix with p-values\n\n Example\n -------\n import korr\n r, pval = korr.mcc(np.c_[y, X])\n y_r, y_pval, x_r, x_pval = slice_yx(r, pval, ydim=1)\n print(np.c_[y_r, y_pval])\n korr.corrgram(x_r, x_pval)\n \"\"\"\n if ydim == 1:\n return (\n r[1:, :1].reshape(-1, ), pval[1:, :1].reshape(-1, ),\n r[1:, 1:], pval[1:, 1:])\n else:\n return (\n r[ydim:, :ydim], pval[ydim:, :ydim],\n r[ydim:, ydim:], pval[ydim:, ydim:])"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef parse_expression(source: str) -> ExpressionSource:\n '''Returns tuple with expression type and expression body'''\n if not is_expression(source):\n msg = 'Expression is not valid. 
Expression should be matched with regular expression: {0}'\\\n .format(EXPRESSION_REGEX)\n raise ExpressionError(msg, source)\n if not source.startswith('{'):\n [type_, source] = source.split(':', 1)\n elif source.endswith('}}'):\n type_ = 'twoways'\n else:\n type_ = 'oneway'\n return (type_, source[1:-1])", "response": "Returns tuple with expression type and expression body"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _server_whitelist(self):\n '''\n Returns list of servers that have not errored in the last five minutes.\n If all servers have errored in the last five minutes, returns list with\n one item, the server that errored least recently.\n '''\n whitelist = []\n for server in self.servers:\n if (server not in self.last_error\n or self.last_error[server] < time.time() - self.PENALTY_BOX_TIME):\n whitelist.append(server)\n if not whitelist:\n whitelist.append(sorted(\n self.last_error.items(), key=lambda kv: kv[1])[0][0])\n return whitelist", "response": "Returns a list of servers that have not errored in the last five minutes."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef create_networking_resource_from_context(shell_name, supported_os, context):\n\n result = GenericNetworkingResource(shell_name=shell_name, name=context.resource.name, supported_os=supported_os)\n result.address = context.resource.address\n result.family = context.resource.family\n result.fullname = context.resource.fullname\n\n result.attributes = dict(context.resource.attributes)\n\n return result", "response": "Creates an instance of Networking Resource by given context"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef find_worst(rho, pval, m=1, rlim=.10, plim=.35):\n # convert to lists\n n = len(rho)\n r = list(np.abs(rho))\n p = list(pval)\n i = list(range(n))\n\n # check m\n if m > 
n:\n warnings.warn(\n 'm is bigger than the available correlations in rho and pval.')\n m = n\n\n # selected indices\n selected = list()\n # (1) pick the highest/worst p-value\n # |r| <= r_lim\n # p > p_lim\n it = 0\n while (len(selected) < m) and (it < n):\n temp = p.index(max(p)) # temporary index of the remaining values\n worst = i[temp] # store original index as 'worst' before abort loop\n # check\n if (r[temp] <= rlim) and (p[temp] > plim):\n # delete from lists\n r.pop(temp)\n p.pop(temp)\n i.pop(temp)\n # append to abort\n selected.append(worst)\n # next step\n it = it + 1\n # print(selected, i)\n\n # (2) Just pick the highest/worst p-value of the remaining\n # with bad correlations\n # |r| <= r_lim\n it = 0\n n2 = len(i)\n while (len(selected) < m) and (it < n2):\n temp = p.index(max(p)) # temporary index of the remaining values\n worst = i[temp] # store original index as 'worst' before abort loop\n # check\n if (r[temp] <= rlim):\n # delete from lists\n r.pop(temp)\n p.pop(temp)\n i.pop(temp)\n # append to abort\n selected.append(worst)\n # next step\n it = it + 1\n\n # (3) Pick the lowest correlations\n it = 0\n n3 = len(i)\n while (len(selected) < m) and (it < n3):\n # find the smallest correlation\n temp = r.index(min(r))\n worst = i[temp]\n # delete from lists\n r.pop(temp)\n p.pop(temp)\n i.pop(temp)\n # append to abort\n selected.append(worst)\n # next step\n it = it + 1\n\n return selected", "response": "Find the N worst correlations in a given set of correlation coefficients."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _parse_array(value):\n value = value.lstrip()\n if not value or value[0] not in _bracket_strings:\n return None\n return json.loads(value)", "response": "Coerce value into a list."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncoerce value into a boolean.", "response": "def _parse_boolean(value):\n \"\"\"Coerce value into a bool.\n\n :param 
str value: Value to parse.\n :returns: bool or None if the value is not a boolean string.\n \"\"\"\n value = value.lower()\n if value in _true_strings:\n return True\n elif value in _false_strings:\n return False\n else:\n return None"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ncoercing value into a dict.", "response": "def _parse_object(value):\n \"\"\"Coerce value into a dict.\n\n :param str value: Value to parse.\n :returns: dict or None if the value is not a JSON object\n :raises: TypeError or ValueError if value appears to be an object but can't\n be parsed as JSON.\n \"\"\"\n value = value.lstrip()\n if not value or value[0] not in _brace_strings:\n return None\n return json.loads(value)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef parse_value(value, allowed_types, name='value'):\n if not isinstance(value, str):\n raise ValueError('value for %r must be a string' % name)\n if isinstance(allowed_types, str):\n allowed_types = [allowed_types]\n\n # Note that the order of these type considerations is important. 
Because we\n # have an untyped value that may be one of any given number of types, we\n # need a consistent order of evaluation in cases when there is ambiguity\n # between types.\n\n if 'null' in allowed_types and value == '':\n return 'null', None\n\n # For all of these types, we'll pass the value to the function and it will\n # raise a TypeError or ValueError or return None if it can't be parsed as\n # the given type.\n\n for allowed_type, parser in _parser_funcs:\n if allowed_type in allowed_types:\n try:\n parsed_value = parser(value)\n if parsed_value is not None:\n return allowed_type, parsed_value\n except (TypeError, ValueError):\n # Ignore any errors, and continue trying other types\n pass\n\n raise ParseError('%s must be a valid type (%s)' %\n (name, ', '.join(allowed_types)))", "response": "Parses a value into one of the allowed types."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef parse_json(value: str, sig_params: List[inspect.Parameter] = None) -> dict:\n try:\n loaded = json.loads(value)\n except Exception as e:\n message = 'Error parsing JSON: %r error: %s' % (value, e)\n logging.debug(message, exc_info=e)\n raise ParseError(message)\n\n if sig_params is not None:\n return map_param_names(loaded, sig_params)\n return loaded", "response": "Parse a value as JSON."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nmap request param names to match logic function param names.", "response": "def map_param_names(\n req_params: dict, sig_params: List[inspect.Parameter]) -> dict:\n \"\"\"Maps request param names to match logic function param names.\n\n If a doctor type defined a `param_name` attribute for the name of the\n parameter in the request, we should use that as the key when looking up\n the value for the request parameter.\n\n When we declare a type we can specify what the parameter name\n should be in the request that the annotated type should get 
mapped to.\n\n >>> from doctor.types import number\n >>> Latitude = number('The latitude', param_name='location.lat')\n >>> def my_logic(lat: Latitude): pass\n >>> request_params = {'location.lat': 45.2342343}\n\n In the above example doctor knows to pass the value at key `location.lat`\n to the logic function variable named `lat` since it's annotated by the\n `Latitude` type which specifies what the param_name is on the request.\n\n :param dict req_params: The parameters specified in the request.\n :param dict sig_params: The logic function's signature parameters.\n :returns: A dict of re-mapped params.\n \"\"\"\n new_request_params = {}\n for k, param in sig_params.items():\n param_name = getattr(param.annotation, 'param_name', None)\n key = k if param_name is None else param_name\n if key in req_params:\n new_request_params[k] = req_params[key]\n return new_request_params"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef parse_form_and_query_params(req_params: dict, sig_params: dict) -> dict:\n # Importing here to prevent circular dependencies.\n from doctor.types import SuperType, UnionType\n errors = {}\n parsed_params = {}\n for param, value in req_params.items():\n # Skip request variables not in the function signature.\n if param not in sig_params:\n continue\n # Skip coercing parameters not annotated by a doctor type.\n if not issubclass(sig_params[param].annotation, SuperType):\n continue\n\n # Check if the type has a custom parser for the parameter.\n custom_parser = sig_params[param].annotation.parser\n if custom_parser is not None:\n if not callable(custom_parser):\n warnings.warn(\n 'Parser `{}` is not callable, using default parser.'.format(\n custom_parser))\n custom_parser = None\n\n try:\n if custom_parser is not None:\n parsed_params[param] = custom_parser(value)\n else:\n if issubclass(sig_params[param].annotation, UnionType):\n json_type = [\n _native_type_to_json[_type.native_type]\n 
for _type in sig_params[param].annotation.types\n ]\n else:\n native_type = sig_params[param].annotation.native_type\n json_type = [_native_type_to_json[native_type]]\n # If the type is nullable, also add null as an allowed type.\n if sig_params[param].annotation.nullable:\n json_type.append('null')\n _, parsed_params[param] = parse_value(value, json_type)\n except ParseError as e:\n errors[param] = str(e)\n\n if errors:\n raise TypeSystemError(errors, errors=errors)\n\n return parsed_params", "response": "Parses the form and query parameters from the input dict."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _handle_task(self):\n headers = self.request.headers\n\n message = None\n try:\n status_code, output = process_async_task(\n headers, self.request.body)\n except AbortAndRestart as restart:\n # Async retry status code\n status_code = 549\n message = 'Retry Async Task'\n output = str(restart)\n\n self.response.set_status(status_code, message)\n self.response.out.write(output)", "response": "Pass the request info to the async framework."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nmigrates autoload details. Add namespace for resources and attributes.", "response": "def migrate_autoload_details(autoload_details, shell_name, shell_type):\n \"\"\" Migrate autoload details. 
Add namespace for attributes\n\n :param autoload_details:\n :param shell_name:\n :param shell_type:\n :return:\n \"\"\"\n\n mapping = {}\n for resource in autoload_details.resources:\n resource.model = \"{shell_name}.{model}\".format(shell_name=shell_name, model=resource.model)\n mapping[resource.relative_address] = resource.model\n\n for attribute in autoload_details.attributes:\n\n if not attribute.relative_address: # Root element\n attribute.attribute_name = \"{shell_type}.{attr_name}\".format(shell_type=shell_type,\n attr_name=attribute.attribute_name)\n else:\n attribute.attribute_name = \"{model}.{attr_name}\".format(model=mapping[attribute.relative_address],\n attr_name=attribute.attribute_name)\n\n return autoload_details"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef create_firewall_resource_from_context(shell_name, supported_os, context):\n\n result = GenericFirewallResource(shell_name=shell_name, name=context.resource.name, supported_os=supported_os)\n result.address = context.resource.address\n result.family = context.resource.family\n result.fullname = context.resource.fullname\n\n result.attributes = dict(context.resource.attributes)\n\n return result", "response": "Creates an instance of FirewallResource from given context"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _get_convert_method(self):\n for type_, method in self._map.items():\n if type(self.value) is bool and type_ is not bool:\n continue\n if isinstance(self.value, type_):\n return method\n if is_sql_instance(self.value):\n return self._raw\n return self._undefined", "response": "Get right method to convert the value."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nflattens correlation matrix and p - value matrix into a single table.", "response": "def flatten(rho, pval, sortby=\"cor\"):\n \"\"\"Flatten 
correlation and p-value matrix\n\n Parameters:\n -----------\n rho : ndarray\n Correlation Matrix\n\n pval : ndarray\n Matrix with p-values\n\n sortby : str\n sort the output table by\n - \"cor\" the highest absolute correlation coefficient\n - \"pval\" the lowest p-value\n\n Return:\n -------\n tab : ndarray\n Table with (i, j, cor, pval) rows\n\n Example:\n --------\n from korr import pearson, flatten\n rho, pval = pearson(X)\n tab = flatten(rho, pval, sortby=\"pval\")\n tab.values\n \"\"\"\n n = rho.shape[0]\n\n idx = np.triu_indices(n, k=1)\n\n tab = pd.DataFrame(\n columns=['i', 'j', 'cor', 'pval'],\n data=np.c_[idx[0], idx[1], rho[idx], pval[idx]])\n tab[['i', \"j\"]] = tab[['i', \"j\"]].astype(int)\n\n if sortby == \"cor\":\n tab['abscor'] = np.abs(tab['cor'])\n tab.sort_values(by='abscor', inplace=True, ascending=False)\n elif sortby == \"pval\":\n tab.sort_values(by='pval', inplace=True, ascending=True)\n\n return tab[[\"i\", \"j\", \"cor\", \"pval\"]]"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef max(self) -> int:\n if self.signed:\n v = (1 << (self.bits - 1)) - 1\n else:\n v = (1 << self.bits) - 1\n if self.lsb0:\n v = v - (v % 2)\n return v", "response": "Get the maximum value this immediate can have\n "} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef set(self, value: int):\n if not isinstance(value, int):\n raise self.exception(\"{} is not an integer\".format(value))\n if self.lsb0 and value % 2 == 1:\n raise self.exception(\"{} not power of two\".format(value))\n if not self.signed and value < 0:\n raise self.exception(\"{} cannot be negative\".format(value))\n if value < self.min() or value > self.max():\n raise self.exception(\"{} not in allowed range {}-{}\".format(value, self.min(), self.max()))\n\n self.value = value", "response": "Sets the immediate value of the entry."} {"SOURCE": "codesearchnet", "instruction": "Can 
you tell what is the following Python 3 function doing\ndef set_from_bits(self, value: int):\n if self.signed:\n value = -(value & self.tcmask) + (value & ~self.tcmask)\n self.set(value)", "response": "Set the immediate value from machine code bits."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef randomize(self):\n self.value = randint(self.min(), self.max())\n if self.lsb0:\n self.value = self.value - (self.value % 2)", "response": "Randomize this immediate to a legal value."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _init():\n # If there is a context and it is initialized to this request,\n # return, otherwise reinitialize the _local_context.\n if (hasattr(_local_context, '_initialized') and\n _local_context._initialized == os.environ.get('REQUEST_ID_HASH')):\n return\n\n # Used to track the context object stack.\n _local_context.registry = []\n\n # Used to provide easy access to the currently running Async job.\n _local_context._executing_async_context = None\n _local_context._executing_async = []\n\n # So that we do not inadvertently reinitialize the local context.\n _local_context._initialized = os.environ.get('REQUEST_ID_HASH')\n\n return _local_context", "response": "Initialize the furious context and registry."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ntake an async object and executes its job.", "response": "def run_job():\n \"\"\"Takes an async object and executes its job.\"\"\"\n async = get_current_async()\n async_options = async.get_options()\n\n job = async_options.get('job')\n if not job:\n raise Exception('This async contains no job to execute!')\n\n __, args, kwargs = job\n\n if args is None:\n args = ()\n\n if kwargs is None:\n kwargs = {}\n\n function = async._decorate_job()\n\n try:\n async.executing = True\n async.result = AsyncResult(payload=function(*args, **kwargs),\n 
status=AsyncResult.SUCCESS)\n except Abort as abort:\n logging.info('Async job was aborted: %r', abort)\n async.result = AsyncResult(status=AsyncResult.ABORT)\n\n # QUESTION: In this eventuality, we should probably tell the context we\n # are \"complete\" and let it handle completion checking.\n _handle_context_completion_check(async)\n return\n except AbortAndRestart as restart:\n logging.info('Async job was aborted and restarted: %r', restart)\n raise\n except BaseException as e:\n async.result = AsyncResult(payload=encode_exception(e),\n status=AsyncResult.ERROR)\n\n _handle_results(async_options)\n _handle_context_completion_check(async)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _handle_results(options):\n results_processor = options.get('_process_results')\n if not results_processor:\n results_processor = _process_results\n\n processor_result = results_processor()\n if isinstance(processor_result, (Async, Context)):\n processor_result.start()", "response": "Process the results of executing the Async s target."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nencode an exception to a form that can be passed around and serialized.", "response": "def encode_exception(exception):\n \"\"\"Encode exception to a form that can be passed around and serialized.\n\n This will grab the stack, then strip off the last two calls which are\n encode_exception and the function that called it.\n \"\"\"\n import sys\n return AsyncException(unicode(exception),\n exception.args,\n sys.exc_info(),\n exception)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _process_results():\n async = get_current_async()\n callbacks = async.get_callbacks()\n\n if not isinstance(async.result.payload, AsyncException):\n callback = callbacks.get('success')\n else:\n callback = callbacks.get('error')\n\n if not callback:\n 
raise async.result.payload.exception.with_traceback(\n async.result.payload.traceback[2])", "response": "Process the results from an Async job."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nexecutes the given callback or insert the Async callback or return the result.", "response": "def _execute_callback(async, callback):\n \"\"\"Execute the given callback or insert the Async callback, or if no\n callback is given return the async.result.\n \"\"\"\n from furious.async import Async\n\n if not callback:\n return async.result.payload\n\n if isinstance(callback, Async):\n return callback.start()\n\n return callback()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nrendering a cURL command for the given request.", "response": "def get_example_curl_lines(method: str, url: str, params: dict,\n headers: dict) -> List[str]:\n \"\"\"Render a cURL command for the given request.\n\n :param str method: HTTP request method (e.g. 
\"GET\").\n :param str url: HTTP request URL.\n :param dict params: JSON body, for POST and PUT requests.\n :param dict headers: A dict of HTTP headers.\n :returns: list\n \"\"\"\n parts = ['curl {}'.format(pipes.quote(url))]\n parts.append('-X {}'.format(method))\n for header in headers:\n parts.append(\"-H '{}: {}'\".format(header, headers[header]))\n if method not in ('DELETE', 'GET'):\n # Don't append a json body if there are no params.\n if params:\n parts.append(\"-H 'Content-Type: application/json' -d\")\n pretty_json = json.dumps(params, separators=(',', ': '), indent=4,\n sort_keys=True)\n # add indentation for the closing bracket of the json body\n json_lines = pretty_json.split('\\n')\n json_lines[-1] = ' ' + json_lines[-1]\n pretty_json = '\\n'.join(json_lines)\n parts.append(pipes.quote(pretty_json))\n wrapped = [parts.pop(0)]\n for part in parts:\n if len(wrapped[-1]) + len(part) < 80:\n wrapped[-1] += ' ' + part\n else:\n wrapped[-1] += ' \\\\'\n wrapped.append(' ' + part)\n return wrapped"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_example_lines(headers: Dict[str, str], method: str, url: str,\n params: Dict[str, Any], response: str) -> List[str]:\n \"\"\"Render a reStructuredText example for the given request and response.\n\n :param dict headers: A dict of HTTP headers.\n :param str method: HTTP request method (e.g. \"GET\").\n :param str url: HTTP request URL.\n :param dict params: Form parameters, for POST and PUT requests.\n :param str response: Text response body.\n :returns: list\n \"\"\"\n lines = ['', 'Example Request:', '', '.. code-block:: bash', '']\n lines.extend(prefix_lines(\n get_example_curl_lines(method, url, params, headers), ' '))\n lines.extend(['', 'Example Response:', ''])\n try:\n # Try to parse and prettify the response as JSON. 
If it fails\n # (for whatever reason), we'll treat it as text instead.\n response = json.dumps(json.loads(response), indent=2,\n separators=(',', ': '), sort_keys=True)\n lines.extend(['.. code-block:: json', ''])\n except Exception:\n lines.extend(['.. code-block:: text', ''])\n lines.extend(prefix_lines(response, ' '))\n return lines", "response": "Render a reStructuredText example for the given request and response."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngets an object reference string from the obj instance.", "response": "def get_object_reference(obj: Object) -> str:\n \"\"\"Gets an object reference string from the obj instance.\n\n This adds the object type to ALL_RESOURCES so that it gets documented and\n returns a str which contains a sphinx reference to the documented object.\n\n :param obj: The Object instance.\n :returns: A sphinx docs reference str.\n \"\"\"\n resource_name = obj.title\n if resource_name is None:\n class_name = obj.__name__\n resource_name = class_name_to_resource_name(class_name)\n ALL_RESOURCES[resource_name] = obj\n return ' See :ref:`resource-{}`.'.format(\n '-'.join(resource_name.split(' ')).lower().strip())"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning a string documenting what types the array s items should be documented.", "response": "def get_array_items_description(item: Array) -> str:\n \"\"\"Returns a description for an array's items.\n\n :param item: The Array type whose items should be documented.\n :returns: A string documenting what type the array's items should be.\n \"\"\"\n desc = ''\n if isinstance(item.items, list):\n # This means the type has a list of types where each position is\n # mapped to a different type. 
Document what each type should be.\n desc = ''\n item_pos_template = (\n ' *Item {pos} must be*: {description}{enum}{ref}')\n for pos, item in enumerate(item.items):\n _enum = ''\n ref = ''\n if issubclass(item, Enum):\n _enum = ' Must be one of: `{}`'.format(item.enum)\n if item.case_insensitive:\n _enum += ' (case-insensitive)'\n _enum += '.'\n elif issubclass(item, Object):\n ref = get_object_reference(item)\n\n desc += item_pos_template.format(\n pos=pos, description=item.description, enum=_enum,\n ref=ref)\n\n else:\n # Otherwise just document the type assigned to `items`.\n desc = item.items.description\n _enum = ''\n ref = ''\n if issubclass(item.items, Enum):\n _enum = ' Must be one of: `{}`'.format(\n item.items.enum)\n if item.items.case_insensitive:\n _enum += ' (case-insensitive)'\n _enum += '.'\n elif issubclass(item.items, Object):\n ref = get_object_reference(item.items)\n desc = ' *Items must be*: {description}{enum}{ref}'.format(\n description=desc, enum=_enum, ref=ref)\n return desc"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns the json types for the provided annotated type.", "response": "def get_json_types(annotated_type: SuperType) -> List[str]:\n \"\"\"Returns the json types for the provided annotated type.\n\n This handles special cases for when we encounter UnionType and an Array.\n UnionType's will have all valid types returned. An Array will document\n what the items type is by placing that value in brackets, e.g. 
`list[str]`.\n\n :param annotated_type: A subclass of SuperType.\n :returns: A list of json types.\n \"\"\"\n types = []\n if issubclass(annotated_type, UnionType):\n types = [str(t.native_type.__name__) for t in annotated_type.types]\n elif issubclass(annotated_type, Array):\n # Include the type of items in the list if items is defined.\n if annotated_type.items is not None:\n if not isinstance(annotated_type.items, list):\n # items are all of the same type.\n types.append('list[{}]'.format(\n str(annotated_type.items.native_type.__name__)))\n else:\n # items are different at each index.\n _types = [\n str(t.native_type.__name__) for t in annotated_type.items]\n types.append('list[{}]'.format(','.join(_types)))\n else:\n types.append('list')\n else:\n types.append(str(annotated_type.native_type.__name__))\n\n return types"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_json_object_lines(annotation: ResourceAnnotation,\n properties: Dict[str, Any], field: str,\n url_params: Dict, request: bool = False,\n object_property: bool = False) -> List[str]:\n \"\"\"Generate documentation for the given object annotation.\n\n :param doctor.resource.ResourceAnnotation annotation:\n Annotation object for the associated handler method.\n :param str field: Sphinx field type to use (e.g. ' List:\n \"\"\"Generate documentation lines for the given annotation.\n\n This only documents schemas of type \"object\", or type \"list\" where each\n \"item\" is an object. Other types are ignored (but a warning is logged).\n\n :param doctor.resource.ResourceAnnotation annotation:\n Annotation object for the associated handler method.\n :param str field: Sphinx field type to use (e.g. 
' Response[MyType]\n if issubclass(return_type, Response):\n if return_type.__args__ is not None:\n return_type = return_type.__args__[0]\n if issubclass(return_type, Array):\n if issubclass(return_type.items, Object):\n properties = return_type.items.properties\n field += 'arr'\n else:\n return []\n elif issubclass(return_type, Object):\n properties = return_type.properties\n else:\n return []\n else:\n # If we defined a req_obj_type for the logic, use that type's\n # properties instead of the function signature.\n if annotation.logic._doctor_req_obj_type:\n properties = annotation.logic._doctor_req_obj_type.properties\n else:\n parameters = annotation.annotated_parameters\n properties = {k: p.annotation for k, p in parameters.items()}\n return get_json_object_lines(annotation, properties, field, url_params,\n request)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_resource_object_doc_lines() -> List[str]:\n # First loop through all resources and make sure to add any properties that\n # are objects and not already in `ALL_RESOURCES`. 
We iterate over a copy\n # since we will be modifying the dict during the loop.\n for resource_name, a_type in ALL_RESOURCES.copy().items():\n for prop_a_type in a_type.properties.values():\n if issubclass(prop_a_type, Object):\n resource_name = prop_a_type.title\n if resource_name is None:\n class_name = prop_a_type.__name__\n resource_name = class_name_to_resource_name(class_name)\n ALL_RESOURCES[resource_name] = prop_a_type\n elif (issubclass(prop_a_type, Array) and\n prop_a_type.items is not None and\n not isinstance(prop_a_type.items, list) and\n issubclass(prop_a_type.items, Object)):\n # This means the type is an array of objects, so we want to\n # collect the object as a resource we can document later.\n resource_name = prop_a_type.items.title\n if resource_name is None:\n class_name = prop_a_type.items.__name__\n resource_name = class_name_to_resource_name(class_name)\n ALL_RESOURCES[resource_name] = prop_a_type.items\n\n # If we don't have any resources to document, just return.\n if not ALL_RESOURCES:\n return []\n\n lines = ['Resource Objects', '----------------']\n for resource_name in sorted(ALL_RESOURCES.keys()):\n a_type = ALL_RESOURCES[resource_name]\n # First add a reference to the resource\n resource_ref = '_resource-{}'.format(\n '-'.join(resource_name.lower().split(' ')))\n lines.extend(['.. 
{}:'.format(resource_ref), ''])\n # Add resource name heading\n lines.extend([resource_name, '#' * len(resource_name)])\n # Add resource description\n lines.extend([a_type.description, ''])\n # Only document attributes if it has properties defined.\n if a_type.properties:\n # Add attributes documentation.\n lines.extend(['Attributes', '**********'])\n for prop in a_type.properties:\n prop_a_type = a_type.properties[prop]\n description = a_type.properties[prop].description.strip()\n # Add any object reference if the property is an object or\n # an array of objects.\n obj_ref = ''\n if issubclass(prop_a_type, Object):\n obj_ref = get_object_reference(prop_a_type)\n elif (issubclass(prop_a_type, Array) and\n prop_a_type.items is not None and\n not isinstance(prop_a_type.items, list) and\n issubclass(prop_a_type.items, Object)):\n # This means the type is an array of objects.\n obj_ref = get_object_reference(prop_a_type.items)\n elif (issubclass(prop_a_type, Array) and\n prop_a_type.items is not None):\n description += get_array_items_description(prop_a_type)\n native_type = a_type.properties[prop].native_type.__name__\n if prop in a_type.required:\n description = '**Required**. ' + description\n lines.append('* **{}** (*{}*) - {}{}'.format(\n prop, native_type, description, obj_ref).strip())\n lines.append('')\n # Add example of object.\n lines.extend(['Example', '*******'])\n example = a_type.get_example()\n pretty_json = json.dumps(example, separators=(',', ': '), indent=4,\n sort_keys=True)\n pretty_json_lines = prefix_lines(pretty_json, ' ')\n lines.extend(['.. 
code-block:: json', ''])\n lines.extend(pretty_json_lines)\n return lines", "response": "Generate documentation lines for all collected resource objects."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_name(value) -> str:\n if value.__module__ == '__builtin__':\n return value.__name__\n else:\n return '.'.join((value.__module__, value.__name__))", "response": "Return a best guess at the qualified name for a class or function object."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef normalize_route(route: str) -> str:\n normalized_route = str(route).lstrip('^').rstrip('$').rstrip('?')\n normalized_route = normalized_route.replace('<', '(').replace('>', ')')\n return normalized_route", "response": "Normalize the given route."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef prefix_lines(lines, prefix):\n if isinstance(lines, bytes):\n lines = lines.decode('utf-8')\n if isinstance(lines, str):\n lines = lines.splitlines()\n return [prefix + line for line in lines]", "response": "Add the prefix to each of the lines."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nconverts a camel case class name to a resource name with spaces.", "response": "def class_name_to_resource_name(class_name: str) -> str:\n \"\"\"Converts a camel case class name to a resource name with spaces.\n\n >>> class_name_to_resource_name('FooBarObject')\n 'Foo Bar Object'\n\n :param class_name: The name to convert.\n :returns: The resource name.\n \"\"\"\n s = re.sub('(.)([A-Z][a-z]+)', r'\\1 \\2', class_name)\n return re.sub('([a-z0-9])([A-Z])', r'\\1 \\2', s)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nrender lines of reStructuredText for items yielded by baseHarness. 
iter_annotations.", "response": "def _render_rst(self): # pragma: no cover\n \"\"\"Render lines of reStructuredText for items yielded by\n :meth:`~doctor.docs.base.BaseHarness.iter_annotations`.\n \"\"\"\n # Create a mapping of headers to annotations. We want to group\n # all annotations by a header, but they could be in multiple handlers\n # so we create a map of them here with the heading as the key and\n # the list of associated annotations as a list. This is so we can\n # sort them alphabetically to make reading the api docs easier.\n heading_to_annotations_map = defaultdict(list)\n for heading, route, handler, annotations in (\n self.harness.iter_annotations()):\n # Set the route and handler as attributes so we can retrieve them\n # when we loop through them all below.\n for annotation in annotations:\n annotation.route = route\n annotation.handler = handler\n heading_to_annotations_map[heading].append(annotation)\n\n headings = list(heading_to_annotations_map.keys())\n headings.sort()\n previous_heading = None\n for heading in headings:\n annotations = heading_to_annotations_map.get(heading)\n # Sort all the annotations by title.\n annotations.sort(key=lambda a: a.title)\n # Only emit a new heading if the resource has changed. 
This\n # esnures that documented endpoints for the same resource all\n # end up under a single heading.\n if previous_heading != heading:\n previous_heading = heading\n yield HEADING_TOKEN + heading\n for annotation in annotations:\n route = annotation.route\n normalized_route = normalize_route(route)\n handler = annotation.handler\n # Adds a title for the endpoint.\n if annotation.title is not None:\n yield annotation.title\n yield '#' * len(annotation.title)\n docstring = get_description_lines(getattr(annotation.logic,\n '__doc__', None))\n\n # Documents the logic function associated with the annotation.\n docstring.append(':Logic Func: :func:`~{}.{}`'.format(\n annotation.logic.__module__, annotation.logic.__name__))\n\n field = ' Dict:\n headers = self.headers.copy()\n defined_header_values = self.defined_header_values.get(\n (annotation.http_method.lower(), str(route)))\n if defined_header_values is not None:\n if defined_header_values['update']:\n headers.update(defined_header_values['values'])\n else:\n headers = defined_header_values['values']\n return headers", "response": "Gets the headers for the provided route."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _get_example_values(self, route: str,\n annotation: ResourceAnnotation) -> Dict[str, Any]:\n \"\"\"Gets example values for all properties in the annotation's schema.\n\n :param route: The route to get example values for.\n :type route: werkzeug.routing.Rule for a flask api.\n :param annotation: Schema annotation for the method to be requested.\n :type annotation: doctor.resource.ResourceAnnotation\n :retruns: A dict containing property names as keys and example values\n as values.\n \"\"\"\n defined_values = self.defined_example_values.get(\n (annotation.http_method.lower(), str(route)))\n if defined_values and not defined_values['update']:\n return defined_values['values']\n\n # If we defined a req_obj_type for the logic, use that 
type's\n # example values instead of the annotated parameters.\n if annotation.logic._doctor_req_obj_type:\n values = annotation.logic._doctor_req_obj_type.get_example()\n else:\n values = {\n k: v.annotation.get_example()\n for k, v in annotation.annotated_parameters.items()\n }\n if defined_values:\n values.update(defined_values['values'])\n\n # If this is a GET route, we need to json dumps any parameters that\n # are lists or dicts. Otherwise we'll get a 400 error for those params\n if annotation.http_method == 'GET':\n for k, v in values.items():\n if isinstance(v, (list, dict)):\n values[k] = json.dumps(v)\n return values", "response": "Gets the example values for all properties in the schema."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_device_name(file_name, sys_obj_id, delimiter=\":\"):\n\n try:\n with open(file_name, \"rb\") as csv_file:\n csv_reader = csv.reader(csv_file, delimiter=delimiter)\n for row in csv_reader:\n if len(row) >= 2 and row[0] == sys_obj_id:\n return row[1]\n except IOError:\n pass # file does not exists\n\n return sys_obj_id", "response": "Get the device name by its SNMP sysObjectID property from the file map\n\n "} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef has(self, querypart_name, value=None):\n if super().has(querypart_name, value):\n return True\n if not value:\n return super().has('select_options', querypart_name)\n return False", "response": "Returns True if querypart_name with value is set."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngroups by a set of attributes.", "response": "def group_by(self, *args, **kwds):\n \"\"\"\n Default ordering is ``ASC``.\n\n ``group_by`` accept ``dict`` as you would expect, but note that ``dict``\n does not have same order. Same for named arguments.\n\n .. 
code-block:: python\n\n >>> sqlpuzzle.select('c').from_('t').group_by('a', ('b', 'desc'))\n \n \"\"\"\n self._group_by.group_by(*args, **kwds)\n return self"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef order_by(self, *args, **kwds):\n self._order_by.order_by(*args, **kwds)\n return self", "response": "Order by a key."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef context_complete(context_id):\n logging.info('Context %s is.......... DONE.', context_id)\n\n from furious.context import get_current_async_with_context\n\n _, context = get_current_async_with_context()\n\n if not context:\n logging.error(\"Could not load context\")\n return\n\n for task_id, result in context.result.items():\n logging.info(\"#########################\")\n logging.info(\"Task Id: %s and Result: %s\", task_id, result)\n\n return context_id", "response": "Log out that the context is complete."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nprocess the messages pulled from a queue based off the passed in.", "response": "def process_messages(tag, retries=0):\n \"\"\"Processes the messages pulled fromm a queue based off the tag passed in.\n Will insert another processor if any work was processed or the retry count\n is under the max retry count. 
Will update an aggregated stats object with\n the data in the payload of the messages processed.\n\n :param tag: :class: `str` Tag to query the queue on\n :param retries: :class: `int` Number of retries the job has processed\n \"\"\"\n from furious.batcher import bump_batch\n from furious.batcher import MESSAGE_DEFAULT_QUEUE\n from furious.batcher import MessageIterator\n from furious.batcher import MessageProcessor\n\n from google.appengine.api import memcache\n\n # since we don't have a flag for checking complete we'll re-insert a\n # processor task with a retry count to catch any work that may still be\n # filtering in. If we've hit our max retry count we just bail out and\n # consider the job complete.\n if retries > 5:\n logging.info(\"Process messages hit max retry and is exiting\")\n return\n\n # create a message iterator for the tag in batches of 500\n message_iterator = MessageIterator(tag, MESSAGE_DEFAULT_QUEUE, 500)\n\n client = memcache.Client()\n\n # get the stats object from cache\n stats = client.gets(tag)\n\n # json decode it if it exists otherwise get the default state.\n stats = json.loads(stats) if stats else get_default_stats()\n\n work_processed = False\n\n # loop through the messages pulled from the queue.\n for message in message_iterator:\n work_processed = True\n\n value = int(message.get(\"value\", 0))\n color = message.get(\"color\").lower()\n\n # update the total stats with the value pulled\n set_stats(stats[\"totals\"], value)\n\n # update the specific color status via the value pulled\n set_stats(stats[\"colors\"][color], value)\n\n # insert the stats back into cache\n json_stats = json.dumps(stats)\n\n # try and do an add first to see if it's new.
We can't trust get due to\n # a race condition.\n if not client.add(tag, json_stats):\n # if we couldn't add then let's do a compare and set to safely\n # update the stats\n if not client.cas(tag, json_stats):\n raise Exception(\"Transaction Collision.\")\n\n # bump the process batch id\n bump_batch(tag)\n\n if work_processed:\n # reset the retries as we've processed work\n retries = 0\n else:\n # no work was processed so increment the retries\n retries += 1\n\n # insert another processor\n processor = MessageProcessor(\n target=process_messages, args=(\"colors\",),\n kwargs={'retries': retries}, tag=\"colors\")\n\n processor.start()"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef set_stats(stats, value):\n stats[\"total_count\"] += 1\n stats[\"value\"] += value\n stats[\"average\"] = stats[\"value\"] / stats[\"total_count\"]\n\n # this is just a basic example and not the best way to track aggregation.\n # for max and min old there are cases where this will not work correctly.\n if value > stats[\"max\"]:\n stats[\"max\"] = value\n\n if value < stats[\"min\"] or stats[\"min\"] == 0:\n stats[\"min\"] = value", "response": "Updates the stats with the value passed in."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn a : class : dict of the default stats structure.", "response": "def get_default_stats():\n \"\"\"Returns a :class: `dict` of the default stats structure.\"\"\"\n\n default_stats = {\n \"total_count\": 0,\n \"max\": 0,\n \"min\": 0,\n \"value\": 0,\n \"average\": 0,\n \"last_update\": None,\n }\n\n return {\n \"totals\": default_stats,\n \"colors\": {\n \"red\": default_stats.copy(),\n \"blue\": default_stats.copy(),\n \"yellow\": default_stats.copy(),\n \"green\": default_stats.copy(),\n \"black\": default_stats.copy(),\n }\n }"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef serialize_to_json(result,
unpicklable=False):\n\n json = jsonpickle.encode(result, unpicklable=unpicklable)\n result_for_output = str(json)\n return result_for_output", "response": "Serializes output as JSON and writes it to console output wrapped with special prefix and suffix"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef save(self, folder_path='', configuration_type='running', vrf_management_name=None, return_artifact=False):\n\n if hasattr(self.resource_config, \"vrf_management_name\"):\n vrf_management_name = vrf_management_name or self.resource_config.vrf_management_name\n\n self._validate_configuration_type(configuration_type)\n folder_path = self.get_path(folder_path)\n system_name = re.sub('\\s+', '_', self.resource_config.name)[:23]\n time_stamp = time.strftime(\"%d%m%y-%H%M%S\", time.localtime())\n destination_filename = '{0}-{1}-{2}'.format(system_name, configuration_type.lower(), time_stamp)\n full_path = join(folder_path, destination_filename)\n folder_path = self.get_path(full_path)\n self.save_flow.execute_flow(folder_path=folder_path,\n configuration_type=configuration_type.lower(),\n vrf_management_name=vrf_management_name)\n\n if return_artifact:\n artifact_type = full_path.split(':')[0]\n identifier = full_path.replace(\"{0}:\".format(artifact_type), \"\")\n return OrchestrationSavedArtifact(identifier=identifier, artifact_type=artifact_type)\n\n return destination_filename", "response": "Saves the current configuration to a folder"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef restore(self, path, configuration_type=\"running\", restore_method=\"override\", vrf_management_name=None):\n\n if hasattr(self.resource_config, \"vrf_management_name\"):\n vrf_management_name = vrf_management_name or self.resource_config.vrf_management_name\n\n self._validate_configuration_type(configuration_type)\n path = self.get_path(path)\n 
self.restore_flow.execute_flow(path=path,\n configuration_type=configuration_type.lower(),\n restore_method=restore_method.lower(),\n vrf_management_name=vrf_management_name)", "response": "Restore configuration on device from provided path"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef orchestration_save(self, mode=\"shallow\", custom_params=None):\n\n save_params = {'folder_path': '', 'configuration_type': 'running', 'return_artifact': True}\n params = dict()\n if custom_params:\n params = jsonpickle.decode(custom_params)\n\n save_params.update(params.get('custom_params', {}))\n save_params['folder_path'] = self.get_path(save_params['folder_path'])\n\n saved_artifact = self.save(**save_params)\n\n saved_artifact_info = OrchestrationSavedArtifactInfo(resource_name=self.resource_config.name,\n created_date=datetime.datetime.now(),\n restore_rules=self.get_restore_rules(),\n saved_artifact=saved_artifact)\n save_response = OrchestrationSaveResult(saved_artifacts_info=saved_artifact_info)\n self._validate_artifact_info(saved_artifact_info)\n\n return serialize_to_json(save_response)", "response": "Orchestration Save command\n\n :param mode:\n :param custom_params: json with all required action to configure or remove vlans from certain port\n :return Serialized OrchestrationSavedArtifact to json\n :rtype json"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef orchestration_restore(self, saved_artifact_info, custom_params=None):\n\n if saved_artifact_info is None or saved_artifact_info == '':\n raise Exception('ConfigurationOperations', 'saved_artifact_info is None or empty')\n\n saved_artifact_info = JsonRequestDeserializer(jsonpickle.decode(saved_artifact_info))\n if not hasattr(saved_artifact_info, 'saved_artifacts_info'):\n raise Exception('ConfigurationOperations', 'Saved_artifacts_info is missing')\n saved_config = 
saved_artifact_info.saved_artifacts_info\n params = None\n if custom_params:\n params = JsonRequestDeserializer(jsonpickle.decode(custom_params))\n _validate_custom_params(params)\n\n self._validate_artifact_info(saved_config)\n\n if saved_config.restore_rules.requires_same_resource \\\n and saved_config.resource_name.lower() != self.resource_config.name.lower():\n raise Exception('ConfigurationOperations', 'Incompatible resource, expected {}'.format(self.resource_config.name))\n\n restore_params = {'configuration_type': 'running',\n 'restore_method': 'override',\n 'vrf_management_name': None,\n 'path': '{}:{}'.format(saved_config.saved_artifact.artifact_type,\n saved_config.saved_artifact.identifier)}\n\n if hasattr(params, 'custom_params'):\n if hasattr(params.custom_params, 'restore_method'):\n restore_params['restore_method'] = params.custom_params.restore_method\n\n if hasattr(params.custom_params, 'configuration_type'):\n restore_params['configuration_type'] = params.custom_params.configuration_type\n\n if hasattr(params.custom_params, 'vrf_management_name'):\n restore_params['vrf_management_name'] = params.custom_params.vrf_management_name\n\n if 'startup' in saved_config.saved_artifact.identifier.split('/')[-1]:\n restore_params['configuration_type'] = 'startup'\n\n self.restore(**restore_params)", "response": "Orchestration restore on the device."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nvalidating incoming path and build path url from resource attributes", "response": "def get_path(self, path=''):\n \"\"\"\n Validate incoming path, if path is empty, build it from resource attributes,\n If path is invalid - raise exception\n\n :param path: path to remote file storage\n :return: valid path or :raise Exception:\n \"\"\"\n\n if not path:\n host = self.resource_config.backup_location\n if ':' not in host:\n scheme = self.resource_config.backup_type\n if not scheme or scheme.lower() == self.DEFAULT_FILE_SYSTEM.lower():\n 
scheme = self.file_system\n scheme = re.sub('(:|/+).*$', '', scheme, re.DOTALL)\n host = re.sub('^/+', '', host)\n host = '{}://{}'.format(scheme, host)\n path = host\n\n url = UrlParser.parse_url(path)\n\n if url[UrlParser.SCHEME].lower() in AUTHORIZATION_REQUIRED_STORAGE:\n if UrlParser.USERNAME not in url or not url[UrlParser.USERNAME]:\n url[UrlParser.USERNAME] = self.resource_config.backup_user\n if UrlParser.PASSWORD not in url or not url[UrlParser.PASSWORD]:\n url[UrlParser.PASSWORD] = self._api.DecryptPassword(self.resource_config.backup_password).Value\n try:\n result = UrlParser.build_url(url)\n except Exception as e:\n self._logger.error('Failed to build url: {}'.format(e))\n raise Exception('ConfigurationOperations', 'Failed to build path url to remote host')\n return result"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _validate_configuration_type(self, configuration_type):\n\n if configuration_type.lower() != 'running' and configuration_type.lower() != 'startup':\n raise Exception(self.__class__.__name__, 'Configuration Type is invalid. 
Should be startup or running')", "response": "Validate configuration type\n\n :param configuration_type: configuration_type, should be Startup or Running\n :raise Exception:"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _validate_artifact_info(self, saved_config):\n is_fail = False\n fail_attribute = ''\n for class_attribute in self.REQUIRED_SAVE_ATTRIBUTES_LIST:\n if type(class_attribute) is tuple:\n if not hasattr(saved_config, class_attribute[0]):\n is_fail = True\n fail_attribute = class_attribute[0]\n elif not hasattr(getattr(saved_config, class_attribute[0]), class_attribute[1]):\n is_fail = True\n fail_attribute = class_attribute[1]\n else:\n if not hasattr(saved_config, class_attribute):\n is_fail = True\n fail_attribute = class_attribute\n\n if is_fail:\n raise Exception('ConfigurationOperations',\n 'Mandatory field {0} is missing in Saved Artifact Info request json'.format(\n fail_attribute))", "response": "Validate OrchestrationSavedArtifactInfo object for key components\n "} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef register(key, initializer: callable, param=None):\n '''Adds resolver to global container'''\n get_current_scope().container.register(key, initializer, param)", "response": "Adds resolver to global container"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef register_single(key, value, param=None):\n '''Generates resolver to return singleton value and adds it to global container'''\n get_current_scope().container.register(key, lambda: value, param)", "response": "Generates resolver to return singleton value and adds it to global container"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nwraps function with scope.", "response": "def wrap_with_scope(func, scope_name=None):\n '''Wraps function with 
scope. If scope_name is None current scope is used'''\n if scope_name is None:\n scope_name = get_current_scope().name\n return lambda *args, scope=scope_name, **kwargs: \\\n _call_with_scope(func, scope, args, kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nresolve dependencies using global container and passed it with optional parameters", "response": "def inject(*injections):\n '''Resolves dependencies using global container and passed it with optional parameters'''\n def _decorate(func):\n def _decorated(*args, **kwargs):\n args = list(args)\n keys_to_inject = [name for name in injections if name not in kwargs]\n for key in keys_to_inject:\n kwargs[key] = get_current_scope().container.get(key)\n return func(*args, **kwargs)\n return _decorated\n return _decorate"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef register(self, key, initializer: callable, param=None):\n '''Add resolver to container'''\n if not callable(initializer):\n raise DependencyError('Initializer {0} is not callable'.format(initializer))\n if key not in self._initializers:\n self._initializers[key] = {}\n self._initializers[key][param] = initializer", "response": "Add resolver to container"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef handle_http(handler: Resource, args: Tuple, kwargs: Dict, logic: Callable):\n try:\n # We are checking mimetype here instead of content_type because\n # mimetype is just the content-type, where as content_type can\n # contain encoding, charset, and language information. e.g.\n # `Content-Type: application/json; charset=UTF8`\n if (request.mimetype == 'application/json' and\n request.method in HTTP_METHODS_WITH_JSON_BODY):\n # This is a proper typed JSON request. 
The parameters will be\n # encoded into the request body as a JSON blob.\n if not logic._doctor_req_obj_type:\n request_params = map_param_names(\n request.json, logic._doctor_signature.parameters)\n else:\n request_params = request.json\n else:\n # Try to parse things from normal HTTP parameters\n request_params = parse_form_and_query_params(\n request.values, logic._doctor_signature.parameters)\n\n params = request_params\n # Only filter out additional params if a req_obj_type was not specified.\n if not logic._doctor_req_obj_type:\n # Filter out any params not part of the logic signature.\n all_params = logic._doctor_params.all\n params = {k: v for k, v in params.items() if k in all_params}\n params.update(**kwargs)\n\n # Check for required params\n missing = []\n for required in logic._doctor_params.required:\n if required not in params:\n missing.append(required)\n if missing:\n verb = 'are'\n if len(missing) == 1:\n verb = 'is'\n missing = missing[0]\n error = '{} {} required.'.format(missing, verb)\n raise InvalidValueError(error)\n\n # Validate and coerce parameters to the appropriate types.\n errors = {}\n sig = logic._doctor_signature\n # If a `req_obj_type` was defined for the route, pass all request\n # params to that type for validation/coercion\n if logic._doctor_req_obj_type:\n annotation = logic._doctor_req_obj_type\n try:\n # NOTE: We calculate the value before applying native type in\n # order to support UnionType types which dynamically modifies\n # the native_type property based on the initialized value.\n value = annotation(params)\n params = annotation.native_type(value)\n except TypeError:\n logging.exception(\n 'Error casting and validating params with value `%s`.',\n params)\n raise\n except TypeSystemError as e:\n errors['__all__'] = e.detail\n else:\n for name, value in params.items():\n annotation = sig.parameters[name].annotation\n if annotation.nullable and value is None:\n continue\n try:\n # NOTE: We calculate the value before 
applying native type\n # in order to support UnionType types which dynamically\n # modifies the native_type property based on the initialized\n # value.\n value = annotation(value)\n params[name] = annotation.native_type(value)\n except TypeSystemError as e:\n errors[name] = e.detail\n if errors:\n raise TypeSystemError(errors, errors=errors)\n\n if logic._doctor_req_obj_type:\n # Pass any positional arguments followed by the coerced request\n # parameters to the logic function.\n response = logic(*args, params)\n else:\n # Only pass request parameters defined by the logic signature.\n logic_params = {k: v for k, v in params.items()\n if k in logic._doctor_params.logic}\n response = logic(*args, **logic_params)\n\n # response validation\n if sig.return_annotation != sig.empty:\n return_annotation = sig.return_annotation\n _response = response\n if isinstance(response, Response):\n _response = response.content\n # Check if our return annotation is a Response that supplied a\n # type to validate against. If so, use that type for validation\n # e.g. 
def logic() -> Response[MyType]\n if (issubclass(return_annotation, Response) and\n return_annotation.__args__ is not None):\n return_annotation = return_annotation.__args__[0]\n try:\n return_annotation(_response)\n except TypeSystemError as e:\n response_str = str(_response)\n logging.warning('Response to %s %s does not validate: %s.',\n request.method, request.path,\n response_str, exc_info=e)\n if should_raise_response_validation_errors():\n error = ('Response to {method} {path} `{response}` does not'\n ' validate: {error}'.format(\n method=request.method, path=request.path,\n response=response, error=e.detail))\n raise TypeSystemError(error)\n\n if isinstance(response, Response):\n status_code = response.status_code\n if status_code is None:\n status_code = STATUS_CODE_MAP.get(request.method, 200)\n return (response.content, status_code, response.headers)\n return response, STATUS_CODE_MAP.get(request.method, 200)\n except (InvalidValueError, TypeSystemError) as e:\n errors = getattr(e, 'errors', None)\n raise HTTP400Exception(e, errors=errors)\n except UnauthorizedError as e:\n raise HTTP401Exception(e)\n except ForbiddenError as e:\n raise HTTP403Exception(e)\n except NotFoundError as e:\n raise HTTP404Exception(e)\n except ImmutableError as e:\n raise HTTP409Exception(e)\n except Exception as e:\n # Always re-raise exceptions when DEBUG is enabled for development.\n if current_app.config.get('DEBUG', False):\n raise\n allowed_exceptions = logic._doctor_allowed_exceptions\n if allowed_exceptions and any(isinstance(e, cls)\n for cls in allowed_exceptions):\n raise\n logging.exception(e)\n raise HTTP500Exception('Uncaught error in logic function')", "response": "Handle a Flask HTTP request."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef confusion_to_mcc(*args):\n if len(args) is 1:\n tn, fp, fn, tp = args[0].ravel().astype(float)\n elif len(args) is 4:\n tn, fp, fn, tp = [float(a) for a in 
args]\n else:\n raise Exception((\n \"Input argument is not an 2x2 matrix, \"\n \"nor 4 elements tn, fp, fn, tp.\"))\n\n return (tp * tn - fp * fn) / np.sqrt(\n (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))", "response": "Convert the confusion matrix to Matthews correlation coefficient."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef create_node(xml_node: XmlNode, **init_args):\n '''Creates node from xml node using namespace as module and tag name as class name'''\n inst_type = get_inst_type(xml_node)\n init_args['xml_node'] = xml_node\n inst = create_inst(inst_type, **init_args)\n if not isinstance(inst, Node):\n inst = convert_to_node(inst, **init_args)\n return inst", "response": "Creates node from xml node using namespace as module and tag name as class name"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning type by xml node", "response": "def get_inst_type(xml_node: XmlNode):\n '''Returns type by xml node'''\n (module_path, class_name) = (xml_node.namespace, xml_node.name)\n try:\n return import_module(module_path).__dict__[class_name]\n except (KeyError, ImportError, ModuleNotFoundError):\n message = 'Import \"{0}.{1}\" is failed.'.format(module_path, class_name)\n raise RenderingError(message, xml_node.view_info)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef create_inst(inst_type, **init_args):\n '''Creates class instance with args'''\n args, kwargs = get_init_args(inst_type, init_args)\n return inst_type(*args, **kwargs)", "response": "Creates class instance with args"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_init_args(inst_type, init_args: dict, add_kwargs=False) -> Tuple[List, Dict]:\n '''Returns tuple with args and kwargs to pass it to inst_type constructor'''\n try:\n parameters = signature(inst_type).parameters.values()\n 
args_keys = [p.name for p in parameters \\\n if p.kind in [Parameter.POSITIONAL_ONLY, Parameter.POSITIONAL_OR_KEYWORD] \\\n and p.default == Parameter.empty]\n args = [init_args[key] for key in args_keys]\n kwargs = _get_var_kwargs(parameters, args_keys, init_args)\\\n if add_kwargs else\\\n _get_kwargs(parameters, init_args)\n except KeyError as key_error:\n msg_format = 'parameter with key \"{0}\" is not found in node args'\n raise RenderingError(msg_format.format(key_error.args[0]))\n return (args, kwargs)", "response": "Returns tuple with args and kwargs to pass it to inst_type constructor"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef convert_to_node(instance, xml_node: XmlNode, node_globals: InheritedDict = None)\\\n -> InstanceNode:\n '''Wraps passed instance with InstanceNode'''\n return InstanceNode(instance, xml_node, node_globals)", "response": "Wraps passed instance with InstanceNode"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_logger_with_thread_id(context):\n logger = LoggingSessionContext.get_logger_for_context(context)\n child = logger.getChild(threading.currentThread().name)\n for handler in logger.handlers:\n child.addHandler(handler)\n child.level = logger.level\n for log_filter in logger.filters:\n child.addFilter(log_filter)\n return child", "response": "Create QS Logger for command context AutoLoadCommandContext ResourceCommandContext or ResourceRemoteCommandContext with thread name\n ."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nget the SNMPV3Parameters object from the command context.", "response": "def get_snmp_parameters_from_command_context(resource_config, api, force_decrypt=False):\n \"\"\"\n :param ResourceCommandContext resource_config: command context\n :return:\n \"\"\"\n\n if '3' in resource_config.snmp_version:\n return SNMPV3Parameters(ip=resource_config.address,\n 
snmp_user=resource_config.snmp_v3_user or '',\n snmp_password=api.DecryptPassword(resource_config.snmp_v3_password).Value or '',\n snmp_private_key=resource_config.snmp_v3_private_key or '',\n auth_protocol=resource_config.snmp_v3_auth_protocol or SNMPV3Parameters.AUTH_NO_AUTH,\n private_key_protocol=resource_config.snmp_v3_priv_protocol or SNMPV3Parameters.PRIV_NO_PRIV).get_valid()\n else:\n if resource_config.shell_name or force_decrypt:\n write_community = api.DecryptPassword(resource_config.snmp_write_community).Value or ''\n else:\n write_community = resource_config.snmp_write_community or ''\n\n if write_community:\n return SNMPV2WriteParameters(ip=resource_config.address, snmp_write_community=write_community)\n else:\n if resource_config.shell_name or force_decrypt:\n read_community = api.DecryptPassword(resource_config.snmp_read_community).Value or ''\n else:\n read_community = resource_config.snmp_read_community or ''\n\n return SNMPV2ReadParameters(ip=resource_config.address, snmp_read_community=read_community)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nadds an Async job to this context.", "response": "def add(self, target, args=None, kwargs=None, **options):\n \"\"\"Add an Async job to this context.\n\n Like Context.add(): creates an Async and adds it to our list of tasks.\n but also calls _auto_insert_check() to add tasks to queues\n automatically.\n \"\"\"\n\n # In superclass, add new task to our list of tasks\n target = super(\n AutoContext, self).add(target, args, kwargs, **options)\n\n self._auto_insert_check()\n\n return target"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _auto_insert_check(self):\n\n if not self.batch_size:\n return\n\n if len(self._tasks) >= self.batch_size:\n self._handle_tasks()", "response": "Automatically insert tasks asynchronously. 
Depending on batch_size insert or wait until next call."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_insns(cls = None):\n insns = []\n\n if cls is None:\n cls = Instruction\n\n if \"_mnemonic\" in cls.__dict__.keys():\n insns = [cls]\n\n for subcls in cls.__subclasses__():\n insns += get_insns(subcls)\n\n return insns", "response": "Get all Instructions that are derived from cls."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef reverse_lookup(mnemonic: str):\n for i in get_insns():\n if \"_mnemonic\" in i.__dict__ and i._mnemonic == mnemonic:\n return i\n\n return None", "response": "Find instruction that matches the mnemonic."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngetting a value from a schema and definition.", "response": "def get_value_from_schema(schema, definition: dict, key: str,\n definition_key: str):\n \"\"\"Gets a value from a schema and definition.\n\n If the value has references it will recursively attempt to resolve them.\n\n :param ResourceSchema schema: The resource schema.\n :param dict definition: The definition dict from the schema.\n :param str key: The key to use to get the value from the schema.\n :param str definition_key: The name of the definition.\n :returns: The value.\n :raises TypeSystemError: If the key can't be found in the schema/definition\n or we can't resolve the definition.\n \"\"\"\n resolved_definition = definition.copy()\n if '$ref' in resolved_definition:\n try:\n # NOTE: The resolve method recursively resolves references, so\n # we don't need to worry about that in this function.\n resolved_definition = schema.resolve(definition['$ref'])\n except SchemaError as e:\n raise TypeSystemError(str(e))\n try:\n value = resolved_definition[key]\n except KeyError:\n # Before raising an error, the resolved definition may have an array\n # or object inside 
it that needs to be resolved in order to get\n # values. Attempt that here and then fail if we still can't find\n # the key we are looking for.\n\n # If the key was missing and this is an array, try to resolve it\n # from the items key.\n if resolved_definition['type'] == 'array':\n return [\n get_value_from_schema(schema, resolved_definition['items'], key,\n definition_key)\n ]\n # If the key was missing and this is an object, resolve it from its\n # properties.\n elif resolved_definition['type'] == 'object':\n value = {}\n for prop, definition in resolved_definition['properties'].items():\n value[prop] = get_value_from_schema(\n schema, definition, key, definition_key)\n return value\n raise TypeSystemError(\n 'Definition `{}` is missing a {}.'.format(\n definition_key, key))\n return value"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_types(json_type: StrOrList) -> typing.Tuple[str, str]:\n # If the type is a list, use the first non 'null' value as the type.\n if isinstance(json_type, list):\n for j_type in json_type:\n if j_type != 'null':\n json_type = j_type\n break\n return (json_type, JSON_TYPES_TO_NATIVE[json_type])", "response": "Returns the json and native python type based on the json_type input."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef json_schema_type(schema_file: str, **kwargs) -> typing.Type:\n # Importing here to avoid circular dependencies\n from doctor.resource import ResourceSchema\n schema = ResourceSchema.from_file(schema_file)\n kwargs['schema'] = schema\n\n # Look up the description, example and type in the schema.\n definition_key = kwargs.get('definition_key')\n if definition_key:\n params = [definition_key]\n request_schema = schema._create_request_schema(params, params)\n try:\n definition = request_schema['definitions'][definition_key]\n except KeyError:\n raise TypeSystemError(\n 'Definition `{}` is 
not defined in the schema.'.format(\n definition_key))\n description = get_value_from_schema(\n schema, definition, 'description', definition_key)\n example = get_value_from_schema(\n schema, definition, 'example', definition_key)\n json_type = get_value_from_schema(\n schema, definition, 'type', definition_key)\n json_type, native_type = get_types(json_type)\n kwargs['description'] = description\n kwargs['example'] = example\n kwargs['json_type'] = json_type\n kwargs['native_type'] = native_type\n else:\n try:\n kwargs['description'] = schema.schema['description']\n except KeyError:\n raise TypeSystemError('Schema is missing a description.')\n try:\n json_type = schema.schema['type']\n except KeyError:\n raise TypeSystemError('Schema is missing a type.')\n json_type, native_type = get_types(json_type)\n kwargs['json_type'] = json_type\n kwargs['native_type'] = native_type\n try:\n kwargs['example'] = schema.schema['example']\n except KeyError:\n # Attempt to load from properties, if defined.\n if schema.schema.get('properties'):\n example = {}\n for prop, definition in schema.schema['properties'].items():\n example[prop] = get_value_from_schema(\n schema, definition, 'example', 'root')\n kwargs['example'] = example\n else:\n raise TypeSystemError('Schema is missing an example.')\n\n return type('JsonSchema', (JsonSchema,), kwargs)", "response": "Create a type object from a json schema file."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef string(description: str, **kwargs) -> typing.Type:\n kwargs['description'] = description\n return type('String', (String,), kwargs)", "response": "Create a new string type."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef integer(description, **kwargs) -> typing.Type:\n kwargs['description'] = description\n return type('Integer', (Integer,), kwargs)", "response": "Create a new integer 
type."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef number(description, **kwargs) -> typing.Type:\n kwargs['description'] = description\n return type('Number', (Number,), kwargs)", "response": "Create a number type."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ncreating a type with the given description.", "response": "def boolean(description, **kwargs) -> typing.Type:\n \"\"\"Create a :class:`~doctor.types.Boolean` type.\n\n :param description: A description of the type.\n :param kwargs: Can include any attribute defined in\n :class:`~doctor.types.Boolean`\n \"\"\"\n kwargs['description'] = description\n return type('Boolean', (Boolean,), kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ncreating a new Enum type.", "response": "def enum(description, **kwargs) -> typing.Type:\n \"\"\"Create a :class:`~doctor.types.Enum` type.\n\n :param description: A description of the type.\n :param kwargs: Can include any attribute defined in\n :class:`~doctor.types.Enum`\n \"\"\"\n kwargs['description'] = description\n return type('Enum', (Enum,), kwargs)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef array(description, **kwargs) -> typing.Type:\n kwargs['description'] = description\n return type('Array', (Array,), kwargs)", "response": "Create a new Array type."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncreates a new user defined type.", "response": "def new_type(cls, **kwargs) -> typing.Type:\n \"\"\"Create a user defined type.\n\n The new type will contain all attributes of the `cls` type passed in.\n Any attribute's value can be overwritten using kwargs.\n\n :param kwargs: Can include any attribute defined in\n the provided user defined type.\n \"\"\"\n props = dict(cls.__dict__)\n props.update(kwargs)\n return type(cls.__name__, (cls,), props)"} 
{"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning an example value for the Dict type.", "response": "def get_example(cls) -> dict:\n \"\"\"Returns an example value for the Dict type.\n\n If an example isn't a defined attribute on the class we return\n a dict of example values based on each property's annotation.\n \"\"\"\n if cls.example is not None:\n return cls.example\n return {k: v.get_example() for k, v in cls.properties.items()}"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns an example value for the Array type.", "response": "def get_example(cls) -> list:\n \"\"\"Returns an example value for the Array type.\n\n If an example isn't a defined attribute on the class we return\n a list of 1 item containing the example value of the `items` attribute.\n If `items` is None we simply return a `[1]`.\n \"\"\"\n if cls.example is not None:\n return cls.example\n if cls.items is not None:\n if isinstance(cls.items, list):\n return [item.get_example() for item in cls.items]\n else:\n return [cls.items.get_example()]\n return [1]"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef render_node(xml_node: XmlNode, **args) -> Node:\n try:\n node = create_node(xml_node, **args)\n pipeline = get_pipeline(node)\n run_steps(node, pipeline, **args)\n return node\n except CoreError as error:\n error.add_view_info(xml_node.view_info)\n raise\n except:\n info = exc_info()\n msg = 'Unknown error occurred during rendering'\n error = RenderingError(msg, xml_node.view_info)\n error.add_cause(info[1])\n raise error from info[1]", "response": "Renders node from xml node"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_pipeline(node: Node) -> RenderingPipeline:\n pipeline = _get_registered_pipeline(node)\n if pipeline is None:\n msg = 
_get_pipeline_registration_error_message(node)\n raise RenderingError(msg)\n return pipeline", "response": "Gets rendering pipeline for passed node"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nrunning the node rendering steps", "response": "def run_steps(node: Node, pipeline: RenderingPipeline, **args):\n \"\"\"Runs instance node rendering steps\"\"\"\n for step in pipeline.steps:\n result = step(node, pipeline=pipeline, **args)\n if isinstance(result, dict):\n args = {**args, **result}"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\napplying xml attributes to instance node and setups bindings", "response": "def apply_attributes(node: Node, **_):\n \"\"\"Applies xml attributes to instance node and setups bindings\"\"\"\n for attr in node.xml_node.attrs:\n apply_attribute(node, attr)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nmaps xml attribute to instance node property and setups bindings", "response": "def apply_attribute(node: Node, attr: XmlAttr):\n \"\"\"Maps xml attribute to instance node property and setups bindings\"\"\"\n setter = get_setter(attr)\n stripped_value = attr.value.strip() if attr.value else ''\n if is_expression(stripped_value):\n (binding_type, expr_body) = parse_expression(stripped_value)\n binder().apply(binding_type, node=node, attr=attr, modifier=setter, expr_body=expr_body)\n else:\n setter(node, attr.name, attr.value)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef all_columns(self):\n columns = set()\n for values in self._parts:\n for value in values._parts:\n columns.add(value.column_name)\n return sorted(columns)", "response": "Return a list of all columns."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nfinds the _schema_annotation attribute for the given function.", "response": "def get_annotation(cls, 
fn):\n \"\"\"Find the _schema_annotation attribute for the given function.\n\n This will descend through decorators until it finds something that has\n the attribute. If it doesn't find it anywhere, it will return None.\n\n :param func fn: Find the attribute on this function.\n :returns: an instance of\n :class:`~doctor.resource.ResourceSchemaAnnotation` or\n None.\n \"\"\"\n while fn is not None:\n if hasattr(fn, '_schema_annotation'):\n return fn._schema_annotation\n fn = getattr(fn, 'im_func', fn)\n closure = getattr(fn, '__closure__', None)\n fn = closure[0].cell_contents if closure is not None else None\n return None"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ncreating a JSON schema for a request.", "response": "def _create_request_schema(self, params, required):\n \"\"\"Create a JSON schema for a request.\n\n :param list params: A list of keys specifying which definitions from\n the base schema should be allowed in the request.\n :param list required: A subset of the params that the requester must\n specify in the request.\n :returns: a JSON schema dict\n \"\"\"\n # We allow additional properties because the data this will validate\n # may also include kwargs passed by decorators on the handler method.\n schema = {'additionalProperties': True,\n 'definitions': self.resolve('#/definitions'),\n 'properties': {},\n 'required': required or (),\n 'type': 'object'}\n for param in params:\n schema['properties'][param] = {\n '$ref': '#/definitions/{}'.format(param)}\n return schema"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nupdates an existing note.", "response": "def update_note(note_id: NoteId, body: Body=None, done: Done=None) -> Note:\n \"\"\"Update an existing note.\"\"\"\n if note_id != 1:\n raise NotFoundError('Note does not exist')\n new_note = note.copy()\n if body is not None:\n new_note['body'] = body\n if done is not None:\n new_note['done'] = done\n return 
new_note"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef to_task(self):\n from google.appengine.api.taskqueue import Task\n\n task_args = self.get_task_args().copy()\n\n payload = None\n if 'payload' in task_args:\n payload = task_args.pop('payload')\n\n kwargs = {\n 'method': METHOD_TYPE,\n 'payload': json.dumps(payload)\n }\n\n kwargs.update(task_args)\n\n return Task(**kwargs)", "response": "Return a Task object representing this message."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ninserts the pull task into the requested queue.", "response": "def insert(self):\n \"\"\"Insert the pull task into the requested queue, 'default' if non\n given.\n \"\"\"\n from google.appengine.api.taskqueue import Queue\n\n task = self.to_task()\n\n Queue(name=self.get_queue()).add(task)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn this message as a dict suitable for json encoding.", "response": "def to_dict(self):\n \"\"\"Return this message as a dict suitable for json encoding.\"\"\"\n import copy\n\n options = copy.deepcopy(self._options)\n\n # JSON don't like datetimes.\n eta = options.get('task_args', {}).get('eta')\n if eta:\n options['task_args']['eta'] = time.mktime(eta.timetuple())\n\n return options"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _get_id(self):\n id = self._options.get('id')\n if id:\n return id\n\n id = uuid.uuid4().hex\n self.update_options(id=id)\n return id", "response": "Get the id of this message."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns an Async message from a dict output by Async. 
to_dict.", "response": "def from_dict(cls, message):\n \"\"\"Return a message from a dict output by Async.to_dict.\"\"\"\n\n message_options = message.copy()\n\n # JSON don't like datetimes.\n eta = message_options.get('task_args', {}).get('eta')\n if eta:\n from datetime import datetime\n\n message_options['task_args']['eta'] = datetime.fromtimestamp(eta)\n\n return Message(**message_options)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef to_task(self):\n task_args = self.get_task_args()\n\n # check for name in task args\n name = task_args.get('name', MESSAGE_PROCESSOR_NAME)\n\n # if the countdown isn't in the task_args set it to the frequency\n if not 'countdown' in task_args:\n task_args['countdown'] = self.frequency\n\n task_args['name'] = \"%s-%s-%s-%s\" % (\n name, self.tag, self.current_batch, self.time_throttle)\n\n self.update_options(task_args=task_args)\n\n return super(MessageProcessor, self).to_task()", "response": "Return a task object representing this MessageProcessor job."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn the current batch id for the tag.", "response": "def current_batch(self):\n \"\"\"Return the batch id for the tag.\n\n :return: :class: `int` current batch id\n \"\"\"\n current_batch = memcache.get(self.group_key)\n\n if not current_batch:\n memcache.add(self.group_key, 1)\n current_batch = 1\n\n return current_batch"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nfetching the messages from the specified pull-queue.", "response": "def fetch_messages(self):\n \"\"\"Fetch messages from the specified pull-queue.\n\n This should only be called a single time by a given MessageIterator\n object. 
If the MessageIterator is iterated over again, it should\n return the originally leased messages.\n \"\"\"\n if self._fetched:\n return\n\n start = time.time()\n\n loaded_messages = self.queue.lease_tasks_by_tag(\n self.duration, self.size, tag=self.tag, deadline=self.deadline)\n\n # If we are within 0.1 sec of our deadline and no messages were\n # returned, then we are hitting queue contention issues and this\n # should be a DeadlineExceededError.\n # TODO: investigate other ways around this, perhaps async leases, etc.\n if (not loaded_messages and\n round(time.time() - start, 1) >= self.deadline - 0.1):\n raise DeadlineExceededError()\n\n self._messages.extend(loaded_messages)\n\n self._fetched = True\n\n logging.debug(\"Calling fetch messages with %s:%s:%s:%s:%s:%s\" % (\n len(self._messages), len(loaded_messages),\n len(self._processed_messages), self.duration, self.size, self.tag))"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef next(self):\n if not self._messages:\n if self.auto_delete:\n self.delete_messages()\n raise StopIteration\n\n message = self._messages.pop(0)\n self._processed_messages.append(message)\n return json.loads(message.payload)", "response": "Get the next message from the previously fetched messages."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef delete_messages(self, only_processed=True):\n messages = self._processed_messages\n if not only_processed:\n messages += self._messages\n\n if messages:\n try:\n self.queue.delete_tasks(messages)\n except Exception:\n logging.exception(\"Error deleting messages\")\n raise", "response": "Delete the messages previously leased."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns a copy of a function.", "response": "def copy_func(func: Callable) -> Callable:\n \"\"\"Returns a copy of a function.\n\n :param 
func: The function to copy.\n :returns: The copied function.\n \"\"\"\n copied = types.FunctionType(\n func.__code__, func.__globals__, name=func.__name__,\n argdefs=func.__defaults__, closure=func.__closure__)\n copied = functools.update_wrapper(copied, func)\n copied.__kwdefaults__ = func.__kwdefaults__\n return copied"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_params_from_func(func: Callable, signature: Signature=None) -> Params:\n if signature is None:\n # Check if the function already parsed the signature\n signature = getattr(func, '_doctor_signature', None)\n # Otherwise parse the signature\n if signature is None:\n signature = inspect.signature(func)\n\n # Check if a `req_obj_type` was provided for the function. If so we should\n # derive the parameters from that defined type instead of the signature.\n if getattr(func, '_doctor_req_obj_type', None):\n annotation = func._doctor_req_obj_type\n all_params = list(annotation.properties.keys())\n required = annotation.required\n optional = list(set(all_params) - set(required))\n else:\n # Required is a positional argument with no default value and its\n # annotation must subclass SuperType. 
This is so we don't try to\n # require parameters passed to a logic function by a decorator that are\n # not part of a request.\n required = [key for key, p in signature.parameters.items()\n if p.default == p.empty and\n issubclass(p.annotation, SuperType)]\n optional = [key for key, p in signature.parameters.items()\n if p.default != p.empty]\n all_params = [key for key in signature.parameters.keys()]\n\n # Logic params are all parameters that are part of the logic signature.\n logic_params = copy(all_params)\n return Params(all_params, required, optional, logic_params)", "response": "Gets all parameters from a function signature."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef add_param_annotations(\n logic: Callable, params: List[RequestParamAnnotation]) -> Callable:\n \"\"\"Adds parameter annotations to a logic function.\n\n This adds additional required and/or optional parameters to the logic\n function that are not part of its signature. 
It's intended to be used\n by decorators decorating logic functions or middleware.\n\n :param logic: The logic function to add the parameter annotations to.\n :param params: The list of RequestParamAnnotations to add to the logic func.\n :returns: The logic func with updated parameter annotations.\n \"\"\"\n # If we've already added param annotations to this function get the\n # values from the logic, otherwise we need to inspect it.\n if hasattr(logic, '_doctor_signature'):\n sig = logic._doctor_signature\n doctor_params = logic._doctor_params\n else:\n sig = inspect.signature(logic)\n doctor_params = get_params_from_func(logic, sig)\n\n prev_parameters = {name: param for name, param in sig.parameters.items()}\n new_params = []\n for param in params:\n # If the parameter already exists in the function signature, log\n # a warning and skip it.\n if param.name in prev_parameters:\n logging.warning('Not adding %s to signature of %s, function '\n 'already has that parameter in its signature.',\n param.name, logic.__name__)\n continue\n doctor_params.all.append(param.name)\n default = None\n if param.required:\n default = Parameter.empty\n doctor_params.required.append(param.name)\n else:\n doctor_params.optional.append(param.name)\n new_params.append(\n Parameter(param.name, Parameter.KEYWORD_ONLY, default=default,\n annotation=param.annotation))\n\n new_sig = sig.replace(\n parameters=list(prev_parameters.values()) + new_params)\n logic._doctor_signature = new_sig\n logic._doctor_params = doctor_params\n return logic", "response": "Adds parameter annotations to a logic function."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nget an attribute from a module.", "response": "def get_module_attr(module_filename, module_attr, namespace=None):\n \"\"\"Get an attribute from a module.\n\n This uses exec to load the module with a private namespace, and then\n plucks and returns the given attribute from that module's namespace.\n\n 
Note that, while this method doesn't have any explicit unit tests, it is\n tested implicitly by the doctor's own documentation. The Sphinx\n build process will fail to generate docs if this does not work.\n\n :param str module_filename: Path to the module to execute (e.g.\n \"../src/app.py\").\n :param str module_attr: Attribute to pluck from the module's namespace.\n (e.g. \"app\").\n :param dict namespace: Optional namespace. If one is not passed, an empty\n dict will be used instead. Note that this function mutates the passed\n namespace, so you can inspect a passed dict after calling this method\n to see how the module changed it.\n :returns: The attribute from the module.\n :raises KeyError: if the module doesn't have the given attribute.\n \"\"\"\n if namespace is None:\n namespace = {}\n module_filename = os.path.abspath(module_filename)\n namespace['__file__'] = module_filename\n module_dir = os.path.dirname(module_filename)\n old_cwd = os.getcwd()\n old_sys_path = sys.path[:]\n try:\n os.chdir(module_dir)\n sys.path.append(module_dir)\n with open(module_filename, 'r') as mf:\n exec(compile(mf.read(), module_filename, 'exec'), namespace)\n return namespace[module_attr]\n finally:\n os.chdir(old_cwd)\n sys.path = old_sys_path"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_description_lines(docstring):\n if prepare_docstring is None:\n raise ImportError('sphinx must be installed to use this function.')\n\n if not isinstance(docstring, str):\n return []\n lines = []\n for line in prepare_docstring(docstring):\n if DESCRIPTION_END_RE.match(line):\n break\n lines.append(line)\n if lines and lines[-1] != '':\n lines.append('')\n return lines", "response": "Extract the description from the given docstring."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns the given string converted so that it can be used for a class name.", "response": "def 
get_valid_class_name(s: str) -> str:\n \"\"\"Return the given string converted so that it can be used for a class name\n\n Remove leading and trailing spaces; removes spaces and capitalizes each\n word; and remove anything that is not alphanumeric. Returns a pep8\n compatible class name.\n\n :param s: The string to convert.\n :returns: The updated string.\n \"\"\"\n s = str(s).strip()\n s = ''.join([w.title() for w in re.split(r'\\W+|_', s)])\n return re.sub(r'[^\\w|_]', '', s)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef execution_context_from_async(async):\n local_context = _local.get_local_context()\n\n if local_context._executing_async_context:\n raise errors.ContextExistsError\n\n execution_context = _ExecutionContext(async)\n local_context._executing_async_context = execution_context\n return execution_context", "response": "Instantiate a new _ExecutionContext and store a reference to it in the global async context."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_cli_service(self, command_mode):\n return self._cli.get_session(self._new_sessions(), command_mode, self._logger)", "response": "Use cli. 
get_session to open CLI connection and switch into required mode\n "} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef check_crystal_equivalence(crystal_a, crystal_b):\n\n # getting symmetry datasets for both crystals\n cryst_a = spglib.get_symmetry_dataset(ase_to_spgcell(crystal_a), symprec=1e-5, angle_tolerance=-1.0, hall_number=0)\n cryst_b = spglib.get_symmetry_dataset(ase_to_spgcell(crystal_b), symprec=1e-5, angle_tolerance=-1.0, hall_number=0)\n\n samecell = np.allclose(cryst_a['std_lattice'], cryst_b['std_lattice'], atol=1e-5)\n samenatoms = len(cryst_a['std_positions']) == len(cryst_b['std_positions'])\n samespg = cryst_a['number'] == cryst_b['number']\n \n def test_rotations_translations(cryst_a, cryst_b, repeat):\n cell = cryst_a['std_lattice']\n pristine = crystal('Mg', [(0, 0., 0.)], \n spacegroup=int(cryst_a['number']),\n cellpar=[cell[0]/repeat[0], cell[1]/repeat[1], cell[2]/repeat[2]]).repeat(repeat)\n\n sym_set_p = spglib.get_symmetry_dataset(ase_to_spgcell(pristine), symprec=1e-5,\n angle_tolerance=-1.0, hall_number=0)\n\n for _,trans in enumerate(zip(sym_set_p['rotations'], sym_set_p['translations'])):\n pnew=(np.matmul(trans[0],cryst_a['std_positions'].T).T + trans[1]) % 1.0\n fulln = np.concatenate([cryst_a['std_types'][:, None], pnew], axis=1)\n fullb = np.concatenate([cryst_b['std_types'][:, None], cryst_b['std_positions']], axis=1)\n sorted_n = np.array(sorted([ list(row) for row in list(fulln) ]))\n sorted_b = np.array(sorted([ list(row) for row in list(fullb) ]))\n if np.allclose(sorted_n, sorted_b, atol=1e-5):\n return True\n return False\n\n if samecell and samenatoms and samespg:\n cell = cryst_a['std_lattice']\n # we assume there are no crystals with a lattice parameter smaller than 2 A\n rng1 = range(1, int(norm(cell[0])/2.))\n rng2 = range(1, int(norm(cell[1])/2.))\n rng3 = range(1, int(norm(cell[2])/2.))\n\n for repeat in itertools.product(rng1, rng2, rng3):\n if 
test_rotations_translations(cryst_a, cryst_b, repeat):\n return True\n\n return False", "response": "Function that identifies whether two crystals are equivalent"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nadd view information to error message", "response": "def add_view_info(self, view_info: ViewInfo):\n '''Adds view information to error message'''\n try:\n next(info for info in self._view_infos if info.view == view_info.view)\n except StopIteration:\n indent = len(self._view_infos) * '\\t'\n self._view_infos.append(view_info)\n info = 'Line {0} in \"{1}\"'.format(view_info.line, view_info.view)\n self.add_info(indent + 'View info', info)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nadding header line to error message", "response": "def add_info(self, header, message):\n '''Adds \"header: message\" line to error message'''\n current_message = self.args[0]\n message = current_message + self._format_info(header, message)\n self.args = (message,) + self.args[1:]"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef add_cause(self, error: Exception):\n '''Adds cause error to error message'''\n self.add_info('Cause error', '{0} - {1}'.format(type(error).__name__, error))", "response": "Adds cause error to error message"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nprocessing an Async task and execute the requested function.", "response": "def process_async_task(headers, request_body):\n \"\"\"Process an Async task and execute the requested function.\"\"\"\n async_options = json.loads(request_body)\n async = async_from_options(async_options)\n\n _log_task_info(headers,\n extra_task_info=async.get_options().get('_extra_task_info'))\n\n logging.info(async._function_path)\n\n with context.execution_context_from_async(async):\n run_job()\n\n return 200, async._function_path"} {"SOURCE": "codesearchnet", "instruction": "Can 
you write a function in Python 3 where it\nprocesses the header from task requests to log analytical data.", "response": "def _log_task_info(headers, extra_task_info=None):\n \"\"\"Processes the header from task requests to log analytical data.\"\"\"\n ran_at = time.time()\n task_eta = float(headers.get('X-Appengine-Tasketa', 0.0))\n task_info = {\n 'retry_count': headers.get('X-Appengine-Taskretrycount', ''),\n 'execution_count': headers.get('X-Appengine-Taskexecutioncount', ''),\n 'task_eta': task_eta,\n 'ran': ran_at,\n 'gae_latency_seconds': ran_at - task_eta\n }\n\n if extra_task_info:\n task_info['extra'] = extra_task_info\n\n logging.debug('TASK-INFO: %s', json.dumps(task_info))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef load_firmware(self, path, vrf_management_name=None):\n\n url = UrlParser.parse_url(path)\n required_keys = [UrlParser.FILENAME, UrlParser.HOSTNAME, UrlParser.SCHEME]\n\n if not url or not all(key in url for key in required_keys):\n raise Exception(self.__class__.__name__, \"Path is wrong or empty\")\n\n self.load_firmware_flow.execute_flow(path, vrf_management_name, self._timeout)", "response": "Load firmware from given path and update firmware version on device."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _parse_ports(self, ports):\n if not ports:\n return []\n\n return [tuple(port_pair.split(\"::\")) for port_pair in ports.strip(\";\").split(\";\")]", "response": "Parse the ports string into the list\n\n "} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef add_trunk_ports(self):\n ports = self.attributes.get(\"{}Enable Full Trunk Ports\".format(self.namespace_prefix), None)\n return self._parse_ports(ports=ports)", "response": "SDN Controller enable trunk ports return list of tuple"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function 
for\nvalidating the autoload resource structure.", "response": "def _validate_build_resource_structure(autoload_resource):\n \"\"\"Validate resource structure\n\n :param dict autoload_resource:\n :return correct autoload resource structure\n :rtype: dict\n \"\"\"\n result = {}\n\n for resource_prefix, resources in autoload_resource.iteritems():\n\n max_free_index = max(map(int, resources)) + 1 or 1\n\n for index, sub_resources in resources.iteritems():\n if not index or index == -1:\n index = max_free_index\n max_free_index += 1\n\n if len(sub_resources) > 1:\n result[\"{0}{1}\".format(resource_prefix, index)] = sub_resources[0]\n for resource in sub_resources[1:]:\n result[\"{0}{1}\".format(resource_prefix, str(max_free_index))] = resource\n max_free_index += 1\n else:\n result[\"{0}{1}\".format(resource_prefix, index)] = sub_resources[0]\n\n return result"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _build_autoload_details(self, autoload_data, relative_path=\"\"):\n\n self._autoload_details.attributes.extend([AutoLoadAttribute(relative_address=relative_path,\n attribute_name=attribute_name,\n attribute_value=attribute_value)\n for attribute_name, attribute_value in\n autoload_data.attributes.iteritems()])\n\n for resource_relative_path, resource in self._validate_build_resource_structure(autoload_data.resources).iteritems():\n full_relative_path = posixpath.join(relative_path, resource_relative_path)\n self._autoload_details.resources.append(AutoLoadResource(model=resource.cloudshell_model_name,\n name=resource.name,\n relative_address=full_relative_path,\n unique_identifier=resource.unique_identifier))\n\n self._build_autoload_details(autoload_data=resource, relative_path=full_relative_path)", "response": "Build autoload details\n\n :param autoload_data: dict:\n :param relative_path: str: full relative path of current autoload resource"} {"SOURCE": "codesearchnet", "instruction": "How would you 
implement a function in Python 3 that\nsets the attribute of the node. Can be customized by attr_setter property.", "response": "def set_attr(self, key: str, value):\n \"\"\"Sets node attribute. Can be customized by attr_setter property\"\"\"\n self.attr_setter(self, key, value)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef destroy(self):\n self.destroy_children()\n self.destroy_bindings()\n if self.on_destroy:\n self.on_destroy(self)", "response": "Destroys the current instance of the class."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nsets the value of the node.", "response": "def set(self, value):\n \"\"\"Sets value\"\"\"\n self._value = self._setter(self._node, value, self._value) if self._setter else value"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncreates a new property for node", "response": "def new(self, node: Node):\n \"\"\"Creates property for node\"\"\"\n return Property(self.name, self._setter, node)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nplots a histogram of correlation coefficients and its p - values.", "response": "def corr_vs_pval(r, pval, plim=0.01, rlim=0.4, dpi=96):\n \"\"\"Histogram for correlation coefficients and its p-values (colored)\n\n Parameters:\n -----------\n r : np.ndarray\n Correlation coefficient matrix.\n The upper triangular elements are extracted if a NxN is provided.\n Otherwise provide a plain vector.\n\n pval : np.ndarray\n NxN matrix with p-values\n\n plim : float\n Discretionary alpha threshold to judge if a p-value is considered\n \"significant\" or not. (Default: 0.01 or 1%)\n\n rlim : float\n Descretionary threshold to judge if an absolute correlation\n coefficient is big enough. 
(Default: 0.4)\n\n dpi : int\n Set the resolution of the matplotlib graphics.\n\n Return:\n -------\n fig, ax\n matplotlib figure and axes for further tweaking\n \"\"\"\n # reshape\n if len(r.shape) == 2:\n idx = (np.tri(N=r.shape[0], k=-1) == 1)\n r = r[idx]\n pval = pval[idx]\n\n # indicies for the three groups\n i1 = (pval >= plim)\n i2 = (pval < plim) & (np.abs(r) > rlim)\n i3 = (pval < plim) & (np.abs(r) <= rlim)\n\n # plot paramters\n absmax = np.max(np.abs(r))\n b = (np.arange(0, 21) / 10 - 1) * absmax\n c = plt.get_cmap('tab10').colors\n c = (c[1], c[8], c[7])\n\n # create plot\n fig, ax = plt.subplots(dpi=dpi)\n ax.hist((r[i1], r[i2], r[i3]), histtype='bar',\n stacked=True, bins=b, color=c)\n\n # legend, title, labels\n ax.legend(['p >= ' + str(plim),\n 'p < ' + str(plim) + ' and |r| > ' + str(rlim),\n 'p < ' + str(plim) + ' and |r| <= ' + str(rlim)],\n loc=2, bbox_to_anchor=(1.04, 1.04))\n ax.set_ylabel('frequency')\n ax.set_xlabel('correlation coefficient')\n\n # design grid\n ax.grid(color='darkgray', linestyle='-.')\n for edge, spine in ax.spines.items():\n spine.set_visible(False)\n\n # output figure/axes onbjects\n return fig, ax"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef from_context(cls, context, shell_type=None, shell_name=None):\n return cls(address=context.resource.address,\n family=context.resource.family,\n shell_type=shell_type,\n shell_name=shell_name,\n fullname=context.resource.fullname,\n attributes=dict(context.resource.attributes),\n name=context.resource.name)", "response": "Create an instance of TrafficGeneratorVBladeResource from the given context."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef update_filter_dict(d, user, status):\n if user is not None:\n if not user.id:\n return None\n d['user'] = user\n if status is not None:\n d['status'] = status", "response": "Helper. 
Updates the filter dict for a queryset."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_flags_for_types(cls, mdl_classes, user=None, status=None, allow_empty=True):\n return get_model_class_from_string(MODEL_FLAG).get_flags_for_types(\n mdl_classes, user=user, status=status, allow_empty=allow_empty)", "response": "Returns a dictionary with flag objects associated with the given model classes."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning a dictionary with flag objects associated with the given model objects.", "response": "def get_flags_for_objects(cls, objects_list, user=None, status=None):\n \"\"\"Returns a dictionary with flag objects associated with the given model objects.\n The dictionary is indexed by objects IDs.\n Each dict entry contains a list of associated flag objects.\n\n :param list, QuerySet objects_list:\n :param User user:\n :param int status:\n :rtype: dict\n \"\"\"\n return get_model_class_from_string(MODEL_FLAG).get_flags_for_objects(objects_list, user=user, status=status)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning a QuerySet containing only the flags for the object optionally filtered by status.", "response": "def get_flags(self, user=None, status=None):\n \"\"\"Returns flags for the object optionally filtered by status.\n\n :param User user: Optional user filter\n :param int status: Optional status filter\n :rtype: QuerySet\n \"\"\"\n filter_kwargs = {}\n update_filter_dict(filter_kwargs, user, status)\n return self.flags.filter(**filter_kwargs).all()"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nflags the object. :param User user: :param str note: User-defined note for this flag. :param int status: Optional status integer (the meaning is defined by a developer). 
:return:", "response": "def set_flag(self, user, note=None, status=None):\n \"\"\"Flags the object.\n\n :param User user:\n :param str note: User-defined note for this flag.\n :param int status: Optional status integer (the meaning is defined by a developer).\n :return:\n \"\"\"\n if not user.id:\n return None\n\n init_kwargs = {\n 'user': user,\n 'linked_object': self,\n }\n if note is not None:\n init_kwargs['note'] = note\n if status is not None:\n init_kwargs['status'] = status\n\n flag = get_flag_model()(**init_kwargs)\n try:\n flag.save()\n except IntegrityError: # Record already exists.\n pass\n return flag"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef remove_flag(self, user=None, status=None):\n filter_kwargs = {\n 'content_type': ContentType.objects.get_for_model(self),\n 'object_id': self.id\n }\n update_filter_dict(filter_kwargs, user, status)\n get_flag_model().objects.filter(**filter_kwargs).delete()", "response": "Removes flag from the object."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning whether the objects is flagged by a user.", "response": "def is_flagged(self, user=None, status=None):\n \"\"\"Returns boolean whether the objects is flagged by a user.\n\n :param User user: Optional user filter\n :param int status: Optional status filter\n :return:\n \"\"\"\n filter_kwargs = {\n 'content_type': ContentType.objects.get_for_model(self),\n 'object_id': self.id,\n }\n update_filter_dict(filter_kwargs, user, status)\n return self.flags.filter(**filter_kwargs).count()"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nconfigures used database so sqlpuzzle can generate queries which are needed.", "response": "def set_backend(database):\n \"\"\"\n Configure used database, so sqlpuzzle can generate queries which are needed.\n For now there is only support of MySQL and PostgreSQL.\n \"\"\"\n new_backend = 
BACKENDS.get(database.lower())\n if not new_backend:\n raise Exception('Backend {} is not supported.'.format(database))\n global BACKEND # pylint: disable=global-statement\n BACKEND = new_backend"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nverify that the device is accessible over CLI by sending ENTER for CLI session.", "response": "def health_check(self):\n \"\"\" Verify that device is accessible over CLI by sending ENTER for cli session \"\"\"\n\n api_response = 'Online'\n\n result = 'Health check on resource {}'.format(self._resource_name)\n try:\n health_check_flow = RunCommandFlow(self.cli_handler, self._logger)\n health_check_flow.execute_flow()\n result += ' passed.'\n except Exception as e:\n self._logger.exception(e)\n api_response = 'Error'\n result += ' failed.'\n\n try:\n self._api.SetResourceLiveStatus(self._resource_name, api_response, result)\n except Exception:\n self._logger.error('Cannot update {} resource status on portal'.format(self._resource_name))\n\n return result"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef reference(cls, value):\n from sqlpuzzle._common.utils import force_text\n value = force_text(value)\n parts = re.split(r'{quote}([^{quote}]+){quote}|\\.'.format(quote=cls.reference_quote), value)\n parts = ('{quote}{i}{quote}'.format(quote=cls.reference_quote, i=i) if i != '*' else i for i in parts if i)\n return '.'.join(parts)", "response": "Convert as reference on column."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef reference_to_path(reference):\n # Try to pop the options off whatever they passed in.\n if isinstance(reference, basestring):\n # This is an object path name in str form.\n import re\n if not re.match(r'^[^\\d\\W]([a-zA-Z._]|((? 
1:\n prefix = os.path.commonprefix(stack)\n if prefix.endswith('/'):\n prefix = prefix[:-1]\n stack = [scope[len(prefix):] for scope in stack]\n return ' => '.join(stack)", "response": "Formats a stack for use in error messages."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nresolving a fragment within the schema.", "response": "def resolve(self, ref, document=None):\n \"\"\"Resolve a fragment within the schema.\n\n If the resolved value contains a $ref, it will attempt to resolve that\n as well, until it gets something that is not a reference. Circular\n references will raise a SchemaError.\n\n :param str ref: URI to resolve.\n :param dict document: Optional schema in which to resolve the URI.\n :returns: a tuple of the final, resolved URI (after any recursion) and\n resolved value in the schema that the URI references.\n :raises SchemaError:\n \"\"\"\n try:\n # This logic is basically the RefResolver's resolve function, but\n # updated to support fragments of dynamic documents. The jsonschema\n # module supports passing documents when resolving fragments, but\n # it doesn't expose that capability in the resolve function.\n url = self._urljoin_cache(self.resolution_scope, ref)\n if document is None:\n # No document passed, so just resolve it as we normally would.\n resolved = self._remote_cache(url)\n else:\n # Document passed, so assume it's a fragment.\n _, fragment = urldefrag(url)\n resolved = self.resolve_fragment(document, fragment)\n except jsonschema.RefResolutionError as e:\n # Failed to find a ref. 
Make the error a bit prettier so we can\n # figure out where it came from.\n message = e.args[0]\n if self._scopes_stack:\n message = '{} (from {})'.format(\n message, self._format_stack(self._scopes_stack))\n raise SchemaError(message)\n\n if isinstance(resolved, dict) and '$ref' in resolved:\n # Try to resolve the reference, so we can get the actual value we\n # want, instead of a useless dict with a $ref in it.\n if url in self._scopes_stack:\n # We've already tried to look up this URL, so this must\n # be a circular reference in the schema.\n raise SchemaError(\n 'Circular reference in schema: {}'.format(\n self._format_stack(self._scopes_stack + [url])))\n try:\n self.push_scope(url)\n return self.resolve(resolved['$ref'])\n finally:\n self.pop_scope()\n else:\n return url, resolved"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nadds support to load YAML files.", "response": "def resolve_remote(self, uri):\n \"\"\"Add support to load YAML files.\n\n This will attempt to load a YAML file first, and then go back to the\n default behavior.\n\n :param str uri: the URI to resolve\n :returns: the retrieved document\n \"\"\"\n if uri.startswith('file://'):\n try:\n path = uri[7:]\n with open(path, 'r') as schema_file:\n result = yaml.load(schema_file)\n if self.cache_remote:\n self.store[uri] = result\n return result\n except yaml.parser.ParserError as e:\n logging.debug('Error parsing {!r} as YAML: {}'.format(\n uri, e))\n return super(SchemaRefResolver, self).resolve_remote(uri)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_validator(self, schema=None):\n schema = schema if schema is not None else self.schema\n return jsonschema.Draft4Validator(\n schema, resolver=self.resolver,\n format_checker=jsonschema.draft4_format_checker)", "response": "Get a jsonschema validator."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 
that\nresolves a ref within the schema.", "response": "def resolve(self, ref, document=None):\n \"\"\"Resolve a ref within the schema.\n\n This is just a convenience method, since RefResolver returns both a URI\n and the resolved value, and we usually just need the resolved value.\n\n :param str ref: URI to resolve.\n :param dict document: Optional schema in which to resolve the URI.\n :returns: the portion of the schema that the URI references.\n :see: :meth:`SchemaRefResolver.resolve`\n \"\"\"\n _, resolved = self.resolver.resolve(ref, document=document)\n return resolved"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef resolver(self):\n if self._resolver is not None:\n return self._resolver\n if self._schema_path is not None:\n # the documentation for ref resolving\n # https://github.com/Julian/jsonschema/issues/98\n # https://python-jsonschema.readthedocs.org/en/latest/references/\n self._resolver = SchemaRefResolver(\n 'file://' + self._schema_path + '/', self.schema)\n else:\n self._resolver = SchemaRefResolver.from_schema(self.schema)\n return self._resolver", "response": "jsonschema RefResolver object for the base schema."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nvalidating and returns the value.", "response": "def validate(self, value, validator):\n \"\"\"Validates and returns the value.\n\n If the value does not validate against the schema, SchemaValidationError\n will be raised.\n\n :param value: A value to validate (usually a dict).\n :param validator: An instance of a jsonschema validator class, as\n created by Schema.get_validator().\n :returns: the passed value.\n :raises SchemaValidationError:\n :raises Exception:\n \"\"\"\n try:\n validator.validate(value)\n except Exception as e:\n logging.debug(e, exc_info=e)\n if isinstance(e, DoctorError):\n raise\n else:\n # Gather all the validation errors\n validation_errors = sorted(\n 
validator.iter_errors(value), key=lambda e: e.path)\n errors = {}\n for error in validation_errors:\n try:\n key = error.path[0]\n except IndexError:\n key = '_other'\n errors[key] = error.args[0]\n raise SchemaValidationError(e.args[0], errors=errors)\n return value"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nvalidating and returns the parsed JSON string.", "response": "def validate_json(self, json_value, validator):\n \"\"\"Validates and returns the parsed JSON string.\n\n If the value is not valid JSON, ParseError will be raised. If it is\n valid JSON, but does not validate against the schema,\n SchemaValidationError will be raised.\n\n :param str json_value: JSON value.\n :param validator: An instance of a jsonschema validator class, as\n created by Schema.get_validator().\n :returns: the parsed JSON value.\n \"\"\"\n value = parse_json(json_value)\n return self.validate(value, validator)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef from_file(cls, schema_filepath, *args, **kwargs):\n schema_filepath = os.path.abspath(schema_filepath)\n try:\n with open(schema_filepath, 'r') as schema_file:\n schema = yaml.load(schema_file.read())\n except Exception:\n msg = 'Error loading schema file {}'.format(schema_filepath)\n logging.exception(msg)\n raise SchemaLoadingError(msg)\n return cls(schema, *args, schema_path=os.path.dirname(schema_filepath),\n **kwargs)", "response": "Create an instance of the class from a YAML or JSON schema file."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nexecute a command on the device and return the output", "response": "def execute_flow(self, custom_command=\"\", is_config=False):\n \"\"\" Execute flow which run custom command on device\n\n :param custom_command: the command to execute on device\n :param is_config: if True then run command in configuration mode\n :return: command execution 
output\n \"\"\"\n\n responses = []\n if isinstance(custom_command, str):\n commands = [custom_command]\n elif isinstance(custom_command, tuple):\n commands = list(custom_command)\n else:\n commands = custom_command\n\n if is_config:\n mode = self._cli_handler.config_mode\n if not mode:\n raise Exception(self.__class__.__name__,\n \"CliHandler configuration is missing. Config Mode has to be defined\")\n else:\n mode = self._cli_handler.enable_mode\n if not mode:\n raise Exception(self.__class__.__name__,\n \"CliHandler configuration is missing. Enable Mode has to be defined\")\n\n with self._cli_handler.get_cli_service(mode) as session:\n for cmd in commands:\n responses.append(session.send_command(command=cmd))\n return '\\n'.join(responses)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nexecutes custom command in configuration mode on device", "response": "def run_custom_config_command(self, custom_command):\n \"\"\" Execute custom command in configuration mode on device\n\n :param custom_command: command\n :return: result of command execution\n \"\"\"\n\n return self.run_command_flow.execute_flow(custom_command=custom_command, is_config=True)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nparse a rethinkdb url and returns a ParsedRethinkDbUrl object", "response": "def parse_rethinkdb_url(s):\n '''\n Parses a url like this rethinkdb://server1:port,server2:port/database/table\n\n Returns:\n tuple `(['server1:port', 'server2:port'], database, table)`\n `table` and `database` may be None\n\n Raises:\n ValueError if url cannot be parsed as a rethinkdb url\n\n There is some precedent for this kind of url (though only with a single\n host):\n - https://gist.github.com/lucidfrontier45/e5881a8fca25e51ab21c3cf4b4179daa\n - https://github.com/laggyluke/node-parse-rethinkdb-url\n '''\n result = ParsedRethinkDbUrl(None, None, None)\n parsed = urllib_parse.urlparse(s)\n if parsed.scheme != 
'rethinkdb':\n raise ValueError\n hosts = parsed.netloc.split(',')\n\n database = None\n table = None\n path_segments = parsed.path.split('/')[1:]\n if len(path_segments) >= 3:\n raise ValueError\n if len(path_segments) >= 1:\n database = path_segments[0]\n if len(path_segments) == 2:\n table = path_segments[1]\n\n if '' in hosts or database == '' or table == '':\n raise ValueError\n\n if any('@' in host for host in hosts):\n raise ValueError\n\n return ParsedRethinkDbUrl(hosts, database, table)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef encode_async_options(async):\n options = copy.deepcopy(async._options)\n\n options['_type'] = reference_to_path(async.__class__)\n\n # JSON don't like datetimes.\n eta = options.get('task_args', {}).get('eta')\n if eta:\n options['task_args']['eta'] = time.mktime(eta.timetuple())\n\n callbacks = async._options.get('callbacks')\n if callbacks:\n options['callbacks'] = encode_callbacks(callbacks)\n\n if '_context_checker' in options:\n _checker = options.pop('_context_checker')\n options['__context_checker'] = reference_to_path(_checker)\n\n if '_process_results' in options:\n _processor = options.pop('_process_results')\n options['__process_results'] = reference_to_path(_processor)\n\n return options", "response": "Encode Async options for JSON encoding."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ndecode Async options from JSON decoding.", "response": "def decode_async_options(options):\n \"\"\"Decode Async options from JSON decoding.\"\"\"\n async_options = copy.deepcopy(options)\n\n # JSON don't like datetimes.\n eta = async_options.get('task_args', {}).get('eta')\n if eta:\n from datetime import datetime\n\n async_options['task_args']['eta'] = datetime.fromtimestamp(eta)\n\n # If there are callbacks, reconstitute them.\n callbacks = async_options.get('callbacks', {})\n if callbacks:\n async_options['callbacks'] = 
decode_callbacks(callbacks)\n\n if '__context_checker' in options:\n _checker = options['__context_checker']\n async_options['_context_checker'] = path_to_reference(_checker)\n\n if '__process_results' in options:\n _processor = options['__process_results']\n async_options['_process_results'] = path_to_reference(_processor)\n return async_options"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nsets default Async options on the function decorated. Note: you must pass the decorated function by reference, not as a \"path.string.to.function\" for this to have any effect.", "response": "def defaults(**options):\n \"\"\"Set default Async options on the function decorated.\n\n Note: you must pass the decorated function by reference, not as a\n \"path.string.to.function\" for this to have any effect.\n \"\"\"\n _check_options(options)\n\n def real_decorator(function):\n function._async_options = options\n\n @wraps(function)\n def wrapper(*args, **kwargs):\n return function(*args, **kwargs)\n return wrapper\n\n return real_decorator"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nstoring this Async s result in persistent storage.", "response": "def _persist_result(self):\n \"\"\"Store this Async's result in persistent storage.\"\"\"\n self._prepare_persistence_engine()\n\n return self._persistence_engine.store_async_result(\n self.id, self.result)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nensures recursion info is initialized if not initialize it.", "response": "def _initialize_recursion_depth(self):\n \"\"\"Ensure recursion info is initialized, if not, initialize it.\"\"\"\n from furious.context import get_current_async\n\n recursion_options = self._options.get('_recursion', {})\n\n current_depth = recursion_options.get('current', 0)\n max_depth = recursion_options.get('max', MAX_DEPTH)\n\n try:\n executing_async = get_current_async()\n\n # If this async 
is within an executing async, use the depth off\n # that async. Otherwise use the depth set in the async's options.\n current_depth = executing_async.recursion_depth\n\n # If max_depth does not equal MAX_DEPTH, it is custom. Otherwise\n # use the max_depth from the containing async.\n if max_depth == MAX_DEPTH:\n executing_options = executing_async.get_options().get(\n '_recursion', {})\n max_depth = executing_options.get('max', max_depth)\n\n except errors.NotInContextError:\n # This Async is not being constructed inside an executing Async.\n pass\n\n # Store the recursion info.\n self.update_options(_recursion={'current': current_depth,\n 'max': max_depth})"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef check_recursion_depth(self):\n from furious.async import MAX_DEPTH\n\n recursion_options = self._options.get('_recursion', {})\n max_depth = recursion_options.get('max', MAX_DEPTH)\n\n # Check if recursion check has been disabled, then check depth.\n if (max_depth != DISABLE_RECURSION_CHECK and\n self.recursion_depth > max_depth):\n raise errors.AsyncRecursionError('Max recursion depth reached.')", "response": "Check recursion depth, raise AsyncRecursionError if too deep."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nspecifying the function this async job is to execute when run.", "response": "def _update_job(self, target, args, kwargs):\n \"\"\"Specify the function this async job is to execute when run.\"\"\"\n target_path, options = get_function_path_and_options(target)\n\n assert isinstance(args, (tuple, list)) or args is None\n assert isinstance(kwargs, dict) or kwargs is None\n\n if options:\n self.update_options(**options)\n\n self._options['job'] = (target_path, args, kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef set_execution_context(self, execution_context):\n if 
self._execution_context:\n raise errors.AlreadyInContextError\n\n self._execution_context = execution_context", "response": "Set the ExecutionContext this async is executing under."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef update_options(self, **options):\n\n _check_options(options)\n\n if 'persistence_engine' in options:\n options['persistence_engine'] = reference_to_path(\n options['persistence_engine'])\n\n if 'id' in options:\n self._id = options['id']\n\n self._options.update(options)", "response": "Safely update this async job s configuration options."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn a Task object representing this async job.", "response": "def to_task(self):\n \"\"\"Return a task object representing this async job.\"\"\"\n from google.appengine.api.taskqueue import Task\n from google.appengine.api.taskqueue import TaskRetryOptions\n\n self._increment_recursion_level()\n self.check_recursion_depth()\n\n url = \"%s/%s\" % (ASYNC_ENDPOINT, self.function_path)\n\n kwargs = {\n 'url': url,\n 'headers': self.get_headers().copy(),\n 'payload': json.dumps(self.to_dict())\n }\n kwargs.update(copy.deepcopy(self.get_task_args()))\n\n # Set task_retry_limit\n retry_options = copy.deepcopy(DEFAULT_RETRY_OPTIONS)\n retry_options.update(kwargs.pop('retry_options', {}))\n kwargs['retry_options'] = TaskRetryOptions(**retry_options)\n\n return Task(**kwargs)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef start(self, transactional=False, async=False, rpc=None):\n from google.appengine.api import taskqueue\n\n task = self.to_task()\n queue = taskqueue.Queue(name=self.get_queue())\n retry_transient = self._options.get('retry_transient_errors', True)\n retry_delay = self._options.get('retry_delay', RETRY_SLEEP_SECS)\n\n add = queue.add\n if async:\n add = 
partial(queue.add_async, rpc=rpc)\n\n try:\n ret = add(task, transactional=transactional)\n except taskqueue.TransientError:\n # Always re-raise for transactional insert, or if specified by\n # options.\n if transactional or not retry_transient:\n raise\n\n time.sleep(retry_delay)\n\n ret = add(task, transactional=transactional)\n except (taskqueue.TaskAlreadyExistsError,\n taskqueue.TombstonedTaskError):\n return\n\n # TODO: Return a \"result\" object.\n return ret", "response": "Start the task in the queue."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns an async job from a dict output by Async. to_dict.", "response": "def from_dict(cls, async):\n \"\"\"Return an async job from a dict output by Async.to_dict.\"\"\"\n async_options = decode_async_options(async)\n\n target, args, kwargs = async_options.pop('job')\n\n return cls(target, args, kwargs, **async_options)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _prepare_persistence_engine(self):\n if self._persistence_engine:\n return\n\n persistence_engine = self._options.get('persistence_engine')\n if persistence_engine:\n self._persistence_engine = path_to_reference(persistence_engine)\n return\n\n from furious.config import get_default_persistence_engine\n\n self._persistence_engine = get_default_persistence_engine()", "response": "Load the specified persistence engine or the default if none is set."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _get_parent_id(self):\n parent_id = self._options.get('parent_id')\n if parent_id:\n return parent_id\n\n from furious.context import get_current_async\n\n try:\n async = get_current_async()\n except errors.NotInContextError:\n async = None\n\n if async:\n parent_id = \":\".join([async.parent_id.split(\":\")[0], async.id])\n else:\n parent_id = self.request_id\n\n 
self.update_options(parent_id=parent_id)\n\n        return parent_id", "response": "Get the parent async id."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns the full_id for this Async. Consists of the parent id, id and context id.", "response": "def full_id(self):\n        \"\"\"Return the full_id for this Async. Consists of the parent id, id and\n        context id.\n        \"\"\"\n        full_id = \"\"\n\n        if self.parent_id:\n            full_id = \":\".join([self.parent_id, self.id])\n        else:\n            full_id = self.id\n\n        if self.context_id:\n            full_id = \"|\".join([full_id, self.context_id])\n\n        return full_id"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _increment_recursion_level(self):\n        # Update the recursion info.  This is done so that if an async is created\n        # outside an executing context, or one previously created is later\n        # loaded from storage, the \"current\" setting is correctly set.\n        self._initialize_recursion_depth()\n\n        recursion_options = self._options.get('_recursion', {})\n        current_depth = recursion_options.get('current', 0) + 1\n        max_depth = recursion_options.get('max', MAX_DEPTH)\n\n        # Increment and store\n        self.update_options(_recursion={'current': current_depth,\n                                        'max': max_depth})", "response": "Increment the current_depth based on the recursion info of the enclosing Async. 
Checkpoints the current_depth and stores the current_depth and max_depth options."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef context_id(self):\n if not self._context_id:\n self._context_id = self._get_context_id()\n self.update_options(context_id=self._context_id)\n\n return self._context_id", "response": "Return this Async s Context Id if it exists."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _payload_to_dict(self):\n if self.status != self.ERROR or not self.payload:\n return self.payload\n\n import traceback\n\n return {\n \"error\": self.payload.error,\n \"args\": self.payload.args,\n \"traceback\": traceback.format_exception(*self.payload.traceback)\n }", "response": "Converts the payload to a serializable dict."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef put(func: Callable, allowed_exceptions: List = None,\n title: str = None, req_obj_type: Callable = None) -> HTTPMethod:\n \"\"\"Returns a HTTPMethod instance to create a PUT route.\n\n :see: :class:`~doctor.routing.HTTPMethod`\n \"\"\"\n return HTTPMethod('put', func, allowed_exceptions=allowed_exceptions,\n title=title, req_obj_type=req_obj_type)", "response": "Returns a HTTPMethod instance to create a PUT route."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef create_http_method(logic: Callable, http_method: str,\n handle_http: Callable, before: Callable = None,\n after: Callable = None) -> Callable:\n \"\"\"Create a handler method to be used in a handler class.\n\n :param callable logic: The underlying function to execute with the\n parsed and validated parameters.\n :param str http_method: HTTP method this will handle.\n :param handle_http: The HTTP handler function that should be\n used to wrap the logic functions.\n :param before: A 
function to be called before the logic function associated\n with the route.\n :param after: A function to be called after the logic function associated\n with the route.\n :returns: A handler function.\n \"\"\"\n @functools.wraps(logic)\n def fn(handler, *args, **kwargs):\n if before is not None and callable(before):\n before()\n result = handle_http(handler, args, kwargs, logic)\n if after is not None and callable(after):\n after(result)\n return result\n return fn", "response": "Create a handler method that will be used in a handler class."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngetting the handler name.", "response": "def get_handler_name(route: Route, logic: Callable) -> str:\n \"\"\"Gets the handler name.\n\n :param route: A Route instance.\n :param logic: The logic function.\n :returns: A handler class name.\n \"\"\"\n if route.handler_name is not None:\n return route.handler_name\n if any(m for m in route.methods if m.method.lower() == 'post'):\n # A list endpoint\n if route.heading != 'API':\n return '{}ListHandler'.format(get_valid_class_name(route.heading))\n return '{}ListHandler'.format(get_valid_class_name(logic.__name__))\n if route.heading != 'API':\n return '{}Handler'.format(get_valid_class_name(route.heading))\n return '{}Handler'.format(get_valid_class_name(logic.__name__))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ncreate the handler routes from the provided routes.", "response": "def create_routes(routes: Tuple[HTTPMethod], handle_http: Callable,\n default_base_handler_class: Any) -> List[Tuple[str, Any]]:\n \"\"\"Creates handler routes from the provided routes.\n\n :param routes: A tuple containing the route and another tuple with\n all http methods allowed for the route.\n :param handle_http: The HTTP handler function that should be\n used to wrap the logic functions.\n :param default_base_handler_class: The default base handler class that\n should be used.\n 
:returns: A list of tuples containing the route and generated handler.\n    \"\"\"\n    created_routes = []\n    all_handler_names = []\n    for r in routes:\n        handler = None\n        if r.base_handler_class is not None:\n            base_handler_class = r.base_handler_class\n        else:\n            base_handler_class = default_base_handler_class\n\n        # Define the handler name.  To prevent issues where auto-generated\n        # handler names conflict with existing ones, append a number to the\n        # end of the handler name if it already exists.\n        handler_name = get_handler_name(r, r.methods[0].logic)\n        if handler_name in all_handler_names:\n            handler_name = '{}{}'.format(\n                handler_name, len(all_handler_names))\n        all_handler_names.append(handler_name)\n\n        for method in r.methods:\n            logic = method.logic\n            http_method = method.method\n            http_func = create_http_method(logic, http_method, handle_http,\n                                           before=r.before, after=r.after)\n\n            handler_methods_and_properties = {\n                '__name__': handler_name,\n                '_doctor_heading': r.heading,\n                'methods': set([http_method.upper()]),\n                http_method: http_func,\n            }\n            if handler is None:\n                handler = type(\n                    handler_name, (base_handler_class,),\n                    handler_methods_and_properties)\n            else:\n                setattr(handler, http_method, http_func)\n                # This is specific to Flask.  Its MethodView class\n                # initializes the methods attribute in __new__ so we\n                # need to add all the other http methods we are defining\n                # on the handler after it gets created by type.\n                if hasattr(handler, 'methods'):\n                    handler.methods.add(http_method.upper())\n        created_routes.append((r.route, handler))\n    return created_routes"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nparse command line arguments.", "response": "def parse_args(options={}, *args, **kwds):\n    \"\"\"\n    Parser of arguments.\n\n    dict options {\n        int min_items: Min of required items to fold one tuple. (default: 1)\n        int max_items: Count of items in one tuple. Last `max_items-min_items`\n            items are by default set to None. 
(default: 1)\n        bool allow_dict: Flag allowing dictionary as first (and only one)\n            argument or dictionary as **kwds. (default: False)\n        bool allow_list: Flag allowing list as first (and only one) argument.\n            (default: False)\n    }\n\n    Examples:\n\n    calling with min_items=1, max_items=2, allow_dict=False:\n        arg1, arg2 => ((arg1, None), (arg2, None))\n        (arg1a, arg1b), arg2 => ((arg1a, arg1b), arg2, None))\n        arg1=val1 => FAIL\n        {key1: val1} => FAIL\n\n    calling with min_items=2, max_items=3, allow_dict=True:\n        arg1, arg2 => ((arg1, arg2, None),)\n        arg1, arg2, arg3 => ((arg1, arg2, arg3),)\n        (arg1a, arg1b, arg1c) => ((arg1a, arg1b, arg1c),)\n        arg1=val1, arg2=val2 => ((arg1, val1, None), (arg2, val2, None))\n        {key1: val1, key2: val2} => ((key1, val1, None), (key2, val2, None))\n        (arg1a, arg1b), arg2a, arg2b => FAIL\n    \"\"\"\n    parser_options = ParserOptions(options)\n    parser_input = ParserInput(args, kwds)\n\n    parser = Parser(parser_options, parser_input)\n    parser.parse()\n    return parser.output_data"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\npicking a number and continuing the chain.", "response": "def simple_state_machine():\n    \"\"\"Pick a number; if it is more than some cutoff, continue the chain.\"\"\"\n    from random import random\n\n    from furious.async import Async\n\n    number = random()\n    logging.info('Generating a number... 
%s', number)\n\n    if number > 0.25:\n        logging.info('Continuing to do stuff.')\n        return Async(target=simple_state_machine)\n\n    return number"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nrenders HTML attributes, or return '' if no attributes need to be rendered.", "response": "def render_attrs(attrs):\n    \"\"\"\n    Render HTML attributes, or return '' if no attributes need to be rendered.\n    \"\"\"\n    if attrs is not None:\n        def parts():\n            for key, value in sorted(attrs.items()):\n                if value is None:\n                    continue\n                if value is True:\n                    yield '%s' % (key, )\n                    continue\n                if key == 'class' and isinstance(value, dict):\n                    if not value:\n                        continue\n                    value = render_class(value)\n                if key == 'style' and isinstance(value, dict):\n                    if not value:\n                        continue\n                    value = render_style(value)\n                yield '%s=\"%s\"' % (key, ('%s' % value).replace('\"', '&quot;'))\n        return mark_safe(' %s' % ' '.join(parts()))\n    return ''"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef context_completion_checker(async):\n\n    store_async_marker(async.id, async.result.status if async.result else -1)\n\n    logging.debug(\"Async check completion for: %s\", async.context_id)\n    current_queue = _get_current_queue()\n    from furious.async import Async\n    logging.debug(\"Completion Check queue:%s\", current_queue)\n    Async(_completion_checker, queue=current_queue,\n          args=(async.id, async.context_id)).start()\n\n    return True", "response": "Persist the async marker and fire off an async completion check"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _completion_checker(async_id, context_id):\n\n    if not context_id:\n        logging.debug(\"Context for async %s does not exist\", async_id)\n        return\n\n    context = FuriousContext.from_id(context_id)\n    marker = FuriousCompletionMarker.get_by_id(context_id)\n\n    if marker and marker.complete:\n        logging.info(\"Context %s 
already complete\" % context_id)\n return True\n\n task_ids = context.task_ids\n if async_id in task_ids:\n task_ids.remove(async_id)\n\n logging.debug(\"Loaded context.\")\n logging.debug(task_ids)\n\n done, has_errors = _check_markers(task_ids)\n\n if not done:\n return False\n\n _mark_context_complete(marker, context, has_errors)\n\n return True", "response": "Check if all Async jobs within a Context have been run."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _check_markers(task_ids, offset=10):\n\n shuffle(task_ids)\n has_errors = False\n\n for index in xrange(0, len(task_ids), offset):\n keys = [ndb.Key(FuriousAsyncMarker, id)\n for id in task_ids[index:index + offset]]\n\n markers = ndb.get_multi(keys)\n\n if not all(markers):\n logging.debug(\"Not all Async's complete\")\n return False, None\n\n # Did any of the aync's fail? Check the success property on the\n # AsyncResult.\n has_errors = not all((marker.success for marker in markers))\n\n return True, has_errors", "response": "Returns a flag for markers being found for the task_ids."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _mark_context_complete(marker, context, has_errors):\n\n current = None\n\n if marker:\n current = marker.key.get()\n\n if not current:\n return False\n\n if current and current.complete:\n return False\n\n current.complete = True\n current.has_errors = has_errors\n current.put()\n\n # Kick off completion tasks.\n _insert_post_complete_tasks(context)\n\n return True", "response": "Transactionally complete the context."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _insert_post_complete_tasks(context):\n\n logging.debug(\"Context %s is complete.\", context.id)\n\n # Async event handlers\n context.exec_event_handler('complete', transactional=True)\n\n # Insert cleanup tasks\n try:\n # TODO: If 
tracking results we may not want to auto cleanup and instead\n        # wait until the results have been accessed.\n        from furious.async import Async\n        Async(_cleanup_markers, queue=CLEAN_QUEUE,\n              args=[context.id, context.task_ids],\n              task_args={'countdown': CLEAN_DELAY}).start()\n    except:\n        pass", "response": "Insert the event's asyncs and cleanup tasks."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ndeletes the FuriousAsyncMarker entities corresponding to ids.", "response": "def _cleanup_markers(context_id, task_ids):\n    \"\"\"Delete the FuriousAsyncMarker entities corresponding to ids.\"\"\"\n\n    logging.debug(\"Cleanup %d markers for Context %s\",\n                  len(task_ids), context_id)\n\n    # TODO: Handle exceptions and retries here.\n    delete_entities = [ndb.Key(FuriousAsyncMarker, id) for id in task_ids]\n    delete_entities.append(ndb.Key(FuriousCompletionMarker, context_id))\n\n    ndb.delete_multi(delete_entities)\n\n    logging.debug(\"Markers cleaned.\")"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef store_context(context):\n\n    logging.debug(\"Attempting to store Context %s.\", context.id)\n\n    entity = FuriousContext.from_context(context)\n\n    # TODO: Handle exceptions and retries here.\n    marker = FuriousCompletionMarker(id=context.id)\n    key, _ = ndb.put_multi((entity, marker))\n\n    logging.debug(\"Stored Context with key: %s.\", key)\n\n    return key", "response": "Persist a furious.context.Context object into the datastore by loading it\n    into a FuriousContext ndb.
Model.\n "} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\npersist the Async s result to the datastore.", "response": "def store_async_result(async_id, async_result):\n \"\"\"Persist the Async's result to the datastore.\"\"\"\n\n logging.debug(\"Storing result for %s\", async_id)\n\n key = FuriousAsyncMarker(\n id=async_id, result=json.dumps(async_result.to_dict()),\n status=async_result.status).put()\n\n logging.debug(\"Setting Async result %s using marker: %s.\", async_result,\n key)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef store_async_marker(async_id, status):\n\n logging.debug(\"Attempting to mark Async %s complete.\", async_id)\n\n # QUESTION: Do we trust if the marker had a flag result to just trust it?\n marker = FuriousAsyncMarker.get_by_id(async_id)\n\n if marker:\n logging.debug(\"Marker already exists for %s.\", async_id)\n return\n\n # TODO: Handle exceptions and retries here.\n key = FuriousAsyncMarker(id=async_id, status=status).put()\n\n logging.debug(\"Marked Async complete using marker: %s.\", key)", "response": "Persist a marker indicating the Async ran to the datastore."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef iter_context_results(context, batch_size=10, task_cache=None):\n\n for futures in iget_batches(context.task_ids, batch_size=batch_size):\n for key, future in futures:\n task = future.get_result()\n\n if task_cache is not None:\n task_cache[key.id()] = task\n\n yield key.id(), task", "response": "Yield out the results found on the markers for the context task ids."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef iget_batches(task_ids, batch_size=10):\n\n make_key = lambda _id: ndb.Key(FuriousAsyncMarker, _id)\n for keys in i_batch(imap(make_key, task_ids), batch_size):\n yield izip(keys, 
ndb.get_multi_async(keys))", "response": "Yields out a map of the keys and futures in batches of the batch size."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncreate a cls entity from a context.", "response": "def from_context(cls, context):\n        \"\"\"Create a `cls` entity from a context.\"\"\"\n        return cls(id=context.id, context=context.to_dict())"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef from_id(cls, id):\n        from furious.context import Context\n\n        # TODO: Handle exceptions and retries here.\n        entity = cls.get_by_id(id)\n        if not entity:\n            raise FuriousContextNotFoundError(\n                \"Context entity not found for: {}\".format(id))\n\n        return Context.from_dict(entity.context)", "response": "Load a cls entity and instantiate the Context it stores."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nyield the async results for the context.", "response": "def items(self):\n        \"\"\"Yield the async results for the context.\"\"\"\n        for key, task in self._tasks:\n            if not (task and task.result):\n                yield key, None\n            else:\n                yield key, json.loads(task.result)[\"payload\"]"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nyield the async result values for the context.", "response": "def values(self):\n        \"\"\"Yield the async result values for the context.\"\"\"\n        for _, task in self._tasks:\n            if not (task and task.result):\n                yield None\n            else:\n                yield json.loads(task.result)[\"payload\"]"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ndisplay AiiDA object in Jupyter notebooks.", "response": "def aiidalab_display(obj, downloadable=True, **kwargs):\n    \"\"\"Display AiiDA data types in Jupyter notebooks.\n\n    :param downloadable: If True, add link/button to download content of displayed AiiDA object.\n\n    Defers to IPython.display.display for any objects it does not 
recognize.\n    \"\"\"\n    from aiidalab_widgets_base import aiida_visualizers\n    try:\n        visualizer = getattr(aiida_visualizers, AIIDA_VISUALIZER_MAPPING[obj.type])\n        display(visualizer(obj, downloadable=downloadable), **kwargs)\n    except KeyError:\n        display(obj, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn the password and immediately destroy it", "response": "def __password(self):\n        \"\"\"Return the password and immediately destroy it\"\"\"\n        passwd = copy(self._inp_password.value)\n        self._inp_password.value = ''\n        return passwd"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef __proxy_password(self):\n        passwd = copy(self._inp_proxy_password.value)\n        self._inp_proxy_password.value = ''\n        return passwd", "response": "Return the proxy password and immediately destroy it"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef username(self):\n        if len(self._inp_username.value.strip()) == 0: # if no username was provided by the user\n            if not self.hostname is None:\n                config = parse_sshconfig(self.hostname)\n                if 'user' in config: # if username is present in the config file\n                    return config['user']\n        else:\n            return None\n        else:\n            return self._inp_username.value", "response": "Look for the username in the user's input and in the config file"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nloads a single object from the database by primary key.", "response": "def load(cls, rr, pk):\n        '''\n        Retrieves a document from the database, by primary key.\n        '''\n        if pk is None:\n            return None\n        d = rr.table(cls.table).get(pk).run()\n        if d is None:\n            return None\n        doc = cls(rr, d)\n        return doc"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef table_ensure(cls, rr):\n        '''\n        Creates the table if it doesn't exist.\n        '''\n        
dbs = rr.db_list().run()\n        if not rr.dbname in dbs:\n            logging.info('creating rethinkdb database %s', repr(rr.dbname))\n            rr.db_create(rr.dbname).run()\n        tables = rr.table_list().run()\n        if not cls.table in tables:\n            logging.info(\n                    'creating rethinkdb table %s in database %s',\n                    repr(cls.table), repr(rr.dbname))\n            cls.table_create(rr)", "response": "Creates the table if it doesn't exist."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn the name of the primary key field as retrieved from rethinkdb table metadata, 'id' by default. Should not be overridden. Override table_create if you want to use a nonstandard field as the primary key.", "response": "def pk_field(self):\n        '''\n        Name of the primary key field as retrieved from rethinkdb table\n        metadata, 'id' by default. Should not be overridden. Override\n        `table_create` if you want to use a nonstandard field as the primary\n        key.\n        '''\n        if not self._pk:\n            try:\n                pk = self.rr.db('rethinkdb').table('table_config').filter({\n                    'db': self.rr.dbname, 'name': self.table}).get_field(\n                        'primary_key')[0].run()\n                self._pk = pk\n            except Exception as e:\n                raise Exception(\n                        'problem determining primary key for table %s.%s: %s',\n                        self.rr.dbname, self.table, e)\n        return self._pk"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef save(self):\n        '''\n        Persist changes to rethinkdb. Updates only the fields that have\n        changed. Performs insert rather than update if the document has no\n        primary key or if the primary key is absent from the database.\n\n        If there have been any changes to nested fields, updates the first\n        level attribute. 
For example, if foo['bar']['baz']['quux'] has changed,\n all of foo['bar'] is replaced, but foo['something_else'] is not\n touched.\n '''\n should_insert = False\n try:\n self[self.pk_field] # raises KeyError if missing\n if self._updates:\n # r.literal() to replace, not merge with, nested fields\n updates = {field: r.literal(self._updates[field])\n for field in self._updates}\n query = self.rr.table(self.table).get(\n self.pk_value).update(updates)\n result = query.run()\n if result['skipped']: # primary key not found\n should_insert = True\n elif result['errors'] or result['deleted']:\n raise Exception(\n 'unexpected result %s from rethinkdb query %s' % (\n result, query))\n if not should_insert and self._deletes:\n query = self.rr.table(self.table).get(self.pk_value).replace(\n r.row.without(self._deletes))\n result = query.run()\n if result['errors']: # primary key not found\n should_insert = True\n elif result['replaced'] != 1:\n raise Exception(\n 'unexpected result %s from rethinkdb query %s' % (\n result, query))\n except KeyError:\n should_insert = True\n\n if should_insert:\n query = self.rr.table(self.table).insert(self)\n result = query.run()\n if result['inserted'] != 1:\n raise Exception(\n 'unexpected result %s from rethinkdb query %s' % (\n result, query))\n if 'generated_keys' in result:\n dict.__setitem__(\n self, self.pk_field, result['generated_keys'][0])\n\n self._clear_updates()", "response": "Saves the current object to rethinkdb."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nrefresh the document from the database.", "response": "def refresh(self):\n '''\n Refresh the document from the database.\n '''\n d = self.rr.table(self.table).get(self.pk_value).run()\n if d is None:\n raise KeyError\n for k in d:\n dict.__setitem__(\n self, k, watch(d[k], callback=self._updated, field=k))"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef parse(self, xml_file, 
view_name=None) -> XmlNode:\n        self._setup_parser()\n        try:\n            self._view_name = view_name\n            self._parser.ParseFile(xml_file)\n        except ExpatError as error:\n            # pylint: disable=E1101\n            raise XmlError(errors.messages[error.code], ViewInfo(view_name, error.lineno))\n\n        root = self._root\n        self._reset()\n        return root", "response": "Parses xml file and returns root XmlNode"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nfinds unrelated variables.", "response": "def find_unrelated(x, plim=0.1, axis=0):\n    \"\"\"Find indices of insignificant un-/correlated variables\n\n    Example:\n    --------\n        i, j = find_unrelated(x, plim, rlim)\n\n    \"\"\"\n    # transpose if axis != 0\n    if axis != 0:\n        x = x.T\n\n    # read dimensions and allocate variables\n    _, c = x.shape\n    pairs = []\n\n    # compute each (i,j)-th correlation\n    for i in range(0, c):\n        for j in range(i + 1, c):\n            _, p = scipy.stats.pearsonr(x[:, i], x[:, j])\n            if p > plim:\n                pairs.append((i, j))\n    # done\n    return tuple(pairs)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef apply_connectivity_changes(self, request):\n\n        if request is None or request == \"\":\n            raise Exception(self.__class__.__name__, \"request is None or empty\")\n\n        holder = JsonRequestDeserializer(jsonpickle.decode(request))\n\n        if not holder or not hasattr(holder, \"driverRequest\"):\n            raise Exception(self.__class__.__name__, \"Deserialized request is None or empty\")\n\n        driver_response = DriverResponse()\n        add_vlan_thread_list = []\n        remove_vlan_thread_list = []\n        driver_response_root = DriverResponseRoot()\n\n        for action in holder.driverRequest.actions:\n            self._logger.info(\"Action: %s\", action.__dict__)\n            self._validate_request_action(action)\n\n            action_id = action.actionId\n            full_name = action.actionTarget.fullName\n            port_mode = action.connectionParams.mode.lower()\n\n            if action.type == \"setVlan\":\n                qnq = False\n                ctag = \"\"\n                for 
attribute in action.connectionParams.vlanServiceAttributes:\n if attribute.attributeName.lower() == \"qnq\" and attribute.attributeValue.lower() == \"true\":\n qnq = True\n if attribute.attributeName.lower() == \"ctag\":\n ctag = attribute.attributeValue\n\n for vlan_id in self._get_vlan_list(action.connectionParams.vlanId):\n add_vlan_thread = Thread(target=self.add_vlan,\n name=action_id,\n args=(vlan_id, full_name, port_mode, qnq, ctag))\n add_vlan_thread_list.append(add_vlan_thread)\n elif action.type == \"removeVlan\":\n for vlan_id in self._get_vlan_list(action.connectionParams.vlanId):\n remove_vlan_thread = Thread(target=self.remove_vlan,\n name=action_id,\n args=(vlan_id, full_name, port_mode,))\n remove_vlan_thread_list.append(remove_vlan_thread)\n else:\n self._logger.warning(\"Undefined action type determined '{}': {}\".format(action.type, action.__dict__))\n continue\n\n # Start all created remove_vlan_threads\n for thread in remove_vlan_thread_list:\n thread.start()\n\n # Join all remove_vlan_threads. Main thread will wait completion of all remove_vlan_thread\n for thread in remove_vlan_thread_list:\n thread.join()\n\n # Start all created add_vlan_threads\n for thread in add_vlan_thread_list:\n thread.start()\n\n # Join all add_vlan_threads. 
Main thread will wait completion of all add_vlan_thread\n        for thread in add_vlan_thread_list:\n            thread.join()\n\n        request_result = []\n        for action in holder.driverRequest.actions:\n            result_statuses, message = zip(*self.result.get(action.actionId))\n            if all(result_statuses):\n                action_result = ConnectivitySuccessResponse(action,\n                                                            \"Add Vlan {vlan} configuration successfully completed\"\n                                                            .format(vlan=action.connectionParams.vlanId))\n            else:\n                message_details = \"\\n\\t\".join(message)\n                action_result = ConnectivityErrorResponse(action, \"Add Vlan {vlan} configuration failed.\"\n                                                          \"\\nAdd Vlan configuration details:\\n{message_details}\"\n                                                          .format(vlan=action.connectionParams.vlanId,\n                                                                  message_details=message_details))\n            request_result.append(action_result)\n\n        driver_response.actionResults = request_result\n        driver_response_root.driverResponse = driver_response\n        return serialize_to_json(driver_response_root)", "response": "Handle the apply connectivity changes request json"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _validate_request_action(self, action):\n        is_fail = False\n        fail_attribute = \"\"\n        for class_attribute in self.APPLY_CONNECTIVITY_CHANGES_ACTION_REQUIRED_ATTRIBUTE_LIST:\n            if type(class_attribute) is tuple:\n                if not hasattr(action, class_attribute[0]):\n                    is_fail = True\n                    fail_attribute = class_attribute[0]\n                if not hasattr(getattr(action, class_attribute[0]), class_attribute[1]):\n                    is_fail = True\n                    fail_attribute = class_attribute[1]\n            else:\n                if not hasattr(action, class_attribute):\n                    is_fail = True\n                    fail_attribute = class_attribute\n\n        if is_fail:\n            raise Exception(self.__class__.__name__,\n                            \"Mandatory field {0} is missing in ApplyConnectivityChanges request json\".format(\n                                fail_attribute))", "response": "Validate the action from the request json according to APPLY_CONNECTIVITY_CHANGES_ACTION_REQUIRED_ATTRIBUTE_LIST"} {"SOURCE": "codesearchnet", "instruction": "Write a 
Python 3 function that can\nget VLAN list from input string", "response": "def _get_vlan_list(self, vlan_str):\n \"\"\" Get VLAN list from input string\n\n :param vlan_str:\n :return list of VLANs or Exception\n \"\"\"\n\n result = set()\n for splitted_vlan in vlan_str.split(\",\"):\n if \"-\" not in splitted_vlan:\n if validate_vlan_number(splitted_vlan):\n result.add(int(splitted_vlan))\n else:\n raise Exception(self.__class__.__name__, \"Wrong VLAN number detected {}\".format(splitted_vlan))\n else:\n if self.IS_VLAN_RANGE_SUPPORTED:\n if validate_vlan_range(splitted_vlan):\n result.add(splitted_vlan)\n else:\n raise Exception(self.__class__.__name__, \"Wrong VLANs range detected {}\".format(vlan_str))\n else:\n start, end = map(int, splitted_vlan.split(\"-\"))\n if validate_vlan_number(start) and validate_vlan_number(end):\n if start > end:\n start, end = end, start\n for vlan in range(start, end + 1):\n result.add(vlan)\n else:\n raise Exception(self.__class__.__name__, \"Wrong VLANs range detected {}\".format(vlan_str))\n\n return map(str, list(result))"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef add_vlan(self, vlan_id, full_name, port_mode, qnq, c_tag):\n\n try:\n action_result = self.add_vlan_flow.execute_flow(vlan_range=vlan_id,\n port_mode=port_mode,\n port_name=full_name,\n qnq=qnq,\n c_tag=c_tag)\n self.result[current_thread().name].append((True, action_result))\n except Exception as e:\n self._logger.error(traceback.format_exc())\n self.result[current_thread().name].append((False, e.message))", "response": "Adds VLAN to the specified number of VLANs in the specified local interface."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef remove_vlan(self, vlan_id, full_name, port_mode):\n\n try:\n\n action_result = self.remove_vlan_flow.execute_flow(vlan_range=vlan_id,\n port_name=full_name,\n port_mode=port_mode)\n 
self.result[current_thread().name].append((True, action_result))\n except Exception as e:\n self._logger.error(traceback.format_exc())\n self.result[current_thread().name].append((False, e.message))", "response": "This method is used to remove VLAN from the interface."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nprocesses view and return root Node", "response": "def render_view(view_name, **args):\n '''Process view and return root Node'''\n try:\n root_xml = get_view_root(view_name)\n return render(root_xml, **args)\n except CoreError as error:\n error.add_view_info(ViewInfo(view_name, None))\n raise\n except:\n info = exc_info()\n error = ViewError('Unknown error occured during rendering', ViewInfo(view_name, None))\n error.add_cause(info[1])\n raise error from info[1]"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nparsing xml file and returns root XmlNode", "response": "def get_view_root(view_name: str) -> XmlNode:\n '''Parses xml file and return root XmlNode'''\n try:\n path = join(deps.views_folder, '{0}.{1}'.format(view_name, deps.view_ext))\n parser = Parser()\n if path not in _XML_CACHE:\n with open(path, 'rb') as xml_file:\n _XML_CACHE[path] = parser.parse(xml_file, view_name)\n return _XML_CACHE[path]\n except FileNotFoundError as error:\n error = ViewError('View is not found')\n error.add_info('View name', view_name)\n error.add_info('Path', path)\n raise error\n except CoreError as error:\n error.add_view_info(ViewInfo(view_name, None))\n raise\n except:\n info = exc_info()\n error = ViewError('Unknown error occured during parsing xml', ViewInfo(view_name, None))\n error.add_cause(info[1])\n raise error from info[1]"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef import_global(node: Node, key: str, path: Any):\n node.node_globals[key] = import_path(path)", "response": "Imports passed module class function 
full name and stores it to node's globals"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nresolve passed dependency and stores it to node's globals", "response": "def inject_global(node: Node, global_key: str, inject_key: Any):\n    \"\"\"Resolves passed dependency and stores it to node's globals\"\"\"\n    value = get_current_scope().container.get(inject_key)\n    set_global(node, global_key, value)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef set_global(node: Node, key: str, value: Any):\n    node.node_globals[key] = value", "response": "Adds passed value to node's globals"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef call(node: Node, key: str, value: Any):\n    value = _to_list(value)\n    if not value or not isinstance(value[-1], dict):\n        value.append({})\n    args = value[0:-1]\n    kwargs = value[-1]\n    node.__dict__[key](*args, **kwargs)", "response": "Calls node instance method"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef request(self, rule, view_class, annotation):\n    headers = self._get_headers(rule, annotation)\n    example_values = self._get_example_values(rule, annotation)\n    # If any of the example values for DELETE/GET HTTP methods are dicts\n    # or lists, we will need to json dump them before building the rule,\n    # otherwise the query string parameter won't get parsed correctly\n    # by doctor.\n    if annotation.http_method.upper() in ('DELETE', 'GET'):\n        for key, value in list(example_values.items()):\n            if isinstance(value, (dict, list)):\n                example_values[key] = json.dumps(value)\n    _, path = rule.build(example_values, append_unknown=True)\n    if annotation.http_method.upper() not in ('DELETE', 'GET'):\n        parsed_path = parse.urlparse(path)\n        path = parsed_path.path\n        params = example_values\n    else:\n        params = {}\n    method_name = 
annotation.http_method.lower()\n method = getattr(self.test_client, method_name)\n if method_name in ('post', 'put'):\n response = method(path, data=json.dumps(params), headers=headers,\n content_type='application/json')\n else:\n response = method(path, data=params, headers=headers)\n return {\n 'url': '/'.join([self.url_prefix, path.lstrip('/')]),\n 'method': annotation.http_method.upper(),\n 'params': params,\n 'response': response.data,\n }", "response": "Make a request against the app."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef observe(self, key: str, callback: Callable[[Any, Any], None]):\n if key not in self._callbacks:\n self._add_key(key)\n self._callbacks[key].append(callback)", "response": "Subscribes to key changes"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef release(self, key: str, callback: Callable[[Any, Any], None]):\n try:\n self._callbacks[key].remove(callback)\n except (KeyError, ValueError):\n pass", "response": "Releases callback from key changes"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nsubscribing to key changes", "response": "def observe(self, key, callback: Callable[[Any, Any], None]):\n \"\"\"Subscribes to key changes\"\"\"\n if key not in self.__dict__ and key not in self._callbacks:\n raise KeyError('Entity ' + str(self) + 'doesn''t have attribute' + key)\n super().observe(key, callback)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nsubscribes to all keys changes", "response": "def observe_all(self, callback: Callable[[str, Any, Any], None]):\n \"\"\"Subscribes to all keys changes\"\"\"\n self._all_callbacks.append(callback)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreleasing callback from all keys changes", "response": "def release_all(self, callback: Callable[[str, Any, Any], None]):\n 
\"\"\"Releases callback from all keys changes\"\"\"\n self._all_callbacks.remove(callback)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nremove own key value", "response": "def remove_key(self, key):\n \"\"\"Remove own key, value\"\"\"\n try:\n self._own_keys.discard(key)\n if self._parent and self._parent.has_key(key):\n self._container[key] = self._parent[key]\n else:\n del self._container[key]\n except KeyError:\n pass"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nadds a new rule to the list of rules for the given type.", "response": "def add_rule(self, binding_type: str, rule: BindingRule):\n \"\"\"Adds new rule\"\"\"\n if binding_type not in self._rules:\n self._rules[binding_type] = []\n\n self._rules[binding_type].insert(0, rule)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nfinds rule by binding type and args", "response": "def find_rule(self, binding_type: str, **args):\n \"\"\"Finds rule by binding type and args\"\"\"\n try:\n rules = self._rules[binding_type]\n return next(rule for rule in rules if rule.suitable(**args))\n except (KeyError, StopIteration):\n return None"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_expression_target(expression: Expression, expr_vars: InheritedDict) -> BindingTarget:\n '''Factory method to create expression target'''\n root = expression.get_object_tree()\n if len(root.children) != 1 or not PROPERTY_EXPRESSION_REGEX.fullmatch(expression.code):\n error = BindingError('Expression should be property expression')\n error.add_info('Expression', expression.code)\n raise error\n if root.children[0].children:\n return PropertyExpressionTarget(expression, expr_vars)\n return GlobalValueExpressionTarget(expression, expr_vars)", "response": "Factory method to create expression target"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 
function that\ncalls the modifier on instance with passed value", "response": "def on_change(self, value):\n    '''Calls modifier on instance with passed value'''\n    self._modifier(self.inst, self.prop, value)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef args():\n    parser = argparse.ArgumentParser(description='Run the Furious Examples.')\n\n    parser.add_argument('--gae-sdk-path', metavar='S', dest=\"gae_lib_path\",\n                        default=\"/usr/local/google_appengine\",\n                        help='path to the GAE SDK')\n\n    parser.add_argument('url', metavar='U', default=\"\", nargs=1,\n                        help=\"the endpoint to run\")\n\n    return parser.parse_args()", "response": "Add and parse the arguments for the script."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef setup(options):\n    sys.path.insert(0, options.gae_lib_path)\n\n    from dev_appserver import fix_sys_path\n    fix_sys_path()", "response": "Sets up sys.path so the App Engine SDK can be imported."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef run(options):\n    from google.appengine.tools import appengine_rpc\n    from google.appengine.tools import appcfg\n\n    source = 'furious'\n\n    # use the same user agent that GAE uses in appcfg\n    user_agent = appcfg.GetUserAgent()\n\n    # Since we're only using the dev server for now we can hard code these\n    # values. 
This will need to change and accept these values as variables\n    # when this is wired up to hit appspots.\n    server = appengine_rpc.HttpRpcServer(\n        'localhost:8080', lambda: ('test@example.com', 'password'), user_agent,\n        source, secure=False)\n\n    # if no url is passed in just use the top level.\n    url = \"/\"\n    if options.url:\n        url += options.url[0]\n\n    # use the dev server authentication for now.\n    server._DevAppServerAuthenticate()\n\n    # send a simple GET request to the url\n    server.Send(url, content_type=\"text/html; charset=utf-8\",\n                payload=None)", "response": "Run the passed-in URL of the example using GAE's RPC runner."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef discover(self):\n\n    details = self.autoload_flow.execute_flow(self.resource_config.supported_os,\n                                              self.resource_config.shell_name,\n                                              self.resource_config.family,\n                                              self.resource_config.name)\n\n    self._log_device_details(details)\n    return details", "response": "Run the autoload flow to discover and log device details."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef has(self, querypart_name, value=None):\n    querypart = self._queryparts.get(querypart_name)\n    if not querypart:\n        return False\n    if not querypart.is_set:\n        return False\n    if value:\n        return querypart.has(value)\n    return True", "response": "Returns True if querypart_name with value is set."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nexecute expression with passed parameters and returns result", "response": "def execute(self, parameters: dict = None):\n    \"\"\"Executes expression with passed parameters and returns result\"\"\"\n    try:\n        parameters = {} if parameters is None else parameters\n        return eval(self._compiled_code, parameters, {})\n    except:\n        info = exc_info()\n        error = CompilationError('Error occurred in expression execution', self.code)\n        error.add_cause(info[1])\n        raise error from info[1]"} 
{"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef aborting_function():\n    import random\n\n    logging.info('In aborting_function')\n\n    if random.random() < .5:\n        from furious.errors import AbortAndRestart\n\n        logging.info('Getting ready to restart')\n\n        # Raise AbortAndRestart like an Exception, and watch the magic happen.\n        raise AbortAndRestart()\n\n    logging.info('No longer restarting')", "response": "Randomly raises AbortAndRestart about half the time to demonstrate the abort-and-restart behavior."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nremoves ANSI escape codes from student result.", "response": "def strip_ansi(state):\n    \"\"\"Remove ANSI escape codes from student result.\"\"\"\n    stu_res = _strip_ansi(state.student_result)\n\n    return state.to_child(student_result = stu_res)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef has_code(state, text, incorrect_msg=\"The checker expected to find `{{text}}` in your command.\", fixed=False):\n\n    stu_code = state.student_code\n\n    # either simple text matching or regex test\n    res = text in stu_code if fixed else re.search(text, stu_code)\n\n    if not res:\n        _msg = state.build_message(incorrect_msg, fmt_kwargs={ 'text': text })\n        state.do_test(_msg)\n\n    return state", "response": "Check whether the student code contains text."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef has_output(state,\n               text,\n               incorrect_msg=\"The checker expected to find {{'' if fixed else 'the pattern '}}`{{text}}` in the output of your command.\",\n               fixed=False,\n               strip_ansi=True):\n    \"\"\"Check whether student output contains specific text.\n\n    Before you use ``has_output()``, have a look at ``has_expr_output()`` or ``has_expr_error()``;\n    they might be more fit for your use case.\n\n    
Args:\n state: State instance describing student and solution code. Can be omitted if used with ``Ex()``.\n text : text that student output must contain. Can be a regex pattern or a simple string.\n incorrect_msg: if specified, this overrides the automatically generated feedback message\n in case ``text`` is not found in the student output.\n fixed: whether to match ``text`` exactly, rather than using regular expressions.\n strip_ansi: whether to remove ANSI escape codes from output\n\n :Example:\n\n Suppose the solution requires you to do: ::\n\n echo 'this is a printout!'\n\n The following SCT can be written: ::\n\n Ex().has_output(r'this\\\\s+is\\\\s+a\\\\s+print\\\\s*out')\n\n Submissions that would pass: ::\n\n echo 'this is a print out'\n test='this is a printout!' && echo $test\n\n Submissions that would fail: ::\n\n echo 'this is a wrong printout'\n \"\"\"\n\n stu_output = state.student_result\n\n if strip_ansi: stu_output = _strip_ansi(stu_output)\n\n # either simple text matching or regex test\n res = text in stu_output if fixed else re.search(text, stu_output)\n\n if not res:\n _msg = state.build_message(incorrect_msg, fmt_kwargs={ 'text': text, 'fixed': fixed })\n state.do_test(_msg)\n\n return state", "response": "Check whether student output contains specific text."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef has_cwd(state, dir, incorrect_msg=\"Your current working directory should be `{{dir}}`. 
Use `cd {{dir}}` to navigate there.\"):\n expr = \"[[ $PWD == '{}' ]]\".format(dir)\n _msg = state.build_message(incorrect_msg, fmt_kwargs={ 'dir': dir })\n has_expr_exit_code(state, expr, output=\"0\", incorrect_msg=_msg)\n return state", "response": "Check whether the student is in the expected working directory."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nupdates the status of the service with the given status_info.", "response": "def heartbeat(self, status_info):\n '''\n Update service status, indicating \"up\"-ness.\n\n Args:\n status_info (dict): a dictionary representing the status of the\n service\n\n `status_info` must have at least the fields 'role', 'load', and\n 'ttl'. Some additional fields are populated automatically by this\n method. If the field 'id' is absent, it will be generated by rethinkdb.\n\n See the ServiceRegistry class-level documentation for more information\n about the various fields.\n\n Returns:\n On success, returns the modified status info dict. 
On failure\n communicating with rethinkdb, returns `status_info` unmodified.\n\n Raises:\n Exception: if `status_info` is missing a required field, or a\n `status_info['ttl']` is not a number greater than zero\n '''\n for field in 'role', 'ttl', 'load':\n if not field in status_info:\n raise Exception(\n 'status_info is missing required field %s',\n repr(field))\n val = status_info['ttl']\n if not (isinstance(val, float) or isinstance(val, int)) or val <= 0:\n raise Exception('ttl must be a number > 0')\n updated_status_info = dict(status_info)\n updated_status_info['last_heartbeat'] = r.now()\n if not 'first_heartbeat' in updated_status_info:\n updated_status_info['first_heartbeat'] = updated_status_info['last_heartbeat']\n if not 'host' in updated_status_info:\n updated_status_info['host'] = socket.gethostname()\n if not 'pid' in updated_status_info:\n updated_status_info['pid'] = os.getpid()\n try:\n result = self.rr.table(self.table).insert(\n updated_status_info, conflict='replace',\n return_changes=True).run()\n return result['changes'][0]['new_val'] # XXX check\n except:\n self.logger.error('error updating service registry', exc_info=True)\n return status_info"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef unregister(self, id):\n '''\n Remove the service with id `id` from the service registry.\n '''\n result = self.rr.table(self.table).get(id).delete().run()\n if result != {\n 'deleted':1, 'errors':0,'inserted':0,\n 'replaced':0,'skipped':0,'unchanged':0}:\n self.logger.warn(\n 'unexpected result attempting to delete id=%s from '\n 'rethinkdb services table: %s', id, result)", "response": "Remove the service with id from the service registry."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef unique_service(self, role, candidate=None):\n '''\n Retrieve a unique service, possibly setting or heartbeating it first.\n\n A \"unique service\" is a service with only one 
instance for a given\n role. Uniqueness is enforced by using the role name as the primary key\n `{'id':role, ...}`.\n\n Args:\n role (str): role name\n candidate (dict): if supplied, candidate info for the unique\n service, explained below\n\n `candidate` normally represents \"myself, this instance of the service\".\n When a service supplies `candidate`, it is nominating itself for\n selection as the unique service, or retaining its claim to the role\n (heartbeating).\n\n If `candidate` is supplied:\n\n First, atomically in a single rethinkdb query, checks if there is\n already a unique healthy instance of this service in rethinkdb, and\n if not, sets `candidate` as the unique service.\n\n Looks at the result of that query to determine if `candidate` is\n the unique service or not. If it is, updates 'last_heartbeat' in\n rethinkdb.\n\n To determine whether `candidate` is the unique service, checks that\n all the fields other than 'first_heartbeat' and 'last_heartbeat'\n have the same value in `candidate` as in the value returned from\n rethinkdb.\n\n ***Important***: this means that the caller must ensure that none\n of the fields of the unique service ever change. Don't store things\n like 'load' or any other volatile value in there. 
If you try to do\n that, heartbeats will end up not being sent, and the unique service\n will flap among the candidates.\n\n Finally, retrieves the service from rethinkdb and returns it, if it is\n healthy.\n\n Returns:\n the unique service, if there is one and it is healthy, otherwise\n None\n '''\n # use the same concept of 'now' for all queries\n now = doublethink.utcnow()\n if candidate is not None:\n candidate['id'] = role\n\n if not 'ttl' in candidate:\n raise Exception(\"candidate is missing required field 'ttl'\")\n val = candidate['ttl']\n if not (isinstance(val, float) or isinstance(val, int)) or val <= 0:\n raise Exception(\"'ttl' must be a number > 0\")\n\n candidate['first_heartbeat'] = now\n candidate['last_heartbeat'] = now\n if not 'host' in candidate:\n candidate['host'] = socket.gethostname()\n if not 'pid' in candidate:\n candidate['pid'] = os.getpid()\n\n result = self.rr.table(\n self.table, read_mode='majority').get(role).replace(\n lambda row: r.branch(\n r.branch(\n row,\n row['last_heartbeat'] > now - row['ttl'],\n False),\n row, candidate),\n return_changes='always').run()\n new_val = result['changes'][0]['new_val']\n if all([new_val.get(k) == candidate[k] for k in candidate\n if k not in ('first_heartbeat', 'last_heartbeat')]):\n # candidate is the unique_service, send a heartbeat\n del candidate['first_heartbeat'] # don't touch first_heartbeat\n self.rr.table(self.table).get(role).update(candidate).run()\n\n results = list(self.rr.table(\n self.table, read_mode='majority').get_all(role).filter(\n lambda row: row['last_heartbeat'] > now - row['ttl']).run())\n if results:\n return results[0]\n else:\n return None", "response": "Returns a unique service with only one instance for a given role and candidate."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn a list of healthy services for the given role.", "response": "def healthy_services(self, role=None):\n '''\n Look up healthy services in the 
registry.\n\n A service is considered healthy if its 'last_heartbeat' was less than\n 'ttl' seconds ago\n\n Args:\n role (str, optional): role name\n\n Returns:\n If `role` is supplied, returns list of healthy services for the\n given role, otherwise returns list of all healthy services. May\n return an empty list.\n '''\n try:\n query = self.rr.table(self.table)\n if role:\n query = query.get_all(role, index='role')\n query = query.filter(\n lambda svc: r.now().sub(svc[\"last_heartbeat\"]) < svc[\"ttl\"] #.default(20.0)\n ).order_by(\"load\")\n result = query.run()\n return result\n except r.ReqlNonExistenceError:\n return []"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nget a new furious context and add it to the registry.", "response": "def new(batch_size=None, **options):\n \"\"\"Get a new furious context and add it to the registry. If a batch size is\n specified, use an AutoContext which inserts tasks in batches as they are\n added to the context.\n \"\"\"\n\n if batch_size:\n new_context = AutoContext(batch_size=batch_size, **options)\n else:\n new_context = Context(**options)\n\n _local.get_local_context().registry.append(new_context)\n\n return new_context"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_current_async():\n local_context = _local.get_local_context()\n\n if local_context._executing_async:\n return local_context._executing_async[-1]\n\n raise errors.NotInContextError('Not in an _ExecutionContext.')", "response": "Return a reference to the currently executing Async job object or None if not in an Async job."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn a reference to the current Context object.", "response": "def get_current_context():\n \"\"\"Return a reference to the current Context object.\n \"\"\"\n local_context = _local.get_local_context()\n\n if local_context.registry:\n return 
local_context.registry[-1]\n\n raise errors.NotInContextError('Not in a Context.')"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ngenerate a base64 encoded version of the given XSRFToken.", "response": "def generate_token_string(self, action=None):\n \"\"\"Generate a hash of the given token contents that can be verified.\n\n :param action:\n A string representing the action that the generated hash is valid\n for. This string is usually a URL.\n :returns:\n A string containing the hash contents of the given `action` and the\n contents of the `XSRFToken`. Can be verified with\n `verify_token_string`. The string is base64 encoded so it is safe\n to use in HTML forms without escaping.\n \"\"\"\n digest_maker = self._digest_maker()\n digest_maker.update(self.user_id)\n digest_maker.update(self._DELIMITER)\n if action:\n digest_maker.update(action)\n digest_maker.update(self._DELIMITER)\n\n digest_maker.update(str(self.current_time))\n return base64.urlsafe_b64encode(\n self._DELIMITER.join([digest_maker.hexdigest(),\n str(self.current_time)]))"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef verify_token_string(self,\n token_string,\n action=None,\n timeout=None,\n current_time=None):\n \"\"\"Generate a hash of the given token contents that can be verified.\n\n :param token_string:\n A string containing the hashed token (generated by\n `generate_token_string`).\n :param action:\n A string containing the action that is being verified.\n :param timeout:\n An int or float representing the number of seconds that the token\n is valid for. If None then tokens are valid forever.\n :current_time:\n An int representing the number of seconds since the epoch. Will be\n used by to check for token expiry if `timeout` is set. 
If `None`\n then the current time will be used.\n :raises:\n XSRFTokenMalformed if the given token_string cannot be parsed.\n XSRFTokenExpiredException if the given token string is expired.\n XSRFTokenInvalid if the given token string does not match the\n contents of the `XSRFToken`. \n \"\"\"\n try:\n decoded_token_string = base64.urlsafe_b64decode(token_string)\n except TypeError:\n raise XSRFTokenMalformed()\n\n split_token = decoded_token_string.split(self._DELIMITER)\n if len(split_token) != 2:\n raise XSRFTokenMalformed()\n\n try:\n token_time = int(split_token[1])\n except ValueError:\n raise XSRFTokenMalformed()\n\n if timeout is not None:\n if current_time is None:\n current_time = time.time()\n # If an attacker modifies the plain text time then it will not match\n # the hashed time so this check is sufficient.\n if (token_time + timeout) < current_time:\n raise XSRFTokenExpiredException()\n\n expected_token = XSRFToken(self.user_id, self.secret, token_time)\n expected_token_string = expected_token.generate_token_string(action)\n\n if len(expected_token_string) != len(token_string):\n raise XSRFTokenInvalid()\n\n # Compare the two strings in constant time to prevent timing attacks.\n different = 0\n for a, b in zip(token_string, expected_token_string):\n different |= ord(a) ^ ord(b)\n if different:\n raise XSRFTokenInvalid()", "response": "Verify the given token string."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef iterate_dictionary(d, path, squash_single = False):\n path_parts = path.split(\"/\")\n return_list = []\n \n if len(path_parts) == 0: #no search string\n return d\n else:\n try:\n sub_dicts = [d] #BFS, start with root node\n for i in range(0, len(path_parts)): #BFS\n new_sub_dicts = [] \n for s in sub_dicts:\n if path_parts[i] in s: #this tree node is part of the search path\n the_list = s[path_parts[i]] if isinstance(s[path_parts[i]], list) else [s[path_parts[i]]]\n for 
j in the_list: \n                        if i < len(path_parts) -1: #not a leaf node; check level\n                            if isinstance(j, dict): #skip this non-leaf node if not a dict\n                                new_sub_dicts.append(j) #BFS expansion\n                        else: #leaf node at the desired path; add to final return list\n                            return_list.append(j)\n                sub_dicts = new_sub_dicts\n            \n            #return \n            return return_list[0] if squash_single and len(return_list) == 1 else return_list if len(return_list) >= 1 else None\n        except:\n            return None", "response": "This function iterates over a dictionary and returns a list of the objects found at the given path."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning a SnmpContextManager instance for the given enable/disable flows and snmp_parameters.", "response": "def get_snmp_service(self):\n    \"\"\"\n    Enable/Disable snmp\n    :param snmp_parameters:\n    :return:\n    :rtype: SnmpContextManager\n    \"\"\"\n    return SnmpContextManager(self.enable_flow, self.disable_flow, self._snmp_parameters, self._logger)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\na command-line utility to periodically purge stale entries from the services table.", "response": "def purge_stale_services(argv=None):\n    \"\"\"Command-line utility to periodically purge stale entries from the \"services\" table.\n\n    It is designed to be used in conjunction with cron.\n    \"\"\"\n    argv = argv or sys.argv\n    arg_parser = argparse.ArgumentParser(\n        prog=os.path.basename(argv[0]), description=(\n            'doublethink-purge-stale-services: utility to periodically '\n            'purge stale entries from the \"services\" table.'))\n\n    arg_parser.add_argument(\n        \"-d\", \"--rethinkdb-db\", required=True, dest=\"database\",\n        help=\"A RethinkDB database containing a 'services' table\")\n\n    arg_parser.add_argument(\"-s\", \"--rethinkdb-servers\",\n        metavar=\"SERVERS\", dest=\"servers\", default='localhost',\n        help=\"rethinkdb servers, e.g. 
db0.foo.org,db0.foo.org:38015,db1.foo.org\")\n\n arg_parser.add_argument(\n '-v', '--verbose', dest='log_level', action='store_const',\n default=logging.INFO, const=logging.DEBUG, help=(\n 'verbose logging'))\n args = arg_parser.parse_args(argv[1:])\n logging.basicConfig(\n stream=sys.stdout, level=args.log_level, format=(\n '%(asctime)s %(process)d %(levelname)s %(threadName)s '\n '%(name)s.%(funcName)s(%(filename)s:%(lineno)d) %(message)s'))\n\n args.servers = [srv.strip() for srv in args.servers.split(\",\")]\n\n rethinker = doublethink.Rethinker(servers=args.servers, db=args.database)\n registry = doublethink.services.ServiceRegistry(rethinker)\n registry.purge_stale_services()\n return 0"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _insert_tasks(tasks, queue, transactional=False,\n retry_transient_errors=True,\n retry_delay=RETRY_SLEEP_SECS):\n \"\"\"Insert a batch of tasks into the specified queue. If an error occurs\n during insertion, split the batch and retry until they are successfully\n inserted. Return the number of successfully inserted tasks.\n \"\"\"\n from google.appengine.api import taskqueue\n\n if not tasks:\n return 0\n\n try:\n taskqueue.Queue(name=queue).add(tasks, transactional=transactional)\n return len(tasks)\n except (taskqueue.BadTaskStateError,\n taskqueue.TaskAlreadyExistsError,\n taskqueue.TombstonedTaskError):\n if len(tasks) <= 1:\n # Task has already been inserted, no reason to report an error here.\n return 0\n\n # If a list of more than one Tasks is given, a raised exception does\n # not guarantee that no tasks were added to the queue (unless\n # transactional is set to True). 
To determine which tasks were\n # successfully added when an exception is raised, check the\n # Task.was_enqueued property.\n reinsert = _tasks_to_reinsert(tasks, transactional)\n count = len(reinsert)\n inserted = len(tasks) - count\n inserted += _insert_tasks(reinsert[:count / 2], queue, transactional,\n retry_transient_errors, retry_delay)\n inserted += _insert_tasks(reinsert[count / 2:], queue, transactional,\n retry_transient_errors, retry_delay)\n\n return inserted\n except taskqueue.TransientError:\n # Always re-raise for transactional insert, or if specified by\n # options.\n if transactional or not retry_transient_errors:\n raise\n\n reinsert = _tasks_to_reinsert(tasks, transactional)\n\n # Retry with a delay, and then let any errors re-raise.\n time.sleep(retry_delay)\n\n taskqueue.Queue(name=queue).add(reinsert, transactional=transactional)\n return len(tasks)", "response": "Insert a batch of tasks into the specified queue."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _tasks_to_reinsert(tasks, transactional):\n if transactional:\n return tasks\n\n return [task for task in tasks if not task.was_enqueued]", "response": "Return a list of tasks that should be reinserted based on the was_enqueued property and whether the insert is transactional or not."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nbatch large task lists into groups of 100 so that they can all be inserted.", "response": "def _task_batcher(tasks, batch_size=None):\n \"\"\"Batches large task lists into groups of 100 so that they can all be\n inserted.\n \"\"\"\n from itertools import izip_longest\n\n if not batch_size:\n batch_size = DEFAULT_TASK_BATCH_SIZE\n\n # Ensure the batch size is under the task api limit.\n batch_size = min(batch_size, 100)\n\n args = [iter(tasks)] * batch_size\n return ([task for task in group if task] for group in izip_longest(*args))"} {"SOURCE": "codesearchnet", 
"instruction": "Here you have a function in Python 3, explain what it does\ndef _handle_tasks_insert(self, batch_size=None):\n if self._tasks_inserted:\n raise errors.ContextAlreadyStartedError(\n \"This Context has already had its tasks inserted.\")\n\n task_map = self._get_tasks_by_queue()\n\n # QUESTION: Should the persist happen before or after the task\n # insertion? I feel like this is something that will alter the\n # behavior of the tasks themselves by adding a callback (check context\n # complete) to each Async's callback stack.\n\n # If we are able to and there is a reason to persist... persist.\n callbacks = self._options.get('callbacks')\n if self._persistence_engine and callbacks:\n self.persist()\n\n retry_transient = self._options.get('retry_transient_errors', True)\n retry_delay = self._options.get('retry_delay', RETRY_SLEEP_SECS)\n\n for queue, tasks in task_map.iteritems():\n for batch in _task_batcher(tasks, batch_size=batch_size):\n inserted = self._insert_tasks(\n batch, queue=queue, retry_transient_errors=retry_transient,\n retry_delay=retry_delay\n )\n if isinstance(inserted, (int, long)):\n # Don't blow up on insert_tasks that don't return counts.\n self._insert_success_count += inserted\n self._insert_failed_count += len(batch) - inserted", "response": "Convert all Async s into tasks then insert them into queues."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _get_tasks_by_queue(self):\n task_map = {}\n _checker = None\n\n # Ask the persistence engine for an Async to use for checking if the\n # context is complete.\n if self._persistence_engine:\n _checker = self._persistence_engine.context_completion_checker\n\n for async in self._tasks:\n queue = async.get_queue()\n if _checker:\n async.update_options(_context_checker=_checker)\n\n task = async.to_task()\n task_map.setdefault(queue, []).append(task)\n\n return task_map", "response": "Return the tasks for this Context grouped by queue."} 
{"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nadd an Async to be run on event.", "response": "def set_event_handler(self, event, handler):\n \"\"\"Add an Async to be run on event.\"\"\"\n # QUESTION: Should we raise an exception if `event` is not in some\n # known event-type list?\n\n self._prepare_persistence_engine()\n\n callbacks = self._options.get('callbacks', {})\n callbacks[event] = handler\n self._options['callbacks'] = callbacks"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef exec_event_handler(self, event, transactional=False):\n # QUESTION: Should we raise an exception if `event` is not in some\n # known event-type list?\n\n callbacks = self._options.get('callbacks', {})\n\n handler = callbacks.get(event)\n\n if not handler:\n raise Exception('Handler not defined!!!')\n\n handler.start(transactional=transactional)", "response": "Execute the Async set to be run on event."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef add(self, target, args=None, kwargs=None, **options):\n from furious.async import Async\n from furious.batcher import Message\n\n if self._tasks_inserted:\n raise errors.ContextAlreadyStartedError(\n \"This Context has already had its tasks inserted.\")\n\n if not isinstance(target, (Async, Message)):\n target = Async(target, args, kwargs, **options)\n\n target.update_options(_context_id=self.id)\n\n if self.persist_async_results:\n target.update_options(persist_result=True)\n\n self._tasks.append(target)\n self._options['_task_ids'].append(target.id)\n\n return target", "response": "Adds an Async object to this context."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nloading and instantiate a Context from the persistence_engine.", "response": "def load(cls, context_id, persistence_engine=None):\n \"\"\"Load and instantiate a Context from the 
persistence_engine.\"\"\"\n if not persistence_engine:\n from furious.config import get_default_persistence_engine\n persistence_engine = get_default_persistence_engine()\n\n if not persistence_engine:\n raise RuntimeError(\n 'Specify a valid persistence_engine to load the context.')\n\n return persistence_engine.load_context(context_id)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning this Context as a dict suitable for json encoding.", "response": "def to_dict(self):\n \"\"\"Return this Context as a dict suitable for json encoding.\"\"\"\n import copy\n\n options = copy.deepcopy(self._options)\n\n if self._insert_tasks:\n options['insert_tasks'] = reference_to_path(self._insert_tasks)\n\n if self._persistence_engine:\n options['persistence_engine'] = reference_to_path(\n self._persistence_engine)\n\n options.update({\n '_tasks_inserted': self._tasks_inserted,\n })\n\n callbacks = self._options.get('callbacks')\n if callbacks:\n options['callbacks'] = encode_callbacks(callbacks)\n\n return options"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns a context job from a dict output by Context. 
to_dict.", "response": "def from_dict(cls, context_options_dict):\n \"\"\"Return a context job from a dict output by Context.to_dict.\"\"\"\n import copy\n\n context_options = copy.deepcopy(context_options_dict)\n\n tasks_inserted = context_options.pop('_tasks_inserted', False)\n\n insert_tasks = context_options.pop('insert_tasks', None)\n if insert_tasks:\n context_options['insert_tasks'] = path_to_reference(insert_tasks)\n\n # The constructor expects a reference to the persistence engine.\n persistence_engine = context_options.pop('persistence_engine', None)\n if persistence_engine:\n context_options['persistence_engine'] = path_to_reference(\n persistence_engine)\n\n # If there are callbacks, reconstitute them.\n callbacks = context_options.pop('callbacks', None)\n if callbacks:\n context_options['callbacks'] = decode_callbacks(callbacks)\n\n context = cls(**context_options)\n\n context._tasks_inserted = tasks_inserted\n\n return context"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef result(self):\n if not self._result:\n if not self._persistence_engine:\n return None\n\n self._result = self._persistence_engine.get_context_result(self)\n\n return self._result", "response": "Return the context result object pulled from the persistence_engine if it has been set."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef insert_tasks_ignore_duplicate_names(tasks, queue, *args, **kwargs):\n\n from google.appengine.api import taskqueue\n\n try:\n inserted = _insert_tasks(tasks, queue, *args, **kwargs)\n\n return inserted\n except taskqueue.DuplicateTaskNameError:\n # At least one task failed in our batch, attempt to re-insert the\n # remaining tasks. 
Named tasks can never be transactional.\n reinsert = _tasks_to_reinsert(tasks, transactional=False)\n\n count = len(reinsert)\n inserted = len(tasks) - count\n\n # Our subsequent task inserts should raise TaskAlreadyExistsError at\n # least once, but that will be swallowed by _insert_tasks.\n for task in reinsert:\n inserted += _insert_tasks([task], queue, *args, **kwargs)\n\n return inserted", "response": "Insert a batch of tasks into a specific queue ignoring duplicate names."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef grep_file(query, item):\n return ['%s: %s' % (item, line) for line in open(item)\n if re.search(query, line)]", "response": "This function performs the actual grep on a given file."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nget the module that is configured by the value of option_name.", "response": "def _get_configured_module(option_name, known_modules=None):\n \"\"\"Get the module specified by the value of option_name. The value of the\n configuration option will be used to load the module by name from the known\n module list or treated as a path if not found in known_modules.\n Args:\n option_name: name of persistence module\n known_modules: dictionary of module names and module paths,\n ie: {'ndb':'furious.extras.appengine.ndb_persistence'}\n Returns:\n module of the module path matching the name in known_modules\n \"\"\"\n from furious.job_utils import path_to_reference\n\n config = get_config()\n option_value = config[option_name]\n\n # If no known_modules were give, make it an empty dict.\n if not known_modules:\n known_modules = {}\n\n module_path = known_modules.get(option_value) or option_value\n return path_to_reference(module_path)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ntraverse the directory trees to find a furious. yaml file and return the path of the furious. 
yaml file or None if not found", "response": "def find_furious_yaml(config_file=__file__):\n \"\"\"\n Traverse directory trees to find a furious.yaml file\n\n Begins with the location of this file then checks the\n working directory if not found\n\n Args:\n config_file: location of this file, override for\n testing\n Returns:\n the path of furious.yaml or None if not found\n \"\"\"\n checked = set()\n result = _find_furious_yaml(os.path.dirname(config_file), checked)\n if not result:\n result = _find_furious_yaml(os.getcwd(), checked)\n return result"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ntraverse the directory tree identified by start and check if a furious. yaml file is found.", "response": "def _find_furious_yaml(start, checked):\n \"\"\"Traverse the directory tree identified by start\n until a directory already in checked is encountered or the path\n of furious.yaml is found.\n\n Checked is present both to make the loop termination easy\n to reason about and so the same directories do not get\n rechecked\n\n Args:\n start: the path to start looking in and work upward from\n checked: the set of already checked directories\n\n Returns:\n the path of the furious.yaml file or None if it is not found\n \"\"\"\n directory = start\n while directory not in checked:\n checked.add(directory)\n for fs_yaml_name in FURIOUS_YAML_NAMES:\n yaml_path = os.path.join(directory, fs_yaml_name)\n if os.path.exists(yaml_path):\n return yaml_path\n directory = os.path.dirname(directory)\n return None"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nopen and return the yaml contents.", "response": "def _load_yaml_config(path=None):\n \"\"\"Open and return the yaml contents.\"\"\"\n furious_yaml_path = path or find_furious_yaml()\n if furious_yaml_path is None:\n logging.debug(\"furious.yaml not found.\")\n return None\n\n with open(furious_yaml_path) as yaml_file:\n return yaml_file.read()"} 
{"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn the configuration from the found furious. yaml file and parses the data.", "response": "def _parse_yaml_config(config_data=None):\n \"\"\"\n Gets the configuration from the found furious.yaml\n file and parses the data.\n Returns:\n a dictionary parsed from the yaml file\n \"\"\"\n data_map = default_config()\n\n # If we were given config data to use, use it. Otherwise, see if there is\n # a furious.yaml to read the config from. Note that the empty string will\n # result in the default config being used.\n if config_data is None:\n config_data = _load_yaml_config()\n\n if not config_data:\n logging.debug(\"No custom furious config, using default config.\")\n return data_map\n\n # TODO: validate the yaml contents\n config = yaml.safe_load(config_data)\n\n # If there was a valid custom config, it will be a dict. Otherwise,\n # ignore it.\n if isinstance(config, dict):\n # Apply the custom config over the default config. 
This allows us to\n # extend functionality without breaking old stuff.\n data_map.update(config)\n elif not None:\n raise InvalidYamlFile(\"The furious.yaml file \"\n \"is invalid yaml\")\n\n return data_map"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _minimize_joins(self):\n joins_group = []\n for join in self._joins:\n append_new = True\n for join_group in joins_group:\n if join_group[0]['table'] == join['table'] and join_group[0]['ons'] == join['ons']:\n join_group.append(join)\n append_new = False\n break\n if append_new:\n joins_group.append([join])\n\n self._joins = []\n for joins in joins_group:\n if len(joins) > 1 and any(bool(join['type'] == INNER_JOIN) for join in joins):\n joins[0]['type'] = INNER_JOIN\n self._joins.append(joins[0])\n elif len(joins) > 1 and all(join['type'] == joins[0]['type'] for join in joins):\n self._joins.append(joins[0])\n else:\n self._joins.extend(joins)", "response": "Minimizing of joins. 
Left and right are only inner join."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nexecuting node content as python module and adds its definitions to globals", "response": "def run_code(node: Code, parent_node: Node = None, node_globals: InheritedDict = None, **args): #pylint: disable=unused-argument\n '''Executes node content as python module and adds its definitions to globals'''\n if not node.xml_node.text:\n return\n code = node.xml_node.text\n try:\n globs = node_globals.to_dictionary()\n exec(dedent(code), globs) #pylint: disable=exec-used\n definitions = [(key, value) for key, value in globs.items() \\\n if key != '__builtins__' and not node_globals.has_key(key)]\n for key, value in definitions:\n parent_node.node_globals[key] = value\n except SyntaxError as err:\n error = _get_compilation_error(code, 'Invalid syntax', err, err.lineno)\n raise error from err\n except:\n info = exc_info()\n cause = info[1]\n line_number = extract_tb(info[2])[-1][1]\n error = _get_compilation_error(code, 'Code execution is failed', cause, line_number)\n raise error from cause"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nmerge two mappings objects together and favoring right - values", "response": "def merge(left, right):\n \"\"\"\n Merge two mappings objects together, combining overlapping Mappings,\n and favoring right-values\n\n left: The left Mapping object.\n right: The right (favored) Mapping object.\n\n NOTE: This is not commutative (merge(a,b) != merge(b,a)).\n \"\"\"\n merged = {}\n\n left_keys = frozenset(left)\n right_keys = frozenset(right)\n\n # Items only in the left Mapping\n for key in left_keys - right_keys:\n merged[key] = left[key]\n\n # Items only in the right Mapping\n for key in right_keys - left_keys:\n merged[key] = right[key]\n\n # in both\n for key in left_keys & right_keys:\n left_value = left[key]\n right_value = right[key]\n\n if (isinstance(left_value, Mapping) and\n 
isinstance(right_value, Mapping)): # recursive merge\n merged[key] = merge(left_value, right_value)\n else: # overwrite with right value\n merged[key] = right_value\n\n return merged"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\npelting algorithm to compute changepoints in time series Ported from: https://github.com/STOR-i/Changepoints.jl https://github.com/rkillick/changepoint/ Reference: Killick R, Fearnhead P, Eckley IA (2012) Optimal detection of changepoints with a linear computational cost, JASA 107(500), 1590-1598 Args: cost (function): cost function, with the following signature, (int, int) -> float where the parameters are the start index, and the second the last index of the segment to compute the cost. length (int): Data size pen (float, optional): defaults to log(n) Returns: (:obj:`list` of int): List with the indexes of changepoints", "response": "def pelt(cost, length, pen=None):\n \"\"\" PELT algorithm to compute changepoints in time series\n\n Ported from:\n https://github.com/STOR-i/Changepoints.jl\n https://github.com/rkillick/changepoint/\n Reference:\n Killick R, Fearnhead P, Eckley IA (2012) Optimal detection\n of changepoints with a linear computational cost, JASA\n 107(500), 1590-1598\n\n Args:\n cost (function): cost function, with the following signature,\n (int, int) -> float\n where the parameters are the start index, and the second\n the last index of the segment to compute the cost.\n length (int): Data size\n pen (float, optional): defaults to log(n)\n Returns:\n (:obj:`list` of int): List with the indexes of changepoints\n \"\"\"\n if pen is None:\n pen = np.log(length)\n\n F = np.zeros(length + 1)\n R = np.array([0], dtype=np.int)\n candidates = np.zeros(length + 1, dtype=np.int)\n\n F[0] = -pen\n\n for tstar in range(2, length + 1):\n cpt_cands = R\n seg_costs = np.zeros(len(cpt_cands))\n for i in range(0, len(cpt_cands)):\n seg_costs[i] = cost(cpt_cands[i], tstar)\n\n F_cost = F[cpt_cands] + 
seg_costs\n F[tstar], tau = find_min(F_cost, pen)\n candidates[tstar] = cpt_cands[tau]\n\n ineq_prune = [val < F[tstar] for val in F_cost]\n R = [cpt_cands[j] for j, val in enumerate(ineq_prune) if val]\n R.append(tstar - 1)\n R = np.array(R, dtype=np.int)\n\n last = candidates[-1]\n changepoints = [last]\n while last > 0:\n last = candidates[last]\n changepoints.append(last)\n\n return sorted(changepoints)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef normal_mean(data, variance):\n if not isinstance(data, np.ndarray):\n data = np.array(data)\n\n i_variance_2 = 1 / (variance ** 2)\n cmm = [0.0]\n cmm.extend(np.cumsum(data))\n\n cmm2 = [0.0]\n cmm2.extend(np.cumsum(np.abs(data)))\n\n def cost(start, end):\n \"\"\" Cost function for normal distribution with variable mean\n\n Args:\n start (int): start index\n end (int): end index\n Returns:\n float: Cost, from start to end\n \"\"\"\n cmm2_diff = cmm2[end] - cmm2[start]\n cmm_diff = pow(cmm[end] - cmm[start], 2)\n i_diff = end - start\n diff = cmm2_diff - cmm_diff\n return (diff/i_diff) * i_variance_2\n\n return cost", "response": "Returns a cost function for a time series with normal distribution with changing mean\n "} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef normal_var(data, mean):\n if not isinstance(data, np.ndarray):\n data = np.array(data)\n\n cumm = [0.0]\n cumm.extend(np.cumsum(np.power(np.abs(data - mean), 2)))\n\n def cost(s, t):\n \"\"\" Cost function for normal distribution with variable variance\n\n Args:\n start (int): start index\n end (int): end index\n Returns:\n float: Cost, from start to end\n \"\"\"\n dist = float(t - s)\n diff = cumm[t] - cumm[s]\n return dist * np.log(diff/dist)\n\n return cost", "response": "Returns a function that calculates the cost of a time series with changing variance."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns 
a cost function for a 1D time series with changing mean and variance.", "response": "def normal_meanvar(data):\n \"\"\" Creates a segment cost function for a time series with a\n Normal distribution with changing mean and variance\n\n Args:\n data (:obj:`list` of float): 1D time series data\n Returns:\n function: Function with signature\n (int, int) -> float\n where the first arg is the starting index, and the second\n is the last arg. Returns the cost of that segment\n \"\"\"\n data = np.hstack(([0.0], np.array(data)))\n\n cumm = np.cumsum(data)\n cumm_sq = np.cumsum([val**2 for val in data])\n\n def cost(s, t):\n \"\"\" Cost function for normal distribution with variable variance\n\n Args:\n start (int): start index\n end (int): end index\n Returns:\n float: Cost, from start to end\n \"\"\"\n ts_i = 1.0 / (t-s)\n mu = (cumm[t] - cumm[s]) * ts_i\n sig = (cumm_sq[t] - cumm_sq[s]) * ts_i - mu**2\n sig_i = 1.0 / sig\n return (t-s) * np.log(sig) + (cumm_sq[t] - cumm_sq[s]) * sig_i - 2*(cumm[t] - cumm[s])*mu*sig_i + ((t-s)*mu**2)*sig_i\n\n return cost"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning a function that calculates the cost of a 1D time series with poisson distribution with changing mean .", "response": "def poisson(data):\n \"\"\" Creates a segment cost function for a time series with a\n poisson distribution with changing mean\n\n Args:\n data (:obj:`list` of float): 1D time series data\n Returns:\n function: Function with signature\n (int, int) -> float\n where the first arg is the starting index, and the second\n is the last arg. 
Returns the cost of that segment\n \"\"\"\n data = np.hstack(([0.0], np.array(data)))\n cumm = np.cumsum(data)\n\n def cost(s, t):\n \"\"\" Cost function for poisson distribution with changing mean\n\n Args:\n start (int): start index\n end (int): end index\n Returns:\n float: Cost, from start to end\n \"\"\"\n diff = cumm[t]-cumm[s]\n if diff == 0:\n return -2 * diff * (- np.log(t-s) - 1)\n else:\n return -2 * diff * (np.log(diff) - np.log(t-s) - 1)\n\n return cost"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef exponential(data):\n data = np.hstack(([0.0], np.array(data)))\n cumm = np.cumsum(data)\n\n def cost(s, t):\n \"\"\" Cost function for exponential distribution with changing mean\n\n Args:\n start (int): start index\n end (int): end index\n Returns:\n float: Cost, from start to end\n \"\"\"\n return -1*(t-s) * (np.log(t-s) - np.log(cumm[t] - cumm[s]))\n\n return cost", "response": "Creates a function that calculates the cost of a 1D time series with exponential distribution with changing mean\n ."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngetting the min and max values for the given OID field.", "response": "def _get_layer_min_max(self, oid_field_name):\n \"\"\" Find the min and max values for the OID field. 
\"\"\"\n query_args = self._build_query_args({\n 'f': 'json',\n 'outFields': '',\n 'outStatistics': json.dumps([\n dict(statisticType='min', onStatisticField=oid_field_name, outStatisticFieldName='THE_MIN'),\n dict(statisticType='max', onStatisticField=oid_field_name, outStatisticFieldName='THE_MAX'),\n ], separators=(',', ':'))\n })\n headers = self._build_headers()\n url = self._build_url('/query')\n response = self._request('GET', url, params=query_args, headers=headers)\n metadata = self._handle_esri_errors(response, \"Could not retrieve min/max oid values\")\n\n # Some servers (specifically version 10.11, it seems) will respond with SQL statements\n # for the attribute names rather than the requested field names, so pick the min and max\n # deliberately rather than relying on the names.\n min_max_values = metadata['features'][0]['attributes'].values()\n min_value = min(min_max_values)\n max_value = max(min_max_values)\n query_args = self._build_query_args({\n 'f': 'json',\n 'outFields': '*',\n 'outStatistics': json.dumps([\n dict(statisticType='min', onStatisticField=oid_field_name, outStatisticFieldName='THE_MIN'),\n dict(statisticType='max', onStatisticField=oid_field_name, outStatisticFieldName='THE_MAX'),\n ], separators=(',', ':'))\n })\n query_args = self._build_query_args({\n 'where': '{} = {} OR {} = {}'.format(\n oid_field_name,\n min_value,\n oid_field_name,\n max_value\n ),\n 'returnIdsOnly': 'true',\n 'f': 'json',\n })\n headers = self._build_headers()\n url = self._build_url('/query')\n response = self._request('GET', url, params=query_args, headers=headers)\n oid_data = self._handle_esri_errors(response, \"Could not check min/max values\")\n if not oid_data or not oid_data.get('objectIds') or min_value not in oid_data['objectIds'] or max_value not in oid_data['objectIds']:\n raise EsriDownloadError('Server returned invalid min/max')\n return (min_value, max_value)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, 
explain what it does\ndef ring_is_clockwise(ring):\n total = 0\n for (pt1, pt2) in pairwise(ring):\n total += (pt2[0] - pt1[0]) * (pt2[1] + pt1[1])\n return total >= 0", "response": "Determines if a polygon ring is clockwise."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef format_date_argument(date_element):\n if not isinstance(date_element, (datetime.datetime, datetime.date)):\n # TODO:\n if \"T\" in date_element:\n _date = datetime.datetime.strptime(date_element, \"%Y-%m-%dT%H:%M:%S\")\n else:\n _date = datetime.datetime.strptime(date_element, \"%Y-%m-%d\").date()\n else:\n _date = date_element\n return _date.isoformat()", "response": "Takes a date as either a datetime. date or a string and return it as an iso8601 formatted value\n "} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nmake a URL from component parts", "response": "def make_url(*args, **kwargs):\n \"\"\"Makes a URL from component parts\"\"\"\n base = \"/\".join(args)\n if kwargs:\n return \"%s?%s\" % (base, urlencode(kwargs))\n else:\n return base"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _querystring(self):\n\n kw = {}\n for key in self.KNOWN_QUERY_OPTIONS:\n val = getattr(self, key)\n if val is not None:\n kw[key] = val\n return kw", "response": "Get additional keyword arguments for the URL query string."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns a dictionary of keyword arguments that can be used to construct the query string for the URL.", "response": "def _querystring(self):\n \"\"\"Additional keyword arguments\"\"\"\n kw = {}\n if self.status:\n kw[\"status\"] = \"all\"\n\n if self.links:\n kw[\"links\"] = \"all\"\n\n if self.include is not None:\n kw[\"include\"] = self.include\n if self.subject_key_type != \"SubjectName\":\n kw[\"subjectKeyType\"] = self.subject_key_type\n 
return kw"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef dataset(self):\n if self.dataset_format:\n return \".\".join([self.dataset_name, self.dataset_format])\n return self.dataset_name", "response": "Qualify the dataset_name with the dataset_format"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nchecking that dataset format is valid.", "response": "def check_dataset_format(ds_format):\n \"\"\"\n Ensure dataset format is XML or CSV\n\n :param str ds_format: Format of the Dataset (expected to be one of `csv` or `xml`)\n \"\"\"\n if ds_format.lower() not in DATASET_FORMATS.keys():\n raise ValueError(\n \"dataset_format is expected to be one of %s. '%s' is not valid\"\n % (\", \".join(DATASET_FORMATS.keys()), ds_format)\n )"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef dataset_format_to_extension(ds_format):\n try:\n return DATASET_FORMATS[ds_format]\n except KeyError:\n raise ValueError(\n \"dataset_format is expected to be one of %s. 
'%s' is not valid\"\n % (\", \".join(DATASET_FORMATS.keys()), ds_format)\n )", "response": "Returns the preferred Dataset format extension"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_data(ctx, study, environment, subject):\n cfg = GetDataConfigurableDataset(GET_DATA_DATASET,\n study,\n environment,\n subject,\n params=dict(IncludeIDs=0,\n IncludeValues=0))\n # path = \"datasets/{}?StudyOID={}&SubjectKey={}\" \\\n # \"&IncludeIDs=0&IncludeValues=0\".format(GET_DATA_DATASET, studyoid, subject)\n # url = make_url(ctx.obj['RWS'].base_url, path)\n\n if ctx.obj['VERBOSE']:\n click.echo('Getting data list')\n # Get the client instance\n client = ctx.obj['RWS'] #: type: RWSConnection\n # Client rolls in the base_url\n resp = client.send_request(cfg)\n # resp = requests.get(url, auth=HTTPBasicAuth(ctx.obj['USERNAME'], ctx.obj['PASSWORD']))\n\n if client.last_result.status_code != 200:\n click.echo(client.last_result.text)\n\n return xml_pretty_print(resp)", "response": "Get the current data for a specific study environment subject"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nmake a request to RWS and return the result.", "response": "def rws_call(ctx, method, default_attr=None):\n \"\"\"Make request to RWS\"\"\"\n try:\n response = ctx.obj['RWS'].send_request(method)\n\n if ctx.obj['RAW']: # use response from RWS\n result = ctx.obj['RWS'].last_result.text\n elif default_attr is not None: # human-readable summary\n result = \"\"\n for item in response:\n result = result + item.__dict__[default_attr] + \"\\n\"\n else: # use response from RWS\n result = ctx.obj['RWS'].last_result.text\n\n if ctx.obj['OUTPUT']: # write to file\n ctx.obj['OUTPUT'].write(result.encode('utf-8'))\n else: # echo\n click.echo(result)\n\n except RWSException as e:\n click.echo(str(e))"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef 
data(ctx, path):\n _rws = partial(rws_call, ctx)\n if len(path) == 0:\n _rws(ClinicalStudiesRequest(), default_attr='oid')\n elif len(path) == 1:\n _rws(StudySubjectsRequest(path[0], 'Prod'), default_attr='subjectkey')\n elif len(path) == 2:\n _rws(StudySubjectsRequest(path[0], path[1]), default_attr='subjectkey')\n elif len(path) == 3:\n try:\n click.echo(get_data(ctx, path[0], path[1], path[2]))\n except RWSException as e:\n click.echo(str(e))\n except requests.exceptions.HTTPError as e:\n click.echo(str(e))\n else:\n click.echo('Too many arguments')", "response": "List EDC data for Study and Study Subject"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef post(ctx, odm):\n try:\n ctx.obj['RWS'].send_request(PostDataRequest(odm.read()))\n if ctx.obj['RAW']:\n click.echo(ctx.obj['RWS'].last_result.text)\n except RWSException as e:\n click.echo(e.message)", "response": "Post clinical data to the ODM"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef metadata(ctx, drafts, path):\n _rws = partial(rws_call, ctx)\n if len(path) == 0:\n _rws(MetadataStudiesRequest(), default_attr='oid')\n elif len(path) == 1:\n if drafts:\n _rws(StudyDraftsRequest(path[0]), default_attr='oid')\n else:\n _rws(StudyVersionsRequest(path[0]), default_attr='oid')\n elif len(path) == 2:\n _rws(StudyVersionRequest(path[0], path[1]))\n else:\n click.echo('Too many arguments')", "response": "List metadata for the given Study."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef direct(ctx, path):\n try:\n url = make_url(ctx.obj['RWS'].base_url, path)\n resp = requests.get(url, auth=HTTPBasicAuth(ctx.obj['USERNAME'], ctx.obj['PASSWORD']))\n click.echo(resp.text)\n except RWSException as e:\n click.echo(e.message)\n except requests.exceptions.HTTPError as e:\n click.echo(e.message)", "response": "Make direct call to RWS"} 
{"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nrequesting enterable data for a subject generate data values and post back to Rave.", "response": "def autofill(ctx, steps, metadata, fixed, study, environment, subject):\n \"\"\"Request enterable data for a subject, generate data values and post back to Rave.\n Requires 'rwscmd_getdata' configurable dataset to be installed on the Rave URL.\"\"\"\n\n if metadata is not None: # Read metadata from file, if supplied\n odm_metadata = metadata.read()\n meta_v = etree.fromstring(odm_metadata).find('.//' + E_ODM.METADATA_VERSION.value).get(A_ODM.OID.value)\n else:\n odm_metadata = None\n meta_v = None\n\n fixed_values = {}\n if fixed is not None: # Read fixed values from file, if supplied\n for f in fixed:\n oid, value = f.decode().split(',')\n fixed_values[oid] = value\n if ctx.obj['VERBOSE']:\n click.echo('Fixing {} to value: {}'.format(oid, value))\n\n try:\n for n in range(0, steps):\n if ctx.obj['VERBOSE']:\n click.echo('Step {}'.format(str(n + 1)))\n\n # Get currently enterable fields for this subject\n subject_data = get_data(ctx, study, environment, subject)\n\n subject_data_odm = etree.fromstring(subject_data)\n if subject_data_odm.find('.//' + E_ODM.CLINICAL_DATA.value) is None:\n if ctx.obj['VERBOSE']:\n click.echo('No data found')\n break\n\n # Get the metadata version for the subject\n subject_meta_v = subject_data_odm.find('.//' + E_ODM.CLINICAL_DATA.value).get(\n A_ODM.METADATA_VERSION_OID.value)\n if subject_meta_v is None:\n if ctx.obj['VERBOSE']:\n click.echo('Subject not found')\n break\n\n # If no metadata supplied, or versions don't match, retrieve metadata from RWS\n if meta_v != subject_meta_v:\n if ctx.obj['VERBOSE']:\n click.echo('Getting metadata version {}'.format(subject_meta_v))\n ctx.obj['RWS'].send_request(StudyVersionRequest(study, subject_meta_v))\n odm_metadata = ctx.obj['RWS'].last_result.text\n meta_v = subject_meta_v\n\n # Generate data values to fill in empty 
fields\n if ctx.obj['VERBOSE']:\n click.echo('Generating data')\n\n scr = Scramble(odm_metadata)\n odm = scr.fill_empty(fixed_values, subject_data)\n\n # If new data values, post to RWS\n if etree.fromstring(odm).find('.//' + E_ODM.ITEM_DATA.value) is None:\n if ctx.obj['VERBOSE']:\n click.echo('No data to send')\n break\n ctx.obj['RWS'].send_request(PostDataRequest(odm))\n if ctx.obj['RAW']:\n click.echo(ctx.obj['RWS'].last_result.text)\n\n except RWSException as e:\n click.echo(e.rws_error)\n\n except requests.exceptions.HTTPError as e:\n click.echo(e.strerror)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nbuilds the XML by appending to the given XML builder", "response": "def build(self, builder):\n \"\"\"Build XML by appending to builder\"\"\"\n params = dict(OID=self.oid)\n params[\"mdsol:ProjectType\"] = self.project_type\n\n builder.start(\"Study\", params)\n\n # Ask children\n if self.global_variables is not None:\n self.global_variables.build(builder)\n\n if self.basic_definitions is not None:\n self.basic_definitions.build(builder)\n\n if self.metadata_version is not None:\n self.metadata_version.build(builder)\n\n builder.end(\"Study\")"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nbuilding XML by appending to builder", "response": "def build(self, builder):\n \"\"\"Build XML by appending to builder\"\"\"\n builder.start(\"GlobalVariables\", {})\n make_element(builder, \"StudyName\", self.name)\n make_element(builder, \"StudyDescription\", self.description)\n make_element(builder, \"ProtocolName\", self.protocol_name)\n builder.end(\"GlobalVariables\")"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef build(self, builder):\n builder.start(\"BasicDefinitions\", {})\n for child in self.measurement_units:\n child.build(builder)\n builder.end(\"BasicDefinitions\")", "response": "Build XML by appending to builder"} {"SOURCE": 
"codesearchnet", "instruction": "Implement a function in Python 3 to\nbuild XML by appending to builder", "response": "def build(self, builder):\n \"\"\"Build XML by appending to builder\"\"\"\n\n params = dict(OID=self.oid, Name=self.name)\n\n if self.unit_dictionary_name:\n params[\"mdsol:UnitDictionaryName\"] = self.unit_dictionary_name\n\n for suffix in [\"A\", \"B\", \"C\", \"K\"]:\n val = getattr(self, \"constant_{0}\".format(suffix.lower()))\n params[\"mdsol:Constant{0}\".format(suffix)] = str(val)\n\n if self.standard_unit:\n params[\"mdsol:StandardUnit\"] = \"Yes\"\n\n builder.start(\"MeasurementUnit\", params)\n for child in self.symbols:\n child.build(builder)\n builder.end(\"MeasurementUnit\")"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef build(self, builder):\n builder.start(\"Symbol\", {})\n for child in self.translations:\n child.build(builder)\n builder.end(\"Symbol\")", "response": "Build XML by appending to builder"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef build(self, builder):\n params = {}\n if self.lang is not None:\n params[\"xml:lang\"] = self.lang\n builder.start(\"TranslatedText\", params)\n builder.data(self.text)\n builder.end(\"TranslatedText\")", "response": "Build XML by appending to builder"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef build(self, builder):\n\n params = dict(OID=self.oid, Name=self.name)\n\n if self.description is not None:\n params[\"Description\"] = self.description\n\n if self.signature_prompt is not None:\n params[\"mdsol:SignaturePrompt\"] = self.signature_prompt\n\n if self.primary_formoid is not None:\n params[\"mdsol:PrimaryFormOID\"] = self.primary_formoid\n\n if self.default_matrix_oid is not None:\n params[\"mdsol:DefaultMatrixOID\"] = self.default_matrix_oid\n\n params[\"mdsol:DeleteExisting\"] = 
bool_to_yes_no(self.delete_existing)\n\n builder.start(\"MetaDataVersion\", params)\n if self.protocol:\n self.protocol.build(builder)\n\n for event in self.study_event_defs:\n event.build(builder)\n\n for formdef in self.form_defs:\n formdef.build(builder)\n\n for itemgroupdef in self.item_group_defs:\n itemgroupdef.build(builder)\n\n for itemdef in self.item_defs:\n itemdef.build(builder)\n\n for codelist in self.codelists:\n codelist.build(builder)\n\n # Extensions must always come after core elements\n if self.confirmation_message:\n self.confirmation_message.build(builder)\n\n for labeldef in self.label_defs:\n labeldef.build(builder)\n\n for edit_check in self.edit_checks:\n edit_check.build(builder)\n\n for derivation in self.derivations:\n derivation.build(builder)\n\n for custom_function in self.custom_functions:\n custom_function.build(builder)\n\n builder.end(\"MetaDataVersion\")", "response": "Builds the MetaDataVersion XML by appending it to the given XML builder."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nbuild XML by appending to builder", "response": "def build(self, builder):\n \"\"\"Build XML by appending to builder\"\"\"\n builder.start(\"Protocol\", {})\n for child in self.study_event_refs:\n child.build(builder)\n for alias in self.aliases:\n alias.build(builder)\n builder.end(\"Protocol\")"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nbuilds XML by appending to builder", "response": "def build(self, builder):\n \"\"\"Build XML by appending to builder\"\"\"\n params = dict(StudyEventOID=self.oid, Mandatory=bool_to_yes_no(self.mandatory))\n if self._order_number is not None:\n params[\"OrderNumber\"] = self._order_number\n builder.start(\"StudyEventRef\", params)\n builder.end(\"StudyEventRef\")"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef build(self, builder):\n\n params = dict(\n OID=self.oid,\n 
Name=self.name,\n Repeating=bool_to_yes_no(self.repeating),\n Type=self.event_type,\n )\n\n if self.category is not None:\n params[\"Category\"] = self.category\n\n if self.access_days is not None:\n params[\"mdsol:AccessDays\"] = str(self.access_days)\n\n if self.start_win_days is not None:\n params[\"mdsol:StartWinDays\"] = str(self.start_win_days)\n\n if self.target_days is not None:\n params[\"mdsol:TargetDays\"] = str(self.target_days)\n\n if self.end_win_days is not None:\n params[\"mdsol:EndWinDays\"] = str(self.end_win_days)\n\n if self.overdue_days is not None:\n params[\"mdsol:OverDueDays\"] = str(self.overdue_days)\n\n if self.close_days is not None:\n params[\"mdsol:CloseDays\"] = str(self.close_days)\n\n builder.start(\"StudyEventDef\", params)\n for formref in self.formrefs:\n formref.build(builder)\n for alias in self.aliases:\n alias.build(builder)\n builder.end(\"StudyEventDef\")", "response": "Builds the XML by appending it to the given XML builder object."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nbuild the XML by appending to the given XML builder", "response": "def build(self, builder):\n \"\"\"Build XML by appending to builder\"\"\"\n params = dict(\n OID=self.oid, Name=self.name, Repeating=bool_to_yes_no(self.repeating)\n )\n\n if self.order_number is not None:\n params[\"mdsol:OrderNumber\"] = str(self.order_number)\n\n if self.active is not None:\n params[\"mdsol:Active\"] = bool_to_yes_no(self.active)\n\n params[\"mdsol:Template\"] = bool_to_yes_no(self.template)\n params[\"mdsol:SignatureRequired\"] = bool_to_yes_no(self.signature_required)\n params[\"mdsol:LogDirection\"] = self.log_direction\n params[\"mdsol:DoubleDataEntry\"] = self.double_data_entry\n params[\"mdsol:ConfirmationStyle\"] = self.confirmation_style\n\n if self.link_study_event_oid:\n params[\"mdsol:LinkStudyEventOID\"] = self.link_study_event_oid\n\n if self.link_form_oid:\n params[\"mdsol:LinkFormOID\"] = self.link_form_oid\n\n 
builder.start(\"FormDef\", params)\n for itemgroup_ref in self.itemgroup_refs:\n itemgroup_ref.build(builder)\n\n for helptext in self.helptexts:\n helptext.build(builder)\n\n for view_restriction in self.view_restrictions:\n view_restriction.build(builder)\n\n for entry_restriction in self.entry_restrictions:\n entry_restriction.build(builder)\n\n for alias in self.aliases:\n alias.build(builder)\n\n builder.end(\"FormDef\")"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef build(self, builder):\n\n params = dict(\n OID=self.oid,\n Name=self.name,\n Repeating=bool_to_yes_no(self.repeating),\n IsReferenceData=bool_to_yes_no(self.is_reference_data),\n )\n\n if self.sas_dataset_name is not None:\n params[\"SASDatasetName\"] = self.sas_dataset_name\n\n if self.domain is not None:\n params[\"Domain\"] = self.domain\n\n if self.origin is not None:\n params[\"Origin\"] = self.origin\n\n if self.role is not None:\n params[\"Role\"] = self.role\n\n if self.purpose is not None:\n params[\"Purpose\"] = self.purpose\n\n if self.comment is not None:\n params[\"Comment\"] = self.comment\n\n builder.start(\"ItemGroupDef\", params)\n\n for itemref in self.item_refs:\n itemref.build(builder)\n\n # Extensions always listed AFTER core elements\n for labelref in self.label_refs:\n labelref.build(builder)\n\n for alias in self.aliases:\n alias.build(builder)\n\n builder.end(\"ItemGroupDef\")", "response": "Builds the XML by appending to the given XML builder"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef build(self, builder):\n\n params = dict(ItemOID=self.oid, Mandatory=bool_to_yes_no(self.mandatory))\n\n if self.order_number is not None:\n params[\"OrderNumber\"] = str(self.order_number)\n\n if self.key_sequence is not None:\n params[\"KeySequence\"] = str(self.key_sequence)\n\n if self.imputation_method_oid is not None:\n params[\"ImputationMethodOID\"] = 
self.imputation_method_oid\n\n if self.role is not None:\n params[\"Role\"] = self.role\n\n if self.role_codelist_oid is not None:\n params[\"RoleCodeListOID\"] = self.role_codelist_oid\n\n builder.start(\"ItemRef\", params)\n\n for attribute in self.attributes:\n attribute.build(builder)\n builder.end(\"ItemRef\")", "response": "Builds the XML by appending to builder"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef build(self, builder):\n\n params = dict(\n OID=self.oid,\n Name=self.name,\n DataType=self.datatype.value,\n Length=str(self.length),\n )\n\n if self.date_time_format is not None:\n params[\"mdsol:DateTimeFormat\"] = self.date_time_format\n\n params[\"mdsol:Active\"] = bool_to_yes_no(self.active)\n\n if self.significant_digits is not None:\n params[\"SignificantDigits\"] = str(self.significant_digits)\n\n if self.sas_field_name is not None:\n params[\"SASFieldName\"] = self.sas_field_name\n\n if self.sds_var_name is not None:\n params[\"SDSVarName\"] = self.sds_var_name\n\n if self.origin is not None:\n params[\"Origin\"] = self.origin\n\n if self.comment is not None:\n params[\"Comment\"] = self.comment\n\n if self.control_type is not None:\n params[\"mdsol:ControlType\"] = self.control_type.value\n\n if self.acceptable_file_extensions is not None:\n params[\"mdsol:AcceptableFileExtensions\"] = self.acceptable_file_extensions\n\n if self.default_value is not None:\n params[\"mdsol:DefaultValue\"] = str(self.default_value)\n\n params[\"mdsol:SourceDocument\"] = bool_to_yes_no(self.source_document_verify)\n params[\"mdsol:IndentLevel\"] = str(self.indent_level)\n\n if self.sas_format is not None:\n params[\"mdsol:SASFormat\"] = self.sas_format\n\n if self.sas_label is not None:\n params[\"mdsol:SASLabel\"] = self.sas_label\n\n params[\"mdsol:QueryFutureDate\"] = bool_to_yes_no(self.query_future_date)\n params[\"mdsol:Visible\"] = bool_to_yes_no(self.visible)\n 
params[\"mdsol:TranslationRequired\"] = bool_to_yes_no(self.translation_required)\n params[\"mdsol:QueryNonConformance\"] = bool_to_yes_no(self.query_non_conformance)\n params[\"mdsol:OtherVisits\"] = bool_to_yes_no(self.other_visits)\n params[\"mdsol:CanSetItemGroupDate\"] = bool_to_yes_no(\n self.can_set_item_group_date\n )\n params[\"mdsol:CanSetFormDate\"] = bool_to_yes_no(self.can_set_form_date)\n params[\"mdsol:CanSetStudyEventDate\"] = bool_to_yes_no(\n self.can_set_study_event_date\n )\n params[\"mdsol:CanSetSubjectDate\"] = bool_to_yes_no(self.can_set_subject_date)\n params[\"mdsol:VisualVerify\"] = bool_to_yes_no(self.visual_verify)\n params[\"mdsol:DoesNotBreakSignature\"] = bool_to_yes_no(\n self.does_not_break_signature\n )\n\n if self.field_number is not None:\n params[\"mdsol:FieldNumber\"] = self.field_number\n\n if self.variable_oid is not None:\n params[\"mdsol:VariableOID\"] = self.variable_oid\n\n builder.start(\"ItemDef\", params)\n\n if self.question is not None:\n self.question.build(builder)\n\n if self.codelistref is not None:\n self.codelistref.build(builder)\n\n for mur in self.measurement_unit_refs:\n mur.build(builder)\n\n for range_check in self.range_checks:\n range_check.build(builder)\n\n if self.header_text is not None:\n self.header_text.build(builder)\n\n for view_restriction in self.view_restrictions:\n view_restriction.build(builder)\n\n for entry_restriction in self.entry_restrictions:\n entry_restriction.build(builder)\n\n for help_text in self.help_texts:\n help_text.build(builder)\n\n for review_group in self.review_groups:\n review_group.build(builder)\n\n for alias in self.aliases:\n alias.build(builder)\n\n builder.end(\"ItemDef\")", "response": "Builds the XML by appending to the given XML builder."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nbuilding the XML representation of the object into the given builder.", "response": "def build(self, builder):\n \"\"\"\n Build XML by appending to 
builder\n\n .. note:: Questions can contain translations\n \"\"\"\n builder.start(\"Question\", {})\n for translation in self.translations:\n translation.build(builder)\n builder.end(\"Question\")"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nset the comparator (with validation of input)", "response": "def comparator(self, value):\n \"\"\"sets the comparator (with validation of input)\"\"\"\n if not isinstance(value, RangeCheckComparatorType):\n raise AttributeError(\"%s comparator is invalid in RangeCheck.\" % (value,))\n self._comparator = value"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nset the Soft or Hard range setting", "response": "def soft_hard(self, value):\n \"\"\"sets the Soft or Hard range setting (with validation of input)\"\"\"\n if not isinstance(value, RangeCheckType):\n raise AttributeError(\"%s soft_hard invalid in RangeCheck.\" % (value,))\n self._soft_hard = value"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nbuild the ISO - 8601 range check XML by appending to builder", "response": "def build(self, builder):\n \"\"\"Build XML by appending to builder\"\"\"\n params = dict(SoftHard=self.soft_hard.value, Comparator=self.comparator.value)\n builder.start(\"RangeCheck\", params)\n if self.check_value is not None:\n self.check_value.build(builder)\n if self.measurement_unit_ref is not None:\n self.measurement_unit_ref.build(builder)\n builder.end(\"RangeCheck\")"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nbuild XML by appending to builder", "response": "def build(self, builder):\n \"\"\"Build XML by appending to builder\"\"\"\n builder.start(\"CheckValue\", {})\n builder.data(str(self.value))\n builder.end(\"CheckValue\")"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef build(self, builder):\n params = dict(OID=self.oid, 
Name=self.name, DataType=self.datatype.value)\n if self.sas_format_name is not None:\n params[\"SASFormatName\"] = self.sas_format_name\n builder.start(\"CodeList\", params)\n\n for item in self.codelist_items:\n item.build(builder)\n\n for alias in self.aliases:\n alias.build(builder)\n\n builder.end(\"CodeList\")", "response": "Builds the XML by appending to builder"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef build(self, builder):\n params = dict(CodedValue=self.coded_value)\n if self.order_number is not None:\n params[\"mdsol:OrderNumber\"] = str(self.order_number)\n\n if self.specify:\n params[\"mdsol:Specify\"] = \"Yes\"\n\n builder.start(\"CodeListItem\", params)\n\n if self.decode is not None:\n self.decode.build(builder)\n\n for alias in self.aliases:\n alias.build(builder)\n\n builder.end(\"CodeListItem\")", "response": "Builds the XML by appending to builder"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nbuilding this Alias element by appending it to the given XML builder", "response": "def build(self, builder):\n \"\"\"\n Build this item\n :param builder: \n :return: \n \"\"\"\n params = dict(Context=self.context, Name=self.name)\n builder.start(\"Alias\", params)\n builder.end(\"Alias\")"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef build(self, builder):\n params = dict(LabelOID=self.oid, OrderNumber=str(self.order_number))\n\n builder.start(\"mdsol:LabelRef\", params)\n builder.end(\"mdsol:LabelRef\")", "response": "Build XML by appending to builder"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nappending this attribute to the XML builder.", "response": "def build(self, builder):\n \"\"\"Build XML by appending to builder\"\"\"\n params = dict(\n Namespace=self.namespace,\n Name=self.name,\n Value=self.value,\n TransactionType=self.transaction_type,\n )\n\n 
builder.start(\"mdsol:Attribute\", params)\n builder.end(\"mdsol:Attribute\")"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nappend this custom function definition to an xml. Builder object.", "response": "def build(self, builder):\n \"\"\"Build XML by appending to builder\"\"\"\n params = dict(OID=self.oid, Language=self.language)\n builder.start(\"mdsol:CustomFunctionDef\", params)\n builder.data(self.code)\n builder.end(\"mdsol:CustomFunctionDef\")"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nbuilds the XML by appending to the given XML builder.", "response": "def build(self, builder):\n \"\"\"Build XML by appending to builder\"\"\"\n params = dict(\n OID=self.oid,\n Active=bool_to_true_false(self.active),\n BypassDuringMigration=bool_to_true_false(self.bypass_during_migration),\n NeedsRetesting=bool_to_true_false(self.needs_retesting),\n )\n\n if self.variable_oid is not None:\n params[\"VariableOID\"] = self.variable_oid\n\n if self.field_oid is not None:\n params[\"FieldOID\"] = self.field_oid\n\n if self.form_oid is not None:\n params[\"FormOID\"] = self.form_oid\n\n if self.folder_oid is not None:\n params[\"FolderOID\"] = self.folder_oid\n\n if self.record_position is not None:\n params[\"RecordPosition\"] = str(self.record_position)\n\n if self.form_repeat_number is not None:\n params[\"FormRepeatNumber\"] = str(self.form_repeat_number)\n\n if self.folder_repeat_number is not None:\n params[\"FolderRepeatNumber\"] = str(self.folder_repeat_number)\n\n if self.all_variables_in_folders is not None:\n params[\"AllVariablesInFolders\"] = bool_to_true_false(\n self.all_variables_in_folders\n )\n\n if self.all_variables_in_fields is not None:\n params[\"AllVariablesInFields\"] = bool_to_true_false(\n self.all_variables_in_fields\n )\n\n if self.logical_record_position is not None:\n params[\"LogicalRecordPosition\"] = self.logical_record_position.value\n\n 
builder.start(\"mdsol:DerivationDef\", params)\n for step in self.derivation_steps:\n step.build(builder)\n builder.end(\"mdsol:DerivationDef\")"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef build(self, builder):\n params = dict()\n\n if self.variable_oid is not None:\n params[\"VariableOID\"] = self.variable_oid\n\n if self.data_format is not None:\n params[\"DataFormat\"] = self.data_format\n\n if self.folder_oid is not None:\n params[\"FolderOID\"] = self.folder_oid\n\n if self.field_oid is not None:\n params[\"FieldOID\"] = self.field_oid\n\n if self.form_oid is not None:\n params[\"FormOID\"] = self.form_oid\n\n if self.static_value is not None:\n params[\"StaticValue\"] = self.static_value\n\n if self.function is not None:\n params[\"Function\"] = self.function.value\n\n if self.custom_function is not None:\n params[\"CustomFunction\"] = self.custom_function\n\n if self.record_position is not None:\n params[\"RecordPosition\"] = str(self.record_position)\n\n if self.form_repeat_number is not None:\n params[\"FormRepeatNumber\"] = str(self.form_repeat_number)\n\n if self.folder_repeat_number is not None:\n params[\"FolderRepeatNumber\"] = str(self.folder_repeat_number)\n\n if self.logical_record_position is not None:\n params[\"LogicalRecordPosition\"] = self.logical_record_position.value\n\n builder.start(\"mdsol:CheckStep\", params)\n builder.end(\"mdsol:CheckStep\")", "response": "Builds the XML by appending to the given XML builder"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nsetting the value for the CheckActionType validating input", "response": "def check_action_type(self, value):\n \"\"\"Set the value for the CheckActionType, validating input\"\"\"\n if value is not None:\n if not isinstance(value, ActionType):\n raise AttributeError(\"Invalid check action %s\" % value)\n self._check_action_type = value"} {"SOURCE": "codesearchnet", "instruction": "Can you 
create a Python 3 function that\nbuilds the xml by appending it to the given XML builder", "response": "def build(self, builder):\n \"\"\"Build XML by appending to builder\"\"\"\n params = dict(\n OID=self.oid,\n Active=bool_to_true_false(self.active),\n BypassDuringMigration=bool_to_true_false(self.bypass_during_migration),\n NeedsRetesting=bool_to_true_false(self.needs_retesting),\n )\n\n builder.start(\"mdsol:EditCheckDef\", params)\n for step in self.check_steps:\n step.build(builder)\n\n for action in self.check_actions:\n action.build(builder)\n builder.end(\"mdsol:EditCheckDef\")"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nbuild the XML by appending to builder", "response": "def build(self, builder):\n \"\"\"Build XML by appending to builder\"\"\"\n\n params = dict(OID=self.oid, Name=self.name)\n if self.field_number is not None:\n params[\"FieldNumber\"] = str(self.field_number)\n\n builder.start(\"mdsol:LabelDef\", params)\n\n for translation in self.translations:\n translation.build(builder)\n\n for view_restriction in self.view_restrictions:\n view_restriction.build(builder)\n\n builder.end(\"mdsol:LabelDef\")"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef build(self, builder):\n builder.start(\"mdsol:ReviewGroup\", {})\n builder.data(self.name)\n builder.end(\"mdsol:ReviewGroup\")", "response": "Build the XML by appending to builder"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef build(self, builder):\n params = dict(MetaDataVersionOID=str(self.metadata_version_oid),\n StudyOID=\"%s (%s)\" % (self.projectname, self.environment,),\n )\n\n # mixins\n self.mixin_params(params)\n\n builder.start(\"ClinicalData\", params)\n # Ask children\n if self.subject_data:\n for subject in self.subject_data:\n subject.build(builder)\n # Add the Annotations\n if self.annotations is not None:\n 
self.annotations.build(builder)\n builder.end(\"ClinicalData\")", "response": "Build XML by appending to builder"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nbuilds the SubjectData XML by appending to builder", "response": "def build(self, builder):\n \"\"\"Build XML by appending to builder\"\"\"\n params = dict(SubjectKey=self.subject_key)\n params['mdsol:SubjectKeyType'] = self.subject_key_type\n\n if self.transaction_type is not None:\n params[\"TransactionType\"] = self.transaction_type\n\n # mixins\n self.mixin()\n self.mixin_params(params)\n\n builder.start(\"SubjectData\", params)\n\n # Ask children\n if self.audit_record is not None:\n self.audit_record.build(builder)\n\n if self.siteref:\n self.siteref.build(builder)\n else:\n builder.start(\"SiteRef\", {'LocationOID': self.sitelocationoid})\n builder.end(\"SiteRef\")\n\n for event in self.study_events:\n event.build(builder)\n\n if self.signature is not None:\n self.signature.build(builder)\n\n for annotation in self.annotations:\n annotation.build(builder)\n\n builder.end(\"SubjectData\")"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nbuilds an XML string by appending to builder", "response": "def build(self, builder):\n \"\"\"Build XML by appending to builder\n :Example:\n \n\n \"\"\"\n params = dict(StudyEventOID=self.study_event_oid)\n\n if self.transaction_type is not None:\n params[\"TransactionType\"] = self.transaction_type\n\n if self.study_event_repeat_key is not None:\n params[\"StudyEventRepeatKey\"] = str(self.study_event_repeat_key)\n\n # mixins\n self.mixin()\n self.mixin_params(params)\n\n builder.start(\"StudyEventData\", params)\n\n # Ask children\n for form in self.forms:\n form.build(builder)\n\n if self.signature is not None:\n self.signature.build(builder)\n\n for annotation in self.annotations:\n annotation.build(builder)\n\n builder.end(\"StudyEventData\")"} {"SOURCE": "codesearchnet", "instruction": 
"Can you write a function in Python 3 where it\nbuilds XML by appending to builder", "response": "def build(self, builder):\n \"\"\"Build XML by appending to builder\n :Example:\n \n\n \"\"\"\n params = dict(FormOID=self.formoid)\n\n if self.transaction_type is not None:\n params[\"TransactionType\"] = self.transaction_type\n\n if self.form_repeat_key is not None:\n params[\"FormRepeatKey\"] = str(self.form_repeat_key)\n\n # mixins\n self.mixin()\n self.mixin_params(params)\n\n builder.start(\"FormData\", params)\n\n # Ask children\n for itemgroup in self.itemgroups:\n itemgroup.build(builder, self.formoid)\n\n if self.signature is not None:\n self.signature.build(builder)\n\n for annotation in self.annotations:\n annotation.build(builder)\n\n builder.end(\"FormData\")"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef build(self, builder, formname=None):\n\n params = dict(ItemGroupOID=formname if formname else self.itemgroupoid)\n\n if self.transaction_type is not None:\n params[\"TransactionType\"] = self.transaction_type\n\n if self.item_group_repeat_key is not None:\n params[\"ItemGroupRepeatKey\"] = str(\n self.item_group_repeat_key) # may be @context for transaction type upsert or context\n\n params[\"mdsol:Submission\"] = \"WholeItemGroup\" if self.whole_item_group else \"SpecifiedItemsOnly\"\n\n # mixins\n self.mixin()\n self.mixin_params(params)\n\n builder.start(\"ItemGroupData\", params)\n\n # Ask children\n for item in self.items.values():\n item.build(builder)\n\n # Add annotations\n for annotation in self.annotations:\n annotation.build(builder)\n\n # Add the signature if it exists\n if self.signature is not None:\n self.signature.build(builder)\n builder.end(\"ItemGroupData\")", "response": "Builds the XML by appending to builder"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef build(self, builder):\n params = dict(ItemOID=self.itemoid)\n\n if 
self.transaction_type is not None:\n params[\"TransactionType\"] = self.transaction_type\n\n if self.value in [None, '']:\n params['IsNull'] = 'Yes'\n else:\n params['Value'] = str(self.value)\n\n if self.specify_value is not None:\n params['mdsol:SpecifyValue'] = self.specify_value\n\n if self.lock is not None:\n params['mdsol:Lock'] = bool_to_yes_no(self.lock)\n\n if self.freeze is not None:\n params['mdsol:Freeze'] = bool_to_yes_no(self.freeze)\n\n if self.verify is not None:\n params['mdsol:Verify'] = bool_to_yes_no(self.verify)\n\n # mixins\n self.mixin()\n self.mixin_params(params)\n\n builder.start(\"ItemData\", params)\n\n if self.audit_record is not None:\n self.audit_record.build(builder)\n\n # Measurement unit ref must be after audit record or RWS complains\n if self.measurement_unit_ref is not None:\n self.measurement_unit_ref.build(builder)\n\n for query in self.queries: # type: MdsolQuery\n query.build(builder)\n\n for deviation in self.deviations: # type: MdsolProtocolDeviation\n deviation.build(builder)\n\n for annotation in self.annotations: # type: Annotation\n annotation.build(builder)\n\n builder.end(\"ItemData\")", "response": "Builds the XML by appending to the given XML builder."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef build(self, builder):\n\n params = {}\n if self.signature_id is not None:\n # If a Signature element is contained within a Signatures element, the ID attribute is required.\n params['ID'] = self.signature_id\n\n builder.start(\"Signature\", params)\n\n if self.user_ref is None:\n raise ValueError(\"User Reference not set.\")\n self.user_ref.build(builder)\n\n if self.location_ref is None:\n raise ValueError(\"Location Reference not set.\")\n self.location_ref.build(builder)\n\n if self.signature_ref is None:\n raise ValueError(\"Signature Reference not set.\")\n self.signature_ref.build(builder)\n\n if self.date_time_stamp is None:\n raise 
ValueError(\"DateTime not set.\")\n self.date_time_stamp.build(builder)\n\n builder.end(\"Signature\")", "response": "Builds the contents of this object into the XML builder."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef annotation_id(self, value):\n if value in [None, ''] or str(value).strip() == '':\n raise AttributeError(\"Invalid ID value supplied\")\n self._id = value", "response": "Set the ID for this object."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef seqnum(self, value):\n if not re.match(r'\\d+', str(value)) or value < 0:\n raise AttributeError(\"Invalid SeqNum value supplied\")\n self._seqnum = value", "response": "Set the SeqNum of the object for this object."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nbuilds the XML by appending to the given XML builder.", "response": "def build(self, builder):\n \"\"\"\n Build XML by appending to builder\n \"\"\"\n params = {}\n\n # Add in the transaction type\n if self.transaction_type is not None:\n params[\"TransactionType\"] = self.transaction_type\n\n if self.seqnum is None:\n # SeqNum is not optional (and defaulted)\n raise ValueError(\"SeqNum is not set.\") # pragma: no cover\n params[\"SeqNum\"] = str(self.seqnum)\n\n if self.annotation_id is not None:\n # If an Annotation is contained with an Annotations element,\n # the ID attribute is required.\n params[\"ID\"] = self.annotation_id\n\n builder.start(\"Annotation\", params)\n\n if self.flags in (None, []):\n raise ValueError('Flag is not set.')\n\n # populate the flags\n for flag in self.flags:\n flag.build(builder)\n\n # add the Comment, if it exists\n if self.comment is not None:\n self.comment.build(builder)\n\n builder.end(\"Annotation\")"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nbuilding the XML by appending it to the given XML builder.", 
"response": "def build(self, builder):\n \"\"\"\n Build XML by appending to builder\n \"\"\"\n builder.start(\"Annotations\")\n\n # populate the flags\n for annotation in self.annotations:\n annotation.build(builder)\n\n builder.end(\"Annotations\")"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nset the text content for Comment", "response": "def text(self, value):\n \"\"\"Set Text content for Comment (validation of input)\"\"\"\n if value in (None, '') or value.strip() == \"\":\n raise AttributeError(\"Empty text value is invalid.\")\n self._text = value"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef sponsor_or_site(self, value):\n if value not in Comment.VALID_SPONSOR_OR_SITE_RESPONSES:\n raise AttributeError(\"%s sponsor_or_site value of %s is not valid\" % (self.__class__.__name__,\n value))\n self._sponsor_or_site = value", "response": "Set Originator with validation of input"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef build(self, builder):\n if self.text is None:\n raise ValueError(\"Text is not set.\")\n params = {}\n if self.sponsor_or_site is not None:\n params['SponsorOrSite'] = self.sponsor_or_site\n\n builder.start(\"Comment\", params)\n builder.data(self.text)\n builder.end(\"Comment\")", "response": "Builds the XML by appending to builder"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nbuilding the object into an XML string.", "response": "def build(self, builder):\n \"\"\"\n Build XML by appending to builder\n \"\"\"\n builder.start(\"Flag\", {})\n\n if self.flag_type is not None:\n self.flag_type.build(builder)\n\n if self.flag_value is None:\n raise ValueError('FlagValue is not set.')\n self.flag_value.build(builder)\n\n builder.end(\"Flag\")"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nbuilds the object 
to XML by appending it to the given XML builder.", "response": "def build(self, builder):\n \"\"\"\n Build XML by appending to builder\n \"\"\"\n if self.codelist_oid is None:\n raise ValueError(\"CodeListOID not set.\")\n builder.start(\"FlagValue\", dict(CodeListOID=self.codelist_oid))\n builder.data(self.flag_value)\n builder.end(\"FlagValue\")"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nbuild the UserRef element into the given XML builder.", "response": "def build(self, builder):\n \"\"\"\n Build XML by appending to builder\n \"\"\"\n builder.start(\"UserRef\", dict(UserOID=self.oid))\n builder.end(\"UserRef\")"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef build(self, builder):\n builder.start(\"LocationRef\", dict(LocationOID=str(self.oid)))\n builder.end(\"LocationRef\")", "response": "Build an XML string representing this object into the given XML builder."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nbuilding the XML by appending to builder", "response": "def build(self, builder):\n \"\"\"\n Build XML by appending to builder\n \"\"\"\n params = dict(LocationOID=self.oid)\n # mixins\n self.mixin()\n self.mixin_params(params)\n\n builder.start(\"SiteRef\", params)\n builder.end(\"SiteRef\")"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef build(self, builder):\n builder.start(\"SignatureRef\", dict(SignatureOID=self.oid))\n builder.end(\"SignatureRef\")", "response": "Build the XML for this object into the given builder."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nbuilding the XML description of this object into the given XML builder.", "response": "def build(self, builder):\n \"\"\"\n Build XML by appending to builder\n \"\"\"\n builder.start(\"ReasonForChange\", {})\n builder.data(self.reason)\n builder.end(\"ReasonForChange\")"} 
{"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef build(self, builder):\n builder.start(\"DateTimeStamp\", {})\n if isinstance(self.date_time, datetime):\n builder.data(dt_to_iso8601(self.date_time))\n else:\n builder.data(self.date_time)\n builder.end(\"DateTimeStamp\")", "response": "Builds the ISO - 8601 date time entry into the XML builder."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nbuilding the XML for this object into the given XML builder.", "response": "def build(self, builder):\n \"\"\"\n Build XML by appending to builder\n \"\"\"\n builder.start(\"SourceID\", {})\n builder.data(self.source_id)\n builder.end(\"SourceID\")"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nbuild the audit record into the XML string.", "response": "def build(self, builder):\n \"\"\"Build XML by appending to builder\"\"\"\n params = {}\n\n if self.edit_point is not None:\n params[\"EditPoint\"] = self.edit_point\n\n if self.used_imputation_method is not None:\n params['UsedImputationMethod'] = bool_to_yes_no(self.used_imputation_method)\n\n if self.audit_id is not None:\n params['ID'] = str(self.audit_id)\n\n if self.include_file_oid is not None:\n params['mdsol:IncludeFileOID'] = bool_to_yes_no(self.include_file_oid)\n\n builder.start(\"AuditRecord\", params)\n if self.user_ref is None:\n raise ValueError(\"User Reference not set.\")\n self.user_ref.build(builder)\n\n if self.location_ref is None:\n raise ValueError(\"Location Reference not set.\")\n self.location_ref.build(builder)\n\n if self.date_time_stamp is None:\n raise ValueError(\"DateTime not set.\")\n\n self.date_time_stamp.build(builder)\n\n # Optional\n if self.source_id:\n self.source_id.build(builder)\n # Optional\n if self.reason_for_change is not None:\n self.reason_for_change.build(builder)\n\n builder.end(\"AuditRecord\")"} {"SOURCE": "codesearchnet", "instruction": "Can you create a 
Python 3 function that\nappends this ProtocolDeviation to the XML builder.", "response": "def build(self, builder):\n \"\"\"Build XML by appending to builder\"\"\"\n params = dict(Value=self.value,\n Status=self.status.value,\n ProtocolDeviationRepeatKey=self.repeat_key\n )\n\n if self.code:\n params['Code'] = self.code\n if self.pdclass:\n params['Class'] = self.pdclass\n if self.transaction_type:\n params['TransactionType'] = self.transaction_type\n builder.start('mdsol:ProtocolDeviation', params)\n builder.end('mdsol:ProtocolDeviation')"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nsetting the status of the resource.", "response": "def status(self, value):\n \"\"\"Set Query Status\"\"\"\n if value is not None:\n if not isinstance(value, QueryStatusType):\n raise AttributeError(\"%s action type is invalid in mdsol:Query.\" % (value,))\n self._status = value"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nbuilds the XML by appending to builder", "response": "def build(self, builder):\n \"\"\"\n Build XML by appending to builder\n \"\"\"\n params = {}\n\n if self.value is not None:\n params['Value'] = str(self.value)\n\n if self.query_repeat_key is not None:\n params['QueryRepeatKey'] = str(self.query_repeat_key)\n\n if self.recipient is not None:\n params['Recipient'] = str(self.recipient)\n\n if self.status is not None:\n params['Status'] = self.status.value\n\n if self.requires_response is not None:\n params['RequiresResponse'] = bool_to_yes_no(self.requires_response)\n\n # When closing a query\n if self.response is not None:\n params['Response'] = str(self.response)\n\n builder.start(\"mdsol:Query\", params)\n builder.end(\"mdsol:Query\")"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef getroot(self):\n builder = ET.TreeBuilder()\n self.build(builder)\n return builder.close()", "response": "Build XML object return the root"} 
{"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nbuilds the XML object and return the root element", "response": "def build(self, builder):\n \"\"\"Build XML object, return the root, this is a copy for consistency and testing\"\"\"\n params = dict(ODMVersion=\"1.3\",\n FileType=self.filetype,\n CreationDateTime=self.creationdatetime,\n Originator=self.originator,\n FileOID=self.fileoid,\n xmlns=\"http://www.cdisc.org/ns/odm/v1.3\",\n )\n if self.granularity_type:\n params['Granularity'] = self.granularity_type.value\n if self.source_system:\n params['SourceSystem'] = self.source_system\n\n if self.source_system_version:\n params['SourceSystemVersion'] = self.source_system_version\n params['xmlns:mdsol'] = \"http://www.mdsol.com/ns/odm/metadata\"\n\n if self.description:\n params['Description'] = self.description\n\n builder.start(\"ODM\", params)\n\n # Ask the children\n if self.study is not None:\n self.study.build(builder)\n\n if self.clinical_data:\n for clinical_data in self.clinical_data:\n clinical_data.build(builder)\n\n if self.admindata is not None:\n self.admindata.build(builder)\n\n builder.end(\"ODM\")\n return builder.close()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning a dictionary of keyword arguments for the URL query string.", "response": "def _querystring(self):\n \"\"\"Additional keyword arguments\"\"\"\n\n kw = {\"studyoid\": self.studyoid}\n if self.location_oid is not None:\n kw[\"locationoid\"] = self.location_oid\n return kw"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef value(self):\n if isinstance(self.raw_value, (datetime.datetime, datetime.date)):\n return self.raw_value.isoformat()\n elif isinstance(self.raw_value, (bool,)):\n return 'Yes' if self.raw_value else 'No'\n return self.raw_value", "response": "Normalise the Attribute value and return the normalised value"} {"SOURCE": 
"codesearchnet", "instruction": "Write a Python 3 function that can\nadd an attribute to the current instance of the modm extension", "response": "def add_attribute(self, attribute, value):\n \"\"\"\n Add an attribute to the current instance\n\n :param str attribute: Attribute name\n :type value: Union[datetime,bool,str]\n :param value: Attribute value\n \"\"\"\n class_name = self.__class__.__name__\n if class_name.startswith('ItemData'):\n # ItemData* Elements\n class_name = 'ItemData'\n if attribute not in MODMExtensionRegistry[class_name].value:\n raise ValueError(\"Can't add {} to {}\".format(attribute, self.__class__.__name__))\n self.attributes.append(MODMAttribute(attribute, value))"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef mixin_params(self, params):\n if not isinstance(params, (dict,)):\n raise AttributeError(\"Cannot mixin to object of type {}\".format(type(params)))\n for attribute in self.attributes:\n params.update({attribute.tag: attribute.value})", "response": "Merge in the MdsolAttribute for the passed parameter\n\n "} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef last_update_time(self, value):\n if isinstance(value, (datetime.datetime,)):\n self._last_update_time = value\n else:\n raise ValueError(\"Expect last_update_time to be a datetime\")", "response": "Setter for the last_update_time attribute."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef set_update_time(self, update_time=None):\n if update_time and isinstance(update_time, (datetime.datetime,)):\n self.last_update_time = update_time\n else:\n self.last_update_time = datetime.datetime.utcnow()", "response": "Set the Update Time from the local clock in UTC."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nadds the mdsol : LastUpdateTime attribute to the params.", 
"response": "def mixin_params(self, params):\n \"\"\"\n Add the mdsol:LastUpdateTime attribute\n :return: \n \"\"\"\n super(LastUpdateMixin, self).mixin_params(params)\n if self.last_update_time is not None:\n params.update({\"mdsol:LastUpdateTime\": self.last_update_time.isoformat()})"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef add_milestone(self,\n milestone,\n codelistoid=\"MILESTONES\"):\n \"\"\"\n Add a milestone\n :param codelistoid: specify the CodeListOID (defaults to MILESTONES)\n :param str milestone: Milestone to add\n \"\"\"\n if milestone not in self.milestones.get(codelistoid, []):\n self._milestones.setdefault(codelistoid, []).append(milestone)", "response": "Adds a milestone to the cache."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef mixin(self):\n if self.milestones:\n from rwslib.builders.clinicaldata import Annotation, Flag, FlagValue\n annotation = Annotation()\n for codelist, milestones in self.milestones.items():\n for milestone in milestones:\n annotation << Flag() << FlagValue(milestone, codelist_oid=codelist)\n self.annotations.append(annotation)", "response": "Add the annotations to the ODM Element"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef getEnvironmentFromNameAndProtocol(studyname, protocolname):\n # StudyName = \"TEST (1) (DEV)\"\n # ProtocolName = \"TEST (1)\"\n # Raw Env = \"(DEV)\"\n\n raw_env = studyname[len(protocolname) :].strip()\n\n if \"(\" in raw_env:\n l_brace_pos = raw_env.rfind(\"(\")\n r_brace_pos = raw_env.rfind(\")\")\n return raw_env[l_brace_pos + 1 : r_brace_pos]\n else:\n return raw_env", "response": "Extract environment name using studyname and protocolname"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef parseXMLString(xml):\n\n # 
Remove BOM if it exists (different requests seem to have different BOMs)\n unichr_captured = \"\"\n\n if not xml.strip():\n return u\"\"\n\n while xml[0] != u\"<\":\n unichr_captured += xml[0]\n xml = xml[1:]\n parser = etree.XMLParser(ns_clean=True, collect_ids=False, huge_tree=True)\n try:\n return etree.fromstring(xml.encode(\"utf-8\"), parser=parser)\n except etree.XMLSyntaxError:\n raise Exception(xml)", "response": "Parse XML string and return root XML object"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef fromElement(cls, elem):\n e_global_variables = elem.find(ODM_NS + \"GlobalVariables\")\n studyname = e_global_variables.find(ODM_NS + \"StudyName\").text\n protocolname = e_global_variables.find(ODM_NS + \"ProtocolName\").text\n self = cls(\n oid=elem.get(\"OID\"),\n projecttype=elem.get(MEDI_NS + \"ProjectType\", \"Project\"),\n studyname=studyname,\n protocolname=protocolname,\n environment=getEnvironmentFromNameAndProtocol(studyname, protocolname),\n )\n\n return self", "response": "Read properties from an XML Element to build a StudyList Item."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreading properties from a MetaDataVersion element.", "response": "def fromElement(cls, elem):\n \"\"\"\n Read properties from a MetaDataVersion element\n\n :param lxml.etree._Element elem: Source etree Element\n \"\"\"\n\n self = cls()\n\n self.oid = elem.get(\"OID\")\n self.name = elem.get(\"Name\")\n return self"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef subject_name(self):\n if self.subjectkeytype and self.subjectkeytype == \"SubjectUUID\".lower():\n # if the SubjectKeyType is \"SubjectUUID\", then return the SubjectName\n return self.subjectname\n else:\n return self.subjectkey", "response": "Get the subject name consistently\n "} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 
3 function, write the documentation\ndef fromElement(cls, elem):\n self = cls()\n self.studyoid = elem.get(\"StudyOID\")\n self.metadataversionoid = elem.get(\"MetaDataVersionOID\")\n e_subjectdata = elem.findall(ODM_NS + \"SubjectData\")[0]\n self.subjectkey = e_subjectdata.get(\"SubjectKey\")\n e_siteref = e_subjectdata.findall(ODM_NS + \"SiteRef\")[0]\n self.locationoid = e_siteref.get(\"LocationOID\")\n\n e_links = e_subjectdata.findall(MEDI_NS + \"Link\")\n for e_link in e_links:\n self.links.append(e_link.get(XLINK_NS + \"href\"))\n\n decodes = {\"yes\": True, \"no\": False, \"\": None}\n for prop in RWSSubjectListItem.STATUS_PROPERTIES:\n val = e_subjectdata.get(MEDI_NS + prop, \"\").lower()\n setattr(self, prop.lower(), decodes.get(val, val))\n\n # By default we only get back active and non-deleted subjects\n self.active = decodes[\n e_subjectdata.get(MEDI_NS + \"SubjectActive\", \"yes\").lower()\n ]\n self.deleted = decodes[e_subjectdata.get(MEDI_NS + \"Deleted\", \"no\").lower()]\n\n return self", "response": "Read properties from an XML Element object."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef make_element(builder, tag, content):\n builder.start(tag, {})\n builder.data(content) # Must be UTF-8 encoded\n builder.end(tag)", "response": "Make an element with this tag and text content"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nusing to set guard the setting of an attribute which is singular and can t be set twice", "response": "def set_single_attribute(self, other, trigger_klass, property_name):\n \"\"\"Used to set guard the setting of an attribute which is singular and can't be set twice\"\"\"\n\n if isinstance(other, trigger_klass):\n\n # Check property exists\n if not hasattr(self, property_name):\n raise AttributeError(\"%s has no property %s\" % (self.__class__.__name__, property_name))\n\n if getattr(self, property_name) is None:\n 
setattr(self, property_name, other)\n else:\n raise ValueError(\n '%s already has a %s element set.' % (self.__class__.__name__, other.__class__.__name__,))"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef set_list_attribute(self, other, trigger_klass, property_name):\n # Check property exists\n if isinstance(other, trigger_klass):\n\n if not hasattr(self, property_name):\n raise AttributeError(\"%s has no property %s\" % (self.__class__.__name__, property_name))\n\n val = getattr(self, property_name, [])\n if other in val:\n raise ValueError(\"%s already exists in %s\" % (other.__class__.__name__, self.__class__.__name__))\n else:\n val.append(other)\n setattr(self, property_name, val)", "response": "Used to set guard the setting of a list attribute."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nset the TransactionType ( with Input Validation", "response": "def transaction_type(self, value):\n \"\"\"Set the TransactionType (with Input Validation)\"\"\"\n if value is not None:\n if value not in self.ALLOWED_TRANSACTION_TYPES:\n raise AttributeError('%s transaction_type element must be one of %s not %s' % (\n self.__class__.__name__, ','.join(self.ALLOWED_TRANSACTION_TYPES), value,))\n self._transaction_type = value"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef build(self, builder):\n params = {}\n if self.study_oid:\n params.update(dict(StudyOID=self.study_oid))\n builder.start(\"AdminData\", params)\n for user in self.users:\n user.build(builder)\n for location in self.locations:\n location.build(builder)\n\n builder.end(\"AdminData\")", "response": "Build XML by appending to builder"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef build(self, builder):\n params = dict(StudyOID=self.study_oid,\n MetaDataVersionOID=self.metadata_version_oid,\n 
EffectiveDate=dt_to_iso8601(self.effective_date))\n builder.start(\"MetaDataVersionRef\", params)\n builder.end(\"MetaDataVersionRef\")", "response": "Build XML by appending to builder"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nbuilding XML by appending to builder", "response": "def build(self, builder):\n \"\"\"Build XML by appending to builder\"\"\"\n params = dict(OID=self.oid,\n Name=self.name)\n if self.location_type:\n params.update(dict(LocationType=self.location_type.value))\n # mixins\n self.mixin()\n self.mixin_params(params)\n builder.start(\"Location\", params)\n for mdv in self.metadata_versions:\n mdv.build(builder)\n builder.end(\"Location\")"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef build(self, builder):\n params = dict()\n builder.start(self.__class__.__name__, params)\n for street in self.street_names:\n street.build(builder)\n # build the children\n for child in ('city', 'country', 'state_prov', 'postal_code', 'other_text'):\n if getattr(self, child) is not None:\n getattr(self, child).build(builder)\n builder.end(self.__class__.__name__)", "response": "Build the XML by appending to the given XML builder."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef build(self, builder):\n params = dict(OID=self.oid)\n if self.user_type:\n params.update(dict(UserType=self.user_type.value))\n builder.start(self.__class__.__name__, params)\n # build the children\n for child in ('login_name', 'display_name', 'full_name', 'first_name', 'last_name',\n 'organisation'):\n if getattr(self, child) is not None:\n getattr(self, child).build(builder)\n for address in self.addresses:\n address.build(builder)\n for email in self.emails:\n email.build(builder)\n for phone in self.phones:\n phone.build(builder)\n for location in self.locations:\n location.build(builder)\n 
builder.end(self.__class__.__name__)", "response": "Builds the xml by appending it to the given XML builder object."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef build(self, builder):\n builder.start(self.__class__.__name__)\n builder.data(self.country_code)\n builder.end(self.__class__.__name__)", "response": "Builds this element\n : param builder : A builder object."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nextract next start id from link for next result set and get the next start id", "response": "def get_next_start_id(self):\n \"\"\"If link for next result set has been passed, extract it and get the next set start id\"\"\"\n link = self.rws_connection.last_result.links.get(\"next\", None)\n if link:\n link = link['url']\n p = urlparse(link)\n start_id = int(parse_qs(p.query)['startid'][0])\n return start_id\n\n return None"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nsends a request to the RWS.", "response": "def send_request(self, request_object, timeout=None, retries=1, **kwargs):\n \"\"\"Send request to RWS endpoint. The request object passed provides the URL endpoint and the HTTP method.\n Takes the text response from RWS and allows the request object to modify it for return. 
This allows the request\n object to return text, an XML document object, a CSV file or anything else that can be generated from the text\n response from RWS.\n A timeout, in seconds, can be optionally passed into send_request.\n \"\"\"\n if not isinstance(request_object, RWSRequest):\n raise ValueError(\"Request object must be a subclass of RWSRequest\")\n\n # Construct a URL from the object and make a call\n full_url = make_url(self.base_url, request_object.url_path())\n if request_object.requires_authorization:\n kwargs[\"auth\"] = self.auth\n # TODO: Look at different connect and read timeouts?\n kwargs[\"timeout\"] = timeout\n kwargs.update(request_object.args())\n\n # Explicit use of requests library here. Could alter in future to inject library to use in case\n # requests not available.\n\n # Get a session that allows us to customize HTTP requests\n session = requests.Session()\n\n # Mount a custom adapter that retries failed connections for HTTP and HTTPS requests.\n for scheme in [\"http://\", \"https://\"]:\n session.mount(scheme, requests.adapters.HTTPAdapter(max_retries=retries))\n\n action = {\"GET\": session.get, \"POST\": session.post}[request_object.method]\n\n start_time = time.time()\n\n try:\n r = action(full_url, **kwargs) # type: requests.models.Response\n except (\n requests.exceptions.ConnectTimeout,\n requests.exceptions.ReadTimeout,\n ) as exc:\n if isinstance(exc, (requests.exceptions.ConnectTimeout,)):\n raise RWSException(\n \"Server Connection Timeout\",\n \"Connection timeout for {}\".format(full_url),\n )\n elif isinstance(exc, (requests.exceptions.ReadTimeout,)):\n raise RWSException(\n \"Server Read Timeout\", \"Read timeout for {}\".format(full_url)\n )\n\n self.request_time = time.time() - start_time\n self.last_result = r # see also r.elapsed for timedelta object.\n\n if r.status_code in [400, 404]:\n # Is it a RWS response?\n if r.text.startswith(\"<Response\"):\n error = RWSErrorResponse(r.text)\n raise RWSException(error.errordescription, error)\n\n if r.status_code == 401:\n # Either auth credentials were missing or they were incorrect\n if \"<h2>HTTP Error 401.0 - Unauthorized</h2>\" in r.text:\n raise
RWSException(\"Unauthorized.\", r.text)\n\n # Check if the content_type is text/xml. Use startswith\n # in case the charset is also specified:\n # content-type: text/xml; charset=utf-8\n if r.headers.get(\"content-type\").startswith(\"text/xml\"):\n # XML response\n if r.text.startswith(\" 2030):\n break\n return ('date', format)\n except ValueError:\n pass\n except TypeError:\n pass\n\n # Test if value is a time\n for format in ['%H:%M:%S', '%H:%M', '%I:%M:%S', '%I:%M', '%I:%M:%S %p', '%I:%M %p']:\n try:\n datetime.datetime.strptime(value, format)\n return ('time', format)\n except ValueError:\n pass\n except TypeError:\n pass\n\n # Test if value is a integer\n try:\n if ((isinstance(value, str) and isinstance(int(value), int)) \\\n or isinstance(value, int)):\n return ('int', None)\n except ValueError:\n pass\n except TypeError:\n pass\n\n # Test if value is a float\n try:\n float(value)\n return ('float', None)\n except ValueError:\n pass\n except TypeError:\n pass\n\n # If no match on anything else, assume its a string\n return ('string', None)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef scramble_float(self, length, sd=0):\n if sd == 0:\n return str(fake.random_number(length))\n else:\n return str(fake.pyfloat(length - sd, sd, positive=True))", "response": "Return random float in specified format"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nscrambling a value and return it as a string", "response": "def scramble_value(self, value):\n \"\"\"Duck-type value and scramble appropriately\"\"\"\n try:\n type, format = typeof_rave_data(value)\n if type == 'float':\n i, f = value.split('.')\n return self.scramble_float(len(value) - 1, len(f))\n elif type == 'int':\n return self.scramble_int(len(value))\n elif type == 'date':\n return self.scramble_date(value, format)\n elif type == 'time':\n return self.scramble_time(format)\n elif type == 'string':\n return 
self.scramble_string(len(value))\n else:\n return value\n except:\n return \"\""} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nscramble a code list into a random element", "response": "def scramble_codelist(self, codelist):\n \"\"\"Return random element from code list\"\"\"\n # TODO: External code lists\n path = \".//{0}[@{1}='{2}']\".format(E_ODM.CODELIST.value, A_ODM.OID.value, codelist)\n elem = self.metadata.find(path)\n codes = []\n for c in elem.iter(E_ODM.CODELIST_ITEM.value):\n codes.append(c.get(A_ODM.CODED_VALUE.value))\n for c in elem.iter(E_ODM.ENUMERATED_ITEM.value):\n codes.append(c.get(A_ODM.CODED_VALUE.value))\n\n return fake.random_element(codes)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nscramble the value based on the data type of the item.", "response": "def scramble_itemdata(self, oid, value):\n \"\"\"If metadata provided, use it to scramble the value based on data type\"\"\"\n if self.metadata is not None:\n path = \".//{0}[@{1}='{2}']\".format(E_ODM.ITEM_DEF.value, A_ODM.OID.value, oid)\n elem = self.metadata.find(path)\n # for elem in self.metadata.iter(E_ODM.ITEM_DEF.value):\n datatype = elem.get(A_ODM.DATATYPE.value)\n\n codelist = None\n for el in elem.iter(E_ODM.CODELIST_REF.value):\n codelist = el.get(A_ODM.CODELIST_OID.value)\n\n length = 1 if not A_ODM.LENGTH in elem else int(elem.get(A_ODM.LENGTH.value))\n\n if A_ODM.SIGNIFICANT_DIGITS.value in elem.keys():\n sd = elem.get(A_ODM.SIGNIFICANT_DIGITS.value)\n else:\n sd = 0\n\n if A_ODM.DATETIME_FORMAT.value in elem.keys():\n dt_format = elem.get(A_ODM.DATETIME_FORMAT.value)\n for fmt in [('yyyy', '%Y'), ('MMM', '%b'), ('dd', '%d'), ('HH', '%H'), ('nn', '%M'), ('ss', '%S'),\n ('-', '')]:\n dt_format = dt_format.replace(fmt[0], fmt[1])\n\n if codelist is not None:\n return self.scramble_codelist(codelist)\n\n elif datatype == 'integer':\n return self.scramble_int(length)\n\n elif datatype == 'float':\n return 
self.scramble_float(length, sd)\n\n elif datatype in ['string', 'text']:\n return self.scramble_string(length)\n\n elif datatype in ['date', 'datetime']:\n return self.scramble_date(value, dt_format)\n\n elif datatype in ['time']:\n return self.scramble_time(dt_format)\n\n else:\n return self.scramble_value(value)\n\n else:\n return self.scramble_value(value)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nfill in random values for all empty - valued ItemData elements in an ODM document", "response": "def fill_empty(self, fixed_values, input):\n \"\"\"Fill in random values for all empty-valued ItemData elements in an ODM document\"\"\"\n odm_elements = etree.fromstring(input)\n\n for v in odm_elements.iter(E_ODM.ITEM_DATA.value):\n if v.get(A_ODM.VALUE.value) == \"\":\n oid = v.get(A_ODM.ITEM_OID.value)\n\n if fixed_values is not None and oid in fixed_values:\n d = fixed_values[oid]\n else:\n d = self.scramble_itemdata(v.get(A_ODM.ITEM_OID.value), v.get(A_ODM.VALUE.value))\n\n v.set(A_ODM.VALUE.value, d)\n else:\n # Remove ItemData if it already has a value\n v.getparent().remove(v)\n\n # Remove empty ItemGroupData elements\n for v in odm_elements.iter(E_ODM.ITEM_GROUP_DATA.value):\n if len(v) == 0:\n v.getparent().remove(v)\n\n # Remove empty FormData elements\n for v in odm_elements.iter(E_ODM.FORM_DATA.value):\n if len(v) == 0:\n v.getparent().remove(v)\n\n # Remove empty StudyEventData elements\n for v in odm_elements.iter(E_ODM.STUDY_EVENT_DATA.value):\n if len(v) == 0:\n v.getparent().remove(v)\n\n return etree.tostring(odm_elements)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nconverts string value to long", "response": "def make_int(value, missing=-1):\n \"\"\"Convert string value to long, '' to missing\"\"\"\n if isinstance(value, six.string_types):\n if not value.strip():\n return missing\n elif value is None:\n return missing\n return int(value)"} {"SOURCE": "codesearchnet", 
"instruction": "Implement a function in Python 3 to\nparse the XML data firing events from the eventer", "response": "def parse(data, eventer):\n \"\"\"Parse the XML data, firing events from the eventer\"\"\"\n parser = etree.XMLParser(target=ODMTargetParser(eventer))\n return etree.XML(data, parser)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef emit(self):\n\n self.count += 1\n\n # event_name = 'on_{0}'.format(self.context.subcategory.lower())\n event_name = self.context.subcategory\n if hasattr(self.handler, event_name):\n getattr(self.handler, event_name)(self.context)\n elif hasattr(self.handler, 'default'):\n self.handler.default(self.context)", "response": "Emit the next element of the sequence."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef start(self, tag, attrib):\n if tag == E_CLINICAL_DATA:\n self.ref_state = AUDIT_REF_STATE\n self.context = Context(attrib[A_STUDY_OID],\n attrib[A_AUDIT_SUBCATEGORY_NAME],\n int(attrib[A_METADATA_VERSION_OID]))\n\n elif tag == E_SUBJECT_DATA:\n self.context.subject = Subject(\n attrib.get(A_SUBJECT_KEY),\n attrib.get(A_SUBJECT_NAME),\n attrib.get(A_SUBJECT_STATUS),\n attrib.get(A_TRANSACTION_TYPE, DEFAULT_TRANSACTION_TYPE),\n )\n elif tag == E_USER_REF:\n # Set the Signature or audit-record value depending on state\n self.get_parent_element().user_oid = attrib.get(A_USER_OID)\n\n elif tag == E_SOURCE_ID:\n self.state = STATE_SOURCE_ID\n\n elif tag == E_DATE_TIME_STAMP_:\n self.state = STATE_DATETIME\n\n elif tag == E_REASON_FOR_CHANGE:\n self.state = STATE_REASON_FOR_CHANGE\n\n elif tag == E_LOCATION_REF:\n # Set the Signature or audit-record value depending on state\n self.get_parent_element().location_oid = attrib.get(A_LOCATION_OID)\n\n elif tag == E_STUDY_EVENT_DATA:\n self.context.event = Event(\n attrib.get(A_STUDYEVENT_OID),\n attrib.get(A_STUDYEVENT_REPEAT_KEY),\n 
attrib.get(A_TRANSACTION_TYPE),\n attrib.get(A_INSTANCE_NAME),\n attrib.get(A_INSTANCE_OVERDUE),\n make_int(attrib.get(A_INSTANCE_ID), -1)\n )\n\n elif tag == E_FORM_DATA:\n self.context.form = Form(\n attrib.get(A_FORM_OID),\n int(attrib.get(A_FORM_REPEAT_KEY, 0)),\n attrib.get(A_TRANSACTION_TYPE),\n attrib.get(A_DATAPAGE_NAME),\n make_int(attrib.get(A_DATAPAGE_ID, -1)),\n )\n\n elif tag == E_ITEM_GROUP_DATA:\n self.context.itemgroup = ItemGroup(\n attrib.get(A_ITEMGROUP_OID),\n int(attrib.get(A_ITEMGROUP_REPEAT_KEY, 0)),\n attrib.get(A_TRANSACTION_TYPE),\n make_int(attrib.get(A_RECORD_ID, -1)),\n )\n\n elif tag == E_ITEM_DATA:\n self.context.item = Item(\n attrib.get(A_ITEM_OID),\n attrib.get(A_VALUE),\n yes_no_none(attrib.get(A_FREEZE)),\n yes_no_none(attrib.get(A_VERIFY)),\n yes_no_none(attrib.get(A_LOCK)),\n attrib.get(A_TRANSACTION_TYPE)\n )\n\n elif tag == E_QUERY:\n self.context.query = Query(\n make_int(attrib.get(A_QUERY_REPEAT_KEY, -1)),\n attrib.get(A_STATUS),\n attrib.get(A_RESPONSE),\n attrib.get(A_RECIPIENT),\n attrib.get(A_VALUE) # Optional, depends on status\n )\n\n elif tag == E_PROTOCOL_DEVIATION:\n self.context.protocol_deviation = ProtocolDeviation(\n make_int(attrib.get(A_PROTCOL_DEVIATION_REPEAT_KEY, -1)),\n attrib.get(A_CODE),\n attrib.get(A_CLASS),\n attrib.get(A_STATUS),\n attrib.get(A_VALUE),\n attrib.get(A_TRANSACTION_TYPE)\n )\n\n elif tag == E_REVIEW:\n self.context.review = Review(\n attrib.get(A_GROUP_NAME),\n yes_no_none(attrib.get(A_REVIEWED)),\n )\n elif tag == E_COMMENT:\n self.context.comment = Comment(\n attrib.get(A_COMMENT_REPEAT_KEY),\n attrib.get(A_VALUE),\n attrib.get(A_TRANSACTION_TYPE)\n\n )\n elif tag == E_SIGNATURE:\n self.ref_state = SIGNATURE_REF_STATE\n\n elif tag == E_SIGNATURE_REF:\n self.context.signature.oid = attrib.get(A_SIGNATURE_OID)", "response": "On start of element tag"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nget the parent element of the current 
context.", "response": "def get_parent_element(self):\n \"\"\"Signatures and Audit elements share sub-elements, we need to know which to set attributes on\"\"\"\n return {AUDIT_REF_STATE: self.context.audit_record,\n SIGNATURE_REF_STATE: self.context.signature}[self.ref_state]"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef data(self, data):\n if self.state == STATE_SOURCE_ID:\n self.context.audit_record.source_id = int(data) # Audit ids can be 64 bits\n elif self.state == STATE_DATETIME:\n dt = datetime.datetime.strptime(data, \"%Y-%m-%dT%H:%M:%S\")\n self.get_parent_element().datetimestamp = dt\n elif self.state == STATE_REASON_FOR_CHANGE:\n self.context.audit_record.reason_for_change = data.strip() or None # Convert a result of '' to None.\n self.state = STATE_NONE", "response": "Called for text between tags"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef validate_not_empty():\n def empty_validator(field, data):\n if isinstance(field.value, str) and not field.value.strip():\n raise ValidationError('empty')\n return empty_validator", "response": "Validate that a field is not empty."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef validate_length(low=None, high=None, equal=None):\n def length_validator(field, data):\n if field.value is None:\n return\n if equal is not None and len(field.value) != equal:\n raise ValidationError('length_equal', equal=equal)\n if low is not None and len(field.value) < low:\n key = 'length_low' if high is None else 'length_between'\n raise ValidationError(key, low=low, high=high)\n if high is not None and len(field.value) > high:\n key = 'length_high' if low is None else 'length_between'\n raise ValidationError(key, low=low, high=high)\n return length_validator", "response": "Validate the length of a node with either low high or equal."} {"SOURCE": "codesearchnet", 
"instruction": "Can you implement a function in Python 3 that\nvalidates that a field is in one of the given values.", "response": "def validate_one_of(values):\n \"\"\"\n Validate that a field is in one of the given values.\n\n :param values: Iterable of valid values.\n :raises: ``ValidationError('one_of')``\n \"\"\"\n def one_of_validator(field, data):\n if field.value is None:\n return\n options = values\n if callable(options):\n options = options()\n if field.value not in options:\n raise ValidationError('one_of', choices=', '.join(map(str, options)))\n return one_of_validator"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef validate_none_of(values):\n def none_of_validator(field, data):\n options = values\n if callable(options):\n options = options()\n if field.value in options:\n raise ValidationError('none_of', choices=str.join(', ', options))\n return none_of_validator", "response": "Validate that a field is not in one of the given values."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef validate_range(low=None, high=None):\n def range_validator(field, data):\n if field.value is None:\n return\n if low is not None and field.value < low:\n key = 'range_low' if high is None else 'range_between'\n raise ValidationError(key, low=low, high=high)\n if high is not None and field.value > high:\n key = 'range_high' if high is None else 'range_between'\n raise ValidationError(key, low=low, high=high)\n return range_validator", "response": "Validate that a field with either low or high or equal is present."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nvalidating the field value is equal to the given value.", "response": "def validate_equal(value):\n \"\"\"\n Validate the field value is equal to the given value.\n Should work with anything that supports '==' operator.\n\n :param value: Value to compare.\n 
:raises: ``ValidationError('equal')``\n \"\"\"\n def equal_validator(field, data):\n if field.value is None:\n return\n if not (field.value == value):\n raise ValidationError('equal', other=value)\n return equal_validator"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nvalidate that the value of a field in the data contains the value of another field in the data.", "response": "def validate_matches(other):\n \"\"\"\n Validate the field value is equal to another field in the data.\n Should work with anything that supports '==' operator.\n\n :param value: Field key to compare.\n :raises: ``ValidationError('matches')``\n \"\"\"\n def matches_validator(field, data):\n if field.value is None:\n return\n if not (field.value == data.get(other)):\n raise ValidationError('matches', other=other)\n return matches_validator"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef validate_regexp(pattern, flags=0):\n regex = re.compile(pattern, flags) if isinstance(pattern, str) else pattern\n\n def regexp_validator(field, data):\n if field.value is None:\n return\n if regex.match(str(field.value)) is None:\n raise ValidationError('regexp', pattern=pattern)\n return regexp_validator", "response": "Validate that the field value matches the given regular expression."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nvalidating the field value matches the result of calling the given method.", "response": "def validate_function(method, **kwargs):\n \"\"\"\n Validate the field matches the result of calling the given method. 
Example::\n\n def myfunc(value, name):\n return value == name\n\n validator = validate_function(myfunc, name='tim')\n\n Essentially creates a validator that only accepts the name 'tim'.\n\n :param method: Method to call.\n :param kwargs: Additional keyword arguments passed to the method.\n :raises: ``ValidationError('function')``\n \"\"\"\n def function_validator(field, data):\n if field.value is None:\n return\n if not method(field.value, **kwargs):\n raise ValidationError('function', function=method.__name__)\n return function_validator"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nvalidates the field is a valid email address.", "response": "def validate_email():\n \"\"\"\n Validate the field is a valid email address.\n\n :raises: ``ValidationError('email')``\n \"\"\"\n user_regex = re.compile(\n r\"(^[-!#$%&'*+/=?^`{}|~\\w]+(\\.[-!#$%&'*+/=?^`{}|~\\w]+)*$\"\n r'|^\"([\\001-\\010\\013\\014\\016-\\037!#-\\[\\]-\\177]'\n r'|\\\\[\\001-\\011\\013\\014\\016-\\177])*\"$)', re.IGNORECASE | re.UNICODE)\n\n domain_regex = re.compile(\n r'(?:[A-Z0-9](?:[A-Z0-9-]{0,61}[A-Z0-9])?\\.)+'\n r'(?:[A-Z]{2,6}|[A-Z0-9-]{2,})$'\n r'|^\\[(25[0-5]|2[0-4]\\d|[0-1]?\\d?\\d)'\n r'(\\.(25[0-5]|2[0-4]\\d|[0-1]?\\d?\\d)){3}\\]$', re.IGNORECASE | re.UNICODE)\n\n domain_whitelist = ('localhost',)\n\n def email_validator(field, data):\n if field.value is None:\n return\n\n value = str(field.value)\n\n if '@' not in value:\n raise ValidationError('email')\n\n user_part, domain_part = value.rsplit('@', 1)\n\n if not user_regex.match(user_part):\n raise ValidationError('email')\n\n if domain_part in domain_whitelist:\n return\n\n if not domain_regex.match(domain_part):\n raise ValidationError('email')\n\n return email_validator"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nvalidating the field is a unique given a queryset and lookup_field.", "response": "def validate_model_unique(lookup_field, queryset, pk_field=None, 
pk_value=None):\n \"\"\"\n Validate the field is a unique, given a queryset and lookup_field. Example::\n\n validator = validate_model_unique(User.email, User.select())\n\n Creates a validator that can validate the uniqueness of an email address.\n\n :param lookup_field: Peewee model field that should be used for checking existing values.\n :param queryset: Queryset to use for lookup.\n :param pk_field: Field instance to use when excluding existing instance.\n :param pk_value: Field value to use when excluding existing instance.\n :raises: ``ValidationError('unique')``\n \"\"\"\n def unique_validator(field, data):\n # If we have a PK, ignore it because it represents the current record.\n query = queryset.where(lookup_field == field.value)\n if pk_field and pk_value:\n query = query.where(~(pk_field == pk_value))\n if query.count():\n raise ValidationError('unique')\n return unique_validator"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncoerce a value to a scalar value for lookup_field.", "response": "def coerce_single_instance(lookup_field, value):\n \"\"\"\n Convert from whatever value is given to a scalar value for lookup_field.\n If value is a dict, then lookup_field.name is used to get the value from the dict. Example:\n lookup_field.name = 'id'\n value = {'id': 123, 'name': 'tim'}\n returns = 123\n If value is a model, then lookup_field.name is extracted from the model. 
Example:\n lookup_field.name = 'id'\n value = \n returns = 123\n Otherwise the value is returned as-is.\n\n :param lookup_field: Peewee model field used for getting name from value.\n :param value: Some kind of value (usually a dict, Model instance, or scalar).\n \"\"\"\n if isinstance(value, dict):\n return value.get(lookup_field.name)\n if isinstance(value, peewee.Model):\n return getattr(value, lookup_field.name)\n return value"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn True if the value is iterable but not a string. Otherwise returns False.", "response": "def isiterable_notstring(value):\n \"\"\"\n Returns True if the value is iterable but not a string. Otherwise returns False.\n\n :param value: Value to check.\n \"\"\"\n if isinstance(value, str):\n return False\n return isinstance(value, Iterable) or isgeneratorfunction(value) or isgenerator(value)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_value(self, name, data):\n if name in data:\n return data.get(name)\n if self.default:\n if callable(self.default):\n return self.default()\n return self.default\n return None", "response": "Get the value of this field from the data."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef validate(self, name, data):\n self.value = self.get_value(name, data)\n if self.value is not None:\n self.value = self.coerce(self.value)\n for method in self.validators:\n method(self, data)", "response": "Check to make sure the data for this field is valid."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef validate(self, name, data):\n super().validate(name, data)\n if self.value is not None:\n try:\n self.value = self.query.get(self.lookup_field == self.value)\n except (AttributeError, ValueError, peewee.DoesNotExist):\n raise 
ValidationError('related', field=self.lookup_field.name, values=self.value)", "response": "Validate that the related field is set."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nconvert from whatever is given to a list of scalars for the lookup_field.", "response": "def coerce(self, value):\n \"\"\"Convert from whatever is given to a list of scalars for the lookup_field.\"\"\"\n if isinstance(value, dict):\n value = [value]\n if not isiterable_notstring(value):\n value = [value]\n return [coerce_single_instance(self.lookup_field, v) for v in value]"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef initialize_fields(self):\n for field in dir(self):\n obj = getattr(self, field)\n if isinstance(obj, Field):\n self._meta.fields[field] = obj", "response": "Initialize the base_fields dictionary."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nvalidates the data for all fields and return whether the validation was successful.", "response": "def validate(self, data=None, only=None, exclude=None):\n \"\"\"\n Validate the data for all fields and return whether the validation was successful.\n This method also retains the validated data in ``self.data`` so that it can be accessed later.\n\n This is usually the method you want to call after creating the validator instance.\n\n :param data: Dictionary of data to validate.\n :param only: List or tuple of fields to validate.\n :param exclude: List or tuple of fields to exclude from validation.\n :return: True if validation was successful. 
Otherwise False.\n \"\"\"\n only = only or []\n exclude = exclude or []\n data = data or {}\n self.errors = {}\n self.data = {}\n\n # Validate individual fields.\n for name, field in self._meta.fields.items():\n if name in exclude or (only and name not in only):\n continue\n try:\n field.validate(name, data)\n except ValidationError as err:\n self.add_error(name, err)\n continue\n self.data[name] = field.value\n\n # Clean individual fields.\n if not self.errors:\n self.clean_fields(self.data)\n\n # Then finally clean the whole data dict.\n if not self.errors:\n try:\n self.data = self.clean(self.data)\n except ValidationError as err:\n self.add_error('__base__', err)\n\n return (not self.errors)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncleans the data of the record sets.", "response": "def clean_fields(self, data):\n \"\"\"\n For each field, check to see if there is a clean_ method.\n If so, run that method and set the returned value on the self.data dict.\n This happens after all validations so that each field can act on the\n cleaned data of other fields if needed.\n\n :param data: Dictionary of data to clean.\n :return: None\n \"\"\"\n for name, value in data.items():\n try:\n method = getattr(self, 'clean_{}'.format(name), None)\n if method:\n self.data[name] = method(value)\n except ValidationError as err:\n self.add_error(name, err)\n continue"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nconverting all model fields to validator fields.", "response": "def initialize_fields(self):\n \"\"\"\n Convert all model fields to validator fields.\n Then call the parent so that overwrites can happen if necessary for manually defined fields.\n\n :return: None\n \"\"\"\n # # Pull all the \"normal\" fields off the model instance meta.\n for name, field in self.instance._meta.fields.items():\n if getattr(field, 'primary_key', False):\n continue\n self._meta.fields[name] = 
self.convert_field(name, field)\n\n # Many-to-many fields are not stored in the meta fields dict.\n # Pull them directly off the class.\n for name in dir(type(self.instance)):\n field = getattr(type(self.instance), name, None)\n if isinstance(field, ManyToManyField):\n self._meta.fields[name] = self.convert_field(name, field)\n\n super().initialize_fields()"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nconverts a single field from a Peewee model field to a validator field.", "response": "def convert_field(self, name, field):\n \"\"\"\n Convert a single field from a Peewee model field to a validator field.\n\n :param name: Name of the field as defined on this validator.\n :param name: Peewee field instance.\n :return: Validator field.\n \"\"\"\n if PEEWEE3:\n field_type = field.field_type.lower()\n else:\n field_type = field.db_field\n\n pwv_field = ModelValidator.FIELD_MAP.get(field_type, StringField)\n\n print('pwv_field', field_type, pwv_field)\n\n validators = []\n required = not bool(getattr(field, 'null', True))\n choices = getattr(field, 'choices', ())\n default = getattr(field, 'default', None)\n max_length = getattr(field, 'max_length', None)\n unique = getattr(field, 'unique', False)\n\n if required:\n validators.append(validate_required())\n\n if choices:\n print('CHOICES', choices)\n validators.append(validate_one_of([c[0] for c in choices]))\n\n if max_length:\n validators.append(validate_length(high=max_length))\n\n if unique:\n validators.append(validate_model_unique(field, self.instance.select(), self.pk_field, self.pk_value))\n\n if isinstance(field, peewee.ForeignKeyField):\n if PEEWEE3:\n rel_field = field.rel_field\n else:\n rel_field = field.to_field\n return ModelChoiceField(field.rel_model, rel_field, default=default, validators=validators)\n\n if isinstance(field, ManyToManyField):\n return ManyModelChoiceField(\n field.rel_model, field.rel_model._meta.primary_key,\n default=default, 
validators=validators)\n\n return pwv_field(default=default, validators=validators)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef validate(self, data=None, only=None, exclude=None):\n data = data or {}\n only = only or self._meta.only\n exclude = exclude or self._meta.exclude\n\n for name, field in self.instance._meta.fields.items():\n if name in exclude or (only and name not in only):\n continue\n try:\n data.setdefault(name, getattr(self.instance, name, None))\n except (peewee.DoesNotExist):\n if PEEWEE3:\n instance_data = self.instance.__data__\n else:\n instance_data = self.instance._data\n data.setdefault(name, instance_data.get(name, None))\n\n # This will set self.data which we should use from now on.\n super().validate(data=data, only=only, exclude=exclude)\n\n if not self.errors:\n self.perform_index_validation(self.data)\n\n return (not self.errors)", "response": "Validate the data for all fields and return whether the validation was successful."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nvalidate any unique indexes specified on the model.", "response": "def perform_index_validation(self, data):\n \"\"\"\n Validate any unique indexes specified on the model.\n This should happen after all the normal fields have been validated.\n This can add error messages to multiple fields.\n\n :return: None\n \"\"\"\n # Build a list of dict containing query values for each unique index.\n index_data = []\n for columns, unique in self.instance._meta.indexes:\n if not unique:\n continue\n index_data.append({col: data.get(col, None) for col in columns})\n\n # Then query for each unique index to see if the value is unique.\n for index in index_data:\n query = self.instance.filter(**index)\n # If we have a primary key, need to exclude the current record from the check.\n if self.pk_field and self.pk_value:\n query = query.where(~(self.pk_field == self.pk_value))\n if 
query.count():\n err = ValidationError('index', fields=str.join(', ', index.keys()))\n for col in index.keys():\n self.add_error(col, err)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef save(self, force_insert=False):\n delayed = {}\n for field, value in self.data.items():\n model_field = getattr(type(self.instance), field, None)\n\n # If this is a many-to-many field, we cannot save it to the instance until the instance\n # is saved to the database. Collect these fields and delay the setting until after\n # the model instance is saved.\n if isinstance(model_field, ManyToManyField):\n if value is not None:\n delayed[field] = value\n continue\n\n setattr(self.instance, field, value)\n\n rv = self.instance.save(force_insert=force_insert)\n\n for field, value in delayed.items():\n setattr(self.instance, field, value)\n\n return rv", "response": "Save the model and any related many - to - many fields."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef long_to_bytes(N, blocksize=1):\n\n bytestring = hex(N)\n bytestring = bytestring[2:] if bytestring.startswith('0x') else bytestring\n bytestring = bytestring[:-1] if bytestring.endswith('L') else bytestring\n bytestring = '0' + bytestring if (len(bytestring) % 2) != 0 else bytestring\n bytestring = binascii.unhexlify(bytestring)\n\n if blocksize > 0 and len(bytestring) % blocksize != 0:\n bytestring = '\\x00' * \\\n (blocksize - (len(bytestring) % blocksize)) + bytestring\n\n return bytestring", "response": "Given an input integer N returns the representation of N in bytes."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngives a string plaintext returns a ciphertext encrypted with an authenticated - encryption scheme using the keys specified in self. K1 and self. 
K2.", "response": "def encrypt(self, plaintext, iv_bytes=None):\n \"\"\"Given ``plaintext``, returns a ``ciphertext`` encrypted with an authenticated-encryption scheme, using the keys specified in ``__init__``.\n Ciphertext expansion is deterministic, the output ciphertext is always 42 bytes longer than the input ``plaintext``.\n The input ``plaintext`` can be ``''``.\n\n Raises ``PlaintextTypeError`` if input plaintext is not a string.\n \"\"\"\n\n if not isinstance(plaintext, str):\n raise PlaintextTypeError(\"Input plaintext is not of type string\")\n\n if iv_bytes is None:\n iv_bytes = fte.bit_ops.random_bytes(Encrypter._IV_LENGTH)\n\n iv1_bytes = '\\x01' + iv_bytes\n iv2_bytes = '\\x02' + iv_bytes\n\n W1 = iv1_bytes\n W1 += fte.bit_ops.long_to_bytes(\n len(plaintext), Encrypter._MSG_COUNTER_LENGTH)\n W1 = self._ecb_enc_K1.encrypt(W1)\n\n counter_length_in_bits = AES.block_size * 8\n counter_val = fte.bit_ops.bytes_to_long(iv2_bytes)\n counter = Counter.new(\n counter_length_in_bits, initial_value=counter_val)\n ctr_enc = AES.new(key=self.K1,\n mode=AES.MODE_CTR,\n IV='\\x00' * 8 + iv2_bytes,\n counter=counter)\n W2 = ctr_enc.encrypt(plaintext)\n\n mac = HMAC.new(self.K2, W1 + W2, SHA512)\n T = mac.digest()\n T = T[:Encrypter._MAC_LENGTH]\n\n ciphertext = W1 + W2 + T\n\n return ciphertext"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngiving a ciphertext returns a plaintext using the keys specified in self. _ecb_enc_K1 and self. 
_ecb_enc_K2.", "response": "def decrypt(self, ciphertext):\n \"\"\"Given ``ciphertext`` returns a ``plaintext`` decrypted using the keys specified in ``__init__``.\n\n Raises ``CiphertextTypeError`` if the input ``ciphertext`` is not a string.\n Raises ``RecoverableDecryptionError`` if the input ``ciphertext`` has a non-negative message length greater than the ciphertext length.\n Raises ``UnrecoverableDecryptionError`` if invalid padding is detected, or the MAC is invalid.\n \"\"\"\n\n if not isinstance(ciphertext, str):\n raise CiphertextTypeError(\"Input ciphertext is not of type string\")\n\n plaintext_length = self.getPlaintextLen(ciphertext)\n ciphertext_length = self.getCiphertextLen(ciphertext)\n ciphertext_complete = (len(ciphertext) >= ciphertext_length)\n if ciphertext_complete is False:\n raise RecoverableDecryptionError('Incomplete ciphertext: ('+str(len(ciphertext))+' of '+str(ciphertext_length)+').')\n\n ciphertext = ciphertext[:ciphertext_length]\n\n W1_start = 0\n W1_end = AES.block_size\n W1 = ciphertext[W1_start:W1_end]\n\n W2_start = AES.block_size\n W2_end = AES.block_size + plaintext_length\n W2 = ciphertext[W2_start:W2_end]\n\n T_start = AES.block_size + plaintext_length\n T_end = AES.block_size + plaintext_length + Encrypter._MAC_LENGTH\n T_expected = ciphertext[T_start:T_end]\n\n mac = HMAC.new(self.K2, W1 + W2, SHA512)\n T_actual = mac.digest()[:Encrypter._MAC_LENGTH]\n if T_expected != T_actual:\n raise UnrecoverableDecryptionError('Failed to verify MAC.')\n\n iv2_bytes = '\x02' + self._ecb_enc_K1.decrypt(W1)[1:8]\n counter_val = fte.bit_ops.bytes_to_long(iv2_bytes)\n counter_length_in_bits = AES.block_size * 8\n counter = Counter.new(\n counter_length_in_bits, initial_value=counter_val)\n ctr_enc = AES.new(key=self.K1,\n mode=AES.MODE_CTR,\n IV='\x00' * 8 + iv2_bytes,\n counter=counter)\n plaintext = ctr_enc.decrypt(W2)\n\n return plaintext"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 
function doing\ndef getCiphertextLen(self, ciphertext):\n\n plaintext_length = self.getPlaintextLen(ciphertext)\n ciphertext_length = plaintext_length + Encrypter._CTXT_EXPANSION\n return ciphertext_length", "response": "Given a ciphertext with a valid header returns the length of the ciphertext inclusive of ciphertext expansion."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ngive a ciphertext with a valid header returns the length of the plaintext payload.", "response": "def getPlaintextLen(self, ciphertext):\n \"\"\"Given a ``ciphertext`` with a valid header, returns the length of the plaintext payload.\n \"\"\"\n\n completeCiphertextHeader = (len(ciphertext) >= 16)\n if completeCiphertextHeader is False:\n raise RecoverableDecryptionError('Incomplete ciphertext header.')\n\n ciphertext_header = ciphertext[:16]\n L = self._ecb_enc_K1.decrypt(ciphertext_header)\n\n padding_expected = '\\x00\\x00\\x00\\x00'\n padding_actual = L[-8:-4]\n validPadding = (padding_actual == padding_expected)\n if validPadding is False:\n raise UnrecoverableDecryptionError(\n 'Invalid padding: ' + padding_actual)\n\n message_length = fte.bit_ops.bytes_to_long(L[-8:])\n\n msgLenNonNegative = (message_length >= 0)\n if msgLenNonNegative is False:\n raise UnrecoverableDecryptionError('Negative message length.')\n\n return message_length"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncreate a digest suitable for use within an object.", "response": "def _make_digest(k, **kwargs):\n \"\"\"\n Creates a digest suitable for use within an :class:`phyles.FSCache`\n object from the key object `k`.\n\n >>> adict = {'a' : {'b':1}, 'f': []}\n >>> make_digest(adict)\n 'a2VKynHgDrUIm17r6BQ5QcA5XVmqpNBmiKbZ9kTu0A'\n \"\"\"\n result = list()\n result_dict = _make_digest_dict(k, **kwargs)\n if result_dict is None:\n return 'default'\n else: \n for (key, h) in sorted(result_dict.items()):\n result.append('{}_{}'.format(key, h))\n return 
'.'.join(result)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngives model name and stan model code file compute path to cached stan fit if include_prefix returns ( model_prefix model_cachefile", "response": "def cached_model_file(model_name='anon_model', file=None, model_code=None, cache_dir=None,\n fit_cachefile=None, include_prefix=False):\n ''' Given model name & stan model code/file, compute path to cached stan fit\n \n if include_prefix, returns (model_prefix, model_cachefile)\n '''\n cache_dir = _get_cache_dir(cache_dir)\n model_name = _sanitize_model_name(model_name)\n ## compute model prefix\n if file:\n model_code = _read_file(file)\n if model_code:\n model_prefix = '.'.join([model_name, _make_digest(dict(model_code=model_code,\n pystan=pystan.__version__,\n cython=Cython.__version__))])\n else: ## handle case where no model code given\n if file is not None:\n logger.info('Note - no model code detected from given file: {}'.format(file))\n else:\n logger.info('Note - no model code detected (neither file nor model_code given)')\n ## parse model_prefix from fit_cachefile if given\n if fit_cachefile:\n # if necessary, impute cache_dir from filepath\n if fit_cachefile != os.path.basename(fit_cachefile):\n cache_dir, fit_cachefile = os.path.split(os.path.abspath(fit_cachefile))\n # if fit_cachefile given, parse to get fit_model_prefix\n fit_model_prefix = re.sub(string=os.path.basename(fit_cachefile), pattern='(.*).stanfit.*', repl='\\\\1')\n if model_code:\n if fit_model_prefix != model_prefix:\n logger.warning('Computed model prefix does not match that used to estimate model. 
Using prefix matching fit_cachefile')\n model_prefix = fit_model_prefix\n # compute path to model cachefile\n model_cachefile = '.'.join([model_prefix, 'stanmodel', 'pkl'])\n if include_prefix:\n return model_prefix, model_cachefile\n return model_cachefile"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ngiving inputs to cached_stan_fit compute pickle file containing cached fit", "response": "def cached_stan_file(model_name='anon_model', file=None, model_code=None,\n cache_dir=None, fit_cachefile=None, cache_only=None, force=False,\n include_modelfile=False, prefix_only=False,\n **kwargs\n ):\n ''' Given inputs to cached_stan_fit, compute pickle file containing cached fit\n '''\n model_prefix, model_cachefile = cached_model_file(model_name=model_name, file=file, model_code=model_code,\n cache_dir=cache_dir, fit_cachefile=fit_cachefile, include_prefix=True)\n if not fit_cachefile:\n fit_cachefile = '.'.join([model_prefix, 'stanfit', _make_digest(dict(**kwargs)), 'pkl'])\n if include_modelfile:\n return model_cachefile, fit_cachefile\n if prefix_only:\n fit_cachefile = re.sub(string=fit_cachefile, pattern='.pkl$', repl='')\n return fit_cachefile"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _cached_stan_fit(model_name='anon_model', file=None, model_code=None,\n force=False, cache_dir=None, cache_only=None, \n fit_cachefile=None, **kwargs):\n ''' Cache fit stan model, by storing pickled objects in filesystem\n \n per following warning:\n 07: UserWarning: Pickling fit objects is an experimental feature!\n The relevant StanModel instance must be pickled along with this fit object.\n When unpickling the StanModel must be unpickled first.\n pickle.dump(obj, f, pickle.HIGHEST_PROTOCOL)\n '''\n if fit_cachefile and cache_only is None:\n cache_only = True\n model_cachefile, fit_cachefile = cached_stan_file(model_name=model_name, file=file, model_code=model_code,\n cache_dir=cache_dir, 
fit_cachefile=fit_cachefile,\n include_modelfile=True, **kwargs)\n cache_dir = _get_cache_dir(cache_dir)\n model_name = _sanitize_model_name(model_name)\n model_code = _get_model_code(model_code=model_code, file=file)\n logger.info('Step 1: Get compiled model code, possibly from cache')\n stan_model = cached(func=pystan.StanModel,\n cache_filename=model_cachefile,\n model_code=model_code,\n cache_dir=cache_dir,\n model_name=model_name,\n cache_only=cache_only,\n force=force)\n \n ## either pull fitted model from cache, or fit model\n logger.info('Step 2: Get posterior draws from model, possibly from cache')\n fit = cached(func=stan_model.sampling,\n cache_filename=fit_cachefile,\n cache_dir=cache_dir,\n force=force,\n cache_only=cache_only,\n **kwargs)\n return fit", "response": "Cache fit stan model with the given model code and parameters."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nfitting a lorentzian curve using p0 as an initial guess. Returns a list of N_points.", "response": "def fit_lorentizan(curve,p0=None,N_points=1000):\r\n\t'''Fits a lorentzian curve using p0=[x0,A,gamma] as an initial guess.\r\n\tIt returns a curve with N_points.'''\r\n\tdef lorentzian(x,x0,A,gamma): return A*gamma**2/((x-x0)**2+gamma**2)\r\n\tN=len(curve)\r\n\tx=[curve[i][0] for i in range(N)]\r\n\ty=[curve[i][1] for i in range(N)]\r\n\r\n\tfrom scipy.optimize import curve_fit\r\n\tpopt,pcov = curve_fit(lorentzian,x,y,p0=p0)\r\n\tx0=popt[0]; A=popt[1]; gamma=popt[2]\r\n\r\n\ta=x[0]; b=x[-1]\r\n\tx_step=(b-a)/(N_points-1)\r\n\tx_fit=[a+i*x_step for i in range(N_points)]\r\n\tfited_curve=[(xi,lorentzian(xi,x0,A,gamma)) for xi in x_fit]\r\n\treturn fited_curve,A,x0,gamma"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns the amplitude of the electric field for a Gaussian beam.", "response": "def electric_field_amplitude_gaussian(P, sigmax, sigmay=None, Omega=1.0e6,\r\n units=\"ad-hoc\"):\r\n 
\"\"\"Return the amplitude of the electric field for a Gaussian beam.\r\n\r\n This the amplitude at the center of a laser beam of power P (in Watts) and\\\r\n a Gaussian intensity distribution of standard deviations sigmax, sigmay\\\r\n (in meters). The value of E0 is given in rescaled units according to the\\\r\n frequency scale Omega (in Hertz) understood as absolute frequency\\\r\n (as opposed to angular frequency).\r\n\r\n >>> print(electric_field_amplitude_gaussian(0.001, 0.001))\r\n 19.6861467587\r\n\r\n \"\"\"\r\n e0 = hbar*Omega/(e*a0) # This is the electric field scale.\r\n\r\n if sigmay is None: sigmay = sigmax\r\n E0 = sqrt((c*mu0*P)/(2*Pi))/sqrt(sigmax*sigmay)\r\n if units == \"ad-hoc\":\r\n E0 = E0/e0\r\n return E0"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef electric_field_amplitude_top(P, a, Omega=1e6, units=\"ad-hoc\"):\r\n e0 = hbar*Omega/(e*a0) # This is the electric field scale.\r\n E0 = sqrt((c*mu0*P)/(Pi*a**2))\r\n if units == \"ad-hoc\":\r\n E0 = E0/e0\r\n return E0", "response": "Return the amplitude of the electric field for a top hat beam of power P and a top - hat intensity distribution of radius a."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn the amplitude of the electric field for saturation parameter.", "response": "def electric_field_amplitude_intensity(s0, Isat=16.6889462814,\r\n Omega=1e6, units=\"ad-hoc\"):\r\n \"\"\"Return the amplitude of the electric field for saturation parameter.\r\n\r\n This is at a given saturation parameter s0=I/Isat, where I0 is by default \\\r\nIsat=16.6889462814 m/m^2 is the saturation intensity of the D2 line of \\\r\nrubidium for circularly polarized light. 
Optionally, a frequency scale \\\r\n`Omega` can be provided.\r\n\r\n >>> print(electric_field_amplitude_intensity(1.0, units=\"ad-hoc\"))\r\n 9.0152984553\r\n >>> print(electric_field_amplitude_intensity(1.0, Omega=1.0, units=\"SI\"))\r\n 112.135917207\r\n >>> print(electric_field_amplitude_intensity(1.0, units=\"SI\"))\r\n 0.000112135917207\r\n\r\n \"\"\"\r\n E0_sat = sqrt(2*mu0*c*Isat)/Omega\r\n\r\n if units == \"ad-hoc\":\r\n e0 = hbar/(e*a0) # This is the electric field scale.\r\n E0_sat = E0_sat/e0\r\n\r\n return E0_sat*sqrt(s0)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef fprint(expr, print_ascii=False):\n if print_ascii:\n pprint(expr, use_unicode=False, num_columns=120)\n else:\n return expr", "response": "r Prints the expression in the notebook."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef Mu(i, j, s, N, excluded_mu=[]):\n if i == j:\n if s == -1:\n if i == 1:\n return 0\n else:\n mes = 'There is no population rhoii with i='+str(i)+'.'\n raise ValueError(mes)\n mu = i-1\n elif i > j and s == 1:\n mu = i - j + sum([N-k for k in range(1, j)])+N-1\n elif i > j and s == -1:\n mu = i - j + sum([N-k for k in range(1, j)])+N-1 + N*(N-1)/2\n else:\n mes = 'i='+str(i)+', j='+str(j)\n mes += ' Equations for i N*(N+1)/2 - 1:\n return ii, jj, -1\n else:\n return ii, jj, 1"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef read_result(path, name, i=None, j=None, s=0, N=None,\n excluded_mu=[], use_netcdf=True, clone=None):\n r\"\"\"This function reads the results stored in path under file name.dat\n returning them as a list of N^2 lists of the form\n [frequency, rho22, rho33, ... rho_N,N-1]. 
Alternatively it can return only\n two lists [frequency, rho_i,j,s] where s must be 1 for the real part and\n -1 for the imaginary part.\n \"\"\"\n if clone is not None:\n clone = '_'+str(clone)\n else:\n clone = ''\n\n if use_netcdf:\n ncfile = Dataset(path+name+clone+'.nc', 'r')\n matrix = ncfile.variables['matrix'][:]\n vector = ncfile.variables['vector'][:]\n Nrho, n_points = len(matrix), len(matrix[0])\n ncfile.close()\n\n if i is not None:\n mu = Mu(i, j, s, N, excluded_mu)\n if sage_included:\n return [(vector[k], matrix[mu-1][k]) for k in range(n_points)]\n else:\n return vector, matrix[mu-1]\n else:\n return [vector] + [matrix[k] for k in range(Nrho)]\n # ln=[for i in range(m)]\n else:\n r = file(path+name+clone+'.dat', 'r')\n l = r.readlines()\n r.close()\n\n try:\n ln = [[float(num) for num in li.split()] for li in l]\n except:\n print(num)\n print([l[0]])\n print([l[-2]])\n print([l[-1]])\n raise ValueError\n\n if i is not None:\n mu = Mu(i, j, s, N, excluded_mu)\n if sage_included:\n return [(li[0], li[mu]) for li in ln]\n else:\n x = [li[0] for li in ln]\n y = [li[mu] for li in ln]\n return x, y\n else:\n dat = [[ln[ii][0] for ii in range(len(ln))]]\n dat += [[ln[ii][mu+1] for ii in range(len(ln))]\n for mu in range(N**2-1)]\n return dat", "response": "r This function reads the results stored in path under file name. 
dat, returning them as a list of N^2 lists of the form [frequency, rho22, rho33, ... rho_N,N-1]. Alternatively it can return only two lists [frequency, rho_i,j,s], where s must be 1 for the real part and -1 for the imaginary part."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef calculate_iI_correspondence(omega):\n Ne = len(omega[0])\n om = omega[0][0]\n correspondence = []\n I = 0\n for i in range(Ne):\n if omega[i][0] != om:\n om = omega[i][0]\n I += 1\n correspondence += [(i+1, I+1)]\n Nnd = I+1\n\n def I_nd(i):\n return correspondence[i-1][1]\n\n def i_d(I):\n for i in range(Ne):\n if correspondence[i][1] == I:\n return correspondence[i][0]\n\n return i_d, I_nd, Nnd", "response": "Return the correspondence between degenerate and nondegenerate schemes."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef find_omega_min(omega, Nl, detuningsij, i_d, I_nd):\n omega_min = []\n omega_min_indices = []\n for l in range(Nl):\n omegas = sorted([(omega[i_d(p[0]+1)-1][i_d(p[1]+1)-1], p)\n for p in detuningsij[l]])\n omega_min += [omegas[0][0]]\n omega_min_indices += [omegas[0][1]]\n return omega_min, omega_min_indices", "response": "This function returns a list of length Nl containing the minimal frequency\n that each laser excites."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nidentify a merge compatible method available in self.", "response": "def _merge_method(self, key):\n \"\"\"\n Identify a merge compatible method available in self[key]. 
Currently we\n support 'update' and 'append'.\n\n :param key: Attribute name or item key\n :return: Method name usable to merge a value into the instance mapped\n by key\n :rtype: str\n \"\"\"\n if key in self:\n for method in [\"update\", \"append\"]:\n if hasattr(self[key], method):\n return method\n return None"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nconverting a TweetStatus object to a Tweet object.", "response": "def _tweepy_status_to_tweet(self, status):\n \"\"\"\n Fields documentation: https://dev.twitter.com/docs/api/1.1/get/statuses/home_timeline\n \"\"\"\n tweet = Tweet()\n tweet.published_at = status.created_at\n tweet.content = status.text\n\n return tweet"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef wrap_cell(entity, json_obj, mapping, table_view=False):\n '''\n Cell wrappers\n for customizing the GUI data table\n\n TODO : must coincide with hierarchy!\n TODO : simplify this!\n '''\n html_class = '' # for GUI javascript\n out = ''\n\n #if 'cell_wrapper' in entity: # TODO : this bound type was defined by apps only\n # out = entity['cell_wrapper'](json_obj)\n #else:\n\n if entity['multiple']:\n out = \", \".join( map(lambda x: num2name(x, entity, mapping), json_obj.get(entity['source'], [])) )\n\n elif entity['is_chem_formula']:\n out = html_formula(json_obj[ entity['source'] ]) if entity['source'] in json_obj else '—'\n\n elif entity['source'] == 'bandgap':\n html_class = ' class=_g'\n out = json_obj.get('bandgap')\n if out is None: out = '—'\n\n # dynamic determination below:\n elif entity['source'] == 'energy':\n html_class = ' class=_e'\n out = \"%6.5f\" % json_obj['energy'] if json_obj['energy'] else '—'\n\n elif entity['source'] == 'dims':\n out = \"%4.2f\" % json_obj['dims'] if json_obj['periodicity'] in [2, 3] else '—'\n\n else:\n out = num2name(json_obj.get(entity['source']), entity, mapping) or '—'\n\n if table_view:\n return '' 
+ str(out) + ''\n elif html_class:\n return '' + str(out) + ''\n\n return str(out)", "response": "Wrap a cell value for display in the GUI data table."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef meminfo():\n f = open(\"/proc/meminfo\")\n hwinfo = {}\n for line in f.readlines():\n meml = line.split()\n if (meml[0] == \"MemTotal:\"):\n mem = int(meml[1])\n hwinfo[\"Mem_MiB\"] = mem/1024\n elif (meml[0] == \"SwapTotal:\"):\n swap = int(meml[1])\n hwinfo[\"Swap_MiB\"] = swap/1024\n f.close()\n return hwinfo", "response": "Get the amount of memory and swap in Mebibytes"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef cpuinfo():\n f = open(\"/proc/cpuinfo\")\n hwinfo = {}\n for line in f.readlines():\n cpul = line.split(\":\")\n name = cpul[0].strip()\n if (len(cpul) > 1):\n val = cpul[1].strip()\n if (name == \"model name\"):\n hwinfo[\"CPU\"] = val\n elif (name == \"cpu MHz\"):\n hwinfo[\"MHz\"] = int(round(float(val)))\n f.close()\n return hwinfo", "response": "Get the cpu info from the node"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef vgadata():\n if os.path.isfile('/sbin/lspci'):\n lspci = '/sbin/lspci'\n else:\n lspci = '/usr/bin/lspci'\n f = os.popen (lspci + ' -m')\n pdata = {}\n for line in f.readlines():\n p = line.split(\"\\\"\")\n name = p[1].strip()\n if (name == \"VGA compatible controller\"):\n pdata[\"Graphics\"] = p[3] + \" \" + p[5]\n f.close()\n return pdata", "response": "Get data about the graphics card."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ngets the serial number. Requires root access", "response": "def serial_number():\n \"\"\"Get the serial number. 
Requires root access\"\"\"\n sdata = {}\n if os.getuid() == 0:\n try:\n sdata['Serial'] = open('/sys/class/dmi/id/product_serial') \\\n .read().strip()\n except:\n for line in os.popen('/usr/sbin/dmidecode -s system-serial-number'):\n sdata['Serial'] = line.strip()\n return sdata"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nget manufacturer and model number. On older Linux kernel versions without /sys/class/dmi/id this requires root access.", "response": "def system_model():\n \"\"\"Get manufacturer and model number.\n\n On older Linux kernel versions without /sys/class/dmi/id this\n requires root access.\n \"\"\"\n mdata = {}\n man = None\n pn = None\n try:\n # This might be\n # sys_vendor, bios_vendor, board_vendor, or chassis_vendor\n man = open('/sys/class/dmi/id/sys_vendor').read().strip()\n except:\n if os.getuid() == 0:\n for line in os.popen('/usr/sbin/dmidecode -s system-manufacturer'):\n man = line.strip()\n try:\n pn = open('/sys/class/dmi/id/product_name').read().strip()\n except:\n if os.getuid() == 0:\n for line in os.popen('/usr/sbin/dmidecode -s system-product-name'):\n pn = line.strip()\n if man is not None:\n mdata['System_manufacturer'] = man\n if pn is not None:\n mdata['System_product_name'] = pn\n return mdata"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngets total disk size in GB.", "response": "def diskdata():\n \"\"\"Get total disk size in GB.\"\"\"\n p = os.popen(\"/bin/df -l -P\")\n ddata = {}\n tsize = 0\n for line in p.readlines():\n d = line.split()\n if (\"/dev/sd\" in d[0] or \"/dev/hd\" in d[0] or \"/dev/mapper\" in d[0]):\n tsize = tsize + int(d[1])\n ddata[\"Disk_GB\"] = int(tsize)/1000000\n p.close()\n return ddata"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef ip_address():\n s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)\n # 8.8.8.8 is the google public DNS\n 
s.connect((\"8.8.8.8\", 53))\n ip = s.getsockname()[0]\n s.close()\n return ip", "response": "Get the IP address used for public connections."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nget the MAC address of a specific IP", "response": "def mac_address(ip):\n \"\"\"Get the MAC address\"\"\"\n mac = ''\n for line in os.popen('/sbin/ifconfig'):\n s = line.split()\n if len(s) > 3:\n if s[3] == 'HWaddr':\n mac = s[4]\n elif s[2] == ip:\n break\n return {'MAC': mac}"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef getallhwinfo():\n hwinfo = meminfo()\n hwinfo.update(cpuinfo())\n hwinfo.update(uname())\n hwinfo.update(vgadata())\n hwinfo.update(distro())\n hwinfo.update(diskdata())\n hwinfo.update(hostname())\n hwinfo.update(serial_number())\n ip = ip_address()\n hwinfo.update(mac_address(ip))\n hwinfo.update({'IP': ip})\n hwinfo.update(system_model())\n return hwinfo", "response": "Get all the hw info."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef printheader(h=None):\n writer = csv.writer(sys.stdout)\n writer.writerow(header_fields(h))", "response": "Print the header for the CSV table."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nprints as a table.", "response": "def printtable(h, header):\n \"\"\"Print as a table.\"\"\"\n hk = header_fields(h)\n if (header):\n printheader()\n writer = csv.DictWriter(sys.stdout, hk, extrasaction='ignore')\n writer.writerow(h)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef agent(server=\"http://localhost:8000\"):\n import xmlrpc.client\n sp = xmlrpc.client.ServerProxy(server)\n hw = getallhwinfo()\n fields = header_fields()\n for f in fields:\n if not f in hw:\n hw[f] = ''\n try:\n sp.puthwinfo(xmlrpc.client.dumps((hw,)))\n except xmlrpc.client.Error as v:\n print(\"ERROR occured: 
\", v)", "response": "Run in agent mode."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef calculate_omega_matrix(states, Omega=1):\n N = len(states)\n omega = [[2*Pi*(states[i].nu-states[j].nu)/Omega\n for j in range(N)] for i in range(N)]\n\n return omega", "response": "Calculate the matrix of transition frequencies."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef calculate_boundaries(fine_states, full_magnetic_states):\n N_magnetic = len(full_magnetic_states)\n\n # We calculate the boundaries of the various detail levels\n # First we will make a list of indices of that will tell where each fine\n # level begins and ends.\n fq = full_magnetic_states[0].quantum_numbers[:4]\n index_list_fine = []; start_fine = 0\n # And another list of indices that will tell where each hyperfine level\n # begins and ends.\n hq = full_magnetic_states[0].quantum_numbers[:5]\n index_list_hyperfine = []; start_hyperfine = 0\n\n for i in range(N_magnetic):\n magnetic = full_magnetic_states[i]\n if magnetic.quantum_numbers[:4] != fq:\n index_list_fine += [(start_fine, i)]\n start_fine = i\n fq = magnetic.quantum_numbers[:4]\n\n if magnetic.quantum_numbers[:5] != hq:\n index_list_hyperfine += [(start_hyperfine, i)]\n start_hyperfine = i\n hq = magnetic.quantum_numbers[:5]\n\n if i == N_magnetic-1:\n index_list_fine += [(start_fine, i+1)]\n index_list_hyperfine += [(start_hyperfine, i+1)]\n return index_list_fine, index_list_hyperfine", "response": "r This function calculates the boundary indices within a list of magnetic states."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef exclude_states(omega, gamma, r, Lij, states, excluded_states):\n Ne = len(omega)\n excluded_indices = [i for i in range(Ne) if states[i] in excluded_states]\n\n omega_new = []; gamma_new = []; r_new = [[], [], []]; Lij_new = []\n for i in range(Ne):\n row_om = 
[]; row_ga = []; row_L = []\n for j in range(Ne):\n if j not in excluded_indices:\n row_om += [omega[i][j]]\n row_ga += [gamma[i][j]]\n row_L += [Lij[i][j]]\n if i not in excluded_indices:\n omega_new += [row_om]\n gamma_new += [row_ga]\n Lij_new += [row_L]\n\n for p in range(3):\n for i in range(Ne):\n row_r = []\n for j in range(Ne):\n if j not in excluded_indices:\n row_r += [r[p][i][j]]\n if i not in excluded_indices:\n r_new[p] += [row_r]\n\n states_new = [states[i] for i in range(Ne) if i not in excluded_indices]\n\n return omega_new, gamma_new, r_new, Lij_new, states_new", "response": "Exclude states from matrices."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef calculate_reduced_matrix_elements_0(fine_states):\n\t'''This function calculates the reduced matrix elments given a list of fine states.'''\n\t#We calculate the reduced matrix elements starting from the list of fine_states\n\n\t#The factor composed of physical quantities.\n\tfactor=sqrt(3*c**3*me**2*e**2/(16*Pi*epsilon0*hbar**3))\n\t#We read the database to obtain the Einstein A coefficients in Hz.\n\teinsteinA=get_einstein_A_matrix(fine_states)\n\t#We read the database to obtain the transition frequencies in Hz.\n\tomega_fine=calculate_omega_matrix(fine_states)\n\n\treduced_matrix_elements=[[0.0 for jj in range(len(fine_states))] for ii in range(len(fine_states))]\n\n\tfor ii in range(len(fine_states)):\n\t\ti=fine_states[ii]\n\t\tfor jj in range(ii):\n\t\t\tj=fine_states[jj]\n\t\t\tt=Transition(i,j)\n\t\t\teinsteinAij=einsteinA[ii][jj]\n\t\t\tomega0=omega_fine[ii][jj]\n\n\t\t\t#The formula is valid only for i =/= j so that omega0 =/= 0.\n\t\t\t#Because fine states are asumed to be ordered by their energies we can asume that\n\t\t\t# i decays in j.\n\t\t\tJi=i.j; Jj=j.j\n\n\t\t\trij=sqrt((2.0*Ji+1)/(2*Jj+1))*sqrt(einsteinAij/omega0**3)\n\t\t\trij=factor*rij\n\n\t\t\treduced_matrix_elements[ii][jj]=rij\n\t\t\t#We add the matrix 
elements on the other side of the diagonal.\n\t\t\treduced_matrix_elements[jj][ii]=rij*(-1)**(Jj-Ji)\n\n\treturn reduced_matrix_elements", "response": "This function calculates the reduced matrix elements given a list of fine states."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef speed_rms(Temperature,element,isotope):\n atom = Atom(element, isotope)\n return sqrt(3*Temperature*k_B/atom.mass)", "response": "r This function calculates the average speed of an atom in a vapour assuming a Maxwell - Boltzmann velocity distribution."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef find_decays(self, fine_state):\n def decays_from(fine_state, transitions):\n states_below = []\n for t in transitions:\n if t.e2 == fine_state:\n states_below += [t.e1]\n return states_below\n\n def iterate(states, transitions):\n new_and_old_states = states[:]\n for state in states:\n new_states = decays_from(state, transitions)\n for new_state in new_states:\n if new_state not in new_and_old_states:\n new_and_old_states += [new_state]\n\n if states == new_and_old_states:\n return states\n else:\n return iterate(new_and_old_states, transitions)\n\n transitions = self.transitions()\n states = iterate([fine_state], transitions)\n\n return states", "response": "r Find all possible decays from a given fine state."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _latex_(self):\n if self.l == 0: l = 'S'\n elif self.l == 1: l = 'P'\n elif self.l == 2: l = 'D'\n elif self.l == 3: l = 'F'\n else: l = str(self.l)\n\n if self.f is None:\n s = '^{'+str(self.isotope)+'}\\\\mathrm{'+self.element+'}\\\\ '\n s += str(self.n)+l+'_{'+str(self.j)+'}'\n else:\n s = '^{'+str(self.isotope)+'}\\\\mathrm{'+self.element+'}\\\\ '\n s += str(self.n)+l+'_{'+str(self.j)+'}^{'+str(self.f)+'}'\n\n if self.m is not None:\n 
s = s[:-1] + ','+str(self.m)+'}'\n\n return s", "response": "The LaTeX routine for states."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _latex_(self):\n if self.allowed:\n return self.e1._latex_()+'\\\\ \\\\rightarrow \\\\ '+self.e2._latex_()\n elif not self.allowed:\n return self.e1._latex_()+'\\\\ \\\\nrightarrow \\\\ '+self.e2._latex_()\n else:\n return self.e1._latex_()+'\\\\ \\\\rightarrow^? \\\\ '+self.e2._latex_()\n\n return self.e1._latex_()+'\\\\ \\\\nleftrightarrow \\\\ '+self.e2._latex_()", "response": "The LaTeX representation routine for transitions."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nfinding the lower index of the segment that contains the value x", "response": "def _findSegment(self, x):\n '''\n :param x: x value to place in segment defined by the xData (instantiation)\n :return: The lower index in the segment\n '''\n iLeft = 0\n iRight = len(self.xData) - 1\n while True:\n if iRight - iLeft <= 1:\n return iLeft\n i = (iRight + iLeft) / 2\n if x < self.xData[i]:\n iRight = i\n else:\n iLeft = i"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef calculate_r_matrices(fine_states, reduced_matrix_elements, q=None,\n numeric=True, convention=1):\n ur\"\"\"Calculate the matrix elements of the electric dipole (in the helicity\n basis).\n\n We calculate all matrix elements for the D2 line in Rb 87.\n\n >>> from sympy import symbols, pprint\n >>> red = symbols(\"r\", positive=True)\n >>> reduced_matrix_elements = [[0, -red], [red, 0]]\n >>> g = State(\"Rb\", 87, 5, 0, 1/Integer(2))\n >>> e = State(\"Rb\", 87, 5, 1, 3/Integer(2))\n >>> fine_levels = [g, e]\n >>> r = calculate_r_matrices(fine_levels, reduced_matrix_elements,\n ... 
numeric=False)\n >>> pprint(r[0][8:,:8])\n \u23a1 \u221a3\u22c5r \u23a4\n \u23a2 0 0 \u2500\u2500\u2500\u2500 0 0 0 0 0 \u23a5\n \u23a2 6 \u23a5\n \u23a2 \u23a5\n \u23a2 -\u221a15\u22c5r \u221a15\u22c5r \u23a5\n \u23a2 0 \u2500\u2500\u2500\u2500\u2500\u2500\u2500 0 0 0 \u2500\u2500\u2500\u2500\u2500 0 0 \u23a5\n \u23a2 12 60 \u23a5\n \u23a2 \u23a5\n \u23a2 -\u221a15\u22c5r \u221a5\u22c5r \u23a5\n \u23a2 0 0 \u2500\u2500\u2500\u2500\u2500\u2500\u2500 0 0 0 \u2500\u2500\u2500\u2500 0 \u23a5\n \u23a2 12 20 \u23a5\n \u23a2 \u23a5\n \u23a2 \u221a10\u22c5r \u23a5\n \u23a2 0 0 0 0 0 0 0 \u2500\u2500\u2500\u2500\u2500 \u23a5\n \u23a2 20 \u23a5\n \u23a2 \u23a5\n \u23a2\u221a2\u22c5r -\u221a6\u22c5r \u23a5\n \u23a2\u2500\u2500\u2500\u2500 0 0 0 \u2500\u2500\u2500\u2500\u2500\u2500 0 0 0 \u23a5\n \u23a2 4 12 \u23a5\n \u23a2 \u23a5\n \u23a2 r -r \u23a5\n \u23a2 0 \u2500 0 0 0 \u2500\u2500\u2500 0 0 \u23a5\n \u23a2 4 4 \u23a5\n \u23a2 \u23a5\n \u23a2 \u221a3\u22c5r -r \u23a5\n \u23a2 0 0 \u2500\u2500\u2500\u2500 0 0 0 \u2500\u2500\u2500 0 \u23a5\n \u23a2 12 4 \u23a5\n \u23a2 \u23a5\n \u23a2 -\u221a6\u22c5r \u23a5\n \u23a2 0 0 0 0 0 0 0 \u2500\u2500\u2500\u2500\u2500\u2500\u23a5\n \u23a2 12 \u23a5\n \u23a2 \u23a5\n \u23a2 0 0 0 0 0 0 0 0 \u23a5\n \u23a2 \u23a5\n \u23a2 r \u23a5\n \u23a2 0 0 0 \u2500 0 0 0 0 \u23a5\n \u23a2 2 \u23a5\n \u23a2 \u23a5\n \u23a2 \u221a6\u22c5r \u23a5\n \u23a2 0 0 0 0 \u2500\u2500\u2500\u2500 0 0 0 \u23a5\n \u23a2 6 \u23a5\n \u23a2 \u23a5\n \u23a2 \u221a10\u22c5r \u23a5\n \u23a2 0 0 0 0 0 \u2500\u2500\u2500\u2500\u2500 0 0 \u23a5\n \u23a2 10 \u23a5\n \u23a2 \u23a5\n \u23a2 \u221a5\u22c5r \u23a5\n \u23a2 0 0 0 0 0 0 \u2500\u2500\u2500\u2500 0 \u23a5\n \u23a2 10 \u23a5\n \u23a2 \u23a5\n \u23a2 \u221a15\u22c5r \u23a5\n \u23a2 0 0 0 0 0 0 0 \u2500\u2500\u2500\u2500\u2500 \u23a5\n \u23a2 30 \u23a5\n \u23a2 \u23a5\n \u23a2 0 0 0 0 0 0 0 0 \u23a5\n \u23a2 \u23a5\n \u23a3 0 0 0 0 0 0 0 0 \u23a6\n >>> pprint(r[1][8:,:8])\n \u23a1 -\u221a3\u22c5r \u23a4\n \u23a2 
0 \u2500\u2500\u2500\u2500\u2500\u2500 0 0 0 0 0 0 \u23a5\n \u23a2 6 \u23a5\n \u23a2 \u23a5\n \u23a2\u221a15\u22c5r -\u221a5\u22c5r \u23a5\n \u23a2\u2500\u2500\u2500\u2500\u2500 0 0 0 \u2500\u2500\u2500\u2500\u2500\u2500 0 0 0 \u23a5\n \u23a2 12 20 \u23a5\n \u23a2 \u23a5\n \u23a2 -\u221a15\u22c5r \u23a5\n \u23a2 0 0 0 0 0 \u2500\u2500\u2500\u2500\u2500\u2500\u2500 0 0 \u23a5\n \u23a2 30 \u23a5\n \u23a2 \u23a5\n \u23a2 -\u221a15\u22c5r -\u221a5\u22c5r \u23a5\n \u23a2 0 0 \u2500\u2500\u2500\u2500\u2500\u2500\u2500 0 0 0 \u2500\u2500\u2500\u2500\u2500\u2500 0 \u23a5\n \u23a2 12 20 \u23a5\n \u23a2 \u23a5\n \u23a2 \u221a3\u22c5r \u23a5\n \u23a2 0 0 0 \u2500\u2500\u2500\u2500 0 0 0 0 \u23a5\n \u23a2 6 \u23a5\n \u23a2 \u23a5\n \u23a2 r \u221a3\u22c5r \u23a5\n \u23a2 \u2500 0 0 0 \u2500\u2500\u2500\u2500 0 0 0 \u23a5\n \u23a2 4 12 \u23a5\n \u23a2 \u23a5\n \u23a2 \u221a3\u22c5r \u23a5\n \u23a2 0 \u2500\u2500\u2500\u2500 0 0 0 0 0 0 \u23a5\n \u23a2 6 \u23a5\n \u23a2 \u23a5\n \u23a2 r -\u221a3\u22c5r \u23a5\n \u23a2 0 0 \u2500 0 0 0 \u2500\u2500\u2500\u2500\u2500\u2500 0 \u23a5\n \u23a2 4 12 \u23a5\n \u23a2 \u23a5\n \u23a2 -\u221a3\u22c5r \u23a5\n \u23a2 0 0 0 0 0 0 0 \u2500\u2500\u2500\u2500\u2500\u2500\u23a5\n \u23a2 6 \u23a5\n \u23a2 \u23a5\n \u23a2 0 0 0 0 0 0 0 0 \u23a5\n \u23a2 \u23a5\n \u23a2 \u221a3\u22c5r \u23a5\n \u23a2 0 0 0 \u2500\u2500\u2500\u2500 0 0 0 0 \u23a5\n \u23a2 6 \u23a5\n \u23a2 \u23a5\n \u23a2 \u221a30\u22c5r \u23a5\n \u23a2 0 0 0 0 \u2500\u2500\u2500\u2500\u2500 0 0 0 \u23a5\n \u23a2 15 \u23a5\n \u23a2 \u23a5\n \u23a2 \u221a15\u22c5r \u23a5\n \u23a2 0 0 0 0 0 \u2500\u2500\u2500\u2500\u2500 0 0 \u23a5\n \u23a2 10 \u23a5\n \u23a2 \u23a5\n \u23a2 \u221a30\u22c5r \u23a5\n \u23a2 0 0 0 0 0 0 \u2500\u2500\u2500\u2500\u2500 0 \u23a5\n \u23a2 15 \u23a5\n \u23a2 \u23a5\n \u23a2 \u221a3\u22c5r \u23a5\n \u23a2 0 0 0 0 0 0 0 \u2500\u2500\u2500\u2500 \u23a5\n \u23a2 6 \u23a5\n \u23a2 \u23a5\n \u23a3 0 0 0 0 0 0 0 0 \u23a6\n >>> pprint(r[2][8:,:8])\n 
\u23a1\u221a3\u22c5r \u23a4\n \u23a2\u2500\u2500\u2500\u2500 0 0 0 0 0 0 0\u23a5\n \u23a2 6 \u23a5\n \u23a2 \u23a5\n \u23a2 \u221a10\u22c5r \u23a5\n \u23a2 0 0 0 \u2500\u2500\u2500\u2500\u2500 0 0 0 0\u23a5\n \u23a2 20 \u23a5\n \u23a2 \u23a5\n \u23a2\u221a15\u22c5r \u221a5\u22c5r \u23a5\n \u23a2\u2500\u2500\u2500\u2500\u2500 0 0 0 \u2500\u2500\u2500\u2500 0 0 0\u23a5\n \u23a2 12 20 \u23a5\n \u23a2 \u23a5\n \u23a2 \u221a15\u22c5r \u221a15\u22c5r \u23a5\n \u23a2 0 \u2500\u2500\u2500\u2500\u2500 0 0 0 \u2500\u2500\u2500\u2500\u2500 0 0\u23a5\n \u23a2 12 60 \u23a5\n \u23a2 \u23a5\n \u23a2 0 0 0 0 0 0 0 0\u23a5\n \u23a2 \u23a5\n \u23a2 \u221a6\u22c5r \u23a5\n \u23a2 0 0 0 \u2500\u2500\u2500\u2500 0 0 0 0\u23a5\n \u23a2 12 \u23a5\n \u23a2 \u23a5\n \u23a2\u221a3\u22c5r r \u23a5\n \u23a2\u2500\u2500\u2500\u2500 0 0 0 \u2500 0 0 0\u23a5\n \u23a2 12 4 \u23a5\n \u23a2 \u23a5\n \u23a2 r r \u23a5\n \u23a2 0 \u2500 0 0 0 \u2500 0 0\u23a5\n \u23a2 4 4 \u23a5\n \u23a2 \u23a5\n \u23a2 \u221a2\u22c5r \u221a6\u22c5r \u23a5\n \u23a2 0 0 \u2500\u2500\u2500\u2500 0 0 0 \u2500\u2500\u2500\u2500 0\u23a5\n \u23a2 4 12 \u23a5\n \u23a2 \u23a5\n \u23a2 0 0 0 0 0 0 0 0\u23a5\n \u23a2 \u23a5\n \u23a2 0 0 0 0 0 0 0 0\u23a5\n \u23a2 \u23a5\n \u23a2 \u221a15\u22c5r \u23a5\n \u23a2 0 0 0 \u2500\u2500\u2500\u2500\u2500 0 0 0 0\u23a5\n \u23a2 30 \u23a5\n \u23a2 \u23a5\n \u23a2 \u221a5\u22c5r \u23a5\n \u23a2 0 0 0 0 \u2500\u2500\u2500\u2500 0 0 0\u23a5\n \u23a2 10 \u23a5\n \u23a2 \u23a5\n \u23a2 \u221a10\u22c5r \u23a5\n \u23a2 0 0 0 0 0 \u2500\u2500\u2500\u2500\u2500 0 0\u23a5\n \u23a2 10 \u23a5\n \u23a2 \u23a5\n \u23a2 \u221a6\u22c5r \u23a5\n \u23a2 0 0 0 0 0 0 \u2500\u2500\u2500\u2500 0\u23a5\n \u23a2 6 \u23a5\n \u23a2 \u23a5\n \u23a2 r\u23a5\n \u23a2 0 0 0 0 0 0 0 \u2500\u23a5\n \u23a3 2\u23a6\n\n \"\"\"\n magnetic_states = make_list_of_states(fine_states, 'magnetic', verbose=0)\n aux = calculate_boundaries(fine_states, magnetic_states)\n index_list_fine, index_list_hyperfine = aux\n\n Ne = 
len(magnetic_states)\n\n r = [[[0 for j in range(Ne)] for i in range(Ne)] for p in range(3)]\n\n II = fine_states[0].i\n\n for p in [-1, 0, 1]:\n for i in range(Ne):\n ei = magnetic_states[i]\n ii = fine_index(i, index_list_fine)\n\n for j in range(Ne):\n ej = magnetic_states[j]\n jj = fine_index(j, index_list_fine)\n\n reduced_matrix_elementij = reduced_matrix_elements[ii][jj]\n if reduced_matrix_elementij != 0:\n\n ji = ei.j; jj = ej.j\n fi = ei.f; fj = ej.f\n mi = ei.m; mj = ej.m\n\n rpij = matrix_element(ji, fi, mi, jj, fj, mj,\n II, reduced_matrix_elementij, p,\n numeric=numeric,\n convention=convention)\n\n if q == 1:\n r[p+1][i][j] = rpij*delta_lesser(i, j)\n elif q == -1:\n r[p+1][i][j] = rpij*delta_greater(i, j)\n else:\n r[p+1][i][j] = rpij\n\n if not numeric:\n r = [Matrix(ri) for ri in r]\n return r", "response": "ur This function calculates the matrix elements of the electric dipole in the helicity tree."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef doppler_width(transition, Temperature):\n atom = Atom(transition.e1.element, transition.e1.isotope)\n m = atom.mass\n omega = transition.omega\n return omega*np.log(8*np.sqrt(2))*np.sqrt(k_B*Temperature/m/c**2)", "response": "r Return the Doppler width of a transition at a given temperature."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef transitions(self, omega_min=None, omega_max=None):\n states = self.states()\n transitions = states\n transitions = []\n for i in range(len(states)):\n si = states[i]\n for j in range(i):\n sj = states[j]\n t = Transition(sj, si)\n if t.allowed:\n transitions += [t]\n\n if omega_min is not None:\n transitions = [ti for ti in transitions\n if abs(ti.omega) >= omega_min]\n if omega_max is not None:\n transitions = [ti for ti in transitions\n if abs(ti.omega) <= omega_max]\n\n return transitions", "response": "r Find all allowed transitions in the 
atom."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef read_roi(fileobj):\n '''\n points = read_roi(fileobj)\n\n Read ImageJ's ROI format. Points are returned in a nx2 array. Each row\n is in [row, column] -- that is, (y,x) -- order.\n '''\n # This is based on:\n # http://rsbweb.nih.gov/ij/developer/source/ij/io/RoiDecoder.java.html\n # http://rsbweb.nih.gov/ij/developer/source/ij/io/RoiEncoder.java.html\n\n SPLINE_FIT = 1\n DOUBLE_HEADED = 2\n OUTLINE = 4\n OVERLAY_LABELS = 8\n OVERLAY_NAMES = 16\n OVERLAY_BACKGROUNDS = 32\n OVERLAY_BOLD = 64\n SUB_PIXEL_RESOLUTION = 128\n DRAW_OFFSET = 256\n\n class RoiType:\n POLYGON = 0\n RECT = 1\n OVAL = 2\n LINE = 3\n FREELINE = 4\n POLYLINE = 5\n NOROI = 6\n FREEHAND = 7\n TRACED = 8\n ANGLE = 9\n POINT = 10\n\n def get8():\n s = fileobj.read(1)\n if not s:\n raise IOError('readroi: Unexpected EOF')\n return ord(s)\n\n def get16():\n b0 = get8()\n b1 = get8()\n return (b0 << 8) | b1\n\n def get32():\n s0 = get16()\n s1 = get16()\n return (s0 << 16) | s1\n\n def getfloat():\n v = np.int32(get32())\n return v.view(np.float32)\n\n #===========================================================================\n #Read Header data\n \n magic = fileobj.read(4)\n if magic != b'Iout':\n raise ValueError('Magic number not found')\n \n version = get16()\n\n # It seems that the roi type field occupies 2 Bytes, but only one is used\n roi_type = get8()\n # Discard second Byte:\n get8()\n \n top = get16()\n left = get16()\n bottom = get16()\n right = get16()\n n_coordinates = get16()\n x1 = getfloat()\n y1 = getfloat()\n x2 = getfloat()\n y2 = getfloat()\n stroke_width = get16()\n shape_roi_size = get32()\n stroke_color = get32()\n fill_color = get32()\n subtype = get16()\n \n options = get16()\n arrow_style = get8()\n arrow_head_size = get8()\n rect_arc_size = get16()\n position = get32()\n header2offset = get32()\n\n # End Header data\n 
#===========================================================================\n\n #RoiDecoder.java checks the version when setting sub-pixel resolution, therefore so do we\n subPixelResolution = ((options&SUB_PIXEL_RESOLUTION)!=0) and (version>=222)\n \n # Check exceptions\n if roi_type not in [RoiType.FREEHAND, RoiType.TRACED, RoiType.POLYGON, RoiType.RECT, RoiType.POINT]:\n raise NotImplementedError('roireader: ROI type %s not supported' % roi_type)\n \n if subtype != 0:\n raise NotImplementedError('roireader: ROI subtype %s not supported (!= 0)' % subtype)\n \n if roi_type == RoiType.RECT:\n if subPixelResolution:\n return np.array(\n [[y1, x1], [y1, x1+x2], [y1+y2, x1+x2], [y1+y2, x1]],\n dtype=np.float32)\n else:\n return np.array(\n [[top, left], [top, right], [bottom, right], [bottom, left]],\n dtype=np.int16)\n\n if subPixelResolution:\n getc = getfloat\n points = np.empty((n_coordinates, 2), dtype=np.float32)\n fileobj.seek(4*n_coordinates, 1)\n else:\n getc = get16\n points = np.empty((n_coordinates, 2), dtype=np.int16)\n\n points[:, 1] = [getc() for i in range(n_coordinates)]\n points[:, 0] = [getc() for i in range(n_coordinates)]\n\n if not subPixelResolution:\n points[:, 1] += left\n points[:, 0] += top\n\n return points", "response": "Read ImageJ s ROI format."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef perm_j(j1, j2):\n jmin = abs(j1-j2)\n jmax = j1+j2\n return [jmin + i for i in range(jmax-jmin+1)]", "response": "r Calculate the allowed total angular momenta."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef angular_momentum_matrix(J, ind=\"z\"):\n MJ = [-J+i for i in range(2*J+1)]\n\n if ind == \"x\":\n JX = Matrix([[sqrt((J-mj)*(J+mj+1))/2*KroneckerDelta(mi-1, mj)\n for mj in MJ] for mi in MJ])\n JX += Matrix([[sqrt((J+mj)*(J-mj+1))/2*KroneckerDelta(mi+1, mj)\n for mj in MJ] for mi in MJ])\n return JX\n elif ind == \"y\":\n JY = 
Matrix([[-I*sqrt((J-mj)*(J+mj+1))/2*KroneckerDelta(mi-1, mj)\n for mj in MJ] for mi in MJ])\n JY += Matrix([[+I*sqrt((J+mj)*(J-mj+1))/2*KroneckerDelta(mi+1, mj)\n for mj in MJ] for mi in MJ])\n return JY\n elif ind == \"z\":\n JZ = Matrix([[mi*KroneckerDelta(mi, mj) for mj in MJ] for mi in MJ])\n return JZ\n elif ind == \"all\":\n JX = angular_momentum_matrix(J, \"x\")\n JY = angular_momentum_matrix(J, \"y\")\n JZ = angular_momentum_matrix(J, \"z\")\n return JX, JY, JZ", "response": "ur Returns the angular momentum matrix for a \\\n given a J angular momentum."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef spherical_tensor(Ji, Jj, K, Q):\n keti = {(Ji, Mi): Matrix([KroneckerDelta(i, j)\n for j in range(2*Ji+1)])\n for i, Mi in enumerate(perm_m(Ji))}\n\n braj = {(Jj, Mj): Matrix([KroneckerDelta(i, j)\n for j in range(2*Jj+1)]).adjoint()\n for i, Mj in enumerate(perm_m(Jj))}\n\n if K not in perm_j(Ji, Jj):\n raise ValueError(\"K value is not allowed.\")\n if Q not in perm_m(K):\n raise ValueError(\"Q value is not allowed.\")\n\n Ni = 2*Ji+1\n Nj = 2*Jj+1\n T = zeros(Ni, Nj)\n for i, Mi in enumerate(perm_m(Ji)):\n for j, Mj in enumerate(perm_m(Jj)):\n T += (-1)**(Jj-Mj)*clebsch_gordan(Ji, Jj, K, Mi, -Mj, Q) * \\\n keti[(Ji, Mi)]*braj[(Jj, Mj)]\n\n return T", "response": "Return a matrix representation of the spherical tensor with quantum\n numbers J_i J_j K Q."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning the small Wigner d matrix for angular momentum J.", "response": "def wigner_d_small(J, beta):\n u\"\"\"Return the small Wigner d matrix for angular momentum J.\n\n We use the general formula from [Edmonds74]_, equation 4.1.15.\n\n Some examples form [Edmonds74]_:\n\n >>> from sympy import Integer, symbols, pi\n >>> half = 1/Integer(2)\n >>> beta = symbols(\"beta\", real=True)\n >>> wigner_d_small(half, beta)\n Matrix([\n [ cos(beta/2), sin(beta/2)],\n 
[-sin(beta/2), cos(beta/2)]])\n\n >>> from sympy import pprint\n >>> pprint(wigner_d_small(2*half, beta), use_unicode=True)\n \u23a1 2\u239b\u03b2\u239e \u239b\u03b2\u239e \u239b\u03b2\u239e 2\u239b\u03b2\u239e \u23a4\n \u23a2 cos \u239c\u2500\u239f \u221a2\u22c5sin\u239c\u2500\u239f\u22c5cos\u239c\u2500\u239f sin \u239c\u2500\u239f \u23a5\n \u23a2 \u239d2\u23a0 \u239d2\u23a0 \u239d2\u23a0 \u239d2\u23a0 \u23a5\n \u23a2 \u23a5\n \u23a2 \u239b\u03b2\u239e \u239b\u03b2\u239e 2\u239b\u03b2\u239e 2\u239b\u03b2\u239e \u239b\u03b2\u239e \u239b\u03b2\u239e\u23a5\n \u23a2-\u221a2\u22c5sin\u239c\u2500\u239f\u22c5cos\u239c\u2500\u239f - sin \u239c\u2500\u239f + cos \u239c\u2500\u239f \u221a2\u22c5sin\u239c\u2500\u239f\u22c5cos\u239c\u2500\u239f\u23a5\n \u23a2 \u239d2\u23a0 \u239d2\u23a0 \u239d2\u23a0 \u239d2\u23a0 \u239d2\u23a0 \u239d2\u23a0\u23a5\n \u23a2 \u23a5\n \u23a2 2\u239b\u03b2\u239e \u239b\u03b2\u239e \u239b\u03b2\u239e 2\u239b\u03b2\u239e \u23a5\n \u23a2 sin \u239c\u2500\u239f -\u221a2\u22c5sin\u239c\u2500\u239f\u22c5cos\u239c\u2500\u239f cos \u239c\u2500\u239f \u23a5\n \u23a3 \u239d2\u23a0 \u239d2\u23a0 \u239d2\u23a0 \u239d2\u23a0 \u23a6\n\n From table 4 in [Edmonds74]_\n\n >>> wigner_d_small(half, beta).subs({beta:pi/2})\n Matrix([\n [ sqrt(2)/2, sqrt(2)/2],\n [-sqrt(2)/2, sqrt(2)/2]])\n\n >>> wigner_d_small(2*half, beta).subs({beta:pi/2})\n Matrix([\n [ 1/2, sqrt(2)/2, 1/2],\n [-sqrt(2)/2, 0, sqrt(2)/2],\n [ 1/2, -sqrt(2)/2, 1/2]])\n\n >>> wigner_d_small(3*half, beta).subs({beta:pi/2})\n Matrix([\n [ sqrt(2)/4, sqrt(6)/4, sqrt(6)/4, sqrt(2)/4],\n [-sqrt(6)/4, -sqrt(2)/4, sqrt(2)/4, sqrt(6)/4],\n [ sqrt(6)/4, -sqrt(2)/4, -sqrt(2)/4, sqrt(6)/4],\n [-sqrt(2)/4, sqrt(6)/4, -sqrt(6)/4, sqrt(2)/4]])\n\n >>> wigner_d_small(4*half, beta).subs({beta:pi/2})\n Matrix([\n [ 1/4, 1/2, sqrt(6)/4, 1/2, 1/4],\n [ -1/2, -1/2, 0, 1/2, 1/2],\n [sqrt(6)/4, 0, -1/2, 0, sqrt(6)/4],\n [ -1/2, 1/2, 0, -1/2, 1/2],\n [ 1/4, -1/2, sqrt(6)/4, -1/2, 1/4]])\n\n \"\"\"\n def prod(x):\n p = 1\n 
for i, xi in enumerate(x): p = p*xi\n return p\n\n M = [J-i for i in range(2*J+1)]\n d = []\n for Mi in M:\n row = []\n for Mj in M:\n\n # We get the maximum and minimum value of sigma.\n sigmamax = max([-Mi-Mj, J-Mj])\n sigmamin = min([0, J-Mi])\n\n dij = sqrt(factorial(J+Mi)*factorial(J-Mi) /\n factorial(J+Mj)/factorial(J-Mj))\n terms = [[(-1)**(J-Mi-s),\n binomial(J+Mj, J-Mi-s),\n binomial(J-Mj, s),\n cos(beta/2)**(2*s+Mi+Mj),\n sin(beta/2)**(2*J-2*s-Mj-Mi)]\n for s in range(sigmamin, sigmamax+1)]\n\n terms = [prod(term) if 0 not in term else 0 for term in terms]\n\n dij = dij*sum(terms)\n row += [dij]\n d += [row]\n\n return Matrix(d)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef wigner_d(J, alpha, beta, gamma):\n d = wigner_d_small(J, beta)\n M = [J-i for i in range(2*J+1)]\n D = [[exp(I*Mi*alpha)*d[i, j]*exp(I*Mj*gamma)\n for j, Mj in enumerate(M)] for i, Mi in enumerate(M)]\n return Matrix(D)", "response": "Return the Wigner D matrix for angular momentum J."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngenerates coordinates for a tiling of unit squares.", "response": "def generate_unit_squares(image_width, image_height):\n \"\"\"Generate coordinates for a tiling of unit squares.\"\"\"\n # Iterate over the required rows and cells. The for loops (x, y)\n # give the coordinates of the top left-hand corner of each square:\n #\n # (x, y) +-----+ (x + 1, y)\n # | |\n # | |\n # | |\n # (x, y + 1) +-----+ (x + 1, y + 1)\n #\n for x in range(image_width):\n for y in range(image_height):\n yield [(x, y), (x + 1, y), (x + 1, y + 1), (x, y + 1)]"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef generate_unit_triangles(image_width, image_height):\n # Our triangles lie with one side parallel to the x-axis. 
Let s be\n # the length of one side, and h the height of the triangle.\n #\n # The for loops (x, y) gives the coordinates of the top left-hand corner\n # of a pair of triangles:\n #\n # (x, y) +-----+ (x + 1, y)\n # \\ / \\\n # \\ / \\\n # (x + 1/2, y + h) +-----+ (x + 3/2, y + h)\n #\n # where h = sin(60\u00b0) is the height of an equilateral triangle with\n # side length 1.\n #\n # On odd-numbered rows, we translate by (s/2, 0) to make the triangles\n # line up with the even-numbered rows.\n #\n # To avoid blank spaces on the edge of the canvas, the first pair of\n # triangles on each row starts at (-1, 0) -- one width before the edge\n # of the canvas.\n h = math.sin(math.pi / 3)\n\n for x in range(-1, image_width):\n for y in range(int(image_height / h)):\n\n # Add a horizontal offset on odd numbered rows\n x_ = x if (y % 2 == 0) else x + 0.5\n\n yield [(x_, y * h), (x_ + 1, y * h), (x_ + 0.5, (y + 1) * h)]\n yield [(x_ + 1, y * h), (x_ + 1.5, (y + 1) * h),\n (x_ + 0.5, (y + 1) * h)]", "response": "Generate coordinates for a tiling of unit triangles."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nrestores settings to default values.", "response": "def restore_default_settings():\n \"\"\" Restore settings to default values. \n \"\"\"\n global __DEFAULTS\n __DEFAULTS.CACHE_DIR = defaults.CACHE_DIR\n __DEFAULTS.SET_SEED = defaults.SET_SEED\n __DEFAULTS.SEED = defaults.SEED\n logging.info('Settings reverted to their default values.')"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nload config file into default settings", "response": "def load_config(config_file='~/.stancache.ini'):\n \"\"\" Load config file into default settings\n \"\"\"\n if not os.path.exists(config_file):\n logging.warning('Config file does not exist: {}. 
Using default settings.'.format(config_file))\n return\n ## get user-level config in *.ini format\n config = configparser.ConfigParser()\n config.read(config_file)\n if not config.has_section('main'):\n raise ValueError('Config file {} has no section \"main\"'.format(config_file))\n for (key, val) in config.items('main'):\n _set_value(key.upper(), val)\n return"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef send_email(name, ctx_dict, send_to=None, subject=u'Subject', **kwargs):\n\n eft = EmailFromTemplate(name=name)\n eft.subject = subject\n eft.context = ctx_dict\n eft.get_object()\n eft.render_message()\n eft.send_email(send_to=send_to, **kwargs)", "response": "Shortcut function for EmailFromTemplate class."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nsets the logging level of the current log entry.", "response": "def set_level(cls, level):\n \"\"\"\n :raises: ValueError\n \"\"\"\n level = (\n level\n if not is_str(level)\n else int(LOGGING_LEVELS.get(level.upper(), level))\n )\n\n for handler in root.handlers:\n handler.setLevel(level)\n\n root.setLevel(level)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nloading the configfile and return a dict of the class", "response": "def load_config(cls, configfile=\"logging.yaml\"):\n \"\"\"\n :raises: ValueError\n \"\"\"\n configfile = getenv(cls.CONFIGFILE_ENV_KEY, configfile)\n if isfile(configfile):\n with open(configfile, \"r\") as cf:\n # noinspection PyBroadException\n try:\n dictConfig(load(cf))\n except ValueError:\n debug(\"Learn to config foooo! 
Improper config at %s\", configfile)\n except Exception:\n exception(\"Something went wrong while reading %s.\", configfile)\n else:\n raise ValueError(\"Invalid configfile specified: {}\".format(configfile))"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nparsing a string to remove and replace all wiki markup tags", "response": "def __parse(self, string=''):\n '''\n Parse a string to remove and replace all wiki markup tags\n '''\n self.string = string\n self.string = self.wiki_re.sub('', self.string)\n # search for lists\n self.listmatch = re.search('^(\\*+)', self.string)\n if self.listmatch:\n self.string = self.__list(self.listmatch) + re.sub('^(\\*+)', \\\n '', self.string)\n return self.string"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef parse_string(self, string=''):\n '''\n Parse a string object to de-wikified text\n '''\n self.strings = string.splitlines(1)\n self.strings = [self.__parse(line) for line in self.strings]\n return ''.join(self.strings)", "response": "Parse a string object to de-wikified text"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef run_evolution(path,name,E0,laser_frequencies, N_iter,dt,N_states,\n\t\t\t\tspectrum_of_laser=None,N_delta=None,frequency_step=None,frequency_end=None,\n\t\t\t\trho0=None,print_steps=False,\n\t\t\t\tintegrate=False,\n\t\t\t\tsave_systems=False,save_eigenvalues=False,rk4=False,use_netcdf=True):\n\t\"\"\"This function runs the Runge-Kutta method compiled in path+name...\"\"\"\n\tdef py2f_bool(bool_var):\n\t\tif bool_var:\n\t\t\treturn \".true.\\n\"\n\t\telse:\n\t\t\treturn \".false.\\n\"\n\n\tif rk4:\n\t\treturn run_rk4(path,name,E0,laser_frequencies, N_iter,dt,N_states,\n\t\t\t\tspectrum_of_laser=spectrum_of_laser,N_delta=N_delta,\n\t\t\t\tfrequency_step=frequency_step, 
frequency_end=frequency_end,\n\t\t\t\trho0=rho0,print_steps=print_steps,\n\t\t\t\tintegrate=integrate,save_systems=save_systems)\n\n\tt0=time()\n\tparams =str(N_iter)+'\\n'\n\tparams+=str(dt)+'\\n'\n\t#We give the flag on wether to print each time step.\n\tparams+=py2f_bool(print_steps)\n\n\t#We give the initial value of rho\n\tN_vars=N_states*(N_states+1)/2-1\n\tif rho0==None:\n\t\tparams+=''.join(['(0.0,0.0) ' for i in range(N_states**2-1)])\n\telif len(rho0)==N_states-1:\n\t\tif sage_included:\n\t\t\tparams+=''.join(['('+str(real(i))+','+str(imag(i))+') ' for i in rho0])\n\t\telse:\n\t\t\tparams+=''.join(['('+str(i.real)+','+str(i.imag)+') ' for i in rho0])\n\t\tparams+=''.join(['(0.0,0.0) ' for i in range( N_states**2 -N_states)])\n\telif len(rho0)==N_vars:\n\t\tparams+=''.join(['('+str(real(i))+','+str(imag(i))+') ' for i in rho0])\n\telse:\n\t\traise ValueError,'rho0 had an invalid number of elements.'\n\n\tparams+='\\n'\n\t#We give the amplitude of the electrical fields.\n\tparams+=''.join([str(i)+' ' for i in E0])+'\\n'\n\n\t#We give the detuning of each laser (taken from the lowest frequency transition).\n\tparams+=''.join([str(i)+' ' for i in laser_frequencies])+'\\n'\n\n\t#We give the flag on wether to calculate spectrums or time evolution.\n\tif spectrum_of_laser==None:\n\t\tparams+='.false.\\n'\n\t\tparams+=py2f_bool(save_systems)\n\t\tparams+=py2f_bool(save_eigenvalues)\n\t\tparams+=py2f_bool(use_netcdf)\n\t\tparams+=py2f_bool(integrate)\n\n\telse:\n\t\tif frequency_end !=None:\n\t\t\tif frequency_step !=None:\n\t\t\t\traise ValueError,'both frequency_end and frequency_step were specified.'\n\t\t\tif 
N_delta==1:\n\t\t\t\tfrequency_step=0.0\n\t\t\telse:\n\t\t\t\tfrequency_step=(frequency_end-laser_frequencies[spectrum_of_laser-1])/(N_delta-1)\n\n\t\t#frequency_step=frequency_end\n\t\tparams+='.true.\\n'\n\t\tparams+=py2f_bool(save_systems)\n\t\tparams+=py2f_bool(save_eigenvalues)\n\t\tparams+=py2f_bool(use_netcdf)\n\t\tparams+=py2f_bool(integrate)\n\n\t\tparams+=str(spectrum_of_laser)+'\\n'\n\t\tparams+=str(N_delta)+'\\n'\n\t\tparams+=str(frequency_step)\n\t#print params\n\n\tf=file(path+name+'_params.dat','w')\n\tf.write(params)\n\tf.close()\n\n\tos.system(path+name)\n\treturn time()-t0", "response": "This function runs the Runge-Kutta method compiled in path+name..."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef characteristic_times(path,name,Omega=1):\n\tr'''This function can be called after calling ``run_diagonalization`` if the option ``save_eigenvalues``\nis set to ``True``. It will return the oscillation periods, and the shortest and half lives.\nThe results are lists ordered as:\n\t``[detunings, oscillation_periods_1, oscillation_periods_N**2-1, half_lives_1, half_lives_N**2-1]``.\n'''\n\tre,im=get_eigenvalues(path,name)\n\n\tlog12=log(0.5)\n\tNr=len(re[0]); Nd=len(re)\n\thalf_lives=[]; oscillation_periods=[]\n\tfor i in range(Nr):\n\t\tcol_half=[]\n\t\tcol_osci=[]\n\t\tfor j in range(Nd):\n\t\t\tif re[j][i]>=0.0 and i!=0:\n\t\t\t\traise ValueError,'an eigenvalue was greater or equall to zero:'+str(re[j][i])+'.'\n\t\t\telse:\n\t\t\t\tcol_half+=[log12/re[j][i]/Omega]\n\n\t\t\tif im[j][i]==0.0 and i!=0:\n\t\t\t\tcol_osci+=[float('inf')]\n\t\t\telse:\n\t\t\t\tcol_osci+=[abs(2*pi/im[j][i])/Omega]\n\t\thalf_lives+=[col_half]\n\t\toscillation_periods+=[col_osci]\n\treturn oscillation_periods+half_lives[1:]", "response": "This function returns the oscillation periods and the shortest and half lives."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 
function, write the documentation\ndef get_checksum(self):\n '''\n Retrieve unique hash in a cross-platform manner:\n this is how calculation identity is determined\n '''\n if self._checksum:\n return self._checksum\n\n if not self._filename:\n raise RuntimeError('Source calc file is required in order to properly save the data!')\n\n calc_checksum = hashlib.sha224()\n struc_repr = \"\"\n for ase_obj in self.structures:\n struc_repr += \"%3.6f %3.6f %3.6f %3.6f %3.6f %3.6f %3.6f %3.6f %3.6f \" % tuple(map(abs, [ase_obj.cell[0][0], ase_obj.cell[0][1], ase_obj.cell[0][2], ase_obj.cell[1][0], ase_obj.cell[1][1], ase_obj.cell[1][2], ase_obj.cell[2][0], ase_obj.cell[2][1], ase_obj.cell[2][2]])) # NB beware of length & minus zeros\n\n for atom in ase_obj:\n struc_repr += \"%s %3.6f %3.6f %3.6f \" % tuple(map(abs, [chemical_symbols.index(atom.symbol), atom.x, atom.y, atom.z])) # NB beware of length & minus zeros\n\n if self.info[\"energy\"] is None:\n energy = str(None)\n else:\n energy = str(round(self.info['energy'], 11 - int(math.log10(math.fabs(self.info['energy'])))))\n\n calc_checksum.update((\n struc_repr + \"\\n\" +\n energy + \"\\n\" +\n self.info['prog'] + \"\\n\" +\n str(self.info['input']) + \"\\n\" +\n str(sum([2**x for x in self.info['calctypes']]))\n ).encode('ascii')) # NB this is fixed and should not be changed\n\n result = base64.b32encode(calc_checksum.digest()).decode('ascii')\n result = result[:result.index('=')] + 'CI'\n return result", "response": "Retrieve unique hash in a cross-platform manner"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef refine_cell(self, tilde_obj):\n '''\n NB only used for perovskite_tilting app\n '''\n try: lattice, positions, numbers = spg.refine_cell(tilde_obj['structures'][-1], symprec=self.accuracy, angle_tolerance=self.angle_tolerance)\n except Exception as ex:\n self.error = 'Symmetry finder error: %s' % ex\n else:\n self.refinedcell = 
Atoms(numbers=numbers, cell=lattice, scaled_positions=positions, pbc=tilde_obj['structures'][-1].get_pbc())\n self.refinedcell.periodicity = sum(self.refinedcell.get_pbc())\n self.refinedcell.dims = abs(det(tilde_obj['structures'][-1].cell))", "response": "This method is used to refine the cell of the structure."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef parse(self):\n\n if self.data is None:\n raise ValueError('No input data provided, unable to parse')\n\n for line in self.data:\n parts = line.strip().split()\n try:\n path = parts[0]\n code = parts[1]\n path, line, char = path.split(':')[:3]\n\n if not re.match(POSITION, line):\n continue\n if not re.match(POSITION, char):\n continue\n if not re.match(ERROR_CODE, code):\n continue\n if not re.match(FILEPATH, path):\n continue\n\n # For parts mismatch\n except IndexError:\n continue\n # For unpack mismatch\n except ValueError:\n continue\n\n yield path, code, line, char, ' '.join(parts[2:])", "response": "Reads all lines from the current data source and yields each FileResult object that is available in the file."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _get(function, return_format=None):\n if return_format:\n return requests.get(''.join([__BASE_URL, function, return_format])).text\n return requests.get(''.join([__BASE_URL, function, JSON])).json()", "response": "Get and return data from the API."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns possible backscatter data.", "response": "def backscatter(date=None, rows=None, return_format=None):\n \"\"\"Returns possible backscatter data.\n\n This report only includes \"syn ack\" data and is summarized by source port.\n\n :param date: optional string (in Y-M-D format) or datetime.date() object\n :param rows: optional number of rows returned (default 1000)\n :returns: list -- 
backscatter data.\n \"\"\"\n uri = 'backscatter'\n if date:\n try:\n uri = '/'.join([uri, date.strftime(\"%Y-%m-%d\")])\n except AttributeError:\n uri = '/'.join([uri, date])\n if rows:\n uri = '/'.join([uri, str(rows)])\n return _get(uri, return_format)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef ip(ip_address, return_format=None):\n response = _get('ip/{address}'.format(address=ip_address), return_format)\n if 'bad IP address' in str(response):\n raise Error('Bad IP address, {address}'.format(address=ip_address))\n else:\n return response", "response": "Returns a summary of the information our database holds for the given IP address."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef port(port_number, return_format=None):\n response = _get('port/{number}'.format(number=port_number), return_format)\n if 'bad port number' in str(response):\n raise Error('Bad port number, {number}'.format(number=port_number))\n else:\n return response", "response": "Summary information about a particular port."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef topports(sort_by='records', limit=10, date=None, return_format=None):\n uri = '/'.join(['topports', sort_by, str(limit)])\n if date:\n try:\n uri = '/'.join([uri, date.strftime(\"%Y-%m-%d\")])\n except AttributeError:\n uri = '/'.join([uri, date])\n return _get(uri, return_format)", "response": "Return a list of top ports for a particular date with return limit."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn the port data for a given date range.", "response": "def porthistory(port_number, start_date=None, end_date=None, return_format=None):\n \"\"\"Returns port data for a range of dates.\n\n In the return data:\n\n Records: Total number of records for a given date range.\n Targets: Number of unique 
destination IP addresses.\n Sources: Number of unique originating IPs.\n\n :param port_number: a valid port number (required)\n :param start_date: string or datetime.date(), default is 30 days ago\n :param end_date: string or datetime.date(), default is today\n \"\"\"\n uri = 'porthistory/{port}'.format(port=port_number)\n\n if not start_date:\n # default 30 days ago\n start_date = datetime.datetime.now() - datetime.timedelta(days=30)\n\n try:\n uri = '/'.join([uri, start_date.strftime(\"%Y-%m-%d\")])\n except AttributeError:\n uri = '/'.join([uri, start_date])\n if end_date:\n try:\n uri = '/'.join([uri, end_date.strftime(\"%Y-%m-%d\")])\n except AttributeError:\n uri = '/'.join([uri, end_date])\n response = _get(uri, return_format)\n if 'bad port number' in str(response):\n raise Error('Bad port, {port}'.format(port=port_number))\n else:\n return response"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef asnum(number, limit=None, return_format=None):\n uri = 'asnum/{number}'.format(number=number)\n if limit:\n uri = '/'.join([uri, str(limit)])\n return _get(uri, return_format)", "response": "Returns a summary of the information our database holds for the given ASNUM number."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef dailysummary(start_date=None, end_date=None, return_format=None):\n uri = 'dailysummary'\n\n if not start_date:\n # default today\n start_date = datetime.datetime.now()\n\n try:\n uri = '/'.join([uri, start_date.strftime(\"%Y-%m-%d\")])\n except AttributeError:\n uri = '/'.join([uri, start_date])\n if end_date:\n try:\n uri = '/'.join([uri, end_date.strftime(\"%Y-%m-%d\")])\n except AttributeError:\n uri = '/'.join([uri, end_date])\n return _get(uri, return_format)", "response": "Returns daily summary of the specified datetimes."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns 
daily summary information of submitted 404 Error Page.", "response": "def daily404summary(date, return_format=None):\n \"\"\"Returns daily summary information of submitted 404 Error Page\n Information.\n\n :param date: string or datetime.date() (required)\n \"\"\"\n uri = 'daily404summary'\n if date:\n try:\n uri = '/'.join([uri, date.strftime(\"%Y-%m-%d\")])\n except AttributeError:\n uri = '/'.join([uri, date])\n return _get(uri, return_format)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef daily404detail(date, limit=None, return_format=None):\n uri = 'daily404detail'\n if date:\n try:\n uri = '/'.join([uri, date.strftime(\"%Y-%m-%d\")])\n except AttributeError:\n uri = '/'.join([uri, date])\n if limit:\n uri = '/'.join([uri, str(limit)])\n return _get(uri, return_format)", "response": "Returns detail information of submitted 404 Error Page Information."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef glossary(term=None, return_format=None):\n uri = 'glossary'\n if term:\n uri = '/'.join([uri, term])\n return _get(uri, return_format)", "response": "Get a list of glossary terms and definitions."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _append_path(new_path): # type: (str) -> None\n for path in sys.path:\n path = os.path.abspath(path)\n if new_path == path:\n return\n sys.path.append(new_path)", "response": "Append a path to sys. 
path."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _caller_path(index): # type: (int) -> str\n module = None\n stack = inspect.stack()\n while not module:\n if index >= len(stack):\n raise RuntimeError(\"Cannot find import path\")\n frame = stack[index]\n module = inspect.getmodule(frame[0])\n index += 1\n filename = module.__file__\n path = os.path.dirname(os.path.realpath(filename))\n return path", "response": "Get the caller's file path by the index of the stack"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_current_path(index=2): # type: (int) -> str\n try:\n path = _caller_path(index)\n except RuntimeError:\n path = os.getcwd()\n return path", "response": "Get the caller's path to sys.path"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_git_root(index=3): # type: (int) -> str\n path = get_current_path(index=index)\n while True:\n git_path = os.path.join(path, '.git')\n if os.path.isdir(git_path):\n return path\n if os.path.dirname(path) == path:\n raise RuntimeError(\"Cannot find git root\")\n path = os.path.split(path)[0]", "response": "Returns the path of the git root directory of the caller's file\n Raises a RuntimeError if a git repository cannot be found\n "} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngetting the caller's parent path to sys.
path", "response": "def get_parent_path(index=2): # type: (int) -> str\n \"\"\"\n Get the caller's parent path to sys.path\n If the caller is a CLI through stdin, the parent of the current working\n directory is used\n \"\"\"\n try:\n path = _caller_path(index)\n except RuntimeError:\n path = os.getcwd()\n path = os.path.abspath(os.path.join(path, os.pardir))\n return path"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef write_rk4(path,name,laser,omega,gamma,r,Lij,states=None,verbose=1):\n\n\tglobal omega_rescaled\n\t\n\tt0=time()\n\tNe=len(omega[0])\n\tNl=len(laser)\n\n\tif states==None: states=range(1,Ne+1)\n\n\t#We make some checks\n\tfor i in range(Ne):\n\t\tfor j in range(Ne):\n\t\t\tb1=not ('.' in str(omega[i][j]) or 'e' in str(omega[i][j]))\n\t\t\tif b1: raise ValueError,'omega must be composed of floating point numbers.'\n\t\t\tb2=not ('.' in str(gamma[i][j]) or 'e' in str(gamma[i][j]))\n\t\t\tif b2: raise ValueError,'gamma must be composed of floating point numbers.'\n\n\t#We rescale the frequencies as requested.\n\t#~ if Omega != None:\n\t\t#~ omega_rescaled=[[omega[i][j]/Omega for j in range(Ne)] for i in range(Ne)]\n\t\t#~ #gamma=[[gamma[i][j]/Omega for j in range(Ne)] for i in range(Ne)]\n\t#~ else:\n\t\t#~ omega_rescaled=omega[:]\n\tomega_rescaled=omega[:]\n\t\t\n\n\t#We determine wether it is possible to eliminate explicit time-dependance\n\ttheta=find_phase_transformation(Ne,Nl,r,Lij)\n\t#We find the detunings if required\n\n\t#We construct the correspondence i <-> I between degenerate and non-degenerate indices.\n\ti_d,I_nd,Nnd=calculate_iI_correspondence(omega)\n\t#We get wich transitions each laser induces\n\tdetunings,detuningsij=laser_detunings(Lij,Nl,i_d,I_nd,Nnd)\n\t\n\t#We get how many transitions each laser induces\n\tdetuning_indices=[len(detunings[i]) for i in range(Nl)]\n\t\n\t#The number of detunings\n\tNd=sum([len(detunings[l]) for l in 
range(Nl)])\n\tcombinations=detuning_combinations(detuning_indices)\t\n\n\tcode0='''program evolution_rk4\n\timplicit none\n\tcomplex*16, dimension('''+str(Ne*(Ne+1)/2-1)+') :: x,k1,k2,k3,k4\\n'\n\tcode0+=''' real*8 :: dt,t,ddelta,delta,delta0\n\tinteger :: i,j,n,ldelta,ndelta,detuning_index,n_aprox,n_mod\n\n\tlogical :: print_steps,run_spectrum\\n'''\n\tcode0+=' real*8, dimension('+str(Nl)+') :: E0,detuning_knob\\n'\n\tcode0+=' real*8, dimension('+str(Nd)+') :: detuning\\n\\n'\n\n\tcode0+=\" open(unit=1,file='\"+path+name+\".dat',status='unknown')\\n\\n\"\n\tcode0+=' n_aprox=1500\\n'\n\tcode0+=' !We load the parameters\\n'\n\tcode0+=\" open(unit=2,file='\"+path+name+\"_params.dat',status='unknown')\\n\" \n\tcode0+=''' read(2,*) n\n read(2,*) dt\n read(2,*) print_steps\n read(2,*) x\n read(2,*) E0\\n'''\n\n\tcode0+=' read(2,*) detuning_knob\\n'\n\tcode0+=' read(2,*) run_spectrum\\n\\n'\n\tcode0+=''' if (run_spectrum) then\n\t\tread(2,*) ldelta\n\t\tread(2,*) ndelta\n\t\tread(2,*) ddelta\n\t\tclose(2)\n\t\tdelta0=detuning_knob(ldelta)\n\t\tn_mod=ndelta/n_aprox\n else\n\t\tldelta=1; ndelta=1; ddelta=0; delta=0\n\t\tclose(2)\n\t\tn_mod=n/n_aprox\n end if\n if (n_mod==0) n_mod=1\\n\\n\\n'''\n\n\n\t#We add the code to caculate all the initial detunings for each laser.\n\tcode0+='\t!We calculate the initial detunings.\\n'\n\t#We find the minimal frequency corresponding to each laser.\n\tomega_min,omega_min_indices=find_omega_min(omega_rescaled,Nl,detuningsij,i_d,I_nd)\n\t\n\tdet_index=1\n\tfor l in range(Nl):\n\t\tomega0=omega_min[l]\n\t\tfor p in detuningsij[l]:\n\t\t\tcode0+='\tdetuning('+str(det_index)+')='\n\t\t\tcode0+=format_double(omega0-omega_rescaled[p[0]][p[1]])+'+detuning_knob('+str(l+1)+')\\n'\n\t\t\tdet_index+=1\n\tcode0+='\\n'\n\n\tcode0+='''\tt=0\n\tif (.not. 
run_spectrum) WRITE(1,*) t,real(x),imag(x('''+str(Ne)+''':))\\n'''\n\tcode0+='''\t!We start the detuning variation\\n'''\n\tcode0+='\tdelta=detuning_knob(ldelta)\\n'\n\tcode0+=''' do j=1,ndelta\n\t\t!We run the Runge Kutta method\n\t\tt=0.0\n\t\tdo i=1,n-1\\n'''\n\n\tcode0+=' call f(x , t , k1, E0, detuning, detuning_knob)\\n'\n\tcode0+=' call f(x+0.5*k1*dt, t+dt*0.5, k2, E0, detuning, detuning_knob)\\n'\n\tcode0+=' call f(x+0.5*k2*dt, t+dt*0.5, k3, E0, detuning, detuning_knob)\\n'\n\tcode0+=' call f(x +k3*dt, t+dt , k4, E0, detuning, detuning_knob)\\n'\n\n\tcode0+='''\t\t\tx= x+(k1+2*k2+2*k3+k4)*dt/6\n\t\t\tif (print_steps.and. .not. run_spectrum) print*,'t=',t,'delta=',delta\n\t\t\tt= t+ dt\n\t\t\t\n\t\t\tif (isnan(real(x(1)))) stop 1\n\t\t\tif (.not. run_spectrum .and. mod(i,n_mod)==0) WRITE(1,*) t,real(x),imag(x('''+str(Ne)+''':))\n\t\tend do\n\t\tif (print_steps) print*, 'delta=',delta,'percentage=',100*(delta-delta0)/(ddelta*ndelta)\n\t\t\n\t\t!We recalculate the detunings\n\t\tif (run_spectrum) then\n\t\t\tif (mod(j,n_mod)==0) WRITE(1,*) delta,real(x),imag(x('''+str(Ne)+''':))\n\t\t\tdelta=delta+ddelta\n\t\t\tdetuning_knob(ldelta)=detuning_knob(ldelta)+ddelta\\n'''\n\t\n\n\n\t#We add the code to caculate all detunings for each laser\n\t#This way of assigining a global index ll to the detunings ammounts to\n\t# ll= number_of_previous_detunings \n\t# + number_of_detuning_ordered_by_row_and_from_left_to_right_column\n\t#like this\n\t#->\n\t#-> ->\n\t#-> -> ->\n\t#for each l\n\t\n\t#We find the minimal frequency corresponding to each laser\t\t\n\tomega_min,omega_min_indices=find_omega_min(omega_rescaled,Nl,detuningsij,i_d,I_nd)\n\t\n\tdet_index=1\n\tfor l in range(Nl):\n\t\tomega0=omega_min[l]\n\t\tcode0+='\t\t\tif (ldelta=='+str(l+1)+') then\\n'\n\t\tfor p in 
detuningsij[l]:\n\t\t\tcode0+='\t\t\t\tdetuning('+str(det_index)+')=detuning('+str(det_index)+')'\n\t\t\t#code0+='+('+str(omega0-omega_rescaled[p[0]][p[1]])+'+ddelta\\n'\n\t\t\tcode0+='+ddelta\\n'\n\t\t\tdet_index+=1\n\t\tcode0+='\t\t\tend if\\n'\n\t\n\tcode0+='''\t\tend if\n\t\n\t\n\tend do\n\t\n close(1)\n \nend program\\n\\n'''\n\n\n\tcode0+='subroutine f(x,t,y, E0, detuning,detuning_knob)\\n'\n\t\n\tcode0+=''' implicit none\n real*8, intent(in) :: t\\n'''\n\tcode0+=' complex*16, dimension('+str(Ne*(Ne+1)/2-1)+'), intent(in) :: x\\n'\n\tcode0+=' complex*16, dimension('+str(Ne*(Ne+1)/2-1)+'), intent(out) :: y\\n'\n\tcode0+=' real*8, dimension('+str(Nl)+'), intent(in) :: E0,detuning_knob\\n'\n\tcode0+=' real*8, dimension('+str(Nd)+'), intent(in) :: detuning\\n\\n'\n\n\tcode0+=' complex*16 :: I,fact,aux\\n'\n\tcode0+=' real*8 :: rho11\\n\\n'\n\tcode0+=' I=(0,1D0)\\n'\n\t\n\t#We establish the scaling of the equations\n\t#~ if Omega==None:\n\t\t#~ h =1.054571726e-34; e=1.602176565e-19\n\t\t#~ code0+=' fact=I*'+str(e/h)+'\\n'\n\t#~ else:\n\t\t#~ #code0+=' fact=I*'+str(float(Omega/sqrt(2)))+'\\n'\n\t\t#~ code0+=' fact=I*'+str(float(1/sqrt(2)))+'\\n'\n\t\t#~ #code0+=' fact=I*'+str(float(1/(sqrt(2)*Omega)))+'\\n'\n\tcode0+=' fact=I*'+format_double(float(1/sqrt(2)))+'\\n'\n\t\n\t#We give the code to calculate rho11\n\tcode0+=' rho11=1\\n'\n\tfor i in range(1,Ne):\n\t\tcode0+=' rho11=rho11 -x('+str(i)+')\\n'\n\tcode0+='\\n\\n'\n\t\n\t\n\t####################################################################\n\t#We produce the code for the first order equations.\n\t####################################################################\n\tif len(theta)>0:\n\t\tcode=''\n\t\tfor mu in range(1,Ne*(Ne+1)/2):\n\t\t\ti,j,s=IJ(mu,Ne)\n\t\t\t#print 'ecuacion mu=',mu,',i,j=',i,j\n\t\t\teqmu=' y('+str(mu)+')= 0\\n'\n\t\t\t\n\t\t\t####################################################################\n\t\t\t#We add the terms associated with the effective hamiltonian\n\t\t\t#other than those 
associated with the phase transformation.\n\t\t\t\n\t\t\tfor k in range(1,Ne+1):\n\t\t\t\t#Case 1\n\t\t\t\tif k>=j:\n\t\t\t\t\tfor l in Lij[i-1][k-1]:\n\t\t\t\t\t\tif k>i:\n\t\t\t\t\t\t\t#print 'E0^',l,-1,'r',i,k,'rho',k,j,'case 1.1'\n\t\t\t\t\t\t\teqmu+=add_line(Ne,mu,'+',laser,l,-1, r,i,k, k,j)\n\t\t\t\t\t\telif ki:\n\t\t\t\t\t\t\t#print 'E0^',l,-1,'r',i,k,'rhoa',j,k,'case 2.1'\n\t\t\t\t\t\t\teqmu+=add_line(Ne,mu,'+',laser,l,-1, r,i,k, j,k,True)\n\t\t\t\t\t\telif kj:\n\t\t\t\t\t\t\t#print 'E0^',l, 1,'r',k,j,'rho',i,k,'case 3.2'\n\t\t\t\t\t\t\teqmu+=add_line(Ne,mu,'-',laser,l, 1, r,k,j, i,k)\n\t\t\t\t#Case 4\n\t\t\t\telif k>i:\n\t\t\t\t\tfor l in Lij[k-1][j-1]:\n\t\t\t\t\t\tif kj:\n\t\t\t\t\t\t\t#print 'E0^',l, 1,'r',k,j,'rhoa',k,i,'case 4.2'\n\t\t\t\t\t\t\teqmu+=add_line(Ne,mu,'-',laser,l, 1, r,k,j, k,i,True)\n\n\t\t\teqmu+=' y('+str(mu)+')=y('+str(mu)+')*fact\\n'\n\t\t\t\n\t\t\t####################################################################\n\t\t\t#We add the terms associated with the phase transformation.\n\n\t\t\textra=Theta(i,j,theta,omega_rescaled,omega_min,detunings,detuningsij,\n\t\t\t\t\t\tcombinations,detuning_indices,Lij,i_d,I_nd,Nnd,\n\t\t\t\t\t\tverbose=verbose,states=states)\n\n\t\t\tif extra!='':\n\t\t\t\teqmu+=' y('+str(mu)+')=y('+str(mu)+') + I*('+extra+')*x('+str(mu)+')\\n'\n\n\t\t\t####################################################################\n\t\t\t\n\t\t\t#~ if i==j:\n\t\t\t\t#~ for k in range(1,Ne+1):\n\t\t\t\t\t#~ if k < i:\n\t\t\t\t\t\t#~ muii=Mu(i,i,s=1,N=Ne)\n\t\t\t\t\t\t#~ eqmu+=' y('+str(mu)+')= y('+str(mu)+') - ('+format_double(gamma[i-1][k-1])+')*x('+str(muii)+')\\n'\n\t\t\t\t\t#~ elif k > i:\n\t\t\t\t\t\t#~ mukk=Mu(k,k,s=1,N=Ne)\n\t\t\t\t\t\t#~ eqmu+=' y('+str(mu)+')= y('+str(mu)+') - ('+format_double(gamma[i-1][k-1])+')*x('+str(mukk)+')\\n'\n\t\t\t\t#~ eqmu+='\\n'\n\t\t\t#~ else:\n\t\t\t\t#~ eqmu+=' y('+str(mu)+')= y('+str(mu)+') - ('+format_double(gamma[i-1][j-1]/2)+')*x('+str(mu)+')\\n'\n#~ 
\n\n\t\t\t####################################################################\n\t\t\tcode+=eqmu+'\\n'\n\n\t\t#We add the terms associated with spontaneous decay.\t\t\n\t\t#First for populations.\n\t\tfor i in range(2,Ne+1):\n\t\t\tmu=Mu(i,i,1,Ne)\n\t\t\tfor k in range(1,Ne+1):\n\t\t\t\tgams=0\n\t\t\t\tif ki:\n\t\t\t\t\tnu=Mu(k,k,1,Ne)\n\t\t\t\t\tga=gamma[i-1][k-1]\n\t\t\t\t\tif ga != 0:\n\t\t\t\t\t\tcode+=' y('+str(mu)+')=y('+str(mu)+')'\n\t\t\t\t\t\tcode+='-('+format_double(ga)+')*x('+str(nu)+')\\n'\n\t\t\t\tif gams!=0:\n\t\t\t\t\tcode+=' y('+str(mu)+')=y('+str(mu)+')'\n\t\t\t\t\tcode+='-('+format_double(gams)+')*x('+str(mu)+')\\n'\n\n\t\t#And now for coherences\t\n\t\tfor i in range(1,Ne+1):\n\t\t\tfor j in range(1,i):\n\t\t\t\tgams=gamma[i-1][j-1]/2\n\t\t\t\tif gams!=0:\n\t\t\t\t\tfor a in range(i+1,Ne+1):\n\t\t\t\t\t\tmu=Mu(a,i,+1,Ne)\n\t\t\t\t\t\tcode+=' y('+str(mu)+')=y('+str(mu)+')'\n\t\t\t\t\t\tcode+='-('+format_double(gams)+')*x('+str(mu)+')\\n'\n\n\t\t\t\t\t\t#~ mu=Mu(a,i,-1,Ne)\n\t\t\t\t\t\t#~ code+=' y('+str(mu)+')=y('+str(mu)+')'\n\t\t\t\t\t\t#~ code+='-('+format_double(gams)+')*x('+str(mu)+')\\n'\n\n\t\t\t\t\tfor b in range(1,i):\n\t\t\t\t\t\tmu=Mu(i,b,+1,Ne)\n\t\t\t\t\t\tcode+=' y('+str(mu)+')=y('+str(mu)+')'\n\t\t\t\t\t\tcode+='-('+format_double(gams)+')*x('+str(mu)+')\\n'\n\n\t\t\t\t\t\t#~ mu=Mu(i,b,-1,Ne)\n\t\t\t\t\t\t#~ code+=' y('+str(mu)+')=y('+str(mu)+')'\n\t\t\t\t\t\t#~ code+='-('+format_double(gams)+')*x('+str(mu)+')\\n'\n\n\n\n\n\t\t\n\t\t####################################################################\n\t\t####################################################################\n\t\t####################################################################\n\t\t#code+=' y=y/'+str(Omega)+'\\n'\n\t\t\n\t\tf=file(path+name+'.f90','w')\n\t\tcode=code0+code+'end subroutine\\n'\n\t\tf.write(code)\n\t\tf.close()\n\t\t\t\t\n\t\treturn time()-t0\n\telse:\n\t\tprint 'There was no phase transformation capable of eliminating explicit time dependance.'", 
"response": "This function writes the Fortran code needed to calculate the time evolution of the density matrix elements."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef run_rk4(path,name,E0,laser_frequencies, N_iter,dt,N_states,\n\t\t\t\tspectrum_of_laser=None,N_delta=None,frequency_step=None,frequency_end=None,\n\t\t\t\trho0=None,print_steps=False,integrate=False,\n\t\t\t\tsave_systems=False):\n\t\"\"\"This function runs the Runge-Kutta method compiled in path+name...\"\"\"\n\n\tt0=time()\n\tparams =str(N_iter)+'\\n'\n\tparams+=str(dt)+'\\n'\n\t#We give the flag on wether to print each time step.\n\tif print_steps:\n\t\tparams+='.true.\\n'\n\telse:\n\t\tparams+='.false.\\n'\n\t\n\t#We give the initial value of rho\n\tN_vars=N_states*(N_states+1)/2-1\n\tif rho0==None:\n\t\tparams+=''.join(['(0.0,0.0) ' for i in range(N_vars)])\n\telif len(rho0)==N_states-1:\n\t\tif sage_included:\n\t\t\tparams+=''.join(['('+str(real(i))+','+str(imag(i))+') ' for i in rho0])\n\t\telse:\n\t\t\tparams+=''.join(['('+str(i.real)+','+str(i.imag)+') ' for i in rho0])\n\t\tparams+=''.join(['(0.0,0.0) ' for i in range(N_vars-N_states+1)])\n\telif len(rho0)==N_vars:\n\t\tparams+=''.join(['('+str(real(i))+','+str(imag(i))+') ' for i in rho0])\n\n\tparams+='\\n'\n\t#We give the amplitude of the electrical fields.\n\tparams+=''.join([str(i)+' ' for i in E0])+'\\n'\n\t\n\t#We give the detuning of each laser (taken from the lowest frequency transition).\n\tparams+=''.join([str(i)+' ' for i in laser_frequencies])+'\\n'\n\n\t#We give the flag on wether to calculate spectrums or time evolution.\n\tif spectrum_of_laser==None:\n\t\tparams+='.false.'\n\telse:\n\t\tif frequency_end !=None:\n\t\t\tif frequency_step !=None:\n\t\t\t\traise ValueError,'both frequency_end and frequency_step were specified.'\n\t\t\tif 
N_delta==1:\n\t\t\t\tfrequency_step=0.0\n\t\t\telse:\n\t\t\t\tfrequency_step=(frequency_end-laser_frequencies[spectrum_of_laser-1])/(N_delta-1)\n\t\t\t\n\t\t#frequency_step=frequency_end\n\t\tparams+='.true.\\n'\n\t\tparams+=str(spectrum_of_laser)+'\\n'\n\t\tparams+=str(N_delta)+'\\n'\n\t\tparams+=str(frequency_step)\n\t#print params\n\t\n\tf=file(path+name+'_params.dat','w')\n\tf.write(params)\n\tf.close()\n\t\n\tos.system(path+name)\n\t\n\treturn time()-t0", "response": "This function runs the Runge-Kutta method compiled in path+name..."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef send_email(self, send_to, attachment_paths=None, fail_silently=True, *args, **kwargs):\n msg = self.get_message_object(send_to, attachment_paths, *args, **kwargs)\n msg.content_subtype = self.content_subtype\n\n try:\n self.sent = msg.send()\n except SMTPException, e:\n if not fail_silently:\n raise\n logger.error(u'Problem sending email to %s: %s', send_to, e)\n\n return self.sent", "response": "Sends an email to the recipient based on the parameters passed to the constructor."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning element. 
tag with xmlns stripped away.", "response": "def _tag(element):\n \"\"\"Return element.tag with xmlns stripped away.\"\"\"\n tag = element.tag\n if tag[0] == \"{\":\n uri, tag = tag[1:].split(\"}\")\n return tag"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns xmlns prefix of the given element.", "response": "def _uriPrefix(element):\n \"\"\"Return xmlns prefix of the given element.\"\"\"\n i = element.tag.find('}')\n if i < 0:\n return \"\"\n return element.tag[:i+1]"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nread the required and optional elements and returns the remaining elements.", "response": "def _parseElements(self, elementTree, expectedTag = None):\n \"\"\"Read REQUIRED_ELEMENTS and OPTIONAL_ELEMENTS and returns\n the rest of the children. Every read child element's text\n value will be filled into an attribute of the same name,\n i.e. Test will lead to 'Test' being\n assigned to self.description. 
Missing REQUIRED_ELEMENTS\n result in warnings.\"\"\"\n\n xmlns = _uriPrefix(elementTree)\n\n if expectedTag is not None:\n assert _tag(elementTree) == expectedTag, 'expected <%s>, got <%s>' % (expectedTag, _tag(elementTree))\n\n parsed = []\n\n for tagName in self.REQUIRED_ELEMENTS + self.OPTIONAL_ELEMENTS:\n tags = elementTree.findall(xmlns + tagName)\n if tags:\n parsed.extend(tags)\n tagValue = tags[0].text\n tagValue = tagValue.strip() if tagValue else \"\"\n if len(tags) > 1:\n logger.warning(\"More than one <%s> found within %r (using only first)\" % (tagName, _tag(elementTree)))\n else:\n tagValue = None\n if tagName in self.REQUIRED_ELEMENTS:\n logger.warning(\"Required element %r not found within %r\" % (tagName, _tag(elementTree)))\n setattr(self, _tagToIdentifier(tagName), tagValue)\n\n return [tag for tag in elementTree if tag not in parsed]"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef classifyParameters(self):\n\n arguments = []\n options = []\n outputs = []\n for parameter in self.parameters():\n if parameter.channel == 'output' and not parameter.isExternalType():\n outputs.append(parameter)\n elif parameter.index is not None:\n arguments.append(parameter)\n if parameter.flag is not None or parameter.longflag is not None:\n logger.warning(\"Parameter '%s' has both index=%d and flag set.\" % (\n parameter.identifier(), parameter.index))\n elif parameter.flag or parameter.longflag:\n options.append(parameter)\n else:\n logger.warning(\"Parameter '%s' cannot be passed (missing flag, longflag, or index)!\" % parameter.name)\n arguments.sort(key = lambda parameter: parameter.index)\n return (arguments, options, outputs)", "response": "Return the arguments options and outputs tuples."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef parseValue(self, value):\n if self.isVector():\n return list(map(self._pythonType, value.split(',')))\n if 
self.typ == 'boolean':\n return _parseBool(value)\n return self._pythonType(value)", "response": "Parse the given value and return result."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef defaultExtension(self):\n result = self.EXTERNAL_TYPES[self.typ]\n if not self.fileExtensions:\n return result\n if result in self.fileExtensions:\n return result\n return self.fileExtensions[0]", "response": "Return the default extension for this parameter type checked\n against supported fileExtensions. If the default extension is not within supported fileExtensions return the first supported\n extension."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef encode(self, X, seed=None):\n\n if not X:\n return ''\n\n if not isinstance(X, str):\n raise InvalidInputException('Input must be of type string.')\n\n if seed is not None and len(seed) != 8:\n raise InvalidSeedLength('The seed is not 8 bytes long, seed length: '+str(len(seed)))\n\n ciphertext = self._encrypter.encrypt(X)\n\n maximumBytesToRank = int(math.floor(self.getCapacity() / 8.0))\n unrank_payload_len = (\n maximumBytesToRank - DfaEncoderObject._COVERTEXT_HEADER_LEN_CIPHERTTEXT)\n unrank_payload_len = min(len(ciphertext), unrank_payload_len)\n\n if unrank_payload_len <= 0:\n raise InsufficientCapacityException('Language doesn\\'t have enough capacity')\n\n msg_len_header = fte.bit_ops.long_to_bytes(unrank_payload_len)\n msg_len_header = string.rjust(\n msg_len_header, DfaEncoderObject._COVERTEXT_HEADER_LEN_PLAINTEXT, '\\x00')\n random_bytes = seed if seed is not None else fte.bit_ops.random_bytes(8)\n msg_len_header = random_bytes + msg_len_header\n msg_len_header = self._encrypter.encryptOneBlock(msg_len_header)\n\n unrank_payload = msg_len_header + \\\n ciphertext[:maximumBytesToRank -\n DfaEncoderObject._COVERTEXT_HEADER_LEN_CIPHERTTEXT]\n\n random_padding_bytes = maximumBytesToRank - 
len(unrank_payload)\n if random_padding_bytes > 0:\n unrank_payload += fte.bit_ops.random_bytes(random_padding_bytes)\n\n unrank_payload = fte.bit_ops.bytes_to_long(unrank_payload)\n\n formatted_covertext_header = self._dfa.unrank(unrank_payload)\n unformatted_covertext_body = ciphertext[\n maximumBytesToRank - DfaEncoderObject._COVERTEXT_HEADER_LEN_CIPHERTTEXT:]\n\n covertext = formatted_covertext_header + unformatted_covertext_body\n\n return covertext", "response": "Given a string X returns a string of unranked version of X."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef decode(self, covertext):\n\n if not isinstance(covertext, str):\n raise InvalidInputException('Input must be of type string.')\n\n insufficient = (len(covertext) < self._fixed_slice)\n if insufficient:\n raise DecodeFailureError(\n \"Covertext is shorter than self._fixed_slice, can't decode.\")\n\n maximumBytesToRank = int(math.floor(self.getCapacity() / 8.0))\n\n rank_payload = self._dfa.rank(covertext[:self._fixed_slice])\n X = fte.bit_ops.long_to_bytes(rank_payload)\n\n X = string.rjust(X, maximumBytesToRank, '\\x00')\n msg_len_header = self._encrypter.decryptOneBlock(\n X[:DfaEncoderObject._COVERTEXT_HEADER_LEN_CIPHERTTEXT])\n msg_len_header = msg_len_header[8:16]\n msg_len = fte.bit_ops.bytes_to_long(\n msg_len_header[:DfaEncoderObject._COVERTEXT_HEADER_LEN_PLAINTEXT])\n\n retval = X[16:16 + msg_len]\n retval += covertext[self._fixed_slice:]\n ctxt_len = self._encrypter.getCiphertextLen(retval)\n remaining_buffer = retval[ctxt_len:]\n retval = retval[:ctxt_len]\n retval = self._encrypter.decrypt(retval)\n\n return retval, remaining_buffer", "response": "Given an input string X returns an unranked version of X."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef define_symbol(name, open_brace, comma, i, j,\n close_brace, variables, **kwds):\n r\"\"\"Define a nice 
symbol with matrix indices.\n\n >>> name = \"rho\"\n >>> from sympy import symbols\n >>> t, x, y, z = symbols(\"t, x, y, z\", positive=True)\n >>> variables = [t, x, y, z]\n >>> open_brace = \"\"\n >>> comma = \"\"\n >>> close_brace = \"\"\n >>> i = 0\n >>> j = 1\n >>> f = define_symbol(name, open_brace, comma, i, j, close_brace,\n ... variables, positive=True)\n >>> print f\n rho12(t, x, y, z)\n\n \"\"\"\n if variables is None:\n return Symbol(name+open_brace+str(i+1)+comma+str(j+1) +\n close_brace, **kwds)\n else:\n return Function(name+open_brace+str(i+1)+comma+str(j+1) +\n close_brace, **kwds)(*variables)", "response": "Define a nice symbol with matrix indices."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef polarization_vector(phi, theta, alpha, beta, p,\n numeric=False, abstract=False):\n \"\"\"This function returns a unitary vector describing the polarization\n of plane waves.:\n\n INPUT:\n\n - ``phi`` - The spherical coordinates azimuthal angle of the wave vector\\\n k.\n - ``theta`` - The spherical coordinates polar angle of the wave vector k.\n - ``alpha`` - The rotation of a half-wave plate.\n - ``beta`` - The rotation of a quarter-wave plate.\n - ``p`` - either 1 or -1 to indicate whether to return epsilon^(+) or\\\n epsilon^(-) respectively.\n\n If alpha and beta are zero, the result will be linearly polarized light\n along some fast axis. 
alpha and beta are measured from that fast axis.\n\n Propagation towards y, linear polarization (for pi transitions):\n\n >>> from sympy import pi\n >>> polarization_vector(phi=pi/2, theta=pi/2, alpha=pi/2, beta= 0,p=1)\n Matrix([\n [0],\n [0],\n [1]])\n\n Propagation towards +z, circular polarization (for sigma + transitions):\n\n >>> polarization_vector(phi=0, theta= 0, alpha=pi/2, beta= pi/8,p=1)\n Matrix([\n [ -sqrt(2)/2],\n [-sqrt(2)*I/2],\n [ 0]])\n\n Propagation towards -z, circular polarization for sigma + transitions:\n\n >>> polarization_vector(phi=0, theta=pi, alpha= 0, beta=-pi/8,p=1)\n Matrix([\n [ -sqrt(2)/2],\n [-sqrt(2)*I/2],\n [ 0]])\n\n Components + and - are complex conjugates of each other\n\n >>> from sympy import symbols\n >>> phi, theta, alpha, beta = symbols(\"phi theta alpha beta\", real=True)\n >>> ep = polarization_vector(phi,theta,alpha,beta, 1)\n >>> em = polarization_vector(phi,theta,alpha,beta,-1)\n >>> ep-em.conjugate()\n Matrix([\n [0],\n [0],\n [0]])\n\n We can also define abstract polarization vectors without explicit \\\n components\n\n >>> polarization_vector(0, 0, 0, 0, 1, abstract=True)\n epsilonp\n >>> polarization_vector(0, 0, 0, 0, -1, abstract=True)\n epsilonm\n\n \"\"\"\n if abstract:\n Nl = symbols(\"N_l\", integer=True)\n if p == 1:\n epsilon = Vector3D(IndexedBase(\"epsilonp\", shape=(Nl,)))\n else:\n epsilon = Vector3D(IndexedBase(\"epsilonm\", shape=(Nl,)))\n return epsilon\n\n epsilon = Matrix([cos(2*beta), p*I*sin(2*beta), 0])\n\n R1 = Matrix([[cos(2*alpha), -sin(2*alpha), 0],\n [sin(2*alpha), cos(2*alpha), 0],\n [0, 0, 1]])\n\n R2 = Matrix([[cos(theta), 0, sin(theta)],\n [0, 1, 0],\n [-sin(theta), 0, cos(theta)]])\n\n R3 = Matrix([[cos(phi), -sin(phi), 0],\n [sin(phi), cos(phi), 0],\n [0, 0, 1]])\n\n epsilon = R3*R2*R1*epsilon\n if numeric:\n epsilon = nparray([complex(epsilon[i]) for i in range(3)])\n return epsilon", "response": "This function returns a unitary vector describing the polarization of a single - 
plane wave."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef cartesian_to_helicity(vector, numeric=False):\n if numeric:\n vector = list(vector)\n vector[0] = nparray(vector[0])\n vector[1] = nparray(vector[1])\n vector[2] = nparray(vector[2])\n\n v = [(vector[0]-1j*vector[1])/npsqrt(2),\n vector[2],\n -(vector[0]+1j*vector[1])/npsqrt(2)]\n v = nparray(v)\n else:\n v = [(vector[0]-I*vector[1])/sqrt(2),\n vector[2],\n -(vector[0]+I*vector[1])/sqrt(2)]\n\n if type(vector[0]) in [type(Matrix([1, 0])), type(nparray([1, 0]))]:\n return v\n else:\n return Matrix(v)", "response": "This function takes a cartesian basis and converts it to the helicity basis."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef vector_element(r, i, j):\n return Matrix([r[p][i, j] for p in range(3)])", "response": "Extract a matrix element of a vector operator."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef ketbra(i, j, Ne):\n return ket(i, Ne)*bra(j, Ne)", "response": "This function returns the outer product of a canonical basis of an Ne-dimensional Hilbert space."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef lindblad_operator(A, rho):\n a, b = sigma_operator_indices(A)\n # print(111, a, b)\n if a is not None and b is not None:\n Ne = A.shape[0]\n L = zeros(Ne, Ne)\n L[a, a] += rho[b, b]\n\n for j in range(Ne):\n L[b, j] += -rho[b, j]/2\n\n for i in range(Ne):\n L[i, b] += -rho[i, b]/2\n\n return L\n else:\n return A*rho*A.adjoint() - (A.adjoint()*A*rho + rho*A.adjoint()*A)/2", "response": "This function returns the action of a Lindblad operator A on a density matrix rho."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning the Lindblad terms for decays gamma in matrix form.", 
"response": "def lindblad_terms(gamma, rho, Ne, verbose=1):\n u\"\"\"Return the Lindblad terms for decays gamma in matrix form.\n\n >>> from sympy import pprint\n >>> aux = define_frequencies(4, explicitly_antisymmetric=True)\n >>> omega_level, omega, gamma = aux\n >>> gamma = gamma.subs({gamma[2, 0]:0, gamma[3, 0]:0, gamma[3, 1]:0})\n >>> pprint(gamma, use_unicode=True)\n \u23a1 0 -\u03b3\u2082\u2081 0 0 \u23a4\n \u23a2 \u23a5\n \u23a2\u03b3\u2082\u2081 0 -\u03b3\u2083\u2082 0 \u23a5\n \u23a2 \u23a5\n \u23a2 0 \u03b3\u2083\u2082 0 -\u03b3\u2084\u2083\u23a5\n \u23a2 \u23a5\n \u23a3 0 0 \u03b3\u2084\u2083 0 \u23a6\n >>> rho = define_density_matrix(4)\n >>> pprint(lindblad_terms(gamma, rho, 4), use_unicode=True)\n \u23a1 -\u03b3\u2082\u2081\u22c5\u03c1\u2081\u2082 -\u03b3\u2083\u2082\u22c5\u03c1\u2081\u2083 -\u03b3\u2084\u2083\u22c5\u03c1\u2081\u2084 \u23a4\n \u23a2 \u03b3\u2082\u2081\u22c5\u03c1\u2082\u2082 \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 \u23a5\n \u23a2 2 2 2 \u23a5\n \u23a2 \u23a5\n \u23a2-\u03b3\u2082\u2081\u22c5\u03c1\u2082\u2081 \u03b3\u2082\u2081\u22c5\u03c1\u2082\u2083 \u03b3\u2083\u2082\u22c5\u03c1\u2082\u2083 \u03b3\u2082\u2081\u22c5\u03c1\u2082\u2084 \u03b3\u2084\u2083\u22c5\u03c1\u2082\u2084\u23a5\n \u23a2\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 -\u03b3\u2082\u2081\u22c5\u03c1\u2082\u2082 + \u03b3\u2083\u2082\u22c5\u03c1\u2083\u2083 - \u2500\u2500\u2500\u2500\u2500\u2500\u2500 - \u2500\u2500\u2500\u2500\u2500\u2500\u2500 - \u2500\u2500\u2500\u2500\u2500\u2500\u2500 - \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u23a5\n \u23a2 2 2 2 2 2 \u23a5\n \u23a2 \u23a5\n \u23a2-\u03b3\u2083\u2082\u22c5\u03c1\u2083\u2081 \u03b3\u2082\u2081\u22c5\u03c1\u2083\u2082 \u03b3\u2083\u2082\u22c5\u03c1\u2083\u2082 \u03b3\u2083\u2082\u22c5\u03c1\u2083\u2084 \u03b3\u2084\u2083\u22c5\u03c1\u2083\u2084\u23a5\n 
\u23a2\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 - \u2500\u2500\u2500\u2500\u2500\u2500\u2500 - \u2500\u2500\u2500\u2500\u2500\u2500\u2500 -\u03b3\u2083\u2082\u22c5\u03c1\u2083\u2083 + \u03b3\u2084\u2083\u22c5\u03c1\u2084\u2084 - \u2500\u2500\u2500\u2500\u2500\u2500\u2500 - \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u23a5\n \u23a2 2 2 2 2 2 \u23a5\n \u23a2 \u23a5\n \u23a2-\u03b3\u2084\u2083\u22c5\u03c1\u2084\u2081 \u03b3\u2082\u2081\u22c5\u03c1\u2084\u2082 \u03b3\u2084\u2083\u22c5\u03c1\u2084\u2082 \u03b3\u2083\u2082\u22c5\u03c1\u2084\u2083 \u03b3\u2084\u2083\u22c5\u03c1\u2084\u2083 \u23a5\n \u23a2\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 - \u2500\u2500\u2500\u2500\u2500\u2500\u2500 - \u2500\u2500\u2500\u2500\u2500\u2500\u2500 - \u2500\u2500\u2500\u2500\u2500\u2500\u2500 - \u2500\u2500\u2500\u2500\u2500\u2500\u2500 -\u03b3\u2084\u2083\u22c5\u03c1\u2084\u2084 \u23a5\n \u23a3 2 2 2 2 2 \u23a6\n\n Notice that there are more terms than simply adding a decay\n gamma_ij*rho_ij/2 for each coherence.\n\n \"\"\"\n # We count the necessary Lindblad operators.\n Nterms = 0\n for i in range(Ne):\n for j in range(i):\n if gamma[i, j] != 0:\n Nterms += 1\n\n L = zeros(Ne)\n counter = 0\n t0 = time()\n for i in range(Ne):\n for j in range(i):\n if gamma[i, j] != 0:\n counter += 1\n sig = ket(j+1, Ne)*bra(i+1, Ne)\n L += gamma[i, j]*lindblad_operator(sig, rho)\n tn = time()\n if tn-t0 > 1:\n aux = \"Calculated up to i={}, j={}, or {}/{} = {:2.2f} %.\"\n if verbose > 0:\n print(aux.format(i, j, counter, Nterms,\n float(counter+1)/Nterms*100))\n t0 = tn\n return L"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef phase_transformation(Ne, Nl, r, Lij, omega_laser, phase):\n ph = find_phase_transformation(Ne, Nl, r, Lij)\n\n return {phase[i]: sum([ph[i][j]*omega_laser[j] for j in range(Nl)])\n for i in range(Ne)}", "response": "r Return a phase transformation to eliminate explicit time dependence."} 
{"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef hamiltonian(Ep, epsilonp, detuning_knob, rm, omega_level, omega_laser, xi,\n RWA=True, RF=True):\n r\"\"\"Return symbolic Hamiltonian.\n\n >>> from sympy import zeros, pi, pprint, symbols\n >>> Ne = 3\n >>> Nl = 2\n >>> Ep, omega_laser = define_laser_variables(Nl)\n >>> epsilonp = [polarization_vector(0, -pi/2, 0, 0, 1) for l in range(Nl)]\n >>> detuning_knob = symbols(\"delta1 delta2\", real=True)\n\n >>> xi = [zeros(Ne, Ne) for l in range(Nl)]\n >>> coup = [[(1, 0)], [(2, 0)]]\n >>> for l in range(Nl):\n ... for pair in coup[l]:\n ... xi[l][pair[0], pair[1]] = 1\n ... xi[l][pair[1], pair[0]] = 1\n\n >>> rm = define_r_components(Ne, xi, explicitly_hermitian=True,\n ... helicity=True, p=-1)\n\n >>> rm = helicity_to_cartesian(rm)\n >>> omega_level, omega, gamma = define_frequencies(Ne, True)\n\n >>> H = hamiltonian(Ep, epsilonp, detuning_knob, rm, omega_level,\n ... omega_laser, xi, RWA=True, RF=False)\n\n >>> print H[1, 0]\n -E_{01}*e*r_{0;21}*exp(-I*t*varpi_1)/2\n >>> print H[2, 0]\n -E_{02}*e*r_{0;31}*exp(-I*t*varpi_2)/2\n >>> print H[2, 2]\n hbar*omega_3\n\n \"\"\"\n # We check what RF is.\n if type(RF) == list:\n theta = RF[:]\n RF = True\n elif type(RF) == Matrix:\n theta = [RF[i, 0] for i in range(RF.shape[0])]\n RF = True\n elif RF:\n # theta should be calculate here!\n s = \"We are still missing automatic calculation of phase \"\n s += \"transformations.\"\n raise ValueError(s)\n\n if not RWA and RF:\n s = \"The rotating frame does not exist without the rotating wave \\\napproximation, as far as I know.\"\n raise ValueError(s)\n\n # We check that the epsilonp is a list of vectors.\n if not isinstance(epsilonp, list):\n raise ValueError(\"epsilonp must be a list of polarization vectors.\")\n if not isinstance(epsilonp[0], Matrix):\n raise ValueError(\"epsilonp must be a list of polarization vectors.\")\n\n Ne = len(omega_level)\n Nl = len(omega_laser)\n\n H = 
zeros(Ne, Ne)\n hbar, e = symbols(\"hbar e\", positive=True)\n t = symbols(\"t\", real=True)\n\n for i in range(Ne):\n for j in range(Ne):\n rmij = vector_element(rm, i, j)\n rpij = vector_element(rm, j, i).conjugate()\n for l in range(Nl):\n epsilonpl = epsilonp[l]\n epsilonml = epsilonpl.conjugate()\n\n if RF:\n Epl = xi[l][i, j]*Ep[l]\n Epl *= exp(-I*(theta[j]-theta[i]-t*omega_laser[l]))\n Eml = xi[l][i, j]*Ep[l].conjugate()\n Eml *= exp(-I*(theta[j]-theta[i]+t*omega_laser[l]))\n else:\n Epl = Ep[l]*xi[l][i, j]*exp(-I*omega_laser[l]*t)\n Eml = Epl.conjugate()\n\n # The E^(+)r^(-) term\n H[i, j] += -e*Epl/2*cartesian_dot_product(epsilonpl, rmij)\n # The E^(-)r^(+) term\n H[i, j] += -e*Eml/2*cartesian_dot_product(epsilonml, rpij)\n if not RWA:\n # The E^(+)r^(+) term\n H[i, j] += -e*Epl/2*cartesian_dot_product(epsilonpl, rpij)\n # The E^(-)r^(-) term\n H[i, j] += -e*Eml/2*cartesian_dot_product(epsilonml, rmij)\n\n if i == j:\n if RF:\n H[i, j] += hbar*(omega_level[i]+diff(theta[i], t))\n else:\n H[i, j] += hbar*omega_level[i]\n\n return H", "response": "r Return symbolic Hamiltonian."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef cross(a, b):\n if isinstance(a, Mul):\n a = a.expand()\n avect = 1\n aivect = -1\n for ai, fact in enumerate(a.args):\n if isinstance(fact, Vector3D):\n avect = fact\n aivect = ai\n break\n\n acoef = a.args[:aivect] + a.args[aivect+1:]\n acoef = Mul(*acoef)\n return acoef*cross(avect, b)\n\n if isinstance(b, Mul):\n b = b.expand()\n bvect = 1\n bivect = -1\n for bi, fact in enumerate(b.args):\n if isinstance(fact, Vector3D):\n bvect = fact\n bivect = bi\n break\n\n bcoef = b.args[:bivect] + b.args[bivect+1:]\n bcoef = Mul(*bcoef)\n return bcoef*cross(a, bvect)\n\n if isinstance(a, Vector3D) and isinstance(b, Vector3D):\n return CrossProduct(a, b)", "response": "r Cross product of two 3d vectors."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief 
explanation for the following Python 3 code\ndef write_settings(settings):\n '''\n Saves user's settings\n @returns True on success\n @returns False on failure\n '''\n if not os.access(DATA_DIR, os.W_OK): return False\n try:\n f = open(DATA_DIR + os.sep + SETTINGS_FILE, 'w')\n f.writelines(json.dumps(settings, indent=0))\n f.close()\n os.chmod(os.path.abspath(DATA_DIR + os.sep + SETTINGS_FILE), 0o777) # to avoid (or create?) IO problems with multiple users\n except IOError:\n return False\n else:\n return True", "response": "Saves user s settings to a file in the user s data directory."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef is_field_unique_by_group(df, field_col, group_col):\n ''' Determine if field is constant by group in df\n '''\n def num_unique(x):\n return len(pd.unique(x))\n num_distinct = df.groupby(group_col)[field_col].agg(num_unique)\n return all(num_distinct == 1)", "response": "Determine if a field is constant by group in df"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _list_files_in_path(path, pattern=\"*.stan\"):\n\n results = []\n for dirname, subdirs, files in os.walk(path):\n for name in files:\n if fnmatch(name, pattern):\n results.append(os.path.join(dirname, name))\n return(results)", "response": "List files in a directory of stan"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef generate_random_perovskite(lat=None):\n '''\n This generates a random valid perovskite structure in ASE format.\n Useful for testing.\n Binary and organic perovskites are not considered.\n '''\n if not lat:\n lat = round(random.uniform(3.5, Perovskite_tilting.OCTAHEDRON_BOND_LENGTH_LIMIT*2), 3)\n A_site = random.choice(Perovskite_Structure.A)\n B_site = random.choice(Perovskite_Structure.B)\n Ci_site = random.choice(Perovskite_Structure.C)\n Cii_site = 
random.choice(Perovskite_Structure.C)\n\n while covalent_radii[chemical_symbols.index(A_site)] - \\\n covalent_radii[chemical_symbols.index(B_site)] < 0.05 or \\\n covalent_radii[chemical_symbols.index(A_site)] - \\\n covalent_radii[chemical_symbols.index(B_site)] > 0.5:\n\n A_site = random.choice(Perovskite_Structure.A)\n B_site = random.choice(Perovskite_Structure.B)\n\n return crystal(\n [A_site, B_site, Ci_site, Cii_site],\n [(0.5, 0.25, 0.0), (0.0, 0.0, 0.0), (0.0, 0.25, 0.0), (0.25, 0.0, 0.75)],\n spacegroup=62, cellpar=[lat*math.sqrt(2), 2*lat, lat*math.sqrt(2), 90, 90, 90]\n )", "response": "Generates a random valid perovskite structure in ASE format."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nchecks a value is a positive integer. Returns the value if so raises ValueError otherwise.", "response": "def check_positive_integer(name, value):\n \"\"\"Check a value is a positive integer.\n\n Returns the value if so, raises ValueError otherwise.\n\n \"\"\"\n try:\n value = int(value)\n is_positive = (value > 0)\n except ValueError:\n raise ValueError('%s should be an integer; got %r' % (name, value))\n\n if is_positive:\n return value\n else:\n raise ValueError('%s should be positive; got %r' % (name, value))"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nchecks a value is a valid colour input.", "response": "def check_color_input(value):\n \"\"\"Check a value is a valid colour input.\n\n Returns a parsed `RGBColor` instance if so, raises ValueError\n otherwise.\n\n \"\"\"\n value = value.lower()\n # Trim a leading hash\n if value.startswith('#'):\n value = value[1:]\n\n if len(value) != 6:\n raise ValueError(\n 'Color should be six hexadecimal digits, got %r (%s)' %\n (value, len(value)))\n\n if re.sub(r'[a-f0-9]', '', value):\n raise ValueError(\n 'Color should only contain hex characters, got %r' % value)\n\n red = int(value[0:2], base=16)\n green = int(value[2:4], 
base=16)\n blue = int(value[4:6], base=16)\n return RGBColor(red, green, blue)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nextracts octahedra as lists of sequence numbers of corner atoms.", "response": "def get_octahedra(self, atoms, periodicity=3):\n '''\n Extract octahedra as lists of sequence numbers of corner atoms\n '''\n octahedra = []\n for n, i in enumerate(atoms):\n found = []\n if i.symbol in Perovskite_Structure.B:\n for m, j in enumerate(self.virtual_atoms):\n if j.symbol in Perovskite_Structure.C and self.virtual_atoms.get_distance(n, m) <= self.OCTAHEDRON_BOND_LENGTH_LIMIT:\n found.append(m)\n\n if (periodicity == 3 and len(found) == 6) or (periodicity == 2 and len(found) in [5, 6]):\n octahedra.append([n, found])\n\n if not len(octahedra): raise ModuleError(\"Cannot extract valid octahedra: not enough corner atoms found!\")\n return octahedra"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_tiltplane(self, sequence):\n '''\n Extract the main tilting plane basing on Z coordinate\n '''\n sequence = sorted(sequence, key=lambda x: self.virtual_atoms[ x ].z)\n in_plane = []\n for i in range(0, len(sequence)-4):\n if abs(self.virtual_atoms[ sequence[i] ].z - self.virtual_atoms[ sequence[i+1] ].z) < self.OCTAHEDRON_ATOMS_Z_DIFFERENCE and \\\n abs(self.virtual_atoms[ sequence[i+1] ].z - self.virtual_atoms[ sequence[i+2] ].z) < self.OCTAHEDRON_ATOMS_Z_DIFFERENCE and \\\n abs(self.virtual_atoms[ sequence[i+2] ].z - self.virtual_atoms[ sequence[i+3] ].z) < self.OCTAHEDRON_ATOMS_Z_DIFFERENCE:\n in_plane = [sequence[j] for j in range(i, i+4)]\n return in_plane", "response": "Extract the main tilting plane based on Z coordinate"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_tiltplanes(self, sequence):\n '''\n Extract tilting planes basing on distance map\n '''\n tilting_planes = []\n 
distance_map = []\n\n for i in range(1, len(sequence)):\n distance_map.append([ sequence[i], self.virtual_atoms.get_distance( sequence[0], sequence[i] ) ])\n\n distance_map = sorted(distance_map, key=lambda x: x[1])\n\n if len(distance_map) == 4:\n # surface edge case\n # semi-octahedron at surface edge has only one tilting plane to consider\n sorted_dist = [i[0] for i in distance_map]\n if distance_map[-1][1] - distance_map[-2][1] < 0.5:\n # 1st case: max diff < 0.5 Angstrom,\n # meaning all distances to reference atom are similar,\n # therefore the reference atom is above the searched plane\n # and the searched plane consists of other atoms\n tilting_planes.append( [ i[0] for i in distance_map ] )\n else:\n # 2nd case: reference atom belongs to the searched plane,\n # procedure needs to be repeated with the next atom as reference atom\n candidates = [sequence[0], sorted_dist[-1]]\n next_distance_map = []\n next_distance_map.append([ sorted_dist[1], self.virtual_atoms.get_distance( sorted_dist[0], sorted_dist[1] ) ])\n next_distance_map.append([ sorted_dist[2], self.virtual_atoms.get_distance( sorted_dist[0], sorted_dist[2] ) ])\n next_distance_map = sorted(next_distance_map, key=lambda x: x[1])\n next_sorted_dist = [i[0] for i in next_distance_map]\n\n # the next reference atom is taken above the plane (distances are similar)\n if next_distance_map[1][1] - next_distance_map[0][1] < 0.5: candidates.extend([ next_sorted_dist[0], next_sorted_dist[1] ])\n\n # the next reference atom is taken in the plane (distances are different)\n else: candidates.extend([ sorted_dist[0], next_sorted_dist[1] ])\n tilting_planes.append(candidates)\n\n elif len(distance_map) == 5:\n # full octahedron case\n # full octahedron has 3 different tilting planes (perpendicular in ideal case)\n sorted_dist = [i[0] for i in distance_map]\n\n # 1st plane is found as:\n first_plane = sorted_dist[0:4]\n tilting_planes.append(first_plane)\n distance_map_first_plane = []\n for i in range(1, 4):\n 
distance_map_first_plane.append([ first_plane[i], self.virtual_atoms.get_distance( first_plane[0], first_plane[i] ) ])\n distance_map_first_plane = sorted(distance_map_first_plane, key=lambda x: x[1])\n sorted_first_plane = [i[0] for i in distance_map_first_plane]\n\n # 2nd and 3rd planes are found as:\n tilting_planes.append([ sequence[0], sorted_dist[4], first_plane[0], sorted_first_plane[2] ])\n tilting_planes.append([ sequence[0], sorted_dist[4], sorted_first_plane[0], sorted_first_plane[1] ])\n\n # filter planes by Z according to octahedral spatial compound\n filtered = list(filter(lambda x:\n abs(self.virtual_atoms[ x[0] ].z - self.virtual_atoms[ x[1] ].z) < self.OCTAHEDRON_ATOMS_Z_DIFFERENCE and \\\n abs(self.virtual_atoms[ x[1] ].z - self.virtual_atoms[ x[2] ].z) < self.OCTAHEDRON_ATOMS_Z_DIFFERENCE and \\\n abs(self.virtual_atoms[ x[2] ].z - self.virtual_atoms[ x[3] ].z) < self.OCTAHEDRON_ATOMS_Z_DIFFERENCE,\n tilting_planes\n )) # Py3\n if len(filtered): tilting_planes = filtered\n\n return tilting_planes", "response": "Extract tilting planes based on distance map"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning dict of help_context keys.", "response": "def get_help_keys(self):\n \"\"\"\n Returns dict of help_context keys (description texts used in `EmailRegistry.register()` method).\n \"\"\"\n help_keys = {}\n for k, v in self.help_context.items():\n if isinstance(v, tuple):\n help_keys[k] = v[0]\n else:\n help_keys[k] = v\n return help_keys"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_help_values(self):\n help_values = {}\n for k, v in self.help_context.items():\n if isinstance(v, tuple) and len(v) == 2:\n help_values[k] = v[1]\n else:\n help_values[k] = u\"<%s>\" % k\n return help_values", "response": "Returns dict of help_context values."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nregister an email template 
file path.", "response": "def register(self, path, help_text=None, help_context=None):\n \"\"\"\n Registers email template.\n\n Example usage:\n email_templates.register('hello_template.html', help_text=u'Hello template',\n help_context={'username': u'Name of user in hello expression'})\n\n :param path: Template file path. It will become immutable registry lookup key.\n :param help_text: Help text to describe template in admin site\n :param help_context: Dictionary of possible keys used in the context and description of their content\n\n `help_context` items values may be strings or tuples of two strings. If strings, then email template preview\n will use variable names to fill context, otherwise the second tuple element will become example value.\n\n If an email template is already registered, this will raise AlreadyRegistered.\n \"\"\"\n if path in self._registry:\n raise AlreadyRegistered('The template %s is already registered' % path)\n self._registry[path] = RegistrationItem(path, help_text, help_context)\n logger.debug(\"Registered email template %s\", path)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn the registration item for the specified path.", "response": "def get_registration(self, path):\n \"\"\"\n Returns registration item for specified path.\n\n If an email template is not registered, this will raise NotRegistered.\n \"\"\"\n if not self.is_registered(path):\n raise NotRegistered(\"Email template not registered\")\n return self._registry[path]"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_form_help_text(self, path):\n try:\n form_help_text = self.get_registration(path).as_form_help_text()\n except NotRegistered:\n form_help_text = u\"\"\n return form_help_text", "response": "Returns the form help text for creating email templates."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 
to\nanalyze the parsed results from Flake8 or PEP 8 output and creates HTML file result instances.", "response": "def analyze(self, output_file=None):\n \"\"\"\n Analyzes the parsed results from Flake8 or PEP 8 output and creates FileResult instances\n :param output_file: If specified, output will be written to this file instead of stdout.\n \"\"\"\n\n fr = None\n for path, code, line, char, desc in self.parser.parse():\n\n # Create a new FileResult and register it if we have changed to a new file\n if path not in self.files:\n # Update statistics\n if fr:\n self.update_stats(fr)\n fr = FileResult(path)\n self.files[path] = fr\n\n # Add line to the FileResult\n fr.add_error(code, line, char, desc)\n\n # Add final FileResult to statistics, if any were parsed\n if fr:\n self.update_stats(fr)\n\n # Generate HTML file\n self.generate(output_file=output_file)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef generate(self, output_file=None):\n\n fd = output_file\n\n # Write to stdout if we do not have a file to write to\n if not fd:\n fd = stdout\n else:\n try:\n fd = open(output_file, 'w')\n except IOError as e:\n stderr.write('Unable to open outputfile %s: %s' % (output_file, e))\n\n with open(os.path.join(os.path.dirname(__file__), 'templates/base.html')) as template:\n html = Template(template.read())\n\n # Write potential build messages to stdout if we are writing to HTML file\n # If dest is stdout and supposed to be piped or redirected, build messages like TeamCity's will\n # have no effect, since they require to read from stdin.\n if output_file:\n self.report_build_messages()\n\n # Write our rendered template to the file descriptor\n fd.write(\n html.render(\n files=sorted(self.files.values(), key=lambda x: x.path),\n total_warnings=self.total_warnings,\n total_errors=self.total_errors,\n total_other=self.total_other,\n violations=sorted(\n ((code, count) for code, count in self.violations.items()),\n 
key=lambda x: x[1],\n reverse=True,\n )\n )\n )\n\n # If file descriptor is stdout\n if not output_file:\n fd.flush()\n else:\n fd.close()", "response": "Generates an HTML file based on the data from the Parser object and Jinja2 templates."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreading the data from a FileResult and updates the overall statistics with the sum of the error and warning counts.", "response": "def update_stats(self, file_result):\n \"\"\"\n Reads the data from a FileResult and updates overall statistics\n :param file_result: A FileResult instance\n \"\"\"\n\n for code, count in file_result.violations.items():\n if code not in self.violations:\n self.violations[code] = 0\n self.violations[code] += file_result.violations[code]\n\n if 'E' in code.upper():\n self.total_errors += file_result.violations[code]\n elif 'W' in code.upper():\n self.total_warnings += file_result.violations[code]\n else:\n self.total_other += file_result.violations[code]"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreport the build messages for a specific build agent.", "response": "def report_build_messages(self):\n \"\"\"\n Checks environment variables to see whether pepper8 is run under a build agent such as TeamCity\n and performs the adequate actions to report statistics.\n\n Will not perform any action if HTML output is written to OUTPUT_FILE and not stdout.\n Currently only supports TeamCity.\n\n :return: A list of build message strings destined for stdout\n \"\"\"\n\n if os.getenv('TEAMCITY_VERSION'):\n tc_build_message_warning = \"##teamcity[buildStatisticValue key='pepper8warnings' value='{}']\\n\"\n tc_build_message_error = \"##teamcity[buildStatisticValue key='pepper8errors' value='{}']\\n\"\n\n stdout.write(tc_build_message_warning.format(self.total_warnings))\n stdout.write(tc_build_message_error.format(self.total_errors))\n stdout.flush()"} {"SOURCE": "codesearchnet", "instruction": "Can you 
generate the documentation for the following Python 3 function\ndef phase_transformation(Ne, Nl, rm, xi, return_equations=False):\n # We first define the needed variables\n E0, omega_laser = define_laser_variables(Nl)\n theta = [Symbol('theta'+str(i+1)) for i in range(Ne)]\n\n # We check for the case of xi being a list of matrices.\n if type(xi) == list:\n xi = np.array([[[xi[l][i, j]\n for j in range(Ne)] for i in range(Ne)]\n for l in range(Nl)])\n\n # We find all the equations that the specified problem has to fulfil.\n eqs = []\n for i in range(Ne):\n for j in range(0, i):\n if (rm[0][i, j] != 0) or \\\n (rm[1][i, j] != 0) or \\\n (rm[2][i, j] != 0):\n for l in range(Nl):\n if xi[l, i, j] == 1:\n eqs += [-omega_laser[l] + theta[j] - theta[i]]\n\n if return_equations:\n return eqs\n\n # We solve the system of equations.\n sol = sympy.solve(eqs, theta, dict=True)\n sol = sol[0]\n # We add any missing theta that may be left outside if the system is\n # under determined.\n\n extra_thetas = []\n for i in range(Ne):\n if theta[i] not in sol.keys():\n sol.update({theta[i]: theta[i]})\n extra_thetas += [theta[i]]\n\n # We make the solution such that theta1 + omega_level1 = 0.\n omega_level, omega, gamma = define_frequencies(Ne)\n eq_crit = sol[theta[0]] + omega_level[0]\n ss = sympy.solve(eq_crit, extra_thetas[0])[0]\n ss = {extra_thetas[0]: ss}\n\n sol_simple = [sol[theta[i]].subs(ss)\n for i in range(Ne)]\n\n # sol = []\n # for i in range(Ne):\n # soli = []\n # for l in range(Nl):\n # soli += [sympy.diff(sol_simple[theta[i]], omega_laser[l])]\n # sol += [soli]\n\n return sol_simple", "response": "This function returns a phase transformation theta_i."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef define_simplification(omega_level, xi, Nl):\n try:\n Ne = len(omega_level)\n except:\n Ne = omega_level.shape[0]\n #####################################\n # 1 We calculate the symplifying functions.\n om 
= omega_level[0]\n iu = 0; Neu = 1\n omega_levelu = [om]\n d = {}; di = {0: 0}\n for i in range(Ne):\n if omega_level[i] != om:\n iu += 1\n om = omega_level[i]\n Neu += 1\n omega_levelu += [om]\n di.update({iu: i})\n d.update({i: iu})\n\n def u(i):\n return d[i]\n\n def invu(iu):\n return di[iu]\n\n #####################################\n # 2 We build the simplified xi.\n Neu = len(omega_levelu)\n xiu = np.array([[[xi[l, invu(i), invu(j)]\n for j in range(Neu)] for i in range(Neu)]\n for l in range(Nl)])\n #####################################\n return u, invu, omega_levelu, Neu, xiu", "response": "Return a simplifying function its inverse and simplified frequencies."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef find_omega_min(omega_levelu, Neu, Nl, xiu):\n omega_min = []; iu0 = []; ju0 = []\n for l in range(Nl):\n omegasl = []\n for iu in range(Neu):\n for ju in range(iu):\n if xiu[l, iu, ju] == 1:\n omegasl += [(omega_levelu[iu]-omega_levelu[ju], iu, ju)]\n omegasl = list(sorted(omegasl))\n omega_min += [omegasl[0][0]]\n iu0 += [omegasl[0][1]]\n ju0 += [omegasl[0][2]]\n\n return omega_min, iu0, ju0", "response": "r Find the smallest transition frequency for each field."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef detunings_indices(Neu, Nl, xiu):\n pairs = []\n for l in range(Nl):\n ind = []\n for iu in range(Neu):\n for ju in range(iu):\n if xiu[l, iu, ju] == 1:\n ind += [(iu, ju)]\n pairs += [ind]\n return pairs", "response": "r Get the indices of the transitions of all fields."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef detunings_code(Neu, Nl, pairs, omega_levelu, iu0, ju0):\n code_det = \"\"\n for l in range(Nl):\n for pair in pairs[l]:\n iu, ju = pair\n code_det += \" delta\"+str(l+1)\n code_det += \"_\"+str(iu+1)\n code_det += 
\"_\"+str(ju+1)\n code_det += \" = detuning_knob[\"+str(l)+\"]\"\n corr = -omega_levelu[iu]+omega_levelu[iu0[l]]\n corr = -omega_levelu[ju0[l]]+omega_levelu[ju] + corr\n\n if corr != 0:\n code_det += \" + (\"+str(corr)+\")\"\n code_det += \"\\n\"\n return code_det", "response": "r Get the code to calculate the simplified detunings."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef detunings_rewrite(expr, combs, omega_laser, symb_omega_levelu,\n omega_levelu, iu0, ju0):\n r\"\"\"Rewrite a symbolic expression in terms of allowed transition detunings.\n\n >>> Ne = 6\n >>> Nl = 2\n >>> omega_level = [0.0, 100.0, 100.0, 200.0, 200.0, 300.0]\n >>> xi = np.zeros((Nl, Ne, Ne))\n >>> coup = [[(1, 0), (2, 0)], [(3, 0), (4, 0), (5, 0)]]\n >>> for l in range(Nl):\n ... for pair in coup[l]:\n ... xi[l, pair[0], pair[1]] = 1.0\n ... xi[l, pair[1], pair[0]] = 1.0\n\n >>> aux = define_simplification(omega_level, xi, Nl)\n >>> u, invu, omega_levelu, Neu, xiu = aux\n >>> omega_min, iu0, ju0 = find_omega_min(omega_levelu, Neu, Nl, xiu)\n >>> pairs = detunings_indices(Neu, Nl, xiu)\n >>> combs = detunings_combinations(pairs)\n >>> symb_omega_levelu, omega, gamma = define_frequencies(Neu)\n >>> E0, omega_laser = define_laser_variables(Nl)\n\n Most times it is possible to express these combinations of optical\n frequencies in terms of allowed transition detunings.\n\n >>> expr = +(omega_laser[0]-(symb_omega_levelu[1]-symb_omega_levelu[0]))\n >>> expr += -(omega_laser[1]-(symb_omega_levelu[3]-symb_omega_levelu[0]))\n >>> expr\n -omega_2 + omega_4 + varpi_1 - varpi_2\n >>> detunings_rewrite(expr, combs, omega_laser, symb_omega_levelu,\n ... 
omega_levelu, iu0, ju0)\n '+delta1_2_1-delta2_4_1'\n\n But some times it is not possible:\n\n >>> expr = +(omega_laser[1]-(symb_omega_levelu[1]-symb_omega_levelu[0]))\n >>> expr += -(omega_laser[0]-(symb_omega_levelu[3]-symb_omega_levelu[0]))\n >>> expr\n -omega_2 + omega_4 - varpi_1 + varpi_2\n >>> detunings_rewrite(expr, combs, omega_laser, symb_omega_levelu,\n ... omega_levelu, iu0, ju0)\n '300.000000000000-detuning_knob[0]+detuning_knob[1]'\n\n \"\"\"\n Nl = len(omega_laser)\n Neu = len(symb_omega_levelu)\n # We find the coefficients a_i of the field frequencies.\n a = [diff(expr, omega_laser[l]) for l in range(Nl)]\n\n # We look for a combination of the detunings obtained with the\n # function detunings_code. For each combination we sum the\n # detunings weighed by a_i.\n success = False\n for comb in combs:\n expr_try = 0\n for l in range(Nl):\n expr_try += a[l]*(omega_laser[l] -\n symb_omega_levelu[comb[l][0]] +\n symb_omega_levelu[comb[l][1]])\n if expr-expr_try == 0:\n success = True\n break\n\n assign = \"\"\n if success:\n for l in range(Nl):\n if a[l] != 0:\n if a[l] == 1:\n assign += \"+\"\n elif a[l] == -1:\n assign += \"-\"\n assign += \"delta\"+str(l+1)\n assign += \"_\"+str(comb[l][0]+1)\n assign += \"_\"+str(comb[l][1]+1)\n else:\n # We get the code for Hii using detuning knobs.\n # We find out the remainder terms.\n _remainder = expr - sum([a[l]*omega_laser[l]\n for l in range(Nl)])\n # We find the coefficients of the remainder.\n b = [diff(_remainder, symb_omega_levelu[j]) for j in range(Neu)]\n # We calculate the remainder numerically.\n remainder = sum([b[j]*omega_levelu[j] for j in range(Neu)])\n # We add the contributions from the detuning knobs.\n remainder += sum([a[l]*(omega_levelu[iu0[l]] -\n omega_levelu[ju0[l]])\n for l in range(Nl)])\n assign = str(remainder)\n # We get the code for Hii using detuning knobs.\n for l in range(Nl):\n if a[l] != 0:\n if a[l] == 1:\n assign += \"+\"\n elif a[l] == -1:\n assign += \"-\"\n assign += 
\"detuning_knob[\"+str(l)+\"]\"\n\n return assign", "response": "r Rewrites a symbolic expression in terms of allowed transition detunings."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef fast_hamiltonian(Ep, epsilonp, detuning_knob, rm, omega_level, xi, theta,\n file_name=None):\n r\"\"\"Return a fast function that returns a Hamiltonian as an array.\n\n INPUT:\n\n - ``Ep`` - A list with the electric field amplitudes (real or complex).\n - ``epsilonp`` - A list of the polarization vectors of the fields \\\n (real or complex).\n - ``detuning_knob`` - A list of the detunings of each field (relative \\\n to the transition of lowest energy).\n - ``rm`` - The below-diagonal components\n of the position operator in the cartesian basis:\n\n .. math::\n \\vec{r}^{(-)}_{i j} = [ x_{ij}, y_{ij}, z_{ij} ]\n \\hspace{1cm} \\forall \\hspace{1cm} 0 < j < i\n - ``omega_level`` - The angular frequencies of each state.\n - ``xi`` - An array whose ``xi[l, i, j]`` element is 1 if the \\\n transition :math:`|i\\rangle \\rightarrow |j\\rangle`\\ is driven by field \\\n ``l`` and 0 otherwise.\n - ``theta`` - A list of symbolic expressions representing a phase \\\n transformation.\n - ``file_name`` - A string indicating a file to save the function's \\\n code.\n\n If the arguments Ep, epsilonp, and detuning_knob are symbolic amounts, \\\n the returned function will accept numeric values of Ep, epsilonp, and \\\n detuning_knob as arguments.\n\n All quantities should be in SI units.\n\n EXAMPLES:\n We build an example using states coupled like this:\n\n --- |4> --- |5> --- |6>\n ^ ^ ^\n | | |\n | --- |2> | --- |3> |\n 2 | ^ 2 | ^ | 2\n | 1 | | 1 | |\n | | | | |\n ------------------------------------- |1>\n\n With the numbers on kets labeling states and the plain numbers labeling\n fields.\n\n The number of states and fields:\n >>> Ne = 6\n >>> Nl = 2\n\n We invent some energy levels:\n >>> omega_level = np.array([0.0, 100.0, 
100.0, 200.0, 200.0, 300.0])\n >>> omega_level = omega_level*1e6*2*np.pi\n\n We build the symbol xi, that chooses which laser couples which\n transition.\n >>> xi = np.zeros((Nl, Ne, Ne))\n >>> coup = [[(1, 0), (2, 0)], [(3, 0), (4, 0), (5, 0)]]\n >>> for l in range(Nl):\n ... for pair in coup[l]:\n ... xi[l, pair[0], pair[1]] = 1.0\n ... xi[l, pair[1], pair[0]] = 1.0\n\n We invent some electric dipole matrix elements:\n >>> from scipy.constants import physical_constants\n >>> a0 = physical_constants[\"Bohr radius\"][0]\n >>> rm = np.zeros((3, Ne, Ne))\n >>> for l in range(Nl):\n ... for i in range(Ne):\n ... for j in range(i):\n ... if xi[l, i, j] != 0:\n ... rm[2, i, j] = float(i)*a0\n\n The phase transformation:\n >>> theta = phase_transformation(Ne, Nl, rm, xi)\n\n We define the possible arguments:\n >>> from sympy import symbols, pi\n >>> from fast.symbolic import polarization_vector\n >>> detuning_knob = symbols(\"delta1 delta2\")\n >>> detuning_knob_vals = np.array([-1.0, 3.0])*1e6*2*np.pi\n\n >>> Ep, omega_laser = define_laser_variables(Nl)\n >>> Ep_vals = [1e2, 1e2]\n\n >>> alpha = symbols(\"alpha\")\n >>> epsilon = polarization_vector(0, pi/2, alpha, 0, 1)\n >>> epsilonp = [epsilon, epsilon]\n >>> epsilonp_vals = [[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]]\n\n There are 8 ways to call fast_hamiltonian:\n\n 1 .- Get a function of detunings, field amplitudes, polarizations:\n >>> H1 = fast_hamiltonian(Ep, epsilonp, detuning_knob, rm,\n ... omega_level, xi, theta)\n\n 2 .- Get a function of field amplitudes, polarizations:\n >>> H2 = fast_hamiltonian(Ep, epsilonp, detuning_knob_vals, rm,\n ... omega_level, xi, theta)\n\n 3 .- Get a function of detunings, polarizations:\n >>> H3 = fast_hamiltonian(Ep_vals, epsilonp, detuning_knob, rm,\n ... omega_level, xi, theta)\n\n 4 .- Get a function of detunings, field amplitudes:\n >>> H4 = fast_hamiltonian(Ep, epsilonp_vals, detuning_knob, rm,\n ... 
omega_level, xi, theta)\n\n 5 .- Get a function of detunings:\n >>> H5 = fast_hamiltonian(Ep_vals, epsilonp_vals, detuning_knob, rm,\n ... omega_level, xi, theta)\n\n 6 .- Get a function of field amplitudes:\n >>> H6 = fast_hamiltonian(Ep, epsilonp_vals, detuning_knob_vals, rm,\n ... omega_level, xi, theta)\n\n 7 .- Get a function of polarizations:\n >>> H7 = fast_hamiltonian(Ep_vals, epsilonp, detuning_knob_vals, rm,\n ... omega_level, xi, theta)\n\n 8 .- Get a function of nothing:\n >>> H8 = fast_hamiltonian(Ep_vals, epsilonp_vals, detuning_knob_vals, rm,\n ... omega_level, xi, theta)\n\n\n We test all of these combinations.\n >>> print(H1(Ep_vals, epsilonp_vals, detuning_knob_vals) \\\n ... /hbar_num/2/np.pi*1e-6)\n [[ 0. +0.j 0.6398+0.j 1.2795+0.j 1.9193+0.j 2.5591+0.j 3.1989+0.j]\n [ 0.6398+0.j 1. +0.j 0. +0.j 0. +0.j 0. +0.j 0. +0.j]\n [ 1.2795+0.j 0. +0.j 1. +0.j 0. +0.j 0. +0.j 0. +0.j]\n [ 1.9193+0.j 0. +0.j 0. +0.j -3. +0.j 0. +0.j 0. +0.j]\n [ 2.5591+0.j 0. +0.j 0. +0.j 0. +0.j -3. +0.j 0. +0.j]\n [ 3.1989+0.j 0. +0.j 0. +0.j 0. +0.j 0. +0.j 97. +0.j]]\n\n >>> print(H2(Ep_vals, epsilonp_vals)/hbar_num/2/np.pi*1e-6)\n [[ 0. +0.j 0.6398+0.j 1.2795+0.j 1.9193+0.j 2.5591+0.j 3.1989+0.j]\n [ 0.6398+0.j 1. +0.j 0. +0.j 0. +0.j 0. +0.j 0. +0.j]\n [ 1.2795+0.j 0. +0.j 1. +0.j 0. +0.j 0. +0.j 0. +0.j]\n [ 1.9193+0.j 0. +0.j 0. +0.j -3. +0.j 0. +0.j 0. +0.j]\n [ 2.5591+0.j 0. +0.j 0. +0.j 0. +0.j -3. +0.j 0. +0.j]\n [ 3.1989+0.j 0. +0.j 0. +0.j 0. +0.j 0. +0.j 97. +0.j]]\n\n >>> print(H3(epsilonp_vals, detuning_knob_vals)/hbar_num/2/np.pi*1e-6)\n [[ 0. +0.j 0.6398+0.j 1.2795+0.j 1.9193+0.j 2.5591+0.j 3.1989+0.j]\n [ 0.6398+0.j 1. +0.j 0. +0.j 0. +0.j 0. +0.j 0. +0.j]\n [ 1.2795+0.j 0. +0.j 1. +0.j 0. +0.j 0. +0.j 0. +0.j]\n [ 1.9193+0.j 0. +0.j 0. +0.j -3. +0.j 0. +0.j 0. +0.j]\n [ 2.5591+0.j 0. +0.j 0. +0.j 0. +0.j -3. +0.j 0. +0.j]\n [ 3.1989+0.j 0. +0.j 0. +0.j 0. +0.j 0. +0.j 97. 
+0.j]]\n\n >>> print(H4(Ep_vals, detuning_knob_vals)/hbar_num/2/np.pi*1e-6)\n [[ 0. +0.j 0.6398+0.j 1.2795+0.j 1.9193+0.j 2.5591+0.j 3.1989+0.j]\n [ 0.6398+0.j 1. +0.j 0. +0.j 0. +0.j 0. +0.j 0. +0.j]\n [ 1.2795+0.j 0. +0.j 1. +0.j 0. +0.j 0. +0.j 0. +0.j]\n [ 1.9193+0.j 0. +0.j 0. +0.j -3. +0.j 0. +0.j 0. +0.j]\n [ 2.5591+0.j 0. +0.j 0. +0.j 0. +0.j -3. +0.j 0. +0.j]\n [ 3.1989+0.j 0. +0.j 0. +0.j 0. +0.j 0. +0.j 97. +0.j]]\n\n >>> print(H5(detuning_knob_vals)/hbar_num/2/np.pi*1e-6)\n [[ 0. +0.j 0.6398+0.j 1.2795+0.j 1.9193+0.j 2.5591+0.j 3.1989+0.j]\n [ 0.6398+0.j 1. +0.j 0. +0.j 0. +0.j 0. +0.j 0. +0.j]\n [ 1.2795+0.j 0. +0.j 1. +0.j 0. +0.j 0. +0.j 0. +0.j]\n [ 1.9193+0.j 0. +0.j 0. +0.j -3. +0.j 0. +0.j 0. +0.j]\n [ 2.5591+0.j 0. +0.j 0. +0.j 0. +0.j -3. +0.j 0. +0.j]\n [ 3.1989+0.j 0. +0.j 0. +0.j 0. +0.j 0. +0.j 97. +0.j]]\n\n >>> print(H6(Ep_vals)/hbar_num/2/np.pi*1e-6)\n [[ 0. +0.j 0.6398+0.j 1.2795+0.j 1.9193+0.j 2.5591+0.j 3.1989+0.j]\n [ 0.6398+0.j 1. +0.j 0. +0.j 0. +0.j 0. +0.j 0. +0.j]\n [ 1.2795+0.j 0. +0.j 1. +0.j 0. +0.j 0. +0.j 0. +0.j]\n [ 1.9193+0.j 0. +0.j 0. +0.j -3. +0.j 0. +0.j 0. +0.j]\n [ 2.5591+0.j 0. +0.j 0. +0.j 0. +0.j -3. +0.j 0. +0.j]\n [ 3.1989+0.j 0. +0.j 0. +0.j 0. +0.j 0. +0.j 97. +0.j]]\n\n >>> print(H7(epsilonp_vals)/hbar_num/2/np.pi*1e-6)\n [[ 0. +0.j 0.6398+0.j 1.2795+0.j 1.9193+0.j 2.5591+0.j 3.1989+0.j]\n [ 0.6398+0.j 1. +0.j 0. +0.j 0. +0.j 0. +0.j 0. +0.j]\n [ 1.2795+0.j 0. +0.j 1. +0.j 0. +0.j 0. +0.j 0. +0.j]\n [ 1.9193+0.j 0. +0.j 0. +0.j -3. +0.j 0. +0.j 0. +0.j]\n [ 2.5591+0.j 0. +0.j 0. +0.j 0. +0.j -3. +0.j 0. +0.j]\n [ 3.1989+0.j 0. +0.j 0. +0.j 0. +0.j 0. +0.j 97. +0.j]]\n\n >>> print(H8()/hbar_num/2/np.pi*1e-6)\n [[ 0. +0.j 0.6398+0.j 1.2795+0.j 1.9193+0.j 2.5591+0.j 3.1989+0.j]\n [ 0.6398+0.j 1. +0.j 0. +0.j 0. +0.j 0. +0.j 0. +0.j]\n [ 1.2795+0.j 0. +0.j 1. +0.j 0. +0.j 0. +0.j 0. +0.j]\n [ 1.9193+0.j 0. +0.j 0. +0.j -3. +0.j 0. +0.j 0. +0.j]\n [ 2.5591+0.j 0. +0.j 0. +0.j 0. +0.j -3. +0.j 0. 
+0.j]\n [ 3.1989+0.j 0. +0.j 0. +0.j 0. +0.j 0. +0.j 97. +0.j]]\n\n \"\"\"\n # We determine which arguments are constants.\n if True:\n Nl = len(Ep)\n Ne = np.array(rm[0]).shape[0]\n try:\n Ep = np.array([complex(Ep[l]) for l in range(Nl)])\n variable_Ep = False\n except:\n variable_Ep = True\n\n try:\n epsilonp = [np.array([complex(epsilonp[l][i]) for i in range(3)])\n for l in range(Nl)]\n variable_epsilonp = False\n except:\n variable_epsilonp = True\n\n try:\n detuning_knob = np.array([float(detuning_knob[l])\n for l in range(Nl)])\n variable_detuning_knob = False\n except:\n variable_detuning_knob = True\n\n # We convert rm to a numpy array\n rm = np.array([[[complex(rm[k][i, j])\n for j in range(Ne)] for i in range(Ne)]\n for k in range(3)])\n # We establish the arguments of the output function.\n if True:\n code = \"\"\n code += \"def hamiltonian(\"\n if variable_Ep: code += \"Ep, \"\n if variable_epsilonp: code += \"epsilonp, \"\n if variable_detuning_knob: code += \"detuning_knob, \"\n if code[-2:] == \", \":\n code = code[:-2]\n code += \"):\\n\"\n\n code += ' r\"\"\"A fast calculation of the hamiltonian.\"\"\"\\n'\n code += \" H = np.zeros((\"+str(Ne)+\", \"+str(Ne)+\"), complex)\\n\\n\"\n # We get the code for the below-diagonal elements\n # (Rabi frequencies).\n if True:\n code += \" # We calculate the below-diagonal elements.\\n\"\n for i in range(Ne):\n for j in range(i):\n for l in range(Nl):\n if xi[l, i, j] == 1.0:\n # We get the below-diagonal terms.\n code += \" H[\"+str(i)+\", \"+str(j)+\"] = \"\n # We get the code for Ep.\n if variable_Ep:\n code += \"0.5*Ep[\"+str(l)+\"]\"\n\n else:\n code += str(0.5*Ep[l])\n # We get the code for epsilonp dot rm\n rmij = rm[:, i, j]\n if variable_epsilonp:\n code += \"*cartesian_dot_product(\"\n code += \"epsilonp[\"+str(l)+\"],\"\n code += str(list(rmij*e_num))+\" )\"\n else:\n dp = cartesian_dot_product(epsilonp[l], rmij)\n dp = dp*e_num\n code += \"*(\"+str(dp)+\")\"\n\n code += \"\\n\"\n # We get the 
code for the above-diagonal elements\n # (Conjugate Rabi frequencies).\n if True:\n code += \"\\n\"\n code += \"\"\" # We calculate the above-diagonal elements.\\n\"\"\"\n code += \"\"\" for i in range(\"\"\"+str(Ne)+\"\"\"):\\n\"\"\"\n code += \"\"\" for j in range(i+1, \"\"\"+str(Ne)+\"\"\"):\\n\"\"\"\n code += \"\"\" H[i, j] = H[j, i].conjugate()\\n\\n\"\"\"\n # We get the code for the diagonal elements (detunings).\n if True:\n code += \" # We calculate the diagonal elements.\\n\"\n # We build the degeneration simplification and is inverse (to avoid\n # large combinatorics).\n aux = define_simplification(omega_level, xi, Nl)\n u, invu, omega_levelu, Neu, xiu = aux\n # For each field we find the smallest transition frequency, and its\n # simplified indices.\n omega_min, iu0, ju0 = find_omega_min(omega_levelu, Neu, Nl, xiu)\n #####################################\n # We get the code to calculate the non degenerate detunings.\n pairs = detunings_indices(Neu, Nl, xiu)\n if not variable_detuning_knob:\n code += \" detuning_knob = np.zeros(\"+str(Nl)+\")\\n\"\n for l in range(Nl):\n code += \" detuning_knob[\"+str(l)+\"] = \" +\\\n str(detuning_knob[l])+\"\\n\"\n\n code_det = detunings_code(Neu, Nl, pairs, omega_levelu, iu0, ju0)\n code += code_det\n code += \"\\n\"\n #####################################\n # We find the coefficients a_l that multiply omega_laser_l in\n # H_ii = omega_level_iu + theta_iu = \\sum_i a_i varpi_i + remainder\n _omega_level, omega, gamma = define_frequencies(Ne)\n _omega_levelu, omega, gamma = define_frequencies(Neu)\n E0, omega_laser = define_laser_variables(Nl)\n # So we build all combinations.\n combs = detunings_combinations(pairs)\n for i in range(Ne):\n _Hii = theta[i] + _omega_levelu[u(i)]\n aux = (_Hii, combs, omega_laser,\n _omega_levelu, omega_levelu, iu0, ju0)\n assign = detunings_rewrite(*aux)\n\n if assign != \"\":\n code += \" H[\"+str(i)+\", \"+str(i)+\"] = \"+assign+\"\\n\"\n\n code += \"\\n\"\n code += \"\"\" for i in 
range(\"\"\"+str(Ne)+\"\"\"):\\n\"\"\"\n code += \"\"\" H[i, i] = H[i, i]*\"\"\"+str(hbar_num)+\"\\n\"\n\n code += \" return H\\n\"\n\n if file_name is not None:\n f = file(file_name, \"w\")\n f.write(code)\n f.close()\n\n hamiltonian = code\n exec hamiltonian\n return hamiltonian", "response": "r Returns a fast function that returns a Hamiltonian."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef linear_get_coefficients(coef, rhouv, s, i, j, k, u, v,\n unfolding, matrix_form):\n r\"\"\"Get the indices mu, nu, and term coefficients for linear terms.\n\n We determine mu and nu, the indices labeling the density matrix components\n d rho[mu] /dt = sum_nu A[mu, nu]*rho[nu]\n for this complex and rho_u,v.\n\n >>> from fast.symbolic import define_density_matrix\n >>> Ne = 2\n >>> coef = 1+2j\n >>> rhouv = define_density_matrix(Ne)[1, 1]\n >>> s, i, j, k, u, v = (1, 1, 0, 1, 1, 1)\n >>> unfolding = Unfolding(Ne, real=True, normalized=True)\n\n >>> linear_get_coefficients(coef, rhouv, s, i, j, k, u, v,\n ... 
unfolding, False)\n [[1, 0, -2.00000000000000, False, False]]\n\n \"\"\"\n Ne = unfolding.Ne\n Mu = unfolding.Mu\n # We determine mu, the index labeling the equation.\n mu = Mu(s, i, j)\n\n if unfolding.normalized and u == 0 and v == 0:\n # We find the nu and coefficients for a term of the form.\n # coef*rho_{00} = coef*(1-sum_{i=1}^{Ne-1} rho_{ii})\n if unfolding.real:\n ss = 1\n else:\n ss = 0\n\n mu11 = Mu(ss, 1, 1)\n muNeNe = Mu(ss, Ne-1, Ne-1)\n rhouv_isconjugated = False\n if s == 1:\n coef_list = [[mu, nu, im(coef), matrix_form, rhouv_isconjugated]\n for nu in range(mu11, muNeNe+1)]\n elif s == -1:\n coef_list = [[mu, nu, -re(coef), matrix_form, rhouv_isconjugated]\n for nu in range(mu11, muNeNe+1)]\n elif s == 0:\n coef_list = [[mu, nu, -coef, matrix_form, rhouv_isconjugated]\n for nu in range(mu11, muNeNe+1)]\n return coef_list\n\n #####################################################################\n\n if (unfolding.lower_triangular and\n isinstance(rhouv, sympy.conjugate)):\n u, v = (v, u)\n rhouv_isconjugated = True\n else:\n rhouv_isconjugated = False\n # If the unfolding is real, there are two terms for this\n # component rhouv of equation mu.\n if unfolding.real:\n nur = Mu(1, u, v)\n nui = Mu(-1, u, v)\n else:\n nu = Mu(0, u, v)\n #####################################################################\n # We determine the coefficients for each term.\n if unfolding.real:\n # There are two sets of forumas for the coefficients depending\n # on whether rhouv_isconjugated.\n # re(I*x*conjugate(y)) = -im(x)*re(y) + re(x)*im(y)\n # re(I*x*y) = -im(x)*re(y) - re(x)*im(y)\n # im(I*x*conjugate(y)) = +re(x)*re(y) + im(x)*im(y)\n # im(I*x*y) = +re(x)*re(y) - im(x)*im(y)\n if s == 1:\n # The real part\n if rhouv_isconjugated:\n coef_rerhouv = -im(coef)\n coef_imrhouv = re(coef)\n else:\n coef_rerhouv = -im(coef)\n coef_imrhouv = -re(coef)\n elif s == -1:\n if rhouv_isconjugated:\n coef_rerhouv = re(coef)\n coef_imrhouv = im(coef)\n else:\n coef_rerhouv = 
re(coef)\n coef_imrhouv = -im(coef)\n\n coef_list = [[mu, nur, coef_rerhouv, matrix_form, rhouv_isconjugated]]\n if nui is not None:\n coef_list += [[mu, nui, coef_imrhouv,\n matrix_form, rhouv_isconjugated]]\n else:\n coef_list = [[mu, nu, coef, matrix_form, rhouv_isconjugated]]\n\n return coef_list", "response": "r This function calculates the coefficients for a linear term."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns a fast function that returns the Lindblad terms.", "response": "def fast_lindblad_terms(gamma, unfolding, matrix_form=False, file_name=None,\n return_code=False):\n r\"\"\"Return a fast function that returns the Lindblad terms.\n\n We test a basic two-level system.\n\n >>> import numpy as np\n >>> Ne = 2\n >>> gamma21 = 2*np.pi*6e6\n >>> gamma = np.array([[0.0, -gamma21],\n ... [gamma21, 0.0]])\n >>> rhos = np.array([[0.6, 3+2j],\n ... [3-2j, 0.4]])\n\n An map to unfold the density matrix.\n >>> unfolding = Unfolding(Ne, True, True, True)\n\n We obtain a function to calculate Lindblad terms.\n >>> lindblad_terms = fast_lindblad_terms(gamma, unfolding)\n\n Apply this to a density matrix.\n >>> rhos = np.array([[0.6, 3+2j],\n ... 
[3-2j, 0.4]])\n >>> rhosv = unfolding(rhos)\n >>> rhs_lindblad = lindblad_terms(rhosv)\n >>> print(rhs_lindblad)\n [-15079644.7372 -56548667.7646 37699111.8431]\n\n \"\"\"\n Ne = unfolding.Ne\n Nrho = unfolding.Nrho\n Mu = unfolding.Mu\n\n # We establish the arguments of the output function.\n if True:\n code = \"\"\n code += \"def lindblad_terms(\"\n if not matrix_form: code += \"rho, \"\n if code[-2:] == \", \": code = code[:-2]\n code += \"):\\n\"\n # We initialize the output and auxiliaries.\n if True:\n # We introduce the factor that multiplies all terms.\n if matrix_form:\n code += \" A = np.zeros((\"+str(Nrho)+\", \"+str(Nrho)\n if not unfolding.real:\n code += \"), complex)\\n\\n\"\n else:\n code += \"))\\n\\n\"\n if unfolding.normalized:\n code += \" b = np.zeros((\"+str(Nrho)\n if not unfolding.real:\n code += \"), complex)\\n\\n\"\n else:\n code += \"))\\n\\n\"\n else:\n code += \" rhs = np.zeros((\"+str(Nrho)\n if not unfolding.real:\n code += \"), complex)\\n\\n\"\n else:\n code += \"))\\n\\n\"\n\n for a in range(Ne):\n for b in range(a):\n # The first term is of the from\n # gamma_ab * rho_aa |b>20:\n red_order = []\n for i in range(0, len(order), int(math.floor(len(order)/10))):\n red_order.append(order[i])\n order = red_order\n\n for N in range(len( kwargs['values'][nullstand] )):\n # interpolate for each curve throughout the BZ\n results.append({'color':'#000000', 'data':[], 'ticks':[]})\n d = 0.0\n x = []\n y = []\n bz_vec_ref = [0, 0, 0]\n\n for bz in order:\n y.append( kwargs['values'][bz][N] )\n bz_coords = map(frac2float, bz.split() )\n bz_vec_cur = dot( bz_coords, linalg.inv( kwargs['xyz_matrix'] ).transpose() )\n bz_vec_dir = map(sum, zip(bz_vec_cur, bz_vec_ref))\n bz_vec_ref = bz_vec_cur\n d += linalg.norm( bz_vec_dir )\n x.append(d)\n results[-1]['ticks'].append( [d, bz.replace(' ', '')] )\n\n # end in nullstand point (normally, Gamma)\n #y.append(kwargs['values'][nullstand][N])\n #if d == 0: d+=0.5\n #else: d += linalg.norm( bz_vec_ref 
)\n #x.append(d)\n #results[-1]['ticks'].append( [d, nullstand.replace(' ', '')] )\n\n divider = 10 if len(order)<10 else 1.5\n step = (max(x)-min(x)) / len(kwargs['values']) / divider\n\n xnew = arange(min(x), max(x)+step/2, step).tolist()\n ynew = []\n f = NaturalCubicSpline( array(x), array(y) )\n for i in xnew:\n results[-1]['data'].append([ round( i, 3 ), round( f(i), 3 ) ]) # round to reduce output\n\n return results\n\n elif task == 'dos': # CRYSTAL, VASP, EXCITING\n\n results = []\n\n if 'precomputed' in kwargs:\n total_dos = [[i, kwargs['precomputed']['total'][n]] for n, i in enumerate(kwargs['precomputed']['x'])]\n\n else:\n tdos = TotalDos( kwargs['eigenvalues'], sigma=kwargs['sigma'] )\n tdos.set_draw_area(omega_min=kwargs['omega_min'], omega_max=kwargs['omega_max'], omega_pitch=kwargs['omega_pitch'])\n total_dos = tdos.calculate()\n\n results.append({'label':'total', 'color': '#000000', 'data': total_dos})\n\n if 'precomputed' in kwargs:\n partial_doses = []\n for k in kwargs['precomputed'].keys():\n if k in ['x', 'total']: continue\n partial_doses.append({ 'label': k, 'data': [[i, kwargs['precomputed'][k][n]] for n, i in enumerate(kwargs['precomputed']['x'])] })\n\n elif 'impacts' in kwargs and 'atomtypes' in kwargs:\n # get the order of atoms to evaluate their partial impact\n labels = {}\n types = []\n index, subtractor = 0, 0\n for k, atom in enumerate(kwargs['atomtypes']): # determine the order of atoms for the partial impact of every type\n if atom not in labels:\n #if atom == 'X' and not calc.phonons: # artificial GHOST case for phonons, decrease atomic index\n # subtractor += 1\n # continue\n labels[atom] = index\n types.append([k+1-subtractor])\n index += 1\n else:\n types[ labels[atom] ].append(k+1-subtractor)\n\n pdos = PartialDos( kwargs['eigenvalues'], kwargs['impacts'], sigma=kwargs['sigma'] )\n pdos.set_draw_area(omega_min=kwargs['omega_min'], omega_max=kwargs['omega_max'], omega_pitch=kwargs['omega_pitch'])\n partial_doses = 
pdos.calculate( types, labels )\n\n # add colors to partials\n for i in range(len(partial_doses)):\n if partial_doses[i]['label'] == 'X': color = '#000000'\n elif partial_doses[i]['label'] == 'H': color = '#CCCCCC'\n else:\n try: color = jmol_to_hex( jmol_colors[ chemical_symbols.index(partial_doses[i]['label']) ] )\n except ValueError: color = '#FFCC66'\n partial_doses[i].update({'color': color})\n results.extend(partial_doses)\n\n return results", "response": "bdplotter interpolates band-structure curves along the Brillouin-zone path and assembles total and partial DOS datasets for plotting."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef eplotter(task, data): # CRYSTAL, VASP, EXCITING\n '''\n eplotter is like bdplotter but less complicated\n '''\n results, color, fdata = [], None, []\n\n if task == 'optstory':\n color = '#CC0000'\n clickable = True\n for n, i in enumerate(data):\n fdata.append([n, i[4]])\n fdata = array(fdata)\n fdata[:,1] -= min(fdata[:,1]) # this normalizes values to minimum (by 2nd col)\n fdata = fdata.tolist()\n\n elif task == 'convergence':\n color = '#0066CC'\n clickable = False\n for n, i in enumerate(data):\n fdata.append([n, i])\n\n for n in range(len(fdata)):\n #fdata[n][1] = \"%10.5f\" % fdata[n][1]\n fdata[n][1] = round(fdata[n][1], 5)\n\n results.append({'color': color, 'clickable': clickable, 'data': fdata})\n return results", "response": "eplotter is like bdplotter but less complicated"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nassign a single parser by name, restricting parsing to it.", "response": "def assign_parser(self, name):\n '''\n Restricts parsing\n **name** is a name of the parser class\n NB: this is the PUBLIC method\n @procedure\n '''\n for n, p in list(self.Parsers.items()):\n if n != name:\n del self.Parsers[n]\n if len(self.Parsers) != 1:\n raise RuntimeError('Parser cannot be assigned!')"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function 
for\nconstructing a standardized chemical formula for the given atom sequence.", "response": "def formula(self, atom_sequence):\n '''\n Constructs standardized chemical formula\n NB: this is the PUBLIC method\n @returns formula_str\n '''\n labels = {}\n types = []\n y = 0\n for k, atomi in enumerate(atom_sequence):\n lbl = re.sub(\"[0-9]+\", \"\", atomi).capitalize()\n if lbl not in labels:\n labels[lbl] = y\n types.append([k+1])\n y += 1\n else:\n types[ labels[lbl] ].append(k+1)\n atoms = list(labels.keys())\n atoms = [x for x in self.formula_sequence if x in atoms] + [x for x in atoms if x not in self.formula_sequence] # accordingly\n formula = ''\n for atom in atoms:\n n = len(types[labels[atom]])\n if n==1: n = ''\n else: n = str(n)\n formula += atom + n\n return formula"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef savvyize(self, input_string, recursive=False, stemma=False):\n '''\n Determines which files should be processed\n NB: this is the PUBLIC method\n @returns filenames_list\n '''\n input_string = os.path.abspath(input_string)\n tasks = []\n\n restricted = [ symbol for symbol in self.settings['skip_if_path'] ] if self.settings['skip_if_path'] else []\n\n # given folder\n if os.path.isdir(input_string):\n if recursive:\n for root, dirs, files in os.walk(input_string): # beware of broken links on unix! 
(NB find ~ -type l -exec rm -f {} \\;)\n # skip_if_path directive\n to_filter = []\n for dir in dirs:\n dir = u(dir)\n for rs in restricted:\n if dir.startswith(rs) or dir.endswith(rs):\n to_filter.append(dir)\n break\n dirs[:] = [x for x in dirs if x not in to_filter]\n for filename in files:\n # skip_if_path directive\n filename = u(filename)\n if restricted:\n for rs in restricted:\n if filename.startswith(rs) or filename.endswith(rs):\n break\n else:\n tasks.append(root + os.sep + filename)\n else:\n tasks.append(root + os.sep + filename)\n else:\n for filename in os.listdir(input_string):\n filename = u(filename)\n if os.path.isfile(input_string + os.sep + filename):\n # skip_if_path directive\n if restricted:\n for rs in restricted:\n if filename.startswith(rs) or filename.endswith(rs):\n break\n else:\n tasks.append(input_string + os.sep + filename)\n else:\n tasks.append(input_string + os.sep + filename)\n\n # given full filename\n elif os.path.isfile(input_string):\n tasks.append(input_string) # skip_if_path directive is not applicable here\n\n # given filename stemma\n else:\n if stemma:\n parent = os.path.dirname(input_string)\n for filename in os.listdir(parent):\n filename = u(filename)\n if input_string in parent + os.sep + filename and not os.path.isdir(parent + os.sep + filename):\n # skip_if_path directive\n if restricted:\n for rs in restricted:\n if filename.startswith(rs) or filename.endswith(rs):\n break\n else:\n tasks.append(parent + os.sep + filename)\n else:\n tasks.append(parent + os.sep + filename)\n return tasks", "response": "This method will take a string and return a list of filenames that can be processed."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _parse(self, parsable, parser_name):\n '''\n Low-level parsing\n NB: this is the PRIVATE method\n @returns tilde_obj, error\n '''\n calc, error = None, None\n try:\n for calc in self.Parsers[parser_name].iparse(parsable):\n yield calc, 
None\n return\n except RuntimeError as e:\n error = \"routine %s parser error in %s: %s\" % ( parser_name, parsable, e )\n except:\n exc_type, exc_value, exc_tb = sys.exc_info()\n error = \"unexpected %s parser error in %s:\\n %s\" % ( parser_name, parsable, \"\".join(traceback.format_exception( exc_type, exc_value, exc_tb )) )\n yield None, error", "response": "Low-level parsing: runs the named parser over a parsable object, yielding (calc, error) pairs."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef parse(self, parsable):\n '''\n High-level parsing:\n determines the data format\n and combines parent-children outputs\n NB: this is the PUBLIC method\n @returns tilde_obj, error\n '''\n calc, error = None, None\n try:\n f = open(parsable, 'rb')\n if is_binary_string(f.read(2048)):\n yield None, 'was read (binary data)...'\n return\n f.close()\n except IOError:\n yield None, 'read error!'\n return\n\n f = open(parsable, 'r', errors='surrogateescape') if six.PY3 else open(parsable, 'r') # open the file once again with right mode\n f.seek(0)\n counter, detected = 0, False\n\n while not detected:\n\n if counter > 700: break # criterion: parser must detect its working format until here\n\n fingerprint = f.readline()\n if not fingerprint:\n break\n for name, Parser in self.Parsers.items():\n if Parser.fingerprints(fingerprint):\n for calc, error in self._parse(parsable, name):\n detected = True\n # check if we parsed something reasonable\n if not error and calc:\n\n if not len(calc.structures) or not len(calc.structures[-1]):\n error = 'Valid structure is not present!'\n\n if calc.info['finished'] == 0x1:\n calc.warning( 'This calculation is not correctly finished!' 
)\n\n if not calc.info['H']:\n error = 'XC potential is not present!'\n\n yield calc, error\n\n if detected:\n break\n counter += 1\n f.close()\n\n # unsupported data occurred\n if not detected:\n yield None, 'was read...'", "response": "High-level parsing: detects the data format, combines parent-children outputs and yields (tilde_obj, error) pairs."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nclassify the current structure using the hierarchy API and return the tilde_obj and the error message", "response": "def classify(self, calc, symprec=None):\n '''\n Reasons on normalization, invokes hierarchy API and prepares calc for saving\n NB: this is the PUBLIC method\n @returns tilde_obj, error\n '''\n error = None\n symbols = calc.structures[-1].get_chemical_symbols()\n calc.info['formula'] = self.formula(symbols)\n calc.info['cellpar'] = cell_to_cellpar(calc.structures[-1].cell).tolist()\n if calc.info['input']:\n try:\n calc.info['input'] = str(calc.info['input'], errors='ignore')\n except:\n pass\n\n # applying filter: todo\n if (calc.info['finished'] == 0x1 and self.settings['skip_unfinished']) or \\\n (not calc.info['energy'] and self.settings['skip_notenergy']):\n return None, 'data do not satisfy the active filter'\n\n # naive elements extraction\n fragments = re.findall(r'([A-Z][a-z]?)(\\d*[?:.\\d+]*)?', calc.info['formula'])\n for fragment in fragments:\n if fragment[0] == 'X':\n continue\n calc.info['elements'].append(fragment[0])\n calc.info['contents'].append(int(fragment[1])) if fragment[1] else calc.info['contents'].append(1)\n\n # extend hierarchy with modules\n for C_obj in self.Classifiers:\n try:\n calc = C_obj['classify'](calc)\n except:\n exc_type, exc_value, exc_tb = sys.exc_info()\n error = \"Fatal error during classification:\\n %s\" % \"\".join(traceback.format_exception( exc_type, exc_value, exc_tb ))\n return None, error\n\n # chemical ratios\n if not len(calc.info['standard']):\n if len(calc.info['elements']) == 1: calc.info['expanded'] = 1\n if not calc.info['expanded']:\n 
calc.info['expanded'] = reduce(gcd, calc.info['contents'])\n for n, i in enumerate([x//calc.info['expanded'] for x in calc.info['contents']]):\n if i == 1:\n calc.info['standard'] += calc.info['elements'][n]\n else:\n calc.info['standard'] += calc.info['elements'][n] + str(i)\n if not calc.info['expanded']:\n del calc.info['expanded']\n\n calc.info['nelem'] = len(calc.info['elements'])\n if calc.info['nelem'] > 13:\n calc.info['nelem'] = 13\n calc.info['natom'] = len(symbols)\n\n # periodicity\n if calc.info['periodicity'] == 0:\n calc.info['periodicity'] = 0x4\n elif calc.info['periodicity'] == -1:\n calc.info['periodicity'] = 0x5\n\n # general calculation type reasoning\n if (calc.structures[-1].get_initial_charges() != 0).sum():\n calc.info['calctypes'].append(0x4) # numpy count_nonzero implementation\n if (calc.structures[-1].get_initial_magnetic_moments() != 0).sum():\n calc.info['calctypes'].append(0x5)\n if calc.phonons['modes']:\n calc.info['calctypes'].append(0x6)\n if calc.phonons['ph_k_degeneracy']:\n calc.info['calctypes'].append(0x7)\n if calc.phonons['dielectric_tensor']:\n calc.info['calctypes'].append(0x8) # CRYSTAL-only!\n if len(calc.tresholds) > 1:\n calc.info['calctypes'].append(0x3)\n calc.info['optgeom'] = True\n if calc.electrons['dos'] or calc.electrons['bands']:\n calc.info['calctypes'].append(0x2)\n if calc.info['energy']:\n calc.info['calctypes'].append(0x1)\n calc.info['spin'] = 0x2 if calc.info['spin'] else 0x1\n\n # TODO: standardize\n if 'vac' in calc.info:\n if 'X' in symbols:\n calc.info['techs'].append('vacancy defect: ghost')\n else:\n calc.info['techs'].append('vacancy defect: void space')\n\n calc.info['lata'] = round(calc.info['cellpar'][0], 3)\n calc.info['latb'] = round(calc.info['cellpar'][1], 3)\n calc.info['latc'] = round(calc.info['cellpar'][2], 3)\n calc.info['latalpha'] = round(calc.info['cellpar'][3], 2)\n calc.info['latbeta'] = round(calc.info['cellpar'][4], 2)\n calc.info['latgamma'] = round(calc.info['cellpar'][5], 
2)\n\n # invoke symmetry finder\n found = SymmetryHandler(calc, symprec)\n if found.error:\n return None, found.error\n\n calc.info['sg'] = found.i\n calc.info['ng'] = found.n\n calc.info['symmetry'] = found.symmetry\n calc.info['spg'] = \"%s — %s\" % (found.n, found.i)\n calc.info['pg'] = found.pg\n calc.info['dg'] = found.dg\n\n # phonons\n if calc.phonons['dfp_magnitude']: calc.info['dfp_magnitude'] = round(calc.phonons['dfp_magnitude'], 3)\n if calc.phonons['dfp_disps']: calc.info['dfp_disps'] = len(calc.phonons['dfp_disps'])\n if calc.phonons['modes']:\n calc.info['n_ph_k'] = len(calc.phonons['ph_k_degeneracy']) if calc.phonons['ph_k_degeneracy'] else 1\n\n #calc.info['rgkmax'] = calc.electrons['rgkmax'] # LAPW\n\n # electronic properties reasoning by bands\n if calc.electrons['bands']:\n if calc.electrons['bands'].is_conductor():\n calc.info['etype'] = 0x2\n calc.info['bandgap'] = 0.0\n calc.info['bandgaptype'] = 0x1\n else:\n try:\n gap, is_direct = calc.electrons['bands'].get_bandgap()\n except ElectronStructureError as e:\n calc.electrons['bands'] = None\n calc.warning(e.value)\n else:\n calc.info['etype'] = 0x1\n calc.info['bandgap'] = round(gap, 2)\n calc.info['bandgaptype'] = 0x2 if is_direct else 0x3\n\n # electronic properties reasoning by DOS\n if calc.electrons['dos']:\n try: gap = round(calc.electrons['dos'].get_bandgap(), 2)\n except ElectronStructureError as e:\n calc.electrons['dos'] = None\n calc.warning(e.value)\n else:\n if calc.electrons['bands']: # check coincidence\n if abs(calc.info['bandgap'] - gap) > 0.2:\n calc.warning('Band gaps in DOS and bands data differ considerably! 
The latter will be considered.')\n else:\n calc.info['bandgap'] = gap\n if gap:\n calc.info['etype'] = 0x1\n else:\n calc.info['etype'] = 0x2\n calc.info['bandgaptype'] = 0x1\n\n # TODO: beware to add something new to an existing item!\n # TODO2: unknown or absent?\n for entity in self.hierarchy:\n if entity['creates_topic'] and not entity['optional'] and not calc.info.get(entity['source']):\n if entity['enumerated']:\n calc.info[ entity['source'] ] = [0x0] if entity['multiple'] else 0x0\n else:\n calc.info[ entity['source'] ] = ['none'] if entity['multiple'] else 'none'\n\n calc.benchmark() # this call must be at the very end of parsing\n\n return calc, error"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ninvoke module(s) API NB: this is the PUBLIC method @returns apps_dict", "response": "def postprocess(self, calc, with_module=None, dry_run=None):\n '''\n Invokes module(s) API\n NB: this is the PUBLIC method\n @returns apps_dict\n '''\n for appname, appclass in self.Apps.items():\n if with_module and with_module != appname: continue\n\n run_permitted = False\n\n # scope-conditions\n if appclass['apptarget']:\n for key in appclass['apptarget']:\n negative = False\n if str(appclass['apptarget'][key]).startswith('!'):\n negative = True\n scope_prop = appclass['apptarget'][key][1:]\n else:\n scope_prop = appclass['apptarget'][key]\n\n if key in calc.info:\n # non-strict comparison (\"CRYSTAL\" matches \"CRYSTAL09 v2.0\")\n if (str(scope_prop) in str(calc.info[key]) or scope_prop == calc.info[key]) != negative: # true if only one, but not both\n run_permitted = True\n else:\n run_permitted = False\n break\n\n else: run_permitted = True\n\n # module code running\n if run_permitted:\n calc.apps[appname] = {'error': None, 'data': None}\n if dry_run:\n continue\n try:\n AppInstance = appclass['appmodule'](calc)\n except:\n exc_type, exc_value, exc_tb = sys.exc_info()\n errmsg = \"Fatal error in %s module:\\n %s\" % ( appname, \" 
\".join(traceback.format_exception( exc_type, exc_value, exc_tb )) )\n calc.apps[appname]['error'] = errmsg\n calc.warning( errmsg )\n else:\n try:\n calc.apps[appname]['data'] = getattr(AppInstance, appclass['appdata'])\n except AttributeError:\n errmsg = 'No appdata-defined property found for %s module!' % appname\n calc.apps[appname]['error'] = errmsg\n calc.warning( errmsg )\n return calc"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef save(self, calc, session):\n '''\n Saves tilde_obj into the database\n NB: this is the PUBLIC method\n @returns checksum, error\n '''\n checksum = calc.get_checksum()\n\n try:\n existing_calc = session.query(model.Calculation).filter(model.Calculation.checksum == checksum).one()\n except NoResultFound:\n pass\n else:\n del calc\n return None, \"This calculation already exists!\"\n\n if not calc.download_size:\n for f in calc.related_files:\n calc.download_size += os.stat(f).st_size\n\n ormcalc = model.Calculation(checksum = checksum)\n\n if calc._calcset:\n ormcalc.meta_data = model.Metadata(chemical_formula = calc.info['standard'], download_size = calc.download_size)\n\n for child in session.query(model.Calculation).filter(model.Calculation.checksum.in_(calc._calcset)).all():\n ormcalc.children.append(child)\n ormcalc.siblings_count = len(ormcalc.children)\n ormcalc.nested_depth = calc._nested_depth\n\n else:\n # prepare phonon data for saving\n # this is actually a dict to list conversion TODO re-structure this\n if calc.phonons['modes']:\n phonons_json = []\n\n for bzpoint, frqset in calc.phonons['modes'].items():\n # re-orientate eigenvectors\n for i in range(0, len(calc.phonons['ph_eigvecs'][bzpoint])):\n for j in range(0, len(calc.phonons['ph_eigvecs'][bzpoint][i])//3):\n eigv = array([calc.phonons['ph_eigvecs'][bzpoint][i][j*3], calc.phonons['ph_eigvecs'][bzpoint][i][j*3+1], calc.phonons['ph_eigvecs'][bzpoint][i][j*3+2]])\n R = dot( eigv, 
calc.structures[-1].cell ).tolist()\n calc.phonons['ph_eigvecs'][bzpoint][i][j*3], calc.phonons['ph_eigvecs'][bzpoint][i][j*3+1], calc.phonons['ph_eigvecs'][bzpoint][i][j*3+2] = [round(x, 3) for x in R]\n\n try: irreps = calc.phonons['irreps'][bzpoint]\n except KeyError:\n empty = []\n for i in range(len(frqset)):\n empty.append('')\n irreps = empty\n\n phonons_json.append({ 'bzpoint':bzpoint, 'freqs':frqset, 'irreps':irreps, 'ph_eigvecs':calc.phonons['ph_eigvecs'][bzpoint] })\n if bzpoint == '0 0 0':\n phonons_json[-1]['ir_active'] = calc.phonons['ir_active']\n phonons_json[-1]['raman_active'] = calc.phonons['raman_active']\n if calc.phonons['ph_k_degeneracy']:\n phonons_json[-1]['ph_k_degeneracy'] = calc.phonons['ph_k_degeneracy'][bzpoint]\n\n ormcalc.phonons = model.Phonons()\n ormcalc.spectra.append( model.Spectra(kind = model.Spectra.PHONON, eigenvalues = json.dumps(phonons_json)) )\n\n # prepare electron data for saving TODO re-structure this\n for task in ['dos', 'bands']: # projected?\n if calc.electrons[task]:\n calc.electrons[task] = calc.electrons[task].todict()\n\n if calc.electrons['dos'] or calc.electrons['bands']:\n ormcalc.electrons = model.Electrons(gap = calc.info['bandgap'])\n if 'bandgaptype' in calc.info:\n ormcalc.electrons.is_direct = 1 if calc.info['bandgaptype'] == 'direct' else -1\n ormcalc.spectra.append(model.Spectra(\n kind = model.Spectra.ELECTRON,\n dos = json.dumps(calc.electrons['dos']),\n bands = json.dumps(calc.electrons['bands']),\n projected = json.dumps(calc.electrons['projected']),\n eigenvalues = json.dumps(calc.electrons['eigvals'])\n ))\n\n # construct ORM for other props\n calc.related_files = list(map(virtualize_path, calc.related_files))\n ormcalc.meta_data = model.Metadata(location = calc.info['location'], finished = calc.info['finished'], raw_input = calc.info['input'], modeling_time = calc.info['duration'], chemical_formula = html_formula(calc.info['standard']), download_size = calc.download_size, filenames = 
json.dumps(calc.related_files))\n\n codefamily = model.Codefamily.as_unique(session, content = calc.info['framework'])\n codeversion = model.Codeversion.as_unique(session, content = calc.info['prog'])\n\n codeversion.instances.append( ormcalc.meta_data )\n codefamily.versions.append( codeversion )\n\n pot = model.Pottype.as_unique(session, name = calc.info['H'])\n pot.instances.append(ormcalc)\n ormcalc.recipinteg = model.Recipinteg(kgrid = calc.info['k'], kshift = calc.info['kshift'], smearing = calc.info['smear'], smeartype = calc.info['smeartype'])\n ormcalc.basis = model.Basis(kind = calc.info['ansatz'], content = json.dumps(calc.electrons['basis_set']) if calc.electrons['basis_set'] else None)\n ormcalc.energy = model.Energy(convergence = json.dumps(calc.convergence), total = calc.info['energy'])\n\n ormcalc.spacegroup = model.Spacegroup(n=calc.info['ng'])\n ormcalc.struct_ratios = model.Struct_ratios(chemical_formula=calc.info['standard'], formula_units=calc.info['expanded'], nelem=calc.info['nelem'], dimensions=calc.info['dims'])\n if len(calc.tresholds) > 1:\n ormcalc.struct_optimisation = model.Struct_optimisation(tresholds=json.dumps(calc.tresholds), ncycles=json.dumps(calc.ncycles))\n\n for n, ase_repr in enumerate(calc.structures):\n is_final = True if n == len(calc.structures)-1 else False\n struct = model.Structure(step = n, final = is_final)\n\n s = cell_to_cellpar(ase_repr.cell)\n struct.lattice = model.Lattice(a=s[0], b=s[1], c=s[2], alpha=s[3], beta=s[4], gamma=s[5], a11=ase_repr.cell[0][0], a12=ase_repr.cell[0][1], a13=ase_repr.cell[0][2], a21=ase_repr.cell[1][0], a22=ase_repr.cell[1][1], a23=ase_repr.cell[1][2], a31=ase_repr.cell[2][0], a32=ase_repr.cell[2][1], a33=ase_repr.cell[2][2])\n\n #rmts = ase_repr.get_array('rmts') if 'rmts' in ase_repr.arrays else [None for j in range(len(ase_repr))]\n charges = ase_repr.get_array('charges') if 'charges' in ase_repr.arrays else [None for j in range(len(ase_repr))]\n magmoms = 
ase_repr.get_array('magmoms') if 'magmoms' in ase_repr.arrays else [None for j in range(len(ase_repr))]\n for n, i in enumerate(ase_repr):\n struct.atoms.append( model.Atom( number=chemical_symbols.index(i.symbol), x=i.x, y=i.y, z=i.z, charge=charges[n], magmom=magmoms[n] ) )\n\n ormcalc.structures.append(struct)\n # TODO Forces\n\n ormcalc.uigrid = model.Grid(info=json.dumps(calc.info))\n\n # tags ORM\n uitopics = []\n for entity in self.hierarchy:\n\n if not entity['creates_topic']:\n continue\n\n if entity['multiple'] or calc._calcset:\n for item in calc.info.get( entity['source'], [] ):\n uitopics.append( model.topic(cid=entity['cid'], topic=item) )\n else:\n topic = calc.info.get(entity['source'])\n if topic or not entity['optional']:\n uitopics.append( model.topic(cid=entity['cid'], topic=topic) )\n\n uitopics = [model.Topic.as_unique(session, cid=x.cid, topic=\"%s\" % x.topic) for x in uitopics]\n\n ormcalc.uitopics.extend(uitopics)\n\n if calc._calcset:\n session.add(ormcalc)\n else:\n session.add_all([codefamily, codeversion, pot, ormcalc])\n\n session.commit()\n del calc, ormcalc\n return checksum, None", "response": "Saves the tilde_obj into the database."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ndeleting calc entry by checksum entirely from the database.", "response": "def purge(self, session, checksum):\n '''\n Deletes calc entry by checksum entirely from the database\n NB source files on disk are not deleted\n NB: this is the PUBLIC method\n @returns error\n '''\n C = session.query(model.Calculation).get(checksum)\n\n if not C:\n return 'Calculation does not exist!'\n\n # dataset deletion includes editing the whole dataset hierarchical tree (if any)\n if C.siblings_count:\n C_meta = session.query(model.Metadata).get(checksum)\n higher_lookup = {}\n more = C.parent\n distance = 0\n while True:\n distance += 1\n higher, more = more, []\n if not higher:\n break\n for item in higher:\n try:\n 
higher_lookup[distance].add(item)\n except KeyError:\n higher_lookup[distance] = set([item])\n if item.parent:\n more += item.parent\n for distance, members in higher_lookup.items():\n for member in members:\n if distance == 1:\n member.siblings_count -= 1\n\n if not member.siblings_count:\n return 'The parent dataset contains only one (current) item, please, delete parent dataset first!'\n\n member.meta_data.download_size -= C_meta.download_size\n session.add(member)\n # low-level entry deletion deals with additional tables\n else:\n session.execute( model.delete( model.Spectra ).where( model.Spectra.checksum == checksum) )\n session.execute( model.delete( model.Electrons ).where( model.Electrons.checksum == checksum ) )\n session.execute( model.delete( model.Phonons ).where( model.Phonons.checksum == checksum ) )\n session.execute( model.delete( model.Recipinteg ).where( model.Recipinteg.checksum == checksum ) )\n session.execute( model.delete( model.Basis ).where( model.Basis.checksum == checksum ) )\n session.execute( model.delete( model.Energy ).where( model.Energy.checksum == checksum ) )\n session.execute( model.delete( model.Spacegroup ).where( model.Spacegroup.checksum == checksum ) )\n session.execute( model.delete( model.Struct_ratios ).where( model.Struct_ratios.checksum == checksum ) )\n session.execute( model.delete( model.Struct_optimisation ).where( model.Struct_optimisation.checksum == checksum ) )\n\n struct_ids = [ int(i[0]) for i in session.query(model.Structure.struct_id).filter(model.Structure.checksum == checksum).all() ]\n for struct_id in struct_ids:\n session.execute( model.delete( model.Atom ).where( model.Atom.struct_id == struct_id ) )\n session.execute( model.delete( model.Lattice ).where( model.Lattice.struct_id == struct_id ) )\n session.execute( model.delete( model.Structure ).where( model.Structure.checksum == checksum ) )\n\n # for all types of entries\n if len(C.references):\n left_references = [ int(i[0]) for i in 
session.query(model.Reference.reference_id).join(model.metadata_references, model.Reference.reference_id == model.metadata_references.c.reference_id).filter(model.metadata_references.c.checksum == checksum).all() ]\n session.execute( model.delete( model.metadata_references ).where( model.metadata_references.c.checksum == checksum ) )\n\n # remove the whole citation?\n for lc in left_references:\n if not (session.query(model.metadata_references.c.checksum).filter(model.metadata_references.c.reference_id == lc).count()):\n session.execute( model.delete( model.Reference ).where(model.Reference.reference_id == lc) )\n\n # TODO rewrite with cascading\n session.execute( model.delete( model.Metadata ).where( model.Metadata.checksum == checksum ) )\n\n session.execute( model.delete( model.Grid ).where( model.Grid.checksum == checksum ) )\n session.execute( model.delete( model.tags ).where( model.tags.c.checksum == checksum ) )\n\n session.execute( model.delete( model.calcsets ).where( model.calcsets.c.children_checksum == checksum ) )\n session.execute( model.delete( model.calcsets ).where( model.calcsets.c.parent_checksum == checksum ) )\n session.execute( model.delete( model.Calculation ).where( model.Calculation.checksum == checksum ) )\n session.commit()\n # NB tables topics, codefamily, codeversion, pottype are mostly irrelevant and, if needed, should be cleaned manually\n return False"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nmerges the given checksums into a new calc called DATASET", "response": "def merge(self, session, checksums, title):\n '''\n Merges calcs into a new calc called DATASET\n NB: this is the PUBLIC method\n @returns DATASET, error\n '''\n calc = Output(calcset=checksums)\n\n cur_depth = 0\n\n for nested_depth, grid_item, download_size in session.query(model.Calculation.nested_depth, model.Grid.info, model.Metadata.download_size).filter(model.Calculation.checksum == model.Grid.checksum, 
model.Grid.checksum == model.Metadata.checksum, model.Calculation.checksum.in_(checksums)).all():\n\n if nested_depth > cur_depth: cur_depth = nested_depth\n\n grid_item = json.loads(grid_item)\n\n for entity in self.hierarchy:\n\n topic = grid_item.get(entity['source'])\n if not topic:\n continue\n\n if not isinstance(topic, list):\n topic = [ topic ]\n\n calc.info[ entity['source'] ] = list(set( calc.info.get(entity['source'], []) + topic ))\n\n calc.download_size += download_size\n\n if not calc.download_size:\n return None, 'Wrong parameters provided!'\n\n calc._nested_depth = cur_depth + 1\n\n calc.info['standard'] = title\n\n # generate fake checksum\n calc._checksum = calc.get_collective_checksum()\n\n return calc, None"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\naugment a DATASET with some calcs with some addendum.", "response": "def augment(self, session, parent, addendum):\n '''\n Augments a DATASET with some calcs\n NB: this is the PUBLIC method\n @returns error\n '''\n parent_calc = session.query(model.Calculation).get(parent)\n if not parent_calc or not parent_calc.siblings_count:\n return 'Dataset is erroneously selected!'\n\n existing_children, filtered_addendum = [child.checksum for child in parent_calc.children], []\n\n for child in addendum:\n if not child in existing_children:\n filtered_addendum.append(child)\n\n if not filtered_addendum:\n return 'All these data are already present in this dataset.'\n if parent_calc.checksum in filtered_addendum:\n return 'A dataset cannot be added into itself.'\n\n higher_lookup = {}\n more = parent_calc.parent\n distance = 0\n while True:\n distance += 1\n higher, more = more, []\n if not higher:\n break\n for item in higher:\n try:\n higher_lookup[distance].add(item)\n except KeyError:\n higher_lookup[distance] = set([item])\n if item.parent:\n more += item.parent\n\n for members in list(higher_lookup.values()):\n for member in members:\n if member.checksum in 
filtered_addendum:\n return 'A parent dataset cannot be added to its children dataset.'\n\n parent_meta = session.query(model.Metadata).get(parent)\n parent_grid = session.query(model.Grid).get(parent)\n info_obj = json.loads(parent_grid.info)\n\n for nested_depth, grid_item, download_size in session.query(model.Calculation.nested_depth, model.Grid.info, model.Metadata.download_size).filter(model.Calculation.checksum == model.Grid.checksum, model.Grid.checksum == model.Metadata.checksum, model.Calculation.checksum.in_(filtered_addendum)).all():\n\n if nested_depth >= parent_calc.nested_depth:\n parent_calc.nested_depth = nested_depth + 1\n\n grid_item = json.loads(grid_item)\n\n for entity in self.hierarchy:\n\n topic = grid_item.get(entity['source'])\n if not topic:\n continue\n\n if entity['source'] == 'standard':\n topic = []\n\n if not isinstance(topic, list):\n topic = [ topic ]\n\n existing_term = info_obj.get(entity['source'], [])\n if not isinstance(existing_term, list):\n existing_term = [ existing_term ] # TODO\n\n info_obj[ entity['source'] ] = list(set( existing_term + topic ))\n\n parent_meta.download_size += download_size\n\n info_obj['standard'] = info_obj['standard'][0] # TODO\n parent_grid.info = json.dumps(info_obj)\n\n # tags ORM\n for entity in self.hierarchy:\n\n if not entity['creates_topic']:\n continue\n\n for item in info_obj.get( entity['source'], [] ):\n parent_calc.uitopics.append( model.Topic.as_unique(session, cid=entity['cid'], topic=\"%s\" % item) )\n\n for child in session.query(model.Calculation).filter(model.Calculation.checksum.in_(filtered_addendum)).all():\n parent_calc.children.append(child)\n\n parent_calc.siblings_count = len(parent_calc.children)\n\n for distance, members in higher_lookup.items():\n for member in members:\n d = parent_calc.nested_depth - member.nested_depth + distance\n if d > 0:\n member.nested_depth += d\n\n member.meta_data.download_size += parent_meta.download_size # FIXME\n session.add(member)\n\n 
session.add_all([parent_calc, parent_meta, parent_grid])\n session.commit()\n return False"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ntest whether given filePath is an executable.", "response": "def isCLIExecutable(filePath):\n \"\"\"Test whether given `filePath` is an executable. Does not really\n check whether the executable is a CLI (e.g. whether it supports\n --xml), but can be used to filter out non-executables within a\n directory with CLI modules.\n \"\"\"\n # see qSlicerUtils::isCLIExecutable\n # e.g. https://github.com/Slicer/Slicer/blob/master/Base/QTCore/qSlicerUtils.cxx\n if not os.path.isfile(filePath):\n return False\n if sys.platform.startswith('win'):\n filePath = filePath.lower() # be case insensitive\n return filePath.endswith(\".exe\") or filePath.endswith(\".bat\")\n else:\n # differing from qSlicerUtils here, which does not check for executable bits\n # (this way we can differentiate between XML files saved with the same name\n # as the executables and the executables themselves)\n if not os.access(filePath, os.X_OK):\n return False\n return not '.' in os.path.basename(filePath)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef listCLIExecutables(baseDir):\n return [path for path in glob.glob(os.path.join(os.path.normpath(baseDir), '*'))\n if isCLIExecutable(path)]", "response": "Return list of paths to valid CLI executables within baseDir. 
This calls isCLIExecutable on all files within baseDir."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef popenCLIExecutable(command, **kwargs):\n\n cliExecutable = command[0]\n\n # hack (at least, this does not scale to other module sources):\n # detect Slicer modules and run through wrapper script setting up\n # appropriate runtime environment\n ma = re_slicerSubPath.search(cliExecutable)\n if ma:\n wrapper = os.path.join(cliExecutable[:ma.start()], 'Slicer')\n if sys.platform.startswith('win'):\n wrapper += '.exe'\n if os.path.exists(wrapper):\n command = [wrapper, '--launcher-no-splash', '--launch'] + command\n\n return subprocess.Popen(command, **kwargs)", "response": "Wrapper around subprocess. Popen that tries to detect Slicer CLI modules and launches them through the Slicer launcher."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ncall given cliExecutable with -- xml and return xml ElementTree representation of standard output.", "response": "def getXMLDescription(cliExecutable, **kwargs):\n \"\"\"Call given cliExecutable with --xml and return xml ElementTree\n representation of standard output.\n\n Any kwargs are passed on to subprocess.Popen() (via popenCLIExecutable()).\"\"\"\n\n command = [cliExecutable, '--xml']\n \n stdout, stdoutFilename = tempfile.mkstemp('.stdout')\n stderr, stderrFilename = tempfile.mkstemp('.stderr')\n try:\n p = popenCLIExecutable(command, stdout = stdout, stderr = stderr, **kwargs)\n ec = p.wait()\n with open(stderrFilename) as f:\n for line in f:\n logger.warning('%s: %s' % (os.path.basename(cliExecutable), line[:-1]))\n if ec:\n raise RuntimeError(\"Calling %s failed (exit code %d)\" % (cliExecutable, ec))\n with open(stdoutFilename) as f:\n return ET.parse(f)\n finally:\n os.close(stdout)\n os.close(stderr)\n os.unlink(stdoutFilename)\n os.unlink(stderrFilename)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 
3 function that can\ngenerate filenames of the form specktre_123AB. png.", "response": "def _candidate_filenames():\n \"\"\"Generates filenames of the form 'specktre_123AB.png'.\n\n The random noise is five characters long, which allows for\n 62^5 = 916 million possible filenames.\n\n \"\"\"\n while True:\n random_stub = ''.join([\n random.choice(string.ascii_letters + string.digits)\n for _ in range(5)\n ])\n yield 'specktre_%s.png' % random_stub"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef draw_tiling(coord_generator, filename):\n im = Image.new('L', size=(CANVAS_WIDTH, CANVAS_HEIGHT))\n for shape in coord_generator(CANVAS_WIDTH, CANVAS_HEIGHT):\n ImageDraw.Draw(im).polygon(shape, outline='white')\n im.save(filename)", "response": "Given a coordinate generator and a filename draw those coordinates in\n a new image and save them to the file."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nlogs msg % args with severity TRACE.", "response": "def trace(self, msg, *args, **kwargs):\n \"\"\"\n Log 'msg % args' with severity 'TRACE'.\n\n To pass exception information, use the keyword argument exc_info with\n a true value, e.g.\n\n logger.trace(\"Houston, we have a %s\", \"thorny problem\", exc_info=1)\n \"\"\"\n if self.isEnabledFor(TRACE):\n self._log(TRACE, msg, args, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nadd an error to the internal list of errors that occurred at the specified line.", "response": "def add_error(self, code, line, char, description):\n \"\"\"\n Registers an error for this container with code on line at char.\n :param code: The PEP 8 error code\n :param line: The line number of the reported error\n :param char: Line location of first offending character\n :param description: The human readable description of the thrown error/warning\n \"\"\"\n\n if code not in self.violations:\n 
self.violations[code] = 0\n self.violations[code] += 1\n self.lines.append((code, line, char, description))"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nresolves a setting for a configuration option.", "response": "def resolve_setting(default, arg_value=None, env_var=None, config_value=None):\n \"\"\"\n Resolves a setting for a configuration option. The winning value is chosen\n from multiple methods of configuration, in the following order of priority\n (top first):\n\n - Explicitly passed argument\n - Environment variable\n - Configuration file entry\n - Default\n\n :param arg_value: Explicitly passed value\n :param env_var: Environment variable name\n :type env_var: string or None\n :param config_value: Configuration entry\n :param default: Default value to if there are no overriding options\n :return: Configuration value\n \"\"\"\n if arg_value is not None:\n return arg_value\n else:\n env_value = getenv(env_var)\n if env_value is not None:\n return env_value\n else:\n if config_value is not None:\n return config_value\n else:\n return default"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nretrieves the state of a given Class.", "response": "def get_state(cls, clz):\n \"\"\"\n Retrieve the state of a given Class.\n\n :param clz: types.ClassType\n :return: Class state.\n :rtype: dict\n \"\"\"\n if clz not in cls.__shared_state:\n cls.__shared_state[clz] = (\n clz.init_state() if hasattr(clz, \"init_state\") else {}\n )\n return cls.__shared_state[clz]"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn the matching object from the database if it exists, otherwise create it", "response": "def _unique(session, cls, queryfunc, constructor, arg, kw):\n '''\n https://bitbucket.org/zzzeek/sqlalchemy/wiki/UsageRecipes/UniqueObject\n Checks if ORM entity exists according to criteria,\n if yes, returns it, if no, creates\n '''\n with 
session.no_autoflush:\n q = session.query(cls)\n q = queryfunc(q, *arg, **kw)\n obj = q.first()\n if not obj:\n obj = constructor(*arg, **kw)\n session.add(obj)\n return obj"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn dict representation of the object that is unique to the ORM entity.", "response": "def _unique_todict(session, cls, queryfunc, arg, kw):\n '''\n Checks if ORM entity exists according to criteria,\n if yes, returns it, if no, returns dict representation\n (required for further DB replication and syncing)\n '''\n q = session.query(cls)\n q = queryfunc(q, *arg, **kw)\n obj = q.first()\n if not obj:\n obj = kw\n obj['__cls__'] = cls.__mapper__.class_.__name__\n return obj"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef define_density_matrix(Ne, explicitly_hermitian=False, normalized=False,\n variables=None):\n r\"\"\"Return a symbolic density matrix.\n\n The arguments are\n\n Ne (integer):\n The number of atomic states.\n explicitly_hermitian (boolean):\n Whether to make $\\rho_{ij}=\\bar{\\rho}_{ij}$ for $i1} \\rho_{ii}$\n\n A very simple example:\n >>> define_density_matrix(2)\n Matrix([\n [rho11, rho12],\n [rho21, rho22]])\n\n The density matrix can be made explicitly hermitian\n >>> define_density_matrix(2, explicitly_hermitian=True)\n Matrix([\n [rho11, conjugate(rho21)],\n [rho21, rho22]])\n\n or normalized\n >>> define_density_matrix(2, normalized=True)\n Matrix([\n [-rho22 + 1, rho12],\n [ rho21, rho22]])\n\n or it can be made an explicit function of given variables\n >>> from sympy import symbols\n >>> t, z = symbols(\"t, z\", positive=True)\n >>> define_density_matrix(2, variables=[t, z])\n Matrix([\n [rho11(t, z), rho12(t, z)],\n [rho21(t, z), rho22(t, z)]])\n\n \"\"\"\n if Ne > 9:\n comma = \",\"\n name = r\"\\rho\"\n open_brace = \"_{\"\n close_brace = \"}\"\n else:\n comma = \"\"\n name = \"rho\"\n open_brace = \"\"\n close_brace = 
\"\"\n\n rho = []\n for i in range(Ne):\n row_rho = []\n for j in range(Ne):\n if i == j:\n row_rho += [define_symbol(name, open_brace, comma, i, j,\n close_brace, variables,\n positive=True)]\n elif i > j:\n row_rho += [define_symbol(name, open_brace, comma, i, j,\n close_brace, variables)]\n else:\n if explicitly_hermitian:\n row_rho += [conjugate(define_symbol(name, open_brace,\n comma, j, i,\n close_brace,\n variables))]\n else:\n row_rho += [define_symbol(name, open_brace, comma, i, j,\n close_brace, variables)]\n rho += [row_rho]\n\n if normalized:\n rho11 = 1-sum([rho[i][i] for i in range(1, Ne)])\n rho[0][0] = rho11\n\n rho = Matrix(rho)\n return rho", "response": "r A function that returns a symbolic density matrix."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef polarization_vector(phi, theta, alpha, beta, p):\n epsilon = Matrix([cos(2*beta), p*I*sin(2*beta), 0])\n\n R1 = Matrix([[cos(2*alpha), -sin(2*alpha), 0],\n [sin(2*alpha), cos(2*alpha), 0],\n [0, 0, 1]])\n\n R2 = Matrix([[cos(theta), 0, sin(theta)],\n [0, 1, 0],\n [-sin(theta), 0, cos(theta)]])\n\n R3 = Matrix([[cos(phi), -sin(phi), 0],\n [sin(phi), cos(phi), 0],\n [0, 0, 1]])\n\n return R3*R2*R1*epsilon", "response": "r Returns a unitary vector describing the polarization of the waves."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns iterable of all mass email recipients.", "response": "def mass_mailing_recipients():\n \"\"\"\n Returns iterable of all mass email recipients.\n Default behavior will be to return list of all active users' emails.\n This can be changed by providing callback in settings return some other list of users,\n when user emails are stored in many, non default models.\n To accomplish that add constant MASS_EMAIL_RECIPIENTS to settings. 
It should contain path to function, e.g.\n >>> MASS_EMAIL_RECIPIENTS = 'emailtemplates.helpers.mass_mailing_recipients'\n\n :rtype iterable\n \"\"\"\n if hasattr(settings, 'MASS_EMAIL_RECIPIENTS'):\n callback_name = settings.MASS_EMAIL_RECIPIENTS.split('.')\n module_name = '.'.join(callback_name[:-1])\n func_name = callback_name[-1]\n module = import_module(module_name)\n func = getattr(module, func_name, lambda: [])\n return func()\n User = get_user_model()\n if hasattr(User, 'is_active') and hasattr(User, 'email'):\n filtered_users = User.objects.filter(is_active=True).exclude(email__isnull=True).exclude(email__exact='')\n return filtered_users.values_list('email', flat=True).distinct()\n return []"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef complex_matrix_plot(A, logA=False, normalize=False, plot=True, **kwds):\r\n N = len(A[0])\r\n if logA:\r\n Anew = []\r\n for i in range(N):\r\n row = []\r\n for j in range(N):\r\n if A[i][j] != 0:\r\n row += [log(log(A[i][j]))]\r\n else:\r\n row += [0.0]\r\n Anew += [row]\r\n A = Anew[:]\r\n # A=[[log(A[i][j]) for j in range(N)] for i in range(N)]\r\n\r\n if normalize:\r\n norm = 1\r\n for i in range(N):\r\n for j in range(N):\r\n if abs(A[i][j]) > norm: norm = abs(A[i][j])\r\n\r\n A = [[A[i][j]/norm for j in range(N)]for i in range(N)]\r\n\r\n # print A\r\n color_matrix = []\r\n lmax = -1\r\n for i in range(N):\r\n row = []\r\n for j in range(N):\r\n rgb, l = complex_to_color(A[i][j])\r\n row += [rgb]\r\n if l > lmax:\r\n lmax = l\r\n color_matrix += [row]\r\n\r\n if normalize:\r\n color_matrix = [[tuple([k/lmax for k in color_matrix[i][j]])\r\n for j in range(N)] for i in range(N)]\r\n\r\n if plot:\r\n pyplot.imshow(color_matrix, interpolation='none')\r\n pyplot.savefig('a.png', bbox_inches='tight')\r\n pyplot.close('all')\r\n else:\r\n return color_matrix", "response": "r A function to plot complex matrices."} {"SOURCE": "codesearchnet", "instruction": "Can 
you create a Python 3 function that\nmakes a bar chart for data on MF quantities.", "response": "def bar_chart_mf(data, path_name):\r\n \"\"\"Make a bar chart for data on MF quantities.\"\"\"\r\n N = len(data)\r\n\r\n ind = np.arange(N) # the x locations for the groups\r\n width = 0.8 # the width of the bars\r\n\r\n fig, ax = pyplot.subplots()\r\n rects1 = ax.bar(ind, data, width, color='g')\r\n\r\n # add some text for labels, title and axes ticks\r\n ax.set_ylabel('Population')\r\n ax.set_xticks(ind+width/2)\r\n labs = ['m='+str(i) for i in range(-N/2+1, N/2+1)]\r\n ax.set_xticklabels(labs)\r\n\r\n def autolabel(rects):\r\n # attach some text labels\r\n for rect in rects:\r\n rect.get_height()\r\n\r\n autolabel(rects1)\r\n pyplot.savefig(path_name)\r\n pyplot.close()"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ndraws the polarization of a plane wave.", "response": "def draw_plane_wave_3d(ax, beam, dist_to_center=0):\r\n \"\"\"Draw the polarization of a plane wave.\"\"\"\r\n Ex = []; Ey = []; Ez = []\r\n\r\n k = [cos(beam.phi)*sin(beam.theta),\r\n sin(beam.phi)*sin(beam.theta),\r\n cos(beam.theta)]\r\n kx, ky, kz = k\r\n\r\n Nt = 1000\r\n tstep = 7*pi/4/(Nt-1)\r\n\r\n alpha = beam.alpha\r\n beta = beam.beta\r\n phi = beam.phi\r\n theta = beam.theta\r\n omega = 1\r\n\r\n for i in range(Nt):\r\n t = i*tstep\r\n\r\n Ex += [(cos(2*alpha)*cos(phi)*cos(theta) -\r\n sin(2*alpha)*sin(phi))*cos(omega*t)*cos(2*beta) -\r\n (cos(phi)*cos(theta)*sin(2*alpha) +\r\n cos(2*alpha)*sin(phi))*sin(omega*t)*sin(2*beta) -\r\n dist_to_center*kx]\r\n\r\n Ey += [(cos(2*alpha)*cos(theta)*sin(phi) +\r\n cos(phi)*sin(2*alpha))*cos(omega*t)*cos(2*beta) -\r\n (cos(theta)*sin(2*alpha)*sin(phi) -\r\n cos(2*alpha)*cos(phi))*sin(omega*t)*sin(2*beta) -\r\n dist_to_center*ky]\r\n\r\n Ez += [-cos(omega*t)*cos(2*alpha)*cos(2*beta)*sin(theta) +\r\n sin(omega*t)*sin(2*alpha)*sin(2*beta)*sin(theta) -\r\n dist_to_center*kz]\r\n\r\n ax.plot(Ex, Ey, Ez, 
beam.color+'-')\r\n ff = dist_to_center-1.0\r\n\r\n arrx = [-kx*dist_to_center, -kx*ff]\r\n arry = [-ky*dist_to_center, -ky*ff]\r\n arrz = [-kz*dist_to_center, -kz*ff]\r\n arrow = Arrow3D(arrx, arry, arrz, mutation_scale=20,\r\n lw=1, arrowstyle=\"-|>\", color=beam.color)\r\n ax.add_artist(arrow)\r\n ax.plot([Ex[-1]], [Ey[-1]], [Ez[-1]], '.', markersize=8, color=beam.color)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef draw_lasers_3d(ax, lasers, name=None, distances=None, lim=None):\r\n if distances is None: distances = [1.0 for i in range(len(lasers))]\r\n for i in range(len(lasers)):\r\n if type(lasers[i]) == PlaneWave:\r\n draw_plane_wave_3d(ax, lasers[i], distances[i])\r\n elif type(lasers[i]) == MotField:\r\n draw_mot_field_3d(ax, lasers[i], distances[i])\r\n\r\n ax.set_xlabel(r\"$x$\", fontsize=20)\r\n ax.set_ylabel(r\"$y$\", fontsize=20)\r\n ax.set_zlabel(r\"$z$\", fontsize=20)\r\n\r\n if lim is None: lim = sqrt(2.0)\r\n\r\n ax.set_xlim(-lim, lim)\r\n ax.set_ylim(-lim, lim)\r\n ax.set_zlim(-lim, lim)\r\n ax.set_aspect(\"equal\")\r\n\r\n if name is not None:\r\n pyplot.savefig(name, bbox_inches='tight')", "response": "Draw MOT lasers in 3d."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef plot_populations(path, name, Ne, states=None, filename='a.png',\r\n fontsize=12, absolute_frequency=True,\r\n save_path='', use_netcdf=True):\r\n r\"\"\"Plot the populations of a density matrix.\"\"\"\r\n pyplot.close(\"all\")\r\n dat = read_result(path, name, N=Ne, use_netcdf=use_netcdf)\r\n x = dat[0]\r\n if absolute_frequency:\r\n x = [xi/2/pi for xi in x]\r\n pop = dat[1:Ne]\r\n Nd = len(pop[0])\r\n\r\n pop1 = [1-sum([pop[j][i] for j in range(Ne-1)]) for i in range(Nd)]\r\n pop = [pop1]+pop\r\n\r\n # We do different things depending on what states we are given.\r\n if states is None:\r\n # If we recieve no states all populations are 
ploted.\r\n for i in range(Ne):\r\n lab = r\"$\\mathrm{Poblaci\\'on} \\ \" + str(i+1)+\"$\"\r\n pyplot.plot(x, pop[i], label=lab)\r\n pyplot.legend(fontsize=fontsize)\r\n pyplot.savefig(save_path+filename, bbox_inches='tight')\r\n pyplot.close('all')\r\n\r\n elif len(states[0].quantum_numbers) >= 5:\r\n # If we recieve magnetic states we make a plot\r\n # for each hyperfine state.\r\n magnetic_plots = len(states[0].quantum_numbers) == 6\r\n\r\n if not magnetic_plots:\r\n N_plots = len(states)\r\n states = split_hyperfine_to_magnetic(states)\r\n aux = [hsv_to_rgb(m*0.8/(N_plots-1), 1.0, 1.0)\r\n for m in range(N_plots)]\r\n hyperfine_colors = list(reversed(aux))\r\n conta = 0\r\n\r\n fine_states = find_fine_states(states)\r\n boundaries = reversed(calculate_boundaries(fine_states, states)[1])\r\n boundaries = list(boundaries)\r\n\r\n for pair in boundaries:\r\n f = states[pair[0]].f\r\n if f == 0:\r\n colors = [(0.8, 0.0, 1.0)]\r\n else:\r\n aux = [hsv_to_rgb(m*0.8/f, 1.0, 1.0) for m in range(f+1)]\r\n colors = list(reversed(aux))\r\n\r\n for i in range(pair[0], pair[1]):\r\n m = states[i].m\r\n if m < 0:\r\n color = colors[-m]\r\n style = ':'\r\n else:\r\n color = colors[m]\r\n style = '-'\r\n if magnetic_plots:\r\n aux = r\"$\\mathrm{Poblaci\\'on} \\ M_F=\"\r\n aux += str(states[i].m)+\"$\"\r\n pyplot.plot(x, pop[i], style, label=aux, color=color)\r\n\r\n if magnetic_plots:\r\n if f != 0:\r\n suma = [sum([pop[i][j] for i in range(pair[0], pair[1])])\r\n for j in range(len(pop[0]))]\r\n pyplot.plot(x, suma, 'k-', label=r'$\\mathrm{suma}$')\r\n\r\n aux = str(states[i]).split()[1].replace('_', '')\r\n filenamei = aux.replace('/', '_').replace('^', 'F=')\r\n s = filenamei.find(',')\r\n filenamei = filenamei[:s]\r\n filenamei = name+'_'+filenamei+'.png'\r\n\r\n aux = find_fine_states([states[i]])[0]\r\n title = aux._latex_()+'\\ F='+str(states[i].f)\r\n\r\n pyplot.title(r\"$\"+title+\"$\")\r\n pyplot.ylim([0, None])\r\n pyplot.xlim([x[0], x[-1]])\r\n 
pyplot.legend(fontsize=fontsize, loc=0)\r\n\r\n pyplot.savefig(save_path+filenamei, bbox_inches='tight')\r\n pyplot.close('all')\r\n else:\r\n suma = [sum([pop[i][j] for i in range(pair[0], pair[1])])\r\n for j in range(len(pop[0]))]\r\n label = states[i]._latex_()\r\n label = label[label.find(' ')+1:]\r\n label = label[:label.find('^')]\r\n label += r\"\\ F=\"+str(states[i].f)\r\n label = \"$\"+label+\"$\"\r\n\r\n pyplot.plot(x, suma, '-', color=hyperfine_colors[conta],\r\n label=label)\r\n conta += 1\r\n\r\n if not magnetic_plots:\r\n title = states[0]._latex_()\r\n title = \"$\"+title[:title.find(' ')-1]+\"$\"\r\n pyplot.title(title, fontsize=20)\r\n pyplot.xlim([x[0], x[-1]])\r\n pyplot.legend(fontsize=fontsize, loc=0)\r\n pyplot.savefig(save_path+name+'_pops.png', bbox_inches='tight')\r\n pyplot.close('all')", "response": "r Plot the populations of a density matrix."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef beam_splitter(ax, p0, size=2.54, alpha=0, format=None, **kwds):\r\n if format is None: format = 'k-'\r\n a = size/2\r\n x0 = [a, -a, -a, a, a, -a]\r\n y0 = [a, a, -a, -a, a, -a]\r\n\r\n cur_list = [(x0, y0)]\r\n cur_list = rotate_and_traslate(cur_list, alpha, p0)\r\n\r\n for curi in cur_list: ax.plot(curi[0], curi[1], format, **kwds)", "response": "r Draw a beam splitter."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef draw_beam(ax, p1, p2, width=0, beta1=None, beta2=None,\r\n format=None, **kwds):\r\n r\"\"\"Draw a laser beam.\"\"\"\r\n if format is None: format = 'k-'\r\n\r\n if width == 0:\r\n x0 = [p1[0], p2[0]]\r\n y0 = [p1[1], p2[1]]\r\n ax.plot(x0, y0, format, **kwds)\r\n else:\r\n a = width/2\r\n\r\n x1, y1 = p1\r\n x2, y2 = p2\r\n\r\n x11 = (a*x1**2*cos(beta1) - 2*a*x1*x2*cos(beta1) + a*x2**2*cos(beta1) + a*y1**2*cos(beta1) + a*y2**2*cos(beta1) - (2*a*y1*cos(beta1) - sqrt((x1 - x2)**2 + (y1 - y2)**2)*x1*cos(beta1))*y2 - 
(x1*y1*cos(beta1) - x1**2*sin(beta1) + x1*x2*sin(beta1))*sqrt((x1 - x2)**2 + (y1 - y2)**2))/(sqrt((x1 - x2)**2 + (y1 - y2)**2)*y2*cos(beta1) - sqrt((x1 - x2)**2 + (y1 - y2)**2)*(y1*cos(beta1) - x1*sin(beta1) + x2*sin(beta1)))\r\n y11 = (a*x1**2*sin(beta1) - 2*a*x1*x2*sin(beta1) + a*x2**2*sin(beta1) + a*y1**2*sin(beta1) + a*y2**2*sin(beta1) - (2*a*y1*sin(beta1) - sqrt((x1 - x2)**2 + (y1 - y2)**2)*y1*cos(beta1))*y2 - (y1**2*cos(beta1) - (x1*sin(beta1) - x2*sin(beta1))*y1)*sqrt((x1 - x2)**2 + (y1 - y2)**2))/(sqrt((x1 - x2)**2 + (y1 - y2)**2)*y2*cos(beta1) - sqrt((x1 - x2)**2 + (y1 - y2)**2)*(y1*cos(beta1) - x1*sin(beta1) + x2*sin(beta1)))\r\n\r\n x21 = (a*x1**2*cos(beta2) - 2*a*x1*x2*cos(beta2) + a*x2**2*cos(beta2) + a*y1**2*cos(beta2) + a*y2**2*cos(beta2) - (2*a*y1*cos(beta2) - sqrt((x1 - x2)**2 + (y1 - y2)**2)*x2*cos(beta2))*y2 - (x2*y1*cos(beta2) - x1*x2*sin(beta2) + x2**2*sin(beta2))*sqrt((x1 - x2)**2 + (y1 - y2)**2))/(sqrt((x1 - x2)**2 + (y1 - y2)**2)*y2*cos(beta2) - sqrt((x1 - x2)**2 + (y1 - y2)**2)*(y1*cos(beta2) - x1*sin(beta2) + x2*sin(beta2)))\r\n y21 = (a*x1**2*sin(beta2) - 2*a*x1*x2*sin(beta2) + a*x2**2*sin(beta2) + a*y1**2*sin(beta2) + (a*sin(beta2) + sqrt((x1 - x2)**2 + (y1 - y2)**2)*cos(beta2))*y2**2 - (2*a*y1*sin(beta2) + sqrt((x1 - x2)**2 + (y1 - y2)**2)*(y1*cos(beta2) - x1*sin(beta2) + x2*sin(beta2)))*y2)/(sqrt((x1 - x2)**2 + (y1 - y2)**2)*y2*cos(beta2) - sqrt((x1 - x2)**2 + (y1 - y2)**2)*(y1*cos(beta2) - x1*sin(beta2) + x2*sin(beta2)))\r\n\r\n ax.plot([x11, x21], [y11, y21], format, **kwds)\r\n\r\n x12 = -(a*x1**2*cos(beta2) - 2*a*x1*x2*cos(beta2) + a*x2**2*cos(beta2) + a*y1**2*cos(beta2) + a*y2**2*cos(beta2) - (2*a*y1*cos(beta2) + sqrt((x1 - x2)**2 + (y1 - y2)**2)*x2*cos(beta2))*y2 + (x2*y1*cos(beta2) - x1*x2*sin(beta2) + x2**2*sin(beta2))*sqrt((x1 - x2)**2 + (y1 - y2)**2))/(sqrt((x1 - x2)**2 + (y1 - y2)**2)*y2*cos(beta2) - sqrt((x1 - x2)**2 + (y1 - y2)**2)*(y1*cos(beta2) - x1*sin(beta2) + x2*sin(beta2)))\r\n y12 = -(a*x1**2*sin(beta2) - 
2*a*x1*x2*sin(beta2) + a*x2**2*sin(beta2) + a*y1**2*sin(beta2) + (a*sin(beta2) - sqrt((x1 - x2)**2 + (y1 - y2)**2)*cos(beta2))*y2**2 - (2*a*y1*sin(beta2) - sqrt((x1 - x2)**2 + (y1 - y2)**2)*(y1*cos(beta2) - x1*sin(beta2) + x2*sin(beta2)))*y2)/(sqrt((x1 - x2)**2 + (y1 - y2)**2)*y2*cos(beta2) - sqrt((x1 - x2)**2 + (y1 - y2)**2)*(y1*cos(beta2) - x1*sin(beta2) + x2*sin(beta2)))\r\n\r\n x22 = -(a*x1**2*cos(beta1) - 2*a*x1*x2*cos(beta1) + a*x2**2*cos(beta1) + a*y1**2*cos(beta1) + a*y2**2*cos(beta1) - (2*a*y1*cos(beta1) + sqrt((x1 - x2)**2 + (y1 - y2)**2)*x1*cos(beta1))*y2 + (x1*y1*cos(beta1) - x1**2*sin(beta1) + x1*x2*sin(beta1))*sqrt((x1 - x2)**2 + (y1 - y2)**2))/(sqrt((x1 - x2)**2 + (y1 - y2)**2)*y2*cos(beta1) - sqrt((x1 - x2)**2 + (y1 - y2)**2)*(y1*cos(beta1) - x1*sin(beta1) + x2*sin(beta1)))\r\n y22 = -(a*x1**2*sin(beta1) - 2*a*x1*x2*sin(beta1) + a*x2**2*sin(beta1) + a*y1**2*sin(beta1) + a*y2**2*sin(beta1) - (2*a*y1*sin(beta1) + sqrt((x1 - x2)**2 + (y1 - y2)**2)*y1*cos(beta1))*y2 + (y1**2*cos(beta1) - (x1*sin(beta1) - x2*sin(beta1))*y1)*sqrt((x1 - x2)**2 + (y1 - y2)**2))/(sqrt((x1 - x2)**2 + (y1 - y2)**2)*y2*cos(beta1) - sqrt((x1 - x2)**2 + (y1 - y2)**2)*(y1*cos(beta1) - x1*sin(beta1) + x2*sin(beta1)))\r\n\r\n ax.plot([x12, x22], [y12, y22], format, **kwds)", "response": "r Draw a laser beam."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef simple_beam_splitter(ax, p0, size=2.54, width=0.1, alpha=0,\r\n format=None, **kwds):\r\n r\"\"\"Draw a simple beam splitter.\"\"\"\r\n if format is None: format = 'k-'\r\n a = size/2\r\n b = width/2\r\n x0 = [a, -a, -a, a, a]\r\n y0 = [b, b, -b, -b, b]\r\n\r\n cur_list = [(x0, y0)]\r\n cur_list = rotate_and_traslate(cur_list, alpha, p0)\r\n\r\n for curi in cur_list: ax.plot(curi[0], curi[1], format, **kwds)", "response": "r Draw a simple beam splitter."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 
function\ndef draw_arith(ax, p0, size=1, alpha=0, arith=None, format=None,\r\n fontsize=10, **kwds):\r\n r\"\"\"Draw an arithmetic operator.\"\"\"\r\n if format is None: format = 'k-'\r\n a = size/2.0\r\n x0 = [0, 2.5*a, 0, 0]\r\n y0 = [a, 0, -a, a]\r\n\r\n cur_list = [(x0, y0)]\r\n cur_list = rotate_and_traslate(cur_list, alpha, p0)\r\n\r\n for curi in cur_list: ax.plot(curi[0], curi[1], format, **kwds)\r\n\r\n if arith is not None:\r\n pyplot.text(p0[0]+0.75*a, p0[1], arith, horizontalalignment='center',\r\n verticalalignment='center', fontsize=fontsize)", "response": "Draw an arithmetic operator."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef dup2(a, b, timeout=3):\n dup_err = None\n # give FDs 3 seconds to not be busy anymore\n for i in range(int(10 * timeout)):\n try:\n return os.dup2(a, b)\n except OSError as e:\n dup_err = e\n if e.errno == errno.EBUSY:\n time.sleep(0.1)\n else:\n raise\n if dup_err:\n raise dup_err", "response": "Like os.dup2, but retries on EBUSY."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncaptures C-level stdout/stderr in a context manager. The return value for the context manager is (stdout, stderr). Examples -------- >>> with capture() as (stdout, stderr): ... printf(\"C-level stdout\") ... output = stdout.read()", "response": "def pipes(stdout=PIPE, stderr=PIPE, encoding=_default_encoding):\n \"\"\"Capture C-level stdout/stderr in a context manager.\n\n The return value for the context manager is (stdout, stderr).\n\n Examples\n --------\n\n >>> with capture() as (stdout, stderr):\n ... printf(\"C-level stdout\")\n ... 
output = stdout.read()\n \"\"\"\n stdout_pipe = stderr_pipe = False\n # setup stdout\n if stdout == PIPE:\n stdout_r, stdout_w = os.pipe()\n stdout_w = os.fdopen(stdout_w, 'wb')\n if encoding:\n stdout_r = io.open(stdout_r, 'r', encoding=encoding)\n else:\n stdout_r = os.fdopen(stdout_r, 'rb')\n stdout_pipe = True\n else:\n stdout_r = stdout_w = stdout\n # setup stderr\n if stderr == STDOUT:\n stderr_r = None\n stderr_w = stdout_w\n elif stderr == PIPE:\n stderr_r, stderr_w = os.pipe()\n stderr_w = os.fdopen(stderr_w, 'wb')\n if encoding:\n stderr_r = io.open(stderr_r, 'r', encoding=encoding)\n else:\n stderr_r = os.fdopen(stderr_r, 'rb')\n stderr_pipe = True\n else:\n stderr_r = stderr_w = stderr\n if stdout_pipe or stderr_pipe:\n capture_encoding = None\n else:\n capture_encoding = encoding\n w = Wurlitzer(stdout=stdout_w, stderr=stderr_w, encoding=capture_encoding)\n try:\n with w:\n yield stdout_r, stderr_r\n finally:\n # close pipes\n if stdout_pipe:\n stdout_w.close()\n if stderr_pipe:\n stderr_w.close()"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef sys_pipes(encoding=_default_encoding):\n return pipes(sys.stdout, sys.stderr, encoding=encoding)", "response": "Redirect C-level stdout and stderr output to sys.stdout and sys.stderr."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nredirecting all C output to sys. 
stdout and err This is not a context manager.", "response": "def sys_pipes_forever(encoding=_default_encoding):\n \"\"\"Redirect all C output to sys.stdout/err\n \n This is not a context manager; it turns on C-forwarding permanently.\n \"\"\"\n global _mighty_wurlitzer\n if _mighty_wurlitzer is None:\n _mighty_wurlitzer = sys_pipes(encoding)\n _mighty_wurlitzer.__enter__()"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef load_ipython_extension(ip):\n if not getattr(ip, 'kernel'):\n warnings.warn(\n \"wurlitzer extension doesn't do anything in terminal IPython\"\n )\n return\n ip.events.register('pre_execute', sys_pipes_forever)\n ip.events.register('post_execute', stop_sys_pipes)", "response": "Register me as an IPython extension"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef unload_ipython_extension(ip):\n if not getattr(ip, 'kernel'):\n return\n ip.events.unregister('pre_execute', sys_pipes_forever)\n ip.events.unregister('post_execute', stop_sys_pipes)", "response": "Unload me as an IPython extension\n \n "} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ndecodes data if any Called before passing to stdout streams", "response": "def _decode(self, data):\n \"\"\"Decode data, if any\n \n Called before passing to stdout/stderr streams\n \"\"\"\n if self.encoding:\n data = data.decode(self.encoding, 'replace')\n return data"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef push_filters(self, new_filters):\n t = self.tokenizer\n for f in new_filters:\n t = f(t)\n self.tokenizer = t", "response": "Add a filter to the tokenizer chain."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef check(self, text):\n for word, pos in self.tokenizer(text):\n correct = 
self.dictionary.check(word)\n if correct:\n continue\n yield word, self.dictionary.suggest(word) if self.suggest else []\n return", "response": "Yields bad words and suggested alternate spellings."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngets the name of the service from the given port", "response": "def get_service_from_port(port, all_services=None):\n \"\"\"Gets the name of the service from the port\n all_services allows you to feed in the services to look through, pass\n in a dict of service names to service information eg.\n {\n 'service_name': {\n 'port': port_number\n }\n }\n \n Returns the name of the service\n \"\"\"\n if port is None or not isinstance(port, int):\n return None\n\n if all_services is None:\n all_services = read_services_configuration()\n\n for name, info in all_services.items():\n srv_port = info.get('port')\n if srv_port is not None and port == int(srv_port):\n return name\n\n for elem in info.get('smartstack', {}).values():\n elem_port = elem.get('proxy_port')\n if elem_port is not None and port == int(elem_port):\n return name"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn a list of all nodes that receive a given service.", "response": "def all_nodes_that_receive(service, service_configuration=None, run_only=False, deploy_to_only=False):\n \"\"\"If run_only, returns only the services that are in the runs_on list.\n If deploy_to_only, returns only the services in the deployed_to list.\n If neither, both are returned, duplicates stripped.\n Results are always sorted.\n \"\"\"\n assert not (run_only and deploy_to_only)\n\n if service_configuration is None:\n service_configuration = read_services_configuration()\n runs_on = service_configuration[service]['runs_on']\n deployed_to = service_configuration[service].get('deployed_to')\n if deployed_to is None:\n deployed_to = []\n\n if run_only:\n result = runs_on\n elif deploy_to_only:\n result = 
deployed_to\n else:\n result = set(runs_on) | set(deployed_to)\n\n return list(sorted(result))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn all nodes that run in a certain environment", "response": "def all_nodes_that_run_in_env(service, env, service_configuration=None):\n \"\"\" Returns all nodes that run in an environment. This needs\n to be specified in field named 'env_runs_on' one level under services\n in the configuration, and needs to contain an object which maps strings\n to lists (environments to nodes).\n\n :param service: A string specifying which service to look up nodes for\n :param env: A string specifying which environment's nodes should be returned\n :param service_configuration: A service_configuration dict to look in or None to\n use the default dict.\n\n :returns: list of all nodes running in a certain environment\n \"\"\"\n\n if service_configuration is None:\n service_configuration = read_services_configuration()\n env_runs_on = service_configuration[service]['env_runs_on']\n if env in env_runs_on:\n return list(sorted(env_runs_on[env]))\n else:\n return []"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nconverting a float to a precision and scale tuple.", "response": "def precision_and_scale(x):\n \"\"\"\n From a float, decide what precision and scale are needed to represent it.\n\n >>> precision_and_scale(54.2)\n (3, 1)\n >>> precision_and_scale(9)\n (1, 0)\n\n Thanks to Mark Ransom,\n http://stackoverflow.com/questions/3018758/determine-precision-and-scale-of-particular-number-in-python\n \"\"\"\n if isinstance(x, Decimal):\n precision = len(x.as_tuple().digits)\n scale = -1 * x.as_tuple().exponent\n if scale < 0:\n precision -= scale\n scale = 0\n return (precision, scale)\n max_digits = 14\n int_part = int(abs(x))\n magnitude = 1 if int_part == 0 else int(math.log10(int_part)) + 1\n if magnitude >= max_digits:\n return (magnitude, 0)\n frac_part = abs(x) - int_part\n 
multiplier = 10 ** (max_digits - magnitude)\n frac_digits = multiplier + int(multiplier * frac_part + 0.5)\n while frac_digits % 10 == 0:\n frac_digits /= 10\n scale = int(math.log10(frac_digits))\n return (magnitude + scale, scale)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef coerce_to_specific(datum):\n if datum is None:\n return None \n try:\n result = dateutil.parser.parse(datum)\n # but even if this does not raise an exception, may\n # not be a date -- dateutil's parser is very aggressive\n # check for nonsense unprintable date\n str(result) \n # most false date hits will be interpreted as times today\n # or as unlikely far-future or far-past years\n clean_datum = datum.strip().lstrip('-').lstrip('0').rstrip('.')\n if len(_complex_enough_to_be_date.findall(clean_datum)) < 2:\n digits = _digits_only.search(clean_datum)\n if (not digits) or (len(digits.group(0)) not in \n (4, 6, 8, 12, 14, 17)):\n raise Exception(\"false date hit for %s\" % datum)\n if result.date() == datetime.datetime.now().date():\n raise Exception(\"false date hit (%s) for %s\" % (\n str(result), datum))\n if not (1700 < result.year < 2150):\n raise Exception(\"false date hit (%s) for %s\" % (\n str(result), datum)) \n return result\n except Exception as e:\n pass\n if str(datum).strip().lower() in ('0', 'false', 'f', 'n', 'no'):\n return False\n elif str(datum).strip().lower() in ('1', 'true', 't', 'y', 'yes'):\n return True\n try:\n return int(str(datum))\n except ValueError:\n pass\n try:\n return Decimal(str(datum))\n except InvalidOperation:\n pass\n try:\n return float(str(datum))\n except ValueError:\n pass\n return str(datum)", "response": "Coerce a datum to the most specific data type."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning the number of digits and exponent of a decimal number.", "response": "def _places_b4_and_after_decimal(d):\n \"\"\"\n >>> 
_places_b4_and_after_decimal(Decimal('54.212'))\n (2, 3)\n \"\"\"\n tup = d.as_tuple()\n return (len(tup.digits) + tup.exponent, max(-1*tup.exponent, 0))"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngiving two Decimals, return a 9-filled decimal representing both enough > 0 digits and enough < 0 digits.", "response": "def worst_decimal(d1, d2):\n \"\"\"\n Given two Decimals, return a 9-filled decimal representing both enough > 0 digits\n and enough < 0 digits (scale) to accommodate numbers like either.\n\n >>> worst_decimal(Decimal('762.1'), Decimal('-1.983'))\n Decimal('999.999')\n \"\"\"\n (d1b4, d1after) = _places_b4_and_after_decimal(d1)\n (d2b4, d2after) = _places_b4_and_after_decimal(d2)\n return Decimal('9' * max(d1b4, d2b4) + '.' + '9' * max(d1after, d2after))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef set_worst(old_worst, new_worst):\n \n if isinstance(new_worst, bool):\n return new_worst\n # Negative numbers confuse the length calculation. 
\n negative = ( (hasattr(old_worst, '__neg__') and old_worst < 0) or\n (hasattr(new_worst, '__neg__') and new_worst < 0) )\n try:\n old_worst = abs(old_worst)\n new_worst = abs(new_worst)\n except TypeError:\n pass\n \n # now go by length \n new_len = len(str(new_worst))\n old_len = len(str(old_worst))\n if new_len < old_len:\n new_type = type(new_worst)\n new_worst = str(new_worst).ljust(old_len, '0')\n new_worst = new_type(new_worst)\n \n # now put the removed negative back\n if negative:\n try:\n new_worst = -1 * abs(new_worst)\n except:\n pass\n \n return new_worst", "response": "Returns new_worst, right-padded with zeros so it is at least as long as old_worst, with the sign preserved."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngives two objects d1 and d2, return the one of the least restrictive type.", "response": "def best_representative(d1, d2):\n \"\"\"\n Given two objects each coerced to the most specific type possible, return the one\n of the least restrictive type.\n\n >>> best_representative(Decimal('-37.5'), Decimal('0.9999'))\n Decimal('-99.9999')\n >>> best_representative(None, Decimal('6.1'))\n Decimal('6.1')\n >>> best_representative(311920, '48-49')\n '48-490'\n >>> best_representative(6, 'foo')\n 'foo'\n >>> best_representative(Decimal('4.95'), Decimal('6.1'))\n Decimal('9.99')\n >>> best_representative(Decimal('-1.9'), Decimal('6.1'))\n Decimal('-9.9')\n \"\"\"\n \n if hasattr(d2, 'strip') and not d2.strip():\n return d1\n if d1 is None:\n return d2\n elif d2 is None:\n return d1\n preference = (datetime.datetime, bool, int, Decimal, float, str)\n worst_pref = 0\n worst = ''\n for coerced in (d1, d2):\n pref = preference.index(type(coerced))\n if pref > worst_pref:\n worst_pref = pref\n worst = set_worst(worst, coerced)\n elif pref == worst_pref:\n if isinstance(coerced, Decimal):\n worst = set_worst(worst, worst_decimal(coerced, worst))\n elif isinstance(coerced, float):\n worst = set_worst(worst, max(coerced, worst))\n else: # int, str\n if 
len(str(coerced)) > len(str(worst)):\n worst = set_worst(worst, coerced)\n return worst"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef best_coercable(data):\n preference = (datetime.datetime, bool, int, Decimal, float, str)\n worst_pref = 0\n worst = ''\n for datum in data:\n coerced = coerce_to_specific(datum)\n pref = preference.index(type(coerced))\n if pref > worst_pref:\n worst_pref = pref\n worst = coerced\n elif pref == worst_pref:\n if isinstance(coerced, Decimal):\n worst = worst_decimal(coerced, worst)\n elif isinstance(coerced, float):\n worst = max(coerced, worst)\n else: # int, str\n if len(str(coerced)) > len(str(worst)):\n worst = coerced\n return worst", "response": "Given an iterable of scalar data returns the datum representing the most specific\n data type."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ngive a scalar Python value picks an appropriate SQLAlchemy data type.", "response": "def sqla_datatype_for(datum):\n \"\"\"\n Given a scalar Python value, picks an appropriate SQLAlchemy data type.\n\n >>> sqla_datatype_for(7.2)\n DECIMAL(precision=2, scale=1)\n >>> sqla_datatype_for(\"Jan 17 2012\")\n \n >>> sqla_datatype_for(\"something else\")\n Unicode(length=14)\n \"\"\"\n try:\n if len(_complex_enough_to_be_date.findall(datum)) > 1:\n dateutil.parser.parse(datum)\n return sa.DATETIME\n except (TypeError, ValueError):\n pass\n try:\n (prec, scale) = precision_and_scale(datum)\n return sa.DECIMAL(prec, scale)\n except TypeError:\n return sa.Unicode(len(datum))"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef generate_one(tbl, args, table_name=None, file=None):\n table = Table(tbl, table_name=table_name, varying_length_text=args.text, uniques=args.uniques,\n pk_name = args.key, force_pk=args.force_key, reorder=args.reorder, data_size_cushion=args.cushion,\n 
save_metadata_to=args.save_metadata_to, metadata_source=args.use_metadata_from,\n loglevel=args.log, limit=args.limit)\n if args.dialect.startswith('sqla'):\n if not args.no_creates:\n print(table.sqlalchemy(), file=file)\n if args.inserts:\n print(\"\\n\".join(table.inserts(dialect=args.dialect)), file=file)\n elif args.dialect.startswith('dj'):\n table.django_models()\n else:\n print(table.sql(dialect=args.dialect, inserts=args.inserts,\n creates=(not args.no_creates), drops=args.drops,\n metadata_source=args.use_metadata_from), file=file)\n return table", "response": "Generate a single table."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef generate(args=None, namespace=None, file=None):\n if hasattr(args, 'split'):\n args = args.split()\n args = parser.parse_args(args, namespace)\n set_logging(args)\n logging.info(str(args))\n if args.dialect in ('pg', 'pgsql', 'postgres'):\n args.dialect = 'postgresql'\n if args.dialect.startswith('dj'):\n args.dialect = 'django'\n elif args.dialect.startswith('sqla'):\n args.dialect = 'sqlalchemy'\n\n if args.dialect not in dialect_names:\n raise NotImplementedError('First arg must be one of: %s' % \", \".join(dialect_names))\n if args.dialect == 'sqlalchemy':\n print(sqla_head, file=file)\n for datafile in args.datafile:\n if is_sqlalchemy_url.search(datafile):\n table_names_for_insert = []\n for tbl in sqlalchemy_table_sources(datafile):\n t = generate_one(tbl, args, table_name=tbl.generator.name, file=file)\n if t.data:\n table_names_for_insert.append(tbl.generator.name)\n if args.inserts and args.dialect == 'sqlalchemy':\n print(sqla_inserter_call(table_names_for_insert), file=file)\n if t and args.inserts:\n for seq_update in emit_db_sequence_updates(t.source.db_engine):\n if args.dialect == 'sqlalchemy':\n print(' conn.execute(\"%s\")' % seq_update, file=file)\n elif args.dialect == 'postgresql':\n print(seq_update, file=file)\n else:\n generate_one(datafile, 
args, file=file)", "response": "Generates DDL from data sources named in args."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef emit_db_sequence_updates(engine):\n\n if engine and engine.name == 'postgresql':\n # not implemented for other RDBMS; necessity unknown\n conn = engine.connect()\n qry = \"\"\"SELECT 'SELECT last_value FROM ' || n.nspname ||\n '.' || c.relname || ';' AS qry,\n n.nspname || '.' || c.relname AS qual_name\n FROM pg_namespace n\n JOIN pg_class c ON (n.oid = c.relnamespace)\n WHERE c.relkind = 'S'\"\"\"\n for (qry, qual_name) in list(conn.execute(qry)):\n (lastval, ) = conn.execute(qry).first()\n nextval = int(lastval) + 1\n yield \"ALTER SEQUENCE %s RESTART WITH %s;\" % (qual_name, nextval)", "response": "Set database sequence objects to match the source db.\n\nRelevant only when generated from a SQLAlchemy connection."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns the SQL to define the table.", "response": "def ddl(self, dialect=None, creates=True, drops=True):\n \"\"\"\n Returns SQL to define the table.\n \"\"\"\n dialect = self._dialect(dialect)\n creator = CreateTable(self.table).compile(mock_engines[dialect])\n creator = \"\\n\".join(l for l in str(creator).splitlines() if l.strip()) # remove empty lines\n comments = \"\\n\\n\".join(self._comment_wrapper.fill(\"in %s: %s\" %\n (col, self.comments[col]))\n for col in self.comments)\n result = []\n if drops:\n result.append(self._dropper(dialect) + ';')\n if creates:\n result.append(\"%s;\\n%s\" % (creator, comments))\n for child in self.children.values():\n result.append(child.ddl(dialect=dialect, creates=creates,\n drops=drops))\n return '\\n\\n'.join(result)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ndumping Python code to set up the table's SQLAlchemy model", "response": "def sqlalchemy(self, is_top=True):\n \"\"\"Dumps Python code to set 
up the table's SQLAlchemy model\"\"\"\n table_def = self.table_backref_remover.sub('', self.table.__repr__())\n\n # inject UNIQUE constraints into table definition\n constraint_defs = []\n for constraint in self.table.constraints:\n if isinstance(constraint, sa.sql.schema.UniqueConstraint):\n col_list = ', '.join(\"'%s'\" % c.name\n for c in constraint.columns)\n constraint_defs.append('UniqueConstraint(%s)' % col_list)\n if constraint_defs:\n constraint_defs = ',\\n '.join(constraint_defs) + ','\n table_def = table_def.replace('schema=None', '\\n ' + constraint_defs + 'schema=None')\n\n table_def = table_def.replace(\"MetaData(bind=None)\", \"metadata\")\n table_def = table_def.replace(\"Column(\", \"\\n Column(\")\n table_def = table_def.replace(\"schema=\", \"\\n schema=\")\n result = [table_def, ]\n result.extend(c.sqlalchemy(is_top=False) for c in self.children.values())\n result = \"\\n%s = %s\" % (self.table_name, \"\\n\".join(result))\n if is_top:\n sqla_imports = set(self.capitalized_words.findall(table_def))\n sqla_imports &= set(dir(sa))\n sqla_imports = sorted(sqla_imports)\n result = self.sqlalchemy_setup_template % (\n \", \".join(sqla_imports), result, self.table.name)\n result = textwrap.dedent(result)\n return result"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _prep_datum(self, datum, dialect, col, needs_conversion):\n if datum is None or (needs_conversion and not str(datum).strip()):\n return 'NULL'\n pytype = self.columns[col]['pytype']\n\n if needs_conversion:\n if pytype == datetime.datetime:\n datum = dateutil.parser.parse(datum)\n elif pytype == bool:\n datum = th.coerce_to_specific(datum)\n if dialect.startswith('sqlite'):\n datum = 1 if datum else 0\n else:\n datum = pytype(str(datum))\n\n if isinstance(datum, datetime.datetime) or isinstance(datum, datetime.date):\n if dialect in self._datetime_format:\n return datum.strftime(self._datetime_format[dialect])\n 
else:\n return \"'%s'\" % datum\n elif hasattr(datum, 'lower'):\n # simple SQL injection protection, sort of... ?\n return \"'%s'\" % datum.replace(\"'\", \"''\")\n else:\n return datum", "response": "Puts a value in proper format for a SQL string"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nset database sequence objects to match the source db. Relevant only when generated from a SQLAlchemy connection.", "response": "def emit_db_sequence_updates(self):\n \"\"\"Set database sequence objects to match the source db\n\n Relevant only when generated from SQLAlchemy connection.\n Needed to avoid subsequent unique key violations after DB build.\"\"\"\n\n if self.source.db_engine and self.source.db_engine.name == 'postgresql':\n # not implemented for other RDBMS; necessity unknown\n conn = self.source.db_engine.connect()\n qry = \"\"\"SELECT 'SELECT last_value FROM ' || n.nspname || '.' || c.relname || ';'\n FROM pg_namespace n\n JOIN pg_class c ON (n.oid = c.relnamespace)\n WHERE c.relkind = 'S'\"\"\"\n for (sequence, ) in list(conn.execute(qry)):\n qry = \"SELECT last_value FROM %s\" % sequence\n (lastval, ) = conn.execute(qry).first()\n nextval = int(lastval) + 1\n yield \"ALTER SEQUENCE %s RESTART WITH %s;\" % (sequence, nextval)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning the table's DDL and, optionally, its INSERT statements as a single SQL string.", "response": "def sql(self, dialect=None, inserts=False, creates=True,\n drops=True, metadata_source=None):\n \"\"\"\n Combined results of ``.ddl(dialect)`` and, if ``inserts==True``,\n ``.inserts(dialect)``.\n \"\"\"\n result = [self.ddl(dialect, creates=creates, drops=drops)]\n if inserts:\n for row in self.inserts(dialect):\n result.append(row)\n return '\\n'.join(result)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef clean_key_name(key):\n result = _illegal_in_column_name.sub(\"_\", key.strip())\n 
if result[0].isdigit():\n result = '_%s' % result\n if result.upper() in sql_reserved_words:\n result = '_%s' % key\n return result.lower()", "response": "Cleans a key name for use in SQL table and column names."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _id_fieldname(fieldnames, table_name = ''):\n templates = ['%s_%%s' % table_name, '%s', '_%s']\n for stub in ['id', 'num', 'no', 'number']:\n for t in templates:\n if t % stub in fieldnames:\n return t % stub", "response": "Find the field name from a dict likeliest to be its unique ID\n "} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef unnest_child_dict(parent, key, parent_name=''):\n val = parent[key]\n name = \"%s['%s']\" % (parent_name, key)\n logging.debug(\"Unnesting dict %s\" % name)\n id = _id_fieldname(val, parent_name)\n if id:\n logging.debug(\"%s is %s's ID\" % (id, key))\n if len(val) <= 2:\n logging.debug('Removing ID column %s.%s' % (key, id))\n val.pop(id)\n if len(val) == 0:\n logging.debug('%s is empty, removing from %s' % (name, parent_name))\n parent.pop(key)\n return\n elif len(val) == 1:\n logging.debug('Nested one-item dict in %s, making scalar.' 
% name)\n parent[key] = list(val.values())[0]\n return\n else:\n logging.debug('Pushing all fields from %s up to %s' % (name, parent_name))\n new_field_names = ['%s_%s' % (key, child_key.strip('_')) for child_key in val]\n overlap = (set(new_field_names) & set(parent)) - set(id or [])\n if overlap:\n logging.error(\"Could not unnest child %s; %s present in %s\"\n % (name, key, ','.join(overlap), parent_name))\n return\n for (child_key, child_val) in val.items():\n new_field_name = '%s_%s' % (key, child_key.strip('_'))\n parent[new_field_name] = child_val\n parent.pop(key)", "response": "Unnest a child dictionary."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns a tuple of result and key_type for the given key name.", "response": "def suitability_as_key(self, key_name):\n \"\"\"\n Returns: (result, key_type)\n ``result`` is True, False, or 'absent' or 'partial' (both still usable)\n ``key_type`` is ``int`` for integer keys or ``str`` for hash keys\n\n \"\"\"\n pk_values = all_values_for(self, key_name)\n if not pk_values:\n return ('absent', int) # could still use it\n key_type = type(th.best_coercable(pk_values))\n num_unique_values = len(set(pk_values))\n if num_unique_values < len(pk_values):\n return (False, None) # non-unique\n if num_unique_values == len(self):\n return (True, key_type) # perfect!\n return ('partial', key_type)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef to_tgt(self):\n\t\tenc_part = EncryptedData({'etype': 1, 'cipher': b''})\n\t\t\n\t\ttgt_rep = {}\n\t\ttgt_rep['pvno'] = krb5_pvno\n\t\ttgt_rep['msg-type'] = MESSAGE_TYPE.KRB_AS_REP.value\n\t\ttgt_rep['crealm'] = self.server.realm.to_string()\n\t\ttgt_rep['cname'] = self.client.to_asn1()[0]\n\t\ttgt_rep['ticket'] = Ticket.load(self.ticket.to_asn1()).native\n\t\ttgt_rep['enc-part'] = enc_part.native\n\n\t\tt = EncryptionKey(self.key.to_asn1()).native\n\t\t\n\t\treturn 
tgt_rep, t", "response": "Returns the native format of an AS_REP message and the session key in EncryptionKey native format."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef add_tgt(self, as_rep, enc_as_rep_part, override_pp = True): #from AS_REP\n\t\tc = Credential()\n\t\tc.client = CCACHEPrincipal.from_asn1(as_rep['cname'], as_rep['crealm'])\n\t\tif override_pp == True:\n\t\t\tself.primary_principal = c.client\n\t\tc.server = CCACHEPrincipal.from_asn1(enc_as_rep_part['sname'], enc_as_rep_part['srealm'])\n\t\tc.time = Times.from_asn1(enc_as_rep_part)\n\t\tc.key = Keyblock.from_asn1(enc_as_rep_part['key'])\n\t\tc.is_skey = 0 #not sure!\n\t\t\n\t\tc.tktflags = TicketFlags(enc_as_rep_part['flags']).cast(core.IntegerBitString).native\n\t\tc.num_address = 0\n\t\tc.num_authdata = 0\n\t\tc.ticket = CCACHEOctetString.from_asn1(Ticket(as_rep['ticket']).dump())\n\t\tc.second_ticket = CCACHEOctetString.empty()\n\t\t\n\t\tself.credentials.append(c)", "response": "This function creates a credential object from AS_REP data (a TGT) and adds it to the ccache file."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef add_tgs(self, tgs_rep, enc_tgs_rep_part, override_pp = False): #from AS_REP\n\t\tc = Credential()\n\t\tc.client = CCACHEPrincipal.from_asn1(tgs_rep['cname'], tgs_rep['crealm'])\n\t\tif override_pp == True:\n\t\t\tself.primary_principal = c.client\n\t\tc.server = CCACHEPrincipal.from_asn1(enc_tgs_rep_part['sname'], enc_tgs_rep_part['srealm'])\n\t\tc.time = Times.from_asn1(enc_tgs_rep_part)\n\t\tc.key = Keyblock.from_asn1(enc_tgs_rep_part['key'])\n\t\tc.is_skey = 0 #not sure!\n\t\t\n\t\tc.tktflags = TicketFlags(enc_tgs_rep_part['flags']).cast(core.IntegerBitString).native\n\t\tc.num_address = 0\n\t\tc.num_authdata = 0\n\t\tc.ticket = 
CCACHEOctetString.from_asn1(Ticket(tgs_rep['ticket']).dump())\n\t\tc.second_ticket = CCACHEOctetString.empty()\n\t\t\n\t\tself.credentials.append(c)", "response": "This function adds a credential object from the TGS_REP data and adds it to the ccache file."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef add_kirbi(self, krbcred, override_pp = True, include_expired = False):\n\t\tc = Credential()\n\t\tenc_credinfo = EncKrbCredPart.load(krbcred['enc-part']['cipher']).native\n\t\tticket_info = enc_credinfo['ticket-info'][0]\n\t\t\n\t\t\"\"\"\n\t\tif ticket_info['endtime'] < datetime.datetime.now(datetime.timezone.utc):\n\t\t\tif include_expired == True:\n\t\t\t\tlogging.debug('This ticket has most likely expired, but include_expired is forcing me to add it to cache! This can cause problems!')\n\t\t\telse:\n\t\t\t\tlogging.debug('This ticket has most likely expired, skipping')\n\t\t\t\treturn\n\t\t\"\"\"\n\t\t\n\t\tc.client = CCACHEPrincipal.from_asn1(ticket_info['pname'], ticket_info['prealm'])\n\t\tif override_pp == True:\n\t\t\tself.primary_principal = c.client\n\t\t\n\t\t#yaaaaay 4 additional weirdness!!!!\n\t\t#if sname name-string contains a realm as well htne impacket will crash miserably :(\n\t\tif len(ticket_info['sname']['name-string']) > 2 and ticket_info['sname']['name-string'][-1].upper() == ticket_info['srealm'].upper():\n\t\t\tlogging.debug('SNAME contains the realm as well, trimming it')\n\t\t\tt = ticket_info['sname']\n\t\t\tt['name-string'] = t['name-string'][:-1]\n\t\t\tc.server = CCACHEPrincipal.from_asn1(t, ticket_info['srealm'])\n\t\telse:\n\t\t\tc.server = CCACHEPrincipal.from_asn1(ticket_info['sname'], ticket_info['srealm'])\n\t\t\n\t\t\n\t\tc.time = Times.from_asn1(ticket_info)\n\t\tc.key = Keyblock.from_asn1(ticket_info['key'])\n\t\tc.is_skey = 0 #not sure!\n\t\t\n\t\tc.tktflags = TicketFlags(ticket_info['flags']).cast(core.IntegerBitString).native\n\t\tc.num_address = 
0\n\t\tc.num_authdata = 0\n\t\tc.ticket = CCACHEOctetString.from_asn1(Ticket(krbcred['tickets'][0]).dump()) #kirbi only stores one ticket per file\n\t\tc.second_ticket = CCACHEOctetString.empty()\n\t\t\n\t\tself.credentials.append(c)", "response": "Add a Kirbi credential to the cache."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn a list of hashes in hashcat - firendly format for tickets with encryption type 23 which is RC4", "response": "def get_hashes(self, all_hashes = False):\n\t\t\"\"\"\n\t\tReturns a list of hashes in hashcat-firendly format for tickets with encryption type 23 (which is RC4)\n\t\tall_hashes: overrides the encryption type filtering and returns hash for all tickets\n\n\t\t\"\"\"\n\t\thashes = []\n\t\tfor cred in self.credentials:\n\t\t\tres = Ticket.load(cred.ticket.to_asn1()).native\n\t\t\tif int(res['enc-part']['etype']) == 23 or all_hashes == True:\n\t\t\t\thashes.append(cred.to_hash())\n\n\t\treturn hashes"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef from_kirbidir(directory_path):\n\t\tcc = CCACHE()\n\t\tdir_path = os.path.join(os.path.abspath(directory_path), '*.kirbi')\n\t\tfor filename in glob.glob(dir_path):\n\t\t\twith open(filename, 'rb') as f:\n\t\t\t\tkirbidata = f.read()\n\t\t\t\tkirbi = KRBCRED.load(kirbidata).native\n\t\t\t\tcc.add_kirbi(kirbi)\n\t\t\n\t\treturn cc", "response": "Reads all. 
kirbi files in a given directory and converts all of them into one CCACHE object and returns it"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nconverting all credential objects in the CCACHE object to the kirbi file format used by mimikatz.", "response": "def to_kirbidir(self, directory_path):\n\t\t\"\"\"\n\t\tConverts all credential object in the CCACHE object to the kirbi file format used by mimikatz.\n\t\tThe kirbi file format supports one credential per file, so prepare for a lot of files being generated.\n\t\t\n\t\tdirectory_path: str the directory to write the kirbi files to\n\t\t\"\"\"\n\t\tkf_abs = os.path.abspath(directory_path)\n\t\tfor cred in self.credentials:\n\t\t\tkirbi, filename = cred.to_kirbi()\n\t\t\tfilename = '%s.kirbi' % filename.replace('..','!')\n\t\t\tfilepath = os.path.join(kf_abs, filename)\n\t\t\twith open(filepath, 'wb') as o:\n\t\t\t\to.write(kirbi.dump())"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef to_file(self, filename):\n\t\twith open(filename, 'wb') as f:\n\t\t\tf.write(self.to_bytes())", "response": "Writes the contents of the CCACHE object to a file"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nprinting a formatted table given a 2 dimensional array", "response": "def print_table(lines, separate_head=True):\n\t\"\"\"Prints a formatted table given a 2 dimensional array\"\"\"\n\t#Count the column width\n\twidths = []\n\tfor line in lines:\n\t\t\tfor i,size in enumerate([len(x) for x in line]):\n\t\t\t\t\twhile i >= len(widths):\n\t\t\t\t\t\t\twidths.append(0)\n\t\t\t\t\tif size > widths[i]:\n\t\t\t\t\t\t\twidths[i] = size\n\t \n\t#Generate the format string to pad the columns\n\tprint_string = \"\"\n\tfor i,width in enumerate(widths):\n\t\t\tprint_string += \"{\" + str(i) + \":\" + str(width) + \"} | \"\n\tif (len(print_string) == 0):\n\t\t\treturn\n\tprint_string = print_string[:-3]\n\t \n\t#Print 
the actual data\n\tfor i,line in enumerate(lines):\n\t\t\tprint(print_string.format(*line))\n\t\t\tif (i == 0 and separate_head):\n\t\t\t\t\tprint(\"-\"*(sum(widths)+3*(len(widths)-1)))"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_key_for_enctype(self, etype):\n\t\tif etype == EncryptionType.AES256_CTS_HMAC_SHA1_96:\n\t\t\tif self.kerberos_key_aes_256:\n\t\t\t\treturn bytes.fromhex(self.kerberos_key_aes_256)\n\t\t\tif self.password is not None:\n\t\t\t\tsalt = (self.domain.upper() + self.username).encode()\n\t\t\t\treturn string_to_key(Enctype.AES256, self.password.encode(), salt).contents\n\t\t\traise Exception('There is no key for AES256 encryption')\n\t\telif etype == EncryptionType.AES128_CTS_HMAC_SHA1_96:\n\t\t\tif self.kerberos_key_aes_128:\n\t\t\t\treturn bytes.fromhex(self.kerberos_key_aes_128)\n\t\t\tif self.password is not None:\n\t\t\t\tsalt = (self.domain.upper() + self.username).encode()\n\t\t\t\treturn string_to_key(Enctype.AES128, self.password.encode(), salt).contents\n\t\t\traise Exception('There is no key for AES128 encryption')\n\t\telif etype == EncryptionType.ARCFOUR_HMAC_MD5:\n\t\t\tif self.kerberos_key_rc4:\n\t\t\t\treturn bytes.fromhex(self.kerberos_key_rc4)\n\t\t\tif self.nt_hash:\n\t\t\t\treturn bytes.fromhex(self.nt_hash)\n\t\t\telif self.password:\n\t\t\t\tself.nt_hash = hashlib.new('md4', self.password.encode('utf-16-le')).hexdigest().upper()\n\t\t\t\treturn bytes.fromhex(self.nt_hash)\n\t\t\telse:\n\t\t\t\traise Exception('There is no key for RC4 encryption')\n\t\telif etype == EncryptionType.DES3_CBC_SHA1:\n\t\t\tif self.kerberos_key_des3:\n\t\t\t\treturn bytes.fromhex(self.kerberos_key_des)\n\t\t\telif self.password:\n\t\t\t\tsalt = (self.domain.upper() + self.username).encode()\n\t\t\t\treturn string_to_key(Enctype.DES3, self.password.encode(), salt).contents\n\t\t\telse:\n\t\t\t\traise Exception('There is no key for DES3 encryption')\n\t\t\t\t\n\t\telif etype == 
EncryptionType.DES_CBC_MD5: #etype == EncryptionType.DES_CBC_CRC or etype == EncryptionType.DES_CBC_MD4 or \n\t\t\tif self.kerberos_key_des:\n\t\t\t\treturn bytes.fromhex(self.kerberos_key_des)\n\t\t\telif self.password:\n\t\t\t\tsalt = (self.domain.upper() + self.username).encode()\n\t\t\t\treturn string_to_key(Enctype.DES_MD5, self.password.encode(), salt).contents\n\t\t\telse:\n\t\t\t\traise Exception('There is no key for DES3 encryption')\n\t\t\n\t\telse:\n\t\t\traise Exception('Unsupported encryption type: %s' % etype.name)", "response": "Returns the encryption key bytes for the enctryption type."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncreate a KerberosCredential object from a connection string.", "response": "def from_connection_string(s):\n\t\t\"\"\"\n\t\tCredential input format:\n\t\t//:@\n\t\t\"\"\"\n\t\tcred = KerberosCredential()\n\t\t\n\t\tcred.domain, t = s.split('/', 1)\n\t\tcred.username, t = t.split('/', 1)\n\t\tsecret_type, t = t.split(':', 1)\n\t\tsecret, target = t.rsplit('@', 1)\n\t\t\n\t\tst = KerberosSecretType(secret_type.upper())\n\t\tif st == KerberosSecretType.PASSWORD or st == KerberosSecretType.PW or st == KerberosSecretType.PASS:\n\t\t\tcred.password = secret\n\t\t\n\t\telif st == KerberosSecretType.NT or st == KerberosSecretType.RC4:\n\t\t\tcred.nt_hash = secret\n\t\t\tcred.kerberos_key_rc4 = secret\n\t\t\t\n\t\t\n\t\telif st == KerberosSecretType.AES:\n\t\t\tcred.kerberos_key_aes_256 = secret\n\t\t\tcred.kerberos_key_aes_128 = secret\n\t\t\n\t\telif st == KerberosSecretType.DES:\n\t\t\tcred.kerberos_key_des = secret\n\t\t\n\t\telif st == KerberosSecretType.DES3 or st == KerberosSecretType.TDES:\n\t\t\tcred.kerberos_key_des3 = secret\n\t\t\t\n\t\telif st == KerberosSecretType.CCACHE:\n\t\t\tcred.ccache = CCACHE.from_file(secret)\n\t\t\t\n\t\treturn cred"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef run(self, realm, 
users):\n\t\texisting_users = []\n\t\tfor user in users:\n\t\t\tlogging.debug('Probing user %s' % user)\n\t\t\treq = KerberosUserEnum.construct_tgt_req(realm, user)\n\t\t\trep = self.ksoc.sendrecv(req.dump(), throw = False)\n\t\t\t\n\t\t\tif rep.name != 'KRB_ERROR':\t\n\t\t\t\t# user doesnt need preauth, but it exists\n\t\t\t\texisting_users.append(user)\n\t\t\t\n\t\t\telif rep.native['error-code'] != KerberosErrorCode.KDC_ERR_PREAUTH_REQUIRED.value:\n\t\t\t\t# any other error means user doesnt exist\n\t\t\t\tcontinue\n\t\t\t\n\t\t\telse:\n\t\t\t\t# preauth needed, only if user exists\n\t\t\t\texisting_users.append(user)\n\n\t\treturn existing_users", "response": "This method is used to send a TGT request to the Kerberos server."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef run(self, creds, override_etype = [23]):\n\t\ttgts = []\n\t\tfor cred in creds:\n\t\t\ttry:\n\t\t\t\tkcomm = KerbrosComm(cred, self.ksoc)\n\t\t\t\tkcomm.get_TGT(override_etype = override_etype, decrypt_tgt = False)\n\t\t\t\ttgts.append(kcomm.kerberos_TGT)\n\t\t\texcept Exception as e:\n\t\t\t\tlogger.debug('Error while roasting client %s/%s Reason: %s' % (cred.domain, cred.username, str(e)))\n\t\t\t\tcontinue\n\n\t\tresults = []\n\t\tfor tgt in tgts:\n\t\t\tresults.append(TGTTicket2hashcat(tgt))\n\n\n\t\treturn results", "response": "This method is called by the client to request the TGT tickets for all users specified in the targets."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nrequests TGS tickets for all service users specified in the targets list targets: list : the SPN users to request the TGS tickets for allhash: bool : Return all enctype tickets, ot just 23", "response": "def run(self, targets, override_etype = [2, 3, 16, 23, 17, 18]):\n\t\t\"\"\"\n\t\tRequests TGS tickets for all service users specified in the targets list\n\t\ttargets: list : the SPN users to request 
 the TGS tickets for\n\t\tallhash: bool : Return all enctype tickets, not just 23\n\t\t\"\"\"\n\t\tif not self.kcomm:\n\t\t\ttry:\n\t\t\t\tself.kcomm = KerbrosComm(self.ccred, self.ksoc)\n\t\t\t\tself.kcomm.get_TGT()\n\t\t\texcept Exception as e:\n\t\t\t\tlogger.exception('Failed to get TGT ticket! Reason: %s' % str(e))\n\t\t\t\t\n\t\t\n\t\ttgss = []\n\t\tfor target in targets:\n\t\t\ttry:\n\t\t\t\ttgs, encTGSRepPart, key = self.kcomm.get_TGS(target, override_etype = override_etype)\n\t\t\t\ttgss.append(tgs)\n\t\t\texcept Exception as e:\n\t\t\t\tlogger.debug('Failed to get TGS ticket for user %s/%s/%s! Reason: %s' % (target.domain, str(target.service), target.username, str(e)))\n\t\t\t\tcontinue\n\n\t\tresults = []\n\t\tfor tgs in tgss:\n\t\t\tresults.append(TGSTicket2hashcat(tgs))\n\n\n\t\treturn results"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef pbkdf2(password, salt, iters, keylen, digestmod = hashlib.sha1):\n\th = hmac.new(password, digestmod=digestmod)\n\tdef prf(data):\n\t\thm = h.copy()\n\t\thm.update(data)\n\t\treturn bytearray(hm.digest())\n\n\tkey = bytearray()\n\ti = 1\n\twhile len(key) < keylen:\n\t\tT = U = prf(salt + struct.pack('>i', i))\n\t\tfor _ in range(iters - 1):\n\t\t\tU = prf(U)\n\t\t\tT = bytearray(x ^ y for x, y in zip(T, U))\n\t\tkey += T\n\t\ti += 1\n\n\treturn key[:keylen]", "response": "Run the PBKDF2 algorithm and return the derived key."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncreates a KerberosSocket object from a connection string.", "response": "def from_connection_string(s, soc_type = KerberosSocketType.TCP):\n\t\t\"\"\"\n\t\t@:\n\t\t\"\"\"\n\t\t\n\t\tip = None\n\t\tport = 88\n\t\tt, addr = s.rsplit('@')\n\t\tif addr.find(':') == -1:\n\t\t\tip = addr\n\t\telse:\n\t\t\tip, port = addr.split(':')\n\t\t\t\n\t\treturn KerberosSocket(ip, port = int(port), soc_type = soc_type)"} {"SOURCE": "codesearchnet", 
"instruction": "Implement a function in Python 3 to\nset up the kerberos object from tgt and the session key.", "response": "def from_tgt(ksoc, tgt, key):\n\t\t\"\"\"\n\t\tSets up the kerberos object from tgt and the session key.\n\t\tUse this function when pulling the TGT from ccache file.\n\t\t\"\"\"\n\t\tkc = KerbrosComm(None, ksoc)\n\t\tkc.kerberos_TGT = tgt\n\t\t\n\t\tkc.kerberos_cipher_type = key['keytype']\n\t\tkc.kerberos_session_key = Key(kc.kerberos_cipher_type, key['keyvalue']) \n\t\tkc.kerberos_cipher = _enctype_table[kc.kerberos_cipher_type]\n\t\treturn kc"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_TGS(self, spn_user, override_etype = None):\n\t\t#construct tgs_req\n\t\tlogger.debug('Constructing TGS request for user %s' % spn_user.get_formatted_pname())\n\t\tnow = datetime.datetime.utcnow() \n\t\tkdc_req_body = {}\n\t\tkdc_req_body['kdc-options'] = KDCOptions(set(['forwardable','renewable','renewable_ok', 'canonicalize']))\n\t\tkdc_req_body['realm'] = spn_user.domain.upper()\n\t\tkdc_req_body['sname'] = PrincipalName({'name-type': NAME_TYPE.SRV_INST.value, 'name-string': spn_user.get_principalname()})\n\t\tkdc_req_body['till'] = now + datetime.timedelta(days=1)\n\t\tkdc_req_body['nonce'] = secrets.randbits(31)\n\t\tif override_etype:\n\t\t\tkdc_req_body['etype'] = override_etype\n\t\telse:\n\t\t\tkdc_req_body['etype'] = [self.kerberos_cipher_type]\n\n\t\tauthenticator_data = {}\n\t\tauthenticator_data['authenticator-vno'] = krb5_pvno\n\t\tauthenticator_data['crealm'] = Realm(self.kerberos_TGT['crealm'])\n\t\tauthenticator_data['cname'] = self.kerberos_TGT['cname']\n\t\tauthenticator_data['cusec'] = now.microsecond\n\t\tauthenticator_data['ctime'] = now\n\t\t\n\t\tauthenticator_data_enc = self.kerberos_cipher.encrypt(self.kerberos_session_key, 7, Authenticator(authenticator_data).dump(), None)\n\t\t\n\t\tap_req = {}\n\t\tap_req['pvno'] = krb5_pvno\n\t\tap_req['msg-type'] = 
MESSAGE_TYPE.KRB_AP_REQ.value\n\t\tap_req['ap-options'] = APOptions(set())\n\t\tap_req['ticket'] = Ticket(self.kerberos_TGT['ticket'])\n\t\tap_req['authenticator'] = EncryptedData({'etype': self.kerberos_cipher_type, 'cipher': authenticator_data_enc})\n\t\t\n\t\tpa_data_1 = {}\n\t\tpa_data_1['padata-type'] = PaDataType.TGS_REQ.value\n\t\tpa_data_1['padata-value'] = AP_REQ(ap_req).dump()\n\t\t\n\t\t\n\t\tkdc_req = {}\n\t\tkdc_req['pvno'] = krb5_pvno\n\t\tkdc_req['msg-type'] = MESSAGE_TYPE.KRB_TGS_REQ.value\n\t\tkdc_req['padata'] = [pa_data_1]\n\t\tkdc_req['req-body'] = KDC_REQ_BODY(kdc_req_body)\n\t\t\n\t\treq = TGS_REQ(kdc_req)\n\t\tlogger.debug('Constructing TGS request to server')\n\t\trep = self.ksoc.sendrecv(req.dump())\n\t\tlogger.debug('Got TGS reply, decrypting...')\n\t\ttgs = rep.native\n\t\t\n\t\tencTGSRepPart = EncTGSRepPart.load(self.kerberos_cipher.decrypt(self.kerberos_session_key, 8, tgs['enc-part']['cipher'])).native\n\t\tkey = Key(encTGSRepPart['key']['keytype'], encTGSRepPart['key']['keyvalue'])\n\t\t\n\t\tself.ccache.add_tgs(tgs, encTGSRepPart)\n\t\tlogger.debug('Got valid TGS reply')\n\t\tself.kerberos_TGS = tgs\n\t\treturn tgs, encTGSRepPart, key", "response": "This method returns a TGS ticket for the specified user."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef minimise(table, target_length, check_for_aliases=True):\n # If alias checking is required, see if we can cheaply prove that no\n # aliases exist in the table to skip this costly check.\n if check_for_aliases:\n # Aliases cannot exist when all entries share the same mask and all\n # keys are unique.\n if len(set(e.mask for e in table)) == 1 and \\\n len(table) == len(set(e.key for e in table)):\n check_for_aliases = False\n\n # Generate a new table with default-route entries removed\n new_table = list()\n for i, entry in enumerate(table):\n if not _is_defaultable(i, entry, table, check_for_aliases):\n # If the entry cannot be removed then 
add it to the table\n new_table.append(entry)\n\n # If the resultant table is larger than the target raise an exception\n if target_length is not None and target_length < len(new_table):\n raise MinimisationFailedError(target_length, len(new_table))\n\n return new_table", "response": "Minimise the routing table."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _is_defaultable(i, entry, table, check_for_aliases=True):\n # May only have one source and sink (which may not be None)\n if (len(entry.sources) == 1 and\n len(entry.route) == 1 and\n None not in entry.sources):\n # Neither the source nor sink may be a core\n source = next(iter(entry.sources))\n sink = next(iter(entry.route))\n\n if source.is_link and sink.is_link:\n # The source must be going in the same direction as the link\n if source.opposite is sink:\n # And the entry must not be aliased\n key, mask = entry.key, entry.mask\n if not check_for_aliases or \\\n not any(intersect(key, mask, d.key, d.mask) for\n d in table[i+1:]):\n return True\n return False", "response": "Determine whether an entry may be removed from a routing table and replaced by a default route."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nconvert a set of routing trees into a set of routing tables.", "response": "def routing_tree_to_tables(routes, net_keys):\n \"\"\"Convert a set of\n :py:class:`~rig.place_and_route.routing_tree.RoutingTree` s into a per-chip\n set of routing tables.\n\n .. warning::\n A :py:exc:`rig.routing_table.MultisourceRouteError` will\n be raised if entries with identical keys and masks but with differing\n routes are generated. This is not a perfect test, entries which would\n otherwise collide are not spotted.\n\n .. warning::\n The routing trees provided are assumed to be correct and continuous\n (not missing any hops). If this is not the case, the output is\n undefined.\n\n .. 
note::\n If a routing tree has a terminating vertex whose route is set to None,\n that vertex is ignored.\n\n Parameters\n ----------\n routes : {net: :py:class:`~rig.place_and_route.routing_tree.RoutingTree`, \\\n ...}\n The complete set of RoutingTrees representing all routes in the system.\n (Note: this is the same data structure produced by routers in the\n :py:mod:`~rig.place_and_route` module.)\n net_keys : {net: (key, mask), ...}\n The key and mask associated with each net.\n\n Returns\n -------\n {(x, y): [:py:class:`~rig.routing_table.RoutingTableEntry`, ...]\n \"\"\"\n # Pairs of inbound and outbound routes.\n InOutPair = namedtuple(\"InOutPair\", \"ins, outs\")\n\n # {(x, y): {(key, mask): _InOutPair}}\n route_sets = defaultdict(OrderedDict)\n\n for net, routing_tree in iteritems(routes):\n key, mask = net_keys[net]\n\n # The direction is the Links entry which describes the direction in\n # which we last moved to reach the node (or None for the root).\n for direction, (x, y), out_directions in routing_tree.traverse():\n # Determine the in_direction\n in_direction = direction\n if in_direction is not None:\n in_direction = direction.opposite\n\n # Add a routing entry\n if (key, mask) in route_sets[(x, y)]:\n # If there is an existing route set raise an error if the out\n # directions are not equivalent.\n if route_sets[(x, y)][(key, mask)].outs != out_directions:\n raise MultisourceRouteError(key, mask, (x, y))\n\n # Otherwise, add the input directions as this represents a\n # merge of the routes.\n route_sets[(x, y)][(key, mask)].ins.add(in_direction)\n else:\n # Otherwise create a new route set\n route_sets[(x, y)][(key, mask)] = \\\n InOutPair({in_direction}, set(out_directions))\n\n # Construct the routing tables from the route sets\n routing_tables = defaultdict(list)\n for (x, y), routes in iteritems(route_sets):\n for (key, mask), route in iteritems(routes):\n # Add the route\n routing_tables[(x, y)].append(\n RoutingTableEntry(route.outs, key, 
mask, route.ins)\n )\n\n return routing_tables"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nbuild a dictionary of target routing table lengths from a system info object.", "response": "def build_routing_table_target_lengths(system_info):\n \"\"\"Build a dictionary of target routing table lengths from a\n :py:class:`~rig.machine_control.machine_controller.SystemInfo` object.\n\n Useful in conjunction with :py:func:`~rig.routing_table.minimise_tables`.\n\n Returns\n -------\n {(x, y): num, ...}\n A dictionary giving the number of free routing table entries on each\n chip on a SpiNNaker system.\n\n .. note::\n The actual number of entries reported is the size of the largest\n contiguous free block of routing entries in the routing table.\n \"\"\"\n return {\n (x, y): ci.largest_free_rtr_mc_block\n for (x, y), ci in iteritems(system_info)\n }"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef table_is_subset_of(entries_a, entries_b):\n # Determine which bits we don't need to explicitly test for\n common_xs = get_common_xs(entries_b)\n\n # For every entry in the first table\n for entry in expand_entries(entries_a, ignore_xs=common_xs):\n # Look at every entry in the second table\n for other_entry in entries_b:\n # If the first entry matches the second\n if other_entry.mask & entry.key == other_entry.key:\n if other_entry.route == entry.route:\n # If the route is the same then we move on to the next\n # entry in the first table.\n break\n else:\n # Otherwise we return false as the tables are different\n return False\n else:\n # If we didn't break out of the loop then the entry from the first\n # table never matched an entry in the second table. 
If the entry\n # from the first table could not be default routed we return False\n # as the tables cannot be equivalent.\n default_routed = False\n\n if len(entry.route) == 1 and len(entry.sources) == 1:\n source = next(iter(entry.sources))\n sink = next(iter(entry.route))\n\n if (source is not None and\n sink.is_link and\n source is sink.opposite):\n default_routed = True\n\n if not default_routed:\n return False\n\n return True", "response": "Check that every entry in one table results in the\n same route when checked against the other table."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef expand_entry(entry, ignore_xs=0x0):\n # Get all the Xs in the entry that are not ignored\n xs = (~entry.key & ~entry.mask) & ~ignore_xs\n\n # Find the most significant X\n for bit in (1 << i for i in range(31, -1, -1)):\n if bit & xs:\n # Yield all the entries with this bit set as 0\n entry_0 = RoutingTableEntry(entry.route, entry.key,\n entry.mask | bit, entry.sources)\n for new_entry in expand_entry(entry_0, ignore_xs):\n yield new_entry\n\n # And yield all the entries with this bit set as 1\n entry_1 = RoutingTableEntry(entry.route, entry.key | bit,\n entry.mask | bit, entry.sources)\n for new_entry in expand_entry(entry_1, ignore_xs):\n yield new_entry\n\n # Stop looking for Xs\n break\n else:\n # If there are no Xs then yield the entry we were given.\n yield entry", "response": "Expand the entries in the routing table into a single tree of entries."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef expand_entries(entries, ignore_xs=None):\n # Find the common Xs for the entries\n if ignore_xs is None:\n ignore_xs = get_common_xs(entries)\n\n # Keep a track of keys that we've seen\n seen_keys = set({})\n\n # Expand each entry in turn\n for entry in entries:\n for new_entry in expand_entry(entry, ignore_xs):\n if new_entry.key in seen_keys:\n # We've 
already used this key, warn that the table is\n # over-complete.\n warnings.warn(\"Table is not orthogonal: Key {:#010x} matches \"\n \"multiple entries.\".format(new_entry.key))\n else:\n # Mark the key as seen and yield the new entry\n seen_keys.add(new_entry.key)\n yield new_entry", "response": "Expand the given list of entries into a single set of entries."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_common_xs(entries):\n # Determine where there are never 1s in the key and mask\n key = 0x00000000\n mask = 0x00000000\n\n for entry in entries:\n key |= entry.key\n mask |= entry.mask\n\n # Where there are never 1s in the key or the mask there are Xs which are\n # common to all entries.\n return (~(key | mask)) & 0xffffffff", "response": "Return a mask of where there are common Xs in all entries."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef slices_overlap(slice_a, slice_b):\n assert slice_a.step is None\n assert slice_b.step is None\n\n return max(slice_a.start, slice_b.start) \\\n < min(slice_a.stop, slice_b.stop)", "response": "Test if the ranges covered by a pair of slices overlap."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nminimises an ( x y z ) coordinate.", "response": "def minimise_xyz(xyz):\n \"\"\"Minimise an (x, y, z) coordinate.\"\"\"\n x, y, z = xyz\n m = max(min(x, y), min(max(x, y), z))\n return (x-m, y-m, z-m)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ncalculate the standard network dimensions for a full torus system.", "response": "def standard_system_dimensions(num_boards):\n \"\"\"Calculate the standard network dimensions (in chips) for a full torus\n system with the specified number of SpiNN-5 boards.\n\n Returns\n -------\n (w, h)\n Width and height of the network in chips.\n\n Standard SpiNNaker systems are 
constructed as squarely as possible\n given the number of boards available. When a square system cannot be\n made, the function prefers wider systems over taller systems.\n\n Raises\n ------\n ValueError\n If the number of boards is not a multiple of three.\n \"\"\"\n # Special case to avoid division by 0\n if num_boards == 0:\n return (0, 0)\n\n # Special case: meaningful systems with 1 board can exist\n if num_boards == 1:\n return (8, 8)\n\n if num_boards % 3 != 0:\n raise ValueError(\"{} is not a multiple of 3\".format(num_boards))\n\n # Find the largest pair of factors to discover the squarest system in terms\n # of triads of boards.\n for h in reversed( # pragma: no branch\n range(1, int(sqrt(num_boards // 3)) + 1)):\n if (num_boards // 3) % h == 0:\n break\n\n w = (num_boards // 3) // h\n\n # Convert the number of triads into numbers of chips (each triad of boards\n # contributes as 12x12 block of chips).\n return (w * 12, h * 12)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef spinn5_eth_coords(width, height, root_x=0, root_y=0):\n # In oddly-shaped machines where chip (0, 0) does not exist, we must offset\n # the coordinates returned in accordance with the root chip's location.\n root_x %= 12\n root_y %= 12\n\n # Internally, work with the width and height rounded up to the next\n # multiple of 12\n w = ((width + 11) // 12) * 12\n h = ((height + 11) // 12) * 12\n\n for x in range(0, w, 12):\n for y in range(0, h, 12):\n for dx, dy in ((0, 0), (4, 8), (8, 4)):\n nx = (x + dx + root_x) % w\n ny = (y + dy + root_y) % h\n # Skip points which are outside the range available\n if nx < width and ny < height:\n yield (nx, ny)", "response": "Generate a list of board coordinates with Ethernet connectivity in a SpiNNaker machine."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef spinn5_local_eth_coord(x, y, w, h, root_x=0, 
root_y=0):\n dx, dy = SPINN5_ETH_OFFSET[(y - root_y) % 12][(x - root_x) % 12]\n return ((x + int(dx)) % w), ((y + int(dy)) % h)", "response": "Returns the coordinates of a chip's local Ethernet-connected chip on the same board as the supplied chip."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef spinn5_chip_coord(x, y, root_x=0, root_y=0):\n dx, dy = SPINN5_ETH_OFFSET[(y - root_y) % 12][(x - root_x) % 12]\n return (-int(dx), -int(dy))", "response": "Calculates the coordinates of a chip on its board."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn the identity of the FPGA link which corresponds with the supplied link.", "response": "def spinn5_fpga_link(x, y, link, root_x=0, root_y=0):\n \"\"\"Get the identity of the FPGA link which corresponds with the supplied\n link.\n\n .. note::\n This function assumes the system is constructed from SpiNN-5 boards\n whose FPGAs are loaded with the SpI/O 'spinnaker_fpgas' image.\n\n Parameters\n ----------\n x, y : int\n The chip whose link is of interest.\n link : :py:class:`~rig.links.Link`\n The link of interest.\n root_x, root_y : int\n The coordinates of the root chip (i.e. the chip used to boot the\n machine), e.g. from\n :py:attr:`rig.machine_control.MachineController.root_chip`.\n\n Returns\n -------\n (fpga_num, link_num) or None\n If not None, the link supplied passes through an FPGA link. The\n returned tuple indicates the FPGA responsible for the sending-side of\n the link.\n\n `fpga_num` is the number (0, 1 or 2) of the FPGA responsible for the\n link.\n\n `link_num` indicates which of the sixteen SpiNNaker links (0 to 15)\n into an FPGA is being used. 
Links 0-7 are typically handled by S-ATA\n link 0 and 8-15 are handled by S-ATA link 1.\n\n Returns None if the supplied link does not pass through an FPGA.\n \"\"\"\n x, y = spinn5_chip_coord(x, y, root_x, root_y)\n return SPINN5_FPGA_LINKS.get((x, y, link))"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nLoading the CSV package resource from a source resource.", "response": "def _load_resource(self, source_r, abs_path=False):\n \"\"\"The CSV package has no resources, so we just need to resolve the URLs to them. Usually, the\n CSV package is built from a file system package on a publicly accessible server. \"\"\"\n\n r = self.doc.resource(source_r.name)\n\n r.url = self.resource_root.join(r.url).inner"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _reset(self):\n self._arbiter = None\n self._workers = None\n self._working_queue = None\n self._must_stop = threading.Event()", "response": "Reset the internal state of the thread."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nattaches a set of tasks to an Engine.", "response": "def attach_tasks(self, tasks: Tasks):\n \"\"\"Attach a set of tasks.\n\n A task cannot be scheduled or executed before it is attached to an\n Engine.\n\n >>> tasks = Tasks()\n >>> spin.attach_tasks(tasks)\n \"\"\"\n if tasks._spin is not None and tasks._spin is not self:\n logger.warning('Tasks already attached to a different Engine')\n self._tasks.update(tasks)\n tasks._spin = self"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef schedule_at(self, task: Schedulable, at: datetime, *args, **kwargs):\n task = self._tasks.get(task)\n job = Job(task.name, task.queue, at, task.max_retries, task_args=args,\n task_kwargs=kwargs)\n return self._broker.enqueue_jobs([job])", "response": "Schedule a job to be executed in the background."} 
{"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nscheduling many jobs at once.", "response": "def schedule_batch(self, batch: Batch):\n \"\"\"Schedule many jobs at once.\n\n Scheduling jobs in batches allows to enqueue them fast by avoiding\n round-trips to the broker.\n\n :arg batch: :class:`Batch` instance containing jobs to schedule\n \"\"\"\n jobs = list()\n for task, at, args, kwargs in batch.jobs_to_create:\n task = self._tasks.get(task)\n jobs.append(\n Job(task.name, task.queue, at, task.max_retries,\n task_args=args, task_kwargs=kwargs)\n )\n return self._broker.enqueue_jobs(jobs)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef start_workers(self, number: int=DEFAULT_WORKER_NUMBER,\n queue=DEFAULT_QUEUE, block=True,\n stop_when_queue_empty=False):\n \"\"\"Start the worker threads.\n\n :arg number: number of worker threads to launch\n :arg queue: name of the queue to consume, see :doc:`queues`\n :arg block: whether to block the calling thread until a signal arrives\n and workers get terminated\n :arg stop_when_queue_empty: automatically stop the workers when the\n queue is empty. 
Useful mostly for one-off scripts and testing.\n \"\"\"\n if self._arbiter or self._workers:\n raise RuntimeError('Workers are already running')\n\n self._working_queue = queue\n\n tasks_names = '\\n'.join(\n [' - ' + task.name for task in self._tasks.tasks.values()\n if task.queue == self._working_queue]\n )\n logger.info('Starting %d workers on queue \"%s\" with tasks:\\n%s',\n number, self._working_queue, tasks_names)\n\n # Start the broker\n self._broker.start()\n\n # Start workers\n self._workers = Workers(\n num_workers=number,\n namespace=self.namespace,\n )\n\n # Start the result notifier\n self._result_notifier = threading.Thread(\n target=run_forever,\n args=(self._result_notifier_func, self._must_stop, logger),\n name='{}-result-notifier'.format(self.namespace)\n )\n self._result_notifier.start()\n\n # Start the arbiter\n self._arbiter = threading.Thread(\n target=run_forever,\n args=(self._arbiter_func, self._must_stop, logger,\n stop_when_queue_empty),\n name='{}-arbiter'.format(self.namespace)\n )\n self._arbiter.start()\n\n if block:\n with handle_sigterm():\n try:\n self._arbiter.join()\n except KeyboardInterrupt:\n self.stop_workers()\n except AttributeError:\n # Arbiter thread starts and stops immediately when ran with\n # `stop_when_queue_empty` and queue is already empty.\n pass", "response": "Start the workers threads."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef stop_workers(self, _join_arbiter=True):\n # _join_arbiter is used internally when the arbiter is shutting down\n # the full engine itself. 
This is because the arbiter thread cannot\n # join itself.\n self._must_stop.set()\n self._workers.stop()\n self._result_notifier.join()\n self._broker.stop()\n if _join_arbiter:\n self._arbiter.join()\n self._reset()", "response": "Stop the workers and wait for them to terminate."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef handle_trunks(self, trunks, event_type):\n\n LOG.debug(\"Trunks event received: %(event_type)s. Trunks: %(trunks)s\",\n {'event_type': event_type, 'trunks': trunks})\n\n if event_type == events.DELETED:\n # The port trunks have been deleted. Remove them from cache.\n for trunk in trunks:\n self._trunks.pop(trunk.id, None)\n else:\n for trunk in trunks:\n self._trunks[trunk.id] = trunk\n self._setup_trunk(trunk)", "response": "Handle trunks from the server."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef handle_subports(self, subports, event_type):\n\n LOG.debug(\"Subports event received: %(event_type)s. 
\"\n \"Subports: %(subports)s\",\n {'event_type': event_type, 'subports': subports})\n\n # update the cache.\n if event_type == events.CREATED:\n for subport in subports:\n trunk = self._trunks.get(subport['trunk_id'])\n if trunk:\n trunk.sub_ports.append(subport)\n elif event_type == events.DELETED:\n for subport in subports:\n trunk = self._trunks.get(subport['trunk_id'])\n if trunk and subport in trunk.sub_ports:\n trunk.sub_ports.remove(subport)\n\n # update the bound trunks.\n affected_trunk_ids = set([s['trunk_id'] for s in subports])\n for trunk_id in affected_trunk_ids:\n trunk = self._trunks.get(trunk_id)\n if trunk:\n self._setup_trunk(trunk)", "response": "Handle the subports event."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _setup_trunk(self, trunk, vlan_id=None):\n\n LOG.info('Binding trunk port: %s.', trunk)\n try:\n # bind sub_ports to host.\n self._trunk_rpc.update_subport_bindings(self._context,\n trunk.sub_ports)\n\n vlan_trunk = [s.segmentation_id for s in trunk.sub_ports]\n self._set_port_vlan(trunk.port_id, vlan_id, vlan_trunk)\n\n self._trunk_rpc.update_trunk_status(self._context, trunk.id,\n t_const.ACTIVE_STATUS)\n except Exception:\n # something broke\n LOG.exception(\"Failure setting up subports for %s\", trunk.port_id)\n self._trunk_rpc.update_trunk_status(self._context, trunk.id,\n t_const.DEGRADED_STATUS)", "response": "Sets up VLAN trunk and updates the trunk status."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ngetting the port profile ID from the request path.", "response": "def _get_port_profile_id(self, request):\n \"\"\"Get the port profile ID from the request path.\"\"\"\n\n # Note(alexcoman): The port profile ID can be found as suffix\n # in request path.\n port_profile_id = request.path.split(\"/\")[-1].strip()\n if uuidutils.is_uuid_like(port_profile_id):\n LOG.debug(\"The instance id was found in request path.\")\n 
return port_profile_id\n\n LOG.debug(\"Failed to get the instance id from the request.\")\n return None"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _setup_rpc(self):\n self._state_rpc = agent_rpc.PluginReportStateAPI(topics.REPORTS)\n report_interval = CONF.AGENT.report_interval\n if report_interval:\n heartbeat = loopingcall.FixedIntervalLoopingCall(\n self._report_state)\n heartbeat.start(interval=report_interval)", "response": "Setup the RPC client for the current agent."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _work(self):\n server = wsgi.Server(\n name=self._AGENT_BINARY,\n num_threads=CONF.AGENT.worker_count)\n server.start(\n application=_MetadataProxyHandler(),\n port=CONF.bind_port,\n host=CONF.bind_host)\n server.wait()", "response": "Start the neutron - hnv - metadata - proxy agent."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nsets the Azure web info for the resource group.", "response": "def set_azure_web_info(self, resource_group_name, website_name, credentials,\n subscription_id, subscription_name, tenant_id, webapp_location):\n \"\"\"\n Call this method before attempting to setup continuous delivery to setup the azure settings\n :param resource_group_name:\n :param website_name:\n :param credentials:\n :param subscription_id:\n :param subscription_name:\n :param tenant_id:\n :param webapp_location:\n :return:\n \"\"\"\n self._azure_info.resource_group_name = resource_group_name\n self._azure_info.website_name = website_name\n self._azure_info.credentials = credentials\n self._azure_info.subscription_id = subscription_id\n self._azure_info.subscription_name = subscription_name\n self._azure_info.tenant_id = tenant_id\n self._azure_info.webapp_location = webapp_location"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nsets the 
repository info for the current source control", "response": "def set_repository_info(self, repo_url, branch, git_token, private_repo_username, private_repo_password):\n \"\"\"\n Call this method before attempting to setup continuous delivery to setup the source control settings\n :param repo_url: URL of the code repo\n :param branch: repo branch\n :param git_token: git token\n :param private_repo_username: private repo username\n :param private_repo_password: private repo password\n :return:\n \"\"\"\n self._repo_info.url = repo_url\n self._repo_info.branch = branch\n self._repo_info.git_token = git_token\n self._repo_info._private_repo_username = private_repo_username\n self._repo_info._private_repo_password = private_repo_password"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nuses this method to setup Continuous Delivery of an Azure web site from a source control repository. :param swap_with_slot: the slot to use for deployment :param app_type_details: the details of app that will be deployed. i.e. app_type = Python, python_framework = Django etc. :param cd_project_url: CD Project url in the format of https://.visualstudio.com/ :param create_account: Boolean value to decide if account need to be created or not :param vsts_app_auth_token: Authentication token for vsts app :param test: Load test webapp name :param webapp_list: Existing webapp list :return: a message indicating final status and instructions for the user", "response": "def setup_continuous_delivery(self, swap_with_slot, app_type_details, cd_project_url, create_account,\n vsts_app_auth_token, test, webapp_list):\n \"\"\"\n Use this method to setup Continuous Delivery of an Azure web site from a source control repository.\n :param swap_with_slot: the slot to use for deployment\n :param app_type_details: the details of app that will be deployed. i.e. 
app_type = Python, python_framework = Django etc.\n :param cd_project_url: CD Project url in the format of https://.visualstudio.com/ \n :param create_account: Boolean value to decide if account need to be created or not\n :param vsts_app_auth_token: Authentication token for vsts app\n :param test: Load test webapp name\n :param webapp_list: Existing webapp list\n :return: a message indicating final status and instructions for the user\n \"\"\"\n\n branch = self._repo_info.branch or 'refs/heads/master'\n self._validate_cd_project_url(cd_project_url)\n vsts_account_name = self._get_vsts_account_name(cd_project_url)\n\n # Verify inputs before we start generating tokens\n source_repository, account_name, team_project_name = self._get_source_repository(self._repo_info.url,\n self._repo_info.git_token, branch, self._azure_info.credentials, \n self._repo_info._private_repo_username, self._repo_info._private_repo_password)\n self._verify_vsts_parameters(vsts_account_name, source_repository)\n vsts_account_name = vsts_account_name or account_name\n cd_project_name = team_project_name or self._azure_info.website_name\n account_url = 'https://{}.visualstudio.com'.format(quote(vsts_account_name))\n portalext_account_url = 'https://{}.portalext.visualstudio.com'.format(quote(vsts_account_name))\n\n # VSTS Account using AEX APIs\n account_created = False\n if create_account:\n self.create_vsts_account(self._azure_info.credentials, vsts_account_name)\n account_created = True\n \n # Create ContinuousDelivery client\n cd = ContinuousDelivery('3.2-preview.1', portalext_account_url, self._azure_info.credentials)\n\n # Construct the config body of the continuous delivery call\n build_configuration = self._get_build_configuration(app_type_details)\n source = ProvisioningConfigurationSource('codeRepository', source_repository, build_configuration)\n auth_info = AuthorizationInfo('Headers', AuthorizationInfoParameters('Bearer ' + vsts_app_auth_token))\n target = 
self.get_provisioning_configuration_target(auth_info, swap_with_slot, test, webapp_list)\n ci_config = CiConfiguration(CiArtifact(name=cd_project_name))\n config = ProvisioningConfiguration(None, source, target, ci_config)\n\n # Configure the continuous deliver using VSTS as a backend\n response = cd.provisioning_configuration(config)\n if response.ci_configuration.result.status == 'queued':\n final_status = self._wait_for_cd_completion(cd, response)\n return self._get_summary(final_status, account_url, vsts_account_name, account_created, self._azure_info.subscription_id,\n self._azure_info.resource_group_name, self._azure_info.website_name)\n else:\n raise RuntimeError('Unknown status returned from provisioning_configuration: ' + response.ci_configuration.result.status)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef longest_dimension_first(vector, start=(0, 0), width=None, height=None):\n x, y = start\n\n out = []\n\n for dimension, magnitude in sorted(enumerate(vector),\n key=(lambda x:\n abs(x[1]) + random.random()),\n reverse=True):\n if magnitude == 0:\n break\n\n # Advance in the specified direction\n sign = 1 if magnitude > 0 else -1\n for _ in range(abs(magnitude)):\n if dimension == 0:\n dx, dy = sign, 0\n elif dimension == 1:\n dx, dy = 0, sign\n elif dimension == 2: # pragma: no branch\n dx, dy = -sign, -sign\n\n x += dx\n y += dy\n\n # Wrap-around if required\n if width is not None:\n x %= width\n if height is not None:\n y %= height\n\n direction = Links.from_vector((dx, dy))\n\n out.append((direction, (x, y)))\n\n return out", "response": "List the steps on a longest - dimension first route."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns the set of working links between chips a and b.", "response": "def links_between(a, b, machine):\n \"\"\"Get the set of working links connecting chips a and b.\n\n Parameters\n ----------\n a : (x, y)\n 
b : (x, y)\n machine : :py:class:`~rig.place_and_route.Machine`\n\n Returns\n -------\n set([:py:class:`~rig.links.Links`, ...])\n \"\"\"\n ax, ay = a\n bx, by = b\n return set(link for link, (dx, dy) in ((l, l.to_vector()) for l in Links)\n if (ax + dx) % machine.width == bx and\n (ay + dy) % machine.height == by and\n (ax, ay, link) in machine)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nloads the credentials for an s3 profile into environmental variables", "response": "def set_s3_profile(profile_name):\n \"\"\"Load the credentials for an s3 profile into environmental variables\"\"\"\n import os\n\n session = boto3.Session(profile_name=profile_name)\n\n os.environ['AWS_ACCESS_KEY_ID'] = session.get_credentials().access_key\n os.environ['AWS_SECRET_ACCESS_KEY'] = session.get_credentials().secret_key"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef create_new_model(reference,\n duration, \n dt=0.025, # ms \n temperature=6.3, # degC\n default_region=None,\n parameters = None,\n cell_for_default_population=None,\n color_for_default_population='0.8 0 0',\n input_for_default_population=None,\n synapses=[],\n simulation_seed=12345,\n network_filename=None,\n simulation_filename=None):\n \n ################################################################################\n ### Build a new network\n\n net = Network(id=reference)\n net.notes = \"A network model: %s\"%reference\n net.temperature = temperature # degC\n if parameters:\n net.parameters = parameters\n \n\n ################################################################################\n ### Add some regions\n \n if default_region:\n if type(default_region)==str:\n r1 = RectangularRegion(id=default_region, x=0,y=0,z=0,width=1000,height=100,depth=1000)\n net.regions.append(r1)\n default_region = r1\n else:\n net.regions.append(default_region)\n\n\n 
################################################################################\n ### Add some cells\n\n if cell_for_default_population:\n net.cells.append(cell_for_default_population)\n\n\n ################################################################################\n ### Add some synapses\n \n for s in synapses:\n net.synapses.append(s)\n\n\n\n ################################################################################\n ### Add some populations\n\n if cell_for_default_population:\n pop = Population(id='pop_%s'%cell_for_default_population.id, \n size=1, \n component=cell_for_default_population.id, \n properties={'color':color_for_default_population})\n\n if default_region:\n pop.region = default_region\n \n pop.random_layout = RandomLayout(region=default_region.id)\n\n net.populations.append(pop)\n \n\n\n ################################################################################\n ### Add a projection\n\n '''\n net.projections.append(Projection(id='proj0',\n presynaptic=p0.id, \n postsynaptic=p1.id,\n synapse='ampa'))\n\n net.projections[0].random_connectivity=RandomConnectivity(probability=0.5)'''\n \n \n ################################################################################\n ### Add some inputs\n \n if input_for_default_population:\n\n net.input_sources.append(input_for_default_population)\n\n \n net.inputs.append(Input(id='Stim_%s'%input_for_default_population.id,\n input_source=input_for_default_population.id,\n population=pop.id,\n percentage=100))\n \n\n\n ################################################################################\n ### Save to JSON format\n\n net.id = reference\n\n print(net.to_json())\n if network_filename==None:\n network_filename='%s.json'%net.id\n new_file = net.to_json_file(network_filename)\n \n\n ################################################################################\n ### Build Simulation object & save as JSON\n\n sim = Simulation(id='Sim_%s'%reference,\n network=new_file,\n 
duration=duration,\n dt=dt,\n seed=simulation_seed,\n recordTraces={'all':'*'})\n\n if simulation_filename==None:\n simulation_filename='%s.json'%sim.id\n sim.to_json_file(simulation_filename)\n \n return sim, net", "response": "Create a new network and return it"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nemits this signal on behalf of sender passing on kwargs.", "response": "def send(self, *sender, **kwargs):\n \"\"\"Emit this signal on behalf of `sender`, passing on kwargs.\n\n This is an extension of `Signal.send` that changes one thing:\n Exceptions raised in calling the receiver are logged but do not fail\n \"\"\"\n if len(sender) == 0:\n sender = None\n elif len(sender) > 1:\n raise TypeError('send() accepts only one positional argument, '\n '%s given' % len(sender))\n else:\n sender = sender[0]\n\n if not self.receivers:\n return []\n\n rv = list()\n for receiver in self.receivers_for(sender):\n try:\n rv.append((receiver, receiver(sender, **kwargs)))\n except Exception:\n logger.exception('Error while dispatching signal \"{}\" '\n 'to receiver'.format(self.name))\n return rv"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nupdate the metatab with the resources from the HTML exporter.", "response": "def update_metatab(self, doc, resources):\n \"\"\"Add documentation entries for resources\"\"\"\n\n if not 'Documentation' in doc:\n\n doc.new_section(\"Documentation\")\n\n ds = doc['Documentation']\n\n if not 'Name' in ds.args:\n ds.add_arg('Name', prepend=True)\n\n # This is the main output from the HTML exporter, not a resource.\n ds.new_term('Root.Documentation', 'docs/notebook.html', name=\"notebook.html\", title='Jupyter Notebook (HTML)')\n\n for name, data in resources.get('outputs', {}).items():\n\n if name == 'documentation.html':\n ds.new_term('Root.Documentation', 'docs/' + name, name=name, title='Primary Documentation (HTML)')\n\n elif name == 
'html_basic_body.html':\n pass\n elif name.endswith('.html'):\n ds.new_term('Root.Documentation', 'docs/' + name, name=name, title='Documentation (HTML)')\n elif name.endswith('.md'):\n ds.new_term('Root.Documentation', 'docs/' + name, name=name, title='Documentation (Markdown)')\n elif name.endswith('.pdf'):\n ds.new_term('Root.Documentation', 'docs/' + name, name=name, title='Documentation (PDF)')\n elif name.endswith('.png'):\n ds.new_term('Root.Image', 'docs/' + name, name=name, title='Image for HTML Documentation')\n else:\n pass"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_package_dir_name(self, nb):\n\n package_dir = self.package_dir\n\n if not package_dir:\n package_dir = getcwd()\n\n package_name = self.package_name\n\n if not package_name:\n doc = ExtractInlineMetatabDoc(package_url=\"metapack+file:\" + package_dir).run(nb)\n\n if not doc:\n raise NotebookError(\"Notebook does not have an inline metatab doc\")\n\n t = doc.find_first('Root.Name', section='Root')\n\n if not t:\n raise NotebookError(\"Inline Metatab doc doesnt have a Root.Name term\")\n\n package_name = doc.as_version(None)\n\n return package_dir, package_name", "response": "This is the name of the package we will be creating."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nopens a notebook and determine the output directory from the name", "response": "def get_output_dir(self, nb):\n \"\"\"Open a notebook and determine the output directory from the name\"\"\"\n self.package_dir, self.package_name = self.get_package_dir_name(nb)\n\n return join(self.package_dir, self.package_name)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nextracting some terms from a NB object", "response": "def extract_terms(self, nb):\n \"\"\"Extract some term values, usually set with tags or metadata\"\"\"\n emt = ExtractMetatabTerms()\n emt.preprocess(nb, {})\n\n return 
emt.terms"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncreates a Metatab package from a notebook node.", "response": "def from_notebook_node(self, nb, resources=None, **kw):\n \"\"\"Create a Metatab package from a notebook node \"\"\"\n\n nb_copy = copy.deepcopy(nb)\n\n # The the package name and directory, either from the inlined Metatab doc,\n # or from the config\n\n try:\n self.output_dir = self.get_output_dir(nb)\n except NotebookError as e:\n # Notebook probably lacks a metatab doc.\n self.log.fatal(e)\n sys.exit(1)\n\n assert self.output_dir\n\n resources = self._init_resources(resources)\n\n resources['outputs'] = {}\n\n if 'language' in nb['metadata']:\n resources['language'] = nb['metadata']['language'].lower()\n\n # Do any other configured preprocessing\n nb_copy, resources = self._preprocess(nb_copy, resources)\n\n # The Notebook can set some terms with tags\n self.extra_terms = self.extract_terms(nb_copy)\n\n # Clear the output before executing\n self.clear_output(nb_copy)\n\n nb_copy, resources = self.exec_notebook(nb_copy, resources, self.notebook_dir)\n\n eld = ExtractLibDirs()\n eld.preprocess(nb_copy, {})\n\n self.lib_dirs = eld.lib_dirs\n\n efm = ExtractFinalMetatabDoc()\n efm.preprocess(nb_copy, {})\n\n if not efm.doc:\n raise MetapackError(\"No metatab doc\")\n\n self.doc = efm.doc\n\n for section, term, value in self.extra_terms:\n self.doc[section].get_or_new_term(term, value)\n\n nb, _ = RemoveMetatab().preprocess(nb, {})\n\n resources['outputs']['notebooks/{}.ipynb'.format(self.package_name)] = nbformat.writes(nb).encode('utf-8')\n\n return efm.doc.as_csv(), resources"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef preprocess_cell(self, cell, resources, cell_index):\n from nbformat.notebooknode import NotebookNode\n\n attach_names = []\n\n # Just move the attachment into an output\n\n for k, attach in cell.get('attachments', 
{}).items():\n for mime_type in self.extract_output_types:\n if mime_type in attach:\n\n if not 'outputs' in cell:\n cell['outputs'] = []\n\n o = NotebookNode({\n 'data': NotebookNode({mime_type: attach[mime_type]}),\n 'metadata': NotebookNode({\n 'filenames': {mime_type: k} # Will get re-written\n }),\n 'output_type': 'display_data'\n })\n\n cell['outputs'].append(o)\n\n attach_names.append((mime_type, k))\n\n nb, resources = super().preprocess_cell(cell, resources, cell_index)\n\n output_names = list(resources.get('outputs', {}).keys())\n\n if attach_names:\n # We're going to assume that attachments are only on Markdown cells, and Markdown cells\n # can't generate output, so all of the outputs wee added.\n\n # reverse + zip matches the last len(attach_names) elements from output_names\n\n for output_name, (mimetype, an) in zip(reversed(output_names), reversed(attach_names)):\n # We'll post process to set the final output directory\n cell.source = re.sub('\\(attachment:{}\\)'.format(an),\n '(__IMGDIR__/{})'.format(output_name), cell.source)\n\n return nb, resources", "response": "This method is called by the cell processor to extract attachments and add them to the cell s outputs."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef hilbert_chip_order(machine):\n max_dimen = max(machine.width, machine.height)\n hilbert_levels = int(ceil(log(max_dimen, 2.0))) if max_dimen >= 1 else 0\n return hilbert(hilbert_levels)", "response": "A generator which iterates over a set of chips in a hilbert path."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef place(vertices_resources, nets, machine, constraints, breadth_first=True):\n return sequential_place(vertices_resources, nets,\n machine, constraints,\n (None if not breadth_first else\n breadth_first_vertex_order(vertices_resources,\n nets)),\n hilbert_chip_order(machine))", "response": "Places vertices 
in breadth - first order along a hilbert - curve path."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nminimises a routing table.", "response": "def minimise_table(table, target_length,\n methods=(remove_default_entries, ordered_covering)):\n \"\"\"Apply different minimisation algorithms to minimise a single routing\n table.\n\n Parameters\n ----------\n table : [:py:class:`~rig.routing_table.RoutingTableEntry`, ...]\n Routing table to minimise. NOTE: This is the data structure as\n returned by :py:meth:`~rig.routing_table.routing_tree_to_tables`.\n target_length : int or None\n Maximum length of the routing table. If None then all methods will be\n tried and the smallest achieved table will be returned.\n methods :\n Each method is tried in the order presented and the first to meet the\n required target length for a given chip is used. Consequently less\n computationally costly algorithms should be nearer the start of the\n list. The defaults will try to remove default routes\n (:py:meth:rig.routing_table.remove_default_routes.minimise) and then\n fall back on the ordered covering algorithm\n (:py:meth:rig.routing_table.ordered_covering.minimise).\n\n Returns\n -------\n [:py:class:`~rig.routing_table.RoutingTableEntry`, ...]\n Minimised routing table, guaranteed to be at least as small as\n `target_length`, or as small as possible if `target_length` is None.\n\n Raises\n ------\n MinimisationFailedError\n If no method can sufficiently minimise the table.\n \"\"\"\n # Add a final method which checks the size of the table and returns it if\n # the size is correct. 
NOTE: This method will avoid running any other\n # minimisers if the table is already sufficiently small.\n methods = list(methods)\n methods.insert(0, _identity)\n\n if target_length is not None:\n best_achieved = len(table)\n\n # Try each minimiser in turn until the table is small enough\n for f in methods:\n try:\n # Minimise the table, if this fails a MinimisationFailedError\n # will be raised and the return will not be executed.\n new_table = f(table, target_length)\n return new_table\n except MinimisationFailedError as exc:\n # Store the best achieved final length\n if best_achieved is None or exc.final_length < best_achieved:\n best_achieved = exc.final_length\n\n # The table must still be too large\n raise MinimisationFailedError(target_length, best_achieved)\n else:\n # Try all methods and return the smallest table\n return min((f(table, target_length) for f in methods), key=len)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nloading groups and organizations from a file in Metatab format", "response": "def configure_ckan(m):\n \"\"\"Load groups and organizations, from a file in Metatab format\"\"\"\n from ckanapi import RemoteCKAN\n\n try:\n doc = MetapackDoc(m.mt_file, cache=m.cache)\n except (IOError, MetatabError) as e:\n err(\"Failed to open metatab '{}': {}\".format(m.mt_file, e))\n\n c = RemoteCKAN(m.ckan_url, apikey=m.api_key)\n\n groups = { g['name']:g for g in c.action.group_list(all_fields=True) }\n\n for g in doc['Groups']:\n\n if g.value not in groups:\n prt('Creating group: ', g.value)\n c.action.group_create(name=g.value,\n title=g.get_value('title'),\n description=g.get_value('description'),\n id=g.get_value('id'),\n image_url=g.get_value('image_url'))\n\n orgs = {o['name']: o for o in c.action.organization_list(all_fields=True)}\n\n for o in doc['Organizations']:\n\n if o.value not in orgs:\n prt('Creating organization: ', o.value)\n c.action.organization_create(name=o.value,\n title=o.get_value('title'),\n 
description=o.get_value('description'),\n id=o.get_value('id'),\n image_url=o.get_value('image_url'))"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn the load instructions for the given list of packages.", "response": "def package_load_instructions(inst_distributions):\n \"\"\"Load instructions, displayed in the package notes\"\"\"\n\n per_package_inst = ''\n\n for dist in inst_distributions:\n\n if dist.type == 'zip':\n per_package_inst += dedent(\n \"\"\"\n # Loading the ZIP Package\n \n Zip packages are compressed, so large resources may load faster.\n \n import metapack as mp\n pkg = mp.open_package('{url}')\n \n \"\"\".format(url=dist.package_url.inner))\n\n elif dist.type == 'csv':\n per_package_inst += dedent(\n \"\"\"\n # Loading the CSV Package\n \n CSV packages load resources individually, so small resources may load faster. \n \n \n import metapack as mp\n pkg = mp.open_package('{url}')\n \n \"\"\".format(url=dist.package_url.inner))\n\n if per_package_inst:\n\n return '\\n---\\n'+per_package_inst\n\n else:\n return ''"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef dump_ckan(m):\n\n doc = MetapackDoc(cache=m.cache)\n doc.new_section('Groups', 'Title Description Id Image_url'.split())\n doc.new_section('Organizations', 'Title Description Id Image_url'.split())\n\n c = RemoteCKAN(m.ckan_url, apikey=m.api_key)\n\n for g in c.action.group_list(all_fields=True):\n print(g.keys())\n\n for o in c.action.organization_list(all_fields=True):\n print(g.keys())", "response": "Dump the groups and organizations to a file"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef update_mt_arg(self, metatabfile):\n o = MetapackCliMemo(self.args)\n o.set_mt_arg(metatabfile)\n return o", "response": "Return a new memo with a new metatabfile argument"} {"SOURCE": "codesearchnet", "instruction": "How 
would you explain what the following Python 3 function does\ndef linkcode_resolve(domain, info):\n # Only link to Python source\n if domain != 'py':\n return None\n\n # Get a reference to the object in question\n modname = info['module']\n fullname = info['fullname']\n\n submod = sys.modules.get(modname)\n if submod is None:\n return None\n\n obj = submod\n for part in fullname.split('.'):\n try:\n obj = getattr(obj, part)\n except:\n return None\n\n # Find the file which contains the object\n try:\n file_name = inspect.getsourcefile(obj)\n except:\n file_name = None\n if not file_name:\n return None\n\n # Convert the filename into a path relative to the rig module top-level\n file_name = os.path.relpath(file_name, start=os.path.dirname(local_module_path))\n\n # Get the line number range that object lives on\n try:\n source, lineno = inspect.getsourcelines(obj)\n except:\n lineno = None\n if lineno:\n linespec = \"#L{}-L{}\".format(lineno,\n lineno + len(source) - 1)\n else:\n linespec = \"\"\n \n # Generate a link to the code in the official GitHub repository with the\n # current-version's tag.\n return (\"https://github.com/{repo}/blob/v{version}/\"\n \"{module_path}{file_name}{linespec}\".format(repo=github_repo,\n version=version,\n module_path=github_module_path,\n file_name=file_name,\n linespec=linespec))", "response": "Determine the URL corresponding to the Python object on GitHub\n \n "} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef declaration_path(name):\n from os.path import dirname, join, exists\n import metatab.declarations\n from metatab.exc import IncludeError\n\n d = dirname(metatab.declarations.__file__)\n\n path = join(d, name)\n\n if not exists(path):\n path = join(d, name + '.csv')\n\n if not exists(path):\n raise IncludeError(\"No local declaration file for name '{}' \".format(name))\n\n return path", "response": "Return the path to an included declaration file"} 
{"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nflatten a data structure into tuples", "response": "def flatten(d, sep='.'):\n \"\"\"Flatten a data structure into tuples\"\"\"\n\n def _flatten(e, parent_key='', sep='.'):\n import collections\n\n prefix = parent_key + sep if parent_key else ''\n\n if isinstance(e, collections.MutableMapping):\n return tuple((prefix + k2, v2) for k, v in e.items() for k2, v2 in _flatten(v, k, sep))\n elif isinstance(e, collections.MutableSequence):\n return tuple((prefix + k2, v2) for i, v in enumerate(e) for k2, v2 in _flatten(v, str(i), sep))\n else:\n return (parent_key, (e,)),\n\n return tuple((k, v[0]) for k, v in _flatten(d, '', sep))"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef make_dir_structure(base_dir):\n\n def maybe_makedir(*args):\n\n p = join(base_dir, *args)\n\n if exists(p) and not isdir(p):\n raise IOError(\"File '{}' exists but is not a directory \".format(p))\n\n if not exists(p):\n makedirs(p)\n\n maybe_makedir(DOWNLOAD_DIR)\n maybe_makedir(PACKAGE_DIR)\n maybe_makedir(OLD_DIR)", "response": "Make the build directory structure."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ntrying to guess the format of a resource possibly with a HEAD request", "response": "def guess_format(url):\n \"\"\"Try to guess the format of a resource, possibly with a HEAD request\"\"\"\n import requests\n from requests.exceptions import InvalidSchema\n from rowgenerators import parse_url_to_dict\n\n parts = parse_url_to_dict(url)\n\n # Guess_type fails for root urls like 'http://civicknowledge.com'\n if parts.get('path'):\n type, encoding = mimetypes.guess_type(url)\n elif parts['scheme'] in ('http', 'https'):\n type, encoding = 'text/html', None # Assume it is a root url\n else:\n type, encoding = None, None\n\n if type is None:\n try:\n r = requests.head(url, allow_redirects=False)\n type = r.headers['Content-Type']\n\n if 
';' in type:\n type, encoding = [e.strip() for e in type.split(';')]\n\n except InvalidSchema:\n pass # It's probably FTP\n\n return type, mime_map.get(type)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nmimicing os. walk but instead of down the directory tree", "response": "def walk_up(bottom):\n \"\"\" mimic os.walk, but walk 'up' instead of down the directory tree\n :param bottom:\n :return:\n \"\"\"\n import os\n from os import path\n\n bottom = path.realpath(bottom)\n\n # get files in current dir\n try:\n names = os.listdir(bottom)\n except Exception as e:\n raise e\n\n dirs, nondirs = [], []\n for name in names:\n if path.isdir(path.join(bottom, name)):\n dirs.append(name)\n else:\n nondirs.append(name)\n\n yield bottom, dirs, nondirs\n\n new_path = path.realpath(path.join(bottom, '..'))\n\n # see if we are at the top\n if new_path == bottom:\n return\n\n for x in walk_up(new_path):\n yield x"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn the cache directory where data can be written during a build", "response": "def get_materialized_data_cache(doc=None):\n \"\"\"Return the cache directory where data can be written during a build, usually for\n a Jupyter notebook that generates many files for each execution\"\"\"\n\n from metapack.constants import MATERIALIZED_DATA_PREFIX\n from os.path import join\n\n if not doc:\n from metapack import Downloader\n downloader = Downloader()\n return downloader.cache.getsyspath(MATERIALIZED_DATA_PREFIX)\n else:\n dr = doc._cache.getsyspath(join(MATERIALIZED_DATA_PREFIX, doc.name))\n\n ensure_dir(dr)\n\n return dr"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef traverse(self):\n # A queue of (direction, node) to visit. 
The direction is the Links\n # entry which describes the direction in which we last moved to reach\n # the node (or None for the root).\n to_visit = deque([(None, self)])\n while to_visit:\n direction, node = to_visit.popleft()\n\n # Determine the set of directions we must travel to reach the\n # children\n out_directions = set()\n for child_direction, child in node.children:\n # Note that if the direction is unspecified, we simply\n # (silently) don't add a route for that child.\n if child_direction is not None:\n out_directions.add(child_direction)\n\n # Search the next steps of the route too\n if isinstance(child, RoutingTree):\n assert child_direction is not None\n to_visit.append((child_direction, child))\n\n # Yield the information pertaining to this Node\n yield direction, node.chip, out_directions", "response": "Traverse the tree yielding the direction taken to a node the co -ordinates of that node and the directions leading from the children of that node and the routes leading to the children of that node."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef yield_deps(doc):\n\n this_node = DependencyNode(doc, None)\n\n for r in doc.references():\n\n other_node = DependencyNode(r, this_node)\n\n yield (this_node, other_node)\n\n if isinstance(other_node, PackageDependencyNode):\n yield from yield_deps(other_node.doc)", "response": "Yields all dependencies of a package."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nwraps a URL string in a list of lines", "response": "def wrap_url(s, l):\n \"\"\"Wrap a URL string\"\"\"\n parts = s.split('/')\n\n if len(parts) == 1:\n return parts[0]\n else:\n i = 0\n lines = []\n for j in range(i, len(parts) + 1):\n tv = '/'.join(parts[i:j])\n nv = '/'.join(parts[i:j + 1])\n\n if len(nv) > l or nv == tv:\n i = j\n lines.append(tv)\n\n return '/\\n'.join(lines)"} {"SOURCE": "codesearchnet", "instruction": "Implement a 
Python 3 function for\nreturning the least signifiant bit O ( N in the worst case", "response": "def _get_rho(self, w, arr):\n \"\"\" Return the least signifiant bit\n O(N) in the worst case\n \"\"\"\n lsb = 0\n while not (w & arr[lsb]):\n lsb += 1\n return lsb + 1"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nadding a key to the HyperLogLog", "response": "def add(self, uuid):\n \"\"\" Adds a key to the HyperLogLog \"\"\"\n if uuid:\n # Computing the hash\n try:\n x = hash64(uuid)\n except UnicodeEncodeError:\n x = hash64(uuid.encode('ascii', 'ignore'))\n # Finding the register to update by using the first b bits as an index\n j = x & ((1 << self.b) - 1)\n # Remove those b bits\n w = x >> self.b\n # Find the first 0 in the remaining bit pattern\n self.M[j] = max(self.M[j], self._get_rho(w, self.bitcount_arr))"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef estimate(self):\n E = self.alpha * float(self.m ** 2) / np.power(2.0, - self.M).sum()\n if E <= 2.5 * self.m: # Small range correction\n V = self.m - np.count_nonzero(self.M)\n return int(self.m * np.log(self.m / float(V))) if V > 0 else int(E)\n # intermidiate range correction -> No correction\n elif E <= float(long(1) << self.precision) / 30.0:\n return int(E)\n else:\n return int(-(long(1) << self.precision) *\n np.log(1.0 - E / (long(1) << self.precision)))", "response": "Returns the estimate of the cardinality of the class"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef base_url(self):\n\n if self.doc.package_url:\n return self.doc.package_url\n\n return self.doc._ref", "response": "Base URL for resolving resource URLs"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _resolved_url(self):\n\n if not self.url:\n return None\n\n u = parse_app_url(self.url)\n\n if u.scheme == 'index':\n u = u.resolve()\n\n if 
u.scheme != 'file':\n # Hopefully means the URL is http, https, ftp, etc.\n\n return u\n elif u.resource_format == 'ipynb':\n\n # This shouldn't be a special case, but ...\n t = self.doc.package_url.inner.join_dir(self.url)\n t = t.as_type(type(u))\n t.fragment = u.fragment\n\n return t\n\n elif u.proto == 'metatab':\n\n u = self.expanded_url\n\n return u.get_resource().get_target()\n\n elif u.proto == 'metapack':\n\n u = self.expanded_url\n\n if u.resource:\n return u.resource.resolved_url.get_resource().get_target()\n else:\n return u\n\n if u.scheme == 'file':\n\n return self.expanded_url\n\n elif False:\n\n\n assert isinstance(self.doc.package_url, MetapackPackageUrl), (type(self.doc.package_url), self.doc.package_url)\n\n try:\n t = self.doc.package_url.resolve_url(self.url) # Why are we doing this?\n\n # Also a hack\n t.scheme_extension = parse_app_url(self.url).scheme_extension\n\n # Another Hack!\n try:\n if not any(t.fragment) and any(u.fragment):\n t.fragment = u.fragment\n except TypeError:\n if not t.fragment and u.fragment:\n t.fragment = u.fragment\n\n\n # Yet more hack!\n t = parse_app_url(str(t))\n\n return t\n\n except ResourceError as e:\n # This case happens when a filesystem packages has a non-standard metadata name\n # Total hack\n raise\n\n else:\n raise ResourceError('Unknown case for url {} '.format(self.url))", "response": "Return a URL that properly combines the base_url and a possibly relative_url"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns the Table term for this resource which is referenced either by the table property or the schema property.", "response": "def schema_term(self):\n \"\"\"Return the Table term for this resource, which is referenced either by the `table` property or the\n `schema` property\"\"\"\n\n if not self.name:\n raise MetapackError(\"Resource for url '{}' doe not have name\".format(self.url))\n\n t = self.doc.find_first('Root.Table', 
value=self.get_value('name'))\n frm = 'name'\n\n if not t:\n t = self.doc.find_first('Root.Table', value=self.get_value('schema'))\n frm = 'schema'\n\n if not t:\n frm = None\n\n return t"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns the headers for the resource.", "response": "def headers(self):\n \"\"\"Return the headers for the resource. Returns the AltName, if specified; if not, then the\n Name, and if that is empty, a name based on the column position. These headers\n are specifically applicable to the output table, and may not apply to the resource source. FOr those headers,\n use source_headers\"\"\"\n\n t = self.schema_term\n\n if t:\n return [self._name_for_col_term(c, i)\n for i, c in enumerate(t.children, 1) if c.term_is(\"Table.Column\")]\n else:\n return None"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef source_headers(self):\n\n t = self.schema_term\n\n if t:\n return [self._name_for_col_term(c, i)\n for i, c in enumerate(t.children, 1) if c.term_is(\"Table.Column\")\n and c.get_value('name') != EMPTY_SOURCE_HEADER\n ]\n else:\n return None", "response": "Returns the headers for the resource source. 
Specifically does not include any header that is\n the EMPTY_SOURCE_HEADER value of _NONE_"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn column information from the schema or an upstream package", "response": "def columns(self):\n \"\"\"Return column information from the schema or from an upstream package\"\"\"\n\n try:\n # For resources that are metapack packages.\n r = self.expanded_url.resource.columns()\n return list(r)\n except AttributeError as e:\n\n pass\n\n return self.schema_columns"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns column information only from this schema", "response": "def schema_columns(self):\n \"\"\"Return column information only from this schema\"\"\"\n t = self.schema_term\n\n columns = []\n\n\n if t:\n for i, c in enumerate(t.children):\n\n if c.term_is(\"Table.Column\"):\n p = c.all_props\n p['pos'] = i\n p['name'] = c.value\n p['header'] = self._name_for_col_term(c, i)\n \n\n columns.append(p)\n\n return columns"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ncreating a row processor from the schema to convert the text values from the CSV into real types", "response": "def row_processor_table(self, ignore_none=False):\n \"\"\"Create a row processor from the schema, to convert the text values from the\n CSV into real types\"\"\"\n from rowgenerators.rowpipe import Table\n\n type_map = {\n None: None,\n 'string': 'str',\n 'text': 'str',\n 'number': 'float',\n 'integer': 'int'\n }\n\n def map_type(v):\n return type_map.get(v, v)\n\n if self.schema_term:\n\n t = Table(self.get_value('name'))\n\n col_n = 0\n\n for c in self.schema_term.children:\n\n if ignore_none and c.name == EMPTY_SOURCE_HEADER:\n continue\n\n if c.term_is('Table.Column'):\n t.add_column(self._name_for_col_term(c, col_n),\n datatype=map_type(c.get_value('datatype')),\n valuetype=map_type(c.get_value('valuetype')),\n 
transform=c.get_value('transform'),\n width=c.get_value('width')\n )\n col_n += 1\n\n return t\n\n else:\n return None"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nlike rowgenerator but does not try to create a row processor table", "response": "def raw_row_generator(self):\n \"\"\"Like rowgenerator, but does not try to create a row processor table\"\"\"\n from rowgenerators import get_generator\n\n self.doc.set_sys_path() # Set sys path to package 'lib' dir in case of python function generator\n\n ru = self.resolved_url\n\n try:\n resource = ru.resource # For Metapack urls\n\n return resource.row_generator\n except AttributeError:\n pass\n\n ut = ru.get_resource().get_target()\n\n # Encoding is supposed to be preserved in the URL but isn't\n source_url = parse_app_url(self.url)\n\n # source_url will be None for Sql terms.\n\n ut.encoding = self.get_value('encoding') or (source_url.encoding if source_url else None)\n\n g = get_generator(ut, resource=self,\n doc=self._doc, working_dir=self._doc.doc_dir,\n env=self.env)\n\n assert g, ut\n\n return g"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nget the header from the defined header rows for use on references or resources where the schema has not been run", "response": "def _get_header(self):\n \"\"\"Get the header from the defined header rows, for use on references or resources where the schema\n has not been run\"\"\"\n\n try:\n header_lines = [int(e) for e in str(self.get_value('headerlines', 0)).split(',')]\n except ValueError as e:\n header_lines = [0]\n\n # We're processing the raw datafile, with no schema.\n header_rows = islice(self.row_generator, min(header_lines), max(header_lines) + 1)\n\n from tableintuit import RowIntuiter\n headers = RowIntuiter.coalesce_headers(header_rows)\n\n return headers"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef iterdict(self):\n from 
collections import OrderedDict\n\n headers = None\n\n for row in self:\n\n if headers is None:\n headers = row\n continue\n\n yield OrderedDict(zip(headers, row))", "response": "Iterate over the resource in dict records"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\niterates over the resource as row proxy objects which allow accessing columns as attributes", "response": "def iterrows(self):\n \"\"\"Iterate over the resource as row proxy objects, which allow accessing columns as attributes\"\"\"\n\n\n\n row_proxy = None\n\n headers = None\n\n for row in self:\n\n if not headers:\n headers = row\n row_proxy = RowProxy(headers)\n continue\n\n yield row_proxy.set_row(row)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef iterrowproxy(self, cls=RowProxy):\n\n row_proxy = None\n\n headers = None\n\n for row in self:\n\n if not headers:\n headers = row\n row_proxy = cls(headers)\n continue\n\n yield row_proxy.set_row(row)", "response": "Iterate over the resource as row proxy objects which allow accessing columns as attributes. 
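Both `iterrows` and `iterrowproxy` above follow the same pattern: the first row yielded by the resource supplies the headers, and every later row is wrapped in a proxy that exposes columns as attributes. A self-contained sketch of that pattern, using a simplified `RowProxy` stand-in rather than the real rowgenerators class:

```python
class RowProxy:
    """Simplified RowProxy: maps attribute access onto the current row."""
    def __init__(self, headers):
        self._headers = headers
        self._row = None

    def set_row(self, row):
        self._row = row
        return self  # returned so callers can yield the proxy directly

    def __getattr__(self, name):
        try:
            return self._row[self._headers.index(name)]
        except ValueError:
            raise AttributeError(name)

rows = [['name', 'age'], ['ada', 36], ['alan', 41]]  # header row first
proxy = None
ages = []
for row in rows:
    if proxy is None:           # first row: capture headers
        proxy = RowProxy(row)
        continue
    ages.append(proxy.set_row(row).age)
# ages == [36, 41]
```

Note that a single proxy is reused across rows (as in `iterrows`), so callers must not hold on to yielded proxies; `iterrowproxy` exists for cases where a custom proxy class is wanted instead.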
Like iterrows, this method creates a new RowProxy object for each row."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef iterstruct(self):\n from rowgenerators.rowpipe.json import add_to_struct\n\n json_headers = self.json_headers\n\n for row in islice(self, 1, None): # islice skips header\n d = {}\n for pos, jh in json_headers:\n add_to_struct(d, jh, row[pos])\n yield d", "response": "Yields the data structures built from the JSON header specifications in a table."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nyields the data structures from iterstruct as JSON strings", "response": "def iterjson(self, *args, **kwargs):\n \"\"\"Yields the data structures from iterstruct as JSON strings\"\"\"\n from rowgenerators.rowpipe.json import VTEncoder\n import json\n\n if 'cls' not in kwargs:\n kwargs['cls'] = VTEncoder\n\n for s in self.iterstruct:\n yield (json.dumps(s, *args, **kwargs))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nyielding the data structures from iterstruct as YAML strings", "response": "def iteryaml(self, *args, **kwargs):\n \"\"\"Yields the data structures from iterstruct as YAML strings\"\"\"\n from rowgenerators.rowpipe.json import VTEncoder\n import yaml\n\n if 'cls' not in kwargs:\n kwargs['cls'] = VTEncoder\n\n for s in self.iterstruct:\n yield (yaml.safe_dump(s))"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef dataframe(self, dtype=False, parse_dates=True, *args, **kwargs):\n\n import pandas as pd\n\n rg = self.row_generator\n\n t = self.resolved_url.get_resource().get_target()\n\n if t.target_format == 'csv':\n return self.read_csv(dtype, parse_dates, *args, **kwargs)\n\n # Maybe generator has its own Dataframe method()\n try:\n\n return rg.dataframe( *args, **kwargs)\n except AttributeError:\n pass\n\n # Just normal data, so use the iterator in this 
object.\n headers = next(islice(self, 0, 1))\n data = islice(self, 1, None)\n\n df = pd.DataFrame(list(data), columns=headers, *args, **kwargs)\n\n self.errors = df.metatab_errors = rg.errors if hasattr(rg, 'errors') and rg.errors else {}\n\n return df", "response": "Return a pandas dataframe from the resource"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns a Geo dataframe of the current state of the object.", "response": "def geoframe(self, *args, **kwargs):\n \"\"\"Return a Geo dataframe\"\"\"\n\n from geopandas import GeoDataFrame\n import geopandas as gpd\n from shapely.geometry.polygon import BaseGeometry\n from shapely.wkt import loads\n\n gdf = None\n try:\n gdf = self.resolved_url.geoframe(*args, **kwargs)\n except AttributeError:\n pass\n\n if gdf is None:\n try:\n gdf = self.resolved_url.geo_generator.geoframe(*args, **kwargs)\n except AttributeError:\n pass\n\n if gdf is None:\n try:\n gdf = self.row_generator.geoframe(*args, **kwargs)\n except AttributeError:\n pass\n\n if gdf is None:\n try:\n\n gdf = GeoDataFrame(self.dataframe(*args, **kwargs))\n\n first = next(gdf.iterrows())[1]['geometry']\n\n if isinstance(first, str):\n # We have a GeoDataframe, but the geometry column is still strings, so\n # it must be converted\n shapes = [loads(row['geometry']) for i, row in gdf.iterrows()]\n\n elif not isinstance(first, BaseGeometry):\n # If we are reading a metatab package, the geometry column's type should be\n # 'geometry' which will give the geometry values class type of\n # rowpipe.valuetype.geo.ShapeValue. However, there are other\n # types of objects that have a 'shape' property.\n\n shapes = [row['geometry'].shape for i, row in gdf.iterrows()]\n\n else:\n shapes = gdf['geometry']\n\n gdf['geometry'] = gpd.GeoSeries(shapes)\n gdf.set_geometry('geometry')\n\n # Wild guess. 
This case should be most often for Metatab processed geo files,\n # which are all 4326\n if gdf.crs is None:\n gdf.crs = {'init': 'epsg:4326'}\n\n\n except KeyError as e:\n raise ResourceError(\"Failed to create GeoDataFrame for resource '{}': No geometry column\".format(self.name))\n except (KeyError,TypeError) as e:\n raise ResourceError(\"Failed to create GeoDataFrame for resource '{}': {}\".format(self.name, str(e)))\n\n assert gdf.crs is not None\n return gdf"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _update_pandas_kwargs(self, dtype=False, parse_dates=True, kwargs= {}):\n\n from datetime import datetime, time, date\n\n type_map = {\n None: None,\n 'string': str,\n 'text': str,\n 'number': float,\n 'integer': int,\n 'datetime': datetime,\n 'time': time,\n 'date': date\n\n }\n\n if dtype is True:\n kwargs['dtype'] = { c['name']:type_map.get(c['datatype'], c['datatype']) for c in self.columns() }\n elif dtype:\n kwargs['dtype'] = dtype\n\n if parse_dates is True:\n date_cols = [ c['name'] for c in self.columns() if c['datatype'] in ('date','datetime','time') ]\n kwargs['parse_dates'] = date_cols or True\n elif parse_dates:\n kwargs['parse_dates'] = parse_dates\n\n kwargs['low_memory'] = False\n\n return kwargs", "response": "Update kwargs suitable for pandas read_csv\n "} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nfetching the target and pass through to pandas. 
read_csv", "response": "def read_csv(self, dtype=False, parse_dates=True, *args, **kwargs):\n \"\"\"Fetch the target and pass through to pandas.read_csv\n\n Don't provide the first argument of read_csv(); it is supplied internally.\n \"\"\"\n\n import pandas\n\n t = self.resolved_url.get_resource().get_target()\n\n kwargs = self._update_pandas_kwargs(dtype, parse_dates, kwargs)\n\n return pandas.read_csv(t.fspath, *args, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nfetches the target and pass through to pandas. read_fwf.", "response": "def read_fwf(self, *args, **kwargs):\n \"\"\"Fetch the target and pass through to pandas.read_fwf.\n\n Don't provide the first argument of read_fwf(); it is supplied internally. \"\"\"\n import pandas\n\n t = self.resolved_url.get_resource().get_target()\n\n return pandas.read_fwf(t.fspath, *args, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nloads the target open it and return the result from readlines", "response": "def readlines(self):\n \"\"\"Load the target, open it, and return the result from readlines()\"\"\"\n\n t = self.resolved_url.get_resource().get_target()\n with open(t.fspath) as f:\n return f.readlines()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn a PETL source object", "response": "def petl(self, *args, **kwargs):\n \"\"\"Return a PETL source object\"\"\"\n import petl\n\n t = self.resolved_url.get_resource().get_target()\n\n if t.target_format == 'txt':\n return petl.fromtext(str(t.fspath), *args, **kwargs)\n elif t.target_format == 'csv':\n return petl.fromcsv(str(t.fspath), *args, **kwargs)\n else:\n raise Exception(\"Can't handle\")"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef context(self):\n\n # Can't use self.columns b/c of recursion with resolved_url\n\n t = self.schema_term\n\n if not 
t:\n return {}\n\n sql_columns = []\n all_columns = []\n\n for i, c in enumerate(t.children):\n if c.term_is(\"Table.Column\"):\n p = c.all_props\n\n if p.get('sqlselect'): # has a value for SqlSqlect\n sql_columns.append(p.get('sqlselect'))\n\n all_columns.append(c.name)\n\n return {\n 'SQL_COLUMNS': ', '.join(sql_columns),\n 'ALL_COLUMNS': ', '.join(all_columns)\n }", "response": "Build the interpolation context from the schemas"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ngets the local variables in the caller s frame.", "response": "def caller_locals():\n \"\"\"Get the local variables in the caller's frame.\"\"\"\n import inspect\n frame = inspect.currentframe()\n try:\n return frame.f_back.f_back.f_locals\n finally:\n del frame"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef open_source_package(dr=None):\n if dr is None:\n dr = getcwd()\n\n for i, e in enumerate(walk_up(dr)):\n\n intr = set([DEFAULT_METATAB_FILE, LINES_METATAB_FILE, IPYNB_METATAB_FILE]) & set(e[2])\n\n if intr:\n\n return op(join(e[0], list(intr)[0] ))\n\n if i > 2:\n break\n\n return None", "response": "Like open_package but always open the source package"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef open_package(locals=None, dr=None):\n\n\n if locals is None:\n locals = caller_locals()\n\n try:\n # Running in a package build\n return op(locals['metatab_doc'])\n\n except KeyError:\n # Running interactively in Jupyter\n\n package_name = None\n build_package_dir = None\n source_package = None\n\n if dr is None:\n dr = getcwd()\n\n for i, e in enumerate(walk_up(dr)):\n\n intr = set([DEFAULT_METATAB_FILE, LINES_METATAB_FILE, IPYNB_METATAB_FILE]) & set(e[2])\n\n if intr:\n\n source_package = join(e[0], list(intr)[0])\n p = op(source_package)\n package_name = p.find_first_value(\"Root.Name\")\n\n if not package_name:\n raise PackageError(\"Source 
package in {} does not have root.Name term\".format(e[0]))\n\n if PACKAGE_PREFIX in e[1]:\n build_package_dir = join(e[0], PACKAGE_PREFIX)\n\n break\n\n if i > 2:\n break\n\n if build_package_dir and package_name and exists(join(build_package_dir, package_name)):\n # Open the previously built package\n built_package = join(build_package_dir, package_name)\n try:\n return op(built_package)\n except RowGeneratorError as e:\n pass # Probably could not open the metadata file.\n\n if source_package:\n # Open the source package\n return op(source_package)\n\n raise PackageError(\"Failed to find package, either in locals() or above dir '{}' \".format(dr))", "response": "Try to open a package with the metatab_doc variable. If that does not exist, try the built package in the local _packages directory, and finally fall back to the source package."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef rebuild_schema(doc, r, df):\n import numpy as np\n\n # Re-get the resource in the doc, since it may be different.\n try:\n r = doc.resource(r.name)\n except AttributeError:\n # Maybe r is actually a resource name\n r = doc.resource(r)\n\n def alt_col_name(name, i):\n import re\n\n if not name:\n return 'col{}'.format(i)\n\n return re.sub('_+', '_', re.sub('[^\\w_]', '_', str(name)).lower()).rstrip('_')\n\n df_types = {\n np.dtype('O'): 'text',\n np.dtype('int64'): 'integer',\n np.dtype('float64'): 'number'\n }\n\n try:\n df_index_frame = df.index.to_frame()\n except AttributeError:\n df_index_frame = None\n\n def get_col_dtype(c):\n\n c = str(c)\n\n try:\n return df_types[df[c].dtype]\n except KeyError:\n # Maybe it is in the index?\n pass\n\n try:\n return df_types[df_index_frame[c].dtype]\n except TypeError:\n # Maybe not a multi-index\n pass\n\n if c == 'id' or c == df.index.name:\n return df_types[df.index.dtype]\n\n return 'unknown'\n\n columns = []\n schema_term = r.schema_term[0]\n\n if schema_term:\n\n old_cols = {c['name'].value: 
c.properties for c in schema_term.children}\n for c in schema_term.children:\n schema_term.remove_child(c)\n\n schema_term.children = []\n\n else:\n old_cols = {}\n schema_term = doc['Schema'].new_term('Table', r.schema_name)\n\n index_names = [n if n else \"id\" for n in df.index.names]\n\n for i, col in enumerate(index_names + list(df.columns)):\n acn = alt_col_name(col, i) if alt_col_name(col, i) != str(col) else ''\n\n d = {'name': col, 'datatype': get_col_dtype(col), 'altname': acn}\n\n if col in old_cols.keys():\n lookup_name = col\n elif acn in old_cols.keys():\n lookup_name = acn\n else:\n lookup_name = None\n\n if lookup_name and lookup_name in old_cols:\n\n for k, v in schema_term.properties.items():\n\n old_col = old_cols.get(lookup_name)\n\n for k, v in old_col.items():\n if k != 'name' and v:\n d[k] = v\n\n columns.append(d)\n\n for c in columns:\n name = c['name']\n del c['name']\n datatype = c['datatype']\n del c['datatype']\n altname = c['altname']\n del c['altname']\n\n schema_term.new_child('Column', name, datatype=datatype, altname=altname, **c)", "response": "Rebuild the schema for a resource based on a dataframe"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nrebuild the schema for a resource based on a dataframe and write the doc", "response": "def rewrite_schema(r, df, doc=None):\n \"\"\"Rebuild the schema for a resource based on a dataframe and re-write the doc\"\"\"\n\n from metapack.cli.core import write_doc\n\n if doc is None:\n doc = open_source_package()\n\n rebuild_schema(doc, r, df)\n\n write_doc(doc, doc.ref)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef interactive_rewrite_schema(r, df, doc=None):\n\n if 'metatab_doc' in caller_locals():\n return False\n\n if doc is None:\n doc = open_source_package()\n\n rewrite_schema(r, df, doc)\n\n return True", "response": "Rebuild the schema for a resource based on a dataframe and re 
- write the doc"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_dataframes():\n\n for k, v in caller_locals().items():\n if k.startswith('_'):\n continue\n\n if isinstance(v, pd.core.frame.DataFrame):\n yield k, v", "response": "Yields tuples of dataframe variable name and the dataframe."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning the full path of the jupyter notebook.", "response": "def get_notebook_path():\n \"\"\"\n Return the full path of the jupyter notebook.\n \"\"\"\n kernel_id = re.search('kernel-(.*).json',\n ipykernel.connect.get_connection_file()).group(1)\n servers = list_running_servers()\n for ss in servers:\n response = requests.get(urljoin(ss['url'], 'api/sessions'),\n params={'token': ss.get('token', '')})\n for nn in json.loads(response.text):\n if nn['kernel']['id'] == kernel_id:\n relative_path = nn['notebook']['path']\n return os.path.join(ss['notebook_dir'], relative_path)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngets the path of a notebook relative to the current source package", "response": "def get_notebook_rel_path(pkg=None):\n \"\"\"Get the path of a notebook, relative to the current source package\"\"\"\n pkg = pkg or open_source_package()\n pkg_path = str(pkg.package_url.fspath)\n nb_path = get_notebook_path()\n\n return nb_path.replace(pkg_path, '').strip('/')"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef add_dataframe(df, name, pkg=None, description=''):\n from warnings import warn\n from metapack.cli.core import alt_col_name, type_map\n import numpy as np\n\n if name is None or df is None:\n warn(\"Did not find dataframe for reference '{}' \".format(name))\n return\n\n pkg = pkg or open_source_package()\n\n resource_ref = 'file:' + get_notebook_rel_path(pkg) + '#' + name\n\n t = pkg.find_first('Root.Datafile', 
value=resource_ref)\n col_props = {}\n\n if t:\n print(\"Datafile exists for url '{}', deleting\".format(resource_ref))\n\n if t.schema_term:\n col_props = { c['name']:c for c in t.columns()}\n pkg.remove_term(t.schema_term)\n\n pkg.remove_term(t)\n\n t = pkg['Resources'].new_term('Root.Datafile', resource_ref, name=name, description=description)\n\n st = pkg['Schema'].new_term('Table', t.schema_name, description=description)\n\n for i, name in enumerate(df.columns):\n\n props = col_props.get(name,{})\n\n try:\n native_type = type(np.asscalar(df[name].dtype.type(0))).__name__\n except ValueError:\n native_type = df[name].dtype.name\n except AttributeError:\n native_type = type(df[name][0]).__name__\n\n for pn in 'datatype name pos header'.split():\n if pn in props:\n del props[pn]\n\n if 'altname' in props:\n altname = props['altname']\n del props['altname']\n else:\n raw_alt_name = alt_col_name(name, i)\n altname = raw_alt_name if raw_alt_name != name else ''\n\n col = df[name]\n\n if hasattr(col, 'description'): # custom property\n props['description'] = col.description\n\n t = st.new_child('Column', name,\n datatype=type_map.get(native_type, native_type),\n altname=altname,\n **props)\n\n\n pkg.write_csv()", "response": "Add a dataframe to a source package."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nrun the method iff. the memory view hasn t been closed and the parent Taxonomy object has not been freed.", "response": "def _if_not_closed(f):\n \"\"\"Run the method iff. the memory view hasn't been closed and the parent\n object has not been freed.\"\"\"\n @add_signature_to_docstring(f)\n @functools.wraps(f)\n def f_(self, *args, **kwargs):\n if self.closed or self._parent._freed:\n raise OSError\n return f(self, *args, **kwargs)\n\n return f_"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nruns the method iff. 
the memory view hasn't been closed.", "response": "def _if_not_freed(f):\n \"\"\"Run the method iff. the memory view hasn't been closed.\"\"\"\n @add_signature_to_docstring(f)\n @functools.wraps(f)\n def f_(self, *args, **kwargs):\n if self._freed:\n raise OSError\n return f(self, *args, **kwargs)\n\n return f_"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nunpacks a routing table entry from a SpiNNaker machine.", "response": "def unpack_routing_table_entry(packed):\n \"\"\"Unpack a routing table entry read from a SpiNNaker machine.\n\n Parameters\n ----------\n packed : :py:class:`bytes`\n Bytes containing a packed routing table.\n\n Returns\n -------\n (:py:class:`~rig.routing_table.RoutingTableEntry`, app_id, core) or None\n Tuple containing the routing entry, the app_id associated with the\n entry and the core number associated with the entry; or None if the\n routing table entry is flagged as unused.\n \"\"\"\n # Unpack the routing table entry\n _, free, route, key, mask = struct.unpack(consts.RTE_PACK_STRING, packed)\n\n # If the top 8 bits of the route are set then this entry is not in use, so\n # return None.\n if route & 0xff000000 == 0xff000000:\n return None\n\n # Convert the routing table entry\n routes = {r for r in routing_table.Routes if (route >> r) & 0x1}\n rte = routing_table.RoutingTableEntry(routes, key, mask)\n\n # Convert the surrounding data\n app_id = free & 0xff\n core = (free >> 8) & 0x0f\n\n return (rte, app_id, core)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ntransmitting an SCP Packet and return the response.", "response": "def send_scp(self, *args, **kwargs):\n \"\"\"Transmit an SCP Packet and return the response.\n\n This function is a thin wrapper around\n :py:meth:`rig.machine_control.scp_connection.SCPConnection.send_scp`.\n\n This function will attempt to use the SCP connection nearest the\n destination of the SCP command if multiple connections 
have been\n discovered using :py:meth:`.discover_connections`.\n\n Parameters\n ----------\n x : int\n y : int\n p : int\n *args\n **kwargs\n \"\"\"\n # Retrieve contextual arguments from the keyword arguments. The\n # context system ensures that these values are present.\n x = kwargs.pop(\"x\")\n y = kwargs.pop(\"y\")\n p = kwargs.pop(\"p\")\n return self._send_scp(x, y, p, *args, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngets the appropriate connection for a chip.", "response": "def _get_connection(self, x, y):\n \"\"\"Get the appropriate connection for a chip.\"\"\"\n if (self._width is None or self._height is None or\n self._root_chip is None):\n return self.connections[None]\n else:\n # If possible, use the local Ethernet connected chip\n eth_chip = spinn5_local_eth_coord(x, y, self._width, self._height,\n *self._root_chip)\n conn = self.connections.get(eth_chip)\n if conn is not None:\n return conn\n else:\n # If no connection was available to the local board, choose\n # another arbitrarily.\n # XXX: This choice will cause lots of contention in systems\n # with many missing Ethernet connections.\n return self.connections[None]"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _send_scp(self, x, y, p, *args, **kwargs):\n # Determine the size of packet we expect in return, this is usually the\n # size that we are informed we should expect by SCAMP/SARK or else is\n # the default.\n if self._scp_data_length is None:\n length = consts.SCP_SVER_RECEIVE_LENGTH_MAX\n else:\n length = self._scp_data_length\n\n connection = self._get_connection(x, y)\n return connection.send_scp(length, x, y, p, *args, **kwargs)", "response": "Internal method to send an SCP packet."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef boot(self, width=None, height=None,\n only_if_needed=True, 
check_booted=True, **boot_kwargs):\n \"\"\"Boot a SpiNNaker machine.\n\n The system will be booted from the Ethernet connected chip whose\n hostname was given as the argument to the MachineController. With the\n default arguments this method will only boot systems which have not\n already been booted and will wait until the machine is completely booted\n (and raise a :py:exc:`.SpiNNakerBootError` on failure).\n\n This method uses :py:func:`rig.machine_control.boot.boot` to send boot\n commands to the machine and update the struct files contained within\n this object according to those used during boot.\n\n .. warning::\n\n Booting the system over the open internet is likely to fail due to\n the port number being blocked by most ISPs and UDP not being\n reliable. A proxy such as `spinnaker_proxy\n `_ may be useful in\n this situation.\n\n Parameters\n ----------\n width, height : *Deprecated*\n **Deprecated.** In older versions of SC&MP, it was necessary to\n indicate the size of the machine being booted. These parameters are\n now ignored and setting them will produce a deprecation warning.\n scamp_binary : filename or None\n Filename of the binary to boot the machine with or None to use the\n SC&MP binary bundled with Rig.\n sark_struct : filename or None\n The 'sark.struct' file which defines the datastructures or None to\n use the one bundled with Rig.\n boot_delay : float\n Number of seconds to pause between sending boot data packets.\n post_boot_delay : float\n Number of seconds to wait after sending last piece of boot data to\n give SC&MP time to re-initialise the Ethernet interface.\n only_if_needed : bool\n If ``only_if_needed`` is True (the default), this method checks to\n see if the machine is already booted and only attempts to boot the\n machine if necessary.\n\n If ``only_if_needed`` is False, the boot commands will be sent to\n the target machine without checking if it is already booted or not.\n\n .. 
warning::\n\n If the machine has already been booted, sending the boot\n commands again will not 'reboot' the machine with the newly\n supplied boot image, even if ``only_if_needed`` is False.\n check_booted : bool\n If ``check_booted`` is True this method waits for the machine to be\n fully booted before returning. If False, this check is skipped and\n the function returns as soon as the machine's Ethernet interface is\n likely to be up (but not necessarily before booting has completed).\n sv_overrides : {name: value, ...}\n Additional arguments used to override the default values in the\n 'sv' struct defined in the struct file.\n\n Returns\n -------\n bool\n Returns True if the machine was sent boot commands, False if the\n machine was already booted.\n\n Raises\n ------\n rig.machine_control.machine_controller.SpiNNakerBootError\n Raised when ``check_booted`` is True and the boot process was\n unable to boot the machine. Also raised when ``only_if_needed`` is\n True and the remote host is a BMP.\n\n Notes\n -----\n The constants `rig.machine_control.boot.spinX_boot_options` can be used\n to specify boot parameters, for example::\n\n controller.boot(**spin3_boot_options)\n\n This is necessary on boards such as SpiNN-3 boards if LEDs other than\n LED 0 are required by an application since by default, only LED 0 is\n enabled.\n \"\"\"\n # Report deprecated width/height arguments\n if width is not None or height is not None:\n warnings.warn(\"Machine width and height are no longer needed when \"\n \"booting a machine.\", DeprecationWarning)\n\n # Check to see if the machine is already booted first\n if only_if_needed:\n # We create a new MachineController which fails quickly if it\n # doesn't receive a reply (since typically the machine is already\n # booted).\n quick_fail_mc = MachineController(self.initial_host, n_tries=1)\n try:\n info = quick_fail_mc.get_software_version(255, 255, 0)\n if \"SpiNNaker\" not in info.version_string:\n raise SpiNNakerBootError(\n 
\"Remote host is not a SpiNNaker machine and so cannot \"\n \"be booted. (Are you using a BMP IP/hostname?)\")\n\n # Machine did not need booting\n return False\n except SCPError:\n # The machine is not responding to SCP so it needs booting.\n pass\n\n # Actually boot the machine\n boot_kwargs.setdefault(\"boot_port\", self.boot_port)\n self.structs = boot.boot(self.initial_host, **boot_kwargs)\n assert len(self.structs) > 0\n\n # Wait for the machine to completely boot\n if check_booted:\n try:\n p2p_address = (255, 255)\n while p2p_address == (255, 255):\n time.sleep(0.1)\n p2p_address = self.get_software_version(\n 255, 255, 0).position\n except SCPError:\n # Machine did not respond\n raise SpiNNakerBootError(\n \"The remote machine could not be booted.\")\n\n # The machine was sent boot commands\n return True", "response": "Boot a SpiNNaker machine from the Ethernet connected chip."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef discover_connections(self, x=255, y=255):\n working_chips = set(\n (x, y)\n for (x, y), route in iteritems(self.get_p2p_routing_table(x, y))\n if route != consts.P2PTableEntry.none)\n self._width = max(x for x, y in working_chips) + 1\n self._height = max(y for x, y in working_chips) + 1\n\n num_new_connections = 0\n\n for x, y in spinn5_eth_coords(self._width, self._height,\n *self.root_chip):\n if (x, y) in working_chips and (x, y) not in self.connections:\n # Discover the chip's IP address\n try:\n ip = self.get_ip_address(x, y)\n except SCPError:\n continue\n\n if ip is not None:\n # Create a connection to the IP\n self.connections[(x, y)] = \\\n SCPConnection(ip, self.scp_port,\n self.n_tries, self.timeout)\n\n # Attempt to use the connection (and remove it if it\n # doesn't work)\n try:\n self.get_software_version(x, y, 0)\n num_new_connections += 1\n except SCPError:\n self.connections.pop((x, y)).close()\n\n return num_new_connections", "response": "Discover 
all available Ethernet connections to a machine."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nupdating the context to use the given application ID and stop the application when done.", "response": "def application(self, app_id):\n \"\"\"Update the context to use the given application ID and stop the\n application when done.\n\n For example::\n\n with cn.application(54):\n # All commands in this block will use app_id=54.\n # On leaving the block `cn.send_signal(\"stop\", 54)` is\n # automatically called.\n \"\"\"\n # Get a new context and add a method that will be called before the\n # context is removed from the stack.\n context = self(app_id=app_id)\n context.before_close(lambda: self.send_signal(\"stop\"))\n return context"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nget the software version for a given SpiNNaker core.", "response": "def get_software_version(self, x=255, y=255, processor=0):\n \"\"\"Get the software version for a given SpiNNaker core.\n\n Returns\n -------\n :py:class:`.CoreInfo`\n Information about the software running on a core.\n \"\"\"\n sver = self._send_scp(x, y, processor, SCPCommands.sver)\n\n # Format the result\n # arg1 => p2p address, physical cpu, virtual cpu\n p2p = sver.arg1 >> 16\n p2p_address = (p2p >> 8, p2p & 0x00ff)\n pcpu = (sver.arg1 >> 8) & 0xff\n vcpu = sver.arg1 & 0xff\n\n # arg2 => version number (parsed separately) and buffer size\n buffer_size = (sver.arg2 & 0xffff)\n\n software_name, version, version_labels = \\\n unpack_sver_response_version(sver)\n\n return CoreInfo(p2p_address, pcpu, vcpu, version, buffer_size,\n sver.arg3, software_name, version_labels)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ngets the IPv4 address of a particular SpiNNaker chip s Ethernet link.", "response": "def get_ip_address(self, x, y):\n \"\"\"Get the IP address of a particular SpiNNaker chip's Ethernet link.\n\n Returns\n 
-------\n        str or None\n            The IPv4 address (as a string) of the chip's Ethernet link or None\n            if the chip does not have an Ethernet connection or the link is\n            currently down.\n        \"\"\"\n        chip_info = self.get_chip_info(x=x, y=y)\n        return chip_info.ip_address if chip_info.ethernet_up else None"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef write(self, address, data, x, y, p=0):\n        # Call the SCPConnection to perform the write on our behalf\n        connection = self._get_connection(x, y)\n        return connection.write(self.scp_data_length, self.scp_window_size,\n                                x, y, p, address, data)", "response": "Write a bytestring to an address in memory."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef read(self, address, length_bytes, x, y, p=0):\n        # Call the SCPConnection to perform the read on our behalf\n        connection = self._get_connection(x, y)\n        return connection.read(self.scp_data_length, self.scp_window_size,\n                               x, y, p, address, length_bytes)", "response": "Read a bytestring from an address in memory."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef write_across_link(self, address, data, x, y, link):\n        if address % 4:\n            raise ValueError(\"Addresses must be word-aligned.\")\n        if len(data) % 4:\n            raise ValueError(\"Data must be a whole number of words.\")\n\n        length_bytes = len(data)\n        cur_byte = 0\n\n        # Write the requested data, one SCP packet worth at a time\n        while length_bytes > 0:\n            to_write = min(length_bytes, (self.scp_data_length & ~0b11))\n            cur_data = data[cur_byte:cur_byte + to_write]\n            self._send_scp(x, y, 0, SCPCommands.link_write,\n                           arg1=address, arg2=to_write, arg3=int(link),\n                           data=cur_data, expected_args=0)\n\n            # Move to the next block to write\n            address += to_write\n            cur_byte += to_write\n            length_bytes -= to_write", "response": "Write a bytestring to an address in memory on a neighbouring 
chip."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreads a bytestring from an address in memory on a neighbouring chip.", "response": "def read_across_link(self, address, length_bytes, x, y, link):\n        \"\"\"Read a bytestring from an address in memory on a neighbouring chip.\n\n        .. warning::\n\n            This function is intended for low-level debug use only and is not\n            optimised for performance nor intended for more general use.\n\n        This method instructs a monitor processor to send 'PEEK'\n        nearest-neighbour packets to a neighbouring chip. These packets are\n        handled directly by the SpiNNaker router in the neighbouring chip,\n        potentially allowing advanced debug or recovery of a chip rendered\n        otherwise unreachable.\n\n        Parameters\n        ----------\n        address : int\n            The address at which to start reading the data. Only addresses in\n            the system-wide address map may be accessed. Addresses must be word\n            aligned.\n        length_bytes : int\n            The number of bytes to read from memory. Must be a multiple of four\n            (i.e. a whole number of words). 
Large reads are transparently\n broken into multiple SCP link-read commands.\n x : int\n y : int\n The coordinates of the chip from which the command will be sent,\n *not* the coordinates of the chip on which the read will be\n performed.\n link : :py:class:`rig.links.Links`\n The link down which the read should be sent.\n\n Returns\n -------\n :py:class:`bytes`\n The data is read back from memory as a bytestring.\n \"\"\"\n if address % 4:\n raise ValueError(\"Addresses must be word-aligned.\")\n if length_bytes % 4:\n raise ValueError(\"Lengths must be multiples of words.\")\n\n # Prepare the buffer to receive the incoming data\n data = bytearray(length_bytes)\n mem = memoryview(data)\n\n # Read the requested data, one SCP packet worth at a time\n while length_bytes > 0:\n to_read = min(length_bytes, (self.scp_data_length & ~0b11))\n response = self._send_scp(x, y, 0, SCPCommands.link_read,\n arg1=address,\n arg2=to_read,\n arg3=int(link),\n expected_args=0)\n\n # Accumulate the incoming data and advance the memoryview through\n # the buffer.\n mem[:to_read] = response.data\n mem = mem[to_read:]\n\n # Move to the next block to read\n address += to_read\n length_bytes -= to_read\n\n return bytes(data)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef read_struct_field(self, struct_name, field_name, x, y, p=0):\n # Look up the struct and field\n field, address, pack_chars = \\\n self._get_struct_field_and_address(struct_name, field_name)\n length = struct.calcsize(pack_chars)\n\n # Perform the read\n data = self.read(address, length, x, y, p)\n\n # Unpack the data\n unpacked = struct.unpack(pack_chars, data)\n\n if field.length == 1:\n return unpacked[0]\n else:\n return unpacked", "response": "Read the value out of a struct field."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nwriting a value into a struct field.", "response": "def write_struct_field(self, struct_name, 
field_name, values, x, y, p=0):\n \"\"\"Write a value into a struct.\n\n This method is particularly useful for writing values into the ``sv``\n struct which contains some configuration data. See ``sark.h`` for\n details.\n\n Parameters\n ----------\n struct_name : string\n Name of the struct to write to, e.g., `\"sv\"`\n field_name : string\n Name of the field to write, e.g., `\"random\"`\n values :\n Value(s) to be written into the field.\n\n .. warning::\n Fields which are arrays must currently be written in their\n entirety.\n \"\"\"\n # Look up the struct and field\n field, address, pack_chars = \\\n self._get_struct_field_and_address(struct_name, field_name)\n\n if field.length != 1:\n assert len(values) == field.length\n data = struct.pack(pack_chars, *values)\n else:\n data = struct.pack(pack_chars, values)\n\n # Perform the write\n self.write(address, data, x, y, p)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _get_vcpu_field_and_address(self, field_name, x, y, p):\n vcpu_struct = self.structs[b\"vcpu\"]\n field = vcpu_struct[six.b(field_name)]\n address = (self.read_struct_field(\"sv\", \"vcpu_base\", x, y) +\n vcpu_struct.size * p) + field.offset\n pack_chars = b\"<\" + field.pack_chars\n return field, address, pack_chars", "response": "Get the field and address for a VCPU struct field."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef read_vcpu_struct_field(self, field_name, x, y, p):\n # Get the base address of the VCPU struct for this chip, then advance\n # to get the correct VCPU struct for the requested core.\n field, address, pack_chars = \\\n self._get_vcpu_field_and_address(field_name, x, y, p)\n\n # Perform the read\n length = struct.calcsize(pack_chars)\n data = self.read(address, length, x, y)\n\n # Unpack and return\n unpacked = struct.unpack(pack_chars, data)\n\n if field.length == 1:\n return unpacked[0]\n 
else:\n # If the field is a string then truncate it and return\n if b\"s\" in pack_chars:\n return unpacked[0].strip(b\"\\x00\").decode(\"utf-8\")\n\n # Otherwise just return. (Note: at the time of writing, no fields\n # in the VCPU struct are of this form.)\n return unpacked", "response": "Reads a value out of the VCPU struct for a specific core."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef write_vcpu_struct_field(self, field_name, value, x, y, p):\n field, address, pack_chars = \\\n self._get_vcpu_field_and_address(field_name, x, y, p)\n\n # Pack the data\n if b\"s\" in pack_chars:\n data = struct.pack(pack_chars, value.encode('utf-8'))\n elif field.length == 1:\n data = struct.pack(pack_chars, value)\n else:\n # NOTE: At the time of writing no VCPU struct fields are of this\n # form.\n data = struct.pack(pack_chars, *value) # pragma: no cover\n\n # Perform the write\n self.write(address, data, x, y)", "response": "Writes a value to the VCPU struct for a specific core."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_processor_status(self, p, x, y):\n # Get the VCPU base\n address = (self.read_struct_field(\"sv\", \"vcpu_base\", x, y) +\n self.structs[b\"vcpu\"].size * p)\n\n # Get the VCPU data\n data = self.read(address, self.structs[b\"vcpu\"].size, x, y)\n\n # Build the kwargs that describe the current state\n state = {\n name.decode('utf-8'): struct.unpack(\n f.pack_chars,\n data[f.offset:f.offset+struct.calcsize(f.pack_chars)]\n )[0] for (name, f) in iteritems(self.structs[b\"vcpu\"].fields)\n }\n state[\"registers\"] = [state.pop(\"r{}\".format(i)) for i in range(8)]\n state[\"user_vars\"] = [state.pop(\"user{}\".format(i)) for i in range(4)]\n state[\"app_name\"] = state[\"app_name\"].strip(b'\\x00').decode('utf-8')\n state[\"cpu_state\"] = consts.AppState(state[\"cpu_state\"])\n state[\"rt_code\"] = 
consts.RuntimeException(state[\"rt_code\"])\n\n        sw_ver = state.pop(\"sw_ver\")\n        state[\"version\"] = ((sw_ver >> 16) & 0xFF,\n                            (sw_ver >> 8) & 0xFF,\n                            (sw_ver >> 0) & 0xFF)\n\n        for newname, oldname in [(\"iobuf_address\", \"iobuf\"),\n                                 (\"program_state_register\", \"psr\"),\n                                 (\"stack_pointer\", \"sp\"),\n                                 (\"link_register\", \"lr\"), ]:\n            state[newname] = state.pop(oldname)\n        state.pop(\"__PAD\")\n        return ProcessorStatus(**state)", "response": "Get the status of a given core and the application executing on it."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_iobuf(self, p, x, y):\n        return self.get_iobuf_bytes(p, x, y).decode(\"utf-8\")", "response": "Read the messages from the IOBUF buffer on a specific core."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_iobuf_bytes(self, p, x, y):\n        # The IOBUF data is stored in a linked-list of blocks of memory in\n        # SDRAM. The size of each block is given in SV\n        iobuf_size = self.read_struct_field(\"sv\", \"iobuf_size\", x, y)\n\n        # The first block in the list is given in the core's VCPU field\n        address = self.read_vcpu_struct_field(\"iobuf\", x, y, p)\n\n        iobuf = b\"\"\n\n        while address:\n            # The IOBUF data is preceded by a header which gives the next\n            # address and also the length of the string in the current buffer.\n            iobuf_data = self.read(address, iobuf_size + 16, x, y)\n            address, time, ms, length = struct.unpack(\"<4I\", iobuf_data[:16])\n            iobuf += iobuf_data[16:16 + length]\n\n        return iobuf", "response": "Read raw bytes from the IOBUF buffer on a specific core."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn the values of the router diagnostic counters.", "response": "def get_router_diagnostics(self, x, y):\n        \"\"\"Get the values of the router diagnostic counters.\n\n        Returns\n        -------\n        :py:class:`~.RouterDiagnostics`\n            Description of the state of the counters.\n        
\"\"\"\n # Read the block of memory\n data = self.read(0xe1000300, 64, x=x, y=y)\n\n # Convert to 16 ints, then process that as the appropriate tuple type\n return RouterDiagnostics(*struct.unpack(\"<16I\", data))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef iptag_set(self, iptag, addr, port, x, y):\n # Format the IP address\n ip_addr = struct.pack('!4B',\n *map(int, socket.gethostbyname(addr).split('.')))\n self._send_scp(x, y, 0, SCPCommands.iptag,\n int(consts.IPTagCommands.set) << 16 | iptag,\n port, struct.unpack('` which allows safe reading and writing to the block\n that is allocated.\n\n Returns\n -------\n :py:class:`.MemoryIO`\n File-like object which allows accessing the newly allocated region\n of memory. For example::\n\n >>> # Read, write and seek through the allocated memory just\n >>> # like a file\n >>> mem = mc.sdram_alloc_as_filelike(12) # doctest: +SKIP\n >>> mem.write(b\"Hello, world\") # doctest: +SKIP\n 12\n >>> mem.seek(0) # doctest: +SKIP\n >>> mem.read(5) # doctest: +SKIP\n b\"Hello\"\n >>> mem.read(7) # doctest: +SKIP\n b\", world\"\n\n >>> # Reads and writes are truncated to the allocated region,\n >>> # preventing accidental clobbering/access of memory.\n >>> mem.seek(0) # doctest: +SKIP\n >>> mem.write(b\"How are you today?\") # doctest: +SKIP\n 12\n >>> mem.seek(0) # doctest: +SKIP\n >>> mem.read(100) # doctest: +SKIP\n b\"How are you \"\n\n See the :py:class:`.MemoryIO` class for details of other features\n of these file-like views of SpiNNaker's memory.\n\n Raises\n ------\n rig.machine_control.machine_controller.SpiNNakerMemoryError\n If the memory cannot be allocated, or the tag is already taken or\n invalid.\n \"\"\"\n # Perform the malloc\n start_address = self.sdram_alloc(size, tag, x, y, app_id, clear)\n return MemoryIO(self, x, y, start_address, start_address + size)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following 
Python 3 function does\ndef sdram_free(self, ptr, x=Required, y=Required):\n self._send_scp(x, y, 0, SCPCommands.alloc_free,\n consts.AllocOperations.free_sdram_by_ptr, ptr)", "response": "Free an allocated block of memory in SDRAM."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _get_next_nn_id(self):\n self._nn_id = self._nn_id + 1 if self._nn_id < 126 else 1\n return self._nn_id * 2", "response": "Get the next nearest neighbour ID."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _send_ffs(self, pid, n_blocks, fr):\n sfr = fr | (1 << 31)\n self._send_scp(\n 255, 255, 0, SCPCommands.nearest_neighbour_packet,\n (NNCommands.flood_fill_start << 24) | (pid << 16) |\n (n_blocks << 8), 0x0, sfr\n )", "response": "Send a flood - fill start packet."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nsending a flood - fill core select packet.", "response": "def _send_ffcs(self, region, core_mask, fr):\n \"\"\"Send a flood-fill core select packet.\n\n This packet was added in a patched SC&MP 1.34*. Each packet includes a\n region and a core mask; every core that is in the region ORs the core\n mask with a mask it stores locally. 
On receiving a flood-fill end (FFE)\n packet the application is loaded to the cores specified by this\n composed core mask.\n\n FFCS packets should be sent in ascending order of\n `(region << 18) | core`.\n\n * See https://bitbucket.org/mundya/scamp/branch/new-ff2\n \"\"\"\n arg1 = (NNCommands.flood_fill_core_select << 24) | core_mask\n arg2 = region\n self._send_scp(255, 255, 0, SCPCommands.nearest_neighbour_packet,\n arg1, arg2, fr)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _send_ffd(self, pid, aplx_data, address):\n block = 0\n pos = 0\n aplx_size = len(aplx_data)\n\n while pos < aplx_size:\n # Get the next block of data, send and progress the block\n # counter and the address\n data = aplx_data[pos:pos + self.scp_data_length]\n data_size = len(data)\n size = data_size // 4 - 1\n\n arg1 = (NNConstants.forward << 24 | NNConstants.retry << 16 | pid)\n arg2 = (block << 16) | (size << 8)\n self._send_scp(255, 255, 0, SCPCommands.flood_fill_data,\n arg1, arg2, address, data)\n\n # Increment the address and the block counter\n block += 1\n address += data_size\n pos += data_size", "response": "Send flood - fill data packets."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nsend a flood - fill end packet.", "response": "def _send_ffe(self, pid, app_id, app_flags, fr):\n \"\"\"Send a flood-fill end packet.\n\n The cores and regions that the application should be loaded to will\n have been specified by a stream of flood-fill core select packets\n (FFCS).\n \"\"\"\n arg1 = (NNCommands.flood_fill_end << 24) | pid\n arg2 = (app_id << 24) | (app_flags << 18)\n self._send_scp(255, 255, 0, SCPCommands.nearest_neighbour_packet,\n arg1, arg2, fr)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nloads an application into a set of cores.", "response": "def load_application(self, *args, **kwargs):\n \"\"\"Load an application to a set of application 
cores.\n\n        This method guarantees that once it returns, all required cores will\n        have been loaded. If this is not possible after a small number of\n        attempts, a :py:exc:`.SpiNNakerLoadingError` will be raised.\n\n        This method can be called in either of the following ways::\n\n            load_application(\"/path/to/app.aplx\", {(x, y): {core, ...}, ...})\n            load_application({\"/path/to/app.aplx\": {(x, y): {core, ...}, ...},\n                              ...})\n\n        Note that the latter format is the same format produced by\n        :py:func:`~rig.place_and_route.util.build_application_map`.\n\n        Parameters\n        ----------\n        app_id : int\n        wait : bool\n            Leave the application in a wait state after successfully loading\n            it.\n        n_tries : int\n            Number of attempts to make to load the application.\n        app_start_delay : float\n            Time to pause (in seconds) after loading to ensure that the\n            application successfully reaches the wait state before checking for\n            success.\n        use_count : bool\n            If True (the default) then the targets dictionary will be assumed\n            to represent _all_ the cores that will be loaded and a faster\n            method to determine whether all applications have been loaded\n            correctly will be used. If False a fallback method will be used.\n\n        Raises\n        ------\n        rig.machine_control.machine_controller.SpiNNakerLoadingError\n            This exception is raised after some cores failed to load after\n            ``n_tries`` attempts.\n        \"\"\"\n        # Get keyword arguments\n        app_id = kwargs.pop(\"app_id\")\n        wait = kwargs.pop(\"wait\")\n        n_tries = kwargs.pop(\"n_tries\")\n        app_start_delay = kwargs.pop(\"app_start_delay\")\n        use_count = kwargs.pop(\"use_count\", True)\n\n        # Coerce the arguments into a single form. 
If there are two arguments\n        # then assume that we have a filename and a map of chips and cores;\n        # otherwise there should be ONE argument which is of the form of the\n        # return value of `build_application_map`.\n        application_map = {}\n        if len(args) == 1:\n            application_map = args[0]\n        elif len(args) == 2:\n            application_map = {args[0]: args[1]}\n        else:\n            raise TypeError(\n                \"load_application: accepts either 1 or 2 positional arguments: \"\n                \"a map of filenames to targets OR a single filename and its \"\n                \"targets\"\n            )\n\n        # Count the number of cores being loaded\n        core_count = sum(\n            len(cores) for ts in six.itervalues(application_map) for\n            cores in six.itervalues(ts)\n        )\n\n        # Mark all targets as unloaded\n        unloaded = application_map\n\n        # Try to load the applications, then determine which are unloaded\n        tries = 0\n        while unloaded != {} and tries <= n_tries:\n            tries += 1\n\n            # Load all unloaded applications, then pause to ensure they reach\n            # the wait state\n            self.flood_fill_aplx(unloaded, app_id=app_id, wait=True)\n            time.sleep(app_start_delay)\n\n            # If running in \"fast\" mode then check that the correct number of\n            # cores are in the \"wait\" state, if so then break out of this loop.\n            if (use_count and\n                    core_count == self.count_cores_in_state(\"wait\", app_id)):\n                unloaded = {}\n                continue\n\n            # Query each target in turn to determine if it is loaded or\n            # otherwise. 
If it is loaded (in the wait state) then remove it\n # from the unloaded list.\n new_unloadeds = dict()\n for app_name, targets in iteritems(unloaded):\n unloaded_targets = {}\n for (x, y), cores in iteritems(targets):\n unloaded_cores = set()\n for p in cores:\n # Read the struct value vcpu->cpu_state, if it is\n # anything BUT wait then we mark this core as unloaded.\n state = consts.AppState(\n self.read_vcpu_struct_field(\"cpu_state\", x, y, p)\n )\n if state is not consts.AppState.wait:\n unloaded_cores.add(p)\n\n if len(unloaded_cores) > 0:\n unloaded_targets[(x, y)] = unloaded_cores\n if len(unloaded_targets) > 0:\n new_unloadeds[app_name] = unloaded_targets\n unloaded = new_unloadeds\n\n # If there are still unloaded cores then we bail\n if unloaded != {}:\n raise SpiNNakerLoadingError(unloaded)\n\n # If not waiting then send the start signal\n if not wait:\n self.send_signal(\"start\", app_id)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ntransmitting a signal to applications.", "response": "def send_signal(self, signal, app_id):\n \"\"\"Transmit a signal to applications.\n\n .. warning::\n In current implementations of SARK, signals are highly likely to\n arrive but this is not guaranteed (especially when the system's\n network is heavily utilised). Users should treat this mechanism\n with caution. Future versions of SARK may resolve this issue.\n\n Parameters\n ----------\n signal : string or :py:class:`~rig.machine_control.consts.AppSignal`\n Signal to transmit. This may be either an entry of the\n :py:class:`~rig.machine_control.consts.AppSignal` enum or, for\n convenience, the name of a signal (defined in\n :py:class:`~rig.machine_control.consts.AppSignal`) as a string.\n \"\"\"\n if isinstance(signal, str):\n try:\n signal = getattr(consts.AppSignal, signal)\n except AttributeError:\n # The signal name is not present in consts.AppSignal! 
The next\n # test will throw an appropriate exception since no string can\n # be \"in\" an IntEnum.\n pass\n if signal not in consts.AppSignal:\n raise ValueError(\n \"send_signal: Cannot transmit signal of type {}\".format(\n repr(signal)))\n\n # Construct the packet for transmission\n arg1 = consts.signal_types[signal]\n arg2 = (signal << 16) | 0xff00 | app_id\n arg3 = 0x0000ffff # Meaning \"transmit to all\"\n self._send_scp(255, 255, 0, SCPCommands.signal, arg1, arg2, arg3)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ncounting the number of cores in a given state.", "response": "def count_cores_in_state(self, state, app_id):\n \"\"\"Count the number of cores in a given state.\n\n .. warning::\n In current implementations of SARK, signals (which are used to\n determine the state of cores) are highly likely to arrive but this\n is not guaranteed (especially when the system's network is heavily\n utilised). Users should treat this mechanism with caution. Future\n versions of SARK may resolve this issue.\n\n Parameters\n ----------\n state : string or :py:class:`~rig.machine_control.consts.AppState` or\n iterable\n Count the number of cores currently in this state. This may be\n either an entry of the\n :py:class:`~rig.machine_control.consts.AppState` enum or, for\n convenience, the name of a state (defined in\n :py:class:`~rig.machine_control.consts.AppState`) as a string or\n an iterable of these, in which case the total count will be\n returned.\n \"\"\"\n if (isinstance(state, collections.Iterable) and\n not isinstance(state, str)):\n # If the state is iterable then call for each state and return the\n # sum.\n return sum(self.count_cores_in_state(s, app_id) for s in state)\n\n if isinstance(state, str):\n try:\n state = getattr(consts.AppState, state)\n except AttributeError:\n # The state name is not present in consts.AppSignal! 
The next\n # test will throw an appropriate exception since no string can\n # be \"in\" an IntEnum.\n pass\n if state not in consts.AppState:\n raise ValueError(\n \"count_cores_in_state: Unknown state {}\".format(\n repr(state)))\n\n # TODO Determine a way to nicely express a way to use the region data\n # stored in arg3.\n region = 0x0000ffff # Largest possible machine, level 0\n level = (region >> 16) & 0x3\n mask = region & 0x0000ffff\n\n # Construct the packet\n arg1 = consts.diagnostic_signal_types[consts.AppDiagnosticSignal.count]\n arg2 = ((level << 26) | (1 << 22) |\n (consts.AppDiagnosticSignal.count << 20) | (state << 16) |\n (0xff << 8) | app_id) # App mask for 1 app_id = 0xff\n arg3 = mask\n\n # Transmit and return the count\n return self._send_scp(\n 255, 255, 0, SCPCommands.signal, arg1, arg2, arg3).arg1"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nblock until the specified number of cores reach the specified state.", "response": "def wait_for_cores_to_reach_state(self, state, count, app_id,\n poll_interval=0.1, timeout=None):\n \"\"\"Block until the specified number of cores reach the specified state.\n\n This is a simple utility-wrapper around the\n :py:meth:`.count_cores_in_state` method which polls the machine until\n (at least) the supplied number of cores has reached the specified\n state.\n\n .. warning::\n In current implementations of SARK, signals (which are used to\n determine the state of cores) are highly likely to arrive but this\n is not guaranteed (especially when the system's network is heavily\n utilised). As a result, in uncommon-but-possible circumstances,\n this function may never exit. Users should treat this function with\n caution. Future versions of SARK may resolve this issue.\n\n Parameters\n ----------\n state : string or :py:class:`~rig.machine_control.consts.AppState`\n The state to wait for cores to enter. 
This may be\n            either an entry of the\n            :py:class:`~rig.machine_control.consts.AppState` enum or, for\n            convenience, the name of a state (defined in\n            :py:class:`~rig.machine_control.consts.AppState`) as a string.\n        count : int\n            The (minimum) number of cores to reach the specified state before\n            this method terminates.\n        poll_interval : float\n            Number of seconds between state counting requests sent to the\n            machine.\n        timeout : float or None\n            Maximum number of seconds which may elapse before giving up. If\n            None, keep trying forever.\n\n        Returns\n        -------\n        int\n            The number of cores in the given state (which will be less than the\n            number required if the method timed out).\n        \"\"\"\n        if timeout is not None:\n            timeout_time = time.time() + timeout\n\n        while True:\n            cur_count = self.count_cores_in_state(state, app_id)\n            if cur_count >= count:\n                break\n\n            # Stop if timeout elapsed\n            if timeout is not None and time.time() > timeout_time:\n                break\n\n            # Pause before retrying\n            time.sleep(poll_interval)\n\n        return cur_count"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef load_routing_tables(self, routing_tables, app_id):\n        for (x, y), table in iteritems(routing_tables):\n            self.load_routing_table_entries(table, x=x, y=y, app_id=app_id)", "response": "Allocate space for and load multicast routing tables."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef load_routing_table_entries(self, entries, x, y, app_id):\n        count = len(entries)\n\n        # Try to allocate room for the entries\n        rv = self._send_scp(\n            x, y, 0, SCPCommands.alloc_free,\n            (app_id << 8) | consts.AllocOperations.alloc_rtr, count\n        )\n        rtr_base = rv.arg1  # Index of the first allocated entry, 0 if failed\n\n        if rtr_base == 0:\n            raise SpiNNakerRouterError(count, x, y)\n\n        # Determine where to write into memory\n        buf = self.read_struct_field(\"sv\", \"sdram_sys\", x, y)\n\n        # Build the data to 
write in, then perform the write\n data = bytearray(16 * len(entries))\n for i, entry in enumerate(entries):\n # Build the route as a 32-bit value\n route = 0x00000000\n for r in entry.route:\n route |= 1 << r\n\n struct.pack_into(consts.RTE_PACK_STRING, data, i*16,\n i, 0, route, entry.key, entry.mask)\n self.write(buf, data, x, y)\n\n # Perform the load of the data into the router\n self._send_scp(\n x, y, 0, SCPCommands.router,\n (count << 16) | (app_id << 8) | consts.RouterOperations.load,\n buf, rtr_base\n )", "response": "Load routing table entries into memory."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ndumps the multicast routing table of a given chip.", "response": "def get_routing_table_entries(self, x, y):\n \"\"\"Dump the multicast routing table of a given chip.\n\n Returns\n -------\n [(:py:class:`~rig.routing_table.RoutingTableEntry`, app_id, core) \\\n or None, ...]\n Ordered list of routing table entries with app_ids and\n core numbers.\n \"\"\"\n # Determine where to read from, perform the read\n rtr_addr = self.read_struct_field(\"sv\", \"rtr_copy\", x, y)\n read_size = struct.calcsize(consts.RTE_PACK_STRING)\n rtr_data = self.read(rtr_addr, consts.RTR_ENTRIES * read_size, x, y)\n\n # Read each routing table entry in turn\n table = list()\n while len(rtr_data) > 0:\n entry, rtr_data = rtr_data[:read_size], rtr_data[read_size:]\n table.append(unpack_routing_table_entry(entry))\n return table"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef clear_routing_table_entries(self, x, y, app_id):\n # Construct the arguments\n arg1 = (app_id << 8) | consts.AllocOperations.free_rtr_by_app\n self._send_scp(x, y, 0, SCPCommands.alloc_free, arg1, 0x1)", "response": "Clears the routing table entries associated with a given application."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 
function\ndef get_p2p_routing_table(self, x, y):\n        table = {}\n\n        # Get the dimensions of the system\n        p2p_dims = self.read_struct_field(\"sv\", \"p2p_dims\", x, y)\n        width = (p2p_dims >> 8) & 0xFF\n        height = (p2p_dims >> 0) & 0xFF\n\n        # Read out the P2P table data, one column at a time (note that eight\n        # entries are packed into each 32-bit word)\n        col_words = (((height + 7) // 8) * 4)\n        for col in range(width):\n            # Read the entries for this row\n            raw_table_col = self.read(\n                consts.SPINNAKER_RTR_P2P + (((256 * col) // 8) * 4),\n                col_words,\n                x, y\n            )\n\n            row = 0\n            while row < height:\n                raw_word, raw_table_col = raw_table_col[:4], raw_table_col[4:]\n                word, = struct.unpack(\"<I\", raw_word)\n                for entry in range(min(8, height - row)):\n                    table[(col, row)] = consts.P2PTableEntry(\n                        (word >> (3*entry)) & 0b111)\n                    row += 1\n\n        return table", "response": "Dump the contents of a chip s P2P routing table."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_chip_info(self, x, y):\n        info = self._send_scp(x, y, 0, SCPCommands.info, expected_args=3)\n\n        # Unpack values encoded in the argument fields\n        num_cores = info.arg1 & 0x1F\n        working_links = set(link for link in Links\n                            if (info.arg1 >> (8 + link)) & 1)\n        largest_free_rtr_mc_block = (info.arg1 >> 14) & 0x7FF\n        ethernet_up = bool(info.arg1 & (1 << 25))\n\n        # Unpack the values in the data payload\n        data = struct.unpack_from(\"<18BHI\", info.data)\n        core_states = [consts.AppState(c) for c in data[:18]]\n        local_ethernet_chip = ((data[18] >> 8) & 0xFF,\n                               (data[18] >> 0) & 0xFF)\n        ip_address = \".\".join(str((data[19] >> i) & 0xFF)\n                              for i in range(0, 32, 8))\n\n        return ChipInfo(\n            num_cores=num_cores,\n            core_states=core_states[:num_cores],\n            working_links=working_links,\n            largest_free_sdram_block=info.arg2,\n            largest_free_sram_block=info.arg3,\n            largest_free_rtr_mc_block=largest_free_rtr_mc_block,\n            ethernet_up=ethernet_up,\n            ip_address=ip_address,\n            local_ethernet_chip=local_ethernet_chip,\n        )", "response": "Get general information about the resources available on a 
chip."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ndiscover the integrity and resource availability of a whole SpiNNaker system. This command performs :py:meth:`.get_chip_info` on all working chips in the system returning an enhanced :py:class:`dict` (:py:class:`.SystemInfo`) containing a look-up from chip coordinate to :py:class:`.ChipInfo`. In addition to standard dictionary functionality, :py:class:`.SystemInfo` provides a number of convenience methods, which allow convenient iteration over various aspects of the information stored. .. note:: This method replaces the deprecated :py:meth:`.get_machine` method. To build a :py:class:`~rig.place_and_route.Machine` for place-and-route purposes, the :py:func:`rig.place_and_route.utils.build_machine` utility function may be used with :py:meth:`.get_system_info` like so:: >> from rig.place_and_route.utils import build_machine >> sys_info = mc.get_system_info() >> machine = build_machine(sys_info) Parameters ---------- x : int y : int The coordinates of the chip from which system exploration should begin, by default (255, 255). Most users will not need to change these parameters. Returns ------- :py:class:`.SystemInfo` An enhanced :py:class:`dict` object {(x, y): :py:class:`.ChipInfo`, ...} with a number of utility methods for accessing higher-level system information.", "response": "def get_system_info(self, x=255, y=255):\n \"\"\"Discover the integrity and resource availability of a whole\n SpiNNaker system.\n\n This command performs :py:meth:`.get_chip_info` on all working chips in\n the system returning an enhanced :py:class:`dict`\n (:py:class:`.SystemInfo`) containing a look-up from chip coordinate to\n :py:class:`.ChipInfo`. In addition to standard dictionary\n functionality, :py:class:`.SystemInfo` provides a number of convenience\n methods, which allow convenient iteration over various aspects of the\n information stored.\n\n .. 
note::\n This method replaces the deprecated :py:meth:`.get_machine` method.\n To build a :py:class:`~rig.place_and_route.Machine` for\n place-and-route purposes, the\n :py:func:`rig.place_and_route.utils.build_machine` utility function\n may be used with :py:meth:`.get_system_info` like so::\n\n >> from rig.place_and_route.utils import build_machine\n >> sys_info = mc.get_system_info()\n >> machine = build_machine(sys_info)\n\n Parameters\n ----------\n x : int\n y : int\n The coordinates of the chip from which system exploration should\n begin, by default (255, 255). Most users will not need to change\n these parameters.\n\n Returns\n -------\n :py:class:`.SystemInfo`\n An enhanced :py:class:`dict` object {(x, y): :py:class:`.ChipInfo`,\n ...} with a number of utility methods for accessing higher-level\n system information.\n \"\"\"\n # A quick way of getting a list of working chips\n p2p_tables = self.get_p2p_routing_table(x, y)\n\n # Calculate the extent of the system\n max_x = max(x_ for (x_, y_), r in iteritems(p2p_tables)\n if r != consts.P2PTableEntry.none)\n max_y = max(y_ for (x_, y_), r in iteritems(p2p_tables)\n if r != consts.P2PTableEntry.none)\n\n sys_info = SystemInfo(max_x + 1, max_y + 1)\n\n for (x, y), p2p_route in iteritems(p2p_tables):\n if p2p_route != consts.P2PTableEntry.none:\n try:\n sys_info[(x, y)] = self.get_chip_info(x, y)\n except SCPError:\n # The chip was listed in the P2P table but is not\n # responding. 
Assume it is dead and don't include it in\n # the info returned.\n pass\n\n return sys_info"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\niterates over the coordinates of Ethernet connected chips in the system.", "response": "def ethernet_connected_chips(self):\n \"\"\"Iterate over the coordinates of Ethernet connected chips.\n\n Yields\n ------\n ((x, y), str)\n The coordinate and IP address of each Ethernet connected chip in\n the system.\n \"\"\"\n for xy, chip_info in six.iteritems(self):\n if chip_info.ethernet_up:\n yield (xy, chip_info.ip_address)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef dead_chips(self):\n for x in range(self.width):\n for y in range(self.height):\n if (x, y) not in self:\n yield (x, y)", "response": "Yields the coordinates of all dead chips."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngenerates the coordinates of all working links.", "response": "def links(self):\n \"\"\"Generate the coordinates of all working links.\n\n Yields\n ------\n (x, y, :py:class:`rig.links.Links`)\n A working link leaving a chip from the perspective of the chip. For\n example ``(0, 0, Links.north)`` would be the link going north from\n chip (0, 0) to chip (0, 1).\n \"\"\"\n for (x, y), chip_info in iteritems(self):\n for link in chip_info.working_links:\n yield (x, y, link)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngenerates the coordinates of all dead links leaving working chips.", "response": "def dead_links(self):\n \"\"\"Generate the coordinates of all dead links leaving working chips.\n\n Any link leading to a dead chip will also be included in the list of\n dead links. In non-torroidal SpiNNaker sysmtes (e.g. 
single SpiNN-5\n boards), links on the periphery of the system will be marked as dead.\n\n Yields\n ------\n (x, y, :py:class:`rig.links.Links`)\n A working link leaving a chip from the perspective of the chip. For\n example ``(0, 0, Links.north)`` would be the link going north from\n chip (0, 0) to chip (0, 1).\n \"\"\"\n for (x, y), chip_info in iteritems(self):\n for link in Links:\n if link not in chip_info.working_links:\n yield (x, y, link)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef cores(self):\n for (x, y), chip_info in iteritems(self):\n for p, state in enumerate(chip_info.core_states):\n yield (x, y, p, state)", "response": "Yields the set of all cores in the system."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreads a number of bytes from the SpiNNaker memory region.", "response": "def read(self, n_bytes=-1):\n \"\"\"Read a number of bytes from the memory.\n\n .. note::\n Reads beyond the specified memory range will be truncated.\n\n .. note::\n Produces a :py:exc:`.TruncationWarning` if fewer bytes are read\n than requested. These warnings can be converted into exceptions\n using :py:func:`warnings.simplefilter`::\n\n >>> import warnings\n >>> from rig.machine_control.machine_controller \\\\\n ... import TruncationWarning\n >>> warnings.simplefilter('error', TruncationWarning)\n\n Parameters\n ----------\n n_bytes : int\n A number of bytes to read. 
If the number of bytes is negative or\n omitted then read all data until the end of memory region.\n\n Returns\n -------\n :py:class:`bytes`\n Data read from SpiNNaker as a bytestring.\n \"\"\"\n # If n_bytes is negative then calculate it as the number of bytes left\n if n_bytes < 0:\n n_bytes = self._end_address - self.address\n\n # Determine how far to read, then read nothing beyond that point.\n if self.address + n_bytes > self._end_address:\n new_n_bytes = self._end_address - self.address\n warnings.warn(\"read truncated from {} to {} bytes\".format(\n n_bytes, new_n_bytes), TruncationWarning, stacklevel=3)\n n_bytes = new_n_bytes\n\n if n_bytes <= 0:\n return b''\n\n # Perform the read and increment the offset\n data = self._parent._perform_read(self.address, n_bytes)\n self._offset += n_bytes\n return data"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef write(self, bytes):\n if self.address + len(bytes) > self._end_address:\n n_bytes = self._end_address - self.address\n\n warnings.warn(\"write truncated from {} to {} bytes\".format(\n len(bytes), n_bytes), TruncationWarning, stacklevel=3)\n bytes = bytes[:n_bytes]\n\n if len(bytes) == 0:\n return 0\n\n # Perform the write and increment the offset\n self._parent._perform_write(self.address, bytes)\n self._offset += len(bytes)\n return len(bytes)", "response": "Writes the specified bytes to the memory."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nseeking to a new position in the memory region.", "response": "def seek(self, n_bytes, from_what=os.SEEK_SET):\n \"\"\"Seek to a new position in the memory region.\n\n Parameters\n ----------\n n_bytes : int\n Number of bytes to seek.\n from_what : int\n As in the Python standard: `0` seeks from the start of the memory\n region, `1` seeks from the current position and `2` seeks from the\n end of the memory region. 
For example::\n\n mem.seek(-1, 2) # Goes to the last byte in the region\n mem.seek(-5, 1) # Goes 5 bytes before that point\n mem.seek(0) # Returns to the start of the region\n\n Note that `os.SEEK_END`, `os.SEEK_CUR` and `os.SEEK_SET` are also\n valid arguments.\n \"\"\"\n if from_what == 0:\n self._offset = n_bytes\n elif from_what == 1:\n self._offset += n_bytes\n elif from_what == 2:\n self._offset = (self._end_address - self._start_address) - n_bytes\n else:\n raise ValueError(\n \"from_what: can only take values 0 (from start), \"\n \"1 (from current) or 2 (from end) not {}\".format(from_what)\n )"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nfrees the memory referred to by the file - like.", "response": "def free(self):\n \"\"\"Free the memory referred to by the file-like, any subsequent\n operations on this file-like or slices of it will fail.\n \"\"\"\n # Free the memory\n self._machine_controller.sdram_free(self._start_address,\n self._x, self._y)\n\n # Mark as freed\n self._freed = True"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nperforming a read using the machine controller.", "response": "def _perform_read(self, addr, size):\n \"\"\"Perform a read using the machine controller.\"\"\"\n return self._machine_controller.read(addr, size, self._x, self._y, 0)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nperform a write using the machine controller.", "response": "def _perform_write(self, addr, data):\n \"\"\"Perform a write using the machine controller.\"\"\"\n return self._machine_controller.write(addr, data, self._x, self._y, 0)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nunpacking the SDP header from a bytestring into a packet.", "response": "def _unpack_sdp_into_packet(packet, bytestring):\n \"\"\"Unpack the SDP header from a bytestring into a packet.\n\n Parameters\n ----------\n packet : 
:py:class:`.SDPPacket`\n Packet into which to store the unpacked header.\n bytestring : bytes\n Bytes from which to unpack the header data.\n \"\"\"\n # Extract the header and the data from the packet\n packet.data = bytestring[10:] # Everything but the header\n\n # Unpack the header\n (flags, packet.tag, dest_cpu_port, src_cpu_port,\n packet.dest_y, packet.dest_x,\n packet.src_y, packet.src_x) = struct.unpack_from('<2x8B', bytestring)\n packet.reply_expected = flags == FLAG_REPLY\n\n # Neaten up the combined VCPU and port fields\n packet.dest_cpu = dest_cpu_port & 0x1f\n packet.dest_port = (dest_cpu_port >> 5) # & 0x07\n packet.src_cpu = src_cpu_port & 0x1f\n packet.src_port = (src_cpu_port >> 5)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef packed_data(self):\n # Pack the header\n scp_header = struct.pack(\"<2H\", self.cmd_rc, self.seq)\n\n # Potential loop intentionally unrolled\n if self.arg1 is not None:\n scp_header += struct.pack('Q', struct.pack('q', hash_val))[0]"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngenerates a set of hash functions.", "response": "def generate_hashfunctions(nbr_bits, nbr_slices):\n \"\"\"Generate a set of hash functions.\n\n The core method is a 64-bit murmur3 hash which has a good distribution.\n Multiple hashes can be generate using the previous hash value as a seed.\n \"\"\"\n def _make_hashfuncs(key):\n if isinstance(key, text_type):\n key = key.encode('utf-8')\n else:\n key = str(key)\n rval = []\n current_hash = 0\n for i in range(nbr_slices):\n seed = current_hash\n current_hash = hash64(key, seed)\n rval.append(current_hash % nbr_bits)\n return rval\n return _make_hashfuncs"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn a dictionary containing the current context arguments.", "response": "def get_context_arguments(self):\n \"\"\"Return a dictionary containing the 
current context arguments.\"\"\"\n cargs = {}\n for context in self.__context_stack:\n cargs.update(context.context_arguments)\n return cargs"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreducing the size of a routing table by merging together entries where possible and removing any remaining default routes.", "response": "def minimise(routing_table, target_length):\n \"\"\"Reduce the size of a routing table by merging together entries where\n possible and by removing any remaining default routes.\n\n .. warning::\n\n The input routing table *must* also include entries which could be\n removed and replaced by default routing.\n\n .. warning::\n\n It is assumed that the input routing table is not in any particular\n order and may be reordered into ascending order of generality (number\n of don't cares/Xs in the key-mask) without affecting routing\n correctness. It is also assumed that if this table is unordered it is\n at least orthogonal (i.e., there are no two entries which would match\n the same key) and reorderable.\n\n .. note::\n\n If *all* the keys in the table are derived from a single instance\n of :py:class:`~rig.bitfield.BitField` then the table is guaranteed\n to be orthogonal and reorderable.\n\n .. note::\n\n Use :py:meth:`~rig.routing_table.expand_entries` to generate an\n orthogonal table and receive warnings if the input table is not\n orthogonal.\n\n Parameters\n ----------\n routing_table : [:py:class:`~rig.routing_table.RoutingTableEntry`, ...]\n Routing entries to be merged.\n target_length : int or None\n Target length of the routing table; the minimisation procedure will\n halt once either this target is reached or no further minimisation is\n possible. 
If None then the table will be made as small as possible.\n\n Raises\n ------\n MinimisationFailedError\n If the smallest table that can be produced is larger than\n `target_length`.\n\n Returns\n -------\n [:py:class:`~rig.routing_table.RoutingTableEntry`, ...]\n Reduced routing table entries.\n \"\"\"\n table, _ = ordered_covering(routing_table, target_length, no_raise=True)\n return remove_default_routes(table, target_length)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef ordered_covering(routing_table, target_length, aliases=dict(),\n no_raise=False):\n \"\"\"Reduce the size of a routing table by merging together entries where\n possible.\n\n .. warning::\n\n The input routing table *must* also include entries which could be\n removed and replaced by default routing.\n\n .. warning::\n\n It is assumed that the input routing table is not in any particular\n order and may be reordered into ascending order of generality (number\n of don't cares/Xs in the key-mask) without affecting routing\n correctness. It is also assumed that if this table is unordered it is\n at least orthogonal (i.e., there are no two entries which would match\n the same key) and reorderable.\n\n .. note::\n\n If *all* the keys in the table are derived from a single instance\n of :py:class:`~rig.bitfield.BitField` then the table is guaranteed\n to be orthogonal and reorderable.\n\n .. note::\n\n Use :py:meth:`~rig.routing_table.expand_entries` to generate an\n orthogonal table and receive warnings if the input table is not\n orthogonal.\n\n Parameters\n ----------\n routing_table : [:py:class:`~rig.routing_table.RoutingTableEntry`, ...]\n Routing entries to be merged.\n target_length : int or None\n Target length of the routing table; the minimisation procedure will\n halt once either this target is reached or no further minimisation is\n possible. 
If None then the table will be made as small as possible.\n\n Other Parameters\n ----------------\n aliases : {(key, mask): {(key, mask), ...}, ...}\n Dictionary of which keys and masks in the routing table are\n combinations of other (now removed) keys and masks; this allows us to\n consider only the keys and masks the user actually cares about when\n determining if inserting a new entry will break the correctness of the\n table. This should be supplied when using this method to update an\n already minimised table.\n no_raise : bool\n If False (the default) then an error will be raised if the table cannot\n be minimised to be smaller than `target_length` and `target_length` is\n not None. If True then a table will be returned regardless of the size\n of the final table.\n\n Raises\n ------\n MinimisationFailedError\n If the smallest table that can be produced is larger than\n `target_length` and `no_raise` is False.\n\n Returns\n -------\n [:py:class:`~rig.routing_table.RoutingTableEntry`, ...]\n Reduced routing table entries.\n {(key, mask): {(key, mask), ...}, ...}\n A new aliases dictionary.\n \"\"\"\n # Copy the aliases dictionary\n aliases = dict(aliases)\n\n # Perform an initial sort of the routing table in order of increasing\n # generality.\n routing_table = sorted(\n routing_table,\n key=lambda entry: _get_generality(entry.key, entry.mask)\n )\n\n while target_length is None or len(routing_table) > target_length:\n # Get the best merge\n merge = _get_best_merge(routing_table, aliases)\n\n # If there is no merge then stop\n if merge.goodness <= 0:\n break\n\n # Otherwise apply the merge, this returns a new routing table and a new\n # aliases dictionary.\n routing_table, aliases = merge.apply(aliases)\n\n # If the table is still too big then raise an error\n if (not no_raise and\n target_length is not None and\n len(routing_table) > target_length):\n raise MinimisationFailedError(target_length, len(routing_table))\n\n # Return the finished routing table and 
aliases table\n return routing_table, aliases", "response": "Reduce the size of a routing table by merging together entries where possible."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncounts the number of Xs in the key - mask pair.", "response": "def _get_generality(key, mask):\n \"\"\"Count the number of Xs in the key-mask pair.\n\n For example, there are 32 Xs in ``0x00000000/0x00000000``::\n\n >>> _get_generality(0x0, 0x0)\n 32\n\n And no Xs in ``0xffffffff/0xffffffff``::\n\n >>> _get_generality(0xffffffff, 0xffffffff)\n 0\n \"\"\"\n xs = (~key) & (~mask)\n return sum(1 for i in range(32) if xs & (1 << i))"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ninspect all possible merges and return the best merge and the best goodness for the calling method.", "response": "def _get_best_merge(routing_table, aliases):\n \"\"\"Inspect all possible merges for the routing table and return the merge\n which would combine the greatest number of entries.\n\n Returns\n -------\n :py:class:`~.Merge`\n \"\"\"\n # Create an empty merge to start with\n best_merge = _Merge(routing_table)\n best_goodness = 0\n\n # Look through every merge, discarding those that are no better than the\n # best we currently know about.\n for merge in _get_all_merges(routing_table):\n # If the merge isn't sufficiently good ignore it and move on\n if merge.goodness <= best_goodness:\n continue\n\n # After the merge refines itself to remove entries which would either\n # be aliased under other entries or entries which would cause the\n # aliasing of other entries we check if it is better than the current\n # best merge and reject it if it isn't.\n merge = _refine_merge(merge, aliases, min_goodness=best_goodness)\n if merge.goodness > best_goodness:\n # The merge we now have a reference to is better than the best\n # merge that we've previously encountered.\n best_merge = merge\n best_goodness = merge.goodness\n\n # Return the 
best merge and the best goodness for the calling method\n return best_merge"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nget all possible sets of entries to merge.", "response": "def _get_all_merges(routing_table):\n \"\"\"Get possible sets of entries to merge.\n\n Yields\n ------\n :py:class:`~.Merge`\n \"\"\"\n # Memorise entries that have been considered as part of a merge\n considered_entries = set()\n\n for i, entry in enumerate(routing_table):\n # If we've already considered this entry then skip\n if i in considered_entries:\n continue\n\n # Construct a merge by including other routing table entries below this\n # one which have equivalent routes.\n merge = set([i])\n merge.update(\n j for j, other_entry in enumerate(routing_table[i+1:], start=i+1)\n if entry.route == other_entry.route\n )\n\n # Mark all these entries as considered\n considered_entries.update(merge)\n\n # If the merge contains multiple entries then yield it\n if len(merge) > 1:\n yield _Merge(routing_table, merge)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ndetermining the index in the routing table where a new entry should be inserted.", "response": "def _get_insertion_index(routing_table, generality):\n \"\"\"Determine the index in the routing table where a new entry should be\n inserted.\n \"\"\"\n # We insert before blocks of equivalent generality, so decrement the given\n # generality.\n generality -= 1\n\n # Wrapper for _get_generality which accepts a routing entry\n def gg(entry):\n return _get_generality(entry.key, entry.mask)\n\n # Perform a binary search through the routing table\n bottom = 0\n top = len(routing_table)\n pos = (top - bottom) // 2\n\n pg = gg(routing_table[pos])\n while pg != generality and bottom < pos < top:\n if pg < generality:\n bottom = pos # Move up\n else: # pg > generality\n top = pos # Move down\n\n # Compute a new position\n pos = bottom + (top - bottom) // 2\n pg = 
gg(routing_table[pos])\n\n while (pos < len(routing_table) and\n gg(routing_table[pos]) <= generality):\n pos += 1\n\n return pos"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _refine_merge(merge, aliases, min_goodness):\n # Perform the down-check\n merge = _refine_downcheck(merge, aliases, min_goodness)\n\n # If the merge is still sufficiently good then continue to refine it.\n if merge.goodness > min_goodness:\n # Perform the up-check\n merge, changed = _refine_upcheck(merge, min_goodness)\n\n if changed and merge.goodness > min_goodness:\n # If the up-check removed any entries we need to re-perform the\n # down-check; but we do not need to re-perform the up-check as the\n # down check can only move the resultant merge nearer the top of\n # the routing table.\n merge = _refine_downcheck(merge, aliases, min_goodness)\n\n return merge", "response": "Remove entries from a merge to generate a valid merge which may be\n "} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _refine_upcheck(merge, min_goodness):\n # Remove any entries which would be covered by entries above the merge\n # position.\n changed = False\n for i in sorted(merge.entries, reverse=True):\n # Get all the entries that are between the entry we're looking at the\n # insertion index of the proposed merged index. 
If this entry would be\n # covered up by any of them then we remove it from the merge.\n entry = merge.routing_table[i]\n key, mask = entry.key, entry.mask\n if any(intersect(key, mask, other.key, other.mask) for other in\n merge.routing_table[i+1:merge.insertion_index]):\n # The entry would be partially or wholly covered by another entry,\n # remove it from the merge and return a new merge.\n merge = _Merge(merge.routing_table, merge.entries - {i})\n changed = True\n\n # Check if the merge is sufficiently good\n if merge.goodness <= min_goodness:\n merge = _Merge(merge.routing_table) # Replace with empty merge\n break\n\n # Return the final merge\n return merge, changed", "response": "Remove any entries which would be covered by the merge and return a new merge."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _refine_downcheck(merge, aliases, min_goodness):\n # Operation\n # ---------\n # While the merge is still better than `min_goodness` we determine which\n # entries below it in the table it covers. For each of these covered\n # entries we find which bits are Xs in the merged entry and are NOT Xs in\n # the covered entry.\n #\n # For example:\n #\n # Merged entry: ...0XXX1...\n # Covered entry: ...010XX...\n # Bits of interest: ^^\n # Label used below: mn\n #\n # NOTE:\n # The covered entry may be of lower generality than the prospective\n # merged entry if it is contained within the aliases dictionary (e.g.,\n # ...010XX... may be part of\n # ``aliases = {...XXXXX...: {..., ...010XX..., ...}, ...})``\n #\n # In this case there are 2 bits of interest highlighted. These are bits in\n # the merge entry whose value can be set (by removing entries from the\n # merge) to avoid covering the covered entry. 
Whenever we have multiple\n # covered entries we care only about the entries with the fewest number of\n # ``settable`` bits because these most constrain which entries we may\n # remove from the merge to avoid covering up the lower entry.\n #\n # NOTE:\n # * If there is only 1 ``settable`` bit then we are very constrained in\n # terms of which entries must be removed from the merge to avoid\n # covering a lower entry.\n # * If there are no ``settable`` bits then we cannot possibly avoid\n # covering the lower entry - the only correct action is to return an\n # empty merge.\n #\n # Assuming that there were no covered entries without any ``settable`` bits\n # (that is ``stringency > 0``) then ``bits_and_vals`` contains pairs of\n # bits and boolean values which indicate which values need to be removed\n # from which bit positions to avoid covering up lower entries. If the\n # example above were the only covered entry then ``bits_and_vals`` would\n # contain ``(m, True)`` to indicate that all entries containing Xs or 1s in\n # the left-most bit of interest could be removed to avoid the covered entry\n # and ``(n, False)`` to indicate that all entries containing Xs or 0s in\n # the right-most bit of interest could be removed to avoid covering the\n # entry.\n #\n # NOTE:\n # ``bits_and_vals`` consists of a set of options (e.g., we *could* remove\n # all entries with Xs or 1s in bit ``m`` *or* we could remove all entries\n # with Xs or 0s in bit ``n``, either would resolve the above covering).\n #\n # To determine which course of action to take we build a dictionary mapping\n # each of the pairs in ``bits_and_vals`` to the entries that would need to\n # be removed to \"set\" that bit in the merged entry. 
For example, we might\n # end up with:\n #\n # options = {(m, True): {1, 4, 5},\n # (n, False): {3, 7}}\n #\n # Indicating that we'd need to remove entries 1, 4 and 5 from the merge to\n # \"set\" the mth bit of the merged to 0 or that we'd need to remove entries\n # 3 and 7 to set the nth bit of the merged entry to set the nth bit to 1.\n #\n # NOTE:\n # The boolean part of the pair indicates which value needs to be removed\n # (True -> remove all 1s and Xs; False -> remove all 0s and Xs). If all\n # Xs and 1s in a given bit position are removed from a merge then the\n # merged entry is guaranteed to have a 0 in the bit position. Vice-versa\n # removing all Xs and 0s in a given bit position from a merge will result\n # in a merged entry with a 1 in that position.\n #\n # As we want to make our merges as large as possible we select the smallest\n # set of entries to remove from the merge from ``options``.\n #\n # The whole process is then repeated since:\n # * we ignored covered entries with more ``settable`` bits there may\n # still be covered entries below the merged entry\n # * after removing entries from the merge the merged entry is of lower\n # generality and is therefore nearer the top of the table so new\n # entries may be have become covered\n\n # Set of bit positions\n all_bits = tuple(1 << i for i in range(32))\n\n # While the merge is still worth considering continue to perform the\n # down-check.\n while merge.goodness > min_goodness:\n covered = list(_get_covered_keys_and_masks(merge, aliases))\n\n # If there are no covered entries (the merge is valid) then break out\n # of the loop.\n if not covered:\n break\n\n # For each covered entry work out which bits in the key-mask pair which\n # are not Xs are not covered by Xs in the merge key-mask pair. 
Only\n # keep track of the entries which have the fewest bits that we could\n # set.\n most_stringent = 33 # Not at all stringent\n bits_and_vals = set()\n for key, mask in covered:\n # Get the bit positions where there ISN'T an X in the covered entry\n # but there IS an X in the merged entry.\n settable = mask & ~merge.mask\n\n # Count the number of settable bits, if this is a more stringent\n # constraint than the previous constraint then ensure that we\n # record the new stringency and store which bits we need to set to\n # meet the constraint.\n n_settable = sum(1 for bit in all_bits if bit & settable)\n if n_settable <= most_stringent:\n if n_settable < most_stringent:\n most_stringent = n_settable\n bits_and_vals = set()\n\n # Add this settable mask and the required values to the\n # settables list.\n bits_and_vals.update((bit, not (key & bit)) for bit in\n all_bits if bit & settable)\n\n if most_stringent == 0:\n # If are there any instances where we could not possibly change a\n # bit to avoid aliasing an entry we'll return an empty merge and\n # give up.\n merge = _Merge(merge.routing_table, set())\n break\n else:\n # Get the smallest number of entries to remove to modify the\n # resultant key-mask to avoid covering a lower entry. 
Prefer to\n # modify more significant bits of the key mask.\n remove = set() # Entries to remove\n for bit, val in sorted(bits_and_vals, reverse=True):\n working_remove = set() # Holder for working remove set\n\n for i in merge.entries:\n entry = merge.routing_table[i]\n\n if ((not entry.mask & bit) or\n (bool(entry.key & bit) is (not val))):\n # If the entry has an X in this position then it will\n # need to be removed regardless of whether we want to\n # set a 0 or a 1 in this position, likewise it will\n # need to be removed if it is a 0 and we want a 1 or\n # vice-versa.\n working_remove.add(i)\n\n # If the current remove set is empty or the new remove set is\n # smaller update the remove set.\n if not remove or len(working_remove) < len(remove):\n remove = working_remove\n\n # Remove the selected entries from the merge\n merge = _Merge(merge.routing_table, merge.entries - remove)\n else:\n # NOTE: If there are no covered entries, that is, if the merge is\n # better than min goodness AND valid this `else` clause is not reached.\n # Ensure than an empty merge is returned if the above loop was aborted\n # early with a non-empty merge.\n merge = _Merge(merge.routing_table, set())\n\n return merge", "response": "Return a new merge that can be used to reconstruct the merge."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _get_covered_keys_and_masks(merge, aliases):\n # For every entry in the table below the insertion index see which keys\n # and masks would overlap with the key and mask of the merged entry.\n for entry in merge.routing_table[merge.insertion_index:]:\n key_mask = (entry.key, entry.mask)\n keys_masks = aliases.get(key_mask, [key_mask])\n\n for key, mask in keys_masks:\n if intersect(merge.key, merge.mask, key, mask):\n yield key, mask", "response": "Get the keys and masks that would be covered by the given merge."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script 
to\napply the merge to the routing table it is defined against and get a new routing table and aliases dictionary.", "response": "def apply(self, aliases):\n    \"\"\"Apply the merge to the routing table it is defined against and get a\n    new routing table and aliases dictionary.\n\n    Returns\n    -------\n    [:py:class:`~rig.routing_table.RoutingTableEntry`, ...]\n        A new routing table which may be minimised further.\n    {(key, mask): {(key, mask), ...}}\n        A new aliases dictionary.\n    \"\"\"\n    # Create a new routing table of the correct size\n    new_size = len(self.routing_table) - len(self.entries) + 1\n    new_table = [None for _ in range(new_size)]\n\n    # Create a copy of the aliases dictionary\n    aliases = dict(aliases)\n\n    # Get the new entry\n    new_entry = RoutingTableEntry(\n        route=self.routing_table[next(iter(self.entries))].route,\n        key=self.key, mask=self.mask, sources=self.sources\n    )\n    aliases[(self.key, self.mask)] = our_aliases = set([])\n\n    # Iterate through the old table copying entries across\n    insert = 0\n    for i, entry in enumerate(self.routing_table):\n        # If this is the insertion point then insert\n        if i == self.insertion_index:\n            new_table[insert] = new_entry\n            insert += 1\n\n        if i not in self.entries:\n            # If this entry isn't to be removed then copy it across to the\n            # new table.\n            new_table[insert] = entry\n            insert += 1\n        else:\n            # If this entry is to be removed then add it to the aliases\n            # dictionary.\n            km = (entry.key, entry.mask)\n            our_aliases.update(aliases.pop(km, {km}))\n\n    # If inserting beyond the end of the old table then insert at the end\n    # of the new table.\n    if self.insertion_index == len(self.routing_table):\n        new_table[insert] = new_entry\n\n    return new_table, aliases"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nfinding pre-built packages in the _packages directory.", "response": "def find_packages(name, pkg_dir):\n    \"\"\"Locate pre-built packages in the _packages directory\"\"\"\n    for c in 
(FileSystemPackageBuilder, ZipPackageBuilder, ExcelPackageBuilder):\n\n package_path, cache_path = c.make_package_path(pkg_dir, name)\n\n if package_path.exists():\n\n yield c.type_code, package_path, cache_path"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef new_args(subparsers):\n parser = subparsers.add_parser(\n 'new',\n help='Create new Metatab packages',\n description=new_args.__doc__,\n formatter_class=argparse.RawDescriptionHelpFormatter,\n )\n\n parser.set_defaults(run_command=new_cmd)\n\n parser.add_argument('-o', '--origin', help=\"Dataset origin, usually a second-level domain name. Required\")\n parser.add_argument('-d', '--dataset', help=\"Main dataset name. Required\", required=True)\n parser.add_argument('-t', '--time', help=\"Temporal extents, usually a year, ISO8601 time, or interval. \")\n parser.add_argument('-s', '--space', help=\"Space, geographic extent, such as a name of a state or a Census geoid\")\n parser.add_argument('-g', '--grain', help=\"Grain, the type of entity a row represents\")\n parser.add_argument('-v', '--variant', help=\"Variant, any distinguishing string\")\n parser.add_argument('-r', '--revision', help=\"Version, defaults to 1\", default=1)\n\n parser.add_argument('-T', '--title', help=\"Set the title\")\n\n parser.add_argument('-L', '--pylib', help=\"Configure a pylib directory for Python code extensions\",\n action='store_true')\n\n parser.add_argument('-E', '--example', help=\"Add examples of resources\",\n action='store_true')\n\n parser.add_argument('-J', '--jupyter', help=\"Create a Jupyter notebook source package\",\n action='store_true')\n\n parser.add_argument('--template', help=\"Metatab file template, defaults to 'metatab' \", default='metatab')\n parser.add_argument('-C', '--config', help=\"Path to config file. 
\"\n                        \"Defaults to ~/.metapack-defaults.csv or value of METAPACK_DEFAULTS env var.\"\n                        \"Sets defaults for special root terms and the Contacts section.\")\n    return parser", "response": "Command line options for the new command"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ninitializing both bitarray and current_day_bitarray", "response": "def initialize_bitarray(self):\n        \"\"\"Initialize both bitarrays.\n\n        This BF contains two bit arrays instead of a single one like a plain BF. bitarray\n        is the main bit array where all the historical items are stored. It's the one\n        used for the membership query. The second one, current_day_bitarray is the one\n        used for creating the daily snapshot.\n        \"\"\"\n        self.bitarray = bitarray.bitarray(self.nbr_bits)\n        self.current_day_bitarray = bitarray.bitarray(self.nbr_bits)\n        self.bitarray.setall(False)\n        self.current_day_bitarray.setall(False)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ninitialize the period of the current BF.", "response": "def initialize_period(self, period=None):\n        \"\"\"Initialize the period of BF.\n\n        :period: datetime.datetime for setting the period explicitly.\n        \"\"\"\n        if not period:\n            self.current_period = dt.datetime.now()\n        else:\n            self.current_period = period\n        self.current_period = dt.datetime(self.current_period.year, self.current_period.month, self.current_period.day)\n        self.date = self.current_period.strftime(\"%Y-%m-%d\")"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nwarms the previous snapshot during the day.", "response": "def warm(self, jittering_ratio=0.2):\n        \"\"\"Progressively load the previous snapshot during the day.\n\n        Loading all the snapshots at once can take a substantial amount of time. This method, if called\n        periodically during the day, will progressively load those snapshots one by one. 
Because many workers are\n        going to use this method at the same time, we add jitter to the period between loads to avoid\n        hammering the disk at the same time.\n        \"\"\"\n        if self.snapshot_to_load is None:\n            last_period = self.current_period - dt.timedelta(days=self.expiration-1)\n            self.compute_refresh_period()\n            self.snapshot_to_load = []\n            base_filename = \"%s/%s_%s_*.dat\" % (self.snapshot_path, self.name, self.expiration)\n            availables_snapshots = glob.glob(base_filename)\n            for filename in availables_snapshots:\n                snapshot_period = dt.datetime.strptime(filename.split('_')[-1].strip('.dat'), \"%Y-%m-%d\")\n                if snapshot_period >= last_period:\n                    self.snapshot_to_load.append(filename)\n                    self.ready = False\n\n        if self.snapshot_to_load and self._should_warm():\n            filename = self.snapshot_to_load.pop()\n            self._union_bf_from_file(filename)\n            jittering = self.warm_period * (np.random.random()-0.5) * jittering_ratio\n            self.next_snapshot_load = time.time() + self.warm_period + jittering\n            if not self.snapshot_to_load:\n                self.ready = True"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nrestore the state of the BF using previous snapshots.", "response": "def restore_from_disk(self, clean_old_snapshot=False):\n        \"\"\"Restore the state of the BF using previous snapshots.\n\n        :clean_old_snapshot: Delete the old snapshot on the disk (period < current - expiration)\n        \"\"\"\n        base_filename = \"%s/%s_%s_*.dat\" % (self.snapshot_path, self.name, self.expiration)\n        availables_snapshots = glob.glob(base_filename)\n        last_period = self.current_period - dt.timedelta(days=self.expiration-1)\n        for filename in availables_snapshots:\n            snapshot_period = dt.datetime.strptime(filename.split('_')[-1].strip('.dat'), \"%Y-%m-%d\")\n            if snapshot_period < last_period and not clean_old_snapshot:\n                continue\n            else:\n                self._union_bf_from_file(filename)\n                if snapshot_period == self.current_period:\n                    self._union_bf_from_file(filename, current=True)\n\n            if 
snapshot_period < last_period and clean_old_snapshot:\n                os.remove(filename)\n        self.ready = True"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nsaving the current state of the current day bitarray on disk.", "response": "def save_snaphot(self):\n        \"\"\"Save the current state of the current day bitarray on disk.\n\n        Save the internal representation (bitarray) into a binary file using this format:\n        filename : name_expiration_2013-01-01.dat\n        \"\"\"\n        filename = \"%s/%s_%s_%s.dat\" % (self.snapshot_path, self.name, self.expiration, self.date)\n        with open(filename, 'wb') as f:\n            f.write(zlib.compress(pickle.dumps(self.current_day_bitarray, protocol=pickle.HIGHEST_PROTOCOL)))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef place(vertices_resources, nets, machine, constraints,\n          vertex_order=None, chip_order=None):\n    \"\"\"Blindly places vertices in sequential order onto chips in the machine.\n\n    This algorithm sequentially places vertices onto chips in the order\n    specified (or in an undefined order if not specified). This algorithm is\n    essentially the simplest possible valid placement algorithm and is intended\n    to form the basis of other simple sequential and greedy placers.\n\n    The algorithm proceeds by attempting to place each vertex on a chip. If\n    the vertex fits we move onto the next vertex (but keep filling the same\n    chip). If the vertex does not fit we move onto the next candidate chip\n    until we find somewhere the vertex fits. The algorithm will raise an\n    :py:exc:`rig.place_and_route.exceptions.InsufficientResourceError`\n    if it has failed to fit a vertex on every chip.\n\n    Parameters\n    ----------\n    vertex_order : None or iterable\n        The order in which the vertices should be attempted to be placed.\n\n        If None (the default), the vertices will be placed in the default\n        iteration order of the ``vertices_resources`` argument. 
If an iterable,\n        the iteration sequence should produce each vertex in vertices_resources\n        *exactly once*.\n\n    chip_order : None or iterable\n        The order in which chips should be tried as a candidate location for a\n        vertex.\n\n        If None (the default), the chips will be used in the default iteration\n        order of the ``machine`` object (a raster scan). If an iterable, the\n        iteration sequence should produce (x, y) pairs giving the coordinates\n        of chips to use. All working chip coordinates must be included in the\n        iteration sequence *exactly once*. Additional chip coordinates of\n        non-existent or dead chips are also allowed (and will simply be\n        skipped).\n    \"\"\"\n    # If no vertices to place, just stop (from here on we presume that at least\n    # one vertex will be placed)\n    if len(vertices_resources) == 0:\n        return {}\n\n    # Within the algorithm we modify the resource availability values in the\n    # machine to account for the effects of the current placement. As a result,\n    # an internal copy of the structure must be made.\n    machine = machine.copy()\n\n    # {vertex: (x, y), ...} gives the location of all vertices, updated\n    # throughout the function.\n    placements = {}\n\n    # Handle constraints\n    vertices_resources, nets, constraints, substitutions = \\\n        apply_same_chip_constraints(vertices_resources, nets, constraints)\n    for constraint in constraints:\n        if isinstance(constraint, LocationConstraint):\n            # Location constraints are handled by recording the set of fixed\n            # vertex locations and subtracting their resources from the chips\n            # they're allocated to.\n            location = constraint.location\n            if location not in machine:\n                raise InvalidConstraintError(\n                    \"Chip requested by {} unavailable\".format(constraint))\n            vertex = constraint.vertex\n\n            # Record the constrained vertex's location\n            placements[vertex] = location\n\n            # Make sure the vertex fits at the requested location (updating the\n            # resource availability after placement)\n            resources = vertices_resources[vertex]\n            
machine[location] = subtract_resources(machine[location],\n resources)\n if overallocated(machine[location]):\n raise InsufficientResourceError(\n \"Cannot meet {}\".format(constraint))\n elif isinstance(constraint, # pragma: no branch\n ReserveResourceConstraint):\n apply_reserve_resource_constraint(machine, constraint)\n\n if vertex_order is not None:\n # Must modify the vertex_order to substitute the merged vertices\n # inserted by apply_reserve_resource_constraint.\n vertex_order = list(vertex_order)\n for merged_vertex in substitutions:\n # Swap the first merged vertex for its MergedVertex object and\n # remove all other vertices from the merged set\n vertex_order[vertex_order.index(merged_vertex.vertices[0])] \\\n = merged_vertex\n # Remove all other vertices in the MergedVertex\n already_removed = set([merged_vertex.vertices[0]])\n for vertex in merged_vertex.vertices[1:]:\n if vertex not in already_removed:\n vertex_order.remove(vertex)\n already_removed.add(vertex)\n\n # The set of vertices which have not been constrained, in iteration order\n movable_vertices = (v for v in (vertices_resources\n if vertex_order is None\n else vertex_order)\n if v not in placements)\n\n # A cyclic iterator over all available chips\n chips = cycle(c for c in (machine if chip_order is None else chip_order)\n if c in machine)\n chips_iter = iter(chips)\n\n try:\n cur_chip = next(chips_iter)\n except StopIteration:\n raise InsufficientResourceError(\"No working chips in machine.\")\n\n # The last chip that we successfully placed something on. 
Used to detect\n # when we've tried all available chips and not found a suitable candidate\n last_successful_chip = cur_chip\n\n # Place each vertex in turn\n for vertex in movable_vertices:\n while True:\n resources_if_placed = subtract_resources(\n machine[cur_chip], vertices_resources[vertex])\n\n if not overallocated(resources_if_placed):\n # The vertex fits: record the resources consumed and move on to\n # the next vertex.\n placements[vertex] = cur_chip\n machine[cur_chip] = resources_if_placed\n last_successful_chip = cur_chip\n break\n else:\n # The vertex won't fit on this chip, move onto the next one\n # available.\n cur_chip = next(chips_iter)\n\n # If we've looped around all the available chips without\n # managing to place the vertex, give up!\n if cur_chip == last_successful_chip:\n raise InsufficientResourceError(\n \"Ran out of chips while attempting to place vertex \"\n \"{}\".format(vertex))\n\n finalise_same_chip_constraints(substitutions, placements)\n\n return placements", "response": "This function places vertices onto chips in the order specified by the argument."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nwraps for core place - and - route tasks for common case.", "response": "def place_and_route_wrapper(vertices_resources, vertices_applications,\n nets, net_keys,\n system_info, constraints=[],\n place=default_place, place_kwargs={},\n allocate=default_allocate, allocate_kwargs={},\n route=default_route, route_kwargs={},\n minimise_tables_methods=(remove_default_entries,\n ordered_covering),\n core_resource=Cores, sdram_resource=SDRAM,\n sram_resource=SRAM):\n \"\"\"Wrapper for core place-and-route tasks for the common case.\n\n This function takes a set of vertices and nets and produces placements,\n allocations, minimised routing tables and application loading information.\n\n .. 
note::\n\n This function replaces the deprecated :py:func:`.wrapper` function and\n makes use of the additional information provided by the\n :py:class:`~rig.machine_control.machine_controller.SystemInfo` object\n to infer the constraints required by most applications such as\n reserving non-idle cores such as the monitor processor.\n\n Parameters\n ----------\n vertices_resources : {vertex: {resource: quantity, ...}, ...}\n A dictionary from vertex to the required resources for that vertex.\n This dictionary must include an entry for every vertex in the\n application.\n\n Resource requirements are specified by a dictionary `{resource:\n quantity, ...}` where `resource` is some resource identifier and\n `quantity` is a non-negative integer representing the quantity of that\n resource required.\n vertices_applications : {vertex: application, ...}\n A dictionary from vertices to the application binary to load\n onto cores associated with that vertex. Applications are given as a\n string containing the file name of the binary to load.\n nets : [:py:class:`~rig.netlist.Net`, ...]\n A list (in no particular order) defining the nets connecting vertices.\n net_keys : {:py:class:`~rig.netlist.Net`: (key, mask), ...}\n A dictionary from nets to (key, mask) tuples to be used in SpiNNaker\n routing tables for routes implementing this net. The key and mask\n should be given as 32-bit integers.\n system_info : \\\n :py:class:`~rig.machine_control.machine_controller.SystemInfo`\n A data structure which defines the resources available in the target\n SpiNNaker machine, typically returned by\n :py:meth:`rig.machine_control.MachineController.get_system_info`. This\n information will be used internally to build a\n :py:class:`~rig.place_and_route.Machine` and set of\n :py:mod:`rig.place_and_route.constraints` which describe the SpiNNaker\n machine used and ensure placement, allocation and routing only use\n working and unused chips, cores, memory and links. 
If greater control\n        over these datastructures is required this wrapper may not be\n        appropriate.\n    constraints : [constraint, ...]\n        **Optional.** A list of additional constraints on placement, allocation\n        and routing. Available constraints are provided in the\n        :py:mod:`rig.place_and_route.constraints` module. These constraints\n        will be added to those derived from the ``system_info`` argument which\n        restrict placement and allocation to only idle cores.\n    place : function (Default: :py:func:`rig.place_and_route.place`)\n        **Optional.** Placement algorithm to use.\n    place_kwargs : dict (Default: {})\n        **Optional.** Algorithm-specific arguments for the placer.\n    allocate : function (Default: :py:func:`rig.place_and_route.allocate`)\n        **Optional.** Allocation algorithm to use.\n    allocate_kwargs : dict (Default: {})\n        **Optional.** Algorithm-specific arguments for the allocator.\n    route : function (Default: :py:func:`rig.place_and_route.route`)\n        **Optional.** Routing algorithm to use.\n    route_kwargs : dict (Default: {})\n        **Optional.** Algorithm-specific arguments for the router.\n    minimise_tables_methods : [:py:func:`rig.routing_table.minimise`, ...]\n        **Optional.** An iterable of routing table minimisation algorithms to\n        use when routing tables outgrow the space available. Each method is\n        tried in the order presented and the first to meet the required target\n        length for a given chip is used. Consequently, less computationally\n        costly algorithms should be nearer the start of the list. 
The default\n methods will try to remove default routes\n (:py:meth:`rig.routing_table.remove_default_routes.minimise`) and then\n fall back on the ordered covering algorithm\n (:py:meth:`rig.routing_table.ordered_covering.minimise`).\n core_resource : resource (Default: :py:data:`~rig.place_and_route.Cores`)\n **Optional.** The resource identifier used for cores.\n sdram_resource : resource (Default: :py:data:`~rig.place_and_route.SDRAM`)\n **Optional.** The resource identifier used for SDRAM.\n sram_resource : resource (Default: :py:data:`~rig.place_and_route.SRAM`)\n **Optional.** The resource identifier used for SRAM (System RAM).\n\n Returns\n -------\n placements : {vertex: (x, y), ...}\n A dictionary from vertices to the chip coordinate produced by\n placement.\n allocations : {vertex: {resource: slice, ...}, ...}\n A dictionary from vertices to the resources allocated to it. Resource\n allocations are dictionaries from resources to a :py:class:`slice`\n defining the range of the given resource type allocated to the vertex.\n These :py:class:`slice` objects have `start` <= `end` and `step` set to\n None.\n application_map : {application: {(x, y): set([core_num, ...]), ...}, ...}\n A dictionary from application to the set of cores it should be loaded\n onto. The set of cores is given as a dictionary from chip to sets of\n core numbers.\n routing_tables : {(x, y): \\\n [:py:class:`~rig.routing_table.RoutingTableEntry`, \\\n ...], ...}\n The generated routing tables. 
Provided as a dictionary from chip to a\n list of routing table entries.\n \"\"\"\n # Infer place-and-route data-structures from SystemInfo\n machine = build_machine(system_info,\n core_resource=core_resource,\n sdram_resource=sdram_resource,\n sram_resource=sram_resource)\n base_constraints = build_core_constraints(system_info, core_resource)\n constraints = base_constraints + constraints\n\n # Place/Allocate/Route\n placements = place(vertices_resources, nets, machine, constraints,\n **place_kwargs)\n allocations = allocate(vertices_resources, nets, machine, constraints,\n placements, **allocate_kwargs)\n routes = route(vertices_resources, nets, machine, constraints, placements,\n allocations, core_resource, **route_kwargs)\n\n # Build data-structures ready to feed to the machine loading functions\n application_map = build_application_map(vertices_applications, placements,\n allocations, core_resource)\n\n # Build routing tables from the generated routes\n routing_tables = routing_tree_to_tables(routes, net_keys)\n\n # Minimise the routing tables, if required\n target_lengths = build_routing_table_target_lengths(system_info)\n routing_tables = minimise_tables(routing_tables,\n target_lengths,\n minimise_tables_methods)\n\n return placements, allocations, application_map, routing_tables"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef wrapper(vertices_resources, vertices_applications,\n nets, net_keys,\n machine, constraints=[],\n reserve_monitor=True, align_sdram=True,\n place=default_place, place_kwargs={},\n allocate=default_allocate, allocate_kwargs={},\n route=default_route, route_kwargs={},\n core_resource=Cores, sdram_resource=SDRAM):\n \"\"\"Wrapper for core place-and-route tasks for the common case.\n At a high level this function essentially takes a set of vertices and nets\n and produces placements, memory allocations, routing tables and application\n loading information.\n\n .. 
warning::\n\n This function is deprecated. New users should use\n :py:func:`.place_and_route_wrapper` along with\n :py:meth:`rig.machine_control.MachineController.get_system_info` in\n place of this function. The new wrapper automatically reserves cores\n and SDRAM already in use in the target machine, improving on the\n behaviour of this wrapper which blindly reserves certain ranges of\n resources presuming only core 0 (the monitor processor) is not idle.\n\n Parameters\n ----------\n vertices_resources : {vertex: {resource: quantity, ...}, ...}\n A dictionary from vertex to the required resources for that vertex.\n This dictionary must include an entry for every vertex in the\n application.\n Resource requirements are specified by a dictionary `{resource:\n quantity, ...}` where `resource` is some resource identifier and\n `quantity` is a non-negative integer representing the quantity of that\n resource required.\n vertices_applications : {vertex: application, ...}\n A dictionary from vertices to the application binary to load\n onto cores associated with that vertex. Applications are given as a\n string containing the file name of the binary to load.\n nets : [:py:class:`~rig.netlist.Net`, ...]\n A list (in no particular order) defining the nets connecting vertices.\n net_keys : {:py:class:`~rig.netlist.Net`: (key, mask), ...}\n A dictionary from nets to (key, mask) tuples to be used in SpiNNaker\n routing tables for routes implementing this net. The key and mask\n should be given as 32-bit integers.\n machine : :py:class:`rig.place_and_route.Machine`\n A data structure which defines the resources available in the target\n SpiNNaker machine.\n constraints : [constraint, ...]\n A list of constraints on placement, allocation and routing. 
Available\n constraints are provided in the\n :py:mod:`rig.place_and_route.constraints` module.\n reserve_monitor : bool (Default: True)\n **Optional.** If True, reserve core zero since it will be used as the\n monitor processor using a\n :py:class:`rig.place_and_route.constraints.ReserveResourceConstraint`.\n align_sdram : bool (Default: True)\n **Optional.** If True, SDRAM allocations will be aligned to 4-byte\n addresses. Specifically, the supplied constraints will be augmented\n with an `AlignResourceConstraint(sdram_resource, 4)`.\n place : function (Default: :py:func:`rig.place_and_route.place`)\n **Optional.** Placement algorithm to use.\n place_kwargs : dict (Default: {})\n **Optional.** Algorithm-specific arguments for the placer.\n allocate : function (Default: :py:func:`rig.place_and_route.allocate`)\n **Optional.** Allocation algorithm to use.\n allocate_kwargs : dict (Default: {})\n **Optional.** Algorithm-specific arguments for the allocator.\n route : function (Default: :py:func:`rig.place_and_route.route`)\n **Optional.** Routing algorithm to use.\n route_kwargs : dict (Default: {})\n **Optional.** Algorithm-specific arguments for the router.\n core_resource : resource (Default: :py:data:`~rig.place_and_route.Cores`)\n **Optional.** The resource identifier used for cores.\n sdram_resource : resource (Default: :py:data:`~rig.place_and_route.SDRAM`)\n **Optional.** The resource identifier used for SDRAM.\n\n Returns\n -------\n placements : {vertex: (x, y), ...}\n A dictionary from vertices to the chip coordinate produced by\n placement.\n allocations : {vertex: {resource: slice, ...}, ...}\n A dictionary from vertices to the resources allocated to it. 
Resource\n        allocations are dictionaries from resources to a :py:class:`slice`\n        defining the range of the given resource type allocated to the vertex.\n        These :py:class:`slice` objects have `start` <= `end` and `step` set to\n        None.\n    application_map : {application: {(x, y): set([core_num, ...]), ...}, ...}\n        A dictionary from application to the set of cores it should be loaded\n        onto. The set of cores is given as a dictionary from chip to sets of\n        core numbers.\n    routing_tables : {(x, y): \\\n                      [:py:class:`~rig.routing_table.RoutingTableEntry`, \\\n                       ...], ...}\n        The generated routing tables. Provided as a dictionary from chip to a\n        list of routing table entries.\n    \"\"\"\n    warnings.warn(\"rig.place_and_route.wrapper is deprecated \"\n                  \"use rig.place_and_route.place_and_route_wrapper instead in \"\n                  \"new applications.\",\n                  DeprecationWarning)\n    constraints = constraints[:]\n\n    # Augment constraints with (historically) commonly used constraints\n    if reserve_monitor:\n        constraints.append(\n            ReserveResourceConstraint(core_resource, slice(0, 1)))\n    if align_sdram:\n        constraints.append(AlignResourceConstraint(sdram_resource, 4))\n\n    # Place/Allocate/Route\n    placements = place(vertices_resources, nets, machine, constraints,\n                       **place_kwargs)\n    allocations = allocate(vertices_resources, nets, machine, constraints,\n                           placements, **allocate_kwargs)\n    routes = route(vertices_resources, nets, machine, constraints, placements,\n                   allocations, core_resource, **route_kwargs)\n\n    # Build data-structures ready to feed to the machine loading functions\n    application_map = build_application_map(vertices_applications, placements,\n                                            allocations, core_resource)\n\n    # Build data-structures ready to feed to the machine loading functions\n    from rig.place_and_route.utils import build_routing_tables\n    routing_tables = build_routing_tables(routes, net_keys)\n\n    return placements, allocations, application_map, routing_tables", "response": "Deprecated wrapper that performs placement, allocation, and routing for the common case."}
{"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_region_for_chip(x, y, level=3):\n    shift = 6 - 2*level\n\n    bit = ((x >> shift) & 3) + 4*((y >> shift) & 3)  # bit in bits 15:0 to set\n\n    mask = 0xffff ^ ((4 << shift) - 1)  # in {0xfffc, 0xfff0, 0xffc0, 0xff00}\n    nx = x & mask  # The mask guarantees that bits 1:0 will be cleared\n    ny = y & mask  # The mask guarantees that bits 1:0 will be cleared\n\n    #        sig bits x | sig bits y |  2-bit level  | region select bits\n    region = (nx << 24) | (ny << 16) | (level << 16) | (1 << bit)\n    return region", "response": "Returns the region word for the given chip co-ordinates."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ngenerating a reduced set of flood fill parameters.", "response": "def compress_flood_fill_regions(targets):\n    \"\"\"Generate a reduced set of flood fill parameters.\n\n    Parameters\n    ----------\n    targets : {(x, y) : set([c, ...]), ...}\n        For each used chip a set of core numbers onto which an application\n        should be loaded. E.g., the output of\n        :py:func:`~rig.place_and_route.util.build_application_map` when indexed\n        by an application.\n\n    Yields\n    ------\n    (region, core mask)\n        Pair of integers which represent a region of a SpiNNaker machine and a\n        core mask of selected cores within that region for use in flood-filling\n        an application. 
`region` and `core_mask` are both integer\n representations of bit fields that are understood by SCAMP.\n\n The pairs are yielded in an order suitable for direct use with SCAMP's\n flood-fill core select (FFCS) method of loading.\n \"\"\"\n t = RegionCoreTree()\n\n for (x, y), cores in iteritems(targets):\n for p in cores:\n t.add_core(x, y, p)\n\n return sorted(t.get_regions_and_coremasks())"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_regions_and_coremasks(self):\n region_code = ((self.base_x << 24) | (self.base_y << 16) |\n (self.level << 16))\n\n # Generate core masks for any regions which are selected at this level\n # Create a mapping from subregion mask to core numbers\n subregions_cores = collections.defaultdict(lambda: 0x0)\n for core, subregions in enumerate(self.locally_selected):\n if subregions: # If any subregions are selected on this level\n subregions_cores[subregions] |= 1 << core\n\n # Order the locally selected items and then yield them\n for (subregions, coremask) in sorted(subregions_cores.items()):\n yield (region_code | subregions), coremask\n\n if self.level < 3:\n # Iterate through the subregions and recurse, we iterate through in\n # the order which ensures that anything we yield is in increasing\n # order.\n for i in (4*x + y for y in range(4) for x in range(4)):\n subregion = self.subregions[i]\n if subregion is not None:\n for (region, coremask) in \\\n subregion.get_regions_and_coremasks():\n yield (region, coremask)", "response": "Generate a set of ordered paired region and core masks."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef add_core(self, x, y, p):\n # Check that the co-ordinate is contained in this region\n if ((p < 0 or p > 17) or\n (x < self.base_x or x >= self.base_x + self.scale) or\n (y < self.base_y or y >= self.base_y + self.scale)):\n raise ValueError((x, y, p))\n\n # 
Determine which subregion this refers to\n subregion = ((x >> self.shift) & 0x3) + 4*((y >> self.shift) & 0x3)\n\n if self.level == 3:\n # If level-3 then we just add to the locally selected regions\n self.locally_selected[p] |= 1 << subregion\n elif not self.locally_selected[p] & (1 << subregion):\n # If the subregion isn't in `locally_selected` for this core number\n # then add the core to the subregion.\n if self.subregions[subregion] is None:\n # \"Lazy\": if the subtree doesn't exist yet then add it\n base_x = int(self.base_x + (self.scale / 4) * (subregion % 4))\n base_y = int(self.base_y + (self.scale / 4) * (subregion // 4))\n self.subregions[subregion] = RegionCoreTree(base_x, base_y,\n self.level + 1)\n\n # If the subregion reports that all of its subregions for this core\n # are selected then we need to add it to `locally_selected`.\n if self.subregions[subregion].add_core(x, y, p):\n self.locally_selected[p] |= 1 << subregion\n\n # If all subregions are selected for this core and this is not the top\n # level in the hierarchy then return True after emptying the local\n # selection for the core.\n if self.locally_selected[p] == 0xffff and self.level != 0:\n self.locally_selected[p] = 0x0\n return True\n else:\n return False", "response": "Add a new core to the region tree."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef send_scp(self, *args, **kwargs):\n # Retrieve contextual arguments from the keyword arguments. 
The\n # context system ensures that these values are present.\n cabinet = kwargs.pop(\"cabinet\")\n frame = kwargs.pop(\"frame\")\n board = kwargs.pop(\"board\")\n return self._send_scp(cabinet, frame, board, *args, **kwargs)", "response": "Transmit an SCP Packet to a specific board."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _send_scp(self, cabinet, frame, board, *args, **kwargs):\n # Find the connection which best matches the specified coordinates,\n # preferring direct connections to a board when available.\n connection = self.connections.get((cabinet, frame, board), None)\n if connection is None:\n connection = self.connections.get((cabinet, frame), None)\n assert connection is not None, \\\n \"No connection available to ({}, {}, {})\".format(cabinet,\n frame,\n board)\n\n # Determine the size of packet we expect in return, this is usually the\n # size that we are informed we should expect by SCAMP/SARK or else is\n # the default.\n if self._scp_data_length is None:\n length = consts.SCP_SVER_RECEIVE_LENGTH_MAX\n else:\n length = self._scp_data_length\n\n return connection.send_scp(length, 0, 0, board, *args, **kwargs)", "response": "Send an SCP packet to the specified board."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_software_version(self, cabinet, frame, board):\n sver = self._send_scp(cabinet, frame, board, SCPCommands.sver)\n\n # Format the result\n # arg1\n code_block = (sver.arg1 >> 24) & 0xff\n frame_id = (sver.arg1 >> 16) & 0xff\n can_id = (sver.arg1 >> 8) & 0xff\n board_id = sver.arg1 & 0xff\n\n # arg2 (version field unpacked separately)\n buffer_size = (sver.arg2 & 0xffff)\n\n software_name, version, version_labels = \\\n unpack_sver_response_version(sver)\n\n return BMPInfo(code_block, frame_id, can_id, board_id, version,\n buffer_size, sver.arg3, software_name, version_labels)", "response": "Get the software version for a 
given BMP."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nsets the power of the specified Chip.", "response": "def set_power(self, state, cabinet, frame, board,\n delay=0.0, post_power_on_delay=5.0):\n \"\"\"Control power to the SpiNNaker chips and FPGAs on a board.\n\n Parameters\n ----------\n state : bool\n True for power on, False for power off.\n board : int or iterable\n Specifies the board to control the power of. This may also be an\n iterable of multiple boards (in the same frame). The command will\n actually be sent to board 0, regardless of the set of boards\n specified.\n delay : float\n Number of seconds delay between power state changes of different\n boards.\n post_power_on_delay : float\n Number of seconds for this command to block once the power on\n command has been carried out. A short delay (default) is useful at\n this point since power-supplies and SpiNNaker chips may still be\n coming on line immediately after the power-on command is sent.\n\n .. warning::\n If the set of boards to be powered-on does not include board 0,\n this timeout should be extended by 2-3 seconds. This is due to\n the fact that BMPs immediately acknowledge power-on commands to\n boards other than board 0 but wait for the FPGAs to be loaded\n before responding when board 0 is powered on.\n \"\"\"\n if isinstance(board, int):\n boards = [board]\n else:\n boards = list(board)\n\n arg1 = int(delay * 1000) << 16 | (1 if state else 0)\n arg2 = sum(1 << b for b in boards)\n\n # Allow additional time for response when powering on (since FPGAs must\n # be loaded). Also, always send the command to board 0. This is\n # required by the BMPs which do not correctly handle the power-on\n # command being sent to anything but board 0. 
Though this is a bug in\n # the BMP firmware, it is considered sufficiently easy to work-around\n # that no fix is planned.\n self._send_scp(cabinet, frame, 0, SCPCommands.power,\n arg1=arg1, arg2=arg2,\n timeout=consts.BMP_POWER_ON_TIMEOUT if state else 0.0,\n expected_args=0)\n if state:\n time.sleep(post_power_on_delay)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nsetting or toggle the state of an LED.", "response": "def set_led(self, led, action=None,\n cabinet=Required, frame=Required, board=Required):\n \"\"\"Set or toggle the state of an LED.\n\n .. note::\n At the time of writing, LED 7 is only set by the BMP on start-up to\n indicate that the watchdog timer reset the board. After this point,\n the LED is available for use by applications.\n\n Parameters\n ----------\n led : int or iterable\n Number of the LED or an iterable of LEDs to set the state of (0-7)\n action : bool or None\n State to set the LED to. True for on, False for off, None to\n toggle (default).\n board : int or iterable\n Specifies the board to control the LEDs of. This may also be an\n iterable of multiple boards (in the same frame). 
The command will\n actually be sent to the first board in the iterable.\n \"\"\"\n if isinstance(led, int):\n leds = [led]\n else:\n leds = led\n if isinstance(board, int):\n boards = [board]\n else:\n boards = list(board)\n board = boards[0]\n\n # LED setting actions\n arg1 = sum(LEDAction.from_bool(action) << (led * 2) for led in leds)\n\n # Bitmask of boards to control\n arg2 = sum(1 << b for b in boards)\n\n self._send_scp(cabinet, frame, board, SCPCommands.led, arg1=arg1,\n arg2=arg2, expected_args=0)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreads the value of an FPGA register.", "response": "def read_fpga_reg(self, fpga_num, addr, cabinet, frame, board):\n \"\"\"Read the value of an FPGA (SPI) register.\n\n See the SpI/O project's spinnaker_fpga design's `README`_ for a listing\n of FPGA registers. The SpI/O project can be found on GitHub at:\n https://github.com/SpiNNakerManchester/spio/\n\n .. _README: https://github.com/SpiNNakerManchester/spio/\\\n blob/master/designs/spinnaker_fpgas/README.md#spi-interface\n\n Parameters\n ----------\n fpga_num : int\n FPGA number (0, 1 or 2) to communicate with.\n addr : int\n Register address to read to (will be rounded down to the nearest\n 32-bit word boundary).\n\n Returns\n -------\n int\n The 32-bit value at that address.\n \"\"\"\n arg1 = addr & (~0x3)\n arg2 = 4 # Read a 32-bit value\n arg3 = fpga_num\n response = self._send_scp(cabinet, frame, board, SCPCommands.link_read,\n arg1=arg1, arg2=arg2, arg3=arg3,\n expected_args=0)\n return struct.unpack(\"= 128 # Otherwise shoving this data in is nasty\n\n buf = bytearray(boot_data)\n buf[BOOT_DATA_OFFSET:BOOT_DATA_OFFSET+BOOT_DATA_LENGTH] = \\\n struct_packed[:BOOT_DATA_LENGTH]\n assert len(buf) < DTCM_SIZE # Assert that we fit in DTCM\n boot_data = bytes(buf)\n\n # Create a socket to communicate with the board\n sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)\n sock.connect((hostname, 
boot_port))\n\n # Transmit the boot data as a series of SDP packets. First determine\n # how many blocks must be sent and transmit that, then transmit each\n # block.\n n_blocks = (len(buf) + BOOT_BYTE_SIZE - 1) // BOOT_BYTE_SIZE\n assert n_blocks <= BOOT_MAX_BLOCKS\n\n boot_packet(sock, BootCommand.start, arg3=n_blocks - 1)\n time.sleep(boot_delay)\n\n block = 0\n while len(boot_data) > 0:\n # Get the data to transmit\n data, boot_data = (boot_data[:BOOT_BYTE_SIZE],\n boot_data[BOOT_BYTE_SIZE:])\n\n # Transmit, delay and increment the block count\n a1 = ((BOOT_WORD_SIZE - 1) << 8) | block\n boot_packet(sock, BootCommand.send_block, a1, data=data)\n time.sleep(boot_delay)\n block += 1\n\n # Send the END command\n boot_packet(sock, BootCommand.end, 1)\n\n # Close the socket and give time to boot\n sock.close()\n time.sleep(post_boot_delay)\n\n return structs"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ncreating and transmit a boot packet to the machine.", "response": "def boot_packet(sock, cmd, arg1=0, arg2=0, arg3=0, data=b\"\"):\n \"\"\"Create and transmit a packet to boot the machine.\n\n Parameters\n ----------\n sock : :py:class:`~socket.socket`\n Connected socket to use to transmit the packet.\n cmd : int\n arg1 : int\n arg2 : int\n arg3 : int\n data : :py:class:`bytes`\n Optional data to include in the packet.\n \"\"\"\n PROTOCOL_VERSION = 1\n\n # Generate the (network-byte order) header\n header = struct.pack(\"!H4I\", PROTOCOL_VERSION, cmd, arg1, arg2, arg3)\n\n assert len(data) % 4 == 0 # Data should always be word-sized\n fdata = b\"\"\n\n # Format the data from little- to network-/big-endian\n while len(data) > 0:\n word, data = (data[:4], data[4:])\n fdata += struct.pack(\"!I\", struct.unpack(\"= minimum_working"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nwrites a Metatab doc to a CSV file and update the Modified time", "response": "def write_doc(doc : MetapackDoc, 
mt_file=None):\n \"\"\"\n Write a Metatab doc to a CSV file, and update the Modified time\n :param doc:\n :param mt_file:\n :return:\n \"\"\"\n\n from rowgenerators import parse_app_url\n\n if not mt_file:\n mt_file = doc.ref\n\n add_giturl(doc)\n\n u = parse_app_url(mt_file)\n\n if u.scheme == 'file':\n doc.write(mt_file)\n return True\n else:\n return False"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nupdating the resource properties in the schema.", "response": "def update_resource_properties(r, orig_columns={}, force=False):\n \"\"\"Get descriptions and other properties from this, or upstream, packages, and add them to the schema. \"\"\"\n\n added = []\n\n schema_term = r.schema_term\n\n if not schema_term:\n warn(\"No schema term for \", r.name)\n return\n\n rg = r.raw_row_generator\n\n # Get columns information from the schema, or, if it is a package reference,\n # from the upstream schema\n\n upstream_columns = {e['name'].lower() if e['name'] else '': e for e in r.columns() or {}}\n\n # Just from the local schema\n schema_columns = {e['name'].lower() if e['name'] else '': e for e in r.schema_columns or {}}\n\n # Ask the generator if it can provide column descriptions and types\n generator_columns = {e['name'].lower() if e['name'] else '': e for e in rg.columns or {}}\n\n def get_col_value(col_name, value_name):\n\n v = None\n\n if not col_name:\n return None\n\n for d in [generator_columns, upstream_columns, orig_columns, schema_columns]:\n v_ = d.get(col_name.lower(), {}).get(value_name)\n if v_:\n v = v_\n\n return v\n\n\n\n # Look for new properties\n extra_properties = set()\n for d in [generator_columns, upstream_columns, orig_columns, schema_columns]:\n for k, v in d.items():\n for kk, vv in v.items():\n extra_properties.add(kk)\n\n # Remove the properties that are already accounted for\n extra_properties = extra_properties - {'pos', 'header', 'name', ''}\n\n # Add any extra properties, such as from upstream packages, 
to the schema.\n\n for ep in extra_properties:\n r.doc['Schema'].add_arg(ep)\n\n for c in schema_term.find('Table.Column'):\n\n for ep in extra_properties:\n t = c.get_or_new_child(ep)\n v = get_col_value(c.name, ep)\n if v:\n t.value = v\n added.append((c.name, ep, v))\n\n prt('Updated schema for {}. Set {} properties'.format(r.name, len(added)))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_config():\n from os import environ\n from os.path import expanduser\n from pathlib import Path\n import yaml\n\n def pexp(p):\n try:\n return Path(p).expanduser()\n except AttributeError:\n # python 3.4\n return Path(expanduser(p))\n\n paths = [environ.get(\"METAPACK_CONFIG\"), '~/.metapack.yaml', '/etc/metapack.yaml']\n\n for p in paths:\n\n if not p:\n continue\n\n p = pexp(p)\n\n if p.exists():\n with p.open() as f:\n config = yaml.safe_load(f)\n if not config:\n config = {}\n\n config['_loaded_from'] = str(p)\n return config\n\n return None", "response": "Return a configuration dict for the current user"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef find_csv_packages(m, downloader):\n from metapack.package import CsvPackageBuilder\n\n pkg_dir = m.package_root\n name = m.doc.get_value('Root.Name')\n\n package_path, cache_path = CsvPackageBuilder.make_package_path(pkg_dir, name)\n\n if package_path.exists():\n return open_package(package_path, downloader=downloader)", "response": "Locate the build CSV package which will have distributions if it was generated"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_provisioning_configuration(\n self, provisioning_configuration_id, custom_headers=None, raw=False, **operation_config):\n \"\"\"GetContinuousDeploymentOperation.\n\n :param provisioning_configuration_id:\n :type provisioning_configuration_id: str\n :param dict 
custom_headers: headers that will be added to the request\n :param bool raw: returns the direct response alongside the\n deserialized response\n :param operation_config: :ref:`Operation configuration\n overrides`.\n :rtype: :class:`ContinuousDeploymentOperation\n `\n :rtype: :class:`ClientRawResponse`\n if raw=true\n :raises:\n :class:`HttpOperationError`\n \"\"\"\n # Construct URL\n url = '/_apis/continuousdelivery/provisioningconfigurations/{provisioningConfigurationId}'\n path_format_arguments = {\n 'provisioningConfigurationId': self._serialize.url(\"provisioning_configuration_id\", provisioning_configuration_id, 'str')\n }\n url = self._client.format_url(url, **path_format_arguments)\n\n # Construct parameters\n query_parameters = {}\n\n # Construct headers\n header_parameters = {}\n header_parameters['Content-Type'] = 'application/json; charset=utf-8'\n if custom_headers:\n header_parameters.update(custom_headers)\n\n # Construct and send request\n request = self._client.get(url, query_parameters)\n response = self._client.send(request, header_parameters, **operation_config)\n if response.status_code not in [200]:\n print(\"GET\", request.url, file=stderr)\n print(\"response:\", response.status_code, file=stderr)\n print(response.text, file=stderr)\n raise HttpOperationError(self._deserialize, response)\n\n deserialized = None\n\n if response.status_code == 200:\n deserialized = self._deserialize('ProvisioningConfiguration', response)\n\n if raw:\n client_raw_response = ClientRawResponse(deserialized, response)\n return client_raw_response\n\n return deserialized", "response": "Get a specific resource from the ContinuousDelivery API."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef serialize_email_messages(messages: List[EmailMessage]):\n return [\n base64.b64encode(zlib.compress(pickle.dumps(m, protocol=4))).decode()\n for m in messages\n ]", "response": "Serialize EmailMessages to be passed as 
task argument."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef deserialize_email_messages(messages: List[str]):\n return [\n pickle.loads(zlib.decompress(base64.b64decode(m)))\n for m in messages\n ]", "response": "Deserialize EmailMessages passed as task argument."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nupdates the count number using the estimation of the unset ratio", "response": "def _estimate_count(self):\n \"\"\" Update the count number using the estimation of the unset ratio \"\"\"\n if self.estimate_z == 0:\n self.estimate_z = (1.0 / self.nbr_bits)\n self.estimate_z = min(self.estimate_z, 0.999999)\n self.count = int(-(self.nbr_bits / self.nbr_slices) * np.log(1 - self.estimate_z))"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef expiration_maintenance(self):\n if self.cellarray[self.refresh_head] != 0:\n self.cellarray[self.refresh_head] -= 1\n self.refresh_head = (self.refresh_head + 1) % self.nbr_bits", "response": "Decrement cell value if not zero\n This maintenance process need to execute each self. 
compute_refresh_time"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nbatches version of expiration maintenance dev", "response": "def batched_expiration_maintenance_dev(self, elapsed_time):\n \"\"\" Batched version of expiration_maintenance() \"\"\"\n num_iterations = self.num_batched_maintenance(elapsed_time)\n for i in range(num_iterations):\n self.expiration_maintenance()"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nbatches version of expiration_maintenance", "response": "def batched_expiration_maintenance(self, elapsed_time):\n \"\"\" Batched version of expiration_maintenance()\n Cython version\n \"\"\"\n num_iterations = self.num_batched_maintenance(elapsed_time)\n self.refresh_head, nonzero = maintenance(self.cellarray, self.nbr_bits, num_iterations, self.refresh_head)\n if num_iterations != 0:\n self.estimate_z = float(nonzero) / float(num_iterations)\n self._estimate_count()\n processed_interval = num_iterations * self.compute_refresh_time()\n return processed_interval"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef compute_refresh_time(self):\n if self.z == 0:\n self.z = 1E-10\n s = float(self.expiration) * (1.0/(self.nbr_bits)) * (1.0/(self.counter_init - 1 + (1.0/(self.z * (self.nbr_slices + 1)))))\n return s", "response": "Compute the refresh time for the given expiration delay"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngenerating a new network of the same order as the current network.", "response": "def generate():\n ################################################################################\n ### Build new network\n\n net = Network(id='Example7_Brunel2000')\n net.notes = 'Example 7: based on network of Brunel 2000'\n\n net.parameters = { 'g': 4, \n 'eta': 1, \n 'order': 5,\n 'epsilon': 0.1,\n 'J': 0.1,\n 'delay': 1.5,\n 'tauMem': 20.0,\n 'tauSyn': 0.1,\n 'tauRef': 2.0,\n 
'U0': 0.0,\n 'theta': 20.0}\n\n cell = Cell(id='ifcell', pynn_cell='IF_curr_alpha')\n\n\n cell.parameters = { 'tau_m': 'tauMem', \n 'tau_refrac': 'tauRef',\n 'v_rest': 'U0',\n 'v_reset': 'U0',\n 'v_thresh': 'theta',\n 'cm': 0.001,\n \"i_offset\": 0}\n\n #cell = Cell(id='hhcell', neuroml2_source_file='test_files/hhcell.cell.nml')\n net.cells.append(cell)\n\n expoisson = Cell(id='expoisson', pynn_cell='SpikeSourcePoisson')\n expoisson.parameters = { 'rate': '1000 * (eta*theta/(J*4*order*epsilon*tauMem)) * (4*order*epsilon)',\n 'start': 0,\n 'duration': 1e9}\n net.cells.append(expoisson)\n\n\n '''\n input_source = InputSource(id='iclamp0', \n pynn_input='DCSource', \n parameters={'amplitude':0.002, 'start':100., 'stop':900.})\n\n input_source = InputSource(id='poissonFiringSyn', \n neuroml2_input='poissonFiringSynapse',\n parameters={'average_rate':\"eta\", 'synapse':\"ampa\", 'spike_target':\"./ampa\"})\n\n\n\n net.input_sources.append(input_source)'''\n\n pE = Population(id='Epop', size='4*order', component=cell.id, properties={'color':'1 0 0'})\n pEpoisson = Population(id='Einput', size='4*order', component=expoisson.id, properties={'color':'.5 0 0'})\n pI = Population(id='Ipop', size='1*order', component=cell.id, properties={'color':'0 0 1'})\n\n net.populations.append(pE)\n net.populations.append(pEpoisson)\n net.populations.append(pI)\n\n\n net.synapses.append(Synapse(id='ampa', \n pynn_receptor_type='excitatory', \n pynn_synapse_type='curr_alpha', \n parameters={'tau_syn':0.1}))\n net.synapses.append(Synapse(id='gaba', \n pynn_receptor_type='inhibitory', \n pynn_synapse_type='curr_alpha', \n parameters={'tau_syn':0.1}))\n\n\n net.projections.append(Projection(id='projEinput',\n presynaptic=pEpoisson.id, \n postsynaptic=pE.id,\n synapse='ampa',\n delay=2,\n weight=0.02,\n one_to_one_connector=OneToOneConnector()))\n ''' \n net.projections.append(Projection(id='projEE',\n presynaptic=pE.id, \n postsynaptic=pE.id,\n synapse='ampa',\n delay=2,\n weight=0.002,\n 
random_connectivity=RandomConnectivity(probability=.5)))'''\n\n net.projections.append(Projection(id='projEI',\n presynaptic=pE.id, \n postsynaptic=pI.id,\n synapse='ampa',\n delay=2,\n weight=0.02,\n random_connectivity=RandomConnectivity(probability=.5)))\n '''\n net.projections.append(Projection(id='projIE',\n presynaptic=pI.id, \n postsynaptic=pE.id,\n synapse='gaba',\n delay=2,\n weight=0.02,\n random_connectivity=RandomConnectivity(probability=.5)))\n\n net.inputs.append(Input(id='stim',\n input_source=input_source.id,\n population=pE.id,\n percentage=50))'''\n\n #print(net)\n #print(net.to_json())\n new_file = net.to_json_file('%s.json'%net.id)\n\n\n ################################################################################\n ### Build Simulation object & save as JSON\n\n sim = Simulation(id='SimExample7',\n network=new_file,\n duration='1000',\n dt='0.025',\n seed= 123,\n recordTraces={pE.id:'*',pI.id:'*'},\n recordSpikes={'all':'*'})\n\n sim.to_json_file()\n \n return sim, net"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef allocate(vertices_resources, nets, machine, constraints, placements):\n allocation = {}\n\n # Globally reserved resource ranges {resource, [slice, ...], ...}\n globally_reserved = defaultdict(list)\n # Locally reserved resource ranges {(x, y): {resource, [slice, ...], ...}}\n locally_reserved = defaultdict(lambda: defaultdict(list))\n\n # Alignment of each resource\n alignments = defaultdict(lambda: 1)\n\n # Collect constraints\n for constraint in constraints:\n if isinstance(constraint, ReserveResourceConstraint):\n if constraint.location is None:\n globally_reserved[constraint.resource].append(\n constraint.reservation)\n else:\n locally_reserved[constraint.location][\n constraint.resource].append(constraint.reservation)\n elif isinstance(constraint, AlignResourceConstraint):\n alignments[constraint.resource] = constraint.alignment\n\n # A dictionary {(x, y): [vertex, ...], ...}\n 
chip_contents = defaultdict(list)\n for vertex, xy in iteritems(placements):\n chip_contents[xy].append(vertex)\n\n for xy, chip_vertices in iteritems(chip_contents):\n # Index of the next free resource in the current chip\n resource_pointers = {resource: 0\n for resource in machine.chip_resources}\n\n for vertex in chip_vertices:\n vertex_allocation = {}\n\n # Make allocations, advancing resource pointers\n for resource, requirement in iteritems(vertices_resources[vertex]):\n proposed_allocation = None\n proposal_overlaps = True\n while proposal_overlaps:\n # Check that the proposed allocation doesn't overlap a\n # reserved area.\n start = align(resource_pointers[resource],\n alignments[resource])\n proposed_allocation = slice(start, start + requirement)\n proposal_overlaps = False\n if proposed_allocation.stop > machine[xy][resource]:\n raise InsufficientResourceError(\n \"{} over-allocated on chip {}\".format(resource,\n xy))\n for reservation in globally_reserved[resource]:\n if slices_overlap(proposed_allocation, reservation):\n resource_pointers[resource] = reservation.stop\n proposal_overlaps = True\n local_reservations \\\n = locally_reserved.get(xy, {}).get(resource, [])\n for reservation in local_reservations:\n if slices_overlap(proposed_allocation, reservation):\n resource_pointers[resource] = reservation.stop\n proposal_overlaps = True\n\n # Getting here means the proposed allocation is not blocked\n # by any reservations\n vertex_allocation[resource] = proposed_allocation\n resource_pointers[resource] = proposed_allocation.stop\n\n allocation[vertex] = vertex_allocation\n\n return allocation", "response": "Allocate resources to vertices on cores arbitrarily using a simple greedy algorithm."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ncreating the Nagios related resources for the given port.", "response": "def create(self, port, qos_policy):\n \"\"\"Apply QoS rules on port for the first time.\n\n :param port: port 
object.\n :param qos_policy: the QoS policy to be applied on port.\n \"\"\"\n LOG.info(\"Setting QoS policy %(qos_policy)s on port %(port)s\",\n dict(qos_policy=qos_policy, port=port))\n\n policy_data = self._get_policy_values(qos_policy)\n self._utils.set_port_qos_rule(port[\"port_id\"], policy_data)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ndeletes the QoS rules from the specified port.", "response": "def delete(self, port, qos_policy=None):\n \"\"\"Remove QoS rules from port.\n\n :param port: port object.\n :param qos_policy: the QoS policy to be removed from port.\n \"\"\"\n LOG.info(\"Deleting QoS policy %(qos_policy)s on port %(port)s\",\n dict(qos_policy=qos_policy, port=port))\n\n self._utils.remove_port_qos_rule(port[\"port_id\"])"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef add(self, func: Callable, name: Optional[str]=None,\n queue: Optional[str]=None, max_retries: Optional[Number]=None,\n periodicity: Optional[timedelta]=None):\n \"\"\"Register a task function.\n\n :arg func: a callable to be executed\n :arg name: name of the task, used later to schedule jobs\n :arg queue: queue of the task, the default is used if not provided\n :arg max_retries: maximum number of retries, the default is used if\n not provided\n :arg periodicity: for periodic tasks, delay between executions as a\n timedelta\n\n >>> tasks = Tasks()\n >>> tasks.add(lambda x: x, name='do_nothing')\n \"\"\"\n if not name:\n raise ValueError('Each Spinach task needs a name')\n if name in self._tasks:\n raise ValueError('A task named {} already exists'.format(name))\n\n if queue is None:\n if self.queue:\n queue = self.queue\n else:\n queue = const.DEFAULT_QUEUE\n\n if max_retries is None:\n if self.max_retries:\n max_retries = self.max_retries\n else:\n max_retries = const.DEFAULT_MAX_RETRIES\n\n if periodicity is None:\n periodicity = self.periodicity\n\n if queue and 
queue.startswith('_'):\n raise ValueError('Queues starting with \"_\" are reserved by '\n 'Spinach for internal use')\n\n self._tasks[name] = Task(func, name, queue, max_retries, periodicity)", "response": "Register a task function."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef schedule(self, task: Schedulable, *args, **kwargs):\n self._require_attached_tasks()\n self._spin.schedule(task, *args, **kwargs)", "response": "Schedule a job to be executed as soon as possible."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef schedule_at(self, task: Schedulable, at: datetime, *args, **kwargs):\n self._require_attached_tasks()\n self._spin.schedule_at(task, at, *args, **kwargs)", "response": "Schedule a job to be executed in the future."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nadds a job to be executed ASAP to the batch.", "response": "def schedule(self, task: Schedulable, *args, **kwargs):\n \"\"\"Add a job to be executed ASAP to the batch.\n\n :arg task: the task or its name to execute in the background\n :arg args: args to be passed to the task function\n :arg kwargs: kwargs to be passed to the task function\n \"\"\"\n at = datetime.now(timezone.utc)\n self.schedule_at(task, at, *args, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef schedule_at(self, task: Schedulable, at: datetime, *args, **kwargs):\n self.jobs_to_create.append((task, at, args, kwargs))", "response": "Add a job to be executed in the future"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef unpack_sver_response_version(packet):\n software_name = packet.data.decode(\"utf-8\")\n\n legacy_version_field = packet.arg2 >> 16\n\n if legacy_version_field != 0xFFFF:\n # Legacy version encoding: just encoded in decimal 
fixed-point in the\n # integer.\n major = legacy_version_field // 100\n minor = legacy_version_field % 100\n patch = 0\n labels = \"\"\n else:\n # Semantic Version encoding: packed after the null-terminator of the\n # software name in the version string.\n software_name, _, version_number = software_name.partition(\"\\0\")\n\n match = VERSION_NUMBER_REGEX.match(version_number.rstrip(\"\\0\"))\n assert match, \"Malformed version number: {}\".format(version_number)\n\n major = int(match.group(1))\n minor = int(match.group(2))\n patch = int(match.group(3))\n labels = match.group(4) or \"\"\n\n return (software_name.rstrip(\"\\0\"), (major, minor, patch), labels)", "response": "Unpack the version - related parts of an sver response packet into a new sver object."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _locate_file(f, base_dir):\n if base_dir == None:\n return f\n file_name = os.path.join(base_dir, f)\n real = os.path.realpath(file_name)\n #print_v('- Located %s at %s'%(f,real))\n return real", "response": "Utility method for finding full path to a filename as string"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef generate_network(nl_model, \n handler, \n seed=1234, \n always_include_props=False,\n include_connections=True,\n include_inputs=True,\n base_dir=None):\n \"\"\"\n Generate the network model as described in NeuroMLlite in a specific handler,\n e.g. 
NeuroMLHandler, PyNNHandler, etc.\n \"\"\"\n \n pop_locations = {}\n cell_objects = {}\n synapse_objects = {}\n \n print_v(\"Starting net generation for %s%s...\" % (nl_model.id, \n ' (base dir: %s)' % base_dir if base_dir else ''))\n rng = random.Random(seed)\n \n if nl_model.network_reader:\n \n exec('from neuromllite.%s import %s' % (nl_model.network_reader.type, nl_model.network_reader.type))\n \n exec('network_reader = %s()' % (nl_model.network_reader.type))\n network_reader.parameters = nl_model.network_reader.parameters\n \n network_reader.parse(handler)\n pop_locations = network_reader.get_locations()\n \n else:\n notes = \"Generated network: %s\" % nl_model.id\n notes += \"\\n Generation seed: %i\" % (seed)\n if nl_model.parameters:\n notes += \"\\n NeuroMLlite parameters: \" \n for p in nl_model.parameters:\n notes += \"\\n %s = %s\" % (p, nl_model.parameters[p])\n handler.handle_document_start(nl_model.id, notes)\n temperature = '%sdegC' % nl_model.temperature if nl_model.temperature else None \n handler.handle_network(nl_model.id, nl_model.notes, temperature=temperature)\n \n nml2_doc_temp = _extract_pynn_components_to_neuroml(nl_model)\n \n for c in nl_model.cells:\n \n if c.neuroml2_source_file:\n from pyneuroml import pynml\n nml2_doc = pynml.read_neuroml2_file(_locate_file(c.neuroml2_source_file, base_dir), \n include_includes=True)\n cell_objects[c.id] = nml2_doc.get_by_id(c.id)\n \n if c.pynn_cell:\n cell_objects[c.id] = nml2_doc_temp.get_by_id(c.id)\n \n \n for s in nl_model.synapses:\n if s.neuroml2_source_file:\n from pyneuroml import pynml\n nml2_doc = pynml.read_neuroml2_file(_locate_file(s.neuroml2_source_file, base_dir), \n include_includes=True)\n synapse_objects[s.id] = nml2_doc.get_by_id(s.id)\n \n if s.pynn_synapse:\n synapse_objects[s.id] = nml2_doc_temp.get_by_id(s.id)\n \n \n for p in nl_model.populations:\n \n size = evaluate(p.size, nl_model.parameters)\n properties = p.properties if p.properties else {}\n if p.random_layout:\n 
properties['region'] = p.random_layout.region\n \n if not p.random_layout and not p.single_location and not always_include_props:\n \n # If there are no positions (abstract network), and \n # is added to , jLems doesn't like it... (it has difficulty \n # interpreting pop0[0]/v, etc.)\n # So better not to give properties...\n properties = {} \n \n if p.notes:\n handler.handle_population(p.id, \n p.component, \n size, \n cell_objects[p.component] if p.component in cell_objects else None,\n properties=properties,\n notes=p.notes)\n else:\n handler.handle_population(p.id, \n p.component, \n size, \n cell_objects[p.component] if p.component in cell_objects else None,\n properties=properties)\n \n pop_locations[p.id] = np.zeros((size, 3))\n \n for i in range(size):\n if p.random_layout:\n region = nl_model.get_child(p.random_layout.region, 'regions')\n \n x = region.x + rng.random() * region.width\n y = region.y + rng.random() * region.height\n z = region.z + rng.random() * region.depth\n pop_locations[p.id][i] = (x, y, z)\n\n handler.handle_location(i, p.id, p.component, x, y, z)\n \n if p.single_location:\n loc = p.single_location.location\n x = loc.x\n y = loc.y\n z = loc.z\n pop_locations[p.id][i] = (x, y, z)\n\n handler.handle_location(i, p.id, p.component, x, y, z)\n \n \n if hasattr(handler, 'finalise_population'):\n handler.finalise_population(p.id)\n \n if include_connections:\n for p in nl_model.projections:\n\n type = p.type if p.type else 'projection'\n\n handler.handle_projection(p.id, \n p.presynaptic, \n p.postsynaptic, \n p.synapse,\n synapse_obj=synapse_objects[p.synapse] if p.synapse in synapse_objects else None,\n pre_synapse_obj=synapse_objects[p.pre_synapse] if p.pre_synapse in synapse_objects else None,\n type=type)\n\n delay = p.delay if p.delay else 0\n weight = p.weight if p.weight else 1\n\n conn_count = 0\n if p.random_connectivity:\n for pre_i in range(len(pop_locations[p.presynaptic])):\n for post_i in 
range(len(pop_locations[p.postsynaptic])):\n flip = rng.random()\n #print(\"Is cell %i conn to %i, prob %s - %s\"%(pre_i, post_i, flip, p.random_connectivity.probability))\n if flip < p.random_connectivity.probability:\n weight = evaluate(weight, nl_model.parameters)\n delay = evaluate(delay, nl_model.parameters)\n #print_v(\"Adding connection %i with weight: %s, delay: %s\"%(conn_count, weight, delay))\n handler.handle_connection(p.id, \n conn_count, \n p.presynaptic, \n p.postsynaptic, \n p.synapse, \\\n pre_i, \\\n post_i, \\\n preSegId=0, \\\n preFract=0.5, \\\n postSegId=0, \\\n postFract=0.5, \\\n delay=delay, \\\n weight=weight)\n conn_count += 1\n \n if p.convergent_connectivity:\n \n for post_i in range(len(pop_locations[p.postsynaptic])):\n \n for count in range(int(p.convergent_connectivity.num_per_post)):\n found = False\n while not found:\n pre_i = int(rng.random()*len(pop_locations[p.presynaptic]))\n if p.presynaptic==p.postsynaptic and pre_i==post_i:\n found=False\n else:\n found=True\n\n weight = evaluate(weight, nl_model.parameters)\n delay = evaluate(delay, nl_model.parameters)\n print_v(\"Adding connection %i (%i->%i; %i to %s of post) with weight: %s, delay: %s\"%(conn_count, pre_i, post_i, count, p.convergent_connectivity.num_per_post, weight, delay))\n handler.handle_connection(p.id, \n conn_count, \n p.presynaptic, \n p.postsynaptic, \n p.synapse, \\\n pre_i, \\\n post_i, \\\n preSegId=0, \\\n preFract=0.5, \\\n postSegId=0, \\\n postFract=0.5, \\\n delay=delay, \\\n weight=weight)\n conn_count += 1\n\n elif p.one_to_one_connector:\n for i in range(min(len(pop_locations[p.presynaptic]), len(pop_locations[p.postsynaptic]))):\n weight = evaluate(weight, nl_model.parameters)\n delay = evaluate(delay, nl_model.parameters)\n #print_v(\"Adding connection %i with weight: %s, delay: %s\"%(conn_count, weight, delay))\n handler.handle_connection(p.id, \n conn_count, \n p.presynaptic, \n p.postsynaptic, \n p.synapse, \\\n i, \\\n i, \\\n preSegId=0, 
\\\n preFract=0.5, \\\n postSegId=0, \\\n postFract=0.5, \\\n delay=delay, \\\n weight=weight)\n conn_count += 1\n\n handler.finalise_projection(p.id, \n p.presynaptic, \n p.postsynaptic, \n p.synapse)\n \n if include_inputs: \n for input in nl_model.inputs:\n\n handler.handle_input_list(input.id, \n input.population, \n input.input_source, \n size=0, \n input_comp_obj=None)\n\n input_count = 0 \n for i in range(len(pop_locations[input.population])):\n flip = rng.random()\n weight = input.weight if input.weight else 1\n if flip * 100. < input.percentage:\n \n number_per_cell = evaluate(input.number_per_cell, nl_model.parameters) if input.number_per_cell else 1\n \n for j in range(number_per_cell):\n handler.handle_single_input(input.id, \n input_count, \n i,\n weight=evaluate(weight, nl_model.parameters))\n input_count += 1\n\n handler.finalise_input_source(input.id)\n \n if hasattr(handler, 'finalise_document'):\n handler.finalise_document()", "response": "Generate a network from a NeuroML model."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncheck if any of the arguments to generate or run are passed to the main method.", "response": "def check_to_generate_or_run(argv, sim):\n \"\"\"\n Useful method for calling in main method after network and simulation are \n generated, to handle some standard export options like -jnml, -graph etc.\n \"\"\"\n \n print_v(\"Checking arguments: %s to see whether anything should be run in simulation %s (net: %s)...\" % \n (argv, sim.id, sim.network))\n \n if len(argv)==1:\n print_v(\"No arguments found. 
Currently supported export formats:\")\n print_v(\" -nml | -nmlh5 | -jnml | -jnmlnrn | -jnmlnetpyne | -netpyne | -pynnnrn \"+\\\n \"| -pynnnest | -pynnbrian | -pynnneuroml | -sonata | -matrix[1-2] | -graph[1-6 n/d/f/c]\")\n \n if '-pynnnest' in argv:\n generate_and_run(sim, simulator='PyNN_NEST')\n\n elif '-pynnnrn' in argv:\n generate_and_run(sim, simulator='PyNN_NEURON')\n\n elif '-pynnbrian' in argv:\n generate_and_run(sim, simulator='PyNN_Brian')\n\n elif '-jnml' in argv:\n generate_and_run(sim, simulator='jNeuroML')\n\n elif '-jnmlnrn' in argv:\n generate_and_run(sim, simulator='jNeuroML_NEURON')\n\n elif '-netpyne' in argv:\n generate_and_run(sim, simulator='NetPyNE')\n\n elif '-pynnneuroml' in argv:\n generate_and_run(sim, simulator='PyNN_NeuroML')\n\n elif '-sonata' in argv:\n generate_and_run(sim, simulator='sonata')\n \n elif '-nml' in argv or '-neuroml' in argv:\n \n network = load_network_json(sim.network)\n generate_neuroml2_from_network(network, validate=True)\n \n elif '-nmlh5' in argv or '-neuromlh5' in argv:\n \n network = load_network_json(sim.network)\n generate_neuroml2_from_network(network, validate=True, format='hdf5')\n \n else:\n for a in argv:\n \n if '-jnmlnetpyne' in a:\n num_processors = 1\n if len(a)>len('-jnmlnetpyne'):\n num_processors = int(a[12:])\n generate_and_run(sim, simulator='jNeuroML_NetPyNE',num_processors=num_processors)\n elif 'graph' in a: # e.g. -graph3c\n generate_and_run(sim, simulator=a[1:]) # Will not \"run\" obviously...\n elif 'matrix' in a: # e.g. -matrix2\n generate_and_run(sim, simulator=a[1:])"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nextracts the equivalent elements of the cell synapses and inputs from a NeuroML document.", "response": "def _extract_pynn_components_to_neuroml(nl_model, nml_doc=None):\n \"\"\"\n Parse the NeuroMLlite description for cell, synapses and inputs described as \n PyNN elements (e.g. 
IF_cond_alpha, DCSource) and parameters, and convert \n these to the equivalent elements in a NeuroMLDocument\n \"\"\"\n \n if nml_doc == None:\n from neuroml import NeuroMLDocument\n nml_doc = NeuroMLDocument(id=\"temp\")\n \n for c in nl_model.cells:\n \n if c.pynn_cell:\n \n if nml_doc.get_by_id(c.id) == None:\n import pyNN.neuroml\n cell_params = c.parameters if c.parameters else {}\n #print('------- %s: %s' % (c, cell_params))\n for p in cell_params:\n cell_params[p] = evaluate(cell_params[p], nl_model.parameters)\n #print('====== %s: %s' % (c, cell_params))\n for proj in nl_model.projections:\n\n synapse = nl_model.get_child(proj.synapse, 'synapses')\n post_pop = nl_model.get_child(proj.postsynaptic, 'populations')\n if post_pop.component == c.id:\n #print(\"--------- Cell %s in post pop %s of %s uses %s\"%(c.id,post_pop.id, proj.id, synapse))\n\n if synapse.pynn_receptor_type == 'excitatory':\n post = '_E'\n elif synapse.pynn_receptor_type == 'inhibitory':\n post = '_I'\n for p in synapse.parameters:\n cell_params['%s%s' % (p, post)] = synapse.parameters[p]\n\n temp_cell = eval('pyNN.neuroml.%s(**cell_params)' % c.pynn_cell)\n if c.pynn_cell != 'SpikeSourcePoisson':\n temp_cell.default_initial_values['v'] = temp_cell.parameter_space['v_rest'].base_value\n\n cell_id = temp_cell.add_to_nml_doc(nml_doc, None)\n cell = nml_doc.get_by_id(cell_id)\n cell.id = c.id\n \n for s in nl_model.synapses:\n if nml_doc.get_by_id(s.id) == None:\n \n if s.pynn_synapse_type and s.pynn_receptor_type:\n import neuroml\n \n if s.pynn_synapse_type == 'cond_exp':\n syn = neuroml.ExpCondSynapse(id=s.id, tau_syn=s.parameters['tau_syn'], e_rev=s.parameters['e_rev'])\n nml_doc.exp_cond_synapses.append(syn)\n elif s.pynn_synapse_type == 'cond_alpha':\n syn = neuroml.AlphaCondSynapse(id=s.id, tau_syn=s.parameters['tau_syn'], e_rev=s.parameters['e_rev'])\n nml_doc.alpha_cond_synapses.append(syn)\n elif s.pynn_synapse_type == 'curr_exp':\n syn = neuroml.ExpCurrSynapse(id=s.id, 
tau_syn=s.parameters['tau_syn'])\n nml_doc.exp_curr_synapses.append(syn)\n elif s.pynn_synapse_type == 'curr_alpha':\n syn = neuroml.AlphaCurrSynapse(id=s.id, tau_syn=s.parameters['tau_syn'])\n nml_doc.alpha_curr_synapses.append(syn)\n \n for i in nl_model.input_sources:\n \n #if nml_doc.get_by_id(i.id) == None:\n \n if i.pynn_input:\n import pyNN.neuroml\n input_params = i.parameters if i.parameters else {}\n exec('input__%s = pyNN.neuroml.%s(**input_params)' % (i.id, i.pynn_input))\n exec('temp_input = input__%s' % i.id)\n pg_id = temp_input.add_to_nml_doc(nml_doc, None)\n #for pp in nml_doc.pulse_generators:\n # print('PG: %s: %s'%(pp,pp.id))\n pg = nml_doc.get_by_id(pg_id)\n pg.id = i.id\n \n return nml_doc"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ngenerate and save a NeuroML2 file from a NeuroML model.", "response": "def generate_neuroml2_from_network(nl_model, \n nml_file_name=None, \n print_summary=True, \n seed=1234, \n format='xml', \n base_dir=None,\n copy_included_elements=False,\n target_dir=None,\n validate=False):\n \"\"\"\n Generate and save NeuroML2 file (in either XML or HDF5 format) from the \n NeuroMLlite description\n \"\"\"\n\n print_v(\"Generating NeuroML2 for %s%s...\" % (nl_model.id, ' (base dir: %s; target dir: %s)' \n % (base_dir, target_dir) if base_dir or target_dir else ''))\n \n import neuroml\n from neuroml.hdf5.NetworkBuilder import NetworkBuilder\n\n neuroml_handler = NetworkBuilder()\n\n generate_network(nl_model, neuroml_handler, seed=seed, base_dir=base_dir)\n\n nml_doc = neuroml_handler.get_nml_doc()\n \n for i in nl_model.input_sources:\n \n if nml_doc.get_by_id(i.id) == None:\n if i.neuroml2_source_file:\n \n incl = neuroml.IncludeType(_locate_file(i.neuroml2_source_file, base_dir))\n if not incl in nml_doc.includes:\n nml_doc.includes.append(incl)\n \n if i.neuroml2_input: \n input_params = i.parameters if i.parameters else {}\n \n # TODO make more generic...\n \n if 
i.neuroml2_input.lower() == 'pulsegenerator':\n input = neuroml.PulseGenerator(id=i.id)\n nml_doc.pulse_generators.append(input)\n \n elif i.neuroml2_input.lower() == 'pulsegeneratordl':\n input = neuroml.PulseGeneratorDL(id=i.id)\n nml_doc.pulse_generator_dls.append(input)\n\n elif i.neuroml2_input.lower() == 'poissonfiringsynapse':\n input = neuroml.PoissonFiringSynapse(id=i.id)\n nml_doc.poisson_firing_synapses.append(input)\n\n for p in input_params:\n exec('input.%s = \"%s\"' % (p, evaluate(input_params[p], nl_model.parameters)))\n \n for c in nl_model.cells:\n if c.neuroml2_source_file:\n \n incl = neuroml.IncludeType(_locate_file(c.neuroml2_source_file, base_dir))\n found_cell = False\n for cell in nml_doc.cells:\n if cell.id == c.id:\n nml_doc.cells.remove(cell) # Better to use imported cell file; will have channels, etc.\n nml_doc.includes.append(incl) \n found_cell = True\n \n if not found_cell:\n for p in nl_model.populations:\n if p.component == c.id:\n pass\n \n if not incl in nml_doc.includes:\n nml_doc.includes.append(incl) \n \n ''' Needed???\n if c.lems_source_file: \n incl = neuroml.IncludeType(_locate_file(c.lems_source_file, base_dir))\n if not incl in nml_doc.includes:\n nml_doc.includes.append(incl)'''\n \n if c.neuroml2_cell: \n \n cell_params = c.parameters if c.parameters else {}\n # TODO make more generic...\n if c.neuroml2_cell.lower() == 'spikegenerator':\n cell = neuroml.SpikeGenerator(id=c.id)\n nml_doc.spike_generators.append(cell)\n elif c.neuroml2_cell.lower() == 'spikegeneratorpoisson':\n cell = neuroml.SpikeGeneratorPoisson(id=c.id)\n nml_doc.spike_generator_poissons.append(cell)\n elif c.neuroml2_cell.lower() == 'spikegeneratorrefpoisson':\n cell = neuroml.SpikeGeneratorRefPoisson(id=c.id)\n nml_doc.spike_generator_ref_poissons.append(cell)\n else:\n raise Exception('The neuroml2_cell: %s is not yet supported...'%c.neuroml2_cell)\n\n for p in cell_params:\n exec('cell.%s = \"%s\"' % (p, evaluate(cell_params[p], 
nl_model.parameters)))\n \n for s in nl_model.synapses:\n if nml_doc.get_by_id(s.id) == None:\n if s.neuroml2_source_file:\n incl = neuroml.IncludeType(_locate_file(s.neuroml2_source_file, base_dir))\n if not incl in nml_doc.includes:\n nml_doc.includes.append(incl) \n \n \n # Look for and add the PyNN based elements to the NeuroMLDocument \n _extract_pynn_components_to_neuroml(nl_model, nml_doc)\n \n if print_summary:\n # Print info\n print_v(nml_doc.summary())\n\n # Save to file\n if target_dir == None:\n target_dir = base_dir\n if format == 'xml':\n if not nml_file_name:\n nml_file_name = _locate_file('%s.net.nml' % nml_doc.id, target_dir)\n from neuroml.writers import NeuroMLWriter\n NeuroMLWriter.write(nml_doc, nml_file_name)\n \n if format == 'hdf5':\n if not nml_file_name:\n nml_file_name = _locate_file('%s.net.nml.h5' % nml_doc.id, target_dir)\n from neuroml.writers import NeuroMLHdf5Writer\n NeuroMLHdf5Writer.write(nml_doc, nml_file_name)\n\n print_v(\"Written NeuroML to %s\" % nml_file_name)\n \n if validate and format == 'xml':\n \n from pyneuroml import pynml\n success = pynml.validate_neuroml2(nml_file_name, verbose_validate=False)\n if success:\n print_v('Generated file is valid NeuroML2!')\n else:\n print_v('Generated file is NOT valid NeuroML2!')\n \n \n return nml_file_name, nml_doc"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _generate_neuron_files_from_neuroml(network, verbose=False, dir_for_mod_files = None):\n\n print_v(\"------------- Generating NEURON files from NeuroML for %s (default dir: %s)...\" % (network.id, dir_for_mod_files))\n nml_src_files = []\n\n from neuroml import NeuroMLDocument\n import neuroml.writers as writers\n temp_nml_doc = NeuroMLDocument(id=\"temp\")\n \n dirs_for_mod_files = []\n if dir_for_mod_files!=None:\n dirs_for_mod_files.append(os.path.abspath(dir_for_mod_files))\n\n for c in network.cells:\n if c.neuroml2_source_file:\n 
nml_src_files.append(c.neuroml2_source_file)\n dir_for_mod = os.path.dirname(os.path.abspath(c.neuroml2_source_file))\n if not dir_for_mod in dirs_for_mod_files: dirs_for_mod_files.append(dir_for_mod)\n \n for s in network.synapses:\n if s.neuroml2_source_file:\n nml_src_files.append(s.neuroml2_source_file)\n dir_for_mod = os.path.dirname(os.path.abspath(s.neuroml2_source_file))\n if not dir_for_mod in dirs_for_mod_files: dirs_for_mod_files.append(dir_for_mod)\n\n for i in network.input_sources:\n if i.neuroml2_source_file:\n nml_src_files.append(i.neuroml2_source_file)\n dir_for_mod = os.path.dirname(os.path.abspath(i.neuroml2_source_file))\n if not dir_for_mod in dirs_for_mod_files: dirs_for_mod_files.append(dir_for_mod)\n \n temp_nml_doc = _extract_pynn_components_to_neuroml(network)\n \n summary = temp_nml_doc.summary()\n \n if 'IF_' in summary: \n import tempfile\n temp_nml_file = tempfile.NamedTemporaryFile(delete=False, suffix='.nml', dir=dir_for_mod_files)\n print_v(\"Writing temporary NML file to: %s, summary: \"%temp_nml_file.name)\n print_v(summary)\n\n writers.NeuroMLWriter.write(temp_nml_doc, temp_nml_file.name)\n nml_src_files.append(temp_nml_file.name)\n\n for f in nml_src_files:\n from pyneuroml import pynml\n print_v(\"Generating/compiling hoc/mod files for: %s\"%f)\n pynml.run_lems_with_jneuroml_neuron(f, \n nogui=True, \n only_generate_scripts=True,\n compile_mods=True,\n verbose=False)\n\n for dir_for_mod_files in dirs_for_mod_files:\n if not dir_for_mod_files in locations_mods_loaded_from:\n print_v(\"Generated NEURON code; loading mechanisms from %s (cwd: %s; already loaded: %s)\" % (dir_for_mod_files,os.getcwd(),locations_mods_loaded_from))\n try:\n\n from neuron import load_mechanisms\n if os.getcwd()==dir_for_mod_files:\n print_v(\"That's current dir => importing neuron module loads mods here...\")\n else:\n load_mechanisms(dir_for_mod_files)\n locations_mods_loaded_from.append(dir_for_mod_files)\n except:\n print_v(\"Failed to load mod 
file mechanisms...\")\n else:\n print_v(\"Already loaded mechanisms from %s (all loaded: %s)\" % (dir_for_mod_files,locations_mods_loaded_from))", "response": "Generates the NEURON files from the NeuroML file."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ngenerating the network in the specified simulator and running it.", "response": "def generate_and_run(simulation, \n simulator, \n network=None, \n return_results=False, \n base_dir=None,\n target_dir=None,\n num_processors=1):\n \"\"\"\n Generates the network in the specified simulator and runs, if appropriate\n \"\"\"\n\n if network == None:\n network = load_network_json(simulation.network)\n \n print_v(\"Generating network %s and running in simulator: %s...\" % (network.id, simulator))\n \n if simulator == 'NEURON':\n \n _generate_neuron_files_from_neuroml(network, dir_for_mod_files=target_dir)\n \n from neuromllite.NeuronHandler import NeuronHandler\n \n nrn_handler = NeuronHandler()\n\n for c in network.cells:\n if c.neuroml2_source_file:\n src_dir = os.path.dirname(os.path.abspath(c.neuroml2_source_file))\n nrn_handler.executeHoc('load_file(\"%s/%s.hoc\")' % (src_dir, c.id))\n \n generate_network(network, nrn_handler, generate_network, base_dir)\n if return_results:\n raise NotImplementedError(\"Reloading results not supported in Neuron yet...\")\n\n\n elif simulator.lower() == 'sonata': # Will not \"run\" obviously...\n \n from neuromllite.SonataHandler import SonataHandler\n \n sonata_handler = SonataHandler()\n \n generate_network(network, sonata_handler, always_include_props=True, base_dir=base_dir)\n \n print_v(\"Done with Sonata...\")\n\n\n elif simulator.lower().startswith('graph'): # Will not \"run\" obviously...\n \n from neuromllite.GraphVizHandler import GraphVizHandler, engines\n \n\n try:\n if simulator[-1].isalpha():\n print(simulator)\n print(simulator[5:])\n print(simulator[5:-1])\n engine = engines[simulator[-1]]\n level = int(simulator[5:-1])\n else:\n engine = 
'dot'\n level = int(simulator[5:])\n \n except Exception as e:\n print(e)\n print_v(\"Error parsing: %s\"%simulator)\n print_v(\"Graphs of the network structure can be generated at many levels of detail (1-6, required) and laid out using GraphViz engines (d - dot (default); c - circo; n - neato; f - fdp), so use: -graph3c, -graph2, -graph4f etc.\")\n return\n \n handler = GraphVizHandler(level, engine=engine, nl_network=network)\n \n generate_network(network, handler, always_include_props=True, base_dir=base_dir)\n \n print_v(\"Done with GraphViz...\")\n \n \n elif simulator.lower().startswith('matrix'): # Will not \"run\" obviously...\n \n from neuromllite.MatrixHandler import MatrixHandler\n \n try:\n level = int(simulator[6:])\n except:\n print_v(\"Error parsing: %s\"%simulator)\n print_v(\"Matrices of the network structure can be generated at many levels of detail (1-n, required), so use: -matrix1, -matrix2, etc.\")\n return\n \n handler = MatrixHandler(level, nl_network=network)\n \n generate_network(network, handler, always_include_props=True, base_dir=base_dir)\n \n print_v(\"Done with MatrixHandler...\")\n \n\n elif simulator.startswith('PyNN'):\n \n #_generate_neuron_files_from_neuroml(network)\n simulator_name = simulator.split('_')[1].lower()\n \n \n from neuromllite.PyNNHandler import PyNNHandler\n \n pynn_handler = PyNNHandler(simulator_name, simulation.dt, network.id)\n \n syn_cell_params = {}\n for proj in network.projections:\n \n synapse = network.get_child(proj.synapse, 'synapses')\n post_pop = network.get_child(proj.postsynaptic, 'populations')\n \n if not post_pop.component in syn_cell_params:\n syn_cell_params[post_pop.component] = {}\n for p in synapse.parameters:\n post = ''\n if synapse.pynn_receptor_type == \"excitatory\":\n post = '_E'\n elif synapse.pynn_receptor_type == \"inhibitory\":\n post = '_I'\n syn_cell_params[post_pop.component]['%s%s' % (p, post)] = synapse.parameters[p]\n \n \n cells = {}\n for c in network.cells:\n if 
c.pynn_cell:\n cell_params = {}\n if c.parameters: \n for p in c.parameters:\n cell_params[p] = evaluate(c.parameters[p], network.parameters)\n \n dont_set_here = ['tau_syn_E', 'e_rev_E', 'tau_syn_I', 'e_rev_I']\n for d in dont_set_here:\n if d in c.parameters:\n raise Exception(('Synaptic parameters like %s should be set '\n 'in individual synapses, not in the list of parameters associated with the cell') % d)\n if c.id in syn_cell_params:\n cell_params.update(syn_cell_params[c.id])\n print_v(\"Creating cell with params: %s\" % cell_params)\n exec('cells[\"%s\"] = pynn_handler.sim.%s(**cell_params)' % (c.id, c.pynn_cell))\n \n if c.pynn_cell != 'SpikeSourcePoisson':\n exec(\"cells['%s'].default_initial_values['v'] = cells['%s'].parameter_space['v_rest'].base_value\" % (c.id, c.id))\n \n pynn_handler.set_cells(cells)\n \n receptor_types = {}\n for s in network.synapses:\n if s.pynn_receptor_type:\n receptor_types[s.id] = s.pynn_receptor_type\n \n pynn_handler.set_receptor_types(receptor_types)\n \n for input_source in network.input_sources:\n if input_source.pynn_input:\n pynn_handler.add_input_source(input_source)\n \n generate_network(network, pynn_handler, always_include_props=True, base_dir=base_dir)\n \n for pid in pynn_handler.populations:\n pop = pynn_handler.populations[pid]\n if 'all' in simulation.recordTraces or pop.label in simulation.recordTraces:\n if pop.can_record('v'):\n pop.record('v')\n \n \n pynn_handler.sim.run(simulation.duration)\n pynn_handler.sim.end()\n \n traces = {}\n events = {}\n \n if not 'NeuroML' in simulator:\n from neo.io import PyNNTextIO\n\n for pid in pynn_handler.populations:\n pop = pynn_handler.populations[pid]\n\n if 'all' in simulation.recordTraces or pop.label in simulation.recordTraces:\n \n filename = \"%s.%s.v.dat\" % (simulation.id, pop.label)\n all_columns = []\n print_v(\"Writing data for %s to %s\" % (pop.label, filename))\n for i in range(len(pop)):\n if pop.can_record('v'):\n ref = '%s[%i]'%(pop.label,i)\n 
traces[ref] = []\n data = pop.get_data('v', gather=False)\n for segment in data.segments:\n vm = segment.analogsignals[0].transpose()[i]\n \n if len(all_columns) == 0:\n tt = np.array([t * simulation.dt / 1000. for t in range(len(vm))])\n all_columns.append(tt)\n vm_si = [float(v / 1000.) for v in vm]\n traces[ref] = vm_si\n all_columns.append(vm_si)\n \n times_vm = np.array(all_columns).transpose()\n \n np.savetxt(filename, times_vm, delimiter='\\t', fmt='%s')\n \n \n if return_results:\n _print_result_info(traces, events)\n return traces, events\n\n elif simulator == 'NetPyNE':\n \n if target_dir==None:\n target_dir='./'\n \n _generate_neuron_files_from_neuroml(network, dir_for_mod_files=target_dir)\n \n from netpyne import specs\n from netpyne import sim\n # Note NetPyNE from this branch is required: https://github.com/Neurosim-lab/netpyne/tree/neuroml_updates\n from netpyne.conversion.neuromlFormat import NetPyNEBuilder\n \n import pprint; pp = pprint.PrettyPrinter(depth=6)\n \n netParams = specs.NetParams()\n simConfig = specs.SimConfig()\n netpyne_handler = NetPyNEBuilder(netParams, simConfig=simConfig, verbose=True)\n \n generate_network(network, netpyne_handler, base_dir=base_dir)\n \n netpyne_handler.finalise()\n \n simConfig = specs.SimConfig() \n simConfig.tstop = simulation.duration\n simConfig.duration = simulation.duration\n simConfig.dt = simulation.dt\n simConfig.seed = simulation.seed\n simConfig.recordStep = simulation.dt\n \n simConfig.recordCells = ['all'] \n simConfig.recordTraces = {}\n \n\n for pop in netpyne_handler.popParams.values():\n if 'all' in simulation.recordTraces or pop.id in simulation.recordTraces:\n for i in pop['cellsList']:\n id = pop['pop']\n index = i['cellLabel']\n simConfig.recordTraces['v_%s_%s' % (id, index)] = {'sec':'soma', 'loc':0.5, 'var':'v', 'conds':{'pop':id, 'cellLabel':index}}\n\n simConfig.saveDat = True\n \n print_v(\"NetPyNE netParams: \")\n pp.pprint(netParams.todict())\n #print_v(\"NetPyNE simConfig: \")\n 
#pp.pprint(simConfig.todict())\n \n sim.initialize(netParams, simConfig) # create network object and set cfg and net params\n\n sim.net.createPops() \n cells = sim.net.createCells() # instantiate network cells based on defined populations \n \n \n for proj_id in netpyne_handler.projection_infos.keys():\n projName, prePop, postPop, synapse, ptype = netpyne_handler.projection_infos[proj_id]\n print_v(\"Creating connections for %s (%s): %s->%s via %s\" % (projName, ptype, prePop, postPop, synapse))\n \n preComp = netpyne_handler.pop_ids_vs_components[prePop]\n \n for conn in netpyne_handler.connections[projName]:\n \n pre_id, pre_seg, pre_fract, post_id, post_seg, post_fract, delay, weight = conn\n \n #connParam = {'delay':delay,'weight':weight,'synsPerConn':1, 'sec':post_seg, 'loc':post_fract, 'threshold':threshold}\n connParam = {'delay':delay, 'weight':weight, 'synsPerConn':1, 'sec':post_seg, 'loc':post_fract}\n \n if ptype == 'electricalProjection':\n\n if weight != 1:\n raise Exception('Cannot yet support inputs where weight !=1!')\n connParam = {'synsPerConn': 1, \n 'sec': post_seg, \n 'loc': post_fract, \n 'gapJunction': True, \n 'weight': weight}\n else:\n connParam = {'delay': delay,\n 'weight': weight,\n 'synsPerConn': 1, \n 'sec': post_seg, \n 'loc': post_fract} \n #'threshold': threshold}\n\n connParam['synMech'] = synapse\n\n if post_id in sim.net.gid2lid: # check if postsyn is in this node's list of gids\n sim.net._addCellConn(connParam, pre_id, post_id)\n \n \n stims = sim.net.addStims() # add external stimulation to cells (IClamps etc)\n simData = sim.setupRecording() # setup variables to record for each cell (spikes, V traces, etc)\n sim.runSim() # run parallel Neuron simulation \n sim.gatherData() # gather spiking data and cell info from each node\n sim.saveData() # save params, cell info and sim output to file (pickle,mat,txt,etc)\n \n if return_results:\n raise NotImplementedError(\"Reloading results not supported in NetPyNE yet...\")\n \n elif 
simulator == 'jNeuroML' or simulator == 'jNeuroML_NEURON' or simulator == 'jNeuroML_NetPyNE':\n\n from pyneuroml.lems import generate_lems_file_for_neuroml\n from pyneuroml import pynml\n\n lems_file_name = 'LEMS_%s.xml' % simulation.id\n\n nml_file_name, nml_doc = generate_neuroml2_from_network(network, base_dir=base_dir, target_dir=target_dir)\n included_files = ['PyNN.xml']\n\n for c in network.cells: \n if c.lems_source_file:\n included_files.append(c.lems_source_file)\n '''\n if network.cells:\n for c in network.cells:\n included_files.append(c.neuroml2_source_file)\n '''\n if network.synapses:\n for s in network.synapses: \n if s.lems_source_file:\n included_files.append(s.lems_source_file)\n \n print_v(\"Generating LEMS file prior to running in %s\" % simulator)\n \n pops_plot_save = []\n pops_spike_save = []\n gen_plots_for_quantities = {}\n gen_saves_for_quantities = {}\n \n for p in network.populations:\n \n if simulation.recordTraces and ('all' in simulation.recordTraces or p.id in simulation.recordTraces):\n pops_plot_save.append(p.id)\n \n if simulation.recordSpikes and ('all' in simulation.recordSpikes or p.id in simulation.recordSpikes):\n pops_spike_save.append(p.id)\n \n if simulation.recordRates and ('all' in simulation.recordRates or p.id in simulation.recordRates):\n size = evaluate(p.size, network.parameters)\n for i in range(size):\n quantity = '%s/%i/%s/r' % (p.id, i, p.component)\n gen_plots_for_quantities['%s_%i_r' % (p.id, i)] = [quantity]\n gen_saves_for_quantities['%s_%i.r.dat' % (p.id, i)] = [quantity]\n \n if simulation.recordVariables:\n for var in simulation.recordVariables:\n to_rec = simulation.recordVariables[var]\n if ('all' in to_rec or p.id in to_rec):\n size = evaluate(p.size, network.parameters)\n for i in range(size):\n quantity = '%s/%i/%s/%s' % (p.id, i, p.component,var)\n gen_plots_for_quantities['%s_%i_%s' % (p.id, i, var)] = [quantity]\n gen_saves_for_quantities['%s_%i.%s.dat' % (p.id, i, var)] = [quantity]\n \n \n 
generate_lems_file_for_neuroml(simulation.id, \n nml_file_name, \n network.id, \n simulation.duration, \n simulation.dt, \n lems_file_name,\n target_dir=target_dir if target_dir else '.',\n nml_doc=nml_doc, # Use this if the nml doc has already been loaded (to avoid delay in reload)\n include_extra_files=included_files,\n \n gen_plots_for_all_v=False,\n plot_all_segments=False,\n gen_plots_for_quantities=gen_plots_for_quantities, # Dict with displays vs lists of quantity paths\n gen_plots_for_only_populations=pops_plot_save, # List of populations, all pops if = []\n \n gen_saves_for_all_v=False,\n save_all_segments=False,\n gen_saves_for_only_populations=pops_plot_save, # List of populations, all pops if = []\n gen_saves_for_quantities=gen_saves_for_quantities, # Dict with file names vs lists of quantity paths\n \n gen_spike_saves_for_all_somas=False,\n gen_spike_saves_for_only_populations=pops_spike_save, # List of populations, all pops if = []\n gen_spike_saves_for_cells={}, # Dict with file names vs lists of quantity paths\n spike_time_format='ID_TIME',\n \n copy_neuroml=True,\n lems_file_generate_seed=12345,\n report_file_name='report.%s.txt' % simulation.id,\n simulation_seed=simulation.seed if simulation.seed else 12345,\n verbose=True)\n \n lems_file_name = _locate_file(lems_file_name, target_dir)\n \n if simulator == 'jNeuroML':\n results = pynml.run_lems_with_jneuroml(lems_file_name, \n nogui=True, \n load_saved_data=return_results, \n reload_events=return_results)\n elif simulator == 'jNeuroML_NEURON':\n results = pynml.run_lems_with_jneuroml_neuron(lems_file_name, \n nogui=True, \n load_saved_data=return_results, \n reload_events=return_results)\n elif simulator == 'jNeuroML_NetPyNE':\n results = pynml.run_lems_with_jneuroml_netpyne(lems_file_name, \n nogui=True, \n verbose=True, \n load_saved_data=return_results, \n reload_events=return_results,\n num_processors=num_processors)\n\n print_v(\"Finished running LEMS file %s in %s (returning results: %s)\" 
% (lems_file_name, simulator, return_results))\n\n if return_results:\n traces, events = results\n _print_result_info(traces, events)\n return results"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _print_result_info(traces, events):\n print_v('Returning %i traces:'%len(traces))\n for r in sorted(traces.keys()):\n x = traces[r]\n print_v(' %s (%s): %s -> %s (min: %s, max: %s, len: %i)'%(r, type(x), x[0],x[-1],min(x),max(x),len(x)))\n print_v('Returning %i events:'%len(events))\n for r in sorted(events.keys()):\n x = events[r]\n print_v(' %s: %s -> %s (len: %i)'%(r, x[0] if len(x)>0 else '-',x[-1] if len(x)>0 else '-',len(x)))", "response": "Print a summary of the returned traces and spike times\n "} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef connect_input(self, name, wire):\n self._inputs[name] = wire\n wire.sinks.append(self)", "response": "Connect the specified input to a wire."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _write_config(self, memory):\n memory.seek(0)\n memory.write(struct.pack(\"<5I\",\n # sim_length\n self._simulator.length,\n # input_a_key\n self._inputs[\"a\"].routing_key\n if self._inputs[\"a\"] is not None\n else 0xFFFFFFFF,\n # input_b_key\n self._inputs[\"b\"].routing_key\n if self._inputs[\"b\"] is not None\n else 0xFFFFFFFF,\n # output_key\n self.output.routing_key,\n # lut\n self._lookup_table))", "response": "Write the configuration for this gate to memory."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef connect_input(self, wire):\n self._input = wire\n wire.sinks.append(self)", "response": "Probe the specified wire."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _write_config(self, 
memory):\n memory.seek(0)\n memory.write(struct.pack(\" Optional[float]:\n \"\"\"Give the amount of seconds before the next periodic task is due.\"\"\"\n try:\n next_event = self._scheduler.queue[0]\n except IndexError:\n return None\n\n now = time.monotonic()\n next_event_time = next_event[0]\n if next_event_time < now:\n return 0\n\n return next_event_time - now"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef inspect_periodic_tasks(self) -> List[Tuple[int, str]]:\n return [(int(e[0]), e[3][0].name) for e in self._scheduler.queue]", "response": "Get the next periodic task schedule."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngetting jobs from a queue.", "response": "def get_jobs_from_queue(self, queue: str, max_jobs: int) -> List[Job]:\n \"\"\"Get jobs from a queue.\"\"\"\n rv = list()\n while len(rv) < max_jobs:\n try:\n job_json_string = self._get_queue(queue).get(block=False)\n except Empty:\n break\n\n job = Job.deserialize(job_json_string)\n job.status = JobStatus.RUNNING\n rv.append(job)\n\n return rv"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef search_index_file():\n from metapack import Downloader\n from os import environ\n\n return environ.get('METAPACK_SEARCH_INDEX',\n Downloader.get_instance().cache.getsyspath('index.json'))", "response": "Return the default local index file from the download cache"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nupdate from another index or index dict", "response": "def update(self,o):\n \"\"\"Update from another index or index dict\"\"\"\n\n self.open()\n\n try:\n self._db.update(o._db)\n except AttributeError:\n self._db.update(o)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef bbox(ctx, tile):\n geom = TilePyramid(\n ctx.obj['grid'],\n 
tile_size=ctx.obj['tile_size'],\n metatiling=ctx.obj['metatiling']\n ).tile(*tile).bbox(pixelbuffer=ctx.obj['pixelbuffer'])\n if ctx.obj['output_format'] in ['WKT', 'Tile']:\n click.echo(geom)\n elif ctx.obj['output_format'] == 'GeoJSON':\n click.echo(geojson.dumps(geom))", "response": "Print Tile bounding box as geometry."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef tile(ctx, point, zoom):\n tile = TilePyramid(\n ctx.obj['grid'],\n tile_size=ctx.obj['tile_size'],\n metatiling=ctx.obj['metatiling']\n ).tile_from_xy(*point, zoom=zoom)\n if ctx.obj['output_format'] == 'Tile':\n click.echo('%s %s %s' % tile.id)\n elif ctx.obj['output_format'] == 'WKT':\n click.echo(tile.bbox(pixelbuffer=ctx.obj['pixelbuffer']))\n elif ctx.obj['output_format'] == 'GeoJSON':\n click.echo(\n geojson.dumps(\n geojson.FeatureCollection([\n geojson.Feature(\n geometry=tile.bbox(pixelbuffer=ctx.obj['pixelbuffer']),\n properties=dict(\n zoom=tile.zoom,\n row=tile.row,\n col=tile.col\n )\n )\n ])\n )\n )", "response": "Print the tile containing POINT."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nprinting Tiles from bounds.", "response": "def tiles(ctx, bounds, zoom):\n \"\"\"Print Tiles from bounds.\"\"\"\n tiles = TilePyramid(\n ctx.obj['grid'],\n tile_size=ctx.obj['tile_size'],\n metatiling=ctx.obj['metatiling']\n ).tiles_from_bounds(bounds, zoom=zoom)\n if ctx.obj['output_format'] == 'Tile':\n for tile in tiles:\n click.echo('%s %s %s' % tile.id)\n elif ctx.obj['output_format'] == 'WKT':\n for tile in tiles:\n click.echo(tile.bbox(pixelbuffer=ctx.obj['pixelbuffer']))\n elif ctx.obj['output_format'] == 'GeoJSON':\n click.echo(\n '{\\n'\n ' \"type\": \"FeatureCollection\",\\n'\n ' \"features\": ['\n )\n # print tiles as they come and only add comma if there is a next tile\n try:\n tile = next(tiles)\n while True:\n gj = ' %s' % geojson.Feature(
geometry=tile.bbox(pixelbuffer=ctx.obj['pixelbuffer']),\n properties=dict(\n zoom=tile.zoom,\n row=tile.row,\n col=tile.col\n )\n )\n try:\n tile = next(tiles)\n click.echo(gj + ',')\n except StopIteration:\n click.echo(gj)\n raise\n except StopIteration:\n pass\n click.echo(\n ' ]\\n'\n '}'\n )"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nsnaps bounding box to tile grid.", "response": "def snap_bbox(ctx, bounds, zoom):\n \"\"\"Snap bbox to tile grid.\"\"\"\n click.echo(box(*tilematrix.snap_bounds(\n bounds=bounds,\n tile_pyramid=TilePyramid(\n ctx.obj['grid'],\n tile_size=ctx.obj['tile_size'],\n metatiling=ctx.obj['metatiling']\n ),\n zoom=zoom,\n pixelbuffer=ctx.obj['pixelbuffer']\n )))"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _initial_placement(movable_vertices, vertices_resources, machine, random):\n # Initially fill chips in the system in a random order\n locations = list(machine)\n random.shuffle(locations)\n location_iter = iter(locations)\n\n # Greedily place the vertices in a random order\n movable_vertices = list(v for v in vertices_resources\n if v in movable_vertices)\n random.shuffle(movable_vertices)\n vertex_iter = iter(movable_vertices)\n\n placement = {}\n try:\n location = next(location_iter)\n except StopIteration:\n raise InsufficientResourceError(\"No working chips in system.\")\n while True:\n # Get a vertex to place\n try:\n vertex = next(vertex_iter)\n except StopIteration:\n # All vertices have been placed\n break\n\n # Advance through the set of available locations until we find a chip\n # where the vertex fits\n while True:\n resources_if_placed = subtract_resources(\n machine[location], vertices_resources[vertex])\n\n if overallocated(resources_if_placed):\n # The vertex won't fit on this chip, move onto the next chip\n try:\n location = next(location_iter)\n continue\n except StopIteration:\n raise 
InsufficientResourceError(\n \"Ran out of chips while attempting to place vertex \"\n \"{}\".format(vertex))\n else:\n # The vertex fits: record the resources consumed and move on to\n # the next vertex.\n placement[vertex] = location\n machine[location] = resources_if_placed\n break\n\n return placement", "response": "Greedily generates a random initial placement of the movable vertices onto chips in the system."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef place(vertices_resources, nets, machine, constraints,\n effort=1.0, random=default_random, on_temperature_change=None,\n kernel=default_kernel, kernel_kwargs={}):\n \"\"\"A flat Simulated Annealing based placement algorithm.\n\n This placement algorithm uses simulated annealing directly on the supplied\n problem graph with the objective of reducing wire lengths (and thus,\n indirectly, the potential for congestion). Though computationally\n expensive, this placer produces relatively good placement solutions.\n\n The annealing temperature schedule used by this algorithm is taken from\n \"VPR: A New Packing, Placement and Routing Tool for FPGA Research\" by\n Vaughn Betz and Jonathan Rose from the \"1997 International Workshop on\n Field Programmable Logic and Applications\".\n\n Two implementations of the algorithm's kernel are available:\n\n * :py:class:`~rig.place_and_route.place.sa.python_kernel.PythonKernel` A\n pure Python implementation which is available on all platforms supported\n by Rig.\n * :py:class:`~rig.place_and_route.place.sa.c_kernel.CKernel` A C\n implementation which is typically 50-150x faster than the basic Python\n kernel. Since this implementation requires a C compiler during\n installation, it is an optional feature of Rig.
See the\n :py:class:`CKernel's documentation\n ` for details.\n\n The fastest kernel installed is used by default and can be manually chosen\n using the ``kernel`` argument.\n\n This algorithm produces INFO level logging information describing the\n progress made by the algorithm.\n\n .. warning:\n This algorithm does not attempt to produce good solutions to the\n bin-packing problem of optimally fitting vertices into chips and it may\n fail if a good placement requires good bin packing.\n\n Parameters\n ----------\n effort : float\n A scaling factor for the number of iterations the algorithm should run\n for. 1.0 is probably about as low as you'll want to go in practice and\n runtime increases linearly as you increase this parameter.\n random : :py:class:`random.Random`\n A Python random number generator. Defaults to ``import random`` but can\n be set to your own instance of :py:class:`random.Random` to allow you\n to control the seed and produce deterministic results. For results to\n be deterministic, vertices_resources must be supplied as an\n :py:class:`collections.OrderedDict`.\n on_temperature_change : callback_function or None\n An (optional) callback function which is called every time the\n temperature is changed. This callback can be used to provide status\n updates\n\n The callback function is passed the following arguments:\n\n * ``iteration_count``: the number of iterations the placer has\n attempted (integer)\n * ``placements``: The current placement solution.\n * ``cost``: the weighted sum over all nets of bounding-box size.\n (float)\n * ``acceptance_rate``: the proportion of iterations which have resulted\n in an accepted change since the last callback call. (float between\n 0.0 and 1.0)\n * ``temperature``: The current annealing temperature. 
(float)\n * ``distance_limit``: The maximum distance any swap may be made over.\n (integer)\n\n If the callback returns False, the anneal is terminated immediately and\n the current solution is returned.\n kernel : :py:class:`~rig.place_and_route.place.sa.kernel.Kernel`\n A simulated annealing placement kernel. A sensible default will be\n chosen based on the available kernels on this machine. The kernel may\n not be used if the placement problem has a trivial solution.\n kernel_kwargs : dict\n Optional kernel-specific keyword arguments to pass to the kernel\n constructor.\n \"\"\"\n # Special case: just return immediately when there's nothing to place\n if len(vertices_resources) == 0:\n return {}\n\n # Within the algorithm we modify the resource availability values in the\n # machine to account for the effects of the current placement. As a result,\n # an internal copy of the structure must be made.\n machine = machine.copy()\n\n # {vertex: (x, y), ...} gives the location of all vertices whose position\n # is fixed by a LocationConstraint.\n fixed_vertices = {}\n\n # Handle constraints\n vertices_resources, nets, constraints, substitutions = \\\n apply_same_chip_constraints(vertices_resources, nets, constraints)\n for constraint in constraints:\n if isinstance(constraint, LocationConstraint):\n # Location constraints are handled by recording the set of fixed\n # vertex locations and subtracting their resources from the chips\n # they're allocated to. These vertices will then not be added to\n # the internal placement data structure to prevent annealing from\n # moving them. 
They will be re-introduced at the last possible\n # moment.\n location = constraint.location\n if location not in machine:\n raise InvalidConstraintError(\n \"Chip requested by {} unavailable\".format(machine))\n vertex = constraint.vertex\n\n # Record the constrained vertex's location\n fixed_vertices[vertex] = location\n\n # Make sure the vertex fits at the requested location (updating the\n # resource availability after placement)\n resources = vertices_resources[vertex]\n machine[location] = subtract_resources(machine[location],\n resources)\n if overallocated(machine[location]):\n raise InsufficientResourceError(\n \"Cannot meet {}\".format(constraint))\n elif isinstance(constraint, # pragma: no branch\n ReserveResourceConstraint):\n apply_reserve_resource_constraint(machine, constraint)\n\n # Initially randomly place the movable vertices\n movable_vertices = {v for v in vertices_resources\n if v not in fixed_vertices}\n initial_placements = _initial_placement(movable_vertices,\n vertices_resources,\n machine, random)\n\n # Include the fixed vertices in initial placement\n initial_placements.update(fixed_vertices)\n\n # Filter out empty or singleton nets and those weighted as zero since they\n # cannot influence placement.\n nets = [n for n in nets if len(set(n)) > 1 and n.weight > 0.0]\n\n # Special cases where no placement effort is required:\n # * There is only one chip\n # * There are no resource types to be consumed\n # * No effort is to be made\n # * No movable vertices\n # * There are no nets (and moving things has no effect)\n trivial = ((machine.width, machine.height) == (1, 1) or\n len(machine.chip_resources) == 0 or\n effort == 0.0 or\n len(movable_vertices) == 0 or\n len(nets) == 0)\n if trivial:\n logger.info(\"Placement has trivial solution. 
SA not used.\")\n finalise_same_chip_constraints(substitutions, initial_placements)\n return initial_placements\n\n # Intialise the algorithm kernel\n k = kernel(vertices_resources, movable_vertices, set(fixed_vertices),\n initial_placements, nets, machine, random, **kernel_kwargs)\n\n logger.info(\"SA placement kernel: %s\", kernel.__name__)\n\n # Specifies the maximum distance any swap can span. Initially consider\n # swaps that span the entire machine.\n distance_limit = max(machine.width, machine.height)\n\n # Determine initial temperature according to the heuristic used by VPR: 20\n # times the standard deviation of len(movable_vertices) random swap costs.\n # The arbitrary very-high temperature is used to cause \"all\" swaps to be\n # accepted.\n _0, _1, cost_delta_sd = k.run_steps(len(movable_vertices),\n distance_limit,\n 1e100)\n temperature = 20.0 * cost_delta_sd\n\n # The number of swap-attempts between temperature changes is selected by\n # the heuristic used by VPR. This value is scaled linearly by the effort\n # parameter.\n num_steps = max(1, int(effort * len(vertices_resources)**1.33))\n\n logger.info(\"Initial placement temperature: %0.1f\", temperature)\n\n # Counter for the number of swap attempts made (used for diagnostic\n # purposes)\n iteration_count = 0\n\n # Holds the total cost of the current placement. This default value chosen\n # to ensure the loop below iterates at least once.\n current_cost = 0.0\n\n # The annealing algorithm runs until a heuristic termination condition\n # (taken from VPR) is hit. 
The heuristic waits until the temperature falls\n # below a small fraction of the average net cost.\n while temperature > (0.005 * current_cost) / len(nets):\n # Run an iteration at the current temperature\n num_accepted, current_cost, _ = k.run_steps(\n num_steps, int(math.ceil(distance_limit)), temperature)\n\n # The ratio of accepted-to-not-accepted changes\n r_accept = num_accepted / float(num_steps)\n\n # Special case: Can't do better than 0 cost! This is a special case\n # since the normal termination condition will not terminate if the cost\n # doesn't drop below 0.\n if current_cost == 0:\n break\n\n # The temperature is reduced by a factor heuristically based on the\n # acceptance rate. The schedule below attempts to maximise the time\n # spent at temperatures where a large portion (but not all) of changes\n # are being accepted. If lots of changes are being accepted (e.g.\n # during high-temperature periods) then most of them are likely not to\n # be beneficial. If few changes are being accepted, we're probably\n # pretty close to the optimal placement.\n if r_accept > 0.96:\n alpha = 0.5\n elif r_accept > 0.8:\n alpha = 0.9\n elif r_accept > 0.15:\n alpha = 0.95\n else:\n alpha = 0.8\n temperature = alpha * temperature\n\n # According to:\n # * M. Huang, F. Romeo, and A. Sangiovanni-Vincentelli, \"An Efficient\n # General Cooling Schedule for Simulated Annealing\" ICCAD, 1986, pp.\n # 381 - 384 and J. Lam\n # * J. Delosme, \"Performance of a New Annealing Schedule\" DAC, 1988,\n # pp. 306 - 311.\n # It is desirable to keep the acceptance ratio as close to 0.44 for as\n # long as possible. 
As a result, when r_accept falls below this we can\n # help increase the acceptance rate by reducing the set of possible\n # swap candidates based on the observation that near the end of\n # placement, most things are near their optimal location and thus long\n # distance swaps are unlikely to be useful.\n distance_limit *= 1.0 - 0.44 + r_accept\n distance_limit = min(max(distance_limit, 1.0),\n max(machine.width, machine.height))\n\n iteration_count += num_steps\n logger.debug(\"Iteration: %d, \"\n \"Cost: %0.1f, \"\n \"Kept: %0.1f%%, \"\n \"Temp: %0.3f, \"\n \"Dist: %d.\",\n iteration_count, current_cost,\n r_accept*100, temperature, math.ceil(distance_limit))\n\n # Call the user callback before the next iteration, terminating if\n # requested.\n if on_temperature_change is not None:\n placements = k.get_placements().copy()\n finalise_same_chip_constraints(substitutions, placements)\n ret_val = on_temperature_change(iteration_count,\n placements,\n current_cost,\n r_accept,\n temperature,\n distance_limit)\n if ret_val is False:\n break\n\n logger.info(\"Anneal terminated after %d iterations.\", iteration_count)\n\n placements = k.get_placements()\n finalise_same_chip_constraints(substitutions, placements)\n\n return placements", "response": "A simple annealing based placement algorithm."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef make_citation_dict(td):\n\n from datetime import datetime\n\n if isinstance(td, dict):\n d = td\n name = d['name_link']\n else:\n\n d = td.as_dict()\n d['_term'] = td\n\n try:\n d['name_link'] = td.name\n except AttributeError:\n d['name_link'] = td['name_link'].value\n\n if 'author' in d and isinstance(d['author'], str):\n authors = []\n for e in d['author'].split(';'):\n author_d = HumanName(e).as_dict(include_empty=False)\n if 'suffix' in author_d:\n author_d['lineage'] = author_d['suffix']\n del author_d['suffix']\n authors.append(author_d)\n d['author'] = 
authors\n\n if not 'type' in d:\n\n if '_term' in d:\n t = d['_term']\n\n if t.term_is('Root.Reference') or t.term_is('Root.Resource'):\n d['type'] = 'dataset'\n\n elif t.term_is('Root.Citation'):\n d['type'] = 'article'\n else:\n d['type'] = 'article'\n\n if d['type'] == 'dataset':\n\n if not 'editor' in d:\n d['editor'] = [HumanName('Missing Editor').as_dict(include_empty=False)]\n\n if not 'accessdate' in d:\n d['accessdate'] = datetime.now().strftime('%Y-%m-%d')\n\n if not 'author' in d:\n d['author'] = [HumanName('Missing Author').as_dict(include_empty=False)]\n\n if not 'title' in d:\n d['title'] = d.get('description', '')\n\n if not 'journal' in d:\n d['journal'] = ''\n\n if not 'year' in d:\n d['year'] = ''\n\n if '_term' in d:\n del d['_term']\n\n return d", "response": "Update a citation dictionary by editing the Author field"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef make_metatab_citation_dict(t):\n\n try:\n\n if parse_app_url(t.url).proto == 'metatab':\n\n try:\n url = parse_app_url(str(t.resolved_url)).resource_url\n doc = t.row_generator.generator.package\n\n except AttributeError as e:\n return False # It isn't a resource or reference\n\n creator = doc.find_first('Root.Creator')\n author_key='author'\n\n if not creator:\n creator = doc.find_first('Root.Wrangler')\n author_key = 'editor'\n\n if not creator:\n creator = doc.find_first('Root.Origin')\n author_key = 'editor'\n\n try:\n origin = doc['Contacts'].get('Root.Origin').get('organization').value\n except AttributeError:\n try:\n origin = doc.get_value('Root.Origin', doc.get_value('Name.Origin')).title()\n except:\n origin = None\n\n d = {\n 'type': 'dataset',\n 'name_link': t.name,\n author_key: [HumanName(creator.value).as_dict(include_empty=False)],\n 'publisher': creator.properties.get('organization'),\n 'origin': origin,\n 'journal': '2010 - 2015 American Community Survey',\n 'title': 
doc['Root'].find_first_value('Root.Title') + '; ' + t.name,\n 'year': doc.get_value('Root.Year'),\n 'accessDate': '{}'.format(datetime.datetime.now().strftime('%Y-%m-%d')),\n 'url': url,\n 'version': doc.get_value('Root.Version', doc.get_value('Name.Version') ),\n }\n\n d = { k:v for k, v in d.items() if v is not None}\n\n\n return d\n\n\n else:\n return False\n\n except (AttributeError, KeyError) as e:\n raise\n return False", "response": "Return a dict with BibTeX key-value pairs for metatab data."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nrender citations from a document or a dict of dicts", "response": "def _bibliography(doc, terms, converters=[], format='html'):\n \"\"\"\n Render citations, from a document or a doct of dicts\n\n If the input is a dict, each key is the name of the citation, and the value is a BibTex\n formatted dict\n\n :param doc: A MetatabDoc, or a dict of BibTex dicts\n :return:\n \"\"\"\n\n output_backend = 'latex' if format == 'latex' else MetatabHtmlBackend\n\n def mk_cite(v):\n for c in converters:\n r = c(v)\n\n if r is not False:\n return r\n\n return make_citation_dict(v)\n\n if isinstance(doc, MetatabDoc):\n # This doesn't work for LaTex, b/c the formatter adds the prologue and epilogue to eery entry\n\n d = [mk_cite(t) for t in terms]\n\n cd = {e['name_link']: e for e in d}\n\n else:\n\n cd = {k: mk_cite(v, i) for i, (k, v) in enumerate(doc.items())}\n\n # for k, v in cd.items():\n # print (k, v)\n\n return PybtexEngine().format_from_string(safe_dump({'entries': cd}),\n style=MetatabStyle,\n output_backend=output_backend,\n bib_format='yaml')"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef display_context(doc):\n from rowgenerators.exceptions import DownloadError\n\n context = {s.name.lower(): s.as_dict() for s in doc if s.name.lower() != 'schema'}\n\n #import json\n #print(json.dumps(context, indent=4))\n\n mandatory_sections = ['documentation', 
'contacts']\n\n # Remove section names\n deletes = []\n for k,v in context.items():\n try:\n del v['@value']\n except KeyError:\n pass # Doesn't have the value\n except TypeError:\n # Is actually completely empty, and has a scalar value. Delete and re-create\n deletes.append(k)\n\n if isinstance(v, str): # Shouldn't ever happen, but who knows ?\n deletes.append(k)\n\n for d in deletes:\n try:\n del context[d]\n except KeyError:\n # Fails in TravisCI, no idea why.\n pass\n\n for ms in mandatory_sections:\n if not ms in context:\n context[ms] = {}\n\n # Load inline documentation\n inline = ''\n\n for d in context.get('documentation',{}).get('documentation',[]):\n\n u = parse_app_url(d['url'])\n\n if u.target_format == 'md': # The README.md file\n inline = ''\n if u.proto == 'file':\n # File really ought to be relative\n t = doc.package_url.join_target(u).get_resource().get_target()\n else:\n try:\n t = u.get_resource().get_target()\n except DownloadError as e:\n raise e\n\n\n try:\n with open(t.fspath) as f:\n inline += f.read()\n\n except FileNotFoundError:\n pass\n\n del d['title'] # Will cause it to be ignored in next section\n\n # Strip off the leading title, if it exists, because it will be re-applied\n # by the templates\n import re\n\n lines = inline.strip().splitlines()\n if lines and lines[0].startswith('# '):\n lines = lines[1:]\n\n context['inline_doc'] = '\\n'.join(lines)\n\n # Convert doc section\n doc_links = {}\n images = {}\n for term_name, terms in context['documentation'].items():\n if term_name == 'note':\n context['notes'] = terms\n else:\n for i, term in enumerate(terms):\n try:\n if term_name == 'image':\n images[term['title']] = term\n else:\n doc_links[term['title']] = term\n except AttributeError: # A scalar\n pass # There should not be any scalars in the documentation section\n except KeyError:\n pass # ignore entries without titles\n except TypeError:\n pass # Also probably a ascalar\n\n context['doc_links'] = doc_links\n context['images'] 
= images\n\n del context['documentation']\n\n #\n # Update contacts\n\n origin = None\n for term_name, terms in context['contacts'].items():\n if isinstance(terms, dict):\n origin = terms # Origin is a scalar in roort, must be converted to sequence here\n else:\n for t in terms:\n try:\n t.update(process_contact(t))\n except AttributeError:\n pass # Probably got a scalar\n if origin:\n origin.update(process_contact(origin))\n context['contacts']['origin'] = [origin]\n\n\n # For resources and references, convert scalars into lists of dicts, which are the\n # default for Datafiles and References.\n\n for section in ('references', 'resources'):\n for term_key, term_vals in context.get(section,{}).items():\n if isinstance(term_vals, dict):\n if '@value' in term_vals:\n term_vals['url'] = term_vals['@value']\n del term_vals['@value']\n new_term_vals = [term_vals]\n elif isinstance(term_vals, list):\n new_term_vals = None\n else:\n new_term_vals = [ {'url': term_vals, 'name': term_vals}]\n\n if new_term_vals:\n context[section][term_key] = new_term_vals\n\n context['distributions'] = {}\n for dist in doc.find('Root.Distribution'):\n context['distributions'][dist.type] = dist.value\n\n if doc.find('Root.Giturl'):\n context['distributions']['source'] = doc.get_value('Root.Giturl')\n\n return context", "response": "Create a Jinja context for display"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef breadth_first_vertex_order(vertices_resources, nets):\n # Special case: no vertices, just stop immediately\n if len(vertices_resources) == 0:\n return\n\n # Enumerate the set of nets attached to each vertex\n vertex_neighbours = defaultdict(set)\n for net in nets:\n # Note: Iterating over a Net object produces the set of vertices\n # involved in the net.\n vertex_neighbours[net.source].update(net)\n for sink in net.sinks:\n vertex_neighbours[sink].update(net)\n\n # Perform a breadth-first iteration over the 
vertices.\n unplaced_vertices = set(vertices_resources)\n vertex_queue = deque()\n while vertex_queue or unplaced_vertices:\n if not vertex_queue:\n vertex_queue.append(unplaced_vertices.pop())\n vertex = vertex_queue.popleft()\n\n yield vertex\n\n vertex_queue.extend(v for v in vertex_neighbours[vertex]\n if v in unplaced_vertices)\n unplaced_vertices.difference_update(vertex_neighbours[vertex])", "response": "A generator which iterates over a set of vertices in breadth-first order with respect to connectivity."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef place(vertices_resources, nets, machine, constraints, chip_order=None):\n return sequential_place(vertices_resources, nets,\n machine, constraints,\n breadth_first_vertex_order(vertices_resources,\n nets),\n chip_order)", "response": "Places vertices in breadth-first order onto chips in the machine."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef getRandomComic():\n\trandom.seed()\n\tnumComics = getLatestComicNum()\n\tnumber = random.randint(1, numComics)\n\treturn Comic(number)", "response": "Returns a Comic object for a randomly selected xkcd comic."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning a :class:`Comic` object with index equal to the provided argument.", "response": "def getComic(number, silent=True):\n\t\"\"\"\tProduces a :class:`Comic` object with index equal to the provided argument.\n\t\tPrints an error in the event of a failure (i.e. the number is less than zero\n\t\tor greater than the latest comic number) and returns an empty Comic object.\n\n\t\tArguments:\n\t\t\tan integer or string that represents a number, \"number\", that is the index of the comic in question.\n\n\t\t\tsilent: boolean, defaults to True. 
If set to False, an error will be printed\n\t\t\tto standard output should the provided integer argument not be valid.\n\n\t\tReturns the resulting Comic object for the provided index if successful,\n\t\tor a Comic object with -1 as the index if not.\"\"\"\n\tnumComics = getLatestComicNum()\n\t\n\tif type(number) is str and number.isdigit():\n\t\tnumber = int(number)\n\tif number > numComics or number <= 0:\n\t\tif not silent:\n\t\t\tprint(\"Error: You have requested an invalid comic.\")\n\t\treturn Comic(-1)\n\treturn Comic(number)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nparses the xkcd What If archive page and returns the dictionary produced by the parser.", "response": "def getWhatIfArchive():\n\t\"\"\"\tParses the xkcd What If archive. getWhatIfArchive passes the HTML text of\n\t\tthe archive page into a :class:`WhatIfArchiveParser` and then calls\n\t\tthe parser's :func:`WhatIfArchiveParser.getWhatIfs` method and returns the dictionary produced.\n\n\t\tThis function returns a dictionary mapping article numbers to :class:`WhatIf`\n\t\tobjects for every What If article published thus far. 
If the parsing fails,\n\t\tfor whatever reason, the dictionary will be empty.\"\"\"\n\tarchive = urllib.urlopen(archiveUrl)\n\ttext = archive.read()\n\tif sys.version_info[0] >= 3:\n\t\ttext = text.decode('utf-8')\n\tarchive.close()\n\n\tparser = WhatIfArchiveParser()\n\tparser.feed(text)\n\treturn parser.getWhatIfs()"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef getRandomWhatIf():\n\n\trandom.seed()\n\tarchive = getWhatIfArchive()\n\tlatest = getLatestWhatIfNum(archive)\n\tnumber = random.randint(1, latest)\n\treturn archive[number]", "response": "Returns a randomly generated WhatIf object using the Python standard library\n\trandom number generator."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef getWhatIf(number):\n\tarchive = getWhatIfArchive()\n\tlatest = getLatestWhatIfNum(archive)\n\t\n\tif type(number) is str and number.isdigit():\n\t\tnumber = int(number)\n\tif number > latest or latest <= 0:\n\t\treturn None\n\treturn archive[number]", "response": "Returns a : class:`WhatIf` object corresponding to the What If article of the passed index."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef convertToAscii(string, error=\"?\"):\n\n\trunning = True\n\tasciiString = string\n\twhile running:\n\t\ttry:\n\t\t\tasciiString = asciiString.encode('ascii')\n\t\texcept UnicodeError as unicode:\n\t\t\tstart = unicode.start\n\t\t\tend = unicode.end\n\t\t\tasciiString = asciiString[:start] + \"?\" + asciiString[end:]\n\t\telse:\n\t\t\trunning = False\n\treturn asciiString", "response": "This function is used to convert a unicode string to ASCII."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef download(self, output=\"\", outputFile=\"\", silent=True):\n\t\timage = urllib.urlopen(self.imageLink).read()\n\n\t\t#Process 
optional input to work out where the dowload will go and what it'll be called\n\t\tif output != \"\":\n\t\t\toutput = os.path.abspath(os.path.expanduser(output))\n\t\tif output == \"\" or not os.path.exists(output):\n\t\t\toutput = os.path.expanduser(os.path.join(\"~\", \"Downloads\"))\n\t\t\t# Create ~/Downloads if it doesn't exist, since this is the default path.\n\t\t\tif not os.path.exists(output):\n\t\t\t\tos.mkdir(output)\n\t\tif outputFile == \"\":\n\t\t\toutputFile = \"xkcd-\" + str(self.number) + \"-\" + self.imageName\n\n\t\toutput = os.path.join(output, outputFile)\n\t\ttry:\n\t\t\tdownload = open(output, 'wb')\n\t\texcept:\n\t\t\tif not silent:\n\t\t\t\tprint(\"Unable to make file \" + output)\n\t\t\treturn \"\"\n\t\tdownload.write(image)\n\t\tdownload.close()\n\t\treturn output", "response": "Downloads the image of the comic onto your computer."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreplaces local hyperlinks in RST with absolute addresses using the given base_url.", "response": "def replace_local_hyperlinks(\n text, base_url=\"https://github.com/project-rig/rig/blob/master/\"):\n \"\"\"Replace local hyperlinks in RST with absolute addresses using the given\n base URL.\n\n This is used to make links in the long description function correctly\n outside of the repository (e.g. when published on PyPi).\n\n NOTE: This may need adjusting if further syntax is used.\n \"\"\"\n def get_new_url(url):\n return base_url + url[2:]\n\n # Deal with anonymous URLS\n for match in re.finditer(r\"^__ (?P\\./.*)\", text, re.MULTILINE):\n orig_url = match.groupdict()[\"url\"]\n url = get_new_url(orig_url)\n\n text = re.sub(\"^__ {}\".format(orig_url),\n \"__ {}\".format(url), text, flags=re.MULTILINE)\n\n # Deal with named URLS\n for match in re.finditer(r\"^\\.\\. 
_(?P[^:]*): (?P\\./.*)\",\n text, re.MULTILINE):\n identifier = match.groupdict()[\"identifier\"]\n orig_url = match.groupdict()[\"url\"]\n url = get_new_url(orig_url)\n\n text = re.sub(\n \"^\\.\\. _{}: {}\".format(identifier, orig_url),\n \".. _{}: {}\".format(identifier, url),\n text, flags=re.MULTILINE)\n\n # Deal with image URLS\n for match in re.finditer(r\"^\\.\\. image:: (?P\\./.*)\",\n text, re.MULTILINE):\n orig_url = match.groupdict()[\"url\"]\n url = get_new_url(orig_url)\n\n text = text.replace(\".. image:: {}\".format(orig_url),\n \".. image:: {}\".format(url))\n\n return text"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ncreating a metatab file for the index", "response": "def dump_index(args, idx):\n \"\"\"Create a metatab file for the index\"\"\"\n\n import csv\n import sys\n\n from metatab import MetatabDoc\n\n doc = MetatabDoc()\n\n pack_section = doc.new_section('Packages', ['Identifier', 'Name', 'Nvname', 'Version', 'Format'])\n\n r = doc['Root']\n r.new_term('Root.Title', 'Package Index')\n\n for p in idx.list():\n\n pack_section.new_term('Package',\n p['url'],\n identifier=p['ident'],\n name=p['name'],\n nvname=p['nvname'],\n version=p['version'],\n format=p['format'])\n\n doc.write_csv(args.dump)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _setup(self):\n agent_config = CONF.get(\"AGENT\", {})\n self._worker_count = agent_config.get('worker_count')\n self._phys_net_map = agent_config.get(\n 'physical_network_vswitch_mappings', [])\n self._local_network_vswitch = agent_config.get(\n 'local_network_vswitch')\n self._load_physical_network_mappings(self._phys_net_map)\n self._validate_vswitches()\n\n self._endpoints.append(self)\n self._event_callback_pairs.extend([\n (self._utils.EVENT_TYPE_CREATE, self._process_added_port_event),\n (self._utils.EVENT_TYPE_DELETE, self._process_removed_port_event)\n ])\n\n 
tpool.set_num_threads(self._worker_count)", "response": "Setup the layer two agent."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _setup_rpc(self):\n self._plugin_rpc = agent_rpc.PluginApi(topics.PLUGIN)\n self._state_rpc = agent_rpc.PluginReportStateAPI(topics.PLUGIN)\n self._client = n_rpc.get_client(self.target)\n\n self._consumers.extend([\n [topics.PORT, topics.UPDATE], [topics.NETWORK, topics.DELETE],\n [topics.PORT, topics.DELETE]\n ])\n\n self._connection = agent_rpc.create_consumers(\n self._endpoints, self._topic, self._consumers,\n start_listening=False\n )\n self._setup_qos_extension()\n self._connection.consume_in_threads()\n\n report_interval = CONF.AGENT.report_interval\n if report_interval:\n heartbeat = loopingcall.FixedIntervalLoopingCall(\n self._report_state)\n heartbeat.start(interval=report_interval)", "response": "Setup the RPC client for the current agent."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _process_added_port_event(self, port_name):\n LOG.info(\"Hyper-V VM vNIC added: %s\", port_name)\n self._added_ports.add(port_name)", "response": "Callback for adding ports."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nload all the information regarding the physical network.", "response": "def _load_physical_network_mappings(self, phys_net_vswitch_mappings):\n \"\"\"Load all the information regarding the physical network.\"\"\"\n for mapping in phys_net_vswitch_mappings:\n parts = mapping.split(':')\n if len(parts) != 2:\n LOG.debug('Invalid physical network mapping: %s', mapping)\n else:\n pattern = re.escape(parts[0].strip()).replace('\\\\*', '.*')\n pattern = pattern + '$'\n vswitch = parts[1].strip()\n self._physical_network_mappings[pattern] = vswitch"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 
function\ndef _get_vswitch_name(self, network_type, physical_network):\n if network_type != constants.TYPE_LOCAL:\n vswitch_name = self._get_vswitch_for_physical_network(\n physical_network)\n else:\n vswitch_name = self._local_network_vswitch\n\n if vswitch_name:\n return vswitch_name\n\n err_msg = _(\"No vSwitch configured for physical network \"\n \"'%(physical_network)s'. Neutron network type: \"\n \"'%(network_type)s'.\")\n raise exception.NetworkingHyperVException(\n err_msg % dict(physical_network=physical_network,\n network_type=network_type))", "response": "Get the vswitch name for the received network information."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngets the vswitch name for the received network name.", "response": "def _get_vswitch_for_physical_network(self, phys_network_name):\n \"\"\"Get the vswitch name for the received network name.\"\"\"\n for pattern in self._physical_network_mappings:\n if phys_network_name is None:\n phys_network_name = ''\n if re.match(pattern, phys_network_name):\n return self._physical_network_mappings[pattern]"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngetting the vswitch name for the received port id.", "response": "def _get_network_vswitch_map_by_port_id(self, port_id):\n \"\"\"Get the vswitch name for the received port id.\"\"\"\n for network_id, vswitch in six.iteritems(self._network_vswitch_map):\n if port_id in vswitch['ports']:\n return (network_id, vswitch)\n\n # If the port was not found, just return (None, None)\n return (None, None)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nupdates the ports status cache.", "response": "def _update_port_status_cache(self, device, device_bound=True):\n \"\"\"Update the ports status cache.\"\"\"\n with self._cache_lock:\n if device_bound:\n self._bound_ports.add(device)\n self._unbound_ports.discard(device)\n else:\n 
self._bound_ports.discard(device)\n self._unbound_ports.add(device)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ncreating and binding the event listeners.", "response": "def _create_event_listeners(self):\n \"\"\"Create and bind the event listeners.\"\"\"\n LOG.debug(\"Create the event listeners.\")\n for event_type, callback in self._event_callback_pairs:\n LOG.debug(\"Create listener for %r event\", event_type)\n listener = self._utils.get_vnic_event_listener(event_type)\n eventlet.spawn_n(listener, callback)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nbind the port to the received network.", "response": "def _port_bound(self, port_id, network_id, network_type, physical_network,\n segmentation_id, port_security_enabled, set_port_sriov):\n \"\"\"Bind the port to the received network.\"\"\"\n LOG.debug(\"Binding port %s\", port_id)\n\n if network_id not in self._network_vswitch_map:\n self._provision_network(\n port_id, network_id, network_type,\n physical_network, segmentation_id)\n\n vswitch_map = self._network_vswitch_map[network_id]\n vswitch_map['ports'].append(port_id)\n\n LOG.debug(\"Trying to connect the current port to vswitch %r.\",\n vswitch_map['vswitch_name'])\n self._utils.connect_vnic_to_vswitch(\n vswitch_name=vswitch_map['vswitch_name'],\n switch_port_name=port_id,\n )\n if set_port_sriov:\n LOG.debug(\"Enabling SR-IOV for port: %s\", port_id)\n self._utils.set_vswitch_port_sriov(port_id, True)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef process_added_port(self, device_details):\n device = device_details['device']\n port_id = device_details['port_id']\n reprocess = True\n try:\n self._process_added_port(device_details)\n\n LOG.debug(\"Updating cached port %s status as UP.\", port_id)\n self._update_port_status_cache(device, device_bound=True)\n LOG.info(\"Port %s processed.\", port_id)\n except os_win_exc.HyperVvNicNotFound:\n 
LOG.debug('vNIC %s not found. This can happen if the VM was '\n 'destroyed.', port_id)\n reprocess = False\n except os_win_exc.HyperVPortNotFoundException:\n LOG.debug('vSwitch port %s not found. This can happen if the VM '\n 'was destroyed.', port_id)\n # NOTE(claudiub): just to be on the safe side, in case Hyper-V said\n # that the port was added, but it hasn't really, we're leaving\n # reprocess = True. If the VM / vNIC was removed, on the next\n # reprocess, a HyperVvNicNotFound will be raised.\n except Exception as ex:\n # NOTE(claudiub): in case of a non-transient error, the port will\n # be processed over and over again, and will not be reported as\n # bound (e.g.: InvalidParameterValue when setting QoS), until the\n # port is deleted. These issues have to be investigated and solved\n LOG.exception(\"Exception encountered while processing \"\n \"port %(port_id)s. Exception: %(ex)s\",\n dict(port_id=port_id, ex=ex))\n else:\n # no exception encountered, no need to reprocess.\n reprocess = False\n\n if reprocess:\n # Readd the port as \"added\", so it can be reprocessed.\n self._added_ports.add(device)\n\n # Force cache refresh.\n self._refresh_cache = True\n return False\n\n return True", "response": "Process the added ports."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _treat_devices_added(self):\n try:\n devices_details_list = self._plugin_rpc.get_devices_details_list(\n self._context, self._added_ports, self._agent_id)\n except Exception as exc:\n LOG.debug(\"Unable to get ports details for \"\n \"devices %(devices)s: %(exc)s\",\n {'devices': self._added_ports, 'exc': exc})\n return\n\n for device_details in devices_details_list:\n device = device_details['device']\n LOG.info(\"Adding port %s\", device)\n if 'port_id' in device_details:\n LOG.info(\"Port %(device)s updated. 
\"\n \"Details: %(device_details)s\",\n {'device': device, 'device_details': device_details})\n eventlet.spawn_n(self.process_added_port, device_details)\n else:\n LOG.debug(\"Missing port_id from device details: \"\n \"%(device)s. Details: %(device_details)s\",\n {'device': device, 'device_details': device_details})\n\n LOG.debug(\"Remove the port from added ports set, so it \"\n \"doesn't get reprocessed.\")\n self._added_ports.discard(device)", "response": "Process the new devices."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nprocessing the removed ports.", "response": "def _process_removed_port(self, device):\n \"\"\"Process the removed ports.\"\"\"\n LOG.debug(\"Trying to remove the port %r\", device)\n self._update_port_status_cache(device, device_bound=False)\n self._port_unbound(device, vnic_deleted=True)\n\n LOG.debug(\"The port was successfully removed.\")\n self._removed_ports.discard(device)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nprocesses the removed devices.", "response": "def _treat_devices_removed(self):\n \"\"\"Process the removed devices.\"\"\"\n for device in self._removed_ports.copy():\n eventlet.spawn_n(self._process_removed_port, device)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _work(self):\n if self._refresh_cache:\n # Inconsistent cache might cause exceptions. For example,\n # if a port has been removed, it will be known in the next\n # loop. 
Using the old switch port can cause exceptions.\n LOG.debug(\"Refreshing os_win caches...\")\n self._utils.update_cache()\n self._refresh_cache = False\n\n if self._bound_ports or self._unbound_ports:\n eventlet.spawn_n(self._notify_plugin_on_port_updates)\n\n # notify plugin about port deltas\n if self._added_ports:\n LOG.debug(\"Agent loop has new devices!\")\n self._treat_devices_added()\n\n if self._removed_ports:\n LOG.debug(\"Agent loop has lost devices...\")\n self._treat_devices_removed()", "response": "Process the information regarding the available ports."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef build(subparsers):\n\n parser = subparsers.add_parser(\n 'build',\n help='Build derived packages',\n description=build.__doc__,\n formatter_class=argparse.RawDescriptionHelpFormatter,\n epilog='')\n\n\n parser.set_defaults(run_command=run_metapack)\n\n parser.add_argument('metatabfile', nargs='?',\n help=\"Path or URL to a metatab file. If not provided, defaults to 'metadata.csv'. 
\"\n )\n\n parser.add_argument('-p', '--profile', help=\"Name of a BOTO or AWS credentials profile\", required=False)\n\n parser.add_argument('-D', '--package-directory', help=\"Write Zip, Excel and CSV packages to an alternate directory\",\n required=False)\n\n parser.add_argument('-F', '--force', action='store_true', default=False,\n help='Force some operations, like updating the name and building packages')\n\n parser.add_argument('-R', '--reuse-resources', action='store_true', default=False,\n help='When building Filesystem package, try to reuse resources built in prior build')\n\n group = parser.add_mutually_exclusive_group()\n\n group.add_argument('-n', '--nonversion-name', action='store_true', default=False,\n help='Write file packages with non-versioned names')\n\n group.add_argument('-N', '--nonversion-link', action='store_true', default=False,\n help='Create links with nonversioned names to file packages')\n\n parser.set_defaults(handler=None)\n\n\n\n ##\n ## Derived Package Group\n\n derived_group = parser.add_argument_group('Derived Packages', 'Generate other types of packages')\n\n derived_group.add_argument('-e', '--excel', action='store_true', default=False,\n help='Create an excel archive from a metatab file')\n\n derived_group.add_argument('-z', '--zip', action='store_true', default=False,\n help='Create a zip archive from a metatab file')\n\n derived_group.add_argument('-f', '--filesystem', action='store_true', default=False,\n help='Create a filesystem archive from a metatab file')\n\n derived_group.add_argument('-c', '--csv', action='store_true', default=False,\n help='Create a CSV archive from a metatab file')\n\n ##\n ## Administration Group\n\n admin_group = parser.add_argument_group('Administration', 'Information and administration')\n\n admin_group.add_argument('--clean-cache', default=False, action='store_true',\n help=\"Clean the download cache\")\n\n admin_group.add_argument('-C', '--clean', default=False, action='store_true',\n 
help=\"For some operations, like updating schemas, clear the section of existing terms first\")", "response": "Build source packages.\n\n The mp build program runs all of the resources listed in a Metatab file and\n produces one or more Metapack packages with those resources localized. It\n will always try to produce a Filesystem package, and may optionally produce\n Excel, Zip and CSV packages.\n\n Typical usage is to be run inside a source package directory with\n\n .. code-block:: bash\n\n $ mp build\n\n To build all of the package types: \n \n .. code-block:: bash\n\n $ mp build -fezc\n\n By default, packages are built with versioned names. The\n :option:`--nonversion-name` option will create file packages with\n non-versioned names, and the :option:`--nonversion-link` option will\n produce a non-versioned soft link pointing to the versioned file."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef metatab_derived_handler(m):\n from metapack.exc import PackageError\n from metapack.util import get_materialized_data_cache\n from shutil import rmtree\n\n create_list = []\n url = None\n\n doc = MetapackDoc(m.mt_file)\n\n env = get_lib_module_dict(doc)\n\n package_dir = m.package_root\n\n if m.args.package_directory:\n # If this is set, the FS package will be built to m.package_root, but the\n # file packages will be built to package_dir\n package_dir = parse_app_url(m.args.package_directory)\n\n update_name(m.mt_file, fail_on_missing=False, report_unchanged=False)\n\n process_schemas(m.mt_file, cache=m.cache, clean=m.args.clean, report_found=False)\n\n nv_name = m.args.nonversion_name\n nv_link = m.args.nonversion_link\n\n # Remove any data that may have been cached, for instance, from Jupyter notebooks\n rmtree(get_materialized_data_cache(doc), ignore_errors=True)\n\n reuse_resources=m.args.reuse_resources\n\n try:\n\n # Always create a filesystem package before ZIP or Excel, so we can use it as 
a source for\n # data for the other packages. This means that Transform processes and programs only need\n # to be run once.\n\n _, url, created = make_filesystem_package(m.mt_file, m.package_root, m.cache, env, m.args.force, False,\n nv_link, reuse_resources=reuse_resources)\n create_list.append(('fs', url, created))\n\n lb_path = Path( m.package_root.fspath,'last_build')\n\n if created or not lb_path.exists():\n Path( m.package_root.fspath,'last_build').touch()\n\n m.mt_file = url\n\n env = {} # Don't need it anymore, since no more programs will be run.\n\n if m.args.excel is not False:\n _, url, created = make_excel_package(m.mt_file, package_dir, m.cache, env, m.args.force, nv_name, nv_link)\n create_list.append(('xlsx', url, created))\n\n if m.args.zip is not False:\n _, url, created = make_zip_package(m.mt_file, package_dir, m.cache, env, m.args.force, nv_name, nv_link)\n create_list.append(('zip', url, created))\n\n if m.args.csv is not False:\n _, url, created = make_csv_package(m.mt_file, package_dir, m.cache, env, m.args.force, nv_name, nv_link)\n create_list.append(('csv', url, created))\n\n except PackageError as e:\n err(\"Failed to generate package: {}\".format(e))\n\n index_packages(m)\n\n return create_list", "response": "Create local Zip Excel and Filesystem packages and process the data."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ninitialize features that are normally initialized in the CLI", "response": "def init():\n \"\"\"Initialize features that are normally initialized in the CLI\"\"\"\n\n from metapack.appurl import SearchUrl\n import metapack as mp\n from os import environ\n\n SearchUrl.initialize() # This makes the 'index:\" urls work\n mp.Downloader.context.update(environ)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef read_struct_file(struct_data):\n # Holders for all structs\n structs = dict()\n\n # Holders for the current struct\n 
name = None\n\n # Iterate over every line in the file\n for i, l in enumerate(struct_data.splitlines()):\n # Empty the line of comments, if the line is empty then skip to the\n # next line. Split on whitespace to get the tokens.\n tokens = re_comment.sub(b\"\", l).strip().split()\n if len(tokens) == 0:\n continue\n elif len(tokens) == 3:\n # 3 tokens implies header data\n (key, _, value) = tokens\n\n if key == b\"name\":\n if name is not None:\n if structs[name].size is None:\n raise ValueError(\n \"size value missing for struct '{}'\".format(name))\n if structs[name].base is None:\n raise ValueError(\n \"base value missing for struct '{}'\".format(name))\n name = value\n structs[name] = Struct(name)\n elif key == b\"size\":\n structs[name].size = num(value)\n elif key == b\"base\":\n structs[name].base = num(value)\n else:\n raise ValueError(key)\n elif len(tokens) == 5:\n # 5 tokens implies entry in struct.\n (field, pack, offset, printf, default) = tokens\n\n # Convert the packing character from Perl to Python standard\n num_pack = re_numbered_pack.match(pack)\n if num_pack is not None:\n pack = (num_pack.group(\"num\") +\n perl_to_python_packs[num_pack.group(\"char\")])\n else:\n pack = perl_to_python_packs[pack]\n\n # If the field is an array then extract the length\n length = 1\n field_exp = re_array_field.match(field)\n if field_exp is not None:\n field = field_exp.group(\"field\")\n length = num(field_exp.group(\"length\"))\n\n structs[name][field] = StructField(pack, num(offset), printf,\n num(default), length)\n else:\n raise ValueError(\n \"line {}: Invalid syntax in struct file\".format(i))\n\n # Final check for setting size and base\n if structs[name].size is None:\n raise ValueError(\n \"size value missing for struct '{}'\".format(name))\n if structs[name].base is None:\n raise ValueError(\n \"base value missing for struct '{}'\".format(name))\n\n return structs", "response": "Interpret a struct file defining the location of variables in memory."} 
{"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nconverts a value from one of several bases to an int.", "response": "def num(value):\n \"\"\"Convert a value from one of several bases to an int.\"\"\"\n if re_hex_num.match(value):\n return int(value, base=16)\n else:\n return int(value)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreplaces the default values of specified fields in the internal structure.", "response": "def update_default_values(self, **updates):\n \"\"\"Replace the default values of specified fields.\n\n Parameters\n ----------\n Parameters are taken as keyword-arguments of `field=new_value`.\n\n Raises\n ------\n KeyError\n If a field doesn't exist in the struct.\n \"\"\"\n for (field, value) in six.iteritems(updates):\n fname = six.b(field)\n self[fname] = self[fname]._replace(default=value)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\npacks the struct into a string of bytes.", "response": "def pack(self):\n \"\"\"Pack the struct (and its default values) into a string of bytes.\n\n Returns\n -------\n :py:class:`bytes`\n Byte-string representation of struct containing default values.\n \"\"\"\n # Generate a buffer big enough to hold the packed values\n data = bytearray(b\"\\x00\" * self.size)\n\n # Iterate over the fields, pack each value in little-endian format and\n # insert into the buffered data.\n for field in six.itervalues(self.fields):\n packed_data = struct.pack(b\"<\" + field.pack_chars, field.default)\n data[field.offset:len(packed_data)+field.offset] = packed_data\n\n return bytes(data)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef next_future_job_delta(self) -> Optional[float]:\n job = self._get_next_future_job()\n if not job:\n return None\n return (job.at - datetime.now(timezone.utc)).total_seconds()", "response": "Give the amount of seconds 
before the next future job is due."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef sdram_alloc_for_vertices(controller, placements, allocations,\n core_as_tag=True, sdram_resource=SDRAM,\n cores_resource=Cores, clear=False):\n \"\"\"Allocate and return a file-like view of a region of SDRAM for each\n vertex which uses SDRAM as a resource.\n\n The tag assigned to each region of assigned SDRAM is the index of the\n first core that each vertex is assigned. For example::\n\n placements = {vertex: (0, 5)}\n allocations = {vertex: {Cores: slice(3, 6),\n SDRAM: slice(204, 304)}}\n sdram_allocations = sdram_alloc_for_vertices(\n controller, placements, allocations\n )\n\n Will allocate a 100-byte block of SDRAM for the vertex which is\n allocated cores 3-5 on chip (0, 5). The region of SDRAM will be tagged\n `3` (because this is the index of the first core).\n\n Parameters\n ----------\n controller : :py:class:`rig.machine_control.MachineController`\n Controller to use to allocate the SDRAM.\n placements : {vertex: (x, y), ...}\n Mapping of vertices to the chips they have been placed on. Same as\n produced by placers.\n allocations : {vertex: {resource: allocation, ...}, ...}\n Mapping of vertices to the resources they have been allocated.\n\n A block of memory of the size specified by the `sdram_resource`\n (default: :py:class:`~rig.place_and_route.SDRAM`) resource will be\n allocated for each vertex. Note that location of the supplied\n allocation is *not* used.\n\n When `core_as_tag=True`, the tag allocated will be the ID of the first\n core used by the vertex (indicated by the `cores_resource`, default\n :py:class:`~rig.place_and_route.Cores`), otherwise the tag will be set\n to 0.\n clear : bool\n If True the requested memory will be filled with zeros before the\n pointer is returned. 
If False (the default) the memory will be left\n as-is.\n\n Other Parameters\n ----------------\n core_as_tag : bool\n Use the index of the first allocated core as the tag for the region of\n memory, otherwise 0 will be used.\n sdram_resource : resource (default :py:class:`~rig.place_and_route.SDRAM`)\n Key used to indicate SDRAM usage in the resources dictionary.\n cores_resource : resource (default :py:class:`~rig.place_and_route.Cores`)\n Key used to indicate cores which have been allocated in the\n allocations dictionary.\n\n Returns\n -------\n {vertex: :py:class:`.MemoryIO`, ...}\n A file-like object for each vertex which can be used to read and write\n to the region of SDRAM allocated to the vertex.\n\n Raises\n ------\n rig.machine_control.machine_controller.SpiNNakerMemoryError\n If the memory cannot be allocated, or a tag is already taken or\n invalid.\n \"\"\"\n # For each vertex we perform an SDRAM alloc to get a file-like for\n # the vertex.\n vertex_memory = dict()\n for vertex, allocs in six.iteritems(allocations):\n if sdram_resource in allocs:\n sdram_slice = allocs[sdram_resource]\n assert sdram_slice.step is None\n\n size = sdram_slice.stop - sdram_slice.start\n x, y = placements[vertex]\n\n if core_as_tag:\n tag = allocs[cores_resource].start\n else:\n tag = 0\n\n # Get the memory\n vertex_memory[vertex] = controller.sdram_alloc_as_filelike(\n size, tag, x=x, y=y, clear=clear\n )\n\n return vertex_memory", "response": "Allocate and return a file-like view of a region of SDRAM for each vertex."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef advance_job_status(namespace: str, job: Job, duration: float,\n err: Optional[Exception]):\n \"\"\"Advance the status of a job depending on its execution.\n\n This function is called after a job has been executed. 
It calculates its\n next status and calls the appropriate signals.\n \"\"\"\n duration = human_duration(duration)\n if not err:\n job.status = JobStatus.SUCCEEDED\n logger.info('Finished execution of %s in %s', job, duration)\n return\n\n if job.should_retry:\n job.status = JobStatus.NOT_SET\n job.retries += 1\n if isinstance(err, RetryException) and err.at is not None:\n job.at = err.at\n else:\n job.at = (datetime.now(timezone.utc) +\n exponential_backoff(job.retries))\n\n signals.job_schedule_retry.send(namespace, job=job, err=err)\n\n log_args = (\n job.retries, job.max_retries + 1, job, duration,\n human_duration(\n (job.at - datetime.now(tz=timezone.utc)).total_seconds()\n )\n )\n if isinstance(err, RetryException):\n logger.info('Retry requested during execution %d/%d of %s '\n 'after %s, retry in %s', *log_args)\n else:\n logger.warning('Error during execution %d/%d of %s after %s, '\n 'retry in %s', *log_args)\n\n return\n\n job.status = JobStatus.FAILED\n signals.job_failed.send(namespace, job=job, err=err)\n logger.error(\n 'Error during execution %d/%d of %s after %s',\n job.max_retries + 1, job.max_retries + 1, job, duration,\n exc_info=err\n )", "response": "Advance the status of a job depending on its execution."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef tile(self, zoom, row, col):\n return Tile(self, zoom, row, col)", "response": "Return a Tile object of this TilePyramid."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef matrix_width(self, zoom):\n validate_zoom(zoom)\n width = int(math.ceil(self.grid.shape.width * 2**(zoom) / self.metatiling))\n return 1 if width < 1 else width", "response": "Returns the number of columns at zoom level."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef matrix_height(self, zoom):\n validate_zoom(zoom)\n height = int(math.ceil(self.grid.shape.height * 
2**(zoom) / self.metatiling))\n return 1 if height < 1 else height", "response": "Returns the number of rows at zoom level."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef tile_x_size(self, zoom):\n warnings.warn(DeprecationWarning(\"tile_x_size is deprecated\"))\n validate_zoom(zoom)\n return round(self.x_size / self.matrix_width(zoom), ROUND)", "response": "Return the width of a tile in SRID units at zoom level."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn the height of a tile in SRID units at zoom level zoom.", "response": "def tile_y_size(self, zoom):\n \"\"\"\n Height of a tile in SRID units at zoom level.\n\n - zoom: zoom level\n \"\"\"\n warnings.warn(DeprecationWarning(\"tile_y_size is deprecated\"))\n validate_zoom(zoom)\n return round(self.y_size / self.matrix_height(zoom), ROUND)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef tile_width(self, zoom):\n warnings.warn(DeprecationWarning(\"tile_width is deprecated\"))\n validate_zoom(zoom)\n matrix_pixel = 2**(zoom) * self.tile_size * self.grid.shape.width\n tile_pixel = self.tile_size * self.metatiling\n return matrix_pixel if tile_pixel > matrix_pixel else tile_pixel", "response": "Tile width in pixel."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef tile_height(self, zoom):\n warnings.warn(DeprecationWarning(\"tile_height is deprecated\"))\n validate_zoom(zoom)\n matrix_pixel = 2**(zoom) * self.tile_size * self.grid.shape.height\n tile_pixel = self.tile_size * self.metatiling\n return matrix_pixel if tile_pixel > matrix_pixel else tile_pixel", "response": "Return the height of the tile in pixel."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns the width of a pixel in SRID units at zoom level zoom.", "response": "def pixel_x_size(self, 
zoom):\n \"\"\"\n Width of a pixel in SRID units at zoom level.\n\n - zoom: zoom level\n \"\"\"\n validate_zoom(zoom)\n return round(\n (self.grid.right - self.grid.left) /\n (self.grid.shape.width * 2**zoom * self.tile_size),\n ROUND\n )"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns the height of a pixel in SRID units at zoom level zoom.", "response": "def pixel_y_size(self, zoom):\n \"\"\"\n Height of a pixel in SRID units at zoom level.\n\n - zoom: zoom level\n \"\"\"\n validate_zoom(zoom)\n return round(\n (self.grid.top - self.grid.bottom) /\n (self.grid.shape.height * 2**zoom * self.tile_size),\n ROUND\n )"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef tiles_from_bounds(self, bounds, zoom):\n validate_zoom(zoom)\n if not isinstance(bounds, tuple) or len(bounds) != 4:\n raise ValueError(\"bounds must be a tuple of left, bottom, right, top values\")\n if not isinstance(bounds, Bounds):\n bounds = Bounds(*bounds)\n if self.is_global:\n for tile in _global_tiles_from_bounds(self, bounds, zoom):\n yield tile\n else:\n for tile in _tiles_from_cleaned_bounds(self, bounds, zoom):\n yield tile", "response": "Yield all tiles intersecting with bounds."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef tiles_from_bbox(self, geometry, zoom):\n validate_zoom(zoom)\n return self.tiles_from_bounds(geometry.bounds, zoom)", "response": "Returns a list of metatiles intersecting with given bounding box."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef tiles_from_geom(self, geometry, zoom):\n validate_zoom(zoom)\n if geometry.is_empty:\n return\n if not geometry.is_valid:\n raise ValueError(\"no valid geometry: %s\" % geometry.type)\n if geometry.geom_type == \"Point\":\n yield self.tile_from_xy(geometry.x, geometry.y, zoom)\n elif geometry.geom_type == 
\"MultiPoint\":\n for point in geometry:\n yield self.tile_from_xy(point.x, point.y, zoom)\n elif geometry.geom_type in (\n \"LineString\", \"MultiLineString\", \"Polygon\", \"MultiPolygon\",\n \"GeometryCollection\"\n ):\n prepared_geometry = prep(clip_geometry_to_srs_bounds(geometry, self))\n for tile in self.tiles_from_bbox(geometry, zoom):\n if prepared_geometry.intersects(tile.bbox()):\n yield tile", "response": "Yields all tiles intersecting with input geometry."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn a tile covering a point defined by x y and zoom level.", "response": "def tile_from_xy(self, x, y, zoom, on_edge_use=\"rb\"):\n \"\"\"\n Return tile covering a point defined by x and y values.\n\n - x: x coordinate\n - y: y coordinate\n - zoom: zoom level\n - on_edge_use: determine which Tile to pick if Point hits a grid edge\n - rb: right bottom (default)\n - rt: right top\n - lt: left top\n - lb: left bottom\n \"\"\"\n validate_zoom(zoom)\n if x < self.left or x > self.right or y < self.bottom or y > self.top:\n raise ValueError(\"x or y are outside of grid bounds\")\n if on_edge_use not in [\"lb\", \"rb\", \"rt\", \"lt\"]:\n raise ValueError(\"on_edge_use must be one of lb, rb, rt or lt\")\n return _tile_from_xy(self, x, y, zoom, on_edge_use=on_edge_use)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_ladders_metadata(session, parsed):\n ladders = {}\n for ladder in parsed.find_all('a', href=re.compile(LADDER_URL_REGEX)):\n ladders[ladder.text] = get_ladder_metadata(session, ladder['href'])\n return ladders", "response": "Get metadata for all ladders."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_metadata(session, games):\n for data in games.values():\n parsed = make_scrape_request(session, data['url'])\n return {\n 'ladders': get_ladders_metadata(session, parsed)\n }", "response": "Get 
metadata for all games."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef update_metadata(session, path=DATA_PATH):\n with open(os.path.join(path, 'games.json')) as handle:\n games = json.loads(handle.read())\n \n for key, data in get_metadata(session, games).items():\n with open(os.path.join(path, '{}.json'.format(key)), 'w') as handle:\n handle.write(json.dumps(data, indent=2))", "response": "Update metadata files in the given path."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nsamples every router counter in the machine.", "response": "def sample_counters(mc, system_info):\n \"\"\"Sample every router counter in the machine.\"\"\"\n return {\n (x, y): mc.get_router_diagnostics(x, y) for (x, y) in system_info\n }"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn the change in counter values.", "response": "def deltas(last, now):\n \"\"\"Return the change in counter values (accounting for wrap-around).\"\"\"\n return {\n xy: RouterDiagnostics(*((n - l) & 0xFFFFFFFF\n for l, n in zip(last[xy], now[xy])))\n for xy in last\n }"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef monitor_counters(mc, output, counters, detailed, f):\n # Print CSV header\n output.write(\"time,{}{}\\n\".format(\"x,y,\" if detailed else \"\",\n \",\".join(counters)))\n\n system_info = mc.get_system_info()\n\n # Make an initial sample of the counters\n last_counter_values = sample_counters(mc, system_info)\n\n start_time = time.time()\n\n for _ in f():\n # Snapshot the change in counter values\n counter_values = sample_counters(mc, system_info)\n delta = deltas(last_counter_values, counter_values)\n last_counter_values = counter_values\n\n now = time.time() - start_time\n\n # Output the changes\n if detailed:\n for x, y in sorted(system_info):\n output.write(\"{:0.1f},{},{},{}\\n\".format(\n now, x, y,\n 
\",\".join(str(getattr(delta[(x, y)], c))\n for c in counters)))\n else:\n totals = [0 for _ in counters]\n for xy in sorted(system_info):\n for i, counter in enumerate(counters):\n totals[i] += getattr(delta[xy], counter)\n output.write(\"{:0.1f},{}\\n\".format(\n now, \",\".join(map(str, totals))))", "response": "Monitor the counters on a specified machine taking a snap - shot every\n time the generator f yields."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef press_enter(multiple=False, silent=False):\n\n def f():\n try:\n while True:\n if silent:\n yield input()\n else:\n sys.stderr.write(\" \")\n sys.stderr.flush()\n yield input()\n if not multiple:\n break\n except (EOFError, KeyboardInterrupt):\n # User Ctrl+D or Ctrl+C'd\n if not silent:\n # Prevents the user's terminal getting clobbered\n sys.stderr.write(\"\\n\")\n sys.stderr.flush()\n return\n\n return f", "response": "Return a generator function which yields every time the user presses\n return."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef listen(timeout=6.0, port=BOOT_PORT):\n\n s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)\n\n # Don't take control of this socket in the system (i.e. 
allow other\n # processes to bind to it) since we're listening for broadcasts.\n s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)\n\n # Listen for the broadcasts\n s.bind(('0.0.0.0', port))\n s.settimeout(timeout)\n try:\n message, (ipaddr, port) = s.recvfrom(512)\n return ipaddr\n except socket.timeout:\n return None", "response": "Listen for a ping broadcast message from an unbooted SpiNNaker board on port 54321."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef clear_cache(m, files_processed):\n\n for what, reason, url, path in files_processed:\n cp = m.doc.downloader.cache_path(url)\n\n if m.cache.exists(cp):\n m.cache.remove(cp)", "response": "Remove any files we've uploaded from the cache."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef fill_categorical_na(df, nan_cat='NA'):\n for col in df.columns[df.isna().any()].tolist():\n\n if df[col].dtype.name != 'category':\n # If not categorical, fill with a blank, which creates an\n # empty cell in CSV.\n df[col] = df[col].fillna('')\n else:\n\n try:\n df[col].cat.add_categories([nan_cat], inplace=True)\n except ValueError:\n pass\n\n df[col] = df[col].fillna(nan_cat)\n\n return df", "response": "Fill NaN values in a dataframe: categorical columns get an extra placeholder category, all other columns are filled with an empty string."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef executor_ips(sc):\r\n try:\r\n app_id = sc.applicationId\r\n except AttributeError:\r\n app_id = sc.getConf().get('spark.app.id')\r\n # for getting the url (see: https://github.com/apache/spark/pull/15000)\r\n try:\r\n base_url = sc.uiWebUrl\r\n except AttributeError:\r\n base_url = sc._jsc.sc().uiWebUrl().get()\r\n url = base_url + '/api/v1/applications/' + app_id + '/executors'\r\n try:\r\n data = json.load(urlopen(url))\r\n except TypeError:\r\n response = urlopen(url)\r\n str_response = response.read().decode('utf-8')\r\n data = 
json.loads(str_response)\r\n ips = set(map(lambda x: x[u'hostPort'].split(':')[0], data))\r\n return ips", "response": "Gets the unique IP addresses of the executors of the current application."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef change(sc=None, app_name='customSpark', master=None, wait='ips', min_cores=None, min_ips=None, timeout=30,\r\n refresh_rate=0.5, fail_on_timeout=False, **kwargs):\r\n \"\"\" Returns a new Spark Context (sc) object with added properties set\r\n\r\n :param sc: current SparkContext if None will create a new one\r\n :param app_name: name of new spark app\r\n :param master: url to master, if None will get from current sc\r\n :param wait: when to return after asking for a new sc (or max of timeout seconds):\r\n 'ips': wait for all the previous ips that were connected to return (needs sc to not be None)\r\n 'cores': wait for min_cores\r\n None: return immediately\r\n :param min_cores: when wait is 'cores' will wait until defaultParallelism is back to at least this value.\r\n if None will be set to defaultParallelism.\r\n :param min_ips: when wait is 'ips' will wait until number of unique executor ips is back to at least this value.\r\n if None will be set to what the original sc had.\r\n :param timeout: max time in seconds to wait for new sc if wait is 'ips' or 'cores'\r\n :param fail_on_timeout: whether to fail if the timeout has been reached\r\n :param refresh_rate: how long to wait in seconds between each check of defaultParallelism\r\n :param kwargs: added properties to set. In the form of key value pairs (replaces '.' 
with '_' in key)\r\n examples: spark_task_cores='1', spark_python_worker_memory='8g'\r\n see: http://spark.apache.org/docs/latest/configuration.html\r\n :return: a new SparkContext\r\n \"\"\"\r\n # checking input\r\n if master is None and sc is None:\r\n raise ValueError('Both master and sc are None')\r\n if master is None:\r\n master = sc.getConf().get(u'spark.master')\r\n if wait == 'ips':\r\n if sc is None:\r\n if min_ips is None:\r\n min_ips = 1\r\n elif min_ips is None:\r\n min_ips = len(executor_ips(sc))\r\n elif wait == 'cores':\r\n if min_cores is None:\r\n if sc is None:\r\n logging.getLogger('pySparkUtils').info('Both sc and min_cores are None: setting target_cores to 2')\r\n min_cores = 2\r\n else:\r\n min_cores = sc.defaultParallelism\r\n logging.getLogger('pySparkUtils').info('min_cores is None: setting target_cores to: %d' % min_cores)\r\n elif wait is not None:\r\n raise ValueError(\"wait should be: ['ips','cores',None] got: %s\" % wait)\r\n\r\n if sc is not None:\r\n logging.getLogger('pySparkUtils').info('Stopping original sc with %d cores and %d executors' %\r\n (sc.defaultParallelism, len(executor_ips(sc))))\r\n sc.stop()\r\n\r\n # building a new configuration with added arguments\r\n conf = SparkConf().setMaster(master).setAppName(app_name)\r\n for key in kwargs.keys():\r\n name = key.replace('_', '.', 100)\r\n value = kwargs[key]\r\n conf = conf.set(name, value)\r\n logging.getLogger('pySparkUtils').info('Setting %s to: %s' % (name, value))\r\n\r\n # starting the new context and waiting for defaultParallelism to get back to original value\r\n sc = SparkContext(conf=conf)\r\n if wait == 'cores':\r\n total_time = 0\r\n while sc.defaultParallelism < min_cores and total_time < timeout:\r\n time.sleep(refresh_rate)\r\n total_time += refresh_rate\r\n if fail_on_timeout and total_time >= timeout:\r\n sc.stop()\r\n raise RuntimeError('Time out reached when changing sc')\r\n elif wait == 'ips':\r\n total_time = 0\r\n while len(executor_ips(sc)) < 
min_ips and total_time < timeout:\r\n time.sleep(refresh_rate)\r\n total_time += refresh_rate\r\n if fail_on_timeout and total_time >= timeout:\r\n sc.stop()\r\n raise RuntimeError('Time out reached when changing sc')\r\n logging.getLogger('pySparkUtils').info('Returning new sc with %d cores and %d executors' %\r\n (sc.defaultParallelism, len(executor_ips(sc))))\r\n return sc", "response": "Returns a new SparkContext with the given properties set, optionally waiting until the requested executors or cores are available."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef regroup(rdd, groups=10, check_first=False):\r\n if check_first:\r\n first = rdd.first()\r\n if isinstance(first, (list, tuple, collections.Iterable)):\r\n if len(first) != 2:\r\n raise ValueError('first item was not length 2: %d' % len(first))\r\n else:\r\n raise ValueError('first item was wrong type: %s' % type(first))\r\n rdd = rdd.map(lambda kv: (kv[0] % groups, (kv[0], kv[1])), preservesPartitioning=True)\r\n return rdd.groupByKey().mapValues(list)", "response": "Regroups a keyed rdd into a given number of groups by key modulo."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef save_rdd_as_pickle(rdd, path, batch_size=10, overwrite=False):\r\n if os.path.isdir(path):\r\n if overwrite:\r\n logging.getLogger('pySparkUtils').info('Deleting files from: %s' % path)\r\n shutil.rmtree(path)\r\n logging.getLogger('pySparkUtils').info('Done deleting files from: %s' % path)\r\n else:\r\n logging.getLogger('pySparkUtils').error('Directory %s already exists '\r\n 'and overwrite is false' % path)\r\n raise IOError('Directory %s already exists and overwrite is false'\r\n % path)\r\n rdd.glom().saveAsPickleFile(path, batchSize=batch_size)\r\n logging.getLogger('pySparkUtils').info('Saved rdd as pickle to: %s' % path)", "response": "Saves an rdd as one pickle file per partition."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the 
following Python 3 code\ndef load_rdd_from_pickle(sc, path, min_partitions=None, return_type='images'):\r\n if min_partitions is None:\r\n min_partitions = sc.defaultParallelism\r\n rdd = sc.pickleFile(path, minPartitions=min_partitions)\r\n rdd = rdd.flatMap(lambda x: x)\r\n if return_type == 'images':\r\n result = td.images.fromrdd(rdd).repartition(min_partitions)\r\n elif return_type == 'series':\r\n result = td.series.fromrdd(rdd).repartition(min_partitions)\r\n elif return_type == 'rdd':\r\n result = rdd.repartition(min_partitions)\r\n else:\r\n raise ValueError('return_type not supported: %s' % return_type)\r\n logging.getLogger('pySparkUtils').info('Loaded rdd from: %s as type: %s'\r\n % (path, return_type))\r\n return result", "response": "Loads an RDD that was saved as one pickle file per partition and returns it as images, series or a plain rdd depending on return_type."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef geo(self):\n import geopandas as gpd\n from shapely.geometry.polygon import BaseGeometry\n from shapely.wkt import loads\n\n gdf = gpd.GeoDataFrame(self)\n\n first = next(gdf.iterrows())[1].geometry\n\n if isinstance(first, str):\n # We have a GeoDataframe, but the geometry column is still strings, so\n # it must be converted\n shapes = [ loads(row['geometry']) for i, row in gdf.iterrows()]\n\n elif not isinstance(first, BaseGeometry):\n # If we are reading a metatab package, the geometry column's type should be\n # 'geometry' which will give the geometry values class type of\n # rowpipe.valuetype.geo.ShapeValue. 
However, there are other\n # types of objects that have a 'shape' property.\n\n shapes = [row['geometry'].shape for i, row in gdf.iterrows()]\n\n else:\n shapes = gdf['geometry']\n\n gdf['geometry'] = gpd.GeoSeries(shapes)\n gdf.set_geometry('geometry')\n return gdf", "response": "Return a geopandas dataframe"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nyields the rows of the table.", "response": "def rows(self):\n \"\"\"Yield rows like a partition does, with a header first, then rows. \"\"\"\n\n yield [self.index.name] + list(self.columns)\n\n for t in self.itertuples():\n yield list(t)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nchecking whether the node_set properties match the given model type definition", "response": "def _matches_node_set_props(type_info, node_set_props):\n \"\"\"\n Check whether the node_set properties match the given model type definition\n \"\"\"\n matches = None\n for key in node_set_props:\n ns_val = node_set_props[key]\n if key in type_info:\n if ns_val==type_info[key]:\n if matches:\n matches = matches and True\n else:\n matches = True\n else:\n matches = False\n return matches"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ngets a NeuroML document from a Sonata model", "response": "def get_neuroml_from_sonata(sonata_filename, id, generate_lems = True, format='xml'):\n \"\"\"\n Return a NeuroMLDocument with (most of) the contents of the Sonata model \n \"\"\"\n \n from neuroml.hdf5.NetworkBuilder import NetworkBuilder\n neuroml_handler = NetworkBuilder()\n \n sr = SonataReader(filename=sonata_filename, id=id)\n \n sr.parse(neuroml_handler) \n \n nml_doc = neuroml_handler.get_nml_doc()\n \n sr.add_neuroml_components(nml_doc)\n \n if format == 'xml':\n nml_file_name = '%s.net.nml'%id\n from neuroml.writers import NeuroMLWriter\n NeuroMLWriter.write(nml_doc, nml_file_name)\n \n elif format == 'hdf5':\n nml_file_name 
= '%s.net.nml.h5'%id\n from neuroml.writers import NeuroMLHdf5Writer\n NeuroMLHdf5Writer.write(nml_doc, nml_file_name)\n \n print_v('Written to: %s'%nml_file_name) \n \n if generate_lems:\n lems_file_name = sr.generate_lems_file(nml_file_name, nml_doc)\n \n return sr, lems_file_name, nml_file_name, nml_doc\n \n return nml_doc"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef subs(self, path):\n #print_v('Checking for: \\n %s, \\n %s \\n in %s'%(self.substitutes,self.init_substitutes,path))\n if type(path) == int or type(path) == float:\n return path\n for s in self.init_substitutes:\n if path.startswith(s):\n path = path.replace(s,self.init_substitutes[s], 1)\n #print_v(' So far: %s'%path)\n for s in self.substitutes:\n if s in path:\n path = path.replace(s,self.substitutes[s])\n #print_v(' Returning: %s'%path)\n return path", "response": "Search the strings in a config file for a substitutable value"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef parse(self, handler):\n\n ########################################################################\n # load the main configuration scripts \n \n main_config_filename = os.path.abspath(self.parameters['filename'])\n \n config = load_json(main_config_filename)\n \n self.init_substitutes = {'.':'%s/'%os.path.dirname(main_config_filename),\n '../':'%s/'%os.path.dirname(os.path.dirname(main_config_filename))}\n \n self.substitutes = {'${configdir}':'%s'%os.path.dirname(main_config_filename)}\n \n if 'network' in config:\n self.network_config = load_json(self.subs(config['network']))\n else:\n self.network_config = config\n \n if 'simulation' in config:\n self.simulation_config = load_json(self.subs(config['simulation']))\n else:\n self.simulation_config = None\n \n for m in self.network_config['manifest']:\n path = self.subs(self.network_config['manifest'][m])\n self.substitutes[m] = path\n \n if 'id' in 
self.parameters:\n id = self.parameters['id']\n else:\n id = 'SonataNetwork'\n \n if id[0].isdigit(): # id like 9_cells is not a valid id for NeuroML\n id='NML2_%s'%id\n \n ########################################################################\n # Feed the handler the info on the network \n \n self.handler = handler\n \n notes = \"Network read in from Sonata: %s\"%main_config_filename\n handler.handle_document_start(id, notes)\n \n handler.handle_network(id, notes)\n \n self.node_types = {}\n \n ########################################################################\n # Get info from nodes files \n \n for n in self.network_config['networks']['nodes']:\n nodes_file = self.subs(n['nodes_file'])\n node_types_file = self.subs(n['node_types_file'])\n \n print_v(\"\\nLoading nodes from %s and %s\"%(nodes_file,node_types_file))\n\n h5file=tables.open_file(nodes_file,mode='r')\n\n print_v(\"Opened HDF5 file: %s\"%(h5file.filename))\n self.parse_group(h5file.root.nodes)\n h5file.close()\n self.node_types[self.current_sonata_pop] = load_csv_props(node_types_file)\n self.current_sonata_pop = None\n \n \n \n ########################################################################\n # Get info from edges files \n \n self.edges_info = {}\n self.conn_info = {}\n \n if 'edges' in self.network_config['networks']:\n for e in self.network_config['networks']['edges']:\n edges_file = self.subs(e['edges_file'])\n edge_types_file = self.subs(e['edge_types_file'])\n\n print_v(\"\\nLoading edges from %s and %s\"%(edges_file,edge_types_file))\n\n h5file=tables.open_file(edges_file,mode='r')\n\n print_v(\"Opened HDF5 file: %s\"%(h5file.filename))\n self.parse_group(h5file.root.edges)\n h5file.close()\n self.edges_info[self.current_edge] = load_csv_props(edge_types_file)\n self.current_edge = None\n \n \n ########################################################################\n # Use extracted node/cell info to create populations\n \n for sonata_pop in self.cell_info:\n types_vs_pops = 
{}\n \n for type in self.cell_info[sonata_pop]['type_count']:\n node_type_info = self.node_types[sonata_pop][type]\n \n model_name_type = node_type_info['model_name'] if 'model_name' in node_type_info \\\n else (node_type_info['pop_name'] if 'pop_name' in node_type_info else node_type_info['model_type'])\n \n model_type = node_type_info['model_type']\n model_template = node_type_info['model_template'] if 'model_template' in node_type_info else '- None -'\n \n nml_pop_id = '%s_%s_%s'%(sonata_pop,model_name_type,type) \n \n print_v(\" - Adding population: %s which has model info: %s\"%(nml_pop_id, node_type_info))\n \n size = self.cell_info[sonata_pop]['type_count'][type]\n \n if model_type=='point_process' and model_template=='nrn:IntFire1':\n raise Exception('Point process model not currently supported: %s\\nTry expressing the I&F cell in NEST format with nest:iaf_psc_alpha'%model_template)\n pop_comp = 'cell_%s'%nml_pop_id #model_template.replace(':','_')\n self.pop_comp_info[pop_comp] = {}\n self.pop_comp_info[pop_comp]['model_type'] = model_type\n \n dynamics_params_file = self.subs(self.network_config['components']['point_neuron_models_dir']) +'/'+node_type_info['dynamics_params']\n self.pop_comp_info[pop_comp]['dynamics_params'] = load_json(dynamics_params_file)\n \n elif model_type=='point_process' and model_template=='nest:iaf_psc_alpha':\n pop_comp = 'cell_%s'%nml_pop_id # = model_template.replace(':','_')\n self.pop_comp_info[pop_comp] = {}\n self.pop_comp_info[pop_comp]['model_type'] = model_type\n self.pop_comp_info[pop_comp]['model_template'] = model_template\n \n dynamics_params_file = self.subs(self.network_config['components']['point_neuron_models_dir']) +'/'+node_type_info['dynamics_params']\n self.pop_comp_info[pop_comp]['dynamics_params'] = load_json(dynamics_params_file)\n else:\n pop_comp = DUMMY_CELL\n self.pop_comp_info[pop_comp] = {}\n self.pop_comp_info[pop_comp]['model_type'] = pop_comp\n \n self.nml_pop_vs_comps[nml_pop_id] = pop_comp\n \n 
properties = {}\n\n properties['type_id']=type\n properties['sonata_population']=sonata_pop\n properties['region']=sonata_pop\n \n for i in node_type_info:\n properties[i]=node_type_info[i]\n if i=='ei':\n properties['type']=node_type_info[i].upper()\n \n color = '%s %s %s'%(self.myrandom.random(),self.myrandom.random(),self.myrandom.random())\n \n try:\n import opencortex.utils.color as occ\n interneuron = 'SOM' in nml_pop_id or 'PV' in nml_pop_id\n if 'L23' in nml_pop_id:\n color = occ.L23_INTERNEURON if interneuron else occ.L23_PRINCIPAL_CELL\n pop.properties.append(neuroml.Property('region','L23'))\n if 'L4' in nml_pop_id:\n color = occ.L4_INTERNEURON if interneuron else occ.L4_PRINCIPAL_CELL\n pop.properties.append(neuroml.Property('region','L4'))\n if 'L5' in nml_pop_id:\n color = occ.L5_INTERNEURON if interneuron else occ.L5_PRINCIPAL_CELL\n pop.properties.append(neuroml.Property('region','L5'))\n if 'L6' in nml_pop_id:\n color = occ.L6_INTERNEURON if interneuron else occ.L6_PRINCIPAL_CELL\n pop.properties.append(neuroml.Property('region','L6'))\n except:\n pass # Don't specify a particular color, use random, not a problem...\n \n properties['color']=color\n if True or not 'locations' in self.cell_info[sonata_pop]['0']:\n \n properties={} ############# temp for LEMS...\n \n if model_type != 'virtual':\n \n self.handler.handle_population(nml_pop_id, \n pop_comp, \n size,\n component_obj=None,\n properties=properties)\n \n types_vs_pops[type] = nml_pop_id\n self.cell_info[sonata_pop]['pop_count'] = {} \n self.cell_info[sonata_pop]['pop_map'] = {} \n \n for i in self.cell_info[sonata_pop]['types']:\n \n pop = types_vs_pops[self.cell_info[sonata_pop]['types'][i]]\n \n if not pop in self.cell_info[sonata_pop]['pop_count']:\n self.cell_info[sonata_pop]['pop_count'][pop] = 0\n \n index = self.cell_info[sonata_pop]['pop_count'][pop]\n self.cell_info[sonata_pop]['pop_map'][i] = (pop, index)\n \n if not pop in self.nml_ids_vs_gids:\n self.nml_ids_vs_gids[pop] = {}\n 
self.nml_ids_vs_gids[pop][index] = (sonata_pop, i)\n \n if i in self.cell_info[sonata_pop]['0']['locations']: \n if not pop in self.nml_pops_having_locations: \n self.nml_pops_having_locations.append(pop)\n pos = self.cell_info[sonata_pop]['0']['locations'][i]\n #print('Adding pos %i: %s'%(i,pos))\n self.handler.handle_location(index, \n pop, \n pop_comp, \n pos['x'] if 'x' in pos and pos['x'] is not None else 0, \n pos['y'] if 'y' in pos and pos['y'] is not None else 0, \n pos['z'] if 'z' in pos and pos['z'] is not None else 0)\n \n self.cell_info[sonata_pop]['pop_count'][pop]+=1\n \n \n ########################################################################\n # Load simulation info into self.simulation_config\n \n if self.simulation_config:\n \n if self.simulation_config:\n for m in self.simulation_config['manifest']:\n path = self.subs(self.simulation_config['manifest'][m])\n self.substitutes[m] = path\n \n for s1 in ['output']:\n for k in self.simulation_config[s1]:\n self.simulation_config[s1][k] = self.subs(self.simulation_config[s1][k])\n \n for s1 in ['inputs']:\n for s2 in self.simulation_config[s1]:\n for k in self.simulation_config[s1][s2]:\n self.simulation_config[s1][s2][k] = self.subs(self.simulation_config[s1][s2][k])\n \n if 'node_sets_file' in self.simulation_config:\n node_sets = load_json(self.subs(self.simulation_config['node_sets_file']))\n self.simulation_config['node_sets'] = node_sets \n \n if not 'node_sets' in self.simulation_config:\n self.simulation_config['node_sets'] = {}\n \n for sonata_pop in self.cell_info:\n self.node_set_mappings[sonata_pop] = {}\n for sindex in self.cell_info[sonata_pop]['pop_map']:\n nml_pop = self.cell_info[sonata_pop]['pop_map'][sindex][0]\n nml_index = self.cell_info[sonata_pop]['pop_map'][sindex][1]\n\n # Add all in this sonata_pop to a 'node_set' named after the sonata_pop\n if not nml_pop in self.node_set_mappings[sonata_pop]:\n self.node_set_mappings[sonata_pop][nml_pop] = []\n 
self.node_set_mappings[sonata_pop][nml_pop].append(nml_index)\n \n #pp.pprint(self.simulation_config)\n #pp.pprint(self.pop_comp_info)\n \n for node_set in self.simulation_config['node_sets']:\n self.node_set_mappings[node_set] = {}\n node_set_props = self.simulation_config['node_sets'][node_set]\n #print_v('===========Checking which cells in pops match node_set: %s = %s'%(node_set,node_set_props))\n \n for sonata_pop in self.cell_info:\n for sindex in self.cell_info[sonata_pop]['pop_map']:\n #print('Does %s %s match %s?'%(sonata_pop, sindex, node_set_props))\n \n type = self.cell_info[sonata_pop]['types'][sindex]\n type_info = self.node_types[sonata_pop][type]\n nml_pop = self.cell_info[sonata_pop]['pop_map'][sindex][0]\n nml_index = self.cell_info[sonata_pop]['pop_map'][sindex][1]\n \n if 'population' in node_set_props and node_set_props['population'] == sonata_pop:\n if 'node_id' in node_set_props and sindex in node_set_props['node_id']:\n if not nml_pop in self.node_set_mappings[node_set]:\n self.node_set_mappings[node_set][nml_pop] = []\n self.node_set_mappings[node_set][nml_pop].append(nml_index)\n \n matches = _matches_node_set_props(type_info, node_set_props)\n #print_v('Node %i in %s (NML: %s[%i]) has type %s (%s); matches: %s'%(sindex, sonata_pop, nml_pop, nml_index, type, type_info, matches))\n if matches:\n if not nml_pop in self.node_set_mappings[node_set]:\n self.node_set_mappings[node_set][nml_pop] = []\n self.node_set_mappings[node_set][nml_pop].append(nml_index)\n \n ##pp.pprint(self.node_set_mappings)\n \n \n ########################################################################\n # Extract info from inputs in simulation_config\n #pp.pprint(self.simulation_config)\n \n for input in self.simulation_config['inputs']:\n info = self.simulation_config['inputs'][input]\n #print_v(\" - Adding input: %s which has info: %s\"%(input, info)) \n \n self.input_comp_info[input] = {}\n self.input_comp_info[input][info['input_type']] = {}\n \n node_set = 
info['node_set']\n \n if info['input_type'] == 'current_clamp':\n \n comp = 'PG_%s'%input\n self.input_comp_info[input][info['input_type']][comp] = {'amp':info['amp'],'delay':info['delay'],'duration':info['duration']}\n \n for nml_pop_id in self.node_set_mappings[node_set]:\n \n input_list_id = 'il_%s_%s'%(input,nml_pop_id)\n indices = self.node_set_mappings[node_set][nml_pop_id]\n \n self.handler.handle_input_list(input_list_id, \n nml_pop_id, \n comp, \n len(indices))\n count = 0\n for index in indices:\n self.handler.handle_single_input(input_list_id, \n count, \n cellId = index, \n segId = 0, \n fract = 0.5)\n count+=1\n \n \n elif info['input_type'] == 'spikes':\n node_info = self.cell_info[node_set]\n \n from pyneuroml.plot.PlotSpikes import read_sonata_spikes_hdf5_file\n\n ids_times = read_sonata_spikes_hdf5_file(self.subs(info['input_file']))\n \n for id in ids_times:\n times = ids_times[id]\n if id in node_info['pop_map']:\n nml_pop_id, cell_id = node_info['pop_map'][id] \n print_v(\"Cell %i in Sonata node set %s (cell %s in nml pop %s) has %i spikes\"%(id, node_set, nml_pop_id, cell_id, len(times)))\n\n component = '%s__%i'%(nml_pop_id,cell_id)\n\n self.input_comp_info[input][info['input_type']][component] ={'id': cell_id, 'times': times}\n\n '''\n input_list_id = 'il_%s_%i'%(input,cell_id)\n self.handler.handle_input_list(input_list_id, \n nml_pop_id, \n component, \n 1)\n\n self.handler.handle_single_input(input_list_id, \n 0, \n cellId = cell_id, \n segId = 0, \n fract = 0.5)\n '''\n else:\n print_v(\"Cell %i in Sonata node set %s NOT FOUND!\"%(id, node_set))\n \n else:\n raise Exception(\"Sonata input type not yet supported: %s\"%(info['input_type']))\n\n\n ########################################################################\n # Use extracted edge info to create connections\n \n projections_created = []\n for conn in self.conn_info:\n \n pre_node = self.conn_info[conn]['pre_node']\n post_node = self.conn_info[conn]['post_node']\n \n for i in 
range(len(self.conn_info[conn]['pre_id'])):\n pre_id = self.conn_info[conn]['pre_id'][i]\n post_id = self.conn_info[conn]['post_id'][i]\n nsyns = self.conn_info[conn]['nsyns'][i] if 'nsyns' in self.conn_info[conn] else 1\n type = self.conn_info[conn]['edge_type_id'][i]\n #print_v(' Conn with %i syns, type %s: %s(%s) -> %s(%s)'%(nsyns,type,pre_node,pre_id,post_node,post_id))\n pre_pop,pre_i = self.cell_info[pre_node]['pop_map'][pre_id]\n post_pop,post_i = self.cell_info[post_node]['pop_map'][post_id]\n #print_v(' Mapped: Conn %s(%s) -> %s(%s)'%(pre_pop,pre_i,post_pop,post_i))\n # print self.edges_info[conn][type]\n #print self.cell_info[pre_node]\n #print 11\n #print self.node_types[pre_node]\n #print 22\n cell_type_pre = self.cell_info[pre_node]['types'][pre_id]\n #print cell_type_pre\n #print 444\n pop_type_pre = self.node_types[pre_node][cell_type_pre]['model_type']\n #print pop_type_pre\n #print 333\n \n synapse = self.edges_info[conn][type]['dynamics_params'].split('.')[0]\n self.syn_comp_info[synapse] = {}\n #print self.edges_info[conn][type]\n #pp.pprint(self.init_substitutes)\n #pp.pprint(self.substitutes)\n dynamics_params_file = self.subs(self.network_config['components']['synaptic_models_dir']) +'/'+self.edges_info[conn][type]['dynamics_params']\n #print_v('Adding syn %s (at %s)'%(self.edges_info[conn][type]['dynamics_params'], dynamics_params_file))\n \n #TODO: don't load this file every connection!!!\n self.syn_comp_info[synapse]['dynamics_params'] = load_json(dynamics_params_file)\n proj_id = '%s_%s_%s'%(pre_pop,post_pop,synapse)\n \n sign = self.syn_comp_info[synapse]['dynamics_params']['sign'] if 'sign' in self.syn_comp_info[synapse]['dynamics_params'] else 1\n weight = self.edges_info[conn][type]['syn_weight'] if 'syn_weight' in self.edges_info[conn][type] else 1.0\n \n syn_weight_edge_group_0 = self.conn_info[conn]['syn_weight_edge_group_0'][i] if 'syn_weight_edge_group_0' in self.conn_info[conn] else None\n \n # Assume this overrides value from 
csv file...\n if syn_weight_edge_group_0:\n weight = syn_weight_edge_group_0\n \n #print_v('Adding syn %s (at %s), weight: %s, sign: %s, nsyns: %s'%(self.edges_info[conn][type]['dynamics_params'], dynamics_params_file, weight, sign, nsyns))\n \n weight_scale = 0.001\n if 'level_of_detail' in self.syn_comp_info[synapse]['dynamics_params']:\n weight_scale = 1\n \n \n weight=weight_scale * sign * weight * nsyns\n \n delay = self.edges_info[conn][type]['delay'] if 'delay' in self.edges_info[conn][type] else 0\n \n if not pop_type_pre == 'virtual':\n if not proj_id in projections_created:\n\n self.handler.handle_projection(proj_id, \n pre_pop, \n post_pop, \n synapse)\n\n projections_created.append(proj_id)\n\n self.handler.handle_connection(proj_id, \n i, \n pre_pop, \n post_pop, \n synapse, \\\n pre_i, \\\n post_i, \\\n weight=weight, \\\n delay=delay)\n \n else:\n component = '%s__%i'%(pre_pop,pre_i)\n #print_v(' --- Connecting %s to %s[%s]'%(component, post_pop, post_i))\n \n \n #self.input_comp_info[input][info['input_type']][component] ={'id': cell_id, 'times': times}\n\n\n input_list_id = 'il_%s_%s_%i_%i'%(component,post_pop,post_i,i)\n \n self.handler.handle_input_list(input_list_id, \n post_pop, \n component, \n 1)\n\n self.handler.handle_single_input(input_list_id, \n 0, \n cellId = post_i, \n segId = 0, \n fract = 0.5,\n weight=weight)\n \n \"\"\"\n print('~~~~~~~~~~~~~~~')\n print('node_types:')\n pp.pprint(self.node_types)\n print('~~~~~~~~~~~~~~~')\n print('cell_info:')\n pp.pprint(self.cell_info)\n print('================')\"\"\"", "response": "Parse the Sonata file and call the appropriate methods in the handler."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nadd the components of the NeuroML cell to the doc.", "response": "def add_neuroml_components(self, nml_doc):\n \"\"\"\n Based on cell & synapse properties found, create the corresponding NeuroML components\n \"\"\"\n \n is_nest = False\n print_v(\"Adding NeuroML cells 
to: %s\"%nml_doc.id)\n #pp.pprint(self.pop_comp_info)\n \n for c in self.pop_comp_info:\n info = self.pop_comp_info[c]\n \n model_template = info['model_template'] if 'model_template' in info else \\\n (info['dynamics_params']['type'] if 'dynamics_params' in info else\n info['model_type'])\n \n print_v(\" - Adding %s: %s\"%(model_template, info))\n \n if info['model_type'] == 'point_process' and model_template == 'nest:iaf_psc_alpha':\n \n is_nest = True\n from neuroml import IF_curr_alpha\n pynn0 = IF_curr_alpha(id=c, \n cm=info['dynamics_params']['C_m']/1000.0, \n i_offset=\"0\", \n tau_m=info['dynamics_params']['tau_m'], \n tau_refrac=info['dynamics_params']['t_ref'], \n tau_syn_E=\"1\", \n tau_syn_I=\"1\", \n v_init='-70', \n v_reset=info['dynamics_params']['V_reset'], \n v_rest=info['dynamics_params']['E_L'], \n v_thresh=info['dynamics_params']['V_th'])\n \n nml_doc.IF_curr_alpha.append(pynn0)\n \n elif info['model_type'] == 'point_process' and model_template == 'NEURON_IntFire1':\n \n contents = '''\n \n'''%(c, info['dynamics_params']['tau']*1000, info['dynamics_params']['refrac']*1000)\n\n cell_file_name = '%s.xml'%c\n cell_file = open(cell_file_name,'w')\n cell_file.write(contents)\n cell_file.close()\n \n self.nml_includes.append(cell_file_name)\n self.nml_includes.append('../../../examples/sonatatest/IntFireCells.xml')\n \n \n else:\n \n from neuroml import IafRefCell\n IafRefCell0 = IafRefCell(id=DUMMY_CELL,\n C=\".2 nF\",\n thresh = \"1mV\",\n reset=\"0mV\",\n refract=\"3ms\",\n leak_conductance=\"1.2 nS\",\n leak_reversal=\"0mV\")\n\n print_v(\" - Adding: %s\"%IafRefCell0)\n nml_doc.iaf_ref_cells.append(IafRefCell0)\n \n print_v(\"Adding NeuroML synapses to: %s\"%nml_doc.id)\n \n #pp.pprint(self.syn_comp_info)\n \n for s in self.syn_comp_info:\n dyn_params = self.syn_comp_info[s]['dynamics_params']\n \n print_v(\" - Syn: %s: %s\"%(s, dyn_params))\n \n if 'level_of_detail' in dyn_params and dyn_params['level_of_detail'] == 'exp2syn':\n from neuroml 
import ExpTwoSynapse\n\n syn = ExpTwoSynapse(id=s, \n gbase=\"1nS\",\n erev=\"%smV\"%dyn_params['erev'],\n tau_rise=\"%sms\"%dyn_params['tau1'],\n tau_decay=\"%sms\"%dyn_params['tau2'])\n \n #print(\"Adding syn: %s\"%syn)\n nml_doc.exp_two_synapses.append(syn)\n \n elif 'level_of_detail' in dyn_params and dyn_params['level_of_detail'] == 'instanteneous':\n \n contents = '''\n \n'''%(s)\n\n syn_file_name = '%s.xml'%s\n syn_file = open(syn_file_name,'w')\n syn_file.write(contents)\n syn_file.close()\n \n self.nml_includes.append(syn_file_name)\n #self.nml_includes.append('../examples/sonatatest/IntFireCells.xml')\n \n else:\n from neuroml import AlphaCurrSynapse\n pynnSynn0 = AlphaCurrSynapse(id=s, tau_syn=\"2\")\n #print(\"Adding syn: %s\"%pynnSynn0)\n nml_doc.alpha_curr_synapses.append(pynnSynn0)\n \n \n print_v(\"Adding NeuroML inputs to: %s\"%nml_doc.id)\n \n #pp.pprint(self.input_comp_info)\n \n for input in self.input_comp_info:\n for input_type in self.input_comp_info[input]:\n if input_type == 'spikes':\n for comp_id in self.input_comp_info[input][input_type]:\n info = self.input_comp_info[input][input_type][comp_id]\n print_v(\"Adding input %s: %s\"%(comp_id, info.keys()))\n \n nest_syn = _get_default_nest_syn(nml_doc)\n from neuroml import TimedSynapticInput, Spike\n\n tsi = TimedSynapticInput(id=comp_id, synapse=nest_syn.id, spike_target=\"./%s\"%nest_syn.id)\n nml_doc.timed_synaptic_inputs.append(tsi)\n for ti in range(len(info['times'])):\n tsi.spikes.append(Spike(id=ti, time='%sms'%info['times'][ti]))\n elif input_type == 'current_clamp':\n from neuroml import PulseGenerator\n for comp_id in self.input_comp_info[input][input_type]:\n info = self.input_comp_info[input][input_type][comp_id]\n #TODO remove when https://github.com/AllenInstitute/sonata/issues/42 is fixed!\n amp_template = '%spA' if is_nest else '%snA' # \n pg = PulseGenerator(id=comp_id,delay='%sms'%info['delay'],duration='%sms'%info['duration'],amplitude=amp_template%info['amp'])\n 
nml_doc.pulse_generators.append(pg)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ngenerate a LEMS file to use in simulations of the NeuroML file.", "response": "def generate_lems_file(self, nml_file_name, nml_doc):\n \"\"\"\n Generate a LEMS file to use in simulations of the NeuroML file\n \"\"\"\n \n #pp.pprint(self.simulation_config)\n #pp.pprint(self.pop_comp_info)\n #pp.pprint(self.node_set_mappings)\n \n if 'output' in self.simulation_config:\n gen_spike_saves_for_all_somas = True\n \n target = nml_doc.networks[0].id\n sim_id = 'Sim_%s'%target\n \n duration = self.simulation_config['run']['tstop']\n dt = self.simulation_config['run']['dt']\n lems_file_name = 'LEMS_%s.xml'%sim_id\n target_dir = \"./\"\n gen_saves_for_quantities = {}\n gen_plots_for_quantities = {}\n \n if 'reports' in self.simulation_config:\n if 'membrane_potential' in self.simulation_config['reports']:\n mp = self.simulation_config['reports']['membrane_potential']\n node_set = self.node_set_mappings[mp['cells']]\n for nml_pop in node_set:\n comp = self.nml_pop_vs_comps[nml_pop]\n ids = node_set[nml_pop]\n display = 'Voltages_%s'%nml_pop\n file_name = '%s.v.dat'%nml_pop\n \n for id in ids:\n quantity = '%s/%i/%s/%s'%(nml_pop,id,comp,'v')\n if not nml_pop in self.nml_pops_having_locations:\n quantity = '%s[%i]/%s'%(nml_pop,id,'v')\n \n if not display in gen_plots_for_quantities:\n gen_plots_for_quantities[display] = []\n gen_plots_for_quantities[display].append(quantity)\n if not file_name in gen_saves_for_quantities:\n gen_saves_for_quantities[file_name] = []\n gen_saves_for_quantities[file_name].append(quantity)\n \n\n generate_lems_file_for_neuroml(sim_id, \n nml_file_name, \n target, \n duration, \n dt, \n lems_file_name,\n target_dir,\n include_extra_files = self.nml_includes,\n gen_plots_for_all_v = False,\n plot_all_segments = False,\n gen_plots_for_quantities = gen_plots_for_quantities, # Dict with displays vs lists of quantity paths\n 
gen_saves_for_all_v = False,\n save_all_segments = False,\n gen_saves_for_quantities = gen_saves_for_quantities, # List of populations, all pops if = []\n gen_spike_saves_for_all_somas = gen_spike_saves_for_all_somas,\n report_file_name = REPORT_FILE,\n copy_neuroml = True,\n verbose=True)\n \n return lems_file_name"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef clip_geometry_to_srs_bounds(geometry, pyramid, multipart=False):\n if not geometry.is_valid:\n raise ValueError(\"invalid geometry given\")\n pyramid_bbox = box(*pyramid.bounds)\n\n # Special case for global tile pyramids if geometry extends over tile\n # pyramid boundaries (such as the antimeridian).\n if pyramid.is_global and not geometry.within(pyramid_bbox):\n inside_geom = geometry.intersection(pyramid_bbox)\n outside_geom = geometry.difference(pyramid_bbox)\n # shift outside geometry so it lies within SRS bounds\n if isinstance(outside_geom, Polygon):\n outside_geom = [outside_geom]\n all_geoms = [inside_geom]\n for geom in outside_geom:\n geom_bounds = Bounds(*geom.bounds)\n if geom_bounds.left < pyramid.left:\n geom = translate(geom, xoff=2*pyramid.right)\n elif geom_bounds.right > pyramid.right:\n geom = translate(geom, xoff=-2*pyramid.right)\n all_geoms.append(geom)\n if multipart:\n return all_geoms\n else:\n return GeometryCollection(all_geoms)\n\n else:\n if multipart:\n return [geometry]\n else:\n return geometry", "response": "Clip input geometry to SRS bounds of given TilePyramid."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef snap_bounds(bounds=None, tile_pyramid=None, zoom=None, pixelbuffer=0):\n bounds = Bounds(*bounds)\n validate_zoom(zoom)\n lb = _tile_from_xy(tile_pyramid, bounds.left, bounds.bottom, zoom, on_edge_use=\"rt\")\n rt = _tile_from_xy(tile_pyramid, bounds.right, bounds.top, zoom, on_edge_use=\"lb\")\n left, bottom, _, _ = 
lb.bounds(pixelbuffer)\n _, _, right, top = rt.bounds(pixelbuffer)\n return Bounds(left, bottom, right, top)", "response": "Extend bounds to be aligned with union of tile bboxes."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nverifying that shape corresponds to bounds aspect ratio.", "response": "def _verify_shape_bounds(shape, bounds):\n \"\"\"Verify that shape corresponds to bounds aspect ratio.\"\"\"\n if not isinstance(shape, (tuple, list)) or len(shape) != 2:\n raise TypeError(\n \"shape must be a tuple or list with two elements: %s\" % str(shape)\n )\n if not isinstance(bounds, (tuple, list)) or len(bounds) != 4:\n raise TypeError(\n \"bounds must be a tuple or list with four elements: %s\" % str(bounds)\n )\n shape = Shape(*shape)\n bounds = Bounds(*bounds)\n shape_ratio = shape.width / shape.height\n bounds_ratio = (bounds.right - bounds.left) / (bounds.top - bounds.bottom)\n if abs(shape_ratio - bounds_ratio) > DELTA:\n min_length = min([\n (bounds.right - bounds.left) / shape.width,\n (bounds.top - bounds.bottom) / shape.height\n ])\n proposed_bounds = Bounds(\n bounds.left,\n bounds.bottom,\n bounds.left + shape.width * min_length,\n bounds.bottom + shape.height * min_length\n )\n raise ValueError(\n \"shape ratio (%s) must equal bounds ratio (%s); try %s\" % (\n shape_ratio, bounds_ratio, proposed_bounds\n )\n )"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn all tiles from tilepyramid intersecting with tile.", "response": "def _tile_intersecting_tilepyramid(tile, tp):\n \"\"\"Return all tiles from tilepyramid intersecting with tile.\"\"\"\n if tile.tp.grid != tp.grid:\n raise ValueError(\"Tile and TilePyramid source grids must be the same.\")\n tile_metatiling = tile.tile_pyramid.metatiling\n pyramid_metatiling = tp.metatiling\n multiplier = tile_metatiling / pyramid_metatiling\n if tile_metatiling > pyramid_metatiling:\n return [\n tp.tile(\n tile.zoom,\n int(multiplier) * tile.row + 
row_offset,\n int(multiplier) * tile.col + col_offset\n )\n for row_offset, col_offset in product(\n range(int(multiplier)), range(int(multiplier))\n )\n ]\n elif tile_metatiling < pyramid_metatiling:\n return [tp.tile(\n tile.zoom, int(multiplier * tile.row), int(multiplier * tile.col)\n )]\n else:\n return [tp.tile(*tile.id)]"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nyields all tiles that overlap the given bounds.", "response": "def _global_tiles_from_bounds(tp, bounds, zoom):\n \"\"\"Return also Tiles if bounds cross the antimeridian.\"\"\"\n seen = set()\n\n # clip to tilepyramid top and bottom bounds\n left, right = bounds.left, bounds.right\n top = tp.top if bounds.top > tp.top else bounds.top\n bottom = tp.bottom if bounds.bottom < tp.bottom else bounds.bottom\n\n if left >= tp.left and right <= tp.right:\n for tile in _tiles_from_cleaned_bounds(tp, bounds, zoom):\n yield tile\n\n # bounds overlap on the Western side with antimeridian\n if left < tp.left:\n for tile in chain(\n # tiles west of antimeridian\n _tiles_from_cleaned_bounds(\n tp,\n Bounds(left + (tp.right - tp.left), bottom, tp.right, top),\n zoom\n ),\n # tiles east of antimeridian\n _tiles_from_cleaned_bounds(\n tp, Bounds(tp.left, bottom, right, top), zoom\n )\n ):\n # make output tiles unique\n if tile.id not in seen:\n seen.add(tile.id)\n yield tile\n\n # bounds overlap on the Eastern side with antimeridian\n if right > tp.right:\n for tile in chain(\n # tiles west of antimeridian\n _tiles_from_cleaned_bounds(\n tp, Bounds(left, bottom, tp.right, top), zoom\n ),\n # tiles east of antimeridian\n _tiles_from_cleaned_bounds(\n tp,\n Bounds(tp.left, bottom, right - (tp.right - tp.left), top),\n zoom\n )\n ):\n # make output tiles unique\n if tile.id not in seen:\n seen.add(tile.id)\n yield tile"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _tiles_from_cleaned_bounds(tp, bounds, 
zoom):\n lb = _tile_from_xy(tp, bounds.left, bounds.bottom, zoom, on_edge_use=\"rt\")\n rt = _tile_from_xy(tp, bounds.right, bounds.top, zoom, on_edge_use=\"lb\")\n for tile_id in product([zoom], range(rt.row, lb.row + 1), range(lb.col, rt.col + 1)):\n yield tp.tile(*tile_id)", "response": "Yield all tiles intersecting with bounds."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef open_args(subparsers):\n parser = subparsers.add_parser(\n 'open',\n help='open a CSV resource with a system application',\n description=open_args.__doc__,\n formatter_class=argparse.RawDescriptionHelpFormatter,\n )\n\n parser.set_defaults(run_command=open_cmd)\n\n parser.add_argument('metatabfile', nargs='?',\n help=\"Path or URL to a metatab file. If not provided, defaults to 'metadata.csv' \")\n\n return parser", "response": "The mp open command will open a CSV resource with a system application such as Excel or OpenOffice."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nproduce a shortest path tree for a given net using NER. This is the kernel of the NER algorithm. Parameters ---------- source : (x, y) The coordinate of the source vertex. destinations : iterable([(x, y), ...]) The coordinates of destination vertices. width : int Width of the system (nodes) height : int Height of the system (nodes) wrap_around : bool True if wrap-around links should be used, false if they should be avoided. radius : int Radius of area to search from each node. 20 is arbitrarily selected in the paper and shown to be acceptable in practice. Returns ------- (:py:class:`~.rig.place_and_route.routing_tree.RoutingTree`, {(x,y): :py:class:`~.rig.place_and_route.routing_tree.RoutingTree`, ...}) A RoutingTree is produced rooted at the source and visiting all destinations but which does not contain any vertices etc. 
For convenience, a dictionary mapping from destination (x, y) coordinates to the associated RoutingTree is provided to allow the caller to insert these items.", "response": "def ner_net(source, destinations, width, height, wrap_around=False, radius=10):\n \"\"\"Produce a shortest path tree for a given net using NER.\n\n This is the kernel of the NER algorithm.\n\n Parameters\n ----------\n source : (x, y)\n The coordinate of the source vertex.\n destinations : iterable([(x, y), ...])\n The coordinates of destination vertices.\n width : int\n Width of the system (nodes)\n height : int\n Height of the system (nodes)\n wrap_around : bool\n True if wrap-around links should be used, false if they should be\n avoided.\n radius : int\n Radius of area to search from each node. 20 is arbitrarily selected in\n the paper and shown to be acceptable in practice.\n\n Returns\n -------\n (:py:class:`~.rig.place_and_route.routing_tree.RoutingTree`,\n {(x,y): :py:class:`~.rig.place_and_route.routing_tree.RoutingTree`, ...})\n A RoutingTree is produced rooted at the source and visiting all\n destinations but which does not contain any vertices etc. 
For\n convenience, a dictionarry mapping from destination (x, y) coordinates\n to the associated RoutingTree is provided to allow the caller to insert\n these items.\n \"\"\"\n # Map from (x, y) to RoutingTree objects\n route = {source: RoutingTree(source)}\n\n # Handle each destination, sorted by distance from the source, closest\n # first.\n for destination in sorted(destinations,\n key=(lambda destination:\n shortest_mesh_path_length(\n to_xyz(source), to_xyz(destination))\n if not wrap_around else\n shortest_torus_path_length(\n to_xyz(source), to_xyz(destination),\n width, height))):\n # We shall attempt to find our nearest neighbouring placed node.\n neighbour = None\n\n # Try to find a nearby (within radius hops) node in the routing tree\n # that we can route to (falling back on just routing to the source).\n #\n # In an implementation according to the algorithm's original\n # specification looks for nodes at each point in a growing set of rings\n # of concentric hexagons. If it doesn't find any destinations this\n # means an awful lot of checks: 1261 for the default radius of 20.\n #\n # An alternative (but behaviourally identical) implementation scans the\n # list of all route nodes created so far and finds the closest node\n # which is < radius hops (falling back on the origin if no node is\n # closer than radius hops). This implementation requires one check per\n # existing route node. 
In most routes this is probably a lot less than\n # 1261 since most routes will probably have at most a few hundred route\n # nodes by the time the last destination is being routed.\n #\n # Which implementation is best is a difficult question to answer:\n # * In principle nets with quite localised connections (e.g.\n # nearest-neighbour or centroids traffic) may route slightly more\n # quickly with the original algorithm since it may very quickly find\n # a neighbour.\n # * In nets which connect very spaced-out destinations the second\n # implementation may be quicker since in such a scenario it is\n # unlikely that a neighbour will be found.\n # * In extremely high-fan-out nets (e.g. broadcasts), the original\n # method is very likely to perform *far* better than the alternative\n # method since most iterations will complete immediately while the\n # alternative method must scan *all* the route vertices.\n # As such, it should be clear that neither method alone is 'best' and\n # both have degenerate performance in certain completely reasonable\n # styles of net. As a result, a simple heuristic is used to decide\n # which technique to use.\n #\n # The following micro-benchmarks are crude estimate of the\n # runtime-per-iteration of each approach (at least in the case of a\n # torus topology)::\n #\n # $ # Original approach\n # $ python -m timeit --setup 'x, y, w, h, r = 1, 2, 5, 10, \\\n # {x:None for x in range(10)}' \\\n # 'x += 1; y += 1; x %= w; y %= h; (x, y) in r'\n # 1000000 loops, best of 3: 0.207 usec per loop\n # $ # Alternative approach\n # $ python -m timeit --setup 'from rig.geometry import \\\n # shortest_torus_path_length' \\\n # 'shortest_torus_path_length( \\\n # (0, 1, 2), (3, 2, 1), 10, 10)'\n # 1000000 loops, best of 3: 0.666 usec per loop\n #\n # From this we can approximately suggest that the alternative approach\n # is 3x more expensive per iteration. 
A very crude heuristic is to use\n # the original approach when the number of route nodes is more than\n # 1/3rd of the number of routes checked by the original method.\n concentric_hexagons = memoized_concentric_hexagons(radius)\n if len(concentric_hexagons) < len(route) / 3:\n # Original approach: Start looking for route nodes in a concentric\n # spiral pattern out from the destination node.\n for x, y in concentric_hexagons:\n x += destination[0]\n y += destination[1]\n if wrap_around:\n x %= width\n y %= height\n if (x, y) in route:\n neighbour = (x, y)\n break\n else:\n # Alternative approach: Scan over every route node and check to see\n # if any are < radius, picking the closest one if so.\n neighbour = None\n neighbour_distance = None\n for candidate_neighbour in route:\n if wrap_around:\n distance = shortest_torus_path_length(\n to_xyz(candidate_neighbour), to_xyz(destination),\n width, height)\n else:\n distance = shortest_mesh_path_length(\n to_xyz(candidate_neighbour), to_xyz(destination))\n\n if distance <= radius and (neighbour is None or\n distance < neighbour_distance):\n neighbour = candidate_neighbour\n neighbour_distance = distance\n\n # Fall back on routing directly to the source if no nodes within radius\n # hops of the destination was found.\n if neighbour is None:\n neighbour = source\n\n # Find the shortest vector from the neighbour to this destination\n if wrap_around:\n vector = shortest_torus_path(to_xyz(neighbour),\n to_xyz(destination),\n width, height)\n else:\n vector = shortest_mesh_path(to_xyz(neighbour), to_xyz(destination))\n\n # The longest-dimension-first route may inadvertently pass through an\n # already connected node. 
If the route is allowed to pass through that\n # node it would create a cycle in the route which would be VeryBad(TM).\n # As a result, we work backward through the route and truncate it at\n # the first point where the route intersects with a connected node.\n ldf = longest_dimension_first(vector, neighbour, width, height)\n i = len(ldf)\n for direction, (x, y) in reversed(ldf):\n i -= 1\n if (x, y) in route:\n # We've just bumped into a node which is already part of the\n # route, this becomes our new neighbour and we truncate the LDF\n # route. (Note ldf list is truncated just after the current\n # position since it gives (direction, destination) pairs).\n neighbour = (x, y)\n ldf = ldf[i + 1:]\n break\n\n # Take the longest dimension first route.\n last_node = route[neighbour]\n for direction, (x, y) in ldf:\n this_node = RoutingTree((x, y))\n route[(x, y)] = this_node\n\n last_node.children.append((Routes(direction), this_node))\n last_node = this_node\n\n return (route[source], route)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncopy a RoutingTree and disconnects nodes which are not connected in the given machine.", "response": "def copy_and_disconnect_tree(root, machine):\n \"\"\"Copy a RoutingTree (containing nothing but RoutingTrees), disconnecting\n nodes which are not connected in the machine.\n\n Note that if a dead chip is part of the input RoutingTree, no corresponding\n node will be included in the copy. The assumption behind this is that the\n only reason a tree would visit a dead chip is because a route passed\n through the chip and wasn't actually destined to arrive at that chip. This\n situation is impossible to confirm since the input routing trees have not\n yet been populated with vertices. The caller is responsible for being\n sensible.\n\n Parameters\n ----------\n root : :py:class:`~rig.place_and_route.routing_tree.RoutingTree`\n The root of the RoutingTree that contains nothing but RoutingTrees\n (i.e. 
no children which are vertices or links).\n machine : :py:class:`~rig.place_and_route.Machine`\n The machine in which the routes exist.\n\n Returns\n -------\n (root, lookup, broken_links)\n Where:\n * `root` is the new root of the tree\n :py:class:`~rig.place_and_route.routing_tree.RoutingTree`\n * `lookup` is a dict {(x, y):\n :py:class:`~rig.place_and_route.routing_tree.RoutingTree`, ...}\n * `broken_links` is a set ([(parent, child), ...]) containing all\n disconnected parent and child (x, y) pairs due to broken links.\n \"\"\"\n new_root = None\n\n # Lookup for copied routing tree {(x, y): RoutingTree, ...}\n new_lookup = {}\n\n # List of missing connections in the copied routing tree [(new_parent,\n # new_child), ...]\n broken_links = set()\n\n # A queue [(new_parent, direction, old_node), ...]\n to_visit = deque([(None, None, root)])\n while to_visit:\n new_parent, direction, old_node = to_visit.popleft()\n\n if old_node.chip in machine:\n # Create a copy of the node\n new_node = RoutingTree(old_node.chip)\n new_lookup[new_node.chip] = new_node\n else:\n # This chip is dead, move all its children into the parent node\n assert new_parent is not None, \\\n \"Net cannot be sourced from a dead chip.\"\n new_node = new_parent\n\n if new_parent is None:\n # This is the root node\n new_root = new_node\n elif new_node is not new_parent:\n # If this node is not dead, check connectivity to parent node (no\n # reason to check connectivity between a dead node and its parent).\n if direction in links_between(new_parent.chip,\n new_node.chip,\n machine):\n # Is connected via working link\n new_parent.children.append((direction, new_node))\n else:\n # Link to parent is dead (or original parent was dead and the\n # new parent is not adjacent)\n broken_links.add((new_parent.chip, new_node.chip))\n\n # Copy children\n for child_direction, child in old_node.children:\n to_visit.append((new_node, child_direction, child))\n\n return (new_root, new_lookup, broken_links)"} 
{"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nuses A* to find a path from any of the sources to the sink.", "response": "def a_star(sink, heuristic_source, sources, machine, wrap_around):\n \"\"\"Use A* to find a path from any of the sources to the sink.\n\n Note that the heuristic means that the search will proceed towards\n heuristic_source without any concern for any other sources. This means that\n the algorithm may miss a very close neighbour in order to pursue its goal\n of reaching heuristic_source. This is not considered a problem since 1) the\n heuristic source will typically be in the direction of the rest of the tree\n and near by and often the closest entity 2) it prevents us accidentally\n forming loops in the rest of the tree since we'll stop as soon as we touch\n any part of it.\n\n Parameters\n ----------\n sink : (x, y)\n heuristic_source : (x, y)\n An element from `sources` which is used as a guiding heuristic for the\n A* algorithm.\n sources : set([(x, y), ...])\n machine : :py:class:`~rig.place_and_route.Machine`\n wrap_around : bool\n Consider wrap-around links in heuristic distance calculations.\n\n Returns\n -------\n [(:py:class:`~rig.routing_table.Routes`, (x, y)), ...]\n A path starting with a coordinate in `sources` and terminating at\n connected neighbour of `sink` (i.e. 
the path does not include `sink`).\n The direction given is the link down which to proceed from the given\n (x, y) to arrive at the next point in the path.\n\n Raises\n ------\n :py:class:~rig.place_and_route.exceptions.MachineHasDisconnectedSubregion`\n If a path cannot be found.\n \"\"\"\n # Select the heuristic function to use for distances\n if wrap_around:\n heuristic = (lambda node:\n shortest_torus_path_length(to_xyz(node),\n to_xyz(heuristic_source),\n machine.width, machine.height))\n else:\n heuristic = (lambda node:\n shortest_mesh_path_length(to_xyz(node),\n to_xyz(heuristic_source)))\n\n # A dictionary {node: (direction, previous_node}. An entry indicates that\n # 1) the node has been visited and 2) which node we hopped from (and the\n # direction used) to reach previous_node. This may be None if the node is\n # the sink.\n visited = {sink: None}\n\n # The node which the tree will be reconnected to\n selected_source = None\n\n # A heap (accessed via heapq) of (distance, (x, y)) where distance is the\n # distance between (x, y) and heuristic_source and (x, y) is a node to\n # explore.\n to_visit = [(heuristic(sink), sink)]\n while to_visit:\n _, node = heapq.heappop(to_visit)\n\n # Terminate if we've found the destination\n if node in sources:\n selected_source = node\n break\n\n # Try all neighbouring locations. 
Note: link identifiers are from the\n # perspective of the neighbour, not the current node!\n for neighbour_link in Links:\n vector = neighbour_link.opposite.to_vector()\n neighbour = ((node[0] + vector[0]) % machine.width,\n (node[1] + vector[1]) % machine.height)\n\n # Skip links which are broken\n if (neighbour[0], neighbour[1], neighbour_link) not in machine:\n continue\n\n # Skip neighbours who have already been visited\n if neighbour in visited:\n continue\n\n # Explore all other neighbours\n visited[neighbour] = (neighbour_link, node)\n heapq.heappush(to_visit, (heuristic(neighbour), neighbour))\n\n # Fail of no paths exist\n if selected_source is None:\n raise MachineHasDisconnectedSubregion(\n \"Could not find path from {} to {}\".format(\n sink, heuristic_source))\n\n # Reconstruct the discovered path, starting from the source we found and\n # working back until the sink.\n path = [(Routes(visited[selected_source][0]), selected_source)]\n while visited[path[-1][1]][1] != sink:\n node = visited[path[-1][1]][1]\n direction = Routes(visited[node][0])\n path.append((direction, node))\n\n return path"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nmodify a RoutingTree to route - around dead links in a Machine.", "response": "def avoid_dead_links(root, machine, wrap_around=False):\n \"\"\"Modify a RoutingTree to route-around dead links in a Machine.\n\n Uses A* to reconnect disconnected branches of the tree (due to dead links\n in the machine).\n\n Parameters\n ----------\n root : :py:class:`~rig.place_and_route.routing_tree.RoutingTree`\n The root of the RoutingTree which contains nothing but RoutingTrees\n (i.e. 
no vertices and links).\n machine : :py:class:`~rig.place_and_route.Machine`\n The machine in which the routes exist.\n wrap_around : bool\n Consider wrap-around links in pathfinding heuristics.\n\n Returns\n -------\n (:py:class:`~.rig.place_and_route.routing_tree.RoutingTree`,\n {(x,y): :py:class:`~.rig.place_and_route.routing_tree.RoutingTree`, ...})\n A new RoutingTree is produced rooted as before. A dictionarry mapping\n from (x, y) to the associated RoutingTree is provided for convenience.\n\n Raises\n ------\n :py:class:~rig.place_and_route.exceptions.MachineHasDisconnectedSubregion`\n If a path to reconnect the tree cannot be found.\n \"\"\"\n # Make a copy of the RoutingTree with all broken parts disconnected\n root, lookup, broken_links = copy_and_disconnect_tree(root, machine)\n\n # For each disconnected subtree, use A* to connect the tree to *any* other\n # disconnected subtree. Note that this process will eventually result in\n # all disconnected subtrees being connected, the result is a fully\n # connected tree.\n for parent, child in broken_links:\n child_chips = set(c.chip for c in lookup[child])\n\n # Try to reconnect broken links to any other part of the tree\n # (excluding this broken subtree itself since that would create a\n # cycle).\n path = a_star(child, parent,\n set(lookup).difference(child_chips),\n machine, wrap_around)\n\n # Add new RoutingTree nodes to reconnect the child to the tree.\n last_node = lookup[path[0][1]]\n last_direction = path[0][0]\n for direction, (x, y) in path[1:]:\n if (x, y) not in child_chips:\n # This path segment traverses new ground so we must create a\n # new RoutingTree for the segment.\n new_node = RoutingTree((x, y))\n # A* will not traverse anything but chips in this tree so this\n # assert is meerly a sanity check that this ocurred correctly.\n assert (x, y) not in lookup, \"Cycle created.\"\n lookup[(x, y)] = new_node\n else:\n # This path segment overlaps part of the disconnected tree\n # (A* doesn't 
know where the disconnected tree is and thus\n # doesn't avoid it). To prevent cycles being introduced, this\n # overlapped node is severed from its parent and merged as part\n # of the A* path.\n new_node = lookup[(x, y)]\n\n # Find the node's current parent and disconnect it.\n for node in lookup[child]: # pragma: no branch\n dn = [(d, n) for d, n in node.children if n == new_node]\n assert len(dn) <= 1\n if dn:\n node.children.remove(dn[0])\n # A node can only have one parent so we can stop now.\n break\n last_node.children.append((Routes(last_direction), new_node))\n last_node = new_node\n last_direction = direction\n last_node.children.append((last_direction, lookup[child]))\n\n return (root, lookup)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef route(vertices_resources, nets, machine, constraints, placements,\n allocations={}, core_resource=Cores, radius=20):\n \"\"\"Routing algorithm based on Neighbour Exploring Routing (NER).\n\n Algorithm refrence: J. Navaridas et al. SpiNNaker: Enhanced multicast\n routing, Parallel Computing (2014).\n http://dx.doi.org/10.1016/j.parco.2015.01.002\n\n This algorithm attempts to use NER to generate routing trees for all nets\n and routes around broken links using A* graph search. If the system is\n fully connected, this algorithm will always succeed though no consideration\n of congestion or routing-table usage is attempted.\n\n Parameters\n ----------\n radius : int\n Radius of area to search from each node. 20 is arbitrarily selected in\n the paper and shown to be acceptable in practice. If set to zero, this\n method is becomes longest dimension first routing.\n \"\"\"\n wrap_around = machine.has_wrap_around_links()\n\n # Vertices constrained to route to a specific link. 
{vertex: route}\n route_to_endpoint = {}\n for constraint in constraints:\n if isinstance(constraint, RouteEndpointConstraint):\n route_to_endpoint[constraint.vertex] = constraint.route\n\n routes = {}\n for net in nets:\n # Generate routing tree (assuming a perfect machine)\n root, lookup = ner_net(placements[net.source],\n set(placements[sink] for sink in net.sinks),\n machine.width, machine.height,\n wrap_around, radius)\n\n # Fix routes to avoid dead chips/links\n if route_has_dead_links(root, machine):\n root, lookup = avoid_dead_links(root, machine, wrap_around)\n\n # Add the sinks in the net to the RoutingTree\n for sink in net.sinks:\n tree_node = lookup[placements[sink]]\n if sink in route_to_endpoint:\n # Sinks with route-to-endpoint constraints must be routed\n # in the according directions.\n tree_node.children.append((route_to_endpoint[sink], sink))\n else:\n cores = allocations.get(sink, {}).get(core_resource, None)\n if cores is not None:\n # Sinks with the core_resource resource specified must be\n # routed to that set of cores.\n for core in range(cores.start, cores.stop):\n tree_node.children.append((Routes.core(core), sink))\n else:\n # Sinks without that resource are simply included without\n # an associated route\n tree_node.children.append((None, sink))\n routes[net] = root\n\n return routes", "response": "This method generates a routing tree for a set of vertices and nets."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef match(string, patterns):\n if patterns is None:\n return True\n else:\n return any(re.match(pattern, string)\n for pattern in patterns)", "response": "Given a string return true if it matches the supplied list of regular expressions."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_process_list(mc, x_=None, y_=None, p_=None,\n app_ids=None, applications=None, states=None):\n \"\"\"Scan a SpiNNaker system's 
cores filtering by the specified features.\n\n Generates\n -------\n (x, y, core, state, runtime_exception, application, app_id)\n \"\"\"\n\n system_info = mc.get_system_info()\n for (x, y), chip_info in sorted(iteritems(system_info)):\n if x_ is not None and x_ != x:\n continue\n if y_ is not None and y_ != y:\n continue\n\n for p in range(chip_info.num_cores):\n if p_ is not None and p_ != p:\n continue\n\n try:\n status = mc.get_processor_status(x=x, y=y, p=p)\n keep = (match(str(status.app_id), app_ids) and\n match(status.app_name, applications) and\n match(status.cpu_state.name, states))\n\n if keep:\n yield (x, y, p,\n status.cpu_state,\n status.rt_code,\n status.app_name,\n status.app_id)\n except SCPError as e:\n # If an error occurs while communicating with a chip, we bodge\n # it into the \"cpu_status\" field and continue (note that it\n # will never get filtered out).\n class DeadStatus(object):\n name = \"{}: {}\".format(e.__class__.__name__, str(e))\n yield (x, y, p, DeadStatus(), None, \"\", -1)", "response": "Yields a list of processes that are in a SpiNNaker system."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nbuilds a SpiNNaker machine from a SystemInfo object.", "response": "def build_machine(system_info,\n core_resource=Cores,\n sdram_resource=SDRAM,\n sram_resource=SRAM):\n \"\"\"Build a :py:class:`~rig.place_and_route.Machine` object from a\n :py:class:`~rig.machine_control.machine_controller.SystemInfo` object.\n\n .. note::\n Links are tested by sending a 'PEEK' command down the link which\n checks to see if the remote device responds correctly. If the link\n is dead, no response will be received and the link will be assumed\n dead. Since peripherals do not generally respond to 'PEEK'\n commands, working links attached to peripherals will also be marked\n as dead.\n\n .. 
note::\n The returned object does not report how much memory is free, nor\n how many cores are idle but rather the total number of working cores\n and the size of the heap. See :py:func:`.build_resource_constraints`\n for a function which can generate a set of\n :py:class:`~rig.place_and_route.constraints` which prevent the use of\n already in-use cores and memory.\n\n .. note::\n This method replaces the deprecated\n :py:meth:`rig.machine_control.MachineController.get_machine` method.\n Its functionality may be recreated using\n :py:meth:`rig.machine_control.MachineController.get_system_info` along\n with this function like so::\n\n >> sys_info = mc.get_system_info()\n >> machine = build_machine(sys_info)\n\n Parameters\n ----------\n system_info : :py:class:`rig.machine_control.machine_controller.SystemInfo`\n The resource availability information for a SpiNNaker machine,\n typically produced by\n :py:meth:`rig.machine_control.MachineController.get_system_info`.\n core_resource : resource (default: :py:class:`rig.place_and_route.Cores`)\n The resource type to use to represent the number of working cores on a\n chip, including the monitor, those already in use and all idle cores.\n sdram_resource : resource (default: :py:class:`rig.place_and_route.SDRAM`)\n The resource type to use to represent SDRAM on a chip. This resource\n will be set to the number of bytes in the largest free block in the\n SDRAM heap. This gives a conservative estimate of the amount of free\n SDRAM on the chip which will be an underestimate in the presence of\n memory fragmentation.\n sram_resource : resource (default: :py:class:`rig.place_and_route.SRAM`)\n The resource type to use to represent SRAM (a.k.a. system RAM) on a\n chip. This resource will be set to the number of bytes in the largest\n free block in the SRAM heap. 
This gives a conservative estimate of the\n amount of free SRAM on the chip which will be an underestimate in the\n presence of memory fragmentation.\n\n Returns\n -------\n :py:class:`rig.place_and_route.Machine`\n A :py:class:`~rig.place_and_route.Machine` object representing the\n resources available within a SpiNNaker machine in the form used by the\n place-and-route infrastructure.\n \"\"\"\n try:\n max_cores = max(c.num_cores for c in itervalues(system_info))\n except ValueError:\n max_cores = 0\n\n try:\n max_sdram = max(c.largest_free_sdram_block\n for c in itervalues(system_info))\n except ValueError:\n max_sdram = 0\n\n try:\n max_sram = max(c.largest_free_sram_block\n for c in itervalues(system_info))\n except ValueError:\n max_sram = 0\n\n return Machine(width=system_info.width,\n height=system_info.height,\n chip_resources={\n core_resource: max_cores,\n sdram_resource: max_sdram,\n sram_resource: max_sram,\n },\n chip_resource_exceptions={\n chip: {\n core_resource: info.num_cores,\n sdram_resource: info.largest_free_sdram_block,\n sram_resource: info.largest_free_sram_block,\n }\n for chip, info in iteritems(system_info)\n if (info.num_cores != max_cores or\n info.largest_free_sdram_block != max_sdram or\n info.largest_free_sram_block != max_sram)\n },\n dead_chips=set(system_info.dead_chips()),\n dead_links=set(system_info.dead_links()))"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nyields a minimal set of ReserveResourceConstraint objects which reserve the specified set of cores.", "response": "def _get_minimal_core_reservations(core_resource, cores, chip=None):\n \"\"\"Yield a minimal set of\n :py:class:`~rig.place_and_route.constraints.ReserveResourceConstraint`\n objects which reserve the specified set of cores.\n\n Parameters\n ----------\n core_resource : resource type\n The type of resource representing cores.\n cores : [int, ...]\n The core numbers to reserve *in ascending order*.\n chip : 
None or (x, y)\n Which chip the constraints should be applied to or None for a global\n constraint.\n\n Yields\n ------\n :py:class:`~rig.place_and_route.constraints.ReserveResourceConstraint`\n \"\"\"\n reservation = None\n\n # Cores is in ascending order\n for core in cores:\n if reservation is None:\n reservation = slice(core, core + 1)\n elif reservation.stop == core:\n reservation = slice(reservation.start, core + 1)\n else:\n yield ReserveResourceConstraint(\n core_resource, reservation, chip)\n reservation = slice(core, core + 1)\n\n if reservation is not None:\n yield ReserveResourceConstraint(core_resource, reservation, chip)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nbuilds a set of place-and-route constraints that reserve all cores that are already in use.", "response": "def build_core_constraints(system_info, core_resource=Cores):\n \"\"\"Return a set of place-and-route\n :py:class:`~rig.place_and_route.constraints.ReserveResourceConstraint`\n which reserve any cores that are already in use.\n\n The returned list of\n :py:class:`~rig.place_and_route.constraints.ReserveResourceConstraint`\\ s\n reserves all cores not in an Idle state (i.e. not a monitor and not already\n running an application).\n\n .. note::\n\n Historically, every application was required to add a\n :py:class:`~rig.place_and_route.constraints.ReserveResourceConstraint` to\n reserve the monitor processor on each chip. 
This method improves upon\n this approach by automatically generating constraints which reserve not\n just the monitor core but also any other cores which are already in\n use.\n\n Parameters\n ----------\n system_info : :py:class:`rig.machine_control.machine_controller.SystemInfo`\n The resource availability information for a SpiNNaker machine,\n typically produced by\n :py:meth:`rig.machine_control.MachineController.get_system_info`.\n core_resource : resource (Default: :py:data:`~rig.place_and_route.Cores`)\n The resource identifier used for cores.\n\n Returns\n -------\n [:py:class:`rig.place_and_route.constraints.ReserveResourceConstraint`, \\\n ...]\n A set of place-and-route constraints which reserves all non-idle cores.\n The resource type given in the ``core_resource`` argument will be\n reserved accordingly.\n \"\"\"\n constraints = []\n\n # Find the set of cores which are universally reserved\n globally_reserved = None\n for chip_info in itervalues(system_info):\n reserved = sum(1 << c for c, state in enumerate(chip_info.core_states)\n if state != AppState.idle)\n if globally_reserved is None:\n globally_reserved = reserved\n else:\n globally_reserved &= reserved\n\n if globally_reserved is None:\n globally_reserved = 0\n\n constraints.extend(_get_minimal_core_reservations(\n core_resource,\n [core for core in range(18) if (1 << core) & globally_reserved]))\n\n # Create chip-specific resource reservations for any special cases\n for chip, chip_info in iteritems(system_info):\n constraints.extend(_get_minimal_core_reservations(\n core_resource,\n [core for core, state in enumerate(chip_info.core_states)\n if state != AppState.idle and\n not globally_reserved & (1 << core)],\n chip))\n\n return constraints"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nbuild a mapping from application to a list of cores where the application is used.", "response": "def build_application_map(vertices_applications, placements, allocations,\n 
core_resource=Cores):\n \"\"\"Build a mapping from application to a list of cores where the\n application is used.\n\n This utility function assumes that each vertex is associated with a\n specific application.\n\n Parameters\n ----------\n vertices_applications : {vertex: application, ...}\n Applications are represented by the path of their APLX file.\n placements : {vertex: (x, y), ...}\n allocations : {vertex: {resource: slice, ...}, ...}\n One of these resources should match the `core_resource` argument.\n core_resource : object\n The resource identifier which represents cores.\n\n Returns\n -------\n {application: {(x, y) : set([c, ...]), ...}, ...}\n For each application, for each used chip a set of core numbers onto\n which the application should be loaded.\n \"\"\"\n application_map = defaultdict(lambda: defaultdict(set))\n\n for vertex, application in iteritems(vertices_applications):\n chip_cores = application_map[application][placements[vertex]]\n core_slice = allocations[vertex].get(core_resource, slice(0, 0))\n chip_cores.update(range(core_slice.start, core_slice.stop))\n\n return application_map"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef build_routing_tables(routes, net_keys, omit_default_routes=True):\n from rig.routing_table import routing_tree_to_tables, remove_default_routes\n\n warnings.warn(\n \"build_routing_tables() is deprecated, see \"\n \"rig.routing_table.routing_tree_to_tables()\"\n \"and rig.routing_table.minimise()\", DeprecationWarning\n )\n\n # Build full routing tables and then remove default entries from them\n tables = dict()\n\n for chip, table in iteritems(routing_tree_to_tables(routes, net_keys)):\n if omit_default_routes:\n table = remove_default_routes.minimise(table, target_length=None)\n\n # If the table is empty don't add it to the dictionary of tables.\n if table:\n tables[chip] = table\n\n return tables", "response": "Convert a set of routing trees 
into a set of routing tables."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nregister the Sentry integration.", "response": "def register_sentry(raven_client, namespace: Optional[str]=None,\n send_retries: bool=False):\n \"\"\"Register the Sentry integration.\n\n Exceptions making jobs fail are sent to Sentry.\n\n :param raven_client: configured Raven client used to sent errors to Sentry\n :param namespace: optionally only register the Sentry integration for a\n particular Spinach :class:`Engine`\n :param send_retries: whether to also send to Sentry exceptions resulting\n in a job being retried\n \"\"\"\n\n @signals.job_started.connect_via(namespace)\n def job_started(namespace, job, **kwargs):\n raven_client.context.activate()\n raven_client.transaction.push(job.task_name)\n\n @signals.job_finished.connect_via(namespace)\n def job_finished(namespace, job, **kwargs):\n raven_client.transaction.pop(job.task_name)\n raven_client.context.clear()\n\n @signals.job_failed.connect_via(namespace)\n def job_failed(namespace, job, **kwargs):\n raven_client.captureException(\n extra={attr: getattr(job, attr) for attr in job.__slots__}\n )\n\n if send_retries:\n @signals.job_schedule_retry.connect_via(namespace)\n def job_schedule_retry(namespace, job, **kwargs):\n raven_client.captureException(\n extra={attr: getattr(job, attr) for attr in job.__slots__}\n )"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef add_resource(mt_file, ref, cache):\n\n if isinstance(mt_file, MetapackDoc):\n doc = mt_file\n else:\n doc = MetapackDoc(mt_file)\n\n if not 'Resources' in doc:\n doc.new_section('Resources')\n\n doc['Resources'].args = [e for e in set(doc['Resources'].args + ['Name', 'StartLine', 'HeaderLines', 'Encoding']) if\n e]\n\n seen_names = set()\n\n u = parse_app_url(ref)\n\n # The web and file URLs don't list the same.\n\n if u.proto == 'file':\n entries = u.list()\n else:\n 
entries = [ssu for su in u.list() for ssu in su.list()]\n\n errors = []\n\n for e in entries:\n if not add_single_resource(doc, e, cache=cache, seen_names=seen_names):\n errors.append(e)\n\n if errors:\n prt()\n warn(\"Found, but failed to add these urls:\")\n for e in errors:\n print(' ', e)\n\n write_doc(doc, mt_file)", "response": "Add a resource entry to the file"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef bounds(self, pixelbuffer=0):\n left = self._left\n bottom = self._bottom\n right = self._right\n top = self._top\n if pixelbuffer:\n offset = self.pixel_x_size * float(pixelbuffer)\n left -= offset\n bottom -= offset\n right += offset\n top += offset\n # on global grids clip at northern and southern TilePyramid bound\n if self.tp.grid.is_global:\n top = min([top, self.tile_pyramid.top])\n bottom = max([bottom, self.tile_pyramid.bottom])\n return Bounds(left, bottom, right, top)", "response": "Return Tile boundaries.\n\n - pixelbuffer: tile buffer in pixels"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns an Affine object of tile.", "response": "def affine(self, pixelbuffer=0):\n \"\"\"\n Return an Affine object of tile.\n\n - pixelbuffer: tile buffer in pixels\n \"\"\"\n return Affine(\n self.pixel_x_size,\n 0,\n self.bounds(pixelbuffer).left,\n 0,\n -self.pixel_y_size,\n self.bounds(pixelbuffer).top\n )"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning a tuple of tile height and width.", "response": "def shape(self, pixelbuffer=0):\n \"\"\"\n Return a tuple of tile height and width.\n\n - pixelbuffer: tile buffer in pixels\n \"\"\"\n # apply pixelbuffers\n height = self._base_shape.height + 2 * pixelbuffer\n width = self._base_shape.width + 2 * pixelbuffer\n if pixelbuffer and self.tp.grid.is_global:\n # on first and last row, remove pixelbuffer on top or bottom\n matrix_height = 
self.tile_pyramid.matrix_height(self.zoom)\n if matrix_height == 1:\n height = self._base_shape.height\n elif self.row in [0, matrix_height - 1]:\n height = self._base_shape.height + pixelbuffer\n return Shape(height=height, width=width)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef is_valid(self):\n if not all([\n isinstance(self.zoom, int),\n self.zoom >= 0,\n isinstance(self.row, int),\n self.row >= 0,\n isinstance(self.col, int),\n self.col >= 0\n ]):\n raise TypeError(\"zoom, col and row must be integers >= 0\")\n cols = self.tile_pyramid.matrix_width(self.zoom)\n rows = self.tile_pyramid.matrix_height(self.zoom)\n if self.col >= cols:\n raise ValueError(\"col (%s) exceeds matrix width (%s)\" % (self.col, cols))\n if self.row >= rows:\n raise ValueError(\"row (%s) exceeds matrix height (%s)\" % (self.row, rows))\n return True", "response": "Return True if tile is available in tile pyramid."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning tile from previous zoom level.", "response": "def get_parent(self):\n \"\"\"Return tile from previous zoom level.\"\"\"\n return None if self.zoom == 0 else self.tile_pyramid.tile(\n self.zoom - 1, self.row // 2, self.col // 2\n )"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn tiles from next zoom level.", "response": "def get_children(self):\n \"\"\"Return tiles from next zoom level.\"\"\"\n next_zoom = self.zoom + 1\n return [\n self.tile_pyramid.tile(\n next_zoom,\n self.row * 2 + row_offset,\n self.col * 2 + col_offset\n )\n for row_offset, col_offset in [\n (0, 0), # top left\n (0, 1), # top right\n (1, 1), # bottom right\n (1, 0), # bottom left\n ]\n if all([\n self.row * 2 + row_offset < self.tp.matrix_height(next_zoom),\n self.col * 2 + col_offset < self.tp.matrix_width(next_zoom)\n ])\n ]"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script 
for\nreturning the set of unique tile neighbors for this tile.", "response": "def get_neighbors(self, connectedness=8):\n \"\"\"\n Return tile neighbors.\n\n Tile neighbors are unique, i.e. in some edge cases the left\n and right neighbor wrapped around the antimeridian are the same. Also,\n neighbors outside the northern and southern TilePyramid boundaries are\n excluded, because they are invalid.\n\n -------------\n | 8 | 1 | 5 |\n -------------\n | 4 | x | 2 |\n -------------\n | 7 | 3 | 6 |\n -------------\n\n - connectedness: [4 or 8] return four direct neighbors or all eight.\n \"\"\"\n if connectedness not in [4, 8]:\n raise ValueError(\"only connectedness values 8 or 4 are allowed\")\n\n unique_neighbors = {}\n # 4-connected neighbors for pyramid\n matrix_offsets = [\n (-1, 0), # 1: above\n (0, 1), # 2: right\n (1, 0), # 3: below\n (0, -1) # 4: left\n ]\n if connectedness == 8:\n matrix_offsets.extend([\n (-1, 1), # 5: above right\n (1, 1), # 6: below right\n (1, -1), # 7: below left\n (-1, -1) # 8: above left\n ])\n\n for row_offset, col_offset in matrix_offsets:\n new_row = self.row + row_offset\n new_col = self.col + col_offset\n # omit if row is outside of tile matrix\n if new_row < 0 or new_row >= self.tp.matrix_height(self.zoom):\n continue\n # wrap around antimeridian if new column is outside of tile matrix\n if new_col < 0:\n if not self.tp.is_global:\n continue\n new_col = self.tp.matrix_width(self.zoom) + new_col\n elif new_col >= self.tp.matrix_width(self.zoom):\n if not self.tp.is_global:\n continue\n new_col -= self.tp.matrix_width(self.zoom)\n # omit if new tile is current tile\n if new_row == self.row and new_col == self.col:\n continue\n # create new tile\n unique_neighbors[(new_row, new_col)] = self.tp.tile(\n self.zoom, new_row, new_col\n )\n\n return unique_neighbors.values()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef add_field(self, identifier, 
length=None, start_at=None, tags=None):\n # Check for zero-length fields\n if length is not None and length <= 0:\n raise ValueError(\"Fields must be at least one bit in length.\")\n\n # Check for fields which don't fit in the bit field\n if (start_at is not None and\n (0 <= start_at >= self.length or\n start_at + (length or 1) > self.length)):\n raise ValueError(\n \"Field doesn't fit within {}-bit bit field.\".format(\n self.length))\n\n # Check for fields which occupy the same bits\n if start_at is not None:\n end_at = start_at + (length or 1)\n for other_identifier, other_field in \\\n self.fields.potential_fields(self.field_values):\n if other_field.start_at is not None:\n other_start_at = other_field.start_at\n other_end_at = other_start_at + (other_field.length or 1)\n\n if end_at > other_start_at and other_end_at > start_at:\n raise ValueError(\n \"Field '{}' (range {}-{}) \"\n \"overlaps field '{}' (range {}-{})\".format(\n identifier,\n start_at, end_at,\n other_identifier,\n other_start_at, other_end_at))\n\n # Normalise tags type\n if type(tags) is str:\n tags = set(tags.split())\n elif tags is None:\n tags = set()\n else:\n tags = set(tags)\n\n # Add the field (checking that the identifier is unique in the process)\n field = type(self)._Field(length, start_at, tags)\n self.fields.add_field(field, identifier, self.field_values)\n\n # Add tags to all parents of this field\n for parent_identifier in self.fields.get_field_requirements(\n identifier, self.field_values):\n parent = self.fields.get_field(parent_identifier,\n self.field_values)\n parent.tags.update(tags)", "response": "Add a new field to the BitField."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_value(self, tag=None, field=None):\n assert not (tag is not None and field is not None), \\\n \"Cannot filter by tag and field simultaneously.\"\n\n selected_fields = self._select_by_field_or_tag(tag, field)\n\n # Check 
all selected fields have values defined\n missing_fields_idents = set(selected_fields) - set(self.field_values)\n if missing_fields_idents:\n raise ValueError(\n \"Cannot generate value with undefined fields {}.\".format(\n \", \".join(\"'{}'\".format(f)\n for f in missing_fields_idents)))\n\n # Build the value\n value = 0\n for identifier, field in iteritems(selected_fields):\n if field.length is None or field.start_at is None:\n raise ValueError(\n \"Field '{}' does not have a fixed size/position.\".format(\n identifier))\n value |= (self.field_values[identifier] <<\n field.start_at)\n\n return value", "response": "Generate an integer whose bits are set according to the values of the specified fields."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_mask(self, tag=None, field=None):\n if tag is not None and field is not None:\n raise TypeError(\"get_mask() takes exactly one keyword argument, \"\n \"either 'field' or 'tag' (both given)\")\n\n selected_fields = self._select_by_field_or_tag(tag, field)\n\n # Build the mask (and throw an exception if we encounter a field\n # without a fixed size/length.\n mask = 0\n for identifier, field in iteritems(selected_fields):\n if field.length is None or field.start_at is None:\n raise ValueError(\n \"Field '{}' does not have a fixed size/position.\".format(\n identifier))\n mask |= ((1 << field.length) - 1) << field.start_at\n\n return mask", "response": "Returns a bitmask for all fields which exist in the current bit field."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _select_by_field_or_tag(self, tag=None, field=None):\n # Get the set of fields whose values will be included in the value\n if field is not None:\n # Select just the specified field (checking the field exists)\n field_obj = self.fields.get_field(field, self.field_values)\n selected_fields = OrderedDict([(field, 
field_obj)])\n elif tag is not None:\n # Select just fields with the specified tag\n selected_fields = OrderedDict(\n (i, f)\n for (i, f) in self.fields.enabled_fields(self.field_values)\n if tag in f.tags)\n # Fail if no fields match the supplied tag. Because tags are\n # applied to parent fields in the hierarchy, it is guaranteed that\n # if a tag exists, at least one top-level (i.e. always present)\n # field will have the tag.\n if not selected_fields:\n raise UnknownTagError(tag)\n else:\n # No specific field/tag supplied, select all enabled fields.\n selected_fields = OrderedDict(\n (i, f)\n for (i, f) in self.fields.enabled_fields(self.field_values))\n\n return selected_fields", "response": "Internal use only. Returns an OrderedDict of fields which match the supplied field or tag."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_tags(self, field):\n return self.fields.get_field(field, self.field_values).tags.copy()", "response": "Get the set of tags for a given field."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_location_and_length(self, field):\n field_obj = self.fields.get_field(field, self.field_values)\n\n if field_obj.length is None or field_obj.start_at is None:\n raise ValueError(\n \"Field '{}' does not have a fixed size/position.\".format(\n field))\n\n return (field_obj.start_at, field_obj.length)", "response": "Get the location and length of a bitfield."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef assign_fields(self):\n # We must fix fields at every level of the hierarchy separately\n # (otherwise fields of children won't be allowed to overlap). 
Here we\n # do a breadth-first iteration over the hierarchy to fix fields with\n # given starting positions; then we do depth-first to fix other fields.\n\n # Assign all fields with a fixed starting position in breadth first,\n # top-down order. The breadth-first ensures that children's fixed\n # position fields must fit around the fixed position fields of their\n # parents.\n queue = [(self.fields, {})]\n while queue:\n node, field_values = queue.pop(0)\n\n # Assign all fields at this level whose position is fixed\n self._assign_fields(node.fields, field_values,\n assign_positions=False)\n\n # Breadth-first search through children\n for requirements, child in iteritems(node.children):\n requirements = dict(requirements)\n requirements.update(field_values)\n queue.append((child, requirements))\n\n # Assign all fields with movable starting positions in leaf-first,\n # depth-first order. The depth first ordering for variable position\n # fields ensures that parents don't allocate fields in positions which\n # would collide with fixed and variable position fields their children\n # have already allocated.\n\n def recurse_assign_fields(node=self.fields, field_values={}):\n # Assign fields of child nodes first (allowing them to allocate\n # bits independently)\n for requirements, child in iteritems(node.children):\n child_field_values = dict(requirements)\n child_field_values.update(field_values)\n recurse_assign_fields(child, child_field_values)\n\n # Finally, assign all remaining fields at this level in the tree\n self._assign_fields(node.fields, field_values,\n assign_positions=True)\n recurse_assign_fields()", "response": "Assign a position & length to any fields which do not have one."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _assign_fields(self, identifiers, field_values,\n assign_positions, assigned_bits=0):\n \"\"\"For internal use only. 
Assign lengths & positions to a subset of all\n potential fields with the supplied field_values.\n\n This method will check for any assigned bits of all potential fields\n but will only assign those fields whose identifiers are provided.\n\n Parameters\n ----------\n identifiers : iterable of identifiers\n The identifiers of the fields to assign\n field_values : {identifier: value, ...}\n The values held by various fields (used to access the correct\n identifiers)\n assign_positions : bool\n If False, will only assign lengths to fields whose positions are\n already known. Otherwise lengths and positions will be assigned to\n all fields as necessary.\n assigned_bits : int\n A bit mask of bits which are already allocated. (Note that this\n will automatically be extended with any already-assigned potential\n fields' bits.)\n\n Returns\n -------\n int\n Mask of which bits which are assigned to fields after fields have\n been assigned.\n \"\"\"\n # Calculate a mask of already allocated fields' bits\n for i, f in self.fields.potential_fields(field_values):\n if f.length is not None and f.start_at is not None:\n assigned_bits |= ((1 << f.length) - 1) << f.start_at\n\n # Allocate all specified fields\n for identifier in identifiers:\n field = self.fields.get_field(identifier, field_values)\n if field.length is not None and field.start_at is not None:\n # Already allocated, do nothing!\n pass\n elif assign_positions or field.start_at is not None:\n assigned_bits |= self._assign_field(assigned_bits,\n identifier,\n field_values)\n\n return assigned_bits", "response": "Internal method to assign fields to a subset of all the potential fields."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _load_resources(self):\n\n for t in self.doc.find('Root.Table'):\n for c in t.find('Table.Column'):\n if c.get_value('datatype') == 'geometry':\n c['transform'] = '^empty_str'\n c['datatype'] = 'text'\n\n return super()._load_resources()", 
"response": "Remove the geography from the files since it isn t particularly useful in Excel"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nconverting a bucket policy to a dict mapping principal and prefix names to R or W", "response": "def user_policy_to_dict(doc):\n \"\"\"Convert a bucket policy to a dict mapping principal/prefix names to 'R' or 'W' \"\"\"\n\n import json\n\n if not isinstance(doc, dict):\n doc = json.loads(doc)\n\n d = {}\n\n def get_statement(name):\n for s in doc['Statement']:\n if s['Sid'] == name:\n return s\n\n for r in get_statement('read')['Principal']['AWS']:\n bucket, prefix = r.replace(arn_prefix, '').replace('/*', '').split('/')\n d[(bucket, prefix.strip('/'))] = 'R'\n\n try:\n for r in get_statement('write')['Resource']:\n bucket, prefix = r.replace(arn_prefix, '').replace('/*', '').split('/')\n d[(bucket, prefix.strip('/'))] = 'W'\n except TypeError:\n pass # No write section\n\n return d"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef make_bucket_policy_statements(bucket):\n import yaml\n from os.path import dirname, join, abspath\n import copy\n import metatab\n\n with open(join(dirname(abspath(metatab.__file__)), 'support', 'policy_parts.yaml')) as f:\n parts = yaml.load(f)\n\n statements = {}\n\n cl = copy.deepcopy(parts['list'])\n cl['Resource'] = arn_prefix + bucket\n statements['list'] = cl\n\n cl = copy.deepcopy(parts['bucket'])\n cl['Resource'] = arn_prefix + bucket\n statements['bucket'] = cl\n\n for sd in TOP_LEVEL_DIRS:\n cl = copy.deepcopy(parts['read'])\n cl['Resource'] = arn_prefix + bucket + '/' + sd + '/*'\n cl['Sid'] = cl['Sid'].title() + sd.title()\n\n statements[cl['Sid']] = cl\n\n cl = copy.deepcopy(parts['write'])\n cl['Resource'] = arn_prefix + bucket + '/' + sd + '/*'\n cl['Sid'] = cl['Sid'].title() + sd.title()\n\n statements[cl['Sid']] = cl\n\n cl = copy.deepcopy(parts['listb'])\n cl['Resource'] = arn_prefix + 
bucket\n cl['Sid'] = cl['Sid'].title() + sd.title()\n cl['Condition']['StringLike']['s3:prefix'] = [sd + '/*']\n\n statements[cl['Sid']] = cl\n\n return statements", "response": "Return the statements in a bucket policy as a dict of dicts"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef bucket_dict_to_policy(args, bucket_name, d):\n\n import json\n\n iam = get_resource(args, 'iam')\n\n statements = make_bucket_policy_statements(bucket_name)\n\n user_stats = set() # statement triples\n\n for (user, prefix), mode in d.items():\n\n user_stats.add((user, 'list'))\n user_stats.add((user, 'bucket'))\n\n if mode == 'R':\n user_stats.add((user, 'Read' + prefix.title()))\n user_stats.add((user, 'List' + prefix.title()))\n elif mode == 'W':\n user_stats.add((user, 'List' + prefix.title()))\n user_stats.add((user, 'Read' + prefix.title()))\n user_stats.add((user, 'Write' + prefix.title()))\n\n users_arns = {}\n\n for user_name, section in user_stats:\n section = statements[section]\n\n if user_name not in users_arns:\n user = iam.User(user_name)\n users_arns[user.name] = user\n else:\n user = users_arns[user_name]\n\n section['Principal']['AWS'].append(user.arn)\n\n for sid in list(statements.keys()):\n if not statements[sid]['Principal']['AWS']:\n del statements[sid]\n\n return json.dumps(dict(Version=\"2012-10-17\", Statement=list(statements.values())), indent=4)", "response": "Create a bucket policy document from a permissions dict."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nproduce a dictionary of read write permissions for an existing bucket policy document", "response": "def bucket_policy_to_dict(policy):\n \"\"\"Produce a dictionary of read, write permissions for an existing bucket policy document\"\"\"\n import json\n\n if not isinstance(policy, dict):\n policy = json.loads(policy)\n\n statements = {s['Sid']: s for s in policy['Statement']}\n\n d = {}\n\n for rw in 
('Read', 'Write'):\n for prefix in TOP_LEVEL_DIRS:\n sid = rw.title() + prefix.title()\n\n if sid in statements:\n\n if isinstance(statements[sid]['Principal']['AWS'], list):\n\n for principal in statements[sid]['Principal']['AWS']:\n user_name = principal.split('/').pop()\n d[(user_name, prefix)] = rw[0]\n else:\n user_name = statements[sid]['Principal']['AWS'].split('/').pop()\n d[(user_name, prefix)] = rw[0]\n\n return d"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_iam_account(l, args, user_name):\n\n iam = get_resource(args, 'iam')\n user = iam.User(user_name)\n user.load()\n\n return l.find_or_new_account(user.arn)", "response": "Get the local Account for a user name by fetching User and looking up\n the arn."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nwraps for computeSVD that will normalize and handle a Thunder Images object", "response": "def getSVD(data, k, getComponents=False, getS=False, normalization='mean'):\n \"\"\" Wrapper for computeSVD that will normalize and handle a Thunder Images object\n\n :param data: Thunder Images object\n :param k: number of components to keep\n :param getComponents: will return the components if true, otherwise will return None\n\n :returns: projections, components, s\n \"\"\"\n if normalization == 'nanmean':\n data2 = data.tordd().sortByKey().values().map(lambda x: _convert_to_vector(x.flatten() - np.nanmean(x)))\n elif normalization == 'mean':\n data2 = data.tordd().sortByKey().values().map(lambda x: _convert_to_vector(x.flatten() - x.mean()))\n elif normalization is 'zscore':\n data2 = data.tordd().sortByKey().values().map(lambda x: _convert_to_vector(zscore(x.flatten())))\n elif normalization is None:\n data2 = data.tordd().sortByKey().values().map(lambda x: _convert_to_vector(x.flatten()))\n else:\n raise ValueError('Normalization should be one of: mean, nanmean, zscore, None. 
Got: %s' % normalization)\n\n mat = RowMatrix(data2)\n mat.rows.cache()\n mat.rows.count()\n svd = compute_svd(row_matrix=mat, k=k, compute_u=False)\n if getComponents:\n components = svd.call(\"V\").toArray()\n components = components.transpose(1, 0).reshape((k,) + data.shape[1:])\n else:\n components = None\n projection = np.array(RowMatrix_new(data2).multiply(svd.call(\"V\")).rows.collect())\n if getS:\n s = svd.call(\"s\").toArray()\n else:\n s = None\n return projection, components, s"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef human_duration(duration_seconds: float) -> str:\n if duration_seconds < 0.001:\n return '0 ms'\n if duration_seconds < 1:\n return '{} ms'.format(int(duration_seconds * 1000))\n return '{} s'.format(int(duration_seconds))", "response": "Convert a duration in seconds into a human friendly string."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncall a function and retry it on failure.", "response": "def call_with_retry(func: Callable, exceptions, max_retries: int,\n logger: Logger, *args, **kwargs):\n \"\"\"Call a function and retry it on failure.\"\"\"\n attempt = 0\n while True:\n try:\n return func(*args, **kwargs)\n except exceptions as e:\n attempt += 1\n if attempt >= max_retries:\n raise\n\n delay = exponential_backoff(attempt, cap=60)\n logger.warning('%s: retrying in %s', e, delay)\n time.sleep(delay.total_seconds())"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncalculate a delay to retry using an exponential backoff algorithm.", "response": "def exponential_backoff(attempt: int, cap: int=1200) -> timedelta:\n \"\"\"Calculate a delay to retry using an exponential backoff algorithm.\n\n It is an exponential backoff with random jitter to prevent failures\n from being retried at the same time. 
It is a good fit for most\n applications.\n\n :arg attempt: the number of attempts made\n :arg cap: maximum delay, defaults to 20 minutes\n \"\"\"\n base = 3\n temp = min(base * 2 ** attempt, cap)\n return timedelta(seconds=temp / 2 + random.randint(0, temp / 2))"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef handle_sigterm():\n    original_sigterm_handler = signal.getsignal(signal.SIGTERM)\n    signal.signal(signal.SIGTERM, signal.default_int_handler)\n    try:\n        yield\n    finally:\n        signal.signal(signal.SIGTERM, original_sigterm_handler)", "response": "Context manager that temporarily handles SIGTERM like SIGINT."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef float_to_fp(signed, n_bits, n_frac):\n    # Calculate the maximum and minimum values\n    if signed:\n        max_v = (1 << (n_bits - 1)) - 1\n        min_v = -max_v - 1\n    else:\n        min_v = 0\n        max_v = (1 << n_bits) - 1\n\n    # Compute the scale\n    scale = 2.0**n_frac\n\n    def bitsk(value):\n        \"\"\"Convert a floating point value to a fixed point value.\n\n        Parameters\n        ----------\n        value : float\n            The value to convert.\n        \"\"\"\n        int_val = int(scale * value)\n        return max((min(max_v, int_val), min_v))\n\n    return bitsk", "response": "Return a function to convert a floating point value to a fixed - point value."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns a function to convert a float to a fixed - point value.", "response": "def float_to_fix(signed, n_bits, n_frac):\n    \"\"\"**DEPRECATED** Return a function to convert a floating point value to a\n    fixed point value.\n\n    .. 
warning::\n This function is deprecated in favour of :py:meth:`~.float_to_fp`.\n\n For example, a function to convert a float to a signed fractional\n representation with 8 bits overall and 4 fractional bits (S3.4) can be\n constructed and used with::\n\n >>> s34 = float_to_fix(signed=True, n_bits=8, n_frac=4)\n >>> hex(s34(0.5))\n '0x8'\n\n The fixed point conversion is saturating::\n\n >>> q34 = float_to_fix(False, 8, 4) # Unsigned 4.4\n >>> hex(q34(-0.5))\n '0x0'\n\n >>> hex(q34(15.0))\n '0xf0'\n\n >>> hex(q34(16.0))\n '0xff'\n\n Parameters\n ----------\n signed : bool\n Whether the values that are to be converted should be signed, or\n clipped at zero.\n\n >>> hex(float_to_fix(True, 8, 4)(-0.5)) # Signed\n '0xf8'\n >>> hex(float_to_fix(False, 8, 4)(-0.5)) # Unsigned\n '0x0'\n\n .. note::\n Regardless of the value of the `signed` parameter the returned\n value is always an unsigned integer suitable for packing with the\n struct packing chars `B`, `H`, `I` etc.\n\n n_bits : int\n Total number of bits in the fixed-point representation (including sign\n bit and fractional bits).\n n_frac : int\n Number of fractional bits in the fixed-point representation.\n\n Raises\n ------\n ValueError\n If the number of bits specified is not possible. 
For example,\n requiring more fractional bits than there are bits overall will result\n in a `ValueError`::\n\n >>> fix_to_float(False, 8, 9)\n Traceback (most recent call last):\n ValueError: n_frac: 9: Must be less than 8 (and positive).\n \"\"\"\n warnings.warn(\"float_to_fix() is deprecated, see float_to_fp\",\n DeprecationWarning)\n\n mask = int(2**n_bits - 1)\n min_v, max_v = validate_fp_params(signed, n_bits, n_frac)\n\n # Saturate values\n def bitsk(value):\n \"\"\"Convert a floating point value to a fixed point value.\n\n Parameters\n ----------\n value : float\n The value to convert.\n \"\"\"\n value = np.clip(value, min_v, max_v)\n\n if value < 0:\n fp_val = (1 << n_bits) + int(value * 2**n_frac)\n else:\n fp_val = int(value * 2**n_frac)\n\n assert 0 <= fp_val < 1 << (n_bits + 1)\n return fp_val & mask\n\n return bitsk"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef fix_to_float(signed, n_bits, n_frac):\n warnings.warn(\"fix_to_float() is deprecated, see fp_to_float\",\n DeprecationWarning)\n\n validate_fp_params(signed, n_bits, n_frac)\n\n def kbits(value):\n \"\"\"Convert a fixed point value to a float.\n\n Parameters\n ----------\n value : int\n The fix point value as an integer.\n \"\"\"\n if signed and value & (1 << (n_bits - 1)):\n # If signed and negative\n value -= (1 << n_bits)\n\n # Unsigned or signed and positive\n return float(value) / (2.0**n_frac)\n\n return kbits", "response": "Return a function to convert a fixed - point value to a float."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nget reference metadata file path.", "response": "def get_metadata_path(name):\n \"\"\"Get reference metadata file path.\"\"\"\n return pkg_resources.resource_filename('voobly', os.path.join(METADATA_PATH, '{}.json'.format(name)))"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _save_cookies(requests_cookiejar, 
filename):\n with open(filename, 'wb') as handle:\n pickle.dump(requests_cookiejar, handle)", "response": "Save cookies to a file."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _make_request(session, url, argument=None, params=None, raw=False):\n if not params:\n params = {}\n params['key'] = session.auth.key\n try:\n if argument:\n request_url = '{}{}{}{}'.format(session.auth.base_url, VOOBLY_API_URL, url, argument)\n else:\n request_url = '{}{}'.format(VOOBLY_API_URL, url)\n resp = session.get(request_url, params=params)\n except RequestException:\n raise VooblyError('failed to connect')\n if resp.text == 'bad-key':\n raise VooblyError('bad api key')\n elif resp.text == 'too-busy':\n raise VooblyError('service too busy')\n elif not resp.text:\n raise VooblyError('no data returned')\n if raw:\n return resp.text\n try:\n return tablib.Dataset().load(resp.text).dict\n except UnsupportedFormat:\n raise VooblyError('unexpected error {}'.format(resp.text))", "response": "Make a request to the API endpoint."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nmaking a request to URL.", "response": "def make_scrape_request(session, url, mode='get', data=None):\n \"\"\"Make a request to URL.\"\"\"\n try:\n html = session.request(mode, url, data=data)\n except RequestException:\n raise VooblyError('failed to connect')\n if SCRAPE_FETCH_ERROR in html.text:\n raise VooblyError('not logged in')\n if html.status_code != 200 or SCRAPE_PAGE_NOT_FOUND in html.text:\n raise VooblyError('page not found')\n return bs4.BeautifulSoup(html.text, features='lxml')"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_ladder(session, ladder_id, user_id=None, user_ids=None, start=0, limit=LADDER_RESULT_LIMIT):\n params = {\n 'start': start,\n 'limit': limit\n }\n if isinstance(ladder_id, str):\n ladder_id = lookup_ladder_id(ladder_id)\n if limit > 
LADDER_RESULT_LIMIT:\n raise VooblyError('limited to 40 rows')\n if user_ids:\n params['uidlist'] = ','.join([str(uid) for uid in user_ids])\n elif user_id:\n params['uid'] = user_id\n resp = _make_request(session, LADDER_URL, ladder_id, params)\n if user_id:\n if not resp:\n raise VooblyError('user not ranked')\n return resp[0]\n return resp", "response": "Get a single ladder."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nget lobbies for a game.", "response": "def get_lobbies(session, game_id):\n \"\"\"Get lobbies for a game.\"\"\"\n if isinstance(game_id, str):\n game_id = lookup_game_id(game_id)\n lobbies = _make_request(session, LOBBY_URL, game_id)\n for lobby in lobbies:\n # pylint: disable=len-as-condition\n if len(lobby['ladders']) > 0:\n lobby['ladders'] = lobby['ladders'][:-1].split('|')\n return lobbies"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef find_user(session, username):\n resp = _make_request(session, FIND_USER_URL, username)\n if not resp:\n raise VooblyError('user not found')\n try:\n return int(resp[0]['uid'])\n except ValueError:\n raise VooblyError('user not found')", "response": "Find user by name - returns user ID."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nfind multiple users by name.", "response": "def find_users(session, *usernames):\n \"\"\"Find multiple users by name.\"\"\"\n user_string = ','.join(usernames)\n return _make_request(session, FIND_USERS_URL, user_string)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nget all possible user info by name.", "response": "def user(session, uid, ladder_ids=None):\n \"\"\"Get all possible user info by name.\"\"\"\n data = get_user(session, uid)\n resp = dict(data)\n if not ladder_ids:\n return resp\n resp['ladders'] = {}\n for ladder_id in ladder_ids:\n if isinstance(ladder_id, str):\n ladder_id = 
lookup_ladder_id(ladder_id)\n try:\n ladder_data = dict(get_ladder(session, ladder_id, user_id=uid))\n resp['ladders'][ladder_id] = ladder_data\n except VooblyError:\n # No ranking on ladder\n pass\n return resp"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef ladders(session, game_id):\n if isinstance(game_id, str):\n game_id = lookup_game_id(game_id)\n lobbies = get_lobbies(session, game_id)\n ladder_ids = set()\n for lobby in lobbies:\n ladder_ids |= set(lobby['ladders'])\n return list(ladder_ids)", "response": "Get a list of ladder IDs."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nwraps function to re - authenticate session.", "response": "def authenticated(function):\n \"\"\"Re-authenticate if session expired.\"\"\"\n def wrapped(session, *args, **kwargs):\n \"\"\"Wrap function.\"\"\"\n try:\n return function(session, *args, **kwargs)\n except VooblyError:\n with session.cache_disabled():\n _LOGGER.info(\"attempted to access page before login\")\n login(session)\n return function(session, *args, **kwargs)\n return wrapped"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_clan_matches(session, subdomain, clan_id, from_timestamp=None, limit=None):\n return get_recent_matches(session, 'https://{}.voobly.com/{}/{}/0'.format(\n subdomain, TEAM_MATCHES_URL, clan_id), from_timestamp, limit)", "response": "Get recent matches by clan."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nget recent matches by user.", "response": "def get_user_matches(session, user_id, from_timestamp=None, limit=None):\n \"\"\"Get recent matches by user.\"\"\"\n return get_recent_matches(session, '{}{}/{}/Matches/games/matches/user/{}/0'.format(\n session.auth.base_url, PROFILE_URL, user_id, user_id), from_timestamp, limit)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a 
function in Python 3 to\nget recently played user matches.", "response": "def get_recent_matches(session, init_url, from_timestamp, limit):\n \"\"\"Get recently played user matches.\"\"\"\n if not from_timestamp:\n from_timestamp = datetime.datetime.now() - datetime.timedelta(days=1)\n matches = []\n page_id = 0\n done = False\n while not done and page_id < MAX_MATCH_PAGE_ID:\n url = '{}/{}'.format(init_url, page_id)\n parsed = make_scrape_request(session, url)\n for row in parsed.find('table').find_all('tr')[1:]:\n cols = row.find_all('td')\n played_at = dateparser.parse(cols[2].text)\n match_id = int(cols[5].find('a').text[1:])\n has_rec = cols[6].find('a').find('img')\n if played_at < from_timestamp or (limit and len(matches) == limit):\n done = True\n break\n if not has_rec:\n continue\n matches.append({\n 'timestamp': played_at,\n 'match_id': match_id\n })\n if not matches:\n break\n page_id += 1\n return matches"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_ladder_matches(session, ladder_id, from_timestamp=None, limit=LADDER_MATCH_LIMIT):\n if not from_timestamp:\n from_timestamp = datetime.datetime.now() - datetime.timedelta(days=1)\n matches = []\n page_id = 0\n done = False\n i = 0\n while not done and page_id < MAX_LADDER_PAGE_ID:\n url = '{}{}/{}/{}'.format(session.auth.base_url, LADDER_MATCHES_URL, lookup_ladder_id(ladder_id), page_id)\n parsed = make_scrape_request(session, url)\n for row in parsed.find(text='Recent Matches').find_next('table').find_all('tr')[1:]:\n cols = row.find_all('td')\n played_at = dateparser.parse(cols[0].text)\n match_id = int(cols[1].find('a').text[1:])\n has_rec = cols[4].find('a').find('img')\n if not has_rec:\n continue\n if played_at < from_timestamp or i >= limit:\n done = True\n break\n matches.append({\n 'timestamp': played_at,\n 'match_id': match_id\n })\n i += 1\n page_id += 1\n return matches", "response": "Get recently played ladder matches."} 
{"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef download_rec(session, rec_url, target_path):\n try:\n resp = session.get(session.auth.base_url + rec_url)\n except RequestException:\n raise VooblyError('failed to connect for download')\n try:\n downloaded = zipfile.ZipFile(io.BytesIO(resp.content))\n downloaded.extractall(target_path)\n except zipfile.BadZipFile:\n raise VooblyError('invalid zip file')\n return downloaded.namelist()[0]", "response": "Download and extract a recorded game."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_session(key=None, username=None, password=None, cache=True,\n cache_expiry=datetime.timedelta(days=7), cookie_path=COOKIE_PATH, backend='memory',\n version=VERSION_GLOBAL):\n \"\"\"Get Voobly API session.\"\"\"\n class VooblyAuth(AuthBase): # pylint: disable=too-few-public-methods\n \"\"\"Voobly authorization storage.\"\"\"\n\n def __init__(self, key, username, password, cookie_path, version):\n \"\"\"Init.\"\"\"\n self.key = key\n self.username = username\n self.password = password\n self.cookie_path = cookie_path\n self.base_url = BASE_URLS[version]\n\n def __call__(self, r):\n \"\"\"Call is no-op.\"\"\"\n return r\n\n if version not in BASE_URLS:\n raise ValueError('unsupported voobly version')\n\n session = requests.session()\n if cache:\n session = requests_cache.core.CachedSession(expire_after=cache_expiry, backend=backend)\n session.auth = VooblyAuth(key, username, password, cookie_path, version)\n if os.path.exists(cookie_path):\n _LOGGER.info(\"cookie found at: %s\", cookie_path)\n session.cookies = _load_cookies(cookie_path)\n return session", "response": "Get a session object for a Voobly API."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef main():\n neutron_config.register_agent_state_opts_helper(CONF)\n common_config.init(sys.argv[1:])\n 
neutron_config.setup_logging()\n\n hyperv_agent = HyperVNeutronAgent()\n\n # Start everything.\n LOG.info(\"Agent initialized successfully, now running... \")\n hyperv_agent.daemon_loop()", "response": "The main entry point for the Hyper - V Neutron Agent."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _setup_qos_extension(self):\n if not CONF.AGENT.enable_qos_extension:\n return\n self._qos_ext = qos_extension.QosAgentExtension()\n self._qos_ext.consume_api(self)\n self._qos_ext.initialize(self._connection, 'hyperv')", "response": "Setup the QOS extension if it is required."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _provision_network(self, port_id, net_uuid, network_type,\n physical_network, segmentation_id):\n \"\"\"Provision the network with the received information.\"\"\"\n LOG.info(\"Provisioning network %s\", net_uuid)\n\n vswitch_name = self._get_vswitch_name(network_type, physical_network)\n if network_type == h_constant.TYPE_VLAN:\n # Nothing to do\n pass\n elif network_type == h_constant.TYPE_FLAT:\n # Nothing to do\n pass\n elif network_type == h_constant.TYPE_LOCAL:\n # TODO(alexpilotti): Check that the switch type is private\n # or create it if not existing.\n pass\n elif network_type == h_constant.TYPE_NVGRE and self._nvgre_enabled:\n self._nvgre_ops.bind_nvgre_network(segmentation_id, net_uuid,\n vswitch_name)\n else:\n raise exception.NetworkingHyperVException(\n (_(\"Cannot provision unknown network type \"\n \"%(network_type)s for network %(net_uuid)s\") %\n dict(network_type=network_type, net_uuid=net_uuid)))\n\n vswitch_map = {\n 'network_type': network_type,\n 'vswitch_name': vswitch_name,\n 'ports': [],\n 'vlan_id': segmentation_id}\n self._network_vswitch_map[net_uuid] = vswitch_map", "response": "Provision the network with the received information."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write 
the documentation\ndef _port_bound(self, port_id, network_id, network_type, physical_network,\n                    segmentation_id, port_security_enabled, set_port_sriov):\n        \"\"\"Bind the port to the received network.\"\"\"\n        super(HyperVNeutronAgent, self)._port_bound(\n            port_id, network_id, network_type, physical_network,\n            segmentation_id, port_security_enabled, set_port_sriov\n        )\n        vswitch_map = self._network_vswitch_map[network_id]\n\n        if network_type == h_constant.TYPE_VLAN:\n            self._vlan_driver.bind_vlan_port(port_id, segmentation_id)\n        elif network_type == h_constant.TYPE_NVGRE and self._nvgre_enabled:\n            self._nvgre_ops.bind_nvgre_port(\n                segmentation_id, vswitch_map['vswitch_name'], port_id)\n        elif network_type == h_constant.TYPE_FLAT:\n            pass  # Nothing to do\n        elif network_type == h_constant.TYPE_LOCAL:\n            pass  # Nothing to do\n        else:\n            LOG.error('Unsupported network type %s', network_type)\n\n        if self._enable_metrics_collection:\n            self._utils.add_metrics_collection_acls(port_id)\n            self._port_metric_retries[port_id] = self._metrics_max_retries\n\n        # check if security groups is enabled.\n        # if not, teardown the security group rules\n        if self._enable_security_groups:\n            self._sec_groups_agent.refresh_firewall([port_id])\n        else:\n            self._utils.remove_all_security_rules(port_id)\n\n        self._utils.set_vswitch_port_mac_spoofing(port_id,\n                                                  port_security_enabled)", "response": "Bind the port to the received network."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nprocess the information regarding the available ports.", "response": "def _work(self):\n        \"\"\"Process the information regarding the available ports.\"\"\"\n        super(HyperVNeutronAgent, self)._work()\n        if self._nvgre_enabled:\n            self._nvgre_ops.refresh_nvgre_records()\n        self._port_enable_control_metrics()"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nrunning a limited number of magics from scripts without IPython", "response": "def run_cell_magic(self, magic_name, line, 
cell):\n \"\"\"Run a limited number of magics from scripts, without IPython\"\"\"\n\n if magic_name == 'bash':\n self.shebang(\"bash\", cell)\n elif magic_name == 'metatab':\n self.mm.metatab(line, cell)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef system_piped(self, cmd):\n if cmd.rstrip().endswith('&'):\n # this is *far* from a rigorous test\n # We do not support backgrounding processes because we either use\n # pexpect or pipes to read from. Users can always just call\n # os.system() or use ip.system=ip.system_raw\n # if they really want a background process.\n raise OSError(\"Background processes not supported.\")\n\n # we explicitly do NOT return the subprocess status code, because\n # a non-None value would trigger :func:`sys.displayhook` calls.\n # Instead, we store the exit_code in user_ns.\n self.user_ns['_exit_code'] = system(self.var_expand(cmd, depth=1))", "response": "Call the given cmd in a subprocess piping stdout and err\n ArcGIS output."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef var_expand(self, cmd, depth=0, formatter=DollarFormatter()):\n ns = self.user_ns.copy()\n try:\n frame = sys._getframe(depth + 1)\n except ValueError:\n # This is thrown if there aren't that many frames on the stack,\n # e.g. 
if a script called run_line_magic() directly.\n pass\n else:\n ns.update(frame.f_locals)\n\n try:\n # We have to use .vformat() here, because 'self' is a valid and common\n # name, and expanding **ns for .format() would make it collide with\n # the 'self' argument of the method.\n cmd = formatter.vformat(cmd, args=[], kwargs=ns)\n except Exception:\n # if formatter couldn't format, just let it go untransformed\n pass\n return cmd", "response": "Expand python variables in a string."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nrunning a cell via a shell command.", "response": "def shebang(self, line, cell):\n \"\"\"Run a cell via a shell command\n\n The `%%script` line is like the #! line of script,\n specifying a program (bash, perl, ruby, etc.) with which to run.\n\n The rest of the cell is run by that program.\n\n Examples\n --------\n ::\n\n In [1]: %%script bash\n ...: for i in 1 2 3; do\n ...: echo $i\n ...: done\n 1\n 2\n 3\n \"\"\"\n argv = arg_split(line, posix=not sys.platform.startswith('win'))\n args, cmd = self.shebang.parser.parse_known_args(argv)\n\n try:\n p = Popen(cmd, stdout=PIPE, stderr=PIPE, stdin=PIPE)\n except OSError as e:\n if e.errno == errno.ENOENT:\n print(\"Couldn't find program: %r\" % cmd[0])\n return\n else:\n raise\n\n if not cell.endswith('\\n'):\n cell += '\\n'\n cell = cell.encode('utf8', 'replace')\n if args.bg:\n self.bg_processes.append(p)\n self._gc_bg_processes()\n if args.out:\n self.shell.user_ns[args.out] = p.stdout\n if args.err:\n self.shell.user_ns[args.err] = p.stderr\n self.job_manager.new(self._run_script, p, cell, daemon=True)\n if args.proc:\n self.shell.user_ns[args.proc] = p\n return\n\n try:\n out, err = p.communicate(cell)\n except KeyboardInterrupt:\n try:\n p.send_signal(signal.SIGINT)\n time.sleep(0.1)\n if p.poll() is not None:\n print(\"Process is interrupted.\")\n return\n p.terminate()\n time.sleep(0.1)\n if p.poll() is not None:\n print(\"Process is terminated.\")\n return\n 
p.kill()\n                print(\"Process is killed.\")\n            except OSError:\n                pass\n            except Exception as e:\n                print(\"Error while terminating subprocess (pid=%i): %s\" \\\n                    % (p.pid, e))\n            return\n        out = py3compat.bytes_to_str(out)\n        err = py3compat.bytes_to_str(err)\n        if args.out:\n            self.shell.user_ns[args.out] = out\n        else:\n            sys.stdout.write(out)\n            sys.stdout.flush()\n        if args.err:\n            self.shell.user_ns[args.err] = err\n        else:\n            sys.stderr.write(err)\n            sys.stderr.flush()"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ngenerate a listing of each vertex's immediate neighbours in an undirected interpretation of a graph", "response": "def _get_vertices_neighbours(nets):\n    \"\"\"Generate a listing of each vertex's immediate neighbours in an undirected\n    interpretation of a graph.\n\n    Returns\n    -------\n    {vertex: {vertex: weight, ...}, ...}\n    \"\"\"\n    zero_fn = (lambda: 0)\n    vertices_neighbours = defaultdict(lambda: defaultdict(zero_fn))\n    for net in nets:\n        if net.weight != 0:\n            for sink in net.sinks:\n                vertices_neighbours[net.source][sink] += net.weight\n                vertices_neighbours[sink][net.source] += net.weight\n    return vertices_neighbours"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _dfs(vertex, vertices_neighbours):\n    visited = set()\n    to_visit = deque([vertex])\n    while to_visit:\n        vertex = to_visit.pop()\n        if vertex not in visited:\n            yield vertex\n            visited.add(vertex)\n            to_visit.extend(vertices_neighbours[vertex])", "response": "Generate all the vertices connected to the supplied vertex in depth-first order."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nbreak a graph containing unconnected subgraphs into a list of connected subgraphs.", "response": "def _get_connected_subgraphs(vertices, vertices_neighbours):\n    \"\"\"Break a graph containing unconnected subgraphs into a list of connected\n    subgraphs.\n\n    Returns\n    -------\n    [set([vertex, ...]), ...]\n    \"\"\"\n    
remaining_vertices = set(vertices)\n subgraphs = []\n while remaining_vertices:\n subgraph = set(_dfs(remaining_vertices.pop(), vertices_neighbours))\n remaining_vertices.difference_update(subgraph)\n subgraphs.append(subgraph)\n\n return subgraphs"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _cuthill_mckee(vertices, vertices_neighbours):\n vertices_degrees = {v: sum(itervalues(vertices_neighbours[v]))\n for v in vertices}\n\n peripheral_vertex = min(vertices, key=(lambda v: vertices_degrees[v]))\n\n visited = set([peripheral_vertex])\n cm_order = [peripheral_vertex]\n previous_layer = set([peripheral_vertex])\n while len(cm_order) < len(vertices):\n adjacent = set()\n for vertex in previous_layer:\n adjacent.update(vertices_neighbours[vertex])\n adjacent.difference_update(visited)\n\n visited.update(adjacent)\n cm_order.extend(sorted(adjacent, key=(lambda v: vertices_degrees[v])))\n previous_layer = adjacent\n\n return cm_order", "response": "Yields the Cuthill - McKee order for a connected undirected graph."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef rcm_vertex_order(vertices_resources, nets):\n vertices_neighbours = _get_vertices_neighbours(nets)\n for subgraph_vertices in _get_connected_subgraphs(vertices_resources,\n vertices_neighbours):\n cm_order = _cuthill_mckee(subgraph_vertices, vertices_neighbours)\n for vertex in reversed(cm_order):\n yield vertex", "response": "A generator which iterates over the vertices in Reverse - Cuthill - McKee - vertex order."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef place(vertices_resources, nets, machine, constraints):\n return sequential_place(vertices_resources, nets,\n machine, constraints,\n rcm_vertex_order(vertices_resources, nets),\n rcm_chip_order(machine))", "response": "Places vertices in reverse - cuthill - McKee 
order."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef register_datadog(tracer=None, namespace: Optional[str]=None,\n service: str='spinach'):\n \"\"\"Register the Datadog integration.\n\n Exceptions making jobs fail are sent to Sentry.\n\n :param tracer: optionally use a custom ddtrace Tracer instead of the global\n one.\n :param namespace: optionally only register the Datadog integration for a\n particular Spinach :class:`Engine`\n :param service: Datadog service associated with the trace, defaults to\n `spinach`\n \"\"\"\n if tracer is None:\n from ddtrace import tracer\n\n @signals.job_started.connect_via(namespace)\n def job_started(namespace, job, **kwargs):\n tracer.trace(\n 'spinach.task', service=service, span_type='worker',\n resource=job.task_name\n )\n\n @signals.job_finished.connect_via(namespace)\n def job_finished(namespace, job, **kwargs):\n root_span = tracer.current_root_span()\n for attr in job.__slots__:\n root_span.set_tag(attr, getattr(job, attr))\n root_span.finish()\n\n @signals.job_failed.connect_via(namespace)\n def job_failed(namespace, job, **kwargs):\n root_span = tracer.current_root_span()\n root_span.set_traceback()\n\n @signals.job_schedule_retry.connect_via(namespace)\n def job_schedule_retry(namespace, job, **kwargs):\n root_span = tracer.current_root_span()\n root_span.set_traceback()", "response": "Register the Datadog integration."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef copy_reference(resource, doc, env, *args, **kwargs):\n\n yield from doc.reference(resource.name)", "response": "A row - generating function that yields from a reference"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef copy_reference_group(resource, doc, env, *args, **kwargs):\n\n all_headers = []\n\n # Combine all of the headers into a list of tuples 
by position\n for ref in doc.references():\n if ref.get_value('Group') == resource.get_value('Group'):\n for row in ref.iterrowproxy():\n all_headers.append(list(row.keys()))\n break\n\n # For each position, add the headers that are not already in the header set.\n # this merges the headers from all datasets, maintaining the order. mostly.\n\n headers = []\n for e in zip(*all_headers):\n for c in set(e):\n if c not in headers:\n headers.append(c)\n\n if resource.get_value('RefArgs'):\n ref_args = [e.strip() for e in resource.get_value('RefArgs').strip().split(',')]\n else:\n ref_args = []\n\n yield ref_args+headers\n\n for ref in doc.references():\n if ref.get_value('Group') == resource.get_value('Group'):\n ref_args_values = [ ref.get_value(e) for e in ref_args]\n\n for row in ref.iterdict:\n yield ref_args_values + [ row.get(c) for c in headers]", "response": "A generator function that copies all of the references that have the same Group argument as this reference group."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef is_older_than_metadata(self):\n\n try:\n path = self.doc_file.path\n except AttributeError:\n path = self.doc_file\n\n source_ref = self._doc.ref.path\n\n try:\n age_diff = getmtime(source_ref) - getmtime(path)\n\n return age_diff > 0\n\n except (FileNotFoundError, OSError):\n return False", "response": "Returns True if the package save file is older than the metadata."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _load_resource(self, source_r, abs_path=False):\n\n from itertools import islice\n from metapack.exc import MetapackError\n from os.path import splitext\n\n # Refetch the resource ... IIRC b/c the source_r resource may actually be from\n # a different package. 
So r is the resource we want to possibly modify in this package,\n # while source_r is from a different souce package, whose data is being loaded into this\n # one.\n\n r = self.datafile(source_r.name)\n\n if self.reuse_resources:\n self.prt(\"Re-using data for '{}' \".format(r.name))\n else:\n self.prt(\"Loading data for '{}' \".format(r.name))\n\n if not r.name:\n raise MetapackError(f\"Resource/reference term has no name: {str(r)}\")\n\n # Special handing for SQL should not be done here; it should be done in Rowgenerators, probably.\n if r.term_is('root.sql'):\n new_r = self.doc['Resources'].new_term('Root.Datafile', '')\n new_r.name = r.name\n\n self.doc.remove_term(r)\n\n r = new_r\n\n r.url = 'data/' + r.name + '.csv' # Re-writing the URL for the resource.\n\n path = join(self.package_path.path, r.url)\n\n makedirs(dirname(path), exist_ok=True)\n\n if not self.reuse_resources or not exists(path):\n\n if self.reuse_resources:\n self.prt(\"Resource {} doesn't exist, rebuilding\".format(path))\n\n if exists(path):\n remove(path)\n\n gen = islice(source_r, 1, None)\n headers = source_r.headers\n self.write_csv(path, headers, gen)\n\n for k, v in source_r.post_iter_meta.items():\n r[k] = v\n\n try:\n if source_r.errors:\n for col_name, errors in source_r.errors.items():\n self.warn(\"ERRORS for column '{}' \".format(col_name))\n for e in islice(errors,5):\n self.warn(' {}'.format(e))\n if len(errors) > 5:\n self.warn(\"... and {} more \".format(len(errors)-5))\n except AttributeError:\n pass # Maybe generator does not track errors\n\n if source_r.errors:\n self.err(\"Resource processing generated conversion errors\")\n\n # Writing between resources so row-generating programs and notebooks can\n # access previously created resources. We have to clean the doc before writing it\n\n ref = self._write_doc()\n\n # What a wreck ... 
we also have to get rid of the 'Transform' values, since the CSV files\n # that are written don't need them, and a lot of intermediate processsing ( specifically,\n # jupyter Notebooks, ) does not load them.\n p = FileSystemPackageBuilder(ref, self.package_root)\n p._clean_doc()\n\n ref = p._write_doc()", "response": "Load the resource from a source_r into the current resource and return the resource object."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nloads a single documentation entry from a term.", "response": "def _load_documentation(self, term, contents, file_name):\n \"\"\"Load a single documentation entry\"\"\"\n\n try:\n title = term['title'].value\n except KeyError:\n self.warn(\"Documentation has no title, skipping: '{}' \".format(term.value))\n return\n\n if term.term_is('Root.Readme'): # This term type has inline content, not a url\n package_sub_dir = 'docs'\n else:\n try:\n eu = term.expanded_url\n parsed_url = term.parsed_url\n except AttributeError:\n parsed_url = eu = parse_app_url(term.value)\n\n # Can't used expanded_url here because expansion makes file system URLS absolute.\n if eu.proto == 'file' and not parsed_url.path_is_absolute:\n package_sub_dir = parsed_url.fspath.parent\n else:\n package_sub_dir = 'docs'\n\n\n path = join(self.package_path.path, package_sub_dir, file_name)\n\n self.prt(\"Loading documentation for '{}', '{}' to '{}' \".format(title, file_name, path))\n\n makedirs(dirname(path), exist_ok=True)\n\n if exists(path):\n remove(path)\n\n with open(path, 'wb') as f:\n f.write(contents)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef serialize(obj):\n if isinstance(obj, datetime.datetime):\n serial = obj.isoformat(sep='T')\n return serial\n\n if isinstance(obj, uuid.UUID):\n serial = str(obj)\n return serial\n\n try:\n return obj.__dict__\n except AttributeError:\n return str(obj)\n except Exception as e:\n strval = 
'unknown obj'\n exceptval = 'unknown err'\n try:\n strval = str(obj)\n exceptval = repr(e)\n except Exception:\n pass\n return 'json fail {} {}'.format(exceptval, strval)", "response": "JSON serializer for objects not serializable by default json code"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning the payload of the record.", "response": "def _getPayload(self, record):\n \"\"\"\n The data that will be sent to the RESTful API\n \"\"\"\n\n try:\n # top level payload items\n d = record.__dict__\n pid = d.pop('process', 'nopid')\n tid = d.pop('thread', 'notid')\n\n payload = {\n k: v for (k, v) in d.items()\n if k in TOP_KEYS\n }\n\n # logging meta attributes\n payload['meta'] = {\n k: v for (k, v) in d.items()\n if k in META_KEYS\n }\n\n # everything else goes in details\n payload['details'] = {\n k: simple_json(v) for (k, v) in d.items()\n if k not in self.detail_ignore_set\n }\n\n payload['log'] = payload.pop('name', 'n/a')\n payload['level'] = payload.pop('levelname', 'n/a')\n payload['meta']['line'] = payload['meta'].pop('lineno', 'n/a')\n\n payload['message'] = record.getMessage()\n tb = self._getTraceback(record)\n if tb:\n payload['traceback'] = tb\n\n except Exception as e:\n payload = {\n 'level': 'ERROR',\n 'message': 'could not format',\n 'exception': repr(e),\n }\n payload['pid'] = 'p-{}'.format(pid)\n payload['tid'] = 't-{}'.format(tid)\n return payload"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _prepPayload(self, record):\n payload = self._getPayload(record)\n json_data = json.dumps(payload, default=serialize)\n\n return {\n 'json': (json_data, 'application/json')\n }.get(self.content_type, (json_data, 'text/plain'))", "response": "Preps the payload to be formatted in whatever content - type is\n expected from the RESTful API."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef 
emit(self, record):\n # avoid infinite recursion\n if record.name.startswith('requests'):\n return\n\n data, header = self._prepPayload(record)\n\n try:\n self.session.post(self._getEndpoint(),\n data=data,\n headers={'content-type': header})\n except Exception:\n self.handleError(record)", "response": "Send log record to RESTful API."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning the payload of the loggly message.", "response": "def _getPayload(self, record):\n \"\"\"\n The data that will be sent to loggly.\n \"\"\"\n payload = super(LogglyHandler, self)._getPayload(record)\n payload['tags'] = self._implodeTags()\n\n return payload"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\noverride emit method for sending log to RESTful", "response": "def emit(self, record):\n \"\"\"\n Override emit() method in handler parent for sending log to RESTful\n API\n \"\"\"\n\n pid = os.getpid()\n if pid != self.pid:\n self.pid = pid\n self.logs = []\n self.timer = self._flushAndRepeatTimer()\n atexit.register(self._stopFlushTimer)\n\n # avoid infinite recursion\n if record.name.startswith('requests'):\n return\n\n self.logs.append(self._prepPayload(record))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning the non versioned name", "response": "def nonver_name(self):\n \"\"\"Return the non versioned name\"\"\"\n nv = self.as_version(None)\n if not nv:\n import re\n nv = re.sub(r'-[^-]+$', '', self.name)\n\n return nv"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning the Root. 
Description, possibly combining multiple terms.", "response": "def wrappable_term(self, term):\n \"\"\"Return the Root.Description, possibly combining multiple terms.\n :return:\n \"\"\"\n\n return ' '.join(e.value.strip() for e in self['Root'].find(term) if e and e.value)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef set_wrappable_term(self, v, term):\n\n import textwrap\n\n for t in self['Root'].find(term):\n self.remove_term(t)\n\n for l in textwrap.wrap(v, 80):\n self['Root'].new_term(term, l)", "response": "Set the Root.Description, possibly splitting long descriptions across multiple terms."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nloads the lib directory as a python module so it can be used to provide functions for rowpipe transforms. This only works for filesystem packages.", "response": "def get_lib_module_dict(self):\n \"\"\"Load the 'lib' directory as a python module, so it can be used to provide functions\n for rowpipe transforms. 
This only works filesystem packages\"\"\"\n\n from importlib import import_module\n\n\n if not self.ref:\n return {}\n\n u = parse_app_url(self.ref)\n\n if u.scheme == 'file':\n\n if not self.set_sys_path():\n return {}\n\n for module_name in self.lib_dir_names:\n\n try:\n m = import_module(module_name)\n return {k: v for k, v in m.__dict__.items() if not k.startswith('__')}\n except ModuleNotFoundError as e:\n # We need to know if it is the datapackage's module that is missing\n # or if it is a module that it imported\n if not module_name in str(e):\n raise # If not our module, it's a real error.\n\n continue\n\n\n else:\n return {}"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _repr_html_(self, **kwargs):\n from jinja2 import Template\n from markdown import markdown as convert_markdown\n\n extensions = [\n 'markdown.extensions.extra',\n 'markdown.extensions.admonition'\n ]\n\n return convert_markdown(self.markdown, extensions)", "response": "Produce HTML for Jupyter Notebook"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef write_csv(self, path=None):\n\n # Sort the Sections\n\n self.sort_sections(['Root', 'Contacts', 'Documentation', 'References', 'Resources', 'Citations', 'Schema'])\n\n # Sort Terms in the root section\n\n # Re-wrap the description and abstract\n if self.description:\n self.description = self.description\n\n if self.abstract:\n self.description = self.abstract\n\n t = self['Root'].get_or_new_term('Root.Modified')\n t.value = datetime_now()\n\n self.sort_by_term()\n\n return super().write_csv(str(path))", "response": "Write the CSV file."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef dimensions_wizard():\n option = yield MultipleChoice(\n \"What type of SpiNNaker system to you have?\",\n [\"A single four-chip 'SpiNN-3' board\",\n \"A single forty-eight-chip 
'SpiNN-5' board\",\n \"Multiple forty-eight-chip 'SpiNN-5' boards\",\n \"Other\"],\n None)\n assert 0 <= option < 4\n\n if option == 0:\n raise Success({\"dimensions\": (2, 2)})\n elif option == 1:\n raise Success({\"dimensions\": (8, 8)})\n elif option == 2:\n # Infer the system's dimensions from the number of boards supplied\n num_boards = yield Text(\"How many 'SpiNN-5' boards are in the system?\")\n try:\n w, h = standard_system_dimensions(int(num_boards))\n except ValueError:\n # May fail due to integer conversion or the function rejecting the\n # number of boards.\n raise Failure(\n \"'{}' is not a valid number of boards.\".format(num_boards))\n raise Success({\"dimensions\": (w, h)})\n else:\n dimensions = yield Text(\n \"What are the dimensions of the network in chips (e.g. 24x12)?\")\n match = re.match(r\"\\s*(\\d+)\\s*[xX]\\s*(\\d+)\\s*\", dimensions)\n if not match:\n raise Failure(\"'{}' is not a valid system size.\".format(\n dimensions))\n else:\n w = int(match.group(1))\n h = int(match.group(2))\n raise Success({\"dimensions\": (w, h)})", "response": "A wizard which attempts to determine the dimensions of a SpiNNaker system."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef cli_wrapper(generator):\n first = True\n response = None\n while True:\n # Insert blank lines between prompts\n if not first:\n print()\n first = False\n\n try:\n message = generator.send(response)\n\n if isinstance(message, MultipleChoice):\n print(message.question)\n for num, choice in enumerate(message.options):\n print(\" {}: {}\".format(num, choice))\n option = input(\"Select an option 0-{}{}: \".format(\n len(message.options) - 1,\n \" (default: {})\".format(message.default)\n if message.default is not None else \"\"))\n\n if option == \"\" and message.default is not None:\n option = message.default\n\n try:\n response = int(option)\n except ValueError:\n response = -1\n if not (0 <= response < 
len(message.options)):\n print(\"ERROR: {} is not a valid option.\".format(option))\n return None\n elif isinstance(message, Text):\n print(message.question)\n response = input(\"> \")\n elif isinstance(message, Prompt):\n print(message.message)\n input(\"\")\n response = None\n elif isinstance(message, Info): # pragma: no branch\n print(message.message)\n response = None\n\n except Failure as f:\n print(\"ERROR: {}\".format(str(f)))\n return None\n except Success as s:\n return s.data", "response": "A wrapper that implements an interactive, human-friendly command-line interface for a wizard."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _net_cost(net, placements, has_wrap_around_links, machine):\n # This function is by far the hottest code in the entire algorithm, as a\n # result, small performance improvements in here can have significant\n # impact on the runtime of the overall algorithm. As an unfortunate side\n # effect, this code is rather ugly since many higher-level constructs (e.g.\n # min/max) are outrageously slow.\n\n # XXX: This does not account for the hexagonal properties of the SpiNNaker\n # topology.\n if has_wrap_around_links:\n # When wrap-around links exist, we find the minimal bounding box and\n # return the HPWL weighted by the net weight. 
To do this the largest\n # gap between any pair of vertices is found::\n #\n # | x x x |\n # ^-------------^\n # max gap\n #\n # The minimal bounding box then goes the other way around::\n #\n # | x x x |\n # ----------^ ^---\n\n # First we collect the x and y coordinates of all vertices in the net\n # into a pair of (sorted) lists, xs and ys.\n x, y = placements[net.source]\n num_vertices = len(net.sinks) + 1\n xs = [x] * num_vertices\n ys = [y] * num_vertices\n i = 1\n for v in net.sinks:\n x, y = placements[v]\n xs[i] = x\n ys[i] = y\n i += 1\n\n xs.sort()\n ys.sort()\n\n # The minimal bounding box is then found as above.\n x_max_delta = 0\n last_x = xs[-1] - machine.width\n for x in xs:\n delta = x - last_x\n last_x = x\n if delta > x_max_delta:\n x_max_delta = delta\n\n y_max_delta = 0\n last_y = ys[-1] - machine.height\n for y in ys:\n delta = y - last_y\n last_y = y\n if delta > y_max_delta:\n y_max_delta = delta\n\n return (((machine.width - x_max_delta) +\n (machine.height - y_max_delta)) *\n net.weight *\n math.sqrt(len(net.sinks) + 1))\n else:\n # When no wrap-around links, find the bounding box around the vertices\n # in the net and return the HPWL weighted by the net weight.\n x1, y1 = x2, y2 = placements[net.source]\n for vertex in net.sinks:\n x, y = placements[vertex]\n x1 = x if x < x1 else x1\n y1 = y if y < y1 else y1\n x2 = x if x > x2 else x2\n y2 = y if y > y2 else y2\n\n return (((x2 - x1) + (y2 - y1)) *\n float(net.weight) *\n math.sqrt(len(net.sinks) + 1))", "response": "This function calculates the cost of a given net."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _vertex_net_cost(vertex, v2n, placements, has_wrap_around_links, machine):\n total_cost = 0.0\n for net in v2n[vertex]:\n total_cost += _net_cost(net, placements, has_wrap_around_links,\n machine)\n\n return total_cost", "response": "Get the total cost of the nets connected to the given vertex."} {"SOURCE": 
"codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _get_candidate_swap(resources, location,\n l2v, vertices_resources, fixed_vertices, machine):\n \"\"\"Given a chip location, select a set of vertices which would have to be\n moved elsewhere to accommodate the arrival of the specified set of\n resources.\n\n Parameters\n ----------\n resources : {resource: value, ...}\n The amount of resources which are required at the specified location.\n location : (x, y)\n The coordinates of the chip where the resources are sought.\n l2v : {(x, y): [vertex, ...], ...}\n vertices_resources : {vertex: {resource: value, ...}, ...}\n fixed_vertices : {vertex, ...}\n machine : :py:class:`rig.place_and_route.Machine`\n\n Returns\n -------\n [Vertex, ...] or None\n If a (possibly empty) list, gives the set of vertices which should be\n removed from the specified location to make room.\n\n If None, the situation is impossible.\n \"\"\"\n # The resources already available at the given location\n chip_resources = machine[location]\n\n # The set of vertices at that location\n vertices = l2v[location]\n\n # The set of vertices to be moved from the location to free up the\n # specified amount of resources\n to_move = []\n\n # While there's not enough free resource, remove an arbitrary (movable)\n # vertex from the chip.\n i = 0\n while overallocated(subtract_resources(chip_resources, resources)):\n if i >= len(vertices):\n # Run out of vertices to remove from this chip, thus the situation\n # must be impossible.\n return None\n elif vertices[i] in fixed_vertices:\n # Can't move fixed vertices, just skip them.\n i += 1\n continue\n else:\n # Work out the cost change when we remove the specified vertex\n vertex = vertices[i]\n chip_resources = add_resources(chip_resources,\n vertices_resources[vertex])\n to_move.append(vertex)\n i += 1\n\n return to_move", "response": "Given a set of vertices and a set of resources and a machine return a list of 
vertices that should be moved elsewhere to accommodate the arrival of the specified set of resources."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nswap the positions of two sets of vertices in a single tree.", "response": "def _swap(vas, vas_location, vbs, vbs_location, l2v, vertices_resources,\n placements, machine):\n \"\"\"Swap the positions of two sets of vertices.\n\n Parameters\n ----------\n vas : [vertex, ...]\n A set of vertices currently at vas_location.\n vas_location : (x, y)\n vbs : [vertex, ...]\n A set of vertices currently at vbs_location.\n vbs_location : (x, y)\n l2v : {(x, y): [vertex, ...], ...}\n vertices_resources : {vertex: {resource: value, ...}, ...}\n placements : {vertex: (x, y), ...}\n machine : :py:class:`rig.place_and_route.Machine`\n \"\"\"\n # Get the lists of vertices at either location\n vas_location2v = l2v[vas_location]\n vbs_location2v = l2v[vbs_location]\n\n # Get the resource availability at either location\n vas_resources = machine[vas_location]\n vbs_resources = machine[vbs_location]\n\n # Move all the vertices in vas into vbs.\n for va in vas:\n # Update the placements\n placements[va] = vbs_location\n\n # Update the location-to-vertex lookup\n vas_location2v.remove(va)\n vbs_location2v.append(va)\n\n # Update the resource consumption after the move\n resources = vertices_resources[va]\n vas_resources = add_resources(vas_resources, resources)\n vbs_resources = subtract_resources(vbs_resources, resources)\n\n for vb in vbs:\n # Update the placements\n placements[vb] = vas_location\n\n # Update the location-to-vertex lookup\n vbs_location2v.remove(vb)\n vas_location2v.append(vb)\n\n # Update the resource consumption after the move\n resources = vertices_resources[vb]\n vas_resources = subtract_resources(vas_resources, resources)\n vbs_resources = add_resources(vbs_resources, resources)\n\n # Update the resources in the machine\n machine[vas_location] = vas_resources\n machine[vbs_location] = 
vbs_resources"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nattempts a single swap operation: the kernel of the Simulated Annealing algorithm. Parameters ---------- vertices : [vertex, ...] The set of *movable* vertices. d_limit : int The maximum distance over-which swaps are allowed. temperature : float > 0.0 or None The temperature (i.e. likelihood of accepting a non-advantageous swap). Higher temperatures mean higher chances of accepting a swap. placements : {vertex: (x, y), ...} The positions of all vertices, will be updated if a swap is made. l2v : {(x, y): [vertex, ...], ...} Lookup from chip to vertices, will be updated if a swap is made. v2n : {vertex: [:py:class:`rig.netlist.Net`, ...], ...} Lookup from vertex to all nets that vertex is in. vertices_resources : {vertex: {resource: value, ...}, ...} fixed_vertices : {vertex, ...} The set of vertices which must not be moved. machine : :py:class:`rig.place_and_route.Machine` Describes the state of the machine including the resources actually available on each chip given the current placements. Updated if a swap is made. has_wrap_around_links : bool Should the placements attempt to make use of wrap-around links? random : :py:class:`random.Random` The random number generator to use. Returns ------- (swapped, delta) swapped is a boolean indicating if a swap was made. delta is a float indicating the change in cost resulting from the swap (or 0.0 when no swap is made).", "response": "def _step(vertices, d_limit, temperature, placements, l2v, v2n,\n vertices_resources, fixed_vertices, machine, has_wrap_around_links,\n random):\n \"\"\"Attempt a single swap operation: the kernel of the Simulated Annealing\n algorithm.\n\n Parameters\n ----------\n vertices : [vertex, ...]\n The set of *movable* vertices.\n d_limit : int\n The maximum distance over-which swaps are allowed.\n temperature : float > 0.0 or None\n The temperature (i.e. 
likelihood of accepting a non-advantageous swap).\n Higher temperatures mean higher chances of accepting a swap.\n placements : {vertex: (x, y), ...}\n The positions of all vertices, will be updated if a swap is made.\n l2v : {(x, y): [vertex, ...], ...}\n Lookup from chip to vertices, will be updated if a swap is made.\n v2n : {vertex: [:py:class:`rig.netlist.Net`, ...], ...}\n Lookup from vertex to all nets that vertex is in.\n vertices_resources : {vertex: {resource: value, ...}, ...}\n fixed_vertices : {vertex, ...}\n The set of vertices which must not be moved.\n machine : :py:class:`rig.place_and_route.Machine`\n Describes the state of the machine including the resources actually\n available on each chip given the current placements. Updated if a swap\n is made.\n has_wrap_around_links : bool\n Should the placements attempt to make use of wrap-around links?\n random : :py:class:`random.Random`\n The random number generator to use.\n\n Returns\n -------\n (swapped, delta)\n swapped is a boolean indicating if a swap was made.\n\n delta is a float indicating the change in cost resulting from the swap\n (or 0.0 when no swap is made).\n \"\"\"\n # Special case: If the machine is a singleton, no swaps can be made so just\n # terminate.\n if machine.width == 1 and machine.height == 1:\n return (False, 0.0)\n\n # Select a vertex to swap at random\n src_vertex = random.choice(vertices)\n\n # Select a random (nearby) location to swap the vertex with. 
Note: this is\n # guaranteed to be different from the selected vertex, otherwise the swap\n # cannot change the cost of the placements.\n # XXX: Does not consider hexagonal properties of the system!\n src_location = placements[src_vertex]\n dst_location = src_location\n while dst_location == src_location:\n if has_wrap_around_links:\n dst_location = tuple(random.randint(v - d_limit,\n v + d_limit) % limit\n for v, limit\n in [(src_location[0], machine.width),\n (src_location[1], machine.height)])\n else:\n dst_location = tuple(random.randint(max(v - d_limit, 0),\n min(v + d_limit, limit-1))\n for v, limit\n in [(src_location[0], machine.width),\n (src_location[1], machine.height)])\n\n # If we've inadvertently selected a dead chip to swap to, abort the swap.\n if dst_location not in machine:\n return (False, 0.0)\n\n # Find out which vertices (if any) must be swapped out of the destination\n # to make room for the vertex we're moving.\n src_resources = vertices_resources[src_vertex]\n dst_vertices = _get_candidate_swap(src_resources, dst_location,\n l2v, vertices_resources,\n fixed_vertices, machine)\n\n # The destination simply isn't big enough (no matter how many vertices at\n # the destination are moved), abort the swap.\n if dst_vertices is None:\n return (False, 0.0)\n\n # Make sure that any vertices moved out of the destination will fit in the\n # space left in the source location. 
If there isn't enough space, abort the\n # swap.\n resources = machine[src_location]\n resources = add_resources(resources, src_resources)\n for dst_vertex in dst_vertices:\n resources = subtract_resources(resources,\n vertices_resources[dst_vertex])\n if overallocated(resources):\n return (False, 0.0)\n\n # Work out the cost of the nets involved *before* swapping\n cost_before = _vertex_net_cost(src_vertex, v2n, placements,\n has_wrap_around_links, machine)\n for dst_vertex in dst_vertices:\n cost_before += _vertex_net_cost(dst_vertex, v2n, placements,\n has_wrap_around_links, machine)\n\n # Swap the vertices\n _swap([src_vertex], src_location,\n dst_vertices, dst_location,\n l2v, vertices_resources, placements, machine)\n\n # Work out the new cost\n cost_after = _vertex_net_cost(src_vertex, v2n, placements,\n has_wrap_around_links, machine)\n for dst_vertex in dst_vertices:\n cost_after += _vertex_net_cost(dst_vertex, v2n, placements,\n has_wrap_around_links, machine)\n\n # If the swap was beneficial, keep it, otherwise keep it with a probability\n # related to just how bad the cost change is is and the temperature.\n delta = cost_after - cost_before\n if delta <= 0.0 or random.random() < math.exp(-delta/temperature):\n # Keep the swap!\n return (True, delta)\n else:\n # Revert the swap\n _swap([src_vertex], dst_location,\n dst_vertices, src_location,\n l2v, vertices_resources, placements, machine)\n return (False, 0.0)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _reset(self):\n self._subscriber_thread = None\n self._must_stop = threading.Event()\n self._number_periodic_tasks = 0", "response": "Reset the internal state of the broker."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _load_script(self, filename: str) -> Script:\n with open(path.join(here, 'redis_scripts', filename), mode='rb') as f:\n script_data = f.read()\n rv = 
self._r.register_script(script_data)\n if script_data.startswith(b'-- idempotency protected script'):\n self._idempotency_protected_scripts.append(rv)\n return rv", "response": "Load a Lua script file and return a Script object."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef enqueue_jobs(self, jobs: Iterable[Job]):\n jobs_to_queue = list()\n for job in jobs:\n if job.should_start:\n job.status = JobStatus.QUEUED\n else:\n job.status = JobStatus.WAITING\n jobs_to_queue.append(job.serialize())\n\n if jobs_to_queue:\n self._run_script(\n self._enqueue_job,\n self._to_namespaced(NOTIFICATIONS_KEY),\n self._to_namespaced(RUNNING_JOBS_KEY.format(self._id)),\n self.namespace,\n self._to_namespaced(FUTURE_JOBS_KEY),\n *jobs_to_queue\n )", "response": "Enqueue a batch of jobs."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_jobs_from_queue(self, queue: str, max_jobs: int) -> List[Job]:\n jobs_json_string = self._run_script(\n self._get_jobs_from_queue,\n self._to_namespaced(queue),\n self._to_namespaced(RUNNING_JOBS_KEY.format(self._id)),\n JobStatus.RUNNING.value,\n max_jobs\n )\n\n jobs = json.loads(jobs_json_string.decode())\n jobs = [Job.deserialize(job) for job in jobs]\n\n return jobs", "response": "Get jobs from a queue."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef register_periodic_tasks(self, tasks: Iterable[Task]):\n tasks = [task.serialize() for task in tasks]\n self._number_periodic_tasks = len(tasks)\n self._run_script(\n self._register_periodic_tasks,\n math.ceil(datetime.now(timezone.utc).timestamp()),\n self._to_namespaced(PERIODIC_TASKS_HASH_KEY),\n self._to_namespaced(PERIODIC_TASKS_QUEUE_KEY),\n *tasks\n )", "response": "Register tasks that need to be scheduled periodically."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 
script for\ngetting the next periodic task schedule.", "response": "def inspect_periodic_tasks(self) -> List[Tuple[int, str]]:\n \"\"\"Get the next periodic task schedule.\n\n Used only for debugging and during tests.\n \"\"\"\n rv = self._r.zrangebyscore(\n self._to_namespaced(PERIODIC_TASKS_QUEUE_KEY),\n '-inf', '+inf', withscores=True\n )\n return [(int(r[1]), r[0].decode()) for r in rv]"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ngiving the amount of seconds before the next periodic task is due.", "response": "def next_future_periodic_delta(self) -> Optional[float]:\n \"\"\"Give the amount of seconds before the next periodic task is due.\"\"\"\n rv = self._r.zrangebyscore(\n self._to_namespaced(PERIODIC_TASKS_QUEUE_KEY),\n '-inf', '+inf', start=0, num=1, withscores=True,\n score_cast_func=int\n )\n if not rv:\n return None\n\n now = datetime.now(timezone.utc).timestamp()\n next_event_time = rv[0][1]\n if next_event_time < now:\n return 0\n\n return next_event_time - now"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _provision_network(self, port_id, net_uuid, network_type,\n physical_network, segmentation_id):\n \"\"\"Provision the network with the received information.\"\"\"\n LOG.info(\"Provisioning network %s\", net_uuid)\n\n vswitch_name = self._get_vswitch_name(network_type, physical_network)\n vswitch_map = {\n 'network_type': network_type,\n 'vswitch_name': vswitch_name,\n 'ports': [],\n 'vlan_id': segmentation_id}\n self._network_vswitch_map[net_uuid] = vswitch_map", "response": "Provision the network with the received information."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _port_bound(self, port_id, network_id, network_type, physical_network,\n segmentation_id, port_security_enabled, set_port_sriov):\n \"\"\"Bind the port to the received network.\"\"\"\n super(HNVAgent, self)._port_bound(port_id, 
network_id, network_type,\n physical_network, segmentation_id,\n port_security_enabled,\n set_port_sriov)\n LOG.debug(\"Getting the profile id for the current port.\")\n profile_id = self._neutron_client.get_port_profile_id(port_id)\n\n LOG.debug(\"Trying to set port profile id %r for the current port %r.\",\n profile_id, port_id)\n self._utils.set_vswitch_port_profile_id(\n switch_port_name=port_id,\n profile_id=profile_id,\n profile_data=h_const.PROFILE_DATA,\n profile_name=h_const.PROFILE_NAME,\n net_cfg_instance_id=h_const.NET_CFG_INSTANCE_ID,\n cdn_label_id=h_const.CDN_LABEL_ID,\n cdn_label_string=h_const.CDN_LABEL_STRING,\n vendor_id=h_const.VENDOR_ID,\n vendor_name=h_const.VENDOR_NAME)", "response": "Bind the port to the received network."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ntransmits an SCP packet to the SpiNNaker machine and blocks until an acknowledgement is received.", "response": "def send_scp(self, buffer_size, x, y, p, cmd, arg1=0, arg2=0, arg3=0,\n data=b'', expected_args=3, timeout=0.0):\n \"\"\"Transmit a packet to the SpiNNaker machine and block until an\n acknowledgement is received.\n\n Parameters\n ----------\n buffer_size : int\n Number of bytes held in an SCP buffer by SARK, determines how many\n bytes will be expected in a socket.\n x : int\n y : int\n p : int\n cmd : int\n arg1 : int\n arg2 : int\n arg3 : int\n data : bytestring\n expected_args : int\n The number of arguments (0-3) that are expected in the returned\n packet.\n timeout : float\n Additional timeout in seconds to wait for a reply on top of the\n default specified upon instantiation.\n\n Returns\n -------\n :py:class:`~rig.machine_control.packets.SCPPacket`\n The packet that was received in acknowledgement of the transmitted\n packet.\n \"\"\"\n # This is implemented as a single burst packet sent using the bursty\n # interface. 
This significantly reduces code duplication.\n # Construct a callable to retain the returned packet for us\n class Callback(object):\n def __init__(self):\n self.packet = None\n\n def __call__(self, packet):\n self.packet = SCPPacket.from_bytestring(\n packet, n_args=expected_args\n )\n\n # Create the packet to send\n callback = Callback()\n packets = [\n scpcall(x, y, p, cmd, arg1, arg2, arg3, data, callback, timeout)\n ]\n\n # Send the burst\n self.send_scp_burst(buffer_size, 1, packets)\n\n # Return the received packet\n assert callback.packet is not None\n return callback.packet"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nsend a burst of SCP packets and call a callback for each returned SCP packet.", "response": "def send_scp_burst(self, buffer_size, window_size,\n parameters_and_callbacks):\n \"\"\"Send a burst of SCP packets and call a callback for each returned\n packet.\n\n Parameters\n ----------\n buffer_size : int\n Number of bytes held in an SCP buffer by SARK, determines how many\n bytes will be expected in a socket.\n window_size : int\n Number of packets which can be awaiting replies from the SpiNNaker\n board.\n parameters_and_callbacks: iterable of :py:class:`.scpcall`\n Iterable of :py:class:`.scpcall` elements. 
These elements can\n specify a callback which will be called with the returned packet.\n \"\"\"\n parameters_and_callbacks = iter(parameters_and_callbacks)\n\n self.sock.setblocking(False)\n\n # Calculate the receive length, this should be the smallest power of\n # two greater than the required size\n max_length = buffer_size + consts.SDP_HEADER_LENGTH\n receive_length = int(2**math.ceil(math.log(max_length, 2)))\n\n class TransmittedPacket(object):\n \"\"\"A packet which has been transmitted and still awaits a response.\n \"\"\"\n __slots__ = [\"callback\", \"packet\", \"bytestring\", \"n_tries\",\n \"timeout\", \"timeout_time\"]\n\n def __init__(self, callback, packet, timeout):\n self.callback = callback\n self.packet = packet\n self.bytestring = packet.bytestring\n self.n_tries = 1\n self.timeout = timeout\n self.timeout_time = time.time() + self.timeout\n\n queued_packets = True\n outstanding_packets = {}\n outstanding_callbacks = collections.deque()\n\n # While there are packets in the queue or packets for which we are\n # still awaiting returns then continue to loop.\n while queued_packets or outstanding_packets or outstanding_callbacks:\n # If there are fewer outstanding packets than the window can take\n # and we still might have packets left to send then transmit a\n # packet and add it to the list of outstanding packets.\n while len(outstanding_packets) < window_size and queued_packets:\n try:\n args = next(parameters_and_callbacks)\n except StopIteration:\n queued_packets = False\n\n if queued_packets:\n # If we extracted a new packet to send then create a new\n # outstanding packet and transmit it.\n seq = next(self.seq)\n while seq in outstanding_packets:\n # The seq should rarely be already taken, it normally\n # means that one packet is taking such a long time to\n # send that the sequence has wrapped around. 
It's not\n # a problem provided that we don't reuse the number.\n seq = next(self.seq)\n\n # Construct the packet that we'll be sending\n packet = SCPPacket(\n reply_expected=True, tag=0xff, dest_port=0,\n dest_cpu=args.p, src_port=7, src_cpu=31,\n dest_x=args.x, dest_y=args.y, src_x=0, src_y=0,\n cmd_rc=args.cmd, seq=seq,\n arg1=args.arg1, arg2=args.arg2, arg3=args.arg3,\n data=args.data\n )\n\n # Create a reference to this packet so that we know we're\n # expecting a response for it and can retransmit it if\n # necessary.\n outstanding_packets[seq] = TransmittedPacket(\n args.callback, packet,\n self.default_timeout + args.timeout\n )\n\n # Actually send the packet\n self.sock.send(outstanding_packets[seq].bytestring)\n\n # Call all outstanding callbacks\n while outstanding_callbacks:\n callback, packet = outstanding_callbacks.pop()\n callback(packet)\n\n # Listen on the socket for an acknowledgement packet, there may not\n # be one.\n if outstanding_packets:\n timeout = min((o.timeout_time for o in\n six.itervalues(outstanding_packets)\n )) - time.time()\n else:\n timeout = 0.0\n r, w, x = select.select([self.sock], [], [], max(timeout, 0.0))\n # Process the received packet (if there is one).\n while r:\n # Note that 'r' is never changed so this while loop will either\n # get skipped if r is empty or loop until 'break' if r is not.\n # Since we may receive multiple packets at once, it is better\n # to try and pull all out of the socket immediately rather than\n # running around the parent loop again and incurring the\n # 'select' cost.\n try:\n ack = self.sock.recv(receive_length)\n except IOError:\n break\n\n # Extract the sequence number from the bytestring, iff possible\n rc, seq = struct.unpack_from(\"<2H\", ack,\n consts.SDP_HEADER_LENGTH + 2)\n\n # If the code is an error then we respond immediately\n if rc != consts.SCPReturnCodes.ok:\n if rc in consts.RETRYABLE_SCP_RETURN_CODES:\n # If the error is timeout related then treat the packet\n # as though it 
timed out, just discard. This avoids us\n # hammering the board when it's most vulnerable.\n pass\n else:\n # For all other errors, we'll just fall over\n # immediately.\n packet = outstanding_packets.get(seq)\n if packet is not None:\n packet = packet.packet\n raise FatalReturnCodeError(rc, packet)\n else:\n # Look up the sequence index of packet in the list of\n # outstanding packets. We may have already processed a\n # response for this packet (indicating that the response\n # was delayed and we retransmitted the initial message) in\n # which case we can silently ignore the returned packet.\n # XXX: There is a danger that a response was so delayed\n # that we already reused the seq number... this is probably\n # sufficiently unlikely that there is no problem.\n outstanding = outstanding_packets.pop(seq, None)\n if outstanding is not None:\n outstanding_callbacks.appendleft((outstanding.callback,\n ack))\n\n # Look through all the remaining outstanding packets, if any of\n # them have timed out then we retransmit them.\n current_time = time.time()\n for outstanding in six.itervalues(outstanding_packets):\n if outstanding.timeout_time < current_time:\n # This packet has timed out, if we have sent it more than\n # the given number of times then raise a timeout error for\n # it.\n if outstanding.n_tries >= self.n_tries:\n raise TimeoutError(\n \"No response after {} attempts.\".format(\n self.n_tries),\n outstanding.packet)\n\n # Otherwise we retransmit it\n self.sock.send(outstanding.bytestring)\n outstanding.n_tries += 1\n outstanding.timeout_time = (current_time +\n outstanding.timeout)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreading a bytestring from an address in memory.", "response": "def read(self, buffer_size, window_size, x, y, p, address, length_bytes):\n \"\"\"Read a bytestring from an address in memory.\n\n ..note::\n This method is included here to maintain API compatibility with an\n `alternative implementation 
of SCP\n `_.\n\n Parameters\n ----------\n buffer_size : int\n Number of bytes held in an SCP buffer by SARK, determines how many\n bytes will be expected in a socket and how many bytes of data will\n be read back in each packet.\n window_size : int\n x : int\n y : int\n p : int\n address : int\n The address at which to start reading the data.\n length_bytes : int\n The number of bytes to read from memory. Large reads are\n transparently broken into multiple SCP read commands.\n\n Returns\n -------\n :py:class:`bytes`\n The data is read back from memory as a bytestring.\n \"\"\"\n # Prepare the buffer to receive the incoming data\n data = bytearray(length_bytes)\n mem = memoryview(data)\n\n # Create a callback which will write the data from a packet into a\n # memoryview.\n def callback(mem, data):\n mem[:] = data[6 + consts.SDP_HEADER_LENGTH:]\n\n # Create a generator that will generate request packets and store data\n # until all data has been returned\n def packets(length_bytes, data):\n offset = 0\n while length_bytes > 0:\n # Get the next block of data\n block_size = min((length_bytes, buffer_size))\n read_address = address + offset\n dtype = consts.address_length_dtype[(read_address % 4,\n block_size % 4)]\n\n # Create the call spec and yield\n yield scpcall(\n x, y, p, consts.SCPCommands.read, read_address,\n block_size, dtype,\n callback=functools.partial(callback,\n mem[offset:offset + block_size])\n )\n\n # Update the number of bytes remaining and the offset\n offset += block_size\n length_bytes -= block_size\n\n # Run the event loop and then return the retrieved data\n self.send_scp_burst(buffer_size, window_size,\n list(packets(length_bytes, data)))\n return bytes(data)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nwriting a bytestring to an address in memory.", "response": "def write(self, buffer_size, window_size, x, y, p, address, data):\n \"\"\"Write a bytestring to an address in memory.\n\n ..note::\n This method is 
included here to maintain API compatibility with an\n `alternative implementation of SCP\n `_.\n\n Parameters\n ----------\n buffer_size : int\n Number of bytes held in an SCP buffer by SARK, determines how many\n bytes will be expected in a socket and how many bytes will be\n written in each packet.\n window_size : int\n x : int\n y : int\n p : int\n address : int\n The address at which to start writing the data. Addresses are given\n within the address space of a SpiNNaker core. See the SpiNNaker\n datasheet for more information.\n data : :py:class:`bytes`\n Data to write into memory. Writes are automatically broken into a\n sequence of SCP write commands.\n \"\"\"\n # While there is still data perform a write: get the block to write\n # this time around, determine the data type, perform the write and\n # increment the address\n def packets(address, data):\n end = len(data)\n pos = 0\n while pos < end:\n block = data[pos:pos + buffer_size]\n block_size = len(block)\n\n dtype = consts.address_length_dtype[(address % 4,\n block_size % 4)]\n\n yield scpcall(x, y, p, consts.SCPCommands.write, address,\n block_size, dtype, block)\n\n address += block_size\n pos += block_size\n\n # Run the event loop and then return the retrieved data\n self.send_scp_burst(buffer_size, window_size,\n list(packets(address, data)))"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef convert_msg_to_html(msg):\n\n msg = re.sub(\"\\n\", \"
\", msg, flags=re.MULTILINE)\n return msg", "response": "Convert \\ n into a
for an HTML formatted message"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef validate(nanopub: dict, error_level: str = \"WARNING\") -> Tuple[str, str, str]:\n\n # Validation results\n v = []\n\n bel_version = config[\"bel\"][\"lang\"][\"default_bel_version\"]\n\n # Structural checks\n try:\n if not isinstance(nanopub[\"nanopub\"][\"assertions\"], list):\n msg = \"Assertions must be a list/array\"\n v.append(\n {\n \"level\": \"Error\",\n \"section\": \"Structure\",\n \"label\": \"Error-Structure\",\n \"msg\": msg,\n \"msg_html\": msg,\n }\n )\n except Exception as e:\n msg = 'Missing nanopub[\"nanopub\"][\"assertions\"]'\n v.append(\n {\n \"level\": \"Error\",\n \"section\": \"Structure\",\n \"label\": \"Error-Structure\",\n \"msg\": msg,\n \"msg_html\": msg,\n }\n )\n\n try:\n if (\n \"name\" in nanopub[\"nanopub\"][\"type\"]\n and \"version\" in nanopub[\"nanopub\"][\"type\"]\n ):\n pass\n if nanopub[\"nanopub\"][\"type\"][\"name\"].upper() == \"BEL\":\n bel_version = nanopub[\"nanopub\"][\"type\"][\"version\"]\n\n except Exception as e:\n msg = 'Missing or badly formed type - must have nanopub[\"nanopub\"][\"type\"] = {\"name\": , \"version\": 0:\n result = results[\"hits\"][\"hits\"][0][\"_source\"]\n if term_type not in result[\"annotation_types\"]:\n msg = f'Annotation type: {term_type} for {term_id} does not match annotation types in database: {result[\"annotation_types\"]}'\n v.append(\n {\n \"level\": \"Warning\",\n \"section\": \"Annotation\",\n \"index\": idx,\n \"label\": \"Warning-Annotation\",\n \"msg\": msg,\n \"msg_html\": msg,\n }\n )\n else:\n msg = f\"Annotation term: {term_id} not found in database\"\n v.append(\n {\n \"level\": \"Warning\",\n \"section\": \"Annotation\",\n \"index\": idx,\n \"label\": \"Warning-Annotation\",\n \"msg\": msg,\n \"msg_html\": msg,\n }\n )\n\n return v", "response": "Validate Nanopub\n\n Error Levels are similar to log levels - selecting WARNING 
includes both\n WARNING and ERROR, selecting ERROR just includes ERROR\n\n The validation result is a list of objects containing\n {\n 'level': 'Warning|Error',\n 'section': 'Assertion|Annotation|Structure',\n 'label': '{Error|Warning}-{Assertion|Annotation|Structure}', # to be used for faceting in Elasticsearch\n 'index': idx, # Index of Assertion or Annotation in Nanopub - starts at 0\n 'msg': msg, # Error or Warning message\n }\n\n Args:\n nanopub: nanopub record starting with nanopub...\n level: return WARNING or just ERROR? defaults to warnings and errors\n Returns:\n list(tuples): [{'level': 'Warning', 'section': 'Assertion', 'label': 'Warning-Assertion', 'index': 0, 'msg': }]"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nget BEL Specification The json file this depends on is generated by belspec_yaml2json as part of the update_specifications function Args: version: e.g. 2.0.0 where the filename", "response": "def get_specification(version: str) -> Mapping[str, Any]:\n \"\"\"Get BEL Specification\n\n The json file this depends on is generated by belspec_yaml2json as\n part of the update_specifications function\n\n Args:\n version: e.g. 
2.0.0 where the filename\n \"\"\"\n\n spec_dir = config[\"bel\"][\"lang\"][\"specifications\"]\n spec_dict = {}\n\n bel_versions = get_bel_versions()\n if version not in bel_versions:\n log.error(\"Cannot get unknown version BEL specification\")\n return {\"error\": \"unknown version of BEL\"}\n\n # use this variable to find our parser file since periods aren't recommended in python module names\n version_underscored = version.replace(\".\", \"_\")\n\n json_fn = f\"{spec_dir}/bel_v{version_underscored}.json\"\n\n with open(json_fn, \"r\") as f:\n spec_dict = json.load(f)\n\n return spec_dict"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_bel_versions() -> List[str]:\n\n spec_dir = config[\"bel\"][\"lang\"][\"specifications\"]\n\n fn = f\"{spec_dir}/versions.json\"\n with open(fn, \"r\") as f:\n versions = json.load(f)\n\n return versions", "response": "Get the list of all BEL Language versions supported by BEL"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef update_specifications(force: bool = False):\n\n spec_dir = config[\"bel\"][\"lang\"][\"specifications\"]\n if not os.path.isdir(spec_dir):\n os.mkdir(spec_dir)\n\n log.info(f\"Updating BEL Specifications - stored in {spec_dir}\")\n\n # Collect new specifications from Git repository\n if config[\"bel\"][\"lang\"][\"specification_github_repo\"]:\n github_belspec_files(spec_dir, force=force)\n\n # Ensure that files use 'yaml' extension\n files = glob.glob(f\"{spec_dir}/*.yml\")\n for fn in files:\n new_fn = fn.replace(\"yml\", \"yaml\")\n os.rename(fn, new_fn)\n\n # Convert YAML to enhanced JSON\n files = glob.glob(f\"{spec_dir}/*.yaml\")\n versions = {}\n for fn in files:\n filename = os.path.basename(fn)\n\n check_version = filename.replace(\"bel_v\", \"\").replace(\".yaml\", \"\").replace(\"_\", \".\")\n\n json_fn = fn.replace(\".yaml\", \".json\")\n version = belspec_yaml2json(fn, json_fn)\n\n if 
version != check_version:\n log.error(\n f\"Version mis-match for {fn} - fn version: {check_version} version: {version}\"\n )\n versions[version] = filename\n\n with open(f\"{spec_dir}/versions.json\", \"w\") as f:\n json.dump(list(set(versions)), f, indent=4)\n\n # Convert YAML file to EBNF and then parser module\n\n create_ebnf_parser(files)", "response": "Update BEL Specifications in local directory and store in local directory specified in belbio_conf. yaml"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ngetting BEL Spec files from Github repo", "response": "def github_belspec_files(spec_dir, force: bool = False):\n \"\"\"Get belspec files from Github repo\n\n\n Args:\n spec_dir: directory to store the BEL Specification and derived files\n force: force update of BEL Specifications from Github - skipped if local files less than 1 day old\n \"\"\"\n\n if not force:\n dtnow = datetime.datetime.utcnow()\n delta = datetime.timedelta(1)\n yesterday = dtnow - delta\n\n for fn in glob.glob(f\"{spec_dir}/bel*yaml\"):\n if datetime.datetime.fromtimestamp(os.path.getmtime(fn)) > yesterday:\n log.info(\"Skipping BEL Specification update - specs less than 1 day old\")\n return\n\n repo_url = \"https://api.github.com/repos/belbio/bel_specifications/contents/specifications\"\n params = {}\n github_access_token = os.getenv(\"GITHUB_ACCESS_TOKEN\", \"\")\n if github_access_token:\n params = {\"access_token\": github_access_token}\n\n r = requests.get(repo_url, params=params)\n if r.status_code == 200:\n results = r.json()\n for f in results:\n url = f[\"download_url\"]\n fn = os.path.basename(url)\n\n if \"yaml\" not in fn and \"yml\" in fn:\n fn = fn.replace(\"yml\", \"yaml\")\n\n r = requests.get(url, params=params, allow_redirects=True)\n if r.status_code == 200:\n open(f\"{spec_dir}/{fn}\", \"wb\").write(r.content)\n else:\n sys.exit(\n f\"Could not get BEL Spec file {url} from Github -- Status: {r.status_code} Msg: {r.content}\"\n )\n else:\n 
sys.exit(\n f\"Could not get BEL Spec directory listing from Github -- Status: {r.status_code} Msg: {r.content}\"\n )"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nenhance BEL Specification and save as JSON file", "response": "def belspec_yaml2json(yaml_fn: str, json_fn: str) -> str:\n \"\"\"Enhance BEL specification and save as JSON file\n\n Load all BEL Specification YAML files and convert to JSON files\n after enhancing them. Also create a bel_versions.json file with\n all available BEL versions for fast loading.\n\n Args:\n yaml_fn: original YAML version of BEL Spec\n json_fn: enhanced JSON version of BEL Spec\n Returns:\n str: version of BEL Spec\n \"\"\"\n\n try:\n spec_dict = yaml.load(open(yaml_fn, \"r\").read(), Loader=yaml.SafeLoader)\n\n # admin-related keys\n spec_dict[\"admin\"] = {}\n spec_dict[\"admin\"][\"version_underscored\"] = spec_dict[\"version\"].replace(\".\", \"_\")\n spec_dict[\"admin\"][\"parser_fn\"] = yaml_fn.replace(\".yaml\", \"_parser.py\")\n\n # add relation keys list, to_short, to_long\n add_relations(spec_dict)\n # add function keys list, to_short, to_long\n add_functions(spec_dict)\n # add namespace keys list, list_short, list_long, to_short, to_long\n add_namespaces(spec_dict)\n\n enhance_function_signatures(spec_dict)\n\n add_function_signature_help(spec_dict)\n\n with open(json_fn, \"w\") as f:\n json.dump(spec_dict, f)\n\n except Exception as e:\n log.error(\n \"Warning: BEL Specification {yaml_fn} could not be read. 
Cannot proceed.\".format(yaml_fn=yaml_fn)\n )\n sys.exit()\n\n return spec_dict[\"version\"]"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nadds function signature help to BEL Editor users", "response": "def add_function_signature_help(spec_dict: dict) -> dict:\n \"\"\"Add function signature help\n\n Simplify the function signatures for presentation to BEL Editor users\n \"\"\"\n for f in spec_dict[\"functions\"][\"signatures\"]:\n for argset_idx, argset in enumerate(spec_dict[\"functions\"][\"signatures\"][f][\"signatures\"]):\n args_summary = \"\"\n args_list = []\n arg_idx = 0\n for arg_idx, arg in enumerate(\n spec_dict[\"functions\"][\"signatures\"][f][\"signatures\"][argset_idx][\"arguments\"]\n ):\n if arg[\"type\"] in [\"Function\", \"Modifier\"]:\n vals = [\n spec_dict[\"functions\"][\"to_short\"].get(\n val, spec_dict[\"functions\"][\"to_short\"].get(val)\n )\n for val in arg[\"values\"]\n ]\n args_summary += \"|\".join(vals) + \"()\"\n arg_idx += 1\n\n if arg.get(\"optional\", False) and arg.get(\"multiple\", False) is False:\n args_summary += \"?\"\n text = f'Zero or one of each function(s): {\", \".join([val for val in arg[\"values\"]])}'\n elif arg.get(\"optional\", False):\n args_summary += \"*\"\n text = f'Zero or more of each function(s): {\", \".join([val for val in arg[\"values\"]])}'\n else:\n text = f'One of following function(s): {\", \".join([val for val in arg[\"values\"]])}'\n\n elif arg[\"type\"] in [\"NSArg\", \"StrArg\", \"StrArgNSArg\"]:\n args_summary += f'{arg[\"type\"]}'\n if arg.get(\"optional\", False) and arg.get(\"multiple\", False) is False:\n args_summary += \"?\"\n if arg[\"type\"] in [\"NSArg\"]:\n text = f'Zero or one namespace argument of following type(s): {\", \".join([val for val in arg[\"values\"]])}'\n elif arg[\"type\"] == \"StrArgNSArg\":\n text = f'Zero or one namespace argument or default namespace argument (without prefix) of following type(s): {\", \".join([val for val 
in arg[\"values\"]])}'\n else:\n text = f'Zero or one string argument of following type(s): {\", \".join([val for val in arg[\"values\"]])}'\n elif arg.get(\"optional\", False):\n args_summary += \"*\"\n if arg[\"type\"] in [\"NSArg\"]:\n text = f'Zero or more namespace arguments of following type(s): {\", \".join([val for val in arg[\"values\"]])}'\n elif arg[\"type\"] == \"StrArgNSArg\":\n text = f'Zero or more namespace arguments or default namespace arguments (without prefix) of following type(s): {\", \".join([val for val in arg[\"values\"]])}'\n else:\n text = f'Zero or more of string arguments of following type(s): {\", \".join([val for val in arg[\"values\"]])}'\n else:\n if arg[\"type\"] in [\"NSArg\"]:\n text = f'Namespace argument of following type(s): {\", \".join([val for val in arg[\"values\"]])}'\n elif arg[\"type\"] == \"StrArgNSArg\":\n text = f'Namespace argument or default namespace argument (without prefix) of following type(s): {\", \".join([val for val in arg[\"values\"]])}'\n else:\n text = f'String argument of following type(s): {\", \".join([val for val in arg[\"values\"]])}'\n\n args_summary += \", \"\n args_list.append(text)\n\n args_summary = re.sub(\", $\", \"\", args_summary)\n spec_dict[\"functions\"][\"signatures\"][f][\"signatures\"][argset_idx][\n \"argument_summary\"\n ] = f\"{f}({args_summary})\"\n spec_dict[\"functions\"][\"signatures\"][f][\"signatures\"][argset_idx][\n \"argument_help_listing\"\n ] = args_list\n\n # print(f'{f}({args_summary})')\n # print(args_list)\n\n return spec_dict"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef add_relations(spec_dict: Mapping[str, Any]) -> Mapping[str, Any]:\n\n # Class 'Mapping' does not define '__setitem__', so the '[]' operator cannot be used on its instances\n spec_dict[\"relations\"][\"list\"] = []\n spec_dict[\"relations\"][\"list_short\"] = []\n spec_dict[\"relations\"][\"list_long\"] = []\n spec_dict[\"relations\"][\"to_short\"] = 
{}\n spec_dict[\"relations\"][\"to_long\"] = {}\n\n for relation_name in spec_dict[\"relations\"][\"info\"]:\n\n abbreviated_name = spec_dict[\"relations\"][\"info\"][relation_name][\"abbreviation\"]\n spec_dict[\"relations\"][\"list\"].extend((relation_name, abbreviated_name))\n spec_dict[\"relations\"][\"list_long\"].append(relation_name)\n spec_dict[\"relations\"][\"list_short\"].append(abbreviated_name)\n\n spec_dict[\"relations\"][\"to_short\"][relation_name] = abbreviated_name\n spec_dict[\"relations\"][\"to_short\"][abbreviated_name] = abbreviated_name\n\n spec_dict[\"relations\"][\"to_long\"][abbreviated_name] = relation_name\n spec_dict[\"relations\"][\"to_long\"][relation_name] = relation_name\n\n return spec_dict", "response": "Add relation keys to the BEL specification dictionary"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef add_functions(spec_dict: Mapping[str, Any]) -> Mapping[str, Any]:\n\n # Class 'Mapping' does not define '__setitem__', so the '[]' operator cannot be used on its instances\n spec_dict[\"functions\"][\"list\"] = []\n spec_dict[\"functions\"][\"list_long\"] = []\n spec_dict[\"functions\"][\"list_short\"] = []\n\n spec_dict[\"functions\"][\"primary\"] = {}\n spec_dict[\"functions\"][\"primary\"][\"list_long\"] = []\n spec_dict[\"functions\"][\"primary\"][\"list_short\"] = []\n\n spec_dict[\"functions\"][\"modifier\"] = {}\n spec_dict[\"functions\"][\"modifier\"][\"list_long\"] = []\n spec_dict[\"functions\"][\"modifier\"][\"list_short\"] = []\n\n spec_dict[\"functions\"][\"to_short\"] = {}\n spec_dict[\"functions\"][\"to_long\"] = {}\n\n for func_name in spec_dict[\"functions\"][\"info\"]:\n\n abbreviated_name = spec_dict[\"functions\"][\"info\"][func_name][\"abbreviation\"]\n\n spec_dict[\"functions\"][\"list\"].extend((func_name, abbreviated_name))\n\n spec_dict[\"functions\"][\"list_long\"].append(func_name)\n 
spec_dict[\"functions\"][\"list_short\"].append(abbreviated_name)\n\n if spec_dict[\"functions\"][\"info\"][func_name][\"type\"] == \"primary\":\n spec_dict[\"functions\"][\"primary\"][\"list_long\"].append(func_name)\n spec_dict[\"functions\"][\"primary\"][\"list_short\"].append(abbreviated_name)\n else:\n spec_dict[\"functions\"][\"modifier\"][\"list_long\"].append(func_name)\n spec_dict[\"functions\"][\"modifier\"][\"list_short\"].append(abbreviated_name)\n\n spec_dict[\"functions\"][\"to_short\"][abbreviated_name] = abbreviated_name\n spec_dict[\"functions\"][\"to_short\"][func_name] = abbreviated_name\n\n spec_dict[\"functions\"][\"to_long\"][abbreviated_name] = func_name\n spec_dict[\"functions\"][\"to_long\"][func_name] = func_name\n\n return spec_dict", "response": "Add function keys to BEL specification dictionary."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef add_namespaces(spec_dict):\n\n for ns in spec_dict[\"namespaces\"]:\n spec_dict[\"namespaces\"][ns][\"list\"] = []\n spec_dict[\"namespaces\"][ns][\"list_long\"] = []\n spec_dict[\"namespaces\"][ns][\"list_short\"] = []\n\n spec_dict[\"namespaces\"][ns][\"to_short\"] = {}\n spec_dict[\"namespaces\"][ns][\"to_long\"] = {}\n\n for obj in spec_dict[\"namespaces\"][ns][\"info\"]:\n spec_dict[\"namespaces\"][ns][\"list\"].extend([obj[\"name\"], obj[\"abbreviation\"]])\n spec_dict[\"namespaces\"][ns][\"list_short\"].append(obj[\"abbreviation\"])\n spec_dict[\"namespaces\"][ns][\"list_long\"].append(obj[\"name\"])\n\n spec_dict[\"namespaces\"][ns][\"to_short\"][obj[\"abbreviation\"]] = obj[\"abbreviation\"]\n spec_dict[\"namespaces\"][ns][\"to_short\"][obj[\"name\"]] = obj[\"abbreviation\"]\n\n spec_dict[\"namespaces\"][ns][\"to_long\"][obj[\"abbreviation\"]] = obj[\"name\"]\n spec_dict[\"namespaces\"][ns][\"to_long\"][obj[\"name\"]] = obj[\"name\"]\n\n # For AminoAcid namespace\n if \"abbrev1\" in obj:\n 
spec_dict[\"namespaces\"][ns][\"to_short\"][obj[\"abbrev1\"]] = obj[\"abbreviation\"]\n spec_dict[\"namespaces\"][ns][\"to_long\"][obj[\"abbrev1\"]] = obj[\"name\"]", "response": "Add namespace convenience keys list list_long to_short to_long"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef enhance_function_signatures(spec_dict: Mapping[str, Any]) -> Mapping[str, Any]:\n\n for func in spec_dict[\"functions\"][\"signatures\"]:\n for i, sig in enumerate(spec_dict[\"functions\"][\"signatures\"][func][\"signatures\"]):\n args = sig[\"arguments\"]\n req_args = []\n pos_args = []\n opt_args = []\n mult_args = []\n for arg in args:\n # Multiple argument types\n if arg.get(\"multiple\", False):\n if arg[\"type\"] in [\"Function\", \"Modifier\"]:\n mult_args.extend(arg.get(\"values\", []))\n elif arg[\"type\"] in [\"StrArgNSArg\", \"NSArg\", \"StrArg\"]:\n # Complex signature has this\n mult_args.append(arg[\"type\"])\n\n # Optional, position dependent - will be added after req_args based on order in bel_specification\n elif arg.get(\"optional\", False) and arg.get(\"position\", False):\n if arg[\"type\"] in [\"Function\", \"Modifier\"]:\n pos_args.append(arg.get(\"values\", []))\n elif arg[\"type\"] in [\"StrArgNSArg\", \"NSArg\", \"StrArg\"]:\n pos_args.append(arg[\"type\"])\n\n # Optional, position independent\n elif arg.get(\"optional\", False):\n if arg[\"type\"] in [\"Function\", \"Modifier\"]:\n opt_args.extend(arg.get(\"values\", []))\n elif arg[\"type\"] in [\"StrArgNSArg\", \"NSArg\", \"StrArg\"]:\n opt_args.append(arg[\"type\"])\n\n # Required arguments, position dependent\n else:\n if arg[\"type\"] in [\"Function\", \"Modifier\"]:\n req_args.append(arg.get(\"values\", []))\n elif arg[\"type\"] in [\"StrArgNSArg\", \"NSArg\", \"StrArg\"]:\n req_args.append(arg[\"type\"])\n\n spec_dict[\"functions\"][\"signatures\"][func][\"signatures\"][i][\"req_args\"] = copy.deepcopy(\n req_args\n )\n 
spec_dict[\"functions\"][\"signatures\"][func][\"signatures\"][i][\"pos_args\"] = copy.deepcopy(\n pos_args\n )\n spec_dict[\"functions\"][\"signatures\"][func][\"signatures\"][i][\"opt_args\"] = copy.deepcopy(\n opt_args\n )\n spec_dict[\"functions\"][\"signatures\"][func][\"signatures\"][i][\n \"mult_args\"\n ] = copy.deepcopy(mult_args)\n\n return spec_dict", "response": "Enhance function signatures in BEL Specification dictionary."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngets the EBNF template from Github belbio - bel. ebnf. j2", "response": "def get_ebnf_template():\n \"\"\"Get EBNF template from Github belbio/bel_specifications repo\"\"\"\n\n spec_dir = config[\"bel\"][\"lang\"][\"specifications\"]\n local_fp = f\"{spec_dir}/bel.ebnf.j2\"\n\n repo_url = (\n \"https://api.github.com/repos/belbio/bel_specifications/contents/resources/bel.ebnf.j2\"\n )\n\n params = {}\n github_access_token = os.getenv(\"GITHUB_ACCESS_TOKEN\", \"\")\n if github_access_token:\n params = {\"access_token\": github_access_token}\n\n try:\n # Get download url for template file\n r = requests.get(repo_url, params=params)\n\n if r.status_code == 200:\n template_url = r.json()[\"download_url\"]\n else:\n log.warning(\"Could not get EBNF file download url from Github\")\n\n # Get template file\n try:\n r = requests.get(template_url, params=params, allow_redirects=True)\n if r.status_code == 200:\n open(local_fp, \"wt\").write(r.text)\n else:\n log.warning(\n f\"Could not download EBNF file from Github -- Status: {r.status_code} Msg: {r.text}\"\n )\n\n except Exception as e:\n log.warning(\n f\"Could not download EBNF file from Github -- Status: {r.status_code} Msg: {e}\"\n )\n\n except Exception as e:\n log.warning(\"Could not download BEL EBNF template file\")\n if not os.path.exists(f\"{spec_dir}/local_fp\"):\n log.error(\"No BEL EBNF template file available\")\n\n return local_fp"} {"SOURCE": "codesearchnet", "instruction": "How would 
you code a function in Python 3 to\ncreate EBNF files and EBNF - based parsers", "response": "def create_ebnf_parser(files):\n \"\"\"Create EBNF files and EBNF-based parsers\"\"\"\n\n flag = False\n for belspec_fn in files:\n # Get EBNF Jinja template from Github if enabled\n if config[\"bel\"][\"lang\"][\"specification_github_repo\"]:\n tmpl_fn = get_ebnf_template()\n\n # Check if EBNF file is more recent than belspec_fn\n ebnf_fn = belspec_fn.replace(\".yaml\", \".ebnf\")\n if not os.path.exists(ebnf_fn) or os.path.getmtime(belspec_fn) > os.path.getmtime(ebnf_fn):\n with open(belspec_fn, \"r\") as f:\n belspec = yaml.load(f, Loader=yaml.SafeLoader)\n\n tmpl_dir = os.path.dirname(tmpl_fn)\n tmpl_basename = os.path.basename(tmpl_fn)\n\n bel_major_version = belspec[\"version\"].split(\".\")[0]\n\n env = jinja2.Environment(\n loader=jinja2.FileSystemLoader(tmpl_dir)\n ) # create environment for template\n template = env.get_template(tmpl_basename) # get the template\n\n # replace template placeholders with appropriate variables\n relations_list = [\n (relation, belspec[\"relations\"][\"info\"][relation][\"abbreviation\"])\n for relation in belspec[\"relations\"][\"info\"]\n ]\n relations_list = sorted(list(itertools.chain(*relations_list)), key=len, reverse=True)\n\n functions_list = [\n (function, belspec[\"functions\"][\"info\"][function][\"abbreviation\"])\n for function in belspec[\"functions\"][\"info\"]\n if belspec[\"functions\"][\"info\"][function][\"type\"] == \"primary\"\n ]\n functions_list = sorted(list(itertools.chain(*functions_list)), key=len, reverse=True)\n\n modifiers_list = [\n (function, belspec[\"functions\"][\"info\"][function][\"abbreviation\"])\n for function in belspec[\"functions\"][\"info\"]\n if belspec[\"functions\"][\"info\"][function][\"type\"] == \"modifier\"\n ]\n modifiers_list = sorted(list(itertools.chain(*modifiers_list)), key=len, reverse=True)\n\n created_time = datetime.datetime.now().strftime(\"%B %d, %Y - %I:%M:%S%p\")\n\n 
ebnf = template.render(\n functions=functions_list,\n m_functions=modifiers_list,\n relations=relations_list,\n bel_version=belspec[\"version\"],\n bel_major_version=bel_major_version,\n created_time=created_time,\n )\n\n with open(ebnf_fn, \"w\") as f:\n f.write(ebnf)\n\n parser_fn = ebnf_fn.replace(\".ebnf\", \"_parser.py\")\n\n parser = tatsu.to_python_sourcecode(ebnf, filename=parser_fn)\n flag = True\n with open(parser_fn, \"wt\") as f:\n f.write(parser)\n\n if flag:\n # In case we created new parser modules\n importlib.invalidate_caches()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngetting function_help given function name", "response": "def get_function_help(function: str, bel_spec: BELSpec):\n \"\"\"Get function_help given function name\n\n This will get the function summary template (argument summary in signature)\n and the argument help listing.\n \"\"\"\n\n function_long = bel_spec[\"functions\"][\"to_long\"].get(function)\n function_help = []\n\n if function_long:\n for signature in bel_spec[\"functions\"][\"signatures\"][function_long][\"signatures\"]:\n function_help.append(\n {\n \"function_summary\": signature[\"argument_summary\"],\n \"argument_help\": signature[\"argument_help_listing\"],\n \"description\": bel_spec[\"functions\"][\"info\"][function_long][\"description\"],\n }\n )\n\n return function_help"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nchecks if loc is inside span", "response": "def in_span(loc: int, span: Span) -> bool:\n \"\"\"Checks if loc is inside span\"\"\"\n\n if loc >= span[0] and loc <= span[1]:\n return True\n else:\n return False"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nfinds BEL function or argument at cursor location", "response": "def cursor(\n belstr: str, ast: AST, cursor_loc: int, result: Mapping[str, Any] = None\n) -> Mapping[str, Any]:\n \"\"\"Find BEL function or argument at 
cursor location\n\n Args:\n belstr: BEL String used to create the completion_text\n ast (Mapping[str, Any]): AST (dict) of BEL String\n cursor_loc (int): given cursor location from input field\n cursor_loc starts at 0, think of it like a block cursor covering each char\n result: used to recursively return the result\n Returns:\n result dict\n \"\"\"\n\n log.debug(f\"SubAST: {json.dumps(ast, indent=4)}\")\n\n # Recurse down through subject, object, nested to functions\n log.debug(f\"Cursor keys {ast.keys()}, BELStr: {belstr}\")\n\n if len(belstr) == 0:\n return {\"type\": \"Function\", \"replace_span\": (0, 0), \"completion_text\": \"\"}\n\n if \"relation\" in ast and in_span(cursor_loc, ast[\"relation\"][\"span\"]):\n log.debug(\"In relation\")\n\n completion_text = belstr[ast[\"relation\"][\"span\"][0] : cursor_loc + 1]\n return {\n \"type\": \"Relation\",\n \"replace_span\": ast[\"relation\"][\"span\"],\n \"completion_text\": completion_text,\n }\n\n # Handle subject, object and nested keys in tree\n elif \"span\" not in ast and isinstance(ast, dict):\n for key in ast:\n if key in [\"subject\", \"object\", \"nested\"]:\n log.debug(f\"Recursing Keys {key}\")\n result = cursor(belstr, ast[key], cursor_loc, result=result)\n if result:\n return result\n\n # Matches Functions, NSArgs and StrArgs/StrArgNSArg\n if \"span\" in ast and in_span(cursor_loc, ast[\"span\"]):\n log.debug(\"Inside subject/object subAST\")\n if \"function\" in ast:\n name_span = ast[\"function\"][\"name_span\"]\n if in_span(cursor_loc, name_span):\n return {\n \"type\": \"Function\",\n \"replace_span\": name_span,\n \"completion_text\": belstr[name_span[0] : cursor_loc + 1],\n }\n for idx, arg in enumerate(ast[\"args\"]):\n if (\n cursor_loc == ast[\"function\"][\"parens_span\"][0]\n and ast[\"function\"][\"parens_span\"][1] == -1\n ):\n return {\n \"type\": \"StrArg\", # default type if unknown\n \"arg_idx\": idx,\n \"replace_span\": arg[\"span\"], # replace entire strarg\n \"parent_function\": 
ast[\"function\"][\"name\"],\n \"completion_text\": \"\",\n }\n\n elif in_span(cursor_loc, arg[\"span\"]):\n log.debug(\n f'In argument span {arg[\"span\"]} Cursor_loc: {cursor_loc}'\n )\n if arg[\"type\"] == \"Function\":\n if in_span(cursor_loc, arg[\"function\"][\"name_span\"]):\n log.debug(\"Found replace_span in args: Function type\")\n return {\n \"type\": \"Function\",\n \"replace_span\": arg[\"function\"][\n \"name_span\"\n ], # replace function name only\n \"arg_idx\": idx,\n \"args\": copy.deepcopy(ast[\"args\"]),\n \"parent_function\": ast[\"function\"][\"name\"],\n \"completion_text\": belstr[\n arg[\"function\"][\"name_span\"][0] : cursor_loc + 1\n ],\n }\n else:\n log.debug(f'Recursing Function {arg[\"span\"]}')\n result = cursor(belstr, arg, cursor_loc, result=result)\n elif arg[\"type\"] == \"NSArg\":\n\n result = {\n \"type\": \"NSArg\",\n \"replace_span\": arg[\"span\"], # replace entire nsarg\n \"arg_idx\": idx,\n \"args\": copy.deepcopy(ast[\"args\"]),\n \"parent_function\": ast[\"function\"][\"name\"],\n }\n\n # Filter on namespace and query on ns_val chars up to cursor_loc\n if in_span(cursor_loc, arg[\"nsarg\"][\"ns_val_span\"]):\n result[\"namespace\"] = arg[\"nsarg\"][\n \"ns\"\n ] # provide namespace for nsarg filtering\n result[\"completion_text\"] = belstr[\n arg[\"nsarg\"][\"ns_val_span\"][0] : cursor_loc + 1\n ]\n # Query on nsarg chars up to cursor_loc\n else:\n result[\"completion_text\"] = belstr[\n arg[\"nsarg\"][\"ns_span\"][0] : cursor_loc + 1\n ]\n\n log.debug(f\"Found replace_span in args: NSArg {result}\")\n return result\n elif (\n arg[\"type\"] == \"StrArg\"\n ): # in case this is a default namespace StrArg\n if (\n arg[\"span\"][0] == arg[\"span\"][1]\n ): # handle case like p() cursor=2\n completion_text = arg[\"arg\"]\n else:\n completion_text = belstr[arg[\"span\"][0] : cursor_loc + 1]\n\n return {\n \"type\": \"StrArg\",\n \"arg_idx\": idx,\n \"replace_span\": arg[\"span\"], # replace entire strarg\n 
\"parent_function\": ast[\"function\"][\"name\"],\n \"completion_text\": completion_text.lstrip(),\n }\n return result"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef nsarg_completions(\n completion_text: str,\n entity_types: list,\n bel_spec: BELSpec,\n namespace: str,\n species_id: str,\n bel_fmt: str,\n size: int,\n):\n \"\"\"Namespace completions\n\n Args:\n completion_text\n entity_types: used to filter namespace search results\n bel_spec: used to search default namespaces\n namespace: used to filter namespace search results\n species_id: used to filter namespace search results\n bel_fmt: used to select full name or abbrev for default namespaces\n size: how many completions to return\n\n Results:\n list of replacement text objects\n \"\"\"\n\n minimal_nsarg_completion_len = 1\n\n species = [species_id]\n namespaces = [namespace]\n replace_list = []\n\n if len(completion_text) >= minimal_nsarg_completion_len:\n # Use BEL.bio API module if running bel module in BEL.bio API, otherwise call BEL.bio API endpoint\n # is there a better way to handle this?\n\n url = f'{config[\"bel_api\"][\"servers\"][\"api_url\"]}/terms/completions/{url_path_param_quoting(completion_text)}'\n params = {\n \"size\": size,\n \"entity_types\": entity_types,\n \"namespaces\": namespaces,\n \"species\": species,\n }\n r = get_url(url, params=params)\n\n if r.status_code == 200:\n ns_completions = r.json()\n else:\n log.error(f\"Status code of {r.status_code} for {url}\")\n ns_completions = {}\n\n for complete in ns_completions.get(\"completions\", []):\n replace_list.append(\n {\n \"replacement\": complete[\"id\"],\n \"label\": f\"{complete['id']} ({complete['label']})\",\n \"highlight\": complete[\"highlight\"][-1],\n \"type\": \"NSArg\",\n }\n )\n\n # Check default namespaces\n for entity_type in entity_types:\n default_namespace = bel_spec[\"namespaces\"].get(entity_type, [])\n if default_namespace:\n for obj in 
default_namespace[\"info\"]:\n replacement = None\n if bel_fmt == \"long\" and re.match(\n completion_text, obj[\"name\"], re.IGNORECASE\n ):\n replacement = obj[\"name\"]\n elif bel_fmt in [\"short\", \"medium\"] and re.match(\n completion_text, obj[\"abbreviation\"], re.IGNORECASE\n ):\n replacement = obj[\"abbreviation\"]\n\n if replacement:\n highlight = replacement.replace(\n completion_text, f\"{completion_text}\"\n )\n replace_list.insert(\n 0,\n {\n \"replacement\": replacement,\n \"label\": replacement,\n \"highlight\": highlight,\n \"type\": \"NSArg\",\n },\n )\n\n return replace_list[:size]", "response": "Search for completions in the current namespace."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nfilter BEL relations by prefix", "response": "def relation_completions(\n completion_text: str, bel_spec: BELSpec, bel_fmt: str, size: int\n) -> list:\n \"\"\"Filter BEL relations by prefix\n\n Args:\n prefix: completion string\n bel_fmt: short, medium, long BEL formats\n spec: BEL specification\n\n Returns:\n list: list of BEL relations that match prefix\n \"\"\"\n\n if bel_fmt == \"short\":\n relation_list = bel_spec[\"relations\"][\"list_short\"]\n else:\n relation_list = bel_spec[\"relations\"][\"list_long\"]\n\n matches = []\n for r in relation_list:\n if re.match(completion_text, r):\n matches.append(r)\n\n replace_list = []\n for match in matches:\n highlight = match.replace(completion_text, f\"{completion_text}\")\n replace_list.append(\n {\n \"replacement\": match,\n \"label\": match,\n \"highlight\": highlight,\n \"type\": \"Relation\",\n }\n )\n\n return replace_list[:size]"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nfiltering function list by prefix", "response": "def function_completions(\n completion_text: str,\n bel_spec: BELSpec,\n function_list: list,\n bel_fmt: str,\n size: int,\n) -> list:\n \"\"\"Filter BEL functions by prefix\n\n Args:\n prefix: completion 
string\n bel_fmt: short, medium, long BEL formats\n spec: BEL specification\n\n Returns:\n list: list of BEL functions that match prefix\n \"\"\"\n\n # Convert provided function list to correct bel_fmt\n if isinstance(function_list, list):\n if bel_fmt in [\"short\", \"medium\"]:\n function_list = [\n bel_spec[\"functions\"][\"to_short\"][fn] for fn in function_list\n ]\n else:\n function_list = [\n bel_spec[\"functions\"][\"to_long\"][fn] for fn in function_list\n ]\n elif bel_fmt in [\"short\", \"medium\"]:\n function_list = bel_spec[\"functions\"][\"primary\"][\"list_short\"]\n else:\n function_list = bel_spec[\"functions\"][\"primary\"][\"list_long\"]\n\n matches = []\n for f in function_list:\n escaped_completion_text = completion_text.replace(r\"(\", r\"\\(\").replace(\n r\")\", r\"\\)\"\n )\n log.debug(f\"Completion match: {escaped_completion_text} F: {f}\")\n if re.match(escaped_completion_text, f):\n matches.append(f)\n\n replace_list = []\n for match in matches:\n if completion_text:\n highlight = match.replace(completion_text, f\"{completion_text}\")\n else:\n highlight = completion_text\n\n replace_list.append(\n {\n \"replacement\": match,\n \"label\": f\"{match}()\",\n \"highlight\": highlight,\n \"type\": \"Function\",\n }\n )\n\n return replace_list[:size]"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef arg_completions(\n completion_text: str,\n parent_function: str,\n args: list,\n arg_idx: int,\n bel_spec: BELSpec,\n bel_fmt: str,\n species_id: str,\n namespace: str,\n size: int,\n):\n \"\"\"Function argument completion\n\n Only allow legal options for completion given function name, arguments and index of argument\n to replace.\n\n Args:\n completion_text: text to use for completion - used for creating highlight\n parent_function: BEL function containing these args\n args: arguments of BEL function\n arg_idx: completing on this argument identified by this index\n bel_spec: BEL Specification\n 
bel_fmt: short, medium, long BEL function/relation formats\n species_id: filter on this species id, e.g. TAX:9606 if available\n namespace: filter on this namespace if available\n size: number of completions to return\n\n Return:\n list of replacements\n \"\"\"\n\n function_long = bel_spec[\"functions\"][\"to_long\"].get(parent_function)\n if not function_long:\n return []\n\n signatures = bel_spec[\"functions\"][\"signatures\"][function_long][\"signatures\"]\n\n # Position based argument ###################################\n function_list = []\n entity_types = []\n fn_replace_list, ns_arg_replace_list = [], []\n position_flag = False # Signature matches position-based argument\n\n # Check for position based argument\n for signature in signatures:\n sig_arg = signature[\"arguments\"][arg_idx]\n sig_type = sig_arg[\"type\"]\n\n if sig_arg.get(\"position\", False) and arg_idx == sig_arg[\"position\"] - 1:\n position_flag = True\n if sig_type in [\"Function\", \"Modifier\"]:\n function_list.extend(sig_arg[\"values\"])\n elif sig_type in [\"NSArg\", \"StrArgNSArg\"]:\n entity_types.extend(sig_arg[\"values\"])\n\n if not position_flag:\n # Collect optional and multiple signature arguments for completion\n opt_fn_sig_args = []\n opt_nsarg_sig_args = []\n mult_fn_sig_args = []\n mult_nsarg_sig_args = []\n for signature in signatures:\n signature_opt_fn_sig_args = []\n signature_opt_nsarg_sig_args = []\n signature_mult_fn_sig_args = []\n signature_mult_nsarg_sig_args = []\n max_position = -1\n for sig_arg in signature[\"arguments\"]:\n if \"position\" in sig_arg:\n max_position = sig_arg[\"position\"]\n continue # Skip position based signature arguments\n\n if (\n sig_arg.get(\"optional\", False) is True\n and sig_arg.get(\"multiple\", False) is False\n ):\n if sig_arg[\"type\"] in [\"Function\", \"Modifier\"]:\n signature_opt_fn_sig_args.extend(sig_arg[\"values\"])\n elif sig_arg[\"type\"] in [\"NSArg\", \"StrArgNSArg\"]:\n 
signature_opt_nsarg_sig_args.extend(sig_arg[\"values\"])\n elif sig_arg.get(\"multiple\", False) is True:\n if sig_arg[\"type\"] in [\"Function\", \"Modifier\"]:\n signature_mult_fn_sig_args.extend(sig_arg[\"values\"])\n elif sig_arg[\"type\"] in [\"NSArg\", \"StrArgNSArg\"]:\n signature_mult_nsarg_sig_args.extend(sig_arg[\"values\"])\n\n # Remove signature non-multiple, optional arguments that are already in args list\n for idx, arg in enumerate(args):\n if idx <= max_position - 1: # Skip positional arguments\n continue\n if idx == arg_idx: # Skip argument to be completed\n continue\n log.debug(f\"Remove Optional Args {arg} {signature_opt_fn_sig_args}\")\n\n opt_fn_sig_args.extend(signature_opt_fn_sig_args)\n opt_nsarg_sig_args.extend(signature_opt_nsarg_sig_args)\n mult_fn_sig_args.extend(signature_mult_fn_sig_args)\n mult_nsarg_sig_args.extend(signature_mult_nsarg_sig_args)\n\n function_list.extend(list(set(opt_fn_sig_args + mult_fn_sig_args)))\n entity_types.extend(list(set(opt_nsarg_sig_args + mult_nsarg_sig_args)))\n\n if function_list:\n log.debug(f\"ArgComp - position-based Function list: {function_list}\")\n fn_replace_list = function_completions(\n completion_text, bel_spec, function_list, bel_fmt, size\n )\n\n if entity_types:\n log.debug(f\"ArgComp - position-based Entity types: {entity_types}\")\n ns_arg_replace_list = nsarg_completions(\n completion_text,\n entity_types,\n bel_spec,\n namespace,\n species_id,\n bel_fmt,\n size,\n )\n\n replace_list = fn_replace_list + ns_arg_replace_list\n\n return replace_list", "response": "Function argument completion function"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef add_completions(\n replace_list: list, belstr: str, replace_span: Span, completion_text: str\n) -> List[Mapping[str, Any]]:\n \"\"\"Create completions to return given replacement list\n\n Args:\n replace_list: list of completion replacement values\n belstr: BEL String\n 
replace_span: start, stop of belstr to replace\n completion_text: text to use for completion - used for creating highlight\n Returns:\n [{\n \"replacement\": replacement,\n \"cursor_loc\": cursor_loc,\n \"highlight\": highlight,\n \"label\": label,\n }]\n \"\"\"\n\n completions = []\n\n for r in replace_list:\n\n # if '(' not in belstr:\n # replacement = f'{r[\"replacement\"]}()'\n # cursor_loc = len(replacement) - 1 # inside parenthesis\n # elif r['type'] == 'Function' and replace_span[1] == len(belstr) - 1:\n\n if len(belstr) > 0:\n belstr_end = len(belstr) - 1\n else:\n belstr_end = 0\n\n log.debug(\n f'Replace list {r} Replace_span {replace_span} BELstr: {belstr} Len: {belstr_end} Test1 {r[\"type\"] == \"Function\"} Test2 {replace_span[1] + 1 == len(belstr)}'\n )\n\n # Put a space between comma and following function arg\n if (\n r[\"type\"] == \"Function\"\n and replace_span[0] > 0\n and belstr[replace_span[0] - 1] == \",\"\n ):\n log.debug(\"prior char is a comma\")\n replacement = (\n belstr[0 : replace_span[0]]\n + \" \"\n + f\"{r['replacement']}()\"\n + belstr[replace_span[1] + 1 :]\n )\n cursor_loc = len(\n belstr[0 : replace_span[0]] + \" \" + f\"{r['replacement']}()\"\n )\n # Put a space between comman and following NSArg or StrArg\n elif replace_span[0] > 0 and belstr[replace_span[0] - 1] == \",\":\n log.debug(\"prior char is a comma\")\n replacement = (\n belstr[0 : replace_span[0]]\n + \" \"\n + r[\"replacement\"]\n + belstr[replace_span[1] + 1 :]\n )\n cursor_loc = len(belstr[0 : replace_span[0]] + \" \" + r[\"replacement\"])\n # Add function to end of belstr\n elif r[\"type\"] == \"Function\" and replace_span[1] >= belstr_end:\n replacement = belstr[0 : replace_span[0]] + f\"{r['replacement']}()\"\n cursor_loc = len(replacement) - 1 # inside parenthesis\n log.debug(f\"Replacement: {replacement}\")\n # Insert replacement in beginning or middle of belstr\n else:\n replacement = (\n belstr[0 : replace_span[0]]\n + r[\"replacement\"]\n + 
belstr[replace_span[1] + 1 :]\n )\n cursor_loc = len(\n belstr[0 : replace_span[0]] + r[\"replacement\"]\n ) # move cursor just past replacement\n\n completions.append(\n {\n \"replacement\": replacement,\n \"cursor_loc\": cursor_loc,\n \"highlight\": r[\"highlight\"],\n \"label\": r[\"label\"],\n }\n )\n\n return completions", "response": "Create completions for given replacement list and BEL string."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_completions(\n belstr: str,\n cursor_loc: int,\n bel_spec: BELSpec,\n bel_comp: str,\n bel_fmt: str,\n species_id: str,\n size: int,\n):\n \"\"\"Get BEL Assertion completions\n\n Args:\n\n Results:\n\n \"\"\"\n\n ast, errors = pparse.get_ast_dict(belstr)\n\n spans = pparse.collect_spans(ast)\n\n completion_text = \"\"\n completions = []\n function_help = []\n\n log.debug(f\"Cursor location BELstr: {belstr} Cursor idx: {cursor_loc}\")\n cursor_results = cursor(belstr, ast, cursor_loc)\n log.debug(f\"Cursor results: {cursor_results}\")\n\n if not cursor_results:\n log.debug(\"Cursor results is empty\")\n return (completion_text, completions, function_help, spans)\n\n completion_text = cursor_results.get(\"completion_text\", \"\")\n\n replace_span = cursor_results[\"replace_span\"]\n namespace = cursor_results.get(\"namespace\", None)\n\n if \"parent_function\" in cursor_results:\n parent_function = cursor_results[\"parent_function\"]\n function_help = bel_specification.get_function_help(\n cursor_results[\"parent_function\"], bel_spec\n )\n\n args = cursor_results.get(\"args\", [])\n arg_idx = cursor_results.get(\"arg_idx\")\n\n replace_list = arg_completions(\n completion_text,\n parent_function,\n args,\n arg_idx,\n bel_spec,\n bel_fmt,\n species_id,\n namespace,\n size,\n )\n elif cursor_results[\"type\"] == \"Function\":\n function_list = None\n replace_list = function_completions(\n completion_text, bel_spec, function_list, bel_fmt, size\n )\n elif 
cursor_results[\"type\"] == \"Relation\":\n replace_list = relation_completions(completion_text, bel_spec, bel_fmt, size)\n\n completions.extend(\n add_completions(replace_list, belstr, replace_span, completion_text)\n )\n\n return completion_text, completions, function_help, spans", "response": "Get BEL Assertion completions"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nscanning BEL string to find the corresponding set of character locations and return a list of character locations and a list of errors.", "response": "def parse_chars(bels: list, errors: Errors) -> Tuple[CharLocs, Errors]:\n \"\"\"Scan BEL string to map parens, quotes, commas\n\n Args:\n bels: bel string as an array of characters\n errors: list of error tuples ('', '')\n\n Returns:\n (char_locs, errors): character locations and errors\n \"\"\"\n pstack, qstack, nested_pstack = [], [], []\n parens, nested_parens, quotes, commas = {}, {}, {}, {}\n\n notquoted_flag = True\n\n for i, c in enumerate(bels):\n prior_char = i - 1\n # print('BEL', prior_char, b[prior_char])\n\n # Find starting quote\n if c == '\"' and bels[prior_char] != \"\\\\\" and len(qstack) == 0:\n qstack.append(i)\n notquoted_flag = False\n\n # Find closing quote\n elif c == '\"' and bels[prior_char] != \"\\\\\":\n quotes[qstack.pop()] = i\n notquoted_flag = True\n\n # Find all escaped quotes outside of quoted string\n elif c == '\"' and bels[prior_char] == \"\\\\\" and len(qstack) == 0:\n errors.append(\n (\n \"ERROR\",\n f\"Escaped quote outside of quoted string at location: {i - 1}\",\n (i - 1, i - 1),\n )\n )\n\n # Find all nested object opening parens\n elif notquoted_flag and c == \"(\" and bels[prior_char] == \" \":\n if len(nested_pstack) > 1:\n errors.append(\n (\n \"ERROR\",\n f\"More than one nested parenthesis or left parenthesis following a space character\",\n (i, i),\n )\n )\n\n nested_pstack.append(i)\n\n # Find all opening parens\n elif notquoted_flag and c == \"(\" and bels[prior_char] 
not in [\"\\\\\"]:\n pstack.append(i)\n\n # Find all closing parens\n elif notquoted_flag and c == \")\" and bels[prior_char] != \"\\\\\":\n if len(pstack):\n if len(pstack) > 1:\n parens[pstack.pop()] = (i, \"child\")\n else:\n parens[pstack.pop()] = (i, \"top\")\n elif len(nested_pstack):\n nested_parens[nested_pstack.pop()] = (i, \"top\")\n else:\n errors.append(\n (\n \"ERROR\",\n f\"Missing left parenthesis for right parenthesis at location {i}\",\n (i, i),\n )\n )\n # Find comma outside of quoted string\n elif notquoted_flag and c == \",\" and len(qstack) == 0:\n sparen = pstack[-1]\n if sparen not in commas:\n commas[sparen] = [i]\n else:\n commas[sparen].append(i)\n\n while len(pstack):\n errors.append(\n (\n \"ERROR\",\n f\"Missing right parenthesis for left parenthesis at location {pstack[-1]}\",\n (pstack[-1], pstack[-1]),\n )\n )\n if len(pstack) > 1:\n parens[pstack.pop()] = (-1, \"child\")\n else:\n parens[pstack.pop()] = (-1, \"top\")\n\n while len(nested_pstack):\n errors.append(\n (\n \"ERROR\",\n f\"Missing right parenthesis for nested object left parenthesis at location {nested_pstack[-1]}\",\n (nested_pstack[-1], nested_pstack[-1]),\n )\n )\n nested_parens[nested_pstack.pop()] = (-1, \"top\")\n\n if len(qstack):\n missing_quote = qstack.pop()\n errors.append(\n (\n \"ERROR\",\n f\"Missing right quote for left quote at location {missing_quote}\",\n (missing_quote, missing_quote),\n )\n )\n\n return (\n {\n \"parens\": parens,\n \"nested_parens\": nested_parens,\n \"quotes\": quotes,\n \"commas\": commas,\n },\n errors,\n )"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nparses functions from BEL string using paren comma quote character locations and return parsed and errors.", "response": "def parse_functions(\n bels: list, char_locs: CharLocs, parsed: Parsed, errors: Errors\n) -> Tuple[Parsed, Errors]:\n \"\"\"Parse functions from BEL using paren, comma, quote character locations\n\n Args:\n bels: BEL 
string as list of chars\n char_locs: paren, comma, quote character locations\n errors: Any error messages generated during the parse\n\n Returns:\n (functions, errors): function names and locations and error messages\n \"\"\"\n parens = char_locs[\"parens\"]\n\n # Handle partial top-level function name\n if not parens:\n bels_len = len(bels) - 1\n span = (0, bels_len)\n parsed[span] = {\n \"name\": \"\".join(bels),\n \"type\": \"Function\",\n \"span\": span,\n \"name_span\": (span),\n \"function_level\": \"top\",\n }\n return parsed, errors\n\n for sp in sorted(parens): # sp = starting paren, ep = ending_paren\n ep, function_level = parens[sp]\n\n # Functions can't have a space between function name and left paren\n if bels[sp - 1] == \" \":\n continue\n\n # look in front of start paren for function name\n for i in range(sp - 1, 0, -1):\n if bels[i] in [\" \", \",\", \"(\"]: # function name upstream boundary chars\n if i < sp - 1:\n if ep == -1:\n span = (i + 1, len(bels) - 1)\n else:\n span = (i + 1, ep)\n\n parsed[span] = {\n \"name\": \"\".join(bels[i + 1 : sp]),\n \"type\": \"Function\",\n \"span\": span,\n \"name_span\": (i + 1, sp - 1),\n \"parens_span\": (sp, ep),\n \"function_level\": function_level,\n }\n break\n else:\n if ep == -1:\n span = (0, len(bels) - 1)\n else:\n span = (0, ep)\n\n parsed[span] = {\n \"name\": \"\".join(bels[0:sp]),\n \"type\": \"Function\",\n \"span\": span,\n \"name_span\": (0, sp - 1),\n \"parens_span\": (sp, ep),\n \"function_level\": function_level,\n }\n\n return parsed, errors"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nparsing arguments from functions.", "response": "def parse_args(\n bels: list, char_locs: CharLocs, parsed: Parsed, errors: Errors\n) -> Tuple[Parsed, Errors]:\n \"\"\"Parse arguments from functions\n\n Args:\n bels: BEL string as list of chars\n char_locs: char locations for parens, commas and quotes\n parsed: function locations\n errors: error messages\n\n 
Returns:\n (functions, errors): function and arg locations plus error messages\n \"\"\"\n\n commas = char_locs[\"commas\"]\n\n # Process each span key in parsed from beginning\n for span in parsed:\n if parsed[span][\"type\"] != \"Function\" or \"parens_span\" not in parsed[span]:\n continue # Skip if not argument-less\n sp, ep = parsed[span][\"parens_span\"]\n\n # calculate args_end position\n if ep == -1: # supports bel completion\n args_end = len(bels) - 1 # 1\n else:\n args_end = ep - 1 # 1\n\n # Parse arguments\n args = []\n arg_start = sp + 1\n each_arg_end_list = sorted([end - 1 for end in commas.get(sp, [])] + [args_end])\n for arg_end in each_arg_end_list:\n # log.debug(f'Arg_start: {arg_start} Arg_end: {arg_end}')\n\n # Skip blanks at beginning of argument\n while arg_start < args_end and bels[arg_start] == \" \":\n arg_start += 1\n\n # Trim arg_end (e.g. HGNC:AKT1 , HGNC:EGF) - if there are spaces before comma\n trimmed_arg_end = arg_end\n while trimmed_arg_end > arg_start and bels[trimmed_arg_end] == \" \":\n trimmed_arg_end -= 1\n\n if trimmed_arg_end < arg_start:\n trimmed_arg_end = arg_start\n\n arg = \"\".join(bels[arg_start : trimmed_arg_end + 1])\n\n # log.debug(f'Adding arg to args: {arg_start} {trimmed_arg_end}')\n args.append({\"arg\": arg, \"span\": (arg_start, trimmed_arg_end)})\n arg_start = arg_end + 2\n\n parsed[span][\"args\"] = args\n\n return parsed, errors"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef arg_types(parsed: Parsed, errors: Errors) -> Tuple[Parsed, Errors]:\n\n func_pattern = re.compile(r\"\\s*[a-zA-Z]+\\(\")\n nsarg_pattern = re.compile(r\"^\\s*([A-Z]+):(.*?)\\s*$\")\n\n for span in parsed:\n if parsed[span][\"type\"] != \"Function\" or \"parens_span\" not in parsed[span]:\n continue\n\n for i, arg in enumerate(parsed[span][\"args\"]):\n nsarg_matches = nsarg_pattern.match(arg[\"arg\"])\n if func_pattern.match(arg[\"arg\"]):\n 
parsed[span][\"args\"][i].update({\"type\": \"Function\"})\n elif nsarg_matches:\n (start, end) = arg[\"span\"]\n ns = nsarg_matches.group(1)\n ns_val = nsarg_matches.group(2)\n ns_span = nsarg_matches.span(1)\n ns_span = (ns_span[0] + start, ns_span[1] + start - 1)\n ns_val_span = nsarg_matches.span(2)\n ns_val_span = (ns_val_span[0] + start, ns_val_span[1] + start - 1)\n\n parsed[span][\"args\"][i].update(\n {\n \"type\": \"NSArg\",\n \"ns\": ns,\n \"ns_span\": ns_span,\n \"ns_val\": ns_val,\n \"ns_val_span\": ns_val_span,\n }\n )\n else:\n parsed[span][\"args\"][i].update({\"type\": \"StrArg\"})\n\n return parsed, errors", "response": "Add argument types to the function data structure."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef parse_relations(\n belstr: str, char_locs: CharLocs, parsed: Parsed, errors: Errors\n) -> Tuple[Parsed, Errors]:\n \"\"\"Parse relations from BEL string\n\n Args:\n belstr: BEL string as one single string (not list of chars)\n char_locs: paren, comma and quote char locations\n parsed: data structure for parsed functions, relations, nested\n errors: error messages\n\n Returns:\n (parsed, errors):\n \"\"\"\n quotes = char_locs[\"quotes\"]\n quoted_range = set([i for start, end in quotes.items() for i in range(start, end)])\n\n for match in relations_pattern_middle.finditer(belstr):\n (start, end) = match.span(1)\n # log.debug(f'Relation-middle {match}')\n end = end - 1 # adjust end to match actual end character index\n if start != end:\n test_range = set(range(start, end))\n else:\n test_range = set(start)\n\n # Skip if relation overlaps with quoted string\n if test_range.intersection(quoted_range):\n continue\n\n span_key = (start, end)\n parsed[span_key] = {\n \"type\": \"Relation\",\n \"name\": match.group(1),\n \"span\": (start, end),\n }\n\n for match in relations_pattern_end.finditer(belstr):\n (start, end) = match.span(1)\n log.debug(f\"Relation-end 
{match}\")\n end = end - 1 # adjust end to match actual end character index\n if start != end:\n test_range = set(range(start, end))\n else:\n test_range = set(start)\n\n # Skip if relation overlaps with quoted string\n if test_range.intersection(quoted_range):\n continue\n\n span_key = (start, end)\n parsed[span_key] = {\n \"type\": \"Relation\",\n \"name\": match.group(1),\n \"span\": (start, end),\n }\n\n return parsed, errors", "response": "Parse BEL relations from BEL string into a nested BEL tree structure."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef parse_nested(\n bels: list, char_locs: CharLocs, parsed: Parsed, errors: Errors\n) -> Tuple[Parsed, Errors]:\n \"\"\" Parse nested BEL object \"\"\"\n\n for sp in char_locs[\n \"nested_parens\"\n ]: # sp = start parenthesis, ep = end parenthesis\n ep, level = char_locs[\"nested_parens\"][sp]\n if ep == -1:\n ep = len(bels) + 1\n parsed[(sp, ep)] = {\"type\": \"Nested\", \"span\": (sp, ep)}\n\n return parsed, errors", "response": "Parse nested BEL object."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef dump_json(d: dict) -> None:\n\n import json\n\n k = d.keys()\n v = d.values()\n k1 = [str(i) for i in k]\n\n return json.dumps(dict(zip(*[k1, v])), indent=4)", "response": "Dump json when using tuples for dictionary keys\n\n "} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef collect_spans(ast: AST) -> List[Tuple[str, Tuple[int, int]]]:\n\n spans = []\n\n if ast.get(\"subject\", False):\n spans.extend(collect_spans(ast[\"subject\"]))\n\n if ast.get(\"object\", False):\n spans.extend(collect_spans(ast[\"object\"]))\n\n if ast.get(\"nested\", False):\n spans.extend(collect_spans(ast[\"nested\"]))\n\n if ast.get(\"function\", False):\n log.debug(f\"Processing function\")\n spans.append((\"Function\", ast[\"function\"][\"name_span\"]))\n 
log.debug(f\"Spans: {spans}\")\n\n if ast.get(\"args\", False):\n for idx, arg in enumerate(ast[\"args\"]):\n log.debug(f\"Arg {arg}\")\n\n if arg.get(\"function\", False):\n log.debug(f\"Recursing on arg function\")\n results = collect_spans(arg)\n log.debug(f\"Results {results}\")\n spans.extend(results) # Recurse arg function\n elif arg.get(\"nsarg\", False):\n log.debug(f\"Processing NSArg Arg {arg}\")\n spans.append((\"NSArg\", arg[\"span\"]))\n spans.append((\"NSPrefix\", arg[\"nsarg\"][\"ns_span\"]))\n spans.append((\"NSVal\", arg[\"nsarg\"][\"ns_val_span\"]))\n elif arg[\"type\"] == \"StrArg\":\n spans.append((\"StrArg\", arg[\"span\"]))\n\n log.debug(f\"Spans: {spans}\")\n return spans", "response": "Collect flattened list of spans of BEL syntax types\n Provide simple list of BEL syntax type spans for highlighting."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef parsed_function_to_ast(parsed: Parsed, parsed_key):\n\n sub = parsed[parsed_key]\n\n subtree = {\n \"type\": \"Function\",\n \"span\": sub[\"span\"],\n \"function\": {\n \"name\": sub[\"name\"],\n \"name_span\": sub[\"name_span\"],\n \"parens_span\": sub.get(\"parens_span\", []),\n },\n }\n\n args = []\n for arg in parsed[parsed_key].get(\"args\", []):\n\n # pdb.set_trace()\n\n if arg[\"type\"] == \"Function\":\n args.append(parsed_function_to_ast(parsed, arg[\"span\"]))\n elif arg[\"type\"] == \"NSArg\":\n args.append(\n {\n \"arg\": arg[\"arg\"],\n \"type\": arg[\"type\"],\n \"span\": arg[\"span\"],\n \"nsarg\": {\n \"ns\": arg[\"ns\"],\n \"ns_val\": arg[\"ns_val\"],\n \"ns_span\": arg[\"ns_span\"],\n \"ns_val_span\": arg[\"ns_val_span\"],\n },\n }\n )\n elif arg[\"type\"] == \"StrArg\":\n args.append({\"arg\": arg[\"arg\"], \"type\": arg[\"type\"], \"span\": arg[\"span\"]})\n\n subtree[\"args\"] = copy.deepcopy(args)\n\n return subtree", "response": "Create AST for top - level functions."} {"SOURCE": "codesearchnet", "instruction": 
"How would you implement a function in Python 3 that\nchecks full parse for errors", "response": "def parsed_top_level_errors(parsed, errors, component_type: str = \"\") -> Errors:\n \"\"\"Check full parse for errors\n\n Args:\n parsed:\n errors:\n component_type: Empty string or 'subject' or 'object' to indicate that we\n are parsing the subject or object field input\n \"\"\"\n\n # Error check\n fn_cnt = 0\n rel_cnt = 0\n nested_cnt = 0\n for key in parsed:\n if parsed[key][\"type\"] == \"Function\":\n fn_cnt += 1\n if parsed[key][\"type\"] == \"Relation\":\n rel_cnt += 1\n if parsed[key][\"type\"] == \"Nested\":\n nested_cnt += 1\n\n if not component_type:\n if nested_cnt > 1:\n errors.append(\n (\n \"Error\",\n \"Too many nested objects - can only have one per BEL Assertion\",\n )\n )\n\n if nested_cnt:\n if rel_cnt > 2:\n errors.append(\n (\n \"Error\",\n \"Too many relations - can only have two in a nested BEL Assertion\",\n )\n )\n elif fn_cnt > 4:\n errors.append((\"Error\", \"Too many BEL subject and object candidates\"))\n\n else:\n if rel_cnt > 1:\n errors.append(\n (\n \"Error\",\n \"Too many relations - can only have one in a BEL Assertion\",\n )\n )\n elif fn_cnt > 2:\n errors.append((\"Error\", \"Too many BEL subject and object candidates\"))\n\n elif component_type == \"subject\":\n if rel_cnt > 0:\n errors.append(\n (\"Error\", \"Too many relations - cannot have any in a BEL Subject\")\n )\n elif fn_cnt > 1:\n errors.append(\n (\"Error\", \"Too many BEL subject candidates - can only have one\")\n )\n\n elif component_type == \"object\":\n if nested_cnt:\n if rel_cnt > 1:\n errors.append(\n (\n \"Error\",\n \"Too many relations - can only have one in a nested BEL object\",\n )\n )\n elif fn_cnt > 2:\n errors.append(\n (\n \"Error\",\n \"Too many BEL subject and object candidates in a nested BEL object\",\n )\n )\n else:\n if rel_cnt > 0:\n errors.append(\n (\"Error\", \"Too many relations - cannot have any in a BEL Subject\")\n )\n elif fn_cnt > 1:\n 
errors.append(\n (\"Error\", \"Too many BEL subject candidates - can only have one\")\n )\n\n return errors"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef parsed_to_ast(parsed: Parsed, errors: Errors, component_type: str = \"\"):\n\n ast = {}\n sorted_keys = sorted(parsed.keys())\n\n # Setup top-level tree\n for key in sorted_keys:\n if parsed[key][\"type\"] == \"Nested\":\n nested_component_stack = [\"subject\", \"object\"]\n\n if component_type:\n component_stack = [component_type]\n else:\n component_stack = [\"subject\", \"object\"]\n\n for key in sorted_keys:\n if parsed[key][\"type\"] == \"Function\" and parsed[key][\"function_level\"] == \"top\":\n ast[component_stack.pop(0)] = parsed_function_to_ast(parsed, key)\n elif parsed[key][\"type\"] == \"Relation\" and \"relation\" not in ast:\n ast[\"relation\"] = {\n \"name\": parsed[key][\"name\"],\n \"type\": \"Relation\",\n \"span\": key,\n }\n elif parsed[key][\"type\"] == \"Nested\":\n ast[\"nested\"] = {}\n for nested_key in sorted_keys:\n if nested_key <= key:\n continue\n\n if (\n parsed[nested_key][\"type\"] == \"Function\"\n and parsed[nested_key][\"function_level\"] == \"top\"\n ):\n ast[\"nested\"][\n nested_component_stack.pop(0)\n ] = parsed_function_to_ast(parsed, nested_key)\n elif (\n parsed[nested_key][\"type\"] == \"Relation\"\n and \"relation\" not in ast[\"nested\"]\n ):\n ast[\"nested\"][\"relation\"] = {\n \"name\": parsed[nested_key][\"name\"],\n \"type\": \"Relation\",\n \"span\": parsed[nested_key][\"span\"],\n }\n\n return ast, errors\n\n return ast, errors", "response": "Convert parsed data struct to an AST dictionary."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nconverts BEL string to BEL AST dictionary", "response": "def get_ast_dict(belstr, component_type: str = \"\"):\n \"\"\"Convert BEL string to AST dictionary\n\n Args:\n belstr: BEL string\n
component_type: Empty string or 'subject' or 'object' to indicate that we\n are parsing the subject or object field input\n \"\"\"\n\n errors = []\n parsed = {}\n bels = list(belstr)\n char_locs, errors = parse_chars(bels, errors)\n parsed, errors = parse_functions(belstr, char_locs, parsed, errors)\n parsed, errors = parse_args(bels, char_locs, parsed, errors)\n parsed, errors = arg_types(parsed, errors)\n parsed, errors = parse_relations(belstr, char_locs, parsed, errors)\n parsed, errors = parse_nested(bels, char_locs, parsed, errors)\n errors = parsed_top_level_errors(parsed, errors)\n\n ast, errors = parsed_to_ast(parsed, errors, component_type=component_type)\n\n return ast, errors"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nconverts BEL AST partialparse dict to BELAst object", "response": "def get_ast_obj(belstr, bel_version, component_type: str = \"\"):\n \"\"\"Convert AST partialparse dict to BELAst\"\"\"\n\n ast_dict, errors = get_ast_dict(belstr, component_type)\n\n spec = bel_specification.get_specification(bel_version)\n\n subj = ast_dict[\"subject\"]\n subj_ast = add_ast_fn(subj, spec)\n\n relation = None\n obj = None\n\n if \"relation\" in ast_dict:\n relation = ast_dict[\"relation\"][\"name\"]\n\n if \"object\" in ast_dict:\n obj = ast_dict[\"object\"]\n obj_ast = add_ast_fn(obj, spec)\n\n return BELAst(subj_ast, relation, obj_ast, spec)\n elif \"nested\" in ast_dict:\n nested_subj = ast_dict[\"nested\"][\"subject\"]\n nested_subj_ast = add_ast_fn(nested_subj, spec)\n nested_relation = ast_dict[\"nested\"][\"relation\"][\"name\"]\n nested_obj = ast_dict[\"nested\"][\"object\"]\n nested_obj_ast = add_ast_fn(nested_obj, spec)\n\n return BELAst(\n subj_ast,\n relation,\n BELAst(nested_subj_ast, nested_relation, nested_obj_ast, spec),\n spec,\n )\n\n return BELAst(subj_ast, None, None, spec)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nconverting dict AST to BEL object AST 
Function", "response": "def add_ast_fn(d, spec, parent_function=None):\n \"\"\"Convert dict AST to object AST Function\n\n Args:\n ast_fn: AST object Function\n d: AST as dictionary\n spec: BEL Specification\n\n Return:\n ast_fn\n \"\"\"\n\n if d[\"type\"] == \"Function\":\n ast_fn = Function(d[\"function\"][\"name\"], spec, parent_function=parent_function)\n for arg in d[\"args\"]:\n if arg[\"type\"] == \"Function\":\n ast_fn.add_argument(add_ast_fn(arg, spec, parent_function=ast_fn))\n elif arg[\"type\"] == \"NSArg\":\n ast_fn.add_argument(\n NSArg(arg[\"nsarg\"][\"ns\"], arg[\"nsarg\"][\"ns_val\"], ast_fn)\n )\n elif arg[\"type\"] == \"StrArg\":\n ast_fn.add_argument(StrArg(arg[\"arg\"], ast_fn))\n return ast_fn"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nconverting NSArg to BEL - style format", "response": "def convert_nsarg(\n nsarg: str,\n api_url: str = None,\n namespace_targets: Mapping[str, List[str]] = None,\n canonicalize: bool = False,\n decanonicalize: bool = False,\n) -> str:\n \"\"\"[De]Canonicalize NSArg\n\n Args:\n nsarg (str): bel statement string or partial string (e.g. subject or object)\n api_url (str): BEL.bio api url to use, e.g. 
https://api.bel.bio/v1\n namespace_targets (Mapping[str, List[str]]): formatted as in configuration file example\n canonicalize (bool): use canonicalize endpoint/namespace targets\n decanonicalize (bool): use decanonicalize endpoint/namespace targets\n\n Results:\n str: converted NSArg\n \"\"\"\n\n if not api_url:\n api_url = config[\"bel_api\"][\"servers\"][\"api_url\"]\n if not api_url:\n log.error(\"Missing api url - cannot convert namespace\")\n return None\n\n params = None\n if namespace_targets:\n namespace_targets_str = json.dumps(namespace_targets)\n params = {\"namespace_targets\": namespace_targets_str}\n\n if not namespace_targets:\n if canonicalize:\n api_url = api_url + \"/terms/{}/canonicalized\"\n elif decanonicalize:\n api_url = api_url + \"/terms/{}/decanonicalized\"\n else:\n log.warning(\"Missing (de)canonical flag - cannot convert namespaces\")\n return nsarg\n else:\n\n api_url = (\n api_url + \"/terms/{}/canonicalized\"\n ) # overriding with namespace_targets\n\n request_url = api_url.format(url_path_param_quoting(nsarg))\n\n r = get_url(request_url, params=params, timeout=10)\n\n if r and r.status_code == 200:\n nsarg = r.json().get(\"term_id\", nsarg)\n elif not r or r.status_code == 404:\n log.error(f\"[de]Canonicalization endpoint missing: {request_url}\")\n\n return nsarg"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nconverts BEL namespace in string Uses a regex expression to extract all NSArgs and replace them with the updated NSArg from BEL. 
bio API terms endpoint.", "response": "def convert_namespaces_str(\n bel_str: str,\n api_url: str = None,\n namespace_targets: Mapping[str, List[str]] = None,\n canonicalize: bool = False,\n decanonicalize: bool = False,\n) -> str:\n \"\"\"Convert namespace in string\n\n Uses a regex expression to extract all NSArgs and replace them with the\n updated NSArg from the BEL.bio API terms endpoint.\n\n Args:\n bel_str (str): bel statement string or partial string (e.g. subject or object)\n api_url (str): BEL.bio api url to use, e.g. https://api.bel.bio/v1\n namespace_targets (Mapping[str, List[str]]): formatted as in configuration file example\n canonicalize (bool): use canonicalize endpoint/namespace targets\n decanonicalize (bool): use decanonicalize endpoint/namespace targets\n\n Results:\n str: bel statement with namespaces converted\n \"\"\"\n\n # pattern - look for capitalized namespace followed by colon\n # and either a quoted string or a string that\n # can include any char other than space, comma or ')'\n matches = re.findall(r'([A-Z]+:\"(?:\\\\.|[^\"\\\\])*\"|[A-Z]+:(?:[^\\),\\s]+))', bel_str)\n for nsarg in matches:\n if \"DEFAULT:\" in nsarg: # skip default namespaces\n continue\n\n updated_nsarg = convert_nsarg(\n nsarg,\n api_url=api_url,\n namespace_targets=namespace_targets,\n canonicalize=canonicalize,\n decanonicalize=decanonicalize,\n )\n if updated_nsarg != nsarg:\n bel_str = bel_str.replace(nsarg, updated_nsarg)\n\n return bel_str"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef orthologize(ast, bo, species_id: str):\n\n # if species_id == 'TAX:9606' and str(ast) == 'MGI:Sult2a1':\n # import pdb; pdb.set_trace()\n\n if not species_id:\n bo.validation_messages.append(\n (\"WARNING\", \"No species id was provided for orthologization\")\n )\n return ast\n\n if isinstance(ast, NSArg):\n if ast.orthologs:\n # log.debug(f'AST: {ast.to_string()} species_id: {species_id} orthologs: 
{ast.orthologs}')\n if ast.orthologs.get(species_id, None):\n orthologized_nsarg_val = ast.orthologs[species_id][\"decanonical\"]\n ns, value = orthologized_nsarg_val.split(\":\")\n ast.change_nsvalue(ns, value)\n ast.canonical = ast.orthologs[species_id][\"canonical\"]\n ast.decanonical = ast.orthologs[species_id][\"decanonical\"]\n ast.orthologized = True\n bo.ast.species.add(\n (species_id, ast.orthologs[species_id][\"species_label\"])\n )\n else:\n bo.ast.species.add((ast.species_id, ast.species_label))\n bo.validation_messages.append(\n (\"WARNING\", f\"No ortholog found for {ast.namespace}:{ast.value}\")\n )\n elif ast.species_id:\n bo.ast.species.add((ast.species_id, ast.species_label))\n\n # Recursively process every NSArg by processing BELAst and Functions\n if hasattr(ast, \"args\"):\n for arg in ast.args:\n orthologize(arg, bo, species_id)\n\n return ast", "response": "Recursively orthologize BEL Entities in BEL AST using API endpoint"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef populate_ast_nsarg_orthologs(ast, species):\n\n ortholog_namespace = \"EG\"\n\n if isinstance(ast, NSArg):\n if re.match(ortholog_namespace, ast.canonical):\n orthologs = bel.terms.orthologs.get_orthologs(\n ast.canonical, list(species.keys())\n )\n for species_id in species:\n if species_id in orthologs:\n orthologs[species_id][\"species_label\"] = species[species_id]\n\n ast.orthologs = copy.deepcopy(orthologs)\n\n # Recursively process every NSArg by processing BELAst and Functions\n if hasattr(ast, \"args\"):\n for arg in ast.args:\n populate_ast_nsarg_orthologs(arg, species)\n\n return ast", "response": "Recursively collect NSArg orthologs for BEL AST"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef preprocess_bel_stmt(stmt: str) -> str:\n\n stmt = stmt.strip() # remove newline at end of stmt\n stmt = re.sub(r\",+\", \",\", stmt) # remove multiple commas\n stmt = 
re.sub(r\",\", \", \", stmt) # add space after each comma\n stmt = re.sub(r\" +\", \" \", stmt) # remove multiple spaces\n\n return stmt", "response": "Clean up basic formatting of a BEL statement given as a single string."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef quoting_nsarg(nsarg_value):\n quoted = re.findall(r'^\"(.*)\"$', nsarg_value)\n\n if re.search(\n r\"[),\\s]\", nsarg_value\n ): # quote only if it contains whitespace, comma or ')'\n if quoted:\n return nsarg_value\n else:\n return f'\"{nsarg_value}\"'\n else:\n if quoted:\n return quoted[0]\n else:\n return nsarg_value", "response": "Quote the nsarg value if it needs quotes."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _dump_spec(spec):\n with open(\"spec.yaml\", \"w\") as f:\n yaml.dump(spec, f, Dumper=MyDumper, default_flow_style=False)", "response": "Dump a bel specification dictionary using YAML"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncompute edges from BEL AST.", "response": "def compute_edges(ast: BELAst, spec: BELSpec) -> Edges:\n \"\"\"Compute edges\"\"\"\n\n edges = []\n if ast.bel_object.__class__.__name__ == \"BELAst\":\n edges.append(ast.bel_object)\n\n process_ast(edges, ast, spec)\n return edges"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef process_rule(edges: Edges, ast: Function, rule: Mapping[str, Any], spec: BELSpec):\n ast_type = ast.__class__.__name__\n trigger_functions = rule.get(\"trigger_function\", [])\n trigger_types = rule.get(\"trigger_type\", [])\n rule_subject = rule.get(\"subject\")\n rule_relation = rule.get(\"relation\")\n rule_object = rule.get(\"object\")\n\n log.debug(f\"Running {rule_relation} Type: {ast_type}\")\n\n if isinstance(ast, Function):\n function_name = ast.name\n args = ast.args\n parent_function = ast.parent_function\n\n if function_name in 
trigger_functions:\n if rule_subject == \"trigger_value\":\n subject = ast\n\n if rule_object == \"args\":\n for arg in args:\n log.debug(f\"1: {subject} {arg}\")\n edge_ast = BELAst(subject, rule_relation, arg, spec)\n edges.append(edge_ast)\n elif rule_object == \"parent_function\" and parent_function:\n log.debug(f\"2: {subject} {parent_function}\")\n edge_ast = BELAst(subject, rule_relation, parent_function, spec)\n edges.append(edge_ast)\n\n elif ast_type in trigger_types:\n if rule_subject == \"trigger_value\":\n subject = ast\n\n if rule_object == \"args\":\n for arg in args:\n log.debug(f\"3: {subject} {arg}\")\n edge_ast = BELAst(subject, rule_relation, arg, spec)\n edges.append(edge_ast)\n elif rule_object == \"parent_function\" and parent_function:\n log.debug(f\"4: {subject} {parent_function}\")\n edge_ast = BELAst(subject, rule_relation, parent_function, spec)\n edges.append(edge_ast)\n\n if isinstance(ast, NSArg):\n term = \"{}:{}\".format(ast.namespace, ast.value)\n parent_function = ast.parent_function\n\n if ast_type in trigger_types:\n if rule_subject == \"trigger_value\":\n subject = term\n\n if rule_object == \"args\":\n for arg in args:\n log.debug(f\"5: {subject} {arg}\")\n edge_ast = BELAst(subject, rule_relation, arg, spec)\n edges.append(edge_ast)\n elif rule_object == \"parent_function\" and parent_function:\n log.debug(f\"6: {subject} {parent_function}\")\n edge_ast = BELAst(subject, rule_relation, parent_function, spec)\n edges.append(edge_ast)\n\n # Recursively process every element by processing BELAst and Functions\n if hasattr(ast, \"args\"):\n for arg in ast.args:\n process_rule(edges, arg, rule, spec)", "response": "Recursively apply a single computed-edge rule to an AST, appending any resulting edges to the edges list."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef load_orthologs(fo: IO, metadata: dict):\n\n version = metadata[\"metadata\"][\"version\"]\n\n 
# LOAD ORTHOLOGS INTO ArangoDB\n with timy.Timer(\"Load Orthologs\") as timer:\n arango_client = arangodb.get_client()\n belns_db = arangodb.get_belns_handle(arango_client)\n arangodb.batch_load_docs(\n belns_db, orthologs_iterator(fo, version), on_duplicate=\"update\"\n )\n\n log.info(\n \"Load orthologs\",\n elapsed=timer.elapsed,\n source=metadata[\"metadata\"][\"source\"],\n )\n\n # Clean up old entries\n remove_old_ortholog_edges = f\"\"\"\n FOR edge in ortholog_edges\n FILTER edge.source == \"{metadata[\"metadata\"][\"source\"]}\"\n FILTER edge.version != \"{version}\"\n REMOVE edge IN ortholog_edges\n \"\"\"\n remove_old_ortholog_nodes = f\"\"\"\n FOR node in ortholog_nodes\n FILTER node.source == \"{metadata[\"metadata\"][\"source\"]}\"\n FILTER node.version != \"{version}\"\n REMOVE node IN ortholog_nodes\n \"\"\"\n arangodb.aql_query(belns_db, remove_old_ortholog_edges)\n arangodb.aql_query(belns_db, remove_old_ortholog_nodes)\n\n # Add metadata to resource metadata collection\n metadata[\"_key\"] = f\"Orthologs_{metadata['metadata']['source']}\"\n try:\n belns_db.collection(arangodb.belns_metadata_name).insert(metadata)\n except ArangoError as ae:\n belns_db.collection(arangodb.belns_metadata_name).replace(metadata)", "response": "Load orthologs into ArangoDB"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef nanopub_to_edges(nanopub: dict = {}, rules: List[str] = [], orthologize_targets: list = []):\n\n # Collect input values ####################################################\n nanopub_url = nanopub.get(\"source_url\", \"\")\n\n edge_dt = utils.dt_utc_formatted() # don't want this in relation_id\n\n # Extract BEL Version and make sure we can process this\n if nanopub[\"nanopub\"][\"type\"][\"name\"].upper() == \"BEL\":\n bel_version = nanopub[\"nanopub\"][\"type\"][\"version\"]\n versions = bel.lang.bel_specification.get_bel_versions()\n if bel_version not in versions:\n 
log.error(\n f\"Do not know this BEL Version: {bel_version}, these are the ones I can process: {versions.keys()}\"\n )\n return []\n else:\n log.error(\n f\"Not a BEL Nanopub according to nanopub.type.name: {nanopub['nanopub']['type']['name']}\"\n )\n return []\n\n # Required for BEL parsing/canonicalization/orthologization\n api_url = config[\"bel_api\"][\"servers\"][\"api_url\"]\n\n try:\n citation_string = normalize_nanopub_citation(nanopub)\n except Exception as e:\n log.error(f\"Could not create citation string for {nanopub_url}\")\n citation_string = \"\"\n\n if orthologize_targets == []:\n if config[\"bel_api\"].get(\"edges\", None):\n orthologize_targets = config[\"bel_api\"][\"edges\"].get(\"orthologize_targets\", [])\n\n # orig_species_id = [anno['id'] for anno in nanopub['nanopub']['annotations'] if anno['type'] == 'Species']\n # if orig_species_id:\n # orig_species_id = orig_species_id[0]\n\n master_annotations = copy.deepcopy(nanopub[\"nanopub\"][\"annotations\"])\n master_metadata = copy.deepcopy(nanopub[\"nanopub\"][\"metadata\"])\n master_metadata.pop(\"gd_abstract\", None)\n\n nanopub_type = nanopub[\"nanopub\"][\"metadata\"].get(\"nanopub_type\")\n\n # Create Edge Assertion Info ##############################################\n # r = generate_assertion_edge_info(nanopub['nanopub']['assertions'], orig_species_id, orthologize_targets, bel_version, api_url, nanopub_type)\n r = generate_assertion_edge_info(\n nanopub[\"nanopub\"][\"assertions\"], orthologize_targets, bel_version, api_url, nanopub_type\n )\n edge_info_list = r[\"edge_info_list\"]\n\n # Build Edges #############################################################\n edges = []\n errors = []\n for edge_info in edge_info_list:\n annotations = copy.deepcopy(master_annotations)\n metadata = copy.deepcopy(master_metadata)\n\n errors.extend(edge_info[\"errors\"])\n\n if not edge_info.get(\"canonical\"):\n continue\n\n # TODO - remove this\n # if edge_info.get('species_id', False):\n # annotations = 
orthologize_context(edge_info['species_id'], annotations)\n\n edge_hash = utils._create_hash(\n f'{edge_info[\"canonical\"][\"subject\"]} {edge_info[\"canonical\"][\"relation\"]} {edge_info[\"canonical\"][\"object\"]}'\n )\n\n edge = {\n \"edge\": {\n \"subject\": {\n \"name\": edge_info[\"canonical\"][\"subject\"],\n \"name_lc\": edge_info[\"canonical\"][\"subject\"].lower(),\n \"label\": edge_info[\"decanonical\"][\"subject\"],\n \"label_lc\": edge_info[\"decanonical\"][\"subject\"].lower(),\n \"components\": edge_info[\"subject_comp\"],\n },\n \"relation\": {\n \"relation\": edge_info[\"canonical\"][\"relation\"],\n \"edge_hash\": edge_hash,\n \"edge_dt\": edge_dt,\n \"nanopub_url\": nanopub_url,\n \"nanopub_id\": nanopub[\"nanopub\"][\"id\"],\n \"citation\": citation_string,\n \"subject_canon\": edge_info[\"canonical\"][\"subject\"],\n \"subject\": edge_info[\"decanonical\"][\"subject\"],\n \"object_canon\": edge_info[\"canonical\"][\"object\"],\n \"object\": edge_info[\"decanonical\"][\"object\"],\n \"annotations\": copy.deepcopy(annotations),\n \"metadata\": copy.deepcopy(metadata),\n \"public_flag\": True,\n \"edge_types\": edge_info[\"edge_types\"],\n \"species_id\": edge_info[\"species_id\"],\n \"species_label\": edge_info[\"species_label\"],\n },\n \"object\": {\n \"name\": edge_info[\"canonical\"][\"object\"],\n \"name_lc\": edge_info[\"canonical\"][\"object\"].lower(),\n \"label\": edge_info[\"decanonical\"][\"object\"],\n \"label_lc\": edge_info[\"decanonical\"][\"object\"].lower(),\n \"components\": edge_info[\"object_comp\"],\n },\n }\n }\n\n edges.append(copy.deepcopy(edge))\n\n return {\n \"edges\": edges,\n \"nanopub_id\": nanopub[\"nanopub\"][\"id\"],\n \"nanopub_url\": nanopub_url,\n \"success\": True,\n \"errors\": errors,\n }", "response": "Process nanopub into edges and load into EdgeStore"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nextract species from ast. 
species set of tuples", "response": "def extract_ast_species(ast):\n \"\"\"Extract species from ast.species set of tuples (id, label)\"\"\"\n\n species_id = \"None\"\n species_label = \"None\"\n species = [\n (species_id, species_label) for (species_id, species_label) in ast.species if species_id\n ]\n\n if len(species) == 1:\n (species_id, species_label) = species[0]\n\n if not species_id:\n species_id = \"None\"\n species_label = \"None\"\n\n log.debug(f\"AST Species: {ast.species} Species: {species} SpeciesID: {species_id}\")\n\n return (species_id, species_label)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ngenerating a list of edge info for a list of assertions given orthologization targets.", "response": "def generate_assertion_edge_info(\n assertions: List[dict],\n orthologize_targets: List[str],\n bel_version: str,\n api_url: str,\n nanopub_type: str = \"\",\n) -> dict:\n \"\"\"Create edges (SRO) for assertions given orthologization targets\n\n Args:\n assertions: list of BEL statements (SRO object)\n orthologize_targets: list of species in TAX: format\n bel_version: to be used for processing assertions\n api_url: BEL API url endpoint to use for terminologies and orthologies\n \"\"\"\n\n bo = bel.lang.belobj.BEL(bel_version, api_url)\n bo_computed = bel.lang.belobj.BEL(bel_version, api_url)\n\n edge_info_list = []\n\n with utils.Timer() as t:\n for assertion in assertions:\n # if not assertion.get('relation', False):\n # continue # Skip any subject only statements\n\n start_time = t.elapsed\n bo.parse(assertion)\n\n if not bo.ast:\n errors = [\n f\"{error[0]} {error[1]}\"\n for error in bo.validation_messages\n if error[0] == \"ERROR\"\n ]\n edge_info = {\"errors\": copy.deepcopy(errors)}\n edge_info_list.append(copy.deepcopy(edge_info))\n continue\n\n # populate canonical terms and orthologs for assertion\n bo.collect_nsarg_norms()\n bo.collect_orthologs(orthologize_targets)\n\n log.info(\n \"Timing - time to collect nsargs 
and orthologs\", delta_ms=(t.elapsed - start_time)\n )\n\n (edge_species_id, edge_species_label) = extract_ast_species(bo.ast)\n orig_species_id = edge_species_id\n\n canon = bo.canonicalize().to_triple()\n\n components = get_node_subcomponents(bo.ast) # needs to be run after canonicalization\n computed_asts = bo.compute_edges(\n ast_result=True\n ) # needs to be run after canonicalization\n decanon = bo.decanonicalize().to_triple()\n\n if nanopub_type == \"backbone\":\n edge_types = [\"backbone\"]\n else:\n edge_types = [\"original\", \"primary\"]\n\n if assertion.get(\"relation\", False):\n\n causal_edge_type = []\n if \"causal\" in bo.spec[\"relations\"][\"info\"][assertion[\"relation\"]][\"categories\"]:\n edge_types.append(\"causal\")\n\n edge_info = {\n \"edge_types\": edge_types,\n \"species_id\": edge_species_id,\n \"species_label\": edge_species_label,\n \"canonical\": canon,\n \"decanonical\": decanon,\n \"subject_comp\": components[\"subject_comp\"],\n \"object_comp\": components[\"object_comp\"],\n \"errors\": [],\n }\n edge_info_list.append(copy.deepcopy(edge_info))\n\n # Loop through primary computed asts\n for computed_ast in computed_asts:\n bo_computed.ast = computed_ast\n bo_computed.collect_nsarg_norms()\n canon = bo_computed.canonicalize().to_triple()\n components = get_node_subcomponents(\n bo_computed.ast\n ) # needs to be run after canonicalization\n decanon = bo_computed.decanonicalize().to_triple()\n\n (edge_species_id, edge_species_label) = extract_ast_species(bo_computed.ast)\n\n edge_info = {\n \"edge_types\": [\"computed\"],\n \"species_id\": edge_species_id,\n \"species_label\": edge_species_label,\n \"canonical\": canon,\n \"decanonical\": decanon,\n \"subject_comp\": components[\"subject_comp\"],\n \"object_comp\": components[\"object_comp\"],\n \"errors\": [],\n }\n if [edge for edge in edge_info_list if edge.get(\"canonical\", {}) == canon]:\n continue # skip if edge is already included (i.e. 
the primary is same as computed edge)\n edge_info_list.append(copy.deepcopy(edge_info))\n\n # Skip orthologs if backbone nanopub\n if nanopub_type == \"backbone\":\n continue\n\n # only process orthologs if there are species-specific NSArgs\n if len(bo.ast.species) > 0:\n # Loop through orthologs\n for species_id in orthologize_targets:\n log.debug(f\"Orig species: {orig_species_id} Target species: {species_id}\")\n if species_id == orig_species_id:\n continue\n\n bo.orthologize(species_id)\n\n (edge_species_id, edge_species_label) = extract_ast_species(bo.ast)\n\n if edge_species_id == \"None\" or edge_species_id == orig_species_id:\n log.debug(\n f'Skipping orthologization- species == \"None\" or {orig_species_id} ASTspecies: {bo.ast.species} for {bo}'\n )\n continue\n\n ortho_decanon = bo.orthologize(\n species_id\n ).to_triple() # defaults to decanonicalized orthologized form\n ortho_canon = bo.canonicalize().to_triple()\n computed_asts = bo.compute_edges(\n ast_result=True\n ) # needs to be run after canonicalization\n components = get_node_subcomponents(\n bo.ast\n ) # needs to be run after canonicalization\n\n if assertion.get(\"relation\", False):\n edge_info = {\n \"edge_types\": [\"orthologized\", \"primary\"] + causal_edge_type,\n \"species_id\": edge_species_id,\n \"species_label\": edge_species_label,\n \"canonical\": ortho_canon,\n \"decanonical\": ortho_decanon,\n \"subject_comp\": components[\"subject_comp\"],\n \"object_comp\": components[\"object_comp\"],\n \"errors\": [],\n }\n\n edge_info_list.append(copy.deepcopy(edge_info))\n\n # Loop through orthologized computed asts\n for computed_ast in computed_asts:\n bo_computed.ast = computed_ast\n bo_computed.collect_nsarg_norms()\n canon = bo_computed.canonicalize().to_triple()\n components = get_node_subcomponents(\n bo_computed.ast\n ) # needs to be run after canonicalization\n decanon = bo_computed.decanonicalize().to_triple()\n\n (edge_species_id, edge_species_label) = 
extract_ast_species(bo_computed.ast)\n\n edge_info = {\n \"edge_types\": [\"computed\", \"orthologized\"],\n \"species_id\": edge_species_id,\n \"species_label\": edge_species_label,\n \"canonical\": canon,\n \"decanonical\": decanon,\n \"subject_comp\": components[\"subject_comp\"],\n \"object_comp\": components[\"object_comp\"],\n \"errors\": [],\n }\n if [edge for edge in edge_info_list if edge.get(\"canonical\", {}) == canon]:\n continue # skip if edge is already included (i.e. the primary is same as computed edge)\n edge_info_list.append(copy.deepcopy(edge_info))\n\n log.info(\"Timing - Generated all edge info for all nanopub assertions\", delta_ms=t.elapsed)\n\n return {\"edge_info_list\": edge_info_list}"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef migrate(belstr: str) -> str:\n\n bo.ast = bel.lang.partialparse.get_ast_obj(belstr, \"2.0.0\")\n\n return migrate_ast(bo.ast).to_string()", "response": "Migrate BEL 1 to BEL 2. 0. 0"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nmigrates BEL1 assertion into BEL 2. 0. 
0 SRO triple", "response": "def migrate_into_triple(belstr: str) -> str:\n \"\"\"Migrate BEL1 assertion into BEL 2.0.0 SRO triple\"\"\"\n\n bo.ast = bel.lang.partialparse.get_ast_obj(belstr, \"2.0.0\")\n\n return migrate_ast(bo.ast).to_triple()"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef convert(ast):\n\n if ast and ast.type == \"Function\":\n # Activity function conversion\n if (\n ast.name != \"molecularActivity\"\n and ast.name in spec[\"namespaces\"][\"Activity\"][\"list\"]\n ):\n print(\"name\", ast.name, \"type\", ast.type)\n ast = convert_activity(ast)\n return ast # Otherwise - this will trigger on the BEL2 molecularActivity\n\n # translocation conversion\n elif ast.name in [\"tloc\", \"translocation\"]:\n ast = convert_tloc(ast)\n\n fus_flag = False\n for idx, arg in enumerate(ast.args):\n if arg.__class__.__name__ == \"Function\":\n\n # Fix substitution -> variation()\n if arg.name in [\"sub\", \"substitution\"]:\n ast.args[idx] = convert_sub(arg)\n\n elif arg.name in [\"trunc\", \"truncation\"]:\n ast.args[idx] = convert_trunc(arg)\n\n elif arg.name in [\"pmod\", \"proteinModification\"]:\n ast.args[idx] = convert_pmod(arg)\n\n elif arg.name in [\"fus\", \"fusion\"]:\n fus_flag = True\n\n # Recursively process Functions\n ast.args[idx] = convert(ast.args[idx])\n\n if fus_flag:\n ast = convert_fus(ast)\n\n return ast", "response": "Convert BEL1 AST Function to BEL2 AST Function"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef convert_tloc(ast):\n\n from_loc_arg = ast.args[1]\n to_loc_arg = ast.args[2]\n from_loc = Function(\"fromLoc\", spec, parent_function=ast)\n from_loc.add_argument(\n NSArg(from_loc_arg.namespace, from_loc_arg.value, parent_function=from_loc)\n )\n to_loc = Function(\"toLoc\", spec, parent_function=ast)\n to_loc.add_argument(\n NSArg(to_loc_arg.namespace, to_loc_arg.value, parent_function=to_loc)\n )\n\n ast.args[1] = 
from_loc\n ast.args[2] = to_loc\n\n return ast", "response": "Convert BEL1 tloc to BEL2 tloc"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef convert_activity(ast):\n\n if len(ast.args) > 1:\n log.error(f\"Activity should not have more than 1 argument {ast.to_string()}\")\n\n p_arg = ast.args[0] # protein argument\n print(\"p_arg\", p_arg)\n ma_arg = Function(\"ma\", bo.spec)\n ma_arg.add_argument(StrArg(ast.name, ma_arg))\n p_arg.change_parent_fn(ma_arg)\n ast = Function(\"activity\", bo.spec)\n p_arg.change_parent_fn(ast)\n ast.add_argument(p_arg)\n ast.add_argument(ma_arg)\n\n return ast", "response": "Convert BEL1 activities to BEL2 act"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nupdate BEL1 pmod object with new protein modification term", "response": "def convert_pmod(pmod):\n \"\"\"Update BEL1 pmod() protein modification term\"\"\"\n\n if pmod.args[0].value in spec[\"bel1_migration\"][\"protein_modifications\"]:\n pmod.args[0].value = spec[\"bel1_migration\"][\"protein_modifications\"][\n pmod.args[0].value\n ]\n\n return pmod"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef convert_fus(ast):\n\n parent_fn_name = ast.name_short\n prefix_list = {\"p\": \"p.\", \"r\": \"r.\", \"g\": \"c.\"}\n prefix = prefix_list[parent_fn_name]\n\n fus1_ns = ast.args[0].namespace\n fus1_val = ast.args[0].value\n\n arg_fus = ast.args[1]\n fus_args = [None, \"?\", \"?\"]\n for idx, arg in enumerate(arg_fus.args):\n fus_args[idx] = arg\n\n fus2_ns = fus_args[0].namespace\n fus2_val = fus_args[0].value\n\n if fus_args[1] == \"?\":\n fus1_range = fus_args[1]\n else:\n fus1_range = f'\"{prefix}1_{fus_args[1].value}\"'\n\n if fus_args[2] == \"?\":\n fus2_range = fus_args[2]\n else:\n fus2_range = f'\"{prefix}{fus_args[2].value}_?\"'\n\n fus = Function(\"fus\", spec, parent_function=ast)\n fus.args = [\n 
NSArg(fus1_ns, fus1_val, fus),\n StrArg(fus1_range, fus),\n NSArg(fus2_ns, fus2_val, fus),\n StrArg(fus2_range, fus),\n ]\n\n # Remove BEL\n ast_args = ast.args\n ast_args.pop(0)\n ast_args.pop(0)\n\n if ast_args == [None]:\n ast_args = []\n\n ast.args = []\n ast.add_argument(fus)\n\n if len(ast_args) > 0:\n ast.args.extend(ast_args)\n\n return ast", "response": "Convert BEL1 fus to BEL2 fus"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nconvert BEL1 sub to BEL2 var", "response": "def convert_sub(sub):\n \"\"\"Convert BEL1 sub() to BEL2 var()\"\"\"\n\n args = sub.args\n (ref_aa, pos, new_aa) = args\n\n parent_fn_name = sub.parent_function.name_short\n prefix_list = {\"p\": \"p.\", \"r\": \"r.\", \"g\": \"c.\"}\n prefix = prefix_list[parent_fn_name]\n\n new_var_arg = f'\"{prefix}{spec[\"namespaces\"][\"AminoAcid\"][\"to_short\"][ref_aa.value]}{pos.value}{spec[\"namespaces\"][\"AminoAcid\"][\"to_short\"][new_aa.value]}\"'\n\n new_var = bel.lang.ast.Function(\"var\", bo.spec)\n\n new_var.add_argument(StrArg(new_var_arg, new_var))\n\n return new_var"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef convert_trunc(trunc):\n\n parent_fn_name = trunc.parent_function.name_short\n prefix_list = {\"p\": \"p.\", \"r\": \"r.\", \"g\": \"c.\"}\n prefix = prefix_list[parent_fn_name]\n\n new_var_arg = f'\"truncated at {trunc.args[0].value}\"'\n\n new_var = bel.lang.ast.Function(\"var\", bo.spec)\n\n new_var.add_argument(StrArg(new_var_arg, new_var))\n\n return new_var", "response": "Convert a truncated function to BEL1 var"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_user_creds(username, password):\n username = utils.first_true(\n [username, config[\"bel_api\"][\"servers\"][\"arangodb_username\"]], default=\"\"\n )\n password = utils.first_true(\n [password, 
config[\"secrets\"][\"bel_api\"][\"servers\"].get(\"arangodb_password\")],\n default=\"\",\n )\n\n return username, password", "response": "Get username and password from provided username and password OR in config OR blank in that order"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_client(host=None, port=None, username=None, password=None, enable_logging=True):\n\n host = utils.first_true(\n [host, config[\"bel_api\"][\"servers\"][\"arangodb_host\"], \"localhost\"]\n )\n port = utils.first_true([port, config[\"bel_api\"][\"servers\"][\"arangodb_port\"], 8529])\n username = utils.first_true(\n [username, config[\"bel_api\"][\"servers\"][\"arangodb_username\"], \"\"]\n )\n password = utils.first_true(\n [\n password,\n config.get(\n \"secrets\",\n config[\"secrets\"][\"bel_api\"][\"servers\"].get(\"arangodb_password\"),\n ),\n \"\",\n ]\n )\n\n client = arango.client.ArangoClient(\n protocol=config[\"bel_api\"][\"servers\"][\"arangodb_protocol\"], host=host, port=port\n )\n\n return client", "response": "Get arango client and edgestore db handle"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nget the arangodb database handle for the given resource.", "response": "def get_edgestore_handle(\n client: arango.client.ArangoClient,\n username=None,\n password=None,\n edgestore_db_name: str = edgestore_db_name,\n edgestore_edges_name: str = edgestore_edges_name,\n edgestore_nodes_name: str = edgestore_nodes_name,\n edgestore_pipeline_name: str = edgestore_pipeline_name,\n edgestore_pipeline_stats_name: str = edgestore_pipeline_stats_name,\n edgestore_pipeline_errors_name: str = edgestore_pipeline_errors_name,\n) -> arango.database.StandardDatabase:\n \"\"\"Get Edgestore arangodb database handle\n\n Args:\n client (arango.client.ArangoClient): Description\n username (None, optional): Description\n password (None, optional): Description\n edgestore_db_name (str, 
optional): Description\n edgestore_edges_name (str, optional): Description\n edgestore_nodes_name (str, optional): Description\n\n Returns:\n arango.database.StandardDatabase: Description\n \"\"\"\n\n (username, password) = get_user_creds(username, password)\n\n sys_db = client.db(\"_system\", username=username, password=password)\n\n # Create a new database named \"edgestore\"\n try:\n if username and password:\n edgestore_db = sys_db.create_database(\n name=edgestore_db_name,\n users=[{\"username\": username, \"password\": password, \"active\": True}],\n )\n else:\n edgestore_db = sys_db.create_database(name=edgestore_db_name)\n except arango.exceptions.DatabaseCreateError:\n if username and password:\n edgestore_db = client.db(\n edgestore_db_name, username=username, password=password\n )\n else:\n edgestore_db = client.db(edgestore_db_name)\n\n # TODO - add a skiplist index for _from? or _key? to be able to do paging?\n # has_collection function doesn't seem to be working\n # if not edgestore_db.has_collection(edgestore_nodes_name):\n try:\n nodes = edgestore_db.create_collection(\n edgestore_nodes_name, index_bucket_count=64\n )\n nodes.add_hash_index(fields=[\"name\"], unique=False)\n nodes.add_hash_index(\n fields=[\"components\"], unique=False\n ) # add subject/object components as node properties\n except Exception:\n pass\n\n # if not edgestore_db.has_collection(edgestore_edges_name):\n try:\n edges = edgestore_db.create_collection(\n edgestore_edges_name, edge=True, index_bucket_count=64\n )\n edges.add_hash_index(fields=[\"relation\"], unique=False)\n edges.add_hash_index(fields=[\"edge_types\"], unique=False)\n edges.add_hash_index(fields=[\"nanopub_id\"], unique=False)\n edges.add_hash_index(fields=[\"metadata.project\"], unique=False)\n edges.add_hash_index(fields=[\"annotations[*].id\"], unique=False)\n except Exception:\n pass\n\n # if not edgestore_db.has_collection(edgestore_pipeline_name):\n try:\n 
edgestore_db.create_collection(edgestore_pipeline_name)\n except Exception:\n pass\n\n try:\n edgestore_db.create_collection(edgestore_pipeline_errors_name)\n except Exception:\n pass\n\n try:\n edgestore_db.create_collection(edgestore_pipeline_stats_name)\n except arango.exceptions.CollectionCreateError as e:\n pass\n\n return edgestore_db"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ngetting BEL namespace arango db handle", "response": "def get_belns_handle(client, username=None, password=None):\n \"\"\"Get BEL namespace arango db handle\"\"\"\n\n (username, password) = get_user_creds(username, password)\n\n sys_db = client.db(\"_system\", username=username, password=password)\n\n # Create a new database named \"belns\"\n try:\n if username and password:\n belns_db = sys_db.create_database(\n name=belns_db_name,\n users=[{\"username\": username, \"password\": password, \"active\": True}],\n )\n else:\n belns_db = sys_db.create_database(name=belns_db_name)\n except arango.exceptions.DatabaseCreateError:\n if username and password:\n belns_db = client.db(belns_db_name, username=username, password=password)\n else:\n belns_db = client.db(belns_db_name)\n\n try:\n belns_db.create_collection(belns_metadata_name)\n except Exception:\n pass\n\n try:\n equiv_nodes = belns_db.create_collection(\n equiv_nodes_name, index_bucket_count=64\n )\n equiv_nodes.add_hash_index(fields=[\"name\"], unique=True)\n except Exception:\n pass\n\n try:\n belns_db.create_collection(equiv_edges_name, edge=True, index_bucket_count=64)\n except Exception:\n pass\n\n try:\n ortholog_nodes = belns_db.create_collection(\n ortholog_nodes_name, index_bucket_count=64\n )\n ortholog_nodes.add_hash_index(fields=[\"name\"], unique=True)\n except Exception:\n pass\n\n try:\n belns_db.create_collection(\n ortholog_edges_name, edge=True, index_bucket_count=64\n )\n except Exception:\n pass\n\n return belns_db"} {"SOURCE": "codesearchnet", "instruction": "Here you have a 
function in Python 3, explain what it does\ndef get_belapi_handle(client, username=None, password=None):\n\n (username, password) = get_user_creds(username, password)\n\n sys_db = client.db(\"_system\", username=username, password=password)\n\n # Create a new database named \"belapi\"\n try:\n if username and password:\n belapi_db = sys_db.create_database(\n name=belapi_db_name,\n users=[{\"username\": username, \"password\": password, \"active\": True}],\n )\n else:\n belapi_db = sys_db.create_database(name=belapi_db_name)\n except arango.exceptions.DatabaseCreateError:\n if username and password:\n belapi_db = client.db(belapi_db_name, username=username, password=password)\n else:\n belapi_db = client.db(belapi_db_name)\n\n try:\n belapi_db.create_collection(belapi_settings_name)\n except Exception:\n pass\n\n try:\n belapi_db.create_collection(belapi_statemgmt_name)\n except Exception:\n pass\n\n return belapi_db", "response": "Get BEL API arango db handle"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nbatching load documents from ArangoDB", "response": "def batch_load_docs(db, doc_iterator, on_duplicate=\"replace\"):\n \"\"\"Batch load documents\n\n Args:\n db: ArangoDB client database handle\n doc_iterator: function that yields (collection_name, doc_key, doc)\n on_duplicate: defaults to replace, but can be error, update, replace or ignore\n\n https://python-driver-for-arangodb.readthedocs.io/en/master/specs.html?highlight=import_bulk#arango.collection.StandardCollection.import_bulk\n \"\"\"\n\n batch_size = 100\n\n counter = 0\n collections = {}\n docs = {}\n\n if on_duplicate not in [\"error\", \"update\", \"replace\", \"ignore\"]:\n log.error(f\"Bad parameter for on_duplicate: {on_duplicate}\")\n return\n\n for (collection_name, doc) in doc_iterator:\n if collection_name not in collections:\n collections[collection_name] = db.collection(collection_name)\n docs[collection_name] = []\n\n counter += 1\n\n 
docs[collection_name].append(doc)\n\n if counter % batch_size == 0:\n log.info(f\"Bulk import arangodb: {counter}\")\n for cname in docs:\n collections[cname].import_bulk(\n docs[cname], on_duplicate=on_duplicate, halt_on_error=False\n )\n docs[cname] = []\n\n log.info(f\"Bulk import arangodb: {counter}\")\n for cname in docs:\n collections[cname].import_bulk(\n docs[cname], on_duplicate=on_duplicate, halt_on_error=False\n )\n docs[cname] = []"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nconverts an arango id to a _key value", "response": "def arango_id_to_key(_id):\n \"\"\"Remove illegal chars from potential arangodb _key (id)\n\n Args:\n _id (str): id to be used as arangodb _key\n\n Returns:\n (str): _key value with illegal chars removed\n \"\"\"\n\n key = re.sub(r\"[^a-zA-Z0-9\\_\\-\\:\\.\\@\\(\\)\\+\\,\\=\\;\\$\\!\\*\\%]+\", r\"_\", _id)\n if len(key) > 254:\n log.error(\n f\"Arango _key cannot be longer than 254 chars: Len={len(key)} Key: {key}\"\n )\n elif len(key) < 1:\n log.error(f\"Arango _key cannot be an empty string: Len={len(key)} Key: {key}\")\n\n return key"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef load_resource(resource_url: str, forceupdate: bool = False):\n\n log.info(f\"Loading resource {resource_url}\")\n\n try:\n # Download resource\n fo = bel.utils.download_file(resource_url)\n\n if not fo:\n log.error(f\"Could not download and open file {resource_url}\")\n return \"Failed to download resource_url\"\n\n # Get metadata\n fo.seek(0)\n with gzip.open(fo, \"rt\") as f:\n metadata = json.loads(f.__next__())\n\n if \"metadata\" not in metadata:\n log.error(f\"Missing metadata entry for {resource_url}\")\n return \"Cannot load resource file - missing metadata object in first line of file\"\n\n # Load resource files\n if metadata[\"metadata\"][\"type\"] == \"namespace\":\n bel.resources.namespace.load_terms(fo, metadata, 
forceupdate)\n\n elif metadata[\"metadata\"][\"type\"] == \"ortholog\":\n bel.resources.ortholog.load_orthologs(fo, metadata)\n\n finally:\n fo.close()", "response": "Load a resource into the BEL API."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nget term list using term_id", "response": "def get_terms(term_id):\n \"\"\"Get term(s) using term_id - given term_id may match multiple term records\n\n Term ID has to match either the id, alt_ids or obsolete_ids\n \"\"\"\n\n search_body = {\n \"query\": {\n \"bool\": {\n \"should\": [\n {\"term\": {\"id\": term_id}},\n {\"term\": {\"alt_ids\": term_id}},\n {\"term\": {\"obsolete_ids\": term_id}},\n ]\n }\n }\n }\n\n result = es.search(index=\"terms\", body=search_body)\n\n results = []\n for r in result[\"hits\"][\"hits\"]:\n results.append(r[\"_source\"])\n\n return results"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ngetting the equivalents given a term id", "response": "def get_equivalents(term_id: str) -> List[Mapping[str, Union[str, bool]]]:\n \"\"\"Get equivalents given ns:id\n\n\n Args:\n term_id (str): term id\n Returns:\n List[Mapping[str, Union[str, bool]]]: e.g. [{'term_id': 'HGNC:5', 'namespace': 'HGNC'}, 'primary': False]\n \"\"\"\n try:\n errors = []\n terms = get_terms(term_id)\n if len(terms) == 0:\n return {\"equivalents\": [], \"errors\": errors}\n elif len(terms) > 1:\n errors.append(\n f'Too many primary IDs returned. 
Given term_id: {term_id} matches these term_ids: {[term[\"id\"] for term in terms]}'\n )\n return {\"equivalents\": [], \"errors\": errors}\n else:\n term_id = terms[0][\"id\"]\n\n term_id_key = bel.db.arangodb.arango_id_to_key(term_id)\n\n equivalents = []\n query = f\"\"\"\n FOR vertex, edge IN 1..5\n ANY 'equivalence_nodes/{term_id_key}' equivalence_edges\n OPTIONS {{bfs: true, uniqueVertices : 'global'}}\n RETURN DISTINCT {{\n term_id: vertex.name,\n namespace: vertex.namespace,\n primary: vertex.primary\n }}\n \"\"\"\n\n cursor = belns_db.aql.execute(query, count=True, batch_size=20)\n for doc in cursor:\n if doc.get(\"term_id\", False):\n equivalents.append(doc)\n\n equivalents.append(\n {\"term_id\": term_id, \"namespace\": term_id.split(\":\")[0], \"primary\": True}\n )\n\n return {\"equivalents\": equivalents, \"errors\": errors}\n\n except Exception as e:\n log.error(f\"Problem getting term equivalents for {term_id} msg: {e}\")\n return {\"equivalents\": [], \"errors\": [f\"Unexpected error {e}\"]}"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_normalized_term(term_id: str, equivalents: list, namespace_targets: dict) -> str:\n\n if equivalents and len(equivalents) > 0:\n for start_ns in namespace_targets:\n if re.match(start_ns, term_id):\n for target_ns in namespace_targets[start_ns]:\n for e in equivalents:\n if e and target_ns in e[\"namespace\"] and e[\"primary\"]:\n normalized_term = e[\"term_id\"]\n return normalized_term\n\n return term_id", "response": "Get normalized term id from equivalents list"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngetting term labels given term ids", "response": "def get_labels(term_ids: list) -> dict:\n \"\"\"Get term labels given term ids\n\n This only takes the first term returned for a term_id so use the\n unique term_id for a term not an alternate id that might not be unique.\n \"\"\"\n term_labels = {}\n for term_id in 
term_ids:\n term = get_terms(term_id)\n term_labels[term_id] = term[0].get(\"label\", \"\")\n\n return term_labels"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngetting normalized terms - canonical and decanonical forms", "response": "def get_normalized_terms(term_id: str) -> dict:\n \"\"\"Get normalized terms - canonical/decanonical forms\"\"\"\n\n canonical = term_id\n decanonical = term_id\n\n canonical_namespace_targets = config[\"bel\"][\"lang\"][\"canonical\"]\n decanonical_namespace_targets = config[\"bel\"][\"lang\"][\"decanonical\"]\n results = get_equivalents(term_id)\n equivalents = results[\"equivalents\"]\n\n # log.debug(f'Equivalents: {equivalents}')\n\n if equivalents:\n canonical = get_normalized_term(term_id, equivalents, canonical_namespace_targets)\n decanonical = get_normalized_term(canonical, equivalents, decanonical_namespace_targets)\n\n # log.debug(f'canonical: {canonical}, decanonical: {decanonical}, original: {term_id}')\n\n return {\"canonical\": canonical, \"decanonical\": decanonical, \"original\": term_id}"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _generate_hash(self):\n \n self.hash = None\n str_hash = ''\n \n for key, val in self._get_params().iteritems():\n str_hash += smart_str(val)\n \n # Append the encryption string\n str_hash += self._service.encryption_key\n \n # Set md5 hash on the object\n self.hash = hashlib.md5(str_hash).hexdigest()", "response": "Generates a hash based on the specific fields for the object."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nmakes the SOAP request and convert the result to a dictionary.", "response": "def _send_request(self):\n \"\"\"\n Make the SOAP request and convert the result to a dictionary.\n \"\"\"\n \n # Generate the hash variable and parameters\n self._generate_hash()\n params = self._get_params()\n \n # Make the SOAP request\n try:\n resp = 
self._endpoint(**params)\n logger.debug(resp)\n except WebFault, e:\n logger.exception('An error occurred while making the SOAP request.')\n return None\n \n # Convert XML response into a dictionary\n self.response = XmlDictConfig(ElementTree.XML(smart_str(resp)))\n \n # Normalize dictionary values\n self.response = normalize_dictionary_values(self.response)\n \n # Log all non OK status codes\n if self.response['status']['errorCode'] != 'OK':\n logger.error(resp)\n \n return self.response"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_combination_action(combination):\n accepted_actions = ('link', 'js')\n for action in accepted_actions:\n if action in combination:\n return {action: combination[action]}\n return {}", "response": "Returns the action for a keyboard combination."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_processed_hotkeys(hotkeys=None):\n hotkeys = hotkeys or ks_settings.HOTKEYS\n processed_hotkeys = AutoVivification()\n if not hotkeys:\n return processed_hotkeys\n\n for combination in hotkeys:\n key_codes = get_key_codes(combination['keys'])\n if len(key_codes) == 1:\n processed_hotkeys[key_codes[0]] = get_combination_action(combination)\n elif len(key_codes) == 2:\n processed_hotkeys[key_codes[0]][key_codes[1]] = get_combination_action(combination)\n elif len(key_codes) == 3:\n processed_hotkeys[key_codes[0]][key_codes[1]][key_codes[2]] = get_combination_action(combination)\n # TODO: make dynamic vivification\n\n return processed_hotkeys", "response": "Process passed dict with key combinations or HOTKEYS dict from settings. 
HOTKEYS settings."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncalculates the list of key codes from a string with key combinations.", "response": "def get_key_codes(keys):\n \"\"\"\n Calculates the list of key codes from a string with key combinations.\n Ex: 'CTRL+A' will produce the output (17, 65)\n \"\"\"\n keys = keys.strip().upper().split('+')\n codes = list()\n for key in keys:\n code = ks_settings.KEY_CODES.get(key.strip())\n if code:\n codes.append(code)\n return codes"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef parse(\n self,\n assertion: Union[str, Mapping[str, str]],\n strict: bool = False,\n parseinfo: bool = False,\n rule_name: str = \"start\",\n error_level: str = \"WARNING\",\n ) -> \"BEL\":\n \"\"\"Parse and semantically validate BEL statement\n\n Parses a BEL statement given as a string and returns an AST, Abstract Syntax Tree (defined in ast.py)\n if the statement is valid, self.parse_valid. Else, the AST attribute is None and there will be validation error messages\n in self.validation_messages. 
self.validation_messages will contain WARNINGS if\n warranted even if the statement parses correctly.\n\n Error Levels are similar to log levels - selecting WARNING includes both\n WARNING and ERROR, selecting ERROR just includes ERROR\n\n Args:\n assertion: BEL statement (if str -> 'S R O', if dict {'subject': S, 'relation': R, 'object': O})\n strict: specify to use strict or loose parsing; defaults to loose\n parseinfo: specify whether or not to include Tatsu parse information in AST\n rule_name: starting point in parser - defaults to 'start'\n error_level: return ERRORs only or also WARNINGs\n\n Returns:\n ParseObject: The ParseObject which contain either an AST or error messages.\n \"\"\"\n\n self.ast = None\n self.parse_valid = False\n self.parse_visualize_error = \"\"\n self.validation_messages = [] # Reset messages when parsing a new BEL Statement\n\n if isinstance(assertion, dict):\n if assertion.get(\"relation\", False) and assertion.get(\"object\", False):\n statement = f\"{assertion['subject']} {assertion['relation']} {assertion['object']}\"\n elif assertion.get(\"subject\"):\n statement = f\"{assertion['subject']}\"\n else:\n statement = \"\"\n else:\n statement = assertion\n\n self.original_bel_stmt = statement\n\n # pre-process to remove extra white space, add space after commas, etc.\n self.bel_stmt = bel_utils.preprocess_bel_stmt(statement)\n\n # TODO - double check these tests before enabling\n # is_valid, messages = bel_utils.simple_checks(self.bel_stmt)\n # if not is_valid:\n # self.validation_messages.extend(messages)\n # return self\n\n # Check to see if empty string for bel statement\n if len(self.bel_stmt) == 0:\n self.validation_messages.append(\n (\"ERROR\", \"Please include a valid BEL statement - found empty string.\")\n )\n return self\n\n try:\n # see if an AST is returned without any parsing errors\n\n ast_dict = self.parser.parse(\n self.bel_stmt, rule_name=rule_name, trace=False, parseinfo=parseinfo\n )\n self.ast = 
lang_ast.ast_dict_to_objects(ast_dict, self)\n\n self.parse_valid = True\n\n except FailedParse as e:\n # if an error is returned, send to handle_syntax, error\n error, visualize_error = bel_utils.handle_parser_syntax_error(e)\n self.parse_visualize_error = visualize_error\n if visualize_error:\n self.validation_messages.append(\n (\"ERROR\", f\"{error}\\n{visualize_error}\")\n )\n else:\n self.validation_messages.append(\n (\"ERROR\", f\"{error}\\nBEL: {self.bel_stmt}\")\n )\n self.ast = None\n\n except Exception as e:\n log.error(\"Error {}, error type: {}\".format(e, type(e)))\n self.validation_messages.append(\n (\"ERROR\", \"Error {}, error type: {}\".format(e, type(e)))\n )\n\n return self", "response": "Parses a BEL statement and returns a ParseObject which contains either an AST or an error message."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef canonicalize(self, namespace_targets: Mapping[str, List[str]] = None) -> \"BEL\":\n\n # TODO Need to order position independent args\n\n if not self.ast:\n return self\n\n # Collect canonical/decanonical NSArg values\n if not self.ast.collected_nsarg_norms:\n self = self.collect_nsarg_norms()\n\n # TODO Need to pass namespace target overrides for canonicalization\n self.ast.canonicalize()\n\n # self.ast = bel_utils.convert_namespaces_ast(self.ast, canonicalize=True, api_url=self.api_url, namespace_targets=namespace_targets)\n\n return self", "response": "Returns a canonicalized BEL statement string."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef collect_nsarg_norms(self):\n start_time = datetime.datetime.now()\n\n self.ast = bel_utils.populate_ast_nsarg_defaults(self.ast, self.ast)\n self.ast.collected_nsarg_norms = True\n if (\n hasattr(self.ast, \"bel_object\")\n and self.ast.bel_object\n and self.ast.bel_object.type == \"BELAst\"\n ):\n 
self.ast.bel_object.collected_nsarg_norms = True\n\n end_time = datetime.datetime.now()\n delta_ms = f\"{(end_time - start_time).total_seconds() * 1000:.1f}\"\n\n log.info(\"Timing - prepare nsarg normalization\", delta_ms=delta_ms)\n\n return self", "response": "Adds canonical and decanonical values to the AST object for canonicalization"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef collect_orthologs(self, species: list) -> \"BEL\":\n\n if not species:\n return self\n\n species_labels = bel.terms.terms.get_labels(species)\n\n # Collect canonical/decanonical NSArg values\n if not self.ast.collected_nsarg_norms:\n self = self.collect_nsarg_norms()\n\n start_time = datetime.datetime.now()\n\n self.ast = bel_utils.populate_ast_nsarg_orthologs(self.ast, species_labels)\n self.ast.collected_orthologs = True\n if (\n hasattr(self.ast, \"bel_object\")\n and self.ast.bel_object\n and self.ast.bel_object.type == \"BELAst\"\n ):\n self.ast.bel_object.collected_orthologs = True\n\n end_time = datetime.datetime.now()\n delta_ms = f\"{(end_time - start_time).total_seconds() * 1000:.1f}\"\n\n log.info(\"Timing - prepare nsarg normalization\", delta_ms=delta_ms)\n\n return self", "response": "Add NSArg orthologs to the AST for the given species."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef compute_edges(\n self, rules: List[str] = None, ast_result=False, fmt=\"medium\"\n ) -> List[Mapping[str, Any]]:\n \"\"\"Computed edges from primary BEL statement\n\n Takes an AST and generates all computed edges based on BEL Specification YAML computed signatures.\n Will run only the list of computed edge rules if given.\n\n Args:\n rules (list): a list of rules to filter; only the rules in this list will be applied to computed\n fmt (str): short, medium or long version of BEL Edge (function and relation names)\n Returns:\n List[Mapping[str, Any]]: BEL 
Edges in medium format\n \"\"\"\n\n if not self.ast:\n return self\n\n edges_asts = bel.edge.computed.compute_edges(self.ast, self.spec)\n\n if ast_result:\n return edges_asts\n\n edges = []\n for ast in edges_asts:\n edges.append(\n {\n \"subject\": ast.bel_subject.to_string(),\n \"relation\": ast.bel_relation,\n \"object\": ast.bel_object.to_string(),\n }\n )\n\n return edges", "response": "Computes all edges from the primary BEL statement and returns a list of edges in the medium format."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nconverting the object to BEL triple format", "response": "def to_triple(self, fmt: str = \"medium\") -> dict:\n \"\"\"Convert AST object to BEL triple\n\n Args:\n fmt (str): short, medium, long formatted BEL statements\n short = short function and short relation format\n medium = short function and long relation format\n long = long function and long relation format\n\n Returns:\n dict: {'subject': , 'relation': , 'object': }\n \"\"\"\n\n if self.ast:\n return self.ast.to_triple(ast_obj=self.ast, fmt=fmt)\n else:\n return {}"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef print_tree(self) -> str:\n\n if self.ast:\n return self.ast.print_tree(ast_obj=self.ast)\n else:\n return \"\"", "response": "Convert AST object to tree view of BEL AST"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nconvert Tatsu AST dictionary to BEL AST object", "response": "def ast_dict_to_objects(ast_dict: Mapping[str, Any], bel_obj) -> BELAst:\n \"\"\"Convert Tatsu AST dictionary to BEL AST object\n\n Args:\n ast_dict (Mapping[str, Any])\n\n Returns:\n BELAst: object representing the BEL Statement AST\n \"\"\"\n ast_subject = ast_dict.get(\"subject\", None)\n ast_object = ast_dict.get(\"object\", None)\n\n bel_subject = None\n bel_object = None\n bel_relation = ast_dict.get(\"relation\")\n\n if ast_subject:\n bel_subject = 
function_ast_to_objects(ast_subject, bel_obj)\n\n if ast_object:\n bel_object = function_ast_to_objects(ast_object, bel_obj)\n\n ast_obj = BELAst(bel_subject, bel_relation, bel_object, bel_obj.spec)\n\n return ast_obj"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nconverts the BEL AST object to string version of BEL AST object.", "response": "def to_string(self, ast_obj=None, fmt: str = \"medium\") -> str:\n \"\"\"Convert AST object to string\n\n Args:\n fmt (str): short, medium, long formatted BEL statements\n short = short function and short relation format\n medium = short function and long relation format\n long = long function and long relation format\n canonicalize\n\n Returns:\n str: string version of BEL AST\n \"\"\"\n\n if not ast_obj:\n ast_obj = self\n\n bel_relation = None\n if self.bel_relation and fmt == \"short\":\n bel_relation = self.spec[\"relations\"][\"to_short\"].get(\n self.bel_relation, self.bel_relation\n )\n elif self.bel_relation:\n bel_relation = self.spec[\"relations\"][\"to_long\"].get(\n self.bel_relation, self.bel_relation\n )\n\n if self.bel_subject and bel_relation and self.bel_object:\n if isinstance(self.bel_object, BELAst):\n return \"{} {} ({})\".format(\n self.bel_subject.to_string(fmt=fmt),\n bel_relation,\n self.bel_object.to_string(fmt=fmt),\n )\n else:\n return \"{} {} {}\".format(\n self.bel_subject.to_string(fmt=fmt),\n bel_relation,\n self.bel_object.to_string(fmt=fmt),\n )\n\n elif self.bel_subject:\n return \"{}\".format(self.bel_subject.to_string(fmt=fmt))\n\n else:\n return \"\""} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef to_triple(self, ast_obj=None, fmt=\"medium\"):\n\n if not ast_obj:\n ast_obj = self\n\n if self.bel_subject and self.bel_relation and self.bel_object:\n if self.bel_relation.startswith(\"has\"):\n bel_relation = self.bel_relation\n elif fmt == \"short\":\n bel_relation = 
self.spec[\"relations\"][\"to_short\"].get(\n self.bel_relation, None\n )\n else:\n bel_relation = self.spec[\"relations\"][\"to_long\"].get(\n self.bel_relation, None\n )\n\n bel_subject = self.bel_subject.to_string(fmt=fmt)\n\n if isinstance(self.bel_object, (BELAst)):\n bel_object = f\"({self.bel_object.to_string(fmt=fmt)})\"\n else:\n bel_object = self.bel_object.to_string(fmt=fmt)\n\n return {\n \"subject\": bel_subject,\n \"relation\": bel_relation,\n \"object\": bel_object,\n }\n\n elif self.bel_subject:\n return {\"subject\": self.bel_subject.to_string(fmt=fmt)}\n\n else:\n return None", "response": "Convert BEL object to BEL triple"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef print_tree(self, ast_obj=None):\n\n if not ast_obj:\n ast_obj = self\n\n if hasattr(self, \"bel_subject\"):\n print(\"Subject:\")\n self.bel_subject.print_tree(self.bel_subject, indent=0)\n\n if hasattr(self, \"bel_relation\"):\n print(\"Relation:\", self.bel_relation)\n\n if hasattr(self, \"bel_object\"):\n if self.bel_object.type == \"BELAst\":\n if hasattr(self, \"bel_subject\"):\n print(\"Nested Subject:\")\n self.bel_object.bel_subject.print_tree(indent=0)\n\n if hasattr(self, \"bel_relation\"):\n print(\"Nested Relation:\", self.bel_object.bel_relation)\n\n if hasattr(self, \"bel_object\"):\n print(\"Nested Object:\")\n self.bel_object.bel_object.print_tree(indent=0)\n else:\n print(\"Object:\")\n self.bel_object.print_tree(self.bel_object, indent=0)\n\n return self", "response": "Convert BEL AST object to tree view of BEL AST object"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nconvert the object to string version of BEL AST.", "response": "def to_string(\n self,\n fmt: str = \"medium\",\n canonicalize: bool = False,\n decanonicalize: bool = False,\n orthologize: str = None,\n ) -> str:\n \"\"\"Convert AST object to string\n\n Args:\n fmt (str): short, medium, long 
formatted BEL statements\n short = short function and short relation format\n medium = short function and long relation format\n long = long function and long relation format\n\n Returns:\n str: string version of BEL AST\n \"\"\"\n\n arg_string = \", \".join([a.to_string(fmt=fmt) for a in self.args])\n\n if fmt in [\"short\", \"medium\"]:\n function_name = self.name_short\n else:\n function_name = self.name\n\n return \"{}({})\".format(function_name, arg_string)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef subcomponents(self, subcomponents):\n\n for arg in self.args:\n if arg.__class__.__name__ == \"Function\":\n subcomponents.append(arg.to_string())\n if arg.function_type == \"primary\":\n arg.subcomponents(subcomponents)\n else:\n subcomponents.append(arg.to_string())\n\n return subcomponents", "response": "Generate subcomponents of the BEL subject or object in the Edgestore."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef change_nsvalue(self, namespace, value):\n\n self.namespace = namespace\n self.value = value", "response": "Change the namespace value of the current object."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nupdate Namespace and value.", "response": "def update_nsval(\n self, *, nsval: str = None, ns: str = None, val: str = None\n ) -> None:\n \"\"\"Update Namespace and value.\n\n Args:\n nsval: e.g. 
HGNC:AKT1\n ns: namespace\n val: value of entity\n \"\"\"\n\n if not (ns and val) and nsval:\n (ns, val) = nsval.split(\":\", 1)\n elif not (ns and val) and not nsval:\n log.error(\"Did not update NSArg - no ns:val or nsval provided\")\n\n self.namespace = ns\n self.value = val"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef convert_csv_str_to_list(csv_str: str) -> list:\n\n csv_str = re.sub(\"^\\s*{\", \"\", csv_str)\n csv_str = re.sub(\"}\\s*$\", \"\", csv_str)\n r = csv.reader([csv_str])\n row = list(r)[0]\n new = []\n for col in row:\n col = re.sub('^\\s*\"?\\s*', \"\", col)\n col = re.sub('\\s*\"?\\s*$', \"\", col)\n new.append(col)\n\n return new", "response": "Convert CSV str to list"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef process_citation(citation_str: str) -> dict:\n\n citation_obj = {}\n\n citation_list = convert_csv_str_to_list(citation_str)\n (citation_type, name, doc_id, pub_date, authors, comment, *extra) = (\n citation_list + [None] * 7\n )\n # print(f'citation_type: {citation_type}, name: {name}, doc_id: {doc_id}, pub_date: {pub_date}, authors: {authors}, comment: {comment}')\n\n authors_list = []\n if authors:\n authors_list = authors.split(\"|\")\n citation_obj[\"authors\"] = authors_list\n\n if name and re.match(\"http?://\", name):\n citation_obj[\"uri\"] = name\n\n elif citation_type and citation_type.upper() == \"PUBMED\":\n citation_obj[\"database\"] = {\"name\": \"PubMed\", \"id\": doc_id}\n if name:\n citation_obj[\"reference\"] = name\n elif name:\n citation_obj[\"reference\"] = name\n\n else:\n citation_obj[\"reference\"] = \"No reference found\"\n\n if pub_date:\n citation_obj[\"date_published\"] = pub_date\n\n if comment:\n citation_obj[\"comment\"] = comment\n\n return citation_obj", "response": "Parse BEL Script Citation string into nanopub_bel citation object"} {"SOURCE": "codesearchnet", 
"instruction": "Here you have a function in Python 3, explain what it does\ndef split_bel_stmt(stmt: str, line_num) -> tuple:\n\n m = re.match(f\"^(.*?\\))\\s+([a-zA-Z=\\->\\|:]+)\\s+([\\w(]+.*?)$\", stmt, flags=0)\n if m:\n return (m.group(1), m.group(2), m.group(3))\n else:\n log.info(\n f\"Could not parse bel statement into components at line number: {line_num} assertion: {stmt}\"\n )\n return (stmt, None, None)", "response": "Split bel statement into subject relation object tuple"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef process_documentline(line, nanopubs_metadata):\n\n matches = re.match('SET DOCUMENT\\s+(\\w+)\\s+=\\s+\"?(.*?)\"?$', line)\n key = matches.group(1)\n val = matches.group(2)\n nanopubs_metadata[key] = val\n\n return nanopubs_metadata", "response": "Process SET DOCUMENT line in BEL script"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef process_definition(line, nanopubs_metadata):\n\n matches = re.match('DEFINE\\s+(\\w+)\\s+(\\w+)\\s+AS\\s+URL\\s+\"(.*?)\"\\s*$', line)\n if matches:\n def_type = matches.group(1).lower()\n if def_type == \"namespace\":\n def_type = \"namespaces\"\n elif def_type == \"annotation\":\n def_type = \"annotations\"\n\n key = matches.group(2)\n val = matches.group(3)\n\n if def_type in nanopubs_metadata:\n nanopubs_metadata[def_type][key] = val\n else:\n nanopubs_metadata[def_type] = {key: val}\n\n matches = re.match(\"DEFINE\\s+(\\w+)\\s+(\\w+)\\s+AS\\s+LIST\\s+{(.*?)}\\s*$\", line)\n if matches:\n def_type = matches.group(1).lower()\n if def_type == \"namespace\":\n def_type = \"namespaces\"\n elif def_type == \"annotation\":\n def_type = \"annotations\"\n\n key = matches.group(2)\n val = matches.group(3)\n vals = convert_csv_str_to_list(val)\n\n if def_type in nanopubs_metadata:\n nanopubs_metadata[def_type][key] = vals\n else:\n nanopubs_metadata[def_type] = {key: vals}\n\n return nanopubs_metadata", 
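The `split_bel_stmt` record above splits a BEL assertion at the first closing paren that is followed by a relation-like token. A standalone sketch of the same regex idea (illustrative names; the logging from the original is omitted):

```python
import re

STMT_RE = re.compile(r"^(.*?\))\s+([a-zA-Z=\->\|:]+)\s+([\w(]+.*?)$")

def split_statement(stmt):
    """Split a BEL statement into (subject, relation, object) at the first
    closing paren that is followed by a relation-like token."""
    m = STMT_RE.match(stmt)
    if m:
        return m.group(1), m.group(2), m.group(3)
    return stmt, None, None  # unparseable: return the whole statement as subject

print(split_statement("p(HGNC:AKT1) increases p(HGNC:EGF)"))
# -> ('p(HGNC:AKT1)', 'increases', 'p(HGNC:EGF)')
```

The lazy `(.*?\))` is what keeps the subject from swallowing the object when both are function terms with parentheses.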
"response": "Process DEFINE line in BEL script"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nprocesses UNSET lines in BEL Script", "response": "def process_unset(line, annotations):\n \"\"\"Process UNSET lines in BEL Script\"\"\"\n\n matches = re.match('UNSET\\s+\"?(.*?)\"?\\s*$', line)\n if matches:\n val = matches.group(1)\n if val == \"ALL\" or val == \"STATEMENT_GROUP\":\n annotations = {}\n elif re.match(\"{\", val):\n vals = convert_csv_str_to_list(val)\n for val in vals:\n annotations.pop(val, None)\n else:\n annotations.pop(val, None)\n\n else:\n log.warn(f\"Problem with UNSET line: {line}\")\n\n return annotations"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nconverting set line into nanopub_bel annotations format", "response": "def process_set(line, annotations):\n \"\"\"Convert annotations into nanopub_bel annotations format\"\"\"\n\n matches = re.match('SET\\s+(\\w+)\\s*=\\s*\"?(.*?)\"?\\s*$', line)\n\n key = None\n if matches:\n key = matches.group(1)\n val = matches.group(2)\n\n if key == \"STATEMENT_GROUP\":\n annotations[\"statement_group\"] = val\n elif key == \"Citation\":\n annotations[\"citation\"] = process_citation(val)\n elif key.lower() == \"support\" or key.lower() == \"evidence\":\n annotations[\"evidence\"] = val\n elif re.match(\"\\s*{.*?}\", val):\n vals = convert_csv_str_to_list(val)\n annotations[key] = vals\n else:\n annotations[key] = val\n\n return annotations"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nconverts any multi - line SET statements into single line SET statements", "response": "def preprocess_belscript(lines):\n \"\"\" Convert any multi-line SET statements into single line SET statements\"\"\"\n\n set_flag = False\n for line in lines:\n if set_flag is False and re.match(\"SET\", line):\n set_flag = True\n set_line = [line.rstrip()]\n # SET following SET\n elif set_flag and re.match(\"SET\", 
line):\n yield f\"{' '.join(set_line)}\\n\"\n set_line = [line.rstrip()]\n # Blank line following SET yields single line SET\n elif set_flag and re.match(\"\\s+$\", line):\n yield f\"{' '.join(set_line)}\\n\"\n yield line\n set_flag = False\n\n # Append second, third, ... lines to SET\n elif set_flag:\n set_line.append(line.rstrip())\n else:\n yield line"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef parse_belscript(lines):\n\n nanopubs_metadata = {}\n annotations = {}\n assertions = []\n\n # # Turn a list into an iterator\n # if not isinstance(lines, collections.Iterator):\n # lines = iter(lines)\n\n line_num = 0\n\n # for line in preprocess_belscript(lines):\n for line in set_single_line(lines):\n\n line_num += 1\n # Get rid of trailing comments\n line = re.sub(\"\\/\\/.*?$\", \"\", line)\n\n line = line.rstrip()\n\n # Collapse continuation lines\n while re.search(\"\\\\\\s*$\", line):\n line = line.replace(\"\\\\\", \"\") + next(lines)\n\n # Process lines #################################\n if re.match(\"\\s*#\", line) or re.match(\"\\s*$\", line):\n # Skip comments and empty lines\n continue\n elif re.match(\"SET DOCUMENT\", line):\n nanopubs_metadata = process_documentline(line, nanopubs_metadata)\n elif re.match(\"DEFINE\", line):\n nanopubs_metadata = process_definition(line, nanopubs_metadata)\n elif re.match(\"UNSET\", line):\n\n # Process any assertions prior to changing annotations\n if assertions:\n yield yield_nanopub(assertions, annotations, line_num)\n assertions = []\n annotations = process_unset(line, annotations)\n\n elif re.match(\"SET\", line):\n # Create nanopubs metadata prior to starting BEL Script statements section\n if nanopubs_metadata:\n yield yield_metadata(nanopubs_metadata)\n nanopubs_metadata = {}\n\n # Process any assertions prior to changing annotations\n if assertions:\n yield yield_nanopub(assertions, annotations, line_num)\n assertions = []\n\n annotations = process_set(line, 
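The multi-line `SET` handling in `preprocess_belscript` above can be reduced to a small buffering generator. This sketch (names are illustrative) follows the same pattern and additionally flushes a trailing `SET` at end of input, an edge case the original drops:

```python
import re

def collapse_set_lines(lines):
    """Join continuation lines of a multi-line SET statement into one line."""
    buf = None
    for line in lines:
        if re.match("SET", line):
            if buf is not None:              # SET immediately following a SET
                yield " ".join(buf) + "\n"
            buf = [line.rstrip()]
        elif buf is not None and line.strip() == "":
            yield " ".join(buf) + "\n"       # blank line closes the open SET
            yield line
            buf = None
        elif buf is not None:
            buf.append(line.strip())         # continuation line of the SET
        else:
            yield line
    if buf is not None:                      # flush a trailing SET at end of input
        yield " ".join(buf) + "\n"

lines = ['SET Citation = {"PubMed",\n', '"12345"}\n', '\n', 'p(HGNC:AKT1)\n']
print(list(collapse_set_lines(lines)))
# ['SET Citation = {"PubMed", "12345"}\n', '\n', 'p(HGNC:AKT1)\n']
```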
annotations)\n\n else:\n assertions.append(line)\n\n # Catch any leftover bel statements\n yield yield_nanopub(assertions, annotations, line_num)", "response": "Yields Nanopubs from the BEL Script statements section"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef __stringify_body(self, request_or_response):\r\n ''' this method reference from httprunner '''\r\n headers = self.__track_info['{}_headers'.format(request_or_response)]\r\n body = self.__track_info.get('{}_body'.format(request_or_response))\r\n \r\n if isinstance(body, CaseInsensitiveDict):\r\n body = json.dumps(dict(body), ensure_ascii=False)\r\n \r\n elif isinstance(body, (dict, list)):\r\n body = json.dumps(body, indent=2, ensure_ascii=False)\r\n \r\n elif isinstance(body, bytes):\r\n resp_content_type = headers.get(\"Content-Type\", \"\")\r\n try:\r\n if \"image\" in resp_content_type:\r\n self.__track_info[\"response_data_type\"] = \"image\"\r\n body = \"data:{};base64,{}\".format(\r\n resp_content_type,\r\n b64encode(body).decode('utf-8')\r\n )\r\n else:\r\n body = escape(body.decode(\"utf-8\"))\r\n except UnicodeDecodeError:\r\n pass\r\n \r\n elif not isinstance(body, (basestring, numeric_types, Iterable)):\r\n # class instance, e.g. 
MultipartEncoder()\r\n body = repr(body)\r\n \r\n self.__track_info['{}_body'.format(request_or_response)] = body", "response": "this method reference from httprunner"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nsets dynamic value from the string data of response", "response": "def DyStrData(cls,name, regx, index = 0):\r\n ''' set dynamic value from the string data of response \r\n @param name: glob parameter name\r\n @param regx: re._pattern_type\r\n e.g.\r\n DyStrData(\"a\",re.compile('123'))\r\n '''\r\n text = Markup(cls.__trackinfo[\"response_body\"]).unescape()\r\n \r\n if not text:\r\n return\r\n if not isinstance(regx, re._pattern_type):\r\n raise Exception(\"DyStrData need the arg which have compiled the regular expression.\")\r\n \r\n values = regx.findall(text)\r\n result = \"\"\r\n if len(values)>index:\r\n result = values[index] \r\n cls.glob.update({name:result})"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nset dynamic value from the json data of response", "response": "def DyJsonData(cls,name, sequence):\r\n ''' set dynamic value from the json data of response \r\n @param name: glob parameter name\r\n @param sequence: sequence for the json\r\n e.g.\r\n result={\"a\":1,\r\n \"b\":[1,2,3,4],\r\n \"c\":{\"d\":5,\"e\":6},\r\n \"f\":{\"g\":[7,8,9]},\r\n \"h\":[{\"i\":10,\"j\":11},{\"k\":12}]\r\n }\r\n \r\n sequence1 =\"a\" # -> 1\r\n sequence2 =\"b.3\" # -> 4\r\n sequence3 = \"f.g.2\" # -> 9\r\n sequence4 = \"h.0.j\" # -> 11\r\n '''\r\n text = Markup(cls.__trackinfo[\"response_body\"]).unescape()\r\n if not text:\r\n return\r\n \r\n resp = json.loads(text)\r\n# resp = cls.__test.copy() \r\n sequence = [_parse_string_value(i) for i in sequence.split('.')] \r\n for i in sequence:\r\n try:\r\n if isinstance(i, int):\r\n resp = resp[i] \r\n else:\r\n resp = resp.get(i)\r\n except: \r\n cls.glob.update({name:None})\r\n return \r\n cls.glob.update({name:resp})"} {"SOURCE": 
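The dotted-sequence lookup in `DyJsonData` above (e.g. `"h.0.j"` -> `11`) is a generic nested-JSON walk. A self-contained sketch of the idea (hypothetical helper, not the library's API):

```python
def json_path(data, sequence):
    """Walk nested dicts/lists using a dotted path like 'h.0.j'.

    Numeric path segments index into lists; everything else is a dict key.
    Returns None if any step of the walk fails.
    """
    for part in sequence.split("."):
        key = int(part) if part.lstrip("-").isdigit() else part
        try:
            data = data[key] if isinstance(key, int) else data.get(key)
        except (AttributeError, IndexError, KeyError, TypeError):
            return None
    return data

result = {"a": 1, "b": [1, 2, 3, 4], "f": {"g": [7, 8, 9]}, "h": [{"i": 10, "j": 11}]}
print(json_path(result, "h.0.j"))  # 11
```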
"codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreads file and generate nanopubs in nanopub_bel JSON Schema format", "response": "def read_nanopubs(fn: str) -> Iterable[Mapping[str, Any]]:\n \"\"\"Read file and generate nanopubs\n\n If filename has *.gz, will read as a gzip file\n If filename has *.jsonl*, will parsed as a JSONLines file\n IF filename has *.json*, will be parsed as a JSON file\n If filename has *.yaml* or *.yml*, will be parsed as a YAML file\n\n Args:\n filename (str): filename to read nanopubs from\n\n Returns:\n Generator[Mapping[str, Any]]: generator of nanopubs in nanopub_bel JSON Schema format\n \"\"\"\n\n jsonl_flag, json_flag, yaml_flag = False, False, False\n if fn == \"-\" or \"jsonl\" in fn:\n jsonl_flag = True\n elif \"json\" in fn:\n json_flag = True\n elif re.search(\"ya?ml\", fn):\n yaml_flag = True\n else:\n log.error(\"Do not recognize nanopub file format - neither json nor jsonl format.\")\n return {}\n\n try:\n if re.search(\"gz$\", fn):\n f = gzip.open(fn, \"rt\")\n else:\n try:\n f = click.open_file(fn, mode=\"rt\")\n except Exception as e:\n log.info(f\"Can not open file {fn} Error: {e}\")\n quit()\n\n if jsonl_flag:\n for line in f:\n yield json.loads(line)\n elif json_flag:\n nanopubs = json.load(f)\n for nanopub in nanopubs:\n yield nanopub\n elif yaml_flag:\n nanopubs = yaml.load(f, Loader=yaml.SafeLoader)\n for nanopub in nanopubs:\n yield nanopub\n\n except Exception as e:\n log.error(f\"Could not open file: {fn}\")"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncreates Nanopubs output filehandle", "response": "def create_nanopubs_fh(output_fn: str):\n \"\"\"Create Nanopubs output filehandle\n\n \\b\n If output fn is '-' will write JSONlines to STDOUT\n If output fn has *.gz, will written as a gzip file\n If output fn has *.jsonl*, will written as a JSONLines file\n IF output fn has *.json*, will be written as a JSON file\n If output fn has 
*.yaml* or *.yml*, will be written as a YAML file\n\n Args:\n output_fn: Name of output file\n\n Returns:\n (filehandle, yaml_flag, jsonl_flag, json_flag)\n \"\"\"\n\n # output file\n # set output flags\n json_flag, jsonl_flag, yaml_flag = False, False, False\n if output_fn:\n if re.search(\"gz$\", output_fn):\n out_fh = gzip.open(output_fn, \"wt\")\n else:\n out_fh = click.open_file(output_fn, mode=\"wt\")\n\n if re.search(\"ya?ml\", output_fn):\n yaml_flag = True\n elif \"jsonl\" in output_fn or \"-\" == output_fn:\n jsonl_flag = True\n elif \"json\" in output_fn:\n json_flag = True\n\n else:\n out_fh = sys.stdout\n\n return (out_fh, yaml_flag, jsonl_flag, json_flag)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef write_edges(\n edges: Mapping[str, Any],\n filename: str,\n jsonlines: bool = False,\n gzipflag: bool = False,\n yaml: bool = False,\n):\n \"\"\"Write edges to file\n\n Args:\n edges (Mapping[str, Any]): in edges JSON Schema format\n filename (str): filename to write\n jsonlines (bool): output in JSONLines format?\n gzipflag (bool): create gzipped file?\n yaml (bool): create yaml file?\n \"\"\"\n pass", "response": "Write edges to file\n "} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nadd index alias to index_name", "response": "def add_index_alias(es, index_name, alias_name):\n \"\"\"Add index alias to index_name\"\"\"\n\n es.indices.put_alias(index=index_name, name=terms_alias)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ndeleting the terms index", "response": "def delete_index(es, index_name: str):\n \"\"\"Delete the terms index\"\"\"\n\n if not index_name:\n log.warn(\"No index name given to delete\")\n return None\n\n result = es.indices.delete(index=index_name)\n return result"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef delete_terms_indexes(es, 
index_name: str = \"terms_*\"):\n\n try:\n es.indices.delete(index=index_name)\n except Exception as e:\n log.error(f\"Could not delete all terms indices: {e}\")", "response": "Delete all terms indexes"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef bulk_load_docs(es, docs):\n\n chunk_size = 200\n\n try:\n results = elasticsearch.helpers.bulk(es, docs, chunk_size=chunk_size)\n log.debug(f\"Elasticsearch documents loaded: {results[0]}\")\n\n # elasticsearch.helpers.parallel_bulk(es, terms, chunk_size=chunk_size, thread_count=4)\n if len(results[1]) > 0:\n log.error(\"Bulk load errors {}\".format(results))\n except elasticsearch.ElasticsearchException as e:\n log.error(\"Indexing error: {}\\n\".format(e))", "response": "Bulk load a set of documents into the cache"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef validate(bo, error_level: str = \"WARNING\") -> Tuple[bool, List[Tuple[str, str]]]:\n\n if bo.ast:\n bo = validate_functions(bo.ast, bo) # No WARNINGs generated in this function\n if error_level == \"WARNING\":\n bo = validate_arg_values(bo.ast, bo) # validates NSArg and StrArg values\n\n else:\n bo.validation_messages.append((\"ERROR\", \"Invalid BEL Statement - cannot parse\"))\n\n for msg in bo.validation_messages:\n if msg[0] == \"ERROR\":\n bo.parse_valid = False\n break\n\n return bo", "response": "Semantically validate BEL AST"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef validate_functions(ast: BELAst, bo):\n\n if isinstance(ast, Function):\n log.debug(f\"Validating: {ast.name}, {ast.function_type}, {ast.args}\")\n function_signatures = bo.spec[\"functions\"][\"signatures\"][ast.name][\"signatures\"]\n\n function_name = ast.name\n (valid_function, messages) = check_function_args(\n ast.args, function_signatures, function_name\n )\n if not 
valid_function:\n message = \", \".join(messages)\n bo.validation_messages.append(\n (\n \"ERROR\",\n \"Invalid BEL Statement function {} - problem with function signatures: {}\".format(\n ast.to_string(), message\n ),\n )\n )\n bo.parse_valid = False\n\n # Recursively process every NSArg by processing BELAst and Functions\n if hasattr(ast, \"args\"):\n for arg in ast.args:\n validate_functions(arg, bo)\n\n return bo", "response": "Recursively validate function signatures, appending any signature errors to the BEL object's validation messages."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncheck if function args don't match function signature", "response": "def check_function_args(args, signatures, function_name):\n \"\"\"Check function args - return message if function args don't match function signature\n\n Called from validate_functions\n\n We have following types of arguments to validate:\n 1. Required, position_dependent arguments, e.g. p(HGNC:AKT1), NSArg HGNC:AKT1 is required and must be first argument\n 2. Optional, position_dependent arguments, e.g. pmod(P, T, 308) - T and 308 are optional and position_dependent\n 3. Optional, e.g. loc() modifier can only be found once, but anywhere after the position_dependent arguments\n 4. Multiple, e.g. 
var(), can have more than one var() modifier in p() function\n\n Args:\n args (Union['Function', 'NSArg', 'StrArg']): AST Function arguments\n signatures (Mapping[str, Any]): function signatures from spec_dict, may be more than one per function\n function_name (str): passed in to improve error messaging\n\n Returns:\n Tuple[bool, List[str]]: (function_valid?, list of error messages per signature)\n \"\"\"\n\n messages = []\n\n arg_types = []\n for arg in args:\n arg_type = arg.__class__.__name__\n if arg_type == \"Function\":\n arg_types.append((arg.name, \"\"))\n elif arg_type == \"NSArg\":\n arg_types.append((arg_type, f\"{arg.namespace}:{arg.value}\"))\n elif arg_type == \"StrArg\":\n arg_types.append((arg_type, arg.value))\n log.debug(f\"Arg_types {arg_types}\")\n\n matched_signature_idx = -1\n valid_function = False\n for sig_argset_idx, sig_argset in enumerate(signatures):\n sig_req_args = sig_argset[\"req_args\"] # required position_dependent arguments\n sig_pos_args = sig_argset[\"pos_args\"] # optional position_dependent arguments\n sig_opt_args = sig_argset[\"opt_args\"] # optional arguments\n sig_mult_args = sig_argset[\"mult_args\"] # multiple arguments\n\n log.debug(f\"{sig_argset_idx} Req: {sig_req_args}\")\n log.debug(f\"{sig_argset_idx} Pos: {sig_pos_args}\")\n log.debug(f\"{sig_argset_idx} Opt: {sig_opt_args}\")\n log.debug(f\"{sig_argset_idx} Mult: {sig_mult_args}\")\n\n # Check required arguments\n reqs_mismatch_flag = False\n for sig_idx, sig_req in enumerate(sig_req_args):\n if len(arg_types) > sig_idx:\n log.debug(\n \"Req args: arg_type {} vs sig_req {}\".format(\n arg_types[sig_idx][0], sig_req\n )\n )\n if arg_types[sig_idx][0] not in sig_req:\n reqs_mismatch_flag = True\n msg = f\"Missing required arguments for {function_name} signature: {sig_argset_idx}\"\n messages.append(msg)\n log.debug(msg)\n break\n\n if reqs_mismatch_flag:\n continue # test next argset\n\n # Check position_dependent optional arguments\n pos_dep_arg_types = 
arg_types[len(sig_req_args) :]\n log.debug(f\"Optional arg types {pos_dep_arg_types}\")\n log.debug(f\"{sig_argset_idx} Pos: {sig_pos_args}\")\n pos_mismatch_flag = False\n for sig_pos_idx, sig_pos in enumerate(sig_pos_args):\n if sig_pos_idx == len(pos_dep_arg_types):\n break # stop checking position dependent arguments when we run out of them vs signature optional position dependent arguments\n if pos_dep_arg_types[sig_pos_idx][0] not in sig_pos:\n pos_mismatch_flag = True\n msg = f\"Missing position_dependent arguments for {function_name} signature: {sig_argset_idx}\"\n messages.append(msg)\n log.debug(msg)\n break\n if pos_mismatch_flag:\n continue # test next argset\n\n reqpos_arglen = len(sig_req_args) + len(sig_pos_args)\n optional_arg_types = arg_types[reqpos_arglen:]\n\n # Remove function args that are found in the mult_args signature\n optional_types = [\n (opt_type, opt_val)\n for opt_type, opt_val in optional_arg_types\n if opt_type not in sig_mult_args\n ]\n log.debug(f\"Optional types after sig mult args removed {optional_types}\")\n\n # Check if any remaining function args are duplicated and therefore not unique opt_args\n if len(optional_types) != len(set(optional_types)):\n msg = f\"Duplicate optional arguments {optional_types} for {function_name} signature: {sig_argset_idx}\"\n messages.append(msg)\n log.debug(msg)\n continue\n\n optional_types = [\n (opt_type, opt_val)\n for opt_type, opt_val in optional_types\n if opt_type not in sig_opt_args\n ]\n if len(optional_types) > 0:\n msg = f\"Invalid arguments {optional_types} for {function_name} signature: {sig_argset_idx}\"\n messages.append(msg)\n log.debug(msg)\n continue\n\n matched_signature_idx = sig_argset_idx\n messages = [] # reset messages if signature is matched\n valid_function = True\n break\n\n # Add NSArg and StrArg value types (e.g. 
Protein, Complex, ec)\n if matched_signature_idx > -1:\n # Shouldn't have single optional NSArg arguments - not currently checking for that\n log.debug(\n f'AST1, Sigs {signatures[matched_signature_idx][\"arguments\"]} Args: {args}'\n )\n for arg_idx, arg in enumerate(args):\n log.debug(f\"Arg type {arg.type}\")\n for sig_idx, sig_arg in enumerate(\n signatures[matched_signature_idx][\"arguments\"]\n ):\n if arg.type == \"Function\" or sig_arg[\"type\"] in [\n \"Function\",\n \"Modifier\",\n ]:\n pass # Skip Function arguments\n elif sig_arg.get(\"position\", None):\n if sig_arg[\"position\"] == arg_idx + 1:\n arg.add_value_types(sig_arg[\"values\"])\n log.debug(f'AST2 {arg} {sig_arg[\"values\"]}')\n elif arg.type in [\"NSArg\", \"StrArg\", \"StrArgNSArg\"]:\n log.debug(f\"Arg {arg}\")\n arg.add_value_types(sig_arg[\"values\"])\n log.debug(f'AST2 {arg} {sig_arg[\"values\"]}')\n\n for arg in args:\n if arg.__class__.__name__ in [\"NSArg\", \"StrArg\"]:\n log.debug(f\"Arg: {arg.to_string()} Value_types: {arg.value_types}\")\n\n return (valid_function, messages)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef validate_arg_values(ast, bo):\n\n if not bo.api_url:\n log.info(\"No API endpoint defined\")\n return bo\n\n log.debug(f\"AST: {ast}\")\n\n # Test NSArg terms\n if isinstance(ast, NSArg):\n term_id = \"{}:{}\".format(ast.namespace, ast.value)\n value_types = ast.value_types\n log.debug(f\"Value types: {value_types} AST value: {ast.value}\")\n # Default namespaces are defined in the bel_specification file\n if ast.namespace == \"DEFAULT\": # may use the DEFAULT namespace or not\n for value_type in value_types:\n default_namespace = [\n ns[\"name\"] for ns in bo.spec[\"namespaces\"][value_type][\"info\"]\n ] + [\n ns[\"abbreviation\"]\n for ns in bo.spec[\"namespaces\"][value_type][\"info\"]\n ]\n\n if ast.value in default_namespace:\n log.debug(\"Default namespace valid term: {}\".format(term_id))\n break\n else: # 
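The signature matching walked through in `check_function_args` above reduces, at its core, to checking required slots first and then optional position-dependent slots. A toy reduction (the signature dicts here are made up for illustration and are much simpler than the spec's `req_args`/`pos_args`/`opt_args`/`mult_args` format):

```python
def match_signature(arg_types, signature):
    """Return True if a list of argument type tags satisfies a signature:
    required slots are checked first, then optional position-dependent slots."""
    req, pos = signature["req_args"], signature["pos_args"]
    if len(arg_types) < len(req):
        return False
    # Every required slot must accept the argument type in that position
    for i, allowed in enumerate(req):
        if arg_types[i] not in allowed:
            return False
    # Remaining args are checked, in order, against optional positional slots
    rest = arg_types[len(req):]
    if len(rest) > len(pos):
        return False
    return all(rest[i] in pos[i] for i in range(len(rest)))

sig = {"req_args": [["NSArg"]], "pos_args": [["pmod", "var"], ["loc"]]}
print(match_signature(["NSArg", "pmod", "loc"], sig))  # True
print(match_signature(["StrArg"], sig))                # False
```

The real function additionally strips `mult_args` modifiers and rejects duplicated single-use optional arguments before declaring a signature matched.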
if for loop doesn't hit the break, run this else\n log.debug(\"Default namespace invalid term: {}\".format(term_id))\n bo.validation_messages.append(\n (\"WARNING\", f\"Default Term: {term_id} not found\")\n )\n\n # Process normal, non-default-namespace terms\n else:\n request_url = bo.api_url + \"/terms/{}\".format(\n url_path_param_quoting(term_id)\n )\n log.info(f\"Validate Arg Values url {request_url}\")\n r = get_url(request_url)\n if r and r.status_code == 200:\n result = r.json()\n # function signature term value_types doesn't match up with API term entity_types\n\n log.debug(\n f'AST.value_types {ast.value_types} Entity types {result.get(\"entity_types\", [])}'\n )\n\n # Check that entity types match\n if (\n len(\n set(ast.value_types).intersection(\n result.get(\"entity_types\", [])\n )\n )\n == 0\n ):\n log.debug(\n \"Invalid Term - statement term {} allowable entity types: {} do not match API term entity types: {}\".format(\n term_id, ast.value_types, result.get(\"entity_types\", [])\n )\n )\n bo.validation_messages.append(\n (\n \"WARNING\",\n \"Invalid Term - statement term {} allowable entity types: {} do not match API term entity types: {}\".format(\n term_id, ast.value_types, result.get(\"entity_types\", [])\n ),\n )\n )\n\n if term_id in result.get(\"obsolete_ids\", []):\n bo.validation_messages.append(\n (\n \"WARNING\",\n f'Obsolete term: {term_id} Current term: {result[\"id\"]}',\n )\n )\n\n elif r.status_code == 404:\n bo.validation_messages.append(\n (\"WARNING\", f\"Term: {term_id} not found in namespace\")\n )\n else:\n log.error(f\"Status {r.status_code} - Bad URL: {request_url}\")\n\n # Process StrArgs\n if isinstance(ast, StrArg):\n log.debug(f\" Check String Arg: {ast.value} {ast.value_types}\")\n for value_type in ast.value_types:\n # Is this a regex to match against\n if re.match(\"/\", value_type):\n value_type = re.sub(\"^/\", \"\", value_type)\n value_type = re.sub(\"/$\", \"\", value_type)\n match = re.match(value_type, 
ast.value)\n if match:\n break\n if value_type in bo.spec[\"namespaces\"]:\n default_namespace = [\n ns[\"name\"] for ns in bo.spec[\"namespaces\"][value_type][\"info\"]\n ] + [\n ns[\"abbreviation\"]\n for ns in bo.spec[\"namespaces\"][value_type][\"info\"]\n ]\n if ast.value in default_namespace:\n break\n else: # If for loop doesn't hit the break, no matches found, therefore for StrArg value is bad\n bo.validation_messages.append(\n (\n \"WARNING\",\n f\"String value {ast.value} does not match default namespace value or regex pattern: {ast.value_types}\",\n )\n )\n\n # Recursively process every NSArg by processing BELAst and Functions\n if hasattr(ast, \"args\"):\n for arg in ast.args:\n validate_arg_values(arg, bo)\n\n return bo", "response": "Recursively validate NSArg and StrArg values"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngets BELBio configuration from files", "response": "def get_belbio_conf_files():\n \"\"\"Get belbio configuration from files\n \"\"\"\n\n home = os.path.expanduser(\"~\")\n cwd = os.getcwd()\n\n belbio_conf_fp, belbio_secrets_fp = \"\", \"\"\n\n env_conf_dir = os.getenv(\"BELBIO_CONF\", \"\").rstrip(\"/\")\n\n conf_paths = [\n f\"{cwd}/belbio_conf.yaml\",\n f\"{cwd}/belbio_conf.yml\",\n f\"{env_conf_dir}/belbio_conf.yaml\",\n f\"{env_conf_dir}/belbio_conf.yml\",\n f\"{home}/.belbio/conf\",\n ]\n secret_paths = [\n f\"{cwd}/belbio_secrets.yaml\",\n f\"{cwd}/belbio_secrets.yml\",\n f\"{env_conf_dir}/belbio_secrets.yaml\",\n f\"{env_conf_dir}/belbio_secrets.yml\",\n f\"{home}/.belbio/secrets\",\n ]\n\n for fn in conf_paths:\n if os.path.exists(fn):\n belbio_conf_fp = fn\n break\n else:\n log.error(\n \"No BELBio configuration file found - please add one (see http://bel.readthedocs.io/en/latest/configuration.html)\"\n )\n\n for fn in secret_paths:\n if os.path.exists(fn):\n belbio_secrets_fp = fn\n break\n\n return (belbio_conf_fp, belbio_secrets_fp)"} {"SOURCE": "codesearchnet", "instruction": 
"Implement a function in Python 3 to\nload the configuration from the belbio_conf_fp belbio_secrets_fp", "response": "def load_configuration():\n \"\"\"Load the configuration\"\"\"\n\n (belbio_conf_fp, belbio_secrets_fp) = get_belbio_conf_files()\n log.info(f\"Using conf: {belbio_conf_fp} and secrets files: {belbio_secrets_fp} \")\n\n config = {}\n if belbio_conf_fp:\n with open(belbio_conf_fp, \"r\") as f:\n config = yaml.load(f, Loader=yaml.SafeLoader)\n config[\"source_files\"] = {}\n config[\"source_files\"][\"conf\"] = belbio_conf_fp\n\n if belbio_secrets_fp:\n with open(belbio_secrets_fp, \"r\") as f:\n secrets = yaml.load(f, Loader=yaml.SafeLoader)\n config[\"secrets\"] = copy.deepcopy(secrets)\n if \"source_files\" in config:\n config[\"source_files\"][\"secrets\"] = belbio_secrets_fp\n\n get_versions(config)\n\n # TODO - needs to be completed\n # add_environment_vars(config)\n\n return config"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_versions(config) -> dict:\n\n # Collect bel package version\n try:\n import bel.__version__\n\n config[\"bel\"][\"version\"] = bel.__version__.__version__\n except KeyError:\n config[\"bel\"] = {\"version\": bel.__version__.__version__}\n except ModuleNotFoundError:\n pass\n\n # Collect bel_resources version\n try:\n import tools.__version__\n\n config[\"bel_resources\"][\"version\"] = tools.__version__.__version__\n except KeyError:\n config[\"bel_resources\"] = {\"version\": tools.__version__.__version__}\n except ModuleNotFoundError:\n pass\n\n # Collect bel_api version\n try:\n import __version__\n\n if __version__.__name__ == \"BELBIO API\":\n config[\"bel_api\"][\"version\"] = __version__.__version__\n except KeyError:\n if __version__.__name__ == \"BELBIO API\":\n config[\"bel_api\"] = {\"version\": __version__.__version__}\n except ModuleNotFoundError:\n pass", "response": "Get versions of bel modules and tools"} {"SOURCE": "codesearchnet", "instruction": "Can 
you create a Python 3 function that\noverrides config with environment variables", "response": "def add_environment_vars(config: MutableMapping[str, Any]):\n \"\"\"Override config with environment variables\n\n Environment variables have to be prefixed with BELBIO_\n which will be stripped before splitting on '__' and lower-casing\n the environment variable name that is left into keys for the\n config dictionary.\n\n Example:\n BELBIO_BEL_API__SERVERS__API_URL=http://api.bel.bio\n 1. BELBIO_BEL_API__SERVERS__API_URL ==> BEL_API__SERVERS__API_URL\n 2. BEL_API__SERVERS__API_URL ==> bel_api__servers__api_url\n 3. bel_api__servers__api_url ==> [bel_api, servers, api_url]\n 4. [bel_api, servers, api_url] ==> config['bel_api']['servers']['api_url'] = http://api.bel.bio\n\n \"\"\"\n\n # TODO need to redo config - can't add value to dictionary without recursively building up the dict\n # check into config libraries again\n\n for e in os.environ:\n if re.match(\"BELBIO_\", e):\n val = os.environ.get(e)\n if val:\n e.replace(\"BELBIO_\", \"\")\n env_keys = e.lower().split(\"__\")\n if len(env_keys) > 1:\n joined = '\"][\"'.join(env_keys)\n eval_config = f'config[\"{joined}\"] = val'\n try:\n eval(eval_config)\n except Exception as exc:\n log.warn(\"Cannot process {e} into config\")\n else:\n config[env_keys[0]] = val"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef merge_config(\n config: Mapping[str, Any],\n override_config: Mapping[str, Any] = None,\n override_config_fn: str = None,\n) -> Mapping[str, Any]:\n \"\"\"Override config with additional configuration in override_config or override_config_fn\n\n Used in script to merge CLI options with Config\n\n Args:\n config: original configuration\n override_config: new configuration to override/extend current config\n override_config_fn: new configuration filename as YAML file\n \"\"\"\n\n if override_config_fn:\n with open(override_config_fn, \"r\") as 
f:\n override_config = yaml.load(f, Loader=yaml.SafeLoader)\n\n if not override_config:\n log.info(\"Missing override_config\")\n\n return functools.reduce(rec_merge, (config, override_config))", "response": "Override config with additional configuration in override_config or override_config_fn\n\nUsed in script to merge CLI options with Config"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef load_terms(fo: IO, metadata: dict, forceupdate: bool):\n\n version = metadata[\"metadata\"][\"version\"]\n\n # LOAD TERMS INTO Elasticsearch\n with timy.Timer(\"Load Terms\") as timer:\n es = bel.db.elasticsearch.get_client()\n\n es_version = version.replace(\"T\", \"\").replace(\"-\", \"\").replace(\":\", \"\")\n index_prefix = f\"terms_{metadata['metadata']['namespace'].lower()}\"\n index_name = f\"{index_prefix}_{es_version}\"\n\n # Create index with mapping\n if not elasticsearch.index_exists(es, index_name):\n elasticsearch.create_terms_index(es, index_name)\n elif forceupdate: # force an update to the index\n index_name += \"_alt\"\n elasticsearch.create_terms_index(es, index_name)\n else:\n return # Skip loading if not forced and not a new namespace\n\n terms_iterator = terms_iterator_for_elasticsearch(fo, index_name)\n elasticsearch.bulk_load_docs(es, terms_iterator)\n\n # Remove old namespace index\n index_names = elasticsearch.get_all_index_names(es)\n for name in index_names:\n if name != index_name and index_prefix in name:\n elasticsearch.delete_index(es, name)\n\n # Add terms_alias to this index\n elasticsearch.add_index_alias(es, index_name, terms_alias)\n\n log.info(\n \"Load namespace terms\",\n elapsed=timer.elapsed,\n namespace=metadata[\"metadata\"][\"namespace\"],\n )\n\n # LOAD EQUIVALENCES INTO ArangoDB\n with timy.Timer(\"Load Term Equivalences\") as timer:\n arango_client = arangodb.get_client()\n belns_db = arangodb.get_belns_handle(arango_client)\n 
arangodb.batch_load_docs(\n belns_db, terms_iterator_for_arangodb(fo, version), on_duplicate=\"update\"\n )\n\n log.info(\n \"Loaded namespace equivalences\",\n elapsed=timer.elapsed,\n namespace=metadata[\"metadata\"][\"namespace\"],\n )\n\n # Clean up old entries\n remove_old_equivalence_edges = f\"\"\"\n FOR edge in equivalence_edges\n FILTER edge.source == \"{metadata[\"metadata\"][\"namespace\"]}\"\n FILTER edge.version != \"{version}\"\n REMOVE edge IN equivalence_edges\n \"\"\"\n remove_old_equivalence_nodes = f\"\"\"\n FOR node in equivalence_nodes\n FILTER node.source == \"{metadata[\"metadata\"][\"namespace\"]}\"\n FILTER node.version != \"{version}\"\n REMOVE node IN equivalence_nodes\n \"\"\"\n arangodb.aql_query(belns_db, remove_old_equivalence_edges)\n arangodb.aql_query(belns_db, remove_old_equivalence_nodes)\n\n # Add metadata to resource metadata collection\n metadata[\"_key\"] = f\"Namespace_{metadata['metadata']['namespace']}\"\n try:\n belns_db.collection(arangodb.belns_metadata_name).insert(metadata)\n except ArangoError as ae:\n belns_db.collection(arangodb.belns_metadata_name).replace(metadata)", "response": "Load terms into Elasticsearch and ArangoDB"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef terms_iterator_for_elasticsearch(fo: IO, index_name: str):\n\n species_list = config[\"bel_resources\"].get(\"species_list\", [])\n\n fo.seek(0) # Seek back to beginning of file\n with gzip.open(fo, \"rt\") as f:\n for line in f:\n term = json.loads(line)\n # skip if not term record (e.g. 
is a metadata record)\n if \"term\" not in term:\n continue\n term = term[\"term\"]\n\n # Filter species if enabled in config\n species_id = term.get(\"species_id\", None)\n if species_list and species_id and species_id not in species_list:\n continue\n\n all_term_ids = set()\n for term_id in [term[\"id\"]] + term.get(\"alt_ids\", []):\n all_term_ids.add(term_id)\n all_term_ids.add(lowercase_term_id(term_id))\n\n term[\"alt_ids\"] = copy.copy(list(all_term_ids))\n\n yield {\n \"_op_type\": \"index\",\n \"_index\": index_name,\n \"_type\": \"term\",\n \"_id\": term[\"id\"],\n \"_source\": copy.deepcopy(term),\n }", "response": "Iterate over the terms in the file that are in elastic search."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef lowercase_term_id(term_id: str) -> str:\n (ns, val) = term_id.split(\":\", maxsplit=1)\n term_id = f\"{ns}:{val.lower()}\"\n\n return term_id", "response": "Lowercase the term value"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns the text of a node.", "response": "def node_text(node):\n \"\"\"Needed for things like abstracts which have internal tags (see PMID:27822475)\"\"\"\n\n if node.text:\n result = node.text\n else:\n result = \"\"\n for child in node:\n if child.tail is not None:\n result += child.tail\n return result"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_pubtator(pmid):\n r = get_url(PUBTATOR_TMPL.replace(\"PMID\", pmid), timeout=10)\n if r and r.status_code == 200:\n pubtator = r.json()[0]\n else:\n log.error(\n f\"Cannot access Pubtator, status: {r.status_code} url: {PUBTATOR_TMPL.replace('PMID', pmid)}\"\n )\n return None\n\n known_types = [\"CHEBI\", \"Chemical\", \"Disease\", \"Gene\", \"Species\"]\n\n for idx, anno in enumerate(pubtator[\"denotations\"]):\n s_match = re.match(r\"(\\w+):(\\w+)\", anno[\"obj\"])\n c_match = 
re.match(r\"(\\w+):(\\w+):(\\w+)\", anno[\"obj\"])\n if c_match:\n (ctype, namespace, cid) = (\n c_match.group(1),\n c_match.group(2),\n c_match.group(3),\n )\n\n if ctype not in known_types:\n log.info(f\"{ctype} not in known_types for Pubtator\")\n if namespace not in known_types:\n log.info(f\"{namespace} not in known_types for Pubtator\")\n\n pubtator[\"denotations\"][idx][\n \"obj\"\n ] = f'{pubtator_ns_convert.get(namespace, \"UNKNOWN\")}:{cid}'\n pubtator[\"denotations\"][idx][\"entity_type\"] = pubtator_entity_convert.get(\n ctype, None\n )\n pubtator[\"denotations\"][idx][\n \"annotation_type\"\n ] = pubtator_annotation_convert.get(ctype, None)\n elif s_match:\n (ctype, cid) = (s_match.group(1), s_match.group(2))\n\n if ctype not in known_types:\n log.info(f\"{ctype} not in known_types for Pubtator\")\n\n pubtator[\"denotations\"][idx][\n \"obj\"\n ] = f'{pubtator_ns_convert.get(ctype, \"UNKNOWN\")}:{cid}'\n pubtator[\"denotations\"][idx][\"entity_type\"] = pubtator_entity_convert.get(\n ctype, None\n )\n pubtator[\"denotations\"][idx][\n \"annotation_type\"\n ] = pubtator_annotation_convert.get(ctype, None)\n\n annotations = {}\n for anno in pubtator[\"denotations\"]:\n log.info(anno)\n if anno[\"obj\"] not in annotations:\n annotations[anno[\"obj\"]] = {\"spans\": [anno[\"span\"]]}\n annotations[anno[\"obj\"]][\"entity_types\"] = [anno.get(\"entity_type\", [])]\n annotations[anno[\"obj\"]][\"annotation_types\"] = [\n anno.get(\"annotation_type\", [])\n ]\n\n else:\n annotations[anno[\"obj\"]][\"spans\"].append(anno[\"span\"])\n\n del pubtator[\"denotations\"]\n pubtator[\"annotations\"] = copy.deepcopy(annotations)\n\n return pubtator", "response": "Get Pubtator Bioconcepts from Pubmed Abstract"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef process_pub_date(year, mon, day):\n\n pub_date = None\n if year and re.match(\"[a-zA-Z]+\", mon):\n pub_date = datetime.datetime.strptime(\n 
f\"{year}-{mon}-{day}\", \"%Y-%b-%d\"\n ).strftime(\"%Y-%m-%d\")\n elif year:\n pub_date = f\"{year}-{mon}-{day}\"\n\n return pub_date", "response": "Create pub_date from what Pubmed provides in Journal PubDate entry\n "} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nget pubmed xml for pmid and convert to JSON", "response": "def get_pubmed(pmid: str) -> Mapping[str, Any]:\n \"\"\"Get pubmed xml for pmid and convert to JSON\n\n Remove MESH terms if they are duplicated in the compound term set\n\n ArticleDate vs PubDate gets complicated: https://www.nlm.nih.gov/bsd/licensee/elements_descriptions.html see and \n Only getting pub_year at this point from the element.\n\n Args:\n pmid: pubmed id number as a string\n\n Returns:\n pubmed json\n \"\"\"\n pubmed_url = PUBMED_TMPL.replace(\"PMID\", str(pmid))\n r = get_url(pubmed_url)\n log.info(f\"Getting Pubmed URL {pubmed_url}\")\n\n try:\n root = etree.fromstring(r.content)\n doc = {\"abstract\": \"\"}\n doc[\"pmid\"] = root.xpath(\"//PMID/text()\")[0]\n doc[\"title\"] = next(iter(root.xpath(\"//ArticleTitle/text()\")), \"\")\n\n # TODO https://stackoverflow.com/questions/4770191/lxml-etree-element-text-doesnt-return-the-entire-text-from-an-element\n atext = next(iter(root.xpath(\"//Abstract/AbstractText/text()\")), \"\")\n print(\"Text\", atext)\n\n for abstracttext in root.xpath(\"//Abstract/AbstractText\"):\n abstext = node_text(abstracttext)\n\n label = abstracttext.get(\"Label\", None)\n if label:\n doc[\"abstract\"] += f\"{label}: {abstext}\\n\"\n else:\n doc[\"abstract\"] += f\"{abstext}\\n\"\n\n doc[\"abstract\"] = doc[\"abstract\"].rstrip()\n\n doc[\"authors\"] = []\n for author in root.xpath(\"//Author\"):\n last_name = next(iter(author.xpath(\"LastName/text()\")), \"\")\n first_name = next(iter(author.xpath(\"ForeName/text()\")), \"\")\n initials = next(iter(author.xpath(\"Initials/text()\")), \"\")\n if not first_name and initials:\n first_name = initials\n 
doc[\"authors\"].append(f\"{last_name}, {first_name}\")\n\n pub_year = next(\n iter(root.xpath(\"//Journal/JournalIssue/PubDate/Year/text()\")), None\n )\n pub_mon = next(\n iter(root.xpath(\"//Journal/JournalIssue/PubDate/Month/text()\")), \"Jan\"\n )\n pub_day = next(\n iter(root.xpath(\"//Journal/JournalIssue/PubDate/Day/text()\")), \"01\"\n )\n\n pub_date = process_pub_date(pub_year, pub_mon, pub_day)\n\n doc[\"pub_date\"] = pub_date\n doc[\"journal_title\"] = next(iter(root.xpath(\"//Journal/Title/text()\")), \"\")\n doc[\"joural_iso_title\"] = next(\n iter(root.xpath(\"//Journal/ISOAbbreviation/text()\")), \"\"\n )\n doc[\"doi\"] = next(iter(root.xpath('//ArticleId[@IdType=\"doi\"]/text()')), None)\n\n doc[\"compounds\"] = []\n for chem in root.xpath(\"//ChemicalList/Chemical/NameOfSubstance\"):\n chem_id = chem.get(\"UI\")\n doc[\"compounds\"].append({\"id\": f\"MESH:{chem_id}\", \"name\": chem.text})\n\n compounds = [cmpd[\"id\"] for cmpd in doc[\"compounds\"]]\n doc[\"mesh\"] = []\n for mesh in root.xpath(\"//MeshHeading/DescriptorName\"):\n mesh_id = f\"MESH:{mesh.get('UI')}\"\n if mesh_id in compounds:\n continue\n doc[\"mesh\"].append({\"id\": mesh_id, \"name\": mesh.text})\n\n return doc\n except Exception as e:\n log.error(\n f\"Bad Pubmed request, status: {r.status_code} error: {e}\",\n url=f'{PUBMED_TMPL.replace(\"PMID\", pmid)}',\n )\n return {\"message\": f\"Cannot get PMID: {pubmed_url}\"}"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef enhance_pubmed_annotations(pubmed: Mapping[str, Any]) -> Mapping[str, Any]:\n\n text = pubmed[\"title\"] + pubmed[\"abstract\"]\n\n annotations = {}\n\n for nsarg in pubmed[\"annotations\"]:\n url = f'{config[\"bel_api\"][\"servers\"][\"api_url\"]}/terms/{url_path_param_quoting(nsarg)}'\n log.info(f\"URL: {url}\")\n r = get_url(url)\n log.info(f\"Result: {r}\")\n new_nsarg = \"\"\n if r and r.status_code == 200:\n term = r.json()\n new_nsarg = 
bel_utils.convert_nsarg(term[\"id\"], decanonicalize=True)\n\n pubmed[\"annotations\"][nsarg][\"name\"] = term[\"name\"]\n pubmed[\"annotations\"][nsarg][\"label\"] = term[\"label\"]\n pubmed[\"annotations\"][nsarg][\"entity_types\"] = list(\n set(\n pubmed[\"annotations\"][nsarg][\"entity_types\"]\n + term.get(\"entity_types\", [])\n )\n )\n pubmed[\"annotations\"][nsarg][\"annotation_types\"] = list(\n set(\n pubmed[\"annotations\"][nsarg][\"annotation_types\"]\n + term.get(\"annotation_types\", [])\n )\n )\n\n if new_nsarg != nsarg:\n annotations[new_nsarg] = copy.deepcopy(pubmed[\"annotations\"][nsarg])\n else:\n annotations[nsarg] = copy.deepcopy(pubmed[\"annotations\"][nsarg])\n\n for nsarg in annotations:\n for idx, span in enumerate(annotations[nsarg][\"spans\"]):\n string = text[span[\"begin\"] - 1 : span[\"end\"] - 1]\n annotations[nsarg][\"spans\"][idx][\"text\"] = string\n\n pubmed[\"annotations\"] = copy.deepcopy(annotations)\n\n return pubmed", "response": "Enhance pubmed annotations with the same namespace IDs"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_pubmed_for_beleditor(pmid: str) -> Mapping[str, Any]:\n\n pubmed = get_pubmed(pmid)\n pubtator = get_pubtator(pmid)\n pubmed[\"annotations\"] = copy.deepcopy(pubtator[\"annotations\"])\n\n # Add entity types and annotation types to annotations\n pubmed = enhance_pubmed_annotations(pubmed)\n\n return pubmed", "response": "Get the pubmed document with the given Pubmed PMID and add entity types and annotation types to the pubmed document."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_orthologs(canonical_gene_id: str, species: list = []) -> List[dict]:\n\n gene_id_key = bel.db.arangodb.arango_id_to_key(canonical_gene_id)\n orthologs = {}\n\n if species:\n query_filter = f\"FILTER vertex.tax_id IN {species}\"\n\n query = f\"\"\"\n LET start = (\n FOR vertex in ortholog_nodes\n FILTER 
vertex._key == \"{gene_id_key}\"\n RETURN {{ \"name\": vertex.name, \"tax_id\": vertex.tax_id }}\n )\n\n LET orthologs = (\n FOR vertex IN 1..3\n ANY \"ortholog_nodes/{gene_id_key}\" ortholog_edges\n OPTIONS {{ bfs: true, uniqueVertices : 'global' }}\n {query_filter}\n RETURN DISTINCT {{ \"name\": vertex.name, \"tax_id\": vertex.tax_id }}\n )\n\n RETURN {{ 'orthologs': FLATTEN(UNION(start, orthologs)) }}\n \"\"\"\n\n cursor = belns_db.aql.execute(query, batch_size=20)\n results = cursor.pop()\n for ortholog in results[\"orthologs\"]:\n norms = bel.terms.terms.get_normalized_terms(ortholog[\"name\"])\n orthologs[ortholog[\"tax_id\"]] = {\n \"canonical\": norms[\"canonical\"],\n \"decanonical\": norms[\"decanonical\"],\n }\n\n return orthologs", "response": "Get orthologs for given gene_id and species"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nnormalize strings with booleans into Python types.", "response": "def normalize_value(val):\n \"\"\"\n Normalize strings with booleans into Python types.\n \"\"\"\n \n if val is not None:\n if val.lower() == 'false':\n val = False\n elif val.lower() == 'true':\n val = True\n \n return val"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef normalize_dictionary_values(dictionary):\n \n for key, val in dictionary.iteritems():\n if isinstance(val, dict):\n dictionary[key] = normalize_dictionary_values(val)\n elif isinstance(val, list):\n dictionary[key] = list(val)\n else:\n dictionary[key] = normalize_value(val)\n \n return dictionary", "response": "Normalizes the values in a dictionary recursivly."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_url(url: str, params: dict = {}, timeout: float = 5.0, cache: bool = True):\n\n try:\n\n if not cache:\n with requests_cache.disabled():\n r = requests.get(url, params=params, timeout=timeout)\n else:\n r = 
requests.get(url, params=params, timeout=timeout)\n\n log.debug(f\"Response headers {r.headers} From cache {r.from_cache}\")\n return r\n\n except requests.exceptions.Timeout:\n log.warn(f\"Timed out getting url in get_url: {url}\")\n return None\n except Exception as e:\n log.warn(f\"Error getting url: {url} error: {e}\")\n return None", "response": "Wrapper for requests. get that handles timeout exceptions and returns the result object"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns time in milliseconds from start_time", "response": "def timespan(start_time):\n \"\"\"Return time in milliseconds from start_time\"\"\"\n\n timespan = datetime.datetime.now() - start_time\n timespan_ms = timespan.total_seconds() * 1000\n return timespan_ms"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef download_file(url):\n\n response = requests.get(url, stream=True)\n fp = tempfile.NamedTemporaryFile()\n for chunk in response.iter_content(chunk_size=1024):\n if chunk: # filter out keep-alive new chunks\n fp.write(chunk)\n\n # log.info(f'Download file - tmp file: {fp.name} size: {fp.tell()}')\n return fp", "response": "Download file from url and return file object"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns the first true value in the iterable.", "response": "def first_true(iterable, default=False, pred=None):\n \"\"\"Returns the first true value in the iterable.\n\n If no true value is found, returns *default*\n\n If *pred* is not None, returns the first item\n for which pred(item) is true.\n\n \"\"\"\n # first_true([a,b,c], x) --> a or b or c or x\n # first_true([a,b], x, f) --> a if f(a) else b if f(b) else x\n return next(filter(pred, iterable), default)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _create_hash_from_doc(doc: 
Mapping[str, Any]) -> str:\n\n doc_string = json.dumps(doc, sort_keys=True)\n return _create_hash(doc_string)", "response": "Create hash Id from document"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef elapsed(self):\n if self.end is None:\n # if elapsed is called in the context manager scope\n return (self() - self.start) * self.factor\n else:\n # if elapsed is called out of the context manager scope\n return (self.end - self.start) * self.factor", "response": "Return the elapsed time since the start of the current block."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nloads edges into Arangodb", "response": "def load_edges_into_db(\n nanopub_id: str,\n nanopub_url: str,\n edges: list = [],\n edges_coll_name: str = edges_coll_name,\n nodes_coll_name: str = nodes_coll_name,\n):\n \"\"\"Load edges into Edgestore\"\"\"\n\n start_time = datetime.datetime.now()\n\n # Clean out edges for nanopub in edgestore\n query = f\"\"\"\n FOR edge IN {edges_coll_name}\n FILTER edge.nanopub_id == \"{nanopub_id}\"\n REMOVE edge IN edges\n \"\"\"\n\n try:\n edgestore_db.aql.execute(query)\n except Exception as e:\n log.debug(f\"Could not remove nanopub-related edges: {query} msg: {e}\")\n\n end_time1 = datetime.datetime.now()\n delta_ms = f\"{(end_time1 - start_time).total_seconds() * 1000:.1f}\"\n log.info(\"Timing - Delete edges for nanopub\", delta_ms=delta_ms)\n\n # Clean out errors for nanopub in pipeline_errors\n query = f\"\"\"\n FOR e IN pipeline_errors\n FILTER e.nanopub_id == \"{nanopub_id}\"\n REMOVE e IN pipeline_errors\n \"\"\"\n try:\n edgestore_db.aql.execute(query)\n except Exception as e:\n log.debug(f\"Could not remove nanopub-related errors: {query} msg: {e}\")\n\n end_time2 = datetime.datetime.now()\n delta_ms = f\"{(end_time2 - end_time1).total_seconds() * 1000:.1f}\"\n log.info(\"Timing - Delete pipeline errors for nanopub\", delta_ms=delta_ms)\n\n # 
Collect edges and nodes to load into arangodb\n node_list, edge_list = [], []\n for doc in edge_iterator(edges=edges):\n if doc[0] == \"nodes\":\n node_list.append(doc[1])\n else:\n edge_list.append(doc[1])\n\n end_time3 = datetime.datetime.now()\n delta_ms = f\"{(end_time3 - end_time2).total_seconds() * 1000:.1f}\"\n log.info(\"Timing - Collect edges and nodes\", delta_ms=delta_ms)\n\n try:\n results = edgestore_db.collection(edges_coll_name).import_bulk(\n edge_list, on_duplicate=\"replace\", halt_on_error=False\n )\n except Exception as e:\n log.error(f\"Could not load edges msg: {e}\")\n\n end_time4 = datetime.datetime.now()\n delta_ms = f\"{(end_time4 - end_time3).total_seconds() * 1000:.1f}\"\n log.info(\"Timing - Load edges into edgestore\", delta_ms=delta_ms)\n\n try:\n results = edgestore_db.collection(nodes_coll_name).import_bulk(\n node_list, on_duplicate=\"replace\", halt_on_error=False\n )\n except Exception as e:\n log.error(f\"Could not load nodes msg: {e}\")\n\n end_time5 = datetime.datetime.now()\n delta_ms = f\"{(end_time5 - end_time4).total_seconds() * 1000:.1f}\"\n log.info(\"Timing - Load nodes into edgestore\", delta_ms=delta_ms)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef edge_iterator(edges=[], edges_fn=None):\n\n for edge in itertools.chain(edges, files.read_edges(edges_fn)):\n\n subj = copy.deepcopy(edge[\"edge\"][\"subject\"])\n subj_id = str(utils._create_hash_from_doc(subj))\n subj[\"_key\"] = subj_id\n obj = copy.deepcopy(edge[\"edge\"][\"object\"])\n\n obj_id = str(utils._create_hash_from_doc(obj))\n obj[\"_key\"] = obj_id\n relation = copy.deepcopy(edge[\"edge\"][\"relation\"])\n\n relation[\"_from\"] = f\"nodes/{subj_id}\"\n relation[\"_to\"] = f\"nodes/{obj_id}\"\n\n # Create edge _key\n relation_hash = copy.deepcopy(relation)\n relation_hash.pop(\"edge_dt\", None)\n relation_hash.pop(\"edge_hash\", None)\n relation_hash.pop(\"nanopub_dt\", None)\n 
relation_hash.pop(\"nanopub_url\", None)\n relation_hash.pop(\"subject_canon\", None)\n relation_hash.pop(\"object_canon\", None)\n relation_hash.pop(\"public_flag\", None)\n relation_hash.pop(\"metadata\", None)\n\n relation_id = str(utils._create_hash_from_doc(relation_hash))\n relation[\"_key\"] = relation_id\n\n if edge.get(\"nanopub_id\", None):\n if \"metadata\" not in relation:\n relation[\"metadata\"] = {}\n relation[\"metadata\"][\"nanopub_id\"] = edge[\"nanopub_id\"]\n\n yield (\"nodes\", subj)\n yield (\"nodes\", obj)\n yield (\"edges\", relation)", "response": "Yields all documents from an edge for loading into ArangoDB"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nupdating nanopubstore start_dt in belapi. state_mgmt collection", "response": "def update_nanopubstore_start_dt(url: str, start_dt: str):\n \"\"\"Add nanopubstore start_dt to belapi.state_mgmt collection\n\n Args:\n url: url of nanopubstore\n start_dt: datetime of last query against nanopubstore for new ID's\n \"\"\"\n\n hostname = urllib.parse.urlsplit(url)[1]\n\n start_dates_doc = state_mgmt.get(start_dates_doc_key)\n if not start_dates_doc:\n start_dates_doc = {\n \"_key\": start_dates_doc_key,\n \"start_dates\": [{\"nanopubstore\": hostname, \"start_dt\": start_dt}],\n }\n state_mgmt.insert(start_dates_doc)\n else:\n for idx, start_date in enumerate(start_dates_doc[\"start_dates\"]):\n if start_date[\"nanopubstore\"] == hostname:\n start_dates_doc[\"start_dates\"][idx][\"start_dt\"] = start_dt\n break\n else:\n start_dates_doc[\"start_dates\"].append(\n {\"nanopubstore\": hostname, \"start_dt\": start_dt}\n )\n\n state_mgmt.replace(start_dates_doc)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nget last start_dt recorded for getting new nanopub ID s", "response": "def get_nanopubstore_start_dt(url: str):\n \"\"\"Get last start_dt recorded for getting new nanopub ID's\"\"\"\n\n hostname = 
urllib.parse.urlsplit(url)[1]\n\n start_dates_doc = state_mgmt.get(start_dates_doc_key)\n if start_dates_doc and start_dates_doc.get(\"start_dates\"):\n date = [\n dt[\"start_dt\"]\n for dt in start_dates_doc[\"start_dates\"]\n if dt[\"nanopubstore\"] == hostname\n ]\n log.info(f\"Selected start_dt: {date} len: {len(date)}\")\n if len(date) == 1:\n return date[0]\n\n return \"1900-01-01T00:00:00.000Z\""} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nget modified and deleted nanopub urls", "response": "def get_nanopub_urls(ns_root_url: str = None, start_dt: str = None) -> dict:\n \"\"\"Get modified and deleted nanopub urls\n\n Limited by last datetime retrieved (start_dt). Modified includes new and updated nanopubs\n\n Returns:\n dict: {'modified': [], 'deleted': []}\n \"\"\"\n if not ns_root_url:\n ns_root_url = config[\"bel_api\"][\"servers\"][\"nanopubstore\"]\n\n url = f\"{ns_root_url}/nanopubs/timed\"\n\n if not start_dt:\n start_dt = get_nanopubstore_start_dt(ns_root_url)\n\n params = {\"startTime\": start_dt, \"published\": True}\n\n # TODO - this is coming back without a status code in some cases - why?\n r = bel.utils.get_url(url, params=params, cache=False)\n if r and r.status_code == 200:\n data = r.json()\n new_start_dt = data[\"queryTime\"]\n update_nanopubstore_start_dt(ns_root_url, new_start_dt)\n nanopub_urls = {\"modified\": [], \"deleted\": []}\n\n # Deleted nanopubs\n for nid in data[\"deleteddata\"]:\n nanopub_urls[\"deleted\"].append(f\"{ns_root_url}/nanopubs/{nid}\")\n\n # Modified nanopubs\n for nid in data[\"data\"]:\n nanopub_urls[\"modified\"].append(f\"{ns_root_url}/nanopubs/{nid}\")\n\n return nanopub_urls\n\n else:\n log.error(\n f\"Bad request to Nanopubstore\",\n url=url,\n status=r.status_code,\n type=\"api_request\",\n )\n return {}"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_nanopub(url):\n\n r = 
bel.utils.get_url(url, cache=False)\n if r and r.json():\n return r.json()\n else:\n return {}", "response": "Get Nanopub from nanopubstore given url"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef pipeline(\n ctx,\n input_fn,\n db_save,\n db_delete,\n output_fn,\n rules,\n species,\n namespace_targets,\n version,\n api,\n config_fn,\n):\n \"\"\"BEL Pipeline - BEL Nanopubs into BEL Edges\n\n This will process BEL Nanopubs into BEL Edges by validating, orthologizing (if requested),\n canonicalizing, and then computing the BEL Edges based on the given rule_set.\n\n \\b\n input_fn:\n If input fn has *.gz, will read as a gzip file\n If input fn has *.jsonl*, will parsed as a JSONLines file\n IF input fn has *.json*, will be parsed as a JSON file\n If input fn has *.yaml* or *.yml*, will be parsed as a YAML file\n\n \\b\n output_fn:\n If output fn has *.gz, will written as a gzip file\n If output fn has *.jsonl*, will written as a JSONLines file\n IF output fn has *.json*, will be written as a JSON file\n If output fn has *.yaml* or *.yml*, will be written as a YAML file\n If output fn has *.jgf, will be written as JSON Graph Formatted file\n \"\"\"\n\n if config_fn:\n config = bel.db.Config.merge_config(ctx.config, override_config_fn=config_fn)\n else:\n config = ctx.config\n\n # Configuration - will return the first truthy result in list else the default option\n if namespace_targets:\n namespace_targets = json.loads(namespace_targets)\n if rules:\n rules = rules.replace(\" \", \"\").split(\",\")\n\n namespace_targets = utils.first_true(\n [namespace_targets, config[\"bel\"][\"lang\"].get(\"canonical\")], None\n )\n rules = utils.first_true(\n [rules, config[\"bel\"][\"nanopub\"].get(\"pipeline_edge_rules\", False)], False\n )\n api = utils.first_true(\n [api, config[\"bel_api\"][\"servers\"].get(\"api_url\", None)], None\n )\n version = utils.first_true(\n [version, 
config[\"bel\"][\"lang\"].get(\"default_bel_version\", None)], None\n )\n\n n = bnn.Nanopub()\n\n try:\n json_flag, jsonl_flag, yaml_flag, jgf_flag = False, False, False, False\n all_bel_edges = []\n fout = None\n\n if db_save or db_delete:\n if db_delete:\n arango_client = bel.db.arangodb.get_client()\n bel.db.arangodb.delete_database(arango_client, \"edgestore\")\n else:\n arango_client = bel.db.arangodb.get_client()\n\n edgestore_handle = bel.db.arangodb.get_edgestore_handle(arango_client)\n\n elif re.search(\"ya?ml\", output_fn):\n yaml_flag = True\n elif \"jsonl\" in output_fn:\n jsonl_flag = True\n elif \"json\" in output_fn:\n json_flag = True\n elif \"jgf\" in output_fn:\n jgf_flag = True\n\n if db_save:\n pass\n elif \"gz\" in output_fn:\n fout = gzip.open(output_fn, \"wt\")\n else:\n fout = open(output_fn, \"wt\")\n\n nanopub_cnt = 0\n with timy.Timer() as timer:\n for np in bnf.read_nanopubs(input_fn):\n # print('Nanopub:\\n', json.dumps(np, indent=4))\n\n nanopub_cnt += 1\n if nanopub_cnt % 100 == 0:\n timer.track(f\"{nanopub_cnt} Nanopubs processed into Edges\")\n\n bel_edges = n.bel_edges(\n np,\n namespace_targets=namespace_targets,\n orthologize_target=species,\n rules=rules,\n )\n\n if db_save:\n bel.edge.edges.load_edges_into_db(edgestore_handle, edges=bel_edges)\n elif jsonl_flag:\n fout.write(\"{}\\n\".format(json.dumps(bel_edges)))\n else:\n all_bel_edges.extend(bel_edges)\n\n if db_save:\n pass\n elif yaml_flag:\n fout.write(\"{}\\n\".format(yaml.dumps(all_bel_edges)))\n elif json_flag:\n fout.write(\"{}\\n\".format(json.dumps(all_bel_edges)))\n elif jgf_flag:\n bnf.edges_to_jgf(output_fn, all_bel_edges)\n\n finally:\n if fout:\n fout.close()", "response": "This function processes the BEL Nanopubs into BEL Edges and returns the result as a list of BEL Nanopubs."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef convert_belscript(ctx, input_fn, output_fn):\n\n try:\n\n (\n out_fh,\n yaml_flag,\n 
jsonl_flag,\n json_flag,\n ) = bel.nanopub.files.create_nanopubs_fh(output_fn)\n if yaml_flag or json_flag:\n docs = []\n\n # input file\n if re.search(\"gz$\", input_fn):\n f = gzip.open(input_fn, \"rt\")\n else:\n f = open(input_fn, \"rt\")\n\n # process belscript\n for doc in bel.nanopub.belscripts.parse_belscript(f):\n if yaml_flag or json_flag:\n docs.append(doc)\n elif jsonl_flag:\n out_fh.write(\"{}\\n\".format(json.dumps(doc)))\n\n if yaml_flag:\n yaml.dump(docs, out_fh)\n\n elif json_flag:\n json.dump(docs, out_fh, indent=4)\n\n finally:\n f.close()\n out_fh.close()", "response": "Convert BEL script to nanopubs_bel format"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreformats between JSON YAML and JSONLines files.", "response": "def reformat(ctx, input_fn, output_fn):\n \"\"\"Reformat between JSON, YAML, JSONLines formats\n\n \\b\n input_fn:\n If input fn has *.gz, will read as a gzip file\n\n \\b\n output_fn:\n If output fn has *.gz, will written as a gzip file\n If output fn has *.jsonl*, will written as a JSONLines file\n IF output fn has *.json*, will be written as a JSON file\n If output fn has *.yaml* or *.yml*, will be written as a YAML file\n \"\"\"\n\n try:\n\n (\n out_fh,\n yaml_flag,\n jsonl_flag,\n json_flag,\n ) = bel.nanopub.files.create_nanopubs_fh(output_fn)\n if yaml_flag or json_flag:\n docs = []\n\n # input file\n if re.search(\"gz$\", input_fn):\n f = gzip.open(input_fn, \"rt\")\n else:\n f = open(input_fn, \"rt\")\n\n for np in bnf.read_nanopubs(input_fn):\n if yaml_flag or json_flag:\n docs.append(np)\n elif jsonl_flag:\n out_fh.write(\"{}\\n\".format(json.dumps(np)))\n\n if yaml_flag:\n yaml.dump(docs, out_fh)\n\n elif json_flag:\n json.dump(docs, out_fh, indent=4)\n\n finally:\n f.close()\n out_fh.close()"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncollect statistics on nanopub file input_fn can be json jsonl or yaml and additionally gzipped", 
"response": "def nanopub_stats(ctx, input_fn):\n \"\"\"Collect statistics on nanopub file\n\n input_fn can be json, jsonl or yaml and additionally gzipped\n \"\"\"\n\n counts = {\n \"nanopubs\": 0,\n \"assertions\": {\"total\": 0, \"subject_only\": 0, \"nested\": 0, \"relations\": {}},\n }\n\n for np in bnf.read_nanopubs(input_fn):\n if \"nanopub\" in np:\n counts[\"nanopubs\"] += 1\n counts[\"assertions\"][\"total\"] += len(np[\"nanopub\"][\"assertions\"])\n for assertion in np[\"nanopub\"][\"assertions\"]:\n if assertion[\"relation\"] is None:\n counts[\"assertions\"][\"subject_only\"] += 1\n else:\n if re.match(\"\\s*\\(\", assertion[\"object\"]):\n counts[\"assertions\"][\"nested\"] += 1\n\n if (\n not assertion.get(\"relation\")\n in counts[\"assertions\"][\"relations\"]\n ):\n counts[\"assertions\"][\"relations\"][assertion.get(\"relation\")] = 1\n else:\n counts[\"assertions\"][\"relations\"][\n assertion.get(\"relation\")\n ] += 1\n\n counts[\"assertions\"][\"relations\"] = sorted(counts[\"assertions\"][\"relations\"])\n\n print(\"DumpVar:\\n\", json.dumps(counts, indent=4))"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef edges(ctx, statement, rules, species, namespace_targets, version, api, config_fn):\n\n if config_fn:\n config = bel.db.Config.merge_config(ctx.config, override_config_fn=config_fn)\n else:\n config = ctx.config\n\n # Configuration - will return the first truthy result in list else the default option\n if namespace_targets:\n namespace_targets = json.loads(namespace_targets)\n if rules:\n rules = rules.replace(\" \", \"\").split(\",\")\n\n namespace_targets = utils.first_true(\n [namespace_targets, config[\"bel\"][\"lang\"].get(\"canonical\")], None\n )\n api_url = utils.first_true(\n [api, config[\"bel_api\"][\"servers\"].get(\"api_url\", None)], None\n )\n version = utils.first_true(\n [version, config[\"bel\"][\"lang\"].get(\"default_bel_version\", None)], None\n )\n\n 
print(\"------------------------------\")\n    print(\"BEL version: {}\".format(version))\n    print(\"API Endpoint: {}\".format(api))\n    print(\"------------------------------\")\n\n    bo = BEL(version=version, endpoint=api_url)\n    if species:\n        edges = (\n            bo.parse(statement)\n            .orthologize(species)\n            .canonicalize(namespace_targets=namespace_targets)\n            .compute_edges(rules=rules)\n        )\n    else:\n        edges = (\n            bo.parse(statement)\n            .canonicalize(namespace_targets=namespace_targets)\n            .compute_edges(rules=rules)\n        )\n\n    if edges is None:\n        print(bo.original_bel_stmt)\n        print(bo.parse_visualize_error)\n        print(bo.validation_messages)\n    else:\n        print(json.dumps(edges, indent=4))\n\n    if bo.validation_messages:\n        print(bo.validation_messages)\n    else:\n        print(\"No problems found\")\n    return", "response": "Create BEL Edges from BEL Statement"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef validate_to_schema(nanopub, schema) -> Tuple[bool, List[Tuple[str, str]]]:\n\n    v = jsonschema.Draft4Validator(schema)\n    messages = []\n    errors = sorted(v.iter_errors(nanopub), key=lambda e: e.path)\n    for error in errors:\n        for suberror in sorted(error.context, key=lambda e: e.schema_path):\n            print(list(suberror.schema_path), suberror.message, sep=\", \")\n            messages.append((\"ERROR\", suberror.message))\n\n    is_valid = True\n    if errors:\n        is_valid = False\n\n    return (is_valid, messages)", "response": "Validate nanopub against the nanopub jsonschema"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncreates CityHash64 from nanopub", "response": "def hash_nanopub(nanopub: Mapping[str, Any]) -> str:\n    \"\"\"Create CityHash64 from nanopub for duplicate check\n\n    TODO - check that this hash value is consistent between C# and Python running on\n    laptop and server\n\n    Build string to hash\n\n    Collect flat array of (all values.strip()):\n        nanopub.type.name\n        nanopub.type.version\n\n    One of:\n
nanopub.citation.database.name\n nanopub.citation.database.id\n\n OR\n\n nanopub.citation.database.uri\n\n OR\n\n nanopub.citation.database.reference\n\n Extend with sorted list of assertions (SRO as single string with space between S, R and O)\n\n Extend with sorted list of annotations (nanopub.annotations.type + ' ' + nanopub.annotations.id)\n\n Convert array to string by joining array elements separated by a space\n\n Create CityHash64(str) and return\n\n \"\"\"\n\n hash_list = []\n\n # Type\n hash_list.append(nanopub[\"nanopub\"][\"type\"].get(\"name\", \"\").strip())\n hash_list.append(nanopub[\"nanopub\"][\"type\"].get(\"version\", \"\").strip())\n\n # Citation\n if nanopub[\"nanopub\"][\"citation\"].get(\"database\", False):\n hash_list.append(\n nanopub[\"nanopub\"][\"citation\"][\"database\"].get(\"name\", \"\").strip()\n )\n hash_list.append(\n nanopub[\"nanopub\"][\"citation\"][\"database\"].get(\"id\", \"\").strip()\n )\n elif nanopub[\"nanopub\"][\"citation\"].get(\"uri\", False):\n hash_list.append(nanopub[\"nanopub\"][\"citation\"].get(\"uri\", \"\").strip())\n elif nanopub[\"nanopub\"][\"citation\"].get(\"reference\", False):\n hash_list.append(nanopub[\"nanopub\"][\"citation\"].get(\"reference\", \"\").strip())\n\n # Assertions\n assertions = []\n for assertion in nanopub[\"nanopub\"][\"assertions\"]:\n if assertion.get(\"relation\") is None:\n assertion[\"relation\"] = \"\"\n if assertion.get(\"object\") is None:\n assertion[\"object\"] = \"\"\n assertions.append(\n \" \".join(\n (\n assertion[\"subject\"].strip(),\n assertion.get(\"relation\", \"\").strip(),\n assertion.get(\"object\", \"\").strip(),\n )\n ).strip()\n )\n assertions = sorted(assertions)\n hash_list.extend(assertions)\n\n # Annotations\n annotations = []\n\n for anno in nanopub[\"nanopub\"][\"annotations\"]:\n annotations.append(\n \" \".join((anno.get(\"type\", \"\").strip(), anno.get(\"id\", \"\").strip())).strip()\n )\n\n annotations = sorted(annotations)\n 
hash_list.extend(annotations)\n\n np_string = \" \".join([l.lower() for l in hash_list])\n\n return \"{:x}\".format(CityHash64(np_string))"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nvalidates the nanopub dict using the nanopub schema.", "response": "def validate(\n self, nanopub: Mapping[str, Any]\n ) -> Tuple[bool, List[Tuple[str, str]]]:\n \"\"\"Validates using the nanopub schema\n\n Args:\n nanopub (Mapping[str, Any]): nanopub dict\n\n Returns:\n Tuple[bool, List[Tuple[str, str]]]:\n bool: Is valid? Yes = True, No = False\n List[Tuple[str, str]]: Validation issues, empty if valid, tuple is ('ERROR|WARNING', msg)\n e.g. [('WARNING', \"Context ID not found\")] \"\"\"\n\n # Validate nanopub\n (is_valid, messages) = validate_to_schema(nanopub, self.nanopub_schema)\n if not is_valid:\n return messages\n\n # Extract BEL Version\n if nanopub[\"nanopub\"][\"type\"][\"name\"].upper() == \"BEL\":\n bel_version = nanopub[\"nanopub\"][\"type\"][\"version\"]\n else:\n is_valid = False\n return (\n is_valid,\n f\"Not a BEL Nanopub according to nanopub.type.name: {nanopub['nanopub']['type']['name']}\",\n )\n\n all_messages = []\n # Validate BEL Statements\n bel_obj = bel.lang.belobj.BEL(bel_version, self.endpoint)\n for edge in nanopub[\"nanopub\"][\"edges\"]:\n bel_statement = f\"{edge['subject']} {edge['relation']} {edge['object']}\"\n parse_obj = bel_obj.parse(bel_statement)\n if not parse_obj.valid:\n all_messages.extend(\n (\n \"ERROR\",\n f\"BEL statement parse error {parse_obj.error}, {parse_obj.err_visual}\",\n )\n )\n\n # Validate nanopub.context\n for context in nanopub[\"nanopub\"][\"context\"]:\n (is_valid, messages) = self.validate_context(context)\n all_messages.extend(messages)\n\n is_valid = True\n for _type, msg in all_messages:\n if _type == \"ERROR\":\n is_valid = False\n\n return (is_valid, all_messages)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nvalidate a context dictionary of type 
id and label.", "response": "def validate_context(\n self, context: Mapping[str, Any]\n ) -> Tuple[bool, List[Tuple[str, str]]]:\n \"\"\" Validate context\n\n Args:\n context (Mapping[str, Any]): context dictionary of type, id and label\n\n Returns:\n Tuple[bool, List[Tuple[str, str]]]:\n bool: Is valid? Yes = True, No = False\n List[Tuple[str, str]]: Validation issues, empty if valid, tuple is ('ERROR|WARNING', msg)\n e.g. [('WARNING', \"Context ID not found\")]\n \"\"\"\n\n url = f'{self.endpoint}/terms/{context[\"id\"]}'\n\n res = requests.get(url)\n if res.status_code == 200:\n return (True, [])\n else:\n return (False, [(\"WARNING\", f'Context {context[\"id\"]} not found at {url}')])"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ncreating BEL Edges from BEL nanopub.", "response": "def bel_edges(\n self,\n nanopub: Mapping[str, Any],\n namespace_targets: Mapping[str, List[str]] = {},\n rules: List[str] = [],\n orthologize_target: str = None,\n ) -> List[Mapping[str, Any]]:\n \"\"\"Create BEL Edges from BEL nanopub\n\n Args:\n nanopub (Mapping[str, Any]): bel nanopub\n namespace_targets (Mapping[str, List[str]]): what namespaces to canonicalize\n rules (List[str]): which computed edge rules to process, default is all,\n look at BEL Specification yaml file for computed edge signature keys,\n e.g. degradation, if any rule in list is 'skip', then skip computing edges\n just return primary_edge\n orthologize_target (str): species to convert BEL into, e.g. TAX:10090 for mouse, default option does not orthologize\n\n Returns:\n List[Mapping[str, Any]]: edge list with edge attributes (e.g. 
context)\n \"\"\"\n\n edges = bel.edge.edges.create_edges(\n nanopub,\n self.endpoint,\n namespace_targets=namespace_targets,\n rules=rules,\n orthologize_target=orthologize_target,\n )\n\n return edges"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef main_hrun():\r\n \r\n parser = argparse.ArgumentParser(description=\"Tools for http(s) test. Base on rtsf.\")\r\n \r\n parser.add_argument(\r\n '--log-level', default='INFO',\r\n help=\"Specify logging level, default is INFO.\")\r\n \r\n parser.add_argument(\r\n '--log-file',\r\n help=\"Write logs to specified file path.\")\r\n \r\n parser.add_argument(\r\n 'case_file', \r\n help=\"yaml testcase file\")\r\n \r\n color_print(\"httpdriver {}\".format(__version__), \"GREEN\")\r\n args = parser.parse_args()\r\n logger.setup_logger(args.log_level, args.log_file) \r\n \r\n runner = TestRunner(runner = HttpDriver).run(args.case_file)\r\n html_report = runner.gen_html_report()\r\n color_print(\"report: {}\".format(html_report))", "response": "parse command line options and run commands"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nfinds the. 
pyd file in the build directory.", "response": "def find_pyd_file():\n \"\"\"\n Return path to .pyd after successful build command.\n :return: Path to .pyd file or None.\n \"\"\"\n\n if not os.path.isdir(\"./build\"):\n raise NotADirectoryError\n\n for path, dirs, files in os.walk(\"./build\"):\n for file_name in files:\n file_name_parts = os.path.splitext(file_name)\n if file_name_parts[1] == \".pyd\":\n return path\n return None"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _void_array_to_nested_list(res, _func, _args):\n try:\n shape = res.coords.len, 2\n ptr = cast(res.coords.data, POINTER(c_double))\n array = np.ctypeslib.as_array(ptr, shape)\n return array.tolist()\n finally:\n drop_array(res.coords)", "response": "Dereference the FFI result to a list of coordinates"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef is_json_file(abspath):\n abspath = abspath.lower()\n fname, ext = os.path.splitext(abspath)\n if ext in [\".json\", \".js\"]:\n is_json = True\n elif ext == \".gz\":\n is_json = False\n elif ext == \".tmp\":\n return is_json_file(fname)\n else:\n raise JsonExtError(\n \"'%s' is not a valid json file. 
\"\n \"extension has to be '.json' for uncompressed, '.gz' \"\n \"for compressed.\" % abspath)\n return is_json", "response": "Check if file is a json file."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef lower_ext(abspath):\n fname, ext = os.path.splitext(abspath)\n return fname + ext.lower()", "response": "Convert file extension to lowercase."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef load(abspath, default=None, enable_verbose=True):\n if default is None:\n default = dict()\n\n prt(\"\\nLoad from '%s' ...\" % abspath, enable_verbose)\n\n abspath = lower_ext(str(abspath))\n is_json = is_json_file(abspath)\n\n if not os.path.exists(abspath):\n prt(\" File not found, use default value: %r\" %\n default, enable_verbose)\n return default\n\n st = time.clock()\n if is_json:\n data = json.loads(textfile.read(abspath, encoding=\"utf-8\"))\n else:\n data = json.loads(compress.read_gzip(abspath).decode(\"utf-8\"))\n\n prt(\" Complete! 
Elapse %.6f sec.\" % (time.clock() - st), enable_verbose)\n return data", "response": "Load a json file into a dict."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ndump a JSON serializable object to a file.", "response": "def dump(data, abspath,\n indent_format=False,\n float_precision=None,\n ensure_ascii=True,\n overwrite=False,\n enable_verbose=True):\n \"\"\"Dump Json serializable object to file.\n Provides multiple choice to customize the behavior.\n\n :param data: Serializable python object.\n :type data: dict or list\n\n :param abspath: ``save as`` path, file extension has to be ``.json`` or ``.gz``\n (for compressed Json)\n :type abspath: string\n\n :param indent_format: default ``False``, If ``True``, then dump to human\n readable format, but it's slower, the file is larger\n :type indent_format: boolean\n\n :param float_precision: default ``None``, limit flotas to N-decimal points. \n :type float_precision: integer\n\n :param overwrite: default ``False``, If ``True``, when you dump to existing\n file, it silently overwrite it. If ``False``, an alert message is shown.\n Default setting ``False`` is to prevent overwrite file by mistake.\n :type overwrite: boolean\n\n :param enable_verbose: default True, help-message-display trigger.\n :type enable_verbose: boolean\n\n Usage::\n\n >>> from dataIO import js\n >>> data = {\"a\": 1, \"b\": 2}\n >>> dump(data, \"test.json\", overwrite=True)\n Dumping to 'test.json'...\n Complete! 
Elapse 0.002432 sec\n\n **\u4e2d\u6587\u6587\u6863**\n\n \u5c06Python\u4e2d\u53ef\u88ab\u5e8f\u5217\u5316\u7684\"\u5b57\u5178\", \"\u5217\u8868\"\u4ee5\u53ca\u4ed6\u4eec\u7684\u7ec4\u5408, \u6309\u7167Json\u7684\u7f16\u7801\u65b9\u5f0f\u5199\u5165\u6587\u4ef6\n \u6587\u4ef6\n\n \u53c2\u6570\u5217\u8868\n\n :param js: \u53efJson\u5316\u7684Python\u5bf9\u8c61\n :type js: ``\u5b57\u5178`` \u6216 ``\u5217\u8868``\n\n :param abspath: Json\u6587\u4ef6\u7edd\u5bf9\u8def\u5f84, \u6269\u5c55\u540d\u9700\u4e3a ``.json`` \u6216 ``.gz``, \u5176\u4e2d ``.gz``\n \u662f\u88ab\u538b\u7f29\u540e\u7684Json\u6587\u4ef6\n :type abspath: ``\u5b57\u7b26\u4e32``\n\n :param indent_format: \u9ed8\u8ba4 ``False``, \u5f53\u4e3a ``True`` \u65f6, Json\u7f16\u7801\u65f6\u4f1a\u5bf9Key\u8fdb\u884c\n \u6392\u5e8f, \u5e76\u8fdb\u884c\u7f29\u8fdb\u6392\u7248\u3002\u4f46\u662f\u8fd9\u6837\u5199\u5165\u901f\u5ea6\u8f83\u6162, \u6587\u4ef6\u4f53\u79ef\u4e5f\u66f4\u5927\u3002\n :type indent_format: \"\u5e03\u5c14\u503c\"\n\n :param overwrite: \u9ed8\u8ba4 ``False``, \u5f53\u4e3a``True``\u65f6, \u5982\u679c\u5199\u5165\u8def\u5f84\u5df2\u7ecf\u5b58\u5728, \u5219\u4f1a\n \u81ea\u52a8\u8986\u76d6\u539f\u6587\u4ef6\u3002\u800c\u4e3a``False``\u65f6, \u5219\u4f1a\u6253\u5370\u8b66\u544a\u6587\u4ef6, \u9632\u6b62\u8bef\u64cd\u4f5c\u8986\u76d6\u6e90\u6587\u4ef6\u3002\n :type overwrite: \"\u5e03\u5c14\u503c\"\n\n :param float_precision: \u9ed8\u8ba4 ``None``, \u5f53\u4e3a\u4efb\u610f\u6574\u6570\u65f6, \u5219\u4f1a\u4fdd\u7559\u5c0f\u6570\u70b9\u540eN\u4f4d\n :type float_precision: \"\u6574\u6570\"\n\n :param enable_verbose: \u9ed8\u8ba4 ``True``, \u4fe1\u606f\u63d0\u793a\u7684\u5f00\u5173, \u6279\u5904\u7406\u65f6\u5efa\u8bae\u5173\u95ed\n :type enable_verbose: ``\u5e03\u5c14\u503c``\n \"\"\"\n prt(\"\\nDump to '%s' ...\" % abspath, enable_verbose)\n\n abspath = lower_ext(str(abspath))\n is_json = is_json_file(abspath)\n\n if os.path.exists(abspath):\n if not overwrite: # \u5b58\u5728, 
\u5e76\u4e14overwrite=False\n prt(\" Stop! File exists and overwrite is not allowed\",\n enable_verbose)\n return\n\n if float_precision is not None:\n encoder.FLOAT_REPR = lambda x: format(x, \".%sf\" % float_precision)\n indent_format = True\n else:\n encoder.FLOAT_REPR = repr\n\n if indent_format:\n sort_keys = True\n indent = 4\n else:\n sort_keys = False\n indent = None\n\n st = time.clock()\n js = json.dumps(data, sort_keys=sort_keys, indent=indent,\n ensure_ascii=ensure_ascii)\n content = js.encode(\"utf-8\")\n if is_json:\n textfile.writebytes(content, abspath)\n else:\n compress.write_gzip(content, abspath)\n\n prt(\" Complete! Elapse %.6f sec.\" % (time.clock() - st), enable_verbose)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef safe_dump(data, abspath,\n indent_format=False,\n float_precision=None,\n ensure_ascii=True,\n enable_verbose=True):\n \"\"\"A stable version of :func:`dump`, this method will silently overwrite \n existing file.\n\n There's a issue with :func:`dump`: If your program is interrupted while \n writing, you got an incomplete file, and you also lose the original file.\n So this method write json to a temporary file first, then rename to what\n you expect, and silently overwrite old one. 
This way can guarantee atomic \n write.\n\n **\u4e2d\u6587\u6587\u6863**\n\n \u5728\u5bf9\u6587\u4ef6\u8fdb\u884c\u5199\u5165\u65f6, \u5982\u679c\u7a0b\u5e8f\u4e2d\u65ad, \u5219\u4f1a\u7559\u4e0b\u4e00\u4e2a\u4e0d\u5b8c\u6574\u7684\u6587\u4ef6\u3002\u5982\u679c\u4f7f\u7528\u4e86\u8986\u76d6\u5f0f\n \u5199\u5165, \u5219\u6211\u4eec\u5373\u6ca1\u6709\u5f97\u5230\u65b0\u6587\u4ef6, \u540c\u65f6\u4e5f\u4e22\u5931\u4e86\u539f\u6587\u4ef6\u3002\u6240\u4ee5\u4e3a\u4e86\u4fdd\u8bc1\u5199\u64cd\u4f5c\u7684\u539f\u5b50\u6027\n (\u8981\u4e48\u5168\u90e8\u5b8c\u6210, \u8981\u4e48\u5168\u90e8\u90fd\u4e0d\u5b8c\u6210), \u66f4\u597d\u7684\u65b9\u6cd5\u662f: \u9996\u5148\u5c06\u6587\u4ef6\u5199\u5165\u4e00\u4e2a\u4e34\u65f6\u6587\u4ef6\u4e2d, \n \u5b8c\u6210\u540e\u518d\u8bb2\u6587\u4ef6\u91cd\u547d\u540d, \u8986\u76d6\u65e7\u6587\u4ef6\u3002\u8fd9\u6837\u5373\u4f7f\u4e2d\u9014\u7a0b\u5e8f\u88ab\u4e2d\u65ad, \u4e5f\u4ec5\u4ec5\u662f\u7559\u4e0b\u4e86\u4e00\u4e2a\n \u672a\u5b8c\u6210\u7684\u4e34\u65f6\u6587\u4ef6\u800c\u5df2, \u4e0d\u4f1a\u5f71\u54cd\u539f\u6587\u4ef6\u3002\n \"\"\"\n abspath = lower_ext(str(abspath))\n abspath_temp = \"%s.tmp\" % abspath\n dump(data, abspath_temp,\n indent_format=indent_format, float_precision=float_precision,\n ensure_ascii=ensure_ascii,\n overwrite=True, enable_verbose=enable_verbose)\n shutil.move(abspath_temp, abspath)", "response": "A stable version of dump that does not overwrite existing file."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef pretty_dumps(data):\n try:\n return json.dumps(data, sort_keys=True, indent=4, ensure_ascii=False)\n except:\n return json.dumps(data, sort_keys=True, indent=4, ensure_ascii=True)", "response": "Return json string in pretty format."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncomputes left and right padding values.", "response": "def _get_pad_left_right(small, large):\n \"\"\" Compute left 
and right padding values.\n\n Here we use the convention that if the padding\n size is odd, we pad the odd part to the right\n and the even part to the left.\n\n Parameters\n ----------\n small : int\n Old size of original 1D array\n large : int\n New size off padded 1D array\n\n Returns\n -------\n (padleft, padright) : tuple\n The proposed padding sizes.\n \"\"\"\n assert small < large, \"Can only pad when new size larger than old size\"\n\n padsize = large - small\n if padsize % 2 != 0:\n leftpad = (padsize - 1)/2\n else:\n leftpad = padsize/2\n rightpad = padsize-leftpad\n\n return int(leftpad), int(rightpad)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nperforming linear padding for complex array", "response": "def pad_add(av, size=None, stlen=10):\n \"\"\" Perform linear padding for complex array\n\n The input array `av` is padded with a linear ramp starting at the\n edges and going outwards to an average value computed from a band\n of thickness `stlen` at the outer boundary of the array.\n\n Pads will only be appended, not prepended to the array.\n\n If the input array is complex, pads will be complex numbers\n The average is computed for phase and amplitude separately.\n\n Parameters\n ----------\n av : complex 1D or 2D ndarray\n The array that will be padded.\n size : int or tuple of length 1 (1D) or tuple of length 2 (2D), optional\n The final size of the padded array. 
Defaults to double the size\n of the input array.\n stlen : int, optional\n The thickness of the frame within `av` that will be used to\n compute an average value for padding.\n\n\n Returns\n -------\n pv : complex 1D or 2D ndarray\n Padded array `av` with pads appended to right and bottom.\n \"\"\"\n if size is None:\n size = list()\n for s in av.shape:\n size.append(int(2*s))\n elif not hasattr(size, \"__len__\"):\n size = [size]\n\n assert len(av.shape) in [1, 2], \"Only 1D and 2D arrays!\"\n assert len(av.shape) == len(\n size), \"`size` must have same length as `av.shape`!\"\n\n if len(av.shape) == 2:\n return _pad_add_2d(av, size, stlen)\n else:\n return _pad_add_1d(av, size, stlen)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _pad_add_1d(av, size, stlen):\n assert len(size) == 1\n\n padx = _get_pad_left_right(av.shape[0], size[0])\n\n mask = np.zeros(av.shape, dtype=bool)\n mask[stlen:-stlen] = True\n border = av[~mask]\n if av.dtype.name.count(\"complex\"):\n padval = np.average(np.abs(border)) * \\\n np.exp(1j*np.average(np.angle(border)))\n else:\n padval = np.average(border)\n if np.__version__[:3] in [\"1.7\", \"1.8\", \"1.9\"]:\n end_values = ((padval, padval),)\n else:\n end_values = (padval,)\n bv = np.pad(av,\n padx,\n mode=\"linear_ramp\",\n end_values=end_values)\n # roll the array so that the padding values are on the right\n bv = np.roll(bv, -padx[0], 0)\n return bv", "response": "1D component of pad_add"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nremove linear padding from array This is a convenience function that does the opposite of `pad_add`. Parameters ---------- pv : 1D or 2D ndarray The array from which the padding will be removed. size : tuple of length 1 (1D) or 2 (2D), optional The final size of the un-padded array. Defaults to half the size of the input array. 
Returns ------- pv : 1D or 2D ndarray Padded array `av` with pads appended to right and bottom.", "response": "def pad_rem(pv, size=None):\n \"\"\" Removes linear padding from array\n\n This is a convenience function that does the opposite\n of `pad_add`.\n\n Parameters\n ----------\n pv : 1D or 2D ndarray\n The array from which the padding will be removed.\n size : tuple of length 1 (1D) or 2 (2D), optional\n The final size of the un-padded array. Defaults to half the size\n of the input array.\n\n\n Returns\n -------\n pv : 1D or 2D ndarray\n Padded array `av` with pads appended to right and bottom.\n \"\"\"\n if size is None:\n size = list()\n for s in pv.shape:\n assert s % 2 == 0, \"Uneven size; specify correct size of output!\"\n size.append(int(s/2))\n elif not hasattr(size, \"__len__\"):\n size = [size]\n\n assert len(pv.shape) in [1, 2], \"Only 1D and 2D arrays!\"\n assert len(pv.shape) == len(\n size), \"`size` must have same length as `av.shape`!\"\n\n if len(pv.shape) == 2:\n return pv[:size[0], :size[1]]\n else:\n return pv[:size[0]]"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef scan_file(self, filename, apikey):\n url = self.base_url + \"file/scan\"\n params = {'apikey': apikey}\n scanfile = {\"file\": open(filename, 'rb')}\n response = requests.post(url, files=scanfile, params=params)\n rate_limit_clear = self.rate_limit()\n if rate_limit_clear:\n if response.status_code == self.HTTP_OK:\n json_response = response.json()\n return json_response\n elif response.status_code == self.HTTP_RATE_EXCEEDED:\n time.sleep(20)\n else:\n self.logger.error(\"sent: %s, HTTP: %d\", filename, response.status_code)", "response": "Send a file to virus total for assessment\n "} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef binary_report(self, sha256sum, apikey):\n url = self.base_url + \"file/report\"\n params = {\"apikey\": apikey, 
\"resource\": sha256sum}\n rate_limit_clear = self.rate_limit()\n\n if rate_limit_clear:\n response = requests.post(url, data=params)\n\n if response.status_code == self.HTTP_OK:\n json_response = response.json()\n response_code = json_response['response_code']\n return json_response\n elif response.status_code == self.HTTP_RATE_EXCEEDED:\n time.sleep(20)\n else:\n self.logger.warning(\"retrieve report: %s, HTTP code: %d\", os.path.basename(filename), response.status_code)", "response": "retrieve report from file scan\n "} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nsend IP address for list of past malicous domain associations", "response": "def send_ip(self, ipaddr, apikey):\n \"\"\"\n Send IP address for list of past malicous domain associations\n \"\"\"\n url = self.base_url + \"ip-address/report\"\n parameters = {\"ip\": ipaddr, \"apikey\": apikey}\n rate_limit_clear = self.rate_limit()\n if rate_limit_clear:\n response = requests.get(url, params=parameters)\n if response.status_code == self.HTTP_OK:\n json_response = response.json()\n return json_response\n elif response.status_code == self.HTTP_RATE_EXCEEDED:\n time.sleep(20)\n else:\n self.logger.error(\"sent: %s, HTTP: %d\", ipaddr, response.status_code)\n time.sleep(self.public_api_sleep_time)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nsending URLS for list of past malicous associations", "response": "def url_report(self, scan_url, apikey):\n \"\"\"\n Send URLS for list of past malicous associations\n \"\"\"\n url = self.base_url + \"url/report\"\n params = {\"apikey\": apikey, 'resource': scan_url}\n rate_limit_clear = self.rate_limit()\n if rate_limit_clear:\n response = requests.post(url, params=params, headers=self.headers)\n if response.status_code == self.HTTP_OK:\n json_response = response.json()\n return json_response\n elif response.status_code == self.HTTP_RATE_EXCEEDED:\n time.sleep(20)\n else:\n self.logger.error(\"sent: %s, 
HTTP: %d\", scan_url, response.status_code)\n time.sleep(self.public_api_sleep_time)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreading the requirements file and returns a list of packages that are hard and extra packages.", "response": "def _read_requirements(filename, extra_packages):\n \"\"\"Returns a list of package requirements read from the file.\"\"\"\n requirements_file = open(filename).read()\n\n hard_requirements = []\n for line in requirements_file.splitlines():\n if _is_requirement(line):\n if line.find(';') > -1:\n dep, condition = tuple(line.split(';'))\n extra_packages[condition.strip()].append(dep.strip())\n else:\n hard_requirements.append(line.strip())\n return hard_requirements, extra_packages"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning the value of a field in a string separated by a delimiter", "response": "def field(ctx, text, index, delimiter=' '):\n \"\"\"\n Reference a field in string separated by a delimiter\n \"\"\"\n splits = text.split(delimiter)\n\n # remove our delimiters and whitespace\n splits = [f for f in splits if f != delimiter and len(f.strip()) > 0]\n\n index = conversions.to_integer(index, ctx)\n if index < 1:\n raise ValueError('Field index cannot be less than 1')\n\n if index <= len(splits):\n return splits[index-1]\n else:\n return ''"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nconverts the given date to the number of seconds since January 1st 1970 UTC", "response": "def epoch(ctx, datetime):\n \"\"\"\n Converts the given date to the number of seconds since January 1st, 1970 UTC\n \"\"\"\n return conversions.to_decimal(str(conversions.to_datetime(datetime, ctx).timestamp()), ctx)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nread digits from text into TTS", "response": "def read_digits(ctx, text):\n \"\"\"\n Formats digits in text for reading in TTS\n \"\"\"\n 
def chunk(value, chunk_size):\n return [value[i: i + chunk_size] for i in range(0, len(value), chunk_size)]\n\n text = conversions.to_string(text, ctx).strip()\n if not text:\n return ''\n\n # trim off the plus for phone numbers\n if text[0] == '+':\n text = text[1:]\n\n length = len(text)\n\n # ssn\n if length == 9:\n result = ' '.join(text[:3])\n result += ' , ' + ' '.join(text[3:5])\n result += ' , ' + ' '.join(text[5:])\n return result\n\n # triplets, most international phone numbers\n if length % 3 == 0 and length > 3:\n chunks = chunk(text, 3)\n return ' '.join(','.join(chunks))\n\n # quads, credit cards\n if length % 4 == 0:\n chunks = chunk(text, 4)\n return ' '.join(','.join(chunks))\n\n # otherwise, just put a comma between each number\n return ','.join(text)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef remove_first_word(ctx, text):\n text = conversions.to_string(text, ctx).lstrip()\n first = first_word(ctx, text)\n return text[len(first):].lstrip() if first else ''", "response": "Removes the first word from the given text string"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef word(ctx, text, number, by_spaces=False):\n return word_slice(ctx, text, number, conversions.to_integer(number, ctx) + 1, by_spaces)", "response": "Extracts the nth word from the given text string"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef word_count(ctx, text, by_spaces=False):\n text = conversions.to_string(text, ctx)\n by_spaces = conversions.to_boolean(by_spaces, ctx)\n return len(__get_words(text, by_spaces))", "response": "Returns the number of words in the given text"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef word_slice(ctx, text, start, stop=0, by_spaces=False):\n text = conversions.to_string(text, ctx)\n start = 
conversions.to_integer(start, ctx)\n stop = conversions.to_integer(stop, ctx)\n by_spaces = conversions.to_boolean(by_spaces, ctx)\n\n if start == 0:\n raise ValueError(\"Start word cannot be zero\")\n elif start > 0:\n start -= 1 # convert to a zero-based offset\n\n if stop == 0: # zero is treated as no end\n stop = None\n elif stop > 0:\n stop -= 1 # convert to a zero-based offset\n\n words = __get_words(text, by_spaces)\n\n selection = operator.getitem(words, slice(start, stop))\n\n # re-combine selected words with a single space\n return ' '.join(selection)", "response": "Extracts a substring spanning from start to stop."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef format_date(ctx, text):\n dt = conversions.to_datetime(text, ctx)\n return dt.astimezone(ctx.timezone).strftime(ctx.get_date_format(True))", "response": "Takes a single parameter as string and returns it in the format defined by the org\n "} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef format_location(ctx, text):\n text = conversions.to_string(text, ctx)\n return text.split(\">\")[-1].strip()", "response": "Takes a single parameter and returns the name of the leaf boundary"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef regex_group(ctx, text, pattern, group_num):\n text = conversions.to_string(text, ctx)\n pattern = conversions.to_string(pattern, ctx)\n group_num = conversions.to_integer(group_num, ctx)\n\n expression = regex.compile(pattern, regex.UNICODE | regex.IGNORECASE | regex.MULTILINE | regex.V0)\n match = expression.search(text)\n\n if not match:\n return \"\"\n\n if group_num < 0 or group_num > len(match.groups()):\n raise ValueError(\"No such matching group %d\" % group_num)\n\n return match.group(group_num)", "response": "Tries to match the text with the given pattern and returns the value of matching 
group_num"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef decimal_round(number, num_digits, rounding=ROUND_HALF_UP):\n exp = Decimal(10) ** -num_digits\n\n if num_digits >= 0:\n return number.quantize(exp, rounding)\n else:\n return exp * (number / exp).to_integral_value(rounding)", "response": "Round a number to num_digits decimal places."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nparses an ISO8601 formatted datetime from a string value", "response": "def parse_json_date(value):\n \"\"\"\n Parses an ISO8601 formatted datetime from a string value\n \"\"\"\n if not value:\n return None\n\n return datetime.datetime.strptime(value, JSON_DATETIME_FORMAT).replace(tzinfo=pytz.UTC)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nformat a datetime as ISO8601 in UTC with millisecond precision, e.g. 2014-10-03T09:41:12.790Z", "response": "def format_json_date(value):\n \"\"\"\n Formats a datetime as ISO8601 in UTC with millisecond precision, e.g. 
\"2014-10-03T09:41:12.790Z\"\n \"\"\"\n if not value:\n return None\n\n # %f will include 6 microsecond digits\n micro_precision = value.astimezone(pytz.UTC).strftime(JSON_DATETIME_FORMAT)\n\n # only keep the milliseconds portion of the second fraction\n return micro_precision[:-4] + 'Z'"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nremoves all non - printable characters from a text string", "response": "def clean(ctx, text):\n \"\"\"\n Removes all non-printable characters from a text string\n \"\"\"\n text = conversions.to_string(text, ctx)\n return ''.join([c for c in text if ord(c) >= 32])"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\njoins the given text strings into one text string", "response": "def concatenate(ctx, *text):\n \"\"\"\n Joins text strings into one text string\n \"\"\"\n result = ''\n for arg in text:\n result += conversions.to_string(arg, ctx)\n return result"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef fixed(ctx, number, decimals=2, no_commas=False):\n value = _round(ctx, number, decimals)\n format_str = '{:f}' if no_commas else '{:,f}'\n return format_str.format(value)", "response": "Formats the given number in decimal format using a period and commas"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef left(ctx, text, num_chars):\n num_chars = conversions.to_integer(num_chars, ctx)\n if num_chars < 0:\n raise ValueError(\"Number of chars can't be negative\")\n return conversions.to_string(text, ctx)[0:num_chars]", "response": "Returns the first num_chars characters in a text string"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef rept(ctx, text, number_times):\n if number_times < 0:\n raise ValueError(\"Number of times can't be negative\")\n return 
conversions.to_string(text, ctx) * conversions.to_integer(number_times, ctx)", "response": "Repeats text a given number of times"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef right(ctx, text, num_chars):\n num_chars = conversions.to_integer(num_chars, ctx)\n if num_chars < 0:\n raise ValueError(\"Number of chars can't be negative\")\n elif num_chars == 0:\n return ''\n else:\n return conversions.to_string(text, ctx)[-num_chars:]", "response": "Returns the last num_chars characters in a text string"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef substitute(ctx, text, old_text, new_text, instance_num=-1):\n text = conversions.to_string(text, ctx)\n old_text = conversions.to_string(old_text, ctx)\n new_text = conversions.to_string(new_text, ctx)\n\n if instance_num < 0:\n return text.replace(old_text, new_text)\n else:\n splits = text.split(old_text)\n output = splits[0]\n instance = 1\n for split in splits[1:]:\n sep = new_text if instance == instance_num else old_text\n output += sep + split\n instance += 1\n return output", "response": "Substitutes new_text for old_text in a text string"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _unicode(ctx, text):\n text = conversions.to_string(text, ctx)\n if len(text) == 0:\n raise ValueError(\"Text can't be empty\")\n return ord(text[0])", "response": "Returns a numeric code for the first character in a text string"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns a date value", "response": "def date(ctx, year, month, day):\n \"\"\"\n Defines a date value\n \"\"\"\n return _date(conversions.to_integer(year, ctx), conversions.to_integer(month, ctx), conversions.to_integer(day, ctx))"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncalculate the 
number of days months or years between two dates.", "response": "def datedif(ctx, start_date, end_date, unit):\n \"\"\"\n Calculates the number of days, months, or years between two dates.\n \"\"\"\n start_date = conversions.to_date(start_date, ctx)\n end_date = conversions.to_date(end_date, ctx)\n unit = conversions.to_string(unit, ctx).lower()\n\n if start_date > end_date:\n raise ValueError(\"Start date cannot be after end date\")\n\n if unit == 'y':\n return relativedelta(end_date, start_date).years\n elif unit == 'm':\n delta = relativedelta(end_date, start_date)\n return 12 * delta.years + delta.months\n elif unit == 'd':\n return (end_date - start_date).days\n elif unit == 'md':\n return relativedelta(end_date, start_date).days\n elif unit == 'ym':\n return relativedelta(end_date, start_date).months\n elif unit == 'yd':\n return (end_date - start_date.replace(year=end_date.year)).days\n\n raise ValueError(\"Invalid unit value: %s\" % unit)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nmove a date by the given number of months", "response": "def edate(ctx, date, months):\n \"\"\"\n Moves a date by the given number of months\n \"\"\"\n return conversions.to_date_or_datetime(date, ctx) + relativedelta(months=conversions.to_integer(months, ctx))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning a time value", "response": "def time(ctx, hours, minutes, seconds):\n \"\"\"\n Defines a time value\n \"\"\"\n return _time(conversions.to_integer(hours, ctx), conversions.to_integer(minutes, ctx), conversions.to_integer(seconds, ctx))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn the absolute value of a number", "response": "def _abs(ctx, number):\n \"\"\"\n Returns the absolute value of a number\n \"\"\"\n return conversions.to_decimal(abs(conversions.to_decimal(number, ctx)), ctx)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 
function that\nrounds a number down to the nearest integer", "response": "def _int(ctx, number):\n \"\"\"\n Rounds a number down to the nearest integer\n \"\"\"\n return conversions.to_integer(conversions.to_decimal(number, ctx).to_integral_value(ROUND_FLOOR), ctx)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning the maximum value of all arguments", "response": "def _max(ctx, *number):\n \"\"\"\n Returns the maximum value of all arguments\n \"\"\"\n if len(number) == 0:\n raise ValueError(\"Wrong number of arguments\")\n\n result = conversions.to_decimal(number[0], ctx)\n for arg in number[1:]:\n arg = conversions.to_decimal(arg, ctx)\n if arg > result:\n result = arg\n return result"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef mod(ctx, number, divisor):\n number = conversions.to_decimal(number, ctx)\n divisor = conversions.to_decimal(divisor, ctx)\n return number - divisor * _int(ctx, number / divisor)", "response": "Returns the remainder after number is divided by divisor"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning the result of a number raised to a power", "response": "def _power(ctx, number, power):\n \"\"\"\n Returns the result of a number raised to a power\n \"\"\"\n return decimal_pow(conversions.to_decimal(number, ctx), conversions.to_decimal(power, ctx))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn a random integer number between the numbers you specify", "response": "def randbetween(ctx, bottom, top):\n \"\"\"\n Returns a random integer number between the numbers you specify\n \"\"\"\n bottom = conversions.to_integer(bottom, ctx)\n top = conversions.to_integer(top, ctx)\n return random.randint(bottom, top)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _round(ctx, number, num_digits):\n number = conversions.to_decimal(number, ctx)\n 
num_digits = conversions.to_integer(num_digits, ctx)\n\n return decimal_round(number, num_digits, ROUND_HALF_UP)", "response": "Rounds a number to a specified number of digits"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef rounddown(ctx, number, num_digits):\n number = conversions.to_decimal(number, ctx)\n num_digits = conversions.to_integer(num_digits, ctx)\n\n return decimal_round(number, num_digits, ROUND_DOWN)", "response": "Rounds a number down toward zero"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nrounds a number up away from zero", "response": "def roundup(ctx, number, num_digits):\n \"\"\"\n Rounds a number up, away from zero\n \"\"\"\n number = conversions.to_decimal(number, ctx)\n num_digits = conversions.to_integer(num_digits, ctx)\n\n return decimal_round(number, num_digits, ROUND_UP)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _sum(ctx, *number):\n if len(number) == 0:\n raise ValueError(\"Wrong number of arguments\")\n\n result = Decimal(0)\n for arg in number:\n result += conversions.to_decimal(arg, ctx)\n return result", "response": "Returns the sum of all arguments"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ntruncates a number to an integer by removing the fractional part of the number", "response": "def trunc(ctx, number):\n \"\"\"\n Truncates a number to an integer by removing the fractional part of the number\n \"\"\"\n return conversions.to_integer(conversions.to_decimal(number, ctx).to_integral_value(ROUND_DOWN), ctx)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning TRUE if all its arguments evaluate to TRUE", "response": "def _and(ctx, *logical):\n \"\"\"\n Returns TRUE if and only if all its arguments evaluate to TRUE\n \"\"\"\n for arg in logical:\n if not conversions.to_boolean(arg, 
ctx):\n return False\n return True"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _if(ctx, logical_test, value_if_true=0, value_if_false=False):\n return value_if_true if conversions.to_boolean(logical_test, ctx) else value_if_false", "response": "Returns one value if the condition evaluates to TRUE and another value if the condition evaluates to FALSE."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns TRUE if any argument is TRUE", "response": "def _or(ctx, *logical):\n \"\"\"\n Returns TRUE if any argument is TRUE\n \"\"\"\n for arg in logical:\n if conversions.to_boolean(arg, ctx):\n return True\n return False"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef compute_sum_values(i, j, data1, data2):\n sum1_ij = 1.\n for idx, d in zip([i,j], [data1, data2]):\n if isinstance(d, field): sum1_ij *= d.wvalue[idx]\n elif isinstance(d, points): sum1_ij *= d.weights[idx]\n else:\n raise NotImplementedError(\"data type not recognized\")\n sum2_ij = data1.weights[i] * data2.weights[j]\n\n return sum1_ij, sum2_ij", "response": "Compute the sum1_ij and sum2_ij values given the input indices and data instances."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nsets up the internal object variables for the internal object.", "response": "def _setup(self):\n \"\"\"\n Set the binning info we need from the `edges`\n \"\"\"\n dtype = [('inv', 'f8'), ('min', 'f8'), ('max', 'f8'),('N', 'i4'), ('spacing','object')]\n dtype = numpy.dtype(dtype)\n self._info = numpy.empty(self.Ndim, dtype=dtype)\n self.min = self._info['min']\n self.max = self._info['max']\n self.N = self._info['N']\n self.inv = self._info['inv']\n self.spacing = self._info['spacing']\n\n for i, dim in enumerate(self.dims):\n\n self.N[i] = len(self.edges[i])-1\n self.min[i] = 
self.edges[i][0]\n self.max[i] = self.edges[i][-1]\n\n # determine the type of spacing\n self.spacing[i] = None\n lin_diff = numpy.diff(self.edges[i])\n with numpy.errstate(divide='ignore', invalid='ignore'):\n log_diff = numpy.diff(numpy.log10(self.edges[i]))\n if numpy.allclose(lin_diff, lin_diff[0]):\n self.spacing[i] = 'linspace'\n self.inv[i] = self.N[i] * 1.0 / (self.max[i] - self.min[i])\n elif numpy.allclose(log_diff, log_diff[0]):\n self.spacing[i] = 'logspace'\n self.inv[i] = self.N[i] * 1.0 / numpy.log10(self.max[i] / self.min[i])\n\n self.shape = self.N + 2\n\n # store Rmax\n self.Rmax = self.max[0]"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef update_sums(self, r, i, j, data1, data2, sum1, sum2, N=None, centers_sum=None):\n # the summation values for this (r,i,j)\n sum1_ij, sum2_ij = compute_sum_values(i, j, data1, data2)\n\n # digitize\n digr = self.digitize(r, i, j, data1, data2)\n\n if len(digr) == 3 and isinstance(digr[1], dict):\n dig, paircoords, weights = digr\n elif len(digr) == 2 and isinstance(digr[1], dict):\n dig, paircoords = digr\n weights = None\n else:\n dig = digr\n paircoords = None\n weights = None\n\n # sum 1\n def add_one_channel(sum1c, sum1_ijc):\n if numpy.isscalar(sum1_ijc) or sum1_ijc.ndim == 1:\n sum1c.flat[:] += utils.bincount(dig, sum1_ijc, minlength=sum1c.size)\n else:\n for d in range(sum1c.shape[0]):\n sum1c[d].flat[:] += utils.bincount(dig, sum1_ijc[...,d], minlength=sum1c[d].size)\n\n if self.channels:\n if weights is None:\n raise RuntimeError(\"`digitize` of multi channel paircount did not return a weight array for the channels\")\n\n sum1_ij = weights * sum1_ij\n # sum1_ij[ichannel, dig, dim]\n for ichannel in range(len(self.channels)):\n add_one_channel(sum1[ichannel], sum1_ij[ichannel])\n else:\n # sum1_ij[dig, dim]\n add_one_channel(sum1, sum1_ij)\n\n # sum 2, if both data are not points\n if not numpy.isscalar(sum2):\n sum2.flat[:] += 
utils.bincount(dig, sum2_ij, minlength=sum2.size)\n\n if N is not None:\n if not paircoords:\n raise RuntimeError(\"Bin center is requested but not returned by digitize\")\n # update the mean coords\n self._update_mean_coords(dig, N, centers_sum, **paircoords)", "response": "This method is used to update the sum1 and sum2 values for the current record."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning the shapes of the summation arrays linearshape fullshape", "response": "def sum_shapes(self, data1, data2):\n \"\"\"\n Return the shapes of the summation arrays,\n given the input data and shape of the bins\n \"\"\"\n # the linear shape (put extra dimensions first)\n linearshape = [-1] + list(self.shape)\n\n # determine the full shape\n subshapes = [list(d.subshape) for d in [data1, data2] if isinstance(d, field)]\n subshape = []\n if len(subshapes) == 2:\n assert subshapes[0] == subshapes[1]\n subshape = subshapes[0]\n elif len(subshapes) == 1:\n subshape = subshapes[0]\n fullshape = subshape + list(self.shape)\n\n # prepend the shape for different channels\n if self.channels:\n fullshape = [len(self.channels)] + fullshape\n\n return linearshape, fullshape"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _update_mean_coords(self, dig, N, centers_sum, **paircoords):\n if N is None or centers_sum is None: return\n\n N.flat[:] += utils.bincount(dig, 1., minlength=N.size)\n for i, dim in enumerate(self.dims):\n size = centers_sum[i].size\n centers_sum[i].flat[:] += utils.bincount(dig, paircoords[dim], minlength=size)", "response": "Update the mean coordinate sums"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef work(self, i):\n n1, n2 = self.p[i]\n\n # initialize the total arrays for this process\n sum1 = numpy.zeros_like(self.sum1g)\n sum2 = 1.\n if not self.pts_only: sum2 = numpy.zeros_like(self.sum2g)\n\n 
if self.compute_mean_coords:\n N = numpy.zeros_like(self.N)\n centers_sum = [numpy.zeros_like(c) for c in self.centers]\n else:\n N = None; centers_sum = None\n\n if self.bins.enable_fast_node_count:\n # field x points is not supported.\n # because it is more likely need to deal\n # with broadcasting\n sum1attrs = [ d.attr for d in self.data ]\n\n counts, sum1c = n1.count(n2, self.bins.edges,\n attrs=sum1attrs)\n sum1[..., :-1] = sum1c\n sum1[..., -1] = 0\n else:\n\n def callback(r, i, j):\n # just call the binning function, passing the\n # sum arrays to fill in\n self.bins.update_sums(r, i, j, self.data[0], self.data[1], sum1, sum2, N=N, centers_sum=centers_sum)\n\n n1.enum(n2, self.bins.Rmax, process=callback)\n\n if not self.compute_mean_coords:\n return sum1, sum2\n else:\n return sum1, sum2, N, centers_sum", "response": "Internal function that performs the pair - counting\n "} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef reduce(self, sum1, sum2, *args):\n self.sum1g[...] += sum1\n if not self.pts_only: self.sum2g[...] += sum2\n\n if self.compute_mean_coords:\n N, centers_sum = args\n self.N[...] += N\n for i in range(self.bins.Ndim):\n self.centers[i][...] 
+= centers_sum[i]", "response": "This function sums the results from various processors and updates the internal statistics."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef gpdfitnew(x, sort=True, sort_in_place=False, return_quadrature=False):\n if x.ndim != 1 or len(x) <= 1:\n raise ValueError(\"Invalid input array.\")\n\n # check if x should be sorted\n if sort is True:\n if sort_in_place:\n x.sort()\n xsorted = True\n else:\n sort = np.argsort(x)\n xsorted = False\n elif sort is False:\n xsorted = True\n else:\n xsorted = False\n\n n = len(x)\n PRIOR = 3\n m = 30 + int(np.sqrt(n))\n\n bs = np.arange(1, m + 1, dtype=float)\n bs -= 0.5\n np.divide(m, bs, out=bs)\n np.sqrt(bs, out=bs)\n np.subtract(1, bs, out=bs)\n if xsorted:\n bs /= PRIOR * x[int(n/4 + 0.5) - 1]\n bs += 1 / x[-1]\n else:\n bs /= PRIOR * x[sort[int(n/4 + 0.5) - 1]]\n bs += 1 / x[sort[-1]]\n\n ks = np.negative(bs)\n temp = ks[:,None] * x\n np.log1p(temp, out=temp)\n np.mean(temp, axis=1, out=ks)\n\n L = bs / ks\n np.negative(L, out=L)\n np.log(L, out=L)\n L -= ks\n L -= 1\n L *= n\n\n temp = L - L[:,None]\n np.exp(temp, out=temp)\n w = np.sum(temp, axis=1)\n np.divide(1, w, out=w)\n\n # remove negligible weights\n dii = w >= 10 * np.finfo(float).eps\n if not np.all(dii):\n w = w[dii]\n bs = bs[dii]\n # normalise w\n w /= w.sum()\n\n # posterior mean for b\n b = np.sum(bs * w)\n # Estimate for k, note that we return a negative of Zhang and\n # Stephens's k, because it is more common parameterisation.\n temp = (-b) * x # pylint: disable=invalid-unary-operand-type\n np.log1p(temp, out=temp)\n k = np.mean(temp)\n if return_quadrature:\n np.negative(x, out=temp)\n temp = bs[:, None] * temp\n np.log1p(temp, out=temp)\n ks = np.mean(temp, axis=1)\n # estimate for sigma\n sigma = -k / b * n / (n - 0)\n # weakly informative prior for k\n a = 10\n k = k * n / (n+a) + a * 0.5 / (n+a)\n if return_quadrature:\n ks *= n 
/ (n+a)\n ks += a * 0.5 / (n+a)\n\n if return_quadrature:\n return k, sigma, ks, w\n else:\n return k, sigma", "response": "Estimate the parameter values for the Generalized Pareto Distribution."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ninverse Generalised Pareto distribution function.", "response": "def gpinv(p, k, sigma):\n \"\"\"Inverse Generalised Pareto distribution function.\"\"\"\n x = np.empty(p.shape)\n x.fill(np.nan)\n if sigma <= 0:\n return x\n ok = (p > 0) & (p < 1)\n if np.all(ok):\n if np.abs(k) < np.finfo(float).eps:\n np.negative(p, out=x)\n np.log1p(x, out=x)\n np.negative(x, out=x)\n else:\n np.negative(p, out=x)\n np.log1p(x, out=x)\n x *= -k\n np.expm1(x, out=x)\n x /= k\n x *= sigma\n else:\n if np.abs(k) < np.finfo(float).eps:\n # x[ok] = - np.log1p(-p[ok])\n temp = p[ok]\n np.negative(temp, out=temp)\n np.log1p(temp, out=temp)\n np.negative(temp, out=temp)\n x[ok] = temp\n else:\n # x[ok] = np.expm1(-k * np.log1p(-p[ok])) / k\n temp = p[ok]\n np.negative(temp, out=temp)\n np.log1p(temp, out=temp)\n temp *= -k\n np.expm1(temp, out=temp)\n temp /= k\n x[ok] = temp\n x *= sigma\n x[p == 0] = 0\n if k >= 0:\n x[p == 1] = np.inf\n else:\n x[p == 1] = -sigma / k\n return x"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef sumlogs(x, axis=None, out=None):\n maxx = x.max(axis=axis, keepdims=True)\n xnorm = x - maxx\n np.exp(xnorm, out=xnorm)\n out = np.sum(xnorm, axis=axis, out=out)\n if isinstance(out, np.ndarray):\n np.log(out, out=out)\n else:\n out = np.log(out)\n out += np.squeeze(maxx)\n return out", "response": "Sum of vector where numbers are represented by their logarithms."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nadding functions from a library module to the internal dictionary.", "response": "def add_library(self, library):\n \"\"\"\n Adds functions from a library module\n :param library: the 
library module\n :return:\n \"\"\"\n for fn in library.__dict__.copy().values():\n # ignore imported methods and anything beginning __\n if inspect.isfunction(fn) and inspect.getmodule(fn) == library and not fn.__name__.startswith('__'):\n name = fn.__name__.lower()\n\n # strip preceding _ chars used to avoid conflicts with Java keywords\n if name.startswith('_'):\n name = name[1:]\n\n self._functions[name] = fn"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef invoke_function(self, ctx, name, arguments):\n from temba_expressions import EvaluationError, conversions\n\n # find function with given name\n func = self.get_function(name)\n if func is None:\n raise EvaluationError(\"Undefined function: %s\" % name)\n\n args, varargs, defaults = self._get_arg_spec(func)\n\n call_args = []\n passed_args = list(arguments)\n\n for arg in args:\n if arg == 'ctx':\n call_args.append(ctx)\n elif passed_args:\n call_args.append(passed_args.pop(0))\n elif arg in defaults:\n call_args.append(defaults[arg])\n else:\n raise EvaluationError(\"Too few arguments provided for function %s\" % name)\n\n if varargs is not None:\n call_args.extend(passed_args)\n passed_args = []\n\n # any unused arguments?\n if passed_args:\n raise EvaluationError(\"Too many arguments provided for function %s\" % name)\n\n try:\n return func(*call_args)\n except Exception as e:\n pretty_args = []\n for arg in arguments:\n if isinstance(arg, str):\n pretty = '\"%s\"' % arg\n else:\n try:\n pretty = conversions.to_string(arg, ctx)\n except EvaluationError:\n pretty = str(arg)\n pretty_args.append(pretty)\n\n raise EvaluationError(\"Error calling function %s with arguments %s\" % (name, ', '.join(pretty_args)), e)", "response": "Invoke the function with the given name and arguments."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nbuilding a listing of all functions sorted A - Z with their names and descriptions.", 
"response": "def build_listing(self):\n \"\"\"\n Builds a listing of all functions sorted A-Z, with their names and descriptions\n \"\"\"\n def func_entry(name, func):\n args, varargs, defaults = self._get_arg_spec(func)\n\n # add regular arguments\n params = [{'name': str(a), 'optional': a in defaults, 'vararg': False} for a in args if a != 'ctx']\n\n # add possible variable argument\n if varargs:\n params += [{'name': str(varargs), 'optional': False, 'vararg': True}]\n\n return {'name': str(name.upper()),\n 'description': str(func.__doc__).strip(),\n 'params': params}\n\n listing = [func_entry(f_name, f) for f_name, f in self._functions.items()]\n return sorted(listing, key=lambda l: l['name'])"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _get_arg_spec(func):\n args, varargs, keywords, defaults = inspect.getargspec(func)\n\n # build a mapping from argument names to their default values, if any:\n if defaults is None:\n defaults = {}\n else:\n defaulted_args = args[-len(defaults):]\n defaults = {name: val for name, val in zip(defaulted_args, defaults)}\n\n return args, varargs, defaults", "response": "Gets the argument spec of the given function returning defaults as a dict of param names to values\n "} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef check_ocrmypdf(input_file, output_file, *args, env=None):\n \"Run ocrmypdf and confirmed that a valid file was created\"\n\n p, out, err = run_ocrmypdf(input_file, output_file, *args, env=env)\n if p.returncode != 0:\n print('stdout\\n======')\n print(out)\n print('stderr\\n======')\n print(err)\n #assert p.returncode == 0\n #assert os.path.exists(output_file), \"Output file not created\"\n #assert os.stat(output_file).st_size > 100, \"PDF too small or empty\"\n return output_file", "response": "Run ocrmypdf and confirmed that a valid file was created"} {"SOURCE": "codesearchnet", 
"instruction": "Can you create a Python 3 function that\nruns ocrmypdf and lets the caller deal with results", "response": "def run_ocrmypdf(input_file, output_file, *args, env=None):\n \"Run ocrmypdf and let caller deal with results\"\n\n if env is None:\n env = os.environ\n\n p_args = OCRMYPDF + list(args) + [input_file, output_file]\n p = Popen(\n p_args, close_fds=True, stdout=PIPE, stderr=PIPE,\n universal_newlines=True, env=env)\n out, err = p.communicate()\n return p, out, err"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef standardizeMapName(mapName):\n #print(\"foreignName: %s (%s)\"%(mapName, mapName in c.mapNameTranslations))\n #if mapName in c.mapNameTranslations:\n # return c.mapNameTranslations[mapName]\n newName = os.path.basename(mapName)\n newName = newName.split(\".\")[0]\n newName = newName.split(\"(\")[0]\n newName = re.sub(\"[LT]E+$\", \"\", newName)\n newName = re.sub(\"-\", \"\", newName)\n newName = re.sub(' ', '', newName, flags=re.UNICODE)\n foreignName = newName#bytes(mapName, 'utf-16')\n #print(\"foreignName: %s (%s)\"%(foreignName, foreignName in c.mapNameTranslations))\n if foreignName in c.mapNameTranslations:\n return c.mapNameTranslations[foreignName]\n return newName", "response": "prettify the name for pysc2 map lookup"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning a string that contains the Python version and runtime path.", "response": "def env_info():\n '''\n Returns a string that contains the Python version and runtime path.\n '''\n v = sys.version_info\n pyver = f'Python {v.major}.{v.minor}.{v.micro}'\n if v.releaselevel == 'alpha':\n pyver += 'a'\n if v.releaselevel == 'beta':\n pyver += 'b'\n if v.releaselevel == 'candidate':\n pyver += 'rc'\n if v.releaselevel != 'final':\n pyver += str(v.serial)\n return f'{pyver} (env: {sys.prefix})'"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what 
it does\ndef dict2kvlist(o):\n '''\n Serializes a dict-like object into a generator of the flatten list of\n repeating key-value pairs. It is useful when using HMSET method in Redis.\n\n Example:\n >>> list(dict2kvlist({'a': 1, 'b': 2}))\n ['a', 1, 'b', 2]\n '''\n return chain.from_iterable((k, v) for k, v in o.items())", "response": "Serializes a dict - like object into a generator of the flatten list of key - value pairs."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef nmget(o, key_path, def_val=None, path_delimiter='.', null_as_default=True):\n '''\n A short-hand for retrieving a value from nested mappings\n (\"nested-mapping-get\"). At each level it checks if the given \"path\"\n component in the given key exists and return the default value whenever\n fails.\n\n Example:\n >>> o = {'a':{'b':1}, 'x': None}\n >>> nmget(o, 'a', 0)\n {'b': 1}\n >>> nmget(o, 'a.b', 0)\n 1\n >>> nmget(o, 'a/b', 0, '/')\n 1\n >>> nmget(o, 'a.c', 0)\n 0\n >>> nmget(o, 'x', 0)\n 0\n >>> nmget(o, 'x', 0, null_as_default=False)\n None\n '''\n pieces = key_path.split(path_delimiter)\n while pieces:\n p = pieces.pop(0)\n if o is None or p not in o:\n return def_val\n o = o[p]\n if o is None and null_as_default:\n return def_val\n return o", "response": "A short - hand for retrieving a value from nested mappings\n ."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\npatch __init__. 
py to remove version check and append hard - coded version.", "response": "def patch_init_py(base_dir, name, version):\n \"\"\"Patch __init__.py to remove version check and append hard-coded version.\"\"\"\n # Ensure main package dir is there (may be absent in script-only packages)\n package_dir = os.path.join(base_dir, name)\n if not os.path.isdir(package_dir):\n os.makedirs(package_dir)\n # Open top-level __init__.py and read whole file\n init_py = os.path.join(package_dir, '__init__.py')\n log.info(\"patching %s to bake in version '%s'\" % (init_py, version))\n with open(init_py, 'r+') as init_file:\n lines = init_file.readlines()\n # Search for sentinels indicating version checking block\n try:\n begin = lines.index(\"# BEGIN VERSION CHECK\\n\")\n end = lines.index(\"# END VERSION CHECK\\n\")\n except ValueError:\n begin = end = len(lines)\n # Delete existing repo version checking block in file\n init_file.seek(0)\n init_file.writelines(lines[:begin] + lines[end+1:])\n # Append new version attribute to ensure it is authoritative, but only\n # if it is not already there (this happens in pip sdist installs)\n version_cmd = \"__version__ = '{0}'\\n\".format(version)\n if not lines or lines[-1] != version_cmd:\n init_file.write(\"\\n# Automatically added by katversion\\n\")\n init_file.write(version_cmd)\n init_file.truncate()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nchecking if machine is in given state.", "response": "def is_(self, state):\n \"\"\"Check if machine is in given state.\"\"\"\n translator = self._meta['translator']\n state = translator.translate(state)\n return self.actual_state == state"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nchecking if machine can transit to given state.", "response": "def can_be_(self, state):\n \"\"\"Check if machine can transit to given state.\"\"\"\n translator = self._meta['translator']\n state = translator.translate(state)\n\n if self._meta['complete']:\n 
return True\n\n if self.actual_state is None:\n return True\n\n transitions = self._meta['transitions'][self.actual_state]\n return state in transitions"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef force_set(self, state):\n translator = self._meta['translator']\n state = translator.translate(state)\n attr = self._meta['state_attribute_name']\n setattr(self, attr, state)", "response": "Set new state without checking if transition is allowed."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef set_(self, state):\n if not self.can_be_(state):\n state = self._meta['translator'].translate(state)\n raise TransitionError(\n \"Cannot transit from '{actual_value}' to '{value}'.\"\n .format(actual_value=self.actual_state.value, value=state.value)\n )\n\n self.force_set(state)", "response": "Set new state for the current machine."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef generate_getter(value):\n @property\n @wraps(is_)\n def getter(self):\n return self.is_(value)\n\n return getter", "response": "Generate getter for given value."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ngenerate state checker for given value.", "response": "def generate_checker(value):\n \"\"\"Generate state checker for given value.\"\"\"\n @property\n @wraps(can_be_)\n def checker(self):\n return self.can_be_(value)\n\n return checker"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngenerating setter for given value.", "response": "def generate_setter(value):\n \"\"\"Generate setter for given value.\"\"\"\n @wraps(set_)\n def setter(self):\n self.set_(value)\n\n return setter"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef translate(self, value):\n if self._check_if_already_proper(value):\n return 
value\n\n try:\n return self.search_table[value]\n except KeyError:\n raise ValueError(\"Value {value} doesn't match any state.\".format(\n value=value\n ))", "response": "Translate value to enum instance."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _query(self, text):\n params = (\n ('v', self.api_version),\n ('query', text),\n ('lang', self.language),\n ('sessionId', self.session_id),\n ('timezone', self.timezone),\n )\n # store query_response if required\n if self.query_response:\n self.previous_query_response = self.query_response\n\n self.query_response = result = self.session.get(url=self.query_url, params=params).json()\n\n return result", "response": "Query the API for the specified natural language text and return information as JSON."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncheck whether this workspace is a virtualenv.", "response": "def is_venv():\n \"\"\"Check whether this workspace is a virtualenv.\n \"\"\"\n dir_path = os.path.dirname(SRC)\n is_venv_flag = True\n\n if SYS_NAME == \"Windows\":\n executable_list = [\"activate\", \"pip.exe\", \"python.exe\"]\n elif SYS_NAME in [\"Darwin\", \"Linux\"]:\n executable_list = [\"activate\", \"pip\", \"python\"]\n\n for executable in executable_list:\n path = os.path.join(dir_path, BIN_SCRIPTS, executable)\n if not os.path.exists(path):\n is_venv_flag = False\n\n return is_venv_flag"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef find_linux_venv_py_version():\n available_python_version = [\n \"python2.6\",\n \"python2.7\",\n \"python3.3\",\n \"python3.4\",\n \"python3.5\",\n \"python3.6\",\n ]\n dir_path = os.path.dirname(SRC)\n for basename in os.listdir(os.path.join(dir_path, BIN_SCRIPTS)):\n for python_version in available_python_version:\n if python_version in basename:\n return python_version\n raise Exception(\"Can't find virtualenv python 
version!\")", "response": "Find the python version used in this virtualenv."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nfinding where this package should be installed to in this virtualenv.", "response": "def find_venv_DST():\n \"\"\"Find where this package should be installed to in this virtualenv.\n\n For example: ``/path-to-venv/lib/python2.7/site-packages/package-name``\n \"\"\"\n dir_path = os.path.dirname(SRC)\n\n if SYS_NAME == \"Windows\":\n DST = os.path.join(dir_path, \"Lib\", \"site-packages\", PKG_NAME)\n elif SYS_NAME in [\"Darwin\", \"Linux\"]:\n python_version = find_linux_venv_py_version()\n DST = os.path.join(dir_path, \"lib\", python_version, \"site-packages\", PKG_NAME)\n\n return DST"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef find_DST():\n if SYS_NAME == \"Windows\":\n return os.path.join(site.getsitepackages()[1], PKG_NAME)\n elif SYS_NAME in [\"Darwin\", \"Linux\"]:\n return os.path.join(site.getsitepackages()[0], PKG_NAME)", "response": "Find where this package should be installed to."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns md5 value of a file.", "response": "def md5_of_file(abspath):\n \"\"\"Md5 value of a file.\n \"\"\"\n chunk_size = 1024 * 1024\n m = hashlib.md5()\n with open(abspath, \"rb\") as f:\n while True:\n data = f.read(chunk_size)\n if not data:\n break\n m.update(data)\n return m.hexdigest()"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef check_need_install():\n need_install_flag = False\n for root, _, basename_list in os.walk(SRC):\n if os.path.basename(root) != \"__pycache__\":\n for basename in basename_list:\n src = os.path.join(root, basename)\n dst = os.path.join(root.replace(SRC, DST), basename)\n if os.path.exists(dst):\n if md5_of_file(src) != md5_of_file(dst):\n return True\n else:\n return True\n 
return need_install_flag", "response": "Check if installed package are exactly the same to this one."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef install():\n # check installed package\n print(\"Compare to '%s' ...\" % DST)\n need_install_flag = check_need_install()\n if not need_install_flag:\n print(\" package is up-to-date, no need to install.\")\n return\n print(\"Difference been found, start installing ...\")\n\n # remove __pycache__ folder and *.pyc file\n print(\"Remove *.pyc file ...\")\n pyc_folder_list = list()\n for root, _, basename_list in os.walk(SRC):\n if os.path.basename(root) == \"__pycache__\":\n pyc_folder_list.append(root)\n\n for folder in pyc_folder_list:\n shutil.rmtree(folder)\n print(\" all *.pyc file has been removed.\")\n\n # install this package to all python version\n print(\"Uninstall %s from %s ...\" % (PKG_NAME, DST))\n try:\n shutil.rmtree(DST)\n print(\" Successfully uninstall %s\" % PKG_NAME)\n except Exception as e:\n print(\" %s\" % e)\n\n print(\"Install %s to %s ...\" % (PKG_NAME, DST))\n shutil.copytree(SRC, DST)\n print(\" Complete!\")", "response": "Manual install main script."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\nasync def login(\n sess: aiohttp.ClientSession,\n registry_url: yarl.URL,\n credentials: dict,\n scope: str) -> dict:\n '''\n Authorize to the docker registry using the given credentials and token scope, and returns a set\n of required aiohttp.ClientSession.request() keyword arguments for further API requests.\n\n Some registry servers only rely on HTTP Basic Authentication without token-based access controls\n (usually via nginx proxy). We do support them also. 
:)\n '''\n if credentials.get('username') and credentials.get('password'):\n basic_auth = aiohttp.BasicAuth(\n credentials['username'], credentials['password'],\n )\n else:\n basic_auth = None\n realm = registry_url / 'token' # fallback\n service = 'registry' # fallback\n async with sess.get(registry_url / 'v2/', auth=basic_auth) as resp:\n ping_status = resp.status\n www_auth_header = resp.headers.get('WWW-Authenticate')\n if www_auth_header:\n match = re.search(r'realm=\"([^\"]+)\"', www_auth_header)\n if match:\n realm = match.group(1)\n match = re.search(r'service=\"([^\"]+)\"', www_auth_header)\n if match:\n service = match.group(1)\n if ping_status == 200:\n log.debug('docker-registry: {0} -> basic-auth', registry_url)\n return {'auth': basic_auth, 'headers': {}}\n elif ping_status == 404:\n raise RuntimeError(f'Unsupported docker registry: {registry_url}! '\n '(API v2 not implemented)')\n elif ping_status == 401:\n params = {\n 'scope': scope,\n 'offline_token': 'true',\n 'client_id': 'docker',\n 'service': service,\n }\n async with sess.get(realm, params=params, auth=basic_auth) as resp:\n log.debug('docker-registry: {0} -> {1}', registry_url, realm)\n if resp.status == 200:\n data = json.loads(await resp.read())\n token = data.get('token', None)\n return {'auth': None, 'headers': {\n 'Authorization': f'Bearer {token}'\n }}\n raise RuntimeError('authentication for docker registry '\n f'{registry_url} failed')", "response": "Login to the given docker registry using the given credentials and token scope."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning True if this is a git repo.", "response": "def is_git(path):\n \"\"\"Return True if this is a git repo.\"\"\"\n try:\n repo_dir = run_cmd(path, 'git', 'rev-parse', '--git-dir')\n return True if repo_dir else False\n except (OSError, RuntimeError):\n return False"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngenerating a version string 
based on the SCM type and the date.", "response": "def date_version(scm=None):\n \"\"\"Generate a version string based on the SCM type and the date.\"\"\"\n dt = str(time.strftime('%Y%m%d%H%M'))\n if scm:\n version = \"0.0+unknown.{0}.{1}\".format(scm, dt)\n else:\n version = \"0.0+unknown.\" + dt\n return version"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_git_cleaned_branch_name(path):\n # Get name of current branch (or 'HEAD' for a detached HEAD)\n branch_name = run_cmd(path, 'git', 'rev-parse', '--abbrev-ref', 'HEAD')\n branch_name = re.sub(r\"[^A-Za-z0-9]+\", \".\", branch_name.strip())\n return branch_name", "response": "Get the git branch name of the current HEAD in path."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngets the GIT version of the working copy.", "response": "def get_git_version(path):\n \"\"\"Get the GIT version.\"\"\"\n branch_name = get_git_cleaned_branch_name(path)\n # Determine whether working copy is dirty (i.e. 
contains modified files)\n mods = run_cmd(path, 'git', 'status', '--porcelain', '--untracked-files=no')\n dirty = '.dirty' if mods else ''\n # Get a list of all commits on branch, with corresponding branch/tag refs\n # Each line looks something like: \"d3e4d42 (HEAD, master, tag: v0.1)\"\n git_output = run_cmd(path, 'git', 'log', '--pretty=\"%h%d\"')\n commits = git_output.strip().replace('\"', '').split('\\n')\n num_commits_since_branch = len(commits)\n # Short hash of the latest commit\n short_commit_name = commits[0].partition(' ')[0]\n # A valid version is sequence of dotted numbers optionally prefixed by 'v'\n valid_version = re.compile(r'^v?([\\.\\d]+)$')\n\n def tagged_version(commit):\n \"\"\"First tag on commit that is valid version, as a list of numbers.\"\"\"\n refs = commit.partition(' ')[2]\n for ref in refs.lstrip('(').rstrip(')').split(', '):\n if ref.startswith('tag: '):\n tag = ref[5:].lower()\n found = valid_version.match(tag)\n if found:\n return [int(v) for v in found.group(1).split('.') if v]\n return []\n # Walk back along branch and find first valid tagged version (or use 0.0)\n for commit in commits:\n version_numbers = tagged_version(commit)\n if version_numbers:\n break\n else:\n version_numbers = [0, 0]\n # It is a release if current commit has a version tag (and dir is clean)\n release = (commit == commits[0]) and not dirty\n if not release:\n # We are working towards the next (minor) release according to PEP 440\n version_numbers[-1] += 1\n version = '.'.join([str(v) for v in version_numbers])\n if not release:\n # Development version contains extra embellishments\n version = (\"%s.dev%d+%s.%s%s\" % (version, num_commits_since_branch,\n branch_name, short_commit_name, dirty))\n return version"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngets the current version string of this package using SCM tool.", "response": "def get_version_from_scm(path=None):\n \"\"\"Get the current version string of 
this package using SCM tool.\n\n Parameters\n ----------\n path : None or string, optional\n The SCM checkout path (default is current directory)\n\n Returns\n -------\n version : string\n The version string for this package\n\n \"\"\"\n if is_git(path):\n return 'git', get_git_version(path)\n elif is_svn(path):\n return 'svn', get_svn_version(path)\n return None, None"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_version_from_module(module):\n if module is not None:\n # Setup.py will not pass in a module, but creating __version__ from\n # __init__ will.\n module = str(module).split('.', 1)[0]\n try:\n package = pkg_resources.get_distribution(module)\n return package.version\n except pkg_resources.DistributionNotFound:\n # So there you have it the module is not installed.\n pass", "response": "Use pkg_resources to get version of installed module by name."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ncopying from pkginfo 1.4.1 _compat module.", "response": "def _must_decode(value):\n \"\"\"Copied from pkginfo 1.4.1, _compat module.\"\"\"\n if type(value) is bytes:\n try:\n return value.decode('utf-8')\n except UnicodeDecodeError:\n return value.decode('latin1')\n return value"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_version_from_unpacked_sdist(path):\n # This is a condensed version of the relevant code in pkginfo 1.4.1\n try:\n with open(os.path.join(path, 'PKG-INFO')) as f:\n data = f.read()\n except IOError:\n # Could not load path as an unpacked sdist as it had no PKG-INFO file\n return\n fp = StringIO(_must_decode(data))\n msg = Parser().parse(fp)\n value = msg.get('Version')\n if value != 'UNKNOWN':\n return value", "response": "Assume path points to unpacked source distribution and get version."}