Fields per record: repo | path | url | code | docstring | language | partition
---

repo: rsheftel/raccoon | path: raccoon/utils.py | language: python | partition: train
url: https://github.com/rsheftel/raccoon/blob/e5c4b5fb933b51f33aff11e8168c39790e9a7c75/raccoon/utils.py#L30-L53

```python
def assert_series_equal(left, right, data_function=None, data_args=None):
    """
    For unit testing equality of two Series.

    :param left: first Series
    :param right: second Series
    :param data_function: if provided, use this function to compare the Series data
    :param data_args: arguments to pass to the data_function
    :return: nothing
    """
    assert type(left) == type(right)
    if data_function:
        data_args = {} if not data_args else data_args
        data_function(left.data, right.data, **data_args)
    else:
        assert left.data == right.data
    assert left.index == right.index
    assert left.data_name == right.data_name
    assert left.index_name == right.index_name
    assert left.sort == right.sort
    if isinstance(left, rc.ViewSeries):
        assert left.offset == right.offset
    if isinstance(left, rc.Series):
        assert left.blist == right.blist
```
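The dispatch in `assert_series_equal` — fall back to plain `==` unless a custom comparator is supplied — can be sketched stand-alone. The `within_tolerance` helper below is hypothetical, not part of raccoon:

```python
def assert_data_equal(left_data, right_data, data_function=None, data_args=None):
    """Stand-alone sketch of the dispatch: custom comparator if given, else ==."""
    data_args = {} if not data_args else data_args
    if data_function:
        data_function(left_data, right_data, **data_args)
    else:
        assert left_data == right_data

def within_tolerance(a, b, tol=0.0):
    # Illustrative comparator: element-wise equality within a tolerance.
    assert all(abs(x - y) <= tol for x, y in zip(a, b))
```

The same pattern lets callers plug in, say, a NumPy `allclose`-style check without changing the test helper itself.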
---

repo: tensorflow/probability | path: tensorflow_probability/python/mcmc/diagnostic.py | language: python | partition: test
url: https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/mcmc/diagnostic.py#L203-L332

````python
def potential_scale_reduction(chains_states,
                              independent_chain_ndims=1,
                              name=None):
  """Gelman and Rubin (1992)'s potential scale reduction for chain convergence.

  Given `N > 1` states from each of `C > 1` independent chains, the potential
  scale reduction factor, commonly referred to as R-hat, measures convergence of
  the chains (to the same target) by testing for equality of means.

  Specifically, R-hat measures the degree to which variance (of the means)
  between chains exceeds what one would expect if the chains were identically
  distributed. See [Gelman and Rubin (1992)][1]; [Brooks and Gelman (1998)][2].

  Some guidelines:

  * The initial state of the chains should be drawn from a distribution
    overdispersed with respect to the target.
  * If all chains converge to the target, then as `N --> infinity`, R-hat --> 1.
    Before that, R-hat > 1 (except in pathological cases, e.g. if the chain
    paths were identical).
  * The above holds for any number of chains `C > 1`. Increasing `C` improves
    the effectiveness of the diagnostic.
  * Sometimes, R-hat < 1.2 is used to indicate approximate convergence, but of
    course this is problem dependent. See [Brooks and Gelman (1998)][2].
  * R-hat only measures non-convergence of the mean. If higher moments, or
    other statistics are desired, a different diagnostic should be used. See
    [Brooks and Gelman (1998)][2].

  Args:
    chains_states: `Tensor` or Python `list` of `Tensor`s representing the
      state(s) of a Markov Chain at each result step. The `ith` state is
      assumed to have shape `[Ni, Ci1, Ci2,...,CiD] + A`.
      Dimension `0` indexes the `Ni > 1` result steps of the Markov Chain.
      Dimensions `1` through `D` index the `Ci1 x ... x CiD` independent
      chains to be tested for convergence to the same target.
      The remaining dimensions, `A`, can have any shape (even empty).
    independent_chain_ndims: Integer type `Tensor` with value `>= 1` giving the
      number of dimensions, from `dim = 1` to `dim = D`, holding independent
      chain results to be tested for convergence.
    name: `String` name to prepend to created TF ops. Default:
      `potential_scale_reduction`.

  Returns:
    `Tensor` or Python `list` of `Tensor`s representing the R-hat statistic for
    the state(s). Same `dtype` as `state`, and shape equal to
    `state.shape[1 + independent_chain_ndims:]`.

  Raises:
    ValueError: If `independent_chain_ndims < 1`.

  #### Examples

  Diagnosing convergence by monitoring 10 chains that each attempt to
  sample from a 2-variate normal.

  ```python
  import tensorflow as tf
  import tensorflow_probability as tfp
  tfd = tfp.distributions

  target = tfd.MultivariateNormalDiag(scale_diag=[1., 2.])

  # Get 10 (2x) overdispersed initial states.
  initial_state = target.sample(10) * 2.
  ==> (10, 2)

  # Get 1000 samples from the 10 independent chains.
  chains_states, _ = tfp.mcmc.sample_chain(
      num_burnin_steps=200,
      num_results=1000,
      current_state=initial_state,
      kernel=tfp.mcmc.HamiltonianMonteCarlo(
          target_log_prob_fn=target.log_prob,
          step_size=0.05,
          num_leapfrog_steps=20))
  chains_states.shape
  ==> (1000, 10, 2)

  rhat = tfp.mcmc.diagnostic.potential_scale_reduction(
      chains_states, independent_chain_ndims=1)

  # The second dimension needed a longer burn-in.
  rhat.eval()
  ==> [1.05, 1.3]
  ```

  To see why R-hat is reasonable, let `X` be a random variable drawn uniformly
  from the combined states (combined over all chains). Then, in the limit
  `N, C --> infinity`, with `E`, `Var` denoting expectation and variance,

  ```R-hat = ( E[Var[X | chain]] + Var[E[X | chain]] ) / E[Var[X | chain]].```

  Using the law of total variance, the numerator is the variance of the combined
  states, and the denominator is the total variance minus the variance of the
  individual chain means. If the chains are all drawing from the same
  distribution, they will have the same mean, and thus the ratio should be one.

  #### References

  [1]: Andrew Gelman and Donald B. Rubin. Inference from Iterative Simulation
       Using Multiple Sequences. _Statistical Science_, 7(4):457-472, 1992.

  [2]: Stephen P. Brooks and Andrew Gelman. General Methods for Monitoring
       Convergence of Iterative Simulations. _Journal of Computational and
       Graphical Statistics_, 7(4), 1998.
  """
  chains_states_was_list = _is_list_like(chains_states)
  if not chains_states_was_list:
    chains_states = [chains_states]

  # tf.get_static_value returns None iff a constant value (as a numpy
  # array) is not efficiently computable. Therefore, we try constant_value then
  # check for None.
  icn_const_ = tf.get_static_value(
      tf.convert_to_tensor(value=independent_chain_ndims))
  if icn_const_ is not None:
    independent_chain_ndims = icn_const_
    if icn_const_ < 1:
      raise ValueError(
          'Argument `independent_chain_ndims` must be `>= 1`, found: {}'.format(
              independent_chain_ndims))

  with tf.compat.v1.name_scope(name, 'potential_scale_reduction'):
    rhat_list = [
        _potential_scale_reduction_single_state(s, independent_chain_ndims)
        for s in chains_states
    ]
    if chains_states_was_list:
      return rhat_list
    return rhat_list[0]
````
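The estimator described in the docstring can be sketched in plain NumPy. This is the classic unsplit R-hat formula for chains stacked as `(n_samples, n_chains)`, not TFP's exact implementation:

```python
import numpy as np

def rhat(chains):
    """Unsplit R-hat for `chains` of shape (n_samples, n_chains)."""
    n, _ = chains.shape
    chain_means = chains.mean(axis=0)
    within = chains.var(axis=0, ddof=1).mean()     # W: mean within-chain variance
    between = n * chain_means.var(ddof=1)          # B: n * variance of chain means
    var_est = (n - 1) / n * within + between / n   # pooled variance estimate
    return float(np.sqrt(var_est / within))

rng = np.random.default_rng(0)
mixed = rng.normal(size=(1000, 4))   # four well-mixed chains, same target
stuck = mixed.copy()
stuck[:, 0] += 5.0                   # one chain centred far from the others
r_mixed, r_stuck = rhat(mixed), rhat(stuck)
```

With identically distributed chains `r_mixed` sits near 1; the shifted chain inflates the between-chain term and pushes `r_stuck` well above it.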
---

repo: CiscoDevNet/webexteamssdk | path: webexteamssdk/utils.py | language: python | partition: test
url: https://github.com/CiscoDevNet/webexteamssdk/blob/6fc2cc3557e080ba4b2a380664cb2a0532ae45cd/webexteamssdk/utils.py#L279-L283

```python
def strptime(cls, date_string, format=WEBEX_TEAMS_DATETIME_FORMAT):
    """strptime with the Webex Teams DateTime format as the default."""
    return super(WebexTeamsDateTime, cls).strptime(
        date_string, format
    ).replace(tzinfo=ZuluTimeZone())
```
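The pattern — a `datetime` subclass whose `strptime` defaults the format and pins the result to UTC — can be sketched without the library. The format string here is an assumed Zulu-style stand-in for `WEBEX_TEAMS_DATETIME_FORMAT`, and `timezone.utc` stands in for `ZuluTimeZone`:

```python
from datetime import datetime, timezone

# Assumed ISO-8601 "Zulu" format; the real constant lives in webexteamssdk.
WEBEX_TEAMS_DATETIME_FORMAT = "%Y-%m-%dT%H:%M:%S.%fZ"

class ZuluDateTime(datetime):
    """Minimal stand-in: default format plus explicit UTC tzinfo."""
    @classmethod
    def strptime(cls, date_string, format=WEBEX_TEAMS_DATETIME_FORMAT):
        # super().strptime constructs an instance of cls, not bare datetime.
        return super().strptime(date_string, format).replace(tzinfo=timezone.utc)

parsed = ZuluDateTime.strptime("2019-06-01T12:30:45.000Z")
```

Because `datetime.strptime` is a classmethod, the subclass gets instances of itself back for free; only the tzinfo needs patching.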
---

repo: codeinn/vcs | path: vcs/cli.py | language: python | partition: train
url: https://github.com/codeinn/vcs/blob/e6cd94188e9c36d273411bf3adc0584ac6ab92a0/vcs/cli.py#L108-L122

```python
def get_command_class(self, cmd):
    """
    Returns command class from the registry for a given ``cmd``.

    :param cmd: command to run (key at the registry)
    """
    try:
        cmdpath = self.registry[cmd]
    except KeyError:
        raise CommandError("No such command %r" % cmd)
    if isinstance(cmdpath, basestring):
        Command = import_class(cmdpath)
    else:
        Command = cmdpath
    return Command
```
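A stand-alone sketch of the same registry lookup, with an assumed `import_class` that resolves dotted paths via `importlib` (the real helper lives in vcs, and the original accepts Python 2's `basestring`):

```python
import importlib

def import_class(dotted_path):
    """Assumed behavior of vcs's import_class: dotted path -> class object."""
    module_name, _, class_name = dotted_path.rpartition(".")
    return getattr(importlib.import_module(module_name), class_name)

def get_command_class(registry, cmd):
    try:
        target = registry[cmd]
    except KeyError:
        raise KeyError("No such command %r" % cmd)
    # Entries may be dotted-path strings or already-imported classes.
    return import_class(target) if isinstance(target, str) else target

# Illustrative registry mixing both entry styles.
registry = {"decimal": "decimal.Decimal", "mapping": dict}
dec_cls = get_command_class(registry, "decimal")
map_cls = get_command_class(registry, "mapping")
```

Lazy string entries keep startup cheap: a command's module is only imported when that command is actually requested.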
---

repo: brocade/pynos | path: pynos/versions/ver_6/ver_6_0_1/yang/ietf_netconf_monitoring.py | language: python | partition: train
url: https://github.com/brocade/pynos/blob/bd8a34e98f322de3fc06750827d8bbc3a0c00380/pynos/versions/ver_6/ver_6_0_1/yang/ietf_netconf_monitoring.py#L173-L188

```python
def netconf_state_schemas_schema_format(self, **kwargs):
    """Auto Generated Code
    """
    config = ET.Element("config")
    netconf_state = ET.SubElement(config, "netconf-state", xmlns="urn:ietf:params:xml:ns:yang:ietf-netconf-monitoring")
    schemas = ET.SubElement(netconf_state, "schemas")
    schema = ET.SubElement(schemas, "schema")
    identifier_key = ET.SubElement(schema, "identifier")
    identifier_key.text = kwargs.pop('identifier')
    version_key = ET.SubElement(schema, "version")
    version_key.text = kwargs.pop('version')
    format = ET.SubElement(schema, "format")
    format.text = kwargs.pop('format')
    callback = kwargs.pop('callback', self._callback)
    return callback(config)
```
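The ElementTree construction pattern above runs with the standard library alone; the schema values below are illustrative:

```python
import xml.etree.ElementTree as ET

config = ET.Element("config")
# Passing xmlns= as a keyword sets it as a literal attribute, as pynos does.
netconf_state = ET.SubElement(
    config, "netconf-state",
    xmlns="urn:ietf:params:xml:ns:yang:ietf-netconf-monitoring")
schemas = ET.SubElement(netconf_state, "schemas")
schema = ET.SubElement(schemas, "schema")
ET.SubElement(schema, "identifier").text = "ietf-yang-types"  # example value
ET.SubElement(schema, "format").text = "yang"                 # example value
xml = ET.tostring(config).decode()
```

Building the tree with `SubElement` and serializing at the end keeps the nesting explicit, which is why generators like pynos emit code in exactly this shape.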
---

repo: googlefonts/fontbakery | path: Lib/fontbakery/profiles/googlefonts.py | language: python | partition: train
url: https://github.com/googlefonts/fontbakery/blob/b355aea2e619a4477769e060d24c32448aa65399/Lib/fontbakery/profiles/googlefonts.py#L2306-L2332

```python
def com_google_fonts_check_metatada_canonical_style_names(ttFont, font_metadata):
  """METADATA.pb: Font styles are named canonically?"""
  from fontbakery.constants import MacStyle

  def find_italic_in_name_table():
    for entry in ttFont["name"].names:
      if entry.nameID < 256 and "italic" in entry.string.decode(entry.getEncoding()).lower():
        return True
    return False

  def is_italic():
    return (ttFont["head"].macStyle & MacStyle.ITALIC or
            ttFont["post"].italicAngle or
            find_italic_in_name_table())

  if font_metadata.style not in ["italic", "normal"]:
    yield SKIP, ("This check only applies to font styles declared"
                 " as \"italic\" or \"regular\" on METADATA.pb.")
  else:
    if is_italic() and font_metadata.style != "italic":
      yield FAIL, ("The font style is {}"
                   " but it should be italic").format(font_metadata.style)
    elif not is_italic() and font_metadata.style != "normal":
      yield FAIL, ("The font style is {}"
                   " but it should be normal").format(font_metadata.style)
    else:
      yield PASS, "Font styles are named canonically."
```
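The `macStyle & MacStyle.ITALIC` test relies on the OpenType `head.macStyle` bit flags, where bit 0 marks bold and bit 1 marks italic. A minimal sketch of that bit test:

```python
# head.macStyle flags per the OpenType spec: bit 0 = bold, bit 1 = italic.
MAC_STYLE_BOLD = 1 << 0
MAC_STYLE_ITALIC = 1 << 1

def mac_style_is_italic(mac_style):
    # Non-zero after masking means the italic bit is set.
    return bool(mac_style & MAC_STYLE_ITALIC)
```

fontbakery combines this flag with `post.italicAngle` and the name table because any one of the three may be the only place a font declares its slant.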
---

repo: dereneaton/ipyrad | path: ipyrad/analysis/bucky.py | language: python | partition: valid
url: https://github.com/dereneaton/ipyrad/blob/5eeb8a178160f45faf71bf47cec4abe998a575d1/ipyrad/analysis/bucky.py#L269-L324

```python
def run(self, steps=None, ipyclient=None, force=False, quiet=False):
    """
    Submits an ordered list of jobs to a load-balancer to complete
    the following tasks, and reports a progress bar:
    (1) Write nexus files for each locus
    (2) Run mrBayes on each locus to get a posterior of gene trees
    (3) Run mbsum (a bucky tool) on the posterior set of trees
    (4) Run Bucky on the summarized set of trees for all alpha values.

    Parameters:
    -----------
    ipyclient (ipyparallel.Client())
        A connected ipyparallel Client object used to distribute jobs
    force (bool):
        Whether to overwrite existing files with the same name and workdir
        if they exist. Default is False.
    quiet (bool):
        Whether to suppress progress information. Default is False.
    steps (list):
        A list of integers of steps to perform. This is useful if a
        job was interrupted, or you created a new bucky object copy,
        or you wish to run an analysis under a new set of parameters,
        after having run it once. For example, if you finished running
        steps 1 and 2 (write nexus files and infer mrbayes posteriors),
        but you want to rerun steps 3 and 4 with new settings, then you
        could enter `steps=[3,4]` and also `force=True` to run steps 3
        and 4 with a new set of parameters. Default argument is None
        which means run all steps.
    """
    ## require ipyclient
    if not ipyclient:
        raise IPyradWarningExit("an ipyclient object is required")

    ## check the steps argument
    if not steps:
        steps = [1, 2, 3, 4]
    if isinstance(steps, (int, str)):
        steps = [int(i) for i in [steps]]
    if isinstance(steps, list):
        if not all(isinstance(i, int) for i in steps):
            raise IPyradWarningExit("steps must be a list of integers")

    ## run steps ------------------------------------------------------
    ## TODO: wrap this function so it plays nice when interrupted.
    if 1 in steps:
        self.write_nexus_files(force=force, quiet=quiet)
    if 2 in steps:
        self.run_mrbayes(force=force, quiet=quiet, ipyclient=ipyclient)
    if 3 in steps:
        self.run_mbsum(force=force, quiet=quiet, ipyclient=ipyclient)
    if 4 in steps:
        self.run_bucky(force=force, quiet=quiet, ipyclient=ipyclient)

    ## make sure jobs are done if waiting (TODO: maybe make this optional)
    ipyclient.wait()
```
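The steps-argument normalization at the top of `run` can be isolated as a small pure function — a sketch, with `ValueError` standing in for ipyrad's `IPyradWarningExit`:

```python
def normalize_steps(steps):
    """Accept None, an int, a str, or a list of ints; return a list of ints."""
    if not steps:
        return [1, 2, 3, 4]          # default: run everything
    if isinstance(steps, (int, str)):
        steps = [int(i) for i in [steps]]
    if isinstance(steps, list):
        if not all(isinstance(i, int) for i in steps):
            raise ValueError("steps must be a list of integers")
    return steps
```

Accepting scalars and strings keeps the interactive API forgiving (`run(steps=3)` and `run(steps="3")` both work), while the list path still rejects mixed types.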
---

repo: stephan-mclean/KickassTorrentsAPI | path: kat.py | language: python | partition: train
url: https://github.com/stephan-mclean/KickassTorrentsAPI/blob/4d867a090c06ce95b9ed996b48092cb5bfe28bbd/kat.py#L37-L41

```python
def _get_soup(page):
    """Return BeautifulSoup object for given page"""
    request = requests.get(page)
    data = request.text
    return bs4.BeautifulSoup(data)
```
---

repo: pgmpy/pgmpy | path: pgmpy/readwrite/PomdpX.py | language: python | partition: train
url: https://github.com/pgmpy/pgmpy/blob/9381a66aba3c3871d3ccd00672b148d17d63239e/pgmpy/readwrite/PomdpX.py#L612-L624

```python
def add_obs_function(self):
    """
    add observation function tag to pomdpx model

    Return
    ---------------
    string containing the xml for observation function tag
    """
    obs_function = self.model['obs_function']
    for condition in obs_function:
        condprob = etree.SubElement(self.observation_function, 'CondProb')
        self.add_conditions(condition, condprob)
    return self.__str__(self.observation_function)[:-1]
```
---

repo: vmware/pyvmomi | path: pyVim/connect.py | language: python | partition: train
url: https://github.com/vmware/pyvmomi/blob/3ffcb23bf77d757175c0d5216ba9a25345d824cd/pyVim/connect.py#L278-L308

```python
def ConnectNoSSL(host='localhost', port=443, user='root', pwd='',
                 service="hostd", adapter="SOAP", namespace=None, path="/sdk",
                 version=None, keyFile=None, certFile=None, thumbprint=None,
                 b64token=None, mechanism='userpass'):
    """
    Provides a standard method for connecting to a specified server without SSL
    verification. Useful when connecting to servers with self-signed certificates
    or when you wish to ignore SSL altogether. Will attempt to create an unverified
    SSL context and then connect via the Connect method.
    """
    if hasattr(ssl, '_create_unverified_context'):
        sslContext = ssl._create_unverified_context()
    else:
        sslContext = None

    return Connect(host=host,
                   port=port,
                   user=user,
                   pwd=pwd,
                   service=service,
                   adapter=adapter,
                   namespace=namespace,
                   path=path,
                   version=version,
                   keyFile=keyFile,
                   certFile=certFile,
                   thumbprint=thumbprint,
                   sslContext=sslContext,
                   b64token=b64token,
                   mechanism=mechanism)
```
---

repo: bambinos/bambi | path: bambi/models.py | language: python | partition: train
url: https://github.com/bambinos/bambi/blob/b4a0ced917968bb99ca20915317417d708387946/bambi/models.py#L722-L724

```python
def fixed_terms(self):
    '''Return dict of all and only fixed effects in model.'''
    return {k: v for (k, v) in self.terms.items() if not v.random}
```
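The one-line dict comprehension can be demonstrated with stand-in term objects; the `random` attribute mirrors bambi's terms, but the objects here are hypothetical:

```python
from types import SimpleNamespace

# Hypothetical model terms: one fixed effect, one random (group-level) effect.
terms = {
    "x": SimpleNamespace(random=False),
    "subject": SimpleNamespace(random=True),
}
fixed = {k: v for (k, v) in terms.items() if not v.random}
```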
---

repo: Equitable/trump | path: trump/orm.py | language: python | partition: train
url: https://github.com/Equitable/trump/blob/a2802692bc642fa32096374159eea7ceca2947b4/trump/orm.py#L1406-L1419

```python
def _final_data(self):
    """
    Returns
    -------
    A list of tuples representing rows from the datatable's index
    and final column, sorted accordingly.
    """
    dtbl = self.datatable
    objs = object_session(self)
    if isinstance(dtbl, Table):
        return objs.query(dtbl.c.indx, dtbl.c.final).all()
    else:
        raise Exception("Symbol has no datatable, likely need to cache first.")
```
---

repo: MostAwesomeDude/blackjack | path: blackjack.py | language: python | partition: train
url: https://github.com/MostAwesomeDude/blackjack/blob/1346642e353719ab68c0dc3573aa33b688431bf8/blackjack.py#L206-L264

```python
def delete(self, value, key):
    """
    Delete a value from a tree.
    """
    # Base case: The empty tree cannot possibly have the desired value.
    if self is NULL:
        raise KeyError(value)

    direction = cmp(key(value), key(self.value))

    # Because we lean to the left, the left case stands alone.
    if direction < 0:
        if (not self.left.red and
            self.left is not NULL and
            not self.left.left.red):
            self = self.move_red_left()
        # Delete towards the left.
        left = self.left.delete(value, key)
        self = self._replace(left=left)
    else:
        # If we currently lean to the left, lean to the right for now.
        if self.left.red:
            self = self.rotate_right()

        # Best case: The node on our right (which we just rotated there) is a
        # red link and also we were just holding the node to delete. In that
        # case, we just rotated NULL into our current node, and the node to
        # the right is the lone matching node to delete.
        if direction == 0 and self.right is NULL:
            return NULL

        # No? Okay. Move more reds to the right so that we can continue to
        # traverse in that direction. At *this* spot, we do have to confirm
        # that node.right is not NULL...
        if (not self.right.red and
            self.right is not NULL and
            not self.right.left.red):
            self = self.move_red_right()

        if direction > 0:
            # Delete towards the right.
            right = self.right.delete(value, key)
            self = self._replace(right=right)
        else:
            # Annoying case: The current node was the node to delete all
            # along! Use a right-handed minimum deletion. First find the
            # replacement value to rebuild the current node with, then delete
            # the replacement value from the right-side tree. Finally, create
            # the new node with the old value replaced and the replaced value
            # deleted.
            rnode = self.right
            while rnode is not NULL:
                rnode = rnode.left
            right, replacement = self.right.delete_min()
            self = self._replace(value=replacement, right=right)

    return self.balance()
```
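The code calls the Python 2 builtin `cmp` (returning -1, 0, or 1), which Python 3 removed; the standard one-line replacement:

```python
def cmp(a, b):
    """Python 3 stand-in for the removed builtin `cmp` used by `delete`."""
    return (a > b) - (a < b)  # True/False subtract to -1, 0, or 1
```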
---

repo: Neurita/boyle | path: boyle/nifti/mask.py | language: python | partition: valid
url: https://github.com/Neurita/boyle/blob/2dae7199849395a209c887d5f30506e1de8a9ad9/boyle/nifti/mask.py#L193-L241

```python
def apply_mask_4d(image, mask_img):  # , smooth_mm=None, remove_nans=True):
    """Read a Nifti file nii_file and a mask Nifti file.

    Extract the signals in nii_file that are within the mask, the mask indices
    and the mask shape.

    Parameters
    ----------
    image: img-like object or boyle.nifti.NeuroImage or str
        Can either be:
        - a file path to a Nifti image
        - any object with get_data() and get_affine() methods, e.g., nibabel.Nifti1Image.
        If niimg is a string, consider it as a path to Nifti image and
        call nibabel.load on it. If it is an object, check if get_data()
        and get_affine() methods are present, raise TypeError otherwise.

    mask_img: img-like object or boyle.nifti.NeuroImage or str
        3D mask array: True where a voxel should be used.
        See img description.

    smooth_mm: float #TBD
        (optional) The size in mm of the FWHM Gaussian kernel to smooth the signal.
        If True, remove_nans is True.

    remove_nans: bool #TBD
        If remove_nans is True (default), the non-finite values (NaNs and
        infs) found in the images will be replaced by zeros.

    Returns
    -------
    session_series, mask_data

    session_series: numpy.ndarray
        2D array of series with shape (voxel number, image number)

    Note
    ----
    nii_file and mask_file must have the same shape.

    Raises
    ------
    FileNotFound, NiftiFilesNotCompatible
    """
    img = check_img(image)
    mask = check_img(mask_img)
    check_img_compatibility(img, mask, only_check_3d=True)

    vol = get_data(img)
    series, mask_data = _apply_mask_to_4d_data(vol, mask)
    return series, mask_data
```
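The core of the masking step is NumPy boolean indexing: applying a 3D boolean mask to a 4D volume yields a `(n_voxels, T)` series, one row per in-mask voxel. A sketch with a toy volume (the function name here is a stand-in for boyle's private `_apply_mask_to_4d_data`):

```python
import numpy as np

def apply_mask_to_4d(vol4d, mask):
    """(X, Y, Z, T) volume + boolean (X, Y, Z) mask -> (n_voxels, T) series."""
    series = vol4d[mask]  # boolean indexing keeps only in-mask voxels, C-order
    return series, mask

vol = np.arange(24, dtype=float).reshape(2, 2, 2, 3)  # tiny 4D "image", T=3
mask = np.zeros((2, 2, 2), dtype=bool)
mask[0, 0, 0] = mask[1, 1, 1] = True
series, _ = apply_mask_to_4d(vol, mask)
```

Each row of `series` is one voxel's full time course, which is exactly the layout downstream signal-extraction code wants.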
heigeo/climata | climata/huc8/__init__.py | https://github.com/heigeo/climata/blob/2028bdbd40e1c8985b0b62f7cb969ce7dfa8f1bd/climata/huc8/__init__.py#L23-L46 | def get_huc8(prefix):
"""
Return all HUC8s matching the given prefix (e.g. 1801) or basin name
(e.g. Klamath)
"""
if not prefix.isdigit():
# Look up hucs by name
name = prefix
prefix = None
for row in hucs:
if row.basin.lower() == name.lower():
# Use most general huc if two have the same name
if prefix is None or len(row.huc) < len(prefix):
prefix = row.huc
if prefix is None:
return []
huc8s = []
for row in hucs:
# Return all 8-digit hucs with given prefix
if len(row.huc) == 8 and row.huc.startswith(prefix):
huc8s.append(row.huc)
return huc8s | [
"def",
"get_huc8",
"(",
"prefix",
")",
":",
"if",
"not",
"prefix",
".",
"isdigit",
"(",
")",
":",
"# Look up hucs by name",
"name",
"=",
"prefix",
"prefix",
"=",
"None",
"for",
"row",
"in",
"hucs",
":",
"if",
"row",
".",
"basin",
".",
"lower",
"(",
"... | Return all HUC8s matching the given prefix (e.g. 1801) or basin name
(e.g. Klamath) | [
"Return",
"all",
"HUC8s",
"matching",
"the",
"given",
"prefix",
"(",
"e",
".",
"g",
".",
"1801",
")",
"or",
"basin",
"name",
"(",
"e",
".",
"g",
".",
"Klamath",
")"
] | python | train |
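The `get_huc8` row above depends on a module-level `hucs` table loaded from data. A self-contained sketch of the same lookup logic — the `Huc` rows here are invented stand-ins, not real hydrologic unit codes:

```python
from collections import namedtuple

# Hypothetical stand-in for the module's real hucs table (loaded from data files).
Huc = namedtuple('Huc', ['huc', 'basin'])
hucs = [
    Huc('1801', 'Klamath'),
    Huc('18010201', 'Upper Klamath'),
    Huc('18010204', 'Lost'),
    Huc('18020001', 'Sacramento Headwaters'),
]

def get_huc8(prefix):
    if not prefix.isdigit():
        # Resolve a basin name to its most general (shortest) HUC prefix.
        name, prefix = prefix, None
        for row in hucs:
            if row.basin.lower() == name.lower() and (
                    prefix is None or len(row.huc) < len(prefix)):
                prefix = row.huc
        if prefix is None:
            return []
    # Return all 8-digit HUCs under that prefix.
    return [row.huc for row in hucs
            if len(row.huc) == 8 and row.huc.startswith(prefix)]

print(get_huc8('Klamath'))  # ['18010201', '18010204']
```

Note the name branch deliberately keeps the shortest matching HUC, so a basin name shared by several units resolves to the most general one.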
ibm-watson-data-lab/ibmseti | ibmseti/compamp.py | https://github.com/ibm-watson-data-lab/ibmseti/blob/3361bc0adb4770dc7a554ed7cda292503892acee/ibmseti/compamp.py#L81-L91 | def headers(self):
'''
This returns all headers in the data file. There should be one for each
half_frame in the file (typically 129).
'''
first_header = self.header()
single_compamp_data = np.frombuffer(self.data, dtype=np.int8)\
.reshape((first_header['number_of_half_frames'], first_header['half_frame_bytes']))
return [self._read_half_frame_header(row) for row in single_compamp_data] | [
"def",
"headers",
"(",
"self",
")",
":",
"first_header",
"=",
"self",
".",
"header",
"(",
")",
"single_compamp_data",
"=",
"np",
".",
"frombuffer",
"(",
"self",
".",
"data",
",",
"dtype",
"=",
"np",
".",
"int8",
")",
".",
"reshape",
"(",
"(",
"first_... | This returns all headers in the data file. There should be one for each
half_frame in the file (typically 129). | [
"This",
"returns",
"all",
"headers",
"in",
"the",
"data",
"file",
".",
"There",
"should",
"be",
"one",
"for",
"each",
"half_frame",
"in",
"the",
"file",
"(",
"typically",
"129",
")",
"."
] | python | train |
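The `headers` row above relies on `np.frombuffer(...).reshape(...)` to view one flat byte buffer as fixed-size half-frame rows. The same slicing can be sketched with plain `bytes` — the 8-byte frame size and the one-byte "header" below are invented for illustration, not the real compamp layout:

```python
HALF_FRAME_BYTES = 8   # invented for illustration; real files use larger frames
NUM_HALF_FRAMES = 3

def read_half_frame_header(row):
    # Pretend the first byte of each half-frame carries a sequence number.
    return {'sequence': row[0]}

data = bytes(range(HALF_FRAME_BYTES * NUM_HALF_FRAMES))
rows = [data[i * HALF_FRAME_BYTES:(i + 1) * HALF_FRAME_BYTES]
        for i in range(NUM_HALF_FRAMES)]
headers = [read_half_frame_header(row) for row in rows]
print(headers)  # [{'sequence': 0}, {'sequence': 8}, {'sequence': 16}]
```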
tBaxter/tango-happenings | build/lib/happenings/views.py | https://github.com/tBaxter/tango-happenings/blob/cb3c49ea39e0a6cef9c6ffb534c2fbf401139ba2/build/lib/happenings/views.py#L181-L199 | def event_update_list(request, slug):
"""
Returns a list view of updates for a given event.
If the event is over, it will be in chronological order.
If the event is upcoming or still going,
it will be in reverse chronological order.
"""
event = get_object_or_404(Event, slug=slug)
updates = Update.objects.filter(event__slug=slug)
if event.recently_ended():
# if the event is over, use chronological order
updates = updates.order_by('id')
else:
# if not, use reverse chronological
updates = updates.order_by('-id')
return render(request, 'happenings/updates/update_list.html', {
'event': event,
'object_list': updates,
}) | [
"def",
"event_update_list",
"(",
"request",
",",
"slug",
")",
":",
"event",
"=",
"get_object_or_404",
"(",
"Event",
",",
"slug",
"=",
"slug",
")",
"updates",
"=",
"Update",
".",
"objects",
".",
"filter",
"(",
"event__slug",
"=",
"slug",
")",
"if",
"event... | Returns a list view of updates for a given event.
If the event is over, it will be in chronological order.
If the event is upcoming or still going,
it will be in reverse chronological order. | [
"Returns",
"a",
"list",
"view",
"of",
"updates",
"for",
"a",
"given",
"event",
".",
"If",
"the",
"event",
"is",
"over",
"it",
"will",
"be",
"in",
"chronological",
"order",
".",
"If",
"the",
"event",
"is",
"upcoming",
"or",
"still",
"going",
"it",
"will... | python | valid |
jbasko/configmanager | configmanager/sections.py | https://github.com/jbasko/configmanager/blob/1d7229ce367143c7210d8e5f0782de03945a1721/configmanager/sections.py#L496-L504 | def is_default(self):
"""
``True`` if values of all config items in this section and its subsections
have their values equal to defaults or have no value set.
"""
for _, item in self.iter_items(recursive=True):
if not item.is_default:
return False
return True | [
"def",
"is_default",
"(",
"self",
")",
":",
"for",
"_",
",",
"item",
"in",
"self",
".",
"iter_items",
"(",
"recursive",
"=",
"True",
")",
":",
"if",
"not",
"item",
".",
"is_default",
":",
"return",
"False",
"return",
"True"
] | ``True`` if values of all config items in this section and its subsections
have their values equal to defaults or have no value set. | [
"True",
"if",
"values",
"of",
"all",
"config",
"items",
"in",
"this",
"section",
"and",
"its",
"subsections",
"have",
"their",
"values",
"equal",
"to",
"defaults",
"or",
"have",
"no",
"value",
"set",
"."
] | python | train |
vicalloy/lbutils | lbutils/qs.py | https://github.com/vicalloy/lbutils/blob/66ae7e73bc939f073cdc1b91602a95e67caf4ba6/lbutils/qs.py#L28-L37 | def get_sum(qs, field):
"""
get sum for queryset.
``qs``: queryset
``field``: The field name to sum.
"""
sum_field = '%s__sum' % field
qty = qs.aggregate(Sum(field))[sum_field]
return qty if qty else 0 | [
"def",
"get_sum",
"(",
"qs",
",",
"field",
")",
":",
"sum_field",
"=",
"'%s__sum'",
"%",
"field",
"qty",
"=",
"qs",
".",
"aggregate",
"(",
"Sum",
"(",
"field",
")",
")",
"[",
"sum_field",
"]",
"return",
"qty",
"if",
"qty",
"else",
"0"
] | get sum for queryset.
``qs``: queryset
``field``: The field name to sum. | [
"get",
"sum",
"for",
"queryset",
"."
] | python | train |
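A detail worth noting in the `get_sum` row above: Django's `Sum` aggregate yields `None` for an empty queryset, which is why the falsy result is mapped to 0. A plain-Python analogue of that fallback (the row dicts are hypothetical, since Django is assumed but not shown):

```python
def get_sum(rows, field):
    values = [row[field] for row in rows]
    qty = sum(values) if values else None  # Sum() yields None on an empty queryset
    return qty if qty else 0

print(get_sum([{'qty': 2}, {'qty': 3}], 'qty'))  # 5
print(get_sum([], 'qty'))                        # 0
```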
bioasp/caspo | caspo/core/graph.py | https://github.com/bioasp/caspo/blob/a68d1eace75b9b08f23633d1fb5ce6134403959e/caspo/core/graph.py#L31-L45 | def from_tuples(cls, tuples):
"""
Creates a graph from an iterable of tuples describing edges like (source, target, sign)
Parameters
----------
tuples : iterable[(str,str,int))]
Tuples describing signed and directed edges
Returns
-------
caspo.core.graph.Graph
Created object instance
"""
return cls(it.imap(lambda (source, target, sign): (source, target, {'sign': sign}), tuples)) | [
"def",
"from_tuples",
"(",
"cls",
",",
"tuples",
")",
":",
"return",
"cls",
"(",
"it",
".",
"imap",
"(",
"lambda",
"(",
"source",
",",
"target",
",",
"sign",
")",
":",
"(",
"source",
",",
"target",
",",
"{",
"'sign'",
":",
"sign",
"}",
")",
",",
... | Creates a graph from an iterable of tuples describing edges like (source, target, sign)
Parameters
----------
tuples : iterable[(str,str,int))]
Tuples describing signed and directed edges
Returns
-------
caspo.core.graph.Graph
Created object instance | [
"Creates",
"a",
"graph",
"from",
"an",
"iterable",
"of",
"tuples",
"describing",
"edges",
"like",
"(",
"source",
"target",
"sign",
")"
] | python | train |
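The `from_tuples` row above uses `it.imap` and a tuple-unpacking lambda, both of which are Python-2-only syntax. Under Python 3 the same mapping can be written with `itertools.starmap` (a sketch, not the library's actual code):

```python
import itertools as it

def edges_from_tuples(tuples):
    # starmap spreads each (source, target, sign) tuple into the lambda's
    # arguments, replacing Python 2's tuple-unpacking lambda.
    return it.starmap(
        lambda source, target, sign: (source, target, {'sign': sign}),
        tuples)

edges = list(edges_from_tuples([('a', 'b', 1), ('b', 'c', -1)]))
print(edges)  # [('a', 'b', {'sign': 1}), ('b', 'c', {'sign': -1})]
```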
jd/tenacity | tenacity/compat.py | https://github.com/jd/tenacity/blob/354c40b7dc8e728c438668100dd020b65c84dfc6/tenacity/compat.py#L194-L212 | def retry_dunder_call_accept_old_params(fn):
"""Decorate cls.__call__ method to accept old "retry" signature."""
@_utils.wraps(fn)
def new_fn(self, attempt=_unset, retry_state=None):
if retry_state is None:
from tenacity import RetryCallState
if attempt is _unset:
raise _make_unset_exception('retry', attempt=attempt)
retry_state_passed_as_non_kwarg = (
attempt is not _unset and
isinstance(attempt, RetryCallState))
if retry_state_passed_as_non_kwarg:
retry_state = attempt
else:
warn_about_dunder_non_retry_state_deprecation(fn, stacklevel=2)
retry_state = RetryCallState(None, None, (), {})
retry_state.outcome = attempt
return fn(self, retry_state=retry_state)
return new_fn | [
"def",
"retry_dunder_call_accept_old_params",
"(",
"fn",
")",
":",
"@",
"_utils",
".",
"wraps",
"(",
"fn",
")",
"def",
"new_fn",
"(",
"self",
",",
"attempt",
"=",
"_unset",
",",
"retry_state",
"=",
"None",
")",
":",
"if",
"retry_state",
"is",
"None",
":"... | Decorate cls.__call__ method to accept old "retry" signature. | [
"Decorate",
"cls",
".",
"__call__",
"method",
"to",
"accept",
"old",
"retry",
"signature",
"."
] | python | train |
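The compatibility shim above follows a reusable pattern: accept the legacy positional argument while steering callers toward the new keyword. A stripped-down sketch of that pattern with invented names (not tenacity's real API):

```python
import functools

def accept_old_param(fn):
    @functools.wraps(fn)
    def wrapper(self, attempt=None, retry_state=None):
        if retry_state is None:
            # A legacy caller passed the value positionally as `attempt`.
            retry_state = attempt
        return fn(self, retry_state=retry_state)
    return wrapper

class Demo:
    @accept_old_param
    def __call__(self, retry_state=None):
        return retry_state

demo = Demo()
print(demo('legacy'), demo(retry_state='new'))  # legacy new
```

The real shim additionally wraps the legacy value in a `RetryCallState` and emits a deprecation warning; this sketch keeps only the argument-routing idea.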
pypyr/pypyr-cli | pypyr/steps/echo.py | https://github.com/pypyr/pypyr-cli/blob/4003f999cd5eb030b4c7407317de728f5115a80f/pypyr/steps/echo.py#L8-L34 | def run_step(context):
"""Simple echo. Outputs context['echoMe'].
Args:
context: dictionary-like. context is mandatory.
context must contain key 'echoMe'
context['echoMe'] will echo the value to logger.
This logger could well be stdout.
When you execute the pipeline, it should look something like this:
pypyr [name here] 'echoMe=test', assuming a keyvaluepair context parser.
"""
logger.debug("started")
assert context, ("context must be set for echo. Did you set "
"'echoMe=text here'?")
context.assert_key_exists('echoMe', __name__)
if isinstance(context['echoMe'], str):
val = context.get_formatted('echoMe')
else:
val = context['echoMe']
logger.info(val)
logger.debug("done") | [
"def",
"run_step",
"(",
"context",
")",
":",
"logger",
".",
"debug",
"(",
"\"started\"",
")",
"assert",
"context",
",",
"(",
"\"context must be set for echo. Did you set \"",
"\"'echoMe=text here'?\"",
")",
"context",
".",
"assert_key_exists",
"(",
"'echoMe'",
",",
... | Simple echo. Outputs context['echoMe'].
Args:
context: dictionary-like. context is mandatory.
context must contain key 'echoMe'
context['echoMe'] will echo the value to logger.
This logger could well be stdout.
When you execute the pipeline, it should look something like this:
pypyr [name here] 'echoMe=test', assuming a keyvaluepair context parser. | [
"Simple",
"echo",
".",
"Outputs",
"context",
"[",
"echoMe",
"]",
"."
] | python | train |
great-expectations/great_expectations | great_expectations/data_asset/file_data_asset.py | https://github.com/great-expectations/great_expectations/blob/08385c40529d4f14a1c46916788aecc47f33ee9d/great_expectations/data_asset/file_data_asset.py#L147-L256 | def expect_file_line_regex_match_count_to_be_between(self,
regex,
expected_min_count=0,
expected_max_count=None,
skip=None,
mostly=None,
null_lines_regex=r"^\s*$",
result_format=None,
include_config=False,
catch_exceptions=None,
meta=None,
_lines=None):
"""
Expect the number of times a regular expression appears on each line of
a file to be between a maximum and minimum value.
Args:
regex: \
A string that can be compiled as valid regular expression to match
expected_min_count (None or nonnegative integer): \
Specifies the minimum number of times regex is expected to appear
on each line of the file
expected_max_count (None or nonnegative integer): \
Specifies the maximum number of times regex is expected to appear
on each line of the file
Keyword Args:
skip (None or nonnegative integer): \
Integer specifying the first lines in the file the method should
skip before assessing expectations
mostly (None or number between 0 and 1): \
Specifies an acceptable error for expectations. If the percentage
of unexpected lines is less than mostly, the method still returns
true even if all lines don't match the expectation criteria.
null_lines_regex (valid regular expression or None): \
If not None, a regex to skip lines as null. Defaults to empty or whitespace-only lines.
Other Parameters:
result_format (str or None): \
Which output mode to use: `BOOLEAN_ONLY`, `BASIC`, `COMPLETE`,
or `SUMMARY`. For more detail, see :ref:`result_format <result_format>`.
include_config (boolean): \
If True, then include the expectation config as part of the
result object. For more detail, see :ref:`include_config`.
catch_exceptions (boolean or None): \
If True, then catch exceptions and include them as part of the
result object. For more detail, see :ref:`catch_exceptions`.
meta (dict or None): \
A JSON-serializable dictionary (nesting allowed) that will be
included in the output without modification. For more detail,
see :ref:`meta`.
_lines (list): \
The lines over which to operate (provided by the file_lines_map_expectation decorator)
Returns:
A JSON-serializable expectation result object.
Exact fields vary depending on the values passed to
:ref:`result_format <result_format>` and :ref:`include_config`,
:ref:`catch_exceptions`, and :ref:`meta`.
"""
try:
comp_regex = re.compile(regex)
except:
raise ValueError("Must enter valid regular expression for regex")
if expected_min_count != None:
try:
assert float(expected_min_count).is_integer()
assert float(expected_min_count) >= 0
except:
raise ValueError("expected_min_count must be a non-negative \
integer or None")
if expected_max_count != None:
try:
assert float(expected_max_count).is_integer()
assert float(expected_max_count) >= 0
except:
raise ValueError("expected_max_count must be a non-negative \
integer or None")
if expected_max_count != None and expected_min_count != None:
try:
assert expected_max_count >= expected_min_count
except:
raise ValueError("expected_max_count must be greater than or \
equal to expected_min_count")
if expected_max_count != None and expected_min_count != None:
truth_list = [True if(len(comp_regex.findall(line)) >= expected_min_count and \
len(comp_regex.findall(line)) <= expected_max_count) else False \
for line in _lines]
elif expected_max_count != None:
truth_list = [True if(len(comp_regex.findall(line)) <= expected_max_count) else False \
for line in _lines]
elif expected_min_count != None:
truth_list = [True if(len(comp_regex.findall(line)) >= expected_min_count) else False \
for line in _lines]
else:
truth_list = [True for line in _lines]
return truth_list | [
"def",
"expect_file_line_regex_match_count_to_be_between",
"(",
"self",
",",
"regex",
",",
"expected_min_count",
"=",
"0",
",",
"expected_max_count",
"=",
"None",
",",
"skip",
"=",
"None",
",",
"mostly",
"=",
"None",
",",
"null_lines_regex",
"=",
"r\"^\\s*$\"",
",... | Expect the number of times a regular expression appears on each line of
a file to be between a maximum and minimum value.
Args:
regex: \
A string that can be compiled as valid regular expression to match
expected_min_count (None or nonnegative integer): \
Specifies the minimum number of times regex is expected to appear
on each line of the file
expected_max_count (None or nonnegative integer): \
Specifies the maximum number of times regex is expected to appear
on each line of the file
Keyword Args:
skip (None or nonnegative integer): \
Integer specifying the first lines in the file the method should
skip before assessing expectations
mostly (None or number between 0 and 1): \
Specifies an acceptable error for expectations. If the percentage
of unexpected lines is less than mostly, the method still returns
true even if all lines don't match the expectation criteria.
null_lines_regex (valid regular expression or None): \
If not None, a regex to skip lines as null. Defaults to empty or whitespace-only lines.
Other Parameters:
result_format (str or None): \
Which output mode to use: `BOOLEAN_ONLY`, `BASIC`, `COMPLETE`,
or `SUMMARY`. For more detail, see :ref:`result_format <result_format>`.
include_config (boolean): \
If True, then include the expectation config as part of the
result object. For more detail, see :ref:`include_config`.
catch_exceptions (boolean or None): \
If True, then catch exceptions and include them as part of the
result object. For more detail, see :ref:`catch_exceptions`.
meta (dict or None): \
A JSON-serializable dictionary (nesting allowed) that will be
included in the output without modification. For more detail,
see :ref:`meta`.
_lines (list): \
The lines over which to operate (provided by the file_lines_map_expectation decorator)
Returns:
A JSON-serializable expectation result object.
Exact fields vary depending on the values passed to
:ref:`result_format <result_format>` and :ref:`include_config`,
:ref:`catch_exceptions`, and :ref:`meta`. | [
"Expect",
"the",
"number",
"of",
"times",
"a",
"regular",
"expression",
"appears",
"on",
"each",
"line",
"of",
"a",
"file",
"to",
"be",
"between",
"a",
"maximum",
"and",
"minimum",
"value",
"."
] | python | train |
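Stripped of the validation and result-formatting machinery, the per-line check in the row above reduces to counting regex matches on each line and testing the count against the bounds. A minimal sketch (not great_expectations' actual API):

```python
import re

def line_match_count_between(lines, regex, lo=0, hi=None):
    # For each line: is the number of regex matches within [lo, hi]?
    # hi=None means unbounded above, mirroring expected_max_count=None.
    comp = re.compile(regex)
    truth = []
    for line in lines:
        n = len(comp.findall(line))
        truth.append(n >= lo and (hi is None or n <= hi))
    return truth

print(line_match_count_between(['aa', 'a', ''], 'a', lo=1, hi=2))  # [True, True, False]
```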
SpockBotMC/SpockBot | spockbot/mcp/yggdrasil.py | https://github.com/SpockBotMC/SpockBot/blob/f89911551f18357720034fbaa52837a0d09f66ea/spockbot/mcp/yggdrasil.py#L123-L141 | def invalidate(self):
"""
Invalidate access tokens with a client/access token pair
Returns:
dict: Empty or error dict
"""
endpoint = '/invalidate'
payload = {
'accessToken': self.access_token,
'clientToken': self.client_token,
}
self._ygg_req(endpoint, payload)
self.client_token = ''
self.access_token = ''
self.available_profiles = []
self.selected_profile = {}
return True | [
"def",
"invalidate",
"(",
"self",
")",
":",
"endpoint",
"=",
"'/invalidate'",
"payload",
"=",
"{",
"'accessToken'",
":",
"self",
".",
"access_token",
",",
"'clientToken'",
":",
"self",
".",
"client_token",
",",
"}",
"self",
".",
"_ygg_req",
"(",
"endpoint",
... | Invalidate access tokens with a client/access token pair
Returns:
dict: Empty or error dict | [
"Invalidate",
"access",
"tokens",
"with",
"a",
"client",
"/",
"access",
"token",
"pair"
] | python | train |
KarchinLab/probabilistic2020 | prob2020/python/process_result.py | https://github.com/KarchinLab/probabilistic2020/blob/5d70583b0a7c07cfe32e95f3a70e05df412acb84/prob2020/python/process_result.py#L48-L90 | def handle_oncogene_results(permutation_result, num_permutations):
"""Takes in output from multiprocess_permutation function and converts to
a better formatted dataframe.
Parameters
----------
permutation_result : list
output from multiprocess_permutation
Returns
-------
permutation_df : pd.DataFrame
formatted output suitable to save
"""
mycols = ['gene', 'num recurrent', 'position entropy',
'mean vest score', 'entropy p-value',
'vest p-value', 'Total Mutations', 'Unmapped to Ref Tx']
permutation_df = pd.DataFrame(permutation_result, columns=mycols)
# get benjamani hochberg adjusted p-values
permutation_df['entropy BH q-value'] = mypval.bh_fdr(permutation_df['entropy p-value'])
permutation_df['vest BH q-value'] = mypval.bh_fdr(permutation_df['vest p-value'])
# combine p-values
permutation_df['tmp entropy p-value'] = permutation_df['entropy p-value']
permutation_df['tmp vest p-value'] = permutation_df['vest p-value']
permutation_df.loc[permutation_df['entropy p-value']==0, 'tmp entropy p-value'] = 1. / num_permutations
permutation_df.loc[permutation_df['vest p-value']==0, 'tmp vest p-value'] = 1. / num_permutations
permutation_df['combined p-value'] = permutation_df[['entropy p-value', 'vest p-value']].apply(mypval.fishers_method, axis=1)
permutation_df['combined BH q-value'] = mypval.bh_fdr(permutation_df['combined p-value'])
del permutation_df['tmp vest p-value']
del permutation_df['tmp entropy p-value']
# order output
permutation_df = permutation_df.set_index('gene', drop=False) # make sure genes are indices
permutation_df['num recurrent'] = permutation_df['num recurrent'].fillna(-1).astype(int) # fix dtype issue
col_order = ['gene', 'Total Mutations', 'Unmapped to Ref Tx',
'num recurrent', 'position entropy',
'mean vest score', 'entropy p-value',
'vest p-value', 'combined p-value', 'entropy BH q-value',
'vest BH q-value', 'combined BH q-value']
permutation_df = permutation_df.sort_values(by=['combined p-value'])
return permutation_df[col_order] | [
"def",
"handle_oncogene_results",
"(",
"permutation_result",
",",
"num_permutations",
")",
":",
"mycols",
"=",
"[",
"'gene'",
",",
"'num recurrent'",
",",
"'position entropy'",
",",
"'mean vest score'",
",",
"'entropy p-value'",
",",
"'vest p-value'",
",",
"'Total Mutat... | Takes in output from multiprocess_permutation function and converts to
a better formatted dataframe.
Parameters
----------
permutation_result : list
output from multiprocess_permutation
Returns
-------
permutation_df : pd.DataFrame
formatted output suitable to save | [
"Takes",
"in",
"output",
"from",
"multiprocess_permutation",
"function",
"and",
"converts",
"to",
"a",
"better",
"formatted",
"dataframe",
"."
] | python | train |
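The `mypval.bh_fdr` helper used above computes Benjamini-Hochberg q-values but is not shown in the row. A small stand-in implementation of that standard procedure — assumed, not copied from the project:

```python
def bh_fdr(pvals):
    # q_i = min over j >= rank(i) of p_(j) * n / j, computed from the largest
    # p-value downward so the running minimum enforces monotonicity.
    n = len(pvals)
    order = sorted(range(n), key=lambda i: pvals[i])
    qvals = [0.0] * n
    running_min = 1.0
    for offset, i in enumerate(reversed(order)):
        rank = n - offset
        running_min = min(running_min, pvals[i] * n / rank)
        qvals[i] = running_min
    return qvals

qs = bh_fdr([0.01, 0.04, 0.03, 0.2])
```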
cloudsmith-io/cloudsmith-cli | cloudsmith_cli/core/api/files.py | https://github.com/cloudsmith-io/cloudsmith-cli/blob/5bc245ca5d0bfa85380be48e7c206b4c86cc6c8e/cloudsmith_cli/core/api/files.py#L59-L87 | def upload_file(upload_url, upload_fields, filepath, callback=None):
"""Upload a pre-signed file to Cloudsmith."""
upload_fields = list(six.iteritems(upload_fields))
upload_fields.append(
("file", (os.path.basename(filepath), click.open_file(filepath, "rb")))
)
encoder = MultipartEncoder(upload_fields)
monitor = MultipartEncoderMonitor(encoder, callback=callback)
config = cloudsmith_api.Configuration()
if config.proxy:
proxies = {"http": config.proxy, "https": config.proxy}
else:
proxies = None
headers = {"content-type": monitor.content_type}
client = get_files_api()
headers["user-agent"] = client.api_client.user_agent
session = create_requests_session()
resp = session.post(upload_url, data=monitor, headers=headers, proxies=proxies)
try:
resp.raise_for_status()
except requests.RequestException as exc:
raise ApiException(
resp.status_code, headers=exc.response.headers, body=exc.response.content
) | [
"def",
"upload_file",
"(",
"upload_url",
",",
"upload_fields",
",",
"filepath",
",",
"callback",
"=",
"None",
")",
":",
"upload_fields",
"=",
"list",
"(",
"six",
".",
"iteritems",
"(",
"upload_fields",
")",
")",
"upload_fields",
".",
"append",
"(",
"(",
"\... | Upload a pre-signed file to Cloudsmith. | [
"Upload",
"a",
"pre",
"-",
"signed",
"file",
"to",
"Cloudsmith",
"."
] | python | train |
PGower/PyCanvas | pycanvas/apis/quizzes.py | https://github.com/PGower/PyCanvas/blob/68520005382b440a1e462f9df369f54d364e21e8/pycanvas/apis/quizzes.py#L19-L39 | def list_quizzes_in_course(self, course_id, search_term=None):
"""
List quizzes in a course.
Returns the list of Quizzes in this course.
"""
path = {}
data = {}
params = {}
# REQUIRED - PATH - course_id
"""ID"""
path["course_id"] = course_id
# OPTIONAL - search_term
"""The partial title of the quizzes to match and return."""
if search_term is not None:
params["search_term"] = search_term
self.logger.debug("GET /api/v1/courses/{course_id}/quizzes with query params: {params} and form data: {data}".format(params=params, data=data, **path))
return self.generic_request("GET", "/api/v1/courses/{course_id}/quizzes".format(**path), data=data, params=params, all_pages=True) | [
"def",
"list_quizzes_in_course",
"(",
"self",
",",
"course_id",
",",
"search_term",
"=",
"None",
")",
":",
"path",
"=",
"{",
"}",
"data",
"=",
"{",
"}",
"params",
"=",
"{",
"}",
"# REQUIRED - PATH - course_id\r",
"\"\"\"ID\"\"\"",
"path",
"[",
"\"course_id\"",... | List quizzes in a course.
Returns the list of Quizzes in this course. | [
"List",
"quizzes",
"in",
"a",
"course",
".",
"Returns",
"the",
"list",
"of",
"Quizzes",
"in",
"this",
"course",
"."
] | python | train |
soasme/rio | rio/tasks.py | https://github.com/soasme/rio/blob/f722eb0ff4b0382bceaff77737f0b87cb78429e7/rio/tasks.py#L56-L105 | def call_webhook(event, webhook, payload):
"""Build request from event, webhook, payload and parse response."""
started_at = time()
request = _build_request_for_calling_webhook(event, webhook, payload)
logger.info('REQUEST %(uuid)s %(method)s %(url)s %(payload)s' % dict(
uuid=str(event['uuid']),
url=request['url'],
method=request['method'],
payload=payload,
))
try:
content = dispatch_webhook_request(**request)
logger.debug('RESPONSE %(uuid)s %(method)s %(url)s %(data)s' % dict(
uuid=str(event['uuid']),
url=request['url'],
method=request['method'],
data=content,
))
data = dict(
parent=str(event['uuid']),
content=content,
started_at=started_at,
ended_at=time()
)
except (FailureWebhookError, ConnectionError) as exception:
if sentry.client:
http_context = raven_context(**request)
sentry.captureException(data={'request': http_context})
logger.error('RESPONSE %(uuid)s %(method)s %(url)s %(error)s' % dict(
uuid=str(event['uuid']),
method=request['method'],
url=request['url'],
error=exception.message,))
data = dict(
parent=str(event['uuid']),
error=exception.message,
started_at=started_at,
ended_at=time(),
)
webhook_ran.send(None, data=data)
return data | [
"def",
"call_webhook",
"(",
"event",
",",
"webhook",
",",
"payload",
")",
":",
"started_at",
"=",
"time",
"(",
")",
"request",
"=",
"_build_request_for_calling_webhook",
"(",
"event",
",",
"webhook",
",",
"payload",
")",
"logger",
".",
"info",
"(",
"'REQUEST... | Build request from event, webhook, payload and parse response. | [
"Build",
"request",
"from",
"event",
"webhook",
"payload",
"and",
"parse",
"response",
"."
] | python | train |
LionelR/pyair | pyair/stats.py | https://github.com/LionelR/pyair/blob/467e8a843ca9f882f8bb2958805b7293591996ad/pyair/stats.py#L21-L28 | def df_quantile(df, nb=100):
"""Returns the nb quantiles for data in a dataframe
"""
quantiles = np.linspace(0, 1., nb)
res = pd.DataFrame()
for q in quantiles:
res = res.append(df.quantile(q), ignore_index=True)
return res | [
"def",
"df_quantile",
"(",
"df",
",",
"nb",
"=",
"100",
")",
":",
"quantiles",
"=",
"np",
".",
"linspace",
"(",
"0",
",",
"1.",
",",
"nb",
")",
"res",
"=",
"pd",
".",
"DataFrame",
"(",
")",
"for",
"q",
"in",
"quantiles",
":",
"res",
"=",
"res",... | Returns the nb quantiles for data in a dataframe | [
"Returns",
"the",
"nb",
"quantiles",
"for",
"data",
"in",
"a",
"dataframe"
] | python | valid |
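For a single column, the idea in the `df_quantile` row above is also available in the standard library via `statistics.quantiles` (Python 3.8+); the pandas version simply repeats it per column of the DataFrame:

```python
import statistics

def quantile_series(values, nb=4):
    # Interior cut points for nb intervals; method='inclusive' interpolates
    # linearly over the sample, like numpy/pandas' default quantile.
    return statistics.quantiles(values, n=nb, method='inclusive')

print(quantile_series([1, 2, 3, 4, 5], nb=4))  # [2.0, 3.0, 4.0]
```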
wroberts/pygermanet | pygermanet/germanet.py | https://github.com/wroberts/pygermanet/blob/1818c20a7e8c431c4cfb5a570ed0d850bb6dd515/pygermanet/germanet.py#L110-L115 | def all_synsets(self):
'''
A generator over all the synsets in the GermaNet database.
'''
for synset_dict in self._mongo_db.synsets.find():
yield Synset(self, synset_dict) | [
"def",
"all_synsets",
"(",
"self",
")",
":",
"for",
"synset_dict",
"in",
"self",
".",
"_mongo_db",
".",
"synsets",
".",
"find",
"(",
")",
":",
"yield",
"Synset",
"(",
"self",
",",
"synset_dict",
")"
] | A generator over all the synsets in the GermaNet database. | [
"A",
"generator",
"over",
"all",
"the",
"synsets",
"in",
"the",
"GermaNet",
"database",
"."
] | python | train |
rajeevs1992/pyhealthvault | src/healthvaultlib/hvcrypto.py | https://github.com/rajeevs1992/pyhealthvault/blob/2b6fa7c1687300bcc2e501368883fbb13dc80495/src/healthvaultlib/hvcrypto.py#L44-L49 | def i2osp(self, long_integer, block_size):
'Convert a long integer into an octet string.'
hex_string = '%X' % long_integer
if len(hex_string) > 2 * block_size:
raise ValueError('integer %i too large to encode in %i octets' % (long_integer, block_size))
return a2b_hex(hex_string.zfill(2 * block_size)) | [
"def",
"i2osp",
"(",
"self",
",",
"long_integer",
",",
"block_size",
")",
":",
"hex_string",
"=",
"'%X'",
"%",
"long_integer",
"if",
"len",
"(",
"hex_string",
")",
">",
"2",
"*",
"block_size",
":",
"raise",
"ValueError",
"(",
"'integer %i too large to encode i... | Convert a long integer into an octet string. | [
"Convert",
"a",
"long",
"integer",
"into",
"an",
"octet",
"string",
"."
] | python | train |
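The `i2osp` row above zero-pads a hex string and decodes it, which is equivalent to big-endian `int.to_bytes`. A runnable copy showing both:

```python
from binascii import a2b_hex

def i2osp(long_integer, block_size):
    """Convert a non-negative integer into a block_size-wide octet string."""
    hex_string = '%X' % long_integer
    if len(hex_string) > 2 * block_size:
        raise ValueError('integer %i too large to encode in %i octets'
                         % (long_integer, block_size))
    return a2b_hex(hex_string.zfill(2 * block_size))

print(i2osp(65537, 4))             # b'\x00\x01\x00\x01'
print((65537).to_bytes(4, 'big'))  # the modern stdlib equivalent
```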
PMEAL/porespy | porespy/io/__funcs__.py | https://github.com/PMEAL/porespy/blob/1e13875b56787d8f5b7ffdabce8c4342c33ba9f8/porespy/io/__funcs__.py#L116-L155 | def to_palabos(im, filename, solid=0):
r"""
Converts an ND-array image to a text file that Palabos can read in as a
geometry for Lattice Boltzmann simulations. Uses a Euclidean distance
transform to identify solid voxels neighboring fluid voxels and labels
them as the interface.
Parameters
----------
im : ND-array
The image of the porous material
filename : string
Path to output file
solid : int
The value of the solid voxels in the image used to convert image to
binary with all other voxels assumed to be fluid.
Notes
-----
File produced contains 3 values: 2 = Solid, 1 = Interface, 0 = Pore
Palabos will run the simulation applying the specified pressure drop from
x = 0 to x = -1.
"""
# Create binary image for fluid and solid phases
bin_im = im == solid
# Transform to integer for distance transform
bin_im = bin_im.astype(int)
# Distance Transform computes Euclidean distance in lattice units to
# Nearest fluid for every solid voxel
dt = nd.distance_transform_edt(bin_im)
dt[dt > np.sqrt(2)] = 2
dt[(dt > 0)*(dt <= np.sqrt(2))] = 1
dt = dt.astype(int)
# Write out data
with open(filename, 'w') as f:
out_data = dt.flatten().tolist()
f.write('\n'.join(map(repr, out_data))) | [
"def",
"to_palabos",
"(",
"im",
",",
"filename",
",",
"solid",
"=",
"0",
")",
":",
"# Create binary image for fluid and solid phases",
"bin_im",
"=",
"im",
"==",
"solid",
"# Transform to integer for distance transform",
"bin_im",
"=",
"bin_im",
".",
"astype",
"(",
"... | r"""
Converts an ND-array image to a text file that Palabos can read in as a
geometry for Lattice Boltzmann simulations. Uses a Euclidean distance
transform to identify solid voxels neighboring fluid voxels and labels
them as the interface.
Parameters
----------
im : ND-array
The image of the porous material
filename : string
Path to output file
solid : int
The value of the solid voxels in the image used to convert image to
binary with all other voxels assumed to be fluid.
Notes
-----
File produced contains 3 values: 2 = Solid, 1 = Interface, 0 = Pore
Palabos will run the simulation applying the specified pressure drop from
x = 0 to x = -1. | [
"r",
"Converts",
"an",
"ND",
"-",
"array",
"image",
"to",
"a",
"text",
"file",
"that",
"Palabos",
"can",
"read",
"in",
"as",
"a",
"geometry",
"for",
"Lattice",
"Boltzmann",
"simulations",
".",
"Uses",
"a",
"Euclidean",
"distance",
"transform",
"to",
"ident... | python | train |
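The heart of the `to_palabos` conversion above is the three-way labeling: 0 for pore (fluid), 1 for solid voxels within √2 lattice units of fluid (the interface), 2 for deeper solid. The thresholding step in isolation, with hand-picked distances standing in for the scipy distance transform:

```python
import math

# Hand-picked distances standing in for nd.distance_transform_edt output.
dt = [0.0, 1.0, math.sqrt(2), 2.0]
labels = [2 if d > math.sqrt(2) else (1 if d > 0 else 0) for d in dt]
print(labels)  # [0, 1, 1, 2]
```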
cackharot/suds-py3 | suds/sax/attribute.py | https://github.com/cackharot/suds-py3/blob/7387ec7806e9be29aad0a711bea5cb3c9396469c/suds/sax/attribute.py#L124-L135 | def resolvePrefix(self, prefix):
"""
Resolve the specified prefix to a known namespace.
@param prefix: A declared prefix
@type prefix: basestring
@return: The namespace that has been mapped to I{prefix}
@rtype: (I{prefix}, I{name})
"""
ns = Namespace.default
if self.parent is not None:
ns = self.parent.resolvePrefix(prefix)
return ns | [
"def",
"resolvePrefix",
"(",
"self",
",",
"prefix",
")",
":",
"ns",
"=",
"Namespace",
".",
"default",
"if",
"self",
".",
"parent",
"is",
"not",
"None",
":",
"ns",
"=",
"self",
".",
"parent",
".",
"resolvePrefix",
"(",
"prefix",
")",
"return",
"ns"
] | Resolve the specified prefix to a known namespace.
@param prefix: A declared prefix
@type prefix: basestring
@return: The namespace that has been mapped to I{prefix}
@rtype: (I{prefix}, I{name}) | [
"Resolve",
"the",
"specified",
"prefix",
"to",
"a",
"known",
"namespace",
"."
] | python | train |
WebarchivCZ/WA-KAT | src/wa_kat/templates/static/js/Lib/site-packages/wa_kat_main.py | https://github.com/WebarchivCZ/WA-KAT/blob/16d064a3a775dc1d2713debda7847ded52dd2a06/src/wa_kat/templates/static/js/Lib/site-packages/wa_kat_main.py#L210-L234 | def start(cls, ev=None):
"""
Start the query to aleph by ISSN.
"""
ViewController.log_view.add("Beginning AlephReader request..")
ViewController.issnbox_error.reset()
issn = ViewController.issn.strip()
# make sure, that `issn` was filled
if not issn:
ViewController.issnbox_error.show("ISSN nebylo vyplněno!")
ViewController.log_view.add("No ISSN! Aborting.")
return
ViewController.issnbox_error.hide()
ViewController.issn_progressbar.reset()
ViewController.issn_progressbar.show(50)
ViewController.log_view.add("For ISSN `%s`." % issn)
make_request(
url=join(settings.API_PATH, "aleph/records_by_issn"),
data={'issn': issn},
on_complete=cls.on_complete
) | [
"def",
"start",
"(",
"cls",
",",
"ev",
"=",
"None",
")",
":",
"ViewController",
".",
"log_view",
".",
"add",
"(",
"\"Beginning AlephReader request..\"",
")",
"ViewController",
".",
"issnbox_error",
".",
"reset",
"(",
")",
"issn",
"=",
"ViewController",
".",
... | Start the query to aleph by ISSN. | [
"Start",
"the",
"query",
"to",
"aleph",
"by",
"ISSN",
"."
] | python | train |
albu/albumentations | albumentations/augmentations/bbox_utils.py | https://github.com/albu/albumentations/blob/b31393cd6126516d37a84e44c879bd92c68ffc93/albumentations/augmentations/bbox_utils.py#L55-L82 | def filter_bboxes_by_visibility(original_shape, bboxes, transformed_shape, transformed_bboxes,
threshold=0., min_area=0.):
"""Filter bounding boxes and return only those boxes whose visibility after transformation is above
the threshold and minimal area of bounding box in pixels is more than min_area.
Args:
original_shape (tuple): original image shape
bboxes (list): original bounding boxes
transformed_shape(tuple): transformed image
transformed_bboxes (list): transformed bounding boxes
threshold (float): visibility threshold. Should be a value in the range [0.0, 1.0].
min_area (float): Minimal area threshold.
"""
img_height, img_width = original_shape[:2]
transformed_img_height, transformed_img_width = transformed_shape[:2]
visible_bboxes = []
for bbox, transformed_bbox in zip(bboxes, transformed_bboxes):
if not all(0.0 <= value <= 1.0 for value in transformed_bbox[:4]):
continue
bbox_area = calculate_bbox_area(bbox, img_height, img_width)
transformed_bbox_area = calculate_bbox_area(transformed_bbox, transformed_img_height, transformed_img_width)
if transformed_bbox_area < min_area:
continue
visibility = transformed_bbox_area / bbox_area
if visibility >= threshold:
visible_bboxes.append(transformed_bbox)
return visible_bboxes | [
"def",
"filter_bboxes_by_visibility",
"(",
"original_shape",
",",
"bboxes",
",",
"transformed_shape",
",",
"transformed_bboxes",
",",
"threshold",
"=",
"0.",
",",
"min_area",
"=",
"0.",
")",
":",
"img_height",
",",
"img_width",
"=",
"original_shape",
"[",
":",
"... | Filter bounding boxes and return only those boxes whose visibility after transformation is above
the threshold and minimal area of bounding box in pixels is more then min_area.
Args:
original_shape (tuple): original image shape
bboxes (list): original bounding boxes
transformed_shape(tuple): transformed image
transformed_bboxes (list): transformed bounding boxes
threshold (float): visibility threshold. Should be a value in the range [0.0, 1.0].
min_area (float): Minimal area threshold. | [
"Filter",
"bounding",
"boxes",
"and",
"return",
"only",
"those",
"boxes",
"whose",
"visibility",
"after",
"transformation",
"is",
"above",
"the",
"threshold",
"and",
"minimal",
"area",
"of",
"bounding",
"box",
"in",
"pixels",
"is",
"more",
"then",
"min_area",
... | python | train |
shaiguitar/snowclient.py | snowclient/api.py | https://github.com/shaiguitar/snowclient.py/blob/6bb513576d3b37612a7a4da225140d134f3e1c82/snowclient/api.py#L80-L89 | def req(self, meth, url, http_data=''):
"""
sugar that wraps the 'requests' module with basic auth and some headers.
"""
self.logger.debug("Making request: %s %s\nBody:%s" % (meth, url, http_data))
req_method = getattr(requests, meth)
return (req_method(url,
auth=(self.__username, self.__password),
data=http_data,
headers=({'user-agent': self.user_agent(), 'Accept': 'application/json'}))) | [
"def",
"req",
"(",
"self",
",",
"meth",
",",
"url",
",",
"http_data",
"=",
"''",
")",
":",
"self",
".",
"logger",
".",
"debug",
"(",
"\"Making request: %s %s\\nBody:%s\"",
"%",
"(",
"meth",
",",
"url",
",",
"http_data",
")",
")",
"req_method",
"=",
"ge... | sugar that wraps the 'requests' module with basic auth and some headers. | [
"sugar",
"that",
"wraps",
"the",
"requests",
"module",
"with",
"basic",
"auth",
"and",
"some",
"headers",
"."
] | python | train |
ansible/tower-cli | tower_cli/cli/misc.py | https://github.com/ansible/tower-cli/blob/a2b151fed93c47725018d3034848cb3a1814bed7/tower_cli/cli/misc.py#L39-L64 | def version():
"""Display full version information."""
# Print out the current version of Tower CLI.
click.echo('Tower CLI %s' % __version__)
# Print out the current API version of the current code base.
click.echo('API %s' % CUR_API_VERSION)
# Attempt to connect to the Ansible Tower server.
# If we succeed, print a version; if not, generate a failure.
try:
r = client.get('/config/')
except RequestException as ex:
raise exc.TowerCLIError('Could not connect to Ansible Tower.\n%s' %
six.text_type(ex))
config = r.json()
license = config.get('license_info', {}).get('license_type', 'open')
if license == 'open':
server_type = 'AWX'
else:
server_type = 'Ansible Tower'
click.echo('%s %s' % (server_type, config['version']))
# Print out Ansible version of server
click.echo('Ansible %s' % config['ansible_version']) | [
"def",
"version",
"(",
")",
":",
"# Print out the current version of Tower CLI.",
"click",
".",
"echo",
"(",
"'Tower CLI %s'",
"%",
"__version__",
")",
"# Print out the current API version of the current code base.",
"click",
".",
"echo",
"(",
"'API %s'",
"%",
"CUR_API_VERS... | Display full version information. | [
"Display",
"full",
"version",
"information",
"."
] | python | valid |
ramrod-project/database-brain | schema/brain/queries/writes.py | https://github.com/ramrod-project/database-brain/blob/b024cb44f34cabb9d80af38271ddb65c25767083/schema/brain/queries/writes.py#L122-L137 | def write_output(job_id, content, conn=None):
"""writes output to the output table
:param job_id: <str> id of the job
:param content: <str> output to write
:param conn:
"""
output_job = get_job_by_id(job_id, conn)
results = {}
if output_job is not None:
entry = {
OUTPUTJOB_FIELD: output_job,
CONTENT_FIELD: content
}
results = RBO.insert(entry, conflict=RDB_REPLACE).run(conn)
return results | [
"def",
"write_output",
"(",
"job_id",
",",
"content",
",",
"conn",
"=",
"None",
")",
":",
"output_job",
"=",
"get_job_by_id",
"(",
"job_id",
",",
"conn",
")",
"results",
"=",
"{",
"}",
"if",
"output_job",
"is",
"not",
"None",
":",
"entry",
"=",
"{",
... | writes output to the output table
:param job_id: <str> id of the job
:param content: <str> output to write
:param conn: | [
"writes",
"output",
"to",
"the",
"output",
"table"
] | python | train |
HewlettPackard/python-hpOneView | hpOneView/oneview_client.py | https://github.com/HewlettPackard/python-hpOneView/blob/3c6219723ef25e6e0c83d44a89007f89bc325b89/hpOneView/oneview_client.py#L256-L270 | def __set_proxy(self, config):
"""
Set proxy if needed
Args:
config: Config dict
"""
if "proxy" in config and config["proxy"]:
proxy = config["proxy"]
splitted = proxy.split(':')
if len(splitted) != 2:
raise ValueError(ONEVIEW_CLIENT_INVALID_PROXY)
proxy_host = splitted[0]
proxy_port = int(splitted[1])
self.__connection.set_proxy(proxy_host, proxy_port) | [
"def",
"__set_proxy",
"(",
"self",
",",
"config",
")",
":",
"if",
"\"proxy\"",
"in",
"config",
"and",
"config",
"[",
"\"proxy\"",
"]",
":",
"proxy",
"=",
"config",
"[",
"\"proxy\"",
"]",
"splitted",
"=",
"proxy",
".",
"split",
"(",
"':'",
")",
"if",
... | Set proxy if needed
Args:
config: Config dict | [
"Set",
"proxy",
"if",
"needed",
"Args",
":",
"config",
":",
"Config",
"dict"
] | python | train |
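The `__set_proxy` helper above accepts only strings that split into exactly one `host:port` pair and raises otherwise. A minimal standalone sketch of that validation (the function name and error message are hypothetical, not part of hpOneView):

```python
def parse_proxy(proxy):
    # Hypothetical standalone version of the check in __set_proxy above:
    # exactly one ":" separating a host from an integer port.
    parts = proxy.split(":")
    if len(parts) != 2:
        raise ValueError("invalid proxy, expected host:port")
    return parts[0], int(parts[1])

print(parse_proxy("10.0.0.1:8080"))  # → ('10.0.0.1', 8080)
```

Note the same limitation as the original: a value such as `http://host:8080` contains two colons and is rejected.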
wummel/linkchecker | linkcheck/strformat.py | https://github.com/wummel/linkchecker/blob/c2ce810c3fb00b895a841a7be6b2e78c64e7b042/linkcheck/strformat.py#L279-L286 | def strtimezone ():
"""Return timezone info, %z on some platforms, but not supported on all.
"""
if time.daylight:
zone = time.altzone
else:
zone = time.timezone
return "%+04d" % (-zone//SECONDS_PER_HOUR) | [
"def",
"strtimezone",
"(",
")",
":",
"if",
"time",
".",
"daylight",
":",
"zone",
"=",
"time",
".",
"altzone",
"else",
":",
"zone",
"=",
"time",
".",
"timezone",
"return",
"\"%+04d\"",
"%",
"(",
"-",
"zone",
"//",
"SECONDS_PER_HOUR",
")"
] | Return timezone info, %z on some platforms, but not supported on all. | [
"Return",
"timezone",
"info",
"%z",
"on",
"some",
"platforms",
"but",
"not",
"supported",
"on",
"all",
"."
] | python | train |
gmr/rejected | rejected/mixins.py | https://github.com/gmr/rejected/blob/610a3e1401122ecb98d891b6795cca0255e5b044/rejected/mixins.py#L42-L50 | def on_finish(self, exc=None):
"""Used to initiate the garbage collection"""
super(GarbageCollector, self).on_finish(exc)
self._cycles_left -= 1
if self._cycles_left <= 0:
num_collected = gc.collect()
self._cycles_left = self.collection_cycle
LOGGER.debug('garbage collection run, %d objects evicted',
num_collected) | [
"def",
"on_finish",
"(",
"self",
",",
"exc",
"=",
"None",
")",
":",
"super",
"(",
"GarbageCollector",
",",
"self",
")",
".",
"on_finish",
"(",
"exc",
")",
"self",
".",
"_cycles_left",
"-=",
"1",
"if",
"self",
".",
"_cycles_left",
"<=",
"0",
":",
"num... | Used to initiate the garbage collection | [
"Used",
"to",
"initiate",
"the",
"garbage",
"collection"
] | python | train |
emilyhorsman/socialauth | socialauth/authentication.py | https://github.com/emilyhorsman/socialauth/blob/2246a5b2cbbea0936a9b76cc3a7f0a224434d9f6/socialauth/authentication.py#L14-L60 | def http_get_provider(provider,
request_url, params, token_secret, token_cookie = None):
'''Handle HTTP GET requests on an authentication endpoint.
Authentication flow begins when ``params`` has a ``login`` key with a value
of ``start``. For instance, ``/auth/twitter?login=start``.
:param str provider: An provider to obtain a user ID from.
:param str request_url: The authentication endpoint/callback.
:param dict params: GET parameters from the query string.
:param str token_secret: An app secret to encode/decode JSON web tokens.
:param str token_cookie: The current JSON web token, if available.
:return: A dict containing any of the following possible keys:
``status``: an HTTP status code the server should sent
``redirect``: where the client should be directed to continue the flow
``set_token_cookie``: contains a JSON web token and should be stored by
the client and passed in the next call.
``provider_user_id``: the user ID from the login provider
``provider_user_name``: the user name from the login provider
'''
if not validate_provider(provider):
raise InvalidUsage('Provider not supported')
klass = getattr(socialauth.providers, provider.capitalize())
provider = klass(request_url, params, token_secret, token_cookie)
if provider.status == 302:
ret = dict(status = 302, redirect = provider.redirect)
tc = getattr(provider, 'set_token_cookie', None)
if tc is not None:
ret['set_token_cookie'] = tc
return ret
if provider.status == 200 and provider.user_id is not None:
ret = dict(status = 200, provider_user_id = provider.user_id)
if provider.user_name is not None:
ret['provider_user_name'] = provider.user_name
return ret
raise InvalidUsage('Invalid request') | [
"def",
"http_get_provider",
"(",
"provider",
",",
"request_url",
",",
"params",
",",
"token_secret",
",",
"token_cookie",
"=",
"None",
")",
":",
"if",
"not",
"validate_provider",
"(",
"provider",
")",
":",
"raise",
"InvalidUsage",
"(",
"'Provider not supported'",
... | Handle HTTP GET requests on an authentication endpoint.
Authentication flow begins when ``params`` has a ``login`` key with a value
of ``start``. For instance, ``/auth/twitter?login=start``.
:param str provider: An provider to obtain a user ID from.
:param str request_url: The authentication endpoint/callback.
:param dict params: GET parameters from the query string.
:param str token_secret: An app secret to encode/decode JSON web tokens.
:param str token_cookie: The current JSON web token, if available.
:return: A dict containing any of the following possible keys:
``status``: an HTTP status code the server should sent
``redirect``: where the client should be directed to continue the flow
``set_token_cookie``: contains a JSON web token and should be stored by
the client and passed in the next call.
``provider_user_id``: the user ID from the login provider
``provider_user_name``: the user name from the login provider | [
"Handle",
"HTTP",
"GET",
"requests",
"on",
"an",
"authentication",
"endpoint",
"."
] | python | valid |
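The provider lookup above resolves a class dynamically with `getattr(socialauth.providers, provider.capitalize())` and raises `InvalidUsage` when the provider is unknown. A self-contained sketch of that dispatch pattern, using a hypothetical namespace and provider class in place of the real `socialauth.providers` module:

```python
import types

class Twitter:
    # Hypothetical stand-in for socialauth.providers.Twitter.
    def __init__(self, request_url):
        self.request_url = request_url

providers = types.SimpleNamespace(Twitter=Twitter)

def get_provider(name, request_url):
    # Mirrors klass = getattr(socialauth.providers, provider.capitalize())
    # plus the "Provider not supported" guard.
    klass = getattr(providers, name.capitalize(), None)
    if klass is None:
        raise ValueError("Provider not supported")
    return klass(request_url)

p = get_provider("twitter", "/auth/twitter?login=start")
print(p.request_url)  # → /auth/twitter?login=start
```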
tcalmant/ipopo | pelix/ipopo/instance.py | https://github.com/tcalmant/ipopo/blob/2f9ae0c44cd9c34ef1a9d50837b3254e75678eb1/pelix/ipopo/instance.py#L667-L710 | def __safe_validation_callback(self, event):
# type: (str) -> Any
"""
Calls the ``@ValidateComponent`` or ``@InvalidateComponent`` callback,
ignoring raised exceptions
:param event: The kind of life-cycle callback (in/validation)
:return: The callback result, or None
"""
if self.state == StoredInstance.KILLED:
# Invalid state
return None
try:
return self.__validation_callback(event)
except FrameworkException as ex:
# Important error
self._logger.exception(
"Critical error calling back %s: %s", self.name, ex
)
# Kill the component
self._ipopo_service.kill(self.name)
# Store the exception as it is a validation error
self.error_trace = traceback.format_exc()
if ex.needs_stop:
# Framework must be stopped...
self._logger.error(
"%s said that the Framework must be stopped.", self.name
)
self.bundle_context.get_framework().stop()
return False
except:
self._logger.exception(
"Component '%s': error calling @ValidateComponent callback",
self.name,
)
# Store the exception as it is a validation error
self.error_trace = traceback.format_exc()
return False | [
"def",
"__safe_validation_callback",
"(",
"self",
",",
"event",
")",
":",
"# type: (str) -> Any",
"if",
"self",
".",
"state",
"==",
"StoredInstance",
".",
"KILLED",
":",
"# Invalid state",
"return",
"None",
"try",
":",
"return",
"self",
".",
"__validation_callback... | Calls the ``@ValidateComponent`` or ``@InvalidateComponent`` callback,
ignoring raised exceptions
:param event: The kind of life-cycle callback (in/validation)
:return: The callback result, or None | [
"Calls",
"the",
"@ValidateComponent",
"or",
"@InvalidateComponent",
"callback",
"ignoring",
"raised",
"exceptions"
] | python | train |
marshmallow-code/webargs | src/webargs/core.py | https://github.com/marshmallow-code/webargs/blob/40cc2d25421d15d9630b1a819f1dcefbbf01ed95/src/webargs/core.py#L300-L323 | def _get_schema(self, argmap, req):
"""Return a `marshmallow.Schema` for the given argmap and request.
:param argmap: Either a `marshmallow.Schema`, `dict`
of argname -> `marshmallow.fields.Field` pairs, or a callable that returns
a `marshmallow.Schema` instance.
:param req: The request object being parsed.
:rtype: marshmallow.Schema
"""
if isinstance(argmap, ma.Schema):
schema = argmap
elif isinstance(argmap, type) and issubclass(argmap, ma.Schema):
schema = argmap()
elif callable(argmap):
schema = argmap(req)
else:
schema = dict2schema(argmap, self.schema_class)()
if MARSHMALLOW_VERSION_INFO[0] < 3 and not schema.strict:
warnings.warn(
"It is highly recommended that you set strict=True on your schema "
"so that the parser's error handler will be invoked when expected.",
UserWarning,
)
return schema | [
"def",
"_get_schema",
"(",
"self",
",",
"argmap",
",",
"req",
")",
":",
"if",
"isinstance",
"(",
"argmap",
",",
"ma",
".",
"Schema",
")",
":",
"schema",
"=",
"argmap",
"elif",
"isinstance",
"(",
"argmap",
",",
"type",
")",
"and",
"issubclass",
"(",
"... | Return a `marshmallow.Schema` for the given argmap and request.
:param argmap: Either a `marshmallow.Schema`, `dict`
of argname -> `marshmallow.fields.Field` pairs, or a callable that returns
a `marshmallow.Schema` instance.
:param req: The request object being parsed.
:rtype: marshmallow.Schema | [
"Return",
"a",
"marshmallow",
".",
"Schema",
"for",
"the",
"given",
"argmap",
"and",
"request",
"."
] | python | train |
cloudera/impyla | impala/_thrift_gen/hive_metastore/ThriftHiveMetastore.py | https://github.com/cloudera/impyla/blob/547fa2ba3b6151e2a98b3544301471a643212dc3/impala/_thrift_gen/hive_metastore/ThriftHiveMetastore.py#L2008-L2016 | def drop_table(self, dbname, name, deleteData):
"""
Parameters:
- dbname
- name
- deleteData
"""
self.send_drop_table(dbname, name, deleteData)
self.recv_drop_table() | [
"def",
"drop_table",
"(",
"self",
",",
"dbname",
",",
"name",
",",
"deleteData",
")",
":",
"self",
".",
"send_drop_table",
"(",
"dbname",
",",
"name",
",",
"deleteData",
")",
"self",
".",
"recv_drop_table",
"(",
")"
] | Parameters:
- dbname
- name
- deleteData | [
"Parameters",
":",
"-",
"dbname",
"-",
"name",
"-",
"deleteData"
] | python | train |
hellysmile/django-activeurl | django_activeurl/utils.py | https://github.com/hellysmile/django-activeurl/blob/40d7f01b217641705fc5b7fe387e28e48ce332dc/django_activeurl/utils.py#L42-L68 | def get_cache_key(content, **kwargs):
'''generate cache key'''
cache_key = ''
for key in sorted(kwargs.keys()):
cache_key = '{cache_key}.{key}:{value}'.format(
cache_key=cache_key,
key=key,
value=kwargs[key],
)
cache_key = '{content}{cache_key}'.format(
content=content,
cache_key=cache_key,
)
# fix for non ascii symbols, ensure encoding, python3 hashlib fix
cache_key = cache_key.encode('utf-8', 'ignore')
cache_key = md5(cache_key).hexdigest()
cache_key = '{prefix}.{version}.{language}.{cache_key}'.format(
prefix=settings.ACTIVE_URL_CACHE_PREFIX,
version=__version__,
language=get_language(),
cache_key=cache_key
)
return cache_key | [
"def",
"get_cache_key",
"(",
"content",
",",
"*",
"*",
"kwargs",
")",
":",
"cache_key",
"=",
"''",
"for",
"key",
"in",
"sorted",
"(",
"kwargs",
".",
"keys",
"(",
")",
")",
":",
"cache_key",
"=",
"'{cache_key}.{key}:{value}'",
".",
"format",
"(",
"cache_k... | generate cache key | [
"generate",
"cache",
"key"
] | python | train |
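The key scheme above appends sorted kwargs to the content, UTF-8-encodes the result, and hashes it with MD5, so the key does not depend on keyword order. A reduced sketch of the same idea (the `prefix` default is an assumption standing in for the settings/version/language parts of the real key):

```python
from hashlib import md5

def make_cache_key(content, prefix="active_url", **kwargs):
    # Reduced sketch of get_cache_key above; `prefix` replaces the
    # ACTIVE_URL_CACHE_PREFIX/version/language components of the original.
    key = content + "".join(
        ".{0}:{1}".format(k, kwargs[k]) for k in sorted(kwargs)
    )
    digest = md5(key.encode("utf-8", "ignore")).hexdigest()
    return "{0}.{1}".format(prefix, digest)

k1 = make_cache_key("<a href='/'>home</a>", css_class="active", menu="top")
k2 = make_cache_key("<a href='/'>home</a>", menu="top", css_class="active")
print(k1 == k2)  # → True: keyword order does not change the key
```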
KrishnaswamyLab/PHATE | Python/phate/vne.py | https://github.com/KrishnaswamyLab/PHATE/blob/346a4597dcfc523f8bef99bce482e677282b6719/Python/phate/vne.py#L52-L139 | def find_knee_point(y, x=None):
"""
Returns the x-location of a (single) knee of curve y=f(x)
Parameters
----------
y : array, shape=[n]
data for which to find the knee point
x : array, optional, shape=[n], default=np.arange(len(y))
indices of the data points of y,
if these are not in order and evenly spaced
Returns
-------
knee_point : int
The index (or x value) of the knee point on y
Examples
--------
>>> import numpy as np
>>> import phate
>>> x = np.arange(20)
>>> y = np.exp(-x/10)
>>> phate.vne.find_knee_point(y,x)
8
"""
try:
y.shape
except AttributeError:
y = np.array(y)
if len(y) < 3:
raise ValueError("Cannot find knee point on vector of length 3")
elif len(y.shape) > 1:
raise ValueError("y must be 1-dimensional")
if x is None:
x = np.arange(len(y))
else:
try:
x.shape
except AttributeError:
x = np.array(x)
if not x.shape == y.shape:
raise ValueError("x and y must be the same shape")
else:
# ensure x is sorted float
idx = np.argsort(x)
x = x[idx]
y = y[idx]
n = np.arange(2, len(y) + 1).astype(np.float32)
# figure out the m and b (in the y=mx+b sense) for the "left-of-knee"
sigma_xy = np.cumsum(x * y)[1:]
sigma_x = np.cumsum(x)[1:]
sigma_y = np.cumsum(y)[1:]
sigma_xx = np.cumsum(x * x)[1:]
det = (n * sigma_xx - sigma_x * sigma_x)
mfwd = (n * sigma_xy - sigma_x * sigma_y) / det
bfwd = -(sigma_x * sigma_xy - sigma_xx * sigma_y) / det
# figure out the m and b (in the y=mx+b sense) for the "right-of-knee"
sigma_xy = np.cumsum(x[::-1] * y[::-1])[1:]
sigma_x = np.cumsum(x[::-1])[1:]
sigma_y = np.cumsum(y[::-1])[1:]
sigma_xx = np.cumsum(x[::-1] * x[::-1])[1:]
det = (n * sigma_xx - sigma_x * sigma_x)
mbck = ((n * sigma_xy - sigma_x * sigma_y) / det)[::-1]
bbck = (-(sigma_x * sigma_xy - sigma_xx * sigma_y) / det)[::-1]
# figure out the sum of per-point errors for left- and right- of-knee fits
error_curve = np.full_like(y, np.float('nan'))
for breakpt in np.arange(1, len(y) - 1):
delsfwd = (mfwd[breakpt - 1] * x[:breakpt + 1] +
bfwd[breakpt - 1]) - y[:breakpt + 1]
delsbck = (mbck[breakpt - 1] * x[breakpt:] +
bbck[breakpt - 1]) - y[breakpt:]
error_curve[breakpt] = np.sum(np.abs(delsfwd)) + \
np.sum(np.abs(delsbck))
# find location of the min of the error curve
loc = np.argmin(error_curve[1:-1]) + 1
knee_point = x[loc]
return knee_point | [
"def",
"find_knee_point",
"(",
"y",
",",
"x",
"=",
"None",
")",
":",
"try",
":",
"y",
".",
"shape",
"except",
"AttributeError",
":",
"y",
"=",
"np",
".",
"array",
"(",
"y",
")",
"if",
"len",
"(",
"y",
")",
"<",
"3",
":",
"raise",
"ValueError",
... | Returns the x-location of a (single) knee of curve y=f(x)
Parameters
----------
y : array, shape=[n]
data for which to find the knee point
x : array, optional, shape=[n], default=np.arange(len(y))
indices of the data points of y,
if these are not in order and evenly spaced
Returns
-------
knee_point : int
The index (or x value) of the knee point on y
Examples
--------
>>> import numpy as np
>>> import phate
>>> x = np.arange(20)
>>> y = np.exp(-x/10)
>>> phate.vne.find_knee_point(y,x)
8 | [
"Returns",
"the",
"x",
"-",
"location",
"of",
"a",
"(",
"single",
")",
"knee",
"of",
"curve",
"y",
"=",
"f",
"(",
"x",
")"
] | python | train |
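The docstring example above (`y = exp(-x/10)` with a knee at 8) can also be reproduced by an even simpler heuristic — taking the point of maximum vertical gap between the curve and the straight chord from its first to its last point — rather than the per-breakpoint two-line least-squares fit the function implements:

```python
import math

def knee_by_chord(y):
    # Index of the largest vertical gap between the curve and the chord
    # joining its first and last points; a simpler heuristic than the
    # two-segment fit in find_knee_point, shown only for illustration.
    n = len(y)
    slope = (y[-1] - y[0]) / (n - 1)
    gaps = [abs(y[0] + slope * i - y[i]) for i in range(n)]
    return max(range(n), key=gaps.__getitem__)

y = [math.exp(-i / 10) for i in range(20)]
print(knee_by_chord(y))  # → 8, matching the docstring example
```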
genialis/resolwe | resolwe/flow/signals.py | https://github.com/genialis/resolwe/blob/f7bb54932c81ec0cfc5b5e80d238fceaeaa48d86/resolwe/flow/signals.py#L39-L42 | def delete_entity(sender, instance, **kwargs):
"""Delete Entity when last Data object is deleted."""
# 1 means that the last Data object is going to be deleted.
Entity.objects.annotate(num_data=Count('data')).filter(data=instance, num_data=1).delete() | [
"def",
"delete_entity",
"(",
"sender",
",",
"instance",
",",
"*",
"*",
"kwargs",
")",
":",
"# 1 means that the last Data object is going to be deleted.",
"Entity",
".",
"objects",
".",
"annotate",
"(",
"num_data",
"=",
"Count",
"(",
"'data'",
")",
")",
".",
"fil... | Delete Entity when last Data object is deleted. | [
"Delete",
"Entity",
"when",
"last",
"Data",
"object",
"is",
"deleted",
"."
] | python | train |
librosa/librosa | librosa/sequence.py | https://github.com/librosa/librosa/blob/180e8e6eb8f958fa6b20b8cba389f7945d508247/librosa/sequence.py#L720-L864 | def viterbi_binary(prob, transition, p_state=None, p_init=None, return_logp=False):
'''Viterbi decoding from binary (multi-label), discriminative state predictions.
Given a sequence of conditional state predictions `prob[s, t]`,
indicating the conditional likelihood of state `s` being active
conditional on observation at time `t`, and a 2*2 transition matrix
`transition` which encodes the conditional probability of moving from
state `s` to state `~s` (not-`s`), the Viterbi algorithm computes the
most likely sequence of states from the observations.
This function differs from `viterbi_discriminative` in that it does not assume the
states to be mutually exclusive. `viterbi_binary` is implemented by
transforming the multi-label decoding problem to a collection
of binary Viterbi problems (one for each *state* or label).
The output is a binary matrix `states[s, t]` indicating whether each
state `s` is active at time `t`.
Parameters
----------
prob : np.ndarray [shape=(n_steps,) or (n_states, n_steps)], non-negative
`prob[s, t]` is the probability of state `s` being active
conditional on the observation at time `t`.
Must be non-negative and less than 1.
If `prob` is 1-dimensional, it is expanded to shape `(1, n_steps)`.
transition : np.ndarray [shape=(2, 2) or (n_states, 2, 2)], non-negative
If 2-dimensional, the same transition matrix is applied to each sub-problem.
`transition[0, i]` is the probability of the state going from inactive to `i`,
`transition[1, i]` is the probability of the state going from active to `i`.
Each row must sum to 1.
If 3-dimensional, `transition[s]` is interpreted as the 2x2 transition matrix
for state label `s`.
p_state : np.ndarray [shape=(n_states,)]
Optional: marginal probability for each state (between [0,1]).
If not provided, a uniform distribution (0.5 for each state)
is assumed.
p_init : np.ndarray [shape=(n_states,)]
Optional: initial state distribution.
If not provided, it is assumed to be uniform.
return_logp : bool
If `True`, return the log-likelihood of the state sequence.
Returns
-------
Either `states` or `(states, logp)`:
states : np.ndarray [shape=(n_states, n_steps)]
The most likely state sequence.
logp : np.ndarray [shape=(n_states,)]
If `return_logp=True`, the log probability of each state activation
sequence `states`
See Also
--------
viterbi : Viterbi decoding from observation likelihoods
viterbi_discriminative : Viterbi decoding for discriminative (mutually exclusive) state predictions
Examples
--------
In this example, we have a sequence of binary state likelihoods that we want to de-noise
under the assumption that state changes are relatively uncommon. Positive predictions
should only be retained if they persist for multiple steps, and any transient predictions
should be considered as errors. This use case arises frequently in problems such as
instrument recognition, where state activations tend to be stable over time, but subject
to abrupt changes (e.g., when an instrument joins the mix).
We assume that the 0 state has a self-transition probability of 90%, and the 1 state
has a self-transition probability of 70%. We assume the marginal and initial
probability of either state is 50%.
>>> trans = np.array([[0.9, 0.1], [0.3, 0.7]])
>>> prob = np.array([0.1, 0.7, 0.4, 0.3, 0.8, 0.9, 0.8, 0.2, 0.6, 0.3])
>>> librosa.sequence.viterbi_binary(prob, trans, p_state=0.5, p_init=0.5)
array([[0, 0, 0, 0, 1, 1, 1, 0, 0, 0]])
'''
prob = np.atleast_2d(prob)
n_states, n_steps = prob.shape
if transition.shape == (2, 2):
transition = np.tile(transition, (n_states, 1, 1))
elif transition.shape != (n_states, 2, 2):
raise ParameterError('transition.shape={}, must be (2,2) or '
'(n_states, 2, 2)={}'.format(transition.shape, (n_states)))
if np.any(transition < 0) or not np.allclose(transition.sum(axis=-1), 1):
raise ParameterError('Invalid transition matrix: must be non-negative '
'and sum to 1 on each row.')
if np.any(prob < 0) or np.any(prob > 1):
raise ParameterError('Invalid probability values: prob must be between [0, 1]')
if p_state is None:
p_state = np.empty(n_states)
p_state.fill(0.5)
else:
p_state = np.atleast_1d(p_state)
if p_state.shape != (n_states,) or np.any(p_state < 0) or np.any(p_state > 1):
raise ParameterError('Invalid marginal state distributions: p_state={}'.format(p_state))
if p_init is None:
p_init = np.empty(n_states)
p_init.fill(0.5)
else:
p_init = np.atleast_1d(p_init)
if p_init.shape != (n_states,) or np.any(p_init < 0) or np.any(p_init > 1):
raise ParameterError('Invalid initial state distributions: p_init={}'.format(p_init))
states = np.empty((n_states, n_steps), dtype=int)
logp = np.empty(n_states)
prob_binary = np.empty((2, n_steps))
p_state_binary = np.empty(2)
p_init_binary = np.empty(2)
for state in range(n_states):
prob_binary[0] = 1 - prob[state]
prob_binary[1] = prob[state]
p_state_binary[0] = 1 - p_state[state]
p_state_binary[1] = p_state[state]
p_init_binary[0] = 1 - p_init[state]
p_init_binary[1] = p_init[state]
states[state, :], logp[state] = viterbi_discriminative(prob_binary,
transition[state],
p_state=p_state_binary,
p_init=p_init_binary,
return_logp=True)
if return_logp:
return states, logp
return states | [
"def",
"viterbi_binary",
"(",
"prob",
",",
"transition",
",",
"p_state",
"=",
"None",
",",
"p_init",
"=",
"None",
",",
"return_logp",
"=",
"False",
")",
":",
"prob",
"=",
"np",
".",
"atleast_2d",
"(",
"prob",
")",
"n_states",
",",
"n_steps",
"=",
"prob... | Viterbi decoding from binary (multi-label), discriminative state predictions.
Given a sequence of conditional state predictions `prob[s, t]`,
indicating the conditional likelihood of state `s` being active
conditional on observation at time `t`, and a 2*2 transition matrix
`transition` which encodes the conditional probability of moving from
state `s` to state `~s` (not-`s`), the Viterbi algorithm computes the
most likely sequence of states from the observations.
This function differs from `viterbi_discriminative` in that it does not assume the
states to be mutually exclusive. `viterbi_binary` is implemented by
transforming the multi-label decoding problem to a collection
of binary Viterbi problems (one for each *state* or label).
The output is a binary matrix `states[s, t]` indicating whether each
state `s` is active at time `t`.
Parameters
----------
prob : np.ndarray [shape=(n_steps,) or (n_states, n_steps)], non-negative
`prob[s, t]` is the probability of state `s` being active
conditional on the observation at time `t`.
Must be non-negative and less than 1.
If `prob` is 1-dimensional, it is expanded to shape `(1, n_steps)`.
transition : np.ndarray [shape=(2, 2) or (n_states, 2, 2)], non-negative
If 2-dimensional, the same transition matrix is applied to each sub-problem.
`transition[0, i]` is the probability of the state going from inactive to `i`,
`transition[1, i]` is the probability of the state going from active to `i`.
Each row must sum to 1.
If 3-dimensional, `transition[s]` is interpreted as the 2x2 transition matrix
for state label `s`.
p_state : np.ndarray [shape=(n_states,)]
Optional: marginal probability for each state (between [0,1]).
If not provided, a uniform distribution (0.5 for each state)
is assumed.
p_init : np.ndarray [shape=(n_states,)]
Optional: initial state distribution.
If not provided, it is assumed to be uniform.
return_logp : bool
If `True`, return the log-likelihood of the state sequence.
Returns
-------
Either `states` or `(states, logp)`:
states : np.ndarray [shape=(n_states, n_steps)]
The most likely state sequence.
logp : np.ndarray [shape=(n_states,)]
If `return_logp=True`, the log probability of each state activation
sequence `states`
See Also
--------
viterbi : Viterbi decoding from observation likelihoods
viterbi_discriminative : Viterbi decoding for discriminative (mutually exclusive) state predictions
Examples
--------
In this example, we have a sequence of binary state likelihoods that we want to de-noise
under the assumption that state changes are relatively uncommon. Positive predictions
should only be retained if they persist for multiple steps, and any transient predictions
should be considered as errors. This use case arises frequently in problems such as
instrument recognition, where state activations tend to be stable over time, but subject
to abrupt changes (e.g., when an instrument joins the mix).
We assume that the 0 state has a self-transition probability of 90%, and the 1 state
has a self-transition probability of 70%. We assume the marginal and initial
probability of either state is 50%.
>>> trans = np.array([[0.9, 0.1], [0.3, 0.7]])
>>> prob = np.array([0.1, 0.7, 0.4, 0.3, 0.8, 0.9, 0.8, 0.2, 0.6, 0.3])
>>> librosa.sequence.viterbi_binary(prob, trans, p_state=0.5, p_init=0.5)
array([[0, 0, 0, 0, 1, 1, 1, 0, 0, 0]]) | [
"Viterbi",
"decoding",
"from",
"binary",
"(",
"multi",
"-",
"label",
")",
"discriminative",
"state",
"predictions",
"."
] | python | test |
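Each per-label sub-problem that `viterbi_binary` hands to `viterbi_discriminative` is an ordinary two-state Viterbi decode, and with uniform `p_state` and `p_init` (0.5 each, as in the docstring example) the scaling terms are constant and cancel. A pure-Python sketch of that binary decode which reproduces the docstring example:

```python
import math

def viterbi_two_state(prob, trans):
    # prob[t]: P(state is active | observation t); trans[r][s]: P(r -> s).
    # Uniform p_state/p_init contribute constants, so they are omitted.
    log = math.log
    delta = [log(1 - prob[0]), log(prob[0])]  # best log-score per state
    back = []                                 # argmax predecessors
    for p in prob[1:]:
        emit = [log(1 - p), log(p)]
        step, new = [], []
        for s in (0, 1):
            cand = [delta[r] + log(trans[r][s]) for r in (0, 1)]
            best = 0 if cand[0] >= cand[1] else 1
            step.append(best)
            new.append(cand[best] + emit[s])
        back.append(step)
        delta = new
    state = 0 if delta[0] >= delta[1] else 1
    path = [state]
    for step in reversed(back):               # backtrack
        state = step[state]
        path.append(state)
    return path[::-1]

trans = [[0.9, 0.1], [0.3, 0.7]]
prob = [0.1, 0.7, 0.4, 0.3, 0.8, 0.9, 0.8, 0.2, 0.6, 0.3]
print(viterbi_two_state(prob, trans))  # → [0, 0, 0, 0, 1, 1, 1, 0, 0, 0]
```

The transient high probabilities (0.7 at t=1, 0.6 at t=8) are smoothed away, while the sustained run at t=4..6 survives — the de-noising behaviour the docstring describes.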
spacetelescope/stsci.tools | lib/stsci/tools/teal.py | https://github.com/spacetelescope/stsci.tools/blob/9a022503ad24ca54ce83331482dfa3ff6de9f403/lib/stsci/tools/teal.py#L304-L320 | def _isInstalled(fullFname):
""" Return True if the given file name is located in an
installed area (versus a user-owned file) """
if not fullFname: return False
if not os.path.exists(fullFname): return False
instAreas = []
try:
import site
instAreas = site.getsitepackages()
except:
pass # python 2.6 and lower don't have site.getsitepackages()
if len(instAreas) < 1:
instAreas = [ os.path.dirname(os.__file__) ]
for ia in instAreas:
if fullFname.find(ia) >= 0:
return True
return False | [
"def",
"_isInstalled",
"(",
"fullFname",
")",
":",
"if",
"not",
"fullFname",
":",
"return",
"False",
"if",
"not",
"os",
".",
"path",
".",
"exists",
"(",
"fullFname",
")",
":",
"return",
"False",
"instAreas",
"=",
"[",
"]",
"try",
":",
"import",
"site",... | Return True if the given file name is located in an
installed area (versus a user-owned file) | [
"Return",
"True",
"if",
"the",
"given",
"file",
"name",
"is",
"located",
"in",
"an",
"installed",
"area",
"(",
"versus",
"a",
"user",
"-",
"owned",
"file",
")"
] | python | train |
AlexMathew/scrapple | scrapple/commands/generate.py | https://github.com/AlexMathew/scrapple/blob/eeb604601b155d6cc7e035855ff4d3f48f8bed74/scrapple/commands/generate.py#L28-L67 | def execute_command(self):
"""
The generate command uses `Jinja2 <http://jinja.pocoo.org/>`_ templates \
to create Python scripts, according to the specification in the configuration \
file. The predefined templates use the extract_content() method of the \
:ref:`selector classes <implementation-selectors>` to implement linear extractors \
and use recursive for loops to implement multiple levels of link crawlers. This \
implementation is effectively a representation of the traverse_next() \
:ref:`utility function <implementation-utils>`, using the loop depth to \
differentiate between levels of the crawler execution.
According to the --output_type argument in the CLI input, the results are \
written into a JSON document or a CSV document.
The Python script is written into <output_filename>.py - running this file \
is the equivalent of using the Scrapple :ref:`run command <command-run>`.
"""
print(Back.GREEN + Fore.BLACK + "Scrapple Generate")
print(Back.RESET + Fore.RESET)
directory = os.path.join(scrapple.__path__[0], 'templates', 'scripts')
with open(os.path.join(directory, 'generate.txt'), 'r') as f:
template_content = f.read()
template = Template(template_content)
try:
with open(self.args['<projectname>'] + '.json', 'r') as f:
config = json.load(f)
if self.args['--output_type'] == 'csv':
from scrapple.utils.config import extract_fieldnames
config['fields'] = str(extract_fieldnames(config))
config['output_file'] = self.args['<output_filename>']
config['output_type'] = self.args['--output_type']
rendered = template.render(config=config)
with open(self.args['<output_filename>'] + '.py', 'w') as f:
f.write(rendered)
print(Back.WHITE + Fore.RED + self.args['<output_filename>'], \
".py has been created" + Back.RESET + Fore.RESET, sep="")
except IOError:
print(Back.WHITE + Fore.RED + self.args['<projectname>'], ".json does not ", \
"exist. Use ``scrapple genconfig``." + Back.RESET + Fore.RESET, sep="") | [
"def",
"execute_command",
"(",
"self",
")",
":",
"print",
"(",
"Back",
".",
"GREEN",
"+",
"Fore",
".",
"BLACK",
"+",
"\"Scrapple Generate\"",
")",
"print",
"(",
"Back",
".",
"RESET",
"+",
"Fore",
".",
"RESET",
")",
"directory",
"=",
"os",
".",
"path",
... | The generate command uses `Jinja2 <http://jinja.pocoo.org/>`_ templates \
to create Python scripts, according to the specification in the configuration \
file. The predefined templates use the extract_content() method of the \
:ref:`selector classes <implementation-selectors>` to implement linear extractors \
and use recursive for loops to implement multiple levels of link crawlers. This \
implementation is effectively a representation of the traverse_next() \
:ref:`utility function <implementation-utils>`, using the loop depth to \
differentiate between levels of the crawler execution.
According to the --output_type argument in the CLI input, the results are \
written into a JSON document or a CSV document.
The Python script is written into <output_filename>.py - running this file \
is the equivalent of using the Scrapple :ref:`run command <command-run>`. | [
"The",
"generate",
"command",
"uses",
"Jinja2",
"<http",
":",
"//",
"jinja",
".",
"pocoo",
".",
"org",
"/",
">",
"_",
"templates",
"\\",
"to",
"create",
"Python",
"scripts",
"according",
"to",
"the",
"specification",
"in",
"the",
"configuration",
"\\",
"fi... | python | train |
coreGreenberet/homematicip-rest-api | homematicip/home.py | https://github.com/coreGreenberet/homematicip-rest-api/blob/d4c8df53281577e01709f75cacb78b1a5a1d00db/homematicip/home.py#L388-L401 | def get_functionalHome(self, functionalHomeType: type) -> FunctionalHome:
""" gets the specified functionalHome
Args:
functionalHomeType (type): the type of the functionalHome which should be returned
Returns:
the FunctionalHome or None if it couldn't be found
"""
for x in self.functionalHomes:
if isinstance(x, functionalHomeType):
return x
return None | [
"def",
"get_functionalHome",
"(",
"self",
",",
"functionalHomeType",
":",
"type",
")",
"->",
"FunctionalHome",
":",
"for",
"x",
"in",
"self",
".",
"functionalHomes",
":",
"if",
"isinstance",
"(",
"x",
",",
"functionalHomeType",
")",
":",
"return",
"x",
"retu... | gets the specified functionalHome
Args:
functionalHomeType (type): the type of the functionalHome which should be returned
Returns:
the FunctionalHome or None if it couldn't be found | [
"gets",
"the",
"specified",
"functionalHome",
"Args",
":",
"functionalHome",
"(",
"type",
")",
":",
"the",
"type",
"of",
"the",
"functionalHome",
"which",
"should",
"be",
"returned",
"Returns",
":",
"the",
"FunctionalHome",
"or",
"None",
"if",
"it",
"couldn",
... | python | train |
omaraboumrad/mastool | mastool/practices.py | https://github.com/omaraboumrad/mastool/blob/0ec566de6717d03c6ec61affe5d1e9ff8d7e6ebd/mastool/practices.py#L55-L98 | def find_assign_to_builtin(node):
"""Finds assigning to built-ins"""
# The list of forbidden builtins is constant and not determined at
# runtime anymore. The reason behind this change is that certain
# modules (like `gettext` for instance) would mess with the
# builtins module making this practice yield false positives.
if sys.version_info.major == 3:
builtins = {"abs", "all", "any", "ascii", "bin", "bool",
"bytearray", "bytes", "callable", "chr",
"classmethod", "compile", "complex", "delattr",
"dict", "dir", "divmod", "enumerate", "eval",
"exec", "filter", "float", "format", "frozenset",
"getattr", "globals", "hasattr", "hash", "help",
"hex", "id", "__import__", "input", "int",
"isinstance", "issubclass", "iter", "len", "list",
"locals", "map", "max", "memoryview", "min",
"next", "object", "oct", "open", "ord", "pow",
"print", "property", "range", "repr", "reversed",
"round", "set", "setattr", "slice", "sorted",
"staticmethod", "str", "sum", "super", "tuple",
"type", "vars", "zip"}
else:
builtins = {"abs", "all", "any", "basestring", "bin", "bool",
"bytearray", "callable", "chr", "classmethod",
"cmp", "compile", "complex", "delattr", "dict",
"dir", "divmod", "enumerate", "eval", "execfile",
"file", "filter", "float", "format", "frozenset",
"getattr", "globals", "hasattr", "hash", "help",
"hex", "id", "import__", "input", "int",
"isinstance", "issubclass", "iter", "len", "list",
"locals", "long", "map", "max", "memoryview",
"min", "next", "object", "oct", "open", "ord",
"pow", "print", "property", "range", "raw_input",
"reduce", "reload", "repr", "reversed", "round",
"set", "setattr", "slice", "sorted",
"staticmethod", "str", "sum", "super", "tuple",
"type", "unichr", "unicode", "vars", "xrange",
"zip"}
return (
isinstance(node, ast.Assign)
and len(builtins & set(h.target_names(node.targets))) > 0
) | [
"def",
"find_assign_to_builtin",
"(",
"node",
")",
":",
"# The list of forbidden builtins is constant and not determined at",
"# runtime anymore. The reason behind this change is that certain",
"# modules (like `gettext` for instance) would mess with the",
"# builtins module making this practice y... | Finds assigning to built-ins | [
"Finds",
"assigning",
"to",
"built",
"-",
"ins"
] | python | train |
shad7/tvrenamer | tasks.py | https://github.com/shad7/tvrenamer/blob/7fb59cb02669357e73b7acb92dcb6d74fdff4654/tasks.py#L199-L206 | def publish(idx=None):
"""Publish packaged distributions to pypi index"""
if idx is None:
idx = ''
else:
idx = '-r ' + idx
run('python setup.py register {}'.format(idx))
run('twine upload {} dist/*.whl dist/*.egg dist/*.tar.gz'.format(idx)) | [
"def",
"publish",
"(",
"idx",
"=",
"None",
")",
":",
"if",
"idx",
"is",
"None",
":",
"idx",
"=",
"''",
"else",
":",
"idx",
"=",
"'-r '",
"+",
"idx",
"run",
"(",
"'python setup.py register {}'",
".",
"format",
"(",
"idx",
")",
")",
"run",
"(",
"'twi... | Publish packaged distributions to pypi index | [
"Publish",
"packaged",
"distributions",
"to",
"pypi",
"index"
] | python | train |
benedictpaten/sonLib | bioio.py | https://github.com/benedictpaten/sonLib/blob/1decb75bb439b70721ec776f685ce98e25217d26/bioio.py#L188-L197 | def popen(command, tempFile):
"""Runs a command and captures standard out in the given temp file.
"""
fileHandle = open(tempFile, 'w')
logger.debug("Running the command: %s" % command)
sts = subprocess.call(command, shell=True, stdout=fileHandle, bufsize=-1)
fileHandle.close()
if sts != 0:
raise RuntimeError("Command: %s exited with non-zero status %i" % (command, sts))
return sts | [
"def",
"popen",
"(",
"command",
",",
"tempFile",
")",
":",
"fileHandle",
"=",
"open",
"(",
"tempFile",
",",
"'w'",
")",
"logger",
".",
"debug",
"(",
"\"Running the command: %s\"",
"%",
"command",
")",
"sts",
"=",
"subprocess",
".",
"call",
"(",
"command",
... | Runs a command and captures standard out in the given temp file. | [
"Runs",
"a",
"command",
"and",
"captures",
"standard",
"out",
"in",
"the",
"given",
"temp",
"file",
"."
] | python | train |
diux-dev/ncluster | ncluster/aws_util.py | https://github.com/diux-dev/ncluster/blob/2fd359621896717197b479c7174d06d80df1529b/ncluster/aws_util.py#L86-L94 | def get_subnet_dict():
"""Returns dictionary of "availability zone" -> subnet for current VPC."""
subnet_dict = {}
vpc = get_vpc()
for subnet in vpc.subnets.all():
zone = subnet.availability_zone
assert zone not in subnet_dict, "More than one subnet in %s, why?" % (zone,)
subnet_dict[zone] = subnet
return subnet_dict | [
"def",
"get_subnet_dict",
"(",
")",
":",
"subnet_dict",
"=",
"{",
"}",
"vpc",
"=",
"get_vpc",
"(",
")",
"for",
"subnet",
"in",
"vpc",
".",
"subnets",
".",
"all",
"(",
")",
":",
"zone",
"=",
"subnet",
".",
"availability_zone",
"assert",
"zone",
"not",
... | Returns dictionary of "availability zone" -> subnet for current VPC. | [
"Returns",
"dictionary",
"of",
"availability",
"zone",
"-",
">",
"subnet",
"for",
"current",
"VPC",
"."
] | python | train |
UpCloudLtd/upcloud-python-api | upcloud_api/cloud_manager/firewall_mixin.py | https://github.com/UpCloudLtd/upcloud-python-api/blob/954b0ad7c4b932b2be31a95d88975f6b0eeac8ed/upcloud_api/cloud_manager/firewall_mixin.py#L64-L73 | def configure_firewall(self, server, firewall_rule_bodies):
"""
Helper for calling create_firewall_rule in series for a list of firewall_rule_bodies.
"""
server_uuid, server_instance = uuid_and_instance(server)
return [
self.create_firewall_rule(server_uuid, rule)
for rule in firewall_rule_bodies
] | [
"def",
"configure_firewall",
"(",
"self",
",",
"server",
",",
"firewall_rule_bodies",
")",
":",
"server_uuid",
",",
"server_instance",
"=",
"uuid_and_instance",
"(",
"server",
")",
"return",
"[",
"self",
".",
"create_firewall_rule",
"(",
"server_uuid",
",",
"rule"... | Helper for calling create_firewall_rule in series for a list of firewall_rule_bodies. | [
"Helper",
"for",
"calling",
"create_firewall_rule",
"in",
"series",
"for",
"a",
"list",
"of",
"firewall_rule_bodies",
"."
] | python | train |
jjjake/internetarchive | internetarchive/api.py | https://github.com/jjjake/internetarchive/blob/7c0c71bfe52490927a37ade15bd09b2733fea660/internetarchive/api.py#L214-L297 | def upload(identifier, files,
metadata=None,
headers=None,
access_key=None,
secret_key=None,
queue_derive=None,
verbose=None,
verify=None,
checksum=None,
delete=None,
retries=None,
retries_sleep=None,
debug=None,
request_kwargs=None,
**get_item_kwargs):
"""Upload files to an item. The item will be created if it does not exist.
:type identifier: str
:param identifier: The globally unique Archive.org identifier for a given item.
:param files: The filepaths or file-like objects to upload. This value can be an
iterable or a single file-like object or string.
:type metadata: dict
:param metadata: (optional) Metadata used to create a new item. If the item already
exists, the metadata will not be updated -- use ``modify_metadata``.
:type headers: dict
:param headers: (optional) Add additional HTTP headers to the request.
:type access_key: str
:param access_key: (optional) IA-S3 access_key to use when making the given request.
:type secret_key: str
:param secret_key: (optional) IA-S3 secret_key to use when making the given request.
:type queue_derive: bool
:param queue_derive: (optional) Set to False to prevent an item from being derived
after upload.
:type verbose: bool
:param verbose: (optional) Display upload progress.
:type verify: bool
:param verify: (optional) Verify local MD5 checksum matches the MD5 checksum of the
file received by IAS3.
:type checksum: bool
:param checksum: (optional) Skip uploading files based on checksum.
:type delete: bool
:param delete: (optional) Delete local file after the upload has been successfully
verified.
:type retries: int
:param retries: (optional) Number of times to retry the given request if S3 returns a
503 SlowDown error.
:type retries_sleep: int
:param retries_sleep: (optional) Amount of time to sleep between ``retries``.
:type debug: bool
:param debug: (optional) Set to True to print headers to stdout, and exit without
sending the upload request.
:param \*\*kwargs: Optional arguments that ``get_item`` takes.
:returns: A list of :py:class:`requests.Response` objects.
"""
item = get_item(identifier, **get_item_kwargs)
return item.upload(files,
metadata=metadata,
headers=headers,
access_key=access_key,
secret_key=secret_key,
queue_derive=queue_derive,
verbose=verbose,
verify=verify,
checksum=checksum,
delete=delete,
retries=retries,
retries_sleep=retries_sleep,
debug=debug,
request_kwargs=request_kwargs) | [
"def",
"upload",
"(",
"identifier",
",",
"files",
",",
"metadata",
"=",
"None",
",",
"headers",
"=",
"None",
",",
"access_key",
"=",
"None",
",",
"secret_key",
"=",
"None",
",",
"queue_derive",
"=",
"None",
",",
"verbose",
"=",
"None",
",",
"verify",
"... | Upload files to an item. The item will be created if it does not exist.
:type identifier: str
:param identifier: The globally unique Archive.org identifier for a given item.
:param files: The filepaths or file-like objects to upload. This value can be an
iterable or a single file-like object or string.
:type metadata: dict
:param metadata: (optional) Metadata used to create a new item. If the item already
exists, the metadata will not be updated -- use ``modify_metadata``.
:type headers: dict
:param headers: (optional) Add additional HTTP headers to the request.
:type access_key: str
:param access_key: (optional) IA-S3 access_key to use when making the given request.
:type secret_key: str
:param secret_key: (optional) IA-S3 secret_key to use when making the given request.
:type queue_derive: bool
:param queue_derive: (optional) Set to False to prevent an item from being derived
after upload.
:type verbose: bool
:param verbose: (optional) Display upload progress.
:type verify: bool
:param verify: (optional) Verify local MD5 checksum matches the MD5 checksum of the
file received by IAS3.
:type checksum: bool
:param checksum: (optional) Skip uploading files based on checksum.
:type delete: bool
:param delete: (optional) Delete local file after the upload has been successfully
verified.
:type retries: int
:param retries: (optional) Number of times to retry the given request if S3 returns a
503 SlowDown error.
:type retries_sleep: int
:param retries_sleep: (optional) Amount of time to sleep between ``retries``.
:type debug: bool
:param debug: (optional) Set to True to print headers to stdout, and exit without
sending the upload request.
:param \*\*kwargs: Optional arguments that ``get_item`` takes.
:returns: A list of :py:class:`requests.Response` objects. | [
"Upload",
"files",
"to",
"an",
"item",
".",
"The",
"item",
"will",
"be",
"created",
"if",
"it",
"does",
"not",
"exist",
"."
] | python | train |
ArchiveTeam/wpull | wpull/protocol/ftp/command.py | https://github.com/ArchiveTeam/wpull/blob/ddf051aa3322479325ba20aa778cb2cb97606bf5/wpull/protocol/ftp/command.py#L206-L220 | def size(self, filename: str) -> int:
'''Get size of file.
Coroutine.
'''
yield from self._control_stream.write_command(Command('SIZE', filename))
reply = yield from self._control_stream.read_reply()
self.raise_if_not_match('File size', ReplyCodes.file_status, reply)
try:
return int(reply.text.strip())
except ValueError:
return | [
"def",
"size",
"(",
"self",
",",
"filename",
":",
"str",
")",
"->",
"int",
":",
"yield",
"from",
"self",
".",
"_control_stream",
".",
"write_command",
"(",
"Command",
"(",
"'SIZE'",
",",
"filename",
")",
")",
"reply",
"=",
"yield",
"from",
"self",
".",... | Get size of file.
Coroutine. | [
"Get",
"size",
"of",
"file",
"."
] | python | train |
streamlink/streamlink | src/streamlink_cli/main.py | https://github.com/streamlink/streamlink/blob/c8ed1daff14ac03195870238b9b900c1109dd5c1/src/streamlink_cli/main.py#L663-L682 | def setup_args(parser, config_files=[], ignore_unknown=False):
"""Parses arguments."""
global args
arglist = sys.argv[1:]
# Load arguments from config files
for config_file in filter(os.path.isfile, config_files):
arglist.insert(0, "@" + config_file)
args, unknown = parser.parse_known_args(arglist)
if unknown and not ignore_unknown:
msg = gettext('unrecognized arguments: %s')
parser.error(msg % ' '.join(unknown))
# Force lowercase to allow case-insensitive lookup
if args.stream:
args.stream = [stream.lower() for stream in args.stream]
if not args.url and args.url_param:
args.url = args.url_param | [
"def",
"setup_args",
"(",
"parser",
",",
"config_files",
"=",
"[",
"]",
",",
"ignore_unknown",
"=",
"False",
")",
":",
"global",
"args",
"arglist",
"=",
"sys",
".",
"argv",
"[",
"1",
":",
"]",
"# Load arguments from config files",
"for",
"config_file",
"in",... | Parses arguments. | [
"Parses",
"arguments",
"."
] | python | test |
hyperledger/indy-plenum | plenum/server/node.py | https://github.com/hyperledger/indy-plenum/blob/dcd144e238af7f17a869ffc9412f13dc488b7020/plenum/server/node.py#L3441-L3461 | def addNewRole(self, txn):
"""
Adds a new client or steward to this node based on transaction type.
"""
# If the client authenticator is a simple authenticator then add verkey.
# For a custom authenticator, handle appropriately.
# NOTE: The following code should not be used in production
if isinstance(self.clientAuthNr.core_authenticator, SimpleAuthNr):
txn_data = get_payload_data(txn)
identifier = txn_data[TARGET_NYM]
verkey = txn_data.get(VERKEY)
v = DidVerifier(verkey, identifier=identifier)
if identifier not in self.clientAuthNr.core_authenticator.clients:
role = txn_data.get(ROLE)
if role not in (STEWARD, TRUSTEE, None):
logger.debug("Role if present must be {} and not {}".
format(Roles.STEWARD.name, role))
return
self.clientAuthNr.core_authenticator.addIdr(identifier,
verkey=v.verkey,
role=role) | [
"def",
"addNewRole",
"(",
"self",
",",
"txn",
")",
":",
"# If the client authenticator is a simple authenticator then add verkey.",
"# For a custom authenticator, handle appropriately.",
"# NOTE: The following code should not be used in production",
"if",
"isinstance",
"(",
"self",
".... | Adds a new client or steward to this node based on transaction type. | [
"Adds",
"a",
"new",
"client",
"or",
"steward",
"to",
"this",
"node",
"based",
"on",
"transaction",
"type",
"."
] | python | train |
twilio/twilio-python | twilio/rest/taskrouter/v1/workspace/task/__init__.py | https://github.com/twilio/twilio-python/blob/c867895f55dcc29f522e6e8b8868d0d18483132f/twilio/rest/taskrouter/v1/workspace/task/__init__.py#L38-L83 | def stream(self, priority=values.unset, assignment_status=values.unset,
workflow_sid=values.unset, workflow_name=values.unset,
task_queue_sid=values.unset, task_queue_name=values.unset,
evaluate_task_attributes=values.unset, ordering=values.unset,
has_addons=values.unset, limit=None, page_size=None):
"""
Streams TaskInstance records from the API as a generator stream.
This operation lazily loads records as efficiently as possible until the limit
is reached.
The results are returned as a generator, so this operation is memory efficient.
:param unicode priority: Retrieve the list of all Tasks in the workspace with the specified priority.
:param unicode assignment_status: Returns the list of all Tasks in the workspace with the specified AssignmentStatus.
:param unicode workflow_sid: Returns the list of Tasks that are being controlled by the Workflow with the specified Sid value.
:param unicode workflow_name: Returns the list of Tasks that are being controlled by the Workflow with the specified FriendlyName value.
:param unicode task_queue_sid: Returns the list of Tasks that are currently waiting in the TaskQueue identified by the Sid specified.
:param unicode task_queue_name: Returns the list of Tasks that are currently waiting in the TaskQueue identified by the FriendlyName specified.
:param unicode evaluate_task_attributes: Provide a task attributes expression, and this will return tasks which match the attributes.
:param unicode ordering: Use this parameter to control the order of the Tasks returned.
:param bool has_addons: The has_addons
:param int limit: Upper limit for the number of records to return. stream()
guarantees to never return more than limit. Default is no limit
:param int page_size: Number of records to fetch per request, when not set will use
the default value of 50 records. If no page_size is defined
but a limit is defined, stream() will attempt to read the
limit with the most efficient page size, i.e. min(limit, 1000)
:returns: Generator that will yield up to limit results
:rtype: list[twilio.rest.taskrouter.v1.workspace.task.TaskInstance]
"""
limits = self._version.read_limits(limit, page_size)
page = self.page(
priority=priority,
assignment_status=assignment_status,
workflow_sid=workflow_sid,
workflow_name=workflow_name,
task_queue_sid=task_queue_sid,
task_queue_name=task_queue_name,
evaluate_task_attributes=evaluate_task_attributes,
ordering=ordering,
has_addons=has_addons,
page_size=limits['page_size'],
)
return self._version.stream(page, limits['limit'], limits['page_limit']) | [
"def",
"stream",
"(",
"self",
",",
"priority",
"=",
"values",
".",
"unset",
",",
"assignment_status",
"=",
"values",
".",
"unset",
",",
"workflow_sid",
"=",
"values",
".",
"unset",
",",
"workflow_name",
"=",
"values",
".",
"unset",
",",
"task_queue_sid",
"... | Streams TaskInstance records from the API as a generator stream.
This operation lazily loads records as efficiently as possible until the limit
is reached.
The results are returned as a generator, so this operation is memory efficient.
:param unicode priority: Retrieve the list of all Tasks in the workspace with the specified priority.
:param unicode assignment_status: Returns the list of all Tasks in the workspace with the specified AssignmentStatus.
:param unicode workflow_sid: Returns the list of Tasks that are being controlled by the Workflow with the specified Sid value.
:param unicode workflow_name: Returns the list of Tasks that are being controlled by the Workflow with the specified FriendlyName value.
:param unicode task_queue_sid: Returns the list of Tasks that are currently waiting in the TaskQueue identified by the Sid specified.
:param unicode task_queue_name: Returns the list of Tasks that are currently waiting in the TaskQueue identified by the FriendlyName specified.
:param unicode evaluate_task_attributes: Provide a task attributes expression, and this will return tasks which match the attributes.
:param unicode ordering: Use this parameter to control the order of the Tasks returned.
:param bool has_addons: The has_addons
:param int limit: Upper limit for the number of records to return. stream()
guarantees to never return more than limit. Default is no limit
:param int page_size: Number of records to fetch per request, when not set will use
the default value of 50 records. If no page_size is defined
but a limit is defined, stream() will attempt to read the
limit with the most efficient page size, i.e. min(limit, 1000)
:returns: Generator that will yield up to limit results
:rtype: list[twilio.rest.taskrouter.v1.workspace.task.TaskInstance] | [
"Streams",
"TaskInstance",
"records",
"from",
"the",
"API",
"as",
"a",
"generator",
"stream",
".",
"This",
"operation",
"lazily",
"loads",
"records",
"as",
"efficiently",
"as",
"possible",
"until",
"the",
"limit",
"is",
"reached",
".",
"The",
"results",
"are",... | python | train |
uw-it-aca/uw-restclients-canvas | uw_canvas/external_tools.py | https://github.com/uw-it-aca/uw-restclients-canvas/blob/9845faf33d49a8f06908efc22640c001116d6ea2/uw_canvas/external_tools.py#L31-L42 | def get_external_tools_in_course(self, course_id, params={}):
"""
Return external tools for the passed canvas course id.
https://canvas.instructure.com/doc/api/external_tools.html#method.external_tools.index
"""
url = COURSES_API.format(course_id) + "/external_tools"
external_tools = []
for data in self._get_paged_resource(url, params=params):
external_tools.append(data)
return external_tools | [
"def",
"get_external_tools_in_course",
"(",
"self",
",",
"course_id",
",",
"params",
"=",
"{",
"}",
")",
":",
"url",
"=",
"COURSES_API",
".",
"format",
"(",
"course_id",
")",
"+",
"\"/external_tools\"",
"external_tools",
"=",
"[",
"]",
"for",
"data",
"in",
... | Return external tools for the passed canvas course id.
https://canvas.instructure.com/doc/api/external_tools.html#method.external_tools.index | [
"Return",
"external",
"tools",
"for",
"the",
"passed",
"canvas",
"course",
"id",
"."
] | python | test |
twilio/twilio-python | twilio/rest/preview/wireless/sim/__init__.py | https://github.com/twilio/twilio-python/blob/c867895f55dcc29f522e6e8b8868d0d18483132f/twilio/rest/preview/wireless/sim/__init__.py#L108-L145 | def page(self, status=values.unset, iccid=values.unset, rate_plan=values.unset,
e_id=values.unset, sim_registration_code=values.unset,
page_token=values.unset, page_number=values.unset,
page_size=values.unset):
"""
Retrieve a single page of SimInstance records from the API.
Request is executed immediately
:param unicode status: The status
:param unicode iccid: The iccid
:param unicode rate_plan: The rate_plan
:param unicode e_id: The e_id
:param unicode sim_registration_code: The sim_registration_code
:param str page_token: PageToken provided by the API
:param int page_number: Page Number, this value is simply for client state
:param int page_size: Number of records to return, defaults to 50
:returns: Page of SimInstance
:rtype: twilio.rest.preview.wireless.sim.SimPage
"""
params = values.of({
'Status': status,
'Iccid': iccid,
'RatePlan': rate_plan,
'EId': e_id,
'SimRegistrationCode': sim_registration_code,
'PageToken': page_token,
'Page': page_number,
'PageSize': page_size,
})
response = self._version.page(
'GET',
self._uri,
params=params,
)
return SimPage(self._version, response, self._solution) | [
"def",
"page",
"(",
"self",
",",
"status",
"=",
"values",
".",
"unset",
",",
"iccid",
"=",
"values",
".",
"unset",
",",
"rate_plan",
"=",
"values",
".",
"unset",
",",
"e_id",
"=",
"values",
".",
"unset",
",",
"sim_registration_code",
"=",
"values",
"."... | Retrieve a single page of SimInstance records from the API.
Request is executed immediately
:param unicode status: The status
:param unicode iccid: The iccid
:param unicode rate_plan: The rate_plan
:param unicode e_id: The e_id
:param unicode sim_registration_code: The sim_registration_code
:param str page_token: PageToken provided by the API
:param int page_number: Page Number, this value is simply for client state
:param int page_size: Number of records to return, defaults to 50
:returns: Page of SimInstance
:rtype: twilio.rest.preview.wireless.sim.SimPage | [
"Retrieve",
"a",
"single",
"page",
"of",
"SimInstance",
"records",
"from",
"the",
"API",
".",
"Request",
"is",
"executed",
"immediately"
] | python | train |
CamDavidsonPilon/lifelines | lifelines/plotting.py | https://github.com/CamDavidsonPilon/lifelines/blob/bdf6be6f1d10eea4c46365ee0ee6a47d8c30edf8/lifelines/plotting.py#L305-L379 | def plot_lifetimes(
durations,
event_observed=None,
entry=None,
left_truncated=False,
sort_by_duration=True,
event_observed_color="#A60628",
event_censored_color="#348ABD",
**kwargs
):
"""
Returns a lifetime plot, see examples: https://lifelines.readthedocs.io/en/latest/Survival%20Analysis%20intro.html#Censoring
Parameters
-----------
durations: (n,) numpy array or pd.Series
duration subject was observed for.
event_observed: (n,) numpy array or pd.Series
array of booleans: True if event observed, else False.
entry: (n,) numpy array or pd.Series
offsetting the births away from t=0. This could be from left-truncation, or delayed entry into study.
left_truncated: boolean
if entry is provided, and the data is left-truncated, this will display additional information in the plot to reflect this.
sort_by_duration: boolean
sort by the duration vector
event_observed_color: str
default: "#A60628"
event_censored_color: str
default: "#348ABD"
Returns
-------
ax
Examples
---------
>>> from lifelines.datasets import load_waltons
>>> from lifelines.plotting import plot_lifetimes
>>> T, E = load_waltons()["T"], load_waltons()["E"]
>>> ax = plot_lifetimes(T.loc[:50], event_observed=E.loc[:50])
"""
set_kwargs_ax(kwargs)
ax = kwargs.pop("ax")
N = durations.shape[0]
if N > 80:
warnings.warn("For less visual clutter, you may want to subsample to less than 80 individuals.")
if event_observed is None:
event_observed = np.ones(N, dtype=bool)
if entry is None:
entry = np.zeros(N)
assert durations.shape[0] == N
assert event_observed.shape[0] == N
if sort_by_duration:
# order by length of lifetimes;
ix = np.argsort(entry + durations, 0)
durations = durations[ix]
event_observed = event_observed[ix]
entry = entry[ix]
for i in range(N):
c = event_observed_color if event_observed[i] else event_censored_color
ax.hlines(i, entry[i], entry[i] + durations[i], color=c, lw=1.5)
if left_truncated:
ax.hlines(i, 0, entry[i], color=c, lw=1.0, linestyle="--")
m = "" if not event_observed[i] else "o"
ax.scatter(entry[i] + durations[i], i, color=c, marker=m, s=10)
ax.set_ylim(-0.5, N)
return ax | [
"def",
"plot_lifetimes",
"(",
"durations",
",",
"event_observed",
"=",
"None",
",",
"entry",
"=",
"None",
",",
"left_truncated",
"=",
"False",
",",
"sort_by_duration",
"=",
"True",
",",
"event_observed_color",
"=",
"\"#A60628\"",
",",
"event_censored_color",
"=",
... | Returns a lifetime plot, see examples: https://lifelines.readthedocs.io/en/latest/Survival%20Analysis%20intro.html#Censoring
Parameters
-----------
durations: (n,) numpy array or pd.Series
duration subject was observed for.
event_observed: (n,) numpy array or pd.Series
array of booleans: True if event observed, else False.
entry: (n,) numpy array or pd.Series
offsetting the births away from t=0. This could be from left-truncation, or delayed entry into study.
left_truncated: boolean
if entry is provided, and the data is left-truncated, this will display additional information in the plot to reflect this.
sort_by_duration: boolean
sort by the duration vector
event_observed_color: str
default: "#A60628"
event_censored_color: str
default: "#348ABD"
Returns
-------
ax
Examples
---------
>>> from lifelines.datasets import load_waltons
>>> from lifelines.plotting import plot_lifetimes
>>> T, E = load_waltons()["T"], load_waltons()["E"]
>>> ax = plot_lifetimes(T.loc[:50], event_observed=E.loc[:50]) | [
"Returns",
"a",
"lifetime",
"plot",
"see",
"examples",
":",
"https",
":",
"//",
"lifelines",
".",
"readthedocs",
".",
"io",
"/",
"en",
"/",
"latest",
"/",
"Survival%20Analysis%20intro",
".",
"html#Censoring"
] | python | train |
materialsproject/pymatgen | pymatgen/io/abinit/works.py | https://github.com/materialsproject/pymatgen/blob/4ca558cf72f8d5f8a1f21dfdfc0181a971c186da/pymatgen/io/abinit/works.py#L701-L713 | def build(self, *args, **kwargs):
"""Creates the top level directory."""
# Create the directories of the work.
self.indir.makedirs()
self.outdir.makedirs()
self.tmpdir.makedirs()
# Build dirs and files of each task.
for task in self:
task.build(*args, **kwargs)
# Connect signals within the work.
self.connect_signals() | [
"def",
"build",
"(",
"self",
",",
"*",
"args",
",",
"*",
"*",
"kwargs",
")",
":",
"# Create the directories of the work.",
"self",
".",
"indir",
".",
"makedirs",
"(",
")",
"self",
".",
"outdir",
".",
"makedirs",
"(",
")",
"self",
".",
"tmpdir",
".",
"m... | Creates the top level directory. | [
"Creates",
"the",
"top",
"level",
"directory",
"."
] | python | train |
pytroll/satpy | satpy/multiscene.py | https://github.com/pytroll/satpy/blob/1f21d20ac686b745fb0da9b4030d139893e066dd/satpy/multiscene.py#L250-L252 | def crop(self, *args, **kwargs):
"""Crop the multiscene and return a new cropped multiscene."""
return self._generate_scene_func(self._scenes, 'crop', True, *args, **kwargs) | [
"def",
"crop",
"(",
"self",
",",
"*",
"args",
",",
"*",
"*",
"kwargs",
")",
":",
"return",
"self",
".",
"_generate_scene_func",
"(",
"self",
".",
"_scenes",
",",
"'crop'",
",",
"True",
",",
"*",
"args",
",",
"*",
"*",
"kwargs",
")"
] | Crop the multiscene and return a new cropped multiscene. | [
"Crop",
"the",
"multiscene",
"and",
"return",
"a",
"new",
"cropped",
"multiscene",
"."
] | python | train |
jldantas/libmft | libmft/attribute.py | https://github.com/jldantas/libmft/blob/65a988605fe7663b788bd81dcb52c0a4eaad1549/libmft/attribute.py#L1852-L1867 | def _from_binary_secd_header(cls, binary_stream):
"""See base class."""
''' Revision number - 1
Padding - 1
Control flags - 2
Reference to the owner SID - 4 (offset relative to the header)
Reference to the group SID - 4 (offset relative to the header)
Reference to the DACL - 4 (offset relative to the header)
Reference to the SACL - 4 (offset relative to the header)
'''
nw_obj = cls(cls._REPR.unpack(binary_stream))
nw_obj.control_flags = SecurityDescriptorFlags(nw_obj.control_flags)
_MOD_LOGGER.debug("Attempted to unpack Security Descriptor Header from \"%s\"\nResult: %s", binary_stream.tobytes(), nw_obj)
return nw_obj | [
"def",
"_from_binary_secd_header",
"(",
"cls",
",",
"binary_stream",
")",
":",
"''' Revision number - 1\n Padding - 1\n Control flags - 2\n Reference to the owner SID - 4 (offset relative to the header)\n Reference to the group SID - 4 (offset relative to the header)\n ... | See base class. | [
"See",
"base",
"class",
"."
] | python | train |
PatrikValkovic/grammpy | grammpy/transforms/UnitRulesRemove/find_symbols_reachable_by_unit_rules.py | https://github.com/PatrikValkovic/grammpy/blob/879ce0ef794ac2823acc19314fcd7a8aba53e50f/grammpy/transforms/UnitRulesRemove/find_symbols_reachable_by_unit_rules.py#L64-L74 | def path_rules(self, from_symbol, to_symbol):
# type: (Type[Nonterminal], Type[Nonterminal]) -> List[Type[Rule]]
"""
Get sequence of unit rules between first and second parameter.
:param from_symbol: From which symbol.
:param to_symbol: To which symbol.
:return: Sequence of unit rules. Empty sequence means there is no way between them.
"""
if from_symbol not in self.t or to_symbol not in self.t:
return []
return self.f[self.t[from_symbol]][self.t[to_symbol]] or [] | [
"def",
"path_rules",
"(",
"self",
",",
"from_symbol",
",",
"to_symbol",
")",
":",
"# type: (Type[Nonterminal], Type[Nonterminal]) -> List[Type[Rule]]",
"if",
"from_symbol",
"not",
"in",
"self",
".",
"t",
"or",
"to_symbol",
"not",
"in",
"self",
".",
"t",
":",
"retu... | Get sequence of unit rules between first and second parameter.
:param from_symbol: From which symbol.
:param to_symbol: To which symbol.
:return: Sequence of unit rules. Empty sequence means there is no way between them.
"Get",
"sequence",
"of",
"unit",
"rules",
"between",
"first",
"and",
"second",
"parameter",
".",
":",
"param",
"from_symbol",
":",
"From",
"which",
"symbol",
".",
":",
"param",
"to_symbol",
":",
"To",
"which",
"symbol",
".",
":",
"return",
":",
"Sequence",... | python | train |
Nic30/hwt | hwt/pyUtils/arrayQuery.py | https://github.com/Nic30/hwt/blob/8cbb399e326da3b22c233b98188a9d08dec057e6/hwt/pyUtils/arrayQuery.py#L117-L135 | def groupedby(collection, fn):
"""
same as itertools.groupby
:note: This function does not need initial sorting like itertools.groupby
:attention: Order of pairs is not deterministic.
"""
d = {}
for item in collection:
k = fn(item)
try:
arr = d[k]
except KeyError:
arr = []
d[k] = arr
arr.append(item)
yield from d.items() | [
"def",
"groupedby",
"(",
"collection",
",",
"fn",
")",
":",
"d",
"=",
"{",
"}",
"for",
"item",
"in",
"collection",
":",
"k",
"=",
"fn",
"(",
"item",
")",
"try",
":",
"arr",
"=",
"d",
"[",
"k",
"]",
"except",
"KeyError",
":",
"arr",
"=",
"[",
same as itertools.groupby
:note: This function does not need initial sorting like itertools.groupby
:attention: Order of pairs is not deterministic. | [
"same",
"like",
"itertools",
".",
"groupby"
] | python | test |
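The `groupedby` row above is plain-stdlib and self-contained, so it can be exercised directly; a minimal usage sketch (function body copied from the code column, docstring shortened):

```python
def groupedby(collection, fn):
    """Group items of collection by key fn; unlike itertools.groupby,
    the input does not need to be pre-sorted."""
    d = {}
    for item in collection:
        k = fn(item)
        try:
            arr = d[k]
        except KeyError:
            arr = []
            d[k] = arr
        arr.append(item)
    yield from d.items()

# Group words by their first letter; order of the yielded pairs is not guaranteed.
groups = dict(groupedby(["ant", "bee", "ape", "bat"], lambda w: w[0]))
```

Compared with `itertools.groupby`, this buffers every bucket in a dict in exchange for dropping the pre-sorting requirement.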
cloudera/cm_api | python/src/cm_api/endpoints/hosts.py | https://github.com/cloudera/cm_api/blob/5d2512375bd94684b4da36df9e0d9177865ffcbb/python/src/cm_api/endpoints/hosts.py#L47-L54 | def get_all_hosts(resource_root, view=None):
"""
Get all hosts
@param resource_root: The root Resource object.
@return: A list of ApiHost objects.
"""
return call(resource_root.get, HOSTS_PATH, ApiHost, True,
params=view and dict(view=view) or None) | [
"def",
"get_all_hosts",
"(",
"resource_root",
",",
"view",
"=",
"None",
")",
":",
"return",
"call",
"(",
"resource_root",
".",
"get",
",",
"HOSTS_PATH",
",",
"ApiHost",
",",
"True",
",",
"params",
"=",
"view",
"and",
"dict",
"(",
"view",
"=",
"view",
"... | Get all hosts
@param resource_root: The root Resource object.
@return: A list of ApiHost objects. | [
"Get",
"all",
"hosts"
] | python | train |
andreikop/qutepart | qutepart/syntax/parser.py | https://github.com/andreikop/qutepart/blob/109d76b239751318bcef06f39b2fbbf18687a40b/qutepart/syntax/parser.py#L948-L991 | def highlightBlock(self, text, prevContextStack):
"""Parse block and return ParseBlockFullResult
return (lineData, highlightedSegments)
where lineData is (contextStack, textTypeMap)
where textTypeMap is a string of textType characters
"""
if prevContextStack is not None:
contextStack = prevContextStack
else:
contextStack = self._defaultContextStack
highlightedSegments = []
lineContinue = False
currentColumnIndex = 0
textTypeMap = []
if len(text) > 0:
while currentColumnIndex < len(text):
_logger.debug('In context %s', contextStack.currentContext().name)
length, newContextStack, segments, textTypeMapPart, lineContinue = \
contextStack.currentContext().parseBlock(contextStack, currentColumnIndex, text)
highlightedSegments += segments
contextStack = newContextStack
textTypeMap += textTypeMapPart
currentColumnIndex += length
if not lineContinue:
while contextStack.currentContext().lineEndContext is not None:
oldStack = contextStack
contextStack = contextStack.currentContext().lineEndContext.getNextContextStack(contextStack)
if oldStack == contextStack: # avoid infinite while loop if nothing to switch
break
# this code is not tested, because lineBeginContext is not defined by any xml file
if contextStack.currentContext().lineBeginContext is not None:
contextStack = contextStack.currentContext().lineBeginContext.getNextContextStack(contextStack)
elif contextStack.currentContext().lineEmptyContext is not None:
contextStack = contextStack.currentContext().lineEmptyContext.getNextContextStack(contextStack)
lineData = (contextStack, textTypeMap)
return lineData, highlightedSegments | [
"def",
"highlightBlock",
"(",
"self",
",",
"text",
",",
"prevContextStack",
")",
":",
"if",
"prevContextStack",
"is",
"not",
"None",
":",
"contextStack",
"=",
"prevContextStack",
"else",
":",
"contextStack",
"=",
"self",
".",
"_defaultContextStack",
"highlightedSe... | Parse block and return ParseBlockFullResult
return (lineData, highlightedSegments)
where lineData is (contextStack, textTypeMap)
where textTypeMap is a string of textType characters | [
"Parse",
"block",
"and",
"return",
"ParseBlockFullResult"
] | python | train |
kejbaly2/metrique | metrique/utils.py | https://github.com/kejbaly2/metrique/blob/a10b076097441b7dde687949139f702f5c1e1b35/metrique/utils.py#L1003-L1037 | def read_file(rel_path, paths=None, raw=False, as_list=False, as_iter=False,
*args, **kwargs):
'''
find a file that lives somewhere within a set of paths and
return its contents. Default paths include 'static_dir'
'''
if not rel_path:
raise ValueError("rel_path can not be null!")
paths = str2list(paths)
# try looking the file up in a directory called static relative
# to SRC_DIR, eg assuming metrique git repo is in ~/metrique
# we'd look in ~/metrique/static
paths.extend([STATIC_DIR, os.path.join(SRC_DIR, 'static')])
paths = [os.path.expanduser(p) for p in set(paths)]
for path in paths:
path = os.path.join(path, rel_path)
logger.debug("trying to read: %s " % path)
if os.path.exists(path):
break
else:
raise IOError("path %s does not exist!" % rel_path)
args = args if args else ['rU']
fd = open(path, *args, **kwargs)
if raw:
return fd
if as_iter:
return read_in_chunks(fd)
else:
fd_lines = fd.readlines()
if as_list:
return fd_lines
else:
return ''.join(fd_lines) | [
"def",
"read_file",
"(",
"rel_path",
",",
"paths",
"=",
"None",
",",
"raw",
"=",
"False",
",",
"as_list",
"=",
"False",
",",
"as_iter",
"=",
"False",
",",
"*",
"args",
",",
"*",
"*",
"kwargs",
")",
":",
"if",
"not",
"rel_path",
":",
"raise",
"Value... | find a file that lives somewhere within a set of paths and
return its contents. Default paths include 'static_dir' | [
"find",
"a",
"file",
"that",
"lives",
"somewhere",
"within",
"a",
"set",
"of",
"paths",
"and",
"return",
"its",
"contents",
".",
"Default",
"paths",
"include",
"static_dir"
] | python | train |
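`read_file` above first resolves a relative path against a list of search directories before opening it; a stripped-down sketch of just that resolution step (the `STATIC_DIR`/`SRC_DIR` defaults and chunked reading are omitted, and `find_in_paths` is an illustrative name, not metrique's API):

```python
import os
import tempfile

def find_in_paths(rel_path, paths):
    """Return the first existing join of rel_path under any of paths."""
    if not rel_path:
        raise ValueError("rel_path can not be null!")
    for p in paths:
        candidate = os.path.join(os.path.expanduser(p), rel_path)
        if os.path.exists(candidate):
            return candidate
    raise IOError("path %s does not exist!" % rel_path)

# Demo: drop a file into a temp dir and resolve it through a search list.
demo_dir = tempfile.mkdtemp()
open(os.path.join(demo_dir, "notes.txt"), "w").close()
found = find_in_paths("notes.txt", ["/no/such/dir", demo_dir])
```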
Alignak-monitoring/alignak | alignak/external_command.py | https://github.com/Alignak-monitoring/alignak/blob/f3c145207e83159b799d3714e4241399c7740a64/alignak/external_command.py#L619-L673 | def search_host_and_dispatch(self, host_name, command, extcmd):
# pylint: disable=too-many-branches
"""Try to dispatch a command for a specific host (so specific scheduler)
because this command is related to a host (change notification interval for example)
:param host_name: host name to search
:type host_name: str
:param command: command line
:type command: str
:param extcmd: external command object (the object will be added to sched commands list)
:type extcmd: alignak.external_command.ExternalCommand
:return: None
"""
logger.debug("Calling search_host_and_dispatch for %s", host_name)
host_found = False
# If we are a receiver, just look in the receiver
if self.mode == 'receiver':
logger.debug("Receiver is searching a scheduler for the external command %s %s",
host_name, command)
scheduler_link = self.daemon.get_scheduler_from_hostname(host_name)
if scheduler_link:
host_found = True
logger.debug("Receiver pushing external command to scheduler %s",
scheduler_link.name)
scheduler_link.pushed_commands.append(extcmd)
else:
logger.warning("I did not find a scheduler for the host: %s", host_name)
else:
for cfg_part in list(self.cfg_parts.values()):
if cfg_part.hosts.find_by_name(host_name) is not None:
logger.debug("Host %s found in a configuration", host_name)
if cfg_part.is_assigned:
host_found = True
scheduler_link = cfg_part.scheduler_link
logger.debug("Sending command to the scheduler %s", scheduler_link.name)
scheduler_link.push_external_commands([command])
# scheduler_link.my_daemon.external_commands.append(command)
break
else:
logger.warning("Problem: the host %s was found in a configuration, "
"but this configuration is not assigned to any scheduler!",
host_name)
if not host_found:
if self.accept_passive_unknown_check_results:
brok = self.get_unknown_check_result_brok(command)
if brok:
self.send_an_element(brok)
else:
logger.warning("External command was received for the host '%s', "
"but the host could not be found! Command is: %s",
host_name, command)
else:
logger.warning("External command was received for host '%s', "
"but the host could not be found!", host_name) | [
"def",
"search_host_and_dispatch",
"(",
"self",
",",
"host_name",
",",
"command",
",",
"extcmd",
")",
":",
"# pylint: disable=too-many-branches",
"logger",
".",
"debug",
"(",
"\"Calling search_host_and_dispatch for %s\"",
",",
"host_name",
")",
"host_found",
"=",
"False... | Try to dispatch a command for a specific host (so specific scheduler)
because this command is related to a host (change notification interval for example)
:param host_name: host name to search
:type host_name: str
:param command: command line
:type command: str
:param extcmd: external command object (the object will be added to sched commands list)
:type extcmd: alignak.external_command.ExternalCommand
:return: None | [
"Try",
"to",
"dispatch",
"a",
"command",
"for",
"a",
"specific",
"host",
"(",
"so",
"specific",
"scheduler",
")",
"because",
"this",
"command",
"is",
"related",
"to",
"a",
"host",
"(",
"change",
"notification",
"interval",
"for",
"example",
")"
] | python | train |
pvlib/pvlib-python | pvlib/iotools/midc.py | https://github.com/pvlib/pvlib-python/blob/2e844a595b820b43d1170269781fa66bd0ccc8a3/pvlib/iotools/midc.py#L67-L88 | def format_index(data):
"""Create DatetimeIndex for the Dataframe localized to the timezone provided
as the label of the second (time) column.
Parameters
----------
data: Dataframe
Must contain 'DATE (MM/DD/YYYY)' column, second column must be labeled
with the timezone and contain times in 'HH:MM' format.
Returns
-------
data: Dataframe
Dataframe with DatetimeIndex localized to the provided timezone.
"""
tz_raw = data.columns[1]
timezone = TZ_MAP.get(tz_raw, tz_raw)
datetime = data['DATE (MM/DD/YYYY)'] + data[tz_raw]
datetime = pd.to_datetime(datetime, format='%m/%d/%Y%H:%M')
data = data.set_index(datetime)
data = data.tz_localize(timezone)
return data | [
"def",
"format_index",
"(",
"data",
")",
":",
"tz_raw",
"=",
"data",
".",
"columns",
"[",
"1",
"]",
"timezone",
"=",
"TZ_MAP",
".",
"get",
"(",
"tz_raw",
",",
"tz_raw",
")",
"datetime",
"=",
"data",
"[",
"'DATE (MM/DD/YYYY)'",
"]",
"+",
"data",
"[",
... | Create DatetimeIndex for the Dataframe localized to the timezone provided
as the label of the second (time) column.
Parameters
----------
data: Dataframe
Must contain 'DATE (MM/DD/YYYY)' column, second column must be labeled
with the timezone and contain times in 'HH:MM' format.
Returns
-------
data: Dataframe
Dataframe with DatetimeIndex localized to the provided timezone. | [
"Create",
"DatetimeIndex",
"for",
"the",
"Dataframe",
"localized",
"to",
"the",
"timezone",
"provided",
"as",
"the",
"label",
"of",
"the",
"second",
"(",
"time",
")",
"column",
"."
] | python | train |
iotile/coretools | iotilegateway/iotilegateway/supervisor/client.py | https://github.com/iotile/coretools/blob/2d794f5f1346b841b0dcd16c9d284e9bf2f3c6ec/iotilegateway/iotilegateway/supervisor/client.py#L408-L424 | async def _on_status_change(self, update):
"""Update a service that has its status updated."""
info = update['payload']
new_number = info['new_status']
name = update['service']
if name not in self.services:
return
with self._state_lock:
is_changed = self.services[name].state != new_number
self.services[name].state = new_number
# Notify about this service state change if anyone is listening
if self._on_change_callback and is_changed:
self._on_change_callback(name, self.services[name].id, new_number, False, False) | [
"async",
"def",
"_on_status_change",
"(",
"self",
",",
"update",
")",
":",
"info",
"=",
"update",
"[",
"'payload'",
"]",
"new_number",
"=",
"info",
"[",
"'new_status'",
"]",
"name",
"=",
"update",
"[",
"'service'",
"]",
"if",
"name",
"not",
"in",
"self",... | Update a service that has its status updated. | [
"Update",
"a",
"service",
"that",
"has",
"its",
"status",
"updated",
"."
] | python | train |
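The handler above is a change-detection pattern: store the new state, but notify listeners only when it actually differs, and ignore services that were never registered. A synchronous, lock-free sketch of the same idea (class and callback names are illustrative, not the iotile API):

```python
class ServiceWatcher:
    """Track service states and fire a callback only on real changes."""
    def __init__(self, on_change=None):
        self.states = {}
        self.on_change = on_change

    def handle_update(self, name, new_state):
        if name not in self.states:
            return  # ignore unknown services, as the client above does
        changed = self.states[name] != new_state
        self.states[name] = new_state
        if self.on_change and changed:
            self.on_change(name, new_state)

events = []
w = ServiceWatcher(on_change=lambda n, s: events.append((n, s)))
w.states["db"] = "stopped"
w.handle_update("db", "running")   # state differs -> callback fires
w.handle_update("db", "running")   # same state -> no callback
w.handle_update("cache", "up")     # never registered -> ignored
```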
stevearc/dql | dql/engine.py | https://github.com/stevearc/dql/blob/e9d3aa22873076dae5ebd02e35318aa996b1e56a/dql/engine.py#L308-L342 | def _run(self, tree):
""" Run a query from a parse tree """
if tree.throttle:
limiter = self._parse_throttle(tree.table, tree.throttle)
self._query_rate_limit = limiter
del tree["throttle"]
return self._run(tree)
if tree.action == "SELECT":
return self._select(tree, self.allow_select_scan)
elif tree.action == "SCAN":
return self._scan(tree)
elif tree.action == "DELETE":
return self._delete(tree)
elif tree.action == "UPDATE":
return self._update(tree)
elif tree.action == "CREATE":
return self._create(tree)
elif tree.action == "INSERT":
return self._insert(tree)
elif tree.action == "DROP":
return self._drop(tree)
elif tree.action == "ALTER":
return self._alter(tree)
elif tree.action == "DUMP":
return self._dump(tree)
elif tree.action == "LOAD":
return self._load(tree)
elif tree.action == "EXPLAIN":
return self._explain(tree)
elif tree.action == "ANALYZE":
self._analyzing = True
self.connection.default_return_capacity = True
return self._run(tree[1])
else:
raise SyntaxError("Unrecognized action '%s'" % tree.action) | [
"def",
"_run",
"(",
"self",
",",
"tree",
")",
":",
"if",
"tree",
".",
"throttle",
":",
"limiter",
"=",
"self",
".",
"_parse_throttle",
"(",
"tree",
".",
"table",
",",
"tree",
".",
"throttle",
")",
"self",
".",
"_query_rate_limit",
"=",
"limiter",
"del"... | Run a query from a parse tree | [
"Run",
"a",
"query",
"from",
"a",
"parse",
"tree"
] | python | train |
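The long `if/elif` chain in `_run` is dispatch on `tree.action`; the same routing can be table-driven, sketched below with a plain dict (handler bodies are illustrative, not DQL's):

```python
def run_query(tree, handlers):
    """Dispatch a parse tree to the handler registered for its action."""
    try:
        handler = handlers[tree["action"]]
    except KeyError:
        raise SyntaxError("Unrecognized action '%s'" % tree["action"])
    return handler(tree)

# Hypothetical handlers standing in for _select, _drop, etc.
handlers = {
    "SELECT": lambda t: ("select", t["table"]),
    "DROP": lambda t: ("drop", t["table"]),
}
result = run_query({"action": "SELECT", "table": "foo"}, handlers)
```

The dict keeps each action's handler in one place and makes the "unrecognized action" failure mode a single `except` clause instead of a trailing `else`.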
bitesofcode/projex | projex/enum.py | https://github.com/bitesofcode/projex/blob/d31743ec456a41428709968ab11a2cf6c6c76247/projex/enum.py#L245-L253 | def toSet(self, flags):
"""
Generates the set of keys whose flag values are contained in the given flags.
:param flags: <int>
:return: <set>
"""
return {key for key, value in self.items() if value & flags} | [
"def",
"toSet",
"(",
"self",
",",
"flags",
")",
":",
"return",
"{",
"key",
"for",
"key",
",",
"value",
"in",
"self",
".",
"items",
"(",
")",
"if",
"value",
"&",
"flags",
"}"
Generates the set of keys whose flag values are contained in the given flags.
:param flags: <int>
:return: <set>
"Generates",
"a",
"flag",
"value",
"based",
"on",
"the",
"given",
"set",
"of",
"values",
"."
] | python | train |
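The one-line body of `toSet` decomposes an integer flags value against a name-to-bit mapping; a standalone sketch of the same set comprehension (the real `projex.enum` class is a dict-like object with more machinery — `COLORS` below is a hypothetical mapping):

```python
def flags_to_set(mapping, flags):
    """Return the keys in mapping whose bit value is present in flags."""
    return {key for key, value in mapping.items() if value & flags}

COLORS = {"Red": 1, "Green": 2, "Blue": 4}  # hypothetical flag mapping
names = flags_to_set(COLORS, 1 | 4)
```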
pre-commit/pre-commit | pre_commit/parse_shebang.py | https://github.com/pre-commit/pre-commit/blob/72f98d26e690da11dc2e41861d14c58eb21930cb/pre_commit/parse_shebang.py#L62-L78 | def normalize_cmd(cmd):
"""Fixes for the following issues on windows
- https://bugs.python.org/issue8557
- windows does not parse shebangs
This function also makes deep-path shebangs work just fine
"""
# Use PATH to determine the executable
exe = normexe(cmd[0])
# Figure out the shebang from the resulting command
cmd = parse_filename(exe) + (exe,) + cmd[1:]
# This could have given us back another bare executable
exe = normexe(cmd[0])
return (exe,) + cmd[1:] | [
"def",
"normalize_cmd",
"(",
"cmd",
")",
":",
"# Use PATH to determine the executable",
"exe",
"=",
"normexe",
"(",
"cmd",
"[",
"0",
"]",
")",
"# Figure out the shebang from the resulting command",
"cmd",
"=",
"parse_filename",
"(",
"exe",
")",
"+",
"(",
"exe",
",... | Fixes for the following issues on windows
- https://bugs.python.org/issue8557
- windows does not parse shebangs
This function also makes deep-path shebangs work just fine | [
"Fixes",
"for",
"the",
"following",
"issues",
"on",
"windows",
"-",
"https",
":",
"//",
"bugs",
".",
"python",
".",
"org",
"/",
"issue8557",
"-",
"windows",
"does",
"not",
"parse",
"shebangs"
] | python | train |
emory-libraries/eulxml | eulxml/xmlmap/core.py | https://github.com/emory-libraries/eulxml/blob/17d71c7d98c0cebda9932b7f13e72093805e1fe2/eulxml/xmlmap/core.py#L71-L96 | def loadSchema(uri, base_uri=None):
"""Load an XSD XML document (specified by filename or URL), and return a
:class:`lxml.etree.XMLSchema`.
"""
# uri to use for reporting errors - include base uri if any
if uri in _loaded_schemas:
return _loaded_schemas[uri]
error_uri = uri
if base_uri is not None:
error_uri += ' (base URI %s)' % base_uri
try:
logger.debug('Loading schema %s' % uri)
_loaded_schemas[uri] = etree.XMLSchema(etree.parse(uri,
parser=_get_xmlparser(),
base_url=base_uri))
return _loaded_schemas[uri]
except IOError as io_err:
# add a little more detail to the error message - but should still be an IO error
raise IOError('Failed to load schema %s : %s' % (error_uri, io_err))
except etree.XMLSchemaParseError as parse_err:
# re-raise as a schema parse error, but ensure includes details about schema being loaded
raise etree.XMLSchemaParseError('Failed to parse schema %s -- %s' % (error_uri, parse_err)) | [
"def",
"loadSchema",
"(",
"uri",
",",
"base_uri",
"=",
"None",
")",
":",
"# uri to use for reporting errors - include base uri if any",
"if",
"uri",
"in",
"_loaded_schemas",
":",
"return",
"_loaded_schemas",
"[",
"uri",
"]",
"error_uri",
"=",
"uri",
"if",
"base_uri"... | Load an XSD XML document (specified by filename or URL), and return a
:class:`lxml.etree.XMLSchema`. | [
"Load",
"an",
"XSD",
"XML",
"document",
"(",
"specified",
"by",
"filename",
"or",
"URL",
")",
"and",
"return",
"a",
":",
"class",
":",
"lxml",
".",
"etree",
".",
"XMLSchema",
"."
] | python | train |
timothyb0912/pylogit | pylogit/mixed_logit_calcs.py | https://github.com/timothyb0912/pylogit/blob/f83b0fd6debaa7358d87c3828428f6d4ead71357/pylogit/mixed_logit_calcs.py#L351-L475 | def calc_mixed_logit_gradient(params,
design_3d,
alt_IDs,
rows_to_obs,
rows_to_alts,
rows_to_mixers,
choice_vector,
utility_transform,
ridge=None,
weights=None):
"""
Parameters
----------
params : 1D ndarray.
All elements should be ints, floats, or longs. Should have 1 element
for each utility coefficient being estimated
(i.e. num_features + num_coefs_being_mixed).
design_3d : 3D ndarray.
All elements should be ints, floats, or longs. Should have one row per
observation per available alternative. The second axis should have as
many elements as there are draws from the mixing distributions of the
coefficients. The last axis should have one element per index
coefficient being estimated.
alt_IDs : 1D ndarray.
All elements should be ints. There should be one row per observation per
available alternative for the given observation. Elements denote the
alternative corresponding to the given row of the design matrix.
rows_to_obs : 2D scipy sparse array.
All elements should be zeros and ones. Should have one row per
observation per available alternative and one column per observation.
This matrix maps the rows of the design matrix to the unique
observations (on the columns).
rows_to_alts : 2D scipy sparse array.
All elements should be zeros and ones. Should have one row per
observation per available alternative and one column per possible
alternative. This matrix maps the rows of the design matrix to the
possible alternatives for this dataset.
rows_to_mixers : 2D scipy sparse array.
All elements should be zeros and ones. Will map the rows of the design
matrix to the particular units that the mixing is being performed over.
Note that in the case of panel data, this matrix will be different from
`rows_to_obs`.
choice_vector : 1D ndarray.
All elements should be either ones or zeros. There should be one row
per observation per available alternative for the given observation.
Elements denote the alternative which is chosen by the given
observation with a 1 and a zero otherwise.
utility_transform : callable.
Should accept a 1D array of systematic utility values, a 1D array of
alternative IDs, and miscellaneous args and kwargs. Should return a 2D
array whose elements contain the appropriately transformed systematic
utility values, based on the current model being evaluated and the
given draw of the random coefficients. There should be one column for
each draw of the random coefficients. There should be one row per
individual per choice situation per available alternative.
ridge : int, float, long, or None, optional.
Determines whether or not ridge regression is performed. If a float is
passed, then that float determines the ridge penalty for the
optimization. Default = None.
weights : 1D ndarray or None.
Allows for the calculation of weighted log-likelihoods. The weights can
represent various things. In stratified samples, the weights may be
the proportion of the observations in a given strata for a sample in
relation to the proportion of observations in that strata in the
population. In latent class models, the weights may be the probability
of being a particular class.
Returns
-------
gradient : ndarray of shape (design_3d.shape[2],).
The returned array is the gradient of the log-likelihood of the mixed
MNL model with respect to `params`.
"""
# Calculate the weights for the sample
if weights is None:
weights = np.ones(design_3d.shape[0])
# Calculate the regular probability array. Note the implicit assumption
# that params == index coefficients.
prob_array = general_calc_probabilities(params,
design_3d,
alt_IDs,
rows_to_obs,
rows_to_alts,
utility_transform,
return_long_probs=True)
# Calculate the simulated probability of correctly predicting each persons
# sequence of choices. Note that this function implicitly assumes that the
# mixing unit is the individual
prob_results = calc_choice_sequence_probs(prob_array,
choice_vector,
rows_to_mixers,
return_type="all")
# Calculate the sequence probabilities given random draws
# and calculate the overall simulated probabilities
sequence_prob_array = prob_results[1]
simulated_probs = prob_results[0]
# Convert the various probabilities to long format
long_sequence_prob_array = rows_to_mixers.dot(sequence_prob_array)
long_simulated_probs = rows_to_mixers.dot(simulated_probs)
# Scale sequence probabilites given random draws by simulated probabilities
scaled_sequence_probs = (long_sequence_prob_array /
long_simulated_probs[:, None])
# Calculate the scaled error. Will have shape == (num_rows, num_draws)
scaled_error = ((choice_vector[:, None] - prob_array) *
scaled_sequence_probs)
# Calculate the gradient. Note that the lines below assume that we are
# taking the gradient of an MNL model. Should refactor to make use of the
# built in gradient function for logit-type models. Should also refactor
# the gradient function for logit-type models to be able to handle 2D
# systematic utility arrays.
gradient = (scaled_error[:, :, None] *
design_3d *
weights[:, None, None]).sum(axis=0)
gradient = gradient.mean(axis=0)
# Account for the ridge parameter if an L2 penalization is being performed
if ridge is not None:
gradient -= 2 * ridge * params
return gradient.ravel() | [
"def",
"calc_mixed_logit_gradient",
"(",
"params",
",",
"design_3d",
",",
"alt_IDs",
",",
"rows_to_obs",
",",
"rows_to_alts",
",",
"rows_to_mixers",
",",
"choice_vector",
",",
"utility_transform",
",",
"ridge",
"=",
"None",
",",
"weights",
"=",
"None",
")",
":",... | Parameters
----------
params : 1D ndarray.
All elements should be ints, floats, or longs. Should have 1 element
for each utility coefficient being estimated
(i.e. num_features + num_coefs_being_mixed).
design_3d : 3D ndarray.
All elements should be ints, floats, or longs. Should have one row per
observation per available alternative. The second axis should have as
many elements as there are draws from the mixing distributions of the
coefficients. The last axis should have one element per index
coefficient being estimated.
alt_IDs : 1D ndarray.
All elements should be ints. There should be one row per observation per
available alternative for the given observation. Elements denote the
alternative corresponding to the given row of the design matrix.
rows_to_obs : 2D scipy sparse array.
All elements should be zeros and ones. Should have one row per
observation per available alternative and one column per observation.
This matrix maps the rows of the design matrix to the unique
observations (on the columns).
rows_to_alts : 2D scipy sparse array.
All elements should be zeros and ones. Should have one row per
observation per available alternative and one column per possible
alternative. This matrix maps the rows of the design matrix to the
possible alternatives for this dataset.
rows_to_mixers : 2D scipy sparse array.
All elements should be zeros and ones. Will map the rows of the design
matrix to the particular units that the mixing is being performed over.
Note that in the case of panel data, this matrix will be different from
`rows_to_obs`.
choice_vector : 1D ndarray.
All elements should be either ones or zeros. There should be one row
per observation per available alternative for the given observation.
Elements denote the alternative which is chosen by the given
observation with a 1 and a zero otherwise.
utility_transform : callable.
Should accept a 1D array of systematic utility values, a 1D array of
alternative IDs, and miscellaneous args and kwargs. Should return a 2D
array whose elements contain the appropriately transformed systematic
utility values, based on the current model being evaluated and the
given draw of the random coefficients. There should be one column for
each draw of the random coefficients. There should be one row per
individual per choice situation per available alternative.
ridge : int, float, long, or None, optional.
Determines whether or not ridge regression is performed. If a float is
passed, then that float determines the ridge penalty for the
optimization. Default = None.
weights : 1D ndarray or None.
Allows for the calculation of weighted log-likelihoods. The weights can
represent various things. In stratified samples, the weights may be
the proportion of the observations in a given strata for a sample in
relation to the proportion of observations in that strata in the
population. In latent class models, the weights may be the probability
of being a particular class.
Returns
-------
gradient : ndarray of shape (design_3d.shape[2],).
The returned array is the gradient of the log-likelihood of the mixed
MNL model with respect to `params`. | [
"Parameters",
"----------",
"params",
":",
"1D",
"ndarray",
".",
"All",
"elements",
"should",
"by",
"ints",
"floats",
"or",
"longs",
".",
"Should",
"have",
"1",
"element",
"for",
"each",
"utility",
"coefficient",
"being",
"estimated",
"(",
"i",
".",
"e",
"... | python | train |
facetoe/zenpy | zenpy/lib/api.py | https://github.com/facetoe/zenpy/blob/34c54c7e408b9ed01604ddf8b3422204c8bf31ea/zenpy/lib/api.py#L930-L936 | def organization_fields(self, organization):
"""
Retrieve the organization fields for this organization.
:param organization: Organization object or id
"""
return self._query_zendesk(self.endpoint.organization_fields, 'organization_field', id=organization) | [
"def",
"organization_fields",
"(",
"self",
",",
"organization",
")",
":",
"return",
"self",
".",
"_query_zendesk",
"(",
"self",
".",
"endpoint",
".",
"organization_fields",
",",
"'organization_field'",
",",
"id",
"=",
"organization",
")"
] | Retrieve the organization fields for this organization.
:param organization: Organization object or id | [
"Retrieve",
"the",
"organization",
"fields",
"for",
"this",
"organization",
"."
] | python | train |
ndrlslz/ternya | ternya/annotation.py | https://github.com/ndrlslz/ternya/blob/c05aec10029e645d63ff04313dbcf2644743481f/ternya/annotation.py#L82-L108 | def cinder(*arg):
"""
Cinder annotation for adding function to process cinder notification.
if event_type includes a wildcard, will put {pattern: function} into process_wildcard dict
else will put {event_type: function} into process dict
:param arg: event_type of notification
"""
check_event_type(Openstack.Cinder, *arg)
event_type = arg[0]
def decorator(func):
if event_type.find("*") != -1:
event_type_pattern = pre_compile(event_type)
cinder_customer_process_wildcard[event_type_pattern] = func
else:
cinder_customer_process[event_type] = func
log.info("add function {0} to process event_type:{1}".format(func.__name__, event_type))
@functools.wraps(func)
def wrapper(*args, **kwargs):
func(*args, **kwargs)
return wrapper
return decorator | [
"def",
"cinder",
"(",
"*",
"arg",
")",
":",
"check_event_type",
"(",
"Openstack",
".",
"Cinder",
",",
"*",
"arg",
")",
"event_type",
"=",
"arg",
"[",
"0",
"]",
"def",
"decorator",
"(",
"func",
")",
":",
"if",
"event_type",
".",
"find",
"(",
"\"*\"",
... | Cinder annotation for adding function to process cinder notification.
if event_type includes a wildcard, will put {pattern: function} into process_wildcard dict
else will put {event_type: function} into process dict
:param arg: event_type of notification | [
"Cinder",
"annotation",
"for",
"adding",
"function",
"to",
"process",
"cinder",
"notification",
"."
] | python | test |
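The decorator above registers a handler under an exact event type or, when the type contains `*`, under a compiled wildcard pattern. A minimal standalone sketch of that two-registry dispatch (using stdlib `fnmatch` in place of Ternya's `pre_compile`; names are illustrative):

```python
import fnmatch

exact_handlers = {}     # event_type -> function
wildcard_handlers = {}  # wildcard pattern -> function

def on_event(event_type):
    """Register a handler for an exact event type or a '*' wildcard pattern."""
    def decorator(func):
        if "*" in event_type:
            wildcard_handlers[event_type] = func
        else:
            exact_handlers[event_type] = func
        return func
    return decorator

def dispatch(event_type, payload):
    # Exact matches win; otherwise fall back to the wildcard registry.
    if event_type in exact_handlers:
        return exact_handlers[event_type](payload)
    for pattern, func in wildcard_handlers.items():
        if fnmatch.fnmatch(event_type, pattern):
            return func(payload)
    return None

@on_event("volume.create.end")
def on_create(payload):
    return ("create", payload)

@on_event("volume.delete.*")
def on_delete(payload):
    return ("delete", payload)
```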
helixyte/everest | everest/batch.py | https://github.com/helixyte/everest/blob/70c9b93c3061db5cb62428349d18b8fb8566411b/everest/batch.py#L35-L46 | def next(self):
"""
Returns the next batch for the batched sequence or `None`, if
this batch is already the last batch.
:rtype: :class:`Batch` instance or `None`.
"""
if self.start + self.size > self.total_size:
result = None
else:
result = Batch(self.start + self.size, self.size, self.total_size)
return result | [
"def",
"next",
"(",
"self",
")",
":",
"if",
"self",
".",
"start",
"+",
"self",
".",
"size",
">",
"self",
".",
"total_size",
":",
"result",
"=",
"None",
"else",
":",
"result",
"=",
"Batch",
"(",
"self",
".",
"start",
"+",
"self",
".",
"size",
",",... | Returns the next batch for the batched sequence or `None`, if
this batch is already the last batch.
:rtype: :class:`Batch` instance or `None`. | [
"Returns",
"the",
"next",
"batch",
"for",
"the",
"batched",
"sequence",
"or",
"None",
"if",
"this",
"batch",
"is",
"already",
"the",
"last",
"batch",
"."
] | python | train |
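`next` above can be exercised with a minimal reconstruction of the class: the `Batch(start, size, total_size)` constructor signature is inferred from the call inside the method body, and the real everest `Batch` carries additional attributes:

```python
class Batch:
    """Window of `size` items starting at `start` over `total_size` items."""
    def __init__(self, start, size, total_size):
        self.start = start
        self.size = size
        self.total_size = total_size

    def next(self):
        # Return the following batch, or None if this is already the last one.
        if self.start + self.size > self.total_size:
            return None
        return Batch(self.start + self.size, self.size, self.total_size)

# Walk all batches of 10 over a 25-item sequence.
b = Batch(0, 10, 25)
starts = []
while b is not None:
    starts.append(b.start)
    b = b.next()
```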
apple/turicreate | deps/src/libxml2-2.9.1/python/libxml2.py | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/deps/src/libxml2-2.9.1/python/libxml2.py#L3922-L3931 | def xpointerNewContext(self, doc, origin):
"""Create a new XPointer context """
if doc is None: doc__o = None
else: doc__o = doc._o
if origin is None: origin__o = None
else: origin__o = origin._o
ret = libxml2mod.xmlXPtrNewContext(doc__o, self._o, origin__o)
if ret is None:raise treeError('xmlXPtrNewContext() failed')
__tmp = xpathContext(_obj=ret)
return __tmp | [
"def",
"xpointerNewContext",
"(",
"self",
",",
"doc",
",",
"origin",
")",
":",
"if",
"doc",
"is",
"None",
":",
"doc__o",
"=",
"None",
"else",
":",
"doc__o",
"=",
"doc",
".",
"_o",
"if",
"origin",
"is",
"None",
":",
"origin__o",
"=",
"None",
"else",
... | Create a new XPointer context | [
"Create",
"a",
"new",
"XPointer",
"context"
] | python | train |
satellogic/telluric | telluric/features.py | https://github.com/satellogic/telluric/blob/e752cd3ee71e339f79717e526fde362e80055d9e/telluric/features.py#L287-L301 | def from_raster(cls, raster, properties, product='visual'):
"""Initialize a GeoFeature object with a GeoRaster
Parameters
----------
raster : GeoRaster
the raster in the feature
properties : dict
Properties.
product : str
product associated to the raster
"""
footprint = raster.footprint()
assets = raster.to_assets(product=product)
return cls(footprint, properties, assets) | [
"def",
"from_raster",
"(",
"cls",
",",
"raster",
",",
"properties",
",",
"product",
"=",
"'visual'",
")",
":",
"footprint",
"=",
"raster",
".",
"footprint",
"(",
")",
"assets",
"=",
"raster",
".",
"to_assets",
"(",
"product",
"=",
"product",
")",
"return... | Initialize a GeoFeature object with a GeoRaster
Parameters
----------
raster : GeoRaster
the raster in the feature
properties : dict
Properties.
product : str
product associated to the raster | [
"Initialize",
"a",
"GeoFeature",
"object",
"with",
"a",
"GeoRaster"
] | python | train |
QuantEcon/QuantEcon.py | quantecon/markov/core.py | https://github.com/QuantEcon/QuantEcon.py/blob/26a66c552f2a73967d7efb6e1f4b4c4985a12643/quantecon/markov/core.py#L389-L411 | def _compute_stationary(self):
"""
Store the stationary distributions in self._stationary_distributions.
"""
if self.is_irreducible:
if not self.is_sparse: # Dense
stationary_dists = gth_solve(self.P).reshape(1, self.n)
else: # Sparse
stationary_dists = \
gth_solve(self.P.toarray(),
overwrite=True).reshape(1, self.n)
else:
rec_classes = self.recurrent_classes_indices
stationary_dists = np.zeros((len(rec_classes), self.n))
for i, rec_class in enumerate(rec_classes):
P_rec_class = self.P[np.ix_(rec_class, rec_class)]
if self.is_sparse:
P_rec_class = P_rec_class.toarray()
stationary_dists[i, rec_class] = \
gth_solve(P_rec_class, overwrite=True)
self._stationary_dists = stationary_dists | [
"def",
"_compute_stationary",
"(",
"self",
")",
":",
"if",
"self",
".",
"is_irreducible",
":",
"if",
"not",
"self",
".",
"is_sparse",
":",
"# Dense",
"stationary_dists",
"=",
"gth_solve",
"(",
"self",
".",
"P",
")",
".",
"reshape",
"(",
"1",
",",
"self",... | Store the stationary distributions in self._stationary_distributions. | [
"Store",
"the",
"stationary",
"distributions",
"in",
"self",
".",
"_stationary_distributions",
"."
] | python | train |
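The fixed point that `_compute_stationary` stores can be illustrated without quantecon: the sketch below uses plain power iteration on `pi = pi @ P` for a small irreducible chain. This is only an illustration of the stationary property; quantecon's `gth_solve` uses the exact GTH elimination algorithm, not iteration:

```python
def stationary_by_iteration(P, n_iter=10_000):
    # Repeatedly left-multiply a distribution by the transition matrix
    # until it stops changing; the limit satisfies pi = pi @ P.
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(n_iter):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi


P = [[0.9, 0.1],
     [0.5, 0.5]]                 # irreducible two-state chain
pi = stationary_by_iteration(P)  # converges to (5/6, 1/6)
```

For this chain the balance condition `0.1 * pi[0] == 0.5 * pi[1]` gives the exact answer (5/6, 1/6), which the iteration reproduces to machine precision.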
SheffieldML/GPy | GPy/kern/src/ODE_t.py | https://github.com/SheffieldML/GPy/blob/54c32d79d289d622fb18b898aee65a2a431d90cf/GPy/kern/src/ODE_t.py#L92-L165 | def update_gradients_full(self, dL_dK, X, X2=None):
"""derivative of the covariance matrix with respect to the parameters."""
X,slices = X[:,:-1],index_to_slices(X[:,-1])
if X2 is None:
X2,slices2 = X,slices
K = np.zeros((X.shape[0], X.shape[0]))
else:
X2,slices2 = X2[:,:-1],index_to_slices(X2[:,-1])
vyt = self.variance_Yt
lyt = 1./(2*self.lengthscale_Yt)
tdist = (X[:,0][:,None] - X2[:,0][None,:])**2
ttdist = (X[:,0][:,None] - X2[:,0][None,:])
#rdist = [tdist,xdist]
rd=tdist.shape[0]
dka = np.zeros([rd,rd])
dkc = np.zeros([rd,rd])
dkYdvart = np.zeros([rd,rd])
dkYdlent = np.zeros([rd,rd])
dkdubias = np.zeros([rd,rd])
kyy = lambda tdist: np.exp(-lyt*(tdist))
dkyydlyt = lambda tdist: kyy(tdist)*(-tdist)
k1 = lambda tdist: (2*lyt - 4*lyt**2 * (tdist) )
k4 = lambda ttdist: 2*lyt*(ttdist)
dk1dlyt = lambda tdist: 2. - 4*2.*lyt*tdist
dk4dlyt = lambda ttdist: 2*(ttdist)
for i, s1 in enumerate(slices):
for j, s2 in enumerate(slices2):
for ss1 in s1:
for ss2 in s2:
if i==0 and j==0:
dkYdvart[ss1,ss2] = kyy(tdist[ss1,ss2])
dkYdlent[ss1,ss2] = vyt*dkyydlyt(tdist[ss1,ss2])
dkdubias[ss1,ss2] = 0
elif i==0 and j==1:
dkYdvart[ss1,ss2] = (k4(ttdist[ss1,ss2])+1)*kyy(tdist[ss1,ss2])
#dkYdvart[ss1,ss2] = ((2*lyt*ttdist[ss1,ss2])+1)*kyy(tdist[ss1,ss2])
dkYdlent[ss1,ss2] = vyt*dkyydlyt(tdist[ss1,ss2])* (k4(ttdist[ss1,ss2])+1.)+\
vyt*kyy(tdist[ss1,ss2])*(dk4dlyt(ttdist[ss1,ss2]))
#dkYdlent[ss1,ss2] = vyt*dkyydlyt(tdist[ss1,ss2])* (2*lyt*(ttdist[ss1,ss2])+1.)+\
#vyt*kyy(tdist[ss1,ss2])*(2*ttdist[ss1,ss2])
dkdubias[ss1,ss2] = 0
elif i==1 and j==1:
dkYdvart[ss1,ss2] = (k1(tdist[ss1,ss2]) + 1. )* kyy(tdist[ss1,ss2])
dkYdlent[ss1,ss2] = vyt*dkyydlyt(tdist[ss1,ss2])*( k1(tdist[ss1,ss2]) + 1. ) +\
vyt*kyy(tdist[ss1,ss2])*dk1dlyt(tdist[ss1,ss2])
dkdubias[ss1,ss2] = 1
else:
dkYdvart[ss1,ss2] = (-k4(ttdist[ss1,ss2])+1)*kyy(tdist[ss1,ss2])
#dkYdvart[ss1,ss2] = (-2*lyt*(ttdist[ss1,ss2])+1)*kyy(tdist[ss1,ss2])
dkYdlent[ss1,ss2] = vyt*dkyydlyt(tdist[ss1,ss2])* (-k4(ttdist[ss1,ss2])+1.)+\
vyt*kyy(tdist[ss1,ss2])*(-dk4dlyt(ttdist[ss1,ss2]) )
dkdubias[ss1,ss2] = 0
#dkYdlent[ss1,ss2] = vyt*dkyydlyt(tdist[ss1,ss2])* (-2*lyt*(ttdist[ss1,ss2])+1.)+\
#vyt*kyy(tdist[ss1,ss2])*(-2)*(ttdist[ss1,ss2])
self.variance_Yt.gradient = np.sum(dkYdvart * dL_dK)
self.lengthscale_Yt.gradient = np.sum(dkYdlent*(-0.5*self.lengthscale_Yt**(-2)) * dL_dK)
self.ubias.gradient = np.sum(dkdubias * dL_dK) | [
"def",
"update_gradients_full",
"(",
"self",
",",
"dL_dK",
",",
"X",
",",
"X2",
"=",
"None",
")",
":",
"X",
",",
"slices",
"=",
"X",
"[",
":",
",",
":",
"-",
"1",
"]",
",",
"index_to_slices",
"(",
"X",
"[",
":",
",",
"-",
"1",
"]",
")",
"if",... | derivative of the covariance matrix with respect to the parameters. | [
"derivative",
"of",
"the",
"covariance",
"matrix",
"with",
"respect",
"to",
"the",
"parameters",
"."
] | python | train |
linuxsoftware/ls.joyous | ls/joyous/models/events.py | https://github.com/linuxsoftware/ls.joyous/blob/316283140ca5171a68ad3170a5964fdc89be0b56/ls/joyous/models/events.py#L1083-L1088 | def _getFromDt(self):
"""
Get the datetime of the next event after or before now.
"""
myNow = timezone.localtime(timezone=self.tz)
return self.__after(myNow) or self.__before(myNow) | [
"def",
"_getFromDt",
"(",
"self",
")",
":",
"myNow",
"=",
"timezone",
".",
"localtime",
"(",
"timezone",
"=",
"self",
".",
"tz",
")",
"return",
"self",
".",
"__after",
"(",
"myNow",
")",
"or",
"self",
".",
"__before",
"(",
"myNow",
")"
] | Get the datetime of the next event after or before now. | [
"Get",
"the",
"datetime",
"of",
"the",
"next",
"event",
"after",
"or",
"before",
"now",
"."
] | python | train |
weso/CWR-DataApi | cwr/parser/encoder/cwrjson.py | https://github.com/weso/CWR-DataApi/blob/f3b6ba8308c901b6ab87073c155c08e30692333c/cwr/parser/encoder/cwrjson.py#L57-L70 | def _unicode_handler(obj):
"""
Transforms an unicode string into a UTF-8 equivalent.
:param obj: object to transform into it's UTF-8 equivalent
:return: the UTF-8 equivalent of the string
"""
try:
result = obj.isoformat()
except AttributeError:
raise TypeError("Unserializable object {} of type {}".format(obj,
type(obj)))
return result | [
"def",
"_unicode_handler",
"(",
"obj",
")",
":",
"try",
":",
"result",
"=",
"obj",
".",
"isoformat",
"(",
")",
"except",
"AttributeError",
":",
"raise",
"TypeError",
"(",
"\"Unserializable object {} of type {}\"",
".",
"format",
"(",
"obj",
",",
"type",
"(",
... | Transforms an unicode string into a UTF-8 equivalent.
:param obj: object to transform into it's UTF-8 equivalent
:return: the UTF-8 equivalent of the string | [
"Transforms",
"an",
"unicode",
"string",
"into",
"a",
"UTF",
"-",
"8",
"equivalent",
"."
] | python | train |
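Despite its name, the row's `_unicode_handler` is a `json.dumps` fallback for objects exposing `.isoformat()` (dates and datetimes). A self-contained sketch of the same pattern (the name `date_handler` is mine):

```python
import json
from datetime import date


def date_handler(obj):
    # Same fallback shape as the row: anything with .isoformat()
    # becomes an ISO-8601 string; everything else stays unserializable.
    try:
        return obj.isoformat()
    except AttributeError:
        raise TypeError(
            "Unserializable object {} of type {}".format(obj, type(obj)))


payload = json.dumps({"submitted": date(2020, 1, 2)}, default=date_handler)
# payload == '{"submitted": "2020-01-02"}'
```

The `default=` hook is only consulted for objects the encoder cannot handle natively, so ordinary strings and numbers in the same document are unaffected.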
btr1975/ipaddresstools | ipaddresstools/ipaddresstools.py | https://github.com/btr1975/ipaddresstools/blob/759e91b3fdf1e8a8eea55337b09f53a94ad2016e/ipaddresstools/ipaddresstools.py#L225-L246 | def mcast_ip(ip_addr, return_tuple=True):
"""
Function to check if a address is multicast
Args:
ip_addr: Multicast IP address in the following format 239.1.1.1
return_tuple: Set to True it returns a IP, set to False returns True or False
Returns: see return_tuple for return options
"""
regex_mcast_ip = __re.compile("^(((2[2-3][4-9])|(23[0-3]))\.((25[0-5])|(2[0-4][0-9])|(1[0-9][0-9])|([1-9]?[0-9]))\.((25[0-5])|(2[0-4][0-9])|(1[0-9][0-9])|([1-9]?[0-9]))\.((25[0-5])|(2[0-4][0-9])|(1[0-9][0-9])|([1-9]?[0-9])))$")
if return_tuple:
while not regex_mcast_ip.match(ip_addr):
print("Not a good multicast IP.")
print("Please try again.")
ip_addr = input("Please enter a multicast IP address in the following format x.x.x.x: ")
return ip_addr
elif not return_tuple:
if not regex_mcast_ip.match(ip_addr):
return False
else:
return True | [
"def",
"mcast_ip",
"(",
"ip_addr",
",",
"return_tuple",
"=",
"True",
")",
":",
"regex_mcast_ip",
"=",
"__re",
".",
"compile",
"(",
"\"^(((2[2-3][4-9])|(23[0-3]))\\.((25[0-5])|(2[0-4][0-9])|(1[0-9][0-9])|([1-9]?[0-9]))\\.((25[0-5])|(2[0-4][0-9])|(1[0-9][0-9])|([1-9]?[0-9]))\\.((25[0-5... | Function to check if a address is multicast
Args:
ip_addr: Multicast IP address in the following format 239.1.1.1
return_tuple: Set to True it returns a IP, set to False returns True or False
Returns: see return_tuple for return options | [
"Function",
"to",
"check",
"if",
"a",
"address",
"is",
"multicast",
"Args",
":",
"ip_addr",
":",
"Multicast",
"IP",
"address",
"in",
"the",
"following",
"format",
"239",
".",
"1",
".",
"1",
".",
"1",
"return_tuple",
":",
"Set",
"to",
"True",
"it",
"ret... | python | train |
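The hand-written regex above encodes the IPv4 multicast range 224.0.0.0 through 239.255.255.255. The stdlib can perform the same membership test, which is how a minimal alternative might look (this is not part of the ipaddresstools package):

```python
import ipaddress


def is_mcast_ip(ip_addr):
    # IPv4 multicast is exactly 224.0.0.0/4, which ipaddress knows
    # natively; malformed input simply fails the check.
    try:
        return ipaddress.ip_address(ip_addr).is_multicast
    except ValueError:
        return False


print(is_mcast_ip("239.1.1.1"))   # True
print(is_mcast_ip("10.0.0.1"))    # False
print(is_mcast_ip("not an ip"))   # False
```

Unlike the row's `return_tuple=True` path, this sketch never prompts interactively; it only returns a boolean.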
manns/pyspread | pyspread/src/lib/vlc.py | https://github.com/manns/pyspread/blob/0e2fd44c2e0f06605efc3058c20a43a8c1f9e7e0/pyspread/src/lib/vlc.py#L4848-L4856 | def libvlc_media_list_player_is_playing(p_mlp):
'''Is media list playing?
@param p_mlp: media list player instance.
@return: true for playing and false for not playing \libvlc_return_bool.
'''
f = _Cfunctions.get('libvlc_media_list_player_is_playing', None) or \
_Cfunction('libvlc_media_list_player_is_playing', ((1,),), None,
ctypes.c_int, MediaListPlayer)
return f(p_mlp) | [
"def",
"libvlc_media_list_player_is_playing",
"(",
"p_mlp",
")",
":",
"f",
"=",
"_Cfunctions",
".",
"get",
"(",
"'libvlc_media_list_player_is_playing'",
",",
"None",
")",
"or",
"_Cfunction",
"(",
"'libvlc_media_list_player_is_playing'",
",",
"(",
"(",
"1",
",",
")",... | Is media list playing?
@param p_mlp: media list player instance.
@return: true for playing and false for not playing \libvlc_return_bool. | [
"Is",
"media",
"list",
"playing?"
] | python | train |
Karaage-Cluster/karaage | karaage/datastores/ldap.py | https://github.com/Karaage-Cluster/karaage/blob/2f4c8b4e2d728b3fcbb151160c49000f1c04f5c9/karaage/datastores/ldap.py#L246-L259 | def get_group_details(self, group):
""" Get the group details. """
result = {}
try:
lgroup = self._get_group(group.name)
lgroup = preload(lgroup, database=self._database)
except ObjectDoesNotExist:
return result
for i, j in lgroup.items():
if j is not None:
result[i] = j
return result | [
"def",
"get_group_details",
"(",
"self",
",",
"group",
")",
":",
"result",
"=",
"{",
"}",
"try",
":",
"lgroup",
"=",
"self",
".",
"_get_group",
"(",
"group",
".",
"name",
")",
"lgroup",
"=",
"preload",
"(",
"lgroup",
",",
"database",
"=",
"self",
"."... | Get the group details. | [
"Get",
"the",
"group",
"details",
"."
] | python | train |
drastus/unicover | unicover/unicover.py | https://github.com/drastus/unicover/blob/4702d0151c63d525c25718a838396afe62302255/unicover/unicover.py#L83-L118 | def chars(self, font, block):
"""
Analyses characters in single font or all fonts.
"""
if font:
font_files = self._getFont(font)
else:
font_files = fc.query()
code_points = self._getFontChars(font_files)
if not block:
blocks = all_blocks
ranges_column = map(itemgetter(3), blocks)
overlapped = Ranges(code_points).getOverlappedList(ranges_column)
else:
blocks = [block]
overlapped = [Ranges(code_points).getOverlapped(block[3])]
if self._display['group']:
char_count = block_count = 0
for i, block in enumerate(blocks):
o_count = len(overlapped[i])
if o_count:
block_count += 1
char_count += o_count
total = sum(len(r) for r in block[3])
percent = 0 if total == 0 else o_count / total
print("{0:>6} {1:47} {2:>4.0%} ({3}/{4})".format(block[0], block[2], percent, o_count, total))
if self._display['list']:
for point in overlapped[i]:
self._charInfo(point, padding=9)
self._charSummary(char_count, block_count)
else:
for point in code_points:
self._charInfo(point, padding=7)
self._charSummary(len(code_points)) | [
"def",
"chars",
"(",
"self",
",",
"font",
",",
"block",
")",
":",
"if",
"font",
":",
"font_files",
"=",
"self",
".",
"_getFont",
"(",
"font",
")",
"else",
":",
"font_files",
"=",
"fc",
".",
"query",
"(",
")",
"code_points",
"=",
"self",
".",
"_getF... | Analyses characters in single font or all fonts. | [
"Analyses",
"characters",
"in",
"single",
"font",
"or",
"all",
"fonts",
"."
] | python | train |
mfcloud/python-zvm-sdk | tools/sample.py | https://github.com/mfcloud/python-zvm-sdk/blob/de9994ceca764f5460ce51bd74237986341d8e3c/tools/sample.py#L71-L92 | def capture_guest(userid):
"""Caputre a virtual machine image.
Input parameters:
:userid: USERID of the guest, last 8 if length > 8
Output parameters:
:image_name: Image name that captured
"""
# check power state, if down, start it
ret = sdk_client.send_request('guest_get_power_state', userid)
power_status = ret['output']
if power_status == 'off':
sdk_client.send_request('guest_start', userid)
# TODO: how much time?
time.sleep(1)
# do capture
image_name = 'image_captured_%03d' % (time.time() % 1000)
sdk_client.send_request('guest_capture', userid, image_name,
capture_type='rootonly', compress_level=6)
return image_name | [
"def",
"capture_guest",
"(",
"userid",
")",
":",
"# check power state, if down, start it",
"ret",
"=",
"sdk_client",
".",
"send_request",
"(",
"'guest_get_power_state'",
",",
"userid",
")",
"power_status",
"=",
"ret",
"[",
"'output'",
"]",
"if",
"power_status",
"=="... | Caputre a virtual machine image.
Input parameters:
:userid: USERID of the guest, last 8 if length > 8
Output parameters:
:image_name: Image name that captured | [
"Caputre",
"a",
"virtual",
"machine",
"image",
"."
] | python | train |
aboSamoor/polyglot | docs/sphinxext/github_link.py | https://github.com/aboSamoor/polyglot/blob/d0d2aa8d06cec4e03bd96618ae960030f7069a17/docs/sphinxext/github_link.py#L74-L87 | def make_linkcode_resolve(package, url_fmt):
"""Returns a linkcode_resolve function for the given URL format
revision is a git commit reference (hash or name)
package is the name of the root module of the package
url_fmt is along the lines of ('https://github.com/USER/PROJECT/'
'blob/{revision}/{package}/'
'{path}#L{lineno}')
"""
revision = _get_git_revision()
return partial(_linkcode_resolve, revision=revision, package=package,
url_fmt=url_fmt) | [
"def",
"make_linkcode_resolve",
"(",
"package",
",",
"url_fmt",
")",
":",
"revision",
"=",
"_get_git_revision",
"(",
")",
"return",
"partial",
"(",
"_linkcode_resolve",
",",
"revision",
"=",
"revision",
",",
"package",
"=",
"package",
",",
"url_fmt",
"=",
"url... | Returns a linkcode_resolve function for the given URL format
revision is a git commit reference (hash or name)
package is the name of the root module of the package
url_fmt is along the lines of ('https://github.com/USER/PROJECT/'
'blob/{revision}/{package}/'
'{path}#L{lineno}') | [
"Returns",
"a",
"linkcode_resolve",
"function",
"for",
"the",
"given",
"URL",
"format"
] | python | train |
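The row builds a Sphinx `linkcode_resolve` by closing `revision`, `package`, and `url_fmt` over a worker with `functools.partial`. A reduced stand-in of that wiring (the real `_linkcode_resolve` also maps `info["module"]` to a file path and line number via `inspect`; here both are passed in directly for illustration):

```python
from functools import partial


def _resolve(domain, info, revision, package, url_fmt):
    # Fill the URL template only; path/lineno discovery is omitted.
    return url_fmt.format(revision=revision, package=package,
                          path=info["path"], lineno=info["lineno"])


linkcode_resolve = partial(
    _resolve, revision="abc123", package="polyglot",
    url_fmt="https://github.com/USER/PROJECT/"
            "blob/{revision}/{package}/{path}#L{lineno}")

url = linkcode_resolve("py", {"path": "mapping/base.py", "lineno": 42})
# url == "https://github.com/USER/PROJECT/blob/abc123/polyglot/mapping/base.py#L42"
```

Binding the slow-to-compute pieces (here the git revision) once with `partial` is the point of the factory: Sphinx then calls the resolver many times with only `(domain, info)`.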
inveniosoftware-attic/invenio-utils | invenio_utils/html.py | https://github.com/inveniosoftware-attic/invenio-utils/blob/9a1c6db4e3f1370901f329f510480dd8df188296/invenio_utils/html.py#L396-L401 | def handle_attribute_value(self, value):
"""Check attribute. Especially designed for avoiding URLs in the form:
javascript:myXSSFunction();"""
if self.re_js.match(value) or self.re_vb.match(value):
return ''
return value | [
"def",
"handle_attribute_value",
"(",
"self",
",",
"value",
")",
":",
"if",
"self",
".",
"re_js",
".",
"match",
"(",
"value",
")",
"or",
"self",
".",
"re_vb",
".",
"match",
"(",
"value",
")",
":",
"return",
"''",
"return",
"value"
] | Check attribute. Especially designed for avoiding URLs in the form:
javascript:myXSSFunction(); | [
"Check",
"attribute",
".",
"Especially",
"designed",
"for",
"avoiding",
"URLs",
"in",
"the",
"form",
":",
"javascript",
":",
"myXSSFunction",
"()",
";"
] | python | train |
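The row only shows that `re_js` and `re_vb` are matched against the attribute value; the two patterns below are a guess at their intent (scheme-prefixed script URLs), not invenio's actual regexes:

```python
import re

# Hypothetical equivalents of the class's re_js / re_vb attributes.
RE_JS = re.compile(r"^\s*javascript:", re.IGNORECASE)
RE_VB = re.compile(r"^\s*vbscript:", re.IGNORECASE)


def handle_attribute_value(value):
    # Reject URL-valued attributes that would execute script when
    # dereferenced; pass everything else through unchanged.
    if RE_JS.match(value) or RE_VB.match(value):
        return ""
    return value


print(handle_attribute_value("javascript:myXSSFunction();"))  # ""
print(handle_attribute_value("https://example.org/page"))     # kept as-is
```

Anchoring at the start of the value (`re.match` plus `^\s*`) matters here: a `javascript:` substring later in a legitimate URL should not trigger the filter.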
aleju/imgaug | imgaug/imgaug.py | https://github.com/aleju/imgaug/blob/786be74aa855513840113ea523c5df495dc6a8af/imgaug/imgaug.py#L1989-L2003 | def postprocess(self, images, augmenter, parents):
"""
A function to be called after the augmentation of images was
performed.
Returns
-------
(N,H,W,C) ndarray or (N,H,W) ndarray or list of (H,W,C) ndarray or list of (H,W) ndarray
The input images, optionally modified.
"""
if self.postprocessor is None:
return images
else:
return self.postprocessor(images, augmenter, parents) | [
"def",
"postprocess",
"(",
"self",
",",
"images",
",",
"augmenter",
",",
"parents",
")",
":",
"if",
"self",
".",
"postprocessor",
"is",
"None",
":",
"return",
"images",
"else",
":",
"return",
"self",
".",
"postprocessor",
"(",
"images",
",",
"augmenter",
... | A function to be called after the augmentation of images was
performed.
Returns
-------
(N,H,W,C) ndarray or (N,H,W) ndarray or list of (H,W,C) ndarray or list of (H,W) ndarray
The input images, optionally modified. | [
"A",
"function",
"to",
"be",
"called",
"after",
"the",
"augmentation",
"of",
"images",
"was",
"performed",
"."
] | python | valid |
carta/ldap_tools | src/ldap_tools/user.py | https://github.com/carta/ldap_tools/blob/7c039304a5abaf836c7afc35cf068b4471306264/src/ldap_tools/user.py#L41-L44 | def show(self, username):
"""Return a specific user's info in LDIF format."""
filter = ['(objectclass=posixAccount)', "(uid={})".format(username)]
return self.client.search(filter) | [
"def",
"show",
"(",
"self",
",",
"username",
")",
":",
"filter",
"=",
"[",
"'(objectclass=posixAccount)'",
",",
"\"(uid={})\"",
".",
"format",
"(",
"username",
")",
"]",
"return",
"self",
".",
"client",
".",
"search",
"(",
"filter",
")"
] | Return a specific user's info in LDIF format. | [
"Return",
"a",
"specific",
"user",
"s",
"info",
"in",
"LDIF",
"format",
"."
] | python | train |
numba/llvmlite | llvmlite/ir/transforms.py | https://github.com/numba/llvmlite/blob/fcadf8af11947f3fd041c5d6526c5bf231564883/llvmlite/ir/transforms.py#L58-L64 | def replace_all_calls(mod, orig, repl):
"""Replace all calls to `orig` to `repl` in module `mod`.
Returns the references to the returned calls
"""
rc = ReplaceCalls(orig, repl)
rc.visit(mod)
return rc.calls | [
"def",
"replace_all_calls",
"(",
"mod",
",",
"orig",
",",
"repl",
")",
":",
"rc",
"=",
"ReplaceCalls",
"(",
"orig",
",",
"repl",
")",
"rc",
".",
"visit",
"(",
"mod",
")",
"return",
"rc",
".",
"calls"
] | Replace all calls to `orig` to `repl` in module `mod`.
Returns the references to the returned calls | [
"Replace",
"all",
"calls",
"to",
"orig",
"to",
"repl",
"in",
"module",
"mod",
".",
"Returns",
"the",
"references",
"to",
"the",
"returned",
"calls"
] | python | train |