Dataset columns:
nwo                stringlengths 5–106
sha                stringlengths 40–40
path               stringlengths 4–174
language           stringclasses (1 value)
identifier         stringlengths 1–140
parameters         stringlengths 0–87.7k
argument_list      stringclasses (1 value)
return_statement   stringlengths 0–426k
docstring          stringlengths 0–64.3k
docstring_summary  stringlengths 0–26.3k
docstring_tokens   list
function           stringlengths 18–4.83M
function_tokens    list
url                stringlengths 83–304
aws-samples/aws-kube-codesuite
ab4e5ce45416b83bffb947ab8d234df5437f4fca
src/networkx/algorithms/centrality/eigenvector.py
python
eigenvector_centrality
(G, max_iter=100, tol=1.0e-6, nstart=None, weight=None)
r"""Compute the eigenvector centrality for the graph `G`. Eigenvector centrality computes the centrality for a node based on the centrality of its neighbors. The eigenvector centrality for node $i$ is .. math:: Ax = \lambda x where $A$ is the adjacency matrix of the graph `G` with eigenvalue $\lambda$. By virtue of the Perron–Frobenius theorem, there is a unique and positive solution if $\lambda$ is the largest eigenvalue associated with the eigenvector of the adjacency matrix $A$ ([2]_). Parameters ---------- G : graph A networkx graph max_iter : integer, optional (default=100) Maximum number of iterations in power method. tol : float, optional (default=1.0e-6) Error tolerance used to check convergence in power method iteration. nstart : dictionary, optional (default=None) Starting value of eigenvector iteration for each node. weight : None or string, optional (default=None) If None, all edge weights are considered equal. Otherwise holds the name of the edge attribute used as weight. Returns ------- nodes : dictionary Dictionary of nodes with eigenvector centrality as the value. Examples -------- >>> G = nx.path_graph(4) >>> centrality = nx.eigenvector_centrality(G) >>> sorted((v, '{:0.2f}'.format(c)) for v, c in centrality.items()) [(0, '0.37'), (1, '0.60'), (2, '0.60'), (3, '0.37')] Raises ------ NetworkXPointlessConcept If the graph `G` is the null graph. NetworkXError If each value in `nstart` is zero. PowerIterationFailedConvergence If the algorithm fails to converge to the specified tolerance within the specified number of iterations of the power iteration method. See Also -------- eigenvector_centrality_numpy pagerank hits Notes ----- The measure was introduced by [1]_ and is discussed in [2]_. The power iteration method is used to compute the eigenvector and convergence is **not** guaranteed. 
Our method stops after ``max_iter`` iterations or when the change in the computed vector between two iterations is smaller than an error tolerance of ``G.number_of_nodes() * tol``. This implementation uses ($A + I$) rather than the adjacency matrix $A$ because it shifts the spectrum to enable discerning the correct eigenvector even for networks with multiple dominant eigenvalues. For directed graphs this is "left" eigenvector centrality which corresponds to the in-edges in the graph. For out-edges eigenvector centrality first reverse the graph with ``G.reverse()``. References ---------- .. [1] Phillip Bonacich. "Power and Centrality: A Family of Measures." *American Journal of Sociology* 92(5):1170–1182, 1986 <http://www.leonidzhukov.net/hse/2014/socialnetworks/papers/Bonacich-Centrality.pdf> .. [2] Mark E. J. Newman. *Networks: An Introduction.* Oxford University Press, USA, 2010, pp. 169.
r"""Compute the eigenvector centrality for the graph `G`.
[ "r", "Compute", "the", "eigenvector", "centrality", "for", "the", "graph", "G", "." ]
def eigenvector_centrality(G, max_iter=100, tol=1.0e-6, nstart=None, weight=None):
    r"""Compute the eigenvector centrality for the graph `G`.

    Eigenvector centrality computes the centrality for a node based on the
    centrality of its neighbors. The eigenvector centrality for node $i$ is

    .. math::

        Ax = \lambda x

    where $A$ is the adjacency matrix of the graph `G` with eigenvalue
    $\lambda$. By virtue of the Perron–Frobenius theorem, there is a unique
    and positive solution if $\lambda$ is the largest eigenvalue associated
    with the eigenvector of the adjacency matrix $A$ ([2]_).

    Parameters
    ----------
    G : graph
        A networkx graph

    max_iter : integer, optional (default=100)
        Maximum number of iterations in power method.

    tol : float, optional (default=1.0e-6)
        Error tolerance used to check convergence in power method iteration.

    nstart : dictionary, optional (default=None)
        Starting value of eigenvector iteration for each node.

    weight : None or string, optional (default=None)
        If None, all edge weights are considered equal. Otherwise holds the
        name of the edge attribute used as weight.

    Returns
    -------
    nodes : dictionary
        Dictionary of nodes with eigenvector centrality as the value.

    Examples
    --------
    >>> G = nx.path_graph(4)
    >>> centrality = nx.eigenvector_centrality(G)
    >>> sorted((v, '{:0.2f}'.format(c)) for v, c in centrality.items())
    [(0, '0.37'), (1, '0.60'), (2, '0.60'), (3, '0.37')]

    Raises
    ------
    NetworkXPointlessConcept
        If the graph `G` is the null graph.

    NetworkXError
        If each value in `nstart` is zero.

    PowerIterationFailedConvergence
        If the algorithm fails to converge to the specified tolerance within
        the specified number of iterations of the power iteration method.

    See Also
    --------
    eigenvector_centrality_numpy
    pagerank
    hits

    Notes
    -----
    The measure was introduced by [1]_ and is discussed in [2]_. The power
    iteration method is used to compute the eigenvector and convergence is
    **not** guaranteed. Our method stops after ``max_iter`` iterations or
    when the change in the computed vector between two iterations is smaller
    than an error tolerance of ``G.number_of_nodes() * tol``. This
    implementation uses ($A + I$) rather than the adjacency matrix $A$
    because it shifts the spectrum to enable discerning the correct
    eigenvector even for networks with multiple dominant eigenvalues.

    For directed graphs this is "left" eigenvector centrality which
    corresponds to the in-edges in the graph. For out-edges eigenvector
    centrality first reverse the graph with ``G.reverse()``.

    References
    ----------
    .. [1] Phillip Bonacich.
       "Power and Centrality: A Family of Measures."
       *American Journal of Sociology* 92(5):1170–1182, 1986
       <http://www.leonidzhukov.net/hse/2014/socialnetworks/papers/Bonacich-Centrality.pdf>
    .. [2] Mark E. J. Newman.
       *Networks: An Introduction.*
       Oxford University Press, USA, 2010, pp. 169.
    """
    if len(G) == 0:
        raise nx.NetworkXPointlessConcept('cannot compute centrality for the'
                                          ' null graph')
    # If no initial vector is provided, start with the all-ones vector.
    if nstart is None:
        nstart = {v: 1 for v in G}
    if all(v == 0 for v in nstart.values()):
        raise nx.NetworkXError('initial vector cannot have all zero values')
    # Normalize the initial vector so that each entry is in [0, 1]. This is
    # guaranteed to never have a divide-by-zero error by the previous line.
    x = {k: v / sum(nstart.values()) for k, v in nstart.items()}
    nnodes = G.number_of_nodes()
    # make up to max_iter iterations
    for i in range(max_iter):
        xlast = x
        x = xlast.copy()  # Start with xlast times I to iterate with (A+I)
        # do the multiplication y^T = x^T A (left eigenvector)
        for n in x:
            for nbr in G[n]:
                x[nbr] += xlast[n] * G[n][nbr].get(weight, 1)
        # Normalize the vector. The normalization denominator `norm`
        # should never be zero by the Perron--Frobenius theorem.
        # However, in case it is due to numerical error, we
        # assume the norm to be one instead.
        norm = sqrt(sum(z ** 2 for z in x.values())) or 1
        x = {k: v / norm for k, v in x.items()}
        # Check for convergence (in the L_1 norm).
        if sum(abs(x[n] - xlast[n]) for n in x) < nnodes * tol:
            return x
    raise nx.PowerIterationFailedConvergence(max_iter)
[ "def", "eigenvector_centrality", "(", "G", ",", "max_iter", "=", "100", ",", "tol", "=", "1.0e-6", ",", "nstart", "=", "None", ",", "weight", "=", "None", ")", ":", "if", "len", "(", "G", ")", "==", "0", ":", "raise", "nx", ".", "NetworkXPointlessCon...
https://github.com/aws-samples/aws-kube-codesuite/blob/ab4e5ce45416b83bffb947ab8d234df5437f4fca/src/networkx/algorithms/centrality/eigenvector.py#L25-L148
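The (A + I) power iteration described in the docstring above can be sketched in pure Python on a plain adjacency dict; `eigenvector_centrality_sketch` is an illustrative stand-in, not part of networkx, and it handles only the undirected, unweighted case:

```python
from math import sqrt

def eigenvector_centrality_sketch(adj, max_iter=100, tol=1.0e-6):
    """Power iteration with (A + I) on adj: {node: [neighbors]}."""
    nnodes = len(adj)
    x = {n: 1.0 / nnodes for n in adj}          # normalized all-ones start
    for _ in range(max_iter):
        xlast = x
        x = dict(xlast)                          # the +I term: keep xlast
        for n in xlast:
            for nbr in adj[n]:                   # the A term: sum neighbors
                x[nbr] += xlast[n]
        norm = sqrt(sum(v * v for v in x.values())) or 1
        x = {k: v / norm for k, v in x.items()}  # normalize in L2
        if sum(abs(x[n] - xlast[n]) for n in x) < nnodes * tol:
            return x
    raise RuntimeError("power iteration failed to converge")

# Path graph 0-1-2-3, matching the docstring example: the two interior
# nodes end up near 0.60, the two endpoints near 0.37.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
c = eigenvector_centrality_sketch(adj)
```

The shift by I does not change the eigenvectors of a symmetric A, only the eigenvalues, so the iteration converges to the same dominant eigenvector while avoiding oscillation when A has eigenvalues of equal magnitude.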
fossasia/open-event-server
5ce3b88c0bcf8685944e634701203944b2cf17dd
app/api/helpers/mail.py
python
convert_to_user_locale
(email, date_time=None, date=None, time=None, tz=None, amount=None, currency=None)
return None
Convert date and time and amount to user selected language :return : datetime/date/time translated to user locale
Convert date and time and amount to user selected language :return : datetime/date/time translated to user locale
[ "Convert", "date", "and", "time", "and", "amount", "to", "user", "selected", "language", ":", "return", ":", "datetime", "/", "date", "/", "time", "translated", "to", "user", "locale" ]
def convert_to_user_locale(email, date_time=None, date=None, time=None, tz=None, amount=None, currency=None):
    """
    Convert date and time and amount to user selected language
    :return : datetime/date/time translated to user locale
    """
    if email:
        user = User.query.filter(User.email == email).first()
        user_locale = 'en'
        if user and user.language_prefrence:
            user_locale = user.language_prefrence
        if amount and currency:
            return format_currency(amount, currency, locale=user_locale)
        if date_time and tz:
            return format_datetime(date_time, 'full', tzinfo=pytz.timezone(tz), locale=user_locale)
        if date:
            return format_date(date, 'd MMMM Y', locale=user_locale)
        if time and tz:
            return format_time(time, 'HH:mm (zzzz)', tzinfo=pytz.timezone(tz), locale=user_locale)
    return None
[ "def", "convert_to_user_locale", "(", "email", ",", "date_time", "=", "None", ",", "date", "=", "None", ",", "time", "=", "None", ",", "tz", "=", "None", ",", "amount", "=", "None", ",", "currency", "=", "None", ")", ":", "if", "email", ":", "user", ...
https://github.com/fossasia/open-event-server/blob/5ce3b88c0bcf8685944e634701203944b2cf17dd/app/api/helpers/mail.py#L830-L864
fabtools/fabtools
5fdc7174c3fae5e93a16d677d0466f41dc2be175
fabtools/supervisor.py
python
start_process
(name)
Start a supervisor process
Start a supervisor process
[ "Start", "a", "supervisor", "process" ]
def start_process(name):
    """
    Start a supervisor process
    """
    run_as_root("supervisorctl start %(name)s" % locals())
[ "def", "start_process", "(", "name", ")", ":", "run_as_root", "(", "\"supervisorctl start %(name)s\"", "%", "locals", "(", ")", ")" ]
https://github.com/fabtools/fabtools/blob/5fdc7174c3fae5e93a16d677d0466f41dc2be175/fabtools/supervisor.py#L47-L51
graalvm/mx
29c0debab406352df3af246be2f8973be5db69ae
mx_benchmark.py
python
build_name
()
return mx.get_env("BUILD_NAME", default="")
Get the current build name from the BUILD_NAME environment variable, or an empty string otherwise. :return: the build name :rtype: basestring
Get the current build name from the BUILD_NAME environment variable, or an empty string otherwise.
[ "Get", "the", "current", "build", "name", "from", "the", "BUILD_NAME", "environment", "variable", "or", "an", "empty", "string", "otherwise", "." ]
def build_name():
    """
    Get the current build name from the BUILD_NAME environment variable, or
    an empty string otherwise.

    :return: the build name
    :rtype: basestring
    """
    return mx.get_env("BUILD_NAME", default="")
[ "def", "build_name", "(", ")", ":", "return", "mx", ".", "get_env", "(", "\"BUILD_NAME\"", ",", "default", "=", "\"\"", ")" ]
https://github.com/graalvm/mx/blob/29c0debab406352df3af246be2f8973be5db69ae/mx_benchmark.py#L2176-L2183
nodejs/node-gyp
a2f298870692022302fa27a1d42363c4a72df407
gyp/pylib/gyp/generator/ninja.py
python
NinjaWriter.__init__
( self, hash_for_rules, target_outputs, base_dir, build_dir, output_file, toplevel_build, output_file_name, flavor, toplevel_dir=None, )
base_dir: path from source root to directory containing this gyp file, by gyp semantics, all input paths are relative to this build_dir: path from source root to build output toplevel_dir: path to the toplevel directory
base_dir: path from source root to directory containing this gyp file, by gyp semantics, all input paths are relative to this build_dir: path from source root to build output toplevel_dir: path to the toplevel directory
[ "base_dir", ":", "path", "from", "source", "root", "to", "directory", "containing", "this", "gyp", "file", "by", "gyp", "semantics", "all", "input", "paths", "are", "relative", "to", "this", "build_dir", ":", "path", "from", "source", "root", "to", "build", ...
def __init__(
    self,
    hash_for_rules,
    target_outputs,
    base_dir,
    build_dir,
    output_file,
    toplevel_build,
    output_file_name,
    flavor,
    toplevel_dir=None,
):
    """
    base_dir: path from source root to directory containing this gyp file,
              by gyp semantics, all input paths are relative to this
    build_dir: path from source root to build output
    toplevel_dir: path to the toplevel directory
    """
    self.hash_for_rules = hash_for_rules
    self.target_outputs = target_outputs
    self.base_dir = base_dir
    self.build_dir = build_dir
    self.ninja = ninja_syntax.Writer(output_file)
    self.toplevel_build = toplevel_build
    self.output_file_name = output_file_name
    self.flavor = flavor
    self.abs_build_dir = None
    if toplevel_dir is not None:
        self.abs_build_dir = os.path.abspath(os.path.join(toplevel_dir, build_dir))
    self.obj_ext = ".obj" if flavor == "win" else ".o"
    if flavor == "win":
        # See docstring of msvs_emulation.GenerateEnvironmentFiles().
        self.win_env = {}
        for arch in ("x86", "x64"):
            self.win_env[arch] = "environment." + arch

    # Relative path from build output dir to base dir.
    build_to_top = gyp.common.InvertRelativePath(build_dir, toplevel_dir)
    self.build_to_base = os.path.join(build_to_top, base_dir)
    # Relative path from base dir to build dir.
    base_to_top = gyp.common.InvertRelativePath(base_dir, toplevel_dir)
    self.base_to_build = os.path.join(base_to_top, build_dir)
[ "def", "__init__", "(", "self", ",", "hash_for_rules", ",", "target_outputs", ",", "base_dir", ",", "build_dir", ",", "output_file", ",", "toplevel_build", ",", "output_file_name", ",", "flavor", ",", "toplevel_dir", "=", "None", ",", ")", ":", "self", ".", ...
https://github.com/nodejs/node-gyp/blob/a2f298870692022302fa27a1d42363c4a72df407/gyp/pylib/gyp/generator/ninja.py#L214-L257
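The two relative-path computations at the end of `__init__` can be illustrated with the standard library alone; `os.path.relpath` here is a stand-in for `gyp.common.InvertRelativePath`, and the toplevel/build/base paths are made-up examples:

```python
import os

# Hypothetical layout: both build_dir and base_dir are relative to a
# common toplevel directory, as in gyp.
toplevel = os.path.join(os.sep, "project")
build_dir = os.path.join("out", "Debug")
base_dir = os.path.join("src", "lib")

# Relative path from build output dir to base dir (build_to_base).
build_to_base = os.path.relpath(os.path.join(toplevel, base_dir),
                                os.path.join(toplevel, build_dir))
# Relative path from base dir to build dir (base_to_build).
base_to_build = os.path.relpath(os.path.join(toplevel, build_dir),
                                os.path.join(toplevel, base_dir))
# On POSIX: build_to_base == "../../src/lib",
#           base_to_build == "../../out/Debug"
```

These inverse paths let the generated ninja file refer to sources relative to the build directory and vice versa without ever embedding absolute paths.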
Azure/azure-cli
6c1b085a0910c6c2139006fcbd8ade44006eb6dd
src/azure-cli/azure/cli/command_modules/keyvault/custom.py
python
set_policy
(cmd, client, resource_group_name, vault_name, object_id=None, application_id=None, spn=None, upn=None, key_permissions=None, secret_permissions=None, certificate_permissions=None, storage_permissions=None, no_wait=False)
return _azure_stack_wrapper(cmd, client, 'create_or_update', resource_type=ResourceType.MGMT_KEYVAULT, min_api_version='2018-02-14', resource_group_name=resource_group_name, vault_name=vault_name, parameters=VaultCreateOrUpdateParameters( location=vault.location, tags=vault.tags, properties=vault.properties), no_wait=no_wait)
Update security policy settings for a Key Vault.
Update security policy settings for a Key Vault.
[ "Update", "security", "policy", "settings", "for", "a", "Key", "Vault", "." ]
def set_policy(cmd, client, resource_group_name, vault_name,
               object_id=None, application_id=None, spn=None, upn=None,
               key_permissions=None, secret_permissions=None,
               certificate_permissions=None, storage_permissions=None,
               no_wait=False):
    """ Update security policy settings for a Key Vault. """
    VaultCreateOrUpdateParameters = cmd.get_models('VaultCreateOrUpdateParameters',
                                                   resource_type=ResourceType.MGMT_KEYVAULT)
    AccessPolicyEntry = cmd.get_models('AccessPolicyEntry', resource_type=ResourceType.MGMT_KEYVAULT)
    Permissions = cmd.get_models('Permissions', resource_type=ResourceType.MGMT_KEYVAULT)
    object_id = _object_id_args_helper(cmd.cli_ctx, object_id, spn, upn)
    vault = client.get(resource_group_name=resource_group_name, vault_name=vault_name)

    key_permissions = _permissions_distinct(key_permissions)
    secret_permissions = _permissions_distinct(secret_permissions)
    certificate_permissions = _permissions_distinct(certificate_permissions)
    storage_permissions = _permissions_distinct(storage_permissions)

    try:
        enable_rbac_authorization = getattr(vault.properties, 'enable_rbac_authorization')
    except:  # pylint: disable=bare-except
        pass
    else:
        if enable_rbac_authorization:
            raise CLIError('Cannot set policies to a vault with \'--enable-rbac-authorization\' specified')

    # Find the existing policy to set
    policy = next((p for p in vault.properties.access_policies
                   if object_id.lower() == p.object_id.lower() and
                   (application_id or '').lower() == (p.application_id or '').lower() and
                   vault.properties.tenant_id.lower() == p.tenant_id.lower()), None)
    if not policy:
        # Add new policy as none found
        vault.properties.access_policies.append(AccessPolicyEntry(
            tenant_id=vault.properties.tenant_id,
            object_id=object_id,
            application_id=application_id,
            permissions=Permissions(keys=key_permissions,
                                    secrets=secret_permissions,
                                    certificates=certificate_permissions,
                                    storage=storage_permissions)))
    else:
        # Modify existing policy.
        # If key_permissions is not set, use prev. value (similarly with secret_permissions).
        keys = policy.permissions.keys if key_permissions is None else key_permissions
        secrets = policy.permissions.secrets if secret_permissions is None else secret_permissions
        certs = policy.permissions.certificates \
            if certificate_permissions is None else certificate_permissions
        storage = policy.permissions.storage if storage_permissions is None else storage_permissions
        policy.permissions = Permissions(keys=keys, secrets=secrets,
                                         certificates=certs, storage=storage)

    return _azure_stack_wrapper(cmd, client, 'create_or_update',
                                resource_type=ResourceType.MGMT_KEYVAULT,
                                min_api_version='2018-02-14',
                                resource_group_name=resource_group_name,
                                vault_name=vault_name,
                                parameters=VaultCreateOrUpdateParameters(
                                    location=vault.location,
                                    tags=vault.tags,
                                    properties=vault.properties),
                                no_wait=no_wait)
[ "def", "set_policy", "(", "cmd", ",", "client", ",", "resource_group_name", ",", "vault_name", ",", "object_id", "=", "None", ",", "application_id", "=", "None", ",", "spn", "=", "None", ",", "upn", "=", "None", ",", "key_permissions", "=", "None", ",", ...
https://github.com/Azure/azure-cli/blob/6c1b085a0910c6c2139006fcbd8ade44006eb6dd/src/azure-cli/azure/cli/command_modules/keyvault/custom.py#L911-L971
sony/nnabla
5b022f309bf2fa3ade0b6cb9bf05da9ddc67f104
python/src/nnabla/backward_function/gather_nd.py
python
gather_nd_backward
(inputs)
return dx0, None
Args: inputs (list of nn.Variable): Incoming grads/inputs to/of the forward function. kwargs (dict of arguments): Dictionary of the corresponding function arguments. Return: list of Variable: Return the gradients wrt inputs of the corresponding function.
Args: inputs (list of nn.Variable): Incoming grads/inputs to/of the forward function. kwargs (dict of arguments): Dictionary of the corresponding function arguments.
[ "Args", ":", "inputs", "(", "list", "of", "nn", ".", "Variable", ")", ":", "Incomming", "grads", "/", "inputs", "to", "/", "of", "the", "forward", "function", ".", "kwargs", "(", "dict", "of", "arguments", ")", ":", "Dictionary", "of", "the", "correspo...
def gather_nd_backward(inputs):
    """
    Args:
        inputs (list of nn.Variable): Incoming grads/inputs to/of the forward function.
        kwargs (dict of arguments): Dictionary of the corresponding function arguments.

    Return:
        list of Variable: Return the gradients wrt inputs of the corresponding function.
    """
    dy = inputs[0]
    x0 = inputs[1]
    idx = inputs[2]
    dx0 = F.scatter_nd(dy, idx, shape=x0.shape, add=True)
    return dx0, None
[ "def", "gather_nd_backward", "(", "inputs", ")", ":", "dy", "=", "inputs", "[", "0", "]", "x0", "=", "inputs", "[", "1", "]", "idx", "=", "inputs", "[", "2", "]", "dx0", "=", "F", ".", "scatter_nd", "(", "dy", ",", "idx", ",", "shape", "=", "x0...
https://github.com/sony/nnabla/blob/5b022f309bf2fa3ade0b6cb9bf05da9ddc67f104/python/src/nnabla/backward_function/gather_nd.py#L19-L32
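Why the backward pass of a gather is a scatter with accumulation can be seen in a few lines of plain Python (a 1-D analogue of gather_nd/scatter_nd, not nnabla code):

```python
# Forward: gather elements of x at integer indices.
x = [10.0, 20.0, 30.0]
idx = [0, 2, 0]                 # index 0 is gathered twice
y = [x[i] for i in idx]         # -> [10.0, 30.0, 10.0]

# Backward: route the upstream gradient dy back to x by scatter-add.
# Duplicate indices must accumulate, which is why the backward above
# calls F.scatter_nd with add=True.
dy = [1.0, 1.0, 1.0]
dx = [0.0] * len(x)
for i, g in zip(idx, dy):
    dx[i] += g                  # dx -> [2.0, 0.0, 1.0]
```

Element 0 of `dx` receives two gradient contributions because it fed two outputs; a plain (non-accumulating) scatter would silently drop one of them.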
GoogleCloudPlatform/PerfKitBenchmarker
6e3412d7d5e414b8ca30ed5eaf970cef1d919a67
perfkitbenchmarker/virtual_machine.py
python
BaseOsMixin.SetupLocalDisks
(self)
Perform OS specific setup on any local disks that exist.
Perform OS specific setup on any local disks that exist.
[ "Perform", "OS", "specific", "setup", "on", "any", "local", "disks", "that", "exist", "." ]
def SetupLocalDisks(self):
    """Perform OS specific setup on any local disks that exist."""
    pass
[ "def", "SetupLocalDisks", "(", "self", ")", ":", "pass" ]
https://github.com/GoogleCloudPlatform/PerfKitBenchmarker/blob/6e3412d7d5e414b8ca30ed5eaf970cef1d919a67/perfkitbenchmarker/virtual_machine.py#L649-L651
PyWavelets/pywt
9a72143be347481e2276371efb41ae0266b9c808
pywt/_wavelet_packets.py
python
BaseNode._evaluate_maxlevel
(self, evaluate_from='parent')
return None
Try to find the value of maximum decomposition level if it is not specified explicitly. Parameters ---------- evaluate_from : {'parent', 'subnodes'}
Try to find the value of maximum decomposition level if it is not specified explicitly.
[ "Try", "to", "find", "the", "value", "of", "maximum", "decomposition", "level", "if", "it", "is", "not", "specified", "explicitly", "." ]
def _evaluate_maxlevel(self, evaluate_from='parent'):
    """
    Try to find the value of maximum decomposition level if it is not
    specified explicitly.

    Parameters
    ----------
    evaluate_from : {'parent', 'subnodes'}
    """
    assert evaluate_from in ('parent', 'subnodes')
    if self._maxlevel is not None:
        return self._maxlevel
    elif self.data is not None:
        return self.level + dwt_max_level(min(self.data.shape), self.wavelet)

    if evaluate_from == 'parent':
        if self.parent is not None:
            return self.parent._evaluate_maxlevel(evaluate_from)
    elif evaluate_from == 'subnodes':
        for node_name in self.PARTS:
            node = getattr(self, node_name, None)
            if node is not None:
                level = node._evaluate_maxlevel(evaluate_from)
                if level is not None:
                    return level
    return None
[ "def", "_evaluate_maxlevel", "(", "self", ",", "evaluate_from", "=", "'parent'", ")", ":", "assert", "evaluate_from", "in", "(", "'parent'", ",", "'subnodes'", ")", "if", "self", ".", "_maxlevel", "is", "not", "None", ":", "return", "self", ".", "_maxlevel",...
https://github.com/PyWavelets/pywt/blob/9a72143be347481e2276371efb41ae0266b9c808/pywt/_wavelet_packets.py#L126-L153
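The `dwt_max_level` call above caps the decomposition depth by the signal and filter lengths. A hypothetical re-implementation of the usual rule (each DWT level roughly halves the signal, and decomposition stops once it would become shorter than the filter) looks like this; `dwt_max_level_sketch` is illustrative, not the pywt function:

```python
from math import floor, log2

def dwt_max_level_sketch(data_len, filter_len):
    """Maximum useful DWT levels for a signal of data_len samples."""
    if data_len < filter_len - 1:
        return 0
    return int(floor(log2(data_len / (filter_len - 1))))

# db2 has a filter length of 4, so a 1024-sample signal allows
# floor(log2(1024 / 3)) = 8 levels.
levels = dwt_max_level_sketch(1024, 4)
```

`_evaluate_maxlevel` then adds the node's own level to this bound, since a node deep in the packet tree has already consumed part of the available depth.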
Theano/Theano
8fd9203edfeecebced9344b0c70193be292a9ade
theano/sparse/basic.py
python
structured_add
(x)
Structured addition of sparse matrix x and scalar y.
Structured addition of sparse matrix x and scalar y.
[ "Structured", "addition", "of", "sparse", "matrix", "x", "and", "scalar", "y", "." ]
def structured_add(x):
    """
    Structured addition of sparse matrix x and scalar y.
    """
[ "def", "structured_add", "(", "x", ")", ":" ]
https://github.com/Theano/Theano/blob/8fd9203edfeecebced9344b0c70193be292a9ade/theano/sparse/basic.py#L3142-L3146
angr/angr
4b04d56ace135018083d36d9083805be8146688b
angr/engines/vex/lifter.py
python
VEXLifter.lift_vex
(self, addr=None, state=None, clemory=None, insn_bytes=None, offset=None, arch=None, size=None, num_inst=None, traceflags=0, thumb=False, extra_stop_points=None, opt_level=None, strict_block_end=None, skip_stmts=False, collect_data_refs=False, cross_insn_opt=None, load_from_ro_regions=False)
Lift an IRSB. There are many possible valid sets of parameters. You at the very least must pass some source of data, some source of an architecture, and some source of an address. Sources of data in order of priority: insn_bytes, clemory, state Sources of an address, in order of priority: addr, state Sources of an architecture, in order of priority: arch, clemory, state :param state: A state to use as a data source. :param clemory: A cle.memory.Clemory object to use as a data source. :param addr: The address at which to start the block. :param thumb: Whether the block should be lifted in ARM's THUMB mode. :param opt_level: The VEX optimization level to use. The final IR optimization level is determined by (ordered by priority): - Argument opt_level - opt_level is set to 1 if OPTIMIZE_IR exists in state options - self._default_opt_level :param insn_bytes: A string of bytes to use as a data source. :param offset: If using insn_bytes, the number of bytes in it to skip over. :param size: The maximum size of the block, in bytes. :param num_inst: The maximum number of instructions. :param traceflags: traceflags to be passed to VEX. (default: 0) :param strict_block_end: Whether to force blocks to end at all conditional branches (default: false)
Lift an IRSB.
[ "Lift", "an", "IRSB", "." ]
def lift_vex(self,
             addr=None,
             state=None,
             clemory=None,
             insn_bytes=None,
             offset=None,
             arch=None,
             size=None,
             num_inst=None,
             traceflags=0,
             thumb=False,
             extra_stop_points=None,
             opt_level=None,
             strict_block_end=None,
             skip_stmts=False,
             collect_data_refs=False,
             cross_insn_opt=None,
             load_from_ro_regions=False):
    """
    Lift an IRSB.

    There are many possible valid sets of parameters. You at the very least must pass some
    source of data, some source of an architecture, and some source of an address.

    Sources of data in order of priority: insn_bytes, clemory, state

    Sources of an address, in order of priority: addr, state

    Sources of an architecture, in order of priority: arch, clemory, state

    :param state:           A state to use as a data source.
    :param clemory:         A cle.memory.Clemory object to use as a data source.
    :param addr:            The address at which to start the block.
    :param thumb:           Whether the block should be lifted in ARM's THUMB mode.
    :param opt_level:       The VEX optimization level to use. The final IR optimization level is
                            determined by (ordered by priority):
                            - Argument opt_level
                            - opt_level is set to 1 if OPTIMIZE_IR exists in state options
                            - self._default_opt_level
    :param insn_bytes:      A string of bytes to use as a data source.
    :param offset:          If using insn_bytes, the number of bytes in it to skip over.
    :param size:            The maximum size of the block, in bytes.
    :param num_inst:        The maximum number of instructions.
    :param traceflags:      traceflags to be passed to VEX. (default: 0)
    :param strict_block_end: Whether to force blocks to end at all conditional branches (default: false)
    """
    # phase 0: sanity check
    if not state and not clemory and not insn_bytes:
        raise ValueError("Must provide state or clemory or insn_bytes!")
    if not state and not clemory and not arch:
        raise ValueError("Must provide state or clemory or arch!")
    if addr is None and not state:
        raise ValueError("Must provide state or addr!")
    if arch is None:
        arch = clemory._arch if clemory else state.arch
    if arch.name.startswith("MIPS") and self._single_step:
        l.error("Cannot specify single-stepping on MIPS.")
        self._single_step = False

    # phase 1: parameter defaults
    if addr is None:
        addr = state.solver.eval(state._ip)
    if size is not None:
        size = min(size, VEX_IRSB_MAX_SIZE)
    if size is None:
        size = VEX_IRSB_MAX_SIZE
    if num_inst is not None:
        num_inst = min(num_inst, VEX_IRSB_MAX_INST)
    if num_inst is None and self._single_step:
        num_inst = 1
    if opt_level is None:
        if state and o.OPTIMIZE_IR in state.options:
            opt_level = 1
        else:
            opt_level = self._default_opt_level
    if cross_insn_opt is None:
        if state and o.NO_CROSS_INSN_OPT in state.options:
            cross_insn_opt = False
        else:
            cross_insn_opt = True
    if strict_block_end is None:
        strict_block_end = self.default_strict_block_end
    if self._support_selfmodifying_code:
        if opt_level > 0:
            if once('vex-engine-smc-opt-warning'):
                l.warning("Self-modifying code is not always correctly optimized by PyVEX. "
                          "To guarantee correctness, VEX optimizations have been disabled.")
            opt_level = 0
            if state and o.OPTIMIZE_IR in state.options:
                state.options.remove(o.OPTIMIZE_IR)
    if skip_stmts is not True:
        skip_stmts = False
    if offset is None:
        offset = 0

    use_cache = self._use_cache
    if skip_stmts or collect_data_refs:
        # Do not cache the blocks if skip_stmts or collect_data_refs are enabled
        use_cache = False

    # phase 2: thumb normalization
    thumb = int(thumb)
    if isinstance(arch, ArchARM):
        if addr % 2 == 1:
            thumb = 1
        if thumb:
            addr &= ~1
    elif thumb:
        l.error("thumb=True passed on non-arm architecture!")
        thumb = 0

    # phase 3: check cache
    cache_key = None
    if use_cache:
        cache_key = (addr, insn_bytes, size, num_inst, thumb, opt_level, strict_block_end, cross_insn_opt)
        if cache_key in self._block_cache:
            self._block_cache_hits += 1
            l.debug("Cache hit IRSB of %s at %#x", arch, addr)
            irsb = self._block_cache[cache_key]
            stop_point = self._first_stoppoint(irsb, extra_stop_points)
            if stop_point is None:
                return irsb
            else:
                size = stop_point - addr
                # check the cache again
                cache_key = (addr, insn_bytes, size, num_inst, thumb, opt_level, strict_block_end,
                             cross_insn_opt)
                if cache_key in self._block_cache:
                    self._block_cache_hits += 1
                    return self._block_cache[cache_key]
                else:
                    self._block_cache_misses += 1
        else:
            # a special case: `size` is used as the maximum allowed size
            tmp_cache_key = (addr, insn_bytes, VEX_IRSB_MAX_SIZE, num_inst, thumb, opt_level,
                             strict_block_end, cross_insn_opt)
            try:
                irsb = self._block_cache[tmp_cache_key]
                if irsb.size <= size:
                    self._block_cache_hits += 1
                    return self._block_cache[tmp_cache_key]
            except KeyError:
                self._block_cache_misses += 1

    # vex_lift breakpoints only triggered when the cache isn't used
    buff = NO_OVERRIDE
    if state:
        state._inspect('vex_lift', BP_BEFORE, vex_lift_addr=addr, vex_lift_size=size,
                       vex_lift_buff=NO_OVERRIDE)
        buff = state._inspect_getattr("vex_lift_buff", NO_OVERRIDE)
        addr = state._inspect_getattr("vex_lift_addr", addr)
        size = state._inspect_getattr("vex_lift_size", size)

    # phase 4: get bytes
    if buff is NO_OVERRIDE:
        if insn_bytes is not None:
            buff, size = insn_bytes, len(insn_bytes)
            # offset stays unchanged
        else:
            buff, size, offset = self._load_bytes(addr, size, state, clemory)

    if isinstance(buff, claripy.ast.BV):  # pylint:disable=isinstance-second-argument-not-valid-type
        if len(buff) == 0:
            raise SimEngineError("No bytes in memory for block starting at %#x." % addr)
    elif not buff:
        raise SimEngineError("No bytes in memory for block starting at %#x." % addr)

    # phase 5: call into pyvex
    l.debug("Creating IRSB of %s at %#x", arch, addr)
    try:
        for subphase in range(2):
            irsb = pyvex.lift(buff, addr + thumb, arch,
                              max_bytes=size,
                              max_inst=num_inst,
                              bytes_offset=offset + thumb,
                              traceflags=traceflags,
                              opt_level=opt_level,
                              strict_block_end=strict_block_end,
                              skip_stmts=skip_stmts,
                              collect_data_refs=collect_data_refs,
                              load_from_ro_regions=load_from_ro_regions,
                              cross_insn_opt=cross_insn_opt)

            if subphase == 0 and irsb.statements is not None:
                # check for possible stop points
                stop_point = self._first_stoppoint(irsb, extra_stop_points)
                if stop_point is not None:
                    size = stop_point - addr
                    continue

            if use_cache:
                self._block_cache[cache_key] = irsb
            if state:
                state._inspect('vex_lift', BP_AFTER, vex_lift_addr=addr, vex_lift_size=size)
            return irsb

    # phase x: error handling
    except pyvex.PyVEXError as e:
        l.debug("VEX translation error at %#x", addr)
        if isinstance(buff, bytes):
            l.debug('Using bytes: %r', buff)
        else:
            l.debug("Using bytes: %r", pyvex.ffi.buffer(buff, size))
        raise SimTranslationError("Unable to translate bytecode") from e
[ "def", "lift_vex", "(", "self", ",", "addr", "=", "None", ",", "state", "=", "None", ",", "clemory", "=", "None", ",", "insn_bytes", "=", "None", ",", "offset", "=", "None", ",", "arch", "=", "None", ",", "size", "=", "None", ",", "num_inst", "=", ...
https://github.com/angr/angr/blob/4b04d56ace135018083d36d9083805be8146688b/angr/engines/vex/lifter.py#L73-L278
Trusted-AI/adversarial-robustness-toolbox
9fabffdbb92947efa1ecc5d825d634d30dfbaf29
art/evaluations/evaluation.py
python
Evaluation.evaluate
(self, *args, **kwargs)
Abstract method running the evaluation.
Abstract method running the evaluation.
[ "Abstract", "method", "running", "the", "evaluation", "." ]
def evaluate(self, *args, **kwargs) -> Any:
    """
    Abstract method running the evaluation.
    """
    raise NotImplementedError
[ "def", "evaluate", "(", "self", ",", "*", "args", ",", "*", "*", "kwargs", ")", "->", "Any", ":", "raise", "NotImplementedError" ]
https://github.com/Trusted-AI/adversarial-robustness-toolbox/blob/9fabffdbb92947efa1ecc5d825d634d30dfbaf29/art/evaluations/evaluation.py#L31-L35
AppScale/gts
46f909cf5dc5ba81faf9d81dc9af598dcf8a82a9
AppServer/lib/graphy/graphy/common.py
python
BaseChart.GetDependentAxis
(self)
return self.left
Return this chart's main dependent axis (often 'left', but horizontal bar-charts use 'bottom').
Return this chart's main dependent axis (often 'left', but horizontal bar-charts use 'bottom').
[ "Return", "this", "chart", "s", "main", "dependent", "axis", "(", "often", "left", "but", "horizontal", "bar", "-", "charts", "use", "bottom", ")", "." ]
def GetDependentAxis(self):
    """Return this chart's main dependent axis (often 'left', but
    horizontal bar-charts use 'bottom').
    """
    return self.left
[ "def", "GetDependentAxis", "(", "self", ")", ":", "return", "self", ".", "left" ]
https://github.com/AppScale/gts/blob/46f909cf5dc5ba81faf9d81dc9af598dcf8a82a9/AppServer/lib/graphy/graphy/common.py#L266-L270
openshift/openshift-tools
1188778e728a6e4781acf728123e5b356380fe6f
ansible/roles/lib_statuspageio/library/statuspage_incident.py
python
StatusPageIncident.exists
(self)
verify if the incoming incident exists As per some discussion, this is a difficult task without a unique identifier on the incident. Decision: If an incident exists with the same components, and the components are in the same state as before, then we can say with a small degree of confidence that this is the correct incident referred to by the caller.
verify if the incoming incident exists
[ "verify", "if", "the", "incoming", "incident", "exists" ]
def exists(self):
    ''' verify if the incoming incident exists

        As per some discussion, this is a difficult task without a unique
        identifier on the incident.

        Decision: If an incident exists with the same components, and the
        components are in the same state as before, then we can say with a
        small degree of confidence that this is the correct incident
        referred to by the caller.
    '''
    found, _, _ = self.find_incident()
    if len(found) == 1:
        return True
    if len(found) == 0:
        return False

    raise StatusPageIOAPIError('Found %s instances matching your search. Please resolve this issue ids=[%s].' \
        % (len(found), ', '.join([inc.id for inc in found])))
[ "def", "exists", "(", "self", ")", ":", "found", ",", "_", ",", "_", "=", "self", ".", "find_incident", "(", ")", "if", "len", "(", "found", ")", "==", "1", ":", "return", "True", "if", "len", "(", "found", ")", "==", "0", ":", "return", "False...
https://github.com/openshift/openshift-tools/blob/1188778e728a6e4781acf728123e5b356380fe6f/ansible/roles/lib_statuspageio/library/statuspage_incident.py#L539-L558
AutodeskRoboticsLab/Mimic
85447f0d346be66988303a6a054473d92f1ed6f4
mimic/scripts/mimic_prefs_ui.py
python
Window.reload_window
(self)
Relaunches the window. :return: None
Relaunches the window. :return: None
[ "Relaunches", "the", "window", ".", ":", "return", ":", "None" ]
def reload_window(self):
    """
    Relaunches the window.
    :return: None
    """
    command = "import sys\n" \
              "sys.dont_write_bytecode = True  # don't write PYCs\n" \
              "import {}\n" \
              "reload({})\n" \
              "{}.{}()".format(__name__, __name__, __name__, self.__class__.__name__)
    pm.evalDeferred(command)
[ "def", "reload_window", "(", "self", ")", ":", "command", "=", "\"import sys\\n\"", "\"sys.dont_write_bytecode = True # don't write PYCs\\n\"", "\"import {}\\n\"", "\"reload({})\\n\"", "\"{}.{}()\"", ".", "format", "(", "__name__", ",", "__name__", ",", "__name__", ",", ...
https://github.com/AutodeskRoboticsLab/Mimic/blob/85447f0d346be66988303a6a054473d92f1ed6f4/mimic/scripts/mimic_prefs_ui.py#L73-L86
rockingdingo/deepnlp
dac9793217e513ec19f1bbcc1c119b0b7bbdb883
deepnlp/ner_tagger.py
python
udf_default
(word, tags, *args)
return tags[0], 1.0
Default: get the first tag. Return: tag, confidence
Default: get the first tag. Return: tag, confidence
[ "Default", "get", "the", "first", "tag", "Return", ":", "tag", "confidence" ]
def udf_default(word, tags, *args):
    """ Default: get the first tag
        Return: tag, confidence
    """
    if len(tags) > 0:
        return tags[0], 1.0
    # return a confidence of 0.0 as well, so the documented (tag, confidence)
    # contract holds on the empty-tags path; the original returned a bare tag
    # here and had an unreachable trailing return
    return TAG_NONE_ENTITY, 0.0
[ "def", "udf_default", "(", "word", ",", "tags", ",", "*", "args", ")", ":", "if", "(", "len", "(", "tags", ")", ">", "0", ")", ":", "return", "tags", "[", "0", "]", ",", "1.0", "else", ":", "return", "TAG_NONE_ENTITY", "return", "tags", "[", "0",...
https://github.com/rockingdingo/deepnlp/blob/dac9793217e513ec19f1bbcc1c119b0b7bbdb883/deepnlp/ner_tagger.py#L42-L50
CLUEbenchmark/CLUE
5bd39732734afecb490cf18a5212e692dbf2c007
baselines/models/roberta_wwm_ext/modeling.py
python
get_shape_list
(tensor, expected_rank=None, name=None)
return shape
Returns a list of the shape of tensor, preferring static dimensions. Args: tensor: A tf.Tensor object to find the shape of. expected_rank: (optional) int. The expected rank of `tensor`. If this is specified and the `tensor` has a different rank, an exception will be thrown. name: Optional name of the tensor for the error message. Returns: A list of dimensions of the shape of tensor. All static dimensions will be returned as python integers, and dynamic dimensions will be returned as tf.Tensor scalars.
Returns a list of the shape of tensor, preferring static dimensions.
[ "Returns", "a", "list", "of", "the", "shape", "of", "tensor", "preferring", "static", "dimensions", "." ]
def get_shape_list(tensor, expected_rank=None, name=None):
  """Returns a list of the shape of tensor, preferring static dimensions.

  Args:
    tensor: A tf.Tensor object to find the shape of.
    expected_rank: (optional) int. The expected rank of `tensor`. If this is
      specified and the `tensor` has a different rank, an exception will be
      thrown.
    name: Optional name of the tensor for the error message.

  Returns:
    A list of dimensions of the shape of tensor. All static dimensions will
    be returned as python integers, and dynamic dimensions will be returned
    as tf.Tensor scalars.
  """
  if name is None:
    name = tensor.name

  if expected_rank is not None:
    assert_rank(tensor, expected_rank, name)

  shape = tensor.shape.as_list()

  non_static_indexes = []
  for (index, dim) in enumerate(shape):
    if dim is None:
      non_static_indexes.append(index)

  if not non_static_indexes:
    return shape

  dyn_shape = tf.shape(tensor)
  for index in non_static_indexes:
    shape[index] = dyn_shape[index]
  return shape
[ "def", "get_shape_list", "(", "tensor", ",", "expected_rank", "=", "None", ",", "name", "=", "None", ")", ":", "if", "name", "is", "None", ":", "name", "=", "tensor", ".", "name", "if", "expected_rank", "is", "not", "None", ":", "assert_rank", "(", "te...
https://github.com/CLUEbenchmark/CLUE/blob/5bd39732734afecb490cf18a5212e692dbf2c007/baselines/models/roberta_wwm_ext/modeling.py#L895-L929
buke/GreenOdoo
3d8c55d426fb41fdb3f2f5a1533cfe05983ba1df
runtime/python/lib/python2.7/httplib.py
python
HTTPConnection.endheaders
(self, message_body=None)
Indicate that the last header line has been sent to the server. This method sends the request to the server. The optional message_body argument can be used to pass a message body associated with the request. The message body will be sent in the same packet as the message headers if it is a string, otherwise it is sent as a separate packet.
Indicate that the last header line has been sent to the server.
[ "Indicate", "that", "the", "last", "header", "line", "has", "been", "sent", "to", "the", "server", "." ]
def endheaders(self, message_body=None):
    """Indicate that the last header line has been sent to the server.

    This method sends the request to the server.  The optional
    message_body argument can be used to pass a message body
    associated with the request.  The message body will be sent in
    the same packet as the message headers if it is a string, otherwise
    it is sent as a separate packet.
    """
    if self.__state == _CS_REQ_STARTED:
        self.__state = _CS_REQ_SENT
    else:
        raise CannotSendHeader()
    self._send_output(message_body)
[ "def", "endheaders", "(", "self", ",", "message_body", "=", "None", ")", ":", "if", "self", ".", "__state", "==", "_CS_REQ_STARTED", ":", "self", ".", "__state", "=", "_CS_REQ_SENT", "else", ":", "raise", "CannotSendHeader", "(", ")", "self", ".", "_send_o...
https://github.com/buke/GreenOdoo/blob/3d8c55d426fb41fdb3f2f5a1533cfe05983ba1df/runtime/python/lib/python2.7/httplib.py#L956-L969
Yelp/bravado-core
382db874b7b838dcfd169b0ce490d6a447ad6ff2
bravado_core/spec.py
python
build_http_handlers
(http_client)
return { 'http': download, 'https': download, # jsonschema ordinarily handles file:// requests, but it assumes that # all files are json formatted. We override it here so that we can # load yaml files when necessary. 'file': read_file, }
Create a mapping of uri schemes to callables that take a uri. The callable is used by jsonschema's RefResolver to download remote $refs. :param http_client: http_client with a request() method :returns: dict like {'http': callable, 'https': callable}
Create a mapping of uri schemes to callables that take a uri. The callable is used by jsonschema's RefResolver to download remote $refs.
[ "Create", "a", "mapping", "of", "uri", "schemes", "to", "callables", "that", "take", "a", "uri", ".", "The", "callable", "is", "used", "by", "jsonschema", "s", "RefResolver", "to", "download", "remote", "$refs", "." ]
def build_http_handlers(http_client):
    """Create a mapping of uri schemes to callables that take a uri. The
    callable is used by jsonschema's RefResolver to download remote $refs.

    :param http_client: http_client with a request() method
    :returns: dict like {'http': callable, 'https': callable}
    """
    def download(uri):
        log.debug('Downloading %s', uri)
        request_params = {
            'method': 'GET',
            'url': uri,
        }
        response = http_client.request(request_params).result()
        content_type = response.headers.get('content-type', '').lower()
        if is_yaml(uri, content_type):
            return yaml.load(response.content, Loader=SafeLoader)
        else:
            return response.json()

    def read_file(uri):
        with open(url2pathname(urlparse(uri).path), mode='rb') as fp:
            if is_yaml(uri):
                return yaml.load(fp, Loader=SafeLoader)
            else:
                return json.loads(fp.read().decode("utf-8"))

    return {
        'http': download,
        'https': download,
        # jsonschema ordinarily handles file:// requests, but it assumes that
        # all files are json formatted. We override it here so that we can
        # load yaml files when necessary.
        'file': read_file,
    }
[ "def", "build_http_handlers", "(", "http_client", ")", ":", "def", "download", "(", "uri", ")", ":", "log", ".", "debug", "(", "'Downloading %s'", ",", "uri", ")", "request_params", "=", "{", "'method'", ":", "'GET'", ",", "'url'", ":", "uri", ",", "}", ...
https://github.com/Yelp/bravado-core/blob/382db874b7b838dcfd169b0ce490d6a447ad6ff2/bravado_core/spec.py#L555-L590
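The handler mapping above dispatches purely on URI scheme. A stripped-down, standard-library-only sketch of that dispatch pattern (the helper names `build_handlers` and `resolve` are illustrative, not bravado-core API):

```python
from urllib.parse import urlparse

def build_handlers(fetch_remote, read_local):
    """Map URI schemes to loader callables, mirroring build_http_handlers.

    fetch_remote and read_local are caller-supplied loaders; both names
    are illustrative.
    """
    return {"http": fetch_remote, "https": fetch_remote, "file": read_local}

def resolve(uri, handlers):
    # Dispatch on the URI scheme, as jsonschema's RefResolver would.
    return handlers[urlparse(uri).scheme](uri)

handlers = build_handlers(lambda uri: ("remote", uri), lambda uri: ("local", uri))
print(resolve("https://example.com/spec.json", handlers))
```

The same table shape lets a caller override any single scheme (e.g. swapping in a YAML-aware `file` loader) without touching the others.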
TheAlgorithms/Python
9af2eef9b3761bf51580dedfb6fa7136ca0c5c2c
project_euler/problem_035/sol1.py
python
solution
()
return len(find_circular_primes())
>>> solution() 55
>>> solution() 55
[ ">>>", "solution", "()", "55" ]
def solution() -> int:
    """
    >>> solution()
    55
    """
    return len(find_circular_primes())
[ "def", "solution", "(", ")", "->", "int", ":", "return", "len", "(", "find_circular_primes", "(", ")", ")" ]
https://github.com/TheAlgorithms/Python/blob/9af2eef9b3761bf51580dedfb6fa7136ca0c5c2c/project_euler/problem_035/sol1.py#L73-L78
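The record shows only `solution()`; `find_circular_primes` lives elsewhere in the file. A self-contained sketch of the underlying check (illustrative helpers, not the repository's exact implementation):

```python
def is_prime(n: int) -> bool:
    """Trial-division primality test."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def rotations(n: int):
    """All digit rotations of n, e.g. 197 -> [197, 971, 719]."""
    s = str(n)
    return [int(s[i:] + s[:i]) for i in range(len(s))]

def is_circular_prime(n: int) -> bool:
    """A circular prime stays prime under every digit rotation."""
    return all(is_prime(r) for r in rotations(n))

# The 13 circular primes below 100
print([n for n in range(2, 100) if is_circular_prime(n)])
```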
natashamjaques/neural_chat
ddb977bb4602a67c460d02231e7bbf7b2cb49a97
ParlAI/parlai/agents/example_seq2seq/example_seq2seq.py
python
ExampleSeq2seqAgent.zero_grad
(self)
Zero out optimizer.
Zero out optimizer.
[ "Zero", "out", "optimizer", "." ]
def zero_grad(self):
    """Zero out optimizer."""
    for optimizer in self.optims.values():
        optimizer.zero_grad()
[ "def", "zero_grad", "(", "self", ")", ":", "for", "optimizer", "in", "self", ".", "optims", ".", "values", "(", ")", ":", "optimizer", ".", "zero_grad", "(", ")" ]
https://github.com/natashamjaques/neural_chat/blob/ddb977bb4602a67c460d02231e7bbf7b2cb49a97/ParlAI/parlai/agents/example_seq2seq/example_seq2seq.py#L179-L182
bendmorris/static-python
2e0f8c4d7ed5b359dc7d8a75b6fb37e6b6c5c473
Lib/distutils/command/sdist.py
python
sdist.make_release_tree
(self, base_dir, files)
Create the directory tree that will become the source distribution archive. All directories implied by the filenames in 'files' are created under 'base_dir', and then we hard link or copy (if hard linking is unavailable) those files into place. Essentially, this duplicates the developer's source tree, but in a directory named after the distribution, containing only the files to be distributed.
Create the directory tree that will become the source distribution archive. All directories implied by the filenames in 'files' are created under 'base_dir', and then we hard link or copy (if hard linking is unavailable) those files into place. Essentially, this duplicates the developer's source tree, but in a directory named after the distribution, containing only the files to be distributed.
[ "Create", "the", "directory", "tree", "that", "will", "become", "the", "source", "distribution", "archive", ".", "All", "directories", "implied", "by", "the", "filenames", "in", "files", "are", "created", "under", "base_dir", "and", "then", "we", "hard", "link...
def make_release_tree(self, base_dir, files):
    """Create the directory tree that will become the source
    distribution archive.  All directories implied by the filenames in
    'files' are created under 'base_dir', and then we hard link or copy
    (if hard linking is unavailable) those files into place.
    Essentially, this duplicates the developer's source tree, but in a
    directory named after the distribution, containing only the files
    to be distributed.
    """
    # Create all the directories under 'base_dir' necessary to
    # put 'files' there; the 'mkpath()' is just so we don't die
    # if the manifest happens to be empty.
    self.mkpath(base_dir)
    dir_util.create_tree(base_dir, files, dry_run=self.dry_run)

    # And walk over the list of files, either making a hard link (if
    # os.link exists) to each one that doesn't already exist in its
    # corresponding location under 'base_dir', or copying each file
    # that's out-of-date in 'base_dir'.  (Usually, all files will be
    # out-of-date, because by default we blow away 'base_dir' when
    # we're done making the distribution archives.)

    if hasattr(os, 'link'):        # can make hard links on this system
        link = 'hard'
        msg = "making hard links in %s..." % base_dir
    else:                          # nope, have to copy
        link = None
        msg = "copying files to %s..." % base_dir

    if not files:
        log.warn("no files to distribute -- empty manifest?")
    else:
        log.info(msg)
    for file in files:
        if not os.path.isfile(file):
            log.warn("'%s' not a regular file -- skipping" % file)
        else:
            dest = os.path.join(base_dir, file)
            self.copy_file(file, dest, link=link)

    self.distribution.metadata.write_pkg_info(base_dir)
[ "def", "make_release_tree", "(", "self", ",", "base_dir", ",", "files", ")", ":", "# Create all the directories under 'base_dir' necessary to", "# put 'files' there; the 'mkpath()' is just so we don't die", "# if the manifest happens to be empty.", "self", ".", "mkpath", "(", "base...
https://github.com/bendmorris/static-python/blob/2e0f8c4d7ed5b359dc7d8a75b6fb37e6b6c5c473/Lib/distutils/command/sdist.py#L385-L425
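The link-or-copy step above can be sketched in isolation. `place_file` below is an illustrative helper, not a distutils API; it hard-links when the platform allows and falls back to copying:

```python
import os
import shutil
import tempfile

def place_file(src, dest):
    """Hard-link src into dest if possible, else copy (metadata-preserving).

    Simplified sketch of the link-or-copy behavior in make_release_tree.
    """
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    if hasattr(os, "link"):
        try:
            os.link(src, dest)
            return "hard"
        except OSError:
            pass  # e.g. cross-device link; fall back to copying
    shutil.copy2(src, dest)
    return "copy"

with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, "a.txt")
    with open(src, "w") as fp:
        fp.write("payload")
    print(place_file(src, os.path.join(tmp, "tree", "a.txt")))
```

Hard links make the release tree nearly free on the same filesystem, which is why distutils prefers them.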
DistrictDataLabs/yellowbrick
2eb8e3edfbe6f3f22c595b38989a23ba7cea40cf
yellowbrick/features/jointplot.py
python
JointPlot._layout
(self)
Creates the grid layout for the joint plot, adding new axes for the histograms if necessary and modifying the aspect ratio. Does not modify the axes or the layout if self.hist is False or None.
Creates the grid layout for the joint plot, adding new axes for the histograms if necessary and modifying the aspect ratio. Does not modify the axes or the layout if self.hist is False or None.
[ "Creates", "the", "grid", "layout", "for", "the", "joint", "plot", "adding", "new", "axes", "for", "the", "histograms", "if", "necessary", "and", "modifying", "the", "aspect", "ratio", ".", "Does", "not", "modify", "the", "axes", "or", "the", "layout", "if...
def _layout(self):
    """
    Creates the grid layout for the joint plot, adding new axes for the
    histograms if necessary and modifying the aspect ratio. Does not modify
    the axes or the layout if self.hist is False or None.
    """
    # Ensure the axes are created if not hist, then return.
    if not self.hist:
        self.ax
        return

    # Ensure matplotlib version compatibility
    if make_axes_locatable is None:
        raise YellowbrickValueError(
            (
                "joint plot histograms requires matplotlib 2.0.2 or greater "
                "please upgrade matplotlib or set hist=False on the visualizer"
            )
        )

    # Create the new axes for the histograms
    divider = make_axes_locatable(self.ax)
    self._xhax = divider.append_axes("top", size=1, pad=0.1, sharex=self.ax)
    self._yhax = divider.append_axes("right", size=1, pad=0.1, sharey=self.ax)

    # Modify the display of the axes
    self._xhax.xaxis.tick_top()
    self._yhax.yaxis.tick_right()
    self._xhax.grid(False, axis="y")
    self._yhax.grid(False, axis="x")
[ "def", "_layout", "(", "self", ")", ":", "# Ensure the axes are created if not hist, then return.", "if", "not", "self", ".", "hist", ":", "self", ".", "ax", "return", "# Ensure matplotlib version compatibility", "if", "make_axes_locatable", "is", "None", ":", "raise", ...
https://github.com/DistrictDataLabs/yellowbrick/blob/2eb8e3edfbe6f3f22c595b38989a23ba7cea40cf/yellowbrick/features/jointplot.py#L223-L252
MontrealCorpusTools/Montreal-Forced-Aligner
63473f9a4fabd31eec14e1e5022882f85cfdaf31
montreal_forced_aligner/dictionary/pronunciation.py
python
PronunciationDictionaryMixin.words_symbol_path
(self)
return os.path.join(self.dictionary_output_directory, "words.txt")
Path of word to int mapping file for the dictionary
Path of word to int mapping file for the dictionary
[ "Path", "of", "word", "to", "int", "mapping", "file", "for", "the", "dictionary" ]
def words_symbol_path(self) -> str:
    """
    Path of word to int mapping file for the dictionary
    """
    return os.path.join(self.dictionary_output_directory, "words.txt")
[ "def", "words_symbol_path", "(", "self", ")", "->", "str", ":", "return", "os", ".", "path", ".", "join", "(", "self", ".", "dictionary_output_directory", ",", "\"words.txt\"", ")" ]
https://github.com/MontrealCorpusTools/Montreal-Forced-Aligner/blob/63473f9a4fabd31eec14e1e5022882f85cfdaf31/montreal_forced_aligner/dictionary/pronunciation.py#L472-L476
tendenci/tendenci
0f2c348cc0e7d41bc56f50b00ce05544b083bf1d
tendenci/libs/tinymce/views.py
python
render_to_image_list
(image_list)
return render_to_js_vardef('tinyMCEImageList', image_list)
Returns an HttpResponse whose content is a Javascript file representing a list of images suitable for use with the TinyMCE external_image_list_url configuration option. The image_list parameter must be a list of 2-tuples.
Returns an HttpResponse whose content is a Javascript file representing a list of images suitable for use with the TinyMCE external_image_list_url configuration option. The image_list parameter must be a list of 2-tuples.
[ "Returns", "a", "HttpResponse", "whose", "content", "is", "a", "Javscript", "file", "representing", "a", "list", "of", "images", "suitable", "for", "use", "wit", "the", "TinyMCE", "external_image_list_url", "configuration", "option", ".", "The", "image_list", "par...
def render_to_image_list(image_list):
    """
    Returns an HttpResponse whose content is a Javascript file representing a
    list of images suitable for use with the TinyMCE external_image_list_url
    configuration option. The image_list parameter must be a list of 2-tuples.
    """
    return render_to_js_vardef('tinyMCEImageList', image_list)
[ "def", "render_to_image_list", "(", "image_list", ")", ":", "return", "render_to_js_vardef", "(", "'tinyMCEImageList'", ",", "image_list", ")" ]
https://github.com/tendenci/tendenci/blob/0f2c348cc0e7d41bc56f50b00ce05544b083bf1d/tendenci/libs/tinymce/views.py#L123-L129
ChineseGLUE/ChineseGLUE
1591b85cf5427c2ff60f718d359ecb71d2b44879
baselines/models/xlnet/tpu_estimator.py
python
_is_iterable
(obj)
A Python 2 and 3 compatible util to check whether `obj` is iterable.
A Python 2 and 3 compatible util to check whether `obj` is iterable.
[ "A", "Python", "2", "and", "3", "compatible", "util", "to", "check", "whether", "obj", "is", "iterable", "." ]
def _is_iterable(obj):
  """A Python 2 and 3 compatible util to check whether `obj` is iterable."""
  try:
    iter(obj)
    return True
  except TypeError:
    return False
[ "def", "_is_iterable", "(", "obj", ")", ":", "try", ":", "iter", "(", "obj", ")", "return", "True", "except", "TypeError", ":", "return", "False" ]
https://github.com/ChineseGLUE/ChineseGLUE/blob/1591b85cf5427c2ff60f718d359ecb71d2b44879/baselines/models/xlnet/tpu_estimator.py#L114-L120
leancloud/satori
701caccbd4fe45765001ca60435c0cb499477c03
satori-rules/plugin/libs/pynvml.py
python
nvmlDeviceSetDefaultAutoBoostedClocksEnabled
(handle, enabled, flags)
return None
[]
def nvmlDeviceSetDefaultAutoBoostedClocksEnabled(handle, enabled, flags):
    fn = _nvmlGetFunctionPointer("nvmlDeviceSetDefaultAutoBoostedClocksEnabled")
    ret = fn(handle, _nvmlEnableState_t(enabled), c_uint(flags))
    _nvmlCheckReturn(ret)
    return None
[ "def", "nvmlDeviceSetDefaultAutoBoostedClocksEnabled", "(", "handle", ",", "enabled", ",", "flags", ")", ":", "fn", "=", "_nvmlGetFunctionPointer", "(", "\"nvmlDeviceSetDefaultAutoBoostedClocksEnabled\"", ")", "ret", "=", "fn", "(", "handle", ",", "_nvmlEnableState_t", ...
https://github.com/leancloud/satori/blob/701caccbd4fe45765001ca60435c0cb499477c03/satori-rules/plugin/libs/pynvml.py#L1552-L1556
polakowo/vectorbt
6638735c131655760474d72b9f045d1dbdbd8fe9
vectorbt/utils/config.py
python
get_func_arg_names
(func: tp.Callable, arg_kind: tp.Optional[tp.MaybeTuple[int]] = None)
return [ p.name for p in signature.parameters.values() if p.kind in arg_kind ]
Get argument names of a function.
Get argument names of a function.
[ "Get", "argument", "names", "of", "a", "function", "." ]
def get_func_arg_names(func: tp.Callable, arg_kind: tp.Optional[tp.MaybeTuple[int]] = None) -> tp.List[str]:
    """Get argument names of a function."""
    signature = inspect.signature(func)
    if arg_kind is not None and isinstance(arg_kind, int):
        arg_kind = (arg_kind,)
    if arg_kind is None:
        return [
            p.name for p in signature.parameters.values()
            if p.kind != p.VAR_POSITIONAL and p.kind != p.VAR_KEYWORD
        ]
    return [
        p.name for p in signature.parameters.values()
        if p.kind in arg_kind
    ]
[ "def", "get_func_arg_names", "(", "func", ":", "tp", ".", "Callable", ",", "arg_kind", ":", "tp", ".", "Optional", "[", "tp", ".", "MaybeTuple", "[", "int", "]", "]", "=", "None", ")", "->", "tp", ".", "List", "[", "str", "]", ":", "signature", "="...
https://github.com/polakowo/vectorbt/blob/6638735c131655760474d72b9f045d1dbdbd8fe9/vectorbt/utils/config.py#L55-L68
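The same logic can be restated with plain `inspect` (the name `func_arg_names` is illustrative, not vectorbt's API). By default, `*args` and `**kwargs` are excluded; passing a `Parameter.kind` value filters to that kind:

```python
import inspect

def func_arg_names(func, kinds=None):
    """Simplified restatement of get_func_arg_names using inspect.signature."""
    params = inspect.signature(func).parameters.values()
    if kinds is None:
        # Default: all named parameters, excluding *args and **kwargs.
        return [p.name for p in params
                if p.kind not in (p.VAR_POSITIONAL, p.VAR_KEYWORD)]
    if isinstance(kinds, int):  # Parameter.kind values are IntEnum members
        kinds = (kinds,)
    return [p.name for p in params if p.kind in kinds]

def demo(a, b=1, *args, c, **kwargs):
    pass

print(func_arg_names(demo))
```

Keyword-only parameters like `c` are kept by the default filter, which matters when forwarding arguments by name.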
Tautulli/Tautulli
2410eb33805aaac4bd1c5dad0f71e4f15afaf742
lib/cheroot/cli.py
python
Application.server
(self, parsed_args)
return wsgi.Server(**self.server_args(parsed_args))
Server.
Server.
[ "Server", "." ]
def server(self, parsed_args):
    """Server."""
    return wsgi.Server(**self.server_args(parsed_args))
[ "def", "server", "(", "self", ",", "parsed_args", ")", ":", "return", "wsgi", ".", "Server", "(", "*", "*", "self", ".", "server_args", "(", "parsed_args", ")", ")" ]
https://github.com/Tautulli/Tautulli/blob/2410eb33805aaac4bd1c5dad0f71e4f15afaf742/lib/cheroot/cli.py#L113-L115
eirannejad/pyRevit
49c0b7eb54eb343458ce1365425e6552d0c47d44
site-packages/sqlalchemy/sql/schema.py
python
Column._make_proxy
(self, selectable, name=None, key=None, name_is_truncatable=False, **kw)
return c
Create a *proxy* for this column. This is a copy of this ``Column`` referenced by a different parent (such as an alias or select statement). The column should be used only in select scenarios, as its full DDL/default information is not transferred.
Create a *proxy* for this column.
[ "Create", "a", "*", "proxy", "*", "for", "this", "column", "." ]
def _make_proxy(self, selectable, name=None, key=None,
                name_is_truncatable=False, **kw):
    """Create a *proxy* for this column.

    This is a copy of this ``Column`` referenced by a different parent
    (such as an alias or select statement).  The column should
    be used only in select scenarios, as its full DDL/default
    information is not transferred.
    """
    fk = [ForeignKey(f.column, _constraint=f.constraint)
          for f in self.foreign_keys]
    if name is None and self.name is None:
        raise exc.InvalidRequestError(
            "Cannot initialize a sub-selectable"
            " with this Column object until its 'name' has "
            "been assigned.")
    try:
        c = self._constructor(
            _as_truncated(name or self.name) if
            name_is_truncatable else (name or self.name),
            self.type,
            key=key if key else name if name else self.key,
            primary_key=self.primary_key,
            nullable=self.nullable,
            _proxies=[self], *fk)
    except TypeError:
        util.raise_from_cause(
            TypeError(
                "Could not create a copy of this %r object.  "
                "Ensure the class includes a _constructor() "
                "attribute or method which accepts the "
                "standard Column constructor arguments, or "
                "references the Column class itself." % self.__class__)
        )

    c.table = selectable
    selectable._columns.add(c)
    if selectable._is_clone_of is not None:
        c._is_clone_of = selectable._is_clone_of.columns[c.key]
    if self.primary_key:
        selectable.primary_key.add(c)
    c.dispatch.after_parent_attach(c, selectable)
    return c
[ "def", "_make_proxy", "(", "self", ",", "selectable", ",", "name", "=", "None", ",", "key", "=", "None", ",", "name_is_truncatable", "=", "False", ",", "*", "*", "kw", ")", ":", "fk", "=", "[", "ForeignKey", "(", "f", ".", "column", ",", "_constraint...
https://github.com/eirannejad/pyRevit/blob/49c0b7eb54eb343458ce1365425e6552d0c47d44/site-packages/sqlalchemy/sql/schema.py#L1420-L1463
django-import-export/django-import-export
0d4340f02582e86d4515159becb045cad1d53ee0
import_export/resources.py
python
Resource.bulk_create
(self, using_transactions, dry_run, raise_errors, batch_size=None)
Creates objects by calling ``bulk_create``.
Creates objects by calling ``bulk_create``.
[ "Creates", "objects", "by", "calling", "bulk_create", "." ]
def bulk_create(self, using_transactions, dry_run, raise_errors, batch_size=None):
    """
    Creates objects by calling ``bulk_create``.
    """
    try:
        if len(self.create_instances) > 0:
            if not using_transactions and dry_run:
                pass
            else:
                self._meta.model.objects.bulk_create(self.create_instances, batch_size=batch_size)
    except Exception as e:
        logger.exception(e)
        if raise_errors:
            raise e
    finally:
        self.create_instances.clear()
[ "def", "bulk_create", "(", "self", ",", "using_transactions", ",", "dry_run", ",", "raise_errors", ",", "batch_size", "=", "None", ")", ":", "try", ":", "if", "len", "(", "self", ".", "create_instances", ")", ">", "0", ":", "if", "not", "using_transactions...
https://github.com/django-import-export/django-import-export/blob/0d4340f02582e86d4515159becb045cad1d53ee0/import_export/resources.py#L376-L391
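Django's `bulk_create` handles `batch_size` internally; the chunking it implies can be sketched without an ORM (`batched` is an illustrative helper, not part of django-import-export):

```python
def batched(items, batch_size=None):
    """Yield items in lists of at most batch_size, mirroring how
    bulk_create splits inserts. batch_size=None means one batch."""
    items = list(items)
    if batch_size is None:
        yield items
        return
    for start in range(0, len(items), batch_size):
        yield items[start:start + batch_size]

print(list(batched(range(5), batch_size=2)))
```

Batching keeps individual INSERT statements below database parameter limits while still avoiding one round-trip per row.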
Tautulli/Tautulli
2410eb33805aaac4bd1c5dad0f71e4f15afaf742
lib/bs4/dammit.py
python
UnicodeDammit._sub_ms_char
(self, match)
return sub
Changes a MS smart quote character to an XML or HTML entity, or an ASCII character.
Changes a MS smart quote character to an XML or HTML entity, or an ASCII character.
[ "Changes", "a", "MS", "smart", "quote", "character", "to", "an", "XML", "or", "HTML", "entity", "or", "an", "ASCII", "character", "." ]
def _sub_ms_char(self, match):
    """Changes a MS smart quote character to an XML or HTML
    entity, or an ASCII character."""
    orig = match.group(1)
    if self.smart_quotes_to == 'ascii':
        sub = self.MS_CHARS_TO_ASCII.get(orig).encode()
    else:
        sub = self.MS_CHARS.get(orig)
        if type(sub) == tuple:
            if self.smart_quotes_to == 'xml':
                sub = '&#x'.encode() + sub[1].encode() + ';'.encode()
            else:
                sub = '&'.encode() + sub[0].encode() + ';'.encode()
        else:
            sub = sub.encode()
    return sub
[ "def", "_sub_ms_char", "(", "self", ",", "match", ")", ":", "orig", "=", "match", ".", "group", "(", "1", ")", "if", "self", ".", "smart_quotes_to", "==", "'ascii'", ":", "sub", "=", "self", ".", "MS_CHARS_TO_ASCII", ".", "get", "(", "orig", ")", "."...
https://github.com/Tautulli/Tautulli/blob/2410eb33805aaac4bd1c5dad0f71e4f15afaf742/lib/bs4/dammit.py#L2872-L2887
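A simplified, self-contained variant of the smart-quote branching, with a tiny stand-in for the real `MS_CHARS` table (keyed here by cp1252 bytes for illustration; UnicodeDammit's table is larger and keyed differently):

```python
# Minimal stand-in for bs4's MS_CHARS mapping: (html entity name, xml codepoint)
MS_CHARS = {
    b"\x91": ("lsquo", "2018"),
    b"\x92": ("rsquo", "2019"),
    b"\x93": ("ldquo", "201C"),
    b"\x94": ("rdquo", "201D"),
}

def sub_ms_char(orig: bytes, smart_quotes_to: str) -> bytes:
    """Replace one cp1252 smart-quote byte, following the same branching."""
    sub = MS_CHARS[orig]
    if smart_quotes_to == "xml":
        return b"&#x" + sub[1].encode() + b";"   # numeric character reference
    return b"&" + sub[0].encode() + b";"          # named HTML entity

print(sub_ms_char(b"\x93", "xml"), sub_ms_char(b"\x93", "html"))
```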
blawar/nut
2cf351400418399a70164987e28670309f6c9cb5
gui/table_model.py
python
TableModel.setRowCount
(self, row_count)
[]
def setRowCount(self, row_count):
    if row_count == 0:
        self.datatable = []
[ "def", "setRowCount", "(", "self", ",", "row_count", ")", ":", "if", "row_count", "==", "0", ":", "self", ".", "datatable", "=", "[", "]" ]
https://github.com/blawar/nut/blob/2cf351400418399a70164987e28670309f6c9cb5/gui/table_model.py#L92-L94
dswah/pyGAM
b57b4cf8783a90976031e1857e748ca3e6ec650b
pygam/terms.py
python
LinearTerm.build_columns
(self, X, verbose=False)
return sp.sparse.csc_matrix(X[:, self.feature][:, np.newaxis])
construct the model matrix columns for the term Parameters ---------- X : array-like Input dataset with n rows verbose : bool whether to show warnings Returns ------- scipy sparse array with n rows
construct the model matrix columns for the term
[ "construct", "the", "model", "matrix", "columns", "for", "the", "term" ]
def build_columns(self, X, verbose=False):
    """construct the model matrix columns for the term

    Parameters
    ----------
    X : array-like
        Input dataset with n rows

    verbose : bool
        whether to show warnings

    Returns
    -------
    scipy sparse array with n rows
    """
    return sp.sparse.csc_matrix(X[:, self.feature][:, np.newaxis])
[ "def", "build_columns", "(", "self", ",", "X", ",", "verbose", "=", "False", ")", ":", "return", "sp", ".", "sparse", ".", "csc_matrix", "(", "X", "[", ":", ",", "self", ".", "feature", "]", "[", ":", ",", "np", ".", "newaxis", "]", ")" ]
https://github.com/dswah/pyGAM/blob/b57b4cf8783a90976031e1857e748ca3e6ec650b/pygam/terms.py#L556-L571
plotly/plotly.py
cfad7862594b35965c0e000813bd7805e8494a5b
packages/python/plotly/plotly/graph_objs/layout/coloraxis/_colorbar.py
python
ColorBar.ticktextsrc
(self)
return self["ticktextsrc"]
Sets the source reference on Chart Studio Cloud for `ticktext`. The 'ticktextsrc' property must be specified as a string or as a plotly.grid_objs.Column object Returns ------- str
Sets the source reference on Chart Studio Cloud for `ticktext`. The 'ticktextsrc' property must be specified as a string or as a plotly.grid_objs.Column object
[ "Sets", "the", "source", "reference", "on", "Chart", "Studio", "Cloud", "for", "ticktext", ".", "The", "ticktextsrc", "property", "must", "be", "specified", "as", "a", "string", "or", "as", "a", "plotly", ".", "grid_objs", ".", "Column", "object" ]
def ticktextsrc(self):
    """
    Sets the source reference on Chart Studio Cloud for `ticktext`.

    The 'ticktextsrc' property must be specified as a string or
    as a plotly.grid_objs.Column object

    Returns
    -------
    str
    """
    return self["ticktextsrc"]
[ "def", "ticktextsrc", "(", "self", ")", ":", "return", "self", "[", "\"ticktextsrc\"", "]" ]
https://github.com/plotly/plotly.py/blob/cfad7862594b35965c0e000813bd7805e8494a5b/packages/python/plotly/plotly/graph_objs/layout/coloraxis/_colorbar.py#L1062-L1073
triaquae/triaquae
bbabf736b3ba56a0c6498e7f04e16c13b8b8f2b9
TriAquae/models/django/contrib/gis/gdal/field.py
python
Field.__str__
(self)
return str(self.value).strip()
Returns the string representation of the Field.
Returns the string representation of the Field.
[ "Returns", "the", "string", "representation", "of", "the", "Field", "." ]
def __str__(self):
    "Returns the string representation of the Field."
    return str(self.value).strip()
[ "def", "__str__", "(", "self", ")", ":", "return", "str", "(", "self", ".", "value", ")", ".", "strip", "(", ")" ]
https://github.com/triaquae/triaquae/blob/bbabf736b3ba56a0c6498e7f04e16c13b8b8f2b9/TriAquae/models/django/contrib/gis/gdal/field.py#L43-L45
kuri65536/python-for-android
26402a08fc46b09ef94e8d7a6bbc3a54ff9d0891
python-modules/twisted/twisted/internet/iocpreactor/udp.py
python
Port.logPrefix
(self)
return self.logstr
Returns the name of my class, to prefix log entries with.
Returns the name of my class, to prefix log entries with.
[ "Returns", "the", "name", "of", "my", "class", "to", "prefix", "log", "entries", "with", "." ]
def logPrefix(self):
    """
    Returns the name of my class, to prefix log entries with.
    """
    return self.logstr
[ "def", "logPrefix", "(", "self", ")", ":", "return", "self", ".", "logstr" ]
https://github.com/kuri65536/python-for-android/blob/26402a08fc46b09ef94e8d7a6bbc3a54ff9d0891/python-modules/twisted/twisted/internet/iocpreactor/udp.py#L271-L275
pandas-dev/pandas
5ba7d714014ae8feaccc0dd4a98890828cf2832d
pandas/io/formats/format.py
python
format_percentiles
( percentiles: (np.ndarray | list[int | float] | list[float] | list[str | float]), )
return [i + "%" for i in out]
Outputs rounded and formatted percentiles. Parameters ---------- percentiles : list-like, containing floats from interval [0,1] Returns ------- formatted : list of strings Notes ----- Rounding precision is chosen so that: (1) if any two elements of ``percentiles`` differ, they remain different after rounding (2) no entry is *rounded* to 0% or 100%. Any non-integer is always rounded to at least 1 decimal place. Examples -------- Keeps all entries different after rounding: >>> format_percentiles([0.01999, 0.02001, 0.5, 0.666666, 0.9999]) ['1.999%', '2.001%', '50%', '66.667%', '99.99%'] No element is rounded to 0% or 100% (unless already equal to it). Duplicates are allowed: >>> format_percentiles([0, 0.5, 0.02001, 0.5, 0.666666, 0.9999]) ['0%', '50%', '2.0%', '50%', '66.67%', '99.99%']
Outputs rounded and formatted percentiles.
[ "Outputs", "rounded", "and", "formatted", "percentiles", "." ]
def format_percentiles( percentiles: (np.ndarray | list[int | float] | list[float] | list[str | float]), ) -> list[str]: """ Outputs rounded and formatted percentiles. Parameters ---------- percentiles : list-like, containing floats from interval [0,1] Returns ------- formatted : list of strings Notes ----- Rounding precision is chosen so that: (1) if any two elements of ``percentiles`` differ, they remain different after rounding (2) no entry is *rounded* to 0% or 100%. Any non-integer is always rounded to at least 1 decimal place. Examples -------- Keeps all entries different after rounding: >>> format_percentiles([0.01999, 0.02001, 0.5, 0.666666, 0.9999]) ['1.999%', '2.001%', '50%', '66.667%', '99.99%'] No element is rounded to 0% or 100% (unless already equal to it). Duplicates are allowed: >>> format_percentiles([0, 0.5, 0.02001, 0.5, 0.666666, 0.9999]) ['0%', '50%', '2.0%', '50%', '66.67%', '99.99%'] """ percentiles = np.asarray(percentiles) # It checks for np.NaN as well with np.errstate(invalid="ignore"): if ( not is_numeric_dtype(percentiles) or not np.all(percentiles >= 0) or not np.all(percentiles <= 1) ): raise ValueError("percentiles should all be in the interval [0,1]") percentiles = 100 * percentiles int_idx = np.isclose(percentiles.astype(int), percentiles) if np.all(int_idx): out = percentiles.astype(int).astype(str) return [i + "%" for i in out] unique_pcts = np.unique(percentiles) to_begin = unique_pcts[0] if unique_pcts[0] > 0 else None to_end = 100 - unique_pcts[-1] if unique_pcts[-1] < 100 else None # Least precision that keeps percentiles unique after rounding prec = -np.floor( np.log10(np.min(np.ediff1d(unique_pcts, to_begin=to_begin, to_end=to_end))) ).astype(int) prec = max(1, prec) out = np.empty_like(percentiles, dtype=object) out[int_idx] = percentiles[int_idx].astype(int).astype(str) out[~int_idx] = percentiles[~int_idx].round(prec).astype(str) return [i + "%" for i in out]
[ "def", "format_percentiles", "(", "percentiles", ":", "(", "np", ".", "ndarray", "|", "list", "[", "int", "|", "float", "]", "|", "list", "[", "float", "]", "|", "list", "[", "str", "|", "float", "]", ")", ",", ")", "->", "list", "[", "str", "]",...
https://github.com/pandas-dev/pandas/blob/5ba7d714014ae8feaccc0dd4a98890828cf2832d/pandas/io/formats/format.py#L1667-L1733
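The rounding-precision rule in the `format_percentiles` record above can be sketched in pure Python. This is a simplified, hypothetical re-implementation for illustration only, not the pandas code: it skips input validation and assumes a plain list of floats in [0, 1].

```python
import math

def format_percentiles(percentiles):
    # simplified sketch of pandas' rounding idea (not the pandas implementation)
    pcts = [100.0 * p for p in percentiles]
    # fast path: every entry is a whole percent
    if all(p.is_integer() for p in pcts):
        return ["%d%%" % int(p) for p in pcts]
    uniq = sorted(set(pcts))
    diffs = [b - a for a, b in zip(uniq, uniq[1:])]
    # mirror ediff1d's to_begin/to_end: keep entries away from 0% and 100%
    if uniq[0] > 0:
        diffs.insert(0, uniq[0])
    if uniq[-1] < 100:
        diffs.append(100 - uniq[-1])
    # least precision that keeps distinct percentiles distinct after rounding
    prec = max(1, -math.floor(math.log10(min(diffs))))
    out = []
    for p in pcts:
        if p.is_integer():
            out.append("%d%%" % int(p))
        else:
            out.append(str(round(p, prec)) + "%")
    return out
```

The two-sided `to_begin`/`to_end` padding is what enforces note (2) of the record's docstring: the distance of the extreme entries from 0% and 100% also bounds the rounding precision.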
danilobellini/audiolazy
dba0a278937909980ed40b976d866b8e97c35dee
audiolazy/lazy_analysis.py
python
maverage
(size)
return sum((1. / size) * z ** -i for i in xrange(size))
Moving average Linear filter implementation as a FIR ZFilter. Parameters ---------- size : Data block window size. Should be an integer. Returns ------- A ZFilter instance with the FIR filter. See Also -------- envelope : Signal envelope (time domain) strategies.
Moving average
[ "Moving", "average" ]
def maverage(size): """ Moving average Linear filter implementation as a FIR ZFilter. Parameters ---------- size : Data block window size. Should be an integer. Returns ------- A ZFilter instance with the FIR filter. See Also -------- envelope : Signal envelope (time domain) strategies. """ return sum((1. / size) * z ** -i for i in xrange(size))
[ "def", "maverage", "(", "size", ")", ":", "return", "sum", "(", "(", "1.", "/", "size", ")", "*", "z", "**", "-", "i", "for", "i", "in", "xrange", "(", "size", ")", ")" ]
https://github.com/danilobellini/audiolazy/blob/dba0a278937909980ed40b976d866b8e97c35dee/audiolazy/lazy_analysis.py#L591-L612
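The `maverage` record builds a boxcar FIR in audiolazy's `ZFilter` algebra (note the Python 2 `xrange`). The same filter can be sketched without audiolazy as a plain tap list plus a direct-form FIR loop; the function names here are illustrative, not audiolazy API.

```python
def maverage_taps(size):
    # coefficients of sum((1/size) * z**-i for i in range(size)): a boxcar window
    return [1.0 / size] * size

def fir_apply(taps, signal):
    # direct-form FIR: y[n] = sum_i taps[i] * x[n - i], with zero-padded history
    out = []
    for n in range(len(signal)):
        acc = 0.0
        for i, t in enumerate(taps):
            if n - i >= 0:
                acc += t * signal[n - i]
        out.append(acc)
    return out
```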
aws-samples/aws-kube-codesuite
ab4e5ce45416b83bffb947ab8d234df5437f4fca
src/kubernetes/client/models/v1_config_map_list.py
python
V1ConfigMapList.api_version
(self, api_version)
Sets the api_version of this V1ConfigMapList. APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#resources :param api_version: The api_version of this V1ConfigMapList. :type: str
Sets the api_version of this V1ConfigMapList. APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#resources
[ "Sets", "the", "api_version", "of", "this", "V1ConfigMapList", ".", "APIVersion", "defines", "the", "versioned", "schema", "of", "this", "representation", "of", "an", "object", ".", "Servers", "should", "convert", "recognized", "schemas", "to", "the", "latest", ...
def api_version(self, api_version): """ Sets the api_version of this V1ConfigMapList. APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#resources :param api_version: The api_version of this V1ConfigMapList. :type: str """ self._api_version = api_version
[ "def", "api_version", "(", "self", ",", "api_version", ")", ":", "self", ".", "_api_version", "=", "api_version" ]
https://github.com/aws-samples/aws-kube-codesuite/blob/ab4e5ce45416b83bffb947ab8d234df5437f4fca/src/kubernetes/client/models/v1_config_map_list.py#L64-L73
linxid/Machine_Learning_Study_Path
558e82d13237114bbb8152483977806fc0c222af
Machine Learning In Action/Chapter5-LogisticRegression/venv/Lib/reprlib.py
python
recursive_repr
(fillvalue='...')
return decorating_function
Decorator to make a repr function return fillvalue for a recursive call
Decorator to make a repr function return fillvalue for a recursive call
[ "Decorator", "to", "make", "a", "repr", "function", "return", "fillvalue", "for", "a", "recursive", "call" ]
def recursive_repr(fillvalue='...'): 'Decorator to make a repr function return fillvalue for a recursive call' def decorating_function(user_function): repr_running = set() def wrapper(self): key = id(self), get_ident() if key in repr_running: return fillvalue repr_running.add(key) try: result = user_function(self) finally: repr_running.discard(key) return result # Can't use functools.wraps() here because of bootstrap issues wrapper.__module__ = getattr(user_function, '__module__') wrapper.__doc__ = getattr(user_function, '__doc__') wrapper.__name__ = getattr(user_function, '__name__') wrapper.__qualname__ = getattr(user_function, '__qualname__') wrapper.__annotations__ = getattr(user_function, '__annotations__', {}) return wrapper return decorating_function
[ "def", "recursive_repr", "(", "fillvalue", "=", "'...'", ")", ":", "def", "decorating_function", "(", "user_function", ")", ":", "repr_running", "=", "set", "(", ")", "def", "wrapper", "(", "self", ")", ":", "key", "=", "id", "(", "self", ")", ",", "ge...
https://github.com/linxid/Machine_Learning_Study_Path/blob/558e82d13237114bbb8152483977806fc0c222af/Machine Learning In Action/Chapter5-LogisticRegression/venv/Lib/reprlib.py#L12-L37
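The vendored `recursive_repr` above matches the stdlib `reprlib.recursive_repr`; a typical use is guarding `__repr__` on a self-referential structure, where the recursive inner call is replaced by the fill value:

```python
import reprlib

class Node:
    def __init__(self):
        self.next = self  # deliberately self-referential

    @reprlib.recursive_repr()
    def __repr__(self):
        # the inner repr(self.next) call hits the guard and yields '...'
        return "Node(next=%r)" % self.next

print(repr(Node()))  # Node(next=...)
```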
googlearchive/appengine-flask-skeleton
8c25461d003a0bd99a9ff3b339c2791ee6919242
lib/jinja2/compiler.py
python
CodeGenerator.indent
(self)
Indent by one.
Indent by one.
[ "Indent", "by", "one", "." ]
def indent(self): """Indent by one.""" self._indentation += 1
[ "def", "indent", "(", "self", ")", ":", "self", ".", "_indentation", "+=", "1" ]
https://github.com/googlearchive/appengine-flask-skeleton/blob/8c25461d003a0bd99a9ff3b339c2791ee6919242/lib/jinja2/compiler.py#L455-L457
lmacken/quantumrandom
2c2725b3915ae524b4320dd3d54becffc505856f
quantumrandom/__init__.py
python
get_data
(data_type='uint16', array_length=1, block_size=1)
return data['data']
Fetch data from the ANU Quantum Random Numbers JSON API
Fetch data from the ANU Quantum Random Numbers JSON API
[ "Fetch", "data", "from", "the", "ANU", "Quantum", "Random", "Numbers", "JSON", "API" ]
def get_data(data_type='uint16', array_length=1, block_size=1): """Fetch data from the ANU Quantum Random Numbers JSON API""" if data_type not in DATA_TYPES: raise Exception("data_type must be one of %s" % DATA_TYPES) if array_length > MAX_LEN: raise Exception("array_length cannot be larger than %s" % MAX_LEN) if block_size > MAX_LEN: raise Exception("block_size cannot be larger than %s" % MAX_LEN) url = URL + '?' + urlencode({ 'type': data_type, 'length': array_length, 'size': block_size, }) data = get_json(url) assert data['success'] is True, data assert data['length'] == array_length, data return data['data']
[ "def", "get_data", "(", "data_type", "=", "'uint16'", ",", "array_length", "=", "1", ",", "block_size", "=", "1", ")", ":", "if", "data_type", "not", "in", "DATA_TYPES", ":", "raise", "Exception", "(", "\"data_type must be one of %s\"", "%", "DATA_TYPES", ")",...
https://github.com/lmacken/quantumrandom/blob/2c2725b3915ae524b4320dd3d54becffc505856f/quantumrandom/__init__.py#L48-L64
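The query string built inside `get_data` is plain `urllib` work. A minimal Python 3 sketch of just that step (the record's Python 2 code imports `urlencode` from `urllib`; the `URL` constant here is an assumed placeholder for illustration, not pulled from the record):

```python
from urllib.parse import urlencode

URL = 'https://qrng.anu.edu.au/API/jsonI.php'  # assumed endpoint, illustration only

def build_url(data_type='uint16', array_length=1, block_size=1):
    # mirrors get_data's URL-construction step, without the network call
    return URL + '?' + urlencode({
        'type': data_type,
        'length': array_length,
        'size': block_size,
    })
```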
spack/spack
675210bd8bd1c5d32ad1cc83d898fb43b569ed74
var/spack/repos/builtin/packages/roms/package.py
python
Roms.selected_roms_application
(self)
return self.spec.variants['roms_application'].value
Application type that has been selected in this build
Application type that has been selected in this build
[ "Application", "type", "that", "have", "been", "selected", "in", "this", "build" ]
def selected_roms_application(self): """ Application type that have been selected in this build """ return self.spec.variants['roms_application'].value
[ "def", "selected_roms_application", "(", "self", ")", ":", "return", "self", ".", "spec", ".", "variants", "[", "'roms_application'", "]", ".", "value" ]
https://github.com/spack/spack/blob/675210bd8bd1c5d32ad1cc83d898fb43b569ed74/var/spack/repos/builtin/packages/roms/package.py#L56-L60
CalebBell/thermo
572a47d1b03d49fe609b8d5f826fa6a7cde00828
thermo/phases/ideal_gas.py
python
IdealGas.d2P_dVdT_TP
(self)
return 0.0
[]
def d2P_dVdT_TP(self): return 0.0
[ "def", "d2P_dVdT_TP", "(", "self", ")", ":", "return", "0.0" ]
https://github.com/CalebBell/thermo/blob/572a47d1b03d49fe609b8d5f826fa6a7cde00828/thermo/phases/ideal_gas.py#L715-L716
skarra/ASynK
e908a1ad670a2d79f791a6a7539392e078a64add
asynk/sync.py
python
SyncLists.remove_values_from_mod
(self, v)
return self.set_mods(d)
Remove all the values specified in the array v from the passed dictionary and return the new dictionary. This routine is typically used to manipulate one of the self.dictionaries.
Remove all the values specified in the array v from the passed dictionary and return the new dictionary. This routine is typically used to manipulate one of the self.dictionaries.
[ "Remove", "all", "the", "values", "specified", "in", "the", "array", "k", "from", "the", "passed", "dictionary", "and", "return", "the", "new", "dictionary", ".", "This", "routine", "is", "typically", "used", "to", "manipulate", "one", "of", "the", "self", ...
def remove_values_from_mod (self, v): """Remove all the values specified in the array k from the passed dictionary and return the new dictionary. This routine is typically used to manipulate one of the self.dictoinaries.""" d = self.get_mods() d = dict([(x,y) for x,y in d.iteritems() if not y in v]) return self.set_mods(d)
[ "def", "remove_values_from_mod", "(", "self", ",", "v", ")", ":", "d", "=", "self", ".", "get_mods", "(", ")", "d", "=", "dict", "(", "[", "(", "x", ",", "y", ")", "for", "x", ",", "y", "in", "d", ".", "iteritems", "(", ")", "if", "not", "y",...
https://github.com/skarra/ASynK/blob/e908a1ad670a2d79f791a6a7539392e078a64add/asynk/sync.py#L353-L361
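The dict-filtering step in `remove_values_from_mod` is Python 2 (`iteritems`); on Python 3 the same filter is a one-line comprehension. A standalone sketch, detached from the ASynK class:

```python
def remove_values(d, values):
    # keep only the (key, value) pairs whose value is not in `values`
    return {k: v for k, v in d.items() if v not in values}

print(remove_values({'a': 1, 'b': 2, 'c': 1}, [1]))  # {'b': 2}
```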
google/grr
8ad8a4d2c5a93c92729206b7771af19d92d4f915
grr/server/grr_response_server/flows/general/hardware.py
python
DumpFlashImage.DumpImage
(self, responses)
Store hardware information and initiate dumping of the flash image.
Store hardware information and initiate dumping of the flash image.
[ "Store", "hardware", "information", "and", "initiate", "dumping", "of", "the", "flash", "image", "." ]
def DumpImage(self, responses): """Store hardware information and initiate dumping of the flash image.""" self.state.hardware_info = responses.First() self.CallClient( server_stubs.DumpFlashImage, log_level=self.args.log_level, chunk_size=self.args.chunk_size, notify_syslog=self.args.notify_syslog, next_state=compatibility.GetName(self.CollectImage))
[ "def", "DumpImage", "(", "self", ",", "responses", ")", ":", "self", ".", "state", ".", "hardware_info", "=", "responses", ".", "First", "(", ")", "self", ".", "CallClient", "(", "server_stubs", ".", "DumpFlashImage", ",", "log_level", "=", "self", ".", ...
https://github.com/google/grr/blob/8ad8a4d2c5a93c92729206b7771af19d92d4f915/grr/server/grr_response_server/flows/general/hardware.py#L31-L39
jim-schwoebel/voicebook
0e8eae0f01487f15589c0daa2cf7ca3c6f3b8ad3
chapter_7_design/snowboy/snowboydecoder.py
python
HotwordDetector.saveMessage
(self)
return filename
Save the message stored in self.recordedData to a timestamped file.
Save the message stored in self.recordedData to a timestamped file.
[ "Save", "the", "message", "stored", "in", "self", ".", "recordedData", "to", "a", "timestamped", "file", "." ]
def saveMessage(self): """ Save the message stored in self.recordedData to a timestamped file. """ filename = 'output' + str(int(time.time())) + '.wav' data = b''.join(self.recordedData) #use wave to save data wf = wave.open(filename, 'wb') wf.setnchannels(1) wf.setsampwidth(self.audio.get_sample_size( self.audio.get_format_from_width( self.detector.BitsPerSample() / 8))) wf.setframerate(self.detector.SampleRate()) wf.writeframes(data) wf.close() logger.debug("finished saving: " + filename) return filename
[ "def", "saveMessage", "(", "self", ")", ":", "filename", "=", "'output'", "+", "str", "(", "int", "(", "time", ".", "time", "(", ")", ")", ")", "+", "'.wav'", "data", "=", "b''", ".", "join", "(", "self", ".", "recordedData", ")", "#use wave to save ...
https://github.com/jim-schwoebel/voicebook/blob/0e8eae0f01487f15589c0daa2cf7ca3c6f3b8ad3/chapter_7_design/snowboy/snowboydecoder.py#L250-L267
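The `wave` calls in `saveMessage` can be exercised without PyAudio or snowboy by writing to an in-memory buffer. This is a sketch: the fixed 16-bit sample width stands in for the detector's `BitsPerSample() / 8` query, and the sample rate is an assumed default.

```python
import io
import wave

def save_wav(frames: bytes, sample_rate: int = 16000) -> bytes:
    # same sequence of wave calls as the record, minus the hardware-derived values
    buf = io.BytesIO()
    wf = wave.open(buf, 'wb')
    wf.setnchannels(1)            # mono, as in the record
    wf.setsampwidth(2)            # assume 16-bit samples
    wf.setframerate(sample_rate)
    wf.writeframes(frames)
    wf.close()
    return buf.getvalue()
```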
ales-tsurko/cells
4cf7e395cd433762bea70cdc863a346f3a6fe1d0
packaging/macos/python/lib/python3.7/multiprocessing/process.py
python
BaseProcess.join
(self, timeout=None)
Wait until child process terminates
Wait until child process terminates
[ "Wait", "until", "child", "process", "terminates" ]
def join(self, timeout=None): ''' Wait until child process terminates ''' self._check_closed() assert self._parent_pid == os.getpid(), 'can only join a child process' assert self._popen is not None, 'can only join a started process' res = self._popen.wait(timeout) if res is not None: _children.discard(self)
[ "def", "join", "(", "self", ",", "timeout", "=", "None", ")", ":", "self", ".", "_check_closed", "(", ")", "assert", "self", ".", "_parent_pid", "==", "os", ".", "getpid", "(", ")", ",", "'can only join a child process'", "assert", "self", ".", "_popen", ...
https://github.com/ales-tsurko/cells/blob/4cf7e395cd433762bea70cdc863a346f3a6fe1d0/packaging/macos/python/lib/python3.7/multiprocessing/process.py#L133-L142
subuser-security/subuser
8072271f8fc3dded60b048c2dee878f9840c126a
subuserlib/classes/docker/dockerDaemon.py
python
DockerDaemon.getConnection
(self)
return self.__connection
Get an `HTTPConnection <https://docs.python.org/2/library/httplib.html#httplib.HTTPConnection>`_ to the Docker daemon. Note: You can find more info in the `Docker API docs <https://docs.docker.com/reference/api/docker_remote_api_v1.13/>`_
Get an `HTTPConnection <https://docs.python.org/2/library/httplib.html#httplib.HTTPConnection>`_ to the Docker daemon.
[ "Get", "an", "HTTPConnection", "<https", ":", "//", "docs", ".", "python", ".", "org", "/", "2", "/", "library", "/", "httplib", ".", "html#httplib", ".", "HTTPConnection", ">", "_", "to", "the", "Docker", "daemon", "." ]
def getConnection(self): """ Get an `HTTPConnection <https://docs.python.org/2/library/httplib.html#httplib.HTTPConnection>`_ to the Docker daemon. Note: You can find more info in the `Docker API docs <https://docs.docker.com/reference/api/docker_remote_api_v1.13/>`_ """ if not self.__connection: subuserlib.docker.getAndVerifyExecutable() try: self.__connection = UHTTPConnection("/var/run/docker.sock") except PermissionError as e: sys.exit("Permission error (%s) connecting to the docker socket. This usually happens when you've added yourself as a member of the docker group but haven't logged out/in again before starting subuser."% str(e)) return self.__connection
[ "def", "getConnection", "(", "self", ")", ":", "if", "not", "self", ".", "__connection", ":", "subuserlib", ".", "docker", ".", "getAndVerifyExecutable", "(", ")", "try", ":", "self", ".", "__connection", "=", "UHTTPConnection", "(", "\"/var/run/docker.sock\"", ...
https://github.com/subuser-security/subuser/blob/8072271f8fc3dded60b048c2dee878f9840c126a/subuserlib/classes/docker/dockerDaemon.py#L109-L121
openstack/ironic
b392dc19bcd29cef5a69ec00d2f18a7a19a679e5
ironic/drivers/modules/redfish/raid.py
python
RedfishRAID._clear_raid_configs
(self, node)
Clears RAID configurations from driver_internal_info Note that the caller must have an exclusive lock on the node. :param node: the node to clear the RAID configs from
Clears RAID configurations from driver_internal_info
[ "Clears", "RAID", "configurations", "from", "driver_internal_info" ]
def _clear_raid_configs(self, node): """Clears RAID configurations from driver_internal_info Note that the caller must have an exclusive lock on the node. :param node: the node to clear the RAID configs from """ node.del_driver_internal_info('raid_configs') node.save()
[ "def", "_clear_raid_configs", "(", "self", ",", "node", ")", ":", "node", ".", "del_driver_internal_info", "(", "'raid_configs'", ")", "node", ".", "save", "(", ")" ]
https://github.com/openstack/ironic/blob/b392dc19bcd29cef5a69ec00d2f18a7a19a679e5/ironic/drivers/modules/redfish/raid.py#L999-L1007
jbjorne/TEES
caf19a4a1352ac59f5dc13a8684cc42ce4342d9d
Detectors/ToolChain.py
python
ToolChain.defAlias
(self, name, steps)
[]
def defAlias(self, name, steps): assert name not in self.definedStepDict steps = [self.definedStepDict[x] for x in steps] step = Step(name, steps, None, None, None, None, self.group) self.definedStepDict[name] = step self.definedSteps.append(step)
[ "def", "defAlias", "(", "self", ",", "name", ",", "steps", ")", ":", "assert", "name", "not", "in", "self", ".", "definedStepDict", "steps", "=", "[", "self", ".", "definedStepDict", "[", "x", "]", "for", "x", "in", "steps", "]", "step", "=", "Step",...
https://github.com/jbjorne/TEES/blob/caf19a4a1352ac59f5dc13a8684cc42ce4342d9d/Detectors/ToolChain.py#L113-L118
twilio/twilio-python
6e1e811ea57a1edfadd5161ace87397c563f6915
twilio/rest/api/v2010/account/available_phone_number/local.py
python
LocalInstance.iso_country
(self)
return self._properties['iso_country']
:returns: The ISO country code of this phone number :rtype: unicode
:returns: The ISO country code of this phone number :rtype: unicode
[ ":", "returns", ":", "The", "ISO", "country", "code", "of", "this", "phone", "number", ":", "rtype", ":", "unicode" ]
def iso_country(self): """ :returns: The ISO country code of this phone number :rtype: unicode """ return self._properties['iso_country']
[ "def", "iso_country", "(", "self", ")", ":", "return", "self", ".", "_properties", "[", "'iso_country'", "]" ]
https://github.com/twilio/twilio-python/blob/6e1e811ea57a1edfadd5161ace87397c563f6915/twilio/rest/api/v2010/account/available_phone_number/local.py#L417-L422
zopefoundation/Zope
ea04dd670d1a48d4d5c879d3db38fc2e9b4330bb
src/webdav/PropertySheets.py
python
DAVProperties.dav__lockdiscovery
(self)
return out
[]
def dav__lockdiscovery(self): security = getSecurityManager() user = security.getUser().getId() vself = self.v_self() out = '\n' if IWriteLock.providedBy(vself): locks = vself.wl_lockValues(killinvalids=1) for lock in locks: creator = lock.getCreator()[-1] if creator == user: fake = 0 else: fake = 1 out = f'{out}\n{lock.asLockDiscoveryProperty("n", fake=fake)}' out = f'{out}\n' return out
[ "def", "dav__lockdiscovery", "(", "self", ")", ":", "security", "=", "getSecurityManager", "(", ")", "user", "=", "security", ".", "getUser", "(", ")", ".", "getId", "(", ")", "vself", "=", "self", ".", "v_self", "(", ")", "out", "=", "'\\n'", "if", ...
https://github.com/zopefoundation/Zope/blob/ea04dd670d1a48d4d5c879d3db38fc2e9b4330bb/src/webdav/PropertySheets.py#L126-L146
Yuliang-Liu/Box_Discretization_Network
5b3a30c97429ef8e5c5e1c4e2476c7d9abdc03e6
maskrcnn_benchmark/data/datasets/word_dataset.py
python
WordDataset.get_img_info
(self, index)
return img_data
[]
def get_img_info(self, index): img_id = self.id_to_img_map[index] img_data = self.coco.imgs[img_id] return img_data
[ "def", "get_img_info", "(", "self", ",", "index", ")", ":", "img_id", "=", "self", ".", "id_to_img_map", "[", "index", "]", "img_data", "=", "self", ".", "coco", ".", "imgs", "[", "img_id", "]", "return", "img_data" ]
https://github.com/Yuliang-Liu/Box_Discretization_Network/blob/5b3a30c97429ef8e5c5e1c4e2476c7d9abdc03e6/maskrcnn_benchmark/data/datasets/word_dataset.py#L105-L108
StackStorm/st2
85ae05b73af422efd3097c9c05351f7f1cc8369e
st2common/st2common/metrics/base.py
python
get_driver
()
return METRICS
Return metrics driver instance
Return metrics driver instance
[ "Return", "metrics", "driver", "instance" ]
def get_driver(): """ Return metrics driver instance """ if not METRICS: return metrics_initialize() return METRICS
[ "def", "get_driver", "(", ")", ":", "if", "not", "METRICS", ":", "return", "metrics_initialize", "(", ")", "return", "METRICS" ]
https://github.com/StackStorm/st2/blob/85ae05b73af422efd3097c9c05351f7f1cc8369e/st2common/st2common/metrics/base.py#L237-L244
tendenci/tendenci
0f2c348cc0e7d41bc56f50b00ce05544b083bf1d
tendenci/apps/chapters/fields.py
python
ChapterMembershipTypeModelChoiceField.label_from_instance
(self, obj)
return obj.get_price_display(renew_mode=self.renew_mode, chapter=self.chapter)
[]
def label_from_instance(self, obj): return obj.get_price_display(renew_mode=self.renew_mode, chapter=self.chapter)
[ "def", "label_from_instance", "(", "self", ",", "obj", ")", ":", "return", "obj", ".", "get_price_display", "(", "renew_mode", "=", "self", ".", "renew_mode", ",", "chapter", "=", "self", ".", "chapter", ")" ]
https://github.com/tendenci/tendenci/blob/0f2c348cc0e7d41bc56f50b00ce05544b083bf1d/tendenci/apps/chapters/fields.py#L9-L11
openstack/nova
b49b7663e1c3073917d5844b81d38db8e86d05c4
nova/api/openstack/compute/flavors.py
python
FlavorsController._get_flavors
(self, req)
return limited_flavors
Helper function that returns a list of flavor dicts.
Helper function that returns a list of flavor dicts.
[ "Helper", "function", "that", "returns", "a", "list", "of", "flavor", "dicts", "." ]
def _get_flavors(self, req): """Helper function that returns a list of flavor dicts.""" filters = {} sort_key = req.params.get('sort_key') or 'flavorid' sort_dir = req.params.get('sort_dir') or 'asc' limit, marker = common.get_limit_and_marker(req) context = req.environ['nova.context'] if context.is_admin: # Only admin has query access to all flavor types filters['is_public'] = self._parse_is_public( req.params.get('is_public', None)) else: filters['is_public'] = True filters['disabled'] = False if 'minRam' in req.params: try: filters['min_memory_mb'] = int(req.params['minRam']) except ValueError: msg = _('Invalid minRam filter [%s]') % req.params['minRam'] raise webob.exc.HTTPBadRequest(explanation=msg) if 'minDisk' in req.params: try: filters['min_root_gb'] = int(req.params['minDisk']) except ValueError: msg = (_('Invalid minDisk filter [%s]') % req.params['minDisk']) raise webob.exc.HTTPBadRequest(explanation=msg) try: limited_flavors = objects.FlavorList.get_all(context, filters=filters, sort_key=sort_key, sort_dir=sort_dir, limit=limit, marker=marker) except exception.MarkerNotFound: msg = _('marker [%s] not found') % marker raise webob.exc.HTTPBadRequest(explanation=msg) return limited_flavors
[ "def", "_get_flavors", "(", "self", ",", "req", ")", ":", "filters", "=", "{", "}", "sort_key", "=", "req", ".", "params", ".", "get", "(", "'sort_key'", ")", "or", "'flavorid'", "sort_dir", "=", "req", ".", "params", ".", "get", "(", "'sort_dir'", "...
https://github.com/openstack/nova/blob/b49b7663e1c3073917d5844b81d38db8e86d05c4/nova/api/openstack/compute/flavors.py#L96-L135
debian-calibre/calibre
020fc81d3936a64b2ac51459ecb796666ab6a051
src/odf/element.py
python
Element.addElement
(self, element, check_grammar=True)
adds an element to an Element Element.addElement(Element)
adds an element to an Element
[ "adds", "an", "element", "to", "an", "Element" ]
def addElement(self, element, check_grammar=True): """ adds an element to an Element Element.addElement(Element) """ if check_grammar and self.allowed_children is not None: if element.qname not in self.allowed_children: raise IllegalChild("<%s> is not allowed in <%s>" % (element.tagName, self.tagName)) self.appendChild(element) self._setOwnerDoc(element) if self.ownerDocument: self.ownerDocument.rebuild_caches(element)
[ "def", "addElement", "(", "self", ",", "element", ",", "check_grammar", "=", "True", ")", ":", "if", "check_grammar", "and", "self", ".", "allowed_children", "is", "not", "None", ":", "if", "element", ".", "qname", "not", "in", "self", ".", "allowed_childr...
https://github.com/debian-calibre/calibre/blob/020fc81d3936a64b2ac51459ecb796666ab6a051/src/odf/element.py#L368-L379
quantumlib/Cirq
89f88b01d69222d3f1ec14d649b7b3a85ed9211f
cirq-core/cirq/work/observable_measurement_data.py
python
BitstringAccumulator.variance
(self, setting: InitObsSetting, *, atol: float = 1e-8)
return var
Compute the variance of the estimators of the given setting. This is the normal variance divided by the number of samples to estimate the certainty of our estimate of the mean. It is the standard error of the mean, squared. This uses `ddof=1` during the call to `np.var` for an unbiased estimator of the variance in a hypothetical infinite population for consistency with `BitstringAccumulator.covariance()` but differs from the default for `np.var`. Args: setting: The initial state and observable. atol: The absolute tolerance for asserting coefficients are real. Raises: ValueError: If there were no measurements.
Compute the variance of the estimators of the given setting.
[ "Compute", "the", "variance", "of", "the", "estimators", "of", "the", "given", "setting", "." ]
def variance(self, setting: InitObsSetting, *, atol: float = 1e-8): """Compute the variance of the estimators of the given setting. This is the normal variance divided by the number of samples to estimate the certainty of our estimate of the mean. It is the standard error of the mean, squared. This uses `ddof=1` during the call to `np.var` for an unbiased estimator of the variance in a hypothetical infinite population for consistency with `BitstringAccumulator.covariance()` but differs from the default for `np.var`. Args: setting: The initial state and observable. atol: The absolute tolerance for asserting coefficients are real. Raises: ValueError: If there were no measurements. """ if len(self.bitstrings) == 0: raise ValueError("No measurements") self._validate_setting(setting, what='variance') mean, var = _stats_from_measurements( bitstrings=self.bitstrings, qubit_to_index=self._qubit_to_index, observable=setting.observable, atol=atol, ) if self._readout_calibration is not None: a = mean if np.isclose(a, 0, atol=atol): return np.inf var_a = var ro_setting = _setting_to_z_observable(setting) b = self._readout_calibration.mean(ro_setting) if np.isclose(b, 0, atol=atol): return np.inf var_b = self._readout_calibration.variance(ro_setting) f = a / b # https://en.wikipedia.org/wiki/Propagation_of_uncertainty#Example_formulae # assume cov(a,b) = 0, otherwise there would be another term. var = f ** 2 * (var_a / (a ** 2) + var_b / (b ** 2)) return var
[ "def", "variance", "(", "self", ",", "setting", ":", "InitObsSetting", ",", "*", ",", "atol", ":", "float", "=", "1e-8", ")", ":", "if", "len", "(", "self", ".", "bitstrings", ")", "==", "0", ":", "raise", "ValueError", "(", "\"No measurements\"", ")",...
https://github.com/quantumlib/Cirq/blob/89f88b01d69222d3f1ec14d649b7b3a85ed9211f/cirq-core/cirq/work/observable_measurement_data.py#L444-L490
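The readout-corrected branch of `variance` applies the standard first-order propagation-of-uncertainty formula for a ratio. Isolated from the class, it is just the following standalone numeric sketch:

```python
def ratio_variance(a, var_a, b, var_b):
    # Var(a/b) ~= f**2 * (Var(a)/a**2 + Var(b)/b**2), assuming cov(a, b) = 0,
    # as in the Cirq record's comment referencing the propagation-of-uncertainty
    # example formulae
    f = a / b
    return f ** 2 * (var_a / a ** 2 + var_b / b ** 2)
```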
bearpaw/pytorch-classification
24f1c456f48c78133088c4eefd182ca9e6199b03
models/cifar/resnext.py
python
ResNeXtBottleneck.__init__
(self, in_channels, out_channels, stride, cardinality, widen_factor)
Constructor Args: in_channels: input channel dimensionality out_channels: output channel dimensionality stride: conv stride. Replaces pooling layer. cardinality: num of convolution groups. widen_factor: factor to reduce the input dimensionality before convolution.
Constructor Args: in_channels: input channel dimensionality out_channels: output channel dimensionality stride: conv stride. Replaces pooling layer. cardinality: num of convolution groups. widen_factor: factor to reduce the input dimensionality before convolution.
[ "Constructor", "Args", ":", "in_channels", ":", "input", "channel", "dimensionality", "out_channels", ":", "output", "channel", "dimensionality", "stride", ":", "conv", "stride", ".", "Replaces", "pooling", "layer", ".", "cardinality", ":", "num", "of", "convoluti...
def __init__(self, in_channels, out_channels, stride, cardinality, widen_factor): """ Constructor Args: in_channels: input channel dimensionality out_channels: output channel dimensionality stride: conv stride. Replaces pooling layer. cardinality: num of convolution groups. widen_factor: factor to reduce the input dimensionality before convolution. """ super(ResNeXtBottleneck, self).__init__() D = cardinality * out_channels // widen_factor self.conv_reduce = nn.Conv2d(in_channels, D, kernel_size=1, stride=1, padding=0, bias=False) self.bn_reduce = nn.BatchNorm2d(D) self.conv_conv = nn.Conv2d(D, D, kernel_size=3, stride=stride, padding=1, groups=cardinality, bias=False) self.bn = nn.BatchNorm2d(D) self.conv_expand = nn.Conv2d(D, out_channels, kernel_size=1, stride=1, padding=0, bias=False) self.bn_expand = nn.BatchNorm2d(out_channels) self.shortcut = nn.Sequential() if in_channels != out_channels: self.shortcut.add_module('shortcut_conv', nn.Conv2d(in_channels, out_channels, kernel_size=1, stride=stride, padding=0, bias=False)) self.shortcut.add_module('shortcut_bn', nn.BatchNorm2d(out_channels))
[ "def", "__init__", "(", "self", ",", "in_channels", ",", "out_channels", ",", "stride", ",", "cardinality", ",", "widen_factor", ")", ":", "super", "(", "ResNeXtBottleneck", ",", "self", ")", ".", "__init__", "(", ")", "D", "=", "cardinality", "*", "out_ch...
https://github.com/bearpaw/pytorch-classification/blob/24f1c456f48c78133088c4eefd182ca9e6199b03/models/cifar/resnext.py#L19-L40
Qiskit/qiskit-terra
b66030e3b9192efdd3eb95cf25c6545fe0a13da4
qiskit/transpiler/passes/synthesis/unitary_synthesis.py
python
UnitarySynthesis.run
(self, dag: DAGCircuit)
return dag
Run the UnitarySynthesis pass on `dag`. Args: dag: input dag. Returns: Output dag with UnitaryGates synthesized to target basis. Raises: TranspilerError: if a 'method' was specified for the class and is not found in the installed plugins list. The list of installed plugins can be queried with :func:`~qiskit.transpiler.passes.synthesis.plugin.unitary_synthesis_plugin_names`
Run the UnitarySynthesis pass on `dag`.
[ "Run", "the", "UnitarySynthesis", "pass", "on", "dag", "." ]
def run(self, dag: DAGCircuit) -> DAGCircuit:
    """Run the UnitarySynthesis pass on `dag`.

    Args:
        dag: input dag.

    Returns:
        Output dag with UnitaryGates synthesized to target basis.

    Raises:
        TranspilerError: if a 'method' was specified for the class and is not
            found in the installed plugins list. The list of installed
            plugins can be queried with
            :func:`~qiskit.transpiler.passes.synthesis.plugin.unitary_synthesis_plugin_names`
    """
    if self.method not in self.plugins.ext_plugins:
        raise TranspilerError("Specified method: %s not found in plugin list" % self.method)
    # Return fast if we have no synth gates (ie user specified an empty
    # list or the synth gates are all in the basis
    if not self._synth_gates:
        return dag
    plugin_method = self.plugins.ext_plugins[self.method].obj
    plugin_kwargs = {"config": self._plugin_config}
    _gate_lengths = _gate_errors = None
    dag_bit_indices = {}
    if self.method == "default":
        # If the method is the default, we only need to evaluate one set of keyword arguments.
        # To simplify later logic, and avoid cases where static analysis might complain that we
        # haven't initialised the "default" handler, we rebind the names so they point to the
        # same object as the chosen method.
        default_method = plugin_method
        default_kwargs = plugin_kwargs
        method_list = [(plugin_method, plugin_kwargs)]
    else:
        # If the method is not the default, we still need to initialise the default plugin's
        # keyword arguments in case we have to fall back on it during the actual run.
        default_method = self.plugins.ext_plugins["default"].obj
        default_kwargs = {}
        method_list = [(plugin_method, plugin_kwargs), (default_method, default_kwargs)]
    for method, kwargs in method_list:
        if method.supports_basis_gates:
            kwargs["basis_gates"] = self._basis_gates
        if method.supports_coupling_map:
            dag_bit_indices = dag_bit_indices or {bit: i for i, bit in enumerate(dag.qubits)}
        if method.supports_natural_direction:
            kwargs["natural_direction"] = self._natural_direction
        if method.supports_pulse_optimize:
            kwargs["pulse_optimize"] = self._pulse_optimize
        if method.supports_gate_lengths:
            _gate_lengths = _gate_lengths or _build_gate_lengths(self._backend_props)
            kwargs["gate_lengths"] = _gate_lengths
        if method.supports_gate_errors:
            _gate_errors = _gate_errors or _build_gate_errors(self._backend_props)
            kwargs["gate_errors"] = _gate_errors
        supported_bases = method.supported_bases
        if supported_bases is not None:
            kwargs["matched_basis"] = _choose_bases(self._basis_gates, supported_bases)
    # Handle approximation degree as a special case for backwards compatibility, it's
    # not part of the plugin interface and only something needed for the default
    # pass.
    default_method._approximation_degree = self._approximation_degree
    if self.method == "default":
        plugin_method._approximation_degree = self._approximation_degree
    for node in dag.named_nodes(*self._synth_gates):
        if self._min_qubits is not None and len(node.qargs) < self._min_qubits:
            continue
        synth_dag = None
        unitary = node.op.to_matrix()
        n_qubits = len(node.qargs)
        if (plugin_method.max_qubits is not None and n_qubits > plugin_method.max_qubits) or (
            plugin_method.min_qubits is not None and n_qubits < plugin_method.min_qubits
        ):
            method, kwargs = default_method, default_kwargs
        else:
            method, kwargs = plugin_method, plugin_kwargs
        if method.supports_coupling_map:
            kwargs["coupling_map"] = (
                self._coupling_map,
                [dag_bit_indices[x] for x in node.qargs],
            )
        synth_dag = method.run(unitary, **kwargs)
        if synth_dag is not None:
            if isinstance(synth_dag, tuple):
                dag.substitute_node_with_dag(node, synth_dag[0], wires=synth_dag[1])
            else:
                dag.substitute_node_with_dag(node, synth_dag)
    return dag
[ "def", "run", "(", "self", ",", "dag", ":", "DAGCircuit", ")", "->", "DAGCircuit", ":", "if", "self", ".", "method", "not", "in", "self", ".", "plugins", ".", "ext_plugins", ":", "raise", "TranspilerError", "(", "\"Specified method: %s not found in plugin list\"...
https://github.com/Qiskit/qiskit-terra/blob/b66030e3b9192efdd3eb95cf25c6545fe0a13da4/qiskit/transpiler/passes/synthesis/unitary_synthesis.py#L192-L283
mylar3/mylar3
fce4771c5b627f8de6868dd4ab6bc53f7b22d303
lib/rarfile/rarfile.py
python
RarExtFile.writable
(self)
return False
Returns False. Writing is not supported.
Returns False.
[ "Returns", "False", "." ]
def writable(self):
    """Returns False.

    Writing is not supported.
    """
    return False
[ "def", "writable", "(", "self", ")", ":", "return", "False" ]
https://github.com/mylar3/mylar3/blob/fce4771c5b627f8de6868dd4ab6bc53f7b22d303/lib/rarfile/rarfile.py#L2296-L2301
replit-archive/empythoned
977ec10ced29a3541a4973dc2b59910805695752
dist/lib/python2.7/shutil.py
python
copystat
(src, dst)
Copy all stat info (mode bits, atime, mtime, flags) from src to dst
Copy all stat info (mode bits, atime, mtime, flags) from src to dst
[ "Copy", "all", "stat", "info", "(", "mode", "bits", "atime", "mtime", "flags", ")", "from", "src", "to", "dst" ]
def copystat(src, dst):
    """Copy all stat info (mode bits, atime, mtime, flags) from src to dst"""
    st = os.stat(src)
    mode = stat.S_IMODE(st.st_mode)
    if hasattr(os, 'utime'):
        os.utime(dst, (st.st_atime, st.st_mtime))
    if hasattr(os, 'chmod'):
        os.chmod(dst, mode)
    if hasattr(os, 'chflags') and hasattr(st, 'st_flags'):
        try:
            os.chflags(dst, st.st_flags)
        except OSError, why:
            if (not hasattr(errno, 'EOPNOTSUPP') or
                why.errno != errno.EOPNOTSUPP):
                raise
[ "def", "copystat", "(", "src", ",", "dst", ")", ":", "st", "=", "os", ".", "stat", "(", "src", ")", "mode", "=", "stat", ".", "S_IMODE", "(", "st", ".", "st_mode", ")", "if", "hasattr", "(", "os", ",", "'utime'", ")", ":", "os", ".", "utime", ...
https://github.com/replit-archive/empythoned/blob/977ec10ced29a3541a4973dc2b59910805695752/dist/lib/python2.7/shutil.py#L92-L106
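The `copystat` record above is a vendored Python 2.7 copy of a standard-library helper, so the same behavior can be checked directly against the modern stdlib. A minimal sketch (Python 3 spelling of the same idea):

```python
import os
import shutil
import stat
import tempfile

# Create a source and destination file, give the source a distinctive mode,
# then copy the stat info (mode bits, atime, mtime, flags) across.
src = tempfile.NamedTemporaryFile(delete=False)
dst = tempfile.NamedTemporaryFile(delete=False)
src.close()
dst.close()

os.chmod(src.name, 0o640)            # mark the source with a known mode
shutil.copystat(src.name, dst.name)  # mirrors the vendored function above

src_mode = stat.S_IMODE(os.stat(src.name).st_mode)
dst_mode = stat.S_IMODE(os.stat(dst.name).st_mode)
print(src_mode == dst_mode)  # → True

os.unlink(src.name)
os.unlink(dst.name)
```

Note that `copystat` deliberately copies permission bits and timestamps but not ownership, which is why the implementation only calls `os.chmod`, `os.utime`, and (where available) `os.chflags`.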
openstack/kuryr-kubernetes
513ada72685ef02cd9f3aca418ac1b2e0dc1b8ba
kuryr_kubernetes/controller/drivers/network_policy.py
python
NetworkPolicyDriver._create_svc_egress_sg_rule
(self, policy_namespace, sg_rule_body_list, resource=None, port=None, protocol=None)
[]
def _create_svc_egress_sg_rule(self, policy_namespace, sg_rule_body_list,
                               resource=None, port=None, protocol=None):
    # FIXME(dulek): We could probably filter by namespace here for pods
    #               and namespace resources?
    services = driver_utils.get_services()
    if not resource:
        svc_subnet = utils.get_subnet_cidr(
            CONF.neutron_defaults.service_subnet)
        rule = driver_utils.create_security_group_rule_body(
            'egress', port, protocol=protocol, cidr=svc_subnet)
        if rule not in sg_rule_body_list:
            sg_rule_body_list.append(rule)
        return
    for service in services.get('items'):
        if service['metadata'].get('deletionTimestamp'):
            # Ignore services being deleted
            continue
        cluster_ip = service['spec'].get('clusterIP')
        if not cluster_ip or cluster_ip == 'None':
            # Headless services has 'None' as clusterIP, ignore.
            continue
        svc_name = service['metadata']['name']
        svc_namespace = service['metadata']['namespace']
        if self._is_pod(resource):
            pod_labels = resource['metadata'].get('labels')
            svc_selector = service['spec'].get('selector')
            if not svc_selector:
                targets = driver_utils.get_endpoints_targets(
                    svc_name, svc_namespace)
                pod_ip = resource['status'].get('podIP')
                if pod_ip and pod_ip not in targets:
                    continue
            elif pod_labels:
                if not driver_utils.match_labels(svc_selector, pod_labels):
                    continue
        elif resource.get('cidr'):
            # NOTE(maysams) Accounts for traffic to pods under
            # a service matching an IPBlock rule.
            svc_selector = service['spec'].get('selector')
            if not svc_selector:
                # Retrieving targets of services on any Namespace
                targets = driver_utils.get_endpoints_targets(
                    svc_name, svc_namespace)
                if (not targets or
                        not self._targets_in_ip_block(targets, resource)):
                    continue
            else:
                if svc_namespace != policy_namespace:
                    continue
                pods = driver_utils.get_pods({'selector': svc_selector},
                                             svc_namespace).get('items')
                if not self._pods_in_ip_block(pods, resource):
                    continue
        else:
            ns_name = service['metadata']['namespace']
            if ns_name != resource['metadata']['name']:
                continue
        rule = driver_utils.create_security_group_rule_body(
            'egress', port, protocol=protocol, cidr=cluster_ip)
        if rule not in sg_rule_body_list:
            sg_rule_body_list.append(rule)
[ "def", "_create_svc_egress_sg_rule", "(", "self", ",", "policy_namespace", ",", "sg_rule_body_list", ",", "resource", "=", "None", ",", "port", "=", "None", ",", "protocol", "=", "None", ")", ":", "# FIXME(dulek): We could probably filter by namespace here for pods", "#...
https://github.com/openstack/kuryr-kubernetes/blob/513ada72685ef02cd9f3aca418ac1b2e0dc1b8ba/kuryr_kubernetes/controller/drivers/network_policy.py#L570-L633
TencentCloud/tencentcloud-sdk-python
3677fd1cdc8c5fd626ce001c13fd3b59d1f279d2
tencentcloud/cdb/v20170320/models.py
python
SlaveInfo.__init__
(self)
r""" :param First: 第一备机信息 :type First: :class:`tencentcloud.cdb.v20170320.models.SlaveInstanceInfo` :param Second: 第二备机信息 注意:此字段可能返回 null,表示取不到有效值。 :type Second: :class:`tencentcloud.cdb.v20170320.models.SlaveInstanceInfo`
r""" :param First: 第一备机信息 :type First: :class:`tencentcloud.cdb.v20170320.models.SlaveInstanceInfo` :param Second: 第二备机信息 注意:此字段可能返回 null,表示取不到有效值。 :type Second: :class:`tencentcloud.cdb.v20170320.models.SlaveInstanceInfo`
[ "r", ":", "param", "First", ":", "第一备机信息", ":", "type", "First", ":", ":", "class", ":", "tencentcloud", ".", "cdb", ".", "v20170320", ".", "models", ".", "SlaveInstanceInfo", ":", "param", "Second", ":", "第二备机信息", "注意:此字段可能返回", "null,表示取不到有效值。", ":", "typ...
def __init__(self):
    r"""
    :param First: Information of the first standby instance
    :type First: :class:`tencentcloud.cdb.v20170320.models.SlaveInstanceInfo`
    :param Second: Information of the second standby instance
    Note: this field may return null, indicating that no valid value can be obtained.
    :type Second: :class:`tencentcloud.cdb.v20170320.models.SlaveInstanceInfo`
    """
    self.First = None
    self.Second = None
[ "def", "__init__", "(", "self", ")", ":", "self", ".", "First", "=", "None", "self", ".", "Second", "=", "None" ]
https://github.com/TencentCloud/tencentcloud-sdk-python/blob/3677fd1cdc8c5fd626ce001c13fd3b59d1f279d2/tencentcloud/cdb/v20170320/models.py#L9428-L9437
zhl2008/awd-platform
0416b31abea29743387b10b3914581fbe8e7da5e
web_flaskbb/Python-2.7.9/Parser/asdl.py
python
Check.__init__
(self)
[]
def __init__(self):
    super(Check, self).__init__(skip=True)
    self.cons = {}
    self.errors = 0
    self.types = {}
[ "def", "__init__", "(", "self", ")", ":", "super", "(", "Check", ",", "self", ")", ".", "__init__", "(", "skip", "=", "True", ")", "self", ".", "cons", "=", "{", "}", "self", ".", "errors", "=", "0", "self", ".", "types", "=", "{", "}" ]
https://github.com/zhl2008/awd-platform/blob/0416b31abea29743387b10b3914581fbe8e7da5e/web_flaskbb/Python-2.7.9/Parser/asdl.py#L331-L335
PaddlePaddle/ERNIE
15eddb022ce1beb281777e9ab8807a1bdfa7a76e
propeller/paddle/train/hooks.py
python
CheckpointSaverHook.after_run
(self, res_list, state)
doc
doc
[ "doc" ]
def after_run(self, res_list, state):
    """doc"""
    if state.gstep % self.per_step == 0 and \
            state.step > self.skip_step:
        self.saver.save(state)
[ "def", "after_run", "(", "self", ",", "res_list", ",", "state", ")", ":", "if", "state", ".", "gstep", "%", "self", ".", "per_step", "==", "0", "and", "state", ".", "step", ">", "self", ".", "skip_step", ":", "self", ".", "saver", ".", "save", "(",...
https://github.com/PaddlePaddle/ERNIE/blob/15eddb022ce1beb281777e9ab8807a1bdfa7a76e/propeller/paddle/train/hooks.py#L336-L340
elastic/apm-agent-python
67ce41e492f91d0aaf9fdb599e11d42e5f0ea198
elasticapm/base.py
python
Client.get_user_agent
(self)
Compiles the user agent, which will be added as a header to all requests to the APM Server
Compiles the user agent, which will be added as a header to all requests to the APM Server
[ "Compiles", "the", "user", "agent", "which", "will", "be", "added", "as", "a", "header", "to", "all", "requests", "to", "the", "APM", "Server" ]
def get_user_agent(self) -> str:
    """
    Compiles the user agent, which will be added as a header to all requests to the APM Server
    """
    if self.config.service_version:
        service_version = re.sub(r"[^\t _\x21-\x27\x2a-\x5b\x5d-\x7e\x80-\xff]", "_", self.config.service_version)
        return "apm-agent-python/{} ({} {})".format(elasticapm.VERSION, self.config.service_name, service_version)
    else:
        return "apm-agent-python/{} ({})".format(elasticapm.VERSION, self.config.service_name)
[ "def", "get_user_agent", "(", "self", ")", "->", "str", ":", "if", "self", ".", "config", ".", "service_version", ":", "service_version", "=", "re", ".", "sub", "(", "r\"[^\\t _\\x21-\\x27\\x2a-\\x5b\\x5d-\\x7e\\x80-\\xff]\"", ",", "\"_\"", ",", "self", ".", "co...
https://github.com/elastic/apm-agent-python/blob/67ce41e492f91d0aaf9fdb599e11d42e5f0ea198/elasticapm/base.py#L423-L432
microsoft/ptvsd
99c8513921021d2cc7cd82e132b65c644c256768
src/ptvsd/_vendored/pydevd/_pydevd_bundle/pydevd_api.py
python
PyDevdAPI.request_load_source
(self, py_db, seq, filename)
:param str filename: Note: must be sent as it was received in the protocol. It may be translated in this function.
:param str filename: Note: must be sent as it was received in the protocol. It may be translated in this function.
[ ":", "param", "str", "filename", ":", "Note", ":", "must", "be", "sent", "as", "it", "was", "received", "in", "the", "protocol", ".", "It", "may", "be", "translated", "in", "this", "function", "." ]
def request_load_source(self, py_db, seq, filename):
    '''
    :param str filename:
        Note: must be sent as it was received in the protocol. It may be
        translated in this function.
    '''
    try:
        filename = self.filename_to_server(filename)
        assert filename.__class__ == str  # i.e.: bytes on py2 and str on py3
        with open(filename, 'r') as stream:
            source = stream.read()
        cmd = py_db.cmd_factory.make_load_source_message(seq, source)
    except:
        cmd = py_db.cmd_factory.make_error_message(seq, get_exception_traceback_str())
    py_db.writer.add_command(cmd)
[ "def", "request_load_source", "(", "self", ",", "py_db", ",", "seq", ",", "filename", ")", ":", "try", ":", "filename", "=", "self", ".", "filename_to_server", "(", "filename", ")", "assert", "filename", ".", "__class__", "==", "str", "# i.e.: bytes on py2 and...
https://github.com/microsoft/ptvsd/blob/99c8513921021d2cc7cd82e132b65c644c256768/src/ptvsd/_vendored/pydevd/_pydevd_bundle/pydevd_api.py#L616-L632
joke2k/faker
0ebe46fc9b9793fe315cf0fce430258ce74df6f8
faker/providers/ssn/en_GB/__init__.py
python
Provider.vat_id
(self)
return self.bothify(self.random_element(self.vat_id_formats))
http://ec.europa.eu/taxation_customs/vies/faq.html#item_11 :return: A random British VAT ID
http://ec.europa.eu/taxation_customs/vies/faq.html#item_11 :return: A random British VAT ID
[ "http", ":", "//", "ec", ".", "europa", ".", "eu", "/", "taxation_customs", "/", "vies", "/", "faq", ".", "html#item_11", ":", "return", ":", "A", "random", "British", "VAT", "ID" ]
def vat_id(self) -> str:
    """
    http://ec.europa.eu/taxation_customs/vies/faq.html#item_11
    :return: A random British VAT ID
    """
    return self.bothify(self.random_element(self.vat_id_formats))
[ "def", "vat_id", "(", "self", ")", "->", "str", ":", "return", "self", ".", "bothify", "(", "self", ".", "random_element", "(", "self", ".", "vat_id_formats", ")", ")" ]
https://github.com/joke2k/faker/blob/0ebe46fc9b9793fe315cf0fce430258ce74df6f8/faker/providers/ssn/en_GB/__init__.py#L34-L39
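`bothify` in the record above comes from Faker's `BaseProvider`: it replaces each `#` in a pattern with a random digit and each `?` with a random letter. A rough, self-contained sketch of that idea (not Faker's actual implementation), applied to one plausible GB VAT pattern; the pattern string and helper name here are illustrative assumptions:

```python
import random
import string

def bothify(text: str, rng: random.Random) -> str:
    """Replace '#' with a random digit and '?' with a random ASCII letter."""
    out = []
    for ch in text:
        if ch == "#":
            out.append(rng.choice(string.digits))
        elif ch == "?":
            out.append(rng.choice(string.ascii_letters))
        else:
            out.append(ch)
    return "".join(out)

rng = random.Random(0)      # seeded for reproducibility
vat = bothify("GB### #### ##", rng)
print(vat)
```

The real provider picks a random element from `vat_id_formats` first, then runs it through `bothify` exactly like this.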
RussBaz/enforce
caf1dc3984c595a120882337fa6c2a7d23d90201
enforce/enforcers.py
python
Enforcer.reset
(self)
Clears validator internal state
Clears validator internal state
[ "Clears", "validator", "internal", "state" ]
def reset(self):
    """
    Clears validator internal state
    """
    self.validator.reset()
[ "def", "reset", "(", "self", ")", ":", "self", ".", "validator", ".", "reset", "(", ")" ]
https://github.com/RussBaz/enforce/blob/caf1dc3984c595a120882337fa6c2a7d23d90201/enforce/enforcers.py#L104-L108
holzschu/Carnets
44effb10ddfc6aa5c8b0687582a724ba82c6b547
Library/lib/python3.7/site-packages/chardet/chardistribution.py
python
CharDistributionAnalysis.reset
(self)
reset analyser, clear any state
reset analyser, clear any state
[ "reset", "analyser", "clear", "any", "state" ]
def reset(self):
    """reset analyser, clear any state"""
    # If this flag is set to True, detection is done and conclusion has
    # been made
    self._done = False
    self._total_chars = 0  # Total characters encountered
    # The number of characters whose frequency order is less than 512
    self._freq_chars = 0
[ "def", "reset", "(", "self", ")", ":", "# If this flag is set to True, detection is done and conclusion has", "# been made", "self", ".", "_done", "=", "False", "self", ".", "_total_chars", "=", "0", "# Total characters encountered", "# The number of characters whose frequency ...
https://github.com/holzschu/Carnets/blob/44effb10ddfc6aa5c8b0687582a724ba82c6b547/Library/lib/python3.7/site-packages/chardet/chardistribution.py#L61-L68
jacexh/pyautoit
6bc67acbbe59d5c8620836c90d38b833c6ffaed2
autoit/control.py
python
control_list_view_by_handle
(hwnd, h_ctrl, command, **kwargs)
return result.value.rstrip()
:param hwnd: :param h_ctrl: :param command: :param kwargs: :return:
[]
def control_list_view_by_handle(hwnd, h_ctrl, command, **kwargs):
    """
    :param hwnd:
    :param h_ctrl:
    :param command:
    :param kwargs:
    :return:
    """
    extra1 = kwargs.get("extra1", "")
    extra2 = kwargs.get("extra2", "")
    buf_size = kwargs.get("buf_size", 256)
    result = ctypes.create_unicode_buffer(buf_size)
    AUTO_IT.AU3_ControlListViewByHandle(
        HWND(hwnd), HWND(h_ctrl), LPCWSTR(command), LPCWSTR(extra1),
        LPCWSTR(extra2), result, INT(buf_size)
    )
    return result.value.rstrip()
[ "def", "control_list_view_by_handle", "(", "hwnd", ",", "h_ctrl", ",", "command", ",", "*", "*", "kwargs", ")", ":", "extra1", "=", "kwargs", ".", "get", "(", "\"extra1\"", ",", "\"\"", ")", "extra2", "=", "kwargs", ".", "get", "(", "\"extra2\"", ",", ...
https://github.com/jacexh/pyautoit/blob/6bc67acbbe59d5c8620836c90d38b833c6ffaed2/autoit/control.py#L120-L138
jgagneastro/coffeegrindsize
22661ebd21831dba4cf32bfc6ba59fe3d49f879c
App/dist/coffeegrindsize.app/Contents/Resources/lib/python3.7/matplotlib/colors.py
python
get_named_colors_mapping
()
return _colors_full_map
Return the global mapping of names to named colors.
Return the global mapping of names to named colors.
[ "Return", "the", "global", "mapping", "of", "names", "to", "named", "colors", "." ]
def get_named_colors_mapping():
    """Return the global mapping of names to named colors."""
    return _colors_full_map
[ "def", "get_named_colors_mapping", "(", ")", ":", "return", "_colors_full_map" ]
https://github.com/jgagneastro/coffeegrindsize/blob/22661ebd21831dba4cf32bfc6ba59fe3d49f879c/App/dist/coffeegrindsize.app/Contents/Resources/lib/python3.7/matplotlib/colors.py#L101-L103
dimagi/commcare-hq
d67ff1d3b4c51fa050c19e60c3253a79d3452a39
corehq/apps/userreports/pillow.py
python
UcrTableManager.bootstrap_if_needed
(self)
Bootstrap the manager with data sources or else check for updated data sources
Bootstrap the manager with data sources or else check for updated data sources
[ "Bootstrap", "the", "manager", "with", "data", "sources", "or", "else", "check", "for", "updated", "data", "sources" ]
def bootstrap_if_needed(self):
    """Bootstrap the manager with data sources or else check for updated data sources"""
    if self.needs_bootstrap():
        self.bootstrap()
    else:
        self._update_modified_data_sources()
[ "def", "bootstrap_if_needed", "(", "self", ")", ":", "if", "self", ".", "needs_bootstrap", "(", ")", ":", "self", ".", "bootstrap", "(", ")", "else", ":", "self", ".", "_update_modified_data_sources", "(", ")" ]
https://github.com/dimagi/commcare-hq/blob/d67ff1d3b4c51fa050c19e60c3253a79d3452a39/corehq/apps/userreports/pillow.py#L136-L141
robinhood/thorn
486a53ebcf6373ff306a8d17dc016d78e880fcd6
thorn/events.py
python
ModelEvent._connect_model_signal
(self, model)
[]
def _connect_model_signal(self, model):
    # type: (Any) -> None
    if self.signal_dispatcher:
        self.signal_dispatcher.connect(sender=model)
[ "def", "_connect_model_signal", "(", "self", ",", "model", ")", ":", "# type: (Any) -> None", "if", "self", ".", "signal_dispatcher", ":", "self", ".", "signal_dispatcher", ".", "connect", "(", "sender", "=", "model", ")" ]
https://github.com/robinhood/thorn/blob/486a53ebcf6373ff306a8d17dc016d78e880fcd6/thorn/events.py#L390-L393
uuvsimulator/uuv_simulator
bfb40cb153684a0703173117b6bbf4258e8e71c5
tools/cpplint.py
python
CheckForUnicodeReplacementCharacters
(filename, lines, error)
Logs an error for each line containing Unicode replacement characters. These indicate that either the file contained invalid UTF-8 (likely) or Unicode replacement characters (which it shouldn't). Note that it's possible for this to throw off line numbering if the invalid UTF-8 occurred adjacent to a newline. Args: filename: The name of the current file. lines: An array of strings, each representing a line of the file. error: The function to call with any errors found.
Logs an error for each line containing Unicode replacement characters.
[ "Logs", "an", "error", "for", "each", "line", "containing", "Unicode", "replacement", "characters", "." ]
def CheckForUnicodeReplacementCharacters(filename, lines, error):
    """Logs an error for each line containing Unicode replacement characters.

    These indicate that either the file contained invalid UTF-8 (likely)
    or Unicode replacement characters (which it shouldn't).  Note that
    it's possible for this to throw off line numbering if the invalid
    UTF-8 occurred adjacent to a newline.

    Args:
      filename: The name of the current file.
      lines: An array of strings, each representing a line of the file.
      error: The function to call with any errors found.
    """
    for linenum, line in enumerate(lines):
        if u'\ufffd' in line:
            error(filename, linenum, 'readability/utf8', 5,
                  'Line contains invalid UTF-8 (or Unicode replacement character).')
[ "def", "CheckForUnicodeReplacementCharacters", "(", "filename", ",", "lines", ",", "error", ")", ":", "for", "linenum", ",", "line", "in", "enumerate", "(", "lines", ")", ":", "if", "u'\\ufffd'", "in", "line", ":", "error", "(", "filename", ",", "linenum", ...
https://github.com/uuvsimulator/uuv_simulator/blob/bfb40cb153684a0703173117b6bbf4258e8e71c5/tools/cpplint.py#L1100-L1116
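The check in the record above is easy to exercise in isolation. A small sketch with a stand-in re-implementation and a collecting `error` callback (the snake_case name and `demo.cc` filename are illustrative, not from cpplint):

```python
# Stand-in re-implementation of cpplint's replacement-character check.
def check_for_unicode_replacement_characters(filename, lines, error):
    for linenum, line in enumerate(lines):
        if u'\ufffd' in line:
            error(filename, linenum, 'readability/utf8', 5,
                  'Line contains invalid UTF-8 (or Unicode replacement character).')

reports = []
def collect(filename, linenum, category, confidence, message):
    # Record what cpplint would have reported.
    reports.append((filename, linenum, category))

lines = ["int x = 0;", "bad \ufffd byte", "return x;"]
check_for_unicode_replacement_characters("demo.cc", lines, collect)
print(reports)  # → [('demo.cc', 1, 'readability/utf8')]
```

Note the reported line number is the zero-based `enumerate` index, matching the original function's behavior.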
ucsb-seclab/karonte
427ac313e596f723e40768b95d13bd7a9fc92fd8
tool/bbf/border_binaries_finder.py
python
BorderBinariesFinder.run
(self, pickle_file=None, bins=None)
return self._border_binaries
Run this module's algorithm :param pickle_file: path to pickle file :param bins: binaries to consider :return: the list of border binaries
Run this module's algorithm :param pickle_file: path to pickle file :param bins: binaries to consider :return: the list of border binaries
[ "Run", "the", "this", "module", "algorithm", ":", "param", "pickle_file", ":", "path", "to", "pickle", "file", ":", "param", "bins", ":", "binaries", "to", "consider", ":", "return", ":", "the", "list", "of", "border", "binaries" ]
def run(self, pickle_file=None, bins=None):
    """
    Run this module's algorithm

    :param pickle_file: path to pickle file
    :param bins: binaries to consider
    :return: the list of border binaries
    """
    self._start_time = time.time()
    if pickle_file and os.path.isfile(pickle_file):
        with open(pickle_file, 'rb') as fp:
            self._candidates, self._stats, self._multiplier = pickle.load(fp)
    else:
        if not pickle_file:
            pickle_dir = DEFAULT_PICKLE_DIR
            rel_pickle_name = self._fw_path.replace('/', '_').replace('.', '')
            pickle_file = pickle_dir + '/' + rel_pickle_name + '.pk'
        self._collect_stats(bins)
        self._apply_parsing_score()
    self._border_binaries = BorderBinariesFinder._get_first_cluster(self._candidates)
    self._pickle_it(pickle_file)
    self._end_time = time.time()
    log.info(f"The discovered border binaries are: {self._border_binaries}")
    return self._border_binaries
[ "def", "run", "(", "self", ",", "pickle_file", "=", "None", ",", "bins", "=", "None", ")", ":", "self", ".", "_start_time", "=", "time", ".", "time", "(", ")", "if", "pickle_file", "and", "os", ".", "path", ".", "isfile", "(", "pickle_file", ")", "...
https://github.com/ucsb-seclab/karonte/blob/427ac313e596f723e40768b95d13bd7a9fc92fd8/tool/bbf/border_binaries_finder.py#L569-L594
craffel/mir_eval
576aad4e0b5931e7c697c078a1153c99b885c64f
mir_eval/transcription.py
python
validate
(ref_intervals, ref_pitches, est_intervals, est_pitches)
Checks that the input annotations to a metric look like time intervals and a pitch list, and throws helpful errors if not. Parameters ---------- ref_intervals : np.ndarray, shape=(n,2) Array of reference notes time intervals (onset and offset times) ref_pitches : np.ndarray, shape=(n,) Array of reference pitch values in Hertz est_intervals : np.ndarray, shape=(m,2) Array of estimated notes time intervals (onset and offset times) est_pitches : np.ndarray, shape=(m,) Array of estimated pitch values in Hertz
Checks that the input annotations to a metric look like time intervals and a pitch list, and throws helpful errors if not.
[ "Checks", "that", "the", "input", "annotations", "to", "a", "metric", "look", "like", "time", "intervals", "and", "a", "pitch", "list", "and", "throws", "helpful", "errors", "if", "not", "." ]
def validate(ref_intervals, ref_pitches, est_intervals, est_pitches):
    """Checks that the input annotations to a metric look like time intervals
    and a pitch list, and throws helpful errors if not.

    Parameters
    ----------
    ref_intervals : np.ndarray, shape=(n,2)
        Array of reference notes time intervals (onset and offset times)
    ref_pitches : np.ndarray, shape=(n,)
        Array of reference pitch values in Hertz
    est_intervals : np.ndarray, shape=(m,2)
        Array of estimated notes time intervals (onset and offset times)
    est_pitches : np.ndarray, shape=(m,)
        Array of estimated pitch values in Hertz
    """
    # Validate intervals
    validate_intervals(ref_intervals, est_intervals)

    # Make sure intervals and pitches match in length
    if not ref_intervals.shape[0] == ref_pitches.shape[0]:
        raise ValueError('Reference intervals and pitches have different '
                         'lengths.')
    if not est_intervals.shape[0] == est_pitches.shape[0]:
        raise ValueError('Estimated intervals and pitches have different '
                         'lengths.')

    # Make sure all pitch values are positive
    if ref_pitches.size > 0 and np.min(ref_pitches) <= 0:
        raise ValueError("Reference contains at least one non-positive pitch "
                         "value")
    if est_pitches.size > 0 and np.min(est_pitches) <= 0:
        raise ValueError("Estimate contains at least one non-positive pitch "
                         "value")
[ "def", "validate", "(", "ref_intervals", ",", "ref_pitches", ",", "est_intervals", ",", "est_pitches", ")", ":", "# Validate intervals", "validate_intervals", "(", "ref_intervals", ",", "est_intervals", ")", "# Make sure intervals and pitches match in length", "if", "not", ...
https://github.com/craffel/mir_eval/blob/576aad4e0b5931e7c697c078a1153c99b885c64f/mir_eval/transcription.py#L117-L149
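The length and positivity checks at the end of `validate` can be reproduced without `mir_eval` itself. A sketch with NumPy, assuming only the array shapes documented in the record above (the helper name is illustrative):

```python
import numpy as np

def check_note_annotations(intervals, pitches, label):
    """Mirror the length/positivity checks from mir_eval's validate()."""
    if intervals.shape[0] != pitches.shape[0]:
        raise ValueError('%s intervals and pitches have different lengths.' % label)
    if pitches.size > 0 and np.min(pitches) <= 0:
        raise ValueError('%s contains at least one non-positive pitch value' % label)

# Two notes: intervals are (onset, offset) pairs, pitches are in Hertz.
ref_intervals = np.array([[0.0, 0.5], [0.5, 1.0]])
ref_pitches = np.array([440.0, 220.0])
check_note_annotations(ref_intervals, ref_pitches, 'Reference')  # passes silently

# A mismatched pitch list should be rejected.
try:
    check_note_annotations(ref_intervals, np.array([440.0]), 'Reference')
except ValueError as exc:
    mismatch_error = str(exc)
print(mismatch_error)
```

Validating early like this turns shape mismatches into clear error messages instead of silent misalignment between notes and pitches inside a metric.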
taowen/es-monitor
c4deceb4964857f495d13bfaf2d92f36734c9e1c
es_sql/sqlparse/sql_select.py
python
SqlSelect.on_GROUP_BY
(self, tokens, idx)
[]
def on_GROUP_BY(self, tokens, idx):
    while idx < len(tokens):
        token = tokens[idx]
        idx += 1
        if token.is_whitespace():
            continue
        self.group_by = OrderedDict()
        if isinstance(token, stypes.IdentifierList):
            ids = list(token.get_identifiers())
        else:
            ids = [token]
        for id in ids:
            if ttypes.Keyword == id.ttype:
                raise Exception('%s is keyword' % id.value)
            elif id.is_field():
                group_by_as = id.as_field_name()
                if group_by_as in self.projections:
                    self.group_by[group_by_as] = self.projections.pop(group_by_as)
                else:
                    self.group_by[group_by_as] = id
            elif isinstance(id, stypes.Expression):
                self.group_by[id.value] = id
            elif isinstance(id, stypes.Identifier):
                if isinstance(id.tokens[0], stypes.Parenthesis):
                    striped = id.tokens[0].strip_parenthesis()
                    if len(striped) > 1:
                        raise Exception('unexpected: %s' % id.tokens[0])
                    self.group_by[id.get_name()] = striped[0]
                else:
                    self.group_by[id.get_name()] = id.tokens[0]
            else:
                raise Exception('unexpected: %s' % repr(id))
        return idx
[ "def", "on_GROUP_BY", "(", "self", ",", "tokens", ",", "idx", ")", ":", "while", "idx", "<", "len", "(", "tokens", ")", ":", "token", "=", "tokens", "[", "idx", "]", "idx", "+=", "1", "if", "token", ".", "is_whitespace", "(", ")", ":", "continue", ...
https://github.com/taowen/es-monitor/blob/c4deceb4964857f495d13bfaf2d92f36734c9e1c/es_sql/sqlparse/sql_select.py#L187-L219
krintoxi/NoobSec-Toolkit
38738541cbc03cedb9a3b3ed13b629f781ad64f6
NoobSecToolkit - MAC OSX/tools/inject/plugins/dbms/hsqldb/connector.py
python
Connector.fetchall
(self)
[]
def fetchall(self):
    try:
        return self.cursor.fetchall()
    except Exception, msg:
        logger.log(logging.WARN if conf.dbmsHandler else logging.DEBUG,
                   "(remote) %s" % msg[1])
        return None
[ "def", "fetchall", "(", "self", ")", ":", "try", ":", "return", "self", ".", "cursor", ".", "fetchall", "(", ")", "except", "Exception", ",", "msg", ":", "logger", ".", "log", "(", "logging", ".", "WARN", "if", "conf", ".", "dbmsHandler", "else", "lo...
https://github.com/krintoxi/NoobSec-Toolkit/blob/38738541cbc03cedb9a3b3ed13b629f781ad64f6/NoobSecToolkit - MAC OSX/tools/inject/plugins/dbms/hsqldb/connector.py#L60-L65
openstack/openstacksdk
58384268487fa854f21c470b101641ab382c9897
openstack/compute/v2/_proxy.py
python
Proxy.get_limits
(self)
return self._get(limits.Limits)
Retrieve limits that are applied to the project's account :returns: A Limits object, including both :class:`~openstack.compute.v2.limits.AbsoluteLimits` and :class:`~openstack.compute.v2.limits.RateLimits` :rtype: :class:`~openstack.compute.v2.limits.Limits`
Retrieve limits that are applied to the project's account
[ "Retrieve", "limits", "that", "are", "applied", "to", "the", "project", "s", "account" ]
def get_limits(self):
    """Retrieve limits that are applied to the project's account

    :returns: A Limits object, including both
        :class:`~openstack.compute.v2.limits.AbsoluteLimits` and
        :class:`~openstack.compute.v2.limits.RateLimits`
    :rtype: :class:`~openstack.compute.v2.limits.Limits`
    """
    return self._get(limits.Limits)
[ "def", "get_limits", "(", "self", ")", ":", "return", "self", ".", "_get", "(", "limits", ".", "Limits", ")" ]
https://github.com/openstack/openstacksdk/blob/58384268487fa854f21c470b101641ab382c9897/openstack/compute/v2/_proxy.py#L591-L599
theislab/anndata
664e32b0aa6625fe593370d37174384c05abfd4e
anndata/_core/aligned_mapping.py
python
AlignedViewMixin.__setitem__
(self, key: str, value: V)
[]
def __setitem__(self, key: str, value: V):
    value = self._validate_value(value, key)  # Validate before mutating
    adata = self.parent.copy()
    new_mapping = getattr(adata, self.attrname)
    new_mapping[key] = value
    self.parent._init_as_actual(adata)
[ "def", "__setitem__", "(", "self", ",", "key", ":", "str", ",", "value", ":", "V", ")", ":", "value", "=", "self", ".", "_validate_value", "(", "value", ",", "key", ")", "# Validate before mutating", "adata", "=", "self", ".", "parent", ".", "copy", "(...
https://github.com/theislab/anndata/blob/664e32b0aa6625fe593370d37174384c05abfd4e/anndata/_core/aligned_mapping.py#L117-L122
zhl2008/awd-platform
0416b31abea29743387b10b3914581fbe8e7da5e
web_flaskbb/Python-2.7.9/Lib/xml/sax/xmlreader.py
python
XMLReader.parse
(self, source)
Parse an XML document from a system identifier or an InputSource.
Parse an XML document from a system identifier or an InputSource.
[ "Parse", "an", "XML", "document", "from", "a", "system", "identifier", "or", "an", "InputSource", "." ]
def parse(self, source):
    "Parse an XML document from a system identifier or an InputSource."
    raise NotImplementedError("This method must be implemented!")
[ "def", "parse", "(", "self", ",", "source", ")", ":", "raise", "NotImplementedError", "(", "\"This method must be implemented!\"", ")" ]
https://github.com/zhl2008/awd-platform/blob/0416b31abea29743387b10b3914581fbe8e7da5e/web_flaskbb/Python-2.7.9/Lib/xml/sax/xmlreader.py#L30-L32
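The abstract `XMLReader.parse` hook in the record above is what concrete SAX readers override. A minimal sketch of the call shape, using the stdlib's default expat-based reader (the `TitleHandler` class and the sample document are illustrative, not from the original source):

```python
import io
import xml.sax


class TitleHandler(xml.sax.ContentHandler):
    # Collect character data found inside <title> elements.
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.titles = []

    def startElement(self, name, attrs):
        if name == "title":
            self.in_title = True
            self.titles.append("")

    def endElement(self, name):
        if name == "title":
            self.in_title = False

    def characters(self, content):
        if self.in_title:
            self.titles[-1] += content


handler = TitleHandler()
# xml.sax.parse accepts a system identifier or a file-like object,
# matching the XMLReader.parse contract shown in the record.
xml.sax.parse(io.StringIO("<doc><title>hello</title></doc>"), handler)
print(handler.titles)  # ['hello']
```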
x1ah/Daily_scripts
fa6ad8a4f820876d98b919012e74017e9363badd
GitHubThon/github_marathon.py
python
CreatCommit.write_json
(self, pre_json)
return new_json
[]
def write_json(self, pre_json):
    times = int(pre_json['commit_times']) + 1
    start_from = pre_json['first_robot_commit']
    start_from = ctime() if not start_from else start_from
    pre_json['current_time'] = ctime()
    pre_json['commit_times'] = times
    pre_json['first_robot_commit'] = start_from
    with open('marathon.json', 'w') as new:
        new_json = dumps(pre_json, indent=4)
        new.write(new_json)
    return new_json
[ "def", "write_json", "(", "self", ",", "pre_json", ")", ":", "times", "=", "int", "(", "pre_json", "[", "'commit_times'", "]", ")", "+", "1", "start_from", "=", "pre_json", "[", "'first_robot_commit'", "]", "start_from", "=", "ctime", "(", ")", "if", "no...
https://github.com/x1ah/Daily_scripts/blob/fa6ad8a4f820876d98b919012e74017e9363badd/GitHubThon/github_marathon.py#L21-L31
huggingface/transformers
623b4f7c63f60cce917677ee704d6c93ee960b4b
src/transformers/utils/dummy_pt_objects.py
python
BigBirdForQuestionAnswering.__init__
(self, *args, **kwargs)
[]
def __init__(self, *args, **kwargs):
    requires_backends(self, ["torch"])
[ "def", "__init__", "(", "self", ",", "*", "args", ",", "*", "*", "kwargs", ")", ":", "requires_backends", "(", "self", ",", "[", "\"torch\"", "]", ")" ]
https://github.com/huggingface/transformers/blob/623b4f7c63f60cce917677ee704d6c93ee960b4b/src/transformers/utils/dummy_pt_objects.py#L985-L986
angr/cle
7dea3e72a06c7596b9ef7b884c42cb19bca7620a
cle/backends/symbol.py
python
Symbol.is_function
(self)
return self.type is SymbolType.TYPE_FUNCTION
Whether this symbol is a function
Whether this symbol is a function
[ "Whether", "this", "symbol", "is", "a", "function" ]
def is_function(self):
    """
    Whether this symbol is a function
    """
    return self.type is SymbolType.TYPE_FUNCTION
[ "def", "is_function", "(", "self", ")", ":", "return", "self", ".", "type", "is", "SymbolType", ".", "TYPE_FUNCTION" ]
https://github.com/angr/cle/blob/7dea3e72a06c7596b9ef7b884c42cb19bca7620a/cle/backends/symbol.py#L100-L104
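The `is_function` check above is the common enum-identity pattern. A self-contained sketch under the assumption of a simplified stand-in enum (this `SymbolType`/`Symbol` pair is illustrative, not cle's actual classes):

```python
from enum import Enum, auto


class SymbolType(Enum):
    TYPE_FUNCTION = auto()
    TYPE_OBJECT = auto()


class Symbol:
    def __init__(self, name, type_):
        self.name = name
        self.type = type_

    @property
    def is_function(self):
        """Whether this symbol is a function"""
        # Identity comparison (`is`) is safe for enum members,
        # which are singletons.
        return self.type is SymbolType.TYPE_FUNCTION


print(Symbol("main", SymbolType.TYPE_FUNCTION).is_function)  # True
print(Symbol("data", SymbolType.TYPE_OBJECT).is_function)    # False
```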
spotify/chartify
5ac3a88e54cf620389741f396cc19d60fe032822
chartify/_core/style.py
python
AccentPalette.set_default_color
(self, color)
Set default color of values in the 'color_column' that are not accented.
Set default color of values in the 'color_column' that are not accented.
[ "Set", "default", "color", "of", "values", "in", "the", "color_column", "that", "are", "not", "accented", "." ]
def set_default_color(self, color):
    """Set default color of values in the 'color_column'
    that are not accented."""
    color = colors.Color(color).get_hex_l()
    self._default_color = color
[ "def", "set_default_color", "(", "self", ",", "color", ")", ":", "color", "=", "colors", ".", "Color", "(", "color", ")", ".", "get_hex_l", "(", ")", "self", ".", "_default_color", "=", "color" ]
https://github.com/spotify/chartify/blob/5ac3a88e54cf620389741f396cc19d60fe032822/chartify/_core/style.py#L180-L185
Ha0Tang/SelectionGAN
80aa7ad9f79f643c28633c40c621f208f3fb0121
semantic_synthesis/data/ade20k_dataset.py
python
ADE20KDataset.modify_commandline_options
(parser, is_train)
return parser
[]
def modify_commandline_options(parser, is_train):
    parser = Pix2pixDataset.modify_commandline_options(parser, is_train)
    parser.set_defaults(preprocess_mode='resize_and_crop')
    if is_train:
        parser.set_defaults(load_size=286)
    else:
        parser.set_defaults(load_size=256)
    parser.set_defaults(crop_size=256)
    parser.set_defaults(display_winsize=256)
    parser.set_defaults(label_nc=150)
    parser.set_defaults(contain_dontcare_label=True)
    parser.set_defaults(cache_filelist_read=False)
    parser.set_defaults(cache_filelist_write=False)
    parser.set_defaults(no_instance=True)
    return parser
[ "def", "modify_commandline_options", "(", "parser", ",", "is_train", ")", ":", "parser", "=", "Pix2pixDataset", ".", "modify_commandline_options", "(", "parser", ",", "is_train", ")", "parser", ".", "set_defaults", "(", "preprocess_mode", "=", "'resize_and_crop'", "...
https://github.com/Ha0Tang/SelectionGAN/blob/80aa7ad9f79f643c28633c40c621f208f3fb0121/semantic_synthesis/data/ade20k_dataset.py#L13-L27
nicolargo/glances
00c65933ae1d0ebd3e72dc30fc3c215a83dfaae2
glances/main.py
python
GlancesMain.parse_args
(self)
return args
Parse command line arguments.
Parse command line arguments.
[ "Parse", "command", "line", "arguments", "." ]
def parse_args(self):
    """Parse command line arguments."""
    args = self.init_args().parse_args()

    # Load the configuration file, if it exists
    # This function should be called after the parse_args
    # because the configuration file path can be defined
    self.config = Config(args.conf_file)

    # Debug mode
    if args.debug:
        from logging import DEBUG
        logger.setLevel(DEBUG)
    else:
        from warnings import simplefilter
        simplefilter("ignore")

    # Plugins refresh rate
    if self.config.has_section('global'):
        global_refresh = self.config.get_float_value('global', 'refresh', default=self.DEFAULT_REFRESH_TIME)
        if args.time == self.DEFAULT_REFRESH_TIME:
            args.time = global_refresh
    logger.debug('Global refresh rate is set to {} seconds'.format(args.time))

    # Plugins disable/enable
    # Allow users to disable plugins from the glances.conf (issue #1378)
    for s in self.config.sections():
        if self.config.has_section(s) and (self.config.get_bool_value(s, 'disable', False)):
            disable(args, s)
            logger.debug('{} disabled by the configuration file'.format(s))
    # The configuration key can be overwrite from the command line
    if args.disable_plugin is not None:
        for p in args.disable_plugin.split(','):
            disable(args, p)
    if args.enable_plugin is not None:
        for p in args.enable_plugin.split(','):
            enable(args, p)

    # Exporters activation
    if args.export is not None:
        for p in args.export.split(','):
            setattr(args, 'export_' + p, True)

    # Client/server Port
    if args.port is None:
        if args.webserver:
            args.port = self.web_server_port
        else:
            args.port = self.server_port
    # Port in the -c URI #996
    if args.client is not None:
        args.client, args.port = (
            x if x else y for (x, y) in zip(args.client.partition(':')[::2], (args.client, args.port))
        )

    # Autodiscover
    if args.disable_autodiscover:
        logger.info("Auto discover mode is disabled")

    # In web server mode
    if args.webserver:
        args.process_short_name = True

    # Server or client login/password
    if args.username_prompt:
        # Every username needs a password
        args.password_prompt = True
        # Prompt username
        if args.server:
            args.username = self.__get_username(description='Define the Glances server username: ')
        elif args.webserver:
            args.username = self.__get_username(description='Define the Glances webserver username: ')
        elif args.client:
            args.username = self.__get_username(description='Enter the Glances server username: ')
    else:
        if args.username_used:
            # A username has been set using the -u option ?
            args.username = args.username_used
        else:
            # Default user name is 'glances'
            args.username = self.username

    if args.password_prompt or args.username_used:
        # Interactive or file password
        if args.server:
            args.password = self.__get_password(
                description='Define the Glances server password ({} username): '.format(args.username),
                confirm=True,
                username=args.username,
            )
        elif args.webserver:
            args.password = self.__get_password(
                description='Define the Glances webserver password ({} username): '.format(args.username),
                confirm=True,
                username=args.username,
            )
        elif args.client:
            args.password = self.__get_password(
                description='Enter the Glances server password ({} username): '.format(args.username),
                clear=True,
                username=args.username,
            )
    else:
        # Default is no password
        args.password = self.password

    # By default help is hidden
    args.help_tag = False

    # Display Rx and Tx, not the sum for the network
    args.network_sum = False
    args.network_cumul = False

    # Manage light mode
    if args.enable_light:
        logger.info("Light mode is on")
        args.disable_left_sidebar = True
        disable(args, 'process')
        disable(args, 'alert')
        disable(args, 'amps')
        disable(args, 'docker')

    # Manage full quicklook option
    if args.full_quicklook:
        logger.info("Full quicklook mode")
        enable(args, 'quicklook')
        disable(args, 'cpu')
        disable(args, 'mem')
        disable(args, 'memswap')
        enable(args, 'load')

    # Manage disable_top option
    if args.disable_top:
        logger.info("Disable top menu")
        disable(args, 'quicklook')
        disable(args, 'cpu')
        disable(args, 'mem')
        disable(args, 'memswap')
        disable(args, 'load')

    # Init the generate_graph tag
    # Should be set to True to generate graphs
    args.generate_graph = False

    # Control parameter and exit if it is not OK
    self.args = args

    # Export is only available in standalone or client mode (issue #614)
    export_tag = self.args.export is not None and any(self.args.export)
    if WINDOWS and export_tag:
        # On Windows, export is possible but only in quiet mode
        # See issue #1038
        logger.info("On Windows OS, export disable the Web interface")
        self.args.quiet = True
        self.args.webserver = False
    elif not (self.is_standalone() or self.is_client()) and export_tag:
        logger.critical("Export is only available in standalone or client mode")
        sys.exit(2)

    # Filter is only available in standalone mode
    if args.process_filter is not None and not self.is_standalone():
        logger.critical("Process filter is only available in standalone mode")
        sys.exit(2)

    # Disable HDDTemp if sensors are disabled
    if getattr(args, 'disable_sensors', False):
        disable(args, 'hddtemp')
        logger.debug("Sensors and HDDTemp are disabled")

    # Let the plugins known the Glances mode
    self.args.is_standalone = self.is_standalone()
    self.args.is_client = self.is_client()
    self.args.is_client_browser = self.is_client_browser()
    self.args.is_server = self.is_server()
    self.args.is_webserver = self.is_webserver()

    return args
[ "def", "parse_args", "(", "self", ")", ":", "args", "=", "self", ".", "init_args", "(", ")", ".", "parse_args", "(", ")", "# Load the configuration file, if it exists", "# This function should be called after the parse_args", "# because the configuration file path can be define...
https://github.com/nicolargo/glances/blob/00c65933ae1d0ebd3e72dc30fc3c215a83dfaae2/glances/main.py#L478-L656
googleads/google-ads-python
2a1d6062221f6aad1992a6bcca0e7e4a93d2db86
google/ads/googleads/v7/services/services/landing_page_view_service/transports/grpc.py
python
LandingPageViewServiceGrpcTransport.get_landing_page_view
( self, )
return self._stubs["get_landing_page_view"]
r"""Return a callable for the get landing page view method over gRPC. Returns the requested landing page view in full detail. List of thrown errors: `AuthenticationError <>`__ `AuthorizationError <>`__ `HeaderError <>`__ `InternalError <>`__ `QuotaError <>`__ `RequestError <>`__ Returns: Callable[[~.GetLandingPageViewRequest], ~.LandingPageView]: A function that, when called, will call the underlying RPC on the server.
r"""Return a callable for the get landing page view method over gRPC.
[ "r", "Return", "a", "callable", "for", "the", "get", "landing", "page", "view", "method", "over", "gRPC", "." ]
def get_landing_page_view(
    self,
) -> Callable[
    [landing_page_view_service.GetLandingPageViewRequest],
    landing_page_view.LandingPageView,
]:
    r"""Return a callable for the get landing page view method over gRPC.

    Returns the requested landing page view in full detail.

    List of thrown errors: `AuthenticationError <>`__ `AuthorizationError <>`__
    `HeaderError <>`__ `InternalError <>`__ `QuotaError <>`__ `RequestError <>`__

    Returns:
        Callable[[~.GetLandingPageViewRequest], ~.LandingPageView]:
            A function that, when called, will call the underlying RPC on the server.
    """
    # Generate a "stub function" on-the-fly which will actually make
    # the request.
    # gRPC handles serialization and deserialization, so we just need
    # to pass in the functions for each.
    if "get_landing_page_view" not in self._stubs:
        self._stubs["get_landing_page_view"] = self.grpc_channel.unary_unary(
            "/google.ads.googleads.v7.services.LandingPageViewService/GetLandingPageView",
            request_serializer=landing_page_view_service.GetLandingPageViewRequest.serialize,
            response_deserializer=landing_page_view.LandingPageView.deserialize,
        )
    return self._stubs["get_landing_page_view"]
[ "def", "get_landing_page_view", "(", "self", ",", ")", "->", "Callable", "[", "[", "landing_page_view_service", ".", "GetLandingPageViewRequest", "]", ",", "landing_page_view", ".", "LandingPageView", ",", "]", ":", "# Generate a \"stub function\" on-the-fly which will actu...
https://github.com/googleads/google-ads-python/blob/2a1d6062221f6aad1992a6bcca0e7e4a93d2db86/google/ads/googleads/v7/services/services/landing_page_view_service/transports/grpc.py#L213-L247
Qiskit/qiskit-terra
b66030e3b9192efdd3eb95cf25c6545fe0a13da4
qiskit/pulse/instructions/phase.py
python
SetPhase.__init__
( self, phase: Union[complex, ParameterExpression], channel: PulseChannel, name: Optional[str] = None, )
Instantiate a set phase instruction, setting the output signal phase on ``channel`` to ``phase`` [radians]. Args: phase: The rotation angle in radians. channel: The channel this instruction operates on. name: Display name for this instruction.
Instantiate a set phase instruction, setting the output signal phase on ``channel`` to ``phase`` [radians].
[ "Instantiate", "a", "set", "phase", "instruction", "setting", "the", "output", "signal", "phase", "on", "channel", "to", "phase", "[", "radians", "]", "." ]
def __init__(
    self,
    phase: Union[complex, ParameterExpression],
    channel: PulseChannel,
    name: Optional[str] = None,
):
    """Instantiate a set phase instruction, setting the output signal phase on
    ``channel`` to ``phase`` [radians].

    Args:
        phase: The rotation angle in radians.
        channel: The channel this instruction operates on.
        name: Display name for this instruction.
    """
    super().__init__(operands=(phase, channel), name=name)
[ "def", "__init__", "(", "self", ",", "phase", ":", "Union", "[", "complex", ",", "ParameterExpression", "]", ",", "channel", ":", "PulseChannel", ",", "name", ":", "Optional", "[", "str", "]", "=", "None", ",", ")", ":", "super", "(", ")", ".", "__in...
https://github.com/Qiskit/qiskit-terra/blob/b66030e3b9192efdd3eb95cf25c6545fe0a13da4/qiskit/pulse/instructions/phase.py#L98-L112
robclewley/pydstool
939e3abc9dd1f180d35152bacbde57e24c85ff26
PyDSTool/Generator/baseclasses.py
python
Generator._infostr
(self, verbose=1)
return outputStr
Return detailed information about the Generator specification.
Return detailed information about the Generator specification.
[ "Return", "detailed", "information", "about", "the", "Generator", "specification", "." ]
def _infostr(self, verbose=1):
    """Return detailed information about the Generator specification."""
    if verbose == 0:
        outputStr = "Generator " + self.name
    else:
        outputStr = '**************************************************'
        outputStr += '\n Generator ' + self.name
        outputStr += '\n**************************************************'
        outputStr += '\nType : ' + className(self)
        outputStr += '\nIndependent variable interval: ' + str(self.indepvariable.depdomain)
        outputStr += '\nGlobal t0 = ' + str(self.globalt0)
        outputStr += '\nInterval endpoint check level = ' + str(self.checklevel)
        outputStr += '\nDimension = ' + str(self.dimension)
        outputStr += '\n'
        if isinstance(self.funcspec, FuncSpec):
            outputStr += self.funcspec._infostr(verbose)
    if verbose == 2:
        outputStr += '\nVariables` validity intervals:'
        for v in self.variables.values():
            outputStr += '\n  ' + str(v.depdomain)
        if self.eventstruct is not None:
            outputStr += '\nEvents defined:'
            outputStr += '\n  ' + str(list(self.eventstruct.events.keys()))
        if self._modeltag is not None:
            outputStr += '\nAssociated Model: ' + self._modeltag.name
    if verbose > 0:
        outputStr += '\n'
    return outputStr
[ "def", "_infostr", "(", "self", ",", "verbose", "=", "1", ")", ":", "if", "verbose", "==", "0", ":", "outputStr", "=", "\"Generator \"", "+", "self", ".", "name", "else", ":", "outputStr", "=", "'**************************************************'", "outputStr",...
https://github.com/robclewley/pydstool/blob/939e3abc9dd1f180d35152bacbde57e24c85ff26/PyDSTool/Generator/baseclasses.py#L905-L934
CastagnaIT/plugin.video.netflix
5cf5fa436eb9956576c0f62aa31a4c7d6c5b8a4a
packages/h11/_connection.py
python
Connection.our_state
(self)
return self._cstate.states[self.our_role]
The current state of whichever role we are playing. See :ref:`state-machine` for details.
The current state of whichever role we are playing. See :ref:`state-machine` for details.
[ "The", "current", "state", "of", "whichever", "role", "we", "are", "playing", ".", "See", ":", "ref", ":", "state", "-", "machine", "for", "details", "." ]
def our_state(self):
    """The current state of whichever role we are playing. See
    :ref:`state-machine` for details.
    """
    return self._cstate.states[self.our_role]
[ "def", "our_state", "(", "self", ")", ":", "return", "self", ".", "_cstate", ".", "states", "[", "self", ".", "our_role", "]" ]
https://github.com/CastagnaIT/plugin.video.netflix/blob/5cf5fa436eb9956576c0f62aa31a4c7d6c5b8a4a/packages/h11/_connection.py#L176-L180
oilshell/oil
94388e7d44a9ad879b12615f6203b38596b5a2d3
oil_lang/funcs_builtin.py
python
_Shvar_get.__call__
(self, *args)
return expr_eval.LookupVar(self.mem, name, scope_e.Dynamic)
[]
def __call__(self, *args):
    name = args[0]
    return expr_eval.LookupVar(self.mem, name, scope_e.Dynamic)
[ "def", "__call__", "(", "self", ",", "*", "args", ")", ":", "name", "=", "args", "[", "0", "]", "return", "expr_eval", ".", "LookupVar", "(", "self", ".", "mem", ",", "name", ",", "scope_e", ".", "Dynamic", ")" ]
https://github.com/oilshell/oil/blob/94388e7d44a9ad879b12615f6203b38596b5a2d3/oil_lang/funcs_builtin.py#L124-L126
khanhnamle1994/natural-language-processing
01d450d5ac002b0156ef4cf93a07cb508c1bcdc5
assignment1/.env/lib/python2.7/site-packages/pip/_vendor/distlib/_backport/tarfile.py
python
TarFile.gzopen
(cls, name, mode="r", fileobj=None, compresslevel=9, **kwargs)
return t
Open gzip compressed tar archive name for reading or writing. Appending is not allowed.
Open gzip compressed tar archive name for reading or writing. Appending is not allowed.
[ "Open", "gzip", "compressed", "tar", "archive", "name", "for", "reading", "or", "writing", ".", "Appending", "is", "not", "allowed", "." ]
def gzopen(cls, name, mode="r", fileobj=None, compresslevel=9, **kwargs):
    """Open gzip compressed tar archive name for reading or writing.
       Appending is not allowed.
    """
    if len(mode) > 1 or mode not in "rw":
        raise ValueError("mode must be 'r' or 'w'")

    try:
        import gzip
        gzip.GzipFile
    except (ImportError, AttributeError):
        raise CompressionError("gzip module is not available")

    extfileobj = fileobj is not None
    try:
        fileobj = gzip.GzipFile(name, mode + "b", compresslevel, fileobj)
        t = cls.taropen(name, mode, fileobj, **kwargs)
    except IOError:
        if not extfileobj and fileobj is not None:
            fileobj.close()
        if fileobj is None:
            raise
        raise ReadError("not a gzip file")
    except:
        if not extfileobj and fileobj is not None:
            fileobj.close()
        raise
    t._extfileobj = extfileobj
    return t
[ "def", "gzopen", "(", "cls", ",", "name", ",", "mode", "=", "\"r\"", ",", "fileobj", "=", "None", ",", "compresslevel", "=", "9", ",", "*", "*", "kwargs", ")", ":", "if", "len", "(", "mode", ")", ">", "1", "or", "mode", "not", "in", "\"rw\"", "...
https://github.com/khanhnamle1994/natural-language-processing/blob/01d450d5ac002b0156ef4cf93a07cb508c1bcdc5/assignment1/.env/lib/python2.7/site-packages/pip/_vendor/distlib/_backport/tarfile.py#L1798-L1826
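The `gzopen` classmethod above is what backs the stdlib's `"r:gz"`/`"w:gz"` modes. A quick in-memory round trip with the standard `tarfile` module shows the usage (file name and payload are illustrative):

```python
import io
import tarfile

# Write a gzip-compressed tar archive to a buffer, then read it back.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:gz") as tar:
    data = b"hello"
    info = tarfile.TarInfo(name="greeting.txt")
    info.size = len(data)
    tar.addfile(info, io.BytesIO(data))

buf.seek(0)
with tarfile.open(fileobj=buf, mode="r:gz") as tar:
    member = tar.extractfile("greeting.txt")
    print(member.read())  # b'hello'
```

Note that, as the docstring says, appending (`"a"`) is not allowed for compressed archives; only `"r"` and `"w"` pass the mode check.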
python/cpython
e13cdca0f5224ec4e23bdd04bb3120506964bc8b
Lib/zoneinfo/_tzpath.py
python
find_tzfile
(key)
return None
Retrieve the path to a TZif file from a key.
Retrieve the path to a TZif file from a key.
[ "Retrieve", "the", "path", "to", "a", "TZif", "file", "from", "a", "key", "." ]
def find_tzfile(key):
    """Retrieve the path to a TZif file from a key."""
    _validate_tzfile_path(key)
    for search_path in TZPATH:
        filepath = os.path.join(search_path, key)
        if os.path.isfile(filepath):
            return filepath

    return None
[ "def", "find_tzfile", "(", "key", ")", ":", "_validate_tzfile_path", "(", "key", ")", "for", "search_path", "in", "TZPATH", ":", "filepath", "=", "os", ".", "path", ".", "join", "(", "search_path", ",", "key", ")", "if", "os", ".", "path", ".", "isfile...
https://github.com/python/cpython/blob/e13cdca0f5224ec4e23bdd04bb3120506964bc8b/Lib/zoneinfo/_tzpath.py#L65-L73
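The `find_tzfile` lookup above is a first-match search over a list of search roots. A standalone sketch of the same logic, exercised against a throwaway directory (the path-validation step and the real `TZPATH` are omitted; `find_in_paths` is a hypothetical helper, not part of `zoneinfo`):

```python
import os
import tempfile


def find_in_paths(key, search_paths):
    # Return the first existing file named `key` under any of the
    # given search roots, mirroring zoneinfo's TZPATH lookup.
    for search_path in search_paths:
        filepath = os.path.join(search_path, key)
        if os.path.isfile(filepath):
            return filepath
    return None


# Demo: build a fake tzdata layout, then look up a key.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "America"))
open(os.path.join(root, "America", "New_York"), "w").close()

print(find_in_paths("America/New_York", ["/nonexistent", root]))
print(find_in_paths("Mars/Olympus", ["/nonexistent", root]))  # None
```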
yhlleo/tensorflow.cifar10
cd950323e7157e30391c123b665804b6276e38a6
cifar10_input.py
python
read_cifar10
(filename_queue)
return result
Reads and parses examples from CIFAR10 data files. Recommendation: if you want N-way read parallelism, call this function N times. This will give you N independent Readers reading different files & positions within those files, which will give better mixing of examples. Args: filename_queue: A queue of strings with the filenames to read from. Returns: An object representing a single example, with the following fields: height: number of rows in the result (32) width: number of columns in the result (32) depth: number of color channels in the result (3) key: a scalar string Tensor describing the filename & record number for this example. label: an int32 Tensor with the label in the range 0..9. uint8image: a [height, width, depth] uint8 Tensor with the image data
Reads and parses examples from CIFAR10 data files.
[ "Reads", "and", "parses", "examples", "from", "CIFAR10", "data", "files", "." ]
def read_cifar10(filename_queue):
    """Reads and parses examples from CIFAR10 data files.

    Recommendation: if you want N-way read parallelism, call this function
    N times. This will give you N independent Readers reading different
    files & positions within those files, which will give better mixing of
    examples.

    Args:
        filename_queue: A queue of strings with the filenames to read from.

    Returns:
        An object representing a single example, with the following fields:
            height: number of rows in the result (32)
            width: number of columns in the result (32)
            depth: number of color channels in the result (3)
            key: a scalar string Tensor describing the filename & record number
                for this example.
            label: an int32 Tensor with the label in the range 0..9.
            uint8image: a [height, width, depth] uint8 Tensor with the image data
    """

    class CIFAR10Record(object):
        pass
    result = CIFAR10Record()

    # Dimensions of the images in the CIFAR-10 dataset.
    # See http://www.cs.toronto.edu/~kriz/cifar.html for a description of the
    # input format.
    label_bytes = 1  # 2 for CIFAR-100
    result.height = 32
    result.width = 32
    result.depth = 3
    image_bytes = result.height * result.width * result.depth
    # Every record consists of a label followed by the image, with a
    # fixed number of bytes for each.
    record_bytes = label_bytes + image_bytes

    # Read a record, getting filenames from the filename_queue. No
    # header or footer in the CIFAR-10 format, so we leave header_bytes
    # and footer_bytes at their default of 0.
    reader = tf.FixedLengthRecordReader(record_bytes=record_bytes)
    result.key, value = reader.read(filename_queue)

    # Convert from a string to a vector of uint8 that is record_bytes long.
    record_bytes = tf.decode_raw(value, tf.uint8)

    # The first bytes represent the label, which we convert from uint8->int32.
    result.label = tf.cast(
        tf.slice(record_bytes, [0], [label_bytes]), tf.int32)

    # The remaining bytes after the label represent the image, which we reshape
    # from [depth * height * width] to [depth, height, width].
    depth_major = tf.reshape(tf.slice(record_bytes, [label_bytes], [image_bytes]),
                             [result.depth, result.height, result.width])
    # Convert from [depth, height, width] to [height, width, depth].
    result.uint8image = tf.transpose(depth_major, [1, 2, 0])

    return result
[ "def", "read_cifar10", "(", "filename_queue", ")", ":", "class", "CIFAR10Record", "(", "object", ")", ":", "pass", "result", "=", "CIFAR10Record", "(", ")", "# Dimensions of the images in the CIFAR-10 dataset.", "# See http://www.cs.toronto.edu/~kriz/cifar.html for a descriptio...
https://github.com/yhlleo/tensorflow.cifar10/blob/cd950323e7157e30391c123b665804b6276e38a6/cifar10_input.py#L41-L99
newsdev/elex
0e5918e127daba3eca4a6eff03f7b228d76ec675
elex/cli/app.py
python
ElexBaseController.clear_cache
(self)
``elex clear-cache`` Returns data about the next election with an optional date to start searching. Command: .. code:: bash elex clear-cache If no cache entries exist, elex will close with exit code 65.
``elex clear-cache``
[ "elex", "clear", "-", "cache" ]
def clear_cache(self):
    """
    ``elex clear-cache``

    Returns data about the next election with an optional date to start
    searching.

    Command:

    .. code:: bash

        elex clear-cache

    If no cache entries exist, elex will close with exit code 65.
    """
    from elex import cache
    adapter = cache.get_adapter('http://')
    self.app.log.info('Clearing cache ({0})'.format(adapter.cache.directory))
    try:
        rmtree(adapter.cache.directory)
    except OSError:
        self.app.log.info('No cache entries found.')
        self.app.exit_code = 65
    else:
        self.app.log.info('Cache cleared.')
[ "def", "clear_cache", "(", "self", ")", ":", "from", "elex", "import", "cache", "adapter", "=", "cache", ".", "get_adapter", "(", "'http://'", ")", "self", ".", "app", ".", "log", ".", "info", "(", "'Clearing cache ({0})'", ".", "format", "(", "adapter", ...
https://github.com/newsdev/elex/blob/0e5918e127daba3eca4a6eff03f7b228d76ec675/elex/cli/app.py#L498-L522