| nwo | sha | path | language | identifier | parameters | argument_list | return_statement | docstring | docstring_summary | docstring_tokens | function | function_tokens | url |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
SUSE/DeepSea | 9c7fad93915ba1250c40d50c855011e9fe41ed21 | srv/modules/runners/openstack.py | python | integrate | (**kwargs) | return {
'ceph_conf': {
'fsid': fsid,
'mon_initial_members': mon_initial_members,
'mon_host': mon_host,
'public_network': public_network,
'cluster_network': cluster_network
},
'cinder': {
'rbd_store_pool': prefix + 'cloud-volumes',
'rbd_store_user': prefix + 'cinder',
'key': cinder_key
},
'cinder-backup': {
'rbd_store_pool': prefix + 'cloud-backups',
'rbd_store_user': prefix + 'cinder-backup',
'key': backup_key
},
'glance': {
'rbd_store_pool': prefix + 'cloud-images',
'rbd_store_user': prefix + 'glance',
'key': glance_key
},
'nova': {
'rbd_store_pool': prefix + 'cloud-vms'
},
'radosgw_urls': rgw_urls
} | Create pools and users necessary for OpenStack integration. Returns
relevant Ceph configuration and keys, to be used to subsequently
configure OpenStack.
This will create pools for use by glance, cinder, cinder-backup and nova.
By default, these pools will be named "cloud-images", "cloud-volumes",
"cloud-backups" and "cloud-vms" respectively. If these names conflict
with any existing pools, or if you wish to have a single Ceph cluster
provide storage for multiple OpenStack deployments, use of the "prefix"
parameter will alter the pool names, for example specifying "prefix=other"
will result in pools named "other-cloud-images", "other-cloud-volumes",
"other-cloud-backups" and "other-cloud-vms".
Similarly, by default, this function will create users named
"client.glance", "client.cinder" and "client.cinder-backup". Specifying
"prefix=other" would result in users named "client.other-glance",
"client.other-cinder" and "client.other-cinder-backup" (there is no
separate nova user; nova should be configured to access the cluster using
the cinder key).
CLI Example:
salt-run --out=yaml openstack.integrate
salt-run --out=yaml openstack.integrate prefix=other
Sample Output:
ceph_conf:
cluster_network: 172.16.2.0/24
fsid: 049c4577-3806-3e5b-944e-eec8bedb12bc
mon_host: 172.16.1.13, 172.16.1.12, 172.16.1.11
mon_initial_members: mon3, mon2, mon1
public_network: 172.16.1.0/24
cinder:
key: AQAcJiJbAAAAABAAsJs2RbFr0bhDRP43Lj3h/g==
rbd_store_pool: cloud-volumes
rbd_store_user: cinder
cinder-backup:
key: AQAcJiJbAAAAABAAdbLzrL5QoXoqv+FzGQKg5Q==
rbd_store_pool: cloud-backups
rbd_store_user: cinder-backup
glance:
key: AQAcJiJbAAAAABAA8LLSGbhzblBK96WNDWzNiQ==
rbd_store_pool: cloud-images
rbd_store_user: glance
nova:
rbd_store_pool: cloud-vms
radosgw_urls:
- https://data4.ceph:443/swift/v1
The ceph_conf information returned is the minimum data required to
construct a suitable /etc/ceph/ceph.conf file on the OpenStack hosts.
The cinder, cinder-backup and glance sections include the keys
that need to be written to the appropriate ceph.client.*.keyring files
in /etc/ceph/. The various rbd_store_pool and rbd_store_user settings
are for use in the cinder, cinder-backup, glance and nova configuration
files. The radosgw_urls list will be populated automatically, based on
whatever RGW instances have been configured. If RGW has not been
configured (or if this runner can't figure out what the URL is based on
what's in ceph.conf), this will be an empty list. If there are multiple
RGW instances, they will all be included in the list, and it's up to the
administrator to choose the correct one when configuring OpenStack.
Note that to correctly configure RGW for use by OpenStack, the following
must be set in /srv/salt/ceph/configuration/files/ceph.conf.d/rgw.conf:
rgw keystone api version = 3
rgw keystone url = http://192.168.126.2:5000/
rgw keystone admin user = ceph
rgw keystone admin password = verybadpassword
rgw keystone admin domain = Default
rgw keystone admin project = admin
rgw keystone verify ssl = false
The user, password, project and domain need to match what has been
configured on the OpenStack side. Note that these settings tend to be
case-sensitive, i.e. "Default" and "default" are not the same thing.
Also, real-world deployments are expected to use SSL, and choose a
better password than is specified above. | Create pools and users necessary for OpenStack integration. Returns
relevant Ceph configuration and keys, to be used to subsequently
configure OpenStack. | [
"Create",
"pools",
"and",
"users",
"necessary",
"for",
"OpenStack",
"integration",
".",
"Returns",
"relevant",
"Ceph",
"configuration",
"and",
"keys",
"to",
"be",
"used",
"to",
"subsequently",
"configure",
"OpenStack",
"."
] | def integrate(**kwargs):
"""
Create pools and users necessary for OpenStack integration. Returns
relevant Ceph configuration and keys, to be used to subsequently
configure OpenStack.
This will create pools for use by glance, cinder, cinder-backup and nova.
By default, these pools will be named "cloud-images", "cloud-volumes",
"cloud-backups" and "cloud-vms" respectively. If these names conflict
with any existing pools, or if you wish to have a single Ceph cluster
provide storage for multiple OpenStack deployments, use of the "prefix"
parameter will alter the pool names, for example specifying "prefix=other"
will result in pools named "other-cloud-images", "other-cloud-volumes",
"other-cloud-backups" and "other-cloud-vms".
Similarly, by default, this function will create users named
"client.glance", "client.cinder" and "client.cinder-backup". Specifying
"prefix=other" would result in users named "client.other-glance",
"client.other-cinder" and "client.other-cinder-backup" (there is no
separate nova user; nova should be configured to access the cluster using
the cinder key).
CLI Example:
salt-run --out=yaml openstack.integrate
salt-run --out=yaml openstack.integrate prefix=other
Sample Output:
ceph_conf:
cluster_network: 172.16.2.0/24
fsid: 049c4577-3806-3e5b-944e-eec8bedb12bc
mon_host: 172.16.1.13, 172.16.1.12, 172.16.1.11
mon_initial_members: mon3, mon2, mon1
public_network: 172.16.1.0/24
cinder:
key: AQAcJiJbAAAAABAAsJs2RbFr0bhDRP43Lj3h/g==
rbd_store_pool: cloud-volumes
rbd_store_user: cinder
cinder-backup:
key: AQAcJiJbAAAAABAAdbLzrL5QoXoqv+FzGQKg5Q==
rbd_store_pool: cloud-backups
rbd_store_user: cinder-backup
glance:
key: AQAcJiJbAAAAABAA8LLSGbhzblBK96WNDWzNiQ==
rbd_store_pool: cloud-images
rbd_store_user: glance
nova:
rbd_store_pool: cloud-vms
radosgw_urls:
- https://data4.ceph:443/swift/v1
The ceph_conf information returned is the minimum data required to
construct a suitable /etc/ceph/ceph.conf file on the OpenStack hosts.
The cinder, cinder-backup and glance sections include the keys
that need to be written to the appropriate ceph.client.*.keyring files
in /etc/ceph/. The various rbd_store_pool and rbd_store_user settings
are for use in the cinder, cinder-backup, glance and nova configuration
files. The radosgw_urls list will be populated automatically, based on
whatever RGW instances have been configured. If RGW has not been
configured (or if this runner can't figure out what the URL is based on
what's in ceph.conf), this will be an empty list. If there are multiple
RGW instances, they will all be included in the list, and it's up to the
administrator to choose the correct one when configuring OpenStack.
Note that to correctly configure RGW for use by OpenStack, the following
must be set in /srv/salt/ceph/configuration/files/ceph.conf.d/rgw.conf:
rgw keystone api version = 3
rgw keystone url = http://192.168.126.2:5000/
rgw keystone admin user = ceph
rgw keystone admin password = verybadpassword
rgw keystone admin domain = Default
rgw keystone admin project = admin
rgw keystone verify ssl = false
The user, password, project and domain need to match what has been
configured on the OpenStack side. Note that these settings tend to be
case-sensitive, i.e. "Default" and "default" are not the same thing.
Also, real-world deployments are expected to use SSL, and choose a
better password than is specified above.
"""
local = salt.client.LocalClient()
__opts__ = salt.config.client_config('/etc/salt/master')
__grains__ = salt.loader.grains(__opts__)
__opts__['grains'] = __grains__
__utils__ = salt.loader.utils(__opts__)
__salt__ = salt.loader.minion_mods(__opts__, utils=__utils__)
master_minion = __salt__['master.minion']()
prefix = ""
if "prefix" in kwargs:
state_res = local.cmd(master_minion, 'state.apply', ['ceph.openstack',
'pillar={"openstack_prefix": "' + kwargs['prefix'] + '"}'])
# Set up prefix for subsequent string concatenation to match what's done
# in the SLS files for keyring and pool names.
prefix = "{}-".format(kwargs['prefix'])
else:
state_res = local.cmd(master_minion, 'state.apply', ['ceph.openstack'])
# If state.apply failed for any reason, this will return whatever
# state(s) failed to apply
failed = []
for _, states in state_res.items():
if isinstance(states, dict):
for _, state in states.items():
if 'result' not in state or not state['result']:
failed.append(state)
else:
# This could happen if the SLS being applied somehow doesn't exist,
# e.g. "No matching sls found for 'ceph.openstack' in env 'base'".
# Realistically this should never /actually/ happen.
failed.append(states)
if failed:
return {'ERROR': failed}
runner = salt.runner.RunnerClient(__opts__)
def _local(*args):
"""
salt.client.LocalClient.cmd() returns a dict keyed by minion ID. For
cases where we're running a single command on the master and want the
result, this is a convenient shorthand.
"""
# pylint: disable=no-value-for-parameter
return list(local.cmd(master_minion, *args).items())[0][1]
fsid = _local('pillar.get', ['fsid'])
public_network = _local('pillar.get', ['public_network'])
cluster_network = _local('pillar.get', ['cluster_network'])
mon_initial_members = ", ".join(runner.cmd('select.minions',
['cluster=ceph', 'roles=mon', 'host=True'],
print_event=False))
mon_host = ", ".join(runner.cmd('select.public_addresses',
['cluster=ceph', 'roles=mon'], print_event=False))
cinder_key = _local('keyring.secret', [_local('keyring.file', ['cinder', prefix])])
backup_key = _local('keyring.secret', [_local('keyring.file', ['cinder-backup', prefix])])
glance_key = _local('keyring.secret', [_local('keyring.file', ['glance', prefix])])
conf = configparser.RawConfigParser()
with open("/srv/salt/ceph/configuration/cache/ceph.conf") as lines:
conf.read_string('\n'.join(line.strip() for line in lines))
rgw_urls = []
rgw_configurations = runner.cmd('select.from',
['pillar=rgw_configurations', 'role=rgw', 'attr=host'],
print_event=False)
for rgw in rgw_configurations:
section = "client.{}.{}".format(rgw[0], rgw[1])
if not conf.has_section(section):
continue
if conf.has_option(section, "rgw frontends") and conf.has_option(section, "rgw dns name"):
https = re.match(r'.*port=(\d+)s.*', conf.get(section, "rgw frontends"))
http = re.match(r'.*port=(\d+).*', conf.get(section, "rgw frontends"))
if not http and not https:
continue
if http:
url = "http://{}:{}/swift/v1".format(conf.get(section, "rgw dns name"),
http.group(1))
if https:
url = "https://{}:{}/swift/v1".format(conf.get(section, "rgw dns name"),
https.group(1))
rgw_urls.append(url)
return {
'ceph_conf': {
'fsid': fsid,
'mon_initial_members': mon_initial_members,
'mon_host': mon_host,
'public_network': public_network,
'cluster_network': cluster_network
},
'cinder': {
'rbd_store_pool': prefix + 'cloud-volumes',
'rbd_store_user': prefix + 'cinder',
'key': cinder_key
},
'cinder-backup': {
'rbd_store_pool': prefix + 'cloud-backups',
'rbd_store_user': prefix + 'cinder-backup',
'key': backup_key
},
'glance': {
'rbd_store_pool': prefix + 'cloud-images',
'rbd_store_user': prefix + 'glance',
'key': glance_key
},
'nova': {
'rbd_store_pool': prefix + 'cloud-vms'
},
'radosgw_urls': rgw_urls
} | [
"def",
"integrate",
"(",
"*",
"*",
"kwargs",
")",
":",
"local",
"=",
"salt",
".",
"client",
".",
"LocalClient",
"(",
")",
"__opts__",
"=",
"salt",
".",
"config",
".",
"client_config",
"(",
"'/etc/salt/master'",
")",
"__grains__",
"=",
"salt",
".",
"loade... | https://github.com/SUSE/DeepSea/blob/9c7fad93915ba1250c40d50c855011e9fe41ed21/srv/modules/runners/openstack.py#L19-L212 | |
myaooo/RNNVis | 3bb2b099f236648d77885cc19e6cda5d85d6db28 | rnnvis/rnn/config_utils.py | python | RNNConfig.load | (file_or_dict) | return RNNConfig(**config_dict) | Load an RNNConfig from config file
:param file_or_dict: path of the config file
:return: an instance of RNNConfig | Load an RNNConfig from config file
:param file_or_dict: path of the config file
:return: an instance of RNNConfig | [
"Load",
"an",
"RNNConfig",
"from",
"config",
"file",
":",
"param",
"file_or_dict",
":",
"path",
"of",
"the",
"config",
"file",
":",
"return",
":",
"an",
"instance",
"of",
"RNNConfig"
] | def load(file_or_dict):
"""
Load an RNNConfig from config file
:param file_or_dict: path of the config file
:return: an instance of RNNConfig
"""
if isinstance(file_or_dict, dict):
config_dict = file_or_dict['model']
else:
with open(file_or_dict) as f:
try:
config_dict = yaml.safe_load(f)['model']
except Exception:
raise ValueError("Malformed config file!")
return RNNConfig(**config_dict) | [
"def",
"load",
"(",
"file_or_dict",
")",
":",
"if",
"isinstance",
"(",
"file_or_dict",
",",
"dict",
")",
":",
"config_dict",
"=",
"file_or_dict",
"[",
"'model'",
"]",
"else",
":",
"with",
"open",
"(",
"file_or_dict",
")",
"as",
"f",
":",
"try",
":",
"c... | https://github.com/myaooo/RNNVis/blob/3bb2b099f236648d77885cc19e6cda5d85d6db28/rnnvis/rnn/config_utils.py#L118-L132 | |
mesalock-linux/mesapy | ed546d59a21b36feb93e2309d5c6b75aa0ad95c9 | lib-python/2.7/lib2to3/pgen2/conv.py | python | Converter.run | (self, graminit_h, graminit_c) | Load the grammar tables from the text files written by pgen. | Load the grammar tables from the text files written by pgen. | [
"Load",
"the",
"grammar",
"tables",
"from",
"the",
"text",
"files",
"written",
"by",
"pgen",
"."
] | def run(self, graminit_h, graminit_c):
"""Load the grammar tables from the text files written by pgen."""
self.parse_graminit_h(graminit_h)
self.parse_graminit_c(graminit_c)
self.finish_off() | [
"def",
"run",
"(",
"self",
",",
"graminit_h",
",",
"graminit_c",
")",
":",
"self",
".",
"parse_graminit_h",
"(",
"graminit_h",
")",
"self",
".",
"parse_graminit_c",
"(",
"graminit_c",
")",
"self",
".",
"finish_off",
"(",
")"
] | https://github.com/mesalock-linux/mesapy/blob/ed546d59a21b36feb93e2309d5c6b75aa0ad95c9/lib-python/2.7/lib2to3/pgen2/conv.py#L47-L51 | ||
accel-brain/accel-brain-code | 86f489dc9be001a3bae6d053f48d6b57c0bedb95 | Algorithm-Wars/algowars/generativemodel/recursive_seq2seq_model.py | python | RecursiveSeq2SeqModel.collect_params | (self, select=None) | return params_dict | Overrided `collect_params` in `mxnet.gluon.HybridBlok`. | Overrided `collect_params` in `mxnet.gluon.HybridBlok`. | [
"Overrided",
"collect_params",
"in",
"mxnet",
".",
"gluon",
".",
"HybridBlok",
"."
] | def collect_params(self, select=None):
'''
Overrided `collect_params` in `mxnet.gluon.HybridBlok`.
'''
params_dict = super().collect_params(select)
params_dict.update(self.re_encoder_model.collect_params(select))
return params_dict | [
"def",
"collect_params",
"(",
"self",
",",
"select",
"=",
"None",
")",
":",
"params_dict",
"=",
"super",
"(",
")",
".",
"collect_params",
"(",
"select",
")",
"params_dict",
".",
"update",
"(",
"self",
".",
"re_encoder_model",
".",
"collect_params",
"(",
"s... | https://github.com/accel-brain/accel-brain-code/blob/86f489dc9be001a3bae6d053f48d6b57c0bedb95/Algorithm-Wars/algowars/generativemodel/recursive_seq2seq_model.py#L322-L328 | |
avidLearnerInProgress/python-automation-scripts | 859cbbf72571673500cfc0fbcf493beaed48b7c5 | medium-bookmarks-downloader/restrict.py | python | absolute_path_shortner | (absolute_path) | return tail | [] | def absolute_path_shortner(absolute_path): #returns filename after removing directory name
head, tail = os.path.split(absolute_path)
return tail | [
"def",
"absolute_path_shortner",
"(",
"absolute_path",
")",
":",
"#returns filename after removing directory name",
"head",
",",
"tail",
"=",
"os",
".",
"path",
".",
"split",
"(",
"absolute_path",
")",
"return",
"tail"
] | https://github.com/avidLearnerInProgress/python-automation-scripts/blob/859cbbf72571673500cfc0fbcf493beaed48b7c5/medium-bookmarks-downloader/restrict.py#L6-L8 | |||
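`os.path.split` does the heavy lifting in `absolute_path_shortner`: it returns a `(head, tail)` pair where the tail is the final path component. A quick illustration of the behavior, including the empty-tail edge case (the function name here is mine):

```python
import os.path


def filename_only(absolute_path):
    # tail is everything after the last separator; note that a path
    # ending in a separator yields an empty tail, not the directory name.
    head, tail = os.path.split(absolute_path)
    return tail
```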
fengxinjie/Transformer-OCR | abfcb78508c5d816ba494343612269642cebfe59 | model.py | python | DecoderLayer.forward | (self, x, memory, src_mask, tgt_mask) | return self.sublayer[2](x, self.feed_forward) | Follow Figure 1 (right) for connections. | Follow Figure 1 (right) for connections. | [
"Follow",
"Figure",
"1",
"(",
"right",
")",
"for",
"connections",
"."
] | def forward(self, x, memory, src_mask, tgt_mask):
"Follow Figure 1 (right) for connections."
m = memory
x = self.sublayer[0](x, lambda x: self.self_attn(x, x, x, tgt_mask))
x = self.sublayer[1](x, lambda x: self.src_attn(x, m, m, src_mask))
return self.sublayer[2](x, self.feed_forward) | [
"def",
"forward",
"(",
"self",
",",
"x",
",",
"memory",
",",
"src_mask",
",",
"tgt_mask",
")",
":",
"m",
"=",
"memory",
"x",
"=",
"self",
".",
"sublayer",
"[",
"0",
"]",
"(",
"x",
",",
"lambda",
"x",
":",
"self",
".",
"self_attn",
"(",
"x",
","... | https://github.com/fengxinjie/Transformer-OCR/blob/abfcb78508c5d816ba494343612269642cebfe59/model.py#L122-L127 | |
ilius/pyglossary | d599b3beda3ae17642af5debd83bb991148e6425 | pyglossary/plugin_lib/pureSalsa20.py | python | Salsa20.setIV | (self, IV) | [] | def setIV(self, IV):
assert type(IV) == bytes
assert len(IV)*8 == 64, 'nonce (IV) not 64 bits'
self.IV = IV
ctx=self.ctx
ctx[ 6],ctx[ 7] = little2_i32.unpack( IV )
ctx[ 8],ctx[ 9] = 0, 0 | [
"def",
"setIV",
"(",
"self",
",",
"IV",
")",
":",
"assert",
"type",
"(",
"IV",
")",
"==",
"bytes",
"assert",
"len",
"(",
"IV",
")",
"*",
"8",
"==",
"64",
",",
"'nonce (IV) not 64 bits'",
"self",
".",
"IV",
"=",
"IV",
"ctx",
"=",
"self",
".",
"ctx... | https://github.com/ilius/pyglossary/blob/d599b3beda3ae17642af5debd83bb991148e6425/pyglossary/plugin_lib/pureSalsa20.py#L224-L230 | ||||
rwth-i6/returnn | f2d718a197a280b0d5f0fd91a7fcb8658560dddb | tools/extract_state_tying_from_dataset.py | python | OrthHandler.orth_to_allophone_states | (self, orth) | return allos | :param str orth: orthography as a str. orth.split() should give words in the lexicon
:rtype: list[AllophoneState]
:returns allophone state list. those will have repetitions etc | :param str orth: orthography as a str. orth.split() should give words in the lexicon
:rtype: list[AllophoneState]
:returns allophone state list. those will have repetitions etc | [
":",
"param",
"str",
"orth",
":",
"orthography",
"as",
"a",
"str",
".",
"orth",
".",
"split",
"()",
"should",
"give",
"words",
"in",
"the",
"lexicon",
":",
"rtype",
":",
"list",
"[",
"AllophoneState",
"]",
":",
"returns",
"allophone",
"state",
"list",
... | def orth_to_allophone_states(self, orth):
"""
:param str orth: orthography as a str. orth.split() should give words in the lexicon
:rtype: list[AllophoneState]
:returns allophone state list. those will have repetitions etc
"""
allos = []
for lemma in self.iter_orth(orth):
assert len(lemma["phons"]) == 1, "TODO..."
phon = lemma["phons"][0]
l_allos = list(self._phones_to_allos(phon["phon"].split()))
l_allos[0].mark_initial()
l_allos[-1].mark_final()
allos += l_allos
self._allos_set_context(allos)
allos = list(self._allos_add_states(allos))
return allos | [
"def",
"orth_to_allophone_states",
"(",
"self",
",",
"orth",
")",
":",
"allos",
"=",
"[",
"]",
"for",
"lemma",
"in",
"self",
".",
"iter_orth",
"(",
"orth",
")",
":",
"assert",
"len",
"(",
"lemma",
"[",
"\"phons\"",
"]",
")",
"==",
"1",
",",
"\"TODO..... | https://github.com/rwth-i6/returnn/blob/f2d718a197a280b0d5f0fd91a7fcb8658560dddb/tools/extract_state_tying_from_dataset.py#L265-L281 | |
linxid/Machine_Learning_Study_Path | 558e82d13237114bbb8152483977806fc0c222af | Machine Learning In Action/Chapter5-LogisticRegression/venv/Lib/encodings/iso2022_kr.py | python | getregentry | () | return codecs.CodecInfo(
name='iso2022_kr',
encode=Codec().encode,
decode=Codec().decode,
incrementalencoder=IncrementalEncoder,
incrementaldecoder=IncrementalDecoder,
streamreader=StreamReader,
streamwriter=StreamWriter,
) | [] | def getregentry():
return codecs.CodecInfo(
name='iso2022_kr',
encode=Codec().encode,
decode=Codec().decode,
incrementalencoder=IncrementalEncoder,
incrementaldecoder=IncrementalDecoder,
streamreader=StreamReader,
streamwriter=StreamWriter,
) | [
"def",
"getregentry",
"(",
")",
":",
"return",
"codecs",
".",
"CodecInfo",
"(",
"name",
"=",
"'iso2022_kr'",
",",
"encode",
"=",
"Codec",
"(",
")",
".",
"encode",
",",
"decode",
"=",
"Codec",
"(",
")",
".",
"decode",
",",
"incrementalencoder",
"=",
"In... | https://github.com/linxid/Machine_Learning_Study_Path/blob/558e82d13237114bbb8152483977806fc0c222af/Machine Learning In Action/Chapter5-LogisticRegression/venv/Lib/encodings/iso2022_kr.py#L30-L39 | |||
tensorflow/models | 6b8bb0cbeb3e10415c7a87448f08adc3c484c1d3 | research/object_detection/utils/np_box_list_ops.py | python | _copy_extra_fields | (boxlist_to_copy_to, boxlist_to_copy_from) | return boxlist_to_copy_to | Copies the extra fields of boxlist_to_copy_from to boxlist_to_copy_to.
Args:
boxlist_to_copy_to: BoxList to which extra fields are copied.
boxlist_to_copy_from: BoxList from which fields are copied.
Returns:
boxlist_to_copy_to with extra fields. | Copies the extra fields of boxlist_to_copy_from to boxlist_to_copy_to. | [
"Copies",
"the",
"extra",
"fields",
"of",
"boxlist_to_copy_from",
"to",
"boxlist_to_copy_to",
"."
] | def _copy_extra_fields(boxlist_to_copy_to, boxlist_to_copy_from):
"""Copies the extra fields of boxlist_to_copy_from to boxlist_to_copy_to.
Args:
boxlist_to_copy_to: BoxList to which extra fields are copied.
boxlist_to_copy_from: BoxList from which fields are copied.
Returns:
boxlist_to_copy_to with extra fields.
"""
for field in boxlist_to_copy_from.get_extra_fields():
boxlist_to_copy_to.add_field(field, boxlist_to_copy_from.get_field(field))
return boxlist_to_copy_to | [
"def",
"_copy_extra_fields",
"(",
"boxlist_to_copy_to",
",",
"boxlist_to_copy_from",
")",
":",
"for",
"field",
"in",
"boxlist_to_copy_from",
".",
"get_extra_fields",
"(",
")",
":",
"boxlist_to_copy_to",
".",
"add_field",
"(",
"field",
",",
"boxlist_to_copy_from",
".",... | https://github.com/tensorflow/models/blob/6b8bb0cbeb3e10415c7a87448f08adc3c484c1d3/research/object_detection/utils/np_box_list_ops.py#L545-L557 | |
AppScale/gts | 46f909cf5dc5ba81faf9d81dc9af598dcf8a82a9 | AppServer/lib/django-1.3/django/contrib/gis/db/models/query.py | python | GeoQuerySet.svg | (self, relative=False, precision=8, **kwargs) | return self._spatial_attribute('svg', s, **kwargs) | Returns SVG representation of the geographic field in a `svg`
attribute on each element of this GeoQuerySet.
Keyword Arguments:
`relative` => If set to True, this will evaluate the path in
terms of relative moves (rather than absolute).
`precision` => May be used to set the maximum number of decimal
digits used in output (defaults to 8). | Returns SVG representation of the geographic field in a `svg`
attribute on each element of this GeoQuerySet. | [
"Returns",
"SVG",
"representation",
"of",
"the",
"geographic",
"field",
"in",
"a",
"svg",
"attribute",
"on",
"each",
"element",
"of",
"this",
"GeoQuerySet",
"."
] | def svg(self, relative=False, precision=8, **kwargs):
"""
Returns SVG representation of the geographic field in a `svg`
attribute on each element of this GeoQuerySet.
Keyword Arguments:
`relative` => If set to True, this will evaluate the path in
terms of relative moves (rather than absolute).
`precision` => May be used to set the maximum number of decimal
digits used in output (defaults to 8).
"""
relative = int(bool(relative))
if not isinstance(precision, (int, long)):
raise TypeError('SVG precision keyword argument must be an integer.')
s = {'desc' : 'SVG',
'procedure_fmt' : '%(geo_col)s,%(rel)s,%(precision)s',
'procedure_args' : {'rel' : relative,
'precision' : precision,
}
}
return self._spatial_attribute('svg', s, **kwargs) | [
"def",
"svg",
"(",
"self",
",",
"relative",
"=",
"False",
",",
"precision",
"=",
"8",
",",
"*",
"*",
"kwargs",
")",
":",
"relative",
"=",
"int",
"(",
"bool",
"(",
"relative",
")",
")",
"if",
"not",
"isinstance",
"(",
"precision",
",",
"(",
"int",
... | https://github.com/AppScale/gts/blob/46f909cf5dc5ba81faf9d81dc9af598dcf8a82a9/AppServer/lib/django-1.3/django/contrib/gis/db/models/query.py#L339-L360 | |
demisto/content | 5c664a65b992ac8ca90ac3f11b1b2cdf11ee9b07 | Packs/GoogleVault/Integrations/GoogleVault/GoogleVault.py | python | create_groups_export_query | (export_name, emails, time_frame, start_time, end_time, terms, search_method,
export_pst='True', export_mbox='False', data_scope='All Data') | return request | Creates the query that will be used in the request to create a groups export | Creates the query that will be used in the request to create a groups export | [
"Creates",
"the",
"query",
"that",
"will",
"be",
"used",
"in",
"the",
"request",
"to",
"create",
"a",
"groups",
"export"
] | def create_groups_export_query(export_name, emails, time_frame, start_time, end_time, terms, search_method,
export_pst='True', export_mbox='False', data_scope='All Data'):
"""
Creates the query that will be used in the request to create a groups export
"""
# --- Sanitizing Input ---
if time_frame:
start_time, end_time = timeframe_to_utc_zulu_range(time_frame) # Making it UTC Zulu format
elif start_time:
if not end_time:
end_time = datetime.utcnow().isoformat() + 'Z' # End time will be now, if no end time was given
if isinstance(emails, (str, unicode)):
if ',' in emails:
emails = emails.split(',')
else:
emails = [emails]
if data_scope.upper() == 'HELD DATA':
data_scope = 'HELD_DATA'
if data_scope.upper() == 'ALL DATA':
data_scope = 'ALL_DATA'
if data_scope.upper() == 'UNPROCESSED DATA':
data_scope = 'UNPROCESSED_DATA'
# --- Building Request ---
request = {}
query = {}
emails_for_query = []
account_info = {'emails': []} # type: Dict[Any, Any]
corpus = 'GROUPS'
export_format = 'PST' # Default
if export_mbox.upper() == 'TRUE':
export_format = 'MBOX'
groups_options = {
'exportFormat': export_format
}
# --- Building all small parts into big request object ---
query['dataScope'] = data_scope
query['searchMethod'] = search_method
query['corpus'] = corpus
if start_time and end_time:
query['startTime'] = start_time
query['endTime'] = end_time
if terms:
query['terms'] = terms
if emails: # If user specified emails
for email in emails: # Go over all of them
emails_for_query.append(email) # Add them to the list
account_info['emails'] = emails_for_query # Add the list to the account_info dictionary
query['accountInfo'] = account_info # Add the account_info dictionary into the query object
request['query'] = query # Adding query AFTER IT'S COMPLETED
request['exportOptions'] = {'groupsOptions': groups_options}
request['name'] = export_name
return request | [
"def",
"create_groups_export_query",
"(",
"export_name",
",",
"emails",
",",
"time_frame",
",",
"start_time",
",",
"end_time",
",",
"terms",
",",
"search_method",
",",
"export_pst",
"=",
"'True'",
",",
"export_mbox",
"=",
"'False'",
",",
"data_scope",
"=",
"'All... | https://github.com/demisto/content/blob/5c664a65b992ac8ca90ac3f11b1b2cdf11ee9b07/Packs/GoogleVault/Integrations/GoogleVault/GoogleVault.py#L388-L441 | |
LexPredict/lexpredict-contraxsuite | 1d5a2540d31f8f3f1adc442cfa13a7c007319899 | sdk/python/sdk/openapi_client/model/update_project_documents_fields_request.py | python | UpdateProjectDocumentsFieldsRequest._from_openapi_data | (cls, fields_data, *args, **kwargs) | return self | UpdateProjectDocumentsFieldsRequest - a model defined in OpenAPI
Args:
fields_data ({str: (bool, date, datetime, dict, float, int, list, str, none_type)}):
Keyword Args:
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted
If omitted no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
composed schema that is
is traveled through is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
all (bool): [optional] # noqa: E501
document_ids ([int]): [optional] # noqa: E501
no_document_ids ([int]): [optional] # noqa: E501
on_existing_value (str): [optional] # noqa: E501 | UpdateProjectDocumentsFieldsRequest - a model defined in OpenAPI | [
"UpdateProjectDocumentsFieldsRequest",
"-",
"a",
"model",
"defined",
"in",
"OpenAPI"
] | def _from_openapi_data(cls, fields_data, *args, **kwargs): # noqa: E501
"""UpdateProjectDocumentsFieldsRequest - a model defined in OpenAPI
Args:
fields_data ({str: (bool, date, datetime, dict, float, int, list, str, none_type)}):
Keyword Args:
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted
If omitted no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
composed schema that is
is traveled through is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
all (bool): [optional] # noqa: E501
document_ids ([int]): [optional] # noqa: E501
no_document_ids ([int]): [optional] # noqa: E501
on_existing_value (str): [optional] # noqa: E501
"""
_check_type = kwargs.pop('_check_type', True)
_spec_property_naming = kwargs.pop('_spec_property_naming', False)
_path_to_item = kwargs.pop('_path_to_item', ())
_configuration = kwargs.pop('_configuration', None)
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
self = super(OpenApiModel, cls).__new__(cls)
if args:
raise ApiTypeError(
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
args,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
self._data_store = {}
self._check_type = _check_type
self._spec_property_naming = _spec_property_naming
self._path_to_item = _path_to_item
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
self.fields_data = fields_data
for var_name, var_value in kwargs.items():
if var_name not in self.attribute_map and \
self._configuration is not None and \
self._configuration.discard_unknown_keys and \
self.additional_properties_type is None:
# discard variable.
continue
setattr(self, var_name, var_value)
return self | [
"def",
"_from_openapi_data",
"(",
"cls",
",",
"fields_data",
",",
"*",
"args",
",",
"*",
"*",
"kwargs",
")",
":",
"# noqa: E501",
"_check_type",
"=",
"kwargs",
".",
"pop",
"(",
"'_check_type'",
",",
"True",
")",
"_spec_property_naming",
"=",
"kwargs",
".",
... | https://github.com/LexPredict/lexpredict-contraxsuite/blob/1d5a2540d31f8f3f1adc442cfa13a7c007319899/sdk/python/sdk/openapi_client/model/update_project_documents_fields_request.py#L111-L188 | |
PaddlePaddle/PaddleDetection | 635e3e0a80f3d05751cdcfca8af04ee17c601a92 | static/ppdet/utils/bbox_utils.py | python | bbox_overlaps | (boxes_1, boxes_2) | return ovr | bbox_overlaps
boxes_1: x1, y, x2, y2
boxes_2: x1, y, x2, y2 | bbox_overlaps
boxes_1: x1, y, x2, y2
boxes_2: x1, y, x2, y2 | [
"bbox_overlaps",
"boxes_1",
":",
"x1",
"y",
"x2",
"y2",
"boxes_2",
":",
"x1",
"y",
"x2",
"y2"
] | def bbox_overlaps(boxes_1, boxes_2):
'''
bbox_overlaps
boxes_1: x1, y, x2, y2
boxes_2: x1, y, x2, y2
'''
assert boxes_1.shape[1] == 4 and boxes_2.shape[1] == 4
num_1 = boxes_1.shape[0]
num_2 = boxes_2.shape[0]
x1_1 = boxes_1[:, 0:1]
y1_1 = boxes_1[:, 1:2]
x2_1 = boxes_1[:, 2:3]
y2_1 = boxes_1[:, 3:4]
area_1 = (x2_1 - x1_1 + 1) * (y2_1 - y1_1 + 1)
x1_2 = boxes_2[:, 0].transpose()
y1_2 = boxes_2[:, 1].transpose()
x2_2 = boxes_2[:, 2].transpose()
y2_2 = boxes_2[:, 3].transpose()
area_2 = (x2_2 - x1_2 + 1) * (y2_2 - y1_2 + 1)
xx1 = np.maximum(x1_1, x1_2)
yy1 = np.maximum(y1_1, y1_2)
xx2 = np.minimum(x2_1, x2_2)
yy2 = np.minimum(y2_1, y2_2)
w = np.maximum(0.0, xx2 - xx1 + 1)
h = np.maximum(0.0, yy2 - yy1 + 1)
inter = w * h
ovr = inter / (area_1 + area_2 - inter)
return ovr | [
"def",
"bbox_overlaps",
"(",
"boxes_1",
",",
"boxes_2",
")",
":",
"assert",
"boxes_1",
".",
"shape",
"[",
"1",
"]",
"==",
"4",
"and",
"boxes_2",
".",
"shape",
"[",
"1",
"]",
"==",
"4",
"num_1",
"=",
"boxes_1",
".",
"shape",
"[",
"0",
"]",
"num_2",
... | https://github.com/PaddlePaddle/PaddleDetection/blob/635e3e0a80f3d05751cdcfca8af04ee17c601a92/static/ppdet/utils/bbox_utils.py#L27-L60 | |
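The `bbox_overlaps` row above computes a pairwise IoU matrix under the inclusive-pixel convention (widths are `x2 - x1 + 1`). A minimal self-contained sketch of the same broadcasting pattern (the function name here is chosen for illustration):

```python
import numpy as np

def pairwise_iou(boxes_1, boxes_2):
    """IoU of every box in boxes_1 (N, 4) against every box in boxes_2 (M, 4).

    Boxes are [x1, y1, x2, y2] under the inclusive-pixel convention,
    so a box covering pixels 0..9 has width 10.
    """
    area_1 = (boxes_1[:, 2] - boxes_1[:, 0] + 1) * (boxes_1[:, 3] - boxes_1[:, 1] + 1)
    area_2 = (boxes_2[:, 2] - boxes_2[:, 0] + 1) * (boxes_2[:, 3] - boxes_2[:, 1] + 1)
    # broadcast (N, 1) against (M,) to get an (N, M) intersection grid
    xx1 = np.maximum(boxes_1[:, 0:1], boxes_2[:, 0])
    yy1 = np.maximum(boxes_1[:, 1:2], boxes_2[:, 1])
    xx2 = np.minimum(boxes_1[:, 2:3], boxes_2[:, 2])
    yy2 = np.minimum(boxes_1[:, 3:4], boxes_2[:, 3])
    inter = np.maximum(0.0, xx2 - xx1 + 1) * np.maximum(0.0, yy2 - yy1 + 1)
    return inter / (area_1[:, None] + area_2 - inter)
```

Identical boxes give IoU 1.0; a pair overlapping in half of one axis gives 1/3 under this convention.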
AI-ON/Multitask-and-Transfer-Learning | 31e0798d436e314ddbc64c4a6b935df1b2160e50 | architectures/chainer/auto_trainer.py | python | to_err_mask_image | (arr) | return (compressed * 255).astype('int8') | [] | def to_err_mask_image(arr):
maxval = np.max(arr)
minval = np.min(arr)
compressed = (arr - minval) / (maxval - minval)
return (compressed * 255).astype('int8') | [
"def",
"to_err_mask_image",
"(",
"arr",
")",
":",
"maxval",
"=",
"np",
".",
"max",
"(",
"arr",
")",
"minval",
"=",
"np",
".",
"min",
"(",
"arr",
")",
"compressed",
"=",
"(",
"arr",
"-",
"minval",
")",
"/",
"(",
"maxval",
"-",
"minval",
")",
"retu... | https://github.com/AI-ON/Multitask-and-Transfer-Learning/blob/31e0798d436e314ddbc64c4a6b935df1b2160e50/architectures/chainer/auto_trainer.py#L32-L36 | |||
rhinstaller/anaconda | 63edc8680f1b05cbfe11bef28703acba808c5174 | pyanaconda/ui/gui/helpers.py | python | GUISpokeInputCheckHandler.password | (self) | return self.password_entry.get_text() | Input to be checked.
Content of the input field, etc.
:returns: input to be checked
:rtype: str | Input to be checked. | [
"Input",
"to",
"be",
"checked",
"."
] | def password(self):
"""Input to be checked.
Content of the input field, etc.
:returns: input to be checked
:rtype: str
"""
return self.password_entry.get_text() | [
"def",
"password",
"(",
"self",
")",
":",
"return",
"self",
".",
"password_entry",
".",
"get_text",
"(",
")"
] | https://github.com/rhinstaller/anaconda/blob/63edc8680f1b05cbfe11bef28703acba808c5174/pyanaconda/ui/gui/helpers.py#L269-L277 | |
CATIA-Systems/FMPy | fde192346c36eb69dbaca60a96e80cdc8ef37b89 | fmpy/fmi1.py | python | _FMU1._fmi1Function | (self, fname, argnames, argtypes, restype=fmi1Status) | Add an FMI 1.0 function to this instance and add a wrapper that allows
logging and checks the return code if the return type is fmi1Status
Parameters:
fname the name of the function (without 'fmi' prefix)
argnames names of the arguments
argtypes types of the arguments
restype return type | Add an FMI 1.0 function to this instance and add a wrapper that allows
logging and checks the return code if the return type is fmi1Status | [
"Add",
"an",
"FMI",
"1",
".",
"0",
"function",
"to",
"this",
"instance",
"and",
"add",
"a",
"wrapper",
"that",
"allows",
"logging",
"and",
"checks",
"the",
"return",
"code",
"if",
"the",
"return",
"type",
"if",
"fmi1Status"
] | def _fmi1Function(self, fname, argnames, argtypes, restype=fmi1Status):
""" Add an FMI 1.0 function to this instance and add a wrapper that allows
logging and checks the return code if the return type is fmi1Status
Parameters:
fname the name of the function (without 'fmi' prefix)
argnames names of the arguments
argtypes types of the arguments
restype return type
"""
# get the exported function form the shared library
f = getattr(self.dll, self.modelIdentifier + '_fmi' + fname)
f.argtypes = argtypes
f.restype = restype
def w(*args):
""" Wrapper function for the FMI call """
# call the FMI function
res = f(*args)
if self.fmiCallLogger is not None:
# log the call
self._log_fmi_args('fmi' + fname, argnames, argtypes, args, restype, res)
if restype == fmi1Status:
# check the status code
if res > fmi1Warning:
raise FMICallException(function=fname, status=res)
return res
# add the function to the instance
setattr(self, 'fmi1' + fname, w) | [
"def",
"_fmi1Function",
"(",
"self",
",",
"fname",
",",
"argnames",
",",
"argtypes",
",",
"restype",
"=",
"fmi1Status",
")",
":",
"# get the exported function form the shared library",
"f",
"=",
"getattr",
"(",
"self",
".",
"dll",
",",
"self",
".",
"modelIdentif... | https://github.com/CATIA-Systems/FMPy/blob/fde192346c36eb69dbaca60a96e80cdc8ef37b89/fmpy/fmi1.py#L300-L334 | ||
wistbean/fxxkpython | 88e16d79d8dd37236ba6ecd0d0ff11d63143968c | vip/qyxuan/projects/venv/lib/python3.6/site-packages/pip-19.0.3-py3.6.egg/pip/_internal/req/req_install.py | python | InstallRequirement.remove_temporary_source | (self) | Remove the source files from this requirement, if they are marked
for deletion | Remove the source files from this requirement, if they are marked
for deletion | [
"Remove",
"the",
"source",
"files",
"from",
"this",
"requirement",
"if",
"they",
"are",
"marked",
"for",
"deletion"
] | def remove_temporary_source(self):
# type: () -> None
"""Remove the source files from this requirement, if they are marked
for deletion"""
if self.source_dir and os.path.exists(
os.path.join(self.source_dir, PIP_DELETE_MARKER_FILENAME)):
logger.debug('Removing source in %s', self.source_dir)
rmtree(self.source_dir)
self.source_dir = None
self._temp_build_dir.cleanup()
self.build_env.cleanup() | [
"def",
"remove_temporary_source",
"(",
"self",
")",
":",
"# type: () -> None",
"if",
"self",
".",
"source_dir",
"and",
"os",
".",
"path",
".",
"exists",
"(",
"os",
".",
"path",
".",
"join",
"(",
"self",
".",
"source_dir",
",",
"PIP_DELETE_MARKER_FILENAME",
"... | https://github.com/wistbean/fxxkpython/blob/88e16d79d8dd37236ba6ecd0d0ff11d63143968c/vip/qyxuan/projects/venv/lib/python3.6/site-packages/pip-19.0.3-py3.6.egg/pip/_internal/req/req_install.py#L364-L374 | ||
ansible-collections/community.general | 3faffe8f47968a2400ba3c896c8901c03001a194 | plugins/modules/packaging/os/yum_versionlock.py | python | main | () | start main program to add/remove a package to yum versionlock | start main program to add/remove a package to yum versionlock | [
"start",
"main",
"program",
"to",
"add",
"/",
"remove",
"a",
"package",
"to",
"yum",
"versionlock"
] | def main():
""" start main program to add/remove a package to yum versionlock"""
module = AnsibleModule(
argument_spec=dict(
state=dict(default='present', choices=['present', 'absent']),
name=dict(required=True, type='list', elements='str'),
),
supports_check_mode=True
)
state = module.params['state']
packages = module.params['name']
changed = False
yum_v = YumVersionLock(module)
# Get an overview of all packages that have a version lock
versionlock_packages = yum_v.get_versionlock_packages()
# Ensure versionlock state of packages
packages_list = []
if state in ('present'):
command = 'add'
for single_pkg in packages:
if not any(fnmatch(pkg.split(":", 1)[-1], single_pkg) for pkg in versionlock_packages.split()):
packages_list.append(single_pkg)
if packages_list:
if module.check_mode:
changed = True
else:
changed = yum_v.ensure_state(packages_list, command)
elif state in ('absent'):
command = 'delete'
for single_pkg in packages:
if any(fnmatch(pkg, single_pkg) for pkg in versionlock_packages.split()):
packages_list.append(single_pkg)
if packages_list:
if module.check_mode:
changed = True
else:
changed = yum_v.ensure_state(packages_list, command)
module.exit_json(
changed=changed,
meta={
"packages": packages,
"state": state
}
) | [
"def",
"main",
"(",
")",
":",
"module",
"=",
"AnsibleModule",
"(",
"argument_spec",
"=",
"dict",
"(",
"state",
"=",
"dict",
"(",
"default",
"=",
"'present'",
",",
"choices",
"=",
"[",
"'present'",
",",
"'absent'",
"]",
")",
",",
"name",
"=",
"dict",
... | https://github.com/ansible-collections/community.general/blob/3faffe8f47968a2400ba3c896c8901c03001a194/plugins/modules/packaging/os/yum_versionlock.py#L105-L153 | ||
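The module above matches requested package globs against `versionlock` entries, stripping the leading epoch prefix (`0:`, `1:`, ...) before comparing. That check on its own:

```python
from fnmatch import fnmatch

def is_locked(lock_entries, requested):
    """True if any lock entry (minus its epoch prefix) matches the glob."""
    return any(
        fnmatch(entry.split(':', 1)[-1], requested)
        for entry in lock_entries
    )
```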
gkrizek/bash-lambda-layer | 703b0ade8174022d44779d823172ab7ac33a5505 | bin/s3transfer/bandwidth.py | python | BandwidthRateTracker.current_rate | (self) | return self._current_rate | The current transfer rate
:rtype: float
:returns: The current tracked transfer rate | The current transfer rate | [
"The",
"current",
"transfer",
"rate"
] | def current_rate(self):
"""The current transfer rate
:rtype: float
:returns: The current tracked transfer rate
"""
if self._last_time is None:
return 0.0
return self._current_rate | [
"def",
"current_rate",
"(",
"self",
")",
":",
"if",
"self",
".",
"_last_time",
"is",
"None",
":",
"return",
"0.0",
"return",
"self",
".",
"_current_rate"
] | https://github.com/gkrizek/bash-lambda-layer/blob/703b0ade8174022d44779d823172ab7ac33a5505/bin/s3transfer/bandwidth.py#L359-L367 | |
pypa/pipenv | b21baade71a86ab3ee1429f71fbc14d4f95fb75d | pipenv/vendor/attr/_cmp.py | python | _is_comparable_to | (self, other) | return True | Check whether `other` is comparable to `self`. | Check whether `other` is comparable to `self`. | [
"Check",
"whether",
"other",
"is",
"comparable",
"to",
"self",
"."
] | def _is_comparable_to(self, other):
"""
Check whether `other` is comparable to `self`.
"""
for func in self._requirements:
if not func(self, other):
return False
return True | [
"def",
"_is_comparable_to",
"(",
"self",
",",
"other",
")",
":",
"for",
"func",
"in",
"self",
".",
"_requirements",
":",
"if",
"not",
"func",
"(",
"self",
",",
"other",
")",
":",
"return",
"False",
"return",
"True"
] | https://github.com/pypa/pipenv/blob/b21baade71a86ab3ee1429f71fbc14d4f95fb75d/pipenv/vendor/attr/_cmp.py#L138-L145 | |
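`_is_comparable_to` above simply and-folds a tuple of requirement predicates over the two operands. The same idea without the attrs plumbing (names here are illustrative):

```python
def comparable_checker(*requirements):
    """Build a check that passes only when every requirement predicate does."""
    def is_comparable_to(a, b):
        return all(req(a, b) for req in requirements)
    return is_comparable_to

# e.g. require the two operands to share an exact type
same_type = comparable_checker(lambda a, b: type(a) is type(b))
```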
vlachoudis/bCNC | 67126b4894dabf6579baf47af8d0f9b7de35e6e3 | bCNC/lib/dxf.py | python | DXF.write | (self, tag, value) | Write one tag,value pair | Write one tag,value pair | [
"Write",
"one",
"tag",
"value",
"pair"
] | def write(self, tag, value):
"""Write one tag,value pair"""
self._f.write("%d\n%s\n"%(tag,str(value))) | [
"def",
"write",
"(",
"self",
",",
"tag",
",",
"value",
")",
":",
"self",
".",
"_f",
".",
"write",
"(",
"\"%d\\n%s\\n\"",
"%",
"(",
"tag",
",",
"str",
"(",
"value",
")",
")",
")"
] | https://github.com/vlachoudis/bCNC/blob/67126b4894dabf6579baf47af8d0f9b7de35e6e3/bCNC/lib/dxf.py#L1221-L1223 | ||
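DXF files are streams of group-code/value pairs, one per line, which is exactly what the `write` method above emits. A sketch against an in-memory stream:

```python
import io

class TagStream:
    """Minimal group-code/value writer in the same line format as DXF.write."""
    def __init__(self, stream):
        self._f = stream
    def write(self, tag, value):
        self._f.write('%d\n%s\n' % (tag, str(value)))

buf = io.StringIO()
out = TagStream(buf)
out.write(0, 'SECTION')   # group code 0: entity/section start
out.write(2, 'HEADER')    # group code 2: name
```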
numba/numba | bf480b9e0da858a65508c2b17759a72ee6a44c51 | numba/parfors/array_analysis.py | python | ArrayAnalysis._isarray | (self, varname) | return isinstance(typ, types.npytypes.Array) and typ.ndim > 0 | [] | def _isarray(self, varname):
typ = self.typemap[varname]
return isinstance(typ, types.npytypes.Array) and typ.ndim > 0 | [
"def",
"_isarray",
"(",
"self",
",",
"varname",
")",
":",
"typ",
"=",
"self",
".",
"typemap",
"[",
"varname",
"]",
"return",
"isinstance",
"(",
"typ",
",",
"types",
".",
"npytypes",
".",
"Array",
")",
"and",
"typ",
".",
"ndim",
">",
"0"
] | https://github.com/numba/numba/blob/bf480b9e0da858a65508c2b17759a72ee6a44c51/numba/parfors/array_analysis.py#L3219-L3221 | |||
misterch0c/shadowbroker | e3a069bea47a2c1009697941ac214adc6f90aa8d | windows/Resources/Python/Core/Lib/lib-tk/Tix.py | python | tixCommand.tix_getbitmap | (self, name) | return self.tk.call('tix', 'getbitmap', name) | Locates a bitmap file of the name name.xpm or name in one of the
bitmap directories (see the tix_addbitmapdir command above). By
using tix_getbitmap, you can avoid hard coding the pathnames of the
bitmap files in your application. When successful, it returns the
complete pathname of the bitmap file, prefixed with the character
'@'. The returned value can be used to configure the -bitmap
option of the TK and Tix widgets. | Locates a bitmap file of the name name.xpm or name in one of the
bitmap directories (see the tix_addbitmapdir command above). By
using tix_getbitmap, you can avoid hard coding the pathnames of the
bitmap files in your application. When successful, it returns the
complete pathname of the bitmap file, prefixed with the character
' | [
"Locates",
"a",
"bitmap",
"file",
"of",
"the",
"name",
"name",
".",
"xpm",
"or",
"name",
"in",
"one",
"of",
"the",
"bitmap",
"directories",
"(",
"see",
"the",
"tix_addbitmapdir",
"command",
"above",
")",
".",
"By",
"using",
"tix_getbitmap",
"you",
"can",
... | def tix_getbitmap(self, name):
"""Locates a bitmap file of the name name.xpm or name in one of the
bitmap directories (see the tix_addbitmapdir command above). By
using tix_getbitmap, you can avoid hard coding the pathnames of the
bitmap files in your application. When successful, it returns the
complete pathname of the bitmap file, prefixed with the character
'@'. The returned value can be used to configure the -bitmap
option of the TK and Tix widgets.
"""
return self.tk.call('tix', 'getbitmap', name) | [
"def",
"tix_getbitmap",
"(",
"self",
",",
"name",
")",
":",
"return",
"self",
".",
"tk",
".",
"call",
"(",
"'tix'",
",",
"'getbitmap'",
",",
"name",
")"
] | https://github.com/misterch0c/shadowbroker/blob/e3a069bea47a2c1009697941ac214adc6f90aa8d/windows/Resources/Python/Core/Lib/lib-tk/Tix.py#L116-L125 | |
nlloyd/SubliminalCollaborator | 5c619e17ddbe8acb9eea8996ec038169ddcd50a1 | libs/twisted/conch/ssh/userauth.py | python | SSHUserAuthClient.auth_publickey | (self) | return d | Try to authenticate with a public key. Ask the user for a public key;
if the user has one, send the request to the server and return True.
Otherwise, return False.
@rtype: C{bool} | Try to authenticate with a public key. Ask the user for a public key;
if the user has one, send the request to the server and return True.
Otherwise, return False. | [
"Try",
"to",
"authenticate",
"with",
"a",
"public",
"key",
".",
"Ask",
"the",
"user",
"for",
"a",
"public",
"key",
";",
"if",
"the",
"user",
"has",
"one",
"send",
"the",
"request",
"to",
"the",
"server",
"and",
"return",
"True",
".",
"Otherwise",
"retu... | def auth_publickey(self):
"""
Try to authenticate with a public key. Ask the user for a public key;
if the user has one, send the request to the server and return True.
Otherwise, return False.
@rtype: C{bool}
"""
d = defer.maybeDeferred(self.getPublicKey)
d.addBoth(self._cbGetPublicKey)
return d | [
"def",
"auth_publickey",
"(",
"self",
")",
":",
"d",
"=",
"defer",
".",
"maybeDeferred",
"(",
"self",
".",
"getPublicKey",
")",
"d",
".",
"addBoth",
"(",
"self",
".",
"_cbGetPublicKey",
")",
"return",
"d"
] | https://github.com/nlloyd/SubliminalCollaborator/blob/5c619e17ddbe8acb9eea8996ec038169ddcd50a1/libs/twisted/conch/ssh/userauth.py#L663-L673 | |
IronLanguages/ironpython3 | 7a7bb2a872eeab0d1009fc8a6e24dca43f65b693 | Src/StdLib/Lib/html/parser.py | python | HTMLParser.reset | (self) | Reset this instance. Loses all unprocessed data. | Reset this instance. Loses all unprocessed data. | [
"Reset",
"this",
"instance",
".",
"Loses",
"all",
"unprocessed",
"data",
"."
] | def reset(self):
"""Reset this instance. Loses all unprocessed data."""
self.rawdata = ''
self.lasttag = '???'
self.interesting = interesting_normal
self.cdata_elem = None
_markupbase.ParserBase.reset(self) | [
"def",
"reset",
"(",
"self",
")",
":",
"self",
".",
"rawdata",
"=",
"''",
"self",
".",
"lasttag",
"=",
"'???'",
"self",
".",
"interesting",
"=",
"interesting_normal",
"self",
".",
"cdata_elem",
"=",
"None",
"_markupbase",
".",
"ParserBase",
".",
"reset",
... | https://github.com/IronLanguages/ironpython3/blob/7a7bb2a872eeab0d1009fc8a6e24dca43f65b693/Src/StdLib/Lib/html/parser.py#L150-L156 | ||
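`reset` above returns an `HTMLParser` to its pristine state, discarding any buffered unprocessed input. Typical use is a small subclass that collects events, with `reset` allowing the instance to be reused:

```python
from html.parser import HTMLParser

class TagCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.tags = []
    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)

parser = TagCollector()
parser.feed('<p>hi <b>there</b></p>')
parser.reset()   # buffered parser state is cleared; our own tags list is not
```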
timkpaine/paperboy | 6c0854b2c0dad139c25153e520ca79ffed820fa4 | paperboy/resources/scheduler.py | python | SchedulerResource.on_get | (self, req, resp) | Get scheduler status of job and reports | Get scheduler status of job and reports | [
"Get",
"scheduler",
"status",
"of",
"job",
"and",
"reports"
] | def on_get(self, req, resp):
'''Get scheduler status of job and reports'''
resp.content_type = 'application/json'
resp.body = json.dumps(self.scheduler.status(req.context['user'], req.params, self.session)) | [
"def",
"on_get",
"(",
"self",
",",
"req",
",",
"resp",
")",
":",
"resp",
".",
"content_type",
"=",
"'application/json'",
"resp",
".",
"body",
"=",
"json",
".",
"dumps",
"(",
"self",
".",
"scheduler",
".",
"status",
"(",
"req",
".",
"context",
"[",
"'... | https://github.com/timkpaine/paperboy/blob/6c0854b2c0dad139c25153e520ca79ffed820fa4/paperboy/resources/scheduler.py#L9-L12 | ||
kamalgill/flask-appengine-template | 11760f83faccbb0d0afe416fc58e67ecfb4643c2 | src/pkg_resources.py | python | WorkingSet.iter_entry_points | (self, group, name=None) | Yield entry point objects from `group` matching `name`
If `name` is None, yields all entry points in `group` from all
distributions in the working set, otherwise only ones matching
both `group` and `name` are yielded (in distribution order). | Yield entry point objects from `group` matching `name` | [
"Yield",
"entry",
"point",
"objects",
"from",
"group",
"matching",
"name"
] | def iter_entry_points(self, group, name=None):
"""Yield entry point objects from `group` matching `name`
If `name` is None, yields all entry points in `group` from all
distributions in the working set, otherwise only ones matching
both `group` and `name` are yielded (in distribution order).
"""
for dist in self:
entries = dist.get_entry_map(group)
if name is None:
for ep in entries.values():
yield ep
elif name in entries:
yield entries[name] | [
"def",
"iter_entry_points",
"(",
"self",
",",
"group",
",",
"name",
"=",
"None",
")",
":",
"for",
"dist",
"in",
"self",
":",
"entries",
"=",
"dist",
".",
"get_entry_map",
"(",
"group",
")",
"if",
"name",
"is",
"None",
":",
"for",
"ep",
"in",
"entries... | https://github.com/kamalgill/flask-appengine-template/blob/11760f83faccbb0d0afe416fc58e67ecfb4643c2/src/pkg_resources.py#L440-L453 | ||
admintony/Prepare-for-AWD | 7f74fd85428eb14e4881002d9c9098c1a5048670 | attack_python/awd_attack.py | python | loadfile | (filepath) | [] | def loadfile(filepath):
try :
file = open(filepath,"rb")
return str(file.read())
except :
print "File %s Not Found!" %filepath
sys.exit() | [
"def",
"loadfile",
"(",
"filepath",
")",
":",
"try",
":",
"file",
"=",
"open",
"(",
"filepath",
",",
"\"rb\"",
")",
"return",
"str",
"(",
"file",
".",
"read",
"(",
")",
")",
"except",
":",
"print",
"\"File %s Not Found!\"",
"%",
"filepath",
"sys",
".",... | https://github.com/admintony/Prepare-for-AWD/blob/7f74fd85428eb14e4881002d9c9098c1a5048670/attack_python/awd_attack.py#L6-L12 | ||||
nussl/nussl | 471e7965c5788bff9fe2e1f7884537cae2d18e6f | nussl/separation/primitive/repet.py | python | Repet.compute_beat_spectrum | (power_spectrogram) | return beat_spectrum | Computes the beat spectrum averages (over freq's) the autocorrelation matrix of a one-sided spectrogram.
The autocorrelation matrix is computed by taking the autocorrelation of each row of the spectrogram and
dismissing the symmetric half.
Args:
power_spectrogram (:obj:`np.array`): 2D matrix containing the one-sided power spectrogram of an audio signal
Returns:
(:obj:`np.array`): array containing the beat spectrum based on the power spectrogram
See Also:
J Foote's original derivation of the Beat Spectrum:
Foote, Jonathan, and Shingo Uchihashi. "The beat spectrum: A new approach to rhythm analysis."
Multimedia and Expo, 2001. ICME 2001. IEEE International Conference on. IEEE, 2001.
(`See PDF here <http://rotorbrain.com/foote/papers/icme2001.pdf>`_) | Computes the beat spectrum averages (over freq's) the autocorrelation matrix of a one-sided spectrogram. | [
"Computes",
"the",
"beat",
"spectrum",
"averages",
"(",
"over",
"freq",
"s",
")",
"the",
"autocorrelation",
"matrix",
"of",
"a",
"one",
"-",
"sided",
"spectrogram",
"."
] | def compute_beat_spectrum(power_spectrogram):
""" Computes the beat spectrum averages (over freq's) the autocorrelation matrix of a one-sided spectrogram.
The autocorrelation matrix is computed by taking the autocorrelation of each row of the spectrogram and
dismissing the symmetric half.
Args:
power_spectrogram (:obj:`np.array`): 2D matrix containing the one-sided power spectrogram of an audio signal
Returns:
(:obj:`np.array`): array containing the beat spectrum based on the power spectrogram
See Also:
J Foote's original derivation of the Beat Spectrum:
Foote, Jonathan, and Shingo Uchihashi. "The beat spectrum: A new approach to rhythm analysis."
Multimedia and Expo, 2001. ICME 2001. IEEE International Conference on. IEEE, 2001.
(`See PDF here <http://rotorbrain.com/foote/papers/icme2001.pdf>`_)
"""
freq_bins, time_bins = power_spectrogram.shape
# row-wise autocorrelation according to the Wiener-Khinchin theorem
power_spectrogram = np.vstack([power_spectrogram, np.zeros_like(power_spectrogram)])
nearest_power_of_two = 2 ** np.ceil(np.log(power_spectrogram.shape[0]) / np.log(2))
pad_amount = int(nearest_power_of_two - power_spectrogram.shape[0])
power_spectrogram = np.pad(power_spectrogram, ((0, pad_amount), (0, 0)), 'constant')
fft_power_spec = scifft.fft(power_spectrogram, axis=0)
abs_fft = np.abs(fft_power_spec) ** 2
autocorrelation_rows = np.real(
scifft.ifft(abs_fft, axis=0)[:freq_bins, :]) # ifft over columns
# normalization factor
norm_factor = np.tile(np.arange(freq_bins, 0, -1), (time_bins, 1)).T
autocorrelation_rows = autocorrelation_rows / norm_factor
# compute the beat spectrum
beat_spectrum = np.mean(autocorrelation_rows, axis=1)
# average over frequencies
return beat_spectrum | [
"def",
"compute_beat_spectrum",
"(",
"power_spectrogram",
")",
":",
"freq_bins",
",",
"time_bins",
"=",
"power_spectrogram",
".",
"shape",
"# row-wise autocorrelation according to the Wiener-Khinchin theorem",
"power_spectrogram",
"=",
"np",
".",
"vstack",
"(",
"[",
"power_... | https://github.com/nussl/nussl/blob/471e7965c5788bff9fe2e1f7884537cae2d18e6f/nussl/separation/primitive/repet.py#L170-L210 | |
caiiiac/Machine-Learning-with-Python | 1a26c4467da41ca4ebc3d5bd789ea942ef79422f | MachineLearning/venv/lib/python3.5/site-packages/numpy/polynomial/hermite.py | python | hermfit | (x, y, deg, rcond=None, full=False, w=None) | Least squares fit of Hermite series to data.
Return the coefficients of a Hermite series of degree `deg` that is the
least squares fit to the data values `y` given at points `x`. If `y` is
1-D the returned coefficients will also be 1-D. If `y` is 2-D multiple
fits are done, one for each column of `y`, and the resulting
coefficients are stored in the corresponding columns of a 2-D return.
The fitted polynomial(s) are in the form
.. math:: p(x) = c_0 + c_1 * H_1(x) + ... + c_n * H_n(x),
where `n` is `deg`.
Parameters
----------
x : array_like, shape (M,)
x-coordinates of the M sample points ``(x[i], y[i])``.
y : array_like, shape (M,) or (M, K)
y-coordinates of the sample points. Several data sets of sample
points sharing the same x-coordinates can be fitted at once by
passing in a 2D-array that contains one dataset per column.
deg : int or 1-D array_like
Degree(s) of the fitting polynomials. If `deg` is a single integer
all terms up to and including the `deg`'th term are included in the
fit. For NumPy versions >= 1.11.0 a list of integers specifying the
degrees of the terms to include may be used instead.
rcond : float, optional
Relative condition number of the fit. Singular values smaller than
this relative to the largest singular value will be ignored. The
default value is len(x)*eps, where eps is the relative precision of
the float type, about 2e-16 in most cases.
full : bool, optional
Switch determining nature of return value. When it is False (the
default) just the coefficients are returned, when True diagnostic
information from the singular value decomposition is also returned.
w : array_like, shape (`M`,), optional
Weights. If not None, the contribution of each point
``(x[i],y[i])`` to the fit is weighted by `w[i]`. Ideally the
weights are chosen so that the errors of the products ``w[i]*y[i]``
all have the same variance. The default value is None.
Returns
-------
coef : ndarray, shape (M,) or (M, K)
Hermite coefficients ordered from low to high. If `y` was 2-D,
the coefficients for the data in column k of `y` are in column
`k`.
[residuals, rank, singular_values, rcond] : list
These values are only returned if `full` = True
resid -- sum of squared residuals of the least squares fit
rank -- the numerical rank of the scaled Vandermonde matrix
sv -- singular values of the scaled Vandermonde matrix
rcond -- value of `rcond`.
For more details, see `linalg.lstsq`.
Warns
-----
RankWarning
The rank of the coefficient matrix in the least-squares fit is
deficient. The warning is only raised if `full` = False. The
warnings can be turned off by
>>> import warnings
>>> warnings.simplefilter('ignore', RankWarning)
See Also
--------
chebfit, legfit, lagfit, polyfit, hermefit
hermval : Evaluates a Hermite series.
hermvander : Vandermonde matrix of Hermite series.
hermweight : Hermite weight function
linalg.lstsq : Computes a least-squares fit from the matrix.
scipy.interpolate.UnivariateSpline : Computes spline fits.
Notes
-----
The solution is the coefficients of the Hermite series `p` that
minimizes the sum of the weighted squared errors
.. math:: E = \\sum_j w_j^2 * |y_j - p(x_j)|^2,
where the :math:`w_j` are the weights. This problem is solved by
setting up the (typically) overdetermined matrix equation
.. math:: V(x) * c = w * y,
where `V` is the weighted pseudo Vandermonde matrix of `x`, `c` are the
coefficients to be solved for, `w` are the weights, `y` are the
observed values. This equation is then solved using the singular value
decomposition of `V`.
If some of the singular values of `V` are so small that they are
neglected, then a `RankWarning` will be issued. This means that the
coefficient values may be poorly determined. Using a lower order fit
will usually get rid of the warning. The `rcond` parameter can also be
set to a value smaller than its default, but the resulting fit may be
spurious and have large contributions from roundoff error.
Fits using Hermite series are probably most useful when the data can be
approximated by ``sqrt(w(x)) * p(x)``, where `w(x)` is the Hermite
weight. In that case the weight ``sqrt(w(x[i])`` should be used
together with data values ``y[i]/sqrt(w(x[i])``. The weight function is
available as `hermweight`.
References
----------
.. [1] Wikipedia, "Curve fitting",
http://en.wikipedia.org/wiki/Curve_fitting
Examples
--------
>>> from numpy.polynomial.hermite import hermfit, hermval
>>> x = np.linspace(-10, 10)
>>> err = np.random.randn(len(x))/10
>>> y = hermval(x, [1, 2, 3]) + err
>>> hermfit(x, y, 2)
array([ 0.97902637, 1.99849131, 3.00006 ]) | Least squares fit of Hermite series to data. | [
"Least",
"squares",
"fit",
"of",
"Hermite",
"series",
"to",
"data",
"."
] | def hermfit(x, y, deg, rcond=None, full=False, w=None):
"""
Least squares fit of Hermite series to data.
Return the coefficients of a Hermite series of degree `deg` that is the
least squares fit to the data values `y` given at points `x`. If `y` is
1-D the returned coefficients will also be 1-D. If `y` is 2-D multiple
fits are done, one for each column of `y`, and the resulting
coefficients are stored in the corresponding columns of a 2-D return.
The fitted polynomial(s) are in the form
.. math:: p(x) = c_0 + c_1 * H_1(x) + ... + c_n * H_n(x),
where `n` is `deg`.
Parameters
----------
x : array_like, shape (M,)
x-coordinates of the M sample points ``(x[i], y[i])``.
y : array_like, shape (M,) or (M, K)
y-coordinates of the sample points. Several data sets of sample
points sharing the same x-coordinates can be fitted at once by
passing in a 2D-array that contains one dataset per column.
deg : int or 1-D array_like
Degree(s) of the fitting polynomials. If `deg` is a single integer
all terms up to and including the `deg`'th term are included in the
fit. For NumPy versions >= 1.11.0 a list of integers specifying the
degrees of the terms to include may be used instead.
rcond : float, optional
Relative condition number of the fit. Singular values smaller than
this relative to the largest singular value will be ignored. The
default value is len(x)*eps, where eps is the relative precision of
the float type, about 2e-16 in most cases.
full : bool, optional
Switch determining nature of return value. When it is False (the
default) just the coefficients are returned, when True diagnostic
information from the singular value decomposition is also returned.
w : array_like, shape (`M`,), optional
Weights. If not None, the contribution of each point
``(x[i],y[i])`` to the fit is weighted by `w[i]`. Ideally the
weights are chosen so that the errors of the products ``w[i]*y[i]``
all have the same variance. The default value is None.
Returns
-------
coef : ndarray, shape (M,) or (M, K)
Hermite coefficients ordered from low to high. If `y` was 2-D,
the coefficients for the data in column k of `y` are in column
`k`.
[residuals, rank, singular_values, rcond] : list
These values are only returned if `full` = True
resid -- sum of squared residuals of the least squares fit
rank -- the numerical rank of the scaled Vandermonde matrix
sv -- singular values of the scaled Vandermonde matrix
rcond -- value of `rcond`.
For more details, see `linalg.lstsq`.
Warns
-----
RankWarning
The rank of the coefficient matrix in the least-squares fit is
deficient. The warning is only raised if `full` = False. The
warnings can be turned off by
>>> import warnings
>>> warnings.simplefilter('ignore', RankWarning)
See Also
--------
chebfit, legfit, lagfit, polyfit, hermefit
hermval : Evaluates a Hermite series.
hermvander : Vandermonde matrix of Hermite series.
hermweight : Hermite weight function
linalg.lstsq : Computes a least-squares fit from the matrix.
scipy.interpolate.UnivariateSpline : Computes spline fits.
Notes
-----
The solution is the coefficients of the Hermite series `p` that
minimizes the sum of the weighted squared errors
.. math:: E = \\sum_j w_j^2 * |y_j - p(x_j)|^2,
where the :math:`w_j` are the weights. This problem is solved by
setting up the (typically) overdetermined matrix equation
.. math:: V(x) * c = w * y,
where `V` is the weighted pseudo Vandermonde matrix of `x`, `c` are the
coefficients to be solved for, `w` are the weights, `y` are the
observed values. This equation is then solved using the singular value
decomposition of `V`.
If some of the singular values of `V` are so small that they are
neglected, then a `RankWarning` will be issued. This means that the
coefficient values may be poorly determined. Using a lower order fit
will usually get rid of the warning. The `rcond` parameter can also be
set to a value smaller than its default, but the resulting fit may be
spurious and have large contributions from roundoff error.
Fits using Hermite series are probably most useful when the data can be
approximated by ``sqrt(w(x)) * p(x)``, where `w(x)` is the Hermite
weight. In that case the weight ``sqrt(w(x[i]))`` should be used
together with data values ``y[i]/sqrt(w(x[i]))``. The weight function is
available as `hermweight`.
References
----------
.. [1] Wikipedia, "Curve fitting",
http://en.wikipedia.org/wiki/Curve_fitting
Examples
--------
>>> from numpy.polynomial.hermite import hermfit, hermval
>>> x = np.linspace(-10, 10)
>>> err = np.random.randn(len(x))/10
>>> y = hermval(x, [1, 2, 3]) + err
>>> hermfit(x, y, 2)
array([ 0.97902637, 1.99849131, 3.00006 ])
"""
x = np.asarray(x) + 0.0
y = np.asarray(y) + 0.0
deg = np.asarray(deg)
# check arguments.
if deg.ndim > 1 or deg.dtype.kind not in 'iu' or deg.size == 0:
raise TypeError("deg must be an int or non-empty 1-D array of int")
if deg.min() < 0:
raise ValueError("expected deg >= 0")
if x.ndim != 1:
raise TypeError("expected 1D vector for x")
if x.size == 0:
raise TypeError("expected non-empty vector for x")
if y.ndim < 1 or y.ndim > 2:
raise TypeError("expected 1D or 2D array for y")
if len(x) != len(y):
raise TypeError("expected x and y to have same length")
if deg.ndim == 0:
lmax = deg
order = lmax + 1
van = hermvander(x, lmax)
else:
deg = np.sort(deg)
lmax = deg[-1]
order = len(deg)
van = hermvander(x, lmax)[:, deg]
# set up the least squares matrices in transposed form
lhs = van.T
rhs = y.T
if w is not None:
w = np.asarray(w) + 0.0
if w.ndim != 1:
raise TypeError("expected 1D vector for w")
if len(x) != len(w):
raise TypeError("expected x and w to have same length")
# apply weights. Don't use inplace operations as they
# can cause problems with NA.
lhs = lhs * w
rhs = rhs * w
# set rcond
if rcond is None:
rcond = len(x)*np.finfo(x.dtype).eps
# Determine the norms of the design matrix columns.
if issubclass(lhs.dtype.type, np.complexfloating):
scl = np.sqrt((np.square(lhs.real) + np.square(lhs.imag)).sum(1))
else:
scl = np.sqrt(np.square(lhs).sum(1))
scl[scl == 0] = 1
# Solve the least squares problem.
c, resids, rank, s = la.lstsq(lhs.T/scl, rhs.T, rcond)
c = (c.T/scl).T
# Expand c to include non-fitted coefficients which are set to zero
if deg.ndim > 0:
if c.ndim == 2:
cc = np.zeros((lmax+1, c.shape[1]), dtype=c.dtype)
else:
cc = np.zeros(lmax+1, dtype=c.dtype)
cc[deg] = c
c = cc
# warn on rank reduction
if rank != order and not full:
msg = "The fit may be poorly conditioned"
warnings.warn(msg, pu.RankWarning, stacklevel=2)
if full:
return c, [resids, rank, s, rcond]
else:
return c | [
"def",
"hermfit",
"(",
"x",
",",
"y",
",",
"deg",
",",
"rcond",
"=",
"None",
",",
"full",
"=",
"False",
",",
"w",
"=",
"None",
")",
":",
"x",
"=",
"np",
".",
"asarray",
"(",
"x",
")",
"+",
"0.0",
"y",
"=",
"np",
".",
"asarray",
"(",
"y",
... | https://github.com/caiiiac/Machine-Learning-with-Python/blob/1a26c4467da41ca4ebc3d5bd789ea942ef79422f/MachineLearning/venv/lib/python3.5/site-packages/numpy/polynomial/hermite.py#L1368-L1566 | ||
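The `hermfit` record above matches NumPy's public `numpy.polynomial.hermite` API, so the behavior its docstring describes can be checked directly; a minimal sketch, assuming NumPy is installed:

```python
import numpy as np
from numpy.polynomial.hermite import hermfit, hermval

# Sample an exact degree-2 Hermite series, so least squares can recover it.
x = np.linspace(-1, 1, 50)
y = hermval(x, [1.0, 2.0, 3.0])

# Unweighted fit: coefficients ordered from low to high degree.
coef = hermfit(x, y, 2)

# Weighted fit: w multiplies both sides of V(x)*c = w*y, as in the docstring.
w = np.ones_like(x)
coef_w = hermfit(x, y, 2, w=w)

# full=True additionally returns [resid, rank, singular_values, rcond].
coef_full, diag = hermfit(x, y, 2, full=True)
```

With uniform weights the weighted fit reduces to the unweighted one, which makes a convenient sanity check.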
kermitt2/delft | 620ddf9e55e13213d2fc9af25b9d01331256d698 | delft/textClassification/reader.py | python | vectorize | (index, size) | return result | Create a numpy array of the provided size, where value at indicated index is 1, 0 otherwise | Create a numpy array of the provided size, where value at indicated index is 1, 0 otherwise | [
"Create",
"a",
"numpy",
"array",
"of",
"the",
"provided",
"size",
"where",
"value",
"at",
"indicated",
"index",
"is",
"1",
"0",
"otherwise"
] | def vectorize(index, size):
'''
Create a numpy array of the provided size, where value at indicated index is 1, 0 otherwise
'''
result = np.zeros(size)
if index < size:
result[index] = 1
else:
print("warning: index larger than vector size: ", index, size)
return result | [
"def",
"vectorize",
"(",
"index",
",",
"size",
")",
":",
"result",
"=",
"np",
".",
"zeros",
"(",
"size",
")",
"if",
"index",
"<",
"size",
":",
"result",
"[",
"index",
"]",
"=",
"1",
"else",
":",
"print",
"(",
"\"warning: index larger than vector size: \"... | https://github.com/kermitt2/delft/blob/620ddf9e55e13213d2fc9af25b9d01331256d698/delft/textClassification/reader.py#L289-L298 | |
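The `vectorize` helper above is a plain one-hot encoder; restated as a self-contained sketch:

```python
import numpy as np

def vectorize(index, size):
    """One-hot vector of length `size`, mirroring the delft helper above:
    value at `index` is 1, everything else 0; out-of-range indices warn."""
    result = np.zeros(size)
    if index < size:
        result[index] = 1
    else:
        print("warning: index larger than vector size:", index, size)
    return result
```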
khalim19/gimp-plugin-export-layers | b37255f2957ad322f4d332689052351cdea6e563 | export_layers/pygimplib/_lib/future/future/backports/http/cookiejar.py | python | lwp_cookie_str | (cookie) | return join_header_words([h]) | Return string representation of Cookie in the LWP cookie file format.
Actually, the format is extended a bit -- see module docstring. | Return string representation of Cookie in an the LWP cookie file format. | [
"Return",
"string",
"representation",
"of",
"Cookie",
"in",
"an",
"the",
"LWP",
"cookie",
"file",
"format",
"."
] | def lwp_cookie_str(cookie):
"""Return string representation of Cookie in the LWP cookie file format.
Actually, the format is extended a bit -- see module docstring.
"""
h = [(cookie.name, cookie.value),
("path", cookie.path),
("domain", cookie.domain)]
if cookie.port is not None: h.append(("port", cookie.port))
if cookie.path_specified: h.append(("path_spec", None))
if cookie.port_specified: h.append(("port_spec", None))
if cookie.domain_initial_dot: h.append(("domain_dot", None))
if cookie.secure: h.append(("secure", None))
if cookie.expires: h.append(("expires",
time2isoz(float(cookie.expires))))
if cookie.discard: h.append(("discard", None))
if cookie.comment: h.append(("comment", cookie.comment))
if cookie.comment_url: h.append(("commenturl", cookie.comment_url))
keys = sorted(cookie._rest.keys())
for k in keys:
h.append((k, str(cookie._rest[k])))
h.append(("version", str(cookie.version)))
return join_header_words([h]) | [
"def",
"lwp_cookie_str",
"(",
"cookie",
")",
":",
"h",
"=",
"[",
"(",
"cookie",
".",
"name",
",",
"cookie",
".",
"value",
")",
",",
"(",
"\"path\"",
",",
"cookie",
".",
"path",
")",
",",
"(",
"\"domain\"",
",",
"cookie",
".",
"domain",
")",
"]",
... | https://github.com/khalim19/gimp-plugin-export-layers/blob/b37255f2957ad322f4d332689052351cdea6e563/export_layers/pygimplib/_lib/future/future/backports/http/cookiejar.py#L1816-L1842 | |
morganstanley/treadmill | f18267c665baf6def4374d21170198f63ff1cde4 | lib/python/treadmill/scheduler/masterapi.py | python | cell_buckets | (zkclient) | return sorted(zkclient.get_children(z.CELL)) | Return list of top level cell buckets. | Return list of top level cell buckets. | [
"Return",
"list",
"of",
"top",
"level",
"cell",
"buckets",
"."
] | def cell_buckets(zkclient):
"""Return list of top level cell buckets."""
return sorted(zkclient.get_children(z.CELL)) | [
"def",
"cell_buckets",
"(",
"zkclient",
")",
":",
"return",
"sorted",
"(",
"zkclient",
".",
"get_children",
"(",
"z",
".",
"CELL",
")",
")"
] | https://github.com/morganstanley/treadmill/blob/f18267c665baf6def4374d21170198f63ff1cde4/lib/python/treadmill/scheduler/masterapi.py#L283-L285 | |
opensourcesec/CIRTKit | 58b8793ada69320ffdbdd4ecdc04a3bb2fa83c37 | modules/reversing/viper/peepdf/PDFCore.py | python | PDFReference.getId | (self) | return self.id | Gets the object id of the reference
@return: The object id (int) | Gets the object id of the reference | [
"Gets",
"the",
"object",
"id",
"of",
"the",
"reference"
] | def getId(self):
'''
Gets the object id of the reference
@return: The object id (int)
'''
return self.id | [
"def",
"getId",
"(",
"self",
")",
":",
"return",
"self",
".",
"id"
] | https://github.com/opensourcesec/CIRTKit/blob/58b8793ada69320ffdbdd4ecdc04a3bb2fa83c37/modules/reversing/viper/peepdf/PDFCore.py#L874-L880 | |
THUNLP-MT/Document-Transformer | 5bcc7f43cc948240fa0e3a400bffdc178f841fcd | thumt/bin/trainer_ctx.py | python | export_params | (output_dir, name, params) | [] | def export_params(output_dir, name, params):
if not tf.gfile.Exists(output_dir):
tf.gfile.MkDir(output_dir)
# Save params as params.json
filename = os.path.join(output_dir, name)
with tf.gfile.Open(filename, "w") as fd:
fd.write(params.to_json()) | [
"def",
"export_params",
"(",
"output_dir",
",",
"name",
",",
"params",
")",
":",
"if",
"not",
"tf",
".",
"gfile",
".",
"Exists",
"(",
"output_dir",
")",
":",
"tf",
".",
"gfile",
".",
"MkDir",
"(",
"output_dir",
")",
"# Save params as params.json",
"filenam... | https://github.com/THUNLP-MT/Document-Transformer/blob/5bcc7f43cc948240fa0e3a400bffdc178f841fcd/thumt/bin/trainer_ctx.py#L133-L140 | ||||
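`export_params` above depends on `tf.gfile` and a `params.to_json()` method; a stdlib-only sketch of the same save-params-as-JSON idea (the plain-dict interface is an assumption, not the THUMT API):

```python
import json
import os
import tempfile

def export_params(output_dir, name, params):
    """Write a params mapping to <output_dir>/<name> as JSON.

    Stdlib stand-in for the tf.gfile version above; returns the path written.
    """
    os.makedirs(output_dir, exist_ok=True)
    filename = os.path.join(output_dir, name)
    with open(filename, "w") as fd:
        json.dump(params, fd)
    return filename
```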
zhl2008/awd-platform | 0416b31abea29743387b10b3914581fbe8e7da5e | web_hxb2/lib/python3.5/site-packages/redis/client.py | python | StrictRedis.incrbyfloat | (self, name, amount=1.0) | return self.execute_command('INCRBYFLOAT', name, amount) | Increments the value at key ``name`` by floating ``amount``.
If no key exists, the value will be initialized as ``amount`` | Increments the value at key ``name`` by floating ``amount``.
If no key exists, the value will be initialized as ``amount`` | [
"Increments",
"the",
"value",
"at",
"key",
"name",
"by",
"floating",
"amount",
".",
"If",
"no",
"key",
"exists",
"the",
"value",
"will",
"be",
"initialized",
"as",
"amount"
] | def incrbyfloat(self, name, amount=1.0):
"""
Increments the value at key ``name`` by floating ``amount``.
If no key exists, the value will be initialized as ``amount``
"""
return self.execute_command('INCRBYFLOAT', name, amount) | [
"def",
"incrbyfloat",
"(",
"self",
",",
"name",
",",
"amount",
"=",
"1.0",
")",
":",
"return",
"self",
".",
"execute_command",
"(",
"'INCRBYFLOAT'",
",",
"name",
",",
"amount",
")"
] | https://github.com/zhl2008/awd-platform/blob/0416b31abea29743387b10b3914581fbe8e7da5e/web_hxb2/lib/python3.5/site-packages/redis/client.py#L1023-L1028 | |
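The `INCRBYFLOAT` semantics described in the docstring — initialize a missing key as if it were 0, then add — can be sketched without a Redis server by using a dict as the keyspace (illustrative only, not the redis-py API):

```python
def incrbyfloat(store, name, amount=1.0):
    """Dict-based sketch of Redis INCRBYFLOAT semantics.

    A missing key is treated as 0.0 before the increment, so the first
    call effectively initializes the value to `amount`.
    """
    store[name] = float(store.get(name, 0.0)) + float(amount)
    return store[name]
```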
fredrik-johansson/mpmath | c11db84b3237bd8fc6721f5a0c5d7c0c98a24dc1 | mpmath/__init__.py | python | runtests | () | Run all mpmath tests and print output. | Run all mpmath tests and print output. | [
"Run",
"all",
"mpmath",
"tests",
"and",
"print",
"output",
"."
] | def runtests():
"""
Run all mpmath tests and print output.
"""
import os.path
from inspect import getsourcefile
from .tests import runtests as tests
testdir = os.path.dirname(os.path.abspath(getsourcefile(tests)))
importdir = os.path.abspath(testdir + '/../..')
tests.testit(importdir, testdir) | [
"def",
"runtests",
"(",
")",
":",
"import",
"os",
".",
"path",
"from",
"inspect",
"import",
"getsourcefile",
"from",
".",
"tests",
"import",
"runtests",
"as",
"tests",
"testdir",
"=",
"os",
".",
"path",
".",
"dirname",
"(",
"os",
".",
"path",
".",
"abs... | https://github.com/fredrik-johansson/mpmath/blob/c11db84b3237bd8fc6721f5a0c5d7c0c98a24dc1/mpmath/__init__.py#L431-L440 | ||
XX-net/XX-Net | a9898cfcf0084195fb7e69b6bc834e59aecdf14f | python3.8.2/Lib/site-packages/pip/_internal/wheel.py | python | WheelBuilder._build_one | (self, req, output_dir, python_tag=None) | Build one wheel.
:return: The filename of the built wheel, or None if the build failed. | Build one wheel. | [
"Build",
"one",
"wheel",
"."
] | def _build_one(self, req, output_dir, python_tag=None):
"""Build one wheel.
:return: The filename of the built wheel, or None if the build failed.
"""
# Install build deps into temporary directory (PEP 518)
with req.build_env:
return self._build_one_inside_env(req, output_dir,
python_tag=python_tag) | [
"def",
"_build_one",
"(",
"self",
",",
"req",
",",
"output_dir",
",",
"python_tag",
"=",
"None",
")",
":",
"# Install build deps into temporary directory (PEP 518)",
"with",
"req",
".",
"build_env",
":",
"return",
"self",
".",
"_build_one_inside_env",
"(",
"req",
... | https://github.com/XX-net/XX-Net/blob/a9898cfcf0084195fb7e69b6bc834e59aecdf14f/python3.8.2/Lib/site-packages/pip/_internal/wheel.py#L890-L898 | ||
PowerScript/KatanaFramework | 0f6ad90a88de865d58ec26941cb4460501e75496 | lib/future/src/future/backports/http/cookiejar.py | python | request_path | (request) | return path | Path component of request-URI, as defined by RFC 2965. | Path component of request-URI, as defined by RFC 2965. | [
"Path",
"component",
"of",
"request",
"-",
"URI",
"as",
"defined",
"by",
"RFC",
"2965",
"."
] | def request_path(request):
"""Path component of request-URI, as defined by RFC 2965."""
url = request.get_full_url()
parts = urlsplit(url)
path = escape_path(parts.path)
if not path.startswith("/"):
# fix bad RFC 2396 absoluteURI
path = "/" + path
return path | [
"def",
"request_path",
"(",
"request",
")",
":",
"url",
"=",
"request",
".",
"get_full_url",
"(",
")",
"parts",
"=",
"urlsplit",
"(",
"url",
")",
"path",
"=",
"escape_path",
"(",
"parts",
".",
"path",
")",
"if",
"not",
"path",
".",
"startswith",
"(",
... | https://github.com/PowerScript/KatanaFramework/blob/0f6ad90a88de865d58ec26941cb4460501e75496/lib/future/src/future/backports/http/cookiejar.py#L628-L636 | |
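`request_path` above relies on cookiejar's private `escape_path` helper and a request object; a sketch that takes the URL directly, with `urllib.parse.quote` standing in for `escape_path`:

```python
from urllib.parse import urlsplit, quote

def request_path(url):
    """Path component of a request-URI, as defined by RFC 2965.

    `quote` stands in for cookiejar's escape_path helper here.
    """
    path = quote(urlsplit(url).path, safe="/%")
    if not path.startswith("/"):
        # fix bad RFC 2396 absoluteURI
        path = "/" + path
    return path
```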
PyMVPA/PyMVPA | 76c476b3de8264b0bb849bf226da5674d659564e | mvpa2/support/afni/afni_surface_alphasim.py | python | _fn | (config, infix, ext=None) | return './%s%s%s' % (config['prefix'], infix, ext) | Returns a file name with a particular infix | Returns a file name with a particular infix | [
"Returns",
"a",
"file",
"name",
"with",
"a",
"particular",
"infix"
] | def _fn(config, infix, ext=None):
'''Returns a file name with a particular infix'''
if ext is None:
ext = _ext(config)
return './%s%s%s' % (config['prefix'], infix, ext) | [
"def",
"_fn",
"(",
"config",
",",
"infix",
",",
"ext",
"=",
"None",
")",
":",
"if",
"ext",
"is",
"None",
":",
"ext",
"=",
"_ext",
"(",
"config",
")",
"return",
"'./%s%s%s'",
"%",
"(",
"config",
"[",
"'prefix'",
"]",
",",
"infix",
",",
"ext",
")"
... | https://github.com/PyMVPA/PyMVPA/blob/76c476b3de8264b0bb849bf226da5674d659564e/mvpa2/support/afni/afni_surface_alphasim.py#L31-L35 | |
OpenCobolIDE/OpenCobolIDE | c78d0d335378e5fe0a5e74f53c19b68b55e85388 | open_cobol_ide/extlibs/future/backports/http/cookiejar.py | python | Cookie.__repr__ | (self) | return "Cookie(%s)" % ", ".join(args) | [] | def __repr__(self):
args = []
for name in ("version", "name", "value",
"port", "port_specified",
"domain", "domain_specified", "domain_initial_dot",
"path", "path_specified",
"secure", "expires", "discard", "comment", "comment_url",
):
attr = getattr(self, name)
### Python-Future:
# Avoid u'...' prefixes for unicode strings:
if isinstance(attr, str):
attr = str(attr)
###
args.append(str("%s=%s") % (name, repr(attr)))
args.append("rest=%s" % repr(self._rest))
args.append("rfc2109=%s" % repr(self.rfc2109))
return "Cookie(%s)" % ", ".join(args) | [
"def",
"__repr__",
"(",
"self",
")",
":",
"args",
"=",
"[",
"]",
"for",
"name",
"in",
"(",
"\"version\"",
",",
"\"name\"",
",",
"\"value\"",
",",
"\"port\"",
",",
"\"port_specified\"",
",",
"\"domain\"",
",",
"\"domain_specified\"",
",",
"\"domain_initial_dot\... | https://github.com/OpenCobolIDE/OpenCobolIDE/blob/c78d0d335378e5fe0a5e74f53c19b68b55e85388/open_cobol_ide/extlibs/future/backports/http/cookiejar.py#L808-L825 | |||
raw-packet/raw-packet | 78d27b3dc9532d27faa6e5d853c62bc9c8b21e71 | raw_packet/Scanners/icmpv6_scanner.py | python | ICMPv6Scan._sniff | (self) | Sniff ICMPv6 packets
:return: None | Sniff ICMPv6 packets
:return: None | [
"Sniff",
"ICMPv6",
"packets",
":",
"return",
":",
"None"
] | def _sniff(self) -> None:
"""
Sniff ICMPv6 packets
:return: None
"""
self._raw_sniff.start(protocols=['Ethernet', 'IPv6', 'ICMPv6'],
prn=self._analyze_packet,
filters={'Ethernet': {'destination': self._your['mac-address']},
'IPv6': {'destination-ip': self._your['ipv6-link-address']},
'ICMPv6': {'type': 129}},
network_interface=self._your['network-interface'],
scapy_filter='icmp6',
scapy_lfilter=lambda eth: eth.dst == self._your['mac-address']) | [
"def",
"_sniff",
"(",
"self",
")",
"->",
"None",
":",
"self",
".",
"_raw_sniff",
".",
"start",
"(",
"protocols",
"=",
"[",
"'Ethernet'",
",",
"'IPv6'",
",",
"'ICMPv6'",
"]",
",",
"prn",
"=",
"self",
".",
"_analyze_packet",
",",
"filters",
"=",
"{",
"... | https://github.com/raw-packet/raw-packet/blob/78d27b3dc9532d27faa6e5d853c62bc9c8b21e71/raw_packet/Scanners/icmpv6_scanner.py#L91-L103 | ||
delira-dev/delira | cd3ad277d6fad5f837d6c5147e6eee2ada648596 | docs/_api/_build/delira/logging/writer_backend.py | python | WriterLoggingBackend._figure | (self, tag, figure, global_step=None, close=True,
walltime=None) | Function to log a ``matplotlib.pyplot`` figure
Parameters
----------
tag : str
the tag to store the figure at
figure : :class:`matplotlib.pyplot.Figure``
the figure to log
global_step : int
the global step
close : bool
whether to close the figure after pushing it
walltime :
the overall time | Function to log a ``matplotlib.pyplot`` figure | [
"Function",
"to",
"log",
"a",
"matplotlib",
".",
"pyplot",
"figure"
] | def _figure(self, tag, figure, global_step=None, close=True,
walltime=None):
"""
Function to log a ``matplotlib.pyplot`` figure
Parameters
----------
tag : str
the tag to store the figure at
figure : :class:`matplotlib.pyplot.Figure``
the figure to log
global_step : int
the global step
close : bool
whether to close the figure after pushing it
walltime :
the overall time
"""
converted_args, converted_kwargs = self.convert_to_npy(
tag=tag, figure=figure, global_step=global_step, close=close,
walltime=walltime)
self._writer.add_figure(*converted_args, **converted_kwargs) | [
"def",
"_figure",
"(",
"self",
",",
"tag",
",",
"figure",
",",
"global_step",
"=",
"None",
",",
"close",
"=",
"True",
",",
"walltime",
"=",
"None",
")",
":",
"converted_args",
",",
"converted_kwargs",
"=",
"self",
".",
"convert_to_npy",
"(",
"tag",
"=",
... | https://github.com/delira-dev/delira/blob/cd3ad277d6fad5f837d6c5147e6eee2ada648596/docs/_api/_build/delira/logging/writer_backend.py#L194-L216 | ||
plotly/plotly.py | cfad7862594b35965c0e000813bd7805e8494a5b | packages/python/plotly/plotly/graph_objs/_contour.py | python | Contour.xaxis | (self) | return self["xaxis"] | Sets a reference between this trace's x coordinates and a 2D
cartesian x axis. If "x" (the default value), the x coordinates
refer to `layout.xaxis`. If "x2", the x coordinates refer to
`layout.xaxis2`, and so on.
The 'xaxis' property is an identifier of a particular
subplot, of type 'x', that may be specified as the string 'x'
optionally followed by an integer >= 1
(e.g. 'x', 'x1', 'x2', 'x3', etc.)
Returns
-------
str | Sets a reference between this trace's x coordinates and a 2D
cartesian x axis. If "x" (the default value), the x coordinates
refer to `layout.xaxis`. If "x2", the x coordinates refer to
`layout.xaxis2`, and so on.
The 'xaxis' property is an identifier of a particular
subplot, of type 'x', that may be specified as the string 'x'
optionally followed by an integer >= 1
(e.g. 'x', 'x1', 'x2', 'x3', etc.) | [
"Sets",
"a",
"reference",
"between",
"this",
"trace",
"s",
"x",
"coordinates",
"and",
"a",
"2D",
"cartesian",
"x",
"axis",
".",
"If",
"x",
"(",
"the",
"default",
"value",
")",
"the",
"x",
"coordinates",
"refer",
"to",
"layout",
".",
"xaxis",
".",
"If",... | def xaxis(self):
"""
Sets a reference between this trace's x coordinates and a 2D
cartesian x axis. If "x" (the default value), the x coordinates
refer to `layout.xaxis`. If "x2", the x coordinates refer to
`layout.xaxis2`, and so on.
The 'xaxis' property is an identifier of a particular
subplot, of type 'x', that may be specified as the string 'x'
optionally followed by an integer >= 1
(e.g. 'x', 'x1', 'x2', 'x3', etc.)
Returns
-------
str
"""
return self["xaxis"] | [
"def",
"xaxis",
"(",
"self",
")",
":",
"return",
"self",
"[",
"\"xaxis\"",
"]"
] | https://github.com/plotly/plotly.py/blob/cfad7862594b35965c0e000813bd7805e8494a5b/packages/python/plotly/plotly/graph_objs/_contour.py#L1594-L1610 | |
mesalock-linux/mesapy | ed546d59a21b36feb93e2309d5c6b75aa0ad95c9 | pypy/module/cpyext/import_.py | python | PyImport_Import | (space, w_name) | return space.call_function(w_import,
w_name, w_globals, w_globals,
space.newlist([space.newtext("__doc__")])) | This is a higher-level interface that calls the current "import hook function".
It invokes the __import__() function from the __builtins__ of the
current globals. This means that the import is done using whatever import hooks
are installed in the current environment, e.g. by rexec or ihooks.
Always uses absolute imports. | This is a higher-level interface that calls the current "import hook function".
It invokes the __import__() function from the __builtins__ of the
current globals. This means that the import is done using whatever import hooks
are installed in the current environment, e.g. by rexec or ihooks. | [
"This",
"is",
"a",
"higher",
"-",
"level",
"interface",
"that",
"calls",
"the",
"current",
"import",
"hook",
"function",
".",
"It",
"invokes",
"the",
"__import__",
"()",
"function",
"from",
"the",
"__builtins__",
"of",
"the",
"current",
"globals",
".",
"This... | def PyImport_Import(space, w_name):
"""
This is a higher-level interface that calls the current "import hook function".
It invokes the __import__() function from the __builtins__ of the
current globals. This means that the import is done using whatever import hooks
are installed in the current environment, e.g. by rexec or ihooks.
Always uses absolute imports."""
caller = space.getexecutioncontext().gettopframe_nohidden()
# Get the builtins from current globals
if caller is not None:
w_globals = caller.get_w_globals()
w_builtin = space.getitem(w_globals, space.newtext('__builtins__'))
else:
# No globals -- use standard builtins, and fake globals
w_builtin = space.getbuiltinmodule('__builtin__')
w_globals = space.newdict()
space.setitem(w_globals, space.newtext("__builtins__"), w_builtin)
# Get the __import__ function from the builtins
if space.isinstance_w(w_builtin, space.w_dict):
w_import = space.getitem(w_builtin, space.newtext("__import__"))
else:
w_import = space.getattr(w_builtin, space.newtext("__import__"))
# Call the __import__ function with the proper argument list
# Always use absolute import here.
return space.call_function(w_import,
w_name, w_globals, w_globals,
space.newlist([space.newtext("__doc__")])) | [
"def",
"PyImport_Import",
"(",
"space",
",",
"w_name",
")",
":",
"caller",
"=",
"space",
".",
"getexecutioncontext",
"(",
")",
".",
"gettopframe_nohidden",
"(",
")",
"# Get the builtins from current globals",
"if",
"caller",
"is",
"not",
"None",
":",
"w_globals",
... | https://github.com/mesalock-linux/mesapy/blob/ed546d59a21b36feb93e2309d5c6b75aa0ad95c9/pypy/module/cpyext/import_.py#L11-L40 | |
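What `PyImport_Import` does at the C level — resolve `__import__` from the caller's builtins and call it with `fromlist=['__doc__']` to force an absolute import of the full dotted path — can be mirrored in pure Python (the helper name below is ours):

```python
import builtins

def py_import_import(name, globals_=None):
    """Pure-Python sketch of CPython's PyImport_Import.

    Looks up __import__ in the (possibly fake) globals' builtins, which may
    be either a dict or a module, then invokes it with a non-empty fromlist
    so dotted names return the leaf submodule.
    """
    g = globals_ if globals_ is not None else {'__builtins__': builtins}
    b = g['__builtins__']
    imp = b['__import__'] if isinstance(b, dict) else b.__import__
    return imp(name, g, g, ['__doc__'])
```

Because the fromlist is non-empty, `py_import_import('os.path')` returns the `os.path` submodule rather than the top-level `os` package.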
nosmokingbandit/watcher | dadacd21a5790ee609058a98a17fcc8954d24439 | lib/sqlalchemy/events.py | python | ConnectionEvents.engine_disposed | (self, engine) | Intercept when the :meth:`.Engine.dispose` method is called.
The :meth:`.Engine.dispose` method instructs the engine to
"dispose" of its connection pool (e.g. :class:`.Pool`), and
replaces it with a new one. Disposing of the old pool has the
effect that existing checked-in connections are closed. The new
pool does not establish any new connections until it is first used.
This event can be used to indicate that resources related to the
:class:`.Engine` should also be cleaned up, keeping in mind that the
:class:`.Engine` can still be used for new requests in which case
it re-acquires connection resources.
.. versionadded:: 1.0.5 | Intercept when the :meth:`.Engine.dispose` method is called. | [
"Intercept",
"when",
"the",
":",
"meth",
":",
".",
"Engine",
".",
"dispose",
"method",
"is",
"called",
"."
] | def engine_disposed(self, engine):
"""Intercept when the :meth:`.Engine.dispose` method is called.
The :meth:`.Engine.dispose` method instructs the engine to
"dispose" of its connection pool (e.g. :class:`.Pool`), and
replaces it with a new one. Disposing of the old pool has the
effect that existing checked-in connections are closed. The new
pool does not establish any new connections until it is first used.
This event can be used to indicate that resources related to the
:class:`.Engine` should also be cleaned up, keeping in mind that the
:class:`.Engine` can still be used for new requests in which case
it re-acquires connection resources.
.. versionadded:: 1.0.5
""" | [
"def",
"engine_disposed",
"(",
"self",
",",
"engine",
")",
":"
] | https://github.com/nosmokingbandit/watcher/blob/dadacd21a5790ee609058a98a17fcc8954d24439/lib/sqlalchemy/events.py#L940-L956 | ||
annoviko/pyclustering | bf4f51a472622292627ec8c294eb205585e50f52 | pyclustering/nnet/pcnn.py | python | pcnn_network.__del__ | (self) | !
@brief Default destructor of PCNN. | ! | [
"!"
] | def __del__(self):
"""!
@brief Default destructor of PCNN.
"""
if self.__ccore_pcnn_pointer is not None:
wrapper.pcnn_destroy(self.__ccore_pcnn_pointer)
self.__ccore_pcnn_pointer = None | [
"def",
"__del__",
"(",
"self",
")",
":",
"if",
"self",
".",
"__ccore_pcnn_pointer",
"is",
"not",
"None",
":",
"wrapper",
".",
"pcnn_destroy",
"(",
"self",
".",
"__ccore_pcnn_pointer",
")",
"self",
".",
"__ccore_pcnn_pointer",
"=",
"None"
] | https://github.com/annoviko/pyclustering/blob/bf4f51a472622292627ec8c294eb205585e50f52/pyclustering/nnet/pcnn.py#L383-L390 | ||
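The destructor above follows a destroy-and-null pattern: release the ccore pointer once, then clear it so a repeated call (or the interpreter-exit `__del__`) becomes a no-op. A pure-Python sketch of the pattern, with a list standing in for the C destroy function:

```python
class CCoreHandle:
    """Sketch of the destroy-and-null pattern used by pcnn_network.__del__."""

    def __init__(self, destroy):
        self._destroy = destroy
        self._pointer = object()  # stands in for a ccore pointer

    def release(self):
        # Guard against double-free: only destroy while the pointer is live.
        if self._pointer is not None:
            self._destroy(self._pointer)
            self._pointer = None

    __del__ = release

calls = []
h = CCoreHandle(calls.append)
h.release()
h.release()  # second call is a no-op thanks to the guard
```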
kubernetes-client/python | 47b9da9de2d02b2b7a34fbe05afb44afd130d73a | kubernetes/client/api/core_v1_api.py | python | CoreV1Api.create_namespaced_endpoints_with_http_info | (self, namespace, body, **kwargs) | return self.api_client.call_api(
'/api/v1/namespaces/{namespace}/endpoints', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='V1Endpoints', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats) | create_namespaced_endpoints # noqa: E501
create Endpoints # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_namespaced_endpoints_with_http_info(namespace, body, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str namespace: object name and auth scope, such as for teams and projects (required)
:param V1Endpoints body: (required)
:param str pretty: If 'true', then the output is pretty printed.
:param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
:param str field_manager: fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.
:param _return_http_data_only: response data without head status code
and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(V1Endpoints, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread. | create_namespaced_endpoints # noqa: E501 | [
"create_namespaced_endpoints",
"#",
"noqa",
":",
"E501"
] | def create_namespaced_endpoints_with_http_info(self, namespace, body, **kwargs): # noqa: E501
"""create_namespaced_endpoints # noqa: E501
create Endpoints # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_namespaced_endpoints_with_http_info(namespace, body, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str namespace: object name and auth scope, such as for teams and projects (required)
:param V1Endpoints body: (required)
:param str pretty: If 'true', then the output is pretty printed.
:param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
:param str field_manager: fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.
:param _return_http_data_only: response data without head status code
and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(V1Endpoints, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = [
'namespace',
'body',
'pretty',
'dry_run',
'field_manager'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method create_namespaced_endpoints" % key
)
            local_var_params[key] = val
        del local_var_params['kwargs']
        # verify the required parameter 'namespace' is set
        if self.api_client.client_side_validation and ('namespace' not in local_var_params or  # noqa: E501
                                                       local_var_params['namespace'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `namespace` when calling `create_namespaced_endpoints`")  # noqa: E501
        # verify the required parameter 'body' is set
        if self.api_client.client_side_validation and ('body' not in local_var_params or  # noqa: E501
                                                       local_var_params['body'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `body` when calling `create_namespaced_endpoints`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'namespace' in local_var_params:
            path_params['namespace'] = local_var_params['namespace']  # noqa: E501

        query_params = []
        if 'pretty' in local_var_params and local_var_params['pretty'] is not None:  # noqa: E501
            query_params.append(('pretty', local_var_params['pretty']))  # noqa: E501
        if 'dry_run' in local_var_params and local_var_params['dry_run'] is not None:  # noqa: E501
            query_params.append(('dryRun', local_var_params['dry_run']))  # noqa: E501
        if 'field_manager' in local_var_params and local_var_params['field_manager'] is not None:  # noqa: E501
            query_params.append(('fieldManager', local_var_params['field_manager']))  # noqa: E501

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'body' in local_var_params:
            body_params = local_var_params['body']
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf'])  # noqa: E501

        # Authentication setting
        auth_settings = ['BearerToken']  # noqa: E501

        return self.api_client.call_api(
            '/api/v1/namespaces/{namespace}/endpoints', 'POST',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='V1Endpoints',  # noqa: E501
            auth_settings=auth_settings,
            async_req=local_var_params.get('async_req'),
            _return_http_data_only=local_var_params.get('_return_http_data_only'),  # noqa: E501
            _preload_content=local_var_params.get('_preload_content', True),
            _request_timeout=local_var_params.get('_request_timeout'),
            collection_formats=collection_formats)
https://github.com/kubernetes-client/python/blob/47b9da9de2d02b2b7a34fbe05afb44afd130d73a/kubernetes/client/api/core_v1_api.py#L6770-L6877
taomujian/linbing | fe772a58f41e3b046b51a866bdb7e4655abaf51a | python/app/thirdparty/dirsearch/thirdparty/jinja2/filters.py | python | do_mark_unsafe | (value: str) | return str(value) | Mark a value as unsafe. This is the reverse operation for :func:`safe`. | Mark a value as unsafe. This is the reverse operation for :func:`safe`. | [
def do_mark_unsafe(value: str) -> str:
    """Mark a value as unsafe. This is the reverse operation for :func:`safe`."""
    return str(value)
https://github.com/taomujian/linbing/blob/fe772a58f41e3b046b51a866bdb7e4655abaf51a/python/app/thirdparty/dirsearch/thirdparty/jinja2/filters.py#L1324-L1326
ArduPilot/pymavlink | 9d6ea618e8d0622bee95fa902b6251882e225afb | mavextra.py | python | mixer | (servo1, servo2, mixtype=1, gain=0.5) | return (1500+v1,1500+v2) | mix two servos | mix two servos | [
def mixer(servo1, servo2, mixtype=1, gain=0.5):
    '''mix two servos'''
    s1 = servo1 - 1500
    s2 = servo2 - 1500
    v1 = (s1-s2)*gain
    v2 = (s1+s2)*gain
    if mixtype == 2:
        v2 = -v2
    elif mixtype == 3:
        v1 = -v1
    elif mixtype == 4:
        v1 = -v1
        v2 = -v2
    if v1 > 600:
        v1 = 600
    elif v1 < -600:
        v1 = -600
    if v2 > 600:
        v2 = 600
    elif v2 < -600:
        v2 = -600
    return (1500+v1, 1500+v2)
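The sign and clamping logic of `mixer` invites a quick numeric check. The sketch below restates the function standalone, with the ±600 clamp written via `min`/`max` instead of the if-chain (an equivalent formulation); the sample servo values are illustrative only.

```python
def mixer(servo1, servo2, mixtype=1, gain=0.5):
    '''mix two servos (restated from the record above for a standalone check)'''
    s1 = servo1 - 1500
    s2 = servo2 - 1500
    v1 = (s1 - s2) * gain
    v2 = (s1 + s2) * gain
    if mixtype == 2:
        v2 = -v2
    elif mixtype == 3:
        v1 = -v1
    elif mixtype == 4:
        v1 = -v1
        v2 = -v2
    # clamp both outputs to +/-600 around the 1500us centre
    v1 = max(-600, min(600, v1))
    v2 = max(-600, min(600, v2))
    return (1500 + v1, 1500 + v2)

# equal-and-opposite deflection maps entirely onto the first output
print(mixer(1600, 1400))  # (1600.0, 1500.0)
# extreme inputs are clamped to the 900..2100 range
print(mixer(2500, 2500))  # (1500.0, 2100.0)
```

With `mixtype=3` the first channel is inverted, which is the usual way to accommodate a mirrored servo installation.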
https://github.com/ArduPilot/pymavlink/blob/9d6ea618e8d0622bee95fa902b6251882e225afb/mavextra.py#L789-L810
holzschu/Carnets | 44effb10ddfc6aa5c8b0687582a724ba82c6b547 | Library/lib/python3.7/site-packages/astropy-4.0-py3.7-macosx-10.9-x86_64.egg/astropy/coordinates/representation.py | python | BaseRepresentationOrDifferential._values | (self) | return result | Turn the coordinates into a record array with the coordinate values.
The record array fields will have the component names. | Turn the coordinates into a record array with the coordinate values. | [
def _values(self):
    """Turn the coordinates into a record array with the coordinate values.

    The record array fields will have the component names.
    """
    coo_items = [(c, getattr(self, c)) for c in self.components]
    result = np.empty(self.shape, [(c, coo.dtype) for c, coo in coo_items])
    for c, coo in coo_items:
        result[c] = coo.value
    return result
https://github.com/holzschu/Carnets/blob/44effb10ddfc6aa5c8b0687582a724ba82c6b547/Library/lib/python3.7/site-packages/astropy-4.0-py3.7-macosx-10.9-x86_64.egg/astropy/coordinates/representation.py#L341-L350
roclark/sportsipy | c19f545d3376d62ded6304b137dc69238ac620a9 | sportsipy/nfl/teams.py | python | Teams.dataframes | (self) | return pd.concat(frames) | Returns a pandas DataFrame where each row is a representation of the
Team class. Rows are indexed by the team abbreviation. | Returns a pandas DataFrame where each row is a representation of the
Team class. Rows are indexed by the team abbreviation. | [
def dataframes(self):
    """
    Returns a pandas DataFrame where each row is a representation of the
    Team class. Rows are indexed by the team abbreviation.
    """
    frames = []
    for team in self.__iter__():
        frames.append(team.dataframe)
    return pd.concat(frames)
https://github.com/roclark/sportsipy/blob/c19f545d3376d62ded6304b137dc69238ac620a9/sportsipy/nfl/teams.py#L696-L704
alpacahq/pylivetrader | 2d9bf97103814409ba8b56a4291f2655c59514ee | pylivetrader/misc/events.py | python | AfterOpen.should_trigger | (self, dt) | return dt == self._period_end | [] | def should_trigger(self, dt):
    # There are two reasons why we might want to recalculate the dates.
    # One is the first time we ever call should_trigger, when
    # self._period_start is none. The second is when we're on a new day,
    # and need to recalculate the dates. For performance reasons, we rely
    # on the fact that our clock only ever ticks forward, since it's
    # cheaper to do dt1 <= dt2 than dt1.date() != dt2.date(). This means
    # that we will NOT correctly recognize a new date if we go backwards
    # in time(which should never happen in a simulation, or in live
    # trading)
    if (
        self._period_start is None or
        self._period_close <= dt
    ):
        self.calculate_dates(dt)
    return dt == self._period_end
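The lazy-recompute pattern in `should_trigger`, caching the period boundaries and refreshing them only once the clock passes the cached close, can be shown with plain integers standing in for timestamps. The class below is a made-up illustration of the pattern, not pylivetrader's actual implementation; the period length is an arbitrary stand-in.

```python
class AfterOffsetTrigger:
    """Fires once per fixed-length period, recomputing bounds lazily."""
    PERIOD = 10  # arbitrary period length for the sketch

    def __init__(self, offset):
        self.offset = offset          # fire `offset` ticks into each period
        self._period_start = None
        self._period_end = None
        self._period_close = None

    def calculate_dates(self, t):
        self._period_start = (t // self.PERIOD) * self.PERIOD
        self._period_end = self._period_start + self.offset
        self._period_close = self._period_start + self.PERIOD

    def should_trigger(self, t):
        # recompute only on first use, or once the clock leaves the period;
        # like the original, this assumes the clock only ticks forward
        if self._period_start is None or self._period_close <= t:
            self.calculate_dates(t)
        return t == self._period_end

trigger = AfterOffsetTrigger(offset=3)
print([t for t in range(25) if trigger.should_trigger(t)])  # [3, 13, 23]
```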
https://github.com/alpacahq/pylivetrader/blob/2d9bf97103814409ba8b56a4291f2655c59514ee/pylivetrader/misc/events.py#L373-L389
pyg-team/pytorch_geometric | b920e9a3a64e22c8356be55301c88444ff051cae | examples/geniepath.py | python | GeniePathLayer.forward | (self, x, edge_index, h, c) | return x, (h, c) | [] | def forward(self, x, edge_index, h, c):
    x = self.breadth_func(x, edge_index)
    x = x[None, :]
    x, (h, c) = self.depth_func(x, h, c)
    x = x[0]
    return x, (h, c)
https://github.com/pyg-team/pytorch_geometric/blob/b920e9a3a64e22c8356be55301c88444ff051cae/examples/geniepath.py#L54-L59
sony/nnabla-examples | 068be490aacf73740502a1c3b10f8b2d15a52d32 | graph-neural-networks/GCN/gcn_model.py | python | gcn | (A_hat, X, num_classes=7, dropout=0.5) | return H | Two layer GCN model. | Two layer GCN model. | [
def gcn(A_hat, X, num_classes=7, dropout=0.5):
    """
    Two layer GCN model.
    """
    H = gcn_layer(A_hat, X, out_features=16,
                  name='gcn_layer_0', dropout=dropout)
    H = gcn_layer(A_hat, H, out_features=num_classes,
                  name='gcn_layer_1', dropout=dropout, activation=F.softmax)
    return H
https://github.com/sony/nnabla-examples/blob/068be490aacf73740502a1c3b10f8b2d15a52d32/graph-neural-networks/GCN/gcn_model.py#L20-L30
nate-parrott/Flashlight | c3a7c7278a1cccf8918e7543faffc68e863ff5ab | PluginDirectories/1/calendar.bundle/jinja2/filters.py | python | do_int | (value, default=0) | Convert the value into an integer. If the
conversion doesn't work it will return ``0``. You can
override this default using the first parameter. | Convert the value into an integer. If the
conversion doesn't work it will return ``0``. You can
override this default using the first parameter. | [
def do_int(value, default=0):
    """Convert the value into an integer. If the
    conversion doesn't work it will return ``0``. You can
    override this default using the first parameter.
    """
    try:
        return int(value)
    except (TypeError, ValueError):
        # this quirk is necessary so that "42.23"|int gives 42.
        try:
            return int(float(value))
        except (TypeError, ValueError):
            return default
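The fallback chain in `do_int` is easy to verify in isolation. The snippet below restates the filter and exercises its three paths: a clean integer conversion, the float-string quirk, and the default.

```python
def do_int(value, default=0):
    """Convert the value into an integer, falling back to `default`."""
    try:
        return int(value)
    except (TypeError, ValueError):
        # "42.23" fails int() directly but succeeds via float()
        try:
            return int(float(value))
        except (TypeError, ValueError):
            return default

print(do_int("42"))      # 42
print(do_int("42.23"))   # 42
print(do_int(None, -1))  # -1
```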
https://github.com/nate-parrott/Flashlight/blob/c3a7c7278a1cccf8918e7543faffc68e863ff5ab/PluginDirectories/1/calendar.bundle/jinja2/filters.py#L506-L518
pjlantz/Hale | 5c4c96f18f9a7ed0362e115007813c0b56dc3853 | src/utils/moduleInterface.py | python | Module.run | (self) | return | Method called when threadManager starts a module | Method called when threadManager starts a module | [
def run(self):
    """
    Method called when threadManager starts a module
    """
    return
https://github.com/pjlantz/Hale/blob/5c4c96f18f9a7ed0362e115007813c0b56dc3853/src/utils/moduleInterface.py#L50-L55
pyscf/pyscf | 0adfb464333f5ceee07b664f291d4084801bae64 | pyscf/pbc/gw/krgw_cd.py | python | get_rho_response_head | (gw, omega, mo_energy, qij) | return Pi_00 | Compute head (G=0, G'=0) density response function in auxiliary basis at freq iw | Compute head (G=0, G'=0) density response function in auxiliary basis at freq iw | [
def get_rho_response_head(gw, omega, mo_energy, qij):
    '''
    Compute head (G=0, G'=0) density response function in auxiliary basis at freq iw
    '''
    nkpts, nocc, nvir = qij.shape
    nocc = gw.nocc
    kpts = gw.kpts

    # Compute Pi head
    Pi_00 = 0j
    for i, kpti in enumerate(kpts):
        eia = mo_energy[i,:nocc,None] - mo_energy[i,None,nocc:]
        eia = eia/(omega**2+eia*eia)
        Pi_00 += 4./nkpts * einsum('ia,ia->',eia,qij[i].conj()*qij[i])
    return Pi_00
https://github.com/pyscf/pyscf/blob/0adfb464333f5ceee07b664f291d4084801bae64/pyscf/pbc/gw/krgw_cd.py#L490-L504
auDeep/auDeep | 07df37b4fde5b10cd96a0c94d8804a1612c10d6f | audeep/backend/data/import_data.py | python | DataImporter._import | (self,
file: Path,
num_folds: int = 0,
fold_index: Optional[int] = None,
partition: Optional[Partition] = None) | Import a data set from CSV or ARFF.
This method decides based on the file extension which parser to use.
Parameters
----------
file: pathlib.Path
The file from which to import the data set
num_folds: int
The number of folds to create in the data set
fold_index: int, optional
The fold to which the instances in the data set belong. Ignored if `num_folds` is zero
partition: Partition, optional
The partition to which the instances in the data set belong
Returns
-------
DataSet
A data set containing instances imported from the specified file
Raises
------
IOError
If the file extension is unknown | Import a data set from CSV or ARFF.
This method decides based on the file extension which parser to use.
Parameters
----------
file: pathlib.Path
The file from which to import the data set
num_folds: int
The number of folds to create in the data set
fold_index: int, optional
The fold to which the instances in the data set belong. Ignored if `num_folds` is zero
partition: Partition, optional
The partition to which the instances in the data set belong | [
def _import(self,
            file: Path,
            num_folds: int = 0,
            fold_index: Optional[int] = None,
            partition: Optional[Partition] = None) -> DataSet:
    """
    Import a data set from CSV or ARFF.

    This method decides based on the file extension which parser to use.

    Parameters
    ----------
    file: pathlib.Path
        The file from which to import the data set
    num_folds: int
        The number of folds to create in the data set
    fold_index: int, optional
        The fold to which the instances in the data set belong. Ignored if `num_folds` is zero
    partition: Partition, optional
        The partition to which the instances in the data set belong

    Returns
    -------
    DataSet
        A data set containing instances imported from the specified file

    Raises
    ------
    IOError
        If the file extension is unknown
    """
    if file.suffix.lower() == ".csv":
        return self._import_csv(file=file,
                                num_folds=num_folds,
                                fold_index=fold_index,
                                partition=partition)
    elif file.suffix.lower() == ".arff":
        return self._import_arff(file=file,
                                 num_folds=num_folds,
                                 fold_index=fold_index,
                                 partition=partition)
    else:
        raise IOError("unknown extension: %s" % file.suffix)
https://github.com/auDeep/auDeep/blob/07df37b4fde5b10cd96a0c94d8804a1612c10d6f/audeep/backend/data/import_data.py#L341-L383
robhagemans/pcbasic | c3a043b46af66623a801e18a38175be077251ada | pcbasic/interface/window.py | python | WindowSizer.set_display_size | (self, new_size_x, new_size_y) | Change the physical display size. | Change the physical display size. | [
def set_display_size(self, new_size_x, new_size_y):
    """Change the physical display size."""
    self._window_size = new_size_x, new_size_y
    self._display_size = self._window_size
    self._calculate_scale()
    self._calculate_letterbox_shift()
https://github.com/robhagemans/pcbasic/blob/c3a043b46af66623a801e18a38175be077251ada/pcbasic/interface/window.py#L114-L119
lark-parser/lark | e0af20ff164ed42564df40652f26400734cf0617 | docs/ide/app/core.py | python | domCreateElement | (tag, ns=None) | return document.createElement(tag) | Creates a new HTML/SVG/... tag
:param ns: the namespace. Default: HTML. Possible values: HTML, SVG, XBL, XUL | Creates a new HTML/SVG/... tag
:param ns: the namespace. Default: HTML. Possible values: HTML, SVG, XBL, XUL | [
def domCreateElement(tag, ns=None):
    """
    Creates a new HTML/SVG/... tag
    :param ns: the namespace. Default: HTML. Possible values: HTML, SVG, XBL, XUL
    """
    uri = None
    if ns == "SVG":
        uri = "http://www.w3.org/2000/svg"
    elif ns == "XBL":
        uri = "http://www.mozilla.org/xbl"
    elif ns == "XUL":
        uri = "http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul"
    if uri:
        return document.createElementNS(uri, tag)
    return document.createElement(tag)
https://github.com/lark-parser/lark/blob/e0af20ff164ed42564df40652f26400734cf0617/docs/ide/app/core.py#L41-L58
python-ivi/python-ivi | cfa45ceade0758debe4bc24ba4c8195222cad1e2 | ivi/scpi/dcpwr.py | python | Base._initialize | (self, resource = None, id_query = False, reset = False, **keywargs) | Opens an I/O session to the instrument. | Opens an I/O session to the instrument. | [
def _initialize(self, resource = None, id_query = False, reset = False, **keywargs):
    "Opens an I/O session to the instrument."
    super(Base, self)._initialize(resource, id_query, reset, **keywargs)

    if not self._do_scpi_init:
        return

    # interface clear
    if not self._driver_operation_simulate:
        self._clear()

    # check ID
    if id_query and not self._driver_operation_simulate:
        id = self.identity.instrument_model
        id_check = self._instrument_id
        id_short = id[:len(id_check)]
        if id_short != id_check:
            raise Exception("Instrument ID mismatch, expecting %s, got %s", id_check, id_short)

    # reset
    if reset:
        self.utility_reset()
https://github.com/python-ivi/python-ivi/blob/cfa45ceade0758debe4bc24ba4c8195222cad1e2/ivi/scpi/dcpwr.py#L79-L101
SiCKRAGE/SiCKRAGE | 45fb67c0c730fc22a34c695b5a62b11970621c53 | sickrage/search_providers/torrent/yggtorrent.py | python | YggtorrentProvider.search | (self, search_strings, age=0, series_id=None, series_provider_id=None, season=None, episode=None, **kwargs) | return results | Search a provider and parse the results.
:param search_strings: A dict with mode (key) and the search value (value)
:param age: Not used
:param ep_obj: Not used
:returns: A list of search results (structure) | Search a provider and parse the results. | [
def search(self, search_strings, age=0, series_id=None, series_provider_id=None, season=None, episode=None, **kwargs):
    """
    Search a provider and parse the results.

    :param search_strings: A dict with mode (key) and the search value (value)
    :param age: Not used
    :param ep_obj: Not used
    :returns: A list of search results (structure)
    """
    results = []

    if not self.login():
        return results

    # Search Params
    search_params = {
        'category': 2145,
        'do': 'search'
    }

    for mode in search_strings:
        sickrage.app.log.debug('Search mode: {}'.format(mode))
        for search_string in search_strings[mode]:
            if mode != 'RSS':
                sickrage.app.log.debug('Search string: {}'.format(search_string))

            search_params['name'] = re.sub(r'[()]', '', search_string)

            resp = self.session.get(self.urls['search'], params=search_params)
            if not resp or not resp.text:
                sickrage.app.log.debug('No data returned from provider')
                continue

            results += self.parse(resp.text, mode)

    return results
https://github.com/SiCKRAGE/SiCKRAGE/blob/45fb67c0c730fc22a34c695b5a62b11970621c53/sickrage/search_providers/torrent/yggtorrent.py#L57-L92
threat9/routersploit | 3fd394637f5566c4cf6369eecae08c4d27f93cda | routersploit/core/exploit/exploit.py | python | mute | (fn) | return wrapper | Suppress function from printing to sys.stdout | Suppress function from printing to sys.stdout | [
def mute(fn):
    """ Suppress function from printing to sys.stdout """
    @wraps(fn)
    def wrapper(self, *args, **kwargs):
        thread_output_stream.setdefault(threading.current_thread(), []).append(DummyFile())

        try:
            return fn(self, *args, **kwargs)
        finally:
            thread_output_stream[threading.current_thread()].pop()

    return wrapper
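The `mute` decorator above relies on routersploit's per-thread output streams and `DummyFile` sink. A dependency-free version of the same idea can be built on the standard library's `contextlib.redirect_stdout`; this is a sketch of the pattern, not routersploit's actual implementation.

```python
import io
from contextlib import redirect_stdout
from functools import wraps

def mute(fn):
    """Suppress anything the wrapped function prints to sys.stdout."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        # swap stdout for an in-memory buffer for the duration of the call
        with redirect_stdout(io.StringIO()):
            return fn(*args, **kwargs)
    return wrapper

@mute
def chatty():
    print("this never reaches the terminal")
    return 42

print(chatty())  # 42
```

Note that `redirect_stdout` swaps the process-wide `sys.stdout`, which is why routersploit's thread-keyed stream stack is the safer choice in multi-threaded code.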
https://github.com/threat9/routersploit/blob/3fd394637f5566c4cf6369eecae08c4d27f93cda/routersploit/core/exploit/exploit.py#L180-L190
aplpy/aplpy | 241f744a33e7dee677718bd690ea52fbba3628d7 | aplpy/overlays.py | python | Beam.set_alpha | (self, alpha) | Set the alpha value (transparency).
This should be a floating point value between 0 and 1. | Set the alpha value (transparency). | [
def set_alpha(self, alpha):
    """
    Set the alpha value (transparency).

    This should be a floating point value between 0 and 1.
    """
    self.set(alpha=alpha)
https://github.com/aplpy/aplpy/blob/241f744a33e7dee677718bd690ea52fbba3628d7/aplpy/overlays.py#L523-L529
sahana/eden | 1696fa50e90ce967df69f66b571af45356cc18da | modules/s3db/sync.py | python | SyncTaskModel.sync_resource_filter_onaccept | (form) | Reset last_push when adding/changing a filter | Reset last_push when adding/changing a filter | [
def sync_resource_filter_onaccept(form):
    """
    Reset last_push when adding/changing a filter
    """
    db = current.db
    s3db = current.s3db

    ttable = s3db.sync_task
    ftable = s3db.sync_resource_filter

    if isinstance(form, Row):
        filter_id = form.id
    else:
        try:
            filter_id = form.vars.id
        except AttributeError:
            return

    row = db(ftable.id == filter_id).select(ftable.id,
                                            ftable.deleted,
                                            ftable.task_id,
                                            ftable.deleted_fk,
                                            limitby=(0, 1),
                                            ).first()
    if row:
        task_id = None
        if row.deleted:
            try:
                deleted_fk = json.loads(row.deleted_fk)
            except:
                return
            if "task_id" in deleted_fk:
                task_id = deleted_fk["task_id"]
        else:
            task_id = row.task_id
        if task_id:
            db(ttable.id == task_id).update(last_push=None)
https://github.com/sahana/eden/blob/1696fa50e90ce967df69f66b571af45356cc18da/modules/s3db/sync.py#L1279-L1317
pm4py/pm4py-core | 7807b09a088b02199cd0149d724d0e28793971bf | pm4py/objects/log/util/log_regex.py | python | get_encoded_log | (log, mapping, parameters=None) | return list_str | Gets the encoding of the provided log
Parameters
-------------
log
Event log
mapping
Mapping (activity to symbol)
Returns
-------------
list_str
List of encoded strings | Gets the encoding of the provided log | [
def get_encoded_log(log, mapping, parameters=None):
    """
    Gets the encoding of the provided log

    Parameters
    -------------
    log
        Event log
    mapping
        Mapping (activity to symbol)

    Returns
    -------------
    list_str
        List of encoded strings
    """
    if parameters is None:
        parameters = {}

    list_str = list()
    for trace in log:
        list_str.append(get_encoded_trace(trace, mapping, parameters=parameters))

    return list_str
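Stripped of the pm4py object model, the encoding performed by `get_encoded_log` is a per-trace lookup-table pass. The standalone sketch below applies the same idea to plain lists of activity names; the mapping and the two traces are made up for illustration.

```python
def encode_trace(trace, mapping):
    """Encode one trace (a list of activity names) as a string of symbols."""
    return "".join(mapping[activity] for activity in trace)

def encode_log(log, mapping):
    """Encode every trace in the log, mirroring get_encoded_log's loop."""
    return [encode_trace(trace, mapping) for trace in log]

# hypothetical activity-to-symbol mapping
mapping = {"register": "a", "review": "b", "accept": "c", "reject": "d"}
log = [["register", "review", "accept"],
       ["register", "review", "reject"]]

print(encode_log(log, mapping))  # ['abc', 'abd']
```

Encoding traces as strings like this is what makes regex-based log matching (the purpose of pm4py's `log_regex` module) possible.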
https://github.com/pm4py/pm4py-core/blob/7807b09a088b02199cd0149d724d0e28793971bf/pm4py/objects/log/util/log_regex.py#L53-L77
dropbox/dropbox-sdk-python | 015437429be224732990041164a21a0501235db1 | dropbox/team_log.py | python | EventDetails.get_paper_published_link_view_details | (self) | return self._value | Only call this if :meth:`is_paper_published_link_view_details` is true.
:rtype: PaperPublishedLinkViewDetails | Only call this if :meth:`is_paper_published_link_view_details` is true. | [
def get_paper_published_link_view_details(self):
    """
    Only call this if :meth:`is_paper_published_link_view_details` is true.

    :rtype: PaperPublishedLinkViewDetails
    """
    if not self.is_paper_published_link_view_details():
        raise AttributeError("tag 'paper_published_link_view_details' not set")
    return self._value
https://github.com/dropbox/dropbox-sdk-python/blob/015437429be224732990041164a21a0501235db1/dropbox/team_log.py#L19239-L19247
mdiazcl/fuzzbunch-debian | 2b76c2249ade83a389ae3badb12a1bd09901fd2c | windows/Resources/Python/Core/Lib/lib-tk/Tkinter.py | python | YView.yview | (self, *args) | Query and change the vertical position of the view. | Query and change the vertical position of the view. | [
def yview(self, *args):
    """Query and change the vertical position of the view."""
    res = self.tk.call(self._w, 'yview', *args)
    if not args:
        return self._getdoubles(res)
https://github.com/mdiazcl/fuzzbunch-debian/blob/2b76c2249ade83a389ae3badb12a1bd09901fd2c/windows/Resources/Python/Core/Lib/lib-tk/Tkinter.py#L1585-L1589
snorkel-team/snorkel-tutorials | 23bd9525a713a2faaff2cbe0d9123d1c52381958 | spam/03_spam_data_slicing_tutorial.py | python | textblob_polarity | (x) | return x.polarity > 0.9 | [] | def textblob_polarity(x):
    return x.polarity > 0.9
https://github.com/snorkel-team/snorkel-tutorials/blob/23bd9525a713a2faaff2cbe0d9123d1c52381958/spam/03_spam_data_slicing_tutorial.py#L240-L241
pyparallel/pyparallel | 11e8c6072d48c8f13641925d17b147bf36ee0ba3 | Lib/site-packages/pandas-0.17.0-py3.3-win-amd64.egg/pandas/core/reshape.py | python | wide_to_long | (df, stubnames, i, j) | return newdf.set_index([i, j]) | Wide panel to long format. Less flexible but more user-friendly than melt.
Parameters
----------
df : DataFrame
The wide-format DataFrame
stubnames : list
A list of stub names. The wide format variables are assumed to
start with the stub names.
i : str
The name of the id variable.
j : str
The name of the subobservation variable.
stubend : str
Regex to match for the end of the stubs.
Returns
-------
DataFrame
A DataFrame that contains each stub name as a variable as well as
variables for i and j.
Examples
--------
>>> import pandas as pd
>>> import numpy as np
>>> np.random.seed(123)
>>> df = pd.DataFrame({"A1970" : {0 : "a", 1 : "b", 2 : "c"},
... "A1980" : {0 : "d", 1 : "e", 2 : "f"},
... "B1970" : {0 : 2.5, 1 : 1.2, 2 : .7},
... "B1980" : {0 : 3.2, 1 : 1.3, 2 : .1},
... "X" : dict(zip(range(3), np.random.randn(3)))
... })
>>> df["id"] = df.index
>>> df
A1970 A1980 B1970 B1980 X id
0 a d 2.5 3.2 -1.085631 0
1 b e 1.2 1.3 0.997345 1
2 c f 0.7 0.1 0.282978 2
>>> wide_to_long(df, ["A", "B"], i="id", j="year")
X A B
id year
0 1970 -1.085631 a 2.5
1 1970 0.997345 b 1.2
2 1970 0.282978 c 0.7
0 1980 -1.085631 d 3.2
1 1980 0.997345 e 1.3
2 1980 0.282978 f 0.1
Notes
-----
All extra variables are treated as extra id variables. This simply uses
`pandas.melt` under the hood, but is hard-coded to "do the right thing"
in a typical case. | Wide panel to long format. Less flexible but more user-friendly than melt. | [
def wide_to_long(df, stubnames, i, j):
    """
    Wide panel to long format. Less flexible but more user-friendly than melt.

    Parameters
    ----------
    df : DataFrame
        The wide-format DataFrame
    stubnames : list
        A list of stub names. The wide format variables are assumed to
        start with the stub names.
    i : str
        The name of the id variable.
    j : str
        The name of the subobservation variable.
    stubend : str
        Regex to match for the end of the stubs.

    Returns
    -------
    DataFrame
        A DataFrame that contains each stub name as a variable as well as
        variables for i and j.

    Examples
    --------
    >>> import pandas as pd
    >>> import numpy as np
    >>> np.random.seed(123)
    >>> df = pd.DataFrame({"A1970" : {0 : "a", 1 : "b", 2 : "c"},
    ...                    "A1980" : {0 : "d", 1 : "e", 2 : "f"},
    ...                    "B1970" : {0 : 2.5, 1 : 1.2, 2 : .7},
    ...                    "B1980" : {0 : 3.2, 1 : 1.3, 2 : .1},
    ...                    "X"     : dict(zip(range(3), np.random.randn(3)))
    ...                   })
    >>> df["id"] = df.index
    >>> df
      A1970 A1980  B1970  B1980         X  id
    0     a     d    2.5    3.2 -1.085631   0
    1     b     e    1.2    1.3  0.997345   1
    2     c     f    0.7    0.1  0.282978   2
    >>> wide_to_long(df, ["A", "B"], i="id", j="year")
                    X  A    B
    id year
    0  1970 -1.085631  a  2.5
    1  1970  0.997345  b  1.2
    2  1970  0.282978  c  0.7
    0  1980 -1.085631  d  3.2
    1  1980  0.997345  e  1.3
    2  1980  0.282978  f  0.1

    Notes
    -----
    All extra variables are treated as extra id variables. This simply uses
    `pandas.melt` under the hood, but is hard-coded to "do the right thing"
    in a typical case.
    """
    def get_var_names(df, regex):
        return df.filter(regex=regex).columns.tolist()

    def melt_stub(df, stub, i, j):
        varnames = get_var_names(df, "^" + stub)
        newdf = melt(df, id_vars=i, value_vars=varnames, value_name=stub,
                     var_name=j)
        newdf_j = newdf[j].str.replace(stub, "")
        try:
            newdf_j = newdf_j.astype(int)
        except ValueError:
            pass
        newdf[j] = newdf_j
        return newdf

    id_vars = get_var_names(df, "^(?!%s)" % "|".join(stubnames))
    if i not in id_vars:
        id_vars += [i]

    newdf = melt_stub(df, stubnames[0], id_vars, j)

    for stub in stubnames[1:]:
        new = melt_stub(df, stub, id_vars, j)
        newdf = newdf.merge(new, how="outer", on=id_vars + [j], copy=False)
    return newdf.set_index([i, j])
https://github.com/pyparallel/pyparallel/blob/11e8c6072d48c8f13641925d17b147bf36ee0ba3/Lib/site-packages/pandas-0.17.0-py3.3-win-amd64.egg/pandas/core/reshape.py#L868-L949
viewflow/viewflow | 2389bd379a2ab22cc277585df7c09514e273541d | viewflow/activation.py | python | ViewActivation.has_perm | (self, user) | return self.flow_task.can_execute(user, self.task) | Check user permission to execute the task. | Check user permission to execute the task. | [
def has_perm(self, user):
    """Check user permission to execute the task."""
    return self.flow_task.can_execute(user, self.task)
https://github.com/viewflow/viewflow/blob/2389bd379a2ab22cc277585df7c09514e273541d/viewflow/activation.py#L424-L426
nwo: tenpy/tenpy | sha: bbdd3dbbdb511948eb0e6ba7ff619ac6ca657fff | path: toycodes/c_tebd.py | language: python
identifier: calc_U_bonds | parameters: (H_bonds, dt)
function:
def calc_U_bonds(H_bonds, dt):
    """Given the H_bonds, calculate ``U_bonds[i] = expm(-dt*H_bonds[i])``.

    Each local operator has legs (i out, (i+1) out, i in, (i+1) in), in short ``i j i* j*``.
    Note that no imaginary 'i' is included, thus real `dt` means 'imaginary time' evolution!
    """
    d = H_bonds[0].shape[0]
    U_bonds = []
    for H in H_bonds:
        H = np.reshape(H, [d * d, d * d])
        U = expm(-dt * H)
        U_bonds.append(np.reshape(U, [d, d, d, d]))
    return U_bonds
url: https://github.com/tenpy/tenpy/blob/bbdd3dbbdb511948eb0e6ba7ff619ac6ca657fff/toycodes/c_tebd.py#L9-L21

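The record above builds imaginary-time evolution gates with `scipy.linalg.expm` on a reshaped two-site Hamiltonian. A minimal sketch of the same step, assuming NumPy and SciPy are available (the bond Hamiltonian here is a made-up diagonal example, not from the toycode):

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical two-site bond Hamiltonian for local dimension d=2,
# already reshaped to (d*d, d*d) exactly as calc_U_bonds does.
d = 2
H = np.diag([0.25, -0.25, -0.25, 0.25])  # e.g. a Z-Z coupling term

dt = 0.1
U = expm(-dt * H)               # imaginary-time gate: no factor of 1j
U_legs = U.reshape(d, d, d, d)  # legs ordered (i, j, i*, j*)

# sanity check: expm(-dt*H) is the inverse of expm(dt*H)
assert np.allclose(U @ expm(dt * H), np.eye(d * d))
```

Because `dt` is real, `U` is a decaying (non-unitary) operator, which is exactly what the docstring's "imaginary time" note means.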
nwo: facebookresearch/phyre | sha: b9d8f495213bcceca153d344cee687b8dbd742fe | path: src/python/phyre/simulator.py | language: python
identifier: scene_to_featurized_objects | parameters: (scene)
function:
def scene_to_featurized_objects(scene):
    """Convert scene to a FeaturizedObjects containing features of size
    num_objects x OBJECT_FEATURE_SIZE."""
    object_vector = simulator_bindings.featurize_scene(serialize(scene))
    object_vector = np.array(object_vector, dtype=np.float32).reshape(
        (-1, OBJECT_FEATURE_SIZE))
    return phyre.simulation.FeaturizedObjects(
        phyre.simulation.finalize_featurized_objects(
            np.expand_dims(object_vector, axis=0)))
url: https://github.com/facebookresearch/phyre/blob/b9d8f495213bcceca153d344cee687b8dbd742fe/src/python/phyre/simulator.py#L151-L159

nwo: getsentry/sentry | sha: 83b1f25aac3e08075e0e2495bc29efaf35aca18a | path: src/sentry/api/endpoints/team_issue_breakdown.py | language: python
identifier: TeamIssueBreakdownEndpoint.get | parameters: (self, request: Request, team: Team)
function:
def get(self, request: Request, team: Team) -> Response:
    """
    Returns a dict of team projects, and a time-series dict of issue stat breakdowns for each.

    If a list of statuses is passed then we return the count of each status and the totals.
    Otherwise we return the count of reviewed issues and the total count of issues.
    """
    if not features.has("organizations:team-insights", team.organization, actor=request.user):
        return Response({"detail": "You do not have the insights feature enabled"}, status=400)
    start, end = get_date_range_from_params(request.GET)
    end = end.replace(hour=0, minute=0, second=0, microsecond=0) + timedelta(days=1)
    start = start.replace(hour=0, minute=0, second=0, microsecond=0) + timedelta(days=1)

    if "statuses" in request.GET:
        statuses = [
            string_to_status_lookup[status] for status in request.GET.getlist("statuses")
        ]
        new_format = True
    else:
        statuses = [GroupHistoryStatus.UNRESOLVED] + ACTIONED_STATUSES
        new_format = False
    new_issues = []
    base_day_format = {"total": 0}
    if new_format:
        for status in statuses:
            base_day_format[status_to_string_lookup[status]] = 0
    else:
        base_day_format["reviewed"] = 0

    if GroupHistoryStatus.NEW in statuses:
        statuses.remove(GroupHistoryStatus.NEW)
        new_issues = list(
            Group.objects.filter_to_team(team)
            .filter(first_seen__gte=start, first_seen__lte=end)
            .annotate(bucket=TruncDay("first_seen"))
            .order_by("bucket")
            .values("project", "bucket")
            .annotate(
                count=Count("id"),
                status=Value(GroupHistoryStatus.NEW, output_field=IntegerField()),
            )
        )
    bucketed_issues = (
        GroupHistory.objects.filter_to_team(team)
        .filter(
            status__in=statuses,
            date_added__gte=start,
            date_added__lte=end,
        )
        .annotate(bucket=TruncDay("date_added"))
        .order_by("bucket")
        .values("project", "bucket", "status")
        .annotate(count=Count("id"))
    )

    current_day, date_series_dict = start, {}
    while current_day < end:
        date_series_dict[current_day.isoformat()] = copy.deepcopy(base_day_format)
        current_day += timedelta(days=1)

    project_list = Project.objects.get_for_team_ids(team_ids=[team.id])
    agg_project_counts = {
        project.id: copy.deepcopy(date_series_dict) for project in project_list
    }
    for r in chain(bucketed_issues, new_issues):
        bucket = agg_project_counts[r["project"]][r["bucket"].isoformat()]
        bucket["total"] += r["count"]
        if not new_format and r["status"] != GroupHistoryStatus.UNRESOLVED:
            bucket["reviewed"] += r["count"]
        if new_format:
            bucket[status_to_string_lookup[r["status"]]] += r["count"]

    return Response(agg_project_counts)
url: https://github.com/getsentry/sentry/blob/83b1f25aac3e08075e0e2495bc29efaf35aca18a/src/sentry/api/endpoints/team_issue_breakdown.py#L23-L98

nwo: bitcraze/crazyflie-lib-python | sha: 876f0dc003b91ba5e4de05daae9d0b79cf600f81 | path: cflib/crazyflie/__init__.py | language: python
identifier: _IncomingPacketHandler.add_port_callback | parameters: (self, port, cb)
function:
def add_port_callback(self, port, cb):
    """Add a callback for data that comes on a specific port"""
    logger.debug('Adding callback on port [%d] to [%s]', port, cb)
    self.add_header_callback(cb, port, 0, 0xff, 0x0)
url: https://github.com/bitcraze/crazyflie-lib-python/blob/876f0dc003b91ba5e4de05daae9d0b79cf600f81/cflib/crazyflie/__init__.py#L365-L368

nwo: twilio/twilio-python | sha: 6e1e811ea57a1edfadd5161ace87397c563f6915 | path: twilio/rest/ip_messaging/v1/service/user/__init__.py | language: python
identifier: UserInstance.is_notifiable | parameters: (self)
function:
def is_notifiable(self):
    """
    :returns: The is_notifiable
    :rtype: bool
    """
    return self._properties['is_notifiable']
url: https://github.com/twilio/twilio-python/blob/6e1e811ea57a1edfadd5161ace87397c563f6915/twilio/rest/ip_messaging/v1/service/user/__init__.py#L420-L425

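The accessor above follows twilio-python's pattern of read-only attributes backed by a `_properties` dict built from the API payload. A stripped-down, self-contained sketch of that pattern (the class and payload here are illustrative, not Twilio's actual classes):

```python
class InstanceResource:
    """Minimal sketch: expose deserialized payload fields as properties."""

    def __init__(self, payload):
        # keep the raw deserialized payload in one dict
        self._properties = {
            'is_notifiable': payload.get('is_notifiable'),
        }

    @property
    def is_notifiable(self):
        """
        :returns: The is_notifiable
        :rtype: bool
        """
        return self._properties['is_notifiable']


user = InstanceResource({'is_notifiable': True})
assert user.is_notifiable is True
```

Keeping every field in `_properties` lets the generated accessors stay one-liners, at the cost of no attribute-level validation.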
nwo: magenta/magenta | sha: be6558f1a06984faff6d6949234f5fe9ad0ffdb5 | path: magenta/models/latent_transfer/train_dataspace.py | language: python
identifier: main | parameters: (unused_argv)
function:
def main(unused_argv):
  del unused_argv

  # Load Config
  config_name = FLAGS.config
  config_module = importlib.import_module(configs_module_prefix +
                                          '.%s' % config_name)
  config = config_module.config
  model_uid = common.get_model_uid(config_name, FLAGS.exp_uid)
  batch_size = config['batch_size']

  # Load dataset
  dataset = common.load_dataset(config)
  save_path = dataset.save_path
  train_data = dataset.train_data
  attr_train = dataset.attr_train
  eval_data = dataset.eval_data
  attr_eval = dataset.attr_eval

  # Make the directory
  save_dir = os.path.join(save_path, model_uid)
  best_dir = os.path.join(save_dir, 'best')
  tf.gfile.MakeDirs(save_dir)
  tf.gfile.MakeDirs(best_dir)
  tf.logging.info('Save Dir: %s', save_dir)

  np.random.seed(FLAGS.random_seed)
  # We use `N` in variable name to emphasis its being the Number of something.
  N_train = train_data.shape[0]  # pylint:disable=invalid-name
  N_eval = eval_data.shape[0]  # pylint:disable=invalid-name

  # Load Model
  tf.reset_default_graph()
  sess = tf.Session()
  m = model_dataspace.Model(config, name=model_uid)
  _ = m()  # noqa

  # Create summaries
  tf.summary.scalar('Train_Loss', m.vae_loss)
  tf.summary.scalar('Mean_Recon_LL', m.mean_recons)
  tf.summary.scalar('Mean_KL', m.mean_KL)
  scalar_summaries = tf.summary.merge_all()

  x_mean_, x_ = m.x_mean, m.x
  if common.dataset_is_mnist_family(config['dataset']):
    x_mean_ = tf.reshape(x_mean_, [-1, MNIST_SIZE, MNIST_SIZE, 1])
    x_ = tf.reshape(x_, [-1, MNIST_SIZE, MNIST_SIZE, 1])

  x_mean_summary = tf.summary.image(
      'Reconstruction', nn.tf_batch_image(x_mean_), max_outputs=1)
  x_summary = tf.summary.image('Original', nn.tf_batch_image(x_), max_outputs=1)
  sample_summary = tf.summary.image(
      'Sample', nn.tf_batch_image(x_mean_), max_outputs=1)

  # Summary writers
  train_writer = tf.summary.FileWriter(save_dir + '/vae_train', sess.graph)
  eval_writer = tf.summary.FileWriter(save_dir + '/vae_eval', sess.graph)

  # Initialize
  sess.run(tf.global_variables_initializer())

  i_start = 0
  running_N_eval = 30  # pylint:disable=invalid-name
  traces = {
      'i': [],
      'i_pred': [],
      'loss': [],
      'loss_eval': [],
  }

  best_eval_loss = np.inf
  vae_lr_ = np.logspace(np.log10(FLAGS.lr), np.log10(1e-6), FLAGS.n_iters)

  # Train the VAE
  for i in range(i_start, FLAGS.n_iters):
    start = (i * batch_size) % N_train
    end = start + batch_size
    batch = train_data[start:end]
    labels = attr_train[start:end]

    # train op
    res = sess.run(
        [m.train_vae, m.vae_loss, m.mean_recons, m.mean_KL, scalar_summaries], {
            m.x: batch,
            m.vae_lr: vae_lr_[i],
            m.labels: labels,
        })
    tf.logging.info('Iter: %d, Loss: %d', i, res[1])
    train_writer.add_summary(res[-1], i)

    if i % FLAGS.n_iters_per_eval == 0:
      # write training reconstructions
      if batch.shape[0] == batch_size:
        res = sess.run([x_summary, x_mean_summary], {
            m.x: batch,
            m.labels: labels,
        })
        train_writer.add_summary(res[0], i)
        train_writer.add_summary(res[1], i)

      # write sample reconstructions
      prior_sample = sess.run(m.prior_sample)
      res = sess.run([sample_summary], {
          m.q_z_sample: prior_sample,
          m.labels: labels,
      })
      train_writer.add_summary(res[0], i)

      # write eval summaries
      start = (i * batch_size) % N_eval
      end = start + batch_size
      batch = eval_data[start:end]
      labels = attr_eval[start:end]
      if batch.shape[0] == batch_size:
        res_eval = sess.run([
            m.vae_loss, m.mean_recons, m.mean_KL, scalar_summaries, x_summary,
            x_mean_summary
        ], {
            m.x: batch,
            m.labels: labels,
        })
        traces['loss_eval'].append(res_eval[0])
        eval_writer.add_summary(res_eval[-3], i)
        eval_writer.add_summary(res_eval[-2], i)
        eval_writer.add_summary(res_eval[-1], i)

    if i % FLAGS.n_iters_per_save == 0:
      smoothed_eval_loss = np.mean(traces['loss_eval'][-running_N_eval:])
      if smoothed_eval_loss < best_eval_loss:
        # Save the best model
        best_eval_loss = smoothed_eval_loss
        save_name = os.path.join(best_dir, 'vae_best_%s.ckpt' % model_uid)
        tf.logging.info('SAVING BEST! %s Iter: %d', save_name, i)
        m.vae_saver.save(sess, save_name)
        with tf.gfile.Open(os.path.join(best_dir, 'best_ckpt_iters.txt'),
                           'w') as f:
          f.write('%d' % i)
url: https://github.com/magenta/magenta/blob/be6558f1a06984faff6d6949234f5fe9ad0ffdb5/magenta/models/latent_transfer/train_dataspace.py#L50-L186

nwo: XaF/TraktForVLC | sha: 7851368d7ce62a15bb245fc078f3eab201c12357 | path: helper/commands/resolve.py | language: python
identifier: CommandResolve.add_arguments | parameters: (self, parser)
function:
def add_arguments(self, parser):
    parser.add_argument(
        '--meta',
        help='The metadata provided by VLC',
    )
    parser.add_argument(
        '--hash',
        dest='oshash',
        help='The hash of the media for OpenSubtitles resolution',
    )
    parser.add_argument(
        '--size',
        type=float,
        help='The size of the media, in bytes',
    )
    parser.add_argument(
        '--duration',
        type=float,
        help='The duration of the media, in seconds',
    )
    parser.add_argument(
        '--trakt-api-key',
        help='The Trakt API key to be used to resolve series from '
             'Trakt.tv',
    )
url: https://github.com/XaF/TraktForVLC/blob/7851368d7ce62a15bb245fc078f3eab201c12357/helper/commands/resolve.py#L460-L484

nwo: CenterForOpenScience/osf.io | sha: cc02691be017e61e2cd64f19b848b2f4c18dcc84 | path: addons/bitbucket/api.py | language: python
identifier: BitbucketClient.repo_default_branch | parameters: (self, user, repo)
function:
def repo_default_branch(self, user, repo):
    """Return the default branch for a BB repository (what they call the
    "main branch").

    API doc:
    https://developer.atlassian.com/bitbucket/api/2/reference/resource/repositories/%7Busername%7D/%7Brepo_slug%7D

    :param str user: Bitbucket user name
    :param str repo: Bitbucket repo name
    :rtype str:
    :return: name of the main branch
    """
    res = self._make_request(
        'GET',
        self._build_url(settings.BITBUCKET_V2_API_URL, 'repositories', user, repo),
        expects=(200, ),
        throws=HTTPError(401)
    )
    return res.json()['mainbranch']['name']
url: https://github.com/CenterForOpenScience/osf.io/blob/cc02691be017e61e2cd64f19b848b2f4c18dcc84/addons/bitbucket/api.py#L136-L155

nwo: jython/jython3 | sha: def4f8ec47cb7a9c799ea4c745f12badf92c5769 | path: lib-python/3.5.1/ast.py | language: python
identifier: parse | parameters: (source, filename='<unknown>', mode='exec')
function:
def parse(source, filename='<unknown>', mode='exec'):
    """
    Parse the source into an AST node.
    Equivalent to compile(source, filename, mode, PyCF_ONLY_AST).
    """
    return compile(source, filename, mode, PyCF_ONLY_AST)
url: https://github.com/jython/jython3/blob/def4f8ec47cb7a9c799ea4c745f12badf92c5769/lib-python/3.5.1/ast.py#L30-L35

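Since `ast.parse` is just `compile` with the `PyCF_ONLY_AST` flag, the two calls below produce equivalent trees:

```python
import ast

# ast.parse is sugar over compile(..., PyCF_ONLY_AST)
tree = ast.parse("x = 1 + 2", filename="<unknown>", mode="exec")
same = compile("x = 1 + 2", "<unknown>", "exec", ast.PyCF_ONLY_AST)

assert isinstance(tree, ast.Module)
assert ast.dump(tree) == ast.dump(same)
```

`ast.dump` is used for the comparison because AST nodes do not define structural equality themselves.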
nwo: researchmm/tasn | sha: 5dba8ccc096cedc63913730eeea14a9647911129 | path: tasn-mxnet/3rdparty/tvm/python/tvm/_ffi/_ctypes/node.py | language: python
identifier: NodeBase.__init_handle_by_constructor__ | parameters: (self, fconstructor, *args)
function:
def __init_handle_by_constructor__(self, fconstructor, *args):
    """Initialize the handle by calling constructor function.

    Parameters
    ----------
    fconstructor : Function
        Constructor function.

    args: list of objects
        The arguments to the constructor

    Note
    ----
    We have a special calling convention to call constructor functions.
    So the return handle is directly set into the Node object
    instead of creating a new Node.
    """
    # assign handle first to avoid error raising
    self.handle = None
    handle = __init_by_constructor__(fconstructor, args)
    if not isinstance(handle, NodeHandle):
        handle = NodeHandle(handle)
    self.handle = handle
url: https://github.com/researchmm/tasn/blob/5dba8ccc096cedc63913730eeea14a9647911129/tasn-mxnet/3rdparty/tvm/python/tvm/_ffi/_ctypes/node.py#L62-L84

nwo: Fenixin/Minecraft-Region-Fixer | sha: bfafd378ceb65116e4ea48cab24f1e6394051978 | path: gui/main.py | language: python
identifier: MainWindow.update_delete_buttons_status | parameters: (self, status)
function:
def update_delete_buttons_status(self, status):
    if status:
        self.delete_all_chunks_button.Enable()
        self.delete_all_regions_button.Enable()
    else:
        self.delete_all_chunks_button.Disable()
        self.delete_all_regions_button.Disable()
url: https://github.com/Fenixin/Minecraft-Region-Fixer/blob/bfafd378ceb65116e4ea48cab24f1e6394051978/gui/main.py#L363-L370

nwo: eirannejad/pyRevit | sha: 49c0b7eb54eb343458ce1365425e6552d0c47d44 | path: site-packages/sqlalchemy/sql/schema.py | language: python
identifier: ForeignKey._get_colspec | parameters: (self, schema=None, table_name=None)
function:
def _get_colspec(self, schema=None, table_name=None):
    """Return a string based 'column specification' for this
    :class:`.ForeignKey`.

    This is usually the equivalent of the string-based "tablename.colname"
    argument first passed to the object's constructor.
    """
    if schema:
        _schema, tname, colname = self._column_tokens
        if table_name is not None:
            tname = table_name
        return "%s.%s.%s" % (schema, tname, colname)
    elif table_name:
        schema, tname, colname = self._column_tokens
        if schema:
            return "%s.%s.%s" % (schema, table_name, colname)
        else:
            return "%s.%s" % (table_name, colname)
    elif self._table_column is not None:
        return "%s.%s" % (
            self._table_column.table.fullname, self._table_column.key)
    else:
        return self._colspec
url: https://github.com/eirannejad/pyRevit/blob/49c0b7eb54eb343458ce1365425e6552d0c47d44/site-packages/sqlalchemy/sql/schema.py#L1662-L1685

nwo: tp4a/teleport | sha: 1fafd34f1f775d2cf80ea4af6e44468d8e0b24ad | path: server/www/packages/packages-windows/x86/mako/runtime.py | language: python
identifier: _include_file | parameters: (context, uri, calling_uri, **kwargs)
function:
def _include_file(context, uri, calling_uri, **kwargs):
    """locate the template from the given uri and include it in
    the current output."""
    template = _lookup_template(context, uri, calling_uri)
    (callable_, ctx) = _populate_self_namespace(
        context._clean_inheritance_tokens(), template
    )
    kwargs = _kwargs_for_include(callable_, context._data, **kwargs)
    if template.include_error_handler:
        try:
            callable_(ctx, **kwargs)
        except Exception:
            result = template.include_error_handler(ctx, compat.exception_as())
            if not result:
                compat.reraise(*sys.exc_info())
    else:
        callable_(ctx, **kwargs)
url: https://github.com/tp4a/teleport/blob/1fafd34f1f775d2cf80ea4af6e44468d8e0b24ad/server/www/packages/packages-windows/x86/mako/runtime.py#L778-L795

nwo: ankonzoid/artificio | sha: d6879b540cc92705813d90331e2ab9b66dc20d89 | path: process_image/src/img2kmeans.py | language: python
identifier: choose_cluster_colors | parameters: (model, custom_colors, sort=True)
function:
def choose_cluster_colors(model, custom_colors, sort=True):
    # Return custom colors if no need to sort
    if sort == False:
        return custom_colors

    custom_colors = np.array(custom_colors, dtype=float)
    n_colors = len(custom_colors)

    # Sort k-means cluster centers by their euclidean length from (0,0,0)
    brightness_vec_user = np.zeros((n_colors), dtype=float)
    brightness_vec_kmeans = np.zeros((n_colors), dtype=float)
    for i in range(n_colors):
        rgb_vec_kmeans = model.cluster_centers_[i]
        rgb_vec_user = custom_colors[i]
        brightness_vec_user[i] = np.sum(rgb_vec_user)  # sum rgb pixels for brightness
        brightness_vec_kmeans[i] = np.sum(rgb_vec_kmeans)  # sum rgb pixels for brightness

    # Sort by ascending brightness
    i_brightness_user_sorted = np.argsort(brightness_vec_user)  # ascending brightness
    i_brightness_kmeans_sorted = np.argsort(brightness_vec_kmeans)  # ascending brightness

    # Build sorted custom colors
    customized_kmeans_cluster_centers = np.zeros((n_colors, 3), dtype=float)
    for i in range(n_colors):
        j = i_brightness_kmeans_sorted[i]
        k = i_brightness_user_sorted[i]
        customized_kmeans_cluster_centers[j] = custom_colors[k]

    cluster_colors = custom_colors
    return cluster_colors
url: https://github.com/ankonzoid/artificio/blob/d6879b540cc92705813d90331e2ab9b66dc20d89/process_image/src/img2kmeans.py#L42-L74

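The matching step in the record above pairs the i-th dimmest k-means center with the i-th dimmest user color via two `argsort` calls over summed RGB values. The same pairing in pure Python, with toy RGB triples standing in for real cluster centers:

```python
def argsort(values):
    # indices that would sort `values` ascending, like numpy.argsort
    return sorted(range(len(values)), key=values.__getitem__)

kmeans_centers = [(200, 200, 200), (10, 10, 10), (90, 90, 90)]
user_colors = [(0, 0, 0), (255, 255, 255), (128, 128, 128)]

order_kmeans = argsort([sum(c) for c in kmeans_centers])
order_user = argsort([sum(c) for c in user_colors])

# give each center the user color of matching brightness rank
matched = [None] * len(user_colors)
for rank in range(len(user_colors)):
    matched[order_kmeans[rank]] = user_colors[order_user[rank]]

assert matched[1] == (0, 0, 0)        # dimmest center -> black
assert matched[0] == (255, 255, 255)  # brightest center -> white
assert matched[2] == (128, 128, 128)
```

Note that the brightness proxy is a plain channel sum, not a true euclidean length, matching the record's comments rather than its "euclidean length" wording.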
nwo: dry-python/returns | sha: dfc1613f22ef6cbc5d1c48e086affe16c1bd33bb | path: returns/iterables.py | language: python
identifier: _concat_failable_safely
function:
def _concat_failable_safely(
    current: KindN[
        _FailableKind, _FirstType, _SecondType, _ThirdType,
    ],
    acc: KindN[
        _FailableKind, _UpdatedType, _SecondType, _ThirdType,
    ],
    function: KindN[
        _FailableKind,
        Callable[[_FirstType], Callable[[_UpdatedType], _UpdatedType]],
        _SecondType,
        _ThirdType,
    ],
) -> KindN[_FailableKind, _UpdatedType, _SecondType, _ThirdType]:
    """
    Concats two ``FailableN`` using a curried-like function and a fallback.

    We need both ``.apply`` and ``.lash`` methods here.
    """
    return _concat_applicative(current, acc, function).lash(lambda _: acc)
url: https://github.com/dry-python/returns/blob/dfc1613f22ef6cbc5d1c48e086affe16c1bd33bb/returns/iterables.py#L397-L416

nwo: mutpy/mutpy | sha: 5c8b3ca0d365083a4da8333f7fce8783114371fa | path: mutpy/utils.py | language: python
identifier: ModuleInjector.try_inject_other | parameters: (self, imported_as, target)
function:
def try_inject_other(self, imported_as, target):
    if imported_as in self.source.__dict__ and not self.is_restricted(imported_as):
        target.__dict__[imported_as] = self.source.__dict__[imported_as]
url: https://github.com/mutpy/mutpy/blob/5c8b3ca0d365083a4da8333f7fce8783114371fa/mutpy/utils.py#L249-L251

nwo: sahana/eden | sha: 1696fa50e90ce967df69f66b571af45356cc18da | path: controllers/water.py | language: python
identifier: debris_basin | parameters: ()
function:
def debris_basin():
    """ Debris Basins, RESTful controller """
    return s3_rest_controller()
url: https://github.com/sahana/eden/blob/1696fa50e90ce967df69f66b571af45356cc18da/controllers/water.py#L18-L21

nwo: wistbean/learn_python3_spider | sha: 73c873f4845f4385f097e5057407d03dd37a117b | path: stackoverflow/venv/lib/python3.6/site-packages/twisted/runner/procmontap.py | language: python
identifier: Options.parseArgs | parameters: (self, *args)
function:
def parseArgs(self, *args):
    """
    Grab the command line that is going to be started and monitored
    """
    self['args'] = args
url: https://github.com/wistbean/learn_python3_spider/blob/73c873f4845f4385f097e5057407d03dd37a117b/stackoverflow/venv/lib/python3.6/site-packages/twisted/runner/procmontap.py#L48-L52

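In twisted's `usage.Options`, `parseArgs` receives every positional argument left after option parsing, and storing them under `self['args']` keeps the whole command line to spawn and monitor. The closest stdlib equivalent uses `argparse.REMAINDER` (program name here is illustrative):

```python
import argparse

parser = argparse.ArgumentParser(prog='procmontap-like')
# capture the entire trailing command line to be started and monitored
parser.add_argument('args', nargs=argparse.REMAINDER)

opts = parser.parse_args(['/usr/bin/sleep', '10'])
assert opts.args == ['/usr/bin/sleep', '10']
```

`REMAINDER` stops option processing at the first positional, so flags meant for the monitored command pass through untouched.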
nwo: demisto/content | sha: 5c664a65b992ac8ca90ac3f11b1b2cdf11ee9b07 | path: Packs/USTA/Integrations/USTA/USTA.py | language: python
identifier: search_identity_leaks | parameters: (client: Client, args: Dict[str, Any])
function:
def search_identity_leaks(client: Client, args: Dict[str, Any]) -> CommandResults:
    """Gets the leaked accounts related to your company using the '/threat-stream/identity-leaks' API endpoint

    :type start: ``str``
    :param - start: Starting parameter for analysis

    :type end: ``str``
    :param - end: End parameter for analysis
    """
    start = args.get('start')
    end = args.get('end')
    if start:
        startDate = timeToEpoch(start)
    else:
        startDate = start
    if end:
        endDate = timeToEpoch(end)
    else:
        endDate = end

    param = {
        'start': startDate,
        'end': endDate
    }

    identityLeaks = client.get_identity_leaks(param=param)
    readable_output = tableToMarkdown('Identity Leaks', identityLeaks)

    return CommandResults(
        readable_output=readable_output,
        outputs_prefix='USTA.Identity_Leaks',
        outputs_key_field='signature',
        outputs=identityLeaks
    )
url: https://github.com/demisto/content/blob/5c664a65b992ac8ca90ac3f11b1b2cdf11ee9b07/Packs/USTA/Integrations/USTA/USTA.py#L315-L351

nwo: otsaloma/gaupol | sha: 6dec7826654d223c71a8d3279dcd967e95c46714 | path: gaupol/assistants.py | language: python
identifier: IntroductionPage._init_tree_view | parameters: (self)
function:
def _init_tree_view(self):
    """Initialize the tree view of tasks."""
    store = Gtk.ListStore(object, bool, str)
    self._tree_view.set_model(store)
    selection = self._tree_view.get_selection()
    selection.set_mode(Gtk.SelectionMode.SINGLE)
    renderer = Gtk.CellRendererToggle()
    renderer.props.activatable = True
    renderer.props.xpad = 6
    renderer.connect("toggled", self._on_tree_view_cell_toggled)
    column = Gtk.TreeViewColumn("", renderer, active=1)
    self._tree_view.append_column(column)
    renderer = Gtk.CellRendererText()
    renderer.props.ellipsize = Pango.EllipsizeMode.END
    column = Gtk.TreeViewColumn("", renderer, markup=2)
    self._tree_view.append_column(column)
url: https://github.com/otsaloma/gaupol/blob/6dec7826654d223c71a8d3279dcd967e95c46714/gaupol/assistants.py#L145-L160

nwo: deepchem/deepchem | sha: 054eb4b2b082e3df8e1a8e77f36a52137ae6e375 | path: deepchem/models/layers.py | language: python
identifier: MolGANAggregationLayer.get_config | parameters: (self)
function:
def get_config(self) -> Dict:
    """
    Returns config dictionary for this layer.
    """
    config = super(MolGANAggregationLayer, self).get_config()
    config["units"] = self.units
    config["activation"] = self.activation
    config["dropout_rate"] = self.dropout_rate
    config["edges"] = self.edges
    return config
url: https://github.com/deepchem/deepchem/blob/054eb4b2b082e3df8e1a8e77f36a52137ae6e375/deepchem/models/layers.py#L594-L604

unknown-horizons/unknown-horizons | 7397fb333006d26c3d9fe796c7bd9cb8c3b43a49 | horizons/world/traderoute.py | python | TradeRoute.has_route | (cls, db, worldid) | return len(db("SELECT * FROM ship_route WHERE ship_id = ?", worldid)) != 0 | Check if a savegame contains route information for a certain ship | Check if a savegame contains route information for a certain ship | [
"Check",
"if",
"a",
"savegame",
"contains",
"route",
"information",
"for",
"a",
"certain",
"ship"
] | def has_route(cls, db, worldid):
"""Check if a savegame contains route information for a certain ship"""
return len(db("SELECT * FROM ship_route WHERE ship_id = ?", worldid)) != 0 | [
"def",
"has_route",
"(",
"cls",
",",
"db",
",",
"worldid",
")",
":",
"return",
"len",
"(",
"db",
"(",
"\"SELECT * FROM ship_route WHERE ship_id = ?\"",
",",
"worldid",
")",
")",
"!=",
"0"
] | https://github.com/unknown-horizons/unknown-horizons/blob/7397fb333006d26c3d9fe796c7bd9cb8c3b43a49/horizons/world/traderoute.py#L280-L282 | |
tendenci/tendenci | 0f2c348cc0e7d41bc56f50b00ce05544b083bf1d | tendenci/apps/memberships/models.py | python | MembershipDefault.membership_type_link | (self) | return link | [] | def membership_type_link(self):
link = '<a href="%s">%s</a>' % (
reverse('admin:memberships_membershiptype_change',
args=[self.membership_type.id]),
self.membership_type.name)
if self.corporate_membership_id:
from tendenci.apps.corporate_memberships.models import CorpMembership
[corp_member] = CorpMembership.objects.filter(id=self.corporate_membership_id)[:1] or [None]
if corp_member:
link = '%s (<a href="%s">corp</a> %s)' % (
link,
reverse('corpmembership.view',
args=[self.corporate_membership_id]),
corp_member.status_detail)
return link | [
"def",
"membership_type_link",
"(",
"self",
")",
":",
"link",
"=",
"'<a href=\"%s\">%s</a>'",
"%",
"(",
"reverse",
"(",
"'admin:memberships_membershiptype_change'",
",",
"args",
"=",
"[",
"self",
".",
"membership_type",
".",
"id",
"]",
")",
",",
"self",
".",
"... | https://github.com/tendenci/tendenci/blob/0f2c348cc0e7d41bc56f50b00ce05544b083bf1d/tendenci/apps/memberships/models.py#L1882-L1896 | |||
Antergos/Cnchi | 13ac2209da9432d453e0097cf48a107640b563a9 | src/installation/auto_partition.py | python | test_module | () | Test autopartition module | Test autopartition module | [
"Test",
"autopartition",
"module"
] | def test_module():
""" Test autopartition module """
import gettext
_ = gettext.gettext
os.makedirs("/var/log/cnchi")
logging.basicConfig(
filename="/var/log/cnchi/cnchi-autopartition.log",
level=logging.DEBUG)
settings = {
'use_luks': True,
'luks_password': "luks",
'use_lvm': True,
'use_home': True,
'bootloader': "grub2"}
AutoPartition(
dest_dir="/install",
auto_device="/dev/sdb",
settings=settings,
callback_queue=None).run() | [
"def",
"test_module",
"(",
")",
":",
"import",
"gettext",
"_",
"=",
"gettext",
".",
"gettext",
"os",
".",
"makedirs",
"(",
"\"/var/log/cnchi\"",
")",
"logging",
".",
"basicConfig",
"(",
"filename",
"=",
"\"/var/log/cnchi/cnchi-autopartition.log\"",
",",
"level",
... | https://github.com/Antergos/Cnchi/blob/13ac2209da9432d453e0097cf48a107640b563a9/src/installation/auto_partition.py#L781-L803 | ||
webrecorder/pywb | 7ff789f1a8e246720dab7744617824aa1e7d06c9 | pywb/apps/cli.py | python | BaseCli.__init__ | (self, args=None, default_port=8080, desc='') | :param args: CLI arguments
:param int default_port: The default port that the application will use
:param str desc: The description for the application to be started | :param args: CLI arguments
:param int default_port: The default port that the application will use
:param str desc: The description for the application to be started | [
":",
"param",
"args",
":",
"CLI",
"arguments",
":",
"param",
"int",
"default_port",
":",
"The",
"default",
"port",
"that",
"the",
"application",
"will",
"use",
":",
"param",
"str",
"desc",
":",
"The",
"description",
"for",
"the",
"application",
"to",
"be",... | def __init__(self, args=None, default_port=8080, desc=''):
"""
:param args: CLI arguments
:param int default_port: The default port that the application will use
:param str desc: The description for the application to be started
"""
parser = ArgumentParser(description=desc)
parser.add_argument("-V", "--version", action="version", version=get_version())
parser.add_argument('-p', '--port', type=int, default=default_port,
help='Port to listen on (default %s)' % default_port)
parser.add_argument('-b', '--bind', default='0.0.0.0',
help='Address to listen on (default 0.0.0.0)')
parser.add_argument('-t', '--threads', type=int, default=4,
help='Number of threads to use (default 4)')
parser.add_argument('--debug', action='store_true',
help='Enable debug mode')
parser.add_argument('--profile', action='store_true',
help='Enable profile mode')
parser.add_argument('--live', action='store_true',
help='Add live-web handler at /live')
parser.add_argument('--record', action='store_true',
help='Enable recording from the live web')
parser.add_argument('--proxy',
help='Enable HTTP/S proxy on specified collection')
parser.add_argument('-pt', '--proxy-default-timestamp',
help='Default timestamp / ISO date to use for proxy requests')
parser.add_argument('--proxy-record', action='store_true',
help='Enable proxy recording into specified collection')
parser.add_argument('--proxy-enable-wombat', action='store_true',
help='Enable partial wombat JS overrides support in proxy mode')
parser.add_argument('--enable-auto-fetch', action='store_true',
help='Enable auto-fetch worker to capture resources from stylesheets, <img srcset> when running in live/recording mode')
self.desc = desc
self.extra_config = {}
self._extend_parser(parser)
self.r = parser.parse_args(args)
logging.basicConfig(format='%(asctime)s: [%(levelname)s]: %(message)s',
level=logging.DEBUG if self.r.debug else logging.INFO)
if self.r.proxy:
self.extra_config['proxy'] = {
'coll': self.r.proxy,
'recording': self.r.proxy_record,
'enable_wombat': self.r.proxy_enable_wombat,
'default_timestamp': self.r.proxy_default_timestamp,
}
self.r.live = True
self.extra_config['enable_auto_fetch'] = self.r.enable_auto_fetch
self.application = self.load()
if self.r.profile:
from werkzeug.contrib.profiler import ProfilerMiddleware
self.application = ProfilerMiddleware(self.application) | [
"def",
"__init__",
"(",
"self",
",",
"args",
"=",
"None",
",",
"default_port",
"=",
"8080",
",",
"desc",
"=",
"''",
")",
":",
"parser",
"=",
"ArgumentParser",
"(",
"description",
"=",
"desc",
")",
"parser",
".",
"add_argument",
"(",
"\"-V\"",
",",
"\"-... | https://github.com/webrecorder/pywb/blob/7ff789f1a8e246720dab7744617824aa1e7d06c9/pywb/apps/cli.py#L43-L102 | ||
onnx/onnx-coreml | 141fc33d7217674ea8bda36494fa8089a543a3f3 | onnx_coreml/_transformers.py | python | ConstantFillToInitializers.__call__ | (self, graph) | return graph.create_graph(nodes=transformed_nodes) | [] | def __call__(self, graph): # type: (Graph) -> Graph
output_names = [str(output_[0]) for output_ in graph.outputs]
nodes_to_be_removed = []
for node in graph.nodes:
if node.op_type == 'ConstantFill' and (node.name not in output_names) and \
node.attrs.get('input_as_shape', 0) and node.inputs[0] in node.input_tensors \
and node.attrs.get('extra_shape', None) is None:
s = node.input_tensors[node.inputs[0]]
x = np.ones(tuple(s.astype(int))) * node.attrs.get('value', 0.0)
nodes_to_be_removed.append(node)
for child in node.children:
child.input_tensors[node.outputs[0]] = x
child.parents.remove(node)
graph.shape_dict[node.outputs[0]] = x.shape
transformed_nodes = []
for node in graph.nodes:
if node not in nodes_to_be_removed:
transformed_nodes.append(node)
return graph.create_graph(nodes=transformed_nodes) | [
"def",
"__call__",
"(",
"self",
",",
"graph",
")",
":",
"# type: (Graph) -> Graph",
"output_names",
"=",
"[",
"str",
"(",
"output_",
"[",
"0",
"]",
")",
"for",
"output_",
"in",
"graph",
".",
"outputs",
"]",
"nodes_to_be_removed",
"=",
"[",
"]",
"for",
"n... | https://github.com/onnx/onnx-coreml/blob/141fc33d7217674ea8bda36494fa8089a543a3f3/onnx_coreml/_transformers.py#L592-L612 | |||
IronLanguages/main | a949455434b1fda8c783289e897e78a9a0caabb5 | External.LCA_RESTRICTED/Languages/IronPython/repackage/pip/pip/utils/__init__.py | python | untar_file | (filename, location) | Untar the file (with path `filename`) to the destination `location`.
All files are written based on system defaults and umask (i.e. permissions
are not preserved), except that regular file members with any execute
permissions (user, group, or world) have "chmod +x" applied after being
written. Note that for windows, any execute changes using os.chmod are
no-ops per the python docs. | Untar the file (with path `filename`) to the destination `location`.
All files are written based on system defaults and umask (i.e. permissions
are not preserved), except that regular file members with any execute
permissions (user, group, or world) have "chmod +x" applied after being
written. Note that for windows, any execute changes using os.chmod are
no-ops per the python docs. | [
"Untar",
"the",
"file",
"(",
"with",
"path",
"filename",
")",
"to",
"the",
"destination",
"location",
".",
"All",
"files",
"are",
"written",
"based",
"on",
"system",
"defaults",
"and",
"umask",
"(",
"i",
".",
"e",
".",
"permissions",
"are",
"not",
"prese... | def untar_file(filename, location):
"""
Untar the file (with path `filename`) to the destination `location`.
All files are written based on system defaults and umask (i.e. permissions
are not preserved), except that regular file members with any execute
permissions (user, group, or world) have "chmod +x" applied after being
written. Note that for windows, any execute changes using os.chmod are
no-ops per the python docs.
"""
ensure_dir(location)
if filename.lower().endswith('.gz') or filename.lower().endswith('.tgz'):
mode = 'r:gz'
elif filename.lower().endswith(BZ2_EXTENSIONS):
mode = 'r:bz2'
elif filename.lower().endswith(XZ_EXTENSIONS):
mode = 'r:xz'
elif filename.lower().endswith('.tar'):
mode = 'r'
else:
logger.warning(
'Cannot determine compression type for file %s', filename,
)
mode = 'r:*'
tar = tarfile.open(filename, mode)
try:
# note: python<=2.5 doesn't seem to know about pax headers, filter them
leading = has_leading_dir([
member.name for member in tar.getmembers()
if member.name != 'pax_global_header'
])
for member in tar.getmembers():
fn = member.name
if fn == 'pax_global_header':
continue
if leading:
fn = split_leading_dir(fn)[1]
path = os.path.join(location, fn)
if member.isdir():
ensure_dir(path)
elif member.issym():
try:
tar._extract_member(member, path)
except Exception as exc:
# Some corrupt tar files seem to produce this
# (specifically bad symlinks)
logger.warning(
'In the tar file %s the member %s is invalid: %s',
filename, member.name, exc,
)
continue
else:
try:
fp = tar.extractfile(member)
except (KeyError, AttributeError) as exc:
# Some corrupt tar files seem to produce this
# (specifically bad symlinks)
logger.warning(
'In the tar file %s the member %s is invalid: %s',
filename, member.name, exc,
)
continue
ensure_dir(os.path.dirname(path))
with open(path, 'wb') as destfp:
shutil.copyfileobj(fp, destfp)
fp.close()
# Update the timestamp (useful for cython compiled files)
tar.utime(member, path)
# member have any execute permissions for user/group/world?
if member.mode & 0o111:
# make dest file have execute for user/group/world
# no-op on windows per python docs
os.chmod(path, (0o777 - current_umask() | 0o111))
finally:
tar.close() | [
"def",
"untar_file",
"(",
"filename",
",",
"location",
")",
":",
"ensure_dir",
"(",
"location",
")",
"if",
"filename",
".",
"lower",
"(",
")",
".",
"endswith",
"(",
"'.gz'",
")",
"or",
"filename",
".",
"lower",
"(",
")",
".",
"endswith",
"(",
"'.tgz'",... | https://github.com/IronLanguages/main/blob/a949455434b1fda8c783289e897e78a9a0caabb5/External.LCA_RESTRICTED/Languages/IronPython/repackage/pip/pip/utils/__init__.py#L515-L588 |