Schema (column: type, min–max length):
  nwo:                stringlengths, 5–106
  sha:                stringlengths, 40–40
  path:               stringlengths, 4–174
  language:           stringclasses, 1 value
  identifier:         stringlengths, 1–140
  parameters:         stringlengths, 0–87.7k
  argument_list:      stringclasses, 1 value
  return_statement:   stringlengths, 0–426k
  docstring:          stringlengths, 0–64.3k
  docstring_summary:  stringlengths, 0–26.3k
  docstring_tokens:   list
  function:           stringlengths, 18–4.83M
  function_tokens:    list
  url:                stringlengths, 83–304
tensorflow/models
6b8bb0cbeb3e10415c7a87448f08adc3c484c1d3
official/vision/beta/projects/yolo/ops/preprocessing_ops.py
python
resize_and_jitter_image
(image, desired_size, jitter=0.0, letter_box=None, random_pad=True, crop_only=False, shiftx=0.5, shifty=0.5, cut=None, method=tf.image.ResizeMethod.BILINEAR, seed=None)
Resize, Pad, and distort a given input image. Args: image: a `Tensor` of shape [height, width, 3] representing an image. desired_size: a `Tensor` or `int` list/tuple of two elements representing [height, width] of the desired actual output image size. jitter: an `int` representing the maximum jittering that can be applied to the image. letter_box: a `bool` representing if letterboxing should be applied. random_pad: a `bool` representing if random padding should be applied. crop_only: a `bool` representing if only cropping will be applied. shiftx: a `float` indicating if the image is in the left or right. shifty: a `float` value indicating if the image is in the top or bottom. cut: a `float` value indicating the desired center of the final patched image. method: function to resize input image to scaled image. seed: seed for random scale jittering. Returns: image_: a `Tensor` of shape [height, width, 3] where [height, width] equals to `desired_size`. infos: a 2D `Tensor` that encodes the information of the image and the applied preprocessing. It is in the format of [[original_height, original_width], [desired_height, desired_width], [y_scale, x_scale], [y_offset, x_offset]], where [desired_height, desired_width] is the actual scaled image size, and [y_scale, x_scale] is the scaling factor, which is the ratio of scaled dimension / original dimension. cast([original_width, original_height, width, height, ptop, pleft, pbottom, pright], tf.float32): a `Tensor` containing the information of the image and the applied preprocessing.
Resize, Pad, and distort a given input image.
[ "Resize", "Pad", "and", "distort", "a", "given", "input", "image", "." ]
def resize_and_jitter_image(image,
                            desired_size,
                            jitter=0.0,
                            letter_box=None,
                            random_pad=True,
                            crop_only=False,
                            shiftx=0.5,
                            shifty=0.5,
                            cut=None,
                            method=tf.image.ResizeMethod.BILINEAR,
                            seed=None):
  """Resize, Pad, and distort a given input image.

  Args:
    image: a `Tensor` of shape [height, width, 3] representing an image.
    desired_size: a `Tensor` or `int` list/tuple of two elements representing
      [height, width] of the desired actual output image size.
    jitter: an `int` representing the maximum jittering that can be applied to
      the image.
    letter_box: a `bool` representing if letterboxing should be applied.
    random_pad: a `bool` representing if random padding should be applied.
    crop_only: a `bool` representing if only cropping will be applied.
    shiftx: a `float` indicating if the image is in the left or right.
    shifty: a `float` value indicating if the image is in the top or bottom.
    cut: a `float` value indicating the desired center of the final patched
      image.
    method: function to resize input image to scaled image.
    seed: seed for random scale jittering.

  Returns:
    image_: a `Tensor` of shape [height, width, 3] where [height, width]
      equals to `desired_size`.
    infos: a 2D `Tensor` that encodes the information of the image and the
      applied preprocessing. It is in the format of
      [[original_height, original_width], [desired_height, desired_width],
      [y_scale, x_scale], [y_offset, x_offset]], where
      [desired_height, desired_width] is the actual scaled image size, and
      [y_scale, x_scale] is the scaling factor, which is the ratio of
      scaled dimension / original dimension.
    cast([original_width, original_height, width, height, ptop, pleft,
      pbottom, pright], tf.float32): a `Tensor` containing the information
      of the image and the applied preprocessing.
  """

  def intersection(a, b):
    """Finds the intersection between 2 crops."""
    minx = tf.maximum(a[0], b[0])
    miny = tf.maximum(a[1], b[1])
    maxx = tf.minimum(a[2], b[2])
    maxy = tf.minimum(a[3], b[3])
    return tf.convert_to_tensor([minx, miny, maxx, maxy])

  def cast(values, dtype):
    return [tf.cast(value, dtype) for value in values]

  if jitter > 0.5 or jitter < 0:
    raise ValueError('maximum change in aspect ratio must be between 0 and 0.5')

  with tf.name_scope('resize_and_jitter_image'):
    # Cast all parameters to a usable float data type.
    jitter = tf.cast(jitter, tf.float32)
    original_dtype, original_dims = image.dtype, tf.shape(image)[:2]

    # Original width, original height, desired width, desired height.
    original_width, original_height, width, height = cast(
        [original_dims[1], original_dims[0], desired_size[1], desired_size[0]],
        tf.float32)

    # Compute the random delta width and height etc. and randomize the
    # location of the corner points.
    jitter_width = original_width * jitter
    jitter_height = original_height * jitter
    pleft = random_uniform_strong(
        -jitter_width, jitter_width, jitter_width.dtype, seed=seed)
    pright = random_uniform_strong(
        -jitter_width, jitter_width, jitter_width.dtype, seed=seed)
    ptop = random_uniform_strong(
        -jitter_height, jitter_height, jitter_height.dtype, seed=seed)
    pbottom = random_uniform_strong(
        -jitter_height, jitter_height, jitter_height.dtype, seed=seed)

    # Letter box the image.
    if letter_box:
      (image_aspect_ratio,
       input_aspect_ratio) = original_width / original_height, width / height
      distorted_aspect = image_aspect_ratio / input_aspect_ratio

      delta_h, delta_w = 0.0, 0.0
      pullin_h, pullin_w = 0.0, 0.0
      if distorted_aspect > 1:
        delta_h = ((original_width / input_aspect_ratio) - original_height) / 2
      else:
        delta_w = ((original_height * input_aspect_ratio) - original_width) / 2

      ptop = ptop - delta_h - pullin_h
      pbottom = pbottom - delta_h - pullin_h
      pright = pright - delta_w - pullin_w
      pleft = pleft - delta_w - pullin_w

    # Compute the width and height to crop or pad to, and clip all crops
    # to be contained within the image.
    swidth = original_width - pleft - pright
    sheight = original_height - ptop - pbottom
    src_crop = intersection([ptop, pleft, sheight + ptop, swidth + pleft],
                            [0, 0, original_height, original_width])

    # Random padding used for mosaic.
    h_ = src_crop[2] - src_crop[0]
    w_ = src_crop[3] - src_crop[1]
    if random_pad:
      rmh = tf.maximum(0.0, -ptop)
      rmw = tf.maximum(0.0, -pleft)
    else:
      rmw = (swidth - w_) * shiftx
      rmh = (sheight - h_) * shifty

    # Cast cropping params to usable dtype.
    src_crop = tf.cast(src_crop, tf.int32)

    # Compute padding parameters.
    dst_shape = [rmh, rmw, rmh + h_, rmw + w_]
    ptop, pleft, pbottom, pright = dst_shape
    pad = dst_shape * tf.cast([1, 1, -1, -1], ptop.dtype)
    pad += tf.cast([0, 0, sheight, swidth], ptop.dtype)
    pad = tf.cast(pad, tf.int32)

    infos = []

    # Crop the image to desired size.
    cropped_image = tf.slice(
        image, [src_crop[0], src_crop[1], 0],
        [src_crop[2] - src_crop[0], src_crop[3] - src_crop[1], -1])
    crop_info = tf.stack([
        tf.cast(original_dims, tf.float32),
        tf.cast(tf.shape(cropped_image)[:2], dtype=tf.float32),
        tf.ones_like(original_dims, dtype=tf.float32),
        tf.cast(src_crop[:2], tf.float32)
    ])
    infos.append(crop_info)

    if crop_only:
      if not letter_box:
        h_, w_ = cast(get_image_shape(cropped_image), width.dtype)
        width = tf.cast(tf.round((w_ * width) / swidth), tf.int32)
        height = tf.cast(tf.round((h_ * height) / sheight), tf.int32)
        cropped_image = tf.image.resize(
            cropped_image, [height, width], method=method)
        cropped_image = tf.cast(cropped_image, original_dtype)
      return cropped_image, infos, cast([
          original_width, original_height, width, height, ptop, pleft,
          pbottom, pright
      ], tf.int32)

    # Pad the image to desired size.
    image_ = tf.pad(
        cropped_image, [[pad[0], pad[2]], [pad[1], pad[3]], [0, 0]],
        constant_values=PAD_VALUE)

    # Pad and scale info.
    isize = tf.cast(tf.shape(image_)[:2], dtype=tf.float32)
    osize = tf.cast((desired_size[0], desired_size[1]), dtype=tf.float32)
    pad_info = tf.stack([
        tf.cast(tf.shape(cropped_image)[:2], tf.float32),
        osize,
        osize / isize,
        (-tf.cast(pad[:2], tf.float32) * osize / isize)
    ])
    infos.append(pad_info)

    temp = tf.shape(image_)[:2]
    cond = temp > tf.cast(desired_size, temp.dtype)
    if tf.reduce_any(cond):
      size = tf.cast(desired_size, temp.dtype)
      size = tf.where(cond, size, temp)
      image_ = tf.image.resize(
          image_, (size[0], size[1]), method=tf.image.ResizeMethod.AREA)
      image_ = tf.cast(image_, original_dtype)

    image_ = tf.image.resize(
        image_, (desired_size[0], desired_size[1]),
        method=tf.image.ResizeMethod.BILINEAR,
        antialias=False)
    image_ = tf.cast(image_, original_dtype)

    if cut is not None:
      image_, crop_info = mosaic_cut(image_, original_width, original_height,
                                     width, height, cut, ptop, pleft, pbottom,
                                     pright, shiftx, shifty)
      infos.append(crop_info)
    return image_, infos, cast([
        original_width, original_height, width, height, ptop, pleft, pbottom,
        pright
    ], tf.float32)
[ "def", "resize_and_jitter_image", "(", "image", ",", "desired_size", ",", "jitter", "=", "0.0", ",", "letter_box", "=", "None", ",", "random_pad", "=", "True", ",", "crop_only", "=", "False", ",", "shiftx", "=", "0.5", ",", "shifty", "=", "0.5", ",", "cu...
https://github.com/tensorflow/models/blob/6b8bb0cbeb3e10415c7a87448f08adc3c484c1d3/official/vision/beta/projects/yolo/ops/preprocessing_ops.py#L328-L520
ipython/ipython
c0abea7a6dfe52c1f74c9d0387d4accadba7cc14
IPython/core/events.py
python
EventManager.unregister
(self, event, function)
Remove a callback from the given event.
Remove a callback from the given event.
[ "Remove", "a", "callback", "from", "the", "given", "event", "." ]
def unregister(self, event, function):
    """Remove a callback from the given event."""
    if function in self.callbacks[event]:
        return self.callbacks[event].remove(function)

    # Remove callback in case ``function`` was adapted by `backcall`.
    for callback in self.callbacks[event]:
        try:
            if callback.__wrapped__ is function:
                return self.callbacks[event].remove(callback)
        except AttributeError:
            pass

    raise ValueError('Function {!r} is not registered as a {} callback'.format(
        function, event))
[ "def", "unregister", "(", "self", ",", "event", ",", "function", ")", ":", "if", "function", "in", "self", ".", "callbacks", "[", "event", "]", ":", "return", "self", ".", "callbacks", "[", "event", "]", ".", "remove", "(", "function", ")", "# Remove c...
https://github.com/ipython/ipython/blob/c0abea7a6dfe52c1f74c9d0387d4accadba7cc14/IPython/core/events.py#L66-L79
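The unregister logic above has two paths: remove a directly registered function, otherwise scan for a wrapped callback whose `__wrapped__` points back at the original. A minimal standalone sketch of that behavior; `MiniEventManager` is a hypothetical stand-in, not IPython's class, and `functools.wraps` is used here to simulate the `__wrapped__` attribute that `backcall` adaptation would set:

```python
import functools

class MiniEventManager:
    """Hypothetical minimal stand-in for IPython's EventManager unregister logic."""

    def __init__(self, events):
        self.callbacks = {e: [] for e in events}

    def register(self, event, function):
        self.callbacks[event].append(function)

    def unregister(self, event, function):
        # Fast path: the function was registered as-is.
        if function in self.callbacks[event]:
            return self.callbacks[event].remove(function)
        # Slow path: the registered callback may wrap the original function,
        # in which case the original is reachable via __wrapped__.
        for callback in self.callbacks[event]:
            try:
                if callback.__wrapped__ is function:
                    return self.callbacks[event].remove(callback)
            except AttributeError:
                pass
        raise ValueError('Function {!r} is not registered as a {} callback'
                         .format(function, event))

def raw():
    pass

@functools.wraps(raw)   # sets wrapper.__wrapped__ = raw
def wrapper():
    raw()

mgr = MiniEventManager(['pre_run'])
mgr.register('pre_run', wrapper)
mgr.unregister('pre_run', raw)   # found via __wrapped__, not identity
```

After the call, the callback list is empty; unregistering again raises `ValueError`, matching the original's error path.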
containernet/containernet
7b2ae38d691b2ed8da2b2700b85ed03562271d01
mininet/node.py
python
OVSSwitch.TCReapply
( intf )
Unfortunately OVS and Mininet are fighting over tc queuing disciplines. As a quick hack/ workaround, we clear OVS's and reapply our own.
Unfortunately OVS and Mininet are fighting over tc queuing disciplines. As a quick hack/ workaround, we clear OVS's and reapply our own.
[ "Unfortunately", "OVS", "and", "Mininet", "are", "fighting", "over", "tc", "queuing", "disciplines", ".", "As", "a", "quick", "hack", "/", "workaround", "we", "clear", "OVS", "s", "and", "reapply", "our", "own", "." ]
def TCReapply( intf ):
    """Unfortunately OVS and Mininet are fighting over tc queuing
       disciplines. As a quick hack/workaround, we clear OVS's and
       reapply our own."""
    if isinstance( intf, TCIntf ):
        intf.config( **intf.params )
[ "def", "TCReapply", "(", "intf", ")", ":", "if", "isinstance", "(", "intf", ",", "TCIntf", ")", ":", "intf", ".", "config", "(", "*", "*", "intf", ".", "params", ")" ]
https://github.com/containernet/containernet/blob/7b2ae38d691b2ed8da2b2700b85ed03562271d01/mininet/node.py#L1717-L1722
CGATOxford/cgat
326aad4694bdfae8ddc194171bb5d73911243947
obsolete/PipelineMetagenomeAssembly.py
python
SoapDenovo2.build
(self, config)
return statement
return build statement to be run
return build statement to be run
[ "return", "build", "statement", "to", "be", "run" ]
def build(self, config):
    '''
    return build statement to be run
    '''
    # Output directory.
    outdir = "soapdenovo.dir"
    # Get track from config file.
    for line in open(config).readlines():
        if line.startswith("q2"):
            continue
        elif line.startswith("q") or line.startswith("q1"):
            track = self.getTrack(line[:-1].split("=")[1])
    options = "%(soapdenovo_options)s"
    tempdir = P.getTempDir(".")
    statement = '''%%(soapdenovo_executable)s all
                   -s %%(infile)s
                   -o %(tempdir)s/%(track)s
                   -K %%(kmer)s
                   %(options)s;
                   checkpoint;
                   mv %(tempdir)s/%(track)s* %(outdir)s;
                   mv %(outdir)s/%(track)s.contig %(outdir)s/%(track)s.contigs.fa;
                   cat %(outdir)s/%(track)s.contigs.fa
                   | python %%(scriptsdir)s/rename_contigs.py -a
                     --log=%(outdir)s/%(track)s.contigs.log;
                   rm -rf %(tempdir)s''' % locals()
    return statement
[ "def", "build", "(", "self", ",", "config", ")", ":", "# output directory", "outdir", "=", "\"soapdenovo.dir\"", "# get track from config file", "for", "line", "in", "open", "(", "config", ")", ".", "readlines", "(", ")", ":", "if", "line", ".", "startswith", ...
https://github.com/CGATOxford/cgat/blob/326aad4694bdfae8ddc194171bb5d73911243947/obsolete/PipelineMetagenomeAssembly.py#L673-L700
linxid/Machine_Learning_Study_Path
558e82d13237114bbb8152483977806fc0c222af
Machine Learning In Action/Chapter5-LogisticRegression/venv/Lib/site-packages/pip/cmdoptions.py
python
resolve_wheel_no_use_binary
(options)
[]
def resolve_wheel_no_use_binary(options):
    if not options.use_wheel:
        control = options.format_control
        fmt_ctl_no_use_wheel(control)
[ "def", "resolve_wheel_no_use_binary", "(", "options", ")", ":", "if", "not", "options", ".", "use_wheel", ":", "control", "=", "options", ".", "format_control", "fmt_ctl_no_use_wheel", "(", "control", ")" ]
https://github.com/linxid/Machine_Learning_Study_Path/blob/558e82d13237114bbb8152483977806fc0c222af/Machine Learning In Action/Chapter5-LogisticRegression/venv/Lib/site-packages/pip/cmdoptions.py#L36-L39
AppScale/gts
46f909cf5dc5ba81faf9d81dc9af598dcf8a82a9
AppServer/lib/django-1.2/django/core/paginator.py
python
Page.start_index
(self)
return (self.paginator.per_page * (self.number - 1)) + 1
Returns the 1-based index of the first object on this page, relative to total objects in the paginator.
Returns the 1-based index of the first object on this page, relative to total objects in the paginator.
[ "Returns", "the", "1", "-", "based", "index", "of", "the", "first", "object", "on", "this", "page", "relative", "to", "total", "objects", "in", "the", "paginator", "." ]
def start_index(self):
    """
    Returns the 1-based index of the first object on this page,
    relative to total objects in the paginator.
    """
    # Special case, return zero if no items.
    if self.paginator.count == 0:
        return 0
    return (self.paginator.per_page * (self.number - 1)) + 1
[ "def", "start_index", "(", "self", ")", ":", "# Special case, return zero if no items.", "if", "self", ".", "paginator", ".", "count", "==", "0", ":", "return", "0", "return", "(", "self", ".", "paginator", ".", "per_page", "*", "(", "self", ".", "number", ...
https://github.com/AppScale/gts/blob/46f909cf5dc5ba81faf9d81dc9af598dcf8a82a9/AppServer/lib/django-1.2/django/core/paginator.py#L102-L110
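The `start_index` arithmetic is self-contained and can be checked without Django; in this sketch `per_page`, `number`, and `count` stand in for the paginator attributes the method reads:

```python
def start_index(per_page, number, count):
    """1-based index of the first object on page `number` (pages are 1-based)."""
    if count == 0:
        # Empty paginator: no first object, return 0 by convention.
        return 0
    return (per_page * (number - 1)) + 1

# With 10 objects per page, page 1 starts at object 1 and page 3 at object 21.
print(start_index(10, 1, 95))  # → 1
print(start_index(10, 3, 95))  # → 21
```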
golismero/golismero
7d605b937e241f51c1ca4f47b20f755eeefb9d76
thirdparty_libs/nltk/model/ngram.py
python
_estimator
(fdist, bins)
return SimpleGoodTuringProbDist(fdist)
Default estimator function using a SimpleGoodTuringProbDist.
Default estimator function using a SimpleGoodTuringProbDist.
[ "Default", "estimator", "function", "using", "a", "SimpleGoodTuringProbDist", "." ]
def _estimator(fdist, bins):
    """
    Default estimator function using a SimpleGoodTuringProbDist.
    """
    # can't be an instance method of NgramModel as they
    # can't be pickled either.
    return SimpleGoodTuringProbDist(fdist)
[ "def", "_estimator", "(", "fdist", ",", "bins", ")", ":", "# can't be an instance method of NgramModel as they", "# can't be pickled either.", "return", "SimpleGoodTuringProbDist", "(", "fdist", ")" ]
https://github.com/golismero/golismero/blob/7d605b937e241f51c1ca4f47b20f755eeefb9d76/thirdparty_libs/nltk/model/ngram.py#L18-L24
francisck/DanderSpritz_docs
86bb7caca5a957147f120b18bb5c31f299914904
Python/Core/Lib/lib-tk/ttk.py
python
Labelframe.__init__
(self, master=None, **kw)
Construct a Ttk Labelframe with parent master. STANDARD OPTIONS class, cursor, style, takefocus WIDGET-SPECIFIC OPTIONS labelanchor, text, underline, padding, labelwidget, width, height
Construct a Ttk Labelframe with parent master. STANDARD OPTIONS class, cursor, style, takefocus WIDGET-SPECIFIC OPTIONS labelanchor, text, underline, padding, labelwidget, width, height
[ "Construct", "a", "Ttk", "Labelframe", "with", "parent", "master", ".", "STANDARD", "OPTIONS", "class", "cursor", "style", "takefocus", "WIDGET", "-", "SPECIFIC", "OPTIONS", "labelanchor", "text", "underline", "padding", "labelwidget", "width", "height" ]
def __init__(self, master=None, **kw):
    """Construct a Ttk Labelframe with parent master.

    STANDARD OPTIONS

        class, cursor, style, takefocus

    WIDGET-SPECIFIC OPTIONS

        labelanchor, text, underline, padding, labelwidget, width, height
    """
    Widget.__init__(self, master, 'ttk::labelframe', kw)
[ "def", "__init__", "(", "self", ",", "master", "=", "None", ",", "*", "*", "kw", ")", ":", "Widget", ".", "__init__", "(", "self", ",", "master", ",", "'ttk::labelframe'", ",", "kw", ")" ]
https://github.com/francisck/DanderSpritz_docs/blob/86bb7caca5a957147f120b18bb5c31f299914904/Python/Core/Lib/lib-tk/ttk.py#L686-L697
schollii/pypubsub
dd5c1e848ff501b192a26a0a0187e618fda13f97
src/pubsub/core/topicmgr.py
python
TopicManager.getOrCreateTopic
(self, name: str, protoListener: UserListener = None)
return self.__createTopic(nameTuple, desc, parent=parentObj, specGiven=specGiven)
Get the Topic instance for topic of given name, creating it (and any of its missing parent topics) as necessary. Pubsub functions such as subscribe() use this to obtain the Topic object corresponding to a topic name. The name can be in dotted or string format (``'a.b.'`` or ``('a','b')``). This method always attempts to return a "complete" topic, i.e. one with a Message Data Specification (MDS). So if the topic does not have an MDS, it attempts to add it. It first tries to find an MDS from a TopicDefnProvider (see addDefnProvider()). If none is available, it attempts to set it from protoListener, if it has been given. If not, the topic has no MDS. Once a topic's MDS has been set, it is never again changed or accessed by this method. Examples:: # assume no topics exist # but a topic definition provider has been added via # pub.addTopicDefnProvider() and has definition for topics 'a' and 'a.b' # creates topic a and a.b; both will have MDS from the defn provider: t1 = topicMgr.getOrCreateTopic('a.b') t2 = topicMgr.getOrCreateTopic('a.b') assert(t1 is t2) assert(t1.getParent().getName() == 'a') def proto(req1, optarg1=None): pass # creates topic c.d with MDS based on proto; creates c without an MDS # since no proto for it, nor defn provider: t1 = topicMgr.getOrCreateTopic('c.d', proto) The MDS can also be defined via a call to subscribe(listener, topicName), which indirectly calls getOrCreateTopic(topicName, listener).
Get the Topic instance for topic of given name, creating it (and any of its missing parent topics) as necessary. Pubsub functions such as subscribe() use this to obtain the Topic object corresponding to a topic name.
[ "Get", "the", "Topic", "instance", "for", "topic", "of", "given", "name", "creating", "it", "(", "and", "any", "of", "its", "missing", "parent", "topics", ")", "as", "necessary", ".", "Pubsub", "functions", "such", "as", "subscribe", "()", "use", "this", ...
def getOrCreateTopic(self, name: str, protoListener: UserListener = None) -> Topic:
    """
    Get the Topic instance for topic of given name, creating it
    (and any of its missing parent topics) as necessary. Pubsub
    functions such as subscribe() use this to obtain the Topic object
    corresponding to a topic name.
    The name can be in dotted or string format (``'a.b.'`` or ``('a','b')``).

    This method always attempts to return a "complete" topic, i.e. one
    with a Message Data Specification (MDS). So if the topic does not have
    an MDS, it attempts to add it. It first tries to find an MDS from a
    TopicDefnProvider (see addDefnProvider()). If none is available, it
    attempts to set it from protoListener, if it has been given. If not,
    the topic has no MDS.

    Once a topic's MDS has been set, it is never again changed or accessed
    by this method.

    Examples::

        # assume no topics exist
        # but a topic definition provider has been added via
        # pub.addTopicDefnProvider() and has definition for topics 'a' and 'a.b'

        # creates topic a and a.b; both will have MDS from the defn provider:
        t1 = topicMgr.getOrCreateTopic('a.b')
        t2 = topicMgr.getOrCreateTopic('a.b')
        assert(t1 is t2)
        assert(t1.getParent().getName() == 'a')

        def proto(req1, optarg1=None): pass
        # creates topic c.d with MDS based on proto; creates c without an MDS
        # since no proto for it, nor defn provider:
        t1 = topicMgr.getOrCreateTopic('c.d', proto)

    The MDS can also be defined via a call to subscribe(listener, topicName),
    which indirectly calls getOrCreateTopic(topicName, listener).
    """
    obj = self.getTopic(name, okIfNone=True)
    if obj:
        # if object is not sendable but a proto listener was given,
        # update its specification so that it is sendable
        if (protoListener is not None) and not obj.hasMDS():
            allArgsDocs, required = topicArgsFromCallable(protoListener)
            obj.setMsgArgSpec(allArgsDocs, required)
        return obj

    # create missing parents
    nameTuple = tupleize(name)
    parentObj = self.__createParentTopics(nameTuple)

    # now the final topic object, args from listener if provided
    desc, specGiven = self.__defnProvider.getDefn(nameTuple)
    # POLICY: protoListener is used only if no definition available
    if specGiven is None:
        if protoListener is None:
            desc = 'UNDOCUMENTED: created without spec'
        else:
            allArgsDocs, required = topicArgsFromCallable(protoListener)
            specGiven = ArgSpecGiven(allArgsDocs, required)
            desc = 'UNDOCUMENTED: created from protoListener "%s" in module %s' % getID(protoListener)

    return self.__createTopic(nameTuple, desc, parent=parentObj, specGiven=specGiven)
[ "def", "getOrCreateTopic", "(", "self", ",", "name", ":", "str", ",", "protoListener", ":", "UserListener", "=", "None", ")", "->", "Topic", ":", "obj", "=", "self", ".", "getTopic", "(", "name", ",", "okIfNone", "=", "True", ")", "if", "obj", ":", "...
https://github.com/schollii/pypubsub/blob/dd5c1e848ff501b192a26a0a0187e618fda13f97/src/pubsub/core/topicmgr.py#L168-L231
sagemath/sage
f9b2db94f675ff16963ccdefba4f1a3393b3fe0d
src/sage/modular/local_comp/smoothchar.py
python
SmoothCharacterGroupGeneric.prime
(self)
return self._p
The residue characteristic of the underlying field. EXAMPLES:: sage: from sage.modular.local_comp.smoothchar import SmoothCharacterGroupGeneric sage: SmoothCharacterGroupGeneric(3, QQ).prime() 3
The residue characteristic of the underlying field.
[ "r", "The", "residue", "characteristic", "of", "the", "underlying", "field", "." ]
def prime(self):
    r"""
    The residue characteristic of the underlying field.

    EXAMPLES::

        sage: from sage.modular.local_comp.smoothchar import SmoothCharacterGroupGeneric
        sage: SmoothCharacterGroupGeneric(3, QQ).prime()
        3
    """
    return self._p
[ "def", "prime", "(", "self", ")", ":", "return", "self", ".", "_p" ]
https://github.com/sagemath/sage/blob/f9b2db94f675ff16963ccdefba4f1a3393b3fe0d/src/sage/modular/local_comp/smoothchar.py#L494-L504
spesmilo/electrum
bdbd59300fbd35b01605e66145458e5f396108e8
electrum/plugins/trustedcoin/kivy.py
python
Plugin.abort_send
(self, window)
return False
[]
def abort_send(self, window):
    wallet = window.wallet
    if not isinstance(wallet, self.wallet_class):
        return
    if wallet.can_sign_without_server():
        return
    if wallet.billing_info is None:
        self.start_request_thread(wallet)
        Clock.schedule_once(
            lambda dt: window.show_error(
                _('Requesting account info from TrustedCoin server...') + '\n' +
                _('Please try again.')))
        return True
    return False
[ "def", "abort_send", "(", "self", ",", "window", ")", ":", "wallet", "=", "window", ".", "wallet", "if", "not", "isinstance", "(", "wallet", ",", "self", ".", "wallet_class", ")", ":", "return", "if", "wallet", ".", "can_sign_without_server", "(", ")", "...
https://github.com/spesmilo/electrum/blob/bdbd59300fbd35b01605e66145458e5f396108e8/electrum/plugins/trustedcoin/kivy.py#L98-L110
mesalock-linux/mesapy
ed546d59a21b36feb93e2309d5c6b75aa0ad95c9
rpython/jit/metainterp/warmspot.py
python
reset_jit
()
Helper for some tests (see micronumpy/test/test_zjit.py)
Helper for some tests (see micronumpy/test/test_zjit.py)
[ "Helper", "for", "some", "tests", "(", "see", "micronumpy", "/", "test", "/", "test_zjit", ".", "py", ")" ]
def reset_jit():
    """Helper for some tests (see micronumpy/test/test_zjit.py)"""
    reset_stats()
    pyjitpl._warmrunnerdesc.memory_manager.alive_loops.clear()
    pyjitpl._warmrunnerdesc.jitcounter._clear_all()
[ "def", "reset_jit", "(", ")", ":", "reset_stats", "(", ")", "pyjitpl", ".", "_warmrunnerdesc", ".", "memory_manager", ".", "alive_loops", ".", "clear", "(", ")", "pyjitpl", ".", "_warmrunnerdesc", ".", "jitcounter", ".", "_clear_all", "(", ")" ]
https://github.com/mesalock-linux/mesapy/blob/ed546d59a21b36feb93e2309d5c6b75aa0ad95c9/rpython/jit/metainterp/warmspot.py#L208-L212
mathandy/svgpathtools
abd99f0846ea636b9c33ce28453348bd662b98c7
svgpathtools/path.py
python
Path.reversed
(self)
return Path(*newpath)
returns a copy of the Path object with its orientation reversed.
returns a copy of the Path object with its orientation reversed.
[ "returns", "a", "copy", "of", "the", "Path", "object", "with", "its", "orientation", "reversed", "." ]
def reversed(self):
    """returns a copy of the Path object with its orientation reversed."""
    newpath = [seg.reversed() for seg in self]
    newpath.reverse()
    return Path(*newpath)
[ "def", "reversed", "(", "self", ")", ":", "newpath", "=", "[", "seg", ".", "reversed", "(", ")", "for", "seg", "in", "self", "]", "newpath", ".", "reverse", "(", ")", "return", "Path", "(", "*", "newpath", ")" ]
https://github.com/mathandy/svgpathtools/blob/abd99f0846ea636b9c33ce28453348bd662b98c7/svgpathtools/path.py#L2485-L2489
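The reversal has two steps: flip each segment's own orientation, then reverse the segment order, so the path still traverses contiguously. A minimal sketch with segments modeled as plain `(start, end)` pairs instead of svgpathtools segment objects:

```python
def reverse_segment(seg):
    """Flip one segment's orientation (modeled as a (start, end) pair)."""
    start, end = seg
    return (end, start)

def reversed_path(path):
    """Reverse each segment, then reverse the segment order."""
    newpath = [reverse_segment(seg) for seg in path]
    newpath.reverse()
    return newpath

# A contiguous path 0 → 1 → 3 → 4 ...
path = [(0, 1), (1, 3), (3, 4)]
# ... reverses to 4 → 3 → 1 → 0, still contiguous.
print(reversed_path(path))  # → [(4, 3), (3, 1), (1, 0)]
```

Doing only one of the two steps would break contiguity, which is why both are needed.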
HymanLiuTS/flaskTs
286648286976e85d9b9a5873632331efcafe0b21
flasky/lib/python2.7/site-packages/mako/template.py
python
Template.get_def
(self, name)
return DefTemplate(self, getattr(self.module, "render_%s" % name))
Return a def of this template as a :class:`.DefTemplate`.
Return a def of this template as a :class:`.DefTemplate`.
[ "Return", "a", "def", "of", "this", "template", "as", "a", ":", "class", ":", ".", "DefTemplate", "." ]
def get_def(self, name):
    """Return a def of this template as a :class:`.DefTemplate`."""
    return DefTemplate(self, getattr(self.module, "render_%s" % name))
[ "def", "get_def", "(", "self", ",", "name", ")", ":", "return", "DefTemplate", "(", "self", ",", "getattr", "(", "self", ".", "module", ",", "\"render_%s\"", "%", "name", ")", ")" ]
https://github.com/HymanLiuTS/flaskTs/blob/286648286976e85d9b9a5873632331efcafe0b21/flasky/lib/python2.7/site-packages/mako/template.py#L473-L476
gitpython-developers/GitPython
fac603789d66c0fd7c26e75debb41b06136c5026
git/refs/log.py
python
RefLogEntry.message
(self)
return self[4]
Message describing the operation that acted on the reference
Message describing the operation that acted on the reference
[ "Message", "describing", "the", "operation", "that", "acted", "on", "the", "reference" ]
def message(self) -> str:
    """Message describing the operation that acted on the reference"""
    return self[4]
[ "def", "message", "(", "self", ")", "->", "str", ":", "return", "self", "[", "4", "]" ]
https://github.com/gitpython-developers/GitPython/blob/fac603789d66c0fd7c26e75debb41b06136c5026/git/refs/log.py#L87-L89
sagemath/sage
f9b2db94f675ff16963ccdefba4f1a3393b3fe0d
src/sage/combinat/species/stream.py
python
Stream_class.__setitem__
(self, i, t)
Set the i-th entry of self to t. EXAMPLES:: sage: from sage.combinat.species.stream import Stream :: sage: s = Stream(const=0) sage: s[5] 0 sage: s.data() [0] sage: s[5] = 5 sage: s[5] 5 sage: s.data() [0, 0, 0, 0, 0, 5] :: sage: s = Stream(ZZ) sage: s[10] -5 sage: s.data() [0, 1, -1, 2, -2, 3, -3, 4, -4, 5, -5] sage: s[10] = 10 sage: s.data() [0, 1, -1, 2, -2, 3, -3, 4, -4, 5, 10]
Set the i-th entry of self to t.
[ "Set", "the", "i", "-", "th", "entry", "of", "self", "to", "t", "." ]
def __setitem__(self, i, t):
    """
    Set the i-th entry of self to t.

    EXAMPLES::

        sage: from sage.combinat.species.stream import Stream

    ::

        sage: s = Stream(const=0)
        sage: s[5]
        0
        sage: s.data()
        [0]
        sage: s[5] = 5
        sage: s[5]
        5
        sage: s.data()
        [0, 0, 0, 0, 0, 5]

    ::

        sage: s = Stream(ZZ)
        sage: s[10]
        -5
        sage: s.data()
        [0, 1, -1, 2, -2, 3, -3, 4, -4, 5, -5]
        sage: s[10] = 10
        sage: s.data()
        [0, 1, -1, 2, -2, 3, -3, 4, -4, 5, 10]
    """
    # Compute all of the coefficients up to (and including) the ith one
    self[i]
    if i < len(self._list):
        # If we are here, we can just change the entry in self._list
        self._list[i] = t
    else:
        # If we are here, then the stream has become constant. We just
        # extend self._list with self._constant and then change the
        # last entry.
        self._list += [self._constant] * (i + 1 - len(self._list))
        self._last_index = i
        self._list[i] = t
[ "def", "__setitem__", "(", "self", ",", "i", ",", "t", ")", ":", "# Compute all of the coefficients up to (and including) the ith one", "self", "[", "i", "]", "if", "i", "<", "len", "(", "self", ".", "_list", ")", ":", "#If we are here, we can just change the entry ...
https://github.com/sagemath/sage/blob/f9b2db94f675ff16963ccdefba4f1a3393b3fe0d/src/sage/combinat/species/stream.py#L178-L222
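The interesting branch in `__setitem__` is the `else`: when `i` is past the materialized prefix, the stream has gone constant, so the list is padded with the constant value before overwriting slot `i`. A sketch of just that padding logic over a plain list, assuming the constant tail value is already known (no Sage needed):

```python
def stream_setitem(data, constant, i, t):
    """Set entry i of a constant-tailed stream, padding with `constant` as needed."""
    if i < len(data):
        # Entry already materialized: overwrite in place.
        data[i] = t
    else:
        # Stream has gone constant past len(data): extend with the
        # constant tail up to index i, then overwrite the last slot.
        data += [constant] * (i + 1 - len(data))
        data[i] = t
    return data

# Mirrors the doctest: a const=0 stream with data [0] after s[5] = 5.
print(stream_setitem([0], 0, 5, 5))  # → [0, 0, 0, 0, 0, 5]
```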
tp7/Sushi
908c0ff228734059aebb914a8d10f8e4ce2e868c
sushi.py
python
format_full_path
(temp_dir, base_path, postfix)
[]
def format_full_path(temp_dir, base_path, postfix):
    if temp_dir:
        return os.path.join(temp_dir, os.path.basename(base_path) + postfix)
    else:
        return base_path + postfix
[ "def", "format_full_path", "(", "temp_dir", ",", "base_path", ",", "postfix", ")", ":", "if", "temp_dir", ":", "return", "os", ".", "path", ".", "join", "(", "temp_dir", ",", "os", ".", "path", ".", "basename", "(", "base_path", ")", "+", "postfix", ")...
https://github.com/tp7/Sushi/blob/908c0ff228734059aebb914a8d10f8e4ce2e868c/sushi.py#L516-L520
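Since the function only touches `os.path`, its behavior is easy to demonstrate directly: with a temp dir it relocates the bare file name there, otherwise it appends the postfix in place (the example paths are illustrative, not from Sushi):

```python
import os

def format_full_path(temp_dir, base_path, postfix):
    # With a temp dir, keep only the file name and place the derived
    # file there; otherwise create the derived file next to the original.
    if temp_dir:
        return os.path.join(temp_dir, os.path.basename(base_path) + postfix)
    return base_path + postfix

print(format_full_path('/tmp', '/videos/ep01.mkv', '.wav'))
print(format_full_path(None, '/videos/ep01.mkv', '.wav'))
```

On a POSIX system this prints `/tmp/ep01.mkv.wav` and `/videos/ep01.mkv.wav`.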
tomplus/kubernetes_asyncio
f028cc793e3a2c519be6a52a49fb77ff0b014c9b
kubernetes_asyncio/client/models/v1beta1_run_as_group_strategy_options.py
python
V1beta1RunAsGroupStrategyOptions.to_str
(self)
return pprint.pformat(self.to_dict())
Returns the string representation of the model
Returns the string representation of the model
[ "Returns", "the", "string", "representation", "of", "the", "model" ]
def to_str(self):
    """Returns the string representation of the model"""
    return pprint.pformat(self.to_dict())
[ "def", "to_str", "(", "self", ")", ":", "return", "pprint", ".", "pformat", "(", "self", ".", "to_dict", "(", ")", ")" ]
https://github.com/tomplus/kubernetes_asyncio/blob/f028cc793e3a2c519be6a52a49fb77ff0b014c9b/kubernetes_asyncio/client/models/v1beta1_run_as_group_strategy_options.py#L131-L133
zhl2008/awd-platform
0416b31abea29743387b10b3914581fbe8e7da5e
web_hxb2/lib/python3.5/site-packages/bs4/element.py
python
Tag.encode
(self, encoding=DEFAULT_OUTPUT_ENCODING, indent_level=None, formatter="minimal", errors="xmlcharrefreplace")
return u.encode(encoding, errors)
[]
def encode(self, encoding=DEFAULT_OUTPUT_ENCODING,
           indent_level=None, formatter="minimal",
           errors="xmlcharrefreplace"):
    # Turn the data structure into Unicode, then encode the
    # Unicode.
    u = self.decode(indent_level, encoding, formatter)
    return u.encode(encoding, errors)
[ "def", "encode", "(", "self", ",", "encoding", "=", "DEFAULT_OUTPUT_ENCODING", ",", "indent_level", "=", "None", ",", "formatter", "=", "\"minimal\"", ",", "errors", "=", "\"xmlcharrefreplace\"", ")", ":", "# Turn the data structure into Unicode, then encode the", "# Un...
https://github.com/zhl2008/awd-platform/blob/0416b31abea29743387b10b3914581fbe8e7da5e/web_hxb2/lib/python3.5/site-packages/bs4/element.py#L1103-L1109
home-assistant/core
265ebd17a3f17ed8dc1e9bdede03ac8e323f1ab1
homeassistant/components/device_automation/toggle_entity.py
python
async_get_triggers
( hass: HomeAssistant, device_id: str, domain: str )
return triggers
List device triggers.
List device triggers.
[ "List", "device", "triggers", "." ]
async def async_get_triggers( hass: HomeAssistant, device_id: str, domain: str ) -> list[dict[str, Any]]: """List device triggers.""" triggers = await entity.async_get_triggers(hass, device_id, domain) triggers.extend( await _async_get_automations(hass, device_id, ENTITY_TRIGGERS, domain) ) return triggers
[ "async", "def", "async_get_triggers", "(", "hass", ":", "HomeAssistant", ",", "device_id", ":", "str", ",", "domain", ":", "str", ")", "->", "list", "[", "dict", "[", "str", ",", "Any", "]", "]", ":", "triggers", "=", "await", "entity", ".", "async_get...
https://github.com/home-assistant/core/blob/265ebd17a3f17ed8dc1e9bdede03ac8e323f1ab1/homeassistant/components/device_automation/toggle_entity.py#L226-L234
davidmcclure/open-syllabus-project
078cfd4c5a257fbfb0901d43bfbc6350824eed4e
osp/citations/validate_config.py
python
Validate_Config.blacklisted_titles
(self)
return map( tokenize_field, singulars + [p.plural(s) for s in singulars], )
Pluralize the blacklisted titles. Returns: list
Pluralize the blacklisted titles.
[ "Pluralize", "the", "blacklisted", "titles", "." ]
def blacklisted_titles(self): """ Pluralize the blacklisted titles. Returns: list """ p = inflect.engine() singulars = self.config.get('blacklisted_titles', []) return map( tokenize_field, singulars + [p.plural(s) for s in singulars], )
[ "def", "blacklisted_titles", "(", "self", ")", ":", "p", "=", "inflect", ".", "engine", "(", ")", "singulars", "=", "self", ".", "config", ".", "get", "(", "'blacklisted_titles'", ",", "[", "]", ")", "return", "map", "(", "tokenize_field", ",", "singular...
https://github.com/davidmcclure/open-syllabus-project/blob/078cfd4c5a257fbfb0901d43bfbc6350824eed4e/osp/citations/validate_config.py#L31-L46
IOActive/XDiFF
552d3394e119ca4ced8115f9fd2d7e26760e40b1
xdiff_analyze.py
python
Analyze.check_minimum_risk
(self, function_risk, title)
return check
Check if the function has the minum risk required
Check if the function has the minum risk required
[ "Check", "if", "the", "function", "has", "the", "minum", "risk", "required" ]
def check_minimum_risk(self, function_risk, title): """Check if the function has the minum risk required""" check = False if self.settings['print_risk']: print("Function: %s, Risk: %s, Title: %s" % (inspect.stack()[1][3], function_risk, title[:title.find(" - ")])) elif function_risk >= self.settings['minimum_risk']: check = True return check
[ "def", "check_minimum_risk", "(", "self", ",", "function_risk", ",", "title", ")", ":", "check", "=", "False", "if", "self", ".", "settings", "[", "'print_risk'", "]", ":", "print", "(", "\"Function: %s, Risk: %s, Title: %s\"", "%", "(", "inspect", ".", "stack...
https://github.com/IOActive/XDiFF/blob/552d3394e119ca4ced8115f9fd2d7e26760e40b1/xdiff_analyze.py#L49-L56
morganstanley/treadmill
f18267c665baf6def4374d21170198f63ff1cde4
lib/python/treadmill/zkctx.py
python
connect
(zkurl, idpath=None, session_timeout=None)
return zkutils.connect( zkurl, idpath=idpath, listener=zkutils.exit_never, session_timeout=session_timeout )
Returns connection to Zookeeper.
Returns connection to Zookeeper.
[ "Returns", "connection", "to", "Zookeeper", "." ]
def connect(zkurl, idpath=None, session_timeout=None): """Returns connection to Zookeeper. """ return zkutils.connect( zkurl, idpath=idpath, listener=zkutils.exit_never, session_timeout=session_timeout )
[ "def", "connect", "(", "zkurl", ",", "idpath", "=", "None", ",", "session_timeout", "=", "None", ")", ":", "return", "zkutils", ".", "connect", "(", "zkurl", ",", "idpath", "=", "idpath", ",", "listener", "=", "zkutils", ".", "exit_never", ",", "session_...
https://github.com/morganstanley/treadmill/blob/f18267c665baf6def4374d21170198f63ff1cde4/lib/python/treadmill/zkctx.py#L16-L24
sony/nnabla-examples
068be490aacf73740502a1c3b10f8b2d15a52d32
GANs/pggan/args.py
python
get_args
(batch_size=16)
return args
Get command line arguments. Arguments set the default values of command line arguments.
Get command line arguments.
[ "Get", "command", "line", "arguments", "." ]
def get_args(batch_size=16): """ Get command line arguments. Arguments set the default values of command line arguments. """ import argparse import os description = "Example of Progressive Growing of GANs." parser = argparse.ArgumentParser(description) parser.add_argument("-d", "--device-id", type=int, default=0, help="Device id.") parser.add_argument("-c", "--context", type=str, default="cudnn", help="Context.") parser.add_argument("--type-config", "-t", type=str, default='float', help='Type of computation. e.g. "float", "half".') parser.add_argument("--batch-size", "-b", type=int, default=batch_size, help="Batch size.") parser.add_argument("--img-path", type=str, default="~/img_align_celeba_png", help="Image path.") parser.add_argument("--dataset-name", type=str, default="CelebA", choices=["CelebA"], help="Dataset name used.") parser.add_argument("--save-image-interval", type=int, default=1, help="Interval for saving images.") parser.add_argument("--epoch-per-resolution", type=int, default=4, help="Number of epochs per resolution.") parser.add_argument("--imsize", type=int, default=128, help="Input image size.") parser.add_argument("--train-samples", type=int, default=-1, help="Number of data to be used. When -1 is set all data is used.") parser.add_argument("--valid-samples", type=int, default=16384, help="Number of data used in validation.") parser.add_argument("--latent", type=int, default=512, help="Number of latent variables.") parser.add_argument("--critic", type=int, default=1, help="Number of critics.") parser.add_argument("--monitor-path", type=str, default="./result/example_0", help="Monitor path.") parser.add_argument("--model-load-path", type=str, default="./result/example_0/Gen_phase_128_epoch_4.h5", help="Model load path used in generation and validation.") parser.add_argument("--use-bn", action='store_true', help="Use batch normalization.") parser.add_argument("--use-ln", action='store_true', help="Use layer normalization.") parser.add_argument("--not-use-wscale", action='store_false', help="Not use the equalized learning rate.") parser.add_argument("--use-he-backward", action='store_true', help="Use the He initialization using the so-called `fan_in`. Default is the backward.") parser.add_argument("--leaky-alpha", type=float, default=0.2, help="Leaky alpha value.") parser.add_argument("--learning-rate", type=float, default=0.001, help="Learning rate.") parser.add_argument("--beta1", type=float, default=0.0, help="Beta1 of Adam solver.") parser.add_argument("--beta2", type=float, default=0.99, help="Beta2 of Adam solver.") parser.add_argument("--l2-fake-weight", type=float, default=0.1, help="Weight for the fake term in the discriminator loss in LSGAN.") parser.add_argument("--hyper-sphere", action='store_true', help="Latent vector lie in the hyper sphere.") parser.add_argument("--last-act", type=str, default="tanh", choices=["tanh"], help="Last activation of the generator.") parser.add_argument("--validation-metric", type=str, default="swd", choices=["swd", "ms-ssim"], help="Validation metric for PGGAN.") args = parser.parse_args() return args
[ "def", "get_args", "(", "batch_size", "=", "16", ")", ":", "import", "argparse", "import", "os", "description", "=", "\"Example of Progressive Growing of GANs.\"", "parser", "=", "argparse", ".", "ArgumentParser", "(", "description", ")", "parser", ".", "add_argumen...
https://github.com/sony/nnabla-examples/blob/068be490aacf73740502a1c3b10f8b2d15a52d32/GANs/pggan/args.py#L17-L91
google/rekall
55d1925f2df9759a989b35271b4fa48fc54a1c86
rekall-core/rekall/addrspace.py
python
TranslationLookasideBuffer.Get
(self, vaddr)
Returns the cached physical address for this virtual address.
Returns the cached physical address for this virtual address.
[ "Returns", "the", "cached", "physical", "address", "for", "this", "virtual", "address", "." ]
def Get(self, vaddr): """Returns the cached physical address for this virtual address.""" # The cache only stores page aligned virtual addresses. We add the page # offset to the physical addresses automatically. result = self.page_cache.Get(vaddr & self.PAGE_MASK) # None is a valid cached value, it means no mapping exists. if result is not None: return result + (vaddr & self.PAGE_ALIGNMENT)
[ "def", "Get", "(", "self", ",", "vaddr", ")", ":", "# The cache only stores page aligned virtual addresses. We add the page", "# offset to the physical addresses automatically.", "result", "=", "self", ".", "page_cache", ".", "Get", "(", "vaddr", "&", "self", ".", "PAGE_M...
https://github.com/google/rekall/blob/55d1925f2df9759a989b35271b4fa48fc54a1c86/rekall-core/rekall/addrspace.py#L71-L80
cool-RR/python_toolbox
cb9ef64b48f1d03275484d707dc5079b6701ad0c
python_toolbox/third_party/envelopes/connstack.py
python
use_connection
(connection)
Clears the stack and uses the given connection. Protects against mixed use of use_connection() and stacked connection contexts.
Clears the stack and uses the given connection. Protects against mixed use of use_connection() and stacked connection contexts.
[ "Clears", "the", "stack", "and", "uses", "the", "given", "connection", ".", "Protects", "against", "mixed", "use", "of", "use_connection", "()", "and", "stacked", "connection", "contexts", "." ]
def use_connection(connection): """Clears the stack and uses the given connection. Protects against mixed use of use_connection() and stacked connection contexts. """ assert len(_connection_stack) <= 1, \ 'You should not mix Connection contexts with use_connection().' release_local(_connection_stack) push_connection(connection)
[ "def", "use_connection", "(", "connection", ")", ":", "assert", "len", "(", "_connection_stack", ")", "<=", "1", ",", "'You should not mix Connection contexts with use_connection().'", "release_local", "(", "_connection_stack", ")", "push_connection", "(", "connection", "...
https://github.com/cool-RR/python_toolbox/blob/cb9ef64b48f1d03275484d707dc5079b6701ad0c/python_toolbox/third_party/envelopes/connstack.py#L67-L74
dmlc/gluon-nlp
5d4bc9eba7226ea9f9aabbbd39e3b1e886547e48
src/gluonnlp/data/sampler.py
python
FixedBucketSampler.__repr__
(self)
return ret
Return a string representing the statistics of the bucketing sampler. Returns ------- ret : str String representing the statistics of the buckets.
Return a string representing the statistics of the bucketing sampler.
[ "Return", "a", "string", "representing", "the", "statistics", "of", "the", "bucketing", "sampler", "." ]
def __repr__(self): """Return a string representing the statistics of the bucketing sampler. Returns ------- ret : str String representing the statistics of the buckets. """ ret = '{name}(\n' \ ' sample_num={sample_num}, batch_num={batch_num}\n' \ ' key={bucket_keys}\n' \ ' cnt={bucket_counts}\n' \ ' batch_size={bucket_batch_sizes}\n'\ ')'\ .format(name=self.__class__.__name__, sample_num=len(self._lengths), batch_num=len(self._batch_infos), bucket_keys=self._bucket_keys, bucket_counts=[len(sample_ids) for sample_ids in self._bucket_sample_ids], bucket_batch_sizes=self._bucket_batch_sizes) return ret
[ "def", "__repr__", "(", "self", ")", ":", "ret", "=", "'{name}(\\n'", "' sample_num={sample_num}, batch_num={batch_num}\\n'", "' key={bucket_keys}\\n'", "' cnt={bucket_counts}\\n'", "' batch_size={bucket_batch_sizes}\\n'", "')'", ".", "format", "(", "name", "=", "self", "...
https://github.com/dmlc/gluon-nlp/blob/5d4bc9eba7226ea9f9aabbbd39e3b1e886547e48/src/gluonnlp/data/sampler.py#L557-L577
exodrifter/unity-python
bef6e4e9ddfbbf1eaf7acbbb973e9aa3dd64a20d
Lib/robotparser.py
python
Entry.allowance
(self, filename)
return True
Preconditions: - our agent applies to this entry - filename is URL decoded
Preconditions: - our agent applies to this entry - filename is URL decoded
[ "Preconditions", ":", "-", "our", "agent", "applies", "to", "this", "entry", "-", "filename", "is", "URL", "decoded" ]
def allowance(self, filename): """Preconditions: - our agent applies to this entry - filename is URL decoded""" for line in self.rulelines: if line.applies_to(filename): return line.allowance return True
[ "def", "allowance", "(", "self", ",", "filename", ")", ":", "for", "line", "in", "self", ".", "rulelines", ":", "if", "line", ".", "applies_to", "(", "filename", ")", ":", "return", "line", ".", "allowance", "return", "True" ]
https://github.com/exodrifter/unity-python/blob/bef6e4e9ddfbbf1eaf7acbbb973e9aa3dd64a20d/Lib/robotparser.py#L214-L221
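The `Entry.allowance` record above applies the first matching rule and defaults to allowing access. A minimal self-contained sketch of that first-match-wins lookup, with a simplified `RuleLine` stand-in (hypothetical here, not the real `robotparser.RuleLine`):

```python
# Sketch of robotparser's Entry.allowance logic: return the allowance of
# the first rule line that applies to the path; default to allowed.

class RuleLine:
    def __init__(self, path, allowance):
        self.path = path            # URL path prefix this rule covers
        self.allowance = allowance  # True = Allow, False = Disallow

    def applies_to(self, filename):
        # "/" matches everything; otherwise a simple prefix match.
        return self.path == "/" or filename.startswith(self.path)

def allowance(rulelines, filename):
    """Return the allowance of the first applicable rule; default True."""
    for line in rulelines:
        if line.applies_to(filename):
            return line.allowance
    return True

rules = [RuleLine("/private", False), RuleLine("/", True)]
```

Because rules are checked in order, a more specific `Disallow` placed before a catch-all `Allow` wins, which mirrors how robots.txt entries are evaluated.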
golismero/golismero
7d605b937e241f51c1ca4f47b20f755eeefb9d76
golismero/database/common.py
python
transactional
(fn, self, *args, **kwargs)
return self._transaction(fn, args, kwargs)
Transactional method.
Transactional method.
[ "Transactional", "method", "." ]
def transactional(fn, self, *args, **kwargs): """ Transactional method. """ return self._transaction(fn, args, kwargs)
[ "def", "transactional", "(", "fn", ",", "self", ",", "*", "args", ",", "*", "*", "kwargs", ")", ":", "return", "self", ".", "_transaction", "(", "fn", ",", "args", ",", "kwargs", ")" ]
https://github.com/golismero/golismero/blob/7d605b937e241f51c1ca4f47b20f755eeefb9d76/golismero/database/common.py#L47-L51
shaneshixiang/rllabplusplus
4d55f96ec98e3fe025b7991945e3e6a54fd5449f
rllab/misc/ext.py
python
flatten_hessian
(cost, wrt, consider_constant=None, disconnected_inputs='raise', block_diagonal=True)
:type cost: Scalar (0-dimensional) Variable. :type wrt: Vector (1-dimensional tensor) 'Variable' or list of vectors (1-dimensional tensors) Variables :param consider_constant: a list of expressions not to backpropagate through :type disconnected_inputs: string :param disconnected_inputs: Defines the behaviour if some of the variables in ``wrt`` are not part of the computational graph computing ``cost`` (or if all links are non-differentiable). The possible values are: - 'ignore': considers that the gradient on these parameters is zero. - 'warn': consider the gradient zero, and print a warning. - 'raise': raise an exception. :return: either a instance of Variable or list/tuple of Variables (depending upon `wrt`) repressenting the Hessian of the `cost` with respect to (elements of) `wrt`. If an element of `wrt` is not differentiable with respect to the output, then a zero variable is returned. The return value is of same type as `wrt`: a list/tuple or TensorVariable in all cases.
:type cost: Scalar (0-dimensional) Variable. :type wrt: Vector (1-dimensional tensor) 'Variable' or list of vectors (1-dimensional tensors) Variables
[ ":", "type", "cost", ":", "Scalar", "(", "0", "-", "dimensional", ")", "Variable", ".", ":", "type", "wrt", ":", "Vector", "(", "1", "-", "dimensional", "tensor", ")", "Variable", "or", "list", "of", "vectors", "(", "1", "-", "dimensional", "tensors", ...
def flatten_hessian(cost, wrt, consider_constant=None, disconnected_inputs='raise', block_diagonal=True): """ :type cost: Scalar (0-dimensional) Variable. :type wrt: Vector (1-dimensional tensor) 'Variable' or list of vectors (1-dimensional tensors) Variables :param consider_constant: a list of expressions not to backpropagate through :type disconnected_inputs: string :param disconnected_inputs: Defines the behaviour if some of the variables in ``wrt`` are not part of the computational graph computing ``cost`` (or if all links are non-differentiable). The possible values are: - 'ignore': considers that the gradient on these parameters is zero. - 'warn': consider the gradient zero, and print a warning. - 'raise': raise an exception. :return: either a instance of Variable or list/tuple of Variables (depending upon `wrt`) repressenting the Hessian of the `cost` with respect to (elements of) `wrt`. If an element of `wrt` is not differentiable with respect to the output, then a zero variable is returned. The return value is of same type as `wrt`: a list/tuple or TensorVariable in all cases. """ import theano from theano.tensor import arange # Check inputs have the right format import theano.tensor as TT from theano import Variable from theano import grad assert isinstance(cost, Variable), \ "tensor.hessian expects a Variable as `cost`" assert cost.ndim == 0, \ "tensor.hessian expects a 0 dimensional variable as `cost`" using_list = isinstance(wrt, list) using_tuple = isinstance(wrt, tuple) if isinstance(wrt, (list, tuple)): wrt = list(wrt) else: wrt = [wrt] hessians = [] if not block_diagonal: expr = TT.concatenate([ grad(cost, input, consider_constant=consider_constant, disconnected_inputs=disconnected_inputs).flatten() for input in wrt ]) for input in wrt: assert isinstance(input, Variable), \ "tensor.hessian expects a (list of) Variable as `wrt`" # assert input.ndim == 1, \ # "tensor.hessian expects a (list of) 1 dimensional variable " \ # "as `wrt`" if block_diagonal: expr = grad(cost, input, consider_constant=consider_constant, disconnected_inputs=disconnected_inputs).flatten() # It is possible that the inputs are disconnected from expr, # even if they are connected to cost. # This should not be an error. hess, updates = theano.scan(lambda i, y, x: grad( y[i], x, consider_constant=consider_constant, disconnected_inputs='ignore').flatten(), sequences=arange(expr.shape[0]), non_sequences=[expr, input]) assert not updates, \ ("Scan has returned a list of updates. This should not " "happen! Report this to theano-users (also include the " "script that generated the error)") hessians.append(hess) if block_diagonal: from theano.gradient import format_as return format_as(using_list, using_tuple, hessians) else: return TT.concatenate(hessians, axis=1)
[ "def", "flatten_hessian", "(", "cost", ",", "wrt", ",", "consider_constant", "=", "None", ",", "disconnected_inputs", "=", "'raise'", ",", "block_diagonal", "=", "True", ")", ":", "import", "theano", "from", "theano", ".", "tensor", "import", "arange", "# Chec...
https://github.com/shaneshixiang/rllabplusplus/blob/4d55f96ec98e3fe025b7991945e3e6a54fd5449f/rllab/misc/ext.py#L216-L297
xieyufei1993/InceptText-Tensorflow
bdb5c1bd4a7db277ddf9550e40c5a1fad0230ac4
lib/gt_data_layer/layer.py
python
GtDataLayer._get_next_minibatch_inds
(self)
return db_inds
Return the roidb indices for the next minibatch.
Return the roidb indices for the next minibatch.
[ "Return", "the", "roidb", "indices", "for", "the", "next", "minibatch", "." ]
def _get_next_minibatch_inds(self): """Return the roidb indices for the next minibatch.""" if self._cur + cfg.TRAIN.IMS_PER_BATCH >= len(self._roidb): self._shuffle_roidb_inds() db_inds = self._perm[self._cur:self._cur + cfg.TRAIN.IMS_PER_BATCH] self._cur += cfg.TRAIN.IMS_PER_BATCH """ # sample images with gt objects db_inds = np.zeros((cfg.TRAIN.IMS_PER_BATCH), dtype=np.int32) i = 0 while (i < cfg.TRAIN.IMS_PER_BATCH): ind = self._perm[self._cur] num_objs = self._roidb[ind]['boxes'].shape[0] if num_objs != 0: db_inds[i] = ind i += 1 self._cur += 1 if self._cur >= len(self._roidb): self._shuffle_roidb_inds() """ return db_inds
[ "def", "_get_next_minibatch_inds", "(", "self", ")", ":", "if", "self", ".", "_cur", "+", "cfg", ".", "TRAIN", ".", "IMS_PER_BATCH", ">=", "len", "(", "self", ".", "_roidb", ")", ":", "self", ".", "_shuffle_roidb_inds", "(", ")", "db_inds", "=", "self", ...
https://github.com/xieyufei1993/InceptText-Tensorflow/blob/bdb5c1bd4a7db277ddf9550e40c5a1fad0230ac4/lib/gt_data_layer/layer.py#L34-L58
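The `_get_next_minibatch_inds` record above walks a shuffled permutation of roidb indices and reshuffles once the remaining indices cannot fill a batch. A standalone sketch of that epoch-wrapping sampler, with hypothetical names in place of the original layer and `cfg` globals:

```python
import random

# Standalone sketch of the epoch-wrapping minibatch index logic from
# GtDataLayer._get_next_minibatch_inds; class and parameter names here
# are illustrative, not the original Fast R-CNN layer.

class MinibatchSampler:
    def __init__(self, num_items, ims_per_batch, seed=0):
        self._num = num_items
        self._batch = ims_per_batch
        self._rng = random.Random(seed)
        self._shuffle()

    def _shuffle(self):
        # New random permutation of all indices; restart the cursor.
        self._perm = list(range(self._num))
        self._rng.shuffle(self._perm)
        self._cur = 0

    def next_inds(self):
        # Reshuffle when the remaining indices cannot fill a full batch,
        # mirroring the `self._cur + IMS_PER_BATCH >= len(roidb)` check.
        if self._cur + self._batch >= self._num:
            self._shuffle()
        inds = self._perm[self._cur:self._cur + self._batch]
        self._cur += self._batch
        return inds

sampler = MinibatchSampler(num_items=10, ims_per_batch=2)
```

Note the original uses `>=` rather than `>`, so the last partial (or exactly-fitting) batch of an epoch is discarded in favor of a fresh shuffle — every returned batch is full-sized.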
bradfitz/scanningcabinet
fbe137968ff30c01631b5ab0de8e5c893ec92045
appengine/main.py
python
delete_doc_and_images
(user, doc)
return True
Deletes the document and its images.
Deletes the document and its images.
[ "Deletes", "the", "document", "and", "its", "images", "." ]
def delete_doc_and_images(user, doc): """Deletes the document and its images.""" scans = MediaObject.get(doc.pages) for scan in scans: blobstore.delete(scan.blob.key()) def tx(): db.delete(doc) scans = MediaObject.get(doc.pages) for scan in scans: user.media_objects -= 1 db.delete(scan) db.put(user) db.run_in_transaction(tx) return True
[ "def", "delete_doc_and_images", "(", "user", ",", "doc", ")", ":", "scans", "=", "MediaObject", ".", "get", "(", "doc", ".", "pages", ")", "for", "scan", "in", "scans", ":", "blobstore", ".", "delete", "(", "scan", ".", "blob", ".", "key", "(", ")", ...
https://github.com/bradfitz/scanningcabinet/blob/fbe137968ff30c01631b5ab0de8e5c893ec92045/appengine/main.py#L405-L418
google-research/pegasus
649a5978e45a078e1574ed01c92fc12d3aa05f7f
pegasus/data/datasets.py
python
TFDSDataset.s3_enabled
(self)
return True
[]
def s3_enabled(self): return True
[ "def", "s3_enabled", "(", "self", ")", ":", "return", "True" ]
https://github.com/google-research/pegasus/blob/649a5978e45a078e1574ed01c92fc12d3aa05f7f/pegasus/data/datasets.py#L145-L146
machawk1/wail
199bd1bfbd5d1126f83b113ce5261c3312a7b9ef
bundledApps/WAIL.py
python
WAILGUIFrame_Basic.check_if_url_is_in_archive
(self, button)
Send a request to the local Wayback instance and check if a Memento exists, inferring that a capture has been generated by Heritrix and an index generated from a WARC and the memento replayable
Send a request to the local Wayback instance and check if a Memento exists, inferring that a capture has been generated by Heritrix and an index generated from a WARC and the memento replayable
[ "Send", "a", "request", "to", "the", "local", "Wayback", "instance", "and", "check", "if", "a", "Memento", "exists", "inferring", "that", "a", "capture", "has", "been", "generated", "by", "Heritrix", "and", "an", "index", "generated", "from", "a", "WARC", ...
def check_if_url_is_in_archive(self, button): """Send a request to the local Wayback instance and check if a Memento exists, inferring that a capture has been generated by Heritrix and an index generated from a WARC and the memento replayable """ url = f"{config.uri_wayback_all_mementos}{self.uri.GetValue()}" status_code = None print('a') try: resp = urlopen(url) status_code = resp.getcode() except HTTPError as e: print('httperror') print(e) status_code = e.code except OSError as err: # When the server is unavailable, keep the default. # This is necessary, as unavailability will still cause an # exception """""" if status_code is None: launch_wayback_dialog = wx.MessageDialog( None, config.msg_wayback_not_started_body, config.msg_wayback_not_started_title, wx.YES_NO | wx.YES_DEFAULT, ) launch_wayback = launch_wayback_dialog.ShowModal() if launch_wayback == wx.ID_YES: wx.GetApp().Yield() Wayback().fix(None, lambda: self.check_if_url_is_in_archive(button)) elif 200 != status_code: wx.MessageBox( config.msg_uri_not_in_archive, f"Checking for {self.uri.GetValue()}" ) else: mb = wx.MessageDialog(self, config.msg_uri_in_archives_body, config.msg_uri_in_archives_title, style=wx.OK | wx.CANCEL) mb.SetOKCancelLabels("View Latest", "Go Back") resp = mb.ShowModal() if resp == wx.ID_OK: # View latest capture print('Showing latest capture') self.view_archive_in_browser(None, True) else: # Show main window again print('Show main window again')
[ "def", "check_if_url_is_in_archive", "(", "self", ",", "button", ")", ":", "url", "=", "f\"{config.uri_wayback_all_mementos}{self.uri.GetValue()}\"", "status_code", "=", "None", "print", "(", "'a'", ")", "try", ":", "resp", "=", "urlopen", "(", "url", ")", "status...
https://github.com/machawk1/wail/blob/199bd1bfbd5d1126f83b113ce5261c3312a7b9ef/bundledApps/WAIL.py#L724-L775
home-assistant/core
265ebd17a3f17ed8dc1e9bdede03ac8e323f1ab1
homeassistant/components/fan/__init__.py
python
FanEntity.percentage
(self)
return 0
Return the current speed as a percentage.
Return the current speed as a percentage.
[ "Return", "the", "current", "speed", "as", "a", "percentage", "." ]
def percentage(self) -> int | None: """Return the current speed as a percentage.""" if hasattr(self, "_attr_percentage"): return self._attr_percentage return 0
[ "def", "percentage", "(", "self", ")", "->", "int", "|", "None", ":", "if", "hasattr", "(", "self", ",", "\"_attr_percentage\"", ")", ":", "return", "self", ".", "_attr_percentage", "return", "0" ]
https://github.com/home-assistant/core/blob/265ebd17a3f17ed8dc1e9bdede03ac8e323f1ab1/homeassistant/components/fan/__init__.py#L394-L398
DragonComputer/Dragonfire
dd21f8e88d9b6390bd229ff73f89a8c3c137b89c
dragonfire/odqa.py
python
ODQA.semantic_extractor
(self, string)
return the_subject, subjects, focus, subject_with_objects
Function to extract subject, subjects, focus, subject_with_objects from given string. Args: string (str): String. Returns: (list) of (str)s: List of subject, subjects, focus, subject_with_objects.
Function to extract subject, subjects, focus, subject_with_objects from given string.
[ "Function", "to", "extract", "subject", "subjects", "focus", "subject_with_objects", "from", "given", "string", "." ]
def semantic_extractor(self, string): """Function to extract subject, subjects, focus, subject_with_objects from given string. Args: string (str): String. Returns: (list) of (str)s: List of subject, subjects, focus, subject_with_objects. """ doc = self.nlp(string) # spaCy does all kinds of NLP analysis in one function the_subject = None # Wikipedia search query variable definition (the subject) # Followings are lists because it could be multiple of them in a string. Multiple objects or subjects... subjects = [] # subject list pobjects = [] # object of a preposition list dobjects = [] # direct object list # https://nlp.stanford.edu/software/dependencies_manual.pdf - Hierarchy of typed dependencies for np in doc.noun_chunks: # Iterate over the noun phrases(chunks) # print(np.text, np.root.text, np.root.dep_, np.root.head.text) if (np.root.dep_ == 'nsubj' or np.root.dep_ == 'nsubjpass') and np.root.tag_ != 'WP': # if it's a nsubj(nominal subject) or nsubjpass(passive nominal subject) then subjects.append(np.text) # append it to subjects if np.root.dep_ == 'pobj': # if it's an object of a preposition then pobjects.append(np.text) # append it to pobjects if np.root.dep_ == 'dobj': # if it's a direct object then dobjects.append(np.text) # append it to direct objects # This block determines the Wikipedia query (the subject) by relying on this priority: [Object of a preposition] > [Subject] > [Direct object] pobjects = [x for x in pobjects] subjects = [x for x in subjects] dobjects = [x for x in dobjects] if pobjects: the_subject = ' '.join(pobjects) elif subjects: the_subject = ' '.join(subjects) elif dobjects: the_subject = ' '.join(dobjects) else: return None, None, None, None # This block determines the focus(objective/goal) by relying on this priority: [Direct object] > [Subject] > [Object of a preposition] focus = None if dobjects: focus = self.phrase_cleaner(' '.join(dobjects)) elif subjects: focus = self.phrase_cleaner(' '.join(subjects)) elif pobjects: focus = self.phrase_cleaner(' '.join(pobjects)) if focus in the_subject: focus = None # Full string of all subjects and objects concatenated subject_with_objects = [] for dobject in dobjects: subject_with_objects.append(dobject) for subject in subjects: subject_with_objects.append(subject) for pobject in pobjects: subject_with_objects.append(pobject) subject_with_objects = ' '.join(subject_with_objects) wh_found = False for word in doc: # iterate over the each word in the given command(user's speech) if word.tag_ in ['WDT', 'WP', 'WP$', 'WRB']: # check if there is a "wh-" question (we are determining that if it's a question or not, so only accepting questions with "wh-" form) wh_found = True if not wh_found: return None, None, None, None return the_subject, subjects, focus, subject_with_objects
[ "def", "semantic_extractor", "(", "self", ",", "string", ")", ":", "doc", "=", "self", ".", "nlp", "(", "string", ")", "# spaCy does all kinds of NLP analysis in one function", "the_subject", "=", "None", "# Wikipedia search query variable definition (the subject)", "# Foll...
https://github.com/DragonComputer/Dragonfire/blob/dd21f8e88d9b6390bd229ff73f89a8c3c137b89c/dragonfire/odqa.py#L120-L187
qiaoguan/deep-ctr-prediction
f8d83d6da2ee07158922474d11f444533ec6a7a3
Din/metric.py
python
cal_group_auc
(labels, preds, user_id_list)
return group_auc
Calculate group auc
Calculate group auc
[ "Calculate", "group", "auc" ]
def cal_group_auc(labels, preds, user_id_list): """Calculate group auc""" print('*' * 50) if len(user_id_list) != len(labels): raise ValueError( "impression id num should equal to the sample num," \ "impression id num is {0}".format(len(user_id_list))) group_score = defaultdict(lambda: []) group_truth = defaultdict(lambda: []) for idx, truth in enumerate(labels): user_id = user_id_list[idx] score = preds[idx] truth = labels[idx] group_score[user_id].append(score) group_truth[user_id].append(truth) group_flag = defaultdict(lambda: False) for user_id in set(user_id_list): truths = group_truth[user_id] flag = False for i in range(len(truths) - 1): if truths[i] != truths[i + 1]: flag = True break group_flag[user_id] = flag impression_total = 0 total_auc = 0 # for user_id in group_flag: if group_flag[user_id]: auc = roc_auc_score(np.asarray(group_truth[user_id]), np.asarray(group_score[user_id])) total_auc += auc * len(group_truth[user_id]) impression_total += len(group_truth[user_id]) group_auc = float(total_auc) / impression_total group_auc = round(group_auc, 4) return group_auc
[ "def", "cal_group_auc", "(", "labels", ",", "preds", ",", "user_id_list", ")", ":", "print", "(", "'*'", "*", "50", ")", "if", "len", "(", "user_id_list", ")", "!=", "len", "(", "labels", ")", ":", "raise", "ValueError", "(", "\"impression id num should eq...
https://github.com/qiaoguan/deep-ctr-prediction/blob/f8d83d6da2ee07158922474d11f444533ec6a7a3/Din/metric.py#L12-L49
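The `cal_group_auc` record above computes AUC per user and averages the results weighted by each user's impression count, skipping users whose labels are all one class (where AUC is undefined). A self-contained sketch of that computation, substituting a small rank-based AUC for `sklearn.metrics.roc_auc_score` so no third-party dependency is needed:

```python
from collections import defaultdict

# Sketch of the impression-weighted group-AUC computation from
# cal_group_auc, with a hand-rolled AUC in place of sklearn's
# roc_auc_score.

def simple_auc(labels, scores):
    """AUC as the probability a positive outranks a negative (ties count 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def group_auc(labels, preds, user_ids):
    # Bucket scores and labels per user.
    group_score, group_truth = defaultdict(list), defaultdict(list)
    for uid, score, truth in zip(user_ids, preds, labels):
        group_score[uid].append(score)
        group_truth[uid].append(truth)
    total, weight = 0.0, 0
    for uid in group_truth:
        truths = group_truth[uid]
        # Skip users whose impressions are all positive or all negative:
        # AUC is undefined for a single-class group.
        if len(set(truths)) < 2:
            continue
        auc = simple_auc(truths, group_score[uid])
        total += auc * len(truths)   # weight by impression count
        weight += len(truths)
    return round(total / weight, 4)
```

Weighting by impression count means heavy users dominate the average, which matches the original's `total_auc / impression_total` normalization.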
makerbot/ReplicatorG
d6f2b07785a5a5f1e172fb87cb4303b17c575d5d
skein_engines/skeinforge-47/fabmetheus_utilities/geometry/creation/mechaslab.py
python
HollowPegSocket.__repr__
(self)
return euclidean.getDictionaryString(self.__dict__)
Get the string representation of this HollowPegSocket.
Get the string representation of this HollowPegSocket.
[ "Get", "the", "string", "representation", "of", "this", "HollowPegSocket", "." ]
def __repr__(self): 'Get the string representation of this HollowPegSocket.' return euclidean.getDictionaryString(self.__dict__)
[ "def", "__repr__", "(", "self", ")", ":", "return", "euclidean", ".", "getDictionaryString", "(", "self", ".", "__dict__", ")" ]
https://github.com/makerbot/ReplicatorG/blob/d6f2b07785a5a5f1e172fb87cb4303b17c575d5d/skein_engines/skeinforge-47/fabmetheus_utilities/geometry/creation/mechaslab.py#L199-L201
jython/frozen-mirror
b8d7aa4cee50c0c0fe2f4b235dd62922dd0f3f99
lib-python/2.7/decimal.py
python
Context._regard_flags
(self, *flags)
Stop ignoring the flags, if they are raised
Stop ignoring the flags, if they are raised
[ "Stop", "ignoring", "the", "flags", "if", "they", "are", "raised" ]
def _regard_flags(self, *flags): """Stop ignoring the flags, if they are raised""" if flags and isinstance(flags[0], (tuple,list)): flags = flags[0] for flag in flags: self._ignored_flags.remove(flag)
[ "def", "_regard_flags", "(", "self", ",", "*", "flags", ")", ":", "if", "flags", "and", "isinstance", "(", "flags", "[", "0", "]", ",", "(", "tuple", ",", "list", ")", ")", ":", "flags", "=", "flags", "[", "0", "]", "for", "flag", "in", "flags", ...
https://github.com/jython/frozen-mirror/blob/b8d7aa4cee50c0c0fe2f4b235dd62922dd0f3f99/lib-python/2.7/decimal.py#L3885-L3890
lightforever/mlcomp
c78fdb77ec9c4ec8ff11beea50b90cab20903ad9
mlcomp/contrib/segmentation/deeplabv3/backbone/drn.py
python
drn_d_24
(BatchNorm, pretrained=True)
return model
[]
def drn_d_24(BatchNorm, pretrained=True): model = DRN(BasicBlock, [1, 1, 2, 2, 2, 2, 2, 2], arch='D', BatchNorm=BatchNorm) if pretrained: pretrained = model_zoo.load_url(model_urls['drn-d-24']) del pretrained['fc.weight'] del pretrained['fc.bias'] model.load_state_dict(pretrained) return model
[ "def", "drn_d_24", "(", "BatchNorm", ",", "pretrained", "=", "True", ")", ":", "model", "=", "DRN", "(", "BasicBlock", ",", "[", "1", ",", "1", ",", "2", ",", "2", ",", "2", ",", "2", ",", "2", ",", "2", "]", ",", "arch", "=", "'D'", ",", "...
https://github.com/lightforever/mlcomp/blob/c78fdb77ec9c4ec8ff11beea50b90cab20903ad9/mlcomp/contrib/segmentation/deeplabv3/backbone/drn.py#L362-L370
hak5/nano-tetra-modules
aa43cb5e2338b8dbd12a75314104a34ba608263b
PortalAuth/includes/scripts/libs/email/message.py
python
Message.get_boundary
(self, failobj=None)
return utils.collapse_rfc2231_value(boundary).rstrip()
Return the boundary associated with the payload if present. The boundary is extracted from the Content-Type header's `boundary' parameter, and it is unquoted.
Return the boundary associated with the payload if present.
[ "Return", "the", "boundary", "associated", "with", "the", "payload", "if", "present", "." ]
def get_boundary(self, failobj=None): """Return the boundary associated with the payload if present. The boundary is extracted from the Content-Type header's `boundary' parameter, and it is unquoted. """ missing = object() boundary = self.get_param('boundary', missing) if boundary is missing: return failobj # RFC 2046 says that boundaries may begin but not end in w/s return utils.collapse_rfc2231_value(boundary).rstrip()
[ "def", "get_boundary", "(", "self", ",", "failobj", "=", "None", ")", ":", "missing", "=", "object", "(", ")", "boundary", "=", "self", ".", "get_param", "(", "'boundary'", ",", "missing", ")", "if", "boundary", "is", "missing", ":", "return", "failobj",...
https://github.com/hak5/nano-tetra-modules/blob/aa43cb5e2338b8dbd12a75314104a34ba608263b/PortalAuth/includes/scripts/libs/email/message.py#L689-L700
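The record above is a vendored copy of the stdlib `email` package's `Message.get_boundary`, which unquotes the Content-Type header's `boundary` parameter and strips trailing whitespace (RFC 2046: boundaries may begin but not end with whitespace). A minimal sketch against the standard library's own `email.message.Message`, with a hypothetical boundary value chosen to show the trailing-space strip:

```python
from email.message import Message

# Boundary value carries a deliberate trailing space inside the quotes;
# get_boundary() unquotes the parameter and rstrip()s it per RFC 2046.
msg = Message()
msg['Content-Type'] = 'multipart/mixed; boundary="XYZ "'
boundary = msg.get_boundary()
```
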
gramps-project/gramps
04d4651a43eb210192f40a9f8c2bad8ee8fa3753
gramps/gui/filters/sidebar/_sidebarfilter.py
python
SidebarFilter.add_filter_entry
(self, text, widget)
Adds the text and widget to GUI, with an Edit button.
Adds the text and widget to GUI, with an Edit button.
[ "Adds", "the", "text", "and", "widget", "to", "GUI", "with", "an", "Edit", "button", "." ]
def add_filter_entry(self, text, widget): """ Adds the text and widget to GUI, with an Edit button. """ hbox = Gtk.Box() hbox.pack_start(widget, True, True, 0) hbox.pack_start(widgets.SimpleButton('gtk-edit', self.edit_filter), False, False, 0) self.add_entry(text, hbox)
[ "def", "add_filter_entry", "(", "self", ",", "text", ",", "widget", ")", ":", "hbox", "=", "Gtk", ".", "Box", "(", ")", "hbox", ".", "pack_start", "(", "widget", ",", "True", ",", "True", ",", "0", ")", "hbox", ".", "pack_start", "(", "widgets", "....
https://github.com/gramps-project/gramps/blob/04d4651a43eb210192f40a9f8c2bad8ee8fa3753/gramps/gui/filters/sidebar/_sidebarfilter.py#L231-L239
mesonbuild/meson
a22d0f9a0a787df70ce79b05d0c45de90a970048
mesonbuild/dependencies/base.py
python
Dependency.get_link_args
(self, language: T.Optional[str] = None, raw: bool = False)
return self.link_args
[]
def get_link_args(self, language: T.Optional[str] = None, raw: bool = False) -> T.List[str]: if raw and self.raw_link_args is not None: return self.raw_link_args return self.link_args
[ "def", "get_link_args", "(", "self", ",", "language", ":", "T", ".", "Optional", "[", "str", "]", "=", "None", ",", "raw", ":", "bool", "=", "False", ")", "->", "T", ".", "List", "[", "str", "]", ":", "if", "raw", "and", "self", ".", "raw_link_ar...
https://github.com/mesonbuild/meson/blob/a22d0f9a0a787df70ce79b05d0c45de90a970048/mesonbuild/dependencies/base.py#L132-L135
scikit-learn/scikit-learn
1d1aadd0711b87d2a11c80aad15df6f8cf156712
sklearn/manifold/_t_sne.py
python
_joint_probabilities
(distances, desired_perplexity, verbose)
return P
Compute joint probabilities p_ij from distances. Parameters ---------- distances : ndarray of shape (n_samples * (n_samples-1) / 2,) Distances of samples are stored as condensed matrices, i.e. we omit the diagonal and duplicate entries and store everything in a one-dimensional array. desired_perplexity : float Desired perplexity of the joint probability distributions. verbose : int Verbosity level. Returns ------- P : ndarray of shape (n_samples * (n_samples-1) / 2,) Condensed joint probability matrix.
Compute joint probabilities p_ij from distances.
[ "Compute", "joint", "probabilities", "p_ij", "from", "distances", "." ]
def _joint_probabilities(distances, desired_perplexity, verbose): """Compute joint probabilities p_ij from distances. Parameters ---------- distances : ndarray of shape (n_samples * (n_samples-1) / 2,) Distances of samples are stored as condensed matrices, i.e. we omit the diagonal and duplicate entries and store everything in a one-dimensional array. desired_perplexity : float Desired perplexity of the joint probability distributions. verbose : int Verbosity level. Returns ------- P : ndarray of shape (n_samples * (n_samples-1) / 2,) Condensed joint probability matrix. """ # Compute conditional probabilities such that they approximately match # the desired perplexity distances = distances.astype(np.float32, copy=False) conditional_P = _utils._binary_search_perplexity( distances, desired_perplexity, verbose ) P = conditional_P + conditional_P.T sum_P = np.maximum(np.sum(P), MACHINE_EPSILON) P = np.maximum(squareform(P) / sum_P, MACHINE_EPSILON) return P
[ "def", "_joint_probabilities", "(", "distances", ",", "desired_perplexity", ",", "verbose", ")", ":", "# Compute conditional probabilities such that they approximately match", "# the desired perplexity", "distances", "=", "distances", ".", "astype", "(", "np", ".", "float32",...
https://github.com/scikit-learn/scikit-learn/blob/1d1aadd0711b87d2a11c80aad15df6f8cf156712/sklearn/manifold/_t_sne.py#L36-L66
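The `_joint_probabilities` record above ends by symmetrizing the conditional matrix, normalizing by total mass, and returning the condensed (upper-triangle, diagonal omitted) vector that scipy's `squareform` produces. A pure-Python sketch of just that tail step, with a small hypothetical conditional matrix (the perplexity binary search is out of scope here):

```python
def symmetrize_condensed(cond):
    """Sketch of the tail of _joint_probabilities: symmetrize the
    conditional probability matrix, normalize by the total mass, and
    return the condensed upper-triangle vector (n*(n-1)/2 entries)."""
    n = len(cond)
    # P = conditional_P + conditional_P.T
    full = [[cond[i][j] + cond[j][i] for j in range(n)] for i in range(n)]
    total = sum(sum(row) for row in full)
    # squareform(P) / sum_P: keep only i < j entries, row-major
    return [full[i][j] / total for i in range(n) for j in range(i + 1, n)]

cond = [[0.0, 0.2, 0.1],
        [0.3, 0.0, 0.1],
        [0.2, 0.1, 0.0]]
P = symmetrize_condensed(cond)  # 3 entries for n = 3
```

Because the condensed form stores each symmetric pair once, its entries sum to half the normalized mass.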
pysmt/pysmt
ade4dc2a825727615033a96d31c71e9f53ce4764
pysmt/constants.py
python
is_pysmt_fraction
(var)
return type(var) == FractionClass
Tests whether var is a Fraction. This takes into account the class being used to represent the Fraction.
Tests whether var is a Fraction.
[ "Tests", "whether", "var", "is", "a", "Fraction", "." ]
def is_pysmt_fraction(var): """Tests whether var is a Fraction. This takes into account the class being used to represent the Fraction. """ return type(var) == FractionClass
[ "def", "is_pysmt_fraction", "(", "var", ")", ":", "return", "type", "(", "var", ")", "==", "FractionClass" ]
https://github.com/pysmt/pysmt/blob/ade4dc2a825727615033a96d31c71e9f53ce4764/pysmt/constants.py#L73-L78
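The `is_pysmt_fraction` record above uses an exact `type(var) == FractionClass` comparison rather than `isinstance`, so subclasses are deliberately rejected (pysmt binds `FractionClass` to gmpy2's `mpq` when available). A sketch of that distinction using the stdlib `Fraction` and a hypothetical subclass:

```python
from fractions import Fraction

def is_exact_fraction(var, FractionClass=Fraction):
    """Sketch of pysmt's is_pysmt_fraction: exact type test, so
    instances of Fraction subclasses do not count."""
    return type(var) == FractionClass

class TaggedFraction(Fraction):  # hypothetical subclass for illustration
    pass

# isinstance(TaggedFraction(1, 2), Fraction) is True,
# but is_exact_fraction(TaggedFraction(1, 2)) is False.
```
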
olivierkes/manuskript
2b992e70c617325013e347b470246af66f6d2690
manuskript/models/plotModel.py
python
plotModel.getUniqueID
(self, parent=QModelIndex())
return str(k)
Returns an unused ID
Returns an unused ID
[ "Returns", "an", "unused", "ID" ]
def getUniqueID(self, parent=QModelIndex()): """Returns an unused ID""" parentItem = self.itemFromIndex(parent) vals = [] for i in range(self.rowCount(parent)): index = self.index(i, Plot.ID, parent) # item = self.item(i, Plot.ID) if index.isValid() and index.data(): vals.append(int(index.data())) k = 0 while k in vals: k += 1 return str(k)
[ "def", "getUniqueID", "(", "self", ",", "parent", "=", "QModelIndex", "(", ")", ")", ":", "parentItem", "=", "self", ".", "itemFromIndex", "(", "parent", ")", "vals", "=", "[", "]", "for", "i", "in", "range", "(", "self", ".", "rowCount", "(", "paren...
https://github.com/olivierkes/manuskript/blob/2b992e70c617325013e347b470246af66f6d2690/manuskript/models/plotModel.py#L118-L131
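The `getUniqueID` record above scans existing IDs and returns the smallest unused nonnegative integer as a string. The core scan, extracted as a standalone sketch (the Qt model traversal is replaced by a plain list of taken IDs):

```python
def smallest_unused_id(taken):
    """Same linear scan as plotModel.getUniqueID: collect the taken IDs,
    then count up from 0 until an unused value is found."""
    vals = set(int(v) for v in taken)
    k = 0
    while k in vals:
        k += 1
    return str(k)

# smallest_unused_id([0, 1, 3]) -> "2"
```
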
TencentCloud/tencentcloud-sdk-python
3677fd1cdc8c5fd626ce001c13fd3b59d1f279d2
tencentcloud/iotvideo/v20191126/models.py
python
ModifyVerContentResponse.__init__
(self)
r""" :param RequestId: Unique request ID, returned with every request. Provide this request's RequestId when troubleshooting. :type RequestId: str
r""" :param RequestId: Unique request ID, returned with every request. Provide this request's RequestId when troubleshooting. :type RequestId: str
[ "r", ":", "param", "RequestId", ":", "唯一请求", "ID,每次请求都会返回。定位问题时需要提供该次请求的", "RequestId。", ":", "type", "RequestId", ":", "str" ]
def __init__(self): r""" :param RequestId: 唯一请求 ID,每次请求都会返回。定位问题时需要提供该次请求的 RequestId。 :type RequestId: str """ self.RequestId = None
[ "def", "__init__", "(", "self", ")", ":", "self", ".", "RequestId", "=", "None" ]
https://github.com/TencentCloud/tencentcloud-sdk-python/blob/3677fd1cdc8c5fd626ce001c13fd3b59d1f279d2/tencentcloud/iotvideo/v20191126/models.py#L3579-L3584
beeware/ouroboros
a29123c6fab6a807caffbb7587cf548e0c370296
ouroboros/lib2to3/pytree.py
python
BasePattern.match_seq
(self, nodes, results=None)
return self.match(nodes[0], results)
Does this pattern exactly match a sequence of nodes? Default implementation for non-wildcard patterns.
Does this pattern exactly match a sequence of nodes?
[ "Does", "this", "pattern", "exactly", "match", "a", "sequence", "of", "nodes?" ]
def match_seq(self, nodes, results=None): """ Does this pattern exactly match a sequence of nodes? Default implementation for non-wildcard patterns. """ if len(nodes) != 1: return False return self.match(nodes[0], results)
[ "def", "match_seq", "(", "self", ",", "nodes", ",", "results", "=", "None", ")", ":", "if", "len", "(", "nodes", ")", "!=", "1", ":", "return", "False", "return", "self", ".", "match", "(", "nodes", "[", "0", "]", ",", "results", ")" ]
https://github.com/beeware/ouroboros/blob/a29123c6fab6a807caffbb7587cf548e0c370296/ouroboros/lib2to3/pytree.py#L480-L488
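The `match_seq` record above shows lib2to3's default for non-wildcard patterns: a sequence matches only when it has exactly one node, in which case matching delegates to `match()`. A minimal stand-in class (the `wanted`-literal match rule is hypothetical, for illustration only):

```python
class LiteralPattern:
    """Stand-in for lib2to3's BasePattern: match() checks one node,
    match_seq() only succeeds for one-element sequences."""
    def __init__(self, wanted):
        self.wanted = wanted  # hypothetical match rule

    def match(self, node, results=None):
        return node == self.wanted

    def match_seq(self, nodes, results=None):
        if len(nodes) != 1:
            return False
        return self.match(nodes[0], results)
```
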
jorditorresBCN/FirstContactWithTensorFlow
e4bafcaa6a46baf9dead09075c28cef1871722e1
input_data.py
python
extract_images
(filename)
Extract the images into a 4D uint8 numpy array [index, y, x, depth].
Extract the images into a 4D uint8 numpy array [index, y, x, depth].
[ "Extract", "the", "images", "into", "a", "4D", "uint8", "numpy", "array", "[", "index", "y", "x", "depth", "]", "." ]
def extract_images(filename): """Extract the images into a 4D uint8 numpy array [index, y, x, depth].""" print('Extracting', filename) with gzip.open(filename) as bytestream: magic = _read32(bytestream) if magic != 2051: raise ValueError( 'Invalid magic number %d in MNIST image file: %s' % (magic, filename)) num_images = _read32(bytestream) rows = _read32(bytestream) cols = _read32(bytestream) buf = bytestream.read(rows * cols * num_images) data = numpy.frombuffer(buf, dtype=numpy.uint8) data = data.reshape(num_images, rows, cols, 1) return data
[ "def", "extract_images", "(", "filename", ")", ":", "print", "(", "'Extracting'", ",", "filename", ")", "with", "gzip", ".", "open", "(", "filename", ")", "as", "bytestream", ":", "magic", "=", "_read32", "(", "bytestream", ")", "if", "magic", "!=", "205...
https://github.com/jorditorresBCN/FirstContactWithTensorFlow/blob/e4bafcaa6a46baf9dead09075c28cef1871722e1/input_data.py#L41-L56
buke/GreenOdoo
3d8c55d426fb41fdb3f2f5a1533cfe05983ba1df
runtime/python/lib/python2.7/profile.py
python
runctx
(statement, globals, locals, filename=None, sort=-1)
Run statement under profiler, supplying your own globals and locals, optionally saving results in filename. statement and filename have the same semantics as profile.run
Run statement under profiler, supplying your own globals and locals, optionally saving results in filename.
[ "Run", "statement", "under", "profiler", "supplying", "your", "own", "globals", "and", "locals", "optionally", "saving", "results", "in", "filename", "." ]
def runctx(statement, globals, locals, filename=None, sort=-1): """Run statement under profiler, supplying your own globals and locals, optionally saving results in filename. statement and filename have the same semantics as profile.run """ prof = Profile() try: prof = prof.runctx(statement, globals, locals) except SystemExit: pass if filename is not None: prof.dump_stats(filename) else: return prof.print_stats(sort)
[ "def", "runctx", "(", "statement", ",", "globals", ",", "locals", ",", "filename", "=", "None", ",", "sort", "=", "-", "1", ")", ":", "prof", "=", "Profile", "(", ")", "try", ":", "prof", "=", "prof", ".", "runctx", "(", "statement", ",", "globals"...
https://github.com/buke/GreenOdoo/blob/3d8c55d426fb41fdb3f2f5a1533cfe05983ba1df/runtime/python/lib/python2.7/profile.py#L69-L84
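The `runctx` record above profiles a statement under caller-supplied `globals` and `locals`. The same calling convention is available on the stdlib `cProfile.Profile` object; a sketch with a hypothetical namespace:

```python
import cProfile

# Execute a statement under the profiler with a caller-supplied
# namespace, as profile.runctx does; bindings made by the statement
# remain visible in that namespace afterwards.
namespace = {"data": list(range(100))}
prof = cProfile.Profile()
prof.runctx("total = sum(data)", namespace, namespace)
```

After the call, `prof.print_stats()` would report the profiled frames, and `namespace["total"]` holds the statement's result.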
machinalis/yalign
e063b4f62d043476492bb0add06f60a90f7f025f
yalign/sequencealigner.py
python
SequenceAlignmentSearchProblem.cost
(self, state1, action, state2)
return cost
Cost of this action.
Cost of this action.
[ "Cost", "of", "this", "action", "." ]
def cost(self, state1, action, state2): """ Cost of this action. """ i, j, cost = action return cost
[ "def", "cost", "(", "self", ",", "state1", ",", "action", ",", "state2", ")", ":", "i", ",", "j", ",", "cost", "=", "action", "return", "cost" ]
https://github.com/machinalis/yalign/blob/e063b4f62d043476492bb0add06f60a90f7f025f/yalign/sequencealigner.py#L87-L90
pysmt/pysmt
ade4dc2a825727615033a96d31c71e9f53ce4764
pysmt/smtlib/printers.py
python
SmtPrinter.walk_plus
(self, formula)
return self.walk_nary(formula, "+")
[]
def walk_plus(self, formula): return self.walk_nary(formula, "+")
[ "def", "walk_plus", "(", "self", ",", "formula", ")", ":", "return", "self", ".", "walk_nary", "(", "formula", ",", "\"+\"", ")" ]
https://github.com/pysmt/pysmt/blob/ade4dc2a825727615033a96d31c71e9f53ce4764/pysmt/smtlib/printers.py#L54-L54
adamcaudill/EquationGroupLeak
52fa871c89008566c27159bd48f2a8641260c984
Firewall/EXPLOITS/EXBA/scapy/packet.py
python
Packet.haslayer
(self, cls)
return self.payload.haslayer(cls)
true if self has a layer that is an instance of cls. Superseded by "cls in self" syntax.
true if self has a layer that is an instance of cls. Superseded by "cls in self" syntax.
[ "true", "if", "self", "has", "a", "layer", "that", "is", "an", "instance", "of", "cls", ".", "Superseded", "by", "cls", "in", "self", "syntax", "." ]
def haslayer(self, cls): """true if self has a layer that is an instance of cls. Superseded by "cls in self" syntax.""" if self.__class__ == cls or self.__class__.__name__ == cls: return 1 for f in self.packetfields: fvalue_gen = self.getfieldval(f.name) if fvalue_gen is None: continue if not f.islist: fvalue_gen = SetGen(fvalue_gen,_iterpacket=0) for fvalue in fvalue_gen: if isinstance(fvalue, Packet): ret = fvalue.haslayer(cls) if ret: return ret return self.payload.haslayer(cls)
[ "def", "haslayer", "(", "self", ",", "cls", ")", ":", "if", "self", ".", "__class__", "==", "cls", "or", "self", ".", "__class__", ".", "__name__", "==", "cls", ":", "return", "1", "for", "f", "in", "self", ".", "packetfields", ":", "fvalue_gen", "="...
https://github.com/adamcaudill/EquationGroupLeak/blob/52fa871c89008566c27159bd48f2a8641260c984/Firewall/EXPLOITS/EXBA/scapy/packet.py#L687-L702
python/cpython
e13cdca0f5224ec4e23bdd04bb3120506964bc8b
Lib/_pyio.py
python
FileIO.readinto
(self, b)
return n
Same as RawIOBase.readinto().
Same as RawIOBase.readinto().
[ "Same", "as", "RawIOBase", ".", "readinto", "()", "." ]
def readinto(self, b): """Same as RawIOBase.readinto().""" m = memoryview(b).cast('B') data = self.read(len(m)) n = len(data) m[:n] = data return n
[ "def", "readinto", "(", "self", ",", "b", ")", ":", "m", "=", "memoryview", "(", "b", ")", ".", "cast", "(", "'B'", ")", "data", "=", "self", ".", "read", "(", "len", "(", "m", ")", ")", "n", "=", "len", "(", "data", ")", "m", "[", ":", "...
https://github.com/python/cpython/blob/e13cdca0f5224ec4e23bdd04bb3120506964bc8b/Lib/_pyio.py#L1701-L1707
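The `FileIO.readinto` record above uses a `memoryview(b).cast('B')` over the caller's buffer so the copied slice works for any writable bytes-like object. The same technique as a standalone sketch against an in-memory stream:

```python
import io

def readinto_sketch(raw, b):
    """Mirror of _pyio's FileIO.readinto: read up to len(b) bytes and
    copy them into the caller's buffer, returning the byte count."""
    m = memoryview(b).cast('B')
    data = raw.read(len(m))
    n = len(data)
    m[:n] = data
    return n

buf = bytearray(8)
n = readinto_sketch(io.BytesIO(b"hello"), buf)  # fills buf[:5]
```
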
bitprophet/ssh
e8bdad4c82a50158a749233dca58c29e47c60b76
ssh/client.py
python
SSHClient.invoke_shell
(self, term='vt100', width=80, height=24)
return chan
Start an interactive shell session on the SSH server. A new L{Channel} is opened and connected to a pseudo-terminal using the requested terminal type and size. @param term: the terminal type to emulate (for example, C{"vt100"}) @type term: str @param width: the width (in characters) of the terminal window @type width: int @param height: the height (in characters) of the terminal window @type height: int @return: a new channel connected to the remote shell @rtype: L{Channel} @raise SSHException: if the server fails to invoke a shell
Start an interactive shell session on the SSH server. A new L{Channel} is opened and connected to a pseudo-terminal using the requested terminal type and size.
[ "Start", "an", "interactive", "shell", "session", "on", "the", "SSH", "server", ".", "A", "new", "L", "{", "Channel", "}", "is", "opened", "and", "connected", "to", "a", "pseudo", "-", "terminal", "using", "the", "requested", "terminal", "type", "and", "...
def invoke_shell(self, term='vt100', width=80, height=24): """ Start an interactive shell session on the SSH server. A new L{Channel} is opened and connected to a pseudo-terminal using the requested terminal type and size. @param term: the terminal type to emulate (for example, C{"vt100"}) @type term: str @param width: the width (in characters) of the terminal window @type width: int @param height: the height (in characters) of the terminal window @type height: int @return: a new channel connected to the remote shell @rtype: L{Channel} @raise SSHException: if the server fails to invoke a shell """ chan = self._transport.open_session() chan.get_pty(term, width, height) chan.invoke_shell() return chan
[ "def", "invoke_shell", "(", "self", ",", "term", "=", "'vt100'", ",", "width", "=", "80", ",", "height", "=", "24", ")", ":", "chan", "=", "self", ".", "_transport", ".", "open_session", "(", ")", "chan", ".", "get_pty", "(", "term", ",", "width", ...
https://github.com/bitprophet/ssh/blob/e8bdad4c82a50158a749233dca58c29e47c60b76/ssh/client.py#L371-L391
mongodb/motor
055f5e05abf1f15e64ae43fb8c680d2706f3c419
motor/core.py
python
AgnosticClient.__init__
(self, *args, **kwargs)
Create a new connection to a single MongoDB instance at *host:port*. Takes the same constructor arguments as :class:`~pymongo.mongo_client.MongoClient`, as well as: :Parameters: - `io_loop` (optional): Special event loop instance to use instead of default.
Create a new connection to a single MongoDB instance at *host:port*.
[ "Create", "a", "new", "connection", "to", "a", "single", "MongoDB", "instance", "at", "*", "host", ":", "port", "*", "." ]
def __init__(self, *args, **kwargs): """Create a new connection to a single MongoDB instance at *host:port*. Takes the same constructor arguments as :class:`~pymongo.mongo_client.MongoClient`, as well as: :Parameters: - `io_loop` (optional): Special event loop instance to use instead of default. """ if 'io_loop' in kwargs: io_loop = kwargs.pop('io_loop') self._framework.check_event_loop(io_loop) else: io_loop = self._framework.get_event_loop() kwargs.setdefault('connect', False) kwargs.setdefault( 'driver', DriverInfo('Motor', motor_version, self._framework.platform_info())) delegate = self.__delegate_class__(*args, **kwargs) super().__init__(delegate) self.io_loop = io_loop
[ "def", "__init__", "(", "self", ",", "*", "args", ",", "*", "*", "kwargs", ")", ":", "if", "'io_loop'", "in", "kwargs", ":", "io_loop", "=", "kwargs", ".", "pop", "(", "'io_loop'", ")", "self", ".", "_framework", ".", "check_event_loop", "(", "io_loop"...
https://github.com/mongodb/motor/blob/055f5e05abf1f15e64ae43fb8c680d2706f3c419/motor/core.py#L125-L148
leancloud/satori
701caccbd4fe45765001ca60435c0cb499477c03
satori-rules/plugin/libs/redis/connection.py
python
PythonParser.__del__
(self)
[]
def __del__(self): try: self.on_disconnect() except Exception: pass
[ "def", "__del__", "(", "self", ")", ":", "try", ":", "self", ".", "on_disconnect", "(", ")", "except", "Exception", ":", "pass" ]
https://github.com/leancloud/satori/blob/701caccbd4fe45765001ca60435c0cb499477c03/satori-rules/plugin/libs/redis/connection.py#L211-L215
nipy/mindboggle
bc10812979d42e94b8a01ad8f98b4ceae33169e5
mindboggle/mio/colors.py
python
group_colors
(colormap, colormap_name, description='', adjacency_matrix=[], IDs=[], names=[], groups=[], save_text_files=True, plot_colors=True, plot_graphs=True, out_dir='.', verbose=True)
return colors
This greedy algoritm reorders a colormap so that labels assigned to the same group have more similar colors, but within a group (usually of adjacent labels), the colors are reordered so that adjacent labels have dissimilar colors: 1. Convert colormap to Lab color space which better represents human perception. 2. Load a binary (or weighted) adjacency matrix, where each row or column represents a label, and each value signifies whether (or the degree to which) a given pair of labels are adjacent. If a string (file) is provided instead of a numpy ndarray: column 0 = label "ID" number column 1 = label "name" column 2 = "group" number (each label is assigned to a group) columns 3... = label adjacency matrix 3. Sort labels by decreasing number of adjacent labels (adjacency sum). 4. Order label groups by decreasing maximum adjacency sum. 5. Create a similarity matrix for pairs of colors. 6. Sort colors by decreasing perceptual difference from all other colors. 7. For each label group: 7.1. Select unpicked colors for group that are similar to the first unpicked color (unpicked colors were sorted above by decreasing perceptual difference from all other colors). 7.2. Reorder subgraph colors according to label adjacency sum (decreasing number of adjacent labels). 8. Assign new colors. For plotting graphs and colormap: 1. Convert the matrix to a graph, where each node represents a label and each edge represents the adjacency value between connected nodes. 2. Break up the graph into subgraphs, where each subgraph contains labels assigned the same group number (which usually means they are adjacent). 3. Plot the colormap and colored sub/graphs. 
NOTE: Requires pydotplus Parameters ---------- colormap : string or numpy ndarray of ndarrays of 3 floats between 0 and 1 csv file containing rgb colormap, or colormap array colormap_name : string name of colormap description : string description of colormap adjacency_matrix : string or NxN numpy ndarray (N = number of labels) csv file containing label adjacency matrix or matrix itself IDs : list of integers label ID numbers names : list of strings label names groups : list of integers label group numbers (one per label) save_text_files : Boolean save colormap as csv and json files? plot_colors : Boolean plot colormap as horizontal bar chart? plot_graphs : Boolean plot colormap as graphs? out_dir : string output directory path verbose : Boolean print to stdout? Returns ------- colors : numpy ndarray of ndarrays of 3 floats between 0 and 1 rgb colormap Examples -------- >>> # Get colormap: >>> from mindboggle.mio.colors import distinguishable_colors >>> import numpy as np >>> colormap = distinguishable_colors(ncolors=31, ... backgrounds=[[0,0,0],[1,1,1]], ... save_csv=False, plot_colormap=False, verbose=False) >>> # Get adjacency matrix: >>> from mindboggle.mio.colors import label_adjacency_matrix >>> from mindboggle.mio.fetch_data import prep_tests >>> urls, fetch_data = prep_tests() >>> label_file = fetch_data(urls['left_manual_labels'], '', '.vtk') >>> IDs, adjacency_matrix, output_table = label_adjacency_matrix(label_file, ... ignore_values=[-1, 0], add_value=0, save_table=False, ... output_format='', verbose=False) >>> adjacency_matrix = adjacency_matrix.values >>> adjacency_matrix = adjacency_matrix[:, 1::] >>> # Reorganize colormap: >>> from mindboggle.mio.colors import group_colors >>> from mindboggle.mio.labels import DKTprotocol >>> dkt = DKTprotocol() >>> colormap_name = "DKT31colormap" >>> description = "Colormap for DKT31 human brain cortical labels" >>> save_text_files = True >>> plot_colors = False >>> plot_graphs = False >>> out_dir = '.' 
>>> verbose = False >>> #IDs = dkt.DKT31_numbers >>> names = dkt.DKT31_names #dkt.left_cerebrum_cortex_DKT31_names >>> groups = dkt.DKT31_groups >>> colors = group_colors(colormap, colormap_name, description, ... adjacency_matrix, IDs, names, groups, ... save_text_files, plot_colors, plot_graphs, out_dir, verbose) >>> np.allclose(colors[0], [0.7586206896551724, 0.20689655172413793, 0.0]) True >>> np.allclose(colors[1], [0.48275862068965514, 0.4482758620689655, 0.48275862068965514]) True >>> np.allclose(colors[2], [0.3448275862068966, 0.3103448275862069, 0.034482758620689655]) True >>> np.allclose(colors[-1], [0.7931034482758621, 0.9655172413793103, 0.7931034482758621]) True No groups / subgraphs: >>> groups = [] >>> colors = group_colors(colormap, colormap_name, description, ... adjacency_matrix, IDs, names, groups, ... save_text_files, plot_colors, plot_graphs, out_dir, verbose) >>> np.allclose(colors[0], [0.5172413793103449, 0.8275862068965517, 1.0]) True >>> np.allclose(colors[1], [0.13793103448275862, 0.0, 0.24137931034482757]) True >>> np.allclose(colors[2], [0.3793103448275862, 0.27586206896551724, 0.48275862068965514]) True >>> np.allclose(colors[-1], [0.6206896551724138, 0.48275862068965514, 0.3448275862068966]) True
This greedy algoritm reorders a colormap so that labels assigned to the same group have more similar colors, but within a group (usually of adjacent labels), the colors are reordered so that adjacent labels have dissimilar colors:
[ "This", "greedy", "algoritm", "reorders", "a", "colormap", "so", "that", "labels", "assigned", "to", "the", "same", "group", "have", "more", "similar", "colors", "but", "within", "a", "group", "(", "usually", "of", "adjacent", "labels", ")", "the", "colors",...
def group_colors(colormap, colormap_name, description='', adjacency_matrix=[], IDs=[], names=[], groups=[], save_text_files=True, plot_colors=True, plot_graphs=True, out_dir='.', verbose=True): """ This greedy algoritm reorders a colormap so that labels assigned to the same group have more similar colors, but within a group (usually of adjacent labels), the colors are reordered so that adjacent labels have dissimilar colors: 1. Convert colormap to Lab color space which better represents human perception. 2. Load a binary (or weighted) adjacency matrix, where each row or column represents a label, and each value signifies whether (or the degree to which) a given pair of labels are adjacent. If a string (file) is provided instead of a numpy ndarray: column 0 = label "ID" number column 1 = label "name" column 2 = "group" number (each label is assigned to a group) columns 3... = label adjacency matrix 3. Sort labels by decreasing number of adjacent labels (adjacency sum). 4. Order label groups by decreasing maximum adjacency sum. 5. Create a similarity matrix for pairs of colors. 6. Sort colors by decreasing perceptual difference from all other colors. 7. For each label group: 7.1. Select unpicked colors for group that are similar to the first unpicked color (unpicked colors were sorted above by decreasing perceptual difference from all other colors). 7.2. Reorder subgraph colors according to label adjacency sum (decreasing number of adjacent labels). 8. Assign new colors. For plotting graphs and colormap: 1. Convert the matrix to a graph, where each node represents a label and each edge represents the adjacency value between connected nodes. 2. Break up the graph into subgraphs, where each subgraph contains labels assigned the same group number (which usually means they are adjacent). 3. Plot the colormap and colored sub/graphs. 
NOTE: Requires pydotplus Parameters ---------- colormap : string or numpy ndarray of ndarrays of 3 floats between 0 and 1 csv file containing rgb colormap, or colormap array colormap_name : string name of colormap description : string description of colormap adjacency_matrix : string or NxN numpy ndarray (N = number of labels) csv file containing label adjacency matrix or matrix itself IDs : list of integers label ID numbers names : list of strings label names groups : list of integers label group numbers (one per label) save_text_files : Boolean save colormap as csv and json files? plot_colors : Boolean plot colormap as horizontal bar chart? plot_graphs : Boolean plot colormap as graphs? out_dir : string output directory path verbose : Boolean print to stdout? Returns ------- colors : numpy ndarray of ndarrays of 3 floats between 0 and 1 rgb colormap Examples -------- >>> # Get colormap: >>> from mindboggle.mio.colors import distinguishable_colors >>> import numpy as np >>> colormap = distinguishable_colors(ncolors=31, ... backgrounds=[[0,0,0],[1,1,1]], ... save_csv=False, plot_colormap=False, verbose=False) >>> # Get adjacency matrix: >>> from mindboggle.mio.colors import label_adjacency_matrix >>> from mindboggle.mio.fetch_data import prep_tests >>> urls, fetch_data = prep_tests() >>> label_file = fetch_data(urls['left_manual_labels'], '', '.vtk') >>> IDs, adjacency_matrix, output_table = label_adjacency_matrix(label_file, ... ignore_values=[-1, 0], add_value=0, save_table=False, ... output_format='', verbose=False) >>> adjacency_matrix = adjacency_matrix.values >>> adjacency_matrix = adjacency_matrix[:, 1::] >>> # Reorganize colormap: >>> from mindboggle.mio.colors import group_colors >>> from mindboggle.mio.labels import DKTprotocol >>> dkt = DKTprotocol() >>> colormap_name = "DKT31colormap" >>> description = "Colormap for DKT31 human brain cortical labels" >>> save_text_files = True >>> plot_colors = False >>> plot_graphs = False >>> out_dir = '.' 
>>> verbose = False >>> #IDs = dkt.DKT31_numbers >>> names = dkt.DKT31_names #dkt.left_cerebrum_cortex_DKT31_names >>> groups = dkt.DKT31_groups >>> colors = group_colors(colormap, colormap_name, description, ... adjacency_matrix, IDs, names, groups, ... save_text_files, plot_colors, plot_graphs, out_dir, verbose) >>> np.allclose(colors[0], [0.7586206896551724, 0.20689655172413793, 0.0]) True >>> np.allclose(colors[1], [0.48275862068965514, 0.4482758620689655, 0.48275862068965514]) True >>> np.allclose(colors[2], [0.3448275862068966, 0.3103448275862069, 0.034482758620689655]) True >>> np.allclose(colors[-1], [0.7931034482758621, 0.9655172413793103, 0.7931034482758621]) True No groups / subgraphs: >>> groups = [] >>> colors = group_colors(colormap, colormap_name, description, ... adjacency_matrix, IDs, names, groups, ... save_text_files, plot_colors, plot_graphs, out_dir, verbose) >>> np.allclose(colors[0], [0.5172413793103449, 0.8275862068965517, 1.0]) True >>> np.allclose(colors[1], [0.13793103448275862, 0.0, 0.24137931034482757]) True >>> np.allclose(colors[2], [0.3793103448275862, 0.27586206896551724, 0.48275862068965514]) True >>> np.allclose(colors[-1], [0.6206896551724138, 0.48275862068965514, 0.3448275862068966]) True """ import os import pandas as pd import numpy as np import matplotlib.pyplot as plt import networkx as nx from colormath.color_diff import delta_e_cie2000 from colormath.color_objects import LabColor, AdobeRGBColor from colormath.color_conversions import convert_color import itertools from mindboggle.mio.colors import write_json_colormap, write_xml_colormap # ------------------------------------------------------------------------ # Set parameters for graph layout and output files: # ------------------------------------------------------------------------ if plot_graphs: graph_node_size = 1000 graph_edge_width = 2 graph_font_size = 10 subgraph_node_size = 3000 subgraph_edge_width = 5 subgraph_font_size = 18 axis_buffer = 10 graph_image_file = 
os.path.join(out_dir, "label_graph.png") subgraph_image_file_pre = os.path.join(out_dir, "label_subgraph") subgraph_image_file_post = ".png" if plot_colors: colormap_image_file = os.path.join(out_dir, 'label_colormap.png') if save_text_files: colormap_csv_file = os.path.join(out_dir, 'label_colormap.csv') colormap_json_file = os.path.join(out_dir, 'label_colormap.json') colormap_xml_file = os.path.join(out_dir, 'label_colormap.xml') run_permutations = False # ------------------------------------------------------------------------ # Load colormap: # ------------------------------------------------------------------------ if verbose: print("Load colormap and convert to CIELAB color space.") if isinstance(colormap, np.ndarray): colors = colormap elif isinstance(colormap, str): colors = pd.read_csv(colormap, sep=',', header=None) colors = colors.values else: raise IOError("Please use correct format for colormap.") nlabels = np.shape(colors)[0] new_colors = np.copy(colors) if len(IDs) == 0: IDs = range(nlabels) if len(names) == 0: names = [str(x) for x in range(nlabels)] if len(groups) == 0: groups = [1 for x in range(nlabels)] # ------------------------------------------------------------------------ # Convert to Lab color space which better represents human perception: # ------------------------------------------------------------------------ # https://python-colormath.readthedocs.io/en/latest/illuminants.html lab_colors = [] for rgb in colors: lab_colors.append(convert_color(AdobeRGBColor(rgb[0], rgb[1], rgb[2]), LabColor)) # ------------------------------------------------------------------------ # Load label adjacency matrix: # ------------------------------------------------------------------------ if np.size(adjacency_matrix): if verbose: print("Load label adjacency matrix.") if isinstance(adjacency_matrix, np.ndarray): adjacency_values = adjacency_matrix # If a string (file) is provided instead of a numpy ndarray: # column 0 = label "ID" number # column 1 = 
label "name" # column 2 = "group" number (each label is assigned to a group) # columns 3... = label adjacency matrix elif isinstance(adjacency_matrix, str): matrix = pd.read_csv(adjacency_matrix, sep=',', header=None) matrix = matrix.values IDs = matrix.ID names = matrix.name groups = matrix.group adjacency_values = matrix[[str(x) for x in IDs]].values else: raise IOError("Please use correct format for adjacency matrix.") if np.shape(adjacency_values)[0] != nlabels: raise IOError("The colormap and label adjacency matrix don't " "have the same number of labels.") # Normalize adjacency values: adjacency_values = adjacency_values / np.max(adjacency_values) else: plot_graphs = False # ------------------------------------------------------------------------ # Sort labels by decreasing number of adjacent labels (adjacency sum): # ------------------------------------------------------------------------ if np.size(adjacency_matrix): adjacency_sums = np.sum(adjacency_values, axis = 1) # sum rows isort_labels = np.argsort(adjacency_sums)[::-1] else: isort_labels = range(nlabels) # ------------------------------------------------------------------------ # Order label groups by decreasing maximum adjacency sum: # ------------------------------------------------------------------------ label_groups = np.unique(groups) if np.size(adjacency_matrix): max_adjacency_sums = [] for label_group in label_groups: igroup = [i for i,x in enumerate(groups) if x == label_group] max_adjacency_sums.append(max(adjacency_sums[igroup])) label_groups = label_groups[np.argsort(max_adjacency_sums)[::-1]] # ------------------------------------------------------------------------ # Convert adjacency matrix to graph for plotting: # ------------------------------------------------------------------------ if plot_graphs: adjacency_graph = nx.from_numpy_matrix(adjacency_values) for inode in range(nlabels): adjacency_graph.node[inode]['ID'] = IDs[inode] adjacency_graph.node[inode]['label'] = names[inode] 
adjacency_graph.node[inode]['group'] = groups[inode] # ------------------------------------------------------------------------ # Create a similarity matrix for pairs of colors: # ------------------------------------------------------------------------ if verbose: print("Create a similarity matrix for pairs of colors.") dx_matrix = np.zeros((nlabels, nlabels)) for icolor1 in range(nlabels): for icolor2 in range(nlabels): dx_matrix[icolor1,icolor2] = delta_e_cie2000(lab_colors[icolor1], lab_colors[icolor2]) # ------------------------------------------------------------------------ # Sort colors by decreasing perceptual difference from all other colors: # ------------------------------------------------------------------------ icolors_to_pick = list(np.argsort(np.sum(dx_matrix, axis = 1))[::-1]) # ------------------------------------------------------------------------ # Loop through label groups: # ------------------------------------------------------------------------ for label_group in label_groups: if verbose: print("Labels in group {0}...".format(label_group)) igroup = [i for i,x in enumerate(groups) if x == label_group] N = len(igroup) # -------------------------------------------------------------------- # Select unpicked colors for group that are similar to the first # unpicked color (unpicked colors were sorted above by decreasing # perceptual difference from all other colors): # -------------------------------------------------------------------- isimilar = np.argsort(dx_matrix[icolors_to_pick[0], icolors_to_pick])[0:N] icolors_to_pick_copy = icolors_to_pick.copy() group_colors = [list(colors[icolors_to_pick[i]]) for i in isimilar] if run_permutations: group_lab_colors = [lab_colors[icolors_to_pick[i]] for i in isimilar] for iremove in isimilar: icolors_to_pick.remove(icolors_to_pick_copy[iremove]) # -------------------------------------------------------------------- # Reorder group colors according to label adjacency sum # (decreasing number of adjacent 
labels): # -------------------------------------------------------------------- isort_group_labels = np.argsort(isort_labels[igroup]) group_colors = [group_colors[i] for i in isort_group_labels] # -------------------------------------------------------------------- # Compute differences between every pair of colors within group: # -------------------------------------------------------------------- weights = False if run_permutations: permutation_max = np.zeros(N) NxN_matrix = np.zeros((N, N)) # ---------------------------------------------------------------- # Extract group adjacency submatrix: # ---------------------------------------------------------------- neighbor_matrix = adjacency_values[igroup, :][:, igroup] if not weights: neighbor_matrix = (neighbor_matrix > 0).astype(np.uint8) # ---------------------------------------------------------------- # Permute colors and color pair differences: # ---------------------------------------------------------------- DEmax = 0 permutations = [np.array(s) for s in itertools.permutations(range(0, N), N)] if verbose: print(" ".join([str(N),'labels,', str(len(permutations)), 'permutations:'])) for permutation in permutations: delta_matrix = NxN_matrix.copy() for i1 in range(N): for i2 in range(N): if (i2 > i1) and (neighbor_matrix[i1, i2] > 0): delta_matrix[i1,i2] = delta_e_cie2000(group_lab_colors[i1], group_lab_colors[i2]) if weights: DE = np.sum((delta_matrix * neighbor_matrix)) else: DE = np.sum(delta_matrix) # ------------------------------------------------------------ # Store color permutation with maximum adjacency cost: # ------------------------------------------------------------ if DE > DEmax: DEmax = DE permutation_max = permutation # ------------------------------------------------------------ # Reorder group colors by the maximum adjacency cost: # ------------------------------------------------------------ group_colors = [group_colors[x] for x in permutation_max] new_colors[isimilar] = group_colors # 
-------------------------------------------------------------------- # Assign new colors: # -------------------------------------------------------------------- else: new_colors[isimilar] = group_colors # -------------------------------------------------------------------- # Draw a figure of the colored subgraph: # -------------------------------------------------------------------- if plot_graphs: plt.figure(label_group) subgraph = adjacency_graph.subgraph(igroup) # Layout: pos = nx.nx_pydot.graphviz_layout(subgraph, prog="neato") nx.draw(subgraph, pos, node_size=subgraph_node_size, width=subgraph_edge_width, alpha=0.5, with_labels=False) # Labels: labels={} for iN in range(N): labels[subgraph.nodes()[iN]] = \ subgraph.node[subgraph.nodes()[iN]]['label'] nx.draw_networkx_labels(subgraph, pos, labels, font_size=subgraph_font_size, font_color='black') # Nodes: nodelist = list(subgraph.node.keys()) for iN in range(N): nx.draw_networkx_nodes(subgraph, pos, node_size=subgraph_node_size, nodelist=[nodelist[iN]], node_color=group_colors[iN]) # Figure: ax = plt.gca().axis() plt.gca().axis([ax[0]-axis_buffer, ax[1]+axis_buffer, ax[2]-axis_buffer, ax[3]+axis_buffer]) plt.savefig(subgraph_image_file_pre + str(int(label_group)) + subgraph_image_file_post) #plt.show() # ------------------------------------------------------------------------ # Plot the entire graph (without colors): # ------------------------------------------------------------------------ if plot_graphs: plt.figure(nlabels) # Graph: pos = nx.nx_pydot.graphviz_layout(adjacency_graph, prog="neato") nx.draw(adjacency_graph, pos, node_color='yellow', node_size=graph_node_size, width=graph_edge_width, with_labels=False) # Labels: labels={} for ilabel in range(nlabels): labels[ilabel] = adjacency_graph.node[ilabel]['label'] nx.draw_networkx_labels(adjacency_graph, pos, labels, font_size=graph_font_size, font_color='black') # # Nodes: # nodelist = list(adjacency_graph.node.keys()) # for icolor, new_color in 
enumerate(new_colors): # nx.draw_networkx_nodes(subgraph, pos, # node_size=graph_node_size, # nodelist=[nodelist[icolor]], # node_color=new_color) plt.savefig(graph_image_file) plt.show() # ------------------------------------------------------------------------ # Plot the subgraphs (colors): # ------------------------------------------------------------------------ if plot_graphs: for label_group in label_groups: plt.figure(label_group) plt.show() # ------------------------------------------------------------------------ # Plot the colormap as a horizontal bar chart: # ------------------------------------------------------------------------ if plot_colors: plt.figure(nlabels, figsize=(5, 10)) for ilabel in range(nlabels): ax = plt.subplot(nlabels, 1, ilabel + 1) plt.axis("off") rgb = new_colors[ilabel] plt.barh(0, 50, 1, 0, color=rgb) plt.savefig(colormap_image_file) plt.show() # ------------------------------------------------------------------------ # Save new colormap as text files: # ------------------------------------------------------------------------ if save_text_files: # ------------------------------------------------------------------------ # Save new colormap as a csv file: # ------------------------------------------------------------------------ np.savetxt(colormap_csv_file, new_colors, fmt='%.18e', delimiter=',', newline='\n', header='') # ------------------------------------------------------------------------ # Save new colormap as a json file: # ------------------------------------------------------------------------ write_json_colormap(colormap=new_colors, label_numbers=IDs, label_names=names, colormap_file=colormap_json_file, colormap_name=colormap_name, description=description) # ------------------------------------------------------------------------ # Save new colormap as an xml file: # ------------------------------------------------------------------------ write_xml_colormap(colormap=new_colors, label_numbers=IDs, 
colormap_file=colormap_xml_file, colormap_name=colormap_name) # ------------------------------------------------------------------------ # Return new colors: # ------------------------------------------------------------------------ colors = new_colors.tolist() return colors
[ "def", "group_colors", "(", "colormap", ",", "colormap_name", ",", "description", "=", "''", ",", "adjacency_matrix", "=", "[", "]", ",", "IDs", "=", "[", "]", ",", "names", "=", "[", "]", ",", "groups", "=", "[", "]", ",", "save_text_files", "=", "T...
https://github.com/nipy/mindboggle/blob/bc10812979d42e94b8a01ad8f98b4ceae33169e5/mindboggle/mio/colors.py#L329-L813
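The mindboggle record above sorts colors by decreasing total perceptual difference from all other colors before handing them out to label groups (so the most distinctive colors are picked first). A minimal sketch of that one sorting step, with a caller-supplied distance function standing in for the CIEDE2000 delta-E used in the real code:

```python
def sort_by_distinctness(colors, dist):
    """Order color indices by decreasing total distance to all other
    colors, so the most distinctive colors come first. `dist` is a
    stand-in for a perceptual metric such as delta_e_cie2000."""
    totals = [(sum(dist(c, other) for other in colors), i)
              for i, c in enumerate(colors)]
    return [i for _, i in sorted(totals, reverse=True)]
```

With scalar "colors" and absolute difference as the metric, the outlier 0 is most distinct and is listed first.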
radlab/sparrow
afb8efadeb88524f1394d1abe4ea66c6fd2ac744
deploy/third_party/boto-2.1.1/boto/rds/dbsecuritygroup.py
python
DBSecurityGroup.authorize
(self, cidr_ip=None, ec2_group=None)
return self.connection.authorize_dbsecurity_group(self.name, cidr_ip, group_name, group_owner_id)
Add a new rule to this DBSecurity group. You need to pass in either a CIDR block to authorize or an EC2 SecurityGroup. @type cidr_ip: string @param cidr_ip: A valid CIDR IP range to authorize @type ec2_group: :class:`boto.ec2.securitygroup.SecurityGroup` @rtype: bool @return: True if successful.
Add a new rule to this DBSecurity group. You need to pass in either a CIDR block to authorize or an EC2 SecurityGroup.
[ "Add", "a", "new", "rule", "to", "this", "DBSecurity", "group", ".", "You", "need", "to", "pass", "in", "either", "a", "CIDR", "block", "to", "authorize", "or", "and", "EC2", "SecurityGroup", "." ]
def authorize(self, cidr_ip=None, ec2_group=None): """ Add a new rule to this DBSecurity group. You need to pass in either a CIDR block to authorize or an EC2 SecurityGroup. @type cidr_ip: string @param cidr_ip: A valid CIDR IP range to authorize @type ec2_group: :class:`boto.ec2.securitygroup.SecurityGroup` @rtype: bool @return: True if successful. """ if isinstance(ec2_group, SecurityGroup): group_name = ec2_group.name group_owner_id = ec2_group.owner_id else: group_name = None group_owner_id = None return self.connection.authorize_dbsecurity_group(self.name, cidr_ip, group_name, group_owner_id)
[ "def", "authorize", "(", "self", ",", "cidr_ip", "=", "None", ",", "ec2_group", "=", "None", ")", ":", "if", "isinstance", "(", "ec2_group", ",", "SecurityGroup", ")", ":", "group_name", "=", "ec2_group", ".", "name", "group_owner_id", "=", "ec2_group", "....
https://github.com/radlab/sparrow/blob/afb8efadeb88524f1394d1abe4ea66c6fd2ac744/deploy/third_party/boto-2.1.1/boto/rds/dbsecuritygroup.py#L68-L91
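The `DBSecurityGroup.authorize` record dispatches on whether an EC2 SecurityGroup object or a CIDR string was supplied, forwarding either the CIDR block or the group's name and owner id. A self-contained sketch of that branching, with a hypothetical dataclass standing in for `boto.ec2.securitygroup.SecurityGroup` (no boto dependency):

```python
from dataclasses import dataclass
from typing import Optional, Tuple


@dataclass
class SecurityGroup:
    # Hypothetical stand-in for boto.ec2.securitygroup.SecurityGroup.
    name: str
    owner_id: str


def resolve_authorization(
    cidr_ip: Optional[str] = None,
    ec2_group: Optional[SecurityGroup] = None,
) -> Tuple[Optional[str], Optional[str], Optional[str]]:
    """Mirror authorize()'s branching: a SecurityGroup contributes its
    name and owner id; otherwise only the CIDR block is forwarded."""
    if isinstance(ec2_group, SecurityGroup):
        return cidr_ip, ec2_group.name, ec2_group.owner_id
    return cidr_ip, None, None
```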
makerbot/ReplicatorG
d6f2b07785a5a5f1e172fb87cb4303b17c575d5d
skein_engines/skeinforge-50/skeinforge_application/skeinforge.py
python
getPluginFileNames
()
return archive.getPluginFileNamesFromDirectoryPath(archive.getSkeinforgePluginsPath())
Get skeinforge plugin fileNames.
Get skeinforge plugin fileNames.
[ "Get", "skeinforge", "plugin", "fileNames", "." ]
def getPluginFileNames(): 'Get skeinforge plugin fileNames.' return archive.getPluginFileNamesFromDirectoryPath(archive.getSkeinforgePluginsPath())
[ "def", "getPluginFileNames", "(", ")", ":", "return", "archive", ".", "getPluginFileNamesFromDirectoryPath", "(", "archive", ".", "getSkeinforgePluginsPath", "(", ")", ")" ]
https://github.com/makerbot/ReplicatorG/blob/d6f2b07785a5a5f1e172fb87cb4303b17c575d5d/skein_engines/skeinforge-50/skeinforge_application/skeinforge.py#L548-L550
oracle/oci-python-sdk
3c1604e4e212008fb6718e2f68cdb5ef71fd5793
src/oci/waas/waas_client.py
python
WaasClient.update_waas_policy_custom_protection_rules
(self, waas_policy_id, update_custom_protection_rules_details, **kwargs)
Updates the action for each specified custom protection rule. Only the `DETECT` and `BLOCK` actions can be set. Disabled rules should not be included in the list. For more information on protection rules, see `WAF Protection Rules`__. __ https://docs.cloud.oracle.com/iaas/Content/WAF/Tasks/wafprotectionrules.htm :param str waas_policy_id: (required) The `OCID`__ of the WAAS policy. __ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm :param oci.waas.models.list[CustomProtectionRuleSetting] update_custom_protection_rules_details: (required) :param str opc_request_id: (optional) The unique Oracle-assigned identifier for the request. If you need to contact Oracle about a particular request, please provide the request ID. :param str opc_retry_token: (optional) A token that uniquely identifies a request so it can be retried in case of a timeout or server error without risk of executing that same action again. Retry tokens expire after 24 hours, but can be invalidated before then due to conflicting operations *Example:* If a resource has been deleted and purged from the system, then a retry of the original delete request may be rejected. :param str if_match: (optional) For optimistic concurrency control. In the `PUT` or `DELETE` call for a resource, set the `if-match` parameter to the value of the etag from a previous `GET` or `POST` response for that resource. The resource will be updated or deleted only if the etag provided matches the resource's current etag value. :param obj retry_strategy: (optional) A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level. This should be one of the strategies available in the :py:mod:`~oci.retry` module. This operation will not retry by default, users can also use the convenient :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY` provided by the SDK to enable retries for it. 
The specifics of the default retry strategy are described `here <https://docs.oracle.com/en-us/iaas/tools/python/latest/sdk_behaviors/retries.html>`__. To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`. :return: A :class:`~oci.response.Response` object with data of type None :rtype: :class:`~oci.response.Response` :example: Click `here <https://docs.cloud.oracle.com/en-us/iaas/tools/python-sdk-examples/latest/waas/update_waas_policy_custom_protection_rules.py.html>`__ to see an example of how to use update_waas_policy_custom_protection_rules API.
Updates the action for each specified custom protection rule. Only the `DETECT` and `BLOCK` actions can be set. Disabled rules should not be included in the list. For more information on protection rules, see `WAF Protection Rules`__.
[ "Updates", "the", "action", "for", "each", "specified", "custom", "protection", "rule", ".", "Only", "the", "DETECT", "and", "BLOCK", "actions", "can", "be", "set", ".", "Disabled", "rules", "should", "not", "be", "included", "in", "the", "list", ".", "For...
def update_waas_policy_custom_protection_rules(self, waas_policy_id, update_custom_protection_rules_details, **kwargs): """ Updates the action for each specified custom protection rule. Only the `DETECT` and `BLOCK` actions can be set. Disabled rules should not be included in the list. For more information on protection rules, see `WAF Protection Rules`__. __ https://docs.cloud.oracle.com/iaas/Content/WAF/Tasks/wafprotectionrules.htm :param str waas_policy_id: (required) The `OCID`__ of the WAAS policy. __ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm :param oci.waas.models.list[CustomProtectionRuleSetting] update_custom_protection_rules_details: (required) :param str opc_request_id: (optional) The unique Oracle-assigned identifier for the request. If you need to contact Oracle about a particular request, please provide the request ID. :param str opc_retry_token: (optional) A token that uniquely identifies a request so it can be retried in case of a timeout or server error without risk of executing that same action again. Retry tokens expire after 24 hours, but can be invalidated before then due to conflicting operations *Example:* If a resource has been deleted and purged from the system, then a retry of the original delete request may be rejected. :param str if_match: (optional) For optimistic concurrency control. In the `PUT` or `DELETE` call for a resource, set the `if-match` parameter to the value of the etag from a previous `GET` or `POST` response for that resource. The resource will be updated or deleted only if the etag provided matches the resource's current etag value. :param obj retry_strategy: (optional) A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level. This should be one of the strategies available in the :py:mod:`~oci.retry` module. 
This operation will not retry by default, users can also use the convenient :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY` provided by the SDK to enable retries for it. The specifics of the default retry strategy are described `here <https://docs.oracle.com/en-us/iaas/tools/python/latest/sdk_behaviors/retries.html>`__. To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`. :return: A :class:`~oci.response.Response` object with data of type None :rtype: :class:`~oci.response.Response` :example: Click `here <https://docs.cloud.oracle.com/en-us/iaas/tools/python-sdk-examples/latest/waas/update_waas_policy_custom_protection_rules.py.html>`__ to see an example of how to use update_waas_policy_custom_protection_rules API. """ resource_path = "/waasPolicies/{waasPolicyId}/wafConfig/customProtectionRules" method = "PUT" # Don't accept unknown kwargs expected_kwargs = [ "retry_strategy", "opc_request_id", "opc_retry_token", "if_match" ] extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs] if extra_kwargs: raise ValueError( "update_waas_policy_custom_protection_rules got unknown kwargs: {!r}".format(extra_kwargs)) path_params = { "waasPolicyId": waas_policy_id } path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing} for (k, v) in six.iteritems(path_params): if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0): raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k)) header_params = { "accept": "application/json", "content-type": "application/json", "opc-request-id": kwargs.get("opc_request_id", missing), "opc-retry-token": kwargs.get("opc_retry_token", missing), "if-match": kwargs.get("if_match", missing) } header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None} retry_strategy = self.base_client.get_preferred_retry_strategy( 
operation_retry_strategy=kwargs.get('retry_strategy'), client_retry_strategy=self.retry_strategy ) if retry_strategy: if not isinstance(retry_strategy, retry.NoneRetryStrategy): self.base_client.add_opc_retry_token_if_needed(header_params) self.base_client.add_opc_client_retries_header(header_params) retry_strategy.add_circuit_breaker_callback(self.circuit_breaker_callback) return retry_strategy.make_retrying_call( self.base_client.call_api, resource_path=resource_path, method=method, path_params=path_params, header_params=header_params, body=update_custom_protection_rules_details) else: return self.base_client.call_api( resource_path=resource_path, method=method, path_params=path_params, header_params=header_params, body=update_custom_protection_rules_details)
[ "def", "update_waas_policy_custom_protection_rules", "(", "self", ",", "waas_policy_id", ",", "update_custom_protection_rules_details", ",", "*", "*", "kwargs", ")", ":", "resource_path", "=", "\"/waasPolicies/{waasPolicyId}/wafConfig/customProtectionRules\"", "method", "=", "\...
https://github.com/oracle/oci-python-sdk/blob/3c1604e4e212008fb6718e2f68cdb5ef71fd5793/src/oci/waas/waas_client.py#L6305-L6400
pypa/pipenv
b21baade71a86ab3ee1429f71fbc14d4f95fb75d
pipenv/vendor/orderedmultidict/itemlist.py
python
itemlist.__eq__
(self, other)
return True
[]
def __eq__(self, other): for (n1, key1, value1), (n2, key2, value2) in zip_longest(self, other): if key1 != key2 or value1 != value2: return False return True
[ "def", "__eq__", "(", "self", ",", "other", ")", ":", "for", "(", "n1", ",", "key1", ",", "value1", ")", ",", "(", "n2", ",", "key2", ",", "value2", ")", "in", "zip_longest", "(", "self", ",", "other", ")", ":", "if", "key1", "!=", "key2", "or"...
https://github.com/pypa/pipenv/blob/b21baade71a86ab3ee1429f71fbc14d4f95fb75d/pipenv/vendor/orderedmultidict/itemlist.py#L145-L149
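A quirk worth noting in the `itemlist.__eq__` record: `zip_longest` pads the shorter sequence with `None` by default, and unpacking `None` into `(n, key, value)` raises `TypeError`, so comparing lists of different lengths appears to blow up rather than return False. A hedged sketch that keeps the same key/value comparison but supplies an explicit sentinel `fillvalue`, so a length mismatch simply compares unequal:

```python
from itertools import zip_longest

# Sentinel triple that unpacks cleanly and matches no real (node, key, value).
_MISSING = (object(), object(), object())


def triples_equal(a, b):
    """Compare two sequences of (node, key, value) triples by key and
    value only, treating different lengths as unequal instead of
    raising on unpacking the default None fill value."""
    for (n1, k1, v1), (n2, k2, v2) in zip_longest(a, b, fillvalue=_MISSING):
        if k1 != k2 or v1 != v2:
            return False
    return True
```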
csparpa/pyowm
0474b61cc67fa3c95f9e572b96d3248031828fce
pyowm/weatherapi25/forecaster.py
python
Forecaster.will_have_tornado
(self)
return weather.any_status_is(self.forecast.weathers, "tornado", self._wc_registry)
Tells if into the forecast coverage exist one or more *Weather* items related to tornadoes :returns: boolean
Tells if into the forecast coverage exist one or more *Weather* items related to tornadoes
[ "Tells", "if", "into", "the", "forecast", "coverage", "exist", "one", "or", "more", "*", "Weather", "*", "items", "related", "to", "tornadoes" ]
def will_have_tornado(self): """ Tells if into the forecast coverage exist one or more *Weather* items related to tornadoes :returns: boolean """ return weather.any_status_is(self.forecast.weathers, "tornado", self._wc_registry)
[ "def", "will_have_tornado", "(", "self", ")", ":", "return", "weather", ".", "any_status_is", "(", "self", ".", "forecast", ".", "weathers", ",", "\"tornado\"", ",", "self", ".", "_wc_registry", ")" ]
https://github.com/csparpa/pyowm/blob/0474b61cc67fa3c95f9e572b96d3248031828fce/pyowm/weatherapi25/forecaster.py#L120-L128
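`will_have_tornado` delegates to a helper that scans the forecast's weather items for a status. A rough, dependency-free sketch of that pattern (the real pyowm helper also consults a weather-code registry, which is omitted here; the dict shape is an assumption):

```python
def any_status_is(weathers, status):
    """Report whether any forecast entry carries the given status.
    Each entry is assumed to be a dict with a 'status' field."""
    return any(w.get("status", "").lower() == status for w in weathers)
```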
los-cocos/cocos
3b47281f95d6ee52bb2a357a767f213e670bd601
cocos/collision_model.py
python
CollisionManager.iter_all_collisions
(self)
Iterator that exposes all collisions between known objects. At each step it will yield a pair (obj, other). If (obj1, obj2) is seen when consuming the iterator, then (obj2, obj1) will not be seen. In other words, 'obj1 collides with obj2' means (obj1, obj2) or (obj2, obj1) will appear in the iterator output but not both.
Iterator that exposes all collisions between known objects. At each step it will yield a pair (obj, other). If (obj1, obj2) is seen when consuming the iterator, then (obj2, obj1) will not be seen. In other words, 'obj1 collides with obj2' means (obj1, obj2) or (obj2, obj1) will appear in the iterator output but not both.
[ "Iterator", "that", "exposes", "all", "collisions", "between", "known", "objects", ".", "At", "each", "step", "it", "will", "yield", "a", "pair", "(", "obj", "other", ")", ".", "If", "(", "obj1", "obj2", ")", "is", "seen", "when", "consuming", "the", "...
def iter_all_collisions(self): """ Iterator that exposes all collisions between known objects. At each step it will yield a pair (obj, other). If (obj1, obj2) is seen when consuming the iterator, then (obj2, obj1) will not be seen. In other words, 'obj1 collides with obj2' means (obj1, obj2) or (obj2, obj1) will appear in the iterator output but not both. """
[ "def", "iter_all_collisions", "(", "self", ")", ":" ]
https://github.com/los-cocos/cocos/blob/3b47281f95d6ee52bb2a357a767f213e670bd601/cocos/collision_model.py#L309-L317
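The `iter_all_collisions` docstring promises each colliding pair exactly once, never both orderings; `itertools.combinations` yields exactly the unordered pairs, so the contract can be sketched as:

```python
from itertools import combinations


def iter_all_collisions(objs, collide):
    """Yield each colliding pair (obj, other) exactly once; (b, a) is
    never emitted after (a, b). `collide` is a hypothetical predicate
    standing in for the manager's proximity test."""
    for a, b in combinations(objs, 2):
        if collide(a, b):
            yield a, b
```

With a toy predicate, each qualifying pair appears in one orientation only.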
golismero/golismero
7d605b937e241f51c1ca4f47b20f755eeefb9d76
thirdparty_libs/django/db/models/query.py
python
QuerySet.dates
(self, field_name, kind, order='ASC')
return self._clone(klass=DateQuerySet, setup=True, _field_name=field_name, _kind=kind, _order=order)
Returns a list of datetime objects representing all available dates for the given field_name, scoped to 'kind'.
Returns a list of datetime objects representing all available dates for the given field_name, scoped to 'kind'.
[ "Returns", "a", "list", "of", "datetime", "objects", "representing", "all", "available", "dates", "for", "the", "given", "field_name", "scoped", "to", "kind", "." ]
def dates(self, field_name, kind, order='ASC'): """ Returns a list of datetime objects representing all available dates for the given field_name, scoped to 'kind'. """ assert kind in ("month", "year", "day"), \ "'kind' must be one of 'year', 'month' or 'day'." assert order in ('ASC', 'DESC'), \ "'order' must be either 'ASC' or 'DESC'." return self._clone(klass=DateQuerySet, setup=True, _field_name=field_name, _kind=kind, _order=order)
[ "def", "dates", "(", "self", ",", "field_name", ",", "kind", ",", "order", "=", "'ASC'", ")", ":", "assert", "kind", "in", "(", "\"month\"", ",", "\"year\"", ",", "\"day\"", ")", ",", "\"'kind' must be one of 'year', 'month' or 'day'.\"", "assert", "order", "i...
https://github.com/golismero/golismero/blob/7d605b937e241f51c1ca4f47b20f755eeefb9d76/thirdparty_libs/django/db/models/query.py#L635-L645
dimagi/commcare-hq
d67ff1d3b4c51fa050c19e60c3253a79d3452a39
corehq/apps/data_analytics/malt_generator.py
python
MALTTableGenerator._update_or_create
(cls, malt_dict)
[]
def _update_or_create(cls, malt_dict): try: # try update unique_field_dict = {k: v for (k, v) in malt_dict.items() if k in MALTRow.get_unique_fields()} prev_obj = MALTRow.objects.get(**unique_field_dict) for k, v in malt_dict.items(): setattr(prev_obj, k, v) prev_obj.save() except MALTRow.DoesNotExist: # create try: MALTRow(**malt_dict).save() except Exception as ex: logger.error("Failed to insert malt-row {}. Exception is {}".format( str(malt_dict), str(ex) ), exc_info=True) except Exception as ex: logger.error("Failed to insert malt-row {}. Exception is {}".format( str(malt_dict), str(ex) ), exc_info=True)
[ "def", "_update_or_create", "(", "cls", ",", "malt_dict", ")", ":", "try", ":", "# try update", "unique_field_dict", "=", "{", "k", ":", "v", "for", "(", "k", ",", "v", ")", "in", "malt_dict", ".", "items", "(", ")", "if", "k", "in", "MALTRow", ".", ...
https://github.com/dimagi/commcare-hq/blob/d67ff1d3b4c51fa050c19e60c3253a79d3452a39/corehq/apps/data_analytics/malt_generator.py#L113-L136
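`_update_or_create` looks up an existing row by its unique fields, updates it in place, and falls back to an insert if the lookup misses. A storage-agnostic sketch of that logic with a plain dict standing in for the ORM table (the error-logging branches are omitted):

```python
def update_or_create(store, row, unique_fields):
    """Upsert `row` into `store`, a dict keyed by the tuple of the
    row's unique-field values: update the existing row in place,
    otherwise insert a copy."""
    key = tuple(row[f] for f in unique_fields)
    if key in store:
        store[key].update(row)
    else:
        store[key] = dict(row)
    return store[key]
```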
pallets/flask
660994efc761efdfd49ca442b73f6712dc77b6cf
src/flask/scaffold.py
python
Scaffold.get_send_file_max_age
(self, filename: t.Optional[str])
return int(value.total_seconds())
Used by :func:`send_file` to determine the ``max_age`` cache value for a given file path if it wasn't passed. By default, this returns :data:`SEND_FILE_MAX_AGE_DEFAULT` from the configuration of :data:`~flask.current_app`. This defaults to ``None``, which tells the browser to use conditional requests instead of a timed cache, which is usually preferable. .. versionchanged:: 2.0 The default configuration is ``None`` instead of 12 hours. .. versionadded:: 0.9
Used by :func:`send_file` to determine the ``max_age`` cache value for a given file path if it wasn't passed.
[ "Used", "by", ":", "func", ":", "send_file", "to", "determine", "the", "max_age", "cache", "value", "for", "a", "given", "file", "path", "if", "it", "wasn", "t", "passed", "." ]
def get_send_file_max_age(self, filename: t.Optional[str]) -> t.Optional[int]: """Used by :func:`send_file` to determine the ``max_age`` cache value for a given file path if it wasn't passed. By default, this returns :data:`SEND_FILE_MAX_AGE_DEFAULT` from the configuration of :data:`~flask.current_app`. This defaults to ``None``, which tells the browser to use conditional requests instead of a timed cache, which is usually preferable. .. versionchanged:: 2.0 The default configuration is ``None`` instead of 12 hours. .. versionadded:: 0.9 """ value = current_app.send_file_max_age_default if value is None: return None return int(value.total_seconds())
[ "def", "get_send_file_max_age", "(", "self", ",", "filename", ":", "t", ".", "Optional", "[", "str", "]", ")", "->", "t", ".", "Optional", "[", "int", "]", ":", "value", "=", "current_app", ".", "send_file_max_age_default", "if", "value", "is", "None", "...
https://github.com/pallets/flask/blob/660994efc761efdfd49ca442b73f6712dc77b6cf/src/flask/scaffold.py#L297-L316
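The Flask record reduces to one conversion: a `None` config value propagates (letting the browser use conditional requests), and a `timedelta` collapses to whole seconds. A minimal sketch of that conversion without the `current_app` lookup:

```python
from datetime import timedelta
from typing import Optional


def send_file_max_age(value: Optional[timedelta]) -> Optional[int]:
    """Mirror get_send_file_max_age's conversion: None propagates,
    otherwise the timedelta becomes an integer count of seconds."""
    if value is None:
        return None
    return int(value.total_seconds())
```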
ganglia/gmond_python_modules
2f7fcab3d27926ef4a2feb1b53c09af16a43e729
network/netiron/netiron.py
python
buildDict
(oidDict,t,netiron)
return builtdict
[]
def buildDict(oidDict,t,netiron): # passed a list of tuples, build's a dict based on the alias name builtdict = {} for line in t: # if t[t.index(line)][2][1] != '': string = str(t[t.index(line)][2][1]) match = re.search(r'ethernet', string) if match and t[t.index(line)][0][1] != '': alias = str(t[t.index(line)][0][1]) index = str(t[t.index(line)][1][1]) name = str(t[t.index(line)][2][1]) hcinoct = str(t[t.index(line)][3][1]) builtdict[netiron+'_'+alias+'_bitsin'] = int(hcinoct) * 8 hcoutoct = str(t[t.index(line)][4][1]) builtdict[netiron+'_'+alias+'_bitsout'] = int(hcoutoct) * 8 hcinpkt = str(t[t.index(line)][5][1]) builtdict[netiron+'_'+alias+'_pktsin'] = int(hcinpkt) hcoutpkt = str(t[t.index(line)][6][1]) builtdict[netiron+'_'+alias+'_pktsout'] = int(hcoutpkt) return builtdict
[ "def", "buildDict", "(", "oidDict", ",", "t", ",", "netiron", ")", ":", "# passed a list of tuples, build's a dict based on the alias name", "builtdict", "=", "{", "}", "for", "line", "in", "t", ":", "# if t[t.index(line)][2][1] != '':", "string", "=", "str", "...
https://github.com/ganglia/gmond_python_modules/blob/2f7fcab3d27926ef4a2feb1b53c09af16a43e729/network/netiron/netiron.py#L108-L128
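The core of the `buildDict` record is the per-interface metric dict it assembles: SNMP octet counters are multiplied by 8 to become bit counters, while packet counters pass through. A sketch of just that assembly, detached from the SNMP tuple walking (names are illustrative):

```python
def counters_to_metrics(device, alias, in_octets, out_octets, in_pkts, out_pkts):
    """Build the per-interface metric dict the record produces:
    octet counters become bit counters (x8), packet counters pass
    through as integers."""
    return {
        f"{device}_{alias}_bitsin": int(in_octets) * 8,
        f"{device}_{alias}_bitsout": int(out_octets) * 8,
        f"{device}_{alias}_pktsin": int(in_pkts),
        f"{device}_{alias}_pktsout": int(out_pkts),
    }
```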
openstack/swift
b8d7c3dcb817504dcc0959ba52cc4ed2cf66c100
swift/common/middleware/recon.py
python
ReconMiddleware.get_relinker_info
(self)
return self._from_recon_cache(stat_keys, self.relink_recon_cache, ignore_missing=True)
get relinker info, if any
get relinker info, if any
[ "get", "relinker", "info", "if", "any" ]
def get_relinker_info(self): """get relinker info, if any""" stat_keys = ['devices', 'workers'] return self._from_recon_cache(stat_keys, self.relink_recon_cache, ignore_missing=True)
[ "def", "get_relinker_info", "(", "self", ")", ":", "stat_keys", "=", "[", "'devices'", ",", "'workers'", "]", "return", "self", ".", "_from_recon_cache", "(", "stat_keys", ",", "self", ".", "relink_recon_cache", ",", "ignore_missing", "=", "True", ")" ]
https://github.com/openstack/swift/blob/b8d7c3dcb817504dcc0959ba52cc4ed2cf66c100/swift/common/middleware/recon.py#L355-L361
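`get_relinker_info` leans on a `_from_recon_cache` helper that picks requested keys out of a recon cache, tolerating absent keys when `ignore_missing` is set. A hypothetical sketch of that read path over a plain dict:

```python
def from_recon_cache(stat_keys, cache, ignore_missing=False):
    """Pick the requested keys out of a recon-cache dict. With
    ignore_missing, absent keys come back as None instead of raising."""
    if ignore_missing:
        return {k: cache.get(k) for k in stat_keys}
    return {k: cache[k] for k in stat_keys}
```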
apache/tvm
6eb4ed813ebcdcd9558f0906a1870db8302ff1e0
python/tvm/driver/tvmc/frontends.py
python
get_frontend_names
()
return [frontend.name() for frontend in ALL_FRONTENDS]
Return the names of all supported frontends Returns ------- list : list of str A list of frontend names as strings
Return the names of all supported frontends
[ "Return", "the", "names", "of", "all", "supported", "frontends" ]
def get_frontend_names():
    """Return the names of all supported frontends

    Returns
    -------
    list : list of str
        A list of frontend names as strings
    """
    return [frontend.name() for frontend in ALL_FRONTENDS]
[ "def", "get_frontend_names", "(", ")", ":", "return", "[", "frontend", ".", "name", "(", ")", "for", "frontend", "in", "ALL_FRONTENDS", "]" ]
https://github.com/apache/tvm/blob/6eb4ed813ebcdcd9558f0906a1870db8302ff1e0/python/tvm/driver/tvmc/frontends.py#L297-L306
larryhastings/gilectomy
4315ec3f1d6d4f813cc82ce27a24e7f784dbfc1a
Lib/plistlib.py
python
Plist.write
(self, pathOrFile)
Deprecated. Use the dump() function instead.
Deprecated. Use the dump() function instead.
[ "Deprecated", ".", "Use", "the", "dump", "()", "function", "instead", "." ]
def write(self, pathOrFile):
    """Deprecated. Use the dump() function instead."""
    with _maybe_open(pathOrFile, 'wb') as fp:
        dump(self, fp)
[ "def", "write", "(", "self", ",", "pathOrFile", ")", ":", "with", "_maybe_open", "(", "pathOrFile", ",", "'wb'", ")", "as", "fp", ":", "dump", "(", "self", ",", "fp", ")" ]
https://github.com/larryhastings/gilectomy/blob/4315ec3f1d6d4f813cc82ce27a24e7f784dbfc1a/Lib/plistlib.py#L146-L149
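Since `Plist.write` is deprecated in favor of the module-level functions it wraps, a minimal sketch of the modern stdlib API looks like this (the sample dict is illustrative):

```python
import plistlib

# The deprecated Plist.write delegates to the module-level dump(); the
# current API serializes a plain dict directly with dump()/dumps().
data = {"name": "example", "count": 3, "enabled": True}

blob = plistlib.dumps(data)      # serialize to XML plist bytes
restored = plistlib.loads(blob)  # parse it back

print(restored == data)  # True
```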
TarrySingh/Artificial-Intelligence-Deep-Learning-Machine-Learning-Tutorials
5bb97d7e3ffd913abddb4cfa7d78a1b4c868890e
tensorflow_dl_models/research/slim/train_image_classifier.py
python
_get_init_fn
()
return slim.assign_from_checkpoint_fn( checkpoint_path, variables_to_restore, ignore_missing_vars=FLAGS.ignore_missing_vars)
Returns a function run by the chief worker to warm-start the training. Note that the init_fn is only run when initializing the model during the very first global step. Returns: An init function run by the supervisor.
Returns a function run by the chief worker to warm-start the training.
[ "Returns", "a", "function", "run", "by", "the", "chief", "worker", "to", "warm", "-", "start", "the", "training", "." ]
def _get_init_fn():
    """Returns a function run by the chief worker to warm-start the training.

    Note that the init_fn is only run when initializing the model during the very
    first global step.

    Returns:
      An init function run by the supervisor.
    """
    if FLAGS.checkpoint_path is None:
        return None

    # Warn the user if a checkpoint exists in the train_dir. Then we'll be
    # ignoring the checkpoint anyway.
    if tf.train.latest_checkpoint(FLAGS.train_dir):
        tf.logging.info(
            'Ignoring --checkpoint_path because a checkpoint already exists in %s'
            % FLAGS.train_dir)
        return None

    exclusions = []
    if FLAGS.checkpoint_exclude_scopes:
        exclusions = [scope.strip()
                      for scope in FLAGS.checkpoint_exclude_scopes.split(',')]

    # TODO(sguada) variables.filter_variables()
    variables_to_restore = []
    for var in slim.get_model_variables():
        excluded = False
        for exclusion in exclusions:
            if var.op.name.startswith(exclusion):
                excluded = True
                break
        if not excluded:
            variables_to_restore.append(var)

    if tf.gfile.IsDirectory(FLAGS.checkpoint_path):
        checkpoint_path = tf.train.latest_checkpoint(FLAGS.checkpoint_path)
    else:
        checkpoint_path = FLAGS.checkpoint_path

    tf.logging.info('Fine-tuning from %s' % checkpoint_path)

    return slim.assign_from_checkpoint_fn(
        checkpoint_path,
        variables_to_restore,
        ignore_missing_vars=FLAGS.ignore_missing_vars)
[ "def", "_get_init_fn", "(", ")", ":", "if", "FLAGS", ".", "checkpoint_path", "is", "None", ":", "return", "None", "# Warn the user if a checkpoint exists in the train_dir. Then we'll be", "# ignoring the checkpoint anyway.", "if", "tf", ".", "train", ".", "latest_checkpoint...
https://github.com/TarrySingh/Artificial-Intelligence-Deep-Learning-Machine-Learning-Tutorials/blob/5bb97d7e3ffd913abddb4cfa7d78a1b4c868890e/tensorflow_dl_models/research/slim/train_image_classifier.py#L315-L361
and3rson/clay
c271cecf6b6ea6465abcdd2444171b1a565a60a3
clay/app.py
python
AppWidget.show_settings
(self)
Show settings page.
Show settings page.
[ "Show", "settings", "page", "." ]
def show_settings(self):
    """
    Show settings page.
    """
    self.set_page('settings')
[ "def", "show_settings", "(", "self", ")", ":", "self", ".", "set_page", "(", "'settings'", ")" ]
https://github.com/and3rson/clay/blob/c271cecf6b6ea6465abcdd2444171b1a565a60a3/clay/app.py#L259-L261
JiYou/openstack
8607dd488bde0905044b303eb6e52bdea6806923
chap19/monitor/monitor/monitor/policy.py
python
enforce
(context, action, target)
Verifies that the action is valid on the target in this context. :param context: monitor context :param action: string representing the action to be checked this should be colon separated for clarity. i.e. ``compute:create_instance``, ``compute:attach_servicemanage``, ``servicemanage:attach_servicemanage`` :param object: dictionary representing the object of the action for object creation this should be a dictionary representing the location of the object e.g. ``{'project_id': context.project_id}`` :raises monitor.exception.PolicyNotAuthorized: if verification fails.
Verifies that the action is valid on the target in this context.
[ "Verifies", "that", "the", "action", "is", "valid", "on", "the", "target", "in", "this", "context", "." ]
def enforce(context, action, target):
    """Verifies that the action is valid on the target in this context.

    :param context: monitor context
    :param action: string representing the action to be checked
        this should be colon separated for clarity.
        i.e. ``compute:create_instance``,
        ``compute:attach_servicemanage``,
        ``servicemanage:attach_servicemanage``
    :param object: dictionary representing the object of the action
        for object creation this should be a dictionary representing the
        location of the object e.g. ``{'project_id': context.project_id}``
    :raises monitor.exception.PolicyNotAuthorized: if verification fails.
    """
    init()

    match_list = ('rule:%s' % action,)
    credentials = context.to_dict()

    policy.enforce(match_list, target, credentials,
                   exception.PolicyNotAuthorized, action=action)
[ "def", "enforce", "(", "context", ",", "action", ",", "target", ")", ":", "init", "(", ")", "match_list", "=", "(", "'rule:%s'", "%", "action", ",", ")", "credentials", "=", "context", ".", "to_dict", "(", ")", "policy", ".", "enforce", "(", "match_lis...
https://github.com/JiYou/openstack/blob/8607dd488bde0905044b303eb6e52bdea6806923/chap19/monitor/monitor/monitor/policy.py#L64-L87
google-research/motion_imitation
d0e7b963c5a301984352d25a3ee0820266fa4218
motion_imitation/envs/env_wrappers/imitation_task.py
python
ImitationTask._reset_motion_time_offset
(self)
return
[]
def _reset_motion_time_offset(self):
    if not self._enable_rand_init_time:
        self._motion_time_offset = 0.0
    elif self._curr_episode_warmup:
        self._motion_time_offset = self._rand_uniform(0, self._warmup_time)
    else:
        self._motion_time_offset = self._sample_time_offset()
    return
[ "def", "_reset_motion_time_offset", "(", "self", ")", ":", "if", "not", "self", ".", "_enable_rand_init_time", ":", "self", ".", "_motion_time_offset", "=", "0.0", "elif", "self", ".", "_curr_episode_warmup", ":", "self", ".", "_motion_time_offset", "=", "self", ...
https://github.com/google-research/motion_imitation/blob/d0e7b963c5a301984352d25a3ee0820266fa4218/motion_imitation/envs/env_wrappers/imitation_task.py#L1084-L1091
sabri-zaki/EasY_HaCk
2a39ac384dd0d6fc51c0dd22e8d38cece683fdb9
.modules/.sqlmap/thirdparty/bottle/bottle.py
python
load_app
(target)
Load a bottle application from a module and make sure that the import does not affect the current default application, but returns a separate application object. See :func:`load` for the target parameter.
Load a bottle application from a module and make sure that the import does not affect the current default application, but returns a separate application object. See :func:`load` for the target parameter.
[ "Load", "a", "bottle", "application", "from", "a", "module", "and", "make", "sure", "that", "the", "import", "does", "not", "affect", "the", "current", "default", "application", "but", "returns", "a", "separate", "application", "object", ".", "See", ":", "fu...
def load_app(target):
    """ Load a bottle application from a module and make sure that the import
        does not affect the current default application, but returns a separate
        application object. See :func:`load` for the target parameter. """
    global NORUN
    NORUN, nr_old = True, NORUN
    tmp = default_app.push()  # Create a new "default application"
    try:
        rv = load(target)  # Import the target module
        return rv if callable(rv) else tmp
    finally:
        default_app.remove(tmp)  # Remove the temporary added default application
        NORUN = nr_old
[ "def", "load_app", "(", "target", ")", ":", "global", "NORUN", "NORUN", ",", "nr_old", "=", "True", ",", "NORUN", "tmp", "=", "default_app", ".", "push", "(", ")", "# Create a new \"default application\"", "try", ":", "rv", "=", "load", "(", "target", ")",...
https://github.com/sabri-zaki/EasY_HaCk/blob/2a39ac384dd0d6fc51c0dd22e8d38cece683fdb9/.modules/.sqlmap/thirdparty/bottle/bottle.py#L3221-L3233
oracle/graalpython
577e02da9755d916056184ec441c26e00b70145c
graalpython/lib-python/3/idlelib/config.py
python
IdleConf.GetExtensionBindings
(self, extensionName)
return extBinds
Return dict {extensionName event : active or defined keybinding}. Augment self.GetExtensionKeys(extensionName) with mapping of non- configurable events (from default config) to GetOption splits, as in self.__GetRawExtensionKeys.
Return dict {extensionName event : active or defined keybinding}.
[ "Return", "dict", "{", "extensionName", "event", ":", "active", "or", "defined", "keybinding", "}", "." ]
def GetExtensionBindings(self, extensionName):
    """Return dict {extensionName event : active or defined keybinding}.

    Augment self.GetExtensionKeys(extensionName) with mapping of non-
    configurable events (from default config) to GetOption splits, as in
    self.__GetRawExtensionKeys.
    """
    bindsName = extensionName + '_bindings'
    extBinds = self.GetExtensionKeys(extensionName)
    # add the non-configurable bindings
    if self.defaultCfg['extensions'].has_section(bindsName):
        eventNames = self.defaultCfg['extensions'].GetOptionList(bindsName)
        for eventName in eventNames:
            binding = self.GetOption(
                'extensions', bindsName, eventName, default='').split()
            event = '<<' + eventName + '>>'
            extBinds[event] = binding
    return extBinds
[ "def", "GetExtensionBindings", "(", "self", ",", "extensionName", ")", ":", "bindsName", "=", "extensionName", "+", "'_bindings'", "extBinds", "=", "self", ".", "GetExtensionKeys", "(", "extensionName", ")", "#add the non-configurable bindings", "if", "self", ".", "...
https://github.com/oracle/graalpython/blob/577e02da9755d916056184ec441c26e00b70145c/graalpython/lib-python/3/idlelib/config.py#L507-L525
nameko/nameko
17ecee2bcfa90cb0f3a2f3328c5004f48e4e02a3
nameko/timer.py
python
Timer.handle_timer_tick
(self)
[]
def handle_timer_tick(self):
    args = ()
    kwargs = {}

    # Note that we don't catch ContainerBeingKilled here. If that's raised,
    # there is nothing for us to do anyway. The exception bubbles, and is
    # caught by :meth:`Container._handle_thread_exited`, though the
    # triggered `kill` is a no-op, since the container is already
    # `_being_killed`.
    self.container.spawn_worker(
        self, args, kwargs, handle_result=self.handle_result)
[ "def", "handle_timer_tick", "(", "self", ")", ":", "args", "=", "(", ")", "kwargs", "=", "{", "}", "# Note that we don't catch ContainerBeingKilled here. If that's raised,", "# there is nothing for us to do anyway. The exception bubbles, and is", "# caught by :meth:`Container._handle...
https://github.com/nameko/nameko/blob/17ecee2bcfa90cb0f3a2f3328c5004f48e4e02a3/nameko/timer.py#L82-L92
jina-ai/jina
c77a492fcd5adba0fc3de5347bea83dd4e7d8087
docarray/array/mixins/io/from_gen.py
python
FromGeneratorMixin.from_ndjson
(cls: Type['T'], *args, **kwargs)
return cls._from_generator('from_ndjson', *args, **kwargs)
# noqa: DAR101 # noqa: DAR102 # noqa: DAR201
# noqa: DAR101 # noqa: DAR102 # noqa: DAR201
[ "#", "noqa", ":", "DAR101", "#", "noqa", ":", "DAR102", "#", "noqa", ":", "DAR201" ]
def from_ndjson(cls: Type['T'], *args, **kwargs) -> 'T':
    """
    # noqa: DAR101
    # noqa: DAR102
    # noqa: DAR201
    """
    return cls._from_generator('from_ndjson', *args, **kwargs)
[ "def", "from_ndjson", "(", "cls", ":", "Type", "[", "'T'", "]", ",", "*", "args", ",", "*", "*", "kwargs", ")", "->", "'T'", ":", "return", "cls", ".", "_from_generator", "(", "'from_ndjson'", ",", "*", "args", ",", "*", "*", "kwargs", ")" ]
https://github.com/jina-ai/jina/blob/c77a492fcd5adba0fc3de5347bea83dd4e7d8087/docarray/array/mixins/io/from_gen.py#L187-L193
google/coursebuilder-core
08f809db3226d9269e30d5edd0edd33bd22041f4
coursebuilder/tools/verify.py
python
SchemaHelper.does_value_match_type
(self, value, atype, context)
Same as other method, but does not throw an exception.
Same as other method, but does not throw an exception.
[ "Same", "as", "other", "method", "but", "does", "not", "throw", "an", "exception", "." ]
def does_value_match_type(self, value, atype, context):
    """Same as other method, but does not throw an exception."""
    try:
        return self.check_value_matches_type(value, atype, context)
    except SchemaException:
        return False
[ "def", "does_value_match_type", "(", "self", ",", "value", ",", "atype", ",", "context", ")", ":", "try", ":", "return", "self", ".", "check_value_matches_type", "(", "value", ",", "atype", ",", "context", ")", "except", "SchemaException", ":", "return", "Fa...
https://github.com/google/coursebuilder-core/blob/08f809db3226d9269e30d5edd0edd33bd22041f4/coursebuilder/tools/verify.py#L442-L448
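The `does_value_match_type` record illustrates a common pattern: wrapping an exception-raising validator in a boolean-returning one. A standalone sketch of the same idea, with stand-in names in place of the real `SchemaHelper` internals:

```python
class SchemaException(Exception):
    """Raised when a value fails a schema check (stand-in for the real class)."""

def check_value_matches_type(value, atype):
    # Minimal stand-in validator: raises instead of returning False.
    if not isinstance(value, atype):
        raise SchemaException(f"{value!r} is not a {atype.__name__}")
    return True

def does_value_match_type(value, atype):
    # Same check, but exception-free: the pattern used by SchemaHelper.
    try:
        return check_value_matches_type(value, atype)
    except SchemaException:
        return False

print(does_value_match_type(3, int))    # True
print(does_value_match_type("3", int))  # False
```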
triaquae/triaquae
bbabf736b3ba56a0c6498e7f04e16c13b8b8f2b9
TriAquae/models/django/contrib/formtools/preview.py
python
FormPreview.post_post
(self, request)
Validates the POST data. If valid, calls done(). Else, redisplays form.
Validates the POST data. If valid, calls done(). Else, redisplays form.
[ "Validates", "the", "POST", "data", ".", "If", "valid", "calls", "done", "()", ".", "Else", "redisplays", "form", "." ]
def post_post(self, request):
    "Validates the POST data. If valid, calls done(). Else, redisplays form."
    f = self.form(request.POST, auto_id=self.get_auto_id())
    if f.is_valid():
        if not self._check_security_hash(
                request.POST.get(self.unused_name('hash'), ''), request, f):
            return self.failed_hash(request)  # Security hash failed.
        return self.done(request, f.cleaned_data)
    else:
        return render_to_response(self.form_template,
                                  self.get_context(request, f),
                                  context_instance=RequestContext(request))
[ "def", "post_post", "(", "self", ",", "request", ")", ":", "f", "=", "self", ".", "form", "(", "request", ".", "POST", ",", "auto_id", "=", "self", ".", "get_auto_id", "(", ")", ")", "if", "f", ".", "is_valid", "(", ")", ":", "if", "not", "self",...
https://github.com/triaquae/triaquae/blob/bbabf736b3ba56a0c6498e7f04e16c13b8b8f2b9/TriAquae/models/django/contrib/formtools/preview.py#L71-L82
igraph/python-igraph
e9f83e8af08f24ea025596e745917197d8b44d94
src/igraph/__init__.py
python
Graph.add_edges
(self, es, attributes=None)
return res
Adds some edges to the graph. @param es: the list of edges to be added. Every edge is represented with a tuple containing the vertex IDs or names of the two endpoints. Vertices are enumerated from zero. @param attributes: dict of sequences, all of length equal to the number of edges to be added, containing the attributes of the new edges.
Adds some edges to the graph.
[ "Adds", "some", "edges", "to", "the", "graph", "." ]
def add_edges(self, es, attributes=None):
    """Adds some edges to the graph.

    @param es: the list of edges to be added. Every edge is represented
      with a tuple containing the vertex IDs or names of the two
      endpoints. Vertices are enumerated from zero.
    @param attributes: dict of sequences, all of length equal to the
      number of edges to be added, containing the attributes of the new
      edges.
    """
    eid = self.ecount()
    res = GraphBase.add_edges(self, es)
    n = self.ecount() - eid
    if (attributes is not None) and (n > 0):
        for key, val in list(attributes.items()):
            self.es[eid:][key] = val
    return res
[ "def", "add_edges", "(", "self", ",", "es", ",", "attributes", "=", "None", ")", ":", "eid", "=", "self", ".", "ecount", "(", ")", "res", "=", "GraphBase", ".", "add_edges", "(", "self", ",", "es", ")", "n", "=", "self", ".", "ecount", "(", ")", ...
https://github.com/igraph/python-igraph/blob/e9f83e8af08f24ea025596e745917197d8b44d94/src/igraph/__init__.py#L365-L381
zhl2008/awd-platform
0416b31abea29743387b10b3914581fbe8e7da5e
web_flaskbb/Python-2.7.9/Lib/encodings/rot_13.py
python
Codec.encode
(self,input,errors='strict')
return codecs.charmap_encode(input,errors,encoding_map)
[]
def encode(self, input, errors='strict'):
    return codecs.charmap_encode(input, errors, encoding_map)
[ "def", "encode", "(", "self", ",", "input", ",", "errors", "=", "'strict'", ")", ":", "return", "codecs", ".", "charmap_encode", "(", "input", ",", "errors", ",", "encoding_map", ")" ]
https://github.com/zhl2008/awd-platform/blob/0416b31abea29743387b10b3914581fbe8e7da5e/web_flaskbb/Python-2.7.9/Lib/encodings/rot_13.py#L16-L17
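The `rot_13` codec above is registered as a text-to-text transform, so in Python 3 it is applied through `codecs.encode`/`codecs.decode` rather than `str.encode`. A quick usage sketch:

```python
import codecs

# rot_13 is a str<->str transform in Python 3, so it is applied with
# codecs.encode()/decode() on strings rather than via str.encode().
scrambled = codecs.encode("Hello, World!", "rot_13")
print(scrambled)                           # Uryyb, Jbeyq!
print(codecs.decode(scrambled, "rot_13"))  # Hello, World!
```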
IntelAI/models
1d7a53ccfad3e6f0e7378c9e3c8840895d63df8c
models/language_translation/tensorflow/transformer_mlperf/inference/int8/transformer/model/beam_search.py
python
_shape_list
(tensor)
return shape
Return a list of the tensor's shape, and ensure no None values in list.
Return a list of the tensor's shape, and ensure no None values in list.
[ "Return", "a", "list", "of", "the", "tensor", "s", "shape", "and", "ensure", "no", "None", "values", "in", "list", "." ]
def _shape_list(tensor):
    """Return a list of the tensor's shape, and ensure no None values in list."""
    # Get statically known shape (may contain None's for unknown dimensions)
    shape = tensor.get_shape().as_list()

    # Ensure that the shape values are not None
    dynamic_shape = tf.shape(input=tensor)
    for i in range(len(shape)):
        if shape[i] is None:
            shape[i] = dynamic_shape[i]
    return shape
[ "def", "_shape_list", "(", "tensor", ")", ":", "# Get statically known shape (may contain None's for unknown dimensions)", "shape", "=", "tensor", ".", "get_shape", "(", ")", ".", "as_list", "(", ")", "# Ensure that the shape values are not None", "dynamic_shape", "=", "tf"...
https://github.com/IntelAI/models/blob/1d7a53ccfad3e6f0e7378c9e3c8840895d63df8c/models/language_translation/tensorflow/transformer_mlperf/inference/int8/transformer/model/beam_search.py#L476-L486
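The `_shape_list` record mixes a static shape (which may contain `None` for unknown dimensions) with a dynamic one. The core merge step can be sketched without TensorFlow; the helper name below is illustrative, not part of any framework API:

```python
# Framework-free sketch of the _shape_list idea: take a static shape that
# may contain None for unknown dimensions and fill the gaps from the
# dynamic (runtime) shape.
def merge_shapes(static_shape, dynamic_shape):
    return [d if s is None else s for s, d in zip(static_shape, dynamic_shape)]

print(merge_shapes([None, 3, None], [2, 3, 7]))  # [2, 3, 7]
```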
kubernetes-client/python
47b9da9de2d02b2b7a34fbe05afb44afd130d73a
kubernetes/client/models/v1_service_account_list.py
python
V1ServiceAccountList.items
(self, items)
Sets the items of this V1ServiceAccountList. List of ServiceAccounts. More info: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/ # noqa: E501 :param items: The items of this V1ServiceAccountList. # noqa: E501 :type: list[V1ServiceAccount]
Sets the items of this V1ServiceAccountList.
[ "Sets", "the", "items", "of", "this", "V1ServiceAccountList", "." ]
def items(self, items):
    """Sets the items of this V1ServiceAccountList.

    List of ServiceAccounts. More info: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/  # noqa: E501

    :param items: The items of this V1ServiceAccountList.  # noqa: E501
    :type: list[V1ServiceAccount]
    """
    if self.local_vars_configuration.client_side_validation and items is None:  # noqa: E501
        raise ValueError("Invalid value for `items`, must not be `None`")  # noqa: E501

    self._items = items
[ "def", "items", "(", "self", ",", "items", ")", ":", "if", "self", ".", "local_vars_configuration", ".", "client_side_validation", "and", "items", "is", "None", ":", "# noqa: E501", "raise", "ValueError", "(", "\"Invalid value for `items`, must not be `None`\"", ")", ...
https://github.com/kubernetes-client/python/blob/47b9da9de2d02b2b7a34fbe05afb44afd130d73a/kubernetes/client/models/v1_service_account_list.py#L104-L115
krintoxi/NoobSec-Toolkit
38738541cbc03cedb9a3b3ed13b629f781ad64f6
NoobSecToolkit - MAC OSX/tools/inject/thirdparty/gprof2dot/gprof2dot.py
python
Profile.dump
(self)
[]
def dump(self):
    for function in self.functions.itervalues():
        sys.stderr.write('Function %s:\n' % (function.name,))
        self._dump_events(function.events)
        for call in function.calls.itervalues():
            callee = self.functions[call.callee_id]
            sys.stderr.write('  Call %s:\n' % (callee.name,))
            self._dump_events(call.events)
    for cycle in self.cycles:
        sys.stderr.write('Cycle:\n')
        self._dump_events(cycle.events)
        for function in cycle.functions:
            sys.stderr.write('  Function %s\n' % (function.name,))
[ "def", "dump", "(", "self", ")", ":", "for", "function", "in", "self", ".", "functions", ".", "itervalues", "(", ")", ":", "sys", ".", "stderr", ".", "write", "(", "'Function %s:\\n'", "%", "(", "function", ".", "name", ",", ")", ")", "self", ".", ...
https://github.com/krintoxi/NoobSec-Toolkit/blob/38738541cbc03cedb9a3b3ed13b629f781ad64f6/NoobSecToolkit - MAC OSX/tools/inject/thirdparty/gprof2dot/gprof2dot.py#L525-L537
maxmind/GeoIP2-python
63b6a7e09cc3482be81ed565ea152e134e388788
geoip2/database.py
python
Reader.__enter__
(self)
return self
[]
def __enter__(self) -> "Reader":
    return self
[ "def", "__enter__", "(", "self", ")", "->", "\"Reader\"", ":", "return", "self" ]
https://github.com/maxmind/GeoIP2-python/blob/63b6a7e09cc3482be81ed565ea152e134e388788/geoip2/database.py#L123-L124
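`__enter__` returning `self` is the standard way to make an object usable as a context manager while binding the object itself in the `with` statement. A self-contained sketch of the pattern (the `Reader` below is a stand-in, not the GeoIP2 class):

```python
# Minimal sketch of the Reader pattern: __enter__ returns self so the
# object itself is bound by the `with` statement; cleanup goes in __exit__.
class Reader:
    def __init__(self):
        self.closed = False

    def __enter__(self) -> "Reader":
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        self.closed = True

with Reader() as r:
    inside = r.closed       # False while the block is active
print(inside, r.closed)     # False True
```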
securesystemslab/zippy
ff0e84ac99442c2c55fe1d285332cfd4e185e089
zippy/lib-python/3/optparse.py
python
OptionContainer._share_option_mappings
(self, parser)
[]
def _share_option_mappings(self, parser):
    # For use by OptionGroup constructor -- use shared option
    # mappings from the OptionParser that owns this OptionGroup.
    self._short_opt = parser._short_opt
    self._long_opt = parser._long_opt
    self.defaults = parser.defaults
[ "def", "_share_option_mappings", "(", "self", ",", "parser", ")", ":", "# For use by OptionGroup constructor -- use shared option", "# mappings from the OptionParser that owns this OptionGroup.", "self", ".", "_short_opt", "=", "parser", ".", "_short_opt", "self", ".", "_long_o...
https://github.com/securesystemslab/zippy/blob/ff0e84ac99442c2c55fe1d285332cfd4e185e089/zippy/lib-python/3/optparse.py#L941-L946
jiangxinyang227/NLP-Project
b11f67d8962f40e17990b4fc4551b0ea5496881c
fine_grained_sentiment_analysis/bilstm_attention/predict.py
python
Predictor.sentence_to_idx
(self, sentence)
return sentence_pad
Convert a tokenized sentence into its index (idx) representation :param sentence: :return:
Convert a tokenized sentence into its index (idx) representation :param sentence: :return:
[ "将分词后的句子转换成idx表示", ":", "param", "sentence", ":", ":", "return", ":" ]
def sentence_to_idx(self, sentence):
    """
    Convert a tokenized sentence into its index (idx) representation
    :param sentence:
    :return:
    """
    sentence = jieba.lcut(sentence)
    sentence_ids = [self.word_to_index.get(token, self.word_to_index["<UNK>"])
                    for token in sentence]
    sentence_pad = sentence_ids[: self.sequence_length] if len(sentence_ids) > self.sequence_length \
        else sentence_ids + [0] * (self.sequence_length - len(sentence_ids))
    return sentence_pad
[ "def", "sentence_to_idx", "(", "self", ",", "sentence", ")", ":", "sentence", "=", "jieba", ".", "lcut", "(", "sentence", ")", "sentence_ids", "=", "[", "self", ".", "word_to_index", ".", "get", "(", "token", ",", "self", ".", "word_to_index", "[", "\"<U...
https://github.com/jiangxinyang227/NLP-Project/blob/b11f67d8962f40e17990b4fc4551b0ea5496881c/fine_grained_sentiment_analysis/bilstm_attention/predict.py#L49-L59
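The pad-or-truncate step in `sentence_to_idx` can be isolated into a tokenizer-free sketch: id sequences longer than `sequence_length` are cut, shorter ones are right-padded with 0. The helper name below is illustrative:

```python
# Tokenizer-free sketch of the pad-or-truncate step in sentence_to_idx.
def pad_or_truncate(ids, sequence_length, pad_id=0):
    if len(ids) > sequence_length:
        return ids[:sequence_length]          # truncate to the fixed length
    return ids + [pad_id] * (sequence_length - len(ids))  # right-pad

print(pad_or_truncate([5, 9, 2], 5))           # [5, 9, 2, 0, 0]
print(pad_or_truncate([5, 9, 2, 4, 1, 7], 5))  # [5, 9, 2, 4, 1]
```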
boto/boto
b2a6f08122b2f1b89888d2848e730893595cd001
boto/glacier/vault.py
python
Vault.delete_archive
(self, archive_id)
return self.layer1.delete_archive(self.name, archive_id)
This operation deletes an archive from the vault. :type archive_id: str :param archive_id: The ID for the archive to be deleted.
This operation deletes an archive from the vault.
[ "This", "operation", "deletes", "an", "archive", "from", "the", "vault", "." ]
def delete_archive(self, archive_id):
    """
    This operation deletes an archive from the vault.

    :type archive_id: str
    :param archive_id: The ID for the archive to be deleted.
    """
    return self.layer1.delete_archive(self.name, archive_id)
[ "def", "delete_archive", "(", "self", ",", "archive_id", ")", ":", "return", "self", ".", "layer1", ".", "delete_archive", "(", "self", ".", "name", ",", "archive_id", ")" ]
https://github.com/boto/boto/blob/b2a6f08122b2f1b89888d2848e730893595cd001/boto/glacier/vault.py#L387-L394
deepfakes/faceswap
09c7d8aca3c608d1afad941ea78e9fd9b64d9219
lib/gui/display_analysis.py
python
_Options._set_buttons_state
(self, *args)
Callback to enable/disable button when training is commenced and stopped.
Callback to enable/disable button when training is commenced and stopped.
[ "Callback", "to", "enable", "/", "disable", "button", "when", "training", "is", "commenced", "and", "stopped", "." ]
def _set_buttons_state(self, *args):  # pylint:disable=unused-argument
    """ Callback to enable/disable button when training is commenced and stopped. """
    is_training = self._parent.vars["is_training"].get()
    state = "disabled" if is_training else "!disabled"
    for name, button in self._buttons.items():
        if name not in ("load", "clear"):
            continue
        logger.debug("Setting %s button state to %s", name, state)
        button.state([state])
[ "def", "_set_buttons_state", "(", "self", ",", "*", "args", ")", ":", "# pylint:disable=unused-argument", "is_training", "=", "self", ".", "_parent", ".", "vars", "[", "\"is_training\"", "]", ".", "get", "(", ")", "state", "=", "\"disabled\"", "if", "is_traini...
https://github.com/deepfakes/faceswap/blob/09c7d8aca3c608d1afad941ea78e9fd9b64d9219/lib/gui/display_analysis.py#L357-L365
afruehstueck/tileGAN
0460e228b1109528a0fefc6569b970c2934a649d
tileGAN_client.py
python
ImageViewer.toggleLatentIndicator
(self, state)
toggle the crosshair that indicates the latent size
toggle the crosshair that indicates the latent size
[ "toggle", "the", "crosshair", "that", "indicates", "the", "latent", "size" ]
def toggleLatentIndicator(self, state):
    """ toggle the crosshair that indicates the latent size """
    self._showLatentIndicator = state
[ "def", "toggleLatentIndicator", "(", "self", ",", "state", ")", ":", "self", ".", "_showLatentIndicator", "=", "state" ]
https://github.com/afruehstueck/tileGAN/blob/0460e228b1109528a0fefc6569b970c2934a649d/tileGAN_client.py#L688-L692
dickreuter/Poker
b7642f0277e267e1a44eab957c4c7d1d8f50f4ee
poker/gui/action_and_signals.py
python
UIActionAndSignals.open_table_setup
(self)
[]
def open_table_setup(self):
    self.ui_setup_table = TableSetupForm()
    gui_signals = TableSetupActionAndSignals(self.ui_setup_table)
[ "def", "open_table_setup", "(", "self", ")", ":", "self", ".", "ui_setup_table", "=", "TableSetupForm", "(", ")", "gui_signals", "=", "TableSetupActionAndSignals", "(", "self", ".", "ui_setup_table", ")" ]
https://github.com/dickreuter/Poker/blob/b7642f0277e267e1a44eab957c4c7d1d8f50f4ee/poker/gui/action_and_signals.py#L367-L369
jython/jython3
def4f8ec47cb7a9c799ea4c745f12badf92c5769
lib-python/3.5.1/tkinter/ttk.py
python
Treeview.delete
(self, *items)
Delete all specified items and all their descendants. The root item may not be deleted.
Delete all specified items and all their descendants. The root item may not be deleted.
[ "Delete", "all", "specified", "items", "and", "all", "their", "descendants", ".", "The", "root", "item", "may", "not", "be", "deleted", "." ]
def delete(self, *items):
    """Delete all specified items and all their descendants.

    The root item may not be deleted."""
    self.tk.call(self._w, "delete", items)
[ "def", "delete", "(", "self", ",", "*", "items", ")", ":", "self", ".", "tk", ".", "call", "(", "self", ".", "_w", ",", "\"delete\"", ",", "items", ")" ]
https://github.com/jython/jython3/blob/def4f8ec47cb7a9c799ea4c745f12badf92c5769/lib-python/3.5.1/tkinter/ttk.py#L1216-L1219
bruderstein/PythonScript
df9f7071ddf3a079e3a301b9b53a6dc78cf1208f
PythonLib/full/asyncio/streams.py
python
StreamReader.__aiter__
(self)
return self
[]
def __aiter__(self):
    return self
[ "def", "__aiter__", "(", "self", ")", ":", "return", "self" ]
https://github.com/bruderstein/PythonScript/blob/df9f7071ddf3a079e3a301b9b53a6dc78cf1208f/PythonLib/full/asyncio/streams.py#L719-L720
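`StreamReader.__aiter__` returning `self` means the object is its own async iterator; pairing it with an `__anext__` that eventually raises `StopAsyncIteration` drives an `async for` loop. A minimal, self-contained sketch of the protocol (the class is a stand-in, not `StreamReader`):

```python
import asyncio

# Sketch of the StreamReader-style protocol: __aiter__ returns self and
# __anext__ yields items until StopAsyncIteration ends the async-for loop.
class CountDown:
    def __init__(self, n):
        self.n = n

    def __aiter__(self):
        return self

    async def __anext__(self):
        if self.n == 0:
            raise StopAsyncIteration
        self.n -= 1
        return self.n + 1

async def collect():
    return [item async for item in CountDown(3)]

print(asyncio.run(collect()))  # [3, 2, 1]
```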
googleapis/python-ndb
e780c81cde1016651afbfcad8180d9912722cf1b
google/cloud/ndb/model.py
python
Property._get_user_value
(self, entity)
return self._apply_to_values(entity, self._opt_call_from_base_type)
Return the user value for this property of the given entity. This implies removing the :class:`_BaseValue` wrapper if present, and if it is, calling all ``_from_base_type()`` methods, in the reverse method resolution order of the property's class. It also handles default values and repeated properties. Args: entity (Model): An entity to get a value from. Returns: Any: The original value (if not :class:`_BaseValue`) or the wrapped value converted from the base type.
Return the user value for this property of the given entity.
[ "Return", "the", "user", "value", "for", "this", "property", "of", "the", "given", "entity", "." ]
def _get_user_value(self, entity):
    """Return the user value for this property of the given entity.

    This implies removing the :class:`_BaseValue` wrapper if present, and
    if it is, calling all ``_from_base_type()`` methods, in the reverse
    method resolution order of the property's class. It also handles
    default values and repeated properties.

    Args:
        entity (Model): An entity to get a value from.

    Returns:
        Any: The original value (if not :class:`_BaseValue`) or the
            wrapped value converted from the base type.
    """
    return self._apply_to_values(entity, self._opt_call_from_base_type)
[ "def", "_get_user_value", "(", "self", ",", "entity", ")", ":", "return", "self", ".", "_apply_to_values", "(", "entity", ",", "self", ".", "_opt_call_from_base_type", ")" ]
https://github.com/googleapis/python-ndb/blob/e780c81cde1016651afbfcad8180d9912722cf1b/google/cloud/ndb/model.py#L1514-L1529
skorch-dev/skorch
cf6615be4e62a16af6f8d83a47e8b59b5c48a58c
skorch/helper.py
python
SliceDataset.transform
(self, data)
return data
Additional transformations on ``data``. Note: If you use this in conjunction with PyTorch :class:`~torch.utils.data.DataLoader`, the latter will call the dataset for each row separately, which means that the incoming ``data`` is a single row.
Additional transformations on ``data``.
[ "Additional", "transformations", "on", "data", "." ]
def transform(self, data):
    """Additional transformations on ``data``.

    Note: If you use this in conjunction with PyTorch
    :class:`~torch.utils.data.DataLoader`, the latter will call the
    dataset for each row separately, which means that the incoming
    ``data`` is a single row.
    """
    return data
[ "def", "transform", "(", "self", ",", "data", ")", ":", "return", "data" ]
https://github.com/skorch-dev/skorch/blob/cf6615be4e62a16af6f8d83a47e8b59b5c48a58c/skorch/helper.py#L206-L215
oilshell/oil
94388e7d44a9ad879b12615f6203b38596b5a2d3
Python-2.7.13/Lib/numbers.py
python
Complex.__rpow__
(self, base)
base ** self
base ** self
[ "base", "**", "self" ]
def __rpow__(self, base):
    """base ** self"""
    raise NotImplementedError
[ "def", "__rpow__", "(", "self", ",", "base", ")", ":", "raise", "NotImplementedError" ]
https://github.com/oilshell/oil/blob/94388e7d44a9ad879b12615f6203b38596b5a2d3/Python-2.7.13/Lib/numbers.py#L142-L144
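`Complex.__rpow__` documents the reflected form of `**`: when the left operand's `__pow__` returns `NotImplemented` for the right operand's type, Python retries as `base ** self` via `__rpow__`. A concrete sketch with an illustrative class:

```python
# Sketch of how __rpow__ is dispatched: int.__pow__ doesn't know about
# Exponent, so Python falls back to Exponent.__rpow__ with base=2.
class Exponent:
    def __init__(self, value):
        self.value = value

    def __rpow__(self, base):
        """base ** self"""
        return base ** self.value

print(2 ** Exponent(5))  # 32
```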
holzschu/Carnets
44effb10ddfc6aa5c8b0687582a724ba82c6b547
Library/lib/python3.7/datetime.py
python
datetime.now
(cls, tz=None)
return cls.fromtimestamp(t, tz)
Construct a datetime from time.time() and optional time zone info.
Construct a datetime from time.time() and optional time zone info.
[ "Construct", "a", "datetime", "from", "time", ".", "time", "()", "and", "optional", "time", "zone", "info", "." ]
def now(cls, tz=None):
    "Construct a datetime from time.time() and optional time zone info."
    t = _time.time()
    return cls.fromtimestamp(t, tz)
[ "def", "now", "(", "cls", ",", "tz", "=", "None", ")", ":", "t", "=", "_time", ".", "time", "(", ")", "return", "cls", ".", "fromtimestamp", "(", "t", ",", "tz", ")" ]
https://github.com/holzschu/Carnets/blob/44effb10ddfc6aa5c8b0687582a724ba82c6b547/Library/lib/python3.7/datetime.py#L1612-L1615
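A short usage sketch of the classmethod above: with no argument it returns a naive local time (``tzinfo`` is ``None``), while passing a ``tzinfo`` yields an aware datetime in that zone.

```python
from datetime import datetime, timezone

naive = datetime.now()               # local wall-clock time, tzinfo is None
aware = datetime.now(timezone.utc)   # timezone-aware current UTC time

print(naive.tzinfo, aware.tzinfo)  # → None UTC
```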
mpatacchiola/dissecting-reinforcement-learning
38660b0a0d5aed077a46acb4bcb2013565304d9c
src/2/gridworld.py
python
GridWorld.setTransitionMatrix
(self, transition_matrix)
Set the transition matrix. The transition matrix here is intended as a matrix which has a row for each action, and the elements of the row are the probabilities of executing each action when a command is given. For example: [[0.55, 0.25, 0.10, 0.10] [0.25, 0.25, 0.25, 0.25] [0.30, 0.20, 0.40, 0.10] [0.10, 0.20, 0.10, 0.60]] This matrix defines the transition rules for all the 4 possible actions. The first row corresponds to the probabilities of executing each one of the 4 actions when the policy orders the robot to go UP. In this case the transition model says that with a probability of 0.55 the robot will go UP, with a probability of 0.25 RIGHT, 0.10 DOWN and 0.10 LEFT.
Set the transition matrix.
[ "Set", "the", "transition", "matrix", "." ]
def setTransitionMatrix(self, transition_matrix):
    '''Set the transition matrix.

    The transition matrix here is intended as a matrix which has a row
    for each action, and the elements of the row are the probabilities
    of executing each action when a command is given. For example:

    [[0.55, 0.25, 0.10, 0.10]
     [0.25, 0.25, 0.25, 0.25]
     [0.30, 0.20, 0.40, 0.10]
     [0.10, 0.20, 0.10, 0.60]]

    This matrix defines the transition rules for all the 4 possible
    actions. The first row corresponds to the probabilities of executing
    each one of the 4 actions when the policy orders the robot to go UP.
    In this case the transition model says that with a probability of
    0.55 the robot will go UP, with a probability of 0.25 RIGHT, 0.10
    DOWN and 0.10 LEFT.
    '''
    if transition_matrix.shape != self.transition_matrix.shape:
        raise ValueError('The shape of the two matrices must be the same.')
    self.transition_matrix = transition_matrix
[ "def", "setTransitionMatrix", "(", "self", ",", "transition_matrix", ")", ":", "if", "(", "transition_matrix", ".", "shape", "!=", "self", ".", "transition_matrix", ".", "shape", ")", ":", "raise", "ValueError", "(", "'The shape of the two matrices must be the same.'"...
https://github.com/mpatacchiola/dissecting-reinforcement-learning/blob/38660b0a0d5aed077a46acb4bcb2013565304d9c/src/2/gridworld.py#L51-L70
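The setter above only validates that the new matrix has the same shape as the stored one before replacing it. A dependency-free sketch of that check (``GridWorldSketch`` and ``_shape`` are hypothetical names; the original uses NumPy arrays, replaced here with nested lists):

```python
class GridWorldSketch:
    """Minimal stand-in for the GridWorld transition-matrix setter."""

    def __init__(self, num_actions=4):
        # Start from a deterministic model: each commanded action executes.
        self.transition_matrix = [[1.0 if i == j else 0.0
                                   for j in range(num_actions)]
                                  for i in range(num_actions)]

    @staticmethod
    def _shape(matrix):
        # (rows, columns) of a rectangular nested list.
        return (len(matrix), len(matrix[0]))

    def setTransitionMatrix(self, transition_matrix):
        if self._shape(transition_matrix) != self._shape(self.transition_matrix):
            raise ValueError('The shape of the two matrices must be the same.')
        self.transition_matrix = transition_matrix


world = GridWorldSketch()
world.setTransitionMatrix([[0.55, 0.25, 0.10, 0.10],
                           [0.25, 0.25, 0.25, 0.25],
                           [0.30, 0.20, 0.40, 0.10],
                           [0.10, 0.20, 0.10, 0.60]])
print(world.transition_matrix[0][0])  # → 0.55
```

Rejecting a 2x2 matrix here raises the same ``ValueError`` as the original.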