code: string (lengths 51–2.38k) · docstring: string (lengths 4–15.2k)
def get_package(repo_url, pkg_name, timeout=1):
    url = repo_url + "/packages/" + pkg_name
    headers = {'accept': 'application/json'}
    resp = requests.get(url, headers=headers, timeout=timeout)
    if resp.status_code == 404:
        return None
    return resp.json()
Retrieve package information from a Bower registry at repo_url. Returns a dict of package data, or None if the package is not found.
def basic_consume(self, queue='', consumer_tag='', no_local=False,
                  no_ack=False, exclusive=False, nowait=False,
                  callback=None, ticket=None):
    args = AMQPWriter()
    if ticket is not None:
        args.write_short(ticket)
    else:
        args.write_short(self.default_ticket)
    args.write_shortstr(queue)
    args.write_shortstr(consumer_tag)
    args.write_bit(no_local)
    args.write_bit(no_ack)
    args.write_bit(exclusive)
    args.write_bit(nowait)
    self._send_method((60, 20), args)
    if not nowait:
        consumer_tag = self.wait(allowed_methods=[
            (60, 21),
        ])
    self.callbacks[consumer_tag] = callback
    return consumer_tag
Start a queue consumer.

This method asks the server to start a "consumer", which is a transient
request for messages from a specific queue. Consumers last as long as
the channel they were created on, or until the client cancels them.

RULE: The server SHOULD support at least 16 consumers per queue, unless
the queue was declared as private, and ideally, impose no limit except
as defined by available resources.

PARAMETERS:
    queue: shortstr
        Specifies the name of the queue to consume from. If the queue
        name is null, refers to the current queue for the channel,
        which is the last declared queue.
        RULE: If the client did not previously declare a queue, and
        the queue name in this method is empty, the server MUST raise
        a connection exception with reply code 530 (not allowed).
    consumer_tag: shortstr
        Specifies the identifier for the consumer. The consumer tag is
        local to a connection, so two clients can use the same consumer
        tags. If this field is empty the server will generate a unique
        tag.
        RULE: The tag MUST NOT refer to an existing consumer. If the
        client attempts to create two consumers with the same non-empty
        tag the server MUST raise a connection exception with reply
        code 530 (not allowed).
    no_local: boolean
        Do not deliver own messages. If the no-local field is set the
        server will not send messages to the client that published
        them.
    no_ack: boolean
        No acknowledgement needed. If this field is set the server does
        not expect acknowledgements for messages. That is, when a
        message is delivered to the client the server automatically and
        silently acknowledges it on behalf of the client. This
        functionality increases performance but at the cost of
        reliability. Messages can get lost if a client dies before it
        can deliver them to the application.
    exclusive: boolean
        Request exclusive consumer access, meaning only this consumer
        can access the queue.
        RULE: If the server cannot grant exclusive access to the queue
        when asked (because there are other consumers active) it MUST
        raise a channel exception with return code 403 (access
        refused).
    nowait: boolean
        Do not send a reply method. If set, the server will not respond
        to the method. The client should not wait for a reply method.
        If the server could not complete the method it will raise a
        channel or connection exception.
    callback: Python callable
        Function/method called with each delivered message. For each
        message delivered by the broker, the callable will be called
        with a Message object as the single argument. If no callable is
        specified, messages are quietly discarded; no_ack should
        probably be set to True in that case.
    ticket: short
        RULE: The client MUST provide a valid access ticket giving
        "read" access rights to the realm for the queue.
def run(self):
    kwargs = {'query': self.get_data()}
    if self.data_type == "ip":
        kwargs.update({'query_type': 'ip'})
    elif self.data_type == "network":
        kwargs.update({'query_type': 'network'})
    elif self.data_type == 'autonomous-system':
        kwargs.update({'query_type': 'asn'})
    elif self.data_type == 'port':
        kwargs.update({'query_type': 'port'})
    else:
        self.notSupported()
        return False
    if self.service == 'observations':
        response = self.bs.get_observations(**kwargs)
        self.report(response)
    elif self.service == 'enrichment':
        response = self.bs.enrich(**kwargs)
        self.report(response)
    else:
        self.report({'error': 'Invalid service defined.'})
Run the process to get observation data from Backscatter.io.
def url(self, filename, external=False):
    if filename.startswith('/'):
        filename = filename[1:]
    if self.has_url:
        return self.base_url + filename
    else:
        return url_for('fs.get_file', fs=self.name,
                       filename=filename, _external=external)
Get the URL a file uploaded to this set would be accessed at. It doesn't check whether said file exists.

:param string filename: The filename to return the URL for.
:param bool external: If True, returns an absolute URL.
def _end_of_decade(self):
    year = self.year - self.year % YEARS_PER_DECADE + YEARS_PER_DECADE - 1
    return self.set(year, 12, 31)
Reset the date to the last day of the decade.

:rtype: Date
def get_album(self, id):
    url = self._base_url + "/3/album/{0}".format(id)
    json = self._send_request(url)
    return Album(json, self)
Return information about this album.
def second_order_score(y, mean, scale, shape, skewness):
    return (((shape + 1) / shape) * (y - mean)
            / (np.power(scale, 2) + (np.power(y - mean, 2) / shape))
            / ((shape + 1) * ((np.power(scale, 2) * shape) - np.power(y - mean, 2))
               / np.power((np.power(scale, 2) * shape) + np.power(y - mean, 2), 2)))
GAS t update term potentially using second-order information - native Python function

Parameters
----------
y : float
    datapoint for the time series
mean : float
    location parameter for the t distribution
scale : float
    scale parameter for the t distribution
shape : float
    tail thickness parameter for the t distribution
skewness : float
    skewness parameter for the t distribution

Returns
----------
- Adjusted score of the t family
def join_field(path):
    output = ".".join([f.replace(".", "\\.") for f in path if f is not None])
    return output if output else "."
RETURN field SEQUENCE AS STRING
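A self-contained sketch of the pair above (the function copied from the row, with the `!= None` comparison written idiomatically) illustrates the dot-escaping behavior:

```python
def join_field(path):
    # Escape literal dots inside each segment, then join segments with '.'
    output = ".".join([f.replace(".", "\\.") for f in path if f is not None])
    return output if output else "."

# Segments containing dots are escaped so they survive a later split.
print(join_field(["a", "b.c"]))   # a.b\.c
print(join_field([]))             # .
```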
def populate_request_data(self, request_args):
    request_args['auth'] = HTTPBasicAuth(self._username, self._password)
    return request_args
Add the authentication info to the supplied dictionary.

We use the `requests.HTTPBasicAuth` class as the `auth` param.

Args:
    `request_args`: The arguments that will be passed to the request.

Returns:
    The updated arguments for the request.
def save_default_values(self):
    for parameter_container in self.default_value_parameter_containers:
        parameters = parameter_container.get_parameters()
        for parameter in parameters:
            set_inasafe_default_value_qsetting(
                self.settings, GLOBAL, parameter.guid, parameter.value)
Save InaSAFE default values.
def stop(self):
    logger.debug("Stopping playback")
    self.clock.stop()
    self.status = READY
Stops the video stream and resets the clock.
def is_appendable_to(self, group):
    return (group.attrs['format'] == self.dformat
            and group[self.name].dtype == self.dtype
            and self._group_dim(group) == self.dim)
Return True if features are appendable to an HDF5 group
def average_loss(lc):
    losses, poes = (lc['loss'], lc['poe']) if lc.dtype.names else lc
    return -pairwise_diff(losses) @ pairwise_mean(poes)
Given a loss curve array with `poe` and `loss` fields, computes the
average loss on a period of time.

:note: As the loss curve is supposed to be piecewise linear as it is a
    result of a linear interpolation, we compute an exact integral by
    using the trapezoidal rule with the width given by the loss bin
    width.
def _is_allowed_abbr(self, tokens):
    if len(tokens) <= 2:
        abbr_text = ''.join(tokens)
        if self.abbr_min <= len(abbr_text) <= self.abbr_max and bracket_level(abbr_text) == 0:
            if abbr_text[0].isalnum() and any(c.isalpha() for c in abbr_text):
                if re.match(r'^\d+(\.\d+)?(g|m[lL]|cm)$', abbr_text):
                    return False
                return True
    return False
Return True if text is an allowed abbreviation.
def _SetRow(self, new_values, row=0):
    if not row:
        row = self._row_index
    if row > self.size:
        raise TableError("Entry %s beyond table size %s." % (row, self.size))
    self._table[row].values = new_values
Sets the current row to new list.

Args:
    new_values: List|dict of new values to insert into row.
    row: int, Row to insert values into.

Raises:
    TableError: If number of new values is not equal to row size.
def console_set_char_foreground(
    con: tcod.console.Console, x: int, y: int, col: Tuple[int, int, int]
) -> None:
    lib.TCOD_console_set_char_foreground(_console(con), x, y, col)
Change the foreground color of x,y to col.

Args:
    con (Console): Any Console instance.
    x (int): Character x position from the left.
    y (int): Character y position from the top.
    col (Union[Tuple[int, int, int], Sequence[int]]):
        An (r, g, b) sequence or Color instance.

.. deprecated:: 8.4
    Array access performs significantly faster than using this
    function. See :any:`Console.fg`.
def _get_pretty_string(obj):
    sio = StringIO()
    pprint.pprint(obj, stream=sio)
    return sio.getvalue()
Return a prettier version of obj.

Parameters
----------
obj : object
    Object to pretty print

Returns
-------
s : str
    Pretty print object repr
def content_location(self) -> Optional[UnstructuredHeader]:
    try:
        return cast(UnstructuredHeader, self[b'content-location'][0])
    except (KeyError, IndexError):
        return None
The ``Content-Location`` header.
def get_empty_dirs(self, path):
    empty_dirs = []
    for i in os.listdir(path):
        child_path = os.path.join(path, i)
        if i == '.git' or os.path.isfile(child_path) or os.path.islink(child_path):
            continue
        if self.path_only_contains_dirs(child_path):
            empty_dirs.append(i)
    return empty_dirs
Return a list of empty directories in path.
def _purge(dir, pattern, reason=''):
    for f in os.listdir(dir):
        if re.search(pattern, f):
            print("Purging file {0}. {1}".format(f, reason))
            os.remove(os.path.join(dir, f))
delete files in dir that match pattern
def add_wirevector(self, wirevector):
    self.sanity_check_wirevector(wirevector)
    self.wirevector_set.add(wirevector)
    self.wirevector_by_name[wirevector.name] = wirevector
Add a wirevector object to the block.
def run_vardict(align_bams, items, ref_file, assoc_files,
                region=None, out_file=None):
    items = shared.add_highdepth_genome_exclusion(items)
    if vcfutils.is_paired_analysis(align_bams, items):
        call_file = _run_vardict_paired(align_bams, items, ref_file,
                                        assoc_files, region, out_file)
    else:
        vcfutils.check_paired_problems(items)
        call_file = _run_vardict_caller(align_bams, items, ref_file,
                                        assoc_files, region, out_file)
    return call_file
Run VarDict variant calling.
def backup_file(*, file, host):
    if not _has_init:
        raise RuntimeError("This driver has not been properly initialised!")
    try:
        if not _dry_run:
            bucket = _boto_conn.get_bucket(_bucket_name)
    except boto.exception.S3ResponseError:
        log.msg_warn("Bucket '{bucket_name}' does not exist!, creating it..."
                     .format(bucket_name=_bucket_name))
        if not _dry_run:
            bucket = _boto_conn.create_bucket(_bucket_name)
            log.msg("Created bucket '{bucket}'".format(bucket=_bucket_name))
    key_path = "{key}/{file}".format(key=host, file=ntpath.basename(file))
    if not _dry_run:
        k = boto.s3.key.Key(bucket)
        k.key = key_path
    log.msg("Uploading '{key_path}' to bucket '{bucket_name}' ..."
            .format(key_path=key_path, bucket_name=_bucket_name))
    if not _dry_run:
        k.set_contents_from_filename(file, encrypt_key=True)
    log.msg("The file '{key_path}' has been successfully uploaded to S3!"
            .format(key_path=key_path))
Backup a file on S3

:param file: full path to the file to be backed up
:param host: this will be used to locate the file on S3
:raises TypeError: if an argument in kwargs does not have the type expected
:raises ValueError: if an argument within kwargs has an invalid value
def start(self, *args):
    if self._is_verbose:
        return self
    self.writeln('start', *args)
    self._indent += 1
    return self
Start a nested log.
def to_dict(self):
    return dict(
        host=self.host,
        port=self.port,
        database=self.database,
        username=self.username,
        password=self.password,
    )
Convert credentials into a dict.
def get_child_book_ids(self, book_id):
    if self._catalog_session is not None:
        return self._catalog_session.get_child_catalog_ids(catalog_id=book_id)
    return self._hierarchy_session.get_children(id_=book_id)
Gets the child ``Ids`` of the given book.

arg:    book_id (osid.id.Id): the ``Id`` to query
return: (osid.id.IdList) - the children of the book
raise:  NotFound - ``book_id`` is not found
raise:  NullArgument - ``book_id`` is ``null``
raise:  OperationFailed - unable to complete request
raise:  PermissionDenied - authorization failure
*compliance: mandatory -- This method must be implemented.*
def _flush(self):
    d = os.path.dirname(self.path)
    if not os.path.isdir(d):
        os.makedirs(d)
    with io.open(self.path, 'w', encoding='utf8') as f:
        yaml.safe_dump(self._data, f, default_flow_style=False, encoding=None)
Save the contents of data to the file on disk. You should not need to call this manually.
def percent(self, value) -> 'Gap':
    raise_not_number(value)
    self.gap = '{}%'.format(value)
    return self
Set the gap as a percentage.
def build_row(row, left, center, right):
    if not row or not row[0]:
        yield combine((), left, center, right)
        return
    for row_index in range(len(row[0])):
        yield combine((c[row_index] for c in row), left, center, right)
Combine single or multi-lined cells into a single row of list of lists
including borders.

Row must already be padded and extended so each cell has the same
number of lines.

Example return value:
[
    ['>', 'Left ', '|', 'Center', '|', 'Right', '<'],
    ['>', 'Cell1', '|', 'Cell2 ', '|', 'Cell3', '<'],
]

:param iter row: List of cells for one row.
:param str left: Left border.
:param str center: Column separator.
:param str right: Right border.

:return: Yields other generators that yield strings.
:rtype: iter
def _convert_bin_to_datelike_type(bins, dtype):
    if is_datetime64tz_dtype(dtype):
        bins = to_datetime(bins.astype(np.int64),
                           utc=True).tz_convert(dtype.tz)
    elif is_datetime_or_timedelta_dtype(dtype):
        bins = Index(bins.astype(np.int64), dtype=dtype)
    return bins
Convert bins to a DatetimeIndex or TimedeltaIndex if the original dtype
is datelike.

Parameters
----------
bins : list-like of bins
dtype : dtype of data

Returns
-------
bins : Array-like of bins, DatetimeIndex or TimedeltaIndex if dtype is
    datelike
def ReadAllClientGraphSeries(self, client_label, report_type,
                             time_range=None, cursor=None):
    query = 
    args = [client_label, report_type.SerializeToDataStore()]
    if time_range is not None:
        query += "AND `timestamp` BETWEEN FROM_UNIXTIME(%s) AND FROM_UNIXTIME(%s)"
        args += [
            mysql_utils.RDFDatetimeToTimestamp(time_range.start),
            mysql_utils.RDFDatetimeToTimestamp(time_range.end)
        ]
    cursor.execute(query, args)
    results = {}
    for timestamp, raw_series in cursor.fetchall():
        timestamp = cast(rdfvalue.RDFDatetime,
                         mysql_utils.TimestampToRDFDatetime(timestamp))
        series = rdf_stats.ClientGraphSeries.FromSerializedString(raw_series)
        results[timestamp] = series
    return results
Reads graph series for the given label and report-type from the DB.
def formatter_class(klass):
    def decorator(func):
        adaptor = ScriptAdaptor._get_adaptor(func)
        adaptor.formatter_class = klass
        return func
    return decorator
Decorator used to specify the formatter class for the console script.

:param klass: The formatter class to use.
def add_neighbours(self):
    ipix = self._best_res_pixels()
    hp = HEALPix(nside=(1 << self.max_order), order='nested')
    extend_ipix = AbstractMOC._neighbour_pixels(hp, ipix)
    neigh_ipix = np.setdiff1d(extend_ipix, ipix)
    shift = 2 * (AbstractMOC.HPY_MAX_NORDER - self.max_order)
    neigh_itv = np.vstack((neigh_ipix << shift, (neigh_ipix + 1) << shift)).T
    self._interval_set = self._interval_set.union(IntervalSet(neigh_itv))
    return self
Extends the MOC instance so that it includes the HEALPix cells touching
its border.

The depth of the HEALPix cells added at the border is equal to the
maximum depth of the MOC instance.

Returns
-------
moc : `~mocpy.moc.MOC`
    self extended by one degree of neighbours.
def process(self, user, timestamp, data=None):
    event = Event(user, mwtypes.Timestamp(timestamp), self.event_i, data)
    self.event_i += 1
    for user, events in self._clear_expired(event.timestamp):
        yield Session(user, unpack_events(events))
    if event.user in self.active_users:
        events = self.active_users[event.user]
    else:
        events = []
        self.active_users[event.user] = events
        active_session = ActiveSession(event.timestamp, event.i, events)
        self.recently_active.push(active_session)
    events.append(event)
Processes a user event.

:Parameters:
    user : `hashable`
        A hashable value to identify a user (`int` or `str` are OK)
    timestamp : :class:`mwtypes.Timestamp`
        The timestamp of the event
    data : `mixed`
        Event meta data

:Returns:
    A generator of :class:`~mwsessions.Session` expired after
    processing the user event.
def seek(self, pos):
    if self.debug:
        logging.debug('seek: %r' % pos)
    self.fp.seek(pos)
    self.bufpos = pos
    self.buf = b''
    self.charpos = 0
    self._parse1 = self._parse_main
    self._curtoken = b''
    self._curtokenpos = 0
    self._tokens = []
    return
Seeks the parser to the given position.
def get_shared_people(self):
    people = []
    output = self._get_data()
    self._logger.debug(output)
    shared_entries = output[0] or []
    for info in shared_entries:
        try:
            people.append(Person(info))
        except InvalidData:
            self._logger.debug(
                'Missing location or other info, dropping person with info: %s',
                info)
    return people
Retrieves all people that share their location with this account
def in_batches(iterable, batch_size):
    items = list(iterable)
    size = len(items)
    for i in range(0, size, batch_size):
        yield items[i:min(i + batch_size, size)]
Split the given iterable into batches.

Args:
    iterable (Iterable[Any]):
        The iterable you want to split into batches.
    batch_size (int):
        The size of each batch. The last batch will probably be
        smaller (if the number of elements cannot be equally divided).

Returns:
    Generator[list[Any]]: Will yield all items in batches of
    **batch_size** size.

Example:
    >>> from peltak.core import util
    >>>
    >>> batches = util.in_batches([1, 2, 3, 4, 5, 6, 7], 3)
    >>> batches = list(batches)     # so we can query for length
    >>> len(batches)
    3
    >>> batches
    [[1, 2, 3], [4, 5, 6], [7]]
def options(self, urls=None, **overrides):
    if urls is not None:
        overrides['urls'] = urls
    return self.where(accept='OPTIONS', **overrides)
Sets the acceptable HTTP method to OPTIONS
def has_option(self, section, option):
    if section not in self.sections():
        return False
    else:
        option = self.optionxform(option)
        return option in self[section]
Checks for the existence of a given option in a given section.

Args:
    section (str): name of section
    option (str): name of option

Returns:
    bool: whether the option exists in the given section
def resolve_aliases(self, target, scope=None):
    for declared in target.dependencies:
        if scope is not None and declared.scope != scope:
            continue
        elif type(declared) in (AliasTarget, Target):
            for r, _ in self.resolve_aliases(declared, scope=scope):
                yield r, declared
        else:
            yield declared, None
Resolve aliases in the direct dependencies of the target.

:param target: The direct dependencies of this target are included.
:param scope: When specified, only deps with this scope are included.
    This is more than a filter, because it prunes the subgraphs
    represented by aliases with un-matched scopes.
:returns: An iterator of (resolved_dependency, resolved_from) tuples.
    `resolved_from` is the top level target alias that depends on
    `resolved_dependency`, and `None` if `resolved_dependency` is not
    a dependency of a target alias.
def tomof(self, maxline=MAX_MOF_LINE):
    mof = []
    mof.append(_qualifiers_tomof(self.qualifiers, MOF_INDENT, maxline))
    mof.append(u'class ')
    mof.append(self.classname)
    mof.append(u' ')
    if self.superclass is not None:
        mof.append(u': ')
        mof.append(self.superclass)
        mof.append(u' ')
    mof.append(u'{\n')
    for p in self.properties.itervalues():
        mof.append(u'\n')
        mof.append(p.tomof(False, MOF_INDENT, maxline))
    for m in self.methods.itervalues():
        mof.append(u'\n')
        mof.append(m.tomof(MOF_INDENT, maxline))
    mof.append(u'\n};\n')
    return u''.join(mof)
Return a MOF string with the declaration of this CIM class.

The returned MOF string conforms to the ``classDeclaration`` ABNF rule
defined in :term:`DSP0004`.

The order of properties, methods, parameters, and qualifiers is
preserved.

The :attr:`~pywbem.CIMClass.path` attribute of this object will not be
included in the returned MOF string. Consistent with that, class path
information is not included in the returned MOF string.

Returns:
    :term:`unicode string`: MOF string.
def check_bam(bam, samtype="bam"):
    ut.check_existance(bam)
    samfile = pysam.AlignmentFile(bam, "rb")
    if not samfile.has_index():
        pysam.index(bam)
        samfile = pysam.AlignmentFile(bam, "rb")
        logging.info("Nanoget: No index for bam file could be found, created index.")
    if not samfile.header['HD']['SO'] == 'coordinate':
        logging.error("Nanoget: Bam file {} not sorted by coordinate!.".format(bam))
        sys.exit("Please use a bam file sorted by coordinate.")
    if samtype == "bam":
        logging.info("Nanoget: Bam file {} contains {} mapped and {} unmapped reads.".format(
            bam, samfile.mapped, samfile.unmapped))
        if samfile.mapped == 0:
            logging.error("Nanoget: Bam file {} does not contain aligned reads.".format(bam))
            sys.exit("FATAL: not a single read was mapped in bam file {}".format(bam))
    return samfile
Check if bam file is valid.

Bam file should:
- exist
- have an index (created if necessary)
- be sorted by coordinate
- have at least one mapped read
def get_basic_profile(self, user_id, scope='profile/public'):
    profile = _get(
        token=self.oauth.get_app_token(scope),
        uri='/user/profile/' + urllib.quote(user_id)
    )
    try:
        return json.loads(profile)
    except ValueError:
        raise MxitAPIException('Error parsing profile data')
Retrieve the Mxit user's basic profile.

No user authentication required.
def last_midnight():
    now = datetime.now()
    return datetime(now.year, now.month, now.day)
Return a datetime of last midnight.
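The pair above truncates the current time to the start of the day; a self-contained sketch with the assumed `datetime` import:

```python
from datetime import datetime


def last_midnight():
    # Dropping hour/minute/second/microsecond gives today's midnight
    now = datetime.now()
    return datetime(now.year, now.month, now.day)


m = last_midnight()
print(m.hour, m.minute, m.second)  # 0 0 0
```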
def check_list(self, node_list, pattern_list):
    if len(node_list) != len(pattern_list):
        return False
    else:
        return all(Check(node_elt, self.placeholders).visit(pattern_list[i])
                   for i, node_elt in enumerate(node_list))
Check if a list of nodes matches the corresponding list of patterns.
def build(self):
    if self.is_built():
        return
    with _wait_signal(self.loadFinished, 20):
        self.rebuild()
    self._built = True
Build the full HTML source.
def _determine_selected_stencil(stencil_set, stencil_definition):
    if 'stencil' not in stencil_definition:
        selected_stencil_name = stencil_set.manifest.get('default_stencil')
    else:
        selected_stencil_name = stencil_definition.get('stencil')
    if not selected_stencil_name:
        raise ValueError("No stencil name, within stencil set %s, specified."
                         % stencil_definition['name'])
    return selected_stencil_name
Determine appropriate stencil name for stencil definition.

Given a fastfood.json stencil definition with a stencil set, figure out
what the name of the stencil within the set should be, or use the
default.
def set_execution_context(self, execution_context):
    if self._execution_context:
        raise errors.AlreadyInContextError
    self._execution_context = execution_context
Set the ExecutionContext this async is executing under.
def load_config_key():
    global api_token
    try:
        api_token = os.environ['SOCCER_CLI_API_TOKEN']
    except KeyError:
        home = os.path.expanduser("~")
        config = os.path.join(home, ".soccer-cli.ini")
        if not os.path.exists(config):
            with open(config, "w") as cfile:
                key = get_input_key()
                cfile.write(key)
        else:
            with open(config, "r") as cfile:
                key = cfile.read()
        if key:
            api_token = key
        else:
            os.remove(config)
            click.secho('No API Token detected. '
                        'Please visit {0} and get an API Token, '
                        'which will be used by Soccer CLI '
                        'to get access to the data.'
                        .format(RequestHandler.BASE_URL),
                        fg="red", bold=True)
            sys.exit(1)
    return api_token
Load API key from config file, write if needed
def start(self, phase, stage, **kwargs):
    return ProgressSection(self, self._session, phase, stage,
                           self._logger, **kwargs)
Start a new routine, stage or phase
def plot(self, data, height=1000, render_large_data=False):
    import IPython
    if not isinstance(data, pd.DataFrame):
        raise ValueError('Expect a DataFrame.')
    if (len(data) > 10000 and not render_large_data):
        raise ValueError('Facets dive may not work well with more than 10000 rows. ' +
                         'Reduce data or set "render_large_data" to True.')
    jsonstr = data.to_json(orient='records')
    html_id = 'f' + datalab.utils.commands.Html.next_id()
    HTML_TEMPLATE = 
    html = HTML_TEMPLATE.format(html_id=html_id, jsonstr=jsonstr, height=height)
    return IPython.core.display.HTML(html)
Plots a detail view of data.

Args:
    data: a Pandas dataframe.
    height: the height of the output.
def load_ply(file_obj, resolver=None, fix_texture=True, *args, **kwargs):
    elements, is_ascii, image_name = parse_header(file_obj)
    if is_ascii:
        ply_ascii(elements, file_obj)
    else:
        ply_binary(elements, file_obj)
    image = None
    if image_name is not None:
        try:
            data = resolver.get(image_name)
            image = PIL.Image.open(util.wrap_as_stream(data))
        except BaseException:
            log.warning('unable to load image!', exc_info=True)
    kwargs = elements_to_kwargs(elements,
                                fix_texture=fix_texture,
                                image=image)
    return kwargs
Load a PLY file from an open file object.

Parameters
---------
file_obj : an open file-like object
    Source data, ASCII or binary PLY
resolver : trimesh.visual.resolvers.Resolver
    Object which can resolve assets
fix_texture : bool
    If True, will re-index vertices and faces so vertices with
    different UV coordinates are disconnected.

Returns
---------
mesh_kwargs : dict
    Data which can be passed to Trimesh constructor, eg:
    a = Trimesh(**mesh_kwargs)
def render_reverse(self, inst=None, context=None):
    rendered = self.render(inst=inst, context=context)
    parts = rendered.split('/')
    if parts[-1] in ['index.html', 'index.htm']:
        return ('/'.join(parts[:-1])) + '/'
    return rendered
Renders the reverse URL for this path.
def get_errors(error_string):
    lines = error_string.splitlines()
    error_lines = tuple(line for line in lines if line.find('Error') >= 0)
    if len(error_lines) > 0:
        return '\n'.join(error_lines)
    else:
        return error_string.strip()
Returns all lines in the error_string that contain the string "Error", or the stripped error_string if no line does.
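The filter above keeps lines containing the substring 'Error' anywhere, not just at the start; a self-contained copy demonstrates both branches:

```python
def get_errors(error_string):
    lines = error_string.splitlines()
    # Keep only lines that contain the substring 'Error' anywhere
    error_lines = tuple(line for line in lines if line.find('Error') >= 0)
    if len(error_lines) > 0:
        return '\n'.join(error_lines)
    else:
        # Fall back to the whole (stripped) input when nothing matched
        return error_string.strip()


print(get_errors("compiling...\nTypeError: bad input\nValueError: nope"))
print(get_errors("  all good  "))  # all good
```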
def get_node(self, element):
    ns, tag = self.split_namespace(element.tag)
    return {'tag': tag,
            'value': (element.text or '').strip(),
            'attr': element.attrib,
            'namespace': ns}
Get node info.

Parse element and get the element tag info. Include tag name, value,
attribute, namespace.

:param element: an :class:`~xml.etree.ElementTree.Element` instance
:rtype: dict
def _gather(self, *args, **kwargs):
    propagate = kwargs.pop('propagate', True)
    return (self.to_python(reply, propagate=propagate)
            for reply in self.actor._collect_replies(*args, **kwargs))
Generator over the results
def etag(self):
    value = []
    for option in self.options:
        if option.number == defines.OptionRegistry.ETAG.number:
            value.append(option.value)
    return value
Get the ETag option of the message.

:rtype: list
:return: the ETag values or [] if not specified by the request
def Validate(self):
    ValidateMultiple(self.probe, "Method has invalid probes")
    Validate(self.target, "Method has invalid target")
    Validate(self.hint, "Method has invalid hint")
Check that the Method is well constructed.
def set_bit(bitmask, bit, is_on):
    bitshift = bit - 1
    if is_on:
        return bitmask | (1 << bitshift)
    return bitmask & (0xff & ~(1 << bitshift))
Set the value of a bit in a bitmask on or off. Bits are numbered so that the low bit is 1 and the high bit is 8.
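A self-contained copy of the function above shows the 1-based bit numbering (bit 3 maps to value 4, i.e. `1 << 2`):

```python
def set_bit(bitmask, bit, is_on):
    # Bits are numbered 1 (LSB) through 8 (MSB), so shift by bit - 1
    bitshift = bit - 1
    if is_on:
        return bitmask | (1 << bitshift)
    # Clear the bit, staying within a single byte
    return bitmask & (0xff & ~(1 << bitshift))


print(set_bit(0b0000, 3, True))    # 4
print(set_bit(0b0101, 1, False))   # 4
```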
def mkdir_p(*args, **kwargs):
    try:
        return os.mkdir(*args, **kwargs)
    except OSError as exc:
        if exc.errno != errno.EEXIST:
            raise
Like `mkdir`, but does not raise an exception if the directory already exists.
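A runnable sketch of the pair above with the assumed `os`/`errno` imports; `FileExistsError` is an `OSError` with `errno.EEXIST`, so the second call becomes a no-op:

```python
import errno
import os
import tempfile


def mkdir_p(*args, **kwargs):
    try:
        return os.mkdir(*args, **kwargs)
    except OSError as exc:
        # Swallow only "already exists"; re-raise anything else
        if exc.errno != errno.EEXIST:
            raise


d = os.path.join(tempfile.mkdtemp(), "demo")
mkdir_p(d)
mkdir_p(d)  # second call does not raise
print(os.path.isdir(d))  # True
```

On Python 3.2+ the same effect is available via `os.makedirs(path, exist_ok=True)`.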
def set_bool(_bytearray, byte_index, bool_index, value):
    assert value in [0, 1, True, False]
    current_value = get_bool(_bytearray, byte_index, bool_index)
    index_value = 1 << bool_index
    if current_value == value:
        return
    if value:
        _bytearray[byte_index] += index_value
    else:
        _bytearray[byte_index] -= index_value
Set boolean value on location in bytearray
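The function above depends on a `get_bool` helper that is not shown in this row; a minimal hypothetical version is supplied below so the sketch runs standalone:

```python
def get_bool(_bytearray, byte_index, bool_index):
    # Hypothetical companion accessor assumed by set_bool
    return bool(_bytearray[byte_index] & (1 << bool_index))


def set_bool(_bytearray, byte_index, bool_index, value):
    assert value in [0, 1, True, False]
    current_value = get_bool(_bytearray, byte_index, bool_index)
    index_value = 1 << bool_index
    # Only touch the byte when the bit actually changes
    if current_value == value:
        return
    if value:
        _bytearray[byte_index] += index_value
    else:
        _bytearray[byte_index] -= index_value


buf = bytearray(2)
set_bool(buf, 0, 3, True)
print(buf[0])  # 8
set_bool(buf, 0, 3, False)
print(buf[0])  # 0
```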
def get_components_for_species(alignment, species):
    if len(alignment.components) < len(species):
        return None
    index = dict([(c.src.split('.')[0], c) for c in alignment.components])
    try:
        return [index[s] for s in species]
    except KeyError:
        return None
Return the component for each species in the list `species` or None
def __traces_url(self):
    path = AGENT_TRACES_PATH % self.from_.pid
    return "http://%s:%s/%s" % (self.host, self.port, path)
URL for posting traces to the host agent. Only valid when announced.
def build_type(field):
    if field.type_id == 'string':
        if 'size' in field.options:
            return "builder.putString(%s, %d)" % (field.identifier,
                                                  field.options['size'].value)
        else:
            return "builder.putString(%s)" % field.identifier
    elif field.type_id in JAVA_TYPE_MAP:
        return "builder.put%s(%s)" % (field.type_id.capitalize(),
                                      field.identifier)
    if field.type_id == 'array':
        t = field.options['fill'].value
        if t in JAVA_TYPE_MAP:
            if 'size' in field.options:
                return "builder.putArrayof%s(%s, %d)" % (
                    t.capitalize(), field.identifier,
                    field.options['size'].value)
            else:
                return "builder.putArrayof%s(%s)" % (t.capitalize(),
                                                     field.identifier)
        else:
            if 'size' in field.options:
                return "builder.putArray(%s, %d)" % (
                    field.identifier, field.options['size'].value)
            else:
                return "builder.putArray(%s)" % field.identifier
    else:
        return "%s.build(builder)" % field.identifier
Function to pack a type into the binary payload.
def get_bounding_box(points):
    assert len(points) > 0, "At least one point has to be given."
    min_x, max_x = points[0]['x'], points[0]['x']
    min_y, max_y = points[0]['y'], points[0]['y']
    for point in points:
        min_x, max_x = min(min_x, point['x']), max(max_x, point['x'])
        min_y, max_y = min(min_y, point['y']), max(max_y, point['y'])
    p1 = Point(min_x, min_y)
    p2 = Point(max_x, max_y)
    return BoundingBox(p1, p2)
Get the bounding box of a list of points.

Parameters
----------
points : list of points

Returns
-------
BoundingBox
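The `Point` and `BoundingBox` types used above come from elsewhere in the original project; namedtuples stand in for them here so the sketch runs standalone:

```python
from collections import namedtuple

# Stand-ins for the project's Point and BoundingBox classes (assumption)
Point = namedtuple('Point', ['x', 'y'])
BoundingBox = namedtuple('BoundingBox', ['p1', 'p2'])


def get_bounding_box(points):
    assert len(points) > 0, "At least one point has to be given."
    min_x, max_x = points[0]['x'], points[0]['x']
    min_y, max_y = points[0]['y'], points[0]['y']
    for point in points:
        # Track per-axis extremes over all points
        min_x, max_x = min(min_x, point['x']), max(max_x, point['x'])
        min_y, max_y = min(min_y, point['y']), max(max_y, point['y'])
    return BoundingBox(Point(min_x, min_y), Point(max_x, max_y))


bb = get_bounding_box([{'x': 1, 'y': 5}, {'x': 4, 'y': 2}, {'x': 3, 'y': 7}])
print(bb.p1, bb.p2)  # Point(x=1, y=2) Point(x=4, y=7)
```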
def do_copy(self, subcmd, opts, *args):
    print("'svn %s' opts: %s" % (subcmd, opts))
    print("'svn %s' args: %s" % (subcmd, args))
Duplicate something in working copy or repository, remembering history.

usage: copy SRC DST

SRC and DST can each be either a working copy (WC) path or URL:
    WC  -> WC:  copy and schedule for addition (with history)
    WC  -> URL: immediately commit a copy of WC to URL
    URL -> WC:  check out URL into WC, schedule for addition
    URL -> URL: complete server-side copy; used to branch & tag

${cmd_option_list}
def max_width(self):
    value, unit = float(self._width_str[:-1]), self._width_str[-1]
    ensure(unit in ["c", "%"], ValueError,
           "Width unit must be either 'c' or '%'")
    if unit == "c":
        ensure(value <= self.columns, ValueError,
               "Terminal only has {} columns, cannot draw "
               "bar of size {}.".format(self.columns, value))
        retval = value
    else:
        ensure(0 < value <= 100, ValueError,
               "value=={} does not satisfy 0 < value <= 100".format(value))
        dec = value / 100
        retval = dec * self.columns
    return floor(retval)
Get maximum width of progress bar

:rtype: int
:returns: Maximum column width of progress bar
def retrieve_records(self, timeperiod, include_running, include_processed,
                     include_noop, include_failed, include_disabled):
    resp = dict()
    resp.update(self._search_by_level(COLLECTION_JOB_HOURLY, timeperiod,
                                      include_running, include_processed,
                                      include_noop, include_failed,
                                      include_disabled))
    resp.update(self._search_by_level(COLLECTION_JOB_DAILY, timeperiod,
                                      include_running, include_processed,
                                      include_noop, include_failed,
                                      include_disabled))
    timeperiod = time_helper.cast_to_time_qualifier(QUALIFIER_MONTHLY, timeperiod)
    resp.update(self._search_by_level(COLLECTION_JOB_MONTHLY, timeperiod,
                                      include_running, include_processed,
                                      include_noop, include_failed,
                                      include_disabled))
    timeperiod = time_helper.cast_to_time_qualifier(QUALIFIER_YEARLY, timeperiod)
    resp.update(self._search_by_level(COLLECTION_JOB_YEARLY, timeperiod,
                                      include_running, include_processed,
                                      include_noop, include_failed,
                                      include_disabled))
    return resp
Method looks for suitable job records in all Job collections and returns them as a dict.
def delete_permission(self, username, virtual_host): virtual_host = quote(virtual_host, '') return self.http_client.delete( API_USER_VIRTUAL_HOST_PERMISSIONS % ( virtual_host, username ))
Delete User permissions for the configured virtual host.

:param str username: Username
:param str virtual_host: Virtual host name

:raises ApiError: Raises if the remote server encountered an error.
:raises ApiConnectionError: Raises if there was a connectivity issue.

:rtype: dict
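The `quote(virtual_host, '')` call above matters because the default RabbitMQ virtual host is `/`, which must be percent-encoded before being interpolated into a URL path. A minimal sketch (the path template is an assumption for illustration, not taken from the source):

```python
from urllib.parse import quote

def permission_path(username, virtual_host):
    # quote(..., safe='') also encodes '/', which would otherwise
    # be treated as a path separator by the HTTP server.
    return "/api/permissions/{}/{}".format(
        quote(virtual_host, ''), quote(username, ''))
```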
def get_raw_data(self, times=5): self._validate_measure_count(times) data_list = [] while len(data_list) < times: data = self._read() if data not in [False, -1]: data_list.append(data) return data_list
Take a number of raw readings and return them as a list, discarding
failed reads

:param times: how many successful measures to collect
:type times: int
:return: the list of measured values
:rtype: list
def _cryptography_encrypt(cipher_factory, plaintext, key, iv): encryptor = cipher_factory(key, iv).encryptor() return encryptor.update(plaintext) + encryptor.finalize()
Use a cryptography cipher factory to encrypt data.

:param cipher_factory: Factory callable that builds a cryptography
    Cipher instance based on the key and IV
:type cipher_factory: callable
:param bytes plaintext: Plaintext data to encrypt
:param bytes key: Encryption key
:param bytes iv: Initialization vector
:returns: Encrypted ciphertext
:rtype: bytes
def create_or_clear(self, path, **kwargs): try: yield self.create(path, **kwargs) except NodeExistsException: children = yield self.get_children(path) for name in children: yield self.recursive_delete(path + "/" + name)
Create path, or if it already exists, recursively delete its children.
def _render(self): p_char = '' if not self.done and self.remainder: p_style = self._comp_style if self.partial_char_extra_style: if p_style is str: p_style = self.partial_char_extra_style else: p_style = p_style + self.partial_char_extra_style p_char = p_style(self.partial_chars[self.remainder]) self._num_empty_chars -= 1 cm_chars = self._comp_style(self.icons[_ic] * self._num_complete_chars) em_chars = self._empt_style(self.icons[_ie] * self._num_empty_chars) return f'{self._first}{cm_chars}{p_char}{em_chars}{self._last} {self._lbl}'
Render the bar, handling the partial character between the complete
and empty segments
def lrem(self, name, value, num=1): with self.pipe as pipe: value = self.valueparse.encode(value) return pipe.execute_command('LREM', self.redis_key(name), num, value)
Remove the first `num` occurrences of value.

Can't use the redis-py interface. It's inconsistent between
redis.Redis and redis.StrictRedis in terms of the kwargs.
Better to use the underlying execute_command instead.

:param name: str the name of the redis key
:param value: the value to remove
:param num: int the maximum number of occurrences to remove
:return: Future()
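The LREM semantics the command above relies on can be modeled in pure Python (a sketch: `num > 0` removes head-to-tail, `num < 0` tail-to-head, `num == 0` removes all occurrences, and the count of removed elements is returned, matching the Redis command):

```python
def lrem(values, value, num=1):
    """Pure-Python model of Redis LREM, mutating `values` in place."""
    removed = 0
    limit = abs(num) if num != 0 else len(values)
    # For negative counts, scan from the tail instead of the head.
    seq = values if num >= 0 else list(reversed(values))
    out = []
    for item in seq:
        if item == value and removed < limit:
            removed += 1
        else:
            out.append(item)
    if num < 0:
        out.reverse()
    values[:] = out
    return removed
```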
def filter_sequences(self, seq_type): return DictList(x for x in self.sequences if isinstance(x, seq_type))
Return a DictList of only specified types in the sequences attribute.

Args:
    seq_type (SeqProp): Object type

Returns:
    DictList: A filtered DictList of specified object type only
def head_values(self): values = set() for head in self._heads: values.add(head.value) return values
Return set of the head values
def _interact(self, location, error_info, payload): if (self._interaction_methods is None or len(self._interaction_methods) == 0): raise InteractionError('interaction required but not possible') if error_info.info.interaction_methods is None and \ error_info.info.visit_url is not None: return None, self._legacy_interact(location, error_info) for interactor in self._interaction_methods: found = error_info.info.interaction_methods.get(interactor.kind()) if found is None: continue try: token = interactor.interact(self, location, error_info) except InteractionMethodNotFound: continue if token is None: raise InteractionError('interaction method returned an empty ' 'token') return token, None raise InteractionError('no supported interaction method')
Gathers a macaroon by directing the user to interact with a web page.

The error_info argument holds the interaction-required error response.

@return DischargeToken, bakery.Macaroon
def received(self, data): self.logger.debug('Data received: {}'.format(data)) message_type = None if 'type' in data: message_type = data['type'] if message_type == 'confirm_subscription': self._subscribed() elif message_type == 'reject_subscription': self._rejected() elif self.receive_callback is not None and 'message' in data: self.receive_callback(data['message']) else: self.logger.warning('Message type unknown. ({})'.format(message_type))
API for the connection to forward information to this subscription
instance.

:param data: The JSON data which was received.
:type data: dict
def _as_dict(self): values = self._dynamic_columns or {} for name, col in self._columns.items(): values[name] = col.to_database(getattr(self, name, None)) return values
Returns a map of column names to cleaned values
def asset(path): commit = bitcaster.get_full_version() return mark_safe('{0}?{1}'.format(_static(path), commit))
Join the given path with the STATIC_URL setting and append the
current commit as a cache-busting query string.

Usage::

    {% asset path %}

Example::

    {% asset "myapp/css/base.css" %}
def get_total_size(self, entries): size = 0 for entry in entries: if entry['response']['bodySize'] > 0: size += entry['response']['bodySize'] return size
Returns the total size of a collection of entries.

:param entries: ``list`` of entries to calculate the total size of.
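An equivalent one-pass version of the loop above. The `bodySize > 0` guard is kept because HAR entries conventionally report `-1` when the response size is unknown (stated here as an assumption about the input format):

```python
def get_total_size(entries):
    # Only count known, positive body sizes; HAR uses -1 for "unknown".
    return sum(e['response']['bodySize']
               for e in entries
               if e['response']['bodySize'] > 0)
```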
def _to_desired_dates(self, arr): times = utils.times.extract_months( arr[internal_names.TIME_STR], self.months ) return arr.sel(time=times)
Restrict the xarray DataArray or Dataset to the desired months.
def nrefs(self, tag): n = _C.Vnrefs(self._id, tag) _checkErr('nrefs', n, "bad arguments") return n
Determine the number of tags of a given type in a vgroup.

Args::

    tag    tag type to look for in the vgroup

Returns::

    number of members identified by this tag type

C library equivalent : Vnrefs
def is_hosting_device_reachable(self, hosting_device): ret_val = False hd = hosting_device hd_id = hosting_device['id'] hd_mgmt_ip = hosting_device['management_ip_address'] dead_hd_list = self.get_dead_hosting_devices_info() if hd_id in dead_hd_list: LOG.debug("Hosting device: %(hd_id)s@%(ip)s is already marked as" " Dead. It is assigned as non-reachable", {'hd_id': hd_id, 'ip': hd_mgmt_ip}) return False if not isinstance(hd['created_at'], datetime.datetime): hd['created_at'] = datetime.datetime.strptime(hd['created_at'], '%Y-%m-%d %H:%M:%S') if _is_pingable(hd_mgmt_ip): LOG.debug("Hosting device: %(hd_id)s@%(ip)s is reachable.", {'hd_id': hd_id, 'ip': hd_mgmt_ip}) hd['hd_state'] = cc.HD_ACTIVE ret_val = True else: LOG.debug("Hosting device: %(hd_id)s@%(ip)s is NOT reachable.", {'hd_id': hd_id, 'ip': hd_mgmt_ip}) hd['hd_state'] = cc.HD_NOT_RESPONDING ret_val = False if self.enable_heartbeat is True or ret_val is False: self.backlog_hosting_device(hd) return ret_val
Check that the hosting device which hosts this resource is reachable.

If the resource is not reachable, it is added to the backlog.

* heartbeat revision
  We want to enqueue all hosting devices into the backlog for
  monitoring purposes, adding key/value pairs to hd (aka the
  hosting_device dictionary):
  _is_pingable returns True:  hd['hd_state'] = cc.HD_ACTIVE
  _is_pingable returns False: hd['hd_state'] = cc.HD_NOT_RESPONDING

:param hosting_device: dict of the hosting device
:returns: True if the device is reachable, else False
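The `created_at` normalization step in the method above can be factored into a small helper (hypothetical name; the format string mirrors the method's own `strptime` call):

```python
import datetime

def ensure_datetime(value, fmt='%Y-%m-%d %H:%M:%S'):
    """Normalize a created_at field that may arrive as str or datetime."""
    if isinstance(value, datetime.datetime):
        return value
    return datetime.datetime.strptime(value, fmt)
```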
def random_word(self, length, prefix=0, start=False, end=False, flatten=False): if start: word = ">" length += 1 return self._extend_word(word, length, prefix=prefix, end=end, flatten=flatten)[1:] else: first_letters = list(k for k in self if len(k) == 1 and k != ">") while True: word = random.choice(first_letters) try: word = self._extend_word(word, length, prefix=prefix, end=end, flatten=flatten) return word except GenerationError: first_letters.remove(word[0])
Generate a random word of length from this table.

:param length: the length of the generated word; >= 1;
:param prefix: if greater than 0, the maximum length of the prefix
               to consider to choose the next character;
:param start: if True, the generated word starts like a word of the
              table;
:param end: if True, the generated word ends like a word of the
            table;
:param flatten: whether or not to consider the table as flattened;
:return: a random word of the given length generated from the table.
:raises GenerationError: if no word of the requested length can be
                         generated.
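The retry-on-dead-end strategy above (drop a starting letter whenever extension fails, then try another) can be sketched with a plain first-order transition table standing in for the class's internal structure (hypothetical shape; the real table also supports prefixes and start/end markers):

```python
import random

def random_word(transitions, length, rng=random):
    """Walk a {char: [next chars]} table to build a word of `length`."""
    first_letters = list(transitions)
    while first_letters:
        word = rng.choice(first_letters)
        try:
            while len(word) < length:
                word += rng.choice(transitions[word[-1]])
            return word
        except (KeyError, IndexError):
            # Dead end: same heuristic as the original -- give up on
            # this starting letter and try another.
            first_letters.remove(word[0])
    raise ValueError("no word of length %d can be generated" % length)
```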
def _validate(wdl_file): start_dir = os.getcwd() os.chdir(os.path.dirname(wdl_file)) print("Validating", wdl_file) subprocess.check_call(["wdltool", "validate", wdl_file]) os.chdir(start_dir)
Run validation on the generated WDL output using wdltool.
def _new_from_cdata(cls, cdata: Any) -> "Random": self = object.__new__(cls) self.random_c = cdata return self
Return a new instance encapsulating this cdata.
def find_tags(self, tag_name, **attribute_filter): all_tags = [ self.find_tags_from_xml( i, tag_name, **attribute_filter ) for i in self.xml ] return [tag for tag_list in all_tags for tag in tag_list]
Return a list of all the matched tags in all available xml

:param str tag_name: the tag name to search for
:param attribute_filter: optional attribute name/value pairs the
    matched tags must have
def set_ssl_logging(self, enable=False, func=_ssl_logging_cb): if enable: SSL_CTX_set_info_callback(self._ctx, func) else: SSL_CTX_set_info_callback(self._ctx, 0)
Enable or disable SSL logging

:param True | False enable: Enable or disable SSL logging
:param func: Callback function for logging
def send(self, cumulative_counters=None, gauges=None, counters=None): if not gauges and not cumulative_counters and not counters: return data = { 'cumulative_counter': cumulative_counters, 'gauge': gauges, 'counter': counters, } _logger.debug('Sending datapoints to SignalFx: %s', data) for metric_type, datapoints in data.items(): if not datapoints: continue if not isinstance(datapoints, list): raise TypeError('Datapoints not of type list %s', datapoints) for datapoint in datapoints: self._add_extra_dimensions(datapoint) self._add_to_queue(metric_type, datapoint) self._start_thread()
Send the given metrics to SignalFx.

Args:
    cumulative_counters (list): a list of dictionaries representing
        the cumulative counters to report.
    gauges (list): a list of dictionaries representing the gauges to
        report.
    counters (list): a list of dictionaries representing the counters
        to report.
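The dispatch step of `send()` can be sketched as a pure function that groups and validates the datapoint lists before they are queued (hypothetical helper name; the metric-type keys come from the method above):

```python
def group_datapoints(cumulative_counters=None, gauges=None, counters=None):
    """Group datapoint lists by metric type, skipping empty ones."""
    data = {
        'cumulative_counter': cumulative_counters,
        'gauge': gauges,
        'counter': counters,
    }
    out = {}
    for metric_type, datapoints in data.items():
        if not datapoints:
            continue
        if not isinstance(datapoints, list):
            raise TypeError('Datapoints not of type list: %r' % (datapoints,))
        out[metric_type] = datapoints
    return out
```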
def check_cache(resource_type): def decorator(func): @functools.wraps(func) def wrapper(*args, **kwargs): try: adapter = args[0] key, val = list(kwargs.items())[0] except IndexError: logger.warning("Couldn't generate full index key, skipping cache") else: index_key = (resource_type, key, val) try: cached_record = adapter._swimlane.resources_cache[index_key] except KeyError: logger.debug('Cache miss: `{!r}`'.format(index_key)) else: logger.debug('Cache hit: `{!r}`'.format(cached_record)) return cached_record return func(*args, **kwargs) return wrapper return decorator
Decorator for adapter methods to check cache for resource before
normally sending requests to retrieve data

Only works with single kwargs, almost always used with
@one_of_keyword_only decorator

Args:
    resource_type (type(APIResource)): Subclass of APIResource of
        cache to be checked when called
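A self-contained sketch of the pattern: the adapter is assumed to expose a plain `resources_cache` dict keyed by `(resource_type, field, value)` (attribute path simplified from the original's `_swimlane.resources_cache`, so names here are illustrative):

```python
import functools

def check_cache(resource_type):
    """Check the adapter's cache before calling the wrapped method."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(adapter, **kwargs):
            try:
                # Only a single keyword is supported, as in the original.
                key, val = next(iter(kwargs.items()))
            except StopIteration:
                # No keyword given: cannot build an index key, skip cache.
                return func(adapter, **kwargs)
            index_key = (resource_type, key, val)
            try:
                return adapter.resources_cache[index_key]
            except KeyError:
                return func(adapter, **kwargs)
        return wrapper
    return decorator
```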
def get_account_details(self, account): result = self.get_user(account.username) if result is None: result = {} return result
Get the account details, returning an empty dict if the user does
not exist
def outbox_folder(self): return self.folder_constructor(parent=self, name='Outbox', folder_id=OutlookWellKnowFolderNames .OUTBOX.value)
Shortcut to get Outbox Folder instance :rtype: mailbox.Folder
def removeRow(self, triggered): if triggered: model = self.tableView.model() selection = self.tableView.selectedIndexes() rows = [index.row() for index in selection] model.removeDataFrameRows(set(rows)) self.sender().setChecked(False)
Removes the selected rows from the model. This method is also a slot.

Args:
    triggered (bool): If the corresponding button was activated, the
        selected rows will be removed from the model.
def _get_other_names(self, line): m = re.search(self.compound_regex['other_names'][0], line, re.IGNORECASE) if m: self.other_names.append(m.group(1).strip())
Parse and extract any other names that might be recorded for the
compound

Args:
    line (str): line of the msp file
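The extraction step can be sketched standalone, with a hypothetical `Synonym:` pattern standing in for `self.compound_regex['other_names'][0]` (the real pattern is not shown in the source):

```python
import re

# Hypothetical MSP-style pattern: "Synonym: <name>" lines.
OTHER_NAME_RE = r'^Synonym:\s*(.+)$'

def get_other_names(lines):
    """Collect stripped capture groups from every matching line."""
    names = []
    for line in lines:
        m = re.search(OTHER_NAME_RE, line, re.IGNORECASE)
        if m:
            names.append(m.group(1).strip())
    return names
```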
def exc_handle(url, out, testing): quiet_exceptions = [ConnectionError, ReadTimeout, ConnectTimeout, TooManyRedirects] type, value, _ = sys.exc_info() if type not in quiet_exceptions or testing: exc = traceback.format_exc() exc_string = ("Line '%s' raised:\n" % url) + exc out.warn(exc_string, whitespace_strp=False) if testing: print(exc) else: exc_string = "Line %s '%s: %s'" % (url, type, value) out.warn(exc_string)
Handle an exception raised while scanning.

Exceptions in a known quiet subset (connection/timeout errors) are
logged as a single line; any other exception is logged with its full
stack trace. If testing, the stack trace is always shown.

@param url: url which was being scanned when the exception was thrown.
@param out: Output object, usually self.out.
@param testing: whether we are currently running unit tests.
def async_iter(func, args_iter, **kwargs): iter_count = len(args_iter) iter_group = uuid()[1] options = kwargs.get('q_options', kwargs) options.pop('hook', None) options['broker'] = options.get('broker', get_broker()) options['group'] = iter_group options['iter_count'] = iter_count if options.get('cached', None): options['iter_cached'] = options['cached'] options['cached'] = True broker = options['broker'] broker.cache.set('{}:{}:args'.format(broker.list_key, iter_group), SignedPackage.dumps(args_iter)) for args in args_iter: if not isinstance(args, tuple): args = (args,) async_task(func, *args, **options) return iter_group
Enqueue a function once for every item in an iterable of arguments
and return the group id
def children_with_values(self): childs = [] for attribute in self._get_all_c_children_with_order(): member = getattr(self, attribute) if member is None or member == []: pass elif isinstance(member, list): for instance in member: childs.append(instance) else: childs.append(member) return childs
Returns all children that have values

:return: Possibly empty list of children.
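The flattening logic above can be sketched as a pure function over a list of member values (a sketch; the original iterates attributes of `self` rather than taking a list):

```python
def children_with_values(members):
    """Drop None and empty lists, splice list members, keep scalars."""
    childs = []
    for member in members:
        if member is None or member == []:
            continue
        if isinstance(member, list):
            childs.extend(member)
        else:
            childs.append(member)
    return childs
```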
def model_m2m_changed(sender, instance, action, **kwargs): if sender._meta.app_label == 'rest_framework_reactive': return def notify(): table = sender._meta.db_table if action == 'post_add': notify_observers(table, ORM_NOTIFY_KIND_CREATE) elif action in ('post_remove', 'post_clear'): notify_observers(table, ORM_NOTIFY_KIND_DELETE) transaction.on_commit(notify)
Signal emitted after any M2M relation changes via Django ORM.

:param sender: M2M intermediate model
:param instance: The actual instance that was saved
:param action: M2M action
def url_ok(match_tuple: MatchTuple) -> bool: try: result = requests.get(match_tuple.link, timeout=5) return result.ok except (requests.ConnectionError, requests.Timeout): return False
Check if a URL is reachable.