def senqueue(trg_queue, item_s, *args, **kwargs):
    return vsenqueue(trg_queue, item_s, args, **kwargs)
Enqueue a string or string-like object to the queue with arbitrary arguments. senqueue is to enqueue what sprintf is to printf; senqueue is to vsenqueue what sprintf is to vsprintf.
def upload_all_books(book_id_start, book_id_end, rdf_library=None):
    logger.info("starting a gitberg mass upload: {0} -> {1}".format(
        book_id_start, book_id_end))
    errors = 0  # counted across the whole run so the error limit can trip
    for book_id in range(int(book_id_start), int(book_id_end) + 1):
        cache = {}
        try:
            if int(book_id) in missing_pgid:
                print(u'missing\t{}'.format(book_id))
                continue
            upload_book(book_id, rdf_library=rdf_library, cache=cache)
        except Exception as e:
            print(u'error\t{}'.format(book_id))
            logger.error(u"Error processing: {}\r{}".format(book_id, e))
            errors += 1
            if errors > 10:
                print('error limit reached!')
                break
Uses the fetch, make, and push subcommands to mirror Project Gutenberg to GitHub via the github3 API.
def add_request_type_view(request):
    form = RequestTypeForm(request.POST or None)
    if form.is_valid():
        rtype = form.save()
        messages.add_message(
            request, messages.SUCCESS,
            MESSAGES['REQUEST_TYPE_ADDED'].format(typeName=rtype.name))
        return HttpResponseRedirect(reverse('managers:manage_request_types'))
    return render_to_response('edit_request_type.html', {
        'page_name': "Admin - Add Request Type",
        'request_types': RequestType.objects.all(),
        'form': form,
    }, context_instance=RequestContext(request))
View to add a new request type. Restricted to presidents and superadmins.
def _get_project_types(self):
    project_types = get_available_project_types()
    return [project.PROJECT_TYPE_NAME for project in project_types]
Get all available project types.
def delete_attachment(cls, session, attachment):
    return super(Conversations, cls).delete(
        session,
        attachment,
        endpoint_override='/attachments/%s.json' % attachment.id,
        out_type=Attachment,
    )
Delete an attachment.

Args:
    session (requests.sessions.Session): Authenticated session.
    attachment (helpscout.models.Attachment): The attachment to be deleted.

Returns:
    NoneType: Nothing.
def _save_xls(self, filepath):
    Interface = self.type2interface["xls"]
    workbook = xlwt.Workbook()
    interface = Interface(self.grid.code_array, workbook)
    interface.from_code_array()
    try:
        workbook.save(filepath)
    except IOError as err:  # Python 2-only `except IOError, err:` fixed
        try:
            post_command_event(self.main_window, self.StatusBarMsg, text=err)
        except TypeError:
            pass
Saves file as xls workbook

Parameters
----------
filepath: String
    Target file path for xls file
def _skip(self, cnt):
    while cnt > 0:
        if self._cur_avail == 0:
            if not self._open_next():
                break
        if cnt > self._cur_avail:
            cnt -= self._cur_avail
            self._remain -= self._cur_avail
            self._cur_avail = 0
        else:
            self._fd.seek(cnt, 1)
            self._cur_avail -= cnt
            self._remain -= cnt
            cnt = 0
RAR Seek, skipping through rar files to get to correct position
def update_default(self, new_default, respect_none=False):
    if new_default is not None:
        self.default = new_default
    elif new_default is None and respect_none:
        self.default = None
Update our current default with the new_default. Args: new_default: New default to set. respect_none: Flag to determine if ``None`` is a valid value.
def __get_default_settings(self): LOGGER.debug("> Accessing '{0}' default settings file!".format(UiConstants.settings_file)) self.__default_settings = QSettings( umbra.ui.common.get_resource_path(UiConstants.settings_file), QSettings.IniFormat)
Gets the default settings.
def parse_event_files_spec(logdir):
    files = {}
    if logdir is None:
        return files
    uri_pattern = re.compile('[a-zA-Z][0-9a-zA-Z.]*://.*')
    for specification in logdir.split(','):
        # A named spec looks like 'name:/path'; a leading '/' or a URI
        # scheme means the spec is a bare path with no name.
        if (uri_pattern.match(specification) is None
                and ':' in specification
                and specification[0] != '/'
                and not os.path.splitdrive(specification)[0]):
            run_name, _, path = specification.partition(':')
        else:
            run_name = None
            path = specification
        if uri_pattern.match(path) is None:
            path = os.path.realpath(os.path.expanduser(path))
        files[path] = run_name
    return files
Parses `logdir` into a map from paths to run group names.

The events files flag format is a comma-separated list of path specifications.
A path specification either looks like 'group_name:/path/to/directory' or
'/path/to/directory'; in the latter case, the group is unnamed. Group names
cannot start with a forward slash: /foo:bar/baz will be interpreted as a spec
with no name and path '/foo:bar/baz'. Globs are not supported.

Args:
    logdir: A comma-separated list of run specifications.

Returns:
    A dict mapping directory paths to names like {'/path/to/directory': 'name'}.
    Groups without an explicit name are named after their path. If logdir is
    None, returns an empty dict, which is helpful for testing things that
    don't require any valid runs.
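The spec rules above can be sketched with a minimal standalone reimplementation (illustrative only; `parse_specs` is a made-up name and the real function additionally canonicalizes paths with `realpath`/`expanduser`):

```python
import os
import re

URI_PATTERN = re.compile('[a-zA-Z][0-9a-zA-Z.]*://.*')

def parse_specs(logdir):
    """Map each path in a comma-separated spec list to its run name (or None)."""
    files = {}
    for spec in logdir.split(','):
        # 'name:/path' has a name; a leading '/' or a URI scheme means no name.
        if (URI_PATTERN.match(spec) is None and ':' in spec
                and spec[0] != '/' and not os.path.splitdrive(spec)[0]):
            run_name, _, path = spec.partition(':')
        else:
            run_name, path = None, spec
        files[path] = run_name
    return files

parse_specs('train:/tmp/a,/tmp/b')  # -> {'/tmp/a': 'train', '/tmp/b': None}
```

Note how `/foo:bar/baz` falls into the unnamed branch because of its leading slash, exactly as the docstring promises.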
def submit_unseal_key(self, key=None, reset=False, migrate=False):
    params = {
        'migrate': migrate,
    }
    if not reset and key is not None:
        params['key'] = key
    elif reset:
        params['reset'] = reset
    api_path = '/v1/sys/unseal'
    response = self._adapter.put(
        url=api_path,
        json=params,
    )
    return response.json()
Enter a single master key share to progress the unsealing of the Vault. If the
threshold number of master key shares is reached, Vault will attempt to unseal
the Vault. Otherwise, this API must be called multiple times until that
threshold is met. Either the key or reset parameter must be provided; if both
are provided, reset takes precedence.

Supported methods:
    PUT: /sys/unseal. Produces: 200 application/json

:param key: Specifies a single master key share. This is required unless
    reset is true.
:type key: str | unicode
:param reset: Specifies if previously-provided unseal keys are discarded and
    the unseal process is reset.
:type reset: bool
:param migrate: Available in 1.0 Beta - Used to migrate the seal from shamir
    to autoseal or autoseal to shamir. Must be provided on all unseal key
    calls.
:type migrate: bool
:return: The JSON response of the request.
:rtype: dict
def load_creds_file(self, path, profile=None): config_cls = self.get_creds_reader() return config_cls.load_config(self, path, profile=profile)
Load the credentials config file.
def get_instance(self, payload): return AvailableAddOnExtensionInstance( self._version, payload, available_add_on_sid=self._solution['available_add_on_sid'], )
Build an instance of AvailableAddOnExtensionInstance :param dict payload: Payload response from the API :returns: twilio.rest.preview.marketplace.available_add_on.available_add_on_extension.AvailableAddOnExtensionInstance :rtype: twilio.rest.preview.marketplace.available_add_on.available_add_on_extension.AvailableAddOnExtensionInstance
def get_capture_handler_config_by_name(self, name): handler_confs = [] for address, stream_capturer in self._stream_capturers.iteritems(): handler_data = stream_capturer[0].dump_handler_config_data() for h in handler_data: if h['handler']['name'] == name: handler_confs.append(h) return handler_confs
Return data for handlers of a given name. Args: name: Name of the capture handler(s) to return config data for. Returns: Dictionary dump from the named capture handler as given by the :func:`SocketStreamCapturer.dump_handler_config_data` method.
def process(self): children = len(self.living_children) LOGGER.debug('%i active child%s', children, '' if children == 1 else 'ren')
Check up on child processes and make sure everything is running as it should be.
def count(self, objectType, *args, **coolArgs) : return self._makeLoadQuery(objectType, *args, **coolArgs).count()
Returns the number of elements satisfying the query
def start(name, call=None):
    if call != 'action':
        raise SaltCloudException(
            'The start action must be called with -a or --action.'
        )
    node_id = get_linode_id_from_name(name)
    node = get_linode(kwargs={'linode_id': node_id})
    if node['STATUS'] == 1:
        return {'success': True,
                'action': 'start',
                'state': 'Running',
                'msg': 'Machine already running'}
    response = _query('linode', 'boot', args={'LinodeID': node_id})['DATA']
    if _wait_for_job(node_id, response['JobID']):
        return {'state': 'Running', 'action': 'start', 'success': True}
    else:
        return {'action': 'start', 'success': False}
Start a VM in Linode.

name
    The name of the VM to start.

CLI Example:

.. code-block:: bash

    salt-cloud -a start vm_name
def apply(self, doc):
    if not isinstance(doc, Document):
        raise TypeError(
            "Input Contexts to MentionFigures.apply() must be of type Document"
        )
    for figure in doc.figures:
        if self.types is None or any(
            figure.url.lower().endswith(type) for type in self.types
        ):
            yield TemporaryFigureMention(figure)
Generate MentionFigures from a Document by parsing all of its Figures. :param doc: The ``Document`` to parse. :type doc: ``Document`` :raises TypeError: If the input doc is not of type ``Document``.
def distinct(expr, on=None, *ons):
    on = on or list()
    if not isinstance(on, list):
        on = [on, ]
    on = on + list(ons)
    on = [it(expr) if inspect.isfunction(it) else it for it in on]
    return DistinctCollectionExpr(expr, _unique_fields=on, _all=(len(on) == 0))
Get collection with duplicate rows removed, optionally only considering certain columns

:param expr: collection
:param on: sequence or sequences
:return: distinct collection

:Example:

>>> df.distinct(['name', 'id'])
>>> df['name', 'id'].distinct()
def get_email_subject(email, default): s = get_settings(string="OVP_EMAILS") email_settings = s.get(email, {}) title = email_settings.get("subject", default) return _(title)
Allows for email subject overriding from settings.py
def _parse_line(line):
    line, timestamp = line.rsplit(",", 1)
    line, command = line.rsplit(",", 1)
    path, username = line.rsplit(",", 1)
    return {
        "timestamp": timestamp.strip(),
        "command": command.strip(),
        "username": username.strip(),
        "path": path,
    }
Convert one line from the extended log to dict.

Args:
    line (str): Line which will be converted.

Returns:
    dict: dict with ``timestamp``, ``command``, ``username`` and ``path`` keys.

Note:
    Typical line looks like this::

        /home/ftp/xex/asd bsd.dat, xex, STOR, 1398351777

    Filename may contain ``,`` characters, so I am ``rsplitting`` the line
    from the end to the beginning.
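The right-to-left splitting described in the note can be demonstrated with a standalone copy of the parser (renamed `parse_log_line` here for illustration):

```python
def parse_log_line(line):
    # rsplit from the right: the last three comma-separated fields are fixed,
    # so any ',' inside the file path is left untouched.
    line, timestamp = line.rsplit(",", 1)
    line, command = line.rsplit(",", 1)
    path, username = line.rsplit(",", 1)
    return {
        "timestamp": timestamp.strip(),
        "command": command.strip(),
        "username": username.strip(),
        "path": path,
    }

parse_log_line("/home/ftp/a,b c.dat, xex, STOR, 1398351777")
# the path survives intact: '/home/ftp/a,b c.dat'
```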
def find_actions(namespace, action_prefix): actions = {} for key, value in iteritems(namespace): if key.startswith(action_prefix): actions[key[len(action_prefix):]] = analyse_action(value) return actions
Find all the actions in the namespace.
def run(self):
    try:
        while True:
            input_chunks = [input.get() for input in self.input_queues]
            for input in self.input_queues:
                input.task_done()
            if any(chunk is QUEUE_ABORT for chunk in input_chunks):
                self.abort()
                return
            if any(chunk is QUEUE_FINISHED for chunk in input_chunks):
                break
            self.output(self.process_chunks(input_chunks))
        self.output(self.finalise())
    except:
        self.abort()
        raise
    else:
        for queue in self.output_queues:
            queue.put(QUEUE_FINISHED)
Process the input queues in lock-step, and push any results to the registered output queues.
def owner(path):
    stat = os.stat(path)
    username = pwd.getpwuid(stat.st_uid)[0]
    groupname = grp.getgrgid(stat.st_gid)[0]
    return username, groupname
Returns a tuple containing the username & groupname owning the path. :param str path: the string path to retrieve the ownership :return tuple(str, str): A (username, groupname) tuple containing the name of the user and group owning the path. :raises OSError: if the specified path does not exist
def timed_cache(**timed_cache_kwargs):
    def _wrapper(f):
        maxsize = timed_cache_kwargs.pop('maxsize', 128)
        typed = timed_cache_kwargs.pop('typed', False)
        update_delta = timedelta(**timed_cache_kwargs)
        # Start in the past so the first call forces a refresh.
        d = {'next_update': datetime.utcnow() - update_delta}
        try:
            f = functools.lru_cache(maxsize=maxsize, typed=typed)(f)
        except AttributeError:
            print(
                "LRU caching is not available in Python 2.7, "
                "this will have no effect!"
            )

        @functools.wraps(f)
        def _wrapped(*args, **kwargs):
            now = datetime.utcnow()
            if now >= d['next_update']:
                try:
                    f.cache_clear()
                except AttributeError:
                    pass
                d['next_update'] = now + update_delta
            return f(*args, **kwargs)
        return _wrapped
    return _wrapper
LRU cache decorator with timeout.

Parameters
----------
days: int
seconds: int
microseconds: int
milliseconds: int
minutes: int
hours: int
weeks: int
maxsize: int [default: 128]
typed: bool [default: False]
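The "clear the whole LRU cache every interval" idea can be shown with a minimal self-contained sketch (`timed_cache_sketch` is a simplified stand-in, not the original decorator; it omits the Python 2 fallback and `typed`):

```python
import functools
from datetime import datetime, timedelta

def timed_cache_sketch(**delta_kwargs):
    """Sketch: an lru_cache that is fully cleared every timedelta(**delta_kwargs)."""
    def _wrapper(f):
        maxsize = delta_kwargs.pop('maxsize', 128)
        update_delta = timedelta(**delta_kwargs)
        # Start in the past so the first call forces a refresh.
        state = {'next_update': datetime.utcnow() - update_delta}
        f = functools.lru_cache(maxsize=maxsize)(f)

        @functools.wraps(f)
        def _wrapped(*args, **kwargs):
            now = datetime.utcnow()
            if now >= state['next_update']:
                f.cache_clear()
                state['next_update'] = now + update_delta
            return f(*args, **kwargs)
        return _wrapped
    return _wrapper

calls = []

@timed_cache_sketch(hours=1)
def square(x):
    calls.append(x)
    return x * x

square(3)
square(3)
# within the hour the second call is served from the cache, so calls == [3]
```

Note the coarse semantics: the *entire* cache is dropped at the deadline, rather than expiring entries individually.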
def sorted_migrations(self): if not self._sorted_migrations: self._sorted_migrations = sorted( self.migration_registry.items(), key=lambda migration_tuple: migration_tuple[0]) return self._sorted_migrations
Sort migrations if necessary and store in self._sorted_migrations
def add_inverse_distances(self, indices, periodic=True, indices2=None): from .distances import InverseDistanceFeature atom_pairs = _parse_pairwise_input( indices, indices2, self.logger, fname='add_inverse_distances()') atom_pairs = self._check_indices(atom_pairs) f = InverseDistanceFeature(self.topology, atom_pairs, periodic=periodic) self.__add_feature(f)
Adds the inverse distances between atoms to the feature list. Parameters ---------- indices : can be of two types: ndarray((n, 2), dtype=int): n x 2 array with the pairs of atoms between which the inverse distances shall be computed iterable of integers (either list or ndarray(n, dtype=int)): indices (not pairs of indices) of the atoms between which the inverse distances shall be computed. periodic : optional, boolean, default is True If periodic is True and the trajectory contains unitcell information, distances will be computed under the minimum image convention. indices2: iterable of integers (either list or ndarray(n, dtype=int)), optional: Only has effect if :py:obj:`indices` is an iterable of integers. Instead of the above behaviour, only the inverse distances between the atoms in :py:obj:`indices` and :py:obj:`indices2` will be computed. .. note:: When using the *iterable of integers* input, :py:obj:`indices` and :py:obj:`indices2` will be sorted numerically and made unique before converting them to a pairlist. Please look carefully at the output of :py:func:`describe()` to see what features exactly have been added.
def wr_long(f, x):
    if PYTHON3:
        f.write(bytes([x & 0xff]))
        f.write(bytes([(x >> 8) & 0xff]))
        f.write(bytes([(x >> 16) & 0xff]))
        f.write(bytes([(x >> 24) & 0xff]))
    else:
        f.write(chr(x & 0xff))
        f.write(chr((x >> 8) & 0xff))
        f.write(chr((x >> 16) & 0xff))
        f.write(chr((x >> 24) & 0xff))
Internal; write a 32-bit int to a file in little-endian order.
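The four byte-at-a-time writes amount to a single little-endian 32-bit pack. A sketch of the equivalent using `struct` (named `wr_long_struct` here to keep it distinct from the original):

```python
import io
import struct

def wr_long_struct(f, x):
    # '<I' packs an unsigned 32-bit int little-endian, matching the
    # four shifted byte writes above.
    f.write(struct.pack('<I', x & 0xffffffff))

buf = io.BytesIO()
wr_long_struct(buf, 0x01020304)
buf.getvalue()  # b'\x04\x03\x02\x01' -- least significant byte first
```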
def html_header(): file_path = resources_path('header.html') with codecs.open(file_path, 'r', encoding='utf8') as header_file: content = header_file.read() content = content.replace('PATH', resources_path()) return content
Get a standard html header for wrapping content in. :returns: A header containing a web page preamble in html - up to and including the body open tag. :rtype: str
def my_func(version): class MyClass(object): if version == 2: import docs.support.python2_module as pm else: import docs.support.python3_module as pm def __init__(self, value): self._value = value def _get_value(self): return self._value value = property(_get_value, pm._set_value, None, "Value property")
Enclosing function.
def _getFirmwareVersion(self, device):
    cmd = self._COMMAND.get('get-fw-version')
    self._writeData(cmd, device)
    try:
        result = self._serial.read(size=1)
        result = int(result)
    except serial.SerialException as e:
        self._log and self._log.error("Error: %s", e, exc_info=True)
        raise e
    except ValueError:
        result = None
    return result
Get the firmware version. :Parameters: device : `int` The device is the integer number of the hardware devices ID and is only used with the Pololu Protocol. :Returns: An integer indicating the version number.
def merge_report(self, otherself): self.notices += otherself.notices self.warnings += otherself.warnings self.errors += otherself.errors
Merge another report into this one.
def get_value(self):
    def get_element_value():
        if self.tag_name() == 'input':
            return self.get_attribute('value')
        elif self.tag_name() == 'select':
            selected_options = self.element.all_selected_options
            if len(selected_options) > 1:
                raise ValueError(
                    'Select {} has multiple selected options, only one selected '
                    'option is valid for this method'.format(self)
                )
            return selected_options[0].get_attribute('value')
        else:
            raise ValueError(
                'Can not get the value of elements of type "{}"'.format(self.tag_name()))

    return self.execute_and_handle_webelement_exceptions(
        get_element_value, name_of_action='get value')
Gets the value of a select or input element @rtype: str @return: The value of the element @raise: ValueError if element is not of type input or select, or has multiple selected options
def add(self, user, password): if self.__contains__(user): raise UserExists self.new_users[user] = self._encrypt_password(password) + "\n"
Adds a user with password
def apply_order(self): self._ensure_modification_is_safe() if len(self.query.orders) > 0: self._iterable = Order.sorted(self._iterable, self.query.orders)
Naively apply query orders.
def correlation(left, right, where=None, how='sample'): expr = ops.Correlation(left, right, how, where).to_expr() return expr
Compute correlation of two numeric arrays

Parameters
----------
how : {'sample', 'pop'}, default 'sample'

Returns
-------
corr : double scalar
def terminate(self): logger.info('Sending SIGTERM to task {0}'.format(self.name)) if hasattr(self, 'remote_client') and self.remote_client is not None: self.terminate_sent = True self.remote_client.close() return if not self.process: raise DagobahError('task does not have a running process') self.terminate_sent = True self.process.terminate()
Send SIGTERM to the task's process.
def parentLayer(self): if self._parentLayer is None: from ..agol.services import FeatureService self.__init() url = os.path.dirname(self._url) self._parentLayer = FeatureService(url=url, securityHandler=self._securityHandler, proxy_url=self._proxy_url, proxy_port=self._proxy_port) return self._parentLayer
returns information about the parent
def valid_at(self, valid_date): is_valid = db.Q(validity__end__gt=valid_date, validity__start__lte=valid_date) no_validity = db.Q(validity=None) return self(is_valid | no_validity)
Limit current QuerySet to zone valid at a given date
def draw_timeline(self): self.clear_timeline() self.create_scroll_region() self._timeline.config(width=self.pixel_width) self._canvas_scroll.config(width=self._width, height=self._height) self.draw_separators() self.draw_markers() self.draw_ticks() self.draw_time_marker()
Draw the contents of the whole TimeLine Canvas
def set_password(cls, instance, raw_password): hash_callable = getattr( instance.passwordmanager, "hash", instance.passwordmanager.encrypt ) password = hash_callable(raw_password) if six.PY2: instance.user_password = password.decode("utf8") else: instance.user_password = password cls.regenerate_security_code(instance)
sets new password on a user using password manager :param instance: :param raw_password: :return:
def from_text_list(name, ttl, rdclass, rdtype, text_rdatas):
    if isinstance(name, (str, unicode)):
        name = dns.name.from_text(name, None)
    if isinstance(rdclass, (str, unicode)):
        rdclass = dns.rdataclass.from_text(rdclass)
    if isinstance(rdtype, (str, unicode)):
        rdtype = dns.rdatatype.from_text(rdtype)
    r = RRset(name, rdclass, rdtype)
    r.update_ttl(ttl)
    for t in text_rdatas:
        rd = dns.rdata.from_text(r.rdclass, r.rdtype, t)
        r.add(rd)
    return r
Create an RRset with the specified name, TTL, class, and type, and with the specified list of rdatas in text format. @rtype: dns.rrset.RRset object
def _parse_reset(self, ref): from_ = self._get_from() return commands.ResetCommand(ref, from_)
Parse a reset command.
def cmd_status_codes_counter(self): status_codes = defaultdict(int) for line in self._valid_lines: status_codes[line.status_code] += 1 return status_codes
Generate statistics about HTTP status codes. 404, 500 and so on.
def setReturnParameter(self, name, type, namespace=None, element_type=0): parameter = ParameterInfo(name, type, namespace, element_type) self.retval = parameter return parameter
Set the return parameter description for the call info.
def get_filter_list(p_expression):
    result = []
    for arg in p_expression:
        is_negated = len(arg) > 1 and arg[0] == '-'
        arg = arg[1:] if is_negated else arg
        argfilter = None
        for match, _filter in MATCHES:
            if re.match(match, arg):
                argfilter = _filter(arg)
                break
        if not argfilter:
            argfilter = GrepFilter(arg)
        if is_negated:
            argfilter = NegationFilter(argfilter)
        result.append(argfilter)
    return result
Returns a list of GrepFilters, OrdinalTagFilters or NegationFilters based on the given filter expression. The filter expression is a list of strings.
def clear(self): for n in self.nodes(): if self.nodes[n]["type"] == "variable": self.nodes[n]["value"] = None elif self.nodes[n]["type"] == "function": self.nodes[n]["func_visited"] = False
Clear variable nodes for next computation.
def _ExtractMetadataFromFileEntry(self, mediator, file_entry, data_stream):
    if file_entry.IsRoot() and file_entry.type_indicator not in (
            self._TYPES_WITH_ROOT_METADATA):
        return
    if data_stream and not data_stream.IsDefault():
        return
    display_name = mediator.GetDisplayName()
    logger.debug(
        '[ExtractMetadataFromFileEntry] processing file entry: {0:s}'.format(
            display_name))
    self.processing_status = definitions.STATUS_INDICATOR_EXTRACTING
    if self._processing_profiler:
        self._processing_profiler.StartTiming('extracting')
    self._event_extractor.ParseFileEntryMetadata(mediator, file_entry)
    if self._processing_profiler:
        self._processing_profiler.StopTiming('extracting')
    self.processing_status = definitions.STATUS_INDICATOR_RUNNING
Extracts metadata from a file entry. Args: mediator (ParserMediator): mediates the interactions between parsers and other components, such as storage and abort signals. file_entry (dfvfs.FileEntry): file entry to extract metadata from. data_stream (dfvfs.DataStream): data stream or None if the file entry has no data stream.
def _prepare_persistence_engine(self): if self._persistence_engine: return persistence_engine = self._options.get('persistence_engine') if persistence_engine: self._persistence_engine = path_to_reference(persistence_engine) return from furious.config import get_default_persistence_engine self._persistence_engine = get_default_persistence_engine()
Load the specified persistence engine, or the default if none is set.
def sizeof(self, fields): if type(fields) in [tuple, list]: str = ','.join(fields) else: str = fields n = _C.VSsizeof(self._id, str) _checkErr('sizeof', n, "cannot retrieve field sizes") return n
Retrieve the size in bytes of the given fields. Args:: fields sequence of field names to query Returns:: total size of the fields in bytes C library equivalent : VSsizeof
def height(self):
    if self.__children_:
        return max([child.height for child in self.__children_]) + 1
    else:
        return 0
Number of edges on the longest path to a leaf `Node`.

>>> from anytree import Node
>>> udo = Node("Udo")
>>> marc = Node("Marc", parent=udo)
>>> lian = Node("Lian", parent=marc)
>>> udo.height
2
>>> marc.height
1
>>> lian.height
0
def _get_svc_list(service_status): prefix = '/etc/rc.d/' ret = set() lines = glob.glob('{0}*'.format(prefix)) for line in lines: svc = _get_svc(line, service_status) if svc is not None: ret.add(svc) return sorted(ret)
Returns all service statuses
def _strip_trailing_zeros(value): return list( reversed( list(itertools.dropwhile(lambda x: x == 0, reversed(value))) ) )
Strip trailing zeros from a list of ints.

:param value: the value to be stripped
:type value: list of int
:returns: list with trailing zeros stripped
:rtype: list of int
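A quick illustration of the reverse/dropwhile/reverse trick, as a standalone copy for demonstration:

```python
import itertools

def strip_trailing_zeros(value):
    # Reverse, drop the now-leading zeros, then reverse back:
    # trailing zeros vanish while interior zeros are preserved.
    return list(
        reversed(list(itertools.dropwhile(lambda x: x == 0, reversed(value))))
    )

strip_trailing_zeros([1, 0, 2, 0, 0])  # -> [1, 0, 2]
strip_trailing_zeros([0, 0])           # -> []
```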
def asDictionary(self):
    feat_dict = {}
    if self._geom is not None:
        if 'feature' in self._dict:
            feat_dict['geometry'] = self._dict['feature']['geometry']
        elif 'geometry' in self._dict:
            feat_dict['geometry'] = self._dict['geometry']
    if 'feature' in self._dict:
        feat_dict['attributes'] = self._dict['feature']['attributes']
    else:
        feat_dict['attributes'] = self._dict['attributes']
    return feat_dict  # was `return self._dict`, which discarded feat_dict
returns the feature as a dictionary
def stop_led_flash(self): if self._led_flashing: self._led_flash = (0, 0) self._led_flashing = False self._control() self._control()
Stops flashing the LED.
def get_module_constant(module, symbol, default=-1, paths=None):
    try:
        f, path, (suffix, mode, kind) = find_module(module, paths)
    except ImportError:
        return None
    try:
        if kind == PY_COMPILED:
            f.read(8)  # skip magic & date
            code = marshal.load(f)
        elif kind == PY_FROZEN:
            code = imp.get_frozen_object(module)
        elif kind == PY_SOURCE:
            code = compile(f.read(), path, 'exec')
        else:
            # Not something we can parse; import it and hope for the best
            if module not in sys.modules:
                imp.load_module(module, f, path, (suffix, mode, kind))
            return getattr(sys.modules[module], symbol, None)
    finally:
        if f:
            f.close()
    return extract_constant(code, symbol, default)
Find 'module' by searching 'paths', and extract 'symbol' Return 'None' if 'module' does not exist on 'paths', or it does not define 'symbol'. If the module defines 'symbol' as a constant, return the constant. Otherwise, return 'default'.
def verse_lookup(self, book_name, book_chapter, verse, cache_chapter = True): verses_list = self.get_chapter( book_name, str(book_chapter), cache_chapter = cache_chapter) return verses_list[int(verse) - 1]
Looks up a verse from online.recoveryversion.bible, then returns it.
def Entry(self, name, directory=None, create=1): return self._create_node(name, self.env.fs.Entry, directory, create)
Create `SCons.Node.FS.Entry`
def remove_yaml_frontmatter(source, return_frontmatter=False):
    if source.startswith("---\n"):
        frontmatter_end = source.find("\n---\n", 4)
        if frontmatter_end == -1:
            frontmatter = source
            source = ""
        else:
            frontmatter = source[0:frontmatter_end]
            source = source[frontmatter_end + 5:]
        if return_frontmatter:
            return (source, frontmatter)
        return source
    if return_frontmatter:
        return (source, None)
    return source
If there's one, remove the YAML front-matter from the source
def create_delete_model(record):
    data = cloudwatch.get_historical_base_info(record)
    group_id = cloudwatch.filter_request_parameters('groupId', record)
    arn = get_arn(group_id, cloudwatch.get_region(record), record['account'])
    LOG.debug(f'[-] Deleting Dynamodb Records. Hash Key: {arn}')
    data.update({'configuration': {}})
    items = list(CurrentSecurityGroupModel.query(arn, limit=1))
    if items:
        model_dict = items[0].__dict__['attribute_values'].copy()
        model_dict.update(data)
        model = CurrentSecurityGroupModel(**model_dict)
        model.save()
        return model
    return None
Create a deletion model for a security group record.
def serialize_non_framed_open(algorithm, iv, plaintext_length, signer=None):
    body_start_format = (
        ">"             # big-endian
        "{iv_length}s"  # IV
        "Q"             # plaintext length
    ).format(iv_length=algorithm.iv_len)
    body_start = struct.pack(body_start_format, iv, plaintext_length)
    if signer:
        signer.update(body_start)
    return body_start
Serializes the opening block for a non-framed message body. :param algorithm: Algorithm to use for encryption :type algorithm: aws_encryption_sdk.identifiers.Algorithm :param bytes iv: IV value used to encrypt body :param int plaintext_length: Length of plaintext (and thus ciphertext) in body :param signer: Cryptographic signer object (optional) :type signer: aws_encryption_sdk.internal.crypto.Signer :returns: Serialized body start block :rtype: bytes
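For a concrete sense of the wire layout, here is the pack worked through for a hypothetical 12-byte IV (the IV length is an assumption for illustration; the real value comes from `algorithm.iv_len`):

```python
import struct

# ">12sQ": 12 raw IV bytes followed by a big-endian unsigned 64-bit
# plaintext length, so the block is always iv_len + 8 bytes long.
iv = b'\x00' * 12
body_start = struct.pack('>12sQ', iv, 1024)
len(body_start)  # 12 + 8 = 20 bytes
```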
def process_result(self, new_sia, min_sia): if new_sia.phi == 0: self.done = True return new_sia elif new_sia < min_sia: return new_sia return min_sia
Check if the new SIA has smaller |big_phi| than the standing result.
def get_raster_array(image): if isinstance(image, RGB): rgb = image.rgb data = np.dstack([np.flipud(rgb.dimension_values(d, flat=False)) for d in rgb.vdims]) else: data = image.dimension_values(2, flat=False) if type(image) is Raster: data = data.T else: data = np.flipud(data) return data
Return the array data from any Raster or Image type
def to_outgoing_transaction(self, using, created=None, deleted=None):
    OutgoingTransaction = django_apps.get_model(
        "django_collect_offline", "OutgoingTransaction"
    )
    created = True if created is None else created
    action = INSERT if created else UPDATE
    timestamp_datetime = (
        self.instance.created if created else self.instance.modified
    )
    if not timestamp_datetime:
        timestamp_datetime = get_utcnow()
    if deleted:
        timestamp_datetime = get_utcnow()
        action = DELETE
    outgoing_transaction = None
    if self.is_serialized:
        hostname = socket.gethostname()
        outgoing_transaction = OutgoingTransaction.objects.using(using).create(
            tx_name=self.instance._meta.label_lower,
            tx_pk=getattr(self.instance, self.primary_key_field.name),
            tx=self.encrypted_json(),
            timestamp=timestamp_datetime.strftime("%Y%m%d%H%M%S%f"),
            producer=f"{hostname}-{using}",
            action=action,
            using=using,
        )
    return outgoing_transaction
Serialize the model instance to an AES encrypted json object and saves the json object to the OutgoingTransaction model.
def dataset_metrics(uuid, **kwargs):
    def getdata(x, **kwargs):
        url = gbif_baseurl + 'dataset/' + x + '/metrics'
        return gbif_GET(url, {}, **kwargs)

    if len2(uuid) == 1:
        return getdata(uuid, **kwargs)  # forward kwargs, previously dropped
    else:
        return [getdata(x, **kwargs) for x in uuid]
Get metrics on a GBIF dataset.

:param uuid: [str] One or more dataset UUIDs. See examples.

References: http://www.gbif.org/developer/registry#datasetMetrics

Usage::

    from pygbif import registry
    registry.dataset_metrics(uuid='3f8a1297-3259-4700-91fc-acc4170b27ce')
    registry.dataset_metrics(uuid='66dd0960-2d7d-46ee-a491-87b9adcfe7b1')
    registry.dataset_metrics(uuid=['3f8a1297-3259-4700-91fc-acc4170b27ce',
                                   '66dd0960-2d7d-46ee-a491-87b9adcfe7b1'])
def boto_client(self, service, *args, **kwargs): return self.boto_session.client(service, *args, **self.configure_boto_session_method_kwargs(service, kwargs))
A wrapper to apply configuration options to boto clients
def get_inputs_from_cm(index, cm): return tuple(i for i in range(cm.shape[0]) if cm[i][index])
Return indices of inputs to the node with the given index.
def configure_create(self, ns, definition): @self.add_route(ns.collection_path, Operation.Create, ns) @request(definition.request_schema) @response(definition.response_schema) @wraps(definition.func) def create(**path_data): request_data = load_request_data(definition.request_schema) response_data = definition.func(**merge_data(path_data, request_data)) headers = encode_id_header(response_data) definition.header_func(headers, response_data) response_format = self.negotiate_response_content(definition.response_formats) return dump_response_data( definition.response_schema, response_data, status_code=Operation.Create.value.default_code, headers=headers, response_format=response_format, ) create.__doc__ = "Create a new {}".format(ns.subject_name)
Register a create endpoint. The definition's func should be a create function, which must: - accept kwargs for the request and path data - return a new item :param ns: the namespace :param definition: the endpoint definition
def reactive_power_mode(self): if self._reactive_power_mode is None: if isinstance(self.grid, MVGrid): self._reactive_power_mode = self.grid.network.config[ 'reactive_power_mode']['mv_load'] elif isinstance(self.grid, LVGrid): self._reactive_power_mode = self.grid.network.config[ 'reactive_power_mode']['lv_load'] return self._reactive_power_mode
Power factor mode of Load. This information is necessary to make the load behave in an inductive or capacitive manner. Essentially this changes the sign of the reactive power. The convention used here in a load is that: - when `reactive_power_mode` is 'inductive' then Q is positive - when `reactive_power_mode` is 'capacitive' then Q is negative Parameters ---------- reactive_power_mode : :obj:`str` or None Possible options are 'inductive', 'capacitive' and 'not_applicable'. In the case of 'not_applicable' a reactive power time series must be given. Returns ------- :obj:`str` In the case that this attribute is not set, it is retrieved from the network config object depending on the voltage level the load is in.
def get_ip_prefixes_from_bird(filename):
    prefixes = []
    with open(filename, 'r') as bird_conf:
        lines = bird_conf.read()
    for line in lines.splitlines():
        line = line.strip(', ')
        if valid_ip_prefix(line):
            prefixes.append(line)
    return prefixes
Build a list of IP prefixes found in Bird configuration. Arguments: filename (str): The absolute path of the Bird configuration file. Notes: It can only parse a file with the following format define ACAST_PS_ADVERTISE = [ 10.189.200.155/32, 10.189.200.255/32 ]; Returns: A list of IP prefixes.
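A self-contained sketch of the parser above can be run with only the standard library; here `valid_ip_prefix` is a simplified stand-in using `ipaddress` (the real helper is not shown and may differ):

```python
import ipaddress
import tempfile

def valid_ip_prefix(candidate):
    # Stand-in: True only for strings like "10.0.0.1/32".
    try:
        ipaddress.ip_network(candidate, strict=False)
        return '/' in candidate
    except ValueError:
        return False

def get_ip_prefixes_from_bird(filename):
    prefixes = []
    with open(filename) as bird_conf:
        for line in bird_conf.read().splitlines():
            # Drop surrounding spaces and trailing commas.
            line = line.strip(', ')
            if valid_ip_prefix(line):
                prefixes.append(line)
    return prefixes

conf = ("define ACAST_PS_ADVERTISE =\n"
        "    [\n"
        "        10.189.200.155/32,\n"
        "        10.189.200.255/32\n"
        "    ];\n")
with tempfile.NamedTemporaryFile('w', suffix='.conf', delete=False) as fh:
    fh.write(conf)
    path = fh.name

print(get_ip_prefixes_from_bird(path))  # ['10.189.200.155/32', '10.189.200.255/32']
```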
def extract_surface(self, pass_pointid=True, pass_cellid=True, inplace=False): surf_filter = vtk.vtkDataSetSurfaceFilter() surf_filter.SetInputData(self) if pass_pointid: surf_filter.PassThroughPointIdsOn() if pass_cellid: surf_filter.PassThroughCellIdsOn() surf_filter.Update() mesh = _get_output(surf_filter) if inplace: self.overwrite(mesh) else: return mesh
Extract surface mesh of the grid

    Parameters
    ----------
    pass_pointid : bool, optional
        Adds a point scalar "vtkOriginalPointIds" that identifies which
        original points these surface points correspond to

    pass_cellid : bool, optional
        Adds a cell scalar "vtkOriginalCellIds" that identifies which
        original cells these surface cells correspond to

    inplace : bool, optional
        Return new mesh or overwrite input.

    Returns
    -------
    extsurf : vtki.PolyData
        Surface mesh of the grid
def areaBetween(requestContext, *seriesLists): if len(seriesLists) == 1: [seriesLists] = seriesLists assert len(seriesLists) == 2, ("areaBetween series argument must " "reference *exactly* 2 series") lower, upper = seriesLists if len(lower) == 1: [lower] = lower if len(upper) == 1: [upper] = upper lower.options['stacked'] = True lower.options['invisible'] = True upper.options['stacked'] = True lower.name = upper.name = "areaBetween(%s)" % upper.pathExpression return [lower, upper]
Draws the vertical area in between the two series in seriesList. Useful for visualizing a range such as the minimum and maximum latency for a service. areaBetween expects **exactly one argument** that results in exactly two series (see example below). The order of the lower and higher values series does not matter. The visualization only works when used in conjunction with ``areaMode=stacked``. Most likely use case is to provide a band within which another metric should move. In such case applying an ``alpha()``, as in the second example, gives best visual results. Example:: &target=areaBetween(service.latency.{min,max})&areaMode=stacked &target=alpha(areaBetween(service.latency.{min,max}),0.3)&areaMode=stacked If for instance, you need to build a seriesList, you should use the ``group`` function, like so:: &target=areaBetween(group(minSeries(a.*.min),maxSeries(a.*.max)))
def maxCtxContextualSubtable(maxCtx, st, ruleType, chain=''): if st.Format == 1: for ruleset in getattr(st, '%s%sRuleSet' % (chain, ruleType)): if ruleset is None: continue for rule in getattr(ruleset, '%s%sRule' % (chain, ruleType)): if rule is None: continue maxCtx = maxCtxContextualRule(maxCtx, rule, chain) elif st.Format == 2: for ruleset in getattr(st, '%s%sClassSet' % (chain, ruleType)): if ruleset is None: continue for rule in getattr(ruleset, '%s%sClassRule' % (chain, ruleType)): if rule is None: continue maxCtx = maxCtxContextualRule(maxCtx, rule, chain) elif st.Format == 3: maxCtx = maxCtxContextualRule(maxCtx, st, chain) return maxCtx
Calculate usMaxContext based on a contextual feature subtable.
def _log_submission(submission, student_item): logger.info( u"Created submission uuid={submission_uuid} for " u"(course_id={course_id}, item_id={item_id}, " u"anonymous_student_id={anonymous_student_id})" .format( submission_uuid=submission["uuid"], course_id=student_item["course_id"], item_id=student_item["item_id"], anonymous_student_id=student_item["student_id"] ) )
Log the creation of a submission. Args: submission (dict): The serialized submission model. student_item (dict): The serialized student item model. Returns: None
def move(self, source, dest): source = self._item_path(source) dest = self._item_path(dest) if not (contains_array(self._store, source) or contains_group(self._store, source)): raise ValueError('The source, "%s", does not exist.' % source) if contains_array(self._store, dest) or contains_group(self._store, dest): raise ValueError('The dest, "%s", already exists.' % dest) if "/" in dest: self.require_group("/" + dest.rsplit("/", 1)[0]) self._write_op(self._move_nosync, source, dest)
Move contents from one path to another relative to the Group. Parameters ---------- source : string Name or path to a Zarr object to move. dest : string New name or path of the Zarr object.
def collect(self): instances = {} for device in os.listdir('/dev/'): instances.update(self.match_device(device, '/dev/')) for device_id in os.listdir('/dev/disk/by-id/'): instances.update(self.match_device(device_id, '/dev/disk/by-id/')) metrics = {} for device, p in instances.items(): output = p.communicate()[0].strip() try: metrics[device + ".Temperature"] = float(output) except ValueError: self.log.warn('Disk temperature retrieval failed on ' + device) for metric in metrics.keys(): self.publish(metric, metrics[metric])
Collect and publish disk temperatures
def replace_col(self, line, ndx): for row in range(len(line)): self.set_tile(row, ndx, line[row])
Replace a grid's column at index 'ndx' with 'line'
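Since the grid class itself is not shown, a minimal hypothetical `Grid` with a `set_tile` method illustrates the column replacement:

```python
class Grid:
    """Minimal stand-in for the grid class assumed above (hypothetical)."""
    def __init__(self, rows, cols, fill=0):
        self._tiles = [[fill] * cols for _ in range(rows)]

    def set_tile(self, row, col, value):
        self._tiles[row][col] = value

    def replace_col(self, line, ndx):
        # Overwrite column `ndx` with the values in `line`, top to bottom.
        for row in range(len(line)):
            self.set_tile(row, ndx, line[row])

g = Grid(3, 3)
g.replace_col([1, 2, 3], 1)
print([row[1] for row in g._tiles])  # [1, 2, 3]
```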
def cache(*depends_on): def cache_decorator(fn): @memoize @wraps(fn) def wrapper(*args, **kwargs): if cache.disabled: return fn(*args, **kwargs) else: return _cache.get_value(fn, depends_on, args, kwargs) return wrapper return cache_decorator
Caches function result in temporary file. Cache will be expired when modification date of files from `depends_on` will be changed. Only functions should be wrapped in `cache`, not methods.
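The `_cache` backend and the `cache.disabled` switch come from the surrounding library; a toy stand-in shows how the decorator factory and the disable flag interact. The real backend also invalidates on the modification times of `depends_on`, which this sketch omits:

```python
import functools

class _Cache:
    # Toy stand-in for the library's cache backend (hypothetical).
    def __init__(self):
        self.store = {}

    def get_value(self, fn, depends_on, args, kwargs):
        key = (fn.__name__, depends_on, args)
        if key not in self.store:
            self.store[key] = fn(*args, **kwargs)
        return self.store[key]

_cache = _Cache()

def cache(*depends_on):
    def cache_decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            if cache.disabled:
                return fn(*args, **kwargs)
            return _cache.get_value(fn, depends_on, args, kwargs)
        return wrapper
    return cache_decorator

cache.disabled = False

calls = []

@cache('settings.json')  # hypothetical dependency file
def load(x):
    calls.append(x)
    return x * 2

load(3); load(3)
print(calls)  # [3] -- second call served from the cache

cache.disabled = True
load(3)
print(calls)  # [3, 3] -- disabled, so the function ran again
```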
def list_rules(self, chainname): data = self.__run([self.__iptables_save, '-t', self.__name, '-c']) return netfilter.parser.parse_rules(data, chainname)
Returns a list of Rules in the specified chain.
def set(self, safe_len=False, **kwds): if kwds: d = self.kwds() d.update(kwds) self.reset(**d) if safe_len and self.item: self.leng = _len
Set one or more attributes.
def flatten(iterable): if isiterable(iterable): flat = [] for item in list(iterable): item = flatten(item) if not isiterable(item): item = [item] flat += item return flat else: return iterable
convenience tool to flatten any nested iterable example: flatten([[[],[4]],[[[5,[6,7, []]]]]]) >>> [4, 5, 6, 7] flatten('hello') >>> 'hello' Parameters ---------- iterable Returns ------- flattened object
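With a plausible `isiterable` helper (iterable but not a string — an assumption, since the real helper is not shown), the function reproduces the documented examples:

```python
def isiterable(obj):
    # Assumed behaviour: iterable but not a string.
    return hasattr(obj, '__iter__') and not isinstance(obj, str)

def flatten(iterable):
    if isiterable(iterable):
        flat = []
        for item in list(iterable):
            item = flatten(item)
            if not isiterable(item):
                item = [item]
            flat += item
        return flat
    return iterable

print(flatten([[[], [4]], [[[5, [6, 7, []]]]]]))  # [4, 5, 6, 7]
print(flatten('hello'))  # 'hello'
```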
def process_update(self, update): data = json.loads(update) NetworkTables.getEntry(data["k"]).setValue(data["v"])
Process an incoming update from a remote NetworkTables
def cleanup_cuts(self, hist: Hist, cut_axes: Iterable[HistAxisRange]) -> None: for axis in cut_axes: axis.axis(hist).SetRange(1, axis.axis(hist).GetNbins())
Cleanup applied cuts by resetting the axis to the full range. Inspired by: https://github.com/matplo/rootutils/blob/master/python/2.7/THnSparseWrapper.py Args: hist: Histogram for which the axes should be reset. cut_axes: List of axis cuts, which correspond to axes that should be reset.
def add_paths(G, paths, bidirectional=False): osm_oneway_values = ['yes', 'true', '1', '-1'] for data in paths.values(): if ('oneway' in data and data['oneway'] in osm_oneway_values) and not bidirectional: if data['oneway'] == '-1': data['nodes'] = list(reversed(data['nodes'])) add_path(G, data, one_way=True) elif ('junction' in data and data['junction'] == 'roundabout') and not bidirectional: add_path(G, data, one_way=True) else: add_path(G, data, one_way=False) return G
Add a collection of paths to the graph.

    Parameters
    ----------
    G : networkx multidigraph
    paths : dict
        the paths from OSM
    bidirectional : bool
        if True, create bidirectional edges for one-way streets

    Returns
    -------
    G : networkx multidigraph
def wipe_db(self): logger.warning("Wiping the whole database") self.client.drop_database(self.db_name) logger.debug("Database wiped")
Wipe the whole database
def verboselogs_class_transform(cls): if cls.name == 'RootLogger': for meth in ['notice', 'spam', 'success', 'verbose']: cls.locals[meth] = [scoped_nodes.Function(meth, None)]
Make Pylint aware of our custom logger methods.
async def change_user_password(self, username, password): user_facade = client.UserManagerFacade.from_connection( self.connection()) entity = client.EntityPassword(password, tag.user(username)) return await user_facade.SetPassword([entity])
Change the password for a user in this controller. :param str username: Username :param str password: New password
def getconf(self, path, conf=None, logger=None): result = conf pathconf = None rscpaths = self.rscpaths(path=path) for rscpath in rscpaths: pathconf = self._getconf(rscpath=rscpath, logger=logger, conf=conf) if pathconf is not None: if result is None: result = pathconf else: result.update(pathconf) return result
Parse a configuration path with input conf and returns parameters by param name. :param str path: conf resource path to parse and from get parameters. :param Configuration conf: conf to fill with path values and conf param names. :param Logger logger: logger to use in order to trace information/error. :rtype: Configuration
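The override order — later resource paths win via `update`-style merging, while missing paths are skipped — can be demonstrated with plain dicts; the `parse` callable here stands in for `_getconf`:

```python
def getconf(rscpaths, parse, conf=None):
    # `parse` maps a resource path to a dict of parameters, or None
    # when the path yields nothing; later paths override earlier ones.
    result = conf
    for rscpath in rscpaths:
        pathconf = parse(rscpath)
        if pathconf is not None:
            if result is None:
                result = pathconf
            else:
                result.update(pathconf)
    return result

layers = {
    '/etc/app.conf': {'host': 'localhost', 'port': 80},
    '~/.app.conf': {'port': 8080},
}
merged = getconf(['/etc/app.conf', '~/.app.conf', '/missing'], layers.get)
print(merged)  # {'host': 'localhost', 'port': 8080}
```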
def to_config(self, k, v): if k == "setup": return base.to_commandline(v) return super(DataGenerator, self).to_config(k, v)
Hook method that allows conversion of individual options. :param k: the key of the option :type k: str :param v: the value :type v: object :return: the potentially processed value :rtype: object
async def async_get_camera_image(self, image_name, username=None, password=None): try: data = await self.async_fetch_image_data( image_name, username, password) if data is None: raise XeomaError('Unable to authenticate with Xeoma web ' 'server') return data except asyncio.TimeoutError: raise XeomaError('Connection timeout while fetching camera image.') except aiohttp.ClientError as e: raise XeomaError('Unable to fetch image: {}'.format(e))
Grab a single image from the Xeoma web server Arguments: image_name: the name of the image to fetch (i.e. image01) username: the username to directly access this image password: the password to directly access this image
def get_interfaces_counters(self): query = junos_views.junos_iface_counter_table(self.device) query.get() interface_counters = {} for interface, counters in query.items(): interface_counters[interface] = { k: v if v is not None else -1 for k, v in counters } return interface_counters
Return interfaces counters.
def get_annotations(self): try: obj_list = self.__dict__['annotations'] return [Annotation(i) for i in obj_list] except KeyError: self._lazy_load() obj_list = self.__dict__['annotations'] return [Annotation(i) for i in obj_list]
Fetch the annotations field, lazily loading the object first if the field is not present yet.
def flag_message(current): current.output = {'status': 'Created', 'code': 201} FlaggedMessage.objects.get_or_create(user_id=current.user_id, message_id=current.input['key'])
Flag inappropriate messages

    .. code-block:: python

        # request:
        {
            'view': '_zops_flag_message',
            'message_key': key,
        }

        # response:
        {
            'status': 'Created',
            'code': 201,
        }
def exists(self, path): import hdfs try: self.client.status(path) return True except hdfs.util.HdfsError as e: if str(e).startswith('File does not exist: '): return False else: raise e
Returns true if the path exists and false otherwise.
def master(self): if len(self.mav_master) == 0: return None if self.settings.link > len(self.mav_master): self.settings.link = 1 if not self.mav_master[self.settings.link-1].linkerror: return self.mav_master[self.settings.link-1] for m in self.mav_master: if not m.linkerror: return m return self.mav_master[self.settings.link-1]
return the currently chosen mavlink master object
def derive_fields(self): if self.fields: return list(self.fields) else: fields = [] for field in self.object._meta.fields: fields.append(field.name) exclude = self.derive_exclude() fields = [field for field in fields if field not in exclude] return fields
Derives our fields. We first default to using our 'fields' variable if available, otherwise we figure it out from our object.
def get_tokendefs(cls): tokens = {} inheritable = {} for c in cls.__mro__: toks = c.__dict__.get('tokens', {}) for state, items in iteritems(toks): curitems = tokens.get(state) if curitems is None: tokens[state] = items try: inherit_ndx = items.index(inherit) except ValueError: continue inheritable[state] = inherit_ndx continue inherit_ndx = inheritable.pop(state, None) if inherit_ndx is None: continue curitems[inherit_ndx:inherit_ndx+1] = items try: new_inh_ndx = items.index(inherit) except ValueError: pass else: inheritable[state] = inherit_ndx + new_inh_ndx return tokens
Merge tokens from superclasses in MRO order, returning a single tokendef dictionary. Any state that is not defined by a subclass will be inherited automatically. States that *are* defined by subclasses will, by default, override that state in the superclass. If a subclass wishes to inherit definitions from a superclass, it can use the special value "inherit", which will cause the superclass' state definition to be included at that point in the state.
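The `inherit` sentinel mechanics can be reproduced in a small self-contained sketch; lists are copied here to avoid mutating class attributes, a simplification of the real implementation:

```python
inherit = object()  # sentinel standing in for the library's `inherit` marker

def get_tokendefs(cls):
    tokens = {}
    inheritable = {}
    for c in cls.__mro__:
        for state, items in c.__dict__.get('tokens', {}).items():
            curitems = tokens.get(state)
            if curitems is None:
                # First (most-derived) definition wins; remember where
                # `inherit` sits so a superclass can be spliced in later.
                tokens[state] = curitems = list(items)
                try:
                    inheritable[state] = curitems.index(inherit)
                except ValueError:
                    pass
                continue
            inherit_ndx = inheritable.pop(state, None)
            if inherit_ndx is None:
                continue  # subclass fully overrides this state
            # Splice the superclass definition in place of `inherit`.
            curitems[inherit_ndx:inherit_ndx + 1] = items
            try:
                inheritable[state] = inherit_ndx + list(items).index(inherit)
            except ValueError:
                pass
    return tokens

class Base:
    tokens = {'root': ['base-rule']}

class Child(Base):
    tokens = {'root': ['child-rule', inherit]}

print(get_tokendefs(Child)['root'])  # ['child-rule', 'base-rule']
```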
def on_intent(intent_request, session): print("on_intent requestId=" + intent_request['requestId'] + ", sessionId=" + session['sessionId']) intent = intent_request['intent'] intent_name = intent_request['intent']['name'] if intent_name == "MyColorIsIntent": return set_color_in_session(intent, session) elif intent_name == "WhatsMyColorIntent": return get_color_from_session(intent, session) elif intent_name == "AMAZON.HelpIntent": return get_welcome_response() elif intent_name == "AMAZON.CancelIntent" or intent_name == "AMAZON.StopIntent": return handle_session_end_request() else: raise ValueError("Invalid intent")
Called when the user specifies an intent for this skill
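The if/elif chain above is often rewritten as a handler table; this hypothetical variant shows the same routing with the invalid-intent error preserved:

```python
def on_intent(intent_request, session, handlers):
    # Table-driven dispatch: look the intent name up in a dict of handlers.
    intent = intent_request['intent']
    intent_name = intent['name']
    if intent_name not in handlers:
        raise ValueError("Invalid intent: " + intent_name)
    return handlers[intent_name](intent, session)

# Placeholder handlers; the real ones build Alexa response dicts.
handlers = {
    'MyColorIsIntent': lambda intent, session: 'set-color',
    'AMAZON.HelpIntent': lambda intent, session: 'help',
    'AMAZON.StopIntent': lambda intent, session: 'goodbye',
}

print(on_intent({'intent': {'name': 'AMAZON.HelpIntent'}}, {}, handlers))  # help
```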
def datetime_from_iso(iso_string): try: assert datetime_regex.datetime.datetime.match(iso_string).groups()[0] except (ValueError, AssertionError, IndexError, AttributeError): raise TypeError("String is not in ISO format") try: return datetime.datetime.strptime(iso_string, "%Y-%m-%dT%H:%M:%S.%f") except ValueError: return datetime.datetime.strptime(iso_string, "%Y-%m-%dT%H:%M:%S")
Create a DateTime object from a ISO string .. code :: python reusables.datetime_from_iso('2017-03-10T12:56:55.031863') datetime.datetime(2017, 3, 10, 12, 56, 55, 31863) :param iso_string: string of an ISO datetime :return: DateTime object
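The two-format fallback can be expressed as a loop over format strings; this sketch drops the library's regex pre-check and raises `TypeError` only after both formats fail:

```python
import datetime

def datetime_from_iso(iso_string):
    # Try the microsecond format first, then fall back to whole seconds.
    for fmt in ("%Y-%m-%dT%H:%M:%S.%f", "%Y-%m-%dT%H:%M:%S"):
        try:
            return datetime.datetime.strptime(iso_string, fmt)
        except ValueError:
            continue
    raise TypeError("String is not in ISO format")

print(datetime_from_iso('2017-03-10T12:56:55.031863'))
print(datetime_from_iso('2017-03-10T12:56:55'))
```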
def format_epilog_section(self, section, text): try: func = self._epilog_formatters[self.epilog_formatter] except KeyError: if not callable(self.epilog_formatter): raise func = self.epilog_formatter return func(section, text)
Format a section of the epilog using the formatter registered under the current `epilog_formatter` name; if no formatter is registered under that name but `epilog_formatter` is itself callable, it is used directly.