def include_items(items, any_all=any, ignore_case=False, normalize_values=False, **kwargs):
    if kwargs:
        match = functools.partial(
            _match_item, any_all=any_all, ignore_case=ignore_case,
            normalize_values=normalize_values, **kwargs
        )
        return filter(match, items)
    else:
        return iter(items)
Include items by matching metadata.

Note: Metadata values are lowercased when ``normalize_values`` is ``True``, so ``ignore_case`` is automatically set to ``True``.

Parameters:
    items (list): A list of item dicts or filepaths.
    any_all (callable): A callable to determine if any or all filters must match to include items. Expected values: :obj:`any` (default) or :obj:`all`.
    ignore_case (bool): Perform case-insensitive matching. Default: ``False``
    normalize_values (bool): Normalize metadata values to remove common differences between sources. Default: ``False``
    kwargs (list): Lists of values to match the given metadata field.

Yields:
    dict: The next item to be included.

Example:
    >>> from google_music_utils import include_items
    >>> list(include_items(song_list, any_all=all, ignore_case=True, normalize_values=True, artist=['Beck'], album=['Odelay']))
def coupl_model1(self):
    self.Coupl[0, 0] = np.abs(self.Coupl[0, 0])
    self.Coupl[0, 1] = -np.abs(self.Coupl[0, 1])
    self.Coupl[1, 1] = np.abs(self.Coupl[1, 1])
In model 1, we want to enforce the following signs on the couplings. Model 2 has the same couplings but arbitrary signs.
def a_capture_show_configuration_failed(ctx):
    result = ctx.device.send("show configuration failed")
    ctx.device.last_command_result = result
    index = result.find("SEMANTIC ERRORS")
    ctx.device.chain.connection.emit_message(result, log_level=logging.ERROR)
    if index > 0:
        raise ConfigurationSemanticErrors(result)
    else:
        raise ConfigurationErrors(result)
Capture the show configuration failed result.
def get_logical_drives(self):
    logical_drives = []
    for controller in self.controllers:
        for array in controller.raid_arrays:
            for logical_drive in array.logical_drives:
                logical_drives.append(logical_drive)
    return logical_drives
Get all the RAID logical drives in the Server. This method returns all the RAID logical drives on the server by examining all the controllers. :returns: a list of LogicalDrive objects.
def close(self, code=1000, message=''):
    try:
        message = self._encode_bytes(message)
        self.send_frame(
            struct.pack('!H%ds' % len(message), code, message),
            opcode=self.OPCODE_CLOSE)
    except WebSocketError:
        logger.debug("Failed to write closing frame -> closing socket")
    finally:
        logger.debug("Closed WebSocket")
        self._closed = True
        self.stream = None
Close the websocket and connection, sending the specified code and message. The underlying socket object is _not_ closed, that is the responsibility of the initiator.
def resolve_source_mapping(
    source_directory: str, output_directory: str, sources: Sources
) -> Mapping[str, str]:
    result = {
        os.path.join(source_directory, source_file): os.path.join(
            output_directory, output_file
        )
        for source_file, output_file in sources.files.items()
    }
    filesystem = get_filesystem()
    for glob in sources.globs:
        matches = filesystem.list(source_directory, glob.patterns, exclude=glob.exclude)
        result.update(
            {
                os.path.join(source_directory, match): os.path.join(
                    output_directory, match
                )
                for match in matches
            }
        )
    return result
Returns a mapping from absolute source path to absolute output path as specified by the sources object. Files are not guaranteed to exist.
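The mapping logic above can be sketched with the standard library alone; ``glob`` here stands in for the project's ``get_filesystem().list`` abstraction, and the function name is illustrative:

```python
import os
from glob import glob as glob_match


def sketch_source_mapping(source_directory, output_directory, files, patterns):
    """Build {absolute source path: absolute output path} from explicit
    file pairs plus glob patterns, mirroring the mapping logic above."""
    # Explicitly listed files map source name -> output name directly.
    result = {
        os.path.join(source_directory, src): os.path.join(output_directory, out)
        for src, out in files.items()
    }
    # Glob matches keep their path relative to the source directory.
    for pattern in patterns:
        for match in glob_match(os.path.join(source_directory, pattern)):
            rel = os.path.relpath(match, source_directory)
            result[match] = os.path.join(output_directory, rel)
    return result
```

Files are still not guaranteed to exist for the explicitly listed pairs; only the glob branch touches the filesystem.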
async def fetchone(self):
    row = await self._cursor.fetchone()
    if not row:
        raise GeneratorExit
    self._rows.append(row)
Fetch single row from the cursor.
def show(self, commit):
    author = commit.author
    author_width = 25
    committer = ''
    commit_date = date_to_str(commit.committer_time, commit.committer_tz, self.verbose)
    if self.verbose:
        author += " %s" % commit.author_mail
        author_width = 50
        committer = " %s %s" % (commit.committer, commit.committer_mail)
    return " {} {:>5d} {:{}s} {}{}".format(
        commit.uuid[:8], commit.line_count, author, author_width,
        commit_date, committer)
Display one commit line.

The output will be:
    <uuid> <#lines> <author> <short-commit-date>

If verbose flag set, the output will be:
    <uuid> <#lines> <author+email> <long-date> <committer+email>
def ising_energy(sample, h, J, offset=0.0):
    for v in h:
        offset += h[v] * sample[v]
    for v0, v1 in J:
        offset += J[(v0, v1)] * sample[v0] * sample[v1]
    return offset
Calculate the energy for the specified sample of an Ising model.

Energy of a sample for a binary quadratic model is defined as a sum, offset by the constant energy offset associated with the model, of the sample multiplied by the linear bias of the variable and all its interactions. For an Ising model,

.. math::

    E(\mathbf{s}) = \sum_v h_v s_v + \sum_{u,v} J_{u,v} s_u s_v + c

where :math:`s_v` is the sample, :math:`h_v` is the linear bias, :math:`J_{u,v}` the quadratic bias (interactions), and :math:`c` the energy offset.

Args:
    sample (dict[variable, spin]): Sample for a binary quadratic model as a dict of form {v: spin, ...}, where keys are variables of the model and values are spins (either -1 or 1).
    h (dict[variable, bias]): Linear biases as a dict of the form {v: bias, ...}, where keys are variables of the model and values are biases.
    J (dict[(variable, variable), bias]): Quadratic biases as a dict of the form {(u, v): bias, ...}, where keys are 2-tuples of variables of the model and values are quadratic biases associated with the pair of variables (the interaction).
    offset (numeric, optional, default=0): Constant offset to be applied to the energy. Default 0.

Returns:
    float: The induced energy.

Notes:
    No input checking is performed.

Examples:
    This example calculates the energy of a sample representing two down spins for an Ising model of two variables that have positive biases of value 1 and are positively coupled with an interaction of value 1.

    >>> import dimod
    >>> sample = {1: -1, 2: -1}
    >>> h = {1: 1, 2: 1}
    >>> J = {(1, 2): 1}
    >>> dimod.ising_energy(sample, h, J, 0.5)
    -0.5

References
----------
`Ising model on Wikipedia <https://en.wikipedia.org/wiki/Ising_model>`_
def handle_subscribe(self, request):
    ret = self._tree.handle_subscribe(request, request.path[1:])
    self._subscription_keys[request.generate_key()] = request
    return ret
Handle a Subscribe request from outside. Called with lock taken
async def async_set_summary(program):
    import aiohttp
    async with aiohttp.ClientSession() as session:
        resp = await session.get(program.get('url'))
        text = await resp.text()
    summary = extract_program_summary(text)
    program['summary'] = summary
    return program
Set a program's summary
def list(self, **filters):
    LOG.debug(u'Querying %s by filters=%s', self.model_class.__name__, filters)
    query = self.__queryset__()
    perm = build_permission_name(self.model_class, 'view')
    LOG.debug(u"Checking if user %s has_perm %s" % (self.user, perm))
    query_with_permission = filter(lambda o: self.user.has_perm(perm, obj=o), query)
    ids = map(lambda o: o.pk, query_with_permission)
    queryset = self.__queryset__().filter(pk__in=ids)
    related = getattr(self, 'select_related', None)
    if related:
        queryset = queryset.select_related(*related)
    return queryset
Returns a queryset filtering object by user permission. If you want, you can specify filter arguments. See https://docs.djangoproject.com/en/dev/ref/models/querysets/#filter for more details
def calc_qiga1_v1(self):
    der = self.parameters.derived.fastaccess
    old = self.sequences.states.fastaccess_old
    new = self.sequences.states.fastaccess_new
    if der.ki1 <= 0.:
        new.qiga1 = new.qigz1
    elif der.ki1 > 1e200:
        new.qiga1 = old.qiga1 + new.qigz1 - old.qigz1
    else:
        d_temp = (1. - modelutils.exp(-1./der.ki1))
        new.qiga1 = (old.qiga1 +
                     (old.qigz1 - old.qiga1) * d_temp +
                     (new.qigz1 - old.qigz1) * (1. - der.ki1 * d_temp))
Perform the runoff concentration calculation for the first interflow component.

The working equation is the analytical solution of the linear storage equation under the assumption of constant change in inflow during the simulation time step.

Required derived parameter:
    |KI1|

Required state sequence:
    |QIGZ1|

Calculated state sequence:
    |QIGA1|

Basic equation:
    :math:`QIGA1_{neu} = QIGA1_{alt} + (QIGZ1_{alt}-QIGA1_{alt}) \\cdot (1-exp(-KI1^{-1})) + (QIGZ1_{neu}-QIGZ1_{alt}) \\cdot (1-KI1\\cdot(1-exp(-KI1^{-1})))`

Examples:
    A normal test case:

    >>> from hydpy.models.lland import *
    >>> parameterstep()
    >>> derived.ki1(0.1)
    >>> states.qigz1.old = 2.0
    >>> states.qigz1.new = 4.0
    >>> states.qiga1.old = 3.0
    >>> model.calc_qiga1_v1()
    >>> states.qiga1
    qiga1(3.800054)

    First extreme test case (zero division is circumvented):

    >>> derived.ki1(0.0)
    >>> model.calc_qiga1_v1()
    >>> states.qiga1
    qiga1(4.0)

    Second extreme test case (numerical overflow is circumvented):

    >>> derived.ki1(1e500)
    >>> model.calc_qiga1_v1()
    >>> states.qiga1
    qiga1(5.0)
def Append(self, other):
    orig_len = len(self)
    self.Set(orig_len + len(other))
    ipoint = orig_len
    if hasattr(self, 'SetPointError'):
        for point in other:
            self.SetPoint(ipoint, point.x.value, point.y.value)
            self.SetPointError(
                ipoint,
                point.x.error_low, point.x.error_hi,
                point.y.error_low, point.y.error_hi)
            ipoint += 1
    else:
        for point in other:
            self.SetPoint(ipoint, point.x.value, point.y.value)
            ipoint += 1
Append points from another graph
def make_inverse_connectivity(conns, n_nod, ret_offsets=True):
    from itertools import chain
    iconn = [[] for ii in range(n_nod)]  # range: xrange is Python 2 only
    n_els = [0] * n_nod
    for ig, conn in enumerate(conns):
        for iel, row in enumerate(conn):
            for node in row:
                iconn[node].extend([ig, iel])
                n_els[node] += 1
    n_els = nm.array(n_els, dtype=nm.int32)
    iconn = nm.fromiter(chain(*iconn), nm.int32)
    if ret_offsets:
        offsets = nm.cumsum(nm.r_[0, n_els], dtype=nm.int32)
        return offsets, iconn
    else:
        return n_els, iconn
For each mesh node referenced in the connectivity conns, make a list of elements it belongs to.
def _create_api_method(cls, name, api_method):
    def _api_method(self, **kwargs):
        # Both branches of the original made the same call; kwargs may be empty.
        command = api_method['name']
        return self._make_request(command, kwargs)
    _api_method.__doc__ = api_method['description']
    _api_method.__doc__ += _add_params_docstring(api_method['params'])
    _api_method.__name__ = str(name)
    setattr(cls, _api_method.__name__, _api_method)
Create dynamic class methods based on the Cloudmonkey precached_verbs
def actuator_on(self, service_location_id, actuator_id, duration=None):
    return self._actuator_on_off(
        on_off='on',
        service_location_id=service_location_id,
        actuator_id=actuator_id,
        duration=duration)
Turn actuator on.

Parameters
----------
service_location_id : int
actuator_id : int
duration : int, optional
    300, 900, 1800 or 3600, specifying the time in seconds the actuator should be turned on. Any other value results in turning on for an undetermined period of time.

Returns
-------
requests.Response
def is_running(self):
    try:
        result = requests.get(self.proxy_url)
    except RequestException:
        return False
    if 'ZAP-Header' in result.headers.get('Access-Control-Allow-Headers', []):
        return True
    raise ZAPError('Another process is listening on {0}'.format(self.proxy_url))
Check if ZAP is running.
def get_form_kwargs(self):
    kwargs = super().get_form_kwargs()
    if self.request.method == 'POST':
        data = copy(self.request.POST)
        i = 0
        while data.get('%s-%s-id' % (settings.FLAT_MENU_ITEMS_RELATED_NAME, i)):
            data['%s-%s-id' % (settings.FLAT_MENU_ITEMS_RELATED_NAME, i)] = None
            i += 1
        kwargs.update({
            'data': data,
            'instance': self.model()
        })
    return kwargs
When the form is posted, don't pass an instance to the form. It should create a new one out of the posted data. We also need to nullify any IDs posted for inline menu items, so that new instances of those are created too.
def git_lines(*args, git=maybeloggit, **kwargs):
    'Generator of stdout lines from given git command'
    err = io.StringIO()
    try:
        for line in git('--no-pager', *args, _err=err,
                        _decode_errors='replace', _iter=True,
                        _bg_exc=False, **kwargs):
            yield line[:-1]
    except sh.ErrorReturnCode as e:
        status('exit_code=%s' % e.exit_code)
    errlines = err.getvalue().splitlines()
    if len(errlines) < 3:
        for line in errlines:
            status(line)
    else:
        vd().push(TextSheet('git ' + ' '.join(args), errlines))
Generator of stdout lines from given git command
def enable_logging(main):
    @functools.wraps(main)
    def wrapper(*args, **kwargs):
        import argparse
        parser = argparse.ArgumentParser()
        parser.add_argument(
            '--loglevel', default="ERROR", type=str,
            help="Set the loglevel. Possible values: CRITICAL, ERROR (default), "
                 "WARNING, INFO, DEBUG")
        options = parser.parse_args()
        numeric_level = getattr(logging, options.loglevel.upper(), None)
        if not isinstance(numeric_level, int):
            raise ValueError('Invalid log level: %s' % options.loglevel)
        logging.basicConfig(level=numeric_level)
        retcode = main(*args, **kwargs)
        return retcode
    return wrapper
This decorator is used to decorate main functions. It adds the initialization of the logger and an argument parser that allows one to select the loglevel. Useful if we are writing simple main functions that call libraries where the logging module is used.

Args:
    main: main function.
def is_jail(name):
    jails = list_jails()
    for jail in jails:
        if jail.split()[0] == name:
            return True
    return False
Return True if jail exists, False if not.

CLI Example:

.. code-block:: bash

    salt '*' poudriere.is_jail <jail name>
def show_rsa(minion_id, dns_name):
    cache = salt.cache.Cache(__opts__, syspaths.CACHE_DIR)
    bank = 'digicert/domains'
    data = cache.fetch(bank, dns_name)
    return data['private_key']
Show a private RSA key.

CLI Example:

.. code-block:: bash

    salt-run digicert.show_rsa myminion domain.example.com
def _tc_below(self):
    tr_below = self._tr_below
    if tr_below is None:
        return None
    return tr_below.tc_at_grid_col(self._grid_col)
The tc element immediately below this one in its grid column.
def delete(self, ids):
    url = build_uri_with_ids('api/v4/as/%s/', ids)
    return super(ApiV4As, self).delete(url)
Method to delete ASNs by their IDs.

:param ids: Identifiers of ASNs
:return: None
def as_dict(self):
    d = {}
    _add_value(d, 'obstory_ids', self.obstory_ids)
    _add_string(d, 'field_name', self.field_name)
    _add_value(d, 'lat_min', self.lat_min)
    _add_value(d, 'lat_max', self.lat_max)
    _add_value(d, 'long_min', self.long_min)
    _add_value(d, 'long_max', self.long_max)
    _add_value(d, 'time_min', self.time_min)
    _add_value(d, 'time_max', self.time_max)
    _add_string(d, 'item_id', self.item_id)
    _add_value(d, 'skip', self.skip)
    _add_value(d, 'limit', self.limit)
    _add_boolean(d, 'exclude_imported', self.exclude_imported)
    _add_string(d, 'exclude_export_to', self.exclude_export_to)
    return d
Convert this ObservatoryMetadataSearch to a dict, ready for serialization to JSON for use in the API. :return: Dict representation of this ObservatoryMetadataSearch instance
def setup_project_view(self):
    for i in [1, 2, 3]:
        self.hideColumn(i)
    self.setHeaderHidden(True)
    self.filter_directories()
Setup view for projects
def token_network_leave(
        self,
        registry_address: PaymentNetworkID,
        token_address: TokenAddress,
) -> List[NettingChannelState]:
    if not is_binary_address(registry_address):
        raise InvalidAddress('registry_address must be a valid address in binary')
    if not is_binary_address(token_address):
        raise InvalidAddress('token_address must be a valid address in binary')
    if token_address not in self.get_tokens_list(registry_address):
        raise UnknownTokenAddress('token_address unknown')
    token_network_identifier = views.get_token_network_identifier_by_token_address(
        chain_state=views.state_from_raiden(self.raiden),
        payment_network_id=registry_address,
        token_address=token_address,
    )
    connection_manager = self.raiden.connection_manager_for_token_network(
        token_network_identifier,
    )
    return connection_manager.leave(registry_address)
Close all channels and wait for settlement.
def list_users(self):
    lines = output_lines(self.exec_rabbitmqctl_list('users'))
    return [_parse_rabbitmq_user(line) for line in lines]
Run the ``list_users`` command and return a list of tuples describing the users. :return: A list of 2-element tuples. The first element is the username, the second a list of tags for the user.
def fetch(self, buf=None, traceno=None):
    if buf is None:
        buf = self.buf
    if traceno is None:
        traceno = self.traceno
    try:
        if self.kind == TraceField:
            if traceno is None:
                return buf
            return self.filehandle.getth(traceno, buf)
        else:
            return self.filehandle.getbin()
    except IOError:
        if not self.readonly:
            return bytearray(len(self.buf))
        else:
            raise
Fetch the header from disk.

This object will read the header when it is constructed, which means it might be out-of-date if the file is updated through some other handle. This method is largely meant for internal use - if you need to reload disk contents, use ``reload``.

Fetch does not update any internal state (unless ``buf`` is ``None`` on a trace header, and the read succeeds), but returns the fetched header contents.

This method can be used to reposition the trace header, which is useful for constructing generators.

If this is called on a writable, new file, and this header has not yet been written to, it will successfully return an empty buffer that, when written to, will be reflected on disk.

Parameters
----------
buf : bytearray
    buffer to read into instead of ``self.buf``
traceno : int

Returns
-------
buf : bytearray

Notes
-----
.. versionadded:: 1.6

This method is not intended as user-oriented functionality, but might be useful in high-performance code.
def __load(self, path):
    try:
        path = os.path.abspath(path)
        with open(path, 'rb') as df:
            self.__data, self.__classes, self.__labels, \
                self.__dtype, self.__description, \
                self.__num_features, self.__feature_names = pickle.load(df)
        self.__validate(self.__data, self.__classes, self.__labels)
    except IOError as ioe:
        # Original passed `format(ioe)` as a second argument instead of
        # calling str.format; fixed here.
        raise IOError('Unable to read the dataset from file: {}'.format(ioe))
Method to load the serialized dataset from disk.
def _adjust_image_paths(self, content: str, md_file_path: Path) -> str:
    def _sub(image):
        image_caption = image.group('caption')
        image_path = md_file_path.parent / Path(image.group('path'))
        self.logger.debug(
            f'Updating image reference; user specified path: {image.group("path")}, '
            f'absolute path: {image_path}, caption: {image_caption}'
        )
        return f'![{image_caption}]({image_path.absolute().as_posix()})'

    return self._image_pattern.sub(_sub, content)
Locate images referenced in a Markdown string and replace their paths with the absolute ones. :param content: Markdown content :param md_file_path: Path to the Markdown file containing the content :returns: Markdown content with absolute image paths
def _process_token(cls, token):
    assert type(token) is _TokenType or callable(token), \
        'token type must be simple type or callable, not %r' % (token,)
    return token
Preprocess the token component of a token definition.
def geodetic2aer(lat: float, lon: float, h: float,
                 lat0: float, lon0: float, h0: float,
                 ell=None, deg: bool = True) -> Tuple[float, float, float]:
    e, n, u = geodetic2enu(lat, lon, h, lat0, lon0, h0, ell, deg=deg)
    return enu2aer(e, n, u, deg=deg)
Gives azimuth, elevation and slant range from an Observer to a Point with geodetic coordinates.

Parameters
----------
lat : float or numpy.ndarray of float
    target geodetic latitude
lon : float or numpy.ndarray of float
    target geodetic longitude
h : float or numpy.ndarray of float
    target altitude above geodetic ellipsoid (meters)
lat0 : float
    Observer geodetic latitude
lon0 : float
    Observer geodetic longitude
h0 : float
    observer altitude above geodetic ellipsoid (meters)
ell : Ellipsoid, optional
    reference ellipsoid
deg : bool, optional
    degrees input/output (False: radians in/out)

Returns
-------
az : float or numpy.ndarray of float
    azimuth
el : float or numpy.ndarray of float
    elevation
srange : float or numpy.ndarray of float
    slant range [meters]
def decompress(self, value):
    if value:
        try:
            pk = self.queryset.get(recurrence_rule=value).pk
        except self.queryset.model.DoesNotExist:
            pk = None
        return [pk, None, value]
    return [None, None, None]
Return the primary key value for the ``Select`` widget if the given recurrence rule exists in the queryset.
def Parse(self, rdf_data):
    if self._filter:
        return list(self._filter.Parse(rdf_data, self.expression))
    return rdf_data
Process rdf data through the filter.

Filters sift data according to filter rules. Data that passes the filter rule is kept; other data is dropped. If no filter method is provided, the data is returned as a list. Otherwise, items that meet the filter conditions are returned in a list.

Args:
    rdf_data: Host data that has already been processed by a Parser into RDF.

Returns:
    A list containing data items that matched the filter rules.
def send_comment_email(email, package_owner, package_name, commenter):
    link = '{CATALOG_URL}/package/{owner}/{pkg}/comments'.format(
        CATALOG_URL=CATALOG_URL, owner=package_owner, pkg=package_name)
    subject = "New comment on {package_owner}/{package_name}".format(
        package_owner=package_owner, package_name=package_name)
    html = render_template('comment_email.html', commenter=commenter, link=link)
    body = render_template('comment_email.txt', commenter=commenter, link=link)
    send_email(recipients=[email], sender=DEFAULT_SENDER, subject=subject,
               html=html, body=body)
Send email to owner of package regarding new comment
def register_linter(linter):
    if hasattr(linter, "EXTS") and hasattr(linter, "run"):
        LintFactory.PLUGINS.append(linter)
    else:
        raise LinterException("Linter does not have 'run' method or EXTS variable!")
Register a Linter class for file verification. :param linter: :return:
def unregister_file(path, pkg=None, conn=None):
    close = False
    if conn is None:
        close = True
        conn = init()
    conn.execute('DELETE FROM files WHERE path=?', (path, ))
    if close:
        conn.close()
Unregister a file from the package database
def _populate(self, json):
    from .volume import Volume
    DerivedBase._populate(self, json)
    devices = {}
    for device_index, device in json['devices'].items():
        if not device:
            devices[device_index] = None
            continue
        dev = None
        if 'disk_id' in device and device['disk_id']:
            dev = Disk.make_instance(device['disk_id'], self._client,
                                     parent_id=self.linode_id)
        else:
            dev = Volume.make_instance(device['volume_id'], self._client,
                                       parent_id=self.linode_id)
        devices[device_index] = dev
    self._set('devices', MappedObject(**devices))
Map devices more nicely while populating.
def items(self):
    for dictreader in self._csv_dictreader_list:
        for entry in dictreader:
            item = self.factory()
            item.key = self.key()
            item.attributes = entry
            try:
                item.validate()
            except Exception as e:
                logger.debug("skipping entry due to item validation exception: %s", str(e))
                continue
            logger.debug("found validated item in CSV source, key: %s",
                         str(item.attributes[self.key()]))
            yield item
Returns a generator of available ICachableItem in the ICachableSource
def _read_byte(self):
    to_return = ""
    if self._mode == PROP_MODE_SERIAL:
        to_return = self._serial.read(1)
    elif self._mode == PROP_MODE_TCP:
        to_return = self._socket.recv(1)
    elif self._mode == PROP_MODE_FILE:
        to_return = struct.pack("B", int(self._file.readline()))
    _LOGGER.debug("READ: " + str(ord(to_return)))
    self._logdata.append(ord(to_return))
    if len(self._logdata) > self._logdatalen:
        self._logdata = self._logdata[len(self._logdata) - self._logdatalen:]
    self._debug(PROP_LOGLEVEL_TRACE, "READ: " + str(ord(to_return)))
    return to_return
Read a byte from input.
def get_parent_image_koji_data(workflow):
    koji_parent = workflow.prebuild_results.get(PLUGIN_KOJI_PARENT_KEY) or {}
    image_metadata = {}
    parents = {}
    for img, build in (koji_parent.get(PARENT_IMAGES_KOJI_BUILDS) or {}).items():
        if not build:
            parents[str(img)] = None
        else:
            parents[str(img)] = {key: val for key, val in build.items()
                                 if key in ('id', 'nvr')}
    image_metadata[PARENT_IMAGE_BUILDS_KEY] = parents
    image_metadata[PARENT_IMAGES_KEY] = workflow.builder.parents_ordered
    if workflow.builder.base_from_scratch:
        return image_metadata
    base_info = koji_parent.get(BASE_IMAGE_KOJI_BUILD) or {}
    parent_id = base_info.get('id')
    if parent_id is not None:
        try:
            parent_id = int(parent_id)
        except ValueError:
            logger.exception("invalid koji parent id %r", parent_id)
        else:
            image_metadata[BASE_IMAGE_BUILD_ID_KEY] = parent_id
    return image_metadata
Transform koji_parent plugin results into metadata dict.
def _send_method(self, method_sig, args=bytes(), content=None):
    if isinstance(args, AMQPWriter):
        args = args.getvalue()
    self.connection.method_writer.write_method(self.channel_id,
                                               method_sig, args, content)
Send a method for our channel.
def insert_before(old, new):
    parent = old.getparent()
    parent.insert(parent.index(old), new)
A simple way to insert a new element node before the old element node among its siblings.
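The function above relies on lxml's ``getparent()``, which the stdlib's ``xml.etree`` does not provide; an equivalent stdlib sketch therefore takes the parent explicitly (illustrative analogue, not the function above):

```python
import xml.etree.ElementTree as ET


def insert_before_stdlib(parent, old, new):
    """Insert `new` immediately before `old` among `parent`'s children.
    The parent is passed explicitly because ElementTree elements do not
    know their parent."""
    parent.insert(list(parent).index(old), new)


root = ET.fromstring("<r><a/><c/></r>")
insert_before_stdlib(root, root[1], ET.Element("b"))
print([child.tag for child in root])  # ['a', 'b', 'c']
```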
def cleanJsbConfig(self, jsbconfig):
    config = json.loads(jsbconfig)
    self._cleanJsbAllClassesSection(config)
    self._cleanJsbAppAllSection(config)
    return json.dumps(config, indent=4)
Clean up the JSB config.
def getheader(self, which, use_hash=None, polish=True):
    header = getheader(
        which,
        use_hash=use_hash,
        target=self.target,
        no_tco=self.no_tco,
        strict=self.strict,
    )
    if polish:
        header = self.polish(header)
    return header
Get a formatted header.
def scan_file(path):
    path = os.path.abspath(path)
    assert os.path.exists(path), "Unreachable file '%s'." % path
    try:
        cd = pyclamd.ClamdUnixSocket()
        cd.ping()
    except pyclamd.ConnectionError:
        # Fall back to a network socket; keep this instance instead of
        # unconditionally re-creating a unix socket afterwards.
        cd = pyclamd.ClamdNetworkSocket()
        try:
            cd.ping()
        except pyclamd.ConnectionError:
            raise ValueError(
                "Couldn't connect to clamd server using unix/network socket."
            )
    assert cd.ping(), "clamd server is not reachable!"
    result = cd.scan_file(path)
    return result if result else {}
Scan `path` for viruses using ``clamd`` antivirus daemon.

Args:
    path (str): Relative or absolute path of file/directory you need to scan.

Returns:
    dict: ``{filename: ("FOUND", "virus type")}`` or blank dict.

Raises:
    ValueError: When the server is not running.
    AssertionError: When the internal file doesn't exist.
def _incomplete_files(filenames):
    tmp_files = [get_incomplete_path(f) for f in filenames]
    try:
        yield tmp_files
        for tmp, output in zip(tmp_files, filenames):
            tf.io.gfile.rename(tmp, output)
    finally:
        for tmp in tmp_files:
            if tf.io.gfile.exists(tmp):
                tf.io.gfile.remove(tmp)
Create temporary files for filenames and rename on exit.
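The same write-to-temporary-then-rename pattern can be sketched with the standard library; ``os.replace`` plays the role of ``tf.io.gfile.rename`` here, and all names are illustrative:

```python
import contextlib
import os
import tempfile


@contextlib.contextmanager
def incomplete_file(final_path):
    """Yield a '<final_path>.incomplete' path; atomically rename it to
    final_path on success and always clean up the temporary file."""
    tmp_path = final_path + ".incomplete"
    try:
        yield tmp_path
        os.replace(tmp_path, final_path)  # atomic on POSIX
    finally:
        if os.path.exists(tmp_path):
            os.remove(tmp_path)


# Usage: readers never observe a half-written final file.
target = os.path.join(tempfile.mkdtemp(), "data.txt")
with incomplete_file(target) as tmp:
    with open(tmp, "w") as f:
        f.write("done")
```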
def make_filesystem(blk_device, fstype='ext4', timeout=10):
    count = 0
    e_noent = errno.ENOENT  # requires `import errno`; os.errno was removed in Python 3.6
    while not os.path.exists(blk_device):
        if count >= timeout:
            log('Gave up waiting on block device %s' % blk_device, level=ERROR)
            raise IOError(e_noent, os.strerror(e_noent), blk_device)
        log('Waiting for block device %s to appear' % blk_device, level=DEBUG)
        count += 1
        time.sleep(1)
    log('Formatting block device %s as filesystem %s.' % (blk_device, fstype), level=INFO)
    check_call(['mkfs', '-t', fstype, blk_device])
Make a new filesystem on the specified block device.
def health(self):
    return json.dumps(dict(
        uptime='{:.3f}s'.format(time.time() - self._start_time)))
Health check method, returns the up-time of the device.
def json(self):
    if hasattr(self, '_json'):
        return self._json
    try:
        self._json = json.loads(self.text or self.content)
    except ValueError:
        self._json = None
    return self._json
Returns the json-encoded content of the response, if any.
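On Python 3.8+, the ``hasattr``-based memoisation above can be expressed with ``functools.cached_property``; a minimal sketch (the ``Response`` class here is a stand-in, not the library's):

```python
import functools
import json


class Response:
    def __init__(self, text):
        self.text = text

    @functools.cached_property
    def json(self):
        """Parse once and cache in the instance dict; return None on
        invalid JSON, mirroring the property above."""
        try:
            return json.loads(self.text)
        except ValueError:
            return None


r = Response('{"ok": true}')
print(r.json)  # {'ok': True}
```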
def delete_processing_block(processing_block_id):
    scheduling_block_id = processing_block_id.split(':')[0]
    config = get_scheduling_block(scheduling_block_id)
    processing_blocks = config.get('processing_blocks')
    processing_block = list(filter(
        lambda x: x.get('id') == processing_block_id,
        processing_blocks))[0]
    config['processing_blocks'].remove(processing_block)
    DB.set('scheduling_block/{}'.format(config['id']), json.dumps(config))
    DB.rpush('processing_block_events',
             json.dumps(dict(type="deleted", id=processing_block_id)))
Delete Processing Block with the specified ID
def _dump(obj, abspath, serializer_type,
          dumper_func=None, compress=True,
          overwrite=False, verbose=False, **kwargs):
    _check_serializer_type(serializer_type)
    if not inspect.isfunction(dumper_func):
        raise TypeError("dumper_func has to be a function take object as input "
                        "and return binary!")
    prt_console("\nDump to '%s' ..." % abspath, verbose)
    if os.path.exists(abspath):
        if not overwrite:
            prt_console(
                "    Stop! File exists and overwrite is not allowed",
                verbose,
            )
            return
    st = time.perf_counter()  # time.clock() was removed in Python 3.8
    b_or_str = dumper_func(obj, **kwargs)
    if serializer_type == "str":  # `is` on string literals is unreliable
        b = b_or_str.encode("utf-8")
    else:
        b = b_or_str
    if compress:
        b = zlib.compress(b)
    with atomic_write(abspath, overwrite=overwrite, mode="wb") as f:
        f.write(b)
    elapsed = time.perf_counter() - st
    prt_console("    Complete! Elapse %.6f sec." % elapsed, verbose)
    if serializer_type == "str":
        return b_or_str
    else:
        return b
Dump object to file.

:param abspath: The file path you want dump to.
:type abspath: str
:param serializer_type: 'binary' or 'str'.
:type serializer_type: str
:param dumper_func: A dumper function that takes an object as input, return binary or string.
:type dumper_func: callable function
:param compress: default ``True``. If True, then compress binary.
:type compress: bool
:param overwrite: default ``False``. If ``True``, when you dump to existing file, it silently overwrites it. If ``False``, an alert message is shown. Default setting ``False`` is to prevent overwriting a file by mistake.
:type overwrite: boolean
:param verbose: default ``False``, help-message-display trigger.
:type verbose: boolean
def _AddStopTimeObjectUnordered(self, stoptime, schedule):
    stop_time_class = self.GetGtfsFactory().StopTime
    insert_query = "INSERT INTO stop_times (%s) VALUES (%s);" % (
        ','.join(stop_time_class._SQL_FIELD_NAMES),
        ','.join(['?'] * len(stop_time_class._SQL_FIELD_NAMES)))
    cursor = schedule._connection.cursor()
    cursor.execute(insert_query, stoptime.GetSqlValuesTuple(self.trip_id))
Add StopTime object to this trip. The trip isn't checked for duplicate sequence numbers so it must be validated later.
def from_object(self, obj: Union[str, Any]) -> None:
    if isinstance(obj, str):
        obj = importer.import_object_str(obj)
    for key in dir(obj):
        if key.isupper():
            value = getattr(obj, key)
            self._setattr(key, value)
    logger.info("Config is loaded from object: %r", obj)
Load values from an object.
def assert_powernode(self, name: str) -> None:
    if name not in self.inclusions:
        raise ValueError("Powernode '{}' does not exist.".format(name))
    if self.is_node(name):
        raise ValueError("Given name '{}' is a node.".format(name))
Do nothing if given name refers to a powernode in given graph. Raise a ValueError in any other case.
def directed_tripartition_indices(N):
    result = []
    if N <= 0:
        return result
    base = [0, 1, 2]
    for key in product(base, repeat=N):
        part = [[], [], []]
        for i, location in enumerate(key):
            part[location].append(i)
        result.append(tuple(tuple(p) for p in part))
    return result
Return indices for directed tripartitions of a sequence.

Args:
    N (int): The length of the sequence.

Returns:
    list[tuple]: A list of tuples containing the indices for each partition.

Example:
    >>> N = 1
    >>> directed_tripartition_indices(N)
    [((0,), (), ()), ((), (0,), ()), ((), (), (0,))]
def befriend(self, other_agent, force=False):
    if force or self['openness'] > random():
        self.env.add_edge(self, other_agent)
        self.info('Made some friend {}'.format(other_agent))
        return True
    return False
Try to become friends with another agent. The chances of success depend on both agents' openness.
def tostring(self, encoding):
    if self.kind == 'string':
        if encoding is not None:
            return self.converted
        return '"{converted}"'.format(converted=self.converted)
    elif self.kind == 'float':
        return repr(self.converted)
    return self.converted
Quote the string if no encoding is given; otherwise return the converted value as-is.
def kitchen_create(backend, parent, kitchen):
    click.secho('%s - Creating kitchen %s from parent kitchen %s'
                % (get_datetime(), kitchen, parent), fg='green')
    master = 'master'
    if kitchen.lower() != master.lower():
        check_and_print(DKCloudCommandRunner.create_kitchen(backend.dki, parent, kitchen))
    else:
        raise click.ClickException('Cannot create a kitchen called %s' % master)
Create a new kitchen
def get(self): return self.render( 'index.html', databench_version=DATABENCH_VERSION, meta_infos=self.meta_infos(), **self.info )
Render the List-of-Analyses overview page.
def has_pending(self): if self.pending: return True for pending in self.node2pending.values(): if pending: return True return False
Return True if there are pending test items This indicates that collection has finished and nodes are still processing test items, so this can be thought of as "the scheduler is active".
def _generic_hook(self, name, **kwargs): entries = [entry for entry in self._plugin_manager.call_hook(name, **kwargs) if entry is not None] return "\n".join(entries)
A generic hook that links the TemplateHelper with the PluginManager.
def get_attributes(aspect, id): attributes = {} for entry in aspect: if entry['po'] == id: attributes[entry['n']] = entry['v'] return attributes
Return the attributes pointing to a given ID in a given aspect.
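A small worked example, with a hypothetical aspect list shaped like the entries the function expects ("po" pointing at an owner id, "n"/"v" as attribute name and value):

```python
# Hypothetical aspect data; the keys "po", "n" and "v" follow the
# convention assumed by get_attributes, not a confirmed schema.
aspect = [
    {"po": 1, "n": "color", "v": "red"},
    {"po": 1, "n": "size", "v": "large"},
    {"po": 2, "n": "color", "v": "blue"},
]

def get_attributes(aspect, id):
    attributes = {}
    for entry in aspect:
        if entry["po"] == id:
            attributes[entry["n"]] = entry["v"]
    return attributes

assert get_attributes(aspect, 1) == {"color": "red", "size": "large"}
```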
def hash_bytes(key, seed=0x0, x64arch=True): hash_128 = hash128(key, seed, x64arch) bytestring = bytearray() for _ in range(16): bytestring.append(hash_128 & 0xFF) hash_128 >>= 8 return bytes(bytestring)
Implements 128bit murmur3 hash. Returns a byte string.
def delete(self, *args, **kwargs): lookup = self.with_respect_to() lookup["_order__gte"] = self._order concrete_model = base_concrete_model(Orderable, self) after = concrete_model.objects.filter(**lookup) after.update(_order=models.F("_order") - 1) super(Orderable, self).delete(*args, **kwargs)
Delete the object and update the ordering values of its siblings.
def remove_configurable(self, configurable_class, name): configurable_class_name = configurable_class.__name__.lower() logger.info("Removing %s: '%s'", configurable_class_name, name) registry = self.registry_for(configurable_class) if name not in registry: logger.warn( "Tried to remove unknown active %s: '%s'", configurable_class_name, name ) return hook = self.hook_for(configurable_class, action="remove") if not hook: registry.pop(name) return def done(f): try: f.result() registry.pop(name) except Exception: logger.exception("Error removing configurable '%s'", name) self.work_pool.submit(hook, name).add_done_callback(done)
Callback fired when a configurable instance is removed. Looks up the existing configurable in the proper "registry" and removes it. If a method named "on_<configurable classname>_remove" is defined it is called via the work pool and passed the configurable's name. If the removed configurable is not present, a warning is given and no further action is taken.
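The remove flow above hinges on only popping the registry entry once the hook's future resolves successfully. A stand-alone sketch of that `add_done_callback` pattern (registry contents and hook names here are illustrative, not from the library):

```python
from concurrent.futures import ThreadPoolExecutor

registry = {"analytics": object()}  # stand-in for the configurable registry

def remove_hook(name):
    # stand-in for an "on_<classname>_remove" hook; may raise on failure
    pass

def done(future, name="analytics"):
    # re-raises any exception from the hook; only unregister on success
    future.result()
    registry.pop(name)

pool = ThreadPoolExecutor(max_workers=1)
pool.submit(remove_hook, "analytics").add_done_callback(done)
pool.shutdown(wait=True)
assert "analytics" not in registry
```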
def get_contact_from_id(self, contact_id): contact = self.wapi_functions.getContact(contact_id) if contact is None: raise ContactNotFoundError("Contact {0} not found".format(contact_id)) return Contact(contact, self)
Fetches a contact given its ID :param contact_id: Contact ID :type contact_id: str :return: Contact or Error :rtype: Contact
def units(self): self._units, value = self.get_attr_string(self._units, 'units') return value
Returns the units of the measured value for the current mode. May return an empty string.
def scan(context, root_dir): root_dir = root_dir or context.obj['root'] config_files = Path(root_dir).glob('*/analysis/*_config.yaml') for config_file in config_files: LOG.debug("found analysis config: %s", config_file) with config_file.open() as stream: context.invoke(log_cmd, config=stream, quiet=True) context.obj['store'].track_update()
Scan a directory for analyses.
def WSGIMimeRender(*args, **kwargs): def wrapper(*args2, **kwargs2): def wrapped(f): return _WSGIMimeRender(*args, **kwargs)(*args2, **kwargs2)(wsgi_wrap(f)) return wrapped return wrapper
A wrapper for _WSGIMimeRender that wraps the inner callable with wsgi_wrap first.
def focus_right(pymux): " Move focus to the right. " _move_focus(pymux, lambda wp: wp.xpos + wp.width + 1, lambda wp: wp.ypos)
Move focus to the right.
def _get_value(self): if self._aux_variable: return self._aux_variable['law'](self._aux_variable['variable'].value) if self._transformation is None: return self._internal_value else: return self._transformation.backward(self._internal_value)
Return current parameter value
def _build_cmd(self, args: Union[list, tuple]) -> list: cmd = [self.path] cmd.extend(args) return cmd
Build command.
def number(items): n = len(items) if n == 0: return items places = int(math.log10(n)) + 1 format = '[{0[0]:' + str(places) + 'd}] {0[1]}' return map( lambda x: format.format(x), enumerate(items) )
Maps numbering onto given values
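A minimal usage sketch of the numbering helper (reproduced self-contained, with the redundant string/int round-trip from the original removed):

```python
import math

def number(items):
    # Prefix each item with its zero-based index, right-padded to the
    # number of digits needed for the largest index.
    n = len(items)
    if n == 0:
        return items
    places = int(math.log10(n)) + 1
    fmt = '[{0[0]:' + str(places) + 'd}] {0[1]}'
    return map(lambda x: fmt.format(x), enumerate(items))

assert list(number(['alpha', 'beta'])) == ['[0] alpha', '[1] beta']
```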
def register(self, identified_with, identifier, user): self.kv_store.set( self._get_storage_key(identified_with, identifier), self.serialization.dumps(user).encode(), )
Register new key for given client identifier. This is only a helper method that allows registering new user objects for client identities (keys, tokens, addresses etc.). Args: identified_with (object): authentication middleware used to identify the user. identifier (str): user identifier. user (str): user object to be stored in the backend.
def run_deps(self, conf, images): for dependency_name, detached in conf.dependency_images(for_running=True): try: self.run_container(images[dependency_name], images, detach=detached, dependency=True) except Exception as error: raise BadImage("Failed to start dependency container", image=conf.name, dependency=dependency_name, error=error)
Start containers for all our dependencies
def get_value(self, name, parameters=None): if parameters is None: parameters = {} if not isinstance(parameters, dict): raise TypeError("parameters must be a dict") if name not in self._cache: return None hash = self._parameter_hash(parameters) hashdigest = hash.hexdigest() return self._cache[name].get(hashdigest, None)
Return the value of a cached variable if applicable. The value of the variable 'name' is returned, if no parameters are passed or if all parameters are identical to the ones stored for the variable. :param str name: Name of the variable :param dict parameters: Current parameters or None if parameters do not matter :return: The cached value of the variable or None if the parameters differ
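The cache key in get_value comes from `_parameter_hash`, which the excerpt does not show. One plausible implementation (an assumption, not the library's actual code) hashes the sorted parameter items so dict ordering cannot affect the digest:

```python
import hashlib

def parameter_hash(parameters):
    # Hypothetical reimplementation of _parameter_hash: feed the sorted
    # (key, value) pairs into a hash object so insertion order is irrelevant.
    h = hashlib.sha256()
    for key in sorted(parameters):
        h.update(repr((key, parameters[key])).encode())
    return h  # the caller then uses .hexdigest(), as get_value does

digest_a = parameter_hash({'a': 1, 'b': 2}).hexdigest()
digest_b = parameter_hash({'b': 2, 'a': 1}).hexdigest()
assert digest_a == digest_b
```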
def textile(value): try: import textile except ImportError: warnings.warn("The Python textile library isn't installed.", RuntimeWarning) return value return textile.textile(force_text(value))
Textile processing.
def compute_position(self, layout): params = self.position.setup_params(self.data) data = self.position.setup_data(self.data, params) data = self.position.compute_layer(data, params, layout) self.data = data
Compute the position of each geometric object in concert with the other objects in the panel
def get_ordered_tokens_from_vocab(vocab: Vocab) -> List[str]: return [token for token, token_id in sorted(vocab.items(), key=lambda i: i[1])]
Returns the list of tokens in a vocabulary, ordered by increasing vocabulary id. :param vocab: Input vocabulary. :return: List of tokens.
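A quick illustration, with a plain dict standing in for the Vocab mapping of token to id:

```python
# Plain dict as a stand-in for the Vocab type (token -> integer id).
vocab = {"<pad>": 0, "the": 2, "a": 1}

def get_ordered_tokens_from_vocab(vocab):
    # Sort (token, id) pairs by id, then keep only the tokens.
    return [token for token, token_id in sorted(vocab.items(), key=lambda i: i[1])]

assert get_ordered_tokens_from_vocab(vocab) == ["<pad>", "a", "the"]
```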
def dns_encode(x, check_built=False): if not x or x == b".": return b"\x00" if check_built and b"." not in x and ( orb(x[-1]) == 0 or (orb(x[-2]) & 0xc0) == 0xc0 ): return x x = b"".join(chb(len(y)) + y for y in (k[:63] for k in x.split(b"."))) if x[-1:] != b"\x00": x += b"\x00" return x
Encodes a bytes string into the DNS format :param x: the string :param check_built: detect already-built strings and ignore them :returns: the encoded bytes string
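A hypothetical pure-Python 3 rewrite (dropping Scapy's orb/chb compatibility helpers and the compression-pointer check) makes the wire format visible: each label is prefixed with its length, truncated to 63 bytes, and the name is terminated with a zero byte.

```python
def dns_encode_simple(name: bytes) -> bytes:
    # Simplified sketch of DNS name encoding; not Scapy's actual helper.
    if not name or name == b".":
        return b"\x00"
    encoded = b"".join(
        bytes([len(label)]) + label
        for label in (part[:63] for part in name.split(b"."))
    )
    if not encoded.endswith(b"\x00"):
        encoded += b"\x00"  # root label terminator
    return encoded

assert dns_encode_simple(b"www.example.com") == b"\x03www\x07example\x03com\x00"
```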
def fetch(self): params = values.of({}) payload = self._version.fetch( 'GET', self._uri, params=params, ) return BalanceInstance(self._version, payload, account_sid=self._solution['account_sid'], )
Fetch a BalanceInstance :returns: Fetched BalanceInstance :rtype: twilio.rest.api.v2010.account.balance.BalanceInstance
def toc(self): output = [] for key in sorted(self.catalog.keys()): edition = self.catalog[key]['edition'] length = len(self.catalog[key]['transliteration']) output.append( "Pnum: {key}, Edition: {edition}, length: {length} line(s)".format( key=key, edition=edition, length=length)) return output
Returns a rich list of texts in the catalog.
def fetch_all_droplets(self, tag_name=None): params = {} if tag_name is not None: params["tag_name"] = str(tag_name) return map(self._droplet, self.paginate('/v2/droplets', 'droplets', params=params))
r""" Returns a generator that yields all of the droplets belonging to the account .. versionchanged:: 0.2.0 ``tag_name`` parameter added :param tag_name: if non-`None`, only droplets with the given tag are returned :type tag_name: string or `Tag` :rtype: generator of `Droplet`\ s :raises DOAPIError: if the API endpoint replies with an error
def on(self): on_command = StandardSend(self._address, COMMAND_LIGHT_ON_0X11_NONE, 0xff) self._send_method(on_command, self._on_message_received)
Send ON command to device.
def get_random_connection(self): if self._available_connections: node_name = random.choice(list(self._available_connections.keys())) conn_list = self._available_connections[node_name] if conn_list: return conn_list.pop() for node in self.nodes.random_startup_node_iter(): connection = self.get_connection_by_node(node) if connection: return connection raise Exception("Can't reach a single startup node.")
Open new connection to random redis server.
def set_units_property(self, *, unit_ids=None, property_name, values): if unit_ids is None: unit_ids = self.get_unit_ids() for i, unit in enumerate(unit_ids): self.set_unit_property(unit_id=unit, property_name=property_name, value=values[i])
Sets unit property data for a list of units Parameters ---------- unit_ids: list The list of unit ids for which the property will be set Defaults to get_unit_ids() property_name: str The name of the property values: list The list of values to be set, one per unit id
def make_processors(**config): global processors if processors: return import pkg_resources processors = [] for processor in pkg_resources.iter_entry_points('fedmsg.meta'): try: processors.append(processor.load()(_, **config)) except Exception as e: log.warn("Failed to load %r processor." % processor.name) log.exception(e) processors.append(DefaultProcessor(_, **config)) if len(processors) == 3: log.warn("No fedmsg.meta plugins found. fedmsg.meta.msg2* crippled")
Initialize all of the text processors. You'll need to call this once before using any of the other functions in this module. >>> import fedmsg.config >>> import fedmsg.meta >>> config = fedmsg.config.load_config([], None) >>> fedmsg.meta.make_processors(**config) >>> text = fedmsg.meta.msg2repr(some_message_dict, **config)
def dfilter(fn, record): return dict([(k, v) for k, v in record.items() if fn(v)])
Filter for a dictionary :param fn: A predicate function :param record: a dict :returns: a dict >>> odd = lambda x: x % 2 != 0 >>> dfilter(odd, {'Terry': 30, 'Graham': 35, 'John': 27}) {'Graham': 35, 'John': 27}
def _events(self, using_url, filters=None, limit=None): if not isinstance(limit, (int, NoneType)): limit = None if filters is None: filters = [] if isinstance(filters, string_types): filters = filters.split(',') if not self.blocking: self.blocking = True while self.blocking: params = { 'since': self._last_seen_id, 'limit': limit, } if filters: params['events'] = ','.join(map(str, filters)) try: data = self.get(using_url, params=params, raw_exceptions=True) except (ConnectTimeout, ConnectionError) as e: data = None except Exception as e: reraise('', e) if data: self._last_seen_id = data[-1]['id'] for event in data: self._count += 1 yield event
A long-polling method that queries Syncthing for events. Args: using_url (str): REST HTTP endpoint filters (List[str]): Creates an "event group" in Syncthing to only receive events that have been subscribed to. limit (int): The number of events to query in the history to catch up to the current state. Returns: generator[dict]
@contextlib.contextmanager def timeout(delay, handler=None): delay = int(delay) if handler is None: def default_handler(signum, frame): raise RuntimeError("{:d} seconds timeout expired".format(delay)) handler = default_handler prev_sigalrm_handler = signal.getsignal(signal.SIGALRM) signal.signal(signal.SIGALRM, handler) signal.alarm(delay) try: yield finally: signal.alarm(0) signal.signal(signal.SIGALRM, prev_sigalrm_handler)
Context manager to run code and deliver a SIGALRM signal after `delay` seconds. Note that `delay` must be a whole number; otherwise it is converted to an integer by Python's `int()` built-in function. For positive floating-point numbers, that means truncating to the integer below. If the optional argument `handler` is supplied, it must be a callable that is invoked if the alarm triggers while the code is still running. If no `handler` is provided (default), a `RuntimeError` with the message ``<delay> seconds timeout expired`` is raised.
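A self-contained sketch of how the context manager is used (Unix-only, since it relies on SIGALRM; the `@contextmanager` decorator and the try/finally cleanup are additions not visible in the excerpt):

```python
import contextlib
import signal
import time

@contextlib.contextmanager
def timeout(delay, handler=None):
    delay = int(delay)
    if handler is None:
        def handler(signum, frame):
            raise RuntimeError("{:d} seconds timeout expired".format(delay))
    prev = signal.getsignal(signal.SIGALRM)
    signal.signal(signal.SIGALRM, handler)
    signal.alarm(delay)
    try:
        yield
    finally:
        # always cancel the alarm and restore the previous handler
        signal.alarm(0)
        signal.signal(signal.SIGALRM, prev)

caught = ""
try:
    with timeout(1):
        time.sleep(2)  # interrupted by SIGALRM after 1 second
except RuntimeError as exc:
    caught = str(exc)

assert "timeout expired" in caught
```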
def result(self, timeout=None): self._blocking_poll(timeout=timeout) if self._exception is not None: raise self._exception return self._result
Get the result of the operation, blocking if necessary. Args: timeout (int): How long (in seconds) to wait for the operation to complete. If None, wait indefinitely. Returns: google.protobuf.Message: The Operation's result. Raises: google.api_core.GoogleAPICallError: If the operation errors or if the timeout is reached before the operation completes.
def _get_label_uuid(xapi, rectype, label): try: return getattr(xapi, rectype).get_by_name_label(label)[0] except Exception: return False
Internal, returns label's uuid
def get_msg_count_info(self, channel=Channel.CHANNEL_CH0): msg_count_info = MsgCountInfo() UcanGetMsgCountInfoEx(self._handle, channel, byref(msg_count_info)) return msg_count_info.sent_msg_count, msg_count_info.recv_msg_count
Reads the message counters of the specified CAN channel. :param int channel: CAN channel, which is to be used (:data:`Channel.CHANNEL_CH0` or :data:`Channel.CHANNEL_CH1`). :return: Tuple with number of CAN messages sent and received. :rtype: tuple(int, int)
def get_licenses(service_instance, license_manager=None): if not license_manager: license_manager = get_license_manager(service_instance) log.debug('Retrieving licenses') try: return license_manager.licenses except vim.fault.NoPermission as exc: log.exception(exc) raise salt.exceptions.VMwareApiError( 'Not enough permissions. Required privilege: ' '{0}'.format(exc.privilegeId)) except vim.fault.VimFault as exc: log.exception(exc) raise salt.exceptions.VMwareApiError(exc.msg) except vmodl.RuntimeFault as exc: log.exception(exc) raise salt.exceptions.VMwareRuntimeError(exc.msg)
Returns the licenses on a specific instance. service_instance The Service Instance Object from which to obtain the licenses. license_manager The License Manager object of the service instance. If not provided it will be retrieved.
def remove_user(self, user, **kwargs): if isinstance(user, Entity): user = user['id'] assert isinstance(user, six.string_types) endpoint = '{0}/{1}/users/{2}'.format( self.endpoint, self['id'], user, ) return self.request('DELETE', endpoint=endpoint, query_params=kwargs)
Remove a user from this team.
def get_keybinding(self, mode, key): cmdline = None bindings = self._bindings if key in bindings.scalars: cmdline = bindings[key] if mode in bindings.sections: if key in bindings[mode].scalars: value = bindings[mode][key] if value: cmdline = value else: cmdline = None if isinstance(cmdline, list): cmdline = ','.join(cmdline) return cmdline
look up keybinding from `MODE-maps` sections :param mode: mode identifier :type mode: str :param key: urwid-style key identifier :type key: str :returns: a command line to be applied upon keypress :rtype: str
def devices(self): self.verify_integrity() if session.get('u2f_device_management_authorized', False): if request.method == 'GET': return jsonify(self.get_devices()), 200 elif request.method == 'DELETE': response = self.remove_device(request.json) if response['status'] == 'ok': return jsonify(response), 200 else: return jsonify(response), 404 return jsonify({'status': 'failed', 'error': 'Unauthorized!'}), 401
Manages a user's enrolled U2F devices