code (string, lengths 51–2.38k) · docstring (string, lengths 4–15.2k)
def get_min_distance(self, mesh):
    dists = [surf.get_min_distance(mesh) for surf in self.surfaces]
    return numpy.min(dists, axis=0)
For each point in ``mesh`` compute the minimum distance to each surface element and return the smallest value. See :meth:`superclass method <.base.BaseSurface.get_min_distance>` for spec of input and result values.
def get_tokens_unprocessed(self, text, stack=('root',)):
    self.content_type = None
    return RegexLexer.get_tokens_unprocessed(self, text, stack)
Reset the content-type state.
def decode_abi(self, types: Iterable[TypeStr], data: Decodable) -> Tuple[Any, ...]:
    if not is_bytes(data):
        raise TypeError("The `data` value must be of bytes type. Got {0}".format(type(data)))
    decoders = [
        self._registry.get_decoder(type_str)
        for type_str in types
    ]
    decoder = TupleDecoder(decoders=decoders)
    stream = ContextFramesBytesIO(data)
    return decoder(stream)
Decodes the binary value ``data`` as a sequence of values of the ABI types in ``types`` via the head-tail mechanism into a tuple of equivalent python values.

:param types: An iterable of string representations of the ABI types that will be used for decoding e.g. ``('uint256', 'bytes[]', '(int,int)')``
:param data: The binary value to be decoded.

:returns: A tuple of equivalent python values for the ABI values represented in ``data``.
def upd_doc(self, doc, index_update=True, label_guesser_update=True):
    if not self.index_writer and index_update:
        self.index_writer = self.index.writer()
    if not self.label_guesser_updater and label_guesser_update:
        self.label_guesser_updater = self.label_guesser.get_updater()
    logger.info("Updating modified doc: %s" % doc)
    if index_update:
        self._update_doc_in_index(self.index_writer, doc)
    if label_guesser_update:
        self.label_guesser_updater.upd_doc(doc)
Update a document in the index
def has_ssd(system_obj):
    smart_value = False
    storage_value = False
    smart_resource = _get_attribute_value_of(system_obj, 'smart_storage')
    if smart_resource is not None:
        smart_value = _get_attribute_value_of(
            smart_resource, 'has_ssd', default=False)
    if smart_value:
        return smart_value
    storage_resource = _get_attribute_value_of(system_obj, 'storages')
    if storage_resource is not None:
        storage_value = _get_attribute_value_of(
            storage_resource, 'has_ssd', default=False)
    return storage_value
Gets whether the system has any SSD drive.

:param system_obj: The HPESystem object.
:returns: True if the system has SSD drives.
def keyval_typ2str(var, val):
    varout = var.strip()
    if isinstance(val, list):
        data = ", ".join([keyval_typ2str(var, it)[1] for it in val])
        valout = "[" + data + "]"
    elif isinstance(val, float):
        valout = "{:.12f}".format(val)
    else:
        valout = "{}".format(val)
    return varout, valout
Convert a variable to a string.

Parameters
----------
var: str
    The variable name
val: any type
    The value of the variable

Returns
-------
varout: str
    Stripped `var`
valout: str
    The value converted to a useful string representation

See Also
--------
keyval_str2typ: the opposite
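A minimal usage sketch; the function body is reproduced from above so the example is self-contained:

```python
def keyval_typ2str(var, val):
    # Strip surrounding whitespace from the variable name.
    varout = var.strip()
    if isinstance(val, list):
        # Recurse into list items and join their string representations.
        data = ", ".join([keyval_typ2str(var, it)[1] for it in val])
        valout = "[" + data + "]"
    elif isinstance(val, float):
        # Floats are rendered with 12 decimal places.
        valout = "{:.12f}".format(val)
    else:
        valout = "{}".format(val)
    return varout, valout

print(keyval_typ2str("  ecut  ", 30.0))   # ('ecut', '30.000000000000')
print(keyval_typ2str("kpts", [2, 2, 2]))  # ('kpts', '[2, 2, 2]')
```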
def TXT(host, nameserver=None):
    dig = ['dig', '+short', six.text_type(host), 'TXT']
    if nameserver is not None:
        dig.append('@{0}'.format(nameserver))
    cmd = __salt__['cmd.run_all'](dig, python_shell=False)
    if cmd['retcode'] != 0:
        log.warning(
            'dig returned exit code \'%s\'. Returning empty list as fallback.',
            cmd['retcode']
        )
        return []
    return [i for i in cmd['stdout'].split('\n')]
Return the TXT record for ``host``. Always returns a list.

CLI Example:

.. code-block:: bash

    salt ns1 dig.TXT google.com
def target(key, full=True):
    if not key.startswith('/sys'):
        key = os.path.join('/sys', key)
    key = os.path.realpath(key)
    if not os.path.exists(key):
        log.debug('Unknown SysFS key %s', key)
        return False
    elif full:
        return key
    else:
        return os.path.basename(key)
Return the basename of a SysFS key path

:param key: the location to resolve within SysFS
:param full: full path instead of basename
:return: fullpath or basename of path

CLI example:

.. code-block:: bash

    salt '*' sysfs.read class/ttyS0
def run(self, writer, reader):
    self._page_data = self.initialize_page_data()
    self._set_lastpage()
    if not self.term.is_a_tty:
        self._run_notty(writer)
    else:
        self._run_tty(writer, reader)
Pager entry point.

In interactive mode (terminal is a tty), run until ``process_keystroke()`` detects quit keystroke ('q'). In non-interactive mode, exit after displaying all unicode points.

:param writer: callable writes to output stream, receiving unicode.
:type writer: callable
:param reader: callable reads keystrokes from input stream, sending instance of blessed.keyboard.Keystroke.
:type reader: callable
def extract_images(bs4, lazy_image_attribute=None):
    if lazy_image_attribute:
        images = [image[lazy_image_attribute] for image in bs4.select('img')
                  if image.has_attr(lazy_image_attribute)]
    else:
        images = [image['src'] for image in bs4.select('img')
                  if image.has_attr('src')]
    image_links = [link for link in extract_links(bs4)
                   if link.endswith(('.jpg', '.JPG', '.png', '.PNG', '.gif', '.GIF'))]
    image_metas = [meta['content'] for meta in extract_metas(bs4)
                   if 'content' in meta
                   if meta['content'].endswith(('.jpg', '.JPG', '.png', '.PNG', '.gif', '.GIF'))]
    return list(set(images + image_links + image_metas))
Extract image URLs from ``img`` tags, page links, and meta tags. If a lazy-load attribute is supplied, the image URL is read from that attribute instead of ``src``.

:param bs4: a BeautifulSoup document
:param lazy_image_attribute: attribute holding the lazily loaded image URL
:return: deduplicated list of image URLs
def encrypt_key(key, password):
    public_key = load_pem_public_key(key.encode(), default_backend())
    encrypted_password = public_key.encrypt(password, PKCS1v15())
    return base64.b64encode(encrypted_password).decode('ascii')
Encrypt the password with the public key and return an ASCII representation.

The public key retrieved from the Travis API is loaded as an RSAPublicKey object using Cryptography's default backend. Then the given password is encrypted with the encrypt() method of RSAPublicKey. The encrypted password is then encoded to base64 and decoded into ASCII in order to convert the bytes object into a string object.

Parameters
----------
key: str
    Travis CI public RSA key that requires deserialization
password: str
    the password to be encrypted

Returns
-------
encrypted_password: str
    the base64 encoded encrypted password decoded as ASCII

Notes
-----
Travis CI uses the PKCS1v15 padding scheme. While PKCS1v15 is secure, it is outdated and should be replaced with OAEP.

Example: OAEP(mgf=MGF1(algorithm=SHA256()), algorithm=SHA256(), label=None)
def genl_ctrl_resolve_grp(sk, family_name, grp_name):
    family = genl_ctrl_probe_by_name(sk, family_name)
    if family is None:
        return -NLE_OBJ_NOTFOUND
    return genl_ctrl_grp_by_name(family, grp_name)
Resolve Generic Netlink family group name.

https://github.com/thom311/libnl/blob/libnl3_2_25/lib/genl/ctrl.c#L471

Looks up the family object and resolves the group name to the numeric group identifier.

Positional arguments:
sk -- Generic Netlink socket (nl_sock class instance).
family_name -- name of Generic Netlink family (bytes).
grp_name -- name of group to resolve (bytes).

Returns:
The numeric group identifier or a negative error code.
def bodn2c(name):
    name = stypes.stringToCharP(name)
    code = ctypes.c_int(0)
    found = ctypes.c_int(0)
    libspice.bodn2c_c(name, ctypes.byref(code), ctypes.byref(found))
    return code.value, bool(found.value)
Translate the name of a body or object to the corresponding SPICE integer ID code.

http://naif.jpl.nasa.gov/pub/naif/toolkit_docs/C/cspice/bodn2c_c.html

:param name: Body name to be translated into a SPICE ID code.
:type name: str
:return: SPICE integer ID code for the named body, and a flag indicating whether the name was found.
:rtype: tuple
def _handle_options(self, option_line):
    if option_line is not None:
        options = docopt.docopt(usage, shlex.split(option_line))
        if options['--retest']:
            retest_file = options['--retest']
            try:
                test_list_str = self._get_test_list_from_session_file(retest_file)
            except Exception as ex:
                raise KittyException('Failed to open session file (%s) for retesting: %s' % (retest_file, ex))
        else:
            test_list_str = options['--test-list']
        self._set_test_ranges(None, None, test_list_str)
        session_file = options['--session']
        if session_file is not None:
            self.set_session_file(session_file)
        delay = options['--delay']
        if delay is not None:
            self.set_delay_between_tests(float(delay))
        skip_env_test = options['--no-env-test']
        if skip_env_test:
            self.set_skip_env_test(True)
        verbosity = options['--verbose']
        self.set_verbosity(verbosity)
Handle options from command line, in docopt style. This allows passing arguments to the fuzzer from the command line without the need to re-write it in each runner.

:param option_line: string with the command line options to be parsed.
def all_descriptions():
    para = []
    para += build_parameter_descriptions(models.Soil()) + [",,\n"]
    para += build_parameter_descriptions(models.SoilProfile()) + [",,\n"]
    para += build_parameter_descriptions(models.Foundation()) + [",,\n"]
    para += build_parameter_descriptions(models.PadFoundation()) + [",,\n"]
    para += build_parameter_descriptions(models.SDOFBuilding()) + [",,\n"]
    para += build_parameter_descriptions(models.FrameBuilding2D(1, 1))
    return para
Generates a list of parameter descriptions for all the models.

:return: list of description strings
def load_data_file(filename, encoding='utf-8'):
    data = pkgutil.get_data(PACKAGE_NAME, os.path.join(DATA_DIR, filename))
    return data.decode(encoding).splitlines()
Load a data file and return it as a list of lines.

Parameters:
    filename: The name of the file (no directories included).
    encoding: The file encoding. Defaults to utf-8.
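The decode-and-split step can be sketched independently of pkgutil (PACKAGE_NAME and DATA_DIR are package-specific constants not shown here); note that splitlines() handles both Unix and Windows line endings:

```python
raw = b"alpha\nbeta\r\ngamma"           # what pkgutil.get_data would return
lines = raw.decode("utf-8").splitlines()  # normalizes \n and \r\n
print(lines)  # ['alpha', 'beta', 'gamma']
```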
def on_epoch_end(self, pbar, epoch, last_metrics, **kwargs):
    "Put the various losses in the recorder and show a sample image."
    if not hasattr(self, 'last_gen') or not self.show_img:
        return
    data = self.learn.data
    img = self.last_gen[0]
    norm = getattr(data, 'norm', False)
    if norm and norm.keywords.get('do_y', False):
        img = data.denorm(img)
        img = data.train_ds.y.reconstruct(img)
    self.imgs.append(img)
    self.titles.append(f'Epoch {epoch}')
    pbar.show_imgs(self.imgs, self.titles)
    return add_metrics(last_metrics, [getattr(self.smoothenerG, 'smooth', None),
                                      getattr(self.smoothenerC, 'smooth', None)])
Put the various losses in the recorder and show a sample image.
def save(path, im):
    from PIL import Image
    if im.dtype == np.uint8:
        pil_im = Image.fromarray(im)
    else:
        pil_im = Image.fromarray((im * 255).astype(np.uint8))
    pil_im.save(path)
Saves an image to file. If the image is of float type, it is assumed to have values in [0, 1].

Parameters
----------
path : str
    Path to which the image will be saved.
im : ndarray (image)
    Image.
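The float-to-uint8 branch can be checked without Pillow; a minimal sketch of the conversion save() applies before handing the array to PIL:

```python
import numpy as np

im = np.array([[0.0, 0.5, 1.0]])        # float image, values in [0, 1]
as_bytes = (im * 255).astype(np.uint8)  # same scaling/truncation as save()
print(as_bytes)  # [[  0 127 255]]
```

Note that astype truncates rather than rounds, so 0.5 maps to 127, not 128.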
def parse_sgf_to_examples(sgf_path):
    return zip(*[(p.position, p.next_move, p.result)
                 for p in sgf_wrapper.replay_sgf_file(sgf_path)])
Return supervised examples from positions.

NOTE: the last move is not played because there is no p.next_move after it.
def qualified_name(self):
    idxstr = '' if self.index is None else str(self.index)
    return "%s[%s]" % (self.qualified_package_name, idxstr)
Get the qualified name of the variant. Returns: str: Name of the variant with version and index, eg "maya-2016.1[1]".
def runSearchDatasets(self, request):
    return self.runSearchRequest(
        request, protocol.SearchDatasetsRequest,
        protocol.SearchDatasetsResponse, self.datasetsGenerator)
Runs the specified SearchDatasetsRequest.
def wrap(ptr, base=None):
    if ptr is None:
        return None
    ptr = long(ptr)
    if base is None:
        qObj = shiboken.wrapInstance(long(ptr), QtCore.QObject)
        metaObj = qObj.metaObject()
        cls = metaObj.className()
        superCls = metaObj.superClass().className()
        if hasattr(QtGui, cls):
            base = getattr(QtGui, cls)
        elif hasattr(QtGui, superCls):
            base = getattr(QtGui, superCls)
        else:
            base = QtGui.QWidget
    return shiboken.wrapInstance(long(ptr), base)
Wrap the given pointer with shiboken and return the appropriate QObject

:returns: if ptr is not None, a QObject that is cast to the appropriate class
:rtype: QObject | None
:raises: None
def add_field(self, field_instance):
    if not isinstance(field_instance, BaseScriptField):
        raise ValueError('Expected a basestring or Field instance')
    self.fields.append(field_instance)
    return self
Appends a field.
def verify_fresh_jwt_in_request():
    if request.method not in config.exempt_methods:
        jwt_data = _decode_jwt_from_request(request_type='access')
        ctx_stack.top.jwt = jwt_data
        fresh = jwt_data['fresh']
        if isinstance(fresh, bool):
            if not fresh:
                raise FreshTokenRequired('Fresh token required')
        else:
            now = timegm(datetime.utcnow().utctimetuple())
            if fresh < now:
                raise FreshTokenRequired('Fresh token required')
        verify_token_claims(jwt_data)
        _load_user(jwt_data[config.identity_claim_key])
Ensure that the requester has a valid and fresh access token. Raises an appropriate exception if there is no token, the token is invalid, or the token is not marked as fresh.
def get(vm, key='uuid'):
    ret = {}
    if key not in ['uuid', 'alias', 'hostname']:
        ret['Error'] = 'Key must be either uuid, alias or hostname'
        return ret
    vm = lookup('{0}={1}'.format(key, vm), one=True)
    if 'Error' in vm:
        return vm
    cmd = 'vmadm get {0}'.format(vm)
    res = __salt__['cmd.run_all'](cmd)
    retcode = res['retcode']
    if retcode != 0:
        ret['Error'] = res['stderr'] if 'stderr' in res else _exit_status(retcode)
        return ret
    return salt.utils.json.loads(res['stdout'])
Output the JSON object describing a VM

vm : string
    vm to be targeted
key : string [uuid|alias|hostname]
    value type of 'vm' parameter

CLI Example:

.. code-block:: bash

    salt '*' vmadm.get 186da9ab-7392-4f55-91a5-b8f1fe770543
    salt '*' vmadm.get nacl key=alias
def release_udp_port(self, port, project):
    if port in self._used_udp_ports:
        self._used_udp_ports.remove(port)
        project.remove_udp_port(port)
        log.debug("UDP port {} has been released".format(port))
Release a specific UDP port number

:param port: UDP port number
:param project: Project instance
def cross_successors(state, last_action=None):
    centres, edges = state
    acts = sum([
        [s, s.inverse(), s * 2]
        for s in map(Step, "RUFDRB".replace(last_action.face if last_action else "", "", 1))
    ], [])
    for step in acts:
        yield step, (centres, CrossSolver._rotate(edges, step))
Successors function for solving the cross.
@contextlib.contextmanager
def tmpdir(suffix='', prefix='tmp', dir=None):
    tmp = tempfile.mkdtemp(suffix=suffix, prefix=prefix, dir=dir)
    yield tmp
    shutil.rmtree(tmp)
Create a temporary directory with a context manager. The directory is deleted when the context exits. The suffix, prefix, and dir arguments are the same as for mkdtemp().

Args:
    suffix (str): If suffix is specified, the directory name will end with that suffix, otherwise there will be no suffix.
    prefix (str): If prefix is specified, the directory name will begin with that prefix; otherwise, a default prefix is used.
    dir (str): If dir is specified, the directory will be created in that directory; otherwise, a default directory is used.

Returns:
    str: path to the directory
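A self-contained sketch of such a generator-based context manager, with a try/finally added so cleanup also runs when the body raises (which the version above does not guarantee):

```python
import contextlib
import os
import shutil
import tempfile

@contextlib.contextmanager
def tmpdir(suffix='', prefix='tmp', dir=None):
    # Create the directory, hand it to the caller, remove it on exit.
    tmp = tempfile.mkdtemp(suffix=suffix, prefix=prefix, dir=dir)
    try:
        yield tmp
    finally:
        shutil.rmtree(tmp)

with tmpdir(prefix='demo') as d:
    open(os.path.join(d, 'x.txt'), 'w').close()
    print(os.path.isdir(d))  # True
print(os.path.isdir(d))      # False
```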
def eval(self, x):
    aes = AES.new(self.key, AES.MODE_CFB, "\0" * AES.block_size)
    nonce = 0
    while True:
        data = KeyedPRF.pad(
            SHA256.new(str(x + nonce).encode()).digest(),
            (number.size(self.range) + 7) // 8)
        num = self.mask & number.bytes_to_long(aes.encrypt(data))
        if num < self.range:
            return num
        nonce += 1
This method returns the evaluation of the function with input x

:param x: this is the input as a Long
def get_json_response(self, content, **kwargs):
    if isinstance(content, dict):
        response_content = {
            k: deepcopy(v) for k, v in content.items()
            if k not in ('form', 'view')
            or (k in ('form', 'view') and not isinstance(v, (Form, View)))
        }
    else:
        response_content = content
    return HttpResponse(content=json.dumps(response_content),
                        content_type='application/json; charset=utf-8',
                        **kwargs)
Returns a json response object.
def invoke(self):
    if self.timeout_milliseconds > 0:
        self.ready_time = (
            datetime.now() + timedelta(milliseconds=self.timeout_milliseconds))
    self.callback(self.callback_args)
Run callback, optionally passing a variable number of arguments `callback_args`
def actions(self):
    r = self.session.query(models.Action).all()
    return [x.type_name for x in r]
Gets the list of allowed actions :rtype: list[str]
def _exception(etype, eval_, etrace):
    if hasattr(sys, 'ps1') or not sys.stderr.isatty():
        sys.__excepthook__(etype, eval_, etrace)
    else:
        traceback.print_exception(etype, eval_, etrace, limit=2, file=sys.stdout)
        six.print_()
        pdb.pm()
Wrap exception in debugger if not in tty
def set_imu_callback(self, callback, data=None):
    self.imu_callback = callback
    self.imu_callback_data = data
Register a callback for incoming IMU data packets.

This method allows you to pass in a callable which will be called on receipt of each IMU data packet sent by this SK8 device. Set to `None` to disable it again.

Args:
    callback: a callable with the following signature:
        (acc, gyro, mag, imu_index, seq, timestamp, data)
        where:
            acc, gyro, mag = sensor data ([x,y,z] in each case)
            imu_index = originating IMU number (int, 0-4)
            seq = packet sequence number (int, 0-255)
            timestamp = value of time.time() when packet received
            data = value of `data` parameter passed to this method
    data: an optional arbitrary object that will be passed as a parameter to the callback
def scale_rows_by_largest_entry(S):
    if not isspmatrix_csr(S):
        raise TypeError('expected csr_matrix')
    largest_row_entry = np.zeros((S.shape[0],), dtype=S.dtype)
    pyamg.amg_core.maximum_row_value(S.shape[0], largest_row_entry,
                                     S.indptr, S.indices, S.data)
    largest_row_entry[largest_row_entry != 0] = \
        1.0 / largest_row_entry[largest_row_entry != 0]
    S = scale_rows(S, largest_row_entry, copy=True)
    return S
Scale each row in S by its largest in magnitude entry.

Parameters
----------
S : csr_matrix

Returns
-------
S : csr_matrix
    Each row has been scaled by its largest in magnitude entry

Examples
--------
>>> from pyamg.gallery import poisson
>>> from pyamg.util.utils import scale_rows_by_largest_entry
>>> A = poisson((4,), format='csr')
>>> A.data[1] = 5.0
>>> A = scale_rows_by_largest_entry(A)
>>> A.todense()
matrix([[ 0.4,  1. ,  0. ,  0. ],
        [-0.5,  1. , -0.5,  0. ],
        [ 0. , -0.5,  1. , -0.5],
        [ 0. ,  0. , -0.5,  1. ]])
def hash_file(path: str,
              progress_callback: Callable[[float], None],
              chunk_size: int = 1024,
              file_size: int = None,
              algo: str = 'sha256') -> bytes:
    hasher = hashlib.new(algo)
    have_read = 0
    if not chunk_size:
        chunk_size = 1024
    with open(path, 'rb') as to_hash:
        if not file_size:
            file_size = to_hash.seek(0, 2)
            to_hash.seek(0)
        while True:
            chunk = to_hash.read(chunk_size)
            hasher.update(chunk)
            have_read += len(chunk)
            progress_callback(have_read / file_size)
            if len(chunk) != chunk_size:
                break
    return binascii.hexlify(hasher.digest())
Hash a file and return the hash, providing progress callbacks

:param path: The file to hash
:param progress_callback: The callback to call with progress between 0 and 1. May not ever be precisely 1.0.
:param chunk_size: If specified, the size of the chunks to hash in one call. If not specified, defaults to 1024
:param file_size: If specified, the size of the file to hash (used for progress callback generation). If not specified, calculated internally.
:param algo: The algorithm to use. Can be anything used by :py:mod:`hashlib`
:returns: The output hash as ASCII hex
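A simplified, self-contained sketch of the same chunked-hashing pattern (dropping the file_size parameter and, like the function above, returning the digest as hex bytes):

```python
import binascii
import hashlib
import os
import tempfile

def hash_file(path, progress_callback, chunk_size=1024, algo='sha256'):
    # Hash in fixed-size chunks, reporting cumulative progress in [0, 1].
    hasher = hashlib.new(algo)
    file_size = os.path.getsize(path) or 1
    have_read = 0
    with open(path, 'rb') as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            hasher.update(chunk)
            have_read += len(chunk)
            progress_callback(have_read / file_size)
    return binascii.hexlify(hasher.digest())

# Usage: hash a small temporary file and check against hashlib directly.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, 'wb') as f:
    f.write(b'hello' * 1000)
progress = []
digest = hash_file(path, progress.append, chunk_size=512)
print(digest == hashlib.sha256(b'hello' * 1000).hexdigest().encode())  # True
os.remove(path)
```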
async def apply_commandline(self, cmdline):
    cmdline = cmdline.lstrip()

    def apply_this_command(cmdstring):
        logging.debug('%s command string: "%s"', self.mode, str(cmdstring))
        cmd = commandfactory(cmdstring, self.mode)
        if cmd.repeatable:
            self.last_commandline = cmdline
        return self.apply_command(cmd)

    try:
        for c in split_commandline(cmdline):
            await apply_this_command(c)
    except Exception as e:
        self._error_handler(e)
Interprets a command line string, i.e., splits it into separate command strings, instantiates :class:`Commands <alot.commands.Command>` accordingly, and applies them in sequence.

:param cmdline: command line to interpret
:type cmdline: str
def get_asset_spatial_session_for_repository(self, repository_id, proxy):
    if not repository_id:
        raise NullArgument()
    if not self.supports_asset_spatial() or not self.supports_visible_federation():
        raise Unimplemented()
    try:
        from . import sessions
    except ImportError:
        raise OperationFailed('import error')
    proxy = self._convert_proxy(proxy)
    try:
        session = sessions.AssetSpatialSession(repository_id, proxy, runtime=self._runtime)
    except AttributeError:
        raise OperationFailed('attribute error')
    return session
Gets the session for retrieving spatial coverage of an asset for the given repository.

arg:    repository_id (osid.id.Id): the Id of the repository
arg:    proxy (osid.proxy.Proxy): a proxy
return: (osid.repository.AssetSpatialSession) - an AssetSpatialSession
raise:  NotFound - repository_id not found
raise:  NullArgument - repository_id is null
raise:  OperationFailed - unable to complete request
raise:  Unimplemented - supports_asset_spatial() or supports_visible_federation() is false
compliance: optional - This method must be implemented if supports_asset_spatial() and supports_visible_federation() are true.
def collect_hypervisors_metrics(
    self,
    servers,
    custom_tags=None,
    use_shortname=False,
    collect_hypervisor_metrics=True,
    collect_hypervisor_load=False,
):
    hyp_project_names = defaultdict(set)
    for server in itervalues(servers):
        hypervisor_hostname = server.get('hypervisor_hostname')
        if not hypervisor_hostname:
            self.log.debug(
                "hypervisor_hostname is None for server %s. Check that your user is an administrative users.",
                server['server_id'],
            )
        else:
            hyp_project_names[hypervisor_hostname].add(server['project_name'])
    hypervisors = self.get_os_hypervisors_detail()
    for hyp in hypervisors:
        self.get_stats_for_single_hypervisor(
            hyp,
            hyp_project_names,
            custom_tags=custom_tags,
            use_shortname=use_shortname,
            collect_hypervisor_metrics=collect_hypervisor_metrics,
            collect_hypervisor_load=collect_hypervisor_load,
        )
    if not hypervisors:
        self.warning("Unable to collect any hypervisors from Nova response.")
Submits stats for all hypervisors registered to this control plane.

Raises specific exceptions based on response code.
def debug(level=logging.DEBUG):
    from jcvi.apps.console import magenta, yellow
    format = yellow("%(asctime)s [%(module)s]")
    format += magenta(" %(message)s")
    logging.basicConfig(level=level, format=format, datefmt="%H:%M:%S")
Turn on the debugging
def get_aggregate_check(self, check, age=None):
    data = {}
    if age:
        data['max_age'] = age
    result = self._request('GET', '/aggregates/{}'.format(check),
                           data=json.dumps(data))
    return result.json()
Returns the list of aggregates for a given check
def OnViewTypeTool(self, event):
    new = self.viewTypeTool.GetStringSelection()
    if new != self.viewType:
        self.viewType = new
        self.OnRootView(event)
When the user changes the selection, make that our selection
def tabActiveMarkupChanged(self, tab):
    if tab == self.currentTab:
        markupClass = tab.getActiveMarkupClass()
        dtMarkdown = (markupClass == markups.MarkdownMarkup)
        dtMkdOrReST = dtMarkdown or (markupClass == markups.ReStructuredTextMarkup)
        self.formattingBox.setEnabled(dtMarkdown)
        self.symbolBox.setEnabled(dtMarkdown)
        self.actionUnderline.setEnabled(dtMarkdown)
        self.actionBold.setEnabled(dtMkdOrReST)
        self.actionItalic.setEnabled(dtMkdOrReST)
Perform all UI state changes that need to be done when the active markup class of the current tab has changed.
def format_node(import_graph, node, indent):
    if isinstance(node, graph.NodeSet):
        ind = ' ' * indent
        out = [ind + 'cycle {'] + [
            format_file_node(import_graph, n, indent + 1)
            for n in node.nodes
        ] + [ind + '}']
        return '\n'.join(out)
    else:
        return format_file_node(import_graph, node, indent)
Helper function for print_tree
def register_gate(name, gateclass, allow_overwrite=False):
    if hasattr(Circuit, name):
        if allow_overwrite:
            warnings.warn(f"Circuit has attribute `{name}`.")
        else:
            raise ValueError(f"Circuit has attribute `{name}`.")
    if name.startswith("run_with_"):
        if allow_overwrite:
            warnings.warn(f"Gate name `{name}` may conflict with run of backend.")
        else:
            raise ValueError(f"Gate name `{name}` shall not start with 'run_with_'.")
    if not allow_overwrite:
        if name in GATE_SET:
            raise ValueError(f"Gate '{name}' already exists in gate set.")
        if name in GLOBAL_MACROS:
            raise ValueError(f"Macro '{name}' already exists.")
    GATE_SET[name] = gateclass
Register a new gate to the gate set.

Args:
    name (str): The name of the gate.
    gateclass (type): The type object of the gate.
    allow_overwrite (bool, optional): If True, allow overwriting an existing gate. Otherwise, raise ValueError.

Raises:
    ValueError: The name is duplicated with an existing gate. When `allow_overwrite=True`, this error is not raised.
def compute_all_minutes(opens_in_ns, closes_in_ns):
    deltas = closes_in_ns - opens_in_ns
    daily_sizes = (deltas // NANOSECONDS_PER_MINUTE) + 1
    num_minutes = daily_sizes.sum()
    pieces = []
    for open_, size in zip(opens_in_ns, daily_sizes):
        pieces.append(
            np.arange(open_,
                      open_ + size * NANOSECONDS_PER_MINUTE,
                      NANOSECONDS_PER_MINUTE)
        )
    out = np.concatenate(pieces).view('datetime64[ns]')
    assert len(out) == num_minutes
    return out
Given arrays of opens and closes, both in nanoseconds, return an array of each minute between the opens and closes.
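A runnable sketch of the same computation, assuming NANOSECONDS_PER_MINUTE is the integer constant 60 * 10**9 (not shown in the snippet above):

```python
import numpy as np

NANOSECONDS_PER_MINUTE = 60_000_000_000

def compute_all_minutes(opens_in_ns, closes_in_ns):
    # One inclusive minute range per session, concatenated in order.
    daily_sizes = (closes_in_ns - opens_in_ns) // NANOSECONDS_PER_MINUTE + 1
    pieces = [
        np.arange(o, o + n * NANOSECONDS_PER_MINUTE, NANOSECONDS_PER_MINUTE)
        for o, n in zip(opens_in_ns, daily_sizes)
    ]
    return np.concatenate(pieces).view('datetime64[ns]')

opens = np.array([0, 10 * NANOSECONDS_PER_MINUTE])
closes = opens + 2 * NANOSECONDS_PER_MINUTE  # two sessions, 3 minutes each
minutes = compute_all_minutes(opens, closes)
print(len(minutes))  # 6
```

The `+ 1` makes the range inclusive of the close minute, which is why a 2-minute span yields 3 timestamps.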
def send_reply(self, context, reply):
    print("Status: 200 OK")
    print("Content-Type: application/json")
    print("Cache-Control: no-cache")
    print("Pragma: no-cache")
    print("Content-Length: %d" % len(reply))
    print()
    print(reply.decode())
Sends a reply to a client.

The client is usually identified by passing ``context`` as returned from the original :py:func:`receive_message` call. Messages must be bytes, it is up to the sender to convert the message beforehand. A non-bytes value raises a :py:exc:`TypeError`.

:param any context: A context returned by :py:func:`receive_message`.
:param bytes reply: A binary to send back as the reply.
def create_entity(self):
    self._highest_id_seen += 1
    entity = Entity(self._highest_id_seen, self)
    self._entities.append(entity)
    return entity
Create a new entity.

The entity will have a higher UID than any previously associated with this world.

:return: the new entity
:rtype: :class:`essence.Entity`
def match(self, key=None, year=None, event=None, type='qm', number=None, round=None, simple=False):
    if key:
        return Match(self._get('match/%s%s' % (key, '/simple' if simple else '')))
    else:
        return Match(self._get('match/{year}{event}_{type}{number}{round}{simple}'.format(
            year=year if not event[0].isdigit() else '',
            event=event,
            type=type,
            number=number,
            round=('m%s' % round) if not type == 'qm' else '',
            simple='/simple' if simple else '')))
Get data on a match.

You may either pass the match's key directly, or pass `year`, `event`, `type`, `number` (the match number), and `round` if applicable (playoffs only). The event year may be specified as part of the event key or specified in the `year` parameter.

:param key: Key of match to get data on. First option for specifying a match (see above).
:param year: Year in which match took place. Optional; if excluded then must be included in event key.
:param event: Key of event in which match took place. Including year is optional; if excluded then must be specified in `year` parameter.
:param type: One of 'qm' (qualifier match), 'qf' (quarterfinal), 'sf' (semifinal), 'f' (final). If unspecified, 'qm' will be assumed.
:param number: Match number. For example, for qualifier 32, you'd pass 32. For Semifinal 2 round 3, you'd pass 2.
:param round: For playoff matches, you will need to specify a round.
:param simple: Get only vital data.
:return: A single Match object.
def _assert_is_type(name, value, value_type):
    if not isinstance(value, value_type):
        if type(value_type) is tuple:
            types = ', '.join(t.__name__ for t in value_type)
            raise ValueError('{0} must be one of ({1})'.format(name, types))
        else:
            raise ValueError('{0} must be {1}'
                             .format(name, value_type.__name__))
Assert that a value must be a given type.
def isSurrounded(self):
    malefics = [const.MARS, const.SATURN]
    return self.__sepApp(malefics, aspList=[0, 90, 180])
Returns whether the object is separating from and applying to malefics, considering bad aspects.
def bundle(self, name: str) -> models.Bundle:
    return self.Bundle.filter_by(name=name).first()
Fetch a bundle from the store.
def _load_into_numpy(sf, np_array, start, end, strides=None, shape=None):
    np_array[:] = 0.0
    np_array_2d = np_array.reshape((np_array.shape[0],
                                    np_array.shape[1] * np_array.shape[2]))
    _extensions.sframe_load_to_numpy(sf, np_array.ctypes.data,
                                     np_array_2d.strides, np_array_2d.shape,
                                     start, end)
Loads into numpy array from SFrame, assuming SFrame stores data flattened
@classmethod
def from_chars(cls, chars='', optimal=3):
    if not chars:
        chars = ''.join(ALNUM)
    sets = most_even_chunk(chars, optimal)
    return cls(sets)
Construct a Pat object from the specified string and optimal position count.
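most_even_chunk is not shown in this snippet; a hypothetical implementation consistent with its name (split into chunks whose sizes differ by at most one):

```python
def most_even_chunk(chars, count):
    # Hypothetical helper, assumed behavior: split `chars` into `count`
    # contiguous chunks whose sizes differ by at most one.
    base, extra = divmod(len(chars), count)
    sizes = [base + 1] * extra + [base] * (count - extra)
    chunks, pos = [], 0
    for size in sizes:
        chunks.append(chars[pos:pos + size])
        pos += size
    return chunks

print(most_even_chunk('abcdefgh', 3))  # ['abc', 'def', 'gh']
```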
def method_not_allowed(cls, errors=None):
    if cls.expose_status:
        cls.response.content_type = 'application/json'
        cls.response._status_line = '405 Method Not Allowed'
    return cls(405, None, errors).to_json
Shortcut API for HTTP 405 `Method not allowed` response.

Args:
    errors (list): Response key/value data.

Returns:
    WSResponse Instance.
def configure_bound(self, surface_size):
    r = len(self.rows)
    max_length = self.max_length
    if self.key_size is None:
        self.key_size = (surface_size[0] - (self.padding * (max_length + 1))) / max_length
    height = self.key_size * r + self.padding * (r + 1)
    if height >= surface_size[1] / 2:
        logger.warning('Computed keyboard height outbound target surface, reducing key_size to match')
        self.key_size = ((surface_size[1] / 2) - (self.padding * (r + 1))) / r
        height = self.key_size * r + self.padding * (r + 1)
        logger.warning('Normalized key_size to %spx' % self.key_size)
    self.set_size((surface_size[0], height), surface_size)
Compute keyboard bound regarding of this layout. If key_size is None, then it will compute it regarding of the given surface_size.

:param surface_size: Size of the surface this layout will be rendered on.

:raise ValueError: If the layout model is empty.
def attach_storage(self, server, storage, storage_type, address):
    body = {'storage_device': {}}
    if storage:
        body['storage_device']['storage'] = str(storage)
    if storage_type:
        body['storage_device']['type'] = storage_type
    if address:
        body['storage_device']['address'] = address
    url = '/server/{0}/storage/attach'.format(server)
    res = self.post_request(url, body)
    return Storage._create_storage_objs(res['server']['storage_devices'],
                                        cloud_manager=self)
Attach a Storage object to a Server. Return a list of the server's storages.
def new_points(
    factory: IterationPointFactory, solution, weights: List[List[float]] = None
) -> List[Tuple[np.ndarray, List[float]]]:
    from desdeo.preference.direct import DirectSpecification

    points = []
    nof = factory.optimization_method.optimization_problem.problem.nof_objectives()
    if not weights:
        weights = random_weights(nof, 50 * nof)
    for pref in map(
        lambda w: DirectSpecification(factory.optimization_method, np.array(w)),
        weights,
    ):
        points.append(factory.result(pref, solution))
    return points
Generate approximate set of points

Generate set of Pareto optimal solutions projecting from the Pareto optimal solution using weights to determine the direction.

Parameters
----------
factory: IterationPointFactory with suitable optimization problem
solution: Current solution from which new solutions are projected
weights: Direction of the projection, if not given generate with :func:random_weights
def display_paths(instances, type_str):
    print('%ss: count=%s' % (type_str, len(instances),))
    for path in [instance.path for instance in instances]:
        print('%s: %s' % (type_str, path))
    if len(instances):
        print('')
Display the count and paths for the list of instances in instances.
def conda_info(prefix):
    cmd = [join(prefix, 'bin', 'conda')]
    cmd.extend(['info', '--json'])
    output = check_output(cmd)
    return yaml.load(output)
Returns conda info as a parsed object.
def execute_work_items(work_items, config):
    return celery.group(
        worker_task.s(work_item, config)
        for work_item in work_items
    )
Execute a suite of tests for a given set of work items. Args: work_items: An iterable of `work_db.WorkItem`s. config: The configuration to use for the test execution. Returns: An iterable of WorkItems.
def discretize(self, method, *args, **kwargs):
    super(CustomDistribution, self).discretize(method, *args, **kwargs)
Discretizes the continuous distribution into discrete probability masses using the specified method. Parameters ---------- method: string, BaseDiscretizer instance A Discretizer Class from pgmpy.factors.discretize *args, **kwargs: values The parameters to be given to the Discretizer Class. Returns ------- An n-D array or a DiscreteFactor object according to the discretization method used. Examples -------- >>> import numpy as np >>> from scipy.special import beta >>> from pgmpy.factors.continuous import ContinuousFactor >>> from pgmpy.factors.continuous import RoundingDiscretizer >>> def dirichlet_pdf(x, y): ... return (np.power(x, 1) * np.power(y, 2)) / beta(x, y) >>> dirichlet_factor = ContinuousFactor(['x', 'y'], dirichlet_pdf) >>> dirichlet_factor.discretize(RoundingDiscretizer, ... low=1, high=2, cardinality=5) # TODO: finish this
def resolution(self, values): values = np.asanyarray(values, dtype=np.int64) if values.shape != (2,): raise ValueError('resolution must be (2,) float') self._resolution = values
Set the camera resolution in pixels. Parameters ------------ resolution (2,) float Camera resolution in pixels
def _fetch_data(self): self._view_col0 = clamp(self._view_col0, 0, self._max_col0) self._view_row0 = clamp(self._view_row0, 0, self._max_row0) self._view_ncols = clamp(self._view_ncols, 0, self._conn.frame_ncols - self._view_col0) self._view_nrows = clamp(self._view_nrows, 0, self._conn.frame_nrows - self._view_row0) self._conn.fetch_data( self._view_row0, self._view_row0 + self._view_nrows, self._view_col0, self._view_col0 + self._view_ncols)
Retrieve frame data within the current view window. This method will adjust the view window if it goes out-of-bounds.
def wait_for(self, new_state): if self._state == new_state: return if self._state > new_state: raise OrderedStateSkipped(new_state) fut = asyncio.Future(loop=self.loop) self._exact_waiters.append((new_state, fut)) yield from fut
Wait for an exact state `new_state` to be reached by the state machine. If the state is skipped, that is, if a state which is greater than `new_state` is written to :attr:`state`, the coroutine raises :class:`OrderedStateSkipped` exception as it is not possible anymore that it can return successfully (see :attr:`state`).
def update_line(self, resource_id, data): return OrderLines(self.client).on(self).update(resource_id, data)
Update a line for an order.
def download_file_from_bucket(self, bucket, file_path, key): with open(file_path, 'wb') as data: self.__s3.download_fileobj(bucket, key, data) return file_path
Download a file from an S3 bucket to ``file_path`` and return the local path.
def get_timestamps(cols, created_name, updated_name): has_created = created_name in cols has_updated = updated_name in cols return (created_name if has_created else None, updated_name if has_updated else None)
Returns a 2-tuple of the timestamp columns that were found on the table definition.
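A minimal, self-contained usage sketch of the lookup above; the column names below are hypothetical examples:

```python
# Minimal self-contained sketch of the timestamp-column lookup; the
# column names used here are hypothetical.
def get_timestamps(cols, created_name, updated_name):
    """Return a 2-tuple of the timestamp columns found in `cols`."""
    has_created = created_name in cols
    has_updated = updated_name in cols
    return (created_name if has_created else None,
            updated_name if has_updated else None)

# Only `created_at` is present, so the second slot is None.
pair = get_timestamps({"id", "created_at", "name"}, "created_at", "updated_at")
```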
def get_term_frequency(self, term, document, normalized=False): if document not in self._documents: raise IndexError(DOCUMENT_DOES_NOT_EXIST) if term not in self._terms: raise IndexError(TERM_DOES_NOT_EXIST) result = self._terms[term].get(document, 0) if normalized: result /= self.get_document_length(document) return float(result)
Return the frequency of the given term in the specified document, optionally normalized by the document length.
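A stand-alone sketch of the frequency computation without the surrounding index class; the per-document term counts here are made up:

```python
# Sketch of term frequency lookup: raw count by default, divided by the
# document length when normalized. Counts below are illustrative.
def term_frequency(term_counts, term, doc_length, normalized=False):
    result = term_counts.get(term, 0)
    if normalized:
        result /= doc_length
    return float(result)

raw = term_frequency({"cat": 3, "dog": 1}, "cat", 10)
norm = term_frequency({"cat": 3, "dog": 1}, "cat", 10, normalized=True)
```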
def print_header(msg, sep='='): "Log a message with an underline of ``sep`` characters for emphasis." LOGGER.info("\n%s\n%s" % (msg, sep * len(msg)))
Log a message with an underline of ``sep`` characters for emphasis.
def browse_userjournals(self, username, featured=False, offset=0, limit=10): response = self._req('/browse/user/journals', { "username":username, "featured":featured, "offset":offset, "limit":limit }) deviations = [] for item in response['results']: d = Deviation() d.from_dict(item) deviations.append(d) return { "results" : deviations, "has_more" : response['has_more'], "next_offset" : response['next_offset'] }
Fetch user journals from user :param username: name of user to retrieve journals from :param featured: fetch only featured or not :param offset: the pagination offset :param limit: the pagination limit
def verify_api(self, ret): if self.restApiId: deployed_label_json = self._get_current_deployment_label() if deployed_label_json == self.deployment_label_json: ret['comment'] = ('Already at desired state, the stage {0} is already at the desired ' 'deployment label:\n{1}'.format(self._stage_name, deployed_label_json)) ret['current'] = True return ret else: self._deploymentId = self._get_desired_deployment_id() if self._deploymentId: ret['publish'] = True return ret
This method determines whether the given stage_name is already on a deployment label matching the input api_name and swagger_file. If yes, it returns with a comment indicating the stage is already at the desired state. If not, and there are previous deployment labels in AWS matching the given api_name and swagger file, it indicates to the caller that we only need to reassociate stage_name with the previously existing deployment label.
def getUnionTemporalPoolerInput(self): activeCells = numpy.zeros(self.tm.numberOfCells()).astype(realDType) activeCells[list(self.tm.activeCellsIndices())] = 1 predictedActiveCells = numpy.zeros(self.tm.numberOfCells()).astype( realDType) predictedActiveCells[list(self.tm.predictedActiveCellsIndices())] = 1 burstingColumns = numpy.zeros(self.tm.numberOfColumns()).astype(realDType) burstingColumns[list(self.tm.unpredictedActiveColumns)] = 1 return activeCells, predictedActiveCells, burstingColumns
Gets the Union Temporal Pooler input from the Temporal Memory
def main(argv=None): parser = argparse.ArgumentParser( description='Event storage and event proxy.', usage='%(prog)s <configfile>' ) parser.add_argument('--exit-codeword', metavar="MSG", dest="exit_message", default=None, help="An incoming message that makes" " Rewind quit. Used for testing.") parser.add_argument('configfile') args = argv if argv is not None else sys.argv[1:] args = parser.parse_args(args) config = configparser.ConfigParser() with open(args.configfile) as f: config.read_file(f) exitcode = run(config, args.exit_message) return exitcode
Entry point for Rewind. Parses input and calls run() for the real work. Parameters: argv -- sys.argv arguments. Can be set for testing purposes. returns -- the proposed exit code for the program.
def in_base(self, base): if base == self.base: return copy.deepcopy(self) (result, _) = Radices.from_rational(self.as_rational(), base) return result
Return value in ``base``. :returns: Radix in ``base`` :rtype: Radix :raises ConvertError: if ``base`` is less than 2
def authenticate_server(self, response): log.debug("authenticate_server(): Authenticate header: {0}".format( _negotiate_value(response))) host = urlparse(response.url).hostname try: if self.cbt_struct: result = kerberos.authGSSClientStep(self.context[host], _negotiate_value(response), channel_bindings=self.cbt_struct) else: result = kerberos.authGSSClientStep(self.context[host], _negotiate_value(response)) except kerberos.GSSError: log.exception("authenticate_server(): authGSSClientStep() failed:") return False if result < 1: log.error("authenticate_server(): authGSSClientStep() failed: " "{0}".format(result)) return False log.debug("authenticate_server(): returning {0}".format(response)) return True
Uses GSSAPI to authenticate the server. Returns True on success, False on failure.
def get_records(self, records=None, timeout=1.0): octets = b''.join(ndef.message_encoder(records)) if records else None octets = self.get_octets(octets, timeout) if octets and len(octets) >= 3: return list(ndef.message_decoder(octets))
Get NDEF message records from a SNEP Server. .. versionadded:: 0.13 The :class:`ndef.Record` list given by *records* is encoded as the request message octets input to :meth:`get_octets`. The return value is an :class:`ndef.Record` list decoded from the response message octets returned by :meth:`get_octets`. Same as:: import ndef send_octets = ndef.message_encoder(records) rcvd_octets = snep_client.get_octets(send_octets, timeout) records = list(ndef.message_decoder(rcvd_octets))
def _Bound_TP(T, P): region = None if 1073.15 < T <= 2273.15 and Pmin <= P <= 50: region = 5 elif Pmin <= P <= Ps_623: Tsat = _TSat_P(P) if 273.15 <= T <= Tsat: region = 1 elif Tsat < T <= 1073.15: region = 2 elif Ps_623 < P <= 100: T_b23 = _t_P(P) if 273.15 <= T <= 623.15: region = 1 elif 623.15 < T < T_b23: region = 3 elif T_b23 <= T <= 1073.15: region = 2 return region
Region definition for input T and P Parameters ---------- T : float Temperature, [K] P : float Pressure, [MPa] Returns ------- region : float IAPWS-97 region code References ---------- Wagner, W; Kretzschmar, H-J: International Steam Tables: Properties of Water and Steam Based on the Industrial Formulation IAPWS-IF97; Springer, 2008; doi: 10.1007/978-3-540-74234-0. Fig. 2.3
def parse_expression(clause): if isinstance(clause, Expression): return clause elif hasattr(clause, "getName") and clause.getName() != "field": if clause.getName() == "nested": return AttributeSelection.from_statement(clause) elif clause.getName() == "function": return SelectFunction.from_statement(clause) else: return Value(resolve(clause[0])) else: return Field(clause[0])
Parse a clause that could be a field, value, or expression, and return the corresponding object.
def build_action(self, runnable, regime, action): if isinstance(action, StateAssignment): return self.build_state_assignment(runnable, regime, action) if isinstance(action, EventOut): return self.build_event_out(action) if isinstance(action, Transition): return self.build_transition(action) else: return ['pass']
Build event handler action code. @param action: Event handler action object @type action: lems.model.dynamics.Action @return: Generated action code @rtype: list of string
def _rndPointDisposition(dx, dy): x = int(random.uniform(-dx, dx)) y = int(random.uniform(-dy, dy)) return (x, y)
Return a random integer displacement point drawn uniformly from [-dx, dx] x [-dy, dy].
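A self-contained version of the helper above, runnable as-is:

```python
import random

# Random integer offset drawn uniformly from the box
# [-dx, dx] x [-dy, dy]; int() truncates toward zero.
def rnd_point_disposition(dx, dy):
    x = int(random.uniform(-dx, dx))
    y = int(random.uniform(-dy, dy))
    return (x, y)

point = rnd_point_disposition(5, 3)
```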
def standardize_mapping(into): if not inspect.isclass(into): if isinstance(into, collections.defaultdict): return partial( collections.defaultdict, into.default_factory) into = type(into) if not issubclass(into, abc.Mapping): raise TypeError('unsupported type: {into}'.format(into=into)) elif into == collections.defaultdict: raise TypeError( 'to_dict() only accepts initialized defaultdicts') return into
Helper function to standardize a supplied mapping. .. versionadded:: 0.21.0 Parameters ---------- into : instance or subclass of collections.abc.Mapping Must be a class, an initialized collections.defaultdict, or an instance of a collections.abc.Mapping subclass. Returns ------- mapping : a collections.abc.Mapping subclass or other constructor a callable object that can accept an iterator to create the desired Mapping. See Also -------- DataFrame.to_dict Series.to_dict
def querystring(self): return {key: value for (key, value) in self.qs.items() if key.startswith(self.MANAGED_KEYS) or self._get_key_values('filter[')}
Return the original querystring, but containing only managed keys :return dict: dict of managed querystring parameters
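A sketch of the filtering logic with plain functions; the managed key prefixes below are illustrative stand-ins for the class attributes, and the `filter[` handling is simplified to a `startswith` check rather than the `_get_key_values` helper used above:

```python
# Keep only managed keys plus any `filter[...]` parameters.
# MANAGED_KEYS here is a made-up example tuple.
MANAGED_KEYS = ("sort", "include", "page")

def managed_querystring(qs):
    return {key: value for key, value in qs.items()
            if key.startswith(MANAGED_KEYS) or key.startswith("filter[")}

result = managed_querystring(
    {"sort": "-created", "filter[name]": "bob", "debug": "1"})
```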
def _parse_desc_length_file(cls, fileobj): value = 0 for i in range(4): try: b = cdata.uint8(fileobj.read(1)) except cdata.error as e: raise ValueError(e) value = (value << 7) | (b & 0x7f) if not b >> 7: break else: raise ValueError("invalid descriptor length") return value
May raise ValueError
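A stand-alone sketch of the decoding above, without the `cdata` helper: 7 payload bits per byte, high bit set on every byte but the last, at most 4 bytes:

```python
import io

# Variable-length integer decoding (MPEG-4 descriptor style).
def parse_desc_length(fileobj):
    value = 0
    for _ in range(4):
        data = fileobj.read(1)
        if not data:
            raise ValueError("truncated descriptor length")
        b = data[0]
        value = (value << 7) | (b & 0x7F)
        if not b >> 7:  # high bit clear: last byte
            break
    else:
        raise ValueError("invalid descriptor length")
    return value

length = parse_desc_length(io.BytesIO(b"\x81\x10"))  # (1 << 7) | 0x10
```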
def ensure_subclass(value, types): ensure_class(value) if not issubclass(value, types): raise TypeError( "expected subclass of {}, not {}".format( types, value))
Ensure value is a subclass of types >>> class Hello(object): pass >>> ensure_subclass(Hello, Hello) >>> ensure_subclass(object, Hello) Traceback (most recent call last): TypeError:
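A self-contained sketch, including a plausible `ensure_class` helper (the real one lives elsewhere in the library):

```python
import inspect

# Hypothetical helper mirroring the real ensure_class.
def ensure_class(value):
    if not inspect.isclass(value):
        raise TypeError("expected a class, not {!r}".format(value))

def ensure_subclass(value, types):
    ensure_class(value)
    if not issubclass(value, types):
        raise TypeError(
            "expected subclass of {}, not {}".format(types, value))

class Hello(object):
    pass

ensure_subclass(Hello, Hello)  # a class counts as a subclass of itself
try:
    ensure_subclass(object, Hello)
    error = None
except TypeError as exc:
    error = exc
```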
def connection_factory(self, endpoint, *args, **kwargs): kwargs = self._make_connection_kwargs(endpoint, kwargs) return self.connection_class.factory(endpoint, self.connect_timeout, *args, **kwargs)
Called to create a new connection with proper configuration. Intended for internal use only.
def metric(self): if self._metric is None: _log.debug("Computing and caching operator basis metric") self._metric = np.matrix([[(j.dag() * k).tr() for k in self.ops] for j in self.ops]) return self._metric
Compute a matrix of Hilbert-Schmidt inner products for the basis operators, update self._metric, and return the value. :return: The matrix of inner products. :rtype: numpy.matrix
def play(self, sox_effects=()): audio_data = self.getAudioData() logging.getLogger().info("Playing speech segment (%s): '%s'" % (self.lang, self)) cmd = ["sox", "-q", "-t", "mp3", "-"] if sys.platform.startswith("win32"): cmd.extend(("-t", "waveaudio")) cmd.extend(("-d", "trim", "0.1", "reverse", "trim", "0.07", "reverse")) cmd.extend(sox_effects) logging.getLogger().debug("Start player process") p = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.DEVNULL) p.communicate(input=audio_data) if p.returncode != 0: raise RuntimeError() logging.getLogger().debug("Done playing")
Play the segment.
def _construct_regex(cls, fmt): return re.compile(fmt.format(**vars(cls)), flags=re.U)
Given a format string, construct the regex with class attributes.
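A hypothetical class showing how format placeholders are filled from class attributes before compiling; the attribute names and patterns below are made up:

```python
import re

# vars(cls) exposes the class attributes, so `{word}` and `{digits}`
# in the format string are replaced before compilation.
class TokenPatterns(object):
    digits = r"\d+"
    word = r"[a-z]+"

    @classmethod
    def _construct_regex(cls, fmt):
        return re.compile(fmt.format(**vars(cls)), flags=re.U)

pattern = TokenPatterns._construct_regex(r"{word}-{digits}")
```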
def authenticate(self, username="", password="", **kwargs): try: user = get_user_model().objects.filter(email__iexact=username)[0] if check_password(password, user.password): return user else: return None except IndexError: return None
Allow users to log in with their email address.
def message(self): name = self.__class__.__name__ return "{0} {1}".format(humanize(name), pp(*self.expectedArgs, **self.expectedKwArgs))
Override this to provide the failure message.
def safe_copyfile(src, dest): fd, tmpname = tempfile.mkstemp(dir=os.path.dirname(dest)) shutil.copyfileobj(open(src, 'rb'), os.fdopen(fd, 'wb')) shutil.copystat(src, tmpname) os.rename(tmpname, dest)
Safely copy ``src`` to ``dest`` by copying to a temporary file in the destination directory and then renaming it into place.
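A self-contained variant of the same idea; `os.replace` is used for an atomic rename and `with` blocks close the file handles deterministically (both are small departures from the code above):

```python
import os
import shutil
import tempfile

def safe_copyfile(src, dest):
    """Copy src to a temp file next to dest, then atomically rename."""
    fd, tmpname = tempfile.mkstemp(dir=os.path.dirname(dest) or ".")
    try:
        with open(src, "rb") as fsrc, os.fdopen(fd, "wb") as fdst:
            shutil.copyfileobj(fsrc, fdst)
        shutil.copystat(src, tmpname)
        os.replace(tmpname, dest)
    except BaseException:
        os.unlink(tmpname)  # clean up the partial temp file
        raise
```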
def parse(content, *args, **kwargs): global MECAB_PYTHON3 if 'mecab_loc' not in kwargs and MECAB_PYTHON3 and 'MeCab' in globals(): return MeCab.Tagger(*args).parse(content) else: return run_mecab_process(content, *args, **kwargs)
Use mecab-python3 by default to parse JP text. Fall back to mecab binary app if needed
def from_clock_time(cls, clock_time, epoch): try: clock_time = ClockTime(*clock_time) except (TypeError, ValueError): raise ValueError("Clock time must be a 2-tuple of (s, ns)") else: ordinal = clock_time.seconds // 86400 return Date.from_ordinal(ordinal + epoch.date().to_ordinal())
Convert from a ClockTime relative to a given epoch.
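A sketch of the conversion using only the standard library: whole days are taken from the seconds component and added to the epoch date (the library's own `Date`/`ClockTime` types are replaced by `datetime` equivalents here):

```python
import datetime

# (seconds, nanoseconds) since `epoch` -> calendar date.
def date_from_clock_time(clock_time, epoch):
    try:
        seconds, nanoseconds = clock_time
    except (TypeError, ValueError):
        raise ValueError("Clock time must be a 2-tuple of (s, ns)")
    return epoch + datetime.timedelta(days=seconds // 86400)

day = date_from_clock_time((3 * 86400 + 5, 0), datetime.date(1970, 1, 1))
```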
def get_parser(): from argparse import ArgumentParser, ArgumentDefaultsHelpFormatter parser = ArgumentParser(description=__doc__, formatter_class=ArgumentDefaultsHelpFormatter) parser.add_argument("-s1", dest="s1", help="sequence 1") parser.add_argument("-s2", dest="s2", help="sequence 2") return parser
Get a parser object
def custom(command, user=None, conf_file=None, bin_env=None): ret = __salt__['cmd.run_all']( _ctl_cmd(command, None, conf_file, bin_env), runas=user, python_shell=False, ) return _get_return(ret)
Run any custom supervisord command user user to run supervisorctl as conf_file path to supervisord config file bin_env path to supervisorctl bin or path to virtualenv with supervisor installed CLI Example: .. code-block:: bash salt '*' supervisord.custom "mstop '*gunicorn*'"
def task_failure_message(task_report): trace_list = traceback.format_tb(task_report['traceback']) body = 'Error: task failure\n\n' body += 'Task ID: {}\n\n'.format(task_report['task_id']) body += 'Archive: {}\n\n'.format(task_report['archive']) body += 'Docker image: {}\n\n'.format(task_report['image']) body += 'Exception: {}\n\n'.format(task_report['exception']) body += 'Traceback:\n {} {}'.format( ''.join(trace_list[:-1]), trace_list[-1]) return body
Build a plain-text task failure message from a task report.
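A trimmed-down, runnable sketch of the message builder; the report keys are a subset of the ones used above, and `''.join` replaces the Python 2 `string.join` call:

```python
import sys
import traceback

def task_failure_message(task_report):
    trace_list = traceback.format_tb(task_report["traceback"])
    body = "Error: task failure\n\n"
    body += "Task ID: {}\n\n".format(task_report["task_id"])
    body += "Exception: {}\n\n".format(task_report["exception"])
    body += "Traceback:\n{}".format("".join(trace_list))
    return body

# Build a real traceback object to feed the report with.
try:
    raise RuntimeError("boom")
except RuntimeError:
    tb = sys.exc_info()[2]

msg = task_failure_message(
    {"task_id": "t-1", "exception": "RuntimeError('boom')", "traceback": tb})
```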
def filter_pythons(path): if not isinstance(path, vistir.compat.Path): path = vistir.compat.Path(str(path)) if not path.is_dir(): return path if path_is_python(path) else None return filter(path_is_python, path.iterdir())
Return all valid pythons in a given path
def _init_compile_patterns(optional_attrs): attr2cmp = {} if optional_attrs is None: return attr2cmp if 'synonym' in optional_attrs: attr2cmp['synonym'] = re.compile(r'"(\S.*\S)" ([A-Z]+) (.*)\[(.*)\](.*)$') attr2cmp['synonym nt'] = cx.namedtuple("synonym", "text scope typename dbxrefs") if 'xref' in optional_attrs: attr2cmp['xref'] = re.compile(r'^(\S+:\s*\S+)\b(.*)$') return attr2cmp
Compile search patterns for optional attributes if needed.
def _webTranslator(store, fallback): if fallback is None: fallback = IWebTranslator(store, None) if fallback is None: warnings.warn( "No IWebTranslator plugin when creating Scrolltable - broken " "configuration, now deprecated! Try passing webTranslator " "keyword argument.", category=DeprecationWarning, stacklevel=4) return fallback
Discover a web translator based on an Axiom store and a specified default. Prefer the specified default. This is an implementation detail of various initializers in this module which require an L{IWebTranslator} provider. Some of those initializers did not previously require a webTranslator, so this function will issue a L{UserWarning} if no L{IWebTranslator} powerup exists for the given store and no fallback is provided. @param store: an L{axiom.store.Store} @param fallback: a provider of L{IWebTranslator}, or None @return: 'fallback', if it is provided, or the L{IWebTranslator} powerup on 'store'.