def from_url(url, db=None, **kwargs):
    from redis.client import Redis
    return Redis.from_url(url, db, **kwargs)
Returns an active Redis client generated from the given database URL.

Will attempt to extract the database id from the URL path, if none is provided.
def addParts(parentPart, childPath, count, index):
    if index is None:
        index = 0
    if index == len(childPath):
        return
    c = childPath[index]
    parentPart.count = coalesce(parentPart.count, 0) + count
    if parentPart.partitions is None:
        parentPart.partitions = FlatList()
    for i, part in enumerate(parentPart.partitions):
        if part.name == c.name:
            addParts(part, childPath, count, index + 1)
            return
    parentPart.partitions.append(c)
    addParts(c, childPath, count, index + 1)
Build a hierarchy by repeatedly calling this method with various childPaths.
count is the number found for this path.
def SADWindowSize(self, value):
    if 1 <= value <= 11 and value % 2:
        self._sad_window_size = value
    else:
        raise InvalidSADWindowSizeError("SADWindowSize must be odd and "
                                        "between 1 and 11.")
    self._replace_bm()
Set private ``_sad_window_size`` and reset ``_block_matcher``.
def target_to_source(target_adjacency, embedding):
    source_adjacency = {v: set() for v in embedding}
    reverse_embedding = {}
    for v, chain in iteritems(embedding):
        for u in chain:
            if u in reverse_embedding:
                raise ValueError("target node {} assigned to more than one source node".format(u))
            reverse_embedding[u] = v
    for v, n in iteritems(reverse_embedding):
        neighbors = target_adjacency[v]
        for u in neighbors:
            if u not in reverse_embedding:
                continue
            m = reverse_embedding[u]
            if m == n:
                continue
            source_adjacency[n].add(m)
            source_adjacency[m].add(n)
    return source_adjacency
Derive the source adjacency from an embedding and target adjacency.

Args:
    target_adjacency (dict/:class:`networkx.Graph`):
        A dict of the form {v: Nv, ...} where v is a node in the target
        graph and Nv is the neighbors of v as an iterable. This can also
        be a networkx graph.
    embedding (dict):
        A mapping from a source graph to a target graph.

Returns:
    dict: The adjacency of the source graph.

Raises:
    ValueError: If any node in the target_adjacency is assigned more
        than one node in the source graph by embedding.

Examples:
    >>> target_adjacency = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}  # a square graph
    >>> embedding = {'a': {0}, 'b': {1}, 'c': {2, 3}}
    >>> source_adjacency = dimod.embedding.target_to_source(target_adjacency, embedding)
    >>> source_adjacency  # triangle
    {'a': {'b', 'c'}, 'b': {'a', 'c'}, 'c': {'a', 'b'}}

    This function also works with networkx graphs.

    >>> import networkx as nx
    >>> target_graph = nx.complete_graph(5)
    >>> embedding = {'a': {0, 1, 2}, 'b': {3, 4}}
    >>> dimod.embedding.target_to_source(target_graph, embedding)
def setup(self, ds):
    self._find_coord_vars(ds)
    self._find_aux_coord_vars(ds)
    self._find_ancillary_vars(ds)
    self._find_clim_vars(ds)
    self._find_boundary_vars(ds)
    self._find_metadata_vars(ds)
    self._find_cf_standard_name_table(ds)
    self._find_geophysical_vars(ds)
Initialize various special variable types within the class.

Mutates a number of instance variables.

:param netCDF4.Dataset ds: An open netCDF dataset
def as_hyperbola(self, rotated=False):
    idx = N.diag_indices(3)
    _ = 1 / self.covariance_matrix[idx]
    d = list(_)
    d[-1] *= -1
    arr = N.identity(4) * -1
    arr[idx] = d
    hyp = conic(arr)
    if rotated:
        R = augment(self.axes)
        hyp = hyp.transform(R)
    return hyp
Hyperbolic error area
def attribute_read(self, sender, name):
    "Handles the creation of ExpectationBuilder when an attribute is read."
    return ExpectationBuilder(self.sender, self.delegate, self.add_invocation,
                              self.add_expectations, name)
Handles the creation of ExpectationBuilder when an attribute is read.
def destroy(self, force=False):
    if force:
        super(UnmanagedLXC, self).destroy()
    else:
        raise UnmanagedLXCError('Destroying an unmanaged LXC might not '
                                'work. To continue please call this method with force=True')
UnmanagedLXC destructor. It requires ``force`` to be ``True`` in order to
work; otherwise it raises an error.
def worker_workerfinished(self, node):
    self.config.hook.pytest_testnodedown(node=node, error=None)
    if node.workeroutput["exitstatus"] == 2:
        self.shouldstop = "%s received keyboard-interrupt" % (node,)
        self.worker_errordown(node, "keyboard-interrupt")
        return
    if node in self.sched.nodes:
        crashitem = self.sched.remove_node(node)
        assert not crashitem, (crashitem, node)
    self._active_nodes.remove(node)
Emitted when node executes its pytest_sessionfinish hook. Removes the node from the scheduler. The node might not be in the scheduler if it had not emitted workerready before shutdown was triggered.
def inherit_dict(base, namespace, attr_name, inherit=lambda k, v: True):
    items = []
    base_dict = getattr(base, attr_name, {})
    new_dict = namespace.setdefault(attr_name, {})
    for key, value in base_dict.items():
        if key in new_dict or (inherit and not inherit(key, value)):
            continue
        if inherit:
            new_dict[key] = value
        items.append((key, value))
    return items
Perform inheritance of dictionaries. Returns a list of key and value
pairs for values that were inherited, for post-processing.

:param base: The base class being considered; see ``iter_bases()``.
:param namespace: The dictionary of the new class being built.
:param attr_name: The name of the attribute containing the dictionary
                  to be inherited.
:param inherit: Filtering function to determine if a given item should
                be inherited. If ``False`` or ``None``, item will not be
                added, but will be included in the returned items. If a
                function, the function will be called with the key and
                value, and the item will be added and included in the
                items list only if the function returns ``True``. By
                default, all items are added and included in the items
                list.
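A minimal, self-contained check of the inheritance behavior described above; the helper is restated here so the example runs on its own, and the `Base` class and `registry` attribute name are illustrative, not from any particular library.

```python
def inherit_dict(base, namespace, attr_name, inherit=lambda k, v: True):
    items = []
    base_dict = getattr(base, attr_name, {})
    new_dict = namespace.setdefault(attr_name, {})
    for key, value in base_dict.items():
        # Skip keys already overridden in the new class, or filtered out.
        if key in new_dict or (inherit and not inherit(key, value)):
            continue
        if inherit:
            new_dict[key] = value
        items.append((key, value))
    return items

class Base:
    registry = {'a': 1, 'b': 2}

# 'b' is overridden in the new class, so only 'a' is inherited.
namespace = {'registry': {'b': 20}}
inherited = inherit_dict(Base, namespace, 'registry')
```

After the call, `inherited` is `[('a', 1)]` and `namespace['registry']` contains both the overriding `'b': 20` and the inherited `'a': 1`.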
def write_str(self, s):
    self.write(s)
    self.room -= len(s)
Add string s to the accumulated body.
def lambda_not_found_response(*args):
    response_data = jsonify(ServiceErrorResponses._NO_LAMBDA_INTEGRATION)
    return make_response(response_data, ServiceErrorResponses.HTTP_STATUS_CODE_502)
Constructs a Flask Response for when a Lambda function is not found for an endpoint :return: a Flask Response
def process_data(name=None):
    ct = current_process()
    if not hasattr(ct, '_pulsar_local'):
        ct._pulsar_local = {}
    loc = ct._pulsar_local
    return loc.get(name) if name else loc
Fetch the current process local data dictionary. If ``name`` is not ``None``
it returns the value at ``name``, otherwise it returns the process data
dictionary.
def hijack_require_http_methods(fn):
    required_methods = ['POST']
    if hijack_settings.HIJACK_ALLOW_GET_REQUESTS:
        required_methods.append('GET')
    return require_http_methods(required_methods)(fn)
Wrapper for "require_http_methods" decorator. POST required by default, GET can optionally be allowed
def do_minus(self, parser, group):
    grouper = group.__class__()
    next_not = None
    for node in group:
        if isinstance(node, self.Minus):
            if next_not is not None:
                continue
            next_not = whoosh.qparser.syntax.NotGroup()
            grouper.append(next_not)
        else:
            if isinstance(node, whoosh.qparser.syntax.GroupNode):
                node = self.do_minus(parser, node)
            if next_not is not None:
                next_not.append(node)
                next_not = None
            else:
                grouper.append(node)
    if next_not is not None:
        grouper.pop()
    return grouper
This filter sorts nodes in a flat group into "required", "default", and "banned" subgroups based on the presence of plus and minus nodes.
def to_kaf(self):
    if self.type == 'NAF':
        for node in self.__get_opinion_nodes():
            node.set('oid', node.get('id'))
            del node.attrib['id']
Converts the opinion layer to KAF
def get_cell(self, row, column):
    url = self.build_url(self._endpoints.get('get_cell').format(row=row, column=column))
    response = self.session.get(url)
    if not response:
        return None
    return self.range_constructor(parent=self, **{self._cloud_data_key: response.json()})
Gets the range object containing the single cell based on row and column numbers.
def set_float_param(params, name, value, min=None, max=None):
    if value is None:
        return
    try:
        value = float(str(value))
    except (TypeError, ValueError):
        raise ValueError(
            "Parameter '%s' must be numeric (or a numeric string) or None,"
            " got %r." % (name, value))
    if min is not None and value < min:
        raise ValueError(
            "Parameter '%s' must not be less than %r, got %r." % (
                name, min, value))
    if max is not None and value > max:
        raise ValueError(
            "Parameter '%s' must not be greater than %r, got %r." % (
                name, max, value))
    params[name] = str(value)
Set a float parameter if applicable.

:param dict params: A dict containing API call parameters.
:param str name: The name of the parameter to set.
:param float value:
    The value of the parameter. If ``None``, the field will not be set.
    If an instance of a numeric type or a string that can be turned into
    a ``float``, the relevant field will be set. Any other value will
    raise a `ValueError`.
:param float min:
    If provided, values less than this will raise ``ValueError``.
:param float max:
    If provided, values greater than this will raise ``ValueError``.
:returns: ``None``
def add_plot(self, *args, extension='pdf', **kwargs):
    add_image_kwargs = {}
    for key in ('width', 'placement'):
        if key in kwargs:
            add_image_kwargs[key] = kwargs.pop(key)
    filename = self._save_plot(*args, extension=extension, **kwargs)
    self.add_image(filename, **add_image_kwargs)
Add the current Matplotlib plot to the figure.

The plot that gets added is the one that would normally be shown when
using ``plt.show()``.

Args
----
args:
    Arguments passed to plt.savefig for displaying the plot.
extension : str
    extension of image file indicating figure file type
kwargs:
    Keyword arguments passed to plt.savefig for displaying the plot.
    In case these contain ``width`` or ``placement``, they will be used
    for the same purpose as in the add_image command. Namely the width
    and placement of the generated plot in the LaTeX document.
def update_offer(self, offer_id, offer_dict):
    return self._create_put_request(resource=OFFERS, billomat_id=offer_id,
                                    send_data=offer_dict)
Updates an offer

:param offer_id: the offer id
:param offer_dict: dict
:return: dict
def rvon_mises(mu, kappa, size=None):
    return (np.random.mtrand.vonmises(
        mu, kappa, size) + np.pi) % (2. * np.pi) - np.pi
Random von Mises variates.
def get_all_reminders(self, params=None):
    if not params:
        params = {}
    return self._iterate_through_pages(self.get_reminders_per_page,
                                       resource=REMINDERS,
                                       **{'params': params})
Get all reminders.

This will iterate over all pages until it gets all elements. So, if the
rate limit is exceeded, it will throw an exception and you will get
nothing.

:param params: search params
:return: list
def memory():
    memory_oper = ['read', 'write']
    memory_scope = ['local', 'global']
    test_command = 'sysbench --num-threads=64 --test=memory '
    test_command += '--memory-oper={0} --memory-scope={1} '
    test_command += '--memory-block-size=1K --memory-total-size=32G run '
    result = None
    ret_val = {}
    for oper in memory_oper:
        for scope in memory_scope:
            key = 'Operation: {0} Scope: {1}'.format(oper, scope)
            run_command = test_command.format(oper, scope)
            result = __salt__['cmd.run'](run_command)
            ret_val[key] = _parser(result)
    return ret_val
This tests the memory for read and write operations.

CLI Examples:

.. code-block:: bash

    salt '*' sysbench.memory
def eth_getBlockByNumber(self, number):
    block_hash = self.reader._get_block_hash(number)
    block_number = _format_block_number(number)
    body_key = body_prefix + block_number + block_hash
    block_data = self.db.get(body_key)
    body = rlp.decode(block_data, sedes=Block)
    return body
Get block body by block number.

:param number:
:return:
def memory_usage(self, index=True, deep=False):
    result = Series([c.memory_usage(index=False, deep=deep)
                     for col, c in self.iteritems()], index=self.columns)
    if index:
        result = Series(self.index.memory_usage(deep=deep),
                        index=['Index']).append(result)
    return result
Return the memory usage of each column in bytes.

The memory usage can optionally include the contribution of the index
and elements of `object` dtype.

This value is displayed in `DataFrame.info` by default. This can be
suppressed by setting ``pandas.options.display.memory_usage`` to False.

Parameters
----------
index : bool, default True
    Specifies whether to include the memory usage of the DataFrame's
    index in returned Series. If ``index=True``, the memory usage of
    the index is the first item in the output.
deep : bool, default False
    If True, introspect the data deeply by interrogating `object`
    dtypes for system-level memory consumption, and include it in the
    returned values.

Returns
-------
Series
    A Series whose index is the original column names and whose values
    is the memory usage of each column in bytes.

See Also
--------
numpy.ndarray.nbytes : Total bytes consumed by the elements of an
    ndarray.
Series.memory_usage : Bytes consumed by a Series.
Categorical : Memory-efficient array for string values with many
    repeated values.
DataFrame.info : Concise summary of a DataFrame.

Examples
--------
>>> dtypes = ['int64', 'float64', 'complex128', 'object', 'bool']
>>> data = dict([(t, np.ones(shape=5000).astype(t))
...              for t in dtypes])
>>> df = pd.DataFrame(data)
>>> df.head()
   int64  float64  complex128 object  bool
0      1      1.0    1.0+0.0j      1  True
1      1      1.0    1.0+0.0j      1  True
2      1      1.0    1.0+0.0j      1  True
3      1      1.0    1.0+0.0j      1  True
4      1      1.0    1.0+0.0j      1  True

>>> df.memory_usage()
Index            80
int64         40000
float64       40000
complex128    80000
object        40000
bool           5000
dtype: int64

>>> df.memory_usage(index=False)
int64         40000
float64       40000
complex128    80000
object        40000
bool           5000
dtype: int64

The memory footprint of `object` dtype columns is ignored by default:

>>> df.memory_usage(deep=True)
Index            80
int64         40000
float64       40000
complex128    80000
object       160000
bool           5000
dtype: int64

Use a Categorical for efficient storage of an object-dtype column with
many repeated values.

>>> df['object'].astype('category').memory_usage(deep=True)
5168
def _check_user_parameters(self, user_parameters):
    if not user_parameters:
        return
    for key in user_parameters:
        if key not in submission_defaults:
            raise ValueError("Unknown parameter {0}".format(key))
Verifies that the parameter dict given by the user only contains known keys. This ensures that the user detects typos faster.
def create_saml_provider(name, saml_metadata_document, region=None, key=None,
                         keyid=None, profile=None):
    conn = _get_conn(region=region, key=key, keyid=keyid, profile=profile)
    try:
        conn.create_saml_provider(saml_metadata_document, name)
        log.info('Successfully created %s SAML provider.', name)
        return True
    except boto.exception.BotoServerError as e:
        aws = __utils__['boto.get_error'](e)
        log.debug(aws)
        log.error('Failed to create SAML provider %s.', name)
        return False
Create SAML provider

CLI Example:

.. code-block:: bash

    salt myminion boto_iam.create_saml_provider my_saml_provider_name saml_metadata_document
def get_metadata(self):
    try:
        metadata = get_build_json()["metadata"]
        self.build_id = metadata["name"]
    except KeyError:
        self.log.error("No build metadata")
        raise
    for image in self.workflow.tag_conf.unique_images:
        self.pullspec_image = image
        break
    for image in self.workflow.tag_conf.primary_images:
        if '-' in image.tag[1:-1]:
            self.pullspec_image = image
            break
    if not self.pullspec_image:
        raise RuntimeError('Unable to determine pullspec_image')
    metadata_version = 0
    buildroot = self.get_buildroot(build_id=self.build_id)
    output_files = self.get_output(buildroot['id'])
    output = [output.metadata for output in output_files]
    koji_metadata = {
        'metadata_version': metadata_version,
        'buildroots': [buildroot],
        'output': output,
    }
    self.update_buildroot_koji(buildroot, output)
    return koji_metadata, output_files
Build the metadata needed for importing the build :return: tuple, the metadata and the list of Output instances
def tags(self, tags=None):
    if tags is None or not tags:
        return self
    nodes = []
    for node in self.nodes:
        if any(tag in node.extra['tags'] for tag in tags):
            nodes.append(node)
    self.nodes = nodes
    return self
Filter by tags.

:param tags: Tags to filter.
:type tags: ``list``

:return: A list of Node objects.
:rtype: ``list`` of :class:`Node`
def _init_imu(self):
    if not self._imu_init:
        self._imu_init = self._imu.IMUInit()
        if self._imu_init:
            self._imu_poll_interval = self._imu.IMUGetPollInterval() * 0.001
            self.set_imu_config(True, True, True)
        else:
            raise OSError('IMU Init Failed')
Internal. Initialises the IMU sensor via RTIMU
def ipv4(value, allow_empty=False):
    if not value and allow_empty is False:
        raise errors.EmptyValueError('value (%s) was empty' % value)
    elif not value:
        return None
    try:
        components = value.split('.')
    except AttributeError:
        raise errors.InvalidIPAddressError('value (%s) is not a valid ipv4' % value)
    if len(components) != 4 or not all(x.isdigit() for x in components):
        raise errors.InvalidIPAddressError('value (%s) is not a valid ipv4' % value)
    for x in components:
        try:
            x = integer(x, minimum=0, maximum=255)
        except ValueError:
            raise errors.InvalidIPAddressError('value (%s) is not a valid ipv4' % value)
    return value
Validate that ``value`` is a valid IP version 4 address.

:param value: The value to validate.
:param allow_empty: If ``True``, returns :obj:`None <python:None>` if
  ``value`` is empty. If ``False``, raises a
  :class:`EmptyValueError <validator_collection.errors.EmptyValueError>`
  if ``value`` is empty. Defaults to ``False``.
:type allow_empty: :class:`bool <python:bool>`

:returns: ``value`` / :obj:`None <python:None>`

:raises EmptyValueError: if ``value`` is empty and ``allow_empty``
  is ``False``
:raises InvalidIPAddressError: if ``value`` is not a valid IP version 4
  address or empty with ``allow_empty`` set to ``True``
def setorigin(self):
    try:
        origin = self.repo.remotes.origin
        if origin.url != self.origin_url:
            log.debug('[%s] Changing origin url. Old: %s New: %s',
                      self.name, origin.url, self.origin_url)
            origin.config_writer.set('url', self.origin_url)
    except AttributeError:
        origin = self.repo.create_remote('origin', self.origin_url)
        log.debug('[%s] Created remote "origin" with URL: %s',
                  self.name, origin.url)
Set the 'origin' remote to the upstream url that we trust.
def length_prepend(byte_string):
    length = tx.VarInt(len(byte_string))
    return length.to_bytes() + byte_string
bytes -> bytes
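A standalone sketch of the same length-prefixing idea, assuming `tx.VarInt` implements Bitcoin-style variable-length integers (an assumption; the real class is not shown in this pair). The `varint` helper here is illustrative only.

```python
import struct

def varint(n):
    # Minimal Bitcoin-style CompactSize encoder, for illustration.
    if n < 0xfd:
        return struct.pack('<B', n)          # single length byte
    if n <= 0xffff:
        return b'\xfd' + struct.pack('<H', n)  # 0xfd marker + uint16 LE
    if n <= 0xffffffff:
        return b'\xfe' + struct.pack('<I', n)  # 0xfe marker + uint32 LE
    return b'\xff' + struct.pack('<Q', n)      # 0xff marker + uint64 LE

def length_prepend(byte_string):
    # bytes -> bytes: prefix the payload with its encoded length.
    return varint(len(byte_string)) + byte_string
```

For short payloads the prefix is a single byte, e.g. `length_prepend(b'abc')` starts with `\x03`; a 300-byte payload gets the three-byte prefix `\xfd\x2c\x01`.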
def delayed_assattr(self, node):
    try:
        frame = node.frame()
        for inferred in node.expr.infer():
            if inferred is util.Uninferable:
                continue
            try:
                if inferred.__class__ is bases.Instance:
                    inferred = inferred._proxied
                    iattrs = inferred.instance_attrs
                    if not _can_assign_attr(inferred, node.attrname):
                        continue
                elif isinstance(inferred, bases.Instance):
                    continue
                elif inferred.is_function:
                    iattrs = inferred.instance_attrs
                else:
                    iattrs = inferred.locals
            except AttributeError:
                continue
            values = iattrs.setdefault(node.attrname, [])
            if node in values:
                continue
            if (
                frame.name == "__init__"
                and values
                and values[0].frame().name != "__init__"
            ):
                values.insert(0, node)
            else:
                values.append(node)
    except exceptions.InferenceError:
        pass
Visit an AssAttr node.

This adds the name to locals and handles member definitions.
def show(self, visible=True, run=False):
    self._backend._vispy_set_visible(visible)
    if run:
        self.app.run()
Show or hide the canvas

Parameters
----------
visible : bool
    Make the canvas visible.
run : bool
    Run the backend event loop.
def replace_meta(self, name, content=None, meta_key=None):
    children = self.meta._children
    if not content:
        children = tuple(children)
    meta_key = meta_key or 'name'
    for child in children:
        if child.attr(meta_key) == name:
            if content:
                child.attr('content', content)
            else:
                self.meta._children.remove(child)
            return
    if content:
        self.add_meta(**{meta_key: name, 'content': content})
Replace the ``content`` attribute of meta tag ``name`` If the meta with ``name`` is not available, it is added, otherwise its content is replaced. If ``content`` is not given or it is empty the meta tag with ``name`` is removed.
def add_bookmark(self, name, time, chan=''):
    try:
        bookmarks = self.rater.find('bookmarks')
    except AttributeError:
        raise IndexError('You need to have at least one rater')
    new_bookmark = SubElement(bookmarks, 'bookmark')
    bookmark_name = SubElement(new_bookmark, 'bookmark_name')
    bookmark_name.text = name
    bookmark_time = SubElement(new_bookmark, 'bookmark_start')
    bookmark_time.text = str(time[0])
    bookmark_time = SubElement(new_bookmark, 'bookmark_end')
    bookmark_time.text = str(time[1])
    if isinstance(chan, (tuple, list)):
        chan = ', '.join(chan)
    event_chan = SubElement(new_bookmark, 'bookmark_chan')
    event_chan.text = chan
    self.save()
Add a new bookmark

Parameters
----------
name : str
    name of the bookmark
time : (float, float)
    float with start and end time in s

Raises
------
IndexError
    When there is no selected rater
def requestAvatarId(self, credentials):
    username, domain = credentials.username.split("@")
    key = self.users.key(domain, username)
    if key is None:
        return defer.fail(UnauthorizedLogin())

    def _cbPasswordChecked(passwordIsCorrect):
        if passwordIsCorrect:
            return username + '@' + domain
        else:
            raise UnauthorizedLogin()

    return defer.maybeDeferred(credentials.checkPassword,
                               key).addCallback(_cbPasswordChecked)
Return the ID associated with these credentials.

@param credentials: something which implements one of the interfaces in
    self.credentialInterfaces.

@return: a Deferred which will fire a string which identifies an avatar,
    an empty tuple to specify an authenticated anonymous user (provided
    as checkers.ANONYMOUS) or fire a Failure(UnauthorizedLogin).

@see: L{twisted.cred.credentials}
def save(self, fname=''):
    if fname != '':
        with open(fname, 'w') as f:
            for i in self.lstPrograms:
                f.write(self.get_file_info_line(i, ','))
    filemap = mod_filemap.FileMap([], [])
    object_fileList = filemap.get_full_filename(
        filemap.find_type('OBJECT'), filemap.find_ontology('FILE-PROGRAM')[0])
    print('object_fileList = ' + object_fileList + '\n')
    if os.path.exists(object_fileList):
        os.remove(object_fileList)
    self.lstPrograms.sort()
    try:
        with open(object_fileList, 'a') as f:
            f.write('\n'.join([i[0] for i in self.lstPrograms]))
    except Exception as ex:
        print('ERROR = cant write to object_filelist ', object_fileList, str(ex))
Save the list of items to AIKIF core and optionally to local file fname
def compile_with_symbol(self, func, theano_args=None, owner=None):
    if theano_args is None:
        theano_args = []
    upc = UpdateCollector()
    theano_ret = func(*theano_args) if owner is None \
        else func(owner, *theano_args)
    out = copy.copy(self.default_options)
    out['outputs'] = theano_ret
    out['updates'] = upc.extract_updates()
    return theano.function(theano_args, **out)
Compile the function with theano symbols
def _serialize(self, value, attr, obj):
    try:
        return super(DateString, self)._serialize(
            arrow.get(value).date(), attr, obj)
    except ParserError:
        return missing
Serialize an ISO8601-formatted date.
def get_authorizations_on_date(self, from_, to):
    authorization_list = []
    for authorization in self.get_authorizations():
        if overlap(from_, to, authorization.start_date, authorization.end_date):
            authorization_list.append(authorization)
    return objects.AuthorizationList(authorization_list, runtime=self._runtime)
Gets an ``AuthorizationList`` effective during the entire given date
range inclusive but not confined to the date range.

arg:    from (osid.calendaring.DateTime): starting date
arg:    to (osid.calendaring.DateTime): ending date
return: (osid.authorization.AuthorizationList) - the returned
        ``Authorization`` list
raise:  InvalidArgument - ``from`` is greater than ``to``
raise:  NullArgument - ``from`` or ``to`` is ``null``
raise:  OperationFailed - unable to complete request
raise:  PermissionDenied - authorization failure occurred

*compliance: mandatory -- This method must be implemented.*
def read_message_type(file):
    message_byte = file.read(1)
    if message_byte == b'':
        return ConnectionClosed
    message_number = message_byte[0]
    return _message_types.get(message_number, UnknownMessage)
Read the message type from a file.
def ccifrm(frclss, clssid, lenout=_default_len_out):
    frclss = ctypes.c_int(frclss)
    clssid = ctypes.c_int(clssid)
    lenout = ctypes.c_int(lenout)
    frcode = ctypes.c_int()
    frname = stypes.stringToCharP(lenout)
    center = ctypes.c_int()
    found = ctypes.c_int()
    libspice.ccifrm_c(frclss, clssid, lenout, ctypes.byref(frcode), frname,
                      ctypes.byref(center), ctypes.byref(found))
    return frcode.value, stypes.toPythonString(
        frname), center.value, bool(found.value)
Return the frame name, frame ID, and center associated with a given
frame class and class ID.

http://naif.jpl.nasa.gov/pub/naif/toolkit_docs/C/cspice/ccifrm_c.html

:param frclss: Class of frame.
:type frclss: int
:param clssid: Class ID of frame.
:type clssid: int
:param lenout: Maximum length of output string.
:type lenout: int
:return: the frame name, frame ID, center.
:rtype: tuple
def get_fields(config):
    for data in config['scraping']['data']:
        if data['field'] != '':
            yield data['field']
    if 'next' in config['scraping']:
        for n in config['scraping']['next']:
            for f in get_fields(n):
                yield f
Recursive generator that yields the field names in the config file :param config: The configuration file that contains the specification of the extractor :return: The field names in the config file, through a generator
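A self-contained illustration of the recursive generator described above; the function is restated so the example runs alone, and the config structure (`'title'`, `'price'` fields) is invented for demonstration.

```python
def get_fields(config):
    # Yield non-empty field names at this level...
    for data in config['scraping']['data']:
        if data['field'] != '':
            yield data['field']
    # ...then recurse into any nested 'next' configs.
    if 'next' in config['scraping']:
        for n in config['scraping']['next']:
            for f in get_fields(n):
                yield f

config = {
    'scraping': {
        'data': [{'field': 'title'}, {'field': ''}],  # empty fields are skipped
        'next': [
            {'scraping': {'data': [{'field': 'price'}]}},
        ],
    }
}
fields = list(get_fields(config))
```

Here `fields` comes out as `['title', 'price']`: the empty field is skipped and the nested config is walked depth-first.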
def cluster_del_slots(self, slot, *slots):
    slots = (slot,) + slots
    if not all(isinstance(s, int) for s in slots):
        raise TypeError("All parameters must be of type int")
    fut = self.execute(b'CLUSTER', b'DELSLOTS', *slots)
    return wait_ok(fut)
Set hash slots as unbound in receiving node.
def _set_value(self, entity, value):
    if entity._projection:
        raise ReadonlyPropertyError(
            'You cannot set property values of a projection entity')
    if self._repeated:
        if not isinstance(value, (list, tuple, set, frozenset)):
            raise datastore_errors.BadValueError(
                'Expected list or tuple, got %r' % (value,))
        value = [self._do_validate(v) for v in value]
    else:
        if value is not None:
            value = self._do_validate(value)
    self._store_value(entity, value)
Internal helper to set a value in an entity for a Property. This performs validation first. For a repeated Property the value should be a list.
def closed_by(self, **kwargs):
    path = '%s/%s/closed_by' % (self.manager.path, self.get_id())
    return self.manager.gitlab.http_get(path, **kwargs)
List merge requests that will close the issue when merged.

Args:
    **kwargs: Extra options to send to the server (e.g. sudo)

Raises:
    GitlabAuthenticationError: If authentication is not correct
    GitlabGetError: If the merge requests could not be retrieved

Returns:
    list: The list of merge requests.
def get_zone_variable(self, zone_id, variable):
    try:
        return self._retrieve_cached_zone_variable(zone_id, variable)
    except UncachedVariable:
        return (yield from self._send_cmd("GET %s.%s" % (
            zone_id.device_str(), variable)))
Retrieve the current value of a zone variable. If the variable is not found in the local cache then the value is requested from the controller.
def socket_recv(self):
    try:
        data = self.sock.recv(2048)
    except socket.error as ex:
        print("?? socket.recv() error '%d:%s' from %s" %
              (ex.args[0], ex.args[1], self.addrport()))
        raise BogConnectionLost()
    size = len(data)
    if size == 0:
        raise BogConnectionLost()
    self.last_input_time = time.time()
    self.bytes_received += size
    for byte in data:
        self._iac_sniffer(byte)
    while True:
        mark = self.recv_buffer.find('\n')
        if mark == -1:
            break
        cmd = self.recv_buffer[:mark].strip()
        self.command_list.append(cmd)
        self.cmd_ready = True
        self.recv_buffer = self.recv_buffer[mark + 1:]
Called by TelnetServer when recv data is ready.
def _load_themes(self):
    with utils.temporary_chdir(utils.get_file_directory()):
        self._append_theme_dir("themes")
        self.tk.eval("source themes/pkgIndex.tcl")
        theme_dir = "gif" if not self.png_support else "png"
        self._append_theme_dir(theme_dir)
        self.tk.eval("source {}/pkgIndex.tcl".format(theme_dir))
        self.tk.call("package", "require", "ttk::theme::scid")
Load the themes into the Tkinter interpreter
def createBamHeader(self, baseHeader):
    header = dict(baseHeader)
    newSequences = []
    for index, referenceInfo in enumerate(header['SQ']):
        if index < self.numChromosomes:
            referenceName = referenceInfo['SN']
            assert referenceName == self.chromosomes[index]
            newReferenceInfo = {
                'AS': self.referenceSetName,
                'SN': referenceName,
                'LN': 0,
                'UR': 'http://example.com',
                'M5': 'dbb6e8ece0b5de29da56601613007c2a',
                'SP': 'Human'
            }
            newSequences.append(newReferenceInfo)
    header['SQ'] = newSequences
    return header
Creates a new bam header based on the specified header from the parent BAM file.
def ceil_nearest(x, dx=1):
    precision = get_sig_digits(dx)
    return round(math.ceil(float(x) / dx) * dx, precision)
ceil a number to within a given rounding accuracy
def set_title(self, msg):
    self.s.move(0, 0)
    self.overwrite_line(msg, curses.A_REVERSE)
Set first header line text
def readfmt(stream, fmt):
    size = struct.calcsize(fmt)
    blob = stream.read(size)
    return struct.unpack(fmt, blob)
Read and unpack an object from stream, using a struct format string.
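A runnable sketch of the helper above using only the standard library; the in-memory stream and the `'>HI'` format (big-endian uint16 + uint32) are chosen just for illustration.

```python
import io
import struct

def readfmt(stream, fmt):
    # Read exactly as many bytes as the struct format needs, then unpack.
    size = struct.calcsize(fmt)
    blob = stream.read(size)
    return struct.unpack(fmt, blob)

# A fake stream: a 6-byte big-endian header (uint16, uint32) then a payload.
stream = io.BytesIO(struct.pack('>HI', 7, 1024) + b'trailing')
values = readfmt(stream, '>HI')  # consumes only the 6 header bytes
rest = stream.read()             # the payload is left untouched
```

`values` is the tuple `(7, 1024)` and `rest` is `b'trailing'`, since `struct.calcsize('>HI')` is 6 and only those bytes are consumed.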
def lock_up_period(self, lock_up_period):
    try:
        if isinstance(lock_up_period, (str, int)):
            self._lock_up_period = int(lock_up_period)
    except Exception:
        raise ValueError('invalid input of lock up period %s, cannot be '
                         'converted to an int' % lock_up_period)
This lockup period is in months. This might change to a relative delta.
def set_timezone(rollback=False):
    if not rollback:
        if contains(filename='/etc/timezone', text=env.TIME_ZONE, use_sudo=True):
            return False
        if env.verbosity:
            print env.host, "CHANGING TIMEZONE /etc/timezone to " + env.TIME_ZONE
        _backup_file('/etc/timezone')
        sudo('echo %s > /tmp/timezone' % env.TIME_ZONE)
        sudo('cp -f /tmp/timezone /etc/timezone')
        sudo('dpkg-reconfigure --frontend noninteractive tzdata')
    else:
        _restore_fie('/etc/timezone')
        sudo('dpkg-reconfigure --frontend noninteractive tzdata')
    return True
Set the time zone on the server using Django settings.TIME_ZONE
def EncryptPrivateKey(self, decrypted):
    aes = AES.new(self._master_key, AES.MODE_CBC, self._iv)
    return aes.encrypt(decrypted)
Encrypt the provided plaintext with the initialized private key. Args: decrypted (byte string): the plaintext to be encrypted. Returns: bytes: the ciphertext.
def save_srm(self, filename):
    with open(filename, 'wb') as fp:
        raw_data = bread.write(self._song_data, spec.song)
        fp.write(raw_data)
Save a project in .srm format to the target file. :param filename: the name of the file to which to save
def reload(self):
    text = self._read(self.location)
    cursor_position = min(self.buffer.cursor_position, len(text))
    self.buffer.document = Document(text, cursor_position)
    self._file_content = text
Reload file again from storage.
def create_customer(self, *, full_name, email):
    payload = {
        "fullName": full_name,
        "email": email
    }
    return self.client._post(self.url + 'customers', json=payload,
                             headers=self.get_headers())
Creation of a customer in the system. Args: full_name: Customer's complete name. Alphanumeric. Max: 255. email: Customer's email address. Alphanumeric. Max: 255. Returns:
def execute_process_async(func, *args, **kwargs):
    global _GIPC_EXECUTOR
    if _GIPC_EXECUTOR is None:
        _GIPC_EXECUTOR = GIPCExecutor(
            num_procs=settings.node.gipc_pool_size,
            num_greenlets=settings.node.greenlet_pool_size)
    return _GIPC_EXECUTOR.submit(func, *args, **kwargs)
Executes `func` in a separate process. Memory and other resources are not
available. This gives true concurrency at the cost of losing access to
these resources. `args` and `kwargs` are passed through to `func`.
def get_tunnels(self):
    method = 'GET'
    endpoint = '/rest/v1/{}/tunnels'.format(self.client.sauce_username)
    return self.client.request(method, endpoint)
Retrieves all running tunnels for a specific user.
def move_right(self):
    self.at(ardrone.at.pcmd, True, self.speed, 0, 0, 0)
Make the drone move right.
def salt_ssh():
    import salt.cli.ssh
    if '' in sys.path:
        sys.path.remove('')
    try:
        client = salt.cli.ssh.SaltSSH()
        _install_signal_handlers(client)
        client.run()
    except SaltClientError as err:
        trace = traceback.format_exc()
        try:
            hardcrash = client.options.hard_crash
        except (AttributeError, KeyError):
            hardcrash = False
        _handle_interrupt(
            SystemExit(err), err, hardcrash, trace=trace)
Execute the salt-ssh system
def get_xpub(xpub, filter=None, limit=None, offset=None, api_code=None): resource = 'multiaddr?active=' + xpub if filter is not None: if isinstance(filter, FilterType): resource += '&filter=' + str(filter.value) else: raise ValueError('Filter must be of FilterType enum') if limit is not None: resource += '&limit=' + str(limit) if offset is not None: resource += '&offset=' + str(offset) if api_code is not None: resource += '&api_code=' + api_code response = util.call_api(resource) json_response = json.loads(response) return Xpub(json_response)
Get data for a single xpub including balance and list of relevant transactions. :param str xpub: address (xpub) to look up :param FilterType filter: the filter for transaction selection (optional) :param int limit: limit the number of transactions to fetch (optional) :param int offset: number of transactions to skip when fetching (optional) :param str api_code: Blockchain.info API code (optional) :return: an instance of :class:`Xpub` class
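A minimal sketch of how the `multiaddr` resource string above is assembled from the optional parameters; the parameter names mirror the function signature, and the endpoint layout is taken directly from the code, not verified against the live API.

```python
def build_multiaddr_resource(xpub, filter_value=None, limit=None,
                             offset=None, api_code=None):
    """Assemble the 'multiaddr' resource path with optional query params."""
    resource = 'multiaddr?active=' + xpub
    if filter_value is not None:
        resource += '&filter=' + str(filter_value)
    if limit is not None:
        resource += '&limit=' + str(limit)
    if offset is not None:
        resource += '&offset=' + str(offset)
    if api_code is not None:
        resource += '&api_code=' + api_code
    return resource
```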
def on_btn_delete_fit(self, event): self.delete_fit(self.current_fit, specimen=self.s)
removes the current interpretation Parameters ---------- event : the wx.ButtonEvent that triggered this function
def edit(self, request, id): with pushd(tempfile.gettempdir()): try: self.clone(id) with pushd(id): files = [f for f in os.listdir('.') if os.path.isfile(f)] quoted = ['"{}"'.format(f) for f in files] os.system("{} {}".format(self.editor, ' '.join(quoted))) os.system('git commit -av && git push') finally: shutil.rmtree(id)
Edit a gist The files in the gist are cloned to a temporary directory and passed to the default editor (defined by the EDITOR environmental variable). When the user exits the editor, they will be provided with a prompt to commit the changes, which will then be pushed to the remote. Arguments: request: an initial request object id: the gist identifier
def _show_input_processor_key_buffer(self, cli, new_screen): key_buffer = cli.input_processor.key_buffer if key_buffer and _in_insert_mode(cli) and not cli.is_done: data = key_buffer[-1].data if get_cwidth(data) == 1: cpos = new_screen.cursor_position new_screen.data_buffer[cpos.y][cpos.x] = \ _CHAR_CACHE[data, Token.PartialKeyBinding]
When the user is typing a key binding that consists of several keys, display the last pressed key if the user is in insert mode and the key is meaningful to be displayed. E.g. Some people want to bind 'jj' to escape in Vi insert mode. But the first 'j' needs to be displayed in order to get some feedback.
def get_restricted_sites(self, request): try: return request.user.get_sites() except AttributeError: return Site.objects.none()
The sites on which the user has permission. To return the permissions, the method checks for the ``get_sites`` method on the user instance (e.g.: ``return request.user.get_sites()``) which must return the queryset of enabled sites. If the attribute does not exist, an empty queryset is returned. :param request: current request :return: a queryset of available sites
async def async_connect(self): kwargs = { 'username': self._username if self._username else None, 'client_keys': [self._ssh_key] if self._ssh_key else None, 'port': self._port, 'password': self._password if self._password else None, 'known_hosts': None } self._client = await asyncssh.connect(self._host, **kwargs) self._connected = True
Establishes the SSH connection using the stored credentials and marks the client as connected.
def _explore_storage(self): path = '' dirs = [path] while dirs: path = dirs.pop() subdirs, files = self.media_storage.listdir(path) for media_filename in files: yield os.path.join(path, media_filename) dirs.extend([os.path.join(path, subdir) for subdir in subdirs])
Generator of all files contained in media storage.
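The generator above walks the storage with an explicit stack rather than recursion. A minimal sketch of the same traversal, using a plain dict in place of the Django storage backend (whose `listdir` returns a `(subdirs, files)` pair per directory):

```python
# Fake storage: path -> (subdirectories, files), standing in for
# media_storage.listdir in the real method.
FAKE_STORAGE = {
    '': (['img'], ['a.txt']),
    'img': ([], ['b.png', 'c.png']),
}

def explore(storage):
    """Yield every file path reachable from the storage root."""
    dirs = ['']
    while dirs:
        path = dirs.pop()
        subdirs, files = storage[path]
        for name in files:
            yield '/'.join(p for p in (path, name) if p)
        dirs.extend('/'.join(p for p in (path, d) if p) for d in subdirs)
```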
def load_config(self, custom_config): self.config = configparser.ConfigParser() if custom_config: self.config.read(custom_config) return f'Loading config from file {custom_config}.' home = os.path.expanduser('~{}'.format(getpass.getuser())) home_conf_file = os.path.join(home, '.cronyrc') system_conf_file = '/etc/crony.conf' conf_precedence = (home_conf_file, system_conf_file) for conf_file in conf_precedence: if os.path.exists(conf_file): self.config.read(conf_file) return f'Loading config from file {conf_file}.' self.config['crony'] = {} return 'No config file found.'
Attempt to load config from file. If the command specified a --config parameter, then load that config file. Otherwise, the user's home directory takes precedence over a system wide config. Config file in the user's dir should be named ".cronyrc". System wide config should be located at "/etc/crony.conf"
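The precedence rule described above (explicit `--config` wins, then the home-directory file, then the system-wide file) can be sketched as a small pure function; the `exists` predicate is injected here purely so the rule is testable without touching the filesystem, which the real method does not do.

```python
def pick_config(custom, candidates, exists):
    """Return the first config path that applies, or None if no file is found."""
    if custom:
        return custom
    for path in candidates:
        if exists(path):
            return path
    return None
```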
def crop_box(endpoint=None, filename=None): crop_size = current_app.config['AVATARS_CROP_BASE_WIDTH'] if endpoint is None or filename is None: url = url_for('avatars.static', filename='default/default_l.jpg') else: url = url_for(endpoint, filename=filename) return Markup('<img src="%s" id="crop-box" style="max-width: %dpx; display: block;">' % (url, crop_size))
Create a crop box. :param endpoint: The endpoint of the view function that serves the avatar image file. :param filename: The filename of the image that needs to be cropped.
def _config_params(base_config, assoc_files, region, out_file, items): params = [] dbsnp = assoc_files.get("dbsnp") if dbsnp: params += ["--dbsnp", dbsnp] cosmic = assoc_files.get("cosmic") if cosmic: params += ["--cosmic", cosmic] variant_regions = bedutils.population_variant_regions(items) region = subset_variant_regions(variant_regions, region, out_file, items) if region: params += ["-L", bamprep.region_to_gatk(region), "--interval_set_rule", "INTERSECTION"] min_af = tz.get_in(["algorithm", "min_allele_fraction"], base_config) if min_af: params += ["--minimum_mutation_cell_fraction", "%.2f" % (min_af / 100.0)] resources = config_utils.get_resources("mutect", base_config) if resources.get("options") is not None: params += [str(x) for x in resources.get("options", [])] if "--enable_qscore_output" not in params: params.append("--enable_qscore_output") return params
Add parameters based on configuration variables, associated files and genomic regions.
def find_resistance(record): for feature in record.features: labels = set(feature.qualifiers.get("label", [])) cassettes = labels.intersection(_ANTIBIOTICS) if len(cassettes) > 1: raise RuntimeError("multiple resistance cassettes detected") elif len(cassettes) == 1: return _ANTIBIOTICS.get(cassettes.pop()) raise RuntimeError("could not find the resistance of '{}'".format(record.id))
Infer the antibiotics resistance of the given record. Arguments: record (`~Bio.SeqRecord.SeqRecord`): an annotated sequence. Raises: RuntimeError: when there's not exactly one resistance cassette.
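The cassette lookup above boils down to intersecting a feature's labels with a known-antibiotics table. A sketch of that core logic, with placeholder table entries (the real `_ANTIBIOTICS` mapping is not shown in the source):

```python
# Placeholder antibiotics table; entries are illustrative only.
ANTIBIOTICS = {'KanR': 'kanamycin', 'AmpR': 'ampicillin'}

def resistance_of(labels):
    """Return the resistance for a set of labels, or None if absent."""
    cassettes = set(labels) & set(ANTIBIOTICS)
    if len(cassettes) > 1:
        raise RuntimeError('multiple resistance cassettes detected')
    if len(cassettes) == 1:
        return ANTIBIOTICS[cassettes.pop()]
    return None
```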
def validate(self): errors = [] app = errors.append if not self.hint_cores >= self.mpi_procs * self.omp_threads >= self.min_cores: app("self.hint_cores >= mpi_procs * omp_threads >= self.min_cores not satisfied") if self.omp_threads > self.hw.cores_per_node: app("omp_threads > hw.cores_per_node") if self.mem_per_proc > self.hw.mem_per_node: app("mem_per_proc > hw.mem_per_node") if not self.max_mem_per_proc >= self.mem_per_proc >= self.min_mem_per_proc: app("self.max_mem_per_proc >= mem_mb >= self.min_mem_per_proc not satisfied") if self.priority <= 0: app("priority must be > 0") if not (1 <= self.min_cores <= self.hw.num_cores >= self.hint_cores): app("1 <= min_cores <= hardware num_cores >= hint_cores not satisfied") if errors: raise self.Error(str(self) + "\n".join(errors))
Validate the parameters of the run. Raises self.Error if invalid parameters.
def range_initialization(X, num_weights): X_ = X.reshape(-1, X.shape[-1]) min_val, max_val = X_.min(0), X_.max(0) data_range = max_val - min_val return data_range * np.random.rand(num_weights, X.shape[-1]) + min_val
Initialize the weights by calculating the range of the data. The data range is calculated by reshaping the input matrix to a 2D matrix, and then taking the min and max values over the columns. Parameters ---------- X : numpy array The input data. The data range is calculated over the last axis. num_weights : int The number of weights to initialize. Returns ------- new_weights : numpy array A new version of the weights, initialized to the data range specified by X.
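A pure-Python sketch of the range initialization described above (the original uses numpy): the per-column min and max define the data range, and each weight is drawn uniformly inside it. The `rng` hook is an assumption added here so the scaling is testable deterministically.

```python
import random

def range_init(rows, num_weights, rng=random.random):
    """Draw num_weights vectors uniformly within the per-column data range."""
    cols = list(zip(*rows))
    lo = [min(c) for c in cols]
    hi = [max(c) for c in cols]
    return [[lo[j] + (hi[j] - lo[j]) * rng() for j in range(len(cols))]
            for _ in range(num_weights)]
```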
def set_alpha_for_selection(self, alpha): selection = self.treeview_layers.get_selection() list_store, selected_iter = selection.get_selected() if selected_iter is None: return else: surface_name, original_alpha = list_store[selected_iter] self.set_alpha(surface_name, alpha) self.set_scale_alpha_from_selection()
Set alpha for selected layer.
def populate(self, **values): values = values.copy() fields = list(self.iterate_with_name()) for _, structure_name, field in fields: if structure_name in values: field.__set__(self, values.pop(structure_name)) for name, _, field in fields: if name in values: field.__set__(self, values.pop(name))
Populate the given values into matching fields, skipping names that do not correspond to any field.
def fit_cosine_function(wind): wind_daily = wind.groupby(wind.index.date).mean() wind_daily_hourly = pd.Series(index=wind.index, data=wind_daily.loc[wind.index.date].values) df = pd.DataFrame(data=dict(daily=wind_daily_hourly, hourly=wind)).dropna(how='any') x = np.array([df.daily, df.index.hour]) popt, pcov = scipy.optimize.curve_fit(_cosine_function, x, df.hourly) return popt
Fits a cosine function to observed hourly windspeed data. Args: wind: observed hourly windspeed data Returns: parameters needed to generate diurnal features of windspeed using a cosine function
def init(opts): if CONFIG_BASE_URL in opts['proxy']: CONFIG[CONFIG_BASE_URL] = opts['proxy'][CONFIG_BASE_URL] else: log.error('missing proxy property %s', CONFIG_BASE_URL) log.debug('CONFIG: %s', CONFIG)
Perform any needed setup.
async def start(self): self._loop.add_task(self._periodic_loop, name="periodic task for %s" % self._adapter.__class__.__name__, parent=self._task) self._adapter.add_callback('on_scan', functools.partial(_on_scan, self._loop, self)) self._adapter.add_callback('on_report', functools.partial(_on_report, self._loop, self)) self._adapter.add_callback('on_trace', functools.partial(_on_trace, self._loop, self)) self._adapter.add_callback('on_disconnect', functools.partial(_on_disconnect, self._loop, self))
Start the device adapter. See :meth:`AbstractDeviceAdapter.start`.
def group(self): for group in self._server.groups: if self.identifier in group.clients: return group
Get group.
def groupByNode(requestContext, seriesList, nodeNum, callback): return groupByNodes(requestContext, seriesList, callback, nodeNum)
Takes a serieslist and maps a callback to subgroups within as defined by a common node. Example:: &target=groupByNode(ganglia.by-function.*.*.cpu.load5,2,"sumSeries") Would return multiple series which are each the result of applying the "sumSeries" function to groups joined on the second node (0 indexed) resulting in a list of targets like:: sumSeries(ganglia.by-function.server1.*.cpu.load5), sumSeries(ganglia.by-function.server2.*.cpu.load5),...
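The grouping step the docstring describes, splitting series names on `.` and bucketing them by the value at `nodeNum`, can be sketched without the Graphite machinery (the real function also applies the callback to each bucket):

```python
from collections import defaultdict

def group_names_by_node(names, node_num):
    """Bucket dotted series names by their node_num-th component."""
    groups = defaultdict(list)
    for name in names:
        groups[name.split('.')[node_num]].append(name)
    return dict(groups)
```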
def remove_sbi_id(self, sbi_id): sbi_ids = self.sbi_ids sbi_ids.remove(sbi_id) DB.set_hash_value(self._key, 'sbi_ids', sbi_ids)
Remove an SBI Identifier.
def cleanup(self): keys = self.client.smembers(self.keys_container) for key in keys: entry = self.client.get(key) if entry: entry = pickle.loads(entry) if self._is_expired(entry, self.timeout): self.delete_entry(key)
Clean up all expired keys.
def print_verbose(self): print("Nodes: ") for a in (self.nodes(failed="all")): print(a) print("\nVectors: ") for v in (self.vectors(failed="all")): print(v) print("\nInfos: ") for i in (self.infos(failed="all")): print(i) print("\nTransmissions: ") for t in (self.transmissions(failed="all")): print(t) print("\nTransformations: ") for t in (self.transformations(failed="all")): print(t)
Print a verbose representation of a network.
def union(self, another_moc, *args): interval_set = self._interval_set.union(another_moc._interval_set) for moc in args: interval_set = interval_set.union(moc._interval_set) return self.__class__(interval_set)
Union between the MOC instance and other MOCs. Parameters ---------- another_moc : `~mocpy.moc.MOC` The MOC used for performing the union with self. args : `~mocpy.moc.MOC` Other additional MOCs to perform the union with. Returns ------- result : `~mocpy.moc.MOC`/`~mocpy.tmoc.TimeMOC` The resulting MOC.
def tell(self): self._check_open_file() if self._flushes_after_tell(): self.flush() if not self._append: return self._io.tell() if self._read_whence: write_seek = self._io.tell() self._io.seek(self._read_seek, self._read_whence) self._read_seek = self._io.tell() self._read_whence = 0 self._io.seek(write_seek) return self._read_seek
Return the file's current position. Returns: int, file's current position in bytes.
def has_bom(self, f): content = f.read(4) encoding = None m = RE_UTF_BOM.match(content) if m is not None: if m.group(1): encoding = 'utf-8-sig' elif m.group(2): encoding = 'utf-32' elif m.group(3): encoding = 'utf-32' elif m.group(4): encoding = 'utf-16' elif m.group(5): encoding = 'utf-16' return encoding
Check for UTF8, UTF16, and UTF32 BOMs.
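A stdlib-only sketch of the same BOM sniffing using the `codecs` module's BOM constants instead of a regex; note the UTF-32 checks must come before UTF-16, since the UTF-16 little-endian BOM (`FF FE`) is a prefix of the UTF-32 one (`FF FE 00 00`).

```python
import codecs

def sniff_bom(content):
    """Return an encoding name for a leading BOM, or None."""
    if content.startswith(codecs.BOM_UTF8):
        return 'utf-8-sig'
    if content.startswith((codecs.BOM_UTF32_LE, codecs.BOM_UTF32_BE)):
        return 'utf-32'
    if content.startswith((codecs.BOM_UTF16_LE, codecs.BOM_UTF16_BE)):
        return 'utf-16'
    return None
```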
def full_photos(self): if self._photos is None: if self.total_photo_count > 0: self.assert_bind_client() self._photos = self.bind_client.get_activity_photos(self.id, only_instagram=False) else: self._photos = [] return self._photos
Gets a list of photos using default options. :class:`list` of :class:`stravalib.model.ActivityPhoto` objects for this activity.
def _register_user_models(user_models, admin=None, schema=None): if any([issubclass(cls, AutomapModel) for cls in user_models]): AutomapModel.prepare( db.engine, reflect=True, schema=schema) for user_model in user_models: register_model(user_model, admin)
Register any user-defined models with the API Service. :param list user_models: A list of user-defined models to include in the API service
def pdf(self, mu): if self.transform is not None: mu = self.transform(mu) return self.pdf_internal(mu, df=self.df0, loc=self.loc0, scale=self.scale0, gamma=self.gamma0)
PDF for Skew t prior Parameters ---------- mu : float Latent variable for which the prior is being formed over Returns ---------- - p(mu)
def get_entity(self,entity_id): entity_node = self.map_entity_id_to_node.get(entity_id) if entity_node is not None: return Centity(node=entity_node,type=self.type) else: for entity_node in self.__get_entity_nodes(): if self.type == 'NAF': label_id = 'id' elif self.type == 'KAF': label_id = 'eid' if entity_node.get(label_id) == entity_id: return Centity(node=entity_node, type=self.type) return None
Returns the entity object for the given entity identifier @type entity_id: string @param entity_id: the entity identifier @rtype: L{Centity} @return: the entity object
def software_language(instance): for key, obj in instance['objects'].items(): if ('type' in obj and obj['type'] == 'software' and 'languages' in obj): for lang in obj['languages']: if lang not in enums.SOFTWARE_LANG_CODES: yield JSONError("The 'languages' property of object '%s' " "contains an invalid ISO 639-2 language " "code ('%s')." % (key, lang), instance['id'])
Ensure the 'languages' property of software objects contains valid ISO 639-2 language codes.
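The validation loop above reduces to scanning software objects and yielding each invalid code. A sketch with a tiny stand-in for the ISO 639-2 table (the real validator checks against the full `enums.SOFTWARE_LANG_CODES` list):

```python
# Illustrative subset of ISO 639-2 codes, not the full table.
VALID_CODES = {'eng', 'fra', 'deu'}

def invalid_languages(objects):
    """Yield (object key, bad code) for each invalid language entry."""
    for key, obj in objects.items():
        if obj.get('type') == 'software':
            for lang in obj.get('languages', []):
                if lang not in VALID_CODES:
                    yield (key, lang)
```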
def glyph_extents(self, glyphs): glyphs = ffi.new('cairo_glyph_t[]', glyphs) extents = ffi.new('cairo_text_extents_t *') cairo.cairo_glyph_extents( self._pointer, glyphs, len(glyphs), extents) self._check_status() return ( extents.x_bearing, extents.y_bearing, extents.width, extents.height, extents.x_advance, extents.y_advance)
Returns the extents for a list of glyphs. The extents describe a user-space rectangle that encloses the "inked" portion of the glyphs, (as it would be drawn by :meth:`show_glyphs`). Additionally, the :obj:`x_advance` and :obj:`y_advance` values indicate the amount by which the current point would be advanced by :meth:`show_glyphs`. :param glyphs: A list of glyphs. See :meth:`show_text_glyphs` for the data structure. :returns: A ``(x_bearing, y_bearing, width, height, x_advance, y_advance)`` tuple of floats. See :meth:`text_extents` for details.
def add_point_feature(self, resnum, feat_type=None, feat_id=None, qualifiers=None): if self.feature_file: raise ValueError('Feature file associated with sequence, please remove file association to append ' 'additional features.') if not feat_type: feat_type = 'Manually added protein sequence single residue feature' newfeat = SeqFeature(location=FeatureLocation(ExactPosition(resnum-1), ExactPosition(resnum)), type=feat_type, id=feat_id, qualifiers=qualifiers) self.features.append(newfeat)
Add a feature to the features list describing a single residue. Args: resnum (int): Protein sequence residue number feat_type (str, optional): Optional description of the feature type (ie. 'catalytic residue') feat_id (str, optional): Optional ID of the feature type (ie. 'TM1') qualifiers (dict, optional): Optional dictionary of qualifiers to attach to the feature
def get_all_items_of_credit_note(self, credit_note_id): return self._iterate_through_pages( get_function=self.get_items_of_credit_note_per_page, resource=CREDIT_NOTE_ITEMS, **{'credit_note_id': credit_note_id} )
Get all items of credit note This will iterate over all pages until it gets all elements. So if the rate limit is exceeded it will throw an Exception and you will get nothing :param credit_note_id: the credit note id :return: list
def _generateInitialModel(self, output_model_type): logger().info("Generating initial model for BHMM using MLHMM...") from bhmm.estimators.maximum_likelihood import MaximumLikelihoodEstimator mlhmm = MaximumLikelihoodEstimator(self.observations, self.nstates, reversible=self.reversible, output=output_model_type) model = mlhmm.fit() return model
Initialize using an MLHMM.