code (string, lengths 51–2.38k) · docstring (string, lengths 4–15.2k)
def total_count(self, total_count):
    if total_count is None:
        raise ValueError("Invalid value for `total_count`, must not be `None`")
    if total_count < 0:
        raise ValueError("Invalid value for `total_count`, must be a value greater than or equal to `0`")

    self._total_count = total_count
Sets the total_count of this ServicePackageQuotaHistoryResponse.
Sum of all quota history entries that should be returned.

:param total_count: The total_count of this ServicePackageQuotaHistoryResponse.
:type: int
def get_pattern_actual_step(self, patternnumber):
    _checkPatternNumber(patternnumber)
    address = _calculateRegisterAddress('actualstep', patternnumber)
    return self.read_register(address, 0)
Get the 'actual step' parameter for a given pattern.

Args:
    patternnumber (integer): 0-7

Returns:
    The 'actual step' parameter (int).
def load_plugin(module_name: str) -> bool:
    try:
        module = importlib.import_module(module_name)
        name = getattr(module, '__plugin_name__', None)
        usage = getattr(module, '__plugin_usage__', None)
        _plugins.add(Plugin(module, name, usage))
        logger.info(f'Succeeded to import "{module_name}"')
        return True
    except Exception as e:
        logger.error(f'Failed to import "{module_name}", error: {e}')
        logger.exception(e)
        return False
Load a module as a plugin.

:param module_name: name of module to import
:return: successful or not
def remove_cts_record(file_name, map, position):
    db = XonoticDB.load_path(file_name)
    db.remove_cts_record(map, position)
    db.save(file_name)
Remove the CTS record on MAP and POSITION.
def update(self, device_json=None, info_json=None, settings_json=None,
           avatar_json=None):
    if device_json:
        UTILS.update(self._device_json, device_json)
    if avatar_json:
        UTILS.update(self._avatar_json, avatar_json)
    if info_json:
        UTILS.update(self._info_json, info_json)
    if settings_json:
        UTILS.update(self._settings_json, settings_json)
Update the internal device json data.
def clear_dead_threads(self):
    for tid in self.get_thread_ids():
        aThread = self.get_thread(tid)
        if not aThread.is_alive():
            self._del_thread(aThread)
Remove Thread objects from the snapshot referring to threads no longer running.
def _create_session(self, username, password):
    session = requests.Session()
    session.verify = False
    try:
        response = session.get(self.host_url)
    except requests.exceptions.ConnectionError:
        return False

    soup = BeautifulSoup(response.text, 'html.parser')
    csrf_token = soup.find('input', dict(name='csrf_token'))['value']
    login_data = dict(username=username, password=password)
    session.headers.update({
        'x-csrftoken': csrf_token,
        'referer': self.host_url
    })
    _ = session.post('{0:s}/login/'.format(self.host_url), data=login_data)
    return session
Create HTTP session.

Args:
    username (str): Timesketch username
    password (str): Timesketch password

Returns:
    requests.Session: Session object.
def focusedWindow(cls):
    x, y, w, h = PlatformManager.getWindowRect(PlatformManager.getForegroundWindow())
    return Region(x, y, w, h)
Returns a Region corresponding to whatever window is in the foreground
def orchestrate_show_sls(mods,
                         saltenv='base',
                         test=None,
                         queue=False,
                         pillar=None,
                         pillarenv=None,
                         pillar_enc=None):
    if pillar is not None and not isinstance(pillar, dict):
        raise SaltInvocationError(
            'Pillar data must be formatted as a dictionary')

    __opts__['file_client'] = 'local'
    minion = salt.minion.MasterMinion(__opts__)
    running = minion.functions['state.show_sls'](
        mods,
        test,
        queue,
        pillar=pillar,
        pillarenv=pillarenv,
        pillar_enc=pillar_enc,
        saltenv=saltenv)

    ret = {minion.opts['id']: running}
    return ret
Display the state data from a specific sls, or list of sls files, after
being rendered using the master minion.

Note: the master minion adds a "_master" suffix to its minion id.

.. seealso:: The state.show_sls module function

CLI Example:

.. code-block:: bash

    salt-run state.orch_show_sls my-orch-formula.my-orch-state 'pillar={ nodegroup: ng1 }'
def get_meta(self):
    rdf = self.get_meta_rdf(fmt='n3')
    return PointMeta(self, rdf, self._client.default_lang, fmt='n3')
Get the metadata object for this Point.

Returns a [PointMeta](PointMeta.m.html#IoticAgent.IOT.PointMeta.PointMeta) object

- OR -

Raises [IOTException](./Exceptions.m.html#IoticAgent.IOT.Exceptions.IOTException)
containing the error if the infrastructure detects a problem

Raises [LinkException](../Core/AmqpLink.m.html#IoticAgent.Core.AmqpLink.LinkException)
if there is a communications problem between you and the infrastructure
def connect(self):
    connection = _mssql.connect(user=self.user, password=self.password,
                                server=self.host, port=self.port,
                                database=self.database)
    return connection
Create a SQL Server connection and return a connection object
def run(path, code=None, params=None, **meta):
    import _ast

    builtins = params.get("builtins", "")
    if builtins:
        builtins = builtins.split(",")

    tree = compile(code, path, "exec", _ast.PyCF_ONLY_AST)
    w = checker.Checker(tree, path, builtins=builtins)
    w.messages = sorted(w.messages, key=lambda m: m.lineno)
    return [{
        'lnum': m.lineno,
        'text': m.message % m.message_args,
        'type': m.message[0]
    } for m in w.messages]
Check code with pyflakes. :return list: List of errors.
def stop(self):
    self._target = self.position
    self.log.info('Stopping movement after user request.')
    return self.target, self.position
Stops the motor and returns the new target and position, which are equal
def call_heat(tstat):
    current_hsp, current_csp = tstat.heating_setpoint, tstat.cooling_setpoint
    current_temp = tstat.temperature
    tstat.write({
        'heating_setpoint': current_temp + 10,
        'cooling_setpoint': current_temp + 20,
        'mode': HEAT,
    })

    def restore():
        tstat.write({
            'heating_setpoint': current_hsp,
            'cooling_setpoint': current_csp,
            'mode': AUTO,
        })

    return restore
Adjusts the temperature setpoints in order to call for heating. Returns a handler to call when you want to reset the thermostat
def CollectionItemToClientPath(item, client_id=None):
    if isinstance(item, rdf_flows.GrrMessage):
        client_id = item.source
        item = item.payload
    elif isinstance(item, rdf_flow_objects.FlowResult):
        client_id = item.client_id
        item = item.payload

    if client_id is None:
        raise ValueError("Could not determine client_id.")
    elif isinstance(client_id, rdfvalue.RDFURN):
        client_id = client_id.Basename()

    if isinstance(item, rdf_client_fs.StatEntry):
        return db.ClientPath.FromPathSpec(client_id, item.pathspec)
    elif isinstance(item, rdf_file_finder.FileFinderResult):
        return db.ClientPath.FromPathSpec(client_id, item.stat_entry.pathspec)
    elif isinstance(item, collectors.ArtifactFilesDownloaderResult):
        if item.HasField("downloaded_file"):
            return db.ClientPath.FromPathSpec(client_id,
                                              item.downloaded_file.pathspec)

    raise ItemNotExportableError(item)
Converts given RDFValue to a ClientPath of a file to be downloaded.
def make_save_locals_impl():
    try:
        if '__pypy__' in sys.builtin_module_names:
            import __pypy__
            save_locals = __pypy__.locals_to_fast
    except:
        pass
    else:
        if '__pypy__' in sys.builtin_module_names:
            def save_locals_pypy_impl(frame):
                save_locals(frame)
            return save_locals_pypy_impl

    try:
        import ctypes
        locals_to_fast = ctypes.pythonapi.PyFrame_LocalsToFast
    except:
        pass
    else:
        def save_locals_ctypes_impl(frame):
            locals_to_fast(ctypes.py_object(frame), ctypes.c_int(0))
        return save_locals_ctypes_impl

    return None
Factory for the 'save_locals_impl' method. This may seem like a complicated pattern but it is essential that the method is created at module load time. Inner imports after module load time would cause an occasional debugger deadlock due to the importer lock and debugger lock being taken in different order in different threads.
def exportable(self):
    if 'ExportableCertification' in self._signature.subpackets:
        return bool(next(iter(self._signature.subpackets['ExportableCertification'])))
    return True
``False`` if this signature is marked as being not exportable. Otherwise, ``True``.
def save(self, *objects):
    if len(objects) > 0:
        self.session.add_all(objects)
        self.session.commit()
Add all the objects to the session and commit them. This only needs to be done for networks and participants.
def converge(f, step, tol, max_h):
    g = f(0)
    dx = 10000
    h = step
    while dx > tol:
        g2 = f(h)
        dx = abs(g - g2)
        g = g2
        h += step
        if h > max_h:
            raise Exception("Did not converge before {}".format(h))
    return g
Simple step-based convergence function: samples f at increasing h until
successive values differ by less than tol, raising if h exceeds max_h.
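A self-contained sketch of this loop with a concrete function. `math.exp(-h)` flattens out as `h` grows, so successive samples eventually differ by less than the tolerance:

```python
import math

def converge(f, step, tol, max_h):
    # Walk h forward in fixed increments until successive values of f
    # differ by less than tol, or raise once h exceeds max_h.
    g = f(0)
    dx = 10000
    h = step
    while dx > tol:
        g2 = f(h)
        dx = abs(g - g2)
        g = g2
        h += step
        if h > max_h:
            raise Exception("Did not converge before {}".format(h))
    return g

# exp(-h) decays toward 0, so the loop terminates well before max_h.
limit = converge(lambda h: math.exp(-h), step=0.5, tol=1e-3, max_h=50)
```

A function with a constant slope (e.g. `f(h) = h`) never satisfies the tolerance and triggers the `max_h` exception instead.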
def _clone(self, *args, **kwargs):
    for attr in ("_search_terms", "_search_fields", "_search_ordered"):
        kwargs[attr] = getattr(self, attr)
    return super(SearchableQuerySet, self)._clone(*args, **kwargs)
Ensure attributes are copied to subsequent queries.
def to_xml(self):
    for n, v in {"amount": self.amount,
                 "date": self.date,
                 "method": self.method}.items():
        if is_empty_or_none(v):
            raise PaymentError("'%s' attribute cannot be empty or "
                               "None." % n)

    doc = Document()
    root = doc.createElement("payment")
    super(Payment, self).to_xml(root)
    self._create_text_node(root, "amount", self.amount)
    self._create_text_node(root, "method", self.method)
    self._create_text_node(root, "reference", self.ref, True)
    self._create_text_node(root, "date", self.date)
    return root
Returns a DOM representation of the payment. @return: Element
def _enqueue_eor_msg(self, sor):
    if self._protocol.is_enhanced_rr_cap_valid() and not sor.eor_sent:
        afi = sor.afi
        safi = sor.safi
        eor = BGPRouteRefresh(afi, safi, demarcation=2)
        self.enque_outgoing_msg(eor)
        sor.eor_sent = True
Enqueues an Enhanced RR EOR if one has not already been sent for the given SOR.
def encode(input, output_filename):
    coder = rs.RSCoder(255, 223)
    output = []
    while True:
        block = input.read(223)
        if not block:
            break
        code = coder.encode_fast(block)
        output.append(code)
        sys.stderr.write(".")
    sys.stderr.write("\n")

    out = Image.new("L", (rowstride, len(output)))
    out.putdata("".join(output))
    out.save(output_filename)
Encodes the input data with reed-solomon error correction in 223 byte
blocks, and outputs each block along with 32 parity bytes to a new file
by the given filename.

input is a file-like object

The outputted image will be in png format, and will be 255 by x pixels
with one color channel. X is the number of 255 byte blocks from the
input. Each block of data will be one row; therefore, the data can be
recovered if no more than 16 pixels per row are altered.
def transfer(self, name, cache_key=None):
    if cache_key is None:
        cache_key = self.get_cache_key(name)
    return self.task.delay(name, cache_key,
                           self.local_path, self.remote_path,
                           self.local_options, self.remote_options)
Transfers the file with the given name to the remote storage backend by
queuing the task.

:param name: file name
:type name: str
:param cache_key: the cache key to set after a successful task run
:type cache_key: str
:rtype: task result
def _load_all_link_database(self):
    _LOGGER.debug("Starting: _load_all_link_database")
    self.devices.state = 'loading'
    self._get_first_all_link_record()
    _LOGGER.debug("Ending: _load_all_link_database")
Load the ALL-Link Database into object.
def load_from_docinfo(self, docinfo, delete_missing=False, raise_failure=False):
    for uri, shortkey, docinfo_name, converter in self.DOCINFO_MAPPING:
        qname = QName(uri, shortkey)
        val = docinfo.get(str(docinfo_name))
        if val is None:
            if delete_missing and qname in self:
                del self[qname]
            continue
        try:
            val = str(val)
            if converter:
                val = converter.xmp_from_docinfo(val)
            if not val:
                continue
            self[qname] = val
        except (ValueError, AttributeError) as e:
            msg = "The metadata field {} could not be copied to XMP".format(
                docinfo_name)
            if raise_failure:
                raise ValueError(msg) from e
            else:
                warn(msg)
Populate the XMP metadata object with DocumentInfo.

Arguments:
    docinfo: a DocumentInfo, e.g. pdf.docinfo
    delete_missing: if an entry is not in DocumentInfo, delete the
        equivalent from XMP
    raise_failure: if True, raise any failure to convert docinfo;
        otherwise warn and continue

A few entries in the deprecated DocumentInfo dictionary are considered
approximately equivalent to certain XMP records. This method copies
those entries into the XMP metadata.
def g_reuss(self):
    return 15. / (8. * self.compliance_tensor.voigt[:3, :3].trace() -
                  4. * np.triu(self.compliance_tensor.voigt[:3, :3]).sum() +
                  3. * self.compliance_tensor.voigt[3:, 3:].trace())
returns the G_r shear modulus
def _get_and_assert_slice_param(url_dict, param_name, default_int):
    param_str = url_dict['query'].get(param_name, default_int)
    try:
        n = int(param_str)
    except ValueError:
        raise d1_common.types.exceptions.InvalidRequest(
            0,
            'Slice parameter is not a valid integer. {}="{}"'.format(
                param_name, param_str),
        )
    if n < 0:
        raise d1_common.types.exceptions.InvalidRequest(
            0,
            'Slice parameter cannot be a negative number. {}="{}"'.format(
                param_name, param_str),
        )
    return n
Return ``param_str`` converted to an int. If the string cannot be
converted to an int, or the int is negative, raise InvalidRequest.
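A standalone sketch of the same validation, with `ValueError` standing in for the service's `InvalidRequest` exception type (the `d1_common` hierarchy is not shown in the source):

```python
def get_slice_param(query, param_name, default_int):
    # query: a dict of query-string parameters, as in url_dict['query'].
    param_str = query.get(param_name, default_int)
    try:
        n = int(param_str)
    except ValueError:
        # Stand-in for d1_common's InvalidRequest.
        raise ValueError(
            'Slice parameter is not a valid integer. {}="{}"'.format(
                param_name, param_str))
    if n < 0:
        raise ValueError(
            'Slice parameter cannot be a negative number. {}="{}"'.format(
                param_name, param_str))
    return n
```

Note the default is fed through `int()` as well, so a non-integer default would itself raise.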
def get_oxi_state_decorated_structure(self, structure):
    s = structure.copy()
    if s.is_ordered:
        valences = self.get_valences(s)
        s.add_oxidation_state_by_site(valences)
    else:
        valences = self.get_valences(s)
        s = add_oxidation_state_by_site_fraction(s, valences)
    return s
Get an oxidation state decorated structure. This currently works only
for ordered structures.

Args:
    structure: Structure to analyze

Returns:
    A modified structure that is oxidation state decorated.

Raises:
    ValueError if the valences cannot be determined.
def delete_object(container_name, object_name, profile, **libcloud_kwargs):
    conn = _get_driver(profile=profile)
    libcloud_kwargs = salt.utils.args.clean_kwargs(**libcloud_kwargs)
    obj = conn.get_object(container_name, object_name, **libcloud_kwargs)
    return conn.delete_object(obj)
Delete an object in the cloud

:param container_name: Container name
:type container_name: ``str``

:param object_name: Object name
:type object_name: ``str``

:param profile: The profile key
:type profile: ``str``

:param libcloud_kwargs: Extra arguments for the driver's delete_object method
:type libcloud_kwargs: ``dict``

:return: True if an object has been successfully deleted, False otherwise.
:rtype: ``bool``

CLI Example:

.. code-block:: bash

    salt myminion libcloud_storage.delete_object MyFolder me.jpg profile1
def conf_int(self, alpha=0.05, **kwargs):
    return self.arima_res_.conf_int(alpha=alpha, **kwargs)
Returns the confidence interval of the fitted parameters.

Parameters
----------
alpha : float, optional (default=0.05)
    The significance level for the confidence interval. I.e., the
    default alpha = 0.05 returns a 95% confidence interval.

**kwargs : keyword args or dict
    Keyword arguments to pass to the confidence interval function.
    Could include 'cols' or 'method'.
def dumps(data):
    if not isinstance(data, _TOMLDocument) and isinstance(data, dict):
        data = item(data)

    return data.as_string()
Dumps a TOMLDocument into a string.
def users_setPresence(self, *, presence: str, **kwargs) -> SlackResponse:
    kwargs.update({"presence": presence})
    return self.api_call("users.setPresence", json=kwargs)
Manually sets user presence.

Args:
    presence (str): Either 'auto' or 'away'.
def read_pid_file(pidfile_path):
    try:
        fin = open(pidfile_path, "r")
    except Exception:
        return None

    pid_data = fin.read().strip()
    fin.close()
    try:
        return int(pid_data)
    except ValueError:
        return None
Read the PID from the PID file
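A self-contained sketch of this helper, exercised against a temporary PID file (the context-manager form replaces the explicit open/close of the original):

```python
import os
import tempfile

def read_pid_file(pidfile_path):
    # Return the PID as an int, or None if the file is missing,
    # unreadable, or does not contain an integer.
    try:
        with open(pidfile_path, "r") as fin:
            return int(fin.read().strip())
    except (OSError, ValueError):
        return None

# Round-trip against a temporary PID file; whitespace is stripped.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "w") as f:
    f.write("  4242\n")
pid = read_pid_file(path)
os.unlink(path)
```

A missing path or garbage content both yield `None` rather than raising, matching the original's behavior.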
def _charSummary(self, char_count, block_count=None):
    if not self._display['omit_summary']:
        if block_count is None:
            print('Total code points:', char_count)
        else:
            print('Total {0} code point{1} in {2} block{3}'.format(
                char_count,
                's' if char_count != 1 else '',
                block_count,
                's' if block_count != 1 else ''
            ))
Displays characters summary.
def transform(self, X, lenscale=None):
    N, D = X.shape
    lenscale = self._check_dim(D, lenscale)[:, np.newaxis]

    WX = np.dot(X, self.W / lenscale)
    return np.hstack((np.cos(WX), np.sin(WX))) / np.sqrt(self.n)
Apply the random basis to X.

Parameters
----------
X : ndarray
    (N, d) array of observations where N is the number of samples, and
    d is the dimensionality of X.
lenscale : scalar or ndarray, optional
    scalar or array of shape (d,) length scales (one for each dimension
    of X). If not input, this uses the value of the initial length scale.

Returns
-------
ndarray
    of shape (N, 2*nbases) where nbases is number of random bases to
    use, given in the constructor.
def keep_segments(self, segments_to_keep, preserve_segmentation=True):
    v_ind, f_ind = self.vertex_indices_in_segments(segments_to_keep,
                                                   ret_face_indices=True)
    self.segm = {name: self.segm[name] for name in segments_to_keep}
    if not preserve_segmentation:
        self.segm = None
    self.f = self.f[f_ind]
    if self.ft is not None:
        self.ft = self.ft[f_ind]
    self.keep_vertices(v_ind)
Keep the faces and vertices for given segments, discarding all others. When preserve_segmentation is false self.segm is discarded for speed.
def build_image(image_path, image_name, build_args=None, dockerfile_path=None):
    cmd = ['docker', 'build', '-t', image_name, image_path]
    if dockerfile_path:
        cmd.extend(['-f', dockerfile_path])
    for k, v in (build_args or {}).items():
        cmd += ['--build-arg', '{}={}'.format(k, v)]
    check_call(cmd)
Build an image

Args:
    image_path (str): the path to the image directory
    image_name (str): image 'name:tag' to build
    build_args (dict, optional): dict of docker build arguments
    dockerfile_path (str, optional): path to dockerfile relative to
        image_path if not `image_path/Dockerfile`.
def draw_image(self, video_name, image_name, out, start, end, x, y, verbose=False):
    cfilter = (r"[0] [1] overlay=x={x}: y={y}:"
               "enable='between(t, {start}, {end})'").format(
                   x=x, y=y, start=start, end=end)
    call(['ffmpeg',
          '-i', video_name,
          '-i', image_name,
          '-c:v', 'huffyuv',
          '-y',
          '-preset', 'veryslow',
          '-filter_complex', cfilter,
          out])
Draws an image over the video

@param video_name: name of video input file
@param image_name: name of image input file
@param out: name of video output file
@param start: when to start overlay
@param end: when to end overlay
@param x: x pos of image
@param y: y pos of image
def _get_sts_token(self):
    logger.debug("Connecting to STS in region %s", self.region)
    sts = boto3.client('sts', region_name=self.region)
    arn = "arn:aws:iam::%s:role/%s" % (self.account_id, self.account_role)
    logger.debug("STS assume role for %s", arn)
    assume_kwargs = {
        'RoleArn': arn,
        'RoleSessionName': 'awslimitchecker'
    }
    if self.external_id is not None:
        assume_kwargs['ExternalId'] = self.external_id
    if self.mfa_serial_number is not None:
        assume_kwargs['SerialNumber'] = self.mfa_serial_number
    if self.mfa_token is not None:
        assume_kwargs['TokenCode'] = self.mfa_token
    role = sts.assume_role(**assume_kwargs)
    creds = ConnectableCredentials(role)
    creds.account_id = self.account_id
    logger.debug("Got STS credentials for role; access_key_id=%s "
                 "(account_id=%s)", creds.access_key, creds.account_id)
    return creds
Assume a role via STS and return the credentials.

First connect to STS via :py:func:`boto3.client`, then assume a role using
`boto3.STS.Client.assume_role
<https://boto3.readthedocs.org/en/latest/reference/services/sts.html#STS.Client.assume_role>`_
using ``self.account_id`` and ``self.account_role`` (and optionally
``self.external_id``, ``self.mfa_serial_number``, ``self.mfa_token``).
Return the resulting :py:class:`~.ConnectableCredentials` object.

:returns: STS assumed role credentials
:rtype: :py:class:`~.ConnectableCredentials`
def _can_send_eth(irs):
    for ir in irs:
        if isinstance(ir, (HighLevelCall, LowLevelCall, Transfer, Send)):
            if ir.call_value:
                return True
    return False
Detect if the node can send eth
def insert_pattern(pattern, model, index=0):
    if not pattern:
        return False

    pattern = pattern.replace(QChar(QChar.ParagraphSeparator), QString("\n"))
    pattern = foundations.common.get_first_item(
        foundations.strings.to_string(pattern).split("\n"))
    model.insert_pattern(foundations.strings.to_string(pattern), index)
    return True
Inserts given pattern into given Model.

:param pattern: Pattern.
:type pattern: unicode
:param model: Model.
:type model: PatternsModel
:param index: Insertion index.
:type index: int
:return: Method success.
:rtype: bool
def upload_buffer(self, target_id, page, address, buff):
    count = 0
    pk = CRTPPacket()
    pk.set_header(0xFF, 0xFF)
    # Use a bytearray so the payload bytes can be appended below.
    pk.data = bytearray(struct.pack('=BBHH', target_id, 0x14, page, address))

    for i in range(0, len(buff)):
        pk.data.append(buff[i])
        count += 1
        if count > 24:
            self.link.send_packet(pk)
            count = 0
            pk = CRTPPacket()
            pk.set_header(0xFF, 0xFF)
            pk.data = bytearray(
                struct.pack('=BBHH', target_id, 0x14, page, i + address + 1))

    self.link.send_packet(pk)
Upload data into a buffer on the Crazyflie
def scan_for_spec(keyword):
    keyword = keyword.lstrip('(').rstrip(')')
    matches = release_line_re.findall(keyword)
    if matches:
        return Spec(">={}".format(matches[0]))
    try:
        return Spec(keyword)
    except ValueError:
        return None
Attempt to return some sort of Spec from given keyword value. Returns None if one could not be derived.
def _get_bib_element(bibitem, element):
    lst = [i.strip() for i in bibitem.split("\n")]
    for i in lst:
        if i.startswith(element):
            value = i.split("=", 1)[-1]
            value = value.strip()
            while value.endswith(','):
                value = value[:-1]
            while value.startswith('{') or value.startswith('"'):
                value = value[1:-1]
            return value
    return None
Return element from bibitem or None.

Parameters
----------
bibitem : str
    A single BibTeX entry.
element : str
    Field name to extract, e.g. "author".

Returns
-------
str or None
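A self-contained sketch of this extractor against a small BibTeX entry. Trailing commas are stripped first, then one layer of braces or quotes per loop pass:

```python
def get_bib_element(bibitem, element):
    # Scan each line of the entry for one starting with the field name.
    lst = [i.strip() for i in bibitem.split("\n")]
    for i in lst:
        if i.startswith(element):
            value = i.split("=", 1)[-1].strip()
            while value.endswith(','):
                value = value[:-1]
            while value.startswith('{') or value.startswith('"'):
                value = value[1:-1]
            return value
    return None

entry = """@article{smith2020,
  author = {Smith, Jane},
  year = {2020},
}"""
```

Note the prefix match means a field name that is a prefix of another (e.g. "year" vs. "yearnote") would match either line; the original has the same behavior.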
def sentiment(self):
    if self._sentiment is None:
        results = self._xml.xpath('/root/document/sentences')
        self._sentiment = (float(results[0].get("averageSentiment", 0))
                           if len(results) > 0 else None)
    return self._sentiment
Returns average sentiment of document. Must have sentiment enabled in
XML output.

:getter: returns average sentiment of the document
:type: float
def change_column(self, table, column_name, field):
    operations = [self.alter_change_column(table, column_name, field)]
    if not field.null:
        operations.extend([self.add_not_null(table, column_name)])
    return operations
Change column.
def _format_changes(changes, orchestration=False):
    if not changes:
        return False, ''

    if orchestration:
        return True, _nested_changes(changes)

    if not isinstance(changes, dict):
        return True, 'Invalid Changes data: {0}'.format(changes)

    ret = changes.get('ret')
    if ret is not None and changes.get('out') == 'highstate':
        ctext = ''
        changed = False
        for host, hostdata in six.iteritems(ret):
            s, c = _format_host(host, hostdata)
            ctext += '\n' + '\n'.join((' ' * 14 + l) for l in s.splitlines())
            changed = changed or c
    else:
        changed = True
        ctext = _nested_changes(changes)
    return changed, ctext
Format the changes dict based on what the data is
def visit_call(self, node):
    expr_str = self._precedence_parens(node, node.func)
    args = [arg.accept(self) for arg in node.args]
    if node.keywords:
        keywords = [kwarg.accept(self) for kwarg in node.keywords]
    else:
        keywords = []

    args.extend(keywords)
    return "%s(%s)" % (expr_str, ", ".join(args))
return an astroid.Call node as string
def ledger_transactions(self, ledger_id, cursor=None, order='asc',
                        include_failed=False, limit=10):
    endpoint = '/ledgers/{ledger_id}/transactions'.format(ledger_id=ledger_id)
    params = self.__query_params(cursor=cursor, order=order, limit=limit,
                                 include_failed=include_failed)
    return self.query(endpoint, params)
This endpoint represents all transactions in a given ledger.

`GET /ledgers/{id}/transactions{?cursor,limit,order}
<https://www.stellar.org/developers/horizon/reference/endpoints/transactions-for-ledger.html>`_

:param int ledger_id: The id of the ledger to look up.
:param int cursor: A paging token, specifying where to start returning records from.
:param str order: The order in which to return rows, "asc" or "desc".
:param int limit: Maximum number of records to return.
:param bool include_failed: Set to `True` to include failed transactions in results.
:return: The transactions contained in a single ledger.
:rtype: dict
def expect_optional_token(lexer: Lexer, kind: TokenKind) -> Optional[Token]:
    token = lexer.token
    if token.kind == kind:
        lexer.advance()
        return token
    return None
Expect the next token optionally to be of the given kind. If the next token is of the given kind, return that token after advancing the lexer. Otherwise, do not change the parser state and return None.
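A self-contained sketch of this helper. The `Token` and `Lexer` classes below are minimal stand-ins for the parser's real types (assumptions, not the actual classes), just enough to show the advance-on-match behavior:

```python
from typing import Optional

class Token:
    # Stand-in: real tokens carry position and value as well.
    def __init__(self, kind):
        self.kind = kind

class Lexer:
    # Stand-in: a cursor over a pre-tokenized list.
    def __init__(self, tokens):
        self._tokens = list(tokens)
        self._pos = 0

    @property
    def token(self):
        return self._tokens[self._pos]

    def advance(self):
        self._pos += 1

def expect_optional_token(lexer, kind) -> Optional[Token]:
    token = lexer.token
    if token.kind == kind:
        lexer.advance()
        return token
    return None

lexer = Lexer([Token("name"), Token("colon")])
matched = expect_optional_token(lexer, "name")   # consumes the token
missed = expect_optional_token(lexer, "name")    # next is "colon": no match
```

On a miss the lexer position is untouched, which is what lets callers probe for optional syntax without backtracking.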
def endpointlist_post_save(instance, *args, **kwargs):
    with open(instance.upload.file.name, mode='rb') as f:
        lines = f.readlines()

    for url in lines:
        if len(url) > 255:
            LOGGER.debug('Skipping this endpoint, as it is more than 255 characters: %s' % url)
        else:
            if Endpoint.objects.filter(url=url, catalog=instance.catalog).count() == 0:
                endpoint = Endpoint(url=url, endpoint_list=instance)
                endpoint.catalog = instance.catalog
                endpoint.save()

    if not settings.REGISTRY_SKIP_CELERY:
        update_endpoints.delay(instance.id)
    else:
        update_endpoints(instance.id)
Used to process the lines of the endpoint list.
def prefix_search(self, job_name_prefix):
    json = self._fetch_json()
    jobs = json['response']
    for job in jobs:
        if job.startswith(job_name_prefix):
            yield self._build_results(jobs, job)
Searches for jobs matching the given ``job_name_prefix``.
def build_extension(extensions: Sequence[ExtensionHeader]) -> str:
    return ", ".join(
        build_extension_item(name, parameters)
        for name, parameters in extensions
    )
Unparse a ``Sec-WebSocket-Extensions`` header. This is the reverse of :func:`parse_extension`.
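A self-contained sketch of the unparse step. The `build_extension_item` helper below is a plausible reconstruction, not the library's actual implementation: each extension renders as `name; key=value; flag`, and extensions are comma-joined per the header grammar:

```python
def build_extension_item(name, parameters):
    # Hypothetical helper: render one extension with its parameters.
    # A parameter with value None is a bare flag (no "=value" part).
    parts = [name]
    for key, value in parameters:
        parts.append(key if value is None else "{}={}".format(key, value))
    return "; ".join(parts)

def build_extension(extensions):
    return ", ".join(
        build_extension_item(name, parameters)
        for name, parameters in extensions
    )

header = build_extension([
    ("permessage-deflate", [("client_max_window_bits", "15"),
                            ("server_no_context_takeover", None)]),
    ("x-custom", []),
])
```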
def count_resources(domain, token):
    resources = get_resources(domain, token)
    return dict(Counter([r['resource']['type'] for r in resources
                         if r['resource']['type'] != 'story']))
Given the domain in question, generates counts for that domain of each of
the different data types.

Parameters
----------
domain : str
    A Socrata data portal domain. "data.seattle.gov" or
    "data.cityofnewyork.us" for example.
token : str
    A Socrata application token. Application tokens can be registered by
    going onto the Socrata portal in question, creating an account,
    logging in, going to developer tools, and spawning a token.

Returns
-------
A dict with counts of the different endpoint types classifiable as
published public datasets.
def row_number(expr, sort=None, ascending=True):
    return _rank_op(expr, RowNumber, types.int64, sort=sort, ascending=ascending)
Calculate row number of a sequence expression.

:param expr: expression for calculation
:param sort: name of the sort column
:param ascending: whether to sort in ascending order
:return: calculated column
def mime(self):
    author = self.author
    sender = self.sender

    if not author:
        raise ValueError("You must specify an author.")
    if not self.subject:
        raise ValueError("You must specify a subject.")
    if len(self.recipients) == 0:
        raise ValueError("You must specify at least one recipient.")
    if not self.plain:
        raise ValueError("You must provide plain text content.")

    if not self._dirty and self._processed:
        return self._mime

    self._processed = False

    plain = MIMEText(self._callable(self.plain), 'plain', self.encoding)

    rich = None
    if self.rich:
        rich = MIMEText(self._callable(self.rich), 'html', self.encoding)

    message = self._mime_document(plain, rich)
    headers = self._build_header_list(author, sender)
    self._add_headers_to_message(message, headers)

    self._mime = message
    self._processed = True
    self._dirty = False

    return message
Produce the final MIME message.
def _select_root_port(self):
    root_port = None

    for port in self.ports.values():
        root_msg = (self.root_priority if root_port is None
                    else root_port.designated_priority)
        port_msg = port.designated_priority
        if port.state is PORT_STATE_DISABLE or port_msg is None:
            continue
        if root_msg.root_id.value > port_msg.root_id.value:
            result = SUPERIOR
        elif root_msg.root_id.value == port_msg.root_id.value:
            if root_msg.designated_bridge_id is None:
                result = INFERIOR
            else:
                result = Stp.compare_root_path(
                    port_msg.root_path_cost,
                    root_msg.root_path_cost,
                    port_msg.designated_bridge_id.value,
                    root_msg.designated_bridge_id.value,
                    port_msg.designated_port_id.value,
                    root_msg.designated_port_id.value)
        else:
            result = INFERIOR

        if result is SUPERIOR:
            root_port = port

    return root_port
ROOT_PORT is the nearest port to a root bridge. It is determined by the cost of path, etc.
def _make_text_block(name, content, content_type=None):
    if content_type == 'xhtml':
        return u'<%s type="xhtml"><div xmlns="%s">%s</div></%s>\n' % \
               (name, XHTML_NAMESPACE, content, name)
    if not content_type:
        return u'<%s>%s</%s>\n' % (name, escape(content), name)
    return u'<%s type="%s">%s</%s>\n' % (name, content_type,
                                         escape(content), name)
Helper function for the builder that creates an XML text block.
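A self-contained sketch of this helper: plain content is XML-escaped, while xhtml content is passed through verbatim inside a namespaced `<div>` (the `escape` here comes from the standard library, and the namespace constant is the usual XHTML one the original presumably defines elsewhere):

```python
from xml.sax.saxutils import escape

XHTML_NAMESPACE = "http://www.w3.org/1999/xhtml"

def make_text_block(name, content, content_type=None):
    # xhtml content is embedded as-is inside a namespaced div.
    if content_type == 'xhtml':
        return u'<%s type="xhtml"><div xmlns="%s">%s</div></%s>\n' % \
               (name, XHTML_NAMESPACE, content, name)
    # Plain content gets XML-escaped.
    if not content_type:
        return u'<%s>%s</%s>\n' % (name, escape(content), name)
    return u'<%s type="%s">%s</%s>\n' % (name, content_type,
                                         escape(content), name)

plain = make_text_block("title", "Cats & Dogs")
html = make_text_block("summary", "<b>bold</b>", "xhtml")
```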
def rest(url, req="GET", data=None):
    load_variables()
    return _rest(base_url + url, req, data)
Main function to be called from this module. Sends a request to the url
using method 'req'. The _rest() function will add the base_url to this,
so 'url' should be something like '/ips'.
def batch_query_state_changes(
        self,
        batch_size: int,
        filters: List[Tuple[str, Any]] = None,
        logical_and: bool = True,
) -> Iterator[List[StateChangeRecord]]:
    limit = batch_size
    offset = 0
    result_length = 1

    while result_length != 0:
        result = self._get_state_changes(
            limit=limit,
            offset=offset,
            filters=filters,
            logical_and=logical_and,
        )
        result_length = len(result)
        offset += result_length
        yield result
Batch query state change records with a given batch size and an optional filter This is a generator function returning each batch to the caller to work with.
def run_notebook_hook(notebook_type, action, *args, **kw):
    if notebook_type not in _HOOKS:
        raise RuntimeError("no display hook installed for notebook type %r" % notebook_type)
    if _HOOKS[notebook_type][action] is None:
        raise RuntimeError("notebook hook for %r did not install %r action" % (notebook_type, action))
    return _HOOKS[notebook_type][action](*args, **kw)
Run an installed notebook hook with supplied arguments.

Args:
    notebook_type (str) : Name of an existing installed notebook hook
    action (str) : Name of the hook action to execute, ``'doc'`` or ``'app'``

All other arguments and keyword arguments are passed to the hook action
exactly as supplied.

Returns:
    Result of the hook action, as-is

Raises:
    RuntimeError
        If the hook or specific action is not installed
def set_prompt(scope, prompt=None):
    conn = scope.get('__connection__')
    conn.set_prompt(prompt)
    return True
Defines the pattern that is recognized at any future time when Exscript
needs to wait for a prompt. In other words, whenever Exscript waits for
a prompt, it searches the response of the host for the given pattern
and continues as soon as the pattern is found.

Exscript waits for a prompt whenever it sends a command (unless the
send() method was used). set_prompt() redefines as to what is recognized
as a prompt.

:type prompt: regex
:param prompt: The prompt pattern.
def add_command(self, cmd_name, *args):
    self.__commands.append(Command(cmd_name, args))
Add command to action.
def iter_all_users(self, number=-1, etag=None, per_page=None):
    url = self._build_url('users')
    return self._iter(int(number), url, User,
                      params={'per_page': per_page}, etag=etag)
Iterate over every user in the order they signed up for GitHub.

:param int number: (optional), number of users to return. Default: -1,
    returns all of them
:param str etag: (optional), ETag from a previous request to the same
    endpoint
:param int per_page: (optional), number of users to list per request
:returns: generator of :class:`User <github3.users.User>`
def logs(self, **kwargs):
    return self.client.api.logs(self.id, **kwargs)
Get logs from this container. Similar to the ``docker logs`` command.

The ``stream`` parameter makes the ``logs`` function return a blocking
generator you can iterate over to retrieve log output as it happens.

Args:
    stdout (bool): Get ``STDOUT``. Default ``True``
    stderr (bool): Get ``STDERR``. Default ``True``
    stream (bool): Stream the response. Default ``False``
    timestamps (bool): Show timestamps. Default ``False``
    tail (str or int): Output specified number of lines at the end of
        logs. Either an integer of number of lines or the string
        ``all``. Default ``all``
    since (datetime or int): Show logs since a given datetime or
        integer epoch (in seconds)
    follow (bool): Follow log output. Default ``False``
    until (datetime or int): Show logs that occurred before the given
        datetime or integer epoch (in seconds)

Returns:
    (generator or str): Logs from the container.

Raises:
    :py:class:`docker.errors.APIError`
        If the server returns an error.
def delete_account(self, account): try: luser = self._get_account(account.username) groups = luser['groups'].load(database=self._database) for group in groups: changes = changeset(group, {}) changes = group.remove_member(changes, luser) save(changes, database=self._database) delete(luser, database=self._database) except ObjectDoesNotExist: pass
Delete the account: remove the LDAP user from each of its groups, then delete the user entry. A missing user is ignored.
def mark_done(task_id): task = Task.get_by_id(task_id) if task is None: raise ValueError('Task with id %d does not exist' % task_id) task.done = True task.put()
Marks a task as done. Args: task_id: The integer id of the task to update. Raises: ValueError: if the requested task doesn't exist.
def translate( nucleotide_sequence, first_codon_is_start=True, to_stop=True, truncate=False): if not isinstance(nucleotide_sequence, Seq): nucleotide_sequence = Seq(nucleotide_sequence) if truncate: n_nucleotides = int(len(nucleotide_sequence) / 3) * 3 nucleotide_sequence = nucleotide_sequence[:n_nucleotides] else: n_nucleotides = len(nucleotide_sequence) assert n_nucleotides % 3 == 0, \ ("Expected nucleotide sequence to be multiple of 3" " but got %s of length %d") % ( nucleotide_sequence, n_nucleotides) protein_sequence = nucleotide_sequence.translate(to_stop=to_stop, cds=False) if first_codon_is_start and ( len(protein_sequence) == 0 or protein_sequence[0] != "M"): if nucleotide_sequence[:3] in START_CODONS: return "M" + protein_sequence[1:] else: raise ValueError( ("Expected first codon of %s to be start codon" " (one of %s) but got %s") % ( protein_sequence[:10], START_CODONS, nucleotide_sequence)) return protein_sequence
Translates cDNA coding sequence into amino acid protein sequence. Should typically start with a start codon but allowing non-methionine first residues since the CDS we're translating might have been affected by a start loss mutation. The sequence may include the 3' UTR but will stop translation at the first encountered stop codon. Parameters ---------- nucleotide_sequence : BioPython Seq cDNA sequence first_codon_is_start : bool Treat the first codon of nucleotide_sequence as a start codon and translate it as methionine ("M") even if it is an alternative start codon (default = True) to_stop : bool Stop translation at the first stop codon (default = True) truncate : bool Truncate sequence if it's not a multiple of 3 (default = False) Returns BioPython Seq of amino acids
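A minimal, dependency-free sketch of the codon-translation behavior described above, using a tiny hand-written codon subset instead of BioPython (the codon table and start-codon set here are assumptions for illustration only):

```python
# Hypothetical illustration of translate(): translate codons, stop at the
# first stop codon, and rewrite an alternative start codon as methionine.
CODON_TABLE = {
    "ATG": "M", "TTG": "L", "CTG": "L", "TGG": "W", "TTT": "F", "AAA": "K",
    "TAA": "*", "TAG": "*", "TGA": "*",  # stop codons
}
START_CODONS = {"ATG", "CTG", "TTG"}  # assumption: common alternative starts

def translate_sketch(seq, first_codon_is_start=True, to_stop=True):
    assert len(seq) % 3 == 0, "sequence length must be a multiple of 3"
    protein = []
    for i in range(0, len(seq), 3):
        aa = CODON_TABLE[seq[i:i + 3]]
        if aa == "*" and to_stop:
            break
        protein.append(aa)
    # Alternative start codons still translate to methionine.
    if first_codon_is_start and protein and seq[:3] in START_CODONS:
        protein[0] = "M"
    return "".join(protein)

print(translate_sketch("ATGTTTAAATAGTGG"))  # -> MFK (stops at TAG)
print(translate_sketch("TTGTTTTAA"))        # -> MF (TTG start becomes M)
```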
def _get_callable(self, classname, cname): callable = None if classname in self.provregs: provClass = self.provregs[classname] if hasattr(provClass, cname): callable = getattr(provClass, cname) elif hasattr(self.provmod, cname): callable = getattr(self.provmod, cname) if callable is None: raise pywbem.CIMError( pywbem.CIM_ERR_FAILED, "No provider registered for %s or no callable for %s:%s on " \ "provider %s" % (classname, classname, cname, self.provid)) return callable
Return a function or method object appropriate to fulfill a request classname -- The CIM class name associated with the request. cname -- The function or method name to look for.
def compile_rules(self): output = [] for key_name, section in self.config.items(): rule = self.compile_section(section) if rule is not None: output.append(rule) return output
Compile alert rules @rtype list of Rules
def _move_to_store(self, srcpath, objhash): destpath = self.object_path(objhash) if os.path.exists(destpath): os.chmod(destpath, S_IWUSR) os.remove(destpath) os.chmod(srcpath, S_IRUSR | S_IRGRP | S_IROTH) move(srcpath, destpath)
Make the object read-only and move it to the store.
def bi_square(xx, idx=None): ans = np.zeros(xx.shape) ans[idx] = (1-xx[idx]**2)**2 return ans
The bi-square weight function calculated over values of xx Parameters ---------- xx: float array idx: index array, optional Indices of xx at which to evaluate the weight; all other entries of the result are zero. Notes ----- This is the first equation on page 831 of [Cleveland79].
def alias_log(self, log_id, alias_id): if self._catalog_session is not None: return self._catalog_session.alias_catalog(catalog_id=log_id, alias_id=alias_id) self._alias_id(primary_id=log_id, equivalent_id=alias_id)
Adds an ``Id`` to a ``Log`` for the purpose of creating compatibility. The primary ``Id`` of the ``Log`` is determined by the provider. The new ``Id`` performs as an alias to the primary ``Id``. If the alias is a pointer to another log, it is reassigned to the given log ``Id``. arg: log_id (osid.id.Id): the ``Id`` of a ``Log`` arg: alias_id (osid.id.Id): the alias ``Id`` raise: AlreadyExists - ``alias_id`` is already assigned raise: NotFound - ``log_id`` not found raise: NullArgument - ``log_id`` or ``alias_id`` is ``null`` raise: OperationFailed - unable to complete request raise: PermissionDenied - authorization failure *compliance: mandatory -- This method must be implemented.*
def update(self, iterable): if iterable: return PBag(reduce(_add_to_counters, iterable, self._counts)) return self
Update bag with all elements in iterable. >>> s = pbag([1]) >>> s.update([1, 2]) pbag([1, 1, 2])
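The persistent-update semantics above amount to multiset addition that returns a new bag rather than mutating the old one; a sketch using the standard library's Counter (assumption: pyrsistent's PBag counts elements the same way):

```python
# Mimic PBag.update(): build a *new* counter, leaving the original untouched.
from collections import Counter

def bag_update(counts, iterable):
    if not iterable:
        return counts
    new = counts.copy()
    new.update(iterable)
    return new

s = Counter([1])
s2 = bag_update(s, [1, 2])
print(sorted(s2.elements()))  # -> [1, 1, 2]
print(sorted(s.elements()))   # original unchanged -> [1]
```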
def get_data_home(data_home=None): data_home_default = Path(__file__).ancestor(3).child('demos', '_revrand_data') if data_home is None: data_home = os.environ.get('REVRAND_DATA', data_home_default) data_home = os.path.expanduser(data_home) if not os.path.exists(data_home): os.makedirs(data_home) return data_home
Return the path of the revrand data dir. This folder is used by some large dataset loaders to avoid downloading the data several times. By default the data dir is set to a folder named 'revrand_data' in the user home folder. Alternatively, it can be set by the 'REVRAND_DATA' environment variable or programmatically by giving an explicit folder path. The '~' symbol is expanded to the user home folder. If the folder does not already exist, it is automatically created.
def get_config(name=__name__): cfg = ConfigParser() path = os.environ.get('%s_CONFIG_FILE' % name.upper()) if path is None or path == "": fname = '/etc/tapp/%s.ini' % name if isfile(fname): path = fname elif isfile('cfg.ini'): path = 'cfg.ini' else: raise ValueError("Unable to get configuration for tapp %s" % name) cfg.read(path) return cfg
Get a configuration parser for a given TAPP name. Reads config.ini files only, not in-database configuration records. :param name: The tapp name to get a configuration for. :rtype: ConfigParser :return: A config parser matching the given name
def _detect_term_type(): if os.name == 'nt': if os.environ.get('TERM') == 'xterm': return 'mintty' else: return 'nt' if platform.system().upper().startswith('CYGWIN'): return 'cygwin' return 'posix'
Detect the type of the terminal.
def get_env_variable(var_name, default=None): try: return os.environ[var_name] except KeyError: if default is not None: return default else: error_msg = 'The environment variable {} was missing, abort...'\ .format(var_name) raise EnvironmentError(error_msg)
Get the environment variable; return the given default if the variable is missing, or raise EnvironmentError when it is missing and no default is provided.
def get_intra_edges(self, time_slice=0): if not isinstance(time_slice, int) or time_slice < 0: raise ValueError("The timeslice should be a non-negative integer") return [tuple((x[0], time_slice) for x in edge) for edge in self.edges() if edge[0][1] == edge[1][1] == 0]
Returns the intra slice edges present in the 2-TBN. Parameters ---------- time_slice: int (whole number) The time slice for which to get intra edges. The timeslice should be a non-negative integer. Examples -------- >>> from pgmpy.models import DynamicBayesianNetwork as DBN >>> dbn = DBN() >>> dbn.add_nodes_from(['D', 'G', 'I', 'S', 'L']) >>> dbn.add_edges_from([(('D', 0), ('G', 0)), (('I', 0), ('G', 0)), ... (('G', 0), ('L', 0)), (('D', 0), ('D', 1)), ... (('I', 0), ('I', 1)), (('G', 0), ('G', 1)), ... (('G', 0), ('L', 1)), (('L', 0), ('L', 1))]) >>> dbn.get_intra_edges() [(('D', 0), ('G', 0)), (('G', 0), ('L', 0)), (('I', 0), ('G', 0))]
def is_newer_than(after, seconds): if isinstance(after, six.string_types): after = parse_strtime(after).replace(tzinfo=None) else: after = after.replace(tzinfo=None) return after - utcnow() > datetime.timedelta(seconds=seconds)
Return True if after is newer than seconds.
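The comparison in is_newer_than() asks whether the timestamp lies more than N seconds in the *future* relative to now; a self-contained sketch with an injectable clock (the `now` parameter is an addition for testability, not part of the original signature):

```python
# Sketch of is_newer_than(): after - now > N seconds means "newer than N".
import datetime

def is_newer_than_sketch(after, seconds, now=None):
    if now is None:
        now = datetime.datetime.utcnow()
    return after - now > datetime.timedelta(seconds=seconds)

now = datetime.datetime(2024, 1, 1, 12, 0, 0)
future = now + datetime.timedelta(seconds=90)
print(is_newer_than_sketch(future, 60, now=now))   # -> True  (90s ahead > 60s)
print(is_newer_than_sketch(future, 120, now=now))  # -> False (90s ahead < 120s)
```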
def verify(self, byts, sign): try: chosen_hash = c_hashes.SHA256() hasher = c_hashes.Hash(chosen_hash, default_backend()) hasher.update(byts) digest = hasher.finalize() self.publ.verify(sign, digest, c_ec.ECDSA(c_utils.Prehashed(chosen_hash)) ) return True except InvalidSignature: logger.exception('Error in publ.verify') return False
Verify the signature for the given bytes using the ECC public key. Args: byts (bytes): The data bytes. sign (bytes): The signature bytes. Returns: bool: True if the data was verified, False otherwise.
def item_names(self): if "item_names" not in self.attrs.keys(): self.attrs["item_names"] = np.array([], dtype="S") return tuple(n.decode() for n in self.attrs["item_names"])
Item names.
def load_backend(build_configuration, backend_package): backend_module = backend_package + '.register' try: module = importlib.import_module(backend_module) except ImportError as e: traceback.print_exc() raise BackendConfigurationError('Failed to load the {backend} backend: {error}' .format(backend=backend_module, error=e)) def invoke_entrypoint(name): entrypoint = getattr(module, name, lambda: None) try: return entrypoint() except TypeError as e: traceback.print_exc() raise BackendConfigurationError( 'Entrypoint {entrypoint} in {backend} must be a zero-arg callable: {error}' .format(entrypoint=name, backend=backend_module, error=e)) build_file_aliases = invoke_entrypoint('build_file_aliases') if build_file_aliases: build_configuration.register_aliases(build_file_aliases) subsystems = invoke_entrypoint('global_subsystems') if subsystems: build_configuration.register_optionables(subsystems) rules = invoke_entrypoint('rules') if rules: build_configuration.register_rules(rules) invoke_entrypoint('register_goals')
Installs the given backend package into the build configuration. :param build_configuration the :class:``pants.build_graph.build_configuration.BuildConfiguration`` to install the backend plugin into. :param string backend_package: the package name containing the backend plugin register module that provides the plugin entrypoints. :raises: :class:``pants.base.exceptions.BuildConfigurationError`` if there is a problem loading the build configuration.
def get_rows_by_cols(self, matching_dict): result = [] for i in range(self.num_rows): row = self._table[i+1] matching = True for key, val in matching_dict.items(): if row[key] != val: matching = False break if matching: result.append(row) return result
Return all rows where the cols match the elements given in the matching_dict Parameters ---------- matching_dict: :obj:`dict` Desired dictionary of col values. Returns ------- :obj:`list` A list of rows that satisfy the matching_dict
def get_int(byte_array, signed=True): return int.from_bytes(byte_array, byteorder='big', signed=signed)
Gets the specified integer from its byte array. This should be used by this module alone, as it works with big endian. :param byte_array: the byte array representing the integer. :param signed: whether the number is signed or not. :return: the integer representing the given byte array.
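The big-endian and signedness behavior of int.from_bytes, as used by get_int() above, in a few concrete cases:

```python
# int.from_bytes with byteorder='big': most significant byte first.
data = bytes([0x01, 0x00])  # big-endian 0x0100
print(int.from_bytes(data, byteorder='big', signed=True))      # -> 256
# The same byte reads differently depending on the signed flag.
print(int.from_bytes(b'\xff', byteorder='big', signed=True))   # -> -1
print(int.from_bytes(b'\xff', byteorder='big', signed=False))  # -> 255
```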
def _multicomplex2(f, fx, x, h): n = len(x) ee = np.diag(h) hess = np.outer(h, h) cmplx_wrap = Bicomplex.__array_wrap__ for i in range(n): for j in range(i, n): zph = Bicomplex(x + 1j * ee[i, :], ee[j, :]) hess[i, j] = cmplx_wrap(f(zph)).imag12 / hess[j, i] hess[j, i] = hess[i, j] return hess
Calculate Hessian with Bicomplex-step derivative approximation
def filter_304_headers(headers): return [(k, v) for k, v in headers if k.lower() not in _filter_from_304]
Filter a list of headers to include in a "304 Not Modified" response.
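The `_filter_from_304` name in filter_304_headers() is a module-level set whose exact contents are not shown; the sketch below assumes a few typical entity headers that are dropped from a "304 Not Modified" response:

```python
# Hypothetical contents of _filter_from_304 (assumption for illustration).
_filter_from_304 = {'content-length', 'content-type', 'last-modified'}

def filter_304_headers(headers):
    # Case-insensitive header-name match, preserving order and original casing.
    return [(k, v) for k, v in headers if k.lower() not in _filter_from_304]

headers = [('ETag', '"abc"'), ('Content-Length', '1024'), ('Date', 'Mon')]
print(filter_304_headers(headers))  # -> [('ETag', '"abc"'), ('Date', 'Mon')]
```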
def convert_PDF_to_plaintext(fpath, keep_layout=False): if not os.path.isfile(CFG_PATH_PDFTOTEXT): raise IOError('Missing pdftotext executable') if keep_layout: layout_option = "-layout" else: layout_option = "-raw" doclines = [] p_break_in_line = re.compile(ur'^\s*\f(.+)$', re.UNICODE) cmd_pdftotext = [CFG_PATH_PDFTOTEXT, layout_option, "-q", "-enc", "UTF-8", fpath, "-"] LOGGER.debug(u"%s", ' '.join(cmd_pdftotext)) pipe_pdftotext = subprocess.Popen(cmd_pdftotext, stdout=subprocess.PIPE) for docline in pipe_pdftotext.stdout: unicodeline = docline.decode("utf-8") m_break_in_line = p_break_in_line.match(unicodeline) if m_break_in_line is None: doclines.append(unicodeline) else: doclines.append(u"\f") doclines.append(m_break_in_line.group(1)) LOGGER.debug(u"convert_PDF_to_plaintext found: %s lines of text", len(doclines)) return doclines
Convert PDF to txt using pdftotext Take the path to a PDF file and run pdftotext for this file, capturing the output. @param fpath: (string) path to the PDF file @param keep_layout: (bool) preserve the original physical layout (pdftotext -layout) instead of reading order (-raw) @return: (list) of unicode strings (contents of the PDF file translated into plaintext; each string is a line in the document.)
def to_representation(self, obj): value = self.model_field.__get__(obj, None) return smart_text(value, strings_only=True)
Convert value to representation. DRF ModelField uses ``value_to_string`` for this purpose. Mongoengine fields do not have such a method. This implementation uses ``django.utils.encoding.smart_text`` to convert everything to text, while keeping json-safe types intact. NB: The argument is the whole object, instead of the attribute value. This is an upstream feature. Probably because the field can be represented by a complicated method with a nontrivial way to extract data.
def from_file(cls, filename, **kwargs): filename = os.path.expanduser(filename) if not os.path.isfile(filename): raise exceptions.PyKubeError("Configuration file {} not found".format(filename)) with open(filename) as f: doc = yaml.safe_load(f.read()) self = cls(doc, **kwargs) self.filename = filename return self
Creates an instance of the KubeConfig class from a kubeconfig file. :Parameters: - `filename`: The full path to the configuration file
def filter_rows(filters, rows): for row in rows: if all(condition(row, row.get(col)) for (cols, condition) in filters for col in cols if col is None or col in row): yield row
Yield rows matching all applicable filters. Filter functions have binary arity (e.g. `filter(row, col)`) where the first parameter is the dictionary of row data, and the second parameter is the data at one particular column. Args: filters: a tuple of (cols, filter_func) where filter_func will be tested (filter_func(row, col)) for each col in cols where col exists in the row rows: an iterable of rows to filter Yields: Rows matching all applicable filters .. deprecated:: v0.7.0
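A usage sketch showing how the (cols, condition) filters compose in filter_rows(): a condition is only tested for columns that actually exist in a row, so rows lacking the column pass through untouched (the function body is reproduced here to keep the sketch self-contained):

```python
# Each filter is (cols, condition); condition(row, value) is tested for every
# listed column that is present in the row.
def filter_rows(filters, rows):
    for row in rows:
        if all(condition(row, row.get(col))
               for (cols, condition) in filters
               for col in cols
               if col is None or col in row):
            yield row

rows = [{'name': 'a', 'size': 3}, {'name': 'b', 'size': 9}, {'name': 'c'}]
filters = [(('size',), lambda row, v: v < 5)]  # only applies where 'size' exists
print(list(filter_rows(filters, rows)))
# -> [{'name': 'a', 'size': 3}, {'name': 'c'}]  ('c' has no 'size', so it passes)
```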
def invers(self): if self._columns != self._rows: raise ValueError("A square matrix is needed") mArray = self.get_array(False) appList = [0] * self._columns for col in xrange(self._columns): mArray.append(appList[:]) mArray[self._columns + col][col] = 1 exMatrix = Matrix.from_two_dim_array(2 * self._columns, self._rows, mArray) gjResult = exMatrix.gauss_jordan() gjResult.matrix = gjResult.matrix[self._columns:] gjResult._columns = len(gjResult.matrix) return gjResult
Return the inverse matrix, if it can be calculated :return: Returns a new Matrix containing the inverse :rtype: Matrix :raise: Raises a :py:exc:`ValueError` if the matrix is not invertible :note: Only a square matrix with a determinant != 0 can be inverted. :todo: Reduce amount of create and copy operations
def _unfold_map(self, display_text_map): from ..type.primitives import Type lt_identifier = Id(display_text_map['languageTypeId']).get_identifier() st_identifier = Id(display_text_map['scriptTypeId']).get_identifier() ft_identifier = Id(display_text_map['formatTypeId']).get_identifier() try: self._language_type = Type(**language_types.get_type_data(lt_identifier)) except AttributeError: raise NotFound('Language Type: ' + lt_identifier) try: self._script_type = Type(**script_types.get_type_data(st_identifier)) except AttributeError: raise NotFound('Script Type: ' + st_identifier) try: self._format_type = Type(**format_types.get_type_data(ft_identifier)) except AttributeError: raise NotFound('Format Type: ' + ft_identifier) self._text = display_text_map['text']
Parses a display text dictionary map.
def currencies(self) -> CurrenciesAggregate: if not self.__currencies_aggregate: self.__currencies_aggregate = CurrenciesAggregate(self.book) return self.__currencies_aggregate
Returns the Currencies aggregate
def is_group(value): if type(value) == str: try: entry = grp.getgrnam(value) value = entry.gr_gid except KeyError: err_message = ('{0}: No such group.'.format(value)) raise validate.VdtValueError(err_message) return value elif type(value) == int: try: grp.getgrgid(value) except KeyError: err_message = ('{0}: No such group.'.format(value)) raise validate.VdtValueError(err_message) return value else: err_message = ('Please use str or int for the "group" parameter.') raise validate.VdtTypeError(err_message)
Check whether the group name or gid given as argument exists. If this function receives a group name, it is converted to a gid before validation.
def signal_stop(self, mode): if self.is_single_user: logger.warning("Cannot stop server; single-user server is running (PID: {0})".format(self.pid)) return False try: self.send_signal(STOP_SIGNALS[mode]) except psutil.NoSuchProcess: return True except psutil.AccessDenied as e: logger.warning("Could not send stop signal to PostgreSQL (error: {0})".format(e)) return False return None
Signal postmaster process to stop :returns None if signaled, True if process is already gone, False if error
def within_duration(events, time, limits): min_dur = max_dur = ones(events.shape[0], dtype=bool) if limits[0] is not None: min_dur = time[events[:, -1] - 1] - time[events[:, 0]] >= limits[0] if limits[1] is not None: max_dur = time[events[:, -1] - 1] - time[events[:, 0]] <= limits[1] return events[min_dur & max_dur, :]
Check whether event is within time limits. Parameters ---------- events : ndarray (dtype='int') N x M matrix with start sample first and end samples last on M time : ndarray (dtype='float') vector with time points limits : tuple of float low and high limit for spindle duration Returns ------- ndarray (dtype='int') N x M matrix with start sample first and end samples last on M
def listunion(ListOfLists): u = [] for s in ListOfLists: if s is not None: u.extend(s) return u
Take the union of a list of lists. Take a Python list of Python lists:: [[l11,l12, ...], [l21,l22, ...], ... , [ln1, ln2, ...]] and return the aggregated list:: [l11,l12, ..., l21, l22 , ...] For a list of two lists, e.g. `[a, b]`, this is like:: a.extend(b) **Parameters** **ListOfLists** : Python list Python list of Python lists. **Returns** **u** : Python list Python list created by taking the union of the lists in `ListOfLists`.
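The aggregation above is equivalent to chaining the non-None sublists with the standard library:

```python
# listunion() behavior via itertools.chain: concatenate sublists, skipping None.
from itertools import chain

lists = [[1, 2], None, [3], []]
flat = list(chain.from_iterable(s for s in lists if s is not None))
print(flat)  # -> [1, 2, 3]
```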
def lookup_matching(self, urls): hosts = (urlparse(u).hostname for u in urls) for val in hosts: item = self.lookup(val) if item is not None: yield item
Get matching hosts for the given URLs. :param urls: an iterable containing URLs :returns: instances of AddressListItem representing listed hosts matching the ones used by the given URLs :raises InvalidURLError: if there are any invalid URLs in the sequence