code: string (length 51–2.38k)
docstring: string (length 4–15.2k)
def to_string(type): if type is None: return "unknown" elif type == TypeCode.Unknown: return "unknown" elif type == TypeCode.String: return "string" elif type == TypeCode.Integer: return "integer" elif type == TypeCode.Long: ...
Converts a TypeCode into its string name. :param type: the TypeCode to convert into a string. :return: the name of the TypeCode passed as a string value.
def manage_submissions(self): if not hasattr(self, 'submissions') or len(self.submissions) == 1: self.submissions = [] if self.options['mode'] == 'front': if self.options['password'] and self.options['username']: self.login() url = 'htt...
If there are no submissions left, or only one, fetch new submissions. This function manages URL creation and the specifics of front-page or subreddit mode.
def sle(actual, predicted): return (np.power(np.log(np.array(actual)+1) - np.log(np.array(predicted)+1), 2))
Computes the squared log error. This function computes the squared log error between two numbers, or element-wise between a pair of lists or numpy arrays. Parameters ---------- actual : int, float, list of numbers, numpy array The ground truth value predicted : same type as actual ...
def _local_install(self, args, pkg_name=None): if len(args) < 2: raise SPMInvocationError('A package file must be specified') self._install(args)
Install a package from a file
def teardown_websocket(self, func: Callable, name: AppOrBlueprintKey=None) -> Callable: handler = ensure_coroutine(func) self.teardown_websocket_funcs[name].append(handler) return func
Add a teardown websocket function. This is designed to be used as a decorator. An example usage, .. code-block:: python @app.teardown_websocket def func(): ... Arguments: func: The teardown websocket function itself. name: Optio...
async def remove(self, von_wallet: Wallet) -> None: LOGGER.debug('WalletManager.remove >>> wallet %s', von_wallet) await von_wallet.remove() LOGGER.debug('WalletManager.remove <<<')
Remove serialized wallet if it exists. Raise WalletState if wallet is open. :param von_wallet: (closed) wallet to remove
def send(self, data, sample_rate=None): if self._disabled: self.logger.debug('Connection disabled, not sending data') return False if sample_rate is None: sample_rate = self._sample_rate sampled_data = {} if sample_rate < 1: if random.rando...
Send the data over UDP while taking the sample_rate in account The sample rate should be a number between `0` and `1` which indicates the probability that a message will be sent. The sample_rate is also communicated to `statsd` so it knows what multiplier to use. :keyword data: The dat...
def getHTML(self): root = self.getRoot() if root is None: raise ValueError('Did not parse anything. Use parseFile or parseStr') if self.doctype: doctypeStr = '<!%s>\n' %(self.doctype) else: doctypeStr = '' rootNode = self.getRoot() if r...
getHTML - Get the full HTML as contained within this tree. If parsed from a document, this will contain the original whitespacing. @returns - <str> of html @see getFormattedHTML @see getMiniHTML
def to_decimal(number, strip='- '): if isinstance(number, six.integer_types): return str(number) number = str(number) number = re.sub(r'[%s]' % re.escape(strip), '', number) if number.startswith('0x'): return to_decimal(int(number[2:], 16)) elif number.startswith('o'): return...
Converts a number to a string of decimals in base 10. >>> to_decimal(123) '123' >>> to_decimal('o123') '83' >>> to_decimal('b101010') '42' >>> to_decimal('0x2a') '42'
def compile_jobgroups_from_joblist(joblist, jgprefix, sgegroupsize): jobcmds = defaultdict(list) for job in joblist: jobcmds[job.command.split(' ', 1)[0]].append(job.command) jobgroups = [] for cmds in list(jobcmds.items()): sublists = split_seq(cmds[1], sgegroupsize) count = 0 ...
Return list of jobgroups, rather than list of jobs.
def fetch(self, remote=None, refspec=None, verbose=False, tags=True): return git_fetch(self.repo_dir, remote=remote, refspec=refspec, verbose=verbose, tags=tags)
Do a git fetch of `refspec`.
def _MergeEntities(self, a, b): scheme = {'price': self._MergeIdentical, 'currency_type': self._MergeIdentical, 'payment_method': self._MergeIdentical, 'transfers': self._MergeIdentical, 'transfer_duration': self._MergeIdentical} return self._SchemedMerge(...
Merges the fares if all the attributes are the same.
def _compute_delta_beta(self, df, events, start, stop, weights): score_residuals = self._compute_residuals(df, events, start, stop, weights) * weights[:, None] naive_var = inv(self._hessian_) delta_betas = -score_residuals.dot(naive_var) / self._norm_std.values return delta_betas
Approximate change in betas as a result of excluding the ith row.
def get_loglevel(level): if level == 'debug': return logging.DEBUG elif level == 'notice': return logging.INFO elif level == 'info': return logging.INFO elif level == 'warning' or level == 'warn': return logging.WARNING elif level == 'error' or level == 'err': ...
return logging level object corresponding to a given level passed as a string @str level: name of a syslog log level @rtype: logging, logging level from logging module
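The chain of `elif` branches above maps names to levels; the same lookup can be expressed more compactly with a dict. This is an equivalent sketch, not the original code, and the fallback level is an assumption here:

```python
import logging

# syslog-style names mapped to logging-module levels;
# 'notice' has no direct equivalent, so it maps to INFO
LEVELS = {
    'debug': logging.DEBUG,
    'notice': logging.INFO,
    'info': logging.INFO,
    'warning': logging.WARNING, 'warn': logging.WARNING,
    'error': logging.ERROR, 'err': logging.ERROR,
}

def get_loglevel(level):
    # assumed fallback: unknown names resolve to INFO
    return LEVELS.get(level, logging.INFO)
```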
def connect(self, dsn): self.con = psycopg2.connect(dsn) self.cur = self.con.cursor(cursor_factory=psycopg2.extras.DictCursor) self.con.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
Connect to DB. The `dsn` is a libpq connection string, e.g. "dbname=<name> user=<user> password=<password> host=<host> port=<port>"; host defaults to the UNIX socket and port defaults to 5432 when omitted.
def summary(self): if not self.translations: self.update() return [summary.taf(trans) for trans in self.translations.forecast]
Condensed summary for each forecast created from translations
def visit_create_library_command(element, compiler, **kw): query = bindparams = [ sa.bindparam( 'location', value=element.location, type_=sa.String, ), sa.bindparam( 'credentials', value=element.credentials, type_=sa...
Returns the actual sql query for the CreateLibraryCommand class.
def reload(self): utt_ids = sorted(self.utt_ids) if self.shuffle: self.rand.shuffle(utt_ids) partitions = [] current_partition = PartitionInfo() for utt_id in utt_ids: utt_size = self.utt_sizes[utt_id] utt_lengths = self.utt_lengths[utt_id] ...
Create a new partition scheme. A scheme defines which utterances are in which partition. The scheme only changes after every call if ``self.shuffle == True``. Returns: list: List of PartitionInfo objects, defining the new partitions (same as ``self.partitions``)
def compile_resource(resource): return re.compile("^" + trim_resource(re.sub(r":(\w+)", r"(?P<\1>[\w-]+?)", resource)) + r"(\?(?P<querystring>.*))?$")
Return compiled regex for resource matching
def serialize_yaml_tofile(filename, resource): with open(filename, "w") as stream: yaml.dump(resource, stream, default_flow_style=False)
Serializes a K8S resource to a YAML-formatted file.
def mequg(m1, nr, nc): m1 = stypes.toDoubleMatrix(m1) mout = stypes.emptyDoubleMatrix(x=nc, y=nr) nc = ctypes.c_int(nc) nr = ctypes.c_int(nr) libspice.mequg_c(m1, nc, nr, mout) return stypes.cMatrixToNumpy(mout)
Set one double precision matrix of arbitrary size equal to another. http://naif.jpl.nasa.gov/pub/naif/toolkit_docs/C/cspice/mequg_c.html :param m1: Input matrix. :type m1: NxM-Element Array of floats :param nr: Row dimension of m1. :type nr: int :param nc: Column dimension of m1. :type nc:...
def run(self, *args): self.parser.parse_args(args) code = self.affiliate() return code
Affiliate unique identities to organizations.
def print_config(_run): final_config = _run.config config_mods = _run.config_modifications print(_format_config(final_config, config_mods))
Print the updated configuration and exit. Text is highlighted: green: value modified blue: value added red: value modified but type changed
def reset( self ): dataSet = self.dataSet() if ( not dataSet ): dataSet = XScheme() dataSet.reset()
Resets the colors to the default settings.
def string_for_count(dictionary, count): string_to_print = "" if count is not None: if count == 0: return "" ranger = count else: ranger = 2 for index in range(ranger): string_to_print += "{} ".format(get_random_word(dictionary)) return string_to_print.str...
Create a random string of N=`count` words (2 words when `count` is None).
def _set_program_defaults(cls, programs): for program in programs: val = getattr(cls, program.__name__) \ and extern.does_external_program_run(program.__name__, Settings.verbose) setattr(cls, program.__name__, val)
Run the external program tester on the required binaries.
def avhrr(scans_nb, scan_points, scan_angle=55.37, frequency=1 / 6.0, apply_offset=True): avhrr_inst = np.vstack(((scan_points / 1023.5 - 1) * np.deg2rad(-scan_angle), np.zeros((len(scan_points),)))) avhrr_inst = np.tile( avhrr_inst[:, np...
Definition of the avhrr instrument. Source: NOAA KLM User's Guide, Appendix J http://www.ncdc.noaa.gov/oa/pod-guide/ncdc/docs/klm/html/j/app-j.htm
def build_parameter(name, properties): p = Parameter(name, Type=properties.get("type")) for name, attr in PARAMETER_PROPERTIES.items(): if name in properties: setattr(p, attr, properties[name]) return p
Builds a troposphere Parameter with the given properties. Args: name (string): The name of the parameter. properties (dict): Contains the properties that will be applied to the parameter. See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/parameters-section-s...
def PushEventSource(self, event_source): if event_source.file_entry_type == ( dfvfs_definitions.FILE_ENTRY_TYPE_DIRECTORY): weight = 1 else: weight = 100 heap_values = (weight, time.time(), event_source) heapq.heappush(self._heap, heap_values)
Pushes an event source onto the heap. Args: event_source (EventSource): event source.
def export_elements(self, filename='export_elements.zip', typeof='all'): valid_types = ['all', 'nw', 'ips', 'sv', 'rb', 'al', 'vpn'] if typeof not in valid_types: typeof = 'all' return Task.download(self, 'export_elements', filename, params={'recursive': True, 'type': typ...
Export elements from SMC. Valid types are: all (All Elements)|nw (Network Elements)|ips (IPS Elements)| sv (Services)|rb (Security Policies)|al (Alerts)| vpn (VPN Elements) :param typeof: type of element :param filename: Name of file for export :raises TaskRunFail...
def read_until_regex(self, regex: bytes, max_bytes: int = None) -> Awaitable[bytes]: future = self._start_read() self._read_regex = re.compile(regex) self._read_max_bytes = max_bytes try: self._try_inline_read() except UnsatisfiableReadError as e: gen_log....
Asynchronously read until we have matched the given regex. The result includes the data that matches the regex and anything that came before it. If ``max_bytes`` is not None, the connection will be closed if more than ``max_bytes`` bytes have been read and the regex is not sati...
def pickle_compress(obj, print_compression_info=False): p = pickle.dumps(obj) c = zlib.compress(p) if print_compression_info: print ("len = {:,d} compr={:,d} ratio:{:.6f}".format(len(p), len(c), float(len(c))/len(p))) return c
pickle and compress an object
def from_string(cls, dataset_id, default_project=None): output_dataset_id = dataset_id output_project_id = default_project parts = dataset_id.split(".") if len(parts) == 1 and not default_project: raise ValueError( "When default_project is not set, dataset_id ...
Construct a dataset reference from dataset ID string. Args: dataset_id (str): A dataset ID in standard SQL format. If ``default_project`` is not specified, this must include both the project ID and the dataset ID, separated by ``.``. defa...
def _range_check(self, value, min_value, max_value): if value < min_value or value > max_value: raise ValueError('%s out of range - %s is not between %s and %s' % (self.__class__.__name__, value, min_value, max_value))
Utility method to check that the given value is between min_value and max_value.
def createDocument(self, initDict = None) : if initDict is not None : return self.createDocument_(initDict) else : if self._validation["on_load"] : self._validation["on_load"] = False return self.createDocument_(self.defaultDocument) ...
create and returns a document populated with the defaults or with the values in initDict
def categories_percent(s, categories): count = 0 s = to_unicode(s, precise=True) for c in s: if unicodedata.category(c) in categories: count += 1 return 100 * float(count) / len(s) if len(s) else 0
Returns category characters percent. >>> categories_percent("qqq ggg hhh", ["Po"]) 0.0 >>> categories_percent("q,w.", ["Po"]) 50.0 >>> categories_percent("qqq ggg hhh", ["Nd"]) 0.0 >>> categories_percent("q5", ["Nd"]) 50.0 >>> categories_percent("s.s,5s", ["Po", "Nd"]) 50.0
def send_vdp_vnic_down(self, port_uuid=None, vsiid=None, mgrid=None, typeid=None, typeid_ver=None, vsiid_frmt=vdp_const.VDP_VSIFRMT_UUID, filter_frmt=vdp_const.VDP_FILTER_GIDMACVID, gid=0, mac="", vlan=0, oui="")...
Interface function to apps, called for a vNIC DOWN. This currently sends a VDP dis-associate message. Please refer to http://www.ieee802.org/1/pages/802.1bg.html VDP Section for more detailed information :param uuid: uuid of the vNIC :param vsiid: VSI value, Only UUID supported fo...
def UpsertUserDefinedFunction(self, collection_link, udf, options=None): if options is None: options = {} collection_id, path, udf = self._GetContainerIdWithPathForUDF(collection_link, udf) return self.Upsert(udf, path, 'udfs', ...
Upserts a user defined function in a collection. :param str collection_link: The link to the collection. :param str udf: :param dict options: The request options for the request. :return: The upserted UDF. :rtype: dict
def json(self): try: return self._json except AttributeError: try: self._json = json.loads(self.text) return self._json except: raise RequestInvalidJSON("Invalid JSON received")
Return an object representing the return json for the request
def parse_sql(self, asql): import sqlparse statements = sqlparse.parse(sqlparse.format(asql, strip_comments=True)) parsed_statements = [] for statement in statements: statement_str = statement.to_unicode().strip() for preprocessor in self._backend.sql_processors()...
Executes all sql statements from asql. Args: library (library.Library): asql (str): ambry sql query - see https://github.com/CivicKnowledge/ambry/issues/140 for details.
def snow_depth(self, value=999.0): if value is not None: try: value = float(value) except ValueError: raise ValueError('value {} need to be of type float ' 'for field `snow_depth`'.format(value)) self._snow_depth = ...
Corresponds to IDD Field `snow_depth` Args: value (float): value for IDD Field `snow_depth` Unit: cm Missing value: 999.0 if `value` is None it will not be checked against the specification and is assumed to be a missing value ...
def shuffle_columns( a ): mask = list(range( a.text_size )) random.shuffle( mask ) for c in a.components: c.text = ''.join( [ c.text[i] for i in mask ] )
Randomize the columns of an alignment
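A runnable sketch of the column shuffle, with minimal hypothetical `Alignment`/`Component` stand-ins for the real alignment objects; note that `list(range(...))` is required under Python 3, where `range` objects cannot be shuffled:

```python
import random

class Component:  # hypothetical stand-in for a sequence row
    def __init__(self, text):
        self.text = text

class Alignment:  # hypothetical stand-in for the real alignment object
    def __init__(self, components):
        self.components = components
        self.text_size = len(components[0].text)

def shuffle_columns(a):
    # one permutation applied to every row keeps columns aligned
    mask = list(range(a.text_size))  # list() needed in Python 3
    random.shuffle(mask)
    for c in a.components:
        c.text = ''.join(c.text[i] for i in mask)
```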
def write(self, group_id, handle): name = self.name.encode('utf-8') handle.write(struct.pack('bb', len(name), group_id)) handle.write(name) handle.write(struct.pack('<h', self.binary_size() - 2 - len(name))) handle.write(struct.pack('b', self.bytes_per_element)) handle.wr...
Write binary data for this parameter to a file handle. Parameters ---------- group_id : int The numerical ID of the group that holds this parameter. handle : file handle An open, writable, binary file handle.
def _waitForIP(cls, instance): logger.debug('Waiting for ip...') while True: time.sleep(a_short_time) instance.update() if instance.ip_address or instance.public_dns_name or instance.private_ip_address: logger.debug('...got ip') break
Wait until the instance has a public IP address assigned to it. :type instance: boto.ec2.instance.Instance
def single(C, namespace=None): if namespace is None: B = C()._ else: B = C(default=namespace, _=namespace)._ return B
An element maker with a single namespace that uses that namespace as the default
def get_all_users(self, path_prefix='/', marker=None, max_items=None): params = {'PathPrefix' : path_prefix} if marker: params['Marker'] = marker if max_items: params['MaxItems'] = max_items return self.get_response('ListUsers', params, list_marker='Users')
List the users that have the specified path prefix. :type path_prefix: string :param path_prefix: If provided, only users whose paths match the provided prefix will be returned. :type marker: string :param marker: Use this only when paginating results and on...
def explain_weights_lightning(estimator, vec=None, top=20, target_names=None, targets=None, feature_names=None, coef_scale=None): return explain_weights_lightning_not_supported(estimator)
Return an explanation of a lightning estimator weights
def wait_for_thrift_interface(self, **kwargs): if self.cluster.version() >= '4': return; self.watch_log_for("Listening for thrift clients...", **kwargs) thrift_itf = self.network_interfaces['thrift'] if not common.check_socket_listening(thrift_itf, timeout=30): wa...
Waits for the Thrift interface to be listening. Emits a warning if not listening after 30 seconds.
def get_encoding(headers, content): encoding = None content_type = headers.get('content-type') if content_type: _, params = cgi.parse_header(content_type) if 'charset' in params: encoding = params['charset'].strip("'\"") if not encoding: content = utils.pretty_unicode...
Get encoding from request headers or page head.
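The charset extraction from a Content-Type header can be done without the deprecated `cgi` module; a simplified stand-in (the function name is an assumption, and it handles only the charset parameter):

```python
def charset_from_content_type(content_type):
    # "text/html; charset=UTF-8" -> "UTF-8"; None when absent
    for part in content_type.split(';')[1:]:
        key, _, value = part.strip().partition('=')
        if key.lower() == 'charset':
            return value.strip("'\"")
    return None
```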
def conv_stride2_multistep(x, nbr_steps, output_filters, name=None, reuse=None): with tf.variable_scope( name, default_name="conv_stride2_multistep", values=[x], reuse=reuse): if nbr_steps == 0: out = conv(x, output_filters, (1, 1)) return out, [out] hidden_layers = [x] for i in range(nb...
Use a strided convolution to downsample x by 2, `nbr_steps` times. We use stride and filter size 2 to avoid the checkerboard problem of deconvs. As detailed in http://distill.pub/2016/deconv-checkerboard/. Args: x: a `Tensor` with shape `[batch, spatial, depth]` or `[batch, spatial_1, spatial_2, depth]...
def draw_dot(self, pos, color): if 0 <= pos[0] < self.width and 0 <= pos[1] < self.height: self.matrix[pos[0]][pos[1]] = color
Draw one single dot with the given color on the screen. :param pos: Position of the dot :param color: Color for the dot :type pos: tuple :type color: tuple
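A self-contained sketch of the bounds-checked pixel write, with a hypothetical minimal `Screen` host class (the real class and its `matrix` layout are assumptions inferred from the snippet):

```python
class Screen:  # hypothetical host class for draw_dot
    def __init__(self, width, height):
        self.width, self.height = width, height
        # matrix indexed as [x][y]
        self.matrix = [[None] * height for _ in range(width)]

    def draw_dot(self, pos, color):
        # out-of-bounds positions are silently ignored
        if 0 <= pos[0] < self.width and 0 <= pos[1] < self.height:
            self.matrix[pos[0]][pos[1]] = color
```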
def garbage_graph(index): graph = _compute_garbage_graphs()[int(index)] reduce_graph = bottle.request.GET.get('reduce', '') if reduce_graph: graph = graph.reduce_to_cycles() if not graph: return None filename = 'garbage%so%s.png' % (index, reduce_graph) rendered_file = _get_graph...
Get graph representation of reference cycle.
def _get_parsed_url(url): try: parsed = urllib3_parse(url) except ValueError: scheme, _, url = url.partition("://") auth, _, url = url.rpartition("@") url = "{scheme}://{url}".format(scheme=scheme, url=url) parsed = urllib3_parse(url)._replace(auth=auth) return parsed
This is a stand-in function for `urllib3.util.parse_url`. The original function doesn't handle special characters very well; this simply splits out the authentication section, creates the parsed url, then puts the authentication section back in, bypassing validation. :return: The new, parsed URL object ...
def _reversePoints(points): points = _copyPoints(points) firstOnCurve = None for index, point in enumerate(points): if point.segmentType is not None: firstOnCurve = index break lastSegmentType = points[firstOnCurve].segmentType points = reversed(points) final = []...
Reverse the points. This differs from the reversal point pen in RoboFab in that it doesn't worry about maintaining the start point position. That has no benefit within the context of this module.
def login(config, username=None, password=None, email=None, url=None, client=None, *args, **kwargs): try: c = (_get_client(config) if not client else client) lg = c.login(username, password, email, url) print("%s logged to %s" % (username, (url if url else "default hub"))) except Exception a...
Wrapper to the docker.py login method
def profile_form_factory(): if current_app.config['USERPROFILES_EMAIL_ENABLED']: return EmailProfileForm( formdata=None, username=current_userprofile.username, full_name=current_userprofile.full_name, email=current_user.email, email_repeat=current_...
Create a profile form.
def fwriter(filename, gz=False, bz=False): if filename.endswith('.gz'): gz = True elif filename.endswith('.bz2'): bz = True if gz: if not filename.endswith('.gz'): filename += '.gz' return gzip.open(filename, 'wb') elif bz: if not filename.endswith('.b...
Returns a filewriter object that can write plain or gzipped output. If gzip or bzip2 compression is asked for then the usual filename extension will be added.
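The truncated writer above can be completed as a sketch (the bzip2 branch is an assumption, mirrored from the gzip branch, since the original is cut off):

```python
import bz2
import gzip

def fwriter(filename, gz=False, bz=False):
    # infer compression from the extension; append the
    # usual extension when compression is forced by flag
    if filename.endswith('.gz'):
        gz = True
    elif filename.endswith('.bz2'):
        bz = True
    if gz:
        if not filename.endswith('.gz'):
            filename += '.gz'
        return gzip.open(filename, 'wb')
    elif bz:
        if not filename.endswith('.bz2'):
            filename += '.bz2'
        return bz2.open(filename, 'wb')
    return open(filename, 'w')
```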
def _pop_digits(char_list): logger.debug('_pop_digits(%s)', char_list) digits = [] while len(char_list) != 0 and char_list[0].isdigit(): digits.append(char_list.pop(0)) logger.debug('got digits: %s', digits) logger.debug('updated char list: %s', char_list) return digits
Pop consecutive digits from the front of list and return them Pops any and all consecutive digits from the start of the provided character list and returns them as a list of string digits. Operates on (and possibly alters) the passed list. :param list char_list: a list of characters :return: a lis...
def update_edge_todo(self, elev_fn, dem_proc): for key in self.edges[elev_fn].keys(): self.edges[elev_fn][key].set_data('todo', data=dem_proc.edge_todo)
Update the edge todo data based on the elevation filename.
def dump(data, out, ac_parser=None, **options): ioi = anyconfig.ioinfo.make(out) psr = find(ioi, forced_type=ac_parser) LOGGER.info("Dumping: %s", ioi.path) psr.dump(data, ioi, **options)
Save 'data' to 'out'. :param data: A mapping object may have configurations data to dump :param out: An output file path, a file, a file-like object, :class:`pathlib.Path` object represents the file or a namedtuple 'anyconfig.globals.IOInfo' object represents output to dump some data to...
def initialize_dbs(settings): global _MAIN_SETTINGS, _MAIN_SITEURL, _MAIN_LANG, _SUBSITE_QUEUE _MAIN_SETTINGS = settings _MAIN_LANG = settings['DEFAULT_LANG'] _MAIN_SITEURL = settings['SITEURL'] _SUBSITE_QUEUE = settings.get('I18N_SUBSITES', {}).copy() prepare_site_db_and_overrides() _SITES_...
Initialize internal DBs using the Pelican settings dict This clears the DBs for e.g. autoreload mode to work
def watched(self, option): params = join_params(self.parameters, {"watched": option}) return self.__class__(**params)
Set whether to filter by a user's watchlist. Options available are user.ONLY, user.NOT, and None; default is None.
def _check_r(self, r): if abs(np.dot(r[:, 0], r[:, 0]) - 1) > eps or \ abs(np.dot(r[:, 1], r[:, 1]) - 1) > eps or \ abs(np.dot(r[:, 2], r[:, 2]) - 1) > eps or \ np.dot(r[:, 0], r[:, 1]) > eps or \ np.dot(r[:, 1], r[:, 2]) > eps or \ np.dot(r[:, 2], r[:...
The columns must be unit-length and mutually orthogonal.
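The column-by-column checks amount to verifying RᵀR = I; a compact NumPy equivalent (a sketch under that reading, not the original method):

```python
import numpy as np

def is_rotation_matrix(r, eps=1e-10):
    # unit-length, mutually orthogonal columns <=> R^T R == I
    return np.allclose(r.T @ r, np.eye(r.shape[1]), atol=eps)
```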
def ads_use_dev_spaces(cluster_name, resource_group_name, update=False, space_name=None, do_not_prompt=False): azds_cli = _install_dev_spaces_cli(update) use_command_arguments = [azds_cli, 'use', '--name', cluster_name, '--resource-group', resource_group_name] if space_name is n...
Use Azure Dev Spaces with a managed Kubernetes cluster. :param cluster_name: Name of the managed cluster. :type cluster_name: String :param resource_group_name: Name of resource group. You can configure the default group using 'az configure --defaults group=<name>'. :type resource_group_name: St...
def _populate_audio_file(self): self.log(u"Populate audio file...") if self.audio_file_path_absolute is not None: self.log([u"audio_file_path_absolute is '%s'", self.audio_file_path_absolute]) self.audio_file = AudioFile( file_path=self.audio_file_path_absolute, ...
Create the ``self.audio_file`` object by reading the audio file at ``self.audio_file_path_absolute``.
def skip(self): buflen = len(self.buf) while True: self.buf = self.buf.lstrip() if self.buf == '': self.readline() buflen = len(self.buf) else: self.offset += (buflen - len(self.buf)) break if not...
Skip whitespace and count position
def mean_squared_error(data, ground_truth, mask=None, normalized=False, force_lower_is_better=True): if not hasattr(data, 'space'): data = odl.vector(data) space = data.space ground_truth = space.element(ground_truth) l2norm = odl.solvers.L2Norm(space) if mask is...
r"""Return mean squared L2 distance between ``data`` and ``ground_truth``. See also `this Wikipedia article <https://en.wikipedia.org/wiki/Mean_squared_error>`_. Parameters ---------- data : `Tensor` or `array-like` Input data to compare to the ground truth. If not a `Tensor`, an u...
def summary(self, title=None, complexity=False): if title is None: return javabridge.call( self.jobject, "toSummaryString", "()Ljava/lang/String;") else: return javabridge.call( self.jobject, "toSummaryString", "(Ljava/lang/String;Z)Ljava/lang/Stri...
Generates a summary. :param title: optional title :type title: str :param complexity: whether to print the complexity information as well :type complexity: bool :return: the summary :rtype: str
def find_course_by_crn(self, crn): for name, course in self.courses.iteritems(): if crn in course: return course return None
Searches all courses by CRNs. Not particularly efficient. Returns None if not found.
def normalizer(text, exclusion=OPERATIONS_EXCLUSION, lower=True, separate_char='-', **kwargs): clean_str = re.sub(r'[^\w{}]'.format( "".join(exclusion)), separate_char, text.strip()) or '' clean_lowerbar = clean_str_without_accents = strip_accents(clean_str) if '_' not in exclusion: clean_lo...
Clean a text string of symbols, keeping only alphanumeric chars.
def reroot(self, s): o_s1 = self.first_lookup[s] splice1 = self.tour[1:o_s1] rest = self.tour[o_s1 + 1:] new_tour = [s] + rest + splice1 + [s] new_tree = TestETT.from_tour(new_tour, fast=self.fast) return new_tree
Examples: s = 3, s = 'B'. Let o_s denote any occurrence of s. Splice out the first part of the sequence ending with the occurrence before o_s, remove its first occurrence, and tack this onto the end of the sequence, which now begins with o_s. Add a new occurrence of s to the end.
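On a plain Euler-tour list the splice reads as below; this is a standalone sketch mirroring the indexing in the snippet (the original operates on a tree class, not a bare list):

```python
def reroot_tour(tour, s):
    # move the prefix before the first occurrence of s to the end,
    # dropping the old root's leading occurrence and closing with s
    i = tour.index(s)
    return [s] + tour[i + 1:] + tour[1:i] + [s]

# Euler tour of the star 1-2, 1-3 rooted at 1, rerooted at 3:
reroot_tour([1, 2, 1, 3, 1], 3)  # -> [3, 1, 2, 1, 3]
```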
def _query(self, *criterion): return self.session.query( self.model_class ).filter( *criterion )
Construct a query for the model.
def mount_rate_limit_adapters(cls, session=None, rls_config=None, **kwargs): session = session or HTTP_SESSION if rls_config is None: rls_config = RateLimiter.get_configs() for name, rl_conf in rls_config.items(): urls = rl_conf.get('urls...
Mount rate-limits adapters on the specified `requests.Session` object. :param py:class:`requests.Session` session: Session to mount. If not specified, then use the global `HTTP_SESSION`. :param dict rls_config: Rate-limits configuration. If not specified, then ...
def C_array2dict(C): d = OrderedDict() i = 0 for k in C_keys: s = C_keys_shape[k] if s == 1: j = i + 1 d[k] = C[i] else: j = i + reduce(operator.mul, s, 1) d[k] = C[i:j].reshape(s) i = j return d
Convert a 1D array containing C values to a dictionary.
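A runnable version of the unflattening loop, with a hypothetical layout for the module-level `C_keys`/`C_keys_shape` (one scalar followed by a 2x2 block; the real layout is not shown in the snippet):

```python
import operator
from collections import OrderedDict
from functools import reduce

import numpy as np

# hypothetical layout: one scalar, then a 2x2 block
C_keys = ['alpha', 'beta']
C_keys_shape = {'alpha': 1, 'beta': (2, 2)}

def C_array2dict(C):
    # walk the flat array, slicing and reshaping per key
    d = OrderedDict()
    i = 0
    for k in C_keys:
        s = C_keys_shape[k]
        if s == 1:
            j = i + 1
            d[k] = C[i]
        else:
            j = i + reduce(operator.mul, s, 1)
            d[k] = C[i:j].reshape(s)
        i = j
    return d
```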
def read(self): def ds(data_element): value = self._str_filter.ToStringPair(data_element.GetTag()) if value[1]: return DataElement(data_element, value[0].strip(), value[1].strip()) results = [data for data in self.walk(ds) if data is not None] return resul...
Returns a list of DataElements containing all the data elements in the DICOM file.
def _recursive_gh_get(href, items): response = _request('GET', href) response.raise_for_status() items.extend(response.json()) if "link" not in response.headers: return links = link_header.parse(response.headers["link"]) rels = {link.rel: link.href for link in links.links} if "next" ...
Recursively get list of GitHub objects. See https://developer.github.com/v3/guides/traversing-with-pagination/
def generateSensorimotorSequence(self, sequenceLength): motorSequence = [] sensorySequence = [] sensorimotorSequence = [] currentEyeLoc = self.nupicRandomChoice(self.spatialConfig) for i in xrange(sequenceLength): currentSensoryInput = self.spatialMap[tuple(currentEyeLoc)] nextEyeLoc, cu...
Generate sensorimotor sequences of length sequenceLength. @param sequenceLength (int) Length of the sensorimotor sequence. @return (tuple) Contains: sensorySequence (list) Encoded sensory input for whole sequence. motorSequence (list) ...
def to_pandas(self): agedepthdf = pd.DataFrame(self.age, index=self.data.depth) agedepthdf.columns = list(range(self.n_members())) out = (agedepthdf.join(self.data.set_index('depth')) .reset_index() .melt(id_vars=self.data.columns.values, var_name='mciter', value_na...
Convert record to pandas.DataFrame
def _get_entity(service_instance, entity): log.trace('Retrieving entity: %s', entity) if entity['type'] == 'cluster': dc_ref = salt.utils.vmware.get_datacenter(service_instance, entity['datacenter']) return salt.utils.vmware.get_cluster(dc_ref, e...
Returns the entity associated with the entity dict representation Supported entities: cluster, vcenter Expected entity format: .. code-block:: python cluster: {'type': 'cluster', 'datacenter': <datacenter_name>, 'cluster': <cluster_name>} vcenter: ...
def send(self, channel, payload): with track('send_channel=' + channel): with track('create event'): Event.objects.create( group=self, channel=channel, value=payload) ChannelGroup(str(self.pk)).send( ...
Send a message with the given payload on the given channel. Messages are broadcast to all players in the group.
def check() -> Result: try: with Connection(conf.get('CELERY_BROKER_URL')) as conn: conn.connect() except ConnectionRefusedError: return Result(message='Service unable to connect, "Connection was refused".', severity=Result.ERROR) ...
Open and close the broker channel.
def collapse_nested(self, cats, max_nestedness=10): children = [] removed = set() nestedness = max_nestedness old = list(self.widget.options.values()) nested = [cat for cat in old if getattr(cat, 'cat') is not None] parents = {cat.cat for cat in nested} parents_to...
Collapse any items that are nested under cats. `max_nestedness` acts as a fail-safe to prevent infinite looping.
def accept_freeware_license(): ntab = 3 if version().startswith('6.6.') else 2 for _ in range(ntab): EasyProcess('xdotool key KP_Tab').call() time.sleep(0.5) EasyProcess('xdotool key KP_Space').call() time.sleep(0.5) EasyProcess('xdotool key KP_Space').call()
Different Eagle versions need different TAB counts: 6.5 -> 2, 6.6 -> 3, 7.4 -> 2.
async def is_ready(self): async def slave_task(addr, timeout): try: r_manager = await self.env.connect(addr, timeout=timeout) ready = await r_manager.is_ready() if not ready: return False except: return F...
Check if the multi-environment has been fully initialized. This calls each slave environment managers' :py:meth:`is_ready` and checks if the multi-environment itself is ready by calling :py:meth:`~creamas.mp.MultiEnvironment.check_ready`. .. seealso:: :py:meth:`creamas.cor...
def live_dirs(self): if self.has_results_dir: yield self.results_dir yield self.current_results_dir if self.has_previous_results_dir: yield self.previous_results_dir
Yields directories that must exist for this VersionedTarget to function.
def query_foursquare(point, max_distance, client_id, client_secret): if not client_id: return [] if not client_secret: return [] if from_cache(FS_CACHE, point, max_distance): return from_cache(FS_CACHE, point, max_distance) url = FOURSQUARE_URL % (client_id, client_secret, point....
Queries the Foursquare API for a location Args: point (:obj:`Point`): Point location to query max_distance (float): Search radius, in meters client_id (str): Valid Foursquare client id client_secret (str): Valid Foursquare client secret Returns: :obj:`list` of :obj:`dict`: ...
def wcwidth(wc): ucs = ord(wc) if (ucs == 0 or ucs == 0x034F or 0x200B <= ucs <= 0x200F or ucs == 0x2028 or ucs == 0x2029 or 0x202A <= ucs <= 0x202E or 0x2060 <= ucs <= 0x2063): return 0 if ucs < 32 or 0x07F <= ucs < 0x0A0...
r""" Given one unicode character, return its printable length on a terminal. The wcwidth() function returns 0 if the wc argument has no printable effect on a terminal (such as NUL '\0'), -1 if wc is not printable, or has an indeterminate effect on the terminal, such as a control character. Otherwis...
def _get_repo_info(alias, repos_cfg=None, root=None): try: meta = dict((repos_cfg or _get_configured_repos(root=root)).items(alias)) meta['alias'] = alias for key, val in six.iteritems(meta): if val in ['0', '1']: meta[key] = int(meta[key]) == 1 elif v...
Get metadata for one repo.
def multitaper_cross_spectrum(self, clm, slm, k, convention='power', unit='per_l', **kwargs): return self._multitaper_cross_spectrum(clm, slm, k, convention=convention, unit=unit, **kw...
Return the multitaper cross-spectrum estimate and standard error. Usage ----- mtse, sd = x.multitaper_cross_spectrum(clm, slm, k, [convention, unit, lmax, taper_wt, clat, cl...
def exp_backoff(attempt, cap=3600, base=300): max_attempts = math.log(cap / base, 2) if attempt <= max_attempts: return base * 2 ** attempt return cap
Return the exponential backoff delay for the given attempt, doubling from ``base`` until ``cap`` is reached.
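For illustration, here is the backoff function above made self-contained. With the defaults it yields 300, 600, 1200, 2400 seconds for attempts 0 through 3, then stays pinned at the 3600-second cap once `attempt` exceeds `log2(cap / base)`:

```python
import math

def exp_backoff(attempt, cap=3600, base=300):
    # Double the delay each attempt until the cap is reached.
    max_attempts = math.log(cap / base, 2)
    if attempt <= max_attempts:
        return base * 2 ** attempt
    return cap
```

Note the delay for attempt 0 is `base` itself, so the sequence starts at the base interval rather than zero.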
def create_boots_layer(aspect, ip): layer = [] if 'BOOTS' in aspect: layer = pgnreader.parse_pagan_file(FILE_BOOTS, ip, invert=False, sym=True) return layer
Reads the BOOTS.pgn file and creates the boots layer.
def make_instance(cls, id, client, parent_id=None, json=None): make_cls = CLASS_MAP.get(id) if make_cls is None: return None real_json = json['data'] real_id = real_json['id'] return Base.make(real_id, client, make_cls, parent_id=parent_id, json=real_json)
Overrides Base's ``make_instance`` to allow dynamic creation of objects based on the defined type in the response json. :param cls: The class this was called on :param id: The id of the instance to create :param client: The client to use for this instance :param parent_id: The p...
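The dispatch above picks a concrete class by type id and builds it from the nested `"data"` payload. A stripped-down sketch of that pattern (the `Board`/`Card` classes and this standalone `make_instance` are hypothetical stand-ins, not the library's API):

```python
class Board:
    def __init__(self, id, json):
        self.id, self.json = id, json

class Card:
    def __init__(self, id, json):
        self.id, self.json = id, json

# Maps a type id from the response to the class that should represent it.
CLASS_MAP = {"board": Board, "card": Card}

def make_instance(type_id, json):
    make_cls = CLASS_MAP.get(type_id)
    if make_cls is None:
        return None  # unknown type: let the caller decide how to handle it
    real_json = json["data"]
    return make_cls(real_json["id"], real_json)
```

Returning `None` for unrecognized type ids keeps the factory forward-compatible with response types the client does not model yet.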
def _parse_cod_segment(cls, fptr): offset = fptr.tell() - 2 read_buffer = fptr.read(2) length, = struct.unpack('>H', read_buffer) read_buffer = fptr.read(length - 2) lst = struct.unpack_from('>BBHBBBBBB', read_buffer, offset=0) scod, prog, nlayers, mct, nr, xcb, ycb, csty...
Parse the COD segment. Parameters ---------- fptr : file Open file object. Returns ------- CODSegment The current COD segment.
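The segment body above is decoded with a single big-endian `struct` format. A self-contained round trip of that `'>BBHBBBBBB'` layout (the sample field values and names in the comment are illustrative, not taken from a real codestream):

```python
import struct

# Nine fields: one unsigned short (nlayers) among unsigned bytes,
# all big-endian, 10 bytes total.
FMT = ">BBHBBBBBB"

fields = (1, 0, 3, 1, 5, 4, 4, 0, 0)
buf = struct.pack(FMT, *fields)          # encode a fake segment body
parsed = struct.unpack_from(FMT, buf, offset=0)  # decode it back
```

Because the format is fixed-width, `struct.calcsize(FMT)` gives the exact number of body bytes the parser must read after the length field.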
def _get_auth(self, force_console=False): if not self.target: raise ValueError("Unspecified target ({!r})".format(self.target)) elif not force_console and self.URL_RE.match(self.target): auth_url = urlparse(self.target) source = 'url' if auth_url.username:...
Try to get login auth from known sources.
def add_chunk(self, chunk_obj): if chunk_obj.get_id() in self.idx: raise ValueError("Chunk with id {} already exists!" .format(chunk_obj.get_id())) self.node.append(chunk_obj.get_node()) self.idx[chunk_obj.get_id()] = chunk_obj
Adds a chunk object to the layer @type chunk_obj: L{Cchunk} @param chunk_obj: the chunk object
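The duplicate-id guard in `add_chunk` can be sketched with a plain dict standing in for the layer's node and index (this `Layer` class and its attribute names are hypothetical, simplified from the original):

```python
class Layer:
    def __init__(self):
        self.idx = {}      # id -> chunk, used to reject duplicates
        self.chunks = []   # ordered storage, mirroring the XML node

    def add_chunk(self, chunk_id, chunk):
        # Refuse to register two chunks under the same id.
        if chunk_id in self.idx:
            raise ValueError("Chunk with id {} already exists!".format(chunk_id))
        self.chunks.append(chunk)
        self.idx[chunk_id] = chunk
```

Checking the index before appending keeps the two structures consistent: a rejected chunk is never partially added.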
def get_model_choices(): result = [] for ct in ContentType.objects.order_by('app_label', 'model'): try: if issubclass(ct.model_class(), TranslatableModel): result.append( ('{} - {}'.format(ct.app_label, ct.model.lower()), '{} - {}'.for...
Get the select options for the model selector. :return: list of (value, label) tuples.
def remove_datastore(datastore, service_instance=None): log.trace('Removing datastore \'%s\'', datastore) target = _get_proxy_target(service_instance) datastores = salt.utils.vmware.get_datastores( service_instance, reference=target, datastore_names=[datastore]) if not datastores...
Removes a datastore. If multiple datastores with the same name are found, an error is raised. datastore Datastore name service_instance Service instance (vim.ServiceInstance) of the vCenter/ESXi host. Default is None. .. code-block:: bash salt '*' vsphere.remove_datastore ds_name
def get_stoplist(language): file_path = os.path.join("stoplists", "%s.txt" % language) try: stopwords = pkgutil.get_data("justext", file_path) except IOError: raise ValueError( "Stoplist for language '%s' is missing. " "Please use function 'get_stoplists' for complete...
Returns a built-in stop-list for the language as a set of words.
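`pkgutil.get_data` returns the stoplist file as raw bytes, so the useful part of `get_stoplist` is the decode-and-split step. A sketch of that step with sample bytes standing in for the packaged file (the helper name `parse_stoplist` is hypothetical):

```python
raw = b"the\nand\nof\nto\n"  # stand-in for pkgutil.get_data("justext", path)

def parse_stoplist(data):
    # Decode the packaged bytes and collect one word per non-empty line.
    return frozenset(w.strip() for w in data.decode("utf8").splitlines() if w.strip())

stopwords = parse_stoplist(raw)
```

Using a `frozenset` makes membership tests O(1) and signals that the stop-list is read-only.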
def _create_stdout_logger(logging_level): out_hdlr = logging.StreamHandler(sys.stdout) out_hdlr.setFormatter(logging.Formatter( '[%(asctime)s] %(message)s', "%H:%M:%S" )) out_hdlr.setLevel(logging_level) for name in LOGGING_NAMES: log = logging.getLogger(name) log.addHandler(...
Create a logger to stdout. This creates a logger for a series of modules we would like to log information on.
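The setup above attaches one stdout handler per named logger. A single-logger version of the same wiring (the function name `create_stdout_logger` here is a hypothetical standalone variant, not the module's private helper):

```python
import logging
import sys

def create_stdout_logger(name, level=logging.INFO):
    # Attach a stdout handler with a timestamped message format.
    handler = logging.StreamHandler(sys.stdout)
    handler.setFormatter(logging.Formatter("[%(asctime)s] %(message)s", "%H:%M:%S"))
    handler.setLevel(level)
    log = logging.getLogger(name)
    log.addHandler(handler)
    log.setLevel(level)
    return log
```

Setting the level on both the handler and the logger ensures records are neither dropped by the logger before reaching the handler nor filtered out by the handler itself.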
def get_last_doc(self): def docs_by_ts(): for meta_collection_name in self._meta_collections(): meta_coll = self.meta_database[meta_collection_name] for ts_ns_doc in meta_coll.find(limit=-1).sort("_ts", -1): yield ts_ns_doc return max(docs_...
Returns the most recently stored document (highest ``_ts``) across the meta collections in Mongo.
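Once each meta collection yields its newest candidate, `get_last_doc` reduces them with `max` keyed on the `_ts` timestamp. The same selection over in-memory dicts (sample documents stand in for the Mongo collections; `last_doc` is a hypothetical helper name):

```python
def last_doc(candidates):
    # Pick the document with the highest "_ts"; None if there are none.
    return max(candidates, key=lambda doc: doc["_ts"], default=None)

docs = [{"_ts": 10, "ns": "a"}, {"_ts": 42, "ns": "b"}, {"_ts": 7, "ns": "c"}]
```

The `default=None` argument keeps the empty case from raising `ValueError`, mirroring a deployment with no meta documents yet.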