def to_string(type):
    if type is None:
        return "unknown"
    elif type == TypeCode.Unknown:
        return "unknown"
    elif type == TypeCode.String:
        return "string"
    elif type == TypeCode.Integer:
        return "integer"
    elif type == TypeCode.Long:
        return "long"
    elif type == TypeCode.Float:
        return "float"
    elif type == TypeCode.Double:
        return "double"
    elif type == TypeCode.Duration:
        return "duration"
    elif type == TypeCode.DateTime:
        return "datetime"
    elif type == TypeCode.Object:
        return "object"
    elif type == TypeCode.Enum:
        return "enum"
    elif type == TypeCode.Array:
        return "array"
    elif type == TypeCode.Map:
        return "map"
    else:
        return "unknown"
Converts a TypeCode into its string name. :param type: the TypeCode to convert into a string. :return: the name of the TypeCode passed as a string value.
def manage_submissions(self):
    if not hasattr(self, 'submissions') or len(self.submissions) == 1:
        self.submissions = []
        if self.options['mode'] == 'front':
            if self.options['password'] and self.options['username']:
                self.login()
            url = 'http://reddit.com/.json?sort={0}'.format(self.options['sort'])
            self.submissions = self.get_submissions(url)
        elif self.options['mode'] == 'subreddit':
            for subreddit in self.options['subreddits']:
                url = 'http://reddit.com/r/{0}/.json?sort={1}'.format(
                    subreddit, self.options['sort'])
                self.submissions += self.get_submissions(url)
        else:
            return
If there are no or only one submissions left, get new submissions. This function manages URL creation and the specifics for front page or subreddit mode.
def sle(actual, predicted):
    return np.power(np.log(np.array(actual) + 1)
                    - np.log(np.array(predicted) + 1), 2)
Computes the squared log error. This function computes the squared log error between two numbers, or for element between a pair of lists or numpy arrays. Parameters ---------- actual : int, float, list of numbers, numpy array The ground truth value predicted : same type as actual The predicted value Returns ------- score : double or list of doubles The squared log error between actual and predicted
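A minimal, self-contained sketch of the squared log error above, using `np.log1p` (equivalent to `log(x + 1)` but numerically safer near zero); the function name and use of `log1p` here are illustrative choices, not taken from the original library:

```python
import numpy as np

def sle(actual, predicted):
    """Element-wise squared log error between scalars, lists, or arrays."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return np.power(np.log1p(actual) - np.log1p(predicted), 2)

# Identical inputs give zero error everywhere.
print(sle([1.0, 2.0], [1.0, 2.0]))
```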
def _local_install(self, args, pkg_name=None):
    if len(args) < 2:
        raise SPMInvocationError('A package file must be specified')
    self._install(args)
Install a package from a file
def teardown_websocket(self, func: Callable, name: AppOrBlueprintKey = None) -> Callable:
    handler = ensure_coroutine(func)
    self.teardown_websocket_funcs[name].append(handler)
    return func
Add a teardown websocket function. This is designed to be used as a decorator. An example usage, .. code-block:: python @app.teardown_websocket def func(): ... Arguments: func: The teardown websocket function itself. name: Optional blueprint key name.
async def remove(self, von_wallet: Wallet) -> None:
    LOGGER.debug('WalletManager.remove >>> wallet %s', von_wallet)
    await von_wallet.remove()
    LOGGER.debug('WalletManager.remove <<<')
Remove serialized wallet if it exists. Raise WalletState if wallet is open. :param von_wallet: (closed) wallet to remove
def send(self, data, sample_rate=None):
    if self._disabled:
        self.logger.debug('Connection disabled, not sending data')
        return False
    if sample_rate is None:
        sample_rate = self._sample_rate
    sampled_data = {}
    if sample_rate < 1:
        if random.random() <= sample_rate:
            for stat, value in compat.iter_dict(data):
                sampled_data[stat] = '%s|@%s' % (data[stat], sample_rate)
    else:
        sampled_data = data
    try:
        for stat, value in compat.iter_dict(sampled_data):
            send_data = ('%s:%s' % (stat, value)).encode("utf-8")
            self.udp_sock.send(send_data)
        return True
    except Exception as e:
        self.logger.exception('unexpected error %r while sending data', e)
        return False
Send the data over UDP while taking the sample_rate in account The sample rate should be a number between `0` and `1` which indicates the probability that a message will be sent. The sample_rate is also communicated to `statsd` so it knows what multiplier to use. :keyword data: The data to send :type data: dict :keyword sample_rate: The sample rate, defaults to `1` (meaning always) :type sample_rate: int
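The sampling step can be sketched on its own. This is a hypothetical, network-free variant: each metric is kept with probability `sample_rate` and its value is annotated with `|@rate` so a statsd-style server can scale counts back up; the injectable `rng` parameter is an illustration device for testing, not part of the original API:

```python
import random

def sample_metrics(data, sample_rate, rng=random.random):
    """Keep each metric with probability `sample_rate`, annotating the
    value with the rate so the server can compensate for the sampling."""
    if sample_rate >= 1:
        return dict(data)
    sampled = {}
    for stat, value in data.items():
        if rng() <= sample_rate:
            sampled[stat] = '%s|@%s' % (value, sample_rate)
    return sampled

# With rng forced low, every metric is kept and annotated with the rate.
print(sample_metrics({'hits': '1|c'}, 0.5, rng=lambda: 0.1))
```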
def getHTML(self):
    root = self.getRoot()
    if root is None:
        raise ValueError('Did not parse anything. Use parseFile or parseStr')
    if self.doctype:
        doctypeStr = '<!%s>\n' % (self.doctype)
    else:
        doctypeStr = ''
    rootNode = self.getRoot()
    if rootNode.tagName == INVISIBLE_ROOT_TAG:
        return doctypeStr + rootNode.innerHTML
    else:
        return doctypeStr + rootNode.outerHTML
getHTML - Get the full HTML as contained within this tree. If parsed from a document, this will contain the original whitespacing. @returns - <str> of html @see getFormattedHTML @see getMiniHTML
def to_decimal(number, strip='- '):
    if isinstance(number, six.integer_types):
        return str(number)
    number = str(number)
    number = re.sub(r'[%s]' % re.escape(strip), '', number)
    if number.startswith('0x'):
        return to_decimal(int(number[2:], 16))
    elif number.startswith('o'):
        return to_decimal(int(number[1:], 8))
    elif number.startswith('b'):
        return to_decimal(int(number[1:], 2))
    else:
        return str(int(number))
Converts a number to a string of decimals in base 10. >>> to_decimal(123) '123' >>> to_decimal('o123') '83' >>> to_decimal('b101010') '42' >>> to_decimal('0x2a') '42'
def compile_jobgroups_from_joblist(joblist, jgprefix, sgegroupsize):
    jobcmds = defaultdict(list)
    for job in joblist:
        jobcmds[job.command.split(' ', 1)[0]].append(job.command)
    jobgroups = []
    for cmds in list(jobcmds.items()):
        sublists = split_seq(cmds[1], sgegroupsize)
        count = 0
        for sublist in sublists:
            count += 1
            sge_jobcmdlist = ['\"%s\"' % jc for jc in sublist]
            jobgroups.append(JobGroup("%s_%d" % (jgprefix, count), "$cmds",
                                      arguments={'cmds': sge_jobcmdlist}))
    return jobgroups
Return list of jobgroups, rather than list of jobs.
def fetch(self, remote=None, refspec=None, verbose=False, tags=True):
    return git_fetch(self.repo_dir, remote=remote, refspec=refspec,
                     verbose=verbose, tags=tags)
Do a git fetch of `refspec`.
def _MergeEntities(self, a, b):
    scheme = {'price': self._MergeIdentical,
              'currency_type': self._MergeIdentical,
              'payment_method': self._MergeIdentical,
              'transfers': self._MergeIdentical,
              'transfer_duration': self._MergeIdentical}
    return self._SchemedMerge(scheme, a, b)
Merges the fares if all the attributes are the same.
def _compute_delta_beta(self, df, events, start, stop, weights):
    score_residuals = self._compute_residuals(df, events, start, stop, weights) * weights[:, None]
    naive_var = inv(self._hessian_)
    delta_betas = -score_residuals.dot(naive_var) / self._norm_std.values
    return delta_betas
Approximate the change in betas that would result from excluding the ith row.
def get_loglevel(level):
    if level == 'debug':
        return logging.DEBUG
    elif level in ('notice', 'info'):
        return logging.INFO
    elif level in ('warning', 'warn'):
        return logging.WARNING
    elif level in ('error', 'err'):
        return logging.ERROR
    elif level in ('critical', 'crit', 'alert', 'emergency', 'emerg'):
        return logging.CRITICAL
    else:
        return logging.INFO
return logging level object corresponding to a given level passed as a string @str level: name of a syslog log level @rtype: logging, logging level from logging module
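The dispatch above can also be expressed as a lookup table with the same INFO fallback. The dict literal and function name below are assumptions derived from the if/elif chain, not taken from the library:

```python
import logging

# Syslog-style level names mapped to stdlib logging levels.
_SYSLOG_LEVELS = {
    'debug': logging.DEBUG,
    'notice': logging.INFO, 'info': logging.INFO,
    'warning': logging.WARNING, 'warn': logging.WARNING,
    'error': logging.ERROR, 'err': logging.ERROR,
    'critical': logging.CRITICAL, 'crit': logging.CRITICAL,
    'alert': logging.CRITICAL, 'emergency': logging.CRITICAL,
    'emerg': logging.CRITICAL,
}

def loglevel_from_name(level):
    """Unknown names fall back to INFO, matching the original behavior."""
    return _SYSLOG_LEVELS.get(level, logging.INFO)
```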
def connect(self, dsn):
    self.con = psycopg2.connect(dsn)
    self.cur = self.con.cursor(cursor_factory=psycopg2.extras.DictCursor)
    self.con.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
Connect to DB using a libpq connection string (DSN). Typical DSN fields: dbname (the database name), user (user name used to authenticate), password (password used to authenticate), host (database host address; defaults to UNIX socket if not provided), port (connection port number; defaults to 5432 if not provided).
def summary(self):
    if not self.translations:
        self.update()
    return [summary.taf(trans) for trans in self.translations.forecast]
Condensed summary for each forecast created from translations
def visit_create_library_command(element, compiler, **kw):
    # NOTE: the original query template string was lost in extraction; the
    # template below is an assumed reconstruction based on the .format()
    # placeholders used further down (Redshift CREATE LIBRARY syntax).
    query = """
        CREATE {or_replace} LIBRARY {name}
        LANGUAGE plpythonu
        FROM :location
        WITH CREDENTIALS AS :credentials
        {region}
    """
    bindparams = [
        sa.bindparam(
            'location',
            value=element.location,
            type_=sa.String,
        ),
        sa.bindparam(
            'credentials',
            value=element.credentials,
            type_=sa.String,
        ),
    ]
    if element.region is not None:
        bindparams.append(sa.bindparam(
            'region',
            value=element.region,
            type_=sa.String,
        ))
    quoted_lib_name = compiler.preparer.quote_identifier(element.library_name)
    query = query.format(name=quoted_lib_name,
                         or_replace='OR REPLACE' if element.replace else '',
                         region='REGION :region' if element.region else '')
    return compiler.process(sa.text(query).bindparams(*bindparams), **kw)
Returns the actual sql query for the CreateLibraryCommand class.
def reload(self):
    utt_ids = sorted(self.utt_ids)
    if self.shuffle:
        self.rand.shuffle(utt_ids)
    partitions = []
    current_partition = PartitionInfo()
    for utt_id in utt_ids:
        utt_size = self.utt_sizes[utt_id]
        utt_lengths = self.utt_lengths[utt_id]
        if current_partition.size + utt_size > self.partition_size:
            partitions.append(current_partition)
            current_partition = PartitionInfo()
        current_partition.utt_ids.append(utt_id)
        current_partition.utt_lengths.append(utt_lengths)
        current_partition.size += utt_size
    if current_partition.size > 0:
        partitions.append(current_partition)
    self.partitions = partitions
    return self.partitions
Create a new partition scheme. A scheme defines which utterances are in which partition. The scheme only changes after every call if ``self.shuffle == True``. Returns: list: List of PartitionInfo objects, defining the new partitions (same as ``self.partitions``)
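The greedy size-based packing at the heart of the partitioning can be sketched with plain lists. This is a simplified stand-in (the function name and the guard against emitting empty partitions are my assumptions, not the library's code):

```python
def partition_by_size(items, sizes, max_size):
    """Greedily pack items, in order, into partitions whose total size
    stays at or under max_size; an oversized item gets its own partition."""
    partitions, current, current_size = [], [], 0
    for item in items:
        size = sizes[item]
        # Flush the current partition when adding this item would overflow it.
        if current and current_size + size > max_size:
            partitions.append(current)
            current, current_size = [], 0
        current.append(item)
        current_size += size
    if current:
        partitions.append(current)
    return partitions

print(partition_by_size(['a', 'b', 'c'], {'a': 2, 'b': 2, 'c': 3}, 4))
```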
def compile_resource(resource):
    return re.compile("^"
                      + trim_resource(re.sub(r":(\w+)", r"(?P<\1>[\w-]+?)", resource))
                      + r"(\?(?P<querystring>.*))?$")
Return compiled regex for resource matching
def serialize_yaml_tofile(filename, resource):
    with open(filename, "w") as stream:
        yaml.dump(resource, stream, default_flow_style=False)
Serializes a K8S resource to YAML-formatted file.
def mequg(m1, nr, nc):
    m1 = stypes.toDoubleMatrix(m1)
    mout = stypes.emptyDoubleMatrix(x=nc, y=nr)
    nc = ctypes.c_int(nc)
    nr = ctypes.c_int(nr)
    libspice.mequg_c(m1, nc, nr, mout)
    return stypes.cMatrixToNumpy(mout)
Set one double precision matrix of arbitrary size equal to another. http://naif.jpl.nasa.gov/pub/naif/toolkit_docs/C/cspice/mequg_c.html :param m1: Input matrix. :type m1: NxM-Element Array of floats :param nr: Row dimension of m1. :type nr: int :param nc: Column dimension of m1. :type nc: int :return: Output matrix equal to m1 :rtype: NxM-Element Array of floats
def run(self, *args):
    self.parser.parse_args(args)
    code = self.affiliate()
    return code
Affiliate unique identities to organizations.
def print_config(_run):
    final_config = _run.config
    config_mods = _run.config_modifications
    print(_format_config(final_config, config_mods))
Print the updated configuration and exit. Text is highlighted: green: value modified blue: value added red: value modified but type changed
def reset(self):
    dataSet = self.dataSet()
    if not dataSet:
        dataSet = XScheme()
    dataSet.reset()
Resets the colors to the default settings.
def string_for_count(dictionary, count):
    string_to_print = ""
    if count is not None:
        if count == 0:
            return ""
        ranger = count
    else:
        ranger = 2
    for index in range(ranger):
        string_to_print += "{} ".format(get_random_word(dictionary))
    return string_to_print.strip()
Create a random string of N=`count` words
def _set_program_defaults(cls, programs):
    for program in programs:
        val = getattr(cls, program.__name__) \
            and extern.does_external_program_run(program.__name__, Settings.verbose)
        setattr(cls, program.__name__, val)
Run the external program tester on the required binaries.
def avhrr(scans_nb, scan_points, scan_angle=55.37, frequency=1 / 6.0,
          apply_offset=True):
    avhrr_inst = np.vstack(((scan_points / 1023.5 - 1) * np.deg2rad(-scan_angle),
                            np.zeros((len(scan_points),))))
    avhrr_inst = np.tile(avhrr_inst[:, np.newaxis, :], [1, int(scans_nb), 1])
    times = np.tile(scan_points * 0.000025, [int(scans_nb), 1])
    if apply_offset:
        offset = np.arange(int(scans_nb)) * frequency
        times += np.expand_dims(offset, 1)
    return ScanGeometry(avhrr_inst, times)
Definition of the avhrr instrument. Source: NOAA KLM User's Guide, Appendix J http://www.ncdc.noaa.gov/oa/pod-guide/ncdc/docs/klm/html/j/app-j.htm
def build_parameter(name, properties):
    p = Parameter(name, Type=properties.get("type"))
    # Use a distinct loop variable so the `name` parameter is not shadowed.
    for prop_name, attr in PARAMETER_PROPERTIES.items():
        if prop_name in properties:
            setattr(p, attr, properties[prop_name])
    return p
Builds a troposphere Parameter with the given properties. Args: name (string): The name of the parameter. properties (dict): Contains the properties that will be applied to the parameter. See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/parameters-section-structure.html Returns: :class:`troposphere.Parameter`: The created parameter object.
def PushEventSource(self, event_source):
    if event_source.file_entry_type == (
            dfvfs_definitions.FILE_ENTRY_TYPE_DIRECTORY):
        weight = 1
    else:
        weight = 100
    heap_values = (weight, time.time(), event_source)
    heapq.heappush(self._heap, heap_values)
Pushes an event source onto the heap. Args: event_source (EventSource): event source.
def export_elements(self, filename='export_elements.zip', typeof='all'):
    valid_types = ['all', 'nw', 'ips', 'sv', 'rb', 'al', 'vpn']
    if typeof not in valid_types:
        typeof = 'all'
    return Task.download(self, 'export_elements', filename,
                         params={'recursive': True, 'type': typeof})
Export elements from SMC. Valid types are: all (All Elements)|nw (Network Elements)|ips (IPS Elements)| sv (Services)|rb (Security Policies)|al (Alerts)| vpn (VPN Elements) :param typeof: type of element :param filename: Name of file for export :raises TaskRunFailed: failure during export with reason :rtype: DownloadTask
def read_until_regex(self, regex: bytes, max_bytes: int = None) -> Awaitable[bytes]:
    future = self._start_read()
    self._read_regex = re.compile(regex)
    self._read_max_bytes = max_bytes
    try:
        self._try_inline_read()
    except UnsatisfiableReadError as e:
        gen_log.info("Unsatisfiable read, closing connection: %s" % e)
        self.close(exc_info=e)
        return future
    except:
        future.add_done_callback(lambda f: f.exception())
        raise
    return future
Asynchronously read until we have matched the given regex. The result includes the data that matches the regex and anything that came before it. If ``max_bytes`` is not None, the connection will be closed if more than ``max_bytes`` bytes have been read and the regex is not satisfied. .. versionchanged:: 4.0 Added the ``max_bytes`` argument. The ``callback`` argument is now optional and a `.Future` will be returned if it is omitted. .. versionchanged:: 6.0 The ``callback`` argument was removed. Use the returned `.Future` instead.
def pickle_compress(obj, print_compression_info=False):
    p = pickle.dumps(obj)
    c = zlib.compress(p)
    if print_compression_info:
        print("len = {:,d} compr={:,d} ratio:{:.6f}".format(
            len(p), len(c), float(len(c)) / len(p)))
    return c
pickle and compress an object
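The round trip (compress then restore) can be demonstrated with the stdlib alone; the function names below are illustrative, not from the original module:

```python
import pickle
import zlib

def compress_obj(obj):
    """Serialize an object with pickle, then deflate it with zlib."""
    return zlib.compress(pickle.dumps(obj))

def decompress_obj(blob):
    """Inverse of compress_obj: inflate, then unpickle."""
    return pickle.loads(zlib.decompress(blob))

data = {'xs': list(range(100))}
restored = compress_obj(data)
print(decompress_obj(restored) == data)
```

Note that unpickling untrusted data is unsafe; this round trip is only appropriate for data you produced yourself.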
def from_string(cls, dataset_id, default_project=None):
    output_dataset_id = dataset_id
    output_project_id = default_project
    parts = dataset_id.split(".")
    if len(parts) == 1 and not default_project:
        raise ValueError(
            "When default_project is not set, dataset_id must be a "
            "fully-qualified dataset ID in standard SQL format. "
            'e.g. "project.dataset_id", got {}'.format(dataset_id)
        )
    elif len(parts) == 2:
        output_project_id, output_dataset_id = parts
    elif len(parts) > 2:
        raise ValueError(
            "Too many parts in dataset_id. Expected a fully-qualified "
            "dataset ID in standard SQL format. e.g. "
            '"project.dataset_id", got {}'.format(dataset_id)
        )
    return cls(output_project_id, output_dataset_id)
Construct a dataset reference from dataset ID string. Args: dataset_id (str): A dataset ID in standard SQL format. If ``default_project`` is not specified, this must included both the project ID and the dataset ID, separated by ``.``. default_project (str): Optional. The project ID to use when ``dataset_id`` does not include a project ID. Returns: DatasetReference: Dataset reference parsed from ``dataset_id``. Examples: >>> DatasetReference.from_string('my-project-id.some_dataset') DatasetReference('my-project-id', 'some_dataset') Raises: ValueError: If ``dataset_id`` is not a fully-qualified dataset ID in standard SQL format.
def _range_check(self, value, min_value, max_value):
    if value < min_value or value > max_value:
        raise ValueError('%s out of range - %s is not between %s and %s'
                         % (self.__class__.__name__, value, min_value, max_value))
Utility method to check that the given value is between min_value and max_value.
def createDocument(self, initDict=None):
    if initDict is not None:
        return self.createDocument_(initDict)
    if self._validation["on_load"]:
        self._validation["on_load"] = False
        doc = self.createDocument_(self.defaultDocument)
        self._validation["on_load"] = True
        return doc
    else:
        return self.createDocument_(self.defaultDocument)
create and returns a document populated with the defaults or with the values in initDict
def categories_percent(s, categories):
    count = 0
    s = to_unicode(s, precise=True)
    for c in s:
        if unicodedata.category(c) in categories:
            count += 1
    return 100 * float(count) / len(s) if len(s) else 0
Returns category characters percent. >>> categories_percent("qqq ggg hhh", ["Po"]) 0.0 >>> categories_percent("q,w.", ["Po"]) 50.0 >>> categories_percent("qqq ggg hhh", ["Nd"]) 0.0 >>> categories_percent("q5", ["Nd"]) 50.0 >>> categories_percent("s.s,5s", ["Po", "Nd"]) 50.0
def send_vdp_vnic_down(self, port_uuid=None, vsiid=None, mgrid=None,
                       typeid=None, typeid_ver=None,
                       vsiid_frmt=vdp_const.VDP_VSIFRMT_UUID,
                       filter_frmt=vdp_const.VDP_FILTER_GIDMACVID,
                       gid=0, mac="", vlan=0, oui=""):
    try:
        with self.mutex_lock:
            self.send_vdp_deassoc(vsiid=vsiid, mgrid=mgrid, typeid=typeid,
                                  typeid_ver=typeid_ver, vsiid_frmt=vsiid_frmt,
                                  filter_frmt=filter_frmt, gid=gid, mac=mac,
                                  vlan=vlan)
            self.clear_vdp_vsi(port_uuid)
    except Exception as e:
        LOG.error("VNIC Down exception %s", e)
Interface function to apps, called for a vNIC DOWN. This currently sends an VDP dis-associate message. Please refer http://www.ieee802.org/1/pages/802.1bg.html VDP Section for more detailed information :param port_uuid: uuid of the vNIC :param vsiid: VSI value, Only UUID supported for now :param mgrid: MGR ID :param typeid: Type ID :param typeid_ver: Version of the Type ID :param vsiid_frmt: Format of the following VSI argument :param filter_frmt: Filter Format. Only <GID,MAC,VID> supported for now :param gid: Group ID the vNIC belongs to :param mac: MAC Address of the vNIC :param vlan: VLAN of the vNIC :param oui: OUI Type
def UpsertUserDefinedFunction(self, collection_link, udf, options=None):
    if options is None:
        options = {}
    collection_id, path, udf = self._GetContainerIdWithPathForUDF(collection_link, udf)
    return self.Upsert(udf, path, 'udfs', collection_id, None, options)
Upserts a user defined function in a collection. :param str collection_link: The link to the collection. :param str udf: :param dict options: The request options for the request. :return: The upserted UDF. :rtype: dict
def json(self):
    try:
        return self._json
    except AttributeError:
        try:
            self._json = json.loads(self.text)
            return self._json
        except ValueError:
            raise RequestInvalidJSON("Invalid JSON received")
Return an object representing the return json for the request
def parse_sql(self, asql):
    import sqlparse
    statements = sqlparse.parse(sqlparse.format(asql, strip_comments=True))
    parsed_statements = []
    for statement in statements:
        statement_str = statement.to_unicode().strip()
        for preprocessor in self._backend.sql_processors():
            statement_str = preprocessor(statement_str, self._library,
                                         self._backend, self.connection)
        parsed_statements.append(statement_str)
    return parsed_statements
Executes all sql statements from asql. Args: library (library.Library): asql (str): ambry sql query - see https://github.com/CivicKnowledge/ambry/issues/140 for details.
def snow_depth(self, value=999.0):
    if value is not None:
        try:
            value = float(value)
        except ValueError:
            raise ValueError('value {} needs to be of type float '
                             'for field `snow_depth`'.format(value))
    self._snow_depth = value
Corresponds to IDD Field `snow_depth` Args: value (float): value for IDD Field `snow_depth` Unit: cm Missing value: 999.0 if `value` is None it will not be checked against the specification and is assumed to be a missing value Raises: ValueError: if `value` is not a valid value
def shuffle_columns(a):
    # range() must be materialized as a list to be shuffled in place.
    mask = list(range(a.text_size))
    random.shuffle(mask)
    for c in a.components:
        c.text = ''.join([c.text[i] for i in mask])
Randomize the columns of an alignment
def write(self, group_id, handle):
    name = self.name.encode('utf-8')
    handle.write(struct.pack('bb', len(name), group_id))
    handle.write(name)
    handle.write(struct.pack('<h', self.binary_size() - 2 - len(name)))
    handle.write(struct.pack('b', self.bytes_per_element))
    handle.write(struct.pack('B', len(self.dimensions)))
    handle.write(struct.pack('B' * len(self.dimensions), *self.dimensions))
    if self.bytes:
        handle.write(self.bytes)
    desc = self.desc.encode('utf-8')
    handle.write(struct.pack('B', len(desc)))
    handle.write(desc)
Write binary data for this parameter to a file handle. Parameters ---------- group_id : int The numerical ID of the group that holds this parameter. handle : file handle An open, writable, binary file handle.
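The length-prefixed `struct.pack` pattern above can be shown in miniature. This hypothetical record layout (a one-byte name length, the UTF-8 name, then a little-endian int16 payload size and the payload) is an illustration, not the C3D parameter format itself:

```python
import io
import struct

def write_record(handle, name, payload):
    """Write a length-prefixed name followed by a size-prefixed payload."""
    encoded = name.encode('utf-8')
    handle.write(struct.pack('b', len(encoded)))   # 1-byte name length
    handle.write(encoded)                          # the name itself
    handle.write(struct.pack('<h', len(payload)))  # little-endian int16 size
    handle.write(payload)

buf = io.BytesIO()
write_record(buf, 'POINT', b'\x01\x02')
print(buf.getvalue())
```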
def _waitForIP(cls, instance):
    logger.debug('Waiting for ip...')
    while True:
        time.sleep(a_short_time)
        instance.update()
        if instance.ip_address or instance.public_dns_name or instance.private_ip_address:
            logger.debug('...got ip')
            break
Wait until the instances has a public IP address assigned to it. :type instance: boto.ec2.instance.Instance
def single(C, namespace=None):
    if namespace is None:
        B = C()._
    else:
        B = C(default=namespace, _=namespace)._
    return B
An element maker with a single namespace that uses that namespace as the default
def get_all_users(self, path_prefix='/', marker=None, max_items=None):
    params = {'PathPrefix': path_prefix}
    if marker:
        params['Marker'] = marker
    if max_items:
        params['MaxItems'] = max_items
    return self.get_response('ListUsers', params, list_marker='Users')
List the users that have the specified path prefix. :type path_prefix: string :param path_prefix: If provided, only users whose paths match the provided prefix will be returned. :type marker: string :param marker: Use this only when paginating results and only in follow-up request after you've received a response where the results are truncated. Set this to the value of the Marker element in the response you just received. :type max_items: int :param max_items: Use this only when paginating results to indicate the maximum number of groups you want in the response.
def explain_weights_lightning(estimator, vec=None, top=20, target_names=None,
                              targets=None, feature_names=None,
                              coef_scale=None):
    return explain_weights_lightning_not_supported(estimator)
Return an explanation of a lightning estimator weights
def wait_for_thrift_interface(self, **kwargs):
    if self.cluster.version() >= '4':
        return
    self.watch_log_for("Listening for thrift clients...", **kwargs)
    thrift_itf = self.network_interfaces['thrift']
    if not common.check_socket_listening(thrift_itf, timeout=30):
        warnings.warn("Thrift interface {}:{} is not listening after 30 seconds, "
                      "node may have failed to start.".format(thrift_itf[0], thrift_itf[1]))
Waits for the Thrift interface to be listening. Emits a warning if not listening after 30 seconds.
def get_encoding(headers, content):
    encoding = None
    content_type = headers.get('content-type')
    if content_type:
        _, params = cgi.parse_header(content_type)
        if 'charset' in params:
            encoding = params['charset'].strip("'\"")
    if not encoding:
        content = utils.pretty_unicode(content[:1000]) if six.PY3 else content
        charset_re = re.compile(r'<meta.*?charset=["\']*(.+?)["\'>]', flags=re.I)
        pragma_re = re.compile(r'<meta.*?content=["\']*;?charset=(.+?)["\'>]', flags=re.I)
        xml_re = re.compile(r'^<\?xml.*?encoding=["\']*(.+?)["\'>]')
        encoding = (charset_re.findall(content)
                    + pragma_re.findall(content)
                    + xml_re.findall(content))
        encoding = encoding and encoding[0] or None
    return encoding
Get encoding from request headers or page head.
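The `<meta charset=...>` fallback can be isolated into a tiny helper. This sketch covers only the charset regex (in the full function the content-type header takes priority, and pragma and XML declarations are also tried); the helper name is an assumption:

```python
import re

def sniff_charset(html):
    """Pull a charset out of a <meta> tag, or return None if absent."""
    charset_re = re.compile(r'<meta.*?charset=["\']*(.+?)["\'>]', flags=re.I)
    found = charset_re.findall(html)
    return found[0] if found else None

print(sniff_charset('<head><meta charset="utf-8"></head>'))
```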
def conv_stride2_multistep(x, nbr_steps, output_filters, name=None, reuse=None):
    with tf.variable_scope(
            name, default_name="conv_stride2_multistep", values=[x], reuse=reuse):
        if nbr_steps == 0:
            out = conv(x, output_filters, (1, 1))
            return out, [out]
        hidden_layers = [x]
        for i in range(nbr_steps):
            hidden_layers.append(
                conv(hidden_layers[-1], output_filters, (2, 2), strides=2,
                     activation=tf.nn.relu, name="conv" + str(i)))
        return hidden_layers[-1], hidden_layers
Use a strided convolution to downsample x by 2, `nbr_steps` times. We use stride and filter size 2 to avoid the checkerboard problem of deconvs. As detailed in http://distill.pub/2016/deconv-checkerboard/. Args: x: a `Tensor` with shape `[batch, spatial, depth]` or `[batch, spatial_1, spatial_2, depth]` nbr_steps: number of halving downsample rounds to apply output_filters: an int specifying the filter count for the convolutions name: a string reuse: a boolean Returns: a `Tensor` with shape `[batch, spatial / (2**nbr_steps), output_filters]` or `[batch, spatial_1 / (2**nbr_steps), spatial_2 / (2**nbr_steps), output_filters]`
def draw_dot(self, pos, color):
    if 0 <= pos[0] < self.width and 0 <= pos[1] < self.height:
        self.matrix[pos[0]][pos[1]] = color
Draw one single dot with the given color on the screen. :param pos: Position of the dot :param color: Color for the dot :type pos: tuple :type color: tuple
def garbage_graph(index):
    graph = _compute_garbage_graphs()[int(index)]
    reduce_graph = bottle.request.GET.get('reduce', '')
    if reduce_graph:
        graph = graph.reduce_to_cycles()
    if not graph:
        return None
    filename = 'garbage%so%s.png' % (index, reduce_graph)
    rendered_file = _get_graph(graph, filename)
    if rendered_file:
        bottle.send_file(rendered_file, root=server.tmpdir)
    else:
        return None
Get graph representation of reference cycle.
def _get_parsed_url(url):
    try:
        parsed = urllib3_parse(url)
    except ValueError:
        scheme, _, url = url.partition("://")
        auth, _, url = url.rpartition("@")
        url = "{scheme}://{url}".format(scheme=scheme, url=url)
        parsed = urllib3_parse(url)._replace(auth=auth)
    return parsed
This is a stand-in function for `urllib3.util.parse_url` The original function doesn't handle special characters very well, this simply splits out the authentication section, creates the parsed url, then puts the authentication section back in, bypassing validation. :return: The new, parsed URL object :rtype: :class:`~urllib3.util.url.Url`
def _reversePoints(points):
    points = _copyPoints(points)
    firstOnCurve = None
    for index, point in enumerate(points):
        if point.segmentType is not None:
            firstOnCurve = index
            break
    lastSegmentType = points[firstOnCurve].segmentType
    points = reversed(points)
    final = []
    for point in points:
        segmentType = point.segmentType
        if segmentType is not None:
            point.segmentType = lastSegmentType
            lastSegmentType = segmentType
        final.append(point)
    _prepPointsForSegments(final)
    return final
Reverse the points. This differs from the reversal point pen in RoboFab in that it doesn't worry about maintaining the start point position. That has no benefit within the context of this module.
def login(config, username=None, password=None, email=None, url=None,
          client=None, *args, **kwargs):
    try:
        c = _get_client(config) if not client else client
        lg = c.login(username, password, email, url)
        print("%s logged to %s" % (username, url if url else "default hub"))
    except Exception as e:
        utils.error("%s can't login to repo %s: %s"
                    % (username, url if url else "default repo", e))
        return False
    return True
Wrapper to the docker.py login method
def profile_form_factory():
    if current_app.config['USERPROFILES_EMAIL_ENABLED']:
        return EmailProfileForm(
            formdata=None,
            username=current_userprofile.username,
            full_name=current_userprofile.full_name,
            email=current_user.email,
            email_repeat=current_user.email,
            prefix='profile',
        )
    else:
        return ProfileForm(
            formdata=None,
            obj=current_userprofile,
            prefix='profile',
        )
Create a profile form.
def fwriter(filename, gz=False, bz=False):
    if filename.endswith('.gz'):
        gz = True
    elif filename.endswith('.bz2'):
        bz = True
    if gz:
        if not filename.endswith('.gz'):
            filename += '.gz'
        return gzip.open(filename, 'wb')
    elif bz:
        if not filename.endswith('.bz2'):
            filename += '.bz2'
        return bz2.BZ2File(filename, 'w')
    else:
        return open(filename, 'w')
Returns a filewriter object that can write plain or gzipped output. If gzip or bzip2 compression is asked for then the usual filename extension will be added.
def _pop_digits(char_list):
    logger.debug('_pop_digits(%s)', char_list)
    digits = []
    while len(char_list) != 0 and char_list[0].isdigit():
        digits.append(char_list.pop(0))
    logger.debug('got digits: %s', digits)
    logger.debug('updated char list: %s', char_list)
    return digits
Pop consecutive digits from the front of list and return them Pops any and all consecutive digits from the start of the provided character list and returns them as a list of string digits. Operates on (and possibly alters) the passed list. :param list char_list: a list of characters :return: a list of string digits :rtype: list
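A logging-free sketch of the same helper, showing the destructive behavior on the passed list (the name `pop_digits` is chosen here to avoid shadowing the original):

```python
def pop_digits(char_list):
    """Destructively pop leading digit characters off a list and return them."""
    digits = []
    while char_list and char_list[0].isdigit():
        digits.append(char_list.pop(0))
    return digits

chars = list("42abc7")
print(pop_digits(chars), chars)
```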
def update_edge_todo(self, elev_fn, dem_proc):
    for key in self.edges[elev_fn].keys():
        self.edges[elev_fn][key].set_data('todo', data=dem_proc.edge_todo)
Update the todo flags of each edge based on the elevation filename.
def dump(data, out, ac_parser=None, **options):
    ioi = anyconfig.ioinfo.make(out)
    psr = find(ioi, forced_type=ac_parser)
    LOGGER.info("Dumping: %s", ioi.path)
    psr.dump(data, ioi, **options)
Save 'data' to 'out'. :param data: A mapping object may have configurations data to dump :param out: An output file path, a file, a file-like object, :class:`pathlib.Path` object represents the file or a namedtuple 'anyconfig.globals.IOInfo' object represents output to dump some data to. :param ac_parser: Forced parser type or parser object :param options: Backend specific optional arguments, e.g. {"indent": 2} for JSON loader/dumper backend :raises: ValueError, UnknownProcessorTypeError, UnknownFileTypeError
def initialize_dbs(settings):
    global _MAIN_SETTINGS, _MAIN_SITEURL, _MAIN_LANG, _SUBSITE_QUEUE
    _MAIN_SETTINGS = settings
    _MAIN_LANG = settings['DEFAULT_LANG']
    _MAIN_SITEURL = settings['SITEURL']
    _SUBSITE_QUEUE = settings.get('I18N_SUBSITES', {}).copy()
    prepare_site_db_and_overrides()
    _SITES_RELPATH_DB.clear()
    _NATIVE_CONTENT_URL_DB.clear()
    _GENERATOR_DB.clear()
Initialize internal DBs using the Pelican settings dict This clears the DBs for e.g. autoreload mode to work
def watched(self, option):
    params = join_params(self.parameters, {"watched": option})
    return self.__class__(**params)
Set whether to filter by a user's watchlist. Options available are user.ONLY, user.NOT, and None; default is None.
def _check_r(self, r):
    # Check unit length of each column, then pairwise orthogonality.
    if abs(np.dot(r[:, 0], r[:, 0]) - 1) > eps or \
       abs(np.dot(r[:, 1], r[:, 1]) - 1) > eps or \
       abs(np.dot(r[:, 2], r[:, 2]) - 1) > eps or \
       np.dot(r[:, 0], r[:, 1]) > eps or \
       np.dot(r[:, 1], r[:, 2]) > eps or \
       np.dot(r[:, 2], r[:, 0]) > eps:
        raise ValueError("The rotation matrix is significantly non-orthonormal.")
Check that the columns of the matrix are orthonormal.
def ads_use_dev_spaces(cluster_name, resource_group_name, update=False,
                       space_name=None, do_not_prompt=False):
    azds_cli = _install_dev_spaces_cli(update)
    use_command_arguments = [azds_cli, 'use', '--name', cluster_name,
                             '--resource-group', resource_group_name]
    if space_name is not None:
        use_command_arguments.append('--space')
        use_command_arguments.append(space_name)
    if do_not_prompt:
        use_command_arguments.append('-y')
    subprocess.call(use_command_arguments, universal_newlines=True)
Use Azure Dev Spaces with a managed Kubernetes cluster. :param cluster_name: Name of the managed cluster. :type cluster_name: String :param resource_group_name: Name of resource group. You can configure the default group using 'az configure --defaults group=<name>'. :type resource_group_name: String :param update: Update to the latest Azure Dev Spaces client components. :type update: bool :param space_name: Name of the new or existing dev space to select. Defaults to an interactive selection experience. :type space_name: String :param do_not_prompt: Do not prompt for confirmation. Requires --space. :type do_not_prompt: bool
def _populate_audio_file(self): self.log(u"Populate audio file...") if self.audio_file_path_absolute is not None: self.log([u"audio_file_path_absolute is '%s'", self.audio_file_path_absolute]) self.audio_file = AudioFile( file_path=self.audio_file_path_absolute, logger=self.logger ) self.audio_file.read_properties() else: self.log(u"audio_file_path_absolute is None") self.log(u"Populate audio file... done")
Create the ``self.audio_file`` object by reading the audio file at ``self.audio_file_path_absolute``.
def skip(self): buflen = len(self.buf) while True: self.buf = self.buf.lstrip() if self.buf == '': self.readline() buflen = len(self.buf) else: self.offset += (buflen - len(self.buf)) break if not self.keep_comments: if self.buf[0] == '/': if self.buf[1] == '/': self.readline() return self.skip() elif self.buf[1] == '*': i = self.buf.find('*/') while i == -1: self.readline() i = self.buf.find('*/') self.set_buf(i+2) return self.skip()
Skip whitespace and comments (unless ``keep_comments`` is set), advancing the offset.
def mean_squared_error(data, ground_truth, mask=None, normalized=False, force_lower_is_better=True): if not hasattr(data, 'space'): data = odl.vector(data) space = data.space ground_truth = space.element(ground_truth) l2norm = odl.solvers.L2Norm(space) if mask is not None: data = data * mask ground_truth = ground_truth * mask diff = data - ground_truth fom = l2norm(diff) ** 2 if normalized: fom /= (l2norm(data) + l2norm(ground_truth)) ** 2 else: fom /= l2norm(space.one()) ** 2 return fom
r"""Return mean squared L2 distance between ``data`` and ``ground_truth``. See also `this Wikipedia article <https://en.wikipedia.org/wiki/Mean_squared_error>`_. Parameters ---------- data : `Tensor` or `array-like` Input data to compare to the ground truth. If not a `Tensor`, an unweighted tensor space will be assumed. ground_truth : `array-like` Reference to which ``data`` should be compared. mask : `array-like`, optional If given, ``data * mask`` is compared to ``ground_truth * mask``. normalized : bool, optional If ``True``, the output values are mapped to the interval :math:`[0, 1]` (see `Notes` for details). force_lower_is_better : bool, optional If ``True``, it is ensured that lower values correspond to better matches. For the mean squared error, this is already the case, and the flag is only present for compatibility to other figures of merit. Returns ------- mse : float FOM value, where a lower value means a better match. Notes ----- The FOM evaluates .. math:: \mathrm{MSE}(f, g) = \frac{\| f - g \|_2^2}{\| 1 \|_2^2}, where :math:`\| 1 \|^2_2` is the volume of the domain of definition of the functions. For :math:`\mathbb{R}^n` type spaces, this is equal to the number of elements :math:`n`. The normalized form is .. math:: \mathrm{MSE_N} = \frac{\| f - g \|_2^2}{(\| f \|_2 + \| g \|_2)^2}. The normalized variant takes values in :math:`[0, 1]`.
def summary(self, title=None, complexity=False): if title is None: return javabridge.call( self.jobject, "toSummaryString", "()Ljava/lang/String;") else: return javabridge.call( self.jobject, "toSummaryString", "(Ljava/lang/String;Z)Ljava/lang/String;", title, complexity)
Generates a summary. :param title: optional title :type title: str :param complexity: whether to print the complexity information as well :type complexity: bool :return: the summary :rtype: str
def find_course_by_crn(self, crn): for name, course in self.courses.iteritems(): if crn in course: return course return None
Searches all courses by CRNs. Not particularly efficient. Returns None if not found.
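The lookup is a plain linear scan with a membership test per course, which is why the docstring flags it as inefficient. A self-contained sketch (the data shapes below are assumptions; in the original, the courses mapping lives on `self.courses`):

```python
def find_course_by_crn(courses, crn):
    # Linear scan: each course is checked for CRN membership.
    for course in courses.values():
        if crn in course:
            return course
    return None

# Hypothetical data: each course maps to a set of section CRNs.
courses = {
    "MATH 101": {"12345", "12346"},
    "CS 200": {"54321"},
}
```

For repeated lookups, inverting this into a CRN-to-course dict would make the search O(1).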
def normalizer(text, exclusion=OPERATIONS_EXCLUSION, lower=True, separate_char='-', **kwargs): clean_str = re.sub(r'[^\w{}]'.format( "".join(exclusion)), separate_char, text.strip()) or '' clean_lowerbar = clean_str_without_accents = strip_accents(clean_str) if '_' not in exclusion: clean_lowerbar = re.sub(r'\_', separate_char, clean_str_without_accents.strip()) limit_guion = re.sub(r'\-+', separate_char, clean_lowerbar.strip()) if limit_guion and separate_char and separate_char in limit_guion[0]: limit_guion = limit_guion[1:] if limit_guion and separate_char and separate_char in limit_guion[-1]: limit_guion = limit_guion[:-1] if lower: limit_guion = limit_guion.lower() return limit_guion
Clean a text string of symbols, keeping only alphanumeric characters.
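The core of such a normalizer is: strip accents, replace symbols with a separator, collapse separator runs, and trim. A minimal sketch under those assumptions (the NFKD-based `strip_accents` is a guess at the helper's behavior, and `slugify` is a simplified stand-in for `normalizer`):

```python
import re
import unicodedata

def strip_accents(text):
    # Decompose accented characters and drop the combining marks.
    decomposed = unicodedata.normalize("NFKD", text)
    return "".join(c for c in decomposed if not unicodedata.combining(c))

def slugify(text, sep="-"):
    clean = strip_accents(text.strip())
    clean = re.sub(r"[^\w]", sep, clean)            # non-word chars -> separator
    clean = re.sub(re.escape(sep) + "+", sep, clean)  # collapse separator runs
    return clean.strip(sep).lower()
```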
def reroot(self, s): o_s1 = self.first_lookup[s] splice1 = self.tour[1:o_s1] rest = self.tour[o_s1 + 1:] new_tour = [s] + rest + splice1 + [s] new_tree = TestETT.from_tour(new_tour, fast=self.fast) return new_tree
Reroot the tour at ``s`` (e.g. s = 3 or s = 'B'). Let o_s denote any occurrence of s. Splice out the first part of the sequence ending with the occurrence before o_s, remove its first occurrence (the old root's), and tack this on to the end of the sequence, which now begins with o_s. Add a new occurrence o_s to the end.
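On a plain list the splice is just a few slices; a sketch of the rerooting step using the first occurrence of ``s``, as `first_lookup` does in the original:

```python
def reroot_tour(tour, s):
    # Index of the first occurrence of s in the Euler tour.
    o_s = tour.index(s)
    splice = tour[1:o_s]    # prefix after the old root, up to s
    rest = tour[o_s + 1:]   # everything after the first s
    # Start at s, walk the remainder, then the spliced prefix, close at s.
    return [s] + rest + splice + [s]
```

For the tree with edges 1-2 and 1-3, the tour [1, 2, 1, 3, 1] rerooted at 3 becomes [3, 1, 2, 1, 3].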
def _query(self, *criterion): return self.session.query( self.model_class ).filter( *criterion )
Construct a query for the model.
def mount_rate_limit_adapters(cls, session=None, rls_config=None, **kwargs): session = session or HTTP_SESSION if rls_config is None: rls_config = RateLimiter.get_configs() for name, rl_conf in rls_config.items(): urls = rl_conf.get('urls', []) if not urls: continue rl_adapter = RLRequestAdapter(name, config=rls_config, **kwargs) for url in urls: session.mount(url, rl_adapter)
Mount rate-limit adapters on the specified `requests.Session` object.

:param session: :py:class:`requests.Session` to mount the adapters on. If not specified, then use the global `HTTP_SESSION`.

:param dict rls_config: Rate-limits configuration. If not specified, then use the one defined at application level.

:param kwargs: Additional keyword arguments given to the :py:class:`docido_sdk.toolbox.rate_limits.RLRequestAdapter` constructor.
def C_array2dict(C): d = OrderedDict() i = 0 for k in C_keys: s = C_keys_shape[k] if s == 1: j = i + 1 d[k] = C[i] else: j = i + reduce(operator.mul, s, 1) d[k] = C[i:j].reshape(s) i = j return d
Convert a 1D array containing C values to a dictionary.
def read(self): def ds(data_element): value = self._str_filter.ToStringPair(data_element.GetTag()) if value[1]: return DataElement(data_element, value[0].strip(), value[1].strip()) results = [data for data in self.walk(ds) if data is not None] return results
Returns a list of DataElement objects for all the data elements in the DICOM file.
def _recursive_gh_get(href, items): response = _request('GET', href) response.raise_for_status() items.extend(response.json()) if "link" not in response.headers: return links = link_header.parse(response.headers["link"]) rels = {link.rel: link.href for link in links.links} if "next" in rels: _recursive_gh_get(rels["next"], items)
Recursively get list of GitHub objects. See https://developer.github.com/v3/guides/traversing-with-pagination/
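The traversal logic can be exercised without the network by standing in a dict of linked pages for the GitHub API. A sketch of the same follow-the-`next`-link recursion (the page table and keys are illustrative, not GitHub's response format):

```python
def recursive_get(pages, href, items):
    # Fetch one "page", then follow its next link if present.
    page = pages[href]
    items.extend(page["items"])
    next_href = page.get("next")
    if next_href is not None:
        recursive_get(pages, next_href, items)

# Three linked pages standing in for paginated API responses.
PAGES = {
    "/p1": {"items": [1, 2], "next": "/p2"},
    "/p2": {"items": [3], "next": "/p3"},
    "/p3": {"items": [4, 5]},
}

items = []
recursive_get(PAGES, "/p1", items)
```

The accumulator list is threaded through the recursion, just as in the original helper.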
def generateSensorimotorSequence(self, sequenceLength): motorSequence = [] sensorySequence = [] sensorimotorSequence = [] currentEyeLoc = self.nupicRandomChoice(self.spatialConfig) for i in xrange(sequenceLength): currentSensoryInput = self.spatialMap[tuple(currentEyeLoc)] nextEyeLoc, currentEyeV = self.getNextEyeLocation(currentEyeLoc) if self.verbosity: print "sensory input = ", currentSensoryInput, \ "eye location = ", currentEyeLoc, \ " motor command = ", currentEyeV sensoryInput = self.encodeSensoryInput(currentSensoryInput) motorInput = self.encodeMotorInput(list(currentEyeV)) sensorimotorInput = numpy.concatenate((sensoryInput, motorInput)) sensorySequence.append(sensoryInput) motorSequence.append(motorInput) sensorimotorSequence.append(sensorimotorInput) currentEyeLoc = nextEyeLoc return (sensorySequence, motorSequence, sensorimotorSequence)
Generate sensorimotor sequences of length sequenceLength.

@param sequenceLength (int) Length of the sensorimotor sequence.

@return (tuple) Contains:
        sensorySequence       (list) Encoded sensory input for whole sequence.
        motorSequence         (list) Encoded motor input for whole sequence.
        sensorimotorSequence  (list) Encoded sensorimotor input for whole
                                     sequence. This is useful when you want
                                     to give external input to temporal memory.
def to_pandas(self): agedepthdf = pd.DataFrame(self.age, index=self.data.depth) agedepthdf.columns = list(range(self.n_members())) out = (agedepthdf.join(self.data.set_index('depth')) .reset_index() .melt(id_vars=self.data.columns.values, var_name='mciter', value_name='age')) out['mciter'] = pd.to_numeric(out.loc[:, 'mciter']) if self.n_members() == 1: out = out.drop('mciter', axis=1) return out
Convert record to pandas.DataFrame
def _get_entity(service_instance, entity): log.trace('Retrieving entity: %s', entity) if entity['type'] == 'cluster': dc_ref = salt.utils.vmware.get_datacenter(service_instance, entity['datacenter']) return salt.utils.vmware.get_cluster(dc_ref, entity['cluster']) elif entity['type'] == 'vcenter': return None raise ArgumentValueError('Unsupported entity type \'{0}\'' ''.format(entity['type']))
Returns the entity associated with the entity dict representation Supported entities: cluster, vcenter Expected entity format: .. code-block:: python cluster: {'type': 'cluster', 'datacenter': <datacenter_name>, 'cluster': <cluster_name>} vcenter: {'type': 'vcenter'} service_instance Service instance (vim.ServiceInstance) of the vCenter. entity Entity dict in the format above
def send(self, channel, payload): with track('send_channel=' + channel): with track('create event'): Event.objects.create( group=self, channel=channel, value=payload) ChannelGroup(str(self.pk)).send( {'text': json.dumps({ 'channel': channel, 'payload': payload })})
Send a message with the given payload on the given channel. Messages are broadcast to all players in the group.
def check() -> Result: try: with Connection(conf.get('CELERY_BROKER_URL')) as conn: conn.connect() except ConnectionRefusedError: return Result(message='Service unable to connect, "Connection was refused".', severity=Result.ERROR) except AccessRefused: return Result(message='Service unable to connect, "Authentication error".', severity=Result.ERROR) except IOError: return Result(message='Service has an "IOError".', severity=Result.ERROR) except Exception as e: return Result(message='Service has an "{}" error.'.format(e), severity=Result.ERROR) else: return Result()
Open and close a connection to the broker to verify it is reachable.
def collapse_nested(self, cats, max_nestedness=10): children = [] removed = set() nestedness = max_nestedness old = list(self.widget.options.values()) nested = [cat for cat in old if getattr(cat, 'cat') is not None] parents = {cat.cat for cat in nested} parents_to_remove = cats while len(parents_to_remove) > 0 and nestedness > 0: for cat in nested: if cat.cat in parents_to_remove: children.append(cat) removed = removed.union(parents_to_remove) nested = [cat for cat in nested if cat not in children] parents_to_remove = {c for c in children if c in parents - removed} nestedness -= 1 self.remove(children)
Collapse any items that are nested under cats. `max_nestedness` acts as a fail-safe to prevent infinite looping.
def accept_freeware_license(): ntab = 3 if version().startswith('6.6.') else 2 for _ in range(ntab): EasyProcess('xdotool key KP_Tab').call() time.sleep(0.5) EasyProcess('xdotool key KP_Space').call() time.sleep(0.5) EasyProcess('xdotool key KP_Space').call()
Different Eagle versions need a different TAB count:

6.5 -> 2
6.6 -> 3
7.4 -> 2
async def is_ready(self): async def slave_task(addr, timeout): try: r_manager = await self.env.connect(addr, timeout=timeout) ready = await r_manager.is_ready() if not ready: return False except Exception: return False return True if not self.env.is_ready(): return False if not self.check_ready(): return False rets = await create_tasks(slave_task, self.addrs, 0.5) if not all(rets): return False return True
Check if the multi-environment has been fully initialized. This calls each slave environment managers' :py:meth:`is_ready` and checks if the multi-environment itself is ready by calling :py:meth:`~creamas.mp.MultiEnvironment.check_ready`. .. seealso:: :py:meth:`creamas.core.environment.Environment.is_ready`
def live_dirs(self): if self.has_results_dir: yield self.results_dir yield self.current_results_dir if self.has_previous_results_dir: yield self.previous_results_dir
Yields directories that must exist for this VersionedTarget to function.
def query_foursquare(point, max_distance, client_id, client_secret): if not client_id: return [] if not client_secret: return [] if from_cache(FS_CACHE, point, max_distance): return from_cache(FS_CACHE, point, max_distance) url = FOURSQUARE_URL % (client_id, client_secret, point.lat, point.lon, max_distance) req = requests.get(url) if req.status_code != 200: return [] response = req.json() result = [] venues = response['response']['venues'] for venue in venues: name = venue['name'] distance = venue['location']['distance'] categories = [c['shortName'] for c in venue['categories']] result.append({ 'label': name, 'distance': distance, 'types': categories, 'suggestion_type': 'FOURSQUARE' }) foursquare_insert_cache(point, result) return result
Queries the Foursquare API for a location

Args:
    point (:obj:`Point`): Point location to query
    max_distance (float): Search radius, in meters
    client_id (str): Valid Foursquare client id
    client_secret (str): Valid Foursquare client secret
Returns:
    :obj:`list` of :obj:`dict`: List of locations with the following format:
        {
            'label': 'Coffee house',
            'distance': 19,
            'types': 'Commerce',
            'suggestion_type': 'FOURSQUARE'
        }
def wcwidth(wc): ucs = ord(wc) if (ucs == 0 or ucs == 0x034F or 0x200B <= ucs <= 0x200F or ucs == 0x2028 or ucs == 0x2029 or 0x202A <= ucs <= 0x202E or 0x2060 <= ucs <= 0x2063): return 0 if ucs < 32 or 0x07F <= ucs < 0x0A0: return -1 if _bisearch(ucs, ZERO_WIDTH): return 0 return 1 + _bisearch(ucs, WIDE_EASTASIAN)
r""" Given one unicode character, return its printable length on a terminal. The wcwidth() function returns 0 if the wc argument has no printable effect on a terminal (such as NUL '\0'), -1 if wc is not printable, or has an indeterminate effect on the terminal, such as a control character. Otherwise, the number of column positions the character occupies on a graphic terminal (1 or 2) is returned. The following have a column width of -1: - C0 control characters (U+001 through U+01F). - C1 control characters and DEL (U+07F through U+0A0). The following have a column width of 0: - Non-spacing and enclosing combining characters (general category code Mn or Me in the Unicode database). - NULL (U+0000, 0). - COMBINING GRAPHEME JOINER (U+034F). - ZERO WIDTH SPACE (U+200B) through RIGHT-TO-LEFT MARK (U+200F). - LINE SEPERATOR (U+2028) and PARAGRAPH SEPERATOR (U+2029). - LEFT-TO-RIGHT EMBEDDING (U+202A) through RIGHT-TO-LEFT OVERRIDE (U+202E). - WORD JOINER (U+2060) through INVISIBLE SEPARATOR (U+2063). The following have a column width of 1: - SOFT HYPHEN (U+00AD) has a column width of 1. - All remaining characters (including all printable ISO 8859-1 and WGL4 characters, Unicode control characters, etc.) have a column width of 1. The following have a column width of 2: - Spacing characters in the East Asian Wide (W) or East Asian Full-width (F) category as defined in Unicode Technical Report #11 have a column width of 2.
def _get_repo_info(alias, repos_cfg=None, root=None): try: meta = dict((repos_cfg or _get_configured_repos(root=root)).items(alias)) meta['alias'] = alias for key, val in six.iteritems(meta): if val in ['0', '1']: meta[key] = int(meta[key]) == 1 elif val == 'NONE': meta[key] = None return meta except (ValueError, configparser.NoSectionError): return {}
Get one repo meta-data.
def multitaper_cross_spectrum(self, clm, slm, k, convention='power', unit='per_l', **kwargs): return self._multitaper_cross_spectrum(clm, slm, k, convention=convention, unit=unit, **kwargs)
Return the multitaper cross-spectrum estimate and standard error. Usage ----- mtse, sd = x.multitaper_cross_spectrum(clm, slm, k, [convention, unit, lmax, taper_wt, clat, clon, coord_degrees]) Returns ------- mtse : ndarray, shape (lmax-lwin+1) The localized multitaper cross-spectrum estimate, where lmax is the smaller of the two spherical-harmonic bandwidths of clm and slm, and lwin is the spherical-harmonic bandwidth of the localization windows. sd : ndarray, shape (lmax-lwin+1) The standard error of the localized multitaper cross-spectrum estimate. Parameters ---------- clm : SHCoeffs class instance SHCoeffs class instance containing the spherical harmonic coefficients of the first global field to analyze. slm : SHCoeffs class instance SHCoeffs class instance containing the spherical harmonic coefficients of the second global field to analyze. k : int The number of tapers to be utilized in performing the multitaper spectral analysis. convention : str, optional, default = 'power' The type of output spectra: 'power' for power spectra, and 'energy' for energy spectra. unit : str, optional, default = 'per_l' The units of the output spectra. If 'per_l', the spectra contain the total contribution for each spherical harmonic degree l. If 'per_lm', the spectra contain the average contribution for each coefficient at spherical harmonic degree l. lmax : int, optional, default = min(clm.lmax, slm.lmax) The maximum spherical-harmonic degree of the input coefficients to use. taper_wt : ndarray, optional, default = None The weights used in calculating the multitaper cross-spectral estimates and standard error. clat, clon : float, optional, default = 90., 0. Latitude and longitude of the center of the spherical-cap localization windows. coord_degrees : bool, optional, default = True True if clat and clon are in degrees.
def exp_backoff(attempt, cap=3600, base=300): max_attempts = math.log(cap / base, 2) if attempt <= max_attempts: return base * 2 ** attempt return cap
Exponential backoff time
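Concretely, with the defaults above (base 300 s, cap 3600 s) the delay doubles each attempt until it saturates at the cap; a worked example of the resulting schedule:

```python
import math

def exp_backoff(attempt, cap=3600, base=300):
    # Number of doublings before base * 2**attempt would exceed the cap.
    max_attempts = math.log(cap / base, 2)
    if attempt <= max_attempts:
        return base * 2 ** attempt
    return cap

# Delay doubles each retry, then saturates at the cap.
schedule = [exp_backoff(n) for n in range(6)]
```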
def create_boots_layer(aspect, ip): layer = [] if 'BOOTS' in aspect: layer = pgnreader.parse_pagan_file(FILE_BOOTS, ip, invert=False, sym=True) return layer
Reads the BOOTS.pgn file and creates the boots layer.
def make_instance(cls, id, client, parent_id=None, json=None): make_cls = CLASS_MAP.get(id) if make_cls is None: return None real_json = json['data'] real_id = real_json['id'] return Base.make(real_id, client, make_cls, parent_id=parent_id, json=real_json)
Overrides Base's ``make_instance`` to allow dynamic creation of objects based on the defined type in the response json. :param cls: The class this was called on :param id: The id of the instance to create :param client: The client to use for this instance :param parent_id: The parent id for derived classes :param json: The JSON to populate the instance with :returns: A new instance of this type, populated with json
def _parse_cod_segment(cls, fptr): offset = fptr.tell() - 2 read_buffer = fptr.read(2) length, = struct.unpack('>H', read_buffer) read_buffer = fptr.read(length - 2) lst = struct.unpack_from('>BBHBBBBBB', read_buffer, offset=0) scod, prog, nlayers, mct, nr, xcb, ycb, cstyle, xform = lst if len(read_buffer) > 10: precinct_size = _parse_precinct_size(read_buffer[10:]) else: precinct_size = None sop = (scod & 2) > 0 eph = (scod & 4) > 0 if sop or eph: cls._parse_tpart_flag = True else: cls._parse_tpart_flag = False pargs = (scod, prog, nlayers, mct, nr, xcb, ycb, cstyle, xform, precinct_size) return CODsegment(*pargs, length=length, offset=offset)
Parse the COD segment. Parameters ---------- fptr : file Open file object. Returns ------- CODSegment The current COD segment.
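The marker-segment idiom used here — a 2-byte big-endian length that counts itself, followed by fixed-width fields — is easy to exercise with `struct` on an in-memory buffer. A simplified sketch with a shorter field set than the real COD segment (the layout and values are illustrative):

```python
import io
import struct

def parse_segment(fptr):
    # A 2-byte big-endian length, counted including itself, then the payload.
    (length,) = struct.unpack(">H", fptr.read(2))
    payload = fptr.read(length - 2)
    # The payload unpacks as fixed-width big-endian fields.
    fields = struct.unpack_from(">BBHBBBBB", payload, offset=0)
    return length, fields

# Build a synthetic segment: length 11 = 2 (length field) + 9 (payload bytes).
buf = struct.pack(">HBBHBBBBB", 11, 1, 0, 3, 1, 5, 4, 4, 0)
length, fields = parse_segment(io.BytesIO(buf))
```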
def _get_auth(self, force_console=False): if not self.target: raise ValueError("Unspecified target ({!r})".format(self.target)) elif not force_console and self.URL_RE.match(self.target): auth_url = urlparse(self.target) source = 'url' if auth_url.username: self.user = auth_url.username if auth_url.password: self.password = auth_url.password if not self.auth_valid(): source = self._get_auth_from_keyring() if not self.auth_valid(): source = self._get_auth_from_netrc(auth_url.hostname) if not self.auth_valid(): source = self._get_auth_from_console(self.target) else: source = self._get_auth_from_console(self.target) if self.auth_valid(): self.source = source
Try to get login auth from known sources.
def add_chunk(self, chunk_obj): if chunk_obj.get_id() in self.idx: raise ValueError("Chunk with id {} already exists!" .format(chunk_obj.get_id())) self.node.append(chunk_obj.get_node()) self.idx[chunk_obj.get_id()] = chunk_obj
Adds a chunk object to the layer @type chunk_obj: L{Cchunk} @param chunk_obj: the chunk object
def get_model_choices(): result = [] for ct in ContentType.objects.order_by('app_label', 'model'): try: if issubclass(ct.model_class(), TranslatableModel): result.append( ('{} - {}'.format(ct.app_label, ct.model.lower()), '{} - {}'.format(ct.app_label.capitalize(), ct.model_class()._meta.verbose_name_plural)) ) except TypeError: continue return result
Get the select options for the model selector.

:return: list of ``(value, label)`` choice tuples for translatable models
def remove_datastore(datastore, service_instance=None): log.trace('Removing datastore \'%s\'', datastore) target = _get_proxy_target(service_instance) datastores = salt.utils.vmware.get_datastores( service_instance, reference=target, datastore_names=[datastore]) if not datastores: raise VMwareObjectRetrievalError( 'Datastore \'{0}\' was not found'.format(datastore)) if len(datastores) > 1: raise VMwareObjectRetrievalError( 'Multiple datastores \'{0}\' were found'.format(datastore)) salt.utils.vmware.remove_datastore(service_instance, datastores[0]) return True
Removes a datastore. If multiple datastores match, an error is raised.

datastore
    Datastore name

service_instance
    Service instance (vim.ServiceInstance) of the vCenter/ESXi host.
    Default is None.

.. code-block:: bash

    salt '*' vsphere.remove_datastore ds_name
def get_stoplist(language): file_path = os.path.join("stoplists", "%s.txt" % language) try: stopwords = pkgutil.get_data("justext", file_path) except IOError: raise ValueError( "Stoplist for language '%s' is missing. " "Please use function 'get_stoplists' for complete list of stoplists " "and feel free to contribute by your own stoplist." % language ) return frozenset(w.decode("utf8").lower() for w in stopwords.splitlines())
Returns a built-in stop-list for the language as a set of words.
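The last line of `get_stoplist` is doing the real work: the raw bytes are split per line, decoded, lowercased, and frozen into a set. A sketch of just that step on an in-memory byte string:

```python
def parse_stoplist(raw):
    # One word per line; decode, lowercase, and deduplicate.
    return frozenset(w.decode("utf8").lower() for w in raw.splitlines())

stopwords = parse_stoplist(b"The\nand\nOF\n")
```

A `frozenset` makes membership tests O(1) and signals that the stop-list is immutable.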
def _create_stdout_logger(logging_level): out_hdlr = logging.StreamHandler(sys.stdout) out_hdlr.setFormatter(logging.Formatter( '[%(asctime)s] %(message)s', "%H:%M:%S" )) out_hdlr.setLevel(logging_level) for name in LOGGING_NAMES: log = logging.getLogger(name) log.addHandler(out_hdlr) log.setLevel(logging_level)
Create a logger to stdout. This creates a logger for a series of modules we would like to log information on.
def get_last_doc(self): def docs_by_ts(): for meta_collection_name in self._meta_collections(): meta_coll = self.meta_database[meta_collection_name] for ts_ns_doc in meta_coll.find(limit=-1).sort("_ts", -1): yield ts_ns_doc return max(docs_by_ts(), key=lambda x: x["_ts"])
Returns the last document stored in Mongo.
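The pattern here is a two-level `max` by timestamp: each collection yields its newest candidate, and the overall maximum wins. A dependency-free sketch with plain lists standing in for the Mongo collections (names and document shapes are illustrative):

```python
def last_doc(collections):
    # Each collection contributes its own newest document...
    def newest_per_collection():
        for docs in collections:
            yield max(docs, key=lambda d: d["_ts"])
    # ...and the overall winner is the max over those candidates.
    return max(newest_per_collection(), key=lambda d: d["_ts"])

colls = [
    [{"_id": "a", "_ts": 1}, {"_id": "b", "_ts": 5}],
    [{"_id": "c", "_ts": 3}],
]
```

In the original, the inner `max` is delegated to Mongo via `find(limit=-1).sort("_ts", -1)`, so only one candidate per collection crosses the wire.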