def _get_flowcell_id(in_file, require_single=True):
    fc_ids = set([x[0] for x in _read_input_csv(in_file)])
    if require_single and len(fc_ids) > 1:
        raise ValueError("There are several FCIDs in the same samplesheet file: %s" % in_file)
    else:
        return fc_ids
Retrieve the unique flowcell id represented in the SampleSheet.
def dynamic_message_event(self, sender, message):
    _ = sender
    self.dynamic_messages.append(message)
    self.dynamic_messages_log.append(message)
    self.show_messages()
    return
Dynamic event handler - set message state based on event. Dynamic messages don't clear the message buffer. :param sender: Unused - the object that sent the message. :type sender: Object, None :param message: A message to show in the viewer. :type message: safe.messaging.Message
def sanitize_label(self, label):
    (module, function, offset) = self.split_label_fuzzy(label)
    label = self.parse_label(module, function, offset)
    return label
Converts a label taken from user input into a well-formed label. @type label: str @param label: Label taken from user input. @rtype: str @return: Sanitized label.
async def subscribe(
    schema: GraphQLSchema,
    document: DocumentNode,
    root_value: Any = None,
    context_value: Any = None,
    variable_values: Dict[str, Any] = None,
    operation_name: str = None,
    field_resolver: GraphQLFieldResolver = None,
    subscribe_field_resolver: GraphQLFieldResolver = None,
) -> Union[AsyncIterator[ExecutionResult], ExecutionResult]:
    try:
        result_or_stream = await create_source_event_stream(
            schema,
            document,
            root_value,
            context_value,
            variable_values,
            operation_name,
            subscribe_field_resolver,
        )
    except GraphQLError as error:
        return ExecutionResult(data=None, errors=[error])
    if isinstance(result_or_stream, ExecutionResult):
        return result_or_stream
    result_or_stream = cast(AsyncIterable, result_or_stream)

    async def map_source_to_response(payload):
        result = execute(
            schema,
            document,
            payload,
            context_value,
            variable_values,
            operation_name,
            field_resolver,
        )
        return await result if isawaitable(result) else result

    return MapAsyncIterator(result_or_stream, map_source_to_response)
Create a GraphQL subscription. Implements the "Subscribe" algorithm described in the GraphQL spec. Returns a coroutine object which yields either an AsyncIterator (if successful) or an ExecutionResult (client error). The coroutine will raise an exception if a server error occurs. If the client-provided arguments to this function do not result in a compliant subscription, a GraphQL Response (ExecutionResult) with descriptive errors and no data will be returned. If the source stream could not be created due to faulty subscription resolver logic or underlying systems, the coroutine object will yield a single ExecutionResult containing `errors` and no `data`. If the operation succeeded, the coroutine will yield an AsyncIterator, which yields a stream of ExecutionResults representing the response stream.
def print_vertical(vertical_rows, labels, color, args):
    if color:
        sys.stdout.write(f'\033[{color}m')
    for row in vertical_rows:
        print(*row)
    sys.stdout.write('\033[0m')
    print("-" * len(row) + "Values" + "-" * len(row))
    # `value_list` is expected to be defined at module level.
    for value in zip_longest(*value_list, fillvalue=' '):
        print(" ".join(value))
    if not args['no_labels']:
        print("-" * len(row) + "Labels" + "-" * len(row))
        for label in zip_longest(*labels, fillvalue=''):
            print(" ".join(label))
Print the whole vertical graph.
def apply_features(body, features):
    lines = [line for line in body.splitlines() if line.strip()]
    last_lines = lines[-SIGNATURE_MAX_LINES:]
    return ([[f(line) for f in features] for line in last_lines]
            or [[0 for f in features]])
Applies features to message body lines. Returns list of lists. Each of the lists corresponds to the body line and is constituted by the numbers of features occurrences (0 or 1). E.g. if element j of list i equals 1 this means that feature j occurred in line i (counting from the last line of the body).
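The shape of the returned matrix is easiest to see with a tiny runnable sketch; the two feature functions and the `SIGNATURE_MAX_LINES` value below are illustrative stand-ins, not the library's own:

```python
# Hedged sketch: hypothetical feature functions and constant.
SIGNATURE_MAX_LINES = 11

def contains_dash(line):
    return 1 if '--' in line else 0

def contains_name(line):
    return 1 if 'john' in line.lower() else 0

def apply_features(body, features):
    # One row per non-empty line among the last SIGNATURE_MAX_LINES,
    # one 0/1 column per feature; a single all-zero row for empty bodies.
    lines = [line for line in body.splitlines() if line.strip()]
    last_lines = lines[-SIGNATURE_MAX_LINES:]
    return ([[f(line) for f in features] for line in last_lines]
            or [[0 for f in features]])

matrix = apply_features("Hello\n--\nJohn", [contains_dash, contains_name])
# matrix is [[0, 0], [1, 0], [0, 1]]
```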
def get_waiting_components(self):
    with self.__instances_lock:
        result = []
        for name, (context, _) in self.__waiting_handlers.items():
            missing = set(context.factory_context.get_handlers_ids())
            missing.difference_update(self._handlers.keys())
            result.append((name, context.factory_context.name, missing))
        result.sort()
        return result
Returns the list of the instances waiting for their handlers :return: A list of (name, factory name, missing handlers) tuples
def norm(self) -> bk.BKTensor:
    return bk.absolute(bk.inner(self.tensor, self.tensor))
Return the norm of this vector
def answer_challenge(authzr, client, responders):
    responder, challb = _find_supported_challenge(authzr, responders)
    response = challb.response(client.key)

    def _stop_responding():
        return maybeDeferred(
            responder.stop_responding,
            authzr.body.identifier.value,
            challb.chall,
            response)

    return (
        maybeDeferred(
            responder.start_responding,
            authzr.body.identifier.value,
            challb.chall,
            response)
        .addCallback(lambda _: client.answer_challenge(challb, response))
        .addCallback(lambda _: _stop_responding)
        )
Complete an authorization using a responder. :param ~acme.messages.AuthorizationResource authzr: The authorization to complete. :param .Client client: The ACME client. :type responders: List[`~txacme.interfaces.IResponder`] :param responders: A list of responders that can be used to complete the challenge. :return: A deferred firing when the authorization is verified.
def _size_36():
    from shutil import get_terminal_size
    dim = get_terminal_size()
    if isinstance(dim, list):
        return dim[0], dim[1]
    return dim.lines, dim.columns
Returns the (rows, columns) of the terminal.
def _get_smallest_dimensions(self, data):
    min_width = 0
    min_height = 0
    for element in self.elements:
        if not element:
            continue
        size = element.get_minimum_size(data)
        min_width = max(min_width, size.x)
        min_height = max(min_height, size.y)
    return datatypes.Point(min_width, min_height)
A utility method to return the minimum size needed to fit all the elements in.
def _process_comparison_filter_directive(filter_operation_info, location,
                                         context, parameters, operator=None):
    comparison_operators = {u'=', u'!=', u'>', u'<', u'>=', u'<='}
    if operator not in comparison_operators:
        raise AssertionError(u'Expected a valid comparison operator ({}), but got '
                             u'{}'.format(comparison_operators, operator))
    filtered_field_type = filter_operation_info.field_type
    filtered_field_name = filter_operation_info.field_name
    argument_inferred_type = strip_non_null_from_type(filtered_field_type)
    argument_expression, non_existence_expression = _represent_argument(
        location, context, parameters[0], argument_inferred_type)
    comparison_expression = expressions.BinaryComposition(
        operator,
        expressions.LocalField(filtered_field_name),
        argument_expression)
    if non_existence_expression is not None:
        final_expression = expressions.BinaryComposition(
            u'||', non_existence_expression, comparison_expression)
    else:
        final_expression = comparison_expression
    return blocks.Filter(final_expression)
Return a Filter basic block that performs the given comparison against the property field. Args: filter_operation_info: FilterOperationInfo object, containing the directive and field info of the field where the filter is to be applied. location: Location where this filter is used. context: dict, various per-compilation data (e.g. declared tags, whether the current block is optional, etc.). May be mutated in-place in this function! parameters: list of 1 element, containing the value to perform the comparison against; if the parameter is optional and missing, the check will return True operator: unicode, a comparison operator, like '=', '!=', '>=' etc. This is a kwarg only to preserve the same positional arguments in the function signature, to ease validation. Returns: a Filter basic block that performs the requested comparison
def set_language(self, language):
    LOGGER.debug("> Setting editor language to '{0}'.".format(language.name))
    self.__language = language or umbra.ui.languages.PYTHON_LANGUAGE
    self.__set_language_description()
    self.language_changed.emit()
    return True
Sets the language. :param language: Language to set. :type language: Language :return: Method success. :rtype: bool
def list_train_dirs(dir_: str, recursive: bool, all_: bool, long: bool, verbose: bool) -> None:
    if verbose:
        long = True
    if dir_ == CXF_DEFAULT_LOG_DIR and not path.exists(CXF_DEFAULT_LOG_DIR):
        print('The default log directory `{}` does not exist.\n'
              'Consider specifying the directory to be listed as an argument.'.format(CXF_DEFAULT_LOG_DIR))
        quit(1)
    if not path.exists(dir_):
        print('Specified dir `{}` does not exist'.format(dir_))
        quit(1)
    all_trainings = _ls_print_listing(dir_, recursive, all_, long)
    if long and len(all_trainings) > 1:
        if not recursive:
            print()
        _ls_print_summary(all_trainings)
    if verbose and len(all_trainings) == 1:
        if not recursive:
            print()
        _ls_print_verbose(all_trainings[0])
List training dirs contained in the given dir with options and outputs similar to the regular `ls` command. The function is accessible through cxflow CLI `cxflow ls`. :param dir_: dir to be listed :param recursive: walk recursively in sub-directories, stop at train dirs (--recursive option) :param all_: include train dirs with no epochs done (--all option) :param long: list more details including model name, model and dataset class, age, duration and epochs done (--long option) :param verbose: print more verbose output with list of additional artifacts and training config, applicable only when a single train dir is listed (--verbose option)
def close(self, wait=False):
    self.session.close()
    self.pool.shutdown(wait=wait)
Close session, shutdown pool.
def generate_tags(self):
    self.tags = dict()
    for section in self.sections():
        if self.has_option(section, 'tags'):
            tags = self.get(section, 'tags')
            for tag in [str(t).strip() for t in tags.split(',')]:
                if tag not in self.tags:
                    self.tags[tag] = list()
                self.tags[tag].append(section.split(':')[1])
Generates the tags collection, mapping each tag to its hosts.
def get_middle_point(lon1, lat1, lon2, lat2):
    if lon1 == lon2 and lat1 == lat2:
        return lon1, lat1
    dist = geodetic.geodetic_distance(lon1, lat1, lon2, lat2)
    azimuth = geodetic.azimuth(lon1, lat1, lon2, lat2)
    return geodetic.point_at(lon1, lat1, azimuth, dist / 2.0)
Given two points return the point exactly in the middle lying on the same great circle arc. Parameters are point coordinates in degrees. :returns: Tuple of longitude and latitude of the point in the middle.
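On a sphere, the arc midpoint can also be computed directly by averaging the two points' unit vectors; the sketch below is a self-contained stand-in for the `geodetic` helpers (it assumes a spherical Earth and degenerates for antipodal points):

```python
import math

def middle_point_sketch(lon1, lat1, lon2, lat2):
    # Convert both points (degrees) to 3D unit vectors, average them,
    # and convert back: the result bisects the great circle arc.
    def to_xyz(lon, lat):
        lon, lat = math.radians(lon), math.radians(lat)
        return (math.cos(lat) * math.cos(lon),
                math.cos(lat) * math.sin(lon),
                math.sin(lat))
    x, y, z = (sum(c) / 2.0 for c in zip(to_xyz(lon1, lat1),
                                         to_xyz(lon2, lat2)))
    # atan2 is scale-invariant, so no explicit normalization is needed.
    return (math.degrees(math.atan2(y, x)),
            math.degrees(math.atan2(z, math.hypot(x, y))))
```

For example, the midpoint of (0, 0) and (90, 0) on the equator comes out as (45, 0).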
def task(name=None, t=INFO, *args, **kwargs):
    def c_run(name, f, t, args, kwargs):
        def run(*largs, **lkwargs):
            thread = __get_current_thread()
            old_name = __THREAD_PARAMS[thread][__THREAD_PARAMS_FNAME_KEY]
            __THREAD_PARAMS[thread][__THREAD_PARAMS_FNAME_KEY] = name
            r = log(name, f, t, largs, lkwargs, *args, **kwargs)
            __THREAD_PARAMS[thread][__THREAD_PARAMS_FNAME_KEY] = old_name
            return r
        return run

    if callable(name):
        f = name
        name = f.__name__
        return c_run(name, f, t, args, kwargs)
    if name is None:
        def wrapped(f):
            name = f.__name__
            return c_run(name, f, t, args, kwargs)
        return wrapped
    else:
        return lambda f: c_run(name, f, t, args, kwargs)
This decorator modifies the current function such that its start, end, and duration are logged in the console. If the task name is not given, it will attempt to infer it from the function name. Optionally, the decorator can log information into files.
def sweObject(obj, jd):
    sweObj = SWE_OBJECTS[obj]
    sweList = swisseph.calc_ut(jd, sweObj)
    return {
        'id': obj,
        'lon': sweList[0],
        'lat': sweList[1],
        'lonspeed': sweList[3],
        'latspeed': sweList[4],
    }
Returns an object from the Ephemeris.
def get_transitions_for(brain_or_object):
    workflow = get_tool('portal_workflow')
    transitions = []
    instance = get_object(brain_or_object)
    for wfid in get_workflows_for(brain_or_object):
        wf = workflow[wfid]
        tlist = wf.getTransitionsFor(instance)
        transitions.extend([t for t in tlist if t not in transitions])
    return transitions
List available workflow transitions for all workflows :param brain_or_object: A single catalog brain or content object :type brain_or_object: ATContentType/DexterityContentType/CatalogBrain :returns: All possible available and allowed transitions :rtype: list[dict]
def _merge_layout(x: go.Layout, y: go.Layout) -> go.Layout:
    xjson = x.to_plotly_json()
    yjson = y.to_plotly_json()
    if 'shapes' in yjson and 'shapes' in xjson:
        xjson['shapes'] += yjson['shapes']
    yjson.update(xjson)
    return go.Layout(yjson)
Merge attributes from two layouts.
def _ensure_sequence(self, mutable=False):
    if self.is_sequence:
        if mutable and not isinstance(self.response, list):
            self.response = list(self.response)
        return
    if self.direct_passthrough:
        raise RuntimeError('Attempted implicit sequence conversion '
                           'but the response object is in direct '
                           'passthrough mode.')
    if not self.implicit_sequence_conversion:
        raise RuntimeError('The response object required the iterable '
                           'to be a sequence, but the implicit '
                           'conversion was disabled. Call '
                           'make_sequence() yourself.')
    self.make_sequence()
This method can be called by methods that need a sequence. If `mutable` is true, it will also ensure that the response sequence is a standard Python list. .. versionadded:: 0.6
def pipe(p1, p2):
    if isinstance(p1, Pipeable) or isinstance(p2, Pipeable):
        return p1 | p2
    return Pipe([p1, p2])
Joins two pipes
def _topology_from_residue(res):
    topology = app.Topology()
    chain = topology.addChain()
    new_res = topology.addResidue(res.name, chain)
    atoms = dict()
    for res_atom in res.atoms():
        topology_atom = topology.addAtom(name=res_atom.name,
                                         element=res_atom.element,
                                         residue=new_res)
        atoms[res_atom] = topology_atom
        topology_atom.bond_partners = []
    for bond in res.bonds():
        atom1 = atoms[bond.atom1]
        atom2 = atoms[bond.atom2]
        topology.addBond(atom1, atom2)
        atom1.bond_partners.append(atom2)
        atom2.bond_partners.append(atom1)
    return topology
Converts an openmm.app.Topology.Residue to an openmm.app.Topology. Parameters ---------- res : openmm.app.Topology.Residue An individual residue in an openmm.app.Topology Returns ------- topology : openmm.app.Topology The generated topology
def read_auth_method_tuning(self, path):
    api_path = '/v1/sys/auth/{path}/tune'.format(path=path)
    response = self._adapter.get(url=api_path)
    return response.json()
Read the given auth path's configuration. This endpoint requires sudo capability on the final path, but the same functionality can be achieved without sudo via sys/mounts/auth/[auth-path]/tune. Supported methods: GET: /sys/auth/{path}/tune. Produces: 200 application/json :param path: The path the method was mounted on. If not provided, defaults to the value of the "method_type" argument. :type path: str | unicode :return: The JSON response of the request. :rtype: dict
def consume_payload(rlp, prefix, start, type_, length):
    if type_ is bytes:
        item = rlp[start: start + length]
        return (item, [prefix + item], start + length)
    elif type_ is list:
        items = []
        per_item_rlp = []
        list_rlp = prefix
        next_item_start = start
        end = next_item_start + length
        while next_item_start < end:
            p, t, l, s = consume_length_prefix(rlp, next_item_start)
            item, item_rlp, next_item_start = consume_payload(rlp, p, s, t, l)
            per_item_rlp.append(item_rlp)
            list_rlp += item_rlp[0]
            items.append(item)
        per_item_rlp.insert(0, list_rlp)
        if next_item_start > end:
            raise DecodingError('List length prefix announced a too small '
                                'length', rlp)
        return (items, per_item_rlp, next_item_start)
    else:
        raise TypeError('Type must be either list or bytes')
Read the payload of an item from an RLP string. :param rlp: the rlp string to read from :param type_: the type of the payload (``bytes`` or ``list``) :param start: the position at which to start reading :param length: the length of the payload in bytes :returns: a tuple ``(item, per_item_rlp, end)``, where ``item`` is the read item, per_item_rlp is a list containing the RLP encoding of each item and ``end`` is the position of the first unprocessed byte
def interbase_range_affected_by_variant_on_transcript(variant, transcript):
    if variant.is_insertion:
        if transcript.strand == "+":
            start_offset = transcript.spliced_offset(variant.start) + 1
        else:
            start_offset = transcript.spliced_offset(variant.start)
        end_offset = start_offset
    else:
        offsets = []
        assert len(variant.ref) > 0
        for dna_pos in range(variant.start, variant.start + len(variant.ref)):
            try:
                offsets.append(transcript.spliced_offset(dna_pos))
            except ValueError:
                logger.info(
                    "Couldn't find position %d from %s on exons of %s",
                    dna_pos, variant, transcript)
        if len(offsets) == 0:
            raise ValueError(
                "Couldn't find any exonic reference bases affected by %s on %s" % (
                    variant, transcript))
        start_offset = min(offsets)
        end_offset = max(offsets) + 1
    return (start_offset, end_offset)
Convert from a variant's position in global genomic coordinates on the forward strand to an interval of interbase offsets on a particular transcript's mRNA. Parameters ---------- variant : varcode.Variant transcript : pyensembl.Transcript Assumes that the transcript overlaps the variant. Returns (start, end) tuple of offsets into the transcript's cDNA sequence which indicates which bases in the reference sequence are affected by a variant. Example: The insertion of "TTT" into the middle of an exon would result in an offset pair such as (100,100) since no reference bases are changed or deleted by an insertion. On the other hand, deletion of the preceding "CGG" at that same locus could result in an offset pair such as (97, 100)
def on_treeview_delete_selection(self, event=None):
    tv = self.treeview
    selection = tv.selection()
    self.filter_remove(remember=True)
    toplevel_items = tv.get_children()
    parents_to_redraw = set()
    for item in selection:
        try:
            parent = ''
            if item not in toplevel_items:
                parent = self.get_toplevel_parent(item)
            else:
                self.previewer.delete(item)
            del self.treedata[item]
            tv.delete(item)
            self.app.set_changed()
            if parent:
                self._update_max_grid_rc(parent)
                parents_to_redraw.add(parent)
            self.widget_editor.hide_all()
        except tk.TclError:
            pass
    for item in parents_to_redraw:
        self.draw_widget(item)
    self.filter_restore()
Removes selected items from treeview
def copy(self):
    copy_distribution = GaussianDistribution(variables=self.variables,
                                             mean=self.mean.copy(),
                                             cov=self.covariance.copy())
    if self._precision_matrix is not None:
        copy_distribution._precision_matrix = self._precision_matrix.copy()
    return copy_distribution
Return a copy of the distribution. Returns ------- GaussianDistribution: copy of the distribution Examples -------- >>> import numpy as np >>> from pgmpy.factors.distributions import GaussianDistribution as GD >>> gauss_dis = GD(variables=['x1', 'x2', 'x3'], ... mean=[1, -3, 4], ... cov=[[4, 2, -2], ... [2, 5, -5], ... [-2, -5, 8]]) >>> copy_dis = gauss_dis.copy() >>> copy_dis.variables ['x1', 'x2', 'x3'] >>> copy_dis.mean array([[ 1], [-3], [ 4]]) >>> copy_dis.covariance array([[ 4, 2, -2], [ 2, 5, -5], [-2, -5, 8]]) >>> copy_dis.precision_matrix array([[ 0.3125 , -0.125 , 0. ], [-0.125 , 0.58333333, 0.33333333], [ 0. , 0.33333333, 0.33333333]])
def console_set_custom_font(
    fontFile: AnyStr,
    flags: int = FONT_LAYOUT_ASCII_INCOL,
    nb_char_horiz: int = 0,
    nb_char_vertic: int = 0,
) -> None:
    if not os.path.exists(fontFile):
        raise RuntimeError(
            "File not found:\n\t%s" % (os.path.realpath(fontFile),)
        )
    lib.TCOD_console_set_custom_font(
        _bytes(fontFile), flags, nb_char_horiz, nb_char_vertic
    )
Load the custom font file at `fontFile`. Call this function before calling :any:`tcod.console_init_root`. Flags can be a mix of the following: * tcod.FONT_LAYOUT_ASCII_INCOL: Decode tileset raw in column-major order. * tcod.FONT_LAYOUT_ASCII_INROW: Decode tileset raw in row-major order. * tcod.FONT_TYPE_GREYSCALE: Force tileset to be read as greyscale. * tcod.FONT_TYPE_GRAYSCALE * tcod.FONT_LAYOUT_TCOD: Unique layout used by libtcod. * tcod.FONT_LAYOUT_CP437: Decode a row-major Code Page 437 tileset into Unicode. `nb_char_horiz` and `nb_char_vertic` are the columns and rows of the font file respectively.
def _get_struct_bevelfilter(self):
    obj = _make_object("BevelFilter")
    obj.ShadowColor = self._get_struct_rgba()
    obj.HighlightColor = self._get_struct_rgba()
    obj.BlurX = unpack_fixed16(self._src)
    obj.BlurY = unpack_fixed16(self._src)
    obj.Angle = unpack_fixed16(self._src)
    obj.Distance = unpack_fixed16(self._src)
    obj.Strength = unpack_fixed8(self._src)
    bc = BitConsumer(self._src)
    obj.InnerShadow = bc.u_get(1)
    obj.Knockout = bc.u_get(1)
    obj.CompositeSource = bc.u_get(1)
    obj.OnTop = bc.u_get(1)
    obj.Passes = bc.u_get(4)
    return obj
Get the values for the BEVELFILTER record.
def get_key_section_header(self, key, spaces):
    header = super(NumpydocTools, self).get_key_section_header(key, spaces)
    header = spaces + header + '\n' + spaces + '-' * len(header) + '\n'
    return header
Get the underlined header section for the given key :param key: the key name :param spaces: spaces to set at the beginning of the header
def _parse_remote_response(self, response):
    try:
        if response.headers["Content-Type"] != 'application/json':
            logger.warning('Wrong Content_type ({})'.format(
                response.headers["Content-Type"]))
    except KeyError:
        pass
    logger.debug("Loaded JWKS: %s from %s" % (response.text, self.source))
    try:
        return json.loads(response.text)
    except ValueError:
        return None
Parse JWKS from the HTTP response. Should be overridden by subclasses for adding support of e.g. signed JWKS. :param response: HTTP response from the 'jwks_uri' endpoint :return: response parsed as JSON
def import_foreign(name, custom_name=None):
    if lab.is_python3():
        io.error(("Ignoring attempt to import foreign module '{mod}' "
                  "using python version {major}.{minor}"
                  .format(mod=name,
                          major=sys.version_info[0],
                          minor=sys.version_info[1])))
        return
    custom_name = custom_name or name
    f, pathname, desc = imp.find_module(name, sys.path[1:])
    module = imp.load_module(custom_name, f, pathname, desc)
    f.close()
    return module
Import a module with a custom name. NOTE this is only needed for Python2. For Python3, import the module using the "as" keyword to declare the custom name. For implementation details, see: http://stackoverflow.com/a/6032023 Example: To import the standard module "math" as "std_math": if labm8.is_python3(): import math as std_math else: std_math = modules.import_foreign("math", "std_math") Arguments: name (str): The name of the module to import. custom_name (str, optional): The custom name to assign the module to. Raises: ImportError: If the module is not found.
def _write_output_manifest(self, manifest, filestore_root):
    output = os.path.basename(manifest)
    fieldnames, source_manifest = self._parse_manifest(manifest)
    if 'file_path' not in fieldnames:
        fieldnames.append('file_path')
    if os.path.isfile(output):
        logger.warning('Overwriting manifest %s', output)
    with atomic_write(output, overwrite=True) as f:
        delimiter = b'\t' if USING_PYTHON2 else '\t'
        writer = csv.DictWriter(f, fieldnames, delimiter=delimiter,
                                quoting=csv.QUOTE_NONE)
        writer.writeheader()
        for row in source_manifest:
            row['file_path'] = self._file_path(row['file_sha256'], filestore_root)
            writer.writerow(row)
    logger.info('Rewrote manifest %s with additional column containing path '
                'to downloaded files.', output)
Adds the file path column to the manifest and writes the copy to the current directory. If the original manifest is in the current directory it is overwritten with a warning.
def styles(self, mutagen_file):
    for style in self._styles:
        if mutagen_file.__class__.__name__ in style.formats:
            yield style
Yields the list of storage styles of this field that can handle the MediaFile's format.
def __gather_avail(self):
    avail = {}
    for saltenv in self._get_envs():
        avail[saltenv] = self.client.list_states(saltenv)
    return avail
Gather the lists of available sls data from the master
def send_result(self, type, task, result):
    if self.outqueue:
        try:
            self.outqueue.put((task, result))
        except Exception as e:
            logger.exception(e)
Send fetch result to processor
def hpai_body(self):
    body = []
    body.extend([self.channel])  # communication channel id
    body.extend([0x00])  # reserved
    body.extend([0x08])  # HPAI structure length
    body.extend([0x01])  # host protocol code: IPv4 UDP
    body.extend(ip_to_array(self.control_socket.getsockname()[0]))
    body.extend(int_to_array(self.control_socket.getsockname()[1]))
    return body
Create a body with HPAI information. This is used for disconnect and connection state requests.
def getClassPath():
    global _CLASSPATHS
    global _SEP
    out = []
    for path in _CLASSPATHS:
        if path == '':
            continue
        if path.endswith('*'):
            paths = _glob.glob(path + ".jar")
            if len(paths) == 0:
                continue
            out.extend(paths)
        else:
            out.append(path)
    return _SEP.join(out)
Get the full java class path. Includes user added paths and the environment CLASSPATH.
def contract(self, jobs, result):
    for j in jobs:
        WorkerPool.put(self, j)
    r = []
    for i in range(len(jobs)):
        r.append(result.get())
    return r
Perform a contract on a number of jobs and block until a result is retrieved for each job.
def get_extra(descriptor):
    result = []
    extra_length = descriptor.extra_length
    if extra_length:
        extra = buffer_at(descriptor.extra.value, extra_length)
        append = result.append
        while extra:
            length = _string_item_to_int(extra[0])
            if not 0 < length <= len(extra):
                raise ValueError(
                    'Extra descriptor %i is incomplete/invalid' % (
                        len(result),
                    ),
                )
            append(extra[:length])
            extra = extra[length:]
    return result
Python-specific helper to access "extra" field of descriptors, because it's not as straightforward as in C. Returns a list, where each entry is an individual extra descriptor.
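The core of the loop is a walk over length-prefixed records: each extra descriptor's first byte is its own total length (`bLength` in USB terms). A standalone sketch over plain bytes:

```python
def split_extra(extra: bytes):
    # Split a blob of concatenated descriptors by walking the one-byte
    # length field at the start of each record.
    result = []
    while extra:
        length = extra[0]
        if not 0 < length <= len(extra):
            raise ValueError('Extra descriptor %i is incomplete/invalid'
                             % len(result))
        result.append(extra[:length])
        extra = extra[length:]
    return result

# A 2-byte descriptor followed by a 3-byte descriptor:
parts = split_extra(b'\x02\x05\x03\x01\x02')
# parts is [b'\x02\x05', b'\x03\x01\x02']
```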
def _printf(self, *args, **kwargs):
    if self._stream and not kwargs.get('file'):
        kwargs['file'] = self._stream
    _printf(*args, **kwargs)
Print to configured stream if any is specified and the file argument is not already set for this specific call.
def stream_logs(container, timeout=10.0, **logs_kwargs):
    stream = container.logs(stream=True, **logs_kwargs)
    return stream_timeout(
        stream, timeout, 'Timeout waiting for container logs.')
Stream logs from a Docker container within a timeout. :param ~docker.models.containers.Container container: Container whose log lines to stream. :param timeout: Timeout value in seconds. :param logs_kwargs: Additional keyword arguments to pass to ``container.logs()``. For example, the ``stdout`` and ``stderr`` boolean arguments can be used to determine whether to stream stdout or stderr or both (the default). :raises TimeoutError: When the timeout value is reached before the logs have completed.
def sort_window_ids(winid_list, order='mru'):
    import utool as ut
    winid_order = XCtrl.sorted_window_ids(order)
    sorted_win_ids = ut.isect(winid_order, winid_list)
    return sorted_win_ids
Orders window ids by most recently used
def combinecrinfo(crinfo1, crinfo2):
    crinfo1 = fix_crinfo(crinfo1)
    crinfo2 = fix_crinfo(crinfo2)
    crinfo = [
        [crinfo1[0][0] + crinfo2[0][0], crinfo1[0][0] + crinfo2[0][1]],
        [crinfo1[1][0] + crinfo2[1][0], crinfo1[1][0] + crinfo2[1][1]],
        [crinfo1[2][0] + crinfo2[2][0], crinfo1[2][0] + crinfo2[2][1]],
    ]
    return crinfo
Combine two crinfos. First used is crinfo1, second used is crinfo2.
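Assuming each crinfo is already normalized to `[[min, max], ...]` per axis (presumably the job of `fix_crinfo`), the combination shifts the second crop by the first crop's minima. A hypothetical standalone version:

```python
def combine_crinfo_sketch(crinfo1, crinfo2):
    # crinfo2 is expressed relative to the region cut out by crinfo1,
    # so both of its bounds are offset by crinfo1's per-axis minimum.
    return [
        [a[0] + b[0], a[0] + b[1]]
        for a, b in zip(crinfo1, crinfo2)
    ]

combined = combine_crinfo_sketch([[10, 50], [20, 60], [30, 70]],
                                 [[5, 15], [2, 12], [0, 10]])
# combined is [[15, 25], [22, 32], [30, 40]]
```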
def create(ctx, to, amount, symbol, secret, hash, account, expiration):
    ctx.blockchain.blocking = True
    tx = ctx.blockchain.htlc_create(
        Amount(amount, symbol),
        to,
        secret,
        hash_type=hash,
        expiration=expiration,
        account=account,
    )
    tx.pop("trx", None)
    print_tx(tx)
    results = tx.get("operation_results", {})
    if results:
        htlc_id = results[0][1]
        print("Your htlc_id is: {}".format(htlc_id))
Create an HTLC contract
def call_graphviz_dot(src, fmt):
    try:
        svg = dot(src, T=fmt)
    except OSError as e:
        if e.errno == 2:  # ENOENT: the dot executable was not found
            cli.error(
            )
        raise
    return svg
Call dot command, and provide helpful error message if we cannot find it.
def check_name(self, name=None):
    if name:
        self.plugin_info['check_name'] = name
    if self.plugin_info['check_name'] is not None:
        return self.plugin_info['check_name']
    return self.__class__.__name__
Checks the plugin name and sets it accordingly. Uses name if specified, class name if not set.
async def restore_networking_configuration(self):
    self._data = await self._handler.restore_networking_configuration(
        system_id=self.system_id)
Restore machine's networking configuration to its initial state.
def _approximate_common_period(periods: List[float],
                               approx_denom: int = 60,
                               reject_atol: float = 1e-8) -> Optional[float]:
    if not periods:
        return None
    if any(e == 0 for e in periods):
        return None
    if len(periods) == 1:
        return abs(periods[0])
    approx_rational_periods = [
        fractions.Fraction(int(np.round(abs(p) * approx_denom)), approx_denom)
        for p in periods
    ]
    common = float(_common_rational_period(approx_rational_periods))
    for p in periods:
        if p != 0 and abs(p * np.round(common / p) - common) > reject_atol:
            return None
    return common
Finds a value that is nearly an integer multiple of multiple periods. The returned value should be the smallest non-negative number with this property. If `approx_denom` is too small the computation can fail to satisfy the `reject_atol` criteria and return `None`. This is actually desirable behavior, since otherwise the code would e.g. return a nonsense value when asked to compute the common period of `np.e` and `np.pi`. Args: periods: The result must be an approximate integer multiple of each of these. approx_denom: Determines how the floating point values are rounded into rational values (so that integer methods such as lcm can be used). Each floating point value f_k will be rounded to a rational number of the form n_k / approx_denom. If you want to recognize rational periods of the form i/d then d should divide `approx_denom`. reject_atol: If the computed approximate common period is at least this far from an integer multiple of any of the given periods, then it is discarded and `None` is returned instead. Returns: The approximate common period, or else `None` if the given `approx_denom` wasn't sufficient to approximate the common period to within the given `reject_atol`.
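The rational-LCM step that `_common_rational_period` presumably performs can be sketched with the standard identity lcm(a/b, c/d) = lcm(a, c) / gcd(b, d); this stand-in needs Python 3.9+ for `math.lcm`:

```python
import fractions
import math

def approx_common_period_sketch(periods, approx_denom=60, reject_atol=1e-8):
    # Round each period to a rational n/approx_denom so integer lcm/gcd apply.
    rationals = [
        fractions.Fraction(round(abs(p) * approx_denom), approx_denom)
        for p in periods
    ]
    common = rationals[0]
    for r in rationals[1:]:
        # lcm of two fractions: lcm of numerators over gcd of denominators.
        common = fractions.Fraction(
            math.lcm(common.numerator, r.numerator),
            math.gcd(common.denominator, r.denominator))
    common = float(common)
    # Reject if the candidate is not close to an integer multiple of each input.
    for p in periods:
        if abs(p * round(common / p) - common) > reject_atol:
            return None
    return common
```

For example, periods 1/2 and 1/3 round to 30/60 and 20/60, whose rational lcm is 1, so the sketch returns 1.0.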
def close(self):
    if self.filename != ':memory:':
        if self._conn is not None:
            self._conn.commit()
            self._conn.close()
            self._conn = None
Closes the connection unless we're working in-memory
def get_page_generator(func, start_page=0, page_size=None):
    page_number = start_page
    more_pages = True
    while more_pages:
        page = func(page_number=page_number, page_size=page_size)
        yield page
        more_pages = page.has_more_pages()
        page_number += 1
Constructs a generator for retrieving pages from a paginated endpoint. This method is intended for internal use. :param func: Should take parameters ``page_number`` and ``page_size`` and return the corresponding |Page| object. :param start_page: The page to start on. :param page_size: The size of each page. :return: A generator that generates each successive page.
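Typical usage can be sketched with a fake endpoint; the `FakePage` class and `fetch` function below are illustrative stand-ins, not part of the real API:

```python
class FakePage:
    """Minimal stand-in for the real |Page| object."""
    def __init__(self, items, is_last):
        self.items = items
        self._is_last = is_last

    def has_more_pages(self):
        return not self._is_last

def get_page_generator(func, start_page=0, page_size=None):
    # Keep requesting pages until the current page reports no more.
    page_number = start_page
    more_pages = True
    while more_pages:
        page = func(page_number=page_number, page_size=page_size)
        yield page
        more_pages = page.has_more_pages()
        page_number += 1

DATA = [list(range(5 * i, 5 * i + 5)) for i in range(3)]

def fetch(page_number, page_size):
    return FakePage(DATA[page_number], is_last=page_number == len(DATA) - 1)

pages = [p.items for p in get_page_generator(fetch)]
# pages is the three 5-item chunks of DATA
```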
def get_ports_by_name(device_name):
    filtered_devices = filter(
        lambda device: device_name in device[1],
        list_ports.comports()
    )
    device_ports = [device[0] for device in filtered_devices]
    return device_ports
Returns all serial devices with a given name
def terminate_process(self, idf):
    try:
        p = self.q.pop(idf)
        p.terminate()
        return p
    except Exception:
        return None
Terminate a process by id
def to_csv(self, filename, delimiter=",", recommended_only=False, include_io=True):
    df = self.to_df(recommended_only, include_io)
    df.to_csv(filename, index=False, sep=delimiter)
Return a CSV for each model and dataset. Parameters ---------- filename : str or file Either the file name (string) or an open file (file-like object) where the data will be saved. delimiter : str, optional Delimiter used in CSV file between fields. recommended_only : bool, optional If True, only recommended models for each session are included. If no model is recommended, then a row with its ID will be included, but all fields will be null. include_io : bool, optional If True, then the input/output files from BMDS will also be included, specifically the (d) input file and the out file. Returns ------- None
def set_viewup(self, vector):
    if isinstance(vector, np.ndarray):
        if vector.ndim != 1:
            vector = vector.ravel()
    self.camera.SetViewUp(vector)
    self._render()
Sets the camera viewup vector.
def get_lock_requests(self):
    d = defaultdict(list)
    if self._context:
        for variant in self._context.resolved_packages:
            name = variant.name
            version = variant.version
            lock = self.patch_locks.get(name)
            if lock is None:
                lock = self.default_patch_lock
            request = get_lock_request(name, version, lock)
            if request is not None:
                d[lock].append(request)
    return d
Take the current context, and the current patch locks, and determine the
effective requests that will be added to the main request.

Returns:
    A dict of (PatchLock, [Requirement]) tuples. Each requirement will be
    a weak package reference. If there is no current context, an empty
    dict will be returned.
def get_key(key, host=None, port=None, db=None, password=None):
    server = _connect(host, port, db, password)
    return server.get(key)
Get redis key value

CLI Example:

.. code-block:: bash

    salt '*' redis.get_key foo
def replace_file_or_dir(dest, source):
    from rez.vendor.atomicwrites import replace_atomic
    if not os.path.exists(dest):
        try:
            os.rename(source, dest)
            return
        except OSError:
            if not os.path.exists(dest):
                raise
    try:
        replace_atomic(source, dest)
        return
    except Exception:
        pass
    with make_tmp_name(dest) as tmp_dest:
        os.rename(dest, tmp_dest)
        os.rename(source, dest)
Replace `dest` with `source`. Acts like an `os.rename` if `dest` does not exist. Otherwise, `dest` is deleted and `src` is renamed to `dest`.
def validate(template_dict, schema=None):
    if not schema:
        schema = SamTemplateValidator._read_schema()
    validation_errors = ""
    try:
        jsonschema.validate(template_dict, schema)
    except ValidationError as ex:
        validation_errors = str(ex)
    return validation_errors
Is this a valid SAM template dictionary?

:param dict template_dict: Data to be validated
:param dict schema: Optional, dictionary containing JSON Schema representing SAM template
:return: Empty string if there are no validation errors in template
def _pip_exists(self):
    return os.path.isfile(os.path.join(self.path, 'bin', 'pip'))
Returns True if pip exists inside the virtual environment. Can be used as a naive way to verify that the environment is installed.
def get_position_label(i, words, tags, heads, labels, ents):
    if len(words) < 20:
        return "short-doc"
    elif i == 0:
        return "first-word"
    elif i < 10:
        return "early-word"
    elif i < 20:
        return "mid-word"
    elif i == len(words) - 1:
        return "last-word"
    else:
        return "late-word"
Return a label indicating the position of the word in the document.
def insert(self, action: Action, where: 'Union[int, Delegate.Where]'):
    if isinstance(where, int):
        self.actions.insert(where, action)
        return
    here = where(self.actions)
    self.actions.insert(here, action)
Add a new action with a specific priority.

>>> delegate: Delegate
>>> delegate.insert(lambda task, product, ctx: print(product),
...                 where=Delegate.Where.after(lambda action: action.__name__ == 'myfunc'))

The code above inserts an action after the specific action whose name is 'myfunc'.
def conv_uuid(self, column, name, **kwargs):
    return [f(column, name, **kwargs) for f in self.uuid_filters]
Convert UUID filter.
def count_variables_by_type(variables=None):
    if variables is None:
        variables = tf.global_variables() + tf.local_variables()
    unique_types = set(v.dtype.base_dtype for v in variables)
    results_dict = {}
    for dtype in unique_types:
        if dtype == tf.string:
            tf.logging.warning(
                "NB: string Variables present. The memory usage for these Variables "
                "will not be accurately computed as it depends on the exact strings "
                "stored in a particular session.")
        vars_of_type = [v for v in variables if v.dtype.base_dtype == dtype]
        num_scalars = sum(v.shape.num_elements() for v in vars_of_type)
        results_dict[dtype] = {
            "num_variables": len(vars_of_type),
            "num_scalars": num_scalars
        }
    return results_dict
Returns a dict mapping dtypes to number of variables and scalars.

Args:
  variables: iterable of `tf.Variable`s, or None. If None is passed, then
    all global and local variables in the current graph are used.

Returns:
  A dict mapping tf.dtype keys to a dict containing the keys 'num_scalars'
  and 'num_variables'.
def _n_parameters(self):
    ndim = self.means_.shape[1]
    if self.covariance_type == 'full':
        cov_params = self.n_components * ndim * (ndim + 1) / 2.
    elif self.covariance_type == 'diag':
        cov_params = self.n_components * ndim
    elif self.covariance_type == 'tied':
        cov_params = ndim * (ndim + 1) / 2.
    elif self.covariance_type == 'spherical':
        cov_params = self.n_components
    mean_params = ndim * self.n_components
    return int(cov_params + mean_params + self.n_components - 1)
Return the number of free parameters in the model.
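The count above has a simple closed form: a full-covariance mixture with `n` components in `d` dimensions stores n*d(d+1)/2 covariance entries, n*d means, and n - 1 free mixture weights (the weights sum to 1, so one is determined). A standalone sketch of the same arithmetic; the function name is illustrative and not taken from the original class:

```python
def gmm_free_parameters(n_components, ndim, covariance_type="full"):
    """Count free parameters of a Gaussian mixture (standalone sketch)."""
    cov_params = {
        "full": n_components * ndim * (ndim + 1) / 2.0,  # one symmetric matrix per component
        "diag": n_components * ndim,                     # one variance per dim per component
        "tied": ndim * (ndim + 1) / 2.0,                 # a single shared symmetric matrix
        "spherical": n_components,                       # one variance per component
    }[covariance_type]
    mean_params = ndim * n_components
    # Mixture weights sum to 1, so only n_components - 1 are free.
    return int(cov_params + mean_params + n_components - 1)

print(gmm_free_parameters(3, 2, "full"))  # 9 + 6 + 2 = 17
```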
def askopenfile(mode="r", **options):
    "Ask for a filename to open, and return the opened file"
    filename = askopenfilename(**options)
    if filename:
        return open(filename, mode)
    return None
Ask for a filename to open, and return the opened file
def visit_NameConstant(self, node: ast.NameConstant) -> Any:
    self.recomputed_values[node] = node.value
    return node.value
Forward the node value as a result.
def run(features, labels, regularization=0., constfeat=True):
    n_col = (features.shape[1] if len(features.shape) > 1 else 1)
    reg_matrix = regularization * np.identity(n_col, dtype='float64')
    if constfeat:
        reg_matrix[0, 0] = 0.
    return np.linalg.lstsq(features.T.dot(features) + reg_matrix,
                           features.T.dot(labels))[0]
Run linear regression on the given data.

.. versionadded:: 0.5.0

If a regularization parameter is provided, this function is a
simplification and specialization of ridge regression, as implemented in
`scikit-learn <http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Ridge.html#sklearn.linear_model.Ridge>`_.
Setting `solver` to `'svd'` in :class:`sklearn.linear_model.Ridge` and
equating our `regularization` with their `alpha` will yield the same
results.

Parameters
----------
features : ndarray
    Features on which to run linear regression.
labels : ndarray
    Labels for the given features. Multiple columns of labels are allowed.
regularization : float, optional
    Regularization parameter. Defaults to 0.
constfeat : bool, optional
    Whether or not the first column of features is the constant feature 1.
    If True, the first column will be excluded from regularization.
    Defaults to True.

Returns
-------
model : ndarray
    Regression model for the given data.
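Under the hood this solves the regularized normal equations (X'X + reg * I) beta = X'y, with the regularization zeroed on the intercept column. A self-contained NumPy sketch of the same idea; the function and variable names here are illustrative, not the original module's:

```python
import numpy as np

def ridge_fit(X, y, regularization=0.0, constfeat=True):
    """Solve (X'X + reg * I) beta = X'y; skip regularizing the constant column."""
    reg_matrix = regularization * np.identity(X.shape[1])
    if constfeat:
        reg_matrix[0, 0] = 0.0  # leave the intercept unpenalized
    return np.linalg.solve(X.T @ X + reg_matrix, X.T @ y)

# With regularization 0, this recovers y = 2 + 3x exactly.
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
y = np.array([2.0, 5.0, 8.0])
print(ridge_fit(X, y))  # approximately [2. 3.]
```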
@contextlib.contextmanager
def _CreateTempDir(prefix, run_dir=None):
    temp_dir = tempfile.mkdtemp(prefix=prefix + '-', dir=run_dir)
    try:
        yield temp_dir
    finally:
        shutil.rmtree(temp_dir)
Context manager for creating a temporary directory.

Args:
  prefix: string, the prefix for the temporary directory.
  run_dir: string, the base directory location of the temporary directory.

Yields:
  string, the temporary directory created.
def headers(self):
    if py3k:
        return dict((k.lower(), v) for k, v in self.getheaders())
    else:
        return dict(self.getheaders())
Response headers.

Response headers is a dict with all keys in lower case.

>>> import urlfetch
>>> response = urlfetch.get("http://docs.python.org/")
>>> response.headers
{
    'content-length': '8719',
    'x-cache': 'MISS from localhost',
    'accept-ranges': 'bytes',
    'vary': 'Accept-Encoding',
    'server': 'Apache/2.2.16 (Debian)',
    'last-modified': 'Tue, 26 Jun 2012 19:23:18 GMT',
    'connection': 'close',
    'etag': '"13cc5e4-220f-4c36507ded580"',
    'date': 'Wed, 27 Jun 2012 06:50:30 GMT',
    'content-type': 'text/html',
    'x-cache-lookup': 'MISS from localhost:8080'
}
def get_obj_cacheable(obj, attr_name, calculate, recalculate=False):
    if not recalculate and hasattr(obj, attr_name):
        return getattr(obj, attr_name)
    calculated = calculate()
    setattr(obj, attr_name, calculated)
    return calculated
Gets the result of a method call, using the given object and attribute name as a cache
def count_tf(tokens_stream):
    tf = defaultdict(int)
    for tokens in tokens_stream:
        for token in tokens:
            tf[token] += 1
    return tf
Count term frequencies for a single file.
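A quick self-contained usage sketch of the same counting pattern, with an illustrative token stream:

```python
from collections import defaultdict

def count_tf(tokens_stream):
    """Count term frequencies across an iterable of token lists."""
    tf = defaultdict(int)
    for tokens in tokens_stream:
        for token in tokens:
            tf[token] += 1
    return tf

stream = [["to", "be", "or"], ["not", "to", "be"]]
print(dict(count_tf(stream)))  # {'to': 2, 'be': 2, 'or': 1, 'not': 1}
```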
def download(self, sub_url):
    response = requests.get(sub_url, headers=self.headers).text
    soup = BS(response, 'lxml')
    downlink = self.base_url + soup.select('.download a')[0]['href']
    data = requests.get(downlink, headers=self.headers)
    z = zipfile.ZipFile(cStringIO.StringIO(data.content))
    srt_files = [f.filename for f in z.filelist
                 if f.filename.rsplit('.')[-1].lower() in ['srt', 'ass']]
    z.extract(srt_files[0], '/tmp/')
    return srt_files[0]
Download and unzip a subtitle archive to a temp location.
def update_annotation_version(xml_file):
    with open(xml_file, 'r') as f:
        s = f.read()
    m = search('<annotations version="([0-9]*)">', s)
    current = int(m.groups()[0])
    if current < 4:
        s = sub('<marker><name>(.*?)</name><time>(.*?)</time></marker>',
                '<marker><marker_name>\g<1></marker_name>'
                '<marker_start>\g<2></marker_start>'
                '<marker_end>\g<2></marker_end><marker_chan/></marker>',
                s)
    if current < 5:
        s = s.replace('marker', 'bookmark')
        s = sub('<annotations version="[0-9]*">', '<annotations version="5">', s)
    with open(xml_file, 'w') as f:
        f.write(s)
Update the fields that have changed over different versions.

Parameters
----------
xml_file : path to file
    xml file with the sleep scoring

Notes
-----
new in version 4: use 'marker_name' instead of simply 'name' etc
new in version 5: use 'bookmark' instead of 'marker'
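The version-4 migration relies on `re.sub` backreferences (`\g<1>`, `\g<2>`) to move the old tag contents into the new tag layout, duplicating the old `<time>` into both start and end. A minimal standalone illustration of that substitution, with a made-up marker value:

```python
import re

old = '<marker><name>spindle</name><time>12.5</time></marker>'
new = re.sub(
    r'<marker><name>(.*?)</name><time>(.*?)</time></marker>',
    r'<marker><marker_name>\g<1></marker_name>'
    r'<marker_start>\g<2></marker_start>'
    r'<marker_end>\g<2></marker_end><marker_chan/></marker>',
    old,
)
print(new)
```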
def secured_task(f):
    @wraps(f)
    def secured_task_decorator(*args, **kwargs):
        task_data = _get_data_from_args(args)
        assert isinstance(task_data, TaskData)
        if not verify_security_data(task_data.get_data()['security']):
            raise SecurityException(
                task_data.get_data()['security']['hashed_token'])
        task_data.transform_payload(lambda x: x['data'])
        return f(*args, **kwargs)
    return secured_task_decorator
Secured task decorator.
def ipaddr(value, options=None):
    ipv4_obj = ipv4(value, options=options)
    ipv6_obj = ipv6(value, options=options)
    if ipv4_obj is None or ipv6_obj is None:
        return ipv4_obj or ipv6_obj
    else:
        return ipv4_obj + ipv6_obj
Filters and returns only valid IP objects.
def clean_password2(self):
    password1 = self.cleaned_data.get("password1")
    password2 = self.cleaned_data.get("password2")
    if password1:
        errors = []
        if password1 != password2:
            errors.append(ugettext("Passwords do not match"))
        if len(password1) < settings.ACCOUNTS_MIN_PASSWORD_LENGTH:
            errors.append(
                ugettext("Password must be at least %s characters") %
                settings.ACCOUNTS_MIN_PASSWORD_LENGTH)
        if errors:
            self._errors["password1"] = self.error_class(errors)
    return password2
Ensure the password fields are equal, and match the minimum length defined by ``ACCOUNTS_MIN_PASSWORD_LENGTH``.
def nl_recvmsgs_report(sk, cb):
    if cb.cb_recvmsgs_ow:
        return int(cb.cb_recvmsgs_ow(sk, cb))
    return int(recvmsgs(sk, cb))
Receive a set of messages from a Netlink socket and report parsed messages.

https://github.com/thom311/libnl/blob/libnl3_2_25/lib/nl.c#L998

This function is identical to nl_recvmsgs() to the point that it will return
the number of parsed messages instead of 0 on success. See nl_recvmsgs().

Positional arguments:
sk -- Netlink socket (nl_sock class instance).
cb -- set of callbacks to control behaviour (nl_cb class instance).

Returns:
Number of received messages or a negative error code from nl_recv().
def proxy(name, default=None):
    proxymodule = _ProxyMetaClass(name, (_ProxyModule,), {'_default': default})
    proxymodule.__module__ = sys._getframe(1).f_globals.get('__name__')
    return proxymodule
Create a proxy module. A proxy module has a default implementation, but can be redirected to other implementations with configurations. Other modules can depend on proxy modules.
def configure_createfor(self, ns, definition):
    @self.add_route(ns.relation_path, Operation.CreateFor, ns)
    @request(definition.request_schema)
    @response(definition.response_schema)
    @wraps(definition.func)
    def create(**path_data):
        request_data = load_request_data(definition.request_schema)
        response_data = require_response_data(
            definition.func(**merge_data(path_data, request_data)))
        headers = encode_id_header(response_data)
        definition.header_func(headers, response_data)
        response_format = self.negotiate_response_content(definition.response_formats)
        return dump_response_data(
            definition.response_schema,
            response_data,
            Operation.CreateFor.value.default_code,
            headers=headers,
            response_format=response_format,
        )

    create.__doc__ = "Create a new {} relative to a {}".format(
        pluralize(ns.object_name), ns.subject_name)
Register a create-for relation endpoint.

The definition's func should be a create function, which must:

- accept kwargs for the new instance creation parameters
- return the created instance

:param ns: the namespace
:param definition: the endpoint definition
def _queryset_iterator(qs):
    if issubclass(type(qs), UrlNodeQuerySet):
        super_without_boobytrap_iterator = super(UrlNodeQuerySet, qs)
    else:
        super_without_boobytrap_iterator = super(PublishingQuerySet, qs)
    if is_publishing_middleware_active() \
            and not is_draft_request_context():
        for item in super_without_boobytrap_iterator.iterator():
            if getattr(item, 'publishing_is_draft', False):
                yield DraftItemBoobyTrap(item)
            else:
                yield item
    else:
        for item in super_without_boobytrap_iterator.iterator():
            yield item
Override default iterator to wrap returned items in a publishing
sanity-checker "booby trap" to lazily raise an exception if DRAFT items are
mistakenly returned and mis-used in a public context where only PUBLISHED
items should be used.

This booby trap is added when all of:

- the publishing middleware is active, and therefore able to report
  accurately whether the request is in a drafts-permitted context
- the publishing middleware tells us we are not in a drafts-permitted
  context, which means only published items should be used.
def need_completion_refresh(queries):
    tokens = {'use', '\\u', 'create', 'drop'}
    for query in sqlparse.split(queries):
        try:
            first_token = query.split()[0]
            if first_token.lower() in tokens:
                return True
        except Exception:
            return False
Determines if the completion needs a refresh by checking if the sql statement is an alter, create, drop or change db.
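The check reduces to splitting the input into statements and inspecting each statement's first token. A dependency-free sketch of the same idea, using a naive `';'` split in place of `sqlparse` (so, unlike the original, it would mis-split on `';'` inside string literals); names and keywords here are illustrative:

```python
KEYWORDS = frozenset({'use', 'create', 'drop'})

def needs_refresh(queries):
    """Return True if any statement starts with a schema-changing keyword."""
    for statement in queries.split(';'):
        tokens = statement.split()
        if tokens and tokens[0].lower() in KEYWORDS:
            return True
    return False

print(needs_refresh("SELECT 1; CREATE TABLE t (id int)"))  # True
print(needs_refresh("SELECT * FROM t"))                    # False
```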
def color(ip, mac, hue, saturation, value):
    bulb = MyStromBulb(ip, mac)
    bulb.set_color_hsv(hue, saturation, value)
Switch the bulb on with the given color.
def validate_type(value, types, **kwargs):
    if not is_value_of_any_type(value, types):
        raise ValidationError(MESSAGES['type']['invalid'].format(
            repr(value),
            get_type_for_value(value),
            types,
        ))
Validate that the value is one of the provided primitive types.
def __group_tags(self, facts):
    if not facts:
        return facts
    grouped_facts = []
    for fact_id, fact_tags in itertools.groupby(facts, lambda f: f["id"]):
        fact_tags = list(fact_tags)
        grouped_fact = fact_tags[0]
        keys = ["id", "start_time", "end_time", "description", "name",
                "activity_id", "category", "tag"]
        grouped_fact = dict([(key, grouped_fact[key]) for key in keys])
        grouped_fact["tags"] = [ft["tag"] for ft in fact_tags if ft["tag"]]
        grouped_facts.append(grouped_fact)
    return grouped_facts
Put the fact back together and move all the unique tags into an array.
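This is the classic `itertools.groupby` pattern for collapsing a joined-table result set: rows sharing an id become one record whose tags are collected into a list. Note that `groupby` only groups consecutive rows, so the input must already be sorted by id. A self-contained sketch with made-up rows:

```python
import itertools

rows = [
    {"id": 1, "name": "reading", "tag": "evening"},
    {"id": 1, "name": "reading", "tag": "book"},
    {"id": 2, "name": "coding", "tag": None},
]

grouped = []
for fact_id, fact_rows in itertools.groupby(rows, key=lambda r: r["id"]):
    fact_rows = list(fact_rows)
    # Keep the shared columns from the first row, collect the tag column.
    record = {"id": fact_id, "name": fact_rows[0]["name"]}
    record["tags"] = [r["tag"] for r in fact_rows if r["tag"]]
    grouped.append(record)

print(grouped)
# [{'id': 1, 'name': 'reading', 'tags': ['evening', 'book']},
#  {'id': 2, 'name': 'coding', 'tags': []}]
```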
def init_from_adversarial_batches_write_to_datastore(self, submissions, adv_batches):
    idx = 0
    for s_id in iterkeys(submissions.defenses):
        for adv_id in iterkeys(adv_batches.data):
            class_batch_id = CLASSIFICATION_BATCH_ID_PATTERN.format(idx)
            idx += 1
            self.data[class_batch_id] = {
                'adversarial_batch_id': adv_id,
                'submission_id': s_id,
                'result_path': os.path.join(
                    self._round_name,
                    CLASSIFICATION_BATCHES_SUBDIR,
                    s_id + '_' + adv_id + '.csv')
            }
    client = self._datastore_client
    with client.no_transact_batch() as batch:
        for key, value in iteritems(self.data):
            entity = client.entity(client.key(KIND_CLASSIFICATION_BATCH, key))
            entity.update(value)
            batch.put(entity)
Populates data from adversarial batches and writes to datastore.

Args:
  submissions: instance of CompetitionSubmissions
  adv_batches: instance of AdversarialBatches
def dict_to_pyxb(rp_dict):
    rp_pyxb = d1_common.types.dataoneTypes.replicationPolicy()
    rp_pyxb.replicationAllowed = rp_dict['allowed']
    rp_pyxb.numberReplicas = rp_dict['num']
    rp_pyxb.blockedMemberNode = rp_dict['block']
    rp_pyxb.preferredMemberNode = rp_dict['pref']
    normalize(rp_pyxb)
    return rp_pyxb
Convert dict to ReplicationPolicy PyXB object.

Args:
  rp_dict: Native Python structure representing a Replication Policy.

    Example::

      {
        'allowed': True,
        'num': 3,
        'blockedMemberNode': {'urn:node:NODE1', 'urn:node:NODE2', 'urn:node:NODE3'},
        'preferredMemberNode': {'urn:node:NODE4', 'urn:node:NODE5'},
      }

Returns:
  ReplicationPolicy PyXB object.
def _value_decode(cls, member, value):
    if value is None:
        return None
    try:
        field_validator = cls.fields[member]
    except KeyError:
        return cls.valueparse.decode(value)
    return field_validator.decode(value)
Internal method used to decode values from redis hash :param member: str :param value: bytes :return: multi
def pants_setup_py(name, description, additional_classifiers=None, **kwargs):
    if not name.startswith('pantsbuild.pants'):
        raise ValueError("Pants distribution package names must start with 'pantsbuild.pants', "
                         "given {}".format(name))
    standard_classifiers = [
        'Intended Audience :: Developers',
        'License :: OSI Approved :: Apache Software License',
        'Operating System :: MacOS :: MacOS X',
        'Operating System :: POSIX :: Linux',
        'Programming Language :: Python',
        'Topic :: Software Development :: Build Tools']
    classifiers = OrderedSet(standard_classifiers + (additional_classifiers or []))
    notes = PantsReleases.global_instance().notes_for_version(PANTS_SEMVER)
    return PythonArtifact(
        name=name,
        version=VERSION,
        description=description,
        long_description=(_read_contents('src/python/pants/ABOUT.rst') + notes),
        url='https://github.com/pantsbuild/pants',
        license='Apache License, Version 2.0',
        zip_safe=True,
        classifiers=list(classifiers),
        **kwargs)
Creates the setup_py for a pants artifact.

:param str name: The name of the package.
:param str description: A brief description of what the package provides.
:param list additional_classifiers: Any additional trove classifiers that apply to the package,
                                    see: https://pypi.org/pypi?%3Aaction=list_classifiers
:param kwargs: Any additional keyword arguments to be passed to `setuptools.setup
               <https://pythonhosted.org/setuptools/setuptools.html>`_.
:returns: A setup_py suitable for building and publishing pants components.
def parse_kwargs(kwargs, *keys, **keyvalues):
    result = {}
    for key in keys:
        if key in kwargs:
            result[key] = kwargs[key]
            del kwargs[key]
    for key, value in keyvalues.items():
        if key in kwargs:
            result[key] = kwargs[key]
            del kwargs[key]
        else:
            result[key] = value
    return result
Return dict with keys from keys|keyvals and values from kwargs|keyvals.

Existing keys are deleted from kwargs.

>>> kwargs = {'one': 1, 'two': 2, 'four': 4}
>>> kwargs2 = parse_kwargs(kwargs, 'two', 'three', four=None, five=5)
>>> kwargs == {'one': 1}
True
>>> kwargs2 == {'two': 2, 'four': 4, 'five': 5}
True
def get_definition(self):
    cpds = self.model.get_cpds()
    cpds.sort(key=lambda x: x.variable)
    definition_tag = {}
    for cpd in cpds:
        definition_tag[cpd.variable] = etree.SubElement(self.network, "DEFINITION")
        etree.SubElement(definition_tag[cpd.variable], "FOR").text = cpd.variable
        for child in sorted(cpd.variables[:0:-1]):
            etree.SubElement(definition_tag[cpd.variable], "GIVEN").text = child
    return definition_tag
Add Definition to XMLBIF

Return
------
dict: dict of type {variable: definition tag}

Examples
--------
>>> writer = XMLBIFWriter(model)
>>> writer.get_definition()
{'hear-bark': <Element DEFINITION at 0x7f1d48977408>,
 'family-out': <Element DEFINITION at 0x7f1d489773c8>,
 'dog-out': <Element DEFINITION at 0x7f1d48977388>,
 'bowel-problem': <Element DEFINITION at 0x7f1d48977348>,
 'light-on': <Element DEFINITION at 0x7f1d48977448>}
def filter_args_to_dict(filter_dict, accepted_filter_keys=[]):
    out_dict = {}
    for k, v in filter_dict.items():
        if k not in accepted_filter_keys or v is None:
            logger.debug(
                'Filter was not in accepted_filter_keys or value is None.')
            continue
        filter_type = filter_type_map.get(k, None)
        if filter_type is None:
            logger.debug('Filter key not found in map.')
            continue
        filter_cast_map = {
            'int': cast_integer_filter,
            'datetime': cast_datetime_filter
        }
        cast_function = filter_cast_map.get(filter_type, None)
        if cast_function:
            out_value = cast_function(v)
        else:
            out_value = v
        out_dict[k] = out_value
    return out_dict
Cast and validate filter args.

:param filter_dict: Filter kwargs
:param accepted_filter_keys: List of keys that are acceptable to use.
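The core pattern is a two-level dispatch: a key-to-type map decides which caster applies, and a type-to-function map performs the cast, passing unknown types through unchanged. A standalone sketch; the maps and names here are illustrative stand-ins, not the original module's:

```python
from datetime import datetime

# Hypothetical maps standing in for the module-level ones.
FILTER_TYPE_MAP = {"limit": "int", "since": "datetime"}
CASTERS = {
    "int": int,
    "datetime": lambda v: datetime.strptime(v, "%Y-%m-%d"),
}

def cast_filters(filters, accepted):
    out = {}
    for key, value in filters.items():
        if key not in accepted or value is None:
            continue  # silently drop unknown keys and empty values
        caster = CASTERS.get(FILTER_TYPE_MAP.get(key))
        out[key] = caster(value) if caster else value
    return out

print(cast_filters({"limit": "10", "since": "2020-01-01", "junk": 1},
                   accepted={"limit", "since"}))
```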
def _get_output_template(self):
    path = self._file_writer_session.extra_resource_path('.youtube-dl')
    if not path:
        self._temp_dir = tempfile.TemporaryDirectory(
            dir=self._root_path, prefix='tmp-wpull-youtubedl'
        )
        path = '{}/tmp'.format(self._temp_dir.name)
    return path, '{}.%(id)s.%(format_id)s.%(ext)s'.format(path)
Return the path prefix and output template.
def open(self):
    try:
        self.device.open()
    except ConnectTimeoutError as cte:
        raise ConnectionException(cte.msg)
    self.device.timeout = self.timeout
    self.device._conn._session.transport.set_keepalive(self.keepalive)
    if hasattr(self.device, "cu"):
        del self.device.cu
    self.device.bind(cu=Config)
    if not self.lock_disable and self.session_config_lock:
        self._lock()
Open the connection with the device.
def build_tree(X, y, criterion, max_depth, current_depth=1):
    if max_depth >= 0 and current_depth >= max_depth:
        return Leaf(y)
    gain, question = find_best_question(X, y, criterion)
    if gain == 0:
        return Leaf(y)
    true_X, false_X, true_y, false_y = split(X, y, question)
    true_branch = build_tree(
        true_X, true_y, criterion, max_depth,
        current_depth=current_depth + 1
    )
    false_branch = build_tree(
        false_X, false_y, criterion, max_depth,
        current_depth=current_depth + 1
    )
    return Node(
        question=question,
        true_branch=true_branch,
        false_branch=false_branch
    )
Builds the decision tree.
def force_unlock(self, key):
    check_not_none(key, "key can't be None")
    key_data = self._to_data(key)
    return self._encode_invoke_on_key(
        map_force_unlock_codec, key_data, key=key_data,
        reference_id=self.reference_id_generator.get_and_increment())
Releases the lock for the specified key regardless of the lock owner. It
always successfully unlocks the key, never blocks, and returns immediately.

**Warning: This method uses __hash__ and __eq__ methods of binary form of
the key, not the actual implementations of __hash__ and __eq__ defined in
key's class.**

:param key: (object), the key to unlock.
def get_event_exchange(service_name):
    exchange_name = "{}.events".format(service_name)
    exchange = Exchange(
        exchange_name, type='topic', durable=True, delivery_mode=PERSISTENT
    )
    return exchange
Get an exchange for ``service_name`` events.
def scram(self):
    self.stop_all_children()
    signal.signal(signal.SIGTERM, signal.SIG_DFL)
    sys.exit(2)
Kill all workers and die ourselves. This runs in response to SIGABRT, from a specific invocation of the ``kill`` command. It also runs if :meth:`stop_gracefully` is called more than once.