def mask_and_mean_loss(input_tensor, binary_tensor, axis=None):
    return mean_on_masked(mask_loss(input_tensor, binary_tensor),
                          binary_tensor, axis=axis)
Mask a loss by using a tensor filled with 0 or 1 and average correctly.

:param input_tensor: A float tensor of shape [batch_size, ...] representing the loss/cross_entropy
:param binary_tensor: A float tensor of shape [batch_size, ...] representing the mask.
:param axis: The dimensions to reduce. If None (the default), reduces all dimensions. Must be in the range [-rank(input_tensor), rank(input_tensor)).
:return: A float tensor of shape [batch_size, ...] representing the masked loss.
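The masked-mean pattern described above can be sketched in plain NumPy. The helpers `mask_loss` and `mean_on_masked` are assumed to multiply by the mask and divide by the mask sum; this standalone sketch combines both steps:

```python
import numpy as np

def masked_mean(loss, mask):
    # Zero out masked-off entries, then average over the kept entries only.
    masked = loss * mask
    return masked.sum() / mask.sum()

loss = np.array([1.0, 2.0, 3.0, 4.0])
mask = np.array([1.0, 1.0, 0.0, 0.0])
print(masked_mean(loss, mask))  # 1.5, not 0.75: masked entries don't dilute the mean
```

Dividing by `mask.sum()` rather than the full element count is the "average correctly" part: padded positions must not drag the mean toward zero.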
def remove_listener(registry, listener):
    if listener is not None and listener in registry:
        registry.remove(listener)
        return True
    return False
Removes a listener from the registry.

:param registry: A registry (a list)
:param listener: The listener to remove
:return: True if the listener was in the list
def set_error_callback(cbfun):
    global _error_callback
    previous_callback = _error_callback
    if cbfun is None:
        cbfun = 0
    c_cbfun = _GLFWerrorfun(cbfun)
    _error_callback = (cbfun, c_cbfun)
    cbfun = c_cbfun
    _glfw.glfwSetErrorCallback(cbfun)
    if previous_callback is not None and previous_callback[0] != 0:
        return previous_callback[0]
Sets the error callback.

Wrapper for:
    GLFWerrorfun glfwSetErrorCallback(GLFWerrorfun cbfun);
def hangul_to_jamo(hangul_string):
    return (_ for _ in
            chain.from_iterable(_hangul_char_to_jamo(_) for _ in hangul_string))
Convert a string of Hangul to jamo. Arguments may be iterables of characters. hangul_to_jamo should split every Hangul character into U+11xx jamo characters for any given string. Non-hangul characters are not changed. hangul_to_jamo is the generator version of h2j, the string version.
def PathList(self, pathlist):
    pathlist = self._PathList_key(pathlist)
    try:
        memo_dict = self._memo['PathList']
    except KeyError:
        memo_dict = {}
        self._memo['PathList'] = memo_dict
    else:
        try:
            return memo_dict[pathlist]
        except KeyError:
            pass
    result = _PathList(pathlist)
    memo_dict[pathlist] = result
    return result
Returns the cached _PathList object for the specified pathlist, creating and caching a new object as necessary.
def set_learning_objectives(self, objective_ids):
    if not isinstance(objective_ids, list):
        raise errors.InvalidArgument()
    if self.get_learning_objectives_metadata().is_read_only():
        raise errors.NoAccess()
    idstr_list = []
    for object_id in objective_ids:
        if not self._is_valid_id(object_id):
            raise errors.InvalidArgument()
        idstr_list.append(str(object_id))
    self._my_map['learningObjectiveIds'] = idstr_list
Sets the learning objectives.

arg:    objective_ids (osid.id.Id[]): the learning objective ``Ids``
raise:  InvalidArgument - ``objective_ids`` is invalid
raise:  NoAccess - ``Metadata.isReadOnly()`` is ``true``
*compliance: mandatory -- This method must be implemented.*
def append(self,
           moment_or_operation_tree: Union[ops.Moment, ops.OP_TREE],
           strategy: InsertStrategy = InsertStrategy.EARLIEST):
    self.insert(len(self._moments), moment_or_operation_tree, strategy)
Appends operations onto the end of the circuit.

Moments within the operation tree are appended intact.

Args:
    moment_or_operation_tree: The moment or operation tree to append.
    strategy: How to pick/create the moment to put operations into.
def _get_response(self, method, endpoint, params=None):
    url = urljoin(self.api_url, endpoint)
    try:
        response = getattr(self._session, method)(url, params=params)
        if response.status_code == 401:
            raise TokenExpiredError
    except TokenExpiredError:
        self._refresh_oath_token()
        self._session = OAuth2Session(
            client_id=self._client_id,
            token=self._token,
        )
        response = getattr(self._session, method)(url, params=params)
    if response.status_code != requests.codes.ok:
        raise MonzoAPIError(
            "Something went wrong: {}".format(response.json())
        )
    return response
Helper method to handle HTTP requests and catch API errors.

:param method: valid HTTP method
:type method: str
:param endpoint: API endpoint
:type endpoint: str
:param params: extra parameters passed with the request
:type params: dict
:returns: API response
:rtype: Response
def disable(name, runas=None):
    service_target = _get_domain_target(name, service_target=True)[0]
    return launchctl('disable', service_target, runas=runas)
Disable a launchd service. Raises an error if the service fails to be disabled.

:param str name: Service label, file name, or full path
:param str runas: User to run launchctl commands
:return: ``True`` if successful or if the service is already disabled
:rtype: bool

CLI Example:

.. code-block:: bash

    salt '*' service.disable org.cups.cupsd
def matchall(r, s, flags=0):
    try:
        return [m.group(0) for m in matchiter(r, s, flags)]
    except ValueError:
        return None
Returns the list of contiguous string matches of r in s, or None if r does not successively match the entire s.
def create(self):
    if self.path is not None:
        logger.debug(
            "Skipped creation of temporary directory: {}".format(self.path)
        )
        return
    self.path = os.path.realpath(
        tempfile.mkdtemp(prefix="pip-{}-".format(self.kind))
    )
    self._register_finalizer()
    logger.debug("Created temporary directory: {}".format(self.path))
Create a temporary directory and store its path in self.path
def easeInOutCubic(n):
    _checkRange(n)
    n = 2 * n
    if n < 1:
        return 0.5 * n**3
    else:
        n = n - 2
        return 0.5 * (n**3 + 2)
A cubic tween function that accelerates, reaches the midpoint, and then decelerates. Args: n (float): The time progress, starting at 0.0 and ending at 1.0. Returns: (float) The line progress, starting at 0.0 and ending at 1.0. Suitable for passing to getPointOnLine().
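The behavior of the easing function can be checked at the endpoints and midpoint. This self-contained sketch omits the `_checkRange` input validation, which is assumed to only reject values outside [0.0, 1.0]:

```python
def ease_in_out_cubic(n):
    # Accelerate cubically over the first half, decelerate over the second.
    n = 2 * n
    if n < 1:
        return 0.5 * n**3
    n = n - 2
    return 0.5 * (n**3 + 2)

print(ease_in_out_cubic(0.0))   # 0.0  (start)
print(ease_in_out_cubic(0.5))   # 0.5  (midpoint, where the two halves meet)
print(ease_in_out_cubic(1.0))   # 1.0  (end)
print(ease_in_out_cubic(0.25))  # 0.0625, well below 0.25: still accelerating
```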
def clamp(value, lower=0, upper=sys.maxsize): return max(lower, min(upper, value))
Clamp a value to the given range.
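The `max(lower, min(upper, value))` idiom is worth a quick demonstration, since the nesting order is easy to get backwards:

```python
import sys

def clamp(value, lower=0, upper=sys.maxsize):
    # min() caps the value from above, then max() lifts it from below.
    return max(lower, min(upper, value))

print(clamp(5, 0, 3))   # 3  (capped at upper)
print(clamp(-1, 0, 3))  # 0  (lifted to lower)
print(clamp(2, 0, 3))   # 2  (already in range, unchanged)
```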
def sync_release_files(self):
    release_files = []
    for release in self.releases.values():
        release_files.extend(release)
    downloaded_files = set()
    deferred_exception = None
    for release_file in release_files:
        try:
            downloaded_file = self.download_file(
                release_file["url"], release_file["digests"]["sha256"]
            )
            if downloaded_file:
                downloaded_files.add(
                    str(downloaded_file.relative_to(self.mirror.homedir))
                )
        except Exception as e:
            logger.exception(
                f"Continuing to next file after error downloading: "
                f"{release_file['url']}"
            )
            if not deferred_exception:
                deferred_exception = e
    if deferred_exception:
        raise deferred_exception
    self.mirror.altered_packages[self.name] = downloaded_files
Purge + download files returning files removed + added
def viewport_changed(self, screen_id, x, y, width, height):
    if not isinstance(screen_id, baseinteger):
        raise TypeError("screen_id can only be an instance of type baseinteger")
    if not isinstance(x, baseinteger):
        raise TypeError("x can only be an instance of type baseinteger")
    if not isinstance(y, baseinteger):
        raise TypeError("y can only be an instance of type baseinteger")
    if not isinstance(width, baseinteger):
        raise TypeError("width can only be an instance of type baseinteger")
    if not isinstance(height, baseinteger):
        raise TypeError("height can only be an instance of type baseinteger")
    self._call("viewportChanged", in_p=[screen_id, x, y, width, height])
Signals that the framebuffer window viewport has changed.

in screen_id of type int
    Monitor to take the screenshot from.
in x of type int
    Framebuffer x offset.
in y of type int
    Framebuffer y offset.
in width of type int
    Viewport width.
in height of type int
    Viewport height.

raises :class:`OleErrorInvalidarg`
    The specified viewport data is invalid.
def get_by_natural_key(self, *args):
    kwargs = self.natural_key_kwargs(*args)
    for name, rel_to in self.model.get_natural_key_info():
        if not rel_to:
            continue
        nested_key = extract_nested_key(kwargs, rel_to, name)
        if nested_key:
            try:
                kwargs[name] = rel_to.objects.get_by_natural_key(*nested_key)
            except rel_to.DoesNotExist:
                raise self.model.DoesNotExist()
        else:
            kwargs[name] = None
    return self.get(**kwargs)
Return the object corresponding to the provided natural key. (This is a generic implementation of the standard Django function)
def in_same_table(self):
    return self._tc.tbl is self._other_tc.tbl
True if both cells provided to constructor are in same table.
def trajectory(self):
    traj = np.zeros((2, self.times.size))
    for t, time in enumerate(self.times):
        traj[:, t] = self.center_of_mass(time)
    return traj
Calculates the center of mass for each time step and outputs an array.

Returns:
    numpy array of shape (2, times.size) containing the center-of-mass coordinates at each time step.
def get_as_integer(self, key):
    value = self.get(key)
    return IntegerConverter.to_integer(value)
Converts a map element into an integer or returns 0 if conversion is not possible.

:param key: an index of the element to get.
:return: integer value of the element or 0 if conversion is not supported.
def download_from_url(path, url):
    filename = url.split("/")[-1]
    found_file = find_file(path, filename, max_depth=0)
    if found_file is None:
        filename = os.path.join(path, filename)
        tf.logging.info("Downloading from %s to %s." % (url, filename))
        inprogress_filepath = filename + ".incomplete"
        inprogress_filepath, _ = urllib.request.urlretrieve(
            url, inprogress_filepath, reporthook=download_report_hook)
        print()
        tf.gfile.Rename(inprogress_filepath, filename)
        return filename
    else:
        tf.logging.info("Already downloaded: %s (at %s)." % (url, found_file))
        return found_file
Download content from a url.

Args:
    path: string directory where file will be downloaded
    url: string url

Returns:
    Full path to downloaded file
def probability_lt(self, x):
    if self.mean is None:
        return
    return normdist(x=x, mu=self.mean, sigma=self.standard_deviation)
Returns the probability of a random variable being less than the given value.
def start(self):
    if self.cf.link is not None:
        if self._added is False:
            self.create()
            logger.debug('First time block is started, add block')
        else:
            logger.debug('Block already registered, starting logging'
                         ' for id=%d', self.id)
            pk = CRTPPacket()
            pk.set_header(5, CHAN_SETTINGS)
            pk.data = (CMD_START_LOGGING, self.id, self.period)
            self.cf.send_packet(pk, expected_reply=(CMD_START_LOGGING, self.id))
Start the logging for this entry
def rsync(config_file, source, target, override_cluster_name, down):
    config = yaml.load(open(config_file).read())
    if override_cluster_name is not None:
        config["cluster_name"] = override_cluster_name
    config = _bootstrap_config(config)
    head_node = _get_head_node(
        config, config_file, override_cluster_name, create_if_needed=False)
    provider = get_node_provider(config["provider"], config["cluster_name"])
    try:
        updater = NodeUpdaterThread(
            node_id=head_node,
            provider_config=config["provider"],
            provider=provider,
            auth_config=config["auth"],
            cluster_name=config["cluster_name"],
            file_mounts=config["file_mounts"],
            initialization_commands=[],
            setup_commands=[],
            runtime_hash="",
        )
        if down:
            rsync = updater.rsync_down
        else:
            rsync = updater.rsync_up
        rsync(source, target, check_error=False)
    finally:
        provider.cleanup()
Rsyncs files.

Arguments:
    config_file: path to the cluster yaml
    source: source dir
    target: target dir
    override_cluster_name: set the name of the cluster
    down: whether we're syncing remote -> local
def decorate_function(self, name, decorator): self.functions[name] = decorator(self.functions[name])
Decorate function with given name with given decorator. :param str name: Name of the function. :param callable decorator: Decorator callback.
def which(program):
    def is_exe(_fpath):
        return os.path.isfile(_fpath) and os.access(_fpath, os.X_OK)

    fpath, fname = os.path.split(program)
    if fpath:
        if is_exe(program):
            return program
    else:
        for path in os.environ["PATH"].split(os.pathsep):
            exe_file = os.path.join(path, program)
            if is_exe(exe_file):
                return exe_file
    return None
Returns the path to an executable, or None if it can't be found.
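The function above mirrors the Unix `which` command: a bare name is searched along `PATH`, while a name containing a directory component is checked directly. A minimal self-contained demonstration (the negative case is the only one that is portable across machines):

```python
import os

def which(program):
    def is_exe(fpath):
        # Must exist as a regular file and carry the execute permission bit.
        return os.path.isfile(fpath) and os.access(fpath, os.X_OK)

    fpath, _ = os.path.split(program)
    if fpath:
        # Explicit path given: check it directly.
        if is_exe(program):
            return program
    else:
        # Bare name: try each PATH entry in order.
        for path in os.environ.get("PATH", "").split(os.pathsep):
            if is_exe(os.path.join(path, program)):
                return os.path.join(path, program)
    return None

# A name that almost certainly exists nowhere on PATH resolves to None.
print(which("no-such-binary-xyz-12345"))  # None
```

Note that `shutil.which` in the standard library provides the same behavior and is preferable in new code.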
def kernels_pull_cli(self, kernel, kernel_opt=None, path=None, metadata=False):
    kernel = kernel or kernel_opt
    effective_path = self.kernels_pull(
        kernel, path=path, metadata=metadata, quiet=False)
    if metadata:
        print('Source code and metadata downloaded to ' + effective_path)
    else:
        print('Source code downloaded to ' + effective_path)
client wrapper for kernels_pull
def logout(self):
    if self._logged_in is True:
        self.si.flush_cache()
        self.sc.sessionManager.Logout()
        self._logged_in = False
Logout of a vSphere server.
def transformer_relative():
    hparams = transformer_base()
    hparams.pos = None
    hparams.self_attention_type = "dot_product_relative"
    hparams.max_relative_position = 20
    return hparams
Use relative position embeddings instead of absolute position encodings.
def cm_json_to_graph(im_json):
    cmap_data = im_json['contact map']['map']
    graph = AGraph()
    edges = []
    for node_idx, node in enumerate(cmap_data):
        sites_in_node = []
        for site_idx, site in enumerate(node['node_sites']):
            site_key = (node_idx, site_idx)
            sites_in_node.append(site_key)
            graph.add_node(site_key,
                           label=site['site_name'],
                           style='filled',
                           shape='ellipse')
            if not site['site_type'] or not site['site_type'][0] == 'port':
                continue
            for port_link in site['site_type'][1]['port_links']:
                edge = (site_key, tuple(port_link))
                edges.append(edge)
        graph.add_subgraph(sites_in_node,
                           name='cluster_%s' % node['node_type'],
                           label=node['node_type'])
    for source, target in edges:
        graph.add_edge(source, target)
    return graph
Return a pygraphviz AGraph from Kappy's contact map JSON.

Parameters
----------
im_json : dict
    A JSON dict which contains a contact map generated by Kappy.

Returns
-------
graph : pygraphviz.AGraph
    A graph representing the contact map.
def close(self):
    if self._filename and self._fh:
        self._fh.close()
        self._fh = None
Close open file. Future asarray calls might fail.
def on_modified(self, event):
    if os.path.isdir(event.src_path):
        return
    logger.debug("file modified: %s", event.src_path)
    name = self.file_name(event)
    try:
        config = yaml.load(open(event.src_path))
        self.target_class.from_config(name, config)
    except Exception:
        logger.exception(
            "Error when loading updated config file %s", event.src_path,
        )
        return
    self.on_update(self.target_class, name, config)
Modified config file handler. If a config file is modified, the yaml contents are parsed and the new results are validated by the target class. Once validated, the new config is passed to the on_update callback.
def __get_registry_key(self, key):
    import winreg
    root = winreg.OpenKey(
        winreg.HKEY_CURRENT_USER,
        r'SOFTWARE\GSettings\org\gnucash\general',
        0, winreg.KEY_READ)
    [pathname, regtype] = winreg.QueryValueEx(root, key)
    winreg.CloseKey(root)
    return pathname
Read a GnuCash setting (such as the currency) from the Windows registry.
def with_subprocess(cls):
    def run_process(self, command, input=None):
        if isinstance(command, (tuple, list)):
            command = ' '.join('"%s"' % x for x in command)
        proc = subprocess.Popen(
            command,
            shell=True,
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE
        )
        out, err = proc.communicate(input=input)
        return proc.returncode, out.strip(), err.strip()

    cls.run_process = run_process
    return cls
A class decorator for Crontabber apps. This decorator gives the cron app a run_process method that executes a shell command in a subprocess and returns a (returncode, stdout, stderr) tuple with the output stripped.
def get_connections(self):
    con = []
    maxconn = self.max_connectivity
    for ii in range(0, maxconn.shape[0]):
        for jj in range(0, maxconn.shape[1]):
            if maxconn[ii][jj] != 0:
                dist = self.s.get_distance(ii, jj)
                con.append([ii, jj, dist])
    return con
Returns a list of site pairs that are Voronoi Neighbors, along with their real-space distances.
def get_class_name(class_key, classification_key):
    classification = definition(classification_key)
    for the_class in classification['classes']:
        if the_class.get('key') == class_key:
            return the_class.get('name', class_key)
    return class_key
Helper to get the class name from a class_key of a classification.

:param class_key: The key of the class.
:type class_key: str

:param classification_key: The key of a classification.
:type classification_key: str

:returns: The name of the class.
:rtype: str
def close(self, *args, **kwargs):
    if not self.__finalized:
        self._file.write('</cml>')
        self.__finalized = True
    super().close(*args, **kwargs)
Write the close tag of the MRV file and close the opened file.

:param force: force closing of an externally opened file or buffer
def info(self, callback=None, **kwargs):
    self.client.fetch(
        self.mk_req('', method='GET', **kwargs),
        callback=callback
    )
Get the basic info from the current cluster.
def create_group(self, name):
    url = 'rest/api/2/group'
    data = {'name': name}
    return self.post(url, data=data)
Create a group with the given name.

:param name: str
:return: New group params
def _unstack(self, unstacker_func, new_columns, n_rows, fill_value):
    unstacker = unstacker_func(self.values.T)
    new_items = unstacker.get_new_columns()
    new_placement = new_columns.get_indexer(new_items)
    new_values, mask = unstacker.get_new_values()

    mask = mask.any(0)
    new_values = new_values.T[mask]
    new_placement = new_placement[mask]

    blocks = [make_block(new_values, placement=new_placement)]
    return blocks, mask
Return a list of unstacked blocks of self.

Parameters
----------
unstacker_func : callable
    Partially applied unstacker.
new_columns : Index
    All columns of the unstacked BlockManager.
n_rows : int
    Only used in ExtensionBlock.unstack
fill_value : int
    Only used in ExtensionBlock.unstack

Returns
-------
blocks : list of Block
    New blocks of unstacked values.
mask : array_like of bool
    The mask of columns of `blocks` we should keep.
def update_keyjar(keyjar):
    for iss, kbl in keyjar.items():
        for kb in kbl:
            kb.update()
Go through the whole key jar, key bundle by key bundle and update them one by one. :param keyjar: The key jar to update
def flatten(nested_list):
    return_list = []
    for i in nested_list:
        if isinstance(i, list):
            return_list += flatten(i)
        else:
            return_list.append(i)
    return return_list
Converts a list of arbitrarily nested lists into a single flat list.
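The recursion bottoms out at non-list items, so nesting of any depth is handled. A quick demonstration:

```python
def flatten(nested_list):
    flat = []
    for item in nested_list:
        if isinstance(item, list):
            # Recurse into sublists; the recursive call returns a flat list.
            flat += flatten(item)
        else:
            flat.append(item)
    return flat

print(flatten([1, [2, [3, 4]], 5]))  # [1, 2, 3, 4, 5]
```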
def light_to_gl(light, transform, lightN):
    # convert color to 0.0 - 1.0 float
    gl_color = vector_to_gl(light.color.astype(np.float64) / 255.0)
    assert len(gl_color) == 4
    gl_position = vector_to_gl(transform[:3, 3])
    args = [(lightN, gl.GL_POSITION, gl_position),
            (lightN, gl.GL_SPECULAR, gl_color),
            (lightN, gl.GL_DIFFUSE, gl_color),
            (lightN, gl.GL_AMBIENT, gl_color)]
    return args
Convert trimesh.scene.lighting.Light objects into args for gl.glLightfv calls.

Parameters
--------------
light : trimesh.scene.lighting.Light
    Light object to be converted to GL
transform : (4, 4) float
    Transformation matrix of light
lightN : int
    Result of gl.GL_LIGHT0, gl.GL_LIGHT1, etc

Returns
--------------
multiarg : [tuple]
    List of args to pass to gl.glLightfv, e.g.:
    [gl.glLightfv(*a) for a in multiarg]
def atlas_peer_is_whitelisted(peer_hostport, peer_table=None):
    ret = None
    with AtlasPeerTableLocked(peer_table) as ptbl:
        if peer_hostport not in ptbl.keys():
            return None
        ret = ptbl[peer_hostport].get("whitelisted", False)
    return ret
Is a peer whitelisted? Returns the whitelist flag, or None if the peer is not in the table.
def date_range_filter(range_name):
    filter_days = list(filter(
        lambda time: time["label"] == range_name,
        settings.CUSTOM_SEARCH_TIME_PERIODS))
    num_days = filter_days[0]["days"] if len(filter_days) else None
    if num_days:
        dt = timedelta(num_days)
        start_time = timezone.now() - dt
        return Range(published={"gte": start_time})
    return MatchAll()
Create a filter from a named date range.
def get_defaults_dict(self) -> Dict: return deserializer.inventory.Defaults.serialize(self.defaults).dict()
Returns serialized dictionary of defaults from inventory
def best_assemblyfile(self):
    for sample in self.metadata:
        assembly_file = os.path.join(sample.general.spadesoutput,
                                     'contigs.fasta')
        if os.path.isfile(assembly_file):
            sample.general.bestassemblyfile = assembly_file
        else:
            sample.general.bestassemblyfile = 'NA'
        filteredfile = os.path.join(sample.general.outputdirectory,
                                    '{}.fasta'.format(sample.name))
        sample.general.filteredfile = filteredfile
Determine whether the contigs.fasta output file from SPAdes is present. If not, set the .bestassemblyfile attribute to 'NA'.
def _parse_apps_to_ignore(self):
    apps_to_ignore = set()
    section_title = 'applications_to_ignore'
    if self._parser.has_section(section_title):
        apps_to_ignore = set(self._parser.options(section_title))
    return apps_to_ignore
Parse the applications to ignore in the config. Returns: set
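Reading a section's option names as a set is a common `configparser` idiom. A self-contained sketch with a hypothetical config (the `applications_to_ignore` section name matches the code above; `allow_no_value=True` is needed so bare keys without `= value` parse):

```python
import configparser

parser = configparser.ConfigParser(allow_no_value=True)
parser.read_string("""
[applications_to_ignore]
slack
spotify
""")

apps_to_ignore = set()
if parser.has_section('applications_to_ignore'):
    # options() returns the (lowercased) key names in the section.
    apps_to_ignore = set(parser.options('applications_to_ignore'))

print(sorted(apps_to_ignore))  # ['slack', 'spotify']
```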
def merge_obs(self):
    for model_type in self.model_types:
        self.matched_forecasts[model_type] = {}
        for model_name in self.model_names[model_type]:
            self.matched_forecasts[model_type][model_name] = pd.merge(
                self.forecasts[model_type][model_name], self.obs,
                right_on="Step_ID", how="left", left_index=True)
Match forecasts and observations.
def replace_rep(t: str) -> str:
    "Replace repetitions at the character level in `t`."
    def _replace_rep(m: Collection[str]) -> str:
        c, cc = m.groups()
        return f' {TK_REP} {len(cc)+1} {c} '
    re_rep = re.compile(r'(\S)(\1{3,})')
    return re_rep.sub(_replace_rep, t)
Replace repetitions at the character level in `t`.
def is_empty(self):
    return all(date.is_empty() for date in [self.created, self.issued]) \
        and not self.publisher
Returns True if all child date elements present are empty and other nodes are not set. Returns False if any child date elements are not empty or other nodes are set.
def update_w3(self, w3: Web3) -> "Package":
    validate_w3_instance(w3)
    return Package(self.manifest, w3, self.uri)
Returns a new instance of `Package` containing the same manifest, but connected to a different web3 instance.

.. doctest::

   >>> new_w3 = Web3(Web3.EthereumTesterProvider())
   >>> NewPackage = OwnedPackage.update_w3(new_w3)
   >>> assert NewPackage.w3 == new_w3
   >>> assert OwnedPackage.manifest == NewPackage.manifest
def getConfiguration(self):
    configuration = c_int()
    mayRaiseUSBError(libusb1.libusb_get_configuration(
        self.__handle, byref(configuration),
    ))
    return configuration.value
Get the current configuration number for this device.
def validate_api_response(schema, raw_response, request_method='get',
                          raw_request=None):
    request = None
    if raw_request is not None:
        request = normalize_request(raw_request)
    response = None
    if raw_response is not None:
        response = normalize_response(raw_response, request=request)
    if response is not None:
        validate_response(
            response=response,
            request_method=request_method,
            schema=schema
        )
Validate the response of an api call against a swagger schema.
def read_file(rel_path, paths=None, raw=False, as_list=False, as_iter=False,
              *args, **kwargs):
    if not rel_path:
        raise ValueError("rel_path can not be null!")
    paths = str2list(paths)
    paths.extend([STATIC_DIR, os.path.join(SRC_DIR, 'static')])
    paths = [os.path.expanduser(p) for p in set(paths)]
    for path in paths:
        path = os.path.join(path, rel_path)
        logger.debug("trying to read: %s " % path)
        if os.path.exists(path):
            break
    else:
        raise IOError("path %s does not exist!" % rel_path)
    args = args if args else ['rU']
    fd = open(path, *args, **kwargs)
    if raw:
        return fd
    if as_iter:
        return read_in_chunks(fd)
    else:
        fd_lines = fd.readlines()
        if as_list:
            return fd_lines
        else:
            return ''.join(fd_lines)
Find a file that lives somewhere within a set of paths and return its contents. Default paths include the static directories.
def iter_token_lines(tokenlist):
    line = []
    for token, c in explode_tokens(tokenlist):
        line.append((token, c))
        if c == '\n':
            yield line
            line = []
    yield line
Iterator that yields tokenlists for each line.
def http_list(self, path, query_data={}, as_list=None, **kwargs):
    as_list = True if as_list is None else as_list
    get_all = kwargs.pop('all', False)
    url = self._build_url(path)
    if get_all is True:
        return list(GitlabList(self, url, query_data, **kwargs))
    if 'page' in kwargs or as_list is True:
        return list(GitlabList(self, url, query_data, get_next=False,
                               **kwargs))
    return GitlabList(self, url, query_data, **kwargs)
Make a GET request to the Gitlab server for list-oriented queries.

Args:
    path (str): Path or full URL to query ('/projects' or
        'http://whatever/v4/api/projects')
    query_data (dict): Data to send as query parameters
    **kwargs: Extra options to send to the server (e.g. sudo, page, per_page)

Returns:
    list: A list of the objects returned by the server. If `as_list` is
    False and no pagination-related arguments (`page`, `per_page`, `all`)
    are defined then a GitlabList object (generator) is returned instead.
    This object will make API calls when needed to fetch the next items
    from the server.

Raises:
    GitlabHttpError: When the return code is not 2xx
    GitlabParsingError: If the json data could not be parsed
def temp_to_spmatrix(self, ty):
    assert ty in ('jac0', 'jac')
    jac0s = ['Fx0', 'Fy0', 'Gx0', 'Gy0']
    jacs = ['Fx', 'Fy', 'Gx', 'Gy']
    if ty == 'jac0':
        todo = jac0s
    elif ty == 'jac':
        todo = jacs
    for m in todo:
        self.__dict__[m] = spmatrix(self._temp[m]['V'],
                                    self._temp[m]['I'],
                                    self._temp[m]['J'],
                                    self.get_size(m), 'd')
        if ty == 'jac':
            self.__dict__[m] += self.__dict__[m + '0']
    self.apply_set(ty)
Convert Jacobian tuples to matrices.

:param ty: name of the matrices to convert, in ``('jac0', 'jac')``
:return: None
def _get_color_size(self, style):
    color = style.get("color", "b")
    size = style.get("size", 7)
    return color, size
Get color and size from a style dict
def log_value(self, tag, val, desc=''):
    logging.info('%s (%s): %.4f' % (desc, tag, val))
    self.summary.value.add(tag=tag, simple_value=val)
Log values to standard output and Tensorflow summary.

:param tag: summary tag.
:param val: (required float or numpy array) value to be logged.
:param desc: (optional) additional description to be printed.
def read(self, count):
    if not self.data:
        raise UnpackException(None, count, 0)
    buff = self.data.read(count)
    if len(buff) < count:
        raise UnpackException(None, count, len(buff))
    return buff
Read count bytes from the unpacker and return them. Raises an UnpackException if there is not enough data in the underlying stream.
def input(channel):
    _check_configured(channel)
    pin = get_gpio_pin(_mode, channel)
    return sysfs.input(pin)
Read the value of a GPIO pin.

:param channel: the channel based on the numbering system you have specified
    (:py:attr:`GPIO.BOARD`, :py:attr:`GPIO.BCM` or :py:attr:`GPIO.SUNXI`).
:returns: either :py:attr:`0` / :py:attr:`GPIO.LOW` / :py:attr:`False`
    or :py:attr:`1` / :py:attr:`GPIO.HIGH` / :py:attr:`True`.
def RegisterPlugin(cls, plugin_class):
    plugin_name = plugin_class.NAME.lower()
    if plugin_name in cls._plugin_classes:
        raise KeyError((
            'Plugin class already set for name: {0:s}.').format(
                plugin_class.NAME))
    cls._plugin_classes[plugin_name] = plugin_class
Registers a plugin class.

The plugin classes are identified based on their lower case name.

Args:
    plugin_class (type): class of the plugin.

Raises:
    KeyError: if plugin class is already set for the corresponding name.
def dbus_readBytesTwoFDs(self, fd1, fd2, byte_count):
    result = bytearray()
    for fd in (fd1, fd2):
        f = os.fdopen(fd, 'rb')
        result.extend(f.read(byte_count))
        f.close()
    return result
Reads byte_count bytes from each of fd1 and fd2. Returns the concatenation.
def get(self, line_number):
    if line_number not in self._get_cache:
        self._get_cache[line_number] = self._get(line_number)
    return self._get_cache[line_number]
Return the needle positions or None.

:param int line_number: the number of the line
:rtype: list
:return: the needle positions for a specific line specified by
    :paramref:`line_number`, or :obj:`None` if none were given
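The method above is a standard per-key memoization pattern: the expensive `_get` lookup runs at most once per line number. A minimal self-contained sketch (the class name and the fake `_get` body are illustrative, not from the original):

```python
class LineCache:
    """Minimal sketch of the per-line caching pattern above."""

    def __init__(self):
        self._get_cache = {}
        self.calls = 0  # instrumentation to show the cache working

    def _get(self, line_number):
        self.calls += 1  # stand-in for an expensive computation
        return [line_number, line_number + 1]

    def get(self, line_number):
        if line_number not in self._get_cache:
            self._get_cache[line_number] = self._get(line_number)
        return self._get_cache[line_number]

cache = LineCache()
cache.get(3)
cache.get(3)
print(cache.calls)  # 1 -- the second call is served from the cache
```

Note that `line_number not in self._get_cache` (rather than a truthiness test on the cached value) correctly caches falsy results such as `None` or `[]`.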
def _check_response_for_errors(self, response):
    try:
        doc = minidom.parseString(
            _string(response).replace("opensearch:", ""))
    except Exception as e:
        raise MalformedResponseError(self.network, e)

    e = doc.getElementsByTagName("lfm")[0]
    if e.getAttribute("status") != "ok":
        e = doc.getElementsByTagName("error")[0]
        status = e.getAttribute("code")
        details = e.firstChild.data.strip()
        raise WSError(self.network, status, details)
Checks the response for errors and raises one if any exists.
def get(self, sid):
    return InteractionContext(
        self._version,
        service_sid=self._solution['service_sid'],
        session_sid=self._solution['session_sid'],
        sid=sid,
    )
Constructs an InteractionContext.

:param sid: The unique string that identifies the resource

:returns: twilio.rest.proxy.v1.service.session.interaction.InteractionContext
:rtype: twilio.rest.proxy.v1.service.session.interaction.InteractionContext
def write(self, section, option, value):
    self.config.read(self.filepath)
    string = tidy_headers._parse_item.item2string(value, sep=", ")
    self.config.set(section, option, string)
    with open(self.filepath, "w") as f:
        self.config.write(f)
Write to file.

Parameters
----------
section : string
    Section.
option : string
    Option.
value : string
    Value.
def assign_contributor_permissions(obj, contributor=None):
    for permission in get_all_perms(obj):
        assign_perm(permission,
                    contributor if contributor else obj.contributor,
                    obj)
Assign all permissions to object's contributor.
def individuals(self, ind_ids=None):
    query = self.query(Individual)
    if ind_ids:
        query = query.filter(Individual.ind_id.in_(ind_ids))
    return query
Fetch all individuals from the database.
def put_member(self, name: InstanceName, value: Value,
               raw: bool = False) -> "InstanceNode":
    if not isinstance(self.value, ObjectValue):
        raise InstanceValueError(self.json_pointer(), "member of non-object")
    csn = self._member_schema_node(name)
    newval = self.value.copy()
    newval[name] = csn.from_raw(value, self.json_pointer()) if raw else value
    return self._copy(newval)._member(name)
Return receiver's member with a new value.

If the member is permitted by the schema but doesn't exist, it is created.

Args:
    name: Instance name of the member.
    value: New value of the member.
    raw: Flag to be set if `value` is raw.

Raises:
    NonexistentSchemaNode: If member `name` is not permitted by the schema.
    InstanceValueError: If the receiver's value is not an object.
def elcm_profile_get_versions(irmc_info):
    resp = elcm_request(irmc_info,
                        method='GET',
                        path=URL_PATH_PROFILE_MGMT + 'version')
    if resp.status_code == 200:
        return _parse_elcm_response_body_as_json(resp)
    else:
        raise scci.SCCIClientError(('Failed to get profile versions with '
                                    'error code %s' % resp.status_code))
Send an eLCM request to get profile versions.

:param irmc_info: node info
:returns: dict object of profiles if succeed
    {
        "Server": {
            "@Version": "1.01",
            "AdapterConfigIrmc": {"@Version": "1.00"},
            "HWConfigurationIrmc": {"@Version": "1.00"},
            "SystemConfig": {
                "IrmcConfig": {"@Version": "1.02"},
                "BiosConfig": {"@Version": "1.02"}
            }
        }
    }
:raises: SCCIClientError if SCCI failed
def _reduce_boolean_pair(self, config_dict, key1, key2):
    if key1 in config_dict and key2 in config_dict \
            and config_dict[key1] == config_dict[key2]:
        msg = 'Boolean pair, %s and %s, have same value: %s. If both ' \
              'are given to this method, they cannot be the same, as this ' \
              'method cannot decide which one should be True.' \
              % (key1, key2, config_dict[key1])
        raise BooleansToReduceHaveSameValue(msg)
    elif key1 in config_dict and not config_dict[key1]:
        config_dict[key2] = True
        config_dict.pop(key1)
    elif key2 in config_dict and not config_dict[key2]:
        config_dict[key1] = True
        config_dict.pop(key2)
    return config_dict
Ensure only one key with a boolean value is present in dict.

:param config_dict: dict -- dictionary of config or kwargs
:param key1: string -- first key name
:param key2: string -- second key name
:raises: BooleansToReduceHaveSameValue
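The reduction logic is easiest to see with concrete key names. A self-contained sketch as a free function, with hypothetical `quiet`/`verbose` keys standing in for the real boolean pair:

```python
class BooleansToReduceHaveSameValue(Exception):
    pass

def reduce_boolean_pair(config_dict, key1, key2):
    # If both keys are present with the same value, there is no way to
    # decide which one should win.
    if key1 in config_dict and key2 in config_dict \
            and config_dict[key1] == config_dict[key2]:
        raise BooleansToReduceHaveSameValue(
            '%s and %s have the same value' % (key1, key2))
    elif key1 in config_dict and not config_dict[key1]:
        # key1=False is equivalent to key2=True; keep only key2.
        config_dict[key2] = True
        config_dict.pop(key1)
    elif key2 in config_dict and not config_dict[key2]:
        config_dict[key1] = True
        config_dict.pop(key2)
    return config_dict

print(reduce_boolean_pair({'quiet': False}, 'quiet', 'verbose'))
# {'verbose': True}
```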
def get_metrics(model: Model, total_loss: float, num_batches: int,
                reset: bool = False) -> Dict[str, float]:
    metrics = model.get_metrics(reset=reset)
    metrics["loss"] = float(total_loss / num_batches) if num_batches > 0 else 0.0
    return metrics
Gets the metrics but sets ``"loss"`` to the total loss divided by the ``num_batches`` so that the ``"loss"`` metric is "average loss per batch".
def begin_table(self, column_count):
    self.table_columns = column_count
    self.table_columns_left = 0
    self.write('<table>')
Begins a table with the given 'column_count', required to automatically create the right amount of columns when adding items to the rows
def _get_section_name(cls, parser):
    for section_name in cls.POSSIBLE_SECTION_NAMES:
        if parser.has_section(section_name):
            return section_name
    return None
Return the name of the first section from POSSIBLE_SECTION_NAMES that is present in the parser, or None if no relevant section is found.
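With the standard library's `configparser`, the lookup works like this (the candidate section names here are made up for illustration):

```python
import configparser

POSSIBLE_SECTION_NAMES = ('tool:example', 'example')  # hypothetical names

def get_section_name(parser):
    # Return the first candidate section the config file actually defines.
    for section_name in POSSIBLE_SECTION_NAMES:
        if parser.has_section(section_name):
            return section_name
    return None
```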
def _get_subclass_list_for_enums(self, classname, namespace): if self._repo_lite: return NocaseDict({classname: classname}) if not self._class_exists(classname, namespace): raise CIMError( CIM_ERR_INVALID_CLASS, _format("Class {0!A} not found in namespace {1!A}.", classname, namespace)) if not self.classes: return NocaseDict() clnslist = self._get_subclass_names(classname, namespace, True) clnsdict = NocaseDict() for cln in clnslist: clnsdict[cln] = cln clnsdict[classname] = classname return clnsdict
Get the class name list (i.e. classname plus the names of its subclasses) for the enumerate-instance methods. If conn.lite, returns only classname and no subclasses. Returns a NocaseDict where only the keys are significant; this allows case-insensitive matches of the names with Python "for cln in clns".
async def delete(self, device, remove=True): device = self._find_device(device) if not self.is_handleable(device) or not device.is_loop: self._log.warn(_('not deleting {0}: unhandled device', device)) return False if remove: await self.auto_remove(device, force=True) self._log.debug(_('deleting {0}', device)) await device.delete() self._log.info(_('deleted {0}', device)) return True
Detach the loop device. :param device: device object, block device path or mount path :param bool remove: whether to unmount the partition etc. :returns: whether the loop device is deleted
def get_args(parser): args = vars(parser.parse_args()).items() return {key: val for key, val in args if not isinstance(val, NotSet)}
Converts arguments extracted from a parser to a dict, and will dismiss arguments which default to NOT_SET. :param parser: an ``argparse.ArgumentParser`` instance. :type parser: argparse.ArgumentParser :return: Dictionary with the configs found in the parsed CLI arguments. :rtype: dict
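The sentinel-filtering idea can be sketched end to end; `NotSet`/`NOT_SET` are local stand-ins for the library's sentinel, and an `argv` parameter is added so the sketch is testable without touching `sys.argv`:

```python
import argparse

class NotSet:
    """Sentinel default marking 'no value provided on the CLI'."""

NOT_SET = NotSet()

def get_args(parser, argv=None):
    # argv=None falls back to sys.argv; only options the user actually
    # supplied (i.e. not still carrying the sentinel) survive.
    args = vars(parser.parse_args(argv)).items()
    return {key: val for key, val in args if not isinstance(val, NotSet)}
```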
def restore(self, image): if isinstance(image, Image): image = image.id return self.act(type='restore', image=image)
Restore the droplet to the specified backup image A Droplet restoration will rebuild an image using a backup image. The image ID that is passed in must be a backup of the current Droplet instance. The operation will leave any embedded SSH keys intact. [APIDocs]_ :param image: an image ID, an image slug, or an `Image` object representing a backup image of the droplet :type image: integer, string, or `Image` :return: an `Action` representing the in-progress operation on the droplet :rtype: Action :raises DOAPIError: if the API endpoint replies with an error
def get_files_changed(repository, review_id): repository.git.fetch([next(iter(repository.remotes)), review_id]) files_changed = repository.git.diff_tree(["--no-commit-id", "--name-only", "-r", "FETCH_HEAD"]).splitlines() print("Found {} files changed".format(len(files_changed))) return files_changed
Get a list of files changed compared to the given review. Compares against current directory. :param repository: Git repository. Used to get remote. - By default uses first remote in list. :param review_id: Gerrit review ID. :return: List of file paths relative to current directory.
def start_sequence(): print('getting started!...') t = threading.Thread(target=my_application, kwargs=dict(api=api)) t.daemon = True t.start() return 'ok, starting webhook to: %s' % (ngrok_url,)
Start the demo sequence We must start this thread in the same process as the webserver to be certain we are sharing the api instance in memory. (ideally in future the async id database will be capable of being more than just a dictionary)
def get_style_defs(self, arg=''): cp = self.commandprefix styles = [] for name, definition in iteritems(self.cmd2def): styles.append(r'\expandafter\def\csname %s@tok@%s\endcsname{%s}' % (cp, name, definition)) return STYLE_TEMPLATE % {'cp': self.commandprefix, 'styles': '\n'.join(styles)}
Return the command sequences needed to define the commands used to format text in the verbatim environment. ``arg`` is ignored.
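A standalone sketch of the `\csname` generation against plain dict inputs; `STYLE_TEMPLATE` here is a trivial stand-in for the formatter's real template:

```python
STYLE_TEMPLATE = '%(styles)s'  # stand-in for the real LaTeX template

def get_style_defs(commandprefix, cmd2def):
    # Emit one \csname token-command definition per style, mirroring
    # the formatter method above.
    styles = []
    for name, definition in cmd2def.items():
        styles.append(r'\expandafter\def\csname %s@tok@%s\endcsname{%s}'
                      % (commandprefix, name, definition))
    return STYLE_TEMPLATE % {'styles': '\n'.join(styles)}
```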
@defer.inlineCallbacks def from_protocol(proto): cfg = TorConfig(control=proto) yield cfg.post_bootstrap defer.returnValue(cfg)
This creates and returns a ready-to-go TorConfig instance from the given protocol, which should be an instance of TorControlProtocol.
def set_vertices(self, verts=None, indexed=None, reset_normals=True): if indexed is None: if verts is not None: self._vertices = verts self._vertices_indexed_by_faces = None elif indexed == 'faces': self._vertices = None if verts is not None: self._vertices_indexed_by_faces = verts else: raise Exception("Invalid indexing mode. Accepts: None, 'faces'") if reset_normals: self.reset_normals()
Set the mesh vertices Parameters ---------- verts : ndarray | None The array (Nv, 3) of vertex coordinates. indexed : str | None If indexed=='faces', then the data must have shape (Nf, 3, 3) and is assumed to be already indexed as a list of faces. This will cause any pre-existing normal vectors to be cleared unless reset_normals=False. reset_normals : bool If True, reset the normals.
def run(self): LOGGER.info('%s v%s started', self.APPNAME, self.VERSION) self.setup() while not any([self.is_stopping, self.is_stopped]): self.set_state(self.STATE_SLEEPING) try: signum = self.pending_signals.get(True, self.wake_interval) except queue.Empty: pass else: self.process_signal(signum) if any([self.is_stopping, self.is_stopped]): break self.set_state(self.STATE_ACTIVE) self.process()
The core method for starting the application. Will set up logging, toggle the runtime state flag, block on loop, then call shutdown. Redefine this method if you intend to use an IO Loop or some other long running process.
def add_prefix(self, name, stmt): if self.gg_level: return name pref, colon, local = name.partition(":") if colon: return (self.module_prefixes[stmt.i_module.i_prefixes[pref][0]] + ":" + local) else: return self.prefix_stack[-1] + ":" + pref
Return `name` prepended with correct prefix. If the name is already prefixed, the prefix may be translated to the value obtained from `self.module_prefixes`. Unmodified `name` is returned if we are inside a global grouping.
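The prefix handling can be sketched with a flat translation map in place of the statement/module machinery (the simplified signature is an assumption for illustration):

```python
def add_prefix(name, module_prefixes, default_prefix):
    # Already-prefixed names get their prefix translated via the map;
    # bare names are qualified with the current default prefix.
    pref, colon, local = name.partition(':')
    if colon:
        return module_prefixes[pref] + ':' + local
    return default_prefix + ':' + pref
```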
def _removeSegment(self, segment, preserveCurve, **kwargs): segment = self.segments[segment] for point in segment.points: self.removePoint(point, preserveCurve)
segment will be a valid segment index. preserveCurve will be a boolean. Subclasses may override this method.
def _add_rr(self, name, ttl, rd, deleting=None, section=None): if section is None: section = self.authority covers = rd.covers() rrset = self.find_rrset(section, name, self.zone_rdclass, rd.rdtype, covers, deleting, True, True) rrset.add(rd, ttl)
Add a single RR to the update section.
def parse_args(): parser = argparse.ArgumentParser( description="Build a Sphinx documentation site for an EUPS stack, " "such as pipelines.lsst.io.", epilog="Version {0}".format(__version__) ) parser.add_argument( '-d', '--dir', dest='root_project_dir', help="Root Sphinx project directory") parser.add_argument( '-v', '--verbose', dest='verbose', action='store_true', default=False, help='Enable Verbose output (debug level logging)' ) return parser.parse_args()
Create an argument parser for the ``build-stack-docs`` program. Returns ------- args : `argparse.Namespace` Parsed argument object.
def _spawn_producer(f, port, addr='tcp://127.0.0.1'): process = Process(target=_producer_wrapper, args=(f, port, addr)) process.start() return process
Start a process that sends results on a PUSH socket. Parameters ---------- f : callable Callable that takes a single argument, a handle for a ZeroMQ PUSH socket. Must be picklable. Returns ------- process : multiprocessing.Process The process handle of the created producer process.
def parse_csv(file_path: str, entrez_id_header, log_fold_change_header, adjusted_p_value_header, entrez_delimiter, base_mean_header=None, sep=",") -> List[Gene]: logger.info("In parse_csv()") df = pd.read_csv(file_path, sep=sep) return handle_dataframe( df, entrez_id_name=entrez_id_header, log2_fold_change_name=log_fold_change_header, adjusted_p_value_name=adjusted_p_value_header, entrez_delimiter=entrez_delimiter, base_mean=base_mean_header, )
Read a csv file on differential expression values as Gene objects. :param str file_path: The path to the differential expression file to be parsed. :param entrez_id_header: Name of the column holding Entrez gene ids. :param log_fold_change_header: Name of the log2 fold change column. :param adjusted_p_value_header: Name of the adjusted p-value column. :param entrez_delimiter: Delimiter between multiple Entrez ids in one cell. :param base_mean_header: Optional name of the base mean column. :param sep: Column separator of the csv file. :return list: A list of Gene objects.
def list_user_permissions(self, name): return self._api_get('/api/users/{0}/permissions'.format( urllib.parse.quote_plus(name) ))
A list of all permissions for a given user. :param name: The user's name :type name: str
def FloatStringToFloat(float_string, problems=None): # Only plain decimal notation is considered valid; report the problem # before attempting the conversion, which may itself raise ValueError # for unparseable input. (A separate hex check is unnecessary: float() # already rejects strings like "0x10".) match = re.match(r"^[+-]?\d+(\.\d+)?$", float_string) if not match and problems is not None: problems.InvalidFloatValue(float_string) return float(float_string)
Convert a float as a string to a float or raise an exception
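The validation idea, stripped of the problem-reporter, is just a strict regex gate in front of `float()`:

```python
import re

def float_string_to_float(float_string):
    # Accept only plain decimal notation: optional sign, digits,
    # optional fractional part.  Rejects '1e5', 'inf', '0x10', etc.
    if not re.match(r"^[+-]?\d+(\.\d+)?$", float_string):
        raise ValueError('invalid float string: %r' % float_string)
    return float(float_string)
```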
def with_output(verbosity=1): def make_wrapper(func): @wraps(func) def wrapper(*args, **kwargs): configure_output(verbosity=verbosity) return func(*args, **kwargs) return wrapper return make_wrapper
Decorator that configures output verbosity.
def print_boards(hwpack='arduino', verbose=False): if verbose: pp(boards(hwpack)) else: print('\n'.join(board_names(hwpack)))
print boards from boards.txt.
def section(title, element_list): sect = { 'Type': 'Section', 'Title': title, } if isinstance(element_list, list): sect['Elements'] = element_list else: sect['Elements'] = [element_list] return sect
Returns a dictionary representing a new section. Sections contain a list of elements that are displayed separately from the global elements on the page. Args: title: The title of the section to be displayed element_list: The list of elements to display within the section Returns: A dictionary with metadata specifying that it is to be rendered as a section containing multiple elements
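The wrapping behaviour is easy to exercise directly; this is the same logic condensed:

```python
def section(title, element_list):
    # Single elements are wrapped in a list so 'Elements' is uniform.
    sect = {'Type': 'Section', 'Title': title}
    sect['Elements'] = (element_list if isinstance(element_list, list)
                        else [element_list])
    return sect
```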
def update_parent_directory_number(self, parent_dir_num): if not self._initialized: raise pycdlibexception.PyCdlibInternalError('Path Table Record not yet initialized') self.parent_directory_num = parent_dir_num
A method to update the parent directory number for this Path Table Record from the directory record. Parameters: parent_dir_num - The new parent directory number to assign to this PTR. Returns: Nothing.
def send_response(self, request, result=None, error=None): message = self._version.create_response(request, result, error) self.send_message(message)
Respond to a JSON-RPC method call. This is a response to the message in *request*. If *error* is not provided, then this is a successful response, and the value in *result*, which may be ``None``, is passed back to the client. If *error* is provided and not ``None``, then an error is sent back. In this case *error* must be a dictionary as specified by the JSON-RPC spec.
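The response construction delegated to `create_response` might look roughly like this for JSON-RPC 2.0 (a sketch, not the library's actual implementation):

```python
def create_response(request, result=None, error=None):
    # Build a JSON-RPC 2.0 response echoing the request id; exactly one
    # of 'result' / 'error' appears, per the spec.
    message = {'jsonrpc': '2.0', 'id': request['id']}
    if error is not None:
        message['error'] = error
    else:
        message['result'] = result
    return message
```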
def AddEnvironmentVariable(self, environment_variable): name = environment_variable.name.upper() if name in self._environment_variables: raise KeyError('Environment variable: {0:s} already exists.'.format( environment_variable.name)) self._environment_variables[name] = environment_variable
Adds an environment variable. Args: environment_variable (EnvironmentVariableArtifact): environment variable artifact. Raises: KeyError: if the environment variable already exists.
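The case-insensitive, duplicate-rejecting registry can be sketched on its own; the class wrapper and `value` parameter here are illustrative scaffolding:

```python
class EnvVarRegistry:
    """Registry keyed by upper-cased environment variable name."""

    def __init__(self):
        self._environment_variables = {}

    def add(self, name, value):
        key = name.upper()
        # Names are compared case-insensitively; adding the same
        # variable twice (in any casing) is an error.
        if key in self._environment_variables:
            raise KeyError(
                'Environment variable: {0:s} already exists.'.format(name))
        self._environment_variables[key] = value
```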
def gone_online(stream): while True: packet = yield from stream.get() session_id = packet.get('session_key') if not session_id: continue user_owner = get_user_from_session(session_id) if not user_owner: continue logger.debug('User ' + user_owner.username + ' gone online') online_opponents = list(filter(lambda x: x[1] == user_owner.username, ws_connections)) online_opponents_sockets = [ws_connections[i] for i in online_opponents] yield from fanout_message(online_opponents_sockets, {'type': 'gone-online', 'usernames': [user_owner.username]})
Distributes the user's online status to every user they have a dialog with