code: string, lengths 51–2.38k
docstring: string, lengths 4–15.2k
def _inner_take_over_or_update(self, full_values=None, current_values=None, value_indices=None): for key in current_values.keys(): if value_indices is not None and key in value_indices: index = value_indices[key] else: index = slice(None) if ke...
This is for automatic updates of values in the inner loop of missing data handling. Both arguments are dictionaries and the values in full_values will be updated by the current_gradients. If a key from current_values does not exist in full_values, it will be initialized to the value in ...
def _lock(self):
    lockfile = '{}.lock'.format(self._get_cookie_file())
    safe_mkdir_for(lockfile)
    return OwnerPrintingInterProcessFileLock(lockfile)
An identity-keyed inter-process lock around the cookie file.
def copy(self, object_version=None, key=None):
    return ObjectVersionTag.create(
        self.object_version if object_version is None else object_version,
        key or self.key,
        self.value
    )
Copy a tag to a given object version. :param object_version: The object version instance to copy the tag to. Default: current object version. :param key: Key of destination tag. Default: current tag key. :return: The copied object version tag.
def apply_clicked(self, button):
    if isinstance(self.model.state, LibraryState):
        return
    self.set_script_text(self.view.get_text())
Triggered when the Apply-Shortcut in the editor is triggered.
def int(self, length, name, value=None, align=None):
    self._add_field(Int(length, name, value, align=align))
Add a signed integer to the template. `length` is given in bytes and `value` is optional. `align` can be used to align the field to a longer byte length. Signed integers use two's complement with bits numbered in big-endian order. Examples: | int | 2 | foo | | int | 2 | foo | 42 | ...
def example_stats(filename): nd = results.load_nd_from_pickle(filename=filename) nodes_df, edges_df = nd.to_dataframe() stats = results.calculate_mvgd_stats(nd) stations = nodes_df[nodes_df['type'] == 'LV Station'] f, axarr = plt.subplots(2, sharex=True) f.suptitle("Peak load (top) / peak genera...
Obtain statistics from the created grid topology. Prints some statistical numbers and produces exemplary figures.
def RGB_color_picker(obj): digest = hashlib.sha384(str(obj).encode('utf-8')).hexdigest() subsize = int(len(digest) / 3) splitted_digest = [digest[i * subsize: (i + 1) * subsize] for i in range(3)] max_value = float(int("f" * subsize, 16)) components = ( int(d, 16) ...
Build a color representation from the string representation of an object. This allows one to quickly get a color from some data, with the additional benefit that the color will be the same as long as the (string representation of the) data is the same:: >>> from colour import RGB_color_picker, Color ...
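A minimal, self-contained sketch of the digest-to-color idea (the function name `color_from_object` is hypothetical, not the library's API): split a SHA-384 hex digest into three equal chunks and normalize each chunk to [0, 1].

```python
import hashlib

def color_from_object(obj):
    # Hash the object's string form and split the 96-char hex digest
    # into three 32-char chunks, one per RGB component.
    digest = hashlib.sha384(str(obj).encode("utf-8")).hexdigest()
    chunk = len(digest) // 3
    max_value = float(int("f" * chunk, 16))
    return tuple(
        int(digest[i * chunk:(i + 1) * chunk], 16) / max_value
        for i in range(3)
    )
```

The same input always maps to the same color, which is the property the docstring advertises.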
def subcommand(self, description='', arguments={}): def decorator(func): self.register_subparser( func, func.__name__.replace('_', '-'), description=description, arguments=arguments, ) return func return ...
Decorator for quickly adding subcommands to the omnic CLI
def experiments_predictions_upsert_property(self, experiment_id, run_id, properties):
    if self.experiments_predictions_get(experiment_id, run_id) is None:
        return None
    return self.predictions.upsert_object_property(run_id, properties)
Upsert a property of a prediction for an experiment. Raises ValueError if the given property dictionary results in an illegal operation. Parameters ---------- experiment_id : string Unique experiment identifier run_id : string Unique model run identifi...
def add(self, piece_uid, index): if self.occupancy[index]: raise OccupiedPosition if self.exposed_territory[index]: raise VulnerablePosition klass = PIECE_CLASSES[piece_uid] piece = klass(self, index) territory = piece.territory for i in self.index...
Add a piece to the board at the provided linear position.
def import_keybase(useropt): public_key = None u_bits = useropt.split(':') username = u_bits[0] if len(u_bits) == 1: public_key = cryptorito.key_from_keybase(username) else: fingerprint = u_bits[1] public_key = cryptorito.key_from_keybase(username, fingerprint) if cryptor...
Imports a public GPG key from Keybase
def blocks(self, *args, **kwargs):
    return Stream(blocks(iter(self), *args, **kwargs))
Interface to apply audiolazy.blocks directly in a stream, returning another stream. Use keyword args.
def get_scoped_variable_from_name(self, name):
    for scoped_variable_id, scoped_variable in self.scoped_variables.items():
        if scoped_variable.name == name:
            return scoped_variable_id
    raise AttributeError("Name %s is not in scoped_variables dictionary" % name)
Get the scoped variable for a unique name :param name: the unique name of the scoped variable :return: the scoped variable specified by the name :raises exceptions.AttributeError: if the name is not in the scoped_variables dictionary
def get_public_ip_validator(): from msrestazure.tools import is_valid_resource_id, resource_id def simple_validator(cmd, namespace): if namespace.public_ip_address: is_list = isinstance(namespace.public_ip_address, list) def _validate_name_or_id(public_ip): is_id ...
Retrieves a validator for a public IP address. Accepting all defaults will perform a check for an existing name or ID with no ARM-required --type parameter.
def start2(self, yes): if yes: self.write_message(1) self.hints[3].used = True self.lamp_turns = 1000 self.oldloc2 = self.oldloc = self.loc = self.rooms[1] self.dwarves = [ Dwarf(self.rooms[n]) for n in (19, 27, 33, 44, 64) ] self.pirate = Pirate(self....
Display instructions if the user wants them.
def sub_filter(self, subset, filter, inplace=True):
    full_query = ''.join(('not (', subset, ') or not (', filter, ')'))
    with LogDataChanges(self, filter_action='filter', filter_query=filter):
        result = self.data.query(full_query, inplace=inplace)
    return result
Apply a filter to a subset of the data Examples -------- :: .subquery( 'timestep == 2', 'R > 4', )
def _function_add_fakeret_edge(self, addr, src_node, src_func_addr, confirmed=None): target_node = self._nodes.get(addr, None) if target_node is None: target_snippet = self._to_snippet(addr=addr, base_state=self._base_state) else: target_snippet = self._to_snippet(cfg_nod...
Generate CodeNodes for the target and the source. If there is no source node, add a node for the function; otherwise, create a fake return in the function manager. :param int addr: target address :param angr.analyses.CFGNode src_node: source node :param int src_func_addr: address of function :param confirm...
def _parse_standard_flag(read_buffer, mask_length): mask_format = {1: 'B', 2: 'H', 4: 'I'}[mask_length] num_standard_flags, = struct.unpack_from('>H', read_buffer, offset=0) fmt = '>' + ('H' + mask_format) * num_standard_flags data = struct.unpack_from(fmt, read_buffer, offset=2) standard_flag = dat...
Construct standard flag, standard mask data from the file. Specifically working on Reader Requirements box. Parameters ---------- fptr : file object File object for JP2K file. mask_length : int Length of standard mask flag
def getDatetimeAxis():
    dataSet = 'nyc_taxi'
    filePath = './data/' + dataSet + '.csv'
    data = pd.read_csv(filePath, header=0, skiprows=[1, 2],
                       names=['datetime', 'value', 'timeofday', 'dayofweek'])
    xaxisDate = pd.to_datetime(data['datetime'])
    return xaxisDate
Use datetime as the x-axis.
def get_channelstate_settling( chain_state: ChainState, payment_network_id: PaymentNetworkID, token_address: TokenAddress, ) -> List[NettingChannelState]: return get_channelstate_filter( chain_state, payment_network_id, token_address, lambda channel_state: cha...
Return the state of settling channels in a token network.
def create(url, name, subject_id, image_group_id, properties): obj_props = [{'key':'name','value':name}] if not properties is None: try: for key in properties: if key != 'name': obj_props.append({'key':key, 'value':properties[key]})...
Create a new experiment using the given SCO-API create experiment Url. Parameters ---------- url : string Url to POST experiment create request name : string User-defined name for experiment subject_id : string Unique identifier for subject at...
def render(self): self.screen.reset() self.screen.blit(self.corners) self.screen.blit(self.lines, (1, 1)) self.screen.blit(self.rects, (int(self.screen.width / 2) + 1, 1)) self.screen.blit(self.circle, (0, int(self.screen.height / 2) + 1)) self.screen.blit(self.filled, (i...
Send the current screen content to Mate Light.
def weigh_users(X_test, model, classifier_type="LinearSVC"): if classifier_type == "LinearSVC": decision_weights = model.decision_function(X_test) elif classifier_type == "LogisticRegression": decision_weights = model.predict_proba(X_test) elif classifier_type == "RandomForest": if i...
Uses a trained model and the unlabelled features to produce a user-to-label distance matrix. Inputs: - feature_matrix: The graph based-features in either NumPy or SciPy sparse array format. - model: A trained scikit-learn One-vs-All multi-label scheme of linear SVC models. - classifier_t...
def crls(self):
    if not self._allow_fetching:
        return self._crls
    output = []
    for issuer_serial in self._fetched_crls:
        output.extend(self._fetched_crls[issuer_serial])
    return output
A list of all cached asn1crypto.crl.CertificateList objects
def visit_If(self, node): self.visit(node.test) old_range = self.result self.result = old_range.copy() for stmt in node.body: self.visit(stmt) body_range = self.result self.result = old_range.copy() for stmt in node.orelse: self.visit(stmt)...
Handle iterate variable across branches >>> import gast as ast >>> from pythran import passmanager, backend >>> node = ast.parse(''' ... def foo(a): ... if a > 1: b = 1 ... else: b = 3''') >>> pm = passmanager.PassManager("test") >>> res = pm.gat...
def standard_deviation(x):
    if x.ndim > 1 and len(x[0]) > 1:
        return np.std(x, axis=1)
    return np.std(x)
Return a numpy array of column standard deviation Parameters ---------- x : ndarray A numpy array instance Returns ------- ndarray A 1 x n numpy array instance of column standard deviation Examples -------- >>> a = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) >>...
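For intuition, a pure-Python sketch of per-column population standard deviation (the source reduces with `np.std` along an axis; `column_std` here is a hypothetical stand-in):

```python
from statistics import pstdev

def column_std(rows):
    # Population standard deviation of each column of a 2-D list,
    # mirroring np.std(rows, axis=0) on a numpy array.
    return [pstdev(col) for col in zip(*rows)]
```

For the docstring's example matrix, each column (1, 4, 7), (2, 5, 8), (3, 6, 9) has the same spread, sqrt(6).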
def parseFile(self, filename): modname = self.filenameToModname(filename) module = Module(modname, filename) self.modules[modname] = module if self.trackUnusedNames: module.imported_names, module.unused_names = \ find_imports_and_track_names(filename, ...
Parse a single file.
def antiscia(self):
    obj = self.copy()
    obj.type = const.OBJ_GENERIC
    obj.relocate(360 - obj.lon + 180)
    return obj
Returns antiscia object.
def highlight_code(self, ontospy_entity): try: pygments_code = highlight(ontospy_entity.rdf_source(), TurtleLexer(), HtmlFormatter()) pygments_code_css = HtmlFormatter().get_style_defs('.highlight') return { "pygments_code...
Produce an HTML version of Turtle code with syntax highlighting using Pygments CSS.
def setup(parser): parser.add_argument( '-p', '--paramfile', type=str, required=True, help='Parameter Range File') parser.add_argument( '-o', '--output', type=str, required=True, help='Output File') parser.add_argument( '-s', '--seed', type=int, required=False, default=None, ...
Add common sampling options to CLI parser. Parameters ---------- parser : argparse object Returns ---------- Updated argparse object
def _run_parallel_process_with_profiling(self, start_path, stop_path, queue, filename):
    runctx('Engine._run_parallel_process(self, start_path, stop_path, queue)',
           globals(), locals(), filename)
Wrapper to run the parallel process under the profiler.
def merge_into_adjustments_for_all_sids(self, all_adjustments_for_sid, col_to_all_adjustments): for col_name in all_adjustments_for_sid: if col_name not in col_to_all_adjustments: col_to_all_adjus...
Merge adjustments for a particular sid into a dictionary containing adjustments for all sids. Parameters ---------- all_adjustments_for_sid : dict[int -> AdjustedArray] All adjustments for a particular sid. col_to_all_adjustments : dict[int -> AdjustedArray] ...
def _def_lookup(self, variable): prevdefs = {} for code_loc in self._live_defs.lookup_defs(variable): if isinstance(variable, SimMemoryVariable): type_ = 'mem' elif isinstance(variable, SimRegisterVariable): type_ = 'reg' else: ...
This is a backward lookup in the previous defs. Note that, as we are using VSA, it is possible that `variable` is affected by several definitions. :param angr.analyses.ddg.LiveDefinitions live_defs: The collection of live definitions. :param SimVariable: The variable...
def do_loop_turn(self): self.check_and_del_zombie_modules() if self.watch_for_new_conf(timeout=0.05): logger.info("I got a new configuration...") self.setup_new_conf() _t0 = time.time() self.get_objects_from_from_queues() statsmgr.timer('core.get-objects-f...
Receiver daemon main loop :return: None
def iter_window(iterable, size=2, step=1, wrap=False): r iter_list = it.tee(iterable, size) if wrap: iter_list = [iter_list[0]] + list(map(it.cycle, iter_list[1:])) try: for count, iter_ in enumerate(iter_list[1:], start=1): for _ in range(count): six.next(ite...
r""" iterates through iterable with a window size generalizeation of itertwo Args: iterable (iter): an iterable sequence size (int): window size (default = 2) wrap (bool): wraparound (default = False) Returns: iter: returns windows in a sequence CommandLine: ...
def map_floor(continent_id, floor, lang="en"):
    cache_name = "map_floor.%s-%s.%s.json" % (continent_id, floor, lang)
    params = {"continent_id": continent_id, "floor": floor, "lang": lang}
    return get_cached("map_floor.json", cache_name, params=params)
This resource returns details about a map floor, used to populate a world map. All coordinates are map coordinates. The returned data only contains static content. Dynamic content, such as vendors, is not currently available. :param continent_id: The continent. :param floor: The map floor. :pa...
def filter_active(self, *args, **kwargs):
    grace = getattr(settings, 'HITCOUNT_KEEP_HIT_ACTIVE', {'days': 7})
    period = timezone.now() - timedelta(**grace)
    return self.filter(created__gte=period).filter(*args, **kwargs)
Return only the 'active' hits. How you count a hit/view will depend on personal choice: Should the same user/visitor *ever* be counted twice? After a week, or a month, or a year, should their view be counted again? The default is to consider a visitor's hit still 'active' if they ...
def load_yaml_config(conf_file):
    global g_config
    with open(conf_file) as fp:
        g_config = util.yaml_load(fp)
    src_dir = get_path('src_dir', None)
    if src_dir is not None:
        sys.path.insert(0, src_dir)
    for cmd in get('commands', []):
        _import(cmd)
Load a YAML configuration. This will not update the configuration but replace it entirely. Args: conf_file (str): Path to the YAML config. This function will not check the file name or extension and will just crash if the given file does not exist or is not a valid ...
def setContent(self, type_, value): if type_ in [self.CONTENT_TYPE_TXT, self.CONTENT_TYPE_URL, self.CONTENT_TYPE_FILE]: if type_ == self.CONTENT_TYPE_FILE: self._file = {} self._file = {'doc': open(value, 'rb')} else: s...
Sets the content that's going to be sent to analyze according to its type :param type_: Type of the content (text, file or url) :param value: Value of the content
def _initialize_from_model(self, model):
    for name, value in model.__dict__.items():
        if name in self._properties:
            setattr(self, name, value)
Initializes matching properties of this object from the given model.
def merge(self, parent=None): if parent is None: parent = self.parent if parent is None: return self.sources = parent.sources + self.sources data = copy.deepcopy(parent.data) for key, value in sorted(self.data.items()): if key.endswith('+'): ...
Merge parent data
def _transform_col(self, x, i):
    return x.fillna(NAN_INT).map(self.label_encoders[i]).fillna(0)
Encode one categorical column into labels. Args: x (pandas.Series): a categorical column to encode i (int): column index Returns: x (pandas.Series): a column with labels.
def remove_child(self, index):
    if index < 0:
        index = index + len(self)
    self.__children = self.__children[0:index] + self.__children[(index + 1):]
Remove the child at the given index from the current list of children. :param int index: the index of the child to be removed
def _create_relational_field(self, attr, options):
    options['entity_class'] = attr.py_type
    options['allow_empty'] = not attr.is_required
    return EntityField, options
Creates the form element for working with entity relationships.
def is_valid_endpoint_host(interfaces, endpoint):
    result = urlparse(endpoint)
    hostname = result.hostname
    if hostname is None:
        return False
    for interface in interfaces:
        if interface == hostname:
            return False
    return True
An endpoint host name is valid if it is a URL and if the host is not the name of a network interface.
def get_tree_members(self): members = [] queue = deque() queue.appendleft(self) visited = set() while len(queue): node = queue.popleft() if node not in visited: members.extend(node.get_member_info()) queue.extendlef...
Retrieves all members from this node of the tree down.
def check_infile_and_wp(curinf, curwp): if not os.path.exists(curinf): if curwp is None: TauDEM.error('You must specify one of the workspace and the ' 'full path of input file!') curinf = curwp + os.sep + curinf curinf = os.path.ab...
Check the existence of the given file and directory path. 1. Raise a runtime exception if both do not exist. 2. If ``curwp`` is None, set it to the base folder of ``curinf``.
def load_settings_sizes(): page_size = AGNOCOMPLETE_DEFAULT_PAGESIZE settings_page_size = getattr( settings, 'AGNOCOMPLETE_DEFAULT_PAGESIZE', None) page_size = settings_page_size or page_size page_size_min = AGNOCOMPLETE_MIN_PAGESIZE settings_page_size_min = getattr( settings, 'AGNOC...
Load sizes from settings or fallback to the module constants
def bloch_vector_from_state_vector(state: Sequence, index: int) -> np.ndarray:
    rho = density_matrix_from_state_vector(state, [index])
    v = np.zeros(3, dtype=np.float32)
    v[0] = 2*np.real(rho[0][1])
    v[1] = 2*np.imag(rho[1][0])
    v[2] = np.real(rho[0][0] - rho[1][1])
    return v
Returns the bloch vector of a qubit. Calculates the bloch vector of the qubit at index in the wavefunction given by state, assuming state follows the standard Kronecker convention of numpy.kron. Args: state: A sequence representing a wave function in which the ordering mapping to q...
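The three components computed in the source (x = 2·Re ρ01, y = 2·Im ρ10, z = ρ00 − ρ11) can be illustrated for a single qubit without NumPy; `bloch_vector` is a hypothetical helper assuming a normalized state α|0⟩ + β|1⟩:

```python
def bloch_vector(alpha, beta):
    # Density matrix entries of |psi><psi| for psi = alpha|0> + beta|1>.
    rho00 = (alpha * alpha.conjugate()).real
    rho11 = (beta * beta.conjugate()).real
    rho01 = alpha * beta.conjugate()
    rho10 = beta * alpha.conjugate()
    # Same formulas as the source: x = 2*Re(rho01), y = 2*Im(rho10), z = rho00 - rho11.
    return (2 * rho01.real, 2 * rho10.imag, rho00 - rho11)
```

|0⟩ maps to the north pole (0, 0, 1) and |+⟩ = (|0⟩ + |1⟩)/√2 maps to (1, 0, 0), as expected.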
def create_albaran_automatic(pk, list_lines): line_bd = SalesLineAlbaran.objects.filter(line_order__pk__in=list_lines).values_list('line_order__pk') if line_bd.count() == 0 or len(list_lines) != len(line_bd[0]): if line_bd.count() != 0: for x in line_bd[0]: ...
Automatically create the delivery note (albarán).
def _deallocator(self): lookup = { "c_bool": "logical", "c_double": "double", "c_double_complex": "complex", "c_char": "char", "c_int": "int", "c_float": "float", "c_short": "short", "c_long": "long" ...
Returns the name of the subroutine in ftypes_dealloc.f90 that can deallocate the array for this Ftype's pointer. :arg ctype: the string c-type of the variable.
def collected(self, group, filename=None, host=None, location=None, move=True, all=True): ret = { 'name': 'support.collected', 'changes': {}, 'result': True, 'comment': '', } location = location or tempfile.gettempdir() self.check_destinati...
Sync archives to a central place. :param name: :param group: :param filename: :param host: :param location: :param move: :param all: :return:
def _compute_fixed(self): try: lon, lat = np.meshgrid(self.lon, self.lat) except: lat = self.lat phi = np.deg2rad(lat) try: albedo = self.a0 + self.a2 * P2(np.sin(phi)) except: albedo = np.zeros_like(phi) dom = next(iter(sel...
Recompute any fixed quantities after a change in parameters
def get(self,coordinate_system): if coordinate_system == 'DA-DIR' or coordinate_system == 'specimen': return self.pars elif coordinate_system == 'DA-DIR-GEO' or coordinate_system == 'geographic': return self.geopars elif coordinate_system == 'DA-DIR-TILT' or coordinate_sy...
Return the pmagpy parameters dictionary associated with this fit and the given coordinate system @param: coordinate_system -> the coordinate system whose parameters to return
def LoadConfig(config_obj, config_file=None, config_fd=None, secondary_configs=None, contexts=None, reset=False, parser=ConfigFileParser): if config_obj is None or reset: config_obj = _CONFIG.MakeNewConfig() if config_file...
Initialize a ConfigManager with the specified options. Args: config_obj: The ConfigManager object to use and update. If None, one will be created. config_file: Filename to read the config from. config_fd: A file-like object to read config data from. secondary_configs: A list of secondary config...
def build_list_marker_log(parser: str = 'github', list_marker: str = '.') -> list: r if (parser == 'github' or parser == 'cmark' or parser == 'gitlab' or parser == 'commonmarker' or parser == 'redcarpet'): assert list_marker in md_parser[parser]['list']['ordered'][ ...
r"""Create a data structure that holds list marker information. :parameter parser: decides rules on how compute indentations. Defaults to ``github``. :parameter list_marker: a string that contains some of the first characters of the list element. Defaults to ``-``. :type parser: str :...
def _check_import_source(): path_rel = '~/cltk_data/greek/software/greek_software_tlgu/tlgu.h' path = os.path.expanduser(path_rel) if not os.path.isfile(path): try: corpus_importer = CorpusImporter('greek') corpus_importer.import_corpus('greek_software...
Check if tlgu is imported; if not, import it.
def img(self, id):
    return self._serve_file(os.path.join(media_path, 'img', id))
Serve Pylons' stock images
def next(self, timeout=None): try: apply_result = self._collector._get_result(self._idx, timeout) except IndexError: self._idx = 0 raise StopIteration except: self._idx = 0 raise self._idx += 1 assert apply_result.ready(...
Return the next result value in the sequence. Raise StopIteration at the end. Can raise the exception raised by the Job
def filter_service_by_regex_host_name(regex): host_re = re.compile(regex) def inner_filter(items): service = items["service"] host = items["hosts"][service.host] if service is None or host is None: return False return host_re.match(host.host_name) is not None retu...
Filter for services whose host_name matches the regex :param regex: regex to filter on :type regex: str :return: Filter :rtype: bool
def append_provenance_step(self, title, description, timestamp=None):
    step_time = self._provenance.append_step(title, description, timestamp)
    if step_time > self.last_update:
        self.last_update = step_time
Add a step to the provenance of the metadata :param title: The title of the step. :type title: str :param description: The content of the step :type description: str :param timestamp: the time of the step :type timestamp: datetime, str
def add_license(key): result = { 'result': False, 'retcode': -1, 'output': '' } if not has_powerpath(): result['output'] = 'PowerPath is not installed' return result cmd = '/sbin/emcpreg -add {0}'.format(key) ret = __salt__['cmd.run_all'](cmd, python_shell=Tru...
Add a license
def MakePartialStat(self, fd): is_dir = "Container" in fd.behaviours return { "pathspec": fd.Get(fd.Schema.PATHSPEC, ""), "st_atime": fd.Get(fd.Schema.LAST, 0), "st_blksize": 0, "st_blocks": 0, "st_ctime": 0, "st_dev": 0, "st_gid": 0, "st_ino": 0, ...
Try and give a 'stat' for something not in the data store. Args: fd: The object with no stat. Returns: A dictionary corresponding to what we'll say the 'stat' is for objects which are not actually files, so have no OS level stat.
def list_catalogs(results=30, start=0): result = util.callm("%s/%s" % ('catalog', 'list'), {'results': results, 'start': start}) cats = [Catalog(**util.fix(d)) for d in result['response']['catalogs']] start = result['response']['start'] total = result['response']['total'] return ResultList(cats, sta...
Returns list of all catalogs created on this API key Args: Kwargs: results (int): An integer number of results to return start (int): An integer starting value for the result set Returns: A list of catalog objects Example: >>> catalog.list_catalogs() [<catalog - tes...
def selectionComponents(self):
    comps = []
    model = self.model()
    for comp in self._selectedComponents:
        index = model.indexByComponent(comp)
        if index is not None:
            comps.append(comp)
    return comps
Returns the names of the component types in this selection
def _parse_qcd_segment(self, fptr):
    offset = fptr.tell() - 2
    read_buffer = fptr.read(3)
    length, sqcd = struct.unpack('>HB', read_buffer)
    spqcd = fptr.read(length - 3)
    return QCDsegment(sqcd, spqcd, length, offset)
Parse the QCD segment. Parameters ---------- fptr : file Open file object. Returns ------- QCDSegment The current QCD segment.
def get_template(file): pattern = str(file).lower() while len(pattern) and not Lean.is_registered(pattern): pattern = os.path.basename(pattern) pattern = re.sub(r'^[^.]*\.?','',pattern) preferred_klass = Lean.preferred_mappings[pattern] if Lean.preferred_mappings.has_key(pattern) else None i...
Lookup a template class for the given filename or file extension. Return None when no implementation is found.
def set_last_hop_errors(self, last_hop):
    if last_hop.is_error:
        self.last_hop_errors.append(last_hop.error_message)
        return
    for packet in last_hop.packets:
        if packet.is_error:
            self.last_hop_errors.append(packet.error_message)
Sets the last hop's errors.
def close(self):
    try:
        self.parent_fd.fileno()
    except io.UnsupportedOperation:
        logger.debug("Not closing parent_fd - reusing existing")
    else:
        self.parent_fd.close()
Close file, see file.close
def spin_gen(particles, index, gauge=1):
    mat = np.zeros((2**particles, 2**particles))
    flipper = 2**index
    for i in range(2**particles):
        ispin = btest(i, index)
        if ispin == 1:
            mat[i ^ flipper, i] = 1
        else:
            mat[i ^ flipper, i] = gauge
    return mat
Generates the generic spin operator in the z basis for a system of N=particles and for the selected spin index, where index=0..N-1. The gauge term sets the behavior for a system away from half-filling.
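A NumPy-free sketch of the same construction (hypothetical name `flip_operator`), returning the dense matrix as nested lists:

```python
def flip_operator(particles, index, gauge=1):
    # Entry [i ^ flipper][i] is 1 when bit `index` of i is set, else
    # `gauge`, where flipper = 2**index, matching the source's loop.
    dim = 2 ** particles
    flipper = 2 ** index
    mat = [[0] * dim for _ in range(dim)]
    for i in range(dim):
        bit_set = (i >> index) & 1
        mat[i ^ flipper][i] = 1 if bit_set else gauge
    return mat
```

With one particle and gauge=1 this reduces to the Pauli-X (spin-flip) matrix.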
def get_or_create_pull(github_repo, title, body, head, base, *, none_if_no_commit=False): try: return github_repo.create_pull( title=title, body=body, head=head, base=base ) except GithubException as err: err_message = err.data['errors'][0]...
Try to create the PR. If the PR exists, try to find it instead. Raises otherwise. You should always use the complete head syntax "org:branch", since the syntax is required in case of listing. if "none_if_no_commit" is set, return None instead of raising exception if the problem is that head and base a...
def __set_cache(self, tokens):
    if DefaultCompleter._DefaultCompleter__tokens.get(self.__language):
        return
    DefaultCompleter._DefaultCompleter__tokens[self.__language] = tokens
Sets the tokens cache. :param tokens: Completer tokens list. :type tokens: tuple or list
def update_wrapper(wrapper, wrapped, assigned = functools.WRAPPER_ASSIGNMENTS, updated = functools.WRAPPER_UPDATES): assigned = tuple(attr for attr in assigned if hasattr(wrapped, attr)) wrapper = functools.update_wrapper(wrapper, wrapped, assigned, updat...
Patch two bugs in functools.update_wrapper.
def update_state_success(self, model_output): response = requests.post( self.links[REF_UPDATE_STATE_SUCCESS], files={'file': open(model_output, 'rb')} ) if response.status_code != 200: try: raise ValueError(json.loads(response.text)['message'])...
Update the state of the model run to 'SUCCESS'. Expects a model output result file. Will upload the file before changing the model run state. Raises an exception if update fails or resource is unknown. Parameters ---------- model_output : string Path to mode...
def get_base_layout(figs):
    layout = {}
    for fig in figs:
        if not isinstance(fig, dict):
            fig = fig.to_dict()
        for k, v in list(fig['layout'].items()):
            layout[k] = v
    return layout
Generates a layout with the union of all properties of multiple figures' layouts Parameters: ----------- fig : list(Figures) List of Plotly Figures
def clear_file_systems(self):
    self._source_url = None
    self.dataset.config.library.source.url = None
    self._source_fs = None
    self._build_url = None
    self.dataset.config.library.build.url = None
    self._build_fs = None
    self.dataset.commit()
Remove references to build and source file systems, reverting to the defaults
def write_additional(self, productversion, channel): self.fileobj.seek(self.additional_offset) extras = extras_header.build(dict( count=1, sections=[dict( channel=six.u(channel), productversion=six.u(productversion), size=len(channe...
Write the additional information to the MAR header. Args: productversion (str): product and version string channel (str): channel string
def dict_filter(*args, **kwargs):
    result = {}
    for arg in itertools.chain(args, (kwargs,)):
        dict_filter_update(result, arg)
    return result
Merge all values into a single dict with all None values removed.
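Since `dict_filter_update` is not shown, here is a hedged sketch of one plausible behavior (`merge_filter` is a hypothetical name, and the rule that a later None removes an earlier key is an assumption):

```python
def merge_filter(*dicts, **kwargs):
    # Merge mappings left to right, keyword args last; None values are
    # dropped, and (assumption) a later None removes an earlier key.
    result = {}
    for d in (*dicts, kwargs):
        for k, v in d.items():
            if v is None:
                result.pop(k, None)
            else:
                result[k] = v
    return result
```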
def __update_state(self):
    if self._state.active:
        self._state = self.__get_state_by_id(self.job_config.job_id)
Fetches most up to date state from db.
def fixed(ctx, number, decimals=2, no_commas=False):
    value = _round(ctx, number, decimals)
    format_str = '{:f}' if no_commas else '{:,f}'
    return format_str.format(value)
Formats the given number in decimal format using a period and commas
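A standalone sketch of the formatting step, without the source's evaluation-context argument and with the printed digit count tied to `decimals` (an assumption; the source's `'{:,f}'` prints six decimals):

```python
def fixed(number, decimals=2, no_commas=False):
    # Round first, then format; '{:,.2f}' inserts thousands separators.
    value = round(number, decimals)
    fmt = ('{:.%df}' if no_commas else '{:,.%df}') % decimals
    return fmt.format(value)
```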
def extract_tar(tar_path, target_folder):
    with tarfile.open(tar_path, 'r') as archive:
        archive.extractall(target_folder)
Extract the content of the tar-file at `tar_path` into `target_folder`.
def run_program(self, name, arguments=[], timeout=30, exclusive=False):
    logger.debug("Running program ...")
    if exclusive:
        kill_longrunning(self.config)
    prog = RunningProgram(self, name, arguments, timeout)
    return prog.expect_end()
Runs a program in the working directory to completion. Args: name (str): The name of the program to be executed. arguments (tuple): Command-line arguments for the program. timeout (int): The timeout for execution. exclusive (bool): Prevent parallel va...
def show_schema(self, tables=None):
    tables = tables if tables else self.tables
    for t in tables:
        self._printer('\t{0}'.format(t))
        for col in self.get_schema(t, True):
            self._printer('\t\t{0:30} {1:15} {2:10} {3:10} {4:10} {5:10}'.format(*col))
Print schema information.
def filter_url(url, **kwargs):
    d = parse_url_to_dict(url)
    d.update(kwargs)
    return unparse_url_dict({k: v for k, v in list(d.items()) if v})
filter a URL by returning a URL with only the parts specified in the keywords
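A stdlib approximation of the same round-trip (the source relies on its own `parse_url_to_dict`/`unparse_url_dict` helpers; `replace_url_parts` is hypothetical and skips the falsy-value filtering):

```python
from urllib.parse import urlsplit, urlunsplit

def replace_url_parts(url, **parts):
    # Overlay keyword overrides on the split URL and reassemble.
    # Valid keys follow urlsplit: scheme, netloc, path, query, fragment.
    d = urlsplit(url)._asdict()
    d.update(parts)
    return urlunsplit(tuple(d.values()))
```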
def resize_file(fobj, diff, BUFFER_SIZE=2 ** 16): fobj.seek(0, 2) filesize = fobj.tell() if diff < 0: if filesize + diff < 0: raise ValueError fobj.truncate(filesize + diff) elif diff > 0: try: while diff: addsize = min(BUFFER_SIZE, diff) ...
Resize a file by `diff`. New space will be filled with zeros. Args: fobj (fileobj) diff (int): amount of size to change Raises: IOError
def _attacher(self, key, value, attributes, timed):
    id, extra = self._get_attach_id(key, value, attributes)
    record = self._create_attach_record(id, timed)
    if extra:
        record.update(extra)
    return record
Create a full attachment record payload.
def install_yum_priorities(distro, _yum=None):
    yum = _yum or pkg_managers.yum
    package_name = 'yum-plugin-priorities'
    if distro.normalized_name == 'centos':
        if distro.release[0] != '6':
            package_name = 'yum-priorities'
    yum(distro.conn, package_name)
EPEL started packaging Ceph so we need to make sure that the ceph.repo we install has a higher priority than the EPEL repo so that when installing Ceph it will come from the repo file we create. The name of the package changed back and forth (!) since CentOS 4: From the CentOS wiki:: Note: Th...
def get_lacp_mode(self, name): members = self.get_members(name) if not members: return DEFAULT_LACP_MODE for member in self.get_members(name): match = re.search(r'channel-group\s\d+\smode\s(?P<value>.+)', self.get_block('^interface %s' % memb...
Returns the LACP mode for the specified Port-Channel interface Args: name(str): The Port-Channel interface name to return the LACP mode for from the configuration Returns: The configured LACP mode for the interface. Valid mode values are 'on', '...
def get_asset_temporal_session_for_repository(self, repository_id=None):
    if not repository_id:
        raise NullArgument()
    if not self.supports_asset_temporal():
        raise Unimplemented()
    try:
        from . import sessions
    except ImportError:
        raise OperationFailed()
    # Assumed completion of the truncated source, following the
    # session-factory pattern used throughout this API:
    try:
        session = sessions.AssetTemporalSession(repository_id=repository_id,
                                                runtime=self._runtime)
    except AttributeError:
        raise OperationFailed()
    return session
Gets the session for retrieving temporal coverage of an asset for the given repository.

arg:    repository_id (osid.id.Id): the Id of the repository
return: (osid.repository.AssetTemporalSession) - an AssetTemporalSession
raise:  NotFound - repository_id not found
...
def endpoints_minima(self, slope_cutoff=5e-3):
    energies = self.energies
    try:
        sp = self.spline()
    except Exception:
        print("Energy spline failed.")
        return None
    der = sp.derivative()
    der_energies = der(range(len(energies)))
    # Assumed completion of the truncated source: report whether the
    # slope at each endpoint falls below the cutoff.
    return {"polar": abs(der_energies[0]) < slope_cutoff,
            "nonpolar": abs(der_energies[-1]) < slope_cutoff}
Test if spline endpoints are at minima for a given slope cutoff.
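The endpoint-slope test can be sketched with a numpy polynomial fit standing in for the spline (an assumption; the original presumably uses a scipy spline). `endpoint_slopes` and its key names are invented for illustration:

```python
import numpy as np

def endpoint_slopes(energies, slope_cutoff=5e-3):
    # Fit a cubic to the energy profile and evaluate its derivative at
    # both endpoints; an endpoint is a minimum candidate when the
    # absolute slope falls below the cutoff.
    x = np.arange(len(energies))
    deriv = np.polyder(np.polyfit(x, energies, 3))
    first, last = np.polyval(deriv, x[0]), np.polyval(deriv, x[-1])
    return {'start': abs(first) < slope_cutoff, 'end': abs(last) < slope_cutoff}
```

A flat profile passes at both ends; a linear ramp has slope 1 everywhere and fails.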
def get_package_version():
    base = os.path.abspath(os.path.dirname(__file__))
    with open(os.path.join(base, 'policy', '__init__.py'),
              mode='rt', encoding='utf-8') as initf:
        for line in initf:
            m = version.match(line.strip())
            if not m:
                continue
            # Assumed completion: the first capture group holds the version.
            return m.group(1)
Return the package version without importing the package.
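The `version` pattern is defined elsewhere in the original module; a typical shape for it, plus the same scan loop over lines (both the pattern and `scan_version` are assumptions for illustration):

```python
import re

# Hypothetical pattern; the original module defines `version` elsewhere.
version = re.compile(r"__version__\s*=\s*['\"]([^'\"]+)['\"]")

def scan_version(lines):
    # Return the first __version__ assignment found, as get_package_version does.
    for line in lines:
        m = version.match(line.strip())
        if not m:
            continue
        return m.group(1)
```

Reading the version this way avoids importing the package (and its dependencies) at build time.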
def _create_wx_app():
    wxapp = wx.GetApp()
    if wxapp is None:
        wxapp = wx.App(False)
        wxapp.SetExitOnFrameDelete(True)
        # Retain a reference so the App is not garbage collected.
        _create_wx_app.theWxApp = wxapp
Creates a wx.App instance if one has not been created so far.
def infer_type(self, in_type):
    return in_type, [in_type[0]]*len(self.list_outputs()), \
        [in_type[0]]*len(self.list_auxiliary_states())
infer_type interface. Override to create new operators.

Parameters
----------
in_type : list of np.dtype
    list of argument types in the same order as declared in list_arguments.

Returns
-------
in_type : list
    list of argument types. Can...
def fix_errors(config, validation):
    for e in flatten_errors(config, validation):
        sections, key, err = e
        sec = config
        for section in sections:
            sec = sec[section]
        if key is not None:
            sec[key] = sec.default_values.get(key, sec[key])
        else:
            sec....
Replace errors with their default values.

:param config: a validated ConfigObj to fix
:type config: ConfigObj
:param validation: the results of the validation
:type validation: ConfigObj
:returns: The altered config (it is also altered in place)
:raises: None
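`flatten_errors` and `ConfigObj` come from the configobj library; the walk-and-replace idea can be sketched on plain nested dicts. The error-tuple shape and the name `fix_error_keys` are assumptions for illustration:

```python
def fix_error_keys(config, defaults, errors):
    # errors: list of (section_path, key) pairs, similar to what
    # flatten_errors yields. Walk to each failing section, then replace
    # the bad key with its default when one exists.
    for sections, key in errors:
        sec, node = config, defaults
        for name in sections:
            sec = sec[name]
            node = node.get(name, {})
        if key is not None and key in node:
            sec[key] = node[key]
    return config

cfg = {'server': {'port': 'oops'}}
fixed = fix_error_keys(cfg, {'server': {'port': 8080}}, [(['server'], 'port')])
```

The config is mutated in place and also returned, matching the behavior described in the docstring above.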
def flush(self, auth, resource, options=None, defer=False):
    args = [resource]
    if options is not None:
        args.append(options)
    return self._call('flush', auth, args, defer)
Empties the specified resource of data per specified constraints.

Args:
    auth: <cik>
    resource: resource to empty.
    options: Time limits.
def set(self, key, value):
    if hasattr(value, 'labels'):
        if 'VARIABLE' in env.config['SOS_DEBUG'] or 'ALL' in env.config['SOS_DEBUG']:
            env.log_to_file(
                'VARIABLE',
                f"Set {key} to {short_repr(value)} with labels {short_repr(v...
A shortcut to set a value for a key without triggering the usual logging or warning messages (debug logging excepted).
def init(self, initial):
    if initial <= 0:
        return False
    step = initial // BLOCK_SIZE
    with self._lock:
        init = self._atomic_long.compare_and_set(0, step + 1).result()
        if init:
            self._local = step
            # Assumed completion of the truncated source: keep the offset
            # into the current block so the next id is initial + 1.
            self._residue = (initial % BLOCK_SIZE) + 1
        return init
Try to initialize this IdGenerator instance with the given id. The first generated id will be 1 greater than ``initial``.

:param initial: (long), the given id.
:return: (bool), ``true`` if initialization succeeded, ``false`` if ``initial`` is not positive or the generator was already initialized.
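The block arithmetic behind `init` can be checked standalone. `BLOCK_SIZE` here is a hypothetical value, and the residue formula assumes the `(initial % BLOCK_SIZE) + 1` completion of the truncated source:

```python
BLOCK_SIZE = 10000  # hypothetical; stands in for the constant used by init

def next_id_after_init(initial):
    # After init(initial), the generator holds block `step` and offset
    # `residue`, so the next id it hands out is step * BLOCK_SIZE + residue,
    # i.e. exactly initial + 1.
    step = initial // BLOCK_SIZE
    residue = (initial % BLOCK_SIZE) + 1
    return step * BLOCK_SIZE + residue
```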
def delete_token():
    username = get_admin()[0]
    admins = get_couchdb_admins()
    if username in admins:
        print('I delete {} CouchDB user'.format(username))
        delete_couchdb_admin(username)
    if os.path.isfile(LOGIN_FILENAME):
        print('I delete {} token file'.format(LOGIN_FILENAME))
        # Assumed completion of the truncated source: remove the token file.
        os.remove(LOGIN_FILENAME)
Delete the current token file and the CouchDB admin user.
def dollars_to_cents(s, allow_negative=False):
    if not s:
        return
    if isinstance(s, string_types):
        s = ''.join(RE_NUMBER.findall(s))
    dollars = int(round(float(s) * 100))
    if not allow_negative and dollars < 0:
        raise ValueError('Negative values not permitted.')
    return dollars
Given a string or integer representing dollars, return an integer of equivalent cents, in an input-resilient way.

This works by stripping any non-numeric characters before attempting to cast the value.

Examples::

    >>> dollars_to_cents('$1')
    100
    >>> dollars_to_cents('1')
    ...
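`RE_NUMBER` and `string_types` come from the surrounding module (the latter likely from six); a self-contained Python 3 sketch, with an assumed pattern for `RE_NUMBER`:

```python
import re

RE_NUMBER = re.compile(r'[\d.\-]+')  # assumed: keeps digits, dots, and minus signs

def dollars_to_cents_sketch(s, allow_negative=False):
    # Strip non-numeric characters from string input, then convert
    # dollars to an integer number of cents.
    if not s:
        return None
    if isinstance(s, str):
        s = ''.join(RE_NUMBER.findall(s))
    cents = int(round(float(s) * 100))
    if not allow_negative and cents < 0:
        raise ValueError('Negative values not permitted.')
    return cents
```

Rounding before the integer cast guards against floating-point artifacts such as `19.99 * 100` landing a hair under 1999.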
def add_child_bin(self, bin_id, child_id):
    if self._catalog_session is not None:
        return self._catalog_session.add_child_catalog(catalog_id=bin_id, child_id=child_id)
    return self._hierarchy_session.add_child(id_=bin_id, child_id=child_id)
Adds a child to a bin.

arg:    bin_id (osid.id.Id): the ``Id`` of a bin
arg:    child_id (osid.id.Id): the ``Id`` of the new child
raise:  AlreadyExists - ``bin_id`` is already a parent of ``child_id``
raise:  NotFound - ``bin_id`` or ``child_id`` not found
raise...