def _inner_take_over_or_update(self, full_values=None, current_values=None, value_indices=None):
    for key in current_values.keys():
        if value_indices is not None and key in value_indices:
            index = value_indices[key]
        else:
            index = slice(None)
        if key in full_values:
            try:
                full_values[key][index] += current_values[key]
            except TypeError:  # value is not indexable
                full_values[key] += current_values[key]
        else:
            full_values[key] = current_values[key]
This is for automatic updates of values in the inner loop of missing data handling. Both arguments are dictionaries, and the values in full_values will be updated by the values in current_values. If a key from current_values does not exist in full_values, it is initialized to the value in current_values. If indices are needed for the update, value_indices can supply them: if value_indices contains the same key as current_values, the update in full_values is indexed by value_indices[key].

full_values: dictionary of standing values (you will have to carefully make sure that the ordering is right!). The values in here are updated such that full_values[key] += current_values[key] for all keys in current_values.
current_values: dictionary of values for the current set of parameters.
value_indices: dictionary holding indices for the update in full_values. If the key exists, the update rule is full_values[key][value_indices[key]] += current_values[key].
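As a hedged illustration of the update rule described above — the function name and the integer-index variant below are assumptions for this standalone sketch; the real method also accepts slice and array indices:

```python
def take_over_or_update(full_values, current_values, value_indices=None):
    """Accumulate current_values into full_values, optionally at sub-indices."""
    for key, cur in current_values.items():
        if key not in full_values:
            full_values[key] = cur                    # first sighting: initialize
        elif value_indices and key in value_indices:
            for i, j in enumerate(value_indices[key]):
                full_values[key][j] += cur[i]         # indexed update
        else:
            for i, v in enumerate(cur):
                full_values[key][i] += v              # plain element-wise update

full = {"w": [0.0, 0.0, 0.0, 0.0]}
take_over_or_update(full, {"w": [1.0, 2.0], "b": [5.0]}, {"w": [1, 3]})
# full == {"w": [0.0, 1.0, 0.0, 2.0], "b": [5.0]}
```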
def _lock(self):
    lockfile = '{}.lock'.format(self._get_cookie_file())
    safe_mkdir_for(lockfile)
    return OwnerPrintingInterProcessFileLock(lockfile)
An identity-keyed inter-process lock around the cookie file.
def copy(self, object_version=None, key=None):
    return ObjectVersionTag.create(
        self.object_version if object_version is None else object_version,
        key or self.key,
        self.value
    )
Copy a tag to a given object version.

:param object_version: The object version instance to copy the tag to.
    Default: current object version.
:param key: Key of destination tag. Default: current tag key.
:return: The copied object version tag.
def apply_clicked(self, button):
    if isinstance(self.model.state, LibraryState):
        return
    self.set_script_text(self.view.get_text())
Triggered when the Apply shortcut in the editor is activated; applies the current editor text to the script (ignored for library states).
def int(self, length, name, value=None, align=None):
    self._add_field(Int(length, name, value, align=align))
Add a signed integer to the template. `length` is given in bytes and `value` is optional. `align` can be used to align the field to a longer byte length.

Signed integers use two's complement with bits numbered in big-endian order.

Examples:
| int | 2 | foo |
| int | 2 | foo | 42 |
| int | 2 | fourByteFoo | 42 | align=4 |
def example_stats(filename):
    nd = results.load_nd_from_pickle(filename=filename)
    nodes_df, edges_df = nd.to_dataframe()
    stats = results.calculate_mvgd_stats(nd)
    stations = nodes_df[nodes_df['type'] == 'LV Station']
    f, axarr = plt.subplots(2, sharex=True)
    f.suptitle("Peak load (top) / peak generation capacity (bottom) at LV "
               "substations in kW")
    stations['peak_load'].hist(bins=20, alpha=0.5, ax=axarr[0])
    axarr[0].set_title("Peak load in kW")
    stations['generation_capacity'].hist(bins=20, alpha=0.5, ax=axarr[1])
    axarr[1].set_title("Peak generation capacity in kW")
    plt.show()
    print("You are analyzing MV grid district {mvgd}\n".format(
        mvgd=int(stats.index.values)))
    with option_context('display.max_rows', None, 'display.max_columns', None,
                        'display.max_colwidth', -1):
        print(stats.T)
Obtain statistics from a created grid topology. Prints some statistical numbers and produces exemplary figures.
def RGB_color_picker(obj):
    digest = hashlib.sha384(str(obj).encode('utf-8')).hexdigest()
    subsize = int(len(digest) / 3)
    splitted_digest = [digest[i * subsize: (i + 1) * subsize]
                       for i in range(3)]
    max_value = float(int("f" * subsize, 16))
    components = (int(d, 16) / max_value for d in splitted_digest)
    return Color(rgb2hex(components))
Build a color representation from the string representation of an object

This allows one to quickly get a color from some data, with the additional benefit that the color will be the same as long as the (string representation of the) data is the same::

    >>> from colour import RGB_color_picker, Color

Same inputs produce the same result::

    >>> RGB_color_picker("Something") == RGB_color_picker("Something")
    True

... but different inputs produce different colors::

    >>> RGB_color_picker("Something") != RGB_color_picker("Something else")
    True

In any case, we still get a ``Color`` object::

    >>> isinstance(RGB_color_picker("Something"), Color)
    True
def subcommand(self, description='', arguments={}):
    def decorator(func):
        self.register_subparser(
            func,
            func.__name__.replace('_', '-'),
            description=description,
            arguments=arguments,
        )
        return func
    return decorator
Decorator for quickly adding subcommands to the omnic CLI
def experiments_predictions_upsert_property(self, experiment_id, run_id,
                                            properties):
    if self.experiments_predictions_get(experiment_id, run_id) is None:
        return None
    return self.predictions.upsert_object_property(run_id, properties)
Upsert property of a prediction for an experiment. Raises ValueError if the given property dictionary results in an illegal operation.

Parameters
----------
experiment_id : string
    Unique experiment identifier
run_id : string
    Unique model run identifier
properties : Dictionary()
    Dictionary of property names and their new values.

Returns
-------
ModelRunHandle
    Handle for updated object or None if object doesn't exist
def add(self, piece_uid, index):
    if self.occupancy[index]:
        raise OccupiedPosition
    if self.exposed_territory[index]:
        raise VulnerablePosition
    klass = PIECE_CLASSES[piece_uid]
    piece = klass(self, index)
    territory = piece.territory
    for i in self.indexes:
        if self.occupancy[i] and territory[i]:
            raise AttackablePiece
    self.pieces.add(piece)
    self.occupancy[index] = True
    self.exposed_territory = list(
        map(or_, self.exposed_territory, territory))
Add a piece to the board at the provided linear position.
def import_keybase(useropt):
    public_key = None
    u_bits = useropt.split(':')
    username = u_bits[0]
    if len(u_bits) == 1:
        public_key = cryptorito.key_from_keybase(username)
    else:
        fingerprint = u_bits[1]
        public_key = cryptorito.key_from_keybase(username, fingerprint)
    if cryptorito.has_gpg_key(public_key['fingerprint']):
        sys.exit(2)
    cryptorito.import_gpg_key(public_key['bundle'].encode('ascii'))
    sys.exit(0)
Imports a public GPG key from Keybase
def blocks(self, *args, **kwargs):
    return Stream(blocks(iter(self), *args, **kwargs))
Interface to apply audiolazy.blocks directly in a stream, returning another stream. Use keyword args.
def get_scoped_variable_from_name(self, name):
    for scoped_variable_id, scoped_variable in self.scoped_variables.items():
        if scoped_variable.name == name:
            return scoped_variable_id
    raise AttributeError("Name %s is not in scoped_variables dictionary" % name)
Get the scoped variable for a unique name

:param name: the unique name of the scoped variable
:return: the scoped variable specified by the name
:raises exceptions.AttributeError: if the name is not in the
    scoped_variables dictionary
def get_public_ip_validator():
    from msrestazure.tools import is_valid_resource_id, resource_id

    def simple_validator(cmd, namespace):
        if namespace.public_ip_address:
            is_list = isinstance(namespace.public_ip_address, list)

            def _validate_name_or_id(public_ip):
                is_id = is_valid_resource_id(public_ip)
                return public_ip if is_id else resource_id(
                    subscription=get_subscription_id(cmd.cli_ctx),
                    resource_group=namespace.resource_group_name,
                    namespace='Microsoft.Network',
                    type='publicIPAddresses',
                    name=public_ip)

            if is_list:
                for i, public_ip in enumerate(namespace.public_ip_address):
                    namespace.public_ip_address[i] = _validate_name_or_id(public_ip)
            else:
                namespace.public_ip_address = _validate_name_or_id(
                    namespace.public_ip_address)

    return simple_validator
Retrieves a validator for a public IP address. Accepting all defaults will perform a check for an existing name or ID with no ARM-required -type parameter.
def start2(self, yes):
    if yes:
        self.write_message(1)
        self.hints[3].used = True
    self.lamp_turns = 1000
    self.oldloc2 = self.oldloc = self.loc = self.rooms[1]
    self.dwarves = [Dwarf(self.rooms[n]) for n in (19, 27, 33, 44, 64)]
    self.pirate = Pirate(self.chest_room)
    treasures = self.treasures
    self.treasures_not_found = len(treasures)
    for treasure in treasures:
        treasure.prop = -1
    self.describe_location()
Display instructions if the user wants them.
def sub_filter(self, subset, filter, inplace=True):
    full_query = ''.join(('not (', subset, ') or not (', filter, ')'))
    with LogDataChanges(self, filter_action='filter', filter_query=filter):
        result = self.data.query(full_query, inplace=inplace)
    return result
Apply a filter to a subset of the data

Examples
--------
::

    .sub_filter(
        'timestep == 2',
        'R > 4',
    )
def _function_add_fakeret_edge(self, addr, src_node, src_func_addr,
                               confirmed=None):
    target_node = self._nodes.get(addr, None)
    if target_node is None:
        target_snippet = self._to_snippet(addr=addr, base_state=self._base_state)
    else:
        target_snippet = self._to_snippet(cfg_node=target_node)

    if src_node is None:
        self.kb.functions._add_node(src_func_addr, target_snippet)
    else:
        src_snippet = self._to_snippet(cfg_node=src_node)
        self.kb.functions._add_fakeret_to(src_func_addr, src_snippet,
                                          target_snippet, confirmed=confirmed)
Generate CodeNodes for the target and source. If there is no source node, add the node to the function; otherwise create a fake-return edge in the function manager.

:param int addr: target address
:param angr.analyses.CFGNode src_node: source node
:param int src_func_addr: address of the function
:param confirmed: used as an attribute on the eventual digraph
:return: None
def _parse_standard_flag(read_buffer, mask_length):
    mask_format = {1: 'B', 2: 'H', 4: 'I'}[mask_length]
    num_standard_flags, = struct.unpack_from('>H', read_buffer, offset=0)
    fmt = '>' + ('H' + mask_format) * num_standard_flags
    data = struct.unpack_from(fmt, read_buffer, offset=2)
    standard_flag = data[0:num_standard_flags * 2:2]
    standard_mask = data[1:num_standard_flags * 2:2]
    return standard_flag, standard_mask
Construct standard flag, standard mask data from the file. Specifically working on the Reader Requirements box.

Parameters
----------
read_buffer : bytes
    Raw bytes of the Reader Requirements box.
mask_length : int
    Length of the standard mask in bytes.
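A quick round-trip check of the parsing logic above, building a buffer with struct.pack (the function here is a local copy of the logic, renamed for illustration):

```python
import struct

def parse_standard_flag(read_buffer, mask_length):
    # Local copy of the parsing logic above, for illustration.
    mask_format = {1: 'B', 2: 'H', 4: 'I'}[mask_length]
    num, = struct.unpack_from('>H', read_buffer, offset=0)
    data = struct.unpack_from('>' + ('H' + mask_format) * num,
                              read_buffer, offset=2)
    return data[0:num * 2:2], data[1:num * 2:2]

# Two (flag, mask) pairs with one-byte masks: (5, 0x80) and (12, 0x40).
buf = struct.pack('>H' + 'HB' * 2, 2, 5, 0x80, 12, 0x40)
flags, masks = parse_standard_flag(buf, 1)
# flags == (5, 12), masks == (128, 64)
```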
def getDatetimeAxis():
    dataSet = 'nyc_taxi'
    filePath = './data/' + dataSet + '.csv'
    data = pd.read_csv(filePath, header=0, skiprows=[1, 2],
                       names=['datetime', 'value', 'timeofday', 'dayofweek'])
    xaxisDate = pd.to_datetime(data['datetime'])
    return xaxisDate
Use datetime as the x-axis.
def get_channelstate_settling(
        chain_state: ChainState,
        payment_network_id: PaymentNetworkID,
        token_address: TokenAddress,
) -> List[NettingChannelState]:
    return get_channelstate_filter(
        chain_state,
        payment_network_id,
        token_address,
        lambda channel_state: channel.get_status(channel_state) == CHANNEL_STATE_SETTLING,
    )
Return the state of settling channels in a token network.
def create(url, name, subject_id, image_group_id, properties):
    obj_props = [{'key': 'name', 'value': name}]
    if properties is not None:
        try:
            for key in properties:
                if key != 'name':
                    obj_props.append({'key': key, 'value': properties[key]})
        except TypeError:
            raise ValueError('invalid property set')
    body = {
        'subject': subject_id,
        'images': image_group_id,
        'properties': obj_props
    }
    try:
        req = urllib2.Request(url)
        req.add_header('Content-Type', 'application/json')
        response = urllib2.urlopen(req, json.dumps(body))
    except urllib2.URLError as ex:
        raise ValueError(str(ex))
    return references_to_dict(json.load(response)['links'])[REF_SELF]
Create a new experiment using the given SCO-API create experiment Url.

Parameters
----------
url : string
    Url to POST experiment create request
name : string
    User-defined name for experiment
subject_id : string
    Unique identifier for subject at given SCO-API
image_group_id : string
    Unique identifier for image group at given SCO-API
properties : Dictionary
    Set of additional properties for created experiment. Argument may be
    None. Given name will override name property in this set (if present).

Returns
-------
string
    Url of created experiment resource
def render(self):
    self.screen.reset()
    self.screen.blit(self.corners)
    self.screen.blit(self.lines, (1, 1))
    self.screen.blit(self.rects, (int(self.screen.width / 2) + 1, 1))
    self.screen.blit(self.circle, (0, int(self.screen.height / 2) + 1))
    self.screen.blit(self.filled, (int(self.screen.width / 2) + 1,
                                   int(self.screen.height / 2) + 1))
    self.screen.update()
    self.clock.tick()
Send the current screen content to Mate Light.
def weigh_users(X_test, model, classifier_type="LinearSVC"):
    if classifier_type == "LinearSVC":
        decision_weights = model.decision_function(X_test)
    elif classifier_type == "LogisticRegression":
        decision_weights = model.predict_proba(X_test)
    elif classifier_type == "RandomForest":
        if issparse(X_test):
            decision_weights = model.predict_proba(X_test.tocsr())
        else:
            decision_weights = model.predict_proba(X_test)
    else:
        raise RuntimeError("Invalid classifier type.")
    return decision_weights
Uses a trained model and the unlabelled features to produce a user-to-label distance matrix.

Inputs:
    - X_test: The graph-based features in either NumPy or SciPy sparse array format.
    - model: A trained scikit-learn One-vs-All multi-label scheme of linear SVC models.
    - classifier_type: A string to be chosen among:
        * LinearSVC (LibLinear)
        * LogisticRegression (LibLinear)
        * RandomForest

Output:
    - decision_weights: A NumPy array containing the distance of each user from each label discriminator.
def crls(self):
    if not self._allow_fetching:
        return self._crls
    output = []
    for issuer_serial in self._fetched_crls:
        output.extend(self._fetched_crls[issuer_serial])
    return output
A list of all cached asn1crypto.crl.CertificateList objects
def visit_If(self, node):
    self.visit(node.test)
    old_range = self.result

    self.result = old_range.copy()
    for stmt in node.body:
        self.visit(stmt)
    body_range = self.result

    self.result = old_range.copy()
    for stmt in node.orelse:
        self.visit(stmt)
    orelse_range = self.result

    self.result = body_range
    for k, v in orelse_range.items():
        if k in self.result:
            self.result[k] = self.result[k].union(v)
        else:
            self.result[k] = v
Handle iterate variable across branches

>>> import gast as ast
>>> from pythran import passmanager, backend
>>> node = ast.parse('''
... def foo(a):
...     if a > 1: b = 1
...     else: b = 3''')
>>> pm = passmanager.PassManager("test")
>>> res = pm.gather(RangeValues, node)
>>> res['b']
Interval(low=1, high=3)
def standard_deviation(x):
    if x.ndim > 1 and len(x[0]) > 1:
        return np.std(x, axis=1)
    return np.std(x)
Return a numpy array of column standard deviation

Parameters
----------
x : ndarray
    A numpy array instance

Returns
-------
ndarray
    A 1 x n numpy array instance of column standard deviation

Examples
--------
>>> a = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
>>> np.testing.assert_array_almost_equal(
...     standard_deviation(a),
...     [0.816496, 0.816496, 0.816496])
>>> a = np.array([1, 2, 3])
>>> np.testing.assert_array_almost_equal(
...     standard_deviation(a),
...     0.816496)
def parseFile(self, filename):
    modname = self.filenameToModname(filename)
    module = Module(modname, filename)
    self.modules[modname] = module
    if self.trackUnusedNames:
        module.imported_names, module.unused_names = \
            find_imports_and_track_names(filename,
                                         self.warn_about_duplicates,
                                         self.verbose)
    else:
        module.imported_names = find_imports(filename)
        module.unused_names = None
    dirname = os.path.dirname(filename)
    module.imports = set(
        [self.findModuleOfName(imp.name, imp.level, filename, dirname)
         for imp in module.imported_names])
Parse a single file.
def antiscia(self):
    obj = self.copy()
    obj.type = const.OBJ_GENERIC
    obj.relocate(360 - obj.lon + 180)
    return obj
Returns antiscia object.
def highlight_code(self, ontospy_entity):
    try:
        pygments_code = highlight(ontospy_entity.rdf_source(),
                                  TurtleLexer(), HtmlFormatter())
        pygments_code_css = HtmlFormatter().get_style_defs('.highlight')
        return {
            "pygments_code": pygments_code,
            "pygments_code_css": pygments_code_css
        }
    except Exception:
        printDebug("Error: Pygmentize Failed", "red")
        return {}
Produce an HTML version of Turtle code with syntax highlighting using Pygments CSS.
def setup(parser):
    parser.add_argument('-p', '--paramfile', type=str, required=True,
                        help='Parameter Range File')
    parser.add_argument('-o', '--output', type=str, required=True,
                        help='Output File')
    parser.add_argument('-s', '--seed', type=int, required=False,
                        default=None, help='Random Seed')
    parser.add_argument('--delimiter', type=str, required=False,
                        default=' ', help='Column delimiter')
    parser.add_argument('--precision', type=int, required=False,
                        default=8, help='Output floating-point precision')
    return parser
Add common sampling options to CLI parser.

Parameters
----------
parser : argparse object

Returns
-------
Updated argparse object
def _run_parallel_process_with_profiling(self, start_path, stop_path, queue,
                                         filename):
    runctx('Engine._run_parallel_process(self, start_path, stop_path, queue)',
           globals(), locals(), filename)
Wrapper to run ``_run_parallel_process`` under the profiler, writing the profile to ``filename``.
def merge_into_adjustments_for_all_sids(self, all_adjustments_for_sid,
                                        col_to_all_adjustments):
    for col_name in all_adjustments_for_sid:
        if col_name not in col_to_all_adjustments:
            col_to_all_adjustments[col_name] = {}
        for ts in all_adjustments_for_sid[col_name]:
            adjs = all_adjustments_for_sid[col_name][ts]
            add_new_adjustments(col_to_all_adjustments, adjs, col_name, ts)
Merge adjustments for a particular sid into a dictionary containing adjustments for all sids.

Parameters
----------
all_adjustments_for_sid : dict[int -> AdjustedArray]
    All adjustments for a particular sid.
col_to_all_adjustments : dict[int -> AdjustedArray]
    All adjustments for all sids.
def _def_lookup(self, variable):
    prevdefs = {}
    for code_loc in self._live_defs.lookup_defs(variable):
        if isinstance(variable, SimMemoryVariable):
            type_ = 'mem'
        elif isinstance(variable, SimRegisterVariable):
            type_ = 'reg'
        else:
            raise AngrDDGError('Unknown variable type %s' % type(variable))
        prevdefs[code_loc] = {
            'type': type_,
            'data': variable
        }
    return prevdefs
This is a backward lookup in the previous defs. Note that, as we are using VSA, it is possible that `variable` is affected by several definitions.

:param SimVariable variable: The variable to look up definitions for.
:returns: A dict {code_loc: info} mapping each code location that defines
    `variable` to its type ('mem' or 'reg') and data.
def do_loop_turn(self):
    self.check_and_del_zombie_modules()

    if self.watch_for_new_conf(timeout=0.05):
        logger.info("I got a new configuration...")
        self.setup_new_conf()

    _t0 = time.time()
    self.get_objects_from_from_queues()
    statsmgr.timer('core.get-objects-from-queues', time.time() - _t0)

    _t0 = time.time()
    self.get_external_commands_from_arbiters()
    statsmgr.timer('external-commands.got.time', time.time() - _t0)
    statsmgr.gauge('external-commands.got.count',
                   len(self.unprocessed_external_commands))

    _t0 = time.time()
    self.push_external_commands_to_schedulers()
    statsmgr.timer('external-commands.pushed.time', time.time() - _t0)

    _t0 = time.time()
    self.hook_point('tick')
    statsmgr.timer('hook.tick', time.time() - _t0)
Receiver daemon main loop

:return: None
def iter_window(iterable, size=2, step=1, wrap=False):
    iter_list = it.tee(iterable, size)
    if wrap:
        iter_list = [iter_list[0]] + list(map(it.cycle, iter_list[1:]))
    try:
        for count, iter_ in enumerate(iter_list[1:], start=1):
            for _ in range(count):
                six.next(iter_)
    except StopIteration:
        return iter(())
    else:
        _window_iter = zip(*iter_list)
        window_iter = it.islice(_window_iter, 0, None, step)
        return window_iter
r""" iterates through iterable with a window size generalizeation of itertwo Args: iterable (iter): an iterable sequence size (int): window size (default = 2) wrap (bool): wraparound (default = False) Returns: iter: returns windows in a sequence CommandLine: python -m utool.util_iter --exec-iter_window Example: >>> # ENABLE_DOCTEST >>> from utool.util_iter import * # NOQA >>> iterable = [1, 2, 3, 4, 5, 6] >>> size, step, wrap = 3, 1, True >>> window_iter = iter_window(iterable, size, step, wrap) >>> window_list = list(window_iter) >>> result = ('window_list = %r' % (window_list,)) >>> print(result) window_list = [(1, 2, 3), (2, 3, 4), (3, 4, 5), (4, 5, 6), (5, 6, 1), (6, 1, 2)] Example: >>> # ENABLE_DOCTEST >>> from utool.util_iter import * # NOQA >>> iterable = [1, 2, 3, 4, 5, 6] >>> size, step, wrap = 3, 2, True >>> window_iter = iter_window(iterable, size, step, wrap) >>> window_list = list(window_iter) >>> result = ('window_list = %r' % (window_list,)) >>> print(result) window_list = [(1, 2, 3), (3, 4, 5), (5, 6, 1)]
def map_floor(continent_id, floor, lang="en"):
    cache_name = "map_floor.%s-%s.%s.json" % (continent_id, floor, lang)
    params = {"continent_id": continent_id, "floor": floor, "lang": lang}
    return get_cached("map_floor.json", cache_name, params=params)
This resource returns details about a map floor, used to populate a world map. All coordinates are map coordinates.

The returned data only contains static content. Dynamic content, such as vendors, is not currently available.

:param continent_id: The continent.
:param floor: The map floor.
:param lang: Show localized texts in the specified language.

The response is an object with the following properties:

texture_dims (dimension)
    The dimensions of the texture.
clamped_view (rect)
    If present, it represents a rectangle of downloadable textures. Every tile coordinate outside this rectangle is not available on the tile server.
regions (object)
    A mapping from region id to an object. Each region object contains the following properties:

    name (string)
        The region name.
    label_coord (coordinate)
        The coordinates of the region label.
    maps (object)
        A mapping from the map id to an object. Each map object contains the following properties:

        name (string)
            The map name.
        min_level (number)
            The minimum level of the map.
        max_level (number)
            The maximum level of the map.
        default_floor (number)
            The default floor of the map.
        map_rect (rect)
            The dimensions of the map.
        continent_rect (rect)
            The dimensions of the map within the continent coordinate system.
        points_of_interest (list)
            A list of points of interest (landmarks, waypoints and vistas). Each point of interest object contains the following properties:

            poi_id (number)
                The point of interest id.
            name (string)
                The name of the point of interest.
            type (string)
                The type. This can be either "landmark" for actual points of interest, "waypoint" for waypoints, or "vista" for vistas.
            floor (number)
                The floor of this object.
            coord (coordinate)
                The coordinates of this object.
        tasks (list)
            A list of renown hearts. Each task object contains the following properties:

            task_id (number)
                The renown heart id.
            objective (string)
                The objective or name of the heart.
            level (number)
                The level of the heart.
            coord (coordinate)
                The coordinates where it takes place.
        skill_challenges (list)
            A list of skill challenges. Each skill challenge object contains the following properties:

            coord (coordinate)
                The coordinates of this skill challenge.
        sectors (list)
            A list of areas within the map. Each sector object contains the following properties:

            sector_id (number)
                The area id.
            name (string)
                The name of the area.
            level (number)
                The level of the area.
            coord (coordinate)
                The coordinates of this area (this is usually the center position).

Special types:

Dimension properties are two-element lists of width and height.
Coordinate properties are two-element lists of the x and y position.
Rect properties are two-element lists of coordinates of the upper-left and lower-right coordinates.
def filter_active(self, *args, **kwargs):
    grace = getattr(settings, 'HITCOUNT_KEEP_HIT_ACTIVE', {'days': 7})
    period = timezone.now() - timedelta(**grace)
    return self.filter(created__gte=period).filter(*args, **kwargs)
Return only the 'active' hits.

How you count a hit/view will depend on personal choice: Should the same user/visitor *ever* be counted twice? After a week, or a month, or a year, should their view be counted again?

The default is to consider a visitor's hit still 'active' if they return within the last seven days. After that the hit will be counted again. So if one person visits once a week for a year, they will add 52 hits to a given object.

Change how long the expiration is by adding to settings.py:

    HITCOUNT_KEEP_HIT_ACTIVE = {'days': 30, 'minutes': 30}

Accepts days, seconds, microseconds, milliseconds, minutes, hours, and weeks. It creates a datetime.timedelta object.
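The cutoff computation described above can be sketched without Django; the grace dict below mirrors the HITCOUNT_KEEP_HIT_ACTIVE setting, and datetime.now() stands in for timezone.now() in this standalone sketch:

```python
from datetime import datetime, timedelta

grace = {'days': 30, 'minutes': 30}          # e.g. HITCOUNT_KEEP_HIT_ACTIVE
cutoff = datetime.now() - timedelta(**grace)

# A hit is still "active" (not re-counted) if created on/after the cutoff.
created = datetime.now() - timedelta(days=7)
is_active = created >= cutoff
# is_active == True for the 30-day grace period above
```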
def load_yaml_config(conf_file):
    global g_config
    with open(conf_file) as fp:
        g_config = util.yaml_load(fp)
    src_dir = get_path('src_dir', None)
    if src_dir is not None:
        sys.path.insert(0, src_dir)
    for cmd in get('commands', []):
        _import(cmd)
Load a YAML configuration.

This will not update the configuration but replace it entirely.

Args:
    conf_file (str):
        Path to the YAML config. This function will not check the file
        name or extension and will just crash if the given file does not
        exist or is not a valid YAML file.
def setContent(self, type_, value):
    if type_ in [self.CONTENT_TYPE_TXT, self.CONTENT_TYPE_URL,
                 self.CONTENT_TYPE_FILE]:
        if type_ == self.CONTENT_TYPE_FILE:
            self._file = {'doc': open(value, 'rb')}
        else:
            self.addParam(type_, value)
Sets the content that is going to be sent for analysis, according to its type.

:param type_: Type of the content (text, file or url)
:param value: Value of the content
def _initialize_from_model(self, model):
    for name, value in model.__dict__.items():
        if name in self._properties:
            setattr(self, name, value)
Loads matching attribute values from the given model instance.
def merge(self, parent=None):
    if parent is None:
        parent = self.parent
    if parent is None:
        return
    self.sources = parent.sources + self.sources
    data = copy.deepcopy(parent.data)
    for key, value in sorted(self.data.items()):
        if key.endswith('+'):
            key = key.rstrip('+')
            if key in data:
                if type(data[key]) == type(value) == dict:
                    data[key].update(value)
                    continue
                try:
                    value = data[key] + value
                except TypeError as error:
                    raise utils.MergeError(
                        "MergeError: Key '{0}' in {1} ({2}).".format(
                            key, self.name, str(error)))
        data[key] = value
    self.data = data
Merge parent data
def _transform_col(self, x, i):
    return x.fillna(NAN_INT).map(self.label_encoders[i]).fillna(0)
Encode one categorical column into labels.

Args:
    x (pandas.Series): a categorical column to encode
    i (int): column index

Returns:
    x (pandas.Series): a column with labels.
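NAN_INT and the encoder mapping below are made-up stand-ins to illustrate the three chained steps: missing values become a sentinel, known categories map to labels, and unseen categories fall back to 0.

```python
import pandas as pd

NAN_INT = -99999                          # assumed sentinel for missing values
label_encoder = {'a': 1, 'b': 2, NAN_INT: 3}

x = pd.Series(['a', None, 'b', 'c'])
encoded = x.fillna(NAN_INT).map(label_encoder).fillna(0)
# -> 1.0, 3.0, 2.0, 0.0  ('c' was never seen, so it maps to 0)
```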
def remove_child(self, index):
    if index < 0:
        index = index + len(self)
    self.__children = self.__children[0:index] + self.__children[(index + 1):]
Remove the child at the given index from the current list of children.

:param int index: the index of the child to be removed
def _create_relational_field(self, attr, options):
    options['entity_class'] = attr.py_type
    options['allow_empty'] = not attr.is_required
    return EntityField, options
Creates the form element for working with entity relationships.
def is_valid_endpoint_host(interfaces, endpoint):
    result = urlparse(endpoint)
    hostname = result.hostname
    if hostname is None:
        return False
    for interface in interfaces:
        if interface == hostname:
            return False
    return True
An endpoint host name is valid if it is a URL and if the host is not the name of a network interface.
def get_tree_members(self):
    members = []
    queue = deque()
    queue.appendleft(self)
    visited = set()
    while len(queue):
        node = queue.popleft()
        if node not in visited:
            members.extend(node.get_member_info())
            queue.extendleft(node.get_children())
            visited.add(node)
    return [{attribute: member.get(attribute) for attribute in self.attr_list}
            for member in members if member]
Retrieves all members from this node of the tree down.
def check_infile_and_wp(curinf, curwp):
    if not os.path.exists(curinf):
        if curwp is None:
            TauDEM.error('You must specify either the workspace or the '
                         'full path of the input file!')
        curinf = curwp + os.sep + curinf
        curinf = os.path.abspath(curinf)
        if not os.path.exists(curinf):
            TauDEM.error('Input file %s does not exist!' % curinf)
    else:
        curinf = os.path.abspath(curinf)
        if curwp is None:
            curwp = os.path.dirname(curinf)
    return curinf, curwp
Check the existence of the given file and directory path.

1. Raise a runtime exception if both do not exist.
2. If ``curwp`` is None, set it to the base folder of ``curinf``.
def load_settings_sizes():
    page_size = AGNOCOMPLETE_DEFAULT_PAGESIZE
    settings_page_size = getattr(
        settings, 'AGNOCOMPLETE_DEFAULT_PAGESIZE', None)
    page_size = settings_page_size or page_size

    page_size_min = AGNOCOMPLETE_MIN_PAGESIZE
    settings_page_size_min = getattr(
        settings, 'AGNOCOMPLETE_MIN_PAGESIZE', None)
    page_size_min = settings_page_size_min or page_size_min

    page_size_max = AGNOCOMPLETE_MAX_PAGESIZE
    settings_page_size_max = getattr(
        settings, 'AGNOCOMPLETE_MAX_PAGESIZE', None)
    page_size_max = settings_page_size_max or page_size_max

    query_size = AGNOCOMPLETE_DEFAULT_QUERYSIZE
    settings_query_size = getattr(
        settings, 'AGNOCOMPLETE_DEFAULT_QUERYSIZE', None)
    query_size = settings_query_size or query_size

    query_size_min = AGNOCOMPLETE_MIN_QUERYSIZE
    settings_query_size_min = getattr(
        settings, 'AGNOCOMPLETE_MIN_QUERYSIZE', None)
    query_size_min = settings_query_size_min or query_size_min

    return (
        page_size,
        page_size_min,
        page_size_max,
        query_size,
        query_size_min,
    )
Load sizes from settings or fall back to the module constants.
def bloch_vector_from_state_vector(state: Sequence, index: int) -> np.ndarray:
    rho = density_matrix_from_state_vector(state, [index])
    v = np.zeros(3, dtype=np.float32)
    v[0] = 2 * np.real(rho[0][1])
    v[1] = 2 * np.imag(rho[1][0])
    v[2] = np.real(rho[0][0] - rho[1][1])
    return v
Returns the bloch vector of a qubit.

Calculates the bloch vector of the qubit at index in the wavefunction given by state, assuming state follows the standard Kronecker convention of numpy.kron.

Args:
    state: A sequence representing a wave function in which the ordering
        mapping to qubits follows the standard Kronecker convention of
        numpy.kron.
    index: index of the qubit whose bloch vector we want to find.

Returns:
    A length 3 numpy array representing the qubit's bloch vector.

Raises:
    ValueError: if the size of state is not a power of 2.
    ValueError: if the size of the state represents more than 25 qubits.
    IndexError: if index is out of range for the number of qubits
        corresponding to the state.
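Sanity-checking the component formulas above on two known single-qubit states; the helper here takes the 2x2 density matrix directly, sidestepping density_matrix_from_state_vector:

```python
import numpy as np

def bloch_vector(rho):
    # Same component formulas as above, applied to a 2x2 density matrix.
    return np.array([2 * np.real(rho[0][1]),
                     2 * np.imag(rho[1][0]),
                     np.real(rho[0][0] - rho[1][1])])

zero = np.array([[1, 0], [0, 0]], dtype=complex)  # |0><0|
plus = np.full((2, 2), 0.5, dtype=complex)        # |+><+|
# bloch_vector(zero) -> [0, 0, 1]; bloch_vector(plus) -> [1, 0, 0]
```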
def create_albaran_automatic(pk, list_lines):
    line_bd = SalesLineAlbaran.objects.filter(
        line_order__pk__in=list_lines).values_list('line_order__pk')
    if line_bd.count() == 0 or len(list_lines) != len(line_bd[0]):
        if line_bd.count() != 0:
            for x in line_bd[0]:
                list_lines.pop(list_lines.index(x))
        GenLineProduct.create_albaran_from_order(pk, list_lines)
Automatically create the delivery note (albarán).
def _deallocator(self):
    lookup = {
        "c_bool": "logical",
        "c_double": "double",
        "c_double_complex": "complex",
        "c_char": "char",
        "c_int": "int",
        "c_float": "float",
        "c_short": "short",
        "c_long": "long"
    }
    ctype = type(self.pointer).__name__.replace("LP_", "").lower()
    if ctype in lookup:
        return "dealloc_{0}_{1:d}d".format(lookup[ctype], len(self.indices))
    else:
        return None
Returns the name of the subroutine in ftypes_dealloc.f90 that can deallocate the array for this Ftype's pointer, or None if the pointer's c-type is not supported.
def collected(self, group, filename=None, host=None, location=None,
              move=True, all=True):
    ret = {
        'name': 'support.collected',
        'changes': {},
        'result': True,
        'comment': '',
    }
    location = location or tempfile.gettempdir()
    self.check_destination(location, group)
    ret['changes'] = __salt__['support.sync'](
        group, name=filename, host=host, location=location, move=move, all=all)
    return ret
Sync archives to a central place.

:param group:
:param filename:
:param host:
:param location:
:param move:
:param all:
:return:
def _compute_fixed(self):
    try:
        lon, lat = np.meshgrid(self.lon, self.lat)
    except AttributeError:
        lat = self.lat
    phi = np.deg2rad(lat)
    try:
        albedo = self.a0 + self.a2 * P2(np.sin(phi))
    except AttributeError:
        albedo = np.zeros_like(phi)
    dom = next(iter(self.domains.values()))
    self.albedo = Field(albedo, domain=dom)
Recompute any fixed quantities after a change in parameters
def get(self,coordinate_system): if coordinate_system == 'DA-DIR' or coordinate_system == 'specimen': return self.pars elif coordinate_system == 'DA-DIR-GEO' or coordinate_system == 'geographic': return self.geopars elif coordinate_system == 'DA-DIR-TILT' or coordinate_system == 'tilt-corrected': return self.tiltpars else: print("-E- no such parameters to fetch for " + coordinate_system + " in fit: " + self.name) return None
Return the pmagpy parameters dictionary associated with this fit and the given coordinate system @param: coordinate_system -> the coordinate system whose parameters to return
def LoadConfig(config_obj, config_file=None, config_fd=None, secondary_configs=None, contexts=None, reset=False, parser=ConfigFileParser): if config_obj is None or reset: config_obj = _CONFIG.MakeNewConfig() if config_file is not None: config_obj.Initialize(filename=config_file, must_exist=True, parser=parser) elif config_fd is not None: config_obj.Initialize(fd=config_fd, parser=parser) if secondary_configs: for config_file in secondary_configs: config_obj.LoadSecondaryConfig(config_file) if contexts: for context in contexts: config_obj.AddContext(context) return config_obj
Initialize a ConfigManager with the specified options. Args: config_obj: The ConfigManager object to use and update. If None, one will be created. config_file: Filename to read the config from. config_fd: A file-like object to read config data from. secondary_configs: A list of secondary config URLs to load. contexts: Add these contexts to the config object. reset: Completely wipe previous config before doing the load. parser: Specify which parser to use. Returns: The resulting config object. The one passed in, unless None was specified.
def build_list_marker_log(parser: str = 'github', list_marker: str = '.') -> list: r if (parser == 'github' or parser == 'cmark' or parser == 'gitlab' or parser == 'commonmarker' or parser == 'redcarpet'): assert list_marker in md_parser[parser]['list']['ordered'][ 'closing_markers'] list_marker_log = list() if (parser == 'github' or parser == 'cmark' or parser == 'gitlab' or parser == 'commonmarker'): list_marker_log = [ str(md_parser['github']['list']['ordered']['min_marker_number']) + list_marker for i in range(0, md_parser['github']['header']['max_levels']) ] elif parser == 'redcarpet': pass return list_marker_log
r"""Create a data structure that holds list marker information. :parameter parser: decides rules on how to compute indentations. Defaults to ``github``. :parameter list_marker: a string that contains some of the first characters of the list element. Defaults to ``.``. :type parser: str :type list_marker: str :returns: list_marker_log, the data structure. :rtype: list :raises: a built-in exception. .. note:: This function makes sense for ordered lists only.
def _check_import_source(): path_rel = '~/cltk_data/greek/software/greek_software_tlgu/tlgu.h' path = os.path.expanduser(path_rel) if not os.path.isfile(path): try: corpus_importer = CorpusImporter('greek') corpus_importer.import_corpus('greek_software_tlgu') except Exception as exc: logger.error('Failed to import TLGU: %s', exc) raise
Check if tlgu imported, if not import it.
def img(self, id): return self._serve_file(os.path.join(media_path, 'img', id))
Serve Pylons' stock images
def next(self, timeout=None): try: apply_result = self._collector._get_result(self._idx, timeout) except IndexError: self._idx = 0 raise StopIteration except: self._idx = 0 raise self._idx += 1 assert apply_result.ready() return apply_result.get(0)
Return the next result value in the sequence. Raise StopIteration at the end. Can raise the exception raised by the Job
def filter_service_by_regex_host_name(regex):
    host_re = re.compile(regex)

    def inner_filter(items):
        service = items["service"]
        if service is None:
            return False
        host = items["hosts"][service.host]
        if host is None:
            return False
        return host_re.match(host.host_name) is not None

    return inner_filter
Filter for service. Filter on regex host_name. :param regex: regex to filter on :type regex: str :return: the filter function :rtype: callable
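A minimal, self-contained sketch of this filter-factory pattern; the `SimpleNamespace` stand-ins for the service and host objects are illustrative assumptions, not the real framework types:

```python
import re
from types import SimpleNamespace

def filter_service_by_regex_host_name(regex):
    """Build a predicate that matches a service's host name against regex."""
    host_re = re.compile(regex)

    def inner_filter(items):
        service = items["service"]
        if service is None:
            return False
        host = items["hosts"].get(service.host)
        if host is None:
            return False
        return host_re.match(host.host_name) is not None

    return inner_filter

# hypothetical stand-ins for the real service/host objects
service = SimpleNamespace(host="h1")
hosts = {"h1": SimpleNamespace(host_name="web01")}
match_web = filter_service_by_regex_host_name(r"^web")
```

Returning a closure lets the regex be compiled once while the resulting predicate is applied to many items.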
def append_provenance_step(self, title, description, timestamp=None): step_time = self._provenance.append_step(title, description, timestamp) if step_time > self.last_update: self.last_update = step_time
Add a step to the provenance of the metadata :param title: The title of the step. :type title: str :param description: The content of the step :type description: str :param timestamp: the time of the step :type timestamp: datetime, str
def add_license(key): result = { 'result': False, 'retcode': -1, 'output': '' } if not has_powerpath(): result['output'] = 'PowerPath is not installed' return result cmd = '/sbin/emcpreg -add {0}'.format(key) ret = __salt__['cmd.run_all'](cmd, python_shell=True) result['retcode'] = ret['retcode'] if ret['retcode'] != 0: result['output'] = ret['stderr'] else: result['output'] = ret['stdout'] result['result'] = True return result
Add a license
def MakePartialStat(self, fd): is_dir = "Container" in fd.behaviours return { "pathspec": fd.Get(fd.Schema.PATHSPEC, ""), "st_atime": fd.Get(fd.Schema.LAST, 0), "st_blksize": 0, "st_blocks": 0, "st_ctime": 0, "st_dev": 0, "st_gid": 0, "st_ino": 0, "st_mode": self.default_dir_mode if is_dir else self.default_file_mode, "st_mtime": 0, "st_nlink": 0, "st_rdev": 0, "st_size": fd.Get(fd.Schema.SIZE, 0), "st_uid": 0 }
Try and give a 'stat' for something not in the data store. Args: fd: The object with no stat. Returns: A dictionary corresponding to what we'll say the 'stat' is for objects which are not actually files, so have no OS level stat.
def list_catalogs(results=30, start=0): result = util.callm("%s/%s" % ('catalog', 'list'), {'results': results, 'start': start}) cats = [Catalog(**util.fix(d)) for d in result['response']['catalogs']] start = result['response']['start'] total = result['response']['total'] return ResultList(cats, start, total)
Returns list of all catalogs created on this API key Args: Kwargs: results (int): An integer number of results to return start (int): An integer starting value for the result set Returns: A list of catalog objects Example: >>> catalog.list_catalogs() [<catalog - test_artist_catalog>, <catalog - test_song_catalog>, <catalog - my_songs>] >>>
def selectionComponents(self): comps = [] model = self.model() for comp in self._selectedComponents: index = model.indexByComponent(comp) if index is not None: comps.append(comp) return comps
Returns the components in this selection that are still present in the model
def _parse_qcd_segment(self, fptr): offset = fptr.tell() - 2 read_buffer = fptr.read(3) length, sqcd = struct.unpack('>HB', read_buffer) spqcd = fptr.read(length - 3) return QCDsegment(sqcd, spqcd, length, offset)
Parse the QCD segment. Parameters ---------- fptr : file Open file object. Returns ------- QCDsegment The current QCD segment.
def get_template(file):
    pattern = str(file).lower()
    while len(pattern) and not Lean.is_registered(pattern):
        pattern = os.path.basename(pattern)
        pattern = re.sub(r'^[^.]*\.?', '', pattern)
    preferred_klass = Lean.preferred_mappings[pattern] if pattern in Lean.preferred_mappings else None
    if preferred_klass:
        return preferred_klass
    klasses = Lean.template_mappings[pattern]
    template = None
    for klass in klasses:
        if hasattr(klass, 'is_engine_initialized') and callable(klass.is_engine_initialized):
            if klass.is_engine_initialized():
                template = klass
                break
    if template:
        return template
    first_failure = None
    for klass in klasses:
        try:
            return klass
        except Exception as e:
            if not first_failure:
                first_failure = e
    if first_failure:
        raise Exception(first_failure)
Look up a template class for the given filename or file extension. Return None when no implementation is found.
def set_last_hop_errors(self, last_hop): if last_hop.is_error: self.last_hop_errors.append(last_hop.error_message) return for packet in last_hop.packets: if packet.is_error: self.last_hop_errors.append(packet.error_message)
Sets the last hop's errors.
def close(self): try: self.parent_fd.fileno() except io.UnsupportedOperation: logger.debug("Not closing parent_fd - reusing existing") else: self.parent_fd.close()
Close file, see file.close
def spin_gen(particles, index, gauge=1): mat = np.zeros((2**particles, 2**particles)) flipper = 2**index for i in range(2**particles): ispin = btest(i, index) if ispin == 1: mat[i ^ flipper, i] = 1 else: mat[i ^ flipper, i] = gauge return mat
Generates the generic spin operator in the z basis for a system of N=particles and for the selected spin index, where index=0..N-1. The gauge term sets the behavior for a system away from half-filling.
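A pure-Python sketch of the same construction (numpy replaced by nested lists for illustration; `btest`, which is not shown above, is assumed to test a single bit). With one particle and gauge=1 it reproduces the Pauli-X flip matrix:

```python
def btest(i, index):
    # return bit `index` of integer i (0 or 1)
    return (i >> index) & 1

def spin_gen(particles, index, gauge=1):
    """Generic spin operator in the z basis for spin `index` of N=`particles`."""
    dim = 2 ** particles
    mat = [[0] * dim for _ in range(dim)]
    flipper = 2 ** index
    for i in range(dim):
        if btest(i, index) == 1:
            mat[i ^ flipper][i] = 1
        else:
            mat[i ^ flipper][i] = gauge
    return mat
```

Each basis state `i` is mapped to `i ^ flipper`, i.e. the selected spin is flipped, with the gauge factor attached to up-flips only.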
def get_or_create_pull(github_repo, title, body, head, base, *, none_if_no_commit=False): try: return github_repo.create_pull( title=title, body=body, head=head, base=base ) except GithubException as err: err_message = err.data['errors'][0].get('message', '') if err.status == 422 and err_message.startswith('A pull request already exists'): _LOGGER.info('PR already exists, get this PR') return list(github_repo.get_pulls( head=head, base=base ))[0] elif none_if_no_commit and err.status == 422 and err_message.startswith('No commits between'): _LOGGER.info('No PR possible since head %s and base %s are the same', head, base) return None else: _LOGGER.warning("Unable to create PR:\n%s", err.data) raise except Exception as err: response = traceback.format_exc() _LOGGER.warning("Unable to create PR:\n%s", response) raise
Try to create the PR. If the PR exists, try to find it instead. Raises otherwise. You should always use the complete head syntax "org:branch", since the syntax is required in case of listing. if "none_if_no_commit" is set, return None instead of raising exception if the problem is that head and base are the same.
def __set_cache(self, tokens): if DefaultCompleter._DefaultCompleter__tokens.get(self.__language): return DefaultCompleter._DefaultCompleter__tokens[self.__language] = tokens
Sets the tokens cache. :param tokens: Completer tokens list. :type tokens: tuple or list
def update_wrapper(wrapper, wrapped, assigned = functools.WRAPPER_ASSIGNMENTS, updated = functools.WRAPPER_UPDATES): assigned = tuple(attr for attr in assigned if hasattr(wrapped, attr)) wrapper = functools.update_wrapper(wrapper, wrapped, assigned, updated) wrapper.__wrapped__ = wrapped return wrapper
Patch two bugs in functools.update_wrapper.
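Presumably the two bugs are that plain `functools.update_wrapper` raised AttributeError (on Python 2) when the wrapped callable lacked one of the assigned attributes — e.g. `functools.partial` objects have no `__name__` — and that `__wrapped__` was not set on older versions. A quick check of the patched behavior:

```python
import functools

def update_wrapper(wrapper, wrapped,
                   assigned=functools.WRAPPER_ASSIGNMENTS,
                   updated=functools.WRAPPER_UPDATES):
    # only copy attributes the wrapped object actually has
    assigned = tuple(attr for attr in assigned if hasattr(wrapped, attr))
    wrapper = functools.update_wrapper(wrapper, wrapped, assigned, updated)
    wrapper.__wrapped__ = wrapped
    return wrapper

# a partial has no __name__ attribute of its own
half = functools.partial(pow, 2)

def proxy(*args):
    return half(*args)

proxy = update_wrapper(proxy, half)
```

Because `__name__` is filtered out for the partial, `proxy` keeps its own name while still gaining the `__wrapped__` back-reference.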
def update_state_success(self, model_output):
    response = requests.post(
        self.links[REF_UPDATE_STATE_SUCCESS],
        files={'file': open(model_output, 'rb')}
    )
    if response.status_code != 200:
        # prefer the server-provided message; fall back to the raw body
        try:
            message = json.loads(response.text)['message']
        except (ValueError, KeyError):
            message = 'invalid state change: ' + str(response.text)
        raise ValueError(message)
    return self.refresh()
Update the state of the model run to 'SUCCESS'. Expects a model output result file. Will upload the file before changing the model run state. Raises an exception if update fails or resource is unknown. Parameters ---------- model_output : string Path to model run output file Returns ------- ModelRunHandle Refreshed run handle.
def get_base_layout(figs):
    layout = {}
    for fig in figs:
        if not isinstance(fig, dict):
            fig = fig.to_dict()
        for k, v in fig['layout'].items():
            layout[k] = v
    return layout
Generates a layout with the union of all properties of multiple figures' layouts Parameters: ----------- figs : list(Figure) List of Plotly Figures
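Since the merge only touches each figure's 'layout' dict, the behavior can be checked with plain dicts standing in for Plotly Figures (later figures win on conflicting keys):

```python
def get_base_layout(figs):
    """Union of the layout properties of several figures; later figures win."""
    layout = {}
    for fig in figs:
        if not isinstance(fig, dict):
            fig = fig.to_dict()  # Plotly Figure -> plain dict
        for k, v in fig['layout'].items():
            layout[k] = v
    return layout
```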
def clear_file_systems(self): self._source_url = None self.dataset.config.library.source.url = None self._source_fs = None self._build_url = None self.dataset.config.library.build.url = None self._build_fs = None self.dataset.commit()
Remove references to build and source file systems, reverting to the defaults
def write_additional(self, productversion, channel): self.fileobj.seek(self.additional_offset) extras = extras_header.build(dict( count=1, sections=[dict( channel=six.u(channel), productversion=six.u(productversion), size=len(channel) + len(productversion) + 2 + 8, padding=b'', )], )) self.fileobj.write(extras) self.last_offset = self.fileobj.tell()
Write the additional information to the MAR header. Args: productversion (str): product and version string channel (str): channel string
def dict_filter(*args, **kwargs): result = {} for arg in itertools.chain(args, (kwargs,)): dict_filter_update(result, arg) return result
Merge all values into a single dict with all None values removed.
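`dict_filter_update` is not shown in this snippet; a plausible implementation (an assumption: copy non-None values and drop keys explicitly set to None) makes the merge behavior concrete:

```python
import itertools

def dict_filter_update(target, source):
    # assumed helper: copy non-None values, drop keys explicitly set to None
    for k, v in source.items():
        if v is None:
            target.pop(k, None)
        else:
            target[k] = v

def dict_filter(*args, **kwargs):
    """Merge all values into a single dict with all None values removed."""
    result = {}
    for arg in itertools.chain(args, (kwargs,)):
        dict_filter_update(result, arg)
    return result
```

Keyword arguments are merged last, so they override (or, when None, remove) keys from the positional dicts.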
def __update_state(self): if self._state.active: self._state = self.__get_state_by_id(self.job_config.job_id)
Fetches most up to date state from db.
def fixed(ctx, number, decimals=2, no_commas=False):
    value = _round(ctx, number, decimals)
    # embed `decimals` in the format spec so the output width matches the rounding
    format_str = '{{:.{0}f}}'.format(decimals) if no_commas else '{{:,.{0}f}}'.format(decimals)
    return format_str.format(value)
Formats the given number in decimal format using a period and commas
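With the context-aware `_round` approximated by Python's built-in `round` (an assumption for illustration), the formatting logic can be exercised directly:

```python
def fixed(number, decimals=2, no_commas=False):
    """Format number with `decimals` places, optionally with thousands commas."""
    value = round(number, decimals)  # stand-in for the context-aware _round
    fmt = '{{:.{0}f}}'.format(decimals) if no_commas else '{{:,.{0}f}}'.format(decimals)
    return fmt.format(value)
```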
def extract_tar(tar_path, target_folder): with tarfile.open(tar_path, 'r') as archive: archive.extractall(target_folder)
Extract the content of the tar-file at `tar_path` into `target_folder`.
def run_program(self, name, arguments=[], timeout=30, exclusive=False): logger.debug("Running program ...") if exclusive: kill_longrunning(self.config) prog = RunningProgram(self, name, arguments, timeout) return prog.expect_end()
Runs a program in the working directory to completion. Args: name (str): The name of the program to be executed. arguments (tuple): Command-line arguments for the program. timeout (int): The timeout for execution. exclusive (bool): Prevent parallel validation runs on the test machines, e.g. when doing performance measurements for submitted code. Returns: tuple: A tuple of the exit code, as reported by the operating system, and the output produced during the execution.
def show_schema(self, tables=None): tables = tables if tables else self.tables for t in tables: self._printer('\t{0}'.format(t)) for col in self.get_schema(t, True): self._printer('\t\t{0:30} {1:15} {2:10} {3:10} {4:10} {5:10}'.format(*col))
Print schema information.
def filter_url(url, **kwargs): d = parse_url_to_dict(url) d.update(kwargs) return unparse_url_dict({k: v for k, v in list(d.items()) if v})
filter a URL by returning a URL with only the parts specified in the keywords
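`parse_url_to_dict`/`unparse_url_dict` are external helpers; roughly the same effect can be sketched with the standard library's `urllib.parse` (passing None for a part drops it):

```python
from urllib.parse import urlsplit, urlunsplit

def filter_url(url, **kwargs):
    """Return `url` with the given parts overridden; a part set to None is dropped."""
    parts = urlsplit(url)._asdict()  # scheme, netloc, path, query, fragment
    parts.update(kwargs)
    return urlunsplit(tuple(parts[k] or '' for k in
                            ('scheme', 'netloc', 'path', 'query', 'fragment')))
```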
def resize_file(fobj, diff, BUFFER_SIZE=2 ** 16): fobj.seek(0, 2) filesize = fobj.tell() if diff < 0: if filesize + diff < 0: raise ValueError fobj.truncate(filesize + diff) elif diff > 0: try: while diff: addsize = min(BUFFER_SIZE, diff) fobj.write(b"\x00" * addsize) diff -= addsize fobj.flush() except IOError as e: if e.errno == errno.ENOSPC: fobj.truncate(filesize) raise
Resize a file by `diff`. New space will be filled with zeros. Args: fobj (fileobj) diff (int): amount of size to change Raises: IOError
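The grow/shrink behavior can be exercised against an in-memory file, since `io.BytesIO` supports `seek`/`tell`/`truncate`/`write` just like a real file object:

```python
import errno
import io

def resize_file(fobj, diff, BUFFER_SIZE=2 ** 16):
    """Resize a file by `diff` bytes; new space is zero-filled."""
    fobj.seek(0, 2)
    filesize = fobj.tell()
    if diff < 0:
        if filesize + diff < 0:
            raise ValueError("cannot shrink below zero length")
        fobj.truncate(filesize + diff)
    elif diff > 0:
        try:
            while diff:
                addsize = min(BUFFER_SIZE, diff)
                fobj.write(b"\x00" * addsize)
                diff -= addsize
            fobj.flush()
        except IOError as e:
            # roll back to the original size if the disk filled up
            if e.errno == errno.ENOSPC:
                fobj.truncate(filesize)
            raise

buf = io.BytesIO(b"abc")
resize_file(buf, 2)
```

Growing is done in `BUFFER_SIZE` chunks so a large `diff` never allocates one huge zero buffer.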
def _attacher(self, key, value, attributes, timed): id, extra = self._get_attach_id(key, value, attributes) record = self._create_attach_record(id, timed) if extra: record.update(extra) return record
Create a full attachment record payload.
def install_yum_priorities(distro, _yum=None): yum = _yum or pkg_managers.yum package_name = 'yum-plugin-priorities' if distro.normalized_name == 'centos': if distro.release[0] != '6': package_name = 'yum-priorities' yum(distro.conn, package_name)
EPEL started packaging Ceph so we need to make sure that the ceph.repo we install has a higher priority than the EPEL repo so that when installing Ceph it will come from the repo file we create. The name of the package changed back and forth (!) since CentOS 4: From the CentOS wiki:: Note: This plugin has carried at least two differing names over time. It is named yum-priorities on CentOS-5 but was named yum-plugin-priorities on CentOS-4. CentOS-6 has reverted to yum-plugin-priorities. :params _yum: Used for testing, so we can inject a fake yum
def get_lacp_mode(self, name):
    members = self.get_members(name)
    if not members:
        return DEFAULT_LACP_MODE
    for member in members:
        match = re.search(r'channel-group\s\d+\smode\s(?P<value>.+)',
                          self.get_block('^interface %s' % member))
        if match:
            return match.group('value')
    return DEFAULT_LACP_MODE
Returns the LACP mode for the specified Port-Channel interface Args: name(str): The Port-Channel interface name to return the LACP mode for from the configuration Returns: The configured LACP mode for the interface. Valid mode values are 'on', 'passive', 'active'
def get_asset_temporal_session_for_repository(self, repository_id=None): if not repository_id: raise NullArgument() if not self.supports_asset_temporal(): raise Unimplemented() try: from . import sessions except ImportError: raise OperationFailed('import error') try: session = sessions.AssetTemporalSession(repository_id, proxy=self._proxy, runtime=self._runtime) except AttributeError: raise OperationFailed('attribute error') return session
Gets the session for retrieving temporal coverage of an asset for the given repository. arg: repository_id (osid.id.Id): the Id of the repository return: (osid.repository.AssetTemporalSession) - an AssetTemporalSession raise: NotFound - repository_id not found raise: NullArgument - repository_id is null raise: OperationFailed - unable to complete request raise: Unimplemented - supports_asset_temporal() or supports_visible_federation() is false compliance: optional - This method must be implemented if supports_asset_temporal() and supports_visible_federation() are true.
def endpoints_minima(self, slope_cutoff=5e-3):
    energies = self.energies
    try:
        sp = self.spline()
    except Exception:
        print("Energy spline failed.")
        return None
    der = sp.derivative()
    der_energies = der(range(len(energies)))
    return {"polar": abs(der_energies[-1]) <= slope_cutoff,
            "nonpolar": abs(der_energies[0]) <= slope_cutoff}
Test if spline endpoints are at minima for a given slope cutoff.
def get_package_version(): base = os.path.abspath(os.path.dirname(__file__)) with open(os.path.join(base, 'policy', '__init__.py'), mode='rt', encoding='utf-8') as initf: for line in initf: m = version.match(line.strip()) if not m: continue return m.groups()[0]
return package version without importing it
def _create_wx_app(): wxapp = wx.GetApp() if wxapp is None: wxapp = wx.App(False) wxapp.SetExitOnFrameDelete(True) _create_wx_app.theWxApp = wxapp
Creates a wx.App instance if it has not been created so far.
def infer_type(self, in_type): return in_type, [in_type[0]]*len(self.list_outputs()), \ [in_type[0]]*len(self.list_auxiliary_states())
infer_type interface. override to create new operators Parameters ---------- in_type : list of np.dtype list of argument types in the same order as declared in list_arguments. Returns ------- in_type : list list of argument types. Can be modified from in_type. out_type : list list of output types calculated from in_type, in the same order as declared in list_outputs. aux_type : Optional, list list of aux types calculated from in_type, in the same order as declared in list_auxiliary_states.
def fix_errors(config, validation): for e in flatten_errors(config, validation): sections, key, err = e sec = config for section in sections: sec = sec[section] if key is not None: sec[key] = sec.default_values.get(key, sec[key]) else: sec.walk(set_to_default) return config
Replace errors with their default values :param config: a validated ConfigObj to fix :type config: ConfigObj :param validation: the results of the validation :type validation: ConfigObj :returns: The altered config (does alter it in place though) :raises: None
def flush(self, auth, resource, options=None, defer=False): args = [resource] if options is not None: args.append(options) return self._call('flush', auth, args, defer)
Empties the specified resource of data per specified constraints. Args: auth: <cik> resource: resource to empty. options: Time limits.
def set(self, key, value): if hasattr(value, 'labels'): if 'VARIABLE' in env.config['SOS_DEBUG'] or 'ALL' in env.config[ 'SOS_DEBUG']: env.log_to_file( 'VARIABLE', f"Set {key} to {short_repr(value)} with labels {short_repr(value.labels)}" ) else: if 'VARIABLE' in env.config['SOS_DEBUG'] or 'ALL' in env.config[ 'SOS_DEBUG']: env.log_to_file( 'VARIABLE', f"Set {key} to {short_repr(value)} of type {value.__class__.__name__}" ) self._dict[key] = value
A shortcut to set value for key, logging the assignment at the VARIABLE debug level when that debugging channel is enabled.
def init(self, initial): if initial <= 0: return False step = initial // BLOCK_SIZE with self._lock: init = self._atomic_long.compare_and_set(0, step + 1).result() if init: self._local = step self._residue = (initial % BLOCK_SIZE) + 1 return init
Try to initialize this IdGenerator instance with the given id. The first generated id will be 1 greater than id. :param initial: (long), the given id. :return: (bool), ``True`` if initialization succeeded, ``False`` if id is less than 0.
def delete_token():
    username = get_admin()[0]
    admins = get_couchdb_admins()
    if username in admins:
        print('I delete {} CouchDB user'.format(username))
        delete_couchdb_admin(username)
    if os.path.isfile(LOGIN_FILENAME):
        print('I delete {} token file'.format(LOGIN_FILENAME))
        os.remove(LOGIN_FILENAME)
Delete current token, file & CouchDB admin user
def dollars_to_cents(s, allow_negative=False): if not s: return if isinstance(s, string_types): s = ''.join(RE_NUMBER.findall(s)) dollars = int(round(float(s) * 100)) if not allow_negative and dollars < 0: raise ValueError('Negative values not permitted.') return dollars
Given a string or integer representing dollars, return an integer of equivalent cents, in an input-resilient way. This works by stripping any non-numeric characters before attempting to cast the value. Examples:: >>> dollars_to_cents('$1') 100 >>> dollars_to_cents('1') 100 >>> dollars_to_cents(1) 100 >>> dollars_to_cents('1e2') 10000 >>> dollars_to_cents('-1$', allow_negative=True) -100 >>> dollars_to_cents('1 dollar') 100
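A self-contained version for experimentation — `RE_NUMBER` is not shown above, so the pattern here is an assumption (keep digits, sign, decimal point, and exponent characters):

```python
import re

# assumed pattern: keep digits, '.', exponent letters, and signs
RE_NUMBER = re.compile(r'[0-9.eE+-]+')

def dollars_to_cents(s, allow_negative=False):
    """Convert a dollar string or number to an integer count of cents."""
    if not s:
        return None
    if isinstance(s, str):
        s = ''.join(RE_NUMBER.findall(s))
    cents = int(round(float(s) * 100))
    if not allow_negative and cents < 0:
        raise ValueError('Negative values not permitted.')
    return cents
```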
def add_child_bin(self, bin_id, child_id): if self._catalog_session is not None: return self._catalog_session.add_child_catalog(catalog_id=bin_id, child_id=child_id) return self._hierarchy_session.add_child(id_=bin_id, child_id=child_id)
Adds a child to a bin. arg: bin_id (osid.id.Id): the ``Id`` of a bin arg: child_id (osid.id.Id): the ``Id`` of the new child raise: AlreadyExists - ``bin_id`` is already a parent of ``child_id`` raise: NotFound - ``bin_id`` or ``child_id`` not found raise: NullArgument - ``bin_id`` or ``child_id`` is ``null`` raise: OperationFailed - unable to complete request raise: PermissionDenied - authorization failure *compliance: mandatory -- This method must be implemented.*