def call_me(iocb):
    if _debug:
        call_me._debug("call_me %r", iocb)
    print("call me, %r or %r" % (iocb.ioResponse, iocb.ioError))
When a controller completes the processing of a request, the IOCB can contain one or more functions to be called.
def set_field_value(document_data, field_path, value):
    current = document_data
    for element in field_path.parts[:-1]:
        current = current.setdefault(element, {})
    if value is _EmptyDict:
        value = {}
    current[field_path.parts[-1]] = value
Set a value into a document for a field_path
def distance(self, other):
    return distance(self.lat, self.lon, None, other.lat, other.lon, None)
Distance between points.

Args:
    other (:obj:`Point`)

Returns:
    float: Distance in km
def _improve_class_docs(app, cls, lines):
    if issubclass(cls, models.Model):
        _add_model_fields_as_params(app, cls, lines)
    elif issubclass(cls, forms.Form):
        _add_form_fields(cls, lines)
Improve the documentation of a class.
def json_expand(json_op):
    if type(json_op) == dict and 'json' in json_op:
        return update_in(json_op, ['json'], safe_json_loads)
    return json_op
For custom_json ops.
def get_ituz(self, callsign, timestamp=timestamp_now):
    return self.get_all(callsign, timestamp)[const.ITUZ]
Returns ITU Zone of a callsign.

Args:
    callsign (str): Amateur Radio callsign
    timestamp (datetime, optional): datetime in UTC (tzinfo=pytz.UTC)

Returns:
    int: the callsign's ITU Zone

Raises:
    KeyError: No ITU Zone found for callsign

Note:
    Currently, only the Country-files.com lookup database contains ITU Zones
def setStyle(self, stylename):
    self.style = importlib.import_module(stylename)
    newHandler = Handler()
    newHandler.setFormatter(Formatter(self.style))
    self.addHandler(newHandler)
Adjusts the output format of messages based on the style name provided.

Styles are loaded like Python modules, so you can import styles from your own modules or use the ones in fastlog.styles. Available styles can be found under /fastlog/styles/. The default style is 'fastlog.styles.pwntools'.
def _fix_review_dates(self, item):
    for date_field in ['timestamp', 'createdOn', 'lastUpdated']:
        if date_field in item.keys():
            date_ts = item[date_field]
            item[date_field] = unixtime_to_datetime(date_ts).isoformat()
    if 'patchSets' in item.keys():
        for patch in item['patchSets']:
            pdate_ts = patch['createdOn']
            patch['createdOn'] = unixtime_to_datetime(pdate_ts).isoformat()
            if 'approvals' in patch:
                for approval in patch['approvals']:
                    adate_ts = approval['grantedOn']
                    approval['grantedOn'] = unixtime_to_datetime(adate_ts).isoformat()
    if 'comments' in item.keys():
        for comment in item['comments']:
            cdate_ts = comment['timestamp']
            comment['timestamp'] = unixtime_to_datetime(cdate_ts).isoformat()
Convert dates so Elasticsearch detects them.
def vars_(self):
    return [x for x in self[self.current_scope].values()
            if x.class_ == CLASS.var]
Returns symbol instances corresponding to variables of the current scope.
def _get_hanging_wall_coeffs_rx(self, C, rup, r_x):
    r_1 = rup.width * cos(radians(rup.dip))
    r_2 = 62.0 * rup.mag - 350.0
    fhngrx = np.zeros(len(r_x))
    idx = np.logical_and(r_x >= 0., r_x < r_1)
    fhngrx[idx] = self._get_f1rx(C, r_x[idx], r_1)
    idx = r_x >= r_1
    f2rx = self._get_f2rx(C, r_x[idx], r_1, r_2)
    f2rx[f2rx < 0.0] = 0.0
    fhngrx[idx] = f2rx
    return fhngrx
Returns the hanging wall r-x scaling term defined in equations 7 to 12.
def setup(self, url, stream=True, post=False, parameters=None, timeout=None):
    self.close_response()
    self.response = None
    try:
        if post:
            full_url, parameters = self.get_url_params_for_post(url, parameters)
            self.response = self.session.post(full_url, data=parameters,
                                              stream=stream, timeout=timeout)
        else:
            self.response = self.session.get(self.get_url_for_get(url, parameters),
                                             stream=stream, timeout=timeout)
        self.response.raise_for_status()
    except Exception as e:
        raisefrom(DownloadError, 'Setup of Streaming Download of %s failed!' % url, e)
    return self.response
Set up download from provided url, returning the response.

Args:
    url (str): URL to download
    stream (bool): Whether to stream download. Defaults to True.
    post (bool): Whether to use POST instead of GET. Defaults to False.
    parameters (Optional[Dict]): Parameters to pass. Defaults to None.
    timeout (Optional[float]): Timeout for connecting to URL. Defaults to None (no timeout).

Returns:
    requests.Response: requests.Response object
def reassign_authorization_to_vault(self, authorization_id, from_vault_id, to_vault_id):
    self.assign_authorization_to_vault(authorization_id, to_vault_id)
    try:
        self.unassign_authorization_from_vault(authorization_id, from_vault_id)
    except:
        self.unassign_authorization_from_vault(authorization_id, to_vault_id)
        raise
Moves an ``Authorization`` from one ``Vault`` to another.

Mappings to other ``Vaults`` are unaffected.

arg:    authorization_id (osid.id.Id): the ``Id`` of the ``Authorization``
arg:    from_vault_id (osid.id.Id): the ``Id`` of the current ``Vault``
arg:    to_vault_id (osid.id.Id): the ``Id`` of the destination ``Vault``
raise:  NotFound - ``authorization_id, from_vault_id,`` or ``to_vault_id`` not found or ``authorization_id`` not mapped to ``from_vault_id``
raise:  NullArgument - ``authorization_id, from_vault_id,`` or ``to_vault_id`` is ``null``
raise:  OperationFailed - unable to complete request
raise:  PermissionDenied - authorization failure
*compliance: mandatory -- This method must be implemented.*
def missing_count(self):
    if self.means:
        return self.means.missing_count
    return self._cube_dict["result"].get("missing", 0)
Numeric count of missing rows in the cube response.
def _repr_html_(self):
    out = "<table class='taqltable'>\n"
    if not (self.name()[:4] == "Col_"):
        out += "<tr>"
        out += "<th><b>" + self.name() + "</b></th>"
        out += "</tr>"
    cropped = False
    rowcount = 0
    colkeywords = self.getkeywords()
    for row in self:
        out += "\n<tr>"
        out += "<td>" + _format_cell(row, colkeywords) + "</td>\n"
        out += "</tr>\n"
        rowcount += 1
        out += "\n"
        if rowcount >= 20:
            cropped = True
            break
    if out[-2:] == "\n\n":
        out = out[:-1]
    out += "</table>"
    if cropped:
        out += "<p style='text-align:center'>(" + str(self.nrows() - 20) + " more rows)</p>\n"
    return out
Give a nice representation of columns in notebooks.
def xidz(numerator, denominator, value_if_denom_is_zero):
    small = 1e-6
    if abs(denominator) < small:
        return value_if_denom_is_zero
    else:
        return numerator * 1.0 / denominator
Implements Vensim's XIDZ function.

This function executes a division that is robust to the denominator being zero. In the case of a zero denominator, the final argument is returned.

Parameters
----------
numerator: float
denominator: float
    Components of the division operation.
value_if_denom_is_zero: float
    The value to return if the denominator is zero.

Returns
-------
numerator / denominator if abs(denominator) >= 1e-6;
otherwise, returns value_if_denom_is_zero.
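A quick check of the XIDZ behavior; the function is reproduced from the entry above so the snippet runs standalone:

```python
def xidz(numerator, denominator, value_if_denom_is_zero):
    # Safe division: fall back to the third argument near zero.
    small = 1e-6
    if abs(denominator) < small:
        return value_if_denom_is_zero
    return numerator * 1.0 / denominator

print(xidz(10, 2, 0))   # 5.0
print(xidz(10, 0, 99))  # 99
```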
def all_subclasses(cls):
    subclasses = cls.__subclasses__()
    return subclasses + [g for s in subclasses for g in all_subclasses(s)]
Given a class `cls`, this recursive function returns a list with all subclasses, subclasses of subclasses, and so on.
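A small sketch of the recursion on a toy hierarchy (the function is reproduced here so the snippet is self-contained; the classes A-D are made up for illustration):

```python
def all_subclasses(cls):
    # Direct subclasses, then recursively their subclasses.
    subclasses = cls.__subclasses__()
    return subclasses + [g for s in subclasses for g in all_subclasses(s)]

class A: pass
class B(A): pass
class C(A): pass
class D(B): pass

print([c.__name__ for c in all_subclasses(A)])  # ['B', 'C', 'D']
```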
def start_volume(name, force=False):
    cmd = 'volume start {0}'.format(name)
    if force:
        cmd = '{0} force'.format(cmd)
    volinfo = info(name)
    if name not in volinfo:
        log.error("Cannot start non-existing volume %s", name)
        return False
    if not force and volinfo[name]['status'] == '1':
        log.info("Volume %s already started", name)
        return True
    return _gluster(cmd)
Start a gluster volume

name
    Volume name

force
    Force the volume start even if the volume is started

.. versionadded:: 2015.8.4

CLI Example:

.. code-block:: bash

    salt '*' glusterfs.start mycluster
def copy_folder_content(src, dst):
    for file in os.listdir(src):
        file_path = os.path.join(src, file)
        dst_file_path = os.path.join(dst, file)
        if os.path.isdir(file_path):
            shutil.copytree(file_path, dst_file_path)
        else:
            shutil.copyfile(file_path, dst_file_path)
Copy all content in src directory to dst directory. The src and dst must exist.
def gen_lines_from_binary_files(
        files: Iterable[BinaryIO],
        encoding: str = UTF8) -> Generator[str, None, None]:
    for file in files:
        for byteline in file:
            line = byteline.decode(encoding).strip()
            yield line
Generates lines from binary files. Strips out newlines.

Args:
    files: iterable of :class:`BinaryIO` file-like objects
    encoding: encoding to use

Yields:
    each line of all the files
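A self-contained sketch using in-memory `io.BytesIO` streams in place of real files; the `UTF8` constant from the original module is assumed to be the string "utf-8":

```python
import io
from typing import BinaryIO, Generator, Iterable

UTF8 = "utf-8"  # assumed value of the module-level constant

def gen_lines_from_binary_files(
        files: Iterable[BinaryIO],
        encoding: str = UTF8) -> Generator[str, None, None]:
    # Decode each raw line and strip surrounding whitespace/newlines.
    for file in files:
        for byteline in file:
            yield byteline.decode(encoding).strip()

files = [io.BytesIO(b"alpha\nbeta\n"), io.BytesIO(b"gamma\n")]
print(list(gen_lines_from_binary_files(files)))  # ['alpha', 'beta', 'gamma']
```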
def purge_all(user=None, fast=False):
    user = user or getpass.getuser()
    if os.path.exists(datadir):
        if fast:
            shutil.rmtree(datadir)
            print('Removed %s' % datadir)
        else:
            for fname in os.listdir(datadir):
                mo = re.match(r'calc_(\d+)\.hdf5', fname)
                if mo is not None:
                    calc_id = int(mo.group(1))
                    purge_one(calc_id, user)
Remove all calculations of the given user
def rollforward(self, date):
    if self.onOffset(date):
        return date
    else:
        return date + QuarterBegin(month=self.month)
Roll date forward to nearest start of quarter
def deep_merge_dict(a, b):
    if not isinstance(a, dict):
        raise TypeError("a must be a dict, but found %s" % a.__class__.__name__)
    if not isinstance(b, dict):
        raise TypeError("b must be a dict, but found %s" % b.__class__.__name__)
    _a = copy(a)
    _b = copy(b)
    for key_b, val_b in iteritems(_b):
        if isinstance(val_b, dict):
            if key_b not in _a or not isinstance(_a[key_b], dict):
                _a[key_b] = {}
            _a[key_b] = deep_merge_dict(_a[key_b], val_b)
        else:
            _a[key_b] = val_b
    return _a
Deep merges dictionary b into dictionary a.
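A runnable sketch of the merge semantics. It simplifies the original by using Python 3's `dict.items()` instead of six's `iteritems` and by dropping the type checks; values from `b` win, and nested dicts are merged recursively:

```python
from copy import copy

def deep_merge_dict(a, b):
    # Shallow-copy a, then fold b's entries in, recursing on dicts.
    _a = copy(a)
    for key, val in b.items():
        if isinstance(val, dict):
            if key not in _a or not isinstance(_a[key], dict):
                _a[key] = {}
            _a[key] = deep_merge_dict(_a[key], val)
        else:
            _a[key] = val
    return _a

merged = deep_merge_dict({"x": 1, "n": {"a": 1}}, {"n": {"b": 2}, "y": 3})
print(merged)  # {'x': 1, 'n': {'a': 1, 'b': 2}, 'y': 3}
```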
def run_command(self, data):
    command = data.get("command")
    if self.debug:
        self.py3_wrapper.log("Running remote command %s" % command)
    if command == "refresh":
        self.refresh(data)
    elif command == "refresh_all":
        self.py3_wrapper.refresh_modules()
    elif command == "click":
        self.click(data)
Check the given command and send it to the correct dispatcher.
def SetFields(fields):
    @use_context
    @use_no_input
    def _SetFields(context):
        nonlocal fields
        if not context.output_type:
            context.set_output_fields(fields)
        return NOT_MODIFIED
    return _SetFields
Transformation factory that sets the field names on first iteration, without touching the values.

:param fields:
:return: callable
def UpdateNumberOfWarnings(
        self, number_of_consumed_warnings, number_of_produced_warnings):
    consumed_warnings_delta = 0
    if number_of_consumed_warnings is not None:
        if number_of_consumed_warnings < self.number_of_consumed_warnings:
            raise ValueError(
                'Number of consumed warnings smaller than previous update.')
        consumed_warnings_delta = (
            number_of_consumed_warnings - self.number_of_consumed_warnings)
        self.number_of_consumed_warnings = number_of_consumed_warnings
        self.number_of_consumed_warnings_delta = consumed_warnings_delta
    produced_warnings_delta = 0
    if number_of_produced_warnings is not None:
        if number_of_produced_warnings < self.number_of_produced_warnings:
            raise ValueError(
                'Number of produced warnings smaller than previous update.')
        produced_warnings_delta = (
            number_of_produced_warnings - self.number_of_produced_warnings)
        self.number_of_produced_warnings = number_of_produced_warnings
        self.number_of_produced_warnings_delta = produced_warnings_delta
    return consumed_warnings_delta > 0 or produced_warnings_delta > 0
Updates the number of warnings.

Args:
    number_of_consumed_warnings (int): total number of warnings consumed by the process.
    number_of_produced_warnings (int): total number of warnings produced by the process.

Returns:
    bool: True if either number of warnings has increased.

Raises:
    ValueError: if the consumed or produced number of warnings is smaller than the value of the previous update.
def reference_axis_from_chains(chains):
    if not len(set([len(x) for x in chains])) == 1:
        raise ValueError("All chains must be of the same length")
    coords = [numpy.array(chains[0].primitive.coordinates)]
    orient_vector = polypeptide_vector(chains[0])
    for i, c in enumerate(chains[1:]):
        if is_acute(polypeptide_vector(c), orient_vector):
            coords.append(numpy.array(c.primitive.coordinates))
        else:
            coords.append(numpy.flipud(numpy.array(c.primitive.coordinates)))
    reference_axis = numpy.mean(numpy.array(coords), axis=0)
    return Primitive.from_coordinates(reference_axis)
Average coordinates from a set of primitives calculated from Chains.

Parameters
----------
chains : list(Chain)

Returns
-------
reference_axis : numpy.array
    The averaged (x, y, z) coordinates of the primitives for the list of Chains. In the case of a coiled coil barrel, this would give the central axis for calculating e.g. Crick angles.

Raises
------
ValueError
    If the Chains are not all of the same length.
def recursive_copy(source, destination):
    if os.path.isdir(source):
        copy_tree(source, destination)
A wrapper around distutils.dir_util.copy_tree that won't throw any exception when the source directory does not exist.

Args:
    source (str): source path
    destination (str): destination path
def _refreshNodeFromTarget(self):
    for key, value in self.viewBox.state.items():
        if key != "limits":
            childItem = self.childByNodeName(key)
            childItem.data = value
        else:
            for limitKey, limitValue in value.items():
                limitChildItem = self.limitsItem.childByNodeName(limitKey)
                limitChildItem.data = limitValue
Updates the config settings
def retrieve_product(self, product_id):
    response = self.request(E.retrieveProductSslCertRequest(
        E.id(product_id)
    ))
    return response.as_model(SSLProduct)
Retrieve details on a single product.
def check_mod_enabled(mod):
    if mod.endswith('.load') or mod.endswith('.conf'):
        mod_name = mod[:-5]
    else:
        mod_name = mod
    cmd = 'a2enmod -l'
    try:
        active_mods = __salt__['cmd.run'](cmd, python_shell=False).split(' ')
    except Exception as e:
        return e
    return mod_name in active_mods
Checks to see if the specific apache mod is enabled.

This will only be functional on operating systems that support `a2enmod -l` to list the enabled mods.

CLI Example:

.. code-block:: bash

    salt '*' apache.check_mod_enabled status
def check_validation_level(validation_level):
    if validation_level not in (VALIDATION_LEVEL.QUIET,
                                VALIDATION_LEVEL.STRICT,
                                VALIDATION_LEVEL.TOLERANT):
        raise UnknownValidationLevel
Validate the given validation level.

:type validation_level: ``int``
:param validation_level: validation level (see :class:`hl7apy.consts.VALIDATION_LEVEL`)
:raises: :exc:`hl7apy.exceptions.UnknownValidationLevel` if the given validation level is unsupported
def model_info(model_dir: Optional[str] = None) -> Tuple[str, bool]:
    if model_dir is None:
        try:
            model_dir = resource_filename(PACKAGE, DATADIR.format('model'))
        except DistributionNotFound as error:
            LOGGER.warning("Cannot load model from packages: %s", error)
            model_dir = str(DATA_FALLBACK.joinpath('model').absolute())
        is_default_model = True
    else:
        is_default_model = False
    model_path = Path(model_dir)
    model_path.mkdir(exist_ok=True)
    LOGGER.debug("Using model: %s, default: %s", model_path, is_default_model)
    return (model_dir, is_default_model)
Retrieve the Guesslang model directory name, and tell whether it is the default model.

:param model_dir: model location; if `None`, the default model is selected
:return: selected model directory with an indication of whether the model is the default or not
def best_parent(self, node, tree_type=None):
    parents = self.parents(node)
    selected_parent = None
    if node['type'] == 'type':
        module = ".".join(node['name'].split('.')[:-1])
        if module:
            for mod in parents:
                if mod['type'] == 'module' and mod['name'] == module:
                    selected_parent = mod
    if parents and selected_parent is None:
        parents.sort(key=lambda x: self.value(node, x))
        return parents[-1]
    return selected_parent
Choose the best parent for a given node
def _logpdf(self, **kwargs):
    return (self._polardist._logpdf(**kwargs) +
            self._azimuthaldist._logpdf(**kwargs))
Returns the logpdf at the given angles.

Parameters
----------
\**kwargs :
    The keyword arguments should specify the value for each angle, using the names of the polar and azimuthal angles as the keywords. Unrecognized arguments are ignored.

Returns
-------
float
    The value of the logpdf at the given values.
def _init_login_manager(app_):
    login_manager = flogin.LoginManager()
    login_manager.setup_app(app_)
    login_manager.anonymous_user = Anonymous
    login_manager.login_view = "login"
    users = {app_.config['USERNAME']: User('Admin', 0)}
    names = dict((int(v.get_id()), k) for k, v in users.items())

    @login_manager.user_loader
    def load_user(userid):
        userid = int(userid)
        name = names.get(userid)
        return users.get(name)

    return users, names
Initialise and configure the login manager.
def add_primitives_path(path):
    if path not in _PRIMITIVES_PATHS:
        if not os.path.isdir(path):
            raise ValueError('Invalid path: {}'.format(path))
        LOGGER.debug('Adding new primitives path %s', path)
        _PRIMITIVES_PATHS.insert(0, os.path.abspath(path))
Add a new path to look for primitives.

The new path will be inserted in the first place of the list, so any primitive found in this new folder will take precedence over any other primitive with the same name that existed in the system before.

Args:
    path (str): path to add

Raises:
    ValueError: A `ValueError` will be raised if the path is not valid.
def pathconf(path,
             os_name=os.name,
             isdir_fnc=os.path.isdir,
             pathconf_fnc=getattr(os, 'pathconf', None),
             pathconf_names=getattr(os, 'pathconf_names', ())):
    if pathconf_fnc and pathconf_names:
        return {key: pathconf_fnc(path, key) for key in pathconf_names}
    if os_name == 'nt':
        maxpath = 246 if isdir_fnc(path) else 259
    else:
        maxpath = 255
    return {
        'PC_PATH_MAX': maxpath,
        'PC_NAME_MAX': maxpath - len(path),
    }
Get all pathconf variables for a given path.

:param path: absolute fs path
:type path: str
:returns: dictionary mapping pathconf keys (str) to their values
:rtype: dict
def _process_items(items, user_conf, error_protocol):
    def process_meta(item, error_protocol):
        try:
            return item._parse()
        except Exception as e:
            error_protocol.append(
                "Can't parse %s: %s" % (item._get_filenames()[0], str(e))
            )
            if isinstance(item, DataPair):
                return item.ebook_file

    out = []
    for item in items:
        if isinstance(item, EbookFile):
            out.append(item)
        else:
            out.append(process_meta(item, error_protocol))
    out = [x for x in out if x]

    fn_pool = []
    soon_removed = out if conf_merger(user_conf, "LEAVE_BAD_FILES") else items
    for item in soon_removed:
        fn_pool.extend(item._get_filenames())
    _remove_files(fn_pool)
    return out
Parse metadata. Remove processed and successfully parsed items. Returns successfully processed items.
def load(self, path, name=None):
    if name is None:
        name = os.path.splitext(os.path.basename(path))[0]
    with open(path, 'r') as f:
        document = f.read()
    return self.compiler.compile(name, document, path).link().surface
Load and compile the given Thrift file.

:param str path: Path to the ``.thrift`` file.
:param str name: Name of the generated module. Defaults to the base name of the file.
:returns: The compiled module.
def get_nearest_site(self, latitude=None, longitude=None):
    warning_message = ('This function is deprecated. '
                       'Use get_nearest_forecast_site() instead')
    warn(warning_message, DeprecationWarning, stacklevel=2)
    return self.get_nearest_forecast_site(latitude, longitude)
Deprecated. Returns the nearest Site object to the specified coordinates.
def array_controllers(self):
    return array_controller.HPEArrayControllerCollection(
        self._conn,
        utils.get_subresource_path_by(
            self, ['Links', 'ArrayControllers']),
        redfish_version=self.redfish_version)
This property gets the list of instances for array controllers.

:returns: a list of instances of array controllers.
def set_row_name(self, index, name):
    javabridge.call(self.jobject, "setRowName",
                    "(ILjava/lang/String;)V", index, name)
Sets the row name.

:param index: the 0-based row index
:type index: int
:param name: the name of the row
:type name: str
def _terminal_notifier(title, message):
    try:
        paths = common.extract_app_paths(['terminal-notifier'])
    except ValueError:
        # terminal-notifier is not available; nothing to do
        return
    common.shell_process([paths[0], '-title', title, '-message', message])
Shows user notification message via the `terminal-notifier` command.

`title`
    Notification title.
`message`
    Notification message.
def releaseLicense(self, username):
    url = self._url + "/licenses/releaseLicense"
    params = {
        "username": username,
        "f": "json"
    }
    return self._post(url=url, param_dict=params,
                      proxy_url=self._proxy_url,
                      proxy_port=self._proxy_port)
If a user checks out an ArcGIS Pro license for offline or disconnected use, this operation releases the license for the specified account. A license can only be used with a single device running ArcGIS Pro.

To check in the license, a valid access token and refresh token are required. If the refresh token for the device is lost, damaged, corrupted, or formatted, the user will not be able to check in the license. This prevents the user from logging in to ArcGIS Pro from any other device. As an administrator, you can release the license. This frees the outstanding license and allows the user to check out a new license or use ArcGIS Pro in a connected environment.

Parameters:
   username - username of the account
def pip(self, cmd):
    pip_bin = self.cmd_path('pip')
    cmd = '{0} {1}'.format(pip_bin, cmd)
    return self._execute(cmd)
Execute some pip function using the virtual environment pip.
def shrink(self):
    "Get rid of one worker from the pool. Raises IndexError if empty."
    if self._size <= 0:
        raise IndexError("pool is already empty")
    self._size -= 1
    self.put(SuicideJob())
Get rid of one worker from the pool. Raises IndexError if empty.
def read1(self, size=-1):
    self._check_can_read()
    if size is None:
        raise TypeError("Read size should be an integer, not None")
    if (size == 0 or self._mode == _MODE_READ_EOF or
            not self._fill_buffer()):
        return b""
    if 0 < size < len(self._buffer):
        data = self._buffer[:size]
        self._buffer = self._buffer[size:]
    else:
        data = self._buffer
        self._buffer = None
    self._pos += len(data)
    return data
Read up to size uncompressed bytes, while trying to avoid making multiple reads from the underlying stream. Returns b"" if the file is at EOF.
def save_libsvm(X, y, path):
    dump_svmlight_file(X, y, path, zero_based=False)
Save data as a LibSVM file.

Args:
    X (numpy or scipy sparse matrix): Data matrix
    y (numpy array): Target vector.
    path (str): Path to the LibSVM file to save data.
def _pass_list(self) -> List[Dict[str, Any]]:
    stops: List[Dict[str, Any]] = []
    for stop in self.journey.PassList.BasicStop:
        index = stop.get("index")
        station = stop.Location.Station.HafasName.Text.text
        station_id = stop.Location.Station.ExternalId.text
        stops.append({"index": index, "stationId": station_id, "station": station})
    return stops
Extract next stops along the journey.
def sign(hash, priv, k=0):
    if k == 0:
        k = generate_k(priv, hash)
    hash = int(hash, 16)
    priv = int(priv, 16)
    r = int(privtopub(dechex(k, 32), True)[2:], 16) % N
    s = ((hash + (r * priv)) * modinv(k, N)) % N
    if s > (N / 2):
        s = N - s
    r, s = inttoDER(r), inttoDER(s)
    olen = dechex(len(r + s) // 2, 1)
    return '30' + olen + r + s
Returns a DER-encoded signature from an input of a hash and private key, and optionally a K value. Hash and private key inputs must be 64-char hex strings; the k input is an int/long.

>>> h = 'f7011e94125b5bba7f62eb25efe23339eb1637539206c87df3ee61b5ec6b023e'
>>> p = 'c05694a7af0e01dceb63e5912a415c28d3fc823ca1fd3fa34d41afde03740466'
>>> k = 4  # chosen by fair dice roll, guaranteed to be random
>>> sign(h,p,k)
'3045022100e493dbf1c10d80f3581e4904930b1404cc6c13900ee0758474fa94abe8c4cd130220598e37e2e66277ef4d0caf0e32d095debb3c744219508cd394b9747e548662b7'
def try_one_generator_really(project, name, generator, target_type, properties, sources):
    if __debug__:
        from .targets import ProjectTarget
        assert isinstance(project, ProjectTarget)
        assert isinstance(name, basestring) or name is None
        assert isinstance(generator, Generator)
        assert isinstance(target_type, basestring)
        assert isinstance(properties, property_set.PropertySet)
        assert is_iterable_typed(sources, virtual_target.VirtualTarget)
    targets = generator.run(project, name, properties, sources)
    usage_requirements = []
    success = False
    dout("returned " + str(targets))
    if targets:
        success = True
        if isinstance(targets[0], property_set.PropertySet):
            usage_requirements = targets[0]
            targets = targets[1]
        else:
            usage_requirements = property_set.empty()
    dout("  generator " + generator.id() + " spawned ")
    if success:
        return (usage_requirements, targets)
    else:
        return None
Returns usage requirements + list of created targets.
def parse_values_from_lines(self, lines):
    assert len(lines) == 3, "SvdData.parse_values_from_lines: expected " + \
        "3 lines, not {0}".format(len(lines))
    try:
        self.svdmode = int(lines[0].strip().split()[0])
    except Exception as e:
        raise Exception("SvdData.parse_values_from_lines: error parsing " +
                        "svdmode from line {0}: {1}\n".format(lines[0], str(e)))
    try:
        raw = lines[1].strip().split()
        self.maxsing = int(raw[0])
        self.eigthresh = float(raw[1])
    except Exception as e:
        raise Exception("SvdData.parse_values_from_lines: error parsing " +
                        "maxsing and eigthresh from line {0}: {1}\n"
                        .format(lines[1], str(e)))
    self.eigwrite = lines[2].strip()
Parse values from lines of the SVD section.

Parameters
----------
lines : list
def save(self):
    if not self.is_loaded and self.id is None or self.id == '':
        data = self.resource.post(data=self.data, collection=self.collection)
        self.id = data['_id']
        self.key = data['_key']
        self.revision = data['_rev']
        self.is_loaded = True
    else:
        data = self.resource(self.id).patch(data=self.data)
        self.revision = data['_rev']
If its internal state is loaded, it will only update the set properties; otherwise it will create a new document.
def put(self, key, value):
    self.shardDatastore(key).put(key, value)
Stores the object to the corresponding datastore.
def assign_item_to_bank(self, item_id, bank_id):
    mgr = self._get_provider_manager('ASSESSMENT', local=True)
    lookup_session = mgr.get_bank_lookup_session(proxy=self._proxy)
    lookup_session.get_bank(bank_id)
    self._assign_object_to_catalog(item_id, bank_id)
Adds an existing ``Item`` to a ``Bank``.

arg:    item_id (osid.id.Id): the ``Id`` of the ``Item``
arg:    bank_id (osid.id.Id): the ``Id`` of the ``Bank``
raise:  AlreadyExists - ``item_id`` is already assigned to ``bank_id``
raise:  NotFound - ``item_id`` or ``bank_id`` not found
raise:  NullArgument - ``item_id`` or ``bank_id`` is ``null``
raise:  OperationFailed - unable to complete request
raise:  PermissionDenied - authorization failure occurred
*compliance: mandatory -- This method must be implemented.*
def calculate_file_md5(filepath, blocksize=2 ** 20):
    checksum = hashlib.md5()
    with click.open_file(filepath, "rb") as f:
        def update_chunk():
            buf = f.read(blocksize)
            if buf:
                checksum.update(buf)
            return bool(buf)
        while update_chunk():
            pass
    return checksum.hexdigest()
Calculate an MD5 hash for a file.
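A dependency-free sketch of the same block-wise hashing, with plain `open` standing in for `click.open_file` and a temporary file as input:

```python
import hashlib
import os
import tempfile

def calculate_file_md5(filepath, blocksize=2 ** 20):
    # Stream the file in blocks so large files never load fully into memory.
    checksum = hashlib.md5()
    with open(filepath, "rb") as f:
        while True:
            buf = f.read(blocksize)
            if not buf:
                break
            checksum.update(buf)
    return checksum.hexdigest()

with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"hello")
digest = calculate_file_md5(tmp.name)
os.remove(tmp.name)
print(digest)  # 5d41402abc4b2a76b9719d911017c592
```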
def update(self):
    vehicle = self.api.vehicles(vid=self.vid)['vehicle']
    newbus = self.fromapi(self.api, vehicle)
    self.__dict__ = newbus.__dict__
    del newbus
Update this bus by creating a new one and transplanting dictionaries.
def team_scores(self, team_scores, time):
    data = []
    for score in team_scores['matches']:
        if score['status'] == 'FINISHED':
            item = {'date': score["utcDate"].split('T')[0],
                    'homeTeamName': score['homeTeam']['name'],
                    'goalsHomeTeam': score['score']['fullTime']['homeTeam'],
                    'goalsAwayTeam': score['score']['fullTime']['awayTeam'],
                    'awayTeamName': score['awayTeam']['name']}
            data.append(item)
    self.generate_output({'team_scores': data})
Store output of team scores to a JSON file
def integrate(ii, r0, c0, r1, c1):
    S = np.zeros(ii.shape[-1])
    S += ii[r1, c1]
    if (r0 - 1 >= 0) and (c0 - 1 >= 0):
        S += ii[r0 - 1, c0 - 1]
    if (r0 - 1 >= 0):
        S -= ii[r0 - 1, c1]
    if (c0 - 1 >= 0):
        S -= ii[r1, c0 - 1]
    return S
Use an integral image to integrate over a given window.

Parameters
----------
ii : ndarray
    Integral image.
r0, c0 : int
    Top-left corner of block to be summed.
r1, c1 : int
    Bottom-right corner of block to be summed.

Returns
-------
S : int
    Integral (sum) over the given window.
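A self-contained demonstration of the four-corner trick on a small 2D array. This variant works on a plain 2D integral image (no trailing channel axis) built with `cumsum`, so `S` is a scalar rather than a per-channel vector:

```python
import numpy as np

img = np.arange(16, dtype=float).reshape(4, 4)
ii = img.cumsum(axis=0).cumsum(axis=1)  # integral image: ii[r, c] = img[:r+1, :c+1].sum()

def integrate(ii, r0, c0, r1, c1):
    # O(1) sum over the inclusive window [r0..r1] x [c0..c1].
    S = ii[r1, c1]
    if r0 > 0 and c0 > 0:
        S += ii[r0 - 1, c0 - 1]
    if r0 > 0:
        S -= ii[r0 - 1, c1]
    if c0 > 0:
        S -= ii[r1, c0 - 1]
    return S

print(integrate(ii, 1, 1, 2, 2))  # 30.0 == img[1:3, 1:3].sum()
```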
def inquire(self):
    status, nRecs, interlace, fldNames, size, vName = \
        _C.VSinquire(self._id)
    _checkErr('inquire', status, "cannot query vdata info")
    return nRecs, interlace, fldNames.split(','), size, vName
Retrieve info about the vdata.

Args::

    no argument

Returns::

    5-element tuple with the following elements:
    - number of records in the vdata
    - interlace mode
    - list of vdata field names
    - size in bytes of the vdata record
    - name of the vdata

C library equivalent : VSinquire
def float_str(value, symbol_str="", symbol_value=1, after=False,
              max_denominator=1000000):
    if value == 0:
        return "0"
    frac = Fraction(value / symbol_value).limit_denominator(max_denominator)
    num, den = frac.numerator, frac.denominator
    output_data = []
    if num < 0:
        num = -num
        output_data.append("-")
    if (num != 1) or (symbol_str == "") or after:
        output_data.append(str(num))
    if (value != 0) and not after:
        output_data.append(symbol_str)
    if den != 1:
        output_data.extend(["/", str(den)])
    if after:
        output_data.append(symbol_str)
    return "".join(output_data)
Pretty rational string from float numbers.

Converts a given numeric value to a string based on rational fractions of the given symbol, useful for labels in plots.

Parameters
----------
value :
    A float number or an iterable with floats.
symbol_str :
    String data that will be in the output representing the data as a numerator multiplier, if needed. Defaults to an empty string.
symbol_value :
    The conversion value for the given symbol (e.g. pi = 3.1415...). Defaults to one (no effect).
after :
    Chooses the place where the ``symbol_str`` should be written. If ``True``, that's the end of the string. If ``False``, that's in between the numerator and the denominator, before the slash. Defaults to ``False``.
max_denominator :
    An int instance, used to round the float following the given limit. Defaults to the integer 1,000,000 (one million).

Returns
-------
A string with the rational number written into as a fraction, with or without a multiplying symbol.

Examples
--------
>>> float_str.frac(12.5)
'25/2'
>>> float_str.frac(0.333333333333333)
'1/3'
>>> float_str.frac(0.333)
'333/1000'
>>> float_str.frac(0.333, max_denominator=100)
'1/3'
>>> float_str.frac(0.125, symbol_str="steps")
'steps/8'
>>> float_str.frac(0.125, symbol_str=" Hz",
...                after=True)  # The symbol includes whitespace!
'1/8 Hz'

See Also
--------
float_str.pi :
    This fraction/ratio formatter, but configured with the "pi" symbol.
def FMErrorByNum(num):
    if num not in FMErrorNum:
        raise FMServerError(num, FMErrorNum[-1])
    elif num == 102:
        raise FMFieldError(num, FMErrorNum[num])
    else:
        raise FMServerError(num, FMErrorNum[num])
This function raises an error based on the specified error code.
def move_mouse_relative(self, x, y):
    _libxdo.xdo_move_mouse_relative(self._xdo, x, y)
Move the mouse relative to its current position.

:param x: the distance in pixels to move on the X axis.
:param y: the distance in pixels to move on the Y axis.
def parse_response(self, raw):
    tables = raw.json()['tables']
    crit = raw.json()['criteria_map']
    return self.parse(tables, crit)
Format the requested data model into a dictionary of DataFrames and a criteria map DataFrame. Takes data returned by a requests.get call to Earthref.

Parameters
----------
raw : 'requests.models.Response'

Returns
-------
data_model : dictionary of DataFrames
crit_map : DataFrame
def iter_fasta(file_path):
    fh = open(file_path)
    faiter = (x[1] for x in groupby(fh, lambda line: line[0] == ">"))
    for header in faiter:
        headerStr = header.__next__()[1:].strip()
        seq = "".join(s.strip() for s in faiter.__next__())
        yield (headerStr, seq)
Returns an iterator over the fasta file.

Given a fasta file, yields tuples of (header, sequence). Code modified from Brent Pedersen's "Correct Way To Parse A Fasta File In Python".

# Example
```python
fasta = iter_fasta("hg19.fa")
for header, seq in fasta:
    print(header)
```
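A runnable sketch of the `groupby` parsing trick, adapted to take any iterable of lines (here an in-memory `io.StringIO`) instead of a file path, so it can be tested without a file on disk:

```python
import io
from itertools import groupby

def iter_fasta(fh):
    # Group lines into alternating runs of header lines (">" prefix)
    # and sequence lines; pair each header with the run that follows it.
    faiter = (x[1] for x in groupby(fh, lambda line: line[0] == ">"))
    for header in faiter:
        header_str = next(header)[1:].strip()
        seq = "".join(s.strip() for s in next(faiter))
        yield header_str, seq

fasta = io.StringIO(">seq1\nACGT\nTTAA\n>seq2\nGGCC\n")
records = list(iter_fasta(fasta))
print(records)  # [('seq1', 'ACGTTTAA'), ('seq2', 'GGCC')]
```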
def edge(self, tail_name, head_name, label=None, _attributes=None, **attrs):
    tail_name = self._quote_edge(tail_name)
    head_name = self._quote_edge(head_name)
    attr_list = self._attr_list(label, attrs, _attributes)
    line = self._edge % (tail_name, head_name, attr_list)
    self.body.append(line)
Create an edge between two nodes.

Args:
    tail_name: Start node identifier.
    head_name: End node identifier.
    label: Caption to be displayed near the edge.
    attrs: Any additional edge attributes (must be strings).
def keys_info(gandi, fqdn, key): key_info = gandi.dns.keys_info(fqdn, key) output_keys = ['uuid', 'algorithm', 'algorithm_name', 'ds', 'fingerprint', 'public_key', 'flags', 'tag', 'status'] output_generic(gandi, key_info, output_keys, justify=15) return key_info
Display information about a domain key.
def jsonFn(self, comic): fn = os.path.join(self.basepath, comic, 'dosage.json') fn = os.path.abspath(fn) return fn
Get filename for the JSON file for a comic.
def check_sensors(): all_sensors = walk_data(sess, oid_description, helper)[0] all_status = walk_data(sess, oid_status, helper)[0] zipped = zip(all_sensors, all_status) for sensor in zipped: description = sensor[0] status = sensor[1] try: status_string = senor_status_table[status] except KeyError: helper.exit(summary="received an undefined value from device: " + status, exit_code=unknown, perfdata='') helper.add_summary("%s: %s" % (description, status_string)) if status == "2": helper.status(critical) if status == "3": helper.status(warning)
collect and check all available sensors
def extract_tree_block(self): "iterate through data file to extract trees" lines = iter(self.data) while True: try: line = next(lines).strip() except StopIteration: break if line.lower() == "begin trees;": while True: sub = next(lines).strip().split() if not sub: continue if sub[0].lower() == "translate": while sub[0] != ";": sub = next(lines).strip().split() self.tdict[sub[0]] = sub[-1].strip(",") if sub[0].lower().startswith("tree"): self.newicks.append(sub[-1]) if sub[0].lower() == "end;": break
iterate through data file to extract trees
def is_valid_mpls_label(label): if (not isinstance(label, numbers.Integral) or (4 <= label <= 15) or (label < 0 or label > 2 ** 20)): return False return True
Validates `label` according to MPLS label rules. The label is a 20-bit field. A value of 0 represents the "IPv4 Explicit NULL Label". A value of 1 represents the "Router Alert Label". A value of 2 represents the "IPv6 Explicit NULL Label". A value of 3 represents the "Implicit NULL Label". Values 4-15 are reserved.
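The rules above can be exercised standalone; this restates the validator so the checks below can run (same logic as the function above, written affirmatively):

```python
import numbers

def valid_mpls_label(label):
    # MPLS labels are 20-bit unsigned integers; values 4-15 are reserved.
    if not isinstance(label, numbers.Integral):
        return False
    if 4 <= label <= 15:
        return False
    return 0 <= label < 2 ** 20
```

Special labels 0-3 are still valid on the wire, which is why only the reserved 4-15 range is rejected.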
def resolve_revision(self, dest, url, rev_options): rev = rev_options.arg_rev sha, is_branch = self.get_revision_sha(dest, rev) if sha is not None: rev_options = rev_options.make_new(sha) rev_options.branch_name = rev if is_branch else None return rev_options if not looks_like_hash(rev): logger.warning( "Did not find branch or tag '%s', assuming revision or ref.", rev, ) if not rev.startswith('refs/'): return rev_options self.run_command( ['fetch', '-q', url] + rev_options.to_args(), cwd=dest, ) sha = self.get_revision(dest, rev='FETCH_HEAD') rev_options = rev_options.make_new(sha) return rev_options
Resolve a revision to a new RevOptions object with the SHA1 of the branch, tag, or ref if found. Args: rev_options: a RevOptions object.
def info(self, message, *args, **kwargs): self.system.info(message, *args, **kwargs)
Log info event. Compatible with logging.info signature.
def create_message(username, message): message = message.replace('\n', '<br/>') return '{{"service":1, "data":{{"message":"{mes}", "username":"{user}"}} }}'.format(mes=message, user=username)
Creates a standard message from a given user with the message Replaces newline with html break
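Building JSON by hand, as above, breaks as soon as the message contains a quote or backslash; a safer sketch delegates escaping to the stdlib (`create_message_safe` is a hypothetical name, not part of the original API):

```python
import json

def create_message_safe(username, message):
    # json.dumps handles the quoting and escaping that manual string
    # formatting misses (embedded quotes, backslashes, unicode).
    payload = {"service": 1,
               "data": {"message": message.replace("\n", "<br/>"),
                        "username": username}}
    return json.dumps(payload)

msg = create_message_safe("alice", 'say "hi"\nbye')
```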
def negociate_content(default='json-ld'): mimetype = request.accept_mimetypes.best_match(ACCEPTED_MIME_TYPES.keys()) return ACCEPTED_MIME_TYPES.get(mimetype, default)
Perform content negotiation on the response format based on the Accept header
def handle_na(self, data): return remove_missing(data, self.params['na_rm'], list(self.REQUIRED_AES | self.NON_MISSING_AES), self.__class__.__name__)
Remove rows with NaN values. Geoms that infer extra information from missing values should override this method. For example :class:`~plotnine.geoms.geom_path`. Parameters ---------- data : dataframe Data Returns ------- out : dataframe Data without the NaNs. Notes ----- Shows a warning if any rows are removed and the `na_rm` parameter is False. It only takes into account the columns of the required aesthetics.
def y1(x, context=None): return _apply_function_in_current_context( BigFloat, mpfr.mpfr_y1, (BigFloat._implicit_convert(x),), context, )
Return the value of the Bessel function of the second kind of order 1 at x.
def get_child_models(self): child_models = [] for plugin in self.child_model_plugin_class.get_plugins(): child_models.append((plugin.model, plugin.model_admin)) if not child_models: child_models.append(( self.child_model_admin.base_model, self.child_model_admin, )) return child_models
Get child models from registered plugins. Fallback to the child model admin and its base model if no plugins are registered.
def __get_descendants(node, dfs_data): list_of_descendants = [] stack = deque() children_lookup = dfs_data['children_lookup'] def children_after(n): dfs_n = D(n, dfs_data) return [c for c in children_lookup[n] if D(c, dfs_data) > dfs_n] stack.extend(children_after(node)) while stack: current_node = stack.pop() list_of_descendants.append(current_node) stack.extend(children_after(current_node)) return list_of_descendants
Gets the descendants of a node.
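Stripped of the DFS-numbering comparison (`D`), the traversal above reduces to a plain iterative depth-first walk over a children-adjacency dict; this sketch ignores that filtering step:

```python
from collections import deque

def descendants(node, children):
    # Iterative DFS over a children-adjacency dict; the node itself
    # is excluded from its own descendants.
    found, stack = [], deque(children.get(node, ()))
    while stack:
        current = stack.pop()
        found.append(current)
        stack.extend(children.get(current, ()))
    return found

tree = {"a": ["b", "c"], "b": ["d"], "c": [], "d": []}
```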
def _clear(self): (colour, attr, bg) = self.palette["background"] self._canvas.clear_buffer(colour, attr, bg)
Clear the current canvas.
def currentVersion(self): if self._currentVersion is None: self.__init(self._url) return self._currentVersion
returns the current version of the site
def on_cluster_remove(self, name): discovery_name = self.configurables[Cluster][name].discovery if discovery_name in self.configurables[Discovery]: self.configurables[Discovery][discovery_name].stop_watching( self.configurables[Cluster][name] ) self.kill_thread(name) self.sync_balancer_files()
Stops the cluster's associated discovery method from watching for changes to the cluster's nodes.
def _get_fullpath(self, filepath): if filepath[0] == '/': return filepath return os.path.join(self._base_path, filepath)
Return filepath with the base_path prefixed
def get_key(self, path, geometry, filters, options): seed = u' '.join([ str(path), str(geometry), str(filters), str(options), ]).encode('utf8') return md5(seed).hexdigest()
Generates the thumbnail's key from its arguments. If the arguments don't change, the key will not change
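The key derivation can be exercised on its own; this standalone sketch mirrors the hashing above (free function instead of a method):

```python
from hashlib import md5

def thumbnail_key(path, geometry, filters, options):
    # Keys are deterministic: identical arguments always hash to the
    # same 32-character hex digest.
    seed = " ".join([str(path), str(geometry), str(filters), str(options)]).encode("utf8")
    return md5(seed).hexdigest()
```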
def inform_if_paths_invalid(egrc_path, examples_dir, custom_dir, debug=True): if (not debug): return if (egrc_path): _inform_if_path_does_not_exist(egrc_path) if (examples_dir): _inform_if_path_does_not_exist(examples_dir) if (custom_dir): _inform_if_path_does_not_exist(custom_dir)
If egrc_path, examples_dir, or custom_dir is truthy and debug is True, informs the user that a path is not set. This should be used to verify input arguments from the command line.
def apply(self, fn, *column_or_columns): if not column_or_columns: return np.array([fn(row) for row in self.rows]) else: if len(column_or_columns) == 1 and \ _is_non_string_iterable(column_or_columns[0]): warnings.warn( "column lists are deprecated; pass each as an argument", FutureWarning) column_or_columns = column_or_columns[0] rows = zip(*self.select(*column_or_columns).columns) return np.array([fn(*row) for row in rows])
Apply ``fn`` to each element or elements of ``column_or_columns``. If no ``column_or_columns`` provided, `fn`` is applied to each row. Args: ``fn`` (function) -- The function to apply. ``column_or_columns``: Columns containing the arguments to ``fn`` as either column labels (``str``) or column indices (``int``). The number of columns must match the number of arguments that ``fn`` expects. Raises: ``ValueError`` -- if ``column_label`` is not an existing column in the table. ``TypeError`` -- if insufficent number of ``column_label`` passed to ``fn``. Returns: An array consisting of results of applying ``fn`` to elements specified by ``column_label`` in each row. >>> t = Table().with_columns( ... 'letter', make_array('a', 'b', 'c', 'z'), ... 'count', make_array(9, 3, 3, 1), ... 'points', make_array(1, 2, 2, 10)) >>> t letter | count | points a | 9 | 1 b | 3 | 2 c | 3 | 2 z | 1 | 10 >>> t.apply(lambda x: x - 1, 'points') array([0, 1, 1, 9]) >>> t.apply(lambda x, y: x * y, 'count', 'points') array([ 9, 6, 6, 10]) >>> t.apply(lambda x: x - 1, 'count', 'points') Traceback (most recent call last): ... TypeError: <lambda>() takes 1 positional argument but 2 were given >>> t.apply(lambda x: x - 1, 'counts') Traceback (most recent call last): ... ValueError: The column "counts" is not in the table. The table contains these columns: letter, count, points Whole rows are passed to the function if no columns are specified. >>> t.apply(lambda row: row[1] * 2) array([18, 6, 6, 2])
def parse_value(source: SourceType, **options: dict) -> ValueNode: if isinstance(source, str): source = Source(source) lexer = Lexer(source, **options) expect_token(lexer, TokenKind.SOF) value = parse_value_literal(lexer, False) expect_token(lexer, TokenKind.EOF) return value
Parse the AST for a given string containing a GraphQL value. Throws GraphQLError if a syntax error is encountered. This is useful within tools that operate upon GraphQL Values directly and in isolation of complete GraphQL documents. Consider providing the results to the utility function: `value_from_ast()`.
def arp(interface='', ipaddr='', macaddr='', **kwargs): proxy_output = salt.utils.napalm.call( napalm_device, 'get_arp_table' ) if not proxy_output.get('result'): return proxy_output arp_table = proxy_output.get('out') if interface: arp_table = _filter_list(arp_table, 'interface', interface) if ipaddr: arp_table = _filter_list(arp_table, 'ip', ipaddr) if macaddr: arp_table = _filter_list(arp_table, 'mac', macaddr) proxy_output.update({ 'out': arp_table }) return proxy_output
NAPALM returns a list of dictionaries with details of the ARP entries. :param interface: interface name to filter on :param ipaddr: IP address to filter on :param macaddr: MAC address to filter on :return: List of the entries in the ARP table CLI Example: .. code-block:: bash salt '*' net.arp salt '*' net.arp macaddr='5c:5e:ab:da:3c:f0' Example output: .. code-block:: python [ { 'interface' : 'MgmtEth0/RSP0/CPU0/0', 'mac' : '5c:5e:ab:da:3c:f0', 'ip' : '172.17.17.1', 'age' : 1454496274.84 }, { 'interface': 'MgmtEth0/RSP0/CPU0/0', 'mac' : '66:0e:94:96:e0:ff', 'ip' : '172.17.17.2', 'age' : 1435641582.49 } ]
def comment (self, s, **args): self.write(u"<!-- ") self.write(s, **args) self.writeln(u" -->")
Write XML comment.
def monitor(self, timeout): def check(): time.sleep(timeout) self.stop() watcher = threading.Thread(target=check) watcher.daemon = True watcher.start()
Monitor the process, stopping it once `timeout` seconds have elapsed.
def dendrite_filter(n): return n.type == NeuriteType.basal_dendrite or n.type == NeuriteType.apical_dendrite
Select only dendrites
def index(self, value, floating=False): value = np.atleast_1d(self.set.element(value)) result = [] for val, cell_bdry_vec in zip(value, self.cell_boundary_vecs): ind = np.searchsorted(cell_bdry_vec, val) if floating: if cell_bdry_vec[ind] == val: result.append(float(ind)) else: csize = float(cell_bdry_vec[ind] - cell_bdry_vec[ind - 1]) result.append(ind - (cell_bdry_vec[ind] - val) / csize) else: if cell_bdry_vec[ind] == val and ind != len(cell_bdry_vec) - 1: result.append(ind) else: result.append(ind - 1) if self.ndim == 1: result = result[0] else: result = tuple(result) return result
Return the index of a value in the domain. Parameters ---------- value : ``self.set`` element Point whose index to find. floating : bool, optional If True, then the index should also give the position inside the voxel. This is given by returning the integer valued index of the voxel plus the distance from the left cell boundary as a fraction of the full cell size. Returns ------- index : int, float, tuple of int or tuple of float Index of the value, as counted from the left. If ``self.ndim > 1`` the result is a tuple, else a scalar. If ``floating=True`` the scalar is a float, else an int. Examples -------- Get the indices of start and end: >>> p = odl.uniform_partition(0, 2, 5) >>> p.index(0) 0 >>> p.index(2) 4 For points inside voxels, the index of the containing cell is returned: >>> p.index(0.2) 0 By using the ``floating`` argument, partial positions inside the voxels can instead be determined: >>> p.index(0.2, floating=True) 0.5 These indices work with indexing, extracting the voxel in which the point lies: >>> p[p.index(0.1)] uniform_partition(0.0, 0.4, 1) The same principle also works in higher dimensions: >>> p = uniform_partition([0, -1], [1, 2], (4, 1)) >>> p.index([0.5, 2]) (2, 0) >>> p[p.index([0.5, 2])] uniform_partition([ 0.5, -1. ], [ 0.75, 2. ], (1, 1))
def vsubg(v1, v2, ndim): v1 = stypes.toDoubleVector(v1) v2 = stypes.toDoubleVector(v2) vout = stypes.emptyDoubleVector(ndim) ndim = ctypes.c_int(ndim) libspice.vsubg_c(v1, v2, ndim, vout) return stypes.cVectorToPython(vout)
Compute the difference between two double precision vectors of arbitrary dimension. http://naif.jpl.nasa.gov/pub/naif/toolkit_docs/C/cspice/vsubg_c.html :param v1: First vector (minuend). :type v1: Array of floats :param v2: Second vector (subtrahend). :type v2: Array of floats :param ndim: Dimension of v1, v2, and vout. :type ndim: int :return: Difference vector, v1 - v2. :rtype: Array of floats
def get_data_disk_size(vm_, swap, linode_id): disk_size = get_linode(kwargs={'linode_id': linode_id})['TOTALHD'] root_disk_size = config.get_cloud_config_value( 'disk_size', vm_, __opts__, default=disk_size - swap ) return disk_size - root_disk_size - swap
Return the size of the data disk in MB .. versionadded:: 2016.3.0
def add_column(connection, column): stmt = alembic.ddl.base.AddColumn(_State.table.name, column) connection.execute(stmt) _State.reflect_metadata()
Add a column to the current table.
def get_probability_no_exceedance(self, poes): if numpy.isnan(self.occurrence_rate): if len(poes.shape) == 1: poes = numpy.reshape(poes, (-1, len(poes))) p_kT = self.probs_occur prob_no_exceed = numpy.array( [v * ((1 - poes) ** i) for i, v in enumerate(p_kT)]) prob_no_exceed = numpy.sum(prob_no_exceed, axis=0) prob_no_exceed[prob_no_exceed > 1.] = 1. prob_no_exceed[poes == 0.] = 1. return prob_no_exceed tom = self.temporal_occurrence_model return tom.get_probability_no_exceedance(self.occurrence_rate, poes)
Compute and return the probability that in the time span for which the rupture is defined, the rupture itself never generates a ground motion value higher than a given level at a given site. Such calculation is performed starting from the conditional probability that an occurrence of the current rupture is producing a ground motion value higher than the level of interest at the site of interest. The actual formula used for such calculation depends on the temporal occurrence model the rupture is associated with. The calculation can be performed for multiple intensity measure levels and multiple sites in a vectorized fashion. :param poes: 2D numpy array containing the conditional probabilities that an occurrence of the rupture causes a ground shaking value exceeding a ground motion level at a site. The first dimension represents sites, the second intensity measure levels. ``poes`` can be obtained by calling the :meth:`method <openquake.hazardlib.gsim.base.GroundShakingIntensityModel.get_poes>`.
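For the common case of a Poissonian temporal occurrence model, the no-exceedance probability has a closed form; a scalar sketch (names are illustrative, not the OpenQuake API):

```python
import math

def poisson_prob_no_exceedance(occurrence_rate, time_span, poe):
    # Under a Poisson temporal occurrence model, exceedances arrive at rate
    # occurrence_rate * poe, so zero arrivals in time_span has probability
    # exp(-rate * time * poe).
    return math.exp(-occurrence_rate * time_span * poe)
```

A conditional exceedance probability of zero gives a no-exceedance probability of exactly one, matching the handling of `poes == 0` in the non-Poissonian branch above.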
def save(self, doc): self.log.debug('save()') self.docs.append(doc) self.commit()
Save a doc to cache
def titles(self, key, value): if not key.startswith('245'): return { 'source': value.get('9'), 'subtitle': value.get('b'), 'title': value.get('a'), } self.setdefault('titles', []).insert(0, { 'source': value.get('9'), 'subtitle': value.get('b'), 'title': value.get('a'), })
Populate the ``titles`` key.
def owner(self): if self._writer is not None: return self.WRITER if self._readers: return self.READER return None
Returns whether the lock is held by a writer or a reader, or ``None`` if it is unlocked.
def uri(self, value): if value == self.__uri: return match = URI_REGEX.match(value) if match is None: raise ValueError('Unable to match URI from `{}`'.format(value)) for part, part_value in match.groupdict().items(): setattr(self, part, part_value)
Attempt to validate URI and split into individual values
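For standard URIs, the stdlib performs a comparable decomposition without a hand-rolled regex:

```python
from urllib.parse import urlsplit

# urlsplit breaks a URI into scheme, netloc, path, query, and fragment,
# with hostname and port parsed out of the netloc.
parts = urlsplit("https://example.com:8080/path/to/res?q=1#frag")
```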