def pose2mat(pose):
    homo_pose_mat = np.zeros((4, 4), dtype=np.float32)
    homo_pose_mat[:3, :3] = quat2mat(pose[1])
    homo_pose_mat[:3, 3] = np.array(pose[0], dtype=np.float32)
    homo_pose_mat[3, 3] = 1.
    return homo_pose_mat
Converts pose to homogeneous matrix.

Args:
    pose: a (pos, orn) tuple where pos is vec3 float cartesian, and
        orn is vec4 float quaternion.

Returns:
    4x4 homogeneous matrix
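The same construction can be sketched without NumPy. This is a minimal illustration, not the library's implementation: `quat_to_rot` and `pose_to_homogeneous` are hypothetical names, and the (x, y, z, w) quaternion ordering is an assumption that should be checked against the library's convention.

```python
def quat_to_rot(q):
    # Convert a unit quaternion to a 3x3 rotation matrix.
    # The (x, y, z, w) ordering is an assumption; verify your convention.
    x, y, z, w = q
    return [
        [1 - 2 * (y * y + z * z), 2 * (x * y - z * w), 2 * (x * z + y * w)],
        [2 * (x * y + z * w), 1 - 2 * (x * x + z * z), 2 * (y * z - x * w)],
        [2 * (x * z - y * w), 2 * (y * z + x * w), 1 - 2 * (x * x + y * y)],
    ]

def pose_to_homogeneous(pose):
    # Build a 4x4 homogeneous transform from a (pos, orn) tuple:
    # rotation in the upper-left 3x3 block, translation in the last column.
    pos, orn = pose
    rot = quat_to_rot(orn)
    return [
        rot[0] + [pos[0]],
        rot[1] + [pos[1]],
        rot[2] + [pos[2]],
        [0.0, 0.0, 0.0, 1.0],
    ]

# Identity quaternion (0, 0, 0, 1) leaves the rotation block as identity.
mat = pose_to_homogeneous(((1.0, 2.0, 3.0), (0.0, 0.0, 0.0, 1.0)))
```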
def ls(
        self, rev, path, recursive=False, recursive_dirs=False,
        directory=False, report=()
):
    raise NotImplementedError
List directory or file.

:param rev: The revision to use.
:param path: The path to list. May start with a '/' or not.
    Directories may end with a '/' or not.
:param recursive: Recursively list files in subdirectories.
:param recursive_dirs: Used when recursive=True, also list directories.
:param directory: If path is a directory, list path itself instead of
    its contents.
:param report: A list or tuple of extra attributes to return that may
    require extra processing. Recognized values are 'size', 'target',
    'executable', and 'commit'.

Returns a list of dictionaries with the following keys:

**type**
    The type of the file: 'f' for file, 'd' for directory, 'l' for
    symlink.
**name**
    The name of the file. Not present if directory=True.
**size**
    The size of the file. Only present for files when 'size' is in
    report.
**target**
    The target of the symlink. Only present for symlinks when 'target'
    is in report.
**executable**
    True if the file is executable, False otherwise. Only present for
    files when 'executable' is in report.

Raises PathDoesNotExist if the path does not exist.
def element_name_from_Z(Z, normalize=False):
    r = element_data_from_Z(Z)[2]
    if normalize:
        return r.capitalize()
    else:
        return r
Obtain an element's name from its Z number.

An exception is thrown if the Z number is not found. If normalize is
True, the first letter will be capitalized.
def match(self, context, line):
    return line.kind == 'code' and line.partitioned[0] in self._both
Match code lines prefixed with a variety of keywords.
def reset(self):
    self._undo_stack.clear()
    self._spike_clusters = self._spike_clusters_base
    self._new_cluster_id = self._new_cluster_id_0
Reset the clustering to the original clustering. All changes are lost.
def _kill_process(self, pid, sig=signal.SIGKILL):
    try:
        os.kill(pid, sig)
    except OSError as e:
        if e.errno == errno.ESRCH:
            logging.debug("Failure %s while killing process %s with signal %s: %s",
                          e.errno, pid, sig, e.strerror)
        else:
            logging.warning("Failure %s while killing process %s with signal %s: %s",
                            e.errno, pid, sig, e.strerror)
Try to send signal to given process.
def from_form(self, param_name, field):
    return self.__from_source(param_name, field, lambda: request.form, 'form')
A decorator that converts a request form into a function parameter
based on the specified field.

:param str param_name: The parameter which receives the argument.
:param Field field: The field class or instance used to deserialize
    the request form to a Python object.
:return: A function
def updateParams(self, newvalues):
    for (param, value) in newvalues.items():
        if param not in self.model.freeparams:
            raise RuntimeError("Can't handle param: {0}".format(param))
    if newvalues:
        self.model.updateParams(newvalues)
        self._updateInternals()
        self._paramsarray = None
Update model parameters and re-compute likelihoods.

This method is the **only** acceptable way to update model parameters.
The likelihood is re-computed as needed by this method.

Args:
    `newvalues` (dict)
        A dictionary keyed by param name and with value as new value
        to set. Each parameter name must be a valid model parameter
        (i.e., in `model.freeparams`).
def interp(self, date: timetools.Date) -> float:
    xnew = timetools.TOY(date)
    xys = list(self)
    for idx, (x_1, y_1) in enumerate(xys):
        if x_1 > xnew:
            x_0, y_0 = xys[idx-1]
            break
    else:
        x_0, y_0 = xys[-1]
        x_1, y_1 = xys[0]
    return y_0+(y_1-y_0)/(x_1-x_0)*(xnew-x_0)
Perform a linear value interpolation for the given `date` and return
the result.

Instantiate a 1-dimensional |SeasonalParameter| object:

>>> from hydpy.core.parametertools import SeasonalParameter
>>> class Par(SeasonalParameter):
...     NDIM = 1
...     TYPE = float
...     TIME = None
>>> par = Par(None)
>>> par.simulationstep = '1d'
>>> par.shape = (None,)

Define three toy-value pairs:

>>> par(_1=2.0, _2=5.0, _12_31=4.0)

Passing a |Date| object matching a |TOY| object exactly returns the
corresponding |float| value:

>>> from hydpy import Date
>>> par.interp(Date('2000.01.01'))
2.0
>>> par.interp(Date('2000.02.01'))
5.0
>>> par.interp(Date('2000.12.31'))
4.0

For all intermediate points, |SeasonalParameter.interp| performs a
linear interpolation:

>>> from hydpy import round_
>>> round_(par.interp(Date('2000.01.02')))
2.096774
>>> round_(par.interp(Date('2000.01.31')))
4.903226
>>> round_(par.interp(Date('2000.02.02')))
4.997006
>>> round_(par.interp(Date('2000.12.30')))
4.002994

Linear interpolation is also allowed between the first and the last
pair when they do not capture the endpoints of the year:

>>> par(_1_2=2.0, _12_30=4.0)
>>> round_(par.interp(Date('2000.12.29')))
3.99449
>>> par.interp(Date('2000.12.30'))
4.0
>>> round_(par.interp(Date('2000.12.31')))
3.333333
>>> round_(par.interp(Date('2000.01.01')))
2.666667
>>> par.interp(Date('2000.01.02'))
2.0
>>> round_(par.interp(Date('2000.01.03')))
2.00551

The following example briefly shows interpolation performed for a
2-dimensional parameter:

>>> Par.NDIM = 2
>>> par = Par(None)
>>> par.shape = (None, 2)
>>> par(_1_1=[1., 2.], _1_3=[-3, 0.])
>>> result = par.interp(Date('2000.01.02'))
>>> round_(result[0])
-1.0
>>> round_(result[1])
1.0
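The cyclic for/else interpolation pattern can be sketched standalone on plain (x, y) pairs. This is an illustration under assumptions, not HydPy's code: `interp_cyclic` is a hypothetical name, plain floats stand in for |TOY| objects, and the wrap-around across the period boundary is made explicit (HydPy's TOY arithmetic handles this internally).

```python
def interp_cyclic(xys, xnew, period=366.0):
    # xys: (x, y) pairs sorted by x on a cyclic axis (e.g. day of year).
    # Find the first pair beyond xnew; if none (or xnew precedes the
    # first pair), interpolate between the last and first pair across
    # the period boundary -- the same for/else idea as interp() above.
    wrapped = True
    for idx, (x_1, y_1) in enumerate(xys):
        if x_1 > xnew:
            x_0, y_0 = xys[idx - 1]
            wrapped = (idx == 0)
            break
    else:
        x_0, y_0 = xys[-1]
        x_1, y_1 = xys[0]
    if wrapped:
        # Unwrap the right endpoint (and xnew, if needed) past the boundary.
        x_1 += period
        if xnew < x_0:
            xnew += period
    return y_0 + (y_1 - y_0) / (x_1 - x_0) * (xnew - x_0)

midpoint = interp_cyclic([(10.0, 1.0), (100.0, 2.0)], 55.0)
```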
def _init_content_type_params(self):
    ret = {}
    if self.content_type:
        params = self.content_type.split(';')[1:]
        for param in params:
            try:
                key, val = param.split('=')
                ret[naked(key)] = naked(val)
            except ValueError:
                continue
    return ret
Return the Content-Type request header parameters.

Convert all of the semicolon-separated parameters into a dict of
key/vals. If for some stupid reason duplicate & conflicting params are
present, then the last one wins. If a particular content-type param is
non-compliant by not being a simple key=val pair, then it is skipped.
If no content-type header or params are present, then return an empty
dict.

:return: dict
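The parsing logic can be shown in a self-contained sketch. The function name `parse_content_type_params` is hypothetical; the behavior mirrors the method above (non key=val fragments skipped, later duplicates win).

```python
def parse_content_type_params(content_type):
    # Split 'type/subtype; key=val; ...' into a dict of parameters.
    params = {}
    if not content_type:
        return params
    for fragment in content_type.split(';')[1:]:
        try:
            # Non-compliant fragments (not a simple key=val) raise
            # ValueError on unpacking and are skipped.
            key, val = fragment.split('=')
        except ValueError:
            continue
        params[key.strip()] = val.strip()
    return params

parsed = parse_content_type_params('application/json; charset=utf-8; broken')
```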
def send_messages(self, sms_messages):
    results = []
    for message in sms_messages:
        try:
            assert message.connection is None
        except AssertionError:
            if not self.fail_silently:
                raise
        backend = self.backend
        fail_silently = self.fail_silently
        result = django_rq.enqueue(self._send, message, backend=backend,
                                   fail_silently=fail_silently)
        results.append(result)
    return results
Receives a list of SMSMessage instances and returns a list of RQ `Job` instances.
def do_find(lookup, term):
    space = defaultdict(list)
    for name in lookup.keys():
        space[name].append(name)
    try:
        iter_lookup = lookup.iteritems()
    except AttributeError:
        iter_lookup = lookup.items()
    for name, definition in iter_lookup:
        for keyword in definition['keywords']:
            space[keyword].append(name)
        space[definition['category']].append(name)
    matches = fnmatch.filter(space.keys(), term)
    results = set()
    for match in matches:
        results.update(space[match])
    return [(r, translate(lookup, r)) for r in results]
Matches term glob against short-name, keywords and categories.
def _is_common_text(self, inpath):
    one_suffix = inpath[-2:]
    two_suffix = inpath[-3:]
    three_suffix = inpath[-4:]
    four_suffix = inpath[-5:]
    if one_suffix in self.common_text:
        return True
    elif two_suffix in self.common_text:
        return True
    elif three_suffix in self.common_text:
        return True
    elif four_suffix in self.common_text:
        return True
    else:
        return False
private method to compare file path mime type to common text file types
def _load_resource_listing(resource_listing):
    try:
        with open(resource_listing) as resource_listing_file:
            return simplejson.load(resource_listing_file)
    except IOError:
        raise ResourceListingNotFoundError(
            'No resource listing found at {0}. Note that your json file '
            'must be named {1}'.format(resource_listing, API_DOCS_FILENAME)
        )
Load the resource listing from file, handling errors.

:param resource_listing: path to the api-docs resource listing file
:type resource_listing: string
:returns: contents of the resource listing file
:rtype: dict
def _create_content_body(self, body):
    frames = int(math.ceil(len(body) / float(self._max_frame_size)))
    for offset in compatibility.RANGE(0, frames):
        start_frame = self._max_frame_size * offset
        end_frame = start_frame + self._max_frame_size
        body_len = len(body)
        if end_frame > body_len:
            end_frame = body_len
        yield pamqp_body.ContentBody(body[start_frame:end_frame])
Split body based on the maximum frame size.

This function is based on code from Rabbitpy.
https://github.com/gmr/rabbitpy

:param bytes|str|unicode body: Message payload
:rtype: collections.Iterable
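The frame-splitting loop can be sketched without the pamqp wrapper. `split_frames` is a hypothetical standalone name; it yields the raw chunks that the generator above would wrap in `ContentBody` objects.

```python
import math

def split_frames(body, max_frame_size):
    # Yield successive chunks of body, each at most max_frame_size long.
    frames = int(math.ceil(len(body) / float(max_frame_size)))
    for offset in range(frames):
        start = max_frame_size * offset
        # Slicing past the end is safe in Python, so no explicit clamp
        # of the end index is required here.
        yield body[start:start + max_frame_size]

chunks = list(split_frames(b'abcdefgh', 3))
```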
def flush_pending(function):
    s = boto3.Session()
    client = s.client('lambda')
    results = client.invoke(
        FunctionName=function,
        Payload=json.dumps({'detail-type': 'Scheduled Event'})
    )
    content = results.pop('Payload').read()
    pprint.pprint(results)
    pprint.pprint(json.loads(content))
Attempt to acquire any pending locks.
def get_single_value(value):
    if not all_elements_equal(value):
        raise ValueError('Not all values are equal to each other.')
    if is_scalar(value):
        return value
    return value.item(0)
Get a single value out of the given value.

This is meant to be used after a call to :func:`all_elements_equal`
that returned True. With this function we return a single number from
the input value.

Args:
    value (ndarray or number): a numpy array or a single number.

Returns:
    number: a single number from the input

Raises:
    ValueError: if not all elements are equal
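The same check-then-extract idea can be sketched with the standard library only. `single_value` is a hypothetical name; it works on plain sequences rather than numpy arrays, so it is an analogy to, not a replacement for, the function above.

```python
def single_value(values):
    # Verify all elements are equal, then return the shared value.
    # Using a set mirrors the all_elements_equal precondition for
    # hashable elements.
    if len(set(values)) != 1:
        raise ValueError('Not all values are equal to each other.')
    return values[0]

shared = single_value([3, 3, 3])
```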
def calc_normal_std_glorot(inmaps, outmaps, kernel=(1, 1)):
    return np.sqrt(2. / (np.prod(kernel) * inmaps + outmaps))
Calculates the standard deviation proposed by Glorot et al.

.. math::
    \sigma = \sqrt{\frac{2}{NK + M}}

Args:
    inmaps (int): Map size of an input Variable, :math:`N`.
    outmaps (int): Map size of an output Variable, :math:`M`.
    kernel (:obj:`tuple` of :obj:`int`): Convolution kernel spatial
        shape. In above definition, :math:`K` is the product of shape
        dimensions. In Affine, the default value should be used.

Example:

.. code-block:: python

    import nnabla as nn
    import nnabla.parametric_functions as PF
    import nnabla.initializer as I

    x = nn.Variable([60, 1, 28, 28])
    s = I.calc_normal_std_glorot(x.shape[1], 64)
    w = I.NormalInitializer(s)
    b = I.ConstantInitializer(0)
    h = PF.convolution(x, 64, [3, 3], w_init=w, b_init=b,
                       pad=[1, 1], name='conv')

References:
    * `Glorot and Bengio. Understanding the difficulty of training
      deep feedforward neural networks
      <http://jmlr.org/proceedings/papers/v9/glorot10a/glorot10a.pdf>`_
def parse_css(self, css):
    rulesets = self.ruleset_re.findall(css)
    for (selector, declarations) in rulesets:
        rule = Rule(self.parse_selector(selector))
        rule.properties = self.parse_declarations(declarations)
        self.rules.append(rule)
Parse a css style sheet into the CSS object. For the moment this will only work for very simple css documents. It works by using regular expression matching css syntax. This is not bullet proof.
def _validate_children(self):
    for child in self._children:
        if child.__class__ not in self._allowed_children:
            raise ValueError(
                "Child %s is not allowed as a child of this %s type entity." % (
                    child, self.__class__
                )
            )
Check that the children we have are allowed here.
def refresh(self) -> None:
    if not self:
        self.values[:] = 0.
    elif len(self) == 1:
        values = list(self._toy2values.values())[0]
        self.values[:] = self.apply_timefactor(values)
    else:
        for idx, date in enumerate(
                timetools.TOY.centred_timegrid(self.simulationstep)):
            values = self.interp(date)
            self.values[idx] = self.apply_timefactor(values)
Update the actual simulation values based on the toy-value pairs.

Usually, one does not need to call refresh explicitly. The "magic"
methods __call__, __setattr__, and __delattr__ invoke it
automatically, when required.

Instantiate a 1-dimensional |SeasonalParameter| object:

>>> from hydpy.core.parametertools import SeasonalParameter
>>> class Par(SeasonalParameter):
...     NDIM = 1
...     TYPE = float
...     TIME = None
>>> par = Par(None)
>>> par.simulationstep = '1d'
>>> par.shape = (None,)

When a |SeasonalParameter| object does not contain any toy-value pairs
yet, the method |SeasonalParameter.refresh| sets all actual simulation
values to zero:

>>> par.values = 1.
>>> par.refresh()
>>> par.values[0]
0.0

When there is only one toy-value pair, its values are relevant for all
actual simulation values:

>>> par.toy_1 = 2.  # calls refresh automatically
>>> par.values[0]
2.0

Method |SeasonalParameter.refresh| performs a linear interpolation for
the central time points of each simulation time step. Hence, in the
following example, the original values of the toy-value pairs do not
show up:

>>> par.toy_12_31 = 4.
>>> from hydpy import round_
>>> round_(par.values[0])
2.00274
>>> round_(par.values[-2])
3.99726
>>> par.values[-1]
3.0

If one wants to preserve the original values in this example, one
would have to set the corresponding toy instances in the middle of
some simulation step intervals:

>>> del par.toy_1
>>> del par.toy_12_31
>>> par.toy_1_1_12 = 2
>>> par.toy_12_31_12 = 4.
>>> par.values[0]
2.0
>>> round_(par.values[1])
2.005479
>>> round_(par.values[-2])
3.994521
>>> par.values[-1]
4.0
def trim_common_suffixes(strs, min_len=0):
    if len(strs) < 2:
        return 0, strs
    rev_strs = [s[::-1] for s in strs]
    trimmed, rev_strs = trim_common_prefixes(rev_strs, min_len)
    if trimmed:
        strs = [s[::-1] for s in rev_strs]
    return trimmed, strs
Trim common suffixes.

>>> trim_common_suffixes('A', 1)
(0, 'A')
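The reverse/trim-prefix/reverse trick can be shown self-contained. This is a simplified sketch: `trim_common_prefix` and `trim_common_suffix` are hypothetical names, and the `min_len` handling of the original `trim_common_prefixes` is omitted.

```python
import os

def trim_common_prefix(strs):
    # Strip the longest shared prefix from every string.
    if len(strs) < 2:
        return 0, strs
    prefix = os.path.commonprefix(strs)
    n = len(prefix)
    return n, [s[n:] for s in strs]

def trim_common_suffix(strs):
    # Reverse each string, trim the common prefix, reverse back --
    # the same trick trim_common_suffixes uses above.
    rev = [s[::-1] for s in strs]
    trimmed, rev = trim_common_prefix(rev)
    return trimmed, [s[::-1] for s in rev]

trimmed, out = trim_common_suffix(['foo.txt', 'bar.txt'])
```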
def slice_shift(self, periods=1, axis=0):
    if periods == 0:
        return self
    if periods > 0:
        vslicer = slice(None, -periods)
        islicer = slice(periods, None)
    else:
        vslicer = slice(-periods, None)
        islicer = slice(None, periods)
    new_obj = self._slice(vslicer, axis=axis)
    shifted_axis = self._get_axis(axis)[islicer]
    new_obj.set_axis(shifted_axis, axis=axis, inplace=True)
    return new_obj.__finalize__(self)
Equivalent to `shift` without copying data. The shifted data will not
include the dropped periods and the shifted axis will be smaller than
the original.

Parameters
----------
periods : int
    Number of periods to move, can be positive or negative

Returns
-------
shifted : same type as caller

Notes
-----
While the `slice_shift` is faster than `shift`, you may pay for it
later during alignment.
def _exec_cmd(self, command, **kwargs):
    kwargs['command'] = command
    self._check_exclusive_parameters(**kwargs)
    requests_params = self._handle_requests_params(kwargs)
    session = self._meta_data['bigip']._meta_data['icr_session']
    response = session.post(
        self._meta_data['uri'], json=kwargs, **requests_params)
    new_instance = self._stamp_out_core()
    new_instance._local_update(response.json())
    if 'commandResult' in new_instance.__dict__:
        new_instance._check_command_result()
    return new_instance
Create a new method as command has specific requirements.

A handful of TMSH global commands are supported, so this method
requires the command as a parameter.

:raises: InvalidCommand
def remove_fields(layer, fields_to_remove):
    index_to_remove = []
    data_provider = layer.dataProvider()
    for field in fields_to_remove:
        index = layer.fields().lookupField(field)
        if index != -1:
            index_to_remove.append(index)
    data_provider.deleteAttributes(index_to_remove)
    layer.updateFields()
Remove fields from a vector layer.

:param layer: The vector layer.
:type layer: QgsVectorLayer

:param fields_to_remove: List of fields to remove.
:type fields_to_remove: list
def builtin_lookup(name):
    builtin_astroid = MANAGER.ast_from_module(builtins)
    if name == "__dict__":
        return builtin_astroid, ()
    try:
        stmts = builtin_astroid.locals[name]
    except KeyError:
        stmts = ()
    return builtin_astroid, stmts
Look up a name in the builtin module.

Return the astroid for the builtin module and the list of matching
statements.
def num_lines(self):
    if self.from_stdin:
        return None
    if not self._num_lines:
        self._iterate_lines()
    return self._num_lines
Lazy evaluation of the number of lines. Returns None for stdin input currently.
def competence(s):
    if any([isinstance(s, cls) for cls in
            [distributions.Wishart, distributions.WishartCov]]):
        return 2
    else:
        return 0
The competence function for MatrixMetropolis
def search_drama_series(self, query_string):
    result = self._android_api.list_series(
        media_type=ANDROID.MEDIA_TYPE_DRAMA,
        filter=ANDROID.FILTER_PREFIX + query_string)
    return result
Search drama series list by series name, case-sensitive.

@param str query_string: string to search for; note that the search is
    very simplistic and only matches against the start of the series
    name, e.g. a search for "space" matches "Space Brothers" but
    wouldn't match "Brothers Space"
@return: list<crunchyroll.models.Series>
def _detect_timezone_windows():
    global win32timezone_to_en
    tzi = DTZI_c()
    kernel32 = ctypes.windll.kernel32
    getter = kernel32.GetTimeZoneInformation
    getter = getattr(kernel32, "GetDynamicTimeZoneInformation", getter)
    _ = getter(ctypes.byref(tzi))
    win32tz_key_name = tzi.key_name
    if not win32tz_key_name:
        if win32timezone is None:
            return None
        win32tz_name = tzi.standard_name
        if not win32timezone_to_en:
            win32timezone_to_en = dict(
                win32timezone.TimeZoneInfo._get_indexed_time_zone_keys("Std"))
        win32tz_key_name = win32timezone_to_en.get(win32tz_name, win32tz_name)
    territory = locale.getdefaultlocale()[0].split("_", 1)[1]
    olson_name = win32tz_map.win32timezones.get(
        (win32tz_key_name, territory),
        win32tz_map.win32timezones.get(win32tz_key_name, None))
    if not olson_name:
        return None
    if not isinstance(olson_name, str):
        olson_name = olson_name.encode("ascii")
    return pytz.timezone(olson_name)
Detect timezone on the windows platform.
def ConfigureLogging(
        debug_output=False, filename=None, mode='w', quiet_mode=False):
    for handler in logging.root.handlers:
        logging.root.removeHandler(handler)
    logger = logging.getLogger()
    if filename and filename.endswith('.gz'):
        handler = CompressedFileHandler(filename, mode=mode)
    elif filename:
        handler = logging.FileHandler(filename, mode=mode)
    else:
        handler = logging.StreamHandler()
    format_string = (
        '%(asctime)s [%(levelname)s] (%(processName)-10s) PID:%(process)d '
        '<%(module)s> %(message)s')
    formatter = logging.Formatter(format_string)
    handler.setFormatter(formatter)
    if debug_output:
        level = logging.DEBUG
    elif quiet_mode:
        level = logging.WARNING
    else:
        level = logging.INFO
    logger.setLevel(level)
    handler.setLevel(level)
    logger.addHandler(handler)
Configures the logging root logger.

Args:
    debug_output (Optional[bool]): True if the logging should include
        debug output.
    filename (Optional[str]): log filename.
    mode (Optional[str]): log file access mode.
    quiet_mode (Optional[bool]): True if the logging should not
        include information output. Note that debug_output takes
        precedence over quiet_mode.
def search_directory(self, **kwargs):
    search_response = self.request('SearchDirectory', kwargs)
    result = {}
    items = {
        "account": zobjects.Account.from_dict,
        "domain": zobjects.Domain.from_dict,
        "dl": zobjects.DistributionList.from_dict,
        "cos": zobjects.COS.from_dict,
        "calresource": zobjects.CalendarResource.from_dict
    }
    for obj_type, func in items.items():
        if obj_type in search_response:
            if isinstance(search_response[obj_type], list):
                result[obj_type] = [
                    func(v) for v in search_response[obj_type]]
            else:
                result[obj_type] = func(search_response[obj_type])
    return result
SearchAccount is deprecated, using SearchDirectory.

:param query: Query string - should be an LDAP-style filter string
    (RFC 2254)
:param limit: The maximum number of accounts to return (0 is default
    and means all)
:param offset: The starting offset (0, 25, etc)
:param domain: The domain name to limit the search to
:param applyCos: applyCos - Flag whether or not to apply the COS
    policy to account. Specify 0 (false) if only requesting attrs
    that aren't inherited from COS
:param applyConfig: whether or not to apply the global config attrs
    to account. Specify 0 (false) if only requesting attrs that
    aren't inherited from global config
:param sortBy: Name of attribute to sort on. Default is the account
    name.
:param types: Comma-separated list of types to return. Legal values
    are: accounts|distributionlists|aliases|resources|domains|coses
    (default is accounts)
:param sortAscending: Whether to sort in ascending order. Default is
    1 (true)
:param countOnly: Whether response should be count only. Default is
    0 (false)
:param attrs: Comma-separated list of attrs to return ("displayName",
    "zimbraId", "zimbraAccountStatus")
:return: dict of list of "account" "alias" "dl" "calresource"
    "domain" "cos"
def xgb_progressbar(rounds=1000):
    pbar = tqdm(total=rounds)

    def callback(_):
        pbar.update(1)

    return callback
Progressbar for xgboost using the tqdm library.

Examples
--------
>>> model = xgb.train(params, X_train, 1000,
...                   callbacks=[xgb_progressbar(1000)])
def attr(obj, attr):
    if not obj or not hasattr(obj, attr):
        return ''
    return getattr(obj, attr, '')
Does the same thing as getattr(obj, attr, ''), but also returns ''
when obj itself is falsy.
def _get_hyperparameters(self):
    hyperparameters = {}
    for key in self._hyperparameters:
        hyperparameters[key] = getattr(self, key)
    return hyperparameters
Get internal optimization parameters.
def guess_mime_type(self, path):
    _, ext = posixpath.splitext(path)
    if ext in self.extensions_map:
        return self.extensions_map[ext]
    ext = ext.lower()
    return self.extensions_map[ext if ext in self.extensions_map else '']
Guess an appropriate MIME type based on the extension of the provided
path.

:param str path: The path of the file to analyze.
:return: The guessed MIME type, or the default if none is found.
:rtype: str
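The standard library offers the same extension-based lookup via `mimetypes`, which can stand in for the `extensions_map` approach above. `guess_mime` is a hypothetical wrapper name.

```python
import mimetypes

def guess_mime(path, default='application/octet-stream'):
    # mimetypes.guess_type returns (type, encoding); the type is None
    # when the extension is unknown, so fall back to a default.
    mime, _ = mimetypes.guess_type(path)
    return mime or default

html_type = guess_mime('index.html')
```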
def moz_info(self):
    if 'moz_info' not in self._memo:
        self._memo['moz_info'] = _get_url(
            self.artifact_url('mozinfo.json')).json()
    return self._memo['moz_info']
Return the build's mozinfo
def take_ownership(self, **kwargs):
    path = '%s/%s/take_ownership' % (self.manager.path, self.get_id())
    server_data = self.manager.gitlab.http_post(path, **kwargs)
    self._update_attrs(server_data)
Update the owner of a pipeline schedule.

Args:
    **kwargs: Extra options to send to the server (e.g. sudo)

Raises:
    GitlabAuthenticationError: If authentication is not correct
    GitlabOwnershipError: If the request failed
def get(self, section, key):
    try:
        return self.parser.get(section, key)
    except (NoOptionError, NoSectionError) as e:
        logger.warning("%s", e)
        return None
Reads the config value for the requested section and key and returns
it.

Parameters:
    * **section (string):** the section to look for the config value;
      either oxd or client
    * **key (string):** the key for the config value required

Returns:
    **value (string):** the value of the key in the appropriate format
    if found, or None if no such section or key could be found

Example:
    config = Configurer(location)
    oxd_port = config.get('oxd', 'port')  # returns the port of the oxd
def visualize_saliency(model, layer_idx, filter_indices, seed_input,
                       wrt_tensor=None, backprop_modifier=None,
                       grad_modifier='absolute', keepdims=False):
    if backprop_modifier is not None:
        modifier_fn = get(backprop_modifier)
        model = modifier_fn(model)
    losses = [
        (ActivationMaximization(model.layers[layer_idx], filter_indices), -1)
    ]
    return visualize_saliency_with_losses(model.input, losses, seed_input,
                                          wrt_tensor, grad_modifier, keepdims)
Generates an attention heatmap over the `seed_input` for maximizing
`filter_indices` output in the given `layer_idx`.

Args:
    model: The `keras.models.Model` instance. The model input shape
        must be: `(samples, channels, image_dims...)` if
        `image_data_format=channels_first` or
        `(samples, image_dims..., channels)` if
        `image_data_format=channels_last`.
    layer_idx: The layer index within `model.layers` whose filters
        need to be visualized.
    filter_indices: filter indices within the layer to be maximized.
        If None, all filters are visualized. (Default value = None)
        For `keras.layers.Dense` layers, `filter_idx` is interpreted
        as the output index. If you are visualizing the final
        `keras.layers.Dense` layer, consider switching the 'softmax'
        activation for 'linear' using
        [utils.apply_modifications](vis.utils.utils#apply_modifications)
        for better results.
    seed_input: The model input for which the activation map needs to
        be visualized.
    wrt_tensor: Short for, with respect to. The gradients of losses
        are computed with respect to this tensor. When None, this is
        assumed to be the same as `input_tensor` (Default value: None)
    backprop_modifier: backprop modifier to use. See
        [backprop_modifiers](vis.backprop_modifiers.md). If you don't
        specify anything, no backprop modification is applied.
        (Default value = None)
    grad_modifier: gradient modifier to use. See
        [grad_modifiers](vis.grad_modifiers.md). By default the
        `absolute` value of gradients is used. To visualize positive
        or negative gradients, use `relu` and `negate` respectively.
        (Default value = 'absolute')
    keepdims: A boolean, whether to keep the dimensions or not. If
        keepdims is False, the channels axis is deleted. If keepdims
        is True, the grad with the same shape as input_tensor is
        returned. (Default value: False)

Example:
    If you wanted to visualize attention over the 'bird' category,
    say output index 22 on the final `keras.layers.Dense` layer,
    then, `filter_indices = [22]`, `layer = dense_layer`.

    One could also set filter indices to more than one value. For
    example, `filter_indices = [22, 23]` should (hopefully) show an
    attention map that corresponds to both the 22 and 23 output
    categories.

Returns:
    The heatmap image indicating the `seed_input` regions whose change
    would most contribute towards maximizing the output of
    `filter_indices`.
def set_auth_key_from_file(user,
                           source,
                           config='.ssh/authorized_keys',
                           saltenv='base',
                           fingerprint_hash_type=None):
    lfile = __salt__['cp.cache_file'](source, saltenv)
    if not os.path.isfile(lfile):
        raise CommandExecutionError(
            'Failed to pull key file from salt file server'
        )
    s_keys = _validate_keys(lfile, fingerprint_hash_type)
    if not s_keys:
        err = (
            'No keys detected in {0}. Is file properly formatted?'.format(
                source
            )
        )
        log.error(err)
        __context__['ssh_auth.error'] = err
        return 'fail'
    else:
        rval = ''
        for key in s_keys:
            rval += set_auth_key(
                user,
                key,
                enc=s_keys[key]['enc'],
                comment=s_keys[key]['comment'],
                options=s_keys[key]['options'],
                config=config,
                cache_keys=list(s_keys.keys()),
                fingerprint_hash_type=fingerprint_hash_type
            )
        if 'fail' in rval:
            return 'fail'
        elif 'replace' in rval:
            return 'replace'
        elif 'new' in rval:
            return 'new'
        else:
            return 'no change'
Add a key to the authorized_keys file, using a file as the source.

CLI Example:

.. code-block:: bash

    salt '*' ssh.set_auth_key_from_file <user> salt://ssh_keys/<user>.id_rsa.pub
def randomize_es(es_queryset):
    return es_queryset.query(
        query.FunctionScore(
            functions=[function.RandomScore()]
        )
    ).sort("-_score")
Randomize an elasticsearch queryset.
def __is_valid_pos(pos_tuple, valid_pos):
    def is_valid_pos(valid_pos_tuple):
        length_valid_pos_tuple = len(valid_pos_tuple)
        if valid_pos_tuple == pos_tuple[:length_valid_pos_tuple]:
            return True
        else:
            return False

    seq_bool_flags = [is_valid_pos(valid_pos_tuple)
                      for valid_pos_tuple in valid_pos]
    if True in set(seq_bool_flags):
        return True
    else:
        return False
This function checks whether the token's POS is within the POS set
that the user specified. If the token meets all conditions, return
True; else return False.
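The check reduces to a tuple-prefix test, which can be sketched compactly. `pos_matches` is a hypothetical name; the logic mirrors `__is_valid_pos` above.

```python
def pos_matches(pos_tuple, valid_pos):
    # True if any tuple in valid_pos is a prefix of pos_tuple.
    return any(
        pos_tuple[:len(valid)] == valid
        for valid in valid_pos
    )

# ('noun',) is a prefix of ('noun', 'proper'), so this matches.
hit = pos_matches(('noun', 'proper'), {('noun',), ('verb', 'base')})
```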
def attrsignal(descriptor, signal_name, *, defer=False):
    def decorator(f):
        add_handler_spec(
            f, _attrsignal_spec(descriptor, signal_name, f, defer)
        )
        return f
    return decorator
Connect the decorated method or coroutine method to the addressed
signal on a descriptor.

:param descriptor: The descriptor to connect to.
:type descriptor: :class:`Descriptor` subclass.
:param signal_name: Attribute name of the signal to connect to
:type signal_name: :class:`str`
:param defer: Flag indicating whether deferred execution of the
    decorated method is desired; see below for details.
:type defer: :class:`bool`

The signal is discovered by accessing the attribute with the name
`signal_name` on the :attr:`~Descriptor.value_type` of the
`descriptor`. During instantiation of the service, the value of the
descriptor is used to obtain the signal and then the decorated method
is connected to the signal.

If the signal is a :class:`.callbacks.Signal` and `defer` is false,
the decorated object is connected using the default
:attr:`~.callbacks.AdHocSignal.STRONG` mode.

If the signal is a :class:`.callbacks.Signal` and `defer` is true and
the decorated object is a coroutine function, the
:attr:`~.callbacks.AdHocSignal.SPAWN_WITH_LOOP` mode with the default
asyncio event loop is used. If the decorated object is not a
coroutine function, :attr:`~.callbacks.AdHocSignal.ASYNC_WITH_LOOP`
is used instead.

If the signal is a :class:`.callbacks.SyncSignal`, `defer` must be
false and the decorated object must be a coroutine function.

.. versionadded:: 0.9
async def _set_annotations(entity_tag, annotations, connection):
    log.debug('Updating annotations on %s', entity_tag)
    facade = client.AnnotationsFacade.from_connection(connection)
    args = client.EntityAnnotations(
        entity=entity_tag,
        annotations=annotations,
    )
    return await facade.Set([args])
Set annotations on the specified entity. :param annotations map[string]string: the annotations as key/value pairs.
def clean_intersections(G, tolerance=15, dead_ends=False):
    if not dead_ends:
        if 'streets_per_node' in G.graph:
            streets_per_node = G.graph['streets_per_node']
        else:
            streets_per_node = count_streets_per_node(G)
        dead_end_nodes = [node for node, count in streets_per_node.items()
                          if count <= 1]
        G = G.copy()
        G.remove_nodes_from(dead_end_nodes)
    gdf_nodes = graph_to_gdfs(G, edges=False)
    buffered_nodes = gdf_nodes.buffer(tolerance).unary_union
    if isinstance(buffered_nodes, Polygon):
        buffered_nodes = [buffered_nodes]
    unified_intersections = gpd.GeoSeries(list(buffered_nodes))
    intersection_centroids = unified_intersections.centroid
    return intersection_centroids
Clean-up intersections comprising clusters of nodes by merging them
and returning their centroids.

Divided roads are represented by separate centerline edges. The
intersection of two divided roads thus creates 4 nodes, representing
where each edge intersects a perpendicular edge. These 4 nodes
represent a single intersection in the real world. This function
cleans them up by buffering their points to an arbitrary distance,
merging overlapping buffers, and taking their centroid. For best
results, the tolerance argument should be adjusted to approximately
match street design standards in the specific street network.

Parameters
----------
G : networkx multidigraph
tolerance : float
    nodes within this distance (in graph's geometry's units) will be
    dissolved into a single intersection
dead_ends : bool
    if False, discard dead-end nodes to return only street-intersection
    points

Returns
-------
intersection_centroids : geopandas.GeoSeries
    a GeoSeries of shapely Points representing the centroids of street
    intersections
def __parseDatasets(self):
    datasets = []
    if 'dataSets' in self.__dataItem:
        for dataset in self.__dataItem['dataSets']:
            datasets.append(
                DataSet(Sitools2Abstract.getBaseUrl(self), dataset))
    return datasets
Returns the list of Dataset related to the project.
def replace(self, key, val, time=0, min_compress_len=0):
    return self._set("replace", key, val, time, min_compress_len)
Replace existing key with value.

Like L{set}, but only stores in memcache if the key already exists.
The opposite of L{add}.

@return: Nonzero on success.
@rtype: int
def write(self, string):
    string = string.rstrip()
    if string:
        self.logger.critical(string)
Erase newline from a string and write to the logger.
def expand(sql, args=None):
    sql, args = SqlModule.get_sql_statement_with_environment(sql, args)
    return _sql_statement.SqlStatement.format(sql._sql, args)
Expand a SqlStatement, query string or SqlModule with a set of
arguments.

Args:
    sql: a SqlStatement, %%sql module, or string containing a query.
    args: a string of command line arguments or a dictionary of
        values. If a string, it is passed to the argument parser for
        the SqlModule associated with the SqlStatement or SqlModule.
        If a dictionary, it is used to override any default arguments
        from the argument parser. If the sql argument is a string then
        args must be None or a dictionary as in this case there is no
        associated argument parser.

Returns:
    The expanded SQL, list of referenced scripts, and list of
    referenced external tables.
def _read_dataframes_100k(path): import pandas ratings = pandas.read_table(os.path.join(path, "u.data"), names=['userId', 'movieId', 'rating', 'timestamp']) movies = pandas.read_csv(os.path.join(path, "u.item"), names=['movieId', 'title'], usecols=[0, 1], delimiter='|', encoding='ISO-8859-1') return ratings, movies
Reads in the MovieLens 100k dataset.
def construct_url(ip_address: str) -> str: if 'http://' not in ip_address and 'https://' not in ip_address: ip_address = '{}{}'.format('http://', ip_address) if ip_address[-1] == '/': ip_address = ip_address[:-1] return ip_address
Construct the URL with a given IP address.
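The normalization above can be sketched as a standalone function (a hypothetical copy of the method, not the library's own export): prepend a default `http://` scheme when none is present, then drop a single trailing slash.

```python
def construct_url(ip_address):
    # prepend a default scheme when none is present
    if 'http://' not in ip_address and 'https://' not in ip_address:
        ip_address = 'http://' + ip_address
    # drop a single trailing slash
    if ip_address.endswith('/'):
        ip_address = ip_address[:-1]
    return ip_address

print(construct_url('192.168.1.10/'))      # http://192.168.1.10
print(construct_url('https://example.com'))  # https://example.com
```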
def normalize_response(response, request=None): if isinstance(response, Response): return response if request is not None and not isinstance(request, Request): request = normalize_request(request) for normalizer in RESPONSE_NORMALIZERS: try: return normalizer(response, request=request) except TypeError: continue raise ValueError("Unable to normalize the provided response")
Given a response, normalize it to the internal Response class. This also involves normalizing the associated request object.
def pieces(self, piece_type: PieceType, color: Color) -> "SquareSet": return SquareSet(self.pieces_mask(piece_type, color))
Gets pieces of the given type and color. Returns a :class:`set of squares <chess.SquareSet>`.
def simulate(self): min_ = (-sys.maxsize - 1) if self._min is None else self._min max_ = sys.maxsize if self._max is None else self._max return random.randint(min_, max_)
Generates a random integer in the available range.
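A minimal standalone sketch of the clamping logic, with `min_`/`max_` made plain parameters for illustration (in the original they are instance attributes): unset bounds fall back to the platform's full integer range.

```python
import sys
import random

def simulate(min_=None, max_=None):
    # substitute the widest machine range for unset bounds
    lo = (-sys.maxsize - 1) if min_ is None else min_
    hi = sys.maxsize if max_ is None else max_
    return random.randint(lo, hi)

value = simulate(0, 10)  # always within [0, 10]
```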
def start_log(level=logging.DEBUG, filename=None): if filename is None: tstr = time.ctime() tstr = tstr.replace(' ', '.') tstr = tstr.replace(':', '.') filename = 'deblur.log.%s' % tstr logging.basicConfig(filename=filename, level=level, format='%(levelname)s(%(thread)d)' '%(asctime)s:%(message)s') logger = logging.getLogger(__name__) logger.info('*************************') logger.info('deblurring started')
start the logger for the run Parameters ---------- level : int, optional logging.DEBUG, logging.INFO etc. for the log level (between 0-50). filename : str, optional name of the filename to save the log to or None (default) to use deblur.log.TIMESTAMP
def fit_fn(distr, xvals, alpha, thresh): xvals = numpy.array(xvals) fit = fitfn_dict[distr](xvals, alpha, thresh) numpy.putmask(fit, xvals < thresh, 0.) return fit
The fitted function normalized to 1 above threshold To normalize to a given total count multiply by the count. Parameters ---------- xvals : sequence of floats Values where the function is to be evaluated alpha : float The fitted parameter thresh : float Threshold value applied to fitted values Returns ------- fit : array of floats Fitted function at the requested xvals
def layers(): global _cached_layers if _cached_layers is not None: return _cached_layers layers_module = tf.layers try: from tensorflow.python import tf2 if tf2.enabled(): tf.logging.info("Running in V2 mode, using Keras layers.") layers_module = tf.keras.layers except ImportError: pass _cached_layers = layers_module return layers_module
Get the layers module; works for both TF 1 and TF 2 for now.
def sha_hash(self) -> str: return hashlib.sha256(self.signed_raw().encode("ascii")).hexdigest().upper()
Return uppercase hex sha256 hash from signed raw document :return:
def _generate_ngrams_with_context_helper(ngrams_iter: iter, ngrams_len: int) -> map: return map(lambda term: (term[1], term[0], term[0] + ngrams_len), enumerate(ngrams_iter))
Maps each ngram to a (ngram, start, end) tuple, where end = start + ngrams_len.
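The same mapping written as a list comprehension (a simplified sketch; the original returns a lazy `map` object):

```python
def ngrams_with_context(ngrams, ngrams_len):
    # (ngram, start_token_index, end_token_index) for each ngram
    return [(ngram, i, i + ngrams_len) for i, ngram in enumerate(ngrams)]

ngrams_with_context(['the quick', 'quick fox'], 2)
# [('the quick', 0, 2), ('quick fox', 1, 3)]
```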
def crypto_config_from_kwargs(fallback, **kwargs): try: crypto_config = kwargs.pop("crypto_config") except KeyError: try: fallback_kwargs = {"table_name": kwargs["TableName"]} except KeyError: fallback_kwargs = {} crypto_config = fallback(**fallback_kwargs) return crypto_config, kwargs
Pull all encryption-specific parameters from the request and use them to build a crypto config. :returns: crypto config and updated kwargs :rtype: dynamodb_encryption_sdk.encrypted.CryptoConfig and dict
def notify(self): if self._notify is None: from twilio.rest.notify import Notify self._notify = Notify(self) return self._notify
Access the Notify Twilio Domain :returns: Notify Twilio Domain :rtype: twilio.rest.notify.Notify
def unhash(text, hashes): def retrieve_match(match): return hashes[match.group(0)] while re_hash.search(text): text = re_hash.sub(retrieve_match, text) text = re_pre_tag.sub(lambda m: re.sub('^' + m.group(1), '', m.group(0), flags=re.M), text) return text
Unhashes all hashed entities in the hashes dictionary. The pattern for hashes is defined by re_hash. After everything is unhashed, <pre> blocks are "pulled out" of whatever indentation level they used to be in (e.g. in a list).
def converter_loader(app, entry_points=None, modules=None): if entry_points: for entry_point in entry_points: for ep in pkg_resources.iter_entry_points(entry_point): try: app.url_map.converters[ep.name] = ep.load() except Exception: app.logger.error( 'Failed to initialize entry point: {0}'.format(ep)) raise if modules: app.url_map.converters.update(**modules)
Run default converter loader. :param entry_points: List of entry points providing URL map converters. :param modules: Map of converters. .. versionadded: 1.0.0
def deriv2(self, p): return self.power * (self.power - 1) * np.power(p, self.power - 2)
Second derivative of the power transform Parameters ---------- p : array-like Mean parameters Returns -------- g''(p) : array Second derivative of the power transform of `p` Notes ----- g''(`p`) = `power` * (`power` - 1) * `p`**(`power` - 2)
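The formula in the Notes section can be checked with a scalar sketch (`power` made a plain argument here; the original uses a NumPy power on arrays):

```python
def deriv2(p, power):
    # g''(p) = power * (power - 1) * p ** (power - 2)
    return power * (power - 1) * p ** (power - 2)

deriv2(2.0, 3)  # 3 * 2 * 2 = 12.0
deriv2(5.0, 2)  # 2 * 1 * 1 = 2.0 (constant for the square transform)
```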
def valid_loc(self,F=None): if F is not None: return [i for i,f in enumerate(F) if np.all(f < self.max_fit) and np.all(f >= 0)] else: return [i for i,f in enumerate(self.F) if np.all(f < self.max_fit) and np.all(f >= 0)]
returns the indices of individuals with valid fitness.
def makediagram(edges): graph = pydot.Dot(graph_type='digraph') nodes = edges2nodes(edges) epnodes = [(node, makeanode(node[0])) for node in nodes if nodetype(node)=="epnode"] endnodes = [(node, makeendnode(node[0])) for node in nodes if nodetype(node)=="EndNode"] epbr = [(node, makeabranch(node)) for node in nodes if not istuple(node)] nodedict = dict(epnodes + epbr + endnodes) for value in list(nodedict.values()): graph.add_node(value) for e1, e2 in edges: graph.add_edge(pydot.Edge(nodedict[e1], nodedict[e2])) return graph
make the diagram with the edges
def create_variable(self, varname, vtype=None): var_types = ('string', 'int', 'boolean', 'double') vname = varname var = None type_from_name = 'string' if ':' in varname: type_from_name, vname = varname.split(':') if type_from_name not in (var_types): type_from_name, vname = vname, type_from_name if type_from_name not in (var_types): raise Exception('Undefined variable type in "{0}"'.format(varname)) if vname in self.tkvariables: var = self.tkvariables[vname] else: if vtype is None: if type_from_name == 'int': var = tkinter.IntVar() elif type_from_name == 'boolean': var = tkinter.BooleanVar() elif type_from_name == 'double': var = tkinter.DoubleVar() else: var = tkinter.StringVar() else: var = vtype() self.tkvariables[vname] = var return var
Create a tk variable. If the variable was created previously return that instance.
def calculate_set_values(self): for ac in self.asset_classes: ac.alloc_value = self.total_amount * ac.allocation / Decimal(100)
Calculate the expected totals based on set allocations
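The per-class arithmetic reduces to `total * allocation / 100` in exact `Decimal` arithmetic. A standalone sketch with made-up class names and percentages:

```python
from decimal import Decimal

total = Decimal('10000')
# hypothetical allocations in percent
allocations = {'stocks': Decimal('60'), 'bonds': Decimal('40')}

values = {name: total * pct / Decimal(100)
          for name, pct in allocations.items()}
# values == {'stocks': Decimal('6000'), 'bonds': Decimal('4000')}
```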
def get(self, *, search, limit=0, headers=None): return self.transport.forward_request( method='GET', path=self.path, params={'search': search, 'limit': limit}, headers=headers )
Retrieves the assets that match a given text search string. Args: search (str): Text search string. limit (int): Limit the number of returned documents. Defaults to zero meaning that it returns all the matching assets. headers (dict): Optional headers to pass to the request. Returns: :obj:`list` of :obj:`dict`: List of assets that match the query.
def setEnable(self, status, wifiInterfaceId=1, timeout=1): namespace = Wifi.getServiceType("setEnable") + str(wifiInterfaceId) uri = self.getControlURL(namespace) if status: setStatus = 1 else: setStatus = 0 self.execute(uri, namespace, "SetEnable", timeout=timeout, NewEnable=setStatus)
Set enable status for a Wifi interface, be careful you don't cut yourself off. :param bool status: enable or disable the interface :param int wifiInterfaceId: the id of the Wifi interface :param float timeout: the timeout to wait for the action to be executed
def _with_primary(max_staleness, selection): primary = selection.primary sds = [] for s in selection.server_descriptions: if s.server_type == SERVER_TYPE.RSSecondary: staleness = ( (s.last_update_time - s.last_write_date) - (primary.last_update_time - primary.last_write_date) + selection.heartbeat_frequency) if staleness <= max_staleness: sds.append(s) else: sds.append(s) return selection.with_server_descriptions(sds)
Apply max_staleness, in seconds, to a Selection with a known primary.
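The staleness computed for each RS secondary can be sketched as plain arithmetic (timestamps below are invented; the formula mirrors the expression in the code):

```python
def secondary_staleness(s_update, s_write, p_update, p_write, heartbeat):
    # (secondary lag behind its own write date) minus (primary's lag)
    # plus one heartbeat interval of uncertainty, all in seconds
    return (s_update - s_write) - (p_update - p_write) + heartbeat

# secondary's write date trails the primary's by 30s, 10s heartbeat:
staleness = secondary_staleness(100.0, 60.0, 100.0, 90.0, 10.0)
# (100-60) - (100-90) + 10 = 40.0 seconds
```

A secondary is kept only when this value is at most `max_staleness`.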
def verify(obj, times=1, atleast=None, atmost=None, between=None, inorder=False): if isinstance(obj, str): obj = get_obj(obj) verification_fn = _get_wanted_verification( times=times, atleast=atleast, atmost=atmost, between=between) if inorder: verification_fn = verification.InOrder(verification_fn) theMock = _get_mock_or_raise(obj) class Verify(object): def __getattr__(self, method_name): return invocation.VerifiableInvocation( theMock, method_name, verification_fn) return Verify()
Central interface to verify interactions. `verify` uses a fluent interface:: verify(<obj>, times=2).<method_name>(<args>) `args` can be as concrete as necessary. Often a catch-all is enough, especially if you're working with strict mocks, bc they throw at call time on unwanted, unconfigured arguments:: from mockito import ANY, ARGS, KWARGS when(manager).add_tasks(1, 2, 3) ... # no need to duplicate the specification; every other argument pattern # would have raised anyway. verify(manager).add_tasks(1, 2, 3) # duplicates `when`call verify(manager).add_tasks(*ARGS) verify(manager).add_tasks(...) # Py3 verify(manager).add_tasks(Ellipsis) # Py2
def list_functions(region=None, key=None, keyid=None, profile=None): conn = _get_conn(region=region, key=key, keyid=keyid, profile=profile) ret = [] for funcs in __utils__['boto3.paged_call'](conn.list_functions): ret += funcs['Functions'] return ret
List all Lambda functions visible in the current scope. CLI Example: .. code-block:: bash salt myminion boto_lambda.list_functions
def push_build(id, tag_prefix): req = swagger_client.BuildRecordPushRequestRest() req.tag_prefix = tag_prefix req.build_record_id = id response = utils.checked_api_call(pnc_api.build_push, 'push', body=req) if response: return utils.format_json_list(response)
Push build to Brew
def get_processor_cpuid_leaf(self, cpu_id, leaf, sub_leaf): if not isinstance(cpu_id, baseinteger): raise TypeError("cpu_id can only be an instance of type baseinteger") if not isinstance(leaf, baseinteger): raise TypeError("leaf can only be an instance of type baseinteger") if not isinstance(sub_leaf, baseinteger): raise TypeError("sub_leaf can only be an instance of type baseinteger") (val_eax, val_ebx, val_ecx, val_edx) = self._call("getProcessorCPUIDLeaf", in_p=[cpu_id, leaf, sub_leaf]) return (val_eax, val_ebx, val_ecx, val_edx)
Returns the CPU cpuid information for the specified leaf. in cpu_id of type int Identifier of the CPU. The CPU most be online. The current implementation might not necessarily return the description for this exact CPU. in leaf of type int CPUID leaf index (eax). in sub_leaf of type int CPUID leaf sub index (ecx). This currently only applies to cache information on Intel CPUs. Use 0 if retrieving values for :py:func:`IMachine.set_cpuid_leaf` . out val_eax of type int CPUID leaf value for register eax. out val_ebx of type int CPUID leaf value for register ebx. out val_ecx of type int CPUID leaf value for register ecx. out val_edx of type int CPUID leaf value for register edx.
def page( self, title=None, pageid=None, auto_suggest=True, redirect=True, preload=False ): if (title is None or title.strip() == "") and pageid is None: raise ValueError("Either a title or a pageid must be specified") elif title: if auto_suggest: temp_title = self.suggest(title) if temp_title is None: raise PageError(title=title) else: title = temp_title return MediaWikiPage(self, title, redirect=redirect, preload=preload) else: return MediaWikiPage(self, pageid=pageid, preload=preload)
Get MediaWiki page based on the provided title or pageid Args: title (str): Page title pageid (int): MediaWiki page identifier auto-suggest (bool): **True:** Allow page title auto-suggest redirect (bool): **True:** Follow page redirects preload (bool): **True:** Load most page properties Raises: ValueError: when title is blank or None and no pageid is \ provided Raises: :py:func:`mediawiki.exceptions.PageError`: if page does \ not exist Note: Title takes precedence over pageid if both are provided
def _get_unit_data_from_expr(unit_expr, unit_symbol_lut): if isinstance(unit_expr, Number): if unit_expr is sympy_one: return (1.0, sympy_one) return (float(unit_expr), sympy_one) if isinstance(unit_expr, Symbol): return _lookup_unit_symbol(unit_expr.name, unit_symbol_lut) if isinstance(unit_expr, Pow): unit_data = _get_unit_data_from_expr(unit_expr.args[0], unit_symbol_lut) power = unit_expr.args[1] if isinstance(power, Symbol): raise UnitParseError("Invalid unit expression '%s'." % unit_expr) conv = float(unit_data[0] ** power) unit = unit_data[1] ** power return (conv, unit) if isinstance(unit_expr, Mul): base_value = 1.0 dimensions = 1 for expr in unit_expr.args: unit_data = _get_unit_data_from_expr(expr, unit_symbol_lut) base_value *= unit_data[0] dimensions *= unit_data[1] return (float(base_value), dimensions) raise UnitParseError( "Cannot parse for unit data from '%s'. Please supply" " an expression of only Unit, Symbol, Pow, and Mul" "objects." % str(unit_expr) )
Grabs the total base_value and dimensions from a valid unit expression. Parameters ---------- unit_expr: Unit object, or sympy Expr object The expression containing unit symbols. unit_symbol_lut: dict Provides the unit data for each valid unit symbol.
def decorate_method(wrapped): def wrapper(self): lines = Lines() if hasattr(self.model, wrapped.__name__): print(' . %s' % wrapped.__name__) lines.add(1, method_header(wrapped.__name__, nogil=True)) for line in wrapped(self): lines.add(2, line) return lines functools.update_wrapper(wrapper, wrapped) wrapper.__doc__ = 'Lines of model method %s.' % wrapped.__name__ return property(wrapper)
The decorated method will return a |Lines| object including a method header. However, the |Lines| object will be empty if the respective model does not implement a method with the same name as the wrapped method.
def delete(self, *args, **kwargs): hosted_zone = route53_backend.get_hosted_zone_by_name( self.hosted_zone_name) if not hosted_zone: hosted_zone = route53_backend.get_hosted_zone(self.hosted_zone_id) hosted_zone.delete_rrset_by_name(self.name)
Not exposed as part of the Route 53 API - used for CloudFormation. args are ignored
def create(self, name, address=None, enabled=True, balancing_mode='active', ipsec_vpn=True, nat_t=False, force_nat_t=False, dynamic=False, ike_phase1_id_type=None, ike_phase1_id_value=None): json = {'name': name, 'address': address, 'balancing_mode': balancing_mode, 'dynamic': dynamic, 'enabled': enabled, 'nat_t': nat_t, 'force_nat_t': force_nat_t, 'ipsec_vpn': ipsec_vpn} if dynamic: json.pop('address') json.update( ike_phase1_id_type=ike_phase1_id_type, ike_phase1_id_value=ike_phase1_id_value) return ElementCreator( self.__class__, href=self.href, json=json)
Create an external endpoint. Define common settings that specify the address, enabled, nat_t, name, etc. You can also omit the IP address if the endpoint is dynamic. In that case, you must also specify the ike_phase1 settings. :param str name: name of external endpoint :param str address: address of remote host :param bool enabled: True|False (default: True) :param str balancing_mode: active :param bool ipsec_vpn: True|False (default: True) :param bool nat_t: True|False (default: False) :param bool force_nat_t: True|False (default: False) :param bool dynamic: is a dynamic VPN (default: False) :param int ike_phase1_id_type: If using a dynamic endpoint, you must set this value. Valid options: 0=DNS name, 1=Email, 2=DN, 3=IP Address :param str ike_phase1_id_value: value of ike_phase1_id. Required if ike_phase1_id_type and dynamic set. :raises CreateElementFailed: create element with reason :return: newly created element :rtype: ExternalEndpoint
def _read(self, fd, mask): try: if select.select([fd],[],[],0)[0]: snew = os.read(fd, self.nbytes) if PY3K: snew = snew.decode('ascii','replace') self.value.append(snew) self.nbytes -= len(snew) else: snew = '' if (self.nbytes <= 0 or len(snew) == 0) and self.widget: self.widget.quit() except OSError: raise IOError("Error reading from %s" % (fd,))
Read waiting data and terminate Tk mainloop if done
def get_context_data(self): buffer_name = self['buffer_name'].get_value() structure = CreateContextName.get_response_structure(buffer_name) if structure: structure.unpack(self['buffer_data'].get_value()) return structure else: return self['buffer_data'].get_value()
Get the buffer_data value of a context response and try to convert it to the relevant structure based on the buffer_name used. If it is an unknown structure then the raw bytes are returned. :return: relevant Structure of buffer_data or bytes if unknown name
def find_function_by_name(self, name): cfg_rv = None for cfg in self._cfgs: if cfg.name == name: cfg_rv = cfg break return cfg_rv
Return the cfg of the requested function by name.
def new_method_call(celf, destination, path, iface, method):
    "creates a new DBUS.MESSAGE_TYPE_METHOD_CALL message."
    result = dbus.dbus_message_new_method_call(
        destination.encode() if destination is not None else None,
        path.encode(),
        iface.encode() if iface is not None else None,
        method.encode(),
    )
    if result is None:
        raise CallFailed("dbus_message_new_method_call")
    return celf(result)
creates a new DBUS.MESSAGE_TYPE_METHOD_CALL message.
def setup_logging(verbose=0, colors=False, name=None): root_logger = logging.getLogger(name) root_logger.setLevel(logging.DEBUG if verbose > 0 else logging.INFO) formatter = ColorFormatter(verbose > 0, colors) if colors: colorclass.Windows.enable() handler_stdout = logging.StreamHandler(sys.stdout) handler_stdout.setFormatter(formatter) handler_stdout.setLevel(logging.DEBUG) handler_stdout.addFilter(type('', (logging.Filter,), {'filter': staticmethod(lambda r: r.levelno <= logging.INFO)})) root_logger.addHandler(handler_stdout) handler_stderr = logging.StreamHandler(sys.stderr) handler_stderr.setFormatter(formatter) handler_stderr.setLevel(logging.WARNING) root_logger.addHandler(handler_stderr)
Configure console logging. Info and below go to stdout, others go to stderr. :param int verbose: Verbosity level. > 0 print debug statements. > 1 passed to sphinx-build. :param bool colors: Print color text in non-verbose mode. :param str name: Which logger name to set handlers to. Used for testing.
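The stdout/stderr split hinges on the inline filter attached to the stdout handler: records above INFO are rejected there and left to the stderr handler. A standalone sketch of just that filter (the `LogRecord` objects are constructed by hand for illustration):

```python
import logging
import sys

handler = logging.StreamHandler(sys.stdout)
# anonymous Filter subclass whose static filter() passes INFO and below
handler.addFilter(type('', (logging.Filter,),
                       {'filter': staticmethod(lambda r: r.levelno <= logging.INFO)}))

info = logging.LogRecord('demo', logging.INFO, __file__, 0, 'ok', None, None)
warn = logging.LogRecord('demo', logging.WARNING, __file__, 0, 'uh oh', None, None)

handler.filter(info)  # truthy  -> emitted on stdout
handler.filter(warn)  # falsey  -> left for the stderr handler
```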
def check_constraint(self, pkge=None, constr=None):
    if pkge is not None:
        return javabridge.call(
            self.jobject, "checkConstraint",
            "(Lweka/core/packageManagement/Package;)Z", pkge.jobject)
    if constr is not None:
        return javabridge.call(
            self.jobject, "checkConstraint",
            "(Lweka/core/packageManagement/PackageConstraint;)Z", constr.jobject)
    raise Exception("Either package or package constraint must be provided!")
Checks the constraints. :param pkge: the package to check :type pkge: Package :param constr: the package constraint to check :type constr: PackageConstraint
def initialTrendSmoothingFactors(self, timeSeries): result = 0.0 seasonLength = self.get_parameter("seasonLength") k = min(len(timeSeries) - seasonLength, seasonLength) for i in xrange(0, k): result += (timeSeries[seasonLength + i][1] - timeSeries[i][1]) / seasonLength return result / k
Calculate the initial Trend smoothing Factor b0. Explanation: http://en.wikipedia.org/wiki/Exponential_smoothing#Triple_exponential_smoothing :return: Returns the initial trend smoothing factor b0
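The b0 formula averages, over k seasons, the per-step change between corresponding points one season apart. A standalone sketch over plain values (the original indexes `[1]` of (time, value) pairs):

```python
def initial_trend(values, season_length):
    # b0 = (1/k) * sum over i of (y[L+i] - y[i]) / L, with L = season_length
    k = min(len(values) - season_length, season_length)
    return sum((values[season_length + i] - values[i]) / season_length
               for i in range(k)) / k

# a linearly increasing series has a constant trend of 1.0 per step
initial_trend([0, 1, 2, 3, 4, 5, 6, 7], 4)  # 1.0
```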
def ordered_load(stream, Loader=yaml.Loader, object_pairs_hook=OrderedDict): class OrderedLoader(Loader): pass def construct_mapping(loader, node): loader.flatten_mapping(node) return object_pairs_hook(loader.construct_pairs(node)) OrderedLoader.add_constructor( yaml.resolver.BaseResolver.DEFAULT_MAPPING_TAG, construct_mapping) return yaml.load(stream, OrderedLoader)
Loads a YAML stream into an ordered dict, preserving key order :param stream: the name of the stream :param Loader: the yaml loader (such as yaml.SafeLoader) :param object_pairs_hook: the ordered dict class to construct mappings with
def new(self): if self._initialized: raise pycdlibexception.PyCdlibInternalError('CE record already initialized!') self.bl_cont_area = 0 self.offset_cont_area = 0 self.len_cont_area = 0 self._initialized = True
Create a new Rock Ridge Continuation Entry record. Parameters: None. Returns: Nothing.
def extract_finditer(pos_seq, regex=SimpleNP): ss = coarse_tag_str(pos_seq) def gen(): for m in re.finditer(regex, ss): yield (m.start(), m.end()) return list(gen())
The "GreedyFSA" method in Handler et al. 2016. Returns token position spans of valid ngrams.
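The span extraction itself is just `re.finditer` over the coarse tag string; a generic standalone sketch (applied to digits here rather than a POS-tag pattern):

```python
import re

def match_spans(pattern, s):
    # (start, end) character offsets of each non-overlapping match
    return [(m.start(), m.end()) for m in re.finditer(pattern, s)]

match_spans(r'\d+', 'a12b345')  # [(1, 3), (4, 7)]
```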
def _ircounts2radiance(counts, scale, offset): rad = (counts - offset) / scale return rad.clip(min=0)
Convert IR counts to radiance Reference: [IR]. Args: counts: Raw detector counts scale: Scale [mW-1 m2 cm sr] offset: Offset [1] Returns: Radiance [mW m-2 cm-1 sr-1]
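The conversion is a linear rescale with negative radiances clipped to zero. A pure-Python sketch (the original operates element-wise on NumPy arrays via `.clip`):

```python
def ir_counts_to_radiance(counts, scale, offset):
    # radiance = (counts - offset) / scale, floored at zero
    return [max((c - offset) / scale, 0.0) for c in counts]

ir_counts_to_radiance([100.0, 40.0], scale=2.0, offset=50.0)
# [25.0, 0.0] -- the negative value is clipped
```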
def conditional_expected_number_of_purchases_up_to_time(self, t, frequency, recency, T): x = frequency r, alpha, a, b = self._unload_params("r", "alpha", "a", "b") hyp_term = hyp2f1(r + x, b + x + 1, a + b + x, t / (alpha + T + t)) first_term = (a + b + x) / (a - 1) second_term = 1 - hyp_term * ((alpha + T) / (alpha + t + T)) ** (r + x) numerator = first_term * second_term denominator = 1 + (a / (b + x)) * ((alpha + T) / (alpha + recency)) ** (r + x) return numerator / denominator
Conditional expected number of repeat purchases up to time t. Calculate the expected number of repeat purchases up to time t for a randomly choose individual from the population, given they have purchase history (frequency, recency, T) See Wagner, U. and Hoppe D. (2008). Parameters ---------- t: array_like times to calculate the expectation for. frequency: array_like historical frequency of customer. recency: array_like historical recency of customer. T: array_like age of the customer. Returns ------- array_like
def post(self, request, *args, **kwargs): queryset = self.get_selected(request) if request.POST.get('modify'): response = self.process_action(request, queryset) if not response: url = self.get_done_url() return self.render(request, redirect_url=url) else: return response else: return self.render(request, redirect_url=request.build_absolute_uri())
Method for handling POST requests. Checks for a modify confirmation and performs the action by calling `process_action`.
def set_status(self, status_code: int, reason: str = None) -> None: self._status_code = status_code if reason is not None: self._reason = escape.native_str(reason) else: self._reason = httputil.responses.get(status_code, "Unknown")
Sets the status code for our response. :arg int status_code: Response status code. :arg str reason: Human-readable reason phrase describing the status code. If ``None``, it will be filled in from `http.client.responses` or "Unknown". .. versionchanged:: 5.0 No longer validates that the response code is in `http.client.responses`.
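The reason-phrase fallback can be reproduced with the standard library's status-code table (the original consults Tornado's `httputil.responses`, which mirrors `http.client.responses`):

```python
import http.client

def default_reason(status_code):
    # unknown/nonstandard codes fall back to "Unknown"
    return http.client.responses.get(status_code, "Unknown")

default_reason(404)  # 'Not Found'
default_reason(599)  # 'Unknown'
```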
def _notify_remove_at(self, index, length=1): slice_ = self._slice_at(index, length) self._notify_remove(slice_)
Notify about a RemoveChange at a certain index and length.
def tube(self, radius=None, scalars=None, capping=True, n_sides=20, radius_factor=10, preference='point', inplace=False): if n_sides < 3: n_sides = 3 tube = vtk.vtkTubeFilter() tube.SetInputDataObject(self) tube.SetCapping(capping) if radius is not None: tube.SetRadius(radius) tube.SetNumberOfSides(n_sides) tube.SetRadiusFactor(radius_factor) if scalars is not None: if not isinstance(scalars, str): raise TypeError('Scalar array must be given as a string name') _, field = self.get_scalar(scalars, preference=preference, info=True) tube.SetInputArrayToProcess(0, 0, 0, field, scalars) tube.SetVaryRadiusToVaryRadiusByScalar() tube.Update() mesh = _get_output(tube) if inplace: self.overwrite(mesh) else: return mesh
Generate a tube around each input line. The radius of the tube can be set to linearly vary with a scalar value. Parameters ---------- radius : float Minimum tube radius (minimum because the tube radius may vary). scalars : str, optional Scalar array by which the radius varies capping : bool Turn on/off whether to cap the ends with polygons. Default True. n_sides : int Set the number of sides for the tube. Minimum of 3. radius_factor : float Maximum tube radius in terms of a multiple of the minimum radius. preference : str The field preference when searching for the scalar array by name inplace : bool, optional Updates mesh in-place while returning nothing. Returns ------- mesh : vtki.PolyData Tube-filtered mesh. None when inplace=True.
def finalize_episode(self, params): total_reward = sum(self.intermediate_rewards) self.total_rewards.append(total_reward) self.params.append(params) self.intermediate_rewards = []
Closes the current episode, sums up rewards and stores the parameters # Argument params (object): Parameters associated with the episode to be stored and then retrieved back in sample()
def check_rotation(rotation):
    if rotation not in ALLOWED_ROTATION:
        allowed_rotation = ', '.join(ALLOWED_ROTATION)
        raise UnsupportedRotation('Rotation %s is not allowed. Allowed are %s'
                                  % (rotation, allowed_rotation))
Checks the rotation parameter; raises UnsupportedRotation if the value is illegal.
def set_state(self, newState, timer=0): if _debug: ServerSSM._debug("set_state %r (%s) timer=%r", newState, SSM.transactionLabels[newState], timer) SSM.set_state(self, newState, timer) if (newState == COMPLETED) or (newState == ABORTED): if _debug: ServerSSM._debug(" - remove from active transactions") self.ssmSAP.serverTransactions.remove(self) if self.device_info: if _debug: ClientSSM._debug(" - release device information") self.ssmSAP.deviceInfoCache.release(self.device_info)
This function is called when the client wants to change state.