def longitude(self, longitude):
    """Setter for longitude."""
    if not (-180 <= longitude <= 180):
        raise ValueError('longitude was {}, but has to be in [-180, 180]'
                         .format(longitude))
    self._longitude = longitude
Setter for longitude.
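A minimal usage sketch of this setter as part of a Python property. The `Point` class name and the getter are assumptions for illustration; only the setter body comes from the source.

```python
# Hypothetical Point class wrapping the longitude setter from the source
# in a property, so that assignment goes through the range check.
class Point:
    def __init__(self, longitude=0.0):
        self.longitude = longitude  # routed through the setter below

    @property
    def longitude(self):
        """Getter for longitude (assumed counterpart, not in the source)."""
        return self._longitude

    @longitude.setter
    def longitude(self, longitude):
        """Setter for longitude."""
        if not (-180 <= longitude <= 180):
            raise ValueError('longitude was {}, but has to be in [-180, 180]'
                             .format(longitude))
        self._longitude = longitude

p = Point(12.5)
print(p.longitude)  # 12.5
```

Assigning an out-of-range value (e.g. `Point(200)`) raises the `ValueError` from the setter.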
def edit_config_input_default_operation(self, **kwargs):
    """Auto Generated Code
    """
    config = ET.Element("config")
    edit_config = ET.Element("edit_config")
    config = edit_config
    input = ET.SubElement(edit_config, "input")
    default_operation = ET.SubElement(input, "default-operation")
    default_operation.text = kwargs.pop('default_operation')

    callback = kwargs.pop('callback', self._callback)
    return callback(config)
Auto Generated Code
def bootstrap_results(self, init_state=None, transformed_init_state=None):
  """Returns an object with the same type as returned by `one_step`.

  Unlike other `TransitionKernel`s,
  `TransformedTransitionKernel.bootstrap_results` has the option of
  initializing the `TransformedTransitionKernelResults` from either an
  initial state, e.g., requiring computing `bijector.inverse(init_state)`,
  or directly from `transformed_init_state`, i.e., a `Tensor` or list of
  `Tensor`s which is interpreted as the `bijector.inverse` transformed state.

  Args:
    init_state: `Tensor` or Python `list` of `Tensor`s representing the
      state(s) of the Markov chain(s). Must specify `init_state` or
      `transformed_init_state` but not both.
    transformed_init_state: `Tensor` or Python `list` of `Tensor`s
      representing the state(s) of the Markov chain(s). Must specify
      `init_state` or `transformed_init_state` but not both.

  Returns:
    kernel_results: A (possibly nested) `tuple`, `namedtuple` or `list` of
      `Tensor`s representing internal calculations made within this
      function.

  Raises:
    ValueError: if `inner_kernel` results doesn't contain the member
      "target_log_prob".

  #### Examples

  To use `transformed_init_state` in context of `tfp.mcmc.sample_chain`,
  you need to explicitly pass the `previous_kernel_results`, e.g.,

  ```python
  transformed_kernel = tfp.mcmc.TransformedTransitionKernel(...)
  init_state = ...              # Doesn't matter.
  transformed_init_state = ...  # Does matter.
  results, _ = tfp.mcmc.sample_chain(
      num_results=...,
      current_state=init_state,
      previous_kernel_results=transformed_kernel.bootstrap_results(
          transformed_init_state=transformed_init_state),
      kernel=transformed_kernel)
  ```
  """
  if (init_state is None) == (transformed_init_state is None):
    raise ValueError('Must specify exactly one of `init_state` '
                     'or `transformed_init_state`.')
  with tf.compat.v1.name_scope(
      name=mcmc_util.make_name(self.name, 'transformed_kernel',
                               'bootstrap_results'),
      values=[init_state, transformed_init_state]):
    if transformed_init_state is None:
      init_state_parts = (init_state
                          if mcmc_util.is_list_like(init_state)
                          else [init_state])
      transformed_init_state_parts = self._inverse_transform(init_state_parts)
      transformed_init_state = (
          transformed_init_state_parts
          if mcmc_util.is_list_like(init_state)
          else transformed_init_state_parts[0])
    else:
      if mcmc_util.is_list_like(transformed_init_state):
        transformed_init_state = [
            tf.convert_to_tensor(value=s, name='transformed_init_state')
            for s in transformed_init_state
        ]
      else:
        transformed_init_state = tf.convert_to_tensor(
            value=transformed_init_state, name='transformed_init_state')
    kernel_results = TransformedTransitionKernelResults(
        transformed_state=transformed_init_state,
        inner_results=self._inner_kernel.bootstrap_results(
            transformed_init_state))
    return kernel_results
Returns an object with the same type as returned by `one_step`.

Unlike other `TransitionKernel`s,
`TransformedTransitionKernel.bootstrap_results` has the option of
initializing the `TransformedTransitionKernelResults` from either an
initial state, e.g., requiring computing `bijector.inverse(init_state)`,
or directly from `transformed_init_state`, i.e., a `Tensor` or list of
`Tensor`s which is interpreted as the `bijector.inverse` transformed state.

Args:
  init_state: `Tensor` or Python `list` of `Tensor`s representing the
    state(s) of the Markov chain(s). Must specify `init_state` or
    `transformed_init_state` but not both.
  transformed_init_state: `Tensor` or Python `list` of `Tensor`s
    representing the state(s) of the Markov chain(s). Must specify
    `init_state` or `transformed_init_state` but not both.

Returns:
  kernel_results: A (possibly nested) `tuple`, `namedtuple` or `list` of
    `Tensor`s representing internal calculations made within this
    function.

Raises:
  ValueError: if `inner_kernel` results doesn't contain the member
    "target_log_prob".

#### Examples

To use `transformed_init_state` in context of `tfp.mcmc.sample_chain`,
you need to explicitly pass the `previous_kernel_results`, e.g.,

```python
transformed_kernel = tfp.mcmc.TransformedTransitionKernel(...)
init_state = ...              # Doesn't matter.
transformed_init_state = ...  # Does matter.
results, _ = tfp.mcmc.sample_chain(
    num_results=...,
    current_state=init_state,
    previous_kernel_results=transformed_kernel.bootstrap_results(
        transformed_init_state=transformed_init_state),
    kernel=transformed_kernel)
```
def data_received(self, data):
    """
    Gets called when new data is received on the serial interface.
    Perform line buffering and call line_received() with complete lines.
    """
    # DIY line buffering...
    newline = b'\r\n'
    eot = b'\x04'
    self._readbuf += data
    while newline in self._readbuf:
        line, _, self._readbuf = self._readbuf.partition(newline)
        if line:
            if eot in line:
                # Discard everything before EOT
                _, _, line = line.partition(eot)
            try:
                decoded = line.decode('ascii')
            except UnicodeDecodeError:
                _LOGGER.debug("Invalid data received, ignoring...")
                return
            self.line_received(decoded)
Gets called when new data is received on the serial interface. Perform line buffering and call line_received() with complete lines.
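A hedged, standalone sketch of the same buffering logic, detached from the class: split a byte buffer on CRLF, discard anything before an EOT marker, and return the complete lines plus the unconsumed remainder. The `split_lines` name is an assumption; unlike the original, a decode error here just skips the line instead of returning early.

```python
# Standalone sketch of the CRLF line-buffering logic from data_received().
NEWLINE = b'\r\n'
EOT = b'\x04'

def split_lines(buf):
    """Return (complete_lines, remaining_buffer) for a byte buffer."""
    lines = []
    while NEWLINE in buf:
        line, _, buf = buf.partition(NEWLINE)
        if line:
            if EOT in line:
                # Discard everything before EOT, as the original does
                _, _, line = line.partition(EOT)
            try:
                lines.append(line.decode('ascii'))
            except UnicodeDecodeError:
                pass  # the original logs and returns early; we just skip
    return lines, buf

lines, rest = split_lines(b'hello\r\njunk\x04world\r\npart')
print(lines, rest)  # ['hello', 'world'] b'part'
```

The trailing `b'part'` has no CRLF yet, so it stays buffered until more data arrives, matching how `self._readbuf` accumulates across calls.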
def setOverlayName(self, ulOverlayHandle, pchName):
    """set the name to use for this overlay"""
    fn = self.function_table.setOverlayName
    result = fn(ulOverlayHandle, pchName)
    return result
set the name to use for this overlay
def check_outdated(package, version):
    """
    Given the name of a package on PyPI and a version (both strings),
    checks if the given version is the latest version of the package
    available.

    Returns a 2-tuple (is_outdated, latest_version) where is_outdated is
    a boolean which is True if the given version is earlier than the
    latest version, which is the string latest_version.

    Attempts to cache on disk the HTTP call it makes for 24 hours. If
    this somehow fails the exception is converted to a warning
    (OutdatedCacheFailedWarning) and the function continues normally.
    """
    from pkg_resources import parse_version

    parsed_version = parse_version(version)
    latest = None

    with utils.cache_file(package, 'r') as f:
        content = f.read()
        if content:  # in case cache_file fails and so f is a dummy file
            latest, cache_dt = json.loads(content)
            if not utils.cache_is_valid(cache_dt):
                latest = None

    def get_latest():
        url = 'https://pypi.python.org/pypi/%s/json' % package
        response = utils.get_url(url)
        return json.loads(response)['info']['version']

    if latest is None:
        latest = get_latest()

    parsed_latest = parse_version(latest)

    if parsed_version > parsed_latest:
        # Probably a stale cached value
        latest = get_latest()
        parsed_latest = parse_version(latest)
        if parsed_version > parsed_latest:
            raise ValueError('Version %s is greater than the latest version on PyPI: %s'
                             % (version, latest))

    is_latest = parsed_version == parsed_latest
    assert is_latest or parsed_version < parsed_latest

    with utils.cache_file(package, 'w') as f:
        data = [latest, utils.format_date(datetime.now())]
        json.dump(data, f)

    return not is_latest, latest
Given the name of a package on PyPI and a version (both strings), checks if the given version is the latest version of the package available. Returns a 2-tuple (is_outdated, latest_version) where is_outdated is a boolean which is True if the given version is earlier than the latest version, which is the string latest_version. Attempts to cache on disk the HTTP call it makes for 24 hours. If this somehow fails the exception is converted to a warning (OutdatedCacheFailedWarning) and the function continues normally.
def convert_inputs_to_sparse_if_necessary(lhs, rhs):
    '''
    This function checks to see if a sparse output is desirable given
    the inputs and, if so, casts the inputs to sparse in order to make it so.
    '''
    if not sp.issparse(lhs) or not sp.issparse(rhs):
        if sparse_is_desireable(lhs, rhs):
            if not sp.issparse(lhs):
                lhs = sp.csc_matrix(lhs)
                # print "converting lhs into sparse matrix"
            if not sp.issparse(rhs):
                rhs = sp.csc_matrix(rhs)
                # print "converting rhs into sparse matrix"
    return lhs, rhs
This function checks to see if a sparse output is desirable given the inputs and, if so, casts the inputs to sparse in order to make it so.
def parse(self):
    """Parse process.

    :return: A list of discovered process descriptors
    """
    root = ast.parse(self._source)
    visitor = ProcessVisitor(source=self._source)
    visitor.visit(root)
    return visitor.processes
Parse process. :return: A list of discovered process descriptors
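The `ProcessVisitor` class is not shown in the source, so here is a minimal sketch of the same `ast.parse` + `NodeVisitor` pattern, collecting function names instead of process descriptors. `FunctionCollector` is a hypothetical stand-in.

```python
import ast

# Hypothetical stand-in for ProcessVisitor: walk the parsed module and
# collect the names of all top-level function definitions it encounters.
class FunctionCollector(ast.NodeVisitor):
    def __init__(self):
        self.functions = []

    def visit_FunctionDef(self, node):
        self.functions.append(node.name)
        self.generic_visit(node)  # keep walking into nested definitions

source = "def foo():\n    pass\n\ndef bar():\n    pass\n"
root = ast.parse(source)
visitor = FunctionCollector()
visitor.visit(root)
print(visitor.functions)  # ['foo', 'bar']
```

The real visitor would presumably override other `visit_*` methods to build the process descriptors it returns via `visitor.processes`.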
def defrag(filt, threshold=3, mode='include'):
    """
    'Defragment' a filter.

    Parameters
    ----------
    filt : boolean array
        A filter
    threshold : int
        Runs of consecutive values whose length is equal to or below this
        threshold are considered fragments, and will be removed.
    mode : str
        Whether to change False fragments to True ('include') or True
        fragments to False ('exclude')

    Returns
    -------
    defragmented filter : boolean array
    """
    if bool_2_indices(filt) is None:
        return filt

    if mode == 'include':
        inds = bool_2_indices(~filt) + 1
        rep = True
    if mode == 'exclude':
        inds = bool_2_indices(filt) + 1
        rep = False

    rem = (np.diff(inds) <= threshold)[:, 0]

    cfilt = filt.copy()
    if any(rem):
        for lo, hi in inds[rem]:
            cfilt[lo:hi] = rep
    return cfilt
'Defragment' a filter.

Parameters
----------
filt : boolean array
    A filter
threshold : int
    Runs of consecutive values whose length is equal to or below this
    threshold are considered fragments, and will be removed.
mode : str
    Whether to change False fragments to True ('include') or True
    fragments to False ('exclude')

Returns
-------
defragmented filter : boolean array
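A pure-Python sketch of the defrag idea (the original relies on NumPy and a `bool_2_indices` helper that is not shown): flip any run of "fragment" values no longer than `threshold`. `defrag_simple` is a hypothetical name, and edge runs are treated like interior ones here, which may differ from the original's behavior.

```python
# Simplified, NumPy-free sketch of defrag(): short runs of False become
# True in 'include' mode (and short runs of True become False in
# 'exclude' mode), while longer runs are left untouched.
def defrag_simple(filt, threshold=3, mode='include'):
    frag = False if mode == 'include' else True
    out = list(filt)
    i = 0
    while i < len(out):
        if out[i] == frag:
            j = i
            while j < len(out) and out[j] == frag:
                j += 1  # scan to the end of this run
            if j - i <= threshold:
                for k in range(i, j):
                    out[k] = not frag  # flip the short fragment
            i = j
        else:
            i += 1
    return out

print(defrag_simple([True, False, False, True], threshold=3))
# [True, True, True, True]
```

A run of five `False` values with `threshold=3` would survive unchanged, since only runs at or below the threshold count as fragments.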
def split_pred_string(predstr):
    """
    Split *predstr* and return the (lemma, pos, sense, suffix) components.

    Examples:
        >>> Pred.split_pred_string('_dog_n_1_rel')
        ('dog', 'n', '1', 'rel')
        >>> Pred.split_pred_string('quant_rel')
        ('quant', None, None, 'rel')
    """
    predstr = predstr.strip('"\'')  # surrounding quotes don't matter
    rel_added = False
    if not predstr.lower().endswith('_rel'):
        logging.debug('Predicate does not end in "_rel": {}'
                      .format(predstr))
        rel_added = True
        predstr += '_rel'
    match = Pred.pred_re.search(predstr)
    if match is None:
        logging.debug('Unexpected predicate string: {}'.format(predstr))
        return (predstr, None, None, None)
    # _lemma_pos(_sense)?_end
    return (match.group('lemma'), match.group('pos'),
            match.group('sense'),
            None if rel_added else match.group('end'))
Split *predstr* and return the (lemma, pos, sense, suffix) components. Examples: >>> Pred.split_pred_string('_dog_n_1_rel') ('dog', 'n', '1', 'rel') >>> Pred.split_pred_string('quant_rel') ('quant', None, None, 'rel')
def process_post(self):
    """
    Analyse the POST request:

        * check that the LoginTicket is valid
        * check that the user submitted credentials are valid

    :return:
        * :attr:`INVALID_LOGIN_TICKET` if the POSTed LoginTicket is not valid
        * :attr:`USER_ALREADY_LOGGED` if the user is already logged in and
          did not request reauthentication.
        * :attr:`USER_LOGIN_FAILURE` if the user is not logged in or requests
          reauthentication and his credentials are not valid
        * :attr:`USER_LOGIN_OK` if the user is not logged in or requests
          reauthentication and his credentials are valid
    :rtype: int
    """
    if not self.check_lt():
        self.init_form(self.request.POST)
        logger.warning("Received an invalid login ticket")
        return self.INVALID_LOGIN_TICKET
    elif not self.request.session.get("authenticated") or self.renew:
        # authentication request received, initialize the form to use
        self.init_form(self.request.POST)
        if self.form.is_valid():
            self.request.session.set_expiry(0)
            self.request.session["username"] = self.form.cleaned_data['username']
            self.request.session["warn"] = True if self.form.cleaned_data.get("warn") else False
            self.request.session["authenticated"] = True
            self.renewed = True
            self.warned = True
            logger.info("User %s successfully authenticated" % self.request.session["username"])
            return self.USER_LOGIN_OK
        else:
            logger.warning("A login attempt failed")
            return self.USER_LOGIN_FAILURE
    else:
        logger.warning("Received a login attempt for an already-active user")
        return self.USER_ALREADY_LOGGED
Analyse the POST request:

* check that the LoginTicket is valid
* check that the user submitted credentials are valid

:return:
    * :attr:`INVALID_LOGIN_TICKET` if the POSTed LoginTicket is not valid
    * :attr:`USER_ALREADY_LOGGED` if the user is already logged in and did
      not request reauthentication.
    * :attr:`USER_LOGIN_FAILURE` if the user is not logged in or requests
      reauthentication and his credentials are not valid
    * :attr:`USER_LOGIN_OK` if the user is not logged in or requests
      reauthentication and his credentials are valid
:rtype: int
def to_rst_(self) -> str:
    """Convert the main dataframe to restructured text

    :return: rst data
    :rtype: str

    :example: ``ds.to_rst_()``
    """
    try:
        renderer = pytablewriter.RstGridTableWriter
        data = self._build_export(renderer)
        return data
    except Exception as e:
        self.err(e, "Can not convert data to restructured text")
Convert the main dataframe to restructured text :return: rst data :rtype: str :example: ``ds.to_rst_()``
def percentage(a, b, precision=1, mode=0):
    """
    >>> percentage(100, 200)
    '100 of 200 (50.0%)'
    """
    _a, _b = a, b
    pct = "{0:.{1}f}%".format(a * 100. / b, precision)
    a, b = thousands(a), thousands(b)
    if mode == 0:
        return "{0} of {1} ({2})".format(a, b, pct)
    elif mode == 1:
        return "{0} ({1})".format(a, pct)
    elif mode == 2:
        return _a * 100. / _b
    return pct
>>> percentage(100, 200) '100 of 200 (50.0%)'
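A runnable usage sketch of `percentage()` for the other modes. The `thousands` helper it depends on is not shown in the source, so a minimal comma-grouping stand-in is assumed here.

```python
# Hypothetical stand-in for the real `thousands` helper from the source.
def thousands(n):
    return "{:,}".format(n)

def percentage(a, b, precision=1, mode=0):
    """
    >>> percentage(100, 200)
    '100 of 200 (50.0%)'
    """
    _a, _b = a, b
    pct = "{0:.{1}f}%".format(a * 100. / b, precision)
    a, b = thousands(a), thousands(b)
    if mode == 0:
        return "{0} of {1} ({2})".format(a, b, pct)
    elif mode == 1:
        return "{0} ({1})".format(a, pct)
    elif mode == 2:
        return _a * 100. / _b
    return pct

print(percentage(1500, 6000))          # 1,500 of 6,000 (25.0%)
print(percentage(1500, 6000, mode=1))  # 1,500 (25.0%)
print(percentage(1500, 6000, mode=2))  # 25.0
```

Mode 0 gives the full "a of b (pct)" string, mode 1 drops the denominator, and mode 2 returns the raw float rather than a formatted string.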
def subprocess_func(func, pipe, logger, mem_in_mb, cpu_time_limit_in_s,
                    wall_time_limit_in_s, num_procs, grace_period_in_s,
                    tmp_dir, *args, **kwargs):
    # simple signal handler to catch the signals for time limits
    def handler(signum, frame):
        # logs message with level debug on this logger
        logger.debug("signal handler: %i" % signum)
        if signum == signal.SIGXCPU:
            # when the process reaches the soft limit, a SIGXCPU signal is
            # sent (it normally terminates the process)
            raise CpuTimeoutException
        elif signum == signal.SIGALRM:
            # SIGALRM is sent to the process when the specified time limit
            # to an alarm function elapses (when real or clock time elapses)
            logger.debug("timeout")
            raise TimeoutException
        raise AnythingException

    # temporary directory to store stdout and stderr
    if tmp_dir is not None:
        logger.debug('Redirecting output of the function to files. Access them '
                     'via the stdout and stderr attributes of the wrapped function.')
        stdout = open(os.path.join(tmp_dir, 'std.out'), 'a', buffering=1)
        sys.stdout = stdout
        stderr = open(os.path.join(tmp_dir, 'std.err'), 'a', buffering=1)
        sys.stderr = stderr

    # catching all signals at this point turned out to interfere with the
    # subprocess (e.g. using ROS)
    signal.signal(signal.SIGALRM, handler)
    signal.signal(signal.SIGXCPU, handler)
    signal.signal(signal.SIGQUIT, handler)

    # code to catch EVERY catchable signal (even X11 related ones ...)
    # only use for debugging/testing as this seems to be too intrusive.
    """
    for i in [x for x in dir(signal) if x.startswith("SIG")]:
        try:
            signum = getattr(signal, i)
            print("register {}, {}".format(signum, i))
            signal.signal(signum, handler)
        except:
            print("Skipping %s" % i)
    """

    # set the memory limit
    if mem_in_mb is not None:
        # megabyte --> byte
        mem_in_b = mem_in_mb * 1024 * 1024
        # the maximum area (in bytes) of address space which may be taken
        # by the process.
        resource.setrlimit(resource.RLIMIT_AS, (mem_in_b, mem_in_b))

    # for now: don't allow the function to spawn subprocesses itself.
    # resource.setrlimit(resource.RLIMIT_NPROC, (1, 1))
    # Turns out, this is quite restrictive, so we don't use this option
    # by default
    if num_procs is not None:
        resource.setrlimit(resource.RLIMIT_NPROC, (num_procs, num_procs))

    # schedule an alarm in specified number of seconds
    if wall_time_limit_in_s is not None:
        signal.alarm(wall_time_limit_in_s)

    if cpu_time_limit_in_s is not None:
        # From the Linux man page:
        # When the process reaches the soft limit, it is sent a SIGXCPU signal.
        # The default action for this signal is to terminate the process.
        # However, the signal can be caught, and the handler can return control
        # to the main program. If the process continues to consume CPU time,
        # it will be sent SIGXCPU once per second until the hard limit is
        # reached, at which time it is sent SIGKILL.
        resource.setrlimit(resource.RLIMIT_CPU,
                           (cpu_time_limit_in_s,
                            cpu_time_limit_in_s + grace_period_in_s))

    # the actual function call
    try:
        logger.debug("call function")
        return_value = (func(*args, **kwargs), 0)
        logger.debug("function returned properly: {}".format(return_value))
    except MemoryError:
        return_value = (None, MemorylimitException)
    except OSError as e:
        if e.errno == 11:
            return_value = (None, SubprocessException)
        else:
            return_value = (None, AnythingException)
    except CpuTimeoutException:
        return_value = (None, CpuTimeoutException)
    except TimeoutException:
        return_value = (None, TimeoutException)
    except AnythingException:
        return_value = (None, AnythingException)
    except:
        logger.debug("Some weird exception occurred!")
        raise
    finally:
        try:
            logger.debug("return value: {}".format(return_value))
            pipe.send(return_value)
            pipe.close()
        except:
            # this part should only fail if the parent process is already
            # dead, so there is not much to do anymore :)
            pass
        finally:
            # recursively kill all children
            p = psutil.Process()
            for child in p.children(recursive=True):
                child.kill()
for i in [x for x in dir(signal) if x.startswith("SIG")]: try: signum = getattr(signal,i) print("register {}, {}".format(signum, i)) signal.signal(signum, handler) except: print("Skipping %s"%i)
Below is the the instruction that describes the task: ### Input: for i in [x for x in dir(signal) if x.startswith("SIG")]: try: signum = getattr(signal,i) print("register {}, {}".format(signum, i)) signal.signal(signum, handler) except: print("Skipping %s"%i) ### Response: def subprocess_func(func, pipe, logger, mem_in_mb, cpu_time_limit_in_s, wall_time_limit_in_s, num_procs, grace_period_in_s, tmp_dir, *args, **kwargs): # simple signal handler to catch the signals for time limits def handler(signum, frame): # logs message with level debug on this logger logger.debug("signal handler: %i"%signum) if (signum == signal.SIGXCPU): # when process reaches soft limit --> a SIGXCPU signal is sent (it normally terminats the process) raise(CpuTimeoutException) elif (signum == signal.SIGALRM): # SIGALRM is sent to process when the specified time limit to an alarm function elapses (when real or clock time elapses) logger.debug("timeout") raise(TimeoutException) raise AnythingException # temporary directory to store stdout and stderr if not tmp_dir is None: logger.debug('Redirecting output of the function to files. Access them via the stdout and stderr attributes of the wrapped function.') stdout = open(os.path.join(tmp_dir, 'std.out'), 'a', buffering=1) sys.stdout=stdout stderr = open(os.path.join(tmp_dir, 'std.err'), 'a', buffering=1) sys.stderr=stderr # catching all signals at this point turned out to interfer with the subprocess (e.g. using ROS) signal.signal(signal.SIGALRM, handler) signal.signal(signal.SIGXCPU, handler) signal.signal(signal.SIGQUIT, handler) # code to catch EVERY catchable signal (even X11 related ones ... ) # only use for debugging/testing as this seems to be too intrusive. 
""" for i in [x for x in dir(signal) if x.startswith("SIG")]: try: signum = getattr(signal,i) print("register {}, {}".format(signum, i)) signal.signal(signum, handler) except: print("Skipping %s"%i) """ # set the memory limit if mem_in_mb is not None: # byte --> megabyte mem_in_b = mem_in_mb*1024*1024 # the maximum area (in bytes) of address space which may be taken by the process. resource.setrlimit(resource.RLIMIT_AS, (mem_in_b, mem_in_b)) # for now: don't allow the function to spawn subprocesses itself. #resource.setrlimit(resource.RLIMIT_NPROC, (1, 1)) # Turns out, this is quite restrictive, so we don't use this option by default if num_procs is not None: resource.setrlimit(resource.RLIMIT_NPROC, (num_procs, num_procs)) # schedule an alarm in specified number of seconds if wall_time_limit_in_s is not None: signal.alarm(wall_time_limit_in_s) if cpu_time_limit_in_s is not None: # From the Linux man page: # When the process reaches the soft limit, it is sent a SIGXCPU signal. # The default action for this signal is to terminate the process. # However, the signal can be caught, and the handler can return control # to the main program. If the process continues to consume CPU time, # it will be sent SIGXCPU once per second until the hard limit is reached, # at which time it is sent SIGKILL. 
resource.setrlimit(resource.RLIMIT_CPU, (cpu_time_limit_in_s,cpu_time_limit_in_s+grace_period_in_s)) # the actual function call try: logger.debug("call function") return_value = ((func(*args, **kwargs), 0)) logger.debug("function returned properly: {}".format(return_value)) except MemoryError: return_value = (None, MemorylimitException) except OSError as e: if (e.errno == 11): return_value = (None, SubprocessException) else: return_value = (None, AnythingException) except CpuTimeoutException: return_value = (None, CpuTimeoutException) except TimeoutException: return_value = (None, TimeoutException) except AnythingException as e: return_value = (None, AnythingException) except: raise logger.debug("Some wired exception occured!") finally: try: logger.debug("return value: {}".format(return_value)) pipe.send(return_value) pipe.close() except: # this part should only fail if the parent process is alread dead, so there is not much to do anymore :) pass finally: # recursively kill all children p = psutil.Process() for child in p.children(recursive=True): child.kill()
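The limit plumbing above boils down to two translations: megabytes into the `(soft, hard)` byte tuples that `setrlimit` expects, and caught signals into the exception types the parent process inspects. A minimal sketch of that mapping on a Unix system — the exception names mirror the ones used in the function; the helper names are mine, not part of the original module:

```python
import signal

class CpuTimeoutException(Exception):
    """Raised when the CPU-time soft limit (SIGXCPU) is hit."""

class TimeoutException(Exception):
    """Raised when the wall-clock alarm (SIGALRM) fires."""

def rlimit_tuples(mem_in_mb, cpu_time_limit_in_s, grace_period_in_s):
    """Translate human-friendly limits into (soft, hard) setrlimit tuples."""
    mem_in_b = mem_in_mb * 1024 * 1024  # megabytes -> bytes
    return ((mem_in_b, mem_in_b),
            (cpu_time_limit_in_s, cpu_time_limit_in_s + grace_period_in_s))

def exception_for_signal(signum):
    """Map the two signals handled by handler() to the exceptions it raises."""
    if signum == signal.SIGXCPU:
        return CpuTimeoutException
    if signum == signal.SIGALRM:
        return TimeoutException
    return Exception
```

The grace period is what gives the `SIGXCPU` handler a window to run before the hard limit delivers an uncatchable `SIGKILL`.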
def queue(self, name, value, quality=None, timestamp=None, attributes=None): """ To reduce network traffic, you can buffer datapoints and then flush() anything in the queue. :param name: the name / label / tag for sensor data :param value: the sensor reading or value to record :param quality: the quality value, use the constants BAD, GOOD, etc. (optional and defaults to UNCERTAIN) :param timestamp: the time the reading was recorded in epoch milliseconds (optional and defaults to now) :param attributes: dictionary for any key-value pairs to store with the reading (optional) """ # Get timestamp first in case delay opening websocket connection # and it must have millisecond accuracy if not timestamp: timestamp = int(round(time.time() * 1000)) else: # Coerce datetime objects to epoch if isinstance(timestamp, datetime.datetime): timestamp = int(round(int(timestamp.strftime('%s')) * 1000)) # Only specific quality values supported if quality not in [self.BAD, self.GOOD, self.NA, self.UNCERTAIN]: quality = self.UNCERTAIN # Check if adding to queue of an existing tag and add second datapoint for point in self._queue: if point['name'] == name: point['datapoints'].append([timestamp, value, quality]) return # If adding new tag, initialize and set any attributes datapoint = { "name": name, "datapoints": [[timestamp, value, quality]] } # Attributes are extra details for a datapoint if attributes is not None: if not isinstance(attributes, dict): raise ValueError("Attributes are expected to be a dictionary.") # Validate rules for attribute keys to provide guidance. invalid_value = ':;= ' has_invalid_value = re.compile(r'[%s]' % (invalid_value)).search has_valid_key = re.compile(r'^[\w\.\/\-]+$').search for (key, val) in list(attributes.items()): # Values cannot be empty if (val == '') or (val is None): raise ValueError("Attribute (%s) must have a non-empty value." 
% (key))

                # Values should be treated as a string for regex validation
                val = str(val)

                # Values cannot contain certain arbitrary characters
                if bool(has_invalid_value(val)):
                    raise ValueError("Attribute (%s) cannot contain (%s)." %
                                     (key, invalid_value))

                # Attributes have to be alphanumeric-ish
                if not bool(has_valid_key(key)):
                    raise ValueError("Key (%s) not alphanumeric-ish." %
                                     (key))

            datapoint['attributes'] = attributes

        self._queue.append(datapoint)

        logging.debug("QUEUE: " + str(len(self._queue)))
To reduce network traffic, you can buffer datapoints and then flush() anything in the queue. :param name: the name / label / tag for sensor data :param value: the sensor reading or value to record :param quality: the quality value, use the constants BAD, GOOD, etc. (optional and defaults to UNCERTAIN) :param timestamp: the time the reading was recorded in epoch milliseconds (optional and defaults to now) :param attributes: dictionary for any key-value pairs to store with the reading (optional)
Below is the the instruction that describes the task:
### Input:
To reduce network traffic, you can buffer datapoints and
then flush() anything in the queue.

:param name: the name / label / tag for sensor data
:param value: the sensor reading or value to record
:param quality: the quality value, use the constants BAD, GOOD, etc.
    (optional and defaults to UNCERTAIN)
:param timestamp: the time the reading was recorded in epoch milliseconds
    (optional and defaults to now)
:param attributes: dictionary for any key-value pairs to store with the
    reading (optional)
### Response:
def queue(self, name, value, quality=None, timestamp=None, attributes=None):
    """
    To reduce network traffic, you can buffer datapoints and
    then flush() anything in the queue.

    :param name: the name / label / tag for sensor data
    :param value: the sensor reading or value to record
    :param quality: the quality value, use the constants BAD, GOOD, etc.
        (optional and defaults to UNCERTAIN)
    :param timestamp: the time the reading was recorded in epoch milliseconds
        (optional and defaults to now)
    :param attributes: dictionary for any key-value pairs to store with the
        reading (optional)
    """
    # Get timestamp first in case delay opening websocket connection
    # and it must have millisecond accuracy
    if not timestamp:
        timestamp = int(round(time.time() * 1000))
    else:
        # Coerce datetime objects to epoch
        if isinstance(timestamp, datetime.datetime):
            timestamp = int(round(int(timestamp.strftime('%s')) * 1000))

    # Only specific quality values supported
    if quality not in [self.BAD, self.GOOD, self.NA, self.UNCERTAIN]:
        quality = self.UNCERTAIN

    # Check if adding to queue of an existing tag and add second datapoint
    for point in self._queue:
        if point['name'] == name:
            point['datapoints'].append([timestamp, value, quality])
            return

    # If adding new tag, initialize and set any attributes
    datapoint = {
        "name": name,
        "datapoints": [[timestamp, value, quality]]
    }

    # Attributes are extra details for a datapoint
    if attributes is not None:
        if
not isinstance(attributes, dict):
                raise ValueError("Attributes are expected to be a dictionary.")

            # Validate rules for attribute keys to provide guidance.
            invalid_value = ':;= '
            has_invalid_value = re.compile(r'[%s]' % (invalid_value)).search
            has_valid_key = re.compile(r'^[\w\.\/\-]+$').search

            for (key, val) in list(attributes.items()):
                # Values cannot be empty
                if (val == '') or (val is None):
                    raise ValueError("Attribute (%s) must have a non-empty value."
                                     % (key))

                # Values should be treated as a string for regex validation
                val = str(val)

                # Values cannot contain certain arbitrary characters
                if bool(has_invalid_value(val)):
                    raise ValueError("Attribute (%s) cannot contain (%s)." %
                                     (key, invalid_value))

                # Attributes have to be alphanumeric-ish
                if not bool(has_valid_key(key)):
                    raise ValueError("Key (%s) not alphanumeric-ish." %
                                     (key))

            datapoint['attributes'] = attributes

        self._queue.append(datapoint)

        logging.debug("QUEUE: " + str(len(self._queue)))
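The attribute rules reduce to two regular expressions: values may not contain `: ; =` or a space, and keys must be "alphanumeric-ish" (word characters plus `.`, `/`, `-`). A self-contained sketch of the same checks — the constant and function names here are mine, for illustration only:

```python
import re

# Same patterns as in queue(); only the names differ.
INVALID_VALUE_CHARS = ':;= '
has_invalid_value = re.compile(r'[%s]' % INVALID_VALUE_CHARS).search
has_valid_key = re.compile(r'^[\w\.\/\-]+$').search

def validate_attributes(attributes):
    """Raise ValueError on the first attribute that breaks the rules."""
    for key, val in attributes.items():
        if val == '' or val is None:
            raise ValueError("Attribute (%s) must have a non-empty value." % key)
        val = str(val)  # treat the value as a string for regex validation
        if has_invalid_value(val):
            raise ValueError("Attribute (%s) cannot contain (%s)."
                             % (key, INVALID_VALUE_CHARS))
        if not has_valid_key(key):
            raise ValueError("Key (%s) not alphanumeric-ish." % key)

validate_attributes({'unit': 'degC', 'site/line': 'A-1'})  # passes
```

Slashes and dots are allowed in keys so hierarchical names like `site/line` survive, while spaces in either keys or values are rejected.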
def ret_pcre_minions(self): ''' Return minions that match via pcre ''' tgt = re.compile(self.tgt) refilter = functools.partial(filter, tgt.match) return self._ret_minions(refilter)
Return minions that match via pcre
Below is the instruction that describes the task:
### Input:
Return minions that match via pcre
### Response:
def ret_pcre_minions(self):
    '''
    Return minions that match via pcre
    '''
    tgt = re.compile(self.tgt)
    refilter = functools.partial(filter, tgt.match)
    return self._ret_minions(refilter)
def process_embedded(self, xlist, anexec, add=True):
    """Processes the specified xml list and executable to link *embedded*
    types and executables to their docstrings.

    :arg xlist: a list of XML elements returned by parse_docs().
    :arg add: when true, docstrings are only appended, never overwritten.
    """
    #Keep track of the changes that took place in the lengths of the
    #docstrings that got added/updated on the elements children.
    delta = 0
    for t in anexec.types:
        key = "{}.{}".format(anexec.name, t)
        if key in xlist:
            docs = self.to_doc(xlist[key][0], t)
            self.process_memberdocs(docs, anexec.types[t], add)
            anexec.types[t].docstart = xlist[key][1]
            delta += xlist[key][2] - anexec.types[t].docend
            anexec.types[t].docend = xlist[key][2]

    for iexec in anexec.executables:
        key = "{}.{}".format(anexec.name, iexec)
        if key in xlist:
            docs = self.to_doc(xlist[key][0], iexec)
            self.process_memberdocs(docs, anexec.executables[iexec], add)
            anexec.executables[iexec].docstart = xlist[key][1]
            delta += xlist[key][2] - anexec.executables[iexec].docend
            anexec.executables[iexec].docend = xlist[key][2]

    if not add:
        return delta
Processes the specified xml list and executable to link *embedded* types and executables to their docstrings. :arg xlist: a list of XML elements returned by parse_docs(). :arg add: when true, docstrings are only appended, never overwritten.
Below is the instruction that describes the task:
### Input:
Processes the specified xml list and executable to link *embedded*
types and executables to their docstrings.

:arg xlist: a list of XML elements returned by parse_docs().
:arg add: when true, docstrings are only appended, never overwritten.
### Response:
def process_embedded(self, xlist, anexec, add=True):
    """Processes the specified xml list and executable to link *embedded*
    types and executables to their docstrings.

    :arg xlist: a list of XML elements returned by parse_docs().
    :arg add: when true, docstrings are only appended, never overwritten.
    """
    #Keep track of the changes that took place in the lengths of the
    #docstrings that got added/updated on the elements children.
    delta = 0
    for t in anexec.types:
        key = "{}.{}".format(anexec.name, t)
        if key in xlist:
            docs = self.to_doc(xlist[key][0], t)
            self.process_memberdocs(docs, anexec.types[t], add)
            anexec.types[t].docstart = xlist[key][1]
            delta += xlist[key][2] - anexec.types[t].docend
            anexec.types[t].docend = xlist[key][2]

    for iexec in anexec.executables:
        key = "{}.{}".format(anexec.name, iexec)
        if key in xlist:
            docs = self.to_doc(xlist[key][0], iexec)
            self.process_memberdocs(docs, anexec.executables[iexec], add)
            anexec.executables[iexec].docstart = xlist[key][1]
            delta += xlist[key][2] - anexec.executables[iexec].docend
            anexec.executables[iexec].docend = xlist[key][2]

    if not add:
        return delta
def movie_body_count_r_classify(data_set='movie_body_count'): """Data set of movies and body count for movies scraped from www.MovieBodyCounts.com created by Simon Garnier and Randy Olson for exploring differences between Python and R.""" data = movie_body_count()['Y'] import pandas as pd import numpy as np X = data[['Year', 'Body_Count']] Y = data['MPAA_Rating']=='R' # set label to be positive for R rated films. # Create series of movie genres with the relevant index s = data['Genre'].str.split('|').apply(pd.Series, 1).stack() s.index = s.index.droplevel(-1) # to line up with df's index # Extract from the series the unique list of genres. genres = s.unique() # For each genre extract the indices where it is present and add a column to X for genre in genres: index = s[s==genre].index.tolist() values = pd.Series(np.zeros(X.shape[0]), index=X.index) values[index] = 1 X[genre] = values return data_details_return({'X': X, 'Y': Y, 'info' : "Data set of movies and body count for movies scraped from www.MovieBodyCounts.com created by Simon Garnier and Randy Olson for exploring differences between Python and R. In this variant we aim to classify whether the film is rated R or not depending on the genre, the years and the body count.", }, data_set)
Data set of movies and body count for movies scraped from www.MovieBodyCounts.com created by Simon Garnier and Randy Olson for exploring differences between Python and R.
Below is the instruction that describes the task:
### Input:
Data set of movies and body count for movies scraped from www.MovieBodyCounts.com created by Simon Garnier and Randy Olson for exploring differences between Python and R.
### Response:
def movie_body_count_r_classify(data_set='movie_body_count'):
    """Data set of movies and body count for movies scraped from www.MovieBodyCounts.com created by Simon Garnier and Randy Olson for exploring differences between Python and R."""
    data = movie_body_count()['Y']
    import pandas as pd
    import numpy as np
    X = data[['Year', 'Body_Count']]
    Y = data['MPAA_Rating']=='R' # set label to be positive for R rated films.

    # Create series of movie genres with the relevant index
    s = data['Genre'].str.split('|').apply(pd.Series, 1).stack()
    s.index = s.index.droplevel(-1) # to line up with df's index

    # Extract from the series the unique list of genres.
    genres = s.unique()

    # For each genre extract the indices where it is present and add a column to X
    for genre in genres:
        index = s[s==genre].index.tolist()
        values = pd.Series(np.zeros(X.shape[0]), index=X.index)
        values[index] = 1
        X[genre] = values

    return data_details_return({'X': X, 'Y': Y, 'info' : "Data set of movies and body count for movies scraped from www.MovieBodyCounts.com created by Simon Garnier and Randy Olson for exploring differences between Python and R. In this variant we aim to classify whether the film is rated R or not depending on the genre, the years and the body count.",
                               }, data_set)
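The split-and-stack dance above produces one 0/1 indicator column per genre. On modern pandas the same one-hot expansion can be written more directly with `Series.explode` (available since pandas 0.25) — a small illustrative sketch, not the dataset loader itself:

```python
import pandas as pd

df = pd.DataFrame({'Genre': ['Action|Comedy', 'Drama', 'Comedy|Drama']})

# One row per (film, genre) pair, keeping each film's original index.
s = df['Genre'].str.split('|').explode()

# One 0/1 indicator column per unique genre, aligned back to the films.
for genre in s.unique():
    df[genre] = (s == genre).groupby(level=0).any().astype(int)
```

`groupby(level=0).any()` collapses the exploded rows back to one flag per film, which is exactly what the original `s[s==genre].index.tolist()` loop achieves by hand.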
def render(self, content = None, **settings): """ Perform widget rendering, but do not print anything. """ (content, settings) = self.get_setup(content, **settings) return self._render(content, **settings)
Perform widget rendering, but do not print anything.
Below is the instruction that describes the task:
### Input:
Perform widget rendering, but do not print anything.
### Response:
def render(self, content = None, **settings):
    """
    Perform widget rendering, but do not print anything.
    """
    (content, settings) = self.get_setup(content, **settings)
    return self._render(content, **settings)
def adapt(self, d, x):
    """
    Adapt weights according to one desired value and its input.

    **Args:**

    * `d` : desired value (float)

    * `x` : input array (1-dimensional array)
    """
    y = np.dot(self.w, x)
    e = d - y
    self.eps = self.eps - self.ro * self.mu * e * self.last_e * \
        np.dot(x, self.last_x) / \
        (np.dot(self.last_x, self.last_x) + self.eps)**2
    nu = self.mu / (self.eps + np.dot(x, x))
    self.w += nu * e * x
    self.last_e = e
Adapt weights according to one desired value and its input.

**Args:**

* `d` : desired value (float)

* `x` : input array (1-dimensional array)
Below is the instruction that describes the task:
### Input:
Adapt weights according to one desired value and its input.

**Args:**

* `d` : desired value (float)

* `x` : input array (1-dimensional array)
### Response:
def adapt(self, d, x):
    """
    Adapt weights according to one desired value and its input.

    **Args:**

    * `d` : desired value (float)

    * `x` : input array (1-dimensional array)
    """
    y = np.dot(self.w, x)
    e = d - y
    self.eps = self.eps - self.ro * self.mu * e * self.last_e * \
        np.dot(x, self.last_x) / \
        (np.dot(self.last_x, self.last_x) + self.eps)**2
    nu = self.mu / (self.eps + np.dot(x, x))
    self.w += nu * e * x
    self.last_e = e
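The update resembles a generalized-normalized-gradient-descent (GNGD) step: a normalized-LMS weight update whose regularization term ε is itself adapted from the previous error and input. A minimal, self-contained filter object the method could sit on — the attribute names follow the snippet, but the constructor and the `last_x` refresh are my assumptions (the excerpt never shows where `last_x` is updated):

```python
import numpy as np

class GNGDFilter:
    """Sketch of a GNGD-style adaptive filter (assumed surrounding class)."""

    def __init__(self, n, mu=1.0, eps=1.0, ro=0.1):
        self.w = np.zeros(n)      # filter weights
        self.mu = mu              # learning rate
        self.eps = eps            # adaptive regularization term
        self.ro = ro              # step size of the eps adaptation
        self.last_e = 0.0
        self.last_x = np.zeros(n)

    def adapt(self, d, x):
        x = np.asarray(x, dtype=float)
        y = np.dot(self.w, x)
        e = d - y
        self.eps -= self.ro * self.mu * e * self.last_e * \
            np.dot(x, self.last_x) / \
            (np.dot(self.last_x, self.last_x) + self.eps) ** 2
        nu = self.mu / (self.eps + np.dot(x, x))
        self.w += nu * e * x
        self.last_e = e
        self.last_x = x  # assumed: the excerpt does not show this update
        return e
```

With zero initial weights the first sample passes straight through as the error, ε is untouched (since `last_e` is zero), and the weight step is ν·e·x.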
def set_position(self, pos): """Seek in the current playing media.""" time_in_ms = int(pos)*1000 return self.apple_tv.set_property('dacp.playingtime', time_in_ms)
Seek in the current playing media.
Below is the instruction that describes the task:
### Input:
Seek in the current playing media.
### Response:
def set_position(self, pos):
    """Seek in the current playing media."""
    time_in_ms = int(pos)*1000
    return self.apple_tv.set_property('dacp.playingtime', time_in_ms)
def name(self): """Instance name used in requests. .. note:: This property will not change if ``instance_id`` does not, but the return value is not cached. For example: .. literalinclude:: snippets.py :start-after: [START bigtable_instance_name] :end-before: [END bigtable_instance_name] The instance name is of the form ``"projects/{project}/instances/{instance_id}"`` :rtype: str :returns: Return a fully-qualified instance string. """ return self._client.instance_admin_client.instance_path( project=self._client.project, instance=self.instance_id )
Instance name used in requests. .. note:: This property will not change if ``instance_id`` does not, but the return value is not cached. For example: .. literalinclude:: snippets.py :start-after: [START bigtable_instance_name] :end-before: [END bigtable_instance_name] The instance name is of the form ``"projects/{project}/instances/{instance_id}"`` :rtype: str :returns: Return a fully-qualified instance string.
Below is the instruction that describes the task:
### Input:
Instance name used in requests.

.. note::

    This property will not change if ``instance_id`` does not, but the
    return value is not cached.

For example:

.. literalinclude:: snippets.py
    :start-after: [START bigtable_instance_name]
    :end-before: [END bigtable_instance_name]

The instance name is of the form
``"projects/{project}/instances/{instance_id}"``

:rtype: str
:returns: Return a fully-qualified instance string.
### Response:
def name(self):
    """Instance name used in requests.

    .. note::

        This property will not change if ``instance_id`` does not, but the
        return value is not cached.

    For example:

    .. literalinclude:: snippets.py
        :start-after: [START bigtable_instance_name]
        :end-before: [END bigtable_instance_name]

    The instance name is of the form
    ``"projects/{project}/instances/{instance_id}"``

    :rtype: str
    :returns: Return a fully-qualified instance string.
    """
    return self._client.instance_admin_client.instance_path(
        project=self._client.project, instance=self.instance_id
    )
def poisson(x, a, b, c, d=0): ''' Poisson function a -> height of the curve's peak b -> position of the center of the peak c -> standard deviation d -> offset ''' from scipy.misc import factorial #save startup time lamb = 1 X = (x/(2*c)).astype(int) return a * (( lamb**X/factorial(X)) * np.exp(-lamb) ) +d
Poisson function a -> height of the curve's peak b -> position of the center of the peak c -> standard deviation d -> offset
Below is the instruction that describes the task:
### Input:
Poisson function

a -> height of the curve's peak
b -> position of the center of the peak
c -> standard deviation
d -> offset
### Response:
def poisson(x, a, b, c, d=0):
    '''
    Poisson function

    a -> height of the curve's peak
    b -> position of the center of the peak
    c -> standard deviation
    d -> offset
    '''
    from scipy.misc import factorial #save startup time
    lamb = 1
    X = (x/(2*c)).astype(int)
    return a * (( lamb**X/factorial(X)) * np.exp(-lamb) ) +d
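`scipy.misc.factorial` was deprecated and later removed (recent SciPy exposes it as `scipy.special.factorial`), so the import above fails on current installations. A version of the same curve using only the standard library plus NumPy — note that, exactly as in the original, `b` is accepted but never used and `lamb` is fixed at 1:

```python
import math
import numpy as np

def poisson(x, a, b, c, d=0):
    """Poisson-shaped curve: a * (lamb**X / X!) * exp(-lamb) + d, lamb = 1."""
    lamb = 1
    X = (x / (2 * c)).astype(int)
    # math.factorial replaces the removed scipy.misc.factorial
    fact = np.array([math.factorial(int(k)) for k in X.ravel()],
                    dtype=float).reshape(X.shape)
    return a * (lamb ** X / fact) * np.exp(-lamb) + d
```

For x = 0 (so X = 0) the curve evaluates to a·e⁻¹ + d, the mode of a unit-rate Poisson.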
def update_snapshot(self, snapshot_id, **kwargs):
    """
    Updates a snapshot's properties.

    :param snapshot_id: The unique ID of the snapshot.
    :type snapshot_id: ``str``

    """
    data = {}

    for attr, value in kwargs.items():
        data[self._underscore_to_camelcase(attr)] = value

    response = self._perform_request(
        url='/snapshots/' + snapshot_id,
        method='PATCH',
        data=json.dumps(data))

    return response
Updates a snapshot's properties.

:param snapshot_id: The unique ID of the snapshot.
:type snapshot_id: ``str``
Below is the instruction that describes the task:
### Input:
Updates a snapshot's properties.

:param snapshot_id: The unique ID of the snapshot.
:type snapshot_id: ``str``
### Response:
def update_snapshot(self, snapshot_id, **kwargs):
    """
    Updates a snapshot's properties.

    :param snapshot_id: The unique ID of the snapshot.
    :type snapshot_id: ``str``

    """
    data = {}

    for attr, value in kwargs.items():
        data[self._underscore_to_camelcase(attr)] = value

    response = self._perform_request(
        url='/snapshots/' + snapshot_id,
        method='PATCH',
        data=json.dumps(data))

    return response
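The helper `_underscore_to_camelcase` is not shown in the excerpt. A plausible implementation — turning snake_case keyword names into the camelCase keys a REST body typically expects — might look like this (the real helper may differ):

```python
def underscore_to_camelcase(value):
    """Convert a snake_case name like 'licence_type' to 'licenceType'."""
    head, *rest = value.split('_')
    return head + ''.join(part.title() for part in rest)
```

Each keyword argument such as `licence_type='LINUX'` would then land in the PATCH body as `{"licenceType": "LINUX"}`.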
def _check_for_single_nested(target, items_by_key, input_order): """Check for single nested inputs that match our target count and unnest. Handles complex var inputs where some have an extra layer of nesting. """ out = utils.deepish_copy(items_by_key) for (k, t) in input_order.items(): if t == "var": v = items_by_key[tuple(k.split("__"))] if _is_nested_single(v, target): out[tuple(k.split("__"))] = v[0] return out
Check for single nested inputs that match our target count and unnest. Handles complex var inputs where some have an extra layer of nesting.
Below is the instruction that describes the task:
### Input:
Check for single nested inputs that match our target count and unnest.

Handles complex var inputs where some have an extra layer of nesting.
### Response:
def _check_for_single_nested(target, items_by_key, input_order):
    """Check for single nested inputs that match our target count and unnest.

    Handles complex var inputs where some have an extra layer of nesting.
    """
    out = utils.deepish_copy(items_by_key)
    for (k, t) in input_order.items():
        if t == "var":
            v = items_by_key[tuple(k.split("__"))]
            if _is_nested_single(v, target):
                out[tuple(k.split("__"))] = v[0]
    return out
def read(self, input_stream, kmip_version=enums.KMIPVersion.KMIP_1_0): """ Read the data encoding the KeyWrappingData struct and decode it into its constituent parts. Args: input_stream (stream): A data stream containing encoded object data, supporting a read method; usually a BytearrayStream object. kmip_version (KMIPVersion): An enumeration defining the KMIP version with which the object will be decoded. Optional, defaults to KMIP 1.0. """ super(KeyWrappingData, self).read( input_stream, kmip_version=kmip_version ) local_stream = BytearrayStream(input_stream.read(self.length)) if self.is_tag_next(enums.Tags.WRAPPING_METHOD, local_stream): self._wrapping_method = primitives.Enumeration( enum=enums.WrappingMethod, tag=enums.Tags.WRAPPING_METHOD ) self._wrapping_method.read( local_stream, kmip_version=kmip_version ) else: raise ValueError( "Invalid struct missing the wrapping method attribute." ) if self.is_tag_next( enums.Tags.ENCRYPTION_KEY_INFORMATION, local_stream ): self._encryption_key_information = EncryptionKeyInformation() self._encryption_key_information.read( local_stream, kmip_version=kmip_version ) if self.is_tag_next( enums.Tags.MAC_SIGNATURE_KEY_INFORMATION, local_stream ): self._mac_signature_key_information = MACSignatureKeyInformation() self._mac_signature_key_information.read( local_stream, kmip_version=kmip_version ) if self.is_tag_next(enums.Tags.MAC_SIGNATURE, local_stream): self._mac_signature = primitives.ByteString( tag=enums.Tags.MAC_SIGNATURE ) self._mac_signature.read( local_stream, kmip_version=kmip_version ) if self.is_tag_next(enums.Tags.IV_COUNTER_NONCE, local_stream): self._iv_counter_nonce = primitives.ByteString( tag=enums.Tags.IV_COUNTER_NONCE ) self._iv_counter_nonce.read( local_stream, kmip_version=kmip_version ) if self.is_tag_next(enums.Tags.ENCODING_OPTION, local_stream): self._encoding_option = primitives.Enumeration( enum=enums.EncodingOption, tag=enums.Tags.ENCODING_OPTION ) self._encoding_option.read( local_stream, 
kmip_version=kmip_version ) self.is_oversized(local_stream)
Read the data encoding the KeyWrappingData struct and decode it into its constituent parts. Args: input_stream (stream): A data stream containing encoded object data, supporting a read method; usually a BytearrayStream object. kmip_version (KMIPVersion): An enumeration defining the KMIP version with which the object will be decoded. Optional, defaults to KMIP 1.0.
Below is the instruction that describes the task:
### Input:
Read the data encoding the KeyWrappingData struct and decode it into
its constituent parts.

Args:
    input_stream (stream): A data stream containing encoded object
        data, supporting a read method; usually a BytearrayStream
        object.
    kmip_version (KMIPVersion): An enumeration defining the KMIP
        version with which the object will be decoded. Optional,
        defaults to KMIP 1.0.
### Response:
def read(self, input_stream, kmip_version=enums.KMIPVersion.KMIP_1_0):
    """
    Read the data encoding the KeyWrappingData struct and decode it into
    its constituent parts.

    Args:
        input_stream (stream): A data stream containing encoded object
            data, supporting a read method; usually a BytearrayStream
            object.
        kmip_version (KMIPVersion): An enumeration defining the KMIP
            version with which the object will be decoded. Optional,
            defaults to KMIP 1.0.
    """
    super(KeyWrappingData, self).read(
        input_stream,
        kmip_version=kmip_version
    )
    local_stream = BytearrayStream(input_stream.read(self.length))

    if self.is_tag_next(enums.Tags.WRAPPING_METHOD, local_stream):
        self._wrapping_method = primitives.Enumeration(
            enum=enums.WrappingMethod,
            tag=enums.Tags.WRAPPING_METHOD
        )
        self._wrapping_method.read(
            local_stream,
            kmip_version=kmip_version
        )
    else:
        raise ValueError(
            "Invalid struct missing the wrapping method attribute."
) if self.is_tag_next( enums.Tags.ENCRYPTION_KEY_INFORMATION, local_stream ): self._encryption_key_information = EncryptionKeyInformation() self._encryption_key_information.read( local_stream, kmip_version=kmip_version ) if self.is_tag_next( enums.Tags.MAC_SIGNATURE_KEY_INFORMATION, local_stream ): self._mac_signature_key_information = MACSignatureKeyInformation() self._mac_signature_key_information.read( local_stream, kmip_version=kmip_version ) if self.is_tag_next(enums.Tags.MAC_SIGNATURE, local_stream): self._mac_signature = primitives.ByteString( tag=enums.Tags.MAC_SIGNATURE ) self._mac_signature.read( local_stream, kmip_version=kmip_version ) if self.is_tag_next(enums.Tags.IV_COUNTER_NONCE, local_stream): self._iv_counter_nonce = primitives.ByteString( tag=enums.Tags.IV_COUNTER_NONCE ) self._iv_counter_nonce.read( local_stream, kmip_version=kmip_version ) if self.is_tag_next(enums.Tags.ENCODING_OPTION, local_stream): self._encoding_option = primitives.Enumeration( enum=enums.EncodingOption, tag=enums.Tags.ENCODING_OPTION ) self._encoding_option.read( local_stream, kmip_version=kmip_version ) self.is_oversized(local_stream)
def execute(self, input_data):
    ''' Okay this worker is going to build graphs from PCAP Bro output logs '''

    # Grab the Bro log handles from the input
    bro_logs = input_data['pcap_bro']

    # Weird log
    if 'weird_log' in bro_logs:
        stream = self.workbench.stream_sample(bro_logs['weird_log'])
        self.weird_log_graph(stream)

    # HTTP log
    gsleep()
    stream = self.workbench.stream_sample(bro_logs['http_log'])
    self.http_log_graph(stream)

    # Files log
    gsleep()
    stream = self.workbench.stream_sample(bro_logs['files_log'])
    self.files_log_graph(stream)

    return {'output':'go to http://localhost:7474/browser and execute this query "match (s:origin), (t:file), p=allShortestPaths((s)--(t)) return p"'}
Okay this worker is going to build graphs from PCAP Bro output logs
Below is the instruction that describes the task:
### Input:
Okay this worker is going to build graphs from PCAP Bro output logs
### Response:
def execute(self, input_data):
    ''' Okay this worker is going to build graphs from PCAP Bro output logs '''

    # Grab the Bro log handles from the input
    bro_logs = input_data['pcap_bro']

    # Weird log
    if 'weird_log' in bro_logs:
        stream = self.workbench.stream_sample(bro_logs['weird_log'])
        self.weird_log_graph(stream)

    # HTTP log
    gsleep()
    stream = self.workbench.stream_sample(bro_logs['http_log'])
    self.http_log_graph(stream)

    # Files log
    gsleep()
    stream = self.workbench.stream_sample(bro_logs['files_log'])
    self.files_log_graph(stream)

    return {'output':'go to http://localhost:7474/browser and execute this query "match (s:origin), (t:file), p=allShortestPaths((s)--(t)) return p"'}
def _desc_has_data(desc): """Returns true if there is any data set for a particular PhoneNumberDesc.""" if desc is None: return False # Checking most properties since we don't know what's present, since a custom build may have # stripped just one of them (e.g. liteBuild strips exampleNumber). We don't bother checking the # possibleLengthsLocalOnly, since if this is the only thing that's present we don't really # support the type at all: no type-specific methods will work with only this data. return ((desc.example_number is not None) or _desc_has_possible_number_data(desc) or (desc.national_number_pattern is not None))
Returns true if there is any data set for a particular PhoneNumberDesc.
Below is the instruction that describes the task:
### Input:
Returns true if there is any data set for a particular PhoneNumberDesc.
### Response:
def _desc_has_data(desc):
    """Returns true if there is any data set for a particular PhoneNumberDesc."""
    if desc is None:
        return False
    # Checking most properties since we don't know what's present, since a custom build may have
    # stripped just one of them (e.g. liteBuild strips exampleNumber). We don't bother checking the
    # possibleLengthsLocalOnly, since if this is the only thing that's present we don't really
    # support the type at all: no type-specific methods will work with only this data.
    return ((desc.example_number is not None) or
            _desc_has_possible_number_data(desc) or
            (desc.national_number_pattern is not None))
def subscribe(self, topic, callback, qos): """Subscribe to an MQTT topic.""" if topic in self.topics: return def _message_callback(mqttc, userdata, msg): """Callback added to callback list for received message.""" callback(msg.topic, msg.payload.decode('utf-8'), msg.qos) self._mqttc.subscribe(topic, qos) self._mqttc.message_callback_add(topic, _message_callback) self.topics[topic] = callback
Subscribe to an MQTT topic.
def set_app_name_for_pool(client, pool, name):
    """
    Calls `osd pool application enable` for the specified pool name

    :param client: Name of the ceph client to use
    :type client: str
    :param pool: Pool to set app name for
    :type pool: str
    :param name: app name for the specified pool
    :type name: str

    :raises: CalledProcessError if ceph call fails
    """
    if cmp_pkgrevno('ceph-common', '12.0.0') >= 0:
        cmd = ['ceph', '--id', client, 'osd', 'pool',
               'application', 'enable', pool, name]
        check_call(cmd)
Calls `osd pool application enable` for the specified pool name

:param client: Name of the ceph client to use
:type client: str
:param pool: Pool to set app name for
:type pool: str
:param name: app name for the specified pool
:type name: str

:raises: CalledProcessError if ceph call fails
def export2pyephem(self, center='500@10', equinox=2000.):
    """Call JPL HORIZONS website to obtain orbital elements based on the
    provided targetname, epochs, and center code and create a PyEphem
    (http://rhodesmill.org/pyephem/) object. This function requires
    PyEphem to be installed.

    :param center: str; center body (default: 500@10 = Sun)
    :param equinox: float; equinox (default: 2000.0)
    :result: list; list of PyEphem objects, one per epoch

    :example:

    >>> import callhorizons
    >>> import numpy
    >>> import ephem
    >>>
    >>> ceres = callhorizons.query('Ceres')
    >>> ceres.set_epochrange('2016-02-23 00:00', '2016-02-24 00:00', '1h')
    >>> ceres_pyephem = ceres.export2pyephem()
    >>>
    >>> nau = ephem.Observer()  # setup observer site
    >>> nau.lon = -111.653152/180.*numpy.pi
    >>> nau.lat = 35.184108/180.*numpy.pi
    >>> nau.elevation = 2100  # m
    >>> nau.date = '2015/10/5 01:23'  # UT
    >>> print ('next rising: %s' % nau.next_rising(ceres_pyephem[0]))
    >>> print ('next transit: %s' % nau.next_transit(ceres_pyephem[0]))
    >>> print ('next setting: %s' % nau.next_setting(ceres_pyephem[0]))
    """
    try:
        import ephem
    except ImportError:
        raise ImportError(
            'export2pyephem requires PyEphem to be installed')

    # obtain orbital elements
    self.get_elements(center)

    objects = []
    for el in self.data:
        n = 0.9856076686/np.sqrt(el['a']**3)  # mean daily motion
        epoch_djd = el['datetime_jd']-2415020.0  # Dublin Julian date
        epoch = ephem.date(epoch_djd)
        epoch_str = "%d/%f/%d" % (epoch.triple()[1], epoch.triple()[2],
                                  epoch.triple()[0])
        # export to PyEphem
        objects.append(ephem.readdb("%s,e,%f,%f,%f,%f,%f,%f,%f,%s,%i,%f,%f" %
                                    (el['targetname'], el['incl'], el['node'],
                                     el['argper'], el['a'], n, el['e'],
                                     el['meananomaly'], epoch_str, equinox,
                                     el['H'], el['G'])))
    return objects
Call JPL HORIZONS website to obtain orbital elements based on the
provided targetname, epochs, and center code and create a PyEphem
(http://rhodesmill.org/pyephem/) object. This function requires
PyEphem to be installed.

:param center: str; center body (default: 500@10 = Sun)
:param equinox: float; equinox (default: 2000.0)
:result: list; list of PyEphem objects, one per epoch

:example:

>>> import callhorizons
>>> import numpy
>>> import ephem
>>>
>>> ceres = callhorizons.query('Ceres')
>>> ceres.set_epochrange('2016-02-23 00:00', '2016-02-24 00:00', '1h')
>>> ceres_pyephem = ceres.export2pyephem()
>>>
>>> nau = ephem.Observer()  # setup observer site
>>> nau.lon = -111.653152/180.*numpy.pi
>>> nau.lat = 35.184108/180.*numpy.pi
>>> nau.elevation = 2100  # m
>>> nau.date = '2015/10/5 01:23'  # UT
>>> print ('next rising: %s' % nau.next_rising(ceres_pyephem[0]))
>>> print ('next transit: %s' % nau.next_transit(ceres_pyephem[0]))
>>> print ('next setting: %s' % nau.next_setting(ceres_pyephem[0]))
def df_metrics_to_num(self, df, query_object):
    """Converting metrics to numeric when pandas.read_sql cannot"""
    metrics = [metric for metric in query_object.metrics]
    for col, dtype in df.dtypes.items():
        if dtype.type == np.object_ and col in metrics:
            df[col] = pd.to_numeric(df[col], errors='coerce')
Converting metrics to numeric when pandas.read_sql cannot
def activate(name):
    '''
    Install and activate the given product key

    name
        The 5x5 product key given to you by Microsoft
    '''
    ret = {'name': name,
           'result': True,
           'comment': '',
           'changes': {}}

    product_key = name
    license_info = __salt__['license.info']()
    licensed = False
    key_match = False
    if license_info is not None:
        licensed = license_info['licensed']
        key_match = license_info['partial_key'] in product_key

    if not key_match:
        out = __salt__['license.install'](product_key)
        licensed = False
        if 'successfully' not in out:
            ret['result'] = False
            ret['comment'] += 'Unable to install the given product key is it valid?'
            return ret

    if not licensed:
        out = __salt__['license.activate']()
        if 'successfully' not in out:
            ret['result'] = False
            ret['comment'] += 'Unable to activate the given product key.'
            return ret
        ret['comment'] += 'Windows is now activated.'
    else:
        ret['comment'] += 'Windows is already activated.'

    return ret
Install and activate the given product key

name
    The 5x5 product key given to you by Microsoft
def main(args):
    """Main entry point allowing external calls

    Args:
      args ([str]): command line parameter list
    """
    args = parse_args(args)
    setup_logging(args.loglevel)
    _logger.debug("Starting crazy calculations...")
    print("The {}-th Fibonacci number is {}".format(args.n, fib(args.n)))
    _logger.info("Script ends here")
Main entry point allowing external calls

Args:
  args ([str]): command line parameter list
def tokenize_version(version_string: str) -> Tuple[int, int, int]:
    """Tokenize a version string to a tuple.

    Truncates qualifiers like ``-dev``.

    :param version_string: A version string
    :return: A tuple representing the version string

    >>> tokenize_version('0.1.2-dev')
    (0, 1, 2)
    """
    before_dash = version_string.split('-')[0]
    # take only the first 3 in case there's an extension like -dev.0
    major, minor, patch = before_dash.split('.')[:3]
    return int(major), int(minor), int(patch)
Tokenize a version string to a tuple.

Truncates qualifiers like ``-dev``.

:param version_string: A version string
:return: A tuple representing the version string

>>> tokenize_version('0.1.2-dev')
(0, 1, 2)
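A quick sanity check of the `tokenize_version` entry above (the function is reproduced verbatim so the snippet is self-contained; the version strings are illustrative):

```python
from typing import Tuple


def tokenize_version(version_string: str) -> Tuple[int, int, int]:
    """Tokenize a version string to a tuple, truncating qualifiers like ``-dev``."""
    before_dash = version_string.split('-')[0]
    # take only the first 3 parts in case there's an extension like -dev.0
    major, minor, patch = before_dash.split('.')[:3]
    return int(major), int(minor), int(patch)


plain = tokenize_version('1.2.3')        # (1, 2, 3)
dev = tokenize_version('0.1.2-dev')      # (0, 1, 2) -- qualifier truncated
rc = tokenize_version('10.0.0-rc.1')     # (10, 0, 0) -- dotted qualifier also dropped
```

Note the design choice: splitting on `-` first means anything after the dash is discarded wholesale, so even a dotted pre-release tag like `-rc.1` cannot confuse the three-way unpack.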
def format_time(
    self,
    hour_expression,
    minute_expression,
    second_expression=''
):
    """Given time parts, will construct a formatted time description

    Args:
        hour_expression: Hours part
        minute_expression: Minutes part
        second_expression: Seconds part
    Returns:
        Formatted time description
    """
    hour = int(hour_expression)
    period = ''
    if self._options.use_24hour_time_format is False:
        period = " PM" if (hour >= 12) else " AM"
        if hour > 12:
            hour -= 12

    minute = str(int(minute_expression))  # !FIXME WUT ???

    second = ''
    if second_expression is not None and second_expression:
        second = "{}{}".format(":", str(int(second_expression)).zfill(2))

    return "{0}:{1}{2}{3}".format(str(hour).zfill(2), minute.zfill(2),
                                  second, period)
Given time parts, will construct a formatted time description

Args:
    hour_expression: Hours part
    minute_expression: Minutes part
    second_expression: Seconds part
Returns:
    Formatted time description
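A minimal harness for the `format_time` method above. `StubOptions` is a hypothetical stand-in for the real `self._options` object, which is not shown in this entry; the method body itself is copied as-is:

```python
class StubOptions:
    # hypothetical stand-in for the options object the method reads from
    use_24hour_time_format = False


class StubDescriptor:
    _options = StubOptions()

    def format_time(self, hour_expression, minute_expression, second_expression=''):
        hour = int(hour_expression)
        period = ''
        if self._options.use_24hour_time_format is False:
            period = " PM" if (hour >= 12) else " AM"
            if hour > 12:
                hour -= 12
        minute = str(int(minute_expression))
        second = ''
        if second_expression is not None and second_expression:
            second = "{}{}".format(":", str(int(second_expression)).zfill(2))
        return "{0}:{1}{2}{3}".format(str(hour).zfill(2), minute.zfill(2),
                                      second, period)


d = StubDescriptor()
morning = d.format_time('9', '5')         # "09:05 AM"
evening = d.format_time('21', '30', '7')  # "09:30:07 PM" -- 21h wraps to 9 PM
```

With `use_24hour_time_format = True` the `period` suffix stays empty and hours are left unwrapped, which is the other branch of the first `if`.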
def getmember(self, name):
    ''' Return an RPMInfo object for member `name'. If `name' can not be
        found in the archive, KeyError is raised. If a member occurs more
        than once in the archive, its last occurrence is assumed to be the
        most up-to-date version.
    '''
    members = self.getmembers()
    for m in members[::-1]:
        if m.name == name:
            return m
    raise KeyError("member %s could not be found" % name)
Return an RPMInfo object for member `name'. If `name' can not be found in the archive, KeyError is raised. If a member occurs more than once in the archive, its last occurrence is assumed to be the most up-to-date version.
def separators(self, reordered = True):
    """
    Returns a list of separator sets
    """
    if reordered:
        return [list(self.snrowidx[self.sncolptr[k]+self.snptr[k+1]-self.snptr[k]:self.sncolptr[k+1]])
                for k in range(self.Nsn)]
    else:
        return [list(self.__p[self.snrowidx[self.sncolptr[k]+self.snptr[k+1]-self.snptr[k]:self.sncolptr[k+1]]])
                for k in range(self.Nsn)]
Returns a list of separator sets
def contains_only(self, elements):
    """
    Ensures :attr:`subject` contains all of *elements*, which must be an
    iterable, and no other items.
    """
    for element in self._subject:
        if element not in elements:
            raise self._error_factory(_format("Expected {} to have only {}, but it contains {}",
                                              self._subject, elements, element))
    self.contains_all_of(elements)
    return ChainInspector(self._subject)
Ensures :attr:`subject` contains all of *elements*, which must be an iterable, and no other items.
def url(self, **kwargs):
    """
    Returns a formatted URL for the asset's File with serialized parameters.

    Usage:

    >>> my_asset.url()
    "//images.contentful.com/spaces/foobar/..."
    >>> my_asset.url(w=120, h=160)
    "//images.contentful.com/spaces/foobar/...?w=120&h=160"
    """
    url = self.fields(self._locale()).get('file', {}).get('url', '')
    args = ['{0}={1}'.format(k, v) for k, v in kwargs.items()]
    if args:
        url += '?{0}'.format('&'.join(args))
    return url
Returns a formatted URL for the asset's File with serialized parameters.

Usage:

>>> my_asset.url()
"//images.contentful.com/spaces/foobar/..."
>>> my_asset.url(w=120, h=160)
"//images.contentful.com/spaces/foobar/...?w=120&h=160"
def getAttrWithFallback(info, attr):
    """
    Get the value for *attr* from the *info* object.
    If the object does not have the attribute or the value
    for the attribute is None, this will either get a
    value from a predefined set of attributes or it
    will synthesize a value from the available data.
    """
    if hasattr(info, attr) and getattr(info, attr) is not None:
        value = getattr(info, attr)
    else:
        if attr in specialFallbacks:
            value = specialFallbacks[attr](info)
        else:
            value = staticFallbackData[attr]
    return value
Get the value for *attr* from the *info* object. If the object does not have the attribute or the value for the attribute is None, this will either get a value from a predefined set of attributes or it will synthesize a value from the available data.
async def get_upstream_dns(cls) -> list:
    """Upstream DNS server addresses.

    Upstream DNS servers used to resolve domains not managed by this
    MAAS (space-separated IP addresses). Only used when MAAS is running
    its own DNS server. This value is used as the value of 'forwarders'
    in the DNS server config.
    """
    data = await cls.get_config("upstream_dns")
    return [] if data is None else re.split(r'[,\s]+', data)
Upstream DNS server addresses. Upstream DNS servers used to resolve domains not managed by this MAAS (space-separated IP addresses). Only used when MAAS is running its own DNS server. This value is used as the value of 'forwarders' in the DNS server config.
def is_valid_ipv6(ip_str):
    """
    Check the validity of an IPv6 address
    """
    try:
        socket.inet_pton(socket.AF_INET6, ip_str)
    except socket.error:
        return False
    return True
Check the validity of an IPv6 address
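`is_valid_ipv6` above can be exercised directly, since it depends only on the standard library (the function is reproduced verbatim; the addresses are illustrative test values):

```python
import socket


def is_valid_ipv6(ip_str):
    """Check the validity of an IPv6 address"""
    try:
        socket.inet_pton(socket.AF_INET6, ip_str)
    except socket.error:
        return False
    return True


loopback = is_valid_ipv6('::1')               # True
mapped = is_valid_ipv6('::ffff:192.0.2.1')    # True -- IPv4-mapped form parses
bogus = is_valid_ipv6('2001::zz')             # False -- invalid hex group
ipv4 = is_valid_ipv6('192.0.2.1')             # False -- plain IPv4 is not AF_INET6
```

Since Python 3, `socket.error` is an alias of `OSError`, which is what `inet_pton` raises on a malformed address, so the `except` clause catches exactly the parse failure.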
def get_similar_donors(self, client_data):
    """Computes a set of :float: similarity scores between a client and
    a set of candidate donors for which comparable variables have been
    measured.

    A custom similarity metric is defined in this function that combines
    the Hamming distance for categorical variables with the Canberra
    distance for continuous variables into a univariate similarity metric
    between the client and a set of candidate donors loaded during init.

    :param client_data: a client data payload including a subset of
        telemetry fields.
    :return: the sorted approximate likelihood ratio (np.array) corresponding
        to the internally computed similarity score and a list of indices that
        link each LR score with the related donor in the |self.donors_pool|.
    """
    # Compute the distance between self and any comparable client.
    distances = self.compute_clients_dist(client_data)

    # Compute the LR based on precomputed distributions that relate the score
    # to a probability of providing good addon recommendations.
    lrs_from_scores = np.array(
        [self.get_lr(distances[i]) for i in range(self.num_donors)]
    )

    # Sort the LR values (descending) and return the sorted values together with
    # the original indices.
    indices = (-lrs_from_scores).argsort()
    return lrs_from_scores[indices], indices
Computes a set of :float: similarity scores between a client and a set of
candidate donors for which comparable variables have been measured.

A custom similarity metric is defined in this function that combines the
Hamming distance for categorical variables with the Canberra distance for
continuous variables into a univariate similarity metric between the client
and a set of candidate donors loaded during init.

:param client_data: a client data payload including a subset of telemetry fields.
:return: the sorted approximate likelihood ratio (np.array) corresponding to the
    internally computed similarity score and a list of indices that link each LR
    score with the related donor in the |self.donors_pool|.
def select(self, predicate=None, headers=None):
    """
    Select rows from the reader using a predicate to select rows and an
    itemgetter to return a subset of elements

    :param predicate: If defined, a callable that is called for each row, and
        if it returns true, the row is included in the output.
    :param headers: If defined, a list or tuple of header names to return from
        each row
    :return: iterable of results

    WARNING: This routine works from the reader iterator, which returns
    RowProxy objects. RowProxy objects are reused, so if you construct a list
    directly from the output from this method, the list will have multiple
    copies of a single RowProxy, which will have as an inner row the last
    result row. If you will be directly constructing a list, use a getter that
    extracts the inner row, or which converts the RowProxy to a dict:

        list(s.datafile.select(lambda r: r.stusab == 'CA', lambda r: r.dict))
    """
    # FIXME; in Python 3, use yield from
    with self.reader as r:
        for row in r.select(predicate, headers):
            yield row
Select rows from the reader using a predicate to select rows and an
itemgetter to return a subset of elements

:param predicate: If defined, a callable that is called for each row, and if it
    returns true, the row is included in the output.
:param headers: If defined, a list or tuple of header names to return from each row
:return: iterable of results

WARNING: This routine works from the reader iterator, which returns RowProxy
objects. RowProxy objects are reused, so if you construct a list directly from
the output from this method, the list will have multiple copies of a single
RowProxy, which will have as an inner row the last result row. If you will be
directly constructing a list, use a getter that extracts the inner row, or
which converts the RowProxy to a dict:

    list(s.datafile.select(lambda r: r.stusab == 'CA', lambda r: r.dict))
def eint(x, context=None):
    """
    Return the exponential integral of x.
    """
    return _apply_function_in_current_context(
        BigFloat,
        mpfr.mpfr_eint,
        (BigFloat._implicit_convert(x),),
        context,
    )
Return the exponential integral of x.
def _get_docstring_indent(definition, docstring):
    """Return the indentation of the docstring's opening quotes."""
    before_docstring, _, _ = definition.source.partition(docstring)
    _, _, indent = before_docstring.rpartition('\n')
    return indent
Return the indentation of the docstring's opening quotes.
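`_get_docstring_indent` above only needs an object with a `.source` attribute, so a tiny stub suffices to demonstrate it (the stub is hypothetical; in the real checker `definition` comes from the surrounding parser):

```python
from types import SimpleNamespace


def _get_docstring_indent(definition, docstring):
    """Return the indentation of the docstring's opening quotes."""
    before_docstring, _, _ = definition.source.partition(docstring)
    _, _, indent = before_docstring.rpartition('\n')
    return indent


doc = '"""Say hello."""'
definition = SimpleNamespace(source='def greet():\n    ' + doc + '\n')
indent = _get_docstring_indent(definition, doc)  # '    ' (four spaces)
```

The trick is `partition`/`rpartition`: everything in the source before the docstring is kept, then the text after the last newline in that prefix is exactly the whitespace preceding the opening quotes.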
def add(self, name: str, value: str) -> None: """Adds a new value for the given key.""" norm_name = _normalized_headers[name] self._last_key = norm_name if norm_name in self: self._dict[norm_name] = ( native_str(self[norm_name]) + "," + native_str(value) ) self._as_list[norm_name].append(value) else: self[norm_name] = value
Adds a new value for the given key.
Below is the instruction that describes the task: ### Input: Adds a new value for the given key. ### Response: def add(self, name: str, value: str) -> None: """Adds a new value for the given key.""" norm_name = _normalized_headers[name] self._last_key = norm_name if norm_name in self: self._dict[norm_name] = ( native_str(self[norm_name]) + "," + native_str(value) ) self._as_list[norm_name].append(value) else: self[norm_name] = value
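The merge rule in the `add` record above (repeated headers are comma-joined in the flat view but kept separate in the list view) can be sketched with a dict-backed stand-in. The real class wires `_dict` and `_as_list` into a MutableMapping and uses a `_normalized_headers` cache; `str.title()` below is only an assumed stand-in for that normalisation:

```python
class Headers:
    """Minimal sketch of comma-joining header semantics."""

    def __init__(self):
        self._dict = {}      # flat view: repeated values comma-joined
        self._as_list = {}   # list view: every value kept separately

    def add(self, name, value):
        norm_name = name.title()  # assumed stand-in for header normalisation
        if norm_name in self._dict:
            self._dict[norm_name] = self._dict[norm_name] + "," + value
            self._as_list[norm_name].append(value)
        else:
            self._dict[norm_name] = value
            self._as_list[norm_name] = [value]

h = Headers()
h.add("set-cookie", "a=1")
h.add("Set-Cookie", "b=2")
print(h._dict["Set-Cookie"])     # "a=1,b=2"
print(h._as_list["Set-Cookie"])  # ["a=1", "b=2"]
```

Both spellings normalise to the same key, so the second `add` appends rather than overwrites.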
def get_bearing(origin_point, destination_point): """ Calculate the bearing between two lat-long points. Each tuple should represent (lat, lng) as decimal degrees. Parameters ---------- origin_point : tuple destination_point : tuple Returns ------- bearing : float the compass bearing in decimal degrees from the origin point to the destination point """ if not (isinstance(origin_point, tuple) and isinstance(destination_point, tuple)): raise TypeError('origin_point and destination_point must be (lat, lng) tuples') # get latitudes and the difference in longitude, as radians lat1 = math.radians(origin_point[0]) lat2 = math.radians(destination_point[0]) diff_lng = math.radians(destination_point[1] - origin_point[1]) # calculate initial bearing from -180 degrees to +180 degrees x = math.sin(diff_lng) * math.cos(lat2) y = math.cos(lat1) * math.sin(lat2) - (math.sin(lat1) * math.cos(lat2) * math.cos(diff_lng)) initial_bearing = math.atan2(x, y) # normalize initial bearing to 0 degrees to 360 degrees to get compass bearing initial_bearing = math.degrees(initial_bearing) bearing = (initial_bearing + 360) % 360 return bearing
Calculate the bearing between two lat-long points. Each tuple should represent (lat, lng) as decimal degrees. Parameters ---------- origin_point : tuple destination_point : tuple Returns ------- bearing : float the compass bearing in decimal degrees from the origin point to the destination point
Below is the instruction that describes the task: ### Input: Calculate the bearing between two lat-long points. Each tuple should represent (lat, lng) as decimal degrees. Parameters ---------- origin_point : tuple destination_point : tuple Returns ------- bearing : float the compass bearing in decimal degrees from the origin point to the destination point ### Response: def get_bearing(origin_point, destination_point): """ Calculate the bearing between two lat-long points. Each tuple should represent (lat, lng) as decimal degrees. Parameters ---------- origin_point : tuple destination_point : tuple Returns ------- bearing : float the compass bearing in decimal degrees from the origin point to the destination point """ if not (isinstance(origin_point, tuple) and isinstance(destination_point, tuple)): raise TypeError('origin_point and destination_point must be (lat, lng) tuples') # get latitudes and the difference in longitude, as radians lat1 = math.radians(origin_point[0]) lat2 = math.radians(destination_point[0]) diff_lng = math.radians(destination_point[1] - origin_point[1]) # calculate initial bearing from -180 degrees to +180 degrees x = math.sin(diff_lng) * math.cos(lat2) y = math.cos(lat1) * math.sin(lat2) - (math.sin(lat1) * math.cos(lat2) * math.cos(diff_lng)) initial_bearing = math.atan2(x, y) # normalize initial bearing to 0 degrees to 360 degrees to get compass bearing initial_bearing = math.degrees(initial_bearing) bearing = (initial_bearing + 360) % 360 return bearing
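The bearing formula in the record above is easy to verify against two known headings: due east from the equator should come out as 90° and due north as 0°. A self-contained copy of the function, ready to run:

```python
import math

def get_bearing(origin_point, destination_point):
    """Compass bearing in decimal degrees from origin to destination."""
    if not (isinstance(origin_point, tuple) and isinstance(destination_point, tuple)):
        raise TypeError('origin_point and destination_point must be (lat, lng) tuples')
    # latitudes and the longitude difference, in radians
    lat1 = math.radians(origin_point[0])
    lat2 = math.radians(destination_point[0])
    diff_lng = math.radians(destination_point[1] - origin_point[1])
    # initial bearing in (-180, +180], then normalised to [0, 360)
    x = math.sin(diff_lng) * math.cos(lat2)
    y = math.cos(lat1) * math.sin(lat2) - (math.sin(lat1) * math.cos(lat2) * math.cos(diff_lng))
    return (math.degrees(math.atan2(x, y)) + 360) % 360

print(get_bearing((0.0, 0.0), (0.0, 1.0)))  # due east -> 90.0
print(get_bearing((0.0, 0.0), (1.0, 0.0)))  # due north -> 0.0
```

Note the argument order to `atan2(x, y)`: with `x` as the east component and `y` as the north component, the angle is measured clockwise from north, which is exactly the compass convention.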
def read_envfile(fpath): """Reads environment variables from .env key-value file. Rules: * Lines starting with # (hash) considered comments. Inline comments not supported; * Multiline values not supported. * Invalid lines are ignored; * Matching opening-closing quotes are stripped; * ${VAL} will be replaced with VAL value previously defined in .env file(s) or currently available in env. Returns a dictionary. Empty dictionary is returned if file is not accessible. :param str|unicode fpath: :rtype: OrderedDict """ environ = os.environ env_vars = OrderedDict() try: with io.open(fpath) as f: lines = f.readlines() except IOError: return env_vars def drop_quotes(quote_char, val): if val.startswith(quote_char) and val.endswith(quote_char): val = val.strip(quote_char).strip() return val for line in lines: line = line.strip() if not line or line.startswith('#'): continue key, _, val = line.partition('=') key = key.strip() val = drop_quotes("'", drop_quotes('"', val.strip())) if not key: continue for match in RE_TPL_VAR.findall(val): var_tpl, var_name = match var_val = environ.get(var_name) if var_val is None: var_val = env_vars.get(var_name) if var_val is not None: val = val.replace(var_tpl, var_val, 1) env_vars[key] = val return env_vars
Reads environment variables from .env key-value file. Rules: * Lines starting with # (hash) considered comments. Inline comments not supported; * Multiline values not supported. * Invalid lines are ignored; * Matching opening-closing quotes are stripped; * ${VAL} will be replaced with VAL value previously defined in .env file(s) or currently available in env. Returns a dictionary. Empty dictionary is returned if file is not accessible. :param str|unicode fpath: :rtype: OrderedDict
Below is the instruction that describes the task: ### Input: Reads environment variables from .env key-value file. Rules: * Lines starting with # (hash) considered comments. Inline comments not supported; * Multiline values not supported. * Invalid lines are ignored; * Matching opening-closing quotes are stripped; * ${VAL} will be replaced with VAL value previously defined in .env file(s) or currently available in env. Returns a dictionary. Empty dictionary is returned if file is not accessible. :param str|unicode fpath: :rtype: OrderedDict ### Response: def read_envfile(fpath): """Reads environment variables from .env key-value file. Rules: * Lines starting with # (hash) considered comments. Inline comments not supported; * Multiline values not supported. * Invalid lines are ignored; * Matching opening-closing quotes are stripped; * ${VAL} will be replaced with VAL value previously defined in .env file(s) or currently available in env. Returns a dictionary. Empty dictionary is returned if file is not accessible. :param str|unicode fpath: :rtype: OrderedDict """ environ = os.environ env_vars = OrderedDict() try: with io.open(fpath) as f: lines = f.readlines() except IOError: return env_vars def drop_quotes(quote_char, val): if val.startswith(quote_char) and val.endswith(quote_char): val = val.strip(quote_char).strip() return val for line in lines: line = line.strip() if not line or line.startswith('#'): continue key, _, val = line.partition('=') key = key.strip() val = drop_quotes("'", drop_quotes('"', val.strip())) if not key: continue for match in RE_TPL_VAR.findall(val): var_tpl, var_name = match var_val = environ.get(var_name) if var_val is None: var_val = env_vars.get(var_name) if var_val is not None: val = val.replace(var_tpl, var_val, 1) env_vars[key] = val return env_vars
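The variable-expansion rule in `read_envfile` (`${VAL}` resolved from earlier entries in the file or from the process environment) can be exercised with a two-line file. `RE_TPL_VAR` is not defined in the record; the regex below is an assumed equivalent whose `findall` yields `(whole_template, bare_name)` pairs, as the loop expects:

```python
import io
import os
import re
import tempfile
from collections import OrderedDict

# Assumed equivalent of the module-level RE_TPL_VAR used by read_envfile:
# group 1 matches the whole "${NAME}" template, group 2 the bare name.
RE_TPL_VAR = re.compile(r'(\$\{(\w+)\})')

def read_envfile(fpath):
    environ = os.environ
    env_vars = OrderedDict()
    try:
        with io.open(fpath) as f:
            lines = f.readlines()
    except IOError:
        return env_vars

    def drop_quotes(quote_char, val):
        if val.startswith(quote_char) and val.endswith(quote_char):
            val = val.strip(quote_char).strip()
        return val

    for line in lines:
        line = line.strip()
        if not line or line.startswith('#'):
            continue
        key, _, val = line.partition('=')
        key = key.strip()
        val = drop_quotes("'", drop_quotes('"', val.strip()))
        if not key:
            continue
        for var_tpl, var_name in RE_TPL_VAR.findall(val):
            var_val = environ.get(var_name)
            if var_val is None:
                var_val = env_vars.get(var_name)
            if var_val is not None:
                val = val.replace(var_tpl, var_val, 1)
        env_vars[key] = val
    return env_vars

with tempfile.NamedTemporaryFile('w', suffix='.env', delete=False) as f:
    f.write('# comment line\n')
    f.write('DEMO_HOST=example.org\n')
    f.write('DEMO_URL="https://${DEMO_HOST}/api"\n')
    path = f.name

result = read_envfile(path)
print(result)
os.remove(path)
```

The quoted value has its quotes stripped first, and `${DEMO_HOST}` then resolves against the entry parsed one line earlier.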
def _find_entity_in_records_by_class_name(self, entity_name): """Fetch by Entity Name in values""" records = { key: value for (key, value) in self._registry.items() if value.name == entity_name } # If more than one record was found, we are dealing with the case of # an Entity name present in multiple places (packages or plugins). Throw an error # and ask for a fully qualified Entity name to be specified if len(records) > 1: raise ConfigurationError( f'Entity with name {entity_name} has been registered twice. ' f'Please use fully qualified Entity name to specify the exact Entity.') elif len(records) == 1: return next(iter(records.values())) else: raise AssertionError(f'No Entity registered with name {entity_name}')
Fetch by Entity Name in values
Below is the instruction that describes the task: ### Input: Fetch by Entity Name in values ### Response: def _find_entity_in_records_by_class_name(self, entity_name): """Fetch by Entity Name in values""" records = { key: value for (key, value) in self._registry.items() if value.name == entity_name } # If more than one record was found, we are dealing with the case of # an Entity name present in multiple places (packages or plugins). Throw an error # and ask for a fully qualified Entity name to be specified if len(records) > 1: raise ConfigurationError( f'Entity with name {entity_name} has been registered twice. ' f'Please use fully qualified Entity name to specify the exact Entity.') elif len(records) == 1: return next(iter(records.values())) else: raise AssertionError(f'No Entity registered with name {entity_name}')
def start(self, blocking=False): """ Start the interface :param blocking: Should the call block until stop() is called (default: False) :type blocking: bool :rtype: None :raises SensorStartException: Failed to start """ self.debug("()") super(SensorClient, self).start(blocking=False) try: a_thread = threading.Thread( target=self._thread_wrapper, args=(self._packet_loop,) ) a_thread.daemon = True a_thread.start() except Exception: self.exception("Failed to run packet loop") raise SensorStartException("Packet loop failed") self.info("Started") # Blocking - call StartStopable.start super(Sensor, self).start(blocking)
Start the interface :param blocking: Should the call block until stop() is called (default: False) :type blocking: bool :rtype: None :raises SensorStartException: Failed to start
Below is the instruction that describes the task: ### Input: Start the interface :param blocking: Should the call block until stop() is called (default: False) :type blocking: bool :rtype: None :raises SensorStartException: Failed to start ### Response: def start(self, blocking=False): """ Start the interface :param blocking: Should the call block until stop() is called (default: False) :type blocking: bool :rtype: None :raises SensorStartException: Failed to start """ self.debug("()") super(SensorClient, self).start(blocking=False) try: a_thread = threading.Thread( target=self._thread_wrapper, args=(self._packet_loop,) ) a_thread.daemon = True a_thread.start() except Exception: self.exception("Failed to run packet loop") raise SensorStartException("Packet loop failed") self.info("Started") # Blocking - call StartStopable.start super(Sensor, self).start(blocking)
def deriv(self, p): """ Derivative of C-Log-Log transform link function Parameters ---------- p : array-like Mean parameters Returns ------- g'(p) : array The derivative of the CLogLog transform link function Notes ----- g'(p) = 1 / ((p-1)*log(1-p)) """ p = self._clean(p) return 1. / ((p - 1) * (np.log(1 - p)))
Derivative of C-Log-Log transform link function Parameters ---------- p : array-like Mean parameters Returns ------- g'(p) : array The derivative of the CLogLog transform link function Notes ----- g'(p) = 1 / ((p-1)*log(1-p))
Below is the instruction that describes the task: ### Input: Derivative of C-Log-Log transform link function Parameters ---------- p : array-like Mean parameters Returns ------- g'(p) : array The derivative of the CLogLog transform link function Notes ----- g'(p) = 1 / ((p-1)*log(1-p)) ### Response: def deriv(self, p): """ Derivative of C-Log-Log transform link function Parameters ---------- p : array-like Mean parameters Returns ------- g'(p) : array The derivative of the CLogLog transform link function Notes ----- g'(p) = 1 / ((p-1)*log(1-p)) """ p = self._clean(p) return 1. / ((p - 1) * (np.log(1 - p)))
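Since the C-Log-Log link is g(p) = log(-log(1 - p)), the analytic derivative returned by the record above can be checked against a central finite difference. The `_clean` clipping step is omitted here for brevity, so this sketch assumes p strictly inside (0, 1):

```python
import math

def cloglog(p):
    # The C-Log-Log link itself: g(p) = log(-log(1 - p))
    return math.log(-math.log(1.0 - p))

def cloglog_deriv(p):
    # Analytic derivative as returned by the record's code path
    return 1.0 / ((p - 1.0) * math.log(1.0 - p))

p, h = 0.5, 1e-6
numeric = (cloglog(p + h) - cloglog(p - h)) / (2.0 * h)
print(cloglog_deriv(p), numeric)  # the two values should agree closely
```

Both factors in the denominator are negative on (0, 1), so the derivative is positive, which matches a link function that is increasing in p.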
def available_configuration_files(self): """A list of strings with the absolute pathnames of the available configuration files.""" known_files = [GLOBAL_CONFIG, LOCAL_CONFIG, self.environment.get('PIP_ACCEL_CONFIG')] absolute_paths = [parse_path(pathname) for pathname in known_files if pathname] return [pathname for pathname in absolute_paths if os.path.isfile(pathname)]
A list of strings with the absolute pathnames of the available configuration files.
Below is the instruction that describes the task: ### Input: A list of strings with the absolute pathnames of the available configuration files. ### Response: def available_configuration_files(self): """A list of strings with the absolute pathnames of the available configuration files.""" known_files = [GLOBAL_CONFIG, LOCAL_CONFIG, self.environment.get('PIP_ACCEL_CONFIG')] absolute_paths = [parse_path(pathname) for pathname in known_files if pathname] return [pathname for pathname in absolute_paths if os.path.isfile(pathname)]
def write_log_entries( self, entries, log_name=None, resource=None, labels=None, partial_success=None, dry_run=None, retry=google.api_core.gapic_v1.method.DEFAULT, timeout=google.api_core.gapic_v1.method.DEFAULT, metadata=None, ): """ Writes log entries to Logging. This API method is the only way to send log entries to Logging. This method is used, directly or indirectly, by the Logging agent (fluentd) and all logging libraries configured to use Logging. A single request may contain log entries for a maximum of 1000 different resources (projects, organizations, billing accounts or folders) Example: >>> from google.cloud import logging_v2 >>> >>> client = logging_v2.LoggingServiceV2Client() >>> >>> # TODO: Initialize `entries`: >>> entries = [] >>> >>> response = client.write_log_entries(entries) Args: entries (list[Union[dict, ~google.cloud.logging_v2.types.LogEntry]]): Required. The log entries to send to Logging. The order of log entries in this list does not matter. Values supplied in this method's ``log_name``, ``resource``, and ``labels`` fields are copied into those log entries in this list that do not include values for their corresponding fields. For more information, see the ``LogEntry`` type. If the ``timestamp`` or ``insert_id`` fields are missing in log entries, then this method supplies the current time or a unique identifier, respectively. The supplied values are chosen so that, among the log entries that did not supply their own values, the entries earlier in the list will sort before the entries later in the list. See the ``entries.list`` method. Log entries with timestamps that are more than the `logs retention period <https://cloud.google.com/logging/quota-policy>`__ in the past or more than 24 hours in the future will not be available when calling ``entries.list``. However, those log entries can still be exported with `LogSinks <https://cloud.google.com/logging/docs/api/tasks/exporting-logs>`__. 
To improve throughput and to avoid exceeding the `quota limit <https://cloud.google.com/logging/quota-policy>`__ for calls to ``entries.write``, you should try to include several log entries in this list, rather than calling this method for each individual log entry. If a dict is provided, it must be of the same form as the protobuf message :class:`~google.cloud.logging_v2.types.LogEntry` log_name (str): Optional. A default log resource name that is assigned to all log entries in ``entries`` that do not specify a value for ``log_name``: :: "projects/[PROJECT_ID]/logs/[LOG_ID]" "organizations/[ORGANIZATION_ID]/logs/[LOG_ID]" "billingAccounts/[BILLING_ACCOUNT_ID]/logs/[LOG_ID]" "folders/[FOLDER_ID]/logs/[LOG_ID]" ``[LOG_ID]`` must be URL-encoded. For example: :: "projects/my-project-id/logs/syslog" "organizations/1234567890/logs/cloudresourcemanager.googleapis.com%2Factivity" The permission logging.logEntries.create is needed on each project, organization, billing account, or folder that is receiving new log entries, whether the resource is specified in logName or in an individual log entry. resource (Union[dict, ~google.cloud.logging_v2.types.MonitoredResource]): Optional. A default monitored resource object that is assigned to all log entries in ``entries`` that do not specify a value for ``resource``. Example: :: { "type": "gce_instance", "labels": { "zone": "us-central1-a", "instance_id": "00000000000000000000" }} See ``LogEntry``. If a dict is provided, it must be of the same form as the protobuf message :class:`~google.cloud.logging_v2.types.MonitoredResource` labels (dict[str -> str]): Optional. Default labels that are added to the ``labels`` field of all log entries in ``entries``. If a log entry already has a label with the same key as a label in this parameter, then the log entry's label is not changed. See ``LogEntry``. partial_success (bool): Optional. 
Whether valid entries should be written even if some other entries fail due to INVALID\_ARGUMENT or PERMISSION\_DENIED errors. If any entry is not written, then the response status is the error associated with one of the failed entries and the response includes error details keyed by the entries' zero-based index in the ``entries.write`` method. dry_run (bool): Optional. If true, the request should expect normal response, but the entries won't be persisted nor exported. Useful for checking whether the logging API endpoints are working properly before sending valuable data. retry (Optional[google.api_core.retry.Retry]): A retry object used to retry requests. If ``None`` is specified, requests will not be retried. timeout (Optional[float]): The amount of time, in seconds, to wait for the request to complete. Note that if ``retry`` is specified, the timeout applies to each individual attempt. metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata that is provided to the method. Returns: A :class:`~google.cloud.logging_v2.types.WriteLogEntriesResponse` instance. Raises: google.api_core.exceptions.GoogleAPICallError: If the request failed for any reason. google.api_core.exceptions.RetryError: If the request failed due to a retryable error and retry attempts failed. ValueError: If the parameters are invalid. """ # Wrap the transport method to add retry and timeout logic. 
if "write_log_entries" not in self._inner_api_calls: self._inner_api_calls[ "write_log_entries" ] = google.api_core.gapic_v1.method.wrap_method( self.transport.write_log_entries, default_retry=self._method_configs["WriteLogEntries"].retry, default_timeout=self._method_configs["WriteLogEntries"].timeout, client_info=self._client_info, ) request = logging_pb2.WriteLogEntriesRequest( entries=entries, log_name=log_name, resource=resource, labels=labels, partial_success=partial_success, dry_run=dry_run, ) return self._inner_api_calls["write_log_entries"]( request, retry=retry, timeout=timeout, metadata=metadata )
Writes log entries to Logging. This API method is the only way to send log entries to Logging. This method is used, directly or indirectly, by the Logging agent (fluentd) and all logging libraries configured to use Logging. A single request may contain log entries for a maximum of 1000 different resources (projects, organizations, billing accounts or folders) Example: >>> from google.cloud import logging_v2 >>> >>> client = logging_v2.LoggingServiceV2Client() >>> >>> # TODO: Initialize `entries`: >>> entries = [] >>> >>> response = client.write_log_entries(entries) Args: entries (list[Union[dict, ~google.cloud.logging_v2.types.LogEntry]]): Required. The log entries to send to Logging. The order of log entries in this list does not matter. Values supplied in this method's ``log_name``, ``resource``, and ``labels`` fields are copied into those log entries in this list that do not include values for their corresponding fields. For more information, see the ``LogEntry`` type. If the ``timestamp`` or ``insert_id`` fields are missing in log entries, then this method supplies the current time or a unique identifier, respectively. The supplied values are chosen so that, among the log entries that did not supply their own values, the entries earlier in the list will sort before the entries later in the list. See the ``entries.list`` method. Log entries with timestamps that are more than the `logs retention period <https://cloud.google.com/logging/quota-policy>`__ in the past or more than 24 hours in the future will not be available when calling ``entries.list``. However, those log entries can still be exported with `LogSinks <https://cloud.google.com/logging/docs/api/tasks/exporting-logs>`__. To improve throughput and to avoid exceeding the `quota limit <https://cloud.google.com/logging/quota-policy>`__ for calls to ``entries.write``, you should try to include several log entries in this list, rather than calling this method for each individual log entry. 
If a dict is provided, it must be of the same form as the protobuf message :class:`~google.cloud.logging_v2.types.LogEntry` log_name (str): Optional. A default log resource name that is assigned to all log entries in ``entries`` that do not specify a value for ``log_name``: :: "projects/[PROJECT_ID]/logs/[LOG_ID]" "organizations/[ORGANIZATION_ID]/logs/[LOG_ID]" "billingAccounts/[BILLING_ACCOUNT_ID]/logs/[LOG_ID]" "folders/[FOLDER_ID]/logs/[LOG_ID]" ``[LOG_ID]`` must be URL-encoded. For example: :: "projects/my-project-id/logs/syslog" "organizations/1234567890/logs/cloudresourcemanager.googleapis.com%2Factivity" The permission logging.logEntries.create is needed on each project, organization, billing account, or folder that is receiving new log entries, whether the resource is specified in logName or in an individual log entry. resource (Union[dict, ~google.cloud.logging_v2.types.MonitoredResource]): Optional. A default monitored resource object that is assigned to all log entries in ``entries`` that do not specify a value for ``resource``. Example: :: { "type": "gce_instance", "labels": { "zone": "us-central1-a", "instance_id": "00000000000000000000" }} See ``LogEntry``. If a dict is provided, it must be of the same form as the protobuf message :class:`~google.cloud.logging_v2.types.MonitoredResource` labels (dict[str -> str]): Optional. Default labels that are added to the ``labels`` field of all log entries in ``entries``. If a log entry already has a label with the same key as a label in this parameter, then the log entry's label is not changed. See ``LogEntry``. partial_success (bool): Optional. Whether valid entries should be written even if some other entries fail due to INVALID\_ARGUMENT or PERMISSION\_DENIED errors. If any entry is not written, then the response status is the error associated with one of the failed entries and the response includes error details keyed by the entries' zero-based index in the ``entries.write`` method. 
dry_run (bool): Optional. If true, the request should expect normal response, but the entries won't be persisted nor exported. Useful for checking whether the logging API endpoints are working properly before sending valuable data. retry (Optional[google.api_core.retry.Retry]): A retry object used to retry requests. If ``None`` is specified, requests will not be retried. timeout (Optional[float]): The amount of time, in seconds, to wait for the request to complete. Note that if ``retry`` is specified, the timeout applies to each individual attempt. metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata that is provided to the method. Returns: A :class:`~google.cloud.logging_v2.types.WriteLogEntriesResponse` instance. Raises: google.api_core.exceptions.GoogleAPICallError: If the request failed for any reason. google.api_core.exceptions.RetryError: If the request failed due to a retryable error and retry attempts failed. ValueError: If the parameters are invalid.
Below is the instruction that describes the task: ### Input: Writes log entries to Logging. This API method is the only way to send log entries to Logging. This method is used, directly or indirectly, by the Logging agent (fluentd) and all logging libraries configured to use Logging. A single request may contain log entries for a maximum of 1000 different resources (projects, organizations, billing accounts or folders) Example: >>> from google.cloud import logging_v2 >>> >>> client = logging_v2.LoggingServiceV2Client() >>> >>> # TODO: Initialize `entries`: >>> entries = [] >>> >>> response = client.write_log_entries(entries) Args: entries (list[Union[dict, ~google.cloud.logging_v2.types.LogEntry]]): Required. The log entries to send to Logging. The order of log entries in this list does not matter. Values supplied in this method's ``log_name``, ``resource``, and ``labels`` fields are copied into those log entries in this list that do not include values for their corresponding fields. For more information, see the ``LogEntry`` type. If the ``timestamp`` or ``insert_id`` fields are missing in log entries, then this method supplies the current time or a unique identifier, respectively. The supplied values are chosen so that, among the log entries that did not supply their own values, the entries earlier in the list will sort before the entries later in the list. See the ``entries.list`` method. Log entries with timestamps that are more than the `logs retention period <https://cloud.google.com/logging/quota-policy>`__ in the past or more than 24 hours in the future will not be available when calling ``entries.list``. However, those log entries can still be exported with `LogSinks <https://cloud.google.com/logging/docs/api/tasks/exporting-logs>`__. 
To improve throughput and to avoid exceeding the `quota limit <https://cloud.google.com/logging/quota-policy>`__ for calls to ``entries.write``, you should try to include several log entries in this list, rather than calling this method for each individual log entry. If a dict is provided, it must be of the same form as the protobuf message :class:`~google.cloud.logging_v2.types.LogEntry` log_name (str): Optional. A default log resource name that is assigned to all log entries in ``entries`` that do not specify a value for ``log_name``: :: "projects/[PROJECT_ID]/logs/[LOG_ID]" "organizations/[ORGANIZATION_ID]/logs/[LOG_ID]" "billingAccounts/[BILLING_ACCOUNT_ID]/logs/[LOG_ID]" "folders/[FOLDER_ID]/logs/[LOG_ID]" ``[LOG_ID]`` must be URL-encoded. For example: :: "projects/my-project-id/logs/syslog" "organizations/1234567890/logs/cloudresourcemanager.googleapis.com%2Factivity" The permission logging.logEntries.create is needed on each project, organization, billing account, or folder that is receiving new log entries, whether the resource is specified in logName or in an individual log entry. resource (Union[dict, ~google.cloud.logging_v2.types.MonitoredResource]): Optional. A default monitored resource object that is assigned to all log entries in ``entries`` that do not specify a value for ``resource``. Example: :: { "type": "gce_instance", "labels": { "zone": "us-central1-a", "instance_id": "00000000000000000000" }} See ``LogEntry``. If a dict is provided, it must be of the same form as the protobuf message :class:`~google.cloud.logging_v2.types.MonitoredResource` labels (dict[str -> str]): Optional. Default labels that are added to the ``labels`` field of all log entries in ``entries``. If a log entry already has a label with the same key as a label in this parameter, then the log entry's label is not changed. See ``LogEntry``. partial_success (bool): Optional. 
Whether valid entries should be written even if some other entries fail due to INVALID\_ARGUMENT or PERMISSION\_DENIED errors. If any entry is not written, then the response status is the error associated with one of the failed entries and the response includes error details keyed by the entries' zero-based index in the ``entries.write`` method. dry_run (bool): Optional. If true, the request should expect normal response, but the entries won't be persisted nor exported. Useful for checking whether the logging API endpoints are working properly before sending valuable data. retry (Optional[google.api_core.retry.Retry]): A retry object used to retry requests. If ``None`` is specified, requests will not be retried. timeout (Optional[float]): The amount of time, in seconds, to wait for the request to complete. Note that if ``retry`` is specified, the timeout applies to each individual attempt. metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata that is provided to the method. Returns: A :class:`~google.cloud.logging_v2.types.WriteLogEntriesResponse` instance. Raises: google.api_core.exceptions.GoogleAPICallError: If the request failed for any reason. google.api_core.exceptions.RetryError: If the request failed due to a retryable error and retry attempts failed. ValueError: If the parameters are invalid. ### Response: def write_log_entries( self, entries, log_name=None, resource=None, labels=None, partial_success=None, dry_run=None, retry=google.api_core.gapic_v1.method.DEFAULT, timeout=google.api_core.gapic_v1.method.DEFAULT, metadata=None, ): """ Writes log entries to Logging. This API method is the only way to send log entries to Logging. This method is used, directly or indirectly, by the Logging agent (fluentd) and all logging libraries configured to use Logging. 
A single request may contain log entries for a maximum of 1000 different resources (projects, organizations, billing accounts or folders) Example: >>> from google.cloud import logging_v2 >>> >>> client = logging_v2.LoggingServiceV2Client() >>> >>> # TODO: Initialize `entries`: >>> entries = [] >>> >>> response = client.write_log_entries(entries) Args: entries (list[Union[dict, ~google.cloud.logging_v2.types.LogEntry]]): Required. The log entries to send to Logging. The order of log entries in this list does not matter. Values supplied in this method's ``log_name``, ``resource``, and ``labels`` fields are copied into those log entries in this list that do not include values for their corresponding fields. For more information, see the ``LogEntry`` type. If the ``timestamp`` or ``insert_id`` fields are missing in log entries, then this method supplies the current time or a unique identifier, respectively. The supplied values are chosen so that, among the log entries that did not supply their own values, the entries earlier in the list will sort before the entries later in the list. See the ``entries.list`` method. Log entries with timestamps that are more than the `logs retention period <https://cloud.google.com/logging/quota-policy>`__ in the past or more than 24 hours in the future will not be available when calling ``entries.list``. However, those log entries can still be exported with `LogSinks <https://cloud.google.com/logging/docs/api/tasks/exporting-logs>`__. To improve throughput and to avoid exceeding the `quota limit <https://cloud.google.com/logging/quota-policy>`__ for calls to ``entries.write``, you should try to include several log entries in this list, rather than calling this method for each individual log entry. If a dict is provided, it must be of the same form as the protobuf message :class:`~google.cloud.logging_v2.types.LogEntry` log_name (str): Optional. 
A default log resource name that is assigned to all log entries in ``entries`` that do not specify a value for ``log_name``: :: "projects/[PROJECT_ID]/logs/[LOG_ID]" "organizations/[ORGANIZATION_ID]/logs/[LOG_ID]" "billingAccounts/[BILLING_ACCOUNT_ID]/logs/[LOG_ID]" "folders/[FOLDER_ID]/logs/[LOG_ID]" ``[LOG_ID]`` must be URL-encoded. For example: :: "projects/my-project-id/logs/syslog" "organizations/1234567890/logs/cloudresourcemanager.googleapis.com%2Factivity" The permission logging.logEntries.create is needed on each project, organization, billing account, or folder that is receiving new log entries, whether the resource is specified in logName or in an individual log entry. resource (Union[dict, ~google.cloud.logging_v2.types.MonitoredResource]): Optional. A default monitored resource object that is assigned to all log entries in ``entries`` that do not specify a value for ``resource``. Example: :: { "type": "gce_instance", "labels": { "zone": "us-central1-a", "instance_id": "00000000000000000000" }} See ``LogEntry``. If a dict is provided, it must be of the same form as the protobuf message :class:`~google.cloud.logging_v2.types.MonitoredResource` labels (dict[str -> str]): Optional. Default labels that are added to the ``labels`` field of all log entries in ``entries``. If a log entry already has a label with the same key as a label in this parameter, then the log entry's label is not changed. See ``LogEntry``. partial_success (bool): Optional. Whether valid entries should be written even if some other entries fail due to INVALID\_ARGUMENT or PERMISSION\_DENIED errors. If any entry is not written, then the response status is the error associated with one of the failed entries and the response includes error details keyed by the entries' zero-based index in the ``entries.write`` method. dry_run (bool): Optional. If true, the request should expect normal response, but the entries won't be persisted nor exported. 
Useful for checking whether the logging API endpoints are working properly before sending valuable data. retry (Optional[google.api_core.retry.Retry]): A retry object used to retry requests. If ``None`` is specified, requests will not be retried. timeout (Optional[float]): The amount of time, in seconds, to wait for the request to complete. Note that if ``retry`` is specified, the timeout applies to each individual attempt. metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata that is provided to the method. Returns: A :class:`~google.cloud.logging_v2.types.WriteLogEntriesResponse` instance. Raises: google.api_core.exceptions.GoogleAPICallError: If the request failed for any reason. google.api_core.exceptions.RetryError: If the request failed due to a retryable error and retry attempts failed. ValueError: If the parameters are invalid. """ # Wrap the transport method to add retry and timeout logic. if "write_log_entries" not in self._inner_api_calls: self._inner_api_calls[ "write_log_entries" ] = google.api_core.gapic_v1.method.wrap_method( self.transport.write_log_entries, default_retry=self._method_configs["WriteLogEntries"].retry, default_timeout=self._method_configs["WriteLogEntries"].timeout, client_info=self._client_info, ) request = logging_pb2.WriteLogEntriesRequest( entries=entries, log_name=log_name, resource=resource, labels=labels, partial_success=partial_success, dry_run=dry_run, ) return self._inner_api_calls["write_log_entries"]( request, retry=retry, timeout=timeout, metadata=metadata )
def surfaceBrightness(abs_mag, r_physical, distance): """ Compute the average surface brightness [mag arcsec^-2] within the half-light radius abs_mag = absolute magnitude [mag] r_physical = half-light radius [kpc] distance = [kpc] The factor 2 in the c_v equation below accounts for half the luminosity within the half-light radius. The 3600.**2 is the conversion from deg^2 to arcsec^2 c_v = 2.5 * np.log10(2.) + 2.5 * np.log10(np.pi * 3600.**2) = 19.78 """ r_angle = np.degrees(np.arctan(r_physical / distance)) c_v = 19.78 # mag/arcsec^2 return abs_mag + dist2mod(distance) + c_v + 2.5 * np.log10(r_angle**2)
Compute the average surface brightness [mag arcsec^-2] within the half-light radius abs_mag = absolute magnitude [mag] r_physical = half-light radius [kpc] distance = [kpc] The factor 2 in the c_v equation below accounts for half the luminosity within the half-light radius. The 3600.**2 is the conversion from deg^2 to arcsec^2 c_v = 2.5 * np.log10(2.) + 2.5 * np.log10(np.pi * 3600.**2) = 19.78
Below is the instruction that describes the task: ### Input: Compute the average surface brightness [mag arcsec^-2] within the half-light radius abs_mag = absolute magnitude [mag] r_physical = half-light radius [kpc] distance = [kpc] The factor 2 in the c_v equation below accounts for half the luminosity within the half-light radius. The 3600.**2 is the conversion from deg^2 to arcsec^2 c_v = 2.5 * np.log10(2.) + 2.5 * np.log10(np.pi * 3600.**2) = 19.78 ### Response: def surfaceBrightness(abs_mag, r_physical, distance): """ Compute the average surface brightness [mag arcsec^-2] within the half-light radius abs_mag = absolute magnitude [mag] r_physical = half-light radius [kpc] distance = [kpc] The factor 2 in the c_v equation below accounts for half the luminosity within the half-light radius. The 3600.**2 is the conversion from deg^2 to arcsec^2 c_v = 2.5 * np.log10(2.) + 2.5 * np.log10(np.pi * 3600.**2) = 19.78 """ r_angle = np.degrees(np.arctan(r_physical / distance)) c_v = 19.78 # mag/arcsec^2 return abs_mag + dist2mod(distance) + c_v + 2.5 * np.log10(r_angle**2)
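The c_v constant quoted in the docstring above can be checked numerically. This is an illustrative sketch, not part of the original module:

```python
import numpy as np

# Verify the quoted constant: c_v = 2.5*log10(2) + 2.5*log10(pi * 3600**2).
# The first term accounts for half the luminosity inside the half-light
# radius; the second converts deg^2 to arcsec^2.
c_v = 2.5 * np.log10(2.0) + 2.5 * np.log10(np.pi * 3600.0**2)
print(round(c_v, 2))  # 19.78
```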
def init(): """ Install gettext with the default parameters """ if "_" not in builtins.__dict__: # avoid installing lang two times os.environ["LANGUAGE"] = inginious.input.get_lang() if inginious.DEBUG: gettext.install("messages", get_lang_dir_path()) else: gettext.install("messages", get_lang_dir_path())
Install gettext with the default parameters
Below is the instruction that describes the task: ### Input: Install gettext with the default parameters ### Response: def init(): """ Install gettext with the default parameters """ if "_" not in builtins.__dict__: # avoid installing lang two times os.environ["LANGUAGE"] = inginious.input.get_lang() if inginious.DEBUG: gettext.install("messages", get_lang_dir_path()) else: gettext.install("messages", get_lang_dir_path())
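For context, a minimal sketch of what `gettext.install` does when no message catalog is available: it installs an identity `_` into builtins via the `NullTranslations` fallback.

```python
import builtins
import gettext

# With no compiled catalog on disk, gettext.install falls back to
# NullTranslations, so "_" becomes an identity function in builtins.
gettext.install("messages")
print(_("hello"))  # hello
assert "_" in builtins.__dict__
```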
def _py_func_with_gradient(func, inp, Tout, stateful=True, name=None, grad_func=None): """ PyFunc defined as given by Tensorflow :param func: Custom Function :param inp: Function Inputs :param Tout: Output Type of our Custom Function :param stateful: Calculate Gradients when stateful is True :param name: Name of the PyFunction :param grad_func: Custom Gradient Function :return: """ # Generate random name in order to avoid conflicts with inbuilt names rnd_name = 'PyFuncGrad-' + '%0x' % getrandbits(30 * 4) # Register Tensorflow Gradient tf.RegisterGradient(rnd_name)(grad_func) # Get current graph g = tf.get_default_graph() # Add gradient override map with g.gradient_override_map({"PyFunc": rnd_name, "PyFuncStateless": rnd_name}): return tf.py_func(func, inp, Tout, stateful=stateful, name=name)
PyFunc defined as given by Tensorflow :param func: Custom Function :param inp: Function Inputs :param Tout: Output Type of our Custom Function :param stateful: Calculate Gradients when stateful is True :param name: Name of the PyFunction :param grad_func: Custom Gradient Function :return:
Below is the instruction that describes the task: ### Input: PyFunc defined as given by Tensorflow :param func: Custom Function :param inp: Function Inputs :param Tout: Output Type of our Custom Function :param stateful: Calculate Gradients when stateful is True :param name: Name of the PyFunction :param grad_func: Custom Gradient Function :return: ### Response: def _py_func_with_gradient(func, inp, Tout, stateful=True, name=None, grad_func=None): """ PyFunc defined as given by Tensorflow :param func: Custom Function :param inp: Function Inputs :param Tout: Output Type of our Custom Function :param stateful: Calculate Gradients when stateful is True :param name: Name of the PyFunction :param grad_func: Custom Gradient Function :return: """ # Generate random name in order to avoid conflicts with inbuilt names rnd_name = 'PyFuncGrad-' + '%0x' % getrandbits(30 * 4) # Register Tensorflow Gradient tf.RegisterGradient(rnd_name)(grad_func) # Get current graph g = tf.get_default_graph() # Add gradient override map with g.gradient_override_map({"PyFunc": rnd_name, "PyFuncStateless": rnd_name}): return tf.py_func(func, inp, Tout, stateful=stateful, name=name)
def write_tdi_bits(self, buff, return_tdo=False, TMS=True): """ Command controller to write TDI data (with constant TMS bit) to the physical scan chain. Optionally return TDO bits sent back from the scan chain. Args: data - bits to send over TDI line of scan chain (bitarray) return_tdo (bool) - return the device's bitarray response TMS (bool) - whether TMS should send a bitarray of all 0's of same length as `data` (i.e. False) or all 1's (i.e. True) Returns: None by default or the (bitarray) response of the device after receiving data, if return_tdo is True. Usage: >>> from proteusisc import getAttachedControllers, bitarray >>> c = getAttachedControllers()[0] >>> c.jtag_enable() >>> c.write_tdi_bits(bitarray("11111"), return_tdo=True) >>> c.jtag_disable() """ self._check_jtag() tms_bits = bitarray([TMS]*len(buff)) self._update_scanchain(tms_bits) self.bulkCommandDefault(_BMSG_WRITE_TDI % (return_tdo, TMS, len(buff).to_bytes(4, 'little'))) self.bulkWriteData(build_byte_align_buff(buff).tobytes()[::-1]) tdo_bits = self._read_tdo(len(buff)) if return_tdo else None self._get_adv_trans_stats(0x08, return_tdo) return tdo_bits
Command controller to write TDI data (with constant TMS bit) to the physical scan chain. Optionally return TDO bits sent back from the scan chain. Args: data - bits to send over TDI line of scan chain (bitarray) return_tdo (bool) - return the device's bitarray response TMS (bool) - whether TMS should send a bitarray of all 0's of same length as `data` (i.e. False) or all 1's (i.e. True) Returns: None by default or the (bitarray) response of the device after receiving data, if return_tdo is True. Usage: >>> from proteusisc import getAttachedControllers, bitarray >>> c = getAttachedControllers()[0] >>> c.jtag_enable() >>> c.write_tdi_bits(bitarray("11111"), return_tdo=True) >>> c.jtag_disable()
Below is the instruction that describes the task: ### Input: Command controller to write TDI data (with constant TMS bit) to the physical scan chain. Optionally return TDO bits sent back from the scan chain. Args: data - bits to send over TDI line of scan chain (bitarray) return_tdo (bool) - return the device's bitarray response TMS (bool) - whether TMS should send a bitarray of all 0's of same length as `data` (i.e. False) or all 1's (i.e. True) Returns: None by default or the (bitarray) response of the device after receiving data, if return_tdo is True. Usage: >>> from proteusisc import getAttachedControllers, bitarray >>> c = getAttachedControllers()[0] >>> c.jtag_enable() >>> c.write_tdi_bits(bitarray("11111"), return_tdo=True) >>> c.jtag_disable() ### Response: def write_tdi_bits(self, buff, return_tdo=False, TMS=True): """ Command controller to write TDI data (with constant TMS bit) to the physical scan chain. Optionally return TDO bits sent back from the scan chain. Args: data - bits to send over TDI line of scan chain (bitarray) return_tdo (bool) - return the device's bitarray response TMS (bool) - whether TMS should send a bitarray of all 0's of same length as `data` (i.e. False) or all 1's (i.e. True) Returns: None by default or the (bitarray) response of the device after receiving data, if return_tdo is True. Usage: >>> from proteusisc import getAttachedControllers, bitarray >>> c = getAttachedControllers()[0] >>> c.jtag_enable() >>> c.write_tdi_bits(bitarray("11111"), return_tdo=True) >>> c.jtag_disable() """ self._check_jtag() tms_bits = bitarray([TMS]*len(buff)) self._update_scanchain(tms_bits) self.bulkCommandDefault(_BMSG_WRITE_TDI % (return_tdo, TMS, len(buff).to_bytes(4, 'little'))) self.bulkWriteData(build_byte_align_buff(buff).tobytes()[::-1]) tdo_bits = self._read_tdo(len(buff)) if return_tdo else None self._get_adv_trans_stats(0x08, return_tdo) return tdo_bits
def trace(self, *attributes): """ Function decorator that traces functions NOTE: Must be placed after the @app.route decorator @param attributes any number of flask.Request attributes (strings) to be set as tags on the created span """ def decorator(f): def wrapper(*args, **kwargs): if self._trace_all_requests: return f(*args, **kwargs) self._before_request_fn(list(attributes)) try: r = f(*args, **kwargs) self._after_request_fn() except Exception as e: self._after_request_fn(error=e) raise self._after_request_fn() return r wrapper.__name__ = f.__name__ return wrapper return decorator
Function decorator that traces functions NOTE: Must be placed after the @app.route decorator @param attributes any number of flask.Request attributes (strings) to be set as tags on the created span
Below is the instruction that describes the task: ### Input: Function decorator that traces functions NOTE: Must be placed after the @app.route decorator @param attributes any number of flask.Request attributes (strings) to be set as tags on the created span ### Response: def trace(self, *attributes): """ Function decorator that traces functions NOTE: Must be placed after the @app.route decorator @param attributes any number of flask.Request attributes (strings) to be set as tags on the created span """ def decorator(f): def wrapper(*args, **kwargs): if self._trace_all_requests: return f(*args, **kwargs) self._before_request_fn(list(attributes)) try: r = f(*args, **kwargs) self._after_request_fn() except Exception as e: self._after_request_fn(error=e) raise self._after_request_fn() return r wrapper.__name__ = f.__name__ return wrapper return decorator
def _proxy(self): """ Generate an instance context for the instance; the context is capable of performing various actions. All instance actions are proxied to the context :returns: ConfigurationContext for this ConfigurationInstance :rtype: twilio.rest.flex_api.v1.configuration.ConfigurationContext """ if self._context is None: self._context = ConfigurationContext(self._version, ) return self._context
Generate an instance context for the instance; the context is capable of performing various actions. All instance actions are proxied to the context :returns: ConfigurationContext for this ConfigurationInstance :rtype: twilio.rest.flex_api.v1.configuration.ConfigurationContext
Below is the instruction that describes the task: ### Input: Generate an instance context for the instance; the context is capable of performing various actions. All instance actions are proxied to the context :returns: ConfigurationContext for this ConfigurationInstance :rtype: twilio.rest.flex_api.v1.configuration.ConfigurationContext ### Response: def _proxy(self): """ Generate an instance context for the instance; the context is capable of performing various actions. All instance actions are proxied to the context :returns: ConfigurationContext for this ConfigurationInstance :rtype: twilio.rest.flex_api.v1.configuration.ConfigurationContext """ if self._context is None: self._context = ConfigurationContext(self._version, ) return self._context
def word(self, _id, padding=75): """ Get words """ word = self.words[_id][2] vec = word_to_vector(word) vec += [-1] * (padding - len(vec)) return np.array(vec, dtype=np.int32)
Get words
Below is the instruction that describes the task: ### Input: Get words ### Response: def word(self, _id, padding=75): """ Get words """ word = self.words[_id][2] vec = word_to_vector(word) vec += [-1] * (padding - len(vec)) return np.array(vec, dtype=np.int32)
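The padding step above can be exercised in isolation; `word_to_vector` here is a hypothetical stand-in (mapping characters to ordinals), not the encoder from the original codebase:

```python
import numpy as np

def word_to_vector(word):
    # Hypothetical stand-in for the project's real encoder.
    return [ord(c) for c in word]

def pad_word(word, padding=10):
    # Same padding logic as the method above: fill with -1 up to `padding`.
    vec = word_to_vector(word)
    vec += [-1] * (padding - len(vec))
    return np.array(vec, dtype=np.int32)

v = pad_word('cat')
print(v.tolist())  # [99, 97, 116, -1, -1, -1, -1, -1, -1, -1]
```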
def container(dec): """Meta-decorator (for decorating decorators) Keeps around original decorated function as a property ``orig_func`` :param dec: Decorator to decorate :type dec: function :returns: Decorated decorator """ # Credits: http://stackoverflow.com/a/1167248/1798683 @wraps(dec) def meta_decorator(f): decorator = dec(f) decorator.orig_func = f return decorator return meta_decorator
Meta-decorator (for decorating decorators) Keeps around original decorated function as a property ``orig_func`` :param dec: Decorator to decorate :type dec: function :returns: Decorated decorator
Below is the instruction that describes the task: ### Input: Meta-decorator (for decorating decorators) Keeps around original decorated function as a property ``orig_func`` :param dec: Decorator to decorate :type dec: function :returns: Decorated decorator ### Response: def container(dec): """Meta-decorator (for decorating decorators) Keeps around original decorated function as a property ``orig_func`` :param dec: Decorator to decorate :type dec: function :returns: Decorated decorator """ # Credits: http://stackoverflow.com/a/1167248/1798683 @wraps(dec) def meta_decorator(f): decorator = dec(f) decorator.orig_func = f return decorator return meta_decorator
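A small usage sketch of the `container` meta-decorator above; the `shout` and `greet` functions are illustrative:

```python
from functools import wraps

def container(dec):
    """Meta-decorator that keeps the original function on ``orig_func``."""
    @wraps(dec)
    def meta_decorator(f):
        decorator = dec(f)
        decorator.orig_func = f
        return decorator
    return meta_decorator

@container
def shout(f):
    # A toy decorator: upper-case the wrapped function's result.
    def wrapper(*args, **kwargs):
        return f(*args, **kwargs).upper()
    return wrapper

@shout
def greet(name):
    return 'hello, %s' % name

print(greet('world'))            # HELLO, WORLD
print(greet.orig_func('world'))  # hello, world
```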
def refresh(self): '''Refetch instance data from the API. ''' response = requests.get('%s/media/images/%s' % (API_BASE_URL, self.id)) attributes = response.json() #self.exif = attributes['exif'] self.height = attributes['height'] self.width = attributes['width'] #self.ratio = attributes['ratio'] #self.markup = attributes['markup'] #self.srcid = attributes['srcid'] image = attributes['image'] # Images are allowed to have different sizes, depending on the dimensions # of the original image. To avoid doing a bunch of explicit 'in' checks, # splat the variables into scope. del(image['id']) for size in image: vars(self)[size] = image[size]
Refetch instance data from the API.
Below is the instruction that describes the task: ### Input: Refetch instance data from the API. ### Response: def refresh(self): '''Refetch instance data from the API. ''' response = requests.get('%s/media/images/%s' % (API_BASE_URL, self.id)) attributes = response.json() #self.exif = attributes['exif'] self.height = attributes['height'] self.width = attributes['width'] #self.ratio = attributes['ratio'] #self.markup = attributes['markup'] #self.srcid = attributes['srcid'] image = attributes['image'] # Images are allowed to have different sizes, depending on the dimensions # of the original image. To avoid doing a bunch of explicit 'in' checks, # splat the variables into scope. del(image['id']) for size in image: vars(self)[size] = image[size]
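The attribute-splatting at the end of `refresh` can be shown standalone; the payload shape here is hypothetical, not the real API response:

```python
class Image:
    def refresh(self, attributes):
        # Copy every size entry of the payload onto the instance, after
        # dropping the 'id' key -- the same vars(self) splat used above.
        image = dict(attributes['image'])
        del image['id']
        for size in image:
            vars(self)[size] = image[size]

img = Image()
img.refresh({'image': {'id': 1, 'small': 'a.jpg', 'large': 'b.jpg'}})
print(img.small, img.large)  # a.jpg b.jpg
```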
def get_inconsistent_fieldnames(): """ Returns a set of keys from settings.CONFIG that are not accounted for in settings.CONFIG_FIELDSETS. If there are no fieldnames in settings.CONFIG_FIELDSETS, returns an empty set. """ field_name_list = [] for fieldset_title, fields_list in settings.CONFIG_FIELDSETS.items(): for field_name in fields_list: field_name_list.append(field_name) if not field_name_list: return set() return set(set(settings.CONFIG.keys()) - set(field_name_list))
Returns a set of keys from settings.CONFIG that are not accounted for in settings.CONFIG_FIELDSETS. If there are no fieldnames in settings.CONFIG_FIELDSETS, returns an empty set.
Below is the instruction that describes the task: ### Input: Returns a set of keys from settings.CONFIG that are not accounted for in settings.CONFIG_FIELDSETS. If there are no fieldnames in settings.CONFIG_FIELDSETS, returns an empty set. ### Response: def get_inconsistent_fieldnames(): """ Returns a set of keys from settings.CONFIG that are not accounted for in settings.CONFIG_FIELDSETS. If there are no fieldnames in settings.CONFIG_FIELDSETS, returns an empty set. """ field_name_list = [] for fieldset_title, fields_list in settings.CONFIG_FIELDSETS.items(): for field_name in fields_list: field_name_list.append(field_name) if not field_name_list: return set() return set(set(settings.CONFIG.keys()) - set(field_name_list))
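The core of `get_inconsistent_fieldnames` is a set difference; here is a standalone sketch with hypothetical settings values:

```python
# Hypothetical stand-ins for settings.CONFIG and settings.CONFIG_FIELDSETS.
CONFIG = {'SITE_NAME': 'x', 'SITE_URL': 'y', 'MAINTENANCE': False}
CONFIG_FIELDSETS = {'General': ['SITE_NAME', 'SITE_URL']}

# Every key of CONFIG that no fieldset accounts for.
listed = {name for fields in CONFIG_FIELDSETS.values() for name in fields}
missing = set(CONFIG) - listed
print(sorted(missing))  # ['MAINTENANCE']
```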
def _proxy(self): """ Generate an instance context for the instance; the context is capable of performing various actions. All instance actions are proxied to the context :returns: DeploymentContext for this DeploymentInstance :rtype: twilio.rest.serverless.v1.service.environment.deployment.DeploymentContext """ if self._context is None: self._context = DeploymentContext( self._version, service_sid=self._solution['service_sid'], environment_sid=self._solution['environment_sid'], sid=self._solution['sid'], ) return self._context
Generate an instance context for the instance; the context is capable of performing various actions. All instance actions are proxied to the context :returns: DeploymentContext for this DeploymentInstance :rtype: twilio.rest.serverless.v1.service.environment.deployment.DeploymentContext
Below is the instruction that describes the task: ### Input: Generate an instance context for the instance; the context is capable of performing various actions. All instance actions are proxied to the context :returns: DeploymentContext for this DeploymentInstance :rtype: twilio.rest.serverless.v1.service.environment.deployment.DeploymentContext ### Response: def _proxy(self): """ Generate an instance context for the instance; the context is capable of performing various actions. All instance actions are proxied to the context :returns: DeploymentContext for this DeploymentInstance :rtype: twilio.rest.serverless.v1.service.environment.deployment.DeploymentContext """ if self._context is None: self._context = DeploymentContext( self._version, service_sid=self._solution['service_sid'], environment_sid=self._solution['environment_sid'], sid=self._solution['sid'], ) return self._context
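Both Twilio `_proxy` methods above follow the same lazy-initialization pattern, sketched here in isolation (names are illustrative):

```python
class Instance:
    def __init__(self):
        self._context = None

    @property
    def proxy(self):
        # Build the context once on first access, then reuse it.
        if self._context is None:
            self._context = object()
        return self._context

inst = Instance()
assert inst.proxy is inst.proxy  # the cached context is reused
```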
def _raise_error_if_container_is_missing_an_utterance(self): """ Check if there is a dataset for every utterance in every container, otherwise raise an error. """ expected_keys = frozenset(self.utt_ids) for cnt in self.containers: keys = set(cnt.keys()) if not keys.issuperset(expected_keys): raise ValueError('Container is missing data for some utterances!')
Check if there is a dataset for every utterance in every container, otherwise raise an error.
Below is the instruction that describes the task: ### Input: Check if there is a dataset for every utterance in every container, otherwise raise an error. ### Response: def _raise_error_if_container_is_missing_an_utterance(self): """ Check if there is a dataset for every utterance in every container, otherwise raise an error. """ expected_keys = frozenset(self.utt_ids) for cnt in self.containers: keys = set(cnt.keys()) if not keys.issuperset(expected_keys): raise ValueError('Container is missing data for some utterances!')
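The superset check above, exercised with hypothetical utterance IDs:

```python
# Every container must hold data for every expected utterance.
expected_keys = frozenset({'utt-1', 'utt-2', 'utt-3'})
container_keys = {'utt-1', 'utt-2'}

complete = container_keys.issuperset(expected_keys)
missing = expected_keys - container_keys
print(complete, sorted(missing))  # False ['utt-3']
```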
def dumps(self, msg, use_bin_type=False): ''' Run the correct dumps serialization format :param use_bin_type: Useful for Python 3 support. Tells msgpack to differentiate between 'str' and 'bytes' types by encoding them differently. Since this changes the wire protocol, this option should not be used outside of IPC. ''' def ext_type_encoder(obj): if isinstance(obj, six.integer_types): # msgpack can't handle the very long Python longs for jids # Convert any very long longs to strings return six.text_type(obj) elif isinstance(obj, (datetime.datetime, datetime.date)): # msgpack doesn't support datetime.datetime and datetime.date datatypes. # So here we have converted these types to custom datatype # This is msgpack Extended types numbered 78 return msgpack.ExtType(78, salt.utils.stringutils.to_bytes( obj.strftime('%Y%m%dT%H:%M:%S.%f'))) # The same for immutable types elif isinstance(obj, immutabletypes.ImmutableDict): return dict(obj) elif isinstance(obj, immutabletypes.ImmutableList): return list(obj) elif isinstance(obj, (set, immutabletypes.ImmutableSet)): # msgpack can't handle set so translate it to tuple return tuple(obj) elif isinstance(obj, CaseInsensitiveDict): return dict(obj) # No known exception types found. Let msgpack raise its own. return obj try: if msgpack.version >= (0, 4, 0): # msgpack only supports 'use_bin_type' starting in 0.4.0. # Due to this, if we don't need it, don't pass it at all so # that under Python 2 we can still work with older versions # of msgpack. return salt.utils.msgpack.dumps(msg, default=ext_type_encoder, use_bin_type=use_bin_type, _msgpack_module=msgpack) else: return salt.utils.msgpack.dumps(msg, default=ext_type_encoder, _msgpack_module=msgpack) except (OverflowError, msgpack.exceptions.PackValueError): # msgpack<=0.4.6 don't call ext encoder on very long integers raising the error instead. # Convert any very long longs to strings and call dumps again. def verylong_encoder(obj, context): # Make sure we catch recursion here.
objid = id(obj) if objid in context: return '<Recursion on {} with id={}>'.format(type(obj).__name__, id(obj)) context.add(objid) if isinstance(obj, dict): for key, value in six.iteritems(obj.copy()): obj[key] = verylong_encoder(value, context) return dict(obj) elif isinstance(obj, (list, tuple)): obj = list(obj) for idx, entry in enumerate(obj): obj[idx] = verylong_encoder(entry, context) return obj # A value of an Integer object is limited from -(2^63) upto (2^64)-1 by MessagePack # spec. Here we care only of JIDs that are positive integers. if isinstance(obj, six.integer_types) and obj >= pow(2, 64): return six.text_type(obj) else: return obj msg = verylong_encoder(msg, set()) if msgpack.version >= (0, 4, 0): return salt.utils.msgpack.dumps(msg, default=ext_type_encoder, use_bin_type=use_bin_type, _msgpack_module=msgpack) else: return salt.utils.msgpack.dumps(msg, default=ext_type_encoder, _msgpack_module=msgpack)
Run the correct dumps serialization format :param use_bin_type: Useful for Python 3 support. Tells msgpack to differentiate between 'str' and 'bytes' types by encoding them differently. Since this changes the wire protocol, this option should not be used outside of IPC.
Below is the instruction that describes the task: ### Input: Run the correct dumps serialization format :param use_bin_type: Useful for Python 3 support. Tells msgpack to differentiate between 'str' and 'bytes' types by encoding them differently. Since this changes the wire protocol, this option should not be used outside of IPC. ### Response: def dumps(self, msg, use_bin_type=False): ''' Run the correct dumps serialization format :param use_bin_type: Useful for Python 3 support. Tells msgpack to differentiate between 'str' and 'bytes' types by encoding them differently. Since this changes the wire protocol, this option should not be used outside of IPC. ''' def ext_type_encoder(obj): if isinstance(obj, six.integer_types): # msgpack can't handle the very long Python longs for jids # Convert any very long longs to strings return six.text_type(obj) elif isinstance(obj, (datetime.datetime, datetime.date)): # msgpack doesn't support datetime.datetime and datetime.date datatypes. # So here we have converted these types to custom datatype # This is msgpack Extended types numbered 78 return msgpack.ExtType(78, salt.utils.stringutils.to_bytes( obj.strftime('%Y%m%dT%H:%M:%S.%f'))) # The same for immutable types elif isinstance(obj, immutabletypes.ImmutableDict): return dict(obj) elif isinstance(obj, immutabletypes.ImmutableList): return list(obj) elif isinstance(obj, (set, immutabletypes.ImmutableSet)): # msgpack can't handle set so translate it to tuple return tuple(obj) elif isinstance(obj, CaseInsensitiveDict): return dict(obj) # No known exception types found. Let msgpack raise its own. return obj try: if msgpack.version >= (0, 4, 0): # msgpack only supports 'use_bin_type' starting in 0.4.0. # Due to this, if we don't need it, don't pass it at all so # that under Python 2 we can still work with older versions # of msgpack.
return salt.utils.msgpack.dumps(msg, default=ext_type_encoder, use_bin_type=use_bin_type, _msgpack_module=msgpack) else: return salt.utils.msgpack.dumps(msg, default=ext_type_encoder, _msgpack_module=msgpack) except (OverflowError, msgpack.exceptions.PackValueError): # msgpack<=0.4.6 don't call ext encoder on very long integers raising the error instead. # Convert any very long longs to strings and call dumps again. def verylong_encoder(obj, context): # Make sure we catch recursion here. objid = id(obj) if objid in context: return '<Recursion on {} with id={}>'.format(type(obj).__name__, id(obj)) context.add(objid) if isinstance(obj, dict): for key, value in six.iteritems(obj.copy()): obj[key] = verylong_encoder(value, context) return dict(obj) elif isinstance(obj, (list, tuple)): obj = list(obj) for idx, entry in enumerate(obj): obj[idx] = verylong_encoder(entry, context) return obj # A value of an Integer object is limited from -(2^63) upto (2^64)-1 by MessagePack # spec. Here we care only of JIDs that are positive integers. if isinstance(obj, six.integer_types) and obj >= pow(2, 64): return six.text_type(obj) else: return obj msg = verylong_encoder(msg, set()) if msgpack.version >= (0, 4, 0): return salt.utils.msgpack.dumps(msg, default=ext_type_encoder, use_bin_type=use_bin_type, _msgpack_module=msgpack) else: return salt.utils.msgpack.dumps(msg, default=ext_type_encoder, _msgpack_module=msgpack)
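The `verylong_encoder` fallback above can be trimmed to a self-contained sketch: recursively stringify integers that exceed msgpack's 2^64-1 limit while guarding against self-referential containers. This drops the six/salt dependencies of the original:

```python
def verylong_encoder(obj, context):
    # Guard against cyclic containers by tracking visited object ids.
    objid = id(obj)
    if objid in context:
        return '<Recursion on %s with id=%d>' % (type(obj).__name__, objid)
    context.add(objid)
    if isinstance(obj, dict):
        return {k: verylong_encoder(v, context) for k, v in obj.items()}
    if isinstance(obj, (list, tuple)):
        return [verylong_encoder(v, context) for v in obj]
    # msgpack integers are capped at 2**64 - 1; stringify anything bigger.
    if isinstance(obj, int) and obj >= 2**64:
        return str(obj)
    return obj

encoded = verylong_encoder({'jid': 2**70, 'vals': [1, 2**65]}, set())
print(encoded)
```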
def _sim_prediction_bayes(self, h, simulations): """ Simulates an h-step ahead mean prediction Parameters ---------- h : int How many steps ahead for the prediction simulations : int How many simulations to perform Returns ---------- Matrix of simulations """ sim_vector = np.zeros([simulations,h]) for n in range(0,simulations): t_z = self.draw_latent_variables(nsims=1).T[0] lmda, Y, scores = self._model(t_z) t_z = np.array([self.latent_variables.z_list[k].prior.transform(t_z[k]) for k in range(t_z.shape[0])]) # Create arrays to iterate over lmda_exp = lmda.copy() scores_exp = scores.copy() Y_exp = Y.copy() # Loop over h time periods for t in range(0,h): new_value = t_z[0] if self.p != 0: for j in range(1,self.p+1): new_value += t_z[j]*lmda_exp[-j] if self.q != 0: for k in range(1,self.q+1): new_value += t_z[k+self.p]*scores_exp[-k] if self.leverage is True: new_value += t_z[1+self.p+self.q]*np.sign(-(Y_exp[-1]-t_z[-2]-t_z[-1]*np.exp(lmda_exp[-1]/2.0)))*(scores_exp[-1]+1) lmda_exp = np.append(lmda_exp,[new_value]) # For indexing consistency scores_exp = np.append(scores_exp,scores[np.random.randint(scores.shape[0])]) # expectation of score is zero Y_exp = np.append(Y_exp,Y[np.random.randint(Y.shape[0])]) # bootstrap returns sim_vector[n] = lmda_exp[-h:] return np.transpose(sim_vector)
Simulates an h-step ahead mean prediction Parameters ---------- h : int How many steps ahead for the prediction simulations : int How many simulations to perform Returns ---------- Matrix of simulations
Below is the instruction that describes the task: ### Input: Simulates an h-step ahead mean prediction Parameters ---------- h : int How many steps ahead for the prediction simulations : int How many simulations to perform Returns ---------- Matrix of simulations ### Response: def _sim_prediction_bayes(self, h, simulations): """ Simulates an h-step ahead mean prediction Parameters ---------- h : int How many steps ahead for the prediction simulations : int How many simulations to perform Returns ---------- Matrix of simulations """ sim_vector = np.zeros([simulations,h]) for n in range(0,simulations): t_z = self.draw_latent_variables(nsims=1).T[0] lmda, Y, scores = self._model(t_z) t_z = np.array([self.latent_variables.z_list[k].prior.transform(t_z[k]) for k in range(t_z.shape[0])]) # Create arrays to iterate over lmda_exp = lmda.copy() scores_exp = scores.copy() Y_exp = Y.copy() # Loop over h time periods for t in range(0,h): new_value = t_z[0] if self.p != 0: for j in range(1,self.p+1): new_value += t_z[j]*lmda_exp[-j] if self.q != 0: for k in range(1,self.q+1): new_value += t_z[k+self.p]*scores_exp[-k] if self.leverage is True: new_value += t_z[1+self.p+self.q]*np.sign(-(Y_exp[-1]-t_z[-2]-t_z[-1]*np.exp(lmda_exp[-1]/2.0)))*(scores_exp[-1]+1) lmda_exp = np.append(lmda_exp,[new_value]) # For indexing consistency scores_exp = np.append(scores_exp,scores[np.random.randint(scores.shape[0])]) # expectation of score is zero Y_exp = np.append(Y_exp,Y[np.random.randint(Y.shape[0])]) # bootstrap returns sim_vector[n] = lmda_exp[-h:] return np.transpose(sim_vector)
def _check_directory_win(name, win_owner=None, win_perms=None, win_deny_perms=None, win_inheritance=None, win_perms_reset=None): ''' Check what changes need to be made on a directory ''' changes = {} if not os.path.isdir(name): changes = {name: {'directory': 'new'}} else: # Check owner by SID if win_owner is not None: current_owner = salt.utils.win_dacl.get_owner(name) current_owner_sid = salt.utils.win_functions.get_sid_from_name(current_owner) expected_owner_sid = salt.utils.win_functions.get_sid_from_name(win_owner) if not current_owner_sid == expected_owner_sid: changes['owner'] = win_owner # Check perms perms = salt.utils.win_dacl.get_permissions(name) # Verify Permissions if win_perms is not None: for user in win_perms: # Check that user exists: try: salt.utils.win_dacl.get_name(user) except CommandExecutionError: continue grant_perms = [] # Check for permissions if isinstance(win_perms[user]['perms'], six.string_types): if not salt.utils.win_dacl.has_permission( name, user, win_perms[user]['perms']): grant_perms = win_perms[user]['perms'] else: for perm in win_perms[user]['perms']: if not salt.utils.win_dacl.has_permission( name, user, perm, exact=False): grant_perms.append(win_perms[user]['perms']) if grant_perms: if 'grant_perms' not in changes: changes['grant_perms'] = {} if user not in changes['grant_perms']: changes['grant_perms'][user] = {} changes['grant_perms'][user]['perms'] = grant_perms # Check Applies to if 'applies_to' not in win_perms[user]: applies_to = 'this_folder_subfolders_files' else: applies_to = win_perms[user]['applies_to'] if user in perms: user = salt.utils.win_dacl.get_name(user) # Get the proper applies_to text at_flag = salt.utils.win_dacl.flags().ace_prop['file'][applies_to] applies_to_text = salt.utils.win_dacl.flags().ace_prop['file'][at_flag] if 'grant' in perms[user]: if not perms[user]['grant']['applies to'] == applies_to_text: if 'grant_perms' not in changes: changes['grant_perms'] = {} if user not in 
changes['grant_perms']: changes['grant_perms'][user] = {} changes['grant_perms'][user]['applies_to'] = applies_to # Verify Deny Permissions if win_deny_perms is not None: for user in win_deny_perms: # Check that user exists: try: salt.utils.win_dacl.get_name(user) except CommandExecutionError: continue deny_perms = [] # Check for permissions if isinstance(win_deny_perms[user]['perms'], six.string_types): if not salt.utils.win_dacl.has_permission( name, user, win_deny_perms[user]['perms'], 'deny'): deny_perms = win_deny_perms[user]['perms'] else: for perm in win_deny_perms[user]['perms']: if not salt.utils.win_dacl.has_permission( name, user, perm, 'deny', exact=False): deny_perms.append(win_deny_perms[user]['perms']) if deny_perms: if 'deny_perms' not in changes: changes['deny_perms'] = {} if user not in changes['deny_perms']: changes['deny_perms'][user] = {} changes['deny_perms'][user]['perms'] = deny_perms # Check Applies to if 'applies_to' not in win_deny_perms[user]: applies_to = 'this_folder_subfolders_files' else: applies_to = win_deny_perms[user]['applies_to'] if user in perms: user = salt.utils.win_dacl.get_name(user) # Get the proper applies_to text at_flag = salt.utils.win_dacl.flags().ace_prop['file'][applies_to] applies_to_text = salt.utils.win_dacl.flags().ace_prop['file'][at_flag] if 'deny' in perms[user]: if not perms[user]['deny']['applies to'] == applies_to_text: if 'deny_perms' not in changes: changes['deny_perms'] = {} if user not in changes['deny_perms']: changes['deny_perms'][user] = {} changes['deny_perms'][user]['applies_to'] = applies_to # Check inheritance if win_inheritance is not None: if not win_inheritance == salt.utils.win_dacl.get_inheritance(name): changes['inheritance'] = win_inheritance # Check reset if win_perms_reset: for user_name in perms: if user_name not in win_perms: if 'grant' in perms[user_name] and not perms[user_name]['grant']['inherited']: if 'remove_perms' not in changes: changes['remove_perms'] = {} 
changes['remove_perms'].update({user_name: perms[user_name]}) if user_name not in win_deny_perms: if 'deny' in perms[user_name] and not perms[user_name]['deny']['inherited']: if 'remove_perms' not in changes: changes['remove_perms'] = {} changes['remove_perms'].update({user_name: perms[user_name]}) if changes: return None, 'The directory "{0}" will be changed'.format(name), changes return True, 'The directory {0} is in the correct state'.format(name), changes
Check what changes need to be made on a directory
Below is the the instruction that describes the task: ### Input: Check what changes need to be made on a directory ### Response: def _check_directory_win(name, win_owner=None, win_perms=None, win_deny_perms=None, win_inheritance=None, win_perms_reset=None): ''' Check what changes need to be made on a directory ''' changes = {} if not os.path.isdir(name): changes = {name: {'directory': 'new'}} else: # Check owner by SID if win_owner is not None: current_owner = salt.utils.win_dacl.get_owner(name) current_owner_sid = salt.utils.win_functions.get_sid_from_name(current_owner) expected_owner_sid = salt.utils.win_functions.get_sid_from_name(win_owner) if not current_owner_sid == expected_owner_sid: changes['owner'] = win_owner # Check perms perms = salt.utils.win_dacl.get_permissions(name) # Verify Permissions if win_perms is not None: for user in win_perms: # Check that user exists: try: salt.utils.win_dacl.get_name(user) except CommandExecutionError: continue grant_perms = [] # Check for permissions if isinstance(win_perms[user]['perms'], six.string_types): if not salt.utils.win_dacl.has_permission( name, user, win_perms[user]['perms']): grant_perms = win_perms[user]['perms'] else: for perm in win_perms[user]['perms']: if not salt.utils.win_dacl.has_permission( name, user, perm, exact=False): grant_perms.append(win_perms[user]['perms']) if grant_perms: if 'grant_perms' not in changes: changes['grant_perms'] = {} if user not in changes['grant_perms']: changes['grant_perms'][user] = {} changes['grant_perms'][user]['perms'] = grant_perms # Check Applies to if 'applies_to' not in win_perms[user]: applies_to = 'this_folder_subfolders_files' else: applies_to = win_perms[user]['applies_to'] if user in perms: user = salt.utils.win_dacl.get_name(user) # Get the proper applies_to text at_flag = salt.utils.win_dacl.flags().ace_prop['file'][applies_to] applies_to_text = salt.utils.win_dacl.flags().ace_prop['file'][at_flag] if 'grant' in perms[user]: if not 
perms[user]['grant']['applies to'] == applies_to_text: if 'grant_perms' not in changes: changes['grant_perms'] = {} if user not in changes['grant_perms']: changes['grant_perms'][user] = {} changes['grant_perms'][user]['applies_to'] = applies_to # Verify Deny Permissions if win_deny_perms is not None: for user in win_deny_perms: # Check that user exists: try: salt.utils.win_dacl.get_name(user) except CommandExecutionError: continue deny_perms = [] # Check for permissions if isinstance(win_deny_perms[user]['perms'], six.string_types): if not salt.utils.win_dacl.has_permission( name, user, win_deny_perms[user]['perms'], 'deny'): deny_perms = win_deny_perms[user]['perms'] else: for perm in win_deny_perms[user]['perms']: if not salt.utils.win_dacl.has_permission( name, user, perm, 'deny', exact=False): deny_perms.append(win_deny_perms[user]['perms']) if deny_perms: if 'deny_perms' not in changes: changes['deny_perms'] = {} if user not in changes['deny_perms']: changes['deny_perms'][user] = {} changes['deny_perms'][user]['perms'] = deny_perms # Check Applies to if 'applies_to' not in win_deny_perms[user]: applies_to = 'this_folder_subfolders_files' else: applies_to = win_deny_perms[user]['applies_to'] if user in perms: user = salt.utils.win_dacl.get_name(user) # Get the proper applies_to text at_flag = salt.utils.win_dacl.flags().ace_prop['file'][applies_to] applies_to_text = salt.utils.win_dacl.flags().ace_prop['file'][at_flag] if 'deny' in perms[user]: if not perms[user]['deny']['applies to'] == applies_to_text: if 'deny_perms' not in changes: changes['deny_perms'] = {} if user not in changes['deny_perms']: changes['deny_perms'][user] = {} changes['deny_perms'][user]['applies_to'] = applies_to # Check inheritance if win_inheritance is not None: if not win_inheritance == salt.utils.win_dacl.get_inheritance(name): changes['inheritance'] = win_inheritance # Check reset if win_perms_reset: for user_name in perms: if user_name not in win_perms: if 'grant' in 
perms[user_name] and not perms[user_name]['grant']['inherited']: if 'remove_perms' not in changes: changes['remove_perms'] = {} changes['remove_perms'].update({user_name: perms[user_name]}) if user_name not in win_deny_perms: if 'deny' in perms[user_name] and not perms[user_name]['deny']['inherited']: if 'remove_perms' not in changes: changes['remove_perms'] = {} changes['remove_perms'].update({user_name: perms[user_name]}) if changes: return None, 'The directory "{0}" will be changed'.format(name), changes return True, 'The directory {0} is in the correct state'.format(name), changes
def read(self, size=-1): """! @brief Return bytes read from the connection.""" if self.connected is None: return None # Extract requested amount of data from the read buffer. data = self._get_input(size) return data
! @brief Return bytes read from the connection.
Below is the instruction that describes the task: ### Input: ! @brief Return bytes read from the connection. ### Response: def read(self, size=-1): """! @brief Return bytes read from the connection.""" if self.connected is None: return None # Extract requested amount of data from the read buffer. data = self._get_input(size) return data
def _delToC(self): """_delToC(self) -> PyObject *""" if self.isClosed or self.isEncrypted: raise ValueError("operation illegal for closed / encrypted doc") val = _fitz.Document__delToC(self) self.initData() return val
_delToC(self) -> PyObject *
Below is the instruction that describes the task: ### Input: _delToC(self) -> PyObject * ### Response: def _delToC(self): """_delToC(self) -> PyObject *""" if self.isClosed or self.isEncrypted: raise ValueError("operation illegal for closed / encrypted doc") val = _fitz.Document__delToC(self) self.initData() return val
def bind(self, data): """ Bind a VertexBuffer that has structured data Parameters ---------- data : VertexBuffer The vertex buffer to bind. The field names of the array are mapped to attribute names in GLSL. """ # Check if not isinstance(data, VertexBuffer): raise ValueError('Program.bind() requires a VertexBuffer.') # Apply for name in data.dtype.names: self[name] = data[name]
Bind a VertexBuffer that has structured data Parameters ---------- data : VertexBuffer The vertex buffer to bind. The field names of the array are mapped to attribute names in GLSL.
Below is the instruction that describes the task: ### Input: Bind a VertexBuffer that has structured data Parameters ---------- data : VertexBuffer The vertex buffer to bind. The field names of the array are mapped to attribute names in GLSL. ### Response: def bind(self, data): """ Bind a VertexBuffer that has structured data Parameters ---------- data : VertexBuffer The vertex buffer to bind. The field names of the array are mapped to attribute names in GLSL. """ # Check if not isinstance(data, VertexBuffer): raise ValueError('Program.bind() requires a VertexBuffer.') # Apply for name in data.dtype.names: self[name] = data[name]
def obo(self): """str: the `Term` serialized in an Obo ``[Term]`` stanza. Note: The following guide was used: ftp://ftp.geneontology.org/pub/go/www/GO.format.obo-1_4.shtml """ def add_tags(stanza_list, tags): for tag in tags: if tag in self.other: if isinstance(self.other[tag], list): for attribute in self.other[tag]: stanza_list.append("{}: {}".format(tag, attribute)) else: stanza_list.append("{}: {}".format(tag, self.other[tag])) # metatags = ["id", "is_anonymous", "name", "namespace","alt_id", "def","comment", # "subset","synonym","xref","builtin","property_value","is_a", # "intersection_of","union_of","equivalent_to","disjoint_from", # "relationship","created_by","creation_date","is_obsolete", # "replaced_by", "consider"] stanza_list = ["[Term]"] # id stanza_list.append("id: {}".format(self.id)) # name if self.name is not None: stanza_list.append("name: {}".format(self.name)) else: stanza_list.append("name: ") add_tags(stanza_list, ['is_anonymous', 'alt_id']) # def if self.desc: stanza_list.append(self.desc.obo) # comment, subset add_tags(stanza_list, ['comment', 'subset']) # synonyms for synonym in sorted(self.synonyms, key=str): stanza_list.append(synonym.obo) add_tags(stanza_list, ['xref']) # is_a if Relationship('is_a') in self.relations: for companion in self.relations[Relationship('is_a')]: stanza_list.append("is_a: {} ! {}".format(companion.id, companion.name)) add_tags(stanza_list, ['intersection_of', 'union_of', 'disjoint_from']) for relation in self.relations: if relation.direction=="bottomup" and relation is not Relationship('is_a'): stanza_list.extend( "relationship: {} {} ! {}".format( relation.obo_name, companion.id, companion.name ) for companion in self.relations[relation] ) add_tags(stanza_list, ['is_obsolete', 'replaced_by', 'consider', 'builtin', 'created_by', 'creation_date']) return "\n".join(stanza_list)
str: the `Term` serialized in an Obo ``[Term]`` stanza. Note: The following guide was used: ftp://ftp.geneontology.org/pub/go/www/GO.format.obo-1_4.shtml
Below is the the instruction that describes the task: ### Input: str: the `Term` serialized in an Obo ``[Term]`` stanza. Note: The following guide was used: ftp://ftp.geneontology.org/pub/go/www/GO.format.obo-1_4.shtml ### Response: def obo(self): """str: the `Term` serialized in an Obo ``[Term]`` stanza. Note: The following guide was used: ftp://ftp.geneontology.org/pub/go/www/GO.format.obo-1_4.shtml """ def add_tags(stanza_list, tags): for tag in tags: if tag in self.other: if isinstance(self.other[tag], list): for attribute in self.other[tag]: stanza_list.append("{}: {}".format(tag, attribute)) else: stanza_list.append("{}: {}".format(tag, self.other[tag])) # metatags = ["id", "is_anonymous", "name", "namespace","alt_id", "def","comment", # "subset","synonym","xref","builtin","property_value","is_a", # "intersection_of","union_of","equivalent_to","disjoint_from", # "relationship","created_by","creation_date","is_obsolete", # "replaced_by", "consider"] stanza_list = ["[Term]"] # id stanza_list.append("id: {}".format(self.id)) # name if self.name is not None: stanza_list.append("name: {}".format(self.name)) else: stanza_list.append("name: ") add_tags(stanza_list, ['is_anonymous', 'alt_id']) # def if self.desc: stanza_list.append(self.desc.obo) # comment, subset add_tags(stanza_list, ['comment', 'subset']) # synonyms for synonym in sorted(self.synonyms, key=str): stanza_list.append(synonym.obo) add_tags(stanza_list, ['xref']) # is_a if Relationship('is_a') in self.relations: for companion in self.relations[Relationship('is_a')]: stanza_list.append("is_a: {} ! {}".format(companion.id, companion.name)) add_tags(stanza_list, ['intersection_of', 'union_of', 'disjoint_from']) for relation in self.relations: if relation.direction=="bottomup" and relation is not Relationship('is_a'): stanza_list.extend( "relationship: {} {} ! 
{}".format( relation.obo_name, companion.id, companion.name ) for companion in self.relations[relation] ) add_tags(stanza_list, ['is_obsolete', 'replaced_by', 'consider', 'builtin', 'created_by', 'creation_date']) return "\n".join(stanza_list)
def _get_cron_cmdstr(path, user=None): ''' Returns a format string, to be used to build a crontab command. ''' if user: cmd = 'crontab -u {0}'.format(user) else: cmd = 'crontab' return '{0} {1}'.format(cmd, path)
Returns a format string, to be used to build a crontab command.
Below is the instruction that describes the task: ### Input: Returns a format string, to be used to build a crontab command. ### Response: def _get_cron_cmdstr(path, user=None): ''' Returns a format string, to be used to build a crontab command. ''' if user: cmd = 'crontab -u {0}'.format(user) else: cmd = 'crontab' return '{0} {1}'.format(cmd, path)
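The crontab command string built in the record above is simple enough to sanity-check outside Salt; a minimal standalone sketch of the same logic (hypothetical function name, no Salt imports needed):

```python
def get_cron_cmdstr(path, user=None):
    # Build the `crontab` invocation, optionally targeting another user.
    cmd = 'crontab -u {0}'.format(user) if user else 'crontab'
    return '{0} {1}'.format(cmd, path)
```

For example, `get_cron_cmdstr('/tmp/tab', 'root')` produces `crontab -u root /tmp/tab`, while omitting the user falls back to the calling user's crontab.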
def push_func(self, cuin, callback): """Push a function for dfp. :param cuin: str,unicode: Callback Unique Identifier Name. :param callback: callable: Corresponding to the cuin to perform a function. :raises: DFPError,NotCallableError: raises an exception .. versionadded:: 2.3.0 """ if cuin and isinstance(cuin, string_types) and callable(callback): if cuin in self._dfp_funcs: raise DFPError("The cuin already exists") else: self._dfp_funcs[cuin] = callback else: if not callable(callback): raise NotCallableError("The cuin %s cannot be called back" % cuin) raise DFPError("Invalid parameter")
Push a function for dfp. :param cuin: str,unicode: Callback Unique Identifier Name. :param callback: callable: Corresponding to the cuin to perform a function. :raises: DFPError,NotCallableError: raises an exception .. versionadded:: 2.3.0
Below is the instruction that describes the task: ### Input: Push a function for dfp. :param cuin: str,unicode: Callback Unique Identifier Name. :param callback: callable: Corresponding to the cuin to perform a function. :raises: DFPError,NotCallableError: raises an exception .. versionadded:: 2.3.0 ### Response: def push_func(self, cuin, callback): """Push a function for dfp. :param cuin: str,unicode: Callback Unique Identifier Name. :param callback: callable: Corresponding to the cuin to perform a function. :raises: DFPError,NotCallableError: raises an exception .. versionadded:: 2.3.0 """ if cuin and isinstance(cuin, string_types) and callable(callback): if cuin in self._dfp_funcs: raise DFPError("The cuin already exists") else: self._dfp_funcs[cuin] = callback else: if not callable(callback): raise NotCallableError("The cuin %s cannot be called back" % cuin) raise DFPError("Invalid parameter")
def _process_search_results(self, results: pysolr.Results) -> SearchResults: """ Convert solr docs to biolink object :param results: pysolr.Results :return: model.GolrResults.SearchResults """ # map go-golr fields to standard for doc in results.docs: if 'entity' in doc: doc['id'] = doc['entity'] doc['label'] = doc['entity_label'] highlighting = { doc['id']: self._process_highlight(results, doc)._asdict() for doc in results.docs if results.highlighting } payload = SearchResults( facet_counts=translate_facet_field(results.facets), highlighting=highlighting, docs=results.docs, numFound=results.hits ) logging.debug('Docs: {}'.format(len(results.docs))) return payload
Convert solr docs to biolink object :param results: pysolr.Results :return: model.GolrResults.SearchResults
Below is the instruction that describes the task: ### Input: Convert solr docs to biolink object :param results: pysolr.Results :return: model.GolrResults.SearchResults ### Response: def _process_search_results(self, results: pysolr.Results) -> SearchResults: """ Convert solr docs to biolink object :param results: pysolr.Results :return: model.GolrResults.SearchResults """ # map go-golr fields to standard for doc in results.docs: if 'entity' in doc: doc['id'] = doc['entity'] doc['label'] = doc['entity_label'] highlighting = { doc['id']: self._process_highlight(results, doc)._asdict() for doc in results.docs if results.highlighting } payload = SearchResults( facet_counts=translate_facet_field(results.facets), highlighting=highlighting, docs=results.docs, numFound=results.hits ) logging.debug('Docs: {}'.format(len(results.docs))) return payload
def rarlognormal(a, sigma, rho, size=1): R""" Autoregressive normal random variates. If a is a scalar, generates one series of length size. If a is a sequence, generates size series of the same length as a. """ f = utils.ar1 if np.isscalar(a): r = f(rho, 0, sigma, size) else: n = len(a) r = [f(rho, 0, sigma, n) for i in range(size)] if size == 1: r = r[0] return a * np.exp(r)
Autoregressive normal random variates. If a is a scalar, generates one series of length size. If a is a sequence, generates size series of the same length as a.
Below is the instruction that describes the task: ### Input: Autoregressive normal random variates. If a is a scalar, generates one series of length size. If a is a sequence, generates size series of the same length as a. ### Response: def rarlognormal(a, sigma, rho, size=1): R""" Autoregressive normal random variates. If a is a scalar, generates one series of length size. If a is a sequence, generates size series of the same length as a. """ f = utils.ar1 if np.isscalar(a): r = f(rho, 0, sigma, size) else: n = len(a) r = [f(rho, 0, sigma, n) for i in range(size)] if size == 1: r = r[0] return a * np.exp(r)
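The `rarlognormal` record above delegates the AR(1) recursion to a library helper (`utils.ar1`) that is not shown. A pure-Python sketch of the same idea, assuming a standard stationary AR(1) process on the log scale with |rho| < 1 (the function name and innovation scaling are my assumptions, not the library's implementation):

```python
import math
import random

def ar1_lognormal(a, sigma, rho, n, seed=42):
    # Simulate r_t = rho * r_{t-1} + e_t on the log scale, then return
    # a * exp(r_t). The innovation scale sigma * sqrt(1 - rho^2) keeps
    # the marginal standard deviation of r_t equal to sigma (|rho| < 1).
    rng = random.Random(seed)
    innov = sigma * math.sqrt(1.0 - rho * rho)
    series, prev = [], 0.0
    for _ in range(n):
        prev = rho * prev + rng.gauss(0.0, innov)
        series.append(a * math.exp(prev))
    return series
```

Because the variates are exponentiated, every value in the returned series is strictly positive, matching the lognormal interpretation.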
def pack_iterable(messages): '''Pack an iterable of messages in the TCP protocol format''' # [ 4-byte body size ] # [ 4-byte num messages ] # [ 4-byte message #1 size ][ N-byte binary data ] # ... (repeated <num_messages> times) return pack_string( struct.pack('>l', len(messages)) + ''.join(map(pack_string, messages)))
Pack an iterable of messages in the TCP protocol format
Below is the instruction that describes the task: ### Input: Pack an iterable of messages in the TCP protocol format ### Response: def pack_iterable(messages): '''Pack an iterable of messages in the TCP protocol format''' # [ 4-byte body size ] # [ 4-byte num messages ] # [ 4-byte message #1 size ][ N-byte binary data ] # ... (repeated <num_messages> times) return pack_string( struct.pack('>l', len(messages)) + ''.join(map(pack_string, messages)))
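The framing described in the `pack_iterable` record above depends on a `pack_string` helper that is not included. A self-contained sketch of both, assuming the conventional length-prefixed layout the comments describe (bytes rather than str, since the length prefixes frame binary data):

```python
import struct

def pack_string(data):
    # [ 4-byte big-endian size ][ N-byte binary data ]
    return struct.pack('>l', len(data)) + data

def pack_iterable(messages):
    # [ 4-byte body size ][ 4-byte num messages ][ framed message ]*
    body = struct.pack('>l', len(messages)) + b''.join(
        pack_string(m) for m in messages)
    return pack_string(body)
```

The outer `pack_string` call means the leading 4 bytes always equal the total length of everything that follows, which is what lets a receiver read the frame off a TCP stream without a delimiter.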
def is_collection(self, path, environ): """Return True, if path maps to an existing collection resource. This method should only be used, if no other information is queried for <path>. Otherwise a _DAVResource should be created first. """ res = self.get_resource_inst(path, environ) return res and res.is_collection
Return True, if path maps to an existing collection resource. This method should only be used, if no other information is queried for <path>. Otherwise a _DAVResource should be created first.
Below is the instruction that describes the task: ### Input: Return True, if path maps to an existing collection resource. This method should only be used, if no other information is queried for <path>. Otherwise a _DAVResource should be created first. ### Response: def is_collection(self, path, environ): """Return True, if path maps to an existing collection resource. This method should only be used, if no other information is queried for <path>. Otherwise a _DAVResource should be created first. """ res = self.get_resource_inst(path, environ) return res and res.is_collection
def set_dwelling_current(self, settings): ''' Sets the amperage of each motor for when it is dwelling. Values are initialized from the `robot_config.log_current` values, and can then be changed through this method by other parts of the API. For example, `Pipette` setting the dwelling-current of it's pipette, depending on what model pipette it is. settings Dict with axes as valies (e.g.: 'X', 'Y', 'Z', 'A', 'B', or 'C') and floating point number for current (generally between 0.1 and 2) ''' self._dwelling_current_settings['now'].update(settings) # if an axis specified in the `settings` is currently dwelling, # reset it's current to the new dwelling-current value dwelling_axes_to_update = { axis: amps for axis, amps in self._dwelling_current_settings['now'].items() if self._active_axes.get(axis) is False if self.current[axis] != amps } if dwelling_axes_to_update: self._save_current(dwelling_axes_to_update, axes_active=False)
Sets the amperage of each motor for when it is dwelling. Values are initialized from the `robot_config.log_current` values, and can then be changed through this method by other parts of the API. For example, `Pipette` setting the dwelling-current of its pipette, depending on what model pipette it is. settings Dict with axes as values (e.g.: 'X', 'Y', 'Z', 'A', 'B', or 'C') and floating point number for current (generally between 0.1 and 2)
Below is the the instruction that describes the task: ### Input: Sets the amperage of each motor for when it is dwelling. Values are initialized from the `robot_config.log_current` values, and can then be changed through this method by other parts of the API. For example, `Pipette` setting the dwelling-current of it's pipette, depending on what model pipette it is. settings Dict with axes as valies (e.g.: 'X', 'Y', 'Z', 'A', 'B', or 'C') and floating point number for current (generally between 0.1 and 2) ### Response: def set_dwelling_current(self, settings): ''' Sets the amperage of each motor for when it is dwelling. Values are initialized from the `robot_config.log_current` values, and can then be changed through this method by other parts of the API. For example, `Pipette` setting the dwelling-current of it's pipette, depending on what model pipette it is. settings Dict with axes as valies (e.g.: 'X', 'Y', 'Z', 'A', 'B', or 'C') and floating point number for current (generally between 0.1 and 2) ''' self._dwelling_current_settings['now'].update(settings) # if an axis specified in the `settings` is currently dwelling, # reset it's current to the new dwelling-current value dwelling_axes_to_update = { axis: amps for axis, amps in self._dwelling_current_settings['now'].items() if self._active_axes.get(axis) is False if self.current[axis] != amps } if dwelling_axes_to_update: self._save_current(dwelling_axes_to_update, axes_active=False)
def load(path, **kw): """python 2 + 3 compatible version of json.load. :param kw: Keyword parameters are passed to json.load :return: The python object read from path. """ _kw = {} if PY3: # pragma: no cover _kw['encoding'] = 'utf-8' with open(str(path), **_kw) as fp: return json.load(fp, **kw)
python 2 + 3 compatible version of json.load. :param kw: Keyword parameters are passed to json.load :return: The python object read from path.
Below is the instruction that describes the task: ### Input: python 2 + 3 compatible version of json.load. :param kw: Keyword parameters are passed to json.load :return: The python object read from path. ### Response: def load(path, **kw): """python 2 + 3 compatible version of json.load. :param kw: Keyword parameters are passed to json.load :return: The python object read from path. """ _kw = {} if PY3: # pragma: no cover _kw['encoding'] = 'utf-8' with open(str(path), **_kw) as fp: return json.load(fp, **kw)
def compile(self, expr, params=None, limit=None): """ Translate expression to one or more queries according to backend target Returns ------- output : single query or list of queries """ query_ast = self._build_ast_ensure_limit(expr, limit, params=params) return query_ast.compile()
Translate expression to one or more queries according to backend target Returns ------- output : single query or list of queries
Below is the instruction that describes the task: ### Input: Translate expression to one or more queries according to backend target Returns ------- output : single query or list of queries ### Response: def compile(self, expr, params=None, limit=None): """ Translate expression to one or more queries according to backend target Returns ------- output : single query or list of queries """ query_ast = self._build_ast_ensure_limit(expr, limit, params=params) return query_ast.compile()
def _candidate_tempdir_list(): """Generate a list of candidate temporary directories which _get_default_tempdir will try.""" dirlist = [] # First, try the environment. for envname in 'TMPDIR', 'TEMP', 'TMP': dirname = _os.getenv(envname) if dirname: dirlist.append(dirname) # Failing that, try OS-specific locations. if _os.name == 'nt': dirlist.extend([ r'c:\temp', r'c:\tmp', r'\temp', r'\tmp' ]) else: dirlist.extend([ '/tmp', '/var/tmp', '/usr/tmp' ]) # As a last resort, the current directory. try: dirlist.append(_os.getcwd()) except (AttributeError, OSError): dirlist.append(_os.curdir) return dirlist
Generate a list of candidate temporary directories which _get_default_tempdir will try.
Below is the instruction that describes the task: ### Input: Generate a list of candidate temporary directories which _get_default_tempdir will try. ### Response: def _candidate_tempdir_list(): """Generate a list of candidate temporary directories which _get_default_tempdir will try.""" dirlist = [] # First, try the environment. for envname in 'TMPDIR', 'TEMP', 'TMP': dirname = _os.getenv(envname) if dirname: dirlist.append(dirname) # Failing that, try OS-specific locations. if _os.name == 'nt': dirlist.extend([ r'c:\temp', r'c:\tmp', r'\temp', r'\tmp' ]) else: dirlist.extend([ '/tmp', '/var/tmp', '/usr/tmp' ]) # As a last resort, the current directory. try: dirlist.append(_os.getcwd()) except (AttributeError, OSError): dirlist.append(_os.curdir) return dirlist
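The tempdir-candidate record above uses a renamed `_os` import internal to the stdlib; with a plain `import os` the same lookup order can be reproduced directly (hypothetical function name):

```python
import os

def candidate_tempdirs():
    # Mirror the lookup order: environment variables first, then
    # OS-specific defaults, then the current directory as a last resort.
    dirs = [os.getenv(v) for v in ('TMPDIR', 'TEMP', 'TMP') if os.getenv(v)]
    if os.name == 'nt':
        dirs += [r'c:\temp', r'c:\tmp', r'\temp', r'\tmp']
    else:
        dirs += ['/tmp', '/var/tmp', '/usr/tmp']
    try:
        dirs.append(os.getcwd())
    except OSError:
        dirs.append(os.curdir)
    return dirs
```

On any platform the result contains at least the OS defaults plus the current directory, so the caller always has something to try.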
def _recursiveReduce(mapFunc, reductionFunc, scan, *iterables): """Generates the recursive reduction tree. Used by mapReduce.""" if iterables: half = min(len(x) // 2 for x in iterables) data_left = [list(x)[:half] for x in iterables] data_right = [list(x)[half:] for x in iterables] else: data_left = data_right = [[]] # Submit the left and right parts of the reduction out_futures = [None, None] out_results = [None, None] for index, data in enumerate([data_left, data_right]): if any(len(x) <= 1 for x in data): out_results[index] = mapFunc(*list(zip(*data))[0]) else: out_futures[index] = submit( _recursiveReduce, mapFunc, reductionFunc, scan, *data ) # Wait for the results for index, future in enumerate(out_futures): if future: out_results[index] = future.result() # Apply a scan if needed if scan: last_results = copy.copy(out_results) if type(out_results[0]) is not list: out_results[0] = [out_results[0]] else: last_results[0] = out_results[0][-1] if type(out_results[1]) is list: out_results[0].extend(out_results[1][:-1]) last_results[1] = out_results[1][-1] out_results[0].append(reductionFunc(*last_results)) return out_results[0] return reductionFunc(*out_results)
Generates the recursive reduction tree. Used by mapReduce.
Below is the the instruction that describes the task: ### Input: Generates the recursive reduction tree. Used by mapReduce. ### Response: def _recursiveReduce(mapFunc, reductionFunc, scan, *iterables): """Generates the recursive reduction tree. Used by mapReduce.""" if iterables: half = min(len(x) // 2 for x in iterables) data_left = [list(x)[:half] for x in iterables] data_right = [list(x)[half:] for x in iterables] else: data_left = data_right = [[]] # Submit the left and right parts of the reduction out_futures = [None, None] out_results = [None, None] for index, data in enumerate([data_left, data_right]): if any(len(x) <= 1 for x in data): out_results[index] = mapFunc(*list(zip(*data))[0]) else: out_futures[index] = submit( _recursiveReduce, mapFunc, reductionFunc, scan, *data ) # Wait for the results for index, future in enumerate(out_futures): if future: out_results[index] = future.result() # Apply a scan if needed if scan: last_results = copy.copy(out_results) if type(out_results[0]) is not list: out_results[0] = [out_results[0]] else: last_results[0] = out_results[0][-1] if type(out_results[1]) is list: out_results[0].extend(out_results[1][:-1]) last_results[1] = out_results[1][-1] out_results[0].append(reductionFunc(*last_results)) return out_results[0] return reductionFunc(*out_results)
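Stripped of SCOOP's futures and the optional scan bookkeeping, the core of the `_recursiveReduce` record above is a divide-and-combine recursion. A sequential sketch over a single list (the names are mine; the real function handles multiple iterables and parallel submission):

```python
def tree_reduce(map_func, reduce_func, items):
    # Map single leaves; otherwise split the list in half, reduce each
    # half recursively, and combine the two partial results.
    if len(items) == 1:
        return map_func(items[0])
    half = len(items) // 2
    left = tree_reduce(map_func, reduce_func, items[:half])
    right = tree_reduce(map_func, reduce_func, items[half:])
    return reduce_func(left, right)
```

The halving is what makes the parallel version attractive: the two recursive calls are independent, so each can be submitted as a separate future, giving a reduction tree of depth O(log n) instead of a linear fold.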
def visit_Return(self, node: ast.Return) -> Any: # pylint: disable=no-self-use """Raise an exception that this node is unexpected.""" raise AssertionError("Unexpected return node during the re-computation: {}".format(ast.dump(node)))
Raise an exception that this node is unexpected.
Below is the instruction that describes the task: ### Input: Raise an exception that this node is unexpected. ### Response: def visit_Return(self, node: ast.Return) -> Any: # pylint: disable=no-self-use """Raise an exception that this node is unexpected.""" raise AssertionError("Unexpected return node during the re-computation: {}".format(ast.dump(node)))
def __build_url(self, api_call, **kwargs): """Builds the api query""" kwargs['key'] = self.api_key if 'language' not in kwargs: kwargs['language'] = self.language if 'format' not in kwargs: kwargs['format'] = self.__format api_query = urlencode(kwargs) return "{0}{1}?{2}".format(urls.BASE_URL, api_call, api_query)
Builds the api query
Below is the instruction that describes the task: ### Input: Builds the api query ### Response: def __build_url(self, api_call, **kwargs): """Builds the api query""" kwargs['key'] = self.api_key if 'language' not in kwargs: kwargs['language'] = self.language if 'format' not in kwargs: kwargs['format'] = self.__format api_query = urlencode(kwargs) return "{0}{1}?{2}".format(urls.BASE_URL, api_call, api_query)
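The `__build_url` record above pulls `urls.BASE_URL` and instance defaults from its own module; a standalone sketch of the same query-building pattern, with a placeholder base URL and default values standing in for those attributes:

```python
from urllib.parse import urlencode

BASE_URL = 'https://api.example.com/'  # placeholder for urls.BASE_URL

def build_url(api_call, api_key, language='en', fmt='json', **kwargs):
    # Fill in defaults only when the caller did not override them,
    # then append the urlencoded query string to the endpoint path.
    params = dict(kwargs)
    params['key'] = api_key
    params.setdefault('language', language)
    params.setdefault('format', fmt)
    return '{0}{1}?{2}'.format(BASE_URL, api_call, urlencode(params))
```

`urlencode` handles the percent-escaping, so caller-supplied parameter values may safely contain spaces or reserved characters.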
def fcm_send_bulk_data_messages( api_key, registration_ids=None, condition=None, collapse_key=None, delay_while_idle=False, time_to_live=None, restricted_package_name=None, low_priority=False, dry_run=False, data_message=None, content_available=None, timeout=5, json_encoder=None): """ Arguments correspond to those from pyfcm/fcm.py. Sends push message to multiple devices, can send to over 1000 devices Args: api_key registration_ids (list): FCM device registration IDs. data_message (dict): Data message payload to send alone or with the notification message Keyword Args: collapse_key (str, optional): Identifier for a group of messages that can be collapsed so that only the last message gets sent when delivery can be resumed. Defaults to ``None``. delay_while_idle (bool, optional): If ``True`` indicates that the message should not be sent until the device becomes active. time_to_live (int, optional): How long (in seconds) the message should be kept in FCM storage if the device is offline. The maximum time to live supported is 4 weeks. Defaults to ``None`` which uses the FCM default of 4 weeks. low_priority (boolean, optional): Whether to send notification with the low priority flag. Defaults to ``False``. restricted_package_name (str, optional): Package name of the application where the registration IDs must match in order to receive the message. Defaults to ``None``. dry_run (bool, optional): If ``True`` no message will be sent but request will be tested. Returns: :tuple:`multicast_id(long), success(int), failure(int), canonical_ids(int), results(list)`: Response from FCM server. Raises: AuthenticationError: If :attr:`api_key` is not set or provided or there is an error authenticating the sender. FCMServerError: Internal server error or timeout error on Firebase cloud messaging server InvalidDataError: Invalid data provided InternalPackageError: JSON parsing error, mostly from changes in the response of FCM, create a new github issue to resolve it. 
""" push_service = FCMNotification( api_key=SETTINGS.get("FCM_SERVER_KEY") if api_key is None else api_key, json_encoder=json_encoder, ) return push_service.multiple_devices_data_message( registration_ids=registration_ids, condition=condition, collapse_key=collapse_key, delay_while_idle=delay_while_idle, time_to_live=time_to_live, restricted_package_name=restricted_package_name, low_priority=low_priority, dry_run=dry_run, data_message=data_message, content_available=content_available, timeout=timeout )
Arguments correspond to those from pyfcm/fcm.py. Sends push message to multiple devices, can send to over 1000 devices Args: api_key registration_ids (list): FCM device registration IDs. data_message (dict): Data message payload to send alone or with the notification message Keyword Args: collapse_key (str, optional): Identifier for a group of messages that can be collapsed so that only the last message gets sent when delivery can be resumed. Defaults to ``None``. delay_while_idle (bool, optional): If ``True`` indicates that the message should not be sent until the device becomes active. time_to_live (int, optional): How long (in seconds) the message should be kept in FCM storage if the device is offline. The maximum time to live supported is 4 weeks. Defaults to ``None`` which uses the FCM default of 4 weeks. low_priority (boolean, optional): Whether to send notification with the low priority flag. Defaults to ``False``. restricted_package_name (str, optional): Package name of the application where the registration IDs must match in order to receive the message. Defaults to ``None``. dry_run (bool, optional): If ``True`` no message will be sent but request will be tested. Returns: :tuple:`multicast_id(long), success(int), failure(int), canonical_ids(int), results(list)`: Response from FCM server. Raises: AuthenticationError: If :attr:`api_key` is not set or provided or there is an error authenticating the sender. FCMServerError: Internal server error or timeout error on Firebase cloud messaging server InvalidDataError: Invalid data provided InternalPackageError: JSON parsing error, mostly from changes in the response of FCM, create a new github issue to resolve it.
Below is the instruction that describes the task: ### Input: Arguments correspond to those from pyfcm/fcm.py. Sends push message to multiple devices, can send to over 1000 devices Args: api_key registration_ids (list): FCM device registration IDs. data_message (dict): Data message payload to send alone or with the notification message Keyword Args: collapse_key (str, optional): Identifier for a group of messages that can be collapsed so that only the last message gets sent when delivery can be resumed. Defaults to ``None``. delay_while_idle (bool, optional): If ``True`` indicates that the message should not be sent until the device becomes active. time_to_live (int, optional): How long (in seconds) the message should be kept in FCM storage if the device is offline. The maximum time to live supported is 4 weeks. Defaults to ``None`` which uses the FCM default of 4 weeks. low_priority (boolean, optional): Whether to send notification with the low priority flag. Defaults to ``False``. restricted_package_name (str, optional): Package name of the application where the registration IDs must match in order to receive the message. Defaults to ``None``. dry_run (bool, optional): If ``True`` no message will be sent but request will be tested. Returns: :tuple:`multicast_id(long), success(int), failure(int), canonical_ids(int), results(list)`: Response from FCM server. Raises: AuthenticationError: If :attr:`api_key` is not set or provided or there is an error authenticating the sender. FCMServerError: Internal server error or timeout error on Firebase cloud messaging server InvalidDataError: Invalid data provided InternalPackageError: JSON parsing error, mostly from changes in the response of FCM, create a new github issue to resolve it.
### Response: def fcm_send_bulk_data_messages( api_key, registration_ids=None, condition=None, collapse_key=None, delay_while_idle=False, time_to_live=None, restricted_package_name=None, low_priority=False, dry_run=False, data_message=None, content_available=None, timeout=5, json_encoder=None): """ Arguments correspond to those from pyfcm/fcm.py. Sends push message to multiple devices, can send to over 1000 devices Args: api_key registration_ids (list): FCM device registration IDs. data_message (dict): Data message payload to send alone or with the notification message Keyword Args: collapse_key (str, optional): Identifier for a group of messages that can be collapsed so that only the last message gets sent when delivery can be resumed. Defaults to ``None``. delay_while_idle (bool, optional): If ``True`` indicates that the message should not be sent until the device becomes active. time_to_live (int, optional): How long (in seconds) the message should be kept in FCM storage if the device is offline. The maximum time to live supported is 4 weeks. Defaults to ``None`` which uses the FCM default of 4 weeks. low_priority (boolean, optional): Whether to send notification with the low priority flag. Defaults to ``False``. restricted_package_name (str, optional): Package name of the application where the registration IDs must match in order to receive the message. Defaults to ``None``. dry_run (bool, optional): If ``True`` no message will be sent but request will be tested. Returns: :tuple:`multicast_id(long), success(int), failure(int), canonical_ids(int), results(list)`: Response from FCM server. Raises: AuthenticationError: If :attr:`api_key` is not set or provided or there is an error authenticating the sender. FCMServerError: Internal server error or timeout error on Firebase cloud messaging server InvalidDataError: Invalid data provided InternalPackageError: JSON parsing error, mostly from changes in the response of FCM, create a new github issue to resolve it. 
""" push_service = FCMNotification( api_key=SETTINGS.get("FCM_SERVER_KEY") if api_key is None else api_key, json_encoder=json_encoder, ) return push_service.multiple_devices_data_message( registration_ids=registration_ids, condition=condition, collapse_key=collapse_key, delay_while_idle=delay_while_idle, time_to_live=time_to_live, restricted_package_name=restricted_package_name, low_priority=low_priority, dry_run=dry_run, data_message=data_message, content_available=content_available, timeout=timeout )
def calculate_ts_mac(ts, credentials): """Calculates a message authorization code (MAC) for a timestamp.""" normalized = ('hawk.{hawk_ver}.ts\n{ts}\n' .format(hawk_ver=HAWK_VER, ts=ts)) log.debug(u'normalized resource for ts mac calc: {norm}' .format(norm=normalized)) digestmod = getattr(hashlib, credentials['algorithm']) if not isinstance(normalized, six.binary_type): normalized = normalized.encode('utf8') key = credentials['key'] if not isinstance(key, six.binary_type): key = key.encode('ascii') result = hmac.new(key, normalized, digestmod) return b64encode(result.digest())
Calculates a message authorization code (MAC) for a timestamp.
Below is the instruction that describes the task: ### Input: Calculates a message authorization code (MAC) for a timestamp. ### Response: def calculate_ts_mac(ts, credentials): """Calculates a message authorization code (MAC) for a timestamp.""" normalized = ('hawk.{hawk_ver}.ts\n{ts}\n' .format(hawk_ver=HAWK_VER, ts=ts)) log.debug(u'normalized resource for ts mac calc: {norm}' .format(norm=normalized)) digestmod = getattr(hashlib, credentials['algorithm']) if not isinstance(normalized, six.binary_type): normalized = normalized.encode('utf8') key = credentials['key'] if not isinstance(key, six.binary_type): key = key.encode('ascii') result = hmac.new(key, normalized, digestmod) return b64encode(result.digest())
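The `calculate_ts_mac` record reduces to a small self-contained sketch using only the standard library. `HAWK_VER` and the credential values below are assumptions for illustration — the original imports its version constant from elsewhere in the package.

```python
import hashlib
import hmac
from base64 import b64encode

HAWK_VER = 1  # assumed protocol version; the original module defines this constant

def ts_mac(ts, credentials):
    # Same normalized string as calculate_ts_mac: "hawk.<ver>.ts\n<ts>\n"
    normalized = 'hawk.{hawk_ver}.ts\n{ts}\n'.format(
        hawk_ver=HAWK_VER, ts=ts).encode('utf8')
    digestmod = getattr(hashlib, credentials['algorithm'])
    key = credentials['key'].encode('ascii')
    return b64encode(hmac.new(key, normalized, digestmod).digest())
```

The result is deterministic for a given timestamp and key, which is what lets the receiver recompute and compare the MAC.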
def make_pass_decorator(object_type, ensure=False): """Given an object type this creates a decorator that will work similar to :func:`pass_obj` but instead of passing the object of the current context, it will find the innermost context of type :func:`object_type`. This generates a decorator that works roughly like this:: from functools import update_wrapper def decorator(f): @pass_context def new_func(ctx, *args, **kwargs): obj = ctx.find_object(object_type) return ctx.invoke(f, obj, *args, **kwargs) return update_wrapper(new_func, f) return decorator :param object_type: the type of the object to pass. :param ensure: if set to `True`, a new object will be created and remembered on the context if it's not there yet. """ def decorator(f): def new_func(*args, **kwargs): ctx = get_current_context() if ensure: obj = ctx.ensure_object(object_type) else: obj = ctx.find_object(object_type) if obj is None: raise RuntimeError('Managed to invoke callback without a ' 'context object of type %r existing' % object_type.__name__) return ctx.invoke(f, obj, *args, **kwargs) return update_wrapper(new_func, f) return decorator
Given an object type this creates a decorator that will work similar to :func:`pass_obj` but instead of passing the object of the current context, it will find the innermost context of type :func:`object_type`. This generates a decorator that works roughly like this:: from functools import update_wrapper def decorator(f): @pass_context def new_func(ctx, *args, **kwargs): obj = ctx.find_object(object_type) return ctx.invoke(f, obj, *args, **kwargs) return update_wrapper(new_func, f) return decorator :param object_type: the type of the object to pass. :param ensure: if set to `True`, a new object will be created and remembered on the context if it's not there yet.
Below is the instruction that describes the task: ### Input: Given an object type this creates a decorator that will work similar to :func:`pass_obj` but instead of passing the object of the current context, it will find the innermost context of type :func:`object_type`. This generates a decorator that works roughly like this:: from functools import update_wrapper def decorator(f): @pass_context def new_func(ctx, *args, **kwargs): obj = ctx.find_object(object_type) return ctx.invoke(f, obj, *args, **kwargs) return update_wrapper(new_func, f) return decorator :param object_type: the type of the object to pass. :param ensure: if set to `True`, a new object will be created and remembered on the context if it's not there yet. ### Response: def make_pass_decorator(object_type, ensure=False): """Given an object type this creates a decorator that will work similar to :func:`pass_obj` but instead of passing the object of the current context, it will find the innermost context of type :func:`object_type`. This generates a decorator that works roughly like this:: from functools import update_wrapper def decorator(f): @pass_context def new_func(ctx, *args, **kwargs): obj = ctx.find_object(object_type) return ctx.invoke(f, obj, *args, **kwargs) return update_wrapper(new_func, f) return decorator :param object_type: the type of the object to pass. :param ensure: if set to `True`, a new object will be created and remembered on the context if it's not there yet. """ def decorator(f): def new_func(*args, **kwargs): ctx = get_current_context() if ensure: obj = ctx.ensure_object(object_type) else: obj = ctx.find_object(object_type) if obj is None: raise RuntimeError('Managed to invoke callback without a ' 'context object of type %r existing' % object_type.__name__) return ctx.invoke(f, obj, *args, **kwargs) return update_wrapper(new_func, f) return decorator
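The `make_pass_decorator` pattern — a decorator factory that looks up, and optionally creates, a context object before invoking the wrapped callback — can be demonstrated without Click. The `Context` class below is a hypothetical stand-in for Click's context stack, not part of the original API.

```python
from functools import update_wrapper

class Context:
    """Hypothetical stand-in for Click's context: a registry keyed by type."""
    def __init__(self):
        self._objects = {}

    def ensure_object(self, object_type):
        # Create-and-remember semantics, as with ensure=True in the original.
        return self._objects.setdefault(object_type, object_type())

    def find_object(self, object_type):
        return self._objects.get(object_type)

_current_context = Context()  # the real code calls get_current_context()

def make_pass_decorator(object_type, ensure=False):
    def decorator(f):
        def new_func(*args, **kwargs):
            ctx = _current_context
            obj = (ctx.ensure_object(object_type) if ensure
                   else ctx.find_object(object_type))
            if obj is None:
                raise RuntimeError('no context object of type %r'
                                   % object_type.__name__)
            return f(obj, *args, **kwargs)
        return update_wrapper(new_func, f)
    return decorator

class Config:
    debug = True

@make_pass_decorator(Config, ensure=True)
def show_debug(config):
    return config.debug
```

Calling `show_debug()` transparently receives the `Config` instance as its first argument, which is the whole point of the factory.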
def delete_operation(self, name, options=None): """ Deletes a long-running operation. This method indicates that the client is no longer interested in the operation result. It does not cancel the operation. If the server doesn't support this method, it returns ``google.rpc.Code.UNIMPLEMENTED``. Example: >>> from google.gapic.longrunning import operations_client >>> api = operations_client.OperationsClient() >>> name = '' >>> api.delete_operation(name) Args: name (string): The name of the operation resource to be deleted. options (:class:`google.gax.CallOptions`): Overrides the default settings for this call, e.g, timeout, retries etc. Raises: :exc:`google.gax.errors.GaxError` if the RPC is aborted. :exc:`ValueError` if the parameters are invalid. """ # Create the request object. request = operations_pb2.DeleteOperationRequest(name=name) self._delete_operation(request, options)
Deletes a long-running operation. This method indicates that the client is no longer interested in the operation result. It does not cancel the operation. If the server doesn't support this method, it returns ``google.rpc.Code.UNIMPLEMENTED``. Example: >>> from google.gapic.longrunning import operations_client >>> api = operations_client.OperationsClient() >>> name = '' >>> api.delete_operation(name) Args: name (string): The name of the operation resource to be deleted. options (:class:`google.gax.CallOptions`): Overrides the default settings for this call, e.g, timeout, retries etc. Raises: :exc:`google.gax.errors.GaxError` if the RPC is aborted. :exc:`ValueError` if the parameters are invalid.
Below is the instruction that describes the task: ### Input: Deletes a long-running operation. This method indicates that the client is no longer interested in the operation result. It does not cancel the operation. If the server doesn't support this method, it returns ``google.rpc.Code.UNIMPLEMENTED``. Example: >>> from google.gapic.longrunning import operations_client >>> api = operations_client.OperationsClient() >>> name = '' >>> api.delete_operation(name) Args: name (string): The name of the operation resource to be deleted. options (:class:`google.gax.CallOptions`): Overrides the default settings for this call, e.g, timeout, retries etc. Raises: :exc:`google.gax.errors.GaxError` if the RPC is aborted. :exc:`ValueError` if the parameters are invalid. ### Response: def delete_operation(self, name, options=None): """ Deletes a long-running operation. This method indicates that the client is no longer interested in the operation result. It does not cancel the operation. If the server doesn't support this method, it returns ``google.rpc.Code.UNIMPLEMENTED``. Example: >>> from google.gapic.longrunning import operations_client >>> api = operations_client.OperationsClient() >>> name = '' >>> api.delete_operation(name) Args: name (string): The name of the operation resource to be deleted. options (:class:`google.gax.CallOptions`): Overrides the default settings for this call, e.g, timeout, retries etc. Raises: :exc:`google.gax.errors.GaxError` if the RPC is aborted. :exc:`ValueError` if the parameters are invalid. """ # Create the request object. request = operations_pb2.DeleteOperationRequest(name=name) self._delete_operation(request, options)
def MetaField(self, name, t=None): """ Creates an instance of a metadata field of the dataset. It can be used in building expressions or conditions for projection or selection. Notice that this function is equivalent to call:: dataset["name"] If the MetaField is used in a region projection (:meth:`~.reg_project`), the user has also to specify the type of the metadata attribute that is selected:: dataset.reg_project(new_field_dict={'new_field': dataset['name', 'string']}) :param name: the name of the metadata that is considered :param t: the type of the metadata attribute {string, int, boolean, double} :return: a MetaField instance """ return MetaField(name=name, index=self.__index, t=t)
Creates an instance of a metadata field of the dataset. It can be used in building expressions or conditions for projection or selection. Notice that this function is equivalent to call:: dataset["name"] If the MetaField is used in a region projection (:meth:`~.reg_project`), the user has also to specify the type of the metadata attribute that is selected:: dataset.reg_project(new_field_dict={'new_field': dataset['name', 'string']}) :param name: the name of the metadata that is considered :param t: the type of the metadata attribute {string, int, boolean, double} :return: a MetaField instance
Below is the instruction that describes the task: ### Input: Creates an instance of a metadata field of the dataset. It can be used in building expressions or conditions for projection or selection. Notice that this function is equivalent to call:: dataset["name"] If the MetaField is used in a region projection (:meth:`~.reg_project`), the user has also to specify the type of the metadata attribute that is selected:: dataset.reg_project(new_field_dict={'new_field': dataset['name', 'string']}) :param name: the name of the metadata that is considered :param t: the type of the metadata attribute {string, int, boolean, double} :return: a MetaField instance ### Response: def MetaField(self, name, t=None): """ Creates an instance of a metadata field of the dataset. It can be used in building expressions or conditions for projection or selection. Notice that this function is equivalent to call:: dataset["name"] If the MetaField is used in a region projection (:meth:`~.reg_project`), the user has also to specify the type of the metadata attribute that is selected:: dataset.reg_project(new_field_dict={'new_field': dataset['name', 'string']}) :param name: the name of the metadata that is considered :param t: the type of the metadata attribute {string, int, boolean, double} :return: a MetaField instance """ return MetaField(name=name, index=self.__index, t=t)
def init_app(application): """ Associates the error handler """ for code in werkzeug.exceptions.default_exceptions: application.register_error_handler(code, handle_http_exception)
Associates the error handler
Below is the instruction that describes the task: ### Input: Associates the error handler ### Response: def init_app(application): """ Associates the error handler """ for code in werkzeug.exceptions.default_exceptions: application.register_error_handler(code, handle_http_exception)
def iterate_from_vcf(infile, sample): '''iterate over a vcf-formatted file. *infile* can be any iterator over lines. The function yields named tuples of the type :class:`pysam.Pileup.PileupSubstitution` or :class:`pysam.Pileup.PileupIndel`. Positions without a snp will be skipped. This method is wasteful and written to support some legacy code that expects samtools pileup output. Better use the vcf parser directly. ''' vcf = pysam.VCF() vcf.connect(infile) if sample not in vcf.getsamples(): raise KeyError("sample %s not in vcf file" % sample) for row in vcf.fetch(): result = vcf2pileup(row, sample) if result: yield result
iterate over a vcf-formatted file. *infile* can be any iterator over lines. The function yields named tuples of the type :class:`pysam.Pileup.PileupSubstitution` or :class:`pysam.Pileup.PileupIndel`. Positions without a snp will be skipped. This method is wasteful and written to support some legacy code that expects samtools pileup output. Better use the vcf parser directly.
Below is the instruction that describes the task: ### Input: iterate over a vcf-formatted file. *infile* can be any iterator over lines. The function yields named tuples of the type :class:`pysam.Pileup.PileupSubstitution` or :class:`pysam.Pileup.PileupIndel`. Positions without a snp will be skipped. This method is wasteful and written to support some legacy code that expects samtools pileup output. Better use the vcf parser directly. ### Response: def iterate_from_vcf(infile, sample): '''iterate over a vcf-formatted file. *infile* can be any iterator over lines. The function yields named tuples of the type :class:`pysam.Pileup.PileupSubstitution` or :class:`pysam.Pileup.PileupIndel`. Positions without a snp will be skipped. This method is wasteful and written to support some legacy code that expects samtools pileup output. Better use the vcf parser directly. ''' vcf = pysam.VCF() vcf.connect(infile) if sample not in vcf.getsamples(): raise KeyError("sample %s not in vcf file" % sample) for row in vcf.fetch(): result = vcf2pileup(row, sample) if result: yield result
def force_encoding(value, encoding='utf-8'): """ Return a string encoded in the provided encoding """ if not isinstance(value, (str, unicode)): value = str(value) if isinstance(value, unicode): value = value.encode(encoding) elif encoding != 'utf-8': value = value.decode('utf-8').encode(encoding) return value
Return a string encoded in the provided encoding
Below is the instruction that describes the task: ### Input: Return a string encoded in the provided encoding ### Response: def force_encoding(value, encoding='utf-8'): """ Return a string encoded in the provided encoding """ if not isinstance(value, (str, unicode)): value = str(value) if isinstance(value, unicode): value = value.encode(encoding) elif encoding != 'utf-8': value = value.decode('utf-8').encode(encoding) return value
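`force_encoding` above is Python 2 code (it references the `unicode` type). A rough Python 3 equivalent of the same idea — coerce any value to bytes in the requested encoding, treating incoming bytes as UTF-8 — might look like this; the name `force_encoding_py3` is just for this sketch.

```python
def force_encoding_py3(value, encoding='utf-8'):
    # bytes are assumed to be UTF-8 and re-encoded only when a different
    # target encoding is requested, mirroring the Python 2 branch above.
    if isinstance(value, bytes):
        if encoding.lower() in ('utf-8', 'utf8'):
            return value
        return value.decode('utf-8').encode(encoding)
    if not isinstance(value, str):
        value = str(value)
    return value.encode(encoding)
```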
def ConvBPDNMaskOptions(opt=None, method='admm'): """A wrapper function that dynamically defines a class derived from the Options class associated with one of the implementations of the Convolutional BPDN problem, and returns an object instantiated with the provided parameters. The wrapper is designed to allow the appropriate object to be created by calling this function using the same syntax as would be used if it were a class. The specific implementation is selected by use of an additional keyword argument 'method'. Valid values are as specified in the documentation for :func:`ConvBPDN`. """ # Assign base class depending on method selection argument base = cbpdnmsk_class_label_lookup(method).Options # Nested class with dynamically determined inheritance class ConvBPDNMaskOptions(base): def __init__(self, opt): super(ConvBPDNMaskOptions, self).__init__(opt) # Allow pickling of objects of type ConvBPDNOptions _fix_dynamic_class_lookup(ConvBPDNMaskOptions, method) # Return object of the nested class type return ConvBPDNMaskOptions(opt)
A wrapper function that dynamically defines a class derived from the Options class associated with one of the implementations of the Convolutional BPDN problem, and returns an object instantiated with the provided parameters. The wrapper is designed to allow the appropriate object to be created by calling this function using the same syntax as would be used if it were a class. The specific implementation is selected by use of an additional keyword argument 'method'. Valid values are as specified in the documentation for :func:`ConvBPDN`.
Below is the instruction that describes the task: ### Input: A wrapper function that dynamically defines a class derived from the Options class associated with one of the implementations of the Convolutional BPDN problem, and returns an object instantiated with the provided parameters. The wrapper is designed to allow the appropriate object to be created by calling this function using the same syntax as would be used if it were a class. The specific implementation is selected by use of an additional keyword argument 'method'. Valid values are as specified in the documentation for :func:`ConvBPDN`. ### Response: def ConvBPDNMaskOptions(opt=None, method='admm'): """A wrapper function that dynamically defines a class derived from the Options class associated with one of the implementations of the Convolutional BPDN problem, and returns an object instantiated with the provided parameters. The wrapper is designed to allow the appropriate object to be created by calling this function using the same syntax as would be used if it were a class. The specific implementation is selected by use of an additional keyword argument 'method'. Valid values are as specified in the documentation for :func:`ConvBPDN`. """ # Assign base class depending on method selection argument base = cbpdnmsk_class_label_lookup(method).Options # Nested class with dynamically determined inheritance class ConvBPDNMaskOptions(base): def __init__(self, opt): super(ConvBPDNMaskOptions, self).__init__(opt) # Allow pickling of objects of type ConvBPDNOptions _fix_dynamic_class_lookup(ConvBPDNMaskOptions, method) # Return object of the nested class type return ConvBPDNMaskOptions(opt)
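The `ConvBPDNMaskOptions` wrapper illustrates a reusable trick: pick a base class at call time, define a nested subclass with that base, and instantiate it. A stripped-down sketch with made-up option classes (not SPORCO's real ones):

```python
class AdmmOptions(dict):
    """Hypothetical options class for an ADMM solver."""

class PgmOptions(dict):
    """Hypothetical options class for a PGM solver."""

_method_lookup = {'admm': AdmmOptions, 'pgm': PgmOptions}

def MakeOptions(opt=None, method='admm'):
    # The base class is only known once `method` is supplied.
    base = _method_lookup[method]

    # Nested class with dynamically determined inheritance.
    class Options(base):
        def __init__(self, opt):
            super().__init__(opt or {})

    return Options(opt)
```

Note that classes defined this way are not picklable out of the box, which is why the original calls `_fix_dynamic_class_lookup` before returning.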
def p_document_shorthand_with_fragments(self, p): """ document : selection_set fragment_list """ p[0] = Document(definitions=[Query(selections=p[1])] + p[2])
document : selection_set fragment_list
Below is the instruction that describes the task: ### Input: document : selection_set fragment_list ### Response: def p_document_shorthand_with_fragments(self, p): """ document : selection_set fragment_list """ p[0] = Document(definitions=[Query(selections=p[1])] + p[2])
def open_url(self, url, stale_after, parse_as_html = True, **kwargs): """ Download or retrieve from cache. url -- The URL to be downloaded, as a string. stale_after -- A network request for the url will be performed if the cached copy does not exist or if it exists but its age (in days) is larger or equal to the stale_after value. A non-positive value will force re-download. parse_as_html -- Parse the resource downloaded as HTML. This uses the lxml.html package to parse the resource leniently, thus it will not fail even for reasonably invalid HTML. This argument also decides the return type of this method; if True, then the return type is an ElementTree.Element root object; if False, the content of the resource is returned as a bytestring. Exceptions raised: BannedException -- If does_show_ban returns True. HTTPCodeNotOKError -- If the returned HTTP status code is not equal to 200. """ _LOGGER.info('open_url() received url: %s', url) today = datetime.date.today() threshold_date = today - datetime.timedelta(stale_after) downloaded = False with self._get_conn() as conn: rs = conn.execute(''' select content from cache where url = ? and date > ? 
''', (url, _date_to_sqlite_str(threshold_date)) ) row = rs.fetchone() retry_run = kwargs.get('retry_run', False) assert (not retry_run) or (retry_run and row is None) if row is None: file_obj = self._download(url).get_file_obj() downloaded = True else: file_obj = cStringIO.StringIO(zlib.decompress(row[0])) if parse_as_html: tree = lxml.html.parse(file_obj) tree.getroot().url = url appears_to_be_banned = False if self.does_show_ban(tree.getroot()): appears_to_be_banned = True if downloaded: message = ('Function {f} claims we have been banned, ' 'it was called with an element parsed from url ' '(downloaded, not from cache): {u}' .format(f = self.does_show_ban, u = url)) _LOGGER.error(message) _LOGGER.info('Deleting url %s from the cache (if it exists) ' 'because it triggered ban page cache poisoning ' 'exception', url) with self._get_conn() as conn: conn.execute('delete from cache where url = ?', [str(url)]) if downloaded: raise BannedException(message) else: return self.open_url(url, stale_after, retry_run = True) else: tree = file_obj.read() if downloaded: # make_links_absolute should only be called when the document has a base_url # attribute, which it has not when it has been loaded from the database. So, # this "if" is needed: if parse_as_html: tree.getroot().make_links_absolute(tree.getroot().base_url) to_store = lxml.html.tostring( tree, pretty_print = True, encoding = 'utf-8' ) else: to_store = tree to_store = zlib.compress(to_store, 8) with self._get_conn() as conn: conn.execute(''' insert or replace into cache (url, date, content) values (?, ?, ?) ''', ( str(url), _date_to_sqlite_str(today), sqlite3.Binary(to_store) ) ) return tree
Download or retrieve from cache. url -- The URL to be downloaded, as a string. stale_after -- A network request for the url will be performed if the cached copy does not exist or if it exists but its age (in days) is larger or equal to the stale_after value. A non-positive value will force re-download. parse_as_html -- Parse the resource downloaded as HTML. This uses the lxml.html package to parse the resource leniently, thus it will not fail even for reasonably invalid HTML. This argument also decides the return type of this method; if True, then the return type is an ElementTree.Element root object; if False, the content of the resource is returned as a bytestring. Exceptions raised: BannedException -- If does_show_ban returns True. HTTPCodeNotOKError -- If the returned HTTP status code is not equal to 200.
Below is the instruction that describes the task: ### Input: Download or retrieve from cache. url -- The URL to be downloaded, as a string. stale_after -- A network request for the url will be performed if the cached copy does not exist or if it exists but its age (in days) is larger or equal to the stale_after value. A non-positive value will force re-download. parse_as_html -- Parse the resource downloaded as HTML. This uses the lxml.html package to parse the resource leniently, thus it will not fail even for reasonably invalid HTML. This argument also decides the return type of this method; if True, then the return type is an ElementTree.Element root object; if False, the content of the resource is returned as a bytestring. Exceptions raised: BannedException -- If does_show_ban returns True. HTTPCodeNotOKError -- If the returned HTTP status code is not equal to 200. ### Response: def open_url(self, url, stale_after, parse_as_html = True, **kwargs): """ Download or retrieve from cache. url -- The URL to be downloaded, as a string. stale_after -- A network request for the url will be performed if the cached copy does not exist or if it exists but its age (in days) is larger or equal to the stale_after value. A non-positive value will force re-download. parse_as_html -- Parse the resource downloaded as HTML. This uses the lxml.html package to parse the resource leniently, thus it will not fail even for reasonably invalid HTML. This argument also decides the return type of this method; if True, then the return type is an ElementTree.Element root object; if False, the content of the resource is returned as a bytestring. Exceptions raised: BannedException -- If does_show_ban returns True. HTTPCodeNotOKError -- If the returned HTTP status code is not equal to 200.
""" _LOGGER.info('open_url() received url: %s', url) today = datetime.date.today() threshold_date = today - datetime.timedelta(stale_after) downloaded = False with self._get_conn() as conn: rs = conn.execute(''' select content from cache where url = ? and date > ? ''', (url, _date_to_sqlite_str(threshold_date)) ) row = rs.fetchone() retry_run = kwargs.get('retry_run', False) assert (not retry_run) or (retry_run and row is None) if row is None: file_obj = self._download(url).get_file_obj() downloaded = True else: file_obj = cStringIO.StringIO(zlib.decompress(row[0])) if parse_as_html: tree = lxml.html.parse(file_obj) tree.getroot().url = url appears_to_be_banned = False if self.does_show_ban(tree.getroot()): appears_to_be_banned = True if downloaded: message = ('Function {f} claims we have been banned, ' 'it was called with an element parsed from url ' '(downloaded, not from cache): {u}' .format(f = self.does_show_ban, u = url)) _LOGGER.error(message) _LOGGER.info('Deleting url %s from the cache (if it exists) ' 'because it triggered ban page cache poisoning ' 'exception', url) with self._get_conn() as conn: conn.execute('delete from cache where url = ?', [str(url)]) if downloaded: raise BannedException(message) else: return self.open_url(url, stale_after, retry_run = True) else: tree = file_obj.read() if downloaded: # make_links_absolute should only be called when the document has a base_url # attribute, which it has not when it has been loaded from the database. So, # this "if" is needed: if parse_as_html: tree.getroot().make_links_absolute(tree.getroot().base_url) to_store = lxml.html.tostring( tree, pretty_print = True, encoding = 'utf-8' ) else: to_store = tree to_store = zlib.compress(to_store, 8) with self._get_conn() as conn: conn.execute(''' insert or replace into cache (url, date, content) values (?, ?, ?) ''', ( str(url), _date_to_sqlite_str(today), sqlite3.Binary(to_store) ) ) return tree
def sRGB1_to_sRGB1_linear(sRGB1): """Convert sRGB (as floats in the 0-to-1 range) to linear sRGB.""" sRGB1 = np.asarray(sRGB1, dtype=float) sRGB1_linear = C_linear(sRGB1) return sRGB1_linear
Convert sRGB (as floats in the 0-to-1 range) to linear sRGB.
Below is the instruction that describes the task:
### Input:
Convert sRGB (as floats in the 0-to-1 range) to linear sRGB.
### Response:
def sRGB1_to_sRGB1_linear(sRGB1):
    """Convert sRGB (as floats in the 0-to-1 range) to linear sRGB."""
    sRGB1 = np.asarray(sRGB1, dtype=float)
    sRGB1_linear = C_linear(sRGB1)
    return sRGB1_linear
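The `sRGB1_to_sRGB1_linear` entry above delegates to a `C_linear` helper that is not shown. A minimal sketch, assuming `C_linear` implements the standard IEC 61966-2-1 sRGB decoding curve (a linear segment below the 0.04045 knee, a 2.4-exponent power curve above it):

```python
import numpy as np

def C_linear(sRGB):
    """Assumed sRGB decoding (IEC 61966-2-1): linear below the
    0.04045 knee, a 2.4-exponent power curve above it."""
    sRGB = np.asarray(sRGB, dtype=float)
    return np.where(sRGB <= 0.04045,
                    sRGB / 12.92,
                    ((sRGB + 0.055) / 1.055) ** 2.4)

# The endpoints map to themselves, and the two branches agree at the knee.
print(C_linear([0.0, 1.0]))  # -> [0. 1.]
```

The `np.where` form keeps the function vectorized, so it accepts scalars, RGB triples, or whole image arrays exactly as the `np.asarray` call in the entry suggests.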
def predict(self, text, k=1, threshold=0.0, on_unicode_error='strict'):
    """
    Given a string, get a list of labels and a list of corresponding
    probabilities. k controls the number of returned labels. A choice of
    5 will return the 5 most probable labels. By default this returns
    only the most likely label and probability. threshold filters the
    returned labels by a threshold on probability. A choice of 0.5 will
    return labels with at least 0.5 probability. k and threshold will be
    applied together to determine the returned labels.

    This function assumes it is given a single line of text. We split
    words on whitespace (space, newline, tab, vertical tab) and the
    control characters carriage return, formfeed and the null character.

    If the model is not supervised, this function will throw a ValueError.

    If given a list of strings, it will return a list of results as
    would be received for a single line of text.
    """
    def check(entry):
        if entry.find('\n') != -1:
            raise ValueError(
                "predict processes one line at a time (remove \'\\n\')"
            )
        entry += "\n"
        return entry

    if type(text) == list:
        text = [check(entry) for entry in text]
        predictions = self.f.multilinePredict(text, k, threshold, on_unicode_error)
        dt = np.dtype([('probability', 'float64'), ('label', 'object')])
        result_as_pair = np.array(predictions, dtype=dt)

        return result_as_pair['label'].tolist(), result_as_pair['probability']
    else:
        text = check(text)
        predictions = self.f.predict(text, k, threshold, on_unicode_error)
        probs, labels = zip(*predictions)

        return labels, np.array(probs, copy=False)
Given a string, get a list of labels and a list of corresponding probabilities. k controls the number of returned labels. A choice of 5 will return the 5 most probable labels. By default this returns only the most likely label and probability. threshold filters the returned labels by a threshold on probability. A choice of 0.5 will return labels with at least 0.5 probability. k and threshold will be applied together to determine the returned labels. This function assumes it is given a single line of text. We split words on whitespace (space, newline, tab, vertical tab) and the control characters carriage return, formfeed and the null character. If the model is not supervised, this function will throw a ValueError. If given a list of strings, it will return a list of results as would be received for a single line of text.
Below is the instruction that describes the task:
### Input:
Given a string, get a list of labels and a list of corresponding probabilities. k controls the number of returned labels. A choice of 5 will return the 5 most probable labels. By default this returns only the most likely label and probability. threshold filters the returned labels by a threshold on probability. A choice of 0.5 will return labels with at least 0.5 probability. k and threshold will be applied together to determine the returned labels. This function assumes it is given a single line of text. We split words on whitespace (space, newline, tab, vertical tab) and the control characters carriage return, formfeed and the null character. If the model is not supervised, this function will throw a ValueError. If given a list of strings, it will return a list of results as would be received for a single line of text.
### Response:
def predict(self, text, k=1, threshold=0.0, on_unicode_error='strict'):
    """
    Given a string, get a list of labels and a list of corresponding
    probabilities. k controls the number of returned labels. A choice of
    5 will return the 5 most probable labels. By default this returns
    only the most likely label and probability. threshold filters the
    returned labels by a threshold on probability. A choice of 0.5 will
    return labels with at least 0.5 probability. k and threshold will be
    applied together to determine the returned labels.

    This function assumes it is given a single line of text. We split
    words on whitespace (space, newline, tab, vertical tab) and the
    control characters carriage return, formfeed and the null character.

    If the model is not supervised, this function will throw a ValueError.

    If given a list of strings, it will return a list of results as
    would be received for a single line of text.
""" def check(entry): if entry.find('\n') != -1: raise ValueError( "predict processes one line at a time (remove \'\\n\')" ) entry += "\n" return entry if type(text) == list: text = [check(entry) for entry in text] predictions = self.f.multilinePredict(text, k, threshold, on_unicode_error) dt = np.dtype([('probability', 'float64'), ('label', 'object')]) result_as_pair = np.array(predictions, dtype=dt) return result_as_pair['label'].tolist(), result_as_pair['probability'] else: text = check(text) predictions = self.f.predict(text, k, threshold, on_unicode_error) probs, labels = zip(*predictions) return labels, np.array(probs, copy=False)