Columns: code (strings, 75 to 104k chars), docstring (strings, 1 to 46.9k chars), text (strings, 164 to 112k chars).
```python
def hub_scores(msm, waypoints=None):
    """Calculate the hub score for one or more waypoints

    The "hub score" is a measure of how well traveled a certain state or
    set of states is in a network. Specifically, it is the fraction of
    times that a walker visits a state en route from some state A to
    another state B, averaged over all combinations of A and B.

    Parameters
    ----------
    msm : msmbuilder.MarkovStateModel
        MSM to analyze
    waypoints : array_like, int, optional
        The index of the intermediate state (or more than one).
        If None, then all waypoints will be used

    Returns
    -------
    hub_score : float
        The hub score for the waypoint

    References
    ----------
    .. [1] Dickson & Brooks (2012), J. Chem. Theory Comput., 8, 3044-3052.
    """
    n_states = msm.n_states_
    if isinstance(waypoints, int):
        waypoints = [waypoints]
    elif waypoints is None:
        waypoints = range(n_states)
    elif not isinstance(waypoints, (list, np.ndarray)):
        raise ValueError("waypoints (%s) must be an int, a list, or None"
                         % str(waypoints))

    hub_scores = []
    for waypoint in waypoints:
        other_states = (i for i in range(n_states) if i != waypoint)

        # calculate the hub score for this waypoint
        hub_score = 0.0
        for (source, sink) in itertools.permutations(other_states, 2):
            hub_score += fraction_visited(source, sink, waypoint, msm)

        hub_score /= float((n_states - 1) * (n_states - 2))
        hub_scores.append(hub_score)

    return np.array(hub_scores)
```
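The averaging in `hub_scores` can be checked on a toy case: there are exactly (n_states - 1)(n_states - 2) ordered (source, sink) pairs avoiding the waypoint, so with a constant `fraction_visited` the score must reduce to that constant. A minimal self-contained sketch, where the lambda is a stand-in for the real MSM computation:

```python
import itertools

def toy_hub_score(n_states, waypoint, fraction_visited):
    """Average fraction_visited over all ordered (source, sink) pairs that
    avoid the waypoint -- the same loop hub_scores() runs per waypoint."""
    other_states = (i for i in range(n_states) if i != waypoint)
    score = 0.0
    for source, sink in itertools.permutations(other_states, 2):
        score += fraction_visited(source, sink, waypoint)
    return score / float((n_states - 1) * (n_states - 2))

# With a constant visit fraction, the average is that constant.
print(toy_hub_score(5, waypoint=2, fraction_visited=lambda s, k, w: 0.25))
```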
```python
def init_return(self, ret_type):
    """Specify the return type.

    Note that this will also construct the '~fermipy.castro.Interpolator'
    object for the requested return type.
    """
    if self._ret_type == ret_type:
        return
    if ret_type == "straight":
        self._interp = self._lnlfn.interp
    elif ret_type == "profile":
        self._profile_loglike_spline(self._lnlfn.interp.x)
        # self._profile_loglike(self._lnlfn.interp.x)
        self._interp = self._prof_interp
    elif ret_type == "marginal":
        self._marginal_loglike(self._lnlfn.interp.x)
        self._interp = self._marg_interp
    elif ret_type == "posterior":
        self._posterior(self._lnlfn.interp.x)
        self._interp = self._post_interp
    else:
        raise ValueError("Did not recognize return type %s" % ret_type)
    self._ret_type = ret_type
```
```python
def compute_node_positions(self):
    """
    Computes nodes positions.

    Arranges nodes in a line starting at (x,y) = (0,0). Node radius is
    assumed to be equal to 0.5 units. Nodes are placed at integer locations.
    """
    xs = [0] * len(self.nodes)
    ys = [0] * len(self.nodes)
    for i, _ in enumerate(self.nodes[1:], start=1):
        prev_r = self.node_sizes[i - 1] / 2
        curr_r = self.node_sizes[i] / 2
        xs[i] = xs[i - 1] + prev_r + curr_r
    self.node_coords = {"x": xs, "y": ys}
```
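The spacing rule here (each node offset from its predecessor by the sum of the two touching radii) can be exercised standalone; `node_sizes` below is a hypothetical stand-in for the plot object's attribute:

```python
def line_positions(node_sizes):
    """x coordinate of each node: previous x plus the two touching radii."""
    xs = [0] * len(node_sizes)
    for i in range(1, len(node_sizes)):
        xs[i] = xs[i - 1] + node_sizes[i - 1] / 2 + node_sizes[i] / 2
    return xs

print(line_positions([1, 1, 1]))  # equal unit-diameter nodes sit one unit apart
```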
```python
def countdown(
    seconds=None,
    block=True,
    interval=1,
    daemon=True,
    tick_callback=None,
    finish_callback=None,
):
    """Run a countdown function to wait something, similar to threading.Timer,
    but will show the detail tick by tick_callback. ::

        from torequests.utils import countdown

        countdown(3)
        # 3 2 1
        # countdown finished [3 seconds]: 2018-06-13 00:12:55 => 2018-06-13 00:12:58.
        countdown('2018-06-13 00:13:29')
        # 10 9 8 7 6 5 4 3 2 1
        # countdown finished [10 seconds]: 2018-06-13 00:13:18 => 2018-06-13 00:13:28.
    """

    def default_tick_callback(s, seconds, *args):
        flush_print(s, sep="", end=" ")

    def default_finish_callback(seconds, start_time):
        flush_print()

    def cd(seconds, interval):
        for s in range(seconds, 0, -interval):
            tick_callback(s, seconds, interval)
            time.sleep(interval)
        if callable(finish_callback):
            finish_callback(seconds, start_time)

    start_time = time.time()
    tick_callback = tick_callback or default_tick_callback
    finish_callback = (
        default_finish_callback if finish_callback is None else finish_callback
    )
    if unicode(seconds).isdigit():
        seconds = int(seconds)
    elif isinstance(seconds, (unicode, str)):
        seconds = int(ptime(seconds) - time.time())
    t = Thread(target=cd, args=(seconds, interval))
    t.daemon = daemon
    t.start()
    if block:
        t.join()
```
```python
def parse(self, stream):
    """Parses the keys + values from a config file."""
    items = OrderedDict()
    for i, line in enumerate(stream):
        line = line.strip()
        if not line or line[0] in ["#", ";", "["] or line.startswith("---"):
            continue
        white_space = r"\s*"
        key = r"(?P<key>[^:=;#\s]+?)"
        value = white_space + r"[:=\s]" + white_space + r"(?P<value>.+?)"
        comment = white_space + r"(?P<comment>\s[;#].*)?"

        key_only_match = re.match("^" + key + comment + "$", line)
        if key_only_match:
            key = key_only_match.group("key")
            items[key] = "true"
            continue

        key_value_match = re.match("^" + key + value + comment + "$", line)
        if key_value_match:
            key = key_value_match.group("key")
            value = key_value_match.group("value")

            if value.startswith("[") and value.endswith("]"):
                # handle special case of lists
                value = [elem.strip() for elem in value[1:-1].split(",")]

            items[key] = value
            continue

        raise ConfigFileParserException("Unexpected line %s in %s: %s" %
                                        (i, getattr(stream, 'name', 'stream'),
                                         line))
    return items
```
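The key/value patterns can be tried in isolation. This is a trimmed-down sketch of the same regexes, not the full parser (no list handling, no skip rules):

```python
import re

WHITE = r"\s*"
KEY = r"(?P<key>[^:=;#\s]+?)"
VALUE = WHITE + r"[:=\s]" + WHITE + r"(?P<value>.+?)"
COMMENT = WHITE + r"(?P<comment>\s[;#].*)?"

def parse_line(line):
    """Return (key, value) for one config line; value is 'true' for bare keys."""
    m = re.match("^" + KEY + COMMENT + "$", line)
    if m:
        return m.group("key"), "true"
    m = re.match("^" + KEY + VALUE + COMMENT + "$", line)
    if m:
        return m.group("key"), m.group("value")
    raise ValueError("unparseable line: %r" % line)

print(parse_line("debug"))           # ('debug', 'true')
print(parse_line("host = example"))  # ('host', 'example')
```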
```python
def update(self):
    """Update RAID stats using the input method."""
    # Init new stats
    stats = self.get_init_value()

    if import_error_tag:
        return self.stats

    if self.input_method == 'local':
        # Update stats using the PyMDstat lib (https://github.com/nicolargo/pymdstat)
        try:
            # Just for test
            # mds = MdStat(path='/home/nicolargo/dev/pymdstat/tests/mdstat.10')
            mds = MdStat()
            stats = mds.get_stats()['arrays']
        except Exception as e:
            logger.debug("Can not grab RAID stats (%s)" % e)
            return self.stats
    elif self.input_method == 'snmp':
        # Update stats using SNMP
        # No standard way for the moment...
        pass

    # Update the stats
    self.stats = stats

    return self.stats
```
```python
def add(self, *args, **kwargs):
    """
    Add objects to the directory.
    """
    for key in kwargs:
        if isinstance(kwargs[key], str):
            self._children[key] = File(kwargs[key])
        else:
            self._children[key] = kwargs[key]
        self._children[key]._parent = self
        self._children[key]._env = self._env

    added = []
    for arg in args:
        if isinstance(arg, File):
            self._children[arg.name] = arg
            self._children[arg.name]._parent = self
            self._children[arg.name]._env = self._env
        elif isinstance(arg, str):
            f = File(arg)
            added.append(f)
            self._children[arg] = f
            self._children[arg]._parent = self
            self._children[arg]._env = self._env
        else:
            raise TypeError(type(arg))

    # if we were passed a single file/filename, return the File object for convenience
    if len(added) == 1:
        return added[0]
    if len(args) == 1:
        return args[0]
```
```python
def disconnect(service_instance):
    '''
    Function that disconnects from the vCenter server or ESXi host

    service_instance
        The Service Instance from which to obtain managed object references.
    '''
    log.trace('Disconnecting')
    try:
        Disconnect(service_instance)
    except vim.fault.NoPermission as exc:
        log.exception(exc)
        raise salt.exceptions.VMwareApiError(
            'Not enough permissions. Required privilege: '
            '{}'.format(exc.privilegeId))
    except vim.fault.VimFault as exc:
        log.exception(exc)
        raise salt.exceptions.VMwareApiError(exc.msg)
    except vmodl.RuntimeFault as exc:
        log.exception(exc)
        raise salt.exceptions.VMwareRuntimeError(exc.msg)
```
```python
def _unmangle_attribute_name(name):
    """Unmangles attribute names so that correct Python variable names are
    used for mapping attribute names."""
    # Python keywords cannot be used as variable names, an underscore should
    # be appended at the end of each of them when defining attribute names.
    name = _PYTHON_KEYWORD_MAP.get(name, name)
    # Attribute names are mangled with double underscore, as colon cannot
    # be used as a variable character symbol in Python. Single underscore is
    # used for substituting dash.
    name = name.replace('__', ':').replace('_', '-')
    return name
```
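The double/single underscore convention is easy to check on its own, leaving aside the keyword map (which only covers Python reserved words):

```python
def unmangle(name):
    """Reverse the attribute-name mangling: '__' -> ':', '_' -> '-'."""
    return name.replace('__', ':').replace('_', '-')

print(unmangle('xml__lang'))  # xml:lang
print(unmangle('font_size'))  # font-size
```

Note the order matters: replacing `'__'` first keeps the single-underscore pass from splitting a former colon into two dashes.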
```python
def add_pkg(pkgs, name, pkgver):
    '''
    Add a package to a dict of installed packages.

    CLI Example:

    .. code-block:: bash

        salt '*' pkg_resource.add_pkg '{}' bind 9
    '''
    try:
        pkgs.setdefault(name, []).append(pkgver)
    except AttributeError as exc:
        log.exception(exc)
```
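The `setdefault(...).append(...)` idiom is what lets one package name accumulate several versions; a standalone run without the salt logging wrapper:

```python
def add_pkg(pkgs, name, pkgver):
    """Append pkgver to the list kept under name, creating the list if needed."""
    pkgs.setdefault(name, []).append(pkgver)

pkgs = {}
add_pkg(pkgs, 'bind', '9')
add_pkg(pkgs, 'bind', '9.18')
print(pkgs)  # {'bind': ['9', '9.18']}
```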
```python
def clone(self, d):
    """
    Create a Desinence (ending) copied from the desinence d.

    :param d: Desinence to copy
    :type d: Desinence
    :return: Desinence copied from the desinence d.
    :rtype: Desinence
    """
    return Desinence(d.grq(), d.morphoNum(), d.numRad(), self)
```
```python
def grad_and_loss(func, argnum=None):
    """Return function that computes both gradient of arguments and loss value.

    Parameters
    ----------
    func: a python function
        The forward (loss) function.
    argnum: an int or a list of int
        The index of argument to calculate gradient for.

    Returns
    -------
    grad_and_loss_func: a python function
        A function that would compute both the gradient of arguments and
        loss value.
    """
    @functools.wraps(func)
    def wrapped(*args):
        """Wrapped function."""
        variables = args
        if argnum is not None:
            argnum_ = argnum if isinstance(argnum, list) else [argnum]
            variables = [args[i] for i in argnum_]
        for x in variables:
            assert isinstance(x, NDArray), "type of autograd input should NDArray."
        grads = [zeros_like(x) for x in variables]
        mark_variables(variables, grads)
        with train_section():
            outputs = func(*args)
        compute_gradient([outputs] if isinstance(outputs, NDArray) else outputs)
        return grads, outputs
    return wrapped
```
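The same (gradients, loss) contract can be imitated without MXNet's autograd, e.g. with central finite differences on scalar arguments. This is a rough numerical stand-in for illustration, not the library's reverse-mode machinery:

```python
def grad_and_loss_fd(func, eps=1e-6):
    """Return a function computing (gradients, loss) for scalar arguments,
    with gradients estimated by central differences."""
    def wrapped(*args):
        loss = func(*args)
        grads = []
        for i, x in enumerate(args):
            hi = list(args); hi[i] = x + eps
            lo = list(args); lo[i] = x - eps
            grads.append((func(*hi) - func(*lo)) / (2 * eps))
        return grads, loss
    return wrapped

g = grad_and_loss_fd(lambda x, y: x * x + y)
grads, loss = g(3.0, 1.0)
print(loss)   # 10.0
print(grads)  # approximately [6.0, 1.0]
```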
```python
def _pandas_interp(self, data, indices):
    """The actual transformation based on the following stackoverflow entry:
    http://stackoverflow.com/a/10465162
    """
    new_index = np.arange(indices[-1] + 1)
    data_frame = DataFrame(data, index=indices)
    data_frame_reindexed = data_frame.reindex(new_index)
    data_interpol = data_frame_reindexed.apply(Series.interpolate)

    del new_index
    del data_frame
    del data_frame_reindexed

    return data_interpol
```
```python
def linkChunk(key, chunk):
    """
    Parse LINK Chunk Method
    """
    # Extract link type card
    linkType = chunk[1].strip().split()[0]

    # Cases
    if linkType == 'DX':
        # Cross section link type handler
        result = xSectionLink(chunk)
    elif linkType == 'STRUCTURE':
        # Structure link type handler
        result = structureLink(chunk)
    elif linkType in ('RESERVOIR', 'LAKE'):
        # Reservoir link type handler
        result = reservoirLink(chunk)

    return result
```
```python
def next_population(self, population, fitnesses):
    """Make a new population after each optimization iteration.

    Args:
        population: The current population of solutions.
        fitnesses: The fitness associated with each solution in the population

    Returns:
        list; a list of solutions.
    """
    return [self._next_solution() for _ in range(self._population_size)]
```
```python
async def get_tree(self, prefix, *, dc=None, separator=None, watch=None,
                   consistency=None):
    """Gets all keys with a prefix of Key during the transaction.

    Parameters:
        prefix (str): Prefix to fetch
        separator (str): List only up to a given separator
        dc (str): Specify datacenter that will be used.
            Defaults to the agent's local datacenter.
        watch (Blocking): Do a blocking query
        consistency (Consistency): Force consistency

    Returns:
        CollectionMeta: where value is a list of values

    This does not fail the transaction if the Key doesn't exist. Not all
    keys may be present in the results if ACLs do not permit them to be read.
    """
    response = await self._read(prefix, dc=dc, recurse=True,
                                separator=separator, watch=watch,
                                consistency=consistency)
    result = response.body
    for data in result:
        data["Value"] = decode_value(data["Value"], data["Flags"])
    return consul(result, meta=extract_meta(response.headers))
```
```python
def categories(self):
    """Returns the categories of `Ligands` in `LigandGroup`."""
    category_dict = {}
    for ligand in self:
        if ligand.category in category_dict:
            category_dict[ligand.category].append(ligand)
        else:
            category_dict[ligand.category] = [ligand]
    return category_dict
```
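The grouping loop can also be written with `setdefault`, a compact equivalent shown here on plain tuples rather than `Ligand` objects (the sample data is invented for illustration):

```python
def group_by_category(items):
    """Map category -> list of items, preserving encounter order."""
    groups = {}
    for category, name in items:
        groups.setdefault(category, []).append(name)
    return groups

ligands = [('ion', 'ZN'), ('solvent', 'HOH'), ('ion', 'NA')]
print(group_by_category(ligands))  # {'ion': ['ZN', 'NA'], 'solvent': ['HOH']}
```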
```python
def offer(self, p, e: Event):
    """
    Offer a new event ``e`` at point ``p`` in this queue.
    """
    existing = self.events_scan.setdefault(
        p, ([], [], [], []) if USE_VERTICAL else ([], [], []))
    # Can use double linked-list for easy insertion at beginning/end
    '''
    if e.type == Event.Type.END:
        existing.insert(0, e)
    else:
        existing.append(e)
    '''
    existing[e.type].append(e)
```
```python
def forecast_until(self, timestamp, tsformat=None):
    """Sets the forecasting goal (timestamp wise).

    This function enables the automatic determination of valuesToForecast.

    :param timestamp: timestamp containing the end date of the forecast.
    :param string tsformat: Format of the timestamp. This is used to convert
        the timestamp from UNIX epochs, if necessary. For valid examples
        take a look into the :py:func:`time.strptime` documentation.
    """
    if tsformat is not None:
        timestamp = TimeSeries.convert_timestamp_to_epoch(timestamp, tsformat)
    self._forecastUntil = timestamp
```
Sets the forecasting goal (timestamp wise). This function enables the automatic determination of valuesToForecast. :param timestamp: timestamp containing the end date of the forecast. :param string tsformat: Format of the timestamp. This is used to convert the timestamp from UNIX epochs, if necessary. For valid examples take a look into the :py:func:`time.strptime` documentation.
Below is the instruction that describes the task: ### Input: Sets the forecasting goal (timestamp wise). This function enables the automatic determination of valuesToForecast. :param timestamp: timestamp containing the end date of the forecast. :param string tsformat: Format of the timestamp. This is used to convert the timestamp from UNIX epochs, if necessary. For valid examples take a look into the :py:func:`time.strptime` documentation. ### Response: def forecast_until(self, timestamp, tsformat=None): """Sets the forecasting goal (timestamp wise). This function enables the automatic determination of valuesToForecast. :param timestamp: timestamp containing the end date of the forecast. :param string tsformat: Format of the timestamp. This is used to convert the timestamp from UNIX epochs, if necessary. For valid examples take a look into the :py:func:`time.strptime` documentation. """ if tsformat is not None: timestamp = TimeSeries.convert_timestamp_to_epoch(timestamp, tsformat) self._forecastUntil = timestamp
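A minimal sketch of the epoch conversion that `forecast_until` relies on; the real `TimeSeries.convert_timestamp_to_epoch` is not shown above, so the UTC-based implementation here is an assumption built on `time.strptime` and `calendar.timegm`:

```python
import calendar
import time

def convert_timestamp_to_epoch(timestamp, tsformat):
    # Stand-in for TimeSeries.convert_timestamp_to_epoch (not shown above):
    # parse the formatted timestamp and return UNIX epoch seconds, treating
    # the input as UTC.
    return calendar.timegm(time.strptime(timestamp, tsformat))

epoch = convert_timestamp_to_epoch("2024-01-01 00:00:00", "%Y-%m-%d %H:%M:%S")
```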
def read_nc(self,filename,**kwargs): ''' data reader based on :class:`altimetry.tools.nctools.nc` object. .. note:: THIS can be VERY powerful! ''' #Set filename self._filename = filename remove_existing = kwargs.get('remove_existing',True) #Read data from NetCDF obj=nctools.nc(verbose=self.verbose,limit=self.limit,use_local_dims=True) outStr=obj.read(filename,**kwargs) #Remove attributes already existing in data object for a in self.__dict__.keys(): if outStr.has_key(a) and not a.startswith('_') : if remove_existing : outStr.pop(a) self.message(2, 'Attribute {0} already exists - removing it (set remove_existing to False instead)'.format(a)) return outStr
data reader based on :class:`altimetry.tools.nctools.nc` object. .. note:: THIS can be VERY powerful!
Below is the instruction that describes the task: ### Input: data reader based on :class:`altimetry.tools.nctools.nc` object. .. note:: THIS can be VERY powerful! ### Response: def read_nc(self,filename,**kwargs): ''' data reader based on :class:`altimetry.tools.nctools.nc` object. .. note:: THIS can be VERY powerful! ''' #Set filename self._filename = filename remove_existing = kwargs.get('remove_existing',True) #Read data from NetCDF obj=nctools.nc(verbose=self.verbose,limit=self.limit,use_local_dims=True) outStr=obj.read(filename,**kwargs) #Remove attributes already existing in data object for a in self.__dict__.keys(): if outStr.has_key(a) and not a.startswith('_') : if remove_existing : outStr.pop(a) self.message(2, 'Attribute {0} already exists - removing it (set remove_existing to False instead)'.format(a)) return outStr
def init(*, threshold_lvl=1, quiet_stdout=False, log_file): """ Initiate the log module :param threshold_lvl: messages under this level won't be issued/logged :param quiet_stdout: if True, suppress the stdout log stream :param log_file: path of the file to log to """ global _logger, _log_lvl # translate lvl to those used by 'logging' module _log_lvl = _set_lvl(threshold_lvl) # logger Creation _logger = logging.getLogger(PKG_NAME) _logger.setLevel(_log_lvl) # create file handler and set level to info log_h = logging.FileHandler(log_file) # Base message format base_fmt = '%(asctime)s - %(name)s - [%(levelname)s] - %(message)s' # set formatter log_fmt = logging.Formatter(base_fmt) log_h.setFormatter(log_fmt) # add Handler _logger.addHandler(log_h) # create stdout handler if not quiet_stdout: global _stdout _stdout = True
Initiate the log module :param threshold_lvl: messages under this level won't be issued/logged :param quiet_stdout: if True, suppress the stdout log stream :param log_file: path of the file to log to
Below is the instruction that describes the task: ### Input: Initiate the log module :param threshold_lvl: messages under this level won't be issued/logged :param quiet_stdout: if True, suppress the stdout log stream :param log_file: path of the file to log to ### Response: def init(*, threshold_lvl=1, quiet_stdout=False, log_file): """ Initiate the log module :param threshold_lvl: messages under this level won't be issued/logged :param quiet_stdout: if True, suppress the stdout log stream :param log_file: path of the file to log to """ global _logger, _log_lvl # translate lvl to those used by 'logging' module _log_lvl = _set_lvl(threshold_lvl) # logger Creation _logger = logging.getLogger(PKG_NAME) _logger.setLevel(_log_lvl) # create file handler and set level to info log_h = logging.FileHandler(log_file) # Base message format base_fmt = '%(asctime)s - %(name)s - [%(levelname)s] - %(message)s' # set formatter log_fmt = logging.Formatter(base_fmt) log_h.setFormatter(log_fmt) # add Handler _logger.addHandler(log_h) # create stdout handler if not quiet_stdout: global _stdout _stdout = True
def notch_fir(self, f1, f2, order, beta=5.0, remove_corrupted=True): """ Notch filter the time series using an FIR filter generated from the ideal response passed through a time-domain kaiser window (beta = 5.0). The suppression of the notch filter is related to the bandwidth and the number of samples in the filter length. For a few Hz bandwidth, a length corresponding to a few seconds is typically required to create significant suppression in the notched band. Parameters ---------- Time Series: TimeSeries The time series to be notched. f1: float The start of the frequency suppression. f2: float The end of the frequency suppression. order: int Number of corrupted samples on each side of the time series beta: float Beta parameter of the kaiser window that sets the side lobe attenuation. remove_corrupted: bool If True (default), remove the ``order`` corrupted samples at each end of the filtered time series before returning. """ from pycbc.filter import notch_fir ts = notch_fir(self, f1, f2, order, beta=beta) if remove_corrupted: ts = ts[order:len(ts)-order] return ts
Notch filter the time series using an FIR filter generated from the ideal response passed through a time-domain kaiser window (beta = 5.0). The suppression of the notch filter is related to the bandwidth and the number of samples in the filter length. For a few Hz bandwidth, a length corresponding to a few seconds is typically required to create significant suppression in the notched band. Parameters ---------- Time Series: TimeSeries The time series to be notched. f1: float The start of the frequency suppression. f2: float The end of the frequency suppression. order: int Number of corrupted samples on each side of the time series beta: float Beta parameter of the kaiser window that sets the side lobe attenuation. remove_corrupted: bool If True (default), remove the ``order`` corrupted samples at each end of the filtered time series before returning.
Below is the instruction that describes the task: ### Input: Notch filter the time series using an FIR filter generated from the ideal response passed through a time-domain kaiser window (beta = 5.0). The suppression of the notch filter is related to the bandwidth and the number of samples in the filter length. For a few Hz bandwidth, a length corresponding to a few seconds is typically required to create significant suppression in the notched band. Parameters ---------- Time Series: TimeSeries The time series to be notched. f1: float The start of the frequency suppression. f2: float The end of the frequency suppression. order: int Number of corrupted samples on each side of the time series beta: float Beta parameter of the kaiser window that sets the side lobe attenuation. remove_corrupted: bool If True (default), remove the ``order`` corrupted samples at each end of the filtered time series before returning. ### Response: def notch_fir(self, f1, f2, order, beta=5.0, remove_corrupted=True): """ Notch filter the time series using an FIR filter generated from the ideal response passed through a time-domain kaiser window (beta = 5.0). The suppression of the notch filter is related to the bandwidth and the number of samples in the filter length. For a few Hz bandwidth, a length corresponding to a few seconds is typically required to create significant suppression in the notched band. Parameters ---------- Time Series: TimeSeries The time series to be notched. f1: float The start of the frequency suppression. f2: float The end of the frequency suppression. order: int Number of corrupted samples on each side of the time series beta: float Beta parameter of the kaiser window that sets the side lobe attenuation. remove_corrupted: bool If True (default), remove the ``order`` corrupted samples at each end of the filtered time series before returning. """ from pycbc.filter import notch_fir ts = notch_fir(self, f1, f2, order, beta=beta) if remove_corrupted: ts = ts[order:len(ts)-order] return ts
def get_instance(self, payload): """ Build an instance of SyncMapPermissionInstance :param dict payload: Payload response from the API :returns: twilio.rest.sync.v1.service.sync_map.sync_map_permission.SyncMapPermissionInstance :rtype: twilio.rest.sync.v1.service.sync_map.sync_map_permission.SyncMapPermissionInstance """ return SyncMapPermissionInstance( self._version, payload, service_sid=self._solution['service_sid'], map_sid=self._solution['map_sid'], )
Build an instance of SyncMapPermissionInstance :param dict payload: Payload response from the API :returns: twilio.rest.sync.v1.service.sync_map.sync_map_permission.SyncMapPermissionInstance :rtype: twilio.rest.sync.v1.service.sync_map.sync_map_permission.SyncMapPermissionInstance
Below is the instruction that describes the task: ### Input: Build an instance of SyncMapPermissionInstance :param dict payload: Payload response from the API :returns: twilio.rest.sync.v1.service.sync_map.sync_map_permission.SyncMapPermissionInstance :rtype: twilio.rest.sync.v1.service.sync_map.sync_map_permission.SyncMapPermissionInstance ### Response: def get_instance(self, payload): """ Build an instance of SyncMapPermissionInstance :param dict payload: Payload response from the API :returns: twilio.rest.sync.v1.service.sync_map.sync_map_permission.SyncMapPermissionInstance :rtype: twilio.rest.sync.v1.service.sync_map.sync_map_permission.SyncMapPermissionInstance """ return SyncMapPermissionInstance( self._version, payload, service_sid=self._solution['service_sid'], map_sid=self._solution['map_sid'], )
def impulse_deltav_plummer(v,y,b,w,GM,rs): """ NAME: impulse_deltav_plummer PURPOSE: calculate the delta velocity to due an encounter with a Plummer sphere in the impulse approximation; allows for arbitrary velocity vectors, but y is input as the position along the stream INPUT: v - velocity of the stream (nstar,3) y - position along the stream (nstar) b - impact parameter w - velocity of the Plummer sphere (3) GM - mass of the Plummer sphere (in natural units) rs - size of the Plummer sphere OUTPUT: deltav (nstar,3) HISTORY: 2015-04-30 - Written based on Erkal's expressions - Bovy (IAS) """ if len(v.shape) == 1: v= numpy.reshape(v,(1,3)) y= numpy.reshape(y,(1,1)) nv= v.shape[0] # Build the rotation matrices and their inverse rot= _rotation_vy(v) rotinv= _rotation_vy(v,inv=True) # Rotate the Plummer sphere's velocity to the stream frames tilew= numpy.sum(rot*numpy.tile(w,(nv,3,1)),axis=-1) # Use Denis' expressions wperp= numpy.sqrt(tilew[:,0]**2.+tilew[:,2]**2.) wpar= numpy.sqrt(numpy.sum(v**2.,axis=1))-tilew[:,1] wmag2= wpar**2.+wperp**2. wmag= numpy.sqrt(wmag2) out= numpy.empty_like(v) denom= wmag*((b**2.+rs**2.)*wmag2+wperp**2.*y**2.) out[:,0]= (b*wmag2*tilew[:,2]/wperp-y*wpar*tilew[:,0])/denom out[:,1]= -wperp**2.*y/denom out[:,2]= -(b*wmag2*tilew[:,0]/wperp+y*wpar*tilew[:,2])/denom # deal w/ perpendicular impacts wperp0Indx= numpy.fabs(wperp) < 10.**-10. out[wperp0Indx,0]= (b*wmag2[wperp0Indx]-y[wperp0Indx]*wpar[wperp0Indx]*tilew[wperp0Indx,0])/denom[wperp0Indx] out[wperp0Indx,2]=-(b*wmag2[wperp0Indx]+y[wperp0Indx]*wpar[wperp0Indx]*tilew[wperp0Indx,2])/denom[wperp0Indx] # Rotate back to the original frame return 2.0*GM*numpy.sum(\ rotinv*numpy.swapaxes(numpy.tile(out.T,(3,1,1)).T,1,2),axis=-1)
NAME: impulse_deltav_plummer PURPOSE: calculate the delta velocity to due an encounter with a Plummer sphere in the impulse approximation; allows for arbitrary velocity vectors, but y is input as the position along the stream INPUT: v - velocity of the stream (nstar,3) y - position along the stream (nstar) b - impact parameter w - velocity of the Plummer sphere (3) GM - mass of the Plummer sphere (in natural units) rs - size of the Plummer sphere OUTPUT: deltav (nstar,3) HISTORY: 2015-04-30 - Written based on Erkal's expressions - Bovy (IAS)
Below is the the instruction that describes the task: ### Input: NAME: impulse_deltav_plummer PURPOSE: calculate the delta velocity to due an encounter with a Plummer sphere in the impulse approximation; allows for arbitrary velocity vectors, but y is input as the position along the stream INPUT: v - velocity of the stream (nstar,3) y - position along the stream (nstar) b - impact parameter w - velocity of the Plummer sphere (3) GM - mass of the Plummer sphere (in natural units) rs - size of the Plummer sphere OUTPUT: deltav (nstar,3) HISTORY: 2015-04-30 - Written based on Erkal's expressions - Bovy (IAS) ### Response: def impulse_deltav_plummer(v,y,b,w,GM,rs): """ NAME: impulse_deltav_plummer PURPOSE: calculate the delta velocity to due an encounter with a Plummer sphere in the impulse approximation; allows for arbitrary velocity vectors, but y is input as the position along the stream INPUT: v - velocity of the stream (nstar,3) y - position along the stream (nstar) b - impact parameter w - velocity of the Plummer sphere (3) GM - mass of the Plummer sphere (in natural units) rs - size of the Plummer sphere OUTPUT: deltav (nstar,3) HISTORY: 2015-04-30 - Written based on Erkal's expressions - Bovy (IAS) """ if len(v.shape) == 1: v= numpy.reshape(v,(1,3)) y= numpy.reshape(y,(1,1)) nv= v.shape[0] # Build the rotation matrices and their inverse rot= _rotation_vy(v) rotinv= _rotation_vy(v,inv=True) # Rotate the Plummer sphere's velocity to the stream frames tilew= numpy.sum(rot*numpy.tile(w,(nv,3,1)),axis=-1) # Use Denis' expressions wperp= numpy.sqrt(tilew[:,0]**2.+tilew[:,2]**2.) wpar= numpy.sqrt(numpy.sum(v**2.,axis=1))-tilew[:,1] wmag2= wpar**2.+wperp**2. wmag= numpy.sqrt(wmag2) out= numpy.empty_like(v) denom= wmag*((b**2.+rs**2.)*wmag2+wperp**2.*y**2.) 
out[:,0]= (b*wmag2*tilew[:,2]/wperp-y*wpar*tilew[:,0])/denom out[:,1]= -wperp**2.*y/denom out[:,2]= -(b*wmag2*tilew[:,0]/wperp+y*wpar*tilew[:,2])/denom # deal w/ perpendicular impacts wperp0Indx= numpy.fabs(wperp) < 10.**-10. out[wperp0Indx,0]= (b*wmag2[wperp0Indx]-y[wperp0Indx]*wpar[wperp0Indx]*tilew[wperp0Indx,0])/denom[wperp0Indx] out[wperp0Indx,2]=-(b*wmag2[wperp0Indx]+y[wperp0Indx]*wpar[wperp0Indx]*tilew[wperp0Indx,2])/denom[wperp0Indx] # Rotate back to the original frame return 2.0*GM*numpy.sum(\ rotinv*numpy.swapaxes(numpy.tile(out.T,(3,1,1)).T,1,2),axis=-1)
def fprop(self, x): """ Performs a forward pass through the DkNN on a TF tensor by wrapping the fprop_np method. """ logits = tf.py_func(self.fprop_np, [x], tf.float32) return {self.O_LOGITS: logits}
Performs a forward pass through the DkNN on a TF tensor by wrapping the fprop_np method.
Below is the instruction that describes the task: ### Input: Performs a forward pass through the DkNN on a TF tensor by wrapping the fprop_np method. ### Response: def fprop(self, x): """ Performs a forward pass through the DkNN on a TF tensor by wrapping the fprop_np method. """ logits = tf.py_func(self.fprop_np, [x], tf.float32) return {self.O_LOGITS: logits}
def removeblanklines(astr): """remove the blank lines in astr""" lines = astr.splitlines() lines = [line for line in lines if line.strip() != ""] return "\n".join(lines)
remove the blank lines in astr
Below is the instruction that describes the task: ### Input: remove the blank lines in astr ### Response: def removeblanklines(astr): """remove the blank lines in astr""" lines = astr.splitlines() lines = [line for line in lines if line.strip() != ""] return "\n".join(lines)
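A quick usage check of `removeblanklines`, showing that whitespace-only lines are dropped along with empty ones:

```python
def removeblanklines(astr):
    """Remove blank (whitespace-only) lines from astr."""
    lines = astr.splitlines()
    # str.strip() turns whitespace-only lines into "", so both kinds are dropped
    return "\n".join(line for line in lines if line.strip() != "")

cleaned = removeblanklines("first\n\n   \nsecond\n")
```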
def fit(self, data): """ Fits a transformer using the SFrame `data`. Parameters ---------- data : SFrame The data used to fit the transformer. Returns ------- self (A fitted object) See Also -------- transform, fit_transform """ self._setup_from_data(data) self.transform_chain.fit(data) self.__proxy__.update({"fitted" : True}) return self
Fits a transformer using the SFrame `data`. Parameters ---------- data : SFrame The data used to fit the transformer. Returns ------- self (A fitted object) See Also -------- transform, fit_transform
Below is the instruction that describes the task: ### Input: Fits a transformer using the SFrame `data`. Parameters ---------- data : SFrame The data used to fit the transformer. Returns ------- self (A fitted object) See Also -------- transform, fit_transform ### Response: def fit(self, data): """ Fits a transformer using the SFrame `data`. Parameters ---------- data : SFrame The data used to fit the transformer. Returns ------- self (A fitted object) See Also -------- transform, fit_transform """ self._setup_from_data(data) self.transform_chain.fit(data) self.__proxy__.update({"fitted" : True}) return self
def find(self, tagtype, **kwargs): '''Get the first tag with a type in this token ''' for t in self.__tags: if t.tagtype == tagtype: return t if 'default' in kwargs: return kwargs['default'] else: raise LookupError("Token {} is not tagged with the specified tagtype ({})".format(self, tagtype))
Get the first tag with a type in this token
Below is the instruction that describes the task: ### Input: Get the first tag with a type in this token ### Response: def find(self, tagtype, **kwargs): '''Get the first tag with a type in this token ''' for t in self.__tags: if t.tagtype == tagtype: return t if 'default' in kwargs: return kwargs['default'] else: raise LookupError("Token {} is not tagged with the specified tagtype ({})".format(self, tagtype))
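The `find` method can be sketched with minimal `Tag`/`Token` stand-ins (the two-attribute `Tag` class is an assumption for illustration):

```python
class Tag:
    def __init__(self, tagtype, value):
        self.tagtype = tagtype
        self.value = value

class Token:
    def __init__(self, tags):
        self.__tags = list(tags)

    def find(self, tagtype, **kwargs):
        # Return the first tag whose type matches; fall back to an explicit
        # default, if one was supplied, before raising.
        for t in self.__tags:
            if t.tagtype == tagtype:
                return t
        if 'default' in kwargs:
            return kwargs['default']
        raise LookupError(
            "Token is not tagged with the specified tagtype (%s)" % tagtype)

tok = Token([Tag("pos", "NN"), Tag("lemma", "cat")])
```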
def _get_c_string(data, position): """Decode a BSON 'C' string to python unicode string.""" end = data.index(b"\x00", position) return _utf_8_decode(data[position:end], None, True)[0], end + 1
Decode a BSON 'C' string to python unicode string.
Below is the instruction that describes the task: ### Input: Decode a BSON 'C' string to python unicode string. ### Response: def _get_c_string(data, position): """Decode a BSON 'C' string to python unicode string.""" end = data.index(b"\x00", position) return _utf_8_decode(data[position:end], None, True)[0], end + 1
def disable_dataset(self, dataset=None, **kwargs): """ Disable a 'dataset'. Datasets that are enabled will be computed during :meth:`run_compute` and included in the cost function during :meth:`run_fitting`. If compute is not provided, the dataset will be disabled across all compute options. :parameter str dataset: name of the dataset :parameter **kwargs: any other tags to do the filter (except dataset or context) :return: :class:`phoebe.parameters.parameters.ParameterSet` of the disabled dataset """ kwargs['context'] = 'compute' kwargs['dataset'] = dataset kwargs['qualifier'] = 'enabled' self.set_value_all(value=False, **kwargs) self._add_history(redo_func='disable_dataset', redo_kwargs={'dataset': dataset}, undo_func='enable_dataset', undo_kwargs={'dataset': dataset}) return self.get_dataset(dataset=dataset)
Disable a 'dataset'. Datasets that are enabled will be computed during :meth:`run_compute` and included in the cost function during :meth:`run_fitting`. If compute is not provided, the dataset will be disabled across all compute options. :parameter str dataset: name of the dataset :parameter **kwargs: any other tags to do the filter (except dataset or context) :return: :class:`phoebe.parameters.parameters.ParameterSet` of the disabled dataset
Below is the instruction that describes the task: ### Input: Disable a 'dataset'. Datasets that are enabled will be computed during :meth:`run_compute` and included in the cost function during :meth:`run_fitting`. If compute is not provided, the dataset will be disabled across all compute options. :parameter str dataset: name of the dataset :parameter **kwargs: any other tags to do the filter (except dataset or context) :return: :class:`phoebe.parameters.parameters.ParameterSet` of the disabled dataset ### Response: def disable_dataset(self, dataset=None, **kwargs): """ Disable a 'dataset'. Datasets that are enabled will be computed during :meth:`run_compute` and included in the cost function during :meth:`run_fitting`. If compute is not provided, the dataset will be disabled across all compute options. :parameter str dataset: name of the dataset :parameter **kwargs: any other tags to do the filter (except dataset or context) :return: :class:`phoebe.parameters.parameters.ParameterSet` of the disabled dataset """ kwargs['context'] = 'compute' kwargs['dataset'] = dataset kwargs['qualifier'] = 'enabled' self.set_value_all(value=False, **kwargs) self._add_history(redo_func='disable_dataset', redo_kwargs={'dataset': dataset}, undo_func='enable_dataset', undo_kwargs={'dataset': dataset}) return self.get_dataset(dataset=dataset)
def read_little_endian32(self): """Interprets the next 4 bytes of the stream as a little-endian encoded, unsigned 32-bit integer, and returns that integer. """ try: i = struct.unpack(wire_format.FORMAT_UINT32_LITTLE_ENDIAN, self._input.read(4)) self._pos += 4 return i[0] # unpack() result is a 1-element tuple. except struct.error as e: raise errors.DecodeError(e)
Interprets the next 4 bytes of the stream as a little-endian encoded, unsigned 32-bit integer, and returns that integer.
Below is the instruction that describes the task: ### Input: Interprets the next 4 bytes of the stream as a little-endian encoded, unsigned 32-bit integer, and returns that integer. ### Response: def read_little_endian32(self): """Interprets the next 4 bytes of the stream as a little-endian encoded, unsigned 32-bit integer, and returns that integer. """ try: i = struct.unpack(wire_format.FORMAT_UINT32_LITTLE_ENDIAN, self._input.read(4)) self._pos += 4 return i[0] # unpack() result is a 1-element tuple. except struct.error as e: raise errors.DecodeError(e)
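A sketch of the little-endian read against an in-memory stream; `'<I'` is assumed here as the value of `wire_format.FORMAT_UINT32_LITTLE_ENDIAN` (the struct format for a little-endian unsigned 32-bit integer):

```python
import io
import struct

class Reader:
    def __init__(self, data):
        self._input = io.BytesIO(data)
        self._pos = 0

    def read_little_endian32(self):
        # '<I' = little-endian unsigned 32-bit; unpack returns a 1-tuple.
        i = struct.unpack('<I', self._input.read(4))
        self._pos += 4
        return i[0]

r = Reader(b'\x01\x00\x00\x00\xff\xff\xff\xff')
```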
def canonicalize_origin(origin): """\ """ origin = origin.replace(u'USMISSION', u'') \ .replace(u'AMEMBASSY', u'') \ .replace(u'EMBASSY', u'').strip() return _STATION_C14N.get(origin, origin)
\
Below is the instruction that describes the task: ### Input: \ ### Response: def canonicalize_origin(origin): """\ """ origin = origin.replace(u'USMISSION', u'') \ .replace(u'AMEMBASSY', u'') \ .replace(u'EMBASSY', u'').strip() return _STATION_C14N.get(origin, origin)
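`canonicalize_origin` depends on a `_STATION_C14N` lookup table that is not shown above; the one-entry table below is a hypothetical example used only to make the sketch runnable:

```python
# Hypothetical canonicalization table; the real _STATION_C14N is not shown above.
_STATION_C14N = {u'BONN': u'BERLIN'}

def canonicalize_origin(origin):
    # Strip mission/embassy prefixes, then map known aliases to canonical names.
    origin = origin.replace(u'USMISSION', u'') \
                   .replace(u'AMEMBASSY', u'') \
                   .replace(u'EMBASSY', u'').strip()
    return _STATION_C14N.get(origin, origin)

canonical = canonicalize_origin(u'AMEMBASSY PARIS')
```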
def results(self, key): """Serves a key off of the results backend""" if not results_backend: return json_error_response("Results backend isn't configured") read_from_results_backend_start = now_as_float() blob = results_backend.get(key) stats_logger.timing( 'sqllab.query.results_backend_read', now_as_float() - read_from_results_backend_start, ) if not blob: return json_error_response( 'Data could not be retrieved. ' 'You may want to re-run the query.', status=410, ) query = db.session.query(Query).filter_by(results_key=key).one() rejected_tables = security_manager.rejected_datasources( query.sql, query.database, query.schema) if rejected_tables: return json_error_response(security_manager.get_table_access_error_msg( '{}'.format(rejected_tables)), status=403) payload = utils.zlib_decompress_to_string(blob) display_limit = app.config.get('DEFAULT_SQLLAB_LIMIT', None) if display_limit: payload_json = json.loads(payload) payload_json['data'] = payload_json['data'][:display_limit] return json_success( json.dumps( payload_json, default=utils.json_iso_dttm_ser, ignore_nan=True, ), )
Serves a key off of the results backend
Below is the instruction that describes the task: ### Input: Serves a key off of the results backend ### Response: def results(self, key): """Serves a key off of the results backend""" if not results_backend: return json_error_response("Results backend isn't configured") read_from_results_backend_start = now_as_float() blob = results_backend.get(key) stats_logger.timing( 'sqllab.query.results_backend_read', now_as_float() - read_from_results_backend_start, ) if not blob: return json_error_response( 'Data could not be retrieved. ' 'You may want to re-run the query.', status=410, ) query = db.session.query(Query).filter_by(results_key=key).one() rejected_tables = security_manager.rejected_datasources( query.sql, query.database, query.schema) if rejected_tables: return json_error_response(security_manager.get_table_access_error_msg( '{}'.format(rejected_tables)), status=403) payload = utils.zlib_decompress_to_string(blob) display_limit = app.config.get('DEFAULT_SQLLAB_LIMIT', None) if display_limit: payload_json = json.loads(payload) payload_json['data'] = payload_json['data'][:display_limit] return json_success( json.dumps( payload_json, default=utils.json_iso_dttm_ser, ignore_nan=True, ), )
def get_independencies(self, condition=None): """ Returns the independent variables in the joint probability distribution. Returns marginally independent variables if condition=None. Returns conditionally independent variables if condition!=None Parameter --------- condition: array_like Random Variable on which to condition the Joint Probability Distribution. Examples -------- >>> import numpy as np >>> from pgmpy.factors.discrete import JointProbabilityDistribution >>> prob = JointProbabilityDistribution(['x1', 'x2', 'x3'], [2, 3, 2], np.ones(12)/12) >>> prob.get_independencies() (x1 _|_ x2) (x1 _|_ x3) (x2 _|_ x3) """ JPD = self.copy() if condition: JPD.conditional_distribution(condition) independencies = Independencies() for variable_pair in itertools.combinations(list(JPD.variables), 2): if (JPD.marginal_distribution(variable_pair, inplace=False) == JPD.marginal_distribution(variable_pair[0], inplace=False) * JPD.marginal_distribution(variable_pair[1], inplace=False)): independencies.add_assertions(variable_pair) return independencies
Returns the independent variables in the joint probability distribution. Returns marginally independent variables if condition=None. Returns conditionally independent variables if condition!=None Parameter --------- condition: array_like Random Variable on which to condition the Joint Probability Distribution. Examples -------- >>> import numpy as np >>> from pgmpy.factors.discrete import JointProbabilityDistribution >>> prob = JointProbabilityDistribution(['x1', 'x2', 'x3'], [2, 3, 2], np.ones(12)/12) >>> prob.get_independencies() (x1 _|_ x2) (x1 _|_ x3) (x2 _|_ x3)
Below is the instruction that describes the task: ### Input: Returns the independent variables in the joint probability distribution. Returns marginally independent variables if condition=None. Returns conditionally independent variables if condition!=None Parameter --------- condition: array_like Random Variable on which to condition the Joint Probability Distribution. Examples -------- >>> import numpy as np >>> from pgmpy.factors.discrete import JointProbabilityDistribution >>> prob = JointProbabilityDistribution(['x1', 'x2', 'x3'], [2, 3, 2], np.ones(12)/12) >>> prob.get_independencies() (x1 _|_ x2) (x1 _|_ x3) (x2 _|_ x3) ### Response: def get_independencies(self, condition=None): """ Returns the independent variables in the joint probability distribution. Returns marginally independent variables if condition=None. Returns conditionally independent variables if condition!=None Parameter --------- condition: array_like Random Variable on which to condition the Joint Probability Distribution. Examples -------- >>> import numpy as np >>> from pgmpy.factors.discrete import JointProbabilityDistribution >>> prob = JointProbabilityDistribution(['x1', 'x2', 'x3'], [2, 3, 2], np.ones(12)/12) >>> prob.get_independencies() (x1 _|_ x2) (x1 _|_ x3) (x2 _|_ x3) """ JPD = self.copy() if condition: JPD.conditional_distribution(condition) independencies = Independencies() for variable_pair in itertools.combinations(list(JPD.variables), 2): if (JPD.marginal_distribution(variable_pair, inplace=False) == JPD.marginal_distribution(variable_pair[0], inplace=False) * JPD.marginal_distribution(variable_pair[1], inplace=False)): independencies.add_assertions(variable_pair) return independencies
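The marginal-independence test underlying `get_independencies` reduces to checking that each pairwise marginal equals the outer product of the two single-variable marginals. A NumPy sketch on the docstring's uniform (2, 3, 2) example (the `marginal` helper is an illustrative stand-in for `JPD.marginal_distribution`):

```python
import numpy as np
from itertools import combinations

# Uniform joint over (x1, x2, x3) with cardinalities (2, 3, 2), as in the docstring.
joint = np.ones((2, 3, 2)) / 12

def marginal(joint, keep):
    # Sum out every axis not listed in `keep`.
    drop = tuple(ax for ax in range(joint.ndim) if ax not in keep)
    return joint.sum(axis=drop)

pairs = []
for i, j in combinations(range(joint.ndim), 2):
    pairwise = marginal(joint, (i, j))
    product = np.outer(marginal(joint, (i,)), marginal(joint, (j,)))
    if np.allclose(pairwise, product):
        pairs.append((i, j))  # marginally independent pair
```

For a uniform joint every pair is independent, matching the docstring's output of all three assertions.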
def RegisterCustomFieldCodec(encoder, decoder): """Register a custom encoder/decoder for this field.""" def Register(field): _CUSTOM_FIELD_CODECS[field] = _Codec(encoder=encoder, decoder=decoder) return field return Register
Register a custom encoder/decoder for this field.
Below is the instruction that describes the task: ### Input: Register a custom encoder/decoder for this field. ### Response: def RegisterCustomFieldCodec(encoder, decoder): """Register a custom encoder/decoder for this field.""" def Register(field): _CUSTOM_FIELD_CODECS[field] = _Codec(encoder=encoder, decoder=decoder) return field return Register
def cut(
    self,
    start=None,
    stop=None,
    whence=0,
    version=None,
    include_ends=True,
    time_from_zero=False,
):
    """cut *MDF* file. *start* and *stop* limits are absolute values
    or values relative to the first timestamp depending on the *whence*
    argument.

    Parameters
    ----------
    start : float
        start time, default *None*. If *None* then the start of
        measurement is used
    stop : float
        stop time, default *None*. If *None* then the end of measurement
        is used
    whence : int
        how to search for the start and stop values

        * 0 : absolute
        * 1 : relative to first timestamp

    version : str
        new mdf file version from ('2.00', '2.10', '2.14', '3.00', '3.10',
        '3.20', '3.30', '4.00', '4.10', '4.11'); default *None* and in this
        case the original file version is used
    include_ends : bool
        include the *start* and *stop* timestamps after cutting the signal.
        If *start* and *stop* are not found in the original timestamps, then
        the new samples will be computed using interpolation. Default *True*
    time_from_zero : bool
        start time stamps from 0s in the cut measurement

    Returns
    -------
    out : MDF
        new MDF object

    """
    if version is None:
        version = self.version
    else:
        version = validate_version_argument(version)

    out = MDF(version=version)

    if whence == 1:
        timestamps = []
        for i, group in enumerate(self.groups):
            fragment = next(self._load_data(group))
            master = self.get_master(i, record_offset=0, record_count=1)
            if master.size:
                timestamps.append(master[0])

        self._master_channel_cache.clear()

        if timestamps:
            first_timestamp = np.amin(timestamps)
        else:
            first_timestamp = 0

        if start is not None:
            start += first_timestamp
        if stop is not None:
            stop += first_timestamp

    if time_from_zero:
        delta = start
        t_epoch = self.header.start_time.timestamp() + delta
        out.header.start_time = datetime.fromtimestamp(t_epoch)
    else:
        delta = 0
        out.header.start_time = self.header.start_time

    groups_nr = len(self.groups)

    if self._callback:
        self._callback(0, groups_nr)

    cg_nr = -1

    interpolation_mode = self._integer_interpolation

    # walk through all groups and get all channels
    for i, group in enumerate(self.groups):
        included_channels = self._included_channels(i)
        if included_channels:
            cg_nr += 1
        else:
            continue

        data = self._load_data(group)
        parents, dtypes = self._prepare_record(group)

        idx = 0
        for fragment in data:
            if dtypes.itemsize:
                group.record = np.core.records.fromstring(
                    fragment[0], dtype=dtypes
                )
            else:
                group.record = None
            master = self.get_master(i, fragment, copy_master=False)
            if not len(master):
                continue

            needs_cutting = True

            # check if this fragment is within the cut interval or
            # if the cut interval has ended
            if start is None and stop is None:
                fragment_start = None
                fragment_stop = None
                start_index = 0
                stop_index = len(master)
                needs_cutting = False
            elif start is None:
                fragment_start = None
                start_index = 0
                if master[0] > stop:
                    break
                else:
                    fragment_stop = min(stop, master[-1])
                    stop_index = np.searchsorted(
                        master, fragment_stop, side="right"
                    )
                    if stop_index == len(master):
                        needs_cutting = False
            elif stop is None:
                fragment_stop = None
                if master[-1] < start:
                    continue
                else:
                    fragment_start = max(start, master[0])
                    start_index = np.searchsorted(
                        master, fragment_start, side="left"
                    )
                    stop_index = len(master)
                    if start_index == 0:
                        needs_cutting = False
            else:
                if master[0] > stop:
                    break
                elif master[-1] < start:
                    continue
                else:
                    fragment_start = max(start, master[0])
                    start_index = np.searchsorted(
                        master, fragment_start, side="left"
                    )
                    fragment_stop = min(stop, master[-1])
                    stop_index = np.searchsorted(
                        master, fragment_stop, side="right"
                    )
                    if start_index == 0 and stop_index == len(master):
                        needs_cutting = False

            if needs_cutting:
                cut_timebase = (
                    Signal(master, master, name="_")
                    .cut(
                        fragment_start,
                        fragment_stop,
                        include_ends,
                        interpolation_mode=interpolation_mode,
                    )
                    .timestamps
                )

            # the first fragment triggers an append that will add the
            # metadata for all channels
            if idx == 0:
                sigs = []
                for j in included_channels:
                    sig = self.get(
                        group=i,
                        index=j,
                        data=fragment,
                        raw=True,
                        ignore_invalidation_bits=True,
                        copy_master=False,
                    )
                    if needs_cutting:
                        sig = sig.interp(
                            cut_timebase, interpolation_mode=interpolation_mode
                        )

                    # if sig.stream_sync and False:
                    #     attachment, _name = sig.attachment
                    #     duration = get_video_stream_duration(attachment)
                    #     if start is None:
                    #         start_t = 0
                    #     else:
                    #         start_t = start
                    #
                    #     if stop is None:
                    #         end_t = duration
                    #     else:
                    #         end_t = stop
                    #
                    #     attachment = cut_video_stream(
                    #         attachment,
                    #         start_t,
                    #         end_t,
                    #         Path(_name).suffix,
                    #     )
                    #     sig.attachment = attachment, _name

                    if not sig.samples.flags.writeable:
                        sig.samples = sig.samples.copy()
                    sigs.append(sig)

                if sigs:
                    if time_from_zero:
                        new_timestamps = cut_timebase - delta
                        for sig in sigs:
                            sig.timestamps = new_timestamps
                    if start:
                        start_ = f"{start}s"
                    else:
                        start_ = "start of measurement"
                    if stop:
                        stop_ = f"{stop}s"
                    else:
                        stop_ = "end of measurement"
                    out.append(
                        sigs,
                        f"Cut from {start_} to {stop_}",
                        common_timebase=True,
                    )
                    try:
                        if group.channel_group.flags & v4c.FLAG_CG_BUS_EVENT:
                            out.groups[-1].channel_group.flags = group.channel_group.flags
                            out.groups[-1].channel_group.acq_name = group.channel_group.acq_name
                            out.groups[-1].channel_group.acq_source = group.channel_group.acq_source
                            out.groups[-1].channel_group.comment = group.channel_group.comment
                    except AttributeError:
                        pass
                else:
                    break

                idx += 1

            # the other fragments will trigger only the extension of
            # samples records to the data block
            else:
                if needs_cutting:
                    timestamps = cut_timebase
                else:
                    timestamps = master
                if time_from_zero:
                    timestamps = timestamps - delta
                sigs = [(timestamps, None)]

                for j in included_channels:
                    sig = self.get(
                        group=i,
                        index=j,
                        data=fragment,
                        raw=True,
                        samples_only=True,
                        ignore_invalidation_bits=True,
                    )
                    if needs_cutting:
                        _sig = Signal(
                            sig[0], master, name="_", invalidation_bits=sig[1]
                        ).interp(
                            cut_timebase, interpolation_mode=interpolation_mode
                        )
                        sig = (_sig.samples, _sig.invalidation_bits)
                        del _sig
                    sigs.append(sig)

                if sigs:
                    out.extend(cg_nr, sigs)

                idx += 1

            group.record = None

        # if the cut interval is not found in the measurement
        # then append an empty data group
        if idx == 0:
            self.configure(read_fragment_size=1)
            sigs = []

            fragment = next(self._load_data(group))
            fragment = (fragment[0], -1, None)

            for j in included_channels:
                sig = self.get(
                    group=i,
                    index=j,
                    data=fragment,
                    raw=True,
                    ignore_invalidation_bits=True,
                )
                sig.samples = sig.samples[:0]
                sig.timestamps = sig.timestamps[:0]
                sigs.append(sig)

            if start:
                start_ = f"{start}s"
            else:
                start_ = "start of measurement"
            if stop:
                stop_ = f"{stop}s"
            else:
                stop_ = "end of measurement"
            out.append(
                sigs, f"Cut from {start_} to {stop_}", common_timebase=True
            )

            self.configure(read_fragment_size=0)

        if self._callback:
            self._callback(i + 1, groups_nr)

        if self._terminate:
            return

    out._transfer_events(self)
    if self._callback:
        out._callback = out._mdf._callback = self._callback
    return out
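The index arithmetic at the heart of the method — locating each fragment's window inside a monotonically increasing master channel with `np.searchsorted` — can be sketched on toy data (the master/sample arrays below are made up for illustration):

```python
import numpy as np

# Toy master (time) channel and samples -- made-up values for illustration
master = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
samples = np.arange(len(master)) * 10

start, stop = 0.7, 2.2

# Same index arithmetic the method uses to locate the fragment window:
# side="left" keeps the first sample at or after `start`,
# side="right" keeps the last sample at or before `stop`.
start_index = np.searchsorted(master, max(start, master[0]), side="left")
stop_index = np.searchsorted(master, min(stop, master[-1]), side="right")

cut_master = master[start_index:stop_index]
cut_samples = samples[start_index:stop_index]

print(cut_master.tolist())   # [1.0, 1.5, 2.0]
print(cut_samples.tolist())  # [20, 30, 40]
```

With `include_ends=True` the real method additionally interpolates new samples at exactly 0.7 s and 2.2 s; the slice above shows only the in-range samples.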
def properties(self):
    '''
    This is a lazily loaded dictionary containing the launchd runtime
    information of the job in question. Internally, this is retrieved
    using ServiceManagement.SMJobCopyDictionary(). Keep in mind that
    some dictionary keys are not always present (for example 'PID').

    If the job specified by the label cannot be found in launchd, then
    this method raises a ValueError exception.
    '''
    if hasattr(self, '_nsproperties'):
        self._properties = convert_NSDictionary_to_dict(self._nsproperties)
        del self._nsproperties
        #self._nsproperties = None
    if self._properties is None:
        self.refresh()
    return self._properties
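The lazy-load-then-cache pattern used here can be sketched with a toy stand-in (the `Job` class below is ours, not the real launchd wrapper, and the faked dictionary contents are assumptions):

```python
class Job:
    """Toy stand-in illustrating the lazy-load-then-cache pattern above
    (not the real launchd wrapper; names and values here are ours)."""

    def __init__(self, label):
        self.label = label
        self._properties = None
        self._refresh_count = 0  # instrumentation for this example only

    def refresh(self):
        # The real class would query launchd here; we fake the result.
        self._refresh_count += 1
        self._properties = {"Label": self.label, "PID": 1234}

    @property
    def properties(self):
        # load on first access, then serve the cached dictionary
        if self._properties is None:
            self.refresh()
        return self._properties

job = Job("com.example.agent")
print(job.properties["Label"])  # com.example.agent -- refresh() ran once
print(job.properties["PID"])    # 1234 -- served from the cache
print(job._refresh_count)       # 1
```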
def barrel_shifter(bits_to_shift, bit_in, direction, shift_dist, wrap_around=0):
    """ Create a barrel shifter that operates on data based on the wire width.

    :param bits_to_shift: the input wire
    :param bit_in: the 1-bit wire giving the value to shift in
    :param direction: a one bit WireVector representing shift direction
        (0 = shift down, 1 = shift up)
    :param shift_dist: WireVector representing offset to shift
    :param wrap_around: ****currently not implemented****
    :return: shifted WireVector
    """
    from pyrtl import concat, select  # just for readability

    if wrap_around != 0:
        raise NotImplementedError

    # Implement with logN stages pyrtl.muxing between shifted and un-shifted values
    final_width = len(bits_to_shift)
    val = bits_to_shift
    append_val = bit_in

    for i in range(len(shift_dist)):
        shift_amt = pow(2, i)  # stages shift 1,2,4,8,...
        if shift_amt < final_width:
            newval = select(direction,
                            concat(val[:-shift_amt], append_val),  # shift up
                            concat(append_val, val[shift_amt:]))  # shift down
            val = select(shift_dist[i],
                         truecase=newval,  # if bit of shift is 1, do the shift
                         falsecase=val)  # otherwise, don't
            # the value to append grows exponentially, but is capped at full width
            append_val = concat(append_val, append_val)[:final_width]
        else:
            # if we are shifting this much, all the data is gone
            val = select(shift_dist[i],
                         truecase=append_val,  # if bit of shift is 1, do the shift
                         falsecase=val)  # otherwise, don't

    return val
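The staged selection — conditionally shifting by 2**i at stage i depending on bit i of the shift distance — can be modeled on plain Python integers (a sketch of the same log-N idea; `barrel_shift_model` is our name and not part of PyRTL):

```python
def barrel_shift_model(value, shift_dist, width, direction, bit_in=0):
    """Pure-Python integer model of the staged barrel shifter above
    (a sketch of the same log-N idea; this function is ours, not PyRTL's)."""
    mask = (1 << width) - 1
    val = value & mask
    stages = max(shift_dist.bit_length(), 1)
    for i in range(stages):
        amt = 1 << i  # stages shift by 1, 2, 4, 8, ...
        if not (shift_dist >> i) & 1:
            continue  # bit i of the distance is 0: keep the un-shifted value
        # replicate bit_in across the amt positions being filled
        fill = ((1 << min(amt, width)) - 1) if bit_in else 0
        if amt >= width:
            val = fill  # shifting this far discards all original data
        elif direction:  # shift up (toward the MSB), filling at the bottom
            val = ((val << amt) | fill) & mask
        else:            # shift down (toward the LSB), filling at the top
            val = (val >> amt) | (fill << (width - amt))
    return val

print(bin(barrel_shift_model(0b1011, 1, 4, direction=1)))            # 0b110
print(bin(barrel_shift_model(0b1011, 2, 4, direction=0, bit_in=1)))  # 0b1110
```

In hardware the same conditional selection is built from muxes (`select`), so the cost is one mux layer per bit of `shift_dist` instead of one per possible shift amount.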
def beam_search(symbols_to_logits_fn,
                initial_ids,
                beam_size,
                decode_length,
                vocab_size,
                alpha,
                states=None,
                eos_id=EOS_ID,
                stop_early=True,
                use_tpu=False,
                use_top_k_with_unique=True):
  """Beam search with length penalties.

  Requires a function that can take the currently decoded symbols and return
  the logits for the next symbol. The implementation is inspired by
  https://arxiv.org/abs/1609.08144.

  When running, the beam search steps can be visualized by using tfdbg to
  watch the operations generating the output ids for each beam step.
  These operations have the pattern:
    (alive|finished)_topk_(seq,scores)

  Operations marked `alive` represent the new beam sequences that will be
  processed in the next step.  Operations marked `finished` represent the
  completed beam sequences, which may be padded with 0s if no beams finished.

  Operations marked `seq` store the full beam sequence for the time step.
  Operations marked `scores` store the sequence's final log scores.

  The beam search steps will be processed sequentially in order, so when
  capturing the tensors observed from these operations, clients can make
  assumptions about which step is being recorded.

  WARNING: Assumes the 2nd dimension of tensors in `states` is not invariant;
  this means that the shape of the 2nd dimension of these tensors will not be
  available (i.e. set to None) inside symbols_to_logits_fn.

  Args:
    symbols_to_logits_fn: Interface to the model, to provide logits.
        Should take [batch_size, decoded_ids] and return
        [batch_size, vocab_size]
    initial_ids: Ids to start off the decoding, this will be the first thing
        handed to symbols_to_logits_fn (after expanding to beam size)
        [batch_size]
    beam_size: Size of the beam.
    decode_length: Number of steps to decode for.
    vocab_size: Size of the vocab, must equal the size of the logits returned
        by symbols_to_logits_fn
    alpha: alpha for length penalty.
    states: dict (possibly nested) of decoding states.
    eos_id: ID for end of sentence.
    stop_early: a boolean - stop once best sequence is provably determined.
    use_tpu: A bool, whether to do beam search on TPU.
    use_top_k_with_unique: bool, whether to use a fast (but decreased
        precision) top_k during TPU beam search.

  Returns:
    Tuple of
    (decoded beams [batch_size, beam_size, decode_length]
     decoding probabilities [batch_size, beam_size])
  """
  batch_size = common_layers.shape_list(initial_ids)[0]

  # Assume initial_ids are prob 1.0
  initial_log_probs = tf.constant([[0.] + [-INF] * (beam_size - 1)])
  # Expand to beam_size (batch_size, beam_size)
  alive_log_probs = tf.tile(initial_log_probs, [batch_size, 1])

  # Expand each batch and state to beam_size
  alive_seq = _expand_to_beam_size(initial_ids, beam_size)
  alive_seq = tf.expand_dims(alive_seq, axis=2)  # (batch_size, beam_size, 1)
  if use_tpu:
    alive_seq = tf.tile(alive_seq, [1, 1, decode_length + 1])
  if states:
    states = nest.map_structure(
        lambda state: _expand_to_beam_size(state, beam_size), states)
  else:
    states = {}

  # Finished will keep track of all the sequences that have finished so far
  # Finished log probs will be negative infinity in the beginning
  # finished_flags will keep track of booleans
  finished_seq = tf.zeros(common_layers.shape_list(alive_seq), tf.int32)
  # Setting the scores of the initial to negative infinity.
  finished_scores = tf.ones([batch_size, beam_size]) * -INF
  finished_flags = tf.zeros([batch_size, beam_size], tf.bool)

  def grow_finished(finished_seq, finished_scores, finished_flags, curr_seq,
                    curr_scores, curr_finished):
    """Given sequences and scores, will gather the top k=beam size sequences.

    Args:
      finished_seq: Current finished sequences.
        [batch_size, beam_size, current_decoded_length]
      finished_scores: scores for each of these sequences.
        [batch_size, beam_size]
      finished_flags: finished bools for each of these sequences.
        [batch_size, beam_size]
      curr_seq: current topk sequence that has been grown by one position.
        [batch_size, beam_size, current_decoded_length]
      curr_scores: scores for each of these sequences. [batch_size, beam_size]
      curr_finished: Finished flags for each of these sequences.
        [batch_size, beam_size]

    Returns:
      Tuple of
        (Topk sequences based on scores,
         log probs of these sequences,
         Finished flags of these sequences)
    """
    if not use_tpu:
      # First append a column of 0-ids to finished_seq to make it the same
      # length as finished_scores
      finished_seq = tf.concat(
          [finished_seq,
           tf.zeros([batch_size, beam_size, 1], tf.int32)], axis=2)

    # Set the scores of the unfinished seq in curr_seq to large negative
    # values
    curr_scores += (1. - tf.to_float(curr_finished)) * -INF
    # concatenating the sequences and scores along beam axis
    curr_finished_seq = tf.concat([finished_seq, curr_seq], axis=1)
    curr_finished_scores = tf.concat([finished_scores, curr_scores], axis=1)
    curr_finished_flags = tf.concat([finished_flags, curr_finished], axis=1)
    return compute_topk_scores_and_seq(
        curr_finished_seq,
        curr_finished_scores,
        curr_finished_scores,
        curr_finished_flags,
        beam_size,
        batch_size,
        "grow_finished",
        use_tpu=use_tpu,
        use_top_k_with_unique=use_top_k_with_unique)

  def grow_alive(curr_seq, curr_scores, curr_log_probs, curr_finished, states):
    """Given sequences and scores, will gather the top k=beam size sequences.

    Args:
      curr_seq: current topk sequence that has been grown by one position.
        [batch_size, beam_size, i+1]
      curr_scores: scores for each of these sequences. [batch_size, beam_size]
      curr_log_probs: log probs for each of these sequences.
        [batch_size, beam_size]
      curr_finished: Finished flags for each of these sequences.
        [batch_size, beam_size]
      states: dict (possibly nested) of decoding states.

    Returns:
      Tuple of
        (Topk sequences based on scores,
         log probs of these sequences,
         Finished flags of these sequences)
    """
    # Set the scores of the finished seq in curr_seq to large negative
    # values
    curr_scores += tf.to_float(curr_finished) * -INF
    return compute_topk_scores_and_seq(curr_seq, curr_scores, curr_log_probs,
                                       curr_finished, beam_size, batch_size,
                                       "grow_alive", states, use_tpu=use_tpu)

  def grow_topk(i, alive_seq, alive_log_probs, states):
    r"""Inner beam search loop.

    This function takes the current alive sequences, and grows them to topk
    sequences where k = 2*beam. We use 2*beam because we could have beam_size
    number of sequences that might hit <EOS> and there will be no alive
    sequences to continue. With 2*beam_size, this will not happen. This
    relies on the assumption that the vocab size is > beam size. If this is
    true, we will have at least beam_size non-<EOS> extensions if we extract
    the next top 2*beam words.
    Length penalty is given by = ((5 + len(decode)) / 6) ^ -\alpha. Please
    refer to https://arxiv.org/abs/1609.08144.

    Args:
      i: loop index
      alive_seq: Topk sequences decoded so far [batch_size, beam_size, i+1]
      alive_log_probs: probabilities of these sequences.
        [batch_size, beam_size]
      states: dict (possibly nested) of decoding states.

    Returns:
      Tuple of
        (Topk sequences extended by the next word,
         The log probs of these sequences,
         The scores with length penalty of these sequences,
         Flags indicating which of these sequences have finished decoding,
         dict of transformed decoding states)
    """
    # Get the logits for all the possible next symbols
    if use_tpu and states:
      flat_ids = tf.reshape(
          tf.slice(alive_seq, [0, 0, i], [batch_size, beam_size, 1]),
          [batch_size * beam_size, -1])
    else:
      flat_ids = tf.reshape(alive_seq, [batch_size * beam_size, -1])

    # (batch_size * beam_size, decoded_length)
    if states:
      flat_states = nest.map_structure(_merge_beam_dim, states)
      flat_logits, flat_states = symbols_to_logits_fn(flat_ids, i, flat_states)
      states = nest.map_structure(
          lambda t: _unmerge_beam_dim(t, batch_size, beam_size), flat_states)
    elif use_tpu:
      flat_logits = symbols_to_logits_fn(flat_ids, i)
    else:
      flat_logits = symbols_to_logits_fn(flat_ids)

    logits = tf.reshape(flat_logits, [batch_size, beam_size, -1])

    # Convert logits to normalized log probs
    candidate_log_probs = common_layers.log_prob_from_logits(logits)

    # Multiply the probabilities by the current probabilities of the beam.
    # (batch_size, beam_size, vocab_size) + (batch_size, beam_size, 1)
    log_probs = candidate_log_probs + tf.expand_dims(alive_log_probs, axis=2)

    length_penalty = tf.pow(((5. + tf.to_float(i + 1)) / 6.), alpha)

    curr_scores = log_probs / length_penalty
    # Flatten out (beam_size, vocab_size) probs in to a list of possibilities
    flat_curr_scores = tf.reshape(curr_scores, [-1, beam_size * vocab_size])

    if use_tpu and use_top_k_with_unique:
      topk_scores, topk_ids = top_k_with_unique(
          flat_curr_scores, k=beam_size * 2)
    else:
      topk_scores, topk_ids = tf.nn.top_k(flat_curr_scores, k=beam_size * 2)

    # Recovering the log probs because we will need to send them back
    topk_log_probs = topk_scores * length_penalty

    # Work out what beam the top probs are in.
    topk_beam_index = topk_ids // vocab_size
    topk_ids %= vocab_size  # Unflatten the ids

    if not use_tpu:
      # The next three steps are to create coordinates for tf.gather_nd to
      # pull out the correct sequences from id's that we need to grow.
      # We will also use the coordinates to gather the booleans of the beam
      # items that survived.
      batch_pos = compute_batch_indices(batch_size, beam_size * 2)

      # top beams will give us the actual coordinates to do the gather.
      # stacking will create a tensor of dimension batch * beam * 2, where
      # the last dimension contains the i,j gathering coordinates.
      topk_coordinates = tf.stack([batch_pos, topk_beam_index], axis=2)

      # Gather up the most probable 2*beams both for the ids and
      # finished_in_alive bools
      topk_seq = tf.gather_nd(alive_seq, topk_coordinates)
      if states:
        states = nest.map_structure(
            lambda state: tf.gather_nd(state, topk_coordinates), states)

      # Append the most probable alive
      topk_seq = tf.concat(
          [topk_seq, tf.expand_dims(topk_ids, axis=2)], axis=2)
    else:
      # Gather up the most probable 2*beams both for the ids and
      # finished_in_alive bools
      topk_seq = fast_tpu_gather(alive_seq, topk_beam_index)

      if states:
        states = nest.map_structure(
            lambda state: fast_tpu_gather(state, topk_beam_index), states)

      # Update the most probable alive
      topk_seq = tf.transpose(topk_seq, perm=[2, 0, 1])
      topk_seq = inplace_ops.alias_inplace_update(topk_seq, i + 1, topk_ids)
      topk_seq = tf.transpose(topk_seq, perm=[1, 2, 0])

    topk_finished = tf.equal(topk_ids, eos_id)

    return topk_seq, topk_log_probs, topk_scores, topk_finished, states

  def inner_loop(i, alive_seq, alive_log_probs, finished_seq, finished_scores,
                 finished_flags, states):
    """Inner beam search loop.

    There are three groups of tensors: alive, finished, and topk.
    The alive group contains information about the current alive sequences.
    The topk group contains information about alive + topk current decoded
    words. The finished group contains information about finished sentences,
    that is, the ones that have decoded to <EOS>. These are what we return.
    The general beam search algorithm is as follows:
    While we haven't terminated (please look at the termination condition)
      1. Grow the current alive to get beam*2 topk sequences
      2. Among the topk, keep the top beam_size ones that haven't reached
         EOS into alive
      3. Among the topk, keep the top beam_size ones that have reached EOS
         into finished
    Repeat
    To make things simple with using fixed size tensors, we will end
    up inserting unfinished sequences into finished in the beginning. To
    stop that we add -ve INF to the score of the unfinished sequence so that
    when a true finished sequence does appear, it will have a higher score
    than all the unfinished ones.

    Args:
      i: loop index
      alive_seq: Topk sequences decoded so far [batch_size, beam_size, i+1]
      alive_log_probs: probabilities of the beams. [batch_size, beam_size]
      finished_seq: Current finished sequences.
        [batch_size, beam_size, i+1]
      finished_scores: scores for each of these sequences.
        [batch_size, beam_size]
      finished_flags: finished bools for each of these sequences.
        [batch_size, beam_size]
      states: dict (possibly nested) of decoding states.

    Returns:
      Tuple of
        (Incremented loop index
         New alive sequences,
         Log probs of the alive sequences,
         New finished sequences,
         Scores of the new finished sequences,
         Flags indicating which sequence in finished has reached EOS,
         dict of final decoding states)
    """

    # Each inner loop, we carry out three steps:
    # 1. Get the current topk items.
    # 2. Extract the ones that have finished and haven't finished
    # 3. Recompute the contents of finished based on scores.
    topk_seq, topk_log_probs, topk_scores, topk_finished, states = grow_topk(
        i, alive_seq, alive_log_probs, states)
    alive_seq, alive_log_probs, _, states = grow_alive(
        topk_seq, topk_scores, topk_log_probs, topk_finished, states)
    finished_seq, finished_scores, finished_flags, _ = grow_finished(
        finished_seq, finished_scores, finished_flags, topk_seq, topk_scores,
        topk_finished)

    return (i + 1, alive_seq, alive_log_probs, finished_seq, finished_scores,
            finished_flags, states)

  def _is_finished(i, unused_alive_seq, alive_log_probs, unused_finished_seq,
                   finished_scores, unused_finished_in_finished,
                   unused_states):
    """Checking termination condition.

    We terminate when we decoded up to decode_length or the lowest scoring
    item in finished has a greater score than the highest prob item in alive
    divided by the max length penalty.

    Args:
      i: loop index
      alive_log_probs: probabilities of the beams. [batch_size, beam_size]
      finished_scores: scores for each of these sequences.
        [batch_size, beam_size]

    Returns:
      Bool.
    """
    max_length_penalty = tf.pow(((5. + tf.to_float(decode_length)) / 6.),
                                alpha)
    # The best possible score of the most likely alive sequence.
    lower_bound_alive_scores = alive_log_probs[:, 0] / max_length_penalty

    if not stop_early:
      # by considering the min score (in the top N beams) we ensure that
      # the decoder will keep decoding until there is at least one beam
      # (in the top N) that can be improved (w.r.t. the alive beams).
      # any unfinished beam will have score -INF - thus the min
      # will always be -INF if there is at least one unfinished beam -
      # which means the bound_is_met condition cannot be true in this case.
      lowest_score_of_finished_in_finished = tf.reduce_min(finished_scores)
    else:
      # by taking the max score we only care about the first beam;
      # as soon as this first beam cannot be beaten from the alive beams
      # the beam decoder can stop.
      # similarly to the above, if the top beam is not completed, its
      # finished_score is -INF, thus it will not activate the
      # bound_is_met condition. (i.e., decoder will keep going on).
      # note we need to find the max for every sequence separately - so, we
      # need to keep the batch dimension (see axis=1)
      lowest_score_of_finished_in_finished = tf.reduce_max(finished_scores,
                                                           axis=1)

    bound_is_met = tf.reduce_all(
        tf.greater(lowest_score_of_finished_in_finished,
                   lower_bound_alive_scores))

    return tf.logical_and(
        tf.less(i, decode_length), tf.logical_not(bound_is_met))

  inner_shape = tf.TensorShape([None, None, None])
  if use_tpu:
    inner_shape = tf.TensorShape([batch_size, beam_size, decode_length + 1])
  if use_tpu:
    state_struc = nest.map_structure(lambda state: state.get_shape(), states)
  else:
    state_struc = nest.map_structure(get_state_shape_invariants, states)
  (_, alive_seq, alive_log_probs, finished_seq, finished_scores,
   finished_flags, states) = tf.while_loop(
       _is_finished,
       inner_loop, [
           tf.constant(0), alive_seq, alive_log_probs, finished_seq,
           finished_scores, finished_flags, states
       ],
       shape_invariants=[
           tf.TensorShape([]), inner_shape,
           alive_log_probs.get_shape(), inner_shape,
           finished_scores.get_shape(),
           finished_flags.get_shape(), state_struc
       ],
       parallel_iterations=1,
       back_prop=False)

  alive_seq.set_shape((None, beam_size, None))
  finished_seq.set_shape((None, beam_size, None))

  # Accounting for corner case: It's possible that no sequence in alive for
  # a particular batch item ever reached EOS. In that case, we should just
  # copy the contents of alive for that batch item.
  # tf.reduce_any(finished_flags, 1) if 0, means that no sequence for that
  # batch index had reached EOS. We need to do the same for the scores as
  # well.
  finished_seq = tf.where(
      tf.reduce_any(finished_flags, 1), finished_seq, alive_seq)
  finished_scores = tf.where(
      tf.reduce_any(finished_flags, 1), finished_scores, alive_log_probs)
  return finished_seq, finished_scores, states
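The alive/finished bookkeeping above can be sketched in plain Python, without batching, length penalty, states, or the TPU paths (a toy stand-in with names of our own choosing, not the TF implementation):

```python
import math

def toy_beam_search(logits_fn, vocab_size, beam_size, decode_length, eos_id=0):
    """Plain-Python sketch of the alive/finished bookkeeping above.

    No batching, no length penalty, no TPU paths -- a toy stand-in for the
    TF implementation; `logits_fn` returns next-symbol log probs for a
    sequence.
    """
    alive = [((), 0.0)]  # (sequence, cumulative log prob)
    finished = []
    for _ in range(decode_length):
        # grow: extend every alive beam by every vocab symbol
        grown = [(seq + (v,), lp + logits_fn(seq)[v])
                 for seq, lp in alive for v in range(vocab_size)]
        grown.sort(key=lambda c: c[1], reverse=True)
        top = grown[:2 * beam_size]  # 2*beam candidates, as in the TF code
        # split candidates: those ending in EOS go to finished, rest stay alive
        finished += [c for c in top if c[0][-1] == eos_id][:beam_size]
        alive = [c for c in top if c[0][-1] != eos_id][:beam_size]
    return max(finished or alive, key=lambda c: c[1])

def logits_fn(seq):
    # toy model: after a 1, EOS (id 0) becomes very likely; otherwise 1 wins
    probs = (0.8, 0.1, 0.1) if seq and seq[-1] == 1 else (0.05, 0.7, 0.25)
    return [math.log(p) for p in probs]

best_seq, best_lp = toy_beam_search(logits_fn, vocab_size=3, beam_size=2,
                                    decode_length=3)
print(best_seq)  # (1, 0)
```

Keeping 2*beam candidates before the split mirrors the TF code's guarantee that at least beam_size non-EOS extensions survive each step.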
Beam search with length penalties.

Requires a function that can take the currently decoded symbols and return
the logits for the next symbol. The implementation is inspired by
https://arxiv.org/abs/1609.08144.

When running, the beam search steps can be visualized by using tfdbg to
watch the operations generating the output ids for each beam step.
These operations have the pattern:
  (alive|finished)_topk_(seq,scores)

Operations marked `alive` represent the new beam sequences that will be
processed in the next step.  Operations marked `finished` represent the
completed beam sequences, which may be padded with 0s if no beams finished.

Operations marked `seq` store the full beam sequence for the time step.
Operations marked `scores` store the sequence's final log scores.

The beam search steps will be processed sequentially in order, so when
capturing tensors observed from these operations, clients can make
assumptions about which step is being recorded.

WARNING: Assumes 2nd dimension of tensors in `states` is not invariant; this
means that the shape of the 2nd dimension of these tensors will not be
available (i.e. set to None) inside symbols_to_logits_fn.

Args:
  symbols_to_logits_fn: Interface to the model, to provide logits.
      Should take [batch_size, decoded_ids] and return
      [batch_size, vocab_size]
  initial_ids: Ids to start off the decoding, this will be the first thing
      handed to symbols_to_logits_fn (after expanding to beam size)
      [batch_size]
  beam_size: Size of the beam.
  decode_length: Number of steps to decode for.
  vocab_size: Size of the vocab, must equal the size of the logits returned
      by symbols_to_logits_fn
  alpha: alpha for length penalty.
  states: dict (possibly nested) of decoding states.
  eos_id: ID for end of sentence.
  stop_early: a boolean - stop once best sequence is provably determined.
  use_tpu: A bool, whether to do beam search on TPU.
  use_top_k_with_unique: bool, whether to use a fast (but decreased
      precision) top_k during TPU beam search.
Returns: Tuple of (decoded beams [batch_size, beam_size, decode_length] decoding probabilities [batch_size, beam_size])
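The length penalty referenced in the docstring is the GNMT-style penalty from https://arxiv.org/abs/1609.08144, `((5 + len) / 6) ** alpha`. A small sketch shows how scores and log probs convert back and forth, which is exactly how `grow_topk` recovers log probs after ranking by penalized score:

```python
def length_penalty(step, alpha):
    """GNMT-style length penalty: ((5 + step) / 6) ** alpha."""
    return ((5.0 + step) / 6.0) ** alpha

log_prob = -4.2
lp = length_penalty(7, 0.6)
score = log_prob / lp       # penalized score used for ranking beams
recovered = score * lp      # grow_topk recovers log probs this way
```

With `alpha = 0` the penalty at step 1 is exactly 1.0, i.e. no length normalization; larger `alpha` increasingly favors longer sequences.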
def _publish_metrics(self, name, prev_keys, key, data): """Recursively publish keys""" value = data[key] keys = prev_keys + [key] if isinstance(value, dict): for new_key in value: self._publish_metrics(name, keys, new_key, value) elif isinstance(value, (float, int, long)): joined_keys = '.'.join(keys) if name: publish_key = '{}.{}'.format(name, joined_keys) else: publish_key = joined_keys if isinstance(value, bool): value = int(value) self.publish(publish_key, value)
Recursively publish keys
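The recursion in `_publish_metrics` flattens a nested dict of numeric metrics into dotted keys. A self-contained sketch of the same traversal (the function `flatten_metrics` is illustrative, not part of the collector; it collects into a dict instead of calling `self.publish`):

```python
def flatten_metrics(name, data):
    """Flatten a nested dict of numeric metrics into dotted keys,
    mirroring the recursion in _publish_metrics."""
    flat = {}

    def walk(prev_keys, key, value):
        keys = prev_keys + [key]
        if isinstance(value, dict):
            for new_key in value:
                walk(keys, new_key, value[new_key])
        elif isinstance(value, (int, float)):
            joined = '.'.join(keys)
            publish_key = '{}.{}'.format(name, joined) if name else joined
            # bool is a subclass of int; coerce to 0/1 as the collector does
            flat[publish_key] = int(value) if isinstance(value, bool) else value

    for key in data:
        walk([], key, data[key])
    return flat
```

For example, `flatten_metrics('redis', {'mem': {'used': 10}})` yields `{'redis.mem.used': 10}`; non-numeric leaves are silently skipped, just as in the original.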
def get_statement_model(self): """ Return the class for the statement model. """ from chatterbot.conversation import Statement # Create a storage-aware statement statement = Statement statement.storage = self return statement
Return the class for the statement model.
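`get_statement_model` attaches the storage adapter as a *class* attribute, so every `Statement` created afterwards shares it. A minimal sketch of that pattern (the `Statement` and `DummyStorage` classes here are stand-ins, not the real ChatterBot classes):

```python
class Statement(object):
    storage = None  # class-level slot, shared by all instances

    def __init__(self, text):
        self.text = text


class DummyStorage(object):
    name = 'dummy'


def get_statement_model(storage):
    statement = Statement
    statement.storage = storage  # set on the class, not an instance
    return statement


Model = get_statement_model(DummyStorage())
s = Model('hello')
```

Because the attribute lives on the class, `s.storage` and `Model.storage` are the same object; any instance created later sees it without extra wiring.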
def fetch_seq(ac, start_i=None, end_i=None): """Fetches sequences and subsequences from NCBI eutils and Ensembl REST interfaces. :param string ac: accession of sequence to fetch :param int start_i: start position of *interbase* interval :param int end_i: end position of *interbase* interval **IMPORTANT** start_i and end_i specify 0-based interbase coordinates, which refer to junctions between nucleotides. This is numerically equivalent to 0-based, right-open nucleotide coordinates. Without an interval, the full sequence is returned:: >> len(fetch_seq('NP_056374.2')) 1596 Therefore, it's preferable to provide the interval rather than using Python slicing sequence on the delivered sequence:: >> fetch_seq('NP_056374.2',0,10) # This! 'MESRETLSSS' >> fetch_seq('NP_056374.2')[0:10] # Not this! 'MESRETLSSS' >> fetch_seq('NP_056374.2',0,10) == fetch_seq('NP_056374.2')[0:10] True Providing intervals is especially important for large sequences:: >> fetch_seq('NC_000001.10',2000000,2000030) 'ATCACACGTGCAGGAACCCTTTTCCAAAGG' This call will pull back 30 bases plus overhead; without the interval, one would receive 250MB of chr1 plus overhead! Essentially any RefSeq, Genbank, BIC, or Ensembl sequence may be fetched: >> [(ac,fetch_seq(ac,0,25)) ... for ac in ['NG_032072.1', 'NW_003571030.1', 'NT_113901.1', ... 'NC_000001.10','NP_056374.2', 'GL000191.1', 'KB663603.1', ... 'ENST00000288602', 'ENSP00000288602']] # doctest: +NORMALIZE_WHITESPACE [('NG_032072.1', 'AAAATTAAATTAAAATAAATAAAAA'), ('NW_003571030.1', 'TTGTGTGTTAGGGTGCTCTAAGCAA'), ('NT_113901.1', 'GAATTCCTCGTTCACACAGTTTCTT'), ('NC_000001.10', 'NNNNNNNNNNNNNNNNNNNNNNNNN'), ('NP_056374.2', 'MESRETLSSSRQRGGESDFLPVSSA'), ('GL000191.1', 'GATCCACCTGCCTCAGCCTCCCAGA'), ('KB663603.1', 'TTTATTTATTTTAGATACTTATCTC'), ('ENST00000288602', u'CGCCTCCCTTCCCCCTCCCCGCCCG'), ('ENSP00000288602', u'MAALSGGGGGGAEPGQALFNGDMEP')] RuntimeError is thrown in the case of errors:: >> fetch_seq('NM_9.9') Traceback (most recent call last): ... 
    RuntimeError: No sequence available for NM_9.9

    >> fetch_seq('QQ01234')
    Traceback (most recent call last):
    ...
    RuntimeError: No sequence fetcher for QQ01234

    """
    ac_dispatch = [
        {
            "re": re.compile(r"^(?:AC|N[CGMPRTW])_|^[A-L]\w\d|^U\d"),
            "fetcher": _fetch_seq_ncbi
        },
        {
            "re": re.compile(r"^ENS[TP]\d+"),
            "fetcher": _fetch_seq_ensembl
        },
    ]
    eligible_fetchers = [
        dr["fetcher"] for dr in ac_dispatch if dr["re"].match(ac)
    ]
    if len(eligible_fetchers) == 0:
        raise RuntimeError("No sequence fetcher for {ac}".format(ac=ac))
    if len(eligible_fetchers) > 1:    # pragma: nocover (no way to test)
        _logger.debug("Multiple sequence fetchers found for "
                      "{ac}; using first".format(ac=ac))
    fetcher = eligible_fetchers[0]
    _logger.debug("fetching {ac} with {f}".format(ac=ac, f=fetcher))
    try:
        return fetcher(ac, start_i, end_i)
    except requests.RequestException as ex:
        raise RuntimeError("Failed to fetch {ac} ({ex})".format(ac=ac, ex=ex))
Fetches sequences and subsequences from NCBI eutils and Ensembl REST interfaces. :param string ac: accession of sequence to fetch :param int start_i: start position of *interbase* interval :param int end_i: end position of *interbase* interval **IMPORTANT** start_i and end_i specify 0-based interbase coordinates, which refer to junctions between nucleotides. This is numerically equivalent to 0-based, right-open nucleotide coordinates. Without an interval, the full sequence is returned:: >> len(fetch_seq('NP_056374.2')) 1596 Therefore, it's preferable to provide the interval rather than using Python slicing sequence on the delivered sequence:: >> fetch_seq('NP_056374.2',0,10) # This! 'MESRETLSSS' >> fetch_seq('NP_056374.2')[0:10] # Not this! 'MESRETLSSS' >> fetch_seq('NP_056374.2',0,10) == fetch_seq('NP_056374.2')[0:10] True Providing intervals is especially important for large sequences:: >> fetch_seq('NC_000001.10',2000000,2000030) 'ATCACACGTGCAGGAACCCTTTTCCAAAGG' This call will pull back 30 bases plus overhead; without the interval, one would receive 250MB of chr1 plus overhead! Essentially any RefSeq, Genbank, BIC, or Ensembl sequence may be fetched: >> [(ac,fetch_seq(ac,0,25)) ... for ac in ['NG_032072.1', 'NW_003571030.1', 'NT_113901.1', ... 'NC_000001.10','NP_056374.2', 'GL000191.1', 'KB663603.1', ... 'ENST00000288602', 'ENSP00000288602']] # doctest: +NORMALIZE_WHITESPACE [('NG_032072.1', 'AAAATTAAATTAAAATAAATAAAAA'), ('NW_003571030.1', 'TTGTGTGTTAGGGTGCTCTAAGCAA'), ('NT_113901.1', 'GAATTCCTCGTTCACACAGTTTCTT'), ('NC_000001.10', 'NNNNNNNNNNNNNNNNNNNNNNNNN'), ('NP_056374.2', 'MESRETLSSSRQRGGESDFLPVSSA'), ('GL000191.1', 'GATCCACCTGCCTCAGCCTCCCAGA'), ('KB663603.1', 'TTTATTTATTTTAGATACTTATCTC'), ('ENST00000288602', u'CGCCTCCCTTCCCCCTCCCCGCCCG'), ('ENSP00000288602', u'MAALSGGGGGGAEPGQALFNGDMEP')] RuntimeError is thrown in the case of errors:: >> fetch_seq('NM_9.9') Traceback (most recent call last): ... 
RuntimeError: No sequence available for NM_9.9 >> fetch_seq('QQ01234') Traceback (most recent call last): ... RuntimeError: No sequence fetcher for QQ01234
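The accession routing in `fetch_seq` is a regex dispatch table: each entry pairs an anchored pattern with a fetcher, and the first match wins. A toy version of the same structure (the fetchers here are stand-in lambdas, not the real NCBI/Ensembl clients, and the patterns are deliberately simplified):

```python
import re

# Simplified dispatch table in the spirit of fetch_seq's ac_dispatch.
DISPATCH = [
    {"re": re.compile(r"^N[CGMP]_"), "fetcher": lambda ac: "ncbi:" + ac},
    {"re": re.compile(r"^ENS[TP]\d+"), "fetcher": lambda ac: "ensembl:" + ac},
]


def route(ac):
    # re.match anchors at the start, so these act as prefix tests.
    eligible = [d["fetcher"] for d in DISPATCH if d["re"].match(ac)]
    if not eligible:
        raise RuntimeError("No sequence fetcher for {}".format(ac))
    return eligible[0](ac)
```

The table makes adding a new sequence source a one-line change, and unmatched accessions fail loudly rather than being sent to the wrong backend.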
def get_disabled(self): """Sorted list of (username, napp_name) of disabled napps. The difference between installed and enabled. """ installed = set(self.get_installed()) enabled = set(self.get_enabled()) return sorted(installed - enabled)
Sorted list of (username, napp_name) of disabled napps. The difference between installed and enabled.
Below is the instruction that describes the task: ### Input: Sorted list of (username, napp_name) of disabled napps. The difference between installed and enabled. ### Response: def get_disabled(self): """Sorted list of (username, napp_name) of disabled napps. The difference between installed and enabled. """ installed = set(self.get_installed()) enabled = set(self.get_enabled()) return sorted(installed - enabled)
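The installed-minus-enabled logic is plain set arithmetic; a minimal sketch with hypothetical (username, napp_name) tuples standing in for what get_installed/get_enabled would return:

```python
def get_disabled(installed, enabled):
    """Sorted (username, napp_name) tuples present in installed but not enabled."""
    return sorted(set(installed) - set(enabled))

# Hypothetical napp identifiers for illustration
installed = [("kytos", "of_core"), ("kytos", "mef_eline"), ("alice", "stats")]
enabled = [("kytos", "of_core")]
```

Sorting tuples orders first by username, then by napp name, which gives a stable listing.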
def get_management_certificate(self, thumbprint): ''' The Get Management Certificate operation retrieves information about the management certificate with the specified thumbprint. Management certificates, which are also known as subscription certificates, authenticate clients attempting to connect to resources associated with your Windows Azure subscription. thumbprint: The thumbprint value of the certificate. ''' _validate_not_none('thumbprint', thumbprint) return self._perform_get( '/' + self.subscription_id + '/certificates/' + _str(thumbprint), SubscriptionCertificate)
The Get Management Certificate operation retrieves information about the management certificate with the specified thumbprint. Management certificates, which are also known as subscription certificates, authenticate clients attempting to connect to resources associated with your Windows Azure subscription. thumbprint: The thumbprint value of the certificate.
Below is the instruction that describes the task: ### Input: The Get Management Certificate operation retrieves information about the management certificate with the specified thumbprint. Management certificates, which are also known as subscription certificates, authenticate clients attempting to connect to resources associated with your Windows Azure subscription. thumbprint: The thumbprint value of the certificate. ### Response: def get_management_certificate(self, thumbprint): ''' The Get Management Certificate operation retrieves information about the management certificate with the specified thumbprint. Management certificates, which are also known as subscription certificates, authenticate clients attempting to connect to resources associated with your Windows Azure subscription. thumbprint: The thumbprint value of the certificate. ''' _validate_not_none('thumbprint', thumbprint) return self._perform_get( '/' + self.subscription_id + '/certificates/' + _str(thumbprint), SubscriptionCertificate)
def unset_config_value(self, name, quiet=False): """unset a configuration value Parameters ========== name: the name of the value to unset (remove key in dictionary) quiet: disable verbose output if True (default is False) """ config_data = self._read_config_file() if name in config_data: del config_data[name] self._write_config_file(config_data) if not quiet: self.print_config_value(name, separator=' is now set to: ')
unset a configuration value Parameters ========== name: the name of the value to unset (remove key in dictionary) quiet: disable verbose output if True (default is False)
Below is the instruction that describes the task: ### Input: unset a configuration value Parameters ========== name: the name of the value to unset (remove key in dictionary) quiet: disable verbose output if True (default is False) ### Response: def unset_config_value(self, name, quiet=False): """unset a configuration value Parameters ========== name: the name of the value to unset (remove key in dictionary) quiet: disable verbose output if True (default is False) """ config_data = self._read_config_file() if name in config_data: del config_data[name] self._write_config_file(config_data) if not quiet: self.print_config_value(name, separator=' is now set to: ')
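The read–delete–write cycle above goes through private _read_config_file/_write_config_file helpers that are not shown; a standalone sketch of the same cycle, assuming (as an illustration only) that the config is a JSON file on disk:

```python
import json
import os
import tempfile

def unset_config_value(path, name):
    """Remove `name` from a JSON config file and write the result back."""
    with open(path) as fh:
        config = json.load(fh)
    config.pop(name, None)  # deleting a missing key is a no-op here
    with open(path, "w") as fh:
        json.dump(config, fh)

# Exercise it against a throwaway file
path = os.path.join(tempfile.mkdtemp(), "config.json")
with open(path, "w") as fh:
    json.dump({"username": "alice", "key": "secret"}, fh)
unset_config_value(path, "key")
with open(path) as fh:
    remaining = json.load(fh)
```

Like the original, removing a key that does not exist simply leaves the file unchanged.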
def __get_stat_display(self, stats, layer): """Return a dict of dict with all the stats display. stats: Global stats dict layer: ~ cs_status "None": standalone or server mode "Connected": Client is connected to a Glances server "SNMP": Client is connected to a SNMP server "Disconnected": Client is disconnected from the server :returns: dict of dict * key: plugin name * value: dict returned by the get_stats_display Plugin method """ ret = {} for p in stats.getPluginsList(enable=False): if p == 'quicklook' or p == 'processlist': # processlist is done later # because we need to know how many processes could be displayed continue # Compute the plugin max size plugin_max_width = None if p in self._left_sidebar: plugin_max_width = max(self._left_sidebar_min_width, self.screen.getmaxyx()[1] - 105) plugin_max_width = min(self._left_sidebar_max_width, plugin_max_width) # Get the view ret[p] = stats.get_plugin(p).get_stats_display(args=self.args, max_width=plugin_max_width) return ret
Return a dict of dict with all the stats display. stats: Global stats dict layer: ~ cs_status "None": standalone or server mode "Connected": Client is connected to a Glances server "SNMP": Client is connected to a SNMP server "Disconnected": Client is disconnected from the server :returns: dict of dict * key: plugin name * value: dict returned by the get_stats_display Plugin method
Below is the instruction that describes the task: ### Input: Return a dict of dict with all the stats display. stats: Global stats dict layer: ~ cs_status "None": standalone or server mode "Connected": Client is connected to a Glances server "SNMP": Client is connected to a SNMP server "Disconnected": Client is disconnected from the server :returns: dict of dict * key: plugin name * value: dict returned by the get_stats_display Plugin method ### Response: def __get_stat_display(self, stats, layer): """Return a dict of dict with all the stats display. stats: Global stats dict layer: ~ cs_status "None": standalone or server mode "Connected": Client is connected to a Glances server "SNMP": Client is connected to a SNMP server "Disconnected": Client is disconnected from the server :returns: dict of dict * key: plugin name * value: dict returned by the get_stats_display Plugin method """ ret = {} for p in stats.getPluginsList(enable=False): if p == 'quicklook' or p == 'processlist': # processlist is done later # because we need to know how many processes could be displayed continue # Compute the plugin max size plugin_max_width = None if p in self._left_sidebar: plugin_max_width = max(self._left_sidebar_min_width, self.screen.getmaxyx()[1] - 105) plugin_max_width = min(self._left_sidebar_max_width, plugin_max_width) # Get the view ret[p] = stats.get_plugin(p).get_stats_display(args=self.args, max_width=plugin_max_width) return ret
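The sidebar width computation is a two-step max/min clamp of a terminal-dependent value into a [min, max] range. The idiom in isolation — the 105-column offset is the one used above, while the concrete bounds in the usage below are arbitrary stand-ins for Glances' real sidebar limits:

```python
def clamp_width(term_cols, min_width, max_width, offset=105):
    """Clamp (term_cols - offset) into the inclusive range [min_width, max_width]."""
    width = max(min_width, term_cols - offset)   # never narrower than min_width
    return min(max_width, width)                 # never wider than max_width
```

On narrow terminals the min bound wins; on very wide ones the max bound wins; in between, the width tracks the terminal.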
def proxy(self): """Return a Deferred that will result in a proxy object in the future.""" d = Deferred(self.loop) self._proxy_deferreds.append(d) if self._proxy: d.callback(self._proxy) return d
Return a Deferred that will result in a proxy object in the future.
Below is the instruction that describes the task: ### Input: Return a Deferred that will result in a proxy object in the future. ### Response: def proxy(self): """Return a Deferred that will result in a proxy object in the future.""" d = Deferred(self.loop) self._proxy_deferreds.append(d) if self._proxy: d.callback(self._proxy) return d
def get_all_permissions_views(self): """ Returns a set of tuples with the perm name and view menu name """ perms_views = set() for role in self.get_user_roles(): perms_views.update({(perm_view.permission.name, perm_view.view_menu.name) for perm_view in role.permissions}) return perms_views
Returns a set of tuples with the perm name and view menu name
Below is the instruction that describes the task: ### Input: Returns a set of tuples with the perm name and view menu name ### Response: def get_all_permissions_views(self): """ Returns a set of tuples with the perm name and view menu name """ perms_views = set() for role in self.get_user_roles(): perms_views.update({(perm_view.permission.name, perm_view.view_menu.name) for perm_view in role.permissions}) return perms_views
def UpdateShowDirInTVLibrary(self, showID, showDir): """ Update show directory entry for given show id in TVLibrary table. Parameters ---------- showID : int Show id value. showDir : string Show directory name. """ goodlogging.Log.Info("DB", "Updating TV library for ShowID={0}: ShowDir={1}".format(showID, showDir)) self._ActionDatabase("UPDATE TVLibrary SET ShowDir=? WHERE ShowID=?", (showDir, showID))
Update show directory entry for given show id in TVLibrary table. Parameters ---------- showID : int Show id value. showDir : string Show directory name.
Below is the instruction that describes the task: ### Input: Update show directory entry for given show id in TVLibrary table. Parameters ---------- showID : int Show id value. showDir : string Show directory name. ### Response: def UpdateShowDirInTVLibrary(self, showID, showDir): """ Update show directory entry for given show id in TVLibrary table. Parameters ---------- showID : int Show id value. showDir : string Show directory name. """ goodlogging.Log.Info("DB", "Updating TV library for ShowID={0}: ShowDir={1}".format(showID, showDir)) self._ActionDatabase("UPDATE TVLibrary SET ShowDir=? WHERE ShowID=?", (showDir, showID))
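The `UPDATE ... SET ... WHERE ...` call binds its values through `?` placeholders rather than string formatting, which is what the `_ActionDatabase` helper ultimately passes to the driver. The same pattern against an in-memory SQLite database — the table layout here is a minimal stand-in for the real TVLibrary schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE TVLibrary (ShowID INTEGER PRIMARY KEY, ShowDir TEXT)")
conn.execute("INSERT INTO TVLibrary VALUES (?, ?)", (1, "old_dir"))

# Parameterized update: values are bound by the driver, never interpolated
# into the SQL string, which avoids quoting bugs and SQL injection.
conn.execute("UPDATE TVLibrary SET ShowDir=? WHERE ShowID=?", ("new_dir", 1))
row = conn.execute("SELECT ShowDir FROM TVLibrary WHERE ShowID=?", (1,)).fetchone()
```

Note the parameter tuple order `(showDir, showID)` matches the placeholder order in the SQL, not the method signature.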
def _build_dictionary(self, results): """ Build model dictionary keyed by the relation's foreign key. :param results: The results :type results: Collection :rtype: dict """ foreign = self._first_key dictionary = {} for result in results: key = getattr(result, foreign) if key not in dictionary: dictionary[key] = [] dictionary[key].append(result) return dictionary
Build model dictionary keyed by the relation's foreign key. :param results: The results :type results: Collection :rtype: dict
Below is the instruction that describes the task: ### Input: Build model dictionary keyed by the relation's foreign key. :param results: The results :type results: Collection :rtype: dict ### Response: def _build_dictionary(self, results): """ Build model dictionary keyed by the relation's foreign key. :param results: The results :type results: Collection :rtype: dict """ foreign = self._first_key dictionary = {} for result in results: key = getattr(result, foreign) if key not in dictionary: dictionary[key] = [] dictionary[key].append(result) return dictionary
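Keying rows by a foreign attribute and appending into per-key lists is the standard group-by-key idiom; a sketch with a namedtuple standing in for the ORM result objects (the attribute name is a hypothetical example):

```python
from collections import namedtuple

Row = namedtuple("Row", ["country_id", "name"])

def build_dictionary(results, foreign="country_id"):
    """Group result objects into a dict keyed by the given attribute."""
    dictionary = {}
    for result in results:
        key = getattr(result, foreign)
        # setdefault is equivalent to the explicit "if key not in dict" check
        dictionary.setdefault(key, []).append(result)
    return dictionary

rows = [Row(1, "a"), Row(2, "b"), Row(1, "c")]
groups = build_dictionary(rows)
```

`collections.defaultdict(list)` would also work; the plain-dict form returns an ordinary dict to callers.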
def get_ISBNs(self): """ Get list of VALID ISBN. Returns: list: List with *valid* ISBN strings. """ invalid_isbns = set(self.get_invalid_ISBNs()) valid_isbns = [ self._clean_isbn(isbn) for isbn in self["020a"] if self._clean_isbn(isbn) not in invalid_isbns ] if valid_isbns: return valid_isbns # this is used sometimes in czech national library return [ self._clean_isbn(isbn) for isbn in self["901i"] ]
Get list of VALID ISBN. Returns: list: List with *valid* ISBN strings.
Below is the instruction that describes the task: ### Input: Get list of VALID ISBN. Returns: list: List with *valid* ISBN strings. ### Response: def get_ISBNs(self): """ Get list of VALID ISBN. Returns: list: List with *valid* ISBN strings. """ invalid_isbns = set(self.get_invalid_ISBNs()) valid_isbns = [ self._clean_isbn(isbn) for isbn in self["020a"] if self._clean_isbn(isbn) not in invalid_isbns ] if valid_isbns: return valid_isbns # this is used sometimes in czech national library return [ self._clean_isbn(isbn) for isbn in self["901i"] ]
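Stripped of the MARC field access, get_ISBNs filters a primary field against a blacklist and falls back to a secondary field when nothing survives. The shape of that logic with plain lists — field names and the cleaning step are simplified away:

```python
def pick_valid(primary, fallback, invalid):
    """Return primary values not in `invalid`; if none survive, return fallback."""
    bad = set(invalid)                    # set membership is O(1) per check
    valid = [v for v in primary if v not in bad]
    return valid if valid else list(fallback)
```

Building the blacklist as a set once, as the original does with `set(self.get_invalid_ISBNs())`, keeps the comprehension linear in the number of ISBNs.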
def scaled_imu_encode(self, time_boot_ms, xacc, yacc, zacc, xgyro, ygyro, zgyro, xmag, ymag, zmag): ''' The RAW IMU readings for the usual 9DOF sensor setup. This message should contain the scaled values to the described units time_boot_ms : Timestamp (milliseconds since system boot) (uint32_t) xacc : X acceleration (mg) (int16_t) yacc : Y acceleration (mg) (int16_t) zacc : Z acceleration (mg) (int16_t) xgyro : Angular speed around X axis (millirad /sec) (int16_t) ygyro : Angular speed around Y axis (millirad /sec) (int16_t) zgyro : Angular speed around Z axis (millirad /sec) (int16_t) xmag : X Magnetic field (milli tesla) (int16_t) ymag : Y Magnetic field (milli tesla) (int16_t) zmag : Z Magnetic field (milli tesla) (int16_t) ''' return MAVLink_scaled_imu_message(time_boot_ms, xacc, yacc, zacc, xgyro, ygyro, zgyro, xmag, ymag, zmag)
The RAW IMU readings for the usual 9DOF sensor setup. This message should contain the scaled values to the described units time_boot_ms : Timestamp (milliseconds since system boot) (uint32_t) xacc : X acceleration (mg) (int16_t) yacc : Y acceleration (mg) (int16_t) zacc : Z acceleration (mg) (int16_t) xgyro : Angular speed around X axis (millirad /sec) (int16_t) ygyro : Angular speed around Y axis (millirad /sec) (int16_t) zgyro : Angular speed around Z axis (millirad /sec) (int16_t) xmag : X Magnetic field (milli tesla) (int16_t) ymag : Y Magnetic field (milli tesla) (int16_t) zmag : Z Magnetic field (milli tesla) (int16_t)
Below is the instruction that describes the task: ### Input: The RAW IMU readings for the usual 9DOF sensor setup. This message should contain the scaled values to the described units time_boot_ms : Timestamp (milliseconds since system boot) (uint32_t) xacc : X acceleration (mg) (int16_t) yacc : Y acceleration (mg) (int16_t) zacc : Z acceleration (mg) (int16_t) xgyro : Angular speed around X axis (millirad /sec) (int16_t) ygyro : Angular speed around Y axis (millirad /sec) (int16_t) zgyro : Angular speed around Z axis (millirad /sec) (int16_t) xmag : X Magnetic field (milli tesla) (int16_t) ymag : Y Magnetic field (milli tesla) (int16_t) zmag : Z Magnetic field (milli tesla) (int16_t) ### Response: def scaled_imu_encode(self, time_boot_ms, xacc, yacc, zacc, xgyro, ygyro, zgyro, xmag, ymag, zmag): ''' The RAW IMU readings for the usual 9DOF sensor setup. This message should contain the scaled values to the described units time_boot_ms : Timestamp (milliseconds since system boot) (uint32_t) xacc : X acceleration (mg) (int16_t) yacc : Y acceleration (mg) (int16_t) zacc : Z acceleration (mg) (int16_t) xgyro : Angular speed around X axis (millirad /sec) (int16_t) ygyro : Angular speed around Y axis (millirad /sec) (int16_t) zgyro : Angular speed around Z axis (millirad /sec) (int16_t) xmag : X Magnetic field (milli tesla) (int16_t) ymag : Y Magnetic field (milli tesla) (int16_t) zmag : Z Magnetic field (milli tesla) (int16_t) ''' return MAVLink_scaled_imu_message(time_boot_ms, xacc, yacc, zacc, xgyro, ygyro, zgyro, xmag, ymag, zmag)
def get_bond_order(self, tol=0.2, default_bl=None): """ The bond order according to the distance between the two sites Args: tol (float): Relative tolerance to test. (1 + tol) * the longest bond distance is considered to be the threshold length for a bond to exist. (1 - tol) * the shortest bond distance is considered to be the shortest possible bond length Defaults to 0.2. default_bl: If a particular type of bond does not exist, use this bond length as a default value (bond order = 1). If None, a ValueError will be thrown. Returns: Float value of bond order. For example, for C-C bond in benzene, return 1.7. """ sp1 = list(self.site1.species.keys())[0] sp2 = list(self.site2.species.keys())[0] dist = self.site1.distance(self.site2) return get_bond_order(sp1, sp2, dist, tol, default_bl)
The bond order according to the distance between the two sites Args: tol (float): Relative tolerance to test. (1 + tol) * the longest bond distance is considered to be the threshold length for a bond to exist. (1 - tol) * the shortest bond distance is considered to be the shortest possible bond length Defaults to 0.2. default_bl: If a particular type of bond does not exist, use this bond length as a default value (bond order = 1). If None, a ValueError will be thrown. Returns: Float value of bond order. For example, for C-C bond in benzene, return 1.7.
Below is the instruction that describes the task: ### Input: The bond order according to the distance between the two sites Args: tol (float): Relative tolerance to test. (1 + tol) * the longest bond distance is considered to be the threshold length for a bond to exist. (1 - tol) * the shortest bond distance is considered to be the shortest possible bond length Defaults to 0.2. default_bl: If a particular type of bond does not exist, use this bond length as a default value (bond order = 1). If None, a ValueError will be thrown. Returns: Float value of bond order. For example, for C-C bond in benzene, return 1.7. ### Response: def get_bond_order(self, tol=0.2, default_bl=None): """ The bond order according to the distance between the two sites Args: tol (float): Relative tolerance to test. (1 + tol) * the longest bond distance is considered to be the threshold length for a bond to exist. (1 - tol) * the shortest bond distance is considered to be the shortest possible bond length Defaults to 0.2. default_bl: If a particular type of bond does not exist, use this bond length as a default value (bond order = 1). If None, a ValueError will be thrown. Returns: Float value of bond order. For example, for C-C bond in benzene, return 1.7. """ sp1 = list(self.site1.species.keys())[0] sp2 = list(self.site2.species.keys())[0] dist = self.site1.distance(self.site2) return get_bond_order(sp1, sp2, dist, tol, default_bl)
def _get_location_list(interval_bed): """Retrieve list of locations to analyze from input BED file. """ import pybedtools regions = collections.OrderedDict() for region in pybedtools.BedTool(interval_bed): regions[str(region.chrom)] = None return regions.keys()
Retrieve list of locations to analyze from input BED file.
Below is the instruction that describes the task: ### Input: Retrieve list of locations to analyze from input BED file. ### Response: def _get_location_list(interval_bed): """Retrieve list of locations to analyze from input BED file. """ import pybedtools regions = collections.OrderedDict() for region in pybedtools.BedTool(interval_bed): regions[str(region.chrom)] = None return regions.keys()
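Using an OrderedDict keyed by chromosome name is an order-preserving de-duplication trick (on Python 3.7+ a plain dict would behave the same); the trick in isolation, with a list of strings standing in for the BED records:

```python
import collections

def unique_in_order(items):
    """De-duplicate while keeping first-seen order, as the BED reader does."""
    seen = collections.OrderedDict()
    for item in items:
        seen[str(item)] = None   # value is irrelevant; keys carry the data
    return list(seen.keys())
```

The original returns `regions.keys()` directly, which on Python 3 is a view; wrapping it in `list()` as here makes the return type explicit.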
def build_chain(self, source, chain): """ Build markov chain from source on top of existing chain Args: source: iterable which will be used to build chain chain: MarkovChain in currently loaded shelve file that will be extended by source """ for group in WalkByGroup(source, chain.order+1): pre = group[:-1] res = group[-1] if pre not in chain.content: chain.content[pre] = {res: 1} else: if res not in chain.content[pre]: chain.content[pre][res] = 1 else: chain.content[pre][res] += 1 chain.decache()
Build markov chain from source on top of existing chain Args: source: iterable which will be used to build chain chain: MarkovChain in currently loaded shelve file that will be extended by source
Below is the instruction that describes the task: ### Input: Build markov chain from source on top of existing chain Args: source: iterable which will be used to build chain chain: MarkovChain in currently loaded shelve file that will be extended by source ### Response: def build_chain(self, source, chain): """ Build markov chain from source on top of existing chain Args: source: iterable which will be used to build chain chain: MarkovChain in currently loaded shelve file that will be extended by source """ for group in WalkByGroup(source, chain.order+1): pre = group[:-1] res = group[-1] if pre not in chain.content: chain.content[pre] = {res: 1} else: if res not in chain.content[pre]: chain.content[pre][res] = 1 else: chain.content[pre][res] += 1 chain.decache()
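`WalkByGroup` is not shown, but assuming it slides a window of `order + 1` items over the source, the counting logic amounts to tallying successor frequencies per prefix tuple. A self-contained sketch with a plain dict in place of the shelve-backed chain:

```python
def build_counts(tokens, order=1):
    """Count successors for each `order`-length prefix tuple."""
    counts = {}
    for i in range(len(tokens) - order):
        group = tuple(tokens[i:i + order + 1])   # window of order+1 items
        pre, res = group[:-1], group[-1]         # prefix tuple and successor
        counts.setdefault(pre, {})
        counts[pre][res] = counts[pre].get(res, 0) + 1
    return counts

counts = build_counts("abab")   # iterating a string yields characters
```

Normalizing each inner dict by its total would turn the counts into the transition probabilities a Markov chain samples from.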
def filter_by(self, values, column_name, exclude=False): """ Filter an SFrame by values inside an iterable object. Result is an SFrame that only includes (or excludes) the rows that have a column with the given ``column_name`` which holds one of the values in the given ``values`` :class:`~turicreate.SArray`. If ``values`` is not an SArray, we attempt to convert it to one before filtering. Parameters ---------- values : SArray | list | numpy.ndarray | pandas.Series | str The values to use to filter the SFrame. The resulting SFrame will only include rows that have one of these values in the given column. column_name : str The column of the SFrame to match with the given `values`. exclude : bool If True, the result SFrame will contain all rows EXCEPT those that have one of ``values`` in ``column_name``. Returns ------- out : SFrame The filtered SFrame. Examples -------- >>> sf = turicreate.SFrame({'id': [1, 2, 3, 4], ... 'animal_type': ['dog', 'cat', 'cow', 'horse'], ... 'name': ['bob', 'jim', 'jimbob', 'bobjim']}) >>> household_pets = ['cat', 'hamster', 'dog', 'fish', 'bird', 'snake'] >>> sf.filter_by(household_pets, 'animal_type') +-------------+----+------+ | animal_type | id | name | +-------------+----+------+ | dog | 1 | bob | | cat | 2 | jim | +-------------+----+------+ [2 rows x 3 columns] >>> sf.filter_by(household_pets, 'animal_type', exclude=True) +-------------+----+--------+ | animal_type | id | name | +-------------+----+--------+ | horse | 4 | bobjim | | cow | 3 | jimbob | +-------------+----+--------+ [2 rows x 3 columns] """ if type(column_name) is not str: raise TypeError("Must pass a str as column_name") existing_columns = self.column_names() if column_name not in existing_columns: raise KeyError("Column '" + column_name + "' not in SFrame.") if type(values) is not SArray: # If we were given a single element, try to put in list and convert # to SArray if not _is_non_string_iterable(values): values = [values] values = SArray(values) value_sf = 
SFrame() value_sf.add_column(values, column_name, inplace=True) existing_type = self.column_types()[self.column_names().index(column_name)] given_type = value_sf.column_types()[0] if given_type != existing_type: raise TypeError("Type of given values does not match type of column '" + column_name + "' in SFrame.") # Make sure the values list has unique values, or else join will not # filter. value_sf = value_sf.groupby(column_name, {}) with cython_context(): if exclude: id_name = "id" # Make sure this name is unique so we know what to remove in # the result while id_name in existing_columns: id_name += "1" value_sf = value_sf.add_row_number(id_name) tmp = SFrame(_proxy=self.__proxy__.join(value_sf.__proxy__, 'left', {column_name:column_name})) ret_sf = tmp[tmp[id_name] == None] del ret_sf[id_name] return ret_sf else: return SFrame(_proxy=self.__proxy__.join(value_sf.__proxy__, 'inner', {column_name:column_name}))
Filter an SFrame by values inside an iterable object. Result is an SFrame that only includes (or excludes) the rows that have a column with the given ``column_name`` which holds one of the values in the given ``values`` :class:`~turicreate.SArray`. If ``values`` is not an SArray, we attempt to convert it to one before filtering. Parameters ---------- values : SArray | list | numpy.ndarray | pandas.Series | str The values to use to filter the SFrame. The resulting SFrame will only include rows that have one of these values in the given column. column_name : str The column of the SFrame to match with the given `values`. exclude : bool If True, the result SFrame will contain all rows EXCEPT those that have one of ``values`` in ``column_name``. Returns ------- out : SFrame The filtered SFrame. Examples -------- >>> sf = turicreate.SFrame({'id': [1, 2, 3, 4], ... 'animal_type': ['dog', 'cat', 'cow', 'horse'], ... 'name': ['bob', 'jim', 'jimbob', 'bobjim']}) >>> household_pets = ['cat', 'hamster', 'dog', 'fish', 'bird', 'snake'] >>> sf.filter_by(household_pets, 'animal_type') +-------------+----+------+ | animal_type | id | name | +-------------+----+------+ | dog | 1 | bob | | cat | 2 | jim | +-------------+----+------+ [2 rows x 3 columns] >>> sf.filter_by(household_pets, 'animal_type', exclude=True) +-------------+----+--------+ | animal_type | id | name | +-------------+----+--------+ | horse | 4 | bobjim | | cow | 3 | jimbob | +-------------+----+--------+ [2 rows x 3 columns]
Below is the instruction that describes the task: ### Input: Filter an SFrame by values inside an iterable object. Result is an SFrame that only includes (or excludes) the rows that have a column with the given ``column_name`` which holds one of the values in the given ``values`` :class:`~turicreate.SArray`. If ``values`` is not an SArray, we attempt to convert it to one before filtering. Parameters ---------- values : SArray | list | numpy.ndarray | pandas.Series | str The values to use to filter the SFrame. The resulting SFrame will only include rows that have one of these values in the given column. column_name : str The column of the SFrame to match with the given `values`. exclude : bool If True, the result SFrame will contain all rows EXCEPT those that have one of ``values`` in ``column_name``. Returns ------- out : SFrame The filtered SFrame. Examples -------- >>> sf = turicreate.SFrame({'id': [1, 2, 3, 4], ... 'animal_type': ['dog', 'cat', 'cow', 'horse'], ... 'name': ['bob', 'jim', 'jimbob', 'bobjim']}) >>> household_pets = ['cat', 'hamster', 'dog', 'fish', 'bird', 'snake'] >>> sf.filter_by(household_pets, 'animal_type') +-------------+----+------+ | animal_type | id | name | +-------------+----+------+ | dog | 1 | bob | | cat | 2 | jim | +-------------+----+------+ [2 rows x 3 columns] >>> sf.filter_by(household_pets, 'animal_type', exclude=True) +-------------+----+--------+ | animal_type | id | name | +-------------+----+--------+ | horse | 4 | bobjim | | cow | 3 | jimbob | +-------------+----+--------+ [2 rows x 3 columns] ### Response: def filter_by(self, values, column_name, exclude=False): """ Filter an SFrame by values inside an iterable object. Result is an SFrame that only includes (or excludes) the rows that have a column with the given ``column_name`` which holds one of the values in the given ``values`` :class:`~turicreate.SArray`. If ``values`` is not an SArray, we attempt to convert it to one before filtering. 
Parameters ---------- values : SArray | list | numpy.ndarray | pandas.Series | str The values to use to filter the SFrame. The resulting SFrame will only include rows that have one of these values in the given column. column_name : str The column of the SFrame to match with the given `values`. exclude : bool If True, the result SFrame will contain all rows EXCEPT those that have one of ``values`` in ``column_name``. Returns ------- out : SFrame The filtered SFrame. Examples -------- >>> sf = turicreate.SFrame({'id': [1, 2, 3, 4], ... 'animal_type': ['dog', 'cat', 'cow', 'horse'], ... 'name': ['bob', 'jim', 'jimbob', 'bobjim']}) >>> household_pets = ['cat', 'hamster', 'dog', 'fish', 'bird', 'snake'] >>> sf.filter_by(household_pets, 'animal_type') +-------------+----+------+ | animal_type | id | name | +-------------+----+------+ | dog | 1 | bob | | cat | 2 | jim | +-------------+----+------+ [2 rows x 3 columns] >>> sf.filter_by(household_pets, 'animal_type', exclude=True) +-------------+----+--------+ | animal_type | id | name | +-------------+----+--------+ | horse | 4 | bobjim | | cow | 3 | jimbob | +-------------+----+--------+ [2 rows x 3 columns] """ if type(column_name) is not str: raise TypeError("Must pass a str as column_name") existing_columns = self.column_names() if column_name not in existing_columns: raise KeyError("Column '" + column_name + "' not in SFrame.") if type(values) is not SArray: # If we were given a single element, try to put in list and convert # to SArray if not _is_non_string_iterable(values): values = [values] values = SArray(values) value_sf = SFrame() value_sf.add_column(values, column_name, inplace=True) existing_type = self.column_types()[self.column_names().index(column_name)] given_type = value_sf.column_types()[0] if given_type != existing_type: raise TypeError("Type of given values does not match type of column '" + column_name + "' in SFrame.") # Make sure the values list has unique values, or else join will not # filter. 
value_sf = value_sf.groupby(column_name, {}) with cython_context(): if exclude: id_name = "id" # Make sure this name is unique so we know what to remove in # the result while id_name in existing_columns: id_name += "1" value_sf = value_sf.add_row_number(id_name) tmp = SFrame(_proxy=self.__proxy__.join(value_sf.__proxy__, 'left', {column_name:column_name})) ret_sf = tmp[tmp[id_name] == None] del ret_sf[id_name] return ret_sf else: return SFrame(_proxy=self.__proxy__.join(value_sf.__proxy__, 'inner', {column_name:column_name}))
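Stripped of the SFrame join machinery (which implements the include case as an inner join and the exclude case as a left join plus null filter), filter_by is membership filtering on one column with an optional complement. The same semantics with plain dict rows:

```python
def filter_by(rows, values, column, exclude=False):
    """Keep rows whose `column` value is (or, with exclude=True, is not) in `values`."""
    wanted = set(values)
    # (in wanted) != exclude keeps matches when exclude is False,
    # and keeps non-matches when exclude is True.
    return [r for r in rows if (r[column] in wanted) != exclude]

rows = [
    {"id": 1, "animal_type": "dog"},
    {"id": 2, "animal_type": "cat"},
    {"id": 3, "animal_type": "cow"},
]
pets = filter_by(rows, ["cat", "dog"], "animal_type")
others = filter_by(rows, ["cat", "dog"], "animal_type", exclude=True)
```

Deduplicating `values` into a set mirrors the original's groupby step, which exists so the join does not multiply rows.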
def wrap(f_df, xref, size=1): """ Memoizes an objective + gradient function, and splits it into two functions that return just the objective and gradient, respectively. Parameters ---------- f_df : function Must be unary (takes a single argument) xref : list, dict, or array_like The form of the parameters size : int, optional Size of the cache (Default=1) """ memoized_f_df = lrucache(lambda x: f_df(restruct(x, xref)), size) objective = compose(first, memoized_f_df) gradient = compose(destruct, second, memoized_f_df) return objective, gradient
Memoizes an objective + gradient function, and splits it into two functions that return just the objective and gradient, respectively. Parameters ---------- f_df : function Must be unary (takes a single argument) xref : list, dict, or array_like The form of the parameters size : int, optional Size of the cache (Default=1)
Below is the instruction that describes the task: ### Input: Memoizes an objective + gradient function, and splits it into two functions that return just the objective and gradient, respectively. Parameters ---------- f_df : function Must be unary (takes a single argument) xref : list, dict, or array_like The form of the parameters size : int, optional Size of the cache (Default=1) ### Response: def wrap(f_df, xref, size=1): """ Memoizes an objective + gradient function, and splits it into two functions that return just the objective and gradient, respectively. Parameters ---------- f_df : function Must be unary (takes a single argument) xref : list, dict, or array_like The form of the parameters size : int, optional Size of the cache (Default=1) """ memoized_f_df = lrucache(lambda x: f_df(restruct(x, xref)), size) objective = compose(first, memoized_f_df) gradient = compose(destruct, second, memoized_f_df) return objective, gradient
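The memoize-then-split trick means that asking for the gradient right after the objective at the same point costs nothing extra: both selectors hit the same cached (objective, gradient) pair. A sketch using functools.lru_cache in place of the library's lrucache/compose helpers, with a call counter to show the sharing:

```python
import functools

calls = []

def f_df(x):
    """Toy joint objective + gradient of f(x) = x**2."""
    calls.append(x)            # record every real evaluation
    return x ** 2, 2 * x

memoized = functools.lru_cache(maxsize=1)(f_df)

def objective(x):
    return memoized(x)[0]

def gradient(x):
    return memoized(x)[1]

obj = objective(3.0)   # evaluates f_df once
grad = gradient(3.0)   # served from the cache, no second evaluation
```

A cache size of 1 suffices for optimizers that query objective and gradient at the same point before moving on, which is the default in the original.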
def unhandle(self, handler): """ unregister handler (removing callback function) """ with self._hlock: try: self._handler_list.remove(handler) except ValueError: raise ValueError("Handler is not handling this event, so cannot unhandle it.") return self
unregister handler (removing callback function)
Below is the instruction that describes the task: ### Input: unregister handler (removing callback function) ### Response: def unhandle(self, handler): """ unregister handler (removing callback function) """ with self._hlock: try: self._handler_list.remove(handler) except ValueError: raise ValueError("Handler is not handling this event, so cannot unhandle it.") return self
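unhandle translates list.remove's bare ValueError into a clearer message and returns self so calls can be chained. A minimal self-contained event class showing both, assuming (as the `_hlock` name suggests) a threading.Lock guards the handler list:

```python
import threading

class Event:
    def __init__(self):
        self._hlock = threading.Lock()
        self._handler_list = []

    def handle(self, handler):
        """Register a callback; returns self for chaining."""
        with self._hlock:
            self._handler_list.append(handler)
        return self

    def unhandle(self, handler):
        """Unregister a callback; returns self for chaining."""
        with self._hlock:
            try:
                self._handler_list.remove(handler)
            except ValueError:
                raise ValueError("Handler is not handling this event, "
                                 "so cannot unhandle it.")
        return self

ev = Event()
cb = lambda: None
ev.handle(cb).unhandle(cb)   # chaining works because both return self
```

Doing the remove under the lock keeps registration and unregistration safe against concurrent callers.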
def _render_objects(self, items, attributes=None, datatype='object'): """Renders an HTML table with the specified list of objects. Args: items: the iterable collection of objects to render. attributes: the optional list of properties or keys to render. datatype: the type of data; one of 'object' for Python objects, 'dict' for a list of dictionaries, or 'chartdata' for Google chart data. """ if not items: return if datatype == 'chartdata': if not attributes: attributes = [items['cols'][i]['label'] for i in range(0, len(items['cols']))] items = items['rows'] indices = {attributes[i]: i for i in range(0, len(attributes))} num_segments = len(self._segments) self._segments.append('<table>') first = True for o in items: if first: first = False if datatype == 'dict' and not attributes: attributes = list(o.keys()) if attributes is not None: self._segments.append('<tr>') for attr in attributes: self._segments.append('<th>%s</th>' % attr) self._segments.append('</tr>') self._segments.append('<tr>') if attributes is None: self._segments.append('<td>%s</td>' % HtmlBuilder._format(o)) else: for attr in attributes: if datatype == 'dict': self._segments.append('<td>%s</td>' % HtmlBuilder._format(o.get(attr, None), nbsp=True)) elif datatype == 'chartdata': self._segments.append('<td>%s</td>' % HtmlBuilder._format(o['c'][indices[attr]]['v'], nbsp=True)) else: self._segments.append('<td>%s</td>' % HtmlBuilder._format(o.__getattribute__(attr), nbsp=True)) self._segments.append('</tr>') self._segments.append('</table>') if first: # The table was empty; drop it from the segments. self._segments = self._segments[:num_segments]
Renders an HTML table with the specified list of objects. Args: items: the iterable collection of objects to render. attributes: the optional list of properties or keys to render. datatype: the type of data; one of 'object' for Python objects, 'dict' for a list of dictionaries, or 'chartdata' for Google chart data.
Below is the instruction that describes the task: ### Input: Renders an HTML table with the specified list of objects. Args: items: the iterable collection of objects to render. attributes: the optional list of properties or keys to render. datatype: the type of data; one of 'object' for Python objects, 'dict' for a list of dictionaries, or 'chartdata' for Google chart data. ### Response: def _render_objects(self, items, attributes=None, datatype='object'): """Renders an HTML table with the specified list of objects. Args: items: the iterable collection of objects to render. attributes: the optional list of properties or keys to render. datatype: the type of data; one of 'object' for Python objects, 'dict' for a list of dictionaries, or 'chartdata' for Google chart data. """ if not items: return if datatype == 'chartdata': if not attributes: attributes = [items['cols'][i]['label'] for i in range(0, len(items['cols']))] items = items['rows'] indices = {attributes[i]: i for i in range(0, len(attributes))} num_segments = len(self._segments) self._segments.append('<table>') first = True for o in items: if first: first = False if datatype == 'dict' and not attributes: attributes = list(o.keys()) if attributes is not None: self._segments.append('<tr>') for attr in attributes: self._segments.append('<th>%s</th>' % attr) self._segments.append('</tr>') self._segments.append('<tr>') if attributes is None: self._segments.append('<td>%s</td>' % HtmlBuilder._format(o)) else: for attr in attributes: if datatype == 'dict': self._segments.append('<td>%s</td>' % HtmlBuilder._format(o.get(attr, None), nbsp=True)) elif datatype == 'chartdata': self._segments.append('<td>%s</td>' % HtmlBuilder._format(o['c'][indices[attr]]['v'], nbsp=True)) else: self._segments.append('<td>%s</td>' % HtmlBuilder._format(o.__getattribute__(attr), nbsp=True)) self._segments.append('</tr>') self._segments.append('</table>') if first: # The table was empty; drop it from the segments. self._segments = self._segments[:num_segments]
def all_domain_events(self): """ Yields all domain events in the event store. """ for originator_id in self.record_manager.all_sequence_ids(): for domain_event in self.get_domain_events(originator_id=originator_id, page_size=100): yield domain_event
Yields all domain events in the event store.
Below is the instruction that describes the task: ### Input: Yields all domain events in the event store. ### Response: def all_domain_events(self): """ Yields all domain events in the event store. """ for originator_id in self.record_manager.all_sequence_ids(): for domain_event in self.get_domain_events(originator_id=originator_id, page_size=100): yield domain_event
def summarize(self, sentences=0, chars=0): """ Summarize page either by number of sentences, chars, or first section (**default**) Args: sentences (int): Number of sentences to use in summary \ (first `x` sentences) chars (int): Number of characters to use in summary \ (first `x` characters) Returns: str: The summary of the MediaWiki page Note: Precedence for parameters: sentences then chars; if both are \ 0 then the entire first section is returned """ query_params = {"prop": "extracts", "explaintext": "", "titles": self.title} if sentences: query_params["exsentences"] = 10 if sentences > 10 else sentences elif chars: query_params["exchars"] = 1 if chars < 1 else chars else: query_params["exintro"] = "" request = self.mediawiki.wiki_request(query_params) summary = request["query"]["pages"][self.pageid]["extract"] return summary
Summarize page either by number of sentences, chars, or first section (**default**) Args: sentences (int): Number of sentences to use in summary \ (first `x` sentences) chars (int): Number of characters to use in summary \ (first `x` characters) Returns: str: The summary of the MediaWiki page Note: Precedence for parameters: sentences then chars; if both are \ 0 then the entire first section is returned
Below is the instruction that describes the task: ### Input: Summarize page either by number of sentences, chars, or first section (**default**) Args: sentences (int): Number of sentences to use in summary \ (first `x` sentences) chars (int): Number of characters to use in summary \ (first `x` characters) Returns: str: The summary of the MediaWiki page Note: Precedence for parameters: sentences then chars; if both are \ 0 then the entire first section is returned ### Response: def summarize(self, sentences=0, chars=0): """ Summarize page either by number of sentences, chars, or first section (**default**) Args: sentences (int): Number of sentences to use in summary \ (first `x` sentences) chars (int): Number of characters to use in summary \ (first `x` characters) Returns: str: The summary of the MediaWiki page Note: Precedence for parameters: sentences then chars; if both are \ 0 then the entire first section is returned """ query_params = {"prop": "extracts", "explaintext": "", "titles": self.title} if sentences: query_params["exsentences"] = 10 if sentences > 10 else sentences elif chars: query_params["exchars"] = 1 if chars < 1 else chars else: query_params["exintro"] = "" request = self.mediawiki.wiki_request(query_params) summary = request["query"]["pages"][self.pageid]["extract"] return summary
def _check_data(self): """Ensure that the data in the cache is valid. If it's invalid, the cache is wiped. """ if not self.cache_available(): return parsed = self._parse_data() _LOGGER.debug('Received new data from sensor: Temp=%.1f, Humidity=%.1f', parsed[MI_TEMPERATURE], parsed[MI_HUMIDITY]) if parsed[MI_HUMIDITY] > 100: # humidity over 100 percent self.clear_cache() return if parsed[MI_TEMPERATURE] == 0: # temperature reading of 0 is invalid self.clear_cache() return
Ensure that the data in the cache is valid. If it's invalid, the cache is wiped.
Below is the instruction that describes the task: ### Input: Ensure that the data in the cache is valid. If it's invalid, the cache is wiped. ### Response: def _check_data(self): """Ensure that the data in the cache is valid. If it's invalid, the cache is wiped. """ if not self.cache_available(): return parsed = self._parse_data() _LOGGER.debug('Received new data from sensor: Temp=%.1f, Humidity=%.1f', parsed[MI_TEMPERATURE], parsed[MI_HUMIDITY]) if parsed[MI_HUMIDITY] > 100: # humidity over 100 percent self.clear_cache() return if parsed[MI_TEMPERATURE] == 0: # temperature reading of 0 is invalid self.clear_cache() return
def determine_newline(data): """ Looks for a newline character in bytestring parameter 'data'. Currently only looks for strings '\r\n', '\n'. If '\n' is found at the first position of the string, '\n' is returned. Parameters: data (bytes): The data to be searched Returns: None: If no newline is found One of '\n', '\r\n': whichever is found first """ line_end_pos = data.find(b'\n') if line_end_pos == -1: return None elif line_end_pos == 0: return b'\n' prev_char = data[line_end_pos - 1] return b'\r\n' if (prev_char == b'\r'[0]) else b'\n'
Looks for a newline character in bytestring parameter 'data'. Currently only looks for strings '\r\n', '\n'. If '\n' is found at the first position of the string, '\n' is returned. Parameters: data (bytes): The data to be searched Returns: None: If no newline is found One of '\n', '\r\n': whichever is found first
Below is the instruction that describes the task: ### Input: Looks for a newline character in bytestring parameter 'data'. Currently only looks for strings '\r\n', '\n'. If '\n' is found at the first position of the string, '\n' is returned. Parameters: data (bytes): The data to be searched Returns: None: If no newline is found One of '\n', '\r\n': whichever is found first ### Response: def determine_newline(data): """ Looks for a newline character in bytestring parameter 'data'. Currently only looks for strings '\r\n', '\n'. If '\n' is found at the first position of the string, '\n' is returned. Parameters: data (bytes): The data to be searched Returns: None: If no newline is found One of '\n', '\r\n': whichever is found first """ line_end_pos = data.find(b'\n') if line_end_pos == -1: return None elif line_end_pos == 0: return b'\n' prev_char = data[line_end_pos - 1] return b'\r\n' if (prev_char == b'\r'[0]) else b'\n'
def remove_all_timers(self): """Remove all waiting timers and terminate any blocking threads.""" with self.lock: if self.rtimer is not None: self.rtimer.cancel() self.timers = {} self.heap = [] self.rtimer = None self.expiring = False
Remove all waiting timers and terminate any blocking threads.
Below is the instruction that describes the task: ### Input: Remove all waiting timers and terminate any blocking threads. ### Response: def remove_all_timers(self): """Remove all waiting timers and terminate any blocking threads.""" with self.lock: if self.rtimer is not None: self.rtimer.cancel() self.timers = {} self.heap = [] self.rtimer = None self.expiring = False
def is_valid(obj: JSGValidateable, log: Optional[Union[TextIO, Logger]] = None) -> bool: """ Determine whether obj is valid :param obj: Object to validate :param log: Logger to record validation failures. If absent, no information is recorded """ return obj._is_valid(log)
Determine whether obj is valid :param obj: Object to validate :param log: Logger to record validation failures. If absent, no information is recorded
Below is the instruction that describes the task: ### Input: Determine whether obj is valid :param obj: Object to validate :param log: Logger to record validation failures. If absent, no information is recorded ### Response: def is_valid(obj: JSGValidateable, log: Optional[Union[TextIO, Logger]] = None) -> bool: """ Determine whether obj is valid :param obj: Object to validate :param log: Logger to record validation failures. If absent, no information is recorded """ return obj._is_valid(log)
def _get_format_from_style(self, token, style): """ Returns a QTextCharFormat for token by reading a Pygments style. """ result = QtGui.QTextCharFormat() items = list(style.style_for_token(token).items()) for key, value in items: if value is None and key == 'color': # make sure to use a default visible color for the foreground # brush value = drift_color(self.background, 1000).name() if value: if key == 'color': result.setForeground(self._get_brush(value)) elif key == 'bgcolor': result.setBackground(self._get_brush(value)) elif key == 'bold': result.setFontWeight(QtGui.QFont.Bold) elif key == 'italic': result.setFontItalic(value) elif key == 'underline': result.setUnderlineStyle( QtGui.QTextCharFormat.SingleUnderline) elif key == 'sans': result.setFontStyleHint(QtGui.QFont.SansSerif) elif key == 'roman': result.setFontStyleHint(QtGui.QFont.Times) elif key == 'mono': result.setFontStyleHint(QtGui.QFont.TypeWriter) if token in [Token.Literal.String, Token.Literal.String.Doc, Token.Comment]: # mark strings, comments and docstrings regions for further queries result.setObjectType(result.UserObject) return result
Returns a QTextCharFormat for token by reading a Pygments style.
Below is the instruction that describes the task: ### Input: Returns a QTextCharFormat for token by reading a Pygments style. ### Response: def _get_format_from_style(self, token, style): """ Returns a QTextCharFormat for token by reading a Pygments style. """ result = QtGui.QTextCharFormat() items = list(style.style_for_token(token).items()) for key, value in items: if value is None and key == 'color': # make sure to use a default visible color for the foreground # brush value = drift_color(self.background, 1000).name() if value: if key == 'color': result.setForeground(self._get_brush(value)) elif key == 'bgcolor': result.setBackground(self._get_brush(value)) elif key == 'bold': result.setFontWeight(QtGui.QFont.Bold) elif key == 'italic': result.setFontItalic(value) elif key == 'underline': result.setUnderlineStyle( QtGui.QTextCharFormat.SingleUnderline) elif key == 'sans': result.setFontStyleHint(QtGui.QFont.SansSerif) elif key == 'roman': result.setFontStyleHint(QtGui.QFont.Times) elif key == 'mono': result.setFontStyleHint(QtGui.QFont.TypeWriter) if token in [Token.Literal.String, Token.Literal.String.Doc, Token.Comment]: # mark strings, comments and docstrings regions for further queries result.setObjectType(result.UserObject) return result
def get_requirement_info(dist): # type: (Distribution) -> RequirementInfo """ Compute and return values (req, editable, comments) for use in FrozenRequirement.from_dist(). """ if not dist_is_editable(dist): return (None, False, []) location = os.path.normcase(os.path.abspath(dist.location)) from pipenv.patched.notpip._internal.vcs import vcs, RemoteNotFoundError vc_type = vcs.get_backend_type(location) if not vc_type: req = dist.as_requirement() logger.debug( 'No VCS found for editable requirement {!r} in: {!r}', req, location, ) comments = [ '# Editable install with no version control ({})'.format(req) ] return (location, True, comments) try: req = vc_type.get_src_requirement(location, dist.project_name) except RemoteNotFoundError: req = dist.as_requirement() comments = [ '# Editable {} install with no remote ({})'.format( vc_type.__name__, req, ) ] return (location, True, comments) except BadCommand: logger.warning( 'cannot determine version of editable source in %s ' '(%s command not found in path)', location, vc_type.name, ) return (None, True, []) except InstallationError as exc: logger.warning( "Error when trying to get requirement for VCS system %s, " "falling back to uneditable format", exc ) else: if req is not None: return (req, True, []) logger.warning( 'Could not determine repository location of %s', location ) comments = ['## !! Could not determine repository location'] return (None, False, comments)
Compute and return values (req, editable, comments) for use in FrozenRequirement.from_dist().
Below is the instruction that describes the task: ### Input: Compute and return values (req, editable, comments) for use in FrozenRequirement.from_dist(). ### Response: def get_requirement_info(dist): # type: (Distribution) -> RequirementInfo """ Compute and return values (req, editable, comments) for use in FrozenRequirement.from_dist(). """ if not dist_is_editable(dist): return (None, False, []) location = os.path.normcase(os.path.abspath(dist.location)) from pipenv.patched.notpip._internal.vcs import vcs, RemoteNotFoundError vc_type = vcs.get_backend_type(location) if not vc_type: req = dist.as_requirement() logger.debug( 'No VCS found for editable requirement {!r} in: {!r}', req, location, ) comments = [ '# Editable install with no version control ({})'.format(req) ] return (location, True, comments) try: req = vc_type.get_src_requirement(location, dist.project_name) except RemoteNotFoundError: req = dist.as_requirement() comments = [ '# Editable {} install with no remote ({})'.format( vc_type.__name__, req, ) ] return (location, True, comments) except BadCommand: logger.warning( 'cannot determine version of editable source in %s ' '(%s command not found in path)', location, vc_type.name, ) return (None, True, []) except InstallationError as exc: logger.warning( "Error when trying to get requirement for VCS system %s, " "falling back to uneditable format", exc ) else: if req is not None: return (req, True, []) logger.warning( 'Could not determine repository location of %s', location ) comments = ['## !! Could not determine repository location'] return (None, False, comments)
def validate_deployments_tx_receipt( deployments: Dict[str, Any], w3: Web3, allow_missing_data: bool = False ) -> None: """ Validate that address and block hash found in deployment data match what is found on-chain. :allow_missing_data: by default, enforces validation of address and blockHash. """ # todo: provide hook to lazily look up tx receipt via binary search if missing data for name, data in deployments.items(): if "transaction" in data: tx_hash = data["transaction"] tx_receipt = w3.eth.getTransactionReceipt(tx_hash) # tx_address will be None if contract created via contract factory tx_address = tx_receipt["contractAddress"] if tx_address is None and allow_missing_data is False: raise ValidationError( "No contract address found in tx receipt. Unable to verify " "address found in tx receipt matches address in manifest's deployment data. " "If this validation is not necessary, please enable `allow_missing_data` arg. " ) if tx_address is not None and not is_same_address( tx_address, data["address"] ): raise ValidationError( f"Error validating tx_receipt for {name} deployment. " f"Address found in manifest's deployment data: {data['address']} " f"Does not match address found on tx_receipt: {tx_address}." ) if "block" in data: if tx_receipt["blockHash"] != to_bytes(hexstr=data["block"]): raise ValidationError( f"Error validating tx_receipt for {name} deployment. " f"Block found in manifest's deployment data: {data['block']} " f"Does not match block found on tx_receipt: {tx_receipt['blockHash']}." ) elif allow_missing_data is False: raise ValidationError( "No block hash found in deployment data. " "Unable to verify block hash on tx receipt. " "If this validation is not necessary, please enable `allow_missing_data` arg." ) elif allow_missing_data is False: raise ValidationError( "No transaction hash found in deployment data. " "Unable to validate tx_receipt. " "If this validation is not necessary, please enable `allow_missing_data` arg." )
Validate that address and block hash found in deployment data match what is found on-chain. :allow_missing_data: by default, enforces validation of address and blockHash.
Below is the instruction that describes the task: ### Input: Validate that address and block hash found in deployment data match what is found on-chain. :allow_missing_data: by default, enforces validation of address and blockHash. ### Response: def validate_deployments_tx_receipt( deployments: Dict[str, Any], w3: Web3, allow_missing_data: bool = False ) -> None: """ Validate that address and block hash found in deployment data match what is found on-chain. :allow_missing_data: by default, enforces validation of address and blockHash. """ # todo: provide hook to lazily look up tx receipt via binary search if missing data for name, data in deployments.items(): if "transaction" in data: tx_hash = data["transaction"] tx_receipt = w3.eth.getTransactionReceipt(tx_hash) # tx_address will be None if contract created via contract factory tx_address = tx_receipt["contractAddress"] if tx_address is None and allow_missing_data is False: raise ValidationError( "No contract address found in tx receipt. Unable to verify " "address found in tx receipt matches address in manifest's deployment data. " "If this validation is not necessary, please enable `allow_missing_data` arg. " ) if tx_address is not None and not is_same_address( tx_address, data["address"] ): raise ValidationError( f"Error validating tx_receipt for {name} deployment. " f"Address found in manifest's deployment data: {data['address']} " f"Does not match address found on tx_receipt: {tx_address}." ) if "block" in data: if tx_receipt["blockHash"] != to_bytes(hexstr=data["block"]): raise ValidationError( f"Error validating tx_receipt for {name} deployment. " f"Block found in manifest's deployment data: {data['block']} " f"Does not match block found on tx_receipt: {tx_receipt['blockHash']}." ) elif allow_missing_data is False: raise ValidationError( "No block hash found in deployment data. " "Unable to verify block hash on tx receipt. " "If this validation is not necessary, please enable `allow_missing_data` arg." ) elif allow_missing_data is False: raise ValidationError( "No transaction hash found in deployment data. " "Unable to validate tx_receipt. " "If this validation is not necessary, please enable `allow_missing_data` arg." )
def Connect(self): '''Connect a device ''' device_path = self.path if device_path not in mockobject.objects: raise dbus.exceptions.DBusException('No such device.', name='org.bluez.Error.NoSuchDevice') device = mockobject.objects[device_path] device.props[AUDIO_IFACE]['State'] = dbus.String("connected", variant_level=1) device.EmitSignal(AUDIO_IFACE, 'PropertyChanged', 'sv', [ 'State', dbus.String("connected", variant_level=1), ]) device.props[DEVICE_IFACE]['Connected'] = dbus.Boolean(True, variant_level=1) device.EmitSignal(DEVICE_IFACE, 'PropertyChanged', 'sv', [ 'Connected', dbus.Boolean(True, variant_level=1), ])
Connect a device
Below is the instruction that describes the task: ### Input: Connect a device ### Response: def Connect(self): '''Connect a device ''' device_path = self.path if device_path not in mockobject.objects: raise dbus.exceptions.DBusException('No such device.', name='org.bluez.Error.NoSuchDevice') device = mockobject.objects[device_path] device.props[AUDIO_IFACE]['State'] = dbus.String("connected", variant_level=1) device.EmitSignal(AUDIO_IFACE, 'PropertyChanged', 'sv', [ 'State', dbus.String("connected", variant_level=1), ]) device.props[DEVICE_IFACE]['Connected'] = dbus.Boolean(True, variant_level=1) device.EmitSignal(DEVICE_IFACE, 'PropertyChanged', 'sv', [ 'Connected', dbus.Boolean(True, variant_level=1), ])
def load_meta_data(self, path=None): """Load meta data of state model from the file system The meta data of the state model is loaded from the file system and stored in the meta property of the model. Existing meta data is removed. Also the meta data of all state elements (data ports, outcomes, etc) are loaded, as those stored in the same file as the meta data of the state. This is either called on the __init__ of a new state model or if a state model for a container state is created, which then calls load_meta_data for all its children. :param str path: Optional file system path to the meta data file. If not given, the path will be derived from the state's path on the filesystem :return: if meta data file was loaded True otherwise False :rtype: bool """ # TODO: for an Execution state this method is called for each hierarchy level again and again, still?? check it! # print("1AbstractState_load_meta_data: ", path, not path) if not path: path = self.state.file_system_path # print("2AbstractState_load_meta_data: ", path) if path is None: self.meta = Vividict({}) return False path_meta_data = os.path.join(path, storage.FILE_NAME_META_DATA) # TODO: Should be removed with next minor release if not os.path.exists(path_meta_data): logger.debug("Because meta data was not found in {0} use backup option {1}" "".format(path_meta_data, os.path.join(path, storage.FILE_NAME_META_DATA_OLD))) path_meta_data = os.path.join(path, storage.FILE_NAME_META_DATA_OLD) # TODO use the following logger message to debug meta data load process and to avoid maybe repetitive loads # if not os.path.exists(path_meta_data): # logger.info("path not found {0}".format(path_meta_data)) try: # print("try to load meta data from {0} for state {1}".format(path_meta_data, self.state)) tmp_meta = storage.load_data_file(path_meta_data) except ValueError as e: # if no element which is newly generated log a warning # if os.path.exists(os.path.dirname(path)): # logger.debug("Because '{1}' meta data of {0} was not loaded properly.".format(self, e)) if not path.startswith(constants.RAFCON_TEMP_PATH_STORAGE) and not os.path.exists(os.path.dirname(path)): logger.debug("Because '{1}' meta data of {0} was not loaded properly.".format(self, e)) tmp_meta = {} # JSON returns a dict, which must be converted to a Vividict tmp_meta = Vividict(tmp_meta) if tmp_meta: self._parse_for_element_meta_data(tmp_meta) # assign the meta data to the state self.meta = tmp_meta self.meta_signal.emit(MetaSignalMsg("load_meta_data", "all", True)) return True else: # print("nothing to parse", tmp_meta) return False
Load meta data of state model from the file system The meta data of the state model is loaded from the file system and stored in the meta property of the model. Existing meta data is removed. Also the meta data of all state elements (data ports, outcomes, etc) are loaded, as those stored in the same file as the meta data of the state. This is either called on the __init__ of a new state model or if a state model for a container state is created, which then calls load_meta_data for all its children. :param str path: Optional file system path to the meta data file. If not given, the path will be derived from the state's path on the filesystem :return: if meta data file was loaded True otherwise False :rtype: bool
Below is the instruction that describes the task: ### Input: Load meta data of state model from the file system The meta data of the state model is loaded from the file system and stored in the meta property of the model. Existing meta data is removed. Also the meta data of all state elements (data ports, outcomes, etc) are loaded, as those stored in the same file as the meta data of the state. This is either called on the __init__ of a new state model or if a state model for a container state is created, which then calls load_meta_data for all its children. :param str path: Optional file system path to the meta data file. If not given, the path will be derived from the state's path on the filesystem :return: if meta data file was loaded True otherwise False :rtype: bool ### Response: def load_meta_data(self, path=None): """Load meta data of state model from the file system The meta data of the state model is loaded from the file system and stored in the meta property of the model. Existing meta data is removed. Also the meta data of all state elements (data ports, outcomes, etc) are loaded, as those stored in the same file as the meta data of the state. This is either called on the __init__ of a new state model or if a state model for a container state is created, which then calls load_meta_data for all its children. :param str path: Optional file system path to the meta data file. If not given, the path will be derived from the state's path on the filesystem :return: if meta data file was loaded True otherwise False :rtype: bool """ # TODO: for an Execution state this method is called for each hierarchy level again and again, still?? check it! # print("1AbstractState_load_meta_data: ", path, not path) if not path: path = self.state.file_system_path # print("2AbstractState_load_meta_data: ", path) if path is None: self.meta = Vividict({}) return False path_meta_data = os.path.join(path, storage.FILE_NAME_META_DATA) # TODO: Should be removed with next minor release if not os.path.exists(path_meta_data): logger.debug("Because meta data was not found in {0} use backup option {1}" "".format(path_meta_data, os.path.join(path, storage.FILE_NAME_META_DATA_OLD))) path_meta_data = os.path.join(path, storage.FILE_NAME_META_DATA_OLD) # TODO use the following logger message to debug meta data load process and to avoid maybe repetitive loads # if not os.path.exists(path_meta_data): # logger.info("path not found {0}".format(path_meta_data)) try: # print("try to load meta data from {0} for state {1}".format(path_meta_data, self.state)) tmp_meta = storage.load_data_file(path_meta_data) except ValueError as e: # if no element which is newly generated log a warning # if os.path.exists(os.path.dirname(path)): # logger.debug("Because '{1}' meta data of {0} was not loaded properly.".format(self, e)) if not path.startswith(constants.RAFCON_TEMP_PATH_STORAGE) and not os.path.exists(os.path.dirname(path)): logger.debug("Because '{1}' meta data of {0} was not loaded properly.".format(self, e)) tmp_meta = {} # JSON returns a dict, which must be converted to a Vividict tmp_meta = Vividict(tmp_meta) if tmp_meta: self._parse_for_element_meta_data(tmp_meta) # assign the meta data to the state self.meta = tmp_meta self.meta_signal.emit(MetaSignalMsg("load_meta_data", "all", True)) return True else: # print("nothing to parse", tmp_meta) return False
def format_json(json_object, indent): """ Pretty-format json data """ indent_str = "\n" + " " * indent json_str = json.dumps(json_object, indent=2, default=serialize_json_var) return indent_str.join(json_str.split("\n"))
Pretty-format json data
Below is the instruction that describes the task: ### Input: Pretty-format json data ### Response: def format_json(json_object, indent): """ Pretty-format json data """ indent_str = "\n" + " " * indent json_str = json.dumps(json_object, indent=2, default=serialize_json_var) return indent_str.join(json_str.split("\n"))
def generate_sentence(self, chain): """ !DEMO! Demo function that shows how to generate a simple sentence starting with uppercase letter without length limit. Args: chain: MarkovChain that will be used to generate sentence """ def weighted_choice(choices): total_weight = sum(weight for val, weight in choices) rand = random.uniform(0, total_weight) upto = 0 for val, weight in choices: if upto + weight >= rand: return val upto += weight sentence = list(random.choice(chain.startwords)) while not sentence[-1][-1] in ['.', '?', '!']: sentence.append( weighted_choice( chain.content[tuple(sentence[-2:])].items() ) ) return ' '.join(sentence)
!DEMO! Demo function that shows how to generate a simple sentence starting with uppercase letter without length limit. Args: chain: MarkovChain that will be used to generate sentence
Below is the instruction that describes the task: ### Input: !DEMO! Demo function that shows how to generate a simple sentence starting with uppercase letter without length limit. Args: chain: MarkovChain that will be used to generate sentence ### Response: def generate_sentence(self, chain): """ !DEMO! Demo function that shows how to generate a simple sentence starting with uppercase letter without length limit. Args: chain: MarkovChain that will be used to generate sentence """ def weighted_choice(choices): total_weight = sum(weight for val, weight in choices) rand = random.uniform(0, total_weight) upto = 0 for val, weight in choices: if upto + weight >= rand: return val upto += weight sentence = list(random.choice(chain.startwords)) while not sentence[-1][-1] in ['.', '?', '!']: sentence.append( weighted_choice( chain.content[tuple(sentence[-2:])].items() ) ) return ' '.join(sentence)
def file_set_details(object_id, input_params={}, always_retry=True, **kwargs): """ Invokes the /file-xxxx/setDetails API method. For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Details-and-Links#API-method%3A-%2Fclass-xxxx%2FsetDetails """ return DXHTTPRequest('/%s/setDetails' % object_id, input_params, always_retry=always_retry, **kwargs)
Invokes the /file-xxxx/setDetails API method. For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Details-and-Links#API-method%3A-%2Fclass-xxxx%2FsetDetails
Below is the instruction that describes the task: ### Input: Invokes the /file-xxxx/setDetails API method. For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Details-and-Links#API-method%3A-%2Fclass-xxxx%2FsetDetails ### Response: def file_set_details(object_id, input_params={}, always_retry=True, **kwargs): """ Invokes the /file-xxxx/setDetails API method. For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Details-and-Links#API-method%3A-%2Fclass-xxxx%2FsetDetails """ return DXHTTPRequest('/%s/setDetails' % object_id, input_params, always_retry=always_retry, **kwargs)
def main(): """Command line interface of mpu.""" parser = get_parser() args = parser.parse_args() if hasattr(args, 'func') and args.func: args.func(args) else: parser.print_help()
Command line interface of mpu.
Below is the instruction that describes the task: ### Input: Command line interface of mpu. ### Response: def main(): """Command line interface of mpu.""" parser = get_parser() args = parser.parse_args() if hasattr(args, 'func') and args.func: args.func(args) else: parser.print_help()
def addDatastream(self, pid, dsID, dsLabel=None, mimeType=None, logMessage=None, controlGroup=None, dsLocation=None, altIDs=None, versionable=None, dsState=None, formatURI=None, checksumType=None, checksum=None, content=None): '''Add a new datastream to an existing object. On success, the return response should have a status of 201 Created; if there is an error, the response body includes the error message. :param pid: object pid :param dsID: id for the new datastream :param dsLabel: label for the new datastream (optional) :param mimeType: mimetype for the new datastream (optional) :param logMessage: log message for the object history (optional) :param controlGroup: control group for the new datastream (optional) :param dsLocation: URL where the content should be ingested from :param altIDs: alternate ids (optional) :param versionable: configure datastream versioning (optional) :param dsState: datastream state (optional) :param formatURI: datastream format (optional) :param checksumType: checksum type (optional) :param checksum: checksum (optional) :param content: datastream content, as a file-like object or character data (optional) :rtype: :class:`requests.models.Response` ''' # objects/{pid}/datastreams/NEWDS? [opts]
# content via multipart file in request content, or dsLocation=URI # one of dsLocation or filename must be specified # if checksum is sent without checksum type, Fedora seems to # ignore it (does not error on invalid checksum with no checksum type) if checksum is not None and checksumType is None: warnings.warn('Fedora will ignore the checksum (%s) because no checksum type is specified' \ % checksum) http_args = {} if dsLabel: http_args['dsLabel'] = dsLabel if mimeType: http_args['mimeType'] = mimeType if logMessage: http_args['logMessage'] = logMessage if controlGroup: http_args['controlGroup'] = controlGroup if dsLocation: http_args['dsLocation'] = dsLocation if altIDs: http_args['altIDs'] = altIDs if versionable is not None: http_args['versionable'] = versionable if dsState: http_args['dsState'] = dsState if formatURI: http_args['formatURI'] = formatURI if checksumType: http_args['checksumType'] = checksumType if checksum: http_args['checksum'] = checksum # Added code to match how content is now handled, see modifyDatastream. extra_args = {} # could be a string or a file-like object if content: if hasattr(content, 'read'): # if content is a file-like object, warn if no checksum if not checksum: logger.warning("File was ingested into fedora without a passed checksum for validation, pid was: %s and dsID was: %s.", pid, dsID) extra_args['files'] = {'file': content} else: # fedora wants a multipart file upload; # this seems to work better for handling unicode than # simply sending content via requests data parameter extra_args['files'] = {'file': ('filename', content)} # set content-type header ? url = 'objects/%s/datastreams/%s' % (pid, dsID) return self.post(url, params=http_args, **extra_args)
Add a new datastream to an existing object. On success, the return response should have a status of 201 Created; if there is an error, the response body includes the error message. :param pid: object pid :param dsID: id for the new datastream :param dsLabel: label for the new datastream (optional) :param mimeType: mimetype for the new datastream (optional) :param logMessage: log message for the object history (optional) :param controlGroup: control group for the new datastream (optional) :param dsLocation: URL where the content should be ingested from :param altIDs: alternate ids (optional) :param versionable: configure datastream versioning (optional) :param dsState: datastream state (optional) :param formatURI: datastream format (optional) :param checksumType: checksum type (optional) :param checksum: checksum (optional) :param content: datastream content, as a file-like object or character data (optional) :rtype: :class:`requests.models.Response`
Below is the instruction that describes the task: ### Input: Add a new datastream to an existing object. On success, the return response should have a status of 201 Created; if there is an error, the response body includes the error message. :param pid: object pid :param dsID: id for the new datastream :param dsLabel: label for the new datastream (optional) :param mimeType: mimetype for the new datastream (optional) :param logMessage: log message for the object history (optional) :param controlGroup: control group for the new datastream (optional) :param dsLocation: URL where the content should be ingested from :param altIDs: alternate ids (optional) :param versionable: configure datastream versioning (optional) :param dsState: datastream state (optional) :param formatURI: datastream format (optional) :param checksumType: checksum type (optional) :param checksum: checksum (optional) :param content: datastream content, as a file-like object or character data (optional) :rtype: :class:`requests.models.Response` ### Response: def addDatastream(self, pid, dsID, dsLabel=None, mimeType=None, logMessage=None, controlGroup=None, dsLocation=None, altIDs=None, versionable=None, dsState=None, formatURI=None, checksumType=None, checksum=None, content=None): '''Add a new datastream to an existing object. On success, the return response should have a status of 201 Created; if there is an error, the response body includes the error message.
:param pid: object pid :param dsID: id for the new datastream :param dsLabel: label for the new datastream (optional) :param mimeType: mimetype for the new datastream (optional) :param logMessage: log message for the object history (optional) :param controlGroup: control group for the new datastream (optional) :param dsLocation: URL where the content should be ingested from :param altIDs: alternate ids (optional) :param versionable: configure datastream versioning (optional) :param dsState: datastream state (optional) :param formatURI: datastream format (optional) :param checksumType: checksum type (optional) :param checksum: checksum (optional) :param content: datastream content, as a file-like object or character data (optional) :rtype: :class:`requests.models.Response` ''' # objects/{pid}/datastreams/NEWDS? [opts] # content via multipart file in request content, or dsLocation=URI # one of dsLocation or filename must be specified # if checksum is sent without checksum type, Fedora seems to # ignore it (does not error on invalid checksum with no checksum type) if checksum is not None and checksumType is None: warnings.warn('Fedora will ignore the checksum (%s) because no checksum type is specified' \ % checksum) http_args = {} if dsLabel: http_args['dsLabel'] = dsLabel if mimeType: http_args['mimeType'] = mimeType if logMessage: http_args['logMessage'] = logMessage if controlGroup: http_args['controlGroup'] = controlGroup if dsLocation: http_args['dsLocation'] = dsLocation if altIDs: http_args['altIDs'] = altIDs if versionable is not None: http_args['versionable'] = versionable if dsState: http_args['dsState'] = dsState if formatURI: http_args['formatURI'] = formatURI if checksumType: http_args['checksumType'] = checksumType if checksum: http_args['checksum'] = checksum # Added code to match how content is now handled, see modifyDatastream.
extra_args = {} # could be a string or a file-like object if content: if hasattr(content, 'read'): # if content is a file-like object, warn if no checksum if not checksum: logger.warning("File was ingested into fedora without a passed checksum for validation, pid was: %s and dsID was: %s.", pid, dsID) extra_args['files'] = {'file': content} else: # fedora wants a multipart file upload; # this seems to work better for handling unicode than # simply sending content via requests data parameter extra_args['files'] = {'file': ('filename', content)} # set content-type header ? url = 'objects/%s/datastreams/%s' % (pid, dsID) return self.post(url, params=http_args, **extra_args)
def fill_buffer(heap_data, i_chan): """Blocking function to populate data in the heap. This is run in an executor. """ # Calculate the time count and fraction. now = datetime.datetime.utcnow() time_full = now.timestamp() time_count = int(time_full) time_fraction = int((time_full - time_count) * (2**32 - 1)) diff = now - (now.replace(hour=0, minute=0, second=0, microsecond=0)) time_data = diff.seconds + 1e-6 * diff.microseconds # Write the data into the buffer. heap_data['visibility_timestamp_count'] = time_count heap_data['visibility_timestamp_fraction'] = time_fraction heap_data['correlator_output_data']['VIS'][:][:] = \ time_data + i_chan * 1j
Blocking function to populate data in the heap. This is run in an executor.
Below is the instruction that describes the task: ### Input: Blocking function to populate data in the heap. This is run in an executor. ### Response: def fill_buffer(heap_data, i_chan): """Blocking function to populate data in the heap. This is run in an executor. """ # Calculate the time count and fraction. now = datetime.datetime.utcnow() time_full = now.timestamp() time_count = int(time_full) time_fraction = int((time_full - time_count) * (2**32 - 1)) diff = now - (now.replace(hour=0, minute=0, second=0, microsecond=0)) time_data = diff.seconds + 1e-6 * diff.microseconds # Write the data into the buffer. heap_data['visibility_timestamp_count'] = time_count heap_data['visibility_timestamp_fraction'] = time_fraction heap_data['correlator_output_data']['VIS'][:][:] = \ time_data + i_chan * 1j
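The count/fraction split in `fill_buffer` packs a POSIX timestamp into whole seconds plus a sub-second remainder scaled into an unsigned 32-bit range. As a standalone sketch of just that arithmetic:

```python
def split_timestamp(time_full):
    # Whole seconds of the timestamp...
    time_count = int(time_full)
    # ...and the sub-second remainder scaled into [0, 2**32 - 1],
    # matching the arithmetic in fill_buffer above.
    time_fraction = int((time_full - time_count) * (2**32 - 1))
    return time_count, time_fraction
```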
def login(self, year, firstname, lastname, passwd, with_year=True): """ Authenticate a user """ firstname = firstname.upper() lastname = lastname.upper() if with_year and not self.set_year(year): return False url = URLS['login'] params = { 'prenom': firstname, 'nom': lastname, 'pwd': passwd, } soup = self.post_soup(url, data=params) return not soup.select('font[color=red]')
Authenticate a user
Below is the instruction that describes the task: ### Input: Authenticate a user ### Response: def login(self, year, firstname, lastname, passwd, with_year=True): """ Authenticate a user """ firstname = firstname.upper() lastname = lastname.upper() if with_year and not self.set_year(year): return False url = URLS['login'] params = { 'prenom': firstname, 'nom': lastname, 'pwd': passwd, } soup = self.post_soup(url, data=params) return not soup.select('font[color=red]')
def create_cas_login_url(cas_url, cas_route, service, renew=None, gateway=None): """ Create a CAS login URL. Keyword arguments: cas_url -- The url to the CAS (ex. http://sso.pdx.edu) cas_route -- The route where the CAS lives on server (ex. /cas) service -- (ex. http://localhost:5000/login) renew -- "true" or "false" gateway -- "true" or "false" Example usage: >>> create_cas_login_url( ... 'http://sso.pdx.edu', ... '/cas', ... 'http://localhost:5000', ... ) 'http://sso.pdx.edu/cas?service=http%3A%2F%2Flocalhost%3A5000' """ return create_url( cas_url, cas_route, ('service', service), ('renew', renew), ('gateway', gateway), )
Create a CAS login URL. Keyword arguments: cas_url -- The url to the CAS (ex. http://sso.pdx.edu) cas_route -- The route where the CAS lives on server (ex. /cas) service -- (ex. http://localhost:5000/login) renew -- "true" or "false" gateway -- "true" or "false" Example usage: >>> create_cas_login_url( ... 'http://sso.pdx.edu', ... '/cas', ... 'http://localhost:5000', ... ) 'http://sso.pdx.edu/cas?service=http%3A%2F%2Flocalhost%3A5000'
Below is the instruction that describes the task: ### Input: Create a CAS login URL. Keyword arguments: cas_url -- The url to the CAS (ex. http://sso.pdx.edu) cas_route -- The route where the CAS lives on server (ex. /cas) service -- (ex. http://localhost:5000/login) renew -- "true" or "false" gateway -- "true" or "false" Example usage: >>> create_cas_login_url( ... 'http://sso.pdx.edu', ... '/cas', ... 'http://localhost:5000', ... ) 'http://sso.pdx.edu/cas?service=http%3A%2F%2Flocalhost%3A5000' ### Response: def create_cas_login_url(cas_url, cas_route, service, renew=None, gateway=None): """ Create a CAS login URL. Keyword arguments: cas_url -- The url to the CAS (ex. http://sso.pdx.edu) cas_route -- The route where the CAS lives on server (ex. /cas) service -- (ex. http://localhost:5000/login) renew -- "true" or "false" gateway -- "true" or "false" Example usage: >>> create_cas_login_url( ... 'http://sso.pdx.edu', ... '/cas', ... 'http://localhost:5000', ... ) 'http://sso.pdx.edu/cas?service=http%3A%2F%2Flocalhost%3A5000' """ return create_url( cas_url, cas_route, ('service', service), ('renew', renew), ('gateway', gateway), )
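`create_url` is not shown in this entry; the following is a plausible sketch of it that reproduces the docstring's doctest using the standard library — the dropping of `None`-valued parameters is an assumption about its behavior, not taken from the original project:

```python
from urllib.parse import urlencode

def create_url(base, path, *params):
    # Drop parameters that were left as None, preserving argument order.
    query = urlencode([(k, v) for k, v in params if v is not None])
    return base + path + ("?" + query if query else "")

def create_cas_login_url(cas_url, cas_route, service, renew=None, gateway=None):
    # Same call shape as the dataset entry above.
    return create_url(cas_url, cas_route,
                      ("service", service), ("renew", renew), ("gateway", gateway))
```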
def history_view(self, request, object_id, extra_context=None): from django.template.response import TemplateResponse from django.contrib.admin.options import get_content_type_for_model from django.contrib.admin.utils import unquote from django.core.exceptions import PermissionDenied from django.utils.text import capfirst from django.utils.encoding import force_text from django.utils.translation import ugettext as _ "The 'history' admin view for this model." from django.contrib.admin.models import LogEntry # First check if the user can see this history. model = self.model obj = self.get_object(request, unquote(object_id)) if obj is None: return self._get_obj_does_not_exist_redirect(request, model._meta, object_id) if not self.has_change_permission(request, obj): raise PermissionDenied # Then get the history for this object. opts = model._meta app_label = opts.app_label action_list = LogEntry.objects.filter( object_id=unquote(object_id), content_type=get_content_type_for_model(model) ).select_related().order_by('-action_time')[:self.max_history_length] context = dict( self.admin_site.each_context(request), title=_('Change history: %s') % force_text(obj), action_list=action_list, module_name=capfirst(force_text(opts.verbose_name_plural)), object=obj, opts=opts, preserved_filters=self.get_preserved_filters(request), ) context.update(extra_context or {}) request.current_app = self.admin_site.name return TemplateResponse(request, self.object_history_template or [ "admin/%s/%s/object_history.html" % (app_label, opts.model_name), "admin/%s/object_history.html" % app_label, "admin/object_history.html" ], context)
The 'history' admin view for this model.
Below is the instruction that describes the task: ### Input: The 'history' admin view for this model. ### Response: def history_view(self, request, object_id, extra_context=None): from django.template.response import TemplateResponse from django.contrib.admin.options import get_content_type_for_model from django.contrib.admin.utils import unquote from django.core.exceptions import PermissionDenied from django.utils.text import capfirst from django.utils.encoding import force_text from django.utils.translation import ugettext as _ "The 'history' admin view for this model." from django.contrib.admin.models import LogEntry # First check if the user can see this history. model = self.model obj = self.get_object(request, unquote(object_id)) if obj is None: return self._get_obj_does_not_exist_redirect(request, model._meta, object_id) if not self.has_change_permission(request, obj): raise PermissionDenied # Then get the history for this object. opts = model._meta app_label = opts.app_label action_list = LogEntry.objects.filter( object_id=unquote(object_id), content_type=get_content_type_for_model(model) ).select_related().order_by('-action_time')[:self.max_history_length] context = dict( self.admin_site.each_context(request), title=_('Change history: %s') % force_text(obj), action_list=action_list, module_name=capfirst(force_text(opts.verbose_name_plural)), object=obj, opts=opts, preserved_filters=self.get_preserved_filters(request), ) context.update(extra_context or {}) request.current_app = self.admin_site.name return TemplateResponse(request, self.object_history_template or [ "admin/%s/%s/object_history.html" % (app_label, opts.model_name), "admin/%s/object_history.html" % app_label, "admin/object_history.html" ], context)
def nullify(v): """Convert empty strings and strings with only spaces to None values. """ if isinstance(v, six.string_types): v = v.strip() if v is None or v == '': return None else: return v
Convert empty strings and strings with only spaces to None values.
Below is the instruction that describes the task: ### Input: Convert empty strings and strings with only spaces to None values. ### Response: def nullify(v): """Convert empty strings and strings with only spaces to None values. """ if isinstance(v, six.string_types): v = v.strip() if v is None or v == '': return None else: return v
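In `nullify`, `six.string_types` only widens the `isinstance` check to Python 2's `unicode`; on Python 3 alone the same behavior reduces to a dependency-free sketch:

```python
def nullify(v):
    # Strip strings first, so whitespace-only values collapse to ''.
    if isinstance(v, str):
        v = v.strip()
    # Empty (or None) becomes None; everything else passes through,
    # including falsy non-strings like 0.
    return None if v is None or v == '' else v
```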
def delete_resource(self, resource_name, name): """Delete a specific resource by name""" try: logger.info("Trying to get %s: '%s'", resource_name, name) if name is None: # No name is defined, delete all the resources... if not self.dry_run: headers = { 'Content-Type': 'application/json' } logger.info("-> deleting all %s", resource_name) self.backend.delete(resource_name, headers) logger.info("-> deleted all %s", resource_name) else: response = {'_id': '_fake', '_etag': '_fake'} logger.info("Dry-run mode: should have deleted all %s", resource_name) else: params = {'where': json.dumps({'name': name})} if resource_name in ['host', 'service', 'user']: params = {'where': json.dumps({'name': name, '_is_template': self.model})} if resource_name == 'service' and '/' in name: splitted_name = name.split('/') name = splitted_name[0] + '_' + splitted_name[1] # Get host from name response2 = self.backend.get( 'host', params={'where': json.dumps({'name': splitted_name[0]})}) if response2['_items']: host = response2['_items'][0] logger.info("Got host '%s' for the service '%s'", splitted_name[0], splitted_name[1]) else: logger.warning("Not found host '%s'!", splitted_name[0]) return False if splitted_name[1] == '*': params = {'where': json.dumps({'host': host['_id']})} else: params = {'where': json.dumps({'name': splitted_name[1], 'host': host['_id']})} response = self.backend.get_all(resource_name, params=params) if response['_items']: logger.info("-> found %d matching %s", len(response['_items']), resource_name) for item in response['_items']: logger.info("-> found %s '%s': %s", resource_name, name, item['name']) # Exists in the backend, we must delete the element... 
if not self.dry_run: headers = { 'Content-Type': 'application/json', 'If-Match': item['_etag'] } logger.info("-> deleting %s: %s", resource_name, item['name']) self.backend.delete(resource_name + '/' + item['_id'], headers) logger.info("-> deleted %s: %s", resource_name, item['name']) else: response = {'_id': '_fake', '_etag': '_fake'} logger.info("Dry-run mode: should have deleted an %s '%s'", resource_name, name) logger.info("-> deleted: '%s': %s", resource_name, item['_id']) else: logger.warning("-> %s item '%s' not found", resource_name, name) return False except BackendException as exp: # pragma: no cover, should never happen logger.exception("Exception: %s", exp) logger.error("Response: %s", exp.response) print("Deletion error for '%s' : %s" % (resource_name, name)) print("~~~~~~~~~~~~~~~~~~~~~~~~~~") print("Exiting with error code: 5") return False return True
Delete a specific resource by name
Below is the instruction that describes the task: ### Input: Delete a specific resource by name ### Response: def delete_resource(self, resource_name, name): """Delete a specific resource by name""" try: logger.info("Trying to get %s: '%s'", resource_name, name) if name is None: # No name is defined, delete all the resources... if not self.dry_run: headers = { 'Content-Type': 'application/json' } logger.info("-> deleting all %s", resource_name) self.backend.delete(resource_name, headers) logger.info("-> deleted all %s", resource_name) else: response = {'_id': '_fake', '_etag': '_fake'} logger.info("Dry-run mode: should have deleted all %s", resource_name) else: params = {'where': json.dumps({'name': name})} if resource_name in ['host', 'service', 'user']: params = {'where': json.dumps({'name': name, '_is_template': self.model})} if resource_name == 'service' and '/' in name: splitted_name = name.split('/') name = splitted_name[0] + '_' + splitted_name[1] # Get host from name response2 = self.backend.get( 'host', params={'where': json.dumps({'name': splitted_name[0]})}) if response2['_items']: host = response2['_items'][0] logger.info("Got host '%s' for the service '%s'", splitted_name[0], splitted_name[1]) else: logger.warning("Not found host '%s'!", splitted_name[0]) return False if splitted_name[1] == '*': params = {'where': json.dumps({'host': host['_id']})} else: params = {'where': json.dumps({'name': splitted_name[1], 'host': host['_id']})} response = self.backend.get_all(resource_name, params=params) if response['_items']: logger.info("-> found %d matching %s", len(response['_items']), resource_name) for item in response['_items']: logger.info("-> found %s '%s': %s", resource_name, name, item['name']) # Exists in the backend, we must delete the element...
if not self.dry_run: headers = { 'Content-Type': 'application/json', 'If-Match': item['_etag'] } logger.info("-> deleting %s: %s", resource_name, item['name']) self.backend.delete(resource_name + '/' + item['_id'], headers) logger.info("-> deleted %s: %s", resource_name, item['name']) else: response = {'_id': '_fake', '_etag': '_fake'} logger.info("Dry-run mode: should have deleted an %s '%s'", resource_name, name) logger.info("-> deleted: '%s': %s", resource_name, item['_id']) else: logger.warning("-> %s item '%s' not found", resource_name, name) return False except BackendException as exp: # pragma: no cover, should never happen logger.exception("Exception: %s", exp) logger.error("Response: %s", exp.response) print("Deletion error for '%s' : %s" % (resource_name, name)) print("~~~~~~~~~~~~~~~~~~~~~~~~~~") print("Exiting with error code: 5") return False return True
def sink_get(self, project, sink_name): """API call: retrieve a sink resource. :type project: str :param project: ID of the project containing the sink. :type sink_name: str :param sink_name: the name of the sink :rtype: dict :returns: The sink object returned from the API (converted from a protobuf to a dictionary). """ path = "projects/%s/sinks/%s" % (project, sink_name) sink_pb = self._gapic_api.get_sink(path) # NOTE: LogSink message type does not have an ``Any`` field # so ``MessageToDict`` can safely be used. return MessageToDict(sink_pb)
API call: retrieve a sink resource. :type project: str :param project: ID of the project containing the sink. :type sink_name: str :param sink_name: the name of the sink :rtype: dict :returns: The sink object returned from the API (converted from a protobuf to a dictionary).
Below is the instruction that describes the task: ### Input: API call: retrieve a sink resource. :type project: str :param project: ID of the project containing the sink. :type sink_name: str :param sink_name: the name of the sink :rtype: dict :returns: The sink object returned from the API (converted from a protobuf to a dictionary). ### Response: def sink_get(self, project, sink_name): """API call: retrieve a sink resource. :type project: str :param project: ID of the project containing the sink. :type sink_name: str :param sink_name: the name of the sink :rtype: dict :returns: The sink object returned from the API (converted from a protobuf to a dictionary). """ path = "projects/%s/sinks/%s" % (project, sink_name) sink_pb = self._gapic_api.get_sink(path) # NOTE: LogSink message type does not have an ``Any`` field # so ``MessageToDict`` can safely be used. return MessageToDict(sink_pb)
def axes(self): """A mapping from axes to data objects with the plotter in this axes """ ret = utils.DefaultOrderedDict(lambda: self[1:0]) for arr in self: if arr.psy.plotter is not None: ret[arr.psy.plotter.ax].append(arr) return OrderedDict(ret)
A mapping from axes to data objects with the plotter in this axes
Below is the instruction that describes the task: ### Input: A mapping from axes to data objects with the plotter in this axes ### Response: def axes(self): """A mapping from axes to data objects with the plotter in this axes """ ret = utils.DefaultOrderedDict(lambda: self[1:0]) for arr in self: if arr.psy.plotter is not None: ret[arr.psy.plotter.ax].append(arr) return OrderedDict(ret)
def param_list_to_dict(list,param_struct,skeys): """convert from param list to dictionary param_struct: structure of parameter array """ RV = [] i0= 0 for key in skeys: val = param_struct[key] shape = SP.array(val) np = shape.prod() i1 = i0+np params = list[i0:i1].reshape(shape) RV.append((key,params)) i0 = i1 return dict(RV)
convert from param list to dictionary param_struct: structure of parameter array
Below is the instruction that describes the task: ### Input: convert from param list to dictionary param_struct: structure of parameter array ### Response: def param_list_to_dict(list,param_struct,skeys): """convert from param list to dictionary param_struct: structure of parameter array """ RV = [] i0= 0 for key in skeys: val = param_struct[key] shape = SP.array(val) np = shape.prod() i1 = i0+np params = list[i0:i1].reshape(shape) RV.append((key,params)) i0 = i1 return dict(RV)
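Reading `SP` as SciPy/NumPy, the function above slices a flat parameter vector back into named, shaped blocks. A NumPy sketch of the same logic, with a hypothetical two-parameter structure as the example:

```python
import numpy as np

def param_list_to_dict(flat, param_struct, skeys):
    # Walk skeys in order, carving prod(shape) entries out of the
    # flat vector for each key and reshaping them.
    out, i0 = {}, 0
    for key in skeys:
        shape = np.array(param_struct[key])
        n = int(shape.prod())
        out[key] = flat[i0:i0 + n].reshape(shape)
        i0 += n
    return out
```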
def send_report(report, config): """ Sends the report to IOpipe's collector. :param report: The report to be sent. :param config: The IOpipe agent configuration. """ headers = {"Authorization": "Bearer {}".format(config["token"])} url = "https://{host}{path}".format(**config) try: response = session.post( url, json=report, headers=headers, timeout=config["network_timeout"] ) response.raise_for_status() except Exception as e: logger.debug("Error sending report to IOpipe: %s" % e) else: logger.debug("Report sent to IOpipe successfully")
Sends the report to IOpipe's collector. :param report: The report to be sent. :param config: The IOpipe agent configuration.
Below is the instruction that describes the task: ### Input: Sends the report to IOpipe's collector. :param report: The report to be sent. :param config: The IOpipe agent configuration. ### Response: def send_report(report, config): """ Sends the report to IOpipe's collector. :param report: The report to be sent. :param config: The IOpipe agent configuration. """ headers = {"Authorization": "Bearer {}".format(config["token"])} url = "https://{host}{path}".format(**config) try: response = session.post( url, json=report, headers=headers, timeout=config["network_timeout"] ) response.raise_for_status() except Exception as e: logger.debug("Error sending report to IOpipe: %s" % e) else: logger.debug("Report sent to IOpipe successfully")
def attr_matches(self, text): """Compute matches when text contains a dot. Assuming the text is of the form NAME.NAME....[NAME], and is evaluatable in self.namespace or self.global_namespace, it will be evaluated and its attributes (as revealed by dir()) are used as possible completions. (For class instances, class members are also considered.) WARNING: this can still invoke arbitrary C code, if an object with a __getattr__ hook is evaluated. """ import re # Another option, seems to work great. Catches things like ''.<tab> m = re.match(r"(\S+(\.\w+)*)\.(\w*)$", text) # @UndefinedVariable if not m: return [] expr, attr = m.group(1, 3) try: obj = eval(expr, self.namespace) except: try: obj = eval(expr, self.global_namespace) except: return [] filter = _StartsWithFilter(attr) words = dir2(obj, filter=filter) return words
Compute matches when text contains a dot. Assuming the text is of the form NAME.NAME....[NAME], and is evaluatable in self.namespace or self.global_namespace, it will be evaluated and its attributes (as revealed by dir()) are used as possible completions. (For class instances, class members are also considered.) WARNING: this can still invoke arbitrary C code, if an object with a __getattr__ hook is evaluated.
Below is the instruction that describes the task: ### Input: Compute matches when text contains a dot. Assuming the text is of the form NAME.NAME....[NAME], and is evaluatable in self.namespace or self.global_namespace, it will be evaluated and its attributes (as revealed by dir()) are used as possible completions. (For class instances, class members are also considered.) WARNING: this can still invoke arbitrary C code, if an object with a __getattr__ hook is evaluated. ### Response: def attr_matches(self, text): """Compute matches when text contains a dot. Assuming the text is of the form NAME.NAME....[NAME], and is evaluatable in self.namespace or self.global_namespace, it will be evaluated and its attributes (as revealed by dir()) are used as possible completions. (For class instances, class members are also considered.) WARNING: this can still invoke arbitrary C code, if an object with a __getattr__ hook is evaluated. """ import re # Another option, seems to work great. Catches things like ''.<tab> m = re.match(r"(\S+(\.\w+)*)\.(\w*)$", text) # @UndefinedVariable if not m: return [] expr, attr = m.group(1, 3) try: obj = eval(expr, self.namespace) except: try: obj = eval(expr, self.global_namespace) except: return [] filter = _StartsWithFilter(attr) words = dir2(obj, filter=filter) return words
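The completion regex in `attr_matches` splits the text at its final dot: group 1 is the expression to evaluate, group 3 the partial attribute being completed. A standalone check of just that splitting behavior:

```python
import re

_ATTR_RE = re.compile(r"(\S+(\.\w+)*)\.(\w*)$")

def split_attr(text):
    # Return (expression, partial-attribute), or (None, None)
    # when the text contains no dotted attribute access.
    m = _ATTR_RE.match(text)
    return m.group(1, 3) if m else (None, None)
```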
def get_wan_ip(n=0): """ That IP module sucks. Occasionally it returns an IP address behind cloudflare which probably happens when cloudflare tries to proxy your web request because it thinks you're trying to DoS. It's better if we just run our own infrastructure. """ if n == 2: try: ip = myip() ip = extract_ip(ip) if is_ip_valid(ip): return ip except Exception as e: print(str(e)) return None # Fail-safe: use centralized server for IP lookup. from pyp2p.net import forwarding_servers for forwarding_server in forwarding_servers: url = "http://" + forwarding_server["addr"] + ":" url += str(forwarding_server["port"]) url += forwarding_server["url"] url += "?action=get_wan_ip" try: r = urlopen(url, timeout=5) response = r.read().decode("utf-8") response = extract_ip(response) if is_ip_valid(response): return response except Exception as e: print(str(e)) continue time.sleep(1) return get_wan_ip(n + 1)
That IP module sucks. Occasionally it returns an IP address behind cloudflare which probably happens when cloudflare tries to proxy your web request because it thinks you're trying to DoS. It's better if we just run our own infrastructure.
Below is the instruction that describes the task: ### Input: That IP module sucks. Occasionally it returns an IP address behind cloudflare which probably happens when cloudflare tries to proxy your web request because it thinks you're trying to DoS. It's better if we just run our own infrastructure. ### Response: def get_wan_ip(n=0): """ That IP module sucks. Occasionally it returns an IP address behind cloudflare which probably happens when cloudflare tries to proxy your web request because it thinks you're trying to DoS. It's better if we just run our own infrastructure. """ if n == 2: try: ip = myip() ip = extract_ip(ip) if is_ip_valid(ip): return ip except Exception as e: print(str(e)) return None # Fail-safe: use centralized server for IP lookup. from pyp2p.net import forwarding_servers for forwarding_server in forwarding_servers: url = "http://" + forwarding_server["addr"] + ":" url += str(forwarding_server["port"]) url += forwarding_server["url"] url += "?action=get_wan_ip" try: r = urlopen(url, timeout=5) response = r.read().decode("utf-8") response = extract_ip(response) if is_ip_valid(response): return response except Exception as e: print(str(e)) continue time.sleep(1) return get_wan_ip(n + 1)
def get_measurements_from_kal_scan(kal_out): """Return a list of all measurements from kalibrate channel scan.""" result = [] for line in kal_out.splitlines(): if "offset " in line: p_line = line.split(' ') result.append(p_line[-1]) return result
Return a list of all measurements from kalibrate channel scan.
Below is the instruction that describes the task: ### Input: Return a list of all measurements from kalibrate channel scan. ### Response: def get_measurements_from_kal_scan(kal_out): """Return a list of all measurements from kalibrate channel scan.""" result = [] for line in kal_out.splitlines(): if "offset " in line: p_line = line.split(' ') result.append(p_line[-1]) return result
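Fed a couple of illustrative offset lines (kalibrate's exact output format varies by version, so the sample below is made up, not captured from a real run), the parser keeps the trailing token of each matching line:

```python
def get_measurements_from_kal_scan(kal_out):
    # Keep the last space-separated token of every line mentioning an offset.
    return [line.split(' ')[-1]
            for line in kal_out.splitlines() if "offset " in line]

# Illustrative scan output, not real kalibrate output:
sample = ("chan: 128 power: 25012.34\n"
          "average absolute offset 308\n"
          "average absolute offset -112")
```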
def timeframe(self, start, end):
    r"""
    When you want to search bugs for a certain time frame.

    :param start:
    :param end:
    :returns: :class:`Search`
    """
    if start:
        self._time_frame['chfieldfrom'] = start
    if end:
        self._time_frame['chfieldto'] = end
    return self
r"""
When you want to search bugs for a certain time frame.

:param start:
:param end:
:returns: :class:`Search`
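Because `timeframe` returns `self`, it chains with other setters in the builder style. A minimal stand-in `Search` class (hypothetical — the real class carries many more query fields) shows the behaviour:

```python
# Minimal stand-in for the Search builder, just to show how timeframe()
# mutates the query state and supports method chaining.
class Search:
    def __init__(self):
        self._time_frame = {}

    def timeframe(self, start, end):
        if start:
            self._time_frame['chfieldfrom'] = start
        if end:
            self._time_frame['chfieldto'] = end
        return self

s = Search().timeframe('2023-01-01', '2023-06-30')
print(s._time_frame)
# {'chfieldfrom': '2023-01-01', 'chfieldto': '2023-06-30'}
```

Passing a falsy bound leaves that side of the window open: `timeframe(None, '2023-06-30')` only sets `chfieldto`.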
def is_visible(self, pos: Union[Point2, Point3, Unit]) -> bool:
    """ Returns True if you have vision on a grid point. """
    # more info: https://github.com/Blizzard/s2client-proto/blob/9906df71d6909511907d8419b33acc1a3bd51ec0/s2clientprotocol/spatial.proto#L19
    assert isinstance(pos, (Point2, Point3, Unit))
    pos = pos.position.to2.rounded
    return self.state.visibility[pos] == 2
Returns True if you have vision on a grid point.
def _merge_with_defaults(params):
    """
    Performs a 2-level deep merge of params with _default_params with
    correct merging of params for each mark.

    This is a bit complicated since params['marks'] is a list and we need
    to make sure each mark gets the default params.
    """
    marks_params = [
        tz.merge(default, param)
        for default, param in zip(itertools.repeat(_default_params['marks']),
                                  params['marks'])
    ] if 'marks' in params else [_default_params['marks']]

    merged_without_marks = tz.merge_with(
        tz.merge,
        tz.dissoc(_default_params, 'marks'),
        tz.dissoc(params, 'marks')
    )

    return tz.merge(merged_without_marks, {'marks': marks_params})
Performs a 2-level deep merge of params with _default_params with correct merging of params for each mark.

This is a bit complicated since params['marks'] is a list and we need to make sure each mark gets the default params.
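The same 2-level merge can be sketched with plain dicts, without toolz, which makes the logic easier to see: top-level keys are shallow-merged dict-by-dict, while the `marks` list gets the default mark params applied to every element. The `_default_params` values here are illustrative, not the library's real defaults:

```python
# Hypothetical defaults, just to exercise the merge shape.
_default_params = {'axes': {'x': 'linear'},
                   'marks': {'color': 'black', 'size': 1}}

def merge_with_defaults(params):
    # Each mark in the list starts from the default mark params,
    # then gets its own overrides applied on top.
    marks = [dict(_default_params['marks'], **m)
             for m in params.get('marks', [{}])]
    merged = {}
    for key in set(_default_params) | set(params):
        if key == 'marks':
            continue  # handled separately above
        merged[key] = dict(_default_params.get(key, {}), **params.get(key, {}))
    merged['marks'] = marks
    return merged

out = merge_with_defaults({'axes': {'y': 'log'},
                           'marks': [{'color': 'red'}, {}]})
```

Here `out['axes']` contains both the default `x` and the overriding `y`, and every mark carries `size: 1` from the defaults.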
def do_action(self, target, dry_run=False):
    """
    :param target: Full path and filename
    :param dry_run: True - don't actually perform action. False: perform
        action. No effect for this rule.
    :return: filename: Full path and filename after action completes
    """
    original_path = os.path.dirname(target)
    original_filename, original_extension = os.path.splitext(os.path.basename(target))
    new_filename = re.sub(self.match, self.replace_with, original_filename) + original_extension
    destination = os.path.join(original_path, new_filename)

    if dry_run is True:
        self.logger.debug("Dry run: Skipping rename {0} to {1}".format(target, new_filename))
        return target
    else:
        self.logger.debug("Renaming {0} to {1}".format(original_filename + original_extension,
                                                       new_filename + original_extension))
        if not os.path.exists(destination):
            try:
                shutil.move(target, destination)
            except IOError:
                self.logger.error("Error renaming file {0} to {1}".format(target, new_filename))
                raise IOError
        else:
            self.logger.error("Destination file already exists: {0}".format(new_filename))
            raise IOError
    return destination
:param target: Full path and filename
:param dry_run: True - don't actually perform action. False: perform action. No effect for this rule.
:return: filename: Full path and filename after action completes
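The filename transformation at the heart of `do_action` can be isolated from the filesystem side effects: apply `re.sub` to the stem only, then reattach the extension and directory. The pattern and replacement below are illustrative, not values from the rule class:

```python
import os
import re

def new_name(target, match, replace_with):
    """Compute the renamed path without touching the filesystem."""
    original_path = os.path.dirname(target)
    stem, ext = os.path.splitext(os.path.basename(target))
    # Only the stem is rewritten; directory and extension are preserved.
    return os.path.join(original_path, re.sub(match, replace_with, stem) + ext)

print(new_name('/tmp/IMG_0042.jpg', r'^IMG_', 'holiday_'))
# '/tmp/holiday_0042.jpg' on POSIX
```

This mirrors the dry-run/real-run split in the method above: the path computation is the same either way, and only the `shutil.move` differs.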
def get_schema_for_game(self, appID, language=None, format=None):
    """Request the available achievements and stats for a game.

    appID: The app id
    language: The language to return the results in. None uses default.
    format: Return format. None defaults to json. (json, xml, vdf)
    """
    parameters = {'appid': appID}
    if format is not None:
        parameters['format'] = format
    if language is not None:
        parameters['l'] = language
    else:
        parameters['l'] = self.language
    url = self.create_request_url(self.interface, 'GetSchemaForGame', 2,
                                  parameters)
    data = self.retrieve_request(url)
    return self.return_data(data, format=format)
Request the available achievements and stats for a game.

appID: The app id
language: The language to return the results in. None uses default.
format: Return format. None defaults to json. (json, xml, vdf)
def create(self, enable_turn=values.unset, type=values.unset,
           unique_name=values.unset, status_callback=values.unset,
           status_callback_method=values.unset, max_participants=values.unset,
           record_participants_on_connect=values.unset,
           video_codecs=values.unset, media_region=values.unset):
    """
    Create a new RoomInstance

    :param bool enable_turn: Use Twilio Network Traversal for TURN service.
    :param RoomInstance.RoomType type: Type of room, either peer-to-peer, group-small or group.
    :param unicode unique_name: Name of the Room.
    :param unicode status_callback: A URL that Twilio sends asynchronous webhook requests to on every room event.
    :param unicode status_callback_method: HTTP method Twilio should use when requesting the above URL.
    :param unicode max_participants: Maximum number of Participants in the Room.
    :param bool record_participants_on_connect: Start Participant recording when connected.
    :param RoomInstance.VideoCodec video_codecs: An array of video codecs supported when publishing a Track in the Room.
    :param unicode media_region: Region for the media server in Group Rooms.

    :returns: Newly created RoomInstance
    :rtype: twilio.rest.video.v1.room.RoomInstance
    """
    data = values.of({
        'EnableTurn': enable_turn,
        'Type': type,
        'UniqueName': unique_name,
        'StatusCallback': status_callback,
        'StatusCallbackMethod': status_callback_method,
        'MaxParticipants': max_participants,
        'RecordParticipantsOnConnect': record_participants_on_connect,
        'VideoCodecs': serialize.map(video_codecs, lambda e: e),
        'MediaRegion': media_region,
    })

    payload = self._version.create(
        'POST',
        self._uri,
        data=data,
    )

    return RoomInstance(self._version, payload, )
Create a new RoomInstance

:param bool enable_turn: Use Twilio Network Traversal for TURN service.
:param RoomInstance.RoomType type: Type of room, either peer-to-peer, group-small or group.
:param unicode unique_name: Name of the Room.
:param unicode status_callback: A URL that Twilio sends asynchronous webhook requests to on every room event.
:param unicode status_callback_method: HTTP method Twilio should use when requesting the above URL.
:param unicode max_participants: Maximum number of Participants in the Room.
:param bool record_participants_on_connect: Start Participant recording when connected.
:param RoomInstance.VideoCodec video_codecs: An array of video codecs supported when publishing a Track in the Room.
:param unicode media_region: Region for the media server in Group Rooms.

:returns: Newly created RoomInstance
:rtype: twilio.rest.video.v1.room.RoomInstance
def xpnsl(h1, h2, use_threads=True):
    """Cross-population version of the NSL statistic.

    Parameters
    ----------
    h1 : array_like, int, shape (n_variants, n_haplotypes)
        Haplotype array for the first population.
    h2 : array_like, int, shape (n_variants, n_haplotypes)
        Haplotype array for the second population.
    use_threads : bool, optional
        If True use multiple threads to compute.

    Returns
    -------
    score : ndarray, float, shape (n_variants,)
        Unstandardized XPNSL scores.

    """

    # check inputs
    h1 = asarray_ndim(h1, 2)
    check_integer_dtype(h1)
    h2 = asarray_ndim(h2, 2)
    check_integer_dtype(h2)
    check_dim0_aligned(h1, h2)
    h1 = memoryview_safe(h1)
    h2 = memoryview_safe(h2)

    if use_threads and multiprocessing.cpu_count() > 1:
        # use multiple threads

        # setup threadpool
        pool = ThreadPool(min(4, multiprocessing.cpu_count()))

        # scan forward
        res1_fwd = pool.apply_async(nsl_scan, args=(h1,))
        res2_fwd = pool.apply_async(nsl_scan, args=(h2,))

        # scan backward
        res1_rev = pool.apply_async(nsl_scan, args=(h1[::-1],))
        res2_rev = pool.apply_async(nsl_scan, args=(h2[::-1],))

        # wait for both to finish
        pool.close()
        pool.join()

        # obtain results
        nsl1_fwd = res1_fwd.get()
        nsl2_fwd = res2_fwd.get()
        nsl1_rev = res1_rev.get()
        nsl2_rev = res2_rev.get()

        # cleanup
        pool.terminate()

    else:
        # compute without threads

        # scan forward
        nsl1_fwd = nsl_scan(h1)
        nsl2_fwd = nsl_scan(h2)

        # scan backward
        nsl1_rev = nsl_scan(h1[::-1])
        nsl2_rev = nsl_scan(h2[::-1])

    # handle reverse scans
    nsl1_rev = nsl1_rev[::-1]
    nsl2_rev = nsl2_rev[::-1]

    # compute unstandardized score
    nsl1 = nsl1_fwd + nsl1_rev
    nsl2 = nsl2_fwd + nsl2_rev
    score = np.log(nsl1 / nsl2)

    return score
Cross-population version of the NSL statistic.

Parameters
----------
h1 : array_like, int, shape (n_variants, n_haplotypes)
    Haplotype array for the first population.
h2 : array_like, int, shape (n_variants, n_haplotypes)
    Haplotype array for the second population.
use_threads : bool, optional
    If True use multiple threads to compute.

Returns
-------
score : ndarray, float, shape (n_variants,)
    Unstandardized XPNSL scores.
def report(location, document):
    """
    Run reporter on document and return list of ReporterError instances.
    (Depending on the location it will or won't run anything.)

    Returns a list of `ReporterError`.
    """
    assert isinstance(location, six.string_types)
    if location.endswith('.py'):
        return report_pyflakes(document)
    else:
        return []
Run reporter on document and return list of ReporterError instances. (Depending on the location it will or won't run anything.) Returns a list of `ReporterError`.
def systemInformationType2ter():
    """SYSTEM INFORMATION TYPE 2ter Section 9.1.34"""
    a = L2PseudoLength(l2pLength=0x12)
    b = TpPd(pd=0x6)
    c = MessageType(mesType=0x3)  # 00000011
    d = NeighbourCellsDescription2()
    e = Si2terRestOctets()
    packet = a / b / c / d / e
    return packet
SYSTEM INFORMATION TYPE 2ter Section 9.1.34
def from_path(path: str, encoding: str = 'utf-8', **kwargs) -> BELGraph:
    """Load a BEL graph from a file resource. This function is a thin wrapper
    around :func:`from_lines`.

    :param path: A file path
    :param encoding: the encoding to use when reading this file. Is passed to
        :code:`codecs.open`. See the python `docs
        <https://docs.python.org/3/library/codecs.html#standard-encodings>`_
        for a list of standard encodings. For example, files starting with a
        UTF-8 BOM should use :code:`utf_8_sig`.

    The remaining keyword arguments are passed to
    :func:`pybel.io.line_utils.parse_lines`.
    """
    log.info('Loading from path: %s', path)
    graph = BELGraph(path=path)
    with codecs.open(os.path.expanduser(path), encoding=encoding) as lines:
        parse_lines(graph=graph, lines=lines, **kwargs)
    return graph
Load a BEL graph from a file resource. This function is a thin wrapper around :func:`from_lines`.

:param path: A file path
:param encoding: the encoding to use when reading this file. Is passed to
    :code:`codecs.open`. See the python `docs
    <https://docs.python.org/3/library/codecs.html#standard-encodings>`_ for a
    list of standard encodings. For example, files starting with a UTF-8 BOM
    should use :code:`utf_8_sig`.

The remaining keyword arguments are passed to :func:`pybel.io.line_utils.parse_lines`.
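The `utf_8_sig` advice in the docstring is worth seeing concretely: a file written with a UTF-8 BOM leaks a `'\ufeff'` into the first line under plain `utf-8`, while `utf_8_sig` strips it, so line-oriented parsers see a clean first statement. The file content here is an arbitrary example:

```python
import codecs
import os
import tempfile

# Write a file that starts with the UTF-8 byte order mark.
path = os.path.join(tempfile.mkdtemp(), 'doc.bel')
with open(path, 'wb') as f:
    f.write(b'\xef\xbb\xbfSET Citation = "test"\n')

# Plain utf-8 keeps the BOM as a character in the text...
with codecs.open(path, encoding='utf-8') as f:
    raw = f.readline()

# ...while utf_8_sig consumes it transparently.
with codecs.open(path, encoding='utf_8_sig') as f:
    clean = f.readline()

print(raw.startswith('\ufeff'), clean.startswith('SET'))  # True True
```

This is why a BOM'd BEL document parsed with the default encoding can fail on its very first line even though the file "looks" fine in an editor.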
def list_databases(self, page_size=None, page_token=None):
    """List databases for the instance.

    See
    https://cloud.google.com/spanner/reference/rpc/google.spanner.admin.database.v1#google.spanner.admin.database.v1.DatabaseAdmin.ListDatabases

    :type page_size: int
    :param page_size:
        Optional. The maximum number of databases in each page of results
        from this request. Non-positive values are ignored. Defaults to a
        sensible value set by the API.

    :type page_token: str
    :param page_token:
        Optional. If present, return the next batch of databases, using
        the value, which must correspond to the ``nextPageToken`` value
        returned in the previous response. Deprecated: use the ``pages``
        property of the returned iterator instead of manually passing the
        token.

    :rtype: :class:`~google.api_core.page_iterator.Iterator`
    :returns:
        Iterator of :class:`~google.cloud.spanner_v1.database.Database`
        resources within the current instance.
    """
    metadata = _metadata_with_prefix(self.name)
    page_iter = self._client.database_admin_api.list_databases(
        self.name, page_size=page_size, metadata=metadata
    )
    page_iter.next_page_token = page_token
    page_iter.item_to_value = self._item_to_database
    return page_iter
List databases for the instance.

See
https://cloud.google.com/spanner/reference/rpc/google.spanner.admin.database.v1#google.spanner.admin.database.v1.DatabaseAdmin.ListDatabases

:type page_size: int
:param page_size:
    Optional. The maximum number of databases in each page of results
    from this request. Non-positive values are ignored. Defaults to a
    sensible value set by the API.

:type page_token: str
:param page_token:
    Optional. If present, return the next batch of databases, using
    the value, which must correspond to the ``nextPageToken`` value
    returned in the previous response. Deprecated: use the ``pages``
    property of the returned iterator instead of manually passing the
    token.

:rtype: :class:`~google.api_core.page_iterator.Iterator`
:returns:
    Iterator of :class:`~google.cloud.spanner_v1.database.Database`
    resources within the current instance.
def convert_runsummary_to_json(
        df, comment='Uploaded via km3pipe.StreamDS', prefix='TEST_'
):
    """Convert a Pandas DataFrame with runsummary to JSON for DB upload"""
    data_field = []
    comment += ", by {}".format(getpass.getuser())
    for det_id, det_data in df.groupby('det_id'):
        runs_field = []
        data_field.append({"DetectorId": det_id, "Runs": runs_field})
        for run, run_data in det_data.groupby('run'):
            parameters_field = []
            runs_field.append({
                "Run": int(run),
                "Parameters": parameters_field
            })
            parameter_dict = {}
            for row in run_data.itertuples():
                for parameter_name in run_data.columns:
                    if parameter_name in REQUIRED_COLUMNS:
                        continue
                    if parameter_name not in parameter_dict:
                        entry = {'Name': prefix + parameter_name, 'Data': []}
                        parameter_dict[parameter_name] = entry
                    data_value = getattr(row, parameter_name)
                    try:
                        data_value = float(data_value)
                    except ValueError as e:
                        log.critical("Data values have to be floats!")
                        raise ValueError(e)
                    value = {'S': str(getattr(row, 'source')), 'D': data_value}
                    parameter_dict[parameter_name]['Data'].append(value)
            for parameter_data in parameter_dict.values():
                parameters_field.append(parameter_data)

    data_to_upload = {"Comment": comment, "Data": data_field}
    file_data_to_upload = json.dumps(data_to_upload)
    return file_data_to_upload
Convert a Pandas DataFrame with runsummary to JSON for DB upload
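The nested payload shape built above (detectors → runs → parameters → per-source data points) can be sketched with plain rows instead of a DataFrame; the detector id, run numbers, and `livetime` values here are made up for illustration:

```python
import json
from collections import defaultdict

# Hypothetical runsummary rows; in the real function these come from a
# pandas DataFrame grouped by det_id and run.
rows = [
    {'det_id': 14, 'run': 5000, 'source': 'online', 'livetime': 3590.0},
    {'det_id': 14, 'run': 5001, 'source': 'online', 'livetime': 3600.0},
]

runs_by_det = defaultdict(list)
for row in rows:
    runs_by_det[row['det_id']].append({
        'Run': row['run'],
        'Parameters': [{
            'Name': 'TEST_livetime',  # prefix + parameter column name
            'Data': [{'S': row['source'], 'D': float(row['livetime'])}],
        }],
    })

payload = json.dumps({
    'Comment': 'Uploaded via km3pipe.StreamDS',
    'Data': [{'DetectorId': d, 'Runs': runs}
             for d, runs in runs_by_det.items()],
})
```

The `float()` coercion mirrors the check in the real function: the DB upload format only accepts numeric `D` values, so non-numeric columns fail early rather than producing a rejected upload.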