def encode(self, payload):
    """ Encode payload """
    try:
        return self.encoder.encode(payload)
    except Exception as exception:
        raise EncodeError(str(exception))
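The try/except wrapper above normalizes any backend failure into a single EncodeError. A minimal runnable sketch of the pattern; `EncodeError`, `JsonEncoder`, and `Codec` are hypothetical names standing in for the surrounding class, not part of the original module:

```python
import json

class EncodeError(Exception):
    """Raised when the underlying encoder fails for any reason."""

class JsonEncoder:
    # Hypothetical backend; any object with an encode() method works.
    def encode(self, payload):
        return json.dumps(payload)

class Codec:
    def __init__(self, encoder):
        self.encoder = encoder

    def encode(self, payload):
        # Mirror the pattern above: wrap any backend exception.
        try:
            return self.encoder.encode(payload)
        except Exception as exception:
            raise EncodeError(str(exception))

codec = Codec(JsonEncoder())
print(codec.encode({"ok": True}))  # {"ok": true}

try:
    codec.encode(object())  # not JSON-serializable
except EncodeError as e:
    print("wrapped:", type(e).__name__)  # wrapped: EncodeError
```

Callers then only ever need to catch one exception type, regardless of which encoder backend is plugged in.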
def _rewrite_function(f):
    """
    Rewrite a function so that any if/else branches are intercepted and their
    behavior can be overridden. This is accomplished using 3 steps:

    1. Get the source of the function and then rewrite the AST using
       _IfTransformer
    2. Do some small fixups to the tree to make sure a) the function doesn't
       have the same name, and b) the decorator isn't called recursively on
       the transformed function as well
    3. Bring the variables from the call site back into scope

    :param f: Function to rewrite
    :return: Rewritten function
    """
    source = inspect.getsource(f)
    tree = ast.parse(source)
    _IfTransformer().visit(tree)
    ast.fix_missing_locations(tree)
    tree.body[0].name = f.__name__ + '_patched'
    tree.body[0].decorator_list = []
    compiled = compile(tree, filename='<ast>', mode='exec')
    # The first f_back here gets to the body of magicquil() and the second
    # f_back gets to the user's call site, which is what we want. If we didn't
    # add these manually to the globals it wouldn't be possible to call other
    # @magicquil functions from within a @magicquil function.
    prev_globals = inspect.currentframe().f_back.f_back.f_globals
    # For reasons I don't quite understand it's critical to add locals() here
    # otherwise the function will disappear and we won't be able to return it
    # below
    exec(compiled, {**prev_globals, **globals()}, locals())
    return locals()[f.__name__ + '_patched']
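The three steps (transform the AST, rename the compiled function, exec it with the right globals) can be sketched with a toy transformer. `RenameOld` and `greet_patched` are illustrative names only; the real code uses `_IfTransformer` and the caller's frame globals:

```python
import ast

# Step 1: parse source and rewrite the AST with a NodeTransformer.
source = "def greet():\n    return old('hi')\n"
tree = ast.parse(source)

class RenameOld(ast.NodeTransformer):
    # Toy stand-in for _IfTransformer: rewrites references to `old`.
    def visit_Name(self, node):
        if node.id == "old":
            node.id = "new"
        return node

RenameOld().visit(tree)
ast.fix_missing_locations(tree)

# Step 2: rename so the patched function doesn't shadow the original.
tree.body[0].name = "greet_patched"
compiled = compile(tree, filename="<ast>", mode="exec")

# Step 3: exec with the globals the function needs at its call site.
namespace = {"new": lambda s: s.upper()}
exec(compiled, namespace)
print(namespace["greet_patched"]())  # HI
```

The namespace trick in step 3 is why the original walks two frames up with `f_back.f_back`: the rewritten function must see the names visible at the user's call site.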
def get_inventory_str(self, keys=None):
    """
    Convert a dict generated by ansible.LagoAnsible.get_inventory
    to an INI-like file.

    Args:
        keys (list of str): Path to the keys that will be used to
            create groups.

    Returns:
        str: INI-like Ansible inventory
    """
    inventory = self.get_inventory(keys)
    lines = []
    for name, hosts in inventory.viewitems():
        lines.append('[{name}]'.format(name=name))
        for host in sorted(hosts):
            lines.append(host)
    return '\n'.join(lines)
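The group/hosts loop translates directly to a standalone form. `inventory_to_ini` is a made-up name; it uses `dict.items()`, the Python 3 equivalent of the Python 2 `viewitems()` call above, and sorts the groups so the output is deterministic:

```python
def inventory_to_ini(inventory):
    # One [group] header per key, followed by its sorted hosts.
    lines = []
    for name, hosts in sorted(inventory.items()):
        lines.append('[{name}]'.format(name=name))
        for host in sorted(hosts):
            lines.append(host)
    return '\n'.join(lines)

print(inventory_to_ini({'web': ['b.example', 'a.example'], 'db': ['c.example']}))
# [db]
# c.example
# [web]
# a.example
# b.example
```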
def clear_sonos_playlist(self, sonos_playlist, update_id=0):
    """Clear all tracks from a Sonos playlist.

    This is a convenience method for :py:meth:`reorder_sonos_playlist`.

    Example::

        device.clear_sonos_playlist(sonos_playlist)

    Args:
        sonos_playlist
            (:py:class:`~.soco.data_structures.DidlPlaylistContainer`):
            Sonos playlist object or the item_id (str) of the Sonos
            playlist.
        update_id (int): Optional update counter for the object. If left
            at the default of 0, it will be looked up.

    Returns:
        dict: See :py:meth:`reorder_sonos_playlist`

    Raises:
        ValueError: If sonos_playlist specified by string and is not
            found.
        SoCoUPnPException: See :py:meth:`reorder_sonos_playlist`
    """
    if not isinstance(sonos_playlist, DidlPlaylistContainer):
        sonos_playlist = self.get_sonos_playlist_by_attr('item_id',
                                                         sonos_playlist)
    count = self.music_library.browse(ml_item=sonos_playlist).total_matches
    tracks = ','.join([str(x) for x in range(count)])
    if tracks:
        return self.reorder_sonos_playlist(sonos_playlist, tracks=tracks,
                                           new_pos='', update_id=update_id)
    else:
        return {'change': 0, 'update_id': update_id, 'length': count}
def set_from_json(self, obj, json, models=None, setter=None):
    ''' Sets the value of this property from a JSON value.

    Args:
        obj (HasProps) :
        json (JSON-dict) :
        models (seq[Model], optional) :
        setter (ClientSession or ServerSession or None, optional) :
            This is used to prevent "boomerang" updates to Bokeh apps.
            (default: None)

            In the context of a Bokeh server application, incoming updates
            to properties will be annotated with the session that is doing
            the updating. This value is propagated through any subsequent
            change notifications that the update triggers. The session can
            compare the event setter to itself, and suppress any updates
            that originate from itself.

    Returns:
        None
    '''
    return super(BasicPropertyDescriptor, self).set_from_json(
        obj, self.property.from_json(json, models), models, setter)
def get_nodes(cluster):
    """Get all the nodes of a given cluster.

    Args:
        cluster(string): uid of the cluster (e.g 'rennes')
    """
    gk = get_api_client()
    site = get_cluster_site(cluster)
    return gk.sites[site].clusters[cluster].nodes.list()
def _flatten_dict(self, dict_):
    """Returns a flat version of the given dict, consisting of only
    key/value pairs (nested dicts are merged into the result).
    """
    flattened_dict = {}
    for (key, value) in dict_.iteritems():
        if isinstance(value, dict):
            flattened_dict.update(self._flatten_dict(value))
        else:
            flattened_dict[key] = value
    return flattened_dict
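The recursion's behavior is easiest to see standalone. `flatten_dict` is a hypothetical module-level version using Python 3's `items()` in place of the Python 2 `iteritems()` above; note that nested keys silently overwrite earlier ones on collision:

```python
def flatten_dict(dict_):
    # Recursively hoist values out of nested dicts; later keys win on collision.
    flattened = {}
    for key, value in dict_.items():
        if isinstance(value, dict):
            flattened.update(flatten_dict(value))
        else:
            flattened[key] = value
    return flattened

print(flatten_dict({'a': 1, 'b': {'c': 2, 'd': {'e': 3}}}))
# {'a': 1, 'c': 2, 'e': 3}
```

The intermediate keys ('b', 'd') disappear entirely; only leaf key/value pairs survive.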
def cancel(self, client=None):
    """API call: cancel job via a POST request

    See
    https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs/cancel

    :type client: :class:`~google.cloud.bigquery.client.Client` or
                  ``NoneType``
    :param client: the client to use. If not passed, falls back to the
                   ``client`` stored on the current dataset.

    :rtype: bool
    :returns: Boolean indicating that the cancel request was sent.
    """
    client = self._require_client(client)

    extra_params = {}
    if self.location:
        extra_params["location"] = self.location

    api_response = client._connection.api_request(
        method="POST", path="%s/cancel" % (self.path,), query_params=extra_params
    )
    self._set_properties(api_response["job"])
    # The Future interface requires that we return True if the *attempt*
    # to cancel was successful.
    return True
def parse_options(self, option_values):
    """ Set the options (with parsing) and return a dict of all option
    values
    """
    self.set_options_values(option_values, parse=True)
    return self.get_options_values(hidden=False)
def persistent_load(self, pid):
    """
    Reconstruct a GLC object using the persistent ID.

    This method should not be used externally. It is required by the
    unpickler super class.

    Parameters
    ----------
    pid : The persistent ID used in pickle file to save the GLC object.

    Returns
    -------
    The GLC object.
    """
    if len(pid) == 2:
        # Pre GLC-1.3 release behavior, without memoization
        type_tag, filename = pid
        abs_path = _os.path.join(self.gl_temp_storage_path, filename)
        return _get_gl_object_from_persistent_id(type_tag, abs_path)
    else:
        # Post GLC-1.3 release behavior, with memoization
        type_tag, filename, object_id = pid
        if object_id in self.gl_object_memo:
            return self.gl_object_memo[object_id]
        else:
            abs_path = _os.path.join(self.gl_temp_storage_path, filename)
            obj = _get_gl_object_from_persistent_id(type_tag, abs_path)
            self.gl_object_memo[object_id] = obj
            return obj
def _return_fieldset(self, fieldset):
    """
    This function became a bit messy, since it needs to deal with two
    cases.

    1) No fieldset, which is represented as an integer
    2) A fieldset
    """
    collapsible = None
    description = None
    try:
        # Make sure strings with numbers work as well, do this
        int(str(fieldset))
        title = None
    except ValueError:
        if fieldset.count("|") > 1:
            raise ImproperlyConfigured(
                "The fieldset name does not "
                "support more than one | sign. "
                "It's meant to separate a "
                "fieldset from its description."
            )
        title = fieldset
        if "|" in fieldset:
            title, description = fieldset.split("|")
        if fieldset and (fieldset[0] in "-+"):
            if fieldset[0] == "-":
                collapsible = "closed"
            else:
                collapsible = "open"
            title = title[1:]
    return {
        "title": title,
        "description": description,
        "collapsible": collapsible,
    }
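The parsing rules (an integer means "no fieldset"; a leading `-` or `+` controls collapsing; `|` splits title from description) can be exercised with a trimmed standalone version. `parse_fieldset` is an illustrative name, and the multi-`|` validation is omitted here:

```python
def parse_fieldset(fieldset):
    collapsible = None
    description = None
    try:
        int(str(fieldset))          # integers mean "no fieldset"
        title = None
    except ValueError:
        title = fieldset
        if "|" in fieldset:
            title, description = fieldset.split("|")
        if fieldset and fieldset[0] in "-+":
            collapsible = "closed" if fieldset[0] == "-" else "open"
            title = title[1:]       # strip the -/+ marker from the title
    return {"title": title, "description": description,
            "collapsible": collapsible}

print(parse_fieldset(0))
# {'title': None, 'description': None, 'collapsible': None}
print(parse_fieldset("-Advanced|Rarely needed"))
# {'title': 'Advanced', 'description': 'Rarely needed', 'collapsible': 'closed'}
```

The `int(str(fieldset))` probe is what lets both `0` and `"0"` take the no-fieldset branch.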
def contains_element(self, element):
    """Determines if an element is contained in the priority queue.

    :returns: bool -- true iff element is in the priority queue.
    """
    return (element in self.element_finder) and \
        (self.element_finder[element][1] != self.INVALID)
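The two-part check above suggests lazy deletion: removed entries stay in `element_finder` but are marked `INVALID` rather than popped, so the heap can skip them later. A minimal sketch under that assumption; the class and method names are hypothetical:

```python
class LazyPQ:
    INVALID = object()  # sentinel marking a lazily-deleted entry

    def __init__(self):
        self.element_finder = {}

    def add(self, element, priority):
        self.element_finder[element] = [priority, True]

    def remove(self, element):
        # Mark instead of deleting, so stale heap entries can be skipped.
        self.element_finder[element][1] = self.INVALID

    def contains_element(self, element):
        return (element in self.element_finder) and \
            (self.element_finder[element][1] != self.INVALID)

pq = LazyPQ()
pq.add("job", 5)
print(pq.contains_element("job"))   # True
pq.remove("job")
print(pq.contains_element("job"))   # False
```

Membership therefore means "present and not yet invalidated", which is why the simple `in` check alone would be wrong.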
def GetFirstWrittenEventSource(self):
    """Retrieves the first event source that was written after open.

    Using GetFirstWrittenEventSource and GetNextWrittenEventSource newly
    added event sources can be retrieved in order of addition.

    Returns:
        EventSource: event source or None if there are no newly written
            ones.

    Raises:
        IOError: when the storage writer is closed.
        OSError: when the storage writer is closed.
    """
    if not self._is_open:
        raise IOError('Unable to read from closed storage writer.')

    if self._written_event_source_index >= len(self._event_sources):
        return None

    event_source = self._event_sources[self._first_written_event_source_index]
    self._written_event_source_index = (
        self._first_written_event_source_index + 1)
    return event_source
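The index bookkeeping above (`_first_written_event_source_index` marks where this session's writes begin; `_written_event_source_index` advances as they are consumed) can be modeled in a standalone sketch. All names here are illustrative, not the original class:

```python
class WrittenSourceTracker:
    def __init__(self, existing):
        self._event_sources = list(existing)
        # Writes after open start at the current end of the list.
        self._first_written_index = len(self._event_sources)
        self._written_index = self._first_written_index

    def add(self, event_source):
        self._event_sources.append(event_source)

    def get_first_written(self):
        if self._written_index >= len(self._event_sources):
            return None
        event_source = self._event_sources[self._first_written_index]
        # Reset the cursor to just past the first written source.
        self._written_index = self._first_written_index + 1
        return event_source

    def get_next_written(self):
        if self._written_index >= len(self._event_sources):
            return None
        event_source = self._event_sources[self._written_index]
        self._written_index += 1
        return event_source

tracker = WrittenSourceTracker(existing=["old"])
tracker.add("new1")
tracker.add("new2")
print(tracker.get_first_written())  # new1
print(tracker.get_next_written())   # new2
print(tracker.get_next_written())   # None
```

Sources present before open ("old" above) are never returned; only the session's own writes are iterated, in order of addition.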
def resolve_DID(did, hostport=None, proxy=None):
    """
    Resolve a DID to a public key. This is a multi-step process:
    1. get the name record
    2. get the zone file
    3. parse the zone file to get its URLs (if it's not well-formed, then
       abort)
    4. fetch and authenticate the JWT at each URL (abort if there are none)
    5. extract the public key from the JWT and return that.

    Return {'public_key': ..., 'document': ...} on success
    Return {'error': ...} on error
    """
    assert hostport or proxy, 'Need hostport or proxy'

    did_rec = get_DID_record(did, hostport=hostport, proxy=proxy)
    if 'error' in did_rec:
        log.error("Failed to get DID record for {}: {}".format(did, did_rec['error']))
        return {'error': 'Failed to get DID record: {}'.format(did_rec['error']),
                'http_status': did_rec.get('http_status', 500)}

    if 'value_hash' not in did_rec:
        log.error("DID record for {} has no zone file hash".format(did))
        return {'error': 'No zone file hash in name record for {}'.format(did),
                'http_status': 404}

    zonefile_hash = did_rec['value_hash']
    zonefile_res = get_zonefiles(hostport, [zonefile_hash], proxy=proxy)
    if 'error' in zonefile_res:
        log.error("Failed to get zone file for {} for DID {}: {}".format(
            zonefile_hash, did, zonefile_res['error']))
        return {'error': 'Failed to get zone file for {}'.format(did),
                'http_status': 404}

    zonefile_txt = zonefile_res['zonefiles'][zonefile_hash]
    log.debug("Got {}-byte zone file {}".format(len(zonefile_txt), zonefile_hash))

    try:
        zonefile_data = blockstack_zones.parse_zone_file(zonefile_txt)
        zonefile_data = dict(zonefile_data)
        assert 'uri' in zonefile_data
        if len(zonefile_data['uri']) == 0:
            return {'error': 'No URI records in zone file {} for {}'.format(
                        zonefile_hash, did),
                    'http_status': 404}
    except Exception as e:
        if BLOCKSTACK_TEST:
            log.exception(e)
        return {'error': 'Failed to parse zone file {} for {}'.format(
                    zonefile_hash, did),
                'http_status': 404}

    urls = [uri['target'] for uri in zonefile_data['uri']]
    for url in urls:
        jwt = get_JWT(url, address=str(did_rec['address']))
        if not jwt:
            continue

        if 'payload' not in jwt:
            log.error('Invalid JWT at {}: no payload'.format(url))
            continue

        if 'issuer' not in jwt['payload']:
            log.error('Invalid JWT at {}: no issuer'.format(url))
            continue

        if 'publicKey' not in jwt['payload']['issuer']:
            log.error('Invalid JWT at {}: no public key'.format(url))
            continue

        if 'claim' not in jwt['payload']:
            log.error('Invalid JWT at {}: no claim'.format(url))
            continue

        if not isinstance(jwt['payload'], dict):
            log.error('Invalid JWT at {}: claim is malformed'.format(url))
            continue

        # found!
        public_key = str(jwt['payload']['issuer']['publicKey'])
        document = jwt['payload']['claim']

        # make sure it's a well-formed DID
        document['@context'] = 'https://w3id.org/did/v1'
        document['publicKey'] = [
            {
                'id': did,
                'type': 'secp256k1',
                'publicKeyHex': public_key
            }
        ]

        return {'public_key': public_key, 'document': document}

    log.error("No zone file URLs resolved to a JWT with the public key whose "
              "address is {}".format(did_rec['address']))
    return {'error': 'No public key found for the given DID', 'http_status': 404}
def predict_topk(self, dataset, output_type='probability', k=3,
                 output_frequency='per_row'):
    """
    Return top-k predictions for the ``dataset``, using the trained model.
    Predictions are returned as an SFrame with three columns: `prediction_id`,
    `class`, and `probability` or `rank`, depending on the ``output_type``
    parameter.

    Parameters
    ----------
    dataset : SFrame
        Dataset of new observations. Must include columns with the same
        names as the features and session id used for model training, but
        does not require a target column. Additional columns are ignored.

    output_type : {'probability', 'rank'}, optional
        Choose the return type of the prediction:

        - `probability`: Probability associated with each label in the
          prediction.
        - `rank`: Rank associated with each label in the prediction.

    k : int, optional
        Number of classes to return for each input example.

    output_frequency : {'per_row', 'per_window'}, optional
        The frequency of the predictions which is one of:

        - 'per_row': Each prediction is returned ``prediction_window``
          times.
        - 'per_window': Return a single prediction for each
          ``prediction_window`` rows in ``dataset`` per ``session_id``.

    Returns
    -------
    out : SFrame
        An SFrame with model predictions.

    See Also
    --------
    predict, classify, evaluate

    Examples
    --------
    >>> pred = m.predict_topk(validation_data, k=3)
    >>> pred
    +--------+-------+-------------------+
    | row_id | class |    probability    |
    +--------+-------+-------------------+
    |   0    |   4   |   0.995623886585  |
    |   0    |   9   |  0.0038311756216  |
    |   0    |   7   | 0.000301006948575 |
    |   1    |   1   |   0.928708016872  |
    |   1    |   3   |  0.0440889261663  |
    |   1    |   2   |  0.0176190119237  |
    |   2    |   3   |   0.996967732906  |
    |   2    |   2   |  0.00151345680933 |
    |   2    |   7   | 0.000637513934635 |
    |   3    |   1   |   0.998070061207  |
    |  ...   |  ...  |        ...        |
    +--------+-------+-------------------+
    """
    _tkutl._check_categorical_option_type('output_type', output_type,
                                          ['probability', 'rank'])
    id_target_map = self._id_target_map
    preds = self.predict(
        dataset, output_type='probability_vector',
        output_frequency=output_frequency)

    if output_frequency == 'per_row':
        probs = preds
    elif output_frequency == 'per_window':
        probs = preds['probability_vector']

    if output_type == 'rank':
        probs = probs.apply(lambda p: [
            {'class': id_target_map[i], 'rank': i}
            for i in reversed(_np.argsort(p)[-k:])])
    elif output_type == 'probability':
        probs = probs.apply(lambda p: [
            {'class': id_target_map[i], 'probability': p[i]}
            for i in reversed(_np.argsort(p)[-k:])])

    if output_frequency == 'per_row':
        output = _SFrame({'probs': probs})
        output = output.add_row_number(column_name='row_id')
    elif output_frequency == 'per_window':
        output = _SFrame({
            'probs': probs,
            self.session_id: preds[self.session_id],
            'prediction_id': preds['prediction_id']})

    output = output.stack('probs', new_column_name='probs')
    output = output.unpack('probs', column_name_prefix='')
    return output
Return top-k predictions for the ``dataset``, using the trained model. Predictions are returned as an SFrame with three columns: `prediction_id`, `class`, and `probability`, or `rank`, depending on the ``output_type`` parameter. Parameters ---------- dataset : SFrame Dataset of new observations. Must include columns with the same names as the features and session id used for model training, but does not require a target column. Additional columns are ignored. output_type : {'probability', 'rank'}, optional Choose the return type of the prediction: - `probability`: Probability associated with each label in the prediction. - `rank` : Rank associated with each label in the prediction. k : int, optional Number of classes to return for each input example. output_frequency : {'per_row', 'per_window'}, optional The frequency of the predictions which is one of: - 'per_row': Each prediction is returned ``prediction_window`` times. - 'per_window': Return a single prediction for each ``prediction_window`` rows in ``dataset`` per ``session_id``. Returns ------- out : SFrame An SFrame with model predictions. See Also -------- predict, classify, evaluate Examples -------- >>> pred = m.predict_topk(validation_data, k=3) >>> pred +---------------+-------+-------------------+ | row_id | class | probability | +---------------+-------+-------------------+ | 0 | 4 | 0.995623886585 | | 0 | 9 | 0.0038311756216 | | 0 | 7 | 0.000301006948575 | | 1 | 1 | 0.928708016872 | | 1 | 3 | 0.0440889261663 | | 1 | 2 | 0.0176190119237 | | 2 | 3 | 0.996967732906 | | 2 | 2 | 0.00151345680933 | | 2 | 7 | 0.000637513934635 | | 3 | 1 | 0.998070061207 | | ... | ... | ... | +---------------+-------+-------------------+
Below is the the instruction that describes the task: ### Input: Return top-k predictions for the ``dataset``, using the trained model. Predictions are returned as an SFrame with three columns: `prediction_id`, `class`, and `probability`, or `rank`, depending on the ``output_type`` parameter. Parameters ---------- dataset : SFrame Dataset of new observations. Must include columns with the same names as the features and session id used for model training, but does not require a target column. Additional columns are ignored. output_type : {'probability', 'rank'}, optional Choose the return type of the prediction: - `probability`: Probability associated with each label in the prediction. - `rank` : Rank associated with each label in the prediction. k : int, optional Number of classes to return for each input example. output_frequency : {'per_row', 'per_window'}, optional The frequency of the predictions which is one of: - 'per_row': Each prediction is returned ``prediction_window`` times. - 'per_window': Return a single prediction for each ``prediction_window`` rows in ``dataset`` per ``session_id``. Returns ------- out : SFrame An SFrame with model predictions. See Also -------- predict, classify, evaluate Examples -------- >>> pred = m.predict_topk(validation_data, k=3) >>> pred +---------------+-------+-------------------+ | row_id | class | probability | +---------------+-------+-------------------+ | 0 | 4 | 0.995623886585 | | 0 | 9 | 0.0038311756216 | | 0 | 7 | 0.000301006948575 | | 1 | 1 | 0.928708016872 | | 1 | 3 | 0.0440889261663 | | 1 | 2 | 0.0176190119237 | | 2 | 3 | 0.996967732906 | | 2 | 2 | 0.00151345680933 | | 2 | 7 | 0.000637513934635 | | 3 | 1 | 0.998070061207 | | ... | ... | ... | +---------------+-------+-------------------+ ### Response: def predict_topk(self, dataset, output_type='probability', k=3, output_frequency='per_row'): """ Return top-k predictions for the ``dataset``, using the trained model. 
Predictions are returned as an SFrame with three columns: `prediction_id`, `class`, and `probability`, or `rank`, depending on the ``output_type`` parameter. Parameters ---------- dataset : SFrame Dataset of new observations. Must include columns with the same names as the features and session id used for model training, but does not require a target column. Additional columns are ignored. output_type : {'probability', 'rank'}, optional Choose the return type of the prediction: - `probability`: Probability associated with each label in the prediction. - `rank` : Rank associated with each label in the prediction. k : int, optional Number of classes to return for each input example. output_frequency : {'per_row', 'per_window'}, optional The frequency of the predictions which is one of: - 'per_row': Each prediction is returned ``prediction_window`` times. - 'per_window': Return a single prediction for each ``prediction_window`` rows in ``dataset`` per ``session_id``. Returns ------- out : SFrame An SFrame with model predictions. See Also -------- predict, classify, evaluate Examples -------- >>> pred = m.predict_topk(validation_data, k=3) >>> pred +---------------+-------+-------------------+ | row_id | class | probability | +---------------+-------+-------------------+ | 0 | 4 | 0.995623886585 | | 0 | 9 | 0.0038311756216 | | 0 | 7 | 0.000301006948575 | | 1 | 1 | 0.928708016872 | | 1 | 3 | 0.0440889261663 | | 1 | 2 | 0.0176190119237 | | 2 | 3 | 0.996967732906 | | 2 | 2 | 0.00151345680933 | | 2 | 7 | 0.000637513934635 | | 3 | 1 | 0.998070061207 | | ... | ... | ... 
| +---------------+-------+-------------------+ """ _tkutl._check_categorical_option_type('output_type', output_type, ['probability', 'rank']) id_target_map = self._id_target_map preds = self.predict( dataset, output_type='probability_vector', output_frequency=output_frequency) if output_frequency == 'per_row': probs = preds elif output_frequency == 'per_window': probs = preds['probability_vector'] if output_type == 'rank': probs = probs.apply(lambda p: [ {'class': id_target_map[i], 'rank': i} for i in reversed(_np.argsort(p)[-k:])] ) elif output_type == 'probability': probs = probs.apply(lambda p: [ {'class': id_target_map[i], 'probability': p[i]} for i in reversed(_np.argsort(p)[-k:])] ) if output_frequency == 'per_row': output = _SFrame({'probs': probs}) output = output.add_row_number(column_name='row_id') elif output_frequency == 'per_window': output = _SFrame({ 'probs': probs, self.session_id: preds[self.session_id], 'prediction_id': preds['prediction_id'] }) output = output.stack('probs', new_column_name='probs') output = output.unpack('probs', column_name_prefix='') return output
def full_path(self):
    """
    Return the full path of the local object

    If not local, it will return self.path
    :return: str
    """
    if "local" in self.driver.name.lower():
        return "%s/%s" % (self.container.key, self.path)
    return self.path
Return the full path of the local object If not local, it will return self.path :return: str
Below is the instruction that describes the task:
### Input:
Return the full path of the local object

If not local, it will return self.path
:return: str
### Response:
def full_path(self):
    """
    Return the full path of the local object

    If not local, it will return self.path
    :return: str
    """
    if "local" in self.driver.name.lower():
        return "%s/%s" % (self.container.key, self.path)
    return self.path
def deserialize(self, value, **kwargs): """Return a deserialized value If no deserializer is provided, it uses the deserialize method of the prop corresponding to the value """ kwargs.update({'trusted': kwargs.get('trusted', False)}) if self.deserializer is not None: return self.deserializer(value, **kwargs) if value is None: return None instance_props = [ prop for prop in self.props if isinstance(prop, Instance) ] kwargs = kwargs.copy() kwargs.update({ 'strict': kwargs.get('strict') or self.strict_instances, 'assert_valid': self.strict_instances, }) if isinstance(value, dict) and value.get('__class__'): clsname = value.get('__class__') for prop in instance_props: if clsname == prop.instance_class.__name__: return prop.deserialize(value, **kwargs) for prop in self.props: try: out_val = prop.deserialize(value, **kwargs) prop.validate(None, out_val) return out_val except GENERIC_ERRORS: continue return self.from_json(value, **kwargs)
Return a deserialized value If no deserializer is provided, it uses the deserialize method of the prop corresponding to the value
Below is the instruction that describes the task:
### Input:
Return a deserialized value

If no deserializer is provided, it uses the deserialize method of the
prop corresponding to the value
### Response:
def deserialize(self, value, **kwargs):
    """Return a deserialized value

    If no deserializer is provided, it uses the deserialize method of the
    prop corresponding to the value
    """
    kwargs.update({'trusted': kwargs.get('trusted', False)})
    if self.deserializer is not None:
        return self.deserializer(value, **kwargs)
    if value is None:
        return None
    instance_props = [
        prop for prop in self.props if isinstance(prop, Instance)
    ]
    kwargs = kwargs.copy()
    kwargs.update({
        'strict': kwargs.get('strict') or self.strict_instances,
        'assert_valid': self.strict_instances,
    })
    if isinstance(value, dict) and value.get('__class__'):
        clsname = value.get('__class__')
        for prop in instance_props:
            if clsname == prop.instance_class.__name__:
                return prop.deserialize(value, **kwargs)
    for prop in self.props:
        try:
            out_val = prop.deserialize(value, **kwargs)
            prop.validate(None, out_val)
            return out_val
        except GENERIC_ERRORS:
            continue
    return self.from_json(value, **kwargs)
def get_explicit_resnorms(self, indices=None): '''Explicitly computes the Ritz residual norms.''' res = self.get_explicit_residual(indices) # apply preconditioner linear_system = self._deflated_solver.linear_system Mres = linear_system.M * res # compute norms resnorms = numpy.zeros(res.shape[1]) for i in range(resnorms.shape[0]): resnorms[i] = utils.norm(res[:, [i]], Mres[:, [i]], ip_B=linear_system.ip_B) return resnorms
Explicitly computes the Ritz residual norms.
Below is the instruction that describes the task:
### Input:
Explicitly computes the Ritz residual norms.
### Response:
def get_explicit_resnorms(self, indices=None):
    '''Explicitly computes the Ritz residual norms.'''
    res = self.get_explicit_residual(indices)

    # apply preconditioner
    linear_system = self._deflated_solver.linear_system
    Mres = linear_system.M * res

    # compute norms
    resnorms = numpy.zeros(res.shape[1])
    for i in range(resnorms.shape[0]):
        resnorms[i] = utils.norm(res[:, [i]], Mres[:, [i]],
                                 ip_B=linear_system.ip_B)
    return resnorms
def many(p):
    """Parser(a, b) -> Parser(a, [b])

    Returns a parser that infinitely applies the parser p to the input
    sequence of tokens while it successfully parses them. The resulting
    parser returns a list of parsed values.
    """
    @Parser
    def _many(tokens, s):
        """Iterative implementation preventing the stack overflow."""
        res = []
        try:
            while True:
                (v, s) = p.run(tokens, s)
                res.append(v)
        except NoParseError as e:
            return res, State(s.pos, e.state.max)
    _many.name = u'{ %s }' % p.name
    return _many
Parser(a, b) -> Parser(a, [b])

Returns a parser that infinitely applies the parser p to the input
sequence of tokens while it successfully parses them. The resulting
parser returns a list of parsed values.
Below is the instruction that describes the task:
### Input:
Parser(a, b) -> Parser(a, [b])

Returns a parser that infinitely applies the parser p to the input
sequence of tokens while it successfully parses them. The resulting
parser returns a list of parsed values.
### Response:
def many(p):
    """Parser(a, b) -> Parser(a, [b])

    Returns a parser that infinitely applies the parser p to the input
    sequence of tokens while it successfully parses them. The resulting
    parser returns a list of parsed values.
    """
    @Parser
    def _many(tokens, s):
        """Iterative implementation preventing the stack overflow."""
        res = []
        try:
            while True:
                (v, s) = p.run(tokens, s)
                res.append(v)
        except NoParseError as e:
            return res, State(s.pos, e.state.max)
    _many.name = u'{ %s }' % p.name
    return _many
def access_token(self): """Get access_token.""" if self.cache_token: return self.access_token_ or \ self._resolve_credential('access_token') return self.access_token_
Get access_token.
Below is the instruction that describes the task:
### Input:
Get access_token.
### Response:
def access_token(self):
    """Get access_token."""
    if self.cache_token:
        return self.access_token_ or \
            self._resolve_credential('access_token')
    return self.access_token_
def default_route(family=None): ''' Return default route(s) from routing table .. versionchanged:: 2015.8.0 Added support for SunOS (Solaris 10, Illumos, SmartOS) .. versionchanged:: 2016.11.4 Added support for AIX CLI Example: .. code-block:: bash salt '*' network.default_route ''' if family != 'inet' and family != 'inet6' and family is not None: raise CommandExecutionError('Invalid address family {0}'.format(family)) _routes = routes() default_route = {} if __grains__['kernel'] == 'Linux': default_route['inet'] = ['0.0.0.0', 'default'] default_route['inet6'] = ['::/0', 'default'] elif __grains__['os'] in ['FreeBSD', 'NetBSD', 'OpenBSD', 'MacOS', 'Darwin'] or \ __grains__['kernel'] in ('SunOS', 'AIX'): default_route['inet'] = ['default'] default_route['inet6'] = ['default'] else: raise CommandExecutionError('Not yet supported on this platform') ret = [] for route in _routes: if family: if route['destination'] in default_route[family]: if __grains__['kernel'] == 'SunOS' and route['addr_family'] != family: continue ret.append(route) else: if route['destination'] in default_route['inet'] or \ route['destination'] in default_route['inet6']: ret.append(route) return ret
Return default route(s) from routing table .. versionchanged:: 2015.8.0 Added support for SunOS (Solaris 10, Illumos, SmartOS) .. versionchanged:: 2016.11.4 Added support for AIX CLI Example: .. code-block:: bash salt '*' network.default_route
Below is the the instruction that describes the task: ### Input: Return default route(s) from routing table .. versionchanged:: 2015.8.0 Added support for SunOS (Solaris 10, Illumos, SmartOS) .. versionchanged:: 2016.11.4 Added support for AIX CLI Example: .. code-block:: bash salt '*' network.default_route ### Response: def default_route(family=None): ''' Return default route(s) from routing table .. versionchanged:: 2015.8.0 Added support for SunOS (Solaris 10, Illumos, SmartOS) .. versionchanged:: 2016.11.4 Added support for AIX CLI Example: .. code-block:: bash salt '*' network.default_route ''' if family != 'inet' and family != 'inet6' and family is not None: raise CommandExecutionError('Invalid address family {0}'.format(family)) _routes = routes() default_route = {} if __grains__['kernel'] == 'Linux': default_route['inet'] = ['0.0.0.0', 'default'] default_route['inet6'] = ['::/0', 'default'] elif __grains__['os'] in ['FreeBSD', 'NetBSD', 'OpenBSD', 'MacOS', 'Darwin'] or \ __grains__['kernel'] in ('SunOS', 'AIX'): default_route['inet'] = ['default'] default_route['inet6'] = ['default'] else: raise CommandExecutionError('Not yet supported on this platform') ret = [] for route in _routes: if family: if route['destination'] in default_route[family]: if __grains__['kernel'] == 'SunOS' and route['addr_family'] != family: continue ret.append(route) else: if route['destination'] in default_route['inet'] or \ route['destination'] in default_route['inet6']: ret.append(route) return ret
def batch_gen(data, batch_size): ''' Usage:: for batch in batch_gen(iter, 100): do_something(batch) ''' data = data or [] for i in range(0, len(data), batch_size): yield data[i:i + batch_size]
Usage:: for batch in batch_gen(iter, 100): do_something(batch)
Below is the instruction that describes the task:
### Input:
Usage::

    for batch in batch_gen(iter, 100):
        do_something(batch)
### Response:
def batch_gen(data, batch_size):
    '''
    Usage::

        for batch in batch_gen(iter, 100):
            do_something(batch)
    '''
    data = data or []
    for i in range(0, len(data), batch_size):
        yield data[i:i + batch_size]
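As a quick illustration of the record above, a minimal standalone copy of `batch_gen` can be exercised like this (the list values are made up for the example):

```python
def batch_gen(data, batch_size):
    # Yield successive batch_size-sized slices of data;
    # an empty or None input yields nothing.
    data = data or []
    for i in range(0, len(data), batch_size):
        yield data[i:i + batch_size]

# The final batch is simply shorter when len(data) is not a multiple.
batches = list(batch_gen([1, 2, 3, 4, 5], 2))
print(batches)  # → [[1, 2], [3, 4], [5]]
```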
def parseFilename(filename): """ Parse out filename from any specified extensions. Returns rootname and string version of extension name. """ # Parse out any extension specified in filename _indx = filename.find('[') if _indx > 0: # Read extension name provided _fname = filename[:_indx] _extn = filename[_indx + 1:-1] else: _fname = filename _extn = None return _fname, _extn
Parse out filename from any specified extensions. Returns rootname and string version of extension name.
Below is the instruction that describes the task:
### Input:
Parse out filename from any specified extensions.
Returns rootname and string version of extension name.
### Response:
def parseFilename(filename):
    """
    Parse out filename from any specified extensions.
    Returns rootname and string version of extension name.
    """
    # Parse out any extension specified in filename
    _indx = filename.find('[')

    if _indx > 0:
        # Read extension name provided
        _fname = filename[:_indx]
        _extn = filename[_indx + 1:-1]
    else:
        _fname = filename
        _extn = None

    return _fname, _extn
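A standalone sketch of the same bracket-parsing logic shows both paths; the FITS-style filename is just an illustrative example, not from the source:

```python
def parse_filename(filename):
    # Split "root[extension]" into its parts; no brackets → extension is None.
    idx = filename.find('[')
    if idx > 0:
        return filename[:idx], filename[idx + 1:-1]
    return filename, None

print(parse_filename('image.fits[sci,1]'))  # → ('image.fits', 'sci,1')
print(parse_filename('image.fits'))         # → ('image.fits', None)
```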
def create_multiple_bar_chart(self, x_labels, mul_y_values, mul_y_labels, normalize=False): """Creates bar chart with multiple lines :param x_labels: Names for each variable :param mul_y_values: list of values of x labels :param mul_y_labels: list of labels for each y value :param normalize: True iff you want to normalize each y series :return: Bar chart """ self.setup(0.25) ax1 = self.get_ax() ax1.set_xticks(list(range(len(x_labels)))) ax1.set_xticklabels([x_labels[i] for i in range(len(x_labels))], rotation=90) y_counts = len(mul_y_values) colors = cm.rainbow(np.linspace(0, 1, y_counts)) # different colors max_bar_width = 0.6 bar_width = max_bar_width / y_counts # width of each bar x_shifts = np.linspace(0, max_bar_width, y_counts) - max_bar_width * 0.5 # center in 0 ax_series = [] for i in range(y_counts): x_pos = range(len(x_labels)) # x points x_pos = np.array(x_pos) + x_shifts[i] # shift for each y series if normalize: # normalize array y_values = normalize_array(mul_y_values[i]) else: y_values = mul_y_values[i] ax_series.append( ax1.bar( x_pos, y_values, width=bar_width, align="center", color=colors[i] ) ) ax1.legend(ax_series, mul_y_labels) return ax1
Creates bar chart with multiple lines :param x_labels: Names for each variable :param mul_y_values: list of values of x labels :param mul_y_labels: list of labels for each y value :param normalize: True iff you want to normalize each y series :return: Bar chart
Below is the the instruction that describes the task: ### Input: Creates bar chart with multiple lines :param x_labels: Names for each variable :param mul_y_values: list of values of x labels :param mul_y_labels: list of labels for each y value :param normalize: True iff you want to normalize each y series :return: Bar chart ### Response: def create_multiple_bar_chart(self, x_labels, mul_y_values, mul_y_labels, normalize=False): """Creates bar chart with multiple lines :param x_labels: Names for each variable :param mul_y_values: list of values of x labels :param mul_y_labels: list of labels for each y value :param normalize: True iff you want to normalize each y series :return: Bar chart """ self.setup(0.25) ax1 = self.get_ax() ax1.set_xticks(list(range(len(x_labels)))) ax1.set_xticklabels([x_labels[i] for i in range(len(x_labels))], rotation=90) y_counts = len(mul_y_values) colors = cm.rainbow(np.linspace(0, 1, y_counts)) # different colors max_bar_width = 0.6 bar_width = max_bar_width / y_counts # width of each bar x_shifts = np.linspace(0, max_bar_width, y_counts) - max_bar_width * 0.5 # center in 0 ax_series = [] for i in range(y_counts): x_pos = range(len(x_labels)) # x points x_pos = np.array(x_pos) + x_shifts[i] # shift for each y series if normalize: # normalize array y_values = normalize_array(mul_y_values[i]) else: y_values = mul_y_values[i] ax_series.append( ax1.bar( x_pos, y_values, width=bar_width, align="center", color=colors[i] ) ) ax1.legend(ax_series, mul_y_labels) return ax1
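`normalize_array` is not defined anywhere in the `create_multiple_bar_chart` snippet above; a common min-max implementation it may correspond to (an assumption, not necessarily the project's actual helper) is:

```python
import numpy as np

def normalize_array(values):
    # Min-max scale a sequence into [0, 1]; a constant array maps to zeros
    # to avoid division by zero. This helper is a guess at the undefined
    # normalize_array used by create_multiple_bar_chart.
    arr = np.asarray(values, dtype=float)
    span = arr.max() - arr.min()
    if span == 0:
        return np.zeros_like(arr)
    return (arr - arr.min()) / span

print(normalize_array([0, 5, 10]))  # → [0.  0.5 1. ]
```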
def relookup(self, pattern): """ Dictionary lookup with a regular expression. Return pairs whose key matches pattern. """ key = re.compile(pattern) return filter(lambda x : key.match(x[0]), self.data.items())
Dictionary lookup with a regular expression. Return pairs whose key matches pattern.
Below is the instruction that describes the task:
### Input:
Dictionary lookup with a regular expression.
Return pairs whose key matches pattern.
### Response:
def relookup(self, pattern):
    """
    Dictionary lookup with a regular expression.
    Return pairs whose key matches pattern.
    """
    key = re.compile(pattern)
    return filter(lambda x: key.match(x[0]), self.data.items())
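The same regex-keyed lookup works on any plain dict; note that `re.match` anchors at the start of the key, and in Python 3 `filter` returns a lazy iterator (the sketch below materializes a list instead, and the sample data is invented for the example):

```python
import re

def relookup(data, pattern):
    # Return (key, value) pairs whose key matches pattern from its start.
    key = re.compile(pattern)
    return [item for item in data.items() if key.match(item[0])]

inventory = {'apple': 1, 'apricot': 2, 'banana': 3}
print(relookup(inventory, 'ap'))  # → [('apple', 1), ('apricot', 2)]
```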
def set_target(self, target: EventDispatcherBase) -> None: """ This method should be called by the event dispatcher that dispatches this event to set its target property. Args: target (EventDispatcherBase): The event dispatcher that will dispatch this event. Raises: PermissionError: If the target property of the event has already been set. TypeError: If `target` is not an `EventDispatcherBase` instance. """ if self._target is not None: raise PermissionError("The target property already has a valid value.") if not isinstance(target, EventDispatcherBase): raise TypeError("Invalid target type: {}".format(target)) self._target = target
This method should be called by the event dispatcher that dispatches this event to set its target property. Args: target (EventDispatcherBase): The event dispatcher that will dispatch this event. Raises: PermissionError: If the target property of the event has already been set. TypeError: If `target` is not an `EventDispatcherBase` instance.
Below is the instruction that describes the task:
### Input:
This method should be called by the event dispatcher that dispatches this event
to set its target property.

Args:
    target (EventDispatcherBase): The event dispatcher that will dispatch this event.

Raises:
    PermissionError: If the target property of the event has already been set.
    TypeError: If `target` is not an `EventDispatcherBase` instance.
### Response:
def set_target(self, target: EventDispatcherBase) -> None:
    """
    This method should be called by the event dispatcher that dispatches this event
    to set its target property.

    Args:
        target (EventDispatcherBase): The event dispatcher that will dispatch this event.

    Raises:
        PermissionError: If the target property of the event has already been set.
        TypeError: If `target` is not an `EventDispatcherBase` instance.
    """
    if self._target is not None:
        raise PermissionError("The target property already has a valid value.")

    if not isinstance(target, EventDispatcherBase):
        raise TypeError("Invalid target type: {}".format(target))

    self._target = target
def cast_to_a1_notation(method): """ Decorator function casts wrapped arguments to A1 notation in range method calls. """ @wraps(method) def wrapper(self, *args, **kwargs): try: if len(args): int(args[0]) # Convert to A1 notation range_start = rowcol_to_a1(*args[:2]) range_end = rowcol_to_a1(*args[-2:]) range_name = ':'.join((range_start, range_end)) args = (range_name,) + args[4:] except ValueError: pass return method(self, *args, **kwargs) return wrapper
Decorator function casts wrapped arguments to A1 notation in range method calls.
Below is the instruction that describes the task:
### Input:
Decorator function casts wrapped arguments to A1 notation
in range method calls.
### Response:
def cast_to_a1_notation(method):
    """
    Decorator function casts wrapped arguments to A1 notation
    in range method calls.
    """
    @wraps(method)
    def wrapper(self, *args, **kwargs):
        try:
            if len(args):
                int(args[0])

            # Convert to A1 notation
            range_start = rowcol_to_a1(*args[:2])
            range_end = rowcol_to_a1(*args[-2:])
            range_name = ':'.join((range_start, range_end))

            args = (range_name,) + args[4:]
        except ValueError:
            pass

        return method(self, *args, **kwargs)

    return wrapper
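`rowcol_to_a1` comes from elsewhere in the library and is not shown in the record above; a minimal version of the usual base-26 conversion (a sketch, not necessarily the library's exact code) is:

```python
def rowcol_to_a1(row, col):
    # Convert 1-based (row, col) to spreadsheet A1 notation.
    # Columns use bijective base-26: 1 → 'A', 26 → 'Z', 27 → 'AA', 28 → 'AB'.
    letters = ''
    while col:
        col, rem = divmod(col - 1, 26)
        letters = chr(ord('A') + rem) + letters
    return '%s%s' % (letters, row)

print(rowcol_to_a1(1, 1))    # → A1
print(rowcol_to_a1(10, 28))  # → AB10
```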
def linear_cmap(field_name, palette, low, high, low_color=None, high_color=None, nan_color="gray"):
    ''' Create a ``DataSpec`` dict that applies a client-side
    ``LinearColorMapper`` transformation to a ``ColumnDataSource`` column.

    Args:
        field_name (str) : a field name to configure ``DataSpec`` with

        palette (seq[color]) : a list of colors to use for colormapping

        low (float) : a minimum value of the range to map into the palette.
            Values below this are clamped to ``low``.

        high (float) : a maximum value of the range to map into the palette.
            Values above this are clamped to ``high``.

        low_color (color, optional) : color to be used if data is lower than ``low`` value.
            If None, values lower than ``low`` are mapped to the first color in the palette.
            (default: None)

        high_color (color, optional) : color to be used if data is higher than ``high`` value.
            If None, values higher than ``high`` are mapped to the last color in the palette.
            (default: None)

        nan_color (color, optional) : a default color to use when mapping data from a column
            does not succeed (default: "gray")
    '''
    return field(field_name, LinearColorMapper(palette=palette, low=low, high=high,
                                               nan_color=nan_color, low_color=low_color,
                                               high_color=high_color))
Create a ``DataSpec`` dict that applies a client-side ``LinearColorMapper``
transformation to a ``ColumnDataSource`` column.

Args:
    field_name (str) : a field name to configure ``DataSpec`` with

    palette (seq[color]) : a list of colors to use for colormapping

    low (float) : a minimum value of the range to map into the palette.
        Values below this are clamped to ``low``.

    high (float) : a maximum value of the range to map into the palette.
        Values above this are clamped to ``high``.

    low_color (color, optional) : color to be used if data is lower than ``low`` value.
        If None, values lower than ``low`` are mapped to the first color in the palette.
        (default: None)

    high_color (color, optional) : color to be used if data is higher than ``high`` value.
        If None, values higher than ``high`` are mapped to the last color in the palette.
        (default: None)

    nan_color (color, optional) : a default color to use when mapping data from a column
        does not succeed (default: "gray")
Below is the the instruction that describes the task: ### Input: Create a ``DataSpec`` dict that applyies a client-side ``LinearColorMapper`` transformation to a ``ColumnDataSource`` column. Args: field_name (str) : a field name to configure ``DataSpec`` with palette (seq[color]) : a list of colors to use for colormapping low (float) : a minimum value of the range to map into the palette. Values below this are clamped to ``low``. high (float) : a maximum value of the range to map into the palette. Values above this are clamped to ``high``. low_color (color, optional) : color to be used if data is lower than ``low`` value. If None, values lower than ``low`` are mapped to the first color in the palette. (default: None) high_color (color, optional) : color to be used if data is higher than ``high`` value. If None, values higher than ``high`` are mapped to the last color in the palette. (default: None) nan_color (color, optional) : a default color to use when mapping data from a column does not succeed (default: "gray") ### Response: def linear_cmap(field_name, palette, low, high, low_color=None, high_color=None, nan_color="gray"): ''' Create a ``DataSpec`` dict that applyies a client-side ``LinearColorMapper`` transformation to a ``ColumnDataSource`` column. Args: field_name (str) : a field name to configure ``DataSpec`` with palette (seq[color]) : a list of colors to use for colormapping low (float) : a minimum value of the range to map into the palette. Values below this are clamped to ``low``. high (float) : a maximum value of the range to map into the palette. Values above this are clamped to ``high``. low_color (color, optional) : color to be used if data is lower than ``low`` value. If None, values lower than ``low`` are mapped to the first color in the palette. (default: None) high_color (color, optional) : color to be used if data is higher than ``high`` value. If None, values higher than ``high`` are mapped to the last color in the palette. 
(default: None) nan_color (color, optional) : a default color to use when mapping data from a column does not succeed (default: "gray") ''' return field(field_name, LinearColorMapper(palette=palette, low=low, high=high, nan_color=nan_color, low_color=low_color, high_color=high_color))
def connect_all(state): ''' Connect to all the configured servers in parallel. Reads/writes state.inventory. Args: state (``pyinfra.api.State`` obj): the state containing an inventory to connect to ''' hosts = [ host for host in state.inventory if state.is_host_in_limit(host) ] greenlet_to_host = { state.pool.spawn(host.connect, state): host for host in hosts } with progress_spinner(greenlet_to_host.values()) as progress: for greenlet in gevent.iwait(greenlet_to_host.keys()): host = greenlet_to_host[greenlet] progress(host) # Get/set the results failed_hosts = set() for greenlet, host in six.iteritems(greenlet_to_host): # Raise any unexpected exception greenlet.get() if host.connection: state.activate_host(host) else: failed_hosts.add(host) # Remove those that failed, triggering FAIL_PERCENT check state.fail_hosts(failed_hosts, activated_count=len(hosts))
Connect to all the configured servers in parallel. Reads/writes state.inventory. Args: state (``pyinfra.api.State`` obj): the state containing an inventory to connect to
Below is the instruction that describes the task:
### Input:
Connect to all the configured servers in parallel. Reads/writes state.inventory.

Args:
    state (``pyinfra.api.State`` obj): the state containing an inventory to connect to
### Response:
def connect_all(state):
    '''
    Connect to all the configured servers in parallel. Reads/writes state.inventory.

    Args:
        state (``pyinfra.api.State`` obj): the state containing an inventory to connect to
    '''
    hosts = [
        host for host in state.inventory
        if state.is_host_in_limit(host)
    ]

    greenlet_to_host = {
        state.pool.spawn(host.connect, state): host
        for host in hosts
    }

    with progress_spinner(greenlet_to_host.values()) as progress:
        for greenlet in gevent.iwait(greenlet_to_host.keys()):
            host = greenlet_to_host[greenlet]
            progress(host)

    # Get/set the results
    failed_hosts = set()

    for greenlet, host in six.iteritems(greenlet_to_host):
        # Raise any unexpected exception
        greenlet.get()

        if host.connection:
            state.activate_host(host)
        else:
            failed_hosts.add(host)

    # Remove those that failed, triggering FAIL_PERCENT check
    state.fail_hosts(failed_hosts, activated_count=len(hosts))
def write_single_file(args, base_dir, crawler): """Write to a single output file and/or subdirectory.""" if args['urls'] and args['html']: # Create a directory to save PART.html files in domain = utils.get_domain(args['urls'][0]) if not args['quiet']: print('Storing html files in {0}/'.format(domain)) utils.mkdir_and_cd(domain) infilenames = [] for query in args['query']: if query in args['files']: infilenames.append(query) elif query.strip('/') in args['urls']: if args['crawl'] or args['crawl_all']: # Crawl and save HTML files/image files to disk infilenames += crawler.crawl_links(query) else: raw_resp = utils.get_raw_resp(query) if raw_resp is None: return False prev_part_num = utils.get_num_part_files() utils.write_part_file(args, query, raw_resp) curr_part_num = prev_part_num + 1 infilenames += utils.get_part_filenames(curr_part_num, prev_part_num) # Convert output or leave as PART.html files if args['html']: # HTML files have been written already, so return to base directory os.chdir(base_dir) else: # Write files to text or pdf if infilenames: if args['out']: outfilename = args['out'][0] else: outfilename = utils.get_single_outfilename(args) if outfilename: write_files(args, infilenames, outfilename) else: utils.remove_part_files() return True
Write to a single output file and/or subdirectory.
Below is the the instruction that describes the task: ### Input: Write to a single output file and/or subdirectory. ### Response: def write_single_file(args, base_dir, crawler): """Write to a single output file and/or subdirectory.""" if args['urls'] and args['html']: # Create a directory to save PART.html files in domain = utils.get_domain(args['urls'][0]) if not args['quiet']: print('Storing html files in {0}/'.format(domain)) utils.mkdir_and_cd(domain) infilenames = [] for query in args['query']: if query in args['files']: infilenames.append(query) elif query.strip('/') in args['urls']: if args['crawl'] or args['crawl_all']: # Crawl and save HTML files/image files to disk infilenames += crawler.crawl_links(query) else: raw_resp = utils.get_raw_resp(query) if raw_resp is None: return False prev_part_num = utils.get_num_part_files() utils.write_part_file(args, query, raw_resp) curr_part_num = prev_part_num + 1 infilenames += utils.get_part_filenames(curr_part_num, prev_part_num) # Convert output or leave as PART.html files if args['html']: # HTML files have been written already, so return to base directory os.chdir(base_dir) else: # Write files to text or pdf if infilenames: if args['out']: outfilename = args['out'][0] else: outfilename = utils.get_single_outfilename(args) if outfilename: write_files(args, infilenames, outfilename) else: utils.remove_part_files() return True
def read_all(filename):
    """
    Reads the serialized objects from disk. Caller must wrap objects in appropriate Python wrapper classes.

    :param filename: the file with the serialized objects
    :type filename: str
    :return: the list of JB_Objects
    :rtype: list
    """
    array = javabridge.static_call(
        "Lweka/core/SerializationHelper;", "readAll",
        "(Ljava/lang/String;)[Ljava/lang/Object;",
        filename)
    if array is None:
        return None
    else:
        return javabridge.get_env().get_object_array_elements(array)
Reads the serialized objects from disk. Caller must wrap objects in appropriate Python wrapper classes.

:param filename: the file with the serialized objects
:type filename: str
:return: the list of JB_Objects
:rtype: list
Below is the instruction that describes the task:
### Input:
Reads the serialized objects from disk. Caller must wrap objects in appropriate Python wrapper classes.

:param filename: the file with the serialized objects
:type filename: str
:return: the list of JB_Objects
:rtype: list
### Response:
def read_all(filename):
    """
    Reads the serialized objects from disk. Caller must wrap objects in appropriate Python wrapper classes.

    :param filename: the file with the serialized objects
    :type filename: str
    :return: the list of JB_Objects
    :rtype: list
    """
    array = javabridge.static_call(
        "Lweka/core/SerializationHelper;", "readAll",
        "(Ljava/lang/String;)[Ljava/lang/Object;",
        filename)
    if array is None:
        return None
    else:
        return javabridge.get_env().get_object_array_elements(array)
def abort(self): """Terminate audio processing immediately. This does not wait for pending audio buffers. If successful, the stream is considered inactive. """ err = _pa.Pa_AbortStream(self._stream) if err == _pa.paStreamIsStopped: return self._handle_error(err)
Terminate audio processing immediately. This does not wait for pending audio buffers. If successful, the stream is considered inactive.
Below is the the instruction that describes the task: ### Input: Terminate audio processing immediately. This does not wait for pending audio buffers. If successful, the stream is considered inactive. ### Response: def abort(self): """Terminate audio processing immediately. This does not wait for pending audio buffers. If successful, the stream is considered inactive. """ err = _pa.Pa_AbortStream(self._stream) if err == _pa.paStreamIsStopped: return self._handle_error(err)
def move_asset_behind(self, asset_id, composition_id, reference_id):
    """Reorders assets in a composition by moving the specified asset behind a reference asset.

    arg: asset_id (osid.id.Id): ``Id`` of the ``Asset``
    arg: composition_id (osid.id.Id): ``Id`` of the ``Composition``
    arg: reference_id (osid.id.Id): ``Id`` of the reference ``Asset``
    raise: NotFound - ``asset_id`` or ``reference_id`` ``not found in composition_id``
    raise: NullArgument - ``asset_id, reference_id`` or ``composition_id`` is ``null``
    raise: OperationFailed - unable to complete request
    raise: PermissionDenied - authorization failure
    *compliance: mandatory -- This method must be implemented.*

    """
    # _provider_session.move_asset_behind is a bound method; passing self
    # explicitly would shift every argument by one.
    self._provider_session.move_asset_behind(asset_id, composition_id, reference_id)
Reorders assets in a composition by moving the specified asset behind a reference asset.

arg: asset_id (osid.id.Id): ``Id`` of the ``Asset``
arg: composition_id (osid.id.Id): ``Id`` of the ``Composition``
arg: reference_id (osid.id.Id): ``Id`` of the reference ``Asset``
raise: NotFound - ``asset_id`` or ``reference_id`` ``not found in composition_id``
raise: NullArgument - ``asset_id, reference_id`` or ``composition_id`` is ``null``
raise: OperationFailed - unable to complete request
raise: PermissionDenied - authorization failure
*compliance: mandatory -- This method must be implemented.*
Below is the instruction that describes the task:
### Input:
Reorders assets in a composition by moving the specified asset behind a reference asset.

arg: asset_id (osid.id.Id): ``Id`` of the ``Asset``
arg: composition_id (osid.id.Id): ``Id`` of the ``Composition``
arg: reference_id (osid.id.Id): ``Id`` of the reference ``Asset``
raise: NotFound - ``asset_id`` or ``reference_id`` ``not found in composition_id``
raise: NullArgument - ``asset_id, reference_id`` or ``composition_id`` is ``null``
raise: OperationFailed - unable to complete request
raise: PermissionDenied - authorization failure
*compliance: mandatory -- This method must be implemented.*
### Response:
def move_asset_behind(self, asset_id, composition_id, reference_id):
    """Reorders assets in a composition by moving the specified asset behind a reference asset.

    arg: asset_id (osid.id.Id): ``Id`` of the ``Asset``
    arg: composition_id (osid.id.Id): ``Id`` of the ``Composition``
    arg: reference_id (osid.id.Id): ``Id`` of the reference ``Asset``
    raise: NotFound - ``asset_id`` or ``reference_id`` ``not found in composition_id``
    raise: NullArgument - ``asset_id, reference_id`` or ``composition_id`` is ``null``
    raise: OperationFailed - unable to complete request
    raise: PermissionDenied - authorization failure
    *compliance: mandatory -- This method must be implemented.*

    """
    # _provider_session.move_asset_behind is a bound method; passing self
    # explicitly would shift every argument by one.
    self._provider_session.move_asset_behind(asset_id, composition_id, reference_id)
def get_gdal_options(opts, is_remote=False): """ Return a merged set of custom and default GDAL/rasterio Env options. If is_remote is set to True, the default GDAL_HTTP_OPTS are appended. Parameters ---------- opts : dict or None Explicit GDAL options. is_remote : bool Indicate whether Env is for a remote file. Returns ------- dictionary """ user_opts = {} if opts is None else dict(**opts) if is_remote: return dict(GDAL_HTTP_OPTS, **user_opts) else: return user_opts
Return a merged set of custom and default GDAL/rasterio Env options. If is_remote is set to True, the default GDAL_HTTP_OPTS are appended. Parameters ---------- opts : dict or None Explicit GDAL options. is_remote : bool Indicate whether Env is for a remote file. Returns ------- dictionary
Below is the the instruction that describes the task: ### Input: Return a merged set of custom and default GDAL/rasterio Env options. If is_remote is set to True, the default GDAL_HTTP_OPTS are appended. Parameters ---------- opts : dict or None Explicit GDAL options. is_remote : bool Indicate whether Env is for a remote file. Returns ------- dictionary ### Response: def get_gdal_options(opts, is_remote=False): """ Return a merged set of custom and default GDAL/rasterio Env options. If is_remote is set to True, the default GDAL_HTTP_OPTS are appended. Parameters ---------- opts : dict or None Explicit GDAL options. is_remote : bool Indicate whether Env is for a remote file. Returns ------- dictionary """ user_opts = {} if opts is None else dict(**opts) if is_remote: return dict(GDAL_HTTP_OPTS, **user_opts) else: return user_opts
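The merge in `get_gdal_options` relies on `dict(defaults, **overrides)` precedence: keyword overrides win over the positional mapping. A minimal self-contained sketch of that behavior — the `GDAL_HTTP_OPTS` values here are placeholders for illustration, not the library's actual defaults:

```python
# Hypothetical defaults standing in for the module-level GDAL_HTTP_OPTS.
GDAL_HTTP_OPTS = {"GDAL_HTTP_TIMEOUT": 30, "GDAL_HTTP_MAX_RETRY": 3}


def get_gdal_options(opts, is_remote=False):
    user_opts = {} if opts is None else dict(**opts)
    if is_remote:
        # dict(a, **b): keys from b override keys from a
        return dict(GDAL_HTTP_OPTS, **user_opts)
    return user_opts


merged = get_gdal_options({"GDAL_HTTP_TIMEOUT": 5}, is_remote=True)
local = get_gdal_options(None)
```

A user-supplied `GDAL_HTTP_TIMEOUT` thus replaces the default, while untouched defaults survive in the merged result.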
def plot_zt_dop(self, temps='all', output='average', relaxation_time=1e-14):
    """
    Plot the figure of merit zT as a function of doping levels for different
    temperatures.

    Args:
        temps: the default 'all' plots all the temperatures in the analyzer.
            Specify a list of temperatures if you want to plot only some.
        output: with 'average' you get an average of the three directions
            with 'eigs' you get all the three directions.
        relaxation_time: specify a constant relaxation time value

    Returns:
        a matplotlib object
    """
    import matplotlib.pyplot as plt
    if output == 'average':
        zt = self._bz.get_zt(relaxation_time=relaxation_time, output='average')
    elif output == 'eigs':
        zt = self._bz.get_zt(relaxation_time=relaxation_time, output='eigs')

    tlist = sorted(zt['n'].keys()) if temps == 'all' else temps
    plt.figure(figsize=(22, 14))
    for i, dt in enumerate(['n', 'p']):
        plt.subplot(121 + i)
        for temp in tlist:
            if output == 'eigs':
                for xyz in range(3):
                    # zip() returns an iterator in Python 3; materialize it
                    # before indexing.
                    plt.semilogx(self._bz.doping[dt],
                                 list(zip(*zt[dt][temp]))[xyz],
                                 marker='s',
                                 label=str(xyz) + ' ' + str(temp) + ' K')
            elif output == 'average':
                plt.semilogx(self._bz.doping[dt], zt[dt][temp],
                             marker='s', label=str(temp) + ' K')
        plt.title(dt + '-type', fontsize=20)
        if i == 0:
            plt.ylabel("zT", fontsize=30.0)
        plt.xlabel('Doping concentration ($cm^{-3}$)', fontsize=30.0)
        p = 'lower right' if i == 0 else ''
        plt.legend(loc=p, fontsize=15)
        plt.grid()
        plt.xticks(fontsize=25)
        plt.yticks(fontsize=25)

    plt.tight_layout()
    return plt
Plot the figure of merit zT as a function of doping levels for different
temperatures.

Args:
    temps: the default 'all' plots all the temperatures in the analyzer.
        Specify a list of temperatures if you want to plot only some.
    output: with 'average' you get an average of the three directions
        with 'eigs' you get all the three directions.
    relaxation_time: specify a constant relaxation time value

Returns:
    a matplotlib object
Below is the instruction that describes the task:
### Input:
Plot the figure of merit zT as a function of doping levels for different
temperatures.

Args:
    temps: the default 'all' plots all the temperatures in the analyzer.
        Specify a list of temperatures if you want to plot only some.
    output: with 'average' you get an average of the three directions
        with 'eigs' you get all the three directions.
    relaxation_time: specify a constant relaxation time value

Returns:
    a matplotlib object
### Response:
def plot_zt_dop(self, temps='all', output='average', relaxation_time=1e-14):
    """
    Plot the figure of merit zT as a function of doping levels for different
    temperatures.

    Args:
        temps: the default 'all' plots all the temperatures in the analyzer.
            Specify a list of temperatures if you want to plot only some.
        output: with 'average' you get an average of the three directions
            with 'eigs' you get all the three directions.
        relaxation_time: specify a constant relaxation time value

    Returns:
        a matplotlib object
    """
    import matplotlib.pyplot as plt
    if output == 'average':
        zt = self._bz.get_zt(relaxation_time=relaxation_time, output='average')
    elif output == 'eigs':
        zt = self._bz.get_zt(relaxation_time=relaxation_time, output='eigs')

    tlist = sorted(zt['n'].keys()) if temps == 'all' else temps
    plt.figure(figsize=(22, 14))
    for i, dt in enumerate(['n', 'p']):
        plt.subplot(121 + i)
        for temp in tlist:
            if output == 'eigs':
                for xyz in range(3):
                    # zip() returns an iterator in Python 3; materialize it
                    # before indexing.
                    plt.semilogx(self._bz.doping[dt],
                                 list(zip(*zt[dt][temp]))[xyz],
                                 marker='s',
                                 label=str(xyz) + ' ' + str(temp) + ' K')
            elif output == 'average':
                plt.semilogx(self._bz.doping[dt], zt[dt][temp],
                             marker='s', label=str(temp) + ' K')
        plt.title(dt + '-type', fontsize=20)
        if i == 0:
            plt.ylabel("zT", fontsize=30.0)
        plt.xlabel('Doping concentration ($cm^{-3}$)', fontsize=30.0)
        p = 'lower right' if i == 0 else ''
        plt.legend(loc=p, fontsize=15)
        plt.grid()
        plt.xticks(fontsize=25)
        plt.yticks(fontsize=25)

    plt.tight_layout()
    return plt
def storage(self): """Getter for various Storage variables""" if self._storage is None: api = "SYNO.Storage.CGI.Storage" url = "%s/entry.cgi?api=%s&version=1&method=load_info" % ( self.base_url, api) self._storage = SynoStorage(self._get_url(url)) return self._storage
Getter for various Storage variables
Below is the the instruction that describes the task: ### Input: Getter for various Storage variables ### Response: def storage(self): """Getter for various Storage variables""" if self._storage is None: api = "SYNO.Storage.CGI.Storage" url = "%s/entry.cgi?api=%s&version=1&method=load_info" % ( self.base_url, api) self._storage = SynoStorage(self._get_url(url)) return self._storage
def get_statistic_by_name(stat_name):
    """
    Fetches a statistic based on the given class name.

    Does a look-up in the gadgets' registered statistics to
    find the specified one.
    """
    if stat_name == 'ALL':
        return get_statistic_models()

    for stat in get_statistic_models():
        if stat.__name__ == stat_name:
            return stat

    raise Exception(_("%(stat)s cannot be found.") % {'stat': stat_name})
Fetches a statistic based on the given class name.

Does a look-up in the gadgets' registered statistics to
find the specified one.
Below is the instruction that describes the task:
### Input:
Fetches a statistic based on the given class name.

Does a look-up in the gadgets' registered statistics to
find the specified one.
### Response:
def get_statistic_by_name(stat_name):
    """
    Fetches a statistic based on the given class name.

    Does a look-up in the gadgets' registered statistics to
    find the specified one.
    """
    if stat_name == 'ALL':
        return get_statistic_models()

    for stat in get_statistic_models():
        if stat.__name__ == stat_name:
            return stat

    raise Exception(_("%(stat)s cannot be found.") % {'stat': stat_name})
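The lookup above matches registered classes by their `__name__`. A self-contained sketch of the same pattern — the statistic classes and the `REGISTERED_STATISTICS` registry here are made up for illustration, standing in for `get_statistic_models()`:

```python
class PageViews:
    pass


class Downloads:
    pass


# Hypothetical registry standing in for get_statistic_models().
REGISTERED_STATISTICS = [PageViews, Downloads]


def get_statistic_by_name(stat_name):
    if stat_name == 'ALL':
        return REGISTERED_STATISTICS
    for stat in REGISTERED_STATISTICS:
        # compare against the class's own name, not an instance attribute
        if stat.__name__ == stat_name:
            return stat
    raise LookupError("%(stat)s cannot be found." % {'stat': stat_name})


found = get_statistic_by_name('Downloads')
```

The lookup returns the class object itself, so callers can instantiate or introspect it afterwards.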
def imf(m): ''' Returns ------- N(M)dM for given mass according to Kroupa IMF, vectorization available via vimf() ''' m1 = 0.08; m2 = 0.50 a1 = 0.30; a2 = 1.30; a3 = 2.3 const2 = m1**-a1 -m1**-a2 const3 = m2**-a2 -m2**-a3 if m < 0.08: alpha = 0.3 const = -const2 -const3 elif m < 0.50: alpha = 1.3 const = -const3 else: alpha = 2.3 const = 0.0 # print m,alpha, const, m**-alpha + const return m**-alpha + const
Returns ------- N(M)dM for given mass according to Kroupa IMF, vectorization available via vimf()
Below is the the instruction that describes the task: ### Input: Returns ------- N(M)dM for given mass according to Kroupa IMF, vectorization available via vimf() ### Response: def imf(m): ''' Returns ------- N(M)dM for given mass according to Kroupa IMF, vectorization available via vimf() ''' m1 = 0.08; m2 = 0.50 a1 = 0.30; a2 = 1.30; a3 = 2.3 const2 = m1**-a1 -m1**-a2 const3 = m2**-a2 -m2**-a3 if m < 0.08: alpha = 0.3 const = -const2 -const3 elif m < 0.50: alpha = 1.3 const = -const3 else: alpha = 2.3 const = 0.0 # print m,alpha, const, m**-alpha + const return m**-alpha + const
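The piecewise constants in `imf` are chosen so the three power-law segments join continuously at the break masses 0.08 and 0.50. A quick self-contained check, copying the function body verbatim:

```python
def imf(m):
    m1 = 0.08; m2 = 0.50
    a1 = 0.30; a2 = 1.30; a3 = 2.3
    const2 = m1**-a1 - m1**-a2
    const3 = m2**-a2 - m2**-a3
    if m < 0.08:
        alpha = 0.3
        const = -const2 - const3
    elif m < 0.50:
        alpha = 1.3
        const = -const3
    else:
        alpha = 2.3
        const = 0.0
    return m**-alpha + const


# approaching each break mass from below should match the value at the break
eps = 1e-9
left_008, at_008 = imf(0.08 - eps), imf(0.08)
left_050, at_050 = imf(0.50 - eps), imf(0.50)
```

At m = 1 the last branch gives exactly 1.0 (`1**-2.3 + 0.0`), which is a handy sanity value.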
def is_contiguous(self): """Return offset and size of contiguous data, else None.""" if self._keyframe is None: raise RuntimeError('keyframe not set') if self._keyframe.is_contiguous: return self._offsetscounts[0][0], self._keyframe.is_contiguous[1] return None
Return offset and size of contiguous data, else None.
Below is the the instruction that describes the task: ### Input: Return offset and size of contiguous data, else None. ### Response: def is_contiguous(self): """Return offset and size of contiguous data, else None.""" if self._keyframe is None: raise RuntimeError('keyframe not set') if self._keyframe.is_contiguous: return self._offsetscounts[0][0], self._keyframe.is_contiguous[1] return None
async def execute_insert( self, sql: str, parameters: Iterable[Any] = None ) -> Optional[sqlite3.Row]: """Helper to insert and get the last_insert_rowid.""" if parameters is None: parameters = [] return await self._execute(self._execute_insert, sql, parameters)
Helper to insert and get the last_insert_rowid.
Below is the the instruction that describes the task: ### Input: Helper to insert and get the last_insert_rowid. ### Response: async def execute_insert( self, sql: str, parameters: Iterable[Any] = None ) -> Optional[sqlite3.Row]: """Helper to insert and get the last_insert_rowid.""" if parameters is None: parameters = [] return await self._execute(self._execute_insert, sql, parameters)
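`execute_insert` wraps the common insert-then-fetch-rowid pattern behind an async API. The same idea in plain synchronous `sqlite3` (purely illustrative; the async helper's internals aren't shown here) can be sketched as:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
cur = conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))

# SQLite exposes the id of the most recent INSERT on this connection
row = conn.execute("SELECT last_insert_rowid()").fetchone()
```

For the first insert into an empty table with an `INTEGER PRIMARY KEY`, both `row[0]` and the cursor's `lastrowid` are 1.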
def remove_nonancestors_of(self, node): """Remove all of the non-ancestors operation nodes of node.""" if isinstance(node, int): warnings.warn('Calling remove_nonancestors_of() with a node id is deprecated,' ' use a DAGNode instead', DeprecationWarning, 2) node = self._id_to_node[node] anc = nx.ancestors(self._multi_graph, node) comp = list(set(self._multi_graph.nodes()) - set(anc)) for n in comp: if n.type == "op": self.remove_op_node(n)
Remove all of the non-ancestors operation nodes of node.
Below is the the instruction that describes the task: ### Input: Remove all of the non-ancestors operation nodes of node. ### Response: def remove_nonancestors_of(self, node): """Remove all of the non-ancestors operation nodes of node.""" if isinstance(node, int): warnings.warn('Calling remove_nonancestors_of() with a node id is deprecated,' ' use a DAGNode instead', DeprecationWarning, 2) node = self._id_to_node[node] anc = nx.ancestors(self._multi_graph, node) comp = list(set(self._multi_graph.nodes()) - set(anc)) for n in comp: if n.type == "op": self.remove_op_node(n)
def get_game(self, name): """Get the game instance for a game name :param name: the name of the game :type name: :class:`str` :returns: the game instance :rtype: :class:`models.Game` | None :raises: None """ games = self.search_games(query=name, live=False) for g in games: if g.name == name: return g
Get the game instance for a game name :param name: the name of the game :type name: :class:`str` :returns: the game instance :rtype: :class:`models.Game` | None :raises: None
Below is the the instruction that describes the task: ### Input: Get the game instance for a game name :param name: the name of the game :type name: :class:`str` :returns: the game instance :rtype: :class:`models.Game` | None :raises: None ### Response: def get_game(self, name): """Get the game instance for a game name :param name: the name of the game :type name: :class:`str` :returns: the game instance :rtype: :class:`models.Game` | None :raises: None """ games = self.search_games(query=name, live=False) for g in games: if g.name == name: return g
def true_positives(links_true, links_pred): """Count the number of True Positives. Returns the number of correctly predicted links, also called the number of True Positives (TP). Parameters ---------- links_true: pandas.MultiIndex, pandas.DataFrame, pandas.Series The true (or actual) links. links_pred: pandas.MultiIndex, pandas.DataFrame, pandas.Series The predicted links. Returns ------- int The number of correctly predicted links. """ links_true = _get_multiindex(links_true) links_pred = _get_multiindex(links_pred) return len(links_true & links_pred)
Count the number of True Positives. Returns the number of correctly predicted links, also called the number of True Positives (TP). Parameters ---------- links_true: pandas.MultiIndex, pandas.DataFrame, pandas.Series The true (or actual) links. links_pred: pandas.MultiIndex, pandas.DataFrame, pandas.Series The predicted links. Returns ------- int The number of correctly predicted links.
Below is the the instruction that describes the task: ### Input: Count the number of True Positives. Returns the number of correctly predicted links, also called the number of True Positives (TP). Parameters ---------- links_true: pandas.MultiIndex, pandas.DataFrame, pandas.Series The true (or actual) links. links_pred: pandas.MultiIndex, pandas.DataFrame, pandas.Series The predicted links. Returns ------- int The number of correctly predicted links. ### Response: def true_positives(links_true, links_pred): """Count the number of True Positives. Returns the number of correctly predicted links, also called the number of True Positives (TP). Parameters ---------- links_true: pandas.MultiIndex, pandas.DataFrame, pandas.Series The true (or actual) links. links_pred: pandas.MultiIndex, pandas.DataFrame, pandas.Series The predicted links. Returns ------- int The number of correctly predicted links. """ links_true = _get_multiindex(links_true) links_pred = _get_multiindex(links_pred) return len(links_true & links_pred)
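Stripped of the pandas `MultiIndex` plumbing, the count is just the size of the intersection of the two link sets. A plain-set sketch with made-up record pairs:

```python
links_true = {(1, 101), (2, 102), (3, 103)}
links_pred = {(1, 101), (3, 103), (4, 104)}

# links present in both truth and prediction are the true positives
true_positives = len(links_true & links_pred)
```

Here two of the three predicted links also appear in the truth set, so the count is 2.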
def color_is_disabled(**envars): ''' Look for clues in environment, e.g.: - https://bixense.com/clicolors/ - http://no-color.org/ Arguments: envars: Additional environment variables to check for equality, i.e. ``MYAPP_COLOR_DISABLED='1'`` Returns: None, Bool: Disabled ''' result = None if 'NO_COLOR' in env: result = True elif env.CLICOLOR == '0': result = True log.debug('%r (NO_COLOR=%s, CLICOLOR=%s)', result, env.NO_COLOR or '', env.CLICOLOR or '' ) for name, value in envars.items(): envar = getattr(env, name) if envar.value == value: result = True log.debug('%s == %r: %r', name, value, result) return result
Look for clues in environment, e.g.: - https://bixense.com/clicolors/ - http://no-color.org/ Arguments: envars: Additional environment variables to check for equality, i.e. ``MYAPP_COLOR_DISABLED='1'`` Returns: None, Bool: Disabled
Below is the the instruction that describes the task: ### Input: Look for clues in environment, e.g.: - https://bixense.com/clicolors/ - http://no-color.org/ Arguments: envars: Additional environment variables to check for equality, i.e. ``MYAPP_COLOR_DISABLED='1'`` Returns: None, Bool: Disabled ### Response: def color_is_disabled(**envars): ''' Look for clues in environment, e.g.: - https://bixense.com/clicolors/ - http://no-color.org/ Arguments: envars: Additional environment variables to check for equality, i.e. ``MYAPP_COLOR_DISABLED='1'`` Returns: None, Bool: Disabled ''' result = None if 'NO_COLOR' in env: result = True elif env.CLICOLOR == '0': result = True log.debug('%r (NO_COLOR=%s, CLICOLOR=%s)', result, env.NO_COLOR or '', env.CLICOLOR or '' ) for name, value in envars.items(): envar = getattr(env, name) if envar.value == value: result = True log.debug('%s == %r: %r', name, value, result) return result
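A simplified, dependency-free sketch of the same precedence — `NO_COLOR` presence alone disables color, then `CLICOLOR == '0'`, then any caller-supplied equality checks; the logging and the env-wrapper object of the original are dropped, and the mapping is passed in explicitly:

```python
def color_is_disabled(environ, **envars):
    result = None
    if 'NO_COLOR' in environ:            # presence alone disables color
        result = True
    elif environ.get('CLICOLOR') == '0':
        result = True
    for name, value in envars.items():   # caller-supplied equality checks
        if environ.get(name) == value:
            result = True
    return result


a = color_is_disabled({'NO_COLOR': ''})
b = color_is_disabled({'CLICOLOR': '0'})
c = color_is_disabled({'MYAPP_COLOR_DISABLED': '1'}, MYAPP_COLOR_DISABLED='1')
d = color_is_disabled({'CLICOLOR': '1'})
```

Note the original returns `None` (not `False`) when nothing matched, letting callers distinguish "explicitly enabled" from "no opinion"; the sketch keeps that behavior.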
def get_op_nodes(self, op=None, data=False): """Deprecated. Use op_nodes().""" warnings.warn('The method get_op_nodes() is being replaced by op_nodes().' 'Returning a list of node_ids/(node_id, data) tuples is ' 'also deprecated, op_nodes() returns a list of DAGNodes ', DeprecationWarning, 2) if data: warnings.warn('The parameter data is deprecated, op_nodes() returns DAGNodes' ' which always contain the data', DeprecationWarning, 2) nodes = [] for node in self._multi_graph.nodes(): if node.type == "op": if op is None or isinstance(node.op, op): nodes.append((node._node_id, node.data_dict)) if not data: nodes = [n[0] for n in nodes] return nodes
Deprecated. Use op_nodes().
Below is the the instruction that describes the task: ### Input: Deprecated. Use op_nodes(). ### Response: def get_op_nodes(self, op=None, data=False): """Deprecated. Use op_nodes().""" warnings.warn('The method get_op_nodes() is being replaced by op_nodes().' 'Returning a list of node_ids/(node_id, data) tuples is ' 'also deprecated, op_nodes() returns a list of DAGNodes ', DeprecationWarning, 2) if data: warnings.warn('The parameter data is deprecated, op_nodes() returns DAGNodes' ' which always contain the data', DeprecationWarning, 2) nodes = [] for node in self._multi_graph.nodes(): if node.type == "op": if op is None or isinstance(node.op, op): nodes.append((node._node_id, node.data_dict)) if not data: nodes = [n[0] for n in nodes] return nodes
def _postprocess_hover(self, renderer, source): """ Attaches renderer to hover tool and processes tooltips to ensure datetime data is displayed correctly. """ hover = self.handles.get('hover') if hover is None: return if hover.renderers == 'auto': hover.renderers = [] hover.renderers.append(renderer) # If datetime column is in the data replace hover formatter for k, v in source.data.items(): if k+'_dt_strings' in source.data: tooltips = [] for name, formatter in hover.tooltips: if formatter == '@{%s}' % k: formatter = '@{%s_dt_strings}' % k tooltips.append((name, formatter)) hover.tooltips = tooltips
Attaches renderer to hover tool and processes tooltips to ensure datetime data is displayed correctly.
Below is the the instruction that describes the task: ### Input: Attaches renderer to hover tool and processes tooltips to ensure datetime data is displayed correctly. ### Response: def _postprocess_hover(self, renderer, source): """ Attaches renderer to hover tool and processes tooltips to ensure datetime data is displayed correctly. """ hover = self.handles.get('hover') if hover is None: return if hover.renderers == 'auto': hover.renderers = [] hover.renderers.append(renderer) # If datetime column is in the data replace hover formatter for k, v in source.data.items(): if k+'_dt_strings' in source.data: tooltips = [] for name, formatter in hover.tooltips: if formatter == '@{%s}' % k: formatter = '@{%s_dt_strings}' % k tooltips.append((name, formatter)) hover.tooltips = tooltips
def send_line(self, data, nowait=False):
    """send a line to the server; CR and LF are replaced by spaces"""
    data = data.replace('\n', ' ').replace('\r', ' ')
    f = asyncio.Future(loop=self.loop)
    if self.queue is not None and nowait is False:
        self.queue.put_nowait((f, data))
    else:
        # data has already been sanitized above
        self.send(data)
        f.set_result(True)
    return f
send a line to the server; CR and LF are replaced by spaces
Below is the instruction that describes the task:
### Input:
send a line to the server; CR and LF are replaced by spaces
### Response:
def send_line(self, data, nowait=False):
    """send a line to the server; CR and LF are replaced by spaces"""
    data = data.replace('\n', ' ').replace('\r', ' ')
    f = asyncio.Future(loop=self.loop)
    if self.queue is not None and nowait is False:
        self.queue.put_nowait((f, data))
    else:
        # data has already been sanitized above
        self.send(data)
        f.set_result(True)
    return f
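The sanitization step is just two chained `str.replace` calls; a one-line check with an illustrative IRC-style payload:

```python
data = "PRIVMSG #chan :multi\nline\rpayload"

# each newline and carriage return becomes a single space
sanitized = data.replace('\n', ' ').replace('\r', ' ')
```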
def verify(self, data, signature): """ Verify signed data using the Montgomery public key stored by this XEdDSA instance. :param data: A bytes-like object containing the data that was signed. :param signature: A bytes-like object encoding the signature with length SIGNATURE_SIZE. :returns: A boolean indicating whether the signature was valid or not. """ cls = self.__class__ if not isinstance(data, bytes): raise TypeError("The data parameter must be a bytes-like object.") if not isinstance(signature, bytes): raise TypeError("Wrong type passed for the signature parameter.") if len(signature) != cls.SIGNATURE_SIZE: raise ValueError("Invalid value passed for the signature parameter.") return cls._verify( bytearray(data), bytearray(signature), cls._mont_pub_to_ed_pub(bytearray(self.__mont_pub)) )
Verify signed data using the Montgomery public key stored by this XEdDSA instance. :param data: A bytes-like object containing the data that was signed. :param signature: A bytes-like object encoding the signature with length SIGNATURE_SIZE. :returns: A boolean indicating whether the signature was valid or not.
Below is the the instruction that describes the task: ### Input: Verify signed data using the Montgomery public key stored by this XEdDSA instance. :param data: A bytes-like object containing the data that was signed. :param signature: A bytes-like object encoding the signature with length SIGNATURE_SIZE. :returns: A boolean indicating whether the signature was valid or not. ### Response: def verify(self, data, signature): """ Verify signed data using the Montgomery public key stored by this XEdDSA instance. :param data: A bytes-like object containing the data that was signed. :param signature: A bytes-like object encoding the signature with length SIGNATURE_SIZE. :returns: A boolean indicating whether the signature was valid or not. """ cls = self.__class__ if not isinstance(data, bytes): raise TypeError("The data parameter must be a bytes-like object.") if not isinstance(signature, bytes): raise TypeError("Wrong type passed for the signature parameter.") if len(signature) != cls.SIGNATURE_SIZE: raise ValueError("Invalid value passed for the signature parameter.") return cls._verify( bytearray(data), bytearray(signature), cls._mont_pub_to_ed_pub(bytearray(self.__mont_pub)) )
def lock(self): """ This method sets a cache variable to mark current job as "already running". """ if self.cache.get(self.lock_name): return False else: self.cache.set(self.lock_name, timezone.now(), self.timeout) return True
This method sets a cache variable to mark current job as "already running".
Below is the the instruction that describes the task: ### Input: This method sets a cache variable to mark current job as "already running". ### Response: def lock(self): """ This method sets a cache variable to mark current job as "already running". """ if self.cache.get(self.lock_name): return False else: self.cache.set(self.lock_name, timezone.now(), self.timeout) return True
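The check-then-set flow of `lock` can be exercised against a toy in-memory cache. The `DictCache` class below is hypothetical, standing in for a Django-style cache backend with `get`/`set(key, value, timeout)` semantics:

```python
import time


class DictCache:
    """Toy stand-in for a cache backend with timeout-aware get/set."""
    def __init__(self):
        self._store = {}

    def get(self, key):
        value, expires = self._store.get(key, (None, 0))
        return value if time.monotonic() < expires else None

    def set(self, key, value, timeout):
        self._store[key] = (value, time.monotonic() + timeout)


def acquire_lock(cache, lock_name, timeout=300):
    if cache.get(lock_name):     # someone else already holds the lock
        return False
    cache.set(lock_name, True, timeout)
    return True


cache = DictCache()
first = acquire_lock(cache, "job:nightly")
second = acquire_lock(cache, "job:nightly")
```

Note the get/set pair here is not atomic; a real deployment would lean on an atomic cache primitive (e.g. an add/SETNX-style operation) to avoid two workers acquiring the lock simultaneously.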
def verify_proxy_ticket(ticket, service): """ Verifies CAS 2.0+ XML-based proxy ticket. :param: ticket :param: service Returns username on success and None on failure. """ params = {'ticket': ticket, 'service': service} url = (urljoin(settings.CAS_SERVER_URL, 'proxyValidate') + '?' + urlencode(params)) page = urlopen(url) try: response = page.read() tree = ElementTree.fromstring(response) if tree[0].tag.endswith('authenticationSuccess'): username = tree[0][0].text proxies = [] if len(tree[0]) > 1: for element in tree[0][1]: proxies.append(element.text) return {"username": username, "proxies": proxies} else: return None finally: page.close()
Verifies CAS 2.0+ XML-based proxy ticket. :param: ticket :param: service Returns username on success and None on failure.
Below is the the instruction that describes the task: ### Input: Verifies CAS 2.0+ XML-based proxy ticket. :param: ticket :param: service Returns username on success and None on failure. ### Response: def verify_proxy_ticket(ticket, service): """ Verifies CAS 2.0+ XML-based proxy ticket. :param: ticket :param: service Returns username on success and None on failure. """ params = {'ticket': ticket, 'service': service} url = (urljoin(settings.CAS_SERVER_URL, 'proxyValidate') + '?' + urlencode(params)) page = urlopen(url) try: response = page.read() tree = ElementTree.fromstring(response) if tree[0].tag.endswith('authenticationSuccess'): username = tree[0][0].text proxies = [] if len(tree[0]) > 1: for element in tree[0][1]: proxies.append(element.text) return {"username": username, "proxies": proxies} else: return None finally: page.close()
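The XML handling in the success branch can be exercised offline against a canned CAS 2.0-style response; the namespace URI and the user/proxy values below are illustrative:

```python
from xml.etree import ElementTree

response = (
    '<cas:serviceResponse xmlns:cas="http://www.yale.edu/tp/cas">'
    '<cas:authenticationSuccess>'
    '<cas:user>alice</cas:user>'
    '<cas:proxies><cas:proxy>https://proxy.example.com/pgt</cas:proxy></cas:proxies>'
    '</cas:authenticationSuccess>'
    '</cas:serviceResponse>'
)

tree = ElementTree.fromstring(response)
# ElementTree expands namespace prefixes, so match on the tag suffix
assert tree[0].tag.endswith('authenticationSuccess')
username = tree[0][0].text                                   # <cas:user>
proxies = [el.text for el in tree[0][1]] if len(tree[0]) > 1 else []
```

This mirrors the positional indexing of the original: the first child of the success element is the username, the second (when present) the proxy list.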
def lock_packages(self, deps_file_path=None, depslock_file_path=None, packages=None):
    """
    Lock packages. The downloader searches for packages.
    """
    if deps_file_path is None:
        deps_file_path = self._deps_path
    if depslock_file_path is None:
        depslock_file_path = self._depslock_path
    if deps_file_path == depslock_file_path:
        depslock_file_path += '.lock'
        # raise CrosspmException(
        #     CROSSPM_ERRORCODE_WRONG_ARGS,
        #     'Dependencies and Lock files are same: "{}".'.format(deps_file_path),
        # )

    if packages is None:
        self.search_dependencies(deps_file_path)
    else:
        self._root_package.packages = packages

    self._log.info('Writing lock file [{}]'.format(depslock_file_path))
    output_params = {
        'out_format': 'lock',
        'output': depslock_file_path,
    }
    Output(config=self._config).write_output(output_params, self._root_package.packages)

    self._log.info('Done!')
Lock packages. The downloader searches for packages.
Below is the instruction that describes the task:
### Input:
Lock packages. The downloader searches for packages.
### Response:
def lock_packages(self, deps_file_path=None, depslock_file_path=None, packages=None):
    """
    Lock packages. The downloader searches for packages.
    """
    if deps_file_path is None:
        deps_file_path = self._deps_path
    if depslock_file_path is None:
        depslock_file_path = self._depslock_path
    if deps_file_path == depslock_file_path:
        depslock_file_path += '.lock'
        # raise CrosspmException(
        #     CROSSPM_ERRORCODE_WRONG_ARGS,
        #     'Dependencies and Lock files are same: "{}".'.format(deps_file_path),
        # )

    if packages is None:
        self.search_dependencies(deps_file_path)
    else:
        self._root_package.packages = packages

    self._log.info('Writing lock file [{}]'.format(depslock_file_path))
    output_params = {
        'out_format': 'lock',
        'output': depslock_file_path,
    }
    Output(config=self._config).write_output(output_params, self._root_package.packages)

    self._log.info('Done!')
def _integrate_cvode(self, *args, **kwargs): """ Do not use directly (use ``integrate(..., integrator='cvode')``). Uses CVode from CVodes in `SUNDIALS <https://computation.llnl.gov/casc/sundials/>`_ (via `pycvodes <https://pypi.python.org/pypi/pycvodes>`_) to integrate the ODE system. """ import pycvodes # Python interface to SUNDIALS's cvodes integrators kwargs['with_jacobian'] = kwargs.get('method', 'bdf') in pycvodes.requires_jac if 'lband' in kwargs or 'uband' in kwargs or 'band' in kwargs: raise ValueError("lband and uband set locally (set at" " initialization instead)") if self.band is not None: kwargs['lband'], kwargs['uband'] = self.band kwargs['autonomous_exprs'] = self.autonomous_exprs return self._integrate(pycvodes.integrate_adaptive, pycvodes.integrate_predefined, *args, **kwargs)
Do not use directly (use ``integrate(..., integrator='cvode')``). Uses CVode from CVodes in `SUNDIALS <https://computation.llnl.gov/casc/sundials/>`_ (via `pycvodes <https://pypi.python.org/pypi/pycvodes>`_) to integrate the ODE system.
Below is the instruction that describes the task: ### Input: Do not use directly (use ``integrate(..., integrator='cvode')``). Uses CVode from CVodes in `SUNDIALS <https://computation.llnl.gov/casc/sundials/>`_ (via `pycvodes <https://pypi.python.org/pypi/pycvodes>`_) to integrate the ODE system. ### Response: def _integrate_cvode(self, *args, **kwargs): """ Do not use directly (use ``integrate(..., integrator='cvode')``). Uses CVode from CVodes in `SUNDIALS <https://computation.llnl.gov/casc/sundials/>`_ (via `pycvodes <https://pypi.python.org/pypi/pycvodes>`_) to integrate the ODE system. """ import pycvodes # Python interface to SUNDIALS's cvodes integrators kwargs['with_jacobian'] = kwargs.get('method', 'bdf') in pycvodes.requires_jac if 'lband' in kwargs or 'uband' in kwargs or 'band' in kwargs: raise ValueError("lband and uband set locally (set at" " initialization instead)") if self.band is not None: kwargs['lband'], kwargs['uband'] = self.band kwargs['autonomous_exprs'] = self.autonomous_exprs return self._integrate(pycvodes.integrate_adaptive, pycvodes.integrate_predefined, *args, **kwargs)
def invoke_function(self, function_name, **function_params): """ Invokes an Excel Function """ url = self.build_url(self._endpoints.get('function').format(function_name)) response = self.session.post(url, data=function_params) if not response: return None data = response.json() error = data.get('error') if error is None: return data.get('value') else: raise FunctionException(error)
Invokes an Excel Function
Below is the instruction that describes the task: ### Input: Invokes an Excel Function ### Response: def invoke_function(self, function_name, **function_params): """ Invokes an Excel Function """ url = self.build_url(self._endpoints.get('function').format(function_name)) response = self.session.post(url, data=function_params) if not response: return None data = response.json() error = data.get('error') if error is None: return data.get('value') else: raise FunctionException(error)
def create_config(sections, section_contents): """Create a config file from the provided sections and key value pairs. Args: sections (List[str]): A list of section keys. key_value_pairs (Dict[str, str]): A list of of dictionaries. Must be as long as the list of sections. That is to say, if there are two sections, there should be two dicts. Returns: configparser.ConfigParser: A ConfigParser. Raises: ValueError """ sections_length, section_contents_length = len(sections), len(section_contents) if sections_length != section_contents_length: raise ValueError("Mismatch between argument lengths.\n" "len(sections) = {}\n" "len(section_contents) = {}" .format(sections_length, section_contents_length)) config = configparser.ConfigParser() for section, section_content in zip(sections, section_contents): config[section] = section_content return config
Create a config file from the provided sections and key value pairs. Args: sections (List[str]): A list of section keys. key_value_pairs (Dict[str, str]): A list of of dictionaries. Must be as long as the list of sections. That is to say, if there are two sections, there should be two dicts. Returns: configparser.ConfigParser: A ConfigParser. Raises: ValueError
Below is the instruction that describes the task: ### Input: Create a config file from the provided sections and key value pairs. Args: sections (List[str]): A list of section keys. key_value_pairs (Dict[str, str]): A list of of dictionaries. Must be as long as the list of sections. That is to say, if there are two sections, there should be two dicts. Returns: configparser.ConfigParser: A ConfigParser. Raises: ValueError ### Response: def create_config(sections, section_contents): """Create a config file from the provided sections and key value pairs. Args: sections (List[str]): A list of section keys. key_value_pairs (Dict[str, str]): A list of of dictionaries. Must be as long as the list of sections. That is to say, if there are two sections, there should be two dicts. Returns: configparser.ConfigParser: A ConfigParser. Raises: ValueError """ sections_length, section_contents_length = len(sections), len(section_contents) if sections_length != section_contents_length: raise ValueError("Mismatch between argument lengths.\n" "len(sections) = {}\n" "len(section_contents) = {}" .format(sections_length, section_contents_length)) config = configparser.ConfigParser() for section, section_content in zip(sections, section_contents): config[section] = section_content return config
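The `create_config` record above is pure stdlib, so it can be exercised end to end. A minimal usage sketch follows; the function body mirrors the record (with the length check condensed), and the section names and key/value pairs are illustrative, not from the original project.

```python
import configparser

def create_config(sections, section_contents):
    # Mirrors the record above: one dict of key/value pairs per section,
    # with a ValueError when the two argument lists differ in length.
    if len(sections) != len(section_contents):
        raise ValueError("Mismatch between argument lengths.")
    config = configparser.ConfigParser()
    for section, section_content in zip(sections, section_contents):
        config[section] = section_content
    return config

# Hypothetical sections/contents, purely for demonstration.
config = create_config(["server", "client"],
                       [{"host": "localhost"}, {"timeout": "30"}])
print(config["server"]["host"])  # -> localhost
```

Note that `configparser` stores every value as a string, which is why `"30"` is quoted above.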
def siblingsId(self) -> Tuple[CtsReference, CtsReference]: """ Siblings Identifiers of the passage :rtype: (str, str) """ if not self._text: raise MissingAttribute("CapitainsCtsPassage was iniated without CtsTextMetadata object") if self._prev_next is not None: return self._prev_next document_references = self._text.getReffs(level=self.depth) range_length = 1 if self.reference.is_range(): range_length = len(self.getReffs()) start = document_references.index(self.reference.start) if start == 0: # If the passage is already at the beginning _prev = None elif start - range_length < 0: _prev = document_references[0] else: _prev = document_references[start - 1] if start + 1 == len(document_references): # If the passage is already at the end _next = None elif start + range_length > len(document_references): _next = document_references[-1] else: _next = document_references[start + 1] self._prev_next = (_prev, _next) return self._prev_next
Siblings Identifiers of the passage :rtype: (str, str)
Below is the instruction that describes the task: ### Input: Siblings Identifiers of the passage :rtype: (str, str) ### Response: def siblingsId(self) -> Tuple[CtsReference, CtsReference]: """ Siblings Identifiers of the passage :rtype: (str, str) """ if not self._text: raise MissingAttribute("CapitainsCtsPassage was iniated without CtsTextMetadata object") if self._prev_next is not None: return self._prev_next document_references = self._text.getReffs(level=self.depth) range_length = 1 if self.reference.is_range(): range_length = len(self.getReffs()) start = document_references.index(self.reference.start) if start == 0: # If the passage is already at the beginning _prev = None elif start - range_length < 0: _prev = document_references[0] else: _prev = document_references[start - 1] if start + 1 == len(document_references): # If the passage is already at the end _next = None elif start + range_length > len(document_references): _next = document_references[-1] else: _next = document_references[start + 1] self._prev_next = (_prev, _next) return self._prev_next
def simulateCatalog(config,roi=None,lon=None,lat=None): """ Simulate a catalog object. """ import ugali.simulation.simulator if roi is None: roi = createROI(config,lon,lat) sim = ugali.simulation.simulator.Simulator(config,roi) return sim.catalog()
Simulate a catalog object.
Below is the instruction that describes the task: ### Input: Simulate a catalog object. ### Response: def simulateCatalog(config,roi=None,lon=None,lat=None): """ Simulate a catalog object. """ import ugali.simulation.simulator if roi is None: roi = createROI(config,lon,lat) sim = ugali.simulation.simulator.Simulator(config,roi) return sim.catalog()
def replace(self, **updates): """Return a new profile with the given updates. Unspecified fields will be the same as this instance. See `__new__` for details on the arguments. """ state = self.dump() state.update(updates) return self.__class__(**state)
Return a new profile with the given updates. Unspecified fields will be the same as this instance. See `__new__` for details on the arguments.
Below is the instruction that describes the task: ### Input: Return a new profile with the given updates. Unspecified fields will be the same as this instance. See `__new__` for details on the arguments. ### Response: def replace(self, **updates): """Return a new profile with the given updates. Unspecified fields will be the same as this instance. See `__new__` for details on the arguments. """ state = self.dump() state.update(updates) return self.__class__(**state)
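The `replace` record above depends on a `dump()` method and a matching constructor, so it can be demonstrated with a small stand-in class. `Profile` below is a hypothetical sketch of the pattern (dump current state, overlay updates, rebuild), not the project's actual class.

```python
class Profile:
    def __init__(self, name, level=0):
        self.name = name
        self.level = level

    def dump(self):
        # Serialize constructor arguments back into a dict.
        return {"name": self.name, "level": self.level}

    def replace(self, **updates):
        # Same pattern as the record above: unspecified fields carry over.
        state = self.dump()
        state.update(updates)
        return self.__class__(**state)

p = Profile("alice", level=1)
q = p.replace(level=2)
print(q.name, q.level)  # -> alice 2
```

Because a fresh instance is built from the merged state, the original `p` is left untouched, which is what makes this pattern useful for immutable-style objects.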
def preferred_width(self, cli, max_available_width): """ Report the width of the longest meta text as the preferred width of this control. It could be that we use less width, but this way, we're sure that the layout doesn't change when we select another completion (E.g. that completions are suddenly shown in more or fewer columns.) """ if cli.current_buffer.complete_state: state = cli.current_buffer.complete_state return 2 + max(get_cwidth(c.display_meta) for c in state.current_completions) else: return 0
Report the width of the longest meta text as the preferred width of this control. It could be that we use less width, but this way, we're sure that the layout doesn't change when we select another completion (E.g. that completions are suddenly shown in more or fewer columns.)
Below is the instruction that describes the task: ### Input: Report the width of the longest meta text as the preferred width of this control. It could be that we use less width, but this way, we're sure that the layout doesn't change when we select another completion (E.g. that completions are suddenly shown in more or fewer columns.) ### Response: def preferred_width(self, cli, max_available_width): """ Report the width of the longest meta text as the preferred width of this control. It could be that we use less width, but this way, we're sure that the layout doesn't change when we select another completion (E.g. that completions are suddenly shown in more or fewer columns.) """ if cli.current_buffer.complete_state: state = cli.current_buffer.complete_state return 2 + max(get_cwidth(c.display_meta) for c in state.current_completions) else: return 0
def list_timeline(self, list_id, since_id=None, max_id=None, count=20): """ List the tweets of specified list. :param list_id: list ID number :param since_id: results will have ID greater than specified ID (more recent than) :param max_id: results will have ID less than specified ID (older than) :param count: number of results per page :return: list of :class:`~responsebot.models.Tweet` objects """ statuses = self._client.list_timeline(list_id=list_id, since_id=since_id, max_id=max_id, count=count) return [Tweet(tweet._json) for tweet in statuses]
List the tweets of specified list. :param list_id: list ID number :param since_id: results will have ID greater than specified ID (more recent than) :param max_id: results will have ID less than specified ID (older than) :param count: number of results per page :return: list of :class:`~responsebot.models.Tweet` objects
Below is the instruction that describes the task: ### Input: List the tweets of specified list. :param list_id: list ID number :param since_id: results will have ID greater than specified ID (more recent than) :param max_id: results will have ID less than specified ID (older than) :param count: number of results per page :return: list of :class:`~responsebot.models.Tweet` objects ### Response: def list_timeline(self, list_id, since_id=None, max_id=None, count=20): """ List the tweets of specified list. :param list_id: list ID number :param since_id: results will have ID greater than specified ID (more recent than) :param max_id: results will have ID less than specified ID (older than) :param count: number of results per page :return: list of :class:`~responsebot.models.Tweet` objects """ statuses = self._client.list_timeline(list_id=list_id, since_id=since_id, max_id=max_id, count=count) return [Tweet(tweet._json) for tweet in statuses]
def violation_lines(self, src_path): """ Return a list of lines in violation (integers) in `src_path` that were changed. If we have no coverage information for `src_path`, returns an empty list. """ diff_violations = self._diff_violations().get(src_path) if diff_violations is None: return [] return sorted(diff_violations.lines)
Return a list of lines in violation (integers) in `src_path` that were changed. If we have no coverage information for `src_path`, returns an empty list.
Below is the instruction that describes the task: ### Input: Return a list of lines in violation (integers) in `src_path` that were changed. If we have no coverage information for `src_path`, returns an empty list. ### Response: def violation_lines(self, src_path): """ Return a list of lines in violation (integers) in `src_path` that were changed. If we have no coverage information for `src_path`, returns an empty list. """ diff_violations = self._diff_violations().get(src_path) if diff_violations is None: return [] return sorted(diff_violations.lines)
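The `violation_lines` record above only needs a mapping of paths to objects with a `.lines` attribute, so its lookup-and-sort contract is easy to sketch. Everything here other than the `violation_lines` body (`DiffReporter`, `Violations`, the file names) is a hypothetical stand-in for whatever the real reporter caches.

```python
from collections import namedtuple

# Hypothetical stand-in for the reporter's cached diff violations.
Violations = namedtuple("Violations", ["lines"])

class DiffReporter:
    def __init__(self, violations_by_path):
        self._violations = violations_by_path

    def _diff_violations(self):
        return self._violations

    def violation_lines(self, src_path):
        # Same contract as the record above: [] when the path is unknown.
        diff_violations = self._diff_violations().get(src_path)
        if diff_violations is None:
            return []
        return sorted(diff_violations.lines)

reporter = DiffReporter({"app.py": Violations(lines={7, 3, 12})})
print(reporter.violation_lines("app.py"))    # -> [3, 7, 12]
print(reporter.violation_lines("other.py"))  # -> []
```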
def read_frames(self, n, channels=None): """Read ``n`` frames from the track, starting with the current frame :param integer n: Number of frames to read :param integer channels: Number of channels to return (default is number of channels in track) :returns: Next ``n`` frames from the track, starting with ``current_frame`` :rtype: numpy array """ if channels is None: channels = self.channels if channels == 1: out = np.zeros(n) elif channels == 2: out = np.zeros((n, 2)) else: print "Input needs to be 1 or 2 channels" return if n > self.remaining_frames(): print "Trying to retrieve too many frames!" print "Asked for", n n = self.remaining_frames() print "Returning", n if self.channels == 1 and channels == 1: out = self.sound.read_frames(n) elif self.channels == 1 and channels == 2: frames = self.sound.read_frames(n) out = np.vstack((frames.copy(), frames.copy())).T elif self.channels == 2 and channels == 1: frames = self.sound.read_frames(n) out = np.mean(frames, axis=1) elif self.channels == 2 and channels == 2: out[:n, :] = self.sound.read_frames(n) self.current_frame += n return out
Read ``n`` frames from the track, starting with the current frame :param integer n: Number of frames to read :param integer channels: Number of channels to return (default is number of channels in track) :returns: Next ``n`` frames from the track, starting with ``current_frame`` :rtype: numpy array
Below is the instruction that describes the task: ### Input: Read ``n`` frames from the track, starting with the current frame :param integer n: Number of frames to read :param integer channels: Number of channels to return (default is number of channels in track) :returns: Next ``n`` frames from the track, starting with ``current_frame`` :rtype: numpy array ### Response: def read_frames(self, n, channels=None): """Read ``n`` frames from the track, starting with the current frame :param integer n: Number of frames to read :param integer channels: Number of channels to return (default is number of channels in track) :returns: Next ``n`` frames from the track, starting with ``current_frame`` :rtype: numpy array """ if channels is None: channels = self.channels if channels == 1: out = np.zeros(n) elif channels == 2: out = np.zeros((n, 2)) else: print "Input needs to be 1 or 2 channels" return if n > self.remaining_frames(): print "Trying to retrieve too many frames!" print "Asked for", n n = self.remaining_frames() print "Returning", n if self.channels == 1 and channels == 1: out = self.sound.read_frames(n) elif self.channels == 1 and channels == 2: frames = self.sound.read_frames(n) out = np.vstack((frames.copy(), frames.copy())).T elif self.channels == 2 and channels == 1: frames = self.sound.read_frames(n) out = np.mean(frames, axis=1) elif self.channels == 2 and channels == 2: out[:n, :] = self.sound.read_frames(n) self.current_frame += n return out
def users_set_preferences(self, user_id, data, **kwargs): """Set user’s preferences.""" return self.__call_api_post('users.setPreferences', userId=user_id, data=data, kwargs=kwargs)
Set user’s preferences.
Below is the instruction that describes the task: ### Input: Set user’s preferences. ### Response: def users_set_preferences(self, user_id, data, **kwargs): """Set user’s preferences.""" return self.__call_api_post('users.setPreferences', userId=user_id, data=data, kwargs=kwargs)
def setup(self, analysis_project_name, remote_project_name, incident_id, zone, boot_disk_size, cpu_cores, remote_instance_name=None, disk_names=None, all_disks=False, image_project="ubuntu-os-cloud", image_family="ubuntu-1604-lts"): """Sets up a Google cloud collector. This method creates and starts an analysis VM in the analysis project and selects disks to copy from the remote project. If disk_names is specified, it will copy the corresponding disks from the project, ignoring disks belonging to any specific instances. If remote_instance_name is specified, two behaviors are possible: - If no other parameters are specified, it will select the instance's boot disk - if all_disks is set to True, it will select all disks in the project that are attached to the instance disk_names takes precedence over instance_names Args: analysis_project_name: The name of the project that contains the analysis VM (string). remote_project_name: The name of the remote project where the disks must be copied from (string). incident_id: The incident ID on which the name of the analysis VM will be based (string). zone: The zone in which new resources should be created (string). boot_disk_size: The size of the analysis VM boot disk (in GB) (float). cpu_cores: The number of CPU cores to create the machine with. remote_instance_name: The name of the instance in the remote project containing the disks to be copied (string). disk_names: Comma separated string with disk names to copy (string). all_disks: Copy all disks attached to the source instance (bool). image_project: Name of the project where the analysis VM image is hosted. image_family: Name of the image to use to create the analysis VM. 
""" disk_names = disk_names.split(",") if disk_names else [] self.analysis_project = libcloudforensics.GoogleCloudProject( analysis_project_name, default_zone=zone) remote_project = libcloudforensics.GoogleCloudProject( remote_project_name) if not (remote_instance_name or disk_names): self.state.add_error( "You need to specify at least an instance name or disks to copy", critical=True) return self.incident_id = incident_id analysis_vm_name = "gcp-forensics-vm-{0:s}".format(incident_id) print("Your analysis VM will be: {0:s}".format(analysis_vm_name)) print("Complimentary gcloud command:") print("gcloud compute ssh --project {0:s} {1:s} --zone {2:s}".format( analysis_project_name, analysis_vm_name, zone)) try: # TODO: Make creating an analysis VM optional # pylint: disable=too-many-function-args self.analysis_vm, _ = libcloudforensics.start_analysis_vm( self.analysis_project.project_id, analysis_vm_name, zone, boot_disk_size, int(cpu_cores), attach_disk=None, image_project=image_project, image_family=image_family) if disk_names: for name in disk_names: try: self.disks_to_copy.append(remote_project.get_disk(name)) except RuntimeError: self.state.add_error( "Disk '{0:s}' was not found in project {1:s}".format( name, remote_project_name), critical=True) break elif remote_instance_name: remote_instance = remote_project.get_instance( remote_instance_name) if all_disks: self.disks_to_copy = [ remote_project.get_disk(disk_name) for disk_name in remote_instance.list_disks() ] else: self.disks_to_copy = [remote_instance.get_boot_disk()] if not self.disks_to_copy: self.state.add_error("Could not find any disks to copy", critical=True) except AccessTokenRefreshError as err: self.state.add_error("Something is wrong with your gcloud access token.") self.state.add_error(err, critical=True) except ApplicationDefaultCredentialsError as err: self.state.add_error("Something is wrong with your Application Default " "Credentials. 
Try running:\n" " $ gcloud auth application-default login") self.state.add_error(err, critical=True) except HttpError as err: if err.resp.status == 403: self.state.add_error( "Make sure you have the appropriate permissions on the project") if err.resp.status == 404: self.state.add_error( "GCP resource not found. Maybe a typo in the project / instance / " "disk name?") self.state.add_error(err, critical=True)
Sets up a Google cloud collector. This method creates and starts an analysis VM in the analysis project and selects disks to copy from the remote project. If disk_names is specified, it will copy the corresponding disks from the project, ignoring disks belonging to any specific instances. If remote_instance_name is specified, two behaviors are possible: - If no other parameters are specified, it will select the instance's boot disk - if all_disks is set to True, it will select all disks in the project that are attached to the instance disk_names takes precedence over instance_names Args: analysis_project_name: The name of the project that contains the analysis VM (string). remote_project_name: The name of the remote project where the disks must be copied from (string). incident_id: The incident ID on which the name of the analysis VM will be based (string). zone: The zone in which new resources should be created (string). boot_disk_size: The size of the analysis VM boot disk (in GB) (float). cpu_cores: The number of CPU cores to create the machine with. remote_instance_name: The name of the instance in the remote project containing the disks to be copied (string). disk_names: Comma separated string with disk names to copy (string). all_disks: Copy all disks attached to the source instance (bool). image_project: Name of the project where the analysis VM image is hosted. image_family: Name of the image to use to create the analysis VM.
Below is the instruction that describes the task: ### Input: Sets up a Google cloud collector. This method creates and starts an analysis VM in the analysis project and selects disks to copy from the remote project. If disk_names is specified, it will copy the corresponding disks from the project, ignoring disks belonging to any specific instances. If remote_instance_name is specified, two behaviors are possible: - If no other parameters are specified, it will select the instance's boot disk - if all_disks is set to True, it will select all disks in the project that are attached to the instance disk_names takes precedence over instance_names Args: analysis_project_name: The name of the project that contains the analysis VM (string). remote_project_name: The name of the remote project where the disks must be copied from (string). incident_id: The incident ID on which the name of the analysis VM will be based (string). zone: The zone in which new resources should be created (string). boot_disk_size: The size of the analysis VM boot disk (in GB) (float). cpu_cores: The number of CPU cores to create the machine with. remote_instance_name: The name of the instance in the remote project containing the disks to be copied (string). disk_names: Comma separated string with disk names to copy (string). all_disks: Copy all disks attached to the source instance (bool). image_project: Name of the project where the analysis VM image is hosted. image_family: Name of the image to use to create the analysis VM. ### Response: def setup(self, analysis_project_name, remote_project_name, incident_id, zone, boot_disk_size, cpu_cores, remote_instance_name=None, disk_names=None, all_disks=False, image_project="ubuntu-os-cloud", image_family="ubuntu-1604-lts"): """Sets up a Google cloud collector. This method creates and starts an analysis VM in the analysis project and selects disks to copy from the remote project.
If disk_names is specified, it will copy the corresponding disks from the project, ignoring disks belonging to any specific instances. If remote_instance_name is specified, two behaviors are possible: - If no other parameters are specified, it will select the instance's boot disk - if all_disks is set to True, it will select all disks in the project that are attached to the instance disk_names takes precedence over instance_names Args: analysis_project_name: The name of the project that contains the analysis VM (string). remote_project_name: The name of the remote project where the disks must be copied from (string). incident_id: The incident ID on which the name of the analysis VM will be based (string). zone: The zone in which new resources should be created (string). boot_disk_size: The size of the analysis VM boot disk (in GB) (float). cpu_cores: The number of CPU cores to create the machine with. remote_instance_name: The name of the instance in the remote project containing the disks to be copied (string). disk_names: Comma separated string with disk names to copy (string). all_disks: Copy all disks attached to the source instance (bool). image_project: Name of the project where the analysis VM image is hosted. image_family: Name of the image to use to create the analysis VM. 
""" disk_names = disk_names.split(",") if disk_names else [] self.analysis_project = libcloudforensics.GoogleCloudProject( analysis_project_name, default_zone=zone) remote_project = libcloudforensics.GoogleCloudProject( remote_project_name) if not (remote_instance_name or disk_names): self.state.add_error( "You need to specify at least an instance name or disks to copy", critical=True) return self.incident_id = incident_id analysis_vm_name = "gcp-forensics-vm-{0:s}".format(incident_id) print("Your analysis VM will be: {0:s}".format(analysis_vm_name)) print("Complimentary gcloud command:") print("gcloud compute ssh --project {0:s} {1:s} --zone {2:s}".format( analysis_project_name, analysis_vm_name, zone)) try: # TODO: Make creating an analysis VM optional # pylint: disable=too-many-function-args self.analysis_vm, _ = libcloudforensics.start_analysis_vm( self.analysis_project.project_id, analysis_vm_name, zone, boot_disk_size, int(cpu_cores), attach_disk=None, image_project=image_project, image_family=image_family) if disk_names: for name in disk_names: try: self.disks_to_copy.append(remote_project.get_disk(name)) except RuntimeError: self.state.add_error( "Disk '{0:s}' was not found in project {1:s}".format( name, remote_project_name), critical=True) break elif remote_instance_name: remote_instance = remote_project.get_instance( remote_instance_name) if all_disks: self.disks_to_copy = [ remote_project.get_disk(disk_name) for disk_name in remote_instance.list_disks() ] else: self.disks_to_copy = [remote_instance.get_boot_disk()] if not self.disks_to_copy: self.state.add_error("Could not find any disks to copy", critical=True) except AccessTokenRefreshError as err: self.state.add_error("Something is wrong with your gcloud access token.") self.state.add_error(err, critical=True) except ApplicationDefaultCredentialsError as err: self.state.add_error("Something is wrong with your Application Default " "Credentials. 
Try running:\n" " $ gcloud auth application-default login") self.state.add_error(err, critical=True) except HttpError as err: if err.resp.status == 403: self.state.add_error( "Make sure you have the appropriate permissions on the project") if err.resp.status == 404: self.state.add_error( "GCP resource not found. Maybe a typo in the project / instance / " "disk name?") self.state.add_error(err, critical=True)
def create_summary_metadata(display_name, description, num_thresholds): """Create a `summary_pb2.SummaryMetadata` proto for pr_curves plugin data. Arguments: display_name: The display name used in TensorBoard. description: The description to show in TensorBoard. num_thresholds: The number of thresholds to use for PR curves. Returns: A `summary_pb2.SummaryMetadata` protobuf object. """ pr_curve_plugin_data = plugin_data_pb2.PrCurvePluginData( version=PROTO_VERSION, num_thresholds=num_thresholds) content = pr_curve_plugin_data.SerializeToString() return summary_pb2.SummaryMetadata( display_name=display_name, summary_description=description, plugin_data=summary_pb2.SummaryMetadata.PluginData( plugin_name=PLUGIN_NAME, content=content))
Create a `summary_pb2.SummaryMetadata` proto for pr_curves plugin data. Arguments: display_name: The display name used in TensorBoard. description: The description to show in TensorBoard. num_thresholds: The number of thresholds to use for PR curves. Returns: A `summary_pb2.SummaryMetadata` protobuf object.
Below is the instruction that describes the task: ### Input: Create a `summary_pb2.SummaryMetadata` proto for pr_curves plugin data. Arguments: display_name: The display name used in TensorBoard. description: The description to show in TensorBoard. num_thresholds: The number of thresholds to use for PR curves. Returns: A `summary_pb2.SummaryMetadata` protobuf object. ### Response: def create_summary_metadata(display_name, description, num_thresholds): """Create a `summary_pb2.SummaryMetadata` proto for pr_curves plugin data. Arguments: display_name: The display name used in TensorBoard. description: The description to show in TensorBoard. num_thresholds: The number of thresholds to use for PR curves. Returns: A `summary_pb2.SummaryMetadata` protobuf object. """ pr_curve_plugin_data = plugin_data_pb2.PrCurvePluginData( version=PROTO_VERSION, num_thresholds=num_thresholds) content = pr_curve_plugin_data.SerializeToString() return summary_pb2.SummaryMetadata( display_name=display_name, summary_description=description, plugin_data=summary_pb2.SummaryMetadata.PluginData( plugin_name=PLUGIN_NAME, content=content))
def get_function_config(cfg): """Check whether a function exists or not and return its config""" function_name = cfg.get('function_name') profile_name = cfg.get('profile') aws_access_key_id = cfg.get('aws_access_key_id') aws_secret_access_key = cfg.get('aws_secret_access_key') client = get_client( 'lambda', profile_name, aws_access_key_id, aws_secret_access_key, cfg.get('region'), ) try: return client.get_function(FunctionName=function_name) except client.exceptions.ResourceNotFoundException as e: if 'Function not found' in str(e): return False
Check whether a function exists or not and return its config
Below is the instruction that describes the task: ### Input: Check whether a function exists or not and return its config ### Response: def get_function_config(cfg): """Check whether a function exists or not and return its config""" function_name = cfg.get('function_name') profile_name = cfg.get('profile') aws_access_key_id = cfg.get('aws_access_key_id') aws_secret_access_key = cfg.get('aws_secret_access_key') client = get_client( 'lambda', profile_name, aws_access_key_id, aws_secret_access_key, cfg.get('region'), ) try: return client.get_function(FunctionName=function_name) except client.exceptions.ResourceNotFoundException as e: if 'Function not found' in str(e): return False
def add(self, selected: 'SelectedMailbox', *, replace: 'SelectedMailbox' = None) -> None: """Add a new selected mailbox object to the set, which may then be returned by :meth:`.any_selected`. Args: selected: The new selected mailbox object. replace: An existing selected mailbox object that should be removed from the weak set. """ if replace is not None: self._set.discard(replace) self._set.add(selected)
Add a new selected mailbox object to the set, which may then be returned by :meth:`.any_selected`. Args: selected: The new selected mailbox object. replace: An existing selected mailbox object that should be removed from the weak set.
Below is the instruction that describes the task: ### Input: Add a new selected mailbox object to the set, which may then be returned by :meth:`.any_selected`. Args: selected: The new selected mailbox object. replace: An existing selected mailbox object that should be removed from the weak set. ### Response: def add(self, selected: 'SelectedMailbox', *, replace: 'SelectedMailbox' = None) -> None: """Add a new selected mailbox object to the set, which may then be returned by :meth:`.any_selected`. Args: selected: The new selected mailbox object. replace: An existing selected mailbox object that should be removed from the weak set. """ if replace is not None: self._set.discard(replace) self._set.add(selected)
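The `replace` pattern in the `add` record above can be sketched with a stand-in class; `Selected` and the attribute names here are illustrative, not the actual mailbox API:

```python
import weakref


class Selected:
    """Stand-in for a selected-mailbox object (illustrative only)."""


class SelectedSet:
    """Weakly tracks live selections; a replaced entry is discarded first."""

    def __init__(self):
        self._set = weakref.WeakSet()

    def add(self, selected, *, replace=None):
        # Discard the superseded selection before tracking the new one.
        if replace is not None:
            self._set.discard(replace)
        self._set.add(selected)

    def __len__(self):
        return len(self._set)


registry = SelectedSet()
old, new = Selected(), Selected()
registry.add(old)
registry.add(new, replace=old)  # old is dropped before new is tracked
count_after_replace = len(registry)
```

Because the set holds weak references, entries also vanish automatically once nothing else references them.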
def drop_keyspace(keyspace, contact_points=None, port=None, cql_user=None, cql_pass=None): ''' Drop a keyspace if it exists in a Cassandra cluster. :param keyspace: The keyspace to drop. :type keyspace: str :param contact_points: The Cassandra cluster addresses, can either be a string or a list of IPs. :type contact_points: str | list[str] :param cql_user: The Cassandra user if authentication is turned on. :type cql_user: str :param cql_pass: The Cassandra user password if authentication is turned on. :type cql_pass: str :param port: The Cassandra cluster port, defaults to None. :type port: int :return: The info for the keyspace or False if it does not exist. :rtype: dict CLI Example: .. code-block:: bash salt 'minion1' cassandra_cql.drop_keyspace keyspace=test salt 'minion1' cassandra_cql.drop_keyspace keyspace=test contact_points=minion1 ''' existing_keyspace = keyspace_exists(keyspace, contact_points, port) if existing_keyspace: query = '''drop keyspace {0};'''.format(keyspace) try: cql_query(query, contact_points, port, cql_user, cql_pass) except CommandExecutionError: log.critical('Could not drop keyspace.') raise except BaseException as e: log.critical('Unexpected error while dropping keyspace: %s', e) raise return True
Drop a keyspace if it exists in a Cassandra cluster. :param keyspace: The keyspace to drop. :type keyspace: str :param contact_points: The Cassandra cluster addresses, can either be a string or a list of IPs. :type contact_points: str | list[str] :param cql_user: The Cassandra user if authentication is turned on. :type cql_user: str :param cql_pass: The Cassandra user password if authentication is turned on. :type cql_pass: str :param port: The Cassandra cluster port, defaults to None. :type port: int :return: The info for the keyspace or False if it does not exist. :rtype: dict CLI Example: .. code-block:: bash salt 'minion1' cassandra_cql.drop_keyspace keyspace=test salt 'minion1' cassandra_cql.drop_keyspace keyspace=test contact_points=minion1
Below is the instruction that describes the task: ### Input: Drop a keyspace if it exists in a Cassandra cluster. :param keyspace: The keyspace to drop. :type keyspace: str :param contact_points: The Cassandra cluster addresses, can either be a string or a list of IPs. :type contact_points: str | list[str] :param cql_user: The Cassandra user if authentication is turned on. :type cql_user: str :param cql_pass: The Cassandra user password if authentication is turned on. :type cql_pass: str :param port: The Cassandra cluster port, defaults to None. :type port: int :return: The info for the keyspace or False if it does not exist. :rtype: dict CLI Example: .. code-block:: bash salt 'minion1' cassandra_cql.drop_keyspace keyspace=test salt 'minion1' cassandra_cql.drop_keyspace keyspace=test contact_points=minion1 ### Response: def drop_keyspace(keyspace, contact_points=None, port=None, cql_user=None, cql_pass=None): ''' Drop a keyspace if it exists in a Cassandra cluster. :param keyspace: The keyspace to drop. :type keyspace: str :param contact_points: The Cassandra cluster addresses, can either be a string or a list of IPs. :type contact_points: str | list[str] :param cql_user: The Cassandra user if authentication is turned on. :type cql_user: str :param cql_pass: The Cassandra user password if authentication is turned on. :type cql_pass: str :param port: The Cassandra cluster port, defaults to None. :type port: int :return: The info for the keyspace or False if it does not exist. :rtype: dict CLI Example: .. code-block:: bash salt 'minion1' cassandra_cql.drop_keyspace keyspace=test salt 'minion1' cassandra_cql.drop_keyspace keyspace=test contact_points=minion1 ''' existing_keyspace = keyspace_exists(keyspace, contact_points, port) if existing_keyspace: query = '''drop keyspace {0};'''.format(keyspace) try: cql_query(query, contact_points, port, cql_user, cql_pass) except CommandExecutionError: log.critical('Could not drop keyspace.') raise except BaseException as e: log.critical('Unexpected error while dropping keyspace: %s', e) raise return True
def rigthgen(self, value=0): """Generate rows to fill right pixels in int mode""" while True: yield self.newarray(self.nplanes_right * self.width, value)
Generate rows to fill right pixels in int mode
Below is the instruction that describes the task: ### Input: Generate rows to fill right pixels in int mode ### Response: def rigthgen(self, value=0): """Generate rows to fill right pixels in int mode""" while True: yield self.newarray(self.nplanes_right * self.width, value)
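The infinite-generator pattern in `rigthgen` above can be illustrated with a dependency-free sketch; a plain list stands in for the typed array that `newarray` returns, and `islice` bounds the otherwise endless stream:

```python
from itertools import islice


def row_filler(width, nplanes, value=0):
    """Endlessly yield constant fill rows of nplanes * width values."""
    while True:
        yield [value] * (nplanes * width)


# Take three fill rows from the infinite stream.
rows = list(islice(row_filler(width=4, nplanes=2, value=0), 3))
```

Consumers pull only as many rows as they need, so the generator never allocates the whole image at once.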
def generate_random(featureType, numberVertices=3, boundingBox=[-180.0, -90.0, 180.0, 90.0]): """ Generates random geojson features depending on the parameters passed through. The bounding box defaults to the world - [-180.0, -90.0, 180.0, 90.0]. The number of vertices defaults to 3. :param featureType: A geometry type :type featureType: Point, LineString, Polygon :param numberVertices: The number vertices that a linestring or polygon will have :type numberVertices: int :param boundingBox: A bounding box in which features will be restricted to :type boundingBox: list :return: The resulting random geojson object or geometry collection. :rtype: object :raises ValueError: if there is no featureType provided. """ from geojson import Point, LineString, Polygon import random import math lonMin = boundingBox[0] lonMax = boundingBox[2] def randomLon(): return random.uniform(lonMin, lonMax) latMin = boundingBox[1] latMax = boundingBox[3] def randomLat(): return random.uniform(latMin, latMax) def createPoint(): return Point((randomLon(), randomLat())) def createLine(): return LineString([createPoint() for unused in range(numberVertices)]) def createPoly(): aveRadius = 60 ctrX = 0.1 ctrY = 0.2 irregularity = clip(0.1, 0, 1) * 2 * math.pi / numberVertices spikeyness = clip(0.5, 0, 1) * aveRadius angleSteps = [] lower = (2 * math.pi / numberVertices) - irregularity upper = (2 * math.pi / numberVertices) + irregularity sum = 0 for i in range(numberVertices): tmp = random.uniform(lower, upper) angleSteps.append(tmp) sum = sum + tmp k = sum / (2 * math.pi) for i in range(numberVertices): angleSteps[i] = angleSteps[i] / k points = [] angle = random.uniform(0, 2 * math.pi) for i in range(numberVertices): r_i = clip(random.gauss(aveRadius, spikeyness), 0, 2 * aveRadius) x = ctrX + r_i * math.cos(angle) y = ctrY + r_i * math.sin(angle) points.append((int(x), int(y))) angle = angle + angleSteps[i] firstVal = points[0] points.append(firstVal) return Polygon([points]) def clip(x, min, max): if(min > max): return x elif(x < min): return min elif(x > max): return max else: return x if featureType == 'Point': return createPoint() if featureType == 'LineString': return createLine() if featureType == 'Polygon': return createPoly()
Generates random geojson features depending on the parameters passed through. The bounding box defaults to the world - [-180.0, -90.0, 180.0, 90.0]. The number of vertices defaults to 3. :param featureType: A geometry type :type featureType: Point, LineString, Polygon :param numberVertices: The number vertices that a linestring or polygon will have :type numberVertices: int :param boundingBox: A bounding box in which features will be restricted to :type boundingBox: list :return: The resulting random geojson object or geometry collection. :rtype: object :raises ValueError: if there is no featureType provided.
Below is the instruction that describes the task: ### Input: Generates random geojson features depending on the parameters passed through. The bounding box defaults to the world - [-180.0, -90.0, 180.0, 90.0]. The number of vertices defaults to 3. :param featureType: A geometry type :type featureType: Point, LineString, Polygon :param numberVertices: The number vertices that a linestring or polygon will have :type numberVertices: int :param boundingBox: A bounding box in which features will be restricted to :type boundingBox: list :return: The resulting random geojson object or geometry collection. :rtype: object :raises ValueError: if there is no featureType provided. ### Response: def generate_random(featureType, numberVertices=3, boundingBox=[-180.0, -90.0, 180.0, 90.0]): """ Generates random geojson features depending on the parameters passed through. The bounding box defaults to the world - [-180.0, -90.0, 180.0, 90.0]. The number of vertices defaults to 3. :param featureType: A geometry type :type featureType: Point, LineString, Polygon :param numberVertices: The number vertices that a linestring or polygon will have :type numberVertices: int :param boundingBox: A bounding box in which features will be restricted to :type boundingBox: list :return: The resulting random geojson object or geometry collection. :rtype: object :raises ValueError: if there is no featureType provided. """ from geojson import Point, LineString, Polygon import random import math lonMin = boundingBox[0] lonMax = boundingBox[2] def randomLon(): return random.uniform(lonMin, lonMax) latMin = boundingBox[1] latMax = boundingBox[3] def randomLat(): return random.uniform(latMin, latMax) def createPoint(): return Point((randomLon(), randomLat())) def createLine(): return LineString([createPoint() for unused in range(numberVertices)]) def createPoly(): aveRadius = 60 ctrX = 0.1 ctrY = 0.2 irregularity = clip(0.1, 0, 1) * 2 * math.pi / numberVertices spikeyness = clip(0.5, 0, 1) * aveRadius angleSteps = [] lower = (2 * math.pi / numberVertices) - irregularity upper = (2 * math.pi / numberVertices) + irregularity sum = 0 for i in range(numberVertices): tmp = random.uniform(lower, upper) angleSteps.append(tmp) sum = sum + tmp k = sum / (2 * math.pi) for i in range(numberVertices): angleSteps[i] = angleSteps[i] / k points = [] angle = random.uniform(0, 2 * math.pi) for i in range(numberVertices): r_i = clip(random.gauss(aveRadius, spikeyness), 0, 2 * aveRadius) x = ctrX + r_i * math.cos(angle) y = ctrY + r_i * math.sin(angle) points.append((int(x), int(y))) angle = angle + angleSteps[i] firstVal = points[0] points.append(firstVal) return Polygon([points]) def clip(x, min, max): if(min > max): return x elif(x < min): return min elif(x > max): return max else: return x if featureType == 'Point': return createPoint() if featureType == 'LineString': return createLine() if featureType == 'Polygon': return createPoly()
def add(self, client): """Add a client to the penalty box.""" if client.pool_id in self._client_ids: log.info("%r is already in the penalty box. Ignoring.", client) return release = time.time() + self._min_wait heapq.heappush(self._clients, (release, (client, self._min_wait))) self._client_ids.add(client.pool_id)
Add a client to the penalty box.
Below is the instruction that describes the task: ### Input: Add a client to the penalty box. ### Response: def add(self, client): """Add a client to the penalty box.""" if client.pool_id in self._client_ids: log.info("%r is already in the penalty box. Ignoring.", client) return release = time.time() + self._min_wait heapq.heappush(self._clients, (release, (client, self._min_wait))) self._client_ids.add(client.pool_id)
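The penalty-box bookkeeping above (a heap ordered by release time plus a set for de-duplication) can be sketched without real client objects; the names and the injectable `now` parameter are illustrative:

```python
import heapq
import time


class PenaltyBox:
    """Holds clients for a minimum wait; duplicate adds are ignored."""

    def __init__(self, min_wait=10.0):
        self._min_wait = min_wait
        self._clients = []      # heap of (release_time, client_id)
        self._client_ids = set()

    def add(self, client_id, now=None):
        if client_id in self._client_ids:
            return              # already serving a penalty
        now = time.time() if now is None else now
        heapq.heappush(self._clients, (now + self._min_wait, client_id))
        self._client_ids.add(client_id)


box = PenaltyBox(min_wait=5.0)
box.add("client-a", now=100.0)
box.add("client-a", now=101.0)  # ignored: already boxed
release, cid = box._clients[0]
```

The heap keeps the next client to release at index 0, so draining in order is a sequence of `heappop` calls.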
def _eigsorted(cov, asc=True): """ Computes eigenvalues and eigenvectors of a covariance matrix and returns them sorted by eigenvalue. Parameters ---------- cov : ndarray covariance matrix asc : bool, default=True determines whether we are sorted smallest to largest (asc=True), or largest to smallest (asc=False) Returns ------- eigval : 1D ndarray eigenvalues of covariance ordered largest to smallest eigvec : 2D ndarray eigenvectors of covariance matrix ordered to match `eigval` ordering. I.e eigvec[:, 0] is the rotation vector for eigval[0] """ eigval, eigvec = np.linalg.eigh(cov) order = eigval.argsort() if not asc: # sort largest to smallest order = order[::-1] return eigval[order], eigvec[:, order]
Computes eigenvalues and eigenvectors of a covariance matrix and returns them sorted by eigenvalue. Parameters ---------- cov : ndarray covariance matrix asc : bool, default=True determines whether we are sorted smallest to largest (asc=True), or largest to smallest (asc=False) Returns ------- eigval : 1D ndarray eigenvalues of covariance ordered largest to smallest eigvec : 2D ndarray eigenvectors of covariance matrix ordered to match `eigval` ordering. I.e eigvec[:, 0] is the rotation vector for eigval[0]
Below is the instruction that describes the task: ### Input: Computes eigenvalues and eigenvectors of a covariance matrix and returns them sorted by eigenvalue. Parameters ---------- cov : ndarray covariance matrix asc : bool, default=True determines whether we are sorted smallest to largest (asc=True), or largest to smallest (asc=False) Returns ------- eigval : 1D ndarray eigenvalues of covariance ordered largest to smallest eigvec : 2D ndarray eigenvectors of covariance matrix ordered to match `eigval` ordering. I.e eigvec[:, 0] is the rotation vector for eigval[0] ### Response: def _eigsorted(cov, asc=True): """ Computes eigenvalues and eigenvectors of a covariance matrix and returns them sorted by eigenvalue. Parameters ---------- cov : ndarray covariance matrix asc : bool, default=True determines whether we are sorted smallest to largest (asc=True), or largest to smallest (asc=False) Returns ------- eigval : 1D ndarray eigenvalues of covariance ordered largest to smallest eigvec : 2D ndarray eigenvectors of covariance matrix ordered to match `eigval` ordering. I.e eigvec[:, 0] is the rotation vector for eigval[0] """ eigval, eigvec = np.linalg.eigh(cov) order = eigval.argsort() if not asc: # sort largest to smallest order = order[::-1] return eigval[order], eigvec[:, order]
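The sort-by-eigenvalue idea in `_eigsorted` above can be shown without NumPy for the symmetric 2x2 case, where the eigenvalues of [[a, b], [b, c]] have the closed form (a + c)/2 +- sqrt(((a - c)/2)^2 + b^2):

```python
import math


def eigsorted_2x2(a, b, c, asc=True):
    """Eigenvalues of the symmetric matrix [[a, b], [b, c]], sorted."""
    mean = (a + c) / 2.0
    radius = math.hypot((a - c) / 2.0, b)
    eigvals = [mean - radius, mean + radius]  # already ascending
    return eigvals if asc else eigvals[::-1]


# Diagonal matrix [[2, 0], [0, 5]]: eigenvalues are exactly 2 and 5.
vals = eigsorted_2x2(2.0, 0.0, 5.0)
```

`np.linalg.eigh` already returns eigenvalues in ascending order, so the `argsort` in the original mainly serves the `asc=False` reversal.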
def search(self, value): """Find an element in the tree""" if self.payload == value: return self else: if value <= self.payload: if self.left: return self.left.search(value) else: if self.right: return self.right.search(value) return None
Find an element in the tree
Below is the instruction that describes the task: ### Input: Find an element in the tree ### Response: def search(self, value): """Find an element in the tree""" if self.payload == value: return self else: if value <= self.payload: if self.left: return self.left.search(value) else: if self.right: return self.right.search(value) return None
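The recursive BST lookup above, restated as a small runnable example with early returns (the `Node` constructor here is assumed, not part of the original source):

```python
class Node:
    """Minimal binary search tree node with recursive search."""

    def __init__(self, payload, left=None, right=None):
        self.payload = payload
        self.left = left
        self.right = right

    def search(self, value):
        if self.payload == value:
            return self
        # Smaller-or-equal values live in the left subtree, larger on the right.
        if value <= self.payload:
            return self.left.search(value) if self.left else None
        return self.right.search(value) if self.right else None


root = Node(8, left=Node(3, left=Node(1)), right=Node(10))
hit = root.search(1)
miss = root.search(7)
```

A hit returns the node itself (so callers can read or mutate its payload); a miss returns `None`.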
def indexables(self): """ create/cache the indexables if they don't exist """ if self._indexables is None: self._indexables = [] # index columns self._indexables.extend([ IndexCol(name=name, axis=axis, pos=i) for i, (axis, name) in enumerate(self.attrs.index_cols) ]) # values columns dc = set(self.data_columns) base_pos = len(self._indexables) def f(i, c): klass = DataCol if c in dc: klass = DataIndexableCol return klass.create_for_block(i=i, name=c, pos=base_pos + i, version=self.version) self._indexables.extend( [f(i, c) for i, c in enumerate(self.attrs.values_cols)]) return self._indexables
create/cache the indexables if they don't exist
Below is the instruction that describes the task: ### Input: create/cache the indexables if they don't exist ### Response: def indexables(self): """ create/cache the indexables if they don't exist """ if self._indexables is None: self._indexables = [] # index columns self._indexables.extend([ IndexCol(name=name, axis=axis, pos=i) for i, (axis, name) in enumerate(self.attrs.index_cols) ]) # values columns dc = set(self.data_columns) base_pos = len(self._indexables) def f(i, c): klass = DataCol if c in dc: klass = DataIndexableCol return klass.create_for_block(i=i, name=c, pos=base_pos + i, version=self.version) self._indexables.extend( [f(i, c) for i, c in enumerate(self.attrs.values_cols)]) return self._indexables
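The create-once/cache pattern in `indexables` above is an ordinary lazy property; a minimal sketch with a rebuild counter (the column names are placeholders, not the real pandas internals):

```python
class Table:
    """Build-once accessor: the list is computed lazily and then cached."""

    def __init__(self, index_cols, value_cols):
        self._index_cols = index_cols
        self._value_cols = value_cols
        self._indexables = None
        self.builds = 0  # counts how many times the list was actually built

    @property
    def indexables(self):
        if self._indexables is None:
            self.builds += 1
            # Index columns first, then value columns, mirroring the original.
            self._indexables = list(self._index_cols) + list(self._value_cols)
        return self._indexables


t = Table(["index"], ["values_block_0", "category"])
first = t.indexables
second = t.indexables  # served from cache, no rebuild
```

The sentinel `None` distinguishes "never built" from "built but empty", which an empty-list check could not.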
def load(self, elem): """ Converts the inputted dict tag to Python. :param elem | <xml.etree.ElementTree> :return <dict> """ self.testTag(elem, 'dict') out = {} for xitem in elem: key = xitem.get('key') try: value = XmlDataIO.fromXml(xitem[0]) except IndexError: value = None out[key] = value return out
Converts the inputted dict tag to Python. :param elem | <xml.etree.ElementTree> :return <dict>
Below is the instruction that describes the task: ### Input: Converts the inputted dict tag to Python. :param elem | <xml.etree.ElementTree> :return <dict> ### Response: def load(self, elem): """ Converts the inputted dict tag to Python. :param elem | <xml.etree.ElementTree> :return <dict> """ self.testTag(elem, 'dict') out = {} for xitem in elem: key = xitem.get('key') try: value = XmlDataIO.fromXml(xitem[0]) except IndexError: value = None out[key] = value return out
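The `<dict>` traversal above can be demonstrated end-to-end with the standard library's `xml.etree.ElementTree`; `load_dict` below is a simplified stand-in for the `XmlDataIO` machinery, with child text taking the place of the recursive `fromXml` call:

```python
import xml.etree.ElementTree as ET


def load_dict(elem):
    """Read <dict><item key="...">...</item></dict> into a Python dict."""
    if elem.tag != 'dict':
        raise ValueError('expected a <dict> tag, got <%s>' % elem.tag)
    out = {}
    for xitem in elem:
        # A childless <item/> maps to None, matching the IndexError branch.
        child = xitem[0] if len(xitem) else None
        out[xitem.get('key')] = child.text if child is not None else None
    return out


tree = ET.fromstring(
    '<dict>'
    '<item key="name"><str>alpha</str></item>'
    '<item key="missing"/>'
    '</dict>')
result = load_dict(tree)
```

Iterating an `Element` yields its direct children, and `len(element)` counts them, which is what makes the missing-child check cheap.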
def correlation(df, cm=cm.PuOr_r, vmin=None, vmax=None, labels=None, show_scatter=False): """ Generate a column-wise correlation plot from the provided data. The columns of the supplied dataframes will be correlated (using `analysis.correlation`) to generate a Pearson correlation plot heatmap. Scatter plots of correlated samples can also be generated over the redundant half of the plot to give a visual indication of the protein distribution. :param df: `pandas.DataFrame` :param cm: Matplotlib colormap (default cm.PuOr_r) :param vmin: Minimum value for colormap normalization :param vmax: Maximum value for colormap normalization :param labels: Index column to retrieve labels from :param show_scatter: Show overlaid scatter plots for each sample in lower-left half. Note that this is slow for large numbers of samples. :return: `matplotlib.Figure` generated Figure. """ data = analysis.correlation(df) if labels: for axis in (0,1): data.sort_index(level=labels, axis=axis, inplace=True) data = data.values # Plot the distributions fig = plt.figure(figsize=(10,10)) ax = fig.add_subplot(1,1,1) if vmin is None: vmin = np.nanmin(data) if vmax is None: vmax = np.nanmax(data) n_dims = data.shape[0] # If showing scatter plots, set the inlay portion to np.nan if show_scatter: # Get the triangle, other values will be zeroed idx = np.tril_indices(n_dims) data[idx] = np.nan cm.set_bad('w', 1.) i = ax.imshow(data, cmap=cm, vmin=vmin, vmax=vmax, interpolation='none') fig.colorbar(i) fig.axes[0].grid('off') if show_scatter: figo = mpl.figure.Figure(figsize=(n_dims, n_dims), dpi=300) # Create a dummy Agg canvas so we don't have to display/output this intermediate canvas = FigureCanvasAgg(figo) for x in range(0, n_dims): for y in range(x, n_dims): ax = figo.add_subplot(n_dims, n_dims, y*n_dims+x+1) if x != y: xd = df.values[:, x] yd = df.values[:, y] ax.scatter(xd, yd, lw=0, s=5, c='k', alpha=0.2) ax.grid('off') ax.axis('off') figo.subplots_adjust(left=0, bottom=0, right=1, top=1, wspace=0, hspace=0) raw = BytesIO() figo.savefig(raw, format='png', bbox_inches=0, transparent=True) del figo raw.seek(0) img = mplimg.imread(raw) ax2 = fig.add_axes(fig.axes[0].get_position(), label='image', zorder=1) ax2.axis('off') ax2.imshow(img) if labels: # Build labels from the supplied axis labels = [ df.columns.get_level_values(l) for l in labels ] labels = [" ".join([str(s) for s in l]) for l in zip(*labels) ] fig.axes[0].set_xticks(range(n_dims)) fig.axes[0].set_xticklabels(labels, rotation=45) fig.axes[0].set_yticks(range(n_dims)) fig.axes[0].set_yticklabels(labels) return fig
Generate a column-wise correlation plot from the provided data. The columns of the supplied dataframes will be correlated (using `analysis.correlation`) to generate a Pearson correlation plot heatmap. Scatter plots of correlated samples can also be generated over the redundant half of the plot to give a visual indication of the protein distribution. :param df: `pandas.DataFrame` :param cm: Matplotlib colormap (default cm.PuOr_r) :param vmin: Minimum value for colormap normalization :param vmax: Maximum value for colormap normalization :param labels: Index column to retrieve labels from :param show_scatter: Show overlaid scatter plots for each sample in lower-left half. Note that this is slow for large numbers of samples. :return: `matplotlib.Figure` generated Figure.
Below is the instruction that describes the task: ### Input: Generate a column-wise correlation plot from the provided data. The columns of the supplied dataframes will be correlated (using `analysis.correlation`) to generate a Pearson correlation plot heatmap. Scatter plots of correlated samples can also be generated over the redundant half of the plot to give a visual indication of the protein distribution. :param df: `pandas.DataFrame` :param cm: Matplotlib colormap (default cm.PuOr_r) :param vmin: Minimum value for colormap normalization :param vmax: Maximum value for colormap normalization :param labels: Index column to retrieve labels from :param show_scatter: Show overlaid scatter plots for each sample in lower-left half. Note that this is slow for large numbers of samples. :return: `matplotlib.Figure` generated Figure. ### Response: def correlation(df, cm=cm.PuOr_r, vmin=None, vmax=None, labels=None, show_scatter=False): """ Generate a column-wise correlation plot from the provided data. The columns of the supplied dataframes will be correlated (using `analysis.correlation`) to generate a Pearson correlation plot heatmap. Scatter plots of correlated samples can also be generated over the redundant half of the plot to give a visual indication of the protein distribution. :param df: `pandas.DataFrame` :param cm: Matplotlib colormap (default cm.PuOr_r) :param vmin: Minimum value for colormap normalization :param vmax: Maximum value for colormap normalization :param labels: Index column to retrieve labels from :param show_scatter: Show overlaid scatter plots for each sample in lower-left half. Note that this is slow for large numbers of samples. :return: `matplotlib.Figure` generated Figure. """ data = analysis.correlation(df) if labels: for axis in (0,1): data.sort_index(level=labels, axis=axis, inplace=True) data = data.values # Plot the distributions fig = plt.figure(figsize=(10,10)) ax = fig.add_subplot(1,1,1) if vmin is None: vmin = np.nanmin(data) if vmax is None: vmax = np.nanmax(data) n_dims = data.shape[0] # If showing scatter plots, set the inlay portion to np.nan if show_scatter: # Get the triangle, other values will be zeroed idx = np.tril_indices(n_dims) data[idx] = np.nan cm.set_bad('w', 1.) i = ax.imshow(data, cmap=cm, vmin=vmin, vmax=vmax, interpolation='none') fig.colorbar(i) fig.axes[0].grid('off') if show_scatter: figo = mpl.figure.Figure(figsize=(n_dims, n_dims), dpi=300) # Create a dummy Agg canvas so we don't have to display/output this intermediate canvas = FigureCanvasAgg(figo) for x in range(0, n_dims): for y in range(x, n_dims): ax = figo.add_subplot(n_dims, n_dims, y*n_dims+x+1) if x != y: xd = df.values[:, x] yd = df.values[:, y] ax.scatter(xd, yd, lw=0, s=5, c='k', alpha=0.2) ax.grid('off') ax.axis('off') figo.subplots_adjust(left=0, bottom=0, right=1, top=1, wspace=0, hspace=0) raw = BytesIO() figo.savefig(raw, format='png', bbox_inches=0, transparent=True) del figo raw.seek(0) img = mplimg.imread(raw) ax2 = fig.add_axes(fig.axes[0].get_position(), label='image', zorder=1) ax2.axis('off') ax2.imshow(img) if labels: # Build labels from the supplied axis labels = [ df.columns.get_level_values(l) for l in labels ] labels = [" ".join([str(s) for s in l]) for l in zip(*labels) ] fig.axes[0].set_xticks(range(n_dims)) fig.axes[0].set_xticklabels(labels, rotation=45) fig.axes[0].set_yticks(range(n_dims)) fig.axes[0].set_yticklabels(labels) return fig
def make_parser(defaults=None): """ :param defaults: Default option values """ if defaults is None: defaults = DEFAULTS ctypes = API.list_types() ctypes_s = ", ".join(ctypes) type_help = "Select type of %s config files from " + \ ctypes_s + " [Automatically detected by file ext]" mts = API.MERGE_STRATEGIES mts_s = ", ".join(mts) mt_help = "Select strategy to merge multiple configs from " + \ mts_s + " [%(merge)s]" % defaults parser = argparse.ArgumentParser(usage=USAGE) parser.set_defaults(**defaults) parser.add_argument("inputs", type=str, nargs='*', help="Input files") parser.add_argument("--version", action="version", version="%%(prog)s %s" % anyconfig.globals.VERSION) lpog = parser.add_argument_group("List specific options") lpog.add_argument("-L", "--list", action="store_true", help="List supported config types") spog = parser.add_argument_group("Schema specific options") spog.add_argument("--validate", action="store_true", help="Only validate input files and do not output. " "You must specify schema file with -S/--schema " "option.") spog.add_argument("--gen-schema", action="store_true", help="Generate JSON schema for givne config file[s] " "and output it instead of (merged) configuration.") gspog = parser.add_argument_group("Query/Get/set options") gspog.add_argument("-Q", "--query", help=_QUERY_HELP) gspog.add_argument("--get", help=_GET_HELP) gspog.add_argument("--set", help=_SET_HELP) parser.add_argument("-o", "--output", help="Output file path") parser.add_argument("-I", "--itype", choices=ctypes, metavar="ITYPE", help=(type_help % "Input")) parser.add_argument("-O", "--otype", choices=ctypes, metavar="OTYPE", help=(type_help % "Output")) parser.add_argument("-M", "--merge", choices=mts, metavar="MERGE", help=mt_help) parser.add_argument("-A", "--args", help="Argument configs to override") parser.add_argument("--atype", choices=ctypes, metavar="ATYPE", help=_ATYPE_HELP_FMT % ctypes_s) cpog = parser.add_argument_group("Common options") cpog.add_argument("-x", "--ignore-missing", action="store_true", help="Ignore missing input files") cpog.add_argument("-T", "--template", action="store_true", help="Enable template config support") cpog.add_argument("-E", "--env", action="store_true", help="Load configuration defaults from " "environment values") cpog.add_argument("-S", "--schema", help="Specify Schema file[s] path") cpog.add_argument("-e", "--extra-opts", help="Extra options given to the API call, " "--extra-options indent:2 (specify the " "indent for pretty-printing of JSON outputs) " "for example") cpog.add_argument("-v", "--verbose", action="count", dest="loglevel", help="Verbose mode; -v or -vv (more verbose)") return parser
:param defaults: Default option values
Below is the instruction that describes the task: ### Input: :param defaults: Default option values ### Response: def make_parser(defaults=None): """ :param defaults: Default option values """ if defaults is None: defaults = DEFAULTS ctypes = API.list_types() ctypes_s = ", ".join(ctypes) type_help = "Select type of %s config files from " + \ ctypes_s + " [Automatically detected by file ext]" mts = API.MERGE_STRATEGIES mts_s = ", ".join(mts) mt_help = "Select strategy to merge multiple configs from " + \ mts_s + " [%(merge)s]" % defaults parser = argparse.ArgumentParser(usage=USAGE) parser.set_defaults(**defaults) parser.add_argument("inputs", type=str, nargs='*', help="Input files") parser.add_argument("--version", action="version", version="%%(prog)s %s" % anyconfig.globals.VERSION) lpog = parser.add_argument_group("List specific options") lpog.add_argument("-L", "--list", action="store_true", help="List supported config types") spog = parser.add_argument_group("Schema specific options") spog.add_argument("--validate", action="store_true", help="Only validate input files and do not output. " "You must specify schema file with -S/--schema " "option.") spog.add_argument("--gen-schema", action="store_true", help="Generate JSON schema for givne config file[s] " "and output it instead of (merged) configuration.") gspog = parser.add_argument_group("Query/Get/set options") gspog.add_argument("-Q", "--query", help=_QUERY_HELP) gspog.add_argument("--get", help=_GET_HELP) gspog.add_argument("--set", help=_SET_HELP) parser.add_argument("-o", "--output", help="Output file path") parser.add_argument("-I", "--itype", choices=ctypes, metavar="ITYPE", help=(type_help % "Input")) parser.add_argument("-O", "--otype", choices=ctypes, metavar="OTYPE", help=(type_help % "Output")) parser.add_argument("-M", "--merge", choices=mts, metavar="MERGE", help=mt_help) parser.add_argument("-A", "--args", help="Argument configs to override") parser.add_argument("--atype", choices=ctypes, metavar="ATYPE", help=_ATYPE_HELP_FMT % ctypes_s) cpog = parser.add_argument_group("Common options") cpog.add_argument("-x", "--ignore-missing", action="store_true", help="Ignore missing input files") cpog.add_argument("-T", "--template", action="store_true", help="Enable template config support") cpog.add_argument("-E", "--env", action="store_true", help="Load configuration defaults from " "environment values") cpog.add_argument("-S", "--schema", help="Specify Schema file[s] path") cpog.add_argument("-e", "--extra-opts", help="Extra options given to the API call, " "--extra-options indent:2 (specify the " "indent for pretty-printing of JSON outputs) " "for example") cpog.add_argument("-v", "--verbose", action="count", dest="loglevel", help="Verbose mode; -v or -vv (more verbose)") return parser
def _validate_login(self): ''' a method to validate user can access heroku account ''' title = '%s.validate_login' % self.__class__.__name__ # verbosity windows_insert = ' On windows, run in cmd.exe' self.printer('Checking heroku credentials ... ', flush=True) # validate netrc exists from os import path netrc_path = path.join(self.localhost.home, '.netrc') # TODO verify path exists on Windows if not path.exists(netrc_path): error_msg = '.netrc file is missing. Try: heroku login, then heroku auth:token' if self.localhost.os.sysname in ('Windows'): error_msg += windows_insert self.printer('ERROR.') raise Exception(error_msg) # replace value in netrc netrc_text = self._update_netrc(netrc_path, self.token, self.email) # verify remote access def handle_invalid(stdout, proc): # define process closing helper def _close_process(_proc): # close process import psutil process = psutil.Process(_proc.pid) for proc in process.children(recursive=True): proc.kill() process.kill() # restore values to netrc with open(netrc_path, 'wt') as f: f.write(netrc_text) f.close() # invalid credentials if stdout.find('Invalid credentials') > -1: _close_process(proc) self.printer('ERROR.') raise Exception('Permission denied. Heroku auth token is not valid.\nTry: "heroku login", then "heroku auth:token"') sys_command = 'heroku apps --json' response = self._handle_command(sys_command, interactive=handle_invalid, handle_error=True) if response.find('Warning: heroku update') > -1: self.printer('WARNING: heroku update available.') self.printer('Try: npm install -g -U heroku\nor see https://devcenter.heroku.com/articles/heroku-cli#staying-up-to-date') self.printer('Checking heroku credentials ... ') response_lines = response.splitlines() response = '\n'.join(response_lines[1:]) # add list to object import json try: self.apps = json.loads(response) except: self.printer('ERROR.') raise Exception(response) self.printer('done.') return self
a method to validate user can access heroku account
Below is the instruction that describes the task: ### Input: a method to validate user can access heroku account ### Response: def _validate_login(self): ''' a method to validate user can access heroku account ''' title = '%s.validate_login' % self.__class__.__name__ # verbosity windows_insert = ' On windows, run in cmd.exe' self.printer('Checking heroku credentials ... ', flush=True) # validate netrc exists from os import path netrc_path = path.join(self.localhost.home, '.netrc') # TODO verify path exists on Windows if not path.exists(netrc_path): error_msg = '.netrc file is missing. Try: heroku login, then heroku auth:token' if self.localhost.os.sysname in ('Windows'): error_msg += windows_insert self.printer('ERROR.') raise Exception(error_msg) # replace value in netrc netrc_text = self._update_netrc(netrc_path, self.token, self.email) # verify remote access def handle_invalid(stdout, proc): # define process closing helper def _close_process(_proc): # close process import psutil process = psutil.Process(_proc.pid) for proc in process.children(recursive=True): proc.kill() process.kill() # restore values to netrc with open(netrc_path, 'wt') as f: f.write(netrc_text) f.close() # invalid credentials if stdout.find('Invalid credentials') > -1: _close_process(proc) self.printer('ERROR.') raise Exception('Permission denied. Heroku auth token is not valid.\nTry: "heroku login", then "heroku auth:token"') sys_command = 'heroku apps --json' response = self._handle_command(sys_command, interactive=handle_invalid, handle_error=True) if response.find('Warning: heroku update') > -1: self.printer('WARNING: heroku update available.') self.printer('Try: npm install -g -U heroku\nor see https://devcenter.heroku.com/articles/heroku-cli#staying-up-to-date') self.printer('Checking heroku credentials ... ') response_lines = response.splitlines() response = '\n'.join(response_lines[1:]) # add list to object import json try: self.apps = json.loads(response) except: self.printer('ERROR.') raise Exception(response) self.printer('done.') return self
def restore_node(self, node): """ Restores a previously hidden node back into the graph and restores all of its incoming and outgoing edges. """ try: self.nodes[node], all_edges = self.hidden_nodes[node] for edge in all_edges: self.restore_edge(edge) del self.hidden_nodes[node] except KeyError: raise GraphError('Invalid node %s' % node)
Restores a previously hidden node back into the graph and restores all of its incoming and outgoing edges.
Below is the instruction that describes the task: ### Input: Restores a previously hidden node back into the graph and restores all of its incoming and outgoing edges. ### Response: def restore_node(self, node): """ Restores a previously hidden node back into the graph and restores all of its incoming and outgoing edges. """ try: self.nodes[node], all_edges = self.hidden_nodes[node] for edge in all_edges: self.restore_edge(edge) del self.hidden_nodes[node] except KeyError: raise GraphError('Invalid node %s' % node)
def stateDict(self): """Saves internal values to be loaded later :returns: dict -- {'parametername': value, ...} """ state = { 'duration' : self._duration, 'intensity' : self._intensity, 'risefall' : self._risefall, 'stim_type' : self.name } return state
Saves internal values to be loaded later :returns: dict -- {'parametername': value, ...}
Below is the instruction that describes the task: ### Input: Saves internal values to be loaded later :returns: dict -- {'parametername': value, ...} ### Response: def stateDict(self): """Saves internal values to be loaded later :returns: dict -- {'parametername': value, ...} """ state = { 'duration' : self._duration, 'intensity' : self._intensity, 'risefall' : self._risefall, 'stim_type' : self.name } return state
def create_blueprint(endpoints): """Create Invenio-Records-REST blueprint. :params endpoints: Dictionary representing the endpoints configuration. :returns: Configured blueprint. """ endpoints = endpoints or {} blueprint = Blueprint( 'invenio_records_rest', __name__, url_prefix='', ) error_handlers_registry = defaultdict(dict) for endpoint, options in endpoints.items(): error_handlers = options.pop('error_handlers', {}) for rule in create_url_rules(endpoint, **options): for exc_or_code, handler in error_handlers.items(): view_name = rule['view_func'].__name__ error_handlers_registry[exc_or_code][view_name] = handler blueprint.add_url_rule(**rule) return create_error_handlers(blueprint, error_handlers_registry)
Create Invenio-Records-REST blueprint. :params endpoints: Dictionary representing the endpoints configuration. :returns: Configured blueprint.
Below is the instruction that describes the task: ### Input: Create Invenio-Records-REST blueprint. :params endpoints: Dictionary representing the endpoints configuration. :returns: Configured blueprint. ### Response: def create_blueprint(endpoints): """Create Invenio-Records-REST blueprint. :params endpoints: Dictionary representing the endpoints configuration. :returns: Configured blueprint. """ endpoints = endpoints or {} blueprint = Blueprint( 'invenio_records_rest', __name__, url_prefix='', ) error_handlers_registry = defaultdict(dict) for endpoint, options in endpoints.items(): error_handlers = options.pop('error_handlers', {}) for rule in create_url_rules(endpoint, **options): for exc_or_code, handler in error_handlers.items(): view_name = rule['view_func'].__name__ error_handlers_registry[exc_or_code][view_name] = handler blueprint.add_url_rule(**rule) return create_error_handlers(blueprint, error_handlers_registry)
def create_expanded_design_for_mixing(design, draw_list, mixing_pos, rows_to_mixers): """ Parameters ---------- design : 2D ndarray. All elements should be ints, floats, or longs. Each row corresponds to an available alternative for a given individual. There should be one column per index coefficient being estimated. draw_list : list of 2D ndarrays. All numpy arrays should have the same number of columns (`num_draws`) and the same number of rows (`num_mixers`). All elements of the numpy arrays should be ints, floats, or longs. Should have as many elements as there are lements in `mixing_pos`. mixing_pos : list of ints. Each element should denote a column in design whose associated index coefficient is being treated as a random variable. rows_to_mixers : 2D scipy sparse array. All elements should be zeros and ones. Will map the rows of the design matrix to the particular units that the mixing is being performed over. Note that in the case of panel data, this matrix will be different from `rows_to_obs`. Returns ------- design_3d : 3D numpy array. Each slice of the third dimension will contain a copy of the design matrix corresponding to a given draw of the random variables being mixed over. """ if len(mixing_pos) != len(draw_list): msg = "mixing_pos == {}".format(mixing_pos) msg_2 = "len(draw_list) == {}".format(len(draw_list)) raise ValueError(msg + "\n" + msg_2) # Determine the number of draws being used. Note the next line assumes an # equal number of draws from each random coefficient's mixing distribution. num_draws = draw_list[0].shape[1] orig_num_vars = design.shape[1] # Initialize the expanded design matrix that replicates the columns of the # variables that are being mixed over. 
arrays_for_mixing = design[:, mixing_pos] expanded_design = np.concatenate((design, arrays_for_mixing), axis=1).copy() design_3d = np.repeat(expanded_design[:, None, :], repeats=num_draws, axis=1) # Multiply the columns that are being mixed over by their appropriate # draws from the normal distribution for pos, idx in enumerate(mixing_pos): rel_draws = draw_list[pos] # Note that rel_long_draws will be a dense, 2D numpy array of shape # (num_rows, num_draws). rel_long_draws = rows_to_mixers.dot(rel_draws) # Create the actual column in design 3d that should be used. # It should be the multiplication of the draws random variable and the # independent variable associated with the param that is being mixed. # NOTE THE IMPLICIT ASSUMPTION THAT ONLY INDEX COEFFICIENTS ARE MIXED. # Also, the final axis is selected on because the final axis sepecifies # the particular variable being multiplied by the draws. We select with # orig_num_vars + pos since the variables being mixed over were added, # in order so we simply need to start at the first position after all # the original variables (i.e. at orig_num_vars) and iterate. design_3d[:, :, orig_num_vars + pos] *= rel_long_draws return design_3d
Parameters ---------- design : 2D ndarray. All elements should be ints, floats, or longs. Each row corresponds to an available alternative for a given individual. There should be one column per index coefficient being estimated. draw_list : list of 2D ndarrays. All numpy arrays should have the same number of columns (`num_draws`) and the same number of rows (`num_mixers`). All elements of the numpy arrays should be ints, floats, or longs. Should have as many elements as there are elements in `mixing_pos`. mixing_pos : list of ints. Each element should denote a column in design whose associated index coefficient is being treated as a random variable. rows_to_mixers : 2D scipy sparse array. All elements should be zeros and ones. Will map the rows of the design matrix to the particular units that the mixing is being performed over. Note that in the case of panel data, this matrix will be different from `rows_to_obs`. Returns ------- design_3d : 3D numpy array. Each slice of the third dimension will contain a copy of the design matrix corresponding to a given draw of the random variables being mixed over.
Below is the instruction that describes the task: ### Input: Parameters ---------- design : 2D ndarray. All elements should be ints, floats, or longs. Each row corresponds to an available alternative for a given individual. There should be one column per index coefficient being estimated. draw_list : list of 2D ndarrays. All numpy arrays should have the same number of columns (`num_draws`) and the same number of rows (`num_mixers`). All elements of the numpy arrays should be ints, floats, or longs. Should have as many elements as there are elements in `mixing_pos`. mixing_pos : list of ints. Each element should denote a column in design whose associated index coefficient is being treated as a random variable. rows_to_mixers : 2D scipy sparse array. 
All elements should be zeros and ones. Will map the rows of the design matrix to the particular units that the mixing is being performed over. Note that in the case of panel data, this matrix will be different from `rows_to_obs`. Returns ------- design_3d : 3D numpy array. Each slice of the third dimension will contain a copy of the design matrix corresponding to a given draw of the random variables being mixed over. """ if len(mixing_pos) != len(draw_list): msg = "mixing_pos == {}".format(mixing_pos) msg_2 = "len(draw_list) == {}".format(len(draw_list)) raise ValueError(msg + "\n" + msg_2) # Determine the number of draws being used. Note the next line assumes an # equal number of draws from each random coefficient's mixing distribution. num_draws = draw_list[0].shape[1] orig_num_vars = design.shape[1] # Initialize the expanded design matrix that replicates the columns of the # variables that are being mixed over. arrays_for_mixing = design[:, mixing_pos] expanded_design = np.concatenate((design, arrays_for_mixing), axis=1).copy() design_3d = np.repeat(expanded_design[:, None, :], repeats=num_draws, axis=1) # Multiply the columns that are being mixed over by their appropriate # draws from the normal distribution for pos, idx in enumerate(mixing_pos): rel_draws = draw_list[pos] # Note that rel_long_draws will be a dense, 2D numpy array of shape # (num_rows, num_draws). rel_long_draws = rows_to_mixers.dot(rel_draws) # Create the actual column in design 3d that should be used. # It should be the multiplication of the draws random variable and the # independent variable associated with the param that is being mixed. # NOTE THE IMPLICIT ASSUMPTION THAT ONLY INDEX COEFFICIENTS ARE MIXED. # Also, the final axis is selected on because the final axis sepecifies # the particular variable being multiplied by the draws. 
We select with # orig_num_vars + pos since the variables being mixed over were added, # in order so we simply need to start at the first position after all # the original variables (i.e. at orig_num_vars) and iterate. design_3d[:, :, orig_num_vars + pos] *= rel_long_draws return design_3d
def _prepare(self): """Builds a URL and return a PreparedRequest. Returns (requests.PreparedRequest) Raises UberIllegalState (APIError) """ if self.method not in http.ALLOWED_METHODS: raise UberIllegalState('Unsupported HTTP Method.') api_host = self.api_host headers = self._build_headers(self.method, self.auth_session) url = build_url(api_host, self.path) data, params = generate_data(self.method, self.args) return generate_prepared_request( self.method, url, headers, data, params, self.handlers, )
Builds a URL and return a PreparedRequest. Returns (requests.PreparedRequest) Raises UberIllegalState (APIError)
Below is the instruction that describes the task: ### Input: Builds a URL and return a PreparedRequest. Returns (requests.PreparedRequest) Raises UberIllegalState (APIError) ### Response: def _prepare(self): """Builds a URL and return a PreparedRequest. Returns (requests.PreparedRequest) Raises UberIllegalState (APIError) """ if self.method not in http.ALLOWED_METHODS: raise UberIllegalState('Unsupported HTTP Method.') api_host = self.api_host headers = self._build_headers(self.method, self.auth_session) url = build_url(api_host, self.path) data, params = generate_data(self.method, self.args) return generate_prepared_request( self.method, url, headers, data, params, self.handlers, )
def new_password_challenge(self, password, new_password): """ Respond to the new password challenge using the SRP protocol :param password: The user's current password :param new_password: The user's new password """ aws = AWSSRP(username=self.username, password=password, pool_id=self.user_pool_id, client_id=self.client_id, client=self.client, client_secret=self.client_secret) tokens = aws.set_new_password_challenge(new_password) self.id_token = tokens['AuthenticationResult']['IdToken'] self.refresh_token = tokens['AuthenticationResult']['RefreshToken'] self.access_token = tokens['AuthenticationResult']['AccessToken'] self.token_type = tokens['AuthenticationResult']['TokenType']
Respond to the new password challenge using the SRP protocol :param password: The user's current password :param new_password: The user's new password
Below is the instruction that describes the task: ### Input: Respond to the new password challenge using the SRP protocol :param password: The user's current password :param new_password: The user's new password ### Response: def new_password_challenge(self, password, new_password): """ Respond to the new password challenge using the SRP protocol :param password: The user's current password :param new_password: The user's new password """ aws = AWSSRP(username=self.username, password=password, pool_id=self.user_pool_id, client_id=self.client_id, client=self.client, client_secret=self.client_secret) tokens = aws.set_new_password_challenge(new_password) self.id_token = tokens['AuthenticationResult']['IdToken'] self.refresh_token = tokens['AuthenticationResult']['RefreshToken'] self.access_token = tokens['AuthenticationResult']['AccessToken'] self.token_type = tokens['AuthenticationResult']['TokenType']
def save_chat_message(*args, **kwargs): """ kwargs will always include: 'data': # will always be exactly what your client sent on the socket # in this case... {u'message': u'hi', u'sender': u'anonymous', u'channel': u'homepage'}, 'dispatcher': # the dispatcher that will allow for broadcasting a response <hendrix.contrib.concurrency.messaging.MessageDispatcher object at 0x10ddb1c10>, """ data = kwargs.get('data') if data.get('message') and data.get('channel'): cm = ChatMessage.objects.create( sender=data.get('sender'), content=data.get('message'), channel=data.get('channel') ) t = loader.get_template('message.html') # now send broadcast a message back to anyone listening on the channel hey_joe.send({'html': t.render({'message': cm})}, cm.channel)
kwargs will always include: 'data': # will always be exactly what your client sent on the socket # in this case... {u'message': u'hi', u'sender': u'anonymous', u'channel': u'homepage'}, 'dispatcher': # the dispatcher that will allow for broadcasting a response <hendrix.contrib.concurrency.messaging.MessageDispatcher object at 0x10ddb1c10>,
Below is the instruction that describes the task: ### Input: kwargs will always include: 'data': # will always be exactly what your client sent on the socket # in this case... {u'message': u'hi', u'sender': u'anonymous', u'channel': u'homepage'}, 'dispatcher': # the dispatcher that will allow for broadcasting a response <hendrix.contrib.concurrency.messaging.MessageDispatcher object at 0x10ddb1c10>, ### Response: def save_chat_message(*args, **kwargs): """ kwargs will always include: 'data': # will always be exactly what your client sent on the socket # in this case... {u'message': u'hi', u'sender': u'anonymous', u'channel': u'homepage'}, 'dispatcher': # the dispatcher that will allow for broadcasting a response <hendrix.contrib.concurrency.messaging.MessageDispatcher object at 0x10ddb1c10>, """ data = kwargs.get('data') if data.get('message') and data.get('channel'): cm = ChatMessage.objects.create( sender=data.get('sender'), content=data.get('message'), channel=data.get('channel') ) t = loader.get_template('message.html') # now send broadcast a message back to anyone listening on the channel hey_joe.send({'html': t.render({'message': cm})}, cm.channel)
def local_assortativity_wu_sign(W): ''' Local assortativity measures the extent to which nodes are connected to nodes of similar strength. Adapted from Thedchanamoorthy et al. 2014 formula to allowed weighted/signed networks. Parameters ---------- W : NxN np.ndarray undirected connection matrix with positive and negative weights Returns ------- loc_assort_pos : Nx1 np.ndarray local assortativity from positive weights loc_assort_neg : Nx1 np.ndarray local assortativity from negative weights ''' n = len(W) np.fill_diagonal(W, 0) r_pos = assortativity_wei(W * (W > 0)) r_neg = assortativity_wei(W * (W < 0)) str_pos, str_neg, _, _ = strengths_und_sign(W) loc_assort_pos = np.zeros((n,)) loc_assort_neg = np.zeros((n,)) for curr_node in range(n): jp = np.where(W[curr_node, :] > 0) loc_assort_pos[curr_node] = np.sum(np.abs(str_pos[jp] - str_pos[curr_node])) / str_pos[curr_node] jn = np.where(W[curr_node, :] < 0) loc_assort_neg[curr_node] = np.sum(np.abs(str_neg[jn] - str_neg[curr_node])) / str_neg[curr_node] loc_assort_pos = ((r_pos + 1) / n - loc_assort_pos / np.sum(loc_assort_pos)) loc_assort_neg = ((r_neg + 1) / n - loc_assort_neg / np.sum(loc_assort_neg)) return loc_assort_pos, loc_assort_neg
Local assortativity measures the extent to which nodes are connected to nodes of similar strength. Adapted from Thedchanamoorthy et al. 2014 formula to allow weighted/signed networks. Parameters ---------- W : NxN np.ndarray undirected connection matrix with positive and negative weights Returns ------- loc_assort_pos : Nx1 np.ndarray local assortativity from positive weights loc_assort_neg : Nx1 np.ndarray local assortativity from negative weights
Below is the instruction that describes the task: ### Input: Local assortativity measures the extent to which nodes are connected to nodes of similar strength. Adapted from Thedchanamoorthy et al. 2014 formula to allow weighted/signed networks. Parameters ---------- W : NxN np.ndarray undirected connection matrix with positive and negative weights Returns ------- loc_assort_pos : Nx1 np.ndarray local assortativity from positive weights loc_assort_neg : Nx1 np.ndarray local assortativity from negative weights ### Response: def local_assortativity_wu_sign(W): ''' Local assortativity measures the extent to which nodes are connected to nodes of similar strength. Adapted from Thedchanamoorthy et al. 2014 formula to allow weighted/signed networks. Parameters ---------- W : NxN np.ndarray undirected connection matrix with positive and negative weights Returns ------- loc_assort_pos : Nx1 np.ndarray local assortativity from positive weights loc_assort_neg : Nx1 np.ndarray local assortativity from negative weights ''' n = len(W) np.fill_diagonal(W, 0) r_pos = assortativity_wei(W * (W > 0)) r_neg = assortativity_wei(W * (W < 0)) str_pos, str_neg, _, _ = strengths_und_sign(W) loc_assort_pos = np.zeros((n,)) loc_assort_neg = np.zeros((n,)) for curr_node in range(n): jp = np.where(W[curr_node, :] > 0) loc_assort_pos[curr_node] = np.sum(np.abs(str_pos[jp] - str_pos[curr_node])) / str_pos[curr_node] jn = np.where(W[curr_node, :] < 0) loc_assort_neg[curr_node] = np.sum(np.abs(str_neg[jn] - str_neg[curr_node])) / str_neg[curr_node] loc_assort_pos = ((r_pos + 1) / n - loc_assort_pos / np.sum(loc_assort_pos)) loc_assort_neg = ((r_neg + 1) / n - loc_assort_neg / np.sum(loc_assort_neg)) return loc_assort_pos, loc_assort_neg
def _single_tree_paths(self, tree): """Get all traversal paths from a single tree.""" skel = tree.consolidate() tree = defaultdict(list) for edge in skel.edges: svert = edge[0] evert = edge[1] tree[svert].append(evert) tree[evert].append(svert) def dfs(path, visited): paths = [] stack = [ (path, visited) ] while stack: path, visited = stack.pop(0) vertex = path[-1] children = tree[vertex] visited[vertex] = True children = [ child for child in children if not visited[child] ] if len(children) == 0: paths.append(path) for child in children: stack.append( (path + [child], copy.deepcopy(visited)) ) return paths root = skel.edges[0,0] paths = dfs([root], defaultdict(bool)) root = np.argmax([ len(_) for _ in paths ]) root = paths[root][-1] paths = dfs([ root ], defaultdict(bool)) return [ np.flip(skel.vertices[path], axis=0) for path in paths ]
Get all traversal paths from a single tree.
Below is the instruction that describes the task: ### Input: Get all traversal paths from a single tree. ### Response: def _single_tree_paths(self, tree): """Get all traversal paths from a single tree.""" skel = tree.consolidate() tree = defaultdict(list) for edge in skel.edges: svert = edge[0] evert = edge[1] tree[svert].append(evert) tree[evert].append(svert) def dfs(path, visited): paths = [] stack = [ (path, visited) ] while stack: path, visited = stack.pop(0) vertex = path[-1] children = tree[vertex] visited[vertex] = True children = [ child for child in children if not visited[child] ] if len(children) == 0: paths.append(path) for child in children: stack.append( (path + [child], copy.deepcopy(visited)) ) return paths root = skel.edges[0,0] paths = dfs([root], defaultdict(bool)) root = np.argmax([ len(_) for _ in paths ]) root = paths[root][-1] paths = dfs([ root ], defaultdict(bool)) return [ np.flip(skel.vertices[path], axis=0) for path in paths ]
def get_dependencies_from_decl(decl, recursive=True): """ Returns the list of all types and declarations the declaration depends on. """ result = [] if isinstance(decl, typedef.typedef_t) or \ isinstance(decl, variable.variable_t): return [dependency_info_t(decl, decl.decl_type)] if isinstance(decl, namespace.namespace_t): if recursive: for d in decl.declarations: result.extend(get_dependencies_from_decl(d)) return result if isinstance(decl, calldef.calldef_t): if decl.return_type: result.append( dependency_info_t(decl, decl.return_type, hint="return type")) for arg in decl.arguments: result.append(dependency_info_t(decl, arg.decl_type)) for exc in decl.exceptions: result.append(dependency_info_t(decl, exc, hint="exception")) return result if isinstance(decl, class_declaration.class_t): for base in decl.bases: result.append( dependency_info_t( decl, base.related_class, base.access_type, "base class")) if recursive: for access_type in class_declaration.ACCESS_TYPES.ALL: result.extend( __find_out_member_dependencies( decl.get_members(access_type), access_type)) return result return result
Returns the list of all types and declarations the declaration depends on.
Below is the instruction that describes the task: ### Input: Returns the list of all types and declarations the declaration depends on. ### Response: def get_dependencies_from_decl(decl, recursive=True): """ Returns the list of all types and declarations the declaration depends on. """ result = [] if isinstance(decl, typedef.typedef_t) or \ isinstance(decl, variable.variable_t): return [dependency_info_t(decl, decl.decl_type)] if isinstance(decl, namespace.namespace_t): if recursive: for d in decl.declarations: result.extend(get_dependencies_from_decl(d)) return result if isinstance(decl, calldef.calldef_t): if decl.return_type: result.append( dependency_info_t(decl, decl.return_type, hint="return type")) for arg in decl.arguments: result.append(dependency_info_t(decl, arg.decl_type)) for exc in decl.exceptions: result.append(dependency_info_t(decl, exc, hint="exception")) return result if isinstance(decl, class_declaration.class_t): for base in decl.bases: result.append( dependency_info_t( decl, base.related_class, base.access_type, "base class")) if recursive: for access_type in class_declaration.ACCESS_TYPES.ALL: result.extend( __find_out_member_dependencies( decl.get_members(access_type), access_type)) return result return result
def login(self): """ This method performs the login on TheTVDB given the api key, user name and account identifier. :return: None """ auth_data = dict() auth_data['apikey'] = self.api_key auth_data['username'] = self.username auth_data['userkey'] = self.account_identifier auth_resp = requests_util.run_request('post', self.API_BASE_URL + '/login', data=json.dumps(auth_data), headers=self.__get_header()) if auth_resp.status_code == 200: auth_resp_data = self.parse_raw_response(auth_resp) self.__token = auth_resp_data['token'] self.__auth_time = datetime.now() self.is_authenticated = True else: raise AuthenticationFailedException('Authentication failed!')
This method performs the login on TheTVDB given the api key, user name and account identifier. :return: None
Below is the instruction that describes the task: ### Input: This method performs the login on TheTVDB given the api key, user name and account identifier. :return: None ### Response: def login(self): """ This method performs the login on TheTVDB given the api key, user name and account identifier. :return: None """ auth_data = dict() auth_data['apikey'] = self.api_key auth_data['username'] = self.username auth_data['userkey'] = self.account_identifier auth_resp = requests_util.run_request('post', self.API_BASE_URL + '/login', data=json.dumps(auth_data), headers=self.__get_header()) if auth_resp.status_code == 200: auth_resp_data = self.parse_raw_response(auth_resp) self.__token = auth_resp_data['token'] self.__auth_time = datetime.now() self.is_authenticated = True else: raise AuthenticationFailedException('Authentication failed!')
def iterator(self, *argv): """ Iterator returning any list of elements via attribute lookup in `self` This iterator retains the order of the arguments """ for arg in argv: if hasattr(self, arg): for item in getattr(self, arg): yield item
Iterator returning any list of elements via attribute lookup in `self` This iterator retains the order of the arguments
Below is the instruction that describes the task: ### Input: Iterator returning any list of elements via attribute lookup in `self` This iterator retains the order of the arguments ### Response: def iterator(self, *argv): """ Iterator returning any list of elements via attribute lookup in `self` This iterator retains the order of the arguments """ for arg in argv: if hasattr(self, arg): for item in getattr(self, arg): yield item
def permission_delete_link(context, perm): """ Renders a html link to the delete view of the given permission. Returns no content if the request-user has no permission to delete foreign permissions. """ user = context['request'].user if user.is_authenticated(): if (user.has_perm('authority.delete_foreign_permissions') or user.pk == perm.creator.pk): return base_link(context, perm, 'authority-delete-permission') return {'url': None}
Renders a html link to the delete view of the given permission. Returns no content if the request-user has no permission to delete foreign permissions.
Below is the instruction that describes the task: ### Input: Renders a html link to the delete view of the given permission. Returns no content if the request-user has no permission to delete foreign permissions. ### Response: def permission_delete_link(context, perm): """ Renders a html link to the delete view of the given permission. Returns no content if the request-user has no permission to delete foreign permissions. """ user = context['request'].user if user.is_authenticated(): if (user.has_perm('authority.delete_foreign_permissions') or user.pk == perm.creator.pk): return base_link(context, perm, 'authority-delete-permission') return {'url': None}
async def _set_whitelist(self): """ Whitelist domains for the messenger extensions """ page = self.settings() if 'whitelist' in page: await self._send_to_messenger_profile(page, { 'whitelisted_domains': page['whitelist'], }) logger.info('Whitelisted %s for page %s', page['whitelist'], page['page_id'])
Whitelist domains for the messenger extensions
Below is the instruction that describes the task: ### Input: Whitelist domains for the messenger extensions ### Response: async def _set_whitelist(self): """ Whitelist domains for the messenger extensions """ page = self.settings() if 'whitelist' in page: await self._send_to_messenger_profile(page, { 'whitelisted_domains': page['whitelist'], }) logger.info('Whitelisted %s for page %s', page['whitelist'], page['page_id'])
def child_task(self): '''child process - this holds all the GUI elements''' mp_util.child_close_fds() from wx_loader import wx state = self self.app = wx.App(False) self.app.frame = MPImageFrame(state=self) self.app.frame.Show() self.app.MainLoop()
child process - this holds all the GUI elements
Below is the instruction that describes the task: ### Input: child process - this holds all the GUI elements ### Response: def child_task(self): '''child process - this holds all the GUI elements''' mp_util.child_close_fds() from wx_loader import wx state = self self.app = wx.App(False) self.app.frame = MPImageFrame(state=self) self.app.frame.Show() self.app.MainLoop()
def send_string(self, string: str): """ Sends the given string for output. """ if not string: return string = string.replace('\n', "<enter>") string = string.replace('\t', "<tab>") _logger.debug("Send via event interface") self.__clearModifiers() modifiers = [] for section in KEY_SPLIT_RE.split(string): if len(section) > 0: if Key.is_key(section[:-1]) and section[-1] == '+' and section[:-1] in MODIFIERS: # Section is a modifier application (modifier followed by '+') modifiers.append(section[:-1]) else: if len(modifiers) > 0: # Modifiers ready for application - send modified key if Key.is_key(section): self.interface.send_modified_key(section, modifiers) modifiers = [] else: self.interface.send_modified_key(section[0], modifiers) if len(section) > 1: self.interface.send_string(section[1:]) modifiers = [] else: # Normal string/key operation if Key.is_key(section): self.interface.send_key(section) else: self.interface.send_string(section) self.__reapplyModifiers()
Sends the given string for output.
Below is the instruction that describes the task: ### Input: Sends the given string for output. ### Response: def send_string(self, string: str): """ Sends the given string for output. """ if not string: return string = string.replace('\n', "<enter>") string = string.replace('\t', "<tab>") _logger.debug("Send via event interface") self.__clearModifiers() modifiers = [] for section in KEY_SPLIT_RE.split(string): if len(section) > 0: if Key.is_key(section[:-1]) and section[-1] == '+' and section[:-1] in MODIFIERS: # Section is a modifier application (modifier followed by '+') modifiers.append(section[:-1]) else: if len(modifiers) > 0: # Modifiers ready for application - send modified key if Key.is_key(section): self.interface.send_modified_key(section, modifiers) modifiers = [] else: self.interface.send_modified_key(section[0], modifiers) if len(section) > 1: self.interface.send_string(section[1:]) modifiers = [] else: # Normal string/key operation if Key.is_key(section): self.interface.send_key(section) else: self.interface.send_string(section) self.__reapplyModifiers()
def parse_network_data(data_packet=None, include_filter_key=None, filter_keys=[], record_tcp=True, record_udp=True, record_arp=True, record_icmp=True): """build_node :param data_packet: raw recvfrom data :param filter_keys: list of strings to filter and remove baby-birding packets to yourself :param record_tcp: want to record TCP frames? :param record_udp: want to record UDP frames? :param record_arp: want to record ARP frames? :param record_icmp: want to record ICMP frames? """ node = {"id": build_key(), "data_type": UNKNOWN, "eth_protocol": None, "eth_src_mac": None, "eth_dst_mac": None, "eth_length": SIZE_ETH_HEADER, "ip_version_ih1": None, "ip_version": None, "ip_ih1": None, "ip_hdr_len": None, "ip_tos": None, "ip_tlen": None, "ip_id": None, "ip_frag_off": None, "ip_ttl": None, "ip_protocol": None, "ip_src_addr": None, "ip_dst_addr": None, "tcp_src_port": None, "tcp_dst_port": None, "tcp_sequence": None, "tcp_ack": None, "tcp_resrve": None, "tcp_data_offset": None, "tcp_flags": None, "tcp_adwind": None, "tcp_urg_ptr": None, "tcp_ffin": None, "tcp_fsyn": None, "tcp_frst": None, "tcp_fpsh": None, "tcp_fack": None, "tcp_furg": None, "tcp_header_size": None, "tcp_data_size": None, "tcp_data": None, "udp_header_size": None, "udp_data_size": None, "udp_src_port": None, "udp_dst_port": None, "udp_data_len": None, "udp_csum": None, "udp_data": None, "icmp_header_size": None, "icmp_data": None, "icmp_type": None, "icmp_code": None, "icmp_csum": None, "icmp_data_size": None, "arp_header_size": None, "arp_data": None, "arp_hw_type": None, "arp_proto_type": None, "arp_hw_size": None, "arp_proto_size": None, "arp_opcode": None, "arp_src_mac": None, "arp_src_ip": None, "arp_dst_mac": None, "arp_dst_ip": None, "arp_data_size": None, "target_data": None, "full_offset": None, "eth_header_size": None, "ip_header_size": None, "err": "", "stream": None, "filtered": None, "status": INVALID} err = "no_data" if not data_packet: node["error"] = err return node try: err = 
"missing_packet" packet = data_packet[0] if len(packet) < 21: node["status"] = INVALID node["error"] = "invalid packet={}".format(packet) return node err = "failed_parsing_ethernet" eth_packet_min = 0 eth_packet_max = eth_packet_min + node["eth_length"] log.info(("unpacking ETH[{}:{}]") .format(eth_packet_min, eth_packet_max)) eth_datagram = packet[eth_packet_min:eth_packet_max] eth_header = unpack(ETH_HEADER_FORMAT, eth_datagram) node["eth_protocol"] = socket.ntohs(eth_header[2]) node["eth_src_mac"] = eth_addr(packet[0:6]) node["eth_dst_mac"] = eth_addr(packet[6:12]) log.debug(("eth src={} dst={} proto={}") .format(node["eth_src_mac"], node["eth_dst_mac"], node["eth_protocol"])) node["eth_header_size"] = SIZE_ETH_HEADER # Is this an IP packet: if node["eth_protocol"] == IP_PROTO_ETH: ip_packet_min = SIZE_ETH_HEADER ip_packet_max = SIZE_ETH_HEADER + 20 log.info(("unpacking IP[{}:{}]") .format(ip_packet_min, ip_packet_max)) err = ("failed_parsing_IP[{}:{}]").format( ip_packet_min, ip_packet_max) # take the first 20 characters for the IP header ip_datagram = packet[ip_packet_min:ip_packet_max] ip_header = unpack(IP_HEADER_FORMAT, ip_datagram) # https://docs.python.org/2/library/struct.html#format-characters node["ip_header_size"] = SIZE_IP_HEADER node["ip_version_ih1"] = ip_header[0] node["ip_version"] = node["ip_version_ih1"] >> 4 node["ip_ih1"] = node["ip_version_ih1"] & 0xF node["ip_hdr_len"] = node["ip_ih1"] * 4 node["ip_tos"] = ip_header[1] node["ip_tlen"] = ip_header[2] node["ip_id"] = ip_header[3] node["ip_frag_off"] = ip_header[4] node["ip_ttl"] = ip_header[5] node["ip_protocol"] = ip_header[6] node["ip_src_addr"] = socket.inet_ntoa(ip_header[8]) node["ip_dst_addr"] = socket.inet_ntoa(ip_header[9]) log.debug("-------------------------------------------") log.debug("IP Header - Layer 3") log.debug("") log.debug(" - Version: {}".format(node["ip_version"])) log.debug(" - HDR Len: {}".format(node["ip_ih1"])) log.debug(" - TOS: {}".format(node["ip_tos"])) 
log.debug(" - ID: {}".format(node["ip_id"])) log.debug(" - Frag: {}".format(node["ip_frag_off"])) log.debug(" - TTL: {}".format(node["ip_ttl"])) log.debug(" - Proto: {}".format(node["ip_protocol"])) log.debug(" - Src IP: {}".format(node["ip_src_addr"])) log.debug(" - Dst IP: {}".format(node["ip_dst_addr"])) log.debug("-------------------------------------------") log.debug("") tcp_data = None udp_data = None arp_data = None icmp_data = None target_data = None eh = node["eth_header_size"] ih = node["ip_header_size"] log.debug(("parsing ip_protocol={} data") .format(node["ip_protocol"])) if node["ip_protocol"] == TCP_PROTO_IP: packet_min = node["eth_length"] + node["ip_hdr_len"] packet_max = packet_min + 20 # unpack the TCP packet log.info(("unpacking TCP[{}:{}]") .format(packet_min, packet_max)) err = ("failed_parsing_TCP[{}:{}]").format( packet_min, packet_max) tcp_datagram = packet[packet_min:packet_max] log.debug(("unpacking TCP Header={}") .format(tcp_datagram)) # unpack the TCP packet tcp_header = unpack(TCP_HEADER_FORMAT, tcp_datagram) node["tcp_src_port"] = tcp_header[0] node["tcp_dst_port"] = tcp_header[1] node["tcp_sequence"] = tcp_header[2] node["tcp_ack"] = tcp_header[3] node["tcp_resrve"] = tcp_header[4] node["tcp_data_offset"] = node["tcp_resrve"] >> 4 node["tcp_flags"] = tcp_header[5] node["tcp_adwind"] = tcp_header[6] node["tcp_urg_ptr"] = tcp_header[7] # parse TCP flags flag_data = unshift_flags(node["tcp_flags"]) node["tcp_ffin"] = flag_data[0] node["tcp_fsyn"] = flag_data[1] node["tcp_frst"] = flag_data[2] node["tcp_fpsh"] = flag_data[3] node["tcp_fack"] = flag_data[4] node["tcp_furg"] = flag_data[5] # process the TCP options if there are # currently just skip it node["tcp_header_size"] = SIZE_TCP_HEADER log.debug(("src={} dst={} seq={} ack={} doff={} flags={} " "f urg={} fin={} syn={} rst={} " "psh={} fack={} urg={}") .format(node["tcp_src_port"], node["tcp_dst_port"], node["tcp_sequence"], node["tcp_ack"], node["tcp_data_offset"], 
node["tcp_flags"], node["tcp_urg_ptr"], node["tcp_ffin"], node["tcp_fsyn"], node["tcp_frst"], node["tcp_fpsh"], node["tcp_fack"], node["tcp_furg"])) # -------------------------------------------------------- err = "failed_tcp_data" node["data_type"] = TCP node["tcp_header_size"] = ( node["ip_hdr_len"] + (node["tcp_data_offset"] * 4)) node["tcp_data_size"] = len(packet) - node["tcp_header_size"] th = node["tcp_header_size"] node["full_offset"] = eh + ih + th log.info(("TCP Data size={} th1={} th2={} " "offset={} value={}") .format(node["tcp_data_size"], node["ip_hdr_len"], node["tcp_header_size"], node["full_offset"], tcp_data)) err = "failed_tcp_data_offset" tcp_data = packet[node["full_offset"]:] target_data = tcp_data node["error"] = "" node["status"] = VALID elif node["ip_protocol"] == UDP_PROTO_IP: packet_min = node["eth_length"] + node["ip_hdr_len"] packet_max = packet_min + 8 # unpack the UDP packet log.info(("unpacking UDP[{}:{}]") .format(packet_min, packet_max)) err = ("failed_parsing_UDP[{}:{}]").format( packet_min, packet_max) udp_datagram = packet[packet_min:packet_max] log.info(("unpacking UDP Header={}") .format(udp_datagram)) udp_header = unpack(UDP_HEADER_FORMAT, udp_datagram) node["udp_header_size"] = SIZE_UDP_HEADER node["udp_src_port"] = udp_header[0] node["udp_dst_port"] = udp_header[1] node["udp_data_len"] = udp_header[2] node["udp_csum"] = udp_header[3] node["data_type"] = UDP uh = node["udp_header_size"] node["full_offset"] = eh + ih + uh node["udp_data_size"] = len(packet) - node["udp_header_size"] log.info(("UDP Data size={} th1={} th2={} " "offset={} value={}") .format(node["udp_data_size"], node["ip_hdr_len"], node["udp_header_size"], node["full_offset"], udp_data)) err = "failed_udp_data_offset" udp_data = packet[node["full_offset"]:] target_data = udp_data node["error"] = "" node["status"] = VALID elif node["ip_protocol"] == ICMP_PROTO_IP: # unpack the ICMP packet packet_min = node["eth_length"] + node["ip_hdr_len"] packet_max = 
packet_min + 4 log.info(("unpacking ICMP[{}:{}]") .format(packet_min, packet_max)) err = ("failed_parsing_ICMP[{}:{}]").format( packet_min, packet_max) icmp_datagram = packet[packet_min:packet_max] log.info(("unpacking ICMP Header={}") .format(icmp_datagram)) icmp_header = unpack(ICMP_HEADER_FORMAT, icmp_datagram) node["icmp_header_size"] = SIZE_ICMP_HEADER node["icmp_type"] = icmp_header[0] node["icmp_code"] = icmp_header[1] node["icmp_csum"] = icmp_header[2] node["data_type"] = ICMP ah = node["icmp_header_size"] node["full_offset"] = eh + ih + ah node["icmp_data_size"] = len(packet) - node["icmp_header_size"] log.info(("ICMP Data size={} th1={} th2={} " "offset={} value={}") .format(node["icmp_data_size"], node["ip_hdr_len"], node["icmp_header_size"], node["full_offset"], icmp_data)) err = "failed_icmp_data_offset" icmp_data = packet[node["full_offset"]:] target_data = icmp_data node["error"] = "" node["status"] = VALID else: node["error"] = ("unsupported_ip_protocol={}").format( node["ip_protocol"]) node["status"] = IP_UNSUPPORTED # end of parsing supported protocols the final node data if node["status"] == VALID: log.debug("filtering") # filter out delimiters in the last 64 bytes if filter_keys: err = "filtering={}".format(len(filter_keys)) log.debug(err) for f in filter_keys: if target_data: if str(f) in str(target_data): log.info(("FOUND filter={} " "in data={}") .format(f, target_data)) node["error"] = "filtered" node["status"] = FILTERED node["filtered"] = f break # end of tagging packets to filter out of the # network-pipe stream # if there are filters log.debug(("was filtered={}") .format(node["filtered"])) if not node["filtered"]: err = "building_stream" log.debug(("building stream target={}") .format(target_data)) stream_size = 0 if target_data: try: # convert to hex string err = ("concerting target_data to " "hex string") node["target_data"] = target_data.hex() except Exception as e: log.info(("failed converting={} to " "utf-8 ex={}") 
.format(target_data, e)) err = "str target_data" node["target_data"] = target_data # end of try/ex stream_size += len(node["target_data"]) # end of target_data log.debug(("serializing stream={}") .format(node["target_data"])) node_json = json.dumps(node) data_stream = str("{} {}").format(node_json, include_filter_key) log.debug("compressing") if stream_size: node["stream"] = data_stream # end of building the stream log.debug("valid") else: log.error(("unsupported ip frame ip_protocol={}") .format(node["ip_protocol"])) # end of supported IP packet protocol or not elif node["eth_protocol"] == ARP_PROTO_ETH: arp_packet_min = SIZE_ETH_HEADER arp_packet_max = SIZE_ETH_HEADER + 28 log.info(("unpacking ARP[{}:{}]") .format(arp_packet_min, arp_packet_max)) err = ("failed_parsing_ARP[{}:{}]").format( arp_packet_min, arp_packet_max) # take the first 28 characters for the ARP header arp_datagram = packet[arp_packet_min:arp_packet_max] arp_header = unpack(ARP_HEADER_FORMAT, arp_datagram) # https://docs.python.org/2/library/struct.html#format-characters node["arp_header_size"] = SIZE_ARP_HEADER node["arp_hw_type"] = arp_header[0].hex() node["arp_proto_type"] = arp_header[1].hex() node["arp_hw_size"] = arp_header[2].hex() node["arp_proto_size"] = arp_header[3].hex() node["arp_opcode"] = arp_header[4].hex() node["arp_src_mac"] = arp_header[5].hex() node["arp_src_ip"] = socket.inet_ntoa(arp_header[6]) node["arp_dst_mac"] = arp_header[7].hex() node["arp_dst_ip"] = socket.inet_ntoa(arp_header[8]) arp_data = "" node["arp_data"] = arp_data node["target_data"] = arp_data node["data_type"] = ARP node["status"] = VALID node["arp_data_size"] = len(packet) - node["arp_header_size"] node_json = json.dumps(node) data_stream = str("{} {}").format(node_json, include_filter_key) node["stream"] = data_stream else: node["error"] = ("unsupported eth_frame protocol={}").format( node["eth_protocol"]) node["status"] = ETH_UNSUPPORTED log.error(node["error"]) # end of supported ETH packet or not 
except Exception as e: node["status"] = ERROR node["error"] = "err={} failed parsing frame ex={}".format(err, e) log.error(node["error"]) # end of try/ex return node
parse_network_data :param data_packet: raw recvfrom data :param include_filter_key: key appended to each serialized stream record :param filter_keys: list of strings to filter and remove baby-birding packets to yourself :param record_tcp: want to record TCP frames? :param record_udp: want to record UDP frames? :param record_arp: want to record ARP frames? :param record_icmp: want to record ICMP frames?
Below is the the instruction that describes the task: ### Input: build_node :param data_packet: raw recvfrom data :param filter_keys: list of strings to filter and remove baby-birding packets to yourself :param record_tcp: want to record TCP frames? :param record_udp: want to record UDP frames? :param record_arp: want to record ARP frames? :param record_icmp: want to record ICMP frames? ### Response: def parse_network_data(data_packet=None, include_filter_key=None, filter_keys=[], record_tcp=True, record_udp=True, record_arp=True, record_icmp=True): """build_node :param data_packet: raw recvfrom data :param filter_keys: list of strings to filter and remove baby-birding packets to yourself :param record_tcp: want to record TCP frames? :param record_udp: want to record UDP frames? :param record_arp: want to record ARP frames? :param record_icmp: want to record ICMP frames? """ node = {"id": build_key(), "data_type": UNKNOWN, "eth_protocol": None, "eth_src_mac": None, "eth_dst_mac": None, "eth_length": SIZE_ETH_HEADER, "ip_version_ih1": None, "ip_version": None, "ip_ih1": None, "ip_hdr_len": None, "ip_tos": None, "ip_tlen": None, "ip_id": None, "ip_frag_off": None, "ip_ttl": None, "ip_protocol": None, "ip_src_addr": None, "ip_dst_addr": None, "tcp_src_port": None, "tcp_dst_port": None, "tcp_sequence": None, "tcp_ack": None, "tcp_resrve": None, "tcp_data_offset": None, "tcp_flags": None, "tcp_adwind": None, "tcp_urg_ptr": None, "tcp_ffin": None, "tcp_fsyn": None, "tcp_frst": None, "tcp_fpsh": None, "tcp_fack": None, "tcp_furg": None, "tcp_header_size": None, "tcp_data_size": None, "tcp_data": None, "udp_header_size": None, "udp_data_size": None, "udp_src_port": None, "udp_dst_port": None, "udp_data_len": None, "udp_csum": None, "udp_data": None, "icmp_header_size": None, "icmp_data": None, "icmp_type": None, "icmp_code": None, "icmp_csum": None, "icmp_data_size": None, "arp_header_size": None, "arp_data": None, "arp_hw_type": None, "arp_proto_type": None, 
"arp_hw_size": None, "arp_proto_size": None, "arp_opcode": None, "arp_src_mac": None, "arp_src_ip": None, "arp_dst_mac": None, "arp_dst_ip": None, "arp_data_size": None, "target_data": None, "full_offset": None, "eth_header_size": None, "ip_header_size": None, "err": "", "stream": None, "filtered": None, "status": INVALID} err = "no_data" if not data_packet: node["error"] = err return node try: err = "missing_packet" packet = data_packet[0] if len(packet) < 21: node["status"] = INVALID node["error"] = "invalid packet={}".format(packet) return node err = "failed_parsing_ethernet" eth_packet_min = 0 eth_packet_max = eth_packet_min + node["eth_length"] log.info(("unpacking ETH[{}:{}]") .format(eth_packet_min, eth_packet_max)) eth_datagram = packet[eth_packet_min:eth_packet_max] eth_header = unpack(ETH_HEADER_FORMAT, eth_datagram) node["eth_protocol"] = socket.ntohs(eth_header[2]) node["eth_src_mac"] = eth_addr(packet[0:6]) node["eth_dst_mac"] = eth_addr(packet[6:12]) log.debug(("eth src={} dst={} proto={}") .format(node["eth_src_mac"], node["eth_dst_mac"], node["eth_protocol"])) node["eth_header_size"] = SIZE_ETH_HEADER # Is this an IP packet: if node["eth_protocol"] == IP_PROTO_ETH: ip_packet_min = SIZE_ETH_HEADER ip_packet_max = SIZE_ETH_HEADER + 20 log.info(("unpacking IP[{}:{}]") .format(ip_packet_min, ip_packet_max)) err = ("failed_parsing_IP[{}:{}]").format( ip_packet_min, ip_packet_max) # take the first 20 characters for the IP header ip_datagram = packet[ip_packet_min:ip_packet_max] ip_header = unpack(IP_HEADER_FORMAT, ip_datagram) # https://docs.python.org/2/library/struct.html#format-characters node["ip_header_size"] = SIZE_IP_HEADER node["ip_version_ih1"] = ip_header[0] node["ip_version"] = node["ip_version_ih1"] >> 4 node["ip_ih1"] = node["ip_version_ih1"] & 0xF node["ip_hdr_len"] = node["ip_ih1"] * 4 node["ip_tos"] = ip_header[1] node["ip_tlen"] = ip_header[2] node["ip_id"] = ip_header[3] node["ip_frag_off"] = ip_header[4] node["ip_ttl"] = ip_header[5] 
node["ip_protocol"] = ip_header[6] node["ip_src_addr"] = socket.inet_ntoa(ip_header[8]) node["ip_dst_addr"] = socket.inet_ntoa(ip_header[9]) log.debug("-------------------------------------------") log.debug("IP Header - Layer 3") log.debug("") log.debug(" - Version: {}".format(node["ip_version"])) log.debug(" - HDR Len: {}".format(node["ip_ih1"])) log.debug(" - TOS: {}".format(node["ip_tos"])) log.debug(" - ID: {}".format(node["ip_id"])) log.debug(" - Frag: {}".format(node["ip_frag_off"])) log.debug(" - TTL: {}".format(node["ip_ttl"])) log.debug(" - Proto: {}".format(node["ip_protocol"])) log.debug(" - Src IP: {}".format(node["ip_src_addr"])) log.debug(" - Dst IP: {}".format(node["ip_dst_addr"])) log.debug("-------------------------------------------") log.debug("") tcp_data = None udp_data = None arp_data = None icmp_data = None target_data = None eh = node["eth_header_size"] ih = node["ip_header_size"] log.debug(("parsing ip_protocol={} data") .format(node["ip_protocol"])) if node["ip_protocol"] == TCP_PROTO_IP: packet_min = node["eth_length"] + node["ip_hdr_len"] packet_max = packet_min + 20 # unpack the TCP packet log.info(("unpacking TCP[{}:{}]") .format(packet_min, packet_max)) err = ("failed_parsing_TCP[{}:{}]").format( packet_min, packet_max) tcp_datagram = packet[packet_min:packet_max] log.debug(("unpacking TCP Header={}") .format(tcp_datagram)) # unpack the TCP packet tcp_header = unpack(TCP_HEADER_FORMAT, tcp_datagram) node["tcp_src_port"] = tcp_header[0] node["tcp_dst_port"] = tcp_header[1] node["tcp_sequence"] = tcp_header[2] node["tcp_ack"] = tcp_header[3] node["tcp_resrve"] = tcp_header[4] node["tcp_data_offset"] = node["tcp_resrve"] >> 4 node["tcp_flags"] = tcp_header[5] node["tcp_adwind"] = tcp_header[6] node["tcp_urg_ptr"] = tcp_header[7] # parse TCP flags flag_data = unshift_flags(node["tcp_flags"]) node["tcp_ffin"] = flag_data[0] node["tcp_fsyn"] = flag_data[1] node["tcp_frst"] = flag_data[2] node["tcp_fpsh"] = flag_data[3] node["tcp_fack"] = 
flag_data[4] node["tcp_furg"] = flag_data[5] # process the TCP options if there are # currently just skip it node["tcp_header_size"] = SIZE_TCP_HEADER log.debug(("src={} dst={} seq={} ack={} doff={} flags={} " "f urg={} fin={} syn={} rst={} " "psh={} fack={} urg={}") .format(node["tcp_src_port"], node["tcp_dst_port"], node["tcp_sequence"], node["tcp_ack"], node["tcp_data_offset"], node["tcp_flags"], node["tcp_urg_ptr"], node["tcp_ffin"], node["tcp_fsyn"], node["tcp_frst"], node["tcp_fpsh"], node["tcp_fack"], node["tcp_furg"])) # -------------------------------------------------------- err = "failed_tcp_data" node["data_type"] = TCP node["tcp_header_size"] = ( node["ip_hdr_len"] + (node["tcp_data_offset"] * 4)) node["tcp_data_size"] = len(packet) - node["tcp_header_size"] th = node["tcp_header_size"] node["full_offset"] = eh + ih + th log.info(("TCP Data size={} th1={} th2={} " "offset={} value={}") .format(node["tcp_data_size"], node["ip_hdr_len"], node["tcp_header_size"], node["full_offset"], tcp_data)) err = "failed_tcp_data_offset" tcp_data = packet[node["full_offset"]:] target_data = tcp_data node["error"] = "" node["status"] = VALID elif node["ip_protocol"] == UDP_PROTO_IP: packet_min = node["eth_length"] + node["ip_hdr_len"] packet_max = packet_min + 8 # unpack the UDP packet log.info(("unpacking UDP[{}:{}]") .format(packet_min, packet_max)) err = ("failed_parsing_UDP[{}:{}]").format( packet_min, packet_max) udp_datagram = packet[packet_min:packet_max] log.info(("unpacking UDP Header={}") .format(udp_datagram)) udp_header = unpack(UDP_HEADER_FORMAT, udp_datagram) node["udp_header_size"] = SIZE_UDP_HEADER node["udp_src_port"] = udp_header[0] node["udp_dst_port"] = udp_header[1] node["udp_data_len"] = udp_header[2] node["udp_csum"] = udp_header[3] node["data_type"] = UDP uh = node["udp_header_size"] node["full_offset"] = eh + ih + uh node["udp_data_size"] = len(packet) - node["udp_header_size"] log.info(("UDP Data size={} th1={} th2={} " "offset={} value={}") 
.format(node["udp_data_size"], node["ip_hdr_len"], node["udp_header_size"], node["full_offset"], udp_data)) err = "failed_udp_data_offset" udp_data = packet[node["full_offset"]:] target_data = udp_data node["error"] = "" node["status"] = VALID elif node["ip_protocol"] == ICMP_PROTO_IP: # unpack the ICMP packet packet_min = node["eth_length"] + node["ip_hdr_len"] packet_max = packet_min + 4 log.info(("unpacking ICMP[{}:{}]") .format(packet_min, packet_max)) err = ("failed_parsing_ICMP[{}:{}]").format( packet_min, packet_max) icmp_datagram = packet[packet_min:packet_max] log.info(("unpacking ICMP Header={}") .format(icmp_datagram)) icmp_header = unpack(ICMP_HEADER_FORMAT, icmp_datagram) node["icmp_header_size"] = SIZE_ICMP_HEADER node["icmp_type"] = icmp_header[0] node["icmp_code"] = icmp_header[1] node["icmp_csum"] = icmp_header[2] node["data_type"] = ICMP ah = node["icmp_header_size"] node["full_offset"] = eh + ih + ah node["icmp_data_size"] = len(packet) - node["icmp_header_size"] log.info(("ICMP Data size={} th1={} th2={} " "offset={} value={}") .format(node["icmp_data_size"], node["ip_hdr_len"], node["icmp_header_size"], node["full_offset"], icmp_data)) err = "failed_icmp_data_offset" icmp_data = packet[node["full_offset"]:] target_data = icmp_data node["error"] = "" node["status"] = VALID else: node["error"] = ("unsupported_ip_protocol={}").format( node["ip_protocol"]) node["status"] = IP_UNSUPPORTED # end of parsing supported protocols the final node data if node["status"] == VALID: log.debug("filtering") # filter out delimiters in the last 64 bytes if filter_keys: err = "filtering={}".format(len(filter_keys)) log.debug(err) for f in filter_keys: if target_data: if str(f) in str(target_data): log.info(("FOUND filter={} " "in data={}") .format(f, target_data)) node["error"] = "filtered" node["status"] = FILTERED node["filtered"] = f break # end of tagging packets to filter out of the # network-pipe stream # if there are filters log.debug(("was filtered={}") 
.format(node["filtered"])) if not node["filtered"]: err = "building_stream" log.debug(("building stream target={}") .format(target_data)) stream_size = 0 if target_data: try: # convert to hex string err = ("concerting target_data to " "hex string") node["target_data"] = target_data.hex() except Exception as e: log.info(("failed converting={} to " "utf-8 ex={}") .format(target_data, e)) err = "str target_data" node["target_data"] = target_data # end of try/ex stream_size += len(node["target_data"]) # end of target_data log.debug(("serializing stream={}") .format(node["target_data"])) node_json = json.dumps(node) data_stream = str("{} {}").format(node_json, include_filter_key) log.debug("compressing") if stream_size: node["stream"] = data_stream # end of building the stream log.debug("valid") else: log.error(("unsupported ip frame ip_protocol={}") .format(node["ip_protocol"])) # end of supported IP packet protocol or not elif node["eth_protocol"] == ARP_PROTO_ETH: arp_packet_min = SIZE_ETH_HEADER arp_packet_max = SIZE_ETH_HEADER + 28 log.info(("unpacking ARP[{}:{}]") .format(arp_packet_min, arp_packet_max)) err = ("failed_parsing_ARP[{}:{}]").format( arp_packet_min, arp_packet_max) # take the first 28 characters for the ARP header arp_datagram = packet[arp_packet_min:arp_packet_max] arp_header = unpack(ARP_HEADER_FORMAT, arp_datagram) # https://docs.python.org/2/library/struct.html#format-characters node["arp_header_size"] = SIZE_ARP_HEADER node["arp_hw_type"] = arp_header[0].hex() node["arp_proto_type"] = arp_header[1].hex() node["arp_hw_size"] = arp_header[2].hex() node["arp_proto_size"] = arp_header[3].hex() node["arp_opcode"] = arp_header[4].hex() node["arp_src_mac"] = arp_header[5].hex() node["arp_src_ip"] = socket.inet_ntoa(arp_header[6]) node["arp_dst_mac"] = arp_header[7].hex() node["arp_dst_ip"] = socket.inet_ntoa(arp_header[8]) arp_data = "" node["arp_data"] = arp_data node["target_data"] = arp_data node["data_type"] = ARP node["status"] = VALID 
node["arp_data_size"] = len(packet) - node["arp_header_size"] node_json = json.dumps(node) data_stream = str("{} {}").format(node_json, include_filter_key) node["stream"] = data_stream else: node["error"] = ("unsupported eth_frame protocol={}").format( node["eth_protocol"]) node["status"] = ETH_UNSUPPORTED log.error(node["error"]) # end of supported ETH packet or not except Exception as e: node["status"] = ERROR node["error"] = "err={} failed parsing frame ex={}".format(err, e) log.error(node["error"]) # end of try/ex return node
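The first step in parse_network_data, slicing off and unpacking the 14-byte Ethernet header, can be sketched in isolation. ETH_HEADER_FORMAT is not shown in the source; "!6s6sH" (destination MAC, source MAC, EtherType, network byte order) is assumed here, which is also why this sketch needs no socket.ntohs() call:

```python
from struct import unpack

# Assumed header layout: dst MAC (6 bytes), src MAC (6 bytes),
# EtherType (2 bytes), big-endian.  The real constant is not shown
# in the source above.
ETH_HEADER_FORMAT = "!6s6sH"
SIZE_ETH_HEADER = 14

def eth_addr(raw):
    # Render 6 raw bytes as a colon-separated MAC string.
    return ":".join("{:02x}".format(b) for b in raw)

# A broadcast frame carrying an IPv4 payload (EtherType 0x0800).
frame = bytes.fromhex("ffffffffffff 001122334455 0800".replace(" ", ""))
dst, src, proto = unpack(ETH_HEADER_FORMAT, frame[:SIZE_ETH_HEADER])
print(eth_addr(dst), eth_addr(src), hex(proto))
# ff:ff:ff:ff:ff:ff 00:11:22:33:44:55 0x800
```

Once the EtherType is known, the function above branches to the IPv4 parser (and from there TCP/UDP/ICMP) or to the ARP parser, always computing the payload offset as the sum of the header sizes it has consumed so far.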
def selective_download(name, oldest, newest): """Note: RSS feeds are counted backwards, default newest is 0, the most recent.""" if six.PY3: name = name.encode("utf-8") feed = resolve_name(name) if six.PY3: feed = feed.decode() d = feedparser.parse(feed) logger.debug(d) try: d.entries[int(oldest)] except IndexError: print("Error feed does not contain this many items.") print("Hitman thinks there are %d items in this feed." % len(d.entries)) return for url in [q.enclosures[0]['href'] for q in d.entries[int(newest):int(oldest)]]: # iterate over urls in feed from newest to oldest feed items. url = str(url) with Database("downloads") as db: if url.split('/')[-1] not in db: # download(url, name, feed) with Database("settings") as settings: if 'dl' in settings: dl_dir = settings['dl'] else: dl_dir = os.path.join(os.path.expanduser("~"), "Downloads") requests_get(url, dl_dir)
Note: RSS feeds are counted backwards; the default newest is 0, the most recent item.
Below is the instruction that describes the task: ### Input: Note: RSS feeds are counted backwards, default newest is 0, the most recent. ### Response: def selective_download(name, oldest, newest): """Note: RSS feeds are counted backwards, default newest is 0, the most recent.""" if six.PY3: name = name.encode("utf-8") feed = resolve_name(name) if six.PY3: feed = feed.decode() d = feedparser.parse(feed) logger.debug(d) try: d.entries[int(oldest)] except IndexError: print("Error feed does not contain this many items.") print("Hitman thinks there are %d items in this feed." % len(d.entries)) return for url in [q.enclosures[0]['href'] for q in d.entries[int(newest):int(oldest)]]: # iterate over urls in feed from newest to oldest feed items. url = str(url) with Database("downloads") as db: if url.split('/')[-1] not in db: # download(url, name, feed) with Database("settings") as settings: if 'dl' in settings: dl_dir = settings['dl'] else: dl_dir = os.path.join(os.path.expanduser("~"), "Downloads") requests_get(url, dl_dir)
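The `d.entries[int(newest):int(oldest)]` slice in selective_download works because feedparser lists entries newest-first. A toy list makes the windowing explicit (the episode titles are illustrative, not from any real feed):

```python
# feedparser entries are newest-first, so index 0 is the most recent
# item and entries[newest:oldest] is a window counted back from it.
entries = ["ep5", "ep4", "ep3", "ep2", "ep1"]  # illustrative titles
newest, oldest = 0, 3
window = entries[int(newest):int(oldest)]
print(window)  # ['ep5', 'ep4', 'ep3']
```

This is also why the function probes `d.entries[int(oldest)]` first: if that index raises IndexError, the requested window extends past the end of the feed.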
def _computeWeights(self, logform=False, include_nonzero=False, recalc_denom=True, return_f_k=False): """Compute the normalized weights corresponding to samples for the given reduced potential. Compute the normalized weights corresponding to samples for the given reduced potential. Also stores the all_log_denom array for reuse. Parameters ---------- logform : bool, optional Whether the output is in logarithmic form, which is better for stability, though sometimes the exponential form is requires. include_nonzero : bool, optional whether to compute weights for states with nonzero states. Not necessary when performing self-consistent iteration. recalc_denom : bool, optional recalculate the denominator, must be done if the free energies change. default is to do it, so that errors are not made. But can be turned off if it is known the free energies have not changed. return_f_k : bool, optional return the self-consistent f_k values Returns ------- if logform==True: Log_W_nk (double) - Log_W_nk[n,k] is the normalized log weight of sample n from state k. else: W_nk (double) - W_nk[n,k] is the log weight of sample n from state k. if return_f_k==True: optionally return the self-consistent free energy from these weights. """ if (include_nonzero): f_k = self.f_k K = self.K else: f_k = self.f_k[self.states_with_samples] K = len(self.states_with_samples) # array of either weights or normalized log weights Warray_nk = np.zeros([self.N, K], dtype=np.float64) if (return_f_k): f_k_out = np.zeros([K], dtype=np.float64) if (recalc_denom): self.log_weight_denom = self._computeUnnormalizedLogWeights( np.zeros([self.N], dtype=np.float64)) for k in range(K): if (include_nonzero): index = k else: index = self.states_with_samples[k] log_w_n = -self.u_kn[index, :] + self.log_weight_denom + f_k[k] if (return_f_k): f_k_out[k] = f_k[k] - _logsum(log_w_n) if (include_nonzero): # renormalize the weights, needed for nonzero states. 
log_w_n += (f_k_out[k] - f_k[k]) if (logform): Warray_nk[:, k] = log_w_n else: Warray_nk[:, k] = np.exp(log_w_n) # Return weights (or log weights) if (return_f_k): f_k_out[:] = f_k_out[:] - f_k_out[0] return Warray_nk, f_k_out else: return Warray_nk
Compute the normalized weights corresponding to samples for the given reduced potential, and store the log_weight_denom array for reuse. Parameters ---------- logform : bool, optional Whether the output is in logarithmic form, which is better for stability, though sometimes the exponential form is required. include_nonzero : bool, optional whether to compute weights for all states, including those without samples. Not necessary when performing self-consistent iteration. recalc_denom : bool, optional recalculate the denominator; must be done if the free energies change. The default is to do it, so that errors are not made, but it can be turned off if it is known the free energies have not changed. return_f_k : bool, optional return the self-consistent f_k values Returns ------- if logform==True: Log_W_nk (double) - Log_W_nk[n,k] is the normalized log weight of sample n from state k. else: W_nk (double) - W_nk[n,k] is the normalized weight of sample n from state k. if return_f_k==True: optionally return the self-consistent free energies from these weights.
Below is the the instruction that describes the task: ### Input: Compute the normalized weights corresponding to samples for the given reduced potential. Compute the normalized weights corresponding to samples for the given reduced potential. Also stores the all_log_denom array for reuse. Parameters ---------- logform : bool, optional Whether the output is in logarithmic form, which is better for stability, though sometimes the exponential form is requires. include_nonzero : bool, optional whether to compute weights for states with nonzero states. Not necessary when performing self-consistent iteration. recalc_denom : bool, optional recalculate the denominator, must be done if the free energies change. default is to do it, so that errors are not made. But can be turned off if it is known the free energies have not changed. return_f_k : bool, optional return the self-consistent f_k values Returns ------- if logform==True: Log_W_nk (double) - Log_W_nk[n,k] is the normalized log weight of sample n from state k. else: W_nk (double) - W_nk[n,k] is the log weight of sample n from state k. if return_f_k==True: optionally return the self-consistent free energy from these weights. ### Response: def _computeWeights(self, logform=False, include_nonzero=False, recalc_denom=True, return_f_k=False): """Compute the normalized weights corresponding to samples for the given reduced potential. Compute the normalized weights corresponding to samples for the given reduced potential. Also stores the all_log_denom array for reuse. Parameters ---------- logform : bool, optional Whether the output is in logarithmic form, which is better for stability, though sometimes the exponential form is requires. include_nonzero : bool, optional whether to compute weights for states with nonzero states. Not necessary when performing self-consistent iteration. recalc_denom : bool, optional recalculate the denominator, must be done if the free energies change. 
default is to do it, so that errors are not made. But can be turned off if it is known the free energies have not changed. return_f_k : bool, optional return the self-consistent f_k values Returns ------- if logform==True: Log_W_nk (double) - Log_W_nk[n,k] is the normalized log weight of sample n from state k. else: W_nk (double) - W_nk[n,k] is the log weight of sample n from state k. if return_f_k==True: optionally return the self-consistent free energy from these weights. """ if (include_nonzero): f_k = self.f_k K = self.K else: f_k = self.f_k[self.states_with_samples] K = len(self.states_with_samples) # array of either weights or normalized log weights Warray_nk = np.zeros([self.N, K], dtype=np.float64) if (return_f_k): f_k_out = np.zeros([K], dtype=np.float64) if (recalc_denom): self.log_weight_denom = self._computeUnnormalizedLogWeights( np.zeros([self.N], dtype=np.float64)) for k in range(K): if (include_nonzero): index = k else: index = self.states_with_samples[k] log_w_n = -self.u_kn[index, :] + self.log_weight_denom + f_k[k] if (return_f_k): f_k_out[k] = f_k[k] - _logsum(log_w_n) if (include_nonzero): # renormalize the weights, needed for nonzero states. log_w_n += (f_k_out[k] - f_k[k]) if (logform): Warray_nk[:, k] = log_w_n else: Warray_nk[:, k] = np.exp(log_w_n) # Return weights (or log weights) if (return_f_k): f_k_out[:] = f_k_out[:] - f_k_out[0] return Warray_nk, f_k_out else: return Warray_nk
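The `f_k[k] - _logsum(log_w_n)` step in the record above is a log-sum-exp normalization. A minimal standalone sketch of that idea, using only the standard library (`logsumexp` and `normalized_log_weights` are illustrative names, not part of the library's API):

```python
import math

def logsumexp(values):
    # Numerically stable log(sum(exp(v))): shift by the max before
    # exponentiating so large inputs do not overflow
    vmax = max(values)
    return vmax + math.log(sum(math.exp(v - vmax) for v in values))

def normalized_log_weights(log_w):
    # Subtracting the log normalizer makes the exponentiated weights sum
    # to one, mirroring the "- _logsum(log_w_n)" step above
    lse = logsumexp(log_w)
    return [v - lse for v in log_w]
```

Note that naive `exp` on inputs like `1000.0` would overflow, while the shifted form stays finite.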
def download_async(self, remote_path, local_path, callback=None): """Downloads remote resources from WebDAV server asynchronously :param remote_path: the path to remote resource on WebDAV server. Can be file and directory. :param local_path: the path to save resource locally. :param callback: the callback which will be invoked when downloading is complete. """ target = (lambda: self.download_sync(local_path=local_path, remote_path=remote_path, callback=callback)) threading.Thread(target=target).start()
Downloads remote resources from WebDAV server asynchronously :param remote_path: the path to remote resource on WebDAV server. Can be file and directory. :param local_path: the path to save resource locally. :param callback: the callback which will be invoked when downloading is complete.
Below is the instruction that describes the task: ### Input: Downloads remote resources from WebDAV server asynchronously :param remote_path: the path to remote resource on WebDAV server. Can be file and directory. :param local_path: the path to save resource locally. :param callback: the callback which will be invoked when downloading is complete. ### Response: def download_async(self, remote_path, local_path, callback=None): """Downloads remote resources from WebDAV server asynchronously :param remote_path: the path to remote resource on WebDAV server. Can be file and directory. :param local_path: the path to save resource locally. :param callback: the callback which will be invoked when downloading is complete. """ target = (lambda: self.download_sync(local_path=local_path, remote_path=remote_path, callback=callback)) threading.Thread(target=target).start()
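The pattern above — wrap a blocking call and hand it to a `Thread` — can be sketched standalone. Here `download_sync_stub` is a hypothetical stand-in for the real blocking `download_sync`, and returning the `Thread` is an added convenience (the original method discards it) so callers can `join()` when they need to wait:

```python
import threading
import time

def download_sync_stub(remote_path, local_path, callback=None):
    # Hypothetical stand-in for a blocking download; invokes the callback
    # with the local path once "downloading" finishes
    time.sleep(0.01)
    if callback is not None:
        callback(local_path)

def download_async(remote_path, local_path, callback=None,
                   download_sync=download_sync_stub):
    # Run the blocking download on a background thread and return the
    # Thread handle so the caller can optionally join() it
    thread = threading.Thread(
        target=download_sync, args=(remote_path, local_path, callback))
    thread.start()
    return thread
```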
def GetAvailableClaimTotal(self): """ Gets the total amount of Gas that this wallet is able to claim at a given moment. Returns: Fixed8: the amount of Gas available to claim as a Fixed8 number. """ coinrefs = [coin.Reference for coin in self.GetUnclaimedCoins()] bonus = Blockchain.CalculateBonusIgnoreClaimed(coinrefs, True) return bonus
Gets the total amount of Gas that this wallet is able to claim at a given moment. Returns: Fixed8: the amount of Gas available to claim as a Fixed8 number.
Below is the instruction that describes the task: ### Input: Gets the total amount of Gas that this wallet is able to claim at a given moment. Returns: Fixed8: the amount of Gas available to claim as a Fixed8 number. ### Response: def GetAvailableClaimTotal(self): """ Gets the total amount of Gas that this wallet is able to claim at a given moment. Returns: Fixed8: the amount of Gas available to claim as a Fixed8 number. """ coinrefs = [coin.Reference for coin in self.GetUnclaimedCoins()] bonus = Blockchain.CalculateBonusIgnoreClaimed(coinrefs, True) return bonus
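Conceptually, the claim total above is an accrued bonus aggregated over the wallet's unclaimed coin references. A heavily simplified, hypothetical sketch — no real `Blockchain` API here; `bonus_for` merely stands in for `CalculateBonusIgnoreClaimed`:

```python
def available_claim_total(unclaimed_coins, bonus_for):
    # Hypothetical sketch: total claimable amount as the per-coin accrued
    # bonus summed over every unclaimed coin reference
    return sum(bonus_for(coin) for coin in unclaimed_coins)
```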
def get_cn(n, mc, dl, F, e): """ Compute c_n from Eq. 22 of Taylor et al. (2015). :param n: Harmonic number :param mc: Chirp mass of binary [Solar Mass] :param dl: Luminosity distance [Mpc] :param F: Orbital frequency of binary [Hz] :param e: Orbital Eccentricity :returns: c_n """ # convert to seconds mc *= SOLAR2S dl *= MPC2S omega = 2 * np.pi * F amp = 2 * mc**(5/3) * omega**(2/3) / dl ret = amp * ss.jn(n,n*e) / (n * omega) return ret
Compute c_n from Eq. 22 of Taylor et al. (2015). :param n: Harmonic number :param mc: Chirp mass of binary [Solar Mass] :param dl: Luminosity distance [Mpc] :param F: Orbital frequency of binary [Hz] :param e: Orbital Eccentricity :returns: c_n
Below is the instruction that describes the task: ### Input: Compute c_n from Eq. 22 of Taylor et al. (2015). :param n: Harmonic number :param mc: Chirp mass of binary [Solar Mass] :param dl: Luminosity distance [Mpc] :param F: Orbital frequency of binary [Hz] :param e: Orbital Eccentricity :returns: c_n ### Response: def get_cn(n, mc, dl, F, e): """ Compute c_n from Eq. 22 of Taylor et al. (2015). :param n: Harmonic number :param mc: Chirp mass of binary [Solar Mass] :param dl: Luminosity distance [Mpc] :param F: Orbital frequency of binary [Hz] :param e: Orbital Eccentricity :returns: c_n """ # convert to seconds mc *= SOLAR2S dl *= MPC2S omega = 2 * np.pi * F amp = 2 * mc**(5/3) * omega**(2/3) / dl ret = amp * ss.jn(n,n*e) / (n * omega) return ret
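The arithmetic in `get_cn` can be reproduced without numpy/scipy by expanding the Bessel function of the first kind as a power series. The unit-conversion constants below are assumed approximate values (a solar mass and a megaparsec expressed in seconds, i.e. G*Msun/c^3 and Mpc/c), and `bessel_jn` is a stand-in for `scipy.special.jn` that is accurate for integer order and moderate argument:

```python
import math

# Assumed approximate conversions (standard values, not taken from the source):
SOLAR2S = 4.92549e-6   # one solar mass in seconds, G*Msun/c**3
MPC2S = 1.02927e14     # one megaparsec in seconds, Mpc/c

def bessel_jn(n, x, terms=40):
    # Power series for J_n(x), the Bessel function of the first kind;
    # converges quickly for integer n >= 0 and moderate |x|
    return sum((-1) ** m / (math.factorial(m) * math.factorial(m + n))
               * (x / 2) ** (2 * m + n) for m in range(terms))

def get_cn_pure(n, mc, dl, F, e):
    # Same arithmetic as the record above, dependency-free
    mc = mc * SOLAR2S
    dl = dl * MPC2S
    omega = 2 * math.pi * F
    amp = 2 * mc ** (5 / 3) * omega ** (2 / 3) / dl
    return amp * bessel_jn(n, n * e) / (n * omega)
```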
def plot_ppc(self, nsims=1000, T=np.mean, **kwargs): """ Plots histogram of the discrepancy from draws of the posterior Parameters ---------- nsims : int (default : 1000) How many draws for the PPC T : function A discrepancy measure - e.g. np.mean, np.std, np.max """ if self.latent_variables.estimation_method not in ['BBVI', 'M-H']: raise Exception("No latent variables estimated!") else: import matplotlib.pyplot as plt import seaborn as sns figsize = kwargs.get('figsize',(10,7)) lv_draws = self.draw_latent_variables(nsims=nsims) mus = [self._model(lv_draws[:,i])[0] for i in range(nsims)] model_scale, model_shape, model_skewness = self._get_scale_and_shape_sim(lv_draws) data_draws = np.array([self.family.draw_variable(self.link(mus[i]), np.repeat(model_scale[i], mus[i].shape[0]), np.repeat(model_shape[i], mus[i].shape[0]), np.repeat(model_skewness[i], mus[i].shape[0]), mus[i].shape[0]) for i in range(nsims)]) T_sim = T(self.sample(nsims=nsims), axis=1) T_actual = T(self.data) if T == np.mean: description = " of the mean" elif T == np.max: description = " of the maximum" elif T == np.min: description = " of the minimum" elif T == np.median: description = " of the median" else: description = "" plt.figure(figsize=figsize) ax = plt.subplot() ax.axvline(T_actual) sns.distplot(T_sim, kde=False, ax=ax) ax.set(title='Posterior predictive' + description, xlabel='T(x)', ylabel='Frequency'); plt.show()
Plots histogram of the discrepancy from draws of the posterior Parameters ---------- nsims : int (default : 1000) How many draws for the PPC T : function A discrepancy measure - e.g. np.mean, np.std, np.max
Below is the instruction that describes the task: ### Input: Plots histogram of the discrepancy from draws of the posterior Parameters ---------- nsims : int (default : 1000) How many draws for the PPC T : function A discrepancy measure - e.g. np.mean, np.std, np.max ### Response: def plot_ppc(self, nsims=1000, T=np.mean, **kwargs): """ Plots histogram of the discrepancy from draws of the posterior Parameters ---------- nsims : int (default : 1000) How many draws for the PPC T : function A discrepancy measure - e.g. np.mean, np.std, np.max """ if self.latent_variables.estimation_method not in ['BBVI', 'M-H']: raise Exception("No latent variables estimated!") else: import matplotlib.pyplot as plt import seaborn as sns figsize = kwargs.get('figsize',(10,7)) lv_draws = self.draw_latent_variables(nsims=nsims) mus = [self._model(lv_draws[:,i])[0] for i in range(nsims)] model_scale, model_shape, model_skewness = self._get_scale_and_shape_sim(lv_draws) data_draws = np.array([self.family.draw_variable(self.link(mus[i]), np.repeat(model_scale[i], mus[i].shape[0]), np.repeat(model_shape[i], mus[i].shape[0]), np.repeat(model_skewness[i], mus[i].shape[0]), mus[i].shape[0]) for i in range(nsims)]) T_sim = T(self.sample(nsims=nsims), axis=1) T_actual = T(self.data) if T == np.mean: description = " of the mean" elif T == np.max: description = " of the maximum" elif T == np.min: description = " of the minimum" elif T == np.median: description = " of the median" else: description = "" plt.figure(figsize=figsize) ax = plt.subplot() ax.axvline(T_actual) sns.distplot(T_sim, kde=False, ax=ax) ax.set(title='Posterior predictive' + description, xlabel='T(x)', ylabel='Frequency'); plt.show()
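The histogram above visualizes how extreme `T_actual` is among the simulated discrepancies `T_sim`; the same comparison is often summarized numerically as a posterior predictive p-value. A minimal sketch (`ppc_pvalue` is an illustrative helper, not part of the model class):

```python
def ppc_pvalue(t_sim, t_actual):
    # Posterior predictive p-value: fraction of simulated discrepancies at
    # least as large as the observed one; values near 0 or 1 flag misfit
    t_sim = list(t_sim)
    return sum(t >= t_actual for t in t_sim) / len(t_sim)
```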