Dataset columns (value lengths):
    code        75 – 104k characters
    docstring   1 – 46.9k characters
    text        164 – 112k characters
def generate_annotations_json_string(source_path, only_simple=False):
    # type: (str, bool) -> List[FunctionData]
    """Produce annotation data JSON file from a JSON file with
    runtime-collected types.

    Data formats:

    * The source JSON is a list of pyannotate_tools.annotations.parse.RawEntry
      items.
    * The output JSON is a list of FunctionData items.
    """
    items = parse_json(source_path)
    results = []
    for item in items:
        signature = unify_type_comments(item.type_comments)
        if is_signature_simple(signature) or not only_simple:
            data = {
                'path': item.path,
                'line': item.line,
                'func_name': item.func_name,
                'signature': signature,
                'samples': item.samples
            }  # type: FunctionData
            results.append(data)
    return results
def slice(dataset, normal='x', origin=None, generate_triangles=False,
          contour=False):
    """Slice a dataset by a plane at the specified origin and normal vector
    orientation. If no origin is specified, the center of the input dataset
    will be used.

    Parameters
    ----------
    normal : tuple(float) or str
        Length 3 tuple for the normal vector direction. Can also be
        specified as a string conventional direction such as ``'x'`` for
        ``(1,0,0)`` or ``'-x'`` for ``(-1,0,0)``, etc.

    origin : tuple(float)
        The center (x,y,z) coordinate of the plane on which the slice occurs

    generate_triangles : bool, optional
        If this is enabled (``False`` by default), the output will be
        triangles; otherwise, the output will be the intersection polygons.

    contour : bool, optional
        If True, apply a ``contour`` filter after slicing
    """
    if isinstance(normal, str):
        normal = NORMALS[normal.lower()]
    # find center of data if origin not specified
    if origin is None:
        origin = dataset.center
    if not is_inside_bounds(origin, dataset.bounds):
        raise AssertionError('Slice is outside data bounds.')
    # create the plane for clipping
    plane = _generate_plane(normal, origin)
    # create slice
    alg = vtk.vtkCutter()  # construct the cutter object
    alg.SetInputDataObject(dataset)  # use the grid as the data we desire to cut
    alg.SetCutFunction(plane)  # set the cutter to use the plane we made
    if not generate_triangles:
        alg.GenerateTrianglesOff()
    alg.Update()  # perform the cut
    output = _get_output(alg)
    if contour:
        return output.contour()
    return output
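The string shortcut for ``normal`` relies on a ``NORMALS`` lookup table that is not shown in this snippet. A minimal sketch of what such a table presumably looks like (the name and exact contents are assumptions, not the library's verified definition):

```python
# Hypothetical NORMALS table assumed by slice(); maps conventional axis
# names to unit normal vectors. The real module's table may differ.
NORMALS = {
    'x': (1, 0, 0), 'y': (0, 1, 0), 'z': (0, 0, 1),
    '-x': (-1, 0, 0), '-y': (0, -1, 0), '-z': (0, 0, -1),
}

# lower-casing makes the lookup case-insensitive, as in slice()
normal = NORMALS['X'.lower()]
```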
def _determine_beacon_config(self, current_beacon_config, key):
    '''
    Process a beacon configuration to determine its interval
    '''
    interval = False
    if isinstance(current_beacon_config, dict):
        interval = current_beacon_config.get(key, False)
    return interval
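The helper above is easy to exercise standalone; this sketch drops the unused ``self`` to show the behavior for dict and non-dict configurations:

```python
def determine_interval(beacon_config, key='interval'):
    # A dict config may carry the requested key; any other shape
    # (list, None, string) yields False, exactly as in the method above.
    if isinstance(beacon_config, dict):
        return beacon_config.get(key, False)
    return False
```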
def vibrational_internal_energy(self, temperature, volume):
    """
    Vibrational internal energy, U_vib(V, T).
    Eq(4) in doi.org/10.1016/j.comphy.2003.12.001

    Args:
        temperature (float): temperature in K
        volume (float): in Ang^3

    Returns:
        float: vibrational internal energy in eV
    """
    y = self.debye_temperature(volume) / temperature
    return self.kb * self.natoms * temperature * (9./8. * y +
                                                  3 * self.debye_integral(y))
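The Debye-model formula above can be checked numerically with a self-contained sketch. The quadrature below and the constant names are my own (the original class's `debye_integral` is not shown); in the high-temperature limit U_vib should approach the classical 3·N·kB·T, and at low temperature it should reduce to the zero-point term (9/8)·N·kB·θ_D:

```python
import math

def debye_integral(y, n=2000):
    # D(y) = (3 / y**3) * integral_0^y x**3 / (exp(x) - 1) dx,
    # approximated here with a midpoint rule (assumption: the class's
    # debye_integral computes the same quantity).
    h = y / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        total += x**3 / math.expm1(x)
    return 3.0 / y**3 * (total * h)

KB = 8.617333e-5  # Boltzmann constant in eV/K

def vibrational_internal_energy(temperature, debye_temp, natoms):
    # Eq(4): U_vib = kB * N * T * (9/8 * y + 3 * D(y)),  y = theta_D / T
    y = debye_temp / temperature
    return KB * natoms * temperature * (9. / 8. * y + 3 * debye_integral(y))
```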
def enable_llama_ha(self, new_llama_host_id, zk_service_name=None,
                    new_llama_role_name=None):
    """
    Enable high availability for an Impala Llama ApplicationMaster.
    This command only applies to CDH 5.1+ Impala services.

    @param new_llama_host_id: id of the host where the second Llama role
                              will be added.
    @param zk_service_name: Name of the ZooKeeper service to use for
                            auto-failover. If Impala's ZooKeeper dependency
                            is already set, then that ZooKeeper service will
                            be used for auto-failover, and this parameter
                            may be omitted.
    @param new_llama_role_name: Name of the new Llama role. If omitted, a
                                name will be generated automatically.
    @return: Reference to the submitted command.
    @since: API v8
    """
    args = dict(
        newLlamaHostId = new_llama_host_id,
        zkServiceName = zk_service_name,
        newLlamaRoleName = new_llama_role_name
    )
    return self._cmd('impalaEnableLlamaHa', data=args, api_version=8)
def get_pulls_list(project, auth=False, **params):
    """get pull request list"""
    params.setdefault("state", "closed")
    url = "https://api.github.com/repos/{project}/pulls".format(project=project)
    if auth:
        headers = make_auth_header()
    else:
        headers = None
    pages = get_paged_request(url, headers=headers, **params)
    return pages
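`get_paged_request` is not shown here; for the GitHub API it presumably follows the RFC 5988 `Link` response header to find subsequent pages. A minimal, self-contained parser for that header (my own sketch, not the helper's actual implementation):

```python
import re

def parse_link_header(link_header):
    # Parse a Link header of the form the GitHub API returns, e.g.
    # '<https://...&page=2>; rel="next", <https://...&page=5>; rel="last"'
    # into a {rel: url} dict.
    links = {}
    for part in link_header.split(','):
        m = re.search(r'<([^>]+)>;\s*rel="([^"]+)"', part)
        if m:
            links[m.group(2)] = m.group(1)
    return links
```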
def copy_subtree(self, source_node, dest_node):  # noqa: D302
    r"""
    Copy a sub-tree from one sub-node to another.

    Data is added if some nodes of the source sub-tree exist in the
    destination sub-tree

    :param source_name: Root node of the sub-tree to copy from
    :type  source_name: :ref:`NodeName`

    :param dest_name: Root node of the sub-tree to copy to
    :type  dest_name: :ref:`NodeName`

    :raises:
     * RuntimeError (Argument \`dest_node\` is not valid)
     * RuntimeError (Argument \`source_node\` is not valid)
     * RuntimeError (Illegal root in destination node)
     * RuntimeError (Node *[source_node]* not in tree)

    Using the same example tree created in
    :py:meth:`ptrie.Trie.add_nodes`::

        >>> from __future__ import print_function
        >>> import docs.support.ptrie_example
        >>> tobj = docs.support.ptrie_example.create_tree()
        >>> print(tobj)
        root
        ├branch1 (*)
        │├leaf1
        ││└subleaf1 (*)
        │└leaf2 (*)
        │ └subleaf2
        └branch2
        >>> tobj.copy_subtree('root.branch1', 'root.branch3')
        >>> print(tobj)
        root
        ├branch1 (*)
        │├leaf1
        ││└subleaf1 (*)
        │└leaf2 (*)
        │ └subleaf2
        ├branch2
        └branch3 (*)
         ├leaf1
         │└subleaf1 (*)
         └leaf2 (*)
          └subleaf2
    """
    if self._validate_node_name(source_node):
        raise RuntimeError("Argument `source_node` is not valid")
    if self._validate_node_name(dest_node):
        raise RuntimeError("Argument `dest_node` is not valid")
    if source_node not in self._db:
        raise RuntimeError("Node {0} not in tree".format(source_node))
    if not dest_node.startswith(self.root_name + self._node_separator):
        raise RuntimeError("Illegal root in destination node")
    for node in self._get_subtree(source_node):
        self._db[node.replace(source_node, dest_node, 1)] = {
            "parent": self._db[node]["parent"].replace(
                source_node, dest_node, 1
            ),
            "children": [
                child.replace(source_node, dest_node, 1)
                for child in self._db[node]["children"]
            ],
            "data": copy.deepcopy(self._db[node]["data"]),
        }
    self._create_intermediate_nodes(dest_node)
    parent = self._node_separator.join(
        dest_node.split(self._node_separator)[:-1]
    )
    self._db[dest_node]["parent"] = parent
    self._db[parent]["children"] = sorted(
        self._db[parent]["children"] + [dest_node]
    )
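The key-rewriting step of the copy is plain string work: every node name in the source sub-tree gets its `source_node` prefix swapped for `dest_node` via `str.replace(..., 1)`. A small illustration of that step in isolation:

```python
source, dest = 'root.branch1', 'root.branch3'
nodes = ['root.branch1', 'root.branch1.leaf1', 'root.branch1.leaf1.subleaf1']

# count=1 limits the substitution to the first occurrence, so a repeated
# 'root.branch1' deeper in a name would be left untouched
renamed = [n.replace(source, dest, 1) for n in nodes]
```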
def main():
    """Main Function"""
    num_blocks = int(sys.argv[1]) if len(sys.argv) == 2 else 3
    clear_db()
    for block_id in generate_scheduling_block_id(num_blocks=num_blocks,
                                                 project='sip'):
        config = {
            "id": block_id,
            "sub_array_id": str(random.choice(range(3))),
            "processing_blocks": []
        }
        for i in range(random.randint(1, 3)):
            config['processing_blocks'].append({
                "id": "{}:pb{:03d}".format(block_id, i),
                "workflow": {
                    "name": "{}".format(random.choice(['vis_ingest_01',
                                                       'dask_ical_01',
                                                       'dask_maps_01'])),
                    "template": {},
                    "stages": []
                }
            })
        print('-' * 40)
        print(json.dumps(config, indent=2))
        add_scheduling_block(config)
def affiliations(self):
    """A list of namedtuples storing affiliation information, where each
    namedtuple corresponds to one affiliation. The information in each
    namedtuple is (eid name variant documents city country parent). All
    entries are strings or None. variant combines variants of names with
    a semicolon.
    """
    out = []
    order = 'eid name variant documents city country parent'
    aff = namedtuple('Affiliation', order)
    for item in self._json:
        name = item.get('affiliation-name')
        variants = [d.get('$', "") for d in item.get('name-variant', [])
                    if d.get('$', "") != name]
        new = aff(eid=item['eid'], variant=";".join(variants),
                  documents=item.get('document-count', '0'),
                  name=name, city=item.get('city'),
                  country=item.get('country'),
                  parent=item.get('parent-affiliation-id'))
        out.append(new)
    return out or None
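The property's transformation can be exercised on a hand-written item. The dict below is a hypothetical sample of the Scopus JSON shape the loop consumes (field names taken from the code; the values are invented for illustration):

```python
from collections import namedtuple

Affiliation = namedtuple('Affiliation',
                         'eid name variant documents city country parent')

# hypothetical search-result item; only the keys the property reads
item = {
    'eid': '10-s2.0-60021784',
    'affiliation-name': 'Example University',
    'name-variant': [{'$': 'Example Univ.'}, {'$': 'Example University'}],
    'document-count': '1234',
    'city': 'Exampletown',
    'country': 'Nowhere',
}

name = item.get('affiliation-name')
# variants equal to the canonical name are dropped before joining
variants = [d.get('$', '') for d in item.get('name-variant', [])
            if d.get('$', '') != name]
aff = Affiliation(eid=item['eid'], name=name, variant=';'.join(variants),
                  documents=item.get('document-count', '0'),
                  city=item.get('city'), country=item.get('country'),
                  parent=item.get('parent-affiliation-id'))
```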
def coerce(self, value):
    """Coerce a single value according to this parameter's settings.

    @param value: A L{str}, or L{None}. If L{None} is passed - meaning no
        value is available at all, not even the empty string - and this
        parameter is optional, L{self.default} will be returned.
    """
    if value is None:
        if self.optional:
            return self.default
        else:
            value = ""
    if value == "":
        if not self.allow_none:
            raise MissingParameterError(self.name, kind=self.kind)
        return self.default
    try:
        self._check_range(value)
        parsed = self.parse(value)
        if self.validator and not self.validator(parsed):
            raise ValueError(value)
        return parsed
    except ValueError:
        try:
            value = value.decode("utf-8")
            message = "Invalid %s value %s" % (self.kind, value)
        except UnicodeDecodeError:
            message = "Invalid %s value" % self.kind
        raise InvalidParameterValueError(message)
def list(self, argv):
    """List available indexes if no names provided, otherwise list the
    properties of the named indexes."""

    def read(index):
        print(index.name)
        for key in sorted(index.content.keys()):
            value = index.content[key]
            print(" %s: %s" % (key, value))

    if len(argv) == 0:
        for index in self.service.indexes:
            count = index['totalEventCount']
            print("%s (%s)" % (index.name, count))
    else:
        self.foreach(argv, read)
def _clear(self):
    """
    Clear the current image.
    """
    self._plain_image = [" " * self._width for _ in range(self._height)]
    self._colour_map = [[(None, 0, 0) for _ in range(self._width)]
                        for _ in range(self._height)]
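The nested comprehension for the colour map matters: it gives each row its own list. The tempting shortcut of multiplying a single row would alias every row to the same object, so writing one cell would appear to change a whole column. A standalone sketch of both constructions:

```python
width, height = 4, 3

# each row is a fresh list, as in _clear()
colour_map = [[(None, 0, 0) for _ in range(width)] for _ in range(height)]

# the aliasing pitfall: all "rows" are the same list object
aliased = [[(None, 0, 0)] * width] * height
aliased[0][0] = ('red', 1, 1)  # silently updates every row
```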
def on_master_missing(self):
    '''
    Tries to spawn a master agency if the slave agency failed to connect
    for several times. To avoid several slave agencies spawning the master
    agency a file lock is used
    '''
    self.info("We could not contact the master agency, starting a new one")
    if self._starting_master:
        self.info("Master already starting, waiting for it")
        return
    if self._shutdown_task is not None:
        self.info("Not spawning master because we are about to terminate "
                  "ourselves")
        return
    if self._startup_task is not None:
        raise error.FeatError("Standalone started without a previous "
                              "master agency already running, terminating "
                              "it")
    # Try to get an exclusive lock on the master agency startup
    if self._acquire_lock():
        self._starting_master = True
        # Allow restarting a master if we didn't succeed after 10 seconds
        self._release_lock_cl = time.callLater(10, self._release_lock)
        return self._spawn_agency('master')
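The docstring's file lock (behind `_acquire_lock`) might be built on an OS-level advisory lock; the sketch below uses `flock`, which is one common way to do it on POSIX systems, though the actual implementation in this framework is not shown and may differ:

```python
import fcntl
import os
import tempfile

def try_acquire_lock(path):
    # Hypothetical non-blocking file lock: returns the open handle on
    # success (closing it releases the lock) or None if another holder
    # already has it.
    f = open(path, 'w')
    try:
        fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return f
    except OSError:
        f.close()
        return None
```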
def _wrap_client_error(e):
    """
    Wrap botocore ClientError exception into ServerlessRepoClientError.

    :param e: botocore exception
    :type e: ClientError
    :return: S3PermissionsRequired or InvalidS3UriError or general
        ServerlessRepoClientError
    """
    error_code = e.response['Error']['Code']
    message = e.response['Error']['Message']

    if error_code == 'BadRequestException':
        if "Failed to copy S3 object. Access denied:" in message:
            match = re.search('bucket=(.+?), key=(.+?)$', message)
            if match:
                return S3PermissionsRequired(bucket=match.group(1),
                                             key=match.group(2))
        if "Invalid S3 URI" in message:
            return InvalidS3UriError(message=message)

    return ServerlessRepoClientError(message=message)
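The bucket/key extraction is just the regex from the function applied to the service's error text. The message below is a hypothetical example of the shape being parsed (the real service wording is assumed from the substring check in the code):

```python
import re

# hypothetical access-denied message matching the substring the code checks
message = ("Failed to copy S3 object. Access denied: "
           "bucket=my-app-bucket, key=packaged/template.yaml")

# non-greedy groups stop at ', key=' and at end-of-string respectively
match = re.search('bucket=(.+?), key=(.+?)$', message)
bucket, key = match.group(1), match.group(2)
```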
def dump_js(js, abspath, precision=None, fastmode=False, replace=False,
            compress=False, enable_verbose=True):
    """Dump Json serializable object to file. Provides multiple choice to
    customize the behavior.

    :param js: Serializable python object.
    :type js: dict or list

    :param abspath: ``save as`` path, file extension has to be ``.json`` or
        ``.gz`` (for compressed json).
    :type abspath: string

    :param fastmode: (default False) If ``True``, then dump json without
        sorted keys and pretty indent; it's faster and smaller in size.
    :type fastmode: boolean

    :param replace: (default False) If ``True``, when you dump json to an
        existing path, it silently overwrites it. If ``False``, an exception
        will be raised. The default ``False`` setting is to prevent
        overwriting a file by mistake.
    :type replace: boolean

    :param compress: (default False) If ``True``, use the gzip format to
        compress the json file. Disk usage can be greatly reduced. But you
        have to use :func:`load_js(abspath, compress=True)<load_js>` in
        loading.
    :type compress: boolean

    :param enable_verbose: (default True) Trigger for message.
    :type enable_verbose: boolean

    Usage::

        >>> from weatherlab.lib.dataIO.js import dump_js
        >>> js = {"a": 1, "b": 2}
        >>> dump_js(js, "test.json", replace=True)
        Dumping to test.json...
            Complete! Elapse 0.002432 sec

    **中文文档**

    将Python中可被序列化的"字典", "列表"以及他们的组合, 按照Json的编码方式写入文件

    参数列表

    :param js: 可Json化的Python对象
    :type js: ``字典`` 或 ``列表``

    :param abspath: 写入文件的路径。扩展名必须为``.json``或``.gz``, 其中gz用于被压
        缩的Json
    :type abspath: ``字符串``

    :param fastmode: (默认 False) 当为``True``时, Json编码时不对Key进行排序, 也不
        进行缩进排版。这样做写入的速度更快, 文件的大小也更小。
    :type fastmode: "布尔值"

    :param replace: (默认 False) 当为``True``时, 如果写入路径已经存在, 则会自动覆盖
        原文件。而为``False``时, 则会抛出异常。防止误操作覆盖源文件。
    :type replace: "布尔值"

    :param compress: (默认 False) 当为``True``时, 使用开源压缩标准gzip压缩Json文件。
        通常能让文件大小缩小10-20倍不等。如要读取文件, 则需要使用函数
        :func:`load_js(abspath, compress=True)<load_js>`.
    :type compress: "布尔值"

    :param enable_verbose: (默认 True) 是否打开信息提示开关, 批处理时建议关闭.
    :type enable_verbose: "布尔值"
    """
    abspath = str(abspath)  # try to stringize

    if precision is not None:
        encoder.FLOAT_REPR = lambda x: format(x, ".%sf" % precision)

    msg = Messenger(enable_verbose=enable_verbose)

    if compress:  # check extension name
        root, ext = os.path.splitext(abspath)
        if ext != ".gz":
            if ext != ".tmp":
                raise Exception("compressed json has to use extension '.gz'!")
            else:
                _, ext = os.path.splitext(root)
                if ext != ".gz":
                    raise Exception(
                        "compressed json has to use extension '.gz'!")
    else:
        root, ext = os.path.splitext(abspath)
        if ext != ".json":
            if ext != ".tmp":
                raise Exception("file extension are not '.json'!")
            else:
                _, ext = os.path.splitext(root)
                if ext != ".json":
                    raise Exception("file extension are not '.json'!")

    msg.show("\nDumping to %s..." % abspath)
    st = time.clock()

    if os.path.exists(abspath):  # if exists, check replace option
        if replace:  # replace existing file
            if fastmode:  # no sort and indent, do the fastest dumping
                if compress:
                    with gzip.open(abspath, "wb") as f:
                        f.write(json.dumps(js).encode("utf-8"))
                else:
                    with open(abspath, "wb") as f:
                        f.write(json.dumps(js).encode("utf-8"))
            else:
                if compress:
                    with gzip.open(abspath, "wb") as f:
                        f.write(json.dumps(
                            js, sort_keys=True, indent=4,
                            separators=(",", ": ")).encode("utf-8"))
                else:
                    with open(abspath, "wb") as f:
                        # original passed ``f`` as a spurious second
                        # positional argument to json.dumps; removed
                        f.write(json.dumps(
                            js, sort_keys=True, indent=4,
                            separators=(",", ": ")).encode("utf-8"))
        else:  # stop, print error message
            raise Exception("\tCANNOT WRITE to %s, it's already "
                            "exists" % abspath)
    else:  # if not exists, just write to it
        if fastmode:  # no sort and indent, do the fastest dumping
            if compress:
                with gzip.open(abspath, "wb") as f:
                    f.write(json.dumps(js).encode("utf-8"))
            else:
                with open(abspath, "wb") as f:
                    f.write(json.dumps(js).encode("utf-8"))
        else:
            if compress:
                with gzip.open(abspath, "wb") as f:
                    f.write(json.dumps(
                        js, sort_keys=True, indent=4,
                        separators=(",", ": ")).encode("utf-8"))
            else:
                with open(abspath, "wb") as f:
                    f.write(json.dumps(
                        js, sort_keys=True, indent=4,
                        separators=(",", ": ")).encode("utf-8"))

    msg.show("    Complete! Elapse %.6f sec" % (time.clock() - st))
Dump Json serializable object to file. Provides multiple choice to customize the behavior. :param js: Serializable python object. :type js: dict or list :param abspath: ``save as`` path, file extension has to be ``.json`` or ``.gz`` (for compressed json). :type abspath: string :param fastmode: (default False) If ``True``, then dumping json without sorted keys and pretty indent, and it's faster and smaller in size. :type fastmode: boolean :param replace: (default False) If ``True``, when you dump json to a existing path, it silently overwrite it. If False, an exception will be raised. Default False setting is to prevent overwrite file by mistake. :type replace: boolean :param compress: (default False) If ``True``, use GNU program gzip to compress the json file. Disk usage can be greatly reduced. But you have to use :func:`load_js(abspath, compress=True)<load_js>` in loading. :type compress: boolean :param enable_verbose: (default True) Trigger for message. :type enable_verbose: boolean Usage:: >>> from weatherlab.lib.dataIO.js import dump_js >>> js = {"a": 1, "b": 2} >>> dump_js(js, "test.json", replace=True) Dumping to test.json... Complete! Elapse 0.002432 sec **中文文档** 将Python中可被序列化的"字典", "列表"以及他们的组合, 按照Json的编码方式写入文件 文件 参数列表 :param js: 可Json化的Python对象 :type js: ``字典`` 或 ``列表`` :param abspath: 写入文件的路径。扩展名必须为``.json``或``.gz``, 其中gz用于被压 缩的Json :type abspath: ``字符串`` :param fastmode: (默认 False) 当为``True``时, Json编码时不对Key进行排序, 也不 进行缩进排版。这样做写入的速度更快, 文件的大小也更小。 :type fastmode: "布尔值" :param replace: (默认 False) 当为``True``时, 如果写入路径已经存在, 则会自动覆盖 原文件。而为``False``时, 则会抛出异常。防止误操作覆盖源文件。 :type replace: "布尔值" :param compress: (默认 False) 当为``True``时, 使用开源压缩标准gzip压缩Json文件。 通常能让文件大小缩小10-20倍不等。如要读取文件, 则需要使用函数 :func:`load_js(abspath, compress=True)<load_js>`. :type compress: "布尔值" :param enable_verbose: (默认 True) 是否打开信息提示开关, 批处理时建议关闭. :type enable_verbose: "布尔值"
Below is the instruction that describes the task: ### Input: Dump Json serializable object to file. Provides multiple choice to customize the behavior. :param js: Serializable python object. :type js: dict or list :param abspath: ``save as`` path, file extension has to be ``.json`` or ``.gz`` (for compressed json). :type abspath: string :param fastmode: (default False) If ``True``, then dumping json without sorted keys and pretty indent, and it's faster and smaller in size. :type fastmode: boolean :param replace: (default False) If ``True``, when you dump json to a existing path, it silently overwrite it. If False, an exception will be raised. Default False setting is to prevent overwrite file by mistake. :type replace: boolean :param compress: (default False) If ``True``, use GNU program gzip to compress the json file. Disk usage can be greatly reduced. But you have to use :func:`load_js(abspath, compress=True)<load_js>` in loading. :type compress: boolean :param enable_verbose: (default True) Trigger for message. :type enable_verbose: boolean Usage:: >>> from weatherlab.lib.dataIO.js import dump_js >>> js = {"a": 1, "b": 2} >>> dump_js(js, "test.json", replace=True) Dumping to test.json... Complete! Elapse 0.002432 sec **中文文档** 将Python中可被序列化的"字典", "列表"以及他们的组合, 按照Json的编码方式写入文件 文件 参数列表 :param js: 可Json化的Python对象 :type js: ``字典`` 或 ``列表`` :param abspath: 写入文件的路径。扩展名必须为``.json``或``.gz``, 其中gz用于被压 缩的Json :type abspath: ``字符串`` :param fastmode: (默认 False) 当为``True``时, Json编码时不对Key进行排序, 也不 进行缩进排版。这样做写入的速度更快, 文件的大小也更小。 :type fastmode: "布尔值" :param replace: (默认 False) 当为``True``时, 如果写入路径已经存在, 则会自动覆盖 原文件。而为``False``时, 则会抛出异常。防止误操作覆盖源文件。 :type replace: "布尔值" :param compress: (默认 False) 当为``True``时, 使用开源压缩标准gzip压缩Json文件。 通常能让文件大小缩小10-20倍不等。如要读取文件, 则需要使用函数 :func:`load_js(abspath, compress=True)<load_js>`. :type compress: "布尔值" :param enable_verbose: (默认 True) 是否打开信息提示开关, 批处理时建议关闭. 
:type enable_verbose: "布尔值" ### Response: def dump_js(js, abspath, precision=None, fastmode=False, replace=False, compress=False, enable_verbose=True): """Dump Json serializable object to file. Provides multiple choice to customize the behavior. :param js: Serializable python object. :type js: dict or list :param abspath: ``save as`` path, file extension has to be ``.json`` or ``.gz`` (for compressed json). :type abspath: string :param fastmode: (default False) If ``True``, then dumping json without sorted keys and pretty indent, and it's faster and smaller in size. :type fastmode: boolean :param replace: (default False) If ``True``, when you dump json to a existing path, it silently overwrite it. If False, an exception will be raised. Default False setting is to prevent overwrite file by mistake. :type replace: boolean :param compress: (default False) If ``True``, use GNU program gzip to compress the json file. Disk usage can be greatly reduced. But you have to use :func:`load_js(abspath, compress=True)<load_js>` in loading. :type compress: boolean :param enable_verbose: (default True) Trigger for message. :type enable_verbose: boolean Usage:: >>> from weatherlab.lib.dataIO.js import dump_js >>> js = {"a": 1, "b": 2} >>> dump_js(js, "test.json", replace=True) Dumping to test.json... Complete! Elapse 0.002432 sec **中文文档** 将Python中可被序列化的"字典", "列表"以及他们的组合, 按照Json的编码方式写入文件 文件 参数列表 :param js: 可Json化的Python对象 :type js: ``字典`` 或 ``列表`` :param abspath: 写入文件的路径。扩展名必须为``.json``或``.gz``, 其中gz用于被压 缩的Json :type abspath: ``字符串`` :param fastmode: (默认 False) 当为``True``时, Json编码时不对Key进行排序, 也不 进行缩进排版。这样做写入的速度更快, 文件的大小也更小。 :type fastmode: "布尔值" :param replace: (默认 False) 当为``True``时, 如果写入路径已经存在, 则会自动覆盖 原文件。而为``False``时, 则会抛出异常。防止误操作覆盖源文件。 :type replace: "布尔值" :param compress: (默认 False) 当为``True``时, 使用开源压缩标准gzip压缩Json文件。 通常能让文件大小缩小10-20倍不等。如要读取文件, 则需要使用函数 :func:`load_js(abspath, compress=True)<load_js>`. :type compress: "布尔值" :param enable_verbose: (默认 True) 是否打开信息提示开关, 批处理时建议关闭. 
:type enable_verbose: "布尔值" """ abspath = str(abspath) # try stringlize if precision is not None: encoder.FLOAT_REPR = lambda x: format(x, ".%sf" % precision) msg = Messenger(enable_verbose=enable_verbose) if compress: # check extension name root, ext = os.path.splitext(abspath) if ext != ".gz": if ext != ".tmp": raise Exception("compressed json has to use extension '.gz'!") else: _, ext = os.path.splitext(root) if ext != ".gz": raise Exception( "compressed json has to use extension '.gz'!") else: root, ext = os.path.splitext(abspath) if ext != ".json": if ext != ".tmp": raise Exception("file extension are not '.json'!") else: _, ext = os.path.splitext(root) if ext != ".json": raise Exception("file extension are not '.json'!") msg.show("\nDumping to %s..." % abspath) st = time.clock() if os.path.exists(abspath): # if exists, check replace option if replace: # replace existing file if fastmode: # no sort and indent, do the fastest dumping if compress: with gzip.open(abspath, "wb") as f: f.write(json.dumps(js).encode("utf-8")) else: with open(abspath, "wb") as f: f.write(json.dumps(js).encode("utf-8")) else: if compress: with gzip.open(abspath, "wb") as f: f.write(json.dumps(js, sort_keys=True, indent=4, separators=(",", ": ")).encode("utf-8")) else: with open(abspath, "wb") as f: f.write(json.dumps(js, f, sort_keys=True, indent=4, separators=(",", ": ")).encode("utf-8")) else: # stop, print error message raise Exception("\tCANNOT WRITE to %s, it's already " "exists" % abspath) else: # if not exists, just write to it if fastmode: # no sort and indent, do the fastest dumping if compress: with gzip.open(abspath, "wb") as f: f.write(json.dumps(js).encode("utf-8")) else: with open(abspath, "wb") as f: f.write(json.dumps(js).encode("utf-8")) else: if compress: with gzip.open(abspath, "wb") as f: f.write(json.dumps(js, sort_keys=True, indent=4, separators=(",", ": ")).encode("utf-8")) else: with open(abspath, "wb") as f: f.write(json.dumps(js, sort_keys=True, indent=4, 
separators=(",", ": ")).encode("utf-8")) msg.show(" Complete! Elapse %.6f sec" % (time.clock() - st))
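Stripped of the extension checks, replace logic, and Messenger output, the compressed branch of `dump_js` reduces to gzip-wrapping the encoded JSON text. A minimal standalone sketch of that round trip (the helper names are illustrative, not part of the original module):

```python
import gzip
import json

def dump_json_gz(obj, path):
    # Pretty-printed, key-sorted JSON, gzip-compressed on disk.
    text = json.dumps(obj, sort_keys=True, indent=4, separators=(",", ": "))
    with gzip.open(path, "wb") as f:
        f.write(text.encode("utf-8"))

def load_json_gz(path):
    # Inverse operation, mirroring load_js(abspath, compress=True).
    with gzip.open(path, "rb") as f:
        return json.loads(f.read().decode("utf-8"))
```

Round-tripping a small dict through these two helpers returns an equal object, which is the invariant the `compress=True` path relies on.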
def server_info(self): """ Query information about the server. """ response = self._post(self.apiurl + "/v2/server/info", data={'apikey': self.apikey}) return self._raise_or_extract(response)
Query information about the server.
Below is the instruction that describes the task: ### Input: Query information about the server. ### Response: def server_info(self): """ Query information about the server. """ response = self._post(self.apiurl + "/v2/server/info", data={'apikey': self.apikey}) return self._raise_or_extract(response)
def print_result(self, directory_result): """ Print out the contents of the directory result, using ANSI color codes if supported """ for file_name, line_results_dict in directory_result.iter_line_results_items(): full_path = path.join(directory_result.directory_path, file_name) self.write(full_path, 'green') self.write('\n') for line_number, line_results in sorted(line_results_dict.items()): self.write('%s: ' % (line_results[0].line_number)) out = list(line_results[0].left_of_group + line_results[0].group + line_results[0].right_of_group) offset = 0 for line_result in line_results: group_length = len(line_result.group) out.insert(offset+line_result.left_offset-1, self.colors['blue']) out.insert(offset+line_result.left_offset+group_length, self.colors['end']) offset += group_length + 1 self.write(''.join(out)+'\n') self.write('\n')
Print out the contents of the directory result, using ANSI color codes if supported
Below is the instruction that describes the task: ### Input: Print out the contents of the directory result, using ANSI color codes if supported ### Response: def print_result(self, directory_result): """ Print out the contents of the directory result, using ANSI color codes if supported """ for file_name, line_results_dict in directory_result.iter_line_results_items(): full_path = path.join(directory_result.directory_path, file_name) self.write(full_path, 'green') self.write('\n') for line_number, line_results in sorted(line_results_dict.items()): self.write('%s: ' % (line_results[0].line_number)) out = list(line_results[0].left_of_group + line_results[0].group + line_results[0].right_of_group) offset = 0 for line_result in line_results: group_length = len(line_result.group) out.insert(offset+line_result.left_offset-1, self.colors['blue']) out.insert(offset+line_result.left_offset+group_length, self.colors['end']) offset += group_length + 1 self.write(''.join(out)+'\n') self.write('\n')
def integrate(self,t,pot=None,method='test-particle', **kwargs): """ NAME: integrate PURPOSE: integrate the snapshot in time INPUT: t - numpy.array of times to save the snapshots at (must start at 0) pot= potential object or list of such objects (default=None) method= method to use ('test-particle' or 'direct-python' for now) OUTPUT: list of snapshots at times t HISTORY: 2011-02-02 - Written - Bovy (NYU) """ if method.lower() == 'test-particle': return self._integrate_test_particle(t,pot) elif method.lower() == 'direct-python': return self._integrate_direct_python(t,pot,**kwargs)
NAME: integrate PURPOSE: integrate the snapshot in time INPUT: t - numpy.array of times to save the snapshots at (must start at 0) pot= potential object or list of such objects (default=None) method= method to use ('test-particle' or 'direct-python' for now) OUTPUT: list of snapshots at times t HISTORY: 2011-02-02 - Written - Bovy (NYU)
Below is the instruction that describes the task: ### Input: NAME: integrate PURPOSE: integrate the snapshot in time INPUT: t - numpy.array of times to save the snapshots at (must start at 0) pot= potential object or list of such objects (default=None) method= method to use ('test-particle' or 'direct-python' for now) OUTPUT: list of snapshots at times t HISTORY: 2011-02-02 - Written - Bovy (NYU) ### Response: def integrate(self,t,pot=None,method='test-particle', **kwargs): """ NAME: integrate PURPOSE: integrate the snapshot in time INPUT: t - numpy.array of times to save the snapshots at (must start at 0) pot= potential object or list of such objects (default=None) method= method to use ('test-particle' or 'direct-python' for now) OUTPUT: list of snapshots at times t HISTORY: 2011-02-02 - Written - Bovy (NYU) """ if method.lower() == 'test-particle': return self._integrate_test_particle(t,pot) elif method.lower() == 'direct-python': return self._integrate_direct_python(t,pot,**kwargs)
def _parse_host(cls, host='localhost', port=0): """ Parse provided hostname and extract port number :param host: Server hostname :type host: string :param port: Server port :return: Tuple of (host, port) :rtype: tuple """ if not port and (host.find(':') == host.rfind(':')): i = host.rfind(':') if i >= 0: host, port = host[:i], host[i + 1:] try: port = int(port) except ValueError: raise OSError('nonnumeric port') return host, port
Parse provided hostname and extract port number :param host: Server hostname :type host: string :param port: Server port :return: Tuple of (host, port) :rtype: tuple
Below is the instruction that describes the task: ### Input: Parse provided hostname and extract port number :param host: Server hostname :type host: string :param port: Server port :return: Tuple of (host, port) :rtype: tuple ### Response: def _parse_host(cls, host='localhost', port=0): """ Parse provided hostname and extract port number :param host: Server hostname :type host: string :param port: Server port :return: Tuple of (host, port) :rtype: tuple """ if not port and (host.find(':') == host.rfind(':')): i = host.rfind(':') if i >= 0: host, port = host[:i], host[i + 1:] try: port = int(port) except ValueError: raise OSError('nonnumeric port') return host, port
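The split logic in `_parse_host` can be lifted into a standalone sketch; note the `find(':') == rfind(':')` guard only splits when there is at most one colon, so IPv6-style literals with multiple colons pass through untouched. This is an illustrative copy, not the original classmethod:

```python
def parse_host(host='localhost', port=0):
    # Standalone version of the _parse_host logic above, for illustration.
    if not port and (host.find(':') == host.rfind(':')):
        i = host.rfind(':')
        if i >= 0:
            host, port = host[:i], host[i + 1:]
        try:
            port = int(port)
        except ValueError:
            raise OSError('nonnumeric port')
    return host, port

parse_host('localhost:8080')  # → ('localhost', 8080)
```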
def rosenbrock(theta): """Objective and gradient for the rosenbrock function""" x, y = theta obj = (1 - x)**2 + 100 * (y - x**2)**2 grad = np.zeros(2) grad[0] = 2 * x - 400 * (x * y - x**3) - 2 grad[1] = 200 * (y - x**2) return obj, grad
Objective and gradient for the rosenbrock function
Below is the instruction that describes the task: ### Input: Objective and gradient for the rosenbrock function ### Response: def rosenbrock(theta): """Objective and gradient for the rosenbrock function""" x, y = theta obj = (1 - x)**2 + 100 * (y - x**2)**2 grad = np.zeros(2) grad[0] = 2 * x - 400 * (x * y - x**3) - 2 grad[1] = 200 * (y - x**2) return obj, grad
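The analytic gradient in the `rosenbrock` record can be sanity-checked: at the known minimum (1, 1) both the objective and gradient vanish, and a finite-difference quotient should agree with `grad` elsewhere. A self-contained check (the function is repeated here so the snippet runs on its own):

```python
import numpy as np

def rosenbrock(theta):
    """Objective and gradient for the Rosenbrock function (as above)."""
    x, y = theta
    obj = (1 - x)**2 + 100 * (y - x**2)**2
    grad = np.zeros(2)
    grad[0] = 2 * x - 400 * (x * y - x**3) - 2
    grad[1] = 200 * (y - x**2)
    return obj, grad

# At the global minimum (1, 1): obj == 0.0 and grad == [0, 0].
obj, grad = rosenbrock([1.0, 1.0])

# Forward-difference check of the gradient at an arbitrary point.
theta = np.array([0.5, -0.3])
o0, g = rosenbrock(theta)
eps = 1e-6
num = [(rosenbrock(theta + eps * np.eye(2)[k])[0] - o0) / eps for k in range(2)]
```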
def repolist(status='', media=None): """ Get the list of ``yum`` repositories. Returns enabled repositories by default. Extra *status* may be passed to list disabled repositories if necessary. Media and debug repositories are kept disabled, except if you pass *media*. :: import burlap # Install a package that may be included in disabled repositories burlap.rpm.install('vim', burlap.rpm.repolist('disabled')) """ manager = MANAGER with settings(hide('running', 'stdout')): if media: repos = run_as_root("%(manager)s repolist %(status)s | sed '$d' | sed -n '/repo id/,$p'" % locals()) else: repos = run_as_root("%(manager)s repolist %(status)s | sed '/Media\\|Debug/d' | sed '$d' | sed -n '/repo id/,$p'" % locals()) return [line.split(' ')[0] for line in repos.splitlines()[1:]]
Get the list of ``yum`` repositories. Returns enabled repositories by default. Extra *status* may be passed to list disabled repositories if necessary. Media and debug repositories are kept disabled, except if you pass *media*. :: import burlap # Install a package that may be included in disabled repositories burlap.rpm.install('vim', burlap.rpm.repolist('disabled'))
Below is the instruction that describes the task: ### Input: Get the list of ``yum`` repositories. Returns enabled repositories by default. Extra *status* may be passed to list disabled repositories if necessary. Media and debug repositories are kept disabled, except if you pass *media*. :: import burlap # Install a package that may be included in disabled repositories burlap.rpm.install('vim', burlap.rpm.repolist('disabled')) ### Response: def repolist(status='', media=None): """ Get the list of ``yum`` repositories. Returns enabled repositories by default. Extra *status* may be passed to list disabled repositories if necessary. Media and debug repositories are kept disabled, except if you pass *media*. :: import burlap # Install a package that may be included in disabled repositories burlap.rpm.install('vim', burlap.rpm.repolist('disabled')) """ manager = MANAGER with settings(hide('running', 'stdout')): if media: repos = run_as_root("%(manager)s repolist %(status)s | sed '$d' | sed -n '/repo id/,$p'" % locals()) else: repos = run_as_root("%(manager)s repolist %(status)s | sed '/Media\\|Debug/d' | sed '$d' | sed -n '/repo id/,$p'" % locals()) return [line.split(' ')[0] for line in repos.splitlines()[1:]]
def get_job_results(self, job_resource_name: str) -> List[TrialResult]: """Returns the actual results (not metadata) of a completed job. Params: job_resource_name: A string of the form `projects/project_id/programs/program_id/jobs/job_id`. Returns: An iterable over the TrialResult, one per parameter in the parameter sweep. """ response = self.service.projects().programs().jobs().getResult( parent=job_resource_name).execute() trial_results = [] for sweep_result in response['result']['sweepResults']: sweep_repetitions = sweep_result['repetitions'] key_sizes = [(m['key'], len(m['qubits'])) for m in sweep_result['measurementKeys']] for result in sweep_result['parameterizedResults']: data = base64.standard_b64decode(result['measurementResults']) measurements = unpack_results(data, sweep_repetitions, key_sizes) trial_results.append(TrialResult( params=ParamResolver( result.get('params', {}).get('assignments', {})), repetitions=sweep_repetitions, measurements=measurements)) return trial_results
Returns the actual results (not metadata) of a completed job. Params: job_resource_name: A string of the form `projects/project_id/programs/program_id/jobs/job_id`. Returns: An iterable over the TrialResult, one per parameter in the parameter sweep.
Below is the instruction that describes the task: ### Input: Returns the actual results (not metadata) of a completed job. Params: job_resource_name: A string of the form `projects/project_id/programs/program_id/jobs/job_id`. Returns: An iterable over the TrialResult, one per parameter in the parameter sweep. ### Response: def get_job_results(self, job_resource_name: str) -> List[TrialResult]: """Returns the actual results (not metadata) of a completed job. Params: job_resource_name: A string of the form `projects/project_id/programs/program_id/jobs/job_id`. Returns: An iterable over the TrialResult, one per parameter in the parameter sweep. """ response = self.service.projects().programs().jobs().getResult( parent=job_resource_name).execute() trial_results = [] for sweep_result in response['result']['sweepResults']: sweep_repetitions = sweep_result['repetitions'] key_sizes = [(m['key'], len(m['qubits'])) for m in sweep_result['measurementKeys']] for result in sweep_result['parameterizedResults']: data = base64.standard_b64decode(result['measurementResults']) measurements = unpack_results(data, sweep_repetitions, key_sizes) trial_results.append(TrialResult( params=ParamResolver( result.get('params', {}).get('assignments', {})), repetitions=sweep_repetitions, measurements=measurements)) return trial_results
def run(filename: 'python file generated by `rbnf` command, or rbnf sour file', opt: 'optimize switch' = False): """ You can apply immediate tests on your parser. P.S: use `--opt` option takes longer starting time. """ from rbnf.easy import build_parser import importlib.util import traceback full_path = Path(filename) base, ext = os.path.splitext(str(full_path)) full_path_str = str(full_path) if not ext: if full_path.into('.py').exists(): full_path_str = base + '.py' elif Path(base).into('.rbnf'): full_path_str = base + '.rbnf' if full_path_str[-3:].lower() != '.py': with Path(full_path_str).open('r') as fr: ze_exp = ze.compile(fr.read(), filename=full_path_str) lang = ze_exp.lang else: spec = importlib.util.spec_from_file_location("runbnf", full_path_str) mod = importlib.util.module_from_spec(spec) spec.loader.exec_module(mod) try: lang = next(each for each in mod.__dict__.values() if isinstance(each, Language)) except StopIteration: raise NameError("Found no language in {}".format(full_path_str)) parse = build_parser(lang, opt=bool(opt)) namespace = {} print(Purple('type `:i` to switch between python mode and parsing mode.')) print(Purple('The last result of parsing is stored as symbol `res`.')) while True: inp = input('runbnf> ') if not inp.strip(): continue elif inp.strip() == 'exit': break if inp.strip() == ':i': while True: inp = input('python> ') if inp.strip() == ':i': break try: try: res = eval(inp, namespace) if res is not None: print(res) namespace['_'] = res except SyntaxError: exec(inp, namespace) except Exception: traceback.print_exc() else: res = namespace['res'] = parse(inp) print( LightBlue( 'parsed result = res: ResultDescription{result, tokens, state}' )) print(res.result)
You can apply immediate tests on your parser. P.S: use `--opt` option takes longer starting time.
Below is the instruction that describes the task: ### Input: You can apply immediate tests on your parser. P.S: use `--opt` option takes longer starting time. ### Response: def run(filename: 'python file generated by `rbnf` command, or rbnf sour file', opt: 'optimize switch' = False): """ You can apply immediate tests on your parser. P.S: use `--opt` option takes longer starting time. """ from rbnf.easy import build_parser import importlib.util import traceback full_path = Path(filename) base, ext = os.path.splitext(str(full_path)) full_path_str = str(full_path) if not ext: if full_path.into('.py').exists(): full_path_str = base + '.py' elif Path(base).into('.rbnf'): full_path_str = base + '.rbnf' if full_path_str[-3:].lower() != '.py': with Path(full_path_str).open('r') as fr: ze_exp = ze.compile(fr.read(), filename=full_path_str) lang = ze_exp.lang else: spec = importlib.util.spec_from_file_location("runbnf", full_path_str) mod = importlib.util.module_from_spec(spec) spec.loader.exec_module(mod) try: lang = next(each for each in mod.__dict__.values() if isinstance(each, Language)) except StopIteration: raise NameError("Found no language in {}".format(full_path_str)) parse = build_parser(lang, opt=bool(opt)) namespace = {} print(Purple('type `:i` to switch between python mode and parsing mode.')) print(Purple('The last result of parsing is stored as symbol `res`.')) while True: inp = input('runbnf> ') if not inp.strip(): continue elif inp.strip() == 'exit': break if inp.strip() == ':i': while True: inp = input('python> ') if inp.strip() == ':i': break try: try: res = eval(inp, namespace) if res is not None: print(res) namespace['_'] = res except SyntaxError: exec(inp, namespace) except Exception: traceback.print_exc() else: res = namespace['res'] = parse(inp) print( LightBlue( 'parsed result = res: ResultDescription{result, tokens, state}' )) print(res.result)
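The nested `try` in the `python>` loop of the `run` record is a common mini-REPL idiom: attempt the input as an expression with `eval`, and on `SyntaxError` fall back to `exec` for statements. A standalone sketch of just that dispatch (the helper name is illustrative):

```python
def run_line(src, namespace):
    # Expressions are evaluated and remembered as `_`; statements are executed.
    try:
        result = eval(src, namespace)
    except SyntaxError:
        exec(src, namespace)
        return None
    if result is not None:
        namespace['_'] = result
    return result
```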
def unbounded(self): """ Returns a list of key dimensions that are unbounded, excluding stream parameters. If any of theses key dimensions are unbounded, the DynamicMap as a whole is also unbounded. """ unbounded_dims = [] # Dimensioned streams do not need to be bounded stream_params = set(util.stream_parameters(self.streams)) for kdim in self.kdims: if str(kdim) in stream_params: continue if kdim.values: continue if None in kdim.range: unbounded_dims.append(str(kdim)) return unbounded_dims
Returns a list of key dimensions that are unbounded, excluding stream parameters. If any of theses key dimensions are unbounded, the DynamicMap as a whole is also unbounded.
Below is the instruction that describes the task: ### Input: Returns a list of key dimensions that are unbounded, excluding stream parameters. If any of theses key dimensions are unbounded, the DynamicMap as a whole is also unbounded. ### Response: def unbounded(self): """ Returns a list of key dimensions that are unbounded, excluding stream parameters. If any of theses key dimensions are unbounded, the DynamicMap as a whole is also unbounded. """ unbounded_dims = [] # Dimensioned streams do not need to be bounded stream_params = set(util.stream_parameters(self.streams)) for kdim in self.kdims: if str(kdim) in stream_params: continue if kdim.values: continue if None in kdim.range: unbounded_dims.append(str(kdim)) return unbounded_dims
def parse_directive(lexer: Lexer, is_const: bool) -> DirectiveNode: """Directive[Const]: @ Name Arguments[?Const]?""" start = lexer.token expect_token(lexer, TokenKind.AT) return DirectiveNode( name=parse_name(lexer), arguments=parse_arguments(lexer, is_const), loc=loc(lexer, start), )
Directive[Const]: @ Name Arguments[?Const]?
Below is the instruction that describes the task: ### Input: Directive[Const]: @ Name Arguments[?Const]? ### Response: def parse_directive(lexer: Lexer, is_const: bool) -> DirectiveNode: """Directive[Const]: @ Name Arguments[?Const]?""" start = lexer.token expect_token(lexer, TokenKind.AT) return DirectiveNode( name=parse_name(lexer), arguments=parse_arguments(lexer, is_const), loc=loc(lexer, start), )
def archive(self, odir=None, aname=None, fmt=None, projectname=None, experiments=None, current_project=False, no_append=False, no_project_paths=False, exclude=None, keep_exp=False, rm_project=False, dry_run=False, dereference=False, **kwargs): """ Archive one or more experiments or a project instance This method may be used to archive experiments in order to minimize the amount of necessary configuration files Parameters ---------- odir: str The path where to store the archive aname: str The name of the archive (minus any format-specific extension). If None, defaults to the projectname fmt: { 'gztar' | 'bztar' | 'tar' | 'zip' } The format of the archive. If None, it is tested whether an archived with the name specified by `aname` already exists and if yes, the format is inferred, otherwise ``'tar'`` is used projectname: str If provided, the entire project is archived experiments: str If provided, the given experiments are archived. Note that an error is raised if they belong to multiple project instances current_project: bool If True, `projectname` is set to the current project no_append: bool It True and the archive already exists, it is deleted no_project_paths: bool If True, paths outside the experiment directories are neglected exclude: list of str Filename patterns to ignore (see :func:`glob.fnmatch.fnmatch`) keep_exp: bool If True, the experiment directories are not removed and no modification is made in the configuration rm_project: bool If True, remove all the project files dry_run: bool If True, set, do not actually make anything dereference: bool If set, dereference symbolic links. 
Note: This is automatically set for ``fmt=='zip'`` """ fnmatch = glob.fnmatch.fnmatch def to_exclude(fname): if exclude and (fnmatch(exclude, fname) or fnmatch(exclude, osp.basename(fname))): return True def do_nothing(path, file_obj): return def tar_add(path, file_obj): if sys.version_info[:2] < (3, 7): file_obj.add(path, self.relpath(path), exclude=to_exclude) else: file_obj.add(path, self.relpath(path), filter=lambda f: None if to_exclude(f) else f) def zip_add(path, file_obj): # ziph is zipfile handle for root, dirs, files in os.walk(path): for f in files: abs_file = os.path.join(root, f) if not to_exclude(abs_file): file_obj.write(abs_file, self.relpath(abs_file)) self.app_main(**kwargs) logger = self.logger all_exps = self.config.experiments if current_project or projectname is not None: if current_project: projectname = self.projectname experiments = list( self.config.experiments.project_map[projectname]) if not experiments: raise ValueError( "Could not find any unarchived experiment for %s" % ( projectname)) elif experiments is None: experiments = [self.experiment] already_archived = list(filter(self.is_archived, experiments)) if already_archived: raise ValueError( "The experiments %s have already been archived or are not " "existent!" 
% ', '.join( already_archived)) if projectname is None: projectnames = {all_exps[exp]['project'] for exp in experiments} if len(projectnames) > 1: raise ValueError( "Experiments belong to multiple projects: %s" % ( ', '.join(projectnames))) projectname = next(iter(projectnames)) self.projectname = projectname self.experiment = experiments[-1] exps2archive = OrderedDict( (exp, all_exps[exp]) for exp in experiments) project_config = self.config.projects[projectname] ext_map, fmt_map = self._archive_extensions() if aname is None: aname = projectname if fmt is None: ext, fmt = next( (t for t in fmt_map.items() if osp.exists(aname + t[0])), ['.tar', 'tar']) else: ext = fmt_map[fmt] if odir is None: odir = getcwd() archive_name = osp.join(odir, aname + ext) exists = osp.exists(archive_name) if exists and no_append: logger.debug('Removing existing archive %s' % archive_name) os.remove(archive_name) exists = False elif exists and fmt not in ['tar', 'zip']: raise ValueError( "'Cannot append to %s because this is only possible for 'tar' " "and 'zip' extension. 
Not %s" % (archive_name, fmt)) logger.info('Archiving to %s', archive_name) paths = self._get_all_paths(exps2archive) root_dir = self.config.projects[projectname]['root'] check_path = partial(utils.dir_contains, root_dir) not_included = OrderedDict([ (key, list(filterfalse(check_path, utils.safe_list(val)))) for key, val in paths.items()]) for key, key_paths in not_included.items(): for p in key_paths: logger.warn( '%s for key %s lies outside the project directory and ' 'will not be included in the archive!', p, key) modes = {'bztar': 'w:bz2', 'gztar': 'w:gz', 'tar': 'w', 'zip': 'w'} mode = 'a' if exists else modes[fmt] atype = 'zip' if fmt == 'zip' else 'tar' if dry_run: add_dir = do_nothing file_obj = None elif atype == 'zip': import zipfile add_dir = zip_add file_obj = zipfile.ZipFile(archive_name, mode) else: import tarfile add_dir = tar_add file_obj = tarfile.open(archive_name, mode, dereference=dereference) for exp in experiments: exp_dir = exps2archive[exp]['expdir'] logger.debug('Adding %s', exp_dir) add_dir(exp_dir, file_obj) now = str(dt.datetime.now()) # current time # configuration directory config_dir = osp.join(root_dir, '.project') if not dry_run and not osp.exists(config_dir): os.makedirs(config_dir) for exp in experiments: conf_file = osp.join(config_dir, exp + '.yml') logger.debug('Store %s experiment config to %s', exp, conf_file) if not dry_run: exps2archive[exp].setdefault('timestamps', {}) exps2archive[exp]['timestamps']['archive'] = now with open(osp.join(config_dir, exp + '.yml'), 'w') as f: ordered_yaml_dump(self.rel_paths( copy.deepcopy(exps2archive[exp])), f) # project configuration file conf_file = osp.join(config_dir, '.project.yml') logger.debug('Store %s project config to %s', projectname, conf_file) if not dry_run: safe_dump(project_config, conf_file) logger.debug('Add %s to archive', config_dir) add_dir(config_dir, file_obj) if not no_project_paths: for dirname in os.listdir(root_dir): if osp.basename(dirname) not in ['experiments', 
'.project']: logger.debug('Adding %s', osp.join(root_dir, dirname)) add_dir(osp.join(root_dir, dirname), file_obj) if not keep_exp: for exp in experiments: exp_dir = exps2archive[exp]['expdir'] logger.debug('Removing %s', exp_dir) if not dry_run: all_exps[exp] = a = Archive(archive_name) a.project = projectname a.time = now shutil.rmtree(exp_dir) if rm_project: logger.debug('Removing %s', root_dir) if not dry_run: shutil.rmtree(root_dir) if not dry_run: file_obj.close()
Archive one or more experiments or a project instance This method may be used to archive experiments in order to minimize the amount of necessary configuration files Parameters ---------- odir: str The path where to store the archive aname: str The name of the archive (minus any format-specific extension). If None, defaults to the projectname fmt: { 'gztar' | 'bztar' | 'tar' | 'zip' } The format of the archive. If None, it is tested whether an archived with the name specified by `aname` already exists and if yes, the format is inferred, otherwise ``'tar'`` is used projectname: str If provided, the entire project is archived experiments: str If provided, the given experiments are archived. Note that an error is raised if they belong to multiple project instances current_project: bool If True, `projectname` is set to the current project no_append: bool It True and the archive already exists, it is deleted no_project_paths: bool If True, paths outside the experiment directories are neglected exclude: list of str Filename patterns to ignore (see :func:`glob.fnmatch.fnmatch`) keep_exp: bool If True, the experiment directories are not removed and no modification is made in the configuration rm_project: bool If True, remove all the project files dry_run: bool If True, set, do not actually make anything dereference: bool If set, dereference symbolic links. Note: This is automatically set for ``fmt=='zip'``
Below is the instruction that describes the task: ### Input: Archive one or more experiments or a project instance This method may be used to archive experiments in order to minimize the amount of necessary configuration files Parameters ---------- odir: str The path where to store the archive aname: str The name of the archive (minus any format-specific extension). If None, defaults to the projectname fmt: { 'gztar' | 'bztar' | 'tar' | 'zip' } The format of the archive. If None, it is tested whether an archived with the name specified by `aname` already exists and if yes, the format is inferred, otherwise ``'tar'`` is used projectname: str If provided, the entire project is archived experiments: str If provided, the given experiments are archived. Note that an error is raised if they belong to multiple project instances current_project: bool If True, `projectname` is set to the current project no_append: bool It True and the archive already exists, it is deleted no_project_paths: bool If True, paths outside the experiment directories are neglected exclude: list of str Filename patterns to ignore (see :func:`glob.fnmatch.fnmatch`) keep_exp: bool If True, the experiment directories are not removed and no modification is made in the configuration rm_project: bool If True, remove all the project files dry_run: bool If True, set, do not actually make anything dereference: bool If set, dereference symbolic links. 
Note: This is automatically set for ``fmt=='zip'`` ### Response: def archive(self, odir=None, aname=None, fmt=None, projectname=None, experiments=None, current_project=False, no_append=False, no_project_paths=False, exclude=None, keep_exp=False, rm_project=False, dry_run=False, dereference=False, **kwargs): """ Archive one or more experiments or a project instance This method may be used to archive experiments in order to minimize the amount of necessary configuration files Parameters ---------- odir: str The path where to store the archive aname: str The name of the archive (minus any format-specific extension). If None, defaults to the projectname fmt: { 'gztar' | 'bztar' | 'tar' | 'zip' } The format of the archive. If None, it is tested whether an archived with the name specified by `aname` already exists and if yes, the format is inferred, otherwise ``'tar'`` is used projectname: str If provided, the entire project is archived experiments: str If provided, the given experiments are archived. Note that an error is raised if they belong to multiple project instances current_project: bool If True, `projectname` is set to the current project no_append: bool It True and the archive already exists, it is deleted no_project_paths: bool If True, paths outside the experiment directories are neglected exclude: list of str Filename patterns to ignore (see :func:`glob.fnmatch.fnmatch`) keep_exp: bool If True, the experiment directories are not removed and no modification is made in the configuration rm_project: bool If True, remove all the project files dry_run: bool If True, set, do not actually make anything dereference: bool If set, dereference symbolic links. 
Note: This is automatically set for ``fmt=='zip'`` """ fnmatch = glob.fnmatch.fnmatch def to_exclude(fname): if exclude and (fnmatch(exclude, fname) or fnmatch(exclude, osp.basename(fname))): return True def do_nothing(path, file_obj): return def tar_add(path, file_obj): if sys.version_info[:2] < (3, 7): file_obj.add(path, self.relpath(path), exclude=to_exclude) else: file_obj.add(path, self.relpath(path), filter=lambda f: None if to_exclude(f) else f) def zip_add(path, file_obj): # ziph is zipfile handle for root, dirs, files in os.walk(path): for f in files: abs_file = os.path.join(root, f) if not to_exclude(abs_file): file_obj.write(abs_file, self.relpath(abs_file)) self.app_main(**kwargs) logger = self.logger all_exps = self.config.experiments if current_project or projectname is not None: if current_project: projectname = self.projectname experiments = list( self.config.experiments.project_map[projectname]) if not experiments: raise ValueError( "Could not find any unarchived experiment for %s" % ( projectname)) elif experiments is None: experiments = [self.experiment] already_archived = list(filter(self.is_archived, experiments)) if already_archived: raise ValueError( "The experiments %s have already been archived or are not " "existent!" 
% ', '.join( already_archived)) if projectname is None: projectnames = {all_exps[exp]['project'] for exp in experiments} if len(projectnames) > 1: raise ValueError( "Experiments belong to multiple projects: %s" % ( ', '.join(projectnames))) projectname = next(iter(projectnames)) self.projectname = projectname self.experiment = experiments[-1] exps2archive = OrderedDict( (exp, all_exps[exp]) for exp in experiments) project_config = self.config.projects[projectname] ext_map, fmt_map = self._archive_extensions() if aname is None: aname = projectname if fmt is None: ext, fmt = next( (t for t in fmt_map.items() if osp.exists(aname + t[0])), ['.tar', 'tar']) else: ext = fmt_map[fmt] if odir is None: odir = getcwd() archive_name = osp.join(odir, aname + ext) exists = osp.exists(archive_name) if exists and no_append: logger.debug('Removing existing archive %s' % archive_name) os.remove(archive_name) exists = False elif exists and fmt not in ['tar', 'zip']: raise ValueError( "'Cannot append to %s because this is only possible for 'tar' " "and 'zip' extension. 
Not %s" % (archive_name, fmt)) logger.info('Archiving to %s', archive_name) paths = self._get_all_paths(exps2archive) root_dir = self.config.projects[projectname]['root'] check_path = partial(utils.dir_contains, root_dir) not_included = OrderedDict([ (key, list(filterfalse(check_path, utils.safe_list(val)))) for key, val in paths.items()]) for key, key_paths in not_included.items(): for p in key_paths: logger.warn( '%s for key %s lies outside the project directory and ' 'will not be included in the archive!', p, key) modes = {'bztar': 'w:bz2', 'gztar': 'w:gz', 'tar': 'w', 'zip': 'w'} mode = 'a' if exists else modes[fmt] atype = 'zip' if fmt == 'zip' else 'tar' if dry_run: add_dir = do_nothing file_obj = None elif atype == 'zip': import zipfile add_dir = zip_add file_obj = zipfile.ZipFile(archive_name, mode) else: import tarfile add_dir = tar_add file_obj = tarfile.open(archive_name, mode, dereference=dereference) for exp in experiments: exp_dir = exps2archive[exp]['expdir'] logger.debug('Adding %s', exp_dir) add_dir(exp_dir, file_obj) now = str(dt.datetime.now()) # current time # configuration directory config_dir = osp.join(root_dir, '.project') if not dry_run and not osp.exists(config_dir): os.makedirs(config_dir) for exp in experiments: conf_file = osp.join(config_dir, exp + '.yml') logger.debug('Store %s experiment config to %s', exp, conf_file) if not dry_run: exps2archive[exp].setdefault('timestamps', {}) exps2archive[exp]['timestamps']['archive'] = now with open(osp.join(config_dir, exp + '.yml'), 'w') as f: ordered_yaml_dump(self.rel_paths( copy.deepcopy(exps2archive[exp])), f) # project configuration file conf_file = osp.join(config_dir, '.project.yml') logger.debug('Store %s project config to %s', projectname, conf_file) if not dry_run: safe_dump(project_config, conf_file) logger.debug('Add %s to archive', config_dir) add_dir(config_dir, file_obj) if not no_project_paths: for dirname in os.listdir(root_dir): if osp.basename(dirname) not in ['experiments', 
'.project']: logger.debug('Adding %s', osp.join(root_dir, dirname)) add_dir(osp.join(root_dir, dirname), file_obj) if not keep_exp: for exp in experiments: exp_dir = exps2archive[exp]['expdir'] logger.debug('Removing %s', exp_dir) if not dry_run: all_exps[exp] = a = Archive(archive_name) a.project = projectname a.time = now shutil.rmtree(exp_dir) if rm_project: logger.debug('Removing %s', root_dir) if not dry_run: shutil.rmtree(root_dir) if not dry_run: file_obj.close()
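The mode handling in the archive record above (`mode = 'a' if exists else modes[fmt]`, with a ValueError for compressed formats) reflects a stdlib constraint: only a plain tar (or zip) archive can be appended to. A minimal standalone sketch of that behavior, using made-up file names in a temporary directory:

```python
import os
import tarfile
import tempfile

# Only an uncompressed tar supports mode 'a'; this is why archive() above
# refuses to append to an existing 'gztar'/'bztar' archive.
workdir = tempfile.mkdtemp()
for name in ("a.txt", "b.txt"):
    with open(os.path.join(workdir, name), "w") as f:
        f.write("data\n")

archive_path = os.path.join(workdir, "demo.tar")
with tarfile.open(archive_path, "w") as tf:   # initial write
    tf.add(os.path.join(workdir, "a.txt"), arcname="a.txt")
with tarfile.open(archive_path, "a") as tf:   # append to the same archive
    tf.add(os.path.join(workdir, "b.txt"), arcname="b.txt")

with tarfile.open(archive_path) as tf:
    names = tf.getnames()
print(names)  # ['a.txt', 'b.txt']
```

Opening the same path with `tarfile.open(archive_path, "a:gz")` would instead raise an error, matching the ValueError branch in the method above.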
def array_join(col, delimiter, null_replacement=None): """ Concatenates the elements of `column` using the `delimiter`. Null values are replaced with `null_replacement` if set, otherwise they are ignored. >>> df = spark.createDataFrame([(["a", "b", "c"],), (["a", None],)], ['data']) >>> df.select(array_join(df.data, ",").alias("joined")).collect() [Row(joined=u'a,b,c'), Row(joined=u'a')] >>> df.select(array_join(df.data, ",", "NULL").alias("joined")).collect() [Row(joined=u'a,b,c'), Row(joined=u'a,NULL')] """ sc = SparkContext._active_spark_context if null_replacement is None: return Column(sc._jvm.functions.array_join(_to_java_column(col), delimiter)) else: return Column(sc._jvm.functions.array_join( _to_java_column(col), delimiter, null_replacement))
Concatenates the elements of `column` using the `delimiter`. Null values are replaced with `null_replacement` if set, otherwise they are ignored. >>> df = spark.createDataFrame([(["a", "b", "c"],), (["a", None],)], ['data']) >>> df.select(array_join(df.data, ",").alias("joined")).collect() [Row(joined=u'a,b,c'), Row(joined=u'a')] >>> df.select(array_join(df.data, ",", "NULL").alias("joined")).collect() [Row(joined=u'a,b,c'), Row(joined=u'a,NULL')]
Below is the instruction that describes the task: ### Input: Concatenates the elements of `column` using the `delimiter`. Null values are replaced with `null_replacement` if set, otherwise they are ignored. >>> df = spark.createDataFrame([(["a", "b", "c"],), (["a", None],)], ['data']) >>> df.select(array_join(df.data, ",").alias("joined")).collect() [Row(joined=u'a,b,c'), Row(joined=u'a')] >>> df.select(array_join(df.data, ",", "NULL").alias("joined")).collect() [Row(joined=u'a,b,c'), Row(joined=u'a,NULL')] ### Response: def array_join(col, delimiter, null_replacement=None): """ Concatenates the elements of `column` using the `delimiter`. Null values are replaced with `null_replacement` if set, otherwise they are ignored. >>> df = spark.createDataFrame([(["a", "b", "c"],), (["a", None],)], ['data']) >>> df.select(array_join(df.data, ",").alias("joined")).collect() [Row(joined=u'a,b,c'), Row(joined=u'a')] >>> df.select(array_join(df.data, ",", "NULL").alias("joined")).collect() [Row(joined=u'a,b,c'), Row(joined=u'a,NULL')] """ sc = SparkContext._active_spark_context if null_replacement is None: return Column(sc._jvm.functions.array_join(_to_java_column(col), delimiter)) else: return Column(sc._jvm.functions.array_join( _to_java_column(col), delimiter, null_replacement))
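The `array_join` record above delegates to the JVM, but its null-handling semantics can be sketched in plain Python. This analogue operates on an ordinary list rather than a Spark `Column`, and is only meant to show the two branches:

```python
def join_list(values, delimiter, null_replacement=None):
    """Plain-Python analogue of array_join's null handling."""
    if null_replacement is None:
        # Without a replacement, null (None) entries are simply dropped.
        items = [v for v in values if v is not None]
    else:
        # With a replacement, null entries are substituted before joining.
        items = [null_replacement if v is None else v for v in values]
    return delimiter.join(items)

print(join_list(["a", None], ","))           # 'a'
print(join_list(["a", None], ",", "NULL"))   # 'a,NULL'
print(join_list(["a", "b", "c"], ","))       # 'a,b,c'
```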
def write_extent(self): """After the extent selection, save the extent and disconnect signals. """ self.extent_dialog.accept() self.extent_dialog.clear_extent.disconnect( self.parent.dock.extent.clear_user_analysis_extent) self.extent_dialog.extent_defined.disconnect( self.parent.dock.define_user_analysis_extent) self.extent_dialog.capture_button.clicked.disconnect( self.start_capture_coordinates) self.extent_dialog.tool.rectangle_created.disconnect( self.stop_capture_coordinates)
After the extent selection, save the extent and disconnect signals.
Below is the instruction that describes the task: ### Input: After the extent selection, save the extent and disconnect signals. ### Response: def write_extent(self): """After the extent selection, save the extent and disconnect signals. """ self.extent_dialog.accept() self.extent_dialog.clear_extent.disconnect( self.parent.dock.extent.clear_user_analysis_extent) self.extent_dialog.extent_defined.disconnect( self.parent.dock.define_user_analysis_extent) self.extent_dialog.capture_button.clicked.disconnect( self.start_capture_coordinates) self.extent_dialog.tool.rectangle_created.disconnect( self.stop_capture_coordinates)
def get_listener(name): ''' Return the listener class. ''' try: log.debug('Using %s as listener', name) return LISTENER_LOOKUP[name] except KeyError: msg = 'Listener {} is not available. Are the dependencies installed?'.format(name) log.error(msg, exc_info=True) raise InvalidListenerException(msg)
Return the listener class.
Below is the instruction that describes the task: ### Input: Return the listener class. ### Response: def get_listener(name): ''' Return the listener class. ''' try: log.debug('Using %s as listener', name) return LISTENER_LOOKUP[name] except KeyError: msg = 'Listener {} is not available. Are the dependencies installed?'.format(name) log.error(msg, exc_info=True) raise InvalidListenerException(msg)
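The `get_listener` record above uses a lookup-table dispatch: a module-level dict maps names to classes, and an unknown name is converted into a domain-specific exception. A self-contained sketch of the same pattern (the listener class and registry contents here are made up, not the library's own):

```python
class InvalidListenerException(Exception):
    """Raised when an unknown listener name is requested."""

class UDPListener:
    """Illustrative stand-in for a real listener implementation."""

LISTENER_LOOKUP = {"udp": UDPListener}

def get_listener(name):
    try:
        # Successful lookup returns the class itself, not an instance.
        return LISTENER_LOOKUP[name]
    except KeyError:
        # Translate the generic KeyError into the library's own error type.
        raise InvalidListenerException(
            "Listener {} is not available.".format(name))

print(get_listener("udp") is UDPListener)  # True
```

Returning the class (rather than an instance) lets the caller decide how to construct it, which is why the real function logs the name and returns `LISTENER_LOOKUP[name]` directly.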
def get_item(dictionary, tuple_key, default_value): """Grab values from a dictionary using an unordered tuple as a key. Dictionary should not contain None, 0, or False as dictionary values. Args: dictionary: Dictionary that uses two-element tuple as keys tuple_key: Unordered tuple of two elements default_value: Value that is returned when the tuple_key is not found in the dictionary """ u, v = tuple_key # Grab tuple-values from dictionary tuple1 = dictionary.get((u, v), None) tuple2 = dictionary.get((v, u), None) # Return the first value that is not {None, 0, False} return tuple1 or tuple2 or default_value
Grab values from a dictionary using an unordered tuple as a key. Dictionary should not contain None, 0, or False as dictionary values. Args: dictionary: Dictionary that uses two-element tuple as keys tuple_key: Unordered tuple of two elements default_value: Value that is returned when the tuple_key is not found in the dictionary
Below is the instruction that describes the task: ### Input: Grab values from a dictionary using an unordered tuple as a key. Dictionary should not contain None, 0, or False as dictionary values. Args: dictionary: Dictionary that uses two-element tuple as keys tuple_key: Unordered tuple of two elements default_value: Value that is returned when the tuple_key is not found in the dictionary ### Response: def get_item(dictionary, tuple_key, default_value): """Grab values from a dictionary using an unordered tuple as a key. Dictionary should not contain None, 0, or False as dictionary values. Args: dictionary: Dictionary that uses two-element tuple as keys tuple_key: Unordered tuple of two elements default_value: Value that is returned when the tuple_key is not found in the dictionary """ u, v = tuple_key # Grab tuple-values from dictionary tuple1 = dictionary.get((u, v), None) tuple2 = dictionary.get((v, u), None) # Return the first value that is not {None, 0, False} return tuple1 or tuple2 or default_value
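A short usage sketch for the `get_item` record above (the function is repeated so the block is self-contained) also demonstrates the docstring's caveat: because the result is built from an or-chain, a stored falsy value is silently skipped in favor of the default:

```python
def get_item(dictionary, tuple_key, default_value):
    u, v = tuple_key
    tuple1 = dictionary.get((u, v), None)
    tuple2 = dictionary.get((v, u), None)
    return tuple1 or tuple2 or default_value

weights = {("a", "b"): 3}
print(get_item(weights, ("a", "b"), 0))  # 3
print(get_item(weights, ("b", "a"), 0))  # 3 -- reversed key still resolves
print(get_item(weights, ("x", "y"), 0))  # 0 -- missing key -> default

# Caveat from the docstring: a falsy stored value (None, 0, False) is
# masked by the or-chain and the default is returned instead.
print(get_item({("a", "b"): 0}, ("a", "b"), 99))  # 99, not 0
```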
def register(self, name, f, returnType=None): """Register a Python function (including lambda function) or a user-defined function as a SQL function. :param name: name of the user-defined function in SQL statements. :param f: a Python function, or a user-defined function. The user-defined function can be either row-at-a-time or vectorized. See :meth:`pyspark.sql.functions.udf` and :meth:`pyspark.sql.functions.pandas_udf`. :param returnType: the return type of the registered user-defined function. The value can be either a :class:`pyspark.sql.types.DataType` object or a DDL-formatted type string. :return: a user-defined function. To register a nondeterministic Python function, users need to first build a nondeterministic user-defined function for the Python function and then register it as a SQL function. `returnType` can be optionally specified when `f` is a Python function but not when `f` is a user-defined function. Please see below. 1. When `f` is a Python function: `returnType` defaults to string type and can be optionally specified. The produced object must match the specified type. In this case, this API works as if `register(name, f, returnType=StringType())`. >>> strlen = spark.udf.register("stringLengthString", lambda x: len(x)) >>> spark.sql("SELECT stringLengthString('test')").collect() [Row(stringLengthString(test)=u'4')] >>> spark.sql("SELECT 'foo' AS text").select(strlen("text")).collect() [Row(stringLengthString(text)=u'3')] >>> from pyspark.sql.types import IntegerType >>> _ = spark.udf.register("stringLengthInt", lambda x: len(x), IntegerType()) >>> spark.sql("SELECT stringLengthInt('test')").collect() [Row(stringLengthInt(test)=4)] >>> from pyspark.sql.types import IntegerType >>> _ = spark.udf.register("stringLengthInt", lambda x: len(x), IntegerType()) >>> spark.sql("SELECT stringLengthInt('test')").collect() [Row(stringLengthInt(test)=4)] 2. 
When `f` is a user-defined function: Spark uses the return type of the given user-defined function as the return type of the registered user-defined function. `returnType` should not be specified. In this case, this API works as if `register(name, f)`. >>> from pyspark.sql.types import IntegerType >>> from pyspark.sql.functions import udf >>> slen = udf(lambda s: len(s), IntegerType()) >>> _ = spark.udf.register("slen", slen) >>> spark.sql("SELECT slen('test')").collect() [Row(slen(test)=4)] >>> import random >>> from pyspark.sql.functions import udf >>> from pyspark.sql.types import IntegerType >>> random_udf = udf(lambda: random.randint(0, 100), IntegerType()).asNondeterministic() >>> new_random_udf = spark.udf.register("random_udf", random_udf) >>> spark.sql("SELECT random_udf()").collect() # doctest: +SKIP [Row(random_udf()=82)] >>> from pyspark.sql.functions import pandas_udf, PandasUDFType >>> @pandas_udf("integer", PandasUDFType.SCALAR) # doctest: +SKIP ... def add_one(x): ... return x + 1 ... >>> _ = spark.udf.register("add_one", add_one) # doctest: +SKIP >>> spark.sql("SELECT add_one(id) FROM range(3)").collect() # doctest: +SKIP [Row(add_one(id)=1), Row(add_one(id)=2), Row(add_one(id)=3)] >>> @pandas_udf("integer", PandasUDFType.GROUPED_AGG) # doctest: +SKIP ... def sum_udf(v): ... return v.sum() ... >>> _ = spark.udf.register("sum_udf", sum_udf) # doctest: +SKIP >>> q = "SELECT sum_udf(v1) FROM VALUES (3, 0), (2, 0), (1, 1) tbl(v1, v2) GROUP BY v2" >>> spark.sql(q).collect() # doctest: +SKIP [Row(sum_udf(v1)=1), Row(sum_udf(v1)=5)] .. note:: Registration for a user-defined function (case 2.) was added from Spark 2.3.0. """ # This is to check whether the input function is from a user-defined function or # Python function. if hasattr(f, 'asNondeterministic'): if returnType is not None: raise TypeError( "Invalid returnType: data type can not be specified when f is" "a user-defined function, but got %s." 
% returnType) if f.evalType not in [PythonEvalType.SQL_BATCHED_UDF, PythonEvalType.SQL_SCALAR_PANDAS_UDF, PythonEvalType.SQL_GROUPED_AGG_PANDAS_UDF]: raise ValueError( "Invalid f: f must be SQL_BATCHED_UDF, SQL_SCALAR_PANDAS_UDF or " "SQL_GROUPED_AGG_PANDAS_UDF") register_udf = UserDefinedFunction(f.func, returnType=f.returnType, name=name, evalType=f.evalType, deterministic=f.deterministic) return_udf = f else: if returnType is None: returnType = StringType() register_udf = UserDefinedFunction(f, returnType=returnType, name=name, evalType=PythonEvalType.SQL_BATCHED_UDF) return_udf = register_udf._wrapped() self.sparkSession._jsparkSession.udf().registerPython(name, register_udf._judf) return return_udf
Register a Python function (including lambda function) or a user-defined function as a SQL function. :param name: name of the user-defined function in SQL statements. :param f: a Python function, or a user-defined function. The user-defined function can be either row-at-a-time or vectorized. See :meth:`pyspark.sql.functions.udf` and :meth:`pyspark.sql.functions.pandas_udf`. :param returnType: the return type of the registered user-defined function. The value can be either a :class:`pyspark.sql.types.DataType` object or a DDL-formatted type string. :return: a user-defined function. To register a nondeterministic Python function, users need to first build a nondeterministic user-defined function for the Python function and then register it as a SQL function. `returnType` can be optionally specified when `f` is a Python function but not when `f` is a user-defined function. Please see below. 1. When `f` is a Python function: `returnType` defaults to string type and can be optionally specified. The produced object must match the specified type. In this case, this API works as if `register(name, f, returnType=StringType())`. >>> strlen = spark.udf.register("stringLengthString", lambda x: len(x)) >>> spark.sql("SELECT stringLengthString('test')").collect() [Row(stringLengthString(test)=u'4')] >>> spark.sql("SELECT 'foo' AS text").select(strlen("text")).collect() [Row(stringLengthString(text)=u'3')] >>> from pyspark.sql.types import IntegerType >>> _ = spark.udf.register("stringLengthInt", lambda x: len(x), IntegerType()) >>> spark.sql("SELECT stringLengthInt('test')").collect() [Row(stringLengthInt(test)=4)] >>> from pyspark.sql.types import IntegerType >>> _ = spark.udf.register("stringLengthInt", lambda x: len(x), IntegerType()) >>> spark.sql("SELECT stringLengthInt('test')").collect() [Row(stringLengthInt(test)=4)] 2. 
When `f` is a user-defined function: Spark uses the return type of the given user-defined function as the return type of the registered user-defined function. `returnType` should not be specified. In this case, this API works as if `register(name, f)`. >>> from pyspark.sql.types import IntegerType >>> from pyspark.sql.functions import udf >>> slen = udf(lambda s: len(s), IntegerType()) >>> _ = spark.udf.register("slen", slen) >>> spark.sql("SELECT slen('test')").collect() [Row(slen(test)=4)] >>> import random >>> from pyspark.sql.functions import udf >>> from pyspark.sql.types import IntegerType >>> random_udf = udf(lambda: random.randint(0, 100), IntegerType()).asNondeterministic() >>> new_random_udf = spark.udf.register("random_udf", random_udf) >>> spark.sql("SELECT random_udf()").collect() # doctest: +SKIP [Row(random_udf()=82)] >>> from pyspark.sql.functions import pandas_udf, PandasUDFType >>> @pandas_udf("integer", PandasUDFType.SCALAR) # doctest: +SKIP ... def add_one(x): ... return x + 1 ... >>> _ = spark.udf.register("add_one", add_one) # doctest: +SKIP >>> spark.sql("SELECT add_one(id) FROM range(3)").collect() # doctest: +SKIP [Row(add_one(id)=1), Row(add_one(id)=2), Row(add_one(id)=3)] >>> @pandas_udf("integer", PandasUDFType.GROUPED_AGG) # doctest: +SKIP ... def sum_udf(v): ... return v.sum() ... >>> _ = spark.udf.register("sum_udf", sum_udf) # doctest: +SKIP >>> q = "SELECT sum_udf(v1) FROM VALUES (3, 0), (2, 0), (1, 1) tbl(v1, v2) GROUP BY v2" >>> spark.sql(q).collect() # doctest: +SKIP [Row(sum_udf(v1)=1), Row(sum_udf(v1)=5)] .. note:: Registration for a user-defined function (case 2.) was added from Spark 2.3.0.
Below is the instruction that describes the task: ### Input: Register a Python function (including lambda function) or a user-defined function as a SQL function. :param name: name of the user-defined function in SQL statements. :param f: a Python function, or a user-defined function. The user-defined function can be either row-at-a-time or vectorized. See :meth:`pyspark.sql.functions.udf` and :meth:`pyspark.sql.functions.pandas_udf`. :param returnType: the return type of the registered user-defined function. The value can be either a :class:`pyspark.sql.types.DataType` object or a DDL-formatted type string. :return: a user-defined function. To register a nondeterministic Python function, users need to first build a nondeterministic user-defined function for the Python function and then register it as a SQL function. `returnType` can be optionally specified when `f` is a Python function but not when `f` is a user-defined function. Please see below. 1. When `f` is a Python function: `returnType` defaults to string type and can be optionally specified. The produced object must match the specified type. In this case, this API works as if `register(name, f, returnType=StringType())`. >>> strlen = spark.udf.register("stringLengthString", lambda x: len(x)) >>> spark.sql("SELECT stringLengthString('test')").collect() [Row(stringLengthString(test)=u'4')] >>> spark.sql("SELECT 'foo' AS text").select(strlen("text")).collect() [Row(stringLengthString(text)=u'3')] >>> from pyspark.sql.types import IntegerType >>> _ = spark.udf.register("stringLengthInt", lambda x: len(x), IntegerType()) >>> spark.sql("SELECT stringLengthInt('test')").collect() [Row(stringLengthInt(test)=4)] >>> from pyspark.sql.types import IntegerType >>> _ = spark.udf.register("stringLengthInt", lambda x: len(x), IntegerType()) >>> spark.sql("SELECT stringLengthInt('test')").collect() [Row(stringLengthInt(test)=4)] 2. 
When `f` is a user-defined function: Spark uses the return type of the given user-defined function as the return type of the registered user-defined function. `returnType` should not be specified. In this case, this API works as if `register(name, f)`. >>> from pyspark.sql.types import IntegerType >>> from pyspark.sql.functions import udf >>> slen = udf(lambda s: len(s), IntegerType()) >>> _ = spark.udf.register("slen", slen) >>> spark.sql("SELECT slen('test')").collect() [Row(slen(test)=4)] >>> import random >>> from pyspark.sql.functions import udf >>> from pyspark.sql.types import IntegerType >>> random_udf = udf(lambda: random.randint(0, 100), IntegerType()).asNondeterministic() >>> new_random_udf = spark.udf.register("random_udf", random_udf) >>> spark.sql("SELECT random_udf()").collect() # doctest: +SKIP [Row(random_udf()=82)] >>> from pyspark.sql.functions import pandas_udf, PandasUDFType >>> @pandas_udf("integer", PandasUDFType.SCALAR) # doctest: +SKIP ... def add_one(x): ... return x + 1 ... >>> _ = spark.udf.register("add_one", add_one) # doctest: +SKIP >>> spark.sql("SELECT add_one(id) FROM range(3)").collect() # doctest: +SKIP [Row(add_one(id)=1), Row(add_one(id)=2), Row(add_one(id)=3)] >>> @pandas_udf("integer", PandasUDFType.GROUPED_AGG) # doctest: +SKIP ... def sum_udf(v): ... return v.sum() ... >>> _ = spark.udf.register("sum_udf", sum_udf) # doctest: +SKIP >>> q = "SELECT sum_udf(v1) FROM VALUES (3, 0), (2, 0), (1, 1) tbl(v1, v2) GROUP BY v2" >>> spark.sql(q).collect() # doctest: +SKIP [Row(sum_udf(v1)=1), Row(sum_udf(v1)=5)] .. note:: Registration for a user-defined function (case 2.) was added from Spark 2.3.0. ### Response: def register(self, name, f, returnType=None): """Register a Python function (including lambda function) or a user-defined function as a SQL function. :param name: name of the user-defined function in SQL statements. :param f: a Python function, or a user-defined function. 
The user-defined function can be either row-at-a-time or vectorized. See :meth:`pyspark.sql.functions.udf` and :meth:`pyspark.sql.functions.pandas_udf`. :param returnType: the return type of the registered user-defined function. The value can be either a :class:`pyspark.sql.types.DataType` object or a DDL-formatted type string. :return: a user-defined function. To register a nondeterministic Python function, users need to first build a nondeterministic user-defined function for the Python function and then register it as a SQL function. `returnType` can be optionally specified when `f` is a Python function but not when `f` is a user-defined function. Please see below. 1. When `f` is a Python function: `returnType` defaults to string type and can be optionally specified. The produced object must match the specified type. In this case, this API works as if `register(name, f, returnType=StringType())`. >>> strlen = spark.udf.register("stringLengthString", lambda x: len(x)) >>> spark.sql("SELECT stringLengthString('test')").collect() [Row(stringLengthString(test)=u'4')] >>> spark.sql("SELECT 'foo' AS text").select(strlen("text")).collect() [Row(stringLengthString(text)=u'3')] >>> from pyspark.sql.types import IntegerType >>> _ = spark.udf.register("stringLengthInt", lambda x: len(x), IntegerType()) >>> spark.sql("SELECT stringLengthInt('test')").collect() [Row(stringLengthInt(test)=4)] >>> from pyspark.sql.types import IntegerType >>> _ = spark.udf.register("stringLengthInt", lambda x: len(x), IntegerType()) >>> spark.sql("SELECT stringLengthInt('test')").collect() [Row(stringLengthInt(test)=4)] 2. When `f` is a user-defined function: Spark uses the return type of the given user-defined function as the return type of the registered user-defined function. `returnType` should not be specified. In this case, this API works as if `register(name, f)`. 
>>> from pyspark.sql.types import IntegerType >>> from pyspark.sql.functions import udf >>> slen = udf(lambda s: len(s), IntegerType()) >>> _ = spark.udf.register("slen", slen) >>> spark.sql("SELECT slen('test')").collect() [Row(slen(test)=4)] >>> import random >>> from pyspark.sql.functions import udf >>> from pyspark.sql.types import IntegerType >>> random_udf = udf(lambda: random.randint(0, 100), IntegerType()).asNondeterministic() >>> new_random_udf = spark.udf.register("random_udf", random_udf) >>> spark.sql("SELECT random_udf()").collect() # doctest: +SKIP [Row(random_udf()=82)] >>> from pyspark.sql.functions import pandas_udf, PandasUDFType >>> @pandas_udf("integer", PandasUDFType.SCALAR) # doctest: +SKIP ... def add_one(x): ... return x + 1 ... >>> _ = spark.udf.register("add_one", add_one) # doctest: +SKIP >>> spark.sql("SELECT add_one(id) FROM range(3)").collect() # doctest: +SKIP [Row(add_one(id)=1), Row(add_one(id)=2), Row(add_one(id)=3)] >>> @pandas_udf("integer", PandasUDFType.GROUPED_AGG) # doctest: +SKIP ... def sum_udf(v): ... return v.sum() ... >>> _ = spark.udf.register("sum_udf", sum_udf) # doctest: +SKIP >>> q = "SELECT sum_udf(v1) FROM VALUES (3, 0), (2, 0), (1, 1) tbl(v1, v2) GROUP BY v2" >>> spark.sql(q).collect() # doctest: +SKIP [Row(sum_udf(v1)=1), Row(sum_udf(v1)=5)] .. note:: Registration for a user-defined function (case 2.) was added from Spark 2.3.0. """ # This is to check whether the input function is from a user-defined function or # Python function. if hasattr(f, 'asNondeterministic'): if returnType is not None: raise TypeError( "Invalid returnType: data type can not be specified when f is" "a user-defined function, but got %s." 
% returnType) if f.evalType not in [PythonEvalType.SQL_BATCHED_UDF, PythonEvalType.SQL_SCALAR_PANDAS_UDF, PythonEvalType.SQL_GROUPED_AGG_PANDAS_UDF]: raise ValueError( "Invalid f: f must be SQL_BATCHED_UDF, SQL_SCALAR_PANDAS_UDF or " "SQL_GROUPED_AGG_PANDAS_UDF") register_udf = UserDefinedFunction(f.func, returnType=f.returnType, name=name, evalType=f.evalType, deterministic=f.deterministic) return_udf = f else: if returnType is None: returnType = StringType() register_udf = UserDefinedFunction(f, returnType=returnType, name=name, evalType=PythonEvalType.SQL_BATCHED_UDF) return_udf = register_udf._wrapped() self.sparkSession._jsparkSession.udf().registerPython(name, register_udf._judf) return return_udf
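Stripped of Spark, the branch at the top of `register()` above is a small dispatch on whether `f` already carries UDF metadata, detected via `hasattr(f, 'asNondeterministic')`. A standalone sketch of that dispatch; `FakeUDF` and the `"string"` default are illustrative stand-ins, not PySpark API:

```python
class FakeUDF:
    """Illustrative stand-in for pyspark's UserDefinedFunction."""
    def asNondeterministic(self):
        return self

def resolve_return_type(f, returnType=None):
    # Mirrors register(): an object that is already a UDF must not be
    # given a second returnType; a plain Python function defaults to
    # string type.
    if hasattr(f, "asNondeterministic"):
        if returnType is not None:
            raise TypeError(
                "returnType cannot be specified when f is a "
                "user-defined function")
        return None  # the UDF keeps the type it was built with
    return "string" if returnType is None else returnType

print(resolve_return_type(lambda x: x))             # 'string'
print(resolve_return_type(lambda x: x, "integer"))  # 'integer'
print(resolve_return_type(FakeUDF()))               # None
```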
def write_shiftfile(image_list, filename, outwcs='tweak_wcs.fits'): """ Write out a shiftfile for a given list of input Image class objects """ rows = '' nrows = 0 for img in image_list: row = img.get_shiftfile_row() if row is not None: rows += row nrows += 1 if nrows == 0: # If there are no fits to report, do not write out a file return # write out reference WCS now if os.path.exists(outwcs): os.remove(outwcs) p = fits.HDUList() p.append(fits.PrimaryHDU()) p.append(createWcsHDU(image_list[0].refWCS)) p.writeto(outwcs) # Write out shiftfile to go with reference WCS with open(filename, 'w') as f: f.write('# frame: output\n') f.write('# refimage: %s[wcs]\n' % outwcs) f.write('# form: delta\n') f.write('# units: pixels\n') f.write(rows) print('Writing out shiftfile :', filename)
Write out a shiftfile for a given list of input Image class objects
Below is the instruction that describes the task: ### Input: Write out a shiftfile for a given list of input Image class objects ### Response: def write_shiftfile(image_list, filename, outwcs='tweak_wcs.fits'): """ Write out a shiftfile for a given list of input Image class objects """ rows = '' nrows = 0 for img in image_list: row = img.get_shiftfile_row() if row is not None: rows += row nrows += 1 if nrows == 0: # If there are no fits to report, do not write out a file return # write out reference WCS now if os.path.exists(outwcs): os.remove(outwcs) p = fits.HDUList() p.append(fits.PrimaryHDU()) p.append(createWcsHDU(image_list[0].refWCS)) p.writeto(outwcs) # Write out shiftfile to go with reference WCS with open(filename, 'w') as f: f.write('# frame: output\n') f.write('# refimage: %s[wcs]\n' % outwcs) f.write('# form: delta\n') f.write('# units: pixels\n') f.write(rows) print('Writing out shiftfile :', filename)
def _parse(self): """ Generates a dictionary of responses from a <random> element """ responses = [] for child in self._element: weight = int_attribute(child, 'weight', 1) self._log.debug('Parsing random entry with weight {weight}: {entry}' .format(weight=weight, entry=child.text)) # If the random element doesn't contain any tags, just store the text and return if not len(child): responses.append((child.text, weight)) continue # Otherwise, parse all the available tags responses.append((tuple(self.trigger.agentml.parse_tags(child, self.trigger)), weight)) self._responses = tuple(responses)
Generates a dictionary of responses from a <random> element
Below is the instruction that describes the task: ### Input: Generates a dictionary of responses from a <random> element ### Response: def _parse(self): """ Generates a dictionary of responses from a <random> element """ responses = [] for child in self._element: weight = int_attribute(child, 'weight', 1) self._log.debug('Parsing random entry with weight {weight}: {entry}' .format(weight=weight, entry=child.text)) # If the random element doesn't contain any tags, just store the text and return if not len(child): responses.append((child.text, weight)) continue # Otherwise, parse all the available tags responses.append((tuple(self.trigger.agentml.parse_tags(child, self.trigger)), weight)) self._responses = tuple(responses)
def order_by(self, attribute=None, *, ascending=True): """ Applies a order_by clause :param str attribute: attribute to apply on :param bool ascending: should it apply ascending order or descending :rtype: Query """ attribute = self._get_mapping(attribute) or self._attribute if attribute: self._order_by[attribute] = None if ascending else 'desc' else: raise ValueError( 'Attribute property needed. call on_attribute(attribute) ' 'or new(attribute)') return self
Applies a order_by clause :param str attribute: attribute to apply on :param bool ascending: should it apply ascending order or descending :rtype: Query
Below is the instruction that describes the task: ### Input: Applies a order_by clause :param str attribute: attribute to apply on :param bool ascending: should it apply ascending order or descending :rtype: Query ### Response: def order_by(self, attribute=None, *, ascending=True): """ Applies a order_by clause :param str attribute: attribute to apply on :param bool ascending: should it apply ascending order or descending :rtype: Query """ attribute = self._get_mapping(attribute) or self._attribute if attribute: self._order_by[attribute] = None if ascending else 'desc' else: raise ValueError( 'Attribute property needed. call on_attribute(attribute) ' 'or new(attribute)') return self
def exp_comp_(t, alpha=1, beta=1): """beta*(1 - np.exp(-alpha*(t-beta)))""" return beta * (1 - np.exp(-alpha * np.maximum(0, t - 10 * beta)))
beta*(1 - np.exp(-alpha*(t-beta)))
Below is the instruction that describes the task: ### Input: beta*(1 - np.exp(-alpha*(t-beta))) ### Response: def exp_comp_(t, alpha=1, beta=1): """beta*(1 - np.exp(-alpha*(t-beta)))""" return beta * (1 - np.exp(-alpha * np.maximum(0, t - 10 * beta)))
def _bytes_generator(descriptor, max_length=0, limit=0): 'Helper to create bytes values. (Derived from string generator)' strs = values.get_strings(max_length, limit) vals = [bytes(_, 'utf-8') for _ in strs] return gen.IterValueGenerator(descriptor.name, vals)
Helper to create bytes values. (Derived from string generator)
Below is the instruction that describes the task: ### Input: Helper to create bytes values. (Derived from string generator) ### Response: def _bytes_generator(descriptor, max_length=0, limit=0): 'Helper to create bytes values. (Derived from string generator)' strs = values.get_strings(max_length, limit) vals = [bytes(_, 'utf-8') for _ in strs] return gen.IterValueGenerator(descriptor.name, vals)
def pckcov(pck, idcode, cover): """ Find the coverage window for a specified reference frame in a specified binary PCK file. http://naif.jpl.nasa.gov/pub/naif/toolkit_docs/C/cspice/pckcov_c.html :param pck: Name of PCK file. :type pck: str :param idcode: Class ID code of PCK reference frame. :type idcode: int :param cover: Window giving coverage in pck for idcode. :type cover: SpiceCell """ pck = stypes.stringToCharP(pck) idcode = ctypes.c_int(idcode) assert isinstance(cover, stypes.SpiceCell) assert cover.dtype == 1 libspice.pckcov_c(pck, idcode, ctypes.byref(cover))
Find the coverage window for a specified reference frame in a specified binary PCK file. http://naif.jpl.nasa.gov/pub/naif/toolkit_docs/C/cspice/pckcov_c.html :param pck: Name of PCK file. :type pck: str :param idcode: Class ID code of PCK reference frame. :type idcode: int :param cover: Window giving coverage in pck for idcode. :type cover: SpiceCell
Below is the instruction that describes the task: ### Input: Find the coverage window for a specified reference frame in a specified binary PCK file. http://naif.jpl.nasa.gov/pub/naif/toolkit_docs/C/cspice/pckcov_c.html :param pck: Name of PCK file. :type pck: str :param idcode: Class ID code of PCK reference frame. :type idcode: int :param cover: Window giving coverage in pck for idcode. :type cover: SpiceCell ### Response: def pckcov(pck, idcode, cover): """ Find the coverage window for a specified reference frame in a specified binary PCK file. http://naif.jpl.nasa.gov/pub/naif/toolkit_docs/C/cspice/pckcov_c.html :param pck: Name of PCK file. :type pck: str :param idcode: Class ID code of PCK reference frame. :type idcode: int :param cover: Window giving coverage in pck for idcode. :type cover: SpiceCell """ pck = stypes.stringToCharP(pck) idcode = ctypes.c_int(idcode) assert isinstance(cover, stypes.SpiceCell) assert cover.dtype == 1 libspice.pckcov_c(pck, idcode, ctypes.byref(cover))
def predict_proba(self, X): """Predict probabilities on test vectors X. Parameters ---------- X : array-like, shape = [n_samples, n_features] Input vectors, where n_samples is the number of samples and n_features is the number of features. Returns ------- proba : array, shape = [n_samples, n_classes] The class probabilities of the input samples. The order of the classes corresponds to that in the attribute `classes_`. """ if not hasattr(self, '_program'): raise NotFittedError('SymbolicClassifier not fitted.') X = check_array(X) _, n_features = X.shape if self.n_features_ != n_features: raise ValueError('Number of features of the model must match the ' 'input. Model n_features is %s and input ' 'n_features is %s.' % (self.n_features_, n_features)) scores = self._program.execute(X) proba = self._transformer(scores) proba = np.vstack([1 - proba, proba]).T return proba
Predict probabilities on test vectors X. Parameters ---------- X : array-like, shape = [n_samples, n_features] Input vectors, where n_samples is the number of samples and n_features is the number of features. Returns ------- proba : array, shape = [n_samples, n_classes] The class probabilities of the input samples. The order of the classes corresponds to that in the attribute `classes_`.
Below is the instruction that describes the task: ### Input: Predict probabilities on test vectors X. Parameters ---------- X : array-like, shape = [n_samples, n_features] Input vectors, where n_samples is the number of samples and n_features is the number of features. Returns ------- proba : array, shape = [n_samples, n_classes] The class probabilities of the input samples. The order of the classes corresponds to that in the attribute `classes_`. ### Response: def predict_proba(self, X): """Predict probabilities on test vectors X. Parameters ---------- X : array-like, shape = [n_samples, n_features] Input vectors, where n_samples is the number of samples and n_features is the number of features. Returns ------- proba : array, shape = [n_samples, n_classes] The class probabilities of the input samples. The order of the classes corresponds to that in the attribute `classes_`. """ if not hasattr(self, '_program'): raise NotFittedError('SymbolicClassifier not fitted.') X = check_array(X) _, n_features = X.shape if self.n_features_ != n_features: raise ValueError('Number of features of the model must match the ' 'input. Model n_features is %s and input ' 'n_features is %s.' % (self.n_features_, n_features)) scores = self._program.execute(X) proba = self._transformer(scores) proba = np.vstack([1 - proba, proba]).T return proba
def p_expression_Or(self, p): 'expression : expression OR expression' p[0] = Or(p[1], p[3], lineno=p.lineno(1)) p.set_lineno(0, p.lineno(1))
expression : expression OR expression
Below is the instruction that describes the task: ### Input: expression : expression OR expression ### Response: def p_expression_Or(self, p): 'expression : expression OR expression' p[0] = Or(p[1], p[3], lineno=p.lineno(1)) p.set_lineno(0, p.lineno(1))
def _get_child_class(self, path): """ Return the appropriate child class given a subdirectory path. Args: path (str): The path to the subdirectory. Returns: An uninstantiated BIDSNode or one of its subclasses. """ if self._child_entity is None: return BIDSNode for i, child_ent in enumerate(listify(self._child_entity)): template = self.available_entities[child_ent].directory if template is None: return BIDSNode template = self.root_path + template # Construct regex search pattern from target directory template to_rep = re.findall(r'\{(.*?)\}', template) for ent in to_rep: patt = self.available_entities[ent].pattern template = template.replace('{%s}' % ent, patt) template += r'[^\%s]*$' % os.path.sep if re.match(template, path): return listify(self._child_class)[i] return BIDSNode
Return the appropriate child class given a subdirectory path. Args: path (str): The path to the subdirectory. Returns: An uninstantiated BIDSNode or one of its subclasses.
Below is the instruction that describes the task: ### Input: Return the appropriate child class given a subdirectory path. Args: path (str): The path to the subdirectory. Returns: An uninstantiated BIDSNode or one of its subclasses. ### Response: def _get_child_class(self, path): """ Return the appropriate child class given a subdirectory path. Args: path (str): The path to the subdirectory. Returns: An uninstantiated BIDSNode or one of its subclasses. """ if self._child_entity is None: return BIDSNode for i, child_ent in enumerate(listify(self._child_entity)): template = self.available_entities[child_ent].directory if template is None: return BIDSNode template = self.root_path + template # Construct regex search pattern from target directory template to_rep = re.findall(r'\{(.*?)\}', template) for ent in to_rep: patt = self.available_entities[ent].pattern template = template.replace('{%s}' % ent, patt) template += r'[^\%s]*$' % os.path.sep if re.match(template, path): return listify(self._child_class)[i] return BIDSNode
def get_temp_file(keep=False, autoext="", fd=False): """Creates a temporary file. :param keep: If False, automatically delete the file when Scapy exits. :param autoext: Suffix to add to the generated file name. :param fd: If True, this returns a file-like object with the temporary file opened. If False (default), this returns a file path. """ f = tempfile.NamedTemporaryFile(prefix="scapy", suffix=autoext, delete=False) if not keep: conf.temp_files.append(f.name) if fd: return f else: # Close the file so something else can take it. f.close() return f.name
Creates a temporary file. :param keep: If False, automatically delete the file when Scapy exits. :param autoext: Suffix to add to the generated file name. :param fd: If True, this returns a file-like object with the temporary file opened. If False (default), this returns a file path.
Below is the instruction that describes the task: ### Input: Creates a temporary file. :param keep: If False, automatically delete the file when Scapy exits. :param autoext: Suffix to add to the generated file name. :param fd: If True, this returns a file-like object with the temporary file opened. If False (default), this returns a file path. ### Response: def get_temp_file(keep=False, autoext="", fd=False): """Creates a temporary file. :param keep: If False, automatically delete the file when Scapy exits. :param autoext: Suffix to add to the generated file name. :param fd: If True, this returns a file-like object with the temporary file opened. If False (default), this returns a file path. """ f = tempfile.NamedTemporaryFile(prefix="scapy", suffix=autoext, delete=False) if not keep: conf.temp_files.append(f.name) if fd: return f else: # Close the file so something else can take it. f.close() return f.name
async def _fair_get_in_peer(self): """ Get the first available available inbound peer in a fair manner. :returns: A `Peer` inbox, whose inbox is guaranteed not to be empty (and thus can be read from without blocking). """ peer = None while not peer: await self._wait_peers() # This rotates the list, implementing fair-queuing. peers = list(self._in_peers) tasks = [asyncio.ensure_future(self._in_peers.wait_change())] tasks.extend([ asyncio.ensure_future( p.inbox.wait_not_empty(), loop=self.loop, ) for p in peers ]) try: done, pending = await asyncio.wait( tasks, return_when=asyncio.FIRST_COMPLETED, loop=self.loop, ) finally: for task in tasks: task.cancel() tasks.pop(0) # pop the wait_change task. peer = next( ( p for task, p in zip(tasks, peers) if task in done and not task.cancelled() ), None, ) return peer
Get the first available available inbound peer in a fair manner. :returns: A `Peer` inbox, whose inbox is guaranteed not to be empty (and thus can be read from without blocking).
Below is the instruction that describes the task: ### Input: Get the first available available inbound peer in a fair manner. :returns: A `Peer` inbox, whose inbox is guaranteed not to be empty (and thus can be read from without blocking). ### Response: async def _fair_get_in_peer(self): """ Get the first available available inbound peer in a fair manner. :returns: A `Peer` inbox, whose inbox is guaranteed not to be empty (and thus can be read from without blocking). """ peer = None while not peer: await self._wait_peers() # This rotates the list, implementing fair-queuing. peers = list(self._in_peers) tasks = [asyncio.ensure_future(self._in_peers.wait_change())] tasks.extend([ asyncio.ensure_future( p.inbox.wait_not_empty(), loop=self.loop, ) for p in peers ]) try: done, pending = await asyncio.wait( tasks, return_when=asyncio.FIRST_COMPLETED, loop=self.loop, ) finally: for task in tasks: task.cancel() tasks.pop(0) # pop the wait_change task. peer = next( ( p for task, p in zip(tasks, peers) if task in done and not task.cancelled() ), None, ) return peer
def __init_yaml(): """Lazy init yaml because canmatrix might not be fully loaded when loading this format.""" global _yaml_initialized if not _yaml_initialized: _yaml_initialized = True yaml.add_constructor(u'tag:yaml.org,2002:Frame', _frame_constructor) yaml.add_constructor(u'tag:yaml.org,2002:Signal', _signal_constructor) yaml.add_representer(canmatrix.Frame, _frame_representer)
Lazy init yaml because canmatrix might not be fully loaded when loading this format.
Below is the instruction that describes the task: ### Input: Lazy init yaml because canmatrix might not be fully loaded when loading this format. ### Response: def __init_yaml(): """Lazy init yaml because canmatrix might not be fully loaded when loading this format.""" global _yaml_initialized if not _yaml_initialized: _yaml_initialized = True yaml.add_constructor(u'tag:yaml.org,2002:Frame', _frame_constructor) yaml.add_constructor(u'tag:yaml.org,2002:Signal', _signal_constructor) yaml.add_representer(canmatrix.Frame, _frame_representer)
def _list_merge(src, dest): """ Merge the contents coming from src into dest :param src: source dictionary :param dest: destination dictionary """ for k in src: if type(src[k]) != dict: dest[k] = src[k] else: # --- # src could have a key whose value is a list # and does not yet exist on dest if k not in dest: dest[k] = {} _list_merge(src[k], dest[k])
Merge the contents coming from src into dest :param src: source dictionary :param dest: destination dictionary
Below is the instruction that describes the task: ### Input: Merge the contents coming from src into dest :param src: source dictionary :param dest: destination dictionary ### Response: def _list_merge(src, dest): """ Merge the contents coming from src into dest :param src: source dictionary :param dest: destination dictionary """ for k in src: if type(src[k]) != dict: dest[k] = src[k] else: # --- # src could have a key whose value is a list # and does not yet exist on dest if k not in dest: dest[k] = {} _list_merge(src[k], dest[k])
def write(self, str): '''Write string str to the underlying file. Note that due to buffering, flush() or close() may be needed before the file on disk reflects the data written.''' if self.closed: raise ValueError('File closed') if self._mode in _allowed_read: raise Exception('File opened for read only') if self._valid is not None: raise Exception('file already finalized') if not self._done_header: self._write_header() # Encrypt and write the data encrypted = self._crypto.encrypt(str) self._checksumer.update(encrypted) self._fp.write(encrypted)
Write string str to the underlying file. Note that due to buffering, flush() or close() may be needed before the file on disk reflects the data written.
Below is the instruction that describes the task: ### Input: Write string str to the underlying file. Note that due to buffering, flush() or close() may be needed before the file on disk reflects the data written. ### Response: def write(self, str): '''Write string str to the underlying file. Note that due to buffering, flush() or close() may be needed before the file on disk reflects the data written.''' if self.closed: raise ValueError('File closed') if self._mode in _allowed_read: raise Exception('File opened for read only') if self._valid is not None: raise Exception('file already finalized') if not self._done_header: self._write_header() # Encrypt and write the data encrypted = self._crypto.encrypt(str) self._checksumer.update(encrypted) self._fp.write(encrypted)
def comparable(self): """str: comparable representation of the path specification.""" string_parts = [] if self.location is not None: string_parts.append('location: {0:s}'.format(self.location)) if self.store_index is not None: string_parts.append('store index: {0:d}'.format(self.store_index)) return self._GetComparable(sub_comparable_string=', '.join(string_parts))
str: comparable representation of the path specification.
Below is the instruction that describes the task: ### Input: str: comparable representation of the path specification. ### Response: def comparable(self): """str: comparable representation of the path specification.""" string_parts = [] if self.location is not None: string_parts.append('location: {0:s}'.format(self.location)) if self.store_index is not None: string_parts.append('store index: {0:d}'.format(self.store_index)) return self._GetComparable(sub_comparable_string=', '.join(string_parts))
def get(self, list_id, webhook_id): """ Get information about a specific webhook. :param list_id: The unique id for the list. :type list_id: :py:class:`str` :param webhook_id: The unique id for the webhook. :type webhook_id: :py:class:`str` """ self.list_id = list_id self.webhook_id = webhook_id return self._mc_client._get(url=self._build_path(list_id, 'webhooks', webhook_id))
Get information about a specific webhook. :param list_id: The unique id for the list. :type list_id: :py:class:`str` :param webhook_id: The unique id for the webhook. :type webhook_id: :py:class:`str`
Below is the instruction that describes the task: ### Input: Get information about a specific webhook. :param list_id: The unique id for the list. :type list_id: :py:class:`str` :param webhook_id: The unique id for the webhook. :type webhook_id: :py:class:`str` ### Response: def get(self, list_id, webhook_id): """ Get information about a specific webhook. :param list_id: The unique id for the list. :type list_id: :py:class:`str` :param webhook_id: The unique id for the webhook. :type webhook_id: :py:class:`str` """ self.list_id = list_id self.webhook_id = webhook_id return self._mc_client._get(url=self._build_path(list_id, 'webhooks', webhook_id))
def _update_table_cache(self): """Clears and updates the table cache to be in sync with self""" self._table_cache.clear() for sel, tab, val in self: try: self._table_cache[tab].append((sel, val)) except KeyError: self._table_cache[tab] = [(sel, val)] assert len(self) == self._len_table_cache()
Clears and updates the table cache to be in sync with self
Below is the instruction that describes the task: ### Input: Clears and updates the table cache to be in sync with self ### Response: def _update_table_cache(self): """Clears and updates the table cache to be in sync with self""" self._table_cache.clear() for sel, tab, val in self: try: self._table_cache[tab].append((sel, val)) except KeyError: self._table_cache[tab] = [(sel, val)] assert len(self) == self._len_table_cache()
def get_files_to_check(self): """Generate files and error codes to check on each one. Walk dir trees under `self._arguments` and yield file names that `match` under each directory that `match_dir`. The method locates the configuration for each file name and yields a tuple of (filename, [error_codes]). With every discovery of a new configuration file `IllegalConfiguration` might be raised. """ def _get_matches(conf): """Return the `match` and `match_dir` functions for `config`.""" match_func = re(conf.match + '$').match match_dir_func = re(conf.match_dir + '$').match return match_func, match_dir_func def _get_ignore_decorators(conf): """Return the `ignore_decorators` as None or regex.""" return (re(conf.ignore_decorators) if conf.ignore_decorators else None) for name in self._arguments: if os.path.isdir(name): for root, dirs, filenames in os.walk(name): config = self._get_config(os.path.abspath(root)) match, match_dir = _get_matches(config) ignore_decorators = _get_ignore_decorators(config) # Skip any dirs that do not match match_dir dirs[:] = [d for d in dirs if match_dir(d)] for filename in filenames: if match(filename): full_path = os.path.join(root, filename) yield (full_path, list(config.checked_codes), ignore_decorators) else: config = self._get_config(os.path.abspath(name)) match, _ = _get_matches(config) ignore_decorators = _get_ignore_decorators(config) if match(name): yield (name, list(config.checked_codes), ignore_decorators)
Generate files and error codes to check on each one. Walk dir trees under `self._arguments` and yield file names that `match` under each directory that `match_dir`. The method locates the configuration for each file name and yields a tuple of (filename, [error_codes]). With every discovery of a new configuration file `IllegalConfiguration` might be raised.
Below is the instruction that describes the task: ### Input: Generate files and error codes to check on each one. Walk dir trees under `self._arguments` and yield file names that `match` under each directory that `match_dir`. The method locates the configuration for each file name and yields a tuple of (filename, [error_codes]). With every discovery of a new configuration file `IllegalConfiguration` might be raised. ### Response: def get_files_to_check(self): """Generate files and error codes to check on each one. Walk dir trees under `self._arguments` and yield file names that `match` under each directory that `match_dir`. The method locates the configuration for each file name and yields a tuple of (filename, [error_codes]). With every discovery of a new configuration file `IllegalConfiguration` might be raised. """ def _get_matches(conf): """Return the `match` and `match_dir` functions for `config`.""" match_func = re(conf.match + '$').match match_dir_func = re(conf.match_dir + '$').match return match_func, match_dir_func def _get_ignore_decorators(conf): """Return the `ignore_decorators` as None or regex.""" return (re(conf.ignore_decorators) if conf.ignore_decorators else None) for name in self._arguments: if os.path.isdir(name): for root, dirs, filenames in os.walk(name): config = self._get_config(os.path.abspath(root)) match, match_dir = _get_matches(config) ignore_decorators = _get_ignore_decorators(config) # Skip any dirs that do not match match_dir dirs[:] = [d for d in dirs if match_dir(d)] for filename in filenames: if match(filename): full_path = os.path.join(root, filename) yield (full_path, list(config.checked_codes), ignore_decorators) else: config = self._get_config(os.path.abspath(name)) match, _ = _get_matches(config) ignore_decorators = _get_ignore_decorators(config) if match(name): yield (name, list(config.checked_codes), ignore_decorators)
def _matmul(a, b, transpose_a=False, transpose_b=False, adjoint_a=False, adjoint_b=False, a_is_sparse=False, b_is_sparse=False, name=None): # pylint: disable=unused-argument """Numpy matmul wrapper.""" if a_is_sparse or b_is_sparse: raise NotImplementedError('Numpy backend does not support sparse matmul.') if transpose_a or adjoint_a: a = _matrix_transpose(a, conjugate=adjoint_a) if transpose_b or adjoint_b: b = _matrix_transpose(b, conjugate=adjoint_b) return np.matmul(a, b)
Numpy matmul wrapper.
Below is the instruction that describes the task: ### Input: Numpy matmul wrapper. ### Response: def _matmul(a, b, transpose_a=False, transpose_b=False, adjoint_a=False, adjoint_b=False, a_is_sparse=False, b_is_sparse=False, name=None): # pylint: disable=unused-argument """Numpy matmul wrapper.""" if a_is_sparse or b_is_sparse: raise NotImplementedError('Numpy backend does not support sparse matmul.') if transpose_a or adjoint_a: a = _matrix_transpose(a, conjugate=adjoint_a) if transpose_b or adjoint_b: b = _matrix_transpose(b, conjugate=adjoint_b) return np.matmul(a, b)
def random_weights(n, bounds=(0., 1.), total=1.0): """ Generate pseudo-random weights. Returns a list of random weights that is of length n, where each weight is in the range bounds, and where the weights sum up to total. Useful for creating random portfolios when benchmarking. Args: * n (int): number of random weights * bounds ((low, high)): bounds for each weight * total (float): total sum of the weights """ low = bounds[0] high = bounds[1] if high < low: raise ValueError('Higher bound must be greater or ' 'equal to lower bound') if n * high < total or n * low > total: raise ValueError('solution not possible with given n and bounds') w = [0] * n tgt = -float(total) for i in range(n): rn = n - i - 1 rhigh = rn * high rlow = rn * low lowb = max(-rhigh - tgt, low) highb = min(-rlow - tgt, high) rw = random.uniform(lowb, highb) w[i] = rw tgt += rw random.shuffle(w) return w
Generate pseudo-random weights. Returns a list of random weights that is of length n, where each weight is in the range bounds, and where the weights sum up to total. Useful for creating random portfolios when benchmarking. Args: * n (int): number of random weights * bounds ((low, high)): bounds for each weight * total (float): total sum of the weights
Below is the instruction that describes the task: ### Input: Generate pseudo-random weights. Returns a list of random weights that is of length n, where each weight is in the range bounds, and where the weights sum up to total. Useful for creating random portfolios when benchmarking. Args: * n (int): number of random weights * bounds ((low, high)): bounds for each weight * total (float): total sum of the weights ### Response: def random_weights(n, bounds=(0., 1.), total=1.0): """ Generate pseudo-random weights. Returns a list of random weights that is of length n, where each weight is in the range bounds, and where the weights sum up to total. Useful for creating random portfolios when benchmarking. Args: * n (int): number of random weights * bounds ((low, high)): bounds for each weight * total (float): total sum of the weights """ low = bounds[0] high = bounds[1] if high < low: raise ValueError('Higher bound must be greater or ' 'equal to lower bound') if n * high < total or n * low > total: raise ValueError('solution not possible with given n and bounds') w = [0] * n tgt = -float(total) for i in range(n): rn = n - i - 1 rhigh = rn * high rlow = rn * low lowb = max(-rhigh - tgt, low) highb = min(-rlow - tgt, high) rw = random.uniform(lowb, highb) w[i] = rw tgt += rw random.shuffle(w) return w
def _fill_albedos(self, mesh=None, irrad_frac_refl=0.0): """ TODO: add documentation """ logger.debug("{}._fill_albedos".format(self.component)) if mesh is None: mesh = self.mesh mesh.update_columns(irrad_frac_refl=irrad_frac_refl) if not self.needs_recompute_instantaneous: logger.debug("{}._fill_albedos: copying albedos to standard mesh".format(self.component)) theta = 0.0 self._standard_meshes[theta].update_columns(irrad_frac_refl=irrad_frac_refl)
TODO: add documentation
Below is the instruction that describes the task: ### Input: TODO: add documentation ### Response: def _fill_albedos(self, mesh=None, irrad_frac_refl=0.0): """ TODO: add documentation """ logger.debug("{}._fill_albedos".format(self.component)) if mesh is None: mesh = self.mesh mesh.update_columns(irrad_frac_refl=irrad_frac_refl) if not self.needs_recompute_instantaneous: logger.debug("{}._fill_albedos: copying albedos to standard mesh".format(self.component)) theta = 0.0 self._standard_meshes[theta].update_columns(irrad_frac_refl=irrad_frac_refl)
def regular_to_sparse_from_sparse_mappings(regular_to_unmasked_sparse, unmasked_sparse_to_sparse): """Using the mapping between the regular-grid and unmasked pixelization grid, compute the mapping between each regular pixel and the masked pixelization grid. Parameters ----------- regular_to_unmasked_sparse : ndarray The index mapping between every regular-pixel and masked pixelization pixel. unmasked_sparse_to_sparse : ndarray The index mapping between every masked pixelization pixel and unmasked pixelization pixel. """ total_regular_pixels = regular_to_unmasked_sparse.shape[0] regular_to_sparse = np.zeros(total_regular_pixels) for regular_index in range(total_regular_pixels): # print(regular_index, regular_to_unmasked_sparse[regular_index], unmasked_sparse_to_sparse.shape[0]) regular_to_sparse[regular_index] = unmasked_sparse_to_sparse[regular_to_unmasked_sparse[regular_index]] return regular_to_sparse
Using the mapping between the regular-grid and unmasked pixelization grid, compute the mapping between each regular pixel and the masked pixelization grid. Parameters ----------- regular_to_unmasked_sparse : ndarray The index mapping between every regular-pixel and masked pixelization pixel. unmasked_sparse_to_sparse : ndarray The index mapping between every masked pixelization pixel and unmasked pixelization pixel.
Below is the the instruction that describes the task: ### Input: Using the mapping between the regular-grid and unmasked pixelization grid, compute the mapping between each regular pixel and the masked pixelization grid. Parameters ----------- regular_to_unmasked_sparse : ndarray The index mapping between every regular-pixel and masked pixelization pixel. unmasked_sparse_to_sparse : ndarray The index mapping between every masked pixelization pixel and unmasked pixelization pixel. ### Response: def regular_to_sparse_from_sparse_mappings(regular_to_unmasked_sparse, unmasked_sparse_to_sparse): """Using the mapping between the regular-grid and unmasked pixelization grid, compute the mapping between each regular pixel and the masked pixelization grid. Parameters ----------- regular_to_unmasked_sparse : ndarray The index mapping between every regular-pixel and masked pixelization pixel. unmasked_sparse_to_sparse : ndarray The index mapping between every masked pixelization pixel and unmasked pixelization pixel. """ total_regular_pixels = regular_to_unmasked_sparse.shape[0] regular_to_sparse = np.zeros(total_regular_pixels) for regular_index in range(total_regular_pixels): # print(regular_index, regular_to_unmasked_sparse[regular_index], unmasked_sparse_to_sparse.shape[0]) regular_to_sparse[regular_index] = unmasked_sparse_to_sparse[regular_to_unmasked_sparse[regular_index]] return regular_to_sparse
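The mapping record above is a one-step composition of two index arrays. A pure-Python sketch (plain lists stand in for the ndarrays, an assumption for the sake of a dependency-free example) makes the composition obvious:

```python
def regular_to_sparse_from_sparse_mappings(regular_to_unmasked_sparse,
                                           unmasked_sparse_to_sparse):
    """Compose the two index mappings: regular pixel -> unmasked sparse
    pixel -> masked sparse pixel.

    Pure-Python sketch of the record above; lists stand in for ndarrays.
    """
    return [unmasked_sparse_to_sparse[u] for u in regular_to_unmasked_sparse]

# Four regular pixels point at unmasked sparse pixels 0..3; the mask
# collapses those four onto two surviving sparse pixels.
mapping = regular_to_sparse_from_sparse_mappings(
    regular_to_unmasked_sparse=[0, 1, 2, 3],
    unmasked_sparse_to_sparse=[0, 0, 1, 1],
)
```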
def crop(gens, seconds=5, cropper=None): ''' Crop the generator to a finite number of frames Return a generator which outputs the provided generator limited to enough samples to produce seconds seconds of audio (default 5s) at the provided frame rate. ''' if hasattr(gens, "next"): # single generator gens = (gens,) if cropper == None: cropper = lambda gen: itertools.islice(gen, 0, seconds * sampler.FRAME_RATE) cropped = [cropper(gen) for gen in gens] return cropped[0] if len(cropped) == 1 else cropped
Crop the generator to a finite number of frames Return a generator which outputs the provided generator limited to enough samples to produce seconds seconds of audio (default 5s) at the provided frame rate.
Below is the the instruction that describes the task: ### Input: Crop the generator to a finite number of frames Return a generator which outputs the provided generator limited to enough samples to produce seconds seconds of audio (default 5s) at the provided frame rate. ### Response: def crop(gens, seconds=5, cropper=None): ''' Crop the generator to a finite number of frames Return a generator which outputs the provided generator limited to enough samples to produce seconds seconds of audio (default 5s) at the provided frame rate. ''' if hasattr(gens, "next"): # single generator gens = (gens,) if cropper == None: cropper = lambda gen: itertools.islice(gen, 0, seconds * sampler.FRAME_RATE) cropped = [cropper(gen) for gen in gens] return cropped[0] if len(cropped) == 1 else cropped
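The `crop` record relies on `itertools.islice` to bound possibly-infinite sample generators. A sketch under two stated assumptions: `FRAME_RATE` stands in for `sampler.FRAME_RATE`, and the Python-2 `hasattr(gens, "next")` probe is replaced with a tuple/list check, since Python 3 generators expose `__next__` instead.

```python
import itertools

FRAME_RATE = 4  # assumed stand-in for sampler.FRAME_RATE

def crop(gens, seconds=5, cropper=None):
    """Limit one generator (or several) to seconds worth of frames."""
    if not isinstance(gens, (tuple, list)):  # single generator
        gens = (gens,)
    if cropper is None:
        cropper = lambda gen: itertools.islice(gen, 0, seconds * FRAME_RATE)
    cropped = [cropper(gen) for gen in gens]
    return cropped[0] if len(cropped) == 1 else cropped

def counter():
    """An endless sample stream; crop() makes it finite."""
    n = 0
    while True:
        yield n
        n += 1

samples = list(crop(counter(), seconds=2))       # 2 s * 4 frames/s = 8 frames
pair = crop((counter(), counter()), seconds=1)   # list form for multiple gens
```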
def get_version(program, *, version_arg='--version', regex=r'(\d+(\.\d+)*)'): "Get the version of the specified program" args_prog = [program, version_arg] try: proc = run( args_prog, close_fds=True, universal_newlines=True, stdout=PIPE, stderr=STDOUT, check=True, ) output = proc.stdout except FileNotFoundError as e: raise MissingDependencyError( f"Could not find program '{program}' on the PATH" ) from e except CalledProcessError as e: if e.returncode != 0: raise MissingDependencyError( f"Ran program '{program}' but it exited with an error:\n{e.output}" ) from e raise MissingDependencyError( f"Could not find program '{program}' on the PATH" ) from e try: version = re.match(regex, output.strip()).group(1) except AttributeError as e: raise MissingDependencyError( f"The program '{program}' did not report its version. " f"Message was:\n{output}" ) return version
Get the version of the specified program
Below is the the instruction that describes the task: ### Input: Get the version of the specified program ### Response: def get_version(program, *, version_arg='--version', regex=r'(\d+(\.\d+)*)'): "Get the version of the specified program" args_prog = [program, version_arg] try: proc = run( args_prog, close_fds=True, universal_newlines=True, stdout=PIPE, stderr=STDOUT, check=True, ) output = proc.stdout except FileNotFoundError as e: raise MissingDependencyError( f"Could not find program '{program}' on the PATH" ) from e except CalledProcessError as e: if e.returncode != 0: raise MissingDependencyError( f"Ran program '{program}' but it exited with an error:\n{e.output}" ) from e raise MissingDependencyError( f"Could not find program '{program}' on the PATH" ) from e try: version = re.match(regex, output.strip()).group(1) except AttributeError as e: raise MissingDependencyError( f"The program '{program}' did not report its version. " f"Message was:\n{output}" ) return version
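The parsing half of `get_version` is just a `re.match` against a dotted-number pattern. The sketch below isolates that step so it runs without spawning a subprocess; the banner strings are invented examples, not real tool output.

```python
import re

def parse_version(output, regex=r'(\d+(\.\d+)*)'):
    """Pull the leading dotted version out of a version banner,
    mirroring the re.match step of the record above."""
    match = re.match(regex, output.strip())
    if match is None:
        raise ValueError('program did not report its version: %r' % output)
    return match.group(1)

v1 = parse_version('  10.02.1 some-banner-text\n')  # leading space stripped
v2 = parse_version('7')                             # a bare major version
```

`re.match` anchors at the start of the string, so the `.strip()` matters: banners that begin with whitespace would otherwise fail to match at all.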
def _set_rbridge_id(self, v, load=False): """ Setter method for rbridge_id, mapped from YANG variable /preprovision/rbridge_id (list) If this variable is read-only (config: false) in the source YANG file, then _set_rbridge_id is considered as a private method. Backends looking to populate this variable should do so via calling thisObj._set_rbridge_id() directly. """ if hasattr(v, "_utype"): v = v._utype(v) try: t = YANGDynClass(v,base=YANGListType("rbridge_id wwn",rbridge_id.rbridge_id, yang_name="rbridge-id", rest_name="rbridge-id", parent=self, is_container='list', user_ordered=False, path_helper=self._path_helper, yang_keys='rbridge-id wwn', extensions={u'tailf-common': {u'info': u'Rbridge Id for Pre-provision configuration', u'callpoint': u'switch_attributes_callpoint', u'display-when': u'((/vcsmode/vcs-mode = "true") and (/vcsmode/vcs-cluster-mode = "true"))', u'cli-mode-name': u'config-preprovision-rbridge-id-$(rbridge-id)'}}), is_container='list', yang_name="rbridge-id", rest_name="rbridge-id", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Rbridge Id for Pre-provision configuration', u'callpoint': u'switch_attributes_callpoint', u'display-when': u'((/vcsmode/vcs-mode = "true") and (/vcsmode/vcs-cluster-mode = "true"))', u'cli-mode-name': u'config-preprovision-rbridge-id-$(rbridge-id)'}}, namespace='urn:brocade.com:mgmt:brocade-preprovision', defining_module='brocade-preprovision', yang_type='list', is_config=True) except (TypeError, ValueError): raise ValueError({ 'error-string': """rbridge_id must be of a type compatible with list""", 'defined-type': "list", 'generated-type': """YANGDynClass(base=YANGListType("rbridge_id wwn",rbridge_id.rbridge_id, yang_name="rbridge-id", rest_name="rbridge-id", parent=self, is_container='list', user_ordered=False, path_helper=self._path_helper, yang_keys='rbridge-id wwn', extensions={u'tailf-common': {u'info': u'Rbridge Id for Pre-provision configuration', u'callpoint': u'switch_attributes_callpoint', u'display-when': u'((/vcsmode/vcs-mode = "true") and (/vcsmode/vcs-cluster-mode = "true"))', u'cli-mode-name': u'config-preprovision-rbridge-id-$(rbridge-id)'}}), is_container='list', yang_name="rbridge-id", rest_name="rbridge-id", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Rbridge Id for Pre-provision configuration', u'callpoint': u'switch_attributes_callpoint', u'display-when': u'((/vcsmode/vcs-mode = "true") and (/vcsmode/vcs-cluster-mode = "true"))', u'cli-mode-name': u'config-preprovision-rbridge-id-$(rbridge-id)'}}, namespace='urn:brocade.com:mgmt:brocade-preprovision', defining_module='brocade-preprovision', yang_type='list', is_config=True)""", }) self.__rbridge_id = t if hasattr(self, '_set'): self._set()
Setter method for rbridge_id, mapped from YANG variable /preprovision/rbridge_id (list) If this variable is read-only (config: false) in the source YANG file, then _set_rbridge_id is considered as a private method. Backends looking to populate this variable should do so via calling thisObj._set_rbridge_id() directly.
Below is the the instruction that describes the task: ### Input: Setter method for rbridge_id, mapped from YANG variable /preprovision/rbridge_id (list) If this variable is read-only (config: false) in the source YANG file, then _set_rbridge_id is considered as a private method. Backends looking to populate this variable should do so via calling thisObj._set_rbridge_id() directly. ### Response: def _set_rbridge_id(self, v, load=False): """ Setter method for rbridge_id, mapped from YANG variable /preprovision/rbridge_id (list) If this variable is read-only (config: false) in the source YANG file, then _set_rbridge_id is considered as a private method. Backends looking to populate this variable should do so via calling thisObj._set_rbridge_id() directly. """ if hasattr(v, "_utype"): v = v._utype(v) try: t = YANGDynClass(v,base=YANGListType("rbridge_id wwn",rbridge_id.rbridge_id, yang_name="rbridge-id", rest_name="rbridge-id", parent=self, is_container='list', user_ordered=False, path_helper=self._path_helper, yang_keys='rbridge-id wwn', extensions={u'tailf-common': {u'info': u'Rbridge Id for Pre-provision configuration', u'callpoint': u'switch_attributes_callpoint', u'display-when': u'((/vcsmode/vcs-mode = "true") and (/vcsmode/vcs-cluster-mode = "true"))', u'cli-mode-name': u'config-preprovision-rbridge-id-$(rbridge-id)'}}), is_container='list', yang_name="rbridge-id", rest_name="rbridge-id", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Rbridge Id for Pre-provision configuration', u'callpoint': u'switch_attributes_callpoint', u'display-when': u'((/vcsmode/vcs-mode = "true") and (/vcsmode/vcs-cluster-mode = "true"))', u'cli-mode-name': u'config-preprovision-rbridge-id-$(rbridge-id)'}}, namespace='urn:brocade.com:mgmt:brocade-preprovision', defining_module='brocade-preprovision', yang_type='list', is_config=True) except (TypeError, ValueError): raise ValueError({ 'error-string': """rbridge_id must be of a type compatible with list""", 'defined-type': "list", 'generated-type': """YANGDynClass(base=YANGListType("rbridge_id wwn",rbridge_id.rbridge_id, yang_name="rbridge-id", rest_name="rbridge-id", parent=self, is_container='list', user_ordered=False, path_helper=self._path_helper, yang_keys='rbridge-id wwn', extensions={u'tailf-common': {u'info': u'Rbridge Id for Pre-provision configuration', u'callpoint': u'switch_attributes_callpoint', u'display-when': u'((/vcsmode/vcs-mode = "true") and (/vcsmode/vcs-cluster-mode = "true"))', u'cli-mode-name': u'config-preprovision-rbridge-id-$(rbridge-id)'}}), is_container='list', yang_name="rbridge-id", rest_name="rbridge-id", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Rbridge Id for Pre-provision configuration', u'callpoint': u'switch_attributes_callpoint', u'display-when': u'((/vcsmode/vcs-mode = "true") and (/vcsmode/vcs-cluster-mode = "true"))', u'cli-mode-name': u'config-preprovision-rbridge-id-$(rbridge-id)'}}, namespace='urn:brocade.com:mgmt:brocade-preprovision', defining_module='brocade-preprovision', yang_type='list', is_config=True)""", }) self.__rbridge_id = t if hasattr(self, '_set'): self._set()
def parse(self, data, charset=None): """ Parse the data. It is usually a better idea to override ``_parse_data()`` than this method in derived classes. :param charset: the charset of the data. Uses datamapper's default (``self.charset``) if not given. :returns: """ charset = charset or self.charset return self._parse_data(data, charset)
Parse the data. It is usually a better idea to override ``_parse_data()`` than this method in derived classes. :param charset: the charset of the data. Uses datamapper's default (``self.charset``) if not given. :returns:
Below is the the instruction that describes the task: ### Input: Parse the data. It is usually a better idea to override ``_parse_data()`` than this method in derived classes. :param charset: the charset of the data. Uses datamapper's default (``self.charset``) if not given. :returns: ### Response: def parse(self, data, charset=None): """ Parse the data. It is usually a better idea to override ``_parse_data()`` than this method in derived classes. :param charset: the charset of the data. Uses datamapper's default (``self.charset``) if not given. :returns: """ charset = charset or self.charset return self._parse_data(data, charset)
def macro_body(self, node, frame): """Dump the function def of a macro or call block.""" frame = frame.inner() frame.symbols.analyze_node(node) macro_ref = MacroRef(node) explicit_caller = None skip_special_params = set() args = [] for idx, arg in enumerate(node.args): if arg.name == 'caller': explicit_caller = idx if arg.name in ('kwargs', 'varargs'): skip_special_params.add(arg.name) args.append(frame.symbols.ref(arg.name)) undeclared = find_undeclared(node.body, ('caller', 'kwargs', 'varargs')) if 'caller' in undeclared: # In older Jinja2 versions there was a bug that allowed caller # to retain the special behavior even if it was mentioned in # the argument list. However thankfully this was only really # working if it was the last argument. So we are explicitly # checking this now and error out if it is anywhere else in # the argument list. if explicit_caller is not None: try: node.defaults[explicit_caller - len(node.args)] except IndexError: self.fail('When defining macros or call blocks the ' 'special "caller" argument must be omitted ' 'or be given a default.', node.lineno) else: args.append(frame.symbols.declare_parameter('caller')) macro_ref.accesses_caller = True if 'kwargs' in undeclared and not 'kwargs' in skip_special_params: args.append(frame.symbols.declare_parameter('kwargs')) macro_ref.accesses_kwargs = True if 'varargs' in undeclared and not 'varargs' in skip_special_params: args.append(frame.symbols.declare_parameter('varargs')) macro_ref.accesses_varargs = True # macros are delayed, they never require output checks frame.require_output_check = False frame.symbols.analyze_node(node) self.writeline('%s(%s):' % (self.func('macro'), ', '.join(args)), node) self.indent() self.buffer(frame) self.enter_frame(frame) self.push_parameter_definitions(frame) for idx, arg in enumerate(node.args): ref = frame.symbols.ref(arg.name) self.writeline('if %s is missing:' % ref) self.indent() try: default = node.defaults[idx - len(node.args)] except IndexError: self.writeline('%s = undefined(%r, name=%r)' % ( ref, 'parameter %r was not provided' % arg.name, arg.name)) else: self.writeline('%s = ' % ref) self.visit(default, frame) self.mark_parameter_stored(ref) self.outdent() self.pop_parameter_definitions() self.blockvisit(node.body, frame) self.return_buffer_contents(frame, force_unescaped=True) self.leave_frame(frame, with_python_scope=True) self.outdent() return frame, macro_ref
Dump the function def of a macro or call block.
Below is the the instruction that describes the task: ### Input: Dump the function def of a macro or call block. ### Response: def macro_body(self, node, frame): """Dump the function def of a macro or call block.""" frame = frame.inner() frame.symbols.analyze_node(node) macro_ref = MacroRef(node) explicit_caller = None skip_special_params = set() args = [] for idx, arg in enumerate(node.args): if arg.name == 'caller': explicit_caller = idx if arg.name in ('kwargs', 'varargs'): skip_special_params.add(arg.name) args.append(frame.symbols.ref(arg.name)) undeclared = find_undeclared(node.body, ('caller', 'kwargs', 'varargs')) if 'caller' in undeclared: # In older Jinja2 versions there was a bug that allowed caller # to retain the special behavior even if it was mentioned in # the argument list. However thankfully this was only really # working if it was the last argument. So we are explicitly # checking this now and error out if it is anywhere else in # the argument list. if explicit_caller is not None: try: node.defaults[explicit_caller - len(node.args)] except IndexError: self.fail('When defining macros or call blocks the ' 'special "caller" argument must be omitted ' 'or be given a default.', node.lineno) else: args.append(frame.symbols.declare_parameter('caller')) macro_ref.accesses_caller = True if 'kwargs' in undeclared and not 'kwargs' in skip_special_params: args.append(frame.symbols.declare_parameter('kwargs')) macro_ref.accesses_kwargs = True if 'varargs' in undeclared and not 'varargs' in skip_special_params: args.append(frame.symbols.declare_parameter('varargs')) macro_ref.accesses_varargs = True # macros are delayed, they never require output checks frame.require_output_check = False frame.symbols.analyze_node(node) self.writeline('%s(%s):' % (self.func('macro'), ', '.join(args)), node) self.indent() self.buffer(frame) self.enter_frame(frame) self.push_parameter_definitions(frame) for idx, arg in enumerate(node.args): ref = frame.symbols.ref(arg.name) self.writeline('if %s is missing:' % ref) self.indent() try: default = node.defaults[idx - len(node.args)] except IndexError: self.writeline('%s = undefined(%r, name=%r)' % ( ref, 'parameter %r was not provided' % arg.name, arg.name)) else: self.writeline('%s = ' % ref) self.visit(default, frame) self.mark_parameter_stored(ref) self.outdent() self.pop_parameter_definitions() self.blockvisit(node.body, frame) self.return_buffer_contents(frame, force_unescaped=True) self.leave_frame(frame, with_python_scope=True) self.outdent() return frame, macro_ref
def get(self, key, value): """Retrieve single group record by id or name Supports resource cache Keyword Args: id (str): Full Group ID name (str): Group name Raises: TypeError: Unexpected or more than one keyword argument provided ValueError: No matching group found based on provided inputs Returns: Group: Group instance matching provided inputs """ if key == 'id': response = self._swimlane.request('get', 'groups/{}'.format(value)) return Group(self._swimlane, response.json()) else: response = self._swimlane.request('get', 'groups/lookup?name={}'.format(value)) matched_groups = response.json() for group_data in matched_groups: if group_data.get('name') == value: return Group(self._swimlane, group_data) raise ValueError('Unable to find group with name "{}"'.format(value))
Retrieve single group record by id or name Supports resource cache Keyword Args: id (str): Full Group ID name (str): Group name Raises: TypeError: Unexpected or more than one keyword argument provided ValueError: No matching group found based on provided inputs Returns: Group: Group instance matching provided inputs
Below is the the instruction that describes the task: ### Input: Retrieve single group record by id or name Supports resource cache Keyword Args: id (str): Full Group ID name (str): Group name Raises: TypeError: Unexpected or more than one keyword argument provided ValueError: No matching group found based on provided inputs Returns: Group: Group instance matching provided inputs ### Response: def get(self, key, value): """Retrieve single group record by id or name Supports resource cache Keyword Args: id (str): Full Group ID name (str): Group name Raises: TypeError: Unexpected or more than one keyword argument provided ValueError: No matching group found based on provided inputs Returns: Group: Group instance matching provided inputs """ if key == 'id': response = self._swimlane.request('get', 'groups/{}'.format(value)) return Group(self._swimlane, response.json()) else: response = self._swimlane.request('get', 'groups/lookup?name={}'.format(value)) matched_groups = response.json() for group_data in matched_groups: if group_data.get('name') == value: return Group(self._swimlane, group_data) raise ValueError('Unable to find group with name "{}"'.format(value))
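The name branch of the `get` record filters the lookup response for an exact match, since the lookup endpoint can return near matches. A sketch of just that filtering step, with plain dicts standing in for the JSON the Swimlane endpoint would return:

```python
def find_group_by_name(matched_groups, name):
    """Return the exact-name match from a lookup result list, or raise.

    Sketch of the fallback branch in the record above; plain dicts stand
    in for the Swimlane API response.
    """
    for group_data in matched_groups:
        if group_data.get('name') == name:
            return group_data
    raise ValueError('Unable to find group with name "{}"'.format(name))

groups = [{'name': 'admins'}, {'name': 'admins-readonly'}]
hit = find_group_by_name(groups, 'admins')  # 'admins-readonly' is skipped
```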
def use_code_colorscheme(self, name): """ Apply new colorscheme. (By name.) """ assert name in self.code_styles self._current_code_style_name = name self._current_style = self._generate_style()
Apply new colorscheme. (By name.)
Below is the the instruction that describes the task: ### Input: Apply new colorscheme. (By name.) ### Response: def use_code_colorscheme(self, name): """ Apply new colorscheme. (By name.) """ assert name in self.code_styles self._current_code_style_name = name self._current_style = self._generate_style()
def get_eager_datasource(cls, session, datasource_type, datasource_id): """Returns datasource with columns and metrics.""" datasource_class = ConnectorRegistry.sources[datasource_type] return ( session.query(datasource_class) .options( subqueryload(datasource_class.columns), subqueryload(datasource_class.metrics), ) .filter_by(id=datasource_id) .one() )
Returns datasource with columns and metrics.
Below is the the instruction that describes the task: ### Input: Returns datasource with columns and metrics. ### Response: def get_eager_datasource(cls, session, datasource_type, datasource_id): """Returns datasource with columns and metrics.""" datasource_class = ConnectorRegistry.sources[datasource_type] return ( session.query(datasource_class) .options( subqueryload(datasource_class.columns), subqueryload(datasource_class.metrics), ) .filter_by(id=datasource_id) .one() )
def get_stored_cert_serials(store): ''' Get all of the certificate serials in the specified store store The store to get all the certificate serials from CLI Example: .. code-block:: bash salt '*' certutil.get_stored_cert_serials <store> ''' cmd = "certutil.exe -store {0}".format(store) out = __salt__['cmd.run'](cmd) # match serial numbers by header position to work with multiple languages matches = re.findall(r"={16}\r\n.*:\s*(\w*)\r\n", out) return matches
Get all of the certificate serials in the specified store store The store to get all the certificate serials from CLI Example: .. code-block:: bash salt '*' certutil.get_stored_cert_serials <store>
Below is the the instruction that describes the task: ### Input: Get all of the certificate serials in the specified store store The store to get all the certificate serials from CLI Example: .. code-block:: bash salt '*' certutil.get_stored_cert_serials <store> ### Response: def get_stored_cert_serials(store): ''' Get all of the certificate serials in the specified store store The store to get all the certificate serials from CLI Example: .. code-block:: bash salt '*' certutil.get_stored_cert_serials <store> ''' cmd = "certutil.exe -store {0}".format(store) out = __salt__['cmd.run'](cmd) # match serial numbers by header position to work with multiple languages matches = re.findall(r"={16}\r\n.*:\s*(\w*)\r\n", out) return matches
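`get_stored_cert_serials` leans entirely on its regular expression: a 16-character `====` ruler line followed by any `label: value` line, which is what lets it work across localized certutil builds. The regex can be checked against a canned fragment; the labels and serial numbers below are invented, not real certutil output.

```python
import re

# Canned fragment imitating certutil's store listing: each certificate
# starts with a 16-char ruler, then a (possibly localized) serial line.
out = (
    "================\r\n"
    "Serial Number: 1a2b3c4d\r\n"
    "Issuer: CN=Example\r\n"
    "================\r\n"
    "Seriennummer: deadbeef\r\n"
)
# Same pattern as the record: anchor on the ruler, capture the word
# after the first-line colon regardless of the label's language.
serials = re.findall(r"={16}\r\n.*:\s*(\w*)\r\n", out)
```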
def p_static_member(p): '''static_member : class_name DOUBLE_COLON variable_without_objects | variable_class_name DOUBLE_COLON variable_without_objects''' p[0] = ast.StaticProperty(p[1], p[3], lineno=p.lineno(2))
static_member : class_name DOUBLE_COLON variable_without_objects | variable_class_name DOUBLE_COLON variable_without_objects
Below is the the instruction that describes the task: ### Input: static_member : class_name DOUBLE_COLON variable_without_objects | variable_class_name DOUBLE_COLON variable_without_objects ### Response: def p_static_member(p): '''static_member : class_name DOUBLE_COLON variable_without_objects | variable_class_name DOUBLE_COLON variable_without_objects''' p[0] = ast.StaticProperty(p[1], p[3], lineno=p.lineno(2))
def entropy_bits_nrange( minimum: Union[int, float], maximum: Union[int, float] ) -> float: """Calculate the number of entropy bits in a range of numbers.""" # Shannon: # d = fabs(maximum - minimum) # ent = -(1/d) * log(1/d, 2) * d # Aprox form: log10(digits) * log2(10) if not isinstance(minimum, (int, float)): raise TypeError('minimum can only be int or float') if not isinstance(maximum, (int, float)): raise TypeError('maximum can only be int or float') if minimum < 0: raise ValueError('minimum should be greater than 0') if maximum < 0: raise ValueError('maximum should be greater than 0') dif = fabs(maximum - minimum) if dif == 0: return 0.0 ent = log10(dif) * 3.321928 return ent
Calculate the number of entropy bits in a range of numbers.
Below is the the instruction that describes the task: ### Input: Calculate the number of entropy bits in a range of numbers. ### Response: def entropy_bits_nrange( minimum: Union[int, float], maximum: Union[int, float] ) -> float: """Calculate the number of entropy bits in a range of numbers.""" # Shannon: # d = fabs(maximum - minimum) # ent = -(1/d) * log(1/d, 2) * d # Aprox form: log10(digits) * log2(10) if not isinstance(minimum, (int, float)): raise TypeError('minimum can only be int or float') if not isinstance(maximum, (int, float)): raise TypeError('maximum can only be int or float') if minimum < 0: raise ValueError('minimum should be greater than 0') if maximum < 0: raise ValueError('maximum should be greater than 0') dif = fabs(maximum - minimum) if dif == 0: return 0.0 ent = log10(dif) * 3.321928 return ent
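`entropy_bits_nrange` approximates the base-2 log of the range width via base-10 logs; the magic constant `3.321928` is log2(10) truncated. A trimmed sketch (the type checks are dropped for brevity) confirming it agrees with a direct log2:

```python
from math import fabs, log10, log2

def entropy_bits_nrange(minimum, maximum):
    """Entropy bits of a uniform pick from [minimum, maximum], using the
    same approximation as the record above (type checks omitted)."""
    if minimum < 0 or maximum < 0:
        raise ValueError('bounds should be greater than 0')
    dif = fabs(maximum - minimum)
    if dif == 0:
        return 0.0
    return log10(dif) * 3.321928  # 3.321928 ~= log2(10), so this is ~log2(dif)

bits = entropy_bits_nrange(0, 1024)  # a 10-bit range
```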
def add(self, sid, token): """ Add new sensor to the database Parameters ---------- sid : str SensorId token : str """ try: self.dbcur.execute(SQL_SENSOR_INS, (sid, token)) except sqlite3.IntegrityError: # sensor entry exists pass
Add new sensor to the database Parameters ---------- sid : str SensorId token : str
Below is the the instruction that describes the task: ### Input: Add new sensor to the database Parameters ---------- sid : str SensorId token : str ### Response: def add(self, sid, token): """ Add new sensor to the database Parameters ---------- sid : str SensorId token : str """ try: self.dbcur.execute(SQL_SENSOR_INS, (sid, token)) except sqlite3.IntegrityError: # sensor entry exists pass
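The `add` record makes inserts idempotent by swallowing `sqlite3.IntegrityError` on duplicate keys. A runnable sketch against an in-memory database; the table shape and the `SQL_SENSOR_INS` text are assumptions, since the record does not show them.

```python
import sqlite3

SQL_SENSOR_INS = "INSERT INTO sensors (sid, token) VALUES (?, ?)"  # assumed

class SensorDB:
    """In-memory sketch of the record above: duplicate adds are ignored
    by swallowing the UNIQUE-constraint IntegrityError."""

    def __init__(self):
        self.dbcon = sqlite3.connect(':memory:')
        self.dbcur = self.dbcon.cursor()
        self.dbcur.execute(
            "CREATE TABLE sensors (sid TEXT PRIMARY KEY, token TEXT)")

    def add(self, sid, token):
        try:
            self.dbcur.execute(SQL_SENSOR_INS, (sid, token))
        except sqlite3.IntegrityError:
            # sensor entry exists
            pass

db = SensorDB()
db.add('s1', 'tok-a')
db.add('s1', 'tok-b')  # duplicate sid: silently ignored, token unchanged
rows = db.dbcur.execute("SELECT sid, token FROM sensors").fetchall()
```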
def check_dependency(self, dependency_path): """Check if mtime of dependency_path is greater than stored mtime.""" stored_hash = self._stamp_file_hashes.get(dependency_path) # This file was newly added, or we don't have a file # with stored hashes yet. Assume out of date. if not stored_hash: return False return stored_hash == _sha1_for_file(dependency_path)
Check if mtime of dependency_path is greater than stored mtime.
Below is the the instruction that describes the task: ### Input: Check if mtime of dependency_path is greater than stored mtime. ### Response: def check_dependency(self, dependency_path): """Check if mtime of dependency_path is greater than stored mtime.""" stored_hash = self._stamp_file_hashes.get(dependency_path) # This file was newly added, or we don't have a file # with stored hashes yet. Assume out of date. if not stored_hash: return False return stored_hash == _sha1_for_file(dependency_path)
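Note that despite its docstring's talk of mtimes, the `check_dependency` record compares content hashes, and an unknown file counts as out of date. The sketch below reconstructs the unshown `_sha1_for_file` helper from its name (an assumption) and exercises both outcomes against a temp file:

```python
import hashlib
import os
import tempfile

def _sha1_for_file(path):
    """Assumed helper matching the name used in the record above."""
    with open(path, 'rb') as f:
        return hashlib.sha1(f.read()).hexdigest()

def check_dependency(stamp_file_hashes, dependency_path):
    """Return True iff the stored hash matches the file's current hash.

    Files with no stored hash are assumed out of date, as in the record.
    """
    stored_hash = stamp_file_hashes.get(dependency_path)
    if not stored_hash:
        return False
    return stored_hash == _sha1_for_file(dependency_path)

fd, path = tempfile.mkstemp()
os.write(fd, b'v1')
os.close(fd)
hashes = {path: _sha1_for_file(path)}
fresh = check_dependency(hashes, path)   # unchanged file -> True
with open(path, 'wb') as f:
    f.write(b'v2')
stale = check_dependency(hashes, path)   # rewritten file -> False
os.remove(path)
```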
def reorient_wf(name='ReorientWorkflow'): """A workflow to reorient images to 'RPI' orientation""" workflow = pe.Workflow(name=name) inputnode = pe.Node(niu.IdentityInterface(fields=['in_file']), name='inputnode') outputnode = pe.Node(niu.IdentityInterface( fields=['out_file']), name='outputnode') deoblique = pe.Node(afni.Refit(deoblique=True), name='deoblique') reorient = pe.Node(afni.Resample( orientation='RPI', outputtype='NIFTI_GZ'), name='reorient') workflow.connect([ (inputnode, deoblique, [('in_file', 'in_file')]), (deoblique, reorient, [('out_file', 'in_file')]), (reorient, outputnode, [('out_file', 'out_file')]) ]) return workflow
A workflow to reorient images to 'RPI' orientation
Below is the the instruction that describes the task: ### Input: A workflow to reorient images to 'RPI' orientation ### Response: def reorient_wf(name='ReorientWorkflow'): """A workflow to reorient images to 'RPI' orientation""" workflow = pe.Workflow(name=name) inputnode = pe.Node(niu.IdentityInterface(fields=['in_file']), name='inputnode') outputnode = pe.Node(niu.IdentityInterface( fields=['out_file']), name='outputnode') deoblique = pe.Node(afni.Refit(deoblique=True), name='deoblique') reorient = pe.Node(afni.Resample( orientation='RPI', outputtype='NIFTI_GZ'), name='reorient') workflow.connect([ (inputnode, deoblique, [('in_file', 'in_file')]), (deoblique, reorient, [('out_file', 'in_file')]), (reorient, outputnode, [('out_file', 'out_file')]) ]) return workflow
def acquire(self, key): """Return the known information about the device and mark the record as being used by a segmenation state machine.""" if _debug: DeviceInfoCache._debug("acquire %r", key) if isinstance(key, int): device_info = self.cache.get(key, None) elif not isinstance(key, Address): raise TypeError("key must be integer or an address") elif key.addrType not in (Address.localStationAddr, Address.remoteStationAddr): raise TypeError("address must be a local or remote station") else: device_info = self.cache.get(key, None) if device_info: if _debug: DeviceInfoCache._debug(" - reference bump") device_info._ref_count += 1 if _debug: DeviceInfoCache._debug(" - device_info: %r", device_info) return device_info
Return the known information about the device and mark the record as being used by a segmenation state machine.
Below is the the instruction that describes the task: ### Input: Return the known information about the device and mark the record as being used by a segmenation state machine. ### Response: def acquire(self, key): """Return the known information about the device and mark the record as being used by a segmenation state machine.""" if _debug: DeviceInfoCache._debug("acquire %r", key) if isinstance(key, int): device_info = self.cache.get(key, None) elif not isinstance(key, Address): raise TypeError("key must be integer or an address") elif key.addrType not in (Address.localStationAddr, Address.remoteStationAddr): raise TypeError("address must be a local or remote station") else: device_info = self.cache.get(key, None) if device_info: if _debug: DeviceInfoCache._debug(" - reference bump") device_info._ref_count += 1 if _debug: DeviceInfoCache._debug(" - device_info: %r", device_info) return device_info
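The `acquire` record is reference-count bookkeeping over a plain dict cache: look the record up and bump `_ref_count` so a segmentation state machine can hold it. A minimal sketch that keeps just that part (the BACpypes `Address` handling and debug hooks are dropped, so integer keys only):

```python
class DeviceInfo:
    def __init__(self, key):
        self.key = key
        self._ref_count = 0

class DeviceInfoCache:
    """Minimal sketch of the acquire() bookkeeping in the record above."""

    def __init__(self):
        self.cache = {}

    def acquire(self, key):
        if not isinstance(key, int):  # the record also accepts Address keys
            raise TypeError("key must be integer or an address")
        device_info = self.cache.get(key)
        if device_info:
            device_info._ref_count += 1  # mark the record as in use
        return device_info

cache = DeviceInfoCache()
cache.cache[599] = DeviceInfo(599)
info = cache.acquire(599)
again = cache.acquire(599)    # same object, second reference
missing = cache.acquire(600)  # unknown device -> None
```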
def init_handler(self): """ Check self options. """ assert self.options.get('host') and self.options.get('port'), "Invalid options" assert self.options.get('to'), 'Recipients list is empty. SMTP disabled.' if not isinstance(self.options['to'], (list, tuple)): self.options['to'] = [self.options['to']]
Check self options.
Below is the the instruction that describes the task: ### Input: Check self options. ### Response: def init_handler(self): """ Check self options. """ assert self.options.get('host') and self.options.get('port'), "Invalid options" assert self.options.get('to'), 'Recipients list is empty. SMTP disabled.' if not isinstance(self.options['to'], (list, tuple)): self.options['to'] = [self.options['to']]
def get_proof(self): """ Get a proof produced while deciding the formula. """ if self.maplesat and self.prfile: self.prfile.seek(0) return [line.rstrip() for line in self.prfile.readlines()]
Get a proof produced while deciding the formula.
Below is the the instruction that describes the task: ### Input: Get a proof produced while deciding the formula. ### Response: def get_proof(self): """ Get a proof produced while deciding the formula. """ if self.maplesat and self.prfile: self.prfile.seek(0) return [line.rstrip() for line in self.prfile.readlines()]
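`get_proof` just rewinds the proof-capture file and strips line endings. The same pattern can be shown with `io.StringIO` standing in for the MapleSAT temp file, keeping the record's truthiness guard:

```python
import io

def get_proof(prfile):
    """Rewind the proof capture file and return its stripped lines,
    as in the record above (io.StringIO stands in for the temp file)."""
    if prfile:
        prfile.seek(0)
        return [line.rstrip() for line in prfile.readlines()]

prf = io.StringIO("1 2 0\n2 0\n0\n")  # an invented DRUP-style proof
proof = get_proof(prf)
```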
def WriteVtable(self): """ WriteVtable serializes the vtable for the current object, if needed. Before writing out the vtable, this checks pre-existing vtables for equality to this one. If an equal vtable is found, point the object to the existing vtable and return. Because vtable values are sensitive to alignment of object data, not all logically-equal vtables will be deduplicated. A vtable has the following format: <VOffsetT: size of the vtable in bytes, including this value> <VOffsetT: size of the object in bytes, including the vtable offset> <VOffsetT: offset for a field> * N, where N is the number of fields in the schema for this type. Includes deprecated fields. Thus, a vtable is made of 2 + N elements, each VOffsetT bytes wide. An object has the following format: <SOffsetT: offset to this object's vtable (may be negative)> <byte: data>+ """ # Prepend a zero scalar to the object. Later in this function we'll # write an offset here that points to the object's vtable: self.PrependSOffsetTRelative(0) objectOffset = self.Offset() existingVtable = None # Trim trailing 0 offsets. while self.current_vtable and self.current_vtable[-1] == 0: self.current_vtable.pop() # Search backwards through existing vtables, because similar vtables # are likely to have been recently appended. See # BenchmarkVtableDeduplication for a case in which this heuristic # saves about 30% of the time used in writing objects with duplicate # tables. i = len(self.vtables) - 1 while i >= 0: # Find the other vtable, which is associated with `i`: vt2Offset = self.vtables[i] vt2Start = len(self.Bytes) - vt2Offset vt2Len = encode.Get(packer.voffset, self.Bytes, vt2Start) metadata = VtableMetadataFields * N.VOffsetTFlags.bytewidth vt2End = vt2Start + vt2Len vt2 = self.Bytes[vt2Start+metadata:vt2End] # Compare the other vtable to the one under consideration. # If they are equal, store the offset and break: if vtableEqual(self.current_vtable, objectOffset, vt2): existingVtable = vt2Offset break i -= 1 if existingVtable is None: # Did not find a vtable, so write this one to the buffer. # Write out the current vtable in reverse , because # serialization occurs in last-first order: i = len(self.current_vtable) - 1 while i >= 0: off = 0 if self.current_vtable[i] != 0: # Forward reference to field; # use 32bit number to ensure no overflow: off = objectOffset - self.current_vtable[i] self.PrependVOffsetT(off) i -= 1 # The two metadata fields are written last. # First, store the object bytesize: objectSize = UOffsetTFlags.py_type(objectOffset - self.objectEnd) self.PrependVOffsetT(VOffsetTFlags.py_type(objectSize)) # Second, store the vtable bytesize: vBytes = len(self.current_vtable) + VtableMetadataFields vBytes *= N.VOffsetTFlags.bytewidth self.PrependVOffsetT(VOffsetTFlags.py_type(vBytes)) # Next, write the offset to the new vtable in the # already-allocated SOffsetT at the beginning of this object: objectStart = SOffsetTFlags.py_type(len(self.Bytes) - objectOffset) encode.Write(packer.soffset, self.Bytes, objectStart, SOffsetTFlags.py_type(self.Offset() - objectOffset)) # Finally, store this vtable in memory for future # deduplication: self.vtables.append(self.Offset()) else: # Found a duplicate vtable. objectStart = SOffsetTFlags.py_type(len(self.Bytes) - objectOffset) self.head = UOffsetTFlags.py_type(objectStart) # Write the offset to the found vtable in the # already-allocated SOffsetT at the beginning of this object: encode.Write(packer.soffset, self.Bytes, self.Head(), SOffsetTFlags.py_type(existingVtable - objectOffset)) self.current_vtable = None return objectOffset
WriteVtable serializes the vtable for the current object, if needed. Before writing out the vtable, this checks pre-existing vtables for equality to this one. If an equal vtable is found, point the object to the existing vtable and return. Because vtable values are sensitive to alignment of object data, not all logically-equal vtables will be deduplicated. A vtable has the following format: <VOffsetT: size of the vtable in bytes, including this value> <VOffsetT: size of the object in bytes, including the vtable offset> <VOffsetT: offset for a field> * N, where N is the number of fields in the schema for this type. Includes deprecated fields. Thus, a vtable is made of 2 + N elements, each VOffsetT bytes wide. An object has the following format: <SOffsetT: offset to this object's vtable (may be negative)> <byte: data>+
Below is the instruction that describes the task: ### Input: WriteVtable serializes the vtable for the current object, if needed. Before writing out the vtable, this checks pre-existing vtables for equality to this one. If an equal vtable is found, point the object to the existing vtable and return. Because vtable values are sensitive to alignment of object data, not all logically-equal vtables will be deduplicated. A vtable has the following format: <VOffsetT: size of the vtable in bytes, including this value> <VOffsetT: size of the object in bytes, including the vtable offset> <VOffsetT: offset for a field> * N, where N is the number of fields in the schema for this type. Includes deprecated fields. Thus, a vtable is made of 2 + N elements, each VOffsetT bytes wide. An object has the following format: <SOffsetT: offset to this object's vtable (may be negative)> <byte: data>+ ### Response: def WriteVtable(self): """ WriteVtable serializes the vtable for the current object, if needed. Before writing out the vtable, this checks pre-existing vtables for equality to this one. If an equal vtable is found, point the object to the existing vtable and return. Because vtable values are sensitive to alignment of object data, not all logically-equal vtables will be deduplicated. A vtable has the following format: <VOffsetT: size of the vtable in bytes, including this value> <VOffsetT: size of the object in bytes, including the vtable offset> <VOffsetT: offset for a field> * N, where N is the number of fields in the schema for this type. Includes deprecated fields. Thus, a vtable is made of 2 + N elements, each VOffsetT bytes wide. An object has the following format: <SOffsetT: offset to this object's vtable (may be negative)> <byte: data>+ """ # Prepend a zero scalar to the object.
Later in this function we'll # write an offset here that points to the object's vtable: self.PrependSOffsetTRelative(0) objectOffset = self.Offset() existingVtable = None # Trim trailing 0 offsets. while self.current_vtable and self.current_vtable[-1] == 0: self.current_vtable.pop() # Search backwards through existing vtables, because similar vtables # are likely to have been recently appended. See # BenchmarkVtableDeduplication for a case in which this heuristic # saves about 30% of the time used in writing objects with duplicate # tables. i = len(self.vtables) - 1 while i >= 0: # Find the other vtable, which is associated with `i`: vt2Offset = self.vtables[i] vt2Start = len(self.Bytes) - vt2Offset vt2Len = encode.Get(packer.voffset, self.Bytes, vt2Start) metadata = VtableMetadataFields * N.VOffsetTFlags.bytewidth vt2End = vt2Start + vt2Len vt2 = self.Bytes[vt2Start+metadata:vt2End] # Compare the other vtable to the one under consideration. # If they are equal, store the offset and break: if vtableEqual(self.current_vtable, objectOffset, vt2): existingVtable = vt2Offset break i -= 1 if existingVtable is None: # Did not find a vtable, so write this one to the buffer. # Write out the current vtable in reverse , because # serialization occurs in last-first order: i = len(self.current_vtable) - 1 while i >= 0: off = 0 if self.current_vtable[i] != 0: # Forward reference to field; # use 32bit number to ensure no overflow: off = objectOffset - self.current_vtable[i] self.PrependVOffsetT(off) i -= 1 # The two metadata fields are written last. 
# First, store the object bytesize: objectSize = UOffsetTFlags.py_type(objectOffset - self.objectEnd) self.PrependVOffsetT(VOffsetTFlags.py_type(objectSize)) # Second, store the vtable bytesize: vBytes = len(self.current_vtable) + VtableMetadataFields vBytes *= N.VOffsetTFlags.bytewidth self.PrependVOffsetT(VOffsetTFlags.py_type(vBytes)) # Next, write the offset to the new vtable in the # already-allocated SOffsetT at the beginning of this object: objectStart = SOffsetTFlags.py_type(len(self.Bytes) - objectOffset) encode.Write(packer.soffset, self.Bytes, objectStart, SOffsetTFlags.py_type(self.Offset() - objectOffset)) # Finally, store this vtable in memory for future # deduplication: self.vtables.append(self.Offset()) else: # Found a duplicate vtable. objectStart = SOffsetTFlags.py_type(len(self.Bytes) - objectOffset) self.head = UOffsetTFlags.py_type(objectStart) # Write the offset to the found vtable in the # already-allocated SOffsetT at the beginning of this object: encode.Write(packer.soffset, self.Bytes, self.Head(), SOffsetTFlags.py_type(existingVtable - objectOffset)) self.current_vtable = None return objectOffset
def checkin_boardingpass(self, code, passenger_name, seat_class, etkt_bnr, seat='', gate='', boarding_time=None, is_cancel=False, qrcode_data=None, card_id=None): """ Flight ticket interface """ data = { 'code': code, 'passenger_name': passenger_name, 'class': seat_class, 'etkt_bnr': etkt_bnr, 'seat': seat, 'gate': gate, 'is_cancel': is_cancel } if boarding_time: data['boarding_time'] = boarding_time if qrcode_data: data['qrcode_data'] = qrcode_data if card_id: data['card_id'] = card_id return self._post( 'card/boardingpass/checkin', data=data )
Flight ticket interface
Below is the instruction that describes the task: ### Input: Flight ticket interface ### Response: def checkin_boardingpass(self, code, passenger_name, seat_class, etkt_bnr, seat='', gate='', boarding_time=None, is_cancel=False, qrcode_data=None, card_id=None): """ Flight ticket interface """ data = { 'code': code, 'passenger_name': passenger_name, 'class': seat_class, 'etkt_bnr': etkt_bnr, 'seat': seat, 'gate': gate, 'is_cancel': is_cancel } if boarding_time: data['boarding_time'] = boarding_time if qrcode_data: data['qrcode_data'] = qrcode_data if card_id: data['card_id'] = card_id return self._post( 'card/boardingpass/checkin', data=data )
def _redis_connection_settings(self): """Return a dictionary of redis connection settings. """ return {config.HOST: self.settings.get(config.HOST, self._REDIS_HOST), config.PORT: self.settings.get(config.PORT, self._REDIS_PORT), 'selected_db': self.settings.get(config.DB, self._REDIS_DB)}
Return a dictionary of redis connection settings.
Below is the instruction that describes the task: ### Input: Return a dictionary of redis connection settings. ### Response: def _redis_connection_settings(self): """Return a dictionary of redis connection settings. """ return {config.HOST: self.settings.get(config.HOST, self._REDIS_HOST), config.PORT: self.settings.get(config.PORT, self._REDIS_PORT), 'selected_db': self.settings.get(config.DB, self._REDIS_DB)}
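The function above builds its dict with `settings.get(key, default)` so that missing keys fall back to class-level defaults. A self-contained sketch of that pattern; the key names and default values here are illustrative stand-ins, not the real `config` module:

```python
# Hypothetical defaults standing in for the _REDIS_* class attributes.
_REDIS_HOST, _REDIS_PORT, _REDIS_DB = "localhost", 6379, 0

def connection_settings(settings):
    # Each entry falls back to its default when the key is absent.
    return {
        "host": settings.get("host", _REDIS_HOST),
        "port": settings.get("port", _REDIS_PORT),
        "selected_db": settings.get("db", _REDIS_DB),
    }

print(connection_settings({"port": 6380}))
# {'host': 'localhost', 'port': 6380, 'selected_db': 0}
```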
def netHours(self): ''' For regular event staff, this is the net hours worked for financial purposes. For Instructors, netHours is calculated net of any substitutes. ''' if self.specifiedHours is not None: return self.specifiedHours elif self.category in [getConstant('general__eventStaffCategoryAssistant'),getConstant('general__eventStaffCategoryInstructor')]: return self.event.duration - sum([sub.netHours for sub in self.replacementFor.all()]) else: return sum([x.duration for x in self.occurrences.filter(cancelled=False)])
For regular event staff, this is the net hours worked for financial purposes. For Instructors, netHours is calculated net of any substitutes.
Below is the instruction that describes the task: ### Input: For regular event staff, this is the net hours worked for financial purposes. For Instructors, netHours is calculated net of any substitutes. ### Response: def netHours(self): ''' For regular event staff, this is the net hours worked for financial purposes. For Instructors, netHours is calculated net of any substitutes. ''' if self.specifiedHours is not None: return self.specifiedHours elif self.category in [getConstant('general__eventStaffCategoryAssistant'),getConstant('general__eventStaffCategoryInstructor')]: return self.event.duration - sum([sub.netHours for sub in self.replacementFor.all()]) else: return sum([x.duration for x in self.occurrences.filter(cancelled=False)])
def set_widgets(self): """Set widgets on the Field Mapping step.""" on_the_fly_metadata = {} layer_purpose = self.parent.step_kw_purpose.selected_purpose() on_the_fly_metadata['layer_purpose'] = layer_purpose['key'] if layer_purpose != layer_purpose_aggregation: subcategory = self.parent.step_kw_subcategory.\ selected_subcategory() if layer_purpose == layer_purpose_exposure: on_the_fly_metadata['exposure'] = subcategory['key'] if layer_purpose == layer_purpose_hazard: on_the_fly_metadata['hazard'] = subcategory['key'] inasafe_fields = self.parent.get_existing_keyword( 'inasafe_fields') inasafe_default_values = self.parent.get_existing_keyword( 'inasafe_default_values') on_the_fly_metadata['inasafe_fields'] = inasafe_fields on_the_fly_metadata['inasafe_default_values'] = inasafe_default_values self.field_mapping_widget.set_layer( self.parent.layer, on_the_fly_metadata) self.field_mapping_widget.show() self.main_layout.addWidget(self.field_mapping_widget)
Set widgets on the Field Mapping step.
Below is the instruction that describes the task: ### Input: Set widgets on the Field Mapping step. ### Response: def set_widgets(self): """Set widgets on the Field Mapping step.""" on_the_fly_metadata = {} layer_purpose = self.parent.step_kw_purpose.selected_purpose() on_the_fly_metadata['layer_purpose'] = layer_purpose['key'] if layer_purpose != layer_purpose_aggregation: subcategory = self.parent.step_kw_subcategory.\ selected_subcategory() if layer_purpose == layer_purpose_exposure: on_the_fly_metadata['exposure'] = subcategory['key'] if layer_purpose == layer_purpose_hazard: on_the_fly_metadata['hazard'] = subcategory['key'] inasafe_fields = self.parent.get_existing_keyword( 'inasafe_fields') inasafe_default_values = self.parent.get_existing_keyword( 'inasafe_default_values') on_the_fly_metadata['inasafe_fields'] = inasafe_fields on_the_fly_metadata['inasafe_default_values'] = inasafe_default_values self.field_mapping_widget.set_layer( self.parent.layer, on_the_fly_metadata) self.field_mapping_widget.show() self.main_layout.addWidget(self.field_mapping_widget)
def _embedding_kernel_pca(matrix, dimensions=3, affinity_matrix=None, sigma=1): """ Private method to calculate KPCA embedding :param dimensions: (int) :return: coordinate matrix (np.array) """ if affinity_matrix is None: aff = rbf(matrix, sigma) else: aff = affinity_matrix kpca = sklearn.decomposition.KernelPCA(kernel='precomputed', n_components=dimensions) return kpca.fit_transform(aff)
Private method to calculate KPCA embedding :param dimensions: (int) :return: coordinate matrix (np.array)
Below is the instruction that describes the task: ### Input: Private method to calculate KPCA embedding :param dimensions: (int) :return: coordinate matrix (np.array) ### Response: def _embedding_kernel_pca(matrix, dimensions=3, affinity_matrix=None, sigma=1): """ Private method to calculate KPCA embedding :param dimensions: (int) :return: coordinate matrix (np.array) """ if affinity_matrix is None: aff = rbf(matrix, sigma) else: aff = affinity_matrix kpca = sklearn.decomposition.KernelPCA(kernel='precomputed', n_components=dimensions) return kpca.fit_transform(aff)
def _get_fields_for_class(schema_graph, graphql_types, field_type_overrides, hidden_classes, cls_name): """Return a dict from field name to GraphQL field type, for the specified graph class.""" properties = schema_graph.get_element_by_class_name(cls_name).properties # Add leaf GraphQL fields (class properties). all_properties = { property_name: _property_descriptor_to_graphql_type(property_obj) for property_name, property_obj in six.iteritems(properties) } result = { property_name: graphql_representation for property_name, graphql_representation in six.iteritems(all_properties) if graphql_representation is not None } # Add edge GraphQL fields (edges to other vertex classes). schema_element = schema_graph.get_element_by_class_name(cls_name) outbound_edges = ( ('out_{}'.format(out_edge_name), schema_graph.get_element_by_class_name(out_edge_name).properties[ EDGE_DESTINATION_PROPERTY_NAME].qualifier) for out_edge_name in schema_element.out_connections ) inbound_edges = ( ('in_{}'.format(in_edge_name), schema_graph.get_element_by_class_name(in_edge_name).properties[ EDGE_SOURCE_PROPERTY_NAME].qualifier) for in_edge_name in schema_element.in_connections ) for field_name, to_type_name in chain(outbound_edges, inbound_edges): edge_endpoint_type_name = None subclasses = schema_graph.get_subclass_set(to_type_name) to_type_abstract = schema_graph.get_element_by_class_name(to_type_name).abstract if not to_type_abstract and len(subclasses) > 1: # If the edge endpoint type has no subclasses, it can't be coerced into any other type. # If the edge endpoint type is abstract (an interface type), we can already # coerce it to the proper type with a GraphQL fragment. However, if the endpoint type # is non-abstract and has subclasses, we need to return its subclasses as an union type. # This is because GraphQL fragments cannot be applied on concrete types, and # GraphQL does not support inheritance of concrete types. 
type_names_to_union = [ subclass for subclass in subclasses if subclass not in hidden_classes ] if type_names_to_union: edge_endpoint_type_name = _get_union_type_name(type_names_to_union) else: if to_type_name not in hidden_classes: edge_endpoint_type_name = to_type_name if edge_endpoint_type_name is not None: # If we decided to not hide this edge due to its endpoint type being non-representable, # represent the edge field as the GraphQL type List(edge_endpoint_type_name). result[field_name] = GraphQLList(graphql_types[edge_endpoint_type_name]) for field_name, field_type in six.iteritems(field_type_overrides): if field_name not in result: raise AssertionError(u'Attempting to override field "{}" from class "{}", but the ' u'class does not contain said field'.format(field_name, cls_name)) else: result[field_name] = field_type return result
Return a dict from field name to GraphQL field type, for the specified graph class.
Below is the instruction that describes the task: ### Input: Return a dict from field name to GraphQL field type, for the specified graph class. ### Response: def _get_fields_for_class(schema_graph, graphql_types, field_type_overrides, hidden_classes, cls_name): """Return a dict from field name to GraphQL field type, for the specified graph class.""" properties = schema_graph.get_element_by_class_name(cls_name).properties # Add leaf GraphQL fields (class properties). all_properties = { property_name: _property_descriptor_to_graphql_type(property_obj) for property_name, property_obj in six.iteritems(properties) } result = { property_name: graphql_representation for property_name, graphql_representation in six.iteritems(all_properties) if graphql_representation is not None } # Add edge GraphQL fields (edges to other vertex classes). schema_element = schema_graph.get_element_by_class_name(cls_name) outbound_edges = ( ('out_{}'.format(out_edge_name), schema_graph.get_element_by_class_name(out_edge_name).properties[ EDGE_DESTINATION_PROPERTY_NAME].qualifier) for out_edge_name in schema_element.out_connections ) inbound_edges = ( ('in_{}'.format(in_edge_name), schema_graph.get_element_by_class_name(in_edge_name).properties[ EDGE_SOURCE_PROPERTY_NAME].qualifier) for in_edge_name in schema_element.in_connections ) for field_name, to_type_name in chain(outbound_edges, inbound_edges): edge_endpoint_type_name = None subclasses = schema_graph.get_subclass_set(to_type_name) to_type_abstract = schema_graph.get_element_by_class_name(to_type_name).abstract if not to_type_abstract and len(subclasses) > 1: # If the edge endpoint type has no subclasses, it can't be coerced into any other type. # If the edge endpoint type is abstract (an interface type), we can already # coerce it to the proper type with a GraphQL fragment. However, if the endpoint type # is non-abstract and has subclasses, we need to return its subclasses as an union type.
# This is because GraphQL fragments cannot be applied on concrete types, and # GraphQL does not support inheritance of concrete types. type_names_to_union = [ subclass for subclass in subclasses if subclass not in hidden_classes ] if type_names_to_union: edge_endpoint_type_name = _get_union_type_name(type_names_to_union) else: if to_type_name not in hidden_classes: edge_endpoint_type_name = to_type_name if edge_endpoint_type_name is not None: # If we decided to not hide this edge due to its endpoint type being non-representable, # represent the edge field as the GraphQL type List(edge_endpoint_type_name). result[field_name] = GraphQLList(graphql_types[edge_endpoint_type_name]) for field_name, field_type in six.iteritems(field_type_overrides): if field_name not in result: raise AssertionError(u'Attempting to override field "{}" from class "{}", but the ' u'class does not contain said field'.format(field_name, cls_name)) else: result[field_name] = field_type return result
def get_transaction_status(self, transactionId): """ Returns the status of a given transaction. """ params = {} params['TransactionId'] = transactionId response = self.make_request("GetTransactionStatus", params) body = response.read() if(response.status == 200): rs = ResultSet() h = handler.XmlHandler(rs, self) xml.sax.parseString(body, h) return rs else: raise FPSResponseError(response.status, response.reason, body)
Returns the status of a given transaction.
Below is the instruction that describes the task: ### Input: Returns the status of a given transaction. ### Response: def get_transaction_status(self, transactionId): """ Returns the status of a given transaction. """ params = {} params['TransactionId'] = transactionId response = self.make_request("GetTransactionStatus", params) body = response.read() if(response.status == 200): rs = ResultSet() h = handler.XmlHandler(rs, self) xml.sax.parseString(body, h) return rs else: raise FPSResponseError(response.status, response.reason, body)
def export_service(self, svc_ref, export_props): # type: (ServiceReference, Dict[str, Any]) -> EndpointDescription """ Exports the given service :param svc_ref: Reference to the service to export :param export_props: Export properties :return: The endpoint description """ ed = EndpointDescription.fromprops(export_props) self._export_service( self._get_bundle_context().get_service(svc_ref), ed ) return ed
Exports the given service :param svc_ref: Reference to the service to export :param export_props: Export properties :return: The endpoint description
Below is the instruction that describes the task: ### Input: Exports the given service :param svc_ref: Reference to the service to export :param export_props: Export properties :return: The endpoint description ### Response: def export_service(self, svc_ref, export_props): # type: (ServiceReference, Dict[str, Any]) -> EndpointDescription """ Exports the given service :param svc_ref: Reference to the service to export :param export_props: Export properties :return: The endpoint description """ ed = EndpointDescription.fromprops(export_props) self._export_service( self._get_bundle_context().get_service(svc_ref), ed ) return ed
def bitpos(self, key, bit, start=None, end=None): """Find first bit set or clear in a string. :raises ValueError: if bit is not 0 or 1 """ if bit not in (1, 0): raise ValueError("bit argument must be either 1 or 0") bytes_range = [] if start is not None: bytes_range.append(start) if end is not None: if start is None: bytes_range = [0, end] else: bytes_range.append(end) return self.execute(b'BITPOS', key, bit, *bytes_range)
Find first bit set or clear in a string. :raises ValueError: if bit is not 0 or 1
Below is the instruction that describes the task: ### Input: Find first bit set or clear in a string. :raises ValueError: if bit is not 0 or 1 ### Response: def bitpos(self, key, bit, start=None, end=None): """Find first bit set or clear in a string. :raises ValueError: if bit is not 0 or 1 """ if bit not in (1, 0): raise ValueError("bit argument must be either 1 or 0") bytes_range = [] if start is not None: bytes_range.append(start) if end is not None: if start is None: bytes_range = [0, end] else: bytes_range.append(end) return self.execute(b'BITPOS', key, bit, *bytes_range)
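The interesting part of `bitpos` is its argument handling: an `end` with no `start` is widened to `[0, end]`, because the underlying BITPOS command cannot take an end offset without a start. That logic can be checked in isolation; `build_bitpos_args` below is a hypothetical helper extracted for illustration, not part of the client API:

```python
def build_bitpos_args(bit, start=None, end=None):
    # Reject anything other than bit values 0 and 1, mirroring the wrapper.
    if bit not in (1, 0):
        raise ValueError("bit argument must be either 1 or 0")
    bytes_range = []
    if start is not None:
        bytes_range.append(start)
    if end is not None:
        if start is None:
            # BITPOS needs a start offset before an end offset,
            # so default the start to 0.
            bytes_range = [0, end]
        else:
            bytes_range.append(end)
    return bytes_range

print(build_bitpos_args(1, end=5))  # [0, 5]
```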
def tabular(client, records): """Format dataset files with a tabular output. :param client: LocalClient instance. :param records: Filtered collection. """ from renku.models._tabulate import tabulate echo_via_pager( tabulate( records, headers=OrderedDict(( ('added', None), ('authors_csv', 'authors'), ('dataset', None), ('full_path', 'path'), )), ) )
Format dataset files with a tabular output. :param client: LocalClient instance. :param records: Filtered collection.
Below is the instruction that describes the task: ### Input: Format dataset files with a tabular output. :param client: LocalClient instance. :param records: Filtered collection. ### Response: def tabular(client, records): """Format dataset files with a tabular output. :param client: LocalClient instance. :param records: Filtered collection. """ from renku.models._tabulate import tabulate echo_via_pager( tabulate( records, headers=OrderedDict(( ('added', None), ('authors_csv', 'authors'), ('dataset', None), ('full_path', 'path'), )), ) )
def _verify_ssl_from_first(hosts): """Check if SSL validation parameter is passed in URI >>> _verify_ssl_from_first(['https://myhost:4200/?verify_ssl=false']) False >>> _verify_ssl_from_first(['https://myhost:4200/']) True >>> _verify_ssl_from_first([ ... 'https://h1:4200/?verify_ssl=False', ... 'https://h2:4200/?verify_ssl=True' ... ]) False """ for host in hosts: query = parse_qs(urlparse(host).query) if 'verify_ssl' in query: return _to_boolean(query['verify_ssl'][0]) return True
Check if SSL validation parameter is passed in URI >>> _verify_ssl_from_first(['https://myhost:4200/?verify_ssl=false']) False >>> _verify_ssl_from_first(['https://myhost:4200/']) True >>> _verify_ssl_from_first([ ... 'https://h1:4200/?verify_ssl=False', ... 'https://h2:4200/?verify_ssl=True' ... ]) False
Below is the instruction that describes the task: ### Input: Check if SSL validation parameter is passed in URI >>> _verify_ssl_from_first(['https://myhost:4200/?verify_ssl=false']) False >>> _verify_ssl_from_first(['https://myhost:4200/']) True >>> _verify_ssl_from_first([ ... 'https://h1:4200/?verify_ssl=False', ... 'https://h2:4200/?verify_ssl=True' ... ]) False ### Response: def _verify_ssl_from_first(hosts): """Check if SSL validation parameter is passed in URI >>> _verify_ssl_from_first(['https://myhost:4200/?verify_ssl=false']) False >>> _verify_ssl_from_first(['https://myhost:4200/']) True >>> _verify_ssl_from_first([ ... 'https://h1:4200/?verify_ssl=False', ... 'https://h2:4200/?verify_ssl=True' ... ]) False """ for host in hosts: query = parse_qs(urlparse(host).query) if 'verify_ssl' in query: return _to_boolean(query['verify_ssl'][0]) return True
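The function above depends on two names not shown in this entry: the `urlparse`/`parse_qs` imports and a `_to_boolean` helper. A self-contained sketch that reproduces the doctest behavior, with a guessed `_to_boolean` (the original helper's exact semantics are an assumption here):

```python
from urllib.parse import urlparse, parse_qs

def _to_boolean(value):
    # Assumed stand-in for the original helper: case-insensitive truthiness.
    return value.strip().lower() in ("true", "1", "yes")

def verify_ssl_from_first(hosts):
    # Return the verify_ssl flag from the first host URI that sets it;
    # default to True when no host carries the parameter.
    for host in hosts:
        query = parse_qs(urlparse(host).query)
        if "verify_ssl" in query:
            return _to_boolean(query["verify_ssl"][0])
    return True

print(verify_ssl_from_first(["https://h1:4200/?verify_ssl=False",
                             "https://h2:4200/?verify_ssl=True"]))  # False
```

Note the "first wins" design: only the earliest URI carrying `verify_ssl` is consulted, so later hosts cannot override it.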
def draw_round_rect(setter, x, y, w, h, r, color=None, aa=False): """Draw rectangle with top-left corner at x,y, width w, height h, and corner radius r. """ _draw_fast_hline(setter, x + r, y, w - 2 * r, color, aa) # Top _draw_fast_hline(setter, x + r, y + h - 1, w - 2 * r, color, aa) # Bottom _draw_fast_vline(setter, x, y + r, h - 2 * r, color, aa) # Left _draw_fast_vline(setter, x + w - 1, y + r, h - 2 * r, color, aa) # Right # draw four corners _draw_circle_helper(setter, x + r, y + r, r, 1, color, aa) _draw_circle_helper(setter, x + w - r - 1, y + r, r, 2, color, aa) _draw_circle_helper(setter, x + w - r - 1, y + h - r - 1, r, 4, color, aa) _draw_circle_helper(setter, x + r, y + h - r - 1, r, 8, color, aa)
Draw rectangle with top-left corner at x,y, width w, height h, and corner radius r.
Below is the instruction that describes the task: ### Input: Draw rectangle with top-left corner at x,y, width w, height h, and corner radius r. ### Response: def draw_round_rect(setter, x, y, w, h, r, color=None, aa=False): """Draw rectangle with top-left corner at x,y, width w, height h, and corner radius r. """ _draw_fast_hline(setter, x + r, y, w - 2 * r, color, aa)  # Top _draw_fast_hline(setter, x + r, y + h - 1, w - 2 * r, color, aa)  # Bottom _draw_fast_vline(setter, x, y + r, h - 2 * r, color, aa)  # Left _draw_fast_vline(setter, x + w - 1, y + r, h - 2 * r, color, aa)  # Right # draw four corners _draw_circle_helper(setter, x + r, y + r, r, 1, color, aa) _draw_circle_helper(setter, x + w - r - 1, y + r, r, 2, color, aa) _draw_circle_helper(setter, x + w - r - 1, y + h - r - 1, r, 4, color, aa) _draw_circle_helper(setter, x + r, y + h - r - 1, r, 8, color, aa)
def get_backreferences(context, relationship=None, as_brains=None): """Return all objects which use a UIDReferenceField to reference context. :param context: The object which is the target of references. :param relationship: The relationship name of the UIDReferenceField. :param as_brains: Requests that this function returns only catalog brains. as_brains can only be used if a relationship has been specified. This function can be called with or without specifying a relationship. - If a relationship is provided, the return value will be a list of items which reference the context using the provided relationship. If relationship is provided, then you can request that the backrefs should be returned as catalog brains. If you do not specify as_brains, the raw list of UIDs will be returned. - If the relationship is not provided, then the entire set of backreferences to the context object is returned (by reference) as a dictionary. This value can then be modified in-place, to edit the stored backreferences. """ instance = context.aq_base raw_backrefs = get_storage(instance) if not relationship: assert not as_brains, "You cannot use as_brains with no relationship" return raw_backrefs backrefs = list(raw_backrefs.get(relationship, [])) if not backrefs: return [] if not as_brains: return backrefs cat = _get_catalog_for_uid(backrefs[0]) return cat(UID=backrefs)
Return all objects which use a UIDReferenceField to reference context. :param context: The object which is the target of references. :param relationship: The relationship name of the UIDReferenceField. :param as_brains: Requests that this function returns only catalog brains. as_brains can only be used if a relationship has been specified. This function can be called with or without specifying a relationship. - If a relationship is provided, the return value will be a list of items which reference the context using the provided relationship. If relationship is provided, then you can request that the backrefs should be returned as catalog brains. If you do not specify as_brains, the raw list of UIDs will be returned. - If the relationship is not provided, then the entire set of backreferences to the context object is returned (by reference) as a dictionary. This value can then be modified in-place, to edit the stored backreferences.
Below is the instruction that describes the task: ### Input: Return all objects which use a UIDReferenceField to reference context. :param context: The object which is the target of references. :param relationship: The relationship name of the UIDReferenceField. :param as_brains: Requests that this function returns only catalog brains. as_brains can only be used if a relationship has been specified. This function can be called with or without specifying a relationship. - If a relationship is provided, the return value will be a list of items which reference the context using the provided relationship. If relationship is provided, then you can request that the backrefs should be returned as catalog brains. If you do not specify as_brains, the raw list of UIDs will be returned. - If the relationship is not provided, then the entire set of backreferences to the context object is returned (by reference) as a dictionary. This value can then be modified in-place, to edit the stored backreferences. ### Response: def get_backreferences(context, relationship=None, as_brains=None): """Return all objects which use a UIDReferenceField to reference context. :param context: The object which is the target of references. :param relationship: The relationship name of the UIDReferenceField. :param as_brains: Requests that this function returns only catalog brains. as_brains can only be used if a relationship has been specified. This function can be called with or without specifying a relationship. - If a relationship is provided, the return value will be a list of items which reference the context using the provided relationship. If relationship is provided, then you can request that the backrefs should be returned as catalog brains. If you do not specify as_brains, the raw list of UIDs will be returned. - If the relationship is not provided, then the entire set of backreferences to the context object is returned (by reference) as a dictionary.
This value can then be modified in-place, to edit the stored backreferences. """ instance = context.aq_base raw_backrefs = get_storage(instance) if not relationship: assert not as_brains, "You cannot use as_brains with no relationship" return raw_backrefs backrefs = list(raw_backrefs.get(relationship, [])) if not backrefs: return [] if not as_brains: return backrefs cat = _get_catalog_for_uid(backrefs[0]) return cat(UID=backrefs)
def dumps(self, o): """ Returns a serialized string representing the object, post-deduplication. :param o: the object """ f = io.BytesIO() VaultPickler(self, f).dump(o) f.seek(0) return f.read()
Returns a serialized string representing the object, post-deduplication. :param o: the object
Below is the instruction that describes the task: ### Input: Returns a serialized string representing the object, post-deduplication. :param o: the object ### Response: def dumps(self, o): """ Returns a serialized string representing the object, post-deduplication. :param o: the object """ f = io.BytesIO() VaultPickler(self, f).dump(o) f.seek(0) return f.read()
def regions_to_network(im, dt=None, voxel_size=1): r""" Analyzes an image that has been partitioned into pore regions and extracts the pore and throat geometry as well as network connectivity. Parameters ---------- im : ND-array An image of the pore space partitioned into individual pore regions. Note that this image must have zeros indicating the solid phase. dt : ND-array The distance transform of the pore space. If not given it will be calculated, but it can save time to provide one if available. voxel_size : scalar The resolution of the image, expressed as the length of one side of a voxel, so the volume of a voxel would be **voxel_size**-cubed. The default is 1, which is useful when overlaying the PNM on the original image since the scale of the image is always 1 unit length per voxel. Returns ------- A dictionary containing all the pore and throat size data, as well as the network topological information. The dictionary names use the OpenPNM convention (i.e. 'pore.coords', 'throat.conns') so it may be converted directly to an OpenPNM network object using the ``update`` command. 
""" print('_'*60) print('Extracting pore and throat information from image') from skimage.morphology import disk, ball struc_elem = disk if im.ndim == 2 else ball # if ~sp.any(im == 0): # raise Exception('The received image has no solid phase (0\'s)') if dt is None: dt = spim.distance_transform_edt(im > 0) dt = spim.gaussian_filter(input=dt, sigma=0.5) # Get 'slices' into im for each pore region slices = spim.find_objects(im) # Initialize arrays Ps = sp.arange(1, sp.amax(im)+1) Np = sp.size(Ps) p_coords = sp.zeros((Np, im.ndim), dtype=float) p_volume = sp.zeros((Np, ), dtype=float) p_dia_local = sp.zeros((Np, ), dtype=float) p_dia_global = sp.zeros((Np, ), dtype=float) p_label = sp.zeros((Np, ), dtype=int) p_area_surf = sp.zeros((Np, ), dtype=int) t_conns = [] t_dia_inscribed = [] t_area = [] t_perimeter = [] t_coords = [] # dt_shape = sp.array(dt.shape) # Start extracting size information for pores and throats for i in tqdm(Ps): pore = i - 1 if slices[pore] is None: continue s = extend_slice(slices[pore], im.shape) sub_im = im[s] sub_dt = dt[s] pore_im = sub_im == i padded_mask = sp.pad(pore_im, pad_width=1, mode='constant') pore_dt = spim.distance_transform_edt(padded_mask) s_offset = sp.array([i.start for i in s]) p_label[pore] = i p_coords[pore, :] = spim.center_of_mass(pore_im) + s_offset p_volume[pore] = sp.sum(pore_im) p_dia_local[pore] = 2*sp.amax(pore_dt) p_dia_global[pore] = 2*sp.amax(sub_dt) p_area_surf[pore] = sp.sum(pore_dt == 1) im_w_throats = spim.binary_dilation(input=pore_im, structure=struc_elem(1)) im_w_throats = im_w_throats*sub_im Pn = sp.unique(im_w_throats)[1:] - 1 for j in Pn: if j > pore: t_conns.append([pore, j]) vx = sp.where(im_w_throats == (j + 1)) t_dia_inscribed.append(2*sp.amax(sub_dt[vx])) t_perimeter.append(sp.sum(sub_dt[vx] < 2)) t_area.append(sp.size(vx[0])) t_inds = tuple([i+j for i, j in zip(vx, s_offset)]) temp = sp.where(dt[t_inds] == sp.amax(dt[t_inds]))[0][0] if im.ndim == 2: t_coords.append(tuple((t_inds[0][temp], 
t_inds[1][temp]))) else: t_coords.append(tuple((t_inds[0][temp], t_inds[1][temp], t_inds[2][temp]))) # Clean up values Nt = len(t_dia_inscribed) # Get number of throats if im.ndim == 2: # If 2D, add 0's in 3rd dimension p_coords = sp.vstack((p_coords.T, sp.zeros((Np, )))).T t_coords = sp.vstack((sp.array(t_coords).T, sp.zeros((Nt, )))).T net = {} net['pore.all'] = sp.ones((Np, ), dtype=bool) net['throat.all'] = sp.ones((Nt, ), dtype=bool) net['pore.coords'] = sp.copy(p_coords)*voxel_size net['pore.centroid'] = sp.copy(p_coords)*voxel_size net['throat.centroid'] = sp.array(t_coords)*voxel_size net['throat.conns'] = sp.array(t_conns) net['pore.label'] = sp.array(p_label) net['pore.volume'] = sp.copy(p_volume)*(voxel_size**3) net['throat.volume'] = sp.zeros((Nt, ), dtype=float) net['pore.diameter'] = sp.copy(p_dia_local)*voxel_size net['pore.inscribed_diameter'] = sp.copy(p_dia_local)*voxel_size net['pore.equivalent_diameter'] = 2*((3/4*net['pore.volume']/sp.pi)**(1/3)) net['pore.extended_diameter'] = sp.copy(p_dia_global)*voxel_size net['pore.surface_area'] = sp.copy(p_area_surf)*(voxel_size)**2 net['throat.diameter'] = sp.array(t_dia_inscribed)*voxel_size net['throat.inscribed_diameter'] = sp.array(t_dia_inscribed)*voxel_size net['throat.area'] = sp.array(t_area)*(voxel_size**2) net['throat.perimeter'] = sp.array(t_perimeter)*voxel_size net['throat.equivalent_diameter'] = (sp.array(t_area) * (voxel_size**2))**0.5 P12 = net['throat.conns'] PT1 = sp.sqrt(sp.sum(((p_coords[P12[:, 0]]-t_coords) * voxel_size)**2, axis=1)) PT2 = sp.sqrt(sp.sum(((p_coords[P12[:, 1]]-t_coords) * voxel_size)**2, axis=1)) net['throat.total_length'] = PT1 + PT2 PT1 = PT1-p_dia_local[P12[:, 0]]/2*voxel_size PT2 = PT2-p_dia_local[P12[:, 1]]/2*voxel_size net['throat.length'] = PT1 + PT2 dist = (p_coords[P12[:, 0]]-p_coords[P12[:, 1]])*voxel_size net['throat.direct_length'] = sp.sqrt(sp.sum(dist**2, axis=1)) # Make a dummy openpnm network to get the conduit lengths pn = op.network.GenericNetwork() 
pn.update(net) pn.add_model(propname='throat.endpoints', model=op_gm.throat_endpoints.spherical_pores, pore_diameter='pore.inscribed_diameter', throat_diameter='throat.inscribed_diameter') pn.add_model(propname='throat.conduit_lengths', model=op_gm.throat_length.conduit_lengths) pn.add_model(propname='pore.area', model=op_gm.pore_area.sphere) net['throat.endpoints.head'] = pn['throat.endpoints.head'] net['throat.endpoints.tail'] = pn['throat.endpoints.tail'] net['throat.conduit_lengths.pore1'] = pn['throat.conduit_lengths.pore1'] net['throat.conduit_lengths.pore2'] = pn['throat.conduit_lengths.pore2'] net['throat.conduit_lengths.throat'] = pn['throat.conduit_lengths.throat'] net['pore.area'] = pn['pore.area'] prj = pn.project prj.clear() wrk = op.Workspace() wrk.close_project(prj) return net
r""" Analyzes an image that has been partitioned into pore regions and extracts the pore and throat geometry as well as network connectivity. Parameters ---------- im : ND-array An image of the pore space partitioned into individual pore regions. Note that this image must have zeros indicating the solid phase. dt : ND-array The distance transform of the pore space. If not given it will be calculated, but it can save time to provide one if available. voxel_size : scalar The resolution of the image, expressed as the length of one side of a voxel, so the volume of a voxel would be **voxel_size**-cubed. The default is 1, which is useful when overlaying the PNM on the original image since the scale of the image is always 1 unit length per voxel. Returns ------- A dictionary containing all the pore and throat size data, as well as the network topological information. The dictionary names use the OpenPNM convention (i.e. 'pore.coords', 'throat.conns') so it may be converted directly to an OpenPNM network object using the ``update`` command.
Below is the the instruction that describes the task: ### Input: r""" Analyzes an image that has been partitioned into pore regions and extracts the pore and throat geometry as well as network connectivity. Parameters ---------- im : ND-array An image of the pore space partitioned into individual pore regions. Note that this image must have zeros indicating the solid phase. dt : ND-array The distance transform of the pore space. If not given it will be calculated, but it can save time to provide one if available. voxel_size : scalar The resolution of the image, expressed as the length of one side of a voxel, so the volume of a voxel would be **voxel_size**-cubed. The default is 1, which is useful when overlaying the PNM on the original image since the scale of the image is alway 1 unit lenth per voxel. Returns ------- A dictionary containing all the pore and throat size data, as well as the network topological information. The dictionary names use the OpenPNM convention (i.e. 'pore.coords', 'throat.conns') so it may be converted directly to an OpenPNM network object using the ``update`` command. ### Response: def regions_to_network(im, dt=None, voxel_size=1): r""" Analyzes an image that has been partitioned into pore regions and extracts the pore and throat geometry as well as network connectivity. Parameters ---------- im : ND-array An image of the pore space partitioned into individual pore regions. Note that this image must have zeros indicating the solid phase. dt : ND-array The distance transform of the pore space. If not given it will be calculated, but it can save time to provide one if available. voxel_size : scalar The resolution of the image, expressed as the length of one side of a voxel, so the volume of a voxel would be **voxel_size**-cubed. The default is 1, which is useful when overlaying the PNM on the original image since the scale of the image is alway 1 unit lenth per voxel. 
Returns ------- A dictionary containing all the pore and throat size data, as well as the network topological information. The dictionary names use the OpenPNM convention (i.e. 'pore.coords', 'throat.conns') so it may be converted directly to an OpenPNM network object using the ``update`` command. """ print('_'*60) print('Extracting pore and throat information from image') from skimage.morphology import disk, ball struc_elem = disk if im.ndim == 2 else ball # if ~sp.any(im == 0): # raise Exception('The received image has no solid phase (0\'s)') if dt is None: dt = spim.distance_transform_edt(im > 0) dt = spim.gaussian_filter(input=dt, sigma=0.5) # Get 'slices' into im for each pore region slices = spim.find_objects(im) # Initialize arrays Ps = sp.arange(1, sp.amax(im)+1) Np = sp.size(Ps) p_coords = sp.zeros((Np, im.ndim), dtype=float) p_volume = sp.zeros((Np, ), dtype=float) p_dia_local = sp.zeros((Np, ), dtype=float) p_dia_global = sp.zeros((Np, ), dtype=float) p_label = sp.zeros((Np, ), dtype=int) p_area_surf = sp.zeros((Np, ), dtype=int) t_conns = [] t_dia_inscribed = [] t_area = [] t_perimeter = [] t_coords = [] # dt_shape = sp.array(dt.shape) # Start extracting size information for pores and throats for i in tqdm(Ps): pore = i - 1 if slices[pore] is None: continue s = extend_slice(slices[pore], im.shape) sub_im = im[s] sub_dt = dt[s] pore_im = sub_im == i padded_mask = sp.pad(pore_im, pad_width=1, mode='constant') pore_dt = spim.distance_transform_edt(padded_mask) s_offset = sp.array([i.start for i in s]) p_label[pore] = i p_coords[pore, :] = spim.center_of_mass(pore_im) + s_offset p_volume[pore] = sp.sum(pore_im) p_dia_local[pore] = 2*sp.amax(pore_dt) p_dia_global[pore] = 2*sp.amax(sub_dt) p_area_surf[pore] = sp.sum(pore_dt == 1) im_w_throats = spim.binary_dilation(input=pore_im, structure=struc_elem(1)) im_w_throats = im_w_throats*sub_im Pn = sp.unique(im_w_throats)[1:] - 1 for j in Pn: if j > pore: t_conns.append([pore, j]) vx = sp.where(im_w_throats == (j 
+ 1)) t_dia_inscribed.append(2*sp.amax(sub_dt[vx])) t_perimeter.append(sp.sum(sub_dt[vx] < 2)) t_area.append(sp.size(vx[0])) t_inds = tuple([i+j for i, j in zip(vx, s_offset)]) temp = sp.where(dt[t_inds] == sp.amax(dt[t_inds]))[0][0] if im.ndim == 2: t_coords.append(tuple((t_inds[0][temp], t_inds[1][temp]))) else: t_coords.append(tuple((t_inds[0][temp], t_inds[1][temp], t_inds[2][temp]))) # Clean up values Nt = len(t_dia_inscribed) # Get number of throats if im.ndim == 2: # If 2D, add 0's in 3rd dimension p_coords = sp.vstack((p_coords.T, sp.zeros((Np, )))).T t_coords = sp.vstack((sp.array(t_coords).T, sp.zeros((Nt, )))).T net = {} net['pore.all'] = sp.ones((Np, ), dtype=bool) net['throat.all'] = sp.ones((Nt, ), dtype=bool) net['pore.coords'] = sp.copy(p_coords)*voxel_size net['pore.centroid'] = sp.copy(p_coords)*voxel_size net['throat.centroid'] = sp.array(t_coords)*voxel_size net['throat.conns'] = sp.array(t_conns) net['pore.label'] = sp.array(p_label) net['pore.volume'] = sp.copy(p_volume)*(voxel_size**3) net['throat.volume'] = sp.zeros((Nt, ), dtype=float) net['pore.diameter'] = sp.copy(p_dia_local)*voxel_size net['pore.inscribed_diameter'] = sp.copy(p_dia_local)*voxel_size net['pore.equivalent_diameter'] = 2*((3/4*net['pore.volume']/sp.pi)**(1/3)) net['pore.extended_diameter'] = sp.copy(p_dia_global)*voxel_size net['pore.surface_area'] = sp.copy(p_area_surf)*(voxel_size)**2 net['throat.diameter'] = sp.array(t_dia_inscribed)*voxel_size net['throat.inscribed_diameter'] = sp.array(t_dia_inscribed)*voxel_size net['throat.area'] = sp.array(t_area)*(voxel_size**2) net['throat.perimeter'] = sp.array(t_perimeter)*voxel_size net['throat.equivalent_diameter'] = (sp.array(t_area) * (voxel_size**2))**0.5 P12 = net['throat.conns'] PT1 = sp.sqrt(sp.sum(((p_coords[P12[:, 0]]-t_coords) * voxel_size)**2, axis=1)) PT2 = sp.sqrt(sp.sum(((p_coords[P12[:, 1]]-t_coords) * voxel_size)**2, axis=1)) net['throat.total_length'] = PT1 + PT2 PT1 = PT1-p_dia_local[P12[:, 0]]/2*voxel_size 
PT2 = PT2-p_dia_local[P12[:, 1]]/2*voxel_size net['throat.length'] = PT1 + PT2 dist = (p_coords[P12[:, 0]]-p_coords[P12[:, 1]])*voxel_size net['throat.direct_length'] = sp.sqrt(sp.sum(dist**2, axis=1)) # Make a dummy openpnm network to get the conduit lengths pn = op.network.GenericNetwork() pn.update(net) pn.add_model(propname='throat.endpoints', model=op_gm.throat_endpoints.spherical_pores, pore_diameter='pore.inscribed_diameter', throat_diameter='throat.inscribed_diameter') pn.add_model(propname='throat.conduit_lengths', model=op_gm.throat_length.conduit_lengths) pn.add_model(propname='pore.area', model=op_gm.pore_area.sphere) net['throat.endpoints.head'] = pn['throat.endpoints.head'] net['throat.endpoints.tail'] = pn['throat.endpoints.tail'] net['throat.conduit_lengths.pore1'] = pn['throat.conduit_lengths.pore1'] net['throat.conduit_lengths.pore2'] = pn['throat.conduit_lengths.pore2'] net['throat.conduit_lengths.throat'] = pn['throat.conduit_lengths.throat'] net['pore.area'] = pn['pore.area'] prj = pn.project prj.clear() wrk = op.Workspace() wrk.close_project(prj) return net
def plot_compare_four(data_a, data_b, data_c, data_d, disply_kwargs=None, plotter_kwargs=None, show_kwargs=None, screenshot=None, camera_position=None, outline=None, outline_color='k', labels=('A', 'B', 'C', 'D')): """Plot a 2 by 2 comparison of data objects. Plotting parameters and camera positions will all be the same. """ datasets = [[data_a, data_b], [data_c, data_d]] labels = [labels[0:2], labels[2:4]] if plotter_kwargs is None: plotter_kwargs = {} if disply_kwargs is None: disply_kwargs = {} if show_kwargs is None: show_kwargs = {} p = vtki.Plotter(shape=(2,2), **plotter_kwargs) for i in range(2): for j in range(2): p.subplot(i, j) p.add_mesh(datasets[i][j], **disply_kwargs) p.add_text(labels[i][j]) if is_vtki_obj(outline): p.add_mesh(outline, color=outline_color) if camera_position is not None: p.camera_position = camera_position return p.show(screenshot=screenshot, **show_kwargs)
Plot a 2 by 2 comparison of data objects. Plotting parameters and camera positions will all be the same.
Below is the instruction that describes the task: ### Input: Plot a 2 by 2 comparison of data objects. Plotting parameters and camera positions will all be the same. ### Response: def plot_compare_four(data_a, data_b, data_c, data_d, disply_kwargs=None, plotter_kwargs=None, show_kwargs=None, screenshot=None, camera_position=None, outline=None, outline_color='k', labels=('A', 'B', 'C', 'D')): """Plot a 2 by 2 comparison of data objects. Plotting parameters and camera positions will all be the same. """ datasets = [[data_a, data_b], [data_c, data_d]] labels = [labels[0:2], labels[2:4]] if plotter_kwargs is None: plotter_kwargs = {} if disply_kwargs is None: disply_kwargs = {} if show_kwargs is None: show_kwargs = {} p = vtki.Plotter(shape=(2,2), **plotter_kwargs) for i in range(2): for j in range(2): p.subplot(i, j) p.add_mesh(datasets[i][j], **disply_kwargs) p.add_text(labels[i][j]) if is_vtki_obj(outline): p.add_mesh(outline, color=outline_color) if camera_position is not None: p.camera_position = camera_position return p.show(screenshot=screenshot, **show_kwargs)
def _set_community_map(self, v, load=False): """ Setter method for community_map, mapped from YANG variable /snmp_server/mib/community_map (list) If this variable is read-only (config: false) in the source YANG file, then _set_community_map is considered as a private method. Backends looking to populate this variable should do so via calling thisObj._set_community_map() directly. """ if hasattr(v, "_utype"): v = v._utype(v) try: t = YANGDynClass(v,base=YANGListType("community",community_map.community_map, yang_name="community-map", rest_name="community-map", parent=self, is_container='list', user_ordered=False, path_helper=self._path_helper, yang_keys='community', extensions={u'tailf-common': {u'info': u'community string to map', u'cli-suppress-mode': None, u'cli-suppress-list-no': None, u'cli-compact-syntax': None, u'cli-incomplete-command': None, u'callpoint': u'snmpcommunitymapping'}}), is_container='list', yang_name="community-map", rest_name="community-map", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'community string to map', u'cli-suppress-mode': None, u'cli-suppress-list-no': None, u'cli-compact-syntax': None, u'cli-incomplete-command': None, u'callpoint': u'snmpcommunitymapping'}}, namespace='urn:brocade.com:mgmt:brocade-snmp', defining_module='brocade-snmp', yang_type='list', is_config=True) except (TypeError, ValueError): raise ValueError({ 'error-string': """community_map must be of a type compatible with list""", 'defined-type': "list", 'generated-type': """YANGDynClass(base=YANGListType("community",community_map.community_map, yang_name="community-map", rest_name="community-map", parent=self, is_container='list', user_ordered=False, path_helper=self._path_helper, yang_keys='community', extensions={u'tailf-common': {u'info': u'community string to map', u'cli-suppress-mode': None, u'cli-suppress-list-no': None, u'cli-compact-syntax': None, 
u'cli-incomplete-command': None, u'callpoint': u'snmpcommunitymapping'}}), is_container='list', yang_name="community-map", rest_name="community-map", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'community string to map', u'cli-suppress-mode': None, u'cli-suppress-list-no': None, u'cli-compact-syntax': None, u'cli-incomplete-command': None, u'callpoint': u'snmpcommunitymapping'}}, namespace='urn:brocade.com:mgmt:brocade-snmp', defining_module='brocade-snmp', yang_type='list', is_config=True)""", }) self.__community_map = t if hasattr(self, '_set'): self._set()
Setter method for community_map, mapped from YANG variable /snmp_server/mib/community_map (list) If this variable is read-only (config: false) in the source YANG file, then _set_community_map is considered as a private method. Backends looking to populate this variable should do so via calling thisObj._set_community_map() directly.
Below is the the instruction that describes the task: ### Input: Setter method for community_map, mapped from YANG variable /snmp_server/mib/community_map (list) If this variable is read-only (config: false) in the source YANG file, then _set_community_map is considered as a private method. Backends looking to populate this variable should do so via calling thisObj._set_community_map() directly. ### Response: def _set_community_map(self, v, load=False): """ Setter method for community_map, mapped from YANG variable /snmp_server/mib/community_map (list) If this variable is read-only (config: false) in the source YANG file, then _set_community_map is considered as a private method. Backends looking to populate this variable should do so via calling thisObj._set_community_map() directly. """ if hasattr(v, "_utype"): v = v._utype(v) try: t = YANGDynClass(v,base=YANGListType("community",community_map.community_map, yang_name="community-map", rest_name="community-map", parent=self, is_container='list', user_ordered=False, path_helper=self._path_helper, yang_keys='community', extensions={u'tailf-common': {u'info': u'community string to map', u'cli-suppress-mode': None, u'cli-suppress-list-no': None, u'cli-compact-syntax': None, u'cli-incomplete-command': None, u'callpoint': u'snmpcommunitymapping'}}), is_container='list', yang_name="community-map", rest_name="community-map", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'community string to map', u'cli-suppress-mode': None, u'cli-suppress-list-no': None, u'cli-compact-syntax': None, u'cli-incomplete-command': None, u'callpoint': u'snmpcommunitymapping'}}, namespace='urn:brocade.com:mgmt:brocade-snmp', defining_module='brocade-snmp', yang_type='list', is_config=True) except (TypeError, ValueError): raise ValueError({ 'error-string': """community_map must be of a type compatible with list""", 'defined-type': "list", 'generated-type': 
"""YANGDynClass(base=YANGListType("community",community_map.community_map, yang_name="community-map", rest_name="community-map", parent=self, is_container='list', user_ordered=False, path_helper=self._path_helper, yang_keys='community', extensions={u'tailf-common': {u'info': u'community string to map', u'cli-suppress-mode': None, u'cli-suppress-list-no': None, u'cli-compact-syntax': None, u'cli-incomplete-command': None, u'callpoint': u'snmpcommunitymapping'}}), is_container='list', yang_name="community-map", rest_name="community-map", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'community string to map', u'cli-suppress-mode': None, u'cli-suppress-list-no': None, u'cli-compact-syntax': None, u'cli-incomplete-command': None, u'callpoint': u'snmpcommunitymapping'}}, namespace='urn:brocade.com:mgmt:brocade-snmp', defining_module='brocade-snmp', yang_type='list', is_config=True)""", }) self.__community_map = t if hasattr(self, '_set'): self._set()
def has_next_question(self, assessment_section_id, item_id): """Tests if there is a next question following the given question ``Id``. arg: assessment_section_id (osid.id.Id): ``Id`` of the ``AssessmentSection`` arg: item_id (osid.id.Id): ``Id`` of the ``Item`` return: (boolean) - ``true`` if there is a next question, ``false`` otherwise raise: IllegalState - ``has_assessment_section_begun() is false or is_assessment_section_over() is true`` raise: NotFound - ``assessment_section_id`` or ``item_id`` is not found, or ``item_id`` not part of ``assessment_section_id`` raise: NullArgument - ``assessment_section_id`` or ``item_id`` is ``null`` raise: OperationFailed - unable to complete request raise: PermissionDenied - authorization failure occurred *compliance: mandatory -- This method must be implemented.* """ try: self.get_next_question(assessment_section_id, item_id) except errors.IllegalState: return False else: return True
Tests if there is a next question following the given question ``Id``. arg: assessment_section_id (osid.id.Id): ``Id`` of the ``AssessmentSection`` arg: item_id (osid.id.Id): ``Id`` of the ``Item`` return: (boolean) - ``true`` if there is a next question, ``false`` otherwise raise: IllegalState - ``has_assessment_section_begun() is false or is_assessment_section_over() is true`` raise: NotFound - ``assessment_section_id`` or ``item_id`` is not found, or ``item_id`` not part of ``assessment_section_id`` raise: NullArgument - ``assessment_section_id`` or ``item_id`` is ``null`` raise: OperationFailed - unable to complete request raise: PermissionDenied - authorization failure occurred *compliance: mandatory -- This method must be implemented.*
Below is the instruction that describes the task: ### Input: Tests if there is a next question following the given question ``Id``. arg: assessment_section_id (osid.id.Id): ``Id`` of the ``AssessmentSection`` arg: item_id (osid.id.Id): ``Id`` of the ``Item`` return: (boolean) - ``true`` if there is a next question, ``false`` otherwise raise: IllegalState - ``has_assessment_section_begun() is false or is_assessment_section_over() is true`` raise: NotFound - ``assessment_section_id`` or ``item_id`` is not found, or ``item_id`` not part of ``assessment_section_id`` raise: NullArgument - ``assessment_section_id`` or ``item_id`` is ``null`` raise: OperationFailed - unable to complete request raise: PermissionDenied - authorization failure occurred *compliance: mandatory -- This method must be implemented.* ### Response: def has_next_question(self, assessment_section_id, item_id): """Tests if there is a next question following the given question ``Id``. arg: assessment_section_id (osid.id.Id): ``Id`` of the ``AssessmentSection`` arg: item_id (osid.id.Id): ``Id`` of the ``Item`` return: (boolean) - ``true`` if there is a next question, ``false`` otherwise raise: IllegalState - ``has_assessment_section_begun() is false or is_assessment_section_over() is true`` raise: NotFound - ``assessment_section_id`` or ``item_id`` is not found, or ``item_id`` not part of ``assessment_section_id`` raise: NullArgument - ``assessment_section_id`` or ``item_id`` is ``null`` raise: OperationFailed - unable to complete request raise: PermissionDenied - authorization failure occurred *compliance: mandatory -- This method must be implemented.* """ try: self.get_next_question(assessment_section_id, item_id) except errors.IllegalState: return False else: return True
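Side note: the `has_next_question` entry above reduces an exception-raising getter to a boolean with `try/except/else`. A minimal standalone sketch of the same idiom, using a dict lookup as a stand-in for `get_next_question`:

```python
def has_entry(mapping, key):
    # Mirror of has_next_question: attempt the getter, map a "not found"
    # exception to False and a successful call to True.
    try:
        mapping[key]
    except KeyError:
        return False
    else:
        return True

print(has_entry({"a": 1}, "a"))  # True
print(has_entry({"a": 1}, "b"))  # False
```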
def paragraphs(quantity=2, separator='\n\n', wrap_start='', wrap_end='', html=False, sentences_quantity=3, as_list=False): """Random paragraphs.""" if html: wrap_start = '<p>' wrap_end = '</p>' separator = '\n\n' result = [] for i in xrange(0, quantity): result.append(wrap_start + sentences(sentences_quantity) + wrap_end) if as_list: return result else: return separator.join(result)
Random paragraphs.
Below is the instruction that describes the task: ### Input: Random paragraphs. ### Response: def paragraphs(quantity=2, separator='\n\n', wrap_start='', wrap_end='', html=False, sentences_quantity=3, as_list=False): """Random paragraphs.""" if html: wrap_start = '<p>' wrap_end = '</p>' separator = '\n\n' result = [] for i in xrange(0, quantity): result.append(wrap_start + sentences(sentences_quantity) + wrap_end) if as_list: return result else: return separator.join(result)
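Side note: the `paragraphs` entry above is Python 2 (`xrange`) and depends on an external `sentences` helper. A hedged Python 3 sketch of the same wrap-and-join logic, with a hypothetical stand-in for `sentences`:

```python
def sentences(n):
    # Hypothetical stand-in for the real random-sentence generator.
    return " ".join("Lorem ipsum dolor." for _ in range(n))

def paragraphs(quantity=2, separator="\n\n", wrap_start="", wrap_end="",
               html=False, sentences_quantity=3, as_list=False):
    # html mode forces <p>...</p> wrapping, as in the original entry.
    if html:
        wrap_start, wrap_end, separator = "<p>", "</p>", "\n\n"
    result = [wrap_start + sentences(sentences_quantity) + wrap_end
              for _ in range(quantity)]
    return result if as_list else separator.join(result)

print(paragraphs(quantity=2, html=True, as_list=True))
```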
def layers(self) -> Circuit: """Split DAGCircuit into layers, where the operations within each layer operate on different qubits (and therefore commute). Returns: A Circuit of Circuits, one Circuit per layer """ node_depth: Dict[Qubit, int] = {} G = self.graph for elem in self: depth = np.max(list(node_depth.get(prev, -1) + 1 for prev in G.predecessors(elem))) node_depth[elem] = depth depth_nodes = invert_map(node_depth, one_to_one=False) layers = [] for nd in range(0, self.depth()): elements = depth_nodes[nd] circ = Circuit(list(elements)) layers.append(circ) return Circuit(layers)
Split DAGCircuit into layers, where the operations within each layer operate on different qubits (and therefore commute). Returns: A Circuit of Circuits, one Circuit per layer
Below is the instruction that describes the task: ### Input: Split DAGCircuit into layers, where the operations within each layer operate on different qubits (and therefore commute). Returns: A Circuit of Circuits, one Circuit per layer ### Response: def layers(self) -> Circuit: """Split DAGCircuit into layers, where the operations within each layer operate on different qubits (and therefore commute). Returns: A Circuit of Circuits, one Circuit per layer """ node_depth: Dict[Qubit, int] = {} G = self.graph for elem in self: depth = np.max(list(node_depth.get(prev, -1) + 1 for prev in G.predecessors(elem))) node_depth[elem] = depth depth_nodes = invert_map(node_depth, one_to_one=False) layers = [] for nd in range(0, self.depth()): elements = depth_nodes[nd] circ = Circuit(list(elements)) layers.append(circ) return Circuit(layers)
def calc_list_average(l): """ Calculates the average value of a list of numbers Returns a float """ total = 0.0 for value in l: total += value return total / len(l)
Calculates the average value of a list of numbers Returns a float
Below is the instruction that describes the task: ### Input: Calculates the average value of a list of numbers Returns a float ### Response: def calc_list_average(l): """ Calculates the average value of a list of numbers Returns a float """ total = 0.0 for value in l: total += value return total / len(l)
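Side note: a quick standalone check of the `calc_list_average` logic above, restated verbatim so it runs on its own:

```python
def calc_list_average(l):
    """Average of a list of numbers, returned as a float."""
    total = 0.0
    for value in l:
        total += value
    return total / len(l)

print(calc_list_average([1, 2, 3, 4]))  # 2.5
print(calc_list_average([10]))          # 10.0
```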
def format_pattrs(pattrs: List['api.PrettyAttribute']) -> str: """Generates repr string given a list of pattrs.""" output = [] pattrs.sort( key=lambda x: ( _FORMATTER[x.display_group].display_index, x.display_group, x.name, ) ) for display_group, grouped_pattrs in groupby(pattrs, lambda x: x.display_group): output.append( _FORMATTER[display_group].formatter(display_group, grouped_pattrs) ) return '\n'.join(output)
Generates repr string given a list of pattrs.
Below is the instruction that describes the task: ### Input: Generates repr string given a list of pattrs. ### Response: def format_pattrs(pattrs: List['api.PrettyAttribute']) -> str: """Generates repr string given a list of pattrs.""" output = [] pattrs.sort( key=lambda x: ( _FORMATTER[x.display_group].display_index, x.display_group, x.name, ) ) for display_group, grouped_pattrs in groupby(pattrs, lambda x: x.display_group): output.append( _FORMATTER[display_group].formatter(display_group, grouped_pattrs) ) return '\n'.join(output)
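Side note: `format_pattrs` above relies on a subtlety of `itertools.groupby` — it only merges *adjacent* equal keys, which is why the list is sorted by `display_group` first. A self-contained illustration:

```python
from itertools import groupby

items = [("b", 2), ("a", 1), ("b", 3), ("a", 4)]

# Without sorting, groupby would yield four groups: "b", "a", "b", "a".
items.sort(key=lambda pair: pair[0])
grouped = {key: [v for _, v in group]
           for key, group in groupby(items, key=lambda pair: pair[0])}
print(grouped)  # {'a': [1, 4], 'b': [2, 3]}
```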
def process_rewards(self, rewards): """Clips, rounds, and changes to integer type. Args: rewards: numpy array of raw (float) rewards. Returns: processed_rewards: numpy array of np.int64 """ min_reward, max_reward = self.reward_range # Clips at min and max reward. rewards = np.clip(rewards, min_reward, max_reward) # Round to (nearest) int and convert to integral type. rewards = np.around(rewards, decimals=0).astype(np.int64) return rewards
Clips, rounds, and changes to integer type. Args: rewards: numpy array of raw (float) rewards. Returns: processed_rewards: numpy array of np.int64
Below is the instruction that describes the task: ### Input: Clips, rounds, and changes to integer type. Args: rewards: numpy array of raw (float) rewards. Returns: processed_rewards: numpy array of np.int64 ### Response: def process_rewards(self, rewards): """Clips, rounds, and changes to integer type. Args: rewards: numpy array of raw (float) rewards. Returns: processed_rewards: numpy array of np.int64 """ min_reward, max_reward = self.reward_range # Clips at min and max reward. rewards = np.clip(rewards, min_reward, max_reward) # Round to (nearest) int and convert to integral type. rewards = np.around(rewards, decimals=0).astype(np.int64) return rewards
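Side note: the clip-then-round step in `process_rewards` above can be sketched without NumPy for a single scalar (note that Python's `round`, like `np.around`, rounds halves to even):

```python
def process_reward(reward, min_reward=-1.0, max_reward=1.0):
    # Clip into [min_reward, max_reward], then round to the nearest integer.
    clipped = max(min_reward, min(max_reward, reward))
    return int(round(clipped))

print(process_reward(3.7))   # 1  (clipped to max_reward first)
print(process_reward(-0.4))  # 0
print(process_reward(0.8))   # 1
```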
def import_rsakey_from_public_pem(pem, scheme='rsassa-pss-sha256'): """ <Purpose> Generate an RSA key object from 'pem'. In addition, a keyid identifier for the RSA key is generated. The object returned conforms to 'securesystemslib.formats.RSAKEY_SCHEMA' and has the form: {'keytype': 'rsa', 'keyid': keyid, 'keyval': {'public': '-----BEGIN PUBLIC KEY----- ...', 'private': ''}} The public portion of the RSA key is a string in PEM format. >>> rsa_key = generate_rsa_key() >>> public = rsa_key['keyval']['public'] >>> rsa_key['keyval']['private'] = '' >>> rsa_key2 = import_rsakey_from_public_pem(public) >>> securesystemslib.formats.RSAKEY_SCHEMA.matches(rsa_key) True >>> securesystemslib.formats.RSAKEY_SCHEMA.matches(rsa_key2) True <Arguments> pem: A string in PEM format (it should contain a public RSA key). <Exceptions> securesystemslib.exceptions.FormatError, if 'pem' is improperly formatted. <Side Effects> Only the public portion of the PEM is extracted. Leading or trailing whitespace is not included in the PEM string stored in the rsakey object returned. <Returns> A dictionary containing the RSA keys and other identifying information. Conforms to 'securesystemslib.formats.RSAKEY_SCHEMA'. """ # Does 'pem' have the correct format? # This check will ensure arguments has the appropriate number # of objects and object types, and that all dict keys are properly named. # Raise 'securesystemslib.exceptions.FormatError' if the check fails. securesystemslib.formats.PEMRSA_SCHEMA.check_match(pem) # Does 'scheme' have the correct format? securesystemslib.formats.RSA_SCHEME_SCHEMA.check_match(scheme) # Ensure the PEM string has a public header and footer. Although a simple # validation of 'pem' is performed here, a fully valid PEM string is needed # later to successfully verify signatures. Performing stricter validation of # PEMs are left to the external libraries that use 'pem'. 
if is_pem_public(pem): public_pem = extract_pem(pem, private_pem=False) else: raise securesystemslib.exceptions.FormatError('Invalid public' ' pem: ' + repr(pem)) # Begin building the RSA key dictionary. rsakey_dict = {} keytype = 'rsa' # Generate the keyid of the RSA key. 'key_value' corresponds to the # 'keyval' entry of the 'RSAKEY_SCHEMA' dictionary. The private key # information is not included in the generation of the 'keyid' identifier. # Convert any '\r\n' (e.g., Windows) newline characters to '\n' so that a # consistent keyid is generated. key_value = {'public': public_pem.replace('\r\n', '\n'), 'private': ''} keyid = _get_keyid(keytype, scheme, key_value) rsakey_dict['keytype'] = keytype rsakey_dict['scheme'] = scheme rsakey_dict['keyid'] = keyid rsakey_dict['keyval'] = key_value # Add "keyid_hash_algorithms" so that equal RSA keys with different keyids # can be associated using supported keyid_hash_algorithms. rsakey_dict['keyid_hash_algorithms'] = \ securesystemslib.settings.HASH_ALGORITHMS return rsakey_dict
<Purpose> Generate an RSA key object from 'pem'. In addition, a keyid identifier for the RSA key is generated. The object returned conforms to 'securesystemslib.formats.RSAKEY_SCHEMA' and has the form: {'keytype': 'rsa', 'keyid': keyid, 'keyval': {'public': '-----BEGIN PUBLIC KEY----- ...', 'private': ''}} The public portion of the RSA key is a string in PEM format. >>> rsa_key = generate_rsa_key() >>> public = rsa_key['keyval']['public'] >>> rsa_key['keyval']['private'] = '' >>> rsa_key2 = import_rsakey_from_public_pem(public) >>> securesystemslib.formats.RSAKEY_SCHEMA.matches(rsa_key) True >>> securesystemslib.formats.RSAKEY_SCHEMA.matches(rsa_key2) True <Arguments> pem: A string in PEM format (it should contain a public RSA key). <Exceptions> securesystemslib.exceptions.FormatError, if 'pem' is improperly formatted. <Side Effects> Only the public portion of the PEM is extracted. Leading or trailing whitespace is not included in the PEM string stored in the rsakey object returned. <Returns> A dictionary containing the RSA keys and other identifying information. Conforms to 'securesystemslib.formats.RSAKEY_SCHEMA'.
Below is the instruction that describes the task: ### Input: <Purpose>
  Generate an RSA key object from 'pem'.  In addition, a keyid identifier for
  the RSA key is generated.  The object returned conforms to
  'securesystemslib.formats.RSAKEY_SCHEMA' and has the form:

  {'keytype': 'rsa',
   'keyid': keyid,
   'keyval': {'public': '-----BEGIN PUBLIC KEY----- ...',
              'private': ''}}

  The public portion of the RSA key is a string in PEM format.

  >>> rsa_key = generate_rsa_key()
  >>> public = rsa_key['keyval']['public']
  >>> rsa_key['keyval']['private'] = ''
  >>> rsa_key2 = import_rsakey_from_public_pem(public)
  >>> securesystemslib.formats.RSAKEY_SCHEMA.matches(rsa_key)
  True
  >>> securesystemslib.formats.RSAKEY_SCHEMA.matches(rsa_key2)
  True

<Arguments>
  pem:
    A string in PEM format (it should contain a public RSA key).

<Exceptions>
  securesystemslib.exceptions.FormatError, if 'pem' is improperly formatted.

<Side Effects>
  Only the public portion of the PEM is extracted.  Leading or trailing
  whitespace is not included in the PEM string stored in the rsakey object
  returned.

<Returns>
  A dictionary containing the RSA keys and other identifying information.
  Conforms to 'securesystemslib.formats.RSAKEY_SCHEMA'.
### Response: def import_rsakey_from_public_pem(pem, scheme='rsassa-pss-sha256'):
  """
  <Purpose>
    Generate an RSA key object from 'pem'.  In addition, a keyid identifier for
    the RSA key is generated.  The object returned conforms to
    'securesystemslib.formats.RSAKEY_SCHEMA' and has the form:

    {'keytype': 'rsa',
     'keyid': keyid,
     'keyval': {'public': '-----BEGIN PUBLIC KEY----- ...',
                'private': ''}}

    The public portion of the RSA key is a string in PEM format.
    >>> rsa_key = generate_rsa_key()
    >>> public = rsa_key['keyval']['public']
    >>> rsa_key['keyval']['private'] = ''
    >>> rsa_key2 = import_rsakey_from_public_pem(public)
    >>> securesystemslib.formats.RSAKEY_SCHEMA.matches(rsa_key)
    True
    >>> securesystemslib.formats.RSAKEY_SCHEMA.matches(rsa_key2)
    True

  <Arguments>
    pem:
      A string in PEM format (it should contain a public RSA key).

  <Exceptions>
    securesystemslib.exceptions.FormatError, if 'pem' is improperly formatted.

  <Side Effects>
    Only the public portion of the PEM is extracted.  Leading or trailing
    whitespace is not included in the PEM string stored in the rsakey object
    returned.

  <Returns>
    A dictionary containing the RSA keys and other identifying information.
    Conforms to 'securesystemslib.formats.RSAKEY_SCHEMA'.
  """

  # Does 'pem' have the correct format?
  # This check will ensure the arguments have the appropriate number
  # of objects and object types, and that all dict keys are properly named.
  # Raise 'securesystemslib.exceptions.FormatError' if the check fails.
  securesystemslib.formats.PEMRSA_SCHEMA.check_match(pem)

  # Does 'scheme' have the correct format?
  securesystemslib.formats.RSA_SCHEME_SCHEMA.check_match(scheme)

  # Ensure the PEM string has a public header and footer.  Although a simple
  # validation of 'pem' is performed here, a fully valid PEM string is needed
  # later to successfully verify signatures.  Performing stricter validation of
  # PEMs is left to the external libraries that use 'pem'.

  if is_pem_public(pem):
    public_pem = extract_pem(pem, private_pem=False)

  else:
    raise securesystemslib.exceptions.FormatError('Invalid public'
        ' pem: ' + repr(pem))

  # Begin building the RSA key dictionary.
  rsakey_dict = {}
  keytype = 'rsa'

  # Generate the keyid of the RSA key.  'key_value' corresponds to the
  # 'keyval' entry of the 'RSAKEY_SCHEMA' dictionary.  The private key
  # information is not included in the generation of the 'keyid' identifier.
# Convert any '\r\n' (e.g., Windows) newline characters to '\n' so that a # consistent keyid is generated. key_value = {'public': public_pem.replace('\r\n', '\n'), 'private': ''} keyid = _get_keyid(keytype, scheme, key_value) rsakey_dict['keytype'] = keytype rsakey_dict['scheme'] = scheme rsakey_dict['keyid'] = keyid rsakey_dict['keyval'] = key_value # Add "keyid_hash_algorithms" so that equal RSA keys with different keyids # can be associated using supported keyid_hash_algorithms. rsakey_dict['keyid_hash_algorithms'] = \ securesystemslib.settings.HASH_ALGORITHMS return rsakey_dict
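The keyid itself comes from the private helper _get_keyid, which is not shown here. The sketch below is one plausible construction (SHA-256 over canonical JSON of the key metadata; the exact canonicalization used by securesystemslib may differ), mainly to illustrate why the '\r\n' → '\n' normalization matters: without it, the same key imported on Windows and Linux would get different keyids.

```python
import hashlib
import json

def sketch_keyid(keytype, scheme, public_pem):
    """Hypothetical keyid: SHA-256 of canonical JSON key metadata."""
    # Normalize Windows newlines so identical keys hash identically.
    key_value = {'public': public_pem.replace('\r\n', '\n'), 'private': ''}
    meta = {'keytype': keytype, 'scheme': scheme, 'keyval': key_value}
    canonical = json.dumps(meta, sort_keys=True, separators=(',', ':'))
    return hashlib.sha256(canonical.encode('utf-8')).hexdigest()

unix_pem = '-----BEGIN PUBLIC KEY-----\nMIIB...\n-----END PUBLIC KEY-----'
windows_pem = unix_pem.replace('\n', '\r\n')
# Both line-ending variants yield the same keyid.
assert sketch_keyid('rsa', 'rsassa-pss-sha256', unix_pem) == \
    sketch_keyid('rsa', 'rsassa-pss-sha256', windows_pem)
```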
def get_asset_lookup_session_for_repository(self, repository_id=None, proxy=None): """Gets the ``OsidSession`` associated with the asset lookup service for the given repository. arg: repository_id (osid.id.Id): the ``Id`` of the repository arg: proxy (osid.proxy.Proxy): a proxy return: (osid.repository.AssetLookupSession) - an ``AssetLookupSession`` raise: NotFound - ``repository_id`` not found raise: NullArgument - ``repository_id`` or ``proxy`` is ``null`` raise: OperationFailed - ``unable to complete request`` raise: Unimplemented - ``supports_asset_lookup()`` or ``supports_visible_federation()`` is ``false`` *compliance: optional -- This method must be implemented if ``supports_asset_lookup()`` and ``supports_visible_federation()`` are ``true``.* """ return AssetLookupSession( self._provider_manager.get_asset_lookup_session_for_repository(repository_id, proxy), self._config_map)
Gets the ``OsidSession`` associated with the asset lookup service for the given repository. arg: repository_id (osid.id.Id): the ``Id`` of the repository arg: proxy (osid.proxy.Proxy): a proxy return: (osid.repository.AssetLookupSession) - an ``AssetLookupSession`` raise: NotFound - ``repository_id`` not found raise: NullArgument - ``repository_id`` or ``proxy`` is ``null`` raise: OperationFailed - ``unable to complete request`` raise: Unimplemented - ``supports_asset_lookup()`` or ``supports_visible_federation()`` is ``false`` *compliance: optional -- This method must be implemented if ``supports_asset_lookup()`` and ``supports_visible_federation()`` are ``true``.*
Below is the instruction that describes the task: ### Input: Gets the ``OsidSession`` associated with the asset lookup service for the given repository.

arg:    repository_id (osid.id.Id): the ``Id`` of the repository
arg:    proxy (osid.proxy.Proxy): a proxy
return: (osid.repository.AssetLookupSession) - an ``AssetLookupSession``
raise:  NotFound - ``repository_id`` not found
raise:  NullArgument - ``repository_id`` or ``proxy`` is ``null``
raise:  OperationFailed - ``unable to complete request``
raise:  Unimplemented - ``supports_asset_lookup()`` or ``supports_visible_federation()`` is ``false``
*compliance: optional -- This method must be implemented if ``supports_asset_lookup()`` and ``supports_visible_federation()`` are ``true``.*
### Response: def get_asset_lookup_session_for_repository(self, repository_id=None, proxy=None):
    """Gets the ``OsidSession`` associated with the asset lookup service for the given repository.

    arg:    repository_id (osid.id.Id): the ``Id`` of the repository
    arg:    proxy (osid.proxy.Proxy): a proxy
    return: (osid.repository.AssetLookupSession) - an
            ``AssetLookupSession``
    raise:  NotFound - ``repository_id`` not found
    raise:  NullArgument - ``repository_id`` or ``proxy`` is ``null``
    raise:  OperationFailed - ``unable to complete request``
    raise:  Unimplemented - ``supports_asset_lookup()`` or
            ``supports_visible_federation()`` is ``false``
    *compliance: optional -- This method must be implemented if
    ``supports_asset_lookup()`` and ``supports_visible_federation()``
    are ``true``.*

    """
    return AssetLookupSession(
        self._provider_manager.get_asset_lookup_session_for_repository(repository_id, proxy),
        self._config_map)
def to_python(self, value): """ Convert a string from the database to a Python value. """ if value == "": return None try: if isinstance(value, six.string_types): return self.deserializer(value) elif isinstance(value, bytes): return self.deserializer(value.decode('utf8')) except ValueError: pass return value
Convert a string from the database to a Python value.
Below is the instruction that describes the task: ### Input: Convert a string from the database to a Python value.
### Response: def to_python(self, value):
    """
    Convert a string from the database to a Python value.
    """
    if value == "":
        return None
    try:
        if isinstance(value, six.string_types):
            return self.deserializer(value)
        elif isinstance(value, bytes):
            return self.deserializer(value.decode('utf8'))
    except ValueError:
        pass
    return value
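The same fallback logic in self-contained form, with json.loads standing in for the field's configured deserializer and a plain str check replacing six.string_types (an assumption made for brevity; the original also supports Python 2). Note that json.JSONDecodeError subclasses ValueError, so the except clause still catches it:

```python
import json

def to_python(value, deserializer=json.loads):
    """Deserialize DB strings; fall back to the raw value on failure."""
    if value == "":
        return None
    try:
        if isinstance(value, str):
            return deserializer(value)
        elif isinstance(value, bytes):
            return deserializer(value.decode('utf8'))
    except ValueError:  # not serialized data: assume already a Python value
        pass
    return value

print(to_python('{"a": 1}'))   # -> {'a': 1}
print(to_python('not json'))   # -> not json
```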
def check(self, var): """Return True if the variable matches the specified type.""" return (isinstance(var, _num_type) and (self._lower_bound is None or var >= self._lower_bound) and (self._upper_bound is None or var <= self._upper_bound))
Return True if the variable matches the specified type.
Below is the instruction that describes the task: ### Input: Return True if the variable matches the specified type.
### Response: def check(self, var):
    """Return True if the variable matches the specified type."""
    return (isinstance(var, _num_type) and
            (self._lower_bound is None or var >= self._lower_bound) and
            (self._upper_bound is None or var <= self._upper_bound))
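A self-contained sketch of the same range check. NumRange is a hypothetical wrapper class, and _num_type is assumed to be (int, float); a None bound means that side of the interval is open:

```python
class NumRange:
    """Match numeric values inside an optional [lower, upper] interval."""

    def __init__(self, lower=None, upper=None):
        self._lower_bound = lower
        self._upper_bound = upper

    def check(self, var):
        """Return True if the variable matches the specified type."""
        return (isinstance(var, (int, float)) and
                (self._lower_bound is None or var >= self._lower_bound) and
                (self._upper_bound is None or var <= self._upper_bound))

r = NumRange(lower=0, upper=10)
print(r.check(5), r.check(10.0), r.check(-1), r.check("5"))
# -> True True False False
```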
def walk_oid(self, oid): """Get a list of SNMP varbinds in response to a walk for oid. Each varbind in response list has a tag, iid, val and type attribute.""" var = netsnmp.Varbind(oid) varlist = netsnmp.VarList(var) data = self.walk(varlist) if len(data) == 0: raise SnmpException("SNMP walk response incomplete") return varlist
Get a list of SNMP varbinds in response to a walk for oid. Each varbind in response list has a tag, iid, val and type attribute.
Below is the instruction that describes the task: ### Input: Get a list of SNMP varbinds in response to a walk for oid. Each varbind in response list has a tag, iid, val and type attribute.
### Response: def walk_oid(self, oid):
    """Get a list of SNMP varbinds in response to a walk for oid. Each varbind
    in response list has a tag, iid, val and type attribute."""
    var = netsnmp.Varbind(oid)
    varlist = netsnmp.VarList(var)
    data = self.walk(varlist)
    if len(data) == 0:
        raise SnmpException("SNMP walk response incomplete")
    return varlist
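netsnmp aside, the guard above reduces to "walk the subtree, then fail loudly on an empty result". A sketch with stub objects standing in for the netsnmp session (SnmpException and StubSession are hypothetical stand-ins for illustration, not part of any library):

```python
class SnmpException(Exception):
    pass

class StubSession:
    """Stand-in for a netsnmp session with canned walk results."""

    def __init__(self, results):
        self._results = results  # oid -> tuple of values

    def walk_oid(self, oid):
        data = self._results.get(oid, ())
        if len(data) == 0:  # mirror the incomplete-response guard above
            raise SnmpException("SNMP walk response incomplete")
        return data

s = StubSession({'ifDescr': ('eth0', 'lo')})
print(s.walk_oid('ifDescr'))  # -> ('eth0', 'lo')
```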
def convert(area_um, deform, emodulus, channel_width_in, channel_width_out,
            flow_rate_in, flow_rate_out, viscosity_in, viscosity_out,
            inplace=False):
    """convert area-deformation-emodulus triplet

    The conversion formula is described in :cite:`Mietke2015`.

    Parameters
    ----------
    area_um: ndarray
        Convex cell area [µm²]
    deform: ndarray
        Deformation
    emodulus: ndarray
        Young's Modulus [kPa]
    channel_width_in: float
        Original channel width [µm]
    channel_width_out: float
        Target channel width [µm]
    flow_rate_in: float
        Original flow rate [µl/s]
    flow_rate_out: float
        Target flow rate [µl/s]
    viscosity_in: float
        Original viscosity [mPa*s]
    viscosity_out: float
        Target viscosity [mPa*s]
    inplace: bool
        If True, override input arrays with corrected data

    Returns
    -------
    area_um_corr: ndarray
        Corrected cell area [µm²]
    deform_corr: ndarray
        Deformation (a copy if `inplace` is False)
    emodulus_corr: ndarray
        Corrected emodulus [kPa]
    """
    copy = not inplace
    # make sure area_um_corr is not an integer array
    area_um_corr = np.array(area_um, dtype=float, copy=copy)
    deform_corr = np.array(deform, copy=copy)
    emodulus_corr = np.array(emodulus, copy=copy)

    if channel_width_in != channel_width_out:
        area_um_corr *= (channel_width_out / channel_width_in)**2

    if (flow_rate_in != flow_rate_out or
            viscosity_in != viscosity_out or
            channel_width_in != channel_width_out):
        emodulus_corr *= (flow_rate_out / flow_rate_in) \
            * (viscosity_out / viscosity_in) \
            * (channel_width_in / channel_width_out)**3

    return area_um_corr, deform_corr, emodulus_corr
convert area-deformation-emodulus triplet

The conversion formula is described in :cite:`Mietke2015`.

Parameters
----------
area_um: ndarray
    Convex cell area [µm²]
deform: ndarray
    Deformation
emodulus: ndarray
    Young's Modulus [kPa]
channel_width_in: float
    Original channel width [µm]
channel_width_out: float
    Target channel width [µm]
flow_rate_in: float
    Original flow rate [µl/s]
flow_rate_out: float
    Target flow rate [µl/s]
viscosity_in: float
    Original viscosity [mPa*s]
viscosity_out: float
    Target viscosity [mPa*s]
inplace: bool
    If True, override input arrays with corrected data

Returns
-------
area_um_corr: ndarray
    Corrected cell area [µm²]
deform_corr: ndarray
    Deformation (a copy if `inplace` is False)
emodulus_corr: ndarray
    Corrected emodulus [kPa]
Below is the instruction that describes the task: ### Input: convert area-deformation-emodulus triplet

The conversion formula is described in :cite:`Mietke2015`.

Parameters
----------
area_um: ndarray
    Convex cell area [µm²]
deform: ndarray
    Deformation
emodulus: ndarray
    Young's Modulus [kPa]
channel_width_in: float
    Original channel width [µm]
channel_width_out: float
    Target channel width [µm]
flow_rate_in: float
    Original flow rate [µl/s]
flow_rate_out: float
    Target flow rate [µl/s]
viscosity_in: float
    Original viscosity [mPa*s]
viscosity_out: float
    Target viscosity [mPa*s]
inplace: bool
    If True, override input arrays with corrected data

Returns
-------
area_um_corr: ndarray
    Corrected cell area [µm²]
deform_corr: ndarray
    Deformation (a copy if `inplace` is False)
emodulus_corr: ndarray
    Corrected emodulus [kPa]
### Response: def convert(area_um, deform, emodulus, channel_width_in, channel_width_out,
            flow_rate_in, flow_rate_out, viscosity_in, viscosity_out,
            inplace=False):
    """convert area-deformation-emodulus triplet

    The conversion formula is described in :cite:`Mietke2015`.
    Parameters
    ----------
    area_um: ndarray
        Convex cell area [µm²]
    deform: ndarray
        Deformation
    emodulus: ndarray
        Young's Modulus [kPa]
    channel_width_in: float
        Original channel width [µm]
    channel_width_out: float
        Target channel width [µm]
    flow_rate_in: float
        Original flow rate [µl/s]
    flow_rate_out: float
        Target flow rate [µl/s]
    viscosity_in: float
        Original viscosity [mPa*s]
    viscosity_out: float
        Target viscosity [mPa*s]
    inplace: bool
        If True, override input arrays with corrected data

    Returns
    -------
    area_um_corr: ndarray
        Corrected cell area [µm²]
    deform_corr: ndarray
        Deformation (a copy if `inplace` is False)
    emodulus_corr: ndarray
        Corrected emodulus [kPa]
    """
    copy = not inplace
    # make sure area_um_corr is not an integer array
    area_um_corr = np.array(area_um, dtype=float, copy=copy)
    deform_corr = np.array(deform, copy=copy)
    emodulus_corr = np.array(emodulus, copy=copy)

    if channel_width_in != channel_width_out:
        area_um_corr *= (channel_width_out / channel_width_in)**2

    if (flow_rate_in != flow_rate_out or
            viscosity_in != viscosity_out or
            channel_width_in != channel_width_out):
        emodulus_corr *= (flow_rate_out / flow_rate_in) \
            * (viscosity_out / viscosity_in) \
            * (channel_width_in / channel_width_out)**3

    return area_um_corr, deform_corr, emodulus_corr