def validate(self, **kwargs):
    """Validate device group state among given devices.

    :param kwargs: dict -- keyword args of device group information
    :raises: UnexpectedDeviceGroupType, UnexpectedDeviceGroupDevices
    """
    self._set_attributes(**kwargs)
    self._check_type()
    self.dev_group_uri_res = self._get_device_group(self.devices[0])
    if self.dev_group_uri_res.type != self.type:
        msg = 'Device group type found: %r does not match expected ' \
              'device group type: %r' % (
                  self.dev_group_uri_res.type, self.type
              )
        raise UnexpectedDeviceGroupType(msg)
    queried_device_names = self._get_device_names_in_group()
    given_device_names = []
    for device in self.devices:
        device_name = get_device_info(device).name
        given_device_names.append(device_name)
    if sorted(queried_device_names) != sorted(given_device_names):
        msg = 'Given devices do not match queried devices.'
        raise UnexpectedDeviceGroupDevices(msg)
    self.ensure_all_devices_in_sync()
def grid(self, EdgeAttribute=None, network=None, NodeAttribute=None,
         nodeHorizontalSpacing=None, nodeList=None, nodeVerticalSpacing=None,
         verbose=None):
    """Execute the Grid Layout on a network.

    :param EdgeAttribute (string, optional): The name of the edge column
        containing numeric values that will be used as weights in the layout
        algorithm. Only columns containing numeric values are shown
    :param network (string, optional): Specifies a network by name, or by
        SUID if the prefix SUID: is used. The keyword CURRENT, or a blank
        value can also be used to specify the current network.
    :param NodeAttribute (string, optional): The name of the node column
        containing numeric values that will be used as weights in the layout
        algorithm. Only columns containing numeric values are shown
    :param nodeHorizontalSpacing (string, optional): Horizontal spacing
        between nodes, in numeric value
    :param nodeList (string, optional): Specifies a list of nodes. The
        keywords all, selected, or unselected can be used to specify nodes
        by their selection state. The pattern COLUMN:VALUE sets this
        parameter to any rows that contain the specified column value; if
        the COLUMN prefix is not used, the NAME column is matched by
        default. A list of COLUMN:VALUE pairs of the format
        COLUMN1:VALUE1,COLUMN2:VALUE2,... can be used to match multiple
        values.
    :param nodeVerticalSpacing (string, optional): Vertical spacing between
        nodes, in numeric value
    """
    network = check_network(self, network, verbose=verbose)
    PARAMS = set_param(
        ['EdgeAttribute', 'network', 'NodeAttribute',
         'nodeHorizontalSpacing', 'nodeList', 'nodeVerticalSpacing'],
        [EdgeAttribute, network, NodeAttribute, nodeHorizontalSpacing,
         nodeList, nodeVerticalSpacing])
    response = api(url=self.__url + "/grid", PARAMS=PARAMS, method="POST",
                   verbose=verbose)
    return response
def ReadConflict(self, conflict_link, options=None):
    """Reads a conflict.

    :param str conflict_link: The link to the conflict.
    :param dict options:
    :return: The read Conflict.
    :rtype: dict
    """
    if options is None:
        options = {}
    path = base.GetPathFromLink(conflict_link)
    conflict_id = base.GetResourceIdOrFullNameFromLink(conflict_link)
    return self.Read(path, 'conflicts', conflict_id, None, options)
def on_hazard_exposure_bookmark_toggled(self, enabled):
    """Update the UI when the user toggles the bookmarks radiobutton.

    :param enabled: The status of the radiobutton.
    :type enabled: bool
    """
    if enabled:
        self.bookmarks_index_changed()
    else:
        self.ok_button.setEnabled(True)
    self._populate_coordinates()
async def set_power(self, value: bool):
    """Toggle the device on and off."""
    if value:
        status = "active"
    else:
        status = "off"
    # TODO WoL works when quickboot is not enabled
    return await self.services["system"]["setPowerStatus"](status=status)
def mute(self):
    """Mutes all existing communication; most notably, the device will no
    longer generate a 13.56 MHz carrier signal when operating as Initiator.
    """
    fname = "mute"
    cname = self.__class__.__module__ + '.' + self.__class__.__name__
    raise NotImplementedError("%s.%s() is required" % (cname, fname))
def params(self) -> T.List[DocstringParam]:
    """Return parameters indicated in docstring."""
    return [
        DocstringParam.from_meta(meta)
        for meta in self.meta
        if meta.args[0] in {
            "param", "parameter", "arg", "argument", "key", "keyword"
        }
    ]
def getInstalledOfferings(self):
    """
    Return a mapping from the name of each L{InstalledOffering} in
    C{self._siteStore} to the corresponding L{IOffering} plugins.
    """
    d = {}
    installed = self._siteStore.query(InstalledOffering)
    for installation in installed:
        offering = installation.getOffering()
        if offering is not None:
            d[offering.name] = offering
    return d
def html_authors(self):
    """HTML5-formatted authors (`list` of `str`)."""
    return self.format_authors(format='html5', deparagraph=True,
                               mathjax=False, smart=True)
def unwrap_raw(content):
    """Unwraps the callback and returns the raw content."""
    starting_symbol = get_start_symbol(content)
    ending_symbol = ']' if starting_symbol == '[' else '}'
    start = content.find(starting_symbol, 0)
    end = content.rfind(ending_symbol)
    return content[start:end + 1]
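`unwrap_raw` depends on a `get_start_symbol` helper that is not shown. A minimal self-contained sketch, assuming `get_start_symbol` simply returns whichever of `[` or `{` appears first, shows how the function strips a JSONP-style callback wrapper:

```python
def get_start_symbol(content):
    # Hypothetical helper: return whichever of '[' or '{' appears first.
    indexes = [content.find(s) for s in ('[', '{') if s in content]
    return content[min(indexes)]

def unwrap_raw(content):
    """Unwrap a callback wrapper and return the raw JSON content."""
    starting_symbol = get_start_symbol(content)
    ending_symbol = ']' if starting_symbol == '[' else '}'
    start = content.find(starting_symbol, 0)
    end = content.rfind(ending_symbol)
    return content[start:end + 1]

# Strips a JSONP-style callback wrapper:
print(unwrap_raw('callback({"a": 1});'))  # {"a": 1}
```

Because it pairs the first opening symbol with the last matching closing symbol, the wrapper's trailing `);` is discarded regardless of the callback name.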
def _get_all_positional_parameter_names(fn):
    """Returns the names of all positional arguments to the given function."""
    arg_spec = _get_cached_arg_spec(fn)
    args = arg_spec.args
    if arg_spec.defaults:
        args = args[:-len(arg_spec.defaults)]
    return args
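The `_get_cached_arg_spec` helper is not shown; it presumably memoizes `inspect.getfullargspec`. A sketch of the same logic using `inspect` directly illustrates why the slice works — arguments with defaults always occupy the tail of `args`:

```python
import inspect

def get_positional_parameter_names(fn):
    """Sketch of the same idea, without the caching helper."""
    arg_spec = inspect.getfullargspec(fn)
    args = arg_spec.args
    if arg_spec.defaults:
        # Arguments with defaults occupy the tail of `args`; drop them.
        args = args[:-len(arg_spec.defaults)]
    return args

def example(a, b, c=1, d=2):
    pass

print(get_positional_parameter_names(example))  # ['a', 'b']
```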
def supernodes(self, reordered=True):
    """Returns a list of supernode sets."""
    if reordered:
        return [list(self.snode[self.snptr[k]:self.snptr[k + 1]])
                for k in range(self.Nsn)]
    else:
        return [list(self.__p[self.snode[self.snptr[k]:self.snptr[k + 1]]])
                for k in range(self.Nsn)]
def run(self, circuit):
    """Run all the passes on a QuantumCircuit

    Args:
        circuit (QuantumCircuit): circuit to transform via all the
            registered passes

    Returns:
        QuantumCircuit: Transformed circuit.
    """
    name = circuit.name
    dag = circuit_to_dag(circuit)
    del circuit
    for passset in self.working_list:
        for pass_ in passset:
            dag = self._do_pass(pass_, dag, passset.options)
    circuit = dag_to_circuit(dag)
    circuit.name = name
    return circuit
def start(self):
    """Start the worker processes.

    TODO: Move task receiving to a thread
    """
    start = time.time()
    self._kill_event = threading.Event()
    self.procs = {}
    for worker_id in range(self.worker_count):
        p = multiprocessing.Process(target=worker,
                                    args=(worker_id,
                                          self.uid,
                                          self.pending_task_queue,
                                          self.pending_result_queue,
                                          self.ready_worker_queue,
                                          ))
        p.start()
        self.procs[worker_id] = p

    logger.debug("Manager synced with workers")

    self._task_puller_thread = threading.Thread(target=self.pull_tasks,
                                                args=(self._kill_event,))
    self._result_pusher_thread = threading.Thread(target=self.push_results,
                                                  args=(self._kill_event,))
    self._task_puller_thread.start()
    self._result_pusher_thread.start()

    logger.info("Loop start")

    # TODO : Add mechanism in this loop to stop the worker pool
    # This might need a multiprocessing event to signal back.
    self._kill_event.wait()
    logger.critical("[MAIN] Received kill event, terminating worker processes")

    self._task_puller_thread.join()
    self._result_pusher_thread.join()
    for proc_id in self.procs:
        self.procs[proc_id].terminate()
        logger.critical("Terminating worker {}:{}".format(
            self.procs[proc_id], self.procs[proc_id].is_alive()))
        self.procs[proc_id].join()
        logger.debug("Worker:{} joined successfully".format(self.procs[proc_id]))

    self.task_incoming.close()
    self.result_outgoing.close()
    self.context.term()
    delta = time.time() - start
    logger.info("process_worker_pool ran for {} seconds".format(delta))
    return
def register(request):
    '''
    Register new user.
    '''
    serializer_class = registration_settings.REGISTER_SERIALIZER_CLASS
    serializer = serializer_class(data=request.data)
    serializer.is_valid(raise_exception=True)
    kwargs = {}

    if registration_settings.REGISTER_VERIFICATION_ENABLED:
        verification_flag_field = get_user_setting('VERIFICATION_FLAG_FIELD')
        kwargs[verification_flag_field] = False
        email_field = get_user_setting('EMAIL_FIELD')
        if (email_field not in serializer.validated_data
                or not serializer.validated_data[email_field]):
            raise BadRequest("User without email cannot be verified")

    user = serializer.save(**kwargs)

    output_serializer_class = registration_settings.REGISTER_OUTPUT_SERIALIZER_CLASS  # noqa: E501
    output_serializer = output_serializer_class(instance=user)
    user_data = output_serializer.data

    if registration_settings.REGISTER_VERIFICATION_ENABLED:
        signer = RegisterSigner({
            'user_id': user.pk,
        }, request=request)
        template_config = (
            registration_settings.REGISTER_VERIFICATION_EMAIL_TEMPLATES)
        send_verification_notification(user, signer, template_config)

    return Response(user_data, status=status.HTTP_201_CREATED)
def _get_optional_args(args, opts, err_on_missing=False, **kwargs):
    r"""Convenience function to retrieve arguments from an argparse namespace.

    Parameters
    ----------
    args : list of str
        List of arguments to retrieve.
    opts : argparse.namespace
        Namespace to retrieve arguments for.
    err_on_missing : bool, optional
        If an argument is not found in the namespace, raise an
        AttributeError. Otherwise, just pass. Default is False.
    \**kwargs :
        All other keyword arguments are added to the return dictionary.
        Any keyword argument that is the same as an argument in ``args``
        will override what was retrieved from ``opts``.

    Returns
    -------
    dict :
        Dictionary mapping arguments to values retrieved from ``opts``. If
        keyword arguments were provided, these will also be included in the
        dictionary.
    """
    parsed = {}
    for arg in args:
        try:
            parsed[arg] = getattr(opts, arg)
        except AttributeError as e:
            if err_on_missing:
                raise AttributeError(e)
            else:
                continue
    parsed.update(kwargs)
    return parsed
def encode16Int(value):
    '''
    Encodes a 16 bit unsigned integer into MQTT format.
    Returns a bytearray
    '''
    value = int(value)
    encoded = bytearray(2)
    encoded[0] = value >> 8
    encoded[1] = value & 0xFF
    return encoded
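`encode16Int` is fully self-contained, so it can be exercised directly. A round-trip sketch, with a hypothetical `decode16Int` inverse (not part of the original), shows the big-endian byte order MQTT uses for two-byte integers:

```python
def encode16Int(value):
    '''Encodes a 16 bit unsigned integer into MQTT big-endian format.'''
    value = int(value)
    encoded = bytearray(2)
    encoded[0] = value >> 8    # most significant byte first
    encoded[1] = value & 0xFF  # least significant byte second
    return encoded

def decode16Int(encoded):
    '''Hypothetical inverse: rebuild the integer from two bytes.'''
    return (encoded[0] << 8) | encoded[1]

assert encode16Int(0x1234) == bytearray(b'\x12\x34')
assert decode16Int(encode16Int(54321)) == 54321
```

Note that values above 65535 would raise a `ValueError` on the bytearray assignment, since `value >> 8` would exceed one byte; the function assumes its input fits in 16 bits.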
def _wait_and_except_if_failed(self, event, timeout=None):
    """Combines waiting for event and call to `_except_if_failed`.

    If timeout is not specified the configured sync_timeout is used.
    """
    event.wait(timeout or self.__sync_timeout)
    self._except_if_failed(event)
def register_module_alias(self, alias, module_path, after_init=False):
    """Adds an alias for a module.

    http://uwsgi-docs.readthedocs.io/en/latest/PythonModuleAlias.html

    :param str|unicode alias:
    :param str|unicode module_path:
    :param bool after_init: add a python module alias after uwsgi module
        initialization
    """
    command = 'post-pymodule-alias' if after_init else 'pymodule-alias'
    self._set(command, '%s=%s' % (alias, module_path), multi=True)
    return self._section
def compute_err(self, solution_y, coefficients):
    """
    Return an error value by summing the absolute difference for each
    element in a list of solution-generated y-values versus expected
    values. Compounds the error by 50% if any coefficient in the solution
    is negative.

    solution_y: list of y-values produced by a solution
    coefficients: list of polynomial coefficients represented by the solution

    return: error value
    """
    error = 0
    for modeled, expected in zip(solution_y, self.expected_values):
        error += abs(modeled - expected)
    if any([c < 0 for c in coefficients]):
        error *= 1.5
    return error
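Since `compute_err` reads `self.expected_values`, it cannot run standalone. A module-level sketch with the expected values passed in explicitly makes the error measure easy to check by hand:

```python
def compute_err(solution_y, expected_values, coefficients):
    """Sketch of the same error measure, with the expected values
    passed in explicitly instead of read from `self`."""
    error = 0
    for modeled, expected in zip(solution_y, expected_values):
        error += abs(modeled - expected)
    if any(c < 0 for c in coefficients):
        error *= 1.5  # penalise solutions with any negative coefficient
    return error

# A solution off by 1 at two points, with one negative coefficient:
print(compute_err([1.0, 2.0], [2.0, 3.0], [0.5, -1.0]))  # 3.0
```

With all coefficients non-negative the same residuals would give 2.0; the single 1.5x multiplier is applied once, regardless of how many coefficients are negative.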
def build_endpoint_route_name(cls, method_name, class_name=None):
    """
    Build the route endpoint.

    It is recommended to place your views in a /views directory, so the
    endpoint can be built from it. If not, the endpoint will be made from
    the module name. The main reason for having the views directory is
    that it makes the path of the view explicit and easy to see.

    :param cls: The view class
    :param method_name: The name of the method
    :param class_name: To pass the class name.
    :return: string
    """
    module = cls.__module__.split("views.")[1] if ".views." in cls.__module__ \
        else cls.__module__.split(".")[-1]
    return "%s.%s:%s" % (module, class_name or cls.__name__, method_name)
Build the route endpoint It is recommended to place your views in /views directory, so it can build the endpoint from it. If not, it will make the endpoint from the module name. The main reason for having the views directory is that it is explicitly easy to see the path of the view :param cls: The view class :param method_name: The name of the method :param class_name: To pass the class name. :return: string
Below is the instruction that describes the task: ### Input: Build the route endpoint It is recommended to place your views in /views directory, so it can build the endpoint from it. If not, it will make the endpoint from the module name. The main reason for having the views directory is that it is explicitly easy to see the path of the view :param cls: The view class :param method_name: The name of the method :param class_name: To pass the class name. :return: string ### Response: def build_endpoint_route_name(cls, method_name, class_name=None): """ Build the route endpoint It is recommended to place your views in /views directory, so it can build the endpoint from it. If not, it will make the endpoint from the module name. The main reason for having the views directory is that it is explicitly easy to see the path of the view :param cls: The view class :param method_name: The name of the method :param class_name: To pass the class name. :return: string """ module = cls.__module__.split("views.")[1] if ".views." in cls.__module__ \ else cls.__module__.split(".")[-1] return "%s.%s:%s" % (module, class_name or cls.__name__, method_name)
def emg_parameters(data, sample_rate, raw_to_mv=True, device="biosignalsplux", resolution=16): """ ----- Brief ----- Function for extracting EMG parameters from time and frequency domains. ----------- Description ----------- EMG signals have specific properties that are different from other biosignals. For example, it is not periodic, contrary to ECG signals. This type of biosignals are composed of activation and inactivation periods that are different from one another. Specifically, there are differences in the amplitude domain, in which the activation periods are characterised by an increase of amplitude. This function allows the extraction of a wide range of features specific of EMG signals. ---------- Parameters ---------- data : list EMG signal. sample_rate : int Sampling frequency. raw_to_mv : boolean If True then it is assumed that the input samples are in a raw format and the output results will be in mV. When True "device" and "resolution" inputs become mandatory. device : str Plux device label: - "bioplux" - "bioplux_exp" - "biosignalsplux" - "rachimeter" - "channeller" - "swifter" - "ddme_openbanplux" resolution : int Resolution selected during acquisition. 
Returns ------- out : dict Dictionary with EMG parameters values, with keys: Maximum Burst Duration : Duration of the longest activation in the EMG signal Minimum Burst Duration : Duration of the shortest activation in the EMG signal Average Burst Duration : Average duration of the activations in the EMG signal Standard Deviation of Burst Duration : Standard Deviation duration of the activations in the EMG signal Maximum Sample Value : Maximum value of the EMG signal Minimum Sample Value : Minimum value of the EMG signal Average Sample Value : Average value of the EMG signal Standard Deviation Sample Value : Standard deviation value of the EMG signal RMS : Root mean square of the EMG signal Area : Area under the curve of the EMG signal Total Power Spect : Total power of the power spectrum of the EMG signal Median Frequency : Median frequency of the EMG signal calculated using the power spectrum of the EMG signal Maximum Power Frequency : Frequency corresponding to the maximum amplitude in the power spectrum of the EMG signal """ out_dict = {} # Conversion of data samples to mV if requested by raw_to_mv input. if raw_to_mv is True: data = raw_to_phy("EMG", device, data, resolution, option="mV") # Detection of muscular activation periods. burst_begin, burst_end = detect_emg_activations(data, sample_rate, smooth_level=20, threshold_level=10, time_units=True)[:2] # --------------------------- Number of activation periods. ----------------------------------- out_dict["Number of Bursts"] = len(burst_begin) # ----------- Maximum, Minimum and Average duration of muscular activations. ------------------ # Bursts Duration. bursts_time = burst_end - burst_begin # Parameter extraction. 
out_dict["Maximum Burst Duration"] = numpy.max(bursts_time) out_dict["Minimum Burst Duration"] = numpy.min(bursts_time) out_dict["Average Burst Duration"] = numpy.average(bursts_time) out_dict["Standard Deviation of Burst Duration"] = numpy.std(bursts_time) # --------- Maximum, Minimum, Average and Standard Deviation of EMG sample values ------------- # Maximum. out_dict["Maximum Sample Value"] = numpy.max(data) # Minimum. out_dict["Minimum Sample Value"] = numpy.min(data) # Average and Standard Deviation. out_dict["Average Sample Value"] = numpy.average(data) out_dict["Standard Deviation Sample Value"] = numpy.std(data) # ---------- Root Mean Square and Area under the curve (Signal Intensity Estimators) ---------- # Root Mean Square. out_dict["RMS"] = numpy.sqrt(numpy.sum(numpy.array(data) * numpy.array(data)) / len(data)) # Area under the curve. out_dict["Area"] = integr.cumtrapz(data)[-1] # -------------------- Total power and frequency reference points ----------------------------- # Signal Power Spectrum. freq_axis, power_axis = scisignal.welch(data, fs=sample_rate, window='hanning', noverlap=0, nfft=int(256.)) # Total Power and Median Frequency (Frequency that divides the spectrum into two regions with # equal power). area_power_spect = integr.cumtrapz(power_axis, freq_axis, initial=0) out_dict["Total Power Spect"] = area_power_spect[-1] out_dict["Median Frequency"] = freq_axis[numpy.where(area_power_spect >= out_dict["Total Power Spect"] / 2)[0][0]] out_dict["Maximum Power Frequency"] = freq_axis[numpy.argmax(power_axis)] return out_dict
----- Brief ----- Function for extracting EMG parameters from time and frequency domains. ----------- Description ----------- EMG signals have specific properties that are different from other biosignals. For example, it is not periodic, contrary to ECG signals. This type of biosignals are composed of activation and inactivation periods that are different from one another. Specifically, there are differences in the amplitude domain, in which the activation periods are characterised by an increase of amplitude. This function allows the extraction of a wide range of features specific of EMG signals. ---------- Parameters ---------- data : list EMG signal. sample_rate : int Sampling frequency. raw_to_mv : boolean If True then it is assumed that the input samples are in a raw format and the output results will be in mV. When True "device" and "resolution" inputs become mandatory. device : str Plux device label: - "bioplux" - "bioplux_exp" - "biosignalsplux" - "rachimeter" - "channeller" - "swifter" - "ddme_openbanplux" resolution : int Resolution selected during acquisition. 
Returns ------- out : dict Dictionary with EMG parameters values, with keys: Maximum Burst Duration : Duration of the longest activation in the EMG signal Minimum Burst Duration : Duration of the shortest activation in the EMG signal Average Burst Duration : Average duration of the activations in the EMG signal Standard Deviation of Burst Duration : Standard Deviation duration of the activations in the EMG signal Maximum Sample Value : Maximum value of the EMG signal Minimum Sample Value : Minimum value of the EMG signal Average Sample Value : Average value of the EMG signal Standard Deviation Sample Value : Standard deviation value of the EMG signal RMS : Root mean square of the EMG signal Area : Area under the curve of the EMG signal Total Power Spect : Total power of the power spectrum of the EMG signal Median Frequency : Median frequency of the EMG signal calculated using the power spectrum of the EMG signal Maximum Power Frequency : Frequency corresponding to the maximum amplitude in the power spectrum of the EMG signal
Below is the instruction that describes the task: ### Input: ----- Brief ----- Function for extracting EMG parameters from time and frequency domains. ----------- Description ----------- EMG signals have specific properties that are different from other biosignals. For example, it is not periodic, contrary to ECG signals. This type of biosignals are composed of activation and inactivation periods that are different from one another. Specifically, there are differences in the amplitude domain, in which the activation periods are characterised by an increase of amplitude. This function allows the extraction of a wide range of features specific of EMG signals. ---------- Parameters ---------- data : list EMG signal. sample_rate : int Sampling frequency. raw_to_mv : boolean If True then it is assumed that the input samples are in a raw format and the output results will be in mV. When True "device" and "resolution" inputs become mandatory. device : str Plux device label: - "bioplux" - "bioplux_exp" - "biosignalsplux" - "rachimeter" - "channeller" - "swifter" - "ddme_openbanplux" resolution : int Resolution selected during acquisition. 
Returns ------- out : dict Dictionary with EMG parameters values, with keys: Maximum Burst Duration : Duration of the longest activation in the EMG signal Minimum Burst Duration : Duration of the shortest activation in the EMG signal Average Burst Duration : Average duration of the activations in the EMG signal Standard Deviation of Burst Duration : Standard Deviation duration of the activations in the EMG signal Maximum Sample Value : Maximum value of the EMG signal Minimum Sample Value : Minimum value of the EMG signal Average Sample Value : Average value of the EMG signal Standard Deviation Sample Value : Standard deviation value of the EMG signal RMS : Root mean square of the EMG signal Area : Area under the curve of the EMG signal Total Power Spect : Total power of the power spectrum of the EMG signal Median Frequency : Median frequency of the EMG signal calculated using the power spectrum of the EMG signal Maximum Power Frequency : Frequency corresponding to the maximum amplitude in the power spectrum of the EMG signal ### Response: def emg_parameters(data, sample_rate, raw_to_mv=True, device="biosignalsplux", resolution=16): """ ----- Brief ----- Function for extracting EMG parameters from time and frequency domains. ----------- Description ----------- EMG signals have specific properties that are different from other biosignals. For example, it is not periodic, contrary to ECG signals. This type of biosignals are composed of activation and inactivation periods that are different from one another. Specifically, there are differences in the amplitude domain, in which the activation periods are characterised by an increase of amplitude. This function allows the extraction of a wide range of features specific of EMG signals. ---------- Parameters ---------- data : list EMG signal. sample_rate : int Sampling frequency. raw_to_mv : boolean If True then it is assumed that the input samples are in a raw format and the output results will be in mV. 
When True "device" and "resolution" inputs become mandatory. device : str Plux device label: - "bioplux" - "bioplux_exp" - "biosignalsplux" - "rachimeter" - "channeller" - "swifter" - "ddme_openbanplux" resolution : int Resolution selected during acquisition. Returns ------- out : dict Dictionary with EMG parameters values, with keys: Maximum Burst Duration : Duration of the longest activation in the EMG signal Minimum Burst Duration : Duration of the shortest activation in the EMG signal Average Burst Duration : Average duration of the activations in the EMG signal Standard Deviation of Burst Duration : Standard Deviation duration of the activations in the EMG signal Maximum Sample Value : Maximum value of the EMG signal Minimum Sample Value : Minimum value of the EMG signal Average Sample Value : Average value of the EMG signal Standard Deviation Sample Value : Standard deviation value of the EMG signal RMS : Root mean square of the EMG signal Area : Area under the curve of the EMG signal Total Power Spect : Total power of the power spectrum of the EMG signal Median Frequency : Median frequency of the EMG signal calculated using the power spectrum of the EMG signal Maximum Power Frequency : Frequency corresponding to the maximum amplitude in the power spectrum of the EMG signal """ out_dict = {} # Conversion of data samples to mV if requested by raw_to_mv input. if raw_to_mv is True: data = raw_to_phy("EMG", device, data, resolution, option="mV") # Detection of muscular activation periods. burst_begin, burst_end = detect_emg_activations(data, sample_rate, smooth_level=20, threshold_level=10, time_units=True)[:2] # --------------------------- Number of activation periods. ----------------------------------- out_dict["Number of Bursts"] = len(burst_begin) # ----------- Maximum, Minimum and Average duration of muscular activations. ------------------ # Bursts Duration. bursts_time = burst_end - burst_begin # Parameter extraction. 
out_dict["Maximum Burst Duration"] = numpy.max(bursts_time) out_dict["Minimum Burst Duration"] = numpy.min(bursts_time) out_dict["Average Burst Duration"] = numpy.average(bursts_time) out_dict["Standard Deviation of Burst Duration"] = numpy.std(bursts_time) # --------- Maximum, Minimum, Average and Standard Deviation of EMG sample values ------------- # Maximum. out_dict["Maximum Sample Value"] = numpy.max(data) # Minimum. out_dict["Minimum Sample Value"] = numpy.min(data) # Average and Standard Deviation. out_dict["Average Sample Value"] = numpy.average(data) out_dict["Standard Deviation Sample Value"] = numpy.std(data) # ---------- Root Mean Square and Area under the curve (Signal Intensity Estimators) ---------- # Root Mean Square. out_dict["RMS"] = numpy.sqrt(numpy.sum(numpy.array(data) * numpy.array(data)) / len(data)) # Area under the curve. out_dict["Area"] = integr.cumtrapz(data)[-1] # -------------------- Total power and frequency reference points ----------------------------- # Signal Power Spectrum. freq_axis, power_axis = scisignal.welch(data, fs=sample_rate, window='hanning', noverlap=0, nfft=int(256.)) # Total Power and Median Frequency (Frequency that divides the spectrum into two regions with # equal power). area_power_spect = integr.cumtrapz(power_axis, freq_axis, initial=0) out_dict["Total Power Spect"] = area_power_spect[-1] out_dict["Median Frequency"] = freq_axis[numpy.where(area_power_spect >= out_dict["Total Power Spect"] / 2)[0][0]] out_dict["Maximum Power Frequency"] = freq_axis[numpy.argmax(power_axis)] return out_dict
def points(self): """ Return the grid points as a numpy array """ # Get grid dimensions nx, ny, nz = self.dimensions nx -= 1 ny -= 1 nz -= 1 # get the points and convert to spacings dx, dy, dz = self.spacing # Now make the cell arrays ox, oy, oz = self.origin x = np.insert(np.cumsum(np.full(nx, dx)), 0, 0.0) + ox y = np.insert(np.cumsum(np.full(ny, dy)), 0, 0.0) + oy z = np.insert(np.cumsum(np.full(nz, dz)), 0, 0.0) + oz xx, yy, zz = np.meshgrid(x,y,z, indexing='ij') return np.c_[xx.ravel(), yy.ravel(), zz.ravel()]
Return the grid points as a numpy array
Below is the instruction that describes the task: ### Input: Return the grid points as a numpy array ### Response: def points(self): """ Return the grid points as a numpy array """ # Get grid dimensions nx, ny, nz = self.dimensions nx -= 1 ny -= 1 nz -= 1 # get the points and convert to spacings dx, dy, dz = self.spacing # Now make the cell arrays ox, oy, oz = self.origin x = np.insert(np.cumsum(np.full(nx, dx)), 0, 0.0) + ox y = np.insert(np.cumsum(np.full(ny, dy)), 0, 0.0) + oy z = np.insert(np.cumsum(np.full(nz, dz)), 0, 0.0) + oz xx, yy, zz = np.meshgrid(x,y,z, indexing='ij') return np.c_[xx.ravel(), yy.ravel(), zz.ravel()]
def _internal_sub(self, other, method=None): """ Used for specifying subtraction methods for __sub__, __isub__, __rsub__ """ if hasattr(other, "datatype"): if other.datatype == self.datatype: oval = other.value else: oval = int(other.value) else: oval = int(other) if method == 'rsub': rtn_val = oval - self.value else: rtn_val = self.value - oval return XsdInteger(rtn_val)
Used for specifying subtraction methods for __sub__, __isub__, __rsub__
Below is the instruction that describes the task: ### Input: Used for specifying subtraction methods for __sub__, __isub__, __rsub__ ### Response: def _internal_sub(self, other, method=None): """ Used for specifying subtraction methods for __sub__, __isub__, __rsub__ """ if hasattr(other, "datatype"): if other.datatype == self.datatype: oval = other.value else: oval = int(other.value) else: oval = int(other) if method == 'rsub': rtn_val = oval - self.value else: rtn_val = self.value - oval return XsdInteger(rtn_val)
def set_interrupt_limits(self, low, high): """Set the interrupt limits to provided unsigned 16-bit threshold values. """ self.i2c.write8(0x04, low & 0xFF) self.i2c.write8(0x05, low >> 8) self.i2c.write8(0x06, high & 0xFF) self.i2c.write8(0x07, high >> 8)
Set the interrupt limits to provided unsigned 16-bit threshold values.
Below is the instruction that describes the task: ### Input: Set the interrupt limits to provided unsigned 16-bit threshold values. ### Response: def set_interrupt_limits(self, low, high): """Set the interrupt limits to provided unsigned 16-bit threshold values. """ self.i2c.write8(0x04, low & 0xFF) self.i2c.write8(0x05, low >> 8) self.i2c.write8(0x06, high & 0xFF) self.i2c.write8(0x07, high >> 8)
def ensure_dir(path): """ :param path: path to directory to be created Create a directory if it does not already exist. """ if not os.path.exists(path): # path does not exist, create the directory os.mkdir(path) else: # The path exists, check that it is not a file if os.path.isfile(path): raise Exception("Path %s already exists, and it is a file, not a directory" % path)
:param path: path to directory to be created Create a directory if it does not already exist.
Below is the instruction that describes the task: ### Input: :param path: path to directory to be created Create a directory if it does not already exist. ### Response: def ensure_dir(path): """ :param path: path to directory to be created Create a directory if it does not already exist. """ if not os.path.exists(path): # path does not exist, create the directory os.mkdir(path) else: # The path exists, check that it is not a file if os.path.isfile(path): raise Exception("Path %s already exists, and it is a file, not a directory" % path)
def get_scheme_cartocss(column, scheme_info): """Get TurboCARTO CartoCSS based on input parameters""" if 'colors' in scheme_info: color_scheme = '({})'.format(','.join(scheme_info['colors'])) else: color_scheme = 'cartocolor({})'.format(scheme_info['name']) if not isinstance(scheme_info['bins'], int): bins = ','.join(str(i) for i in scheme_info['bins']) else: bins = scheme_info['bins'] bin_method = scheme_info['bin_method'] comparison = ', {}'.format(BinMethod.mapping.get(bin_method, '>=')) return ('ramp([{column}], {color_scheme}, ' '{bin_method}({bins}){comparison})').format( column=column, color_scheme=color_scheme, bin_method=bin_method, bins=bins, comparison=comparison)
Get TurboCARTO CartoCSS based on input parameters
Below is the instruction that describes the task: ### Input: Get TurboCARTO CartoCSS based on input parameters ### Response: def get_scheme_cartocss(column, scheme_info): """Get TurboCARTO CartoCSS based on input parameters""" if 'colors' in scheme_info: color_scheme = '({})'.format(','.join(scheme_info['colors'])) else: color_scheme = 'cartocolor({})'.format(scheme_info['name']) if not isinstance(scheme_info['bins'], int): bins = ','.join(str(i) for i in scheme_info['bins']) else: bins = scheme_info['bins'] bin_method = scheme_info['bin_method'] comparison = ', {}'.format(BinMethod.mapping.get(bin_method, '>=')) return ('ramp([{column}], {color_scheme}, ' '{bin_method}({bins}){comparison})').format( column=column, color_scheme=color_scheme, bin_method=bin_method, bins=bins, comparison=comparison)
def save(self, path_info, checksum): """Save checksum for the specified path info. Args: path_info (dict): path_info to save checksum for. checksum (str): checksum to save. """ assert path_info["scheme"] == "local" assert checksum is not None path = path_info["path"] assert os.path.exists(path) actual_mtime, actual_size = get_mtime_and_size(path) actual_inode = get_inode(path) existing_record = self.get_state_record_for_inode(actual_inode) if not existing_record: self._insert_new_state_record( path, actual_inode, actual_mtime, actual_size, checksum ) return self._update_state_for_path_changed( path, actual_inode, actual_mtime, actual_size, checksum )
Save checksum for the specified path info. Args: path_info (dict): path_info to save checksum for. checksum (str): checksum to save.
Below is the instruction that describes the task: ### Input: Save checksum for the specified path info. Args: path_info (dict): path_info to save checksum for. checksum (str): checksum to save. ### Response: def save(self, path_info, checksum): """Save checksum for the specified path info. Args: path_info (dict): path_info to save checksum for. checksum (str): checksum to save. """ assert path_info["scheme"] == "local" assert checksum is not None path = path_info["path"] assert os.path.exists(path) actual_mtime, actual_size = get_mtime_and_size(path) actual_inode = get_inode(path) existing_record = self.get_state_record_for_inode(actual_inode) if not existing_record: self._insert_new_state_record( path, actual_inode, actual_mtime, actual_size, checksum ) return self._update_state_for_path_changed( path, actual_inode, actual_mtime, actual_size, checksum )
def close(self): """This method closes the canvas and writes contents to the associated file. Calling this procedure is optional, because Pychart calls this procedure for every open canvas on normal exit.""" for i in range(0, len(active_canvases)): if active_canvases[i] == self: del active_canvases[i] return
This method closes the canvas and writes contents to the associated file. Calling this procedure is optional, because Pychart calls this procedure for every open canvas on normal exit.
Below is the instruction that describes the task: ### Input: This method closes the canvas and writes contents to the associated file. Calling this procedure is optional, because Pychart calls this procedure for every open canvas on normal exit. ### Response: def close(self): """This method closes the canvas and writes contents to the associated file. Calling this procedure is optional, because Pychart calls this procedure for every open canvas on normal exit.""" for i in range(0, len(active_canvases)): if active_canvases[i] == self: del active_canvases[i] return
def toggle_codecompletion(self, checked): """Toggle automatic code completion""" self.shell.set_codecompletion_auto(checked) self.set_option('codecompletion/auto', checked)
Toggle automatic code completion
Below is the instruction that describes the task: ### Input: Toggle automatic code completion ### Response: def toggle_codecompletion(self, checked): """Toggle automatic code completion""" self.shell.set_codecompletion_auto(checked) self.set_option('codecompletion/auto', checked)
def enable(self, cmd="enable-admin", pattern="ssword", re_flags=re.IGNORECASE): """Enter enable mode.""" return super(AlcatelSrosSSH, self).enable( cmd=cmd, pattern=pattern, re_flags=re_flags )
Enter enable mode.
Below is the instruction that describes the task: ### Input: Enter enable mode. ### Response: def enable(self, cmd="enable-admin", pattern="ssword", re_flags=re.IGNORECASE): """Enter enable mode.""" return super(AlcatelSrosSSH, self).enable( cmd=cmd, pattern=pattern, re_flags=re_flags )
def get_m49_from_iso3(cls, iso3, use_live=True, exception=None): # type: (str, bool, Optional[ExceptionUpperBound]) -> Optional[int] """Get M49 from ISO3 code Args: iso3 (str): ISO3 code for which to get M49 code use_live (bool): Try to use latest data from web rather than file in package. Defaults to True. exception (Optional[ExceptionUpperBound]): An exception to raise if country not found. Defaults to None. Returns: Optional[int]: M49 code """ countriesdata = cls.countriesdata(use_live=use_live) m49 = countriesdata['m49iso3'].get(iso3) if m49 is not None: return m49 if exception is not None: raise exception return None
Get M49 from ISO3 code Args: iso3 (str): ISO3 code for which to get M49 code use_live (bool): Try to use latest data from web rather than file in package. Defaults to True. exception (Optional[ExceptionUpperBound]): An exception to raise if country not found. Defaults to None. Returns: Optional[int]: M49 code
Below is the instruction that describes the task: ### Input: Get M49 from ISO3 code Args: iso3 (str): ISO3 code for which to get M49 code use_live (bool): Try to use latest data from web rather than file in package. Defaults to True. exception (Optional[ExceptionUpperBound]): An exception to raise if country not found. Defaults to None. Returns: Optional[int]: M49 code ### Response: def get_m49_from_iso3(cls, iso3, use_live=True, exception=None): # type: (str, bool, Optional[ExceptionUpperBound]) -> Optional[int] """Get M49 from ISO3 code Args: iso3 (str): ISO3 code for which to get M49 code use_live (bool): Try to use latest data from web rather than file in package. Defaults to True. exception (Optional[ExceptionUpperBound]): An exception to raise if country not found. Defaults to None. Returns: Optional[int]: M49 code """ countriesdata = cls.countriesdata(use_live=use_live) m49 = countriesdata['m49iso3'].get(iso3) if m49 is not None: return m49 if exception is not None: raise exception return None
def build(self, builder): """Build XML by appending to builder""" builder.start("BasicDefinitions", {}) for child in self.measurement_units: child.build(builder) builder.end("BasicDefinitions")
Build XML by appending to builder
Below is the instruction that describes the task: ### Input: Build XML by appending to builder ### Response: def build(self, builder): """Build XML by appending to builder""" builder.start("BasicDefinitions", {}) for child in self.measurement_units: child.build(builder) builder.end("BasicDefinitions")
def download(self, remotepath = '/', localpath = ''): ''' Usage: download [remotepath] [localpath] - \ download a remote directory (recursively) / file remotepath - remote path at Baidu Yun (after app root directory), if not specified, it is set to the root directory at Baidu Yun localpath - local path. if not specified, it is set to the current directory ''' subr = self.get_file_info(remotepath) if const.ENoError == subr: if 'isdir' in self.__remote_json: if self.__remote_json['isdir']: return self.downdir(remotepath, localpath) else: return self.downfile(remotepath, localpath) else: perr("Malformed path info JSON '{}' returned".format(self.__remote_json)) return const.EFatal elif const.EFileNotFound == subr: perr("Remote path '{}' does not exist".format(remotepath)) return subr else: perr("Error {} while getting info for remote path '{}'".format(subr, remotepath)) return subr
Usage: download [remotepath] [localpath] - \ download a remote directory (recursively) / file remotepath - remote path at Baidu Yun (after app root directory), if not specified, it is set to the root directory at Baidu Yun localpath - local path. if not specified, it is set to the current directory
Below is the instruction that describes the task: ### Input: Usage: download [remotepath] [localpath] - \ download a remote directory (recursively) / file remotepath - remote path at Baidu Yun (after app root directory), if not specified, it is set to the root directory at Baidu Yun localpath - local path. if not specified, it is set to the current directory ### Response: def download(self, remotepath = '/', localpath = ''): ''' Usage: download [remotepath] [localpath] - \ download a remote directory (recursively) / file remotepath - remote path at Baidu Yun (after app root directory), if not specified, it is set to the root directory at Baidu Yun localpath - local path. if not specified, it is set to the current directory ''' subr = self.get_file_info(remotepath) if const.ENoError == subr: if 'isdir' in self.__remote_json: if self.__remote_json['isdir']: return self.downdir(remotepath, localpath) else: return self.downfile(remotepath, localpath) else: perr("Malformed path info JSON '{}' returned".format(self.__remote_json)) return const.EFatal elif const.EFileNotFound == subr: perr("Remote path '{}' does not exist".format(remotepath)) return subr else: perr("Error {} while getting info for remote path '{}'".format(subr, remotepath)) return subr
def assign(self, node): """ Translate an assign node into SQLQuery. :param node: a treebrd node :return: a SQLQuery object for the tree rooted at node """ child_object = self.translate(node.child) child_object.prefix = 'CREATE TEMPORARY TABLE {name}({attributes}) AS '\ .format(name=node.name, attributes=', '.join(node.attributes.names)) return child_object
Translate an assign node into SQLQuery. :param node: a treebrd node :return: a SQLQuery object for the tree rooted at node
Below is the instruction that describes the task: ### Input: Translate an assign node into SQLQuery. :param node: a treebrd node :return: a SQLQuery object for the tree rooted at node ### Response: def assign(self, node): """ Translate an assign node into SQLQuery. :param node: a treebrd node :return: a SQLQuery object for the tree rooted at node """ child_object = self.translate(node.child) child_object.prefix = 'CREATE TEMPORARY TABLE {name}({attributes}) AS '\ .format(name=node.name, attributes=', '.join(node.attributes.names)) return child_object
def rbnf_lexing(text: str): """Read loudly for documentation.""" cast_map: const = _cast_map lexer_table: const = _lexer_table keyword: const = _keyword drop_table: const = _DropTable end: const = _END unknown: const = _UNKNOWN text_length = len(text) colno = 0 lineno = 0 position = 0 cast_const = ConstStrPool.cast_to_const while True: if text_length <= position: break for case_name, text_match_case in lexer_table: matched_text = text_match_case(text, position) if not matched_text: continue case_mem_addr = id(case_name) # memory address of case_name if case_mem_addr not in drop_table: if matched_text in cast_map: yield Tokenizer(keyword, cast_const(matched_text), lineno, colno) else: yield Tokenizer(cast_const(case_name), matched_text, lineno, colno) n = len(matched_text) line_inc = matched_text.count('\n') if line_inc: latest_newline_idx = matched_text.rindex('\n') colno = n - latest_newline_idx lineno += line_inc if case_name is _Space and matched_text[-1] == '\n': yield Tokenizer(end, '', lineno, colno) else: colno += n position += n break else: char = text[position] yield Tokenizer(unknown, char, lineno, colno) position += 1 if char == '\n': lineno += 1 colno = 0 else: colno += 1
Read loudly for documentation.
Below is the instruction that describes the task: ### Input: Read loudly for documentation. ### Response: def rbnf_lexing(text: str): """Read loudly for documentation.""" cast_map: const = _cast_map lexer_table: const = _lexer_table keyword: const = _keyword drop_table: const = _DropTable end: const = _END unknown: const = _UNKNOWN text_length = len(text) colno = 0 lineno = 0 position = 0 cast_const = ConstStrPool.cast_to_const while True: if text_length <= position: break for case_name, text_match_case in lexer_table: matched_text = text_match_case(text, position) if not matched_text: continue case_mem_addr = id(case_name) # memory address of case_name if case_mem_addr not in drop_table: if matched_text in cast_map: yield Tokenizer(keyword, cast_const(matched_text), lineno, colno) else: yield Tokenizer(cast_const(case_name), matched_text, lineno, colno) n = len(matched_text) line_inc = matched_text.count('\n') if line_inc: latest_newline_idx = matched_text.rindex('\n') colno = n - latest_newline_idx lineno += line_inc if case_name is _Space and matched_text[-1] == '\n': yield Tokenizer(end, '', lineno, colno) else: colno += n position += n break else: char = text[position] yield Tokenizer(unknown, char, lineno, colno) position += 1 if char == '\n': lineno += 1 colno = 0 else: colno += 1
def safe_eval(value): """ Converts the inputted text value to a standard python value (if possible). :param value | <str> || <unicode> :return <variant> """ if not isinstance(value, (str, unicode)): return value try: return CONSTANT_EVALS[value] except KeyError: try: return ast.literal_eval(value) except StandardError: return value
Converts the inputted text value to a standard python value (if possible). :param value | <str> || <unicode> :return <variant>
Below is the instruction that describes the task: ### Input: Converts the inputted text value to a standard python value (if possible). :param value | <str> || <unicode> :return <variant> ### Response: def safe_eval(value): """ Converts the inputted text value to a standard python value (if possible). :param value | <str> || <unicode> :return <variant> """ if not isinstance(value, (str, unicode)): return value try: return CONSTANT_EVALS[value] except KeyError: try: return ast.literal_eval(value) except StandardError: return value
def monitor_key_exists(service, key): """ Searches for the existence of a key in the monitor cluster. :param service: six.string_types. The Ceph user name to run the command under :param key: six.string_types. The key to search for :return: Returns True if the key exists, False if not and raises an exception if an unknown error occurs. :raise: CalledProcessError if an unknown error occurs """ try: check_call( ['ceph', '--id', service, 'config-key', 'exists', str(key)]) # I can return true here regardless because Ceph returns # ENOENT if the key wasn't found return True except CalledProcessError as e: if e.returncode == errno.ENOENT: return False else: log("Unknown error from ceph config-get exists: {} {}".format( e.returncode, e.output)) raise
Searches for the existence of a key in the monitor cluster. :param service: six.string_types. The Ceph user name to run the command under :param key: six.string_types. The key to search for :return: Returns True if the key exists, False if not and raises an exception if an unknown error occurs. :raise: CalledProcessError if an unknown error occurs
Below is the instruction that describes the task: ### Input: Searches for the existence of a key in the monitor cluster. :param service: six.string_types. The Ceph user name to run the command under :param key: six.string_types. The key to search for :return: Returns True if the key exists, False if not and raises an exception if an unknown error occurs. :raise: CalledProcessError if an unknown error occurs ### Response: def monitor_key_exists(service, key): """ Searches for the existence of a key in the monitor cluster. :param service: six.string_types. The Ceph user name to run the command under :param key: six.string_types. The key to search for :return: Returns True if the key exists, False if not and raises an exception if an unknown error occurs. :raise: CalledProcessError if an unknown error occurs """ try: check_call( ['ceph', '--id', service, 'config-key', 'exists', str(key)]) # I can return true here regardless because Ceph returns # ENOENT if the key wasn't found return True except CalledProcessError as e: if e.returncode == errno.ENOENT: return False else: log("Unknown error from ceph config-get exists: {} {}".format( e.returncode, e.output)) raise
def _parse_query(self, source): """Parse one of the rules as either objectfilter or dottysql. Example: _parse_query("5 + 5") # Returns Sum(Literal(5), Literal(5)) Arguments: source: A rule in either objectfilter or dottysql syntax. Returns: The AST to represent the rule. """ if self.OBJECTFILTER_WORDS.search(source): syntax_ = "objectfilter" else: syntax_ = None # Default it is. return query.Query(source, syntax=syntax_)
Parse one of the rules as either objectfilter or dottysql. Example: _parse_query("5 + 5") # Returns Sum(Literal(5), Literal(5)) Arguments: source: A rule in either objectfilter or dottysql syntax. Returns: The AST to represent the rule.
Below is the instruction that describes the task: ### Input: Parse one of the rules as either objectfilter or dottysql. Example: _parse_query("5 + 5") # Returns Sum(Literal(5), Literal(5)) Arguments: source: A rule in either objectfilter or dottysql syntax. Returns: The AST to represent the rule. ### Response: def _parse_query(self, source): """Parse one of the rules as either objectfilter or dottysql. Example: _parse_query("5 + 5") # Returns Sum(Literal(5), Literal(5)) Arguments: source: A rule in either objectfilter or dottysql syntax. Returns: The AST to represent the rule. """ if self.OBJECTFILTER_WORDS.search(source): syntax_ = "objectfilter" else: syntax_ = None # Default it is. return query.Query(source, syntax=syntax_)
def parse_encoded_styles(text, normalize_key=None): """ Parse text styles encoded in a string into a nested data structure. :param text: The encoded styles (a string). :returns: A dictionary in the structure of the :data:`DEFAULT_FIELD_STYLES` and :data:`DEFAULT_LEVEL_STYLES` dictionaries. Here's an example of how this function works: >>> from coloredlogs import parse_encoded_styles >>> from pprint import pprint >>> encoded_styles = 'debug=green;warning=yellow;error=red;critical=red,bold' >>> pprint(parse_encoded_styles(encoded_styles)) {'debug': {'color': 'green'}, 'warning': {'color': 'yellow'}, 'error': {'color': 'red'}, 'critical': {'bold': True, 'color': 'red'}} """ parsed_styles = {} for assignment in split(text, ';'): name, _, styles = assignment.partition('=') target = parsed_styles.setdefault(name, {}) for token in split(styles, ','): # When this code was originally written, setting background colors # wasn't supported yet, so there was no need to disambiguate # between the text color and background color. This explains why # a color name or number implies setting the text color (for # backwards compatibility). if token.isdigit(): target['color'] = int(token) elif token in ANSI_COLOR_CODES: target['color'] = token elif '=' in token: name, _, value = token.partition('=') if name in ('color', 'background'): if value.isdigit(): target[name] = int(value) elif value in ANSI_COLOR_CODES: target[name] = value else: target[token] = True return parsed_styles
Parse text styles encoded in a string into a nested data structure. :param text: The encoded styles (a string). :returns: A dictionary in the structure of the :data:`DEFAULT_FIELD_STYLES` and :data:`DEFAULT_LEVEL_STYLES` dictionaries. Here's an example of how this function works: >>> from coloredlogs import parse_encoded_styles >>> from pprint import pprint >>> encoded_styles = 'debug=green;warning=yellow;error=red;critical=red,bold' >>> pprint(parse_encoded_styles(encoded_styles)) {'debug': {'color': 'green'}, 'warning': {'color': 'yellow'}, 'error': {'color': 'red'}, 'critical': {'bold': True, 'color': 'red'}}
Below is the instruction that describes the task: ### Input: Parse text styles encoded in a string into a nested data structure. :param text: The encoded styles (a string). :returns: A dictionary in the structure of the :data:`DEFAULT_FIELD_STYLES` and :data:`DEFAULT_LEVEL_STYLES` dictionaries. Here's an example of how this function works: >>> from coloredlogs import parse_encoded_styles >>> from pprint import pprint >>> encoded_styles = 'debug=green;warning=yellow;error=red;critical=red,bold' >>> pprint(parse_encoded_styles(encoded_styles)) {'debug': {'color': 'green'}, 'warning': {'color': 'yellow'}, 'error': {'color': 'red'}, 'critical': {'bold': True, 'color': 'red'}} ### Response: def parse_encoded_styles(text, normalize_key=None): """ Parse text styles encoded in a string into a nested data structure. :param text: The encoded styles (a string). :returns: A dictionary in the structure of the :data:`DEFAULT_FIELD_STYLES` and :data:`DEFAULT_LEVEL_STYLES` dictionaries. Here's an example of how this function works: >>> from coloredlogs import parse_encoded_styles >>> from pprint import pprint >>> encoded_styles = 'debug=green;warning=yellow;error=red;critical=red,bold' >>> pprint(parse_encoded_styles(encoded_styles)) {'debug': {'color': 'green'}, 'warning': {'color': 'yellow'}, 'error': {'color': 'red'}, 'critical': {'bold': True, 'color': 'red'}} """ parsed_styles = {} for assignment in split(text, ';'): name, _, styles = assignment.partition('=') target = parsed_styles.setdefault(name, {}) for token in split(styles, ','): # When this code was originally written, setting background colors # wasn't supported yet, so there was no need to disambiguate # between the text color and background color. This explains why # a color name or number implies setting the text color (for # backwards compatibility). 
if token.isdigit(): target['color'] = int(token) elif token in ANSI_COLOR_CODES: target['color'] = token elif '=' in token: name, _, value = token.partition('=') if name in ('color', 'background'): if value.isdigit(): target[name] = int(value) elif value in ANSI_COLOR_CODES: target[name] = value else: target[token] = True return parsed_styles
def GetPointWithDistanceTraveled(self, shape_dist_traveled): """Returns a point on the shape polyline with the input shape_dist_traveled. Args: shape_dist_traveled: The input shape_dist_traveled. Returns: The shape point as a tuple (lat, lng, shape_dist_traveled), where lat and lng is the location of the shape point, and shape_dist_traveled is an increasing metric representing the distance traveled along the shape. Returns None if there is data error in shape. """ if not self.distance: return None if shape_dist_traveled <= self.distance[0]: return self.points[0] if shape_dist_traveled >= self.distance[-1]: return self.points[-1] index = bisect.bisect(self.distance, shape_dist_traveled) (lat0, lng0, dist0) = self.points[index - 1] (lat1, lng1, dist1) = self.points[index] # Interpolate if shape_dist_traveled does not equal to any of the point # in shape segment. # (lat0, lng0) (lat, lng) (lat1, lng1) # -----|--------------------|---------------------|------ # dist0 shape_dist_traveled dist1 # \------- ca --------/ \-------- bc -------/ # \----------------- ba ------------------/ ca = shape_dist_traveled - dist0 bc = dist1 - shape_dist_traveled ba = bc + ca if ba == 0: # This only happens when there's data error in shapes and should have been # catched before. Check to avoid crash. return None # This won't work crossing longitude 180 and is only an approximation which # works well for short distance. lat = (lat1 * ca + lat0 * bc) / ba lng = (lng1 * ca + lng0 * bc) / ba return (lat, lng, shape_dist_traveled)
Returns a point on the shape polyline with the input shape_dist_traveled. Args: shape_dist_traveled: The input shape_dist_traveled. Returns: The shape point as a tuple (lat, lng, shape_dist_traveled), where lat and lng is the location of the shape point, and shape_dist_traveled is an increasing metric representing the distance traveled along the shape. Returns None if there is data error in shape.
Below is the instruction that describes the task: ### Input: Returns a point on the shape polyline with the input shape_dist_traveled. Args: shape_dist_traveled: The input shape_dist_traveled. Returns: The shape point as a tuple (lat, lng, shape_dist_traveled), where lat and lng is the location of the shape point, and shape_dist_traveled is an increasing metric representing the distance traveled along the shape. Returns None if there is data error in shape. ### Response: def GetPointWithDistanceTraveled(self, shape_dist_traveled): """Returns a point on the shape polyline with the input shape_dist_traveled. Args: shape_dist_traveled: The input shape_dist_traveled. Returns: The shape point as a tuple (lat, lng, shape_dist_traveled), where lat and lng is the location of the shape point, and shape_dist_traveled is an increasing metric representing the distance traveled along the shape. Returns None if there is data error in shape. """ if not self.distance: return None if shape_dist_traveled <= self.distance[0]: return self.points[0] if shape_dist_traveled >= self.distance[-1]: return self.points[-1] index = bisect.bisect(self.distance, shape_dist_traveled) (lat0, lng0, dist0) = self.points[index - 1] (lat1, lng1, dist1) = self.points[index] # Interpolate if shape_dist_traveled does not equal to any of the point # in shape segment. # (lat0, lng0) (lat, lng) (lat1, lng1) # -----|--------------------|---------------------|------ # dist0 shape_dist_traveled dist1 # \------- ca --------/ \-------- bc -------/ # \----------------- ba ------------------/ ca = shape_dist_traveled - dist0 bc = dist1 - shape_dist_traveled ba = bc + ca if ba == 0: # This only happens when there's data error in shapes and should have been # catched before. Check to avoid crash. return None # This won't work crossing longitude 180 and is only an approximation which # works well for short distance. 
lat = (lat1 * ca + lat0 * bc) / ba lng = (lng1 * ca + lng0 * bc) / ba return (lat, lng, shape_dist_traveled)
def new_context(environment, template_name, blocks, vars=None, shared=None, globals=None, locals=None): """Internal helper to for context creation.""" if vars is None: vars = {} if shared: parent = vars else: parent = dict(globals or (), **vars) if locals: # if the parent is shared a copy should be created because # we don't want to modify the dict passed if shared: parent = dict(parent) for key, value in iteritems(locals): if value is not missing: parent[key] = value return environment.context_class(environment, parent, template_name, blocks)
Internal helper to for context creation.
Below is the instruction that describes the task: ### Input: Internal helper to for context creation. ### Response: def new_context(environment, template_name, blocks, vars=None, shared=None, globals=None, locals=None): """Internal helper to for context creation.""" if vars is None: vars = {} if shared: parent = vars else: parent = dict(globals or (), **vars) if locals: # if the parent is shared a copy should be created because # we don't want to modify the dict passed if shared: parent = dict(parent) for key, value in iteritems(locals): if value is not missing: parent[key] = value return environment.context_class(environment, parent, template_name, blocks)
def create_training_job(self, config, wait_for_completion=True, print_log=True, check_interval=30, max_ingestion_time=None): """ Create a training job :param config: the config for training :type config: dict :param wait_for_completion: if the program should keep running until job finishes :type wait_for_completion: bool :param check_interval: the time interval in seconds which the operator will check the status of any SageMaker job :type check_interval: int :param max_ingestion_time: the maximum ingestion time in seconds. Any SageMaker jobs that run longer than this will fail. Setting this to None implies no timeout for any SageMaker job. :type max_ingestion_time: int :return: A response to training job creation """ self.check_training_config(config) response = self.get_conn().create_training_job(**config) if print_log: self.check_training_status_with_log(config['TrainingJobName'], self.non_terminal_states, self.failed_states, wait_for_completion, check_interval, max_ingestion_time ) elif wait_for_completion: describe_response = self.check_status(config['TrainingJobName'], 'TrainingJobStatus', self.describe_training_job, check_interval, max_ingestion_time ) billable_time = \ (describe_response['TrainingEndTime'] - describe_response['TrainingStartTime']) * \ describe_response['ResourceConfig']['InstanceCount'] self.log.info('Billable seconds:{}'.format(int(billable_time.total_seconds()) + 1)) return response
Create a training job :param config: the config for training :type config: dict :param wait_for_completion: if the program should keep running until job finishes :type wait_for_completion: bool :param check_interval: the time interval in seconds which the operator will check the status of any SageMaker job :type check_interval: int :param max_ingestion_time: the maximum ingestion time in seconds. Any SageMaker jobs that run longer than this will fail. Setting this to None implies no timeout for any SageMaker job. :type max_ingestion_time: int :return: A response to training job creation
Below is the instruction that describes the task: ### Input: Create a training job :param config: the config for training :type config: dict :param wait_for_completion: if the program should keep running until job finishes :type wait_for_completion: bool :param check_interval: the time interval in seconds which the operator will check the status of any SageMaker job :type check_interval: int :param max_ingestion_time: the maximum ingestion time in seconds. Any SageMaker jobs that run longer than this will fail. Setting this to None implies no timeout for any SageMaker job. :type max_ingestion_time: int :return: A response to training job creation ### Response: def create_training_job(self, config, wait_for_completion=True, print_log=True, check_interval=30, max_ingestion_time=None): """ Create a training job :param config: the config for training :type config: dict :param wait_for_completion: if the program should keep running until job finishes :type wait_for_completion: bool :param check_interval: the time interval in seconds which the operator will check the status of any SageMaker job :type check_interval: int :param max_ingestion_time: the maximum ingestion time in seconds. Any SageMaker jobs that run longer than this will fail. Setting this to None implies no timeout for any SageMaker job. 
:type max_ingestion_time: int :return: A response to training job creation """ self.check_training_config(config) response = self.get_conn().create_training_job(**config) if print_log: self.check_training_status_with_log(config['TrainingJobName'], self.non_terminal_states, self.failed_states, wait_for_completion, check_interval, max_ingestion_time ) elif wait_for_completion: describe_response = self.check_status(config['TrainingJobName'], 'TrainingJobStatus', self.describe_training_job, check_interval, max_ingestion_time ) billable_time = \ (describe_response['TrainingEndTime'] - describe_response['TrainingStartTime']) * \ describe_response['ResourceConfig']['InstanceCount'] self.log.info('Billable seconds:{}'.format(int(billable_time.total_seconds()) + 1)) return response
def _contains_nd(nodes, point): r"""Predicate indicating if a point is within a bounding box. .. note:: There is also a Fortran implementation of this function, which will be used if it can be built. Args: nodes (numpy.ndarray): A set of points. point (numpy.ndarray): A 1D NumPy array representing a point in the same dimension as ``nodes``. Returns: bool: Indicating containment. """ min_vals = np.min(nodes, axis=1) if not np.all(min_vals <= point): return False max_vals = np.max(nodes, axis=1) if not np.all(point <= max_vals): return False return True
r"""Predicate indicating if a point is within a bounding box. .. note:: There is also a Fortran implementation of this function, which will be used if it can be built. Args: nodes (numpy.ndarray): A set of points. point (numpy.ndarray): A 1D NumPy array representing a point in the same dimension as ``nodes``. Returns: bool: Indicating containment.
Below is the instruction that describes the task: ### Input: r"""Predicate indicating if a point is within a bounding box. .. note:: There is also a Fortran implementation of this function, which will be used if it can be built. Args: nodes (numpy.ndarray): A set of points. point (numpy.ndarray): A 1D NumPy array representing a point in the same dimension as ``nodes``. Returns: bool: Indicating containment. ### Response: def _contains_nd(nodes, point): r"""Predicate indicating if a point is within a bounding box. .. note:: There is also a Fortran implementation of this function, which will be used if it can be built. Args: nodes (numpy.ndarray): A set of points. point (numpy.ndarray): A 1D NumPy array representing a point in the same dimension as ``nodes``. Returns: bool: Indicating containment. """ min_vals = np.min(nodes, axis=1) if not np.all(min_vals <= point): return False max_vals = np.max(nodes, axis=1) if not np.all(point <= max_vals): return False return True
def getSequenceDollarID(self, strIndex, returnOffset=False): ''' This will take a given index and work backwards until it encounters a '$' indicating which dollar ID is associated with this read @param strIndex - the index of the character to start with @return - an integer indicating the dollar ID of the string the given character belongs to ''' #figure out the first hop backwards currIndex = strIndex prevChar = self.getCharAtIndex(currIndex) currIndex = self.getOccurrenceOfCharAtIndex(prevChar, currIndex) i = 0 #while we haven't looped back to the start while prevChar != 0: #figure out where to go from here prevChar = self.getCharAtIndex(currIndex) currIndex = self.getOccurrenceOfCharAtIndex(prevChar, currIndex) i += 1 if returnOffset: return (currIndex, i) else: return currIndex
This will take a given index and work backwards until it encounters a '$' indicating which dollar ID is associated with this read @param strIndex - the index of the character to start with @return - an integer indicating the dollar ID of the string the given character belongs to
Below is the instruction that describes the task: ### Input: This will take a given index and work backwards until it encounters a '$' indicating which dollar ID is associated with this read @param strIndex - the index of the character to start with @return - an integer indicating the dollar ID of the string the given character belongs to ### Response: def getSequenceDollarID(self, strIndex, returnOffset=False): ''' This will take a given index and work backwards until it encounters a '$' indicating which dollar ID is associated with this read @param strIndex - the index of the character to start with @return - an integer indicating the dollar ID of the string the given character belongs to ''' #figure out the first hop backwards currIndex = strIndex prevChar = self.getCharAtIndex(currIndex) currIndex = self.getOccurrenceOfCharAtIndex(prevChar, currIndex) i = 0 #while we haven't looped back to the start while prevChar != 0: #figure out where to go from here prevChar = self.getCharAtIndex(currIndex) currIndex = self.getOccurrenceOfCharAtIndex(prevChar, currIndex) i += 1 if returnOffset: return (currIndex, i) else: return currIndex
def format_bytes(bytes): """ Get human readable version of given bytes. Ripped from https://github.com/rg3/youtube-dl """ if bytes is None: return 'N/A' if type(bytes) is str: bytes = float(bytes) if bytes == 0.0: exponent = 0 else: exponent = int(math.log(bytes, 1024.0)) suffix = ['B', 'KB', 'MB', 'GB', 'TB', 'PB', 'EB', 'ZB', 'YB'][exponent] converted = float(bytes) / float(1024 ** exponent) return '{0:.2f}{1}'.format(converted, suffix)
Get human readable version of given bytes. Ripped from https://github.com/rg3/youtube-dl
Below is the instruction that describes the task: ### Input: Get human readable version of given bytes. Ripped from https://github.com/rg3/youtube-dl ### Response: def format_bytes(bytes): """ Get human readable version of given bytes. Ripped from https://github.com/rg3/youtube-dl """ if bytes is None: return 'N/A' if type(bytes) is str: bytes = float(bytes) if bytes == 0.0: exponent = 0 else: exponent = int(math.log(bytes, 1024.0)) suffix = ['B', 'KB', 'MB', 'GB', 'TB', 'PB', 'EB', 'ZB', 'YB'][exponent] converted = float(bytes) / float(1024 ** exponent) return '{0:.2f}{1}'.format(converted, suffix)
def from_xml(xml): """ Pick informations from :class:`.MARCXMLRecord` object and use it to build :class:`.SemanticInfo` structure. Args: xml (str/MARCXMLRecord): MarcXML which will be converted to SemanticInfo. In case of str, ``<record>`` tag is required. Returns: structure: :class:`.SemanticInfo`. """ hasAcquisitionFields = False acquisitionFields = [] isClosed = False isSummaryRecord = False contentOfFMT = "" parsedSummaryRecordSysNumber = "" summaryRecordSysNumber = "" parsed = xml if not isinstance(xml, MARCXMLRecord): parsed = MARCXMLRecord(str(xml)) # handle FMT record if "FMT" in parsed.controlfields: contentOfFMT = parsed["FMT"] if contentOfFMT == "SE": isSummaryRecord = True if "HLD" in parsed.datafields or "HLD" in parsed.controlfields: hasAcquisitionFields = True if "STZ" in parsed.datafields: acquisitionFields.extend(parsed["STZa"]) acquisitionFields.extend(parsed["STZb"]) def sign_and_author(sign): """ Sign is stored in ISTa, author's name is in ISTb. Sign is MarcSubrecord obj with pointers to other subrecords, so it is possible to pick references to author's name from signs. 
""" return [sign.replace(" ", "")] + sign.other_subfields.get("b", []) # look for catalogization fields for orig_sign in parsed["ISTa"]: sign = orig_sign.replace(" ", "") # remove spaces if sign.startswith("sk"): hasAcquisitionFields = True acquisitionFields.extend(sign_and_author(orig_sign)) # look whether the record was 'closed' by catalogizators for status in parsed["BASa"]: if status == "90": isClosed = True # if multiple PJM statuses are present, join them together status = "\n".join([x for x in parsed["PJMa"]]) # detect link to 'new' record, if the old one was 'closed' if status.strip(): summaryRecordSysNumber = status parsedSummaryRecordSysNumber = _parse_summaryRecordSysNumber( summaryRecordSysNumber ) return EPeriodicalSemanticInfo( hasAcquisitionFields=hasAcquisitionFields, acquisitionFields=acquisitionFields, isClosed=isClosed, isSummaryRecord=isSummaryRecord, contentOfFMT=contentOfFMT, parsedSummaryRecordSysNumber=parsedSummaryRecordSysNumber, summaryRecordSysNumber=summaryRecordSysNumber, )
Pick informations from :class:`.MARCXMLRecord` object and use it to build :class:`.SemanticInfo` structure. Args: xml (str/MARCXMLRecord): MarcXML which will be converted to SemanticInfo. In case of str, ``<record>`` tag is required. Returns: structure: :class:`.SemanticInfo`.
Below is the instruction that describes the task: ### Input: Pick informations from :class:`.MARCXMLRecord` object and use it to build :class:`.SemanticInfo` structure. Args: xml (str/MARCXMLRecord): MarcXML which will be converted to SemanticInfo. In case of str, ``<record>`` tag is required. Returns: structure: :class:`.SemanticInfo`. ### Response: def from_xml(xml): """ Pick informations from :class:`.MARCXMLRecord` object and use it to build :class:`.SemanticInfo` structure. Args: xml (str/MARCXMLRecord): MarcXML which will be converted to SemanticInfo. In case of str, ``<record>`` tag is required. Returns: structure: :class:`.SemanticInfo`. """ hasAcquisitionFields = False acquisitionFields = [] isClosed = False isSummaryRecord = False contentOfFMT = "" parsedSummaryRecordSysNumber = "" summaryRecordSysNumber = "" parsed = xml if not isinstance(xml, MARCXMLRecord): parsed = MARCXMLRecord(str(xml)) # handle FMT record if "FMT" in parsed.controlfields: contentOfFMT = parsed["FMT"] if contentOfFMT == "SE": isSummaryRecord = True if "HLD" in parsed.datafields or "HLD" in parsed.controlfields: hasAcquisitionFields = True if "STZ" in parsed.datafields: acquisitionFields.extend(parsed["STZa"]) acquisitionFields.extend(parsed["STZb"]) def sign_and_author(sign): """ Sign is stored in ISTa, author's name is in ISTb. Sign is MarcSubrecord obj with pointers to other subrecords, so it is possible to pick references to author's name from signs. 
""" return [sign.replace(" ", "")] + sign.other_subfields.get("b", []) # look for catalogization fields for orig_sign in parsed["ISTa"]: sign = orig_sign.replace(" ", "") # remove spaces if sign.startswith("sk"): hasAcquisitionFields = True acquisitionFields.extend(sign_and_author(orig_sign)) # look whether the record was 'closed' by catalogizators for status in parsed["BASa"]: if status == "90": isClosed = True # if multiple PJM statuses are present, join them together status = "\n".join([x for x in parsed["PJMa"]]) # detect link to 'new' record, if the old one was 'closed' if status.strip(): summaryRecordSysNumber = status parsedSummaryRecordSysNumber = _parse_summaryRecordSysNumber( summaryRecordSysNumber ) return EPeriodicalSemanticInfo( hasAcquisitionFields=hasAcquisitionFields, acquisitionFields=acquisitionFields, isClosed=isClosed, isSummaryRecord=isSummaryRecord, contentOfFMT=contentOfFMT, parsedSummaryRecordSysNumber=parsedSummaryRecordSysNumber, summaryRecordSysNumber=summaryRecordSysNumber, )
def _format_list(self, extracted_list): """Format a list of traceback entry tuples for printing. Given a list of tuples as returned by extract_tb() or extract_stack(), return a list of strings ready for printing. Each string in the resulting list corresponds to the item with the same index in the argument list. Each string ends in a newline; the strings may contain internal newlines as well, for those items whose source text line is not None. Lifted almost verbatim from traceback.py """ Colors = self.Colors list = [] for filename, lineno, name, line in extracted_list[:-1]: item = ' File %s"%s"%s, line %s%d%s, in %s%s%s\n' % \ (Colors.filename, filename, Colors.Normal, Colors.lineno, lineno, Colors.Normal, Colors.name, name, Colors.Normal) if line: item += ' %s\n' % line.strip() list.append(item) # Emphasize the last entry filename, lineno, name, line = extracted_list[-1] item = '%s File %s"%s"%s, line %s%d%s, in %s%s%s%s\n' % \ (Colors.normalEm, Colors.filenameEm, filename, Colors.normalEm, Colors.linenoEm, lineno, Colors.normalEm, Colors.nameEm, name, Colors.normalEm, Colors.Normal) if line: item += '%s %s%s\n' % (Colors.line, line.strip(), Colors.Normal) list.append(item) #from pprint import pformat; print 'LISTTB', pformat(list) # dbg return list
Format a list of traceback entry tuples for printing. Given a list of tuples as returned by extract_tb() or extract_stack(), return a list of strings ready for printing. Each string in the resulting list corresponds to the item with the same index in the argument list. Each string ends in a newline; the strings may contain internal newlines as well, for those items whose source text line is not None. Lifted almost verbatim from traceback.py
Below is the instruction that describes the task: ### Input: Format a list of traceback entry tuples for printing. Given a list of tuples as returned by extract_tb() or extract_stack(), return a list of strings ready for printing. Each string in the resulting list corresponds to the item with the same index in the argument list. Each string ends in a newline; the strings may contain internal newlines as well, for those items whose source text line is not None. Lifted almost verbatim from traceback.py ### Response: def _format_list(self, extracted_list): """Format a list of traceback entry tuples for printing. Given a list of tuples as returned by extract_tb() or extract_stack(), return a list of strings ready for printing. Each string in the resulting list corresponds to the item with the same index in the argument list. Each string ends in a newline; the strings may contain internal newlines as well, for those items whose source text line is not None. Lifted almost verbatim from traceback.py """ Colors = self.Colors list = [] for filename, lineno, name, line in extracted_list[:-1]: item = ' File %s"%s"%s, line %s%d%s, in %s%s%s\n' % \ (Colors.filename, filename, Colors.Normal, Colors.lineno, lineno, Colors.Normal, Colors.name, name, Colors.Normal) if line: item += ' %s\n' % line.strip() list.append(item) # Emphasize the last entry filename, lineno, name, line = extracted_list[-1] item = '%s File %s"%s"%s, line %s%d%s, in %s%s%s%s\n' % \ (Colors.normalEm, Colors.filenameEm, filename, Colors.normalEm, Colors.linenoEm, lineno, Colors.normalEm, Colors.nameEm, name, Colors.normalEm, Colors.Normal) if line: item += '%s %s%s\n' % (Colors.line, line.strip(), Colors.Normal) list.append(item) #from pprint import pformat; print 'LISTTB', pformat(list) # dbg return list
def load_file(self, app, pathname, relpath, pypath): """Loads a file and creates a View from it. Files are split between a YAML front-matter and the content (unless it is a .yml file). """ try: view_class = self.get_file_view_cls(relpath) return create_view_from_file(pathname, source_template=relpath, view_class=view_class) except DeclarativeViewError: pass
Loads a file and creates a View from it. Files are split between a YAML front-matter and the content (unless it is a .yml file).
Below is the instruction that describes the task: ### Input: Loads a file and creates a View from it. Files are split between a YAML front-matter and the content (unless it is a .yml file). ### Response: def load_file(self, app, pathname, relpath, pypath): """Loads a file and creates a View from it. Files are split between a YAML front-matter and the content (unless it is a .yml file). """ try: view_class = self.get_file_view_cls(relpath) return create_view_from_file(pathname, source_template=relpath, view_class=view_class) except DeclarativeViewError: pass
def feed(self, from_date=None, from_offset=None, category=None, latest_items=None, arthur_items=None, filter_classified=None): """ Feed data in Elastic from Perceval or Arthur """ if self.fetch_archive: items = self.perceval_backend.fetch_from_archive() self.feed_items(items) return elif arthur_items: items = arthur_items self.feed_items(items) return if from_date and from_offset: raise RuntimeError("Can't feed using from_date and from_offset.") # We need to filter by repository to support several repositories # in the same raw index filters_ = [get_repository_filter(self.perceval_backend, self.get_connector_name())] # Check if backend supports from_date signature = inspect.signature(self.perceval_backend.fetch) last_update = None if 'from_date' in signature.parameters: if from_date: last_update = from_date else: self.last_update = self.get_last_update_from_es(filters_=filters_) last_update = self.last_update logger.info("Incremental from: %s", last_update) offset = None if 'offset' in signature.parameters: if from_offset: offset = from_offset else: offset = self.elastic.get_last_offset("offset", filters_=filters_) if offset is not None: logger.info("Incremental from: %i offset", offset) else: logger.info("Not incremental") params = {} # category and filter_classified params are shared # by all Perceval backends if category is not None: params['category'] = category if filter_classified is not None: params['filter_classified'] = filter_classified # latest items, from_date and offset cannot be used together, # thus, the params dictionary is filled with the param available # and Perceval is executed if latest_items: params['latest_items'] = latest_items items = self.perceval_backend.fetch(**params) elif last_update: last_update = last_update.replace(tzinfo=None) params['from_date'] = last_update items = self.perceval_backend.fetch(**params) elif offset is not None: params['offset'] = offset items = self.perceval_backend.fetch(**params) else: items = self.perceval_backend.fetch(**params) self.feed_items(items) self.update_items()
Feed data in Elastic from Perceval or Arthur
Below is the instruction that describes the task: ### Input: Feed data in Elastic from Perceval or Arthur ### Response: def feed(self, from_date=None, from_offset=None, category=None, latest_items=None, arthur_items=None, filter_classified=None): """ Feed data in Elastic from Perceval or Arthur """ if self.fetch_archive: items = self.perceval_backend.fetch_from_archive() self.feed_items(items) return elif arthur_items: items = arthur_items self.feed_items(items) return if from_date and from_offset: raise RuntimeError("Can't feed using from_date and from_offset.") # We need to filter by repository to support several repositories # in the same raw index filters_ = [get_repository_filter(self.perceval_backend, self.get_connector_name())] # Check if backend supports from_date signature = inspect.signature(self.perceval_backend.fetch) last_update = None if 'from_date' in signature.parameters: if from_date: last_update = from_date else: self.last_update = self.get_last_update_from_es(filters_=filters_) last_update = self.last_update logger.info("Incremental from: %s", last_update) offset = None if 'offset' in signature.parameters: if from_offset: offset = from_offset else: offset = self.elastic.get_last_offset("offset", filters_=filters_) if offset is not None: logger.info("Incremental from: %i offset", offset) else: logger.info("Not incremental") params = {} # category and filter_classified params are shared # by all Perceval backends if category is not None: params['category'] = category if filter_classified is not None: params['filter_classified'] = filter_classified # latest items, from_date and offset cannot be used together, # thus, the params dictionary is filled with the param available # and Perceval is executed if latest_items: params['latest_items'] = latest_items items = self.perceval_backend.fetch(**params) elif last_update: last_update = last_update.replace(tzinfo=None) params['from_date'] = last_update items = self.perceval_backend.fetch(**params) elif offset is not None: params['offset'] = offset items = self.perceval_backend.fetch(**params) else: items = self.perceval_backend.fetch(**params) self.feed_items(items) self.update_items()
def get_xattrs(self, path, xattr_name=None, encoding='text', **kwargs): """Get one or more xattr values for a file or directory. :param xattr_name: ``str`` to get one attribute, ``list`` to get multiple attributes, ``None`` to get all attributes. :param encoding: ``text`` | ``hex`` | ``base64``, defaults to ``text`` :returns: Dictionary mapping xattr name to value. With text encoding, the value will be a unicode string. With hex or base64 encoding, the value will be a byte array. :rtype: dict """ kwargs['xattr.name'] = xattr_name json = _json(self._get(path, 'GETXATTRS', encoding=encoding, **kwargs))['XAttrs'] # Decode the result result = {} for attr in json: k = attr['name'] v = attr['value'] if v is None: result[k] = None elif encoding == 'text': assert attr['value'].startswith('"') and attr['value'].endswith('"') result[k] = v[1:-1] elif encoding == 'hex': assert attr['value'].startswith('0x') # older python demands bytes, so we have to ascii encode result[k] = binascii.unhexlify(v[2:].encode('ascii')) elif encoding == 'base64': assert attr['value'].startswith('0s') # older python demands bytes, so we have to ascii encode result[k] = base64.b64decode(v[2:].encode('ascii')) else: warnings.warn("Unexpected encoding {}".format(encoding)) result[k] = v return result
Get one or more xattr values for a file or directory. :param xattr_name: ``str`` to get one attribute, ``list`` to get multiple attributes, ``None`` to get all attributes. :param encoding: ``text`` | ``hex`` | ``base64``, defaults to ``text`` :returns: Dictionary mapping xattr name to value. With text encoding, the value will be a unicode string. With hex or base64 encoding, the value will be a byte array. :rtype: dict
Below is the the instruction that describes the task: ### Input: Get one or more xattr values for a file or directory. :param xattr_name: ``str`` to get one attribute, ``list`` to get multiple attributes, ``None`` to get all attributes. :param encoding: ``text`` | ``hex`` | ``base64``, defaults to ``text`` :returns: Dictionary mapping xattr name to value. With text encoding, the value will be a unicode string. With hex or base64 encoding, the value will be a byte array. :rtype: dict ### Response: def get_xattrs(self, path, xattr_name=None, encoding='text', **kwargs): """Get one or more xattr values for a file or directory. :param xattr_name: ``str`` to get one attribute, ``list`` to get multiple attributes, ``None`` to get all attributes. :param encoding: ``text`` | ``hex`` | ``base64``, defaults to ``text`` :returns: Dictionary mapping xattr name to value. With text encoding, the value will be a unicode string. With hex or base64 encoding, the value will be a byte array. :rtype: dict """ kwargs['xattr.name'] = xattr_name json = _json(self._get(path, 'GETXATTRS', encoding=encoding, **kwargs))['XAttrs'] # Decode the result result = {} for attr in json: k = attr['name'] v = attr['value'] if v is None: result[k] = None elif encoding == 'text': assert attr['value'].startswith('"') and attr['value'].endswith('"') result[k] = v[1:-1] elif encoding == 'hex': assert attr['value'].startswith('0x') # older python demands bytes, so we have to ascii encode result[k] = binascii.unhexlify(v[2:].encode('ascii')) elif encoding == 'base64': assert attr['value'].startswith('0s') # older python demands bytes, so we have to ascii encode result[k] = base64.b64decode(v[2:].encode('ascii')) else: warnings.warn("Unexpected encoding {}".format(encoding)) result[k] = v return result
def transfer_to_row(self, new_row): """Transfer this instruction to a new row. :param knittingpattern.Row.Row new_row: the new row the instruction is in. """ if new_row != self._row: index = self.get_index_in_row() if index is not None: self._row.instructions.pop(index) self._row = new_row
Transfer this instruction to a new row. :param knittingpattern.Row.Row new_row: the new row the instruction is in.
Below is the instruction that describes the task: ### Input: Transfer this instruction to a new row. :param knittingpattern.Row.Row new_row: the new row the instruction is in. ### Response: def transfer_to_row(self, new_row): """Transfer this instruction to a new row. :param knittingpattern.Row.Row new_row: the new row the instruction is in. """ if new_row != self._row: index = self.get_index_in_row() if index is not None: self._row.instructions.pop(index) self._row = new_row
def _chunk_iter_progress(it, log, prefix): """Wrap a chunk iterator for progress logging.""" n_variants = 0 before_all = time.time() before_chunk = before_all for chunk, chunk_length, chrom, pos in it: after_chunk = time.time() elapsed_chunk = after_chunk - before_chunk elapsed = after_chunk - before_all n_variants += chunk_length chrom = text_type(chrom, 'utf8') message = ( '%s %s rows in %.2fs; chunk in %.2fs (%s rows/s)' % (prefix, n_variants, elapsed, elapsed_chunk, int(chunk_length // elapsed_chunk)) ) if chrom: message += '; %s:%s' % (chrom, pos) print(message, file=log) log.flush() yield chunk, chunk_length, chrom, pos before_chunk = after_chunk after_all = time.time() elapsed = after_all - before_all print('%s all done (%s rows/s)' % (prefix, int(n_variants // elapsed)), file=log) log.flush()
Wrap a chunk iterator for progress logging.
Below is the the instruction that describes the task: ### Input: Wrap a chunk iterator for progress logging. ### Response: def _chunk_iter_progress(it, log, prefix): """Wrap a chunk iterator for progress logging.""" n_variants = 0 before_all = time.time() before_chunk = before_all for chunk, chunk_length, chrom, pos in it: after_chunk = time.time() elapsed_chunk = after_chunk - before_chunk elapsed = after_chunk - before_all n_variants += chunk_length chrom = text_type(chrom, 'utf8') message = ( '%s %s rows in %.2fs; chunk in %.2fs (%s rows/s)' % (prefix, n_variants, elapsed, elapsed_chunk, int(chunk_length // elapsed_chunk)) ) if chrom: message += '; %s:%s' % (chrom, pos) print(message, file=log) log.flush() yield chunk, chunk_length, chrom, pos before_chunk = after_chunk after_all = time.time() elapsed = after_all - before_all print('%s all done (%s rows/s)' % (prefix, int(n_variants // elapsed)), file=log) log.flush()
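The wrapper above is tied to its VCF-style chunk tuples and an existing log handle. A stripped-down, runnable sketch of the same pattern (the names and the simplified log format below are illustrative, not from the original, which also tracks chromosome/position and per-chunk rates):

```python
import io
import time

def progress_iter(it, log, prefix):
    """Simplified sketch of the chunk-progress wrapper: count rows,
    time the run, log a running summary, and re-yield chunks unchanged."""
    total = 0
    start = time.time()
    for chunk in it:
        total += len(chunk)
        elapsed = time.time() - start
        print('%s %d rows in %.2fs' % (prefix, total, elapsed), file=log)
        log.flush()
        yield chunk
    print('%s all done' % prefix, file=log)
    log.flush()

# Wrapping leaves the iterated values untouched; only the log is a side effect.
log = io.StringIO()
chunks = list(progress_iter([[1, 2], [3]], log, '[demo]'))
```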
def _make_annulus_path(patch_inner, patch_outer): """ Defines a matplotlib annulus path from two patches. This preserves the cubic Bezier curves (CURVE4) of the aperture paths. # This is borrowed from photutils aperture. """ import matplotlib.path as mpath path_inner = patch_inner.get_path() transform_inner = patch_inner.get_transform() path_inner = transform_inner.transform_path(path_inner) path_outer = patch_outer.get_path() transform_outer = patch_outer.get_transform() path_outer = transform_outer.transform_path(path_outer) verts_inner = path_inner.vertices[:-1][::-1] verts_inner = np.concatenate((verts_inner, [verts_inner[-1]])) verts = np.vstack((path_outer.vertices, verts_inner)) codes = np.hstack((path_outer.codes, path_inner.codes)) return mpath.Path(verts, codes)
Defines a matplotlib annulus path from two patches. This preserves the cubic Bezier curves (CURVE4) of the aperture paths. # This is borrowed from photutils aperture.
Below is the the instruction that describes the task: ### Input: Defines a matplotlib annulus path from two patches. This preserves the cubic Bezier curves (CURVE4) of the aperture paths. # This is borrowed from photutils aperture. ### Response: def _make_annulus_path(patch_inner, patch_outer): """ Defines a matplotlib annulus path from two patches. This preserves the cubic Bezier curves (CURVE4) of the aperture paths. # This is borrowed from photutils aperture. """ import matplotlib.path as mpath path_inner = patch_inner.get_path() transform_inner = patch_inner.get_transform() path_inner = transform_inner.transform_path(path_inner) path_outer = patch_outer.get_path() transform_outer = patch_outer.get_transform() path_outer = transform_outer.transform_path(path_outer) verts_inner = path_inner.vertices[:-1][::-1] verts_inner = np.concatenate((verts_inner, [verts_inner[-1]])) verts = np.vstack((path_outer.vertices, verts_inner)) codes = np.hstack((path_outer.codes, path_inner.codes)) return mpath.Path(verts, codes)
def _read( filename, schema, seq_label='sequence', alphabet=None, use_uids=True, **kwargs): """Use BioPython's sequence parsing module to convert any file format to a Pandas DataFrame. The resulting DataFrame has the following columns: - name - id - description - sequence """ # Check Alphabet if given if alphabet is None: alphabet = Bio.Alphabet.Alphabet() elif alphabet in ['dna', 'rna', 'protein', 'nucleotide']: alphabet = getattr(Bio.Alphabet, 'generic_{}'.format(alphabet)) else: raise Exception( "The alphabet is not recognized. Must be 'dna', 'rna', " "'nucleotide', or 'protein'.") kwargs.update(alphabet=alphabet) # Prepare DataFrame fields. data = { 'id': [], seq_label: [], 'description': [], 'label': [] } if use_uids: data['uid'] = [] # Parse Fasta file. for i, s in enumerate(SeqIO.parse(filename, format=schema, **kwargs)): data['id'].append(s.id) data[seq_label].append(str(s.seq)) data['description'].append(s.description) data['label'].append(s.name) if use_uids: data['uid'].append(get_random_id(10)) # Port to DataFrame. return pd.DataFrame(data)
Use BioPython's sequence parsing module to convert any file format to a Pandas DataFrame. The resulting DataFrame has the following columns: - name - id - description - sequence
Below is the the instruction that describes the task: ### Input: Use BioPython's sequence parsing module to convert any file format to a Pandas DataFrame. The resulting DataFrame has the following columns: - name - id - description - sequence ### Response: def _read( filename, schema, seq_label='sequence', alphabet=None, use_uids=True, **kwargs): """Use BioPython's sequence parsing module to convert any file format to a Pandas DataFrame. The resulting DataFrame has the following columns: - name - id - description - sequence """ # Check Alphabet if given if alphabet is None: alphabet = Bio.Alphabet.Alphabet() elif alphabet in ['dna', 'rna', 'protein', 'nucleotide']: alphabet = getattr(Bio.Alphabet, 'generic_{}'.format(alphabet)) else: raise Exception( "The alphabet is not recognized. Must be 'dna', 'rna', " "'nucleotide', or 'protein'.") kwargs.update(alphabet=alphabet) # Prepare DataFrame fields. data = { 'id': [], seq_label: [], 'description': [], 'label': [] } if use_uids: data['uid'] = [] # Parse Fasta file. for i, s in enumerate(SeqIO.parse(filename, format=schema, **kwargs)): data['id'].append(s.id) data[seq_label].append(str(s.seq)) data['description'].append(s.description) data['label'].append(s.name) if use_uids: data['uid'].append(get_random_id(10)) # Port to DataFrame. return pd.DataFrame(data)
def QA_SU_save_stock_list(client=DATABASE, ui_log=None, ui_progress=None): """save stock_list Keyword Arguments: client {[type]} -- [description] (default: {DATABASE}) """ client.drop_collection('stock_list') coll = client.stock_list coll.create_index('code') try: # 🛠todo this should be the first task, JOB01: update the stock list first!! QA_util_log_info( '##JOB08 Now Saving STOCK_LIST ====', ui_log=ui_log, ui_progress=ui_progress, ui_progress_int_value=5000 ) stock_list_from_tdx = QA_fetch_get_stock_list() pandas_data = QA_util_to_json_from_pandas(stock_list_from_tdx) coll.insert_many(pandas_data) QA_util_log_info( "Stock list fetch complete", ui_log=ui_log, ui_progress=ui_progress, ui_progress_int_value=10000 ) except Exception as e: QA_util_log_info(e, ui_log=ui_log) print(" Error save_tdx.QA_SU_save_stock_list exception!") pass
save stock_list Keyword Arguments: client {[type]} -- [description] (default: {DATABASE})
Below is the instruction that describes the task: ### Input: save stock_list Keyword Arguments: client {[type]} -- [description] (default: {DATABASE}) ### Response: def QA_SU_save_stock_list(client=DATABASE, ui_log=None, ui_progress=None): """save stock_list Keyword Arguments: client {[type]} -- [description] (default: {DATABASE}) """ client.drop_collection('stock_list') coll = client.stock_list coll.create_index('code') try: # 🛠todo this should be the first task, JOB01: update the stock list first!! QA_util_log_info( '##JOB08 Now Saving STOCK_LIST ====', ui_log=ui_log, ui_progress=ui_progress, ui_progress_int_value=5000 ) stock_list_from_tdx = QA_fetch_get_stock_list() pandas_data = QA_util_to_json_from_pandas(stock_list_from_tdx) coll.insert_many(pandas_data) QA_util_log_info( "Stock list fetch complete", ui_log=ui_log, ui_progress=ui_progress, ui_progress_int_value=10000 ) except Exception as e: QA_util_log_info(e, ui_log=ui_log) print(" Error save_tdx.QA_SU_save_stock_list exception!") pass
def setPhase(self, tlsID, index): """setPhase(string, integer) -> None . """ self._connection._sendIntCmd( tc.CMD_SET_TL_VARIABLE, tc.TL_PHASE_INDEX, tlsID, index)
setPhase(string, integer) -> None .
Below is the instruction that describes the task: ### Input: setPhase(string, integer) -> None . ### Response: def setPhase(self, tlsID, index): """setPhase(string, integer) -> None . """ self._connection._sendIntCmd( tc.CMD_SET_TL_VARIABLE, tc.TL_PHASE_INDEX, tlsID, index)
def convert_markdown(message): """Convert markdown in message text to HTML.""" assert message['Content-Type'].startswith("text/markdown") del message['Content-Type'] # Convert the text from markdown and then make the message multipart message = make_message_multipart(message) for payload_item in set(message.get_payload()): # Assume the plaintext item is formatted with markdown. # Add corresponding HTML version of the item as the last part of # the multipart message (as per RFC 2046) if payload_item['Content-Type'].startswith('text/plain'): original_text = payload_item.get_payload() html_text = markdown.markdown(original_text) html_payload = future.backports.email.mime.text.MIMEText( "<html><body>{}</body></html>".format(html_text), "html", ) message.attach(html_payload) return message
Convert markdown in message text to HTML.
Below is the the instruction that describes the task: ### Input: Convert markdown in message text to HTML. ### Response: def convert_markdown(message): """Convert markdown in message text to HTML.""" assert message['Content-Type'].startswith("text/markdown") del message['Content-Type'] # Convert the text from markdown and then make the message multipart message = make_message_multipart(message) for payload_item in set(message.get_payload()): # Assume the plaintext item is formatted with markdown. # Add corresponding HTML version of the item as the last part of # the multipart message (as per RFC 2046) if payload_item['Content-Type'].startswith('text/plain'): original_text = payload_item.get_payload() html_text = markdown.markdown(original_text) html_payload = future.backports.email.mime.text.MIMEText( "<html><body>{}</body></html>".format(html_text), "html", ) message.attach(html_payload) return message
def ekssum(handle, segno): """ Return summary information for a specified segment in a specified EK. http://naif.jpl.nasa.gov/pub/naif/toolkit_docs/C/cspice/ekssum_c.html :param handle: Handle of EK. :type handle: int :param segno: Number of segment to be summarized. :type segno: int :return: EK segment summary. :rtype: spicepy.utils.support_types.SpiceEKSegSum """ handle = ctypes.c_int(handle) segno = ctypes.c_int(segno) segsum = stypes.SpiceEKSegSum() libspice.ekssum_c(handle, segno, ctypes.byref(segsum)) return segsum
Return summary information for a specified segment in a specified EK. http://naif.jpl.nasa.gov/pub/naif/toolkit_docs/C/cspice/ekssum_c.html :param handle: Handle of EK. :type handle: int :param segno: Number of segment to be summarized. :type segno: int :return: EK segment summary. :rtype: spicepy.utils.support_types.SpiceEKSegSum
Below is the the instruction that describes the task: ### Input: Return summary information for a specified segment in a specified EK. http://naif.jpl.nasa.gov/pub/naif/toolkit_docs/C/cspice/ekssum_c.html :param handle: Handle of EK. :type handle: int :param segno: Number of segment to be summarized. :type segno: int :return: EK segment summary. :rtype: spicepy.utils.support_types.SpiceEKSegSum ### Response: def ekssum(handle, segno): """ Return summary information for a specified segment in a specified EK. http://naif.jpl.nasa.gov/pub/naif/toolkit_docs/C/cspice/ekssum_c.html :param handle: Handle of EK. :type handle: int :param segno: Number of segment to be summarized. :type segno: int :return: EK segment summary. :rtype: spicepy.utils.support_types.SpiceEKSegSum """ handle = ctypes.c_int(handle) segno = ctypes.c_int(segno) segsum = stypes.SpiceEKSegSum() libspice.ekssum_c(handle, segno, ctypes.byref(segsum)) return segsum
async def unpack(self, ciphertext: bytes) -> (str, str, str): """ Unpack a message. Return triple with cleartext, sender verification key, and recipient verification key. Raise AbsentMessage for missing ciphertext, or WalletState if wallet is closed. Raise AbsentRecord if wallet has no key to unpack ciphertext. :param ciphertext: JWE-like formatted message as pack() produces :return: cleartext, sender verification key, recipient verification key """ LOGGER.debug('Wallet.unpack >>> ciphertext: %s', ciphertext) if not ciphertext: LOGGER.debug('Wallet.pack <!< No ciphertext to unpack') raise AbsentMessage('No ciphertext to unpack') try: unpacked = json.loads(await crypto.unpack_message(self.handle, ciphertext)) except IndyError as x_indy: if x_indy.error_code == ErrorCode.WalletItemNotFound: LOGGER.debug('Wallet.unpack <!< Wallet %s has no local key to unpack ciphertext', self.name) raise AbsentRecord('Wallet {} has no local key to unpack ciphertext'.format(self.name)) LOGGER.debug('Wallet.unpack <!< Wallet %s unpack() raised indy error code {}', x_indy.error_code) raise rv = (unpacked['message'], unpacked.get('sender_verkey', None), unpacked.get('recipient_verkey', None)) LOGGER.debug('Wallet.unpack <<< %s', rv) return rv
Unpack a message. Return triple with cleartext, sender verification key, and recipient verification key. Raise AbsentMessage for missing ciphertext, or WalletState if wallet is closed. Raise AbsentRecord if wallet has no key to unpack ciphertext. :param ciphertext: JWE-like formatted message as pack() produces :return: cleartext, sender verification key, recipient verification key
Below is the the instruction that describes the task: ### Input: Unpack a message. Return triple with cleartext, sender verification key, and recipient verification key. Raise AbsentMessage for missing ciphertext, or WalletState if wallet is closed. Raise AbsentRecord if wallet has no key to unpack ciphertext. :param ciphertext: JWE-like formatted message as pack() produces :return: cleartext, sender verification key, recipient verification key ### Response: async def unpack(self, ciphertext: bytes) -> (str, str, str): """ Unpack a message. Return triple with cleartext, sender verification key, and recipient verification key. Raise AbsentMessage for missing ciphertext, or WalletState if wallet is closed. Raise AbsentRecord if wallet has no key to unpack ciphertext. :param ciphertext: JWE-like formatted message as pack() produces :return: cleartext, sender verification key, recipient verification key """ LOGGER.debug('Wallet.unpack >>> ciphertext: %s', ciphertext) if not ciphertext: LOGGER.debug('Wallet.pack <!< No ciphertext to unpack') raise AbsentMessage('No ciphertext to unpack') try: unpacked = json.loads(await crypto.unpack_message(self.handle, ciphertext)) except IndyError as x_indy: if x_indy.error_code == ErrorCode.WalletItemNotFound: LOGGER.debug('Wallet.unpack <!< Wallet %s has no local key to unpack ciphertext', self.name) raise AbsentRecord('Wallet {} has no local key to unpack ciphertext'.format(self.name)) LOGGER.debug('Wallet.unpack <!< Wallet %s unpack() raised indy error code {}', x_indy.error_code) raise rv = (unpacked['message'], unpacked.get('sender_verkey', None), unpacked.get('recipient_verkey', None)) LOGGER.debug('Wallet.unpack <<< %s', rv) return rv
def assignrepr(self, prefix='') -> str: """Return a |repr| string with a prefixed assignment.""" with objecttools.repr_.preserve_strings(True): with hydpy.pub.options.ellipsis(2, optional=True): prefix += '%s(' % objecttools.classname(self) repr_ = objecttools.assignrepr_values( sorted(self.names), prefix, 70) return repr_ + ')'
Return a |repr| string with a prefixed assignment.
Below is the instruction that describes the task: ### Input: Return a |repr| string with a prefixed assignment. ### Response: def assignrepr(self, prefix='') -> str: """Return a |repr| string with a prefixed assignment.""" with objecttools.repr_.preserve_strings(True): with hydpy.pub.options.ellipsis(2, optional=True): prefix += '%s(' % objecttools.classname(self) repr_ = objecttools.assignrepr_values( sorted(self.names), prefix, 70) return repr_ + ')'
def firmware_version(self): """Returns a firmware identification string of the connected J-Link. It consists of the following: - Product Name (e.g. J-Link) - The string: compiled - Compile data and time. - Optional additional information. Args: self (JLink): the ``JLink`` instance Returns: Firmware identification string. """ buf = (ctypes.c_char * self.MAX_BUF_SIZE)() self._dll.JLINKARM_GetFirmwareString(buf, self.MAX_BUF_SIZE) return ctypes.string_at(buf).decode()
Returns a firmware identification string of the connected J-Link. It consists of the following: - Product Name (e.g. J-Link) - The string: compiled - Compile data and time. - Optional additional information. Args: self (JLink): the ``JLink`` instance Returns: Firmware identification string.
Below is the the instruction that describes the task: ### Input: Returns a firmware identification string of the connected J-Link. It consists of the following: - Product Name (e.g. J-Link) - The string: compiled - Compile data and time. - Optional additional information. Args: self (JLink): the ``JLink`` instance Returns: Firmware identification string. ### Response: def firmware_version(self): """Returns a firmware identification string of the connected J-Link. It consists of the following: - Product Name (e.g. J-Link) - The string: compiled - Compile data and time. - Optional additional information. Args: self (JLink): the ``JLink`` instance Returns: Firmware identification string. """ buf = (ctypes.c_char * self.MAX_BUF_SIZE)() self._dll.JLINKARM_GetFirmwareString(buf, self.MAX_BUF_SIZE) return ctypes.string_at(buf).decode()
def is_perfect_consonant(note1, note2, include_fourths=True): """Return True if the interval is a perfect consonant one. Perfect consonances are either unisons, perfect fourths or fifths, or octaves (which is the same as a unison in this model). Perfect fourths are usually included as well, but are considered dissonant when used contrapuntal, which is why you can exclude them. """ dhalf = measure(note1, note2) return dhalf in [0, 7] or include_fourths and dhalf == 5
Return True if the interval is a perfect consonant one. Perfect consonances are either unisons, perfect fourths or fifths, or octaves (which is the same as a unison in this model). Perfect fourths are usually included as well, but are considered dissonant when used contrapuntal, which is why you can exclude them.
Below is the instruction that describes the task: ### Input: Return True if the interval is a perfect consonant one. Perfect consonances are either unisons, perfect fourths or fifths, or octaves (which is the same as a unison in this model). Perfect fourths are usually included as well, but are considered dissonant when used contrapuntal, which is why you can exclude them. ### Response: def is_perfect_consonant(note1, note2, include_fourths=True): """Return True if the interval is a perfect consonant one. Perfect consonances are either unisons, perfect fourths or fifths, or octaves (which is the same as a unison in this model). Perfect fourths are usually included as well, but are considered dissonant when used contrapuntal, which is why you can exclude them. """ dhalf = measure(note1, note2) return dhalf in [0, 7] or include_fourths and dhalf == 5
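The consonance check above delegates the interval arithmetic to a `measure` helper from its own library. A self-contained sketch of the same logic, where the note table and the semitone-distance `measure` below are assumptions for illustration, not part of the original:

```python
# Hypothetical stand-in for the library's `measure`: semitone distance
# from note1 up to note2, within one octave (0-11).
NOTE_INDEX = {'C': 0, 'C#': 1, 'D': 2, 'D#': 3, 'E': 4, 'F': 5,
              'F#': 6, 'G': 7, 'G#': 8, 'A': 9, 'A#': 10, 'B': 11}

def measure(note1, note2):
    return (NOTE_INDEX[note2] - NOTE_INDEX[note1]) % 12

def is_perfect_consonant(note1, note2, include_fourths=True):
    # Unison/octave (0) and perfect fifth (7) are always consonant;
    # the perfect fourth (5) only when include_fourths is set.
    dhalf = measure(note1, note2)
    return dhalf in [0, 7] or include_fourths and dhalf == 5

print(is_perfect_consonant('C', 'G'))                          # True (fifth)
print(is_perfect_consonant('C', 'F', include_fourths=False))   # False
```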
def _parse_mode(client, command, actor, args): """Parse a mode changes, update states, and dispatch MODE events.""" chantypes = client.server.features.get("CHANTYPES", "#") channel, _, args = args.partition(" ") args = args.lstrip(":") if channel[0] not in chantypes: # Personal modes for modes in args.split(): op, modes = modes[0], modes[1:] for mode in modes: if op == "+": client.user.modes.add(mode) else: client.user.modes.discard(mode) client.dispatch_event("MODE", actor, client.user, op, mode, None) return # channel-specific modes chan = client.server.get_channel(channel) user_modes = set(client._get_prefixes().itervalues()) chanmodes = client._get_chanmodes() list_modes, always_arg_modes, set_arg_modes, toggle_modes = chanmodes argument_modes = list_modes | always_arg_modes | set_arg_modes tokens = args.split() while tokens: modes, tokens = tokens[0], tokens[1:] op, modes = modes[0], modes[1:] for mode in modes: argument = None if mode in (user_modes | argument_modes): argument, tokens = tokens[0], tokens[1:] if mode in user_modes: user = client.server.get_channel(channel).members[argument] if op == "+": user.modes.add(mode) else: user.modes.discard(mode) if op == "+": if mode in (always_arg_modes | set_arg_modes): chan.modes[mode] = argument elif mode in toggle_modes: chan.modes[mode] = True else: if mode in (always_arg_modes | set_arg_modes | toggle_modes): if mode in chan.modes: del chan.modes[mode] # list-type modes (bans+exceptions, invite masks) aren't stored, # but do generate MODE events. client.dispatch_event("MODE", actor, chan, op, mode, argument)
Parse a mode changes, update states, and dispatch MODE events.
Below is the the instruction that describes the task: ### Input: Parse a mode changes, update states, and dispatch MODE events. ### Response: def _parse_mode(client, command, actor, args): """Parse a mode changes, update states, and dispatch MODE events.""" chantypes = client.server.features.get("CHANTYPES", "#") channel, _, args = args.partition(" ") args = args.lstrip(":") if channel[0] not in chantypes: # Personal modes for modes in args.split(): op, modes = modes[0], modes[1:] for mode in modes: if op == "+": client.user.modes.add(mode) else: client.user.modes.discard(mode) client.dispatch_event("MODE", actor, client.user, op, mode, None) return # channel-specific modes chan = client.server.get_channel(channel) user_modes = set(client._get_prefixes().itervalues()) chanmodes = client._get_chanmodes() list_modes, always_arg_modes, set_arg_modes, toggle_modes = chanmodes argument_modes = list_modes | always_arg_modes | set_arg_modes tokens = args.split() while tokens: modes, tokens = tokens[0], tokens[1:] op, modes = modes[0], modes[1:] for mode in modes: argument = None if mode in (user_modes | argument_modes): argument, tokens = tokens[0], tokens[1:] if mode in user_modes: user = client.server.get_channel(channel).members[argument] if op == "+": user.modes.add(mode) else: user.modes.discard(mode) if op == "+": if mode in (always_arg_modes | set_arg_modes): chan.modes[mode] = argument elif mode in toggle_modes: chan.modes[mode] = True else: if mode in (always_arg_modes | set_arg_modes | toggle_modes): if mode in chan.modes: del chan.modes[mode] # list-type modes (bans+exceptions, invite masks) aren't stored, # but do generate MODE events. client.dispatch_event("MODE", actor, chan, op, mode, argument)
def _chain_forks(elements):
    """Detect whether a sequence of elements leads to a fork of streams"""
    # we are only interested in the result, so unwind from the end
    for element in reversed(elements):
        if element.chain_fork:
            return True
        elif element.chain_join:
            return False
    return False
Detect whether a sequence of elements leads to a fork of streams
Below is the instruction that describes the task:

### Input:
Detect whether a sequence of elements leads to a fork of streams

### Response:
def _chain_forks(elements):
    """Detect whether a sequence of elements leads to a fork of streams"""
    # we are only interested in the result, so unwind from the end
    for element in reversed(elements):
        if element.chain_fork:
            return True
        elif element.chain_join:
            return False
    return False
def sankey(*args, projection=None, start=None, end=None, path=None, hue=None, categorical=False, scheme=None,
           k=5, cmap='viridis', vmin=None, vmax=None, legend=False, legend_kwargs=None, legend_labels=None,
           legend_values=None, legend_var=None, extent=None, figsize=(8, 6), ax=None, scale=None,
           limits=(1, 5), scale_func=None, **kwargs):
    """
    Spatial Sankey or flow map.

    Parameters
    ----------
    df : GeoDataFrame, optional
        The data being plotted. This parameter is optional - it is not needed if ``start`` and ``end`` (and
        ``hue``, if provided) are iterables.
    projection : geoplot.crs object instance, optional
        A geographic projection. For more information refer to `the tutorial page on projections
        <https://nbviewer.jupyter.org/github/ResidentMario/geoplot/blob/master/notebooks/tutorials/Projections.ipynb>`_.
    start : str or iterable
        A list of starting points. This parameter is required.
    end : str or iterable
        A list of ending points. This parameter is required.
    path : geoplot.crs object instance or iterable, optional
        Pass an iterable of paths to draw custom paths (see `this example
        <https://residentmario.github.io/geoplot/examples/dc-street-network.html>`_), or a projection to draw
        the shortest paths in that given projection. The default is ``Geodetic()``, which will connect points
        using `great circle distance <https://en.wikipedia.org/wiki/Great-circle_distance>`_—the true shortest
        path on the surface of the Earth.
    hue : None, Series, GeoSeries, iterable, or str, optional
        Applies a colormap to the output points.
    categorical : boolean, optional
        Set to ``True`` if ``hue`` references a categorical variable, and ``False`` (the default) otherwise.
        Ignored if ``hue`` is left unspecified.
    scheme : None or {"quantiles"|"equal_interval"|"fisher_jenks"}, optional
        Controls how the colormap bin edges are determined. Ignored if ``hue`` is left unspecified.
    k : int or None, optional
        Ignored if ``hue`` is left unspecified. Otherwise, if ``categorical`` is False, controls how many
        colors to use (5 is the default). If set to ``None``, a continuous colormap will be used.
    cmap : matplotlib color, optional
        The `matplotlib colormap <http://matplotlib.org/examples/color/colormaps_reference.html>`_ to be
        used. Ignored if ``hue`` is left unspecified.
    vmin : float, optional
        Values below this level will be colored the same threshold value. Defaults to the dataset minimum.
        Ignored if ``hue`` is left unspecified.
    vmax : float, optional
        Values above this level will be colored the same threshold value. Defaults to the dataset maximum.
        Ignored if ``hue`` is left unspecified.
    scale : str or iterable, optional
        Applies scaling to the output points. Defaults to None (no scaling).
    limits : (min, max) tuple, optional
        The minimum and maximum scale limits. Ignored if ``scale`` is left unspecified.
    scale_func : ufunc, optional
        The function used to scale point sizes. Defaults to a linear scale. For more information see `the
        Gallery demo <examples/usa-city-elevations.html>`_.
    legend : boolean, optional
        Whether or not to include a legend. Ignored if neither a ``hue`` nor a ``scale`` is specified.
    legend_values : list, optional
        The values to use in the legend. Defaults to equal intervals. For more information see `the Gallery
        demo <https://residentmario.github.io/geoplot/examples/largest-cities-usa.html>`_.
    legend_labels : list, optional
        The names to use in the legend. Defaults to the variable values. For more information see `the
        Gallery demo <https://residentmario.github.io/geoplot/examples/largest-cities-usa.html>`_.
    legend_var : "hue" or "scale", optional
        If both ``hue`` and ``scale`` are specified, which variable to use in the legend.
    legend_kwargs : dict, optional
        Keyword arguments to be passed to `the underlying legend
        <http://matplotlib.org/users/legend_guide.html>`_.
    extent : None or (minx, maxx, miny, maxy), optional
        Used to control plot x-axis and y-axis limits manually.
    figsize : tuple, optional
        An (x, y) tuple passed to ``matplotlib.figure`` which sets the size, in inches, of the resultant plot.
    ax : AxesSubplot or GeoAxesSubplot instance, optional
        A ``matplotlib.axes.AxesSubplot`` or ``cartopy.mpl.geoaxes.GeoAxesSubplot`` instance. Defaults to a
        new axis.
    kwargs : dict, optional
        Keyword arguments to be passed to the underlying ``matplotlib`` `Line2D
        <https://matplotlib.org/api/_as_gen/matplotlib.lines.Line2D.html#matplotlib.lines.Line2D>`_
        instances.

    Returns
    -------
    ``AxesSubplot`` or ``GeoAxesSubplot``
        The plot axis

    Examples
    --------
    A `Sankey diagram <https://en.wikipedia.org/wiki/Sankey_diagram>`_ is a simple visualization
    demonstrating flow through a network. A Sankey diagram is useful when you wish to show the volume of
    things moving between points or spaces: traffic load on a road network, for example, or inter-airport
    travel volumes. The ``geoplot`` ``sankey`` adds spatial context to this plot type by laying out the
    points in meaningful locations: airport locations, say, or road intersections.

    A basic ``sankey`` specifies data, ``start`` points, ``end`` points, and, optionally, a projection. The
    ``df`` argument is optional; if geometries are provided as independent iterables it is ignored. We
    overlay world geometry to aid interpretability.

    .. code-block:: python

        ax = gplt.sankey(la_flights, start='start', end='end', projection=gcrs.PlateCarree())
        ax.set_global(); ax.coastlines()

    .. image:: ../figures/sankey/sankey-geospatial-context.png

    The lines appear curved because they are `great circle
    <https://en.wikipedia.org/wiki/Great-circle_distance>`_ paths, which are the shortest routes between
    points on a sphere.

    .. code-block:: python

        ax = gplt.sankey(la_flights, start='start', end='end', projection=gcrs.Orthographic())
        ax.set_global(); ax.coastlines(); ax.outline_patch.set_visible(True)

    .. image:: ../figures/sankey/sankey-greatest-circle-distance.png

    To plot using a different distance metric pass a ``cartopy`` ``crs`` object (*not* a ``geoplot`` one) to
    the ``path`` parameter.

    .. code-block:: python

        import cartopy.crs as ccrs
        ax = gplt.sankey(la_flights, start='start', end='end', projection=gcrs.PlateCarree(),
                         path=ccrs.PlateCarree())
        ax.set_global(); ax.coastlines()

    .. image:: ../figures/sankey/sankey-path-projection.png

    If your data has custom paths, you can use those instead, via the ``path`` parameter.

    .. code-block:: python

        gplt.sankey(dc, path=dc.geometry, projection=gcrs.AlbersEqualArea(), scale='aadt')

    .. image:: ../figures/sankey/sankey-path.png

    ``hue`` parameterizes the color, and ``cmap`` controls the colormap. ``legend`` adds a legend. Keyword
    arguments can be passed to the legend using the ``legend_kwargs`` argument. These arguments will be
    passed to the underlying ``matplotlib`` `Legend
    <http://matplotlib.org/api/legend_api.html#matplotlib.legend.Legend>`_. The ``loc`` and
    ``bbox_to_anchor`` parameters are particularly useful for positioning the legend.

    .. code-block:: python

        ax = gplt.sankey(network, projection=gcrs.PlateCarree(), start='from', end='to',
                         hue='mock_variable', cmap='RdYlBu', legend=True,
                         legend_kwargs={'bbox_to_anchor': (1.4, 1.0)})
        ax.set_global()
        ax.coastlines()

    .. image:: ../figures/sankey/sankey-legend-kwargs.png

    Change the number of bins by specifying an alternative ``k`` value. To use a continuous colormap,
    explicitly specify ``k=None``. You can change the binning scheme with ``scheme``. The default is
    ``quantiles``, which bins observations into classes of different sizes but the same numbers of
    observations. ``equal_interval`` will create bins that are the same size, but potentially containing
    different numbers of observations. The more complicated ``fisher_jenks`` scheme is an intermediate
    between the two.

    .. code-block:: python

        ax = gplt.sankey(network, projection=gcrs.PlateCarree(), start='from', end='to',
                         hue='mock_variable', cmap='RdYlBu', legend=True,
                         legend_kwargs={'bbox_to_anchor': (1.25, 1.0)}, k=3, scheme='equal_interval')
        ax.set_global()
        ax.coastlines()

    .. image:: ../figures/sankey/sankey-scheme.png

    If your variable of interest is already `categorical
    <http://pandas.pydata.org/pandas-docs/stable/categorical.html>`_, specify ``categorical=True`` to use
    the labels in your dataset directly.

    .. code-block:: python

        ax = gplt.sankey(network, projection=gcrs.PlateCarree(), start='from', end='to',
                         hue='above_meridian', cmap='RdYlBu', legend=True,
                         legend_kwargs={'bbox_to_anchor': (1.2, 1.0)}, categorical=True)
        ax.set_global()
        ax.coastlines()

    .. image:: ../figures/sankey/sankey-categorical.png

    ``scale`` can be used to enable ``linewidth`` as a visual variable. Adjust the upper and lower bound
    with the ``limits`` parameter.

    .. code-block:: python

        ax = gplt.sankey(la_flights, projection=gcrs.PlateCarree(),
                         extent=(-125.0011, -66.9326, 24.9493, 49.5904), start='start', end='end',
                         scale='Passengers', limits=(0.1, 5), legend=True,
                         legend_kwargs={'bbox_to_anchor': (1.1, 1.0)})
        ax.coastlines()

    .. image:: ../figures/sankey/sankey-scale.png

    The default scaling function is linear: an observation at the midpoint of two others will be exactly
    midway between them in size. To specify an alternative scaling function, use the ``scale_func``
    parameter. This should be a factory function of two variables which, when given the maximum and minimum
    of the dataset, returns a scaling function which will be applied to the rest of the data. A demo is
    available in the `example gallery <examples/usa-city-elevations.html>`_.

    .. code-block:: python

        def trivial_scale(minval, maxval):
            return lambda v: 1

        ax = gplt.sankey(la_flights, projection=gcrs.PlateCarree(),
                         extent=(-125.0011, -66.9326, 24.9493, 49.5904), start='start', end='end',
                         scale='Passengers', scale_func=trivial_scale, legend=True,
                         legend_kwargs={'bbox_to_anchor': (1.1, 1.0)})
        ax.coastlines()

    .. image:: ../figures/sankey/sankey-scale-func.png

    ``hue`` and ``scale`` can co-exist. In case more than one visual variable is used, control which one
    appears in the legend using ``legend_var``.

    .. code-block:: python

        ax = gplt.sankey(network, projection=gcrs.PlateCarree(), start='from', end='to', scale='mock_data',
                         legend=True, legend_kwargs={'bbox_to_anchor': (1.1, 1.0)}, hue='mock_data',
                         legend_var="hue")
        ax.set_global()
        ax.coastlines()

    .. image:: ../figures/sankey/sankey-legend-var.png
    """
    # Validate df.
    if len(args) > 1:
        raise ValueError("Invalid input.")
    elif len(args) == 1:
        df = args[0]
    else:
        df = pd.DataFrame()
        # df = None  # bind the local name here; initialize in a bit.

    # Validate the rest of the input.
    if ((start is None) or (end is None)) and not hasattr(path, "__iter__"):
        raise ValueError("The 'start' and 'end' parameters must both be specified.")
    if (isinstance(start, str) or isinstance(end, str)) and df.empty:
        raise ValueError("Invalid input.")
    if isinstance(start, str):
        start = df[start]
    elif start is not None:
        start = gpd.GeoSeries(start)
    if isinstance(end, str):
        end = df[end]
    elif end is not None:
        end = gpd.GeoSeries(end)
    if (start is not None) and (end is not None) and hasattr(path, "__iter__"):
        raise ValueError("One of 'start' and 'end' OR 'path' must be specified, but they cannot be "
                         "specified simultaneously.")
    if path is None:
        # No path provided.
        path = ccrs.Geodetic()
        path_geoms = None
    elif isinstance(path, str):
        # Path is a column in the dataset.
        path_geoms = df[path]
    elif hasattr(path, "__iter__"):
        # Path is an iterable.
        path_geoms = gpd.GeoSeries(path)
    else:
        # Path is a cartopy.crs object.
        path_geoms = None
    if start is not None and end is not None:
        points = pd.concat([start, end])
    else:
        points = None

    # Set legend variable.
    if legend_var is None:
        if scale is not None:
            legend_var = "scale"
        elif hue is not None:
            legend_var = "hue"

    # After validating the inputs, we are in one of two modes:
    # 1. Projective mode. In this case ``path_geoms`` is None, while ``points`` contains a concatenation of
    #    our points (for use in initializing the plot extents). This case occurs when the user specifies
    #    ``start`` and ``end``, and not ``path``. This is "projective mode" because it means that ``path``
    #    will be a projection---if one is not provided explicitly, the ``gcrs.Geodetic()`` projection.
    # 2. Path mode. In this case ``path_geoms`` is an iterable of LineString entities to be plotted, while
    #    ``points`` is None. This occurs when the user specifies ``path``, and not ``start`` or ``end``.
    #    This is path mode because we will need to plot exactly those paths!
    # At this point we'll initialize the rest of the variables we need. The way that we initialize them is
    # going to depend on which code path we are on. Additionally, we will initialize the `df` variable with
    # a projection dummy, if it has not been initialized already. This `df` will only be used for figuring
    # out the extent, and will be discarded afterwards!
    #
    # Variables we need to generate at this point, and why we need them:
    # 1. (clong, clat) --- To pass this to the projection settings.
    # 2. (xmin, xmax, ymin, ymax) --- To pass this to the extent settings.
    # 3. n --- To pass this to the color array in case no ``color`` is specified.
    if path_geoms is None and points is not None:
        if df.empty:
            df = gpd.GeoDataFrame(geometry=points)
        xs = np.array([p.x for p in points])
        ys = np.array([p.y for p in points])
        xmin, xmax, ymin, ymax = np.min(xs), np.max(xs), np.min(ys), np.max(ys)
        clong, clat = np.mean(xs), np.mean(ys)
        n = int(len(points) / 2)
    else:  # path_geoms is an iterable
        path_geoms = gpd.GeoSeries(path_geoms)
        xmin, xmax, ymin, ymax = _get_envelopes_min_maxes(path_geoms.envelope.exterior)
        clong, clat = (xmin + xmax) / 2, (ymin + ymax) / 2
        n = len(path_geoms)

    # Initialize the figure.
    fig = _init_figure(ax, figsize)

    # Load the projection.
    if projection:
        projection = projection.load(df, {
            'central_longitude': lambda df: clong,
            'central_latitude': lambda df: clat
        })

        # Set up the axis.
        if not ax:
            ax = plt.subplot(111, projection=projection)
    else:
        if not ax:
            ax = plt.gca()

    # Clean up patches.
    _lay_out_axes(ax, projection)

    # Set extent.
    if projection:
        if extent:
            ax.set_extent(extent)
        else:
            ax.set_extent((xmin, xmax, ymin, ymax))
    else:
        if extent:
            ax.set_xlim((extent[0], extent[1]))
            ax.set_ylim((extent[2], extent[3]))
        else:
            ax.set_xlim((xmin, xmax))
            ax.set_ylim((ymin, ymax))

    # Generate the coloring information, if needed. Follows one of two schemes, categorical or continuous,
    # based on whether or not ``k`` is specified (``hue`` must be specified for either to work).
    if k is not None:
        # Categorical colormap code path.
        categorical, k, scheme = _validate_buckets(categorical, k, scheme)
        hue = _validate_hue(df, hue)
        if hue is not None:
            cmap, categories, hue_values = _discrete_colorize(categorical, hue, scheme, k, cmap, vmin, vmax)
            colors = [cmap.to_rgba(v) for v in hue_values]

            # Add a legend, if appropriate.
            if legend and (legend_var != "scale" or scale is None):
                _paint_hue_legend(ax, categories, cmap, legend_labels, legend_kwargs)
        else:
            if 'color' not in kwargs.keys():
                colors = ['steelblue'] * n
            else:
                colors = [kwargs['color']] * n
                kwargs.pop('color')
    elif k is None and hue is not None:
        # Continuous colormap code path.
        hue_values = hue
        cmap = _continuous_colormap(hue_values, cmap, vmin, vmax)
        colors = [cmap.to_rgba(v) for v in hue_values]

        # Add a legend, if appropriate.
        if legend and (legend_var != "scale" or scale is None):
            _paint_colorbar_legend(ax, hue_values, cmap, legend_kwargs)

    # Check if the ``scale`` parameter is filled, and use it to fill a ``values`` name.
    if scale:
        if isinstance(scale, str):
            scalar_values = df[scale]
        else:
            scalar_values = scale

        # Compute a scale function.
        dmin, dmax = np.min(scalar_values), np.max(scalar_values)
        if not scale_func:
            dslope = (limits[1] - limits[0]) / (dmax - dmin)
            dscale = lambda dval: limits[0] + dslope * (dval - dmin)
        else:
            dscale = scale_func(dmin, dmax)

        # Apply the scale function.
        scalar_multiples = np.array([dscale(d) for d in scalar_values])
        widths = scalar_multiples * 1

        # Draw a legend, if appropriate.
        if legend and (legend_var == "scale"):
            _paint_carto_legend(ax, scalar_values, legend_values, legend_labels, dscale, legend_kwargs)
    else:
        widths = [1] * n  # pyplot default

    # Allow overwriting visual arguments.
    if 'linestyle' in kwargs.keys():
        linestyle = kwargs['linestyle']; kwargs.pop('linestyle')
    else:
        linestyle = '-'
    if 'color' in kwargs.keys():
        colors = [kwargs['color']] * n; kwargs.pop('color')
    elif 'edgecolor' in kwargs.keys():
        # plt.plot uses 'color', mpl.ax.add_feature uses 'edgecolor'. Support both.
        colors = [kwargs['edgecolor']] * n; kwargs.pop('edgecolor')
    if 'linewidth' in kwargs.keys():
        widths = [kwargs['linewidth']] * n; kwargs.pop('linewidth')

    if projection:
        # Duck test plot. The first will work if a valid transformation is passed to ``path`` (e.g. we are
        # in the ``start`` + ``end`` case), the second will work if ``path`` is an iterable (e.g. we are in
        # the ``path`` case).
        try:
            for origin, destination, color, width in zip(start, end, colors, widths):
                ax.plot([origin.x, destination.x], [origin.y, destination.y], transform=path,
                        linestyle=linestyle, linewidth=width, color=color, **kwargs)
        except TypeError:
            for line, color, width in zip(path_geoms, colors, widths):
                feature = ShapelyFeature([line], ccrs.PlateCarree())
                ax.add_feature(feature, linestyle=linestyle, linewidth=width, edgecolor=color,
                               facecolor='None', **kwargs)
    else:
        try:
            for origin, destination, color, width in zip(start, end, colors, widths):
                ax.plot([origin.x, destination.x], [origin.y, destination.y], linestyle=linestyle,
                        linewidth=width, color=color, **kwargs)
        except TypeError:
            for path, color, width in zip(path_geoms, colors, widths):
                # We have to implement different methods for dealing with LineString and MultiLineString
                # objects. This calls for, yep, another duck test.
                try:  # LineString
                    line = mpl.lines.Line2D([coord[0] for coord in path.coords],
                                            [coord[1] for coord in path.coords],
                                            linestyle=linestyle, linewidth=width, color=color, **kwargs)
                    ax.add_line(line)
                except NotImplementedError:  # MultiLineString
                    for line in path:
                        line = mpl.lines.Line2D([coord[0] for coord in line.coords],
                                                [coord[1] for coord in line.coords],
                                                linestyle=linestyle, linewidth=width, color=color, **kwargs)
                        ax.add_line(line)

    return ax
Spatial Sankey or flow map.

Parameters
----------
df : GeoDataFrame, optional
    The data being plotted. This parameter is optional - it is not needed if ``start`` and ``end`` (and
    ``hue``, if provided) are iterables.
projection : geoplot.crs object instance, optional
    A geographic projection. For more information refer to `the tutorial page on projections
    <https://nbviewer.jupyter.org/github/ResidentMario/geoplot/blob/master/notebooks/tutorials/Projections.ipynb>`_.
start : str or iterable
    A list of starting points. This parameter is required.
end : str or iterable
    A list of ending points. This parameter is required.
path : geoplot.crs object instance or iterable, optional
    Pass an iterable of paths to draw custom paths (see `this example
    <https://residentmario.github.io/geoplot/examples/dc-street-network.html>`_), or a projection to draw the
    shortest paths in that given projection. The default is ``Geodetic()``, which will connect points using
    `great circle distance <https://en.wikipedia.org/wiki/Great-circle_distance>`_—the true shortest path on
    the surface of the Earth.
hue : None, Series, GeoSeries, iterable, or str, optional
    Applies a colormap to the output points.
categorical : boolean, optional
    Set to ``True`` if ``hue`` references a categorical variable, and ``False`` (the default) otherwise.
    Ignored if ``hue`` is left unspecified.
scheme : None or {"quantiles"|"equal_interval"|"fisher_jenks"}, optional
    Controls how the colormap bin edges are determined. Ignored if ``hue`` is left unspecified.
k : int or None, optional
    Ignored if ``hue`` is left unspecified. Otherwise, if ``categorical`` is False, controls how many colors
    to use (5 is the default). If set to ``None``, a continuous colormap will be used.
cmap : matplotlib color, optional
    The `matplotlib colormap <http://matplotlib.org/examples/color/colormaps_reference.html>`_ to be used.
    Ignored if ``hue`` is left unspecified.
vmin : float, optional
    Values below this level will be colored the same threshold value. Defaults to the dataset minimum.
    Ignored if ``hue`` is left unspecified.
vmax : float, optional
    Values above this level will be colored the same threshold value. Defaults to the dataset maximum.
    Ignored if ``hue`` is left unspecified.
scale : str or iterable, optional
    Applies scaling to the output points. Defaults to None (no scaling).
limits : (min, max) tuple, optional
    The minimum and maximum scale limits. Ignored if ``scale`` is left unspecified.
scale_func : ufunc, optional
    The function used to scale point sizes. Defaults to a linear scale. For more information see `the Gallery
    demo <examples/usa-city-elevations.html>`_.
legend : boolean, optional
    Whether or not to include a legend. Ignored if neither a ``hue`` nor a ``scale`` is specified.
legend_values : list, optional
    The values to use in the legend. Defaults to equal intervals. For more information see `the Gallery demo
    <https://residentmario.github.io/geoplot/examples/largest-cities-usa.html>`_.
legend_labels : list, optional
    The names to use in the legend. Defaults to the variable values. For more information see `the Gallery
    demo <https://residentmario.github.io/geoplot/examples/largest-cities-usa.html>`_.
legend_var : "hue" or "scale", optional
    If both ``hue`` and ``scale`` are specified, which variable to use in the legend.
legend_kwargs : dict, optional
    Keyword arguments to be passed to `the underlying legend <http://matplotlib.org/users/legend_guide.html>`_.
extent : None or (minx, maxx, miny, maxy), optional
    Used to control plot x-axis and y-axis limits manually.
figsize : tuple, optional
    An (x, y) tuple passed to ``matplotlib.figure`` which sets the size, in inches, of the resultant plot.
ax : AxesSubplot or GeoAxesSubplot instance, optional
    A ``matplotlib.axes.AxesSubplot`` or ``cartopy.mpl.geoaxes.GeoAxesSubplot`` instance. Defaults to a new
    axis.
kwargs : dict, optional
    Keyword arguments to be passed to the underlying ``matplotlib`` `Line2D
    <https://matplotlib.org/api/_as_gen/matplotlib.lines.Line2D.html#matplotlib.lines.Line2D>`_ instances.

Returns
-------
``AxesSubplot`` or ``GeoAxesSubplot``
    The plot axis

Examples
--------
A `Sankey diagram <https://en.wikipedia.org/wiki/Sankey_diagram>`_ is a simple visualization demonstrating
flow through a network. A Sankey diagram is useful when you wish to show the volume of things moving between
points or spaces: traffic load on a road network, for example, or inter-airport travel volumes. The
``geoplot`` ``sankey`` adds spatial context to this plot type by laying out the points in meaningful
locations: airport locations, say, or road intersections.

A basic ``sankey`` specifies data, ``start`` points, ``end`` points, and, optionally, a projection. The
``df`` argument is optional; if geometries are provided as independent iterables it is ignored. We overlay
world geometry to aid interpretability.

.. code-block:: python

    ax = gplt.sankey(la_flights, start='start', end='end', projection=gcrs.PlateCarree())
    ax.set_global(); ax.coastlines()

.. image:: ../figures/sankey/sankey-geospatial-context.png

The lines appear curved because they are `great circle
<https://en.wikipedia.org/wiki/Great-circle_distance>`_ paths, which are the shortest routes between points
on a sphere.

.. code-block:: python

    ax = gplt.sankey(la_flights, start='start', end='end', projection=gcrs.Orthographic())
    ax.set_global(); ax.coastlines(); ax.outline_patch.set_visible(True)

.. image:: ../figures/sankey/sankey-greatest-circle-distance.png

To plot using a different distance metric pass a ``cartopy`` ``crs`` object (*not* a ``geoplot`` one) to the
``path`` parameter.

.. code-block:: python

    import cartopy.crs as ccrs
    ax = gplt.sankey(la_flights, start='start', end='end', projection=gcrs.PlateCarree(),
                     path=ccrs.PlateCarree())
    ax.set_global(); ax.coastlines()

.. image:: ../figures/sankey/sankey-path-projection.png

If your data has custom paths, you can use those instead, via the ``path`` parameter.

.. code-block:: python

    gplt.sankey(dc, path=dc.geometry, projection=gcrs.AlbersEqualArea(), scale='aadt')

.. image:: ../figures/sankey/sankey-path.png

``hue`` parameterizes the color, and ``cmap`` controls the colormap. ``legend`` adds a legend. Keyword
arguments can be passed to the legend using the ``legend_kwargs`` argument. These arguments will be passed to
the underlying ``matplotlib`` `Legend <http://matplotlib.org/api/legend_api.html#matplotlib.legend.Legend>`_.
The ``loc`` and ``bbox_to_anchor`` parameters are particularly useful for positioning the legend.

.. code-block:: python

    ax = gplt.sankey(network, projection=gcrs.PlateCarree(), start='from', end='to', hue='mock_variable',
                     cmap='RdYlBu', legend=True, legend_kwargs={'bbox_to_anchor': (1.4, 1.0)})
    ax.set_global()
    ax.coastlines()

.. image:: ../figures/sankey/sankey-legend-kwargs.png

Change the number of bins by specifying an alternative ``k`` value. To use a continuous colormap, explicitly
specify ``k=None``. You can change the binning scheme with ``scheme``. The default is ``quantiles``, which
bins observations into classes of different sizes but the same numbers of observations. ``equal_interval``
will create bins that are the same size, but potentially containing different numbers of observations. The
more complicated ``fisher_jenks`` scheme is an intermediate between the two.

.. code-block:: python

    ax = gplt.sankey(network, projection=gcrs.PlateCarree(), start='from', end='to', hue='mock_variable',
                     cmap='RdYlBu', legend=True, legend_kwargs={'bbox_to_anchor': (1.25, 1.0)}, k=3,
                     scheme='equal_interval')
    ax.set_global()
    ax.coastlines()

.. image:: ../figures/sankey/sankey-scheme.png

If your variable of interest is already `categorical
<http://pandas.pydata.org/pandas-docs/stable/categorical.html>`_, specify ``categorical=True`` to use the
labels in your dataset directly.

.. code-block:: python

    ax = gplt.sankey(network, projection=gcrs.PlateCarree(), start='from', end='to', hue='above_meridian',
                     cmap='RdYlBu', legend=True, legend_kwargs={'bbox_to_anchor': (1.2, 1.0)},
                     categorical=True)
    ax.set_global()
    ax.coastlines()

.. image:: ../figures/sankey/sankey-categorical.png

``scale`` can be used to enable ``linewidth`` as a visual variable. Adjust the upper and lower bound with the
``limits`` parameter.

.. code-block:: python

    ax = gplt.sankey(la_flights, projection=gcrs.PlateCarree(),
                     extent=(-125.0011, -66.9326, 24.9493, 49.5904), start='start', end='end',
                     scale='Passengers', limits=(0.1, 5), legend=True,
                     legend_kwargs={'bbox_to_anchor': (1.1, 1.0)})
    ax.coastlines()

.. image:: ../figures/sankey/sankey-scale.png

The default scaling function is linear: an observation at the midpoint of two others will be exactly midway
between them in size. To specify an alternative scaling function, use the ``scale_func`` parameter. This
should be a factory function of two variables which, when given the maximum and minimum of the dataset,
returns a scaling function which will be applied to the rest of the data. A demo is available in the
`example gallery <examples/usa-city-elevations.html>`_.

.. code-block:: python

    def trivial_scale(minval, maxval):
        return lambda v: 1

    ax = gplt.sankey(la_flights, projection=gcrs.PlateCarree(),
                     extent=(-125.0011, -66.9326, 24.9493, 49.5904), start='start', end='end',
                     scale='Passengers', scale_func=trivial_scale, legend=True,
                     legend_kwargs={'bbox_to_anchor': (1.1, 1.0)})
    ax.coastlines()

.. image:: ../figures/sankey/sankey-scale-func.png

``hue`` and ``scale`` can co-exist. In case more than one visual variable is used, control which one appears
in the legend using ``legend_var``.

.. code-block:: python

    ax = gplt.sankey(network, projection=gcrs.PlateCarree(), start='from', end='to', scale='mock_data',
                     legend=True, legend_kwargs={'bbox_to_anchor': (1.1, 1.0)}, hue='mock_data',
                     legend_var="hue")
    ax.set_global()
    ax.coastlines()

.. image:: ../figures/sankey/sankey-legend-var.png
Below is the the instruction that describes the task: ### Input: Spatial Sankey or flow map. Parameters ---------- df : GeoDataFrame, optional. The data being plotted. This parameter is optional - it is not needed if ``start`` and ``end`` (and ``hue``, if provided) are iterables. projection : geoplot.crs object instance, optional A geographic projection. For more information refer to `the tutorial page on projections <https://nbviewer.jupyter.org/github/ResidentMario/geoplot/blob/master/notebooks/tutorials/Projections.ipynb>`_. start : str or iterable A list of starting points. This parameter is required. end : str or iterable A list of ending points. This parameter is required. path : geoplot.crs object instance or iterable, optional Pass an iterable of paths to draw custom paths (see `this example <https://residentmario.github.io/geoplot/examples/dc-street-network.html>`_), or a projection to draw the shortest paths in that given projection. The default is ``Geodetic()``, which will connect points using `great circle distance <https://en.wikipedia.org/wiki/Great-circle_distance>`_—the true shortest path on the surface of the Earth. hue : None, Series, GeoSeries, iterable, or str, optional Applies a colormap to the output points. categorical : boolean, optional Set to ``True`` if ``hue`` references a categorical variable, and ``False`` (the default) otherwise. Ignored if ``hue`` is left unspecified. scheme : None or {"quantiles"|"equal_interval"|"fisher_jenks"}, optional Controls how the colormap bin edges are determined. Ignored if ``hue`` is left unspecified. k : int or None, optional Ignored if ``hue`` is left unspecified. Otherwise, if ``categorical`` is False, controls how many colors to use (5 is the default). If set to ``None``, a continuous colormap will be used. cmap : matplotlib color, optional The `matplotlib colormap <http://matplotlib.org/examples/color/colormaps_reference.html>`_ to be used. Ignored if ``hue`` is left unspecified. 
vmin : float, optional Values below this level will be colored the same threshold value. Defaults to the dataset minimum. Ignored if ``hue`` is left unspecified. vmax : float, optional Values above this level will be colored the same threshold value. Defaults to the dataset maximum. Ignored if ``hue`` is left unspecified. scale : str or iterable, optional Applies scaling to the output points. Defaults to None (no scaling). limits : (min, max) tuple, optional The minimum and maximum scale limits. Ignored if ``scale`` is left specified. scale_func : ufunc, optional The function used to scale point sizes. Defaults to a linear scale. For more information see `the Gallery demo <examples/usa-city-elevations.html>`_. legend : boolean, optional Whether or not to include a legend. Ignored if neither a ``hue`` nor a ``scale`` is specified. legend_values : list, optional The values to use in the legend. Defaults to equal intervals. For more information see `the Gallery demo <https://residentmario.github.io/geoplot/examples/largest-cities-usa.html>`_. legend_labels : list, optional The names to use in the legend. Defaults to the variable values. For more information see `the Gallery demo <https://residentmario.github.io/geoplot/examples/largest-cities-usa.html>`_. legend_var : "hue" or "scale", optional If both ``hue`` and ``scale`` are specified, which variable to use in the legend. legend_kwargs : dict, optional Keyword arguments to be passed to `the underlying legend <http://matplotlib.org/users/legend_guide.html>`_. extent : None or (minx, maxx, miny, maxy), optional Used to control plot x-axis and y-axis limits manually. figsize : tuple, optional An (x, y) tuple passed to ``matplotlib.figure`` which sets the size, in inches, of the resultant plot. ax : AxesSubplot or GeoAxesSubplot instance, optional A ``matplotlib.axes.AxesSubplot`` or ``cartopy.mpl.geoaxes.GeoAxesSubplot`` instance. Defaults to a new axis. 
kwargs: dict, optional Keyword arguments to be passed to the underlying ``matplotlib`` `Line2D <https://matplotlib.org/api/_as_gen/matplotlib.lines.Line2D.html#matplotlib.lines.Line2D>`_ instances. Returns ------- ``AxesSubplot`` or ``GeoAxesSubplot`` The plot axis Examples -------- A `Sankey diagram <https://en.wikipedia.org/wiki/Sankey_diagram>`_ is a simple visualization demonstrating flow through a network. A Sankey diagram is useful when you wish to show the volume of things moving between points or spaces: traffic load on a road network, for example, or inter-airport travel volumes. The ``geoplot`` ``sankey`` adds spatial context to this plot type by laying out the points in meaningful locations: airport locations, say, or road intersections. A basic ``sankey`` specifies data, ``start`` points, ``end`` points, and, optionally, a projection. The ``df`` argument is optional; if geometries are provided as independent iterables it is ignored. We overlay world geometry to aid interpretability. .. code-block:: python ax = gplt.sankey(la_flights, start='start', end='end', projection=gcrs.PlateCarree()) ax.set_global(); ax.coastlines() .. image:: ../figures/sankey/sankey-geospatial-context.png The lines appear curved because they are `great circle <https://en.wikipedia.org/wiki/Great-circle_distance>`_ paths, which are the shortest routes between points on a sphere. .. code-block:: python ax = gplt.sankey(la_flights, start='start', end='end', projection=gcrs.Orthographic()) ax.set_global(); ax.coastlines(); ax.outline_patch.set_visible(True) .. image:: ../figures/sankey/sankey-greatest-circle-distance.png To plot using a different distance metric pass a ``cartopy`` ``crs`` object (*not* a ``geoplot`` one) to the ``path`` parameter. .. code-block:: python import cartopy.crs as ccrs ax = gplt.sankey(la_flights, start='start', end='end', projection=gcrs.PlateCarree(), path=ccrs.PlateCarree()) ax.set_global(); ax.coastlines() .. 
image:: ../figures/sankey/sankey-path-projection.png If your data has custom paths, you can use those instead, via the ``path`` parameter. .. code-block:: python gplt.sankey(dc, path=dc.geometry, projection=gcrs.AlbersEqualArea(), scale='aadt') .. image:: ../figures/sankey/sankey-path.png ``hue`` parameterizes the color, and ``cmap`` controls the colormap. ``legend`` adds a legend. Keyword arguments can be passed to the legend using the ``legend_kwargs`` argument. These arguments will be passed to the underlying ``matplotlib`` `Legend <http://matplotlib.org/api/legend_api.html#matplotlib.legend.Legend>`_. The ``loc`` and ``bbox_to_anchor`` parameters are particularly useful for positioning the legend. .. code-block:: python ax = gplt.sankey(network, projection=gcrs.PlateCarree(), start='from', end='to', hue='mock_variable', cmap='RdYlBu', legend=True, legend_kwargs={'bbox_to_anchor': (1.4, 1.0)}) ax.set_global() ax.coastlines() .. image:: ../figures/sankey/sankey-legend-kwargs.png Change the number of bins by specifying an alternative ``k`` value. To use a continuous colormap, explicitly specify ``k=None``. You can change the binning scheme with ``scheme``. The default is ``quantile``, which bins observations into classes of different sizes but the same numbers of observations. ``equal_interval`` will create bins that are the same size, but potentially containing different numbers of observations. The more complicated ``fisher_jenks`` scheme is an intermediate between the two. .. code-block:: python ax = gplt.sankey(network, projection=gcrs.PlateCarree(), start='from', end='to', hue='mock_variable', cmap='RdYlBu', legend=True, legend_kwargs={'bbox_to_anchor': (1.25, 1.0)}, k=3, scheme='equal_interval') ax.set_global() ax.coastlines() .. 
image:: ../figures/sankey/sankey-scheme.png If your variable of interest is already `categorical <http://pandas.pydata.org/pandas-docs/stable/categorical.html>`_, specify ``categorical=True`` to use the labels in your dataset directly. .. code-block:: python ax = gplt.sankey(network, projection=gcrs.PlateCarree(), start='from', end='to', hue='above_meridian', cmap='RdYlBu', legend=True, legend_kwargs={'bbox_to_anchor': (1.2, 1.0)}, categorical=True) ax.set_global() ax.coastlines() .. image:: ../figures/sankey/sankey-categorical.png ``scale`` can be used to enable ``linewidth`` as a visual variable. Adjust the upper and lower bound with the ``limits`` parameter. .. code-block:: python ax = gplt.sankey(la_flights, projection=gcrs.PlateCarree(), extent=(-125.0011, -66.9326, 24.9493, 49.5904), start='start', end='end', scale='Passengers', limits=(0.1, 5), legend=True, legend_kwargs={'bbox_to_anchor': (1.1, 1.0)}) ax.coastlines() .. image:: ../figures/sankey/sankey-scale.png The default scaling function is linear: an observation at the midpoint of two others will be exactly midway between them in size. To specify an alternative scaling function, use the ``scale_func`` parameter. This should be a factory function of two variables which, when given the maximum and minimum of the dataset, returns a scaling function which will be applied to the rest of the data. A demo is available in the `example gallery <examples/usa-city-elevations.html>`_. .. code-block:: python def trivial_scale(minval, maxval): return lambda v: 1 ax = gplt.sankey(la_flights, projection=gcrs.PlateCarree(), extent=(-125.0011, -66.9326, 24.9493, 49.5904), start='start', end='end', scale='Passengers', scale_func=trivial_scale, legend=True, legend_kwargs={'bbox_to_anchor': (1.1, 1.0)}) ax.coastlines() .. image:: ../figures/sankey/sankey-scale-func.png ``hue`` and ``scale`` can co-exist. In case more than one visual variable is used, control which one appears in the legend using ``legend_var``. .. 
code-block:: python ax = gplt.sankey(network, projection=gcrs.PlateCarree(), start='from', end='to', scale='mock_data', legend=True, legend_kwargs={'bbox_to_anchor': (1.1, 1.0)}, hue='mock_data', legend_var="hue") ax.set_global() ax.coastlines() .. image:: ../figures/sankey/sankey-legend-var.png ### Response: def sankey(*args, projection=None, start=None, end=None, path=None, hue=None, categorical=False, scheme=None, k=5, cmap='viridis', vmin=None, vmax=None, legend=False, legend_kwargs=None, legend_labels=None, legend_values=None, legend_var=None, extent=None, figsize=(8, 6), ax=None, scale=None, limits=(1, 5), scale_func=None, **kwargs): """ Spatial Sankey or flow map. Parameters ---------- df : GeoDataFrame, optional. The data being plotted. This parameter is optional - it is not needed if ``start`` and ``end`` (and ``hue``, if provided) are iterables. projection : geoplot.crs object instance, optional A geographic projection. For more information refer to `the tutorial page on projections <https://nbviewer.jupyter.org/github/ResidentMario/geoplot/blob/master/notebooks/tutorials/Projections.ipynb>`_. start : str or iterable A list of starting points. This parameter is required. end : str or iterable A list of ending points. This parameter is required. path : geoplot.crs object instance or iterable, optional Pass an iterable of paths to draw custom paths (see `this example <https://residentmario.github.io/geoplot/examples/dc-street-network.html>`_), or a projection to draw the shortest paths in that given projection. The default is ``Geodetic()``, which will connect points using `great circle distance <https://en.wikipedia.org/wiki/Great-circle_distance>`_—the true shortest path on the surface of the Earth. hue : None, Series, GeoSeries, iterable, or str, optional Applies a colormap to the output points. categorical : boolean, optional Set to ``True`` if ``hue`` references a categorical variable, and ``False`` (the default) otherwise. 
Ignored if ``hue`` is left unspecified. scheme : None or {"quantiles"|"equal_interval"|"fisher_jenks"}, optional Controls how the colormap bin edges are determined. Ignored if ``hue`` is left unspecified. k : int or None, optional Ignored if ``hue`` is left unspecified. Otherwise, if ``categorical`` is False, controls how many colors to use (5 is the default). If set to ``None``, a continuous colormap will be used. cmap : matplotlib color, optional The `matplotlib colormap <http://matplotlib.org/examples/color/colormaps_reference.html>`_ to be used. Ignored if ``hue`` is left unspecified. vmin : float, optional Values below this level will be colored the same as this threshold value. Defaults to the dataset minimum. Ignored if ``hue`` is left unspecified. vmax : float, optional Values above this level will be colored the same as this threshold value. Defaults to the dataset maximum. Ignored if ``hue`` is left unspecified. scale : str or iterable, optional Applies scaling to the output points. Defaults to None (no scaling). limits : (min, max) tuple, optional The minimum and maximum scale limits. Ignored if ``scale`` is left unspecified. scale_func : ufunc, optional The function used to scale point sizes. Defaults to a linear scale. For more information see `the Gallery demo <examples/usa-city-elevations.html>`_. legend : boolean, optional Whether or not to include a legend. Ignored if neither a ``hue`` nor a ``scale`` is specified. legend_values : list, optional The values to use in the legend. Defaults to equal intervals. For more information see `the Gallery demo <https://residentmario.github.io/geoplot/examples/largest-cities-usa.html>`_. legend_labels : list, optional The names to use in the legend. Defaults to the variable values. For more information see `the Gallery demo <https://residentmario.github.io/geoplot/examples/largest-cities-usa.html>`_. legend_var : "hue" or "scale", optional If both ``hue`` and ``scale`` are specified, which variable to use in the legend. 
legend_kwargs : dict, optional Keyword arguments to be passed to `the underlying legend <http://matplotlib.org/users/legend_guide.html>`_. extent : None or (minx, maxx, miny, maxy), optional Used to control plot x-axis and y-axis limits manually. figsize : tuple, optional An (x, y) tuple passed to ``matplotlib.figure`` which sets the size, in inches, of the resultant plot. ax : AxesSubplot or GeoAxesSubplot instance, optional A ``matplotlib.axes.AxesSubplot`` or ``cartopy.mpl.geoaxes.GeoAxesSubplot`` instance. Defaults to a new axis. kwargs: dict, optional Keyword arguments to be passed to the underlying ``matplotlib`` `Line2D <https://matplotlib.org/api/_as_gen/matplotlib.lines.Line2D.html#matplotlib.lines.Line2D>`_ instances. Returns ------- ``AxesSubplot`` or ``GeoAxesSubplot`` The plot axis Examples -------- A `Sankey diagram <https://en.wikipedia.org/wiki/Sankey_diagram>`_ is a simple visualization demonstrating flow through a network. A Sankey diagram is useful when you wish to show the volume of things moving between points or spaces: traffic load on a road network, for example, or inter-airport travel volumes. The ``geoplot`` ``sankey`` adds spatial context to this plot type by laying out the points in meaningful locations: airport locations, say, or road intersections. A basic ``sankey`` specifies data, ``start`` points, ``end`` points, and, optionally, a projection. The ``df`` argument is optional; if geometries are provided as independent iterables it is ignored. We overlay world geometry to aid interpretability. .. code-block:: python ax = gplt.sankey(la_flights, start='start', end='end', projection=gcrs.PlateCarree()) ax.set_global(); ax.coastlines() .. image:: ../figures/sankey/sankey-geospatial-context.png The lines appear curved because they are `great circle <https://en.wikipedia.org/wiki/Great-circle_distance>`_ paths, which are the shortest routes between points on a sphere. .. 
code-block:: python ax = gplt.sankey(la_flights, start='start', end='end', projection=gcrs.Orthographic()) ax.set_global(); ax.coastlines(); ax.outline_patch.set_visible(True) .. image:: ../figures/sankey/sankey-greatest-circle-distance.png To plot using a different distance metric pass a ``cartopy`` ``crs`` object (*not* a ``geoplot`` one) to the ``path`` parameter. .. code-block:: python import cartopy.crs as ccrs ax = gplt.sankey(la_flights, start='start', end='end', projection=gcrs.PlateCarree(), path=ccrs.PlateCarree()) ax.set_global(); ax.coastlines() .. image:: ../figures/sankey/sankey-path-projection.png If your data has custom paths, you can use those instead, via the ``path`` parameter. .. code-block:: python gplt.sankey(dc, path=dc.geometry, projection=gcrs.AlbersEqualArea(), scale='aadt') .. image:: ../figures/sankey/sankey-path.png ``hue`` parameterizes the color, and ``cmap`` controls the colormap. ``legend`` adds a legend. Keyword arguments can be passed to the legend using the ``legend_kwargs`` argument. These arguments will be passed to the underlying ``matplotlib`` `Legend <http://matplotlib.org/api/legend_api.html#matplotlib.legend.Legend>`_. The ``loc`` and ``bbox_to_anchor`` parameters are particularly useful for positioning the legend. .. code-block:: python ax = gplt.sankey(network, projection=gcrs.PlateCarree(), start='from', end='to', hue='mock_variable', cmap='RdYlBu', legend=True, legend_kwargs={'bbox_to_anchor': (1.4, 1.0)}) ax.set_global() ax.coastlines() .. image:: ../figures/sankey/sankey-legend-kwargs.png Change the number of bins by specifying an alternative ``k`` value. To use a continuous colormap, explicitly specify ``k=None``. You can change the binning scheme with ``scheme``. The default is ``quantile``, which bins observations into classes of different sizes but the same numbers of observations. ``equal_interval`` will create bins that are the same size, but potentially containing different numbers of observations. 
The more complicated ``fisher_jenks`` scheme is an intermediate between the two. .. code-block:: python ax = gplt.sankey(network, projection=gcrs.PlateCarree(), start='from', end='to', hue='mock_variable', cmap='RdYlBu', legend=True, legend_kwargs={'bbox_to_anchor': (1.25, 1.0)}, k=3, scheme='equal_interval') ax.set_global() ax.coastlines() .. image:: ../figures/sankey/sankey-scheme.png If your variable of interest is already `categorical <http://pandas.pydata.org/pandas-docs/stable/categorical.html>`_, specify ``categorical=True`` to use the labels in your dataset directly. .. code-block:: python ax = gplt.sankey(network, projection=gcrs.PlateCarree(), start='from', end='to', hue='above_meridian', cmap='RdYlBu', legend=True, legend_kwargs={'bbox_to_anchor': (1.2, 1.0)}, categorical=True) ax.set_global() ax.coastlines() .. image:: ../figures/sankey/sankey-categorical.png ``scale`` can be used to enable ``linewidth`` as a visual variable. Adjust the upper and lower bound with the ``limits`` parameter. .. code-block:: python ax = gplt.sankey(la_flights, projection=gcrs.PlateCarree(), extent=(-125.0011, -66.9326, 24.9493, 49.5904), start='start', end='end', scale='Passengers', limits=(0.1, 5), legend=True, legend_kwargs={'bbox_to_anchor': (1.1, 1.0)}) ax.coastlines() .. image:: ../figures/sankey/sankey-scale.png The default scaling function is linear: an observation at the midpoint of two others will be exactly midway between them in size. To specify an alternative scaling function, use the ``scale_func`` parameter. This should be a factory function of two variables which, when given the maximum and minimum of the dataset, returns a scaling function which will be applied to the rest of the data. A demo is available in the `example gallery <examples/usa-city-elevations.html>`_. .. 
code-block:: python def trivial_scale(minval, maxval): return lambda v: 1 ax = gplt.sankey(la_flights, projection=gcrs.PlateCarree(), extent=(-125.0011, -66.9326, 24.9493, 49.5904), start='start', end='end', scale='Passengers', scale_func=trivial_scale, legend=True, legend_kwargs={'bbox_to_anchor': (1.1, 1.0)}) ax.coastlines() .. image:: ../figures/sankey/sankey-scale-func.png ``hue`` and ``scale`` can co-exist. In case more than one visual variable is used, control which one appears in the legend using ``legend_var``. .. code-block:: python ax = gplt.sankey(network, projection=gcrs.PlateCarree(), start='from', end='to', scale='mock_data', legend=True, legend_kwargs={'bbox_to_anchor': (1.1, 1.0)}, hue='mock_data', legend_var="hue") ax.set_global() ax.coastlines() .. image:: ../figures/sankey/sankey-legend-var.png """ # Validate df. if len(args) > 1: raise ValueError("Invalid input.") elif len(args) == 1: df = args[0] else: df = pd.DataFrame() # df = None # bind the local name here; initialize in a bit. # Validate the rest of the input. if ((start is None) or (end is None)) and not hasattr(path, "__iter__"): raise ValueError("The 'start' and 'end' parameters must both be specified.") if (isinstance(start, str) or isinstance(end, str)) and df.empty: raise ValueError("Invalid input.") if isinstance(start, str): start = df[start] elif start is not None: start = gpd.GeoSeries(start) if isinstance(end, str): end = df[end] elif end is not None: end = gpd.GeoSeries(end) if (start is not None) and (end is not None) and hasattr(path, "__iter__"): raise ValueError("One of 'start' and 'end' OR 'path' must be specified, but they cannot be specified " "simultaneously.") if path is None: # No path provided. path = ccrs.Geodetic() path_geoms = None elif isinstance(path, str): # Path is a column in the dataset. path_geoms = df[path] elif hasattr(path, "__iter__"): # Path is an iterable. path_geoms = gpd.GeoSeries(path) else: # Path is a cartopy.crs object. 
path_geoms = None if start is not None and end is not None: points = pd.concat([start, end]) else: points = None # Set legend variable. if legend_var is None: if scale is not None: legend_var = "scale" elif hue is not None: legend_var = "hue" # After validating the inputs, we are in one of two modes: # 1. Projective mode. In this case ``path_geoms`` is None, while ``points`` contains a concatenation of our # points (for use in initializing the plot extents). This case occurs when the user specifies ``start`` and # ``end``, and not ``path``. This is "projective mode" because it means that ``path`` will be a # projection---if one is not provided explicitly, the ``gcrs.Geodetic()`` projection. # 2. Path mode. In this case ``path_geoms`` is an iterable of LineString entities to be plotted, while ``points`` # is None. This occurs when the user specifies ``path``, and not ``start`` or ``end``. This is path mode # because we will need to plot exactly those paths! # At this point we'll initialize the rest of the variables we need. The way that we initialize them is going to # depend on which code path we are on. Additionally, we will initialize the `df` variable with a projection # dummy, if it has not been initialized already. This `df` will only be used for figuring out the extent, # and will be discarded afterwards! # # Variables we need to generate at this point, and why we need them: # 1. (clong, clat) --- To pass this to the projection settings. # 2. (xmin, xmax, ymin, ymax) --- To pass this to the extent settings. # 3. n --- To pass this to the color array in case no ``color`` is specified. 
if path_geoms is None and points is not None: if df.empty: df = gpd.GeoDataFrame(geometry=points) xs = np.array([p.x for p in points]) ys = np.array([p.y for p in points]) xmin, xmax, ymin, ymax = np.min(xs), np.max(xs), np.min(ys), np.max(ys) clong, clat = np.mean(xs), np.mean(ys) n = int(len(points) / 2) else: # path_geoms is an iterable path_geoms = gpd.GeoSeries(path_geoms) xmin, xmax, ymin, ymax = _get_envelopes_min_maxes(path_geoms.envelope.exterior) clong, clat = (xmin + xmax) / 2, (ymin + ymax) / 2 n = len(path_geoms) # Initialize the figure. fig = _init_figure(ax, figsize) # Load the projection. if projection: projection = projection.load(df, { 'central_longitude': lambda df: clong, 'central_latitude': lambda df: clat }) # Set up the axis. if not ax: ax = plt.subplot(111, projection=projection) else: if not ax: ax = plt.gca() # Clean up patches. _lay_out_axes(ax, projection) # Set extent. if projection: if extent: ax.set_extent(extent) else: ax.set_extent((xmin, xmax, ymin, ymax)) else: if extent: ax.set_xlim((extent[0], extent[1])) ax.set_ylim((extent[2], extent[3])) else: ax.set_xlim((xmin, xmax)) ax.set_ylim((ymin, ymax)) # Generate the coloring information, if needed. Follows one of two schemes, categorical or continuous, # based on whether or not ``k`` is specified (``hue`` must be specified for either to work). if k is not None: # Categorical colormap code path. categorical, k, scheme = _validate_buckets(categorical, k, scheme) hue = _validate_hue(df, hue) if hue is not None: cmap, categories, hue_values = _discrete_colorize(categorical, hue, scheme, k, cmap, vmin, vmax) colors = [cmap.to_rgba(v) for v in hue_values] # Add a legend, if appropriate. 
if legend and (legend_var != "scale" or scale is None): _paint_hue_legend(ax, categories, cmap, legend_labels, legend_kwargs) else: if 'color' not in kwargs.keys(): colors = ['steelblue'] * n else: colors = [kwargs['color']] * n kwargs.pop('color') elif k is None and hue is not None: # Continuous colormap code path. hue_values = hue cmap = _continuous_colormap(hue_values, cmap, vmin, vmax) colors = [cmap.to_rgba(v) for v in hue_values] # Add a legend, if appropriate. if legend and (legend_var != "scale" or scale is None): _paint_colorbar_legend(ax, hue_values, cmap, legend_kwargs) # Check if the ``scale`` parameter is filled, and use it to fill a ``values`` name. if scale: if isinstance(scale, str): scalar_values = df[scale] else: scalar_values = scale # Compute a scale function. dmin, dmax = np.min(scalar_values), np.max(scalar_values) if not scale_func: dslope = (limits[1] - limits[0]) / (dmax - dmin) dscale = lambda dval: limits[0] + dslope * (dval - dmin) else: dscale = scale_func(dmin, dmax) # Apply the scale function. scalar_multiples = np.array([dscale(d) for d in scalar_values]) widths = scalar_multiples * 1 # Draw a legend, if appropriate. if legend and (legend_var == "scale"): _paint_carto_legend(ax, scalar_values, legend_values, legend_labels, dscale, legend_kwargs) else: widths = [1] * n # pyplot default # Allow overwriting visual arguments. if 'linestyle' in kwargs.keys(): linestyle = kwargs['linestyle']; kwargs.pop('linestyle') else: linestyle = '-' if 'color' in kwargs.keys(): colors = [kwargs['color']]*n; kwargs.pop('color') elif 'edgecolor' in kwargs.keys(): # plt.plot uses 'color', mpl.ax.add_feature uses 'edgecolor'. Support both. colors = [kwargs['edgecolor']]*n; kwargs.pop('edgecolor') if 'linewidth' in kwargs.keys(): widths = [kwargs['linewidth']]*n; kwargs.pop('linewidth') if projection: # Duck test plot. The first will work if a valid transformation is passed to ``path`` (e.g. 
we are in the # ``start`` + ``end`` case), the second will work if ``path`` is an iterable (e.g. we are in the ``path`` case). try: for origin, destination, color, width in zip(start, end, colors, widths): ax.plot([origin.x, destination.x], [origin.y, destination.y], transform=path, linestyle=linestyle, linewidth=width, color=color, **kwargs) except TypeError: for line, color, width in zip(path_geoms, colors, widths): feature = ShapelyFeature([line], ccrs.PlateCarree()) ax.add_feature(feature, linestyle=linestyle, linewidth=width, edgecolor=color, facecolor='None', **kwargs) else: try: for origin, destination, color, width in zip(start, end, colors, widths): ax.plot([origin.x, destination.x], [origin.y, destination.y], linestyle=linestyle, linewidth=width, color=color, **kwargs) except TypeError: for path, color, width in zip(path_geoms, colors, widths): # We have to implement different methods for dealing with LineString and MultiLineString objects. # This calls for, yep, another duck test. try: # LineString line = mpl.lines.Line2D([coord[0] for coord in path.coords], [coord[1] for coord in path.coords], linestyle=linestyle, linewidth=width, color=color, **kwargs) ax.add_line(line) except NotImplementedError: # MultiLineString for line in path: line = mpl.lines.Line2D([coord[0] for coord in line.coords], [coord[1] for coord in line.coords], linestyle=linestyle, linewidth=width, color=color, **kwargs) ax.add_line(line) return ax
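The default linear scale computed above (``dslope`` and ``dscale``) maps the data range onto the ``limits`` interval. A minimal standalone sketch of that logic, with invented sample values (not geoplot's public API):

```python
def make_linear_scale(limits, values):
    """Return the default linear scale used when no ``scale_func`` is given:
    maps [min(values), max(values)] onto [limits[0], limits[1]]."""
    dmin, dmax = min(values), max(values)
    dslope = (limits[1] - limits[0]) / (dmax - dmin)
    return lambda dval: limits[0] + dslope * (dval - dmin)

passengers = [100, 300, 500]
dscale = make_linear_scale((1, 5), passengers)
widths = [dscale(v) for v in passengers]
print(widths)  # [1.0, 3.0, 5.0]
```

Note how the midpoint of the data (300) lands exactly at the midpoint of the limits (3.0), matching the "exactly midway" behavior described in the docstring.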
def get_design(self, design_name): """ Returns dict representation of the design document with the matching name design_name <str> name of the design """ try: r = requests.request( "GET", "%s/%s/_design/%s" % ( self.host, self.database_name, design_name ), auth=self.auth ) return self.result(r.text) except: raise
Returns dict representation of the design document with the matching name design_name <str> name of the design
Below is the instruction that describes the task: ### Input: Returns dict representation of the design document with the matching name design_name <str> name of the design ### Response: def get_design(self, design_name): """ Returns dict representation of the design document with the matching name design_name <str> name of the design """ try: r = requests.request( "GET", "%s/%s/_design/%s" % ( self.host, self.database_name, design_name ), auth=self.auth ) return self.result(r.text) except: raise
def get_thing_shadow(self, **kwargs): r""" Call shadow lambda to obtain current shadow state. :Keyword Arguments: * *thingName* (``string``) -- [REQUIRED] The name of the thing. :returns: (``dict``) -- The output from the GetThingShadow operation * *payload* (``bytes``) -- The state information, in JSON format. """ thing_name = self._get_required_parameter('thingName', **kwargs) payload = b'' return self._shadow_op('get', thing_name, payload)
r""" Call shadow lambda to obtain current shadow state. :Keyword Arguments: * *thingName* (``string``) -- [REQUIRED] The name of the thing. :returns: (``dict``) -- The output from the GetThingShadow operation * *payload* (``bytes``) -- The state information, in JSON format.
Below is the instruction that describes the task: ### Input: r""" Call shadow lambda to obtain current shadow state. :Keyword Arguments: * *thingName* (``string``) -- [REQUIRED] The name of the thing. :returns: (``dict``) -- The output from the GetThingShadow operation * *payload* (``bytes``) -- The state information, in JSON format. ### Response: def get_thing_shadow(self, **kwargs): r""" Call shadow lambda to obtain current shadow state. :Keyword Arguments: * *thingName* (``string``) -- [REQUIRED] The name of the thing. :returns: (``dict``) -- The output from the GetThingShadow operation * *payload* (``bytes``) -- The state information, in JSON format. """ thing_name = self._get_required_parameter('thingName', **kwargs) payload = b'' return self._shadow_op('get', thing_name, payload)
def state_type_changed(self, model, prop_name, info): """Reopen state editor when state type is changed When the type of the observed state changes, a new model is created. The look of this controller's view depends on the kind of model. Therefore, we have to destroy this editor and open a new one with the new model. """ msg = info['arg'] # print(self.__class__.__name__, "state_type_changed check", info) if msg.action in ['change_state_type', 'change_root_state_type'] and msg.after: # print(self.__class__.__name__, "state_type_changed") import rafcon.gui.singleton as gui_singletons msg = info['arg'] new_state_m = msg.affected_models[-1] states_editor_ctrl = gui_singletons.main_window_controller.get_controller('states_editor_ctrl') states_editor_ctrl.recreate_state_editor(self.model, new_state_m)
Reopen state editor when state type is changed When the type of the observed state changes, a new model is created. The look of this controller's view depends on the kind of model. Therefore, we have to destroy this editor and open a new one with the new model.
Below is the instruction that describes the task: ### Input: Reopen state editor when state type is changed When the type of the observed state changes, a new model is created. The look of this controller's view depends on the kind of model. Therefore, we have to destroy this editor and open a new one with the new model. ### Response: def state_type_changed(self, model, prop_name, info): """Reopen state editor when state type is changed When the type of the observed state changes, a new model is created. The look of this controller's view depends on the kind of model. Therefore, we have to destroy this editor and open a new one with the new model. """ msg = info['arg'] # print(self.__class__.__name__, "state_type_changed check", info) if msg.action in ['change_state_type', 'change_root_state_type'] and msg.after: # print(self.__class__.__name__, "state_type_changed") import rafcon.gui.singleton as gui_singletons msg = info['arg'] new_state_m = msg.affected_models[-1] states_editor_ctrl = gui_singletons.main_window_controller.get_controller('states_editor_ctrl') states_editor_ctrl.recreate_state_editor(self.model, new_state_m)
def get_specs(data): """ Takes a magic format file and returns a list of unique specimen names """ # sort the specimen names speclist = [] for rec in data: try: spec = rec["er_specimen_name"] except KeyError as e: spec = rec["specimen"] if spec not in speclist: speclist.append(spec) speclist.sort() return speclist
Takes a magic format file and returns a list of unique specimen names
Below is the instruction that describes the task: ### Input: Takes a magic format file and returns a list of unique specimen names ### Response: def get_specs(data): """ Takes a magic format file and returns a list of unique specimen names """ # sort the specimen names speclist = [] for rec in data: try: spec = rec["er_specimen_name"] except KeyError as e: spec = rec["specimen"] if spec not in speclist: speclist.append(spec) speclist.sort() return speclist
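The lookup-with-fallback loop above is easy to exercise in isolation. A self-contained sketch with invented sample records (the two key spellings mirror the ones the function accepts):

```python
def get_specs(data):
    """Return sorted unique specimen names from a list of record dicts,
    reading 'er_specimen_name' and falling back to 'specimen'."""
    speclist = []
    for rec in data:
        try:
            spec = rec["er_specimen_name"]
        except KeyError:
            spec = rec["specimen"]
        if spec not in speclist:
            speclist.append(spec)
    speclist.sort()
    return speclist

records = [
    {"specimen": "sp2"},
    {"er_specimen_name": "sp1"},
    {"specimen": "sp2"},  # duplicate: kept only once
]
print(get_specs(records))  # ['sp1', 'sp2']
```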
def freeRenderModel(self): """ Frees a previously returned render model It is safe to call this on a null ptr. """ fn = self.function_table.freeRenderModel pRenderModel = RenderModel_t() fn(byref(pRenderModel)) return pRenderModel
Frees a previously returned render model It is safe to call this on a null ptr.
Below is the instruction that describes the task: ### Input: Frees a previously returned render model It is safe to call this on a null ptr. ### Response: def freeRenderModel(self): """ Frees a previously returned render model It is safe to call this on a null ptr. """ fn = self.function_table.freeRenderModel pRenderModel = RenderModel_t() fn(byref(pRenderModel)) return pRenderModel
def _load_from_hdx(self, object_type, id_field): # type: (str, str) -> bool """Helper method to load the HDX object given by identifier from HDX Args: object_type (str): Description of HDX object type (for messages) id_field (str): HDX object identifier Returns: bool: True if loaded, False if not """ success, result = self._read_from_hdx(object_type, id_field) if success: self.old_data = self.data self.data = result return True logger.debug(result) return False
Helper method to load the HDX object given by identifier from HDX Args: object_type (str): Description of HDX object type (for messages) id_field (str): HDX object identifier Returns: bool: True if loaded, False if not
Below is the instruction that describes the task: ### Input: Helper method to load the HDX object given by identifier from HDX Args: object_type (str): Description of HDX object type (for messages) id_field (str): HDX object identifier Returns: bool: True if loaded, False if not ### Response: def _load_from_hdx(self, object_type, id_field): # type: (str, str) -> bool """Helper method to load the HDX object given by identifier from HDX Args: object_type (str): Description of HDX object type (for messages) id_field (str): HDX object identifier Returns: bool: True if loaded, False if not """ success, result = self._read_from_hdx(object_type, id_field) if success: self.old_data = self.data self.data = result return True logger.debug(result) return False
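The helper above follows a read-then-swap pattern: on a successful read, the previous ``data`` is preserved in ``old_data`` before being replaced. A standalone sketch of the same control flow with a stub reader in place of the HDX API (all names here are illustrative, not hdx-python-api classes):

```python
class Loader:
    def __init__(self, reader):
        # reader: callable (object_type, id_field) -> (success, result)
        self.reader = reader
        self.data = None
        self.old_data = None

    def load(self, object_type, id_field):
        success, result = self.reader(object_type, id_field)
        if success:
            self.old_data = self.data  # keep prior state before overwriting
            self.data = result
            return True
        return False

ldr = Loader(lambda obj_type, ident: (True, {"id": ident}))
print(ldr.load("dataset", "abc"), ldr.data)  # True {'id': 'abc'}
```

Keeping the previous state around makes it cheap to compare or roll back after a reload, which is why the swap happens only on success.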
def DELETE_SLICE_0(self, instr): 'del obj[:]' value = self.ast_stack.pop() kw = dict(lineno=instr.lineno, col_offset=0) slice = _ast.Slice(lower=None, step=None, upper=None, **kw) subscr = _ast.Subscript(value=value, slice=slice, ctx=_ast.Del(), **kw) delete = _ast.Delete(targets=[subscr], **kw) self.ast_stack.append(delete)
obj[:] = expr
Below is the instruction that describes the task:
### Input:
obj[:] = expr
### Response:
def DELETE_SLICE_0(self, instr):
    'obj[:] = expr'
    value = self.ast_stack.pop()

    kw = dict(lineno=instr.lineno, col_offset=0)
    slice = _ast.Slice(lower=None, step=None, upper=None, **kw)
    subscr = _ast.Subscript(value=value, slice=slice, ctx=_ast.Del(), **kw)

    delete = _ast.Delete(targets=[subscr], **kw)
    self.ast_stack.append(delete)
async def set_presence(self, status: str = "online", ignore_cache: bool = False): """ Set the online status of the user. See also: `API reference`_ Args: status: The online status of the user. Allowed values: "online", "offline", "unavailable". ignore_cache: Whether or not to set presence even if the cache says the presence is already set to that value. .. _API reference: https://matrix.org/docs/spec/client_server/r0.3.0.html#put-matrix-client-r0-presence-userid-status """ await self.ensure_registered() if not ignore_cache and self.state_store.has_presence(self.mxid, status): return content = { "presence": status } resp = await self.client.request("PUT", f"/presence/{self.mxid}/status", content) self.state_store.set_presence(self.mxid, status)
Set the online status of the user. See also: `API reference`_ Args: status: The online status of the user. Allowed values: "online", "offline", "unavailable". ignore_cache: Whether or not to set presence even if the cache says the presence is already set to that value. .. _API reference: https://matrix.org/docs/spec/client_server/r0.3.0.html#put-matrix-client-r0-presence-userid-status
Below is the instruction that describes the task:
### Input:
Set the online status of the user.

See also: `API reference`_

Args:
    status: The online status of the user. Allowed values: "online", "offline", "unavailable".
    ignore_cache: Whether or not to set presence even if the cache says the presence is
        already set to that value.

.. _API reference: https://matrix.org/docs/spec/client_server/r0.3.0.html#put-matrix-client-r0-presence-userid-status
### Response:
async def set_presence(self, status: str = "online", ignore_cache: bool = False):
    """
    Set the online status of the user.

    See also: `API reference`_

    Args:
        status: The online status of the user. Allowed values: "online", "offline", "unavailable".
        ignore_cache: Whether or not to set presence even if the cache says the presence is
            already set to that value.

    .. _API reference: https://matrix.org/docs/spec/client_server/r0.3.0.html#put-matrix-client-r0-presence-userid-status
    """
    await self.ensure_registered()
    if not ignore_cache and self.state_store.has_presence(self.mxid, status):
        return
    content = {
        "presence": status
    }
    resp = await self.client.request("PUT", f"/presence/{self.mxid}/status", content)
    self.state_store.set_presence(self.mxid, status)
def x(self): ''' np.array: The grid points in x. ''' if None not in (self.x_min, self.x_max, self.x_step) and \ self.x_min != self.x_max: x = np.arange(self.x_min, self.x_max+self.x_step-self.y_step*0.1, self.x_step) else: x = np.array([]) return x
np.array: The grid points in x.
Below is the instruction that describes the task:
### Input:
np.array: The grid points in x.
### Response:
def x(self):
    '''
    np.array: The grid points in x.
    '''
    if None not in (self.x_min, self.x_max, self.x_step) and \
            self.x_min != self.x_max:
        x = np.arange(self.x_min, self.x_max+self.x_step-self.y_step*0.1,
                      self.x_step)
    else:
        x = np.array([])
    return x
def add_virtual_columns_cartesian_angular_momenta(self, x='x', y='y', z='z', vx='vx', vy='vy', vz='vz', Lx='Lx', Ly='Ly', Lz='Lz', propagate_uncertainties=False):
    """
    Calculate the angular momentum components provided Cartesian positions and velocities.
    Be mindful of the point of origin: e.g. if considering Galactic dynamics, positions and
    velocities should be as seen from the Galactic centre.

    :param x: x-position Cartesian component
    :param y: y-position Cartesian component
    :param z: z-position Cartesian component
    :param vx: x-velocity Cartesian component
    :param vy: y-velocity Cartesian component
    :param vz: z-velocity Cartesian component
    :param Lx: name of virtual column
    :param Ly: name of virtual column
    :param Lz: name of virtual column
    :param propagate_uncertainties: (bool) whether to propagate the uncertainties of
        the positions and velocities to the angular momentum components
    """
    x, y, z, vx, vy, vz = self._expr(x, y, z, vx, vy, vz)
    self.add_virtual_column(Lx, y * vz - z * vy)
    self.add_virtual_column(Ly, z * vx - x * vz)
    self.add_virtual_column(Lz, x * vy - y * vx)
    if propagate_uncertainties:
        self.propagate_uncertainties([self[Lx], self[Ly], self[Lz]])
Calculate the angular momentum components provided Cartesian positions and velocities.
Be mindful of the point of origin: e.g. if considering Galactic dynamics, positions and
velocities should be as seen from the Galactic centre.

:param x: x-position Cartesian component
:param y: y-position Cartesian component
:param z: z-position Cartesian component
:param vx: x-velocity Cartesian component
:param vy: y-velocity Cartesian component
:param vz: z-velocity Cartesian component
:param Lx: name of virtual column
:param Ly: name of virtual column
:param Lz: name of virtual column
:param propagate_uncertainties: (bool) whether to propagate the uncertainties of
    the positions and velocities to the angular momentum components
Below is the instruction that describes the task:
### Input:
Calculate the angular momentum components provided Cartesian positions and velocities.
Be mindful of the point of origin: e.g. if considering Galactic dynamics, positions and
velocities should be as seen from the Galactic centre.

:param x: x-position Cartesian component
:param y: y-position Cartesian component
:param z: z-position Cartesian component
:param vx: x-velocity Cartesian component
:param vy: y-velocity Cartesian component
:param vz: z-velocity Cartesian component
:param Lx: name of virtual column
:param Ly: name of virtual column
:param Lz: name of virtual column
:param propagate_uncertainties: (bool) whether to propagate the uncertainties of
    the positions and velocities to the angular momentum components
### Response:
def add_virtual_columns_cartesian_angular_momenta(self, x='x', y='y', z='z', vx='vx', vy='vy', vz='vz', Lx='Lx', Ly='Ly', Lz='Lz', propagate_uncertainties=False):
    """
    Calculate the angular momentum components provided Cartesian positions and velocities.
    Be mindful of the point of origin: e.g. if considering Galactic dynamics, positions and
    velocities should be as seen from the Galactic centre.

    :param x: x-position Cartesian component
    :param y: y-position Cartesian component
    :param z: z-position Cartesian component
    :param vx: x-velocity Cartesian component
    :param vy: y-velocity Cartesian component
    :param vz: z-velocity Cartesian component
    :param Lx: name of virtual column
    :param Ly: name of virtual column
    :param Lz: name of virtual column
    :param propagate_uncertainties: (bool) whether to propagate the uncertainties of
        the positions and velocities to the angular momentum components
    """
    x, y, z, vx, vy, vz = self._expr(x, y, z, vx, vy, vz)
    self.add_virtual_column(Lx, y * vz - z * vy)
    self.add_virtual_column(Ly, z * vx - x * vz)
    self.add_virtual_column(Lz, x * vy - y * vx)
    if propagate_uncertainties:
        self.propagate_uncertainties([self[Lx], self[Ly], self[Lz]])
def joinswarm(remote_addr=int, listen_addr=int, token=str): ''' Join a Swarm Worker to the cluster remote_addr The manager node you want to connect to for the swarm listen_addr Listen address used for inter-manager communication if the node gets promoted to manager, as well as determining the networking interface used for the VXLAN Tunnel Endpoint (VTEP) token Either the manager join token or the worker join token. You can get the worker or manager token via ``salt '*' swarm.swarm_tokens`` CLI Example: .. code-block:: bash salt '*' swarm.joinswarm remote_addr=192.168.50.10 listen_addr='0.0.0.0' \ token='SWMTKN-1-64tux2g0701r84ofq93zppcih0pe081akq45owe9ts61f30x4t-06trjugdu7x2z47j938s54il' ''' try: salt_return = {} __context__['client'].swarm.join(remote_addrs=[remote_addr], listen_addr=listen_addr, join_token=token) output = __context__['server_name'] + ' has joined the Swarm' salt_return.update({'Comment': output, 'Manager_Addr': remote_addr}) except TypeError: salt_return = {} salt_return.update({'Error': 'Please make sure this minion is not part of a swarm and you are ' 'passing remote_addr, listen_addr and token correctly.'}) return salt_return
Join a Swarm Worker to the cluster remote_addr The manager node you want to connect to for the swarm listen_addr Listen address used for inter-manager communication if the node gets promoted to manager, as well as determining the networking interface used for the VXLAN Tunnel Endpoint (VTEP) token Either the manager join token or the worker join token. You can get the worker or manager token via ``salt '*' swarm.swarm_tokens`` CLI Example: .. code-block:: bash salt '*' swarm.joinswarm remote_addr=192.168.50.10 listen_addr='0.0.0.0' \ token='SWMTKN-1-64tux2g0701r84ofq93zppcih0pe081akq45owe9ts61f30x4t-06trjugdu7x2z47j938s54il'
Below is the instruction that describes the task:
### Input:
Join a Swarm Worker to the cluster

remote_addr
    The manager node you want to connect to for the swarm

listen_addr
    Listen address used for inter-manager communication if the node gets promoted to manager,
    as well as determining the networking interface used for the VXLAN Tunnel Endpoint (VTEP)

token
    Either the manager join token or the worker join token.
    You can get the worker or manager token via ``salt '*' swarm.swarm_tokens``

CLI Example:

.. code-block:: bash

    salt '*' swarm.joinswarm remote_addr=192.168.50.10 listen_addr='0.0.0.0' \
    token='SWMTKN-1-64tux2g0701r84ofq93zppcih0pe081akq45owe9ts61f30x4t-06trjugdu7x2z47j938s54il'
### Response:
def joinswarm(remote_addr=int, listen_addr=int, token=str):
    '''
    Join a Swarm Worker to the cluster

    remote_addr
        The manager node you want to connect to for the swarm

    listen_addr
        Listen address used for inter-manager communication if the node gets promoted to manager,
        as well as determining the networking interface used for the VXLAN Tunnel Endpoint (VTEP)

    token
        Either the manager join token or the worker join token.
        You can get the worker or manager token via ``salt '*' swarm.swarm_tokens``

    CLI Example:

    .. code-block:: bash

        salt '*' swarm.joinswarm remote_addr=192.168.50.10 listen_addr='0.0.0.0' \
        token='SWMTKN-1-64tux2g0701r84ofq93zppcih0pe081akq45owe9ts61f30x4t-06trjugdu7x2z47j938s54il'
    '''
    try:
        salt_return = {}
        __context__['client'].swarm.join(remote_addrs=[remote_addr],
                                         listen_addr=listen_addr,
                                         join_token=token)
        output = __context__['server_name'] + ' has joined the Swarm'
        salt_return.update({'Comment': output, 'Manager_Addr': remote_addr})
    except TypeError:
        salt_return = {}
        salt_return.update({'Error': 'Please make sure this minion is not part of a swarm and you are '
                                     'passing remote_addr, listen_addr and token correctly.'})
    return salt_return
def _prt_edge(dag_edge, attr): """Print edge attribute""" # sequence parent_graph points attributes type parent_edge_list print("Edge {ATTR}: {VAL}".format(ATTR=attr, VAL=dag_edge.obj_dict[attr]))
Print edge attribute
Below is the instruction that describes the task:
### Input:
Print edge attribute
### Response:
def _prt_edge(dag_edge, attr):
    """Print edge attribute"""
    # sequence parent_graph points attributes type parent_edge_list
    print("Edge {ATTR}: {VAL}".format(ATTR=attr, VAL=dag_edge.obj_dict[attr]))
def _getTPClass(temporalImp): """ Return the class corresponding to the given temporalImp string """ if temporalImp == 'py': return backtracking_tm.BacktrackingTM elif temporalImp == 'cpp': return backtracking_tm_cpp.BacktrackingTMCPP elif temporalImp == 'tm_py': return backtracking_tm_shim.TMShim elif temporalImp == 'tm_cpp': return backtracking_tm_shim.TMCPPShim elif temporalImp == 'monitored_tm_py': return backtracking_tm_shim.MonitoredTMShim else: raise RuntimeError("Invalid temporalImp '%s'. Legal values are: 'py', " "'cpp', 'tm_py', 'monitored_tm_py'" % (temporalImp))
Return the class corresponding to the given temporalImp string
Below is the instruction that describes the task:
### Input:
Return the class corresponding to the given temporalImp string
### Response:
def _getTPClass(temporalImp):
    """ Return the class corresponding to the given temporalImp string
    """

    if temporalImp == 'py':
        return backtracking_tm.BacktrackingTM
    elif temporalImp == 'cpp':
        return backtracking_tm_cpp.BacktrackingTMCPP
    elif temporalImp == 'tm_py':
        return backtracking_tm_shim.TMShim
    elif temporalImp == 'tm_cpp':
        return backtracking_tm_shim.TMCPPShim
    elif temporalImp == 'monitored_tm_py':
        return backtracking_tm_shim.MonitoredTMShim
    else:
        raise RuntimeError("Invalid temporalImp '%s'. Legal values are: 'py', "
                           "'cpp', 'tm_py', 'monitored_tm_py'" % (temporalImp))
def generate_data(path, tokenizer, char_vcb, word_vcb, is_training=False):
    '''
    Generate data
    '''
    global root_path
    qp_pairs = data.load_from_file(path=path, is_training=is_training)

    tokenized_sent = 0
    # qp_pairs = qp_pairs[:1000]
    for qp_pair in qp_pairs:
        tokenized_sent += 1
        data.tokenize(qp_pair, tokenizer, is_training)
        for word in qp_pair['question_tokens']:
            word_vcb.add(word['word'])
            for char in word['word']:
                char_vcb.add(char)
        for word in qp_pair['passage_tokens']:
            word_vcb.add(word['word'])
            for char in word['word']:
                char_vcb.add(char)

    max_query_length = max(len(x['question_tokens']) for x in qp_pairs)
    max_passage_length = max(len(x['passage_tokens']) for x in qp_pairs)
    # min_passage_length = min(len(x['passage_tokens']) for x in qp_pairs)
    cfg.max_query_length = max_query_length
    cfg.max_passage_length = max_passage_length

    return qp_pairs
Generate data
Below is the instruction that describes the task:
### Input:
Generate data
### Response:
def generate_data(path, tokenizer, char_vcb, word_vcb, is_training=False):
    '''
    Generate data
    '''
    global root_path
    qp_pairs = data.load_from_file(path=path, is_training=is_training)

    tokenized_sent = 0
    # qp_pairs = qp_pairs[:1000]
    for qp_pair in qp_pairs:
        tokenized_sent += 1
        data.tokenize(qp_pair, tokenizer, is_training)
        for word in qp_pair['question_tokens']:
            word_vcb.add(word['word'])
            for char in word['word']:
                char_vcb.add(char)
        for word in qp_pair['passage_tokens']:
            word_vcb.add(word['word'])
            for char in word['word']:
                char_vcb.add(char)

    max_query_length = max(len(x['question_tokens']) for x in qp_pairs)
    max_passage_length = max(len(x['passage_tokens']) for x in qp_pairs)
    # min_passage_length = min(len(x['passage_tokens']) for x in qp_pairs)
    cfg.max_query_length = max_query_length
    cfg.max_passage_length = max_passage_length

    return qp_pairs
def rfc2426(self): """RFC2426-encode the field content. :return: the field in the RFC 2426 format. :returntype: `str`""" if self.uri: return rfc2425encode(self.name,self.uri,{"value":"uri"}) elif self.sound: return rfc2425encode(self.name,self.sound)
RFC2426-encode the field content. :return: the field in the RFC 2426 format. :returntype: `str`
Below is the instruction that describes the task:
### Input:
RFC2426-encode the field content.

:return: the field in the RFC 2426 format.
:returntype: `str`
### Response:
def rfc2426(self):
    """RFC2426-encode the field content.

    :return: the field in the RFC 2426 format.
    :returntype: `str`"""
    if self.uri:
        return rfc2425encode(self.name,self.uri,{"value":"uri"})
    elif self.sound:
        return rfc2425encode(self.name,self.sound)
def _write_for_dstype(self, learn:Learner, batch:Tuple, iteration:int, tbwriter:SummaryWriter, ds_type:DatasetType)->None: "Writes batch images of specified DatasetType to Tensorboard." request = ImageTBRequest(learn=learn, batch=batch, iteration=iteration, tbwriter=tbwriter, ds_type=ds_type) asyncTBWriter.request_write(request)
Writes batch images of specified DatasetType to Tensorboard.
Below is the instruction that describes the task:
### Input:
Writes batch images of specified DatasetType to Tensorboard.
### Response:
def _write_for_dstype(self, learn:Learner, batch:Tuple, iteration:int, tbwriter:SummaryWriter, ds_type:DatasetType)->None:
    "Writes batch images of specified DatasetType to Tensorboard."
    request = ImageTBRequest(learn=learn, batch=batch, iteration=iteration, tbwriter=tbwriter, ds_type=ds_type)
    asyncTBWriter.request_write(request)
def _add_utterance_to_document(self, utterance): """add an utterance to this docgraph (as a spanning relation)""" utter_id = 'utterance_{}'.format(utterance.attrib['nrgen']) norm, lemma, pos = [elem.text.split() for elem in utterance.iterchildren()] for i, word in enumerate(utterance.text.split()): token_id = self._add_token_to_document( word, token_attrs={self.ns+':norm': norm[i], self.ns+':lemma': lemma[i], self.ns+':pos': pos[i]}) self._add_spanning_relation(utter_id, token_id) self.utterances.append(utter_id)
add an utterance to this docgraph (as a spanning relation)
Below is the instruction that describes the task:
### Input:
add an utterance to this docgraph (as a spanning relation)
### Response:
def _add_utterance_to_document(self, utterance):
    """add an utterance to this docgraph (as a spanning relation)"""
    utter_id = 'utterance_{}'.format(utterance.attrib['nrgen'])
    norm, lemma, pos = [elem.text.split()
                        for elem in utterance.iterchildren()]
    for i, word in enumerate(utterance.text.split()):
        token_id = self._add_token_to_document(
            word,
            token_attrs={self.ns+':norm': norm[i],
                         self.ns+':lemma': lemma[i],
                         self.ns+':pos': pos[i]})
        self._add_spanning_relation(utter_id, token_id)
    self.utterances.append(utter_id)
def solve_one(self, expr, constrain=False): """ Concretize a symbolic :class:`~manticore.core.smtlib.expression.Expression` into one solution. :param manticore.core.smtlib.Expression expr: Symbolic value to concretize :param bool constrain: If True, constrain expr to concretized value :return: Concrete value :rtype: int """ expr = self.migrate_expression(expr) value = self._solver.get_value(self._constraints, expr) if constrain: self.constrain(expr == value) #Include forgiveness here if isinstance(value, bytearray): value = bytes(value) return value
Concretize a symbolic :class:`~manticore.core.smtlib.expression.Expression` into one solution. :param manticore.core.smtlib.Expression expr: Symbolic value to concretize :param bool constrain: If True, constrain expr to concretized value :return: Concrete value :rtype: int
Below is the instruction that describes the task:
### Input:
Concretize a symbolic :class:`~manticore.core.smtlib.expression.Expression` into
one solution.

:param manticore.core.smtlib.Expression expr: Symbolic value to concretize
:param bool constrain: If True, constrain expr to concretized value
:return: Concrete value
:rtype: int
### Response:
def solve_one(self, expr, constrain=False):
    """
    Concretize a symbolic :class:`~manticore.core.smtlib.expression.Expression` into
    one solution.

    :param manticore.core.smtlib.Expression expr: Symbolic value to concretize
    :param bool constrain: If True, constrain expr to concretized value
    :return: Concrete value
    :rtype: int
    """
    expr = self.migrate_expression(expr)
    value = self._solver.get_value(self._constraints, expr)
    if constrain:
        self.constrain(expr == value)
    # Include forgiveness here
    if isinstance(value, bytearray):
        value = bytes(value)
    return value
def metadataContributer(self): """gets the metadata featurelayer object""" if self._metaFL is None: fl = FeatureService(url=self._metadataURL, proxy_url=self._proxy_url, proxy_port=self._proxy_port) self._metaFS = fl return self._metaFS
gets the metadata featurelayer object
Below is the instruction that describes the task:
### Input:
gets the metadata featurelayer object
### Response:
def metadataContributer(self):
    """gets the metadata featurelayer object"""
    if self._metaFL is None:
        fl = FeatureService(url=self._metadataURL,
                            proxy_url=self._proxy_url,
                            proxy_port=self._proxy_port)
        self._metaFS = fl
    return self._metaFS
def notify(self, notices):
    """Send notifications to the users via the provided methods

    Args:
        notices (:obj:`dict` of `str`: `dict`): List of the notifications to send

    Returns:
        `None`
    """
    issues_html = get_template('unattached_ebs_volume.html')
    issues_text = get_template('unattached_ebs_volume.txt')

    for recipient, issues in list(notices.items()):
        if issues:
            message_html = issues_html.render(issues=issues)
            message_text = issues_text.render(issues=issues)

            send_notification(
                subsystem=self.name,
                recipients=[recipient],
                subject=self.subject,
                body_html=message_html,
                body_text=message_text
            )
Send notifications to the users via the provided methods

Args:
    notices (:obj:`dict` of `str`: `dict`): List of the notifications to send

Returns:
    `None`
Below is the instruction that describes the task:
### Input:
Send notifications to the users via the provided methods

Args:
    notices (:obj:`dict` of `str`: `dict`): List of the notifications to send

Returns:
    `None`
### Response:
def notify(self, notices):
    """Send notifications to the users via the provided methods

    Args:
        notices (:obj:`dict` of `str`: `dict`): List of the notifications to send

    Returns:
        `None`
    """
    issues_html = get_template('unattached_ebs_volume.html')
    issues_text = get_template('unattached_ebs_volume.txt')

    for recipient, issues in list(notices.items()):
        if issues:
            message_html = issues_html.render(issues=issues)
            message_text = issues_text.render(issues=issues)

            send_notification(
                subsystem=self.name,
                recipients=[recipient],
                subject=self.subject,
                body_html=message_html,
                body_text=message_text
            )
def words(quantity=10, as_list=False): """Return random words.""" global _words if not _words: _words = ' '.join(get_dictionary('lorem_ipsum')).lower().\ replace('\n', '') _words = re.sub(r'\.|,|;/', '', _words) _words = _words.split(' ') result = random.sample(_words, quantity) if as_list: return result else: return ' '.join(result)
Return random words.
Below is the instruction that describes the task:
### Input:
Return random words.
### Response:
def words(quantity=10, as_list=False):
    """Return random words."""

    global _words

    if not _words:
        _words = ' '.join(get_dictionary('lorem_ipsum')).lower().\
            replace('\n', '')
        _words = re.sub(r'\.|,|;/', '', _words)
        _words = _words.split(' ')

    result = random.sample(_words, quantity)

    if as_list:
        return result
    else:
        return ' '.join(result)
def _set_loopback(self, v, load=False): """ Setter method for loopback, mapped from YANG variable /overlay_gateway/ip/interface/loopback (container) If this variable is read-only (config: false) in the source YANG file, then _set_loopback is considered as a private method. Backends looking to populate this variable should do so via calling thisObj._set_loopback() directly. """ if hasattr(v, "_utype"): v = v._utype(v) try: t = YANGDynClass(v,base=loopback.loopback, is_container='container', presence=False, yang_name="loopback", rest_name="Loopback", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Loopback interface', u'alt-name': u'Loopback'}}, namespace='urn:brocade.com:mgmt:brocade-tunnels', defining_module='brocade-tunnels', yang_type='container', is_config=True) except (TypeError, ValueError): raise ValueError({ 'error-string': """loopback must be of a type compatible with container""", 'defined-type': "container", 'generated-type': """YANGDynClass(base=loopback.loopback, is_container='container', presence=False, yang_name="loopback", rest_name="Loopback", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Loopback interface', u'alt-name': u'Loopback'}}, namespace='urn:brocade.com:mgmt:brocade-tunnels', defining_module='brocade-tunnels', yang_type='container', is_config=True)""", }) self.__loopback = t if hasattr(self, '_set'): self._set()
Setter method for loopback, mapped from YANG variable /overlay_gateway/ip/interface/loopback (container) If this variable is read-only (config: false) in the source YANG file, then _set_loopback is considered as a private method. Backends looking to populate this variable should do so via calling thisObj._set_loopback() directly.
Below is the instruction that describes the task:
### Input:
Setter method for loopback, mapped from YANG variable /overlay_gateway/ip/interface/loopback (container)
If this variable is read-only (config: false) in the
source YANG file, then _set_loopback is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_loopback() directly.
### Response:
def _set_loopback(self, v, load=False):
    """
    Setter method for loopback, mapped from YANG variable /overlay_gateway/ip/interface/loopback (container)
    If this variable is read-only (config: false) in the
    source YANG file, then _set_loopback is considered as a private
    method. Backends looking to populate this variable should
    do so via calling thisObj._set_loopback() directly.
    """
    if hasattr(v, "_utype"):
        v = v._utype(v)
    try:
        t = YANGDynClass(v,base=loopback.loopback, is_container='container', presence=False, yang_name="loopback", rest_name="Loopback", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Loopback interface', u'alt-name': u'Loopback'}}, namespace='urn:brocade.com:mgmt:brocade-tunnels', defining_module='brocade-tunnels', yang_type='container', is_config=True)
    except (TypeError, ValueError):
        raise ValueError({
            'error-string': """loopback must be of a type compatible with container""",
            'defined-type': "container",
            'generated-type': """YANGDynClass(base=loopback.loopback, is_container='container', presence=False, yang_name="loopback", rest_name="Loopback", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Loopback interface', u'alt-name': u'Loopback'}}, namespace='urn:brocade.com:mgmt:brocade-tunnels', defining_module='brocade-tunnels', yang_type='container', is_config=True)""",
        })

    self.__loopback = t
    if hasattr(self, '_set'):
        self._set()
def pull(self, project, run=None, entity=None): """Download files from W&B Args: project (str): The project to download run (str, optional): The run to upload to entity (str, optional): The entity to scope this project to. Defaults to wandb models Returns: The requests library response object """ project, run = self.parse_slug(project, run=run) urls = self.download_urls(project, run, entity) responses = [] for fileName in urls: _, response = self.download_write_file(urls[fileName]) if response: responses.append(response) return responses
Download files from W&B Args: project (str): The project to download run (str, optional): The run to upload to entity (str, optional): The entity to scope this project to. Defaults to wandb models Returns: The requests library response object
Below is the instruction that describes the task:
### Input:
Download files from W&B

Args:
    project (str): The project to download
    run (str, optional): The run to upload to
    entity (str, optional): The entity to scope this project to.  Defaults to wandb models

Returns:
    The requests library response object
### Response:
def pull(self, project, run=None, entity=None):
    """Download files from W&B

    Args:
        project (str): The project to download
        run (str, optional): The run to upload to
        entity (str, optional): The entity to scope this project to.  Defaults to wandb models

    Returns:
        The requests library response object
    """
    project, run = self.parse_slug(project, run=run)
    urls = self.download_urls(project, run, entity)
    responses = []
    for fileName in urls:
        _, response = self.download_write_file(urls[fileName])
        if response:
            responses.append(response)
    return responses
def get_enum_from_name(self, enum_name): """ Return an enum from a name Args: enum_name (str): name of the enum Returns: Enum """ return next((e for e in self.enums if e.name == enum_name), None)
Return an enum from a name Args: enum_name (str): name of the enum Returns: Enum
Below is the instruction that describes the task:
### Input:
Return an enum from a name

Args:
    enum_name (str): name of the enum

Returns:
    Enum
### Response:
def get_enum_from_name(self, enum_name):
    """
    Return an enum from a name

    Args:
        enum_name (str): name of the enum

    Returns:
        Enum
    """
    return next((e for e in self.enums if e.name == enum_name), None)
def _read_indexlist(self, name): """Read a list of indexes.""" setattr(self, '_' + name, [self._timeline[int(i)] for i in self.db.lrange('site:{0}'.format(name), 0, -1)])
Read a list of indexes.
Below is the instruction that describes the task:
### Input:
Read a list of indexes.
### Response:
def _read_indexlist(self, name):
    """Read a list of indexes."""
    setattr(self, '_' + name, [self._timeline[int(i)] for i in
                               self.db.lrange('site:{0}'.format(name), 0, -1)])
def processTPED(uniqueSamples, duplicatedSamples, fileName, prefix): """Process the TPED file. :param uniqueSamples: the position of unique samples. :param duplicatedSamples: the position of duplicated samples. :param fileName: the name of the file. :param prefix: the prefix of all the files. :type uniqueSamples: dict :type duplicatedSamples: collections.defaultdict :type fileName: str :type prefix: str :returns: a tuple containing the ``tped`` (:py:class:`numpy.array`) as first element, and the updated positions of the duplicated samples (:py:class:`dict`) Reads the entire ``tped`` and prints another one containing only unique samples (``prefix.unique_samples.tped``). It then creates a :py:class:`numpy.array` containing the duplicated samples. """ # Getting the indexes uI = sorted(uniqueSamples.values()) dI = [] for item in duplicatedSamples.itervalues(): dI.extend(item) dI.sort() tped = [] outputFile = None try: outputFile = open(prefix + ".unique_samples.tped", "w") except IOError: msg = "%s: can't write to file" % prefix + ".unique_samples.tped" raise ProgramError with open(fileName, 'r') as inputFile: for line in inputFile: row = line.rstrip("\r\n").split("\t") snpInfo = row[:4] genotype = row[4:] # Printing the new TPED file (unique samples only) print >>outputFile, "\t".join(snpInfo + [genotype[i] for i in uI]) # Saving the TPED file (duplicated samples only) tped.append(tuple(snpInfo + [genotype[i] for i in dI])) # Closing the output file outputFile.close() # Updating the positions updatedSamples = {} for sampleID in duplicatedSamples: positions = duplicatedSamples[sampleID] newPositions = range(len(positions)) for i, pos in enumerate(positions): newPositions[i] = dI.index(pos) updatedSamples[sampleID] = newPositions tped = np.array(tped) return tped, updatedSamples
Process the TPED file. :param uniqueSamples: the position of unique samples. :param duplicatedSamples: the position of duplicated samples. :param fileName: the name of the file. :param prefix: the prefix of all the files. :type uniqueSamples: dict :type duplicatedSamples: collections.defaultdict :type fileName: str :type prefix: str :returns: a tuple containing the ``tped`` (:py:class:`numpy.array`) as first element, and the updated positions of the duplicated samples (:py:class:`dict`) Reads the entire ``tped`` and prints another one containing only unique samples (``prefix.unique_samples.tped``). It then creates a :py:class:`numpy.array` containing the duplicated samples.
Below is the instruction that describes the task:
### Input:
Process the TPED file.

:param uniqueSamples: the position of unique samples.
:param duplicatedSamples: the position of duplicated samples.
:param fileName: the name of the file.
:param prefix: the prefix of all the files.

:type uniqueSamples: dict
:type duplicatedSamples: collections.defaultdict
:type fileName: str
:type prefix: str

:returns: a tuple containing the ``tped`` (:py:class:`numpy.array`) as first
          element, and the updated positions of the duplicated samples
          (:py:class:`dict`)

Reads the entire ``tped`` and prints another one containing only unique
samples (``prefix.unique_samples.tped``). It then creates a
:py:class:`numpy.array` containing the duplicated samples.
### Response:
def processTPED(uniqueSamples, duplicatedSamples, fileName, prefix):
    """Process the TPED file.

    :param uniqueSamples: the position of unique samples.
    :param duplicatedSamples: the position of duplicated samples.
    :param fileName: the name of the file.
    :param prefix: the prefix of all the files.

    :type uniqueSamples: dict
    :type duplicatedSamples: collections.defaultdict
    :type fileName: str
    :type prefix: str

    :returns: a tuple containing the ``tped`` (:py:class:`numpy.array`) as
              first element, and the updated positions of the duplicated
              samples (:py:class:`dict`)

    Reads the entire ``tped`` and prints another one containing only unique
    samples (``prefix.unique_samples.tped``). It then creates a
    :py:class:`numpy.array` containing the duplicated samples.

    """
    # Getting the indexes
    uI = sorted(uniqueSamples.values())
    dI = []
    for item in duplicatedSamples.itervalues():
        dI.extend(item)
    dI.sort()

    tped = []
    outputFile = None
    try:
        outputFile = open(prefix + ".unique_samples.tped", "w")
    except IOError:
        msg = "%s: can't write to file" % (prefix + ".unique_samples.tped")
        raise ProgramError(msg)
    with open(fileName, 'r') as inputFile:
        for line in inputFile:
            row = line.rstrip("\r\n").split("\t")
            snpInfo = row[:4]
            genotype = row[4:]

            # Printing the new TPED file (unique samples only)
            print >>outputFile, "\t".join(snpInfo + [genotype[i] for i in uI])

            # Saving the TPED file (duplicated samples only)
            tped.append(tuple(snpInfo + [genotype[i] for i in dI]))

    # Closing the output file
    outputFile.close()

    # Updating the positions
    updatedSamples = {}
    for sampleID in duplicatedSamples:
        positions = duplicatedSamples[sampleID]
        newPositions = range(len(positions))
        for i, pos in enumerate(positions):
            newPositions[i] = dI.index(pos)
        updatedSamples[sampleID] = newPositions

    tped = np.array(tped)

    return tped, updatedSamples
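The column split in the response above is plain positional indexing; a minimal, self-contained sketch of how one TPED line is divided (the sample positions and the genotype row are hypothetical):

```python
# Hypothetical positions: genotype columns 0 and 2 belong to unique samples,
# columns 1 and 3 to duplicated samples.
uI = [0, 2]
dI = [1, 3]

row = "chr1\tsnp1\t0\t12345\tA A\tA G\tG G\tA A".split("\t")
snpInfo, genotype = row[:4], row[4:]

# One row per destination: unique samples go to the new TPED file,
# duplicated samples are accumulated for the numpy array.
unique_row = snpInfo + [genotype[i] for i in uI]
duplicated_row = tuple(snpInfo + [genotype[i] for i in dI])
```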
def _add_warc_action_log(self, path, url):
    '''Add the action log to the WARC file.'''
    _logger.debug('Adding action log record.')

    actions = []
    with open(path, 'r', encoding='utf-8', errors='replace') as file:
        for line in file:
            actions.append(json.loads(line))

    log_data = json.dumps(
        {'actions': actions},
        indent=4,
    ).encode('utf-8')

    self._action_warc_record = record = WARCRecord()
    record.set_common_fields('metadata', 'application/json')
    record.fields['WARC-Target-URI'] = 'urn:X-wpull:snapshot?url={0}' \
        .format(wpull.url.percent_encode_query_value(url))
    record.block_file = io.BytesIO(log_data)

    self._warc_recorder.set_length_and_maybe_checksums(record)
    self._warc_recorder.write_record(record)
Add the action log to the WARC file.
Below is the instruction that describes the task:
### Input:
Add the action log to the WARC file.
### Response:
def _add_warc_action_log(self, path, url):
    '''Add the action log to the WARC file.'''
    _logger.debug('Adding action log record.')

    actions = []
    with open(path, 'r', encoding='utf-8', errors='replace') as file:
        for line in file:
            actions.append(json.loads(line))

    log_data = json.dumps(
        {'actions': actions},
        indent=4,
    ).encode('utf-8')

    self._action_warc_record = record = WARCRecord()
    record.set_common_fields('metadata', 'application/json')
    record.fields['WARC-Target-URI'] = 'urn:X-wpull:snapshot?url={0}' \
        .format(wpull.url.percent_encode_query_value(url))
    record.block_file = io.BytesIO(log_data)

    self._warc_recorder.set_length_and_maybe_checksums(record)
    self._warc_recorder.write_record(record)
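The response reads a JSON-lines action log and wraps it in a single JSON document; that aggregation step can be exercised on its own with an in-memory file (the two sample actions below are made up):

```python
import io
import json

# One JSON object per line, exactly as the action log is stored on disk.
log_file = io.StringIO('{"action": "click"}\n{"action": "scroll"}\n')

actions = [json.loads(line) for line in log_file]
log_data = json.dumps({'actions': actions}, indent=4).encode('utf-8')
```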
def _encode_message_set(cls, messages, offset=None):
    """
    Encode a MessageSet. Unlike other arrays in the protocol,
    MessageSets are not length-prefixed.

    Format::

        MessageSet => [Offset MessageSize Message]
          Offset => int64
          MessageSize => int32
    """
    message_set = []
    incr = 1
    if offset is None:
        incr = 0
        offset = 0
    for message in messages:
        encoded_message = KafkaCodec._encode_message(message)
        message_set.append(struct.pack('>qi', offset, len(encoded_message)))
        message_set.append(encoded_message)
        offset += incr
    return b''.join(message_set)
Encode a MessageSet. Unlike other arrays in the protocol,
MessageSets are not length-prefixed.

Format::

    MessageSet => [Offset MessageSize Message]
      Offset => int64
      MessageSize => int32
Below is the instruction that describes the task:
### Input:
Encode a MessageSet. Unlike other arrays in the protocol,
MessageSets are not length-prefixed.

Format::

    MessageSet => [Offset MessageSize Message]
      Offset => int64
      MessageSize => int32
### Response:
def _encode_message_set(cls, messages, offset=None):
    """
    Encode a MessageSet. Unlike other arrays in the protocol,
    MessageSets are not length-prefixed.

    Format::

        MessageSet => [Offset MessageSize Message]
          Offset => int64
          MessageSize => int32
    """
    message_set = []
    incr = 1
    if offset is None:
        incr = 0
        offset = 0
    for message in messages:
        encoded_message = KafkaCodec._encode_message(message)
        message_set.append(struct.pack('>qi', offset, len(encoded_message)))
        message_set.append(encoded_message)
        offset += incr
    return b''.join(message_set)
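The `>qi` framing in the response is easy to check in isolation; a sketch that frames two pre-encoded messages (stand-in byte strings, not real Kafka-encoded messages) the same way:

```python
import struct

def frame_message_set(encoded_messages, offset=None):
    # Mirrors the MessageSet layout: 8-byte big-endian offset,
    # 4-byte big-endian size, then the message bytes.
    incr = 1
    if offset is None:
        incr = 0
        offset = 0
    parts = []
    for msg in encoded_messages:
        parts.append(struct.pack('>qi', offset, len(msg)))
        parts.append(msg)
        offset += incr
    return b''.join(parts)

framed = frame_message_set([b'hello', b'hi'])
```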
def mk_subsuper_association(m, r_subsup):
    '''
    Create pyxtuml associations from a sub/super association in BridgePoint.
    '''
    r_rel = one(r_subsup).R_REL[206]()
    r_rto = one(r_subsup).R_SUPER[212].R_RTO[204]()
    target_o_obj = one(r_rto).R_OIR[203].O_OBJ[201]()

    for r_sub in many(r_subsup).R_SUB[213]():
        r_rgo = one(r_sub).R_RGO[205]()
        source_o_obj = one(r_rgo).R_OIR[203].O_OBJ[201]()
        source_ids, target_ids = _get_related_attributes(r_rgo, r_rto)
        m.define_association(rel_id=r_rel.Numb,
                             source_kind=source_o_obj.Key_Lett,
                             target_kind=target_o_obj.Key_Lett,
                             source_keys=source_ids,
                             target_keys=target_ids,
                             source_conditional=True,
                             target_conditional=False,
                             source_phrase='',
                             target_phrase='',
                             source_many=False,
                             target_many=False)
Create pyxtuml associations from a sub/super association in BridgePoint.
Below is the instruction that describes the task:
### Input:
Create pyxtuml associations from a sub/super association in BridgePoint.
### Response:
def mk_subsuper_association(m, r_subsup):
    '''
    Create pyxtuml associations from a sub/super association in BridgePoint.
    '''
    r_rel = one(r_subsup).R_REL[206]()
    r_rto = one(r_subsup).R_SUPER[212].R_RTO[204]()
    target_o_obj = one(r_rto).R_OIR[203].O_OBJ[201]()

    for r_sub in many(r_subsup).R_SUB[213]():
        r_rgo = one(r_sub).R_RGO[205]()
        source_o_obj = one(r_rgo).R_OIR[203].O_OBJ[201]()
        source_ids, target_ids = _get_related_attributes(r_rgo, r_rto)
        m.define_association(rel_id=r_rel.Numb,
                             source_kind=source_o_obj.Key_Lett,
                             target_kind=target_o_obj.Key_Lett,
                             source_keys=source_ids,
                             target_keys=target_ids,
                             source_conditional=True,
                             target_conditional=False,
                             source_phrase='',
                             target_phrase='',
                             source_many=False,
                             target_many=False)
def csv(self, sep=',', branches=None,
        include_labels=True, limit=None,
        stream=None):
    """
    Print csv representation of tree only including branches
    of basic types (no objects, vectors, etc.)

    Parameters
    ----------
    sep : str, optional (default=',')
        The delimiter used to separate columns

    branches : list, optional (default=None)
        Only include these branches in the CSV output. If None, then all
        basic types will be included.

    include_labels : bool, optional (default=True)
        Include a first row of branch names labelling each column.

    limit : int, optional (default=None)
        Only include up to a maximum of ``limit`` rows in the CSV.

    stream : file, optional (default=None)
        Stream to write the CSV output on. By default the CSV will be
        written to ``sys.stdout``.
    """
    supported_types = (Scalar, Array, stl.string)
    if stream is None:
        stream = sys.stdout
    if not self._buffer:
        self.create_buffer(ignore_unsupported=True)
    if branches is None:
        branchdict = OrderedDict([
            (name, self._buffer[name])
            for name in self.iterbranchnames()
            if isinstance(self._buffer[name], supported_types)])
    else:
        branchdict = OrderedDict()
        for name in branches:
            if not isinstance(self._buffer[name], supported_types):
                raise TypeError(
                    "selected branch `{0}` "
                    "is not a scalar or array type".format(name))
            branchdict[name] = self._buffer[name]
    if not branchdict:
        raise RuntimeError(
            "no branches selected or no "
            "branches of scalar or array types exist")
    if include_labels:
        # expand array types to f[0],f[1],f[2],...
        print(sep.join(
            name if isinstance(value, (Scalar, BaseChar, stl.string))
            else sep.join('{0}[{1:d}]'.format(name, idx)
                          for idx in range(len(value)))
            for name, value in branchdict.items()),
            file=stream)
    # even though 'entry' is not used, enumerate or simply iterating over
    # self is required to update the buffer with the new branch values at
    # each tree entry.
    for i, entry in enumerate(self):
        line = []
        for value in branchdict.values():
            if isinstance(value, (Scalar, BaseChar)):
                token = str(value.value)
            elif isinstance(value, stl.string):
                token = str(value)
            else:
                token = sep.join(map(str, value))
            line.append(token)
        print(sep.join(line), file=stream)
        if limit is not None and i + 1 == limit:
            break
Print csv representation of tree only including branches
of basic types (no objects, vectors, etc.)

Parameters
----------
sep : str, optional (default=',')
    The delimiter used to separate columns

branches : list, optional (default=None)
    Only include these branches in the CSV output. If None, then all
    basic types will be included.

include_labels : bool, optional (default=True)
    Include a first row of branch names labelling each column.

limit : int, optional (default=None)
    Only include up to a maximum of ``limit`` rows in the CSV.

stream : file, optional (default=None)
    Stream to write the CSV output on. By default the CSV will be
    written to ``sys.stdout``.
Below is the instruction that describes the task:
### Input:
Print csv representation of tree only including branches
of basic types (no objects, vectors, etc.)

Parameters
----------
sep : str, optional (default=',')
    The delimiter used to separate columns

branches : list, optional (default=None)
    Only include these branches in the CSV output. If None, then all
    basic types will be included.

include_labels : bool, optional (default=True)
    Include a first row of branch names labelling each column.

limit : int, optional (default=None)
    Only include up to a maximum of ``limit`` rows in the CSV.

stream : file, optional (default=None)
    Stream to write the CSV output on. By default the CSV will be
    written to ``sys.stdout``.
### Response:
def csv(self, sep=',', branches=None,
        include_labels=True, limit=None,
        stream=None):
    """
    Print csv representation of tree only including branches
    of basic types (no objects, vectors, etc.)

    Parameters
    ----------
    sep : str, optional (default=',')
        The delimiter used to separate columns

    branches : list, optional (default=None)
        Only include these branches in the CSV output. If None, then all
        basic types will be included.

    include_labels : bool, optional (default=True)
        Include a first row of branch names labelling each column.

    limit : int, optional (default=None)
        Only include up to a maximum of ``limit`` rows in the CSV.

    stream : file, optional (default=None)
        Stream to write the CSV output on. By default the CSV will be
        written to ``sys.stdout``.
    """
    supported_types = (Scalar, Array, stl.string)
    if stream is None:
        stream = sys.stdout
    if not self._buffer:
        self.create_buffer(ignore_unsupported=True)
    if branches is None:
        branchdict = OrderedDict([
            (name, self._buffer[name])
            for name in self.iterbranchnames()
            if isinstance(self._buffer[name], supported_types)])
    else:
        branchdict = OrderedDict()
        for name in branches:
            if not isinstance(self._buffer[name], supported_types):
                raise TypeError(
                    "selected branch `{0}` "
                    "is not a scalar or array type".format(name))
            branchdict[name] = self._buffer[name]
    if not branchdict:
        raise RuntimeError(
            "no branches selected or no "
            "branches of scalar or array types exist")
    if include_labels:
        # expand array types to f[0],f[1],f[2],...
        print(sep.join(
            name if isinstance(value, (Scalar, BaseChar, stl.string))
            else sep.join('{0}[{1:d}]'.format(name, idx)
                          for idx in range(len(value)))
            for name, value in branchdict.items()),
            file=stream)
    # even though 'entry' is not used, enumerate or simply iterating over
    # self is required to update the buffer with the new branch values at
    # each tree entry.
    for i, entry in enumerate(self):
        line = []
        for value in branchdict.values():
            if isinstance(value, (Scalar, BaseChar)):
                token = str(value.value)
            elif isinstance(value, stl.string):
                token = str(value)
            else:
                token = sep.join(map(str, value))
            line.append(token)
        print(sep.join(line), file=stream)
        if limit is not None and i + 1 == limit:
            break
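The header-expansion rule in the response (scalar branches keep their name, array branches become `name[0], name[1], ...`) can be sketched without any ROOT dependency; the branch names and array lengths below are hypothetical:

```python
from collections import OrderedDict

def expand_labels(branch_lengths, sep=','):
    # branch_lengths maps branch name -> None for a scalar branch,
    # or the fixed array length for an array branch.
    cols = []
    for name, length in branch_lengths.items():
        if length is None:
            cols.append(name)
        else:
            cols.extend('{0}[{1:d}]'.format(name, idx)
                        for idx in range(length))
    return sep.join(cols)

header = expand_labels(OrderedDict([('pt', None), ('jet_e', 3)]))
```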
def load_datetime(value, dt_format):
    """ Create timezone-aware datetime object """
    if dt_format.endswith('%z'):
        dt_format = dt_format[:-2]
        offset = value[-5:]
        value = value[:-5]
        if offset != offset.replace(':', ''):
            # strip : from HHMM if needed (isoformat() adds it between HH and MM)
            offset = '+' + offset.replace(':', '')
            value = value[:-1]
        return OffsetTime(offset).localize(datetime.strptime(value, dt_format))

    return datetime.strptime(value, dt_format)
Create timezone-aware datetime object
Below is the instruction that describes the task:
### Input:
Create timezone-aware datetime object
### Response:
def load_datetime(value, dt_format):
    """ Create timezone-aware datetime object """
    if dt_format.endswith('%z'):
        dt_format = dt_format[:-2]
        offset = value[-5:]
        value = value[:-5]
        if offset != offset.replace(':', ''):
            # strip : from HHMM if needed (isoformat() adds it between HH and MM)
            offset = '+' + offset.replace(':', '')
            value = value[:-1]
        return OffsetTime(offset).localize(datetime.strptime(value, dt_format))

    return datetime.strptime(value, dt_format)
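The response leans on an `OffsetTime` class defined elsewhere in its module; the sketch below reproduces only the offset normalization (`+HH:MM` → `+HHMM`) using the standard library's `%z` support instead, and, like the original, it assumes a positive UTC offset:

```python
from datetime import datetime

def load_datetime_sketch(value, dt_format):
    if dt_format.endswith('%z'):
        base_format = dt_format[:-2]
        offset = value[-5:]
        value = value[:-5]
        if ':' in offset:
            # isoformat() writes +HH:MM; normalize to +HHMM for strptime
            offset = '+' + offset.replace(':', '')
            value = value[:-1]  # drop the sign character left behind
        return datetime.strptime(value + offset, base_format + '%z')
    return datetime.strptime(value, dt_format)

aware = load_datetime_sketch('2020-01-02T03:04:05+05:30', '%Y-%m-%dT%H:%M:%S%z')
naive = load_datetime_sketch('2020-01-02', '%Y-%m-%d')
```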
def merge_stats(self, other_col_counters):
    """
    Merge statistics from a different column stats counter into this one.

    Parameters
    ----------
    other_column_counters:
        Other col_stat_counter to merge into this one.
    """
    for column_name, _ in self._column_stats.items():
        self._column_stats[column_name] = self._column_stats[column_name] \
            .mergeStats(other_col_counters._column_stats[column_name])
    return self
Merge statistics from a different column stats counter into this one.

Parameters
----------
other_column_counters:
    Other col_stat_counter to merge into this one.
Below is the instruction that describes the task:
### Input:
Merge statistics from a different column stats counter into this one.

Parameters
----------
other_column_counters:
    Other col_stat_counter to merge into this one.
### Response:
def merge_stats(self, other_col_counters):
    """
    Merge statistics from a different column stats counter into this one.

    Parameters
    ----------
    other_column_counters:
        Other col_stat_counter to merge into this one.
    """
    for column_name, _ in self._column_stats.items():
        self._column_stats[column_name] = self._column_stats[column_name] \
            .mergeStats(other_col_counters._column_stats[column_name])
    return self
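The loop in the response delegates to each counter's `mergeStats`; a minimal sketch of that pattern with a hypothetical per-column accumulator tracking only count/min/max (the real counter tracks more fields):

```python
class ColumnStats(object):
    # Hypothetical accumulator; merging is pairwise and returns a new
    # accumulator, which is what the loop above expects.
    def __init__(self, count, min_val, max_val):
        self.count = count
        self.min_val = min_val
        self.max_val = max_val

    def mergeStats(self, other):
        return ColumnStats(self.count + other.count,
                           min(self.min_val, other.min_val),
                           max(self.max_val, other.max_val))

left = {'age': ColumnStats(3, 18, 40)}
right = {'age': ColumnStats(2, 25, 65)}
for column_name in left:
    left[column_name] = left[column_name].mergeStats(right[column_name])
```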
def show_floatingip(self, floatingip, **_params):
    """Fetches information of a certain floatingip."""
    return self.get(self.floatingip_path % (floatingip),
                    params=_params)
Fetches information of a certain floatingip.
Below is the instruction that describes the task:
### Input:
Fetches information of a certain floatingip.
### Response:
def show_floatingip(self, floatingip, **_params):
    """Fetches information of a certain floatingip."""
    return self.get(self.floatingip_path % (floatingip),
                    params=_params)
def next_prime(starting_value):
    "Return the smallest prime larger than the starting value."
    if starting_value < 2:
        return 2
    result = (starting_value + 1) | 1
    while not is_prime(result):
        result = result + 2
    return result
Return the smallest prime larger than the starting value.
Below is the instruction that describes the task:
### Input:
Return the smallest prime larger than the starting value.
### Response:
def next_prime(starting_value):
    "Return the smallest prime larger than the starting value."
    if starting_value < 2:
        return 2
    result = (starting_value + 1) | 1
    while not is_prime(result):
        result = result + 2
    return result
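The response calls `is_prime`, which is not defined in the snippet; a self-contained version under the assumption of a plain trial-division primality test:

```python
def is_prime(n):
    # Trial division; fine for small inputs like the ones below.
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def next_prime(starting_value):
    "Return the smallest prime larger than the starting value."
    if starting_value < 2:
        return 2
    result = (starting_value + 1) | 1  # first odd number above starting_value
    while not is_prime(result):
        result = result + 2
    return result
```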