Dataset columns:

    code        string, lengths 75 to 104k
    docstring   string, lengths 1 to 46.9k
    text        string, lengths 164 to 112k
def angleDiff(angle1, angle2, take_smaller=True):
    """
    smallest difference between 2 angles
    code from http://stackoverflow.com/questions/1878907/the-smallest-difference-between-2-angles
    """
    a = np.arctan2(np.sin(angle1 - angle2), np.cos(angle1 - angle2))
    if isinstance(a, np.ndarray) and take_smaller:
        a = np.abs(a)
        # take smaller of both possible angles:
        ab = np.abs(np.pi - a)
        with np.errstate(invalid='ignore'):
            i = a > ab
            a[i] = ab[i]
    return a
smallest difference between 2 angles
code from http://stackoverflow.com/questions/1878907/the-smallest-difference-between-2-angles
Below is the instruction that describes the task:
### Input:
smallest difference between 2 angles
code from http://stackoverflow.com/questions/1878907/the-smallest-difference-between-2-angles
### Response:
def angleDiff(angle1, angle2, take_smaller=True):
    """
    smallest difference between 2 angles
    code from http://stackoverflow.com/questions/1878907/the-smallest-difference-between-2-angles
    """
    a = np.arctan2(np.sin(angle1 - angle2), np.cos(angle1 - angle2))
    if isinstance(a, np.ndarray) and take_smaller:
        a = np.abs(a)
        # take smaller of both possible angles:
        ab = np.abs(np.pi - a)
        with np.errstate(invalid='ignore'):
            i = a > ab
            a[i] = ab[i]
    return a
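As a quick check of the wrap-around behaviour, the function can be exercised with scalar inputs. This is a minimal sketch; like the row's code, it assumes `numpy` is imported as `np`:

```python
import numpy as np

def angleDiff(angle1, angle2, take_smaller=True):
    """smallest difference between 2 angles"""
    a = np.arctan2(np.sin(angle1 - angle2), np.cos(angle1 - angle2))
    if isinstance(a, np.ndarray) and take_smaller:
        a = np.abs(a)
        # take smaller of both possible angles:
        ab = np.abs(np.pi - a)
        with np.errstate(invalid='ignore'):
            i = a > ab
            a[i] = ab[i]
    return a

# 0.1 rad and (2*pi - 0.1) rad are only 0.2 rad apart once wrap-around is
# accounted for; naive subtraction would report roughly 6.08 rad instead.
print(angleDiff(0.1, 2 * np.pi - 0.1))  # ~0.2
```

Note that the `take_smaller` reduction only applies to array inputs; scalars always get the signed arctan2 difference.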
def split(self, X, y=None, groups=None):
    """Generate indices to split data into training and test set.

    Parameters
    ----------
    X : array-like, of length n_samples
        Training data, includes reaction's containers
    y : array-like, of length n_samples
        The target variable for supervised learning problems.
    groups : array-like, with shape (n_samples,)
        Group labels for the samples used while splitting the dataset
        into train/test set.

    Yields
    ------
    train : ndarray
        The training set indices for that split.
    test : ndarray
        The testing set indices for that split.
    """
    X, y, groups = indexable(X, y, groups)
    cgrs = [~r for r in X]

    condition_structure = defaultdict(set)
    for structure, condition in zip(cgrs, groups):
        condition_structure[condition].add(structure)

    train_data = defaultdict(list)
    test_data = []
    for n, (structure, condition) in enumerate(zip(cgrs, groups)):
        train_data[structure].append(n)
        if len(condition_structure[condition]) > 1:
            test_data.append(n)

    if self.n_splits > len(train_data):
        raise ValueError("Cannot have number of splits n_splits=%d greater"
                         " than the number of transformations: %d."
                         % (self.n_splits, len(train_data)))

    structures_weight = sorted(((x, len(y)) for x, y in train_data.items()),
                               key=lambda x: x[1], reverse=True)
    fold_mean_size = len(cgrs) // self.n_splits
    if structures_weight[0][1] > fold_mean_size:
        warning('You have a transformation larger than the mean fold size')

    for idx in range(self.n_repeats):
        train_folds = [[] for _ in range(self.n_splits)]
        for structure, structure_length in structures_weight:
            if self.shuffle:
                check_random_state(self.random_state).shuffle(train_folds)
            for fold in train_folds[:-1]:
                if len(fold) + structure_length <= fold_mean_size:
                    fold.extend(train_data[structure])
                    break
                else:
                    roulette_param = (structure_length - fold_mean_size + len(fold)) / structure_length
                    if random() > roulette_param:
                        fold.extend(train_data[structure])
                        break
            else:
                train_folds[-1].extend(train_data[structure])

        test_folds = [[] for _ in range(self.n_splits)]
        for test, train in zip(test_folds, train_folds):
            for index in train:
                if index in test_data:
                    test.append(index)

        for i in range(self.n_splits):
            train_index = []
            for fold in train_folds[:i]:
                train_index.extend(fold)
            for fold in train_folds[i + 1:]:
                train_index.extend(fold)
            test_index = test_folds[i]
            yield array(train_index), array(test_index)
Generate indices to split data into training and test set.

Parameters
----------
X : array-like, of length n_samples
    Training data, includes reaction's containers
y : array-like, of length n_samples
    The target variable for supervised learning problems.
groups : array-like, with shape (n_samples,)
    Group labels for the samples used while splitting the dataset
    into train/test set.

Yields
------
train : ndarray
    The training set indices for that split.
test : ndarray
    The testing set indices for that split.
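The heart of this `split` is the greedy "roulette" packing of whole structures into folds. The core idea can be sketched standalone; `pack_folds` below is a hypothetical helper, not part of the original class, and it omits the shuffle and repeat machinery:

```python
import random

def pack_folds(structure_indices, n_splits, fold_mean_size):
    """Greedy sketch of the fold-packing loop: all sample indices of one
    structure land in the same fold, preferring folds under the mean size."""
    folds = [[] for _ in range(n_splits)]
    for _structure, indices in structure_indices:
        for fold in folds[:-1]:
            if len(fold) + len(indices) <= fold_mean_size:
                fold.extend(indices)
                break
            # roulette step: sometimes overfill a fold anyway, with a
            # probability that shrinks as the overshoot grows
            roulette_param = (len(indices) - fold_mean_size + len(fold)) / len(indices)
            if random.random() > roulette_param:
                fold.extend(indices)
                break
        else:
            folds[-1].extend(indices)  # last fold takes whatever is left

# Three structures of one sample each, packed into 2 folds of mean size 2:
# the first two fit in fold 0, the third overflows into the last fold.
```

Keeping all occurrences of a structure in one fold is what prevents identical transformations from leaking between train and test.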
def humanize_hours(total_hours, frmt='{hours:02d}:{minutes:02d}:{seconds:02d}',
                   negative_frmt=None):
    """Given time in hours, return a string representing the time."""
    seconds = int(float(total_hours) * 3600)
    return humanize_seconds(seconds, frmt, negative_frmt)
Given time in hours, return a string representing the time.
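The row delegates to a `humanize_seconds` helper that is not shown. A plausible minimal stand-in (an assumption, not the original implementation) makes the function runnable:

```python
def humanize_seconds(seconds, frmt='{hours:02d}:{minutes:02d}:{seconds:02d}',
                     negative_frmt=None):
    # Hypothetical stand-in for the project's humanize_seconds helper.
    if negative_frmt is None:
        negative_frmt = '-' + frmt
    use_frmt = frmt if seconds >= 0 else negative_frmt
    seconds = abs(seconds)
    hours, rem = divmod(seconds, 3600)
    minutes, secs = divmod(rem, 60)
    return use_frmt.format(hours=hours, minutes=minutes, seconds=secs)

def humanize_hours(total_hours, frmt='{hours:02d}:{minutes:02d}:{seconds:02d}',
                   negative_frmt=None):
    """Given time in hours, return a string representing the time."""
    seconds = int(float(total_hours) * 3600)
    return humanize_seconds(seconds, frmt, negative_frmt)

print(humanize_hours(1.5))   # 01:30:00
print(humanize_hours(-0.5))  # -00:30:00
```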
def dirty(field, ttl=None):
    "decorator to cache the result of a function until a field changes"
    if ttl is not None:
        raise NotImplementedError('pg.dirty ttl feature')
    def decorator(f):
        @functools.wraps(f)
        def wrapper(self, *args, **kwargs):
            # warning: not reentrant
            d = self.dirty_cache[field] if field in self.dirty_cache else self.dirty_cache.setdefault(field, {})
            return d[f.__name__] if f.__name__ in d else d.setdefault(f.__name__, f(self, *args, **kwargs))
        return wrapper
    return decorator
decorator to cache the result of a function until a field changes
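Using the decorator requires an object exposing a `dirty_cache` dict; the `Model` class below is a hypothetical example, and invalidation is done by clearing the field's cache entry:

```python
import functools

def dirty(field, ttl=None):
    "decorator to cache the result of a function until a field changes"
    if ttl is not None:
        raise NotImplementedError('pg.dirty ttl feature')
    def decorator(f):
        @functools.wraps(f)
        def wrapper(self, *args, **kwargs):
            # warning: not reentrant
            d = self.dirty_cache.setdefault(field, {})
            return d[f.__name__] if f.__name__ in d \
                else d.setdefault(f.__name__, f(self, *args, **kwargs))
        return wrapper
    return decorator

class Model:  # hypothetical host class providing dirty_cache
    def __init__(self):
        self.dirty_cache = {}
        self.calls = 0

    @dirty('rows')
    def expensive(self):
        self.calls += 1
        return self.calls

m = Model()
m.expensive(); m.expensive()
print(m.calls)                    # 1 -- computed once, then cached
m.dirty_cache.pop('rows', None)   # "field changed": drop cached results
m.expensive()
print(m.calls)                    # 2 -- recomputed after invalidation
```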
def paths_for_shell(paths, separator=' '):
    """ Converts a list of paths for use in shell commands """
    paths = filter(None, paths)
    paths = map(shlex.quote, paths)
    if separator is None:
        return paths
    return separator.join(paths)
Converts a list of paths for use in shell commands
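A short usage sketch: empty entries are dropped by `filter(None, ...)`, and paths containing spaces come back safely quoted. Note that passing `separator=None` returns a lazy `map` object rather than a list:

```python
import shlex

def paths_for_shell(paths, separator=' '):
    """ Converts a list of paths for use in shell commands """
    paths = filter(None, paths)
    paths = map(shlex.quote, paths)
    if separator is None:
        return paths
    return separator.join(paths)

# The empty string is dropped; the space-containing path is quoted.
print(paths_for_shell(['/tmp/a b', '', '/tmp/c']))  # '/tmp/a b' /tmp/c
```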
def is_connection_dropped(conn):  # Platform-specific
    """
    Returns True if the connection is dropped and should be closed.

    :param conn:
        :class:`httplib.HTTPConnection` object.

    Note: For platforms like AppEngine, this will always return ``False`` to
    let the platform handle connection recycling transparently for us.
    """
    sock = getattr(conn, 'sock', False)
    if sock is False:  # Platform-specific: AppEngine
        return False
    if sock is None:  # Connection already closed (such as by httplib).
        return True
    if not HAS_SELECT:
        return False
    try:
        return bool(wait_for_read(sock, timeout=0.0))
    except SelectorError:
        return True
Returns True if the connection is dropped and should be closed. :param conn: :class:`httplib.HTTPConnection` object. Note: For platforms like AppEngine, this will always return ``False`` to let the platform handle connection recycling transparently for us.
def do_print(filename):
    """Print the AST of filename."""
    with open(filename) as cmake_file:
        body = ast.parse(cmake_file.read())

    word_print = _print_details(lambda n: "{0} {1}".format(n.type, n.contents))
    ast_visitor.recurse(body,
                        while_stmnt=_print_details(),
                        foreach=_print_details(),
                        function_def=_print_details(),
                        macro_def=_print_details(),
                        if_block=_print_details(),
                        if_stmnt=_print_details(),
                        elseif_stmnt=_print_details(),
                        else_stmnt=_print_details(),
                        function_call=_print_details(lambda n: n.name),
                        word=word_print)
Print the AST of filename.
def find_version(*args, **kwargs):
    """
    Wrapper around :py:class:`~.VersionFinder` and its
    :py:meth:`~.VersionFinder.find_package_version` method. Pass arguments
    and kwargs to VersionFinder constructor, return the value of its
    ``find_package_version`` method.

    :param package_name: name of the package to find information about
    :type package_name: str
    :param package_file: absolute path to a Python source file in the
      package to find information about; if not specified, the file
      calling this class will be used
    :type package_file: str
    :param log: If not set to True, the "versionfinder" and "pip" loggers
      will be set to a level of ``logging.CRITICAL`` to suppress log
      output. If set to True, you will see a LOT of debug-level log
      output, for debugging the internals of versionfinder.
    :type log: bool
    :returns: information about the installed version of the package
    :rtype: :py:class:`~versionfinder.versioninfo.VersionInfo`
    """
    if 'caller_frame' not in kwargs:
        kwargs['caller_frame'] = inspect.stack()[1][0]
    return VersionFinder(*args, **kwargs).find_package_version()
Wrapper around :py:class:`~.VersionFinder` and its
:py:meth:`~.VersionFinder.find_package_version` method. Pass arguments
and kwargs to VersionFinder constructor, return the value of its
``find_package_version`` method.

:param package_name: name of the package to find information about
:type package_name: str
:param package_file: absolute path to a Python source file in the
  package to find information about; if not specified, the file calling
  this class will be used
:type package_file: str
:param log: If not set to True, the "versionfinder" and "pip" loggers
  will be set to a level of ``logging.CRITICAL`` to suppress log output.
  If set to True, you will see a LOT of debug-level log output, for
  debugging the internals of versionfinder.
:type log: bool
:returns: information about the installed version of the package
:rtype: :py:class:`~versionfinder.versioninfo.VersionInfo`
def ceafe(clusters, gold_clusters):
    """
    Computes the Constrained EntityAlignment F-Measure (CEAF) for evaluating
    coreference. Gold and predicted mentions are aligned into clusterings
    which maximise a metric - in this case, the F measure between gold and
    predicted clusters.

    <https://www.semanticscholar.org/paper/On-Coreference-Resolution-Performance-Metrics-Luo/de133c1f22d0dfe12539e25dda70f28672459b99>
    """
    clusters = [cluster for cluster in clusters if len(cluster) != 1]
    scores = np.zeros((len(gold_clusters), len(clusters)))
    for i, gold_cluster in enumerate(gold_clusters):
        for j, cluster in enumerate(clusters):
            scores[i, j] = Scorer.phi4(gold_cluster, cluster)
    matching = linear_assignment(-scores)
    similarity = sum(scores[matching[:, 0], matching[:, 1]])
    return similarity, len(clusters), similarity, len(gold_clusters)
Computes the Constrained EntityAlignment F-Measure (CEAF) for evaluating coreference. Gold and predicted mentions are aligned into clusterings which maximise a metric - in this case, the F measure between gold and predicted clusters. <https://www.semanticscholar.org/paper/On-Coreference-Resolution-Performance-Metrics-Luo/de133c1f22d0dfe12539e25dda70f28672459b99>
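`Scorer.phi4` and `linear_assignment` come from the surrounding project. A self-contained sketch can substitute `scipy.optimize.linear_sum_assignment` and Luo's phi-4 overlap similarity (both substitutions are assumptions about what the originals compute):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment  # stand-in for linear_assignment

def phi4(gold, pred):
    # Luo's phi_4: F-measure-style overlap between two clusters
    return 2 * len(set(gold) & set(pred)) / (len(gold) + len(pred))

def ceafe(clusters, gold_clusters):
    clusters = [c for c in clusters if len(c) != 1]  # drop singleton system clusters
    scores = np.zeros((len(gold_clusters), len(clusters)))
    for i, g in enumerate(gold_clusters):
        for j, c in enumerate(clusters):
            scores[i, j] = phi4(g, c)
    row_ind, col_ind = linear_sum_assignment(-scores)  # maximise total similarity
    similarity = scores[row_ind, col_ind].sum()
    return similarity, len(clusters), similarity, len(gold_clusters)

gold = [[1, 2, 3], [4, 5]]
pred = [[1, 2], [3, 4, 5]]
# Best alignment pairs [1,2,3]<->[1,2] (0.8) and [4,5]<->[3,4,5] (0.8)
print(ceafe(pred, gold))
```

The four return values feed the usual precision/recall computation: similarity over predicted-cluster count, and similarity over gold-cluster count.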
def update_resource(self, resource_form):
    """Updates an existing resource.

    arg:    resource_form (osid.resource.ResourceForm): the form
            containing the elements to be updated
    raise:  IllegalState - ``resource_form`` already used in an update
            transaction
    raise:  InvalidArgument - the form contains an invalid value
    raise:  NullArgument - ``resource_form`` is ``null``
    raise:  OperationFailed - unable to complete request
    raise:  PermissionDenied - authorization failure
    raise:  Unsupported - ``resource_form`` did not originate from
            ``get_resource_form_for_update()``
    *compliance: mandatory -- This method must be implemented.*
    """
    # Implemented from template for
    # osid.resource.ResourceAdminSession.update_resource_template
    collection = JSONClientValidated('resource',
                                     collection='Resource',
                                     runtime=self._runtime)
    if not isinstance(resource_form, ABCResourceForm):
        raise errors.InvalidArgument('argument type is not a ResourceForm')
    if not resource_form.is_for_update():
        raise errors.InvalidArgument('the ResourceForm is for update only, not create')
    try:
        if self._forms[resource_form.get_id().get_identifier()] == UPDATED:
            raise errors.IllegalState('resource_form already used in an update transaction')
    except KeyError:
        raise errors.Unsupported('resource_form did not originate from this session')
    if not resource_form.is_valid():
        raise errors.InvalidArgument('one or more of the form elements is invalid')
    collection.save(resource_form._my_map)
    self._forms[resource_form.get_id().get_identifier()] = UPDATED
    # Note: this is out of spec. The OSIDs don't require an object to be returned:
    return objects.Resource(
        osid_object_map=resource_form._my_map,
        runtime=self._runtime,
        proxy=self._proxy)
Updates an existing resource.

arg:    resource_form (osid.resource.ResourceForm): the form
        containing the elements to be updated
raise:  IllegalState - ``resource_form`` already used in an update
        transaction
raise:  InvalidArgument - the form contains an invalid value
raise:  NullArgument - ``resource_form`` is ``null``
raise:  OperationFailed - unable to complete request
raise:  PermissionDenied - authorization failure
raise:  Unsupported - ``resource_form`` did not originate from
        ``get_resource_form_for_update()``
*compliance: mandatory -- This method must be implemented.*
def to_latex(self, number=0):
    """
    Returns a raw text string that contains a latex representation
    of the belief state as an attribute-value matrix.

    This requires: \usepackage{avm}
    """
    latex = r"""\avmfont{\sc}
\avmoptions{sorted,active}
\avmvalfont{\rm}"""
    latex += "\n\nb_%i = \\begin{avm} \n " % number
    latex += DictCell.to_latex(self)
    latex += "\n\\end{avm}\n"
    return latex
Returns a raw text string that contains a latex representation of the belief state as an attribute-value matrix. This requires: \usepackage{avm}
def wallace_reducer(wire_array_2, result_bitwidth, final_adder=kogge_stone):
    """
    The reduction and final adding part of a Dadda tree.
    Useful for adding many numbers together.
    The use of single bitwidth wires is to allow for additional flexibility.

    :param [[WireVector]] wire_array_2: An array of arrays of single
        bitwidth wirevectors
    :param int result_bitwidth: The bitwidth you want for the resulting
        wire. Used to eliminate unnecessary wires.
    :param final_adder: The adder used for the final addition
    :return: wirevector of length result_bitwidth
    """
    # verification that the wires are actually wirevectors of length 1
    for wire_set in wire_array_2:
        for a_wire in wire_set:
            if not isinstance(a_wire, pyrtl.WireVector) or len(a_wire) != 1:
                raise pyrtl.PyrtlError(
                    "The item {} is not a valid element for the wire_array_2. "
                    "It must be a WireVector of bitwidth 1".format(a_wire))

    while not all(len(i) <= 2 for i in wire_array_2):
        deferred = [[] for weight in range(result_bitwidth + 1)]
        for i, w_array in enumerate(wire_array_2):
            # Start with low weights and start reducing
            while len(w_array) >= 3:
                cout, sum = _one_bit_add_no_concat(*(w_array.pop(0) for j in range(3)))
                deferred[i].append(sum)
                deferred[i + 1].append(cout)
            if len(w_array) == 2:
                cout, sum = half_adder(*w_array)
                deferred[i].append(sum)
                deferred[i + 1].append(cout)
            else:
                deferred[i].extend(w_array)
        wire_array_2 = deferred[:result_bitwidth]

    # At this stage in the multiplication we have only 2 wire vectors left.
    # now we need to add them up
    result = _sparse_adder(wire_array_2, final_adder)
    if len(result) > result_bitwidth:
        return result[:result_bitwidth]
    else:
        return result
The reduction and final adding part of a Dadda tree.
Useful for adding many numbers together.
The use of single bitwidth wires is to allow for additional flexibility.

:param [[WireVector]] wire_array_2: An array of arrays of single
    bitwidth wirevectors
:param int result_bitwidth: The bitwidth you want for the resulting
    wire. Used to eliminate unnecessary wires.
:param final_adder: The adder used for the final addition
:return: wirevector of length result_bitwidth
def from_ascii_hex(text: str) -> int:
    """Converts to an int value from both ASCII and regular hex.

    The format used appears to vary based on whether the command was to
    get an existing value (regular hex) or set a new value (ASCII hex
    mirrored back from original command).

    Regular hex: 0123456789abcdef
    ASCII hex:   0123456789:;<=>?
    """
    value = 0
    for index in range(0, len(text)):
        char_ord = ord(text[index:index + 1])
        if char_ord in range(ord('0'), ord('?') + 1):
            digit = char_ord - ord('0')
        elif char_ord in range(ord('a'), ord('f') + 1):
            digit = 0xa + (char_ord - ord('a'))
        else:
            raise ValueError(
                "Response contains invalid character.")
        value = (value * 0x10) + digit
    return value
Converts to an int value from both ASCII and regular hex.

The format used appears to vary based on whether the command was to get
an existing value (regular hex) or set a new value (ASCII hex mirrored
back from original command).

Regular hex: 0123456789abcdef
ASCII hex:   0123456789:;<=>?
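The "ASCII hex" scheme works because the characters `:;<=>?` follow `9` in the ASCII table, so `char - '0'` naturally yields digit values 10 through 15. A quick demonstration with both encodings of the same value:

```python
def from_ascii_hex(text: str) -> int:
    value = 0
    for ch in text:
        char_ord = ord(ch)
        if ord('0') <= char_ord <= ord('?'):
            digit = char_ord - ord('0')        # covers 0-9 and :;<=>?
        elif ord('a') <= char_ord <= ord('f'):
            digit = 0xa + (char_ord - ord('a'))
        else:
            raise ValueError("Response contains invalid character.")
        value = (value * 0x10) + digit
    return value

print(from_ascii_hex('1f'))  # regular hex: 31
print(from_ascii_hex('1?'))  # ASCII hex: '?' is digit 15, also 31
```

Uppercase `A`-`F` is rejected, matching the original's behaviour.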
def init_fftw_plan(self, planning_effort='measure', **kwargs):
    """Initialize the FFTW plan for this transform for later use.

    If the implementation of this operator is not 'pyfftw', this
    method should not be called.

    Parameters
    ----------
    planning_effort : str, optional
        Flag for the amount of effort put into finding an optimal
        FFTW plan. See the `FFTW doc on planner flags
        <http://www.fftw.org/fftw3_doc/Planner-Flags.html>`_.
        Options: {'estimate', 'measure', 'patient', 'exhaustive'}
    planning_timelimit : float or ``None``, optional
        Limit planning time to roughly this many seconds.
        Default: ``None`` (no limit)
    threads : int, optional
        Number of threads to use. Default: 1

    Raises
    ------
    ValueError
        If `impl` is not 'pyfftw'

    Notes
    -----
    To save memory, clear the plan when the transform is no longer
    used (the plan stores 2 arrays).

    See Also
    --------
    clear_fftw_plan
    """
    if self.impl != 'pyfftw':
        raise ValueError('cannot create fftw plan without fftw backend')

    # Using available temporaries if possible
    inverse = isinstance(self, FourierTransformInverse)
    if inverse:
        rspace = self.range
        fspace = self.domain
    else:
        rspace = self.domain
        fspace = self.range

    if rspace.field == ComplexNumbers():
        # C2C: Use either one of 'r' or 'f' temporary if initialized
        if self._tmp_r is not None:
            arr_in = arr_out = self._tmp_r
        elif self._tmp_f is not None:
            arr_in = arr_out = self._tmp_f
        else:
            arr_in = arr_out = rspace.element().asarray()
    elif self.halfcomplex:
        # R2HC / HC2R: Use 'r' and 'f' temporary distinctly if initialized
        if self._tmp_r is not None:
            arr_r = self._tmp_r
        else:
            arr_r = rspace.element().asarray()
        if self._tmp_f is not None:
            arr_f = self._tmp_f
        else:
            arr_f = fspace.element().asarray()
        if inverse:
            arr_in, arr_out = arr_f, arr_r
        else:
            arr_in, arr_out = arr_r, arr_f
    else:
        # R2C / C2R: Use 'f' temporary for both sides if initialized
        if self._tmp_f is not None:
            arr_in = arr_out = self._tmp_f
        else:
            arr_in = arr_out = fspace.element().asarray()

    kwargs.pop('planning_timelimit', None)
    direction = 'forward' if self.sign == '-' else 'backward'
    self._fftw_plan = pyfftw_call(
        arr_in, arr_out, direction=direction, halfcomplex=self.halfcomplex,
        axes=self.axes, planning_effort=planning_effort, **kwargs)
Initialize the FFTW plan for this transform for later use. If the implementation of this operator is not 'pyfftw', this method should not be called. Parameters ---------- planning_effort : str, optional Flag for the amount of effort put into finding an optimal FFTW plan. See the `FFTW doc on planner flags <http://www.fftw.org/fftw3_doc/Planner-Flags.html>`_. Options: {'estimate', 'measure', 'patient', 'exhaustive'} planning_timelimit : float or ``None``, optional Limit planning time to roughly this many seconds. Default: ``None`` (no limit) threads : int, optional Number of threads to use. Default: 1 Raises ------ ValueError If `impl` is not 'pyfftw' Notes ----- To save memory, clear the plan when the transform is no longer used (the plan stores 2 arrays). See Also -------- clear_fftw_plan
Below is the instruction that describes the task: ### Input: Initialize the FFTW plan for this transform for later use. If the implementation of this operator is not 'pyfftw', this method should not be called. Parameters ---------- planning_effort : str, optional Flag for the amount of effort put into finding an optimal FFTW plan. See the `FFTW doc on planner flags <http://www.fftw.org/fftw3_doc/Planner-Flags.html>`_. Options: {'estimate', 'measure', 'patient', 'exhaustive'} planning_timelimit : float or ``None``, optional Limit planning time to roughly this many seconds. Default: ``None`` (no limit) threads : int, optional Number of threads to use. Default: 1 Raises ------ ValueError If `impl` is not 'pyfftw' Notes ----- To save memory, clear the plan when the transform is no longer used (the plan stores 2 arrays). See Also -------- clear_fftw_plan ### Response: def init_fftw_plan(self, planning_effort='measure', **kwargs): """Initialize the FFTW plan for this transform for later use. If the implementation of this operator is not 'pyfftw', this method should not be called. Parameters ---------- planning_effort : str, optional Flag for the amount of effort put into finding an optimal FFTW plan. See the `FFTW doc on planner flags <http://www.fftw.org/fftw3_doc/Planner-Flags.html>`_. Options: {'estimate', 'measure', 'patient', 'exhaustive'} planning_timelimit : float or ``None``, optional Limit planning time to roughly this many seconds. Default: ``None`` (no limit) threads : int, optional Number of threads to use. Default: 1 Raises ------ ValueError If `impl` is not 'pyfftw' Notes ----- To save memory, clear the plan when the transform is no longer used (the plan stores 2 arrays).
See Also -------- clear_fftw_plan """ if self.impl != 'pyfftw': raise ValueError('cannot create fftw plan without fftw backend') # Using available temporaries if possible inverse = isinstance(self, FourierTransformInverse) if inverse: rspace = self.range fspace = self.domain else: rspace = self.domain fspace = self.range if rspace.field == ComplexNumbers(): # C2C: Use either one of 'r' or 'f' temporary if initialized if self._tmp_r is not None: arr_in = arr_out = self._tmp_r elif self._tmp_f is not None: arr_in = arr_out = self._tmp_f else: arr_in = arr_out = rspace.element().asarray() elif self.halfcomplex: # R2HC / HC2R: Use 'r' and 'f' temporary distinctly if initialized if self._tmp_r is not None: arr_r = self._tmp_r else: arr_r = rspace.element().asarray() if self._tmp_f is not None: arr_f = self._tmp_f else: arr_f = fspace.element().asarray() if inverse: arr_in, arr_out = arr_f, arr_r else: arr_in, arr_out = arr_r, arr_f else: # R2C / C2R: Use 'f' temporary for both sides if initialized if self._tmp_f is not None: arr_in = arr_out = self._tmp_f else: arr_in = arr_out = fspace.element().asarray() kwargs.pop('planning_timelimit', None) direction = 'forward' if self.sign == '-' else 'backward' self._fftw_plan = pyfftw_call( arr_in, arr_out, direction=direction, halfcomplex=self.halfcomplex, axes=self.axes, planning_effort=planning_effort, **kwargs)
def generate_user(self, subid=None): '''generate a new user on the filesystem, still session based so we create a new identifier. This function is called from the users new entrypoint, and it assumes we want a user generated with a token. since we don't have a database proper, we write the folder name to the filesystem ''' # Only generate token if subid being created if subid is None: token = str(uuid.uuid4()) subid = self.generate_subid(token=token) if os.path.exists(self.data_base): # /scif/data data_base = "%s/%s" %(self.data_base, subid) # expfactory/00001 if not os.path.exists(data_base): mkdir_p(data_base) return data_base
generate a new user on the filesystem, still session based so we create a new identifier. This function is called from the users new entrypoint, and it assumes we want a user generated with a token. since we don't have a database proper, we write the folder name to the filesystem
Below is the instruction that describes the task: ### Input: generate a new user on the filesystem, still session based so we create a new identifier. This function is called from the users new entrypoint, and it assumes we want a user generated with a token. since we don't have a database proper, we write the folder name to the filesystem ### Response: def generate_user(self, subid=None): '''generate a new user on the filesystem, still session based so we create a new identifier. This function is called from the users new entrypoint, and it assumes we want a user generated with a token. since we don't have a database proper, we write the folder name to the filesystem ''' # Only generate token if subid being created if subid is None: token = str(uuid.uuid4()) subid = self.generate_subid(token=token) if os.path.exists(self.data_base): # /scif/data data_base = "%s/%s" %(self.data_base, subid) # expfactory/00001 if not os.path.exists(data_base): mkdir_p(data_base) return data_base
def recognize_verify_code(image_path, broker="ht"): """Recognize the verify code and return the recognized string; implemented with tesseract :param image_path: path to the captcha image :param broker: broker, one of ['ht', 'yjb', 'gf', 'yh'] :return recognized: verify code string""" if broker == "gf": return detect_gf_result(image_path) if broker in ["yh_client", "gj_client"]: return detect_yh_client_result(image_path) # fall back to tesseract-based recognition return default_verify_code_detect(image_path)
Recognize the verify code and return the recognized string; implemented with tesseract :param image_path: path to the captcha image :param broker: broker, one of ['ht', 'yjb', 'gf', 'yh'] :return recognized: verify code string
Below is the instruction that describes the task: ### Input: Recognize the verify code and return the recognized string; implemented with tesseract :param image_path: path to the captcha image :param broker: broker, one of ['ht', 'yjb', 'gf', 'yh'] :return recognized: verify code string ### Response: def recognize_verify_code(image_path, broker="ht"): """Recognize the verify code and return the recognized string; implemented with tesseract :param image_path: path to the captcha image :param broker: broker, one of ['ht', 'yjb', 'gf', 'yh'] :return recognized: verify code string""" if broker == "gf": return detect_gf_result(image_path) if broker in ["yh_client", "gj_client"]: return detect_yh_client_result(image_path) # fall back to tesseract-based recognition return default_verify_code_detect(image_path)
def external_editor(self, filename, goto=-1): """Edit in an external editor Recommended: SciTE (e.g. to go to line where an error did occur)""" editor_path = CONF.get('internal_console', 'external_editor/path') goto_option = CONF.get('internal_console', 'external_editor/gotoline') try: args = [filename] if goto > 0 and goto_option: args.append('%s%d' % (goto_option, goto)) programs.run_program(editor_path, args) except OSError: self.write_error("External editor was not found:" " %s\n" % editor_path)
Edit in an external editor Recommended: SciTE (e.g. to go to line where an error did occur)
Below is the instruction that describes the task: ### Input: Edit in an external editor Recommended: SciTE (e.g. to go to line where an error did occur) ### Response: def external_editor(self, filename, goto=-1): """Edit in an external editor Recommended: SciTE (e.g. to go to line where an error did occur)""" editor_path = CONF.get('internal_console', 'external_editor/path') goto_option = CONF.get('internal_console', 'external_editor/gotoline') try: args = [filename] if goto > 0 and goto_option: args.append('%s%d' % (goto_option, goto)) programs.run_program(editor_path, args) except OSError: self.write_error("External editor was not found:" " %s\n" % editor_path)
def _pick_level(cls, btc_amount): """ Choose between small, medium, large, ... depending on the amount specified. """ for size, level in cls.TICKER_LEVEL: if btc_amount < size: return level return cls.TICKER_LEVEL[-1][1]
Choose between small, medium, large, ... depending on the amount specified.
Below is the instruction that describes the task: ### Input: Choose between small, medium, large, ... depending on the amount specified. ### Response: def _pick_level(cls, btc_amount): """ Choose between small, medium, large, ... depending on the amount specified. """ for size, level in cls.TICKER_LEVEL: if btc_amount < size: return level return cls.TICKER_LEVEL[-1][1]
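The threshold walk above can be sketched with a module-level table; the `TICKER_LEVEL` bounds below are hypothetical, since the real class constant is not shown in the snippet:

```python
# Hypothetical thresholds: (exclusive upper bound in BTC, level name)
TICKER_LEVEL = [(0.1, "small"), (1.0, "medium"), (10.0, "large")]

def pick_level(btc_amount):
    # return the first level whose upper bound exceeds the amount
    for size, level in TICKER_LEVEL:
        if btc_amount < size:
            return level
    # amounts beyond every bound fall into the last level
    return TICKER_LEVEL[-1][1]

print(pick_level(0.05))   # small
print(pick_level(0.5))    # medium
print(pick_level(100.0))  # large
```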
def set_default(sld, tld): ''' Sets domain to use namecheap default DNS servers. Required for free services like Host record management, URL forwarding, email forwarding, dynamic DNS and other value added services. sld SLD of the domain name tld TLD of the domain name Returns ``True`` if the domain was successfully pointed at the default DNS servers. CLI Example: .. code-block:: bash salt 'my-minion' namecheap_domains_dns.set_default sld tld ''' opts = salt.utils.namecheap.get_opts('namecheap.domains.dns.setDefault') opts['SLD'] = sld opts['TLD'] = tld response_xml = salt.utils.namecheap.post_request(opts) if response_xml is None: return False dnsresult = response_xml.getElementsByTagName('DomainDNSSetDefaultResult')[0] return salt.utils.namecheap.string_to_value(dnsresult.getAttribute('Updated'))
Sets domain to use namecheap default DNS servers. Required for free services like Host record management, URL forwarding, email forwarding, dynamic DNS and other value added services. sld SLD of the domain name tld TLD of the domain name Returns ``True`` if the domain was successfully pointed at the default DNS servers. CLI Example: .. code-block:: bash salt 'my-minion' namecheap_domains_dns.set_default sld tld
Below is the instruction that describes the task: ### Input: Sets domain to use namecheap default DNS servers. Required for free services like Host record management, URL forwarding, email forwarding, dynamic DNS and other value added services. sld SLD of the domain name tld TLD of the domain name Returns ``True`` if the domain was successfully pointed at the default DNS servers. CLI Example: .. code-block:: bash salt 'my-minion' namecheap_domains_dns.set_default sld tld ### Response: def set_default(sld, tld): ''' Sets domain to use namecheap default DNS servers. Required for free services like Host record management, URL forwarding, email forwarding, dynamic DNS and other value added services. sld SLD of the domain name tld TLD of the domain name Returns ``True`` if the domain was successfully pointed at the default DNS servers. CLI Example: .. code-block:: bash salt 'my-minion' namecheap_domains_dns.set_default sld tld ''' opts = salt.utils.namecheap.get_opts('namecheap.domains.dns.setDefault') opts['SLD'] = sld opts['TLD'] = tld response_xml = salt.utils.namecheap.post_request(opts) if response_xml is None: return False dnsresult = response_xml.getElementsByTagName('DomainDNSSetDefaultResult')[0] return salt.utils.namecheap.string_to_value(dnsresult.getAttribute('Updated'))
def write_implied_format(self, path, jpeg_quality=0, jpeg_progressive=0): """Write pix to the filename, with the extension indicating format. jpeg_quality -- quality (iff JPEG; 1 - 100, 0 for default) jpeg_progressive -- (iff JPEG; 0 for baseline seq., 1 for progressive) """ filename = fspath(path) with _LeptonicaErrorTrap(): lept.pixWriteImpliedFormat( os.fsencode(filename), self._cdata, jpeg_quality, jpeg_progressive )
Write pix to the filename, with the extension indicating format. jpeg_quality -- quality (iff JPEG; 1 - 100, 0 for default) jpeg_progressive -- (iff JPEG; 0 for baseline seq., 1 for progressive)
Below is the instruction that describes the task: ### Input: Write pix to the filename, with the extension indicating format. jpeg_quality -- quality (iff JPEG; 1 - 100, 0 for default) jpeg_progressive -- (iff JPEG; 0 for baseline seq., 1 for progressive) ### Response: def write_implied_format(self, path, jpeg_quality=0, jpeg_progressive=0): """Write pix to the filename, with the extension indicating format. jpeg_quality -- quality (iff JPEG; 1 - 100, 0 for default) jpeg_progressive -- (iff JPEG; 0 for baseline seq., 1 for progressive) """ filename = fspath(path) with _LeptonicaErrorTrap(): lept.pixWriteImpliedFormat( os.fsencode(filename), self._cdata, jpeg_quality, jpeg_progressive )
def download_sample(job, sample, inputs): """ Download the input sample :param JobFunctionWrappingJob job: passed by Toil automatically :param tuple sample: Tuple containing (UUID,URL) of a sample :param Namespace inputs: Stores input arguments (see main) """ uuid, url = sample job.fileStore.logToMaster('Downloading sample: {}'.format(uuid)) # Download sample tar_id = job.addChildJobFn(download_url_job, url, s3_key_path=inputs.ssec, disk='30G').rv() # Create copy of inputs for each sample sample_inputs = argparse.Namespace(**vars(inputs)) sample_inputs.uuid = uuid sample_inputs.cores = multiprocessing.cpu_count() # Call children and follow-on jobs job.addFollowOnJobFn(process_sample, sample_inputs, tar_id, cores=2, disk='60G')
Download the input sample :param JobFunctionWrappingJob job: passed by Toil automatically :param tuple sample: Tuple containing (UUID,URL) of a sample :param Namespace inputs: Stores input arguments (see main)
Below is the instruction that describes the task: ### Input: Download the input sample :param JobFunctionWrappingJob job: passed by Toil automatically :param tuple sample: Tuple containing (UUID,URL) of a sample :param Namespace inputs: Stores input arguments (see main) ### Response: def download_sample(job, sample, inputs): """ Download the input sample :param JobFunctionWrappingJob job: passed by Toil automatically :param tuple sample: Tuple containing (UUID,URL) of a sample :param Namespace inputs: Stores input arguments (see main) """ uuid, url = sample job.fileStore.logToMaster('Downloading sample: {}'.format(uuid)) # Download sample tar_id = job.addChildJobFn(download_url_job, url, s3_key_path=inputs.ssec, disk='30G').rv() # Create copy of inputs for each sample sample_inputs = argparse.Namespace(**vars(inputs)) sample_inputs.uuid = uuid sample_inputs.cores = multiprocessing.cpu_count() # Call children and follow-on jobs job.addFollowOnJobFn(process_sample, sample_inputs, tar_id, cores=2, disk='60G')
def start_health_check(self, recipient): """ Starts a task for healthchecking `recipient` if there is not one yet. It also whitelists the address """ if recipient not in self.addresses_events: self.whitelist(recipient) # noop for now, for compatibility ping_nonce = self.nodeaddresses_to_nonces.setdefault( recipient, {'nonce': 0}, # HACK: Allows the task to mutate the object ) events = healthcheck.HealthEvents( event_healthy=Event(), event_unhealthy=Event(), ) self.addresses_events[recipient] = events greenlet_healthcheck = gevent.spawn( healthcheck.healthcheck, self, recipient, self.event_stop, events.event_healthy, events.event_unhealthy, self.nat_keepalive_retries, self.nat_keepalive_timeout, self.nat_invitation_timeout, ping_nonce, ) greenlet_healthcheck.name = f'Healthcheck for {pex(recipient)}' greenlet_healthcheck.link_exception(self.on_error) self.greenlets.append(greenlet_healthcheck)
Starts a task for healthchecking `recipient` if there is not one yet. It also whitelists the address
Below is the instruction that describes the task: ### Input: Starts a task for healthchecking `recipient` if there is not one yet. It also whitelists the address ### Response: def start_health_check(self, recipient): """ Starts a task for healthchecking `recipient` if there is not one yet. It also whitelists the address """ if recipient not in self.addresses_events: self.whitelist(recipient) # noop for now, for compatibility ping_nonce = self.nodeaddresses_to_nonces.setdefault( recipient, {'nonce': 0}, # HACK: Allows the task to mutate the object ) events = healthcheck.HealthEvents( event_healthy=Event(), event_unhealthy=Event(), ) self.addresses_events[recipient] = events greenlet_healthcheck = gevent.spawn( healthcheck.healthcheck, self, recipient, self.event_stop, events.event_healthy, events.event_unhealthy, self.nat_keepalive_retries, self.nat_keepalive_timeout, self.nat_invitation_timeout, ping_nonce, ) greenlet_healthcheck.name = f'Healthcheck for {pex(recipient)}' greenlet_healthcheck.link_exception(self.on_error) self.greenlets.append(greenlet_healthcheck)
def ancestor(self, index): """ Return the ``index``-th ancestor. The 0-th ancestor is the node itself, the 1-th ancestor is its parent node, etc. :param int index: the number of levels to go up :rtype: :class:`~aeneas.tree.Tree` :raises: TypeError if ``index`` is not an int :raises: ValueError if ``index`` is negative """ if not isinstance(index, int): self.log_exc(u"index is not an integer", None, True, TypeError) if index < 0: self.log_exc(u"index cannot be negative", None, True, ValueError) parent_node = self for i in range(index): if parent_node is None: break parent_node = parent_node.parent return parent_node
Return the ``index``-th ancestor. The 0-th ancestor is the node itself, the 1-th ancestor is its parent node, etc. :param int index: the number of levels to go up :rtype: :class:`~aeneas.tree.Tree` :raises: TypeError if ``index`` is not an int :raises: ValueError if ``index`` is negative
Below is the instruction that describes the task: ### Input: Return the ``index``-th ancestor. The 0-th ancestor is the node itself, the 1-th ancestor is its parent node, etc. :param int index: the number of levels to go up :rtype: :class:`~aeneas.tree.Tree` :raises: TypeError if ``index`` is not an int :raises: ValueError if ``index`` is negative ### Response: def ancestor(self, index): """ Return the ``index``-th ancestor. The 0-th ancestor is the node itself, the 1-th ancestor is its parent node, etc. :param int index: the number of levels to go up :rtype: :class:`~aeneas.tree.Tree` :raises: TypeError if ``index`` is not an int :raises: ValueError if ``index`` is negative """ if not isinstance(index, int): self.log_exc(u"index is not an integer", None, True, TypeError) if index < 0: self.log_exc(u"index cannot be negative", None, True, ValueError) parent_node = self for i in range(index): if parent_node is None: break parent_node = parent_node.parent return parent_node
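A minimal parent-linked node class (hypothetical; the real aeneas Tree class is much larger) demonstrates the walk, including how going past the root yields None rather than raising:

```python
class Node:
    def __init__(self, parent=None):
        self.parent = parent

    def ancestor(self, index):
        if not isinstance(index, int):
            raise TypeError("index is not an integer")
        if index < 0:
            raise ValueError("index cannot be negative")
        node = self
        for _ in range(index):
            if node is None:
                break
            node = node.parent
        return node

root = Node()
child = Node(parent=root)
leaf = Node(parent=child)
print(leaf.ancestor(0) is leaf)   # True (the 0-th ancestor is the node itself)
print(leaf.ancestor(2) is root)   # True
print(leaf.ancestor(5) is None)   # True (walking past the root yields None)
```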
def run_3to2(args=None): """Convert Python files using lib3to2.""" args = BASE_ARGS_3TO2 if args is None else BASE_ARGS_3TO2 + args try: proc = subprocess.Popen(['3to2'] + args, stderr=subprocess.PIPE) except OSError: for path in glob.glob('*.egg'): if os.path.isdir(path) and path not in sys.path: sys.path.append(path) try: from lib3to2.main import main as lib3to2_main except ImportError: raise OSError('3to2 script is unavailable.') else: if lib3to2_main('lib3to2.fixes', args): raise Exception('lib3to2 parsing error') else: # HACK: workaround for 3to2 never returning non-zero # when using the -j option. num_errors = 0 while proc.poll() is None: line = proc.stderr.readline() sys.stderr.write(line) num_errors += line.count(': ParseError: ') if proc.returncode or num_errors: raise Exception('lib3to2 parsing error')
Convert Python files using lib3to2.
Below is the instruction that describes the task: ### Input: Convert Python files using lib3to2. ### Response: def run_3to2(args=None): """Convert Python files using lib3to2.""" args = BASE_ARGS_3TO2 if args is None else BASE_ARGS_3TO2 + args try: proc = subprocess.Popen(['3to2'] + args, stderr=subprocess.PIPE) except OSError: for path in glob.glob('*.egg'): if os.path.isdir(path) and path not in sys.path: sys.path.append(path) try: from lib3to2.main import main as lib3to2_main except ImportError: raise OSError('3to2 script is unavailable.') else: if lib3to2_main('lib3to2.fixes', args): raise Exception('lib3to2 parsing error') else: # HACK: workaround for 3to2 never returning non-zero # when using the -j option. num_errors = 0 while proc.poll() is None: line = proc.stderr.readline() sys.stderr.write(line) num_errors += line.count(': ParseError: ') if proc.returncode or num_errors: raise Exception('lib3to2 parsing error')
def addSplits(self, login, tableName, splits): """ Parameters: - login - tableName - splits """ self.send_addSplits(login, tableName, splits) self.recv_addSplits()
Parameters: - login - tableName - splits
Below is the instruction that describes the task: ### Input: Parameters: - login - tableName - splits ### Response: def addSplits(self, login, tableName, splits): """ Parameters: - login - tableName - splits """ self.send_addSplits(login, tableName, splits) self.recv_addSplits()
def geojson_to_wkt(geojson_obj, feature_number=0, decimals=4): """Convert a GeoJSON object to Well-Known Text. Intended for use with OpenSearch queries. In case of FeatureCollection, only one of the features is used (the first by default). 3D points are converted to 2D. Parameters ---------- geojson_obj : dict a GeoJSON object feature_number : int, optional Feature to extract polygon from (in case of MultiPolygon FeatureCollection), defaults to first Feature decimals : int, optional Number of decimal figures after point to round coordinate to. Defaults to 4 (about 10 meters). Returns ------- polygon coordinates string of comma separated coordinate tuples (lon, lat) to be used by SentinelAPI """ if 'coordinates' in geojson_obj: geometry = geojson_obj elif 'geometry' in geojson_obj: geometry = geojson_obj['geometry'] else: geometry = geojson_obj['features'][feature_number]['geometry'] def ensure_2d(geometry): if isinstance(geometry[0], (list, tuple)): return list(map(ensure_2d, geometry)) else: return geometry[:2] def check_bounds(geometry): if isinstance(geometry[0], (list, tuple)): return list(map(check_bounds, geometry)) else: if geometry[0] > 180 or geometry[0] < -180: raise ValueError('Longitude is out of bounds, check your JSON format or data') if geometry[1] > 90 or geometry[1] < -90: raise ValueError('Latitude is out of bounds, check your JSON format or data') # Discard z-coordinate, if it exists geometry['coordinates'] = ensure_2d(geometry['coordinates']) check_bounds(geometry['coordinates']) wkt = geomet.wkt.dumps(geometry, decimals=decimals) # Strip unnecessary spaces wkt = re.sub(r'(?<!\d) ', '', wkt) return wkt
Convert a GeoJSON object to Well-Known Text. Intended for use with OpenSearch queries. In case of FeatureCollection, only one of the features is used (the first by default). 3D points are converted to 2D. Parameters ---------- geojson_obj : dict a GeoJSON object feature_number : int, optional Feature to extract polygon from (in case of MultiPolygon FeatureCollection), defaults to first Feature decimals : int, optional Number of decimal figures after point to round coordinate to. Defaults to 4 (about 10 meters). Returns ------- polygon coordinates string of comma separated coordinate tuples (lon, lat) to be used by SentinelAPI
Below is the instruction that describes the task: ### Input: Convert a GeoJSON object to Well-Known Text. Intended for use with OpenSearch queries. In case of FeatureCollection, only one of the features is used (the first by default). 3D points are converted to 2D. Parameters ---------- geojson_obj : dict a GeoJSON object feature_number : int, optional Feature to extract polygon from (in case of MultiPolygon FeatureCollection), defaults to first Feature decimals : int, optional Number of decimal figures after point to round coordinate to. Defaults to 4 (about 10 meters). Returns ------- polygon coordinates string of comma separated coordinate tuples (lon, lat) to be used by SentinelAPI ### Response: def geojson_to_wkt(geojson_obj, feature_number=0, decimals=4): """Convert a GeoJSON object to Well-Known Text. Intended for use with OpenSearch queries. In case of FeatureCollection, only one of the features is used (the first by default). 3D points are converted to 2D. Parameters ---------- geojson_obj : dict a GeoJSON object feature_number : int, optional Feature to extract polygon from (in case of MultiPolygon FeatureCollection), defaults to first Feature decimals : int, optional Number of decimal figures after point to round coordinate to. Defaults to 4 (about 10 meters).
Returns ------- polygon coordinates string of comma separated coordinate tuples (lon, lat) to be used by SentinelAPI """ if 'coordinates' in geojson_obj: geometry = geojson_obj elif 'geometry' in geojson_obj: geometry = geojson_obj['geometry'] else: geometry = geojson_obj['features'][feature_number]['geometry'] def ensure_2d(geometry): if isinstance(geometry[0], (list, tuple)): return list(map(ensure_2d, geometry)) else: return geometry[:2] def check_bounds(geometry): if isinstance(geometry[0], (list, tuple)): return list(map(check_bounds, geometry)) else: if geometry[0] > 180 or geometry[0] < -180: raise ValueError('Longitude is out of bounds, check your JSON format or data') if geometry[1] > 90 or geometry[1] < -90: raise ValueError('Latitude is out of bounds, check your JSON format or data') # Discard z-coordinate, if it exists geometry['coordinates'] = ensure_2d(geometry['coordinates']) check_bounds(geometry['coordinates']) wkt = geomet.wkt.dumps(geometry, decimals=decimals) # Strip unnecessary spaces wkt = re.sub(r'(?<!\d) ', '', wkt) return wkt
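The z-coordinate stripping is the interesting detail: `ensure_2d` recurses through the nested coordinate lists until it reaches a leaf point, then truncates it to (lon, lat). Extracted on its own (with an illustrative polygon ring):

```python
def ensure_2d(geometry):
    # recurse through nested coordinate lists; truncate leaf points to (lon, lat)
    if isinstance(geometry[0], (list, tuple)):
        return list(map(ensure_2d, geometry))
    return geometry[:2]

ring = [[4.0, 52.0, 10.0], [5.0, 52.5, 12.0], [4.0, 52.0, 10.0]]
print(ensure_2d([ring]))
# [[[4.0, 52.0], [5.0, 52.5], [4.0, 52.0]]]
```

The same recursion pattern is reused by `check_bounds`, which validates each leaf instead of transforming it.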
def process_udp_frame(self, id=None, msg=None): """process_udp_frame Convert a complex nested json dictionary to a flattened dictionary and capture all unique keys for table construction :param id: key for this msg :param msg: udp frame for packet """ # normalize into a dataframe df = json_normalize(msg) # convert to a flattened dictionary dt = json.loads(df.to_json()) flat_msg = {} for k in dt: new_key = "udp_{}".format(k) flat_msg[new_key] = dt[k]["0"] if new_key not in self.udp_keys: self.udp_keys[new_key] = k # end of capturing all unique keys dt["udp_id"] = id self.all_udp.append(dt) log.debug("UDP data updated:") log.debug(self.udp_keys) log.debug(self.all_udp) log.debug("") return flat_msg
process_udp_frame Convert a complex nested json dictionary to a flattened dictionary and capture all unique keys for table construction :param id: key for this msg :param msg: udp frame for packet
Below is the instruction that describes the task: ### Input: process_udp_frame Convert a complex nested json dictionary to a flattened dictionary and capture all unique keys for table construction :param id: key for this msg :param msg: udp frame for packet ### Response: def process_udp_frame(self, id=None, msg=None): """process_udp_frame Convert a complex nested json dictionary to a flattened dictionary and capture all unique keys for table construction :param id: key for this msg :param msg: udp frame for packet """ # normalize into a dataframe df = json_normalize(msg) # convert to a flattened dictionary dt = json.loads(df.to_json()) flat_msg = {} for k in dt: new_key = "udp_{}".format(k) flat_msg[new_key] = dt[k]["0"] if new_key not in self.udp_keys: self.udp_keys[new_key] = k # end of capturing all unique keys dt["udp_id"] = id self.all_udp.append(dt) log.debug("UDP data updated:") log.debug(self.udp_keys) log.debug(self.all_udp) log.debug("") return flat_msg
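`json_normalize` joins nested keys with a dot separator before the `udp_` prefix is applied. A pandas-free sketch of the same flattening (the sample frame is illustrative):

```python
def flatten_udp(msg, prefix="udp"):
    # walk the nested dict, joining keys with '.' the way json_normalize does
    flat = {}

    def walk(obj, path):
        if isinstance(obj, dict):
            for key, value in obj.items():
                walk(value, "{}.{}".format(path, key) if path else key)
        else:
            flat["{}_{}".format(prefix, path)] = obj

    walk(msg, "")
    return flat

frame = {"src": {"ip": "10.0.0.1", "port": 5353}, "len": 48}
print(flatten_udp(frame))
# {'udp_src.ip': '10.0.0.1', 'udp_src.port': 5353, 'udp_len': 48}
```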
def _getTimeStamps(self, table): ''' get time stamps ''' timeStamps = [] for th in table.thead.tr.contents: if '\n' != th: timeStamps.append(th.getText()) return timeStamps[1:]
get time stamps
Below is the instruction that describes the task: ### Input: get time stamps ### Response: def _getTimeStamps(self, table): ''' get time stamps ''' timeStamps = [] for th in table.thead.tr.contents: if '\n' != th: timeStamps.append(th.getText()) return timeStamps[1:]
def declare(self, symbol): """ Nothing gets declared here - it's the parents problem, except for the case where the symbol is the one we have here. """ if symbol != self.catch_symbol: self.parent.declare(symbol)
Nothing gets declared here - it's the parents problem, except for the case where the symbol is the one we have here.
Below is the instruction that describes the task: ### Input: Nothing gets declared here - it's the parents problem, except for the case where the symbol is the one we have here. ### Response: def declare(self, symbol): """ Nothing gets declared here - it's the parents problem, except for the case where the symbol is the one we have here. """ if symbol != self.catch_symbol: self.parent.declare(symbol)
def to_dp(self): """ Convert to darkplaces color format :return: """ text = self.text.replace('^', '^^') return '%s%s' % (self.color.to_dp(), text)
Convert to darkplaces color format :return:
Below is the instruction that describes the task: ### Input: Convert to darkplaces color format :return: ### Response: def to_dp(self): """ Convert to darkplaces color format :return: """ text = self.text.replace('^', '^^') return '%s%s' % (self.color.to_dp(), text)
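Since '^' introduces a color code in the darkplaces format, literal carets in the text must be doubled. A standalone sketch (the color prefix is passed in directly here, where the original calls `self.color.to_dp()`):

```python
def to_dp(color_prefix, text):
    # escape literal '^' so it is not read as a color-code introducer
    return '%s%s' % (color_prefix, text.replace('^', '^^'))

print(to_dp('^1', 'score: 5^2'))  # ^1score: 5^^2
```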
def delete(self, postage_id, session): '''taobao.postage.delete: delete a single shipping-fee (postage) template. The postage template identified by postage_id must belong to the current session user''' request = TOPRequest('taobao.postage.delete') request['postage_id'] = postage_id self.create(self.execute(request, session)['postage']) return self
taobao.postage.delete: delete a single shipping-fee (postage) template. The postage template identified by postage_id must belong to the current session user
Below is the instruction that describes the task: ### Input: taobao.postage.delete: delete a single shipping-fee (postage) template. The postage template identified by postage_id must belong to the current session user ### Response: def delete(self, postage_id, session): '''taobao.postage.delete: delete a single shipping-fee (postage) template. The postage template identified by postage_id must belong to the current session user''' request = TOPRequest('taobao.postage.delete') request['postage_id'] = postage_id self.create(self.execute(request, session)['postage']) return self
def generate_token_string(self, action=None): """Generate a hash of the given token contents that can be verified. :param action: A string representing the action that the generated hash is valid for. This string is usually a URL. :returns: A string containing the hash contents of the given `action` and the contents of the `XSRFToken`. Can be verified with `verify_token_string`. The string is base64 encoded so it is safe to use in HTML forms without escaping. """ digest_maker = self._digest_maker() digest_maker.update(self.user_id) digest_maker.update(self._DELIMITER) if action: digest_maker.update(action) digest_maker.update(self._DELIMITER) digest_maker.update(str(self.current_time)) return base64.urlsafe_b64encode( self._DELIMITER.join([digest_maker.hexdigest(), str(self.current_time)]))
Generate a hash of the given token contents that can be verified. :param action: A string representing the action that the generated hash is valid for. This string is usually a URL. :returns: A string containing the hash contents of the given `action` and the contents of the `XSRFToken`. Can be verified with `verify_token_string`. The string is base64 encoded so it is safe to use in HTML forms without escaping.
Below is the instruction that describes the task: ### Input: Generate a hash of the given token contents that can be verified. :param action: A string representing the action that the generated hash is valid for. This string is usually a URL. :returns: A string containing the hash contents of the given `action` and the contents of the `XSRFToken`. Can be verified with `verify_token_string`. The string is base64 encoded so it is safe to use in HTML forms without escaping. ### Response: def generate_token_string(self, action=None): """Generate a hash of the given token contents that can be verified. :param action: A string representing the action that the generated hash is valid for. This string is usually a URL. :returns: A string containing the hash contents of the given `action` and the contents of the `XSRFToken`. Can be verified with `verify_token_string`. The string is base64 encoded so it is safe to use in HTML forms without escaping. """ digest_maker = self._digest_maker() digest_maker.update(self.user_id) digest_maker.update(self._DELIMITER) if action: digest_maker.update(action) digest_maker.update(self._DELIMITER) digest_maker.update(str(self.current_time)) return base64.urlsafe_b64encode( self._DELIMITER.join([digest_maker.hexdigest(), str(self.current_time)]))
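A module-level sketch of the same scheme. The original's `_digest_maker` is not shown, so the digest choice (HMAC-SHA256), the explicit byte handling, and the parameter names here are assumptions, not the library's API:

```python
import base64
import hashlib
import hmac

DELIMITER = b'|'

def generate_token_string(key, user_id, action=None, current_time=0):
    # hash user id, optional action, and timestamp, delimiter-separated
    digest_maker = hmac.new(key, digestmod=hashlib.sha256)
    digest_maker.update(user_id)
    digest_maker.update(DELIMITER)
    if action:
        digest_maker.update(action)
        digest_maker.update(DELIMITER)
    digest_maker.update(str(current_time).encode())
    # the timestamp rides along in the token so the hash can be recomputed
    # during verification
    return base64.urlsafe_b64encode(
        DELIMITER.join([digest_maker.hexdigest().encode(),
                        str(current_time).encode()]))

tok = generate_token_string(b'secret', b'user-1', b'/delete',
                            current_time=1700000000)
print(base64.urlsafe_b64decode(tok).rsplit(DELIMITER, 1)[1])  # b'1700000000'
```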
def resume(self): """ Resumes this VirtualBox VM. """ yield from self._control_vm("resume") self.status = "started" log.info("VirtualBox VM '{name}' [{id}] resumed".format(name=self.name, id=self.id))
Resumes this VirtualBox VM.
def store( self, type, nick, time, fmt=None, code=None, filename=None, mime=None, data=None, makeshort=True): """ Store code or a file. Returns a tuple containing the uid and shortid """ uid = str(uuid.uuid4()) shortid = short_key() if makeshort else None paste = assign_params(self.build_paste, locals())() self._store(uid, paste, data) if nick: self._storeLog(nick, time, uid) return uid, shortid
Store code or a file. Returns a tuple containing the uid and shortid
def get_deployments(self, project, definition_id=None, definition_environment_id=None, created_by=None, min_modified_time=None, max_modified_time=None, deployment_status=None, operation_status=None, latest_attempts_only=None, query_order=None, top=None, continuation_token=None, created_for=None, min_started_time=None, max_started_time=None, source_branch=None): """GetDeployments. :param str project: Project ID or project name :param int definition_id: :param int definition_environment_id: :param str created_by: :param datetime min_modified_time: :param datetime max_modified_time: :param str deployment_status: :param str operation_status: :param bool latest_attempts_only: :param str query_order: :param int top: :param int continuation_token: :param str created_for: :param datetime min_started_time: :param datetime max_started_time: :param str source_branch: :rtype: [Deployment] """ route_values = {} if project is not None: route_values['project'] = self._serialize.url('project', project, 'str') query_parameters = {} if definition_id is not None: query_parameters['definitionId'] = self._serialize.query('definition_id', definition_id, 'int') if definition_environment_id is not None: query_parameters['definitionEnvironmentId'] = self._serialize.query('definition_environment_id', definition_environment_id, 'int') if created_by is not None: query_parameters['createdBy'] = self._serialize.query('created_by', created_by, 'str') if min_modified_time is not None: query_parameters['minModifiedTime'] = self._serialize.query('min_modified_time', min_modified_time, 'iso-8601') if max_modified_time is not None: query_parameters['maxModifiedTime'] = self._serialize.query('max_modified_time', max_modified_time, 'iso-8601') if deployment_status is not None: query_parameters['deploymentStatus'] = self._serialize.query('deployment_status', deployment_status, 'str') if operation_status is not None: query_parameters['operationStatus'] = self._serialize.query('operation_status', 
operation_status, 'str') if latest_attempts_only is not None: query_parameters['latestAttemptsOnly'] = self._serialize.query('latest_attempts_only', latest_attempts_only, 'bool') if query_order is not None: query_parameters['queryOrder'] = self._serialize.query('query_order', query_order, 'str') if top is not None: query_parameters['$top'] = self._serialize.query('top', top, 'int') if continuation_token is not None: query_parameters['continuationToken'] = self._serialize.query('continuation_token', continuation_token, 'int') if created_for is not None: query_parameters['createdFor'] = self._serialize.query('created_for', created_for, 'str') if min_started_time is not None: query_parameters['minStartedTime'] = self._serialize.query('min_started_time', min_started_time, 'iso-8601') if max_started_time is not None: query_parameters['maxStartedTime'] = self._serialize.query('max_started_time', max_started_time, 'iso-8601') if source_branch is not None: query_parameters['sourceBranch'] = self._serialize.query('source_branch', source_branch, 'str') response = self._send(http_method='GET', location_id='b005ef73-cddc-448e-9ba2-5193bf36b19f', version='5.0', route_values=route_values, query_parameters=query_parameters) return self._deserialize('[Deployment]', self._unwrap_collection(response))
GetDeployments. :param str project: Project ID or project name :param int definition_id: :param int definition_environment_id: :param str created_by: :param datetime min_modified_time: :param datetime max_modified_time: :param str deployment_status: :param str operation_status: :param bool latest_attempts_only: :param str query_order: :param int top: :param int continuation_token: :param str created_for: :param datetime min_started_time: :param datetime max_started_time: :param str source_branch: :rtype: [Deployment]
def _change_line(self, delta): """ Move the cursor up/down the specified number of lines. :param delta: The number of lines to move (-ve is up, +ve is down). """ # Ensure new line is within limits self._line = min(max(0, self._line + delta), len(self._value) - 1) # Fix up column if the new line is shorter than before. if self._column >= len(self._value[self._line]): self._column = len(self._value[self._line])
Move the cursor up/down the specified number of lines. :param delta: The number of lines to move (-ve is up, +ve is down).
def seconds2time(s): """Inverse of time2seconds().""" hour, temp = divmod(s, 3600) minute, temp = divmod(temp, 60) temp, second = math.modf(temp) return datetime.time(hour=int(hour), minute=int(minute), second=int(second), microsecond=int(round(temp * 1e6)))
Inverse of time2seconds().
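The `seconds2time` conversion above is self-contained and can be exercised directly; the function is re-declared here because its module (and the `time2seconds` counterpart it inverts) is not shown.

```python
import datetime
import math

def seconds2time(s):
    """Convert a seconds-since-midnight float to a datetime.time."""
    hour, temp = divmod(s, 3600)
    minute, temp = divmod(temp, 60)
    # math.modf splits the remainder into (fractional, integer) parts
    temp, second = math.modf(temp)
    return datetime.time(hour=int(hour), minute=int(minute),
                         second=int(second),
                         microsecond=int(round(temp * 1e6)))

t = seconds2time(3661.5)   # 1 h, 1 min, 1.5 s past midnight
```

Note the `round` on the microseconds: without it, float error in `modf` could truncate e.g. 0.5 s to 499999 µs.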
def save_channels(self, checked=False, test_name=None): """Save channel groups to file.""" self.read_group_info() if self.filename is not None: filename = self.filename elif self.parent.info.filename is not None: filename = (splitext(self.parent.info.filename)[0] + '_channels.json') else: filename = None if test_name is None: filename, _ = QFileDialog.getSaveFileName(self, 'Save Channels Montage', filename, 'Channels File (*.json)') else: filename = test_name if filename == '': return self.filename = filename groups = deepcopy(self.groups) for one_grp in groups: one_grp['color'] = one_grp['color'].rgba() with open(filename, 'w') as outfile: dump(groups, outfile, indent=' ')
Save channel groups to file.
def make_request(self, image, *features): """ Makes single image request :param image: One of file object, path, or URL :param features: Recognition features :return: """ return { "image": { "content": self.image_to_base64(image) }, "features": [{ "type": feature.type_, "maxResults": feature.max_results } for feature in features] }
Makes single image request :param image: One of file object, path, or URL :param features: Recognition features :return:
def retrieve_extension(self, name, **kw): """Retrieve details on a single extension.""" response = self.request(E.retrieveExtensionRequest( E.name(name), E.withDescription(int(kw.get('with_description', 0))), E.withPrice(int(kw.get('with_price', 0))), E.withUsageCount(int(kw.get('with_usage_count', 0))), )) return response.as_model(Extension)
Retrieve details on a single extension.
def get_block_type(self, def_id): """Get a block_type by its definition id.""" try: return self._definitions[def_id] except KeyError: try: return def_id.aside_type except AttributeError: raise NoSuchDefinition(repr(def_id))
Get a block_type by its definition id.
def delete(self, *args): """Remove the key from the request cache and from memcache.""" cache = get_cache() key = self.get_cache_key(*args) if key in cache: del cache[key]
Remove the key from the request cache and from memcache.
def summary_pairwise_indices(self): """ndarray containing tuples of pairwise indices for the column summary.""" summary_pairwise_indices = np.empty( self.values[0].t_stats.shape[1], dtype=object ) summary_pairwise_indices[:] = [ sig.summary_pairwise_indices for sig in self.values ] return summary_pairwise_indices
ndarray containing tuples of pairwise indices for the column summary.
def find_row(self, ev_start, ev_end): """Highlight event row in table from start and end time. Parameters ---------- ev_start : float start time, in seconds from record start ev_end : float end time, in seconds from record start Returns ------- int index of event row in idx_annot_list QTableWidget """ all_starts = self.idx_annot_list.property('start') all_ends = self.idx_annot_list.property('end') for i, (start, end) in enumerate(zip(all_starts, all_ends)): if start == ev_start and end == ev_end: return i for i, start in enumerate(all_starts): if start == ev_start: return i for i, end in enumerate(all_ends): if end == ev_end: return i raise ValueError
Highlight event row in table from start and end time. Parameters ---------- ev_start : float start time, in seconds from record start ev_end : float end time, in seconds from record start Returns ------- int index of event row in idx_annot_list QTableWidget
def enable(self, key_id, **kwargs): """Enable a deploy key for a project. Args: key_id (int): The ID of the key to enable **kwargs: Extra options to send to the server (e.g. sudo) Raises: GitlabAuthenticationError: If authentication is not correct GitlabProjectDeployKeyError: If the key could not be enabled """ path = '%s/%s/enable' % (self.path, key_id) self.gitlab.http_post(path, **kwargs)
Enable a deploy key for a project. Args: key_id (int): The ID of the key to enable **kwargs: Extra options to send to the server (e.g. sudo) Raises: GitlabAuthenticationError: If authentication is not correct GitlabProjectDeployKeyError: If the key could not be enabled
def process_image_field(self, data): """ Process perseus fields like questions and hints, which look like: .. code-block:: python { "content": "md string including imgs like ![](URL-key) and ![](URL-key2)", "images": { "URL-key": {"width": 425, "height": 425}, "URL-key2": {"width": 425, "height": 425} } } Replaces `content` attribute and returns (images_dict, image_files), where - `images_dict` is a replacement for the old `images` key - `image_files` is a list of image files for the URLs found Note it is possible for assessment items to include image links in `content` that are not listed under `images`, so code must handle that case too, see https://github.com/learningequality/ricecooker/issues/178 for details. """ new_images_dict = copy.deepcopy(data['images']) image_files = [] # STEP 1. Compile dict of {old_url-->new_url} image URL replacements image_replacements = {} # STEP 1A. get all images specified in data['images'] for old_url, image_settings in data['images'].items(): new_url, new_image_files = self.set_image(old_url) image_files += new_image_files new_images_dict[new_url] = new_images_dict.pop(old_url) image_replacements[old_url] = new_url # STEP 1B. look for additional `MARKDOWN_IMAGE_REGEX`-like links in the `content` attr. img_link_pat = re.compile(MARKDOWN_IMAGE_REGEX, flags=re.IGNORECASE) img_link_matches = img_link_pat.findall(data['content']) for match in img_link_matches: old_url = match[1] if old_url not in image_replacements.keys(): new_url, new_image_files = self.set_image(old_url) image_files += new_image_files image_replacements[old_url] = new_url # STEP 2. Perform content replacement for all URLs in image_replacements for old_url, new_url in image_replacements.items(): data['content'] = data['content'].replace(old_url, new_url) return new_images_dict, image_files
Process perseus fields like questions and hints, which look like: .. code-block:: python { "content": "md string including imgs like ![](URL-key) and ![](URL-key2)", "images": { "URL-key": {"width": 425, "height": 425}, "URL-key2": {"width": 425, "height": 425} } } Replaces `content` attribute and returns (images_dict, image_files), where - `images_dict` is a replacement for the old `images` key - `image_files` is a list of image files for the URLs found Note it is possible for assessment items to include image links in `content` that are not listed under `images`, so code must handle that case too, see https://github.com/learningequality/ricecooker/issues/178 for details.
def case_mme_update(self, case_obj, user_obj, mme_subm_obj): """Updates a case after a submission to MatchMaker Exchange Args: case_obj(dict): a scout case object user_obj(dict): a scout user object mme_subm_obj(dict): contains MME submission params and server response Returns: updated_case(dict): the updated scout case """ created = None patient_ids = [] updated = datetime.now() if 'mme_submission' in case_obj and case_obj['mme_submission']: created = case_obj['mme_submission']['created_at'] else: created = updated patients = [ resp['patient'] for resp in mme_subm_obj.get('server_responses')] subm_obj = { 'created_at' : created, 'updated_at' : updated, 'patients' : patients, # list of submitted patient data 'subm_user' : user_obj['_id'], # submitting user 'sex' : mme_subm_obj['sex'], 'features' : mme_subm_obj['features'], 'disorders' : mme_subm_obj['disorders'], 'genes_only' : mme_subm_obj['genes_only'] } case_obj['mme_submission'] = subm_obj updated_case = self.update_case(case_obj) # create events for subjects added to MatchMaker for this case institute_obj = self.institute(case_obj['owner']) for individual in case_obj['individuals']: if individual['phenotype'] == 2: # affected # create event for patient self.create_event(institute=institute_obj, case=case_obj, user=user_obj, link='', category='case', verb='mme_add', subject=individual['display_name'], level='specific') return updated_case
Updates a case after a submission to MatchMaker Exchange Args: case_obj(dict): a scout case object user_obj(dict): a scout user object mme_subm_obj(dict): contains MME submission params and server response Returns: updated_case(dict): the updated scout case
def is_larger(unit_1, unit_2): """Returns a boolean indicating whether unit_1 is larger than unit_2. E.g: >>> is_larger('KB', 'B') True >>> is_larger('min', 'day') False """ unit_1 = functions.value_for_key(INFORMATION_UNITS, unit_1) unit_2 = functions.value_for_key(INFORMATION_UNITS, unit_2) return ureg.parse_expression(unit_1) > ureg.parse_expression(unit_2)
Returns a boolean indicating whether unit_1 is larger than unit_2. E.g: >>> is_larger('KB', 'B') True >>> is_larger('min', 'day') False
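The `is_larger` comparison above leans on pint's `ureg` and an `INFORMATION_UNITS` alias table, neither of which is shown. A dependency-free sketch of the same idea follows, using a hand-rolled unit-to-magnitude table; the unit names are assumptions chosen to match the docstring examples, not the real table.

```python
# Magnitudes in a common base per dimension: bytes for information units,
# seconds for time units. Only units needed for the docstring examples
# are listed; the real code resolves these through pint's unit registry.
_MAGNITUDES = {
    'B': 1, 'KB': 1024, 'MB': 1024 ** 2,
    'sec': 1, 'min': 60, 'hour': 3600, 'day': 86400,
}

def is_larger(unit_1, unit_2):
    """Return True if unit_1 denotes a larger quantity than unit_2."""
    return _MAGNITUDES[unit_1] > _MAGNITUDES[unit_2]
```

The table only makes sense for units sharing a dimension; comparing, say, `'KB'` with `'min'` is meaningless, which is why the real implementation delegates the comparison to pint's dimensioned quantities.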
def ObjectModifiedEventHandler(obj, event): """Object has been modified """ # only snapshot supported objects if not supports_snapshots(obj): return # take a new snapshot take_snapshot(obj, action="edit") # reindex the object in the auditlog catalog reindex_object(obj)
Object has been modified
async def _send_reply(self, obj, reply): """Send a reply with added standard fields back to executor. :param obj: The original Channels message object to which we're replying. :param reply: The message contents dictionary. The data id is added automatically (``reply`` is modified in place). """ reply.update({ ExecutorProtocol.DATA_ID: obj[ExecutorProtocol.DATA_ID], }) await self._call_redis(aioredis.Redis.rpush, self._queue_response_channel(obj), json.dumps(reply))
Send a reply with added standard fields back to executor. :param obj: The original Channels message object to which we're replying. :param reply: The message contents dictionary. The data id is added automatically (``reply`` is modified in place).
def _get_result(self, idx, timeout=None): """Called by the CollectorIterator object to retrieve the result's values one after another, in the order the results have become available. \param idx The index of the result we want, wrt collector's order \param timeout integer telling how long to wait (in seconds) for the result at index idx to be available, or None (wait forever) """ self._cond.acquire() try: if idx >= self._expected: raise IndexError elif idx < len(self._collection): return self._collection[idx] elif idx != len(self._collection): # Violation of the sequence protocol raise IndexError() else: self._cond.wait(timeout=timeout) try: return self._collection[idx] except IndexError: # Still not added ! raise TimeoutError("Timeout while waiting for results") finally: self._cond.release()
Called by the CollectorIterator object to retrieve the result's values one after another, in the order the results have become available. \param idx The index of the result we want, wrt collector's order \param timeout integer telling how long to wait (in seconds) for the result at index idx to be available, or None (wait forever)
Below is the instruction that describes the task: ### Input: Called by the CollectorIterator object to retrieve the result's values one after another, in the order the results have become available. \param idx The index of the result we want, wrt collector's order \param timeout integer telling how long to wait (in seconds) for the result at index idx to be available, or None (wait forever) ### Response: def _get_result(self, idx, timeout=None): """Called by the CollectorIterator object to retrieve the result's values one after another, in the order the results have become available. \param idx The index of the result we want, wrt collector's order \param timeout integer telling how long to wait (in seconds) for the result at index idx to be available, or None (wait forever) """ self._cond.acquire() try: if idx >= self._expected: raise IndexError elif idx < len(self._collection): return self._collection[idx] elif idx != len(self._collection): # Violation of the sequence protocol raise IndexError() else: self._cond.wait(timeout=timeout) try: return self._collection[idx] except IndexError: # Still not added ! raise TimeoutError("Timeout while waiting for results") finally: self._cond.release()
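The `_get_result` record above blocks on a single `Condition.wait()` call, which can wake spuriously before the slot is actually filled. A minimal sketch of the same ordered-result collector using `Condition.wait_for`, which re-checks the predicate on every wakeup (the `Collector` class and its method names are illustrative, not from the source):

```python
import threading

class Collector:
    """Minimal ordered-result collector: consumers block until the
    result at a given index has been appended by a producer."""

    def __init__(self, expected):
        self._expected = expected
        self._collection = []
        self._cond = threading.Condition()

    def add(self, value):
        with self._cond:
            self._collection.append(value)
            self._cond.notify_all()

    def get(self, idx, timeout=None):
        with self._cond:
            if idx >= self._expected:
                raise IndexError(idx)
            # wait_for re-evaluates the predicate after each notify,
            # so a spurious wakeup cannot return a missing slot
            if not self._cond.wait_for(lambda: idx < len(self._collection),
                                       timeout=timeout):
                raise TimeoutError("Timeout while waiting for results")
            return self._collection[idx]
```

Unlike the original, `get` here also tolerates being called for an index more than one slot ahead of the producer: it simply waits instead of raising `IndexError`.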
def __update_clusters(self): """! @brief Calculate Manhattan distance to each point from each cluster. @details Nearest points are captured by the corresponding clusters and as a result clusters are updated. @return (list) updated clusters as list of clusters where each cluster contains indexes of objects from data. """ clusters = [[] for i in range(len(self.__medians))] for index_point in range(len(self.__pointer_data)): index_optim = -1 dist_optim = 0.0 for index in range(len(self.__medians)): dist = self.__metric(self.__pointer_data[index_point], self.__medians[index]) if (dist < dist_optim) or (index == 0): index_optim = index dist_optim = dist clusters[index_optim].append(index_point) # If cluster is not able to capture object it should be removed clusters = [cluster for cluster in clusters if len(cluster) > 0] return clusters
! @brief Calculate Manhattan distance to each point from each cluster. @details Nearest points are captured by the corresponding clusters and as a result clusters are updated. @return (list) updated clusters as list of clusters where each cluster contains indexes of objects from data.
Below is the instruction that describes the task: ### Input: ! @brief Calculate Manhattan distance to each point from each cluster. @details Nearest points are captured by the corresponding clusters and as a result clusters are updated. @return (list) updated clusters as list of clusters where each cluster contains indexes of objects from data. ### Response: def __update_clusters(self): """! @brief Calculate Manhattan distance to each point from each cluster. @details Nearest points are captured by the corresponding clusters and as a result clusters are updated. @return (list) updated clusters as list of clusters where each cluster contains indexes of objects from data. """ clusters = [[] for i in range(len(self.__medians))] for index_point in range(len(self.__pointer_data)): index_optim = -1 dist_optim = 0.0 for index in range(len(self.__medians)): dist = self.__metric(self.__pointer_data[index_point], self.__medians[index]) if (dist < dist_optim) or (index == 0): index_optim = index dist_optim = dist clusters[index_optim].append(index_point) # If cluster is not able to capture object it should be removed clusters = [cluster for cluster in clusters if len(cluster) > 0] return clusters
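The cluster-update step in the record above — assign every point to its nearest median under Manhattan distance, then drop clusters that captured nothing — can be sketched in plain Python without the surrounding class (function names are illustrative):

```python
def manhattan(p, q):
    """Manhattan (L1) distance between two equal-length coordinate tuples."""
    return sum(abs(a - b) for a, b in zip(p, q))

def assign_clusters(points, medians):
    """Assign each point index to its nearest median by Manhattan
    distance; clusters that capture no points are removed."""
    clusters = [[] for _ in medians]
    for i, p in enumerate(points):
        best = min(range(len(medians)), key=lambda m: manhattan(p, medians[m]))
        clusters[best].append(i)
    return [c for c in clusters if c]
```

This mirrors the original's behavior, including returning point *indexes* rather than coordinates and pruning empty clusters at the end.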
def set_data(self, ids=None): """ Set the data for all specified measurements (all if None given). """ fun = lambda x: x.set_data() self.apply(fun, ids=ids, applyto='measurement')
Set the data for all specified measurements (all if None given).
Below is the instruction that describes the task: ### Input: Set the data for all specified measurements (all if None given). ### Response: def set_data(self, ids=None): """ Set the data for all specified measurements (all if None given). """ fun = lambda x: x.set_data() self.apply(fun, ids=ids, applyto='measurement')
def view_packgets_list(self, option: str = '-e', keyword: str = '') -> list: '''Show all packages. Args: option: -f see their associated file -d filter to only show disabled packages -e filter to only show enabled packages -s filter to only show system packages -3 filter to only show third party packages -i see the installer for the packages -u also include uninstalled packages -keyword: optionally only those whose name contains the text in keyword ''' if option not in ['-f', '-d', '-e', '-s', '-3', '-i', '-u']: raise ValueError(f'There is no option called {option!r}.') output, _ = self._execute( '-s', self.device_sn, 'shell', 'pm', 'list', 'packages', option, keyword) return list(map(lambda x: x[8:], output.splitlines()))
Show all packages. Args: option: -f see their associated file -d filter to only show disabled packages -e filter to only show enabled packages -s filter to only show system packages -3 filter to only show third party packages -i see the installer for the packages -u also include uninstalled packages -keyword: optionally only those whose name contains the text in keyword
Below is the instruction that describes the task: ### Input: Show all packages. Args: option: -f see their associated file -d filter to only show disabled packages -e filter to only show enabled packages -s filter to only show system packages -3 filter to only show third party packages -i see the installer for the packages -u also include uninstalled packages -keyword: optionally only those whose name contains the text in keyword ### Response: def view_packgets_list(self, option: str = '-e', keyword: str = '') -> list: '''Show all packages. Args: option: -f see their associated file -d filter to only show disabled packages -e filter to only show enabled packages -s filter to only show system packages -3 filter to only show third party packages -i see the installer for the packages -u also include uninstalled packages -keyword: optionally only those whose name contains the text in keyword ''' if option not in ['-f', '-d', '-e', '-s', '-3', '-i', '-u']: raise ValueError(f'There is no option called {option!r}.') output, _ = self._execute( '-s', self.device_sn, 'shell', 'pm', 'list', 'packages', option, keyword) return list(map(lambda x: x[8:], output.splitlines()))
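The `x[8:]` slice in the record above strips the `package:` prefix that `pm list packages` prints on every output line (8 is `len('package:')`). Just that parsing step can be exercised without a device attached — the sample output in the test is made up, and `parse_package_lines` is an illustrative helper, not part of the source API:

```python
def parse_package_lines(output):
    """Strip the 'package:' prefix from each line of `pm list packages`
    output, returning the bare package names."""
    prefix = 'package:'
    return [line[len(prefix):] for line in output.splitlines()]
```

Using the prefix length by name rather than the magic number 8 keeps the intent of the slice visible.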
def parse(cls, fptr, offset, length): """Parse UUIDList box. Parameters ---------- fptr : file Open file object. offset : int Start position of box in bytes. length : int Length of the box in bytes. Returns ------- UUIDListBox Instance of the current UUID list box. """ num_bytes = offset + length - fptr.tell() read_buffer = fptr.read(num_bytes) num_uuids, = struct.unpack_from('>H', read_buffer) ulst = [] for j in range(num_uuids): uuid_buffer = read_buffer[2 + j * 16:2 + (j + 1) * 16] ulst.append(UUID(bytes=uuid_buffer)) return cls(ulst, length=length, offset=offset)
Parse UUIDList box. Parameters ---------- fptr : file Open file object. offset : int Start position of box in bytes. length : int Length of the box in bytes. Returns ------- UUIDListBox Instance of the current UUID list box.
Below is the instruction that describes the task: ### Input: Parse UUIDList box. Parameters ---------- fptr : file Open file object. offset : int Start position of box in bytes. length : int Length of the box in bytes. Returns ------- UUIDListBox Instance of the current UUID list box. ### Response: def parse(cls, fptr, offset, length): """Parse UUIDList box. Parameters ---------- fptr : file Open file object. offset : int Start position of box in bytes. length : int Length of the box in bytes. Returns ------- UUIDListBox Instance of the current UUID list box. """ num_bytes = offset + length - fptr.tell() read_buffer = fptr.read(num_bytes) num_uuids, = struct.unpack_from('>H', read_buffer) ulst = [] for j in range(num_uuids): uuid_buffer = read_buffer[2 + j * 16:2 + (j + 1) * 16] ulst.append(UUID(bytes=uuid_buffer)) return cls(ulst, length=length, offset=offset)
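The `UUIDListBox` payload parsed above is a big-endian uint16 count followed by that many raw 16-byte UUIDs. The same layout can be round-tripped standalone with `struct` and the standard-library `uuid` module (`parse_uuid_list` is an illustrative helper, not part of the source API):

```python
import struct
from uuid import UUID, uuid4

def parse_uuid_list(buffer):
    """Parse a big-endian uint16 count ('>H') followed by `count`
    raw 16-byte UUIDs, mirroring the UUIDListBox payload layout."""
    (count,) = struct.unpack_from('>H', buffer)
    return [UUID(bytes=buffer[2 + j * 16: 2 + (j + 1) * 16])
            for j in range(count)]
```

`UUID(bytes=...)` interprets the 16 bytes in big-endian order, matching the `u.bytes` attribute used to build the buffer in the test below.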
def local_attention_2d(x, hparams, attention_type="local_attention_2d"): """Local 2d, self attention layer.""" # self-attention with tf.variable_scope("local_2d_self_att"): y = common_attention.multihead_attention_2d( x, None, hparams.attention_key_channels or hparams.hidden_size, hparams.attention_value_channels or hparams.hidden_size, hparams.hidden_size, hparams.num_heads, attention_type=attention_type, query_shape=hparams.query_shape, memory_flange=hparams.memory_flange, name="self_attention") return y
Local 2d, self attention layer.
Below is the instruction that describes the task: ### Input: Local 2d, self attention layer. ### Response: def local_attention_2d(x, hparams, attention_type="local_attention_2d"): """Local 2d, self attention layer.""" # self-attention with tf.variable_scope("local_2d_self_att"): y = common_attention.multihead_attention_2d( x, None, hparams.attention_key_channels or hparams.hidden_size, hparams.attention_value_channels or hparams.hidden_size, hparams.hidden_size, hparams.num_heads, attention_type=attention_type, query_shape=hparams.query_shape, memory_flange=hparams.memory_flange, name="self_attention") return y
def stream(self, end_date=values.unset, friendly_name=values.unset, minutes=values.unset, start_date=values.unset, task_channel=values.unset, split_by_wait_time=values.unset, limit=None, page_size=None): """ Streams TaskQueuesStatisticsInstance records from the API as a generator stream. This operation lazily loads records as efficiently as possible until the limit is reached. The results are returned as a generator, so this operation is memory efficient. :param datetime end_date: Filter cumulative statistics by an end date. :param unicode friendly_name: Filter the TaskQueue stats based on a TaskQueue's name :param unicode minutes: Filter cumulative statistics by up to 'x' minutes in the past. :param datetime start_date: Filter cumulative statistics by a start date. :param unicode task_channel: Filter real-time and cumulative statistics by TaskChannel. :param unicode split_by_wait_time: A comma separated values for viewing splits of tasks canceled and accepted above the given threshold in seconds. :param int limit: Upper limit for the number of records to return. stream() guarantees to never return more than limit. Default is no limit :param int page_size: Number of records to fetch per request, when not set will use the default value of 50 records. If no page_size is defined but a limit is defined, stream() will attempt to read the limit with the most efficient page size, i.e. min(limit, 1000) :returns: Generator that will yield up to limit results :rtype: list[twilio.rest.taskrouter.v1.workspace.task_queue.task_queues_statistics.TaskQueuesStatisticsInstance] """ limits = self._version.read_limits(limit, page_size) page = self.page( end_date=end_date, friendly_name=friendly_name, minutes=minutes, start_date=start_date, task_channel=task_channel, split_by_wait_time=split_by_wait_time, page_size=limits['page_size'], ) return self._version.stream(page, limits['limit'], limits['page_limit'])
Streams TaskQueuesStatisticsInstance records from the API as a generator stream. This operation lazily loads records as efficiently as possible until the limit is reached. The results are returned as a generator, so this operation is memory efficient. :param datetime end_date: Filter cumulative statistics by an end date. :param unicode friendly_name: Filter the TaskQueue stats based on a TaskQueue's name :param unicode minutes: Filter cumulative statistics by up to 'x' minutes in the past. :param datetime start_date: Filter cumulative statistics by a start date. :param unicode task_channel: Filter real-time and cumulative statistics by TaskChannel. :param unicode split_by_wait_time: A comma separated values for viewing splits of tasks canceled and accepted above the given threshold in seconds. :param int limit: Upper limit for the number of records to return. stream() guarantees to never return more than limit. Default is no limit :param int page_size: Number of records to fetch per request, when not set will use the default value of 50 records. If no page_size is defined but a limit is defined, stream() will attempt to read the limit with the most efficient page size, i.e. min(limit, 1000) :returns: Generator that will yield up to limit results :rtype: list[twilio.rest.taskrouter.v1.workspace.task_queue.task_queues_statistics.TaskQueuesStatisticsInstance]
Below is the instruction that describes the task: ### Input: Streams TaskQueuesStatisticsInstance records from the API as a generator stream. This operation lazily loads records as efficiently as possible until the limit is reached. The results are returned as a generator, so this operation is memory efficient. :param datetime end_date: Filter cumulative statistics by an end date. :param unicode friendly_name: Filter the TaskQueue stats based on a TaskQueue's name :param unicode minutes: Filter cumulative statistics by up to 'x' minutes in the past. :param datetime start_date: Filter cumulative statistics by a start date. :param unicode task_channel: Filter real-time and cumulative statistics by TaskChannel. :param unicode split_by_wait_time: A comma separated values for viewing splits of tasks canceled and accepted above the given threshold in seconds. :param int limit: Upper limit for the number of records to return. stream() guarantees to never return more than limit. Default is no limit :param int page_size: Number of records to fetch per request, when not set will use the default value of 50 records. If no page_size is defined but a limit is defined, stream() will attempt to read the limit with the most efficient page size, i.e. min(limit, 1000) :returns: Generator that will yield up to limit results :rtype: list[twilio.rest.taskrouter.v1.workspace.task_queue.task_queues_statistics.TaskQueuesStatisticsInstance] ### Response: def stream(self, end_date=values.unset, friendly_name=values.unset, minutes=values.unset, start_date=values.unset, task_channel=values.unset, split_by_wait_time=values.unset, limit=None, page_size=None): """ Streams TaskQueuesStatisticsInstance records from the API as a generator stream. This operation lazily loads records as efficiently as possible until the limit is reached. The results are returned as a generator, so this operation is memory efficient. :param datetime end_date: Filter cumulative statistics by an end date.
:param unicode friendly_name: Filter the TaskQueue stats based on a TaskQueue's name :param unicode minutes: Filter cumulative statistics by up to 'x' minutes in the past. :param datetime start_date: Filter cumulative statistics by a start date. :param unicode task_channel: Filter real-time and cumulative statistics by TaskChannel. :param unicode split_by_wait_time: A comma separated values for viewing splits of tasks canceled and accepted above the given threshold in seconds. :param int limit: Upper limit for the number of records to return. stream() guarantees to never return more than limit. Default is no limit :param int page_size: Number of records to fetch per request, when not set will use the default value of 50 records. If no page_size is defined but a limit is defined, stream() will attempt to read the limit with the most efficient page size, i.e. min(limit, 1000) :returns: Generator that will yield up to limit results :rtype: list[twilio.rest.taskrouter.v1.workspace.task_queue.task_queues_statistics.TaskQueuesStatisticsInstance] """ limits = self._version.read_limits(limit, page_size) page = self.page( end_date=end_date, friendly_name=friendly_name, minutes=minutes, start_date=start_date, task_channel=task_channel, split_by_wait_time=split_by_wait_time, page_size=limits['page_size'], ) return self._version.stream(page, limits['limit'], limits['page_limit'])
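The `stream()` record above delegates actual paging to `self._version`, but the documented behavior — lazily yield records, and when only a `limit` is given choose the most efficient page size, `min(limit, 1000)` — can be sketched against a generic page fetcher. All names here are illustrative; this is not the Twilio helper's internal implementation:

```python
def stream(fetch_page, limit=None, page_size=None, max_page_size=1000):
    """Lazily yield records from `fetch_page(offset, size)`, never
    exceeding `limit`; with no explicit page_size, read the limit
    in as few requests as possible (min(limit, max_page_size))."""
    if page_size is None:
        page_size = min(limit, max_page_size) if limit else max_page_size
    yielded = 0
    offset = 0
    while True:
        page = fetch_page(offset, page_size)
        if not page:          # empty page signals the end of the data
            return
        for record in page:
            if limit is not None and yielded >= limit:
                return
            yield record
            yielded += 1
        offset += len(page)
```

Because this is a generator, a consumer that stops early (e.g. via `break`) never triggers the remaining page fetches, which is the memory- and request-efficiency the docstring describes.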
def eqarea_magic(in_file='sites.txt', dir_path=".", input_dir_path="", spec_file="specimens.txt", samp_file="samples.txt", site_file="sites.txt", loc_file="locations.txt", plot_by="all", crd="g", ignore_tilt=False, save_plots=True, fmt="svg", contour=False, color_map="coolwarm", plot_ell="", n_plots=5, interactive=False): """ makes equal area projections from declination/inclination data Parameters ---------- in_file : str, default "sites.txt" dir_path : str output directory, default "." input_dir_path : str input file directory (if different from dir_path), default "" spec_file : str input specimen file name, default "specimens.txt" samp_file: str input sample file name, default "samples.txt" site_file : str input site file name, default "sites.txt" loc_file : str input location file name, default "locations.txt" plot_by : str [spc, sam, sit, loc, all] (specimen, sample, site, location, all), default "all" crd : ['s','g','t'], coordinate system for plotting whereby: s : specimen coordinates, aniso_tile_correction = -1 g : geographic coordinates, aniso_tile_correction = 0 (default) t : tilt corrected coordinates, aniso_tile_correction = 100 ignore_tilt : bool default False. 
If True, data are unoriented (allows plotting of measurement dec/inc) save_plots : bool plot and save non-interactively, default True fmt : str ["png", "svg", "pdf", "jpg"], default "svg" contour : bool plot as color contour colormap : str color map for contour plotting, default "coolwarm" see cartopy documentation for more options plot_ell : str [F,K,B,Be,Bv] plot Fisher, Kent, Bingham, Bootstrap ellipses or Boostrap eigenvectors default "" plots none n_plots : int maximum number of plots to make, default 5 if you want to make all possible plots, specify "all" interactive : bool, default False interactively plot and display for each specimen (this is best used on the command line or in the Python interpreter) Returns --------- type - Tuple : (True or False indicating if conversion was sucessful, file name(s) written) """ saved = [] # parse out input/out directories input_dir_path, dir_path = pmag.fix_directories(input_dir_path, dir_path) # initialize some variables verbose = pmagplotlib.verbose FIG = {} # plot dictionary FIG['eqarea'] = 1 # eqarea is figure 1 pmagplotlib.plot_init(FIG['eqarea'], 5, 5) # get coordinate system if crd == "s": coord = "-1" elif crd == "t": coord = "100" else: coord = "0" # get item to plot by if plot_by == 'all': plot_key = 'all' elif plot_by == 'sit': plot_key = 'site' elif plot_by == 'sam': plot_key = 'sample' elif plot_by == 'spc': plot_key = 'specimen' else: plot_by = 'all' plot_key = 'all' # get distribution to plot ellipses/eigenvectors if desired if save_plots: verbose = False # set keys dec_key = 'dir_dec' inc_key = 'dir_inc' tilt_key = 'dir_tilt_correction' # create contribution fnames = {"specimens": spec_file, "samples": samp_file, 'sites': site_file, 'locations': loc_file} if not os.path.exists(pmag.resolve_file_name(in_file, input_dir_path)): print('-E- Could not find {}'.format(in_file)) return False, [] contribution = cb.Contribution(input_dir_path, custom_filenames=fnames, single_file=in_file) table_name = 
list(contribution.tables.keys())[0] contribution.add_magic_table("contribution") # get contribution id if available for server plots if pmagplotlib.isServer: con_id = contribution.get_con_id() # try to propagate all names to measurement level try: contribution.propagate_location_to_samples() contribution.propagate_location_to_specimens() contribution.propagate_location_to_measurements() except KeyError as ex: pass # the object that contains the DataFrame + useful helper methods: data_container = contribution.tables[table_name] # the actual DataFrame: data = data_container.df plot_type = data_container.dtype if plot_key != "all" and plot_key not in data.columns: print("-E- You can't plot by {} with the data provided".format(plot_key)) return False, [] # add tilt key into DataFrame columns if it isn't there already if tilt_key not in data.columns: data.loc[:, tilt_key] = None if verbose: print(len(data), ' records read from ', in_file) # find desired dec,inc data: dir_type_key = '' # # get plotlist if not plotting all records # plotlist = [] if plot_key != "all": # return all where plot_key is not blank if plot_key not in data.columns: print('-E- Can\'t plot by "{}". 
That header is not in infile: {}'.format( plot_key, in_file)) return False, [] plots = data[data[plot_key].notnull()] plotlist = plots[plot_key].unique() # grab unique values else: plotlist.append('All') if n_plots != "all": if len(plotlist) > n_plots: plotlist = plotlist[:n_plots] fignum = 0 for plot in plotlist: fignum += 1 FIG['eqarea'] = fignum pmagplotlib.plot_init(FIG['eqarea'], 5, 5) if plot_ell: dist = plot_ell.upper() # if dist type is unrecognized, use Fisher if dist not in ['F', 'K', 'B', 'BE', 'BV']: dist = 'F' if dist == "BV": fignum += 1 FIG['bdirs'] = fignum pmagplotlib.plot_init(FIG['bdirs'], 5, 5) if verbose: print(plot) if plot == 'All': # plot everything at once plot_data = data else: # pull out only partial data plot_data = data[data[plot_key] == plot] # get location names for the data locs = [] if 'location' in plot_data.columns: locs = plot_data['location'].dropna().unique() DIblock = [] GCblock = [] # SLblock, SPblock = [], [] title = plot mode = 1 if dec_key not in plot_data.columns: print("-W- No dec/inc data") continue # get all records where dec & inc values exist plot_data = plot_data[plot_data[dec_key].notnull() & plot_data[inc_key].notnull()] if plot_data.empty: print("-W- No dec/inc data") continue # get metadata for naming the plot file locations = str(data_container.get_name('location', df_slice=plot_data)) site = str(data_container.get_name('site', df_slice=plot_data)) sample = str(data_container.get_name('sample', df_slice=plot_data)) specimen = str(data_container.get_name('specimen', df_slice=plot_data)) # make sure method_codes is in plot_data if 'method_codes' not in plot_data.columns: plot_data['method_codes'] = '' # get data blocks # would have to ignore tilt to use measurement level data DIblock = data_container.get_di_block(df_slice=plot_data, tilt_corr=coord, excl=['DE-BFP'], ignore_tilt=ignore_tilt) if title == 'All': if len(locs): title = " ,".join(locs) + " - {} {} plotted".format(str(len(DIblock)), plot_type) else: 
title = "{} {} plotted".format(str(len(DIblock)), plot_type) #SLblock = [[ind, row['method_codes']] for ind, row in plot_data.iterrows()] # get great circles great_circle_data = data_container.get_records_for_code('DE-BFP', incl=True, use_slice=True, sli=plot_data) if len(great_circle_data) > 0: gc_cond = great_circle_data[tilt_key] == coord GCblock = [[float(row[dec_key]), float(row[inc_key])] for ind, row in great_circle_data[gc_cond].iterrows()] #SPblock = [[ind, row['method_codes']] for ind, row in great_circle_data[gc_cond].iterrows()] if len(DIblock) > 0: if not contour: pmagplotlib.plot_eq(FIG['eqarea'], DIblock, title) else: pmagplotlib.plot_eq_cont( FIG['eqarea'], DIblock, color_map=color_map) else: pmagplotlib.plot_net(FIG['eqarea']) if len(GCblock) > 0: for rec in GCblock: pmagplotlib.plot_circ(FIG['eqarea'], rec, 90., 'g') if len(DIblock) == 0 and len(GCblock) == 0: if verbose: print("no records for plotting") fignum -= 1 if 'bdirs' in FIG: fignum -= 1 continue # sys.exit() if plot_ell: ppars = pmag.doprinc(DIblock) # get principal directions nDIs, rDIs, npars, rpars = [], [], [], [] for rec in DIblock: angle = pmag.angle([rec[0], rec[1]], [ ppars['dec'], ppars['inc']]) if angle > 90.: rDIs.append(rec) else: nDIs.append(rec) if dist == 'B': # do on whole dataset etitle = "Bingham confidence ellipse" bpars = pmag.dobingham(DIblock) for key in list(bpars.keys()): if key != 'n' and verbose: print(" ", key, '%7.1f' % (bpars[key])) if key == 'n' and verbose: print(" ", key, ' %i' % (bpars[key])) npars.append(bpars['dec']) npars.append(bpars['inc']) npars.append(bpars['Zeta']) npars.append(bpars['Zdec']) npars.append(bpars['Zinc']) npars.append(bpars['Eta']) npars.append(bpars['Edec']) npars.append(bpars['Einc']) if dist == 'F': etitle = "Fisher confidence cone" if len(nDIs) > 2: fpars = pmag.fisher_mean(nDIs) for key in list(fpars.keys()): if key != 'n' and verbose: print(" ", key, '%7.1f' % (fpars[key])) if key == 'n' and verbose: print(" ", key, ' %i' % 
(fpars[key])) mode += 1 npars.append(fpars['dec']) npars.append(fpars['inc']) npars.append(fpars['alpha95']) # Beta npars.append(fpars['dec']) isign = abs(fpars['inc']) / fpars['inc'] npars.append(fpars['inc']-isign*90.) # Beta inc npars.append(fpars['alpha95']) # gamma npars.append(fpars['dec']+90.) # Beta dec npars.append(0.) # Beta inc if len(rDIs) > 2: fpars = pmag.fisher_mean(rDIs) if verbose: print("mode ", mode) for key in list(fpars.keys()): if key != 'n' and verbose: print(" ", key, '%7.1f' % (fpars[key])) if key == 'n' and verbose: print(" ", key, ' %i' % (fpars[key])) mode += 1 rpars.append(fpars['dec']) rpars.append(fpars['inc']) rpars.append(fpars['alpha95']) # Beta rpars.append(fpars['dec']) isign = abs(fpars['inc']) / fpars['inc'] rpars.append(fpars['inc']-isign*90.) # Beta inc rpars.append(fpars['alpha95']) # gamma rpars.append(fpars['dec']+90.) # Beta dec rpars.append(0.) # Beta inc if dist == 'K': etitle = "Kent confidence ellipse" if len(nDIs) > 3: kpars = pmag.dokent(nDIs, len(nDIs)) if verbose: print("mode ", mode) for key in list(kpars.keys()): if key != 'n' and verbose: print(" ", key, '%7.1f' % (kpars[key])) if key == 'n' and verbose: print(" ", key, ' %i' % (kpars[key])) mode += 1 npars.append(kpars['dec']) npars.append(kpars['inc']) npars.append(kpars['Zeta']) npars.append(kpars['Zdec']) npars.append(kpars['Zinc']) npars.append(kpars['Eta']) npars.append(kpars['Edec']) npars.append(kpars['Einc']) if len(rDIs) > 3: kpars = pmag.dokent(rDIs, len(rDIs)) if verbose: print("mode ", mode) for key in list(kpars.keys()): if key != 'n' and verbose: print(" ", key, '%7.1f' % (kpars[key])) if key == 'n' and verbose: print(" ", key, ' %i' % (kpars[key])) mode += 1 rpars.append(kpars['dec']) rpars.append(kpars['inc']) rpars.append(kpars['Zeta']) rpars.append(kpars['Zdec']) rpars.append(kpars['Zinc']) rpars.append(kpars['Eta']) rpars.append(kpars['Edec']) rpars.append(kpars['Einc']) else: # assume bootstrap if dist == 'BE': if len(nDIs) > 5: BnDIs = 
pmag.di_boot(nDIs) Bkpars = pmag.dokent(BnDIs, 1.) if verbose: print("mode ", mode) for key in list(Bkpars.keys()): if key != 'n' and verbose: print(" ", key, '%7.1f' % (Bkpars[key])) if key == 'n' and verbose: print(" ", key, ' %i' % (Bkpars[key])) mode += 1 npars.append(Bkpars['dec']) npars.append(Bkpars['inc']) npars.append(Bkpars['Zeta']) npars.append(Bkpars['Zdec']) npars.append(Bkpars['Zinc']) npars.append(Bkpars['Eta']) npars.append(Bkpars['Edec']) npars.append(Bkpars['Einc']) if len(rDIs) > 5: BrDIs = pmag.di_boot(rDIs) Bkpars = pmag.dokent(BrDIs, 1.) if verbose: print("mode ", mode) for key in list(Bkpars.keys()): if key != 'n' and verbose: print(" ", key, '%7.1f' % (Bkpars[key])) if key == 'n' and verbose: print(" ", key, ' %i' % (Bkpars[key])) mode += 1 rpars.append(Bkpars['dec']) rpars.append(Bkpars['inc']) rpars.append(Bkpars['Zeta']) rpars.append(Bkpars['Zdec']) rpars.append(Bkpars['Zinc']) rpars.append(Bkpars['Eta']) rpars.append(Bkpars['Edec']) rpars.append(Bkpars['Einc']) etitle = "Bootstrapped confidence ellipse" elif dist == 'BV': sym = {'lower': ['o', 'c'], 'upper': [ 'o', 'g'], 'size': 3, 'edgecolor': 'face'} if len(nDIs) > 5: BnDIs = pmag.di_boot(nDIs) pmagplotlib.plot_eq_sym( FIG['bdirs'], BnDIs, 'Bootstrapped Eigenvectors', sym) if len(rDIs) > 5: BrDIs = pmag.di_boot(rDIs) if len(nDIs) > 5: # plot on existing plots pmagplotlib.plot_di_sym(FIG['bdirs'], BrDIs, sym) else: pmagplotlib.plot_eq( FIG['bdirs'], BrDIs, 'Bootstrapped Eigenvectors') if dist == 'B': if len(nDIs) > 3 or len(rDIs) > 3: pmagplotlib.plot_conf(FIG['eqarea'], etitle, [], npars, 0) elif len(nDIs) > 3 and dist != 'BV': pmagplotlib.plot_conf(FIG['eqarea'], etitle, [], npars, 0) if len(rDIs) > 3: pmagplotlib.plot_conf(FIG['eqarea'], etitle, [], rpars, 0) elif len(rDIs) > 3 and dist != 'BV': pmagplotlib.plot_conf(FIG['eqarea'], etitle, [], rpars, 0) for key in list(FIG.keys()): files = {} #if filename: # use provided filename # filename += '.' 
+ fmt if pmagplotlib.isServer: # use server plot naming convention if plot_key == 'all': filename = 'LO:_'+locations+'_SI:__SA:__SP:__CO:_'+crd+'_TY:_'+key+'_.'+fmt else: filename = 'LO:_'+locations+'_SI:_'+site+'_SA:_'+sample + \ '_SP:_'+str(specimen)+'_CO:_'+crd+'_TY:_'+key+'_.'+fmt elif plot_key == 'all': filename = 'all' if locs: loc_string = "_".join( [str(loc).replace(' ', '_') for loc in locs]) filename += "_" + loc_string filename += "_" + crd + "_" + key filename += ".{}".format(fmt) else: # use more readable naming convention filename = '' # fix this if plot_by is location , for example use_names = {'location': [locations], 'site': [locations, site], 'sample': [locations, site, sample], 'specimen': [locations, site, sample, specimen]} use = use_names[plot_key] use.extend([crd, key]) # [locations, site, sample, specimen, crd, key]: for item in use: if item: item = item.replace(' ', '_') filename += item + '_' if filename.endswith('_'): filename = filename[:-1] filename += ".{}".format(fmt) if not pmagplotlib.isServer: filename = os.path.join(dir_path, filename) files[key] = filename if pmagplotlib.isServer: titles = {'eqarea': 'Equal Area Plot'} FIG = pmagplotlib.add_borders(FIG, titles, con_id=con_id) saved_figs = pmagplotlib.save_plots(FIG, files) saved.extend(saved_figs) elif save_plots: saved_figs = pmagplotlib.save_plots(FIG, files, incl_directory=True) saved.extend(saved_figs) continue elif interactive: pmagplotlib.draw_figs(FIG) ans = input(" S[a]ve to save plot, [q]uit, Return to continue: ") if ans == "q": return True, [] if ans == "a": saved_figs = pmagplotlib.save_plots(FIG, files, incl_directory=True) saved.extend(saved) continue return True, saved
makes equal area projections from declination/inclination data

Parameters
----------
in_file : str, default "sites.txt"
dir_path : str
    output directory, default "."
input_dir_path : str
    input file directory (if different from dir_path), default ""
spec_file : str
    input specimen file name, default "specimens.txt"
samp_file: str
    input sample file name, default "samples.txt"
site_file : str
    input site file name, default "sites.txt"
loc_file : str
    input location file name, default "locations.txt"
plot_by : str
    [spc, sam, sit, loc, all] (specimen, sample, site, location, all), default "all"
crd : ['s','g','t'], coordinate system for plotting whereby:
    s : specimen coordinates, aniso_tile_correction = -1
    g : geographic coordinates, aniso_tile_correction = 0 (default)
    t : tilt corrected coordinates, aniso_tile_correction = 100
ignore_tilt : bool
    default False. If True, data are unoriented (allows plotting of measurement dec/inc)
save_plots : bool
    plot and save non-interactively, default True
fmt : str
    ["png", "svg", "pdf", "jpg"], default "svg"
contour : bool
    plot as color contour
colormap : str
    color map for contour plotting, default "coolwarm"
    see cartopy documentation for more options
plot_ell : str
    [F,K,B,Be,Bv] plot Fisher, Kent, Bingham, Bootstrap ellipses or Bootstrap eigenvectors
    default "" plots none
n_plots : int
    maximum number of plots to make, default 5
    if you want to make all possible plots, specify "all"
interactive : bool, default False
    interactively plot and display for each specimen
    (this is best used on the command line or in the Python interpreter)

Returns
---------
type - Tuple : (True or False indicating if conversion was successful, file name(s) written)

Below is the instruction that describes the task:
### Input:
makes equal area projections from declination/inclination data

Parameters
----------
in_file : str, default "sites.txt"
dir_path : str
    output directory, default "."
input_dir_path : str
    input file directory (if different from dir_path), default ""
spec_file : str
    input specimen file name, default "specimens.txt"
samp_file: str
    input sample file name, default "samples.txt"
site_file : str
    input site file name, default "sites.txt"
loc_file : str
    input location file name, default "locations.txt"
plot_by : str
    [spc, sam, sit, loc, all] (specimen, sample, site, location, all), default "all"
crd : ['s','g','t'], coordinate system for plotting whereby:
    s : specimen coordinates, aniso_tile_correction = -1
    g : geographic coordinates, aniso_tile_correction = 0 (default)
    t : tilt corrected coordinates, aniso_tile_correction = 100
ignore_tilt : bool
    default False. If True, data are unoriented (allows plotting of measurement dec/inc)
save_plots : bool
    plot and save non-interactively, default True
fmt : str
    ["png", "svg", "pdf", "jpg"], default "svg"
contour : bool
    plot as color contour
colormap : str
    color map for contour plotting, default "coolwarm"
    see cartopy documentation for more options
plot_ell : str
    [F,K,B,Be,Bv] plot Fisher, Kent, Bingham, Bootstrap ellipses or Bootstrap eigenvectors
    default "" plots none
n_plots : int
    maximum number of plots to make, default 5
    if you want to make all possible plots, specify "all"
interactive : bool, default False
    interactively plot and display for each specimen
    (this is best used on the command line or in the Python interpreter)

Returns
---------
type - Tuple : (True or False indicating if conversion was successful, file name(s) written)

### Response:
def eqarea_magic(in_file='sites.txt', dir_path=".", input_dir_path="",
                 spec_file="specimens.txt", samp_file="samples.txt",
                 site_file="sites.txt", loc_file="locations.txt",
                 plot_by="all", crd="g", ignore_tilt=False,
                 save_plots=True, fmt="svg", contour=False,
                 color_map="coolwarm", plot_ell="", n_plots=5,
                 interactive=False):
    """
    makes equal area projections from declination/inclination data

    Parameters
    ----------
    in_file : str, default "sites.txt"
    dir_path : str
        output directory, default "."
    input_dir_path : str
        input file directory (if different from dir_path), default ""
    spec_file : str
        input specimen file name, default "specimens.txt"
    samp_file: str
        input sample file name, default "samples.txt"
    site_file : str
        input site file name, default "sites.txt"
    loc_file : str
        input location file name, default "locations.txt"
    plot_by : str
        [spc, sam, sit, loc, all] (specimen, sample, site, location, all), default "all"
    crd : ['s','g','t'], coordinate system for plotting whereby:
        s : specimen coordinates, aniso_tile_correction = -1
        g : geographic coordinates, aniso_tile_correction = 0 (default)
        t : tilt corrected coordinates, aniso_tile_correction = 100
    ignore_tilt : bool
        default False. If True, data are unoriented (allows plotting of measurement dec/inc)
    save_plots : bool
        plot and save non-interactively, default True
    fmt : str
        ["png", "svg", "pdf", "jpg"], default "svg"
    contour : bool
        plot as color contour
    colormap : str
        color map for contour plotting, default "coolwarm"
        see cartopy documentation for more options
    plot_ell : str
        [F,K,B,Be,Bv] plot Fisher, Kent, Bingham, Bootstrap ellipses or Bootstrap eigenvectors
        default "" plots none
    n_plots : int
        maximum number of plots to make, default 5
        if you want to make all possible plots, specify "all"
    interactive : bool, default False
        interactively plot and display for each specimen
        (this is best used on the command line or in the Python interpreter)

    Returns
    ---------
    type - Tuple : (True or False indicating if conversion was successful, file name(s) written)
    """
    saved = []
    # parse out input/out directories
    input_dir_path, dir_path = pmag.fix_directories(input_dir_path, dir_path)
    # initialize some variables
    verbose = pmagplotlib.verbose
    FIG = {}  # plot dictionary
    FIG['eqarea'] = 1  # eqarea is figure 1
    pmagplotlib.plot_init(FIG['eqarea'], 5, 5)
    # get coordinate system
    if crd == "s":
        coord = "-1"
    elif crd == "t":
        coord = "100"
    else:
        coord = "0"
    # get item to plot by
    if plot_by == 'all':
        plot_key = 'all'
    elif plot_by == 'sit':
        plot_key = 'site'
    elif plot_by == 'sam':
        plot_key = 'sample'
    elif plot_by == 'spc':
        plot_key = 'specimen'
    else:
        plot_by = 'all'
        plot_key = 'all'
    # get distribution to plot ellipses/eigenvectors if desired
    if save_plots:
        verbose = False
    # set keys
    dec_key = 'dir_dec'
    inc_key = 'dir_inc'
    tilt_key = 'dir_tilt_correction'
    # create contribution
    fnames = {"specimens": spec_file, "samples": samp_file,
              'sites': site_file, 'locations': loc_file}
    if not os.path.exists(pmag.resolve_file_name(in_file, input_dir_path)):
        print('-E- Could not find {}'.format(in_file))
        return False, []
    contribution = cb.Contribution(input_dir_path, custom_filenames=fnames,
                                   single_file=in_file)
    table_name = list(contribution.tables.keys())[0]
    contribution.add_magic_table("contribution")
    # get contribution id if available for server plots
    if pmagplotlib.isServer:
        con_id = contribution.get_con_id()
    # try to propagate all names to measurement level
    try:
        contribution.propagate_location_to_samples()
        contribution.propagate_location_to_specimens()
        contribution.propagate_location_to_measurements()
    except KeyError as ex:
        pass
    # the object that contains the DataFrame + useful helper methods:
    data_container = contribution.tables[table_name]
    # the actual DataFrame:
    data = data_container.df
    plot_type = data_container.dtype
    if plot_key != "all" and plot_key not in data.columns:
        print("-E- You can't plot by {} with the data provided".format(plot_key))
        return False, []
    # add tilt key into DataFrame columns if it isn't there already
    if tilt_key not in data.columns:
        data.loc[:, tilt_key] = None
    if verbose:
        print(len(data), ' records read from ', in_file)
    # find desired dec,inc data:
    dir_type_key = ''
    #
    # get plotlist if not plotting all records
    #
    plotlist = []
    if plot_key != "all":
        # return all where plot_key is not blank
        if plot_key not in data.columns:
            print('-E- Can\'t plot by "{}". That header is not in infile: {}'.format(
                plot_key, in_file))
            return False, []
        plots = data[data[plot_key].notnull()]
        plotlist = plots[plot_key].unique()  # grab unique values
    else:
        plotlist.append('All')
    if n_plots != "all":
        if len(plotlist) > n_plots:
            plotlist = plotlist[:n_plots]
    fignum = 0
    for plot in plotlist:
        fignum += 1
        FIG['eqarea'] = fignum
        pmagplotlib.plot_init(FIG['eqarea'], 5, 5)
        if plot_ell:
            dist = plot_ell.upper()
            # if dist type is unrecognized, use Fisher
            if dist not in ['F', 'K', 'B', 'BE', 'BV']:
                dist = 'F'
            if dist == "BV":
                fignum += 1
                FIG['bdirs'] = fignum
                pmagplotlib.plot_init(FIG['bdirs'], 5, 5)
        if verbose:
            print(plot)
        if plot == 'All':
            # plot everything at once
            plot_data = data
        else:
            # pull out only partial data
            plot_data = data[data[plot_key] == plot]
        # get location names for the data
        locs = []
        if 'location' in plot_data.columns:
            locs = plot_data['location'].dropna().unique()
        DIblock = []
        GCblock = []
        # SLblock, SPblock = [], []
        title = plot
        mode = 1
        if dec_key not in plot_data.columns:
            print("-W- No dec/inc data")
            continue
        # get all records where dec & inc values exist
        plot_data = plot_data[plot_data[dec_key].notnull()
                              & plot_data[inc_key].notnull()]
        if plot_data.empty:
            print("-W- No dec/inc data")
            continue
        # get metadata for naming the plot file
        locations = str(data_container.get_name('location', df_slice=plot_data))
        site = str(data_container.get_name('site', df_slice=plot_data))
        sample = str(data_container.get_name('sample', df_slice=plot_data))
        specimen = str(data_container.get_name('specimen', df_slice=plot_data))
        # make sure method_codes is in plot_data
        if 'method_codes' not in plot_data.columns:
            plot_data['method_codes'] = ''
        # get data blocks
        # would have to ignore tilt to use measurement level data
        DIblock = data_container.get_di_block(df_slice=plot_data,
                                              tilt_corr=coord,
                                              excl=['DE-BFP'],
                                              ignore_tilt=ignore_tilt)
        if title == 'All':
            if len(locs):
                title = " ,".join(locs) + " - {} {} plotted".format(
                    str(len(DIblock)), plot_type)
            else:
                title = "{} {} plotted".format(str(len(DIblock)), plot_type)
        #SLblock = [[ind, row['method_codes']] for ind, row in plot_data.iterrows()]
        # get great circles
        great_circle_data = data_container.get_records_for_code(
            'DE-BFP', incl=True, use_slice=True, sli=plot_data)
        if len(great_circle_data) > 0:
            gc_cond = great_circle_data[tilt_key] == coord
            GCblock = [[float(row[dec_key]), float(row[inc_key])]
                       for ind, row in great_circle_data[gc_cond].iterrows()]
            #SPblock = [[ind, row['method_codes']] for ind, row in great_circle_data[gc_cond].iterrows()]
        if len(DIblock) > 0:
            if not contour:
                pmagplotlib.plot_eq(FIG['eqarea'], DIblock, title)
            else:
                pmagplotlib.plot_eq_cont(FIG['eqarea'], DIblock,
                                         color_map=color_map)
        else:
            pmagplotlib.plot_net(FIG['eqarea'])
        if len(GCblock) > 0:
            for rec in GCblock:
                pmagplotlib.plot_circ(FIG['eqarea'], rec, 90., 'g')
        if len(DIblock) == 0 and len(GCblock) == 0:
            if verbose:
                print("no records for plotting")
            fignum -= 1
            if 'bdirs' in FIG:
                fignum -= 1
            continue
            # sys.exit()
        if plot_ell:
            ppars = pmag.doprinc(DIblock)  # get principal directions
            nDIs, rDIs, npars, rpars = [], [], [], []
            for rec in DIblock:
                angle = pmag.angle([rec[0], rec[1]],
                                   [ppars['dec'], ppars['inc']])
                if angle > 90.:
                    rDIs.append(rec)
                else:
                    nDIs.append(rec)
            if dist == 'B':  # do on whole dataset
                etitle = "Bingham confidence ellipse"
                bpars = pmag.dobingham(DIblock)
                for key in list(bpars.keys()):
                    if key != 'n' and verbose:
                        print("    ", key, '%7.1f' % (bpars[key]))
                    if key == 'n' and verbose:
                        print("    ", key, '       %i' % (bpars[key]))
                npars.append(bpars['dec'])
                npars.append(bpars['inc'])
                npars.append(bpars['Zeta'])
                npars.append(bpars['Zdec'])
                npars.append(bpars['Zinc'])
                npars.append(bpars['Eta'])
                npars.append(bpars['Edec'])
                npars.append(bpars['Einc'])
            if dist == 'F':
                etitle = "Fisher confidence cone"
                if len(nDIs) > 2:
                    fpars = pmag.fisher_mean(nDIs)
                    for key in list(fpars.keys()):
                        if key != 'n' and verbose:
                            print("    ", key, '%7.1f' % (fpars[key]))
                        if key == 'n' and verbose:
                            print("    ", key, '       %i' % (fpars[key]))
                    mode += 1
                    npars.append(fpars['dec'])
                    npars.append(fpars['inc'])
                    npars.append(fpars['alpha95'])  # Beta
                    npars.append(fpars['dec'])
                    isign = abs(fpars['inc']) / fpars['inc']
                    npars.append(fpars['inc'] - isign * 90.)  # Beta inc
                    npars.append(fpars['alpha95'])  # gamma
                    npars.append(fpars['dec'] + 90.)  # Beta dec
                    npars.append(0.)  # Beta inc
                if len(rDIs) > 2:
                    fpars = pmag.fisher_mean(rDIs)
                    if verbose:
                        print("mode ", mode)
                    for key in list(fpars.keys()):
                        if key != 'n' and verbose:
                            print("    ", key, '%7.1f' % (fpars[key]))
                        if key == 'n' and verbose:
                            print("    ", key, '       %i' % (fpars[key]))
                    mode += 1
                    rpars.append(fpars['dec'])
                    rpars.append(fpars['inc'])
                    rpars.append(fpars['alpha95'])  # Beta
                    rpars.append(fpars['dec'])
                    isign = abs(fpars['inc']) / fpars['inc']
                    rpars.append(fpars['inc'] - isign * 90.)  # Beta inc
                    rpars.append(fpars['alpha95'])  # gamma
                    rpars.append(fpars['dec'] + 90.)  # Beta dec
                    rpars.append(0.)  # Beta inc
            if dist == 'K':
                etitle = "Kent confidence ellipse"
                if len(nDIs) > 3:
                    kpars = pmag.dokent(nDIs, len(nDIs))
                    if verbose:
                        print("mode ", mode)
                    for key in list(kpars.keys()):
                        if key != 'n' and verbose:
                            print("    ", key, '%7.1f' % (kpars[key]))
                        if key == 'n' and verbose:
                            print("    ", key, '       %i' % (kpars[key]))
                    mode += 1
                    npars.append(kpars['dec'])
                    npars.append(kpars['inc'])
                    npars.append(kpars['Zeta'])
                    npars.append(kpars['Zdec'])
                    npars.append(kpars['Zinc'])
                    npars.append(kpars['Eta'])
                    npars.append(kpars['Edec'])
                    npars.append(kpars['Einc'])
                if len(rDIs) > 3:
                    kpars = pmag.dokent(rDIs, len(rDIs))
                    if verbose:
                        print("mode ", mode)
                    for key in list(kpars.keys()):
                        if key != 'n' and verbose:
                            print("    ", key, '%7.1f' % (kpars[key]))
                        if key == 'n' and verbose:
                            print("    ", key, '       %i' % (kpars[key]))
                    mode += 1
                    rpars.append(kpars['dec'])
                    rpars.append(kpars['inc'])
                    rpars.append(kpars['Zeta'])
                    rpars.append(kpars['Zdec'])
                    rpars.append(kpars['Zinc'])
                    rpars.append(kpars['Eta'])
                    rpars.append(kpars['Edec'])
                    rpars.append(kpars['Einc'])
            else:  # assume bootstrap
                if dist == 'BE':
                    if len(nDIs) > 5:
                        BnDIs = pmag.di_boot(nDIs)
                        Bkpars = pmag.dokent(BnDIs, 1.)
                        if verbose:
                            print("mode ", mode)
                        for key in list(Bkpars.keys()):
                            if key != 'n' and verbose:
                                print("    ", key, '%7.1f' % (Bkpars[key]))
                            if key == 'n' and verbose:
                                print("    ", key, '       %i' % (Bkpars[key]))
                        mode += 1
                        npars.append(Bkpars['dec'])
                        npars.append(Bkpars['inc'])
                        npars.append(Bkpars['Zeta'])
                        npars.append(Bkpars['Zdec'])
                        npars.append(Bkpars['Zinc'])
                        npars.append(Bkpars['Eta'])
                        npars.append(Bkpars['Edec'])
                        npars.append(Bkpars['Einc'])
                    if len(rDIs) > 5:
                        BrDIs = pmag.di_boot(rDIs)
                        Bkpars = pmag.dokent(BrDIs, 1.)
                        if verbose:
                            print("mode ", mode)
                        for key in list(Bkpars.keys()):
                            if key != 'n' and verbose:
                                print("    ", key, '%7.1f' % (Bkpars[key]))
                            if key == 'n' and verbose:
                                print("    ", key, '       %i' % (Bkpars[key]))
                        mode += 1
                        rpars.append(Bkpars['dec'])
                        rpars.append(Bkpars['inc'])
                        rpars.append(Bkpars['Zeta'])
                        rpars.append(Bkpars['Zdec'])
                        rpars.append(Bkpars['Zinc'])
                        rpars.append(Bkpars['Eta'])
                        rpars.append(Bkpars['Edec'])
                        rpars.append(Bkpars['Einc'])
                    etitle = "Bootstrapped confidence ellipse"
                elif dist == 'BV':
                    sym = {'lower': ['o', 'c'], 'upper': ['o', 'g'],
                           'size': 3, 'edgecolor': 'face'}
                    if len(nDIs) > 5:
                        BnDIs = pmag.di_boot(nDIs)
                        pmagplotlib.plot_eq_sym(
                            FIG['bdirs'], BnDIs, 'Bootstrapped Eigenvectors', sym)
                    if len(rDIs) > 5:
                        BrDIs = pmag.di_boot(rDIs)
                        if len(nDIs) > 5:  # plot on existing plots
                            pmagplotlib.plot_di_sym(FIG['bdirs'], BrDIs, sym)
                        else:
                            pmagplotlib.plot_eq(
                                FIG['bdirs'], BrDIs, 'Bootstrapped Eigenvectors')
            if dist == 'B':
                if len(nDIs) > 3 or len(rDIs) > 3:
                    pmagplotlib.plot_conf(FIG['eqarea'], etitle, [], npars, 0)
            elif len(nDIs) > 3 and dist != 'BV':
                pmagplotlib.plot_conf(FIG['eqarea'], etitle, [], npars, 0)
                if len(rDIs) > 3:
                    pmagplotlib.plot_conf(FIG['eqarea'], etitle, [], rpars, 0)
            elif len(rDIs) > 3 and dist != 'BV':
                pmagplotlib.plot_conf(FIG['eqarea'], etitle, [], rpars, 0)
        for key in list(FIG.keys()):
            files = {}
            #if filename:  # use provided filename
            #    filename += '.' + fmt
            if pmagplotlib.isServer:  # use server plot naming convention
                if plot_key == 'all':
                    filename = 'LO:_' + locations + '_SI:__SA:__SP:__CO:_' + \
                        crd + '_TY:_' + key + '_.' + fmt
                else:
                    filename = 'LO:_' + locations + '_SI:_' + site + '_SA:_' + sample + \
                        '_SP:_' + str(specimen) + '_CO:_' + crd + '_TY:_' + key + '_.' + fmt
            elif plot_key == 'all':
                filename = 'all'
                if locs:
                    loc_string = "_".join(
                        [str(loc).replace(' ', '_') for loc in locs])
                    filename += "_" + loc_string
                filename += "_" + crd + "_" + key
                filename += ".{}".format(fmt)
            else:
                # use more readable naming convention
                filename = ''
                # fix this if plot_by is location, for example
                use_names = {'location': [locations],
                             'site': [locations, site],
                             'sample': [locations, site, sample],
                             'specimen': [locations, site, sample, specimen]}
                use = use_names[plot_key]
                use.extend([crd, key])
                # [locations, site, sample, specimen, crd, key]:
                for item in use:
                    if item:
                        item = item.replace(' ', '_')
                        filename += item + '_'
                if filename.endswith('_'):
                    filename = filename[:-1]
                filename += ".{}".format(fmt)
            if not pmagplotlib.isServer:
                filename = os.path.join(dir_path, filename)
            files[key] = filename
        if pmagplotlib.isServer:
            titles = {'eqarea': 'Equal Area Plot'}
            FIG = pmagplotlib.add_borders(FIG, titles, con_id=con_id)
            saved_figs = pmagplotlib.save_plots(FIG, files)
            saved.extend(saved_figs)
        elif save_plots:
            saved_figs = pmagplotlib.save_plots(FIG, files, incl_directory=True)
            saved.extend(saved_figs)
            continue
        elif interactive:
            pmagplotlib.draw_figs(FIG)
            ans = input(" S[a]ve to save plot, [q]uit, Return to continue: ")
            if ans == "q":
                return True, []
            if ans == "a":
                saved_figs = pmagplotlib.save_plots(FIG, files, incl_directory=True)
                saved.extend(saved_figs)
                continue
    return True, saved
def _folded_slices(self):
    """Internal generator that is able to retrieve ranges organized by step.

    Complexity: O(n) with n = number of ranges in tree.
    """
    if len(self) == 0:
        return
    prng = None     # pending range
    istart = None   # processing starting indice
    m = 0           # processing step
    for sli in self._contiguous_slices():
        start = sli.start
        stop = sli.stop
        unitary = (start + 1 == stop)   # one indice?
        if istart is None:  # first loop
            if unitary:
                istart = start
            else:
                prng = [start, stop, 1]
                istart = stop - 1
            i = k = istart
        elif m == 0:        # istart is set but step is unknown
            if not unitary:
                if prng is not None:
                    # yield and replace pending range
                    yield slice(*prng)
                else:
                    yield slice(istart, istart + 1, 1)
                prng = [start, stop, 1]
                istart = k = stop - 1
                continue
            i = start
        else:               # step m > 0
            assert m > 0
            i = start
            # does current range lead to broken step?
            if m != i - k or not unitary:
                #j = i if m == i - k else k
                if m == i - k:
                    j = i
                else:
                    j = k
                # stepped is True when autostep setting does apply
                stepped = (j - istart >= self._autostep * m)
                if prng:  # yield pending range?
                    if stepped:
                        prng[1] -= 1
                    else:
                        istart += m
                    yield slice(*prng)
                    prng = None
            if m != i - k:
                # case: step value has changed
                if stepped:
                    yield slice(istart, k + 1, m)
                else:
                    for j in range(istart, k - m + 1, m):
                        yield slice(j, j + 1, 1)
                    if not unitary:
                        yield slice(k, k + 1, 1)
                if unitary:
                    if stepped:
                        istart = i = k = start
                    else:
                        istart = k
                else:
                    prng = [start, stop, 1]
                    istart = i = k = stop - 1
            elif not unitary:
                # case: broken step by contiguous range
                if stepped:
                    # yield 'range/m' by taking first indice of new range
                    yield slice(istart, i + 1, m)
                    i += 1
                else:
                    # autostep setting does not apply in that case
                    for j in range(istart, i - m + 1, m):
                        yield slice(j, j + 1, 1)
                if stop > i + 1:
                    # current->pending only if not unitary
                    prng = [i, stop, 1]
                    istart = i = k = stop - 1
        m = i - k  # compute step
        k = i
    # exited loop, process pending range or indice...
    if m == 0:
        if prng:
            yield slice(*prng)
        else:
            yield slice(istart, istart + 1, 1)
    else:
        assert m > 0
        stepped = (k - istart >= self._autostep * m)
        if prng:
            if stepped:
                prng[1] -= 1
            else:
                istart += m
            yield slice(*prng)
            prng = None
        if stepped:
            yield slice(istart, i + 1, m)
        else:
            for j in range(istart, i + 1, m):
                yield slice(j, j + 1, 1)

Internal generator that is able to retrieve ranges organized by step.
Complexity: O(n) with n = number of ranges in tree.

Below is the instruction that describes the task:
### Input:
Internal generator that is able to retrieve ranges organized by step.
Complexity: O(n) with n = number of ranges in tree.

### Response:
def _folded_slices(self):
    """Internal generator that is able to retrieve ranges organized by step.

    Complexity: O(n) with n = number of ranges in tree.
    """
    if len(self) == 0:
        return
    prng = None     # pending range
    istart = None   # processing starting indice
    m = 0           # processing step
    for sli in self._contiguous_slices():
        start = sli.start
        stop = sli.stop
        unitary = (start + 1 == stop)   # one indice?
        if istart is None:  # first loop
            if unitary:
                istart = start
            else:
                prng = [start, stop, 1]
                istart = stop - 1
            i = k = istart
        elif m == 0:        # istart is set but step is unknown
            if not unitary:
                if prng is not None:
                    # yield and replace pending range
                    yield slice(*prng)
                else:
                    yield slice(istart, istart + 1, 1)
                prng = [start, stop, 1]
                istart = k = stop - 1
                continue
            i = start
        else:               # step m > 0
            assert m > 0
            i = start
            # does current range lead to broken step?
            if m != i - k or not unitary:
                #j = i if m == i - k else k
                if m == i - k:
                    j = i
                else:
                    j = k
                # stepped is True when autostep setting does apply
                stepped = (j - istart >= self._autostep * m)
                if prng:  # yield pending range?
                    if stepped:
                        prng[1] -= 1
                    else:
                        istart += m
                    yield slice(*prng)
                    prng = None
            if m != i - k:
                # case: step value has changed
                if stepped:
                    yield slice(istart, k + 1, m)
                else:
                    for j in range(istart, k - m + 1, m):
                        yield slice(j, j + 1, 1)
                    if not unitary:
                        yield slice(k, k + 1, 1)
                if unitary:
                    if stepped:
                        istart = i = k = start
                    else:
                        istart = k
                else:
                    prng = [start, stop, 1]
                    istart = i = k = stop - 1
            elif not unitary:
                # case: broken step by contiguous range
                if stepped:
                    # yield 'range/m' by taking first indice of new range
                    yield slice(istart, i + 1, m)
                    i += 1
                else:
                    # autostep setting does not apply in that case
                    for j in range(istart, i - m + 1, m):
                        yield slice(j, j + 1, 1)
                if stop > i + 1:
                    # current->pending only if not unitary
                    prng = [i, stop, 1]
                    istart = i = k = stop - 1
        m = i - k  # compute step
        k = i
    # exited loop, process pending range or indice...
    if m == 0:
        if prng:
            yield slice(*prng)
        else:
            yield slice(istart, istart + 1, 1)
    else:
        assert m > 0
        stepped = (k - istart >= self._autostep * m)
        if prng:
            if stepped:
                prng[1] -= 1
            else:
                istart += m
            yield slice(*prng)
            prng = None
        if stepped:
            yield slice(istart, i + 1, m)
        else:
            for j in range(istart, i + 1, m):
                yield slice(j, j + 1, 1)
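The generator above folds contiguous index ranges into stepped `slice` objects. As a much-simplified standalone sketch of that idea (my own illustration, not part of ClusterShell: `fold_indices` ignores the `_autostep` threshold and the pending-range bookkeeping, and simply collects greedy runs of constant step):

```python
def fold_indices(indices):
    """Greedily fold a sorted list of unique ints into slice objects."""
    out = []
    i, n = 0, len(indices)
    while i < n:
        j = i + 1
        if j == n:  # lone trailing index
            out.append(slice(indices[i], indices[i] + 1, 1))
            break
        step = indices[j] - indices[i]
        # extend the run while the difference stays constant
        while j + 1 < n and indices[j + 1] - indices[j] == step:
            j += 1
        # a stepped slice needs >= 3 points; a contiguous pair is fine too
        if j - i >= 2 or (step == 1 and j - i >= 1):
            out.append(slice(indices[i], indices[j] + 1, step))
            i = j + 1
        else:
            out.append(slice(indices[i], indices[i] + 1, 1))
            i += 1
    return out

print(fold_indices([1, 2, 3, 4, 8, 10, 12]))  # [slice(1, 5, 1), slice(8, 13, 2)]
```

The real `_folded_slices` does the same folding in a single pass over `_contiguous_slices()`, which is why it stays O(n) in the number of ranges.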
def parse_multi_instantiate(self, node):
    """
    Parses <MultiInstantiate>

    @param node: Node containing the <MultiInstantiate> element
    @type node: xml.etree.Element
    """
    if 'component' in node.lattrib:
        component = node.lattrib['component']
    else:
        self.raise_error('<MultiInstantiate> must specify a component reference.')
    if 'number' in node.lattrib:
        number = node.lattrib['number']
    else:
        self.raise_error("Multi instantiation of '{0}' must specify a parameter specifying the number.",
                         component)
    self.current_structure.add_multi_instantiate(MultiInstantiate(component, number))

Parses <MultiInstantiate>

@param node: Node containing the <MultiInstantiate> element
@type node: xml.etree.Element

Below is the instruction that describes the task:
### Input:
Parses <MultiInstantiate>

@param node: Node containing the <MultiInstantiate> element
@type node: xml.etree.Element

### Response:
def parse_multi_instantiate(self, node):
    """
    Parses <MultiInstantiate>

    @param node: Node containing the <MultiInstantiate> element
    @type node: xml.etree.Element
    """
    if 'component' in node.lattrib:
        component = node.lattrib['component']
    else:
        self.raise_error('<MultiInstantiate> must specify a component reference.')
    if 'number' in node.lattrib:
        number = node.lattrib['number']
    else:
        self.raise_error("Multi instantiation of '{0}' must specify a parameter specifying the number.",
                         component)
    self.current_structure.add_multi_instantiate(MultiInstantiate(component, number))
def _diff_dict(orig, new):
    """Diff a nested dictionary, returning only key/values that differ.
    """
    final = {}
    for k, v in new.items():
        if isinstance(v, dict):
            v = _diff_dict(orig.get(k, {}), v)
            if len(v) > 0:
                final[k] = v
        elif v != orig.get(k):
            final[k] = v
    for k, v in orig.items():
        if k not in new:
            final[k] = None
    return final

Diff a nested dictionary, returning only key/values that differ.

Below is the instruction that describes the task:
### Input:
Diff a nested dictionary, returning only key/values that differ.

### Response:
def _diff_dict(orig, new):
    """Diff a nested dictionary, returning only key/values that differ.
    """
    final = {}
    for k, v in new.items():
        if isinstance(v, dict):
            v = _diff_dict(orig.get(k, {}), v)
            if len(v) > 0:
                final[k] = v
        elif v != orig.get(k):
            final[k] = v
    for k, v in orig.items():
        if k not in new:
            final[k] = None
    return final
def prepare_outdir(self):
    """create temp directory."""
    self._outdir = self.outdir
    if self._outdir is None:
        self._tmpdir = TemporaryDirectory()
        self.outdir = self._tmpdir.name
    elif isinstance(self.outdir, str):
        mkdirs(self.outdir)
    else:
        raise Exception("Error parsing outdir: %s" % type(self.outdir))
    # handle gmt type
    if isinstance(self.gene_sets, str):
        _gset = os.path.split(self.gene_sets)[-1].lower().rstrip(".gmt")
    elif isinstance(self.gene_sets, dict):
        _gset = "blank_name"
    else:
        raise Exception("Error parsing gene_sets parameter for gene sets")
    logfile = os.path.join(self.outdir, "gseapy.%s.%s.log" % (self.module, _gset))
    return logfile

create temp directory.

Below is the instruction that describes the task:
### Input:
create temp directory.

### Response:
def prepare_outdir(self):
    """create temp directory."""
    self._outdir = self.outdir
    if self._outdir is None:
        self._tmpdir = TemporaryDirectory()
        self.outdir = self._tmpdir.name
    elif isinstance(self.outdir, str):
        mkdirs(self.outdir)
    else:
        raise Exception("Error parsing outdir: %s" % type(self.outdir))
    # handle gmt type
    if isinstance(self.gene_sets, str):
        _gset = os.path.split(self.gene_sets)[-1].lower().rstrip(".gmt")
    elif isinstance(self.gene_sets, dict):
        _gset = "blank_name"
    else:
        raise Exception("Error parsing gene_sets parameter for gene sets")
    logfile = os.path.join(self.outdir, "gseapy.%s.%s.log" % (self.module, _gset))
    return logfile
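The fallback pattern above (use a `TemporaryDirectory` when no output directory is given, otherwise create the requested one) can be sketched standalone. `resolve_outdir` is an illustrative name, not gseapy's API; note the method stores the `TemporaryDirectory` handle on `self._tmpdir` because the directory is deleted as soon as the handle is garbage-collected.

```python
import os
import tempfile

def resolve_outdir(outdir=None):
    """Return (path, handle): a usable output directory plus the
    TemporaryDirectory handle to keep alive (None when outdir is given)."""
    if outdir is None:
        tmp = tempfile.TemporaryDirectory()
        return tmp.name, tmp  # keep `tmp` referenced or the dir is removed
    os.makedirs(outdir, exist_ok=True)
    return outdir, None

# One caveat from the original: str.rstrip(".gmt") strips a trailing run of
# the *characters* '.', 'g', 'm', 't', not the literal ".gmt" suffix.
assert "mygmt.gmt".rstrip(".gmt") == "my"           # more than ".gmt" is gone
assert os.path.splitext("mygmt.gmt")[0] == "mygmt"  # suffix-safe alternative
```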
def _invokeWrite(self, fileIO, session, directory, filename, replaceParamFile):
    """
    Invoke File Write Method on Other Files
    """
    # Default value for instance
    instance = None
    try:
        # Handle case where fileIO interfaces with single file
        # Retrieve File using FileIO
        instance = session.query(fileIO). \
            filter(fileIO.projectFile == self). \
            one()
    except:
        # Handle case where fileIO interfaces with multiple files
        # Retrieve File using FileIO and file extension
        extension = filename.split('.')[1]
        try:
            instance = session.query(fileIO). \
                filter(fileIO.projectFile == self). \
                filter(fileIO.fileExtension == extension). \
                one()
        except NoResultFound:
            # Handle case when there is no file in database but the
            # card is listed in the project file
            log.warning('{0} listed as card in project file, but '
                        'the file is not found in the database.'.format(filename))
        except MultipleResultsFound:
            self._invokeWriteForMultipleOfType(directory, extension, fileIO, filename,
                                               session, replaceParamFile=replaceParamFile)
            return
    # Initiate Write Method on File
    if instance is not None:
        instance.write(session=session, directory=directory,
                       name=filename, replaceParamFile=replaceParamFile)

Invoke File Write Method on Other Files

Below is the instruction that describes the task:
### Input:
Invoke File Write Method on Other Files

### Response:
def _invokeWrite(self, fileIO, session, directory, filename, replaceParamFile):
    """
    Invoke File Write Method on Other Files
    """
    # Default value for instance
    instance = None
    try:
        # Handle case where fileIO interfaces with single file
        # Retrieve File using FileIO
        instance = session.query(fileIO). \
            filter(fileIO.projectFile == self). \
            one()
    except:
        # Handle case where fileIO interfaces with multiple files
        # Retrieve File using FileIO and file extension
        extension = filename.split('.')[1]
        try:
            instance = session.query(fileIO). \
                filter(fileIO.projectFile == self). \
                filter(fileIO.fileExtension == extension). \
                one()
        except NoResultFound:
            # Handle case when there is no file in database but the
            # card is listed in the project file
            log.warning('{0} listed as card in project file, but '
                        'the file is not found in the database.'.format(filename))
        except MultipleResultsFound:
            self._invokeWriteForMultipleOfType(directory, extension, fileIO, filename,
                                               session, replaceParamFile=replaceParamFile)
            return
    # Initiate Write Method on File
    if instance is not None:
        instance.write(session=session, directory=directory,
                       name=filename, replaceParamFile=replaceParamFile)
def POST(self):
    """ The HTTP POST body parsed into a MultiDict.

        This supports urlencoded and multipart POST requests. Multipart
        is commonly used for file uploads and may result in some of the
        values being cgi.FieldStorage objects instead of strings.

        Multiple values per key are possible. See MultiDict for details.
    """
    if self._POST is None:
        save_env = dict()  # Build a save environment for cgi
        for key in ('REQUEST_METHOD', 'CONTENT_TYPE', 'CONTENT_LENGTH'):
            if key in self.environ:
                save_env[key] = self.environ[key]
        save_env['QUERY_STRING'] = ''  # Without this, sys.argv is called!
        if TextIOWrapper:
            fb = TextIOWrapper(self.body, encoding='ISO-8859-1')
        else:
            fb = self.body
        data = cgi.FieldStorage(fp=fb, environ=save_env)
        self._POST = MultiDict()
        for item in data.list:
            self._POST[item.name] = item if item.filename else item.value
    return self._POST

The HTTP POST body parsed into a MultiDict.

This supports urlencoded and multipart POST requests. Multipart
is commonly used for file uploads and may result in some of the
values being cgi.FieldStorage objects instead of strings.

Multiple values per key are possible. See MultiDict for details.

Below is the instruction that describes the task:
### Input:
The HTTP POST body parsed into a MultiDict.

This supports urlencoded and multipart POST requests. Multipart
is commonly used for file uploads and may result in some of the
values being cgi.FieldStorage objects instead of strings.

Multiple values per key are possible. See MultiDict for details.

### Response:
def POST(self):
    """ The HTTP POST body parsed into a MultiDict.

        This supports urlencoded and multipart POST requests. Multipart
        is commonly used for file uploads and may result in some of the
        values being cgi.FieldStorage objects instead of strings.

        Multiple values per key are possible. See MultiDict for details.
    """
    if self._POST is None:
        save_env = dict()  # Build a save environment for cgi
        for key in ('REQUEST_METHOD', 'CONTENT_TYPE', 'CONTENT_LENGTH'):
            if key in self.environ:
                save_env[key] = self.environ[key]
        save_env['QUERY_STRING'] = ''  # Without this, sys.argv is called!
        if TextIOWrapper:
            fb = TextIOWrapper(self.body, encoding='ISO-8859-1')
        else:
            fb = self.body
        data = cgi.FieldStorage(fp=fb, environ=save_env)
        self._POST = MultiDict()
        for item in data.list:
            self._POST[item.name] = item if item.filename else item.value
    return self._POST
def resolution(file_, resolution_string):
    """
    A filter to return the URL for the provided resolution of the thumbnail.
    """
    if sorl_settings.THUMBNAIL_DUMMY:
        dummy_source = sorl_settings.THUMBNAIL_DUMMY_SOURCE
        source = dummy_source.replace('%(width)s', '(?P<width>[0-9]+)')
        source = source.replace('%(height)s', '(?P<height>[0-9]+)')
        source = re.compile(source)
        try:
            resolution = decimal.Decimal(resolution_string.strip('x'))
            info = source.match(file_).groupdict()
            info = {dimension: int(int(size) * resolution)
                    for (dimension, size) in info.items()}
            return dummy_source % info
        except (AttributeError, TypeError, KeyError):
            # If we can't manipulate the dummy we shouldn't change it at all
            return file_
    filename, extension = os.path.splitext(file_)
    return '%s@%s%s' % (filename, resolution_string, extension)

A filter to return the URL for the provided resolution of the thumbnail.

Below is the instruction that describes the task:
### Input:
A filter to return the URL for the provided resolution of the thumbnail.

### Response:
def resolution(file_, resolution_string):
    """
    A filter to return the URL for the provided resolution of the thumbnail.
    """
    if sorl_settings.THUMBNAIL_DUMMY:
        dummy_source = sorl_settings.THUMBNAIL_DUMMY_SOURCE
        source = dummy_source.replace('%(width)s', '(?P<width>[0-9]+)')
        source = source.replace('%(height)s', '(?P<height>[0-9]+)')
        source = re.compile(source)
        try:
            resolution = decimal.Decimal(resolution_string.strip('x'))
            info = source.match(file_).groupdict()
            info = {dimension: int(int(size) * resolution)
                    for (dimension, size) in info.items()}
            return dummy_source % info
        except (AttributeError, TypeError, KeyError):
            # If we can't manipulate the dummy we shouldn't change it at all
            return file_
    filename, extension = os.path.splitext(file_)
    return '%s@%s%s' % (filename, resolution_string, extension)
def itermovieshash(self):
    """
    Iterate over movies hash stored in the database.
    """
    cur = self._db.firstkey()
    while cur is not None:
        yield cur
        cur = self._db.nextkey(cur)

Iterate over movies hash stored in the database.

Below is the instruction that describes the task:
### Input:
Iterate over movies hash stored in the database.

### Response:
def itermovieshash(self):
    """
    Iterate over movies hash stored in the database.
    """
    cur = self._db.firstkey()
    while cur is not None:
        yield cur
        cur = self._db.nextkey(cur)
def _exclude_files_parser(cls, key):
    """Returns a parser function to make sure field inputs are not files.

    Parses a value after getting the key so error messages are
    more informative.

    :param key:
    :rtype: callable
    """
    def parser(value):
        exclude_directive = 'file:'
        if value.startswith(exclude_directive):
            raise ValueError(
                'Only strings are accepted for the {0} field, '
                'files are not accepted'.format(key))
        return value
    return parser

Returns a parser function to make sure field inputs are not files.

Parses a value after getting the key so error messages are
more informative.

:param key:
:rtype: callable

Below is the instruction that describes the task:
### Input:
Returns a parser function to make sure field inputs are not files.

Parses a value after getting the key so error messages are
more informative.

:param key:
:rtype: callable

### Response:
def _exclude_files_parser(cls, key):
    """Returns a parser function to make sure field inputs are not files.

    Parses a value after getting the key so error messages are
    more informative.

    :param key:
    :rtype: callable
    """
    def parser(value):
        exclude_directive = 'file:'
        if value.startswith(exclude_directive):
            raise ValueError(
                'Only strings are accepted for the {0} field, '
                'files are not accepted'.format(key))
        return value
    return parser
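A usage sketch of the parser factory (re-declared as a plain function here, without the `cls` parameter, so the snippet runs standalone): the closure captures `key`, so the error message names the offending field.

```python
def _exclude_files_parser(key):
    """Return a parser that rejects 'file:' directives for `key`."""
    def parser(value):
        exclude_directive = 'file:'
        if value.startswith(exclude_directive):
            raise ValueError(
                'Only strings are accepted for the {0} field, '
                'files are not accepted'.format(key))
        return value
    return parser

parse = _exclude_files_parser('description')
print(parse('plain text'))  # plain text
try:
    parse('file: README.rst')
except ValueError as err:
    print(err)  # names the 'description' field in the message
```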
def compute_layer(cls, data, params, layout):
    """
    Compute position for the layer in all panels

    Positions can override this function instead of
    `compute_panel` if the position computations are
    independent of the panel. i.e when not colliding
    """
    def fn(pdata):
        """
        Helper compute function
        """
        # Given data belonging to a specific panel, grab
        # the corresponding scales and call the method
        # that does the real computation
        if len(pdata) == 0:
            return pdata
        scales = layout.get_scales(pdata['PANEL'].iat[0])
        return cls.compute_panel(pdata, scales, params)

    return groupby_apply(data, 'PANEL', fn)

Compute position for the layer in all panels

Positions can override this function instead of
`compute_panel` if the position computations are
independent of the panel. i.e when not colliding

Below is the instruction that describes the task:
### Input:
Compute position for the layer in all panels

Positions can override this function instead of
`compute_panel` if the position computations are
independent of the panel. i.e when not colliding

### Response:
def compute_layer(cls, data, params, layout):
    """
    Compute position for the layer in all panels

    Positions can override this function instead of
    `compute_panel` if the position computations are
    independent of the panel. i.e when not colliding
    """
    def fn(pdata):
        """
        Helper compute function
        """
        # Given data belonging to a specific panel, grab
        # the corresponding scales and call the method
        # that does the real computation
        if len(pdata) == 0:
            return pdata
        scales = layout.get_scales(pdata['PANEL'].iat[0])
        return cls.compute_panel(pdata, scales, params)

    return groupby_apply(data, 'PANEL', fn)
def get_project_by_network_id(network_id,**kwargs): """ get a project complexmodel by a network_id """ user_id = kwargs.get('user_id') projects_i = db.DBSession.query(Project).join(ProjectOwner).join(Network, Project.id==Network.project_id).filter( Network.id==network_id, ProjectOwner.user_id==user_id).order_by('name').all() ret_project = None for project_i in projects_i: try: project_i.check_read_permission(user_id) ret_project = project_i except: log.info("Can't return project %s. User %s does not have permission to read it.", project_i.id, user_id) return ret_project
get a project complexmodel by a network_id
Below is the instruction that describes the task: ### Input: get a project complexmodel by a network_id ### Response: def get_project_by_network_id(network_id,**kwargs): """ get a project complexmodel by a network_id """ user_id = kwargs.get('user_id') projects_i = db.DBSession.query(Project).join(ProjectOwner).join(Network, Project.id==Network.project_id).filter( Network.id==network_id, ProjectOwner.user_id==user_id).order_by('name').all() ret_project = None for project_i in projects_i: try: project_i.check_read_permission(user_id) ret_project = project_i except: log.info("Can't return project %s. User %s does not have permission to read it.", project_i.id, user_id) return ret_project
def generate_monthly(rain_day_threshold, day_end_hour, use_dst, daily_data, monthly_data, process_from): """Generate monthly summaries from daily data.""" start = monthly_data.before(datetime.max) if start is None: start = datetime.min start = daily_data.after(start + SECOND) if process_from: if start: start = min(start, process_from) else: start = process_from if start is None: return start # set start to start of first day of month (local time) start = timezone.local_replace( start, use_dst=use_dst, day=1, hour=day_end_hour, minute=0, second=0) if day_end_hour >= 12: # month actually starts on the last day of previous month start -= DAY del monthly_data[start:] stop = daily_data.before(datetime.max) if stop is None: return None acc = MonthAcc(rain_day_threshold) def monthlygen(inputdata): """Internal generator function""" month_start = start count = 0 while month_start <= stop: count += 1 if count % 12 == 0: logger.info("monthly: %s", month_start.isoformat(' ')) else: logger.debug("monthly: %s", month_start.isoformat(' ')) month_end = month_start + WEEK if month_end.month < 12: month_end = month_end.replace(month=month_end.month+1) else: month_end = month_end.replace(month=1, year=month_end.year+1) month_end = month_end - WEEK if use_dst: # month might straddle summer time start or end month_end = timezone.local_replace( month_end + HOURx3, use_dst=use_dst, hour=day_end_hour) acc.reset() for data in inputdata[month_start:month_end]: acc.add_daily(data) new_data = acc.result() if new_data: new_data['start'] = month_start yield new_data month_start = month_end monthly_data.update(monthlygen(daily_data)) return start
Generate monthly summaries from daily data.
Below is the instruction that describes the task: ### Input: Generate monthly summaries from daily data. ### Response: def generate_monthly(rain_day_threshold, day_end_hour, use_dst, daily_data, monthly_data, process_from): """Generate monthly summaries from daily data.""" start = monthly_data.before(datetime.max) if start is None: start = datetime.min start = daily_data.after(start + SECOND) if process_from: if start: start = min(start, process_from) else: start = process_from if start is None: return start # set start to start of first day of month (local time) start = timezone.local_replace( start, use_dst=use_dst, day=1, hour=day_end_hour, minute=0, second=0) if day_end_hour >= 12: # month actually starts on the last day of previous month start -= DAY del monthly_data[start:] stop = daily_data.before(datetime.max) if stop is None: return None acc = MonthAcc(rain_day_threshold) def monthlygen(inputdata): """Internal generator function""" month_start = start count = 0 while month_start <= stop: count += 1 if count % 12 == 0: logger.info("monthly: %s", month_start.isoformat(' ')) else: logger.debug("monthly: %s", month_start.isoformat(' ')) month_end = month_start + WEEK if month_end.month < 12: month_end = month_end.replace(month=month_end.month+1) else: month_end = month_end.replace(month=1, year=month_end.year+1) month_end = month_end - WEEK if use_dst: # month might straddle summer time start or end month_end = timezone.local_replace( month_end + HOURx3, use_dst=use_dst, hour=day_end_hour) acc.reset() for data in inputdata[month_start:month_end]: acc.add_daily(data) new_data = acc.result() if new_data: new_data['start'] = month_start yield new_data month_start = month_end monthly_data.update(monthlygen(daily_data)) return start
def create_sconstruct(self, project_dir='', sayyes=False): """Creates a default SConstruct file""" project_dir = util.check_dir(project_dir) sconstruct_name = 'SConstruct' sconstruct_path = util.safe_join(project_dir, sconstruct_name) local_sconstruct_path = util.safe_join( util.get_folder('resources'), sconstruct_name) if isfile(sconstruct_path): # -- If sayyes, skip the question if sayyes: self._copy_sconstruct_file(sconstruct_name, sconstruct_path, local_sconstruct_path) else: click.secho( 'Warning: {} file already exists'.format(sconstruct_name), fg='yellow') if click.confirm('Do you want to replace it?'): self._copy_sconstruct_file(sconstruct_name, sconstruct_path, local_sconstruct_path) else: click.secho('Abort!', fg='red') else: self._copy_sconstruct_file(sconstruct_name, sconstruct_path, local_sconstruct_path)
Creates a default SConstruct file
Below is the instruction that describes the task: ### Input: Creates a default SConstruct file ### Response: def create_sconstruct(self, project_dir='', sayyes=False): """Creates a default SConstruct file""" project_dir = util.check_dir(project_dir) sconstruct_name = 'SConstruct' sconstruct_path = util.safe_join(project_dir, sconstruct_name) local_sconstruct_path = util.safe_join( util.get_folder('resources'), sconstruct_name) if isfile(sconstruct_path): # -- If sayyes, skip the question if sayyes: self._copy_sconstruct_file(sconstruct_name, sconstruct_path, local_sconstruct_path) else: click.secho( 'Warning: {} file already exists'.format(sconstruct_name), fg='yellow') if click.confirm('Do you want to replace it?'): self._copy_sconstruct_file(sconstruct_name, sconstruct_path, local_sconstruct_path) else: click.secho('Abort!', fg='red') else: self._copy_sconstruct_file(sconstruct_name, sconstruct_path, local_sconstruct_path)
def resolution(self, channels=None): """ Get the resolution of the specified channel(s). The resolution specifies the number of different values that the events can take. The resolution is directly obtained from the $PnR parameter. Parameters ---------- channels : int, str, list of int, list of str Channel(s) for which to get the resolution. If None, return a list with the resolution of all channels, in the order of ``FCSData.channels``. Return ------ int or list of ints Resolution of the specified channel(s). """ # Check default if channels is None: channels = self._channels # Get numerical indices of channels channels = self._name_to_index(channels) # Get resolution of the specified channels if hasattr(channels, '__iter__') \ and not isinstance(channels, six.string_types): return [self._resolution[ch] for ch in channels] else: return self._resolution[channels]
Get the resolution of the specified channel(s). The resolution specifies the number of different values that the events can take. The resolution is directly obtained from the $PnR parameter. Parameters ---------- channels : int, str, list of int, list of str Channel(s) for which to get the resolution. If None, return a list with the resolution of all channels, in the order of ``FCSData.channels``. Return ------ int or list of ints Resolution of the specified channel(s).
Below is the instruction that describes the task: ### Input: Get the resolution of the specified channel(s). The resolution specifies the number of different values that the events can take. The resolution is directly obtained from the $PnR parameter. Parameters ---------- channels : int, str, list of int, list of str Channel(s) for which to get the resolution. If None, return a list with the resolution of all channels, in the order of ``FCSData.channels``. Return ------ int or list of ints Resolution of the specified channel(s). ### Response: def resolution(self, channels=None): """ Get the resolution of the specified channel(s). The resolution specifies the number of different values that the events can take. The resolution is directly obtained from the $PnR parameter. Parameters ---------- channels : int, str, list of int, list of str Channel(s) for which to get the resolution. If None, return a list with the resolution of all channels, in the order of ``FCSData.channels``. Return ------ int or list of ints Resolution of the specified channel(s). """ # Check default if channels is None: channels = self._channels # Get numerical indices of channels channels = self._name_to_index(channels) # Get resolution of the specified channels if hasattr(channels, '__iter__') \ and not isinstance(channels, six.string_types): return [self._resolution[ch] for ch in channels] else: return self._resolution[channels]
def randfloat(a, b=None): """Return a random float :param float a: Either the minimum value (inclusive) if ``b`` is set, or the maximum value if ``b`` is not set (non-inclusive, in which case the minimum is implicitly 0.0) :param float b: The maximum value to generate (non-inclusive) :returns: float """ if b is None: max_ = a min_ = 0.0 else: min_ = a max_ = b diff = max_ - min_ res = _random() res *= diff res += min_ return res
Return a random float :param float a: Either the minimum value (inclusive) if ``b`` is set, or the maximum value if ``b`` is not set (non-inclusive, in which case the minimum is implicitly 0.0) :param float b: The maximum value to generate (non-inclusive) :returns: float
Below is the instruction that describes the task: ### Input: Return a random float :param float a: Either the minimum value (inclusive) if ``b`` is set, or the maximum value if ``b`` is not set (non-inclusive, in which case the minimum is implicitly 0.0) :param float b: The maximum value to generate (non-inclusive) :returns: float ### Response: def randfloat(a, b=None): """Return a random float :param float a: Either the minimum value (inclusive) if ``b`` is set, or the maximum value if ``b`` is not set (non-inclusive, in which case the minimum is implicitly 0.0) :param float b: The maximum value to generate (non-inclusive) :returns: float """ if b is None: max_ = a min_ = 0.0 else: min_ = a max_ = b diff = max_ - min_ res = _random() res *= diff res += min_ return res
def process_byte(self, tag): """Process byte type tags""" tag.set_address(self.normal_register.current_address) # each address needs 1 byte self.normal_register.move_to_next_address(1)
Process byte type tags
Below is the instruction that describes the task: ### Input: Process byte type tags ### Response: def process_byte(self, tag): """Process byte type tags""" tag.set_address(self.normal_register.current_address) # each address needs 1 byte self.normal_register.move_to_next_address(1)
def wrap_json_response(func=None, *, encoder=json.JSONEncoder): """ A middleware that encodes in json the response body in case of that the "Content-Type" header is "application/json". This middlware accepts and optional `encoder` parameter, that allow to the user specify its own json encoder class. """ if func is None: return functools.partial(wrap_json_response, encoder=encoder) @functools.wraps(func) def wrapper(request, *args, **kwargs): response = func(request, *args, **kwargs) if "Content-Type" in response.headers and response.headers['Content-Type'] is not None: ctype, pdict = parse_header(response.headers.get('Content-Type', '')) if ctype == "application/json" and (isinstance(response.body, dict) or isinstance(response.body, list)): response.body = json.dumps(response.body, cls=encoder) return response return wrapper
A middleware that encodes in json the response body in case of that the "Content-Type" header is "application/json". This middlware accepts and optional `encoder` parameter, that allow to the user specify its own json encoder class.
Below is the instruction that describes the task: ### Input: A middleware that encodes in json the response body in case of that the "Content-Type" header is "application/json". This middlware accepts and optional `encoder` parameter, that allow to the user specify its own json encoder class. ### Response: def wrap_json_response(func=None, *, encoder=json.JSONEncoder): """ A middleware that encodes in json the response body in case of that the "Content-Type" header is "application/json". This middlware accepts and optional `encoder` parameter, that allow to the user specify its own json encoder class. """ if func is None: return functools.partial(wrap_json_response, encoder=encoder) @functools.wraps(func) def wrapper(request, *args, **kwargs): response = func(request, *args, **kwargs) if "Content-Type" in response.headers and response.headers['Content-Type'] is not None: ctype, pdict = parse_header(response.headers.get('Content-Type', '')) if ctype == "application/json" and (isinstance(response.body, dict) or isinstance(response.body, list)): response.body = json.dumps(response.body, cls=encoder) return response return wrapper
def full_photos(self): """ Gets a list of photos using default options. :class:`list` of :class:`stravalib.model.ActivityPhoto` objects for this activity. """ if self._photos is None: if self.total_photo_count > 0: self.assert_bind_client() self._photos = self.bind_client.get_activity_photos(self.id, only_instagram=False) else: self._photos = [] return self._photos
Gets a list of photos using default options. :class:`list` of :class:`stravalib.model.ActivityPhoto` objects for this activity.
Below is the instruction that describes the task: ### Input: Gets a list of photos using default options. :class:`list` of :class:`stravalib.model.ActivityPhoto` objects for this activity. ### Response: def full_photos(self): """ Gets a list of photos using default options. :class:`list` of :class:`stravalib.model.ActivityPhoto` objects for this activity. """ if self._photos is None: if self.total_photo_count > 0: self.assert_bind_client() self._photos = self.bind_client.get_activity_photos(self.id, only_instagram=False) else: self._photos = [] return self._photos
def best_match_from_list(item,options,fuzzy=90,fname_match=True,fuzzy_fragment=None,guess=False): '''Returns the best match from :meth:`matches_from_list` or ``None`` if no good matches''' matches = matches_from_list(item,options,fuzzy,fname_match,fuzzy_fragment,guess) if len(matches)>0: return matches[0] return None
Returns the best match from :meth:`matches_from_list` or ``None`` if no good matches
Below is the instruction that describes the task: ### Input: Returns the best match from :meth:`matches_from_list` or ``None`` if no good matches ### Response: def best_match_from_list(item,options,fuzzy=90,fname_match=True,fuzzy_fragment=None,guess=False): '''Returns the best match from :meth:`matches_from_list` or ``None`` if no good matches''' matches = matches_from_list(item,options,fuzzy,fname_match,fuzzy_fragment,guess) if len(matches)>0: return matches[0] return None
def set_formatted(self, raw_timestamp, raw_status, raw_value, major=DEFAULT_KATCP_MAJOR): """Set the current value of the sensor. Parameters ---------- timestamp : str KATCP formatted timestamp string status : str KATCP formatted sensor status string value : str KATCP formatted sensor value major : int, default = 5 KATCP major version to use for interpreting the raw values """ timestamp = self.TIMESTAMP_TYPE.decode(raw_timestamp, major) status = self.STATUS_NAMES[raw_status] value = self.parse_value(raw_value, major) self.set(timestamp, status, value)
Set the current value of the sensor. Parameters ---------- timestamp : str KATCP formatted timestamp string status : str KATCP formatted sensor status string value : str KATCP formatted sensor value major : int, default = 5 KATCP major version to use for interpreting the raw values
Below is the instruction that describes the task: ### Input: Set the current value of the sensor. Parameters ---------- timestamp : str KATCP formatted timestamp string status : str KATCP formatted sensor status string value : str KATCP formatted sensor value major : int, default = 5 KATCP major version to use for interpreting the raw values ### Response: def set_formatted(self, raw_timestamp, raw_status, raw_value, major=DEFAULT_KATCP_MAJOR): """Set the current value of the sensor. Parameters ---------- timestamp : str KATCP formatted timestamp string status : str KATCP formatted sensor status string value : str KATCP formatted sensor value major : int, default = 5 KATCP major version to use for interpreting the raw values """ timestamp = self.TIMESTAMP_TYPE.decode(raw_timestamp, major) status = self.STATUS_NAMES[raw_status] value = self.parse_value(raw_value, major) self.set(timestamp, status, value)
def setup(self, redis_conn=None, host='localhost', port=6379): ''' Set up the counting manager class @param redis_conn: A premade redis connection (overrides host and port) @param host: the redis host @param port: the redis port ''' AbstractCounter.setup(self, redis_conn=redis_conn, host=host, port=port) self._threaded_start()
Set up the counting manager class @param redis_conn: A premade redis connection (overrides host and port) @param host: the redis host @param port: the redis port
Below is the instruction that describes the task: ### Input: Set up the counting manager class @param redis_conn: A premade redis connection (overrides host and port) @param host: the redis host @param port: the redis port ### Response: def setup(self, redis_conn=None, host='localhost', port=6379): ''' Set up the counting manager class @param redis_conn: A premade redis connection (overrides host and port) @param host: the redis host @param port: the redis port ''' AbstractCounter.setup(self, redis_conn=redis_conn, host=host, port=port) self._threaded_start()
def a_over_Rs(P,R2,M2,M1=1,R1=1,planet=True): """ Returns a/Rs for given parameters. """ if planet: M2 *= REARTH/RSUN R2 *= MEARTH/MSUN return semimajor(P,M1+M2)*AU/(R1*RSUN)
Returns a/Rs for given parameters.
Below is the instruction that describes the task: ### Input: Returns a/Rs for given parameters. ### Response: def a_over_Rs(P,R2,M2,M1=1,R1=1,planet=True): """ Returns a/Rs for given parameters. """ if planet: M2 *= REARTH/RSUN R2 *= MEARTH/MSUN return semimajor(P,M1+M2)*AU/(R1*RSUN)
def solve(self, neigs=4, tol=0, guess=None, mode_profiles=True, initial_mode_guess=None): """ This function finds the eigenmodes. Parameters ---------- neigs : int number of eigenmodes to find tol : float Relative accuracy for eigenvalues. The default value of 0 implies machine precision. guess : float a guess for the refractive index. Only finds eigenvectors with an effective refractive index higher than this value. Returns ------- self : an instance of the VFDModeSolver class obtain the fields of interest for specific modes using, for example: solver = EMpy.modesolvers.FD.VFDModeSolver(wavelength, x, y, epsf, boundary).solve() Ex = solver.modes[0].Ex Ey = solver.modes[0].Ey Ez = solver.modes[0].Ez """ from scipy.sparse.linalg import eigen self.nmodes = neigs self.tol = tol A = self.build_matrix() if guess is not None: # calculate shift for eigs function k = 2 * numpy.pi / self.wl shift = (guess * k) ** 2 else: shift = None [eigvals, eigvecs] = eigen.eigs(A, k=neigs, which='LR', tol=0.001, ncv=None, v0 = initial_mode_guess, return_eigenvectors=mode_profiles, sigma=shift) neffs = self.wl * scipy.sqrt(eigvals) / (2 * numpy.pi) if mode_profiles: Hxs = [] Hys = [] nx = self.nx ny = self.ny for ieig in range(neigs): Hxs.append(eigvecs[:nx * ny, ieig].reshape(nx, ny)) Hys.append(eigvecs[nx * ny:, ieig].reshape(nx, ny)) # sort the modes idx = numpy.flipud(numpy.argsort(neffs)) neffs = neffs[idx] self.neff = neffs if mode_profiles: tmpx = [] tmpy = [] for i in idx: tmpx.append(Hxs[i]) tmpy.append(Hys[i]) Hxs = tmpx Hys = tmpy [Hzs, Exs, Eys, Ezs] = self.compute_other_fields(neffs, Hxs, Hys) self.modes = [] for (neff, Hx, Hy, Hz, Ex, Ey, Ez) in zip(neffs, Hxs, Hys, Hzs, Exs, Eys, Ezs): self.modes.append( FDMode(self.wl, self.x, self.y, neff, Ey, Ex, Ez, Hy, Hx, Hz).normalize()) return self
This function finds the eigenmodes. Parameters ---------- neigs : int number of eigenmodes to find tol : float Relative accuracy for eigenvalues. The default value of 0 implies machine precision. guess : float a guess for the refractive index. Only finds eigenvectors with an effective refractive index higher than this value. Returns ------- self : an instance of the VFDModeSolver class obtain the fields of interest for specific modes using, for example: solver = EMpy.modesolvers.FD.VFDModeSolver(wavelength, x, y, epsf, boundary).solve() Ex = solver.modes[0].Ex Ey = solver.modes[0].Ey Ez = solver.modes[0].Ez
Below is the instruction that describes the task: ### Input: This function finds the eigenmodes. Parameters ---------- neigs : int number of eigenmodes to find tol : float Relative accuracy for eigenvalues. The default value of 0 implies machine precision. guess : float a guess for the refractive index. Only finds eigenvectors with an effective refractive index higher than this value. Returns ------- self : an instance of the VFDModeSolver class obtain the fields of interest for specific modes using, for example: solver = EMpy.modesolvers.FD.VFDModeSolver(wavelength, x, y, epsf, boundary).solve() Ex = solver.modes[0].Ex Ey = solver.modes[0].Ey Ez = solver.modes[0].Ez ### Response: def solve(self, neigs=4, tol=0, guess=None, mode_profiles=True, initial_mode_guess=None): """ This function finds the eigenmodes. Parameters ---------- neigs : int number of eigenmodes to find tol : float Relative accuracy for eigenvalues. The default value of 0 implies machine precision. guess : float a guess for the refractive index. Only finds eigenvectors with an effective refractive index higher than this value. 
Returns ------- self : an instance of the VFDModeSolver class obtain the fields of interest for specific modes using, for example: solver = EMpy.modesolvers.FD.VFDModeSolver(wavelength, x, y, epsf, boundary).solve() Ex = solver.modes[0].Ex Ey = solver.modes[0].Ey Ez = solver.modes[0].Ez """ from scipy.sparse.linalg import eigen self.nmodes = neigs self.tol = tol A = self.build_matrix() if guess is not None: # calculate shift for eigs function k = 2 * numpy.pi / self.wl shift = (guess * k) ** 2 else: shift = None [eigvals, eigvecs] = eigen.eigs(A, k=neigs, which='LR', tol=0.001, ncv=None, v0 = initial_mode_guess, return_eigenvectors=mode_profiles, sigma=shift) neffs = self.wl * scipy.sqrt(eigvals) / (2 * numpy.pi) if mode_profiles: Hxs = [] Hys = [] nx = self.nx ny = self.ny for ieig in range(neigs): Hxs.append(eigvecs[:nx * ny, ieig].reshape(nx, ny)) Hys.append(eigvecs[nx * ny:, ieig].reshape(nx, ny)) # sort the modes idx = numpy.flipud(numpy.argsort(neffs)) neffs = neffs[idx] self.neff = neffs if mode_profiles: tmpx = [] tmpy = [] for i in idx: tmpx.append(Hxs[i]) tmpy.append(Hys[i]) Hxs = tmpx Hys = tmpy [Hzs, Exs, Eys, Ezs] = self.compute_other_fields(neffs, Hxs, Hys) self.modes = [] for (neff, Hx, Hy, Hz, Ex, Ey, Ez) in zip(neffs, Hxs, Hys, Hzs, Exs, Eys, Ezs): self.modes.append( FDMode(self.wl, self.x, self.y, neff, Ey, Ex, Ez, Hy, Hx, Hz).normalize()) return self
def file_download_using_wget(self,url): '''It will download file specified by url using wget utility of linux ''' file_name=url.split('/')[-1] print 'Downloading file %s '%file_name command='wget -c --read-timeout=50 --tries=3 -q --show-progress --no-check-certificate ' url='"'+url+'"' command=command+url os.system(command)
It will download file specified by url using wget utility of linux
Below is the instruction that describes the task: ### Input: It will download file specified by url using wget utility of linux ### Response: def file_download_using_wget(self,url): '''It will download file specified by url using wget utility of linux ''' file_name=url.split('/')[-1] print 'Downloading file %s '%file_name command='wget -c --read-timeout=50 --tries=3 -q --show-progress --no-check-certificate ' url='"'+url+'"' command=command+url os.system(command)
def set_client_cmds(self): """ This is method automatically called on each request and updates "object_id", "cmd" and "flow" client variables from current.input. "flow" and "object_id" variables will always exists in the task_data so app developers can safely check for their values in workflows. Their values will be reset to None if they not exists in the current input data set. On the other side, if there isn't a "cmd" in the current.input cmd will be removed from task_data. """ self.task_data['cmd'] = self.input.get('cmd') self.task_data['flow'] = self.input.get('flow') filters = self.input.get('filters', {}) try: if isinstance(filters, dict): # this is the new form, others will be removed when ui be ready self.task_data['object_id'] = filters.get('object_id')['values'][0] elif filters[0]['field'] == 'object_id': self.task_data['object_id'] = filters[0]['values'][0] except: if 'object_id' in self.input: self.task_data['object_id'] = self.input.get('object_id')
This is method automatically called on each request and updates "object_id", "cmd" and "flow" client variables from current.input. "flow" and "object_id" variables will always exists in the task_data so app developers can safely check for their values in workflows. Their values will be reset to None if they not exists in the current input data set. On the other side, if there isn't a "cmd" in the current.input cmd will be removed from task_data.
Below is the instruction that describes the task: ### Input: This is method automatically called on each request and updates "object_id", "cmd" and "flow" client variables from current.input. "flow" and "object_id" variables will always exists in the task_data so app developers can safely check for their values in workflows. Their values will be reset to None if they not exists in the current input data set. On the other side, if there isn't a "cmd" in the current.input cmd will be removed from task_data. ### Response: def set_client_cmds(self): """ This is method automatically called on each request and updates "object_id", "cmd" and "flow" client variables from current.input. "flow" and "object_id" variables will always exists in the task_data so app developers can safely check for their values in workflows. Their values will be reset to None if they not exists in the current input data set. On the other side, if there isn't a "cmd" in the current.input cmd will be removed from task_data. """ self.task_data['cmd'] = self.input.get('cmd') self.task_data['flow'] = self.input.get('flow') filters = self.input.get('filters', {}) try: if isinstance(filters, dict): # this is the new form, others will be removed when ui be ready self.task_data['object_id'] = filters.get('object_id')['values'][0] elif filters[0]['field'] == 'object_id': self.task_data['object_id'] = filters[0]['values'][0] except: if 'object_id' in self.input: self.task_data['object_id'] = self.input.get('object_id')
def constant_image_value(image, crs='EPSG:32613', scale=1): """Extract the output value from a calculation done with constant images""" return getinfo(ee.Image(image).reduceRegion( reducer=ee.Reducer.first(), scale=scale, geometry=ee.Geometry.Rectangle([0, 0, 10, 10], crs, False)))
Extract the output value from a calculation done with constant images
Below is the instruction that describes the task: ### Input: Extract the output value from a calculation done with constant images ### Response: def constant_image_value(image, crs='EPSG:32613', scale=1): """Extract the output value from a calculation done with constant images""" return getinfo(ee.Image(image).reduceRegion( reducer=ee.Reducer.first(), scale=scale, geometry=ee.Geometry.Rectangle([0, 0, 10, 10], crs, False)))
def determine_json_encoding(json_bytes): ''' Given the fact that the first 2 characters in json are guaranteed to be ASCII, we can use these to determine the encoding. See: http://tools.ietf.org/html/rfc4627#section-3 Copied here: Since the first two characters of a JSON text will always be ASCII characters [RFC0020], it is possible to determine whether an octet stream is UTF-8, UTF-16 (BE or LE), or UTF-32 (BE or LE) by looking at the pattern of nulls in the first four octets. 00 00 00 xx UTF-32BE 00 xx 00 xx UTF-16BE xx 00 00 00 UTF-32LE xx 00 xx 00 UTF-16LE xx xx xx xx UTF-8 ''' assert isinstance(json_bytes, bytes), "`determine_json_encoding()` can only operate on bytestring inputs" if len(json_bytes) > 4: b1, b2, b3, b4 = json_bytes[0], json_bytes[1], json_bytes[2], json_bytes[3] if b1 == 0 and b2 == 0 and b3 == 0 and b4 != 0: return "UTF-32BE" elif b1 == 0 and b2 != 0 and b3 == 0 and b4 != 0: return "UTF-16BE" elif b1 != 0 and b2 == 0 and b3 == 0 and b4 == 0: return "UTF-32LE" elif b1 != 0 and b2 == 0 and b3 != 0 and b4 == 0: return "UTF-16LE" elif b1 != 0 and b2 != 0 and b3 != 0 and b4 != 0: return "UTF-8" else: raise Exceptions.ContentTypeError("Unknown encoding!") elif len(json_bytes) > 2: b1, b2 = json_bytes[0], json_bytes[1] if b1 == 0 and b2 == 0: return "UTF-32BE" elif b1 == 0 and b2 != 0: return "UTF-16BE" elif b1 != 0 and b2 == 0: raise Exceptions.ContentTypeError("Json string too short to definitively infer encoding.") elif b1 != 0 and b2 != 0: return "UTF-8" else: raise Exceptions.ContentTypeError("Unknown encoding!") raise Exceptions.ContentTypeError("Input string too short to guess encoding!")
Given the fact that the first 2 characters in json are guaranteed to be ASCII, we can use these to determine the encoding. See: http://tools.ietf.org/html/rfc4627#section-3 Copied here: Since the first two characters of a JSON text will always be ASCII characters [RFC0020], it is possible to determine whether an octet stream is UTF-8, UTF-16 (BE or LE), or UTF-32 (BE or LE) by looking at the pattern of nulls in the first four octets. 00 00 00 xx UTF-32BE 00 xx 00 xx UTF-16BE xx 00 00 00 UTF-32LE xx 00 xx 00 UTF-16LE xx xx xx xx UTF-8
Below is the instruction that describes the task: ### Input: Given the fact that the first 2 characters in json are guaranteed to be ASCII, we can use these to determine the encoding. See: http://tools.ietf.org/html/rfc4627#section-3 Copied here: Since the first two characters of a JSON text will always be ASCII characters [RFC0020], it is possible to determine whether an octet stream is UTF-8, UTF-16 (BE or LE), or UTF-32 (BE or LE) by looking at the pattern of nulls in the first four octets. 00 00 00 xx UTF-32BE 00 xx 00 xx UTF-16BE xx 00 00 00 UTF-32LE xx 00 xx 00 UTF-16LE xx xx xx xx UTF-8 ### Response: def determine_json_encoding(json_bytes): ''' Given the fact that the first 2 characters in json are guaranteed to be ASCII, we can use these to determine the encoding. See: http://tools.ietf.org/html/rfc4627#section-3 Copied here: Since the first two characters of a JSON text will always be ASCII characters [RFC0020], it is possible to determine whether an octet stream is UTF-8, UTF-16 (BE or LE), or UTF-32 (BE or LE) by looking at the pattern of nulls in the first four octets. 
00 00 00 xx UTF-32BE 00 xx 00 xx UTF-16BE xx 00 00 00 UTF-32LE xx 00 xx 00 UTF-16LE xx xx xx xx UTF-8 ''' assert isinstance(json_bytes, bytes), "`determine_json_encoding()` can only operate on bytestring inputs" if len(json_bytes) > 4: b1, b2, b3, b4 = json_bytes[0], json_bytes[1], json_bytes[2], json_bytes[3] if b1 == 0 and b2 == 0 and b3 == 0 and b4 != 0: return "UTF-32BE" elif b1 == 0 and b2 != 0 and b3 == 0 and b4 != 0: return "UTF-16BE" elif b1 != 0 and b2 == 0 and b3 == 0 and b4 == 0: return "UTF-32LE" elif b1 != 0 and b2 == 0 and b3 != 0 and b4 == 0: return "UTF-16LE" elif b1 != 0 and b2 != 0 and b3 != 0 and b4 != 0: return "UTF-8" else: raise Exceptions.ContentTypeError("Unknown encoding!") elif len(json_bytes) > 2: b1, b2 = json_bytes[0], json_bytes[1] if b1 == 0 and b2 == 0: return "UTF-32BE" elif b1 == 0 and b2 != 0: return "UTF-16BE" elif b1 != 0 and b2 == 0: raise Exceptions.ContentTypeError("Json string too short to definitively infer encoding.") elif b1 != 0 and b2 != 0: return "UTF-8" else: raise Exceptions.ContentTypeError("Unknown encoding!") raise Exceptions.ContentTypeError("Input string too short to guess encoding!")
def parse_cadd(variant, transcripts): """Check if the cadd phred score is annotated""" cadd = 0 cadd_keys = ['CADD', 'CADD_PHRED'] for key in cadd_keys: cadd = variant.INFO.get(key, 0) if cadd: return float(cadd) for transcript in transcripts: cadd_entry = transcript.get('cadd') if (cadd_entry and cadd_entry > cadd): cadd = cadd_entry return cadd
Check if the cadd phred score is annotated
Below is the instruction that describes the task: ### Input: Check if the cadd phred score is annotated ### Response: def parse_cadd(variant, transcripts): """Check if the cadd phred score is annotated""" cadd = 0 cadd_keys = ['CADD', 'CADD_PHRED'] for key in cadd_keys: cadd = variant.INFO.get(key, 0) if cadd: return float(cadd) for transcript in transcripts: cadd_entry = transcript.get('cadd') if (cadd_entry and cadd_entry > cadd): cadd = cadd_entry return cadd
def do_quit(self, arg): """ quit || exit || q Stop and quit the current debugging session """ for name, fh in self._backup: setattr(sys, name, fh) self.console.writeline('*** Aborting program ***\n') self.console.flush() self.console.close() WebPdb.active_instance = None return Pdb.do_quit(self, arg)
quit || exit || q Stop and quit the current debugging session
Below is the instruction that describes the task: ### Input: quit || exit || q Stop and quit the current debugging session ### Response: def do_quit(self, arg): """ quit || exit || q Stop and quit the current debugging session """ for name, fh in self._backup: setattr(sys, name, fh) self.console.writeline('*** Aborting program ***\n') self.console.flush() self.console.close() WebPdb.active_instance = None return Pdb.do_quit(self, arg)
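The restore loop in `do_quit` above (`for name, fh in self._backup: setattr(sys, name, fh)`) undoes the debugger's replacement of the standard streams. Here is the save/swap/restore idiom in isolation; the variable names are illustrative, not taken from WebPdb:

```python
import io
import sys

# Save the real stream, swap in a capture buffer, then restore it --
# the same (name, filehandle) bookkeeping do_quit() walks through.
backup = [("stdout", sys.stdout)]
sys.stdout = io.StringIO()
print("goes into the buffer")
captured = sys.stdout.getvalue()
for name, fh in backup:
    setattr(sys, name, fh)  # put the original stream back
print("restored; buffer held: %r" % captured)
```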
def __fetch_heatmap_data_from_profile(self): """Method to create heatmap data from profile information.""" # Read lines from file. with open(self.pyfile.path, "r") as file_to_read: for line in file_to_read: # Remove return char from the end of the line and add a # space in the beginning for better visibility. self.pyfile.lines.append(" " + line.strip("\n")) # Total number of lines in file. self.pyfile.length = len(self.pyfile.lines) # Fetch line profiles. line_profiles = self.__get_line_profile_data() # Creating an array of data points. As the profile keys are 1 indexed # we should range from 1 to line_count + 1 and not 0 to line_count. arr = [] for line_num in range(1, self.pyfile.length + 1): if line_num in line_profiles: # line_profiles[i] will have multiple entries if line i is # invoked from multiple places in the code. Here we sum over # each invocation to get the total time spent on that line. line_times = [ ltime for _, ltime in line_profiles[line_num].values() ] arr.append([sum(line_times)]) else: arr.append([0.0]) # Create nd-array from list of data points. self.pyfile.data = np.array(arr)
Method to create heatmap data from profile information.
Below is the instruction that describes the task: ### Input: Method to create heatmap data from profile information. ### Response: def __fetch_heatmap_data_from_profile(self): """Method to create heatmap data from profile information.""" # Read lines from file. with open(self.pyfile.path, "r") as file_to_read: for line in file_to_read: # Remove return char from the end of the line and add a # space in the beginning for better visibility. self.pyfile.lines.append(" " + line.strip("\n")) # Total number of lines in file. self.pyfile.length = len(self.pyfile.lines) # Fetch line profiles. line_profiles = self.__get_line_profile_data() # Creating an array of data points. As the profile keys are 1 indexed # we should range from 1 to line_count + 1 and not 0 to line_count. arr = [] for line_num in range(1, self.pyfile.length + 1): if line_num in line_profiles: # line_profiles[i] will have multiple entries if line i is # invoked from multiple places in the code. Here we sum over # each invocation to get the total time spent on that line. line_times = [ ltime for _, ltime in line_profiles[line_num].values() ] arr.append([sum(line_times)]) else: arr.append([0.0]) # Create nd-array from list of data points. self.pyfile.data = np.array(arr)
def get_event_with_balance_proof_by_locksroot( storage: sqlite.SQLiteStorage, canonical_identifier: CanonicalIdentifier, locksroot: Locksroot, recipient: Address, ) -> sqlite.EventRecord: """ Returns the event which contains the corresponding balance proof. Use this function to find a balance proof for a call to unlock, which only happens after settle, so the channel has the unblinded version of the balance proof. """ return storage.get_latest_event_by_data_field({ 'balance_proof.canonical_identifier.chain_identifier': str( canonical_identifier.chain_identifier, ), 'balance_proof.canonical_identifier.token_network_address': to_checksum_address( canonical_identifier.token_network_address, ), 'balance_proof.canonical_identifier.channel_identifier': str( canonical_identifier.channel_identifier, ), 'balance_proof.locksroot': serialize_bytes(locksroot), 'recipient': to_checksum_address(recipient), })
Returns the event which contains the corresponding balance proof. Use this function to find a balance proof for a call to unlock, which only happens after settle, so the channel has the unblinded version of the balance proof.
Below is the instruction that describes the task: ### Input: Returns the event which contains the corresponding balance proof. Use this function to find a balance proof for a call to unlock, which only happens after settle, so the channel has the unblinded version of the balance proof. ### Response: def get_event_with_balance_proof_by_locksroot( storage: sqlite.SQLiteStorage, canonical_identifier: CanonicalIdentifier, locksroot: Locksroot, recipient: Address, ) -> sqlite.EventRecord: """ Returns the event which contains the corresponding balance proof. Use this function to find a balance proof for a call to unlock, which only happens after settle, so the channel has the unblinded version of the balance proof. """ return storage.get_latest_event_by_data_field({ 'balance_proof.canonical_identifier.chain_identifier': str( canonical_identifier.chain_identifier, ), 'balance_proof.canonical_identifier.token_network_address': to_checksum_address( canonical_identifier.token_network_address, ), 'balance_proof.canonical_identifier.channel_identifier': str( canonical_identifier.channel_identifier, ), 'balance_proof.locksroot': serialize_bytes(locksroot), 'recipient': to_checksum_address(recipient), })
def copy(self): """Returns a deep copy of itself.""" net = QueueNetwork(None) net.g = self.g.copy() net.max_agents = copy.deepcopy(self.max_agents) net.nV = copy.deepcopy(self.nV) net.nE = copy.deepcopy(self.nE) net.num_agents = copy.deepcopy(self.num_agents) net.num_events = copy.deepcopy(self.num_events) net._t = copy.deepcopy(self._t) net._initialized = copy.deepcopy(self._initialized) net._prev_edge = copy.deepcopy(self._prev_edge) net._blocking = copy.deepcopy(self._blocking) net.colors = copy.deepcopy(self.colors) net.out_edges = copy.deepcopy(self.out_edges) net.in_edges = copy.deepcopy(self.in_edges) net.edge2queue = copy.deepcopy(self.edge2queue) net._route_probs = copy.deepcopy(self._route_probs) if net._initialized: keys = [q._key() for q in net.edge2queue if q._time < np.infty] net._fancy_heap = PriorityQueue(keys, net.nE) return net
Returns a deep copy of itself.
Below is the instruction that describes the task: ### Input: Returns a deep copy of itself. ### Response: def copy(self): """Returns a deep copy of itself.""" net = QueueNetwork(None) net.g = self.g.copy() net.max_agents = copy.deepcopy(self.max_agents) net.nV = copy.deepcopy(self.nV) net.nE = copy.deepcopy(self.nE) net.num_agents = copy.deepcopy(self.num_agents) net.num_events = copy.deepcopy(self.num_events) net._t = copy.deepcopy(self._t) net._initialized = copy.deepcopy(self._initialized) net._prev_edge = copy.deepcopy(self._prev_edge) net._blocking = copy.deepcopy(self._blocking) net.colors = copy.deepcopy(self.colors) net.out_edges = copy.deepcopy(self.out_edges) net.in_edges = copy.deepcopy(self.in_edges) net.edge2queue = copy.deepcopy(self.edge2queue) net._route_probs = copy.deepcopy(self._route_probs) if net._initialized: keys = [q._key() for q in net.edge2queue if q._time < np.infty] net._fancy_heap = PriorityQueue(keys, net.nE) return net
def normalize_middle_english(text, to_lower=True, alpha_conv=True, punct=True): """ :param text: str text to be normalized :param to_lower: bool convert text to lower text >>> normalize_middle_english('Whan Phebus in the CraBbe had neRe hys cours ronne', to_lower = True) 'whan phebus in the crabbe had nere hys cours ronne' :param alpha_conv: bool convert text to canonical form æ -> ae, þ -> th, ð -> th, ȝ -> y if at beginning, gh otherwise >>> normalize_middle_english('I pray ȝow þat ȝe woll', alpha_conv = True) 'i pray yow that ye woll' :param punct: remove punctuation >>> normalize_middle_english("furst, to begynne:...", punct = True) 'furst to begynne' :return: """ if to_lower: text = text.lower() if alpha_conv: text = text.replace("æ", "ae").replace("þ", "th").replace("ð", "th") text = re.sub(r'(?<!\w)(?=\w)ȝ', 'y', text) text = text.replace("ȝ", "gh") if punct: text = re.sub(r"[\.\";\,\:\[\]\(\)!&?‘]", "", text) return text
:param text: str text to be normalized :param to_lower: bool convert text to lower text >>> normalize_middle_english('Whan Phebus in the CraBbe had neRe hys cours ronne', to_lower = True) 'whan phebus in the crabbe had nere hys cours ronne' :param alpha_conv: bool convert text to canonical form æ -> ae, þ -> th, ð -> th, ȝ -> y if at beginning, gh otherwise >>> normalize_middle_english('I pray ȝow þat ȝe woll', alpha_conv = True) 'i pray yow that ye woll' :param punct: remove punctuation >>> normalize_middle_english("furst, to begynne:...", punct = True) 'furst to begynne' :return:
Below is the instruction that describes the task: ### Input: :param text: str text to be normalized :param to_lower: bool convert text to lower text >>> normalize_middle_english('Whan Phebus in the CraBbe had neRe hys cours ronne', to_lower = True) 'whan phebus in the crabbe had nere hys cours ronne' :param alpha_conv: bool convert text to canonical form æ -> ae, þ -> th, ð -> th, ȝ -> y if at beginning, gh otherwise >>> normalize_middle_english('I pray ȝow þat ȝe woll', alpha_conv = True) 'i pray yow that ye woll' :param punct: remove punctuation >>> normalize_middle_english("furst, to begynne:...", punct = True) 'furst to begynne' :return: ### Response: def normalize_middle_english(text, to_lower=True, alpha_conv=True, punct=True): """ :param text: str text to be normalized :param to_lower: bool convert text to lower text >>> normalize_middle_english('Whan Phebus in the CraBbe had neRe hys cours ronne', to_lower = True) 'whan phebus in the crabbe had nere hys cours ronne' :param alpha_conv: bool convert text to canonical form æ -> ae, þ -> th, ð -> th, ȝ -> y if at beginning, gh otherwise >>> normalize_middle_english('I pray ȝow þat ȝe woll', alpha_conv = True) 'i pray yow that ye woll' :param punct: remove punctuation >>> normalize_middle_english("furst, to begynne:...", punct = True) 'furst to begynne' :return: """ if to_lower: text = text.lower() if alpha_conv: text = text.replace("æ", "ae").replace("þ", "th").replace("ð", "th") text = re.sub(r'(?<!\w)(?=\w)ȝ', 'y', text) text = text.replace("ȝ", "gh") if punct: text = re.sub(r"[\.\";\,\:\[\]\(\)!&?‘]", "", text) return text
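The subtlest step in `normalize_middle_english` above is the yogh (ȝ) rule: 'y' at the start of a word, 'gh' elsewhere. A minimal self-contained sketch of just that rule (the recorded pattern's extra `(?=\w)` lookahead is redundant, since ȝ is itself a word character):

```python
import re

def convert_yogh(text):
    """ȝ -> 'y' word-initially, 'gh' elsewhere (the rule used above)."""
    # A yogh not preceded by a word character starts a word.
    text = re.sub(r'(?<!\w)ȝ', 'y', text)
    # Every remaining yogh is word-internal.
    return text.replace("ȝ", "gh")

print(convert_yogh("ȝonge knyȝt"))  # yonge knyght
print(convert_yogh("i pray ȝow"))   # i pray yow
```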
def adjacency_plot_und(A, coor, tube=False): ''' This function in matlab is a visualization helper which translates an adjacency matrix and an Nx3 matrix of spatial coordinates, and plots a 3D isometric network connecting the undirected unweighted nodes using a specific plotting format. Including the formatted output is not useful at all for bctpy since matplotlib will not be able to plot it in quite the same way. Instead of doing this, I have included code that will plot the adjacency matrix onto nodes at the given spatial coordinates in mayavi. This routine is basically a less featureful version of the 3D brain in cvu, the connectome visualization utility which I also maintain. cvu uses freesurfer surfaces and annotations to get the node coordinates (rather than leaving them up to the user) and has many other interactive visualization features not included here for the sake of brevity. There are other similar visualizations in the ConnectomeViewer and the UCLA multimodal connectivity database. Note that unlike other bctpy functions, this function depends on mayavi. Parameters ---------- A : NxN np.ndarray adjacency matrix coor : Nx3 np.ndarray vector of node coordinates tube : bool plots using cylindrical tubes for higher resolution image. If True, plots cylindrical tube sources. If False, plots line sources. Default value is False. Returns ------- fig : Instance(Scene) handle to a mayavi figure. Notes ----- To display the output interactively, call fig=adjacency_plot_und(A,coor) from mayavi import mlab mlab.show() Note: Thresholding the matrix is strongly recommended. It is recommended that the input matrix have fewer than 5000 total connections in order to achieve reasonable performance and noncluttered visualization. 
''' from mayavi import mlab n = len(A) nr_edges = n * (n - 1) // 2 #starts = np.zeros((nr_edges,3)) #vecs = np.zeros((nr_edges,3)) #adjdat = np.zeros((nr_edges,)) ixes, = np.where(np.triu(np.ones((n, n)), 1).flat) # i=0 # for r2 in xrange(n): # for r1 in xrange(r2): # starts[i,:] = coor[r1,:] # vecs[i,:] = coor[r2,:] - coor[r1,:] # adjdat[i,:] # i+=1 adjdat = A.flat[ixes] A_r = np.tile(coor, (n, 1, 1)) starts = np.reshape(A_r, (n * n, 3))[ixes, :] vecs = np.reshape(A_r - np.transpose(A_r, (1, 0, 2)), (n * n, 3))[ixes, :] # plotting fig = mlab.figure() nodesource = mlab.pipeline.scalar_scatter( coor[:, 0], coor[:, 1], coor[:, 2], figure=fig) nodes = mlab.pipeline.glyph(nodesource, scale_mode='none', scale_factor=3., mode='sphere', figure=fig) nodes.glyph.color_mode = 'color_by_scalar' vectorsrc = mlab.pipeline.vector_scatter( starts[:, 0], starts[:, 1], starts[ :, 2], vecs[:, 0], vecs[:, 1], vecs[:, 2], figure=fig) vectorsrc.mlab_source.dataset.point_data.scalars = adjdat thres = mlab.pipeline.threshold(vectorsrc, low=0.0001, up=np.max(A), figure=fig) vectors = mlab.pipeline.vectors(thres, colormap='YlOrRd', scale_mode='vector', figure=fig) vectors.glyph.glyph.clamping = False vectors.glyph.glyph.color_mode = 'color_by_scalar' vectors.glyph.color_mode = 'color_by_scalar' vectors.glyph.glyph_source.glyph_position = 'head' vectors.actor.property.opacity = .7 if tube: vectors.glyph.glyph_source.glyph_source = (vectors.glyph.glyph_source. glyph_dict['cylinder_source']) vectors.glyph.glyph_source.glyph_source.radius = 0.015 else: vectors.glyph.glyph_source.glyph_source.glyph_type = 'dash' return fig
This function in matlab is a visualization helper which translates an adjacency matrix and an Nx3 matrix of spatial coordinates, and plots a 3D isometric network connecting the undirected unweighted nodes using a specific plotting format. Including the formatted output is not useful at all for bctpy since matplotlib will not be able to plot it in quite the same way. Instead of doing this, I have included code that will plot the adjacency matrix onto nodes at the given spatial coordinates in mayavi. This routine is basically a less featureful version of the 3D brain in cvu, the connectome visualization utility which I also maintain. cvu uses freesurfer surfaces and annotations to get the node coordinates (rather than leaving them up to the user) and has many other interactive visualization features not included here for the sake of brevity. There are other similar visualizations in the ConnectomeViewer and the UCLA multimodal connectivity database. Note that unlike other bctpy functions, this function depends on mayavi. Parameters ---------- A : NxN np.ndarray adjacency matrix coor : Nx3 np.ndarray vector of node coordinates tube : bool plots using cylindrical tubes for higher resolution image. If True, plots cylindrical tube sources. If False, plots line sources. Default value is False. Returns ------- fig : Instance(Scene) handle to a mayavi figure. Notes ----- To display the output interactively, call fig=adjacency_plot_und(A,coor) from mayavi import mlab mlab.show() Note: Thresholding the matrix is strongly recommended. It is recommended that the input matrix have fewer than 5000 total connections in order to achieve reasonable performance and noncluttered visualization.
Below is the instruction that describes the task: ### Input: This function in matlab is a visualization helper which translates an adjacency matrix and an Nx3 matrix of spatial coordinates, and plots a 3D isometric network connecting the undirected unweighted nodes using a specific plotting format. Including the formatted output is not useful at all for bctpy since matplotlib will not be able to plot it in quite the same way. Instead of doing this, I have included code that will plot the adjacency matrix onto nodes at the given spatial coordinates in mayavi. This routine is basically a less featureful version of the 3D brain in cvu, the connectome visualization utility which I also maintain. cvu uses freesurfer surfaces and annotations to get the node coordinates (rather than leaving them up to the user) and has many other interactive visualization features not included here for the sake of brevity. There are other similar visualizations in the ConnectomeViewer and the UCLA multimodal connectivity database. Note that unlike other bctpy functions, this function depends on mayavi. Parameters ---------- A : NxN np.ndarray adjacency matrix coor : Nx3 np.ndarray vector of node coordinates tube : bool plots using cylindrical tubes for higher resolution image. If True, plots cylindrical tube sources. If False, plots line sources. Default value is False. Returns ------- fig : Instance(Scene) handle to a mayavi figure. Notes ----- To display the output interactively, call fig=adjacency_plot_und(A,coor) from mayavi import mlab mlab.show() Note: Thresholding the matrix is strongly recommended. It is recommended that the input matrix have fewer than 5000 total connections in order to achieve reasonable performance and noncluttered visualization. 
### Response: def adjacency_plot_und(A, coor, tube=False): ''' This function in matlab is a visualization helper which translates an adjacency matrix and an Nx3 matrix of spatial coordinates, and plots a 3D isometric network connecting the undirected unweighted nodes using a specific plotting format. Including the formatted output is not useful at all for bctpy since matplotlib will not be able to plot it in quite the same way. Instead of doing this, I have included code that will plot the adjacency matrix onto nodes at the given spatial coordinates in mayavi. This routine is basically a less featureful version of the 3D brain in cvu, the connectome visualization utility which I also maintain. cvu uses freesurfer surfaces and annotations to get the node coordinates (rather than leaving them up to the user) and has many other interactive visualization features not included here for the sake of brevity. There are other similar visualizations in the ConnectomeViewer and the UCLA multimodal connectivity database. Note that unlike other bctpy functions, this function depends on mayavi. Parameters ---------- A : NxN np.ndarray adjacency matrix coor : Nx3 np.ndarray vector of node coordinates tube : bool plots using cylindrical tubes for higher resolution image. If True, plots cylindrical tube sources. If False, plots line sources. Default value is False. Returns ------- fig : Instance(Scene) handle to a mayavi figure. Notes ----- To display the output interactively, call fig=adjacency_plot_und(A,coor) from mayavi import mlab mlab.show() Note: Thresholding the matrix is strongly recommended. It is recommended that the input matrix have fewer than 5000 total connections in order to achieve reasonable performance and noncluttered visualization. 
''' from mayavi import mlab n = len(A) nr_edges = n * (n - 1) // 2 #starts = np.zeros((nr_edges,3)) #vecs = np.zeros((nr_edges,3)) #adjdat = np.zeros((nr_edges,)) ixes, = np.where(np.triu(np.ones((n, n)), 1).flat) # i=0 # for r2 in xrange(n): # for r1 in xrange(r2): # starts[i,:] = coor[r1,:] # vecs[i,:] = coor[r2,:] - coor[r1,:] # adjdat[i,:] # i+=1 adjdat = A.flat[ixes] A_r = np.tile(coor, (n, 1, 1)) starts = np.reshape(A_r, (n * n, 3))[ixes, :] vecs = np.reshape(A_r - np.transpose(A_r, (1, 0, 2)), (n * n, 3))[ixes, :] # plotting fig = mlab.figure() nodesource = mlab.pipeline.scalar_scatter( coor[:, 0], coor[:, 1], coor[:, 2], figure=fig) nodes = mlab.pipeline.glyph(nodesource, scale_mode='none', scale_factor=3., mode='sphere', figure=fig) nodes.glyph.color_mode = 'color_by_scalar' vectorsrc = mlab.pipeline.vector_scatter( starts[:, 0], starts[:, 1], starts[ :, 2], vecs[:, 0], vecs[:, 1], vecs[:, 2], figure=fig) vectorsrc.mlab_source.dataset.point_data.scalars = adjdat thres = mlab.pipeline.threshold(vectorsrc, low=0.0001, up=np.max(A), figure=fig) vectors = mlab.pipeline.vectors(thres, colormap='YlOrRd', scale_mode='vector', figure=fig) vectors.glyph.glyph.clamping = False vectors.glyph.glyph.color_mode = 'color_by_scalar' vectors.glyph.color_mode = 'color_by_scalar' vectors.glyph.glyph_source.glyph_position = 'head' vectors.actor.property.opacity = .7 if tube: vectors.glyph.glyph_source.glyph_source = (vectors.glyph.glyph_source. glyph_dict['cylinder_source']) vectors.glyph.glyph_source.glyph_source.radius = 0.015 else: vectors.glyph.glyph_source.glyph_source.glyph_type = 'dash' return fig
def renew_token(self): """Convenience method: renew Vault token""" url = _url_joiner(self._vault_url, 'v1/auth/token/renew-self') resp = requests.get(url, headers=self._headers) resp.raise_for_status() data = resp.json() if data.get('errors'): raise VaultException(u'Error renewing Vault token: {}'.format(data['errors'])) return data
Convenience method: renew Vault token
Below is the instruction that describes the task: ### Input: Convenience method: renew Vault token ### Response: def renew_token(self): """Convenience method: renew Vault token""" url = _url_joiner(self._vault_url, 'v1/auth/token/renew-self') resp = requests.get(url, headers=self._headers) resp.raise_for_status() data = resp.json() if data.get('errors'): raise VaultException(u'Error renewing Vault token: {}'.format(data['errors'])) return data
def display_unit(self): """ Display unit of value. :type: ``str`` """ if self._display_unit: return self._display_unit elif self._Q: config = Configuration.display.unit_systems default_system = Configuration.unit_system units = config.systems[default_system] self._display_unit = units.get(self._type, self._unit) if self._type == "temperature": from_unit = "deg" + self._unit.upper() to_unit = "deg" + self._display_unit.upper() else: from_unit = self._unit to_unit = self._display_unit #print("dv", from_unit, to_unit) self._q_unit = self._Q("1 " + from_unit) self._q_display = self._Q("1 " + to_unit) return self._display_unit
Display unit of value. :type: ``str``
Below is the instruction that describes the task: ### Input: Display unit of value. :type: ``str`` ### Response: def display_unit(self): """ Display unit of value. :type: ``str`` """ if self._display_unit: return self._display_unit elif self._Q: config = Configuration.display.unit_systems default_system = Configuration.unit_system units = config.systems[default_system] self._display_unit = units.get(self._type, self._unit) if self._type == "temperature": from_unit = "deg" + self._unit.upper() to_unit = "deg" + self._display_unit.upper() else: from_unit = self._unit to_unit = self._display_unit #print("dv", from_unit, to_unit) self._q_unit = self._Q("1 " + from_unit) self._q_display = self._Q("1 " + to_unit) return self._display_unit
def find(self, start, end): """find all elements between (or overlapping) start and end""" if self.intervals and not end < self.intervals[0].start: overlapping = [i for i in self.intervals if i.end >= start and i.start <= end] else: overlapping = [] if self.left and start <= self.center: overlapping += self.left.find(start, end) if self.right and end >= self.center: overlapping += self.right.find(start, end) return overlapping
find all elements between (or overlapping) start and end
Below is the instruction that describes the task: ### Input: find all elements between (or overlapping) start and end ### Response: def find(self, start, end): """find all elements between (or overlapping) start and end""" if self.intervals and not end < self.intervals[0].start: overlapping = [i for i in self.intervals if i.end >= start and i.start <= end] else: overlapping = [] if self.left and start <= self.center: overlapping += self.left.find(start, end) if self.right and end >= self.center: overlapping += self.right.find(start, end) return overlapping
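The `find` method above recurses over an interval-tree node whose `intervals`, `center`, `left`, and `right` attributes are defined elsewhere in the class. Its overlap predicate, that an interval touches `[start, end]` exactly when each starts before the other ends, can be illustrated on its own:

```python
from collections import namedtuple

Interval = namedtuple("Interval", "start end")

def overlaps(iv, start, end):
    """Closed-interval overlap test used by find(): iv touches [start, end]."""
    return iv.end >= start and iv.start <= end

ivs = [Interval(1, 4), Interval(3, 8), Interval(10, 12)]
hits = [iv for iv in ivs if overlaps(iv, 5, 11)]
print(hits)  # [Interval(start=3, end=8), Interval(start=10, end=12)]
```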
def url_has_contents(url, contents, case_sensitive=False, timeout=10): ''' Check whether the HTML page contains the content or not and return boolean ''' try: req = urllib2.urlopen(url, timeout=timeout) except Exception, _: return False else: rep = req.read() if (not case_sensitive and rep.lower().find(contents.lower()) >= 0) or (case_sensitive and rep.find(contents) >= 0): return True else: return False
Check whether the HTML page contains the content or not and return boolean
Below is the instruction that describes the task: ### Input: Check whether the HTML page contains the content or not and return boolean ### Response: def url_has_contents(url, contents, case_sensitive=False, timeout=10): ''' Check whether the HTML page contains the content or not and return boolean ''' try: req = urllib2.urlopen(url, timeout=timeout) except Exception, _: return False else: rep = req.read() if (not case_sensitive and rep.lower().find(contents.lower()) >= 0) or (case_sensitive and rep.find(contents) >= 0): return True else: return False
def get_all_offers(self, params=None): """ Get all offers This will iterate over all pages until it gets all elements. So if the rate limit is exceeded it will throw an Exception and you will get nothing :param params: search params :return: list """ if not params: params = {} return self._iterate_through_pages(self.get_offers_per_page, resource=OFFERS, **{'params': params})
Get all offers This will iterate over all pages until it gets all elements. So if the rate limit is exceeded it will throw an Exception and you will get nothing :param params: search params :return: list
Below is the instruction that describes the task: ### Input: Get all offers This will iterate over all pages until it gets all elements. So if the rate limit is exceeded it will throw an Exception and you will get nothing :param params: search params :return: list ### Response: def get_all_offers(self, params=None): """ Get all offers This will iterate over all pages until it gets all elements. So if the rate limit is exceeded it will throw an Exception and you will get nothing :param params: search params :return: list """ if not params: params = {} return self._iterate_through_pages(self.get_offers_per_page, resource=OFFERS, **{'params': params})
def analyze(self, scratch, **kwargs): """Run and return the results from the DeadCode plugin. The variable_event indicates that the Scratch file contains at least one instance of a broadcast event based on a variable. When variable_event is True, dead code scripts reported by this plugin that begin with a "when I receive" block may not actually indicate dead code. """ self.total_instances += 1 sprites = {} for sprite, script in self.iter_sprite_scripts(scratch): if not script.reachable: sprites.setdefault(sprite, []).append(script) if sprites: self.dead_code_instances += 1 import pprint pprint.pprint(sprites) variable_event = any(True in self.get_broadcast_events(x) for x in self.iter_scripts(scratch)) return {'dead_code': {'sprites': sprites, 'variable_event': variable_event}}
Run and return the results from the DeadCode plugin. The variable_event indicates that the Scratch file contains at least one instance of a broadcast event based on a variable. When variable_event is True, dead code scripts reported by this plugin that begin with a "when I receive" block may not actually indicate dead code.
Below is the instruction that describes the task: ### Input: Run and return the results from the DeadCode plugin. The variable_event indicates that the Scratch file contains at least one instance of a broadcast event based on a variable. When variable_event is True, dead code scripts reported by this plugin that begin with a "when I receive" block may not actually indicate dead code. ### Response: def analyze(self, scratch, **kwargs): """Run and return the results from the DeadCode plugin. The variable_event indicates that the Scratch file contains at least one instance of a broadcast event based on a variable. When variable_event is True, dead code scripts reported by this plugin that begin with a "when I receive" block may not actually indicate dead code. """ self.total_instances += 1 sprites = {} for sprite, script in self.iter_sprite_scripts(scratch): if not script.reachable: sprites.setdefault(sprite, []).append(script) if sprites: self.dead_code_instances += 1 import pprint pprint.pprint(sprites) variable_event = any(True in self.get_broadcast_events(x) for x in self.iter_scripts(scratch)) return {'dead_code': {'sprites': sprites, 'variable_event': variable_event}}
def _fromStorage(self, value): ''' _fromStorage - Convert the value from storage (string) to the value type. @return - The converted value, or "irNull" if no value was defined (and field type is not default/string) ''' for chainedField in reversed(self.chainedFields): value = chainedField._fromStorage(value) return value
_fromStorage - Convert the value from storage (string) to the value type. @return - The converted value, or "irNull" if no value was defined (and field type is not default/string)
Below is the instruction that describes the task: ### Input: _fromStorage - Convert the value from storage (string) to the value type. @return - The converted value, or "irNull" if no value was defined (and field type is not default/string) ### Response: def _fromStorage(self, value): ''' _fromStorage - Convert the value from storage (string) to the value type. @return - The converted value, or "irNull" if no value was defined (and field type is not default/string) ''' for chainedField in reversed(self.chainedFields): value = chainedField._fromStorage(value) return value
def intersects(self, key, include_self=False, exclusive=False, biggest_first=True, only=None): """Get all locations that intersect this location. Note that sorting is done first by number of faces intersecting ``key``; the total number of faces in the intersected region is only used to break sorting ties. If the ``resolved_row`` context manager is not used, ``RoW`` doesn't have a spatial definition, and therefore nothing intersects it. ``.intersects("RoW")`` returns a list with ``RoW`` or nothing. """ possibles = self.topology if only is None else {k: self[k] for k in only} if key == 'RoW' and 'RoW' not in self: return ['RoW'] if 'RoW' in possibles else [] faces = self[key] lst = [ (k, (len(v.intersection(faces)), len(v))) for k, v in possibles.items() if (faces.intersection(v)) ] return self._finish_filter(lst, key, include_self, exclusive, biggest_first)
Get all locations that intersect this location. Note that sorting is done first by number of faces intersecting ``key``; the total number of faces in the intersected region is only used to break sorting ties. If the ``resolved_row`` context manager is not used, ``RoW`` doesn't have a spatial definition, and therefore nothing intersects it. ``.intersects("RoW")`` returns a list with ``RoW`` or nothing.
Below is the instruction that describes the task: ### Input: Get all locations that intersect this location. Note that sorting is done first by number of faces intersecting ``key``; the total number of faces in the intersected region is only used to break sorting ties. If the ``resolved_row`` context manager is not used, ``RoW`` doesn't have a spatial definition, and therefore nothing intersects it. ``.intersects("RoW")`` returns a list with ``RoW`` or nothing. ### Response: def intersects(self, key, include_self=False, exclusive=False, biggest_first=True, only=None): """Get all locations that intersect this location. Note that sorting is done first by number of faces intersecting ``key``; the total number of faces in the intersected region is only used to break sorting ties. If the ``resolved_row`` context manager is not used, ``RoW`` doesn't have a spatial definition, and therefore nothing intersects it. ``.intersects("RoW")`` returns a list with ``RoW`` or nothing. """ possibles = self.topology if only is None else {k: self[k] for k in only} if key == 'RoW' and 'RoW' not in self: return ['RoW'] if 'RoW' in possibles else [] faces = self[key] lst = [ (k, (len(v.intersection(faces)), len(v))) for k, v in possibles.items() if (faces.intersection(v)) ] return self._finish_filter(lst, key, include_self, exclusive, biggest_first)