Dataset schema (string-length range per column):

    code       75 – 104k
    docstring  1 – 46.9k
    text       164 – 112k   (the docstring and code of each row joined into an
                            "### Input: … ### Response: …" instruction prompt)
def iget(self, irods_path, attempts=1, pause=15):
    """Add an iget command to retrieve a file from iRODS.

    Parameters
    ----------
    irods_path: str
        Filepath which should be fetched using iget
    attempts: int (default: 1)
        Number of retries if iRODS access fails
    pause: int (default: 15)
        Pause between two access attempts in seconds
    """
    if attempts > 1:
        cmd = """
        for i in {{1..{0}}}; do
            ret=$(iget -v {1} 2>&1)
            echo $ret
            if [[ $ret == *"ERROR"* ]]; then
                echo "Attempt $i failed"
            else
                break
            fi
            sleep {2}s
        done
        """
        cmd = cmd.lstrip()  # original called an undefined bare lstrip(cmd)
        cmd = cmd.format(attempts, irods_path, pause)
        self.add(cmd)
    else:
        self.add('iget -v "{}"'.format(irods_path))
Add an iget command to retrieve a file from iRODS. Parameters ---------- irods_path: str Filepath which should be fetched using iget attempts: int (default: 1) Number of retries, if iRODS access fails pause: int (default: 15) Pause between two access attempts in seconds
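The retry command built by `iget` above is a plain `str.format` template, with `{{…}}` escaping the shell brace expansion so only the three placeholders are substituted. A minimal standalone sketch (the `build_iget_command` helper is hypothetical, introduced here only to exercise the template):

```python
# Sketch of the retry-template expansion used by iget() above.
RETRY_TEMPLATE = """\
for i in {{1..{0}}}; do
    ret=$(iget -v {1} 2>&1)
    echo $ret
    if [[ $ret == *"ERROR"* ]]; then
        echo "Attempt $i failed"
    else
        break
    fi
    sleep {2}s
done"""


def build_iget_command(irods_path, attempts=1, pause=15):
    """Return the shell command iget() would emit for these arguments."""
    if attempts > 1:
        return RETRY_TEMPLATE.format(attempts, irods_path, pause)
    return 'iget -v "{}"'.format(irods_path)
```

With `attempts=3, pause=5` this yields a loop over `{1..3}` that sleeps 5 s between failed attempts; with the default `attempts=1` it falls back to the single quoted command.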
def green(self, value):
    """gets/sets the green value"""
    if value != self._green and isinstance(value, int):
        self._green = value
gets/sets the green value
def add_phase(
        self, name, done, score, summary, steps,
        report_every=None, log_every=None, checkpoint_every=None, feed=None):
    """Add a phase to the loop protocol.

    If the model breaks long computation into multiple steps, the done
    tensor indicates whether the current score should be added to the
    mean counter. For example, in reinforcement learning we only have a
    valid score at the end of the episode.

    Score and done tensors can either be scalars or vectors, to support
    single and batched computations.

    Args:
      name: Name for the phase, used for the summary writer.
      done: Tensor indicating whether current score can be used.
      score: Tensor holding the current, possibly intermediate, score.
      summary: Tensor holding summary string to write if not an empty string.
      steps: Duration of the phase in steps.
      report_every: Yield mean score every this number of steps.
      log_every: Request summaries via `log` tensor every this number of steps.
      checkpoint_every: Write checkpoint every this number of steps.
      feed: Additional feed dictionary for the session run call.

    Raises:
      ValueError: Unknown rank for done or score tensors.
    """
    done = tf.convert_to_tensor(done, tf.bool)
    score = tf.convert_to_tensor(score, tf.float32)
    summary = tf.convert_to_tensor(summary, tf.string)
    feed = feed or {}
    if done.shape.ndims is None or score.shape.ndims is None:
        raise ValueError("Rank of 'done' and 'score' tensors must be known.")
    writer = self._logdir and tf.summary.FileWriter(
        os.path.join(self._logdir, name), tf.get_default_graph(),
        flush_secs=60)
    op = self._define_step(done, score, summary)
    batch = 1 if score.shape.ndims == 0 else score.shape[0].value
    self._phases.append(_Phase(
        name, writer, op, batch, int(steps), feed, report_every,
        log_every, checkpoint_every))
Add a phase to the loop protocol. If the model breaks long computation into multiple steps, the done tensor indicates whether the current score should be added to the mean counter. For example, in reinforcement learning we only have a valid score at the end of the episode. Score and done tensors can either be scalars or vectors, to support single and batched computations. Args: name: Name for the phase, used for the summary writer. done: Tensor indicating whether current score can be used. score: Tensor holding the current, possibly intermediate, score. summary: Tensor holding summary string to write if not an empty string. steps: Duration of the phase in steps. report_every: Yield mean score every this number of steps. log_every: Request summaries via `log` tensor every this number of steps. checkpoint_every: Write checkpoint every this number of steps. feed: Additional feed dictionary for the session run call. Raises: ValueError: Unknown rank for done or score tensors.
def density(args):
    """
    %prog density test.clm

    Estimate link density of contigs.
    """
    p = OptionParser(density.__doc__)
    p.add_option("--save", default=False, action="store_true",
                 help="Write log densities of contigs to file")
    p.set_cpus()
    opts, args = p.parse_args(args)

    if len(args) != 1:
        sys.exit(not p.print_help())

    clmfile, = args
    clm = CLMFile(clmfile)
    pf = clmfile.rsplit(".", 1)[0]

    if opts.save:
        logdensities = clm.calculate_densities()
        densityfile = pf + ".density"
        fw = open(densityfile, "w")
        for name, logd in logdensities.items():
            s = clm.tig_to_size[name]
            print("\t".join(str(x) for x in (name, s, logd)), file=fw)
        fw.close()
        logging.debug("Density written to `{}`".format(densityfile))

    tourfile = pf + ".tour"
    tour = clm.activate(tourfile=tourfile, backuptour=False)
    clm.flip_all(tour)
    clm.flip_whole(tour)
    clm.flip_one(tour)
%prog density test.clm Estimate link density of contigs.
def backup(path, name=None):
    """Start a Backup run"""
    from PyHardLinkBackup.phlb.phlb_main import backup
    backup(path, name)
Start a Backup run
def _main():
    """ Command line interface for testing. """
    if len(sys.argv) != 2:
        print("Usage: python -m auvyon.imaging.spectrograms <mediafile>")
    else:
        try:
            print("Created %s" % spectrogram_image(sys.argv[1], dpi=103,
                                                   outfile="spectrogram.jpg"))
        except subprocess.CalledProcessError as exc:  # was Py2-only comma syntax
            print("Conversion error: %s" % exc)
Command line interface for testing.
def authenticators(self, auths):
    """
    A decorator that sets a list of authenticators for a function.

    :param auths: The list of authenticator instances or classes.
    :return: A function
    """
    if not isinstance(auths, (list, tuple)):
        auths = [auths]

    instances = []
    for auth in auths:
        if isclass(auth):
            instances.append(auth())
        else:
            instances.append(auth)

    def decorator(func):
        func.authenticators = instances
        return func

    return decorator
A decorator that sets a list of authenticators for a function. :param auths: The list of authenticator instances or classes. :return: A function
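The decorator row above can be exercised standalone. A minimal sketch with the `self` parameter dropped and a hypothetical `TokenAuth` authenticator class (not part of the original source):

```python
from inspect import isclass


def authenticators(auths):
    """Standalone version of the decorator above: attach authenticator
    instances (instantiating any classes) to the decorated function."""
    if not isinstance(auths, (list, tuple)):
        auths = [auths]
    instances = [a() if isclass(a) else a for a in auths]

    def decorator(func):
        func.authenticators = instances
        return func

    return decorator


class TokenAuth:
    """Hypothetical authenticator used only for this sketch."""


@authenticators([TokenAuth, TokenAuth()])  # mix of a class and an instance
def handler():
    return "ok"
```

After decoration, `handler.authenticators` holds two `TokenAuth` instances: the class was instantiated, the instance passed through unchanged.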
def get_docker_secret(name, default=None, cast_to=str, autocast_name=True,
                      getenv=True, safe=True,
                      secrets_dir=os.path.join(root, 'var', 'run', 'secrets')):
    """This function fetches a docker secret

    :param name: the name of the docker secret
    :param default: the default value if no secret found
    :param cast_to: casts the value to the given type
    :param autocast_name: whether the name should be lowercase for secrets
        and upper case for environment variables
    :param getenv: if environment variable should be fetched as fallback
    :param safe: whether the function should raise exceptions
    :param secrets_dir: the directory where the secrets are stored
    :returns: docker secret or environment variable depending on params
    :raises TypeError: if cast fails due to wrong type (None)
    :raises ValueError: if cast fails due to value
    """
    # cast name if autocast enabled
    name_secret = name.lower() if autocast_name else name
    name_env = name.upper() if autocast_name else name

    # initialize value
    value = None

    # try to read from secret file
    try:
        with open(os.path.join(secrets_dir, name_secret), 'r') as secret_file:
            value = secret_file.read()
    except IOError:
        # try to read from env if enabled
        if getenv:
            value = os.environ.get(name_env)

    # set default value if no value found
    if value is None:
        value = default

    # try to cast
    try:
        # so None won't be cast to 'None'
        if value is None:
            raise TypeError('value is None')

        # special case bool
        if cast_to == bool:
            if value not in ('True', 'true', 'False', 'false'):
                raise ValueError('value %s not of type bool' % value)
            value = 1 if value in ('True', 'true') else 0

        # try to cast
        return cast_to(value)
    except (TypeError, ValueError) as e:
        # whether exception should be thrown
        if safe:
            return default
        raise e
This function fetches a docker secret :param name: the name of the docker secret :param default: the default value if no secret found :param cast_to: casts the value to the given type :param autocast_name: whether the name should be lowercase for secrets and upper case for environment :param getenv: if environment variable should be fetched as fallback :param safe: Whether the function should raise exceptions :param secrets_dir: the directory where the secrets are stored :returns: docker secret or environment variable depending on params :raises TypeError: if cast fails due to wrong type (None) :raises ValueError: if casts fails due to Value
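The casting rules at the end of `get_docker_secret` are worth isolating: without the special case, `bool('False')` would be truthy, since any non-empty string is. A sketch of just that logic (`cast_secret` is a hypothetical helper extracted for illustration):

```python
def cast_secret(value, cast_to=str):
    """Sketch of the casting rules in get_docker_secret() above:
    None is rejected, and bool accepts only the four literal strings."""
    if value is None:
        raise TypeError('value is None')
    if cast_to == bool:
        if value not in ('True', 'true', 'False', 'false'):
            raise ValueError('value %s not of type bool' % value)
        # map to 0/1 first so cast_to(value) gives a real bool
        value = 1 if value in ('True', 'true') else 0
    return cast_to(value)
```

`cast_secret("False", bool)` therefore returns `False`, while `cast_secret("yes", bool)` raises `ValueError` (which the full function converts to `default` when `safe=True`).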
def reload(self):
    """ Automatically reloads the config file. This is just an alias for self.load()."""
    if not self.fd.closed:
        self.fd.close()
    self.fd = open(self.fd.name, 'r')
    self.load()
Automatically reloads the config file. This is just an alias for self.load().
def formatException(self, exc_info, record=None):
    """Format exception output with CONF.logging_exception_prefix."""
    if not record:
        return logging.Formatter.formatException(self, exc_info)

    stringbuffer = cStringIO.StringIO()
    traceback.print_exception(exc_info[0], exc_info[1], exc_info[2],
                              None, stringbuffer)
    lines = stringbuffer.getvalue().split('\n')
    stringbuffer.close()

    if CONF.logging_exception_prefix.find('%(asctime)') != -1:
        record.asctime = self.formatTime(record, self.datefmt)

    formatted_lines = []
    for line in lines:
        pl = CONF.logging_exception_prefix % record.__dict__
        fl = '%s%s' % (pl, line)
        formatted_lines.append(fl)
    return '\n'.join(formatted_lines)
Format exception output with CONF.logging_exception_prefix.
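The prefixing done by `formatException` can be sketched without the `CONF`/`record` machinery, using a plain prefix string. Both helpers below (`prefix_exception`, `demo`) are hypothetical, introduced only to show the per-line prefix on a rendered traceback:

```python
import io
import sys
import traceback


def prefix_exception(exc_info, prefix):
    """Render a traceback and prepend `prefix` to every line, mirroring
    the loop in formatException() above."""
    buf = io.StringIO()
    traceback.print_exception(exc_info[0], exc_info[1], exc_info[2], None, buf)
    lines = buf.getvalue().split('\n')
    buf.close()
    return '\n'.join(prefix + line for line in lines)


def demo():
    """Trigger an exception and format it with a fixed prefix."""
    try:
        1 / 0
    except ZeroDivisionError:
        return prefix_exception(sys.exc_info(), 'ERR: ')
```

Every line of the resulting string, including the final `ZeroDivisionError` line, starts with the prefix.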
def _linkToParent(self, feature, parentName):
    """
    Link a feature with its children
    """
    parentParts = self.byFeatureName.get(parentName)
    if parentParts is None:
        raise GFF3Exception(
            "Parent feature does not exist: {}".format(parentName),
            self.fileName)
    # parent may be disjoint
    for parentPart in parentParts:
        feature.parents.add(parentPart)
        parentPart.children.add(feature)
Link a feature with its children
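The bidirectional linking in `_linkToParent` can be shown with a minimal stand-in for the GFF3 feature objects. `Feature` and `link_to_parent` below are hypothetical reductions, keeping only the parts the row exercises: a parent split into disjoint parts, and the link recorded on both sides:

```python
class Feature:
    """Hypothetical minimal stand-in for a GFF3 feature object."""

    def __init__(self, name):
        self.name = name
        self.parents = set()
        self.children = set()


def link_to_parent(feature, parent_parts):
    """Sketch of _linkToParent(): a child may attach to several disjoint
    parent parts, and each link is recorded on both objects."""
    for part in parent_parts:
        feature.parents.add(part)
        part.children.add(feature)
```

A gene stored as two disjoint parts ends up with the exon in both parts' `children`, and the exon sees both parts as `parents`.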
def evaluate(self, n, features, stack_float, stack_bool, labels=None):
    """evaluate node in program"""
    np.seterr(all='ignore')
    if len(stack_float) >= n.arity['f'] and len(stack_bool) >= n.arity['b']:
        if n.out_type == 'f':
            stack_float.append(
                self.safe(self.eval_dict[n.name](n, features, stack_float,
                                                 stack_bool, labels)))
            if (np.isnan(stack_float[-1]).any() or
                    np.isinf(stack_float[-1]).any()):
                print("problem operator:", n)
        else:
            stack_bool.append(
                self.safe(self.eval_dict[n.name](n, features, stack_float,
                                                 stack_bool, labels)))
            if np.isnan(stack_bool[-1]).any() or np.isinf(stack_bool[-1]).any():
                print("problem operator:", n)
evaluate node in program
def _get_fields(ast):
    """Return a list of vertex fields, and a list of property fields, for the given AST node.

    Also verifies that all property fields for the AST node appear before all
    vertex fields, raising GraphQLCompilationError if that is not the case.

    Args:
        ast: GraphQL AST node, obtained from the graphql library

    Returns:
        tuple of two lists
            - the first list contains ASTs for vertex fields
            - the second list contains ASTs for property fields
    """
    if not ast.selection_set:
        # There are no child fields.
        return [], []

    property_fields = []
    vertex_fields = []
    seen_field_names = set()
    # Ensures that all property fields are before all vertex fields.
    switched_to_vertices = False
    for field_ast in ast.selection_set.selections:
        if not isinstance(field_ast, Field):
            # We are getting Fields only, ignore everything else.
            continue

        name = get_ast_field_name(field_ast)
        if name in seen_field_names:
            # If we ever allow repeated field names,
            # then we have to change the Location naming scheme to reflect the repetitions
            # and disambiguate between Recurse and Traverse visits to a Location.
            raise GraphQLCompilationError(u'Encountered repeated field name: {}'.format(name))
        seen_field_names.add(name)

        # Vertex fields start with 'out_' or 'in_', denoting the edge direction to that vertex.
        if is_vertex_field_name(name):
            switched_to_vertices = True
            vertex_fields.append(field_ast)
        else:
            if switched_to_vertices:
                raise GraphQLCompilationError(u'Encountered property field {} '
                                              u'after vertex fields!'.format(name))
            property_fields.append(field_ast)

    return vertex_fields, property_fields
Return a list of vertex fields, and a list of property fields, for the given AST node. Also verifies that all property fields for the AST node appear before all vertex fields, raising GraphQLCompilationError if that is not the case. Args: ast: GraphQL AST node, obtained from the graphql library Returns: tuple of two lists - the first list contains ASTs for vertex fields - the second list contains ASTs for property fields
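The ordering rule in `_get_fields` can be demonstrated on plain field names, replacing the AST objects with strings and `is_vertex_field_name` with the `out_`/`in_` prefix test the code's own comment describes. `split_fields` is a hypothetical reduction, not the library function:

```python
def split_fields(names):
    """Sketch of the rule enforced by _get_fields(): all property fields
    must precede all vertex fields ('out_'/'in_' prefix), and duplicate
    names are rejected. Returns (vertex_names, property_names)."""
    vertex, props, seen = [], [], set()
    switched_to_vertices = False
    for name in names:
        if name in seen:
            raise ValueError('repeated field name: {}'.format(name))
        seen.add(name)
        if name.startswith(('out_', 'in_')):
            switched_to_vertices = True
            vertex.append(name)
        elif switched_to_vertices:
            raise ValueError(
                'property field {} after vertex fields'.format(name))
        else:
            props.append(name)
    return vertex, props
```

A valid selection like `['name', 'age', 'out_knows']` splits cleanly, while a property field after a vertex field, or any repeated name, raises.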
def update(self, **kwargs):
    """Customize the lazy field"""
    assert not self.called
    self.kw.update(kwargs)
    return self
Customize the lazy field
def drop_function(self, dbName, funcName):
    """
    Parameters:
     - dbName
     - funcName
    """
    self.send_drop_function(dbName, funcName)
    self.recv_drop_function()
Parameters: - dbName - funcName
def get_ve_base(self, environ):
    """Find a directory to look for virtualenvs in.
    """
    # set ve_base to a path we can look for virtualenvs:
    # 1. .vexrc
    # 2. WORKON_HOME (as defined for virtualenvwrapper's benefit)
    # 3. $HOME/.virtualenvs
    # (unless we got --path, then we don't need it)
    ve_base_value = self.headings[self.default_heading].get('virtualenvs')
    if ve_base_value:
        ve_base = os.path.expanduser(ve_base_value)
    else:
        ve_base = environ.get('WORKON_HOME', '')
    if not ve_base:
        # On Cygwin os.name == 'posix' and we want $HOME.
        if platform.system() == 'Windows' and os.name == 'nt':
            _win_drive = environ.get('HOMEDRIVE')
            home = environ.get('HOMEPATH', '')
            if home:
                home = os.path.join(_win_drive, home)
        else:
            home = environ.get('HOME', '')
        if not home:
            home = os.path.expanduser('~')
        if not home:
            return ''
        ve_base = os.path.join(home, '.virtualenvs')
    # pass through invalid paths so messages can be generated
    # if not os.path.exists(ve_base) or os.path.isfile(ve_base):
    #     return ''
    return ve_base or ''
Find a directory to look for virtualenvs in.
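The three-step fallback in `get_ve_base` can be sketched without the `.vexrc` headings and Windows branches. `resolve_ve_base` is a hypothetical reduction that takes the config value as an explicit argument and a plain dict for the environment:

```python
import os


def resolve_ve_base(environ, vexrc_value=None):
    """Sketch of get_ve_base()'s lookup order:
    1. the .vexrc 'virtualenvs' setting (user-expanded),
    2. $WORKON_HOME,
    3. $HOME/.virtualenvs (falling back to expanduser('~'))."""
    if vexrc_value:
        return os.path.expanduser(vexrc_value)
    ve_base = environ.get('WORKON_HOME', '')
    if ve_base:
        return ve_base
    home = environ.get('HOME', '') or os.path.expanduser('~')
    if not home:
        return ''
    return os.path.join(home, '.virtualenvs')
```

The config value wins over `WORKON_HOME`, which wins over the derived `~/.virtualenvs` default; invalid paths are passed through unchecked, as in the original.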
def ppf(q, df, loc=0.0, scale=1.0, gamma=1.0):
    """ PPF function for Skew t distribution """
    result = np.zeros(q.shape[0])
    probzero = Skewt.cdf(x=np.zeros(1), loc=np.zeros(1), df=df, gamma=gamma)
    result[q < probzero] = 1.0/gamma*ss.t.ppf(
        ((np.power(gamma, 2) + 1.0) * q[q < probzero])/2.0, df)
    result[q >= probzero] = gamma*ss.t.ppf(
        (1.0 + 1.0/np.power(gamma, 2))/2.0*(q[q >= probzero] - probzero) + 0.5,
        df)
    return result
PPF function for Skew t distribution
def funcFindPrfMltpPrdXVal(idxPrc, aryFuncChnkTrn, aryFuncChnkTst,
                           aryPrfMdlsTrnConv, aryPrfMdlsTstConv, aryMdls,
                           queOut):
    """
    Function for finding best pRF model for voxel time course.

    This function should be used if there are several predictors.
    """
    # Number of voxels to be fitted in this chunk:
    varNumVoxChnk = aryFuncChnkTrn.shape[0]

    # Number of volumes:
    varNumVolTrn = aryFuncChnkTrn.shape[2]
    varNumVolTst = aryFuncChnkTst.shape[2]

    # get number of cross validations
    varNumXval = aryPrfMdlsTrnConv.shape[2]

    # Vectors for pRF finding results [number-of-voxels times one]:
    vecBstXpos = np.zeros(varNumVoxChnk)
    vecBstYpos = np.zeros(varNumVoxChnk)
    vecBstSd = np.zeros(varNumVoxChnk)
    # vecBstR2 = np.zeros(varNumVoxChnk)

    # Vector for temporary residuals values that are obtained during
    # the different loops of cross validation
    vecTmpResXVal = np.empty((varNumVoxChnk, varNumXval), dtype='float32')

    # Vector for best residual values.
    vecBstRes = np.add(np.zeros(varNumVoxChnk), 100000.0)

    # Constant term for the model:
    vecConstTrn = np.ones((varNumVolTrn), dtype=np.float32)
    vecConstTst = np.ones((varNumVolTst), dtype=np.float32)

    # Change type to float 32:
    aryPrfMdlsTrnConv = aryPrfMdlsTrnConv.astype(np.float32)
    aryPrfMdlsTstConv = aryPrfMdlsTstConv.astype(np.float32)

    # Number of pRF models to fit:
    varNumMdls = len(aryMdls)

    # Prepare status indicator if this is the first of the parallel processes:
    if idxPrc == 0:

        # We create a status indicator for the time consuming pRF model
        # finding algorithm. Number of steps of the status indicator:
        varStsStpSze = 20

        # Vector with pRF values at which to give status feedback:
        vecStatPrf = np.linspace(0, varNumMdls, num=(varStsStpSze+1),
                                 endpoint=True)
        vecStatPrf = np.ceil(vecStatPrf)
        vecStatPrf = vecStatPrf.astype(int)

        # Vector with corresponding percentage values at which to give status
        # feedback:
        vecStatPrc = np.linspace(0, 100, num=(varStsStpSze+1), endpoint=True)
        vecStatPrc = np.ceil(vecStatPrc)
        vecStatPrc = vecStatPrc.astype(int)

        # Counter for status indicator:
        varCntSts01 = 0
        varCntSts02 = 0

    # Loop through pRF models:
    for idxMdls in range(0, varNumMdls):

        # Status indicator (only used in the first of the parallel
        # processes):
        if idxPrc == 0:
            if varCntSts02 == vecStatPrf[varCntSts01]:
                # Prepare status message:
                strStsMsg = ('---------Progress: ' +
                             str(vecStatPrc[varCntSts01]) + ' % --- ' +
                             str(vecStatPrf[varCntSts01]) +
                             ' pRF models out of ' + str(varNumMdls))
                print(strStsMsg)
                # Only increment counter if the last value has not been
                # reached yet:
                if varCntSts01 < varStsStpSze:
                    varCntSts01 = varCntSts01 + int(1)

        # Loop through different cross validations
        for idxXval in range(0, varNumXval):

            # Current pRF time course model:
            vecMdlTrn = aryPrfMdlsTrnConv[idxMdls, :, idxXval, :]
            vecMdlTst = aryPrfMdlsTstConv[idxMdls, :, idxXval, :]

            # We create a design matrix including the current pRF time
            # course model, and a constant term:
            aryDsgnTrn = np.vstack([vecMdlTrn, vecConstTrn]).T
            aryDsgnTst = np.vstack([vecMdlTst, vecConstTst]).T

            # Calculate the least-squares solution for all voxels and get
            # parameter estimates from the training fit
            aryTmpPrmEst = np.linalg.lstsq(
                aryDsgnTrn, aryFuncChnkTrn[:, idxXval, :].T)[0]

            # calculate predicted model fit based on training data
            aryTmpMdlTc = np.dot(aryDsgnTst, aryTmpPrmEst)

            # calculate residual sum of squares between test data and
            # predicted model fit based on training data
            vecTmpResXVal[:, idxXval] = np.sum(
                (np.subtract(aryFuncChnkTst[:, idxXval, :].T,
                             aryTmpMdlTc))**2, axis=0)

        vecTmpRes = np.mean(vecTmpResXVal, axis=1)

        # Check whether current residuals are lower than previously
        # calculated ones:
        vecLgcTmpRes = np.less(vecTmpRes, vecBstRes)

        # Replace best x and y position values, and SD values.
        vecBstXpos[vecLgcTmpRes] = aryMdls[idxMdls][0]
        vecBstYpos[vecLgcTmpRes] = aryMdls[idxMdls][1]
        vecBstSd[vecLgcTmpRes] = aryMdls[idxMdls][2]

        # Replace best residual values:
        vecBstRes[vecLgcTmpRes] = vecTmpRes[vecLgcTmpRes]

        # Status indicator (only used in the first of the parallel
        # processes):
        if idxPrc == 0:
            # Increment status indicator counter:
            varCntSts02 = varCntSts02 + 1

    # Output list:
    lstOut = [idxPrc,
              vecBstXpos,
              vecBstYpos,
              vecBstSd,
              ]

    queOut.put(lstOut)
Function for finding best pRF model for voxel time course. This function should be used if there are several predictors.
Below is the the instruction that describes the task: ### Input: Function for finding best pRF model for voxel time course. This function should be used if there are several predictors. ### Response: def funcFindPrfMltpPrdXVal(idxPrc, aryFuncChnkTrn, aryFuncChnkTst, aryPrfMdlsTrnConv, aryPrfMdlsTstConv, aryMdls, queOut): """ Function for finding best pRF model for voxel time course. This function should be used if there are several predictors. """ # Number of voxels to be fitted in this chunk: varNumVoxChnk = aryFuncChnkTrn.shape[0] # Number of volumes: varNumVolTrn = aryFuncChnkTrn.shape[2] varNumVolTst = aryFuncChnkTst.shape[2] # get number of cross validations varNumXval = aryPrfMdlsTrnConv.shape[2] # Vectors for pRF finding results [number-of-voxels times one]: vecBstXpos = np.zeros(varNumVoxChnk) vecBstYpos = np.zeros(varNumVoxChnk) vecBstSd = np.zeros(varNumVoxChnk) # vecBstR2 = np.zeros(varNumVoxChnk) # Vector for temporary residuals values that are obtained during # the different loops of cross validation vecTmpResXVal = np.empty((varNumVoxChnk, varNumXval), dtype='float32') # Vector for best residual values. vecBstRes = np.add(np.zeros(varNumVoxChnk), 100000.0) # Constant term for the model: vecConstTrn = np.ones((varNumVolTrn), dtype=np.float32) vecConstTst = np.ones((varNumVolTst), dtype=np.float32) # Change type to float 32: aryPrfMdlsTrnConv = aryPrfMdlsTrnConv.astype(np.float32) aryPrfMdlsTstConv = aryPrfMdlsTstConv.astype(np.float32) # Number of pRF models to fit: varNumMdls = len(aryMdls) # Prepare status indicator if this is the first of the parallel processes: if idxPrc == 0: # We create a status indicator for the time consuming pRF model finding # algorithm. 
Number of steps of the status indicator: varStsStpSze = 20 # Vector with pRF values at which to give status feedback: vecStatPrf = np.linspace(0, varNumMdls, num=(varStsStpSze+1), endpoint=True) vecStatPrf = np.ceil(vecStatPrf) vecStatPrf = vecStatPrf.astype(int) # Vector with corresponding percentage values at which to give status # feedback: vecStatPrc = np.linspace(0, 100, num=(varStsStpSze+1), endpoint=True) vecStatPrc = np.ceil(vecStatPrc) vecStatPrc = vecStatPrc.astype(int) # Counter for status indicator: varCntSts01 = 0 varCntSts02 = 0 # Loop through pRF models: for idxMdls in range(0, varNumMdls): # Status indicator (only used in the first of the parallel # processes): if idxPrc == 0: # Status indicator: if varCntSts02 == vecStatPrf[varCntSts01]: # Prepare status message: strStsMsg = ('---------Progress: ' + str(vecStatPrc[varCntSts01]) + ' % --- ' + str(vecStatPrf[varCntSts01]) + ' pRF models out of ' + str(varNumMdls)) print(strStsMsg) # Only increment counter if the last value has not been # reached yet: if varCntSts01 < varStsStpSze: varCntSts01 = varCntSts01 + int(1) # Loop through different cross validations for idxXval in range(0, varNumXval): # Current pRF time course model: vecMdlTrn = aryPrfMdlsTrnConv[idxMdls, :, idxXval, :] vecMdlTst = aryPrfMdlsTstConv[idxMdls, :, idxXval, :] # We create a design matrix including the current pRF time # course model, and a constant term: aryDsgnTrn = np.vstack([vecMdlTrn, vecConstTrn]).T aryDsgnTst = np.vstack([vecMdlTst, vecConstTst]).T # Calculate the least-squares solution for all voxels # and get parameter estimates from the training fit aryTmpPrmEst = np.linalg.lstsq(aryDsgnTrn, aryFuncChnkTrn[:, idxXval, :].T)[0] # calculate predicted model fit based on training data aryTmpMdlTc = np.dot(aryDsgnTst, aryTmpPrmEst) # calculate residual sum of squares between test data and # predicted model fit based on training data vecTmpResXVal[:, idxXval] = np.sum( (np.subtract(aryFuncChnkTst[:, idxXval, :].T, 
aryTmpMdlTc))**2, axis=0) vecTmpRes = np.mean(vecTmpResXVal, axis=1) # Check whether current residuals are lower than previously # calculated ones: vecLgcTmpRes = np.less(vecTmpRes, vecBstRes) # Replace best x and y position values, and SD values. vecBstXpos[vecLgcTmpRes] = aryMdls[idxMdls][0] vecBstYpos[vecLgcTmpRes] = aryMdls[idxMdls][1] vecBstSd[vecLgcTmpRes] = aryMdls[idxMdls][2] # Replace best residual values: vecBstRes[vecLgcTmpRes] = vecTmpRes[vecLgcTmpRes] # Status indicator (only used in the first of the parallel # processes): if idxPrc == 0: # Increment status indicator counter: varCntSts02 = varCntSts02 + 1 # Output list: lstOut = [idxPrc, vecBstXpos, vecBstYpos, vecBstSd, ] queOut.put(lstOut)
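The fitting step above reduces, per candidate model and cross-validation fold, to an ordinary least-squares fit against a design matrix holding the model time course and a constant term. A minimal standalone sketch of just that step — the synthetic data below is an assumption, not the original pRF arrays:

```python
import numpy as np

# Synthetic stand-ins for one model time course and one voxel's data.
rng = np.random.default_rng(0)
vecMdl = rng.normal(size=50)
vox = 2.0 * vecMdl + 0.5 + 0.01 * rng.normal(size=50)

# Design matrix: model regressor plus a constant term, as in aryDsgnTrn above.
dsgn = np.vstack([vecMdl, np.ones(50)]).T

# Least-squares parameter estimates (slope for the model, intercept).
beta = np.linalg.lstsq(dsgn, vox, rcond=None)[0]

# Residual sum of squares -- the quantity minimised across candidate models.
res = np.sum((vox - dsgn @ beta) ** 2)
```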
def get_code(self, *args, **kwargs): """ get the python source code from callback """ # FIXME: Honestly should allow multiple commands callback = self._commands[args[0]] # TODO: syntax color would be nice source = _inspect.getsourcelines(callback)[0] """ source_len = len(source) source = PygmentsLexer(CythonLexer).lex_document(source)() """ # FIXME: formatting sucks return "\n" + "".join(source)
get the python source code from callback
Below is the instruction that describes the task: ### Input: get the python source code from callback ### Response: def get_code(self, *args, **kwargs): """ get the python source code from callback """ # FIXME: Honestly should allow multiple commands callback = self._commands[args[0]] # TODO: syntax color would be nice source = _inspect.getsourcelines(callback)[0] """ source_len = len(source) source = PygmentsLexer(CythonLexer).lex_document(source)() """ # FIXME: formatting sucks return "\n" + "".join(source)
def create_group(self, title, parent=None, icon=1, expires=None): """ This method creates a new group. A group title is needed or no group will be created. If a parent is given, the group will be created as a sub-group. title must be a string, image an unsigned int >0 and parent a Group. :return: The newly created group. :rtype: :class:`keepassdb.model.Group` """ if parent and not isinstance(parent, Group): raise TypeError("Parent must be of type Group") if expires is None: expires = const.NEVER if self.groups: group_id = max([g.id for g in self.groups]) + 1 else: group_id = 1 group = Group(id=group_id, title=title, icon=icon, db=self, created=util.now(), modified=util.now(), accessed=util.now(), expires=expires) # If no parent is given, just append the new group at the end if parent is None: group.parent = self.root self.root.children.append(group) group.level = 0 self.groups.append(group) # Else insert the group behind the parent else: if parent not in self.groups: raise ValueError("Group doesn't exist / is not bound to this database.") parent.children.append(group) group.parent = parent group.level = parent.level + 1 self.groups.insert(self.groups.index(parent) + 1, group) return group
This method creates a new group. A group title is needed or no group will be created. If a parent is given, the group will be created as a sub-group. title must be a string, image an unsigned int >0 and parent a Group. :return: The newly created group. :rtype: :class:`keepassdb.model.Group`
Below is the the instruction that describes the task: ### Input: This method creates a new group. A group title is needed or no group will be created. If a parent is given, the group will be created as a sub-group. title must be a string, image an unsigned int >0 and parent a Group. :return: The newly created group. :rtype: :class:`keepassdb.model.Group` ### Response: def create_group(self, title, parent=None, icon=1, expires=None): """ This method creates a new group. A group title is needed or no group will be created. If a parent is given, the group will be created as a sub-group. title must be a string, image an unsigned int >0 and parent a Group. :return: The newly created group. :rtype: :class:`keepassdb.model.Group` """ if parent and not isinstance(parent, Group): raise TypeError("Parent must be of type Group") if expires is None: expires = const.NEVER if self.groups: group_id = max([g.id for g in self.groups]) + 1 else: group_id = 1 group = Group(id=group_id, title=title, icon=icon, db=self, created=util.now(), modified=util.now(), accessed=util.now(), expires=expires) # If no parent is given, just append the new group at the end if parent is None: group.parent = self.root self.root.children.append(group) group.level = 0 self.groups.append(group) # Else insert the group behind the parent else: if parent not in self.groups: raise ValueError("Group doesn't exist / is not bound to this database.") parent.children.append(group) group.parent = parent group.level = parent.level + 1 self.groups.insert(self.groups.index(parent) + 1, group) return group
def square_root_mod_prime( a, p ): """Modular square root of a, mod p, p prime.""" # Based on the Handbook of Applied Cryptography, algorithms 3.34 to 3.39. # This module has been tested for all values in [0,p-1] for # every prime p from 3 to 1229. assert 0 <= a < p assert 1 < p if a == 0: return 0 if p == 2: return a jac = jacobi( a, p ) if jac == -1: raise SquareRootError( "%d has no square root modulo %d" \ % ( a, p ) ) if p % 4 == 3: return modular_exp( a, (p+1)//4, p ) if p % 8 == 5: d = modular_exp( a, (p-1)//4, p ) if d == 1: return modular_exp( a, (p+3)//8, p ) if d == p-1: return ( 2 * a * modular_exp( 4*a, (p-5)//8, p ) ) % p raise RuntimeError("Shouldn't get here.") for b in range( 2, p ): if jacobi( b*b-4*a, p ) == -1: f = ( a, -b, 1 ) ff = polynomial_exp_mod( ( 0, 1 ), (p+1)//2, f, p ) assert ff[1] == 0 return ff[0] raise RuntimeError("No b found.")
Modular square root of a, mod p, p prime.
Below is the instruction that describes the task: ### Input: Modular square root of a, mod p, p prime. ### Response: def square_root_mod_prime( a, p ): """Modular square root of a, mod p, p prime.""" # Based on the Handbook of Applied Cryptography, algorithms 3.34 to 3.39. # This module has been tested for all values in [0,p-1] for # every prime p from 3 to 1229. assert 0 <= a < p assert 1 < p if a == 0: return 0 if p == 2: return a jac = jacobi( a, p ) if jac == -1: raise SquareRootError( "%d has no square root modulo %d" \ % ( a, p ) ) if p % 4 == 3: return modular_exp( a, (p+1)//4, p ) if p % 8 == 5: d = modular_exp( a, (p-1)//4, p ) if d == 1: return modular_exp( a, (p+3)//8, p ) if d == p-1: return ( 2 * a * modular_exp( 4*a, (p-5)//8, p ) ) % p raise RuntimeError("Shouldn't get here.") for b in range( 2, p ): if jacobi( b*b-4*a, p ) == -1: f = ( a, -b, 1 ) ff = polynomial_exp_mod( ( 0, 1 ), (p+1)//2, f, p ) assert ff[1] == 0 return ff[0] raise RuntimeError("No b found.")
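For the branch `p % 4 == 3`, the square root is a single modular exponentiation, which Python's built-in three-argument `pow` computes directly. A self-contained sketch of just that branch (not the full algorithm above):

```python
def sqrt_mod_p3(a, p):
    """Square root of a mod p, for primes p with p % 4 == 3 only."""
    assert p % 4 == 3
    r = pow(a, (p + 1) // 4, p)
    # Correct only when a is a quadratic residue mod p; verify before returning.
    assert (r * r) % p == a % p
    return r

root = sqrt_mod_p3(2, 7)  # 4, since 4*4 = 16 ≡ 2 (mod 7)
```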
def search(self, what, name=None, version=None): """ Search for a plugin """ filtered = {} # The search may be for a scan (what is None) or if what is None: whats = list(self.plugins.keys()) elif what is not None: if what not in self.plugins: raise Exception("Unknown class of plugins") whats = [what] for what in whats: if what not in filtered: filtered[what] = [] for key in self.plugins[what].keys(): (k_name, k_version) = key if name is not None and k_name != name: continue if version is not None and k_version != version: continue if self.plugins[what][key].enable == 'n': continue filtered[what].append(key) # print(filtered) return filtered
Search for a plugin
Below is the instruction that describes the task: ### Input: Search for a plugin ### Response: def search(self, what, name=None, version=None): """ Search for a plugin """ filtered = {} # The search may be for a scan (what is None) or if what is None: whats = list(self.plugins.keys()) elif what is not None: if what not in self.plugins: raise Exception("Unknown class of plugins") whats = [what] for what in whats: if what not in filtered: filtered[what] = [] for key in self.plugins[what].keys(): (k_name, k_version) = key if name is not None and k_name != name: continue if version is not None and k_version != version: continue if self.plugins[what][key].enable == 'n': continue filtered[what].append(key) # print(filtered) return filtered
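The registry lookup above filters `(name, version)` keys within each plugin class. A simplified, dependency-free sketch of the same filtering — the plugin names and registry layout below are made up for illustration:

```python
plugins = {
    "input": {("csv", "1.0"): "enabled", ("json", "2.0"): "enabled"},
    "output": {("csv", "1.0"): "enabled"},
}

def search(registry, what=None, name=None, version=None):
    # what=None means scan every plugin class, as in the method above.
    whats = list(registry) if what is None else [what]
    filtered = {}
    for w in whats:
        filtered[w] = [
            key
            for key in registry[w]
            if (name is None or key[0] == name)
            and (version is None or key[1] == version)
        ]
    return filtered

result = search(plugins, name="csv")
```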
def upgrade(yes, dry_run, patches): """ Upgrade the datamodel by recursively applying the available patches """ patcher = _get_mongopatcher() if dry_run: patcher.discover_and_apply(directory=patches, dry_run=dry_run) else: if (yes or prompt_bool("Are you sure you want to alter %s" % green(patcher.db))): patcher.discover_and_apply(patches) else: raise SystemExit('You changed your mind, exiting...')
Upgrade the datamodel by recursively applying the available patches
Below is the instruction that describes the task: ### Input: Upgrade the datamodel by recursively applying the available patches ### Response: def upgrade(yes, dry_run, patches): """ Upgrade the datamodel by recursively applying the available patches """ patcher = _get_mongopatcher() if dry_run: patcher.discover_and_apply(directory=patches, dry_run=dry_run) else: if (yes or prompt_bool("Are you sure you want to alter %s" % green(patcher.db))): patcher.discover_and_apply(patches) else: raise SystemExit('You changed your mind, exiting...')
def as_fixed_width(self, copy=True): """Convert binning to recipe with fixed width (if possible.) Parameters ---------- copy: bool Ensure that we receive another object Returns ------- FixedWidthBinning """ if self.bin_count == 0: raise RuntimeError("Cannot guess binning width with zero bins") elif self.bin_count == 1 or self.is_consecutive() and self.is_regular(): return FixedWidthBinning(min=self.bins[0][0], bin_count=self.bin_count, bin_width=self.bins[1] - self.bins[0]) else: raise RuntimeError("Cannot create fixed-width binning from differing bin widths.")
Convert binning to recipe with fixed width (if possible.) Parameters ---------- copy: bool Ensure that we receive another object Returns ------- FixedWidthBinning
Below is the instruction that describes the task: ### Input: Convert binning to recipe with fixed width (if possible.) Parameters ---------- copy: bool Ensure that we receive another object Returns ------- FixedWidthBinning ### Response: def as_fixed_width(self, copy=True): """Convert binning to recipe with fixed width (if possible.) Parameters ---------- copy: bool Ensure that we receive another object Returns ------- FixedWidthBinning """ if self.bin_count == 0: raise RuntimeError("Cannot guess binning width with zero bins") elif self.bin_count == 1 or self.is_consecutive() and self.is_regular(): return FixedWidthBinning(min=self.bins[0][0], bin_count=self.bin_count, bin_width=self.bins[1] - self.bins[0]) else: raise RuntimeError("Cannot create fixed-width binning from differing bin widths.")
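The `is_regular()` check that gates the conversion amounts to asking whether all bin widths agree within floating-point tolerance; in NumPy that is a `diff` plus `allclose`. A sketch with illustrative edges (not the library's own implementation):

```python
import numpy as np

edges = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
widths = np.diff(edges)

# Regular binning: every width matches the first, within floating tolerance.
is_regular = np.allclose(widths, widths[0])
bin_width = float(widths[0]) if is_regular else None
```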
def zone_data(self): """Get zone data""" if self._zone_data is None: self._zone_data = self._get('/zones/' + self.domain).json() return self._zone_data
Get zone data
Below is the instruction that describes the task: ### Input: Get zone data ### Response: def zone_data(self): """Get zone data""" if self._zone_data is None: self._zone_data = self._get('/zones/' + self.domain).json() return self._zone_data
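`zone_data` is a lazily cached lookup: the API call happens on first access and the result is reused afterwards. The same pattern without any HTTP dependency — the class below is a hypothetical stand-in, with a counter replacing the real GET:

```python
class Zone:
    def __init__(self, domain):
        self.domain = domain
        self._zone_data = None
        self.fetch_count = 0  # counts simulated API calls

    @property
    def zone_data(self):
        if self._zone_data is None:
            self.fetch_count += 1  # stands in for the HTTP GET + .json()
            self._zone_data = {"domain": self.domain}
        return self._zone_data

zone = Zone("example.com")
first = zone.zone_data
second = zone.zone_data
```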
def get_did_providers(self, did): """ Return the list of providers registered on-chain for the given did. :param did: hex str the id of an asset on-chain :return: list of addresses None if asset has no registered providers """ register_values = self.contract_concise.getDIDRegister(did) if register_values and len(register_values) == 5: return DIDRegisterValues(*register_values).providers return None
Return the list of providers registered on-chain for the given did. :param did: hex str the id of an asset on-chain :return: list of addresses None if asset has no registered providers
Below is the instruction that describes the task: ### Input: Return the list of providers registered on-chain for the given did. :param did: hex str the id of an asset on-chain :return: list of addresses None if asset has no registered providers ### Response: def get_did_providers(self, did): """ Return the list of providers registered on-chain for the given did. :param did: hex str the id of an asset on-chain :return: list of addresses None if asset has no registered providers """ register_values = self.contract_concise.getDIDRegister(did) if register_values and len(register_values) == 5: return DIDRegisterValues(*register_values).providers return None
def get_region_order(regions, scen7=False): """ Get the region order expected by MAGICC. Parameters ---------- regions : list_like The regions to get THISFILE_DATTYPE and THISFILE_REGIONMODE flags for. scen7 : bool, optional Whether the file we are getting the flags for is a SCEN7 file or not. Returns ------- list Region order expected by MAGICC for the given region set. """ region_dattype_row = _get_dattype_regionmode_regions_row(regions, scen7=scen7) region_order = DATTYPE_REGIONMODE_REGIONS["regions"][region_dattype_row].iloc[0] return region_order
Get the region order expected by MAGICC. Parameters ---------- regions : list_like The regions to get THISFILE_DATTYPE and THISFILE_REGIONMODE flags for. scen7 : bool, optional Whether the file we are getting the flags for is a SCEN7 file or not. Returns ------- list Region order expected by MAGICC for the given region set.
Below is the instruction that describes the task: ### Input: Get the region order expected by MAGICC. Parameters ---------- regions : list_like The regions to get THISFILE_DATTYPE and THISFILE_REGIONMODE flags for. scen7 : bool, optional Whether the file we are getting the flags for is a SCEN7 file or not. Returns ------- list Region order expected by MAGICC for the given region set. ### Response: def get_region_order(regions, scen7=False): """ Get the region order expected by MAGICC. Parameters ---------- regions : list_like The regions to get THISFILE_DATTYPE and THISFILE_REGIONMODE flags for. scen7 : bool, optional Whether the file we are getting the flags for is a SCEN7 file or not. Returns ------- list Region order expected by MAGICC for the given region set. """ region_dattype_row = _get_dattype_regionmode_regions_row(regions, scen7=scen7) region_order = DATTYPE_REGIONMODE_REGIONS["regions"][region_dattype_row].iloc[0] return region_order
def set_reference_channel(self, chname): """This is the API call to set the reference channel. """ # change the GUI control to match idx = self.chnames.index(str(chname)) self.w.ref_channel.set_index(idx) return self._set_reference_channel(chname)
This is the API call to set the reference channel.
Below is the instruction that describes the task: ### Input: This is the API call to set the reference channel. ### Response: def set_reference_channel(self, chname): """This is the API call to set the reference channel. """ # change the GUI control to match idx = self.chnames.index(str(chname)) self.w.ref_channel.set_index(idx) return self._set_reference_channel(chname)
def log_error(self, msg, *args): """Log an error or print to stdout if no logger.""" if self._logger is not None: self._logger.error(msg, *args) else: print(msg % args)
Log an error or print to stdout if no logger.
Below is the instruction that describes the task: ### Input: Log an error or print to stdout if no logger. ### Response: def log_error(self, msg, *args): """Log an error or print to stdout if no logger.""" if self._logger is not None: self._logger.error(msg, *args) else: print(msg % args)
def DictOf(name, *fields): """ This function creates a dict type with the specified name and fields. >>> from pyws.functions.args import DictOf, Field >>> dct = DictOf( ... 'HelloWorldDict', Field('hello', str), Field('hello', int)) >>> issubclass(dct, Dict) True >>> dct.__name__ 'HelloWorldDict' >>> len(dct.fields) 2 """ ret = type(name, (Dict,), {'fields': []}) #noinspection PyUnresolvedReferences ret.add_fields(*fields) return ret
This function creates a dict type with the specified name and fields. >>> from pyws.functions.args import DictOf, Field >>> dct = DictOf( ... 'HelloWorldDict', Field('hello', str), Field('hello', int)) >>> issubclass(dct, Dict) True >>> dct.__name__ 'HelloWorldDict' >>> len(dct.fields) 2
Below is the instruction that describes the task: ### Input: This function creates a dict type with the specified name and fields. >>> from pyws.functions.args import DictOf, Field >>> dct = DictOf( ... 'HelloWorldDict', Field('hello', str), Field('hello', int)) >>> issubclass(dct, Dict) True >>> dct.__name__ 'HelloWorldDict' >>> len(dct.fields) 2 ### Response: def DictOf(name, *fields): """ This function creates a dict type with the specified name and fields. >>> from pyws.functions.args import DictOf, Field >>> dct = DictOf( ... 'HelloWorldDict', Field('hello', str), Field('hello', int)) >>> issubclass(dct, Dict) True >>> dct.__name__ 'HelloWorldDict' >>> len(dct.fields) 2 """ ret = type(name, (Dict,), {'fields': []}) #noinspection PyUnresolvedReferences ret.add_fields(*fields) return ret
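The interesting move in `DictOf` is the three-argument `type()` call, which builds a class at runtime with its own `fields` list so generated types don't share state. A minimal sketch with a stand-in `Dict` base (simpler than pyws's real `Field`/`Dict` machinery):

```python
class Dict:
    fields = []

    @classmethod
    def add_fields(cls, *fields):
        cls.fields.extend(fields)

def dict_of(name, *fields):
    # Fresh 'fields' list per generated type, exactly as in DictOf above.
    new_type = type(name, (Dict,), {"fields": []})
    new_type.add_fields(*fields)
    return new_type

HelloDict = dict_of("HelloWorldDict", ("hello", str), ("world", int))
```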
def Font(name=None, source="sys", italic=False, bold=False, size=20): """Unifies loading of fonts. :param name: name of system-font or filepath, if None is passed the default system-font is loaded :type name: str :param source: "sys" for system font, or "file" to load a file :type source: str """ assert source in ["sys", "file"] if not name: return pygame.font.SysFont(pygame.font.get_default_font(), size, bold=bold, italic=italic) if source == "sys": return pygame.font.SysFont(name, size, bold=bold, italic=italic) else: f = pygame.font.Font(name, size) f.set_italic(italic) f.set_bold(bold) return f
Unifies loading of fonts. :param name: name of system-font or filepath, if None is passed the default system-font is loaded :type name: str :param source: "sys" for system font, or "file" to load a file :type source: str
Below is the instruction that describes the task: ### Input: Unifies loading of fonts. :param name: name of system-font or filepath, if None is passed the default system-font is loaded :type name: str :param source: "sys" for system font, or "file" to load a file :type source: str ### Response: def Font(name=None, source="sys", italic=False, bold=False, size=20): """Unifies loading of fonts. :param name: name of system-font or filepath, if None is passed the default system-font is loaded :type name: str :param source: "sys" for system font, or "file" to load a file :type source: str """ assert source in ["sys", "file"] if not name: return pygame.font.SysFont(pygame.font.get_default_font(), size, bold=bold, italic=italic) if source == "sys": return pygame.font.SysFont(name, size, bold=bold, italic=italic) else: f = pygame.font.Font(name, size) f.set_italic(italic) f.set_bold(bold) return f
def split_docstring(value): """ Splits the docstring of the given value into its summary and body. :returns: a 2-tuple of the format ``(summary, body)`` """ docstring = textwrap.dedent(getattr(value, '__doc__', '')) if not docstring: return None pieces = docstring.strip().split('\n\n', 1) try: body = pieces[1] except IndexError: body = None return Docstring(pieces[0], body)
Splits the docstring of the given value into its summary and body. :returns: a 2-tuple of the format ``(summary, body)``
Below is the instruction that describes the task: ### Input: Splits the docstring of the given value into its summary and body. :returns: a 2-tuple of the format ``(summary, body)`` ### Response: def split_docstring(value): """ Splits the docstring of the given value into its summary and body. :returns: a 2-tuple of the format ``(summary, body)`` """ docstring = textwrap.dedent(getattr(value, '__doc__', '')) if not docstring: return None pieces = docstring.strip().split('\n\n', 1) try: body = pieces[1] except IndexError: body = None return Docstring(pieces[0], body)
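The summary/body split is `textwrap.dedent` followed by a single split on the first blank line. A standalone sketch that takes the raw text directly and returns a plain tuple instead of the project's `Docstring` type:

```python
import textwrap

def split_doc(text):
    doc = textwrap.dedent(text or "")
    if not doc:
        return None
    pieces = doc.strip().split("\n\n", 1)
    body = pieces[1] if len(pieces) > 1 else None
    return pieces[0], body

summary, body = split_doc("""
    Summary line.

    Longer body text.
""")
```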
def start(self): '''start running in background.''' self.update_device_info() self.get_device_status(0) # start addons. self.hook() self.thread = threading.Thread(target=self._run) self.thread.start() self.running = True
start running in background.
Below is the instruction that describes the task: ### Input: start running in background. ### Response: def start(self): '''start running in background.''' self.update_device_info() self.get_device_status(0) # start addons. self.hook() self.thread = threading.Thread(target=self._run) self.thread.start() self.running = True
def _unescape(self, value): ''' Recursively unescape values. Though slower, this doesn't require the user to know anything about the escaping when writing their own custom fetch functions. ''' if isinstance(value, (str,unicode)): return value.replace(self._escape_character, '.') elif isinstance(value, dict): return { self._unescape(k) : self._unescape(v) for k,v in value.items() } elif isinstance(value, list): return [ self._unescape(v) for v in value ] return value
Recursively unescape values. Though slower, this doesn't require the user to know anything about the escaping when writing their own custom fetch functions.
Below is the instruction that describes the task: ### Input: Recursively unescape values. Though slower, this doesn't require the user to know anything about the escaping when writing their own custom fetch functions. ### Response: def _unescape(self, value): ''' Recursively unescape values. Though slower, this doesn't require the user to know anything about the escaping when writing their own custom fetch functions. ''' if isinstance(value, (str,unicode)): return value.replace(self._escape_character, '.') elif isinstance(value, dict): return { self._unescape(k) : self._unescape(v) for k,v in value.items() } elif isinstance(value, list): return [ self._unescape(v) for v in value ] return value
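A Python 3 version of the recursive walk (the original's `unicode` check is a Python 2 name). The escape character below is an arbitrary stand-in, since the real one lives on the instance:

```python
ESCAPE = "\uff0e"  # hypothetical escape character standing in for '.'

def unescape(value):
    # Recurse through strings, dicts and lists; other types pass through.
    if isinstance(value, str):
        return value.replace(ESCAPE, ".")
    if isinstance(value, dict):
        return {unescape(k): unescape(v) for k, v in value.items()}
    if isinstance(value, list):
        return [unescape(v) for v in value]
    return value

doc = {"a\uff0eb": ["x\uff0ey", 7], "plain": True}
clean = unescape(doc)
```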
def list_media_endpoint_keys(access_token, subscription_id, rgname, msname): '''list the media endpoint keys in a media service Args: access_token (str): A valid Azure authentication token. subscription_id (str): Azure subscription id. rgname (str): Azure resource group name. msname (str): Media service name. Returns: HTTP response. JSON body. ''' endpoint = ''.join([get_rm_endpoint(), '/subscriptions/', subscription_id, '/resourceGroups/', rgname, '/providers/microsoft.media/', '/mediaservices/', msname, '/listKeys?api-version=', MEDIA_API]) return do_get(endpoint, access_token)
list the media endpoint keys in a media service Args: access_token (str): A valid Azure authentication token. subscription_id (str): Azure subscription id. rgname (str): Azure resource group name. msname (str): Media service name. Returns: HTTP response. JSON body.
Below is the instruction that describes the task: ### Input: list the media endpoint keys in a media service Args: access_token (str): A valid Azure authentication token. subscription_id (str): Azure subscription id. rgname (str): Azure resource group name. msname (str): Media service name. Returns: HTTP response. JSON body. ### Response: def list_media_endpoint_keys(access_token, subscription_id, rgname, msname): '''list the media endpoint keys in a media service Args: access_token (str): A valid Azure authentication token. subscription_id (str): Azure subscription id. rgname (str): Azure resource group name. msname (str): Media service name. Returns: HTTP response. JSON body. ''' endpoint = ''.join([get_rm_endpoint(), '/subscriptions/', subscription_id, '/resourceGroups/', rgname, '/providers/microsoft.media/', '/mediaservices/', msname, '/listKeys?api-version=', MEDIA_API]) return do_get(endpoint, access_token)
def tax_class_based_on(self, tax_class_based_on): """Sets the tax_class_based_on of this TaxSettings. :param tax_class_based_on: The tax_class_based_on of this TaxSettings. :type: str """ allowed_values = ["shippingAddress", "billingAddress"] # noqa: E501 if tax_class_based_on is not None and tax_class_based_on not in allowed_values: raise ValueError( "Invalid value for `tax_class_based_on` ({0}), must be one of {1}" # noqa: E501 .format(tax_class_based_on, allowed_values) ) self._tax_class_based_on = tax_class_based_on
Sets the tax_class_based_on of this TaxSettings. :param tax_class_based_on: The tax_class_based_on of this TaxSettings. :type: str
Below is the instruction that describes the task: ### Input: Sets the tax_class_based_on of this TaxSettings. :param tax_class_based_on: The tax_class_based_on of this TaxSettings. :type: str ### Response: def tax_class_based_on(self, tax_class_based_on): """Sets the tax_class_based_on of this TaxSettings. :param tax_class_based_on: The tax_class_based_on of this TaxSettings. :type: str """ allowed_values = ["shippingAddress", "billingAddress"] # noqa: E501 if tax_class_based_on is not None and tax_class_based_on not in allowed_values: raise ValueError( "Invalid value for `tax_class_based_on` ({0}), must be one of {1}" # noqa: E501 .format(tax_class_based_on, allowed_values) ) self._tax_class_based_on = tax_class_based_on
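The setter pattern above — a whitelist check that raises `ValueError` on anything outside the allowed set — shown standalone, with the class trimmed to just this one property:

```python
class TaxSettings:
    ALLOWED = ["shippingAddress", "billingAddress"]

    def __init__(self):
        self._tax_class_based_on = None

    @property
    def tax_class_based_on(self):
        return self._tax_class_based_on

    @tax_class_based_on.setter
    def tax_class_based_on(self, value):
        # None is permitted; any other value must come from the whitelist.
        if value is not None and value not in self.ALLOWED:
            raise ValueError(
                "Invalid value for `tax_class_based_on` ({0}), must be one of {1}"
                .format(value, self.ALLOWED)
            )
        self._tax_class_based_on = value

settings = TaxSettings()
settings.tax_class_based_on = "billingAddress"
try:
    settings.tax_class_based_on = "bogus"
    rejected = False
except ValueError:
    rejected = True
```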
def authenticate(self, request, **kwargs): """Authenticates a user based on the OIDC code flow.""" self.request = request if not self.request: return None state = self.request.GET.get('state') code = self.request.GET.get('code') nonce = kwargs.pop('nonce', None) if not code or not state: return None reverse_url = self.get_settings('OIDC_AUTHENTICATION_CALLBACK_URL', 'oidc_authentication_callback') token_payload = { 'client_id': self.OIDC_RP_CLIENT_ID, 'client_secret': self.OIDC_RP_CLIENT_SECRET, 'grant_type': 'authorization_code', 'code': code, 'redirect_uri': absolutify( self.request, reverse(reverse_url) ), } # Get the token token_info = self.get_token(token_payload) id_token = token_info.get('id_token') access_token = token_info.get('access_token') # Validate the token payload = self.verify_token(id_token, nonce=nonce) if payload: self.store_tokens(access_token, id_token) try: return self.get_or_create_user(access_token, id_token, payload) except SuspiciousOperation as exc: LOGGER.warning('failed to get or create user: %s', exc) return None return None
Authenticates a user based on the OIDC code flow.
Below is the the instruction that describes the task: ### Input: Authenticates a user based on the OIDC code flow. ### Response: def authenticate(self, request, **kwargs): """Authenticates a user based on the OIDC code flow.""" self.request = request if not self.request: return None state = self.request.GET.get('state') code = self.request.GET.get('code') nonce = kwargs.pop('nonce', None) if not code or not state: return None reverse_url = self.get_settings('OIDC_AUTHENTICATION_CALLBACK_URL', 'oidc_authentication_callback') token_payload = { 'client_id': self.OIDC_RP_CLIENT_ID, 'client_secret': self.OIDC_RP_CLIENT_SECRET, 'grant_type': 'authorization_code', 'code': code, 'redirect_uri': absolutify( self.request, reverse(reverse_url) ), } # Get the token token_info = self.get_token(token_payload) id_token = token_info.get('id_token') access_token = token_info.get('access_token') # Validate the token payload = self.verify_token(id_token, nonce=nonce) if payload: self.store_tokens(access_token, id_token) try: return self.get_or_create_user(access_token, id_token, payload) except SuspiciousOperation as exc: LOGGER.warning('failed to get or create user: %s', exc) return None return None
def hash(self, value): """ Generate a hash of the given iterable. This is for use in a cache key. """ if is_iterable(value): value = tuple(to_bytestring(v) for v in value) return hashlib.md5(six.b(':').join(value)).hexdigest()
Generate a hash of the given iterable. This is for use in a cache key.
Below is the instruction that describes the task: ### Input: Generate a hash of the given iterable. This is for use in a cache key. ### Response: def hash(self, value): """ Generate a hash of the given iterable. This is for use in a cache key. """ if is_iterable(value): value = tuple(to_bytestring(v) for v in value) return hashlib.md5(six.b(':').join(value)).hexdigest()
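Without the `six`/`is_iterable`/`to_bytestring` helpers, the cache-key recipe is: byte-encode each item, join with `b':'`, md5 the result. A hedged Python 3 sketch (the encoding choice here is an assumption, not the library's):

```python
import hashlib

def cache_key(value):
    # Non-string iterables become tuples of UTF-8 byte strings.
    if isinstance(value, (list, tuple)):
        parts = tuple(str(v).encode("utf-8") for v in value)
    else:
        parts = (str(value).encode("utf-8"),)
    # md5 of the b':'-joined parts gives a stable 32-char hex key.
    return hashlib.md5(b":".join(parts)).hexdigest()

key = cache_key(["user", 42])
```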
def intersection(a, b, digits=8): """ Given a pair of ranges, merge them into one range if they overlap at all Parameters -------------- a : (2, ) float Start and end of a 1D interval b : (2, ) float Start and end of a 1D interval digits : int How many digits to consider Returns -------------- intersects : bool or (n,) bool Indicates if the ranges overlap at all new_range : (2, ) or (2, 2) float The unioned range from the two inputs, or both of the original ranges if not overlapping """ # check shape and convert a, b, a_int, b_int, is_1D = check(a, b, digits) # what are the starting and ending points of the overlap overlap = np.zeros(a.shape, dtype=np.float64) # A fully overlaps B current = np.logical_and(a_int[:, 0] <= b_int[:, 0], a_int[:, 1] >= b_int[:, 1]) overlap[current] = b[current] # B fully overlaps A current = np.logical_and(a_int[:, 0] >= b_int[:, 0], a_int[:, 1] <= b_int[:, 1]) overlap[current] = a[current] # A starts B ends # A:, 0 B:, 0 A:, 1 B:, 1 current = np.logical_and( np.logical_and(a_int[:, 0] <= b_int[:, 0], b_int[:, 0] < a_int[:, 1]), a_int[:, 1] < b_int[:, 1]) overlap[current] = np.column_stack([b[current][:, 0], a[current][:, 1]]) # B starts A ends # B:, 0 A:, 0 B:, 1 A:, 1 current = np.logical_and( np.logical_and(b_int[:, 0] <= a_int[:, 0], a_int[:, 0] < b_int[:, 1]), b_int[:, 1] < a_int[:, 1]) overlap[current] = np.column_stack([a[current][:, 0], b[current][:, 1]]) # is range overlapping at all intersects = overlap.ptp(axis=1) > 10**-digits if is_1D: return intersects[0], overlap[0] return intersects, overlap
Given a pair of ranges, merge them into one range if they overlap at all Parameters -------------- a : (2, ) float Start and end of a 1D interval b : (2, ) float Start and end of a 1D interval digits : int How many digits to consider Returns -------------- intersects : bool or (n,) bool Indicates if the ranges overlap at all new_range : (2, ) or (2, 2) float The unioned range from the two inputs, or both of the original ranges if not overlapping
Below is the instruction that describes the task: ### Input: Given a pair of ranges, merge them into one range if they overlap at all Parameters -------------- a : (2, ) float Start and end of a 1D interval b : (2, ) float Start and end of a 1D interval digits : int How many digits to consider Returns -------------- intersects : bool or (n,) bool Indicates if the ranges overlap at all new_range : (2, ) or (2, 2) float The unioned range from the two inputs, or both of the original ranges if not overlapping ### Response: def intersection(a, b, digits=8): """ Given a pair of ranges, merge them into one range if they overlap at all Parameters -------------- a : (2, ) float Start and end of a 1D interval b : (2, ) float Start and end of a 1D interval digits : int How many digits to consider Returns -------------- intersects : bool or (n,) bool Indicates if the ranges overlap at all new_range : (2, ) or (2, 2) float The unioned range from the two inputs, or both of the original ranges if not overlapping """ # check shape and convert a, b, a_int, b_int, is_1D = check(a, b, digits) # what are the starting and ending points of the overlap overlap = np.zeros(a.shape, dtype=np.float64) # A fully overlaps B current = np.logical_and(a_int[:, 0] <= b_int[:, 0], a_int[:, 1] >= b_int[:, 1]) overlap[current] = b[current] # B fully overlaps A current = np.logical_and(a_int[:, 0] >= b_int[:, 0], a_int[:, 1] <= b_int[:, 1]) overlap[current] = a[current] # A starts B ends # A:, 0 B:, 0 A:, 1 B:, 1 current = np.logical_and( np.logical_and(a_int[:, 0] <= b_int[:, 0], b_int[:, 0] < a_int[:, 1]), a_int[:, 1] < b_int[:, 1]) overlap[current] = np.column_stack([b[current][:, 0], a[current][:, 1]]) # B starts A ends # B:, 0 A:, 0 B:, 1 A:, 1 current = np.logical_and( np.logical_and(b_int[:, 0] <= a_int[:, 0], a_int[:, 0] < b_int[:, 1]), b_int[:, 1] < a_int[:, 1]) overlap[current] = np.column_stack([a[current][:, 0], b[current][:, 1]]) # is range overlapping at all intersects = overlap.ptp(axis=1) > 10**-digits if is_1D: return intersects[0], overlap[0] return intersects, overlap
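For the 1-D case, the four branches above collapse to a two-line rule: the overlap runs from the later start to the earlier end, and the intervals intersect when that span exceeds the rounding tolerance `10**-digits`. A NumPy-free sketch (hypothetical name, assuming each interval is a sorted `(start, end)` pair):

```python
def interval_overlap(a, b, digits=8):
    """Return (intersects, overlap) for 1-D intervals a and b."""
    lo = max(a[0], b[0])  # latest start
    hi = min(a[1], b[1])  # earliest end
    # Overlap counts only if it is wider than the rounding tolerance.
    return hi - lo > 10 ** -digits, (lo, hi)
```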
def assign_to_series(self, name, series_type, item): """Assign name to item converted to the given series_type.""" if series_type == "(": self.add_def(name + " = _coconut.tuple(" + item + ")") elif series_type == "[": self.add_def(name + " = _coconut.list(" + item + ")") else: raise CoconutInternalException("invalid series match type", series_type)
Assign name to item converted to the given series_type.
Below is the instruction that describes the task: ### Input: Assign name to item converted to the given series_type. ### Response: def assign_to_series(self, name, series_type, item): """Assign name to item converted to the given series_type.""" if series_type == "(": self.add_def(name + " = _coconut.tuple(" + item + ")") elif series_type == "[": self.add_def(name + " = _coconut.list(" + item + ")") else: raise CoconutInternalException("invalid series match type", series_type)
def uninstall(self, bug: Bug) -> bool: """ Uninstalls the Docker image associated with a given bug. """ r = self.__api.post('bugs/{}/uninstall'.format(bug.name)) raise NotImplementedError
Uninstalls the Docker image associated with a given bug.
Below is the instruction that describes the task: ### Input: Uninstalls the Docker image associated with a given bug. ### Response: def uninstall(self, bug: Bug) -> bool: """ Uninstalls the Docker image associated with a given bug. """ r = self.__api.post('bugs/{}/uninstall'.format(bug.name)) raise NotImplementedError
def request_seen(self, request): """Returns True if request was already seen. Parameters ---------- request : scrapy.http.Request Returns ------- bool """ fp = self.request_fingerprint(request) # This returns the number of values added, zero if already exists. added = self.server.sadd(self.key, fp) return added == 0
Returns True if request was already seen. Parameters ---------- request : scrapy.http.Request Returns ------- bool
Below is the instruction that describes the task: ### Input: Returns True if request was already seen. Parameters ---------- request : scrapy.http.Request Returns ------- bool ### Response: def request_seen(self, request): """Returns True if request was already seen. Parameters ---------- request : scrapy.http.Request Returns ------- bool """ fp = self.request_fingerprint(request) # This returns the number of values added, zero if already exists. added = self.server.sadd(self.key, fp) return added == 0
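The `sadd`-returns-zero trick above is easy to mirror locally: a set stands in for the Redis server, and "already present" means the request was seen. A sketch with hypothetical names (SHA-1 of the URL stands in for Scrapy's request fingerprint):

```python
import hashlib

class SeenFilter:
    """Duplicate filter mirroring the Redis SADD semantics with a local set."""

    def __init__(self):
        self._fingerprints = set()

    def request_seen(self, url):
        fp = hashlib.sha1(url.encode("utf-8")).hexdigest()
        if fp in self._fingerprints:
            return True                # SADD would return 0: already a member
        self._fingerprints.add(fp)     # SADD would return 1: newly added
        return False
```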
def prep_recal(data): """Do pre-BQSR recalibration, calculation of recalibration tables. """ if dd.get_recalibrate(data) in [True, "gatk"]: logger.info("Prepare BQSR tables with GATK: %s " % str(dd.get_sample_name(data))) dbsnp_file = tz.get_in(("genome_resources", "variation", "dbsnp"), data) if not dbsnp_file: logger.info("Skipping GATK BaseRecalibrator because no VCF file of known variants was found.") return data broad_runner = broad.runner_from_config(data["config"]) data["prep_recal"] = _gatk_base_recalibrator(broad_runner, dd.get_align_bam(data), dd.get_ref_file(data), dd.get_platform(data), dbsnp_file, dd.get_variant_regions(data) or dd.get_sample_callable(data), data) elif dd.get_recalibrate(data) == "sentieon": logger.info("Prepare BQSR tables with sentieon: %s " % str(dd.get_sample_name(data))) data["prep_recal"] = sentieon.bqsr_table(data) elif dd.get_recalibrate(data): raise NotImplementedError("Unsupported recalibration type: %s" % (dd.get_recalibrate(data))) return data
Do pre-BQSR recalibration, calculation of recalibration tables.
Below is the instruction that describes the task: ### Input: Do pre-BQSR recalibration, calculation of recalibration tables. ### Response: def prep_recal(data): """Do pre-BQSR recalibration, calculation of recalibration tables. """ if dd.get_recalibrate(data) in [True, "gatk"]: logger.info("Prepare BQSR tables with GATK: %s " % str(dd.get_sample_name(data))) dbsnp_file = tz.get_in(("genome_resources", "variation", "dbsnp"), data) if not dbsnp_file: logger.info("Skipping GATK BaseRecalibrator because no VCF file of known variants was found.") return data broad_runner = broad.runner_from_config(data["config"]) data["prep_recal"] = _gatk_base_recalibrator(broad_runner, dd.get_align_bam(data), dd.get_ref_file(data), dd.get_platform(data), dbsnp_file, dd.get_variant_regions(data) or dd.get_sample_callable(data), data) elif dd.get_recalibrate(data) == "sentieon": logger.info("Prepare BQSR tables with sentieon: %s " % str(dd.get_sample_name(data))) data["prep_recal"] = sentieon.bqsr_table(data) elif dd.get_recalibrate(data): raise NotImplementedError("Unsupported recalibration type: %s" % (dd.get_recalibrate(data))) return data
def patch_persistent_volume_status(self, name, body, **kwargs): # noqa: E501 """patch_persistent_volume_status # noqa: E501 partially update status of the specified PersistentVolume # noqa: E501 This method makes a synchronous HTTP request by default. To make an asynchronous HTTP request, please pass async_req=True >>> thread = api.patch_persistent_volume_status(name, body, async_req=True) >>> result = thread.get() :param async_req bool :param str name: name of the PersistentVolume (required) :param UNKNOWN_BASE_TYPE body: (required) :param str pretty: If 'true', then the output is pretty printed. :param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed :return: V1PersistentVolume If the method is called asynchronously, returns the request thread. """ kwargs['_return_http_data_only'] = True if kwargs.get('async_req'): return self.patch_persistent_volume_status_with_http_info(name, body, **kwargs) # noqa: E501 else: (data) = self.patch_persistent_volume_status_with_http_info(name, body, **kwargs) # noqa: E501 return data
patch_persistent_volume_status # noqa: E501 partially update status of the specified PersistentVolume # noqa: E501 This method makes a synchronous HTTP request by default. To make an asynchronous HTTP request, please pass async_req=True >>> thread = api.patch_persistent_volume_status(name, body, async_req=True) >>> result = thread.get() :param async_req bool :param str name: name of the PersistentVolume (required) :param UNKNOWN_BASE_TYPE body: (required) :param str pretty: If 'true', then the output is pretty printed. :param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed :return: V1PersistentVolume If the method is called asynchronously, returns the request thread.
Below is the instruction that describes the task: ### Input: patch_persistent_volume_status # noqa: E501 partially update status of the specified PersistentVolume # noqa: E501 This method makes a synchronous HTTP request by default. To make an asynchronous HTTP request, please pass async_req=True >>> thread = api.patch_persistent_volume_status(name, body, async_req=True) >>> result = thread.get() :param async_req bool :param str name: name of the PersistentVolume (required) :param UNKNOWN_BASE_TYPE body: (required) :param str pretty: If 'true', then the output is pretty printed. :param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed :return: V1PersistentVolume If the method is called asynchronously, returns the request thread. ### Response: def patch_persistent_volume_status(self, name, body, **kwargs): # noqa: E501 """patch_persistent_volume_status # noqa: E501 partially update status of the specified PersistentVolume # noqa: E501 This method makes a synchronous HTTP request by default. To make an asynchronous HTTP request, please pass async_req=True >>> thread = api.patch_persistent_volume_status(name, body, async_req=True) >>> result = thread.get() :param async_req bool :param str name: name of the PersistentVolume (required) :param UNKNOWN_BASE_TYPE body: (required) :param str pretty: If 'true', then the output is pretty printed. :param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed :return: V1PersistentVolume If the method is called asynchronously, returns the request thread. """ kwargs['_return_http_data_only'] = True if kwargs.get('async_req'): return self.patch_persistent_volume_status_with_http_info(name, body, **kwargs) # noqa: E501 else: (data) = self.patch_persistent_volume_status_with_http_info(name, body, **kwargs) # noqa: E501 return data
def combine_calls(*args): """Combine multiple callsets into a final set of merged calls. """ if len(args) == 3: is_cwl = False batch_id, samples, data = args caller_names, vrn_files = _organize_variants(samples, batch_id) else: is_cwl = True samples = [utils.to_single_data(x) for x in args] samples = [cwlutils.unpack_tarballs(x, x) for x in samples] data = samples[0] batch_id = data["batch_id"] caller_names = data["variants"]["variantcallers"] vrn_files = data["variants"]["calls"] logger.info("Ensemble consensus calls for {0}: {1}".format( batch_id, ",".join(caller_names))) edata = copy.deepcopy(data) base_dir = utils.safe_makedir(os.path.join(edata["dirs"]["work"], "ensemble", batch_id)) if any([vcfutils.vcf_has_variants(f) for f in vrn_files]): # Decompose multiallelic variants and normalize passonly = not tz.get_in(["config", "algorithm", "ensemble", "use_filtered"], edata, False) vrn_files = [normalize.normalize(f, data, passonly=passonly, rerun_effects=False, remove_oldeffects=True, nonrefonly=True, work_dir=utils.safe_makedir(os.path.join(base_dir, c))) for c, f in zip(caller_names, vrn_files)] if "classifiers" not in (dd.get_ensemble(edata) or {}): callinfo = _run_ensemble_intersection(batch_id, vrn_files, caller_names, base_dir, edata) else: config_file = _write_config_file(batch_id, caller_names, base_dir, edata) callinfo = _run_ensemble(batch_id, vrn_files, config_file, base_dir, dd.get_ref_file(edata), edata) callinfo["vrn_file"] = vcfutils.bgzip_and_index(callinfo["vrn_file"], data["config"]) # After decomposing multiallelic variants and normalizing, re-evaluate effects ann_ma_file, _ = effects.add_to_vcf(callinfo["vrn_file"], data) if ann_ma_file: callinfo["vrn_file"] = ann_ma_file edata["config"]["algorithm"]["variantcaller"] = "ensemble" edata["vrn_file"] = callinfo["vrn_file"] edata["ensemble_bed"] = callinfo["bed_file"] callinfo["validate"] = validate.compare_to_rm(edata)[0][0].get("validate") else: out_vcf_file = os.path.join(base_dir, "{0}-ensemble.vcf".format(batch_id)) vcfutils.write_empty_vcf(out_vcf_file, samples=[dd.get_sample_name(d) for d in samples]) callinfo = {"variantcaller": "ensemble", "vrn_file": vcfutils.bgzip_and_index(out_vcf_file, data["config"]), "bed_file": None} if is_cwl: callinfo["batch_samples"] = data["batch_samples"] callinfo["batch_id"] = batch_id return [{"ensemble": callinfo}] else: return [[batch_id, callinfo]]
Combine multiple callsets into a final set of merged calls.
Below is the instruction that describes the task: ### Input: Combine multiple callsets into a final set of merged calls. ### Response: def combine_calls(*args): """Combine multiple callsets into a final set of merged calls. """ if len(args) == 3: is_cwl = False batch_id, samples, data = args caller_names, vrn_files = _organize_variants(samples, batch_id) else: is_cwl = True samples = [utils.to_single_data(x) for x in args] samples = [cwlutils.unpack_tarballs(x, x) for x in samples] data = samples[0] batch_id = data["batch_id"] caller_names = data["variants"]["variantcallers"] vrn_files = data["variants"]["calls"] logger.info("Ensemble consensus calls for {0}: {1}".format( batch_id, ",".join(caller_names))) edata = copy.deepcopy(data) base_dir = utils.safe_makedir(os.path.join(edata["dirs"]["work"], "ensemble", batch_id)) if any([vcfutils.vcf_has_variants(f) for f in vrn_files]): # Decompose multiallelic variants and normalize passonly = not tz.get_in(["config", "algorithm", "ensemble", "use_filtered"], edata, False) vrn_files = [normalize.normalize(f, data, passonly=passonly, rerun_effects=False, remove_oldeffects=True, nonrefonly=True, work_dir=utils.safe_makedir(os.path.join(base_dir, c))) for c, f in zip(caller_names, vrn_files)] if "classifiers" not in (dd.get_ensemble(edata) or {}): callinfo = _run_ensemble_intersection(batch_id, vrn_files, caller_names, base_dir, edata) else: config_file = _write_config_file(batch_id, caller_names, base_dir, edata) callinfo = _run_ensemble(batch_id, vrn_files, config_file, base_dir, dd.get_ref_file(edata), edata) callinfo["vrn_file"] = vcfutils.bgzip_and_index(callinfo["vrn_file"], data["config"]) # After decomposing multiallelic variants and normalizing, re-evaluate effects ann_ma_file, _ = effects.add_to_vcf(callinfo["vrn_file"], data) if ann_ma_file: callinfo["vrn_file"] = ann_ma_file edata["config"]["algorithm"]["variantcaller"] = "ensemble" edata["vrn_file"] = callinfo["vrn_file"] edata["ensemble_bed"] = callinfo["bed_file"] callinfo["validate"] = validate.compare_to_rm(edata)[0][0].get("validate") else: out_vcf_file = os.path.join(base_dir, "{0}-ensemble.vcf".format(batch_id)) vcfutils.write_empty_vcf(out_vcf_file, samples=[dd.get_sample_name(d) for d in samples]) callinfo = {"variantcaller": "ensemble", "vrn_file": vcfutils.bgzip_and_index(out_vcf_file, data["config"]), "bed_file": None} if is_cwl: callinfo["batch_samples"] = data["batch_samples"] callinfo["batch_id"] = batch_id return [{"ensemble": callinfo}] else: return [[batch_id, callinfo]]
def _setue(self, i): """Initialise bitstring with unsigned exponential-Golomb code for integer i. Raises CreationError if i < 0. """ if i < 0: raise CreationError("Cannot use negative initialiser for unsigned " "exponential-Golomb.") if not i: self._setbin_unsafe('1') return tmp = i + 1 leadingzeros = -1 while tmp > 0: tmp >>= 1 leadingzeros += 1 remainingpart = i + 1 - (1 << leadingzeros) binstring = '0' * leadingzeros + '1' + Bits(uint=remainingpart, length=leadingzeros).bin self._setbin_unsafe(binstring)
Initialise bitstring with unsigned exponential-Golomb code for integer i. Raises CreationError if i < 0.
Below is the instruction that describes the task: ### Input: Initialise bitstring with unsigned exponential-Golomb code for integer i. Raises CreationError if i < 0. ### Response: def _setue(self, i): """Initialise bitstring with unsigned exponential-Golomb code for integer i. Raises CreationError if i < 0. """ if i < 0: raise CreationError("Cannot use negative initialiser for unsigned " "exponential-Golomb.") if not i: self._setbin_unsafe('1') return tmp = i + 1 leadingzeros = -1 while tmp > 0: tmp >>= 1 leadingzeros += 1 remainingpart = i + 1 - (1 << leadingzeros) binstring = '0' * leadingzeros + '1' + Bits(uint=remainingpart, length=leadingzeros).bin self._setbin_unsafe(binstring)
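The bit pattern produced by `_setue` can be checked with a plain string encoder that follows the same three steps — count the bits of `i + 1`, keep the low-order remainder, and prefix with that many zeros. A sketch with a hypothetical helper name:

```python
def ue_bits(i):
    """Unsigned exponential-Golomb code for i >= 0, as a '0'/'1' string."""
    if i < 0:
        raise ValueError("cannot encode a negative value")
    if i == 0:
        return "1"
    leadingzeros = (i + 1).bit_length() - 1   # floor(log2(i + 1))
    remaining = i + 1 - (1 << leadingzeros)   # low-order bits of i + 1
    return "0" * leadingzeros + "1" + format(remaining, "0{}b".format(leadingzeros))
```

Codes grow slowly with the value, which is why the scheme suits data dominated by small integers.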
def encipher(self, string): """Encipher string using Playfair cipher according to initialised key. Punctuation and whitespace are removed from the input. If the input plaintext is not an even number of characters, an 'X' will be appended. Example:: ciphertext = Playfair(key='zgptfoihmuwdrcnykeqaxvsbl').encipher(plaintext) :param string: The string to encipher. :returns: The enciphered string. """ string = self.remove_punctuation(string) string = re.sub(r'[J]', 'I', string) if len(string) % 2 == 1: string += 'X' ret = '' for c in range(0, len(string), 2): ret += self.encipher_pair(string[c], string[c + 1]) return ret
Encipher string using Playfair cipher according to initialised key. Punctuation and whitespace are removed from the input. If the input plaintext is not an even number of characters, an 'X' will be appended. Example:: ciphertext = Playfair(key='zgptfoihmuwdrcnykeqaxvsbl').encipher(plaintext) :param string: The string to encipher. :returns: The enciphered string.
Below is the instruction that describes the task: ### Input: Encipher string using Playfair cipher according to initialised key. Punctuation and whitespace are removed from the input. If the input plaintext is not an even number of characters, an 'X' will be appended. Example:: ciphertext = Playfair(key='zgptfoihmuwdrcnykeqaxvsbl').encipher(plaintext) :param string: The string to encipher. :returns: The enciphered string. ### Response: def encipher(self, string): """Encipher string using Playfair cipher according to initialised key. Punctuation and whitespace are removed from the input. If the input plaintext is not an even number of characters, an 'X' will be appended. Example:: ciphertext = Playfair(key='zgptfoihmuwdrcnykeqaxvsbl').encipher(plaintext) :param string: The string to encipher. :returns: The enciphered string. """ string = self.remove_punctuation(string) string = re.sub(r'[J]', 'I', string) if len(string) % 2 == 1: string += 'X' ret = '' for c in range(0, len(string), 2): ret += self.encipher_pair(string[c], string[c + 1]) return ret
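The input normalization above — strip non-letters, fold J into I for the 25-letter square, pad odd-length input with X — can be pulled out into a standalone helper. A sketch with a hypothetical name; the uppercasing step is an added assumption, not something the original method does:

```python
import re

def playfair_prepare(text):
    """Normalize plaintext for a Playfair digraph cipher."""
    text = re.sub(r"[^A-Z]", "", text.upper())  # keep letters only
    text = text.replace("J", "I")               # 25-letter square: J folds into I
    if len(text) % 2 == 1:
        text += "X"                             # digraphs need an even length
    return text
```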
def getLabel(self, key): """Gets the label assigned to an axes :param key:??? :type key: str """ axisItem = self.getPlotItem().axes[key]['item'] return axisItem.label.toPlainText()
Gets the label assigned to an axes :param key:??? :type key: str
Below is the instruction that describes the task: ### Input: Gets the label assigned to an axes :param key:??? :type key: str ### Response: def getLabel(self, key): """Gets the label assigned to an axes :param key:??? :type key: str """ axisItem = self.getPlotItem().axes[key]['item'] return axisItem.label.toPlainText()
def _set_show_mpls_rsvp_neighbor(self, v, load=False): """ Setter method for show_mpls_rsvp_neighbor, mapped from YANG variable /brocade_mpls_rpc/show_mpls_rsvp_neighbor (rpc) If this variable is read-only (config: false) in the source YANG file, then _set_show_mpls_rsvp_neighbor is considered as a private method. Backends looking to populate this variable should do so via calling thisObj._set_show_mpls_rsvp_neighbor() directly. """ if hasattr(v, "_utype"): v = v._utype(v) try: t = YANGDynClass(v,base=show_mpls_rsvp_neighbor.show_mpls_rsvp_neighbor, is_leaf=True, yang_name="show-mpls-rsvp-neighbor", rest_name="show-mpls-rsvp-neighbor", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, extensions={u'tailf-common': {u'hidden': u'full', u'actionpoint': u'showMplsRsvpNeighbor'}}, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='rpc', is_config=True) except (TypeError, ValueError): raise ValueError({ 'error-string': """show_mpls_rsvp_neighbor must be of a type compatible with rpc""", 'defined-type': "rpc", 'generated-type': """YANGDynClass(base=show_mpls_rsvp_neighbor.show_mpls_rsvp_neighbor, is_leaf=True, yang_name="show-mpls-rsvp-neighbor", rest_name="show-mpls-rsvp-neighbor", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, extensions={u'tailf-common': {u'hidden': u'full', u'actionpoint': u'showMplsRsvpNeighbor'}}, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='rpc', is_config=True)""", }) self.__show_mpls_rsvp_neighbor = t if hasattr(self, '_set'): self._set()
Setter method for show_mpls_rsvp_neighbor, mapped from YANG variable /brocade_mpls_rpc/show_mpls_rsvp_neighbor (rpc) If this variable is read-only (config: false) in the source YANG file, then _set_show_mpls_rsvp_neighbor is considered as a private method. Backends looking to populate this variable should do so via calling thisObj._set_show_mpls_rsvp_neighbor() directly.
Below is the instruction that describes the task: ### Input: Setter method for show_mpls_rsvp_neighbor, mapped from YANG variable /brocade_mpls_rpc/show_mpls_rsvp_neighbor (rpc) If this variable is read-only (config: false) in the source YANG file, then _set_show_mpls_rsvp_neighbor is considered as a private method. Backends looking to populate this variable should do so via calling thisObj._set_show_mpls_rsvp_neighbor() directly. ### Response: def _set_show_mpls_rsvp_neighbor(self, v, load=False): """ Setter method for show_mpls_rsvp_neighbor, mapped from YANG variable /brocade_mpls_rpc/show_mpls_rsvp_neighbor (rpc) If this variable is read-only (config: false) in the source YANG file, then _set_show_mpls_rsvp_neighbor is considered as a private method. Backends looking to populate this variable should do so via calling thisObj._set_show_mpls_rsvp_neighbor() directly. """ if hasattr(v, "_utype"): v = v._utype(v) try: t = YANGDynClass(v,base=show_mpls_rsvp_neighbor.show_mpls_rsvp_neighbor, is_leaf=True, yang_name="show-mpls-rsvp-neighbor", rest_name="show-mpls-rsvp-neighbor", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, extensions={u'tailf-common': {u'hidden': u'full', u'actionpoint': u'showMplsRsvpNeighbor'}}, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='rpc', is_config=True) except (TypeError, ValueError): raise ValueError({ 'error-string': """show_mpls_rsvp_neighbor must be of a type compatible with rpc""", 'defined-type': "rpc", 'generated-type': """YANGDynClass(base=show_mpls_rsvp_neighbor.show_mpls_rsvp_neighbor, is_leaf=True, yang_name="show-mpls-rsvp-neighbor", rest_name="show-mpls-rsvp-neighbor", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, extensions={u'tailf-common': {u'hidden': u'full', u'actionpoint': u'showMplsRsvpNeighbor'}}, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='rpc', is_config=True)""", }) self.__show_mpls_rsvp_neighbor = t if hasattr(self, '_set'): self._set()
def run(command=None, *arguments): """ Run the given command. Parameters: :param command: A string describing a command. :param arguments: A list of strings describing arguments to the command. """ if command is None: sys.exit('django-shortcuts: No argument was supplied, please specify one.') if command in ALIASES: command = ALIASES[command] if command == 'startproject': return call('django-admin.py startproject %s' % ' '.join(arguments), shell=True) script_path = os.getcwd() while not os.path.exists(os.path.join(script_path, 'manage.py')): base_dir = os.path.dirname(script_path) if base_dir != script_path: script_path = base_dir else: sys.exit('django-shortcuts: No \'manage.py\' script found in this directory or its parents.') return call('%(python)s %(script_path)s %(command)s %(arguments)s' % { 'python': sys.executable, 'script_path': os.path.join(script_path, 'manage.py'), 'command': command or '', 'arguments': ' '.join(arguments) }, shell=True)
Run the given command. Parameters: :param command: A string describing a command. :param arguments: A list of strings describing arguments to the command.
Below is the instruction that describes the task: ### Input: Run the given command. Parameters: :param command: A string describing a command. :param arguments: A list of strings describing arguments to the command. ### Response: def run(command=None, *arguments): """ Run the given command. Parameters: :param command: A string describing a command. :param arguments: A list of strings describing arguments to the command. """ if command is None: sys.exit('django-shortcuts: No argument was supplied, please specify one.') if command in ALIASES: command = ALIASES[command] if command == 'startproject': return call('django-admin.py startproject %s' % ' '.join(arguments), shell=True) script_path = os.getcwd() while not os.path.exists(os.path.join(script_path, 'manage.py')): base_dir = os.path.dirname(script_path) if base_dir != script_path: script_path = base_dir else: sys.exit('django-shortcuts: No \'manage.py\' script found in this directory or its parents.') return call('%(python)s %(script_path)s %(command)s %(arguments)s' % { 'python': sys.executable, 'script_path': os.path.join(script_path, 'manage.py'), 'command': command or '', 'arguments': ' '.join(arguments) }, shell=True)
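The directory walk in `run` terminates at the filesystem root because `os.path.dirname` is a fixed point there (`dirname('/') == '/'`). The search itself can be factored into a small helper — hypothetical name, exercised on a throwaway temporary tree:

```python
import os
import tempfile

def find_upwards(filename, start):
    """Walk up from start until filename is found; return its directory or None."""
    path = start
    while not os.path.exists(os.path.join(path, filename)):
        parent = os.path.dirname(path)
        if parent == path:  # dirname stopped changing: filesystem root reached
            return None
        path = parent
    return path

# Build a throwaway tree: root/manage.py with a nested working directory below it.
root = tempfile.mkdtemp()
open(os.path.join(root, "manage.py"), "w").close()
nested = os.path.join(root, "app", "sub")
os.makedirs(nested)
```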
def range_metadata(start, end, dst_folder, num_worker_threads=0, writers=[file_writer], geometry_check=None): """ Extra metadata for all products in a date range """ assert isinstance(start, date) assert isinstance(end, date) delta = end - start dates = [] for i in range(delta.days + 1): dates.append(start + timedelta(days=i)) days = len(dates) total_counter = { 'days': days, 'products': 0, 'saved_tiles': 0, 'skipped_tiles': 0, 'skipped_tiles_paths': [] } def update_counter(counter): for key in iterkeys(total_counter): if key in counter: total_counter[key] += counter[key] for d in dates: logger.info('Getting metadata of {0}-{1}-{2}'.format(d.year, d.month, d.day)) update_counter(daily_metadata(d.year, d.month, d.day, dst_folder, writers, geometry_check, num_worker_threads)) return total_counter
Extra metadata for all products in a date range
Below is the instruction that describes the task: ### Input: Extra metadata for all products in a date range ### Response: def range_metadata(start, end, dst_folder, num_worker_threads=0, writers=[file_writer], geometry_check=None): """ Extra metadata for all products in a date range """ assert isinstance(start, date) assert isinstance(end, date) delta = end - start dates = [] for i in range(delta.days + 1): dates.append(start + timedelta(days=i)) days = len(dates) total_counter = { 'days': days, 'products': 0, 'saved_tiles': 0, 'skipped_tiles': 0, 'skipped_tiles_paths': [] } def update_counter(counter): for key in iterkeys(total_counter): if key in counter: total_counter[key] += counter[key] for d in dates: logger.info('Getting metadata of {0}-{1}-{2}'.format(d.year, d.month, d.day)) update_counter(daily_metadata(d.year, d.month, d.day, dst_folder, writers, geometry_check, num_worker_threads)) return total_counter
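The per-day expansion above is the usual inclusive-range idiom: `delta.days + 1` steps of one day each, so both endpoints are included. As a standalone sketch (hypothetical name):

```python
from datetime import date, timedelta

def days_between(start, end):
    """Every date from start to end, inclusive of both endpoints."""
    # (end - start).days is the gap in days; +1 makes the range inclusive.
    return [start + timedelta(days=i) for i in range((end - start).days + 1)]
```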
def list_product_versions_for_build_configuration(id=None, name=None, page_size=200, page_index=0, sort="", q=""): """ List all ProductVersions associated with a BuildConfiguration """ data = list_product_versions_for_build_configuration_raw(id, name, page_size, page_index, sort, q) if data: return utils.format_json_list(data)
List all ProductVersions associated with a BuildConfiguration
Below is the instruction that describes the task: ### Input: List all ProductVersions associated with a BuildConfiguration ### Response: def list_product_versions_for_build_configuration(id=None, name=None, page_size=200, page_index=0, sort="", q=""): """ List all ProductVersions associated with a BuildConfiguration """ data = list_product_versions_for_build_configuration_raw(id, name, page_size, page_index, sort, q) if data: return utils.format_json_list(data)
def _get_default_help_message(func, args, description=None, args_help=None): """Create a default description for the parser and help message for the arguments if they are missing. Args: func: the method we are creating a parser for args: the argument names of the method description: a potentially existing description created from the function docstring args_help: a dict {arg_name: help} with potentially missing arguments Returns: a tuple (arg_parse_description, complete_args_help) """ if description is None: description = "Argument parsing for %s" % func.__name__ args_help = args_help or {} # If an argument is missing a help message we create a simple one for argument in [arg_name for arg_name in args if arg_name not in args_help]: args_help[argument] = "Help message for %s" % argument return (description, args_help)
Create a default description for the parser and help message for the arguments if they are missing. Args: func: the method we are creating a parser for args: the argument names of the method description: a potentially existing description created from the function docstring args_help: a dict {arg_name: help} with potentially missing arguments Returns: a tuple (arg_parse_description, complete_args_help)
Below is the the instruction that describes the task: ### Input: Create a default description for the parser and help message for the agurments if they are missing. Args: func: the method we are creating a parser for args: the argument names of the method description: a potentially existing description created from the function docstring args_help: a dict {arg_name: help} with potentially missing arguments Returns: a tuple (arg_parse_description, complete_args_help) ### Response: def _get_default_help_message(func, args, description=None, args_help=None): """Create a default description for the parser and help message for the agurments if they are missing. Args: func: the method we are creating a parser for args: the argument names of the method description: a potentially existing description created from the function docstring args_help: a dict {arg_name: help} with potentially missing arguments Returns: a tuple (arg_parse_description, complete_args_help) """ if description is None: description = "Argument parsing for %s" % func.__name__ args_help = args_help or {} # If an argument is missing a help message we create a simple one for argument in [arg_name for arg_name in args if arg_name not in args_help]: args_help[argument] = "Help message for %s" % argument return (description, args_help)
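The defaulting behaviour of `_get_default_help_message` can be exercised in isolation. In this sketch the function body is reproduced from the record above; the call with `len` and the argument names `count`/`verbose` are illustrative stand-ins, not from the original codebase:

```python
def _get_default_help_message(func, args, description=None, args_help=None):
    """Fill in a default parser description and per-argument help where missing."""
    if description is None:
        description = "Argument parsing for %s" % func.__name__
    args_help = args_help or {}
    # If an argument is missing a help message we create a simple one
    for argument in [arg_name for arg_name in args if arg_name not in args_help]:
        args_help[argument] = "Help message for %s" % argument
    return (description, args_help)

desc, helps = _get_default_help_message(
    len, ["count", "verbose"], args_help={"count": "how many items"})
# desc  -> "Argument parsing for len"
# helps -> {"count": "how many items", "verbose": "Help message for verbose"}
```

Note that supplied help strings are preserved and only the missing ones are filled in.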
def handle_error(self, error=None): """Trap for TCPClient errors, otherwise continue.""" if _debug: TCPClient._debug("handle_error %r", error) # core does not take parameters asyncore.dispatcher.handle_error(self)
Trap for TCPClient errors, otherwise continue.
Below is the instruction that describes the task: ### Input: Trap for TCPClient errors, otherwise continue. ### Response: def handle_error(self, error=None): """Trap for TCPClient errors, otherwise continue.""" if _debug: TCPClient._debug("handle_error %r", error) # core does not take parameters asyncore.dispatcher.handle_error(self)
def prep_bbox(sess, x, y, x_train, y_train, x_test, y_test, nb_epochs, batch_size, learning_rate, rng, nb_classes=10, img_rows=28, img_cols=28, nchannels=1): """ Define and train a model that simulates the "remote" black-box oracle described in the original paper. :param sess: the TF session :param x: the input placeholder for MNIST :param y: the output placeholder for MNIST :param x_train: the training data for the oracle :param y_train: the training labels for the oracle :param x_test: the testing data for the oracle :param y_test: the testing labels for the oracle :param nb_epochs: number of epochs to train model :param batch_size: size of training batches :param learning_rate: learning rate for training :param rng: numpy.random.RandomState :return: """ # Define TF model graph (for the black-box model) nb_filters = 64 model = ModelBasicCNN('model1', nb_classes, nb_filters) loss = CrossEntropy(model, smoothing=0.1) predictions = model.get_logits(x) print("Defined TensorFlow model graph.") # Train an MNIST model train_params = { 'nb_epochs': nb_epochs, 'batch_size': batch_size, 'learning_rate': learning_rate } train(sess, loss, x_train, y_train, args=train_params, rng=rng) # Print out the accuracy on legitimate data eval_params = {'batch_size': batch_size} accuracy = model_eval(sess, x, y, predictions, x_test, y_test, args=eval_params) print('Test accuracy of black-box on legitimate test ' 'examples: ' + str(accuracy)) return model, predictions, accuracy
Define and train a model that simulates the "remote" black-box oracle described in the original paper. :param sess: the TF session :param x: the input placeholder for MNIST :param y: the output placeholder for MNIST :param x_train: the training data for the oracle :param y_train: the training labels for the oracle :param x_test: the testing data for the oracle :param y_test: the testing labels for the oracle :param nb_epochs: number of epochs to train model :param batch_size: size of training batches :param learning_rate: learning rate for training :param rng: numpy.random.RandomState :return:
Below is the instruction that describes the task: ### Input: Define and train a model that simulates the "remote" black-box oracle described in the original paper. :param sess: the TF session :param x: the input placeholder for MNIST :param y: the output placeholder for MNIST :param x_train: the training data for the oracle :param y_train: the training labels for the oracle :param x_test: the testing data for the oracle :param y_test: the testing labels for the oracle :param nb_epochs: number of epochs to train model :param batch_size: size of training batches :param learning_rate: learning rate for training :param rng: numpy.random.RandomState :return: ### Response: def prep_bbox(sess, x, y, x_train, y_train, x_test, y_test, nb_epochs, batch_size, learning_rate, rng, nb_classes=10, img_rows=28, img_cols=28, nchannels=1): """ Define and train a model that simulates the "remote" black-box oracle described in the original paper. :param sess: the TF session :param x: the input placeholder for MNIST :param y: the output placeholder for MNIST :param x_train: the training data for the oracle :param y_train: the training labels for the oracle :param x_test: the testing data for the oracle :param y_test: the testing labels for the oracle :param nb_epochs: number of epochs to train model :param batch_size: size of training batches :param learning_rate: learning rate for training :param rng: numpy.random.RandomState :return: """ # Define TF model graph (for the black-box model) nb_filters = 64 model = ModelBasicCNN('model1', nb_classes, nb_filters) loss = CrossEntropy(model, smoothing=0.1) predictions = model.get_logits(x) print("Defined TensorFlow model graph.") # Train an MNIST model train_params = { 'nb_epochs': nb_epochs, 'batch_size': batch_size, 'learning_rate': learning_rate } train(sess, loss, x_train, y_train, args=train_params, rng=rng) # Print out the accuracy on legitimate data eval_params = {'batch_size': batch_size} accuracy = model_eval(sess, x, y, predictions, x_test, y_test, args=eval_params) print('Test accuracy of black-box on legitimate test ' 'examples: ' + str(accuracy)) return model, predictions, accuracy
def _verifySender(self, sender): """ Verify that this sender is valid. """ if self.store.findFirst( LoginMethod, AND(LoginMethod.localpart == sender.localpart, LoginMethod.domain == sender.domain, LoginMethod.internal == True)) is None: raise BadSender(sender.localpart + u'@' + sender.domain, [lm.localpart + u'@' + lm.domain for lm in self.store.query( LoginMethod, LoginMethod.internal == True)])
Verify that this sender is valid.
Below is the instruction that describes the task: ### Input: Verify that this sender is valid. ### Response: def _verifySender(self, sender): """ Verify that this sender is valid. """ if self.store.findFirst( LoginMethod, AND(LoginMethod.localpart == sender.localpart, LoginMethod.domain == sender.domain, LoginMethod.internal == True)) is None: raise BadSender(sender.localpart + u'@' + sender.domain, [lm.localpart + u'@' + lm.domain for lm in self.store.query( LoginMethod, LoginMethod.internal == True)])
def tagImplicitly(self, superTag): """Return implicitly tagged *TagSet* Create a new *TagSet* representing callee *TagSet* implicitly tagged with passed tag(s). With implicit tagging mode, new tag(s) replace the last existing tag. Parameters ---------- superTag: :class:`~pyasn1.type.tag.Tag` *Tag* object to tag this *TagSet* Returns ------- : :class:`~pyasn1.type.tag.TagSet` New *TagSet* object """ if self.__superTags: superTag = Tag(superTag.tagClass, self.__superTags[-1].tagFormat, superTag.tagId) return self[:-1] + superTag
Return implicitly tagged *TagSet* Create a new *TagSet* representing callee *TagSet* implicitly tagged with passed tag(s). With implicit tagging mode, new tag(s) replace the last existing tag. Parameters ---------- superTag: :class:`~pyasn1.type.tag.Tag` *Tag* object to tag this *TagSet* Returns ------- : :class:`~pyasn1.type.tag.TagSet` New *TagSet* object
Below is the instruction that describes the task: ### Input: Return implicitly tagged *TagSet* Create a new *TagSet* representing callee *TagSet* implicitly tagged with passed tag(s). With implicit tagging mode, new tag(s) replace the last existing tag. Parameters ---------- superTag: :class:`~pyasn1.type.tag.Tag` *Tag* object to tag this *TagSet* Returns ------- : :class:`~pyasn1.type.tag.TagSet` New *TagSet* object ### Response: def tagImplicitly(self, superTag): """Return implicitly tagged *TagSet* Create a new *TagSet* representing callee *TagSet* implicitly tagged with passed tag(s). With implicit tagging mode, new tag(s) replace the last existing tag. Parameters ---------- superTag: :class:`~pyasn1.type.tag.Tag` *Tag* object to tag this *TagSet* Returns ------- : :class:`~pyasn1.type.tag.TagSet` New *TagSet* object """ if self.__superTags: superTag = Tag(superTag.tagClass, self.__superTags[-1].tagFormat, superTag.tagId) return self[:-1] + superTag
def split_len(s, length): """split string *s* into list of strings no longer than *length*""" return [s[i:i+length] for i in range(0, len(s), length)]
split string *s* into list of strings no longer than *length*
Below is the instruction that describes the task: ### Input: split string *s* into list of strings no longer than *length* ### Response: def split_len(s, length): """split string *s* into list of strings no longer than *length*""" return [s[i:i+length] for i in range(0, len(s), length)]
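Since `split_len` is a pure helper, its behaviour is easy to pin down with a couple of calls (the function body is reproduced from the record above; the input strings are illustrative):

```python
def split_len(s, length):
    """split string *s* into list of strings no longer than *length*"""
    return [s[i:i+length] for i in range(0, len(s), length)]

chunks = split_len("abcdefg", 3)   # -> ["abc", "def", "g"]
empty = split_len("", 3)           # -> []
```

The last chunk may be shorter than `length`, and an empty string yields an empty list rather than a list containing one empty string.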
def _ProcessMessageHandlerRequests(self, requests): """Processes message handler requests.""" logging.debug("Leased message handler request ids: %s", ",".join(str(r.request_id) for r in requests)) grouped_requests = collection.Group(requests, lambda r: r.handler_name) for handler_name, requests_for_handler in iteritems(grouped_requests): handler_cls = handler_registry.handler_name_map.get(handler_name) if not handler_cls: logging.error("Unknown message handler: %s", handler_name) continue stats_collector_instance.Get().IncrementCounter( "well_known_flow_requests", fields=[handler_name]) try: logging.debug("Running %d messages for handler %s", len(requests_for_handler), handler_name) handler_cls(token=self.token).ProcessMessages(requests_for_handler) except Exception as e: # pylint: disable=broad-except logging.exception("Exception while processing message handler %s: %s", handler_name, e) logging.debug("Deleting message handler request ids: %s", ",".join(str(r.request_id) for r in requests)) data_store.REL_DB.DeleteMessageHandlerRequests(requests)
Processes message handler requests.
Below is the instruction that describes the task: ### Input: Processes message handler requests. ### Response: def _ProcessMessageHandlerRequests(self, requests): """Processes message handler requests.""" logging.debug("Leased message handler request ids: %s", ",".join(str(r.request_id) for r in requests)) grouped_requests = collection.Group(requests, lambda r: r.handler_name) for handler_name, requests_for_handler in iteritems(grouped_requests): handler_cls = handler_registry.handler_name_map.get(handler_name) if not handler_cls: logging.error("Unknown message handler: %s", handler_name) continue stats_collector_instance.Get().IncrementCounter( "well_known_flow_requests", fields=[handler_name]) try: logging.debug("Running %d messages for handler %s", len(requests_for_handler), handler_name) handler_cls(token=self.token).ProcessMessages(requests_for_handler) except Exception as e: # pylint: disable=broad-except logging.exception("Exception while processing message handler %s: %s", handler_name, e) logging.debug("Deleting message handler request ids: %s", ",".join(str(r.request_id) for r in requests)) data_store.REL_DB.DeleteMessageHandlerRequests(requests)
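`collection.Group` is an internal helper of that codebase; assuming it simply buckets items by a key function (an assumption, not the verified implementation), the group-then-dispatch pattern above can be sketched with a plain dict:

```python
from collections import defaultdict

def group(items, key):
    """Bucket items into lists keyed by key(item) -- a stand-in for collection.Group."""
    grouped = defaultdict(list)
    for item in items:
        grouped[key(item)].append(item)
    return dict(grouped)

# Hypothetical requests as (handler_name, request_id) pairs.
requests = [("EmailHandler", 1), ("StatsHandler", 2), ("EmailHandler", 3)]
by_handler = group(requests, key=lambda r: r[0])
# -> {"EmailHandler": [("EmailHandler", 1), ("EmailHandler", 3)],
#     "StatsHandler": [("StatsHandler", 2)]}
```

Each handler class is then instantiated once per bucket and fed all of its requests in a single `ProcessMessages` call, rather than once per request.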
def get_gpu_requirements(gpus_reqs): """ Extracts the GPU requirements from a requirements dictionary as a list of GPURequirements. :param gpus_reqs: A dictionary {'count': <count>} or a list [{min_vram: <min_vram>}, {min_vram: <min_vram>}, ...] :return: A list of GPURequirements """ requirements = [] if gpus_reqs: if type(gpus_reqs) is dict: count = gpus_reqs.get('count') if count: for i in range(count): requirements.append(GPURequirement()) elif type(gpus_reqs) is list: for gpu_req in gpus_reqs: requirements.append(GPURequirement(min_vram=gpu_req['minVram'])) return requirements else: # If no requirements are supplied return []
Extracts the GPU requirements from a requirements dictionary as a list of GPURequirements. :param gpus_reqs: A dictionary {'count': <count>} or a list [{min_vram: <min_vram>}, {min_vram: <min_vram>}, ...] :return: A list of GPURequirements
Below is the instruction that describes the task: ### Input: Extracts the GPU requirements from a requirements dictionary as a list of GPURequirements. :param gpus_reqs: A dictionary {'count': <count>} or a list [{min_vram: <min_vram>}, {min_vram: <min_vram>}, ...] :return: A list of GPURequirements ### Response: def get_gpu_requirements(gpus_reqs): """ Extracts the GPU requirements from a requirements dictionary as a list of GPURequirements. :param gpus_reqs: A dictionary {'count': <count>} or a list [{min_vram: <min_vram>}, {min_vram: <min_vram>}, ...] :return: A list of GPURequirements """ requirements = [] if gpus_reqs: if type(gpus_reqs) is dict: count = gpus_reqs.get('count') if count: for i in range(count): requirements.append(GPURequirement()) elif type(gpus_reqs) is list: for gpu_req in gpus_reqs: requirements.append(GPURequirement(min_vram=gpu_req['minVram'])) return requirements else: # If no requirements are supplied return []
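The dict/list/empty branching of `get_gpu_requirements` can be exercised with a minimal stand-in for `GPURequirement` (the real class is not shown in the record, so the stub below is an assumption that it stores a `min_vram` attribute):

```python
class GPURequirement:
    """Minimal stand-in for the real GPURequirement class."""
    def __init__(self, min_vram=None):
        self.min_vram = min_vram

def get_gpu_requirements(gpus_reqs):
    requirements = []
    if gpus_reqs:
        if type(gpus_reqs) is dict:
            count = gpus_reqs.get('count')
            if count:
                for i in range(count):
                    requirements.append(GPURequirement())
        elif type(gpus_reqs) is list:
            for gpu_req in gpus_reqs:
                requirements.append(GPURequirement(min_vram=gpu_req['minVram']))
        return requirements
    else:
        # If no requirements are supplied
        return []

counted = get_gpu_requirements({'count': 2})        # two default requirements
listed = get_gpu_requirements([{'minVram': 4096}])  # one requirement with min_vram
nothing = get_gpu_requirements(None)                # -> []
```

A count-style dict yields that many unconstrained requirements, while the list form carries a per-GPU VRAM minimum through to each object.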
def next_instruction_in_row(self): """The instruction after this one or None. :return: the instruction in :attr:`row_instructions` after this or :obj:`None` if this is the last :rtype: knittingpattern.Instruction.InstructionInRow This can be used to traverse the instructions. .. seealso:: :attr:`previous_instruction_in_row` """ index = self.index_in_row + 1 if index >= len(self.row_instructions): return None return self.row_instructions[index]
The instruction after this one or None. :return: the instruction in :attr:`row_instructions` after this or :obj:`None` if this is the last :rtype: knittingpattern.Instruction.InstructionInRow This can be used to traverse the instructions. .. seealso:: :attr:`previous_instruction_in_row`
Below is the instruction that describes the task: ### Input: The instruction after this one or None. :return: the instruction in :attr:`row_instructions` after this or :obj:`None` if this is the last :rtype: knittingpattern.Instruction.InstructionInRow This can be used to traverse the instructions. .. seealso:: :attr:`previous_instruction_in_row` ### Response: def next_instruction_in_row(self): """The instruction after this one or None. :return: the instruction in :attr:`row_instructions` after this or :obj:`None` if this is the last :rtype: knittingpattern.Instruction.InstructionInRow This can be used to traverse the instructions. .. seealso:: :attr:`previous_instruction_in_row` """ index = self.index_in_row + 1 if index >= len(self.row_instructions): return None return self.row_instructions[index]
def format_help(self) -> str: """Copy of format_help() from argparse.ArgumentParser with tweaks to separately display required parameters""" formatter = self._get_formatter() # usage formatter.add_usage(self.usage, self._actions, self._mutually_exclusive_groups) # description formatter.add_text(self.description) # Begin cmd2 customization (separate required and optional arguments) # positionals, optionals and user-defined groups for action_group in self._action_groups: if action_group.title == 'optional arguments': # check if the arguments are required, group accordingly req_args = [] opt_args = [] for action in action_group._group_actions: if action.required: req_args.append(action) else: opt_args.append(action) # separately display required arguments formatter.start_section('required arguments') formatter.add_text(action_group.description) formatter.add_arguments(req_args) formatter.end_section() # now display truly optional arguments formatter.start_section(action_group.title) formatter.add_text(action_group.description) formatter.add_arguments(opt_args) formatter.end_section() else: formatter.start_section(action_group.title) formatter.add_text(action_group.description) formatter.add_arguments(action_group._group_actions) formatter.end_section() # End cmd2 customization # epilog formatter.add_text(self.epilog) # determine help from format above return formatter.format_help()
Copy of format_help() from argparse.ArgumentParser with tweaks to separately display required parameters
Below is the the instruction that describes the task: ### Input: Copy of format_help() from argparse.ArgumentParser with tweaks to separately display required parameters ### Response: def format_help(self) -> str: """Copy of format_help() from argparse.ArgumentParser with tweaks to separately display required parameters""" formatter = self._get_formatter() # usage formatter.add_usage(self.usage, self._actions, self._mutually_exclusive_groups) # description formatter.add_text(self.description) # Begin cmd2 customization (separate required and optional arguments) # positionals, optionals and user-defined groups for action_group in self._action_groups: if action_group.title == 'optional arguments': # check if the arguments are required, group accordingly req_args = [] opt_args = [] for action in action_group._group_actions: if action.required: req_args.append(action) else: opt_args.append(action) # separately display required arguments formatter.start_section('required arguments') formatter.add_text(action_group.description) formatter.add_arguments(req_args) formatter.end_section() # now display truly optional arguments formatter.start_section(action_group.title) formatter.add_text(action_group.description) formatter.add_arguments(opt_args) formatter.end_section() else: formatter.start_section(action_group.title) formatter.add_text(action_group.description) formatter.add_arguments(action_group._group_actions) formatter.end_section() # End cmd2 customization # epilog formatter.add_text(self.epilog) # determine help from format above return formatter.format_help()
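The partition step at the heart of that `format_help` override can be seen on a stock `argparse` parser. Note that `_action_groups` and `_group_actions` are private argparse internals (exactly what the code above relies on), and that the default flag group is titled 'options' on Python 3.10+ rather than 'optional arguments', which the sketch below accounts for:

```python
import argparse

parser = argparse.ArgumentParser(prog='demo')
parser.add_argument('--name', required=True, help='a required flag')
parser.add_argument('--verbose', action='store_true', help='a truly optional flag')

# Find the stock group that mixes required and optional flags together.
opt_group = next(g for g in parser._action_groups
                 if g.title in ('optional arguments', 'options'))
required = [a.dest for a in opt_group._group_actions if a.required]
optional = [a.dest for a in opt_group._group_actions if not a.required]
# required -> ['name']; optional includes 'help' and 'verbose'
```

Splitting on `action.required` is all the override does differently; the rest of the method mirrors the upstream `format_help` structure.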
def _cmd(self, endpoint, cmd): """ endpoint is (host, port) """ cmdbuf = "%s\n" % (cmd) # some cmds have large outputs and ZK closes the connection as soon as it # finishes writing. so read in huge chunks. recvsize = 1 << 20 replies = [] host, port = endpoint ips = get_ips(host, port) if len(ips) == 0: raise self.CmdFailed("Failed to resolve: %s" % (host)) for ip in ips: try: with connected_socket((ip, port)) as sock: sock.send(cmdbuf.encode()) while True: buf = sock.recv(recvsize).decode("utf-8") if buf == "": break replies.append(buf) except socket.error as ex: # if there's only 1 record, give up. # if there's more, keep trying. if len(ips) == 1: raise self.CmdFailed("Error(%s): %s" % (ip, ex)) return "".join(replies)
endpoint is (host, port)
Below is the instruction that describes the task: ### Input: endpoint is (host, port) ### Response: def _cmd(self, endpoint, cmd): """ endpoint is (host, port) """ cmdbuf = "%s\n" % (cmd) # some cmds have large outputs and ZK closes the connection as soon as it # finishes writing. so read in huge chunks. recvsize = 1 << 20 replies = [] host, port = endpoint ips = get_ips(host, port) if len(ips) == 0: raise self.CmdFailed("Failed to resolve: %s" % (host)) for ip in ips: try: with connected_socket((ip, port)) as sock: sock.send(cmdbuf.encode()) while True: buf = sock.recv(recvsize).decode("utf-8") if buf == "": break replies.append(buf) except socket.error as ex: # if there's only 1 record, give up. # if there's more, keep trying. if len(ips) == 1: raise self.CmdFailed("Error(%s): %s" % (ip, ex)) return "".join(replies)
def CreateGaugeMetadata(metric_name, value_type, fields=None, docstring=None, units=None): """Helper function for creating MetricMetadata for gauge metrics.""" return rdf_stats.MetricMetadata( varname=metric_name, metric_type=rdf_stats.MetricMetadata.MetricType.GAUGE, value_type=MetricValueTypeFromPythonType(value_type), fields_defs=FieldDefinitionProtosFromTuples(fields or []), docstring=docstring, units=units)
Helper function for creating MetricMetadata for gauge metrics.
Below is the instruction that describes the task: ### Input: Helper function for creating MetricMetadata for gauge metrics. ### Response: def CreateGaugeMetadata(metric_name, value_type, fields=None, docstring=None, units=None): """Helper function for creating MetricMetadata for gauge metrics.""" return rdf_stats.MetricMetadata( varname=metric_name, metric_type=rdf_stats.MetricMetadata.MetricType.GAUGE, value_type=MetricValueTypeFromPythonType(value_type), fields_defs=FieldDefinitionProtosFromTuples(fields or []), docstring=docstring, units=units)
def _wait_until_machine_finish(self): """ Internal method that waits until the machine finishes and kills the main process (booted) :return: None """ self.image._wait_for_machine_finish(self.name) # kill main run process self.start_process.kill() # TODO: there are some background processes, dbus async events or something similar, so it is better to wait # to give these async ops enough time to finish time.sleep(constants.DEFAULT_SLEEP)
Internal method that waits until the machine finishes and kills the main process (booted) :return: None
Below is the instruction that describes the task: ### Input: Internal method that waits until the machine finishes and kills the main process (booted) :return: None ### Response: def _wait_until_machine_finish(self): """ Internal method that waits until the machine finishes and kills the main process (booted) :return: None """ self.image._wait_for_machine_finish(self.name) # kill main run process self.start_process.kill() # TODO: there are some background processes, dbus async events or something similar, so it is better to wait # to give these async ops enough time to finish time.sleep(constants.DEFAULT_SLEEP)
def warning(message): """ prints a warning :param message: the message :return: """ message = message or "" if Console.color: Console.cprint('WARNING', "WARNING: ", message) else: print(Console.msg("WARNING: " + message))
prints a warning :param message: the message :return:
Below is the instruction that describes the task: ### Input: prints a warning :param message: the message :return: ### Response: def warning(message): """ prints a warning :param message: the message :return: """ message = message or "" if Console.color: Console.cprint('WARNING', "WARNING: ", message) else: print(Console.msg("WARNING: " + message))
def ConnectDevice(self, port_path=None, serial=None, default_timeout_ms=None, **kwargs): """Convenience function to set up a transport handle for the adb device from usb path or serial then connect to it. Args: port_path: The filename of usb port to use. serial: The serial number of the device to use. default_timeout_ms: The default timeout in milliseconds to use. kwargs: handle: Device handle to use (instance of common.TcpHandle or common.UsbHandle) banner: Connection banner to pass to the remote device rsa_keys: List of AuthSigner subclass instances to be used for authentication. The device can either accept one of these via the Sign method, or we will send the result of GetPublicKey from the first one if the device doesn't accept any of them. auth_timeout_ms: Timeout to wait for when sending a new public key. This is only relevant when we send a new public key. The device shows a dialog and this timeout is how long to wait for that dialog. If used in automation, this should be low to catch such a case as a failure quickly; while in interactive settings it should be high to allow users to accept the dialog. We default to automation here, so it's low by default. If serial specifies a TCP address:port, then a TCP connection is used instead of a USB connection. """ # If there isn't a handle override (used by tests), build one here if 'handle' in kwargs: self._handle = kwargs.pop('handle') else: # if necessary, convert serial to a unicode string if isinstance(serial, (bytes, bytearray)): serial = serial.decode('utf-8') if serial and ':' in serial: self._handle = common.TcpHandle(serial, timeout_ms=default_timeout_ms) else: self._handle = common.UsbHandle.FindAndOpen( DeviceIsAvailable, port_path=port_path, serial=serial, timeout_ms=default_timeout_ms) self._Connect(**kwargs) return self
Convenience function to set up a transport handle for the adb device from usb path or serial then connect to it. Args: port_path: The filename of usb port to use. serial: The serial number of the device to use. default_timeout_ms: The default timeout in milliseconds to use. kwargs: handle: Device handle to use (instance of common.TcpHandle or common.UsbHandle) banner: Connection banner to pass to the remote device rsa_keys: List of AuthSigner subclass instances to be used for authentication. The device can either accept one of these via the Sign method, or we will send the result of GetPublicKey from the first one if the device doesn't accept any of them. auth_timeout_ms: Timeout to wait for when sending a new public key. This is only relevant when we send a new public key. The device shows a dialog and this timeout is how long to wait for that dialog. If used in automation, this should be low to catch such a case as a failure quickly; while in interactive settings it should be high to allow users to accept the dialog. We default to automation here, so it's low by default. If serial specifies a TCP address:port, then a TCP connection is used instead of a USB connection.
Below is the instruction that describes the task: ### Input: Convenience function to set up a transport handle for the adb device from usb path or serial then connect to it. Args: port_path: The filename of usb port to use. serial: The serial number of the device to use. default_timeout_ms: The default timeout in milliseconds to use. kwargs: handle: Device handle to use (instance of common.TcpHandle or common.UsbHandle) banner: Connection banner to pass to the remote device rsa_keys: List of AuthSigner subclass instances to be used for authentication. The device can either accept one of these via the Sign method, or we will send the result of GetPublicKey from the first one if the device doesn't accept any of them. auth_timeout_ms: Timeout to wait for when sending a new public key. This is only relevant when we send a new public key. The device shows a dialog and this timeout is how long to wait for that dialog. If used in automation, this should be low to catch such a case as a failure quickly; while in interactive settings it should be high to allow users to accept the dialog. We default to automation here, so it's low by default. If serial specifies a TCP address:port, then a TCP connection is used instead of a USB connection. ### Response: def ConnectDevice(self, port_path=None, serial=None, default_timeout_ms=None, **kwargs): """Convenience function to set up a transport handle for the adb device from usb path or serial then connect to it. Args: port_path: The filename of usb port to use. serial: The serial number of the device to use. default_timeout_ms: The default timeout in milliseconds to use. kwargs: handle: Device handle to use (instance of common.TcpHandle or common.UsbHandle) banner: Connection banner to pass to the remote device rsa_keys: List of AuthSigner subclass instances to be used for authentication. The device can either accept one of these via the Sign method, or we will send the result of GetPublicKey from the first one if the device doesn't accept any of them. auth_timeout_ms: Timeout to wait for when sending a new public key. This is only relevant when we send a new public key. The device shows a dialog and this timeout is how long to wait for that dialog. If used in automation, this should be low to catch such a case as a failure quickly; while in interactive settings it should be high to allow users to accept the dialog. We default to automation here, so it's low by default. If serial specifies a TCP address:port, then a TCP connection is used instead of a USB connection. """ # If there isn't a handle override (used by tests), build one here if 'handle' in kwargs: self._handle = kwargs.pop('handle') else: # if necessary, convert serial to a unicode string if isinstance(serial, (bytes, bytearray)): serial = serial.decode('utf-8') if serial and ':' in serial: self._handle = common.TcpHandle(serial, timeout_ms=default_timeout_ms) else: self._handle = common.UsbHandle.FindAndOpen( DeviceIsAvailable, port_path=port_path, serial=serial, timeout_ms=default_timeout_ms) self._Connect(**kwargs) return self
def lexeme(p): """ From a parser (or string), make a parser that consumes whitespace on either side. """ if isinstance(p, str): p = string(p) return regex(r'\s*') >> p << regex(r'\s*')
From a parser (or string), make a parser that consumes whitespace on either side.
Below is the instruction that describes the task: ### Input: From a parser (or string), make a parser that consumes whitespace on either side. ### Response: def lexeme(p): """ From a parser (or string), make a parser that consumes whitespace on either side. """ if isinstance(p, str): p = string(p) return regex(r'\s*') >> p << regex(r'\s*')
def convert_old_variant_handle(handle_dict): """Convert a variant handle from serialize_version < 4.0.""" old_variables = handle_dict.get("variables", {}) variables = dict(repository_type="filesystem") for old_key, key in variant_key_conversions.iteritems(): value = old_variables.get(old_key) #if value is not None: variables[key] = value path = handle_dict["path"] filename = os.path.basename(path) if os.path.splitext(filename)[0] == "package": key = "filesystem.variant" else: key = "filesystem.variant.combined" return dict(key=key, variables=variables)
Convert a variant handle from serialize_version < 4.0.
Below is the instruction that describes the task: ### Input: Convert a variant handle from serialize_version < 4.0. ### Response: def convert_old_variant_handle(handle_dict): """Convert a variant handle from serialize_version < 4.0.""" old_variables = handle_dict.get("variables", {}) variables = dict(repository_type="filesystem") for old_key, key in variant_key_conversions.iteritems(): value = old_variables.get(old_key) #if value is not None: variables[key] = value path = handle_dict["path"] filename = os.path.basename(path) if os.path.splitext(filename)[0] == "package": key = "filesystem.variant" else: key = "filesystem.variant.combined" return dict(key=key, variables=variables)
def safe_decode(text, incoming=None, errors='strict'): """Decodes incoming text/bytes string using `incoming` if they're not already unicode. This function was copied from novaclient.openstack.strutils :param incoming: Text's current encoding :param errors: Errors handling policy. See here for valid values http://docs.python.org/2/library/codecs.html :returns: text or a unicode `incoming` encoded representation of it. :raises TypeError: If text is not an instance of str """ if not isinstance(text, (six.string_types, six.binary_type)): raise TypeError("%s can't be decoded" % type(text)) if isinstance(text, six.text_type): return text if not incoming: incoming = (sys.stdin.encoding or sys.getdefaultencoding()) try: return text.decode(incoming, errors) except UnicodeDecodeError: # Note(flaper87) If we get here, it means that # sys.stdin.encoding / sys.getdefaultencoding # didn't return a suitable encoding to decode # text. This happens mostly when global LANG # var is not set correctly and there's no # default encoding. In this case, most likely # python will use ASCII or ANSI encoders as # default encodings but they won't be capable # of decoding non-ASCII characters. # # Also, UTF-8 is being used since it's an ASCII # extension. return text.decode('utf-8', errors)
Decodes incoming text/bytes string using `incoming` if they're not already unicode. This function was copied from novaclient.openstack.strutils :param incoming: Text's current encoding :param errors: Errors handling policy. See here for valid values http://docs.python.org/2/library/codecs.html :returns: text or a unicode `incoming` encoded representation of it. :raises TypeError: If text is not an instance of str
Below is the instruction that describes the task:
### Input:
Decodes incoming text/bytes string using `incoming` if they're not
already unicode.

This function was copied from novaclient.openstack.strutils

:param incoming: Text's current encoding
:param errors: Errors handling policy. See here for valid
    values http://docs.python.org/2/library/codecs.html
:returns: text or a unicode `incoming` encoded
    representation of it.
:raises TypeError: If text is not an instance of str
### Response:
def safe_decode(text, incoming=None, errors='strict'):
    """Decodes incoming text/bytes string using `incoming` if they're not
    already unicode.

    This function was copied from novaclient.openstack.strutils

    :param incoming: Text's current encoding
    :param errors: Errors handling policy. See here for valid
        values http://docs.python.org/2/library/codecs.html
    :returns: text or a unicode `incoming` encoded
        representation of it.
    :raises TypeError: If text is not an instance of str
    """
    if not isinstance(text, (six.string_types, six.binary_type)):
        raise TypeError("%s can't be decoded" % type(text))

    if isinstance(text, six.text_type):
        return text

    if not incoming:
        incoming = (sys.stdin.encoding or
                    sys.getdefaultencoding())

    try:
        return text.decode(incoming, errors)
    except UnicodeDecodeError:
        # Note(flaper87) If we get here, it means that
        # sys.stdin.encoding / sys.getdefaultencoding
        # didn't return a suitable encoding to decode
        # text. This happens mostly when global LANG
        # var is not set correctly and there's no
        # default encoding. In this case, most likely
        # python will use ASCII or ANSI encoders as
        # default encodings but they won't be capable
        # of decoding non-ASCII characters.
        #
        # Also, UTF-8 is being used since it's an ASCII
        # extension.
        return text.decode('utf-8', errors)
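On Python 3 — where `six.text_type` is `str` and `six.binary_type` is `bytes` — the decode-with-fallback pattern above can be sketched without six. This is a minimal sketch; the helper name `safe_decode_sketch` is illustrative, not part of the original module:

```python
import sys

def safe_decode_sketch(text, incoming=None, errors='strict'):
    # bytes in -> str out; str passes through unchanged.
    if not isinstance(text, (str, bytes)):
        raise TypeError("%s can't be decoded" % type(text))
    if isinstance(text, str):
        return text
    if not incoming:
        incoming = sys.stdin.encoding or sys.getdefaultencoding()
    try:
        return text.decode(incoming, errors)
    except UnicodeDecodeError:
        # Fall back to UTF-8 when the guessed encoding cannot decode the bytes.
        return text.decode('utf-8', errors)

# ASCII cannot decode the UTF-8 bytes, so the fallback kicks in:
print(safe_decode_sketch(b'caf\xc3\xa9', incoming='ascii'))  # café
```

The fallback to UTF-8 works because UTF-8 is a strict superset of ASCII, so any text that would have decoded under a mis-detected locale encoding is usually still recoverable.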
def automation_delete(self, id, **kwargs): "https://developer.zendesk.com/rest_api/docs/core/automations#delete-automation" api_path = "/api/v2/automations/{id}.json" api_path = api_path.format(id=id) return self.call(api_path, method="DELETE", **kwargs)
https://developer.zendesk.com/rest_api/docs/core/automations#delete-automation
Below is the instruction that describes the task:
### Input:
https://developer.zendesk.com/rest_api/docs/core/automations#delete-automation
### Response:
def automation_delete(self, id, **kwargs):
    "https://developer.zendesk.com/rest_api/docs/core/automations#delete-automation"
    api_path = "/api/v2/automations/{id}.json"
    api_path = api_path.format(id=id)
    return self.call(api_path, method="DELETE", **kwargs)
def get_smart_data(): """ Get SMART attribute data :return: list of multi leveled dictionaries each dict has a key "DeviceName" with the identification of the device in smartctl also has keys of the SMART attribute id, with value of another dict of the attributes [ { "DeviceName": "/dev/sda blahblah", "1": { "flags": "..", "raw": "..", etc, } } ] """ stats = [] # get all devices devlist = DeviceList() for dev in devlist.devices: stats.append({ DEVKEY: str(dev) }) for attribute in dev.attributes: if attribute is None: pass else: attribdict = convert_attribute_to_dict(attribute) # we will use the attribute number as the key num = attribdict.pop('num', None) try: assert num is not None except Exception as e: # we should never get here, but if we do, continue to next iteration and skip this attribute continue stats[-1][num] = attribdict return stats
Get SMART attribute data :return: list of multi leveled dictionaries each dict has a key "DeviceName" with the identification of the device in smartctl also has keys of the SMART attribute id, with value of another dict of the attributes [ { "DeviceName": "/dev/sda blahblah", "1": { "flags": "..", "raw": "..", etc, } } ]
Below is the instruction that describes the task:
### Input:
Get SMART attribute data
:return: list of multi leveled dictionaries
         each dict has a key "DeviceName" with the identification of the device in smartctl
         also has keys of the SMART attribute id, with value of another dict of the attributes
         [
             {
                 "DeviceName": "/dev/sda blahblah",
                 "1": {
                     "flags": "..",
                     "raw": "..",
                     etc,
                 }
             }
         ]
### Response:
def get_smart_data():
    """
    Get SMART attribute data
    :return: list of multi leveled dictionaries
             each dict has a key "DeviceName" with the identification of the device in smartctl
             also has keys of the SMART attribute id, with value of another dict of the attributes
             [
                 {
                     "DeviceName": "/dev/sda blahblah",
                     "1": {
                         "flags": "..",
                         "raw": "..",
                         etc,
                     }
                 }
             ]
    """
    stats = []
    # get all devices
    devlist = DeviceList()
    for dev in devlist.devices:
        stats.append({
            DEVKEY: str(dev)
        })
        for attribute in dev.attributes:
            if attribute is None:
                pass
            else:
                attribdict = convert_attribute_to_dict(attribute)

                # we will use the attribute number as the key
                num = attribdict.pop('num', None)
                try:
                    assert num is not None
                except Exception as e:
                    # we should never get here, but if we do, continue to next iteration and skip this attribute
                    continue

                stats[-1][num] = attribdict
    return stats
def feed_ssldata(self, data): """Feed SSL record level data into the pipe. The data must be a bytes instance. It is OK to send an empty bytes instance. This can be used to get ssldata for a handshake initiated by this endpoint. Return a (ssldata, appdata) tuple. The ssldata element is a list of buffers containing SSL data that needs to be sent to the remote SSL. The appdata element is a list of buffers containing plaintext data that needs to be forwarded to the application. The appdata list may contain an empty buffer indicating an SSL "close_notify" alert. This alert must be acknowledged by calling :meth:`shutdown`. """ if self._state == self.S_UNWRAPPED: # If unwrapped, pass plaintext data straight through. return ([], [data] if data else []) ssldata = []; appdata = [] self._need_ssldata = False if data: self._incoming.write(data) try: if self._state == self.S_DO_HANDSHAKE: # Call do_handshake() until it doesn't raise anymore. self._sslobj.do_handshake() self._state = self.S_WRAPPED if self._handshake_cb: self._handshake_cb() if self._state == self.S_WRAPPED: # Main state: read data from SSL until close_notify while True: chunk = self._sslobj.read(self.bufsize) appdata.append(chunk) if not chunk: # close_notify break if self._state == self.S_SHUTDOWN: # Call shutdown() until it doesn't raise anymore. self._sslobj.unwrap() self._sslobj = None self._state = self.S_UNWRAPPED if self._shutdown_cb: self._shutdown_cb() if self._state == self.S_UNWRAPPED: # Drain possible plaintext data after close_notify. appdata.append(self._incoming.read()) except (ssl.SSLError, sslcompat.CertificateError) as e: if getattr(e, 'errno', None) not in (ssl.SSL_ERROR_WANT_READ, ssl.SSL_ERROR_WANT_WRITE, ssl.SSL_ERROR_SYSCALL): if self._state == self.S_DO_HANDSHAKE and self._handshake_cb: self._handshake_cb(e) raise self._need_ssldata = e.errno == ssl.SSL_ERROR_WANT_READ # Check for record level data that needs to be sent back. # Happens for the initial handshake and renegotiations. 
if self._outgoing.pending: ssldata.append(self._outgoing.read()) return (ssldata, appdata)
Feed SSL record level data into the pipe. The data must be a bytes instance. It is OK to send an empty bytes instance. This can be used to get ssldata for a handshake initiated by this endpoint. Return a (ssldata, appdata) tuple. The ssldata element is a list of buffers containing SSL data that needs to be sent to the remote SSL. The appdata element is a list of buffers containing plaintext data that needs to be forwarded to the application. The appdata list may contain an empty buffer indicating an SSL "close_notify" alert. This alert must be acknowledged by calling :meth:`shutdown`.
Below is the instruction that describes the task:
### Input:
Feed SSL record level data into the pipe.

The data must be a bytes instance. It is OK to send an empty bytes
instance. This can be used to get ssldata for a handshake initiated
by this endpoint.

Return a (ssldata, appdata) tuple. The ssldata element is a list of
buffers containing SSL data that needs to be sent to the remote SSL.

The appdata element is a list of buffers containing plaintext data
that needs to be forwarded to the application. The appdata list may
contain an empty buffer indicating an SSL "close_notify" alert. This
alert must be acknowledged by calling :meth:`shutdown`.
### Response:
def feed_ssldata(self, data):
    """Feed SSL record level data into the pipe.

    The data must be a bytes instance. It is OK to send an empty bytes
    instance. This can be used to get ssldata for a handshake initiated
    by this endpoint.

    Return a (ssldata, appdata) tuple. The ssldata element is a list of
    buffers containing SSL data that needs to be sent to the remote SSL.

    The appdata element is a list of buffers containing plaintext data
    that needs to be forwarded to the application. The appdata list may
    contain an empty buffer indicating an SSL "close_notify" alert. This
    alert must be acknowledged by calling :meth:`shutdown`.
    """
    if self._state == self.S_UNWRAPPED:
        # If unwrapped, pass plaintext data straight through.
        return ([], [data] if data else [])
    ssldata = []; appdata = []
    self._need_ssldata = False
    if data:
        self._incoming.write(data)
    try:
        if self._state == self.S_DO_HANDSHAKE:
            # Call do_handshake() until it doesn't raise anymore.
            self._sslobj.do_handshake()
            self._state = self.S_WRAPPED
            if self._handshake_cb:
                self._handshake_cb()
        if self._state == self.S_WRAPPED:
            # Main state: read data from SSL until close_notify
            while True:
                chunk = self._sslobj.read(self.bufsize)
                appdata.append(chunk)
                if not chunk:  # close_notify
                    break
        if self._state == self.S_SHUTDOWN:
            # Call shutdown() until it doesn't raise anymore.
self._sslobj.unwrap() self._sslobj = None self._state = self.S_UNWRAPPED if self._shutdown_cb: self._shutdown_cb() if self._state == self.S_UNWRAPPED: # Drain possible plaintext data after close_notify. appdata.append(self._incoming.read()) except (ssl.SSLError, sslcompat.CertificateError) as e: if getattr(e, 'errno', None) not in (ssl.SSL_ERROR_WANT_READ, ssl.SSL_ERROR_WANT_WRITE, ssl.SSL_ERROR_SYSCALL): if self._state == self.S_DO_HANDSHAKE and self._handshake_cb: self._handshake_cb(e) raise self._need_ssldata = e.errno == ssl.SSL_ERROR_WANT_READ # Check for record level data that needs to be sent back. # Happens for the initial handshake and renegotiations. if self._outgoing.pending: ssldata.append(self._outgoing.read()) return (ssldata, appdata)
def to_time(dt, tzinfo=None, format=None): """ Convert a datetime to time with tzinfo """ d = to_datetime(dt, tzinfo, format) if not d: return d return time_(d.hour, d.minute, d.second, d.microsecond, tzinfo=d.tzinfo)
Convert a datetime to time with tzinfo
Below is the instruction that describes the task:
### Input:
Convert a datetime to time with tzinfo
### Response:
def to_time(dt, tzinfo=None, format=None):
    """
    Convert a datetime to time with tzinfo
    """
    d = to_datetime(dt, tzinfo, format)
    if not d:
        return d
    return time_(d.hour, d.minute, d.second, d.microsecond, tzinfo=d.tzinfo)
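Since `to_datetime` is not shown in this entry, a minimal sketch of just the final step — carrying `tzinfo` from a `datetime` onto a `time` — can be written with the stdlib alone. The helper name `to_time_sketch` is illustrative, and the parsing/`format` handling of the original is omitted:

```python
from datetime import datetime, time, timezone

def to_time_sketch(dt):
    # Keep only the time-of-day fields, carrying tzinfo along
    # (the to_datetime parsing step of the original is omitted here).
    if not dt:
        return dt
    return time(dt.hour, dt.minute, dt.second, dt.microsecond, tzinfo=dt.tzinfo)

t = to_time_sketch(datetime(2021, 5, 1, 13, 45, 30, tzinfo=timezone.utc))
print(t)  # 13:45:30+00:00
```

The resulting `time` object keeps the offset, so comparisons and `isoformat()` behave as expected for aware times.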
def _non_connect_send_msg(self, msg_code, data): """ Similar to self._send, but doesn't try to initiate a connection, thus preventing an infinite loop. """ try: self._socket.sendall(self._encode_msg(msg_code, data)) except (IOError, socket.error) as e: if e.errno == errno.EPIPE: raise ConnectionClosed(e) else: raise
Similar to self._send, but doesn't try to initiate a connection, thus preventing an infinite loop.
Below is the instruction that describes the task:
### Input:
Similar to self._send, but doesn't try to initiate a connection,
thus preventing an infinite loop.
### Response:
def _non_connect_send_msg(self, msg_code, data):
    """
    Similar to self._send, but doesn't try to initiate a connection,
    thus preventing an infinite loop.
    """
    try:
        self._socket.sendall(self._encode_msg(msg_code, data))
    except (IOError, socket.error) as e:
        if e.errno == errno.EPIPE:
            raise ConnectionClosed(e)
        else:
            raise
def is_contiguous(self): """Return offset and size of contiguous data, else None. Excludes prediction and fill_order. """ if (self.compression != 1 or self.bitspersample not in (8, 16, 32, 64)): return None if 'TileWidth' in self.tags: if (self.imagewidth != self.tilewidth or self.imagelength % self.tilelength or self.tilewidth % 16 or self.tilelength % 16): return None if ('ImageDepth' in self.tags and 'TileDepth' in self.tags and (self.imagelength != self.tilelength or self.imagedepth % self.tiledepth)): return None offsets = self.dataoffsets bytecounts = self.databytecounts if len(offsets) == 1: return offsets[0], bytecounts[0] if self.is_stk or all((offsets[i] + bytecounts[i] == offsets[i+1] or bytecounts[i+1] == 0) # no data/ignore offset for i in range(len(offsets)-1)): return offsets[0], sum(bytecounts) return None
Return offset and size of contiguous data, else None. Excludes prediction and fill_order.
Below is the instruction that describes the task:
### Input:
Return offset and size of contiguous data, else None.

Excludes prediction and fill_order.
### Response:
def is_contiguous(self):
    """Return offset and size of contiguous data, else None.

    Excludes prediction and fill_order.
    """
    if (self.compression != 1 or
            self.bitspersample not in (8, 16, 32, 64)):
        return None
    if 'TileWidth' in self.tags:
        if (self.imagewidth != self.tilewidth or
                self.imagelength % self.tilelength or
                self.tilewidth % 16 or self.tilelength % 16):
            return None
        if ('ImageDepth' in self.tags and 'TileDepth' in self.tags and
                (self.imagelength != self.tilelength or
                 self.imagedepth % self.tiledepth)):
            return None
    offsets = self.dataoffsets
    bytecounts = self.databytecounts
    if len(offsets) == 1:
        return offsets[0], bytecounts[0]
    if self.is_stk or all((offsets[i] + bytecounts[i] == offsets[i+1] or
                           bytecounts[i+1] == 0)  # no data/ignore offset
                          for i in range(len(offsets)-1)):
        return offsets[0], sum(bytecounts)
    return None
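The core contiguity test above — every chunk must end exactly where the next begins, with zero-byte chunks ignored — can be sketched standalone. The function name here is illustrative, and the page-level checks (compression, tiling) are left out:

```python
def chunks_contiguous(offsets, bytecounts):
    # Contiguous iff each chunk ends exactly where the next begins;
    # zero-byte chunks are skipped, as in the method above.
    if len(offsets) == 1:
        return offsets[0], bytecounts[0]
    if all(offsets[i] + bytecounts[i] == offsets[i + 1] or bytecounts[i + 1] == 0
           for i in range(len(offsets) - 1)):
        return offsets[0], sum(bytecounts)
    return None

print(chunks_contiguous([100, 110, 120], [10, 10, 10]))  # (100, 30)
print(chunks_contiguous([100, 115, 130], [10, 10, 10]))  # None
```

Returning a single `(offset, size)` pair lets a caller memory-map or read the whole strip/tile region in one operation instead of chunk by chunk.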
def parse(self, argv=None, keyring_namespace=None, strict=False): """Find settings from all sources. :keyword strict: fail if unknown args are passed in. :returns: dict of parsed option name and values :raises: SystemExit if invalid arguments supplied along with stdout message (same as argparser). """ if argv is None: argv = self._argv or sys.argv results = self.load_options(argv=argv, keyring_namespace=keyring_namespace) # Run validation results = self.validate_config(results, argv=argv, strict=strict) self._values = results return self
Find settings from all sources. :keyword strict: fail if unknown args are passed in. :returns: dict of parsed option name and values :raises: SystemExit if invalid arguments supplied along with stdout message (same as argparser).
Below is the instruction that describes the task:
### Input:
Find settings from all sources.

:keyword strict: fail if unknown args are passed in.

:returns: dict of parsed option name and values
:raises: SystemExit if invalid arguments supplied along with stdout
    message (same as argparser).
### Response:
def parse(self, argv=None, keyring_namespace=None, strict=False):
    """Find settings from all sources.

    :keyword strict: fail if unknown args are passed in.

    :returns: dict of parsed option name and values
    :raises: SystemExit if invalid arguments supplied along with stdout
        message (same as argparser).
    """
    if argv is None:
        argv = self._argv or sys.argv
    results = self.load_options(argv=argv,
                                keyring_namespace=keyring_namespace)
    # Run validation
    results = self.validate_config(results, argv=argv, strict=strict)
    self._values = results
    return self
def _ParseDataObject(self, file_object, file_offset): """Parses a data object. Args: file_object (dfvfs.FileIO): a file-like object. file_offset (int): offset of the data object relative to the start of the file-like object. Returns: bytes: data. Raises: ParseError: if the data object cannot be parsed. """ data_object_map = self._GetDataTypeMap('systemd_journal_data_object') try: data_object, _ = self._ReadStructureFromFileObject( file_object, file_offset, data_object_map) except (ValueError, errors.ParseError) as exception: raise errors.ParseError(( 'Unable to parse data object at offset: 0x{0:08x} with error: ' '{1!s}').format(file_offset, exception)) if data_object.object_type != self._OBJECT_TYPE_DATA: raise errors.ParseError('Unsupported object type: {0:d}.'.format( data_object.object_type)) if data_object.object_flags not in ( 0, self._OBJECT_COMPRESSED_FLAG_XZ, self._OBJECT_COMPRESSED_FLAG_LZ4): raise errors.ParseError('Unsupported object flags: 0x{0:02x}.'.format( data_object.object_flags)) # The data is read separately for performance reasons. data_size = data_object.data_size - 64 data = file_object.read(data_size) if data_object.object_flags & self._OBJECT_COMPRESSED_FLAG_XZ: data = lzma.decompress(data) elif data_object.object_flags & self._OBJECT_COMPRESSED_FLAG_LZ4: uncompressed_size_map = self._GetDataTypeMap('uint32le') try: uncompressed_size = self._ReadStructureFromByteStream( data, file_offset + 64, uncompressed_size_map) except (ValueError, errors.ParseError) as exception: raise errors.ParseError(( 'Unable to parse LZ4 uncompressed size at offset: 0x{0:08x} with ' 'error: {1!s}').format(file_offset + 64, exception)) data = lz4.block.decompress( data[8:], uncompressed_size=uncompressed_size) return data
Parses a data object. Args: file_object (dfvfs.FileIO): a file-like object. file_offset (int): offset of the data object relative to the start of the file-like object. Returns: bytes: data. Raises: ParseError: if the data object cannot be parsed.
Below is the instruction that describes the task:
### Input:
Parses a data object.

Args:
  file_object (dfvfs.FileIO): a file-like object.
  file_offset (int): offset of the data object relative to the start
      of the file-like object.

Returns:
  bytes: data.

Raises:
  ParseError: if the data object cannot be parsed.
### Response:
def _ParseDataObject(self, file_object, file_offset):
    """Parses a data object.

    Args:
      file_object (dfvfs.FileIO): a file-like object.
      file_offset (int): offset of the data object relative to the start
          of the file-like object.

    Returns:
      bytes: data.

    Raises:
      ParseError: if the data object cannot be parsed.
    """
    data_object_map = self._GetDataTypeMap('systemd_journal_data_object')

    try:
      data_object, _ = self._ReadStructureFromFileObject(
          file_object, file_offset, data_object_map)
    except (ValueError, errors.ParseError) as exception:
      raise errors.ParseError((
          'Unable to parse data object at offset: 0x{0:08x} with error: '
          '{1!s}').format(file_offset, exception))

    if data_object.object_type != self._OBJECT_TYPE_DATA:
      raise errors.ParseError('Unsupported object type: {0:d}.'.format(
          data_object.object_type))

    if data_object.object_flags not in (
        0, self._OBJECT_COMPRESSED_FLAG_XZ, self._OBJECT_COMPRESSED_FLAG_LZ4):
      raise errors.ParseError('Unsupported object flags: 0x{0:02x}.'.format(
          data_object.object_flags))

    # The data is read separately for performance reasons.
data_size = data_object.data_size - 64 data = file_object.read(data_size) if data_object.object_flags & self._OBJECT_COMPRESSED_FLAG_XZ: data = lzma.decompress(data) elif data_object.object_flags & self._OBJECT_COMPRESSED_FLAG_LZ4: uncompressed_size_map = self._GetDataTypeMap('uint32le') try: uncompressed_size = self._ReadStructureFromByteStream( data, file_offset + 64, uncompressed_size_map) except (ValueError, errors.ParseError) as exception: raise errors.ParseError(( 'Unable to parse LZ4 uncompressed size at offset: 0x{0:08x} with ' 'error: {1!s}').format(file_offset + 64, exception)) data = lz4.block.decompress( data[8:], uncompressed_size=uncompressed_size) return data
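For the XZ branch of the decompression logic above, a minimal standalone sketch using only the stdlib `lzma` module looks like this. The LZ4 branch needs the third-party `lz4` package and is omitted; the flag value and payload here are illustrative, not taken from the journal format specification:

```python
import lzma

OBJECT_COMPRESSED_FLAG_XZ = 0x01  # illustrative flag value

def decode_data(data, object_flags):
    # Mirror of the XZ branch above; raw payloads pass through untouched.
    if object_flags & OBJECT_COMPRESSED_FLAG_XZ:
        return lzma.decompress(data)
    return data

# Hypothetical journal payload, round-tripped through XZ:
payload = b'MESSAGE=example journal entry\n' * 4
compressed = lzma.compress(payload)
print(decode_data(compressed, OBJECT_COMPRESSED_FLAG_XZ) == payload)  # True
```

Note that unlike XZ, raw LZ4 blocks do not carry their uncompressed size, which is why the parser above has to read a separate size field before calling `lz4.block.decompress`.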
def split_ls(func):
    """Decorator to split files into manageable chunks so as not to exceed the windows cmd limit

    :param func: Function to call for each chunk
    :type func: :py:class:Function
    """
    @wraps(func)
    def wrapper(self, files, silent=True, exclude_deleted=False):
        if not isinstance(files, (tuple, list)):
            files = [files]

        counter = 0
        index = 0
        results = []
        while files:
            if index >= len(files):
                results += func(self, files, silent, exclude_deleted)
                break

            length = len(str(files[index]))
            if length + counter > CHAR_LIMIT:
                # -- at our limit
                runfiles = files[:index]
                files = files[index:]
                counter = 0
                index = 0
                results += func(self, runfiles, silent, exclude_deleted)
                runfiles = None
                del runfiles
            else:
                index += 1
                counter += length

        return results
    return wrapper
Decorator to split files into manageable chunks so as not to exceed the windows cmd limit

:param func: Function to call for each chunk
:type func: :py:class:Function
Below is the instruction that describes the task:
### Input:
Decorator to split files into manageable chunks so as not to exceed the windows cmd limit

:param func: Function to call for each chunk
:type func: :py:class:Function
### Response:
def split_ls(func):
    """Decorator to split files into manageable chunks so as not to exceed the windows cmd limit

    :param func: Function to call for each chunk
    :type func: :py:class:Function
    """
    @wraps(func)
    def wrapper(self, files, silent=True, exclude_deleted=False):
        if not isinstance(files, (tuple, list)):
            files = [files]

        counter = 0
        index = 0
        results = []
        while files:
            if index >= len(files):
                results += func(self, files, silent, exclude_deleted)
                break

            length = len(str(files[index]))
            if length + counter > CHAR_LIMIT:
                # -- at our limit
                runfiles = files[:index]
                files = files[index:]
                counter = 0
                index = 0
                results += func(self, runfiles, silent, exclude_deleted)
                runfiles = None
                del runfiles
            else:
                index += 1
                counter += length

        return results
    return wrapper
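The chunking logic of the decorator — accumulate items until the summed string length would exceed the limit, then flush — can be sketched standalone. The limit value and function name are illustrative; unlike the decorator above, this sketch lets a single oversized item through as its own chunk rather than looping on it:

```python
def chunk_by_length(items, limit):
    # Accumulate stringified items until adding the next one would push the
    # summed length past the limit, then start a new chunk.
    chunks, current, length = [], [], 0
    for item in items:
        s = str(item)
        if current and length + len(s) > limit:
            chunks.append(current)
            current, length = [], 0
        current.append(s)
        length += len(s)
    if current:
        chunks.append(current)
    return chunks

print(chunk_by_length(['aaaa', 'bbbb', 'cccccccccc', 'dddddddddd'], 20))
# [['aaaa', 'bbbb', 'cccccccccc'], ['dddddddddd']]
```

Each chunk can then be passed to the wrapped command separately, keeping every invocation's argument string under the command-line length limit.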
def from_json(cls, json): """Create new DatastoreInputReader from the json, encoded by to_json. Args: json: json map representation of DatastoreInputReader. Returns: an instance of DatastoreInputReader with all data deserialized from json. """ if json[cls.KEY_RANGE_PARAM] is None: # pylint: disable=redefined-outer-name key_ranges = None else: key_ranges = [] for k in json[cls.KEY_RANGE_PARAM]: if k: key_ranges.append(key_range.KeyRange.from_json(k)) else: key_ranges.append(None) if json[cls.NAMESPACE_RANGE_PARAM] is None: ns_range = None else: ns_range = namespace_range.NamespaceRange.from_json_object( json[cls.NAMESPACE_RANGE_PARAM]) if json[cls.CURRENT_KEY_RANGE_PARAM] is None: current_key_range = None else: current_key_range = key_range.KeyRange.from_json( json[cls.CURRENT_KEY_RANGE_PARAM]) return cls( json[cls.ENTITY_KIND_PARAM], key_ranges, ns_range, json[cls.BATCH_SIZE_PARAM], current_key_range, filters=json.get(cls.FILTERS_PARAM))
Create new DatastoreInputReader from the json, encoded by to_json. Args: json: json map representation of DatastoreInputReader. Returns: an instance of DatastoreInputReader with all data deserialized from json.
Below is the instruction that describes the task:
### Input:
Create new DatastoreInputReader from the json, encoded by to_json.

Args:
  json: json map representation of DatastoreInputReader.

Returns:
  an instance of DatastoreInputReader with all data deserialized from json.
### Response:
def from_json(cls, json):
    """Create new DatastoreInputReader from the json, encoded by to_json.

    Args:
      json: json map representation of DatastoreInputReader.

    Returns:
      an instance of DatastoreInputReader with all data deserialized from json.
    """
    if json[cls.KEY_RANGE_PARAM] is None:
        # pylint: disable=redefined-outer-name
        key_ranges = None
    else:
        key_ranges = []
        for k in json[cls.KEY_RANGE_PARAM]:
            if k:
                key_ranges.append(key_range.KeyRange.from_json(k))
            else:
                key_ranges.append(None)

    if json[cls.NAMESPACE_RANGE_PARAM] is None:
        ns_range = None
    else:
        ns_range = namespace_range.NamespaceRange.from_json_object(
            json[cls.NAMESPACE_RANGE_PARAM])

    if json[cls.CURRENT_KEY_RANGE_PARAM] is None:
        current_key_range = None
    else:
        current_key_range = key_range.KeyRange.from_json(
            json[cls.CURRENT_KEY_RANGE_PARAM])

    return cls(
        json[cls.ENTITY_KIND_PARAM],
        key_ranges,
        ns_range,
        json[cls.BATCH_SIZE_PARAM],
        current_key_range,
        filters=json.get(cls.FILTERS_PARAM))
def parse_result_to_dsl(tokens): """Convert a ParseResult to a PyBEL DSL object. :type tokens: dict or pyparsing.ParseResults :rtype: BaseEntity """ if MODIFIER in tokens: return parse_result_to_dsl(tokens[TARGET]) elif REACTION == tokens[FUNCTION]: return _reaction_po_to_dict(tokens) elif VARIANTS in tokens: return _variant_po_to_dict(tokens) elif MEMBERS in tokens: return _list_po_to_dict(tokens) elif FUSION in tokens: return _fusion_to_dsl(tokens) return _simple_po_to_dict(tokens)
Convert a ParseResult to a PyBEL DSL object. :type tokens: dict or pyparsing.ParseResults :rtype: BaseEntity
Below is the instruction that describes the task:
### Input:
Convert a ParseResult to a PyBEL DSL object.

:type tokens: dict or pyparsing.ParseResults
:rtype: BaseEntity
### Response:
def parse_result_to_dsl(tokens):
    """Convert a ParseResult to a PyBEL DSL object.

    :type tokens: dict or pyparsing.ParseResults
    :rtype: BaseEntity
    """
    if MODIFIER in tokens:
        return parse_result_to_dsl(tokens[TARGET])

    elif REACTION == tokens[FUNCTION]:
        return _reaction_po_to_dict(tokens)

    elif VARIANTS in tokens:
        return _variant_po_to_dict(tokens)

    elif MEMBERS in tokens:
        return _list_po_to_dict(tokens)

    elif FUSION in tokens:
        return _fusion_to_dsl(tokens)

    return _simple_po_to_dict(tokens)
def names(self) -> [str]:
    """ The snames of all axis objects """
    return sorted([name for name in self.axes_by_sname.keys() if name != ''])
The snames of all axis objects
Below is the instruction that describes the task:
### Input:
The snames of all axis objects
### Response:
def names(self) -> [str]:
    """ The snames of all axis objects """
    return sorted([name for name in self.axes_by_sname.keys() if name != ''])
def detranslify(in_string):
    """
    Detranslify

    @param in_string: input string
    @type in_string: C{basestring}

    @return: detransliterated string
    @rtype: C{unicode}

    @raise ValueError: if in_string is C{str}, but it isn't ascii
    """
    try:
        russian = six.text_type(in_string)
    except UnicodeDecodeError:
        raise ValueError("We expect that if in_string is an 8-bit string, " + \
                         "then it consists only of ASCII chars, but now it doesn't. " + \
                         "Use unicode in this case.")

    for symb_out, symb_in in TRANSTABLE:
        russian = russian.replace(symb_in, symb_out)

    # TODO: choose the correct case for ь and ъ
    # the hard and soft signs will always be uppercase in detranslify
    # because ` and ' carry no information about case
    return russian
Detranslify @param in_string: input string @type in_string: C{basestring} @return: detransliterated string @rtype: C{unicode} @raise ValueError: if in_string is C{str}, but it isn't ascii
Below is the instruction that describes the task:
### Input:
Detranslify

@param in_string: input string
@type in_string: C{basestring}

@return: detransliterated string
@rtype: C{unicode}

@raise ValueError: if in_string is C{str}, but it isn't ascii
### Response:
def detranslify(in_string):
    """
    Detranslify

    @param in_string: input string
    @type in_string: C{basestring}

    @return: detransliterated string
    @rtype: C{unicode}

    @raise ValueError: if in_string is C{str}, but it isn't ascii
    """
    try:
        russian = six.text_type(in_string)
    except UnicodeDecodeError:
        raise ValueError("We expect that if in_string is an 8-bit string, " + \
                         "then it consists only of ASCII chars, but now it doesn't. " + \
                         "Use unicode in this case.")

    for symb_out, symb_in in TRANSTABLE:
        russian = russian.replace(symb_in, symb_out)

    # TODO: choose the correct case for ь and ъ
    # the hard and soft signs will always be uppercase in detranslify
    # because ` and ' carry no information about case
    return russian
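A tiny standalone version of the reverse-transliteration loop; the table here is illustrative, not the module's full TRANSTABLE. The ordering matters: multi-character sequences like 'sh' must be replaced before any of their single-character substrings, which is why the table is walked in order:

```python
# Cyrillic target on the left, Latin transliteration on the right.
TABLE = [('ш', 'sh'), ('ч', 'ch'), ('п', 'p'), ('р', 'r'),
         ('и', 'i'), ('в', 'v'), ('е', 'e'), ('т', 't')]

def detranslify_sketch(text):
    for cyr, lat in TABLE:
        text = text.replace(lat, cyr)
    return text

print(detranslify_sketch('privet'))  # привет
```

If 's' and 'h' appeared in the table before 'sh', 'shi' would wrongly decompose into two letters instead of 'ш', so a real table sorts longer sequences first.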
def cmd_link_add(self, args): '''add new link''' device = args[0] print("Adding link %s" % device) self.link_add(device)
add new link
Below is the instruction that describes the task:
### Input:
add new link
### Response:
def cmd_link_add(self, args):
    '''add new link'''
    device = args[0]
    print("Adding link %s" % device)
    self.link_add(device)
def deltran(tree, feature): """ DELTRAN (delayed transformation) (Swofford & Maddison, 1987) aims at reducing the number of ambiguities in the parsimonious result. DELTRAN makes the changes as close as possible to the leaves, hence prioritizing parallel mutations. DELTRAN is performed after DOWNPASS. if N is not a root: P <- parent(N) if intersection(S(N), S(P)) is not empty: S(N) <- intersection(S(N), S(P)) if N is not a tip: L, R <- left and right children of N DELTRAN(L) DELTRAN(R) :param tree: ete3.Tree, the tree of interest :param feature: str, character for which the parsimonious states are reconstructed :return: void, modifies get_personalized_feature_name(feature, PARS_STATES) feature of the tree nodes """ ps_feature = get_personalized_feature_name(feature, PARS_STATES) for node in tree.traverse('preorder'): if not node.is_root(): node_states = getattr(node, ps_feature) parent_states = getattr(node.up, ps_feature) state_intersection = node_states & parent_states if state_intersection: node.add_feature(ps_feature, state_intersection)
DELTRAN (delayed transformation) (Swofford & Maddison, 1987) aims at reducing the number of ambiguities in the parsimonious result. DELTRAN makes the changes as close as possible to the leaves, hence prioritizing parallel mutations. DELTRAN is performed after DOWNPASS. if N is not a root: P <- parent(N) if intersection(S(N), S(P)) is not empty: S(N) <- intersection(S(N), S(P)) if N is not a tip: L, R <- left and right children of N DELTRAN(L) DELTRAN(R) :param tree: ete3.Tree, the tree of interest :param feature: str, character for which the parsimonious states are reconstructed :return: void, modifies get_personalized_feature_name(feature, PARS_STATES) feature of the tree nodes
Below is the instruction that describes the task:
### Input:
DELTRAN (delayed transformation) (Swofford & Maddison, 1987) aims at reducing the number of ambiguities
in the parsimonious result. DELTRAN makes the changes as close as possible to the leaves,
hence prioritizing parallel mutations. DELTRAN is performed after DOWNPASS.

if N is not a root:
    P <- parent(N)
    if intersection(S(N), S(P)) is not empty:
        S(N) <- intersection(S(N), S(P))
if N is not a tip:
    L, R <- left and right children of N
    DELTRAN(L)
    DELTRAN(R)

:param tree: ete3.Tree, the tree of interest
:param feature: str, character for which the parsimonious states are reconstructed
:return: void, modifies get_personalized_feature_name(feature, PARS_STATES) feature of the tree nodes
### Response:
def deltran(tree, feature):
    """
    DELTRAN (delayed transformation) (Swofford & Maddison, 1987) aims at reducing the number of ambiguities
    in the parsimonious result. DELTRAN makes the changes as close as possible to the leaves,
    hence prioritizing parallel mutations. DELTRAN is performed after DOWNPASS.

    if N is not a root:
        P <- parent(N)
        if intersection(S(N), S(P)) is not empty:
            S(N) <- intersection(S(N), S(P))
    if N is not a tip:
        L, R <- left and right children of N
        DELTRAN(L)
        DELTRAN(R)

    :param tree: ete3.Tree, the tree of interest
    :param feature: str, character for which the parsimonious states are reconstructed
    :return: void, modifies get_personalized_feature_name(feature, PARS_STATES) feature of the tree nodes
    """
    ps_feature = get_personalized_feature_name(feature, PARS_STATES)

    for node in tree.traverse('preorder'):
        if not node.is_root():
            node_states = getattr(node, ps_feature)
            parent_states = getattr(node.up, ps_feature)
            state_intersection = node_states & parent_states
            if state_intersection:
                node.add_feature(ps_feature, state_intersection)
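The DELTRAN intersection step can be sketched without ete3 using a plain node class; the class and field names here are illustrative. The pre-order walk ensures a parent's states are already narrowed before its children are visited:

```python
class Node:
    def __init__(self, states, children=()):
        self.states = set(states)
        self.children = list(children)

def deltran_sketch(node, parent=None):
    # Pre-order walk: shrink each node's state set toward its parent's,
    # pushing state changes as close to the leaves as possible.
    if parent is not None:
        overlap = node.states & parent.states
        if overlap:
            node.states = overlap
    for child in node.children:
        deltran_sketch(child, node)

left, right = Node({'A'}), Node({'B'})
inner = Node({'A', 'B'}, [left, right])
root = Node({'A'}, [inner])
deltran_sketch(root)
print(inner.states, left.states, right.states)  # {'A'} {'A'} {'B'}
```

Here the ambiguous inner node {A, B} collapses to {A} because it intersects the root, forcing the A→B change down onto the right leaf — the parallel-mutation bias the docstring describes.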
def fatal_noexc(msg, *args, **kwargs): """Print a red `msg` to STDERR and exit. The message is formatted with `args` & `kwargs`. """ print(Fore.RED + 'Fatal: ' + msg.format(*args, **kwargs) + Style.RESET_ALL, file=sys.stderr) sys.exit(1)
Print a red `msg` to STDERR and exit. The message is formatted with `args` & `kwargs`.
Below is the instruction that describes the task:
### Input:
Print a red `msg` to STDERR and exit.

The message is formatted with `args` & `kwargs`.
### Response:
def fatal_noexc(msg, *args, **kwargs):
    """Print a red `msg` to STDERR and exit.

    The message is formatted with `args` & `kwargs`.
    """
    print(Fore.RED + 'Fatal: ' + msg.format(*args, **kwargs) + Style.RESET_ALL,
          file=sys.stderr)
    sys.exit(1)
def populate(self, source=DEFAULT_SEGMENT_SERVER, segments=None, pad=True, on_error='raise', **kwargs): """Query the segment database for each flag's active segments. This method assumes all of the metadata for each flag have been filled. Minimally, the following attributes must be filled .. autosummary:: ~DataQualityFlag.name ~DataQualityFlag.known Segments will be fetched from the database, with any :attr:`~DataQualityFlag.padding` added on-the-fly. Entries in this dict will be modified in-place. Parameters ---------- source : `str` source of segments for this flag. This must be either a URL for a segment database or a path to a file on disk. segments : `SegmentList`, optional a list of known segments during which to query, if not given, existing known segments for flags will be used. pad : `bool`, optional, default: `True` apply the `~DataQualityFlag.padding` associated with each flag, default: `True`. on_error : `str` how to handle an error querying for one flag, one of - `'raise'` (default): raise the Exception - `'warn'`: print a warning - `'ignore'`: move onto the next flag as if nothing happened **kwargs any other keyword arguments to be passed to :meth:`DataQualityFlag.query` or :meth:`DataQualityFlag.read`. 
Returns ------- self : `DataQualityDict` a reference to the modified DataQualityDict """ # check on_error flag if on_error not in ['raise', 'warn', 'ignore']: raise ValueError("on_error must be one of 'raise', 'warn', " "or 'ignore'") # format source source = urlparse(source) # perform query for all segments if source.netloc and segments is not None: segments = SegmentList(map(Segment, segments)) tmp = type(self).query(self.keys(), segments, url=source.geturl(), on_error=on_error, **kwargs) elif not source.netloc: tmp = type(self).read(source.geturl(), **kwargs) # apply padding and wrap to given known segments for key in self: if segments is None and source.netloc: try: tmp = {key: self[key].query( self[key].name, self[key].known, **kwargs)} except URLError as exc: if on_error == 'ignore': pass elif on_error == 'warn': warnings.warn('Error querying for %s: %s' % (key, exc)) else: raise continue self[key].known &= tmp[key].known self[key].active = tmp[key].active if pad: self[key] = self[key].pad(inplace=True) if segments is not None: self[key].known &= segments self[key].active &= segments return self
Query the segment database for each flag's active segments. This method assumes all of the metadata for each flag have been filled. Minimally, the following attributes must be filled .. autosummary:: ~DataQualityFlag.name ~DataQualityFlag.known Segments will be fetched from the database, with any :attr:`~DataQualityFlag.padding` added on-the-fly. Entries in this dict will be modified in-place. Parameters ---------- source : `str` source of segments for this flag. This must be either a URL for a segment database or a path to a file on disk. segments : `SegmentList`, optional a list of known segments during which to query, if not given, existing known segments for flags will be used. pad : `bool`, optional, default: `True` apply the `~DataQualityFlag.padding` associated with each flag, default: `True`. on_error : `str` how to handle an error querying for one flag, one of - `'raise'` (default): raise the Exception - `'warn'`: print a warning - `'ignore'`: move onto the next flag as if nothing happened **kwargs any other keyword arguments to be passed to :meth:`DataQualityFlag.query` or :meth:`DataQualityFlag.read`. Returns ------- self : `DataQualityDict` a reference to the modified DataQualityDict
Below is the instruction that describes the task: ### Input: Query the segment database for each flag's active segments. This method assumes all of the metadata for each flag have been filled. Minimally, the following attributes must be filled .. autosummary:: ~DataQualityFlag.name ~DataQualityFlag.known Segments will be fetched from the database, with any :attr:`~DataQualityFlag.padding` added on-the-fly. Entries in this dict will be modified in-place. Parameters ---------- source : `str` source of segments for this flag. This must be either a URL for a segment database or a path to a file on disk. segments : `SegmentList`, optional a list of known segments during which to query, if not given, existing known segments for flags will be used. pad : `bool`, optional, default: `True` apply the `~DataQualityFlag.padding` associated with each flag, default: `True`. on_error : `str` how to handle an error querying for one flag, one of - `'raise'` (default): raise the Exception - `'warn'`: print a warning - `'ignore'`: move onto the next flag as if nothing happened **kwargs any other keyword arguments to be passed to :meth:`DataQualityFlag.query` or :meth:`DataQualityFlag.read`. Returns ------- self : `DataQualityDict` a reference to the modified DataQualityDict ### Response: def populate(self, source=DEFAULT_SEGMENT_SERVER, segments=None, pad=True, on_error='raise', **kwargs): """Query the segment database for each flag's active segments. This method assumes all of the metadata for each flag have been filled. Minimally, the following attributes must be filled .. autosummary:: ~DataQualityFlag.name ~DataQualityFlag.known Segments will be fetched from the database, with any :attr:`~DataQualityFlag.padding` added on-the-fly. Entries in this dict will be modified in-place. Parameters ---------- source : `str` source of segments for this flag. This must be either a URL for a segment database or a path to a file on disk.
segments : `SegmentList`, optional a list of known segments during which to query, if not given, existing known segments for flags will be used. pad : `bool`, optional, default: `True` apply the `~DataQualityFlag.padding` associated with each flag, default: `True`. on_error : `str` how to handle an error querying for one flag, one of - `'raise'` (default): raise the Exception - `'warn'`: print a warning - `'ignore'`: move onto the next flag as if nothing happened **kwargs any other keyword arguments to be passed to :meth:`DataQualityFlag.query` or :meth:`DataQualityFlag.read`. Returns ------- self : `DataQualityDict` a reference to the modified DataQualityDict """ # check on_error flag if on_error not in ['raise', 'warn', 'ignore']: raise ValueError("on_error must be one of 'raise', 'warn', " "or 'ignore'") # format source source = urlparse(source) # perform query for all segments if source.netloc and segments is not None: segments = SegmentList(map(Segment, segments)) tmp = type(self).query(self.keys(), segments, url=source.geturl(), on_error=on_error, **kwargs) elif not source.netloc: tmp = type(self).read(source.geturl(), **kwargs) # apply padding and wrap to given known segments for key in self: if segments is None and source.netloc: try: tmp = {key: self[key].query( self[key].name, self[key].known, **kwargs)} except URLError as exc: if on_error == 'ignore': pass elif on_error == 'warn': warnings.warn('Error querying for %s: %s' % (key, exc)) else: raise continue self[key].known &= tmp[key].known self[key].active = tmp[key].active if pad: self[key] = self[key].pad(inplace=True) if segments is not None: self[key].known &= segments self[key].active &= segments return self
def show_generic_keywords(self, layer): """Show the keywords defined for the active layer. .. note:: The print button will be disabled if this method is called. .. versionchanged:: 3.3 - changed parameter from keywords object to a layer object so that we can show extra stuff like CRS and data source in the keywords. :param layer: A QGIS layer. :type layer: QgsMapLayer """ keywords = KeywordIO(layer) self.print_button.setEnabled(False) try: message = keywords.to_message() send_static_message(self, message) except InvalidMessageItemError: # FIXME (elpaso) pass self.show_question_button.setVisible(False) self.question_group.setEnabled(True) self.question_group.setVisible(True)
Show the keywords defined for the active layer. .. note:: The print button will be disabled if this method is called. .. versionchanged:: 3.3 - changed parameter from keywords object to a layer object so that we can show extra stuff like CRS and data source in the keywords. :param layer: A QGIS layer. :type layer: QgsMapLayer
Below is the instruction that describes the task: ### Input: Show the keywords defined for the active layer. .. note:: The print button will be disabled if this method is called. .. versionchanged:: 3.3 - changed parameter from keywords object to a layer object so that we can show extra stuff like CRS and data source in the keywords. :param layer: A QGIS layer. :type layer: QgsMapLayer ### Response: def show_generic_keywords(self, layer): """Show the keywords defined for the active layer. .. note:: The print button will be disabled if this method is called. .. versionchanged:: 3.3 - changed parameter from keywords object to a layer object so that we can show extra stuff like CRS and data source in the keywords. :param layer: A QGIS layer. :type layer: QgsMapLayer """ keywords = KeywordIO(layer) self.print_button.setEnabled(False) try: message = keywords.to_message() send_static_message(self, message) except InvalidMessageItemError: # FIXME (elpaso) pass self.show_question_button.setVisible(False) self.question_group.setEnabled(True) self.question_group.setVisible(True)
def _validate_input(self, table: pd.DataFrame, failed_only=False) -> pd.DataFrame: """Return a dataframe of validation results for the appropriate series vs the vector of validators.""" return self._do_validation_set( table=table, columns=self.input_columns, validation_type="input", failed_only=failed_only,)
Return a dataframe of validation results for the appropriate series vs the vector of validators.
Below is the instruction that describes the task: ### Input: Return a dataframe of validation results for the appropriate series vs the vector of validators. ### Response: def _validate_input(self, table: pd.DataFrame, failed_only=False) -> pd.DataFrame: """Return a dataframe of validation results for the appropriate series vs the vector of validators.""" return self._do_validation_set( table=table, columns=self.input_columns, validation_type="input", failed_only=failed_only,)
def to_excel(self, filepath: str, title: str): """Write the main dataframe to an Excel file :param filepath: path of the Excel file to write :type filepath: str :param title: Title of the stylesheet :type title: str :example: ``ds.to_excel_("./myfile.xlsx", "My data")`` """ try: self.start("Saving data to Excel file: "+filepath + " ...") writer = pytablewriter.ExcelXlsxTableWriter() writer.from_dataframe(self.df) writer.open(filepath) writer.make_worksheet(title) writer.write_table() writer.close() self.end("File exported to", filepath) except Exception as e: self.err(e, "Can not convert data to Excel")
Write the main dataframe to an Excel file :param filepath: path of the Excel file to write :type filepath: str :param title: Title of the stylesheet :type title: str :example: ``ds.to_excel_("./myfile.xlsx", "My data")``
Below is the instruction that describes the task: ### Input: Write the main dataframe to an Excel file :param filepath: path of the Excel file to write :type filepath: str :param title: Title of the stylesheet :type title: str :example: ``ds.to_excel_("./myfile.xlsx", "My data")`` ### Response: def to_excel(self, filepath: str, title: str): """Write the main dataframe to an Excel file :param filepath: path of the Excel file to write :type filepath: str :param title: Title of the stylesheet :type title: str :example: ``ds.to_excel_("./myfile.xlsx", "My data")`` """ try: self.start("Saving data to Excel file: "+filepath + " ...") writer = pytablewriter.ExcelXlsxTableWriter() writer.from_dataframe(self.df) writer.open(filepath) writer.make_worksheet(title) writer.write_table() writer.close() self.end("File exported to", filepath) except Exception as e: self.err(e, "Can not convert data to Excel")
def _is_valid_date_time(self, inpt, metadata): """Checks if input is a valid DateTime object""" # NEED TO ADD CHECKS FOR OTHER METADATA, LIKE MINIMUM, MAXIMUM, ETC. from dlkit.abstract_osid.calendaring.primitives import DateTime as abc_datetime if isinstance(inpt, abc_datetime): return True else: return False
Checks if input is a valid DateTime object
Below is the instruction that describes the task: ### Input: Checks if input is a valid DateTime object ### Response: def _is_valid_date_time(self, inpt, metadata): """Checks if input is a valid DateTime object""" # NEED TO ADD CHECKS FOR OTHER METADATA, LIKE MINIMUM, MAXIMUM, ETC. from dlkit.abstract_osid.calendaring.primitives import DateTime as abc_datetime if isinstance(inpt, abc_datetime): return True else: return False
def delete(self): """ Deletes the results of this query; it first fetches all the items to be deleted and then issues a PATCH against the list uri of the resource. """ uris = [obj["resource_uri"] for obj in self.results()] data = self.resource._meta.api.resource_serialize({"objects": [], "deleted_objects": uris}) self.resource._meta.api.http_resource("PATCH", self.resource._meta.resource_name, data=data) return len(uris)
Deletes the results of this query; it first fetches all the items to be deleted and then issues a PATCH against the list uri of the resource.
Below is the instruction that describes the task: ### Input: Deletes the results of this query; it first fetches all the items to be deleted and then issues a PATCH against the list uri of the resource. ### Response: def delete(self): """ Deletes the results of this query; it first fetches all the items to be deleted and then issues a PATCH against the list uri of the resource. """ uris = [obj["resource_uri"] for obj in self.results()] data = self.resource._meta.api.resource_serialize({"objects": [], "deleted_objects": uris}) self.resource._meta.api.http_resource("PATCH", self.resource._meta.resource_name, data=data) return len(uris)
def exit_with_error(msg='Aborted', details=None, code=-1, *args, **kwargs): '''Exit with error''' error(msg, details=details, *args, **kwargs) sys.exit(code)
Exit with error
Below is the instruction that describes the task: ### Input: Exit with error ### Response: def exit_with_error(msg='Aborted', details=None, code=-1, *args, **kwargs): '''Exit with error''' error(msg, details=details, *args, **kwargs) sys.exit(code)
def save(potential, f): """ Write a :class:`~gala.potential.PotentialBase` object out to a text (YAML) file. Parameters ---------- potential : :class:`~gala.potential.PotentialBase` The instantiated :class:`~gala.potential.PotentialBase` object. f : str, file_like A filename or file-like object to write the input potential object to. """ d = to_dict(potential) if hasattr(f, 'write'): yaml.dump(d, f, default_flow_style=False) else: with open(f, 'w') as f2: yaml.dump(d, f2, default_flow_style=False)
Write a :class:`~gala.potential.PotentialBase` object out to a text (YAML) file. Parameters ---------- potential : :class:`~gala.potential.PotentialBase` The instantiated :class:`~gala.potential.PotentialBase` object. f : str, file_like A filename or file-like object to write the input potential object to.
Below is the instruction that describes the task: ### Input: Write a :class:`~gala.potential.PotentialBase` object out to a text (YAML) file. Parameters ---------- potential : :class:`~gala.potential.PotentialBase` The instantiated :class:`~gala.potential.PotentialBase` object. f : str, file_like A filename or file-like object to write the input potential object to. ### Response: def save(potential, f): """ Write a :class:`~gala.potential.PotentialBase` object out to a text (YAML) file. Parameters ---------- potential : :class:`~gala.potential.PotentialBase` The instantiated :class:`~gala.potential.PotentialBase` object. f : str, file_like A filename or file-like object to write the input potential object to. """ d = to_dict(potential) if hasattr(f, 'write'): yaml.dump(d, f, default_flow_style=False) else: with open(f, 'w') as f2: yaml.dump(d, f2, default_flow_style=False)
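save() accepts either a path or an already-open stream and dispatches on `hasattr(f, 'write')`. A dependency-free sketch of that same pattern, substituting `json` for `yaml` and a plain dict for the serialized potential (both assumptions, so the example runs without gala):

```python
import io
import json

def save_dict(d, f):
    # Mirror save(): write to an open stream directly, otherwise treat
    # `f` as a filesystem path and open it ourselves.
    if hasattr(f, "write"):
        json.dump(d, f)
    else:
        with open(f, "w") as fh:
            json.dump(d, fh)

buf = io.StringIO()
save_dict({"class": "HernquistPotential"}, buf)
serialized = buf.getvalue()
```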
def create(self, ip_dest, next_hop, **kwargs): """Create a static route Args: ip_dest (string): The ip address of the destination in the form of A.B.C.D/E next_hop (string): The next hop interface or ip address **kwargs['next_hop_ip'] (string): The next hop address on destination interface **kwargs['distance'] (string): Administrative distance for this route **kwargs['tag'] (string): Route tag **kwargs['route_name'] (string): Route name Returns: True if the operation succeeds, otherwise False. """ # Call _set_route with delete and default set to False return self._set_route(ip_dest, next_hop, **kwargs)
Create a static route Args: ip_dest (string): The ip address of the destination in the form of A.B.C.D/E next_hop (string): The next hop interface or ip address **kwargs['next_hop_ip'] (string): The next hop address on destination interface **kwargs['distance'] (string): Administrative distance for this route **kwargs['tag'] (string): Route tag **kwargs['route_name'] (string): Route name Returns: True if the operation succeeds, otherwise False.
Below is the instruction that describes the task: ### Input: Create a static route Args: ip_dest (string): The ip address of the destination in the form of A.B.C.D/E next_hop (string): The next hop interface or ip address **kwargs['next_hop_ip'] (string): The next hop address on destination interface **kwargs['distance'] (string): Administrative distance for this route **kwargs['tag'] (string): Route tag **kwargs['route_name'] (string): Route name Returns: True if the operation succeeds, otherwise False. ### Response: def create(self, ip_dest, next_hop, **kwargs): """Create a static route Args: ip_dest (string): The ip address of the destination in the form of A.B.C.D/E next_hop (string): The next hop interface or ip address **kwargs['next_hop_ip'] (string): The next hop address on destination interface **kwargs['distance'] (string): Administrative distance for this route **kwargs['tag'] (string): Route tag **kwargs['route_name'] (string): Route name Returns: True if the operation succeeds, otherwise False. """ # Call _set_route with delete and default set to False return self._set_route(ip_dest, next_hop, **kwargs)
def _configure(self): """ Configure the core of sirbot Merge the config with the default core config and configure logging. The default logging level is `INFO` """ path = os.path.join( os.path.dirname(os.path.abspath(__file__)), 'config.yml' ) with open(path) as file: defaultconfig = yaml.load(file) self.config = merge_dict(self.config, defaultconfig) if 'logging' in self.config: logging.config.dictConfig(self.config['logging']) else: logging.getLogger('sirbot').setLevel('INFO')
Configure the core of sirbot Merge the config with the default core config and configure logging. The default logging level is `INFO`
Below is the instruction that describes the task: ### Input: Configure the core of sirbot Merge the config with the default core config and configure logging. The default logging level is `INFO` ### Response: def _configure(self): """ Configure the core of sirbot Merge the config with the default core config and configure logging. The default logging level is `INFO` """ path = os.path.join( os.path.dirname(os.path.abspath(__file__)), 'config.yml' ) with open(path) as file: defaultconfig = yaml.load(file) self.config = merge_dict(self.config, defaultconfig) if 'logging' in self.config: logging.config.dictConfig(self.config['logging']) else: logging.getLogger('sirbot').setLevel('INFO')
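merge_dict is imported from elsewhere in sirbot and not shown here; a plausible recursive implementation (an assumption about its semantics, inferred from the call `merge_dict(self.config, defaultconfig)`, where user settings should win over defaults):

```python
def merge_dict(overrides, defaults):
    # Start from the defaults and let `overrides` replace or recursively
    # extend matching keys.
    merged = dict(defaults)
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_dict(value, merged[key])
        else:
            merged[key] = value
    return merged

defaults = {"core": {"port": 8080, "plugins": []}, "loglevel": "INFO"}
user = {"core": {"port": 9000}}
config = merge_dict(user, defaults)
# config keeps the default plugins list but takes the user's port
```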
def tempo_account_delete_account_by_id(self, account_id): """ Delete an Account by id. Caller must have the Manage Account Permission for the Account. The Account can not be deleted if it has an AccountLinkBean. :param account_id: the id of the Account to be deleted. :return: """ url = 'rest/tempo-accounts/1/account/{id}/'.format(id=account_id) return self.delete(url)
Delete an Account by id. Caller must have the Manage Account Permission for the Account. The Account can not be deleted if it has an AccountLinkBean. :param account_id: the id of the Account to be deleted. :return:
Below is the instruction that describes the task: ### Input: Delete an Account by id. Caller must have the Manage Account Permission for the Account. The Account can not be deleted if it has an AccountLinkBean. :param account_id: the id of the Account to be deleted. :return: ### Response: def tempo_account_delete_account_by_id(self, account_id): """ Delete an Account by id. Caller must have the Manage Account Permission for the Account. The Account can not be deleted if it has an AccountLinkBean. :param account_id: the id of the Account to be deleted. :return: """ url = 'rest/tempo-accounts/1/account/{id}/'.format(id=account_id) return self.delete(url)
def get_sha(self) -> str: """ :return: SHA of the latest commit :rtype: str """ current_sha: str = self.repo.head.commit.hexsha LOGGER.debug('current commit SHA: %s', current_sha) return current_sha
:return: SHA of the latest commit :rtype: str
Below is the instruction that describes the task: ### Input: :return: SHA of the latest commit :rtype: str ### Response: def get_sha(self) -> str: """ :return: SHA of the latest commit :rtype: str """ current_sha: str = self.repo.head.commit.hexsha LOGGER.debug('current commit SHA: %s', current_sha) return current_sha
def assume(self, other): """ Assume the identity of another target. This can be useful to make the global target assume the identity of an ELF executable. Arguments: other(:class:`Target`): The target whose identity to assume. Example: >>> from pwny import * >>> target.assume(ELF('my-executable')) """ self._arch = other._arch self._bits = other._bits self._endian = other._endian self._mode = other._mode
Assume the identity of another target. This can be useful to make the global target assume the identity of an ELF executable. Arguments: other(:class:`Target`): The target whose identity to assume. Example: >>> from pwny import * >>> target.assume(ELF('my-executable'))
Below is the instruction that describes the task: ### Input: Assume the identity of another target. This can be useful to make the global target assume the identity of an ELF executable. Arguments: other(:class:`Target`): The target whose identity to assume. Example: >>> from pwny import * >>> target.assume(ELF('my-executable')) ### Response: def assume(self, other): """ Assume the identity of another target. This can be useful to make the global target assume the identity of an ELF executable. Arguments: other(:class:`Target`): The target whose identity to assume. Example: >>> from pwny import * >>> target.assume(ELF('my-executable')) """ self._arch = other._arch self._bits = other._bits self._endian = other._endian self._mode = other._mode
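The body of assume() is a plain attribute copy; the stripped-down `Target` below reproduces just that pattern (a stand-in class, not pwnypack's real `Target`, which also validates and derives these properties):

```python
class Target:
    # Only the four private attributes that assume() copies are modeled here.
    def __init__(self, arch="x86", bits=32, endian="little", mode=0):
        self._arch = arch
        self._bits = bits
        self._endian = endian
        self._mode = mode

    def assume(self, other):
        self._arch = other._arch
        self._bits = other._bits
        self._endian = other._endian
        self._mode = other._mode

global_target = Target()
elf = Target(arch="arm", bits=64, endian="big", mode=1)
global_target.assume(elf)  # the global target now reports ARM64 big-endian
```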
def get_field_groups(layer_purpose, layer_subcategory=None): """Obtain list of field groups from layer purpose and subcategory. :param layer_purpose: The layer purpose. :type layer_purpose: str :param layer_subcategory: Exposure or hazard value. :type layer_subcategory: str :returns: List of field groups. :rtype: list """ layer_purpose_dict = definition(layer_purpose) if not layer_purpose_dict: return [] field_groups = deepcopy(layer_purpose_dict.get('field_groups', [])) if layer_purpose in [ layer_purpose_exposure['key'], layer_purpose_hazard['key']]: if layer_subcategory: subcategory = definition(layer_subcategory) if 'field_groups' in subcategory: field_groups += deepcopy(subcategory['field_groups']) return field_groups
Obtain list of field groups from layer purpose and subcategory. :param layer_purpose: The layer purpose. :type layer_purpose: str :param layer_subcategory: Exposure or hazard value. :type layer_subcategory: str :returns: List of field groups. :rtype: list
Below is the instruction that describes the task: ### Input: Obtain list of field groups from layer purpose and subcategory. :param layer_purpose: The layer purpose. :type layer_purpose: str :param layer_subcategory: Exposure or hazard value. :type layer_subcategory: str :returns: List of field groups. :rtype: list ### Response: def get_field_groups(layer_purpose, layer_subcategory=None): """Obtain list of field groups from layer purpose and subcategory. :param layer_purpose: The layer purpose. :type layer_purpose: str :param layer_subcategory: Exposure or hazard value. :type layer_subcategory: str :returns: List of field groups. :rtype: list """ layer_purpose_dict = definition(layer_purpose) if not layer_purpose_dict: return [] field_groups = deepcopy(layer_purpose_dict.get('field_groups', [])) if layer_purpose in [ layer_purpose_exposure['key'], layer_purpose_hazard['key']]: if layer_subcategory: subcategory = definition(layer_subcategory) if 'field_groups' in subcategory: field_groups += deepcopy(subcategory['field_groups']) return field_groups
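Note that get_field_groups() returns deepcopy-ed group dicts rather than references into the shared definitions. A small demonstration of why that matters (the `PURPOSE`/`SUBCATEGORY` dicts are invented stand-ins for real InaSAFE definitions):

```python
from copy import deepcopy

PURPOSE = {"field_groups": [{"key": "population_group"}]}
SUBCATEGORY = {"field_groups": [{"key": "flood_depth_group"}]}

def field_groups():
    # Copy before concatenating, exactly as get_field_groups() does, so
    # callers can mutate the result without touching the definitions.
    groups = deepcopy(PURPOSE.get("field_groups", []))
    groups += deepcopy(SUBCATEGORY.get("field_groups", []))
    return groups

result = field_groups()
result[0]["key"] = "mutated"  # safe: the module-level definition is intact
```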