def read_sql(
    sql,
    con,
    index_col=None,
    coerce_float=True,
    params=None,
    parse_dates=None,
    columns=None,
    chunksize=None,
):
    """
    Read SQL query or database table into a DataFrame.

    Args:
        sql: string or SQLAlchemy Selectable (select or text object)
            SQL query to be executed or a table name.
        con: SQLAlchemy connectable (engine/connection) or database string URI
            or DBAPI2 connection (fallback mode)
        index_col: Column(s) to set as index (MultiIndex).
        coerce_float: Attempts to convert values of non-string, non-numeric
            objects (like decimal.Decimal) to floating point, useful for SQL
            result sets.
        params: List of parameters to pass to execute method. The syntax used
            to pass parameters is database driver dependent. Check your
            database driver documentation for which of the five syntax styles,
            described in PEP 249's paramstyle, is supported.
        parse_dates:
            - List of column names to parse as dates.
            - Dict of ``{column_name: format string}`` where format string is
              strftime compatible in case of parsing string times, or is one
              of (D, s, ns, ms, us) in case of parsing integer timestamps.
            - Dict of ``{column_name: arg dict}``, where the arg dict
              corresponds to the keyword arguments of
              :func:`pandas.to_datetime`. Especially useful with databases
              without native Datetime support, such as SQLite.
        columns: List of column names to select from SQL table (only used
            when reading a table).
        chunksize: If specified, return an iterator where `chunksize` is the
            number of rows to include in each chunk.

    Returns:
        Modin DataFrame
    """
    _, _, _, kwargs = inspect.getargvalues(inspect.currentframe())
    return DataFrame(query_compiler=BaseFactory.read_sql(**kwargs))
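The `inspect.getargvalues(inspect.currentframe())` call at the end is the interesting trick: it snapshots every argument of the running call as a name-to-value dict so the whole signature can be forwarded at once. A minimal standalone sketch of that pattern (the function name `collect_kwargs` is invented for illustration):

```python
import inspect

def collect_kwargs(a, b=2, c=None):
    # Capture every argument of the running call as a {name: value} dict,
    # the same trick read_sql uses to forward its full signature wholesale.
    # At this point the frame's locals contain only the arguments.
    _, _, _, kwargs = inspect.getargvalues(inspect.currentframe())
    return kwargs

print(collect_kwargs(1, c="x"))  # {'a': 1, 'b': 2, 'c': 'x'}
```

The benefit is that adding a parameter to the signature automatically forwards it without touching the body.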
def get_cql_models(app, connection=None, keyspace=None):
    """
    :param app: django models module
    :param connection: connection name
    :param keyspace: keyspace
    :return: list of all cassandra.cqlengine.Model within app that should be
        synced to keyspace.
    """
    from .models import DjangoCassandraModel

    models = []
    single_cassandra_connection = len(list(get_cassandra_connections())) == 1
    is_default_connection = connection == DEFAULT_DB_ALIAS or \
        single_cassandra_connection

    for name, obj in inspect.getmembers(app):
        cql_model_types = (
            cqlengine.models.Model,
            DjangoCassandraModel,
        )
        if (inspect.isclass(obj) and
                issubclass(obj, cql_model_types) and
                not obj.__abstract__):
            if obj.__connection__ == connection or \
                    (obj.__connection__ is None and is_default_connection) or \
                    (obj.__connection__ is None and
                     obj.__keyspace__ is not None and
                     obj.__keyspace__ == keyspace):
                models.append(obj)

    return models
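The discovery step — walking a module's members with `inspect.getmembers` and keeping concrete subclasses of a base type — works the same outside Django. A self-contained sketch of just that pattern (the `fake_app` module and class names are invented, and the abstract-model check is omitted):

```python
import inspect
import types

class Base(object):
    pass

class ModelA(Base):
    pass

class NotAModel(object):
    pass

# A stand-in "app" module holding a few classes.
app = types.ModuleType("fake_app")
app.Base = Base
app.ModelA = ModelA
app.NotAModel = NotAModel

def find_subclasses(module, base):
    # Same discovery pattern as get_cql_models: walk the module's members
    # and keep classes that subclass the wanted base (excluding the base).
    found = []
    for name, obj in inspect.getmembers(module):
        if inspect.isclass(obj) and issubclass(obj, base) and obj is not base:
            found.append(obj)
    return found

print(find_subclasses(app, Base))  # [ModelA]
```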
def process(self, msg, kwargs):
    """Process the logging message and keyword arguments passed in to a
    logging call to insert contextual information.

    :param str msg: The message to process
    :param dict kwargs: The kwargs to append
    :rtype: (str, dict)

    """
    kwargs['extra'] = {'correlation_id': self.parent.correlation_id,
                       'parent': self.parent.name}
    return msg, kwargs
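This is the standard `logging.LoggerAdapter` hook: anything placed in `kwargs['extra']` becomes an attribute on the emitted `LogRecord`. A runnable sketch of the same idea (the adapter, handler, and `correlation_id` value here are invented for demonstration):

```python
import logging

class ContextAdapter(logging.LoggerAdapter):
    def process(self, msg, kwargs):
        # Inject contextual fields via kwargs['extra'] so they become
        # attributes on every LogRecord this adapter emits.
        kwargs['extra'] = {'correlation_id': self.extra['correlation_id']}
        return msg, kwargs

class ListHandler(logging.Handler):
    def emit(self, record):
        records.append(record)

records = []
logger = logging.getLogger("ctx_demo")
logger.addHandler(ListHandler())
logger.setLevel(logging.INFO)
logger.propagate = False

adapter = ContextAdapter(logger, {'correlation_id': 'abc-123'})
adapter.info("hello")
print(records[0].correlation_id)  # abc-123
```

Formatters can then reference `%(correlation_id)s` without each call site passing it explicitly.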
def _dereference_args(pipeline_name, args, kwargs):
    """Dereference a Pipeline's arguments that are slots, validating them.

    Each argument value passed in is assumed to be a dictionary with the
    format:
      {'type': 'value', 'value': 'serializable'}  # A resolved value.
      {'type': 'slot', 'slot_key': 'str() on a db.Key'}  # A pending Slot.

    Args:
      pipeline_name: The name of the pipeline class; used for debugging.
      args: Iterable of positional arguments.
      kwargs: Dictionary of keyword arguments.

    Returns:
      Tuple (args, kwargs) where:
        args: A list of positional argument values that are all dereferenced.
        kwargs: A dictionary of keyword argument values that are all
          dereferenced.

    Raises:
      SlotNotFilledError if any of the supplied 'slot_key' records are not
        present in the Datastore or have not yet been filled.
      UnexpectedPipelineError if an unknown parameter type was passed.
    """
    lookup_slots = set()
    for arg in itertools.chain(args, kwargs.itervalues()):
        if arg['type'] == 'slot':
            lookup_slots.add(db.Key(arg['slot_key']))

    slot_dict = {}
    for key, slot_record in zip(lookup_slots, db.get(lookup_slots)):
        if slot_record is None or slot_record.status != _SlotRecord.FILLED:
            raise SlotNotFilledError(
                'Slot "%s" missing its value. From %s(*args=%s, **kwargs=%s)'
                % (key, pipeline_name, _short_repr(args), _short_repr(kwargs)))
        slot_dict[key] = slot_record.value

    arg_list = []
    for current_arg in args:
        if current_arg['type'] == 'slot':
            arg_list.append(slot_dict[db.Key(current_arg['slot_key'])])
        elif current_arg['type'] == 'value':
            arg_list.append(current_arg['value'])
        else:
            raise UnexpectedPipelineError(
                'Unknown parameter type: %r' % current_arg)

    kwarg_dict = {}
    for key, current_arg in kwargs.iteritems():
        if current_arg['type'] == 'slot':
            kwarg_dict[key] = slot_dict[db.Key(current_arg['slot_key'])]
        elif current_arg['type'] == 'value':
            kwarg_dict[key] = current_arg['value']
        else:
            raise UnexpectedPipelineError(
                'Unknown parameter type: %r' % current_arg)

    return (arg_list, kwarg_dict)
def remove(self, elem):
    """
    Return new deque with first element from left equal to elem removed. If
    no such element is found a ValueError is raised.

    >>> pdeque([2, 1, 2]).remove(2)
    pdeque([1, 2])
    """
    try:
        return PDeque(self._left_list.remove(elem), self._right_list,
                      self._length - 1)
    except ValueError:
        # Value not found in left list, try the right list
        try:
            # This is severely inefficient with a double reverse, should
            # perhaps implement a remove_last()?
            return PDeque(self._left_list,
                          self._right_list.reverse().remove(elem).reverse(),
                          self._length - 1)
        except ValueError:
            raise ValueError('{0} not found in PDeque'.format(elem))
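The observable behaviour matches the mutable `collections.deque.remove`, which also removes the first occurrence from the left — the persistent version just returns a new deque instead of mutating in place:

```python
from collections import deque

# Mirrors pdeque([2, 1, 2]).remove(2): the leftmost 2 is removed.
d = deque([2, 1, 2])
d.remove(2)
print(list(d))  # [1, 2]
```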
def remove(self, item):
    """Remove either an unparsed argument string or an argument object.

    :param Union[str,Arg] item: Item to remove

    >>> arguments = TexArgs([RArg('arg0'), '[arg2]', '{arg3}'])
    >>> arguments.remove('{arg0}')
    >>> len(arguments)
    2
    >>> arguments[0]
    OArg('arg2')
    """
    item = self.__coerce(item)
    self.all.remove(item)
    super().remove(item)
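The key detail is the `__coerce` step: the unparsed string `'{arg0}'` is normalized into the same representation the container stores before `remove` runs. A toy version of that coerce-then-remove pattern (the `Args` class is invented for illustration, not TexSoup's API):

```python
class Args(list):
    @staticmethod
    def _coerce(item):
        # Normalize '{x}' / '[x]' wrappers to the bare name, so unparsed
        # strings and stored names compare equal.
        return item.strip('{}[]')

    def remove(self, item):
        super().remove(self._coerce(item))

a = Args(['arg0', 'arg2', 'arg3'])
a.remove('{arg0}')  # accepts the unparsed form
print(a)  # ['arg2', 'arg3']
```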
def encipher_shift(plaintext, plain_vocab, shift):
    """Encrypt plain text with a single shift layer.

    Args:
        plaintext (list of list of Strings): a list of plain text to encrypt.
        plain_vocab (list of Integer): unique vocabularies being used.
        shift (Integer): number of positions to shift; shifts to the right
            if positive.
    Returns:
        ciphertext (list of list of Strings): encrypted plain text.
    """
    ciphertext = []
    cipher = ShiftEncryptionLayer(plain_vocab, shift)

    for _, sentence in enumerate(plaintext):
        cipher_sentence = []
        for _, character in enumerate(sentence):
            encrypted_char = cipher.encrypt_character(character)
            cipher_sentence.append(encrypted_char)
        ciphertext.append(cipher_sentence)

    return ciphertext
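The per-character work that `ShiftEncryptionLayer` performs is a plain Caesar shift over the vocabulary. A dependency-free sketch of one sentence's worth of that shift (`shift_chars` is an invented stand-in, not the tensor2tensor class):

```python
def shift_chars(sentence, vocab, shift):
    # Map each symbol to the one `shift` positions to the right in the
    # vocabulary, wrapping around the end (Caesar shift).
    index = {ch: i for i, ch in enumerate(vocab)}
    return [vocab[(index[ch] + shift) % len(vocab)] for ch in sentence]

vocab = ['a', 'b', 'c', 'd']
print(shift_chars(['a', 'd', 'b'], vocab, 1))  # ['b', 'a', 'c']
```

A negative `shift` (or shifting by `len(vocab) - shift`) inverts the cipher.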
def export(self, class_name, method_name, export_data=False, export_dir='.',
           export_filename='data.json', export_append_checksum=False,
           **kwargs):
    """
    Port a trained estimator to the syntax of a chosen programming language.

    Parameters
    ----------
    :param class_name : string
        The name of the class in the returned result.
    :param method_name : string
        The name of the method in the returned result.
    :param export_data : bool, default: False
        Whether the model data should be saved or not.
    :param export_dir : string, default: '.' (current directory)
        The directory where the model data should be saved.
    :param export_filename : string, default: 'data.json'
        The filename of the exported model data.
    :param export_append_checksum : bool, default: False
        Whether to append the checksum to the filename or not.

    Returns
    -------
    :return : string
        The transpiled algorithm with the defined placeholders.
    """
    # Arguments:
    self.class_name = class_name
    self.method_name = method_name

    # Estimator:
    est = self.estimator

    self.output_activation = est.out_activation_
    self.hidden_activation = est.activation

    self.n_layers = est.n_layers_
    self.n_hidden_layers = est.n_layers_ - 2

    self.n_inputs = len(est.coefs_[0])
    self.n_outputs = est.n_outputs_

    self.hidden_layer_sizes = est.hidden_layer_sizes
    if isinstance(self.hidden_layer_sizes, int):
        self.hidden_layer_sizes = [self.hidden_layer_sizes]
    self.hidden_layer_sizes = list(self.hidden_layer_sizes)

    self.layer_units = \
        [self.n_inputs] + self.hidden_layer_sizes + [est.n_outputs_]

    # Weights:
    self.coefficients = est.coefs_

    # Bias:
    self.intercepts = est.intercepts_

    # Binary or multiclass classifier?
    self.is_binary = self.n_outputs == 1
    self.prefix = 'binary' if self.is_binary else 'multi'

    if self.target_method == 'predict':
        # Exported:
        if export_data and os.path.isdir(export_dir):
            self.export_data(export_dir, export_filename,
                             export_append_checksum)
            return self.predict('exported')
        # Separated:
        return self.predict('separated')
def init_modules(self):
    ''' Initialize all modules consecutively'''
    for module_id, module_cfg in self._module_cfgs.items():
        if module_id in self._modules or module_id in self._tx_module_groups:
            if module_id in self._modules:
                module_id_str = "module " + module_id
            else:
                module_id_str = module_id.split('=', 1)
                module_id_str[0] = module_id_str[0].replace("_", " ")
                module_id_str = "=".join(module_id_str)
            logging.info("Initializing configuration for %s..." % module_id_str)
            # adding scan parameters to dict
            if 'scan_parameters' in self._module_run_conf[module_id] and self._module_run_conf[module_id]['scan_parameters'] is not None:
                # evaluating string for support of nested lists and other complex data structures
                if isinstance(self._module_run_conf[module_id]['scan_parameters'], basestring):
                    self._module_run_conf[module_id]['scan_parameters'] = ast.literal_eval(self._module_run_conf[module_id]['scan_parameters'])
                sp = namedtuple('scan_parameters', field_names=zip(*self._module_run_conf[module_id]['scan_parameters'])[0])
                self._scan_parameters[module_id] = sp(*zip(*self._module_run_conf[module_id]['scan_parameters'])[1])
            else:
                sp = namedtuple_with_defaults('scan_parameters', field_names=[])
                self._scan_parameters[module_id] = sp()
            # init FE config
            if module_id in self._modules:
                # only real modules can have an existing configuration
                last_configuration = self.get_configuration(module_id=module_id)
            else:
                last_configuration = None
            if (('configuration' not in module_cfg or module_cfg['configuration'] is None) and last_configuration is None) or (isinstance(module_cfg['configuration'], (int, long)) and module_cfg['configuration'] <= 0):
                if 'chip_address' in module_cfg:
                    if module_cfg['chip_address'] is None:
                        chip_address = 0
                        broadcast = True
                    else:
                        chip_address = module_cfg['chip_address']
                        broadcast = False
                else:
                    raise ValueError('Parameter "chip_address" not specified for module "%s".' % module_id)
                if 'flavor' in module_cfg and module_cfg['flavor']:
                    module_cfg['configuration'] = FEI4Register(fe_type=module_cfg['flavor'], chip_address=chip_address, broadcast=broadcast)
                else:
                    raise ValueError('Parameter "flavor" not specified for module "%s".' % module_id)
            # use existing config
            elif not module_cfg['configuration'] and last_configuration:
                module_cfg['configuration'] = FEI4Register(configuration_file=last_configuration)
            # path string
            elif isinstance(module_cfg['configuration'], basestring):
                if os.path.isabs(module_cfg['configuration']):  # absolute path
                    module_cfg['configuration'] = FEI4Register(configuration_file=module_cfg['configuration'])
                else:  # relative path
                    module_cfg['configuration'] = FEI4Register(configuration_file=os.path.join(module_cfg['working_dir'], module_cfg['configuration']))
            # run number
            elif isinstance(module_cfg['configuration'], (int, long)) and module_cfg['configuration'] > 0:
                module_cfg['configuration'] = FEI4Register(configuration_file=self.get_configuration(module_id=module_id, run_number=module_cfg['configuration']))
            # assume configuration already initialized
            elif not isinstance(module_cfg['configuration'], FEI4Register):
                raise ValueError('Found no valid value for parameter "configuration" for module "%s".' % module_id)
            # init register utils
            self._registers[module_id] = self._module_cfgs[module_id]['configuration']
            self._register_utils[module_id] = FEI4RegisterUtils(self._module_dut[module_id], self._module_cfgs[module_id]['configuration'])
            if module_id in self._modules:
                # Create module data path for real modules
                module_path = self.get_module_path(module_id)
                if not os.path.exists(module_path):
                    os.makedirs(module_path)
    # Set all modules to conf mode to prevent from receiving BCR and ECR broadcast
    for module_id in self._tx_module_groups:
        with self.access_module(module_id=module_id):
            self.register_utils.set_conf_mode()
    # Initial configuration (reset and configuration) of all modules.
    # This is done by iterating over each module individually
    for module_id in self._modules:
        logging.info("Configuring %s..." % module_id)
        with self.access_module(module_id=module_id):
            if self._run_conf['configure_fe']:
                self.register_utils.global_reset()
                self.register_utils.configure_all()
            else:
                self.register_utils.set_conf_mode()
            if is_fe_ready(self):
                fe_not_ready = False
            else:
                fe_not_ready = True
            # BCR and ECR might result in RX errors
            # a reset of the RX and FIFO will happen just before scan()
            if self._run_conf['reset_fe']:
                self.register_utils.reset_bunch_counter()
                self.register_utils.reset_event_counter()
                if fe_not_ready:
                    # resetting service records must be done once after power up
                    self.register_utils.reset_service_records()
                if not is_fe_ready(self):
                    logging.warning('Module "%s" is not sending any data.' % module_id)
            # set all modules to conf mode afterwards to be immune to ECR and BCR
            self.register_utils.set_conf_mode()
Below is the instruction that describes the task: ### Input: Initialize all modules consecutively ### Response: def init_modules(self): ''' Initialize all modules consecutively''' for module_id, module_cfg in self._module_cfgs.items(): if module_id in self._modules or module_id in self._tx_module_groups: if module_id in self._modules: module_id_str = "module " + module_id else: module_id_str = module_id.split('=', 1) module_id_str[0] = module_id_str[0].replace("_", " ") module_id_str = "=".join(module_id_str) logging.info("Initializing configuration for %s..." % module_id_str) # adding scan parameters to dict if 'scan_parameters' in self._module_run_conf[module_id] and self._module_run_conf[module_id]['scan_parameters'] is not None: # evaluating string for support of nested lists and other complex data structures if isinstance(self._module_run_conf[module_id]['scan_parameters'], basestring): self._module_run_conf[module_id]['scan_parameters'] = ast.literal_eval(self._module_run_conf[module_id]['scan_parameters']) sp = namedtuple('scan_parameters', field_names=zip(*self._module_run_conf[module_id]['scan_parameters'])[0]) self._scan_parameters[module_id] = sp(*zip(*self._module_run_conf[module_id]['scan_parameters'])[1]) else: sp = namedtuple_with_defaults('scan_parameters', field_names=[]) self._scan_parameters[module_id] = sp() # init FE config if module_id in self._modules: # only real modules can have an existing configuration last_configuration = self.get_configuration(module_id=module_id) else: last_configuration = None if (('configuration' not in module_cfg or module_cfg['configuration'] is None) and last_configuration is None) or (isinstance(module_cfg['configuration'], (int, long)) and module_cfg['configuration'] <= 0): if 'chip_address' in module_cfg: if module_cfg['chip_address'] is None: chip_address = 0 broadcast = True else: chip_address = module_cfg['chip_address'] broadcast = False else: raise ValueError('Parameter "chip_address" not specified for module "%s".' % module_id)
if 'flavor' in module_cfg and module_cfg['flavor']: module_cfg['configuration'] = FEI4Register(fe_type=module_cfg['flavor'], chip_address=chip_address, broadcast=broadcast) else: raise ValueError('Parameter "flavor" not specified for module "%s".' % module_id) # use existing config elif not module_cfg['configuration'] and last_configuration: module_cfg['configuration'] = FEI4Register(configuration_file=last_configuration) # path string elif isinstance(module_cfg['configuration'], basestring): if os.path.isabs(module_cfg['configuration']): # absolute path module_cfg['configuration'] = FEI4Register(configuration_file=module_cfg['configuration']) else: # relative path module_cfg['configuration'] = FEI4Register(configuration_file=os.path.join(module_cfg['working_dir'], module_cfg['configuration'])) # run number elif isinstance(module_cfg['configuration'], (int, long)) and module_cfg['configuration'] > 0: module_cfg['configuration'] = FEI4Register(configuration_file=self.get_configuration(module_id=module_id, run_number=module_cfg['configuration'])) # assume configuration already initialized elif not isinstance(module_cfg['configuration'], FEI4Register): raise ValueError('Found no valid value for parameter "configuration" for module "%s".' % module_id) # init register utils self._registers[module_id] = self._module_cfgs[module_id]['configuration'] self._register_utils[module_id] = FEI4RegisterUtils(self._module_dut[module_id], self._module_cfgs[module_id]['configuration']) if module_id in self._modules: # Create module data path for real modules module_path = self.get_module_path(module_id) if not os.path.exists(module_path): os.makedirs(module_path) # Set all modules to conf mode to prevent from receiving BCR and ECR broadcast for module_id in self._tx_module_groups: with self.access_module(module_id=module_id): self.register_utils.set_conf_mode() # Initial configuration (reset and configuration) of all modules.
# This is done by iterating over each module individually for module_id in self._modules: logging.info("Configuring %s..." % module_id) with self.access_module(module_id=module_id): if self._run_conf['configure_fe']: self.register_utils.global_reset() self.register_utils.configure_all() else: self.register_utils.set_conf_mode() if is_fe_ready(self): fe_not_ready = False else: fe_not_ready = True # BCR and ECR might result in RX errors # a reset of the RX and FIFO will happen just before scan() if self._run_conf['reset_fe']: self.register_utils.reset_bunch_counter() self.register_utils.reset_event_counter() if fe_not_ready: # resetting service records must be done once after power up self.register_utils.reset_service_records() if not is_fe_ready(self): logging.warning('Module "%s" is not sending any data.' % module_id) # set all modules to conf mode afterwards to be immune to ECR and BCR self.register_utils.set_conf_mode()
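The `init_modules` code above builds its `scan_parameters` namedtuple from a list of `(name, value)` pairs using a `zip(*pairs)` transpose, written in Python 2 style (`zip(...)[0]` is not subscriptable in Python 3). A minimal Python 3 sketch of the same idiom; the helper name and parameter names are illustrative, not part of the original API:

```python
from collections import namedtuple

def make_scan_parameters(pairs):
    """Build a scan_parameters namedtuple from (name, value) pairs.

    Mirrors the zip(*pairs) transpose used in init_modules, ported to
    Python 3, where zip() returns an iterator rather than a list.
    """
    if not pairs:
        # No scan parameters configured: return an empty namedtuple instance.
        return namedtuple('scan_parameters', [])()
    names, values = zip(*pairs)          # transpose [(n, v), ...] -> (names, values)
    sp = namedtuple('scan_parameters', names)
    return sp(*values)

params = make_scan_parameters([('PlsrDAC', 100), ('Vthin_AltFine', 50)])
```

`params.PlsrDAC` then reads as an attribute, which is why the original stores scan parameters this way rather than as a plain dict.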
def _load_model(self, dct): """Load a serialized PyPhi model. The object is memoized for reuse elsewhere in the object graph. """ classname, version, _ = _pop_metadata(dct) _check_version(version) cls = self._models[classname] # Use `from_json` if available if hasattr(cls, 'from_json'): return cls.from_json(dct) # Default to object constructor return cls(**dct)
Load a serialized PyPhi model. The object is memoized for reuse elsewhere in the object graph.
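`_load_model` dispatches deserialization through a class registry, preferring a `from_json` classmethod and falling back to keyword-argument construction. A self-contained sketch of that dispatch pattern (the `Loader` class and example model classes are hypothetical, not PyPhi's; version checking and memoization are omitted):

```python
class Loader:
    """Registry-based deserializer: prefer a from_json classmethod,
    fall back to constructing the class from the dict's keys."""
    def __init__(self):
        self._models = {}

    def register(self, cls):
        self._models[cls.__name__] = cls
        return cls  # usable as a decorator

    def load(self, classname, dct):
        cls = self._models[classname]
        if hasattr(cls, 'from_json'):
            return cls.from_json(dct)
        return cls(**dct)

loader = Loader()

@loader.register
class Point:  # no from_json -> built via **kwargs
    def __init__(self, x, y):
        self.x, self.y = x, y

@loader.register
class Tagged:  # has from_json -> that path is taken
    def __init__(self, tag):
        self.tag = tag

    @classmethod
    def from_json(cls, dct):
        return cls(dct["tag"].upper())

p = loader.load('Point', {'x': 1, 'y': 2})
t = loader.load('Tagged', {'tag': 'ab'})
```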
def _extract_member(self, tarinfo, targetpath, set_attrs=True): """Extract the TarInfo object tarinfo to a physical file called targetpath. """ # Fetch the TarInfo object for the given name # and build the destination pathname, replacing # forward slashes to platform specific separators. targetpath = targetpath.rstrip("/") targetpath = targetpath.replace("/", os.sep) # Create all upper directories. upperdirs = os.path.dirname(targetpath) if upperdirs and not os.path.exists(upperdirs): # Create directories that are not part of the archive with # default permissions. os.makedirs(upperdirs) if tarinfo.islnk() or tarinfo.issym(): self._dbg(1, "%s -> %s" % (tarinfo.name, tarinfo.linkname)) else: self._dbg(1, tarinfo.name) if tarinfo.isreg(): self.makefile(tarinfo, targetpath) elif tarinfo.isdir(): self.makedir(tarinfo, targetpath) elif tarinfo.isfifo(): self.makefifo(tarinfo, targetpath) elif tarinfo.ischr() or tarinfo.isblk(): self.makedev(tarinfo, targetpath) elif tarinfo.islnk() or tarinfo.issym(): self.makelink(tarinfo, targetpath) elif tarinfo.type not in SUPPORTED_TYPES: self.makeunknown(tarinfo, targetpath) else: self.makefile(tarinfo, targetpath) if set_attrs: self.chown(tarinfo, targetpath) if not tarinfo.issym(): self.chmod(tarinfo, targetpath) self.utime(tarinfo, targetpath)
Extract the TarInfo object tarinfo to a physical file called targetpath.
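The first step of `_extract_member` normalizes the archive member name, which tar always stores with forward slashes, into a platform path: strip any trailing `/`, swap in `os.sep`, then create missing parent directories. A minimal standalone sketch of that normalization (the helper name is illustrative):

```python
import os

def to_platform_path(archive_name, dest_root):
    """Convert a '/'-separated tar member name into a destination path
    under dest_root, using the rstrip('/') + replace('/', os.sep)
    normalization from _extract_member."""
    cleaned = archive_name.rstrip("/").replace("/", os.sep)
    return os.path.join(dest_root, cleaned)

path = to_platform_path("a/b/", "root")
```

In the real method, `os.path.dirname(path)` would then be passed to `os.makedirs` so that parent directories absent from the archive are created with default permissions.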
def reset_namespace(self, namespace=None, params=None): """ Will delete and recreate specified namespace args: namespace(str): Namespace to reset params(dict): params used to reset the namespace """ log = logging.getLogger("%s.%s" % (self.log_name, inspect.stack()[0][3])) log.setLevel(self.log_level) namespace = pick(namespace, self.namespace) params = pick(params, self.namespace_params) log.warning(" Resetting namespace '%s' at host: %s", namespace, self.url) try: self.delete_namespace(namespace) except RuntimeError: pass self.create_namespace(namespace, params)
Will delete and recreate specified namespace args: namespace(str): Namespace to reset params(dict): params used to reset the namespace
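The reset idiom above — delete, swallow the error if the namespace never existed, then recreate — makes the operation idempotent. A toy in-memory sketch of the same pattern (the `NamespaceStore` class is invented for illustration; the real object talks to a triplestore over HTTP):

```python
class NamespaceStore:
    """Toy store showing the delete-then-recreate reset pattern."""
    def __init__(self):
        self.namespaces = {}

    def delete_namespace(self, name):
        if name not in self.namespaces:
            raise RuntimeError("no such namespace: %s" % name)
        del self.namespaces[name]

    def create_namespace(self, name, params=None):
        self.namespaces[name] = dict(params or {})

    def reset_namespace(self, name, params=None):
        try:
            self.delete_namespace(name)
        except RuntimeError:
            pass  # resetting a namespace that never existed is fine
        self.create_namespace(name, params)

store = NamespaceStore()
store.reset_namespace("kb")                   # works on a fresh store
store.reset_namespace("kb", {"strict": True})  # recreated with new params
```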
def run(self, in_batches): """Run shell operator synchronously to eat `in_batches` :param in_batches: `tuple` of batches to process """ if len(in_batches) != len(self._batcmd.batch_to_file_s): BaseShellOperator._rm_process_input_tmpfiles(self._batcmd.batch_to_file_s) # [todo] - Removing tmpfiles can be easily forgotten. Less lifetime for tmpfile. raise AttributeError('len(in_batches) == %d, while %d IN_BATCH* are specified in command below:%s$ %s' % (len(in_batches), len(self._batcmd.batch_to_file_s), os.linesep, self._batcmd.sh_cmd)) # prepare & start process (if necessary) BaseShellOperator._batches_to_tmpfile(self._in_record_sep, self._in_column_sep, in_batches, self._batcmd.batch_to_file_s) if self._process is None: self._process = BaseShellOperator._start_process( self._batcmd, self._cwd, self._env, non_blocking_stdout=True) # Begin thread to read from subprocess's stdout. # Without this thread, the subprocess's output buffer fills up and nothing drains it. t_consumer = Thread(target=get_subprocess_output, args=(self._process.stdout, self._batch_done_output, self._subprocess_out_str)) t_consumer.start() # pass batch to subprocess BaseShellOperator._batch_to_stdin(self._process, self._in_record_sep, self._in_column_sep, in_batches, self._batcmd.batch_to_file_s) # pass batch-done indicator to subprocess self._process.stdin.write(self._batch_done_indicator) # get output from subprocess t_consumer.join() subprocess_out_str = self._subprocess_out_str[0] self._subprocess_out_str = [] out_batch = BaseShellOperator._out_str_to_batch(subprocess_out_str, self._out_recdef, self._out_col_patterns) return out_batch
Run shell operator synchronously to eat `in_batches` :param in_batches: `tuple` of batches to process
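The consumer thread in `run` exists to avoid a classic pipe deadlock: if the parent writes to the child's stdin while nobody reads the child's stdout, both pipes can fill and block forever. A self-contained sketch of the drain pattern (standalone, not using the operator classes; the payload avoids newlines so the result is identical across platforms):

```python
import subprocess
import sys
from threading import Thread

# Child process simply echoes stdin back to stdout.
proc = subprocess.Popen(
    [sys.executable, "-c", "import sys; sys.stdout.write(sys.stdin.read())"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE)

chunks = []

def drain(stream, sink):
    # Keep reading the child's stdout so its pipe buffer can never fill
    # up and deadlock the writer below.
    sink.append(stream.read())

t = Thread(target=drain, args=(proc.stdout, chunks))
t.start()
proc.stdin.write(b"hello from the batch")  # feed the batch
proc.stdin.close()                          # signal end of input (batch done)
t.join()
proc.wait()
output = chunks[0]
```

With a small payload this would work without the thread too; the thread matters once the output exceeds the OS pipe buffer, which is exactly the situation the comment in `run` describes.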
def _get_initial_rnn_and_grammar_state(self, question: Dict[str, torch.LongTensor], table: Dict[str, torch.LongTensor], world: List[WikiTablesWorld], actions: List[List[ProductionRule]], outputs: Dict[str, Any]) -> Tuple[List[RnnStatelet], List[LambdaGrammarStatelet]]: """ Encodes the question and table, computes a linking between the two, and constructs an initial RnnStatelet and LambdaGrammarStatelet for each batch instance to pass to the decoder. We take ``outputs`` as a parameter here and `modify` it, adding things that we want to visualize in a demo. """ table_text = table['text'] # (batch_size, question_length, embedding_dim) embedded_question = self._question_embedder(question) question_mask = util.get_text_field_mask(question).float() # (batch_size, num_entities, num_entity_tokens, embedding_dim) embedded_table = self._question_embedder(table_text, num_wrapping_dims=1) table_mask = util.get_text_field_mask(table_text, num_wrapping_dims=1).float() batch_size, num_entities, num_entity_tokens, _ = embedded_table.size() num_question_tokens = embedded_question.size(1) # (batch_size, num_entities, embedding_dim) encoded_table = self._entity_encoder(embedded_table, table_mask) # (batch_size, num_entities, num_neighbors) neighbor_indices = self._get_neighbor_indices(world, num_entities, encoded_table) # Neighbor_indices is padded with -1 since 0 is a potential neighbor index. # Thus, the absolute value needs to be taken in the index_select, and 1 needs to # be added for the mask since that method expects 0 for padding. # (batch_size, num_entities, num_neighbors, embedding_dim) embedded_neighbors = util.batched_index_select(encoded_table, torch.abs(neighbor_indices)) neighbor_mask = util.get_text_field_mask({'ignored': neighbor_indices + 1}, num_wrapping_dims=1).float() # Encoder initialized to easily obtain a masked average. 
neighbor_encoder = TimeDistributed(BagOfEmbeddingsEncoder(self._embedding_dim, averaged=True)) # (batch_size, num_entities, embedding_dim) embedded_neighbors = neighbor_encoder(embedded_neighbors, neighbor_mask) # entity_types: tensor with shape (batch_size, num_entities), where each entry is the # entity's type id. # entity_type_dict: Dict[int, int], mapping flattened_entity_index -> type_index # These encode the same information, but for efficiency reasons later it's nice # to have one version as a tensor and one that's accessible on the cpu. entity_types, entity_type_dict = self._get_type_vector(world, num_entities, encoded_table) entity_type_embeddings = self._entity_type_encoder_embedding(entity_types) projected_neighbor_embeddings = self._neighbor_params(embedded_neighbors.float()) # (batch_size, num_entities, embedding_dim) entity_embeddings = torch.tanh(entity_type_embeddings + projected_neighbor_embeddings) # Compute entity and question word similarity. We tried using cosine distance here, but # because this similarity is the main mechanism that the model can use to push apart logit # scores for certain actions (like "n -> 1" and "n -> -1"), this needs to have a larger # output range than [-1, 1]. question_entity_similarity = torch.bmm(embedded_table.view(batch_size, num_entities * num_entity_tokens, self._embedding_dim), torch.transpose(embedded_question, 1, 2)) question_entity_similarity = question_entity_similarity.view(batch_size, num_entities, num_entity_tokens, num_question_tokens) # (batch_size, num_entities, num_question_tokens) question_entity_similarity_max_score, _ = torch.max(question_entity_similarity, 2) # (batch_size, num_entities, num_question_tokens, num_features) linking_features = table['linking'] linking_scores = question_entity_similarity_max_score if self._use_neighbor_similarity_for_linking: # The linking score is computed as a linear projection of two terms. 
The first is the # maximum similarity score over the entity's words and the question token. The second # is the maximum similarity over the words in the entity's neighbors and the question # token. # # The second term, projected_question_neighbor_similarity, is useful when a column # needs to be selected. For example, the question token might have no similarity with # the column name, but is similar with the cells in the column. # # Note that projected_question_neighbor_similarity is intended to capture the same # information as the related_column feature. # # Also note that this block needs to be _before_ the `linking_params` block, because # we're overwriting `linking_scores`, not adding to it. # (batch_size, num_entities, num_neighbors, num_question_tokens) question_neighbor_similarity = util.batched_index_select(question_entity_similarity_max_score, torch.abs(neighbor_indices)) # (batch_size, num_entities, num_question_tokens) question_neighbor_similarity_max_score, _ = torch.max(question_neighbor_similarity, 2) projected_question_entity_similarity = self._question_entity_params( question_entity_similarity_max_score.unsqueeze(-1)).squeeze(-1) projected_question_neighbor_similarity = self._question_neighbor_params( question_neighbor_similarity_max_score.unsqueeze(-1)).squeeze(-1) linking_scores = projected_question_entity_similarity + projected_question_neighbor_similarity feature_scores = None if self._linking_params is not None: feature_scores = self._linking_params(linking_features).squeeze(3) linking_scores = linking_scores + feature_scores # (batch_size, num_question_tokens, num_entities) linking_probabilities = self._get_linking_probabilities(world, linking_scores.transpose(1, 2), question_mask, entity_type_dict) # (batch_size, num_question_tokens, embedding_dim) link_embedding = util.weighted_sum(entity_embeddings, linking_probabilities) encoder_input = torch.cat([link_embedding, embedded_question], 2) # (batch_size, question_length, encoder_output_dim) 
encoder_outputs = self._dropout(self._encoder(encoder_input, question_mask)) # This will be our initial hidden state and memory cell for the decoder LSTM. final_encoder_output = util.get_final_encoder_states(encoder_outputs, question_mask, self._encoder.is_bidirectional()) memory_cell = encoder_outputs.new_zeros(batch_size, self._encoder.get_output_dim()) # To make grouping states together in the decoder easier, we convert the batch dimension in # all of our tensors into an outer list. For instance, the encoder outputs have shape # `(batch_size, question_length, encoder_output_dim)`. We need to convert this into a list # of `batch_size` tensors, each of shape `(question_length, encoder_output_dim)`. Then we # won't have to do any index selects, or anything, we'll just do some `torch.cat()`s. encoder_output_list = [encoder_outputs[i] for i in range(batch_size)] question_mask_list = [question_mask[i] for i in range(batch_size)] initial_rnn_state = [] for i in range(batch_size): initial_rnn_state.append(RnnStatelet(final_encoder_output[i], memory_cell[i], self._first_action_embedding, self._first_attended_question, encoder_output_list, question_mask_list)) initial_grammar_state = [self._create_grammar_state(world[i], actions[i], linking_scores[i], entity_types[i]) for i in range(batch_size)] if not self.training: # We add a few things to the outputs that will be returned from `forward` at evaluation # time, for visualization in a demo. outputs['linking_scores'] = linking_scores if feature_scores is not None: outputs['feature_scores'] = feature_scores outputs['similarity_scores'] = question_entity_similarity_max_score return initial_rnn_state, initial_grammar_state
Encodes the question and table, computes a linking between the two, and constructs an initial RnnStatelet and LambdaGrammarStatelet for each batch instance to pass to the decoder. We take ``outputs`` as a parameter here and `modify` it, adding things that we want to visualize in a demo.
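The core linking computation in `_get_initial_rnn_and_grammar_state` is a batched similarity followed by a max over each entity's tokens (`torch.bmm` then `torch.max(..., 2)`). A deliberately tiny pure-Python sketch of that scalar computation, without torch, batching, or masking (function names are illustrative):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def linking_scores(entity_token_vecs, question_vecs):
    """For each (entity, question token) pair, take the max dot product
    over the entity's tokens -- a scalar stand-in for the model's
    batched torch.bmm + torch.max(..., 2)."""
    return [[max(dot(tok, q) for tok in toks) for q in question_vecs]
            for toks in entity_token_vecs]

scores = linking_scores(
    [[[1.0, 0.0], [0.0, 1.0]]],   # one entity with two token embeddings
    [[2.0, 0.0], [0.0, 3.0]])     # two question token embeddings
```

The max-over-tokens is what lets a multi-word entity link to a question word that matches any one of its tokens; the real model then adds neighbor similarity and linear feature terms on top of this score.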
def flush(self, file=str()): """ Flushes the updated file content to the given *file*. .. note:: Overwrites an existing file. :param str file: name and location of the file. Default is the original file. """ if file: Path(file).write_bytes(self._cache) else: self.path.write_bytes(self._cache)
Flushes the updated file content to the given *file*. .. note:: Overwrites an existing file. :param str file: name and location of the file. Default is the original file.
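`flush` writes the cached bytes either to an alternative target or back over the original file. A runnable sketch of that contract using a temporary directory (the `CachedFile` wrapper class is invented here to make the method self-contained):

```python
import os
import tempfile
from pathlib import Path

class CachedFile:
    """Tiny model of the flush() contract: write cached bytes to the
    given file when one is provided, else back to the original path."""
    def __init__(self, path):
        self.path = Path(path)
        self._cache = self.path.read_bytes()

    def flush(self, file=str()):
        if file:
            Path(file).write_bytes(self._cache)  # overwrites if it exists
        else:
            self.path.write_bytes(self._cache)

tmpdir = tempfile.mkdtemp()
src = os.path.join(tmpdir, "orig.bin")
Path(src).write_bytes(b"\x00\x01")
f = CachedFile(src)
f._cache = b"\x02\x03"            # simulate an in-memory edit
copy = os.path.join(tmpdir, "copy.bin")
f.flush(copy)                     # flush to a new file
f.flush()                         # flush back to the original
```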
def from_environ(cls, environ=os.environ): """Constructs a _PipelineContext from the task queue environment.""" base_path, unused = (environ['PATH_INFO'].rsplit('/', 1) + [''])[:2] return cls( environ['HTTP_X_APPENGINE_TASKNAME'], environ['HTTP_X_APPENGINE_QUEUENAME'], base_path)
Constructs a _PipelineContext from the task queue environment.
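The `(rsplit('/', 1) + [''])[:2]` idiom in `from_environ` pads the split result so unpacking two values never fails, even when `PATH_INFO` contains no slash. Isolated as a small function (the name is illustrative):

```python
def split_base_path(path_info):
    """Split '/base/handler' into ('/base', 'handler'). The + [''] pad
    guarantees two elements, so a slash-free path unpacks cleanly
    instead of raising ValueError."""
    base_path, tail = (path_info.rsplit('/', 1) + [''])[:2]
    return base_path, tail
```

Without the pad, `'noslash'.rsplit('/', 1)` yields a single-element list and the two-name unpacking would raise.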
def Input_setIgnoreInputEvents(self, ignore): """ Function path: Input.setIgnoreInputEvents Domain: Input Method name: setIgnoreInputEvents Parameters: Required arguments: 'ignore' (type: boolean) -> Ignores input events processing when set to true. No return value. Description: Ignores input events (useful while auditing page). """ assert isinstance(ignore, (bool,) ), "Argument 'ignore' must be of type '['bool']'. Received type: '%s'" % type( ignore) subdom_funcs = self.synchronous_command('Input.setIgnoreInputEvents', ignore=ignore) return subdom_funcs
Function path: Input.setIgnoreInputEvents Domain: Input Method name: setIgnoreInputEvents Parameters: Required arguments: 'ignore' (type: boolean) -> Ignores input events processing when set to true. No return value. Description: Ignores input events (useful while auditing page).
def AdjustDescriptor(self, fields): """Payload-aware metadata processor.""" for f in fields: if f.name == "args_rdf_name": f.name = "payload_type" if f.name == "args": f.name = "payload" return fields
Payload-aware metadata processor.
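`AdjustDescriptor` renames two metadata fields in place while leaving all others untouched. A self-contained sketch using a rename map instead of chained `if`s (it returns a new list rather than mutating, and the `Field` namedtuple stands in for the real descriptor objects):

```python
from collections import namedtuple

Field = namedtuple("Field", ["name", "type"])

def adjust_descriptor(fields):
    """Rename payload metadata fields, as AdjustDescriptor does:
    args_rdf_name -> payload_type, args -> payload."""
    renames = {"args_rdf_name": "payload_type", "args": "payload"}
    return [f._replace(name=renames.get(f.name, f.name)) for f in fields]

fields = adjust_descriptor([Field("args_rdf_name", "str"),
                            Field("args", "bytes"),
                            Field("timestamp", "int")])
```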
def feature_parser(uni_feature, word_surface): # type: (text_type, text_type) -> Tuple[Tuple[text_type, text_type, text_type], text_type] """ Parse the POS feature output by Mecab :param uni_feature unicode: :return ( (pos1, pos2, pos3), word_stem ): """ list_feature_items = uni_feature.split(',') # if word has no feature at all if len(list_feature_items) == 1: return ('*'), ('*') pos1 = list_feature_items[0] pos2 = list_feature_items[1] pos3 = list_feature_items[2] tuple_pos = (pos1, pos2, pos3) # if without constraint(output is normal mecab dictionary like) if len(list_feature_items) == 9: word_stem = list_feature_items[6] # if with constraint(output format depends on Usedict.txt) else: word_stem = word_surface return tuple_pos, word_stem
Parse the POS feature output by Mecab :param uni_feature unicode: :return ( (pos1, pos2, pos3), word_stem ):
Below is the instruction that describes the task: ### Input: Parse the POS feature output by Mecab :param uni_feature unicode: :return ( (pos1, pos2, pos3), word_stem ): ### Response: def feature_parser(uni_feature, word_surface): # type: (text_type, text_type) -> Tuple[Tuple[text_type, text_type, text_type], text_type] """ Parse the POS feature output by Mecab :param uni_feature unicode: :return ( (pos1, pos2, pos3), word_stem ): """ list_feature_items = uni_feature.split(',') # if word has no feature at all if len(list_feature_items) == 1: return ('*'), ('*') pos1 = list_feature_items[0] pos2 = list_feature_items[1] pos3 = list_feature_items[2] tuple_pos = (pos1, pos2, pos3) # if without constraint(output is normal mecab dictionary like) if len(list_feature_items) == 9: word_stem = list_feature_items[6] # if with constraint(output format depends on Usedict.txt) else: word_stem = word_surface return tuple_pos, word_stem
def dragEnterEvent(self, event): """Allow user to drag files""" if mimedata2url(event.mimeData()): event.accept() else: event.ignore()
Allow user to drag files
Below is the instruction that describes the task: ### Input: Allow user to drag files ### Response: def dragEnterEvent(self, event): """Allow user to drag files""" if mimedata2url(event.mimeData()): event.accept() else: event.ignore()
def route(self, request, response): """Processes every request. Directs control flow to the appropriate HTTP/1.1 method. """ # Ensure that we're allowed to use this HTTP method. self.require_http_allowed_method(request) # Retrieve the function corresponding to this HTTP method. function = getattr(self, request.method.lower(), None) if function is None: # Server is not capable of supporting it. raise http.exceptions.NotImplemented() # Delegate to the determined function to process the request. return function(request, response)
Processes every request. Directs control flow to the appropriate HTTP/1.1 method.
Below is the instruction that describes the task: ### Input: Processes every request. Directs control flow to the appropriate HTTP/1.1 method. ### Response: def route(self, request, response): """Processes every request. Directs control flow to the appropriate HTTP/1.1 method. """ # Ensure that we're allowed to use this HTTP method. self.require_http_allowed_method(request) # Retrieve the function corresponding to this HTTP method. function = getattr(self, request.method.lower(), None) if function is None: # Server is not capable of supporting it. raise http.exceptions.NotImplemented() # Delegate to the determined function to process the request. return function(request, response)
def save(self): """Either create or persist changes on this object back to the One Codex server.""" check_bind(self) creating = self.id is None if creating and not self.__class__._has_schema_method("create"): raise MethodNotSupported("{} do not support creating.".format(self.__class__.__name__)) if not creating and not self.__class__._has_schema_method("update"): raise MethodNotSupported("{} do not support updating.".format(self.__class__.__name__)) try: self._resource.save() except HTTPError as e: if e.response.status_code == 400: err_json = e.response.json().get("errors", []) msg = pretty_print_error(err_json) raise ServerError(msg) elif e.response.status_code == 404: action = "creating" if creating else "updating" raise MethodNotSupported( "{} do not support {}.".format(self.__class__.__name__, action) ) elif e.response.status_code == 409: raise ServerError("This {} object already exists".format(self.__class__.__name__)) else: raise e
Either create or persist changes on this object back to the One Codex server.
Below is the instruction that describes the task: ### Input: Either create or persist changes on this object back to the One Codex server. ### Response: def save(self): """Either create or persist changes on this object back to the One Codex server.""" check_bind(self) creating = self.id is None if creating and not self.__class__._has_schema_method("create"): raise MethodNotSupported("{} do not support creating.".format(self.__class__.__name__)) if not creating and not self.__class__._has_schema_method("update"): raise MethodNotSupported("{} do not support updating.".format(self.__class__.__name__)) try: self._resource.save() except HTTPError as e: if e.response.status_code == 400: err_json = e.response.json().get("errors", []) msg = pretty_print_error(err_json) raise ServerError(msg) elif e.response.status_code == 404: action = "creating" if creating else "updating" raise MethodNotSupported( "{} do not support {}.".format(self.__class__.__name__, action) ) elif e.response.status_code == 409: raise ServerError("This {} object already exists".format(self.__class__.__name__)) else: raise e
def _fix_namespace(self): """Internal helper to fix the namespace. This is called to ensure that for queries without an explicit namespace, the namespace used by async calls is the one in effect at the time the async call is made, not the one in effect when the request is actually generated. """ if self.namespace is not None: return self namespace = namespace_manager.get_namespace() return self.__class__(kind=self.kind, ancestor=self.ancestor, filters=self.filters, orders=self.orders, app=self.app, namespace=namespace, default_options=self.default_options, projection=self.projection, group_by=self.group_by)
Internal helper to fix the namespace. This is called to ensure that for queries without an explicit namespace, the namespace used by async calls is the one in effect at the time the async call is made, not the one in effect when the request is actually generated.
Below is the instruction that describes the task: ### Input: Internal helper to fix the namespace. This is called to ensure that for queries without an explicit namespace, the namespace used by async calls is the one in effect at the time the async call is made, not the one in effect when the request is actually generated. ### Response: def _fix_namespace(self): """Internal helper to fix the namespace. This is called to ensure that for queries without an explicit namespace, the namespace used by async calls is the one in effect at the time the async call is made, not the one in effect when the request is actually generated. """ if self.namespace is not None: return self namespace = namespace_manager.get_namespace() return self.__class__(kind=self.kind, ancestor=self.ancestor, filters=self.filters, orders=self.orders, app=self.app, namespace=namespace, default_options=self.default_options, projection=self.projection, group_by=self.group_by)
def add_gateway_router(self, router, body=None): """Adds an external network gateway to the specified router.""" return self.put((self.router_path % router), body={'router': {'external_gateway_info': body}})
Adds an external network gateway to the specified router.
Below is the instruction that describes the task: ### Input: Adds an external network gateway to the specified router. ### Response: def add_gateway_router(self, router, body=None): """Adds an external network gateway to the specified router.""" return self.put((self.router_path % router), body={'router': {'external_gateway_info': body}})
def _check_submodule_status(root, submodules): """check submodule status Has three return values: 'missing' - submodules are absent 'unclean' - submodules have unstaged changes 'clean' - all submodules are up to date """ if hasattr(sys, "frozen"): # frozen via py2exe or similar, don't bother return 'clean' if not os.path.exists(os.path.join(root, '.git')): # not in git, assume clean return 'clean' for submodule in submodules: if not os.path.exists(submodule): return 'missing' # Popen can't handle unicode cwd on Windows Python 2 if sys.platform == 'win32' and sys.version_info[0] < 3 \ and not isinstance(root, bytes): root = root.encode(sys.getfilesystemencoding() or 'ascii') # check with git submodule status proc = subprocess.Popen('git submodule status', stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True, cwd=root) status, _ = proc.communicate() status = status.decode("ascii", "replace") for line in status.splitlines(): if line.startswith('-'): return 'missing' elif line.startswith('+'): return 'unclean' return 'clean'
check submodule status Has three return values: 'missing' - submodules are absent 'unclean' - submodules have unstaged changes 'clean' - all submodules are up to date
Below is the instruction that describes the task: ### Input: check submodule status Has three return values: 'missing' - submodules are absent 'unclean' - submodules have unstaged changes 'clean' - all submodules are up to date ### Response: def _check_submodule_status(root, submodules): """check submodule status Has three return values: 'missing' - submodules are absent 'unclean' - submodules have unstaged changes 'clean' - all submodules are up to date """ if hasattr(sys, "frozen"): # frozen via py2exe or similar, don't bother return 'clean' if not os.path.exists(os.path.join(root, '.git')): # not in git, assume clean return 'clean' for submodule in submodules: if not os.path.exists(submodule): return 'missing' # Popen can't handle unicode cwd on Windows Python 2 if sys.platform == 'win32' and sys.version_info[0] < 3 \ and not isinstance(root, bytes): root = root.encode(sys.getfilesystemencoding() or 'ascii') # check with git submodule status proc = subprocess.Popen('git submodule status', stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True, cwd=root) status, _ = proc.communicate() status = status.decode("ascii", "replace") for line in status.splitlines(): if line.startswith('-'): return 'missing' elif line.startswith('+'): return 'unclean' return 'clean'
def get_port_for_process(self, pid): """Allocates and returns port for pid or 0 if none could be allocated.""" if not self._port_queue: raise RuntimeError('No ports being managed.') # Avoid an infinite loop if all ports are currently assigned. check_count = 0 max_ports_to_test = len(self._port_queue) while check_count < max_ports_to_test: # Get the next candidate port and move it to the back of the queue. candidate = self._port_queue.pop() self._port_queue.appendleft(candidate) check_count += 1 if (candidate.start_time == 0 or candidate.start_time != _get_process_start_time(candidate.pid)): if _is_port_free(candidate.port): candidate.pid = pid candidate.start_time = _get_process_start_time(pid) if not candidate.start_time: log.info("Can't read start time for pid %d.", pid) self.ports_checked_for_last_request = check_count return candidate.port else: log.info( 'Port %d unexpectedly in use, last owning pid %d.', candidate.port, candidate.pid) log.info('All ports in use.') self.ports_checked_for_last_request = check_count return 0
Allocates and returns port for pid or 0 if none could be allocated.
Below is the instruction that describes the task: ### Input: Allocates and returns port for pid or 0 if none could be allocated. ### Response: def get_port_for_process(self, pid): """Allocates and returns port for pid or 0 if none could be allocated.""" if not self._port_queue: raise RuntimeError('No ports being managed.') # Avoid an infinite loop if all ports are currently assigned. check_count = 0 max_ports_to_test = len(self._port_queue) while check_count < max_ports_to_test: # Get the next candidate port and move it to the back of the queue. candidate = self._port_queue.pop() self._port_queue.appendleft(candidate) check_count += 1 if (candidate.start_time == 0 or candidate.start_time != _get_process_start_time(candidate.pid)): if _is_port_free(candidate.port): candidate.pid = pid candidate.start_time = _get_process_start_time(pid) if not candidate.start_time: log.info("Can't read start time for pid %d.", pid) self.ports_checked_for_last_request = check_count return candidate.port else: log.info( 'Port %d unexpectedly in use, last owning pid %d.', candidate.port, candidate.pid) log.info('All ports in use.') self.ports_checked_for_last_request = check_count return 0
def UploadBaseFiles(self, issue, rpc_server, patch_list, patchset, options, files): """Uploads the base files (and if necessary, the current ones as well).""" def UploadFile(filename, file_id, content, is_binary, status, is_base): """Uploads a file to the server.""" set_status("uploading " + filename) file_too_large = False if is_base: type = "base" else: type = "current" if len(content) > MAX_UPLOAD_SIZE: print ("Not uploading the %s file for %s because it's too large." % (type, filename)) file_too_large = True content = "" checksum = md5(content).hexdigest() if options.verbose > 0 and not file_too_large: print "Uploading %s file for %s" % (type, filename) url = "/%d/upload_content/%d/%d" % (int(issue), int(patchset), file_id) form_fields = [ ("filename", filename), ("status", status), ("checksum", checksum), ("is_binary", str(is_binary)), ("is_current", str(not is_base)), ] if file_too_large: form_fields.append(("file_too_large", "1")) if options.email: form_fields.append(("user", options.email)) ctype, body = EncodeMultipartFormData(form_fields, [("data", filename, content)]) response_body = rpc_server.Send(url, body, content_type=ctype) if not response_body.startswith("OK"): StatusUpdate(" --> %s" % response_body) sys.exit(1) # Don't want to spawn too many threads, nor do we want to # hit Rietveld too hard, or it will start serving 500 errors. # When 8 works, it's no better than 4, and sometimes 8 is # too many for Rietveld to handle. 
MAX_PARALLEL_UPLOADS = 4 sema = threading.BoundedSemaphore(MAX_PARALLEL_UPLOADS) upload_threads = [] finished_upload_threads = [] class UploadFileThread(threading.Thread): def __init__(self, args): threading.Thread.__init__(self) self.args = args def run(self): UploadFile(*self.args) finished_upload_threads.append(self) sema.release() def StartUploadFile(*args): sema.acquire() while len(finished_upload_threads) > 0: t = finished_upload_threads.pop() upload_threads.remove(t) t.join() t = UploadFileThread(args) upload_threads.append(t) t.start() def WaitForUploads(): for t in upload_threads: t.join() patches = dict() [patches.setdefault(v, k) for k, v in patch_list] for filename in patches.keys(): base_content, new_content, is_binary, status = files[filename] file_id_str = patches.get(filename) if file_id_str.find("nobase") != -1: base_content = None file_id_str = file_id_str[file_id_str.rfind("_") + 1:] file_id = int(file_id_str) if base_content != None: StartUploadFile(filename, file_id, base_content, is_binary, status, True) if new_content != None: StartUploadFile(filename, file_id, new_content, is_binary, status, False) WaitForUploads()
Uploads the base files (and if necessary, the current ones as well).
Below is the instruction that describes the task: ### Input: Uploads the base files (and if necessary, the current ones as well). ### Response: def UploadBaseFiles(self, issue, rpc_server, patch_list, patchset, options, files): """Uploads the base files (and if necessary, the current ones as well).""" def UploadFile(filename, file_id, content, is_binary, status, is_base): """Uploads a file to the server.""" set_status("uploading " + filename) file_too_large = False if is_base: type = "base" else: type = "current" if len(content) > MAX_UPLOAD_SIZE: print ("Not uploading the %s file for %s because it's too large." % (type, filename)) file_too_large = True content = "" checksum = md5(content).hexdigest() if options.verbose > 0 and not file_too_large: print "Uploading %s file for %s" % (type, filename) url = "/%d/upload_content/%d/%d" % (int(issue), int(patchset), file_id) form_fields = [ ("filename", filename), ("status", status), ("checksum", checksum), ("is_binary", str(is_binary)), ("is_current", str(not is_base)), ] if file_too_large: form_fields.append(("file_too_large", "1")) if options.email: form_fields.append(("user", options.email)) ctype, body = EncodeMultipartFormData(form_fields, [("data", filename, content)]) response_body = rpc_server.Send(url, body, content_type=ctype) if not response_body.startswith("OK"): StatusUpdate(" --> %s" % response_body) sys.exit(1) # Don't want to spawn too many threads, nor do we want to # hit Rietveld too hard, or it will start serving 500 errors. # When 8 works, it's no better than 4, and sometimes 8 is # too many for Rietveld to handle. 
MAX_PARALLEL_UPLOADS = 4 sema = threading.BoundedSemaphore(MAX_PARALLEL_UPLOADS) upload_threads = [] finished_upload_threads = [] class UploadFileThread(threading.Thread): def __init__(self, args): threading.Thread.__init__(self) self.args = args def run(self): UploadFile(*self.args) finished_upload_threads.append(self) sema.release() def StartUploadFile(*args): sema.acquire() while len(finished_upload_threads) > 0: t = finished_upload_threads.pop() upload_threads.remove(t) t.join() t = UploadFileThread(args) upload_threads.append(t) t.start() def WaitForUploads(): for t in upload_threads: t.join() patches = dict() [patches.setdefault(v, k) for k, v in patch_list] for filename in patches.keys(): base_content, new_content, is_binary, status = files[filename] file_id_str = patches.get(filename) if file_id_str.find("nobase") != -1: base_content = None file_id_str = file_id_str[file_id_str.rfind("_") + 1:] file_id = int(file_id_str) if base_content != None: StartUploadFile(filename, file_id, base_content, is_binary, status, True) if new_content != None: StartUploadFile(filename, file_id, new_content, is_binary, status, False) WaitForUploads()
def __entropy(data): '''Compute entropy of the flattened data set (e.g. a density distribution).''' # normalize and convert to float data = data/float(numpy.sum(data)) # for each grey-value g with a probability p(g) = 0, the entropy is defined as 0, therefore we remove these values and also flatten the histogram data = data[numpy.nonzero(data)] # compute entropy return -1. * numpy.sum(data * numpy.log2(data))
Compute entropy of the flattened data set (e.g. a density distribution).
Below is the instruction that describes the task: ### Input: Compute entropy of the flattened data set (e.g. a density distribution). ### Response: def __entropy(data): '''Compute entropy of the flattened data set (e.g. a density distribution).''' # normalize and convert to float data = data/float(numpy.sum(data)) # for each grey-value g with a probability p(g) = 0, the entropy is defined as 0, therefore we remove these values and also flatten the histogram data = data[numpy.nonzero(data)] # compute entropy return -1. * numpy.sum(data * numpy.log2(data))
def sudo_remove_dirtree(dir_name): """Removes directory tree as a superuser. Args: dir_name: name of the directory to remove. This function is necessary to cleanup directories created from inside a Docker, since they are usually written as root, thus have to be removed as root. """ try: subprocess.check_output(['sudo', 'rm', '-rf', dir_name]) except subprocess.CalledProcessError as e: raise WorkerError("Can't remove directory {0}".format(dir_name), e)
Removes directory tree as a superuser. Args: dir_name: name of the directory to remove. This function is necessary to cleanup directories created from inside a Docker, since they are usually written as root, thus have to be removed as root.
Below is the instruction that describes the task: ### Input: Removes directory tree as a superuser. Args: dir_name: name of the directory to remove. This function is necessary to cleanup directories created from inside a Docker, since they are usually written as root, thus have to be removed as root. ### Response: def sudo_remove_dirtree(dir_name): """Removes directory tree as a superuser. Args: dir_name: name of the directory to remove. This function is necessary to cleanup directories created from inside a Docker, since they are usually written as root, thus have to be removed as root. """ try: subprocess.check_output(['sudo', 'rm', '-rf', dir_name]) except subprocess.CalledProcessError as e: raise WorkerError("Can't remove directory {0}".format(dir_name), e)
def valid_remainder(cntxt: Context, n: Node, matchables: RDFGraph, S: ShExJ.Shape) -> bool: """ Let **outs** be the arcsOut in remainder: `outs = remainder ∩ arcsOut(G, n)`. Let **matchables** be the triples in outs whose predicate appears in a TripleConstraint in `expression`. If `expression` is absent, matchables = Ø (the empty set). * There is no triple in **matchables** which matches a TripleConstraint in expression * There is no triple in **matchables** whose predicate does not appear in extra. * closed is false or unmatchables is empty :param cntxt: evaluation context :param n: focus node :param matchables: non-matched triples :param S: Shape being evaluated :return: True if remainder is valid """ # TODO: Update this and satisfies to address the new algorithm # Let **outs** be the arcsOut in remainder: `outs = remainder ∩ arcsOut(G, n)`. outs = arcsOut(cntxt.graph, n).intersection(matchables) # predicates that appear in a TripleConstraint in `expression` predicates = predicates_in_expression(S, cntxt) # Let **matchables** be the triples in outs whose predicate appears in predicates. If # `expression` is absent, matchables = Ø (the empty set). matchables = RDFGraph(t for t in outs if str(t.p) in predicates) # There is no triple in **matchables** which matches a TripleConstraint in expression if matchables and S.expression is not None: tes = triple_constraints_in_expression(S.expression, cntxt) for m in matchables: if any(matchesTripleConstraint(cntxt, m, te) for te in tes): return False # There is no triple in **matchables** whose predicate does not appear in extra. extras = {iriref_to_uriref(e) for e in S.extra} if S.extra is not None else {} if any(t.p not in extras for t in matchables): return False # closed is false or unmatchables is empty. return not S.closed.val or not bool(outs - matchables)
Let **outs** be the arcsOut in remainder: `outs = remainder ∩ arcsOut(G, n)`. Let **matchables** be the triples in outs whose predicate appears in a TripleConstraint in `expression`. If `expression` is absent, matchables = Ø (the empty set). * There is no triple in **matchables** which matches a TripleConstraint in expression * There is no triple in **matchables** whose predicate does not appear in extra. * closed is false or unmatchables is empty :param cntxt: evaluation context :param n: focus node :param matchables: non-matched triples :param S: Shape being evaluated :return: True if remainder is valid
Below is the instruction that describes the task: ### Input: Let **outs** be the arcsOut in remainder: `outs = remainder ∩ arcsOut(G, n)`. Let **matchables** be the triples in outs whose predicate appears in a TripleConstraint in `expression`. If `expression` is absent, matchables = Ø (the empty set). * There is no triple in **matchables** which matches a TripleConstraint in expression * There is no triple in **matchables** whose predicate does not appear in extra. * closed is false or unmatchables is empty :param cntxt: evaluation context :param n: focus node :param matchables: non-matched triples :param S: Shape being evaluated :return: True if remainder is valid ### Response: def valid_remainder(cntxt: Context, n: Node, matchables: RDFGraph, S: ShExJ.Shape) -> bool: """ Let **outs** be the arcsOut in remainder: `outs = remainder ∩ arcsOut(G, n)`. Let **matchables** be the triples in outs whose predicate appears in a TripleConstraint in `expression`. If `expression` is absent, matchables = Ø (the empty set). * There is no triple in **matchables** which matches a TripleConstraint in expression * There is no triple in **matchables** whose predicate does not appear in extra. * closed is false or unmatchables is empty :param cntxt: evaluation context :param n: focus node :param matchables: non-matched triples :param S: Shape being evaluated :return: True if remainder is valid """ # TODO: Update this and satisfies to address the new algorithm # Let **outs** be the arcsOut in remainder: `outs = remainder ∩ arcsOut(G, n)`. outs = arcsOut(cntxt.graph, n).intersection(matchables) # predicates that appear in a TripleConstraint in `expression` predicates = predicates_in_expression(S, cntxt) # Let **matchables** be the triples in outs whose predicate appears in predicates. If # `expression` is absent, matchables = Ø (the empty set). 
matchables = RDFGraph(t for t in outs if str(t.p) in predicates) # There is no triple in **matchables** which matches a TripleConstraint in expression if matchables and S.expression is not None: tes = triple_constraints_in_expression(S.expression, cntxt) for m in matchables: if any(matchesTripleConstraint(cntxt, m, te) for te in tes): return False # There is no triple in **matchables** whose predicate does not appear in extra. extras = {iriref_to_uriref(e) for e in S.extra} if S.extra is not None else {} if any(t.p not in extras for t in matchables): return False # closed is false or unmatchables is empty. return not S.closed.val or not bool(outs - matchables)
def scanFolderForRegexp(folder = None, listRegexp = None, recursive = False, verbosity=1, logFolder= "./logs"): ''' [Optionally] recursive method to scan the files in a given folder. :param folder: the folder to be scanned. :param listRegexp: listRegexp is an array of <RegexpObject>. :param recursive: when True, it performs a recursive search on the subfolders. :return: a list of the available objects containing the expressions found in the provided data. [ { "attributes": [], "type": "i3visio.email", "value": "foo@bar.com" }, { "attributes": [], "type": "i3visio.email", "value": "bar@foo.com" } ] ''' i3visiotools.logger.setupLogger(loggerName="entify", verbosity=verbosity, logFolder=logFolder) logger = logging.getLogger("entify") logger.info("Scanning the folder: " + folder) results = {} #onlyfiles = [] #for f in listdir(args.input_folder): # if isfile(join(args.input_folder, f)): # onlyfiles.append(f) onlyfiles = [ f for f in listdir(folder) if isfile(join(folder,f)) ] for f in onlyfiles: filePath = join(folder,f) logger.debug("Looking for regular expressions in: " + filePath) with open(filePath, "r") as tempF: # reading data foundExpr = getEntitiesByRegexp(data = tempF.read(), listRegexp = listRegexp) logger.debug("Updating the " + str(len(foundExpr)) + " results found on: " + filePath) results[filePath] = foundExpr if recursive: onlyfolders = [ f for f in listdir(folder) if isdir(join(folder,f)) ] for f in onlyfolders: folderPath = join(folder, f) logger.debug("Looking for additional in the folder: "+ folderPath) results.update(scanFolderForRegexp(folder = folderPath,listRegexp = listRegexp, recursive = recursive)) return results
[Optionally] recursive method to scan the files in a given folder. :param folder: the folder to be scanned. :param listRegexp: listRegexp is an array of <RegexpObject>. :param recursive: when True, it performs a recursive search on the subfolders. :return: a list of the available objects containing the expressions found in the provided data. [ { "attributes": [], "type": "i3visio.email", "value": "foo@bar.com" }, { "attributes": [], "type": "i3visio.email", "value": "bar@foo.com" } ]
Below is the instruction that describes the task: ### Input: [Optionally] recursive method to scan the files in a given folder. :param folder: the folder to be scanned. :param listRegexp: listRegexp is an array of <RegexpObject>. :param recursive: when True, it performs a recursive search on the subfolders. :return: a list of the available objects containing the expressions found in the provided data. [ { "attributes": [], "type": "i3visio.email", "value": "foo@bar.com" }, { "attributes": [], "type": "i3visio.email", "value": "bar@foo.com" } ] ### Response: def scanFolderForRegexp(folder = None, listRegexp = None, recursive = False, verbosity=1, logFolder= "./logs"): ''' [Optionally] recursive method to scan the files in a given folder. :param folder: the folder to be scanned. :param listRegexp: listRegexp is an array of <RegexpObject>. :param recursive: when True, it performs a recursive search on the subfolders. :return: a list of the available objects containing the expressions found in the provided data. 
[ { "attributes": [], "type": "i3visio.email", "value": "foo@bar.com" }, { "attributes": [], "type": "i3visio.email", "value": "bar@foo.com" } ] ''' i3visiotools.logger.setupLogger(loggerName="entify", verbosity=verbosity, logFolder=logFolder) logger = logging.getLogger("entify") logger.info("Scanning the folder: " + folder) results = {} #onlyfiles = [] #for f in listdir(args.input_folder): # if isfile(join(args.input_folder, f)): # onlyfiles.append(f) onlyfiles = [ f for f in listdir(folder) if isfile(join(folder,f)) ] for f in onlyfiles: filePath = join(folder,f) logger.debug("Looking for regular expressions in: " + filePath) with open(filePath, "r") as tempF: # reading data foundExpr = getEntitiesByRegexp(data = tempF.read(), listRegexp = listRegexp) logger.debug("Updating the " + str(len(foundExpr)) + " results found on: " + filePath) results[filePath] = foundExpr if recursive: onlyfolders = [ f for f in listdir(folder) if isdir(join(folder,f)) ] for f in onlyfolders: folderPath = join(folder, f) logger.debug("Looking for additional in the folder: "+ folderPath) results.update(scanFolderForRegexp(folder = folderPath,listRegexp = listRegexp, recursive = recursive)) return results
def export_metadata(self, fields=None, forms=None, format='json', df_kwargs=None): """ Export the project's metadata Parameters ---------- fields : list Limit exported metadata to these fields forms : list Limit exported metadata to these forms format : (``'json'``), ``'csv'``, ``'xml'``, ``'df'`` Return the metadata in native objects, csv or xml. ``'df'`` will return a ``pandas.DataFrame``. df_kwargs : dict Passed to ``pandas.read_csv`` to control construction of returned DataFrame. by default ``{'index_col': 'field_name'}`` Returns ------- metadata : list, str, ``pandas.DataFrame`` metadata structure for the project. """ ret_format = format if format == 'df': from pandas import read_csv ret_format = 'csv' pl = self.__basepl('metadata', format=ret_format) to_add = [fields, forms] str_add = ['fields', 'forms'] for key, data in zip(str_add, to_add): if data: pl[key] = ','.join(data) response, _ = self._call_api(pl, 'metadata') if format in ('json', 'csv', 'xml'): return response elif format == 'df': if not df_kwargs: df_kwargs = {'index_col': 'field_name'} return read_csv(StringIO(response), **df_kwargs)
Export the project's metadata Parameters ---------- fields : list Limit exported metadata to these fields forms : list Limit exported metadata to these forms format : (``'json'``), ``'csv'``, ``'xml'``, ``'df'`` Return the metadata in native objects, csv or xml. ``'df'`` will return a ``pandas.DataFrame``. df_kwargs : dict Passed to ``pandas.read_csv`` to control construction of returned DataFrame. by default ``{'index_col': 'field_name'}`` Returns ------- metadata : list, str, ``pandas.DataFrame`` metadata structure for the project.
Below is the instruction that describes the task: ### Input: Export the project's metadata Parameters ---------- fields : list Limit exported metadata to these fields forms : list Limit exported metadata to these forms format : (``'json'``), ``'csv'``, ``'xml'``, ``'df'`` Return the metadata in native objects, csv or xml. ``'df'`` will return a ``pandas.DataFrame``. df_kwargs : dict Passed to ``pandas.read_csv`` to control construction of returned DataFrame. by default ``{'index_col': 'field_name'}`` Returns ------- metadata : list, str, ``pandas.DataFrame`` metadata structure for the project. ### Response: def export_metadata(self, fields=None, forms=None, format='json', df_kwargs=None): """ Export the project's metadata Parameters ---------- fields : list Limit exported metadata to these fields forms : list Limit exported metadata to these forms format : (``'json'``), ``'csv'``, ``'xml'``, ``'df'`` Return the metadata in native objects, csv or xml. ``'df'`` will return a ``pandas.DataFrame``. df_kwargs : dict Passed to ``pandas.read_csv`` to control construction of returned DataFrame. by default ``{'index_col': 'field_name'}`` Returns ------- metadata : list, str, ``pandas.DataFrame`` metadata structure for the project. """ ret_format = format if format == 'df': from pandas import read_csv ret_format = 'csv' pl = self.__basepl('metadata', format=ret_format) to_add = [fields, forms] str_add = ['fields', 'forms'] for key, data in zip(str_add, to_add): if data: pl[key] = ','.join(data) response, _ = self._call_api(pl, 'metadata') if format in ('json', 'csv', 'xml'): return response elif format == 'df': if not df_kwargs: df_kwargs = {'index_col': 'field_name'} return read_csv(StringIO(response), **df_kwargs)
def get_template_as_json(template_id, **kwargs): """ Get a template (including attribute and dataset definitions) as a JSON string. This is just a wrapper around the get_template_as_dict function. """ user_id = kwargs['user_id'] return json.dumps(get_template_as_dict(template_id, user_id=user_id))
Get a template (including attribute and dataset definitions) as a JSON string. This is just a wrapper around the get_template_as_dict function.
Below is the instruction that describes the task: ### Input: Get a template (including attribute and dataset definitions) as a JSON string. This is just a wrapper around the get_template_as_dict function. ### Response: def get_template_as_json(template_id, **kwargs): """ Get a template (including attribute and dataset definitions) as a JSON string. This is just a wrapper around the get_template_as_dict function. """ user_id = kwargs['user_id'] return json.dumps(get_template_as_dict(template_id, user_id=user_id))
def getParser(): "Creates and returns the argparse parser object." # text epilog =""" examples: %(prog)s -e example.nii grid.nii 10 Generates an empty image with the same attributes as example.nii, overlays it with a regular grid of width 10 voxels and saves it as grid.nii. %(prog)s -e example.nii grid.nii 10,11,12 -r Same as above, but with an irregular grid and using real world coordinates (i.e. taking the voxel spacing of the image into account). %(prog)s -s 100,200 grid.nii 10,2 -p 0.5,3 Generates a 10x2 spaced grid in a 100x200 image with a voxel spacing of 0.5x3. %(prog)s -s 100,100,50 grid.nii 5,5,0 Generates a 100x100x50 3D volume but fills it only with a regular 5x5 2D grid over the first two dimensions. """ # command line argument parser parser = argparse.ArgumentParser(formatter_class=argparse.RawDescriptionHelpFormatter, description=__description__, epilog=epilog) parser.add_argument('output', help='Generated grid volume.') parser.add_argument('spacing', type=list_of_integers_or_int, help='The grid spacing. Can be a single digit for regular spacing in all dimensions or a colon-separated list of N integers, where N is the number of dimension in the generated volume. To skip the grid in one dimension, simply supply a 0 for it.') group = parser.add_mutually_exclusive_group(required=True) group.add_argument('-e', '--example', dest='example', help='Option 1/2: Supply an image to create the grid volume by example (i.e. with same shape, voxel spacing and offset).') group.add_argument('-s', '--shape', type=list_of_integers, dest='shape', help='Option 2/2: Supply a colon-separated list of integers that constitute the target volumes shape.') parser.add_argument('-p', '--pixel-spacing', type=list_of_floats, dest='pixelspacing', help='Set the pixel spacing of the target volume by supplying a colon-separated list of N numbers, where N is the number of dimension in the generated volume.') parser.add_argument('-o', '--offset', type=list_of_floats, dest='offset', help='Set offset of the target volume by supplying a colon-separated list of N numbers, where N is the number of dimension in the generated volume.') parser.add_argument('-r', '--real', dest='real', action='store_true', help='Spacing is given in real world coordinates, rather than voxels. For this to make a difference, either the -e switch or the -p switch must be set.') parser.add_argument('-v', dest='verbose', action='store_true', help='Display more information.') parser.add_argument('-d', dest='debug', action='store_true', help='Display debug information.') parser.add_argument('-f', '--force', dest='force', action='store_true', help='Silently override existing output images.') return parser
Creates and returns the argparse parser object.
Below is the instruction that describes the task: ### Input: Creates and returns the argparse parser object. ### Response: def getParser(): "Creates and returns the argparse parser object." # text epilog =""" examples: %(prog)s -e example.nii grid.nii 10 Generates an empty image with the same attributes as example.nii, overlays it with a regular grid of width 10 voxels and saves it as grid.nii. %(prog)s -e example.nii grid.nii 10,11,12 -r Same as above, but with an irregular grid and using real world coordinates (i.e. taking the voxel spacing of the image into account). %(prog)s -s 100,200 grid.nii 10,2 -p 0.5,3 Generates a 10x2 spaced grid in a 100x200 image with a voxel spacing of 0.5x3. %(prog)s -s 100,100,50 grid.nii 5,5,0 Generates a 100x100x50 3D volume but fills it only with a regular 5x5 2D grid over the first two dimensions. """ # command line argument parser parser = argparse.ArgumentParser(formatter_class=argparse.RawDescriptionHelpFormatter, description=__description__, epilog=epilog) parser.add_argument('output', help='Generated grid volume.') parser.add_argument('spacing', type=list_of_integers_or_int, help='The grid spacing. Can be a single digit for regular spacing in all dimensions or a colon-separated list of N integers, where N is the number of dimension in the generated volume. To skip the grid in one dimension, simply supply a 0 for it.') group = parser.add_mutually_exclusive_group(required=True) group.add_argument('-e', '--example', dest='example', help='Option 1/2: Supply an image to create the grid volume by example (i.e. with same shape, voxel spacing and offset).') group.add_argument('-s', '--shape', type=list_of_integers, dest='shape', help='Option 2/2: Supply a colon-separated list of integers that constitute the target volumes shape.') parser.add_argument('-p', '--pixel-spacing', type=list_of_floats, dest='pixelspacing', help='Set the pixel spacing of the target volume by supplying a colon-separated list of N numbers, where N is the number of dimension in the generated volume.') parser.add_argument('-o', '--offset', type=list_of_floats, dest='offset', help='Set offset of the target volume by supplying a colon-separated list of N numbers, where N is the number of dimension in the generated volume.') parser.add_argument('-r', '--real', dest='real', action='store_true', help='Spacing is given in real world coordinates, rather than voxels. For this to make a difference, either the -e switch or the -p switch must be set.') parser.add_argument('-v', dest='verbose', action='store_true', help='Display more information.') parser.add_argument('-d', dest='debug', action='store_true', help='Display debug information.') parser.add_argument('-f', '--force', dest='force', action='store_true', help='Silently override existing output images.') return parser
def get_recurring_bill_by_subscription(self, subscription_id): """ Query the invoices that are paid or pending payment. The query can be made by customer, by subscription, or by date range. Args: subscription_id: Returns: """ params = { "subscriptionId": subscription_id, } return self.client._get(self.url + 'recurringBill', params=params, headers=self.get_headers())
Query the invoices that are paid or pending payment. The query can be made by customer, by subscription, or by date range. Args: subscription_id: Returns:
Below is the instruction that describes the task: ### Input: Query the invoices that are paid or pending payment. The query can be made by customer, by subscription, or by date range. Args: subscription_id: Returns: ### Response: def get_recurring_bill_by_subscription(self, subscription_id): """ Query the invoices that are paid or pending payment. The query can be made by customer, by subscription, or by date range. Args: subscription_id: Returns: """ params = { "subscriptionId": subscription_id, } return self.client._get(self.url + 'recurringBill', params=params, headers=self.get_headers())
def _from_dict(cls, _dict): """Initialize an AlignedElement object from a json dictionary.""" args = {} if 'element_pair' in _dict: args['element_pair'] = [ ElementPair._from_dict(x) for x in (_dict.get('element_pair')) ] if 'identical_text' in _dict: args['identical_text'] = _dict.get('identical_text') if 'provenance_ids' in _dict: args['provenance_ids'] = _dict.get('provenance_ids') if 'significant_elements' in _dict: args['significant_elements'] = _dict.get('significant_elements') return cls(**args)
Initialize an AlignedElement object from a json dictionary.
Below is the instruction that describes the task: ### Input: Initialize an AlignedElement object from a json dictionary. ### Response: def _from_dict(cls, _dict): """Initialize an AlignedElement object from a json dictionary.""" args = {} if 'element_pair' in _dict: args['element_pair'] = [ ElementPair._from_dict(x) for x in (_dict.get('element_pair')) ] if 'identical_text' in _dict: args['identical_text'] = _dict.get('identical_text') if 'provenance_ids' in _dict: args['provenance_ids'] = _dict.get('provenance_ids') if 'significant_elements' in _dict: args['significant_elements'] = _dict.get('significant_elements') return cls(**args)
def grouped_mappings(self,id): """ return all mappings for a node, grouped by ID prefix """ g = self.get_xref_graph() m = {} for n in g.neighbors(id): [prefix, local] = n.split(':') if prefix not in m: m[prefix] = [] m[prefix].append(n) return m
return all mappings for a node, grouped by ID prefix
Below is the instruction that describes the task: ### Input: return all mappings for a node, grouped by ID prefix ### Response: def grouped_mappings(self,id): """ return all mappings for a node, grouped by ID prefix """ g = self.get_xref_graph() m = {} for n in g.neighbors(id): [prefix, local] = n.split(':') if prefix not in m: m[prefix] = [] m[prefix].append(n) return m
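The grouping step in `grouped_mappings` can be sketched standalone; a minimal version, assuming CURIE-style identifiers (`PREFIX:LOCAL`) and a hard-coded list standing in for the xref graph's neighbors:

```python
# Minimal sketch of the prefix-grouping logic in grouped_mappings; the
# neighbor list below is illustrative, replacing g.neighbors(id).
neighbors = ['MESH:D014867', 'DOID:4', 'MESH:D009369']

m = {}
for n in neighbors:
    prefix, _local = n.split(':')
    m.setdefault(prefix, []).append(n)  # group full CURIEs by their prefix

# m == {'MESH': ['MESH:D014867', 'MESH:D009369'], 'DOID': ['DOID:4']}
```

`dict.setdefault` replaces the explicit `if prefix not in m` membership check with the same effect.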
def safe_service(attr, default_value=None): '''A **method** decorator for creating safe services. Given an attribute name, this returns a decorator for creating safe services. Namely, if a service that is not yet available is requested (like a database connection), then ``safe_service`` will log any errors and set the given attribute to ``default_value``. :param str attr: attribute name :param object default_value: default value to set :rtype: decorator ''' def _(fun): @functools.wraps(fun) def run(self): try: return fun(self) except: logger.error(traceback.format_exc()) setattr(self, attr, default_value) return run return _
A **method** decorator for creating safe services. Given an attribute name, this returns a decorator for creating safe services. Namely, if a service that is not yet available is requested (like a database connection), then ``safe_service`` will log any errors and set the given attribute to ``default_value``. :param str attr: attribute name :param object default_value: default value to set :rtype: decorator
Below is the instruction that describes the task: ### Input: A **method** decorator for creating safe services. Given an attribute name, this returns a decorator for creating safe services. Namely, if a service that is not yet available is requested (like a database connection), then ``safe_service`` will log any errors and set the given attribute to ``default_value``. :param str attr: attribute name :param object default_value: default value to set :rtype: decorator ### Response: def safe_service(attr, default_value=None): '''A **method** decorator for creating safe services. Given an attribute name, this returns a decorator for creating safe services. Namely, if a service that is not yet available is requested (like a database connection), then ``safe_service`` will log any errors and set the given attribute to ``default_value``. :param str attr: attribute name :param object default_value: default value to set :rtype: decorator ''' def _(fun): @functools.wraps(fun) def run(self): try: return fun(self) except: logger.error(traceback.format_exc()) setattr(self, attr, default_value) return run return _
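A self-contained demonstration of the `safe_service` decorator; the `Store` class is a hypothetical consumer, and the decorator body is reproduced from the record (narrowing the bare `except:` to `except Exception:` for the sketch):

```python
import functools
import logging
import traceback

logger = logging.getLogger(__name__)

def safe_service(attr, default_value=None):
    # decorator body reproduced from the record above
    def _(fun):
        @functools.wraps(fun)
        def run(self):
            try:
                return fun(self)
            except Exception:
                logger.error(traceback.format_exc())
                setattr(self, attr, default_value)
        return run
    return _

class Store:  # hypothetical consumer of the decorator
    db = 'connected'

    @safe_service('db', default_value=None)
    def refresh(self):
        raise ConnectionError('database unavailable')

s = Store()
s.refresh()          # the exception is logged, not raised
assert s.db is None  # the attribute was reset to the default value
```

The failure is absorbed per instance: `setattr(self, ...)` shadows the class attribute, so `Store.db` itself is untouched.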
def serializeType(self, hdlType: HdlType) -> str: """ :see: doc of method on parent class """ def createTmpVar(suggestedName, dtype): raise NotImplementedError( "Cannot serialize hdl type %r into " "ipcore format" % (hdlType)) return VhdlSerializer.HdlType(hdlType, VhdlSerializer.getBaseContext())
:see: doc of method on parent class
Below is the instruction that describes the task: ### Input: :see: doc of method on parent class ### Response: def serializeType(self, hdlType: HdlType) -> str: """ :see: doc of method on parent class """ def createTmpVar(suggestedName, dtype): raise NotImplementedError( "Cannot serialize hdl type %r into " "ipcore format" % (hdlType)) return VhdlSerializer.HdlType(hdlType, VhdlSerializer.getBaseContext())
def transformToNative(self): """ Transform this object into a custom VBase subclass. transformToNative should always return a representation of this object. It may do so by modifying self in place then returning self, or by creating a new object. """ if self.isNative or not self.behavior or not self.behavior.hasNative: return self else: try: return self.behavior.transformToNative(self) except Exception as e: # wrap errors in transformation in a ParseError lineNumber = getattr(self, 'lineNumber', None) if isinstance(e, ParseError): if lineNumber is not None: e.lineNumber = lineNumber raise else: msg = "In transformToNative, unhandled exception on line %s: %s: %s" msg = msg % (lineNumber, sys.exc_info()[0], sys.exc_info()[1]) raise ParseError(msg, lineNumber)
Transform this object into a custom VBase subclass. transformToNative should always return a representation of this object. It may do so by modifying self in place then returning self, or by creating a new object.
Below is the instruction that describes the task: ### Input: Transform this object into a custom VBase subclass. transformToNative should always return a representation of this object. It may do so by modifying self in place then returning self, or by creating a new object. ### Response: def transformToNative(self): """ Transform this object into a custom VBase subclass. transformToNative should always return a representation of this object. It may do so by modifying self in place then returning self, or by creating a new object. """ if self.isNative or not self.behavior or not self.behavior.hasNative: return self else: try: return self.behavior.transformToNative(self) except Exception as e: # wrap errors in transformation in a ParseError lineNumber = getattr(self, 'lineNumber', None) if isinstance(e, ParseError): if lineNumber is not None: e.lineNumber = lineNumber raise else: msg = "In transformToNative, unhandled exception on line %s: %s: %s" msg = msg % (lineNumber, sys.exc_info()[0], sys.exc_info()[1]) raise ParseError(msg, lineNumber)
def postcmd(self, stop, line): ''' Exit cmd cleanly. ''' self.color_prompt() return Cmd.postcmd(self, stop, line)
Exit cmd cleanly.
Below is the instruction that describes the task: ### Input: Exit cmd cleanly. ### Response: def postcmd(self, stop, line): ''' Exit cmd cleanly. ''' self.color_prompt() return Cmd.postcmd(self, stop, line)
def get_param_list_for_prediction(model_obj, replicates): """ Create the `param_list` argument for use with `model_obj.predict`. Parameters ---------- model_obj : an instance of an MNDC object. Should have the following attributes: `['ind_var_names', 'intercept_names', 'shape_names', 'nest_names']`. This model should have already undergone a complete estimation process. I.e. its `fit_mle` method should have been called without `just_point=True`. replicates : 2D ndarray. Should represent the set of parameter values that we now wish to partition for use with the `model_obj.predict` method. Returns ------- param_list : list. Contains four elements, each being a numpy array. Either all of the arrays should be 1D or all of the arrays should be 2D. If 2D, the arrays should have the same number of columns. Each column being a particular set of parameter values that one wants to predict with. The first element in the list should be the index coefficients. The second element should contain the 'outside' intercept parameters if there are any, or None otherwise. The third element should contain the shape parameters if there are any or None otherwise. The fourth element should contain the nest coefficients if there are any or None otherwise. Default == None. """ # Check the validity of the passed arguments ensure_samples_is_ndim_ndarray(replicates, ndim=2, name='replicates') # Determine the number of index coefficients, outside intercepts, # shape parameters, and nest parameters num_idx_coefs = len(model_obj.ind_var_names) intercept_names = model_obj.intercept_names num_outside_intercepts =\ 0 if intercept_names is None else len(intercept_names) shape_names = model_obj.shape_names num_shapes = 0 if shape_names is None else len(shape_names) nest_names = model_obj.nest_names num_nests = 0 if nest_names is None else len(nest_names) parameter_numbers =\ [num_nests, num_shapes, num_outside_intercepts, num_idx_coefs] current_idx = 0 param_list = [] for param_num in parameter_numbers: if param_num == 0: param_list.insert(0, None) continue upper_idx = current_idx + param_num param_list.insert(0, replicates[:, current_idx:upper_idx].T) current_idx += param_num return param_list
Create the `param_list` argument for use with `model_obj.predict`. Parameters ---------- model_obj : an instance of an MNDC object. Should have the following attributes: `['ind_var_names', 'intercept_names', 'shape_names', 'nest_names']`. This model should have already undergone a complete estimation process. I.e. its `fit_mle` method should have been called without `just_point=True`. replicates : 2D ndarray. Should represent the set of parameter values that we now wish to partition for use with the `model_obj.predict` method. Returns ------- param_list : list. Contains four elements, each being a numpy array. Either all of the arrays should be 1D or all of the arrays should be 2D. If 2D, the arrays should have the same number of columns. Each column being a particular set of parameter values that one wants to predict with. The first element in the list should be the index coefficients. The second element should contain the 'outside' intercept parameters if there are any, or None otherwise. The third element should contain the shape parameters if there are any or None otherwise. The fourth element should contain the nest coefficients if there are any or None otherwise. Default == None.
Below is the instruction that describes the task: ### Input: Create the `param_list` argument for use with `model_obj.predict`. Parameters ---------- model_obj : an instance of an MNDC object. Should have the following attributes: `['ind_var_names', 'intercept_names', 'shape_names', 'nest_names']`. This model should have already undergone a complete estimation process. I.e. its `fit_mle` method should have been called without `just_point=True`. replicates : 2D ndarray. Should represent the set of parameter values that we now wish to partition for use with the `model_obj.predict` method. Returns ------- param_list : list. Contains four elements, each being a numpy array. Either all of the arrays should be 1D or all of the arrays should be 2D. If 2D, the arrays should have the same number of columns. Each column being a particular set of parameter values that one wants to predict with. The first element in the list should be the index coefficients. The second element should contain the 'outside' intercept parameters if there are any, or None otherwise. The third element should contain the shape parameters if there are any or None otherwise. The fourth element should contain the nest coefficients if there are any or None otherwise. Default == None. ### Response: def get_param_list_for_prediction(model_obj, replicates): """ Create the `param_list` argument for use with `model_obj.predict`. Parameters ---------- model_obj : an instance of an MNDC object. Should have the following attributes: `['ind_var_names', 'intercept_names', 'shape_names', 'nest_names']`. This model should have already undergone a complete estimation process. I.e. its `fit_mle` method should have been called without `just_point=True`. replicates : 2D ndarray. Should represent the set of parameter values that we now wish to partition for use with the `model_obj.predict` method. Returns ------- param_list : list. Contains four elements, each being a numpy array. Either all of the arrays should be 1D or all of the arrays should be 2D. If 2D, the arrays should have the same number of columns. Each column being a particular set of parameter values that one wants to predict with. The first element in the list should be the index coefficients. The second element should contain the 'outside' intercept parameters if there are any, or None otherwise. The third element should contain the shape parameters if there are any or None otherwise. The fourth element should contain the nest coefficients if there are any or None otherwise. Default == None. """ # Check the validity of the passed arguments ensure_samples_is_ndim_ndarray(replicates, ndim=2, name='replicates') # Determine the number of index coefficients, outside intercepts, # shape parameters, and nest parameters num_idx_coefs = len(model_obj.ind_var_names) intercept_names = model_obj.intercept_names num_outside_intercepts =\ 0 if intercept_names is None else len(intercept_names) shape_names = model_obj.shape_names num_shapes = 0 if shape_names is None else len(shape_names) nest_names = model_obj.nest_names num_nests = 0 if nest_names is None else len(nest_names) parameter_numbers =\ [num_nests, num_shapes, num_outside_intercepts, num_idx_coefs] current_idx = 0 param_list = [] for param_num in parameter_numbers: if param_num == 0: param_list.insert(0, None) continue upper_idx = current_idx + param_num param_list.insert(0, replicates[:, current_idx:upper_idx].T) current_idx += param_num return param_list
def add_edge(self, name1, name2, **attr): ''' API: add_edge(self, name1, name2, **attr) Description: Adds edge to the graph. Sets edge attributes using attr argument. Input: name1: Name of the source node (if directed). name2: Name of the sink node (if directed). attr: Edge attributes. Pre: Graph should not already contain this edge. We do not allow multiple edges with same source and sink nodes. Post: self.edge_attr is updated. self.neighbors, self.nodes and self.in_neighbors are updated if graph was missing at least one of the nodes. ''' if (name1, name2) in self.edge_attr: raise MultipleEdgeException if self.graph_type is UNDIRECTED_GRAPH and (name2,name1) in self.edge_attr: raise MultipleEdgeException self.edge_attr[(name1,name2)] = copy.deepcopy(DEFAULT_EDGE_ATTRIBUTES) for a in attr: self.edge_attr[(name1,name2)][a] = attr[a] if name1 not in self.nodes: self.add_node(name1) if name2 not in self.nodes: self.add_node(name2) self.neighbors[name1].append(name2) if self.graph_type is UNDIRECTED_GRAPH: self.neighbors[name2].append(name1) else: self.in_neighbors[name2].append(name1)
API: add_edge(self, name1, name2, **attr) Description: Adds edge to the graph. Sets edge attributes using attr argument. Input: name1: Name of the source node (if directed). name2: Name of the sink node (if directed). attr: Edge attributes. Pre: Graph should not already contain this edge. We do not allow multiple edges with same source and sink nodes. Post: self.edge_attr is updated. self.neighbors, self.nodes and self.in_neighbors are updated if graph was missing at least one of the nodes.
Below is the instruction that describes the task: ### Input: API: add_edge(self, name1, name2, **attr) Description: Adds edge to the graph. Sets edge attributes using attr argument. Input: name1: Name of the source node (if directed). name2: Name of the sink node (if directed). attr: Edge attributes. Pre: Graph should not already contain this edge. We do not allow multiple edges with same source and sink nodes. Post: self.edge_attr is updated. self.neighbors, self.nodes and self.in_neighbors are updated if graph was missing at least one of the nodes. ### Response: def add_edge(self, name1, name2, **attr): ''' API: add_edge(self, name1, name2, **attr) Description: Adds edge to the graph. Sets edge attributes using attr argument. Input: name1: Name of the source node (if directed). name2: Name of the sink node (if directed). attr: Edge attributes. Pre: Graph should not already contain this edge. We do not allow multiple edges with same source and sink nodes. Post: self.edge_attr is updated. self.neighbors, self.nodes and self.in_neighbors are updated if graph was missing at least one of the nodes. ''' if (name1, name2) in self.edge_attr: raise MultipleEdgeException if self.graph_type is UNDIRECTED_GRAPH and (name2,name1) in self.edge_attr: raise MultipleEdgeException self.edge_attr[(name1,name2)] = copy.deepcopy(DEFAULT_EDGE_ATTRIBUTES) for a in attr: self.edge_attr[(name1,name2)][a] = attr[a] if name1 not in self.nodes: self.add_node(name1) if name2 not in self.nodes: self.add_node(name2) self.neighbors[name1].append(name2) if self.graph_type is UNDIRECTED_GRAPH: self.neighbors[name2].append(name1) else: self.in_neighbors[name2].append(name1)
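The neighbor bookkeeping in `add_edge` can be sketched with plain dicts in place of the `Graph` class; this minimal version pre-creates the two nodes (the record's `add_node` handles missing ones) and shows only the undirected branch:

```python
# Standalone sketch of the edge bookkeeping in add_edge: reject duplicate
# edges (in either orientation, since the graph is undirected here), store
# the edge attributes, and append each endpoint to the other's adjacency list.
UNDIRECTED = True
neighbors = {'a': [], 'b': []}  # pre-created nodes
edge_attr = {}

def add_edge(n1, n2, **attr):
    if (n1, n2) in edge_attr or (UNDIRECTED and (n2, n1) in edge_attr):
        raise ValueError('multiple edges with same endpoints not allowed')
    edge_attr[(n1, n2)] = dict(attr)
    neighbors[n1].append(n2)
    if UNDIRECTED:
        neighbors[n2].append(n1)

add_edge('a', 'b', cost=3)
# neighbors == {'a': ['b'], 'b': ['a']}; edge_attr == {('a', 'b'): {'cost': 3}}
```

In the directed case the record instead appends `n1` to `in_neighbors[n2]`, keeping forward and reverse adjacency separate.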
def send_music(self, user_id, url, hq_url, thumb_media_id, title=None, description=None, account=None): """ Send a music message. For details see http://mp.weixin.qq.com/wiki/7/12a5a320ae96fecdf0e15cb06123de9f.html :param user_id: user ID; the source of the `Message` you received :param url: music URL :param hq_url: high-quality music URL, preferred when playing over WiFi :param thumb_media_id: media ID of the thumbnail image, which can be uploaded via :func:`upload_media` :param title: music title :param description: music description :param account: optional, customer service account :return: the returned JSON data packet """ music_data = { 'musicurl': url, 'hqmusicurl': hq_url, 'thumb_media_id': thumb_media_id } if title: music_data['title'] = title if description: music_data['description'] = description data = { 'touser': user_id, 'msgtype': 'music', 'music': music_data } return self._send_custom_message(data, account=account)
Send a music message. For details see http://mp.weixin.qq.com/wiki/7/12a5a320ae96fecdf0e15cb06123de9f.html :param user_id: user ID; the source of the `Message` you received :param url: music URL :param hq_url: high-quality music URL, preferred when playing over WiFi :param thumb_media_id: media ID of the thumbnail image, which can be uploaded via :func:`upload_media` :param title: music title :param description: music description :param account: optional, customer service account :return: the returned JSON data packet
Below is the instruction that describes the task: ### Input: Send a music message. For details see http://mp.weixin.qq.com/wiki/7/12a5a320ae96fecdf0e15cb06123de9f.html :param user_id: user ID; the source of the `Message` you received :param url: music URL :param hq_url: high-quality music URL, preferred when playing over WiFi :param thumb_media_id: media ID of the thumbnail image, which can be uploaded via :func:`upload_media` :param title: music title :param description: music description :param account: optional, customer service account :return: the returned JSON data packet ### Response: def send_music(self, user_id, url, hq_url, thumb_media_id, title=None, description=None, account=None): """ Send a music message. For details see http://mp.weixin.qq.com/wiki/7/12a5a320ae96fecdf0e15cb06123de9f.html :param user_id: user ID; the source of the `Message` you received :param url: music URL :param hq_url: high-quality music URL, preferred when playing over WiFi :param thumb_media_id: media ID of the thumbnail image, which can be uploaded via :func:`upload_media` :param title: music title :param description: music description :param account: optional, customer service account :return: the returned JSON data packet """ music_data = { 'musicurl': url, 'hqmusicurl': hq_url, 'thumb_media_id': thumb_media_id } if title: music_data['title'] = title if description: music_data['description'] = description data = { 'touser': user_id, 'msgtype': 'music', 'music': music_data } return self._send_custom_message(data, account=account)
def delete_simple_httpauth_read_api( request, database_name, collection_name, slug): """Delete Simple HTTP Auth Read API""" ss = get_object_or_404(HTTPAuthReadAPI, database_name=database_name, collection_name=collection_name, slug=slug) ss.delete() messages.success(request, _("Simple HTTP Auth Read API deleted.")) return HttpResponseRedirect( reverse('djmongo_show_apis', args=(ss.database_name, ss.collection_name)))
Delete Simple HTTP Auth Read API
Below is the instruction that describes the task: ### Input: Delete Simple HTTP Auth Read API ### Response: def delete_simple_httpauth_read_api( request, database_name, collection_name, slug): """Delete Simple HTTP Auth Read API""" ss = get_object_or_404(HTTPAuthReadAPI, database_name=database_name, collection_name=collection_name, slug=slug) ss.delete() messages.success(request, _("Simple HTTP Auth Read API deleted.")) return HttpResponseRedirect( reverse('djmongo_show_apis', args=(ss.database_name, ss.collection_name)))
def config_shortcut(action, context, name, parent): """ Create a Shortcut namedtuple for a widget The data contained in this tuple will be registered in our shortcuts preferences page """ keystr = get_shortcut(context, name) qsc = QShortcut(QKeySequence(keystr), parent, action) qsc.setContext(Qt.WidgetWithChildrenShortcut) sc = Shortcut(data=(qsc, context, name)) return sc
Create a Shortcut namedtuple for a widget The data contained in this tuple will be registered in our shortcuts preferences page
Below is the instruction that describes the task: ### Input: Create a Shortcut namedtuple for a widget The data contained in this tuple will be registered in our shortcuts preferences page ### Response: def config_shortcut(action, context, name, parent): """ Create a Shortcut namedtuple for a widget The data contained in this tuple will be registered in our shortcuts preferences page """ keystr = get_shortcut(context, name) qsc = QShortcut(QKeySequence(keystr), parent, action) qsc.setContext(Qt.WidgetWithChildrenShortcut) sc = Shortcut(data=(qsc, context, name)) return sc
def disable_all_breakpoints(self): """ Disables all breakpoints in all processes. @see: disable_code_breakpoint, disable_page_breakpoint, disable_hardware_breakpoint """ # disable code breakpoints for (pid, bp) in self.get_all_code_breakpoints(): self.disable_code_breakpoint(pid, bp.get_address()) # disable page breakpoints for (pid, bp) in self.get_all_page_breakpoints(): self.disable_page_breakpoint(pid, bp.get_address()) # disable hardware breakpoints for (tid, bp) in self.get_all_hardware_breakpoints(): self.disable_hardware_breakpoint(tid, bp.get_address())
Disables all breakpoints in all processes. @see: disable_code_breakpoint, disable_page_breakpoint, disable_hardware_breakpoint
Below is the instruction that describes the task: ### Input: Disables all breakpoints in all processes. @see: disable_code_breakpoint, disable_page_breakpoint, disable_hardware_breakpoint ### Response: def disable_all_breakpoints(self): """ Disables all breakpoints in all processes. @see: disable_code_breakpoint, disable_page_breakpoint, disable_hardware_breakpoint """ # disable code breakpoints for (pid, bp) in self.get_all_code_breakpoints(): self.disable_code_breakpoint(pid, bp.get_address()) # disable page breakpoints for (pid, bp) in self.get_all_page_breakpoints(): self.disable_page_breakpoint(pid, bp.get_address()) # disable hardware breakpoints for (tid, bp) in self.get_all_hardware_breakpoints(): self.disable_hardware_breakpoint(tid, bp.get_address())
def insert_system_path(opts, paths): ''' Inserts path into python path taking into consideration 'root_dir' option. ''' if isinstance(paths, six.string_types): paths = [paths] for path in paths: path_options = {'path': path, 'root_dir': opts['root_dir']} prepend_root_dir(path_options, path_options) if (os.path.isdir(path_options['path']) and path_options['path'] not in sys.path): sys.path.insert(0, path_options['path'])
Inserts path into python path taking into consideration 'root_dir' option.
Below is the instruction that describes the task: ### Input: Inserts path into python path taking into consideration 'root_dir' option. ### Response: def insert_system_path(opts, paths): ''' Inserts path into python path taking into consideration 'root_dir' option. ''' if isinstance(paths, six.string_types): paths = [paths] for path in paths: path_options = {'path': path, 'root_dir': opts['root_dir']} prepend_root_dir(path_options, path_options) if (os.path.isdir(path_options['path']) and path_options['path'] not in sys.path): sys.path.insert(0, path_options['path'])
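The guarded `sys.path` insertion used by `insert_system_path` can be demonstrated standalone; a throwaway temp directory stands in for the `root_dir`-resolved path, and the sketch undoes its own side effects:

```python
import os
import sys
import tempfile

# Sketch of the pattern: prepend a directory to sys.path only if it exists
# on disk and is not already present, so repeated calls stay idempotent.
tmp = tempfile.mkdtemp()
if os.path.isdir(tmp) and tmp not in sys.path:
    sys.path.insert(0, tmp)

assert sys.path[0] == tmp  # the directory now has highest import priority

sys.path.remove(tmp)  # undo the demo's side effect
os.rmdir(tmp)
```

Inserting at index 0 means modules in that directory shadow same-named modules found later on the path, which is exactly why the duplicate check matters.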
def khatri_rao(matrices):
    """Khatri-Rao product of a list of matrices.

    Parameters
    ----------
    matrices : list of ndarray

    Returns
    -------
    khatri_rao_product: matrix of shape ``(prod(n_i), m)``
        where ``prod(n_i) = prod([m.shape[0] for m in matrices])``
        i.e. the product of the number of rows of all the
        matrices in the product.

    Author
    ------
    Jean Kossaifi <https://github.com/tensorly>
    """
    n_columns = matrices[0].shape[1]
    n_factors = len(matrices)

    start = ord('a')
    common_dim = 'z'
    target = ''.join(chr(start + i) for i in range(n_factors))
    source = ','.join(i+common_dim for i in target)
    operation = source+'->'+target+common_dim
    return np.einsum(operation, *matrices).reshape((-1, n_columns))

Khatri-Rao product of a list of matrices.

Parameters
----------
matrices : list of ndarray

Returns
-------
khatri_rao_product: matrix of shape ``(prod(n_i), m)``
    where ``prod(n_i) = prod([m.shape[0] for m in matrices])``
    i.e. the product of the number of rows of all the
    matrices in the product.

Author
------
Jean Kossaifi <https://github.com/tensorly>

Below is the instruction that describes the task:
### Input:
Khatri-Rao product of a list of matrices.

Parameters
----------
matrices : list of ndarray

Returns
-------
khatri_rao_product: matrix of shape ``(prod(n_i), m)``
    where ``prod(n_i) = prod([m.shape[0] for m in matrices])``
    i.e. the product of the number of rows of all the
    matrices in the product.

Author
------
Jean Kossaifi <https://github.com/tensorly>
### Response:
def khatri_rao(matrices):
    """Khatri-Rao product of a list of matrices.

    Parameters
    ----------
    matrices : list of ndarray

    Returns
    -------
    khatri_rao_product: matrix of shape ``(prod(n_i), m)``
        where ``prod(n_i) = prod([m.shape[0] for m in matrices])``
        i.e. the product of the number of rows of all the
        matrices in the product.

    Author
    ------
    Jean Kossaifi <https://github.com/tensorly>
    """
    n_columns = matrices[0].shape[1]
    n_factors = len(matrices)

    start = ord('a')
    common_dim = 'z'
    target = ''.join(chr(start + i) for i in range(n_factors))
    source = ','.join(i+common_dim for i in target)
    operation = source+'->'+target+common_dim
    return np.einsum(operation, *matrices).reshape((-1, n_columns))
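An illustrative, self-contained check (not part of the original record) of the Khatri-Rao product above: each column of the result should equal the Kronecker product of the corresponding columns of the inputs, which gives an independent way to verify the ``einsum`` construction. The matrices here are arbitrary small examples.

```python
import numpy as np

def khatri_rao(matrices):
    # Column-wise Kronecker product via einsum, as in the record above.
    n_columns = matrices[0].shape[1]
    n_factors = len(matrices)
    start = ord('a')
    common_dim = 'z'
    target = ''.join(chr(start + i) for i in range(n_factors))
    source = ','.join(i + common_dim for i in target)
    operation = source + '->' + target + common_dim
    return np.einsum(operation, *matrices).reshape((-1, n_columns))

A = np.array([[1, 2], [3, 4]])           # shape (2, 2)
B = np.array([[5, 6], [7, 8], [9, 10]])  # shape (3, 2)
KR = khatri_rao([A, B])                  # shape (2*3, 2)

# Column j of KR is kron(A[:, j], B[:, j]).
expected = np.column_stack([np.kron(A[:, j], B[:, j]) for j in range(2)])
assert KR.shape == (6, 2)
assert np.array_equal(KR, expected)
```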
def remnant_mass(eta, ns_g_mass, ns_sequence, chi, incl, shift):
    """
    Function that determines the remnant disk mass of an NS-BH
    system using the fit to numerical-relativity results discussed
    in Foucart PRD 86, 124007 (2012).

    Parameters
    -----------
    eta: float
        the symmetric mass ratio of the binary
    ns_g_mass: float
        NS gravitational mass (in solar masses)
    ns_sequence: 3D-array
        contains the sequence data in the form NS gravitational
        mass (in solar masses), NS baryonic mass (in solar masses),
        NS compactness (dimensionless)
    chi: float
        the BH dimensionless spin parameter
    incl: float
        the inclination angle between the BH spin and the orbital
        angular momentum in radians
    shift: float
        an amount to be subtracted to the remnant mass predicted
        by the model (in solar masses)

    Returns
    ----------
    remnant_mass: float
        The remnant mass in solar masses
    """
    # Sanity checks
    if not (eta>0. and eta<=0.25 and abs(chi)<=1):
        print('The BH spin magnitude must be <=1 and eta must be between 0 and 0.25')
        print('This script was launched with ns_mass={0}, eta={1}, chi={2}, inclination={3}\n'.format(ns_g_mass, eta, chi, incl))
        raise Exception('Unphysical parameters!')

    # Binary mass ratio define to be > 1
    q = (1+math.sqrt(1-4*eta)-2*eta)/eta*0.5

    # NS compactness and rest mass
    ns_compactness = ns_g_mass_to_ns_compactness(ns_g_mass, ns_sequence)
    ns_b_mass = ns_g_mass_to_ns_b_mass(ns_g_mass, ns_sequence)

    # Sanity checks
    if not (ns_compactness>0 and q>=1):
        print('A positive NS compactness and a mass ratio that is >1 must be obtained.')
        print('This script was launched with ns_mass={0}, eta={1}, chi={2}, inclination={3}'.format(ns_b_mass, eta, chi, incl))
        print('and obtained ns_compactness={0} and q={1}.'.format(ns_compactness, q))
        print('SOMETHING WENT WRONG!!\n')
        raise Exception('Unphysical parameters!')

    # Calculate the dimensionless parameter kappa
    kappa = q*ns_compactness

    # Effective equatorial spin parameter needed to determine the torus mass
    chi_eff = bh_effective_spin(chi, incl)

    # Sanity checks
    if not abs(chi_eff)<=1:
        print('The effective BH spin magnitude must be <=1')
        print('This script was launched with ns_mass={0}, eta={1}, chi={2}, inclination={3}'.format(ns_b_mass, eta, chi, incl))
        print('and obtained chi_eff={0}.'.format(chi_eff))
        print('SOMETHING WENT WRONG!!\n')
        raise Exception('Unphysical parameters!')

    # Taking the 1st element with full_output=1 avoids some annoying messages on stdout
    xi = scipy.optimize.fsolve(xi_eq, 100., args=(kappa, chi_eff, q), full_output=1)[0]

    # Fit parameters and tidal correction
    alpha = 0.296  # +/- 0.011
    beta = 0.171   # +/- 0.008

    # The remnant mass over the NS rest mass
    remnant_mass = alpha*xi*(1-2*ns_compactness)-beta*kappa*PG_ISSO_solver(chi_eff, 0)

    # The remnant mass in the same units as the NS rest mass (presumably solar masses)
    remnant_mass = remnant_mass*ns_b_mass - shift

    return remnant_mass
Function that determines the remnant disk mass of an NS-BH
system using the fit to numerical-relativity results discussed
in Foucart PRD 86, 124007 (2012).

Parameters
-----------
eta: float
    the symmetric mass ratio of the binary
ns_g_mass: float
    NS gravitational mass (in solar masses)
ns_sequence: 3D-array
    contains the sequence data in the form NS gravitational
    mass (in solar masses), NS baryonic mass (in solar masses),
    NS compactness (dimensionless)
chi: float
    the BH dimensionless spin parameter
incl: float
    the inclination angle between the BH spin and the orbital
    angular momentum in radians
shift: float
    an amount to be subtracted to the remnant mass predicted
    by the model (in solar masses)

Returns
----------
remnant_mass: float
    The remnant mass in solar masses
Below is the instruction that describes the task:
### Input:
Function that determines the remnant disk mass of an NS-BH
system using the fit to numerical-relativity results discussed
in Foucart PRD 86, 124007 (2012).

Parameters
-----------
eta: float
    the symmetric mass ratio of the binary
ns_g_mass: float
    NS gravitational mass (in solar masses)
ns_sequence: 3D-array
    contains the sequence data in the form NS gravitational
    mass (in solar masses), NS baryonic mass (in solar masses),
    NS compactness (dimensionless)
chi: float
    the BH dimensionless spin parameter
incl: float
    the inclination angle between the BH spin and the orbital
    angular momentum in radians
shift: float
    an amount to be subtracted to the remnant mass predicted
    by the model (in solar masses)

Returns
----------
remnant_mass: float
    The remnant mass in solar masses
### Response:
def remnant_mass(eta, ns_g_mass, ns_sequence, chi, incl, shift):
    """
    Function that determines the remnant disk mass of an NS-BH
    system using the fit to numerical-relativity results discussed
    in Foucart PRD 86, 124007 (2012).

    Parameters
    -----------
    eta: float
        the symmetric mass ratio of the binary
    ns_g_mass: float
        NS gravitational mass (in solar masses)
    ns_sequence: 3D-array
        contains the sequence data in the form NS gravitational
        mass (in solar masses), NS baryonic mass (in solar masses),
        NS compactness (dimensionless)
    chi: float
        the BH dimensionless spin parameter
    incl: float
        the inclination angle between the BH spin and the orbital
        angular momentum in radians
    shift: float
        an amount to be subtracted to the remnant mass predicted
        by the model (in solar masses)

    Returns
    ----------
    remnant_mass: float
        The remnant mass in solar masses
    """
    # Sanity checks
    if not (eta>0. and eta<=0.25 and abs(chi)<=1):
        print('The BH spin magnitude must be <=1 and eta must be between 0 and 0.25')
        print('This script was launched with ns_mass={0}, eta={1}, chi={2}, inclination={3}\n'.format(ns_g_mass, eta, chi, incl))
        raise Exception('Unphysical parameters!')

    # Binary mass ratio define to be > 1
    q = (1+math.sqrt(1-4*eta)-2*eta)/eta*0.5

    # NS compactness and rest mass
    ns_compactness = ns_g_mass_to_ns_compactness(ns_g_mass, ns_sequence)
    ns_b_mass = ns_g_mass_to_ns_b_mass(ns_g_mass, ns_sequence)

    # Sanity checks
    if not (ns_compactness>0 and q>=1):
        print('A positive NS compactness and a mass ratio that is >1 must be obtained.')
        print('This script was launched with ns_mass={0}, eta={1}, chi={2}, inclination={3}'.format(ns_b_mass, eta, chi, incl))
        print('and obtained ns_compactness={0} and q={1}.'.format(ns_compactness, q))
        print('SOMETHING WENT WRONG!!\n')
        raise Exception('Unphysical parameters!')

    # Calculate the dimensionless parameter kappa
    kappa = q*ns_compactness

    # Effective equatorial spin parameter needed to determine the torus mass
    chi_eff = bh_effective_spin(chi, incl)

    # Sanity checks
    if not abs(chi_eff)<=1:
        print('The effective BH spin magnitude must be <=1')
        print('This script was launched with ns_mass={0}, eta={1}, chi={2}, inclination={3}'.format(ns_b_mass, eta, chi, incl))
        print('and obtained chi_eff={0}.'.format(chi_eff))
        print('SOMETHING WENT WRONG!!\n')
        raise Exception('Unphysical parameters!')

    # Taking the 1st element with full_output=1 avoids some annoying messages on stdout
    xi = scipy.optimize.fsolve(xi_eq, 100., args=(kappa, chi_eff, q), full_output=1)[0]

    # Fit parameters and tidal correction
    alpha = 0.296  # +/- 0.011
    beta = 0.171   # +/- 0.008

    # The remnant mass over the NS rest mass
    remnant_mass = alpha*xi*(1-2*ns_compactness)-beta*kappa*PG_ISSO_solver(chi_eff, 0)

    # The remnant mass in the same units as the NS rest mass (presumably solar masses)
    remnant_mass = remnant_mass*ns_b_mass - shift

    return remnant_mass
def get_qset(self, queryset, q):
    """Performs filtering against the default queryset returned by mongoengine.
    """
    if self.mongoadmin.search_fields and q:
        params = {}
        for field in self.mongoadmin.search_fields:
            if field == 'id':
                # check to make sure this is a valid ID, otherwise we just continue
                if is_valid_object_id(q):
                    return queryset.filter(pk=q)
                continue
            search_key = "{field}__icontains".format(field=field)
            params[search_key] = q
        queryset = queryset.filter(**params)
    return queryset

Performs filtering against the default queryset returned by mongoengine.

Below is the instruction that describes the task:
### Input:
Performs filtering against the default queryset returned by mongoengine.
### Response:
def get_qset(self, queryset, q):
    """Performs filtering against the default queryset returned by mongoengine.
    """
    if self.mongoadmin.search_fields and q:
        params = {}
        for field in self.mongoadmin.search_fields:
            if field == 'id':
                # check to make sure this is a valid ID, otherwise we just continue
                if is_valid_object_id(q):
                    return queryset.filter(pk=q)
                continue
            search_key = "{field}__icontains".format(field=field)
            params[search_key] = q
        queryset = queryset.filter(**params)
    return queryset
def execute_api_request(self):
    """
    Execute the request and return json data as a dict

    :return: data dict
    """
    if not self.auth.check_auth():
        raise Exception('Authentification needed or API not available with your type of connection')
    if self.auth.is_authentified():
        id_cookie = {BboxConstant.COOKIE_BBOX_ID: self.auth.get_cookie_id()}
        if self.parameters is None:
            resp = self.call_method(self.api_url.get_url(), cookies=id_cookie)
        else:
            resp = self.call_method(self.api_url.get_url(), data=self.parameters, cookies=id_cookie)
    else:
        if self.parameters is None:
            resp = self.call_method(self.api_url.get_url())
        else:
            resp = self.call_method(self.api_url.get_url(), data=self.parameters)
    if resp.status_code != 200:
        # This means something went wrong.
        raise Exception('Error {} with request {}'.format(
            resp.status_code, self.api_url.get_url()))
    return resp

Execute the request and return json data as a dict

:return: data dict

Below is the instruction that describes the task:
### Input:
Execute the request and return json data as a dict

:return: data dict
### Response:
def execute_api_request(self):
    """
    Execute the request and return json data as a dict

    :return: data dict
    """
    if not self.auth.check_auth():
        raise Exception('Authentification needed or API not available with your type of connection')
    if self.auth.is_authentified():
        id_cookie = {BboxConstant.COOKIE_BBOX_ID: self.auth.get_cookie_id()}
        if self.parameters is None:
            resp = self.call_method(self.api_url.get_url(), cookies=id_cookie)
        else:
            resp = self.call_method(self.api_url.get_url(), data=self.parameters, cookies=id_cookie)
    else:
        if self.parameters is None:
            resp = self.call_method(self.api_url.get_url())
        else:
            resp = self.call_method(self.api_url.get_url(), data=self.parameters)
    if resp.status_code != 200:
        # This means something went wrong.
        raise Exception('Error {} with request {}'.format(
            resp.status_code, self.api_url.get_url()))
    return resp
def save_token(self):
    """
    Saves the token dict in the store

    :return bool: Success / Failure
    """
    if self.token is None:
        raise ValueError('You have to set the "token" first.')

    try:
        # set token will overwrite previous data
        self.doc_ref.set({
            self.field_name: self.serializer.dumps(self.token)
        })
    except Exception as e:
        log.error('Token could not be saved: {}'.format(str(e)))
        return False

    return True

Saves the token dict in the store

:return bool: Success / Failure

Below is the instruction that describes the task:
### Input:
Saves the token dict in the store

:return bool: Success / Failure
### Response:
def save_token(self):
    """
    Saves the token dict in the store

    :return bool: Success / Failure
    """
    if self.token is None:
        raise ValueError('You have to set the "token" first.')

    try:
        # set token will overwrite previous data
        self.doc_ref.set({
            self.field_name: self.serializer.dumps(self.token)
        })
    except Exception as e:
        log.error('Token could not be saved: {}'.format(str(e)))
        return False

    return True
def pathconf(path, os_name=os.name, isdir_fnc=os.path.isdir,
             pathconf_fnc=getattr(os, 'pathconf', None),
             pathconf_names=getattr(os, 'pathconf_names', ())):
    '''
    Get all pathconf variables for given path.

    :param path: absolute fs path
    :type path: str
    :returns: dictionary containing pathconf keys and their values (both str)
    :rtype: dict
    '''
    if pathconf_fnc and pathconf_names:
        return {key: pathconf_fnc(path, key) for key in pathconf_names}
    if os_name == 'nt':
        maxpath = 246 if isdir_fnc(path) else 259  # 260 minus <END>
    else:
        maxpath = 255  # conservative sane default
    return {
        'PC_PATH_MAX': maxpath,
        'PC_NAME_MAX': maxpath - len(path),
    }

Get all pathconf variables for given path.

:param path: absolute fs path
:type path: str
:returns: dictionary containing pathconf keys and their values (both str)
:rtype: dict

Below is the instruction that describes the task:
### Input:
Get all pathconf variables for given path.

:param path: absolute fs path
:type path: str
:returns: dictionary containing pathconf keys and their values (both str)
:rtype: dict
### Response:
def pathconf(path, os_name=os.name, isdir_fnc=os.path.isdir,
             pathconf_fnc=getattr(os, 'pathconf', None),
             pathconf_names=getattr(os, 'pathconf_names', ())):
    '''
    Get all pathconf variables for given path.

    :param path: absolute fs path
    :type path: str
    :returns: dictionary containing pathconf keys and their values (both str)
    :rtype: dict
    '''
    if pathconf_fnc and pathconf_names:
        return {key: pathconf_fnc(path, key) for key in pathconf_names}
    if os_name == 'nt':
        maxpath = 246 if isdir_fnc(path) else 259  # 260 minus <END>
    else:
        maxpath = 255  # conservative sane default
    return {
        'PC_PATH_MAX': maxpath,
        'PC_NAME_MAX': maxpath - len(path),
    }
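An illustrative, self-contained usage sketch (not part of the original record) for the ``pathconf`` helper above. Passing ``pathconf_fnc=None`` forces the static fallback branch, which makes the result deterministic on any platform; the path used here is arbitrary.

```python
import os

def pathconf(path, os_name=os.name, isdir_fnc=os.path.isdir,
             pathconf_fnc=getattr(os, 'pathconf', None),
             pathconf_names=getattr(os, 'pathconf_names', ())):
    # Same logic as the record above: prefer os.pathconf when available,
    # otherwise fall back to conservative static limits.
    if pathconf_fnc and pathconf_names:
        return {key: pathconf_fnc(path, key) for key in pathconf_names}
    if os_name == 'nt':
        maxpath = 246 if isdir_fnc(path) else 259  # 260 minus <END>
    else:
        maxpath = 255  # conservative sane default
    return {
        'PC_PATH_MAX': maxpath,
        'PC_NAME_MAX': maxpath - len(path),
    }

# Force the fallback branch to see the static limits regardless of platform.
limits = pathconf('/some/dir', os_name='posix', pathconf_fnc=None)
assert limits == {'PC_PATH_MAX': 255, 'PC_NAME_MAX': 255 - len('/some/dir')}
```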
def closed(self, error=None):
    """
    Notify the application that the connection has been closed.

    :param error: The exception which has caused the connection
                  to be closed.  If the connection has been
                  closed due to an EOF, pass ``None``.
    """
    if self._application:
        try:
            self._application.closed(error)
        except Exception:
            # Ignore exceptions from the notification
            pass

Notify the application that the connection has been closed.

:param error: The exception which has caused the connection
              to be closed.  If the connection has been
              closed due to an EOF, pass ``None``.

Below is the instruction that describes the task:
### Input:
Notify the application that the connection has been closed.

:param error: The exception which has caused the connection
              to be closed.  If the connection has been
              closed due to an EOF, pass ``None``.
### Response:
def closed(self, error=None):
    """
    Notify the application that the connection has been closed.

    :param error: The exception which has caused the connection
                  to be closed.  If the connection has been
                  closed due to an EOF, pass ``None``.
    """
    if self._application:
        try:
            self._application.closed(error)
        except Exception:
            # Ignore exceptions from the notification
            pass
def is_configured(self, project, **kwargs):
    """
    Check if plugin is configured.
    """
    params = self.get_option
    return bool(params('server_host', project) and params('server_port', project))

Check if plugin is configured.

Below is the instruction that describes the task:
### Input:
Check if plugin is configured.
### Response:
def is_configured(self, project, **kwargs):
    """
    Check if plugin is configured.
    """
    params = self.get_option
    return bool(params('server_host', project) and params('server_port', project))
def register(self, name, func):
    """
    Register a new callback.\
    When the name/id is not found\
    a new hook is created under its name,\
    meaning the hook is usually created by\
    the first registered callback

    :param str name: Hook name
    :param callable func: A func reference (callback)
    """
    try:
        templatehook = self._registry[name]
    except KeyError:
        templatehook = self._register(name)

    templatehook.register(func)

Register a new callback.\
When the name/id is not found\
a new hook is created under its name,\
meaning the hook is usually created by\
the first registered callback

:param str name: Hook name
:param callable func: A func reference (callback)

Below is the instruction that describes the task:
### Input:
Register a new callback.\
When the name/id is not found\
a new hook is created under its name,\
meaning the hook is usually created by\
the first registered callback

:param str name: Hook name
:param callable func: A func reference (callback)
### Response:
def register(self, name, func):
    """
    Register a new callback.\
    When the name/id is not found\
    a new hook is created under its name,\
    meaning the hook is usually created by\
    the first registered callback

    :param str name: Hook name
    :param callable func: A func reference (callback)
    """
    try:
        templatehook = self._registry[name]
    except KeyError:
        templatehook = self._register(name)

    templatehook.register(func)
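A minimal, self-contained sketch (not part of the original record) of the lazy-creation pattern used by the ``register`` method above: the hook object is created on first registration under a name, and later registrations reuse it. The ``TemplateHook`` and ``HookRegistry`` classes here are hypothetical stand-ins for the surrounding code the record assumes.

```python
class TemplateHook:
    """Hypothetical stand-in for the hook object the registry manages."""
    def __init__(self):
        self.callbacks = []

    def register(self, func):
        self.callbacks.append(func)


class HookRegistry:
    def __init__(self):
        self._registry = {}

    def _register(self, name):
        # First registration under this name creates the hook lazily.
        hook = self._registry[name] = TemplateHook()
        return hook

    def register(self, name, func):
        # Same try/except lazy-creation pattern as the record above.
        try:
            templatehook = self._registry[name]
        except KeyError:
            templatehook = self._register(name)
        templatehook.register(func)


registry = HookRegistry()
registry.register('footer', lambda: 'first')
registry.register('footer', lambda: 'second')   # reuses the existing hook
assert len(registry._registry['footer'].callbacks) == 2
```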
def stringify_summary(summary):
    """ stringify summary, in order to dump json file and generate html report. """
    for index, suite_summary in enumerate(summary["details"]):
        if not suite_summary.get("name"):
            suite_summary["name"] = "testcase {}".format(index)

        for record in suite_summary.get("records"):
            meta_datas = record['meta_datas']
            __stringify_meta_datas(meta_datas)
            meta_datas_expanded = []
            __expand_meta_datas(meta_datas, meta_datas_expanded)
            record["meta_datas_expanded"] = meta_datas_expanded
            record["response_time"] = __get_total_response_time(meta_datas_expanded)

stringify summary, in order to dump json file and generate html report.

Below is the instruction that describes the task:
### Input:
stringify summary, in order to dump json file and generate html report.
### Response:
def stringify_summary(summary):
    """ stringify summary, in order to dump json file and generate html report. """
    for index, suite_summary in enumerate(summary["details"]):
        if not suite_summary.get("name"):
            suite_summary["name"] = "testcase {}".format(index)

        for record in suite_summary.get("records"):
            meta_datas = record['meta_datas']
            __stringify_meta_datas(meta_datas)
            meta_datas_expanded = []
            __expand_meta_datas(meta_datas, meta_datas_expanded)
            record["meta_datas_expanded"] = meta_datas_expanded
            record["response_time"] = __get_total_response_time(meta_datas_expanded)
def gdalwarp(src, dst, options):
    """
    a simple wrapper for :osgeo:func:`gdal.Warp`

    Parameters
    ----------
    src: str, :osgeo:class:`ogr.DataSource` or :osgeo:class:`gdal.Dataset`
        the input data set
    dst: str
        the output data set
    options: dict
        additional parameters passed to gdal.Warp;
        see :osgeo:func:`gdal.WarpOptions`

    Returns
    -------
    """
    try:
        out = gdal.Warp(dst, src, options=gdal.WarpOptions(**options))
    except RuntimeError as e:
        raise RuntimeError('{}:\n  src: {}\n  dst: {}\n  options: {}'.format(str(e), src, dst, options))
    out = None

a simple wrapper for :osgeo:func:`gdal.Warp`

Parameters
----------
src: str, :osgeo:class:`ogr.DataSource` or :osgeo:class:`gdal.Dataset`
    the input data set
dst: str
    the output data set
options: dict
    additional parameters passed to gdal.Warp;
    see :osgeo:func:`gdal.WarpOptions`

Returns
-------

Below is the instruction that describes the task:
### Input:
a simple wrapper for :osgeo:func:`gdal.Warp`

Parameters
----------
src: str, :osgeo:class:`ogr.DataSource` or :osgeo:class:`gdal.Dataset`
    the input data set
dst: str
    the output data set
options: dict
    additional parameters passed to gdal.Warp;
    see :osgeo:func:`gdal.WarpOptions`

Returns
-------
### Response:
def gdalwarp(src, dst, options):
    """
    a simple wrapper for :osgeo:func:`gdal.Warp`

    Parameters
    ----------
    src: str, :osgeo:class:`ogr.DataSource` or :osgeo:class:`gdal.Dataset`
        the input data set
    dst: str
        the output data set
    options: dict
        additional parameters passed to gdal.Warp;
        see :osgeo:func:`gdal.WarpOptions`

    Returns
    -------
    """
    try:
        out = gdal.Warp(dst, src, options=gdal.WarpOptions(**options))
    except RuntimeError as e:
        raise RuntimeError('{}:\n  src: {}\n  dst: {}\n  options: {}'.format(str(e), src, dst, options))
    out = None
def present(name, uid=None, gid=None, usergroup=None, groups=None, optional_groups=None, remove_groups=True, home=None, createhome=True, password=None, hash_password=False, enforce_password=True, empty_password=False, shell=None, unique=True, system=False, fullname=None, roomnumber=None, workphone=None, homephone=None, other=None, loginclass=None, date=None, mindays=None, maxdays=None, inactdays=None, warndays=None, expire=None, win_homedrive=None, win_profile=None, win_logonscript=None, win_description=None, nologinit=False, allow_uid_change=False, allow_gid_change=False): ''' Ensure that the named user is present with the specified properties name The name of the user to manage uid The user id to assign. If not specified, and the user does not exist, then the next available uid will be assigned. gid The id of the default group to assign to the user. Either a group name or gid can be used. If not specified, and the user does not exist, then the next available gid will be assigned. allow_uid_change : False Set to ``True`` to allow the state to update the uid. .. versionadded:: 2018.3.1 allow_gid_change : False Set to ``True`` to allow the state to update the gid. .. versionadded:: 2018.3.1 usergroup If True, a group with the same name as the user will be created. If False, a group with the same name as the user will not be created. The default is distribution-specific. See the USERGROUPS_ENAB section of the login.defs(5) man page. .. note:: Only supported on GNU/Linux distributions .. versionadded:: Fluorine groups A list of groups to assign the user to, pass a list object. If a group specified here does not exist on the minion, the state will fail. If set to the empty list, the user will be removed from all groups except the default group. If unset, salt will assume current groups are still wanted (see issue #28706). optional_groups A list of groups to assign the user to, pass a list object. 
If a group specified here does not exist on the minion, the state will silently ignore it. NOTE: If the same group is specified in both "groups" and "optional_groups", then it will be assumed to be required and not optional. remove_groups Remove groups that the user is a member of that weren't specified in the state, Default is ``True``. home The custom login directory of user. Uses default value of underlying system if not set. Notice that this directory does not have to exist. This also the location of the home directory to create if createhome is set to True. createhome : True If set to ``False``, the home directory will not be created if it doesn't already exist. .. warning:: Not supported on Windows or Mac OS. Additionally, parent directories will *not* be created. The parent directory for ``home`` must already exist. nologinit : False If set to ``True``, it will not add the user to lastlog and faillog databases. .. note:: Not supported on Windows or Mac OS. password A password hash to set for the user. This field is only supported on Linux, FreeBSD, NetBSD, OpenBSD, and Solaris. If the ``empty_password`` argument is set to ``True`` then ``password`` is ignored. For Windows this is the plain text password. For Linux, the hash can be generated with ``mkpasswd -m sha-256``. .. versionchanged:: 0.16.0 BSD support added. hash_password Set to True to hash the clear text password. Default is ``False``. enforce_password Set to False to keep the password from being changed if it has already been set and the password hash differs from what is specified in the "password" field. This option will be ignored if "password" is not specified, Default is ``True``. empty_password Set to True to enable password-less login for user, Default is ``False``. shell The login shell, defaults to the system default shell unique Require a unique UID, Default is ``True``. system Choose UID in the range of FIRST_SYSTEM_UID and LAST_SYSTEM_UID, Default is ``False``. 
loginclass The login class, defaults to empty (BSD only) User comment field (GECOS) support (currently Linux, BSD, and MacOS only): The below values should be specified as strings to avoid ambiguities when the values are loaded. (Especially the phone and room number fields which are likely to contain numeric data) fullname The user's full name roomnumber The user's room number (not supported in MacOS) workphone The user's work phone number (not supported in MacOS) homephone The user's home phone number (not supported in MacOS) other The user's other attribute (not supported in MacOS) If GECOS field contains more than 4 commas, this field will have the rest of 'em .. versionchanged:: 2014.7.0 Shadow attribute support added. Shadow attributes support (currently Linux only): The below values should be specified as integers. date Date of last change of password, represented in days since epoch (January 1, 1970). mindays The minimum number of days between password changes. maxdays The maximum number of days between password changes. inactdays The number of days after a password expires before an account is locked. warndays Number of days prior to maxdays to warn users. expire Date that account expires, represented in days since epoch (January 1, 1970). The below parameters apply to windows only: win_homedrive (Windows Only) The drive letter to use for the home directory. If not specified the home directory will be a unc path. Otherwise the home directory will be mapped to the specified drive. Must be a letter followed by a colon. Because of the colon, the value must be surrounded by single quotes. ie: - win_homedrive: 'U: .. versionchanged:: 2015.8.0 win_profile (Windows Only) The custom profile directory of the user. Uses default value of underlying system if not set. .. versionchanged:: 2015.8.0 win_logonscript (Windows Only) The full path to the logon script to run when the user logs in. .. 
versionchanged:: 2015.8.0 win_description (Windows Only) A brief description of the purpose of the users account. .. versionchanged:: 2015.8.0 ''' # First check if a password is set. If password is set, check if # hash_password is True, then hash it. if password and hash_password: log.debug('Hashing a clear text password') # in case a password is already set, it will contain a Salt # which should be re-used to generate the new hash, other- # wise the Salt will be generated randomly, causing the # hash to change each time and thereby making the # user.present state non-idempotent. algorithms = { '1': 'md5', '2a': 'blowfish', '5': 'sha256', '6': 'sha512', } try: _, algo, shadow_salt, shadow_hash = __salt__['shadow.info'](name)['passwd'].split('$', 4) if algo == '1': log.warning('Using MD5 for hashing passwords is considered insecure!') log.debug('Re-using existing shadow salt for hashing password using %s', algorithms.get(algo)) password = __salt__['shadow.gen_password'](password, crypt_salt=shadow_salt, algorithm=algorithms.get(algo)) except ValueError: log.info('No existing shadow salt found, defaulting to a randomly generated new one') password = __salt__['shadow.gen_password'](password) if fullname is not None: fullname = salt.utils.data.decode(fullname) if roomnumber is not None: roomnumber = salt.utils.data.decode(roomnumber) if workphone is not None: workphone = salt.utils.data.decode(workphone) if homephone is not None: homephone = salt.utils.data.decode(homephone) if other is not None: other = salt.utils.data.decode(other) # createhome not supported on Windows or Mac if __grains__['kernel'] in ('Darwin', 'Windows'): createhome = False ret = {'name': name, 'changes': {}, 'result': True, 'comment': 'User {0} is present and up to date'.format(name)} # the comma is used to separate field in GECOS, thus resulting into # salt adding the end of fullname each time this function is called for gecos_field in [fullname, roomnumber, workphone]: if 
isinstance(gecos_field, string_types) and ',' in gecos_field: ret['comment'] = "Unsupported char ',' in {0}".format(gecos_field) ret['result'] = False return ret if groups: missing_groups = [x for x in groups if not __salt__['group.info'](x)] if missing_groups: ret['comment'] = 'The following group(s) are not present: ' \ '{0}'.format(','.join(missing_groups)) ret['result'] = False return ret if optional_groups: present_optgroups = [x for x in optional_groups if __salt__['group.info'](x)] for missing_optgroup in [x for x in optional_groups if x not in present_optgroups]: log.debug( 'Optional group "%s" for user "%s" is not present', missing_optgroup, name ) else: present_optgroups = None # Log a warning for all groups specified in both "groups" and # "optional_groups" lists. if groups and optional_groups: for isected in set(groups).intersection(optional_groups): log.warning( 'Group "%s" specified in both groups and optional_groups ' 'for user %s', isected, name ) # If usergroup was specified, we'll also be creating a new # group. We should report this change without setting the gid # variable. 
    if usergroup and __salt__['file.group_to_gid'](name) != '':
        changes_gid = name
    else:
        changes_gid = gid

    try:
        changes = _changes(
            name, uid, changes_gid, groups, present_optgroups,
            remove_groups, home, createhome, password, enforce_password,
            empty_password, shell, fullname, roomnumber, workphone,
            homephone, other, loginclass, date, mindays, maxdays,
            inactdays, warndays, expire, win_homedrive, win_profile,
            win_logonscript, win_description, allow_uid_change,
            allow_gid_change)
    except CommandExecutionError as exc:
        ret['result'] = False
        ret['comment'] = exc.strerror
        return ret

    if changes:
        if __opts__['test']:
            ret['result'] = None
            ret['comment'] = ('The following user attributes are set to be '
                              'changed:\n')
            for key, val in iteritems(changes):
                if key == 'passwd':
                    val = 'XXX-REDACTED-XXX'
                elif key == 'group' and not remove_groups:
                    key = 'ensure groups'
                ret['comment'] += '{0}: {1}\n'.format(key, val)
            return ret
        # The user is present
        if 'shadow.info' in __salt__:
            lshad = __salt__['shadow.info'](name)
        if __grains__['kernel'] in ('OpenBSD', 'FreeBSD'):
            lcpre = __salt__['user.get_loginclass'](name)
        pre = __salt__['user.info'](name)
        for key, val in iteritems(changes):
            if key == 'passwd' and not empty_password:
                __salt__['shadow.set_password'](name, password)
                continue
            if key == 'passwd' and empty_password:
                log.warning("No password will be set when empty_password=True")
                continue
            if key == 'empty_password' and val:
                __salt__['shadow.del_password'](name)
                continue
            if key == 'date':
                __salt__['shadow.set_date'](name, date)
                continue
            # run chhome once to avoid any possible bad side-effect
            if key == 'home' and 'homeDoesNotExist' not in changes:
                if __grains__['kernel'] in ('Darwin', 'Windows'):
                    __salt__['user.chhome'](name, val)
                else:
                    __salt__['user.chhome'](name, val, persist=False)
                continue
            if key == 'homeDoesNotExist':
                if __grains__['kernel'] in ('Darwin', 'Windows'):
                    __salt__['user.chhome'](name, val)
                else:
                    __salt__['user.chhome'](name, val, persist=True)
                if not os.path.isdir(val):
                    __salt__['file.mkdir'](val, pre['uid'], pre['gid'], 0o755)
                continue
            if key == 'mindays':
                __salt__['shadow.set_mindays'](name, mindays)
                continue
            if key == 'maxdays':
                __salt__['shadow.set_maxdays'](name, maxdays)
                continue
            if key == 'inactdays':
                __salt__['shadow.set_inactdays'](name, inactdays)
                continue
            if key == 'warndays':
                __salt__['shadow.set_warndays'](name, warndays)
                continue
            if key == 'expire':
                __salt__['shadow.set_expire'](name, expire)
                continue
            if key == 'win_homedrive':
                __salt__['user.update'](name=name, homedrive=val)
                continue
            if key == 'win_profile':
                __salt__['user.update'](name=name, profile=val)
                continue
            if key == 'win_logonscript':
                __salt__['user.update'](name=name, logonscript=val)
                continue
            if key == 'win_description':
                __salt__['user.update'](name=name, description=val)
                continue
            if key == 'groups':
                __salt__['user.ch{0}'.format(key)](name, val, not remove_groups)
            else:
                __salt__['user.ch{0}'.format(key)](name, val)

        post = __salt__['user.info'](name)
        spost = {}
        if 'shadow.info' in __salt__ and lshad['passwd'] != password:
            spost = __salt__['shadow.info'](name)
        if __grains__['kernel'] in ('OpenBSD', 'FreeBSD'):
            lcpost = __salt__['user.get_loginclass'](name)
        # See if anything changed
        for key in post:
            if post[key] != pre[key]:
                ret['changes'][key] = post[key]
        if 'shadow.info' in __salt__:
            for key in spost:
                if lshad[key] != spost[key]:
                    if key == 'passwd':
                        ret['changes'][key] = 'XXX-REDACTED-XXX'
                    else:
                        ret['changes'][key] = spost[key]
        if __grains__['kernel'] in ('OpenBSD', 'FreeBSD') and lcpost != lcpre:
            ret['changes']['loginclass'] = lcpost
        if ret['changes']:
            ret['comment'] = 'Updated user {0}'.format(name)
        # allow_uid_change and allow_gid_change passed as True to avoid race
        # conditions where a uid/gid is modified outside of Salt. If an
        # unauthorized change was requested, it would have been caught the
        # first time we ran _changes().
        changes = _changes(
            name, uid, gid, groups, present_optgroups, remove_groups,
            home, createhome, password, enforce_password, empty_password,
            shell, fullname, roomnumber, workphone, homephone, other,
            loginclass, date, mindays, maxdays, inactdays, warndays,
            expire, win_homedrive, win_profile, win_logonscript,
            win_description, allow_uid_change=True, allow_gid_change=True)
        if changes:
            ret['comment'] = 'These values could not be changed: {0}'.format(
                changes)
            ret['result'] = False
        return ret

    if changes is False:
        # The user is not present, make it!
        if __opts__['test']:
            ret['result'] = None
            ret['comment'] = 'User {0} set to be added'.format(name)
            return ret
        if groups and present_optgroups:
            groups.extend(present_optgroups)
        elif present_optgroups:
            groups = present_optgroups[:]

        # Setup params specific to Linux and Windows to be passed to the
        # add.user function
        if not salt.utils.platform.is_windows():
            params = {'name': name,
                      'uid': uid,
                      'gid': gid,
                      'groups': groups,
                      'home': home,
                      'shell': shell,
                      'unique': unique,
                      'system': system,
                      'fullname': fullname,
                      'roomnumber': roomnumber,
                      'workphone': workphone,
                      'homephone': homephone,
                      'other': other,
                      'createhome': createhome,
                      'nologinit': nologinit,
                      'loginclass': loginclass,
                      'usergroup': usergroup}
        else:
            params = {'name': name,
                      'password': password,
                      'fullname': fullname,
                      'description': win_description,
                      'groups': groups,
                      'home': home,
                      'homedrive': win_homedrive,
                      'profile': win_profile,
                      'logonscript': win_logonscript}

        if __salt__['user.add'](**params):
            ret['comment'] = 'New user {0} created'.format(name)
            ret['changes'] = __salt__['user.info'](name)
            if not createhome:
                # pwd incorrectly reports presence of home
                ret['changes']['home'] = ''
            if 'shadow.info' in __salt__ \
                    and not salt.utils.platform.is_windows() \
                    and not salt.utils.platform.is_darwin():
                if password and not empty_password:
                    __salt__['shadow.set_password'](name, password)
                    spost = __salt__['shadow.info'](name)
                    if spost['passwd'] != password:
                        ret['comment'] = 'User {0} created but failed to set' \
                                         ' password to' \
                                         ' {1}'.format(name, 'XXX-REDACTED-XXX')
                        ret['result'] = False
                    ret['changes']['password'] = 'XXX-REDACTED-XXX'
                if empty_password and not password:
                    __salt__['shadow.del_password'](name)
                    spost = __salt__['shadow.info'](name)
                    if spost['passwd'] != '':
                        ret['comment'] = 'User {0} created but failed to ' \
                                         'empty password'.format(name)
                        ret['result'] = False
                    ret['changes']['password'] = ''
                if date is not None:
                    __salt__['shadow.set_date'](name, date)
                    spost = __salt__['shadow.info'](name)
                    if spost['lstchg'] != date:
                        ret['comment'] = 'User {0} created but failed to set' \
                                         ' last change date to' \
                                         ' {1}'.format(name, date)
                        ret['result'] = False
                    ret['changes']['date'] = date
                if mindays:
                    __salt__['shadow.set_mindays'](name, mindays)
                    spost = __salt__['shadow.info'](name)
                    if spost['min'] != mindays:
                        ret['comment'] = 'User {0} created but failed to set' \
                                         ' minimum days to' \
                                         ' {1}'.format(name, mindays)
                        ret['result'] = False
                    ret['changes']['mindays'] = mindays
                if maxdays:
                    __salt__['shadow.set_maxdays'](name, maxdays)
                    spost = __salt__['shadow.info'](name)
                    if spost['max'] != maxdays:
                        ret['comment'] = 'User {0} created but failed to set' \
                                         ' maximum days to' \
                                         ' {1}'.format(name, maxdays)
                        ret['result'] = False
                    ret['changes']['maxdays'] = maxdays
                if inactdays:
                    __salt__['shadow.set_inactdays'](name, inactdays)
                    spost = __salt__['shadow.info'](name)
                    if spost['inact'] != inactdays:
                        ret['comment'] = 'User {0} created but failed to set' \
                                         ' inactive days to' \
                                         ' {1}'.format(name, inactdays)
                        ret['result'] = False
                    ret['changes']['inactdays'] = inactdays
                if warndays:
                    __salt__['shadow.set_warndays'](name, warndays)
                    spost = __salt__['shadow.info'](name)
                    if spost['warn'] != warndays:
                        ret['comment'] = 'User {0} created but failed to set' \
                                         ' warn days to' \
                                         ' {1}'.format(name, warndays)
                        ret['result'] = False
                    ret['changes']['warndays'] = warndays
                if expire:
                    __salt__['shadow.set_expire'](name, expire)
                    spost = __salt__['shadow.info'](name)
                    if spost['expire'] != expire:
                        ret['comment'] = 'User {0} created but failed to set' \
                                         ' expire days to' \
                                         ' {1}'.format(name, expire)
                        ret['result'] = False
                    ret['changes']['expire'] = expire
            elif salt.utils.platform.is_windows():
                if password and not empty_password:
                    if not __salt__['user.setpassword'](name, password):
                        ret['comment'] = 'User {0} created but failed to set' \
                                         ' password to' \
                                         ' {1}'.format(name, 'XXX-REDACTED-XXX')
                        ret['result'] = False
                    ret['changes']['passwd'] = 'XXX-REDACTED-XXX'
                if expire:
                    __salt__['shadow.set_expire'](name, expire)
                    spost = __salt__['shadow.info'](name)
                    if salt.utils.dateutils.strftime(spost['expire']) != \
                            salt.utils.dateutils.strftime(expire):
                        ret['comment'] = 'User {0} created but failed to set' \
                                         ' expire days to' \
                                         ' {1}'.format(name, expire)
                        ret['result'] = False
                    ret['changes']['expiration_date'] = spost['expire']
            elif salt.utils.platform.is_darwin() and password \
                    and not empty_password:
                if not __salt__['shadow.set_password'](name, password):
                    ret['comment'] = 'User {0} created but failed to set' \
                                     ' password to' \
                                     ' {1}'.format(name, 'XXX-REDACTED-XXX')
                    ret['result'] = False
                ret['changes']['passwd'] = 'XXX-REDACTED-XXX'
        else:
            ret['comment'] = 'Failed to create new user {0}'.format(name)
            ret['result'] = False

    return ret
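The update path of this state follows a snapshot-diff pattern: it captures `user.info` before applying changes, applies each change, captures the data again, and reports only the keys whose values differ. A minimal standalone sketch of that pattern (the helper name and the sample dicts are illustrative, not part of Salt's API):

```python
def diff_snapshots(pre, post):
    """Return only the keys whose values changed between two snapshots."""
    return {key: post[key] for key in post if post.get(key) != pre.get(key)}

# Hypothetical before/after snapshots of a user record.
pre = {'uid': 1000, 'shell': '/bin/sh', 'home': '/home/alice'}
post = {'uid': 1000, 'shell': '/bin/zsh', 'home': '/home/alice'}

changes = diff_snapshots(pre, post)
# Only the shell differs, so changes == {'shell': '/bin/zsh'}
```

In the state itself the same comparison is done once for `user.info` and once more for `shadow.info`, with the password value redacted before it is reported.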
Ensure that the named user is present with the specified properties

name
    The name of the user to manage

uid
    The user id to assign. If not specified, and the user does not exist,
    then the next available uid will be assigned.

gid
    The id of the default group to assign to the user. Either a group name
    or gid can be used. If not specified, and the user does not exist, then
    the next available gid will be assigned.

allow_uid_change : False
    Set to ``True`` to allow the state to update the uid.

    .. versionadded:: 2018.3.1

allow_gid_change : False
    Set to ``True`` to allow the state to update the gid.

    .. versionadded:: 2018.3.1

usergroup
    If True, a group with the same name as the user will be created. If
    False, a group with the same name as the user will not be created. The
    default is distribution-specific. See the USERGROUPS_ENAB section of
    the login.defs(5) man page.

    .. note::
        Only supported on GNU/Linux distributions

    .. versionadded:: Fluorine

groups
    A list of groups to assign the user to, pass a list object. If a group
    specified here does not exist on the minion, the state will fail.
    If set to the empty list, the user will be removed from all groups
    except the default group. If unset, salt will assume current groups
    are still wanted (see issue #28706).

optional_groups
    A list of groups to assign the user to, pass a list object. If a group
    specified here does not exist on the minion, the state will silently
    ignore it.

    NOTE: If the same group is specified in both "groups" and
    "optional_groups", then it will be assumed to be required and not
    optional.

remove_groups
    Remove groups that the user is a member of that weren't specified in
    the state. Default is ``True``.

home
    The custom login directory of user. Uses default value of underlying
    system if not set. Notice that this directory does not have to exist.
    This is also the location of the home directory to create if createhome
    is set to True.

createhome : True
    If set to ``False``, the home directory will not be created if it
    doesn't already exist.

    .. warning::
        Not supported on Windows or Mac OS. Additionally, parent
        directories will *not* be created. The parent directory for
        ``home`` must already exist.

nologinit : False
    If set to ``True``, it will not add the user to lastlog and faillog
    databases.

    .. note::
        Not supported on Windows or Mac OS.

password
    A password hash to set for the user. This field is only supported on
    Linux, FreeBSD, NetBSD, OpenBSD, and Solaris. If the ``empty_password``
    argument is set to ``True`` then ``password`` is ignored.
    For Windows this is the plain text password.
    For Linux, the hash can be generated with ``mkpasswd -m sha-256``.

    .. versionchanged:: 0.16.0
        BSD support added.

hash_password
    Set to True to hash the clear text password. Default is ``False``.

enforce_password
    Set to False to keep the password from being changed if it has already
    been set and the password hash differs from what is specified in the
    "password" field. This option will be ignored if "password" is not
    specified. Default is ``True``.

empty_password
    Set to True to enable password-less login for user. Default is
    ``False``.

shell
    The login shell, defaults to the system default shell

unique
    Require a unique UID. Default is ``True``.

system
    Choose UID in the range of FIRST_SYSTEM_UID and LAST_SYSTEM_UID.
    Default is ``False``.

loginclass
    The login class, defaults to empty (BSD only)

User comment field (GECOS) support (currently Linux, BSD, and MacOS only):

The below values should be specified as strings to avoid ambiguities when
the values are loaded. (Especially the phone and room number fields, which
are likely to contain numeric data.)

fullname
    The user's full name

roomnumber
    The user's room number (not supported in MacOS)

workphone
    The user's work phone number (not supported in MacOS)

homephone
    The user's home phone number (not supported in MacOS)

other
    The user's other attribute (not supported in MacOS).
    If the GECOS field contains more than 4 commas, this field will have
    the rest of them.

.. versionchanged:: 2014.7.0
    Shadow attribute support added.

Shadow attributes support (currently Linux only):

The below values should be specified as integers.

date
    Date of last change of password, represented in days since epoch
    (January 1, 1970).

mindays
    The minimum number of days between password changes.

maxdays
    The maximum number of days between password changes.

inactdays
    The number of days after a password expires before an account is
    locked.

warndays
    Number of days prior to maxdays to warn users.

expire
    Date that account expires, represented in days since epoch
    (January 1, 1970).

The below parameters apply to Windows only:

win_homedrive (Windows Only)
    The drive letter to use for the home directory. If not specified the
    home directory will be a UNC path. Otherwise the home directory will be
    mapped to the specified drive. Must be a letter followed by a colon.
    Because of the colon, the value must be surrounded by single quotes,
    e.g. ``- win_homedrive: 'U:'``

    .. versionchanged:: 2015.8.0

win_profile (Windows Only)
    The custom profile directory of the user. Uses default value of
    underlying system if not set.

    .. versionchanged:: 2015.8.0

win_logonscript (Windows Only)
    The full path to the logon script to run when the user logs in.

    .. versionchanged:: 2015.8.0

win_description (Windows Only)
    A brief description of the purpose of the user's account.

    .. versionchanged:: 2015.8.0
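The GECOS rule described above — the comment field holds fullname, roomnumber, workphone, and homephone separated by commas, with everything past the fourth comma landing in `other` — can be sketched with a small hypothetical helper (not part of Salt):

```python
def split_gecos(gecos):
    """Split a GECOS string into its five conventional fields.

    The first four commas delimit fullname, roomnumber, workphone and
    homephone; anything after the fourth comma (including further commas)
    lands in 'other', matching the rule described in the docstring.
    """
    fields = gecos.split(',', 4)
    fields += [''] * (5 - len(fields))  # pad missing trailing fields
    keys = ('fullname', 'roomnumber', 'workphone', 'homephone', 'other')
    return dict(zip(keys, fields))

record = split_gecos('Alice Example,101,555-0100,555-0199,notes,with,commas')
# record['other'] == 'notes,with,commas' -- the extra commas stay in 'other'
```

This is also why the state rejects a comma inside fullname, roomnumber, or workphone: a stray comma would shift every field after it.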
def marshal(self, o, use_value_list=False): """ Packages the return from a parser for easy use in a rule. """ if o is None: return elif isinstance(o, dict): if use_value_list: for k, v in o.items(): o[k] = [v] return o elif isinstance(o, six.string_types): if use_value_list: return {o: [True]} else: return {o: True} else: raise TypeError("Marshaller doesn't support given type %s" % type(o))
Packages the return from a parser for easy use in a rule.
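A minimal standalone sketch of the marshalling behaviour above, reproduced as a plain function (dropping the `six` dependency and `self`, Python 3 only) so it can be exercised directly:

```python
def marshal(o, use_value_list=False):
    """Package a parser's return value for easy use in a rule."""
    if o is None:
        return None
    if isinstance(o, dict):
        if use_value_list:
            # wrap each value in a single-element list, in place
            for k, v in o.items():
                o[k] = [v]
        return o
    if isinstance(o, str):
        return {o: [True]} if use_value_list else {o: True}
    raise TypeError("Marshaller doesn't support given type %s" % type(o))

print(marshal("chassis"))                      # {'chassis': True}
print(marshal({"a": 1}, use_value_list=True))  # {'a': [1]}
```

Note the dict branch mutates its argument in place before returning it, matching the original.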
def namedb_get_all_namespace_ids( cur ): """ Get a list of all READY namespace IDs. """ query = "SELECT namespace_id FROM namespaces WHERE op = ?;" args = (NAMESPACE_READY,) namespace_rows = namedb_query_execute( cur, query, args ) ret = [] for namespace_row in namespace_rows: ret.append( namespace_row['namespace_id'] ) return ret
Get a list of all READY namespace IDs.
def glob(cls, files=None): ''' Glob a pattern or a list of pattern static storage relative(s). ''' files = files or [] if isinstance(files, str): files = os.path.normpath(files) matches = lambda path: matches_patterns(path, [files]) return [path for path in cls.get_static_files() if matches(path)] elif isinstance(files, (list, tuple)): all_files = cls.get_static_files() files = [os.path.normpath(f) for f in files] sorted_result = [] for pattern in files: sorted_result.extend([f for f in all_files if matches_patterns(f, [pattern])]) return sorted_result
Glob a pattern or a list of pattern static storage relative(s).
def _get_compressed_vlan_list(self, pvlan_ids): """Generate a compressed vlan list ready for XML using a vlan set. Sample Use Case: Input vlan set: -------------- 1 - s = set([11, 50, 25, 30, 15, 16, 3, 8, 2, 1]) 2 - s = set([87, 11, 50, 25, 30, 15, 16, 3, 8, 2, 1, 88]) Returned compressed XML list: ---------------------------- 1 - compressed_list = ['1-3', '8', '11', '15-16', '25', '30', '50'] 2 - compressed_list = ['1-3', '8', '11', '15-16', '25', '30', '50', '87-88'] """ if not pvlan_ids: return [] pvlan_list = list(pvlan_ids) pvlan_list.sort() compressed_list = [] begin = -1 prev_vlan = -1 for port_vlan in pvlan_list: if prev_vlan == -1: prev_vlan = port_vlan else: if (port_vlan - prev_vlan) == 1: if begin == -1: begin = prev_vlan prev_vlan = port_vlan else: if begin == -1: compressed_list.append(str(prev_vlan)) else: compressed_list.append("%d-%d" % (begin, prev_vlan)) begin = -1 prev_vlan = port_vlan if begin == -1: compressed_list.append(str(prev_vlan)) else: compressed_list.append("%s-%s" % (begin, prev_vlan)) return compressed_list
Generate a compressed vlan list ready for XML using a vlan set. Sample Use Case: Input vlan set: -------------- 1 - s = set([11, 50, 25, 30, 15, 16, 3, 8, 2, 1]) 2 - s = set([87, 11, 50, 25, 30, 15, 16, 3, 8, 2, 1, 88]) Returned compressed XML list: ---------------------------- 1 - compressed_list = ['1-3', '8', '11', '15-16', '25', '30', '50'] 2 - compressed_list = ['1-3', '8', '11', '15-16', '25', '30', '50', '87-88']
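The range-compression logic above can be checked against the docstring's own sample cases; here it is restated as a standalone function (no `self`) with the same flush-on-gap structure:

```python
def compress_vlan_list(pvlan_ids):
    """Collapse a set of VLAN ids into a sorted list of ranges like '15-16'."""
    if not pvlan_ids:
        return []
    compressed, begin, prev = [], -1, -1
    for vlan in sorted(pvlan_ids):
        if prev == -1:
            prev = vlan
        elif vlan - prev == 1:           # contiguous: open/extend a range
            if begin == -1:
                begin = prev
            prev = vlan
        else:                            # gap: flush the pending range or single
            compressed.append(str(prev) if begin == -1 else "%d-%d" % (begin, prev))
            begin, prev = -1, vlan
    compressed.append(str(prev) if begin == -1 else "%d-%d" % (begin, prev))
    return compressed

print(compress_vlan_list({11, 50, 25, 30, 15, 16, 3, 8, 2, 1}))
# ['1-3', '8', '11', '15-16', '25', '30', '50']
```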
def convert_node_labels_to_integers(G, first_label=0, ordering="default", label_attribute=None): """Return a copy of the graph G with the nodes relabeled with integers. Parameters ---------- G : graph A NetworkX graph first_label : int, optional (default=0) An integer specifying the offset in numbering nodes. The n new integer labels are numbered first_label, ..., n-1+first_label. ordering : string "default" : inherit node ordering from G.nodes() "sorted" : inherit node ordering from sorted(G.nodes()) "increasing degree" : nodes are sorted by increasing degree "decreasing degree" : nodes are sorted by decreasing degree label_attribute : string, optional (default=None) Name of node attribute to store old label. If None no attribute is created. Notes ----- Node and edge attribute data are copied to the new (relabeled) graph. See Also -------- relabel_nodes """ N = G.number_of_nodes() + first_label if ordering == "default": mapping = dict(zip(G.nodes(), range(first_label, N))) elif ordering == "sorted": nlist = G.nodes() nlist.sort() mapping = dict(zip(nlist, range(first_label, N))) elif ordering == "increasing degree": dv_pairs = [(d, n) for (n, d) in G.degree_iter()] dv_pairs.sort() # in-place sort from lowest to highest degree mapping = dict(zip([n for d, n in dv_pairs], range(first_label, N))) elif ordering == "decreasing degree": dv_pairs = [(d, n) for (n, d) in G.degree_iter()] dv_pairs.sort() # in-place sort from lowest to highest degree dv_pairs.reverse() mapping = dict(zip([n for d, n in dv_pairs], range(first_label, N))) else: raise nx.NetworkXError('Unknown node ordering: {0}'.format(ordering)) H = relabel_nodes(G, mapping) H.name = "(" + G.name + ")_with_int_labels" # create node attribute with the old label if label_attribute is not None: nx.set_node_attributes(H, label_attribute, dict((v, k) for k, v in mapping.items())) return H
Return a copy of the graph G with the nodes relabeled with integers. Parameters ---------- G : graph A NetworkX graph first_label : int, optional (default=0) An integer specifying the offset in numbering nodes. The n new integer labels are numbered first_label, ..., n-1+first_label. ordering : string "default" : inherit node ordering from G.nodes() "sorted" : inherit node ordering from sorted(G.nodes()) "increasing degree" : nodes are sorted by increasing degree "decreasing degree" : nodes are sorted by decreasing degree label_attribute : string, optional (default=None) Name of node attribute to store old label. If None no attribute is created. Notes ----- Node and edge attribute data are copied to the new (relabeled) graph. See Also -------- relabel_nodes
def enable(name, **kwargs): ''' Enable the named service to start at boot CLI Example: .. code-block:: bash salt '*' service.enable <service name> ''' cmd = '/usr/sbin/svcadm enable {0}'.format(name) return not __salt__['cmd.retcode'](cmd, python_shell=False)
Enable the named service to start at boot CLI Example: .. code-block:: bash salt '*' service.enable <service name>
def remove_quotes(self, value): """ Remove any surrounding quotes from a value and unescape any contained quotes of that type. """ # beware the empty string if not value: return value if value[0] == value[-1] == '"': return value[1:-1].replace('\\"', '"') if value[0] == value[-1] == "'": return value[1:-1].replace("\\'", "'") return value
Remove any surrounding quotes from a value and unescape any contained quotes of that type.
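For illustration, the quote-stripping above as a standalone function, showing that only one matching layer is removed and only that quote style is unescaped:

```python
def remove_quotes(value):
    """Strip one layer of matching surrounding quotes; unescape that kind."""
    if not value:                       # beware the empty string
        return value
    if value[0] == value[-1] == '"':
        return value[1:-1].replace('\\"', '"')
    if value[0] == value[-1] == "'":
        return value[1:-1].replace("\\'", "'")
    return value

print(remove_quotes('"say \\"hi\\""'))  # say "hi"
print(remove_quotes("plain"))           # plain
```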
def collect_metrics(): """ Register the decorated function to run for the collect_metrics hook. """ def _register(action): handler = Handler.get(action) handler.add_predicate(partial(_restricted_hook, 'collect-metrics')) return action return _register
Register the decorated function to run for the collect_metrics hook.
def from_passwd(uid_min=None, uid_max=None): """Create collection from locally discovered data, e.g. /etc/passwd.""" import pwd users = Users(oktypes=User) passwd_list = pwd.getpwall() if not uid_min: uid_min = UID_MIN if not uid_max: uid_max = UID_MAX sudoers_entries = read_sudoers() for pwd_entry in passwd_list: if uid_min <= pwd_entry.pw_uid <= uid_max: user = User(name=text_type(pwd_entry.pw_name), passwd=text_type(pwd_entry.pw_passwd), uid=pwd_entry.pw_uid, gid=pwd_entry.pw_gid, gecos=text_type(pwd_entry.pw_gecos), home_dir=text_type(pwd_entry.pw_dir), shell=text_type(pwd_entry.pw_shell), public_keys=read_authorized_keys(username=pwd_entry.pw_name), sudoers_entry=get_sudoers_entry(username=pwd_entry.pw_name, sudoers_entries=sudoers_entries)) users.append(user) return users
Create collection from locally discovered data, e.g. /etc/passwd.
def add_file(self, filename): """ Read and adds given file's content to data array that will be used to generate output :param filename File name to add :type str or unicode """ with (open(filename, 'rb')) as f: data = f.read() # below won't handle the same name files # in different paths fname = os.path.basename(filename) self.files[fname] = base64.b64encode(data)
Read and adds given file's content to data array that will be used to generate output :param filename File name to add :type str or unicode
def run_setup(setup_script, args): """Run a distutils setup script, sandboxed in its directory""" setup_dir = os.path.abspath(os.path.dirname(setup_script)) with setup_context(setup_dir): try: sys.argv[:] = [setup_script]+list(args) sys.path.insert(0, setup_dir) # reset to include setup dir, w/clean callback list working_set.__init__() working_set.callbacks.append(lambda dist:dist.activate()) def runner(): ns = dict(__file__=setup_script, __name__='__main__') _execfile(setup_script, ns) DirectorySandbox(setup_dir).run(runner) except SystemExit as v: if v.args and v.args[0]: raise
Run a distutils setup script, sandboxed in its directory
def build_from_path(package, path, dry_run=False, env='default', outfilename=DEFAULT_BUILDFILE): """ Compile a Quilt data package from a build file. Path can be a directory, in which case the build file will be generated automatically. """ team, owner, pkg, subpath = parse_package(package, allow_subpath=True) if not os.path.exists(path): raise CommandException("%s does not exist." % path) try: if os.path.isdir(path): buildpath = os.path.join(path, outfilename) if os.path.exists(buildpath): raise CommandException( "Build file already exists. Run `quilt build %r` instead." % buildpath ) contents = generate_contents(path, outfilename) build_package_from_contents(team, owner, pkg, subpath, path, contents, dry_run=dry_run, env=env) else: build_package(team, owner, pkg, subpath, path, dry_run=dry_run, env=env) if not dry_run: print("Built %s successfully." % package) except BuildException as ex: raise CommandException("Failed to build the package: %s" % ex)
Compile a Quilt data package from a build file. Path can be a directory, in which case the build file will be generated automatically.
def update(self, y, exogenous=None, maxiter=None, **kwargs): """Update an ARIMA or auto-ARIMA as well as any necessary transformers Passes the newly observed values through the appropriate endog transformations, and the exogenous array through the exog transformers (updating where necessary) before finally updating the ARIMA model. Parameters ---------- y : array-like or iterable, shape=(n_samples,) The time-series data to add to the endogenous samples on which the ``ARIMA`` estimator was previously fit. This may either be a Pandas ``Series`` object or a numpy array. This should be a one- dimensional array of finite floats. exogenous : array-like, shape=[n_obs, n_vars], optional (default=None) An optional 2-d array of exogenous variables. If the model was fit with an exogenous array of covariates, it will be required for updating the observed values. maxiter : int, optional (default=None) The number of iterations to perform when updating the model. If None, will perform ``max(5, n_samples // 10)`` iterations. **kwargs : keyword args Extra keyword arguments used for each stage's ``update`` stage. Similar to scikit-learn pipeline keyword args, the keys are compound, comprised of the stage name and the argument name separated by a "__". """ check_is_fitted(self, "steps_") # Push the arrays through all of the transformer steps that have the # appropriate update_and_transform method yt = y Xt = exogenous named_kwargs = self._get_kwargs(**kwargs) for step_idx, name, transformer in self._iter(with_final=False): kw = named_kwargs[name] if hasattr(transformer, "update_and_transform"): yt, Xt = transformer.update_and_transform( y=yt, exogenous=Xt, **kw) else: yt, Xt = transformer.transform(yt, exogenous=Xt, **kw) # Now we can update the arima nm, est = self.steps_[-1] return est.update( yt, exogenous=Xt, maxiter=maxiter, **named_kwargs[nm])
Update an ARIMA or auto-ARIMA as well as any necessary transformers Passes the newly observed values through the appropriate endog transformations, and the exogenous array through the exog transformers (updating where necessary) before finally updating the ARIMA model. Parameters ---------- y : array-like or iterable, shape=(n_samples,) The time-series data to add to the endogenous samples on which the ``ARIMA`` estimator was previously fit. This may either be a Pandas ``Series`` object or a numpy array. This should be a one- dimensional array of finite floats. exogenous : array-like, shape=[n_obs, n_vars], optional (default=None) An optional 2-d array of exogenous variables. If the model was fit with an exogenous array of covariates, it will be required for updating the observed values. maxiter : int, optional (default=None) The number of iterations to perform when updating the model. If None, will perform ``max(5, n_samples // 10)`` iterations. **kwargs : keyword args Extra keyword arguments used for each stage's ``update`` stage. Similar to scikit-learn pipeline keyword args, the keys are compound, comprised of the stage name and the argument name separated by a "__".
def iter(self, start=0, count=1000): """ @start: #int cursor start position @stop: #int cursor stop position @count: #int buffer limit -> yields all of the items in the list """ cursor = '0' _loads = self._loads stop = start + count while cursor: cursor = self._client.lrange(self.key_prefix, start, stop) for x in cursor or []: yield _loads(x) start += (count + 1) stop += (count + 1)
@start: #int cursor start position @stop: #int cursor stop position @count: #int buffer limit -> yields all of the items in the list
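Illustrative only: the `iter` method above is bound to a class and a live Redis client. The sketch below swaps in a fake client (the names `FakeListClient` and `iter_list` are invented here) to show the paging arithmetic — Redis `LRANGE` uses *inclusive* start/stop indices, so each window covers `count + 1` items and both cursors advance by `count + 1`. The loop is restructured as `while True` with an explicit break on an empty page, which is what the original's `while cursor` reassignment amounts to.

```python
class FakeListClient:
    """Stands in for a Redis client; LRANGE takes inclusive start/stop."""
    def __init__(self, items):
        self._items = items

    def lrange(self, key, start, stop):
        return self._items[start:stop + 1]

def iter_list(client, key, start=0, count=1000):
    stop = start + count
    while True:
        page = client.lrange(key, start, stop)
        if not page:                 # past the end of the list: stop
            break
        for x in page:
            yield x
        start += count + 1           # inclusive window => step by count + 1
        stop += count + 1

client = FakeListClient(list(range(10)))
print(list(iter_list(client, "jobs", count=3)))  # [0, 1, 2, ..., 9]
```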
def get_domains(self): """ Retrieves the domains of the users from elastic. """ search = User.search() search.aggs.bucket('domains', 'terms', field='domain', order={'_count': 'desc'}, size=100) response = search.execute() return [entry.key for entry in response.aggregations.domains.buckets]
Retrieves the domains of the users from elastic.
def from_shapely(geometry, label=None): """ Create a MultiPolygon from a Shapely MultiPolygon, a Shapely Polygon or a Shapely GeometryCollection. This also creates all necessary Polygons contained by this MultiPolygon. Parameters ---------- geometry : shapely.geometry.MultiPolygon or shapely.geometry.Polygon\ or shapely.geometry.collection.GeometryCollection The object to convert to a MultiPolygon. label : None or str, optional A label assigned to all Polygons within the MultiPolygon. Returns ------- imgaug.MultiPolygon The derived MultiPolygon. """ # load shapely lazily, which makes the dependency more optional import shapely.geometry if isinstance(geometry, shapely.geometry.MultiPolygon): return MultiPolygon([Polygon.from_shapely(poly, label=label) for poly in geometry.geoms]) elif isinstance(geometry, shapely.geometry.Polygon): return MultiPolygon([Polygon.from_shapely(geometry, label=label)]) elif isinstance(geometry, shapely.geometry.collection.GeometryCollection): ia.do_assert(all([isinstance(poly, shapely.geometry.Polygon) for poly in geometry.geoms])) return MultiPolygon([Polygon.from_shapely(poly, label=label) for poly in geometry.geoms]) else: raise Exception("Unknown datatype '%s'. Expected shapely.geometry.Polygon or " "shapely.geometry.MultiPolygon or " "shapely.geometry.collections.GeometryCollection." % (type(geometry),))
Create a MultiPolygon from a Shapely MultiPolygon, a Shapely Polygon or a Shapely GeometryCollection. This also creates all necessary Polygons contained by this MultiPolygon. Parameters ---------- geometry : shapely.geometry.MultiPolygon or shapely.geometry.Polygon\ or shapely.geometry.collection.GeometryCollection The object to convert to a MultiPolygon. label : None or str, optional A label assigned to all Polygons within the MultiPolygon. Returns ------- imgaug.MultiPolygon The derived MultiPolygon.
def search_user(current): """ Search users for adding to a public room or creating one to one direct messaging .. code-block:: python # request: { 'view':'_zops_search_user', 'query': string, } # response: { 'results': [('full_name', 'key', 'avatar_url'), ], 'status': 'OK', 'code': 200 } """ current.output = { 'results': [], 'status': 'OK', 'code': 201 } qs = UserModel(current).objects.exclude(key=current.user_id).search_on( *settings.MESSAGING_USER_SEARCH_FIELDS, contains=current.input['query']) # FIXME: somehow exclude(key=current.user_id) not working with search_on() for user in qs: if user.key != current.user_id: current.output['results'].append((user.full_name, user.key, user.get_avatar_url()))
Search users for adding to a public room or creating one to one direct messaging .. code-block:: python # request: { 'view':'_zops_search_user', 'query': string, } # response: { 'results': [('full_name', 'key', 'avatar_url'), ], 'status': 'OK', 'code': 200 }
Below is the the instruction that describes the task: ### Input: Search users for adding to a public room or creating one to one direct messaging .. code-block:: python # request: { 'view':'_zops_search_user', 'query': string, } # response: { 'results': [('full_name', 'key', 'avatar_url'), ], 'status': 'OK', 'code': 200 } ### Response: def search_user(current): """ Search users for adding to a public room or creating one to one direct messaging .. code-block:: python # request: { 'view':'_zops_search_user', 'query': string, } # response: { 'results': [('full_name', 'key', 'avatar_url'), ], 'status': 'OK', 'code': 200 } """ current.output = { 'results': [], 'status': 'OK', 'code': 201 } qs = UserModel(current).objects.exclude(key=current.user_id).search_on( *settings.MESSAGING_USER_SEARCH_FIELDS, contains=current.input['query']) # FIXME: somehow exclude(key=current.user_id) not working with search_on() for user in qs: if user.key != current.user_id: current.output['results'].append((user.full_name, user.key, user.get_avatar_url()))
def basicauth(self, realm = b'all', nofail = False): "Try to get the basic authorize info, return (username, password) if succeeded, return 401 otherwise" if b'authorization' in self.headerdict: auth = self.headerdict[b'authorization'] auth_pair = auth.split(b' ', 1) if len(auth_pair) < 2: raise HttpInputException('Authorization header is malformed') if auth_pair[0].lower() == b'basic': try: userpass = base64.b64decode(auth_pair[1]) except Exception: raise HttpInputException('Invalid base-64 string') userpass_pair = userpass.split(b':', 1) if len(userpass_pair) != 2: raise HttpInputException('Authorization header is malformed') return userpass_pair if nofail: return (None, None) else: self.basicauthfail(realm)
Try to get the basic authorize info, return (username, password) if succeeded, return 401 otherwise
Below is the instruction that describes the task: ### Input: Try to get the basic authorize info, return (username, password) if succeeded, return 401 otherwise ### Response: def basicauth(self, realm = b'all', nofail = False): "Try to get the basic authorize info, return (username, password) if succeeded, return 401 otherwise" if b'authorization' in self.headerdict: auth = self.headerdict[b'authorization'] auth_pair = auth.split(b' ', 1) if len(auth_pair) < 2: raise HttpInputException('Authorization header is malformed') if auth_pair[0].lower() == b'basic': try: userpass = base64.b64decode(auth_pair[1]) except Exception: raise HttpInputException('Invalid base-64 string') userpass_pair = userpass.split(b':', 1) if len(userpass_pair) != 2: raise HttpInputException('Authorization header is malformed') return userpass_pair if nofail: return (None, None) else: self.basicauthfail(realm)
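The header parsing above can be sketched as a standalone helper with only the stdlib `base64` module. This is a hypothetical simplification: it returns `None` instead of raising the original `HttpInputException`, and adds `validate=True` so malformed base-64 is rejected rather than silently cleaned.

```python
import base64


def parse_basic_auth(header: bytes):
    """Parse an HTTP Basic ``Authorization`` header value into (user, password).

    Returns None when the header is not a well-formed Basic credential
    (the original code raises HttpInputException instead).
    """
    parts = header.split(b" ", 1)
    if len(parts) != 2 or parts[0].lower() != b"basic":
        return None
    try:
        # validate=True makes non-alphabet characters an error
        userpass = base64.b64decode(parts[1], validate=True)
    except Exception:
        return None
    pair = userpass.split(b":", 1)
    if len(pair) != 2:
        return None
    return pair[0], pair[1]
```

The password may itself contain `:`; splitting with `maxsplit=1`, as the original does, keeps everything after the first colon intact.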
def loadings(self): """Loadings = eigenvectors times sqrt(eigenvalues).""" loadings = self.v[:, : self.keep] * np.sqrt(self.eigenvalues) cols = ["PC%s" % i for i in range(1, self.keep + 1)] loadings = pd.DataFrame( loadings, columns=cols, index=self.feature_names ) return loadings
Loadings = eigenvectors times sqrt(eigenvalues).
Below is the instruction that describes the task: ### Input: Loadings = eigenvectors times sqrt(eigenvalues). ### Response: def loadings(self): """Loadings = eigenvectors times sqrt(eigenvalues).""" loadings = self.v[:, : self.keep] * np.sqrt(self.eigenvalues) cols = ["PC%s" % i for i in range(1, self.keep + 1)] loadings = pd.DataFrame( loadings, columns=cols, index=self.feature_names ) return loadings
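The relation used above (loadings = eigenvector columns scaled by the square roots of the matching eigenvalues) can be shown without numpy or pandas. This is an illustrative pure-Python sketch, not the library's implementation:

```python
import math


def loadings(eigenvectors, eigenvalues):
    """Scale each eigenvector column j by sqrt(eigenvalues[j]).

    eigenvectors is a list of rows (features x components);
    eigenvalues has one entry per component column.
    """
    return [
        [v_ij * math.sqrt(eigenvalues[j]) for j, v_ij in enumerate(row)]
        for row in eigenvectors
    ]
```

With orthonormal eigenvectors this makes each loading the correlation-scale contribution of a feature to a principal component.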
def conductorcmd_push(engine_name, project_name, services, **kwargs): """ Push images to a registry """ username = kwargs.pop('username') password = kwargs.pop('password') email = kwargs.pop('email') url = kwargs.pop('url') namespace = kwargs.pop('namespace') tag = kwargs.pop('tag') config_path = kwargs.pop('config_path') repository_prefix =kwargs.pop('repository_prefix') engine = load_engine(['PUSH', 'LOGIN'], engine_name, project_name, services) logger.info(u'Engine integration loaded. Preparing push.', engine=engine.display_name) # Verify that we can authenticate with the registry username, password = engine.login(username, password, email, url, config_path) # Push each image that has been built using Ansible roles for name, service in iteritems(services): if service.get('containers'): for c in service['containers']: if 'roles' in c: cname = '%s-%s' % (name, c['container_name']) image_id = engine.get_latest_image_id_for_service(cname) engine.push(image_id, cname, url=url, tag=tag, namespace=namespace, username=username, password=password, repository_prefix=repository_prefix) elif 'roles' in service: # if the service has roles, it's an image we should push image_id = engine.get_latest_image_id_for_service(name) engine.push(image_id, name, url=url, tag=tag, namespace=namespace, username=username, password=password, repository_prefix=repository_prefix)
Push images to a registry
Below is the the instruction that describes the task: ### Input: Push images to a registry ### Response: def conductorcmd_push(engine_name, project_name, services, **kwargs): """ Push images to a registry """ username = kwargs.pop('username') password = kwargs.pop('password') email = kwargs.pop('email') url = kwargs.pop('url') namespace = kwargs.pop('namespace') tag = kwargs.pop('tag') config_path = kwargs.pop('config_path') repository_prefix =kwargs.pop('repository_prefix') engine = load_engine(['PUSH', 'LOGIN'], engine_name, project_name, services) logger.info(u'Engine integration loaded. Preparing push.', engine=engine.display_name) # Verify that we can authenticate with the registry username, password = engine.login(username, password, email, url, config_path) # Push each image that has been built using Ansible roles for name, service in iteritems(services): if service.get('containers'): for c in service['containers']: if 'roles' in c: cname = '%s-%s' % (name, c['container_name']) image_id = engine.get_latest_image_id_for_service(cname) engine.push(image_id, cname, url=url, tag=tag, namespace=namespace, username=username, password=password, repository_prefix=repository_prefix) elif 'roles' in service: # if the service has roles, it's an image we should push image_id = engine.get_latest_image_id_for_service(name) engine.push(image_id, name, url=url, tag=tag, namespace=namespace, username=username, password=password, repository_prefix=repository_prefix)
def crc32(self, data): """ Calculate a ZIP 32-bit CRC from data in memory. Origin code by Johann E. Klasek, j AT klasek at """ data_address = 0x1000 # position of the test data self.cpu.memory.load(data_address, data) # write test data into RAM self.cpu.index_x.set(data_address + len(data)) # end address addr_hi, addr_lo = divmod(data_address, 0x100) # start address self.cpu_test_run(start=0x0100, end=None, mem=bytearray([ # 0100| .ORG $100 0x10, 0xCE, 0x40, 0x00, # 0100| LDS #$4000 # 0104| CRCHH: EQU $ED # 0104| CRCHL: EQU $B8 # 0104| CRCLH: EQU $83 # 0104| CRCLL: EQU $20 # 0104| CRCINITH: EQU $FFFF # 0104| CRCINITL: EQU $FFFF # 0104| ; CRC 32 bit in DP (4 bytes) # 0104| CRC: EQU $80 0xCE, addr_hi, addr_lo, # 0104| LDU #.... ; start address in u 0x34, 0x10, # 010C| PSHS x ; end address +1 to TOS 0xCC, 0xFF, 0xFF, # 010E| LDD #CRCINITL 0xDD, 0x82, # 0111| STD crc+2 0x8E, 0xFF, 0xFF, # 0113| LDX #CRCINITH 0x9F, 0x80, # 0116| STX crc # 0118| ; d/x contains the CRC # 0118| BL: 0xE8, 0xC0, # 0118| EORB ,u+ ; XOR with lowest byte 0x10, 0x8E, 0x00, 0x08, # 011A| LDY #8 ; bit counter # 011E| RL: 0x1E, 0x01, # 011E| EXG d,x # 0120| RL1: 0x44, # 0120| LSRA ; shift CRC right, beginning with high word 0x56, # 0121| RORB 0x1E, 0x01, # 0122| EXG d,x 0x46, # 0124| RORA ; low word 0x56, # 0125| RORB 0x24, 0x12, # 0126| BCC cl # 0128| ; CRC=CRC XOR polynomic 0x88, 0x83, # 0128| EORA #CRCLH ; apply CRC polynomic low word 0xC8, 0x20, # 012A| EORB #CRCLL 0x1E, 0x01, # 012C| EXG d,x 0x88, 0xED, # 012E| EORA #CRCHH ; apply CRC polynomic high word 0xC8, 0xB8, # 0130| EORB #CRCHL 0x31, 0x3F, # 0132| LEAY -1,y ; bit count down 0x26, 0xEA, # 0134| BNE rl1 0x1E, 0x01, # 0136| EXG d,x ; CRC: restore correct order 0x27, 0x04, # 0138| BEQ el ; leave bit loop # 013A| CL: 0x31, 0x3F, # 013A| LEAY -1,y ; bit count down 0x26, 0xE0, # 013C| BNE rl ; bit loop # 013E| EL: 0x11, 0xA3, 0xE4, # 013E| CMPU ,s ; end address reached? 0x26, 0xD5, # 0141| BNE bl ; byte loop 0xDD, 0x82, # 0143| STD crc+2 ; CRC low word 0x9F, 0x80, # 0145| STX crc ; CRC high word ])) d = self.cpu.accu_d.value x = self.cpu.index_x.value crc32 = x * 0x10000 + d return crc32 ^ 0xFFFFFFFF
Calculate a ZIP 32-bit CRC from data in memory. Origin code by Johann E. Klasek, j AT klasek at
Below is the instruction that describes the task: ### Input: Calculate a ZIP 32-bit CRC from data in memory. Origin code by Johann E. Klasek, j AT klasek at ### Response: def crc32(self, data): """ Calculate a ZIP 32-bit CRC from data in memory. Origin code by Johann E. Klasek, j AT klasek at """ data_address = 0x1000 # position of the test data self.cpu.memory.load(data_address, data) # write test data into RAM self.cpu.index_x.set(data_address + len(data)) # end address addr_hi, addr_lo = divmod(data_address, 0x100) # start address self.cpu_test_run(start=0x0100, end=None, mem=bytearray([ # 0100| .ORG $100 0x10, 0xCE, 0x40, 0x00, # 0100| LDS #$4000 # 0104| CRCHH: EQU $ED # 0104| CRCHL: EQU $B8 # 0104| CRCLH: EQU $83 # 0104| CRCLL: EQU $20 # 0104| CRCINITH: EQU $FFFF # 0104| CRCINITL: EQU $FFFF # 0104| ; CRC 32 bit in DP (4 bytes) # 0104| CRC: EQU $80 0xCE, addr_hi, addr_lo, # 0104| LDU #.... ; start address in u 0x34, 0x10, # 010C| PSHS x ; end address +1 to TOS 0xCC, 0xFF, 0xFF, # 010E| LDD #CRCINITL 0xDD, 0x82, # 0111| STD crc+2 0x8E, 0xFF, 0xFF, # 0113| LDX #CRCINITH 0x9F, 0x80, # 0116| STX crc # 0118| ; d/x contains the CRC # 0118| BL: 0xE8, 0xC0, # 0118| EORB ,u+ ; XOR with lowest byte 0x10, 0x8E, 0x00, 0x08, # 011A| LDY #8 ; bit counter # 011E| RL: 0x1E, 0x01, # 011E| EXG d,x # 0120| RL1: 0x44, # 0120| LSRA ; shift CRC right, beginning with high word 0x56, # 0121| RORB 0x1E, 0x01, # 0122| EXG d,x 0x46, # 0124| RORA ; low word 0x56, # 0125| RORB 0x24, 0x12, # 0126| BCC cl # 0128| ; CRC=CRC XOR polynomic 0x88, 0x83, # 0128| EORA #CRCLH ; apply CRC polynomic low word 0xC8, 0x20, # 012A| EORB #CRCLL 0x1E, 0x01, # 012C| EXG d,x 0x88, 0xED, # 012E| EORA #CRCHH ; apply CRC polynomic high word 0xC8, 0xB8, # 0130| EORB #CRCHL 0x31, 0x3F, # 0132| LEAY -1,y ; bit count down 0x26, 0xEA, # 0134| BNE rl1 0x1E, 0x01, # 0136| EXG d,x ; CRC: restore correct order 0x27, 0x04, # 0138| BEQ el ; leave bit loop # 013A| CL: 0x31, 0x3F, # 013A| LEAY -1,y ; bit count down 0x26, 0xE0, # 013C| BNE rl ; bit loop # 013E| EL: 0x11, 0xA3, 0xE4, # 013E| CMPU ,s ; end address reached? 0x26, 0xD5, # 0141| BNE bl ; byte loop 0xDD, 0x82, # 0143| STD crc+2 ; CRC low word 0x9F, 0x80, # 0145| STX crc ; CRC high word ])) d = self.cpu.accu_d.value x = self.cpu.index_x.value crc32 = x * 0x10000 + d return crc32 ^ 0xFFFFFFFF
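The 6809 routine above is the standard reflected CRC-32: the polynomial bytes `$ED $B8 $83 $20` are the constant 0xEDB88320, the CRC starts at 0xFFFFFFFF (`CRCINITH`/`CRCINITL`), each input byte is XORed into the low byte, eight right-shift/conditional-XOR steps follow, and the result is inverted at the end (`crc32 ^ 0xFFFFFFFF`). A bit-by-bit Python equivalent, checked against the stdlib `zlib.crc32`:

```python
import zlib


def crc32_bitwise(data: bytes) -> int:
    """Bit-by-bit CRC-32 (polynomial 0xEDB88320), mirroring the 6809 loop."""
    crc = 0xFFFFFFFF                         # CRCINITH/CRCINITL
    for byte in data:
        crc ^= byte                          # EORB ,u+  -- XOR with lowest byte
        for _ in range(8):                   # LDY #8    -- bit counter
            if crc & 1:                      # shifted-out bit set (carry)
                crc = (crc >> 1) ^ 0xEDB88320  # apply the polynomial
            else:
                crc >>= 1
    return crc ^ 0xFFFFFFFF                  # final inversion
```

The host-side post-processing in the test (`x * 0x10000 + d`) just reassembles the 32-bit value from the two 16-bit registers before the same final XOR.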
def digest(self): """Terminate the message-digest computation and return digest. Return the digest of the strings passed to the update() method so far. This is a 20-byte string which may contain non-ASCII characters, including null bytes. """ H0 = self.H0 H1 = self.H1 H2 = self.H2 H3 = self.H3 H4 = self.H4 input = [] + self.input count = [] + self.count index = (self.count[1] >> 3) & 0x3f if index < 56: padLen = 56 - index else: padLen = 120 - index padding = ['\200'] + ['\000'] * 63 self.update(padding[:padLen]) # Append length (before padding). bits = _bytelist2longBigEndian(self.input[:56]) + count self._transform(bits) # Store state in digest. digest = _long2bytesBigEndian(self.H0, 4) + \ _long2bytesBigEndian(self.H1, 4) + \ _long2bytesBigEndian(self.H2, 4) + \ _long2bytesBigEndian(self.H3, 4) + \ _long2bytesBigEndian(self.H4, 4) self.H0 = H0 self.H1 = H1 self.H2 = H2 self.H3 = H3 self.H4 = H4 self.input = input self.count = count return digest
Terminate the message-digest computation and return digest. Return the digest of the strings passed to the update() method so far. This is a 20-byte string which may contain non-ASCII characters, including null bytes.
Below is the instruction that describes the task: ### Input: Terminate the message-digest computation and return digest. Return the digest of the strings passed to the update() method so far. This is a 20-byte string which may contain non-ASCII characters, including null bytes. ### Response: def digest(self): """Terminate the message-digest computation and return digest. Return the digest of the strings passed to the update() method so far. This is a 20-byte string which may contain non-ASCII characters, including null bytes. """ H0 = self.H0 H1 = self.H1 H2 = self.H2 H3 = self.H3 H4 = self.H4 input = [] + self.input count = [] + self.count index = (self.count[1] >> 3) & 0x3f if index < 56: padLen = 56 - index else: padLen = 120 - index padding = ['\200'] + ['\000'] * 63 self.update(padding[:padLen]) # Append length (before padding). bits = _bytelist2longBigEndian(self.input[:56]) + count self._transform(bits) # Store state in digest. digest = _long2bytesBigEndian(self.H0, 4) + \ _long2bytesBigEndian(self.H1, 4) + \ _long2bytesBigEndian(self.H2, 4) + \ _long2bytesBigEndian(self.H3, 4) + \ _long2bytesBigEndian(self.H4, 4) self.H0 = H0 self.H1 = H1 self.H2 = H2 self.H3 = H3 self.H4 = H4 self.input = input self.count = count return digest
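The five 32-bit registers H0..H4 mark this as SHA-1, whose digest is 20 bytes. The padding arithmetic above (`padLen = 56 - index` or `120 - index`) brings the message to 56 mod 64 bytes so that, with the 8-byte length field appended, the total is a multiple of the 64-byte block size. A small check of both facts using the stdlib:

```python
import hashlib


def pad_len(byte_count: int) -> int:
    """Padding bytes needed, as computed in digest(): pad to 56 mod 64."""
    index = byte_count & 0x3F  # same as (count[1] >> 3) & 0x3f
    return 56 - index if index < 56 else 120 - index

# For every message length, length + padding + 8-byte length field
# is a whole number of 64-byte blocks.
```

`hashlib.sha1` gives the reference digest length and a well-known test vector.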
def check_if_exiftool_is_already_installed(): """Requirements This function will check if Exiftool is installed on your system Return: True if Exiftool is Installed False if not """ result = 1 command = ["exiftool", "-ver"] with open(os.devnull, "w") as fnull: result = subprocess.call( command, stdout = fnull, stderr = fnull ) # Exiftool is not installed if result != 0: print_a_header('Exiftool needs to be installed on your system') print_a_header('Visit http://www.sno.phy.queensu.ca/~phil/exiftool/') return False else: return True
Requirements This function will check if Exiftool is installed on your system Return: True if Exiftool is Installed False if not
Below is the instruction that describes the task: ### Input: Requirements This function will check if Exiftool is installed on your system Return: True if Exiftool is Installed False if not ### Response: def check_if_exiftool_is_already_installed(): """Requirements This function will check if Exiftool is installed on your system Return: True if Exiftool is Installed False if not """ result = 1 command = ["exiftool", "-ver"] with open(os.devnull, "w") as fnull: result = subprocess.call( command, stdout = fnull, stderr = fnull ) # Exiftool is not installed if result != 0: print_a_header('Exiftool needs to be installed on your system') print_a_header('Visit http://www.sno.phy.queensu.ca/~phil/exiftool/') return False else: return True
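The same "is this tool installed?" check can be written two ways with the stdlib: a cheap PATH lookup via `shutil.which`, or spawning the tool as the code above does. Both helpers below are illustrative sketches, not part of the original module:

```python
import shutil
import subprocess


def tool_installed(name: str) -> bool:
    """Check whether a CLI tool is on PATH, without spawning it."""
    return shutil.which(name) is not None


def tool_runs(command: list) -> bool:
    """Closer to the approach above: spawn the tool and check its exit status."""
    try:
        return subprocess.call(
            command, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL
        ) == 0
    except OSError:  # executable not found at all
        return False
```

`subprocess.DEVNULL` also removes the need to open `os.devnull` manually.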
def similarity(self, other): """Calculate similarity based on best matching permutation of items.""" # Select the longer list as the basis for comparison if len(self.items) > len(other.items): first, second = self, other else: first, second = other, self items = list(first.items) # backup items list length = len(items) sim = self.Similarity(0.0 if length else 1.0) # Calculate the similarity for each permutation of items cname = self.__class__.__name__ for num, perm in enumerate(permutations(items, length), start=1): first.items = perm aname = 'items-p{}'.format(num) self.log(first, second, '%', cname=cname, aname=aname) permutation_sim = super(Group, first).similarity(second) self.log(first, second, '%', cname=cname, aname=aname, result=permutation_sim) sim = max(sim, permutation_sim) logging.debug("highest similarity: %s", sim) first.items = items # restore original items list return sim
Calculate similarity based on best matching permutation of items.
Below is the the instruction that describes the task: ### Input: Calculate similarity based on best matching permutation of items. ### Response: def similarity(self, other): """Calculate similarity based on best matching permutation of items.""" # Select the longer list as the basis for comparison if len(self.items) > len(other.items): first, second = self, other else: first, second = other, self items = list(first.items) # backup items list length = len(items) sim = self.Similarity(0.0 if length else 1.0) # Calculate the similarity for each permutation of items cname = self.__class__.__name__ for num, perm in enumerate(permutations(items, length), start=1): first.items = perm aname = 'items-p{}'.format(num) self.log(first, second, '%', cname=cname, aname=aname) permutation_sim = super(Group, first).similarity(second) self.log(first, second, '%', cname=cname, aname=aname, result=permutation_sim) sim = max(sim, permutation_sim) logging.debug("highest similarity: %s", sim) first.items = items # restore original items list return sim
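The strategy above — permute the longer list and keep the best-matching arrangement — can be sketched without the surrounding `Group` class. This hypothetical helper scores orderings with a caller-supplied pairwise function instead of the original's recursive `similarity`; like the original, it is O(n!) in the list length:

```python
from itertools import permutations


def best_permutation_score(a, b, pair_score):
    """Max over orderings of the longer list of the summed pairwise score.

    pair_score(x, y) returns a float; zip truncates to the shorter list,
    so unmatched items simply do not contribute.
    """
    if len(a) < len(b):
        a, b = b, a  # permute the longer list, as in the original
    best = 0.0
    for perm in permutations(a, len(a)):
        best = max(best, sum(pair_score(x, y) for x, y in zip(perm, b)))
    return best
```

With an equality score this finds the size of the best item matching between the two lists.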
def _readline_insert(self, char, echo, insptr, line): """Deal properly with inserted chars in a line.""" if not self._readline_do_echo(echo): return # Write out the remainder of the line self.write(char + ''.join(line[insptr:])) # Cursor Left to the current insert point char_count = len(line) - insptr self.write(self.CODES['CSRLEFT'] * char_count)
Deal properly with inserted chars in a line.
Below is the instruction that describes the task: ### Input: Deal properly with inserted chars in a line. ### Response: def _readline_insert(self, char, echo, insptr, line): """Deal properly with inserted chars in a line.""" if not self._readline_do_echo(echo): return # Write out the remainder of the line self.write(char + ''.join(line[insptr:])) # Cursor Left to the current insert point char_count = len(line) - insptr self.write(self.CODES['CSRLEFT'] * char_count)
def urlparams(url_, hash=None, **query): """Add a fragment and/or query paramaters to a URL. New query params will be appended to exising parameters, except duplicate names, which will be replaced. """ url = urlparse.urlparse(url_) fragment = hash if hash is not None else url.fragment # Use dict(parse_qsl) so we don't get lists of values. q = url.query query_dict = dict(urlparse.parse_qsl(smart_str(q))) if q else {} query_dict.update((k, v) for k, v in query.items()) query_string = _urlencode([(k, v) for k, v in query_dict.items() if v is not None]) new = urlparse.ParseResult(url.scheme, url.netloc, url.path, url.params, query_string, fragment) return new.geturl()
Add a fragment and/or query paramaters to a URL. New query params will be appended to exising parameters, except duplicate names, which will be replaced.
Below is the the instruction that describes the task: ### Input: Add a fragment and/or query paramaters to a URL. New query params will be appended to exising parameters, except duplicate names, which will be replaced. ### Response: def urlparams(url_, hash=None, **query): """Add a fragment and/or query paramaters to a URL. New query params will be appended to exising parameters, except duplicate names, which will be replaced. """ url = urlparse.urlparse(url_) fragment = hash if hash is not None else url.fragment # Use dict(parse_qsl) so we don't get lists of values. q = url.query query_dict = dict(urlparse.parse_qsl(smart_str(q))) if q else {} query_dict.update((k, v) for k, v in query.items()) query_string = _urlencode([(k, v) for k, v in query_dict.items() if v is not None]) new = urlparse.ParseResult(url.scheme, url.netloc, url.path, url.params, query_string, fragment) return new.geturl()
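The helper above targets Python 2 (`urlparse` module, Django's `smart_str`). The same merge-and-replace behavior can be sketched with the modern stdlib `urllib.parse`; `with_params` is a hypothetical name, and `None` drops a key just as in the original:

```python
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse


def with_params(url, fragment=None, **query):
    """Merge query parameters into a URL; duplicate names are replaced."""
    parts = urlparse(url)
    # dict(parse_qsl(...)) collapses repeated names, as the original notes
    params = dict(parse_qsl(parts.query))
    params.update(query)
    qs = urlencode({k: v for k, v in params.items() if v is not None})
    frag = fragment if fragment is not None else parts.fragment
    # ParseResult is a namedtuple, so _replace builds the modified URL
    return urlunparse(parts._replace(query=qs, fragment=frag))
```

Keyword arguments override existing parameters of the same name, while unrelated parameters survive.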
def _send(self, message): """ Private method for send one message. :param SmsMessage message: SmsMessage class instance. :returns: True if message is sended else False :rtype: bool """ params = { 'V': SMSPUBLI_API_VERSION, 'UN': SMSPUBLI_USERNAME, 'PWD': SMSPUBLI_PASSWORD, 'R': SMSPUBLI_ROUTE, 'SA': message.from_phone, 'DA': ','.join(message.to), 'M': message.body.encode('latin-1'), 'DC': SMSPUBLI_DC, 'DR': SMSPUBLI_DR, 'UR': message.from_phone } if SMSPUBLI_ALLOW_LONG_SMS: params['LM'] = '1' response = requests.post(SMSPUBLI_API_URL, params) if response.status_code != 200: if not self.fail_silently: raise else: return False response_msg, response_code = response.content.split(':') if response_msg == 'OK': try: if "," in response_code: codes = map(int, response_code.split(",")) else: codes = [int(response_code)] for code in codes: if code == -5: #: TODO send error signal (no $$) pass elif code == -3: #: TODO send error signal (incorrect num) pass return True except (ValueError, TypeError): if not self.fail_silently: raise return False return False
Private method for send one message. :param SmsMessage message: SmsMessage class instance. :returns: True if message is sended else False :rtype: bool
Below is the the instruction that describes the task: ### Input: Private method for send one message. :param SmsMessage message: SmsMessage class instance. :returns: True if message is sended else False :rtype: bool ### Response: def _send(self, message): """ Private method for send one message. :param SmsMessage message: SmsMessage class instance. :returns: True if message is sended else False :rtype: bool """ params = { 'V': SMSPUBLI_API_VERSION, 'UN': SMSPUBLI_USERNAME, 'PWD': SMSPUBLI_PASSWORD, 'R': SMSPUBLI_ROUTE, 'SA': message.from_phone, 'DA': ','.join(message.to), 'M': message.body.encode('latin-1'), 'DC': SMSPUBLI_DC, 'DR': SMSPUBLI_DR, 'UR': message.from_phone } if SMSPUBLI_ALLOW_LONG_SMS: params['LM'] = '1' response = requests.post(SMSPUBLI_API_URL, params) if response.status_code != 200: if not self.fail_silently: raise else: return False response_msg, response_code = response.content.split(':') if response_msg == 'OK': try: if "," in response_code: codes = map(int, response_code.split(",")) else: codes = [int(response_code)] for code in codes: if code == -5: #: TODO send error signal (no $$) pass elif code == -3: #: TODO send error signal (incorrect num) pass return True except (ValueError, TypeError): if not self.fail_silently: raise return False return False
def run_and_exit(command_class): '''A shortcut for reading from sys.argv and exiting the interpreter''' cmd = command_class(sys.argv[1:]) if cmd.error: print('error: {0}'.format(cmd.error)) sys.exit(1) else: sys.exit(cmd.run())
A shortcut for reading from sys.argv and exiting the interpreter
Below is the instruction that describes the task: ### Input: A shortcut for reading from sys.argv and exiting the interpreter ### Response: def run_and_exit(command_class): '''A shortcut for reading from sys.argv and exiting the interpreter''' cmd = command_class(sys.argv[1:]) if cmd.error: print('error: {0}'.format(cmd.error)) sys.exit(1) else: sys.exit(cmd.run())
def read(self, address, size, force=False): """ Read a stream of potentially symbolic bytes from a potentially symbolic address :param address: Where to read from :param size: How many bytes :param force: Whether to ignore permissions :rtype: list """ size = self._get_size(size) assert not issymbolic(size) if issymbolic(address): assert solver.check(self.constraints) logger.debug(f'Reading {size} bytes from symbolic address {address}') try: solutions = self._try_get_solutions(address, size, 'r', force=force) assert len(solutions) > 0 except TooManySolutions as e: m, M = solver.minmax(self.constraints, address) logger.debug(f'Got TooManySolutions on a symbolic read. Range [{m:x}, {M:x}]. Not crashing!') # The force param shouldn't affect this, as this is checking for unmapped reads, not bad perms crashing_condition = True for start, end, perms, offset, name in self.mappings(): if start <= M + size and end >= m: if 'r' in perms: crashing_condition = Operators.AND(Operators.OR((address + size).ult(start), address.uge(end)), crashing_condition) if solver.can_be_true(self.constraints, crashing_condition): raise InvalidSymbolicMemoryAccess(address, 'r', size, crashing_condition) # INCOMPLETE Result! We could also fork once for every map logger.info('INCOMPLETE Result! Using the sampled solutions we have as result') condition = False for base in e.solutions: condition = Operators.OR(address == base, condition) from .state import ForkState raise ForkState("Forking state on incomplete result", condition) # So here we have all potential solutions to address condition = False for base in solutions: condition = Operators.OR(address == base, condition) result = [] # consider size ==1 to read following code for offset in range(size): # Given ALL solutions for the symbolic address for base in solutions: addr_value = base + offset byte = Operators.ORD(self.map_containing(addr_value)[addr_value]) if addr_value in self._symbols: for condition, value in self._symbols[addr_value]: byte = Operators.ITEBV(8, condition, Operators.ORD(value), byte) if len(result) > offset: result[offset] = Operators.ITEBV(8, address == base, byte, result[offset]) else: result.append(byte) assert len(result) == offset + 1 return list(map(Operators.CHR, result)) else: result = list(map(Operators.ORD, super().read(address, size, force))) for offset in range(size): if address + offset in self._symbols: for condition, value in self._symbols[address + offset]: if condition is True: result[offset] = Operators.ORD(value) else: result[offset] = Operators.ITEBV(8, condition, Operators.ORD(value), result[offset]) return list(map(Operators.CHR, result))
Read a stream of potentially symbolic bytes from a potentially symbolic address :param address: Where to read from :param size: How many bytes :param force: Whether to ignore permissions :rtype: list
Below is the instruction that describes the task: ### Input: Read a stream of potentially symbolic bytes from a potentially symbolic address :param address: Where to read from :param size: How many bytes :param force: Whether to ignore permissions :rtype: list ### Response: def read(self, address, size, force=False): """ Read a stream of potentially symbolic bytes from a potentially symbolic address :param address: Where to read from :param size: How many bytes :param force: Whether to ignore permissions :rtype: list """ size = self._get_size(size) assert not issymbolic(size) if issymbolic(address): assert solver.check(self.constraints) logger.debug(f'Reading {size} bytes from symbolic address {address}') try: solutions = self._try_get_solutions(address, size, 'r', force=force) assert len(solutions) > 0 except TooManySolutions as e: m, M = solver.minmax(self.constraints, address) logger.debug(f'Got TooManySolutions on a symbolic read. Range [{m:x}, {M:x}]. Not crashing!') # The force param shouldn't affect this, as this is checking for unmapped reads, not bad perms crashing_condition = True for start, end, perms, offset, name in self.mappings(): if start <= M + size and end >= m: if 'r' in perms: crashing_condition = Operators.AND(Operators.OR((address + size).ult(start), address.uge(end)), crashing_condition) if solver.can_be_true(self.constraints, crashing_condition): raise InvalidSymbolicMemoryAccess(address, 'r', size, crashing_condition) # INCOMPLETE Result! We could also fork once for every map logger.info('INCOMPLETE Result! Using the sampled solutions we have as result') condition = False for base in e.solutions: condition = Operators.OR(address == base, condition) from .state import ForkState raise ForkState("Forking state on incomplete result", condition) # So here we have all potential solutions to address condition = False for base in solutions: condition = Operators.OR(address == base, condition) result = [] # consider size ==1 to read following code for offset in range(size): # Given ALL solutions for the symbolic address for base in solutions: addr_value = base + offset byte = Operators.ORD(self.map_containing(addr_value)[addr_value]) if addr_value in self._symbols: for condition, value in self._symbols[addr_value]: byte = Operators.ITEBV(8, condition, Operators.ORD(value), byte) if len(result) > offset: result[offset] = Operators.ITEBV(8, address == base, byte, result[offset]) else: result.append(byte) assert len(result) == offset + 1 return list(map(Operators.CHR, result)) else: result = list(map(Operators.ORD, super().read(address, size, force))) for offset in range(size): if address + offset in self._symbols: for condition, value in self._symbols[address + offset]: if condition is True: result[offset] = Operators.ORD(value) else: result[offset] = Operators.ITEBV(8, condition, Operators.ORD(value), result[offset]) return list(map(Operators.CHR, result))
def get_parameters_off_value(self): """ Return the string associated to the parameters_off :rtype: string """ if self.parameters_off_value is None: params = self.CM.get_type_list(self.parameters_off) self.parameters_off_value = '({})'.format(' '.join(params)) return self.parameters_off_value
Return the string associated to the parameters_off :rtype: string
Below is the instruction that describes the task: ### Input: Return the string associated to the parameters_off :rtype: string ### Response: def get_parameters_off_value(self): """ Return the string associated to the parameters_off :rtype: string """ if self.parameters_off_value is None: params = self.CM.get_type_list(self.parameters_off) self.parameters_off_value = '({})'.format(' '.join(params)) return self.parameters_off_value
def reduceByKey(self, func, numPartitions=None): """ Return a new DStream by applying reduceByKey to each RDD. """ if numPartitions is None: numPartitions = self._sc.defaultParallelism return self.combineByKey(lambda x: x, func, func, numPartitions)
Return a new DStream by applying reduceByKey to each RDD.
Below is the instruction that describes the task: ### Input: Return a new DStream by applying reduceByKey to each RDD. ### Response: def reduceByKey(self, func, numPartitions=None): """ Return a new DStream by applying reduceByKey to each RDD. """ if numPartitions is None: numPartitions = self._sc.defaultParallelism return self.combineByKey(lambda x: x, func, func, numPartitions)
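The semantics being delegated to `combineByKey` above — merge all values sharing a key with a binary function — can be modeled on a single machine, ignoring partitioning. A minimal sketch, not Spark's implementation:

```python
def reduce_by_key(func, pairs):
    """Local model of reduceByKey: fold values of each key with `func`.

    pairs is an iterable of (key, value); the first value for a key acts
    as the initial accumulator, mirroring combineByKey's identity creator.
    """
    acc = {}
    for k, v in pairs:
        acc[k] = func(acc[k], v) if k in acc else v
    return acc
```

In Spark the same fold runs per-partition first and the per-partition results are then merged, which is why `func` must be associative.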
def escribir(dic, formato, contraer_fechas=False): "Generate a string from a format definition and a dict of keys/values" linea = " " * sum([fmt[1] for fmt in formato]) comienzo = 1 for fmt in formato: clave, longitud, tipo = fmt[0:3] try: dec = (len(fmt)>3 and isinstance(fmt[3], int)) and fmt[3] or 2 if clave.capitalize() in dic: clave = clave.capitalize() s = dic.get(clave,"") if isinstance(s, unicode): s = s.encode("latin1") if s is None: valor = "" else: valor = str(s) # replace newlines with vertical tabs valor = valor.replace("\n\r", "\v").replace("\n", "\v").replace("\r", "\v") if tipo == N and valor and valor!="NULL": valor = ("%%0%dd" % longitud) % long(valor) elif tipo == I and valor: valor = ("%%0%d.%df" % (longitud+1, dec) % float(valor)).replace(".", "") elif contraer_fechas and clave.lower().startswith("fec") and longitud <= 8 and valor: valor = valor.replace("-", "") else: valor = ("%%-0%ds" % longitud) % valor linea = linea[:comienzo-1] + valor + linea[comienzo-1+longitud:] comienzo += longitud except Exception, e: warnings.warn("Error writing field %s pos %s val '%s': %s" % ( clave, comienzo, valor, str(e))) return linea + "\n"
Genera una cadena dado un formato y un diccionario de claves/valores
Below is the instruction that describes the task: ### Input: Genera una cadena dado un formato y un diccionario de claves/valores ### Response: def escribir(dic, formato, contraer_fechas=False): "Genera una cadena dado un formato y un diccionario de claves/valores" linea = " " * sum([fmt[1] for fmt in formato]) comienzo = 1 for fmt in formato: clave, longitud, tipo = fmt[0:3] try: dec = (len(fmt)>3 and isinstance(fmt[3], int)) and fmt[3] or 2 if clave.capitalize() in dic: clave = clave.capitalize() s = dic.get(clave,"") if isinstance(s, unicode): s = s.encode("latin1") if s is None: valor = "" else: valor = str(s) # reemplazo saltos de linea por tabulación vertical valor = valor.replace("\n\r", "\v").replace("\n", "\v").replace("\r", "\v") if tipo == N and valor and valor!="NULL": valor = ("%%0%dd" % longitud) % long(valor) elif tipo == I and valor: valor = ("%%0%d.%df" % (longitud+1, dec) % float(valor)).replace(".", "") elif contraer_fechas and clave.lower().startswith("fec") and longitud <= 8 and valor: valor = valor.replace("-", "") else: valor = ("%%-0%ds" % longitud) % valor linea = linea[:comienzo-1] + valor + linea[comienzo-1+longitud:] comienzo += longitud except Exception, e: warnings.warn("Error al escribir campo %s pos %s val '%s': %s" % ( clave, comienzo, valor, str(e))) return linea + "\n"
def moments_XXXY(X, Y, remove_mean=False, symmetrize=False, weights=None, modify_data=False, sparse_mode='auto', sparse_tol=0.0, column_selection=None, diag_only=False): """ Computes the first two unnormalized moments of X and Y If symmetrize is False, computes .. math: s_x &=& \sum_t x_t s_y &=& \sum_t y_t C_XX &=& X^\top X C_XY &=& X^\top Y If symmetrize is True, computes .. math: s_x = s_y &=& \frac{1}{2} \sum_t(x_t + y_t) C_XX &=& \frac{1}{2} (X^\top X + Y^\top Y) C_XY &=& \frac{1}{2} (X^\top Y + Y^\top X) while exploiting zero or constant columns in the data matrix. Parameters ---------- X : ndarray (T, M) Data matrix Y : ndarray (T, N) Second data matrix remove_mean : bool True: remove column mean from the data, False: don't remove mean. symmetrize : bool Computes symmetrized means and moments (see above) weights : None or ndarray(T, ) weights assigned to each trajectory point of X. If None, all data points have weight one. If ndarray, each data point is assigned a separate weight. time_lagged : bool, indicates that Y is a time-lagged version of X. modify_data : bool If remove_mean=True, the mean will be removed in the data matrix X, without creating an independent copy. This option is faster but might lead to surprises because your input array is changed. sparse_mode : str one of: * 'dense' : always use dense mode * 'sparse' : always use sparse mode if possible * 'auto' : automatic sparse_tol: float Threshold for considering column to be zero in order to save computing effort when the data is sparse or almost sparse. If max(abs(X[:, i])) < sparse_tol, then row i (and also column i if Y is not given) of the covariance matrix will be set to zero. If Y is given and max(abs(Y[:, i])) < sparse_tol, then column i of the covariance matrix will be set to zero. column_selection: ndarray(k, dtype=int) or None Indices of those columns that are to be computed. If None, all columns are computed. 
diag_only: bool If True, the computation is restricted to the diagonal entries (autocorrelations) only. Returns ------- w : float statistical weight s_x : ndarray (M) x-sum s_y : ndarray (N) y-sum C_XX : ndarray (M, M) unnormalized covariance matrix of X C_XY : ndarray (M, N) unnormalized covariance matrix of XY """ # Check consistency of inputs: if Y is not None: assert Y.shape[0] == X.shape[0], 'X and Y must have equal length.' if weights is not None: assert X.shape[0] == weights.shape[0], 'X and weights_x must have equal length' # diag_only is only implemented for dense mode if diag_only and sparse_mode is not 'dense': if sparse_mode is 'sparse': import warnings warnings.warn('Computing diagonal entries only is not implemented for sparse mode. Switching to dense mode.') sparse_mode = 'dense' if diag_only and X.shape[1] != Y.shape[1]: raise ValueError('Computing diagonal entries only does not make sense for rectangular covariance matrix.') # sparsify X0, mask_X, xconst, Y0, mask_Y, yconst = _sparsify_pair(X, Y, remove_mean=remove_mean, modify_data=modify_data, symmetrize=symmetrize, sparse_mode=sparse_mode, sparse_tol=sparse_tol) is_sparse = mask_X is not None and mask_Y is not None # copy / convert copy = is_sparse or (remove_mean and not modify_data) X0, xconst = _copy_convert(X0, const=xconst, remove_mean=remove_mean, copy=copy) Y0, yconst = _copy_convert(Y0, const=yconst, remove_mean=remove_mean, copy=copy) # sum / center w, sx, sx_centered, sy, sy_centered = _sum(X0, xmask=mask_X, xconst=xconst, Y=Y0, ymask=mask_Y, yconst=yconst, symmetric=symmetrize, remove_mean=remove_mean, weights=weights) if remove_mean: _center(X0, w, sx, mask=mask_X, const=xconst, inplace=True) # fast in-place centering _center(Y0, w, sy, mask=mask_Y, const=yconst, inplace=True) # fast in-place centering if symmetrize: Cxx, Cxy = _M2_symmetric(X0, Y0, mask_X=mask_X, mask_Y=mask_Y, xsum=sx_centered, xconst=xconst, ysum=sy_centered, yconst=yconst, weights=weights, 
column_selection=column_selection, diag_only=diag_only) else: if column_selection is not None: if is_sparse: Xk = X[:, column_selection] mask_Xk = mask_X[column_selection] X0k = Xk[:, mask_Xk] xksum = sx_centered[column_selection] xkconst = Xk[0, ~mask_Xk] X0k, xkconst = _copy_convert(X0k, const=xkconst, remove_mean=remove_mean, copy=True) Yk = Y[:, column_selection] mask_Yk = mask_Y[column_selection] Y0k = Yk[:, mask_Yk] yksum = sy_centered[column_selection] ykconst = Yk[0, ~mask_Yk] Y0k, ykconst = _copy_convert(Y0k, const=ykconst, remove_mean=remove_mean, copy=True) Cxx = _M2(X0, X0k, mask_X=mask_X, mask_Y=mask_Xk, xsum=sx_centered, xconst=xconst, ysum=xksum, yconst=xkconst, weights=weights) Cxy = _M2(X0, Y0k, mask_X=mask_X, mask_Y=mask_Yk, xsum=sx_centered, xconst=xconst, ysum=yksum, yconst=ykconst, weights=weights) else: X0k = X0[:, column_selection] Y0k = Y0[:, column_selection] Cxx = _M2(X0, X0k, mask_X=mask_X, mask_Y=mask_X, xsum=sx_centered, xconst=xconst, ysum=sx_centered[column_selection], yconst=xconst, weights=weights) Cxy = _M2(X0, Y0k, mask_X=mask_X, mask_Y=mask_Y, xsum=sx_centered, xconst=xconst, ysum=sy_centered[column_selection], yconst=yconst, weights=weights) else: Cxx = _M2(X0, X0, mask_X=mask_X, mask_Y=mask_X, xsum=sx_centered, xconst=xconst, ysum=sx_centered, yconst=xconst, weights=weights, diag_only=diag_only) Cxy = _M2(X0, Y0, mask_X=mask_X, mask_Y=mask_Y, xsum=sx_centered, xconst=xconst, ysum=sy_centered, yconst=yconst, weights=weights, diag_only=diag_only) return w, sx, sy, Cxx, Cxy
Computes the first two unnormalized moments of X and Y If symmetrize is False, computes .. math: s_x &=& \sum_t x_t s_y &=& \sum_t y_t C_XX &=& X^\top X C_XY &=& X^\top Y If symmetrize is True, computes .. math: s_x = s_y &=& \frac{1}{2} \sum_t(x_t + y_t) C_XX &=& \frac{1}{2} (X^\top X + Y^\top Y) C_XY &=& \frac{1}{2} (X^\top Y + Y^\top X) while exploiting zero or constant columns in the data matrix. Parameters ---------- X : ndarray (T, M) Data matrix Y : ndarray (T, N) Second data matrix remove_mean : bool True: remove column mean from the data, False: don't remove mean. symmetrize : bool Computes symmetrized means and moments (see above) weights : None or ndarray(T, ) weights assigned to each trajectory point of X. If None, all data points have weight one. If ndarray, each data point is assigned a separate weight. time_lagged : bool, indicates that Y is a time-lagged version of X. modify_data : bool If remove_mean=True, the mean will be removed in the data matrix X, without creating an independent copy. This option is faster but might lead to surprises because your input array is changed. sparse_mode : str one of: * 'dense' : always use dense mode * 'sparse' : always use sparse mode if possible * 'auto' : automatic sparse_tol: float Threshold for considering column to be zero in order to save computing effort when the data is sparse or almost sparse. If max(abs(X[:, i])) < sparse_tol, then row i (and also column i if Y is not given) of the covariance matrix will be set to zero. If Y is given and max(abs(Y[:, i])) < sparse_tol, then column i of the covariance matrix will be set to zero. column_selection: ndarray(k, dtype=int) or None Indices of those columns that are to be computed. If None, all columns are computed. diag_only: bool If True, the computation is restricted to the diagonal entries (autocorrelations) only. 
Returns ------- w : float statistical weight s_x : ndarray (M) x-sum s_y : ndarray (N) y-sum C_XX : ndarray (M, M) unnormalized covariance matrix of X C_XY : ndarray (M, N) unnormalized covariance matrix of XY
Below is the instruction that describes the task: ### Input: Computes the first two unnormalized moments of X and Y If symmetrize is False, computes .. math: s_x &=& \sum_t x_t s_y &=& \sum_t y_t C_XX &=& X^\top X C_XY &=& X^\top Y If symmetrize is True, computes .. math: s_x = s_y &=& \frac{1}{2} \sum_t(x_t + y_t) C_XX &=& \frac{1}{2} (X^\top X + Y^\top Y) C_XY &=& \frac{1}{2} (X^\top Y + Y^\top X) while exploiting zero or constant columns in the data matrix. Parameters ---------- X : ndarray (T, M) Data matrix Y : ndarray (T, N) Second data matrix remove_mean : bool True: remove column mean from the data, False: don't remove mean. symmetrize : bool Computes symmetrized means and moments (see above) weights : None or ndarray(T, ) weights assigned to each trajectory point of X. If None, all data points have weight one. If ndarray, each data point is assigned a separate weight. time_lagged : bool, indicates that Y is a time-lagged version of X. modify_data : bool If remove_mean=True, the mean will be removed in the data matrix X, without creating an independent copy. This option is faster but might lead to surprises because your input array is changed. sparse_mode : str one of: * 'dense' : always use dense mode * 'sparse' : always use sparse mode if possible * 'auto' : automatic sparse_tol: float Threshold for considering column to be zero in order to save computing effort when the data is sparse or almost sparse. If max(abs(X[:, i])) < sparse_tol, then row i (and also column i if Y is not given) of the covariance matrix will be set to zero. If Y is given and max(abs(Y[:, i])) < sparse_tol, then column i of the covariance matrix will be set to zero. column_selection: ndarray(k, dtype=int) or None Indices of those columns that are to be computed. If None, all columns are computed. diag_only: bool If True, the computation is restricted to the diagonal entries (autocorrelations) only. 
Returns ------- w : float statistical weight s_x : ndarray (M) x-sum s_y : ndarray (N) y-sum C_XX : ndarray (M, M) unnormalized covariance matrix of X C_XY : ndarray (M, N) unnormalized covariance matrix of XY ### Response: def moments_XXXY(X, Y, remove_mean=False, symmetrize=False, weights=None, modify_data=False, sparse_mode='auto', sparse_tol=0.0, column_selection=None, diag_only=False): """ Computes the first two unnormalized moments of X and Y If symmetrize is False, computes .. math: s_x &=& \sum_t x_t s_y &=& \sum_t y_t C_XX &=& X^\top X C_XY &=& X^\top Y If symmetrize is True, computes .. math: s_x = s_y &=& \frac{1}{2} \sum_t(x_t + y_t) C_XX &=& \frac{1}{2} (X^\top X + Y^\top Y) C_XY &=& \frac{1}{2} (X^\top Y + Y^\top X) while exploiting zero or constant columns in the data matrix. Parameters ---------- X : ndarray (T, M) Data matrix Y : ndarray (T, N) Second data matrix remove_mean : bool True: remove column mean from the data, False: don't remove mean. symmetrize : bool Computes symmetrized means and moments (see above) weights : None or ndarray(T, ) weights assigned to each trajectory point of X. If None, all data points have weight one. If ndarray, each data point is assigned a separate weight. time_lagged : bool, indicates that Y is a time-lagged version of X. modify_data : bool If remove_mean=True, the mean will be removed in the data matrix X, without creating an independent copy. This option is faster but might lead to surprises because your input array is changed. sparse_mode : str one of: * 'dense' : always use dense mode * 'sparse' : always use sparse mode if possible * 'auto' : automatic sparse_tol: float Threshold for considering column to be zero in order to save computing effort when the data is sparse or almost sparse. If max(abs(X[:, i])) < sparse_tol, then row i (and also column i if Y is not given) of the covariance matrix will be set to zero. 
If Y is given and max(abs(Y[:, i])) < sparse_tol, then column i of the covariance matrix will be set to zero. column_selection: ndarray(k, dtype=int) or None Indices of those columns that are to be computed. If None, all columns are computed. diag_only: bool If True, the computation is restricted to the diagonal entries (autocorrelations) only. Returns ------- w : float statistical weight s_x : ndarray (M) x-sum s_y : ndarray (N) y-sum C_XX : ndarray (M, M) unnormalized covariance matrix of X C_XY : ndarray (M, N) unnormalized covariance matrix of XY """ # Check consistency of inputs: if Y is not None: assert Y.shape[0] == X.shape[0], 'X and Y must have equal length.' if weights is not None: assert X.shape[0] == weights.shape[0], 'X and weights_x must have equal length' # diag_only is only implemented for dense mode if diag_only and sparse_mode is not 'dense': if sparse_mode is 'sparse': import warnings warnings.warn('Computing diagonal entries only is not implemented for sparse mode. Switching to dense mode.') sparse_mode = 'dense' if diag_only and X.shape[1] != Y.shape[1]: raise ValueError('Computing diagonal entries only does not make sense for rectangular covariance matrix.') # sparsify X0, mask_X, xconst, Y0, mask_Y, yconst = _sparsify_pair(X, Y, remove_mean=remove_mean, modify_data=modify_data, symmetrize=symmetrize, sparse_mode=sparse_mode, sparse_tol=sparse_tol) is_sparse = mask_X is not None and mask_Y is not None # copy / convert copy = is_sparse or (remove_mean and not modify_data) X0, xconst = _copy_convert(X0, const=xconst, remove_mean=remove_mean, copy=copy) Y0, yconst = _copy_convert(Y0, const=yconst, remove_mean=remove_mean, copy=copy) # sum / center w, sx, sx_centered, sy, sy_centered = _sum(X0, xmask=mask_X, xconst=xconst, Y=Y0, ymask=mask_Y, yconst=yconst, symmetric=symmetrize, remove_mean=remove_mean, weights=weights) if remove_mean: _center(X0, w, sx, mask=mask_X, const=xconst, inplace=True) # fast in-place centering _center(Y0, w, sy, 
mask=mask_Y, const=yconst, inplace=True) # fast in-place centering if symmetrize: Cxx, Cxy = _M2_symmetric(X0, Y0, mask_X=mask_X, mask_Y=mask_Y, xsum=sx_centered, xconst=xconst, ysum=sy_centered, yconst=yconst, weights=weights, column_selection=column_selection, diag_only=diag_only) else: if column_selection is not None: if is_sparse: Xk = X[:, column_selection] mask_Xk = mask_X[column_selection] X0k = Xk[:, mask_Xk] xksum = sx_centered[column_selection] xkconst = Xk[0, ~mask_Xk] X0k, xkconst = _copy_convert(X0k, const=xkconst, remove_mean=remove_mean, copy=True) Yk = Y[:, column_selection] mask_Yk = mask_Y[column_selection] Y0k = Yk[:, mask_Yk] yksum = sy_centered[column_selection] ykconst = Yk[0, ~mask_Yk] Y0k, ykconst = _copy_convert(Y0k, const=ykconst, remove_mean=remove_mean, copy=True) Cxx = _M2(X0, X0k, mask_X=mask_X, mask_Y=mask_Xk, xsum=sx_centered, xconst=xconst, ysum=xksum, yconst=xkconst, weights=weights) Cxy = _M2(X0, Y0k, mask_X=mask_X, mask_Y=mask_Yk, xsum=sx_centered, xconst=xconst, ysum=yksum, yconst=ykconst, weights=weights) else: X0k = X0[:, column_selection] Y0k = Y0[:, column_selection] Cxx = _M2(X0, X0k, mask_X=mask_X, mask_Y=mask_X, xsum=sx_centered, xconst=xconst, ysum=sx_centered[column_selection], yconst=xconst, weights=weights) Cxy = _M2(X0, Y0k, mask_X=mask_X, mask_Y=mask_Y, xsum=sx_centered, xconst=xconst, ysum=sy_centered[column_selection], yconst=yconst, weights=weights) else: Cxx = _M2(X0, X0, mask_X=mask_X, mask_Y=mask_X, xsum=sx_centered, xconst=xconst, ysum=sx_centered, yconst=xconst, weights=weights, diag_only=diag_only) Cxy = _M2(X0, Y0, mask_X=mask_X, mask_Y=mask_Y, xsum=sx_centered, xconst=xconst, ysum=sy_centered, yconst=yconst, weights=weights, diag_only=diag_only) return w, sx, sy, Cxx, Cxy
def add_ignore(self, depend): """Adds dependencies to ignore.""" try: self._add_child(self.ignore, self.ignore_set, depend) except TypeError as e: e = e.args[0] if SCons.Util.is_List(e): s = list(map(str, e)) else: s = str(e) raise SCons.Errors.UserError("attempted to ignore a non-Node dependency of %s:\n\t%s is a %s, not a Node" % (str(self), s, type(e)))
Adds dependencies to ignore.
Below is the instruction that describes the task: ### Input: Adds dependencies to ignore. ### Response: def add_ignore(self, depend): """Adds dependencies to ignore.""" try: self._add_child(self.ignore, self.ignore_set, depend) except TypeError as e: e = e.args[0] if SCons.Util.is_List(e): s = list(map(str, e)) else: s = str(e) raise SCons.Errors.UserError("attempted to ignore a non-Node dependency of %s:\n\t%s is a %s, not a Node" % (str(self), s, type(e)))
def describe_api_key(apiKey, region=None, key=None, keyid=None, profile=None): ''' Gets info about the given api key CLI Example: .. code-block:: bash salt myminion boto_apigateway.describe_api_key apigw_api_key ''' try: conn = _get_conn(region=region, key=key, keyid=keyid, profile=profile) response = conn.get_api_key(apiKey=apiKey) return {'apiKey': _convert_datetime_str(response)} except ClientError as e: return {'error': __utils__['boto3.get_error'](e)}
Gets info about the given api key CLI Example: .. code-block:: bash salt myminion boto_apigateway.describe_api_key apigw_api_key
Below is the instruction that describes the task: ### Input: Gets info about the given api key CLI Example: .. code-block:: bash salt myminion boto_apigateway.describe_api_key apigw_api_key ### Response: def describe_api_key(apiKey, region=None, key=None, keyid=None, profile=None): ''' Gets info about the given api key CLI Example: .. code-block:: bash salt myminion boto_apigateway.describe_api_key apigw_api_key ''' try: conn = _get_conn(region=region, key=key, keyid=keyid, profile=profile) response = conn.get_api_key(apiKey=apiKey) return {'apiKey': _convert_datetime_str(response)} except ClientError as e: return {'error': __utils__['boto3.get_error'](e)}
def addLabel(self, start, end, labelName): """ Add the label labelName to each record with record ROWID in range from ``start`` to ``end``, noninclusive of end. This will recalculate all points from end to the last record stored in the internal cache of this classifier. :param start: (int) start index :param end: (int) end index (noninclusive) :param labelName: (string) label name """ if len(self._recordsCache) == 0: raise HTMPredictionModelInvalidRangeError("Invalid supplied range for 'addLabel'. " "Model has no saved records.") try: start = int(start) except Exception: start = 0 try: end = int(end) except Exception: end = int(self._recordsCache[-1].ROWID) startID = self._recordsCache[0].ROWID clippedStart = max(0, start - startID) clippedEnd = max(0, min( len( self._recordsCache) , end - startID)) if clippedEnd <= clippedStart: raise HTMPredictionModelInvalidRangeError("Invalid supplied range for 'addLabel'.", debugInfo={ 'requestRange': { 'startRecordID': start, 'endRecordID': end }, 'clippedRequestRange': { 'startRecordID': clippedStart, 'endRecordID': clippedEnd }, 'validRange': { 'startRecordID': startID, 'endRecordID': self._recordsCache[len(self._recordsCache)-1].ROWID }, 'numRecordsStored': len(self._recordsCache) }) # Add label to range [clippedStart, clippedEnd) for state in self._recordsCache[clippedStart:clippedEnd]: if labelName not in state.anomalyLabel: state.anomalyLabel.append(labelName) state.setByUser = True self._addRecordToKNN(state) assert len(self.saved_categories) > 0 # Recompute [end, ...) for state in self._recordsCache[clippedEnd:]: self._classifyState(state)
Add the label labelName to each record with record ROWID in range from ``start`` to ``end``, noninclusive of end. This will recalculate all points from end to the last record stored in the internal cache of this classifier. :param start: (int) start index :param end: (int) end index (noninclusive) :param labelName: (string) label name
Below is the instruction that describes the task: ### Input: Add the label labelName to each record with record ROWID in range from ``start`` to ``end``, noninclusive of end. This will recalculate all points from end to the last record stored in the internal cache of this classifier. :param start: (int) start index :param end: (int) end index (noninclusive) :param labelName: (string) label name ### Response: def addLabel(self, start, end, labelName): """ Add the label labelName to each record with record ROWID in range from ``start`` to ``end``, noninclusive of end. This will recalculate all points from end to the last record stored in the internal cache of this classifier. :param start: (int) start index :param end: (int) end index (noninclusive) :param labelName: (string) label name """ if len(self._recordsCache) == 0: raise HTMPredictionModelInvalidRangeError("Invalid supplied range for 'addLabel'. " "Model has no saved records.") try: start = int(start) except Exception: start = 0 try: end = int(end) except Exception: end = int(self._recordsCache[-1].ROWID) startID = self._recordsCache[0].ROWID clippedStart = max(0, start - startID) clippedEnd = max(0, min( len( self._recordsCache) , end - startID)) if clippedEnd <= clippedStart: raise HTMPredictionModelInvalidRangeError("Invalid supplied range for 'addLabel'.", debugInfo={ 'requestRange': { 'startRecordID': start, 'endRecordID': end }, 'clippedRequestRange': { 'startRecordID': clippedStart, 'endRecordID': clippedEnd }, 'validRange': { 'startRecordID': startID, 'endRecordID': self._recordsCache[len(self._recordsCache)-1].ROWID }, 'numRecordsStored': len(self._recordsCache) }) # Add label to range [clippedStart, clippedEnd) for state in self._recordsCache[clippedStart:clippedEnd]: if labelName not in state.anomalyLabel: state.anomalyLabel.append(labelName) state.setByUser = True self._addRecordToKNN(state) assert len(self.saved_categories) > 0 # Recompute [end, ...) 
for state in self._recordsCache[clippedEnd:]: self._classifyState(state)
def announcement_approved_email(request, obj, req): """Email the requested teachers and submitter whenever an administrator approves an announcement request. obj: the Announcement object req: the AnnouncementRequest object """ if not settings.PRODUCTION: logger.debug("Not in production. Ignoring email for approved announcement.") return subject = "Announcement Approved: {}".format(obj.title) """ Email to teachers who approved. """ teachers = req.teachers_approved.all() teacher_emails = [] for u in teachers: em = u.tj_email if em: teacher_emails.append(em) base_url = request.build_absolute_uri(reverse('index')) url = request.build_absolute_uri(reverse('view_announcement', args=[obj.id])) if len(teacher_emails) > 0: data = {"announcement": obj, "request": req, "info_link": url, "base_url": base_url, "role": "approved"} email_send("announcements/emails/announcement_approved.txt", "announcements/emails/announcement_approved.html", data, subject, teacher_emails) messages.success(request, "Sent teacher approved email to {} users".format(len(teacher_emails))) """ Email to submitter. """ submitter = req.user submitter_email = submitter.tj_email if submitter_email: submitter_emails = [submitter_email] data = {"announcement": obj, "request": req, "info_link": url, "base_url": base_url, "role": "submitted"} email_send("announcements/emails/announcement_approved.txt", "announcements/emails/announcement_approved.html", data, subject, submitter_emails) messages.success(request, "Sent teacher approved email to {} users".format(len(submitter_emails)))
Email the requested teachers and submitter whenever an administrator approves an announcement request. obj: the Announcement object req: the AnnouncementRequest object
Below is the instruction that describes the task: ### Input: Email the requested teachers and submitter whenever an administrator approves an announcement request. obj: the Announcement object req: the AnnouncementRequest object ### Response: def announcement_approved_email(request, obj, req): """Email the requested teachers and submitter whenever an administrator approves an announcement request. obj: the Announcement object req: the AnnouncementRequest object """ if not settings.PRODUCTION: logger.debug("Not in production. Ignoring email for approved announcement.") return subject = "Announcement Approved: {}".format(obj.title) """ Email to teachers who approved. """ teachers = req.teachers_approved.all() teacher_emails = [] for u in teachers: em = u.tj_email if em: teacher_emails.append(em) base_url = request.build_absolute_uri(reverse('index')) url = request.build_absolute_uri(reverse('view_announcement', args=[obj.id])) if len(teacher_emails) > 0: data = {"announcement": obj, "request": req, "info_link": url, "base_url": base_url, "role": "approved"} email_send("announcements/emails/announcement_approved.txt", "announcements/emails/announcement_approved.html", data, subject, teacher_emails) messages.success(request, "Sent teacher approved email to {} users".format(len(teacher_emails))) """ Email to submitter. """ submitter = req.user submitter_email = submitter.tj_email if submitter_email: submitter_emails = [submitter_email] data = {"announcement": obj, "request": req, "info_link": url, "base_url": base_url, "role": "submitted"} email_send("announcements/emails/announcement_approved.txt", "announcements/emails/announcement_approved.html", data, subject, submitter_emails) messages.success(request, "Sent teacher approved email to {} users".format(len(submitter_emails)))
def from_array(array): """ Deserialize a new InlineQueryResultCachedAudio from a given dictionary. :return: new InlineQueryResultCachedAudio instance. :rtype: InlineQueryResultCachedAudio """ if array is None or not array: return None # end if assert_type_or_raise(array, dict, parameter_name="array") from pytgbot.api_types.sendable.reply_markup import InlineKeyboardMarkup data = {} # 'type' is given by class type data['id'] = u(array.get('id')) data['audio_file_id'] = u(array.get('audio_file_id')) data['caption'] = u(array.get('caption')) if array.get('caption') is not None else None data['parse_mode'] = u(array.get('parse_mode')) if array.get('parse_mode') is not None else None data['reply_markup'] = InlineKeyboardMarkup.from_array(array.get('reply_markup')) if array.get('reply_markup') is not None else None data['input_message_content'] = InputMessageContent.from_array(array.get('input_message_content')) if array.get('input_message_content') is not None else None instance = InlineQueryResultCachedAudio(**data) instance._raw = array return instance
Deserialize a new InlineQueryResultCachedAudio from a given dictionary. :return: new InlineQueryResultCachedAudio instance. :rtype: InlineQueryResultCachedAudio
Below is the instruction that describes the task: ### Input: Deserialize a new InlineQueryResultCachedAudio from a given dictionary. :return: new InlineQueryResultCachedAudio instance. :rtype: InlineQueryResultCachedAudio ### Response: def from_array(array): """ Deserialize a new InlineQueryResultCachedAudio from a given dictionary. :return: new InlineQueryResultCachedAudio instance. :rtype: InlineQueryResultCachedAudio """ if array is None or not array: return None # end if assert_type_or_raise(array, dict, parameter_name="array") from pytgbot.api_types.sendable.reply_markup import InlineKeyboardMarkup data = {} # 'type' is given by class type data['id'] = u(array.get('id')) data['audio_file_id'] = u(array.get('audio_file_id')) data['caption'] = u(array.get('caption')) if array.get('caption') is not None else None data['parse_mode'] = u(array.get('parse_mode')) if array.get('parse_mode') is not None else None data['reply_markup'] = InlineKeyboardMarkup.from_array(array.get('reply_markup')) if array.get('reply_markup') is not None else None data['input_message_content'] = InputMessageContent.from_array(array.get('input_message_content')) if array.get('input_message_content') is not None else None instance = InlineQueryResultCachedAudio(**data) instance._raw = array return instance
def end_output (self, **kwargs): """Finish graph output, and print end of checking info as xml comment.""" self.xml_endtag(u"graph") self.xml_endtag(u"GraphXML") self.xml_end_output() self.close_fileoutput()
Finish graph output, and print end of checking info as xml comment.
Below is the instruction that describes the task: ### Input: Finish graph output, and print end of checking info as xml comment. ### Response: def end_output (self, **kwargs): """Finish graph output, and print end of checking info as xml comment.""" self.xml_endtag(u"graph") self.xml_endtag(u"GraphXML") self.xml_end_output() self.close_fileoutput()
def get_size(conn, vm_): ''' Return the VM's size object ''' sizes = conn.list_sizes() vm_size = config.get_cloud_config_value('size', vm_, __opts__) if not vm_size: return sizes[0] for size in sizes: if vm_size and str(vm_size) in (str(size.id), str(size.name)): # pylint: disable=blacklisted-function return size raise SaltCloudNotFound( 'The specified size, \'{0}\', could not be found.'.format(vm_size) )
Return the VM's size object
Below is the instruction that describes the task: ### Input: Return the VM's size object ### Response: def get_size(conn, vm_): ''' Return the VM's size object ''' sizes = conn.list_sizes() vm_size = config.get_cloud_config_value('size', vm_, __opts__) if not vm_size: return sizes[0] for size in sizes: if vm_size and str(vm_size) in (str(size.id), str(size.name)): # pylint: disable=blacklisted-function return size raise SaltCloudNotFound( 'The specified size, \'{0}\', could not be found.'.format(vm_size) )