Dataset columns: code (string, lengths 75 to 104k), docstring (string, lengths 1 to 46.9k), text (string, lengths 164 to 112k).
def present(name):
    '''
    Ensure the RabbitMQ VHost exists.

    name
        VHost name

    user
        Initial user permission to set on the VHost, if present

        .. deprecated:: 2015.8.0
    owner
        Initial owner permission to set on the VHost, if present

        .. deprecated:: 2015.8.0
    conf
        Initial conf string to apply to the VHost and user. Defaults to .*

        .. deprecated:: 2015.8.0
    write
        Initial write permissions to apply to the VHost and user.
        Defaults to .*

        .. deprecated:: 2015.8.0
    read
        Initial read permissions to apply to the VHost and user.
        Defaults to .*

        .. deprecated:: 2015.8.0
    runas
        Name of the user to run the command

        .. deprecated:: 2015.8.0
    '''
    ret = {'name': name, 'result': True, 'comment': '', 'changes': {}}

    vhost_exists = __salt__['rabbitmq.vhost_exists'](name)
    if vhost_exists:
        ret['comment'] = 'Virtual Host \'{0}\' already exists.'.format(name)
        return ret

    if not __opts__['test']:
        result = __salt__['rabbitmq.add_vhost'](name)
        if 'Error' in result:
            ret['result'] = False
            ret['comment'] = result['Error']
            return ret
        elif 'Added' in result:
            ret['comment'] = result['Added']

    # If we've reached this far before returning, we have changes.
    ret['changes'] = {'old': '', 'new': name}

    if __opts__['test']:
        ret['result'] = None
        ret['comment'] = 'Virtual Host \'{0}\' will be created.'.format(name)

    return ret
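The control flow above follows Salt's standard state-return contract. A minimal self-contained sketch, with `__salt__` and `__opts__` replaced by plain parameters (`present_sketch` and its parameters are hypothetical names, not Salt API), shows the three outcomes:

```python
def present_sketch(name, vhost_exists, test_mode):
    # Standard Salt state return: name, result, comment, changes.
    ret = {'name': name, 'result': True, 'comment': '', 'changes': {}}
    if vhost_exists:
        ret['comment'] = "Virtual Host '{0}' already exists.".format(name)
        return ret
    ret['changes'] = {'old': '', 'new': name}
    if test_mode:
        ret['result'] = None  # None signals "would change" in test mode
        ret['comment'] = "Virtual Host '{0}' will be created.".format(name)
    return ret

print(present_sketch('myvhost', False, True)['result'])  # None
```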
def deviation(reference_intervals, estimated_intervals, trim=False):
    """Compute the median deviations between reference and estimated
    boundary times.

    Examples
    --------
    >>> ref_intervals, _ = mir_eval.io.load_labeled_intervals('ref.lab')
    >>> est_intervals, _ = mir_eval.io.load_labeled_intervals('est.lab')
    >>> r_to_e, e_to_r = mir_eval.boundary.deviation(ref_intervals,
    ...                                              est_intervals)

    Parameters
    ----------
    reference_intervals : np.ndarray, shape=(n, 2)
        reference segment intervals, in the format returned by
        :func:`mir_eval.io.load_intervals` or
        :func:`mir_eval.io.load_labeled_intervals`.
    estimated_intervals : np.ndarray, shape=(m, 2)
        estimated segment intervals, in the format returned by
        :func:`mir_eval.io.load_intervals` or
        :func:`mir_eval.io.load_labeled_intervals`.
    trim : boolean
        if ``True``, the first and last intervals are ignored.
        Typically, these denote start (0.0) and end-of-track markers.
        (Default value = False)

    Returns
    -------
    reference_to_estimated : float
        median time from each reference boundary to the closest
        estimated boundary
    estimated_to_reference : float
        median time from each estimated boundary to the closest
        reference boundary
    """
    validate_boundary(reference_intervals, estimated_intervals, trim)

    # Convert intervals to boundaries
    reference_boundaries = util.intervals_to_boundaries(reference_intervals)
    estimated_boundaries = util.intervals_to_boundaries(estimated_intervals)

    # Suppress the first and last intervals
    if trim:
        reference_boundaries = reference_boundaries[1:-1]
        estimated_boundaries = estimated_boundaries[1:-1]

    # If we have no boundaries, we get no score.
    if len(reference_boundaries) == 0 or len(estimated_boundaries) == 0:
        return np.nan, np.nan

    dist = np.abs(np.subtract.outer(reference_boundaries,
                                    estimated_boundaries))

    estimated_to_reference = np.median(dist.min(axis=0))
    reference_to_estimated = np.median(dist.min(axis=1))

    return reference_to_estimated, estimated_to_reference
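The heart of the metric is the pairwise distance matrix between the two boundary arrays. A self-contained sketch with boundaries supplied directly, bypassing mir_eval's interval utilities:

```python
import numpy as np

# Reference boundaries at 0, 1, 2 seconds; estimates at 0.1 and 1.9.
reference_boundaries = np.array([0.0, 1.0, 2.0])
estimated_boundaries = np.array([0.1, 1.9])

# Pairwise |ref_i - est_j| distances, shape (3, 2).
dist = np.abs(np.subtract.outer(reference_boundaries, estimated_boundaries))

reference_to_estimated = np.median(dist.min(axis=1))  # per reference boundary
estimated_to_reference = np.median(dist.min(axis=0))  # per estimated boundary
print(reference_to_estimated, estimated_to_reference)  # 0.1 0.1
```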
def _fix_offset(self, state, size, arch=None):
    """
    This is a hack to deal with small values being stored at offsets into
    large registers unpredictably
    """
    if state is not None:
        arch = state.arch
    if arch is None:
        raise ValueError('Either "state" or "arch" must be specified.')
    offset = arch.registers[self.reg_name][0]
    if size in self.alt_offsets:
        return offset + self.alt_offsets[size]
    elif size < self.size and arch.register_endness == 'Iend_BE':
        return offset + (self.size - size)
    return offset
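The endianness adjustment can be illustrated standalone: on a big-endian target, an n-byte value occupies the *last* n bytes of a larger register slot, so the effective offset shifts right. Here `fix_offset` is a hypothetical free-function version of the method above, not angr's actual API:

```python
def fix_offset(base_offset, reg_size, value_size, big_endian, alt_offsets=None):
    # Explicit per-size overrides take precedence, mirroring alt_offsets above.
    alt_offsets = alt_offsets or {}
    if value_size in alt_offsets:
        return base_offset + alt_offsets[value_size]
    # Big-endian: small values sit at the high end of the register slot.
    if value_size < reg_size and big_endian:
        return base_offset + (reg_size - value_size)
    return base_offset

# A 2-byte store into an 8-byte register slot starting at offset 16:
print(fix_offset(16, 8, 2, big_endian=True))   # 22
print(fix_offset(16, 8, 2, big_endian=False))  # 16
```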
def get_capture_handler_config_by_name(self, name):
    ''' Return data for handlers of a given name.

    Args:
        name: Name of the capture handler(s) to return config data for.

    Returns:
        Dictionary dump from the named capture handler as given by the
        :func:`SocketStreamCapturer.dump_handler_config_data` method.
    '''
    handler_confs = []
    for address, stream_capturer in self._stream_capturers.iteritems():
        handler_data = stream_capturer[0].dump_handler_config_data()
        for h in handler_data:
            if h['handler']['name'] == name:
                handler_confs.append(h)
    return handler_confs
def swap(tokens, maxdist=2):
    """Perform a swap operation on a sequence of tokens, exhaustively
    swapping all tokens up to the maximum specified distance. This is a
    subset of all permutations."""
    assert maxdist >= 2
    tokens = list(tokens)
    if maxdist > len(tokens):
        maxdist = len(tokens)
    l = len(tokens)
    for i in range(0, l - 1):
        for permutation in permutations(tokens[i:i + maxdist]):
            if permutation != tuple(tokens[i:i + maxdist]):
                newtokens = tokens[:i]
                newtokens += permutation
                newtokens += tokens[i + maxdist:]
                yield newtokens
        if maxdist == len(tokens):
            break
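A quick illustration of the generator's output, with the function restated (in slightly condensed form) so the snippet runs on its own:

```python
from itertools import permutations

def swap(tokens, maxdist=2):
    # Exhaustively permute each window of maxdist tokens, skipping the
    # identity permutation; if the window covers everything, stop after
    # the first window.
    assert maxdist >= 2
    tokens = list(tokens)
    if maxdist > len(tokens):
        maxdist = len(tokens)
    for i in range(0, len(tokens) - 1):
        for permutation in permutations(tokens[i:i + maxdist]):
            if permutation != tuple(tokens[i:i + maxdist]):
                yield tokens[:i] + list(permutation) + tokens[i + maxdist:]
        if maxdist == len(tokens):
            break

print(list(swap(['a', 'b', 'c'], maxdist=2)))
# [['b', 'a', 'c'], ['a', 'c', 'b']]
```

With `maxdist` equal to the sequence length, the first window already covers all permutations, so the generator stops after yielding the five non-identity orderings of three tokens.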
def token(cls: Type[CLTVType], timestamp: int) -> CLTVType:
    """
    Return CLTV instance from timestamp

    :param timestamp: Timestamp
    :return:
    """
    cltv = cls()
    cltv.timestamp = str(timestamp)
    return cltv
def _reorient_3d(image):
    """
    Reorganize the data for a 3d nifti
    """
    # Create empty array where x, y, z correspond to LR (sagittal),
    # PA (coronal), IS (axial) directions and the size of the array in each
    # direction is the same with the corresponding direction of the input
    # image.
    new_image = numpy.zeros(
        [image.dimensions[image.sagittal_orientation.normal_component],
         image.dimensions[image.coronal_orientation.normal_component],
         image.dimensions[image.axial_orientation.normal_component]],
        dtype=image.nifti_data.dtype)

    # Fill the new image with the values of the input image, but matching
    # the orientation with x, y, z.
    if image.coronal_orientation.y_inverted:
        for i in range(new_image.shape[2]):
            new_image[:, :, i] = numpy.fliplr(
                numpy.squeeze(
                    image.get_slice(SliceType.AXIAL,
                                    new_image.shape[2] - 1 - i).original_data))
    else:
        for i in range(new_image.shape[2]):
            new_image[:, :, i] = numpy.fliplr(
                numpy.squeeze(
                    image.get_slice(SliceType.AXIAL, i).original_data))
    return new_image
def BitVecVal(
    value: int, size: int, annotations: Annotations = None
) -> z3.BitVecRef:
    """Creates a new bit vector with a concrete value."""
    return z3.BitVecVal(value, size)
def lock(key, text):
    """Locks the given text using the given key and returns the result"""
    return hmac.new(key.encode('utf-8'), text.encode('utf-8')).hexdigest()
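Note that `hmac.new` without an explicit `digestmod` raises a `TypeError` on Python 3.8 and later; the implicit MD5 default this code relied on was removed. A runnable variant that pins the historical MD5 digest explicitly:

```python
import hashlib
import hmac

def lock(key, text):
    # digestmod is required on Python 3.8+; MD5 reproduces the old
    # implicit default this helper was written against.
    return hmac.new(key.encode('utf-8'), text.encode('utf-8'),
                    digestmod=hashlib.md5).hexdigest()

digest = lock('secret-key', 'hello')
print(len(digest))  # 32 hex characters for an MD5-based HMAC
```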
def load_label(self, idx):
    """
    Load label image as 1 x height x width integer array of label indices.
    Shift labels so that classes are 0-39 and void is 255 (to ignore it).
    The leading singleton dimension is required by the loss.
    """
    label = scipy.io.loadmat(
        '{}/segmentation/img_{}.mat'.format(self.nyud_dir, idx)
    )['segmentation'].astype(np.uint8)
    label -= 1  # shift classes to 0-39; void (0) wraps around to 255 in uint8
    label = label[np.newaxis, ...]
    return label
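The `label -= 1` line relies on unsigned-integer wraparound: the void class (0 in the .mat file) underflows to 255 rather than becoming -1, which is exactly the ignore-index the docstring promises. A standalone demonstration:

```python
import numpy as np

# Labels as stored in the .mat file: 0 = void, 1..40 = classes.
label = np.array([[0, 1], [40, 2]], dtype=np.uint8)

label -= 1  # uint8 wraparound: 0 (void) -> 255, 1..40 -> 0..39
label = label[np.newaxis, ...]  # leading singleton dim expected by the loss

print(label.shape)     # (1, 2, 2)
print(label[0, 0, 0])  # 255 (void)
print(label[0, 1, 0])  # 39
```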
def sha512_digest(instr):
    '''
    Generate a sha512 hash of a given string
    '''
    return salt.utils.stringutils.to_unicode(
        hashlib.sha512(salt.utils.stringutils.to_bytes(instr)).hexdigest()
    )
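Stripped of Salt's Python-2/3 string helpers (which just normalize to and from bytes), the same digest can be produced with the standard library alone:

```python
import hashlib

def sha512_digest(instr):
    # Encode to bytes, hash, return the hex digest as a str.
    return hashlib.sha512(instr.encode('utf-8')).hexdigest()

digest = sha512_digest('abc')
print(digest[:16])  # ddaf35a193617aba
```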
def pvt(bars):
    """ Price Volume Trend """
    trend = ((bars['close'] - bars['close'].shift(1)) /
             bars['close'].shift(1)) * bars['volume']
    return trend.cumsum()
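A small worked example with pandas, the function restated so the snippet is self-contained. The first element is NaN because there is no previous close to compare against:

```python
import pandas as pd

def pvt(bars):
    # Fractional close-to-close change, weighted by volume, accumulated.
    trend = ((bars['close'] - bars['close'].shift(1)) /
             bars['close'].shift(1)) * bars['volume']
    return trend.cumsum()

bars = pd.DataFrame({'close': [10.0, 11.0, 10.0],
                     'volume': [100, 200, 300]})
series = pvt(bars)
print(series.round(4).tolist())  # [nan, 20.0, -7.2727]
```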
def store(self, *args, planning=None, forward=None, loading=False,
          contra=True):
    """Put a value in various dictionaries for later .retrieve(...).

    Needs at least five arguments, of which the -1th is the value to
    store, the -2th is the tick to store it at, the -3th is the turn to
    store it in, the -4th is the branch the revision is in, the -5th is
    the key the value is for, and the remaining arguments identify the
    entity that has the key, eg. a graph, node, or edge.

    With ``planning=True``, you will be permitted to alter "history"
    that takes place after the last non-planning moment of time, without
    much regard to consistency. Otherwise, contradictions will be
    handled by deleting everything in the contradicted plan after the
    present moment, unless you set ``contra=False``.

    ``loading=True`` prevents me from updating the ORM's records of the
    ends of branches and turns.
    """
    db = self.db
    if planning is None:
        planning = db._planning
    if forward is None:
        forward = db._forward
    self._store(*args, planning=planning, loading=loading, contra=contra)
    if not db._no_kc:
        self._update_keycache(*args, forward=forward)
    self.send(self, key=args[-5], branch=args[-4], turn=args[-3],
              tick=args[-2], value=args[-1], action='store')
def items_for_result(view, result):
    """
    Generates the actual list of data.
    """
    model_admin = view.model_admin
    for field_name in view.list_display:
        empty_value_display = model_admin.get_empty_value_display()
        row_classes = ['field-%s' % field_name]
        try:
            f, attr, value = lookup_field(field_name, result, model_admin)
        except ObjectDoesNotExist:
            result_repr = empty_value_display
        else:
            empty_value_display = getattr(
                attr, 'empty_value_display', empty_value_display)
            if f is None or f.auto_created:
                allow_tags = getattr(attr, 'allow_tags', False)
                boolean = getattr(attr, 'boolean', False)
                if boolean or not value:
                    allow_tags = True
                if django.VERSION >= (1, 9):
                    result_repr = display_for_value(
                        value, empty_value_display, boolean)
                else:
                    result_repr = display_for_value(value, boolean)
                # Strip HTML tags in the resulting text, except if the
                # function has an "allow_tags" attribute set to True.
                if allow_tags:
                    result_repr = mark_safe(result_repr)
                if isinstance(value, (datetime.date, datetime.time)):
                    row_classes.append('nowrap')
            else:
                if isinstance(f, models.ManyToOneRel):
                    field_val = getattr(result, f.name)
                    if field_val is None:
                        result_repr = empty_value_display
                    else:
                        result_repr = field_val
                else:
                    if django.VERSION >= (1, 9):
                        result_repr = display_for_field(
                            value, f, empty_value_display)
                    else:
                        result_repr = display_for_field(value, f)
                if isinstance(f, (models.DateField, models.TimeField,
                                  models.ForeignKey)):
                    row_classes.append('nowrap')
        if force_text(result_repr) == '':
            result_repr = mark_safe('&nbsp;')
        row_classes.extend(
            model_admin.get_extra_class_names_for_field_col(field_name,
                                                            result))
        row_attributes_dict = model_admin.get_extra_attrs_for_field_col(
            field_name, result)
        row_attributes_dict['class'] = ' '.join(row_classes)
        row_attributes = ''.join(
            ' %s="%s"' % (key, val)
            for key, val in row_attributes_dict.items())
        row_attributes_safe = mark_safe(row_attributes)
        yield format_html('<td{}>{}</td>', row_attributes_safe, result_repr)
def add_dependency(self, name, obj):
    """Add a code dependency so it gets inserted into globals"""
    if name in self._deps:
        if self._deps[name] is obj:
            return
        raise ValueError(
            "There exists a different dep with the same name : %r" % name)
    self._deps[name] = obj
def apply_interceptors(work_db, enabled_interceptors):
    """Apply each registered interceptor to the WorkDB."""
    names = (name for name in interceptor_names()
             if name in enabled_interceptors)
    for name in names:
        interceptor = get_interceptor(name)
        interceptor(work_db)
def _initial_individual(self):
    """Generates an individual with random parameters within bounds."""
    ind = creator.Individual(
        [random.uniform(-1, 1) for _ in range(len(self.value_means))])
    return ind
def RB_to_OPLS(c0, c1, c2, c3, c4, c5):
    """Converts Ryckaert-Bellemans type dihedrals to OPLS type.

    Parameters
    ----------
    c0, c1, c2, c3, c4, c5 : Ryckaert-Belleman coefficients (in kcal/mol)

    Returns
    -------
    opls_coeffs : np.array, shape=(4,)
        Array containing the OPLS dihedrals coeffs f1, f2, f3, and f4
        (in kcal/mol)
    """
    f1 = (-1.5 * c3) - (2 * c1)
    f2 = c0 + c1 + c3
    f3 = -0.5 * c3
    f4 = -0.25 * c4
    return np.array([f1, f2, f3, f4])
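A quick numeric check of the conversion, with the arithmetic restated so the snippet runs standalone. Note that `c2` and `c5` do not enter any of the four OPLS coefficients:

```python
import numpy as np

def RB_to_OPLS(c0, c1, c2, c3, c4, c5):
    # Same arithmetic as above; c2 and c5 are accepted but unused.
    f1 = (-1.5 * c3) - (2 * c1)
    f2 = c0 + c1 + c3
    f3 = -0.5 * c3
    f4 = -0.25 * c4
    return np.array([f1, f2, f3, f4])

print(RB_to_OPLS(1.0, 2.0, 3.0, 4.0, 5.0, 6.0).tolist())
# [-10.0, 7.0, -2.0, -1.25]
```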
def cron(self, cron_string, func, args=None, kwargs=None, repeat=None, queue_name=None, id=None, timeout=None, description=None, meta=None): """ Schedule a cronjob """ scheduled_time = get_next_scheduled_time(cron_string) # Set result_ttl to -1, as jobs scheduled via cron are periodic ones. # Otherwise the job would expire after 500 sec. job = self._create_job(func, args=args, kwargs=kwargs, commit=False, result_ttl=-1, id=id, queue_name=queue_name, description=description, timeout=timeout, meta=meta) job.meta['cron_string'] = cron_string if repeat is not None: job.meta['repeat'] = int(repeat) job.save() self.connection.zadd(self.scheduled_jobs_key, {job.id: to_unix(scheduled_time)}) return job
def call_api(self, event): """Make a request against the API defined by this app. Return any result from the API call as a JSON object/dict. In the event of a client or server error response from the API endpoint handler (4xx or 5xx), raise a :class:`fleece.httperror.HTTPError` containing some context about the error. Any unexpected exception encountered while executing an API endpoint handler will be appropriately raised as a generic 500 error with no error context (in order to not expose too much of the internals of the application). :param dict event: Dictionary containing the entire request payload passed to the Lambda function handler. :returns: JSON object (as a `list` or `dict`) containing the response data from the API endpoint. :raises: :class:`fleece.httperror.HTTPError` on 4xx or 5xx responses from the API endpoint handler, or a 500 response on an unexpected failure (due to a bug in handler code, for example). """ try: environ = _build_wsgi_env(event, self.import_name) response = werkzeug.wrappers.Response.from_app(self, environ) response_dict = json.loads(response.get_data()) if 400 <= response.status_code < 500: if ('error' in response_dict and 'message' in response_dict['error']): # FIXME(larsbutler): If 'error' is not a collection # (list/dict) and is a scalar such as an int, the check # `message in response_dict['error']` will blow up and # result in a generic 500 error. This check assumes too # much about the format of error responses given by the # handler. It might be good to allow this handling to # support custom structures. msg = response_dict['error']['message'] elif 'detail' in response_dict: # Likely this is a generated 400 response from Connexion. # Reveal the 'detail' of the message to the user. # NOTE(larsbutler): If your API handler explicitly returns # a 'detail' key in in the response, be aware that this # will be exposed to the client. msg = response_dict['detail'] else: # Respond with a generic client error. 
msg = 'Client Error' # FIXME(larsbutler): The logic above still assumes a lot about # the API response. That's not great. It would be nice to make # this more flexible and explicit. self.logger.error( 'Raising 4xx error', http_status=response.status_code, message=msg, ) raise httperror.HTTPError( status=response.status_code, message=msg, ) elif 500 <= response.status_code < 600: if response_dict['title'] == RESPONSE_CONTRACT_VIOLATION: # This case is generally enountered if the API endpoint # handler code does not conform to the contract dictated by # the Swagger specification. self.logger.error( RESPONSE_CONTRACT_VIOLATION, detail=response_dict['detail'] ) else: # This case will trigger if # a) the handler code raises an unexpected exception # or # b) the handler code explicitly returns a 5xx error. self.logger.error( 'Raising 5xx error', response=response_dict, http_status=response.status_code, ) raise httperror.HTTPError(status=response.status_code) else: return response_dict except httperror.HTTPError: self.logger.exception('HTTPError') raise except Exception: self.logger.exception('Unhandled exception') raise httperror.HTTPError(status=500)
async def psetex(self, name, time_ms, value): """ Set the value of key ``name`` to ``value`` that expires in ``time_ms`` milliseconds. ``time_ms`` can be represented by an integer or a Python timedelta object """ if isinstance(time_ms, datetime.timedelta): ms = int(time_ms.microseconds / 1000) time_ms = (time_ms.seconds + time_ms.days * 24 * 3600) * 1000 + ms return await self.execute_command('PSETEX', name, time_ms, value)
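The `timedelta` branch in `psetex` flattens days, seconds, and microseconds into a single integer millisecond count before handing it to Redis. A standalone sketch of just that conversion:

```python
import datetime

def to_milliseconds(time_ms):
    # Mirrors the timedelta handling in psetex above: a timedelta is
    # flattened to integer milliseconds; any other value passes through
    # unchanged for Redis to interpret.
    if isinstance(time_ms, datetime.timedelta):
        ms = int(time_ms.microseconds / 1000)
        time_ms = (time_ms.seconds + time_ms.days * 24 * 3600) * 1000 + ms
    return time_ms
```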
def base64_decode(data): """ Base 64 decoder """ data = data.replace(__enc64__[64], '') total = len(data) result = [] mod = 0 for i in range(total): mod = i % 4 cur = __enc64__.index(data[i]) if mod == 0: continue elif mod == 1: prev = __enc64__.index(data[i - 1]) result.append(chr(prev << 2 | cur >> 4)) elif mod == 2: prev = __enc64__.index(data[i - 1]) result.append(chr((prev & 0x0f) << 4 | cur >> 2)) elif mod == 3: prev = __enc64__.index(data[i - 1]) result.append(chr((prev & 3) << 6 | cur)) return "".join(result)
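The alphabet `__enc64__` is not shown above. Assuming it is the standard Base64 table with the padding character `'='` stored at index 64 (which is what `data.replace(__enc64__[64], '')` implies), the decoder agrees with standard Base64 on ASCII input:

```python
import base64
import string

# Assumed alphabet: standard Base64 characters plus '=' at index 64 (padding).
__enc64__ = string.ascii_uppercase + string.ascii_lowercase + string.digits + "+/="

def base64_decode(data):
    """Base 64 decoder (same algorithm as above)."""
    data = data.replace(__enc64__[64], '')  # strip padding
    result = []
    for i in range(len(data)):
        mod = i % 4
        cur = __enc64__.index(data[i])
        if mod == 0:
            continue  # first sextet of a group carries no complete byte yet
        prev = __enc64__.index(data[i - 1])
        if mod == 1:
            result.append(chr(prev << 2 | cur >> 4))
        elif mod == 2:
            result.append(chr((prev & 0x0f) << 4 | cur >> 2))
        else:  # mod == 3
            result.append(chr((prev & 3) << 6 | cur))
    return "".join(result)
```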
def is_allowed(request, level, pid): """Check if one or more subjects are allowed to perform action level on object. If a subject holds permissions for one action level on object, all lower action levels are also allowed. Any included subject that is unknown to this MN is treated as a subject without permissions. Returns: bool True: - The active subjects include one or more subjects that: - are fully trusted DataONE infrastructure subjects, causing all rights to be granted regardless of requested access level and SciObj - OR are in the object's ACL for the requested access level. The ACL contains the subjects from the object's allow rules and the object's rightsHolder, which has all rights. - OR object is public, which always yields a match on the "public" symbolic subject. False: - None of the active subjects are in the object's ACL for the requested access level or for lower levels. - OR PID does not exist - OR access level is invalid """ if is_trusted_subject(request): return True return d1_gmn.app.models.Permission.objects.filter( sciobj__pid__did=pid, subject__subject__in=request.all_subjects_set, level__gte=level, ).exists()
def search_knn(self, point, k, dist=None): """ Return the k nearest neighbors of point and their distances point must be an actual point, not a node. k is the number of results to return. The actual results can be less (if there aren't more nodes to return) or more in case of equal distances. dist is a distance function, expecting two points and returning a distance value. Distance values can be any comparable type. The result is an ordered list of (node, distance) tuples. """ if k < 1: raise ValueError("k must be greater than 0.") if dist is None: get_dist = lambda n: n.dist(point) else: get_dist = lambda n: dist(n.data, point) results = [] self._search_node(point, k, results, get_dist, itertools.count()) # We sort the final result by the distance in the tuple # (<KdNode>, distance). return [(node, -d) for d, _, node in sorted(results, reverse=True)]
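The `-d` sign flip and the `itertools.count()` tiebreaker in `search_knn` exist so that a *max*-heap of the k best candidates can be kept on Python's min-heap `heapq`: the worst candidate sits at the root and is cheap to evict, and the counter prevents tuple comparison from ever reaching the (non-comparable) node payloads. A self-contained 1-D sketch of that bounded-heap pattern (not the k-d tree traversal itself):

```python
import heapq
import itertools

def knn(points, query, k):
    # Push (-distance, tiebreak, point): the heap root is then the *worst*
    # of the k candidates, so a closer point can replace it in O(log k).
    heap, counter = [], itertools.count()
    for p in points:
        d = abs(p - query)  # 1-D distance, for the sketch only
        item = (-d, next(counter), p)
        if len(heap) < k:
            heapq.heappush(heap, item)
        elif item > heap[0]:  # strictly closer than the current worst
            heapq.heapreplace(heap, item)
    # Negate again when reporting; sort descending by -d = ascending by d.
    return [(p, -nd) for nd, _, p in sorted(heap, reverse=True)]
```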
def get_consumed_read_units_percent( table_name, lookback_window_start=15, lookback_period=5): """ Returns the number of consumed read units in percent :type table_name: str :param table_name: Name of the DynamoDB table :type lookback_window_start: int :param lookback_window_start: Relative start time for the CloudWatch metric :type lookback_period: int :param lookback_period: Number of minutes to look at :returns: float -- Number of consumed reads as a percentage of provisioned reads """ try: metrics = __get_aws_metric( table_name, lookback_window_start, lookback_period, 'ConsumedReadCapacityUnits') except BotoServerError: raise if metrics: lookback_seconds = lookback_period * 60 consumed_read_units = ( float(metrics[0]['Sum']) / float(lookback_seconds)) else: consumed_read_units = 0 try: table_read_units = dynamodb.get_provisioned_table_read_units( table_name) consumed_read_units_percent = ( float(consumed_read_units) / float(table_read_units) * 100) except JSONResponseError: raise logger.info('{0} - Consumed read units: {1:.2f}%'.format( table_name, consumed_read_units_percent)) return consumed_read_units_percent
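The arithmetic in `get_consumed_read_units_percent` is easy to miss among the API calls: CloudWatch returns a `Sum` over the lookback window, which is divided by the window length in seconds to get a units-per-second rate, then expressed against the table's provisioned throughput. A sketch of just that calculation:

```python
def consumed_percent(metric_sum, lookback_minutes, provisioned_units):
    # Sum over the window -> units/second -> percent of provisioned capacity.
    consumed_per_second = float(metric_sum) / float(lookback_minutes * 60)
    return consumed_per_second / float(provisioned_units) * 100
```

For example, 15 000 consumed units summed over a 5-minute window against 100 provisioned units is 50 units/s, i.e. 50% of capacity.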
async def _seek(self, ctx, *, time: str): """ Seeks to a given position in a track. """ player = self.bot.lavalink.players.get(ctx.guild.id) if not player.is_playing: return await ctx.send('Not playing.') seconds = time_rx.search(time) if not seconds: return await ctx.send('You need to specify the amount of seconds to skip!') seconds = int(seconds.group()) * 1000 if time.startswith('-'): seconds *= -1 track_time = player.position + seconds await player.seek(track_time) await ctx.send(f'Moved track to **{lavalink.Utils.format_time(track_time)}**')
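`time_rx` is defined elsewhere in this cog; assuming it simply matches a run of digits (`re.compile(r'\d+')`), the offset parsing in `_seek` can be isolated and checked on its own:

```python
import re

time_rx = re.compile(r'\d+')  # assumed definition; only digits are captured

def parse_seek_offset(time):
    # Mirrors _seek: digits -> seconds -> milliseconds, with a leading '-'
    # turning the offset into a rewind. Returns None when no digits appear.
    m = time_rx.search(time)
    if not m:
        return None
    ms = int(m.group()) * 1000
    if time.startswith('-'):
        ms *= -1
    return ms
```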
def _get_pct(isomirs, mirna): """ Get pct of variants respect to the reference using reads and different sequences """ pass_pos = [] for isomir in isomirs.iterrows(): mir = isomir[1]["chrom"] mut = isomir[1]["sv"] mut_counts = isomir[1]["counts"] total = mirna.loc[mir, "counts"] * 1.0 - mut_counts mut_diff = isomir[1]["diff"] ratio = mut_counts / total if mut_counts > 10 and ratio > 0.4 and mut != "0" and mut_diff > 1: isomir[1]["ratio"] = ratio pass_pos.append(isomir[1]) return pass_pos
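The acceptance test buried in `_get_pct`'s loop is worth spelling out: the reference total excludes the mutated reads, and a position passes only if the variant is non-reference (`"0"` means no variant), abundant enough, and supported by more than one distinct sequence. A plain-Python sketch of that filter, with the hard-coded thresholds lifted into parameters:

```python
def passes_filter(mut_counts, mirna_total, mut, mut_diff,
                  min_counts=10, min_ratio=0.4):
    # total = reference reads only (overall counts minus mutated reads),
    # so ratio compares variant reads against the reference, not the whole.
    total = mirna_total * 1.0 - mut_counts
    ratio = mut_counts / total
    return (mut_counts > min_counts and ratio > min_ratio
            and mut != "0" and mut_diff > 1)
```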
def price_projection(price_data=price_data(), ex_best_offers_overrides=ex_best_offers_overrides(), virtualise=True, rollover_stakes=False): """ Selection criteria of the returning price data. :param list price_data: PriceData filter to specify what market data we wish to receive. :param dict ex_best_offers_overrides: define order book depth, rollup method. :param bool virtualise: whether to receive virtualised prices also. :param bool rollover_stakes: whether to accumulate volume at each price as sum of volume at that price and all better prices. :returns: price data criteria for market data. :rtype: dict """ args = locals() return { to_camel_case(k): v for k, v in args.items() if v is not None }
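The `locals()` trick in `price_projection` turns the function signature directly into the API payload: every non-`None` argument is kept and its name is camel-cased for the Betfair API. Assuming `to_camel_case` is the usual underscore-to-camel converter, the pattern looks like this in isolation:

```python
def to_camel_case(snake):
    # Assumed helper: 'ex_best_offers_overrides' -> 'exBestOffersOverrides'.
    head, *rest = snake.split('_')
    return head + ''.join(word.capitalize() for word in rest)

def build_payload(**kwargs):
    # Drop None values and camel-case the keys, as price_projection does
    # with its locals().
    return {to_camel_case(k): v for k, v in kwargs.items() if v is not None}
```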
def _encoder(self): """Transliterate a string from the input language to English.""" if self.source_lang == 'en': return Transliterator._dummy_coder else: weights = load_transliteration_table(self.source_lang) encoder_weights = weights["encoder"] return Transliterator._transliterate_string(encoder_weights)
def handle_message(self, msg, host): """Processes messages that have been delivered from the transport protocol Args: msg (string): The raw packet data delivered from the transport protocol. host (tuple): A tuple containing the (address, port) combination of the message's origin. Returns: A formatted response to the client with the results of the processed message. Examples: >>> msg {"method": "OHAI Client", "version": "1.0"} >>> host ('192.168.0.20', 36545) """ logger.debug("Executing handle_message method.") response = None # Unserialize the data packet # If encryption is enabled, and we've receive the server's public key # already, try to decrypt if self.encryption and self.server_key: msg_data = unserialize_data(msg, self.compression, self.encryption) else: msg_data = unserialize_data(msg, self.compression) # Log the packet logger.debug("Packet received: " + pformat(msg_data)) # If the message data is blank, return none if not msg_data: return response if "method" in msg_data: if msg_data["method"] == "OHAI Client": logger.debug("<%s> Autodiscover response from server received " "from: %s" % (self.cuuid, host[0])) self.discovered_servers[host]= [msg_data["version"], msg_data["server_name"]] # Try to register with the discovered server if self.autoregistering: self.register(host) self.autoregistering = False elif msg_data["method"] == "NOTIFY": self.event_notifies[msg_data["euuid"]] = msg_data["event_data"] logger.debug("<%s> Notify received" % self.cuuid) logger.debug("<%s> Notify event buffer: %s" % (self.cuuid, pformat(self.event_notifies))) # Send an OK NOTIFY to the server confirming we got the message response = serialize_data( {"cuuid": str(self.cuuid), "method": "OK NOTIFY", "euuid": msg_data["euuid"]}, self.compression, self.encryption, self.server_key) elif msg_data["method"] == "OK REGISTER": logger.debug("<%s> Ok register received" % self.cuuid) self.registered = True self.server = host # If the server sent us their public key, store it if 
"encryption" in msg_data and self.encryption: self.server_key = PublicKey( msg_data["encryption"][0], msg_data["encryption"][1]) elif (msg_data["method"] == "LEGAL" or msg_data["method"] == "ILLEGAL"): logger.debug("<%s> Legality message received" % str(self.cuuid)) self.legal_check(msg_data) # Send an OK EVENT response to the server confirming we # received the message response = serialize_data( {"cuuid": str(self.cuuid), "method": "OK EVENT", "euuid": msg_data["euuid"]}, self.compression, self.encryption, self.server_key) logger.debug("Packet processing completed") return response
def set_python(self, value): """Set field internal value from the python representation of field value""" # hook exists to stringify before validation # set to string if not string or unicode if value is not None and not isinstance(value, self.supported_types) or isinstance(value, int): value = str(value) return super(TextField, self).set_python(value)
Set field internal value from the python representation of field value
Below is the instruction that describes the task:
### Input:
Set field internal value from the python representation of field value
### Response:
def set_python(self, value):
    """Set field internal value from the python representation of field value"""
    # hook exists to stringify before validation
    # set to string if not string or unicode
    if value is not None and not isinstance(value, self.supported_types) or isinstance(value, int):
        value = str(value)
    return super(TextField, self).set_python(value)
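The precedence in set_python's condition is easy to misread: `and` binds tighter than `or`, so ints are always stringified while values of a supported type pass through. A minimal stand-alone sketch (the BaseField/TextField pair here is a stand-in, not the original class hierarchy):

```python
class BaseField:
    def set_python(self, value):
        # stand-in for the real base-class behavior: just return the value
        return value

class TextField(BaseField):
    supported_types = (str,)  # assumed; the original defines this elsewhere

    def set_python(self, value):
        # parsed as (A and B) or C: ints are always coerced, other
        # supported-type values pass through, and None passes through
        if value is not None and not isinstance(value, self.supported_types) or isinstance(value, int):
            value = str(value)
        return super(TextField, self).set_python(value)

field = TextField()
print(field.set_python(3.5))   # '3.5'  (not a supported type -> stringified)
print(field.set_python(7))     # '7'    (int branch)
print(field.set_python("ok"))  # 'ok'   (supported type, unchanged)
print(field.set_python(None))  # None
```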
def mill(self): ''' Processes the variables collected from agents using the function millRule, storing the results in attributes named in aggr_sow. Parameters ---------- none Returns ------- none ''' # Make a dictionary of inputs for the millRule reap_vars_string = '' for name in self.reap_vars: reap_vars_string += ' \'' + name + '\' : self.' + name + ',' const_vars_string = '' for name in self.const_vars: const_vars_string += ' \'' + name + '\' : self.' + name + ',' mill_dict = eval('{' + reap_vars_string + const_vars_string + '}') # Run the millRule and store its output in self product = self.millRule(**mill_dict) for j in range(len(self.sow_vars)): this_var = self.sow_vars[j] this_product = getattr(product,this_var) setattr(self,this_var,this_product)
Processes the variables collected from agents using the function millRule, storing the results in attributes named in aggr_sow. Parameters ---------- none Returns ------- none
Below is the instruction that describes the task:
### Input:
Processes the variables collected from agents using the function millRule,
storing the results in attributes named in aggr_sow.

Parameters
----------
none

Returns
-------
none
### Response:
def mill(self):
    '''
    Processes the variables collected from agents using the function millRule,
    storing the results in attributes named in aggr_sow.

    Parameters
    ----------
    none

    Returns
    -------
    none
    '''
    # Make a dictionary of inputs for the millRule
    reap_vars_string = ''
    for name in self.reap_vars:
        reap_vars_string += ' \'' + name + '\' : self.' + name + ','
    const_vars_string = ''
    for name in self.const_vars:
        const_vars_string += ' \'' + name + '\' : self.' + name + ','
    mill_dict = eval('{' + reap_vars_string + const_vars_string + '}')

    # Run the millRule and store its output in self
    product = self.millRule(**mill_dict)
    for j in range(len(self.sow_vars)):
        this_var = self.sow_vars[j]
        this_product = getattr(product,this_var)
        setattr(self,this_var,this_product)
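mill() assembles its millRule inputs by concatenating strings and calling eval(); the same dictionary can be gathered with getattr(), which a sketch on a stand-in object (illustrative attribute names, not from the source) makes concrete:

```python
class Market:
    # stand-in attribute lists, mirroring reap_vars/const_vars in mill()
    reap_vars = ['price']
    const_vars = ['supply']

    def __init__(self):
        self.price = 10.0
        self.supply = 3.0

market = Market()
# equivalent of the eval()-built mill_dict, without string assembly
mill_dict = {name: getattr(market, name)
             for name in market.reap_vars + market.const_vars}
print(mill_dict)  # {'price': 10.0, 'supply': 3.0}
```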
def force_encoding(self, encoding):
    """Sets a fixed encoding. The change is emitted right away. From now
    on, this buffer will not switch the code page anymore. However, it will
    still keep track of the current code page.
    """
    if not encoding:
        self.disabled = False
    else:
        self.write_with_encoding(encoding, None)
        self.disabled = True
Sets a fixed encoding. The change is emitted right away. From now
on, this buffer will not switch the code page anymore. However, it will
still keep track of the current code page.
Below is the instruction that describes the task:
### Input:
Sets a fixed encoding. The change is emitted right away. From now
on, this buffer will not switch the code page anymore. However, it will
still keep track of the current code page.
### Response:
def force_encoding(self, encoding):
    """Sets a fixed encoding. The change is emitted right away. From now
    on, this buffer will not switch the code page anymore. However, it will
    still keep track of the current code page.
    """
    if not encoding:
        self.disabled = False
    else:
        self.write_with_encoding(encoding, None)
        self.disabled = True
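The force_encoding toggle can be sketched with a stand-in buffer class (write_with_encoding here only records the encoding; the real method also emits the code-page change):

```python
class EncodingBuffer:
    def __init__(self):
        self.disabled = False
        self.encoding = None

    def write_with_encoding(self, encoding, data):
        # stand-in: the real method would emit a code-page change sequence
        self.encoding = encoding

    def force_encoding(self, encoding):
        if not encoding:
            self.disabled = False
        else:
            self.write_with_encoding(encoding, None)
            self.disabled = True

buf = EncodingBuffer()
buf.force_encoding("cp437")
print(buf.disabled, buf.encoding)  # True cp437  (pinned: no more switching)
buf.force_encoding(None)
print(buf.disabled)  # False  (switching re-enabled)
```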
def get_object_from_dictionary_representation(dictionary, class_type):
    """Instantiates a new class (that takes no init params) and populates its attributes with a dictionary
    @type dictionary: dict
    @param dictionary: Dictionary representation of the object
    @param class_type: type
    @return: Instance of class_type populated from the dictionary
    """
    assert inspect.isclass(class_type), 'Cannot instantiate an object that is not a class'
    instance = class_type()
    CoyoteDb.update_object_from_dictionary_representation(dictionary, instance)
    return instance
Instantiates a new class (that takes no init params) and populates its attributes with a dictionary
@type dictionary: dict
@param dictionary: Dictionary representation of the object
@param class_type: type
@return: Instance of class_type populated from the dictionary
Below is the instruction that describes the task:
### Input:
Instantiates a new class (that takes no init params) and populates its attributes with a dictionary
@type dictionary: dict
@param dictionary: Dictionary representation of the object
@param class_type: type
@return: Instance of class_type populated from the dictionary
### Response:
def get_object_from_dictionary_representation(dictionary, class_type):
    """Instantiates a new class (that takes no init params) and populates its attributes with a dictionary
    @type dictionary: dict
    @param dictionary: Dictionary representation of the object
    @param class_type: type
    @return: Instance of class_type populated from the dictionary
    """
    assert inspect.isclass(class_type), 'Cannot instantiate an object that is not a class'
    instance = class_type()
    CoyoteDb.update_object_from_dictionary_representation(dictionary, instance)
    return instance
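The dictionary-to-object pattern can be sketched end to end; the update step is inlined with setattr here (the original delegates it to CoyoteDb.update_object_from_dictionary_representation, which is not available in this sketch):

```python
import inspect

class User:
    # no-arg constructor, as the pattern requires
    def __init__(self):
        self.name = None
        self.age = None

def get_object_from_dictionary(dictionary, class_type):
    assert inspect.isclass(class_type), 'Cannot instantiate an object that is not a class'
    instance = class_type()
    for key, value in dictionary.items():  # inlined update step
        setattr(instance, key, value)
    return instance

user = get_object_from_dictionary({'name': 'Ada', 'age': 36}, User)
print(user.name, user.age)  # Ada 36
```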
def update_environment(ApplicationName=None, EnvironmentId=None, EnvironmentName=None, GroupName=None, Description=None, Tier=None, VersionLabel=None, TemplateName=None, SolutionStackName=None, PlatformArn=None, OptionSettings=None, OptionsToRemove=None): """ Updates the environment description, deploys a new application version, updates the configuration settings to an entirely new configuration template, or updates select configuration option values in the running environment. Attempting to update both the release and configuration is not allowed and AWS Elastic Beanstalk returns an InvalidParameterCombination error. When updating the configuration settings to a new template or individual settings, a draft configuration is created and DescribeConfigurationSettings for this environment returns two setting descriptions with different DeploymentStatus values. See also: AWS API Documentation Examples The following operation updates an environment named "my-env" to version "v2" of the application to which it belongs: Expected Output: The following operation configures several options in the aws:elb:loadbalancer namespace: Expected Output: :example: response = client.update_environment( ApplicationName='string', EnvironmentId='string', EnvironmentName='string', GroupName='string', Description='string', Tier={ 'Name': 'string', 'Type': 'string', 'Version': 'string' }, VersionLabel='string', TemplateName='string', SolutionStackName='string', PlatformArn='string', OptionSettings=[ { 'ResourceName': 'string', 'Namespace': 'string', 'OptionName': 'string', 'Value': 'string' }, ], OptionsToRemove=[ { 'ResourceName': 'string', 'Namespace': 'string', 'OptionName': 'string' }, ] ) :type ApplicationName: string :param ApplicationName: The name of the application with which the environment is associated. :type EnvironmentId: string :param EnvironmentId: The ID of the environment to update. 
If no environment with this ID exists, AWS Elastic Beanstalk returns an InvalidParameterValue error. Condition: You must specify either this or an EnvironmentName, or both. If you do not specify either, AWS Elastic Beanstalk returns MissingRequiredParameter error. :type EnvironmentName: string :param EnvironmentName: The name of the environment to update. If no environment with this name exists, AWS Elastic Beanstalk returns an InvalidParameterValue error. Condition: You must specify either this or an EnvironmentId, or both. If you do not specify either, AWS Elastic Beanstalk returns MissingRequiredParameter error. :type GroupName: string :param GroupName: The name of the group to which the target environment belongs. Specify a group name only if the environment's name is specified in an environment manifest and not with the environment name or environment ID parameters. See Environment Manifest (env.yaml) for details. :type Description: string :param Description: If this parameter is specified, AWS Elastic Beanstalk updates the description of this environment. :type Tier: dict :param Tier: This specifies the tier to use to update the environment. Condition: At this time, if you change the tier version, name, or type, AWS Elastic Beanstalk returns InvalidParameterValue error. Name (string) --The name of this environment tier. Type (string) --The type of this environment tier. Version (string) --The version of this environment tier. :type VersionLabel: string :param VersionLabel: If this parameter is specified, AWS Elastic Beanstalk deploys the named application version to the environment. If no such application version is found, returns an InvalidParameterValue error. :type TemplateName: string :param TemplateName: If this parameter is specified, AWS Elastic Beanstalk deploys this configuration template to the environment. If no such configuration template is found, AWS Elastic Beanstalk returns an InvalidParameterValue error. 
:type SolutionStackName: string :param SolutionStackName: This specifies the platform version that the environment will run after the environment is updated. :type PlatformArn: string :param PlatformArn: The ARN of the platform, if used. :type OptionSettings: list :param OptionSettings: If specified, AWS Elastic Beanstalk updates the configuration set associated with the running environment and sets the specified configuration options to the requested value. (dict) --A specification identifying an individual configuration option along with its current value. For a list of possible option values, go to Option Values in the AWS Elastic Beanstalk Developer Guide . ResourceName (string) --A unique resource name for a time-based scaling configuration option. Namespace (string) --A unique namespace identifying the option's associated AWS resource. OptionName (string) --The name of the configuration option. Value (string) --The current value for the configuration option. :type OptionsToRemove: list :param OptionsToRemove: A list of custom user-defined configuration options to remove from the configuration set for this environment. (dict) --A specification identifying an individual configuration option. ResourceName (string) --A unique resource name for a time-based scaling configuration option. Namespace (string) --A unique namespace identifying the option's associated AWS resource. OptionName (string) --The name of the configuration option. 
:rtype: dict :return: { 'EnvironmentName': 'string', 'EnvironmentId': 'string', 'ApplicationName': 'string', 'VersionLabel': 'string', 'SolutionStackName': 'string', 'PlatformArn': 'string', 'TemplateName': 'string', 'Description': 'string', 'EndpointURL': 'string', 'CNAME': 'string', 'DateCreated': datetime(2015, 1, 1), 'DateUpdated': datetime(2015, 1, 1), 'Status': 'Launching'|'Updating'|'Ready'|'Terminating'|'Terminated', 'AbortableOperationInProgress': True|False, 'Health': 'Green'|'Yellow'|'Red'|'Grey', 'HealthStatus': 'NoData'|'Unknown'|'Pending'|'Ok'|'Info'|'Warning'|'Degraded'|'Severe', 'Resources': { 'LoadBalancer': { 'LoadBalancerName': 'string', 'Domain': 'string', 'Listeners': [ { 'Protocol': 'string', 'Port': 123 }, ] } }, 'Tier': { 'Name': 'string', 'Type': 'string', 'Version': 'string' }, 'EnvironmentLinks': [ { 'LinkName': 'string', 'EnvironmentName': 'string' }, ] } :returns: Launching : Environment is in the process of initial deployment. Updating : Environment is in the process of updating its configuration settings or application version. Ready : Environment is available to have an action performed on it, such as update or terminate. Terminating : Environment is in the shut-down process. Terminated : Environment is not running. """ pass
Updates the environment description, deploys a new application version, updates the configuration settings to an entirely new configuration template, or updates select configuration option values in the running environment. Attempting to update both the release and configuration is not allowed and AWS Elastic Beanstalk returns an InvalidParameterCombination error. When updating the configuration settings to a new template or individual settings, a draft configuration is created and DescribeConfigurationSettings for this environment returns two setting descriptions with different DeploymentStatus values. See also: AWS API Documentation Examples The following operation updates an environment named "my-env" to version "v2" of the application to which it belongs: Expected Output: The following operation configures several options in the aws:elb:loadbalancer namespace: Expected Output: :example: response = client.update_environment( ApplicationName='string', EnvironmentId='string', EnvironmentName='string', GroupName='string', Description='string', Tier={ 'Name': 'string', 'Type': 'string', 'Version': 'string' }, VersionLabel='string', TemplateName='string', SolutionStackName='string', PlatformArn='string', OptionSettings=[ { 'ResourceName': 'string', 'Namespace': 'string', 'OptionName': 'string', 'Value': 'string' }, ], OptionsToRemove=[ { 'ResourceName': 'string', 'Namespace': 'string', 'OptionName': 'string' }, ] ) :type ApplicationName: string :param ApplicationName: The name of the application with which the environment is associated. :type EnvironmentId: string :param EnvironmentId: The ID of the environment to update. If no environment with this ID exists, AWS Elastic Beanstalk returns an InvalidParameterValue error. Condition: You must specify either this or an EnvironmentName, or both. If you do not specify either, AWS Elastic Beanstalk returns MissingRequiredParameter error. 
:type EnvironmentName: string :param EnvironmentName: The name of the environment to update. If no environment with this name exists, AWS Elastic Beanstalk returns an InvalidParameterValue error. Condition: You must specify either this or an EnvironmentId, or both. If you do not specify either, AWS Elastic Beanstalk returns MissingRequiredParameter error. :type GroupName: string :param GroupName: The name of the group to which the target environment belongs. Specify a group name only if the environment's name is specified in an environment manifest and not with the environment name or environment ID parameters. See Environment Manifest (env.yaml) for details. :type Description: string :param Description: If this parameter is specified, AWS Elastic Beanstalk updates the description of this environment. :type Tier: dict :param Tier: This specifies the tier to use to update the environment. Condition: At this time, if you change the tier version, name, or type, AWS Elastic Beanstalk returns InvalidParameterValue error. Name (string) --The name of this environment tier. Type (string) --The type of this environment tier. Version (string) --The version of this environment tier. :type VersionLabel: string :param VersionLabel: If this parameter is specified, AWS Elastic Beanstalk deploys the named application version to the environment. If no such application version is found, returns an InvalidParameterValue error. :type TemplateName: string :param TemplateName: If this parameter is specified, AWS Elastic Beanstalk deploys this configuration template to the environment. If no such configuration template is found, AWS Elastic Beanstalk returns an InvalidParameterValue error. :type SolutionStackName: string :param SolutionStackName: This specifies the platform version that the environment will run after the environment is updated. :type PlatformArn: string :param PlatformArn: The ARN of the platform, if used. 
:type OptionSettings: list :param OptionSettings: If specified, AWS Elastic Beanstalk updates the configuration set associated with the running environment and sets the specified configuration options to the requested value. (dict) --A specification identifying an individual configuration option along with its current value. For a list of possible option values, go to Option Values in the AWS Elastic Beanstalk Developer Guide . ResourceName (string) --A unique resource name for a time-based scaling configuration option. Namespace (string) --A unique namespace identifying the option's associated AWS resource. OptionName (string) --The name of the configuration option. Value (string) --The current value for the configuration option. :type OptionsToRemove: list :param OptionsToRemove: A list of custom user-defined configuration options to remove from the configuration set for this environment. (dict) --A specification identifying an individual configuration option. ResourceName (string) --A unique resource name for a time-based scaling configuration option. Namespace (string) --A unique namespace identifying the option's associated AWS resource. OptionName (string) --The name of the configuration option. 
:rtype: dict :return: { 'EnvironmentName': 'string', 'EnvironmentId': 'string', 'ApplicationName': 'string', 'VersionLabel': 'string', 'SolutionStackName': 'string', 'PlatformArn': 'string', 'TemplateName': 'string', 'Description': 'string', 'EndpointURL': 'string', 'CNAME': 'string', 'DateCreated': datetime(2015, 1, 1), 'DateUpdated': datetime(2015, 1, 1), 'Status': 'Launching'|'Updating'|'Ready'|'Terminating'|'Terminated', 'AbortableOperationInProgress': True|False, 'Health': 'Green'|'Yellow'|'Red'|'Grey', 'HealthStatus': 'NoData'|'Unknown'|'Pending'|'Ok'|'Info'|'Warning'|'Degraded'|'Severe', 'Resources': { 'LoadBalancer': { 'LoadBalancerName': 'string', 'Domain': 'string', 'Listeners': [ { 'Protocol': 'string', 'Port': 123 }, ] } }, 'Tier': { 'Name': 'string', 'Type': 'string', 'Version': 'string' }, 'EnvironmentLinks': [ { 'LinkName': 'string', 'EnvironmentName': 'string' }, ] } :returns: Launching : Environment is in the process of initial deployment. Updating : Environment is in the process of updating its configuration settings or application version. Ready : Environment is available to have an action performed on it, such as update or terminate. Terminating : Environment is in the shut-down process. Terminated : Environment is not running.
Below is the the instruction that describes the task: ### Input: Updates the environment description, deploys a new application version, updates the configuration settings to an entirely new configuration template, or updates select configuration option values in the running environment. Attempting to update both the release and configuration is not allowed and AWS Elastic Beanstalk returns an InvalidParameterCombination error. When updating the configuration settings to a new template or individual settings, a draft configuration is created and DescribeConfigurationSettings for this environment returns two setting descriptions with different DeploymentStatus values. See also: AWS API Documentation Examples The following operation updates an environment named "my-env" to version "v2" of the application to which it belongs: Expected Output: The following operation configures several options in the aws:elb:loadbalancer namespace: Expected Output: :example: response = client.update_environment( ApplicationName='string', EnvironmentId='string', EnvironmentName='string', GroupName='string', Description='string', Tier={ 'Name': 'string', 'Type': 'string', 'Version': 'string' }, VersionLabel='string', TemplateName='string', SolutionStackName='string', PlatformArn='string', OptionSettings=[ { 'ResourceName': 'string', 'Namespace': 'string', 'OptionName': 'string', 'Value': 'string' }, ], OptionsToRemove=[ { 'ResourceName': 'string', 'Namespace': 'string', 'OptionName': 'string' }, ] ) :type ApplicationName: string :param ApplicationName: The name of the application with which the environment is associated. :type EnvironmentId: string :param EnvironmentId: The ID of the environment to update. If no environment with this ID exists, AWS Elastic Beanstalk returns an InvalidParameterValue error. Condition: You must specify either this or an EnvironmentName, or both. If you do not specify either, AWS Elastic Beanstalk returns MissingRequiredParameter error. 
:type EnvironmentName: string :param EnvironmentName: The name of the environment to update. If no environment with this name exists, AWS Elastic Beanstalk returns an InvalidParameterValue error. Condition: You must specify either this or an EnvironmentId, or both. If you do not specify either, AWS Elastic Beanstalk returns MissingRequiredParameter error. :type GroupName: string :param GroupName: The name of the group to which the target environment belongs. Specify a group name only if the environment's name is specified in an environment manifest and not with the environment name or environment ID parameters. See Environment Manifest (env.yaml) for details. :type Description: string :param Description: If this parameter is specified, AWS Elastic Beanstalk updates the description of this environment. :type Tier: dict :param Tier: This specifies the tier to use to update the environment. Condition: At this time, if you change the tier version, name, or type, AWS Elastic Beanstalk returns InvalidParameterValue error. Name (string) --The name of this environment tier. Type (string) --The type of this environment tier. Version (string) --The version of this environment tier. :type VersionLabel: string :param VersionLabel: If this parameter is specified, AWS Elastic Beanstalk deploys the named application version to the environment. If no such application version is found, returns an InvalidParameterValue error. :type TemplateName: string :param TemplateName: If this parameter is specified, AWS Elastic Beanstalk deploys this configuration template to the environment. If no such configuration template is found, AWS Elastic Beanstalk returns an InvalidParameterValue error. :type SolutionStackName: string :param SolutionStackName: This specifies the platform version that the environment will run after the environment is updated. :type PlatformArn: string :param PlatformArn: The ARN of the platform, if used. 
:type OptionSettings: list :param OptionSettings: If specified, AWS Elastic Beanstalk updates the configuration set associated with the running environment and sets the specified configuration options to the requested value. (dict) --A specification identifying an individual configuration option along with its current value. For a list of possible option values, go to Option Values in the AWS Elastic Beanstalk Developer Guide . ResourceName (string) --A unique resource name for a time-based scaling configuration option. Namespace (string) --A unique namespace identifying the option's associated AWS resource. OptionName (string) --The name of the configuration option. Value (string) --The current value for the configuration option. :type OptionsToRemove: list :param OptionsToRemove: A list of custom user-defined configuration options to remove from the configuration set for this environment. (dict) --A specification identifying an individual configuration option. ResourceName (string) --A unique resource name for a time-based scaling configuration option. Namespace (string) --A unique namespace identifying the option's associated AWS resource. OptionName (string) --The name of the configuration option. 
:rtype: dict :return: { 'EnvironmentName': 'string', 'EnvironmentId': 'string', 'ApplicationName': 'string', 'VersionLabel': 'string', 'SolutionStackName': 'string', 'PlatformArn': 'string', 'TemplateName': 'string', 'Description': 'string', 'EndpointURL': 'string', 'CNAME': 'string', 'DateCreated': datetime(2015, 1, 1), 'DateUpdated': datetime(2015, 1, 1), 'Status': 'Launching'|'Updating'|'Ready'|'Terminating'|'Terminated', 'AbortableOperationInProgress': True|False, 'Health': 'Green'|'Yellow'|'Red'|'Grey', 'HealthStatus': 'NoData'|'Unknown'|'Pending'|'Ok'|'Info'|'Warning'|'Degraded'|'Severe', 'Resources': { 'LoadBalancer': { 'LoadBalancerName': 'string', 'Domain': 'string', 'Listeners': [ { 'Protocol': 'string', 'Port': 123 }, ] } }, 'Tier': { 'Name': 'string', 'Type': 'string', 'Version': 'string' }, 'EnvironmentLinks': [ { 'LinkName': 'string', 'EnvironmentName': 'string' }, ] } :returns: Launching : Environment is in the process of initial deployment. Updating : Environment is in the process of updating its configuration settings or application version. Ready : Environment is available to have an action performed on it, such as update or terminate. Terminating : Environment is in the shut-down process. Terminated : Environment is not running. ### Response: def update_environment(ApplicationName=None, EnvironmentId=None, EnvironmentName=None, GroupName=None, Description=None, Tier=None, VersionLabel=None, TemplateName=None, SolutionStackName=None, PlatformArn=None, OptionSettings=None, OptionsToRemove=None): """ Updates the environment description, deploys a new application version, updates the configuration settings to an entirely new configuration template, or updates select configuration option values in the running environment. Attempting to update both the release and configuration is not allowed and AWS Elastic Beanstalk returns an InvalidParameterCombination error. 
When updating the configuration settings to a new template or individual settings, a draft configuration is created and DescribeConfigurationSettings for this environment returns two setting descriptions with different DeploymentStatus values. See also: AWS API Documentation Examples The following operation updates an environment named "my-env" to version "v2" of the application to which it belongs: Expected Output: The following operation configures several options in the aws:elb:loadbalancer namespace: Expected Output: :example: response = client.update_environment( ApplicationName='string', EnvironmentId='string', EnvironmentName='string', GroupName='string', Description='string', Tier={ 'Name': 'string', 'Type': 'string', 'Version': 'string' }, VersionLabel='string', TemplateName='string', SolutionStackName='string', PlatformArn='string', OptionSettings=[ { 'ResourceName': 'string', 'Namespace': 'string', 'OptionName': 'string', 'Value': 'string' }, ], OptionsToRemove=[ { 'ResourceName': 'string', 'Namespace': 'string', 'OptionName': 'string' }, ] ) :type ApplicationName: string :param ApplicationName: The name of the application with which the environment is associated. :type EnvironmentId: string :param EnvironmentId: The ID of the environment to update. If no environment with this ID exists, AWS Elastic Beanstalk returns an InvalidParameterValue error. Condition: You must specify either this or an EnvironmentName, or both. If you do not specify either, AWS Elastic Beanstalk returns MissingRequiredParameter error. :type EnvironmentName: string :param EnvironmentName: The name of the environment to update. If no environment with this name exists, AWS Elastic Beanstalk returns an InvalidParameterValue error. Condition: You must specify either this or an EnvironmentId, or both. If you do not specify either, AWS Elastic Beanstalk returns MissingRequiredParameter error. 
:type GroupName: string
:param GroupName: The name of the group to which the target environment belongs. Specify a group name only if the environment's name is specified in an environment manifest and not with the environment name or environment ID parameters. See Environment Manifest (env.yaml) for details.

:type Description: string
:param Description: If this parameter is specified, AWS Elastic Beanstalk updates the description of this environment.

:type Tier: dict
:param Tier: This specifies the tier to use to update the environment.
    Condition: At this time, if you change the tier version, name, or type, AWS Elastic Beanstalk returns InvalidParameterValue error.
    Name (string) --The name of this environment tier.
    Type (string) --The type of this environment tier.
    Version (string) --The version of this environment tier.

:type VersionLabel: string
:param VersionLabel: If this parameter is specified, AWS Elastic Beanstalk deploys the named application version to the environment. If no such application version is found, returns an InvalidParameterValue error.

:type TemplateName: string
:param TemplateName: If this parameter is specified, AWS Elastic Beanstalk deploys this configuration template to the environment. If no such configuration template is found, AWS Elastic Beanstalk returns an InvalidParameterValue error.

:type SolutionStackName: string
:param SolutionStackName: This specifies the platform version that the environment will run after the environment is updated.

:type PlatformArn: string
:param PlatformArn: The ARN of the platform, if used.

:type OptionSettings: list
:param OptionSettings: If specified, AWS Elastic Beanstalk updates the configuration set associated with the running environment and sets the specified configuration options to the requested value.
    (dict) --A specification identifying an individual configuration option along with its current value. For a list of possible option values, go to Option Values in the AWS Elastic Beanstalk Developer Guide.
        ResourceName (string) --A unique resource name for a time-based scaling configuration option.
        Namespace (string) --A unique namespace identifying the option's associated AWS resource.
        OptionName (string) --The name of the configuration option.
        Value (string) --The current value for the configuration option.

:type OptionsToRemove: list
:param OptionsToRemove: A list of custom user-defined configuration options to remove from the configuration set for this environment.
    (dict) --A specification identifying an individual configuration option.
        ResourceName (string) --A unique resource name for a time-based scaling configuration option.
        Namespace (string) --A unique namespace identifying the option's associated AWS resource.
        OptionName (string) --The name of the configuration option.

:rtype: dict
:return: {
    'EnvironmentName': 'string',
    'EnvironmentId': 'string',
    'ApplicationName': 'string',
    'VersionLabel': 'string',
    'SolutionStackName': 'string',
    'PlatformArn': 'string',
    'TemplateName': 'string',
    'Description': 'string',
    'EndpointURL': 'string',
    'CNAME': 'string',
    'DateCreated': datetime(2015, 1, 1),
    'DateUpdated': datetime(2015, 1, 1),
    'Status': 'Launching'|'Updating'|'Ready'|'Terminating'|'Terminated',
    'AbortableOperationInProgress': True|False,
    'Health': 'Green'|'Yellow'|'Red'|'Grey',
    'HealthStatus': 'NoData'|'Unknown'|'Pending'|'Ok'|'Info'|'Warning'|'Degraded'|'Severe',
    'Resources': {
        'LoadBalancer': {
            'LoadBalancerName': 'string',
            'Domain': 'string',
            'Listeners': [
                {
                    'Protocol': 'string',
                    'Port': 123
                },
            ]
        }
    },
    'Tier': {
        'Name': 'string',
        'Type': 'string',
        'Version': 'string'
    },
    'EnvironmentLinks': [
        {
            'LinkName': 'string',
            'EnvironmentName': 'string'
        },
    ]
}

:returns:
    Launching : Environment is in the process of initial deployment.
    Updating : Environment is in the process of updating its configuration settings or application version.
    Ready : Environment is available to have an action performed on it, such as update or terminate.
    Terminating : Environment is in the shut-down process.
    Terminated : Environment is not running.
"""
pass
def get_documents_count(self): """Counts documents in database :return: Number of documents in db """ db_collections = [ self.database[c] for c in self.get_collection_names() ] # list of all collections in database return sum([c.count() for c in db_collections])
Counts documents in database :return: Number of documents in db
def _deserialize(self, value, attr, data): """Format and validate the phone number using libphonenumber.""" if value: value = self._format_phone_number(value, attr) return super(PhoneNumberField, self)._deserialize(value, attr, data)
Format and validate the phone number using libphonenumber.
def validate(self, value): """ Applies the validation criteria. Returns value, new value, or None if invalid. Overload this in derived classes. """ try: # trap blank fields here if not self.blank or value: v = int(value) if v < 0: return None return value except ValueError: return None
Applies the validation criteria. Returns value, new value, or None if invalid. Overload this in derived classes.
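The rule above — accept only non-negative integers, optionally allowing blanks — can be exercised as a standalone sketch (hypothetical function name, same logic as `validate`):

```python
def validate_nonneg(value, blank=False):
    """Return value if it parses as a non-negative integer (or is an allowed blank), else None."""
    try:
        # Trap blank fields here: only parse when blanks are disallowed or a value is present.
        if not blank or value:
            if int(value) < 0:
                return None
        return value
    except ValueError:
        return None
```

With `blank=True` an empty string passes through unchanged; with the default `blank=False` it fails `int()` and is rejected.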
def get_temporal_score_df(self): ''' Returns ------- ''' scoredf = {} tdf = self.term_ranker(self.corpus).get_ranks() for cat in sorted(self.corpus.get_categories()): if cat >= self.starting_time_step: negative_categories = self._get_negative_categories(cat, tdf) scores = self.term_scorer.get_scores( tdf[cat + ' freq'].astype(int), tdf[negative_categories].sum(axis=1) ) scoredf[cat + ' score'] = scores scoredf[cat + ' freq'] = tdf[cat + ' freq'].astype(int) return pd.DataFrame(scoredf)
Returns -------
def dump(self): """Return the object itself.""" return { 'title': self.title, 'issue_id': self.issue_id, 'reporter': self.reporter, 'assignee': self.assignee, 'status': self.status, 'product': self.product, 'component': self.component, 'created_at': self.created_at, 'updated_at': self.updated_at, 'closed_at': self.closed_at, 'status_code': self.status_code }
Return the object itself.
def get(self, cluster, environ, topology, comp_name): ''' :param cluster: :param environ: :param topology: :param comp_name: :return: ''' start_time = time.time() comp_names = [] if comp_name == "All": lplan = yield access.get_logical_plan(cluster, environ, topology) if not lplan: self.write(dict()) return if not 'spouts' in lplan or not 'bolts' in lplan: self.write(dict()) return comp_names = lplan['spouts'].keys() comp_names.extend(lplan['bolts'].keys()) else: comp_names = [comp_name] exception_infos = dict() for comp_name in comp_names: exception_infos[comp_name] = yield access.get_component_exceptionsummary( cluster, environ, topology, comp_name) # Combine exceptions from multiple component aggregate_exceptions = dict() for comp_name, exception_logs in exception_infos.items(): for exception_log in exception_logs: class_name = exception_log['class_name'] if class_name != '': if not class_name in aggregate_exceptions: aggregate_exceptions[class_name] = 0 aggregate_exceptions[class_name] += int(exception_log['count']) # Put the exception value in a table aggregate_exceptions_table = [] for key in aggregate_exceptions: aggregate_exceptions_table.append([key, str(aggregate_exceptions[key])]) result = dict( status="success", executiontime=time.time() - start_time, result=aggregate_exceptions_table) self.write(result)
:param cluster: :param environ: :param topology: :param comp_name: :return:
def by_label(self, label): """Like `.get()`, but by label.""" # don't use .first(), so that MultipleResultsFound can be raised try: return self.filter_by(label=label).one() except sa.orm.exc.NoResultFound: return None
Like `.get()`, but by label.
def resetLoggingLocks(): ''' This function is a HACK! Basically, if we fork() while a logging lock is held, the lock is /copied/ while in the acquired state. However, since we've forked, the thread that acquired the lock no longer exists, so it can never unlock the lock, and we end up blocking forever. Therefore, we manually enter the logging module, and forcefully release all the locks it holds. THIS IS NOT SAFE (or thread-safe). Basically, it MUST be called right after a process starts, and no where else. ''' try: logging._releaseLock() except RuntimeError: pass # The lock is already released # Iterate over the root logger hierarchy, and # force-free all locks. # if logging.Logger.root for handler in logging.Logger.manager.loggerDict.values(): if hasattr(handler, "lock") and handler.lock: try: handler.lock.release() except RuntimeError: pass
This function is a HACK! Basically, if we fork() while a logging lock is held, the lock is /copied/ while in the acquired state. However, since we've forked, the thread that acquired the lock no longer exists, so it can never unlock the lock, and we end up blocking forever. Therefore, we manually enter the logging module, and forcefully release all the locks it holds. THIS IS NOT SAFE (or thread-safe). Basically, it MUST be called right after a process starts, and nowhere else.
def within_radius_sphere(self, x, y, radius): """ Adapted from the Mongo docs:: session.query(Places).filter(Places.loc.within_radius_sphere(1, 2, 50) """ return QueryExpression({ self : {'$within' : { '$centerSphere' : [[x, y], radius], }} })
Adapted from the Mongo docs:: session.query(Places).filter(Places.loc.within_radius_sphere(1, 2, 50)
def remove_child(self, id_, child_id):
    """Removes a child from an ``Id``.

    arg: id (osid.id.Id): the ``Id`` of the node
    arg: child_id (osid.id.Id): the ``Id`` of the child to remove
    raise: NotFound - ``id`` or ``child_id`` was not found or ``child_id`` is not a child of ``id``
    raise: NullArgument - ``id`` or ``child_id`` is ``null``
    raise: OperationFailed - unable to complete request
    raise: PermissionDenied - authorization failure
    *compliance: mandatory -- This method must be implemented.*

    """
    result = self._rls.get_relationships_by_genus_type_for_peers(id_, child_id, self._relationship_type)
    if not bool(result.available()):
        raise errors.NotFound()
    self._ras.delete_relationship(result.get_next_relationship().get_id())
Removes a child from an ``Id``. arg: id (osid.id.Id): the ``Id`` of the node arg: child_id (osid.id.Id): the ``Id`` of the child to remove raise: NotFound - ``id`` or ``child_id`` was not found or ``child_id`` is not a child of ``id`` raise: NullArgument - ``id`` or ``child_id`` is ``null`` raise: OperationFailed - unable to complete request raise: PermissionDenied - authorization failure *compliance: mandatory -- This method must be implemented.*
def d8flowdir(np, filleddem, flowdir, slope, workingdir=None, mpiexedir=None, exedir=None, log_file=None, runtime_file=None, hostfile=None): """Run D8 flow direction""" fname = TauDEM.func_name('d8flowdir') return TauDEM.run(FileClass.get_executable_fullpath(fname, exedir), {'-fel': filleddem}, workingdir, None, {'-p': flowdir, '-sd8': slope}, {'mpipath': mpiexedir, 'hostfile': hostfile, 'n': np}, {'logfile': log_file, 'runtimefile': runtime_file})
Run D8 flow direction
def _parse_snapshot_hits(self, file_obj): """Parse and store snapshot hits.""" for _ in range(self.n_snapshot_hits): dom_id, pmt_id = unpack('<ib', file_obj.read(5)) tdc_time = unpack('>I', file_obj.read(4))[0] tot = unpack('<b', file_obj.read(1))[0] self.snapshot_hits.append((dom_id, pmt_id, tdc_time, tot))
Parse and store snapshot hits.
def check_offset(self): """Check to see if initial position and goal are the same if they are, offset slightly so that the forcing term is not 0""" for d in range(self.dmps): if (self.y0[d] == self.goal[d]): self.goal[d] += 1e-4
Check to see if initial position and goal are the same if they are, offset slightly so that the forcing term is not 0
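A pure-function sketch of the same nudge (hypothetical name; the class above mutates `self.goal` in place instead of returning a copy):

```python
def offset_goal(y0, goal, eps=1e-4):
    """Return a copy of goal where any component equal to its start value is nudged by eps."""
    goal = list(goal)
    for d in range(len(y0)):
        if y0[d] == goal[d]:
            goal[d] += eps  # keep the DMP forcing term nonzero
    return goal
```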
def cygpath(filename): """Convert a cygwin path into a windows style path""" if sys.platform == 'cygwin': proc = Popen(['cygpath', '-am', filename], stdout=PIPE) return proc.communicate()[0].strip() else: return filename
Convert a cygwin path into a windows style path
def notify(self, clusters, centers): """! @brief This method is called by K-Means algorithm to notify about changes. @param[in] clusters (array_like): Allocated clusters by K-Means algorithm. @param[in] centers (array_like): Allocated centers by K-Means algorithm. """ self.__evolution_clusters.append(clusters) self.__evolution_centers.append(centers)
! @brief This method is called by K-Means algorithm to notify about changes. @param[in] clusters (array_like): Allocated clusters by K-Means algorithm. @param[in] centers (array_like): Allocated centers by K-Means algorithm.
def getChild(self, name, request):
    """
    Postpath needs to contain all segments of the url, if it is incomplete then that incomplete url will be passed on to the child resource (in this case our wsgi application).
    """
    request.prepath = []
    request.postpath.insert(0, name)
    # re-establishes request.postpath so as to contain the entire path
    return self.wsgi_resource
Postpath needs to contain all segments of the url, if it is incomplete then that incomplete url will be passed on to the child resource (in this case our wsgi application).
def printer(self): """Prints PDA states and their attributes""" i = 0 while i < self.n + 1: print "--------- State No --------" + repr(i) self.s[i].printer() i = i + 1
Prints PDA states and their attributes
def load(cls, path, format=None): """Load project from file. Use ``format`` to specify the file format to use. Path can be a file-like object, in which case format is required. Otherwise, can guess the appropriate format from the extension. If you pass a file-like object, you're responsible for closing the file. :param path: Path or file pointer. :param format: :attr:`KurtFileFormat.name` eg. ``"scratch14"``. Overrides the extension. :raises: :class:`UnknownFormat` if the extension is unrecognised. :raises: :py:class:`ValueError` if the format doesn't exist. """ path_was_string = isinstance(path, basestring) if path_was_string: (folder, filename) = os.path.split(path) (name, extension) = os.path.splitext(filename) if format is None: plugin = kurt.plugin.Kurt.get_plugin(extension=extension) if not plugin: raise UnknownFormat(extension) fp = open(path, "rb") else: fp = path assert format, "Format is required" plugin = kurt.plugin.Kurt.get_plugin(format) if not plugin: raise ValueError, "Unknown format %r" % format project = plugin.load(fp) if path_was_string: fp.close() project.convert(plugin) if isinstance(path, basestring): project.path = path if not project.name: project.name = name return project
Load project from file. Use ``format`` to specify the file format to use. Path can be a file-like object, in which case format is required. Otherwise, can guess the appropriate format from the extension. If you pass a file-like object, you're responsible for closing the file. :param path: Path or file pointer. :param format: :attr:`KurtFileFormat.name` eg. ``"scratch14"``. Overrides the extension. :raises: :class:`UnknownFormat` if the extension is unrecognised. :raises: :py:class:`ValueError` if the format doesn't exist.
def save_dispatcher(dsp, path): """ Write Dispatcher object in Python pickle format. Pickles are a serialized byte stream of a Python object. This format will preserve Python objects used as nodes or edges. :param dsp: A dispatcher that identifies the model adopted. :type dsp: schedula.Dispatcher :param path: File or filename to write. File names ending in .gz or .bz2 will be compressed. :type path: str, file .. testsetup:: >>> from tempfile import mkstemp >>> file_name = mkstemp()[1] Example:: >>> from schedula import Dispatcher >>> dsp = Dispatcher() >>> dsp.add_data('a', default_value=1) 'a' >>> dsp.add_function(function=max, inputs=['a', 'b'], outputs=['c']) 'max' >>> save_dispatcher(dsp, file_name) """ import dill with open(path, 'wb') as f: dill.dump(dsp, f)
Write Dispatcher object in Python pickle format. Pickles are a serialized byte stream of a Python object. This format will preserve Python objects used as nodes or edges. :param dsp: A dispatcher that identifies the model adopted. :type dsp: schedula.Dispatcher :param path: File or filename to write. File names ending in .gz or .bz2 will be compressed. :type path: str, file .. testsetup:: >>> from tempfile import mkstemp >>> file_name = mkstemp()[1] Example:: >>> from schedula import Dispatcher >>> dsp = Dispatcher() >>> dsp.add_data('a', default_value=1) 'a' >>> dsp.add_function(function=max, inputs=['a', 'b'], outputs=['c']) 'max' >>> save_dispatcher(dsp, file_name)
def next(self): """ Return the next available item. If there are no more items in the local 'results' list, check if there is a 'next_uri' value. If so, use that to get the next page of results from the API, and return the first item from that query. """ try: return self.results.pop(0) except IndexError: if self.next_uri is None: raise StopIteration() else: if not self.next_uri: if self.domain: self.results = self.list_method(self.domain) else: self.results = self.list_method() else: args = self.extra_args self.results = self._list_method(self.next_uri, *args) self.next_uri = self.manager._paging.get( self.paging_service, {}).get("next_uri") # We should have more results. try: return self.results.pop(0) except IndexError: raise StopIteration()
Return the next available item. If there are no more items in the local 'results' list, check if there is a 'next_uri' value. If so, use that to get the next page of results from the API, and return the first item from that query.
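The fetch-a-page-when-the-buffer-runs-dry pattern generalizes beyond this manager; here is a minimal generator sketch (hypothetical, independent of the API above) where `fetch_page(cursor)` returns `(items, next_cursor)` and a `None` cursor signals the last page:

```python
def iterate_paged(fetch_page):
    """Yield items across pages; fetch_page(cursor) -> (items, next_cursor or None)."""
    cursor = None
    while True:
        items, cursor = fetch_page(cursor)
        for item in items:
            yield item
        if cursor is None:
            return
```

A generator avoids the manual `pop(0)`/`StopIteration` bookkeeping of the hand-written iterator.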
def datatype_from_token(self, token): """ Given a SystemRDLParser token, lookup the type This only includes types under the "data_type" grammar rule """ if token.type == SystemRDLParser.ID: # Is an identifier for either an enum or struct type typ = self.compiler.namespace.lookup_type(get_ID_text(token)) if typ is None: self.msg.fatal( "Type '%s' is not defined" % get_ID_text(token), SourceRef.from_antlr(token) ) if rdltypes.is_user_enum(typ) or rdltypes.is_user_struct(typ): return typ else: self.msg.fatal( "Type '%s' is not a struct or enum" % get_ID_text(token), SourceRef.from_antlr(token) ) else: return self._DataType_Map[token.type]
Given a SystemRDLParser token, lookup the type This only includes types under the "data_type" grammar rule
def py_str2float(version): """Convert a Python version into a two-digit 'canonic' floating-point number, e.g. 2.5, 3.6. A runtime error is raised if "version" is not found. Note that there can be several strings that map to a single floating- point number. For example 3.2a1, 3.2.0, 3.2.2, 3.2.6 among others all map to 3.2. """ if version.endswith('pypy'): version = version[:-len('pypy')] if version in magics: magic = magics[version] for v, m in list(magics.items()): if m == magic: try: return float(canonic_python_version[v]) except: try: m = re.match(r'^(\d\.)(\d+)\.(\d+)$', v) if m: return float(m.group(1)+m.group(2)) except: pass pass pass pass raise RuntimeError("Can't find a valid Python version for version %s" % version) return
Convert a Python version into a two-digit 'canonic' floating-point number, e.g. 2.5, 3.6. A runtime error is raised if "version" is not found. Note that there can be several strings that map to a single floating- point number. For example 3.2a1, 3.2.0, 3.2.2, 3.2.6 among others all map to 3.2.
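The many-to-one reduction the docstring describes (3.2a1, 3.2.0, 3.2.6 all map to 3.2) can be sketched without the magic-number table — a simplification under stated assumptions, since the real function validates versions against known bytecode magics first:

```python
import re

def canonic_two_digit(version):
    """Reduce a CPython version string like '3.2.6' or '3.2a1' to the float 3.2."""
    m = re.match(r'^(\d+)\.(\d+)', version)
    if m is None:
        raise RuntimeError("Can't find a valid Python version for version %s" % version)
    return float("%s.%s" % (m.group(1), m.group(2)))
```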
def _import_serializer_class(self, location): """ Resolves a dot-notation string to serializer class. <app>.<SerializerName> will automatically be interpreted as: <app>.serializers.<SerializerName> """ pieces = location.split(".") class_name = pieces.pop() if pieces[len(pieces) - 1] != "serializers": pieces.append("serializers") module = importlib.import_module(".".join(pieces)) return getattr(module, class_name)
Resolves a dot-notation string to serializer class. <app>.<SerializerName> will automatically be interpreted as: <app>.serializers.<SerializerName>
Below is the instruction that describes the task: ### Input: Resolves a dot-notation string to serializer class. <app>.<SerializerName> will automatically be interpreted as: <app>.serializers.<SerializerName> ### Response: def _import_serializer_class(self, location): """ Resolves a dot-notation string to serializer class. <app>.<SerializerName> will automatically be interpreted as: <app>.serializers.<SerializerName> """ pieces = location.split(".") class_name = pieces.pop() if pieces[len(pieces) - 1] != "serializers": pieces.append("serializers") module = importlib.import_module(".".join(pieces)) return getattr(module, class_name)
def train_on_replay_memory(self, batch_info): """ Train agent on a memory gotten from replay buffer """ self.model.train() # Algo will aggregate data into this list: batch_info['sub_batch_data'] = [] for i in range(self.settings.training_rounds): sampled_rollout = self.env_roller.sample(batch_info, self.model, self.settings.training_steps) batch_result = self.algo.optimizer_step( batch_info=batch_info, device=self.device, model=self.model, rollout=sampled_rollout.to_device(self.device) ) self.env_roller.update(rollout=sampled_rollout, batch_info=batch_result) batch_info['sub_batch_data'].append(batch_result) batch_info.aggregate_key('sub_batch_data')
Train agent on a memory gotten from replay buffer
Below is the instruction that describes the task: ### Input: Train agent on a memory gotten from replay buffer ### Response: def train_on_replay_memory(self, batch_info): """ Train agent on a memory gotten from replay buffer """ self.model.train() # Algo will aggregate data into this list: batch_info['sub_batch_data'] = [] for i in range(self.settings.training_rounds): sampled_rollout = self.env_roller.sample(batch_info, self.model, self.settings.training_steps) batch_result = self.algo.optimizer_step( batch_info=batch_info, device=self.device, model=self.model, rollout=sampled_rollout.to_device(self.device) ) self.env_roller.update(rollout=sampled_rollout, batch_info=batch_result) batch_info['sub_batch_data'].append(batch_result) batch_info.aggregate_key('sub_batch_data')
def get_function(fn_name): """Retrieve the function defined by the function_name. Arguments: fn_name: specification of the type module:function_name. """ module_name, callable_name = fn_name.split(':') current = globals() if not callable_name: callable_name = module_name else: import importlib try: module = importlib.import_module(module_name) except ImportError: log.error("failed to import %s", module_name) raise current = module for level in callable_name.split('.'): current = getattr(current, level) code = current.__code__ if code.co_argcount != 2: raise ValueError('function should take 2 arguments: lines, file_name') return current
Retrieve the function defined by the function_name. Arguments: fn_name: specification of the type module:function_name.
Below is the instruction that describes the task: ### Input: Retrieve the function defined by the function_name. Arguments: fn_name: specification of the type module:function_name. ### Response: def get_function(fn_name): """Retrieve the function defined by the function_name. Arguments: fn_name: specification of the type module:function_name. """ module_name, callable_name = fn_name.split(':') current = globals() if not callable_name: callable_name = module_name else: import importlib try: module = importlib.import_module(module_name) except ImportError: log.error("failed to import %s", module_name) raise current = module for level in callable_name.split('.'): current = getattr(current, level) code = current.__code__ if code.co_argcount != 2: raise ValueError('function should take 2 arguments: lines, file_name') return current
def amount(self, amount): """ Sets the amount of this Money. The amount of money, in the smallest denomination of the currency indicated by `currency`. For example, when `currency` is `USD`, `amount` is in cents. :param amount: The amount of this Money. :type: int """ if amount is None: raise ValueError("Invalid value for `amount`, must not be `None`") if amount < 0: raise ValueError("Invalid value for `amount`, must be a value greater than or equal to `0`") self._amount = amount
Sets the amount of this Money. The amount of money, in the smallest denomination of the currency indicated by `currency`. For example, when `currency` is `USD`, `amount` is in cents. :param amount: The amount of this Money. :type: int
Below is the instruction that describes the task: ### Input: Sets the amount of this Money. The amount of money, in the smallest denomination of the currency indicated by `currency`. For example, when `currency` is `USD`, `amount` is in cents. :param amount: The amount of this Money. :type: int ### Response: def amount(self, amount): """ Sets the amount of this Money. The amount of money, in the smallest denomination of the currency indicated by `currency`. For example, when `currency` is `USD`, `amount` is in cents. :param amount: The amount of this Money. :type: int """ if amount is None: raise ValueError("Invalid value for `amount`, must not be `None`") if amount < 0: raise ValueError("Invalid value for `amount`, must be a value greater than or equal to `0`") self._amount = amount
def rotvec(v1, angle, iaxis): """ Transform a vector to a new coordinate system rotated by angle radians about axis iaxis. This transformation rotates v1 by angle radians about the specified axis. http://naif.jpl.nasa.gov/pub/naif/toolkit_docs/C/cspice/rotvec_c.html :param v1: Vector whose coordinate system is to be rotated. :type v1: 3-Element Array of floats :param angle: Angle of rotation (radians). :type angle: float :param iaxis: Axis of rotation X=1, Y=2, Z=3. :type iaxis: int :return: the vector expressed in the new coordinate system. :rtype: 3-Element Array of floats """ v1 = stypes.toDoubleVector(v1) angle = ctypes.c_double(angle) iaxis = ctypes.c_int(iaxis) vout = stypes.emptyDoubleVector(3) libspice.rotvec_c(v1, angle, iaxis, vout) return stypes.cVectorToPython(vout)
Transform a vector to a new coordinate system rotated by angle radians about axis iaxis. This transformation rotates v1 by angle radians about the specified axis. http://naif.jpl.nasa.gov/pub/naif/toolkit_docs/C/cspice/rotvec_c.html :param v1: Vector whose coordinate system is to be rotated. :type v1: 3-Element Array of floats :param angle: Angle of rotation (radians). :type angle: float :param iaxis: Axis of rotation X=1, Y=2, Z=3. :type iaxis: int :return: the vector expressed in the new coordinate system. :rtype: 3-Element Array of floats
Below is the instruction that describes the task: ### Input: Transform a vector to a new coordinate system rotated by angle radians about axis iaxis. This transformation rotates v1 by angle radians about the specified axis. http://naif.jpl.nasa.gov/pub/naif/toolkit_docs/C/cspice/rotvec_c.html :param v1: Vector whose coordinate system is to be rotated. :type v1: 3-Element Array of floats :param angle: Angle of rotation (radians). :type angle: float :param iaxis: Axis of rotation X=1, Y=2, Z=3. :type iaxis: int :return: the vector expressed in the new coordinate system. :rtype: 3-Element Array of floats ### Response: def rotvec(v1, angle, iaxis): """ Transform a vector to a new coordinate system rotated by angle radians about axis iaxis. This transformation rotates v1 by angle radians about the specified axis. http://naif.jpl.nasa.gov/pub/naif/toolkit_docs/C/cspice/rotvec_c.html :param v1: Vector whose coordinate system is to be rotated. :type v1: 3-Element Array of floats :param angle: Angle of rotation (radians). :type angle: float :param iaxis: Axis of rotation X=1, Y=2, Z=3. :type iaxis: int :return: the vector expressed in the new coordinate system. :rtype: 3-Element Array of floats """ v1 = stypes.toDoubleVector(v1) angle = ctypes.c_double(angle) iaxis = ctypes.c_int(iaxis) vout = stypes.emptyDoubleVector(3) libspice.rotvec_c(v1, angle, iaxis, vout) return stypes.cVectorToPython(vout)
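The `rotvec` record above only marshals arguments into the CSPICE library call, so the actual math is invisible. A pure-Python sketch of the frame rotation it performs, assuming SPICE's documented convention (the *coordinate system* rotates by `angle`, so the vector's components transform by the matrix for `-angle`) and handling only the Z axis (`iaxis=3`):

```python
import math

def rotvec_z(v, angle):
    """Express the 3-vector v in a frame rotated by `angle` radians about Z.

    Sketch of SPICE's rotvec for iaxis=3 (an assumption based on the
    documented convention, not on code visible in the record): rotating
    the frame by +angle rotates the vector's components by -angle.
    """
    c, s = math.cos(angle), math.sin(angle)
    x, y, z = v
    # Row form of the frame-rotation matrix about Z applied to v.
    return [c * x + s * y, -s * x + c * y, z]
```

For example, expressing the X unit vector in a frame rotated 90 degrees about Z yields a vector pointing along the new frame's negative Y axis.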
def grain_funcs(opts, proxy=None): ''' Returns the grain functions .. code-block:: python import salt.config import salt.loader __opts__ = salt.config.minion_config('/etc/salt/minion') grainfuncs = salt.loader.grain_funcs(__opts__) ''' ret = LazyLoader( _module_dirs( opts, 'grains', 'grain', ext_type_dirs='grains_dirs', ), opts, tag='grains', ) ret.pack['__utils__'] = utils(opts, proxy=proxy) return ret
Returns the grain functions .. code-block:: python import salt.config import salt.loader __opts__ = salt.config.minion_config('/etc/salt/minion') grainfuncs = salt.loader.grain_funcs(__opts__)
Below is the instruction that describes the task: ### Input: Returns the grain functions .. code-block:: python import salt.config import salt.loader __opts__ = salt.config.minion_config('/etc/salt/minion') grainfuncs = salt.loader.grain_funcs(__opts__) ### Response: def grain_funcs(opts, proxy=None): ''' Returns the grain functions .. code-block:: python import salt.config import salt.loader __opts__ = salt.config.minion_config('/etc/salt/minion') grainfuncs = salt.loader.grain_funcs(__opts__) ''' ret = LazyLoader( _module_dirs( opts, 'grains', 'grain', ext_type_dirs='grains_dirs', ), opts, tag='grains', ) ret.pack['__utils__'] = utils(opts, proxy=proxy) return ret
def v(self, version, max_version=None, **kwargs): """ Returns either a single requirement version or a requirement version range depending on whether two arguments are supplied or one Parameters ---------- version : str | Version Either a version of the requirement, or the first version in a range of acceptable versions """ if not isinstance(version, Version): version = self.version_cls(self, version, **kwargs) # Return a version range instead of version if max_version is not None: if not isinstance(max_version, Version): max_version = self.version_cls(self, max_version, **kwargs) version = VersionRange(version, max_version) return version
Returns either a single requirement version or a requirement version range depending on whether two arguments are supplied or one Parameters ---------- version : str | Version Either a version of the requirement, or the first version in a range of acceptable versions
Below is the instruction that describes the task: ### Input: Returns either a single requirement version or a requirement version range depending on whether two arguments are supplied or one Parameters ---------- version : str | Version Either a version of the requirement, or the first version in a range of acceptable versions ### Response: def v(self, version, max_version=None, **kwargs): """ Returns either a single requirement version or a requirement version range depending on whether two arguments are supplied or one Parameters ---------- version : str | Version Either a version of the requirement, or the first version in a range of acceptable versions """ if not isinstance(version, Version): version = self.version_cls(self, version, **kwargs) # Return a version range instead of version if max_version is not None: if not isinstance(max_version, Version): max_version = self.version_cls(self, max_version, **kwargs) version = VersionRange(version, max_version) return version
def get(self, return_type='object', target=None, extensions=None, scope='all', regex_search=False, defined_fields=None, absolute_paths=None, **kwargs): """ Retrieve files and/or metadata from the current Layout. Args: return_type (str): Type of result to return. Valid values: 'object' (default): return a list of matching BIDSFile objects. 'file': return a list of matching filenames. 'dir': return a list of directories. 'id': return a list of unique IDs. Must be used together with a valid target. target (str): Optional name of the target entity to get results for (only used if return_type is 'dir' or 'id'). extensions (str, list): One or more file extensions to filter on. BIDSFiles with any other extensions will be excluded. scope (str, list): Scope of the search space. If passed, only nodes/directories that match the specified scope will be searched. Possible values include: 'all' (default): search all available directories. 'derivatives': search all derivatives directories 'raw': search only BIDS-Raw directories <PipelineName>: the name of a BIDS-Derivatives pipeline regex_search (bool or None): Whether to require exact matching (False) or regex search (True) when comparing the query string to each entity. defined_fields (list): Optional list of names of metadata fields that must be defined in JSON sidecars in order to consider the file a match, but which don't need to match any particular value. absolute_paths (bool): Optionally override the instance-wide option to report either absolute or relative (to the top of the dataset) paths. If None, will fall back on the value specified at BIDSLayout initialization. kwargs (dict): Any optional key/values to filter the entities on. Keys are entity names, values are regexes to filter on. For example, passing filter={'subject': 'sub-[12]'} would return only files that match the first two subjects. Returns: A list of BIDSFiles (default) or strings (see return_type). Notes: As of pybids 0.7.0 some keywords have been changed. Namely: 'type' becomes 'suffix', 'modality' becomes 'datatype', 'acq' becomes 'acquisition' and 'mod' becomes 'modality'. Using the wrong version could result in get() silently returning wrong or no results. See the changelog for more details. """ # Warn users still expecting 0.6 behavior if 'type' in kwargs: raise ValueError("As of pybids 0.7.0, the 'type' argument has been" " replaced with 'suffix'.") layouts = self._get_layouts_in_scope(scope) # Create concatenated file, node, and entity lists files, entities, nodes = {}, {}, [] for l in layouts: files.update(l.files) entities.update(l.entities) nodes.extend(l.nodes) # Separate entity kwargs from metadata kwargs ent_kwargs, md_kwargs = {}, {} for k, v in kwargs.items(): if k in entities: ent_kwargs[k] = v else: md_kwargs[k] = v # Provide some suggestions if target is specified and invalid. if target is not None and target not in entities: import difflib potential = list(entities.keys()) suggestions = difflib.get_close_matches(target, potential) if suggestions: message = "Did you mean one of: {}?".format(suggestions) else: message = "Valid targets are: {}".format(potential) raise ValueError(("Unknown target '{}'. " + message) .format(target)) results = [] # Search on entities filters = ent_kwargs.copy() for f in files.values(): if f._matches(filters, extensions, regex_search): results.append(f) # Search on metadata if return_type not in {'dir', 'id'}: if md_kwargs: results = [f.path for f in results] results = self.metadata_index.search(results, defined_fields, **md_kwargs) results = [files[f] for f in results] # Convert to relative paths if needed if absolute_paths is None: # can be overloaded as option to .get absolute_paths = self.absolute_paths if not absolute_paths: for i, f in enumerate(results): f = copy.copy(f) f.path = os.path.relpath(f.path, self.root) results[i] = f if return_type == 'file': results = natural_sort([f.path for f in results]) elif return_type in ['id', 'dir']: if target is None: raise ValueError('If return_type is "id" or "dir", a valid ' 'target entity must also be specified.') results = [x for x in results if target in x.entities] if return_type == 'id': results = list(set([x.entities[target] for x in results])) results = natural_sort(results) elif return_type == 'dir': template = entities[target].directory if template is None: raise ValueError('Return type set to directory, but no ' 'directory template is defined for the ' 'target entity (\"%s\").' % target) # Construct regex search pattern from target directory template template = self.root + template to_rep = re.findall(r'\{(.*?)\}', template) for ent in to_rep: patt = entities[ent].pattern template = template.replace('{%s}' % ent, patt) template += r'[^\%s]*$' % os.path.sep matches = [ f.dirname if absolute_paths else os.path.relpath(f.dirname, self.root) for f in results if re.search(template, f.dirname) ] results = natural_sort(list(set(matches))) else: raise ValueError("Invalid return_type specified (must be one " "of 'tuple', 'file', 'id', or 'dir'.") else: results = natural_sort(results, 'path') return results
Retrieve files and/or metadata from the current Layout. Args: return_type (str): Type of result to return. Valid values: 'object' (default): return a list of matching BIDSFile objects. 'file': return a list of matching filenames. 'dir': return a list of directories. 'id': return a list of unique IDs. Must be used together with a valid target. target (str): Optional name of the target entity to get results for (only used if return_type is 'dir' or 'id'). extensions (str, list): One or more file extensions to filter on. BIDSFiles with any other extensions will be excluded. scope (str, list): Scope of the search space. If passed, only nodes/directories that match the specified scope will be searched. Possible values include: 'all' (default): search all available directories. 'derivatives': search all derivatives directories 'raw': search only BIDS-Raw directories <PipelineName>: the name of a BIDS-Derivatives pipeline regex_search (bool or None): Whether to require exact matching (False) or regex search (True) when comparing the query string to each entity. defined_fields (list): Optional list of names of metadata fields that must be defined in JSON sidecars in order to consider the file a match, but which don't need to match any particular value. absolute_paths (bool): Optionally override the instance-wide option to report either absolute or relative (to the top of the dataset) paths. If None, will fall back on the value specified at BIDSLayout initialization. kwargs (dict): Any optional key/values to filter the entities on. Keys are entity names, values are regexes to filter on. For example, passing filter={'subject': 'sub-[12]'} would return only files that match the first two subjects. Returns: A list of BIDSFiles (default) or strings (see return_type). Notes: As of pybids 0.7.0 some keywords have been changed. Namely: 'type' becomes 'suffix', 'modality' becomes 'datatype', 'acq' becomes 'acquisition' and 'mod' becomes 'modality'. Using the wrong version could result in get() silently returning wrong or no results. See the changelog for more details.
Below is the instruction that describes the task: ### Input: Retrieve files and/or metadata from the current Layout. Args: return_type (str): Type of result to return. Valid values: 'object' (default): return a list of matching BIDSFile objects. 'file': return a list of matching filenames. 'dir': return a list of directories. 'id': return a list of unique IDs. Must be used together with a valid target. target (str): Optional name of the target entity to get results for (only used if return_type is 'dir' or 'id'). extensions (str, list): One or more file extensions to filter on. BIDSFiles with any other extensions will be excluded. scope (str, list): Scope of the search space. If passed, only nodes/directories that match the specified scope will be searched. Possible values include: 'all' (default): search all available directories. 'derivatives': search all derivatives directories 'raw': search only BIDS-Raw directories <PipelineName>: the name of a BIDS-Derivatives pipeline regex_search (bool or None): Whether to require exact matching (False) or regex search (True) when comparing the query string to each entity. defined_fields (list): Optional list of names of metadata fields that must be defined in JSON sidecars in order to consider the file a match, but which don't need to match any particular value. absolute_paths (bool): Optionally override the instance-wide option to report either absolute or relative (to the top of the dataset) paths. If None, will fall back on the value specified at BIDSLayout initialization. kwargs (dict): Any optional key/values to filter the entities on. Keys are entity names, values are regexes to filter on. For example, passing filter={'subject': 'sub-[12]'} would return only files that match the first two subjects. Returns: A list of BIDSFiles (default) or strings (see return_type). Notes: As of pybids 0.7.0 some keywords have been changed. Namely: 'type' becomes 'suffix', 'modality' becomes 'datatype', 'acq' becomes 'acquisition' and 'mod' becomes 'modality'. Using the wrong version could result in get() silently returning wrong or no results. See the changelog for more details. ### Response: def get(self, return_type='object', target=None, extensions=None, scope='all', regex_search=False, defined_fields=None, absolute_paths=None, **kwargs): """ Retrieve files and/or metadata from the current Layout. Args: return_type (str): Type of result to return. Valid values: 'object' (default): return a list of matching BIDSFile objects. 'file': return a list of matching filenames. 'dir': return a list of directories. 'id': return a list of unique IDs. Must be used together with a valid target. target (str): Optional name of the target entity to get results for (only used if return_type is 'dir' or 'id'). extensions (str, list): One or more file extensions to filter on. BIDSFiles with any other extensions will be excluded. scope (str, list): Scope of the search space. If passed, only nodes/directories that match the specified scope will be searched. Possible values include: 'all' (default): search all available directories. 'derivatives': search all derivatives directories 'raw': search only BIDS-Raw directories <PipelineName>: the name of a BIDS-Derivatives pipeline regex_search (bool or None): Whether to require exact matching (False) or regex search (True) when comparing the query string to each entity. defined_fields (list): Optional list of names of metadata fields that must be defined in JSON sidecars in order to consider the file a match, but which don't need to match any particular value. absolute_paths (bool): Optionally override the instance-wide option to report either absolute or relative (to the top of the dataset) paths. If None, will fall back on the value specified at BIDSLayout initialization. kwargs (dict): Any optional key/values to filter the entities on. Keys are entity names, values are regexes to filter on. For example, passing filter={'subject': 'sub-[12]'} would return only files that match the first two subjects. Returns: A list of BIDSFiles (default) or strings (see return_type). Notes: As of pybids 0.7.0 some keywords have been changed. Namely: 'type' becomes 'suffix', 'modality' becomes 'datatype', 'acq' becomes 'acquisition' and 'mod' becomes 'modality'. Using the wrong version could result in get() silently returning wrong or no results. See the changelog for more details. """ # Warn users still expecting 0.6 behavior if 'type' in kwargs: raise ValueError("As of pybids 0.7.0, the 'type' argument has been" " replaced with 'suffix'.") layouts = self._get_layouts_in_scope(scope) # Create concatenated file, node, and entity lists files, entities, nodes = {}, {}, [] for l in layouts: files.update(l.files) entities.update(l.entities) nodes.extend(l.nodes) # Separate entity kwargs from metadata kwargs ent_kwargs, md_kwargs = {}, {} for k, v in kwargs.items(): if k in entities: ent_kwargs[k] = v else: md_kwargs[k] = v # Provide some suggestions if target is specified and invalid. if target is not None and target not in entities: import difflib potential = list(entities.keys()) suggestions = difflib.get_close_matches(target, potential) if suggestions: message = "Did you mean one of: {}?".format(suggestions) else: message = "Valid targets are: {}".format(potential) raise ValueError(("Unknown target '{}'. " + message) .format(target)) results = [] # Search on entities filters = ent_kwargs.copy() for f in files.values(): if f._matches(filters, extensions, regex_search): results.append(f) # Search on metadata if return_type not in {'dir', 'id'}: if md_kwargs: results = [f.path for f in results] results = self.metadata_index.search(results, defined_fields, **md_kwargs) results = [files[f] for f in results] # Convert to relative paths if needed if absolute_paths is None: # can be overloaded as option to .get absolute_paths = self.absolute_paths if not absolute_paths: for i, f in enumerate(results): f = copy.copy(f) f.path = os.path.relpath(f.path, self.root) results[i] = f if return_type == 'file': results = natural_sort([f.path for f in results]) elif return_type in ['id', 'dir']: if target is None: raise ValueError('If return_type is "id" or "dir", a valid ' 'target entity must also be specified.') results = [x for x in results if target in x.entities] if return_type == 'id': results = list(set([x.entities[target] for x in results])) results = natural_sort(results) elif return_type == 'dir': template = entities[target].directory if template is None: raise ValueError('Return type set to directory, but no ' 'directory template is defined for the ' 'target entity (\"%s\").' % target) # Construct regex search pattern from target directory template template = self.root + template to_rep = re.findall(r'\{(.*?)\}', template) for ent in to_rep: patt = entities[ent].pattern template = template.replace('{%s}' % ent, patt) template += r'[^\%s]*$' % os.path.sep matches = [ f.dirname if absolute_paths else os.path.relpath(f.dirname, self.root) for f in results if re.search(template, f.dirname) ] results = natural_sort(list(set(matches))) else: raise ValueError("Invalid return_type specified (must be one " "of 'tuple', 'file', 'id', or 'dir'.") else: results = natural_sort(results, 'path') return results
def XYZ100_to_sRGB1_linear(XYZ100): """Convert XYZ to linear sRGB, where XYZ is normalized so that reference white D65 is X=95.05, Y=100, Z=108.90 and sRGB is on the 0-1 scale. Linear sRGB has a linear relationship to actual light, so it is an appropriate space for simulating light (e.g. for alpha blending). """ XYZ100 = np.asarray(XYZ100, dtype=float) # this is broadcasting matrix * array-of-vectors, where the vector is the # last dim RGB_linear = np.einsum("...ij,...j->...i", XYZ100_to_sRGB1_matrix, XYZ100 / 100) return RGB_linear
Convert XYZ to linear sRGB, where XYZ is normalized so that reference white D65 is X=95.05, Y=100, Z=108.90 and sRGB is on the 0-1 scale. Linear sRGB has a linear relationship to actual light, so it is an appropriate space for simulating light (e.g. for alpha blending).
Below is the instruction that describes the task: ### Input: Convert XYZ to linear sRGB, where XYZ is normalized so that reference white D65 is X=95.05, Y=100, Z=108.90 and sRGB is on the 0-1 scale. Linear sRGB has a linear relationship to actual light, so it is an appropriate space for simulating light (e.g. for alpha blending). ### Response: def XYZ100_to_sRGB1_linear(XYZ100): """Convert XYZ to linear sRGB, where XYZ is normalized so that reference white D65 is X=95.05, Y=100, Z=108.90 and sRGB is on the 0-1 scale. Linear sRGB has a linear relationship to actual light, so it is an appropriate space for simulating light (e.g. for alpha blending). """ XYZ100 = np.asarray(XYZ100, dtype=float) # this is broadcasting matrix * array-of-vectors, where the vector is the # last dim RGB_linear = np.einsum("...ij,...j->...i", XYZ100_to_sRGB1_matrix, XYZ100 / 100) return RGB_linear
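The record above does not show `XYZ100_to_sRGB1_matrix`, so as a sketch the usual published sRGB coefficients are assumed below; the conversion itself is just a 3x3 matrix multiply on XYZ values scaled to 0-1. A pure-Python version for a single triple:

```python
# Standard XYZ(D65) -> linear-sRGB matrix from the sRGB specification.
# These coefficients are an assumption (the record does not show the
# actual XYZ100_to_sRGB1_matrix values).
M = [[ 3.2406, -1.5372, -0.4986],
     [-0.9689,  1.8758,  0.0415],
     [ 0.0557, -0.2040,  1.0570]]

def xyz100_to_srgb1_linear(xyz100):
    """Matrix-multiply one XYZ triple (0-100 scale) into linear sRGB (0-1)."""
    x, y, z = (c / 100.0 for c in xyz100)
    return [row[0] * x + row[1] * y + row[2] * z for row in M]
```

With these coefficients, the D65 reference white (95.05, 100, 108.90) maps to approximately (1, 1, 1), which is the sanity check the docstring's normalization implies.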
def parse_config(data: dict) -> dict: """Parse MIP config file. Args: data (dict): raw YAML input from MIP analysis config file Returns: dict: parsed data """ return { 'email': data.get('email'), 'family': data['family_id'], 'samples': [{ 'id': sample_id, 'type': analysis_type, } for sample_id, analysis_type in data['analysis_type'].items()], 'config_path': data['config_file_analysis'], 'is_dryrun': True if 'dry_run_all' in data else False, 'log_path': data['log_file'], 'out_dir': data['outdata_dir'], 'priority': data['slurm_quality_of_service'], 'sampleinfo_path': data['sample_info_file'], }
Parse MIP config file. Args: data (dict): raw YAML input from MIP analysis config file Returns: dict: parsed data
Below is the instruction that describes the task: ### Input: Parse MIP config file. Args: data (dict): raw YAML input from MIP analysis config file Returns: dict: parsed data ### Response: def parse_config(data: dict) -> dict: """Parse MIP config file. Args: data (dict): raw YAML input from MIP analysis config file Returns: dict: parsed data """ return { 'email': data.get('email'), 'family': data['family_id'], 'samples': [{ 'id': sample_id, 'type': analysis_type, } for sample_id, analysis_type in data['analysis_type'].items()], 'config_path': data['config_file_analysis'], 'is_dryrun': True if 'dry_run_all' in data else False, 'log_path': data['log_file'], 'out_dir': data['outdata_dir'], 'priority': data['slurm_quality_of_service'], 'sampleinfo_path': data['sample_info_file'], }
def _find_binary(binary=None): """Find the absolute path to the GnuPG binary. Also run checks that the binary is not a symlink, and check that our process real uid has exec permissions. :param str binary: The path to the GnuPG binary. :raises: :exc:`~exceptions.RuntimeError` if it appears that GnuPG is not installed. :rtype: str :returns: The absolute path to the GnuPG binary to use, if no exceptions occur. """ found = None if binary is not None: if os.path.isabs(binary) and os.path.isfile(binary): return binary if not os.path.isabs(binary): try: found = _which(binary) log.debug("Found potential binary paths: %s" % '\n'.join([path for path in found])) found = found[0] except IndexError as ie: log.info("Could not determine absolute path of binary: '%s'" % binary) elif os.access(binary, os.X_OK): found = binary if found is None: try: found = _which('gpg', abspath_only=True, disallow_symlinks=True)[0] except IndexError as ie: log.error("Could not find binary for 'gpg'.") try: found = _which('gpg2')[0] except IndexError as ie: log.error("Could not find binary for 'gpg2'.") if found is None: raise RuntimeError("GnuPG is not installed!") return found
Find the absolute path to the GnuPG binary. Also run checks that the binary is not a symlink, and check that our process real uid has exec permissions. :param str binary: The path to the GnuPG binary. :raises: :exc:`~exceptions.RuntimeError` if it appears that GnuPG is not installed. :rtype: str :returns: The absolute path to the GnuPG binary to use, if no exceptions occur.
Below is the instruction that describes the task: ### Input: Find the absolute path to the GnuPG binary. Also run checks that the binary is not a symlink, and check that our process real uid has exec permissions. :param str binary: The path to the GnuPG binary. :raises: :exc:`~exceptions.RuntimeError` if it appears that GnuPG is not installed. :rtype: str :returns: The absolute path to the GnuPG binary to use, if no exceptions occur. ### Response: def _find_binary(binary=None): """Find the absolute path to the GnuPG binary. Also run checks that the binary is not a symlink, and check that our process real uid has exec permissions. :param str binary: The path to the GnuPG binary. :raises: :exc:`~exceptions.RuntimeError` if it appears that GnuPG is not installed. :rtype: str :returns: The absolute path to the GnuPG binary to use, if no exceptions occur. """ found = None if binary is not None: if os.path.isabs(binary) and os.path.isfile(binary): return binary if not os.path.isabs(binary): try: found = _which(binary) log.debug("Found potential binary paths: %s" % '\n'.join([path for path in found])) found = found[0] except IndexError as ie: log.info("Could not determine absolute path of binary: '%s'" % binary) elif os.access(binary, os.X_OK): found = binary if found is None: try: found = _which('gpg', abspath_only=True, disallow_symlinks=True)[0] except IndexError as ie: log.error("Could not find binary for 'gpg'.") try: found = _which('gpg2')[0] except IndexError as ie: log.error("Could not find binary for 'gpg2'.") if found is None: raise RuntimeError("GnuPG is not installed!") return found
def data_filler_user_agent(self, number_of_rows, pipe): '''creates keys with user agent data ''' try: for i in range(number_of_rows): pipe.hmset('user_agent:%s' % i, { 'id': rnd_id_generator(self), 'ip': self.faker.ipv4(), 'countrycode': self.faker.country_code(), 'useragent': self.faker.user_agent() }) pipe.execute() logger.warning('user_agent Commits are successful after write job!', extra=d) except Exception as e: logger.error(e, extra=d)
creates keys with user agent data
Below is the the instruction that describes the task: ### Input: creates keys with user agent data ### Response: def data_filler_user_agent(self, number_of_rows, pipe): '''creates keys with user agent data ''' try: for i in range(number_of_rows): pipe.hmset('user_agent:%s' % i, { 'id': rnd_id_generator(self), 'ip': self.faker.ipv4(), 'countrycode': self.faker.country_code(), 'useragent': self.faker.user_agent() }) pipe.execute() logger.warning('user_agent Commits are successful after write job!', extra=d) except Exception as e: logger.error(e, extra=d)
def time_to_seconds(x): """Convert a time in a seconds sum""" if isinstance(x, time): return ((((x.hour * 60) + x.minute) * 60 + x.second) * 10**6 + x.microsecond) / 10**6 if is_str(x): return x # Clamp to valid time return x and max(0, min(x, 24 * 3600 - 10**-6))
Convert a time in a seconds sum
Below is the the instruction that describes the task: ### Input: Convert a time in a seconds sum ### Response: def time_to_seconds(x): """Convert a time in a seconds sum""" if isinstance(x, time): return ((((x.hour * 60) + x.minute) * 60 + x.second) * 10**6 + x.microsecond) / 10**6 if is_str(x): return x # Clamp to valid time return x and max(0, min(x, 24 * 3600 - 10**-6))
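The conversion and clamping in `time_to_seconds` use only the standard library, so the logic can be checked in isolation. A sketch restating the function above, with the unshown `is_str` helper replaced by a plain `isinstance` check:

```python
from datetime import time

def time_to_seconds(x):
    # datetime.time -> fractional seconds since midnight
    if isinstance(x, time):
        return ((((x.hour * 60) + x.minute) * 60 + x.second) * 10**6
                + x.microsecond) / 10**6
    if isinstance(x, str):
        return x  # strings pass through untouched, as in the original
    # numbers are clamped to [0, 24h); `x and ...` keeps 0/None as-is
    return x and max(0, min(x, 24 * 3600 - 10**-6))
```

Note the clamp's upper bound is one microsecond short of a full day, matching the microsecond resolution of the `time` branch.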
def move(self, destination, remove_tombstone=True): ''' Method to move resource to another location. Note: by default, this method removes the tombstone at the resource's original URI. Can use optional flag remove_tombstone to keep tombstone on successful move. Note: other resource's triples that are managed by Fedora that point to this resource, *will* point to the new URI after the move. Args: destination (rdflib.term.URIRef, str): URI location to move resource remove_tombstone (bool): defaults to False, set to True to keep tombstone Returns: (Resource) new, moved instance of resource ''' # set move headers destination_uri = self.repo.parse_uri(destination) # http request response = self.repo.api.http_request('MOVE', self.uri, data=None, headers={'Destination':destination_uri.toPython()}) # handle response if response.status_code == 201: # set self exists self.exists = False # handle tombstone if remove_tombstone: tombstone_response = self.repo.api.http_request('DELETE', "%s/fcr:tombstone" % self.uri) # udpdate uri, refresh, and return self.uri = destination_uri self.refresh() return destination_uri else: raise Exception('HTTP %s, could not move resource %s to %s' % (response.status_code, self.uri, destination_uri))
Method to move resource to another location. Note: by default, this method removes the tombstone at the resource's original URI. Can use optional flag remove_tombstone to keep tombstone on successful move. Note: other resource's triples that are managed by Fedora that point to this resource, *will* point to the new URI after the move. Args: destination (rdflib.term.URIRef, str): URI location to move resource remove_tombstone (bool): defaults to False, set to True to keep tombstone Returns: (Resource) new, moved instance of resource
Below is the the instruction that describes the task: ### Input: Method to move resource to another location. Note: by default, this method removes the tombstone at the resource's original URI. Can use optional flag remove_tombstone to keep tombstone on successful move. Note: other resource's triples that are managed by Fedora that point to this resource, *will* point to the new URI after the move. Args: destination (rdflib.term.URIRef, str): URI location to move resource remove_tombstone (bool): defaults to False, set to True to keep tombstone Returns: (Resource) new, moved instance of resource ### Response: def move(self, destination, remove_tombstone=True): ''' Method to move resource to another location. Note: by default, this method removes the tombstone at the resource's original URI. Can use optional flag remove_tombstone to keep tombstone on successful move. Note: other resource's triples that are managed by Fedora that point to this resource, *will* point to the new URI after the move. Args: destination (rdflib.term.URIRef, str): URI location to move resource remove_tombstone (bool): defaults to False, set to True to keep tombstone Returns: (Resource) new, moved instance of resource ''' # set move headers destination_uri = self.repo.parse_uri(destination) # http request response = self.repo.api.http_request('MOVE', self.uri, data=None, headers={'Destination':destination_uri.toPython()}) # handle response if response.status_code == 201: # set self exists self.exists = False # handle tombstone if remove_tombstone: tombstone_response = self.repo.api.http_request('DELETE', "%s/fcr:tombstone" % self.uri) # udpdate uri, refresh, and return self.uri = destination_uri self.refresh() return destination_uri else: raise Exception('HTTP %s, could not move resource %s to %s' % (response.status_code, self.uri, destination_uri))
def actor_url(parser, token): """ Renders the URL for a particular actor instance :: <a href="{% actor_url request.user %}">View your actions</a> <a href="{% actor_url another_user %}">{{ another_user }}'s actions</a> """ bits = token.split_contents() if len(bits) != 2: raise TemplateSyntaxError("Accepted format " "{% actor_url [actor_instance] %}") else: return DisplayActivityActorUrl(*bits[1:])
Renders the URL for a particular actor instance :: <a href="{% actor_url request.user %}">View your actions</a> <a href="{% actor_url another_user %}">{{ another_user }}'s actions</a>
Below is the the instruction that describes the task: ### Input: Renders the URL for a particular actor instance :: <a href="{% actor_url request.user %}">View your actions</a> <a href="{% actor_url another_user %}">{{ another_user }}'s actions</a> ### Response: def actor_url(parser, token): """ Renders the URL for a particular actor instance :: <a href="{% actor_url request.user %}">View your actions</a> <a href="{% actor_url another_user %}">{{ another_user }}'s actions</a> """ bits = token.split_contents() if len(bits) != 2: raise TemplateSyntaxError("Accepted format " "{% actor_url [actor_instance] %}") else: return DisplayActivityActorUrl(*bits[1:])
def search_file (pattern, f): """ Function to search a single file for a single search pattern. """ fn_matched = False contents_matched = False # Use mimetypes to exclude binary files where possible if not re.match(r'.+_mqc\.(png|jpg|jpeg)', f['fn']): (ftype, encoding) = mimetypes.guess_type(os.path.join(f['root'], f['fn'])) if encoding is not None: return False if ftype is not None and ftype.startswith('image'): return False # Search pattern specific filesize limit if pattern.get('max_filesize') is not None and 'filesize' in f: if f['filesize'] > pattern.get('max_filesize'): logger.debug("Ignoring because exceeded search pattern filesize limit: {}".format(f['fn'])) return False # Search by file name (glob) if pattern.get('fn') is not None: if fnmatch.fnmatch(f['fn'], pattern['fn']): fn_matched = True if pattern.get('contents') is None and pattern.get('contents_re') is None: return True # Search by file name (regex) if pattern.get('fn_re') is not None: if re.match( pattern['fn_re'], f['fn']): fn_matched = True if pattern.get('contents') is None and pattern.get('contents_re') is None: return True # Search by file contents if pattern.get('contents') is not None or pattern.get('contents_re') is not None: if pattern.get('contents_re') is not None: repattern = re.compile(pattern['contents_re']) try: with io.open (os.path.join(f['root'],f['fn']), "r", encoding='utf-8') as f: l = 1 for line in f: # Search by file contents (string) if pattern.get('contents') is not None: if pattern['contents'] in line: contents_matched = True if pattern.get('fn') is None and pattern.get('fn_re') is None: return True break # Search by file contents (regex) elif pattern.get('contents_re') is not None: if re.search(repattern, line): contents_matched = True if pattern.get('fn') is None and pattern.get('fn_re') is None: return True break # Break if we've searched enough lines for this pattern if pattern.get('num_lines') and l >= pattern.get('num_lines'): break l += 1 except (IOError, OSError, ValueError, UnicodeDecodeError): if config.report_readerrors: logger.debug("Couldn't read file when looking for output: {}".format(f['fn'])) return False return fn_matched and contents_matched
Function to search a single file for a single search pattern.
Below is the the instruction that describes the task: ### Input: Function to search a single file for a single search pattern. ### Response: def search_file (pattern, f): """ Function to search a single file for a single search pattern. """ fn_matched = False contents_matched = False # Use mimetypes to exclude binary files where possible if not re.match(r'.+_mqc\.(png|jpg|jpeg)', f['fn']): (ftype, encoding) = mimetypes.guess_type(os.path.join(f['root'], f['fn'])) if encoding is not None: return False if ftype is not None and ftype.startswith('image'): return False # Search pattern specific filesize limit if pattern.get('max_filesize') is not None and 'filesize' in f: if f['filesize'] > pattern.get('max_filesize'): logger.debug("Ignoring because exceeded search pattern filesize limit: {}".format(f['fn'])) return False # Search by file name (glob) if pattern.get('fn') is not None: if fnmatch.fnmatch(f['fn'], pattern['fn']): fn_matched = True if pattern.get('contents') is None and pattern.get('contents_re') is None: return True # Search by file name (regex) if pattern.get('fn_re') is not None: if re.match( pattern['fn_re'], f['fn']): fn_matched = True if pattern.get('contents') is None and pattern.get('contents_re') is None: return True # Search by file contents if pattern.get('contents') is not None or pattern.get('contents_re') is not None: if pattern.get('contents_re') is not None: repattern = re.compile(pattern['contents_re']) try: with io.open (os.path.join(f['root'],f['fn']), "r", encoding='utf-8') as f: l = 1 for line in f: # Search by file contents (string) if pattern.get('contents') is not None: if pattern['contents'] in line: contents_matched = True if pattern.get('fn') is None and pattern.get('fn_re') is None: return True break # Search by file contents (regex) elif pattern.get('contents_re') is not None: if re.search(repattern, line): contents_matched = True if pattern.get('fn') is None and pattern.get('fn_re') is None: return True break # Break if we've searched enough lines for this pattern if pattern.get('num_lines') and l >= pattern.get('num_lines'): break l += 1 except (IOError, OSError, ValueError, UnicodeDecodeError): if config.report_readerrors: logger.debug("Couldn't read file when looking for output: {}".format(f['fn'])) return False return fn_matched and contents_matched
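The matching rules in `search_file` reduce to four independent checks. A deliberately simplified sketch (not from the source; it drops the filesize, mimetype, and `num_lines` handling, and takes the file's lines directly instead of reading from disk):

```python
import fnmatch
import re

def matches(fn, lines, pattern):
    """A file matches when every key present in `pattern` matches:
    glob 'fn', regex 'fn_re', substring 'contents', regex 'contents_re'."""
    if 'fn' in pattern and not fnmatch.fnmatch(fn, pattern['fn']):
        return False
    if 'fn_re' in pattern and not re.match(pattern['fn_re'], fn):
        return False
    if 'contents' in pattern and not any(pattern['contents'] in l for l in lines):
        return False
    if 'contents_re' in pattern and not any(re.search(pattern['contents_re'], l)
                                            for l in lines):
        return False
    return True
```

As in the original, combining a filename key with a contents key means both must match.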
def get_width(self, element): '''get_width High-level api: Calculate how much indent is needed for a node. Parameters ---------- element : `Element` A node in model tree. Returns ------- int Start position from the left margin. ''' parent = element.getparent() if parent in self.width: return self.width[parent] ret = 0 for sibling in parent.getchildren(): w = len(self.get_name_str(sibling)) if w > ret: ret = w self.width[parent] = math.ceil((ret + 3) / 3.0) * 3 return self.width[parent]
get_width High-level api: Calculate how much indent is needed for a node. Parameters ---------- element : `Element` A node in model tree. Returns ------- int Start position from the left margin.
Below is the the instruction that describes the task: ### Input: get_width High-level api: Calculate how much indent is needed for a node. Parameters ---------- element : `Element` A node in model tree. Returns ------- int Start position from the left margin. ### Response: def get_width(self, element): '''get_width High-level api: Calculate how much indent is needed for a node. Parameters ---------- element : `Element` A node in model tree. Returns ------- int Start position from the left margin. ''' parent = element.getparent() if parent in self.width: return self.width[parent] ret = 0 for sibling in parent.getchildren(): w = len(self.get_name_str(sibling)) if w > ret: ret = w self.width[parent] = math.ceil((ret + 3) / 3.0) * 3 return self.width[parent]
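The indent rule in `get_width` is a small arithmetic step worth seeing on its own: take the longest sibling name, add 3, and round up to the next multiple of 3. A standalone sketch (names here are illustrative, not from the source):

```python
import math

def column_width(names):
    """Longest name length, plus 3, rounded up to a multiple of 3."""
    longest = max((len(n) for n in names), default=0)
    return math.ceil((longest + 3) / 3.0) * 3
```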
def pdmerge_respeta_tz(func_merge, tz_ant=None, *args, **kwargs): """ Defensive programming for a pandas BUG (sometimes the index loses its tz): - issue #7795: concat of objects with the same timezone get reset to UTC; - issue #10567: DataFrame combine_first() loses timezone information for datetime columns https://github.com/pydata/pandas/issues/10567 :param tz_ant: reference TZ to restore at the end if it was lost during the operation :param func_merge: pointer to the function that performs the merge / join / combine / etc. operation :param args: arguments for func_merge :param kwargs: arguments for func_merge :return: dataframe_merged with the previous TZ """ df_merged = func_merge(*args, **kwargs) if tz_ant is not None and tz_ant != df_merged.index.tz: # print_warn('pandas error: join prevProg + demandaGeneracion loses timezone (%s->%s)' # % (data_import[KEYS_DATA[0]].index.tz, tz_ant)) df_merged.index = df_merged.index.tz_convert(tz_ant) return df_merged
Defensive programming for a pandas BUG (sometimes the index loses its tz): - issue #7795: concat of objects with the same timezone get reset to UTC; - issue #10567: DataFrame combine_first() loses timezone information for datetime columns https://github.com/pydata/pandas/issues/10567 :param tz_ant: reference TZ to restore at the end if it was lost during the operation :param func_merge: pointer to the function that performs the merge / join / combine / etc. operation :param args: arguments for func_merge :param kwargs: arguments for func_merge :return: dataframe_merged with the previous TZ
Below is the the instruction that describes the task: ### Input: Defensive programming for a pandas BUG (sometimes the index loses its tz): - issue #7795: concat of objects with the same timezone get reset to UTC; - issue #10567: DataFrame combine_first() loses timezone information for datetime columns https://github.com/pydata/pandas/issues/10567 :param tz_ant: reference TZ to restore at the end if it was lost during the operation :param func_merge: pointer to the function that performs the merge / join / combine / etc. operation :param args: arguments for func_merge :param kwargs: arguments for func_merge :return: dataframe_merged with the previous TZ ### Response: def pdmerge_respeta_tz(func_merge, tz_ant=None, *args, **kwargs): """ Defensive programming for a pandas BUG (sometimes the index loses its tz): - issue #7795: concat of objects with the same timezone get reset to UTC; - issue #10567: DataFrame combine_first() loses timezone information for datetime columns https://github.com/pydata/pandas/issues/10567 :param tz_ant: reference TZ to restore at the end if it was lost during the operation :param func_merge: pointer to the function that performs the merge / join / combine / etc. operation :param args: arguments for func_merge :param kwargs: arguments for func_merge :return: dataframe_merged with the previous TZ """ df_merged = func_merge(*args, **kwargs) if tz_ant is not None and tz_ant != df_merged.index.tz: # print_warn('pandas error: join prevProg + demandaGeneracion loses timezone (%s->%s)' # % (data_import[KEYS_DATA[0]].index.tz, tz_ant)) df_merged.index = df_merged.index.tz_convert(tz_ant) return df_merged
def add_methods(self, service): """Build method view for service.""" bindings = { "document/literal": Document(self), "rpc/literal": RPC(self), "rpc/encoded": Encoded(self)} for p in service.ports: binding = p.binding ptype = p.binding.type operations = p.binding.type.operations.values() for name in (op.name for op in operations): m = Facade("Method") m.name = name m.location = p.location m.binding = Facade("binding") op = binding.operation(name) m.soap = op.soap key = "/".join((op.soap.style, op.soap.input.body.use)) m.binding.input = bindings.get(key) key = "/".join((op.soap.style, op.soap.output.body.use)) m.binding.output = bindings.get(key) p.methods[name] = m
Build method view for service.
Below is the the instruction that describes the task: ### Input: Build method view for service. ### Response: def add_methods(self, service): """Build method view for service.""" bindings = { "document/literal": Document(self), "rpc/literal": RPC(self), "rpc/encoded": Encoded(self)} for p in service.ports: binding = p.binding ptype = p.binding.type operations = p.binding.type.operations.values() for name in (op.name for op in operations): m = Facade("Method") m.name = name m.location = p.location m.binding = Facade("binding") op = binding.operation(name) m.soap = op.soap key = "/".join((op.soap.style, op.soap.input.body.use)) m.binding.input = bindings.get(key) key = "/".join((op.soap.style, op.soap.output.body.use)) m.binding.output = bindings.get(key) p.methods[name] = m
def cache_relationships(self, cache_super=True, cache_sub=True): """ Caches the super and sub relationships by doing a prefetch_related. """ relationships_to_cache = compress( ['super_relationships__super_entity', 'sub_relationships__sub_entity'], [cache_super, cache_sub]) return self.prefetch_related(*relationships_to_cache)
Caches the super and sub relationships by doing a prefetch_related.
Below is the the instruction that describes the task: ### Input: Caches the super and sub relationships by doing a prefetch_related. ### Response: def cache_relationships(self, cache_super=True, cache_sub=True): """ Caches the super and sub relationships by doing a prefetch_related. """ relationships_to_cache = compress( ['super_relationships__super_entity', 'sub_relationships__sub_entity'], [cache_super, cache_sub]) return self.prefetch_related(*relationships_to_cache)
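The `compress(...)` call in `cache_relationships` is presumably `itertools.compress` (the import is not shown in the snippet): it keeps each relationship name only when its paired boolean flag is truthy. A quick standalone illustration:

```python
from itertools import compress

relationships = ['super_relationships__super_entity',
                 'sub_relationships__sub_entity']

def select_relationships(cache_super=True, cache_sub=True):
    # same pairing as in cache_relationships: one flag per relationship
    return list(compress(relationships, [cache_super, cache_sub]))
```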
def _parse_dnamasq(filename): ''' Generic function for parsing dnsmasq files including includes. ''' fileopts = {} if not os.path.isfile(filename): raise CommandExecutionError( 'Error: No such file \'{0}\''.format(filename) ) with salt.utils.files.fopen(filename, 'r') as fp_: for line in fp_: line = salt.utils.stringutils.to_unicode(line) if not line.strip(): continue if line.startswith('#'): continue if '=' in line: comps = line.split('=') if comps[0] in fileopts: if isinstance(fileopts[comps[0]], six.string_types): temp = fileopts[comps[0]] fileopts[comps[0]] = [temp] fileopts[comps[0]].append(comps[1].strip()) else: fileopts[comps[0]] = comps[1].strip() else: if 'unparsed' not in fileopts: fileopts['unparsed'] = [] fileopts['unparsed'].append(line) return fileopts
Generic function for parsing dnsmasq files including includes.
Below is the the instruction that describes the task: ### Input: Generic function for parsing dnsmasq files including includes. ### Response: def _parse_dnamasq(filename): ''' Generic function for parsing dnsmasq files including includes. ''' fileopts = {} if not os.path.isfile(filename): raise CommandExecutionError( 'Error: No such file \'{0}\''.format(filename) ) with salt.utils.files.fopen(filename, 'r') as fp_: for line in fp_: line = salt.utils.stringutils.to_unicode(line) if not line.strip(): continue if line.startswith('#'): continue if '=' in line: comps = line.split('=') if comps[0] in fileopts: if isinstance(fileopts[comps[0]], six.string_types): temp = fileopts[comps[0]] fileopts[comps[0]] = [temp] fileopts[comps[0]].append(comps[1].strip()) else: fileopts[comps[0]] = comps[1].strip() else: if 'unparsed' not in fileopts: fileopts['unparsed'] = [] fileopts['unparsed'].append(line) return fileopts
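Stripped of the salt file helpers, `_parse_dnamasq` is a `key=value` parser with two quirks: duplicate keys are promoted to lists, and non-assignment lines collect under an `unparsed` key. A sketch of just that loop (it uses `split('=', 1)` so values containing `=` survive, a small divergence from the snippet, which splits on every `=`):

```python
def parse_lines(lines):
    """Minimal dnsmasq-style parser over an iterable of lines."""
    opts = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith('#'):
            continue
        if '=' in line:
            key, value = line.split('=', 1)
            value = value.strip()
            if key in opts:
                if isinstance(opts[key], str):
                    opts[key] = [opts[key]]  # promote to list on first repeat
                opts[key].append(value)
            else:
                opts[key] = value
        else:
            opts.setdefault('unparsed', []).append(line)
    return opts

opts = parse_lines(['# comment', '', 'domain=example.org',
                    'server=1.1.1.1', 'server=8.8.8.8', 'no-resolv'])
```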
def transform_with(self, estimator, out_ds, fmt=None): """Call the partial_transform method of the estimator on this dataset Parameters ---------- estimator : object with ``partial_fit`` method This object will be used to transform this dataset into a new dataset. The estimator should be fitted prior to calling this method. out_ds : str or Dataset This dataset will be transformed and saved into out_ds. If out_ds is a path, a new dataset will be created at that path. fmt : str The type of dataset to create if out_ds is a string. Returns ------- out_ds : Dataset The tranformed dataset. """ if isinstance(out_ds, str): out_ds = self.create_derived(out_ds, fmt=fmt) elif isinstance(out_ds, _BaseDataset): err = "Dataset must be opened in write mode." assert out_ds.mode in ('w', 'a'), err else: err = "Please specify a dataset path or an existing dataset." raise ValueError(err) for key in self.keys(): out_ds[key] = estimator.partial_transform(self[key]) return out_ds
Call the partial_transform method of the estimator on this dataset Parameters ---------- estimator : object with ``partial_fit`` method This object will be used to transform this dataset into a new dataset. The estimator should be fitted prior to calling this method. out_ds : str or Dataset This dataset will be transformed and saved into out_ds. If out_ds is a path, a new dataset will be created at that path. fmt : str The type of dataset to create if out_ds is a string. Returns ------- out_ds : Dataset The tranformed dataset.
Below is the the instruction that describes the task: ### Input: Call the partial_transform method of the estimator on this dataset Parameters ---------- estimator : object with ``partial_fit`` method This object will be used to transform this dataset into a new dataset. The estimator should be fitted prior to calling this method. out_ds : str or Dataset This dataset will be transformed and saved into out_ds. If out_ds is a path, a new dataset will be created at that path. fmt : str The type of dataset to create if out_ds is a string. Returns ------- out_ds : Dataset The tranformed dataset. ### Response: def transform_with(self, estimator, out_ds, fmt=None): """Call the partial_transform method of the estimator on this dataset Parameters ---------- estimator : object with ``partial_fit`` method This object will be used to transform this dataset into a new dataset. The estimator should be fitted prior to calling this method. out_ds : str or Dataset This dataset will be transformed and saved into out_ds. If out_ds is a path, a new dataset will be created at that path. fmt : str The type of dataset to create if out_ds is a string. Returns ------- out_ds : Dataset The tranformed dataset. """ if isinstance(out_ds, str): out_ds = self.create_derived(out_ds, fmt=fmt) elif isinstance(out_ds, _BaseDataset): err = "Dataset must be opened in write mode." assert out_ds.mode in ('w', 'a'), err else: err = "Please specify a dataset path or an existing dataset." raise ValueError(err) for key in self.keys(): out_ds[key] = estimator.partial_transform(self[key]) return out_ds
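The core of `transform_with` is a key-by-key copy through the estimator's `partial_transform`. A sketch with plain dicts standing in for the Dataset objects and a made-up `Doubler` as the fitted estimator (both are stand-ins, not from the source library):

```python
class Doubler:
    # stand-in estimator: partial_transform is whatever per-key
    # transformation the fitted model applies
    def partial_transform(self, values):
        return [v * 2 for v in values]

def transform_with(src, estimator, out):
    """Copy loop at the heart of transform_with above, on plain dicts."""
    for key in src.keys():
        out[key] = estimator.partial_transform(src[key])
    return out

result = transform_with({'a': [1, 2], 'b': [3]}, Doubler(), {})
```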
def _get_config_file_in_folder(cls, path): """Look for a configuration file in `path`. If exists return its full path, otherwise None. """ if os.path.isfile(path): path = os.path.dirname(path) for fn in cls.PROJECT_CONFIG_FILES: config = RawConfigParser() full_path = os.path.join(path, fn) if config.read(full_path) and cls._get_section_name(config): return full_path
Look for a configuration file in `path`. If exists return its full path, otherwise None.
Below is the the instruction that describes the task: ### Input: Look for a configuration file in `path`. If exists return its full path, otherwise None. ### Response: def _get_config_file_in_folder(cls, path): """Look for a configuration file in `path`. If exists return its full path, otherwise None. """ if os.path.isfile(path): path = os.path.dirname(path) for fn in cls.PROJECT_CONFIG_FILES: config = RawConfigParser() full_path = os.path.join(path, fn) if config.read(full_path) and cls._get_section_name(config): return full_path
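`_get_config_file_in_folder` relies on two class details not shown in the snippet: `PROJECT_CONFIG_FILES` and `_get_section_name`. A standalone sketch with example candidate names and a plain `has_section` check substituted for those (all assumptions, labeled below):

```python
import configparser
import os
import tempfile

# example candidate file names; the real class attribute is not shown above
PROJECT_CONFIG_FILES = ('setup.cfg', 'tox.ini')

def find_config_file(path, section='pep8'):
    """Return the first candidate config file in `path` that parses and
    contains `section`, else None. `section` here replaces the unshown
    _get_section_name check."""
    if os.path.isfile(path):
        path = os.path.dirname(path)
    for fn in PROJECT_CONFIG_FILES:
        config = configparser.RawConfigParser()
        full_path = os.path.join(path, fn)
        # config.read returns the list of files successfully parsed
        if config.read(full_path) and config.has_section(section):
            return full_path
    return None

# Demo in a throwaway directory:
tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, 'setup.cfg'), 'w') as fh:
    fh.write('[pep8]\nmax-line-length = 120\n')
found = find_config_file(tmpdir)
```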
def _process_uniprot_ids(self, limit=None): """ This method processes the mappings from ZFIN gene IDs to UniProtKB IDs. Triples created: <zfin_gene_id> a class <zfin_gene_id> rdfs:label gene_symbol <uniprot_id> is an Individual <uniprot_id> has type <polypeptide> <zfin_gene_id> has_gene_product <uniprot_id> :param limit: :return: """ LOG.info("Processing UniProt IDs") if self.test_mode: graph = self.testgraph else: graph = self.graph line_counter = 0 model = Model(graph) geno = Genotype(graph) raw = '/'.join((self.rawdir, self.files['uniprot']['file'])) with open(raw, 'r', encoding="iso-8859-1") as csvfile: filereader = csv.reader(csvfile, delimiter='\t', quotechar='\"') for row in filereader: line_counter += 1 (gene_id, gene_so_id, gene_symbol, uniprot_id # , empty ) = row if self.test_mode and gene_id not in self.test_ids['gene']: continue gene_id = 'ZFIN:' + gene_id.strip() uniprot_id = 'UniProtKB:' + uniprot_id.strip() geno.addGene(gene_id, gene_symbol) # TODO: Abstract to one of the model utilities model.addIndividualToGraph( uniprot_id, None, self.globaltt['polypeptide']) graph.addTriple( gene_id, self.globaltt['has gene product'], uniprot_id) if not self.test_mode and limit is not None and line_counter > limit: break LOG.info("Done with UniProt IDs") return
This method processes the mappings from ZFIN gene IDs to UniProtKB IDs. Triples created: <zfin_gene_id> a class <zfin_gene_id> rdfs:label gene_symbol <uniprot_id> is an Individual <uniprot_id> has type <polypeptide> <zfin_gene_id> has_gene_product <uniprot_id> :param limit: :return:
Below is the the instruction that describes the task: ### Input: This method processes the mappings from ZFIN gene IDs to UniProtKB IDs. Triples created: <zfin_gene_id> a class <zfin_gene_id> rdfs:label gene_symbol <uniprot_id> is an Individual <uniprot_id> has type <polypeptide> <zfin_gene_id> has_gene_product <uniprot_id> :param limit: :return: ### Response: def _process_uniprot_ids(self, limit=None): """ This method processes the mappings from ZFIN gene IDs to UniProtKB IDs. Triples created: <zfin_gene_id> a class <zfin_gene_id> rdfs:label gene_symbol <uniprot_id> is an Individual <uniprot_id> has type <polypeptide> <zfin_gene_id> has_gene_product <uniprot_id> :param limit: :return: """ LOG.info("Processing UniProt IDs") if self.test_mode: graph = self.testgraph else: graph = self.graph line_counter = 0 model = Model(graph) geno = Genotype(graph) raw = '/'.join((self.rawdir, self.files['uniprot']['file'])) with open(raw, 'r', encoding="iso-8859-1") as csvfile: filereader = csv.reader(csvfile, delimiter='\t', quotechar='\"') for row in filereader: line_counter += 1 (gene_id, gene_so_id, gene_symbol, uniprot_id # , empty ) = row if self.test_mode and gene_id not in self.test_ids['gene']: continue gene_id = 'ZFIN:' + gene_id.strip() uniprot_id = 'UniProtKB:' + uniprot_id.strip() geno.addGene(gene_id, gene_symbol) # TODO: Abstract to one of the model utilities model.addIndividualToGraph( uniprot_id, None, self.globaltt['polypeptide']) graph.addTriple( gene_id, self.globaltt['has gene product'], uniprot_id) if not self.test_mode and limit is not None and line_counter > limit: break LOG.info("Done with UniProt IDs") return
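The row handling in `_process_uniprot_ids` boils down to reading a tab-separated file and prefixing the identifiers into CURIEs. A sketch of that step only (the sample row is illustrative, not taken from the real ZFIN file; the graph and model calls are omitted):

```python
import csv
import io

RAW = "ZDB-GENE-980526-166\tSO:0000704\tshha\tQ92008\n"

def uniprot_rows(text):
    """Yield (zfin_curie, gene_symbol, uniprot_curie) per tab-separated row."""
    for gene_id, gene_so_id, gene_symbol, uniprot_id in csv.reader(
            io.StringIO(text), delimiter='\t', quotechar='"'):
        yield ('ZFIN:' + gene_id.strip(),
               gene_symbol,
               'UniProtKB:' + uniprot_id.strip())

rows = list(uniprot_rows(RAW))
```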
async def stream_frames_stop(self): """Stop streaming frames.""" self._protocol.set_on_packet(None) cmd = "streamframes stop" await self._protocol.send_command(cmd, callback=False)
Stop streaming frames.
Below is the the instruction that describes the task: ### Input: Stop streaming frames. ### Response: async def stream_frames_stop(self): """Stop streaming frames.""" self._protocol.set_on_packet(None) cmd = "streamframes stop" await self._protocol.send_command(cmd, callback=False)
def add_continuous_annotations(self, x, y, colourName='Purple', colour='#c832ff', name='', view=None, vscale=None, presentationName=None): """ add a continuous annotation layer Args: x (float iterable): temporal indices of the dataset y (float iterable): values of the dataset Kwargs: view (<DOM Element: view>): environment view used to display the spectrogram, if set to None, a new view is created Returns: <DOM Element: view>: the view used to store the spectrogram """ model = self.data.appendChild(self.doc.createElement('model')) imodel = self.nbdata for atname, atval in [('id', imodel + 1), ('dataset', imodel), ('name', name), ('sampleRate', self.samplerate), ('start', int(min(x) * self.samplerate)), ('end', int(max(x) * self.samplerate)), ('type', 'sparse'), ('dimensions', '2'), ('resolution', '1'), ('notifyOnAdd', 'true'), ('minimum', min(y)), ('maximum', max(y)), ('units', '') ]: model.setAttribute(atname, str(atval)) # dataset = self.data.appendChild(self.doc.createElement('dataset')) # dataset.setAttribute('id', str(imodel)) # dataset.setAttribute('dimensions', '2') # self.nbdata += 2 # datasetnode = SVDataset2D(self.doc, str(imodel), self.samplerate) # datasetnode.set_data_from_iterable(map(int, np.array(x) * self.samplerate), y) # data = dataset.appendChild(datasetnode) dataset = self.data.appendChild(SVDataset2D(self.doc, str(imodel), self.samplerate)) dataset.set_data_from_iterable(map(int, np.array(x) * self.samplerate), y) self.nbdata += 2 ###### add layers valruler = self.__add_time_ruler() vallayer = self.__add_val_layer(imodel + 1) vallayer.setAttribute('colourName', colourName) vallayer.setAttribute('colour', colour) if presentationName: vallayer.setAttribute('presentationName', presentationName) if vscale is None: vallayer.setAttribute('verticalScale', '0') vallayer.setAttribute('scaleMinimum', str(min(y))) vallayer.setAttribute('scaleMaximum', str(max(y))) else: vallayer.setAttribute('verticalScale', '0') vallayer.setAttribute('scaleMinimum', str(vscale[0])) vallayer.setAttribute('scaleMaximum', str(vscale[1])) if view is None: view = self.__add_view() self.__add_layer_reference(view, valruler) self.__add_layer_reference(view, vallayer) return view
add a continuous annotation layer Args: x (float iterable): temporal indices of the dataset y (float iterable): values of the dataset Kwargs: view (<DOM Element: view>): environment view used to display the spectrogram, if set to None, a new view is created Returns: <DOM Element: view>: the view used to store the spectrogram
Below is the the instruction that describes the task: ### Input: add a continuous annotation layer Args: x (float iterable): temporal indices of the dataset y (float iterable): values of the dataset Kwargs: view (<DOM Element: view>): environment view used to display the spectrogram, if set to None, a new view is created Returns: <DOM Element: view>: the view used to store the spectrogram ### Response: def add_continuous_annotations(self, x, y, colourName='Purple', colour='#c832ff', name='', view=None, vscale=None, presentationName=None): """ add a continuous annotation layer Args: x (float iterable): temporal indices of the dataset y (float iterable): values of the dataset Kwargs: view (<DOM Element: view>): environment view used to display the spectrogram, if set to None, a new view is created Returns: <DOM Element: view>: the view used to store the spectrogram """ model = self.data.appendChild(self.doc.createElement('model')) imodel = self.nbdata for atname, atval in [('id', imodel + 1), ('dataset', imodel), ('name', name), ('sampleRate', self.samplerate), ('start', int(min(x) * self.samplerate)), ('end', int(max(x) * self.samplerate)), ('type', 'sparse'), ('dimensions', '2'), ('resolution', '1'), ('notifyOnAdd', 'true'), ('minimum', min(y)), ('maximum', max(y)), ('units', '') ]: model.setAttribute(atname, str(atval)) # dataset = self.data.appendChild(self.doc.createElement('dataset')) # dataset.setAttribute('id', str(imodel)) # dataset.setAttribute('dimensions', '2') # self.nbdata += 2 # datasetnode = SVDataset2D(self.doc, str(imodel), self.samplerate) # datasetnode.set_data_from_iterable(map(int, np.array(x) * self.samplerate), y) # data = dataset.appendChild(datasetnode) dataset = self.data.appendChild(SVDataset2D(self.doc, str(imodel), self.samplerate)) dataset.set_data_from_iterable(map(int, np.array(x) * self.samplerate), y) self.nbdata += 2 ###### add layers valruler = self.__add_time_ruler() vallayer = self.__add_val_layer(imodel + 1) vallayer.setAttribute('colourName', colourName) vallayer.setAttribute('colour', colour) if presentationName: vallayer.setAttribute('presentationName', presentationName) if vscale is None: vallayer.setAttribute('verticalScale', '0') vallayer.setAttribute('scaleMinimum', str(min(y))) vallayer.setAttribute('scaleMaximum', str(max(y))) else: vallayer.setAttribute('verticalScale', '0') vallayer.setAttribute('scaleMinimum', str(vscale[0])) vallayer.setAttribute('scaleMaximum', str(vscale[1])) if view is None: view = self.__add_view() self.__add_layer_reference(view, valruler) self.__add_layer_reference(view, vallayer) return view
def ip_address(self): """ Public ip_address """ ip = None for eth in self.networks['v4']: if eth['type'] == 'public': ip = eth['ip_address'] break if ip is None: raise ValueError("No public IP found") return ip
Public ip_address
Below is the instruction that describes the task: ### Input: Public ip_address ### Response: def ip_address(self): """ Public ip_address """ ip = None for eth in self.networks['v4']: if eth['type'] == 'public': ip = eth['ip_address'] break if ip is None: raise ValueError("No public IP found") return ip
def make_a_pause(self, timeout=0.0001, check_time_change=True): """ Wait up to timeout and check for system time change. This function checks if the system time changed since the last call. If so, the difference is returned to the caller. The duration of this call is removed from the timeout. If this duration is greater than the required timeout, no sleep is executed and the extra time is returned to the caller If the required timeout was overlapped, then the first return value will be greater than the required timeout. If the required timeout is null, then the timeout value is set as a very short time to keep a nice behavior to the system CPU ;) :param timeout: timeout to wait for activity :type timeout: float :param check_time_change: True (default) to check if the system time changed :type check_time_change: bool :return:Returns a 2-tuple: * first value is the time spent for the time change check * second value is the time change difference :rtype: tuple """ if timeout == 0: timeout = 0.0001 if not check_time_change: # Time to sleep time.sleep(timeout) self.sleep_time += timeout return 0, 0 # Check is system time changed before = time.time() time_changed = self.check_for_system_time_change() after = time.time() elapsed = after - before if elapsed > timeout: return elapsed, time_changed # Time to sleep time.sleep(timeout - elapsed) # Increase our sleep time for the time we slept before += time_changed self.sleep_time += time.time() - before return elapsed, time_changed
Wait up to timeout and check for system time change. This function checks if the system time changed since the last call. If so, the difference is returned to the caller. The duration of this call is removed from the timeout. If this duration is greater than the required timeout, no sleep is executed and the extra time is returned to the caller If the required timeout was overlapped, then the first return value will be greater than the required timeout. If the required timeout is null, then the timeout value is set as a very short time to keep a nice behavior to the system CPU ;) :param timeout: timeout to wait for activity :type timeout: float :param check_time_change: True (default) to check if the system time changed :type check_time_change: bool :return:Returns a 2-tuple: * first value is the time spent for the time change check * second value is the time change difference :rtype: tuple
Below is the instruction that describes the task: ### Input: Wait up to timeout and check for system time change. This function checks if the system time changed since the last call. If so, the difference is returned to the caller. The duration of this call is removed from the timeout. If this duration is greater than the required timeout, no sleep is executed and the extra time is returned to the caller If the required timeout was overlapped, then the first return value will be greater than the required timeout. If the required timeout is null, then the timeout value is set as a very short time to keep a nice behavior to the system CPU ;) :param timeout: timeout to wait for activity :type timeout: float :param check_time_change: True (default) to check if the system time changed :type check_time_change: bool :return:Returns a 2-tuple: * first value is the time spent for the time change check * second value is the time change difference :rtype: tuple ### Response: def make_a_pause(self, timeout=0.0001, check_time_change=True): """ Wait up to timeout and check for system time change. This function checks if the system time changed since the last call. If so, the difference is returned to the caller. The duration of this call is removed from the timeout. If this duration is greater than the required timeout, no sleep is executed and the extra time is returned to the caller If the required timeout was overlapped, then the first return value will be greater than the required timeout. If the required timeout is null, then the timeout value is set as a very short time to keep a nice behavior to the system CPU ;) :param timeout: timeout to wait for activity :type timeout: float :param check_time_change: True (default) to check if the system time changed :type check_time_change: bool :return:Returns a 2-tuple: * first value is the time spent for the time change check * second value is the time change difference :rtype: tuple """ if timeout == 0: timeout = 0.0001 if not check_time_change: # Time to sleep time.sleep(timeout) self.sleep_time += timeout return 0, 0 # Check is system time changed before = time.time() time_changed = self.check_for_system_time_change() after = time.time() elapsed = after - before if elapsed > timeout: return elapsed, time_changed # Time to sleep time.sleep(timeout - elapsed) # Increase our sleep time for the time we slept before += time_changed self.sleep_time += time.time() - before return elapsed, time_changed
def plot(self, ax: GeoAxesSubplot, **kwargs) -> Artist: """Plotting function. All arguments are passed to the geometry""" if "facecolor" not in kwargs: kwargs["facecolor"] = "None" if "edgecolor" not in kwargs: kwargs["edgecolor"] = ax._get_lines.get_next_color() if "projection" in ax.__dict__: return ax.add_geometries([self.shape], crs=PlateCarree(), **kwargs) else: return ax.add_patch( MplPolygon(list(self.shape.exterior.coords), **kwargs) )
Plotting function. All arguments are passed to the geometry
Below is the instruction that describes the task: ### Input: Plotting function. All arguments are passed to the geometry ### Response: def plot(self, ax: GeoAxesSubplot, **kwargs) -> Artist: """Plotting function. All arguments are passed to the geometry""" if "facecolor" not in kwargs: kwargs["facecolor"] = "None" if "edgecolor" not in kwargs: kwargs["edgecolor"] = ax._get_lines.get_next_color() if "projection" in ax.__dict__: return ax.add_geometries([self.shape], crs=PlateCarree(), **kwargs) else: return ax.add_patch( MplPolygon(list(self.shape.exterior.coords), **kwargs) )
def clean_tempdir(context, scenario): """ Clean up temporary test dirs for passed tests. Leave failed test dirs for manual inspection. """ tempdir = getattr(context, 'tempdir', None) if tempdir and scenario.status == 'passed': shutil.rmtree(tempdir) del(context.tempdir)
Clean up temporary test dirs for passed tests. Leave failed test dirs for manual inspection.
Below is the instruction that describes the task: ### Input: Clean up temporary test dirs for passed tests. Leave failed test dirs for manual inspection. ### Response: def clean_tempdir(context, scenario): """ Clean up temporary test dirs for passed tests. Leave failed test dirs for manual inspection. """ tempdir = getattr(context, 'tempdir', None) if tempdir and scenario.status == 'passed': shutil.rmtree(tempdir) del(context.tempdir)
def matchremove_noun_endings(word): """Remove the noun and adverb word endings""" was_stemmed = False """common and proper noun and adjective word endings sorted by charlen, then alph""" noun_endings = ['arons', 'ains', 'aron', 'ment', 'ain', 'age', 'on', 'es', 'ée', 'ee', 'ie', 's'] for ending in noun_endings: """ignore exceptions""" if word in exceptions: word = word was_stemmed = True break if word == ending: word = word was_stemmed = True break """removes noun endings""" if word.endswith(ending): word = re.sub(r'{0}$'.format(ending), '', word) was_stemmed = True break return word, was_stemmed
Remove the noun and adverb word endings
Below is the instruction that describes the task: ### Input: Remove the noun and adverb word endings ### Response: def matchremove_noun_endings(word): """Remove the noun and adverb word endings""" was_stemmed = False """common and proper noun and adjective word endings sorted by charlen, then alph""" noun_endings = ['arons', 'ains', 'aron', 'ment', 'ain', 'age', 'on', 'es', 'ée', 'ee', 'ie', 's'] for ending in noun_endings: """ignore exceptions""" if word in exceptions: word = word was_stemmed = True break if word == ending: word = word was_stemmed = True break """removes noun endings""" if word.endswith(ending): word = re.sub(r'{0}$'.format(ending), '', word) was_stemmed = True break return word, was_stemmed
def mergeWithLabels(json, firstField, firstFieldLabel, secondField, secondFieldLabel): """ merge two fields of a json into an array of { firstFieldLabel : firstFieldLabel, secondFieldLabel : secondField } """ merged = [] for i in range(0, len(json[firstField])): merged.append({ firstFieldLabel : json[firstField][i], secondFieldLabel : json[secondField][i] }) return merged
merge two fields of a json into an array of { firstFieldLabel : firstFieldLabel, secondFieldLabel : secondField }
Below is the instruction that describes the task: ### Input: merge two fields of a json into an array of { firstFieldLabel : firstFieldLabel, secondFieldLabel : secondField } ### Response: def mergeWithLabels(json, firstField, firstFieldLabel, secondField, secondFieldLabel): """ merge two fields of a json into an array of { firstFieldLabel : firstFieldLabel, secondFieldLabel : secondField } """ merged = [] for i in range(0, len(json[firstField])): merged.append({ firstFieldLabel : json[firstField][i], secondFieldLabel : json[secondField][i] }) return merged
def _get_model_nodes(self, model): """ Find all the non-auto created nodes of the model. """ nodes = [(name, node) for name, node in model._nodes.items() if node._is_auto_created is False] nodes.sort(key=lambda n: n[0]) return nodes
Find all the non-auto created nodes of the model.
Below is the instruction that describes the task: ### Input: Find all the non-auto created nodes of the model. ### Response: def _get_model_nodes(self, model): """ Find all the non-auto created nodes of the model. """ nodes = [(name, node) for name, node in model._nodes.items() if node._is_auto_created is False] nodes.sort(key=lambda n: n[0]) return nodes
def write_padding(self, s): """ Write string that are not part of the original file. """ lines = s.splitlines(True) for line in lines: self.stream.write(line) if line[-1] in '\r\n': self._newline() else: # this is the last line self.generated_col += len(line)
Write string that are not part of the original file.
Below is the instruction that describes the task: ### Input: Write string that are not part of the original file. ### Response: def write_padding(self, s): """ Write string that are not part of the original file. """ lines = s.splitlines(True) for line in lines: self.stream.write(line) if line[-1] in '\r\n': self._newline() else: # this is the last line self.generated_col += len(line)
def build_walker(concurrency): """This will return a function suitable for passing to :class:`stacker.plan.Plan` for walking the graph. If concurrency is 1 (no parallelism) this will return a simple topological walker that doesn't use any multithreading. If concurrency is 0, this will return a walker that will walk the graph as fast as the graph topology allows. If concurrency is greater than 1, it will return a walker that will only execute a maximum of concurrency steps at any given time. Returns: func: returns a function to walk a :class:`stacker.dag.DAG`. """ if concurrency == 1: return walk semaphore = UnlimitedSemaphore() if concurrency > 1: semaphore = threading.Semaphore(concurrency) return ThreadedWalker(semaphore).walk
This will return a function suitable for passing to :class:`stacker.plan.Plan` for walking the graph. If concurrency is 1 (no parallelism) this will return a simple topological walker that doesn't use any multithreading. If concurrency is 0, this will return a walker that will walk the graph as fast as the graph topology allows. If concurrency is greater than 1, it will return a walker that will only execute a maximum of concurrency steps at any given time. Returns: func: returns a function to walk a :class:`stacker.dag.DAG`.
Below is the instruction that describes the task: ### Input: This will return a function suitable for passing to :class:`stacker.plan.Plan` for walking the graph. If concurrency is 1 (no parallelism) this will return a simple topological walker that doesn't use any multithreading. If concurrency is 0, this will return a walker that will walk the graph as fast as the graph topology allows. If concurrency is greater than 1, it will return a walker that will only execute a maximum of concurrency steps at any given time. Returns: func: returns a function to walk a :class:`stacker.dag.DAG`. ### Response: def build_walker(concurrency): """This will return a function suitable for passing to :class:`stacker.plan.Plan` for walking the graph. If concurrency is 1 (no parallelism) this will return a simple topological walker that doesn't use any multithreading. If concurrency is 0, this will return a walker that will walk the graph as fast as the graph topology allows. If concurrency is greater than 1, it will return a walker that will only execute a maximum of concurrency steps at any given time. Returns: func: returns a function to walk a :class:`stacker.dag.DAG`. """ if concurrency == 1: return walk semaphore = UnlimitedSemaphore() if concurrency > 1: semaphore = threading.Semaphore(concurrency) return ThreadedWalker(semaphore).walk
def msg(self, msg=None, ret_r=False): '''code's message''' if msg or ret_r: self._msg = msg return self return self._msg
code's message
Below is the instruction that describes the task: ### Input: code's message ### Response: def msg(self, msg=None, ret_r=False): '''code's message''' if msg or ret_r: self._msg = msg return self return self._msg
def db_remove(name, user=None, password=None, host=None, port=None): ''' Remove a database name Database name to remove user The user to connect as password The password of the user host The host to connect to port The port to connect to CLI Example: .. code-block:: bash salt '*' influxdb08.db_remove <name> salt '*' influxdb08.db_remove <name> <user> <password> <host> <port> ''' if not db_exists(name, user, password, host, port): log.info('DB \'%s\' does not exist', name) return False client = _client(user=user, password=password, host=host, port=port) return client.delete_database(name)
Remove a database name Database name to remove user The user to connect as password The password of the user host The host to connect to port The port to connect to CLI Example: .. code-block:: bash salt '*' influxdb08.db_remove <name> salt '*' influxdb08.db_remove <name> <user> <password> <host> <port>
Below is the instruction that describes the task: ### Input: Remove a database name Database name to remove user The user to connect as password The password of the user host The host to connect to port The port to connect to CLI Example: .. code-block:: bash salt '*' influxdb08.db_remove <name> salt '*' influxdb08.db_remove <name> <user> <password> <host> <port> ### Response: def db_remove(name, user=None, password=None, host=None, port=None): ''' Remove a database name Database name to remove user The user to connect as password The password of the user host The host to connect to port The port to connect to CLI Example: .. code-block:: bash salt '*' influxdb08.db_remove <name> salt '*' influxdb08.db_remove <name> <user> <password> <host> <port> ''' if not db_exists(name, user, password, host, port): log.info('DB \'%s\' does not exist', name) return False client = _client(user=user, password=password, host=host, port=port) return client.delete_database(name)
async def update_current_price_info(self): """Update current price info async.""" query = gql( """ { viewer { home(id: "%s") { currentSubscription { priceInfo { current { energy tax total startsAt } } } } } } """ % self.home_id ) price_info_temp = await self._tibber_control.execute(query) if not price_info_temp: _LOGGER.error("Could not find current price info.") return try: home = price_info_temp["viewer"]["home"] current_subscription = home["currentSubscription"] price_info = current_subscription["priceInfo"]["current"] except (KeyError, TypeError): _LOGGER.error("Could not find current price info.") return if price_info: self._current_price_info = price_info
Update current price info async.
Below is the instruction that describes the task: ### Input: Update current price info async. ### Response: async def update_current_price_info(self): """Update current price info async.""" query = gql( """ { viewer { home(id: "%s") { currentSubscription { priceInfo { current { energy tax total startsAt } } } } } } """ % self.home_id ) price_info_temp = await self._tibber_control.execute(query) if not price_info_temp: _LOGGER.error("Could not find current price info.") return try: home = price_info_temp["viewer"]["home"] current_subscription = home["currentSubscription"] price_info = current_subscription["priceInfo"]["current"] except (KeyError, TypeError): _LOGGER.error("Could not find current price info.") return if price_info: self._current_price_info = price_info
def namespace(self): """ Return the Namespace URI (if any) as a String for the current tag """ if self.m_name == -1 or (self.m_event != const.START_TAG and self.m_event != const.END_TAG): return u'' # No Namespace if self.m_namespaceUri == 0xFFFFFFFF: return u'' return self.sb[self.m_namespaceUri]
Return the Namespace URI (if any) as a String for the current tag
Below is the instruction that describes the task: ### Input: Return the Namespace URI (if any) as a String for the current tag ### Response: def namespace(self): """ Return the Namespace URI (if any) as a String for the current tag """ if self.m_name == -1 or (self.m_event != const.START_TAG and self.m_event != const.END_TAG): return u'' # No Namespace if self.m_namespaceUri == 0xFFFFFFFF: return u'' return self.sb[self.m_namespaceUri]
def dump(self, stream, contentType=None, version=None): ''' Serializes this NoteItem to a byte-stream and writes it to the file-like object `stream`. `contentType` and `version` must be one of the supported content-types, and if not specified, will default to ``text/plain``. ''' if contentType is None or contentType == constants.TYPE_TEXT_PLAIN: stream.write(self.body) return if contentType == constants.TYPE_SIF_NOTE: root = ET.Element('note') # TODO: check `version`... ET.SubElement(root, 'SIFVersion').text = '1.1' if self.name is not None: ET.SubElement(root, 'Subject').text = self.name if self.body is not None: ET.SubElement(root, 'Body').text = self.body for name, values in self.extensions.items(): for value in values: ET.SubElement(root, name).text = value ET.ElementTree(root).write(stream) return raise common.InvalidContentType('cannot serialize NoteItem to "%s"' % (contentType,))
Serializes this NoteItem to a byte-stream and writes it to the file-like object `stream`. `contentType` and `version` must be one of the supported content-types, and if not specified, will default to ``text/plain``.
Below is the instruction that describes the task: ### Input: Serializes this NoteItem to a byte-stream and writes it to the file-like object `stream`. `contentType` and `version` must be one of the supported content-types, and if not specified, will default to ``text/plain``. ### Response: def dump(self, stream, contentType=None, version=None): ''' Serializes this NoteItem to a byte-stream and writes it to the file-like object `stream`. `contentType` and `version` must be one of the supported content-types, and if not specified, will default to ``text/plain``. ''' if contentType is None or contentType == constants.TYPE_TEXT_PLAIN: stream.write(self.body) return if contentType == constants.TYPE_SIF_NOTE: root = ET.Element('note') # TODO: check `version`... ET.SubElement(root, 'SIFVersion').text = '1.1' if self.name is not None: ET.SubElement(root, 'Subject').text = self.name if self.body is not None: ET.SubElement(root, 'Body').text = self.body for name, values in self.extensions.items(): for value in values: ET.SubElement(root, name).text = value ET.ElementTree(root).write(stream) return raise common.InvalidContentType('cannot serialize NoteItem to "%s"' % (contentType,))
def download_and_extract_to_mkdtemp(bucket, key, session=None): """Download zip archive and extract it to temporary directory.""" if session: s3_client = session.client('s3') else: s3_client = boto3.client('s3') transfer = S3Transfer(s3_client) filedes, temp_file = tempfile.mkstemp() os.close(filedes) transfer.download_file(bucket, key, temp_file) output_dir = tempfile.mkdtemp() zip_ref = zipfile.ZipFile(temp_file, 'r') zip_ref.extractall(output_dir) zip_ref.close() os.remove(temp_file) return output_dir
Download zip archive and extract it to temporary directory.
Below is the instruction that describes the task: ### Input: Download zip archive and extract it to temporary directory. ### Response: def download_and_extract_to_mkdtemp(bucket, key, session=None): """Download zip archive and extract it to temporary directory.""" if session: s3_client = session.client('s3') else: s3_client = boto3.client('s3') transfer = S3Transfer(s3_client) filedes, temp_file = tempfile.mkstemp() os.close(filedes) transfer.download_file(bucket, key, temp_file) output_dir = tempfile.mkdtemp() zip_ref = zipfile.ZipFile(temp_file, 'r') zip_ref.extractall(output_dir) zip_ref.close() os.remove(temp_file) return output_dir
def tell(self): """Returns the file offset. If this is a compressed file, then the offset in the compressed file is returned. """ if isinstance(self.fileobj, gzip2.GzipFile): return self.fileobj.fileobj.tell() else: return self.fileobj.tell()
Returns the file offset. If this is a compressed file, then the offset in the compressed file is returned.
Below is the instruction that describes the task: ### Input: Returns the file offset. If this is a compressed file, then the offset in the compressed file is returned. ### Response: def tell(self): """Returns the file offset. If this is a compressed file, then the offset in the compressed file is returned. """ if isinstance(self.fileobj, gzip2.GzipFile): return self.fileobj.fileobj.tell() else: return self.fileobj.tell()
def generate_apiary_doc(task_router): """Generate apiary documentation. Create a Apiary generator and add application packages to it. :param task_router: task router, injected :type task_router: TaskRouter :return: apiary generator :rtype: ApiaryDoc """ generator = ApiaryDoc() for m in task_router.get_task_packages() + get_method_packages(): m = importlib.import_module(m) generator.docmodule(m) return generator
Generate apiary documentation. Create a Apiary generator and add application packages to it. :param task_router: task router, injected :type task_router: TaskRouter :return: apiary generator :rtype: ApiaryDoc
Below is the instruction that describes the task: ### Input: Generate apiary documentation. Create a Apiary generator and add application packages to it. :param task_router: task router, injected :type task_router: TaskRouter :return: apiary generator :rtype: ApiaryDoc ### Response: def generate_apiary_doc(task_router): """Generate apiary documentation. Create a Apiary generator and add application packages to it. :param task_router: task router, injected :type task_router: TaskRouter :return: apiary generator :rtype: ApiaryDoc """ generator = ApiaryDoc() for m in task_router.get_task_packages() + get_method_packages(): m = importlib.import_module(m) generator.docmodule(m) return generator