Dataset columns (string lengths):
- code: 75 to 104k characters
- docstring: 1 to 46.9k characters
- text: 164 to 112k characters
def chroot(self, new_workdir):
    """
    Change the workdir of the :class:`Flow`. Mainly used for allowing the user to open
    the GUI on the local host and access the flow from remote via sshfs.

    .. note::

        Calling this method will make the flow go in read-only mode.
    """
    self._chrooted_from = self.workdir
    self.set_workdir(new_workdir, chroot=True)

    for i, work in enumerate(self):
        new_wdir = os.path.join(self.workdir, "w" + str(i))
        work.chroot(new_wdir)
Change the workdir of the :class:`Flow`. Mainly used for allowing the user to open the GUI on the local host and access the flow from remote via sshfs. .. note:: Calling this method will make the flow go in read-only mode.
def axes(self, axes):
    '''Set the linear axis of displacement for this joint.

    Parameters
    ----------
    axes : list containing one 3-tuple of floats
        A list of the axes for this joint. For a slider joint, which has one
        degree of freedom, this must contain one 3-tuple specifying the X, Y,
        and Z axis for the joint.
    '''
    self.lmotor.axes = [axes[0]]
    self.ode_obj.setAxis(tuple(axes[0]))
Set the linear axis of displacement for this joint. Parameters ---------- axes : list containing one 3-tuple of floats A list of the axes for this joint. For a slider joint, which has one degree of freedom, this must contain one 3-tuple specifying the X, Y, and Z axis for the joint.
def add_concept_filter(self, concept, concept_name=None):
    """
    Add a concept filter

    :param concept: concept which will be used as a lowercase string in a search term
    :param concept_name: name of the field that will be searched
    """
    if concept in self.query_params.keys():
        if not concept_name:
            concept_name = concept
        if isinstance(self.query_params[concept], list):
            if self.es_version == '1':
                es_filter = {'or': []}
                for or_filter in self.query_params[concept]:
                    es_filter['or'].append(self._build_concept_term(concept_name, or_filter))
            else:
                es_filter = {"bool": {"should": []}}
                for or_filter in self.query_params[concept]:
                    es_filter["bool"]["should"].append(self._build_concept_term(concept_name, or_filter))
        else:
            es_filter = self._build_concept_term(concept_name, self.query_params[concept])
        self.filters.append(es_filter)
Add a concept filter :param concept: concept which will be used as a lowercase string in a search term :param concept_name: name of the field that will be searched
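A minimal standalone sketch of the filter shapes that `add_concept_filter` builds for the non-legacy Elasticsearch path. The names `build_concept_term` and `build_filter` are hypothetical stand-ins for the class internals (`_build_concept_term` and the method's branching), not part of the original API.

```python
def build_concept_term(field, value):
    # Hypothetical stand-in for self._build_concept_term: a single term
    # clause on the lowercased value.
    return {"term": {field: str(value).lower()}}

def build_filter(field, values):
    # A list of values becomes a bool/should (OR) filter, as in the
    # es_version != '1' branch; a single value becomes one term clause.
    if isinstance(values, list):
        return {"bool": {"should": [build_concept_term(field, v) for v in values]}}
    return build_concept_term(field, values)
```

This mirrors the method's key design choice: the legacy `{'or': [...]}` filter syntax was replaced by `bool`/`should` in later Elasticsearch versions.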
def _init_client(self, from_archive=False):
    """Init client"""
    return MattermostClient(self.url, self.api_token,
                            max_items=self.max_items,
                            sleep_for_rate=self.sleep_for_rate,
                            min_rate_to_sleep=self.min_rate_to_sleep,
                            sleep_time=self.sleep_time,
                            archive=self.archive, from_archive=from_archive)
Init client
def directory_button_clicked(self):
    """Show a dialog to choose directory."""
    # noinspection PyCallByClass,PyTypeChecker
    self.output_directory.setText(QFileDialog.getExistingDirectory(
        self, self.tr('Select download directory')))
Show a dialog to choose directory.
def get_lines(self, force=False):
    """
    Return a list of lists or strings, representing the code body.
    Each list is a block, each string is a statement.

    force (True or False): if an attribute object cannot be included,
    it is usually skipped to be processed later. With 'force' set,
    there will be no waiting: a get_or_create() call is written instead.
    """
    code_lines = []

    # Don't return anything if this is an instance that should be skipped
    if self.skip():
        return []

    # Initialise our new object
    # e.g. model_name_35 = Model()
    code_lines += self.instantiate()

    # Add each field
    # e.g. model_name_35.field_one = 1034.91
    #      model_name_35.field_two = "text"
    code_lines += self.get_waiting_list()

    if force:
        # TODO: Check that M2M are not affected
        code_lines += self.get_waiting_list(force=force)

    # Print the save command for our new object
    # e.g. model_name_35.save()
    if code_lines:
        code_lines.append("%s = importer.save_or_locate(%s)\n" % (self.variable_name, self.variable_name))

    code_lines += self.get_many_to_many_lines(force=force)

    return code_lines
Return a list of lists or strings, representing the code body. Each list is a block, each string is a statement. force (True or False): if an attribute object cannot be included, it is usually skipped to be processed later. With 'force' set, there will be no waiting: a get_or_create() call is written instead.
def resolve_virtual_columns(self, *names):
    """
    Assume that all ``names`` are legacy-style tuple declarations, and generate
    modern columns instances to match the behavior of the old syntax.
    """
    from .views.legacy import get_field_definition
    virtual_columns = {}
    for name in names:
        field = get_field_definition(name)
        column = TextColumn(sources=field.fields, label=field.pretty_name,
                            processor=field.callback)
        column.name = field.pretty_name if field.pretty_name else field.fields[0]
        virtual_columns[name] = column

    # Make sure it's in the same order as originally defined
    new_columns = OrderedDict()
    for name in self._meta.columns:
        # Can't use self.config yet, hasn't been generated
        if self.columns.get(name):
            column = self.columns[name]
        else:
            column = virtual_columns[name]
        new_columns[column.name] = column
    self.columns = new_columns
Assume that all ``names`` are legacy-style tuple declarations, and generate modern columns instances to match the behavior of the old syntax.
def sys_transmit(self, cpu, fd, buf, count, tx_bytes):
    """
    transmit - send bytes through a file descriptor

    The transmit system call writes up to count bytes from the buffer pointed
    to by buf to the file descriptor fd. If count is zero, transmit returns 0
    and optionally sets *tx_bytes to zero.

    :param cpu: current CPU
    :param fd: a valid file descriptor
    :param buf: a memory buffer
    :param count: number of bytes to send
    :param tx_bytes: if valid, points to the actual number of bytes transmitted
    :return: 0       Success
             EBADF   fd is not a valid file descriptor or is not open.
             EFAULT  buf or tx_bytes points to an invalid address.
    """
    data = []
    if count != 0:
        if not self._is_open(fd):
            logger.error("TRANSMIT: Not valid file descriptor. Returning EBADF %d", fd)
            return Decree.CGC_EBADF

        # TODO check count bytes from buf
        if buf not in cpu.memory or (buf + count) not in cpu.memory:
            logger.debug("TRANSMIT: buf points to invalid address. Returning EFAULT")
            return Decree.CGC_EFAULT

        if fd > 2 and self.files[fd].is_full():
            cpu.PC -= cpu.instruction.size
            self.wait([], [fd], None)
            raise RestartSyscall()

        for i in range(0, count):
            value = Operators.CHR(cpu.read_int(buf + i, 8))
            if not isinstance(value, str):
                logger.debug("TRANSMIT: Writing symbolic values to file %d", fd)
                #value = str(value)
            data.append(value)

        self.files[fd].transmit(data)

        logger.info("TRANSMIT(%d, 0x%08x, %d, 0x%08x) -> <%.24r>" %
                    (fd, buf, count, tx_bytes, ''.join([str(x) for x in data])))
        self.syscall_trace.append(("_transmit", fd, data))
        self.signal_transmit(fd)

    # TODO check 4 bytes from tx_bytes
    if tx_bytes:
        if tx_bytes not in cpu.memory:
            logger.debug("TRANSMIT: Not valid tx_bytes pointer on transmit. Returning EFAULT")
            return Decree.CGC_EFAULT
        cpu.write_int(tx_bytes, len(data), 32)

    return 0
transmit - send bytes through a file descriptor The transmit system call writes up to count bytes from the buffer pointed to by buf to the file descriptor fd. If count is zero, transmit returns 0 and optionally sets *tx_bytes to zero. :param cpu current CPU :param fd a valid file descriptor :param buf a memory buffer :param count number of bytes to send :param tx_bytes if valid, points to the actual number of bytes transmitted :return: 0 Success EBADF fd is not a valid file descriptor or is not open. EFAULT buf or tx_bytes points to an invalid address.
def mainloop(self):
    """ Manage category views. """
    # Get client state
    proxy = config.engine.open()
    views = [x for x in sorted(proxy.view.list()) if x.startswith(self.PREFIX)]
    current_view = real_current_view = proxy.ui.current_view()
    if current_view not in views:
        if views:
            current_view = views[0]
        else:
            raise error.UserError("There are no '{}*' views defined at all!".format(self.PREFIX))

    # Check options
    if self.options.list:
        for name in sorted(views):
            print("{} {:5d} {}".format(
                '*' if name == real_current_view else ' ',
                proxy.view.size(xmlrpc.NOHASH, name),
                name[self.PREFIX_LEN:]))
    elif self.options.next or self.options.prev or self.options.update:
        # Determine next in line
        if self.options.update:
            new_view = current_view
        else:
            new_view = (views * 2)[views.index(current_view) + (1 if self.options.next else -1)]
        self.LOG.info("{} category view '{}'.".format(
            "Updating" if self.options.update else "Switching to", new_view))

        # Update and switch to filtered view
        proxy.pyro.category.update(xmlrpc.NOHASH, new_view[self.PREFIX_LEN:])
        proxy.ui.current_view.set(new_view)
    else:
        self.LOG.info("Current category view is '{}'.".format(current_view[self.PREFIX_LEN:]))
        self.LOG.info("Use '--help' to get usage information.")
Manage category views.
def drop_right_t(n):
    """
    Transformation for Sequence.drop_right

    :param n: number to drop from right
    :return: transformation
    """
    if n <= 0:
        end_index = None
    else:
        end_index = -n
    return Transformation(
        'drop_right({0})'.format(n),
        lambda sequence: sequence[:end_index],
        None
    )
Transformation for Sequence.drop_right :param n: number to drop from right :return: transformation
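The slicing trick in `drop_right_t` can be demonstrated without the `Transformation` wrapper. This sketch reproduces only the index logic: `None` as the end index keeps the whole sequence, while `-n` trims the last `n` items.

```python
def drop_right(sequence, n):
    # Mirrors drop_right_t's index logic: n <= 0 keeps everything,
    # otherwise slice off the last n items.
    end_index = None if n <= 0 else -n
    return sequence[:end_index]
```

Note that over-dropping is safe: slicing never raises when `n` exceeds the sequence length, it simply yields an empty sequence.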
def forward_dashboard_event(self, dashboard, job_data, event, job_num):
    """
    Hook to preprocess and publish dashboard events. By default, every event
    is passed to the dashboard's
    :py:meth:`law.job.dashboard.BaseJobDashboard.publish` method unchanged.
    """
    # possible events:
    #   - action.submit
    #   - action.cancel
    #   - status.pending
    #   - status.running
    #   - status.finished
    #   - status.retry
    #   - status.failed
    # forward to dashboard in any event by default
    return dashboard.publish(job_data, event, job_num)
Hook to preprocess and publish dashboard events. By default, every event is passed to the dashboard's :py:meth:`law.job.dashboard.BaseJobDashboard.publish` method unchanged.
def GetSubBitmap(self, x: int, y: int, width: int, height: int) -> 'Bitmap':
    """
    x: int.
    y: int.
    width: int.
    height: int.
    Return `Bitmap`, a sub bitmap of the input rect.
    """
    colors = self.GetPixelColorsOfRect(x, y, width, height)
    bitmap = Bitmap(width, height)
    bitmap.SetPixelColorsOfRect(0, 0, width, height, colors)
    return bitmap
x: int. y: int. width: int. height: int. Return `Bitmap`, a sub bitmap of the input rect.
def get_password_hash(username):
    """
    Fetch a user's password hash.
    """
    try:
        h = spwd.getspnam(username)
    except KeyError:
        return None

    # mitogen.core.Secret() is a Unicode subclass with a repr() that hides the
    # secret data. This keeps secret stuff out of logs. Like blobs, secrets can
    # also be serialized.
    return mitogen.core.Secret(h)
Fetch a user's password hash.
def _get_net(self, entry):
    """Get the network for a specific row"""
    try:
        net = entry[1]
        return net[net.find('(')+1:net.find(')')]
    except IndexError:
        return None
Get the network for a specific row
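The core of `_get_net` is a find-and-slice on the parenthesized part of a field. A standalone sketch, with an explicit guard added for the no-parentheses case (the original would silently return a truncated string there, since `find` returns -1):

```python
def extract_network(entry_field):
    # Same slicing idea as _get_net: take the substring between the
    # first '(' and the first ')'.
    start = entry_field.find('(')
    end = entry_field.find(')')
    if start == -1 or end == -1:
        return None  # no parenthesized network present
    return entry_field[start + 1:end]
```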
def _get_matchable_segments(segments):
    """
    Performs a depth-first search of the segment tree to get all matchable
    segments.
    """
    for subsegment in segments:
        if isinstance(subsegment, Token):
            break  # No tokens allowed next to segments
        if isinstance(subsegment, Segment):
            if isinstance(subsegment, MatchableSegment):
                yield subsegment
            for matchable_subsegment in _get_matchable_segments(subsegment):
                yield matchable_subsegment
Performs a depth-first search of the segment tree to get all matchable segments.
def _notify_reader_writes(writeto):
    """Notify reader closures about these writes and return a sorted list of
    thus-satisfied closures.
    """
    satisfied = []
    for var in writeto:
        if var.readable:
            for reader in var.readers:
                reader.notify_read_ready()
                if reader.satisfied:
                    satisfied.append(reader)
    return Closure.sort(satisfied)
Notify reader closures about these writes and return a sorted list of thus-satisfied closures.
def bin_approach(data):
    """Check for binning approach from configuration or normalized file.
    """
    for approach in ["cnvkit", "gatk-cnv"]:
        if approach in dd.get_svcaller(data):
            return approach
    norm_file = tz.get_in(["depth", "bins", "normalized"], data)
    if norm_file.endswith(("-crstandardized.tsv", "-crdenoised.tsv")):
        return "gatk-cnv"
    if norm_file.endswith(".cnr"):
        return "cnvkit"
Check for binning approach from configuration or normalized file.
def assortativity_attributes(user):
    """
    Computes the assortativity of the nominal attributes.
    This indicator measures the homophily of the current user with their
    correspondents, for each attribute. It returns a value between 0 (no
    assortativity) and 1 (all the contacts share the same value): the
    percentage of contacts sharing the same value.
    """
    matrix = matrix_undirected_unweighted(user)
    neighbors = [k for k in user.network.keys() if k != user.name]

    neighbors_attrbs = {}
    for i, u_name in enumerate(matrix_index(user)):
        correspondent = user.network.get(u_name, None)
        if correspondent is None or u_name == user.name or matrix[0][i] == 0:
            continue
        if correspondent.has_attributes:
            neighbors_attrbs[correspondent.name] = correspondent.attributes

    assortativity = {}
    for a in user.attributes:
        total = sum(1 for n in neighbors
                    if n in neighbors_attrbs and user.attributes[a] == neighbors_attrbs[n][a])
        den = sum(1 for n in neighbors if n in neighbors_attrbs)
        assortativity[a] = total / den if den != 0 else None

    return assortativity
Computes the assortativity of the nominal attributes. This indicator measures the homophily of the current user with their correspondents, for each attribute. It returns a value between 0 (no assortativity) and 1 (all the contacts share the same value): the percentage of contacts sharing the same value.
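The per-attribute score at the heart of `assortativity_attributes` reduces to a simple fraction: how many attribute-bearing neighbors share the user's value, divided by how many there are, with `None` when the denominator is zero. A minimal sketch of just that computation:

```python
def attribute_assortativity(user_value, neighbor_values):
    # Fraction of neighbors sharing the user's attribute value;
    # None when no neighbor has attributes (mirrors the total/den
    # guard in assortativity_attributes).
    den = len(neighbor_values)
    if den == 0:
        return None
    total = sum(1 for v in neighbor_values if v == user_value)
    return total / den
```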
def estimate_voc(photocurrent, saturation_current, nNsVth):
    """
    Rough estimate of open circuit voltage useful for bounding searches for
    ``i`` of ``v`` when using :func:`~pvlib.pvsystem.singlediode`.

    Parameters
    ----------
    photocurrent : numeric
        photo-generated current [A]
    saturation_current : numeric
        diode reverse saturation current [A]
    nNsVth : numeric
        product of thermal voltage ``Vth`` [V], diode ideality factor ``n``,
        and number of series cells ``Ns``

    Returns
    -------
    numeric
        rough estimate of open circuit voltage [V]

    Notes
    -----
    Calculating the open circuit voltage, :math:`V_{oc}`, of an ideal device
    with infinite shunt resistance, :math:`R_{sh} \\to \\infty`, and zero
    series resistance, :math:`R_s = 0`, yields the following equation [1]. As
    an estimate of :math:`V_{oc}` it is useful as an upper bound for the
    bisection method.

    .. math::

        V_{oc, est}=n Ns V_{th} \\log \\left( \\frac{I_L}{I_0} + 1 \\right)

    [1] http://www.pveducation.org/pvcdrom/open-circuit-voltage
    """
    return nNsVth * np.log(np.asarray(photocurrent) / saturation_current + 1.0)
Rough estimate of open circuit voltage useful for bounding searches for ``i`` of ``v`` when using :func:`~pvlib.pvsystem.singlediode`. Parameters ---------- photocurrent : numeric photo-generated current [A] saturation_current : numeric diode reverse saturation current [A] nNsVth : numeric product of thermal voltage ``Vth`` [V], diode ideality factor ``n``, and number of series cells ``Ns`` Returns ------- numeric rough estimate of open circuit voltage [V] Notes ----- Calculating the open circuit voltage, :math:`V_{oc}`, of an ideal device with infinite shunt resistance, :math:`R_{sh} \\to \\infty`, and zero series resistance, :math:`R_s = 0`, yields the following equation [1]. As an estimate of :math:`V_{oc}` it is useful as an upper bound for the bisection method. .. math:: V_{oc, est}=n Ns V_{th} \\log \\left( \\frac{I_L}{I_0} + 1 \\right) [1] http://www.pveducation.org/pvcdrom/open-circuit-voltage
Below is the the instruction that describes the task: ### Input: Rough estimate of open circuit voltage useful for bounding searches for ``i`` of ``v`` when using :func:`~pvlib.pvsystem.singlediode`. Parameters ---------- photocurrent : numeric photo-generated current [A] saturation_current : numeric diode reverse saturation current [A] nNsVth : numeric product of thermal voltage ``Vth`` [V], diode ideality factor ``n``, and number of series cells ``Ns`` Returns ------- numeric rough estimate of open circuit voltage [V] Notes ----- Calculating the open circuit voltage, :math:`V_{oc}`, of an ideal device with infinite shunt resistance, :math:`R_{sh} \\to \\infty`, and zero series resistance, :math:`R_s = 0`, yields the following equation [1]. As an estimate of :math:`V_{oc}` it is useful as an upper bound for the bisection method. .. math:: V_{oc, est}=n Ns V_{th} \\log \\left( \\frac{I_L}{I_0} + 1 \\right) [1] http://www.pveducation.org/pvcdrom/open-circuit-voltage ### Response: def estimate_voc(photocurrent, saturation_current, nNsVth): """ Rough estimate of open circuit voltage useful for bounding searches for ``i`` of ``v`` when using :func:`~pvlib.pvsystem.singlediode`. Parameters ---------- photocurrent : numeric photo-generated current [A] saturation_current : numeric diode reverse saturation current [A] nNsVth : numeric product of thermal voltage ``Vth`` [V], diode ideality factor ``n``, and number of series cells ``Ns`` Returns ------- numeric rough estimate of open circuit voltage [V] Notes ----- Calculating the open circuit voltage, :math:`V_{oc}`, of an ideal device with infinite shunt resistance, :math:`R_{sh} \\to \\infty`, and zero series resistance, :math:`R_s = 0`, yields the following equation [1]. As an estimate of :math:`V_{oc}` it is useful as an upper bound for the bisection method. .. 
math:: V_{oc, est}=n Ns V_{th} \\log \\left( \\frac{I_L}{I_0} + 1 \\right) [1] http://www.pveducation.org/pvcdrom/open-circuit-voltage """ return nNsVth * np.log(np.asarray(photocurrent) / saturation_current + 1.0)
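The `estimate_voc` bound above is simple enough to sanity-check with a scalar sketch (the numeric inputs below are illustrative values, not from the source):

```python
import math

def estimate_voc_scalar(photocurrent, saturation_current, nNsVth):
    # Scalar version of the bound above; the original uses numpy so it
    # also broadcasts over arrays: V_oc,est = n*Ns*Vth * ln(I_L/I_0 + 1).
    return nNsVth * math.log(photocurrent / saturation_current + 1.0)

# Illustrative (made-up) module-level values: I_L = 7 A, I_0 = 1e-9 A,
# n*Ns*Vth = 1.5 V.
voc_est = estimate_voc_scalar(7.0, 1e-9, 1.5)
```

Because the shunt resistance is taken as infinite and the series resistance as zero, this value is an upper bound on the true V_oc, which is what makes it a safe bracket for bisection.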
def tables_with_counts(self): """Return the number of entries in all tables.""" table_to_count = lambda t: self.count_rows(t) return zip(self.tables, map(table_to_count, self.tables))
Return the number of entries in all tables.
Below is the instruction that describes the task: ### Input: Return the number of entries in all tables. ### Response: def tables_with_counts(self): """Return the number of entries in all tables.""" table_to_count = lambda t: self.count_rows(t) return zip(self.tables, map(table_to_count, self.tables))
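A minimal, self-contained stand-in (the `TableStore` class and its data layout are hypothetical) shows the same zip/map pattern:

```python
class TableStore:
    """Hypothetical container: table name -> list of rows."""

    def __init__(self, data):
        self._data = data

    @property
    def tables(self):
        return list(self._data)

    def count_rows(self, table):
        return len(self._data[table])

    def tables_with_counts(self):
        # Same shape as the method above; list() materializes the
        # iterator that zip() returns on Python 3.
        return list(zip(self.tables, map(self.count_rows, self.tables)))
```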
def _parse_zone_group_state(self): """The Zone Group State contains a lot of useful information. Retrieve and parse it, and populate the relevant properties. """ # zoneGroupTopology.GetZoneGroupState()['ZoneGroupState'] returns XML like # this: # # <ZoneGroups> # <ZoneGroup Coordinator="RINCON_000XXX1400" ID="RINCON_000XXXX1400:0"> # <ZoneGroupMember # BootSeq="33" # Configuration="1" # Icon="x-rincon-roomicon:zoneextender" # Invisible="1" # IsZoneBridge="1" # Location="http://192.168.1.100:1400/xml/device_description.xml" # MinCompatibleVersion="22.0-00000" # SoftwareVersion="24.1-74200" # UUID="RINCON_000ZZZ1400" # ZoneName="BRIDGE"/> # </ZoneGroup> # <ZoneGroup Coordinator="RINCON_000XXX1400" ID="RINCON_000XXX1400:46"> # <ZoneGroupMember # BootSeq="44" # Configuration="1" # Icon="x-rincon-roomicon:living" # Location="http://192.168.1.101:1400/xml/device_description.xml" # MinCompatibleVersion="22.0-00000" # SoftwareVersion="24.1-74200" # UUID="RINCON_000XXX1400" # ZoneName="Living Room"/> # <ZoneGroupMember # BootSeq="52" # Configuration="1" # Icon="x-rincon-roomicon:kitchen" # Location="http://192.168.1.102:1400/xml/device_description.xml" # MinCompatibleVersion="22.0-00000" # SoftwareVersion="24.1-74200" # UUID="RINCON_000YYY1400" # ZoneName="Kitchen"/> # </ZoneGroup> # </ZoneGroups> # def parse_zone_group_member(member_element): """Parse a ZoneGroupMember or Satellite element from Zone Group State, create a SoCo instance for the member, set basic attributes and return it.""" # Create a SoCo instance for each member. Because SoCo # instances are singletons, this is cheap if they have already # been created, and useful if they haven't. We can then # update various properties for that instance. member_attribs = member_element.attrib ip_addr = member_attribs['Location'].\ split('//')[1].split(':')[0] zone = config.SOCO_CLASS(ip_addr) # uid doesn't change, but it's not harmful to (re)set it, in case # the zone is as yet unseen. 
zone._uid = member_attribs['UUID'] zone._player_name = member_attribs['ZoneName'] # add the zone to the set of all members, and to the set # of visible members if appropriate is_visible = False if member_attribs.get( 'Invisible') == '1' else True if is_visible: self._visible_zones.add(zone) self._all_zones.add(zone) return zone # This is called quite frequently, so it is worth optimising it. # Maintain a private cache. If the zgt has not changed, there is no # need to repeat all the XML parsing. In addition, switch on network # caching for a short interval (5 secs). zgs = self.zoneGroupTopology.GetZoneGroupState( cache_timeout=5)['ZoneGroupState'] if zgs == self._zgs_cache: return self._zgs_cache = zgs tree = XML.fromstring(zgs.encode('utf-8')) # Empty the set of all zone_groups self._groups.clear() # and the set of all members self._all_zones.clear() self._visible_zones.clear() # Loop over each ZoneGroup Element for group_element in tree.findall('ZoneGroup'): coordinator_uid = group_element.attrib['Coordinator'] group_uid = group_element.attrib['ID'] group_coordinator = None members = set() for member_element in group_element.findall('ZoneGroupMember'): zone = parse_zone_group_member(member_element) # Perform extra processing relevant to direct zone group # members # # If this element has the same UUID as the coordinator, it is # the coordinator if zone._uid == coordinator_uid: group_coordinator = zone zone._is_coordinator = True else: zone._is_coordinator = False # is_bridge doesn't change, but it does no real harm to # set/reset it here, just in case the zone has not been seen # before zone._is_bridge = True if member_element.attrib.get( 'IsZoneBridge') == '1' else False # add the zone to the members for this group members.add(zone) # Loop over Satellite elements if present, and process as for # ZoneGroup elements for satellite_element in member_element.findall('Satellite'): zone = parse_zone_group_member(satellite_element) # Assume a satellite can't be a bridge 
or coordinator, so # no need to check. # # Add the zone to the members for this group. members.add(zone) # Now create a ZoneGroup with this info and add it to the list # of groups self._groups.add(ZoneGroup(group_uid, group_coordinator, members))
The Zone Group State contains a lot of useful information. Retrieve and parse it, and populate the relevant properties.
Below is the the instruction that describes the task: ### Input: The Zone Group State contains a lot of useful information. Retrieve and parse it, and populate the relevant properties. ### Response: def _parse_zone_group_state(self): """The Zone Group State contains a lot of useful information. Retrieve and parse it, and populate the relevant properties. """ # zoneGroupTopology.GetZoneGroupState()['ZoneGroupState'] returns XML like # this: # # <ZoneGroups> # <ZoneGroup Coordinator="RINCON_000XXX1400" ID="RINCON_000XXXX1400:0"> # <ZoneGroupMember # BootSeq="33" # Configuration="1" # Icon="x-rincon-roomicon:zoneextender" # Invisible="1" # IsZoneBridge="1" # Location="http://192.168.1.100:1400/xml/device_description.xml" # MinCompatibleVersion="22.0-00000" # SoftwareVersion="24.1-74200" # UUID="RINCON_000ZZZ1400" # ZoneName="BRIDGE"/> # </ZoneGroup> # <ZoneGroup Coordinator="RINCON_000XXX1400" ID="RINCON_000XXX1400:46"> # <ZoneGroupMember # BootSeq="44" # Configuration="1" # Icon="x-rincon-roomicon:living" # Location="http://192.168.1.101:1400/xml/device_description.xml" # MinCompatibleVersion="22.0-00000" # SoftwareVersion="24.1-74200" # UUID="RINCON_000XXX1400" # ZoneName="Living Room"/> # <ZoneGroupMember # BootSeq="52" # Configuration="1" # Icon="x-rincon-roomicon:kitchen" # Location="http://192.168.1.102:1400/xml/device_description.xml" # MinCompatibleVersion="22.0-00000" # SoftwareVersion="24.1-74200" # UUID="RINCON_000YYY1400" # ZoneName="Kitchen"/> # </ZoneGroup> # </ZoneGroups> # def parse_zone_group_member(member_element): """Parse a ZoneGroupMember or Satellite element from Zone Group State, create a SoCo instance for the member, set basic attributes and return it.""" # Create a SoCo instance for each member. Because SoCo # instances are singletons, this is cheap if they have already # been created, and useful if they haven't. We can then # update various properties for that instance. 
member_attribs = member_element.attrib ip_addr = member_attribs['Location'].\ split('//')[1].split(':')[0] zone = config.SOCO_CLASS(ip_addr) # uid doesn't change, but it's not harmful to (re)set it, in case # the zone is as yet unseen. zone._uid = member_attribs['UUID'] zone._player_name = member_attribs['ZoneName'] # add the zone to the set of all members, and to the set # of visible members if appropriate is_visible = False if member_attribs.get( 'Invisible') == '1' else True if is_visible: self._visible_zones.add(zone) self._all_zones.add(zone) return zone # This is called quite frequently, so it is worth optimising it. # Maintain a private cache. If the zgt has not changed, there is no # need to repeat all the XML parsing. In addition, switch on network # caching for a short interval (5 secs). zgs = self.zoneGroupTopology.GetZoneGroupState( cache_timeout=5)['ZoneGroupState'] if zgs == self._zgs_cache: return self._zgs_cache = zgs tree = XML.fromstring(zgs.encode('utf-8')) # Empty the set of all zone_groups self._groups.clear() # and the set of all members self._all_zones.clear() self._visible_zones.clear() # Loop over each ZoneGroup Element for group_element in tree.findall('ZoneGroup'): coordinator_uid = group_element.attrib['Coordinator'] group_uid = group_element.attrib['ID'] group_coordinator = None members = set() for member_element in group_element.findall('ZoneGroupMember'): zone = parse_zone_group_member(member_element) # Perform extra processing relevant to direct zone group # members # # If this element has the same UUID as the coordinator, it is # the coordinator if zone._uid == coordinator_uid: group_coordinator = zone zone._is_coordinator = True else: zone._is_coordinator = False # is_bridge doesn't change, but it does no real harm to # set/reset it here, just in case the zone has not been seen # before zone._is_bridge = True if member_element.attrib.get( 'IsZoneBridge') == '1' else False # add the zone to the members for this group 
members.add(zone) # Loop over Satellite elements if present, and process as for # ZoneGroup elements for satellite_element in member_element.findall('Satellite'): zone = parse_zone_group_member(satellite_element) # Assume a satellite can't be a bridge or coordinator, so # no need to check. # # Add the zone to the members for this group. members.add(zone) # Now create a ZoneGroup with this info and add it to the list # of groups self._groups.add(ZoneGroup(group_uid, group_coordinator, members))
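The traversal above reduces to ElementTree calls; a stripped-down sketch (SoCo instances replaced by plain tuples, sample XML shortened from the comment in the source) looks like:

```python
import xml.etree.ElementTree as ET

ZGS = """<ZoneGroups>
  <ZoneGroup Coordinator="RINCON_A" ID="RINCON_A:46">
    <ZoneGroupMember UUID="RINCON_A" ZoneName="Living Room"
        Location="http://192.168.1.101:1400/xml/device_description.xml"/>
    <ZoneGroupMember UUID="RINCON_B" ZoneName="Kitchen"
        Location="http://192.168.1.102:1400/xml/device_description.xml"/>
  </ZoneGroup>
</ZoneGroups>"""

def parse_groups(zgs):
    groups = []
    for group in ET.fromstring(zgs).findall("ZoneGroup"):
        coordinator = group.attrib["Coordinator"]
        members = {}
        for member in group.findall("ZoneGroupMember"):
            # Same IP extraction as the source: take the host part of
            # the Location URL.
            ip = member.attrib["Location"].split("//")[1].split(":")[0]
            members[member.attrib["UUID"]] = (member.attrib["ZoneName"], ip)
        groups.append((coordinator, members))
    return groups
```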
def sparql(self, dataset_key, query, desired_mimetype='application/sparql-results+json', **kwargs): """Executes SPARQL queries against a dataset via POST :param dataset_key: Dataset identifier, in the form of owner/id :type dataset_key: str :param query: SPARQL query :type query: str :returns: file object that can be used in file parsers and data handling modules. :rtype: file object :raises RestApiException: If a server error occurs Examples -------- >>> import datadotworld as dw >>> api_client = dw.api_client() >>> api_client.sparql_post('username/test-dataset', ... query) # doctest: +SKIP """ api_client = self._build_api_client( default_mimetype_header_accept=desired_mimetype) sparql_api = kwargs.get('sparql_api_mock', _swagger.SparqlApi(api_client)) owner_id, dataset_id = parse_dataset_key(dataset_key) try: response = sparql_api.sparql_post( owner_id, dataset_id, query, _preload_content=False, **kwargs) return six.BytesIO(response.data) except _swagger.rest.ApiException as e: raise RestApiError(cause=e)
Executes SPARQL queries against a dataset via POST :param dataset_key: Dataset identifier, in the form of owner/id :type dataset_key: str :param query: SPARQL query :type query: str :returns: file object that can be used in file parsers and data handling modules. :rtype: file object :raises RestApiException: If a server error occurs Examples -------- >>> import datadotworld as dw >>> api_client = dw.api_client() >>> api_client.sparql_post('username/test-dataset', ... query) # doctest: +SKIP
Below is the instruction that describes the task: ### Input: Executes SPARQL queries against a dataset via POST :param dataset_key: Dataset identifier, in the form of owner/id :type dataset_key: str :param query: SPARQL query :type query: str :returns: file object that can be used in file parsers and data handling modules. :rtype: file object :raises RestApiException: If a server error occurs Examples -------- >>> import datadotworld as dw >>> api_client = dw.api_client() >>> api_client.sparql_post('username/test-dataset', ... query) # doctest: +SKIP ### Response: def sparql(self, dataset_key, query, desired_mimetype='application/sparql-results+json', **kwargs): """Executes SPARQL queries against a dataset via POST :param dataset_key: Dataset identifier, in the form of owner/id :type dataset_key: str :param query: SPARQL query :type query: str :returns: file object that can be used in file parsers and data handling modules. :rtype: file object :raises RestApiException: If a server error occurs Examples -------- >>> import datadotworld as dw >>> api_client = dw.api_client() >>> api_client.sparql_post('username/test-dataset', ... query) # doctest: +SKIP """ api_client = self._build_api_client( default_mimetype_header_accept=desired_mimetype) sparql_api = kwargs.get('sparql_api_mock', _swagger.SparqlApi(api_client)) owner_id, dataset_id = parse_dataset_key(dataset_key) try: response = sparql_api.sparql_post( owner_id, dataset_id, query, _preload_content=False, **kwargs) return six.BytesIO(response.data) except _swagger.rest.ApiException as e: raise RestApiError(cause=e)
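Two pieces of this client are easy to show offline: splitting the owner/id key (a hypothetical re-implementation of `parse_dataset_key`, which the source imports rather than defines) and wrapping the raw response bytes in a file-like object:

```python
import io

def parse_dataset_key(dataset_key):
    # Hypothetical sketch of the imported helper: owner/id -> (owner, id).
    owner_id, _, dataset_id = dataset_key.partition("/")
    if not dataset_id:
        raise ValueError("dataset_key must be in the form owner/id")
    return owner_id, dataset_id

def wrap_response(raw_bytes):
    # Mirrors six.BytesIO(response.data): callers get a file object
    # they can hand straight to csv/json parsers.
    return io.BytesIO(raw_bytes)
```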
def create_graphviz_digraph(self, vleaf, format=None): ''' Create a :obj:`graphviz.Digraph` object given the leaf variable of a computation graph. One of the nice things about getting ``Digraph`` directly is that the drawn graph can be displayed inline in a Jupyter notebook as described in `Graphviz documentation <https://graphviz.readthedocs.io/en/stable/manual.html#jupyter-notebooks>`_. Args: vleaf (`nnabla.Variable`): End variable. All variables and functions which can be traversed from this variable are shown in the result. format (str): Force overwrite ``format`` (``'pdf'``, ``'png'``, ...) configuration. Returns: graphviz.Digraph ''' from nnabla import get_parameters import copy try: from graphviz import Digraph except ImportError: raise ImportError("Install graphviz. `pip install graphviz.`") if format is None: format = self._format graph = Digraph(format=format) graph.attr("node", style="filled") params = get_parameters(grad_only=False) var2name = {v.data: k for k, v in params.items()} fun2scope = {} var2postname = copy.copy(var2name) def fscope(f): names = [var2name[v.data] for v in f.inputs if v.data in var2name] if names: c = os.path.commonprefix(names) fun2scope[f] = c for n in names: var2postname[params[n].data] = n[len(c):] vleaf.visit(fscope) func = self.functor(graph, self._verbose, fun2scope=fun2scope, var2name=var2postname) vleaf.visit(func) return graph
Create a :obj:`graphviz.Digraph` object given the leaf variable of a computation graph. One of the nice things about getting ``Digraph`` directly is that the drawn graph can be displayed inline in a Jupyter notebook as described in `Graphviz documentation <https://graphviz.readthedocs.io/en/stable/manual.html#jupyter-notebooks>`_. Args: vleaf (`nnabla.Variable`): End variable. All variables and functions which can be traversed from this variable are shown in the result. format (str): Force overwrite ``format`` (``'pdf'``, ``'png'``, ...) configuration. Returns: graphviz.Digraph
Below is the instruction that describes the task: ### Input: Create a :obj:`graphviz.Digraph` object given the leaf variable of a computation graph. One of the nice things about getting ``Digraph`` directly is that the drawn graph can be displayed inline in a Jupyter notebook as described in `Graphviz documentation <https://graphviz.readthedocs.io/en/stable/manual.html#jupyter-notebooks>`_. Args: vleaf (`nnabla.Variable`): End variable. All variables and functions which can be traversed from this variable are shown in the result. format (str): Force overwrite ``format`` (``'pdf'``, ``'png'``, ...) configuration. Returns: graphviz.Digraph ### Response: def create_graphviz_digraph(self, vleaf, format=None): ''' Create a :obj:`graphviz.Digraph` object given the leaf variable of a computation graph. One of the nice things about getting ``Digraph`` directly is that the drawn graph can be displayed inline in a Jupyter notebook as described in `Graphviz documentation <https://graphviz.readthedocs.io/en/stable/manual.html#jupyter-notebooks>`_. Args: vleaf (`nnabla.Variable`): End variable. All variables and functions which can be traversed from this variable are shown in the result. format (str): Force overwrite ``format`` (``'pdf'``, ``'png'``, ...) configuration. Returns: graphviz.Digraph ''' from nnabla import get_parameters import copy try: from graphviz import Digraph except ImportError: raise ImportError("Install graphviz. 
`pip install graphviz.`") if format is None: format = self._format graph = Digraph(format=format) graph.attr("node", style="filled") params = get_parameters(grad_only=False) var2name = {v.data: k for k, v in params.items()} fun2scope = {} var2postname = copy.copy(var2name) def fscope(f): names = [var2name[v.data] for v in f.inputs if v.data in var2name] if names: c = os.path.commonprefix(names) fun2scope[f] = c for n in names: var2postname[params[n].data] = n[len(c):] vleaf.visit(fscope) func = self.functor(graph, self._verbose, fun2scope=fun2scope, var2name=var2postname) vleaf.visit(func) return graph
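The only non-obvious step above is `fscope`, which groups parameters under a scope derived from their common name prefix and strips it for display; the core of it can be isolated:

```python
import os.path

def scope_and_shorten(names):
    # os.path.commonprefix is character-wise (not path-segment-wise),
    # matching the source's behavior; the shared prefix becomes the
    # scope and is stripped from each name.
    scope = os.path.commonprefix(names)
    return scope, [n[len(scope):] for n in names]
```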
def is_appendable_to(self, group): """Returns True if the data can be appended in a given group.""" # First check only the names if not all([k in group for k in self._entries.keys()]): return False # If names are matching, check the contents for k in self._entries.keys(): if not self._entries[k].is_appendable_to(group): return False return True
Returns True if the data can be appended in a given group.
Below is the instruction that describes the task: ### Input: Returns True if the data can be appended in a given group. ### Response: def is_appendable_to(self, group): """Returns True if the data can be appended in a given group.""" # First check only the names if not all([k in group for k in self._entries.keys()]): return False # If names are matching, check the contents for k in self._entries.keys(): if not self._entries[k].is_appendable_to(group): return False return True
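A flattened sketch of the same two-phase check (names first, then contents), with entries reduced to hypothetical name -> dtype maps:

```python
def is_appendable_to(entries, group):
    # Phase 1: every entry name must already exist in the group.
    if not all(k in group for k in entries):
        return False
    # Phase 2: each stored entry must be compatible (here: equal dtype).
    return all(group[k] == dtype for k, dtype in entries.items())
```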
def init_app(self, app, config_group="flask_keystone"): """ Initialize the Flask_Keystone module in an application factory. :param app: `flask.Flask` application to which to connect. :type app: `flask.Flask` :param str config_group: :class:`oslo_config.cfg.OptGroup` to which to attach. When initialized, the extension will apply the :mod:`keystonemiddleware` WSGI middleware to the flask Application, attach its own error handler, and generate a User model based on its :mod:`oslo_config` configuration. """ cfg.CONF.register_opts(RAX_OPTS, group=config_group) self.logger = logging.getLogger(__name__) try: logging.register_options(cfg.CONF) except cfg.ArgsAlreadyParsedError: # pragma: no cover pass logging.setup(cfg.CONF, "flask_keystone") self.config = cfg.CONF[config_group] self.roles = self._parse_roles() self.User = self._make_user_model() self.Anonymous = self._make_anonymous_model() self.logger.debug("Initialized keystone with roles: %s and " "allow_anonymous: %s" % ( self.roles, self.config.allow_anonymous_access )) app.wsgi_app = auth_token.AuthProtocol(app.wsgi_app, {}) self.logger.debug("Adding before_request request handler.") app.before_request(self._make_before_request()) self.logger.debug("Registering Custom Error Handler.") app.register_error_handler(FlaskKeystoneException, handle_exception)
Initialize the Flask_Keystone module in an application factory. :param app: `flask.Flask` application to which to connect. :type app: `flask.Flask` :param str config_group: :class:`oslo_config.cfg.OptGroup` to which to attach. When initialized, the extension will apply the :mod:`keystonemiddleware` WSGI middleware to the flask Application, attach its own error handler, and generate a User model based on its :mod:`oslo_config` configuration.
Below is the instruction that describes the task: ### Input: Initialize the Flask_Keystone module in an application factory. :param app: `flask.Flask` application to which to connect. :type app: `flask.Flask` :param str config_group: :class:`oslo_config.cfg.OptGroup` to which to attach. When initialized, the extension will apply the :mod:`keystonemiddleware` WSGI middleware to the flask Application, attach its own error handler, and generate a User model based on its :mod:`oslo_config` configuration. ### Response: def init_app(self, app, config_group="flask_keystone"): """ Initialize the Flask_Keystone module in an application factory. :param app: `flask.Flask` application to which to connect. :type app: `flask.Flask` :param str config_group: :class:`oslo_config.cfg.OptGroup` to which to attach. When initialized, the extension will apply the :mod:`keystonemiddleware` WSGI middleware to the flask Application, attach its own error handler, and generate a User model based on its :mod:`oslo_config` configuration. """ cfg.CONF.register_opts(RAX_OPTS, group=config_group) self.logger = logging.getLogger(__name__) try: logging.register_options(cfg.CONF) except cfg.ArgsAlreadyParsedError: # pragma: no cover pass logging.setup(cfg.CONF, "flask_keystone") self.config = cfg.CONF[config_group] self.roles = self._parse_roles() self.User = self._make_user_model() self.Anonymous = self._make_anonymous_model() self.logger.debug("Initialized keystone with roles: %s and " "allow_anonymous: %s" % ( self.roles, self.config.allow_anonymous_access )) app.wsgi_app = auth_token.AuthProtocol(app.wsgi_app, {}) self.logger.debug("Adding before_request request handler.") app.before_request(self._make_before_request()) self.logger.debug("Registering Custom Error Handler.") app.register_error_handler(FlaskKeystoneException, handle_exception)
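`_parse_roles` itself is not shown in the source; a plausible sketch (its config format is an assumption) maps `local:upstream` entries onto sets of keystone roles:

```python
def parse_roles(raw_roles):
    # Hypothetical: each entry is "flask_role:keystone_role"; one flask
    # role may be backed by several keystone roles.
    roles = {}
    for entry in raw_roles:
        local, _, upstream = entry.partition(":")
        roles.setdefault(local.strip(), set()).add(upstream.strip())
    return roles
```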
def put(self, request, id, format=None): """ Update an existing bot --- serializer: BotUpdateSerializer responseMessages: - code: 401 message: Not authenticated - code: 400 message: Not valid request """ bot = self.get_bot(id, request.user) serializer = BotUpdateSerializer(bot, data=request.data) if serializer.is_valid(): try: bot = serializer.save() except Exception: return Response(status=status.HTTP_400_BAD_REQUEST) else: return Response(BotSerializer(bot).data) return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)
Update an existing bot --- serializer: BotUpdateSerializer responseMessages: - code: 401 message: Not authenticated - code: 400 message: Not valid request
Below is the instruction that describes the task: ### Input: Update an existing bot --- serializer: BotUpdateSerializer responseMessages: - code: 401 message: Not authenticated - code: 400 message: Not valid request ### Response: def put(self, request, id, format=None): """ Update an existing bot --- serializer: BotUpdateSerializer responseMessages: - code: 401 message: Not authenticated - code: 400 message: Not valid request """ bot = self.get_bot(id, request.user) serializer = BotUpdateSerializer(bot, data=request.data) if serializer.is_valid(): try: bot = serializer.save() except Exception: return Response(status=status.HTTP_400_BAD_REQUEST) else: return Response(BotSerializer(bot).data) return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)
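The validate-then-save flow above can be sketched without the framework (the field whitelist below is made up for illustration; real validation lives in `BotUpdateSerializer`):

```python
def update_bot(bot, data, allowed_fields=("name", "token")):
    # Returns (updated_bot, errors); non-empty errors maps to HTTP 400.
    errors = {k: "unknown field" for k in data if k not in allowed_fields}
    if errors:
        return None, errors
    bot.update(data)
    return bot, {}
```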
def default(self, obj): """Encode values as JSON strings. This method overrides the default implementation from `json.JSONEncoder`. """ if isinstance(obj, datetime.datetime): return self._encode_datetime(obj) # Fallback to the default encoding return json.JSONEncoder.default(self, obj)
Encode values as JSON strings. This method overrides the default implementation from `json.JSONEncoder`.
Below is the instruction that describes the task: ### Input: Encode values as JSON strings. This method overrides the default implementation from `json.JSONEncoder`. ### Response: def default(self, obj): """Encode values as JSON strings. This method overrides the default implementation from `json.JSONEncoder`. """ if isinstance(obj, datetime.datetime): return self._encode_datetime(obj) # Fallback to the default encoding return json.JSONEncoder.default(self, obj)
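A runnable version of the encoder (ISO 8601 output is an assumption, since `_encode_datetime` is not shown in the source):

```python
import datetime
import json

class DateTimeEncoder(json.JSONEncoder):
    def default(self, obj):
        if isinstance(obj, datetime.datetime):
            return obj.isoformat()  # assumed format
        # Fall back to the base class, which raises TypeError for
        # anything it cannot encode.
        return json.JSONEncoder.default(self, obj)

encoded = json.dumps({"t": datetime.datetime(2020, 1, 2, 3, 4, 5)},
                     cls=DateTimeEncoder)
```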
def MICECache(subsystem, parent_cache=None): """Construct a |MICE| cache. Uses either a Redis-backed cache or a local dict cache on the object. Args: subsystem (Subsystem): The subsystem that this is a cache for. Kwargs: parent_cache (MICECache): The cache generated by the uncut version of ``subsystem``. Any cached |MICE| which are unaffected by the cut are reused in this cache. If None, the cache is initialized empty. """ if config.REDIS_CACHE: cls = RedisMICECache else: cls = DictMICECache return cls(subsystem, parent_cache=parent_cache)
Construct a |MICE| cache. Uses either a Redis-backed cache or a local dict cache on the object. Args: subsystem (Subsystem): The subsystem that this is a cache for. Kwargs: parent_cache (MICECache): The cache generated by the uncut version of ``subsystem``. Any cached |MICE| which are unaffected by the cut are reused in this cache. If None, the cache is initialized empty.
Below is the instruction that describes the task: ### Input: Construct a |MICE| cache. Uses either a Redis-backed cache or a local dict cache on the object. Args: subsystem (Subsystem): The subsystem that this is a cache for. Kwargs: parent_cache (MICECache): The cache generated by the uncut version of ``subsystem``. Any cached |MICE| which are unaffected by the cut are reused in this cache. If None, the cache is initialized empty. ### Response: def MICECache(subsystem, parent_cache=None): """Construct a |MICE| cache. Uses either a Redis-backed cache or a local dict cache on the object. Args: subsystem (Subsystem): The subsystem that this is a cache for. Kwargs: parent_cache (MICECache): The cache generated by the uncut version of ``subsystem``. Any cached |MICE| which are unaffected by the cut are reused in this cache. If None, the cache is initialized empty. """ if config.REDIS_CACHE: cls = RedisMICECache else: cls = DictMICECache return cls(subsystem, parent_cache=parent_cache)
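The factory pattern above (pick a class by config, then instantiate with identical arguments) in a self-contained form, with stub cache classes standing in for the real ones:

```python
class DictMICECache:
    """Stub for the local dict-backed cache."""
    def __init__(self, subsystem, parent_cache=None):
        self.subsystem = subsystem
        self.parent_cache = parent_cache

class RedisMICECache(DictMICECache):
    """Stub for the Redis-backed cache."""

def mice_cache(subsystem, parent_cache=None, redis_cache=False):
    # redis_cache plays the role of config.REDIS_CACHE here.
    cls = RedisMICECache if redis_cache else DictMICECache
    return cls(subsystem, parent_cache=parent_cache)
```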
def create_language_model(self, name, base_model_name, dialect=None, description=None, **kwargs): """ Create a custom language model. Creates a new custom language model for a specified base model. The custom language model can be used only with the base model for which it is created. The model is owned by the instance of the service whose credentials are used to create it. **See also:** [Create a custom language model](https://cloud.ibm.com/docs/services/speech-to-text/language-create.html#createModel-language). :param str name: A user-defined name for the new custom language model. Use a name that is unique among all custom language models that you own. Use a localized name that matches the language of the custom model. Use a name that describes the domain of the custom model, such as `Medical custom model` or `Legal custom model`. :param str base_model_name: The name of the base language model that is to be customized by the new custom language model. The new custom model can be used only with the base model that it customizes. To determine whether a base model supports language model customization, use the **Get a model** method and check that the attribute `custom_language_model` is set to `true`. You can also refer to [Language support for customization](https://cloud.ibm.com/docs/services/speech-to-text/custom.html#languageSupport). :param str dialect: The dialect of the specified language that is to be used with the custom language model. The parameter is meaningful only for Spanish models, for which the service creates a custom language model that is suited for speech in one of the following dialects: * `es-ES` for Castilian Spanish (the default) * `es-LA` for Latin American Spanish * `es-US` for North American (Mexican) Spanish A specified dialect must be valid for the base model. By default, the dialect matches the language of the base model; for example, `en-US` for either of the US English language models. 
:param str description: A description of the new custom language model. Use a localized description that matches the language of the custom model. :param dict headers: A `dict` containing the request headers :return: A `DetailedResponse` containing the result, headers and HTTP status code. :rtype: DetailedResponse """ if name is None: raise ValueError('name must be provided') if base_model_name is None: raise ValueError('base_model_name must be provided') headers = {} if 'headers' in kwargs: headers.update(kwargs.get('headers')) sdk_headers = get_sdk_headers('speech_to_text', 'V1', 'create_language_model') headers.update(sdk_headers) data = { 'name': name, 'base_model_name': base_model_name, 'dialect': dialect, 'description': description } url = '/v1/customizations' response = self.request( method='POST', url=url, headers=headers, json=data, accept_json=True) return response
Create a custom language model. Creates a new custom language model for a specified base model. The custom language model can be used only with the base model for which it is created. The model is owned by the instance of the service whose credentials are used to create it. **See also:** [Create a custom language model](https://cloud.ibm.com/docs/services/speech-to-text/language-create.html#createModel-language). :param str name: A user-defined name for the new custom language model. Use a name that is unique among all custom language models that you own. Use a localized name that matches the language of the custom model. Use a name that describes the domain of the custom model, such as `Medical custom model` or `Legal custom model`. :param str base_model_name: The name of the base language model that is to be customized by the new custom language model. The new custom model can be used only with the base model that it customizes. To determine whether a base model supports language model customization, use the **Get a model** method and check that the attribute `custom_language_model` is set to `true`. You can also refer to [Language support for customization](https://cloud.ibm.com/docs/services/speech-to-text/custom.html#languageSupport). :param str dialect: The dialect of the specified language that is to be used with the custom language model. The parameter is meaningful only for Spanish models, for which the service creates a custom language model that is suited for speech in one of the following dialects: * `es-ES` for Castilian Spanish (the default) * `es-LA` for Latin American Spanish * `es-US` for North American (Mexican) Spanish A specified dialect must be valid for the base model. By default, the dialect matches the language of the base model; for example, `en-US` for either of the US English language models. :param str description: A description of the new custom language model. Use a localized description that matches the language of the custom model. 
:param dict headers: A `dict` containing the request headers :return: A `DetailedResponse` containing the result, headers and HTTP status code. :rtype: DetailedResponse
Below is the the instruction that describes the task: ### Input: Create a custom language model. Creates a new custom language model for a specified base model. The custom language model can be used only with the base model for which it is created. The model is owned by the instance of the service whose credentials are used to create it. **See also:** [Create a custom language model](https://cloud.ibm.com/docs/services/speech-to-text/language-create.html#createModel-language). :param str name: A user-defined name for the new custom language model. Use a name that is unique among all custom language models that you own. Use a localized name that matches the language of the custom model. Use a name that describes the domain of the custom model, such as `Medical custom model` or `Legal custom model`. :param str base_model_name: The name of the base language model that is to be customized by the new custom language model. The new custom model can be used only with the base model that it customizes. To determine whether a base model supports language model customization, use the **Get a model** method and check that the attribute `custom_language_model` is set to `true`. You can also refer to [Language support for customization](https://cloud.ibm.com/docs/services/speech-to-text/custom.html#languageSupport). :param str dialect: The dialect of the specified language that is to be used with the custom language model. The parameter is meaningful only for Spanish models, for which the service creates a custom language model that is suited for speech in one of the following dialects: * `es-ES` for Castilian Spanish (the default) * `es-LA` for Latin American Spanish * `es-US` for North American (Mexican) Spanish A specified dialect must be valid for the base model. By default, the dialect matches the language of the base model; for example, `en-US` for either of the US English language models. :param str description: A description of the new custom language model. 
Use a localized description that matches the language of the custom model. :param dict headers: A `dict` containing the request headers :return: A `DetailedResponse` containing the result, headers and HTTP status code. :rtype: DetailedResponse ### Response: def create_language_model(self, name, base_model_name, dialect=None, description=None, **kwargs): """ Create a custom language model. Creates a new custom language model for a specified base model. The custom language model can be used only with the base model for which it is created. The model is owned by the instance of the service whose credentials are used to create it. **See also:** [Create a custom language model](https://cloud.ibm.com/docs/services/speech-to-text/language-create.html#createModel-language). :param str name: A user-defined name for the new custom language model. Use a name that is unique among all custom language models that you own. Use a localized name that matches the language of the custom model. Use a name that describes the domain of the custom model, such as `Medical custom model` or `Legal custom model`. :param str base_model_name: The name of the base language model that is to be customized by the new custom language model. The new custom model can be used only with the base model that it customizes. To determine whether a base model supports language model customization, use the **Get a model** method and check that the attribute `custom_language_model` is set to `true`. You can also refer to [Language support for customization](https://cloud.ibm.com/docs/services/speech-to-text/custom.html#languageSupport). :param str dialect: The dialect of the specified language that is to be used with the custom language model. 
The parameter is meaningful only for Spanish models, for which the service creates a custom language model that is suited for speech in one of the following dialects: * `es-ES` for Castilian Spanish (the default) * `es-LA` for Latin American Spanish * `es-US` for North American (Mexican) Spanish A specified dialect must be valid for the base model. By default, the dialect matches the language of the base model; for example, `en-US` for either of the US English language models. :param str description: A description of the new custom language model. Use a localized description that matches the language of the custom model. :param dict headers: A `dict` containing the request headers :return: A `DetailedResponse` containing the result, headers and HTTP status code. :rtype: DetailedResponse """ if name is None: raise ValueError('name must be provided') if base_model_name is None: raise ValueError('base_model_name must be provided') headers = {} if 'headers' in kwargs: headers.update(kwargs.get('headers')) sdk_headers = get_sdk_headers('speech_to_text', 'V1', 'create_language_model') headers.update(sdk_headers) data = { 'name': name, 'base_model_name': base_model_name, 'dialect': dialect, 'description': description } url = '/v1/customizations' response = self.request( method='POST', url=url, headers=headers, json=data, accept_json=True) return response
def _get_name_filter(package, context="decorate", reparse=False): """Makes sure that the name filters for the specified package have been loaded. Args: package (str): name of the package that this method belongs to. context (str): one of ['decorate', 'time', 'analyze']; specifies which section of the configuration settings to check. """ global name_filters pkey = (package, context) if pkey in name_filters and not reparse: return name_filters[pkey] from acorn.config import settings spack = settings(package) # The acorn.* sections allow for global settings that affect every package # that ever gets wrapped. sections = { "decorate": ["tracking", "acorn.tracking"], "time": ["timing", "acorn.timing"], "analyze": ["analysis", "acorn.analysis"] } filters, rfilters = None, None import re if context in sections: # We are interested in the 'filter' and 'rfilter' options if they exist. filters, rfilters = [], [] ignores, rignores = [], [] for section in sections[context]: if spack.has_section(section): options = spack.options(section) if "filter" in options: filters.extend(re.split(r"\s*\$\s*", spack.get(section, "filter"))) if "rfilter" in options: # pragma: no cover #Until now, the fnmatch filters have been the most #useful. So I don't have any unit tests for regex filters. pfilters = re.split(r"\s*\$\s*", spack.get(section, "rfilter")) rfilters.extend([re.compile(p, re.I) for p in pfilters]) if "ignore" in options: ignores.extend(re.split(r"\s*\$\s*", spack.get(section, "ignore"))) if "rignore" in options: # pragma: no cover pignores = re.split(r"\s*\$\s*", spack.get(section, "rignore")) rignores.extend([re.compile(p, re.I) for p in pignores]) name_filters[pkey] = { "filters": filters, "rfilters": rfilters, "ignores": ignores, "rignores": rignores } else: name_filters[pkey] = None return name_filters[pkey]
Makes sure that the name filters for the specified package have been loaded. Args: package (str): name of the package that this method belongs to. context (str): one of ['decorate', 'time', 'analyze']; specifies which section of the configuration settings to check.
Below is the the instruction that describes the task: ### Input: Makes sure that the name filters for the specified package have been loaded. Args: package (str): name of the package that this method belongs to. context (str): one of ['decorate', 'time', 'analyze']; specifies which section of the configuration settings to check. ### Response: def _get_name_filter(package, context="decorate", reparse=False): """Makes sure that the name filters for the specified package have been loaded. Args: package (str): name of the package that this method belongs to. context (str): one of ['decorate', 'time', 'analyze']; specifies which section of the configuration settings to check. """ global name_filters pkey = (package, context) if pkey in name_filters and not reparse: return name_filters[pkey] from acorn.config import settings spack = settings(package) # The acorn.* sections allow for global settings that affect every package # that ever gets wrapped. sections = { "decorate": ["tracking", "acorn.tracking"], "time": ["timing", "acorn.timing"], "analyze": ["analysis", "acorn.analysis"] } filters, rfilters = None, None import re if context in sections: # We are interested in the 'filter' and 'rfilter' options if they exist. filters, rfilters = [], [] ignores, rignores = [], [] for section in sections[context]: if spack.has_section(section): options = spack.options(section) if "filter" in options: filters.extend(re.split(r"\s*\$\s*", spack.get(section, "filter"))) if "rfilter" in options: # pragma: no cover #Until now, the fnmatch filters have been the most #useful. So I don't have any unit tests for regex filters. 
pfilters = re.split(r"\s*\$\s*", spack.get(section, "rfilter")) rfilters.extend([re.compile(p, re.I) for p in pfilters]) if "ignore" in options: ignores.extend(re.split(r"\s*\$\s*", spack.get(section, "ignore"))) if "rignore" in options: # pragma: no cover pignores = re.split(r"\s*\$\s*", spack.get(section, "rignore")) rignores.extend([re.compile(p, re.I) for p in pignores]) name_filters[pkey] = { "filters": filters, "rfilters": rfilters, "ignores": ignores, "rignores": rignores } else: name_filters[pkey] = None return name_filters[pkey]
def makeConstructor(self, originalConstructor, syntheticMemberList, doesConsumeArguments): """ :type syntheticMemberList: list(SyntheticMember) :type doesConsumeArguments: bool """ # Original constructor's expected args. originalConstructorExpectedArgList = [] doesExpectVariadicArgs = False doesExpectKeywordedArgs = False if inspect.isfunction(originalConstructor) or inspect.ismethod(originalConstructor): argSpec = inspect.getargspec(originalConstructor) # originalConstructorExpectedArgList = expected args - self. originalConstructorExpectedArgList = argSpec.args[1:] doesExpectVariadicArgs = (argSpec.varargs is not None) doesExpectKeywordedArgs = (argSpec.keywords is not None) def init(instance, *args, **kwargs): if doesConsumeArguments: # Merge original constructor's args specification with member list and make an args dict. positionalArgumentKeyValueList = self._positionalArgumentKeyValueList( originalConstructorExpectedArgList, syntheticMemberList, args) # Set members values. for syntheticMember in syntheticMemberList: memberName = syntheticMember.memberName() # Default value. value = syntheticMember.default() # Constructor is synthesized. if doesConsumeArguments: value = self._consumeArgument(memberName, positionalArgumentKeyValueList, kwargs, value) # Checking that the contract is respected. syntheticMember.checkContract(memberName, value) # Initalizing member with a value. setattr(instance, syntheticMember.privateMemberName(), value) if doesConsumeArguments: # Remove superfluous arguments that have been used for synthesization but are not expected by constructor. args, kwargs = self._filterArgsAndKwargs( originalConstructorExpectedArgList=originalConstructorExpectedArgList, syntheticMemberList=syntheticMemberList, positionalArgumentKeyValueList=positionalArgumentKeyValueList, keywordedArgDict=kwargs ) # Call original constructor. if originalConstructor is not None: originalConstructor(instance, *args, **kwargs) return init
:type syntheticMemberList: list(SyntheticMember) :type doesConsumeArguments: bool
Below is the the instruction that describes the task: ### Input: :type syntheticMemberList: list(SyntheticMember) :type doesConsumeArguments: bool ### Response: def makeConstructor(self, originalConstructor, syntheticMemberList, doesConsumeArguments): """ :type syntheticMemberList: list(SyntheticMember) :type doesConsumeArguments: bool """ # Original constructor's expected args. originalConstructorExpectedArgList = [] doesExpectVariadicArgs = False doesExpectKeywordedArgs = False if inspect.isfunction(originalConstructor) or inspect.ismethod(originalConstructor): argSpec = inspect.getargspec(originalConstructor) # originalConstructorExpectedArgList = expected args - self. originalConstructorExpectedArgList = argSpec.args[1:] doesExpectVariadicArgs = (argSpec.varargs is not None) doesExpectKeywordedArgs = (argSpec.keywords is not None) def init(instance, *args, **kwargs): if doesConsumeArguments: # Merge original constructor's args specification with member list and make an args dict. positionalArgumentKeyValueList = self._positionalArgumentKeyValueList( originalConstructorExpectedArgList, syntheticMemberList, args) # Set members values. for syntheticMember in syntheticMemberList: memberName = syntheticMember.memberName() # Default value. value = syntheticMember.default() # Constructor is synthesized. if doesConsumeArguments: value = self._consumeArgument(memberName, positionalArgumentKeyValueList, kwargs, value) # Checking that the contract is respected. syntheticMember.checkContract(memberName, value) # Initalizing member with a value. setattr(instance, syntheticMember.privateMemberName(), value) if doesConsumeArguments: # Remove superfluous arguments that have been used for synthesization but are not expected by constructor. 
args, kwargs = self._filterArgsAndKwargs( originalConstructorExpectedArgList=originalConstructorExpectedArgList, syntheticMemberList=syntheticMemberList, positionalArgumentKeyValueList=positionalArgumentKeyValueList, keywordedArgDict=kwargs ) # Call original constructor. if originalConstructor is not None: originalConstructor(instance, *args, **kwargs) return init
def _clean_dir(root, keep, exclude_pat): ''' Clean out all of the files and directories in a directory (root) while preserving the files in a list (keep) and part of exclude_pat ''' root = os.path.normcase(root) real_keep = _find_keep_files(root, keep) removed = set() def _delete_not_kept(nfn): if nfn not in real_keep: # -- check if this is a part of exclude_pat(only). No need to # check include_pat if not salt.utils.stringutils.check_include_exclude( os.path.relpath(nfn, root), None, exclude_pat): return removed.add(nfn) if not __opts__['test']: try: os.remove(nfn) except OSError: __salt__['file.remove'](nfn) for roots, dirs, files in salt.utils.path.os_walk(root): for name in itertools.chain(dirs, files): _delete_not_kept(os.path.join(roots, name)) return list(removed)
Clean out all of the files and directories in a directory (root) while preserving the files in a list (keep) and part of exclude_pat
Below is the instruction that describes the task: ### Input: Clean out all of the files and directories in a directory (root) while preserving the files in a list (keep) and part of exclude_pat ### Response: def _clean_dir(root, keep, exclude_pat): ''' Clean out all of the files and directories in a directory (root) while preserving the files in a list (keep) and part of exclude_pat ''' root = os.path.normcase(root) real_keep = _find_keep_files(root, keep) removed = set() def _delete_not_kept(nfn): if nfn not in real_keep: # -- check if this is a part of exclude_pat(only). No need to # check include_pat if not salt.utils.stringutils.check_include_exclude( os.path.relpath(nfn, root), None, exclude_pat): return removed.add(nfn) if not __opts__['test']: try: os.remove(nfn) except OSError: __salt__['file.remove'](nfn) for roots, dirs, files in salt.utils.path.os_walk(root): for name in itertools.chain(dirs, files): _delete_not_kept(os.path.join(roots, name)) return list(removed)
def raises(*exceptions): """Test must raise one of expected exceptions to pass. Example use:: @raises(TypeError, ValueError) def test_raises_type_error(): raise TypeError("This test passes") @raises(Exception) def test_that_fails_by_passing(): pass If you want to test many assertions about exceptions in a single test, you may want to use `assert_raises` instead. """ valid = ' or '.join([e.__name__ for e in exceptions]) def decorate(func): name = func.__name__ def newfunc(*arg, **kw): try: func(*arg, **kw) except exceptions: pass except: raise else: message = "%s() did not raise %s" % (name, valid) raise AssertionError(message) newfunc = make_decorator(func)(newfunc) return newfunc return decorate
Test must raise one of expected exceptions to pass. Example use:: @raises(TypeError, ValueError) def test_raises_type_error(): raise TypeError("This test passes") @raises(Exception) def test_that_fails_by_passing(): pass If you want to test many assertions about exceptions in a single test, you may want to use `assert_raises` instead.
Below is the instruction that describes the task: ### Input: Test must raise one of expected exceptions to pass. Example use:: @raises(TypeError, ValueError) def test_raises_type_error(): raise TypeError("This test passes") @raises(Exception) def test_that_fails_by_passing(): pass If you want to test many assertions about exceptions in a single test, you may want to use `assert_raises` instead. ### Response: def raises(*exceptions): """Test must raise one of expected exceptions to pass. Example use:: @raises(TypeError, ValueError) def test_raises_type_error(): raise TypeError("This test passes") @raises(Exception) def test_that_fails_by_passing(): pass If you want to test many assertions about exceptions in a single test, you may want to use `assert_raises` instead. """ valid = ' or '.join([e.__name__ for e in exceptions]) def decorate(func): name = func.__name__ def newfunc(*arg, **kw): try: func(*arg, **kw) except exceptions: pass except: raise else: message = "%s() did not raise %s" % (name, valid) raise AssertionError(message) newfunc = make_decorator(func)(newfunc) return newfunc return decorate
def do_lisp(self, subcmd, opts, folder=""): """${cmd_name}: list messages in the specified folder in JSON format ${cmd_usage} """ client = MdClient(self.maildir, filesystem=self.filesystem) client.lisp( foldername=folder, stream=self.stdout, reverse=getattr(opts, "reverse", False), since=float(getattr(opts, "since", -1)) )
${cmd_name}: list messages in the specified folder in JSON format ${cmd_usage}
Below is the instruction that describes the task: ### Input: ${cmd_name}: list messages in the specified folder in JSON format ${cmd_usage} ### Response: def do_lisp(self, subcmd, opts, folder=""): """${cmd_name}: list messages in the specified folder in JSON format ${cmd_usage} """ client = MdClient(self.maildir, filesystem=self.filesystem) client.lisp( foldername=folder, stream=self.stdout, reverse=getattr(opts, "reverse", False), since=float(getattr(opts, "since", -1)) )
def perform_command(self): """ Perform command and return the appropriate exit code. :rtype: int """ if len(self.actual_arguments) < 4: return self.print_help() text_format = gf.safe_unicode(self.actual_arguments[0]) if text_format == u"list": text = gf.safe_unicode(self.actual_arguments[1]) elif text_format in TextFileFormat.ALLOWED_VALUES: text = self.actual_arguments[1] if not self.check_input_file(text): return self.ERROR_EXIT_CODE else: return self.print_help() l1_id_regex = self.has_option_with_value(u"--l1-id-regex") l2_id_regex = self.has_option_with_value(u"--l2-id-regex") l3_id_regex = self.has_option_with_value(u"--l3-id-regex") id_regex = self.has_option_with_value(u"--id-regex") class_regex = self.has_option_with_value(u"--class-regex") sort = self.has_option_with_value(u"--sort") parameters = { gc.PPN_TASK_IS_TEXT_MUNPARSED_L1_ID_REGEX: l1_id_regex, gc.PPN_TASK_IS_TEXT_MUNPARSED_L2_ID_REGEX: l2_id_regex, gc.PPN_TASK_IS_TEXT_MUNPARSED_L3_ID_REGEX: l3_id_regex, gc.PPN_TASK_IS_TEXT_UNPARSED_CLASS_REGEX: class_regex, gc.PPN_TASK_IS_TEXT_UNPARSED_ID_REGEX: id_regex, gc.PPN_TASK_IS_TEXT_UNPARSED_ID_SORT: sort, } if (text_format == TextFileFormat.MUNPARSED) and ((l1_id_regex is None) or (l2_id_regex is None) or (l3_id_regex is None)): self.print_error(u"You must specify --l1-id-regex and --l2-id-regex and --l3-id-regex for munparsed format") return self.ERROR_EXIT_CODE if (text_format == TextFileFormat.UNPARSED) and (id_regex is None) and (class_regex is None): self.print_error(u"You must specify --id-regex and/or --class-regex for unparsed format") return self.ERROR_EXIT_CODE language = gf.safe_unicode(self.actual_arguments[2]) audio_file_path = self.actual_arguments[3] if not self.check_input_file(audio_file_path): return self.ERROR_EXIT_CODE text_file = self.get_text_file(text_format, text, parameters) if text_file is None: self.print_error(u"Unable to build a TextFile from the given parameters") return self.ERROR_EXIT_CODE elif len(text_file) == 0: 
self.print_error(u"No text fragments found") return self.ERROR_EXIT_CODE text_file.set_language(language) self.print_info(u"Read input text with %d fragments" % (len(text_file))) self.print_info(u"Reading audio...") try: audio_file_mfcc = AudioFileMFCC(audio_file_path, rconf=self.rconf, logger=self.logger) except AudioFileConverterError: self.print_error(u"Unable to call the ffmpeg executable '%s'" % (self.rconf[RuntimeConfiguration.FFMPEG_PATH])) self.print_error(u"Make sure the path to ffmpeg is correct") return self.ERROR_EXIT_CODE except (AudioFileUnsupportedFormatError, AudioFileNotInitializedError): self.print_error(u"Cannot read file '%s'" % (audio_file_path)) self.print_error(u"Check that its format is supported by ffmpeg") return self.ERROR_EXIT_CODE except Exception as exc: self.print_error(u"An unexpected error occurred while reading the audio file:") self.print_error(u"%s" % exc) return self.ERROR_EXIT_CODE self.print_info(u"Reading audio... done") self.print_info(u"Running VAD...") audio_file_mfcc.run_vad() self.print_info(u"Running VAD... done") min_head = gf.safe_float(self.has_option_with_value(u"--min-head"), None) max_head = gf.safe_float(self.has_option_with_value(u"--max-head"), None) min_tail = gf.safe_float(self.has_option_with_value(u"--min-tail"), None) max_tail = gf.safe_float(self.has_option_with_value(u"--max-tail"), None) self.print_info(u"Detecting audio interval...") start_detector = SD(audio_file_mfcc, text_file, rconf=self.rconf, logger=self.logger) start, end = start_detector.detect_interval(min_head, max_head, min_tail, max_tail) self.print_info(u"Detecting audio interval... done") self.print_result(audio_file_mfcc.audio_length, start, end) return self.NO_ERROR_EXIT_CODE
Perform command and return the appropriate exit code. :rtype: int
Below is the the instruction that describes the task: ### Input: Perform command and return the appropriate exit code. :rtype: int ### Response: def perform_command(self): """ Perform command and return the appropriate exit code. :rtype: int """ if len(self.actual_arguments) < 4: return self.print_help() text_format = gf.safe_unicode(self.actual_arguments[0]) if text_format == u"list": text = gf.safe_unicode(self.actual_arguments[1]) elif text_format in TextFileFormat.ALLOWED_VALUES: text = self.actual_arguments[1] if not self.check_input_file(text): return self.ERROR_EXIT_CODE else: return self.print_help() l1_id_regex = self.has_option_with_value(u"--l1-id-regex") l2_id_regex = self.has_option_with_value(u"--l2-id-regex") l3_id_regex = self.has_option_with_value(u"--l3-id-regex") id_regex = self.has_option_with_value(u"--id-regex") class_regex = self.has_option_with_value(u"--class-regex") sort = self.has_option_with_value(u"--sort") parameters = { gc.PPN_TASK_IS_TEXT_MUNPARSED_L1_ID_REGEX: l1_id_regex, gc.PPN_TASK_IS_TEXT_MUNPARSED_L2_ID_REGEX: l2_id_regex, gc.PPN_TASK_IS_TEXT_MUNPARSED_L3_ID_REGEX: l3_id_regex, gc.PPN_TASK_IS_TEXT_UNPARSED_CLASS_REGEX: class_regex, gc.PPN_TASK_IS_TEXT_UNPARSED_ID_REGEX: id_regex, gc.PPN_TASK_IS_TEXT_UNPARSED_ID_SORT: sort, } if (text_format == TextFileFormat.MUNPARSED) and ((l1_id_regex is None) or (l2_id_regex is None) or (l3_id_regex is None)): self.print_error(u"You must specify --l1-id-regex and --l2-id-regex and --l3-id-regex for munparsed format") return self.ERROR_EXIT_CODE if (text_format == TextFileFormat.UNPARSED) and (id_regex is None) and (class_regex is None): self.print_error(u"You must specify --id-regex and/or --class-regex for unparsed format") return self.ERROR_EXIT_CODE language = gf.safe_unicode(self.actual_arguments[2]) audio_file_path = self.actual_arguments[3] if not self.check_input_file(audio_file_path): return self.ERROR_EXIT_CODE text_file = self.get_text_file(text_format, text, parameters) if 
text_file is None: self.print_error(u"Unable to build a TextFile from the given parameters") return self.ERROR_EXIT_CODE elif len(text_file) == 0: self.print_error(u"No text fragments found") return self.ERROR_EXIT_CODE text_file.set_language(language) self.print_info(u"Read input text with %d fragments" % (len(text_file))) self.print_info(u"Reading audio...") try: audio_file_mfcc = AudioFileMFCC(audio_file_path, rconf=self.rconf, logger=self.logger) except AudioFileConverterError: self.print_error(u"Unable to call the ffmpeg executable '%s'" % (self.rconf[RuntimeConfiguration.FFMPEG_PATH])) self.print_error(u"Make sure the path to ffmpeg is correct") return self.ERROR_EXIT_CODE except (AudioFileUnsupportedFormatError, AudioFileNotInitializedError): self.print_error(u"Cannot read file '%s'" % (audio_file_path)) self.print_error(u"Check that its format is supported by ffmpeg") return self.ERROR_EXIT_CODE except Exception as exc: self.print_error(u"An unexpected error occurred while reading the audio file:") self.print_error(u"%s" % exc) return self.ERROR_EXIT_CODE self.print_info(u"Reading audio... done") self.print_info(u"Running VAD...") audio_file_mfcc.run_vad() self.print_info(u"Running VAD... done") min_head = gf.safe_float(self.has_option_with_value(u"--min-head"), None) max_head = gf.safe_float(self.has_option_with_value(u"--max-head"), None) min_tail = gf.safe_float(self.has_option_with_value(u"--min-tail"), None) max_tail = gf.safe_float(self.has_option_with_value(u"--max-tail"), None) self.print_info(u"Detecting audio interval...") start_detector = SD(audio_file_mfcc, text_file, rconf=self.rconf, logger=self.logger) start, end = start_detector.detect_interval(min_head, max_head, min_tail, max_tail) self.print_info(u"Detecting audio interval... done") self.print_result(audio_file_mfcc.audio_length, start, end) return self.NO_ERROR_EXIT_CODE
def get_reference_data( self, modified_since: Optional[datetime.datetime] = None ) -> GetReferenceDataResponse: """ Fetches API reference data. :param modified_since: The response will be empty if no changes have been made to the reference data since this timestamp, otherwise all reference data will be returned. """ if modified_since is None: modified_since = datetime.datetime(year=2010, month=1, day=1) response = requests.get( '{}/lovs'.format(API_URL_BASE), headers={ 'if-modified-since': self._format_dt(modified_since), **self._get_headers(), }, timeout=self._timeout, ) if not response.ok: raise FuelCheckError.create(response) # return response.text return GetReferenceDataResponse.deserialize(response.json())
Fetches API reference data. :param modified_since: The response will be empty if no changes have been made to the reference data since this timestamp, otherwise all reference data will be returned.
Below is the instruction that describes the task: ### Input: Fetches API reference data. :param modified_since: The response will be empty if no changes have been made to the reference data since this timestamp, otherwise all reference data will be returned. ### Response: def get_reference_data( self, modified_since: Optional[datetime.datetime] = None ) -> GetReferenceDataResponse: """ Fetches API reference data. :param modified_since: The response will be empty if no changes have been made to the reference data since this timestamp, otherwise all reference data will be returned. """ if modified_since is None: modified_since = datetime.datetime(year=2010, month=1, day=1) response = requests.get( '{}/lovs'.format(API_URL_BASE), headers={ 'if-modified-since': self._format_dt(modified_since), **self._get_headers(), }, timeout=self._timeout, ) if not response.ok: raise FuelCheckError.create(response) # return response.text return GetReferenceDataResponse.deserialize(response.json())
def _wait(self, objects, attr, value, wait_interval=None, wait_time=None): r""" Calls the ``fetch`` method of each object in ``objects`` periodically until the ``attr`` attribute of each one equals ``value``, yielding the final state of each object as soon as it satisfies the condition. If ``wait_time`` is exceeded, a `WaitTimeoutError` (containing any remaining in-progress objects) is raised. If a `KeyboardInterrupt` is caught, any remaining objects are returned immediately without waiting for completion. .. versionchanged:: 0.2.0 Raises `WaitTimeoutError` on timeout :param iterable objects: an iterable of `Resource`\ s with ``fetch`` methods :param string attr: the attribute to watch :param value: the value of ``attr`` to wait for :param number wait_interval: how many seconds to sleep between requests; defaults to :attr:`wait_interval` if not specified or `None` :param number wait_time: the total number of seconds after which the method will raise an error if any objects have not yet completed, or a negative number to wait indefinitely; defaults to :attr:`wait_time` if not specified or `None` :rtype: generator :raises DOAPIError: if the API endpoint replies with an error :raises WaitTimeoutError: if ``wait_time`` is exceeded """ objects = list(objects) if not objects: return if wait_interval is None: wait_interval = self.wait_interval if wait_time < 0: end_time = None else: if wait_time is None: wait_time = self.wait_time if wait_time is None or wait_time < 0: end_time = None else: end_time = time() + wait_time while end_time is None or time() < end_time: loop_start = time() next_objs = [] for o in objects: obj = o.fetch() if getattr(obj, attr, None) == value: yield obj else: next_objs.append(obj) objects = next_objs if not objects: break loop_end = time() time_left = wait_interval - (loop_end - loop_start) if end_time is not None: time_left = min(time_left, end_time - loop_end) if time_left > 0: try: sleep(time_left) except KeyboardInterrupt: for o in objects: 
yield o return if objects: raise WaitTimeoutError(objects, attr, value, wait_interval, wait_time)
r""" Calls the ``fetch`` method of each object in ``objects`` periodically until the ``attr`` attribute of each one equals ``value``, yielding the final state of each object as soon as it satisfies the condition. If ``wait_time`` is exceeded, a `WaitTimeoutError` (containing any remaining in-progress objects) is raised. If a `KeyboardInterrupt` is caught, any remaining objects are returned immediately without waiting for completion. .. versionchanged:: 0.2.0 Raises `WaitTimeoutError` on timeout :param iterable objects: an iterable of `Resource`\ s with ``fetch`` methods :param string attr: the attribute to watch :param value: the value of ``attr`` to wait for :param number wait_interval: how many seconds to sleep between requests; defaults to :attr:`wait_interval` if not specified or `None` :param number wait_time: the total number of seconds after which the method will raise an error if any objects have not yet completed, or a negative number to wait indefinitely; defaults to :attr:`wait_time` if not specified or `None` :rtype: generator :raises DOAPIError: if the API endpoint replies with an error :raises WaitTimeoutError: if ``wait_time`` is exceeded
Below is the the instruction that describes the task: ### Input: r""" Calls the ``fetch`` method of each object in ``objects`` periodically until the ``attr`` attribute of each one equals ``value``, yielding the final state of each object as soon as it satisfies the condition. If ``wait_time`` is exceeded, a `WaitTimeoutError` (containing any remaining in-progress objects) is raised. If a `KeyboardInterrupt` is caught, any remaining objects are returned immediately without waiting for completion. .. versionchanged:: 0.2.0 Raises `WaitTimeoutError` on timeout :param iterable objects: an iterable of `Resource`\ s with ``fetch`` methods :param string attr: the attribute to watch :param value: the value of ``attr`` to wait for :param number wait_interval: how many seconds to sleep between requests; defaults to :attr:`wait_interval` if not specified or `None` :param number wait_time: the total number of seconds after which the method will raise an error if any objects have not yet completed, or a negative number to wait indefinitely; defaults to :attr:`wait_time` if not specified or `None` :rtype: generator :raises DOAPIError: if the API endpoint replies with an error :raises WaitTimeoutError: if ``wait_time`` is exceeded ### Response: def _wait(self, objects, attr, value, wait_interval=None, wait_time=None): r""" Calls the ``fetch`` method of each object in ``objects`` periodically until the ``attr`` attribute of each one equals ``value``, yielding the final state of each object as soon as it satisfies the condition. If ``wait_time`` is exceeded, a `WaitTimeoutError` (containing any remaining in-progress objects) is raised. If a `KeyboardInterrupt` is caught, any remaining objects are returned immediately without waiting for completion. .. 
versionchanged:: 0.2.0 Raises `WaitTimeoutError` on timeout :param iterable objects: an iterable of `Resource`\ s with ``fetch`` methods :param string attr: the attribute to watch :param value: the value of ``attr`` to wait for :param number wait_interval: how many seconds to sleep between requests; defaults to :attr:`wait_interval` if not specified or `None` :param number wait_time: the total number of seconds after which the method will raise an error if any objects have not yet completed, or a negative number to wait indefinitely; defaults to :attr:`wait_time` if not specified or `None` :rtype: generator :raises DOAPIError: if the API endpoint replies with an error :raises WaitTimeoutError: if ``wait_time`` is exceeded """ objects = list(objects) if not objects: return if wait_interval is None: wait_interval = self.wait_interval if wait_time < 0: end_time = None else: if wait_time is None: wait_time = self.wait_time if wait_time is None or wait_time < 0: end_time = None else: end_time = time() + wait_time while end_time is None or time() < end_time: loop_start = time() next_objs = [] for o in objects: obj = o.fetch() if getattr(obj, attr, None) == value: yield obj else: next_objs.append(obj) objects = next_objs if not objects: break loop_end = time() time_left = wait_interval - (loop_end - loop_start) if end_time is not None: time_left = min(time_left, end_time - loop_end) if time_left > 0: try: sleep(time_left) except KeyboardInterrupt: for o in objects: yield o return if objects: raise WaitTimeoutError(objects, attr, value, wait_interval, wait_time)
def remoteIndexer2to3(oldIndexer):
    """
    The documentType keyword was added to all indexable items.  Indexes
    need to be regenerated for this to take effect.  Also, PyLucene no
    longer stores the text of messages it indexes, so deleting and
    re-creating the indexes will make them much smaller.
    """
    newIndexer = oldIndexer.upgradeVersion(
        oldIndexer.typeName, 2, 3,
        indexCount=oldIndexer.indexCount,
        installedOn=oldIndexer.installedOn,
        indexDirectory=oldIndexer.indexDirectory)
    # the 3->4 upgrader for PyLuceneIndexer calls reset(), so don't do it
    # here.  also, it won't work because it's a DummyItem
    if oldIndexer.typeName != PyLuceneIndexer.typeName:
        newIndexer.reset()
    return newIndexer
The documentType keyword was added to all indexable items. Indexes need to be regenerated for this to take effect. Also, PyLucene no longer stores the text of messages it indexes, so deleting and re-creating the indexes will make them much smaller.
Below is the instruction that describes the task:
### Input:
The documentType keyword was added to all indexable items.  Indexes
need to be regenerated for this to take effect.  Also, PyLucene no
longer stores the text of messages it indexes, so deleting and
re-creating the indexes will make them much smaller.
### Response:
def remoteIndexer2to3(oldIndexer):
    """
    The documentType keyword was added to all indexable items.  Indexes
    need to be regenerated for this to take effect.  Also, PyLucene no
    longer stores the text of messages it indexes, so deleting and
    re-creating the indexes will make them much smaller.
    """
    newIndexer = oldIndexer.upgradeVersion(
        oldIndexer.typeName, 2, 3,
        indexCount=oldIndexer.indexCount,
        installedOn=oldIndexer.installedOn,
        indexDirectory=oldIndexer.indexDirectory)
    # the 3->4 upgrader for PyLuceneIndexer calls reset(), so don't do it
    # here.  also, it won't work because it's a DummyItem
    if oldIndexer.typeName != PyLuceneIndexer.typeName:
        newIndexer.reset()
    return newIndexer
def pause(self):
    """Set the execution mode to paused
    """
    if self.state_machine_manager.active_state_machine_id is None:
        logger.info("'Pause' is not a valid action to initiate state machine execution.")
        return
    if self.state_machine_manager.get_active_state_machine() is not None:
        self.state_machine_manager.get_active_state_machine().root_state.recursively_pause_states()
    logger.debug("Pause execution ...")
    self.set_execution_mode(StateMachineExecutionStatus.PAUSED)
Set the execution mode to paused
Below is the instruction that describes the task:
### Input:
Set the execution mode to paused
### Response:
def pause(self):
    """Set the execution mode to paused
    """
    if self.state_machine_manager.active_state_machine_id is None:
        logger.info("'Pause' is not a valid action to initiate state machine execution.")
        return
    if self.state_machine_manager.get_active_state_machine() is not None:
        self.state_machine_manager.get_active_state_machine().root_state.recursively_pause_states()
    logger.debug("Pause execution ...")
    self.set_execution_mode(StateMachineExecutionStatus.PAUSED)
def promise_method(func):
    """
    A decorator which ensures that once a method has been marked as
    resolved (via Class.__resolved) it will then propagate the attribute
    (function) call upstream.
    """
    name = func.__name__

    @wraps(func)
    def wrapped(self, *args, **kwargs):
        cls_name = type(self).__name__
        if getattr(self, '_%s__resolved' % (cls_name,)):
            return getattr(getattr(self, '_%s__wrapped' % (cls_name,)),
                           name)(*args, **kwargs)
        return func(self, *args, **kwargs)
    return wrapped
A decorator which ensures that once a method has been marked as resolved (via Class.__resolved) it will then propagate the attribute (function) call upstream.
Below is the instruction that describes the task:
### Input:
A decorator which ensures that once a method has been marked as
resolved (via Class.__resolved) it will then propagate the attribute
(function) call upstream.
### Response:
def promise_method(func):
    """
    A decorator which ensures that once a method has been marked as
    resolved (via Class.__resolved) it will then propagate the attribute
    (function) call upstream.
    """
    name = func.__name__

    @wraps(func)
    def wrapped(self, *args, **kwargs):
        cls_name = type(self).__name__
        if getattr(self, '_%s__resolved' % (cls_name,)):
            return getattr(getattr(self, '_%s__wrapped' % (cls_name,)),
                           name)(*args, **kwargs)
        return func(self, *args, **kwargs)
    return wrapped
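To see the decorator's dispatch in action, here is a small self-contained sketch. The `Promise` and `Value` classes are hypothetical, invented only to drive the decorator; the name-mangled attributes `_Promise__resolved` and `_Promise__wrapped` are exactly what the `'_%s__resolved' % cls_name` lookups in the decorator resolve to.

```python
from functools import wraps


def promise_method(func):
    # Same decorator as in the record above: once the instance is marked
    # resolved, forward the call to the wrapped (resolved) object.
    name = func.__name__

    @wraps(func)
    def wrapped(self, *args, **kwargs):
        cls_name = type(self).__name__
        if getattr(self, '_%s__resolved' % (cls_name,)):
            return getattr(getattr(self, '_%s__wrapped' % (cls_name,)),
                           name)(*args, **kwargs)
        return func(self, *args, **kwargs)
    return wrapped


class Value:
    def get(self):
        return 'resolved'


class Promise:
    def __init__(self):
        # Name mangling turns these into _Promise__resolved / _Promise__wrapped,
        # which is what the decorator looks up via getattr.
        self.__resolved = False
        self.__wrapped = None

    def resolve(self, obj):
        self.__resolved = True
        self.__wrapped = obj

    @promise_method
    def get(self):
        return 'pending'


p = Promise()
print(p.get())        # pending: the promise's own method runs
p.resolve(Value())
print(p.get())        # resolved: the call is forwarded upstream
```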
def generalized_lsp_value_withtau(times, mags, errs, omega):
    '''Generalized LSP value for a single omega.

    This uses tau to provide an arbitrary time-reference point.

    The relations used are::

        P(w) = (1/YY) * (YC*YC/CC + YS*YS/SS)

        where: YC, YS, CC, and SS are all calculated at T

        and where: tan 2omegaT = 2*CS/(CC - SS)

        and where:

        Y = sum( w_i*y_i )
        C = sum( w_i*cos(wT_i) )
        S = sum( w_i*sin(wT_i) )

        YY = sum( w_i*y_i*y_i ) - Y*Y
        YC = sum( w_i*y_i*cos(wT_i) ) - Y*C
        YS = sum( w_i*y_i*sin(wT_i) ) - Y*S

        CpC = sum( w_i*cos(w_T_i)*cos(w_T_i) )
        CC = CpC - C*C
        SS = (1 - CpC) - S*S
        CS = sum( w_i*cos(w_T_i)*sin(w_T_i) ) - C*S

    Parameters
    ----------
    times,mags,errs : np.array
        The time-series to calculate the periodogram value for.

    omega : float
        The frequency to calculate the periodogram value at.

    Returns
    -------
    periodogramvalue : float
        The normalized periodogram at the specified test frequency `omega`.

    '''

    one_over_errs2 = 1.0/(errs*errs)

    W = npsum(one_over_errs2)
    wi = one_over_errs2/W

    sin_omegat = npsin(omega*times)
    cos_omegat = npcos(omega*times)

    sin2_omegat = sin_omegat*sin_omegat
    cos2_omegat = cos_omegat*cos_omegat
    sincos_omegat = sin_omegat*cos_omegat

    # calculate some more sums and terms
    Y = npsum( wi*mags )
    C = npsum( wi*cos_omegat )
    S = npsum( wi*sin_omegat )

    CpS = npsum( wi*sincos_omegat )
    CpC = npsum( wi*cos2_omegat )
    CS = CpS - C*S
    CC = CpC - C*C
    SS = 1 - CpC - S*S  # use SpS = 1 - CpC

    # calculate tau
    tan_omega_tau_top = 2.0*CS
    tan_omega_tau_bottom = CC - SS
    tan_omega_tau = tan_omega_tau_top/tan_omega_tau_bottom
    tau = nparctan(tan_omega_tau)/(2.0*omega)

    # now we need to calculate all the bits at tau
    sin_omega_tau = npsin(omega*(times - tau))
    cos_omega_tau = npcos(omega*(times - tau))

    sin2_omega_tau = sin_omega_tau*sin_omega_tau
    cos2_omega_tau = cos_omega_tau*cos_omega_tau
    sincos_omega_tau = sin_omega_tau*cos_omega_tau

    C_tau = npsum(wi*cos_omega_tau)
    S_tau = npsum(wi*sin_omega_tau)

    CpS_tau = npsum( wi*sincos_omega_tau )
    CpC_tau = npsum( wi*cos2_omega_tau )
    CS_tau = CpS_tau - C_tau*S_tau
    CC_tau = CpC_tau - C_tau*C_tau
    SS_tau = 1 - CpC_tau - S_tau*S_tau  # use SpS = 1 - CpC

    YpY = npsum( wi*mags*mags)
    YpC_tau = npsum( wi*mags*cos_omega_tau )
    YpS_tau = npsum( wi*mags*sin_omega_tau )

    # SpS = npsum( wi*sin2_omegat )

    # the final terms
    YY = YpY - Y*Y
    YC_tau = YpC_tau - Y*C_tau
    YS_tau = YpS_tau - Y*S_tau

    periodogramvalue = (YC_tau*YC_tau/CC_tau + YS_tau*YS_tau/SS_tau)/YY

    return periodogramvalue
Generalized LSP value for a single omega.

This uses tau to provide an arbitrary time-reference point.

The relations used are::

    P(w) = (1/YY) * (YC*YC/CC + YS*YS/SS)

    where: YC, YS, CC, and SS are all calculated at T

    and where: tan 2omegaT = 2*CS/(CC - SS)

    and where:

    Y = sum( w_i*y_i )
    C = sum( w_i*cos(wT_i) )
    S = sum( w_i*sin(wT_i) )

    YY = sum( w_i*y_i*y_i ) - Y*Y
    YC = sum( w_i*y_i*cos(wT_i) ) - Y*C
    YS = sum( w_i*y_i*sin(wT_i) ) - Y*S

    CpC = sum( w_i*cos(w_T_i)*cos(w_T_i) )
    CC = CpC - C*C
    SS = (1 - CpC) - S*S
    CS = sum( w_i*cos(w_T_i)*sin(w_T_i) ) - C*S

Parameters
----------
times,mags,errs : np.array
    The time-series to calculate the periodogram value for.

omega : float
    The frequency to calculate the periodogram value at.

Returns
-------
periodogramvalue : float
    The normalized periodogram at the specified test frequency `omega`.
Below is the instruction that describes the task:
### Input:
Generalized LSP value for a single omega.

This uses tau to provide an arbitrary time-reference point.

The relations used are::

    P(w) = (1/YY) * (YC*YC/CC + YS*YS/SS)

    where: YC, YS, CC, and SS are all calculated at T

    and where: tan 2omegaT = 2*CS/(CC - SS)

    and where:

    Y = sum( w_i*y_i )
    C = sum( w_i*cos(wT_i) )
    S = sum( w_i*sin(wT_i) )

    YY = sum( w_i*y_i*y_i ) - Y*Y
    YC = sum( w_i*y_i*cos(wT_i) ) - Y*C
    YS = sum( w_i*y_i*sin(wT_i) ) - Y*S

    CpC = sum( w_i*cos(w_T_i)*cos(w_T_i) )
    CC = CpC - C*C
    SS = (1 - CpC) - S*S
    CS = sum( w_i*cos(w_T_i)*sin(w_T_i) ) - C*S

Parameters
----------
times,mags,errs : np.array
    The time-series to calculate the periodogram value for.

omega : float
    The frequency to calculate the periodogram value at.

Returns
-------
periodogramvalue : float
    The normalized periodogram at the specified test frequency `omega`.
### Response:
def generalized_lsp_value_withtau(times, mags, errs, omega):
    '''Generalized LSP value for a single omega.

    This uses tau to provide an arbitrary time-reference point.

    The relations used are::

        P(w) = (1/YY) * (YC*YC/CC + YS*YS/SS)

        where: YC, YS, CC, and SS are all calculated at T

        and where: tan 2omegaT = 2*CS/(CC - SS)

        and where:

        Y = sum( w_i*y_i )
        C = sum( w_i*cos(wT_i) )
        S = sum( w_i*sin(wT_i) )

        YY = sum( w_i*y_i*y_i ) - Y*Y
        YC = sum( w_i*y_i*cos(wT_i) ) - Y*C
        YS = sum( w_i*y_i*sin(wT_i) ) - Y*S

        CpC = sum( w_i*cos(w_T_i)*cos(w_T_i) )
        CC = CpC - C*C
        SS = (1 - CpC) - S*S
        CS = sum( w_i*cos(w_T_i)*sin(w_T_i) ) - C*S

    Parameters
    ----------
    times,mags,errs : np.array
        The time-series to calculate the periodogram value for.

    omega : float
        The frequency to calculate the periodogram value at.

    Returns
    -------
    periodogramvalue : float
        The normalized periodogram at the specified test frequency `omega`.

    '''

    one_over_errs2 = 1.0/(errs*errs)

    W = npsum(one_over_errs2)
    wi = one_over_errs2/W

    sin_omegat = npsin(omega*times)
    cos_omegat = npcos(omega*times)

    sin2_omegat = sin_omegat*sin_omegat
    cos2_omegat = cos_omegat*cos_omegat
    sincos_omegat = sin_omegat*cos_omegat

    # calculate some more sums and terms
    Y = npsum( wi*mags )
    C = npsum( wi*cos_omegat )
    S = npsum( wi*sin_omegat )

    CpS = npsum( wi*sincos_omegat )
    CpC = npsum( wi*cos2_omegat )
    CS = CpS - C*S
    CC = CpC - C*C
    SS = 1 - CpC - S*S  # use SpS = 1 - CpC

    # calculate tau
    tan_omega_tau_top = 2.0*CS
    tan_omega_tau_bottom = CC - SS
    tan_omega_tau = tan_omega_tau_top/tan_omega_tau_bottom
    tau = nparctan(tan_omega_tau)/(2.0*omega)

    # now we need to calculate all the bits at tau
    sin_omega_tau = npsin(omega*(times - tau))
    cos_omega_tau = npcos(omega*(times - tau))

    sin2_omega_tau = sin_omega_tau*sin_omega_tau
    cos2_omega_tau = cos_omega_tau*cos_omega_tau
    sincos_omega_tau = sin_omega_tau*cos_omega_tau

    C_tau = npsum(wi*cos_omega_tau)
    S_tau = npsum(wi*sin_omega_tau)

    CpS_tau = npsum( wi*sincos_omega_tau )
    CpC_tau = npsum( wi*cos2_omega_tau )
    CS_tau = CpS_tau - C_tau*S_tau
    CC_tau = CpC_tau - C_tau*C_tau
    SS_tau = 1 - CpC_tau - S_tau*S_tau  # use SpS = 1 - CpC

    YpY = npsum( wi*mags*mags)
    YpC_tau = npsum( wi*mags*cos_omega_tau )
    YpS_tau = npsum( wi*mags*sin_omega_tau )

    # SpS = npsum( wi*sin2_omegat )

    # the final terms
    YY = YpY - Y*Y
    YC_tau = YpC_tau - Y*C_tau
    YS_tau = YpS_tau - Y*S_tau

    periodogramvalue = (YC_tau*YC_tau/CC_tau + YS_tau*YS_tau/SS_tau)/YY

    return periodogramvalue
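The code above computes `SS` as `1 - CpC - S*S` instead of summing the weighted sin² terms directly. That shortcut rests on the identity sin² + cos² = 1 combined with the weights `wi` being normalized to sum to 1. A small numeric check of that identity (with synthetic times and errors invented here for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
times = rng.uniform(0, 100, 50)
errs = rng.uniform(0.01, 0.1, 50)
omega = 2 * np.pi / 3.7

w = 1.0 / errs**2
wi = w / w.sum()                      # normalized weights, sum(wi) == 1

CpC = np.sum(wi * np.cos(omega * times)**2)
SpS = np.sum(wi * np.sin(omega * times)**2)

# sin^2 + cos^2 == 1, so with normalized weights SpS == 1 - CpC,
# which is the shortcut the function uses for SS.
print(np.isclose(SpS, 1 - CpC))       # True
```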
def alignment_to_contacts(
    sam_merged,
    assembly,
    output_dir,
    output_file_network=DEFAULT_NETWORK_FILE_NAME,
    output_file_chunk_data=DEFAULT_CHUNK_DATA_FILE_NAME,
    parameters=DEFAULT_PARAMETERS,
):
    """Generates a network file (in edgelist form) from an alignment in
    sam or bam format. Contigs are virtually split into 'chunks' of nearly
    fixed size (by default between 500 and 1000 bp) to reduce size bias.
    The chunks are the network nodes and the edges are the contact counts.

    The network is in a strict barebone form so that it can be reused and
    imported quickly into other applications etc. Verbose information about
    every single node in the network is written on a 'chunk data' file, by
    default called 'idx_contig_hit_size_cov.txt'

    Parameters
    ----------
    sam_merged : file, str or pathlib.Path
        The alignment file in SAM/BAM format to be processed.
    assembly : file, str or pathlib.Path
        The initial assembly acting as the alignment file's reference genome.
    output_dir : str or pathlib.Path
        The output directory to write the network and chunk data into.
    output_dir_file_network : str or pathlib.Path, optional
        The specific file name for the output network file. Default is
        network.txt
    output_file_chunk_data : str or pathlib.Path, optional
        The specific file name for the output chunk data file. Default is
        idx_contig_hit_size_cov.txt
    parameters : dict, optional
        A dictionary of parameters for converting the alignment file into
        a network. These are:
        -size_chunk_threshold: the size (in bp) under which chunks are
         discarded. Default is 500.
        -mapq_threshold: the mapping quality under which alignments are
         discarded. Default is 10.
        -chunk_size: the default chunk size (in bp) when applicable, save
         smaller contigs or tail-ends. Default is 1000.
        -read_size: the size of reads used for mapping. Default is 65.
        -self_contacts: whether to count alignments between a chunk and
         itself. Default is False.
        -normalized: whether to normalize contacts by their coverage.
         Default is False.

    Returns
    -------
    chunk_complete_data : dict
        A dictionary where the keys are chunks in (contig, position) form
        and the values are their id, name, total contact count, size and
        coverage.
    all_contacts : dict
        A counter dictionary where the keys are chunk pairs and the values
        are their contact count.
    """

    all_contacts = collections.Counter()
    all_chunks = collections.Counter()

    # Initialize parameters
    chunk_size = int(parameters["chunk_size"])
    mapq_threshold = int(parameters["mapq_threshold"])
    size_chunk_threshold = int(parameters["size_chunk_threshold"])
    read_size = int(parameters["read_size"])
    self_contacts = parameters["self_contacts"]
    normalized = parameters["normalized"]

    logger.info("Establishing chunk list...")
    chunk_complete_data = dict()

    # Get all information about all chunks from all contigs
    # (this gets updated at the end)
    global_id = 1
    for record in SeqIO.parse(assembly, "fasta"):
        length = len(record.seq)
        n_chunks = length // chunk_size
        n_chunks += (length % chunk_size) >= size_chunk_threshold
        for i in range(n_chunks):
            if (i + 1) * chunk_size <= length:
                size = chunk_size
            else:
                size = length % chunk_size
            chunk_name = "{}_{}".format(record.id, i)
            chunk_complete_data[chunk_name] = {
                "id": global_id,
                "hit": 0,
                "size": size,
                "coverage": 0,
            }
            global_id += 1

    logger.info("Opening alignment files...")
    current_read = None

    # Read the BAM file to detect contacts.
    with pysam.AlignmentFile(sam_merged, "rb") as alignment_merged_handle:
        names = alignment_merged_handle.references
        lengths = alignment_merged_handle.lengths
        names_and_lengths = {
            name: length for name, length in zip(names, lengths)
        }

        logger.info("Reading contacts...")

        # Since the BAM file is supposed to be sorted and interleaved,
        # pairs should be always grouped with one below the other (the exact
        # order doesn't matter since the network is symmetric, so we simply
        # treat the first one as 'forward' and the second one as 'reverse')
        # We keep iterating until two consecutive reads have the same name,
        # discarding ones that don't.
        while "Reading forward and reverse alignments alternatively":
            try:
                my_read = next(alignment_merged_handle)

                if current_read is None:
                    # First read
                    current_read = my_read
                    continue

                elif current_read.query_name != my_read.query_name:
                    # print("{}_{}".format(current_read, my_read))
                    current_read = my_read
                    continue

                read_forward, read_reverse = current_read, my_read

            except StopIteration:
                break

            # Get a bunch of info about the alignments to pass the tests below
            read_name_forward = read_forward.query_name
            read_name_reverse = read_reverse.query_name

            flag_forward, flag_reverse = read_forward.flag, read_reverse.flag

            try:
                assert read_name_forward == read_name_reverse
            except AssertionError:
                logger.error(
                    "Reads don't have the same name: " "%s and %s",
                    read_name_forward,
                    read_name_reverse,
                )
                raise

            # To check if a flag contains 4
            # (digit on the third position from the right in base 2),
            # 4 = unmapped in SAM spec
            def is_unmapped(flag):
                return np.base_repr(flag, padding=3)[-3] == "1"

            if is_unmapped(flag_forward) or is_unmapped(flag_reverse):
                # print("Detected unmapped read on one end, skipping")
                continue

            contig_name_forward = read_forward.reference_name
            contig_name_reverse = read_reverse.reference_name

            len_contig_for = names_and_lengths[contig_name_forward]
            len_contig_rev = names_and_lengths[contig_name_reverse]

            position_forward = read_forward.reference_start
            position_reverse = read_reverse.reference_start

            mapq_forward = read_forward.mapping_quality
            mapq_reverse = read_reverse.mapping_quality

            # Some more tests: checking for size, map quality, map status etc.
            mapq_test = min(mapq_forward, mapq_reverse) > mapq_threshold

            min_length = min(len_contig_for, len_contig_rev)
            length_test = min_length > size_chunk_threshold

            # Trickest test:
            #
            #
            #                 contig
            #  pos1                              pos2
            #   ^                                 ^
            # |-------|-------|-------|-------|---|
            # <-------><------><------><------><-->   <->
            #   chunk    chunk                 tail   size_chunk_threshold
            #
            # Test is passed if tail >= size_chunk_threshold (pos2)
            # or if the position is a non-tail chunk (pos1)
            if position_forward < chunk_size * (len_contig_for // chunk_size):
                current_chunk_forward_size = chunk_size
            else:
                current_chunk_forward_size = len_contig_for % chunk_size

            if position_reverse < chunk_size * (len_contig_rev // chunk_size):
                current_chunk_reverse_size = chunk_size
            else:
                current_chunk_reverse_size = len_contig_rev % chunk_size

            min_chunk_size = min(
                current_chunk_forward_size, current_chunk_reverse_size
            )
            chunk_test = min_chunk_size >= size_chunk_threshold

            if mapq_test and length_test and chunk_test:
                chunk_forward = position_forward // chunk_size
                chunk_reverse = position_reverse // chunk_size

                chunk_name_forward = "{}_{}".format(
                    contig_name_forward, chunk_forward
                )
                chunk_name_reverse = "{}_{}".format(
                    contig_name_reverse, chunk_reverse
                )

                if self_contacts or chunk_name_forward != chunk_name_reverse:
                    contact = tuple(
                        sorted((chunk_name_forward, chunk_name_reverse))
                    )
                    all_contacts[contact] += 1

                    chunk_key_forward = (
                        chunk_name_forward,
                        current_chunk_forward_size,
                    )
                    all_chunks[chunk_key_forward] += 1

                    chunk_key_reverse = (
                        chunk_name_reverse,
                        current_chunk_reverse_size,
                    )
                    all_chunks[chunk_key_reverse] += 1

    logger.info("Writing chunk data...")

    # Now we can update the chunk dictionary
    # with the info we gathered from the BAM file
    output_chunk_data_path = os.path.join(output_dir, output_file_chunk_data)
    with open(output_chunk_data_path, "w") as chunk_data_file_handle:
        for name in sorted(chunk_complete_data.keys()):
            chunk_data = chunk_complete_data[name]
            size = chunk_data["size"]
            chunk = (name, chunk_data["size"])
            hit = all_chunks[chunk]
            coverage = hit * read_size * 1.0 / size
            try:
                chunk_complete_data[name]["hit"] = hit
                chunk_complete_data[name]["coverage"] = coverage
            except KeyError:
                logger.error(
                    "A mismatch was detected between the reference "
                    "genome and the genome used for the alignment "
                    "file, some sequence names were not found"
                )
                raise

            idx = chunk_complete_data[name]["id"]
            line = "{}\t{}\t{}\t{}\t{}\n".format(
                idx, name, hit, size, coverage
            )
            chunk_data_file_handle.write(line)

    # Lastly, generate the network proper
    logger.info("Writing network...")
    output_network_path = os.path.join(output_dir, output_file_network)
    with open(output_network_path, "w") as network_file_handle:
        for chunks in sorted(all_contacts.keys()):
            chunk_name1, chunk_name2 = chunks
            contact_count = all_contacts[chunks]
            if normalized:
                coverage1 = chunk_complete_data[chunk_name1]["coverage"]
                coverage2 = chunk_complete_data[chunk_name2]["coverage"]
                mean_coverage = np.sqrt(coverage1 * coverage2)
                effective_count = contact_count * 1.0 / mean_coverage
            else:
                effective_count = contact_count

            try:
                idx1 = chunk_complete_data[chunk_name1]["id"]
                idx2 = chunk_complete_data[chunk_name2]["id"]
                line = "{}\t{}\t{}\n".format(idx1, idx2, effective_count)
                network_file_handle.write(line)
            except KeyError as e:
                logger.warning("Mismatch detected: %s", e)

    return chunk_complete_data, all_contacts
Generates a network file (in edgelist form) from an alignment in
sam or bam format. Contigs are virtually split into 'chunks' of nearly
fixed size (by default between 500 and 1000 bp) to reduce size bias.
The chunks are the network nodes and the edges are the contact counts.

The network is in a strict barebone form so that it can be reused and
imported quickly into other applications etc. Verbose information about
every single node in the network is written on a 'chunk data' file, by
default called 'idx_contig_hit_size_cov.txt'

Parameters
----------
sam_merged : file, str or pathlib.Path
    The alignment file in SAM/BAM format to be processed.
assembly : file, str or pathlib.Path
    The initial assembly acting as the alignment file's reference genome.
output_dir : str or pathlib.Path
    The output directory to write the network and chunk data into.
output_dir_file_network : str or pathlib.Path, optional
    The specific file name for the output network file. Default is
    network.txt
output_file_chunk_data : str or pathlib.Path, optional
    The specific file name for the output chunk data file. Default is
    idx_contig_hit_size_cov.txt
parameters : dict, optional
    A dictionary of parameters for converting the alignment file into a
    network. These are:
    -size_chunk_threshold: the size (in bp) under which chunks are
     discarded. Default is 500.
    -mapq_threshold: the mapping quality under which alignments are
     discarded. Default is 10.
    -chunk_size: the default chunk size (in bp) when applicable, save
     smaller contigs or tail-ends. Default is 1000.
    -read_size: the size of reads used for mapping. Default is 65.
    -self_contacts: whether to count alignments between a chunk and
     itself. Default is False.
    -normalized: whether to normalize contacts by their coverage.
     Default is False.

Returns
-------
chunk_complete_data : dict
    A dictionary where the keys are chunks in (contig, position) form
    and the values are their id, name, total contact count, size and
    coverage.
all_contacts : dict
    A counter dictionary where the keys are chunk pairs and the values
    are their contact count.
Below is the instruction that describes the task:
### Input:
Generates a network file (in edgelist form) from an alignment in
sam or bam format. Contigs are virtually split into 'chunks' of nearly
fixed size (by default between 500 and 1000 bp) to reduce size bias.
The chunks are the network nodes and the edges are the contact counts.

The network is in a strict barebone form so that it can be reused and
imported quickly into other applications etc. Verbose information about
every single node in the network is written on a 'chunk data' file, by
default called 'idx_contig_hit_size_cov.txt'

Parameters
----------
sam_merged : file, str or pathlib.Path
    The alignment file in SAM/BAM format to be processed.
assembly : file, str or pathlib.Path
    The initial assembly acting as the alignment file's reference genome.
output_dir : str or pathlib.Path
    The output directory to write the network and chunk data into.
output_dir_file_network : str or pathlib.Path, optional
    The specific file name for the output network file. Default is
    network.txt
output_file_chunk_data : str or pathlib.Path, optional
    The specific file name for the output chunk data file. Default is
    idx_contig_hit_size_cov.txt
parameters : dict, optional
    A dictionary of parameters for converting the alignment file into a
    network. These are:
    -size_chunk_threshold: the size (in bp) under which chunks are
     discarded. Default is 500.
    -mapq_threshold: the mapping quality under which alignments are
     discarded. Default is 10.
    -chunk_size: the default chunk size (in bp) when applicable, save
     smaller contigs or tail-ends. Default is 1000.
    -read_size: the size of reads used for mapping. Default is 65.
    -self_contacts: whether to count alignments between a chunk and
     itself. Default is False.
    -normalized: whether to normalize contacts by their coverage.
     Default is False.

Returns
-------
chunk_complete_data : dict
    A dictionary where the keys are chunks in (contig, position) form
    and the values are their id, name, total contact count, size and
    coverage.
all_contacts : dict
    A counter dictionary where the keys are chunk pairs and the values
    are their contact count.
### Response:
def alignment_to_contacts(
    sam_merged,
    assembly,
    output_dir,
    output_file_network=DEFAULT_NETWORK_FILE_NAME,
    output_file_chunk_data=DEFAULT_CHUNK_DATA_FILE_NAME,
    parameters=DEFAULT_PARAMETERS,
):
    """Generates a network file (in edgelist form) from an alignment in
    sam or bam format. Contigs are virtually split into 'chunks' of nearly
    fixed size (by default between 500 and 1000 bp) to reduce size bias.
    The chunks are the network nodes and the edges are the contact counts.

    The network is in a strict barebone form so that it can be reused and
    imported quickly into other applications etc. Verbose information about
    every single node in the network is written on a 'chunk data' file, by
    default called 'idx_contig_hit_size_cov.txt'

    Parameters
    ----------
    sam_merged : file, str or pathlib.Path
        The alignment file in SAM/BAM format to be processed.
    assembly : file, str or pathlib.Path
        The initial assembly acting as the alignment file's reference genome.
    output_dir : str or pathlib.Path
        The output directory to write the network and chunk data into.
    output_dir_file_network : str or pathlib.Path, optional
        The specific file name for the output network file. Default is
        network.txt
    output_file_chunk_data : str or pathlib.Path, optional
        The specific file name for the output chunk data file. Default is
        idx_contig_hit_size_cov.txt
    parameters : dict, optional
        A dictionary of parameters for converting the alignment file into
        a network. These are:
        -size_chunk_threshold: the size (in bp) under which chunks are
         discarded. Default is 500.
        -mapq_threshold: the mapping quality under which alignments are
         discarded. Default is 10.
        -chunk_size: the default chunk size (in bp) when applicable, save
         smaller contigs or tail-ends. Default is 1000.
        -read_size: the size of reads used for mapping. Default is 65.
        -self_contacts: whether to count alignments between a chunk and
         itself. Default is False.
        -normalized: whether to normalize contacts by their coverage.
         Default is False.

    Returns
    -------
    chunk_complete_data : dict
        A dictionary where the keys are chunks in (contig, position) form
        and the values are their id, name, total contact count, size and
        coverage.
    all_contacts : dict
        A counter dictionary where the keys are chunk pairs and the values
        are their contact count.
    """

    all_contacts = collections.Counter()
    all_chunks = collections.Counter()

    # Initialize parameters
    chunk_size = int(parameters["chunk_size"])
    mapq_threshold = int(parameters["mapq_threshold"])
    size_chunk_threshold = int(parameters["size_chunk_threshold"])
    read_size = int(parameters["read_size"])
    self_contacts = parameters["self_contacts"]
    normalized = parameters["normalized"]

    logger.info("Establishing chunk list...")
    chunk_complete_data = dict()

    # Get all information about all chunks from all contigs
    # (this gets updated at the end)
    global_id = 1
    for record in SeqIO.parse(assembly, "fasta"):
        length = len(record.seq)
        n_chunks = length // chunk_size
        n_chunks += (length % chunk_size) >= size_chunk_threshold
        for i in range(n_chunks):
            if (i + 1) * chunk_size <= length:
                size = chunk_size
            else:
                size = length % chunk_size
            chunk_name = "{}_{}".format(record.id, i)
            chunk_complete_data[chunk_name] = {
                "id": global_id,
                "hit": 0,
                "size": size,
                "coverage": 0,
            }
            global_id += 1

    logger.info("Opening alignment files...")
    current_read = None

    # Read the BAM file to detect contacts.
    with pysam.AlignmentFile(sam_merged, "rb") as alignment_merged_handle:
        names = alignment_merged_handle.references
        lengths = alignment_merged_handle.lengths
        names_and_lengths = {
            name: length for name, length in zip(names, lengths)
        }

        logger.info("Reading contacts...")

        # Since the BAM file is supposed to be sorted and interleaved,
        # pairs should be always grouped with one below the other (the exact
        # order doesn't matter since the network is symmetric, so we simply
        # treat the first one as 'forward' and the second one as 'reverse')
        # We keep iterating until two consecutive reads have the same name,
        # discarding ones that don't.
        while "Reading forward and reverse alignments alternatively":
            try:
                my_read = next(alignment_merged_handle)

                if current_read is None:
                    # First read
                    current_read = my_read
                    continue

                elif current_read.query_name != my_read.query_name:
                    # print("{}_{}".format(current_read, my_read))
                    current_read = my_read
                    continue

                read_forward, read_reverse = current_read, my_read

            except StopIteration:
                break

            # Get a bunch of info about the alignments to pass the tests below
            read_name_forward = read_forward.query_name
            read_name_reverse = read_reverse.query_name

            flag_forward, flag_reverse = read_forward.flag, read_reverse.flag

            try:
                assert read_name_forward == read_name_reverse
            except AssertionError:
                logger.error(
                    "Reads don't have the same name: " "%s and %s",
                    read_name_forward,
                    read_name_reverse,
                )
                raise

            # To check if a flag contains 4
            # (digit on the third position from the right in base 2),
            # 4 = unmapped in SAM spec
            def is_unmapped(flag):
                return np.base_repr(flag, padding=3)[-3] == "1"

            if is_unmapped(flag_forward) or is_unmapped(flag_reverse):
                # print("Detected unmapped read on one end, skipping")
                continue

            contig_name_forward = read_forward.reference_name
            contig_name_reverse = read_reverse.reference_name

            len_contig_for = names_and_lengths[contig_name_forward]
            len_contig_rev = names_and_lengths[contig_name_reverse]

            position_forward = read_forward.reference_start
            position_reverse = read_reverse.reference_start

            mapq_forward = read_forward.mapping_quality
            mapq_reverse = read_reverse.mapping_quality

            # Some more tests: checking for size, map quality, map status etc.
            mapq_test = min(mapq_forward, mapq_reverse) > mapq_threshold

            min_length = min(len_contig_for, len_contig_rev)
            length_test = min_length > size_chunk_threshold

            # Trickest test:
            #
            #
            #                 contig
            #  pos1                              pos2
            #   ^                                 ^
            # |-------|-------|-------|-------|---|
            # <-------><------><------><------><-->   <->
            #   chunk    chunk                 tail   size_chunk_threshold
            #
            # Test is passed if tail >= size_chunk_threshold (pos2)
            # or if the position is a non-tail chunk (pos1)
            if position_forward < chunk_size * (len_contig_for // chunk_size):
                current_chunk_forward_size = chunk_size
            else:
                current_chunk_forward_size = len_contig_for % chunk_size

            if position_reverse < chunk_size * (len_contig_rev // chunk_size):
                current_chunk_reverse_size = chunk_size
            else:
                current_chunk_reverse_size = len_contig_rev % chunk_size

            min_chunk_size = min(
                current_chunk_forward_size, current_chunk_reverse_size
            )
            chunk_test = min_chunk_size >= size_chunk_threshold

            if mapq_test and length_test and chunk_test:
                chunk_forward = position_forward // chunk_size
                chunk_reverse = position_reverse // chunk_size

                chunk_name_forward = "{}_{}".format(
                    contig_name_forward, chunk_forward
                )
                chunk_name_reverse = "{}_{}".format(
                    contig_name_reverse, chunk_reverse
                )

                if self_contacts or chunk_name_forward != chunk_name_reverse:
                    contact = tuple(
                        sorted((chunk_name_forward, chunk_name_reverse))
                    )
                    all_contacts[contact] += 1

                    chunk_key_forward = (
                        chunk_name_forward,
                        current_chunk_forward_size,
                    )
                    all_chunks[chunk_key_forward] += 1

                    chunk_key_reverse = (
                        chunk_name_reverse,
                        current_chunk_reverse_size,
                    )
                    all_chunks[chunk_key_reverse] += 1

    logger.info("Writing chunk data...")

    # Now we can update the chunk dictionary
    # with the info we gathered from the BAM file
    output_chunk_data_path = os.path.join(output_dir, output_file_chunk_data)
    with open(output_chunk_data_path, "w") as chunk_data_file_handle:
        for name in sorted(chunk_complete_data.keys()):
            chunk_data = chunk_complete_data[name]
            size = chunk_data["size"]
            chunk = (name, chunk_data["size"])
            hit = all_chunks[chunk]
            coverage = hit * read_size * 1.0 / size
            try:
                chunk_complete_data[name]["hit"] = hit
                chunk_complete_data[name]["coverage"] = coverage
            except KeyError:
                logger.error(
                    "A mismatch was detected between the reference "
                    "genome and the genome used for the alignment "
                    "file, some sequence names were not found"
                )
                raise

            idx = chunk_complete_data[name]["id"]
            line = "{}\t{}\t{}\t{}\t{}\n".format(
                idx, name, hit, size, coverage
            )
            chunk_data_file_handle.write(line)

    # Lastly, generate the network proper
    logger.info("Writing network...")
    output_network_path = os.path.join(output_dir, output_file_network)
    with open(output_network_path, "w") as network_file_handle:
        for chunks in sorted(all_contacts.keys()):
            chunk_name1, chunk_name2 = chunks
            contact_count = all_contacts[chunks]
            if normalized:
                coverage1 = chunk_complete_data[chunk_name1]["coverage"]
                coverage2 = chunk_complete_data[chunk_name2]["coverage"]
                mean_coverage = np.sqrt(coverage1 * coverage2)
                effective_count = contact_count * 1.0 / mean_coverage
            else:
                effective_count = contact_count

            try:
                idx1 = chunk_complete_data[chunk_name1]["id"]
                idx2 = chunk_complete_data[chunk_name2]["id"]
                line = "{}\t{}\t{}\n".format(idx1, idx2, effective_count)
                network_file_handle.write(line)
            except KeyError as e:
                logger.warning("Mismatch detected: %s", e)

    return chunk_complete_data, all_contacts
def del_calc(db, job_id, user): """ Delete a calculation and all associated outputs, if possible. :param db: a :class:`openquake.server.dbapi.Db` instance :param job_id: job ID, can be an integer or a string :param user: username :returns: None if everything went fine or an error message """ job_id = int(job_id) dependent = db( 'SELECT id FROM job WHERE hazard_calculation_id=?x', job_id) if dependent: return {"error": 'Cannot delete calculation %d: there ' 'are calculations ' 'dependent from it: %s' % (job_id, [j.id for j in dependent])} try: owner, path = db('SELECT user_name, ds_calc_dir FROM job WHERE id=?x', job_id, one=True) except NotFound: return {"error": 'Cannot delete calculation %d:' ' ID does not exist' % job_id} deleted = db('DELETE FROM job WHERE id=?x AND user_name=?x', job_id, user).rowcount if not deleted: return {"error": 'Cannot delete calculation %d: it belongs to ' '%s and you are %s' % (job_id, owner, user)} # try to delete datastore and associated file # path has typically the form /home/user/oqdata/calc_XXX fname = path + ".hdf5" try: os.remove(fname) except OSError as exc: # permission error return {"error": 'Could not remove %s: %s' % (fname, exc)} return {"success": fname}
Delete a calculation and all associated outputs, if possible.

:param db: a :class:`openquake.server.dbapi.Db` instance
:param job_id: job ID, can be an integer or a string
:param user: username
:returns: a dict with a "success" key if everything went fine,
    otherwise a dict with an "error" message
Below is the the instruction that describes the task: ### Input: Delete a calculation and all associated outputs, if possible. :param db: a :class:`openquake.server.dbapi.Db` instance :param job_id: job ID, can be an integer or a string :param user: username :returns: None if everything went fine or an error message ### Response: def del_calc(db, job_id, user): """ Delete a calculation and all associated outputs, if possible. :param db: a :class:`openquake.server.dbapi.Db` instance :param job_id: job ID, can be an integer or a string :param user: username :returns: None if everything went fine or an error message """ job_id = int(job_id) dependent = db( 'SELECT id FROM job WHERE hazard_calculation_id=?x', job_id) if dependent: return {"error": 'Cannot delete calculation %d: there ' 'are calculations ' 'dependent from it: %s' % (job_id, [j.id for j in dependent])} try: owner, path = db('SELECT user_name, ds_calc_dir FROM job WHERE id=?x', job_id, one=True) except NotFound: return {"error": 'Cannot delete calculation %d:' ' ID does not exist' % job_id} deleted = db('DELETE FROM job WHERE id=?x AND user_name=?x', job_id, user).rowcount if not deleted: return {"error": 'Cannot delete calculation %d: it belongs to ' '%s and you are %s' % (job_id, owner, user)} # try to delete datastore and associated file # path has typically the form /home/user/oqdata/calc_XXX fname = path + ".hdf5" try: os.remove(fname) except OSError as exc: # permission error return {"error": 'Could not remove %s: %s' % (fname, exc)} return {"success": fname}
def PC_varExplained(Y,standardized=True): """ Run PCA and calculate the cumulative fraction of variance Args: Y: phenotype values standardize: if True, phenotypes are standardized Returns: var: cumulative distribution of variance explained """ # figuring out the number of latent factors if standardized: Y-=Y.mean(0) Y/=Y.std(0) covY = sp.cov(Y) S,U = linalg.eigh(covY+1e-6*sp.eye(covY.shape[0])) S = S[::-1] rv = np.array([S[0:i].sum() for i in range(1,S.shape[0])]) rv/= S.sum() return rv
Run PCA and calculate the cumulative fraction of variance
Args:
    Y: phenotype values
    standardized: if True, phenotypes are standardized
Returns:
    var: cumulative distribution of variance explained
Below is the the instruction that describes the task: ### Input: Run PCA and calculate the cumulative fraction of variance Args: Y: phenotype values standardize: if True, phenotypes are standardized Returns: var: cumulative distribution of variance explained ### Response: def PC_varExplained(Y,standardized=True): """ Run PCA and calculate the cumulative fraction of variance Args: Y: phenotype values standardize: if True, phenotypes are standardized Returns: var: cumulative distribution of variance explained """ # figuring out the number of latent factors if standardized: Y-=Y.mean(0) Y/=Y.std(0) covY = sp.cov(Y) S,U = linalg.eigh(covY+1e-6*sp.eye(covY.shape[0])) S = S[::-1] rv = np.array([S[0:i].sum() for i in range(1,S.shape[0])]) rv/= S.sum() return rv
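The eigenvalue bookkeeping in `PC_varExplained` reduces to a cumulative-sum step once the spectrum is in hand; a dependency-free sketch of just that step (assuming the eigenvalues have already been computed and sorted in descending order, as `S[::-1]` produces):

```python
def cumulative_variance_explained(eigenvalues):
    """Fraction of total variance captured by the first i components,
    for i = 1 .. len(eigenvalues) - 1, mirroring the loop in PC_varExplained."""
    total = sum(eigenvalues)
    running, fractions = 0.0, []
    for value in eigenvalues[:-1]:  # the original stops one short of the full sum
        running += value
        fractions.append(running / total)
    return fractions

# Eigenvalues sorted in descending order.
print(cumulative_variance_explained([4.0, 3.0, 2.0, 1.0]))  # [0.4, 0.7, 0.9]
```

The returned list never reaches 1.0, matching the original's `range(1, S.shape[0])` which omits the full sum.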
def get_genes_for_hgnc_id(self, hgnc_symbol): """ obtain the ensembl gene IDs that correspond to a HGNC symbol """ headers = {"content-type": "application/json"} # http://grch37.rest.ensembl.org/xrefs/symbol/homo_sapiens/KMT2A?content-type=application/json self.attempt = 0 ext = "/xrefs/symbol/homo_sapiens/{}".format(hgnc_symbol) r = self.ensembl_request(ext, headers) genes = [] for item in json.loads(r): if item["type"] == "gene": genes.append(item["id"]) return genes
obtain the ensembl gene IDs that correspond to an HGNC symbol
Below is the the instruction that describes the task: ### Input: obtain the ensembl gene IDs that correspond to a HGNC symbol ### Response: def get_genes_for_hgnc_id(self, hgnc_symbol): """ obtain the ensembl gene IDs that correspond to a HGNC symbol """ headers = {"content-type": "application/json"} # http://grch37.rest.ensembl.org/xrefs/symbol/homo_sapiens/KMT2A?content-type=application/json self.attempt = 0 ext = "/xrefs/symbol/homo_sapiens/{}".format(hgnc_symbol) r = self.ensembl_request(ext, headers) genes = [] for item in json.loads(r): if item["type"] == "gene": genes.append(item["id"]) return genes
def _index_document(index_list): """Helper to generate an index specifying document. Takes a list of (key, direction) pairs. """ if isinstance(index_list, abc.Mapping): raise TypeError("passing a dict to sort/create_index/hint is not " "allowed - use a list of tuples instead. did you " "mean %r?" % list(iteritems(index_list))) elif not isinstance(index_list, (list, tuple)): raise TypeError("must use a list of (key, direction) pairs, " "not: " + repr(index_list)) if not len(index_list): raise ValueError("key_or_list must not be the empty list") index = SON() for (key, value) in index_list: if not isinstance(key, string_type): raise TypeError("first item in each key pair must be a string") if not isinstance(value, (string_type, int, abc.Mapping)): raise TypeError("second item in each key pair must be 1, -1, " "'2d', 'geoHaystack', or another valid MongoDB " "index specifier.") index[key] = value return index
Helper to generate an index specifying document. Takes a list of (key, direction) pairs.
Below is the the instruction that describes the task: ### Input: Helper to generate an index specifying document. Takes a list of (key, direction) pairs. ### Response: def _index_document(index_list): """Helper to generate an index specifying document. Takes a list of (key, direction) pairs. """ if isinstance(index_list, abc.Mapping): raise TypeError("passing a dict to sort/create_index/hint is not " "allowed - use a list of tuples instead. did you " "mean %r?" % list(iteritems(index_list))) elif not isinstance(index_list, (list, tuple)): raise TypeError("must use a list of (key, direction) pairs, " "not: " + repr(index_list)) if not len(index_list): raise ValueError("key_or_list must not be the empty list") index = SON() for (key, value) in index_list: if not isinstance(key, string_type): raise TypeError("first item in each key pair must be a string") if not isinstance(value, (string_type, int, abc.Mapping)): raise TypeError("second item in each key pair must be 1, -1, " "'2d', 'geoHaystack', or another valid MongoDB " "index specifier.") index[key] = value return index
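A minimal stdlib-only sketch of the same validation-and-ordering idea, with `collections.OrderedDict` standing in for bson's `SON` (the function name and error messages here are illustrative, not pymongo's):

```python
from collections import OrderedDict

def index_document(index_list):
    """Turn a list of (key, direction) pairs into an ordered mapping,
    rejecting plain dicts because they historically lost key order."""
    if isinstance(index_list, dict):
        raise TypeError("pass a list of (key, direction) tuples, not a dict")
    if not isinstance(index_list, (list, tuple)):
        raise TypeError("must use a list of (key, direction) pairs")
    if not index_list:
        raise ValueError("index_list must not be empty")
    index = OrderedDict()
    for key, value in index_list:
        if not isinstance(key, str):
            raise TypeError("first item in each pair must be a string")
        index[key] = value
    return index

print(index_document([("age", -1), ("name", 1)]))
```

Key order is the whole point: MongoDB treats `{age: -1, name: 1}` and `{name: 1, age: -1}` as different indexes.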
def squareRoot(requestContext, seriesList): """ Takes one metric or a wildcard seriesList, and computes the square root of each datapoint. Example:: &target=squareRoot(Server.instance01.threads.busy) """ for series in seriesList: series.name = "squareRoot(%s)" % (series.name) for i, value in enumerate(series): series[i] = safePow(value, 0.5) return seriesList
Takes one metric or a wildcard seriesList, and computes the square root of each datapoint. Example:: &target=squareRoot(Server.instance01.threads.busy)
Below is the the instruction that describes the task: ### Input: Takes one metric or a wildcard seriesList, and computes the square root of each datapoint. Example:: &target=squareRoot(Server.instance01.threads.busy) ### Response: def squareRoot(requestContext, seriesList): """ Takes one metric or a wildcard seriesList, and computes the square root of each datapoint. Example:: &target=squareRoot(Server.instance01.threads.busy) """ for series in seriesList: series.name = "squareRoot(%s)" % (series.name) for i, value in enumerate(series): series[i] = safePow(value, 0.5) return seriesList
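`safePow` is Graphite's None-tolerant power helper; the real implementation lives in graphite-web and may differ in detail, but the pattern it relies on can be sketched like this (name and behaviour here are an assumption, not the library's exact code):

```python
def safe_pow(value, exponent):
    """Return value ** exponent, propagating None instead of raising."""
    if value is None:
        return None
    try:
        return float(value) ** exponent
    except (TypeError, ValueError):
        return None

# None datapoints (gaps in the series) pass through untouched.
series = [4.0, None, 9.0]
print([safe_pow(v, 0.5) for v in series])  # [2.0, None, 3.0]
```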
def execute(self, callback=None):
    """
    Given the command-line arguments, this figures out which subcommand is
    being run, creates a parser appropriate to that command, and runs it.
    """
    # Preprocess options to extract --settings and --pythonpath.
    # These options could affect the commands that are available, so they
    # must be processed early.
    self.parser = parser = NewOptionParser(prog=self.prog_name,
                            usage=self.usage_info,
                            # version=self.get_version(),
                            formatter=NewFormatter(),
                            add_help_option=False,
                            option_list=self.option_list)
    if not self.global_options:
        global_options, args = parser.parse_args(self.argv)
        global_options.apps_dir = os.path.normpath(os.path.join(global_options.project, 'apps'))
        handle_default_options(global_options)
        args = args[1:]
    else:
        global_options = self.global_options
        args = self.argv
    if global_options.envs:
        for x in global_options.envs:
            if '=' in x:
                k, v = x.split('=', 1)  # split once, so '=' may appear in the value
                os.environ[k.strip()] = v.strip()
            else:
                print('Error: environment variable definition (%s) format is not right, '
                      'should be -Ek=v or -Ek="a b"' % x)
    global_options.settings = global_options.settings or os.environ.get('SETTINGS', 'settings.ini')
    global_options.local_settings = global_options.local_settings or os.environ.get('LOCAL_SETTINGS', 'local_settings.ini')
    if callback:
        callback(global_options)
    if len(args) == 0:
        if global_options.version:
            print(self.get_version())
            sys.exit(0)
        else:
            self.print_help(global_options)
            sys.exit(1)
    self.do_command(args, global_options)
Given the command-line arguments, this figures out which subcommand is being run, creates a parser appropriate to that command, and runs it.
Below is the the instruction that describes the task: ### Input: Given the command-line arguments, this figures out which subcommand is being run, creates a parser appropriate to that command, and runs it. ### Response: def execute(self, callback=None): """ Given the command-line arguments, this figures out which subcommand is being run, creates a parser appropriate to that command, and runs it. """ # Preprocess options to extract --settings and --pythonpath. # These options could affect the commands that are available, so they # must be processed early. self.parser = parser = NewOptionParser(prog=self.prog_name, usage=self.usage_info, # version=self.get_version(), formatter = NewFormatter(), add_help_option = False, option_list=self.option_list) if not self.global_options: global_options, args = parser.parse_args(self.argv) global_options.apps_dir = os.path.normpath(os.path.join(global_options.project, 'apps')) handle_default_options(global_options) args = args[1:] else: global_options = self.global_options args = self.argv if global_options.envs: for x in global_options.envs: if '=' in x: k, v = x.split('=') os.environ[k.strip()] = v.strip() else: print ('Error: environment variable definition (%s) format is not right, ' 'shoule be -Ek=v or -Ek="a b"' % x) global_options.settings = global_options.settings or os.environ.get('SETTINGS', 'settings.ini') global_options.local_settings = global_options.local_settings or os.environ.get('LOCAL_SETTINGS', 'local_settings.ini') if callback: callback(global_options) if len(args) == 0: if global_options.version: print self.get_version() sys.exit(0) else: self.print_help(global_options) sys.ext(1) self.do_command(args, global_options)
async def read_line(stream: asyncio.StreamReader) -> bytes: """ Read a single line from ``stream``. ``stream`` is an :class:`~asyncio.StreamReader`. Return :class:`bytes` without CRLF. """ # Security: this is bounded by the StreamReader's limit (default = 32 KiB). line = await stream.readline() # Security: this guarantees header values are small (hard-coded = 4 KiB) if len(line) > MAX_LINE: raise ValueError("Line too long") # Not mandatory but safe - https://tools.ietf.org/html/rfc7230#section-3.5 if not line.endswith(b"\r\n"): raise ValueError("Line without CRLF") return line[:-2]
Read a single line from ``stream``. ``stream`` is an :class:`~asyncio.StreamReader`. Return :class:`bytes` without CRLF.
Below is the the instruction that describes the task: ### Input: Read a single line from ``stream``. ``stream`` is an :class:`~asyncio.StreamReader`. Return :class:`bytes` without CRLF. ### Response: async def read_line(stream: asyncio.StreamReader) -> bytes: """ Read a single line from ``stream``. ``stream`` is an :class:`~asyncio.StreamReader`. Return :class:`bytes` without CRLF. """ # Security: this is bounded by the StreamReader's limit (default = 32 KiB). line = await stream.readline() # Security: this guarantees header values are small (hard-coded = 4 KiB) if len(line) > MAX_LINE: raise ValueError("Line too long") # Not mandatory but safe - https://tools.ietf.org/html/rfc7230#section-3.5 if not line.endswith(b"\r\n"): raise ValueError("Line without CRLF") return line[:-2]
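The CRLF framing check can be exercised without a network socket by feeding a `StreamReader` by hand; `MAX_LINE = 4096` below is an assumed value for the hard-coded bound mentioned above:

```python
import asyncio

MAX_LINE = 4096  # assumed header-line bound

async def read_line(stream):
    line = await stream.readline()
    if len(line) > MAX_LINE:
        raise ValueError("Line too long")
    if not line.endswith(b"\r\n"):
        raise ValueError("Line without CRLF")
    return line[:-2]

async def demo():
    reader = asyncio.StreamReader()
    reader.feed_data(b"Upgrade: websocket\r\n")
    reader.feed_eof()
    return await read_line(reader)

print(asyncio.run(demo()))  # b'Upgrade: websocket'
```

A line ending in a bare `\n` (no `\r`) would raise `ValueError`, which is the RFC 7230 strictness the comment refers to.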
def load_from_rdf_string(self, rdf_str): """Initialize given an RDF string representing the hierarchy." Parameters ---------- rdf_str : str An RDF string. """ self.graph = rdflib.Graph() self.graph.parse(data=rdf_str, format='nt') self.initialize()
Initialize given an RDF string representing the hierarchy.

Parameters
----------
rdf_str : str
    An RDF string.
Below is the the instruction that describes the task: ### Input: Initialize given an RDF string representing the hierarchy." Parameters ---------- rdf_str : str An RDF string. ### Response: def load_from_rdf_string(self, rdf_str): """Initialize given an RDF string representing the hierarchy." Parameters ---------- rdf_str : str An RDF string. """ self.graph = rdflib.Graph() self.graph.parse(data=rdf_str, format='nt') self.initialize()
def save(self): """Saves all model instances in the batch as model. """ saved = 0 if not self.objects: raise BatchError("Save failed. Batch is empty") for deserialized_tx in self.objects: try: self.model.objects.get(pk=deserialized_tx.pk) except self.model.DoesNotExist: data = {} for field in self.model._meta.get_fields(): try: data.update({field.name: getattr(deserialized_tx, field.name)}) except AttributeError: pass self.model.objects.create(**data) saved += 1 return saved
Saves all model instances in the batch to the model table, returning the number of newly created instances.
Below is the the instruction that describes the task: ### Input: Saves all model instances in the batch as model. ### Response: def save(self): """Saves all model instances in the batch as model. """ saved = 0 if not self.objects: raise BatchError("Save failed. Batch is empty") for deserialized_tx in self.objects: try: self.model.objects.get(pk=deserialized_tx.pk) except self.model.DoesNotExist: data = {} for field in self.model._meta.get_fields(): try: data.update({field.name: getattr(deserialized_tx, field.name)}) except AttributeError: pass self.model.objects.create(**data) saved += 1 return saved
def error(self, error_data): """Handle a retrieval error and call apriopriate handler. Should be called when retrieval fails. Do nothing when the fetcher is not active any more (after one of handlers was already called). :Parameters: - `error_data`: additional information about the error (e.g. `StanzaError` instance). :Types: - `error_data`: fetcher dependant """ if not self.active: return if not self._try_backup_item(): self._error_handler(self.address, error_data) self.cache.invalidate_object(self.address) self._deactivate()
Handle a retrieval error and call the appropriate handler.

Should be called when retrieval fails.

Do nothing when the fetcher is not active any more (after
one of the handlers was already called).

:Parameters:
    - `error_data`: additional information about the error (e.g.
      `StanzaError` instance).
:Types:
    - `error_data`: fetcher dependent
Below is the the instruction that describes the task: ### Input: Handle a retrieval error and call apriopriate handler. Should be called when retrieval fails. Do nothing when the fetcher is not active any more (after one of handlers was already called). :Parameters: - `error_data`: additional information about the error (e.g. `StanzaError` instance). :Types: - `error_data`: fetcher dependant ### Response: def error(self, error_data): """Handle a retrieval error and call apriopriate handler. Should be called when retrieval fails. Do nothing when the fetcher is not active any more (after one of handlers was already called). :Parameters: - `error_data`: additional information about the error (e.g. `StanzaError` instance). :Types: - `error_data`: fetcher dependant """ if not self.active: return if not self._try_backup_item(): self._error_handler(self.address, error_data) self.cache.invalidate_object(self.address) self._deactivate()
def get_comparison_methods(): """ makes methods for >, <, =, etc... """ method_list = [] def _register(func): method_list.append(func) return func # Comparison operators for sorting and uniqueness @_register def __lt__(self, other): return compare_instance(op.lt, self, other) @_register def __le__(self, other): return compare_instance(op.le, self, other) @_register def __eq__(self, other): return compare_instance(op.eq, self, other) @_register def __ne__(self, other): return compare_instance(op.ne, self, other) @_register def __gt__(self, other): return compare_instance(op.gt, self, other) @_register def __ge__(self, other): return compare_instance(op.ge, self, other) return method_list
makes methods for >, <, =, etc...
Below is the the instruction that describes the task: ### Input: makes methods for >, <, =, etc... ### Response: def get_comparison_methods(): """ makes methods for >, <, =, etc... """ method_list = [] def _register(func): method_list.append(func) return func # Comparison operators for sorting and uniqueness @_register def __lt__(self, other): return compare_instance(op.lt, self, other) @_register def __le__(self, other): return compare_instance(op.le, self, other) @_register def __eq__(self, other): return compare_instance(op.eq, self, other) @_register def __ne__(self, other): return compare_instance(op.ne, self, other) @_register def __gt__(self, other): return compare_instance(op.gt, self, other) @_register def __ge__(self, other): return compare_instance(op.ge, self, other) return method_list
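The register-and-collect trick above is easiest to see with the generated methods attached to a class; this simplified sketch substitutes a plain key function for the library's `compare_instance`:

```python
import operator as op

def make_comparison_methods(key):
    """Build the six rich-comparison dunders from one key function."""
    methods = {}
    def register(name, cmp):
        def method(self, other):
            return cmp(key(self), key(other))
        method.__name__ = name
        methods[name] = method
    for name, cmp in [("__lt__", op.lt), ("__le__", op.le),
                      ("__eq__", op.eq), ("__ne__", op.ne),
                      ("__gt__", op.gt), ("__ge__", op.ge)]:
        register(name, cmp)
    return methods

# Attach the generated methods to a throwaway class keyed on `.rank`.
Point = type("Point", (), make_comparison_methods(lambda p: p.rank))
a, b = Point(), Point()
a.rank, b.rank = 1, 2
print(a < b, a == a, b >= a)  # True True True
```

Passing `name` and `cmp` as arguments to `register` avoids the classic late-binding bug a bare closure over the loop variable would cause.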
def extract_capture_time(self): ''' Extract capture time from EXIF return a datetime object TODO: handle GPS DateTime ''' time_string = exif_datetime_fields()[0] capture_time, time_field = self._extract_alternative_fields( time_string, 0, str) if time_field in exif_gps_date_fields()[0]: capture_time = self.extract_gps_time() return capture_time if capture_time is 0: # try interpret the filename try: capture_time = datetime.datetime.strptime(os.path.basename( self.filename)[:-4] + '000', '%Y_%m_%d_%H_%M_%S_%f') except: return None else: capture_time = capture_time.replace(" ", "_") capture_time = capture_time.replace(":", "_") capture_time = capture_time.replace(".", "_") capture_time = capture_time.replace("-", "_") capture_time = capture_time.replace(",", "_") capture_time = "_".join( [ts for ts in capture_time.split("_") if ts.isdigit()]) capture_time, subseconds = format_time(capture_time) sub_sec = "0" if not subseconds: sub_sec = self.extract_subsec() capture_time = capture_time + \ datetime.timedelta(seconds=float("0." + sub_sec)) return capture_time
Extract capture time from EXIF and return a datetime object.
TODO: handle GPS DateTime
Below is the the instruction that describes the task: ### Input: Extract capture time from EXIF return a datetime object TODO: handle GPS DateTime ### Response: def extract_capture_time(self): ''' Extract capture time from EXIF return a datetime object TODO: handle GPS DateTime ''' time_string = exif_datetime_fields()[0] capture_time, time_field = self._extract_alternative_fields( time_string, 0, str) if time_field in exif_gps_date_fields()[0]: capture_time = self.extract_gps_time() return capture_time if capture_time is 0: # try interpret the filename try: capture_time = datetime.datetime.strptime(os.path.basename( self.filename)[:-4] + '000', '%Y_%m_%d_%H_%M_%S_%f') except: return None else: capture_time = capture_time.replace(" ", "_") capture_time = capture_time.replace(":", "_") capture_time = capture_time.replace(".", "_") capture_time = capture_time.replace("-", "_") capture_time = capture_time.replace(",", "_") capture_time = "_".join( [ts for ts in capture_time.split("_") if ts.isdigit()]) capture_time, subseconds = format_time(capture_time) sub_sec = "0" if not subseconds: sub_sec = self.extract_subsec() capture_time = capture_time + \ datetime.timedelta(seconds=float("0." + sub_sec)) return capture_time
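The filename fallback hinges on the `%Y_%m_%d_%H_%M_%S_%f` layout; a small sketch of just that parse (the sample filename is hypothetical):

```python
import datetime
import os

def capture_time_from_filename(path):
    """Interpret e.g. '2017_08_30_14_05_59_123.jpg' as a timestamp,
    appending '000' to widen milliseconds to microseconds as the code above does."""
    stem = os.path.basename(path)[:-4]  # strip a 4-char extension such as '.jpg'
    try:
        return datetime.datetime.strptime(stem + "000", "%Y_%m_%d_%H_%M_%S_%f")
    except ValueError:
        return None

print(capture_time_from_filename("photos/2017_08_30_14_05_59_123.jpg"))
# 2017-08-30 14:05:59.123000
```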
def _ols_fit(self): """ Performs OLS Returns ---------- None (stores latent variables) """ # TO DO - A lot of things are VAR specific here; might need to refactor in future, or just move to VAR script method = 'OLS' self.use_ols_covariance = True res_z = self._create_B_direct().flatten() z = res_z.copy() cov = self.ols_covariance() # Inelegant - needs refactoring for i in range(self.ylen): for k in range(self.ylen): if i == k or i > k: z = np.append(z,self.latent_variables.z_list[-1].prior.itransform(cov[i,k])) ihessian = self.estimator_cov('OLS') res_ses = np.power(np.abs(np.diag(ihessian)),0.5) ses = np.append(res_ses,np.ones([z.shape[0]-res_z.shape[0]])) self.latent_variables.set_z_values(z,method,ses,None) self.latent_variables.estimation_method = 'OLS' theta, Y, scores, states, states_var, X_names = self._categorize_model_output(z) # Change this in future try: latent_variables_store = self.latent_variables.copy() except: latent_variables_store = self.latent_variables return MLEResults(data_name=self.data_name,X_names=X_names,model_name=self.model_name, model_type=self.model_type, latent_variables=latent_variables_store,results=None,data=Y, index=self.index, multivariate_model=self.multivariate_model,objective_object=self.neg_loglik, method=method,ihessian=ihessian,signal=theta,scores=scores, z_hide=self._z_hide,max_lag=self.max_lag,states=states,states_var=states_var)
Performs OLS Returns ---------- None (stores latent variables)
Below is the the instruction that describes the task: ### Input: Performs OLS Returns ---------- None (stores latent variables) ### Response: def _ols_fit(self): """ Performs OLS Returns ---------- None (stores latent variables) """ # TO DO - A lot of things are VAR specific here; might need to refactor in future, or just move to VAR script method = 'OLS' self.use_ols_covariance = True res_z = self._create_B_direct().flatten() z = res_z.copy() cov = self.ols_covariance() # Inelegant - needs refactoring for i in range(self.ylen): for k in range(self.ylen): if i == k or i > k: z = np.append(z,self.latent_variables.z_list[-1].prior.itransform(cov[i,k])) ihessian = self.estimator_cov('OLS') res_ses = np.power(np.abs(np.diag(ihessian)),0.5) ses = np.append(res_ses,np.ones([z.shape[0]-res_z.shape[0]])) self.latent_variables.set_z_values(z,method,ses,None) self.latent_variables.estimation_method = 'OLS' theta, Y, scores, states, states_var, X_names = self._categorize_model_output(z) # Change this in future try: latent_variables_store = self.latent_variables.copy() except: latent_variables_store = self.latent_variables return MLEResults(data_name=self.data_name,X_names=X_names,model_name=self.model_name, model_type=self.model_type, latent_variables=latent_variables_store,results=None,data=Y, index=self.index, multivariate_model=self.multivariate_model,objective_object=self.neg_loglik, method=method,ihessian=ihessian,signal=theta,scores=scores, z_hide=self._z_hide,max_lag=self.max_lag,states=states,states_var=states_var)
def xmllint_format(xml): """ Pretty-print XML like ``xmllint`` does. Arguments: xml (string): Serialized XML """ parser = ET.XMLParser(resolve_entities=False, strip_cdata=False, remove_blank_text=True) document = ET.fromstring(xml, parser) return ('%s\n%s' % ('<?xml version="1.0" encoding="UTF-8"?>', ET.tostring(document, pretty_print=True).decode('utf-8'))).encode('utf-8')
Pretty-print XML like ``xmllint`` does. Arguments: xml (string): Serialized XML
Below is the the instruction that describes the task: ### Input: Pretty-print XML like ``xmllint`` does. Arguments: xml (string): Serialized XML ### Response: def xmllint_format(xml): """ Pretty-print XML like ``xmllint`` does. Arguments: xml (string): Serialized XML """ parser = ET.XMLParser(resolve_entities=False, strip_cdata=False, remove_blank_text=True) document = ET.fromstring(xml, parser) return ('%s\n%s' % ('<?xml version="1.0" encoding="UTF-8"?>', ET.tostring(document, pretty_print=True).decode('utf-8'))).encode('utf-8')
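When lxml is unavailable, `xml.dom.minidom` from the standard library gives a rough equivalent; note this sketch skips the CDATA/entity options and its indentation is not byte-for-byte `xmllint` output:

```python
import xml.dom.minidom as minidom

def pretty_xml(xml_bytes):
    """Stdlib-only rough equivalent of xmllint_format: re-indent and
    prepend an explicit XML declaration."""
    document = minidom.parseString(xml_bytes)
    body = document.documentElement.toprettyxml(indent="  ")
    return ('<?xml version="1.0" encoding="UTF-8"?>\n' + body).encode("utf-8")

print(pretty_xml(b"<root><a>1</a></root>").decode("utf-8"))
```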
def read_version(): "Read the `(version-string, version-info)` from `added_value/version.py`." version_file = local_file('source', 'added_value', 'version.py') local_vars = {} with open(version_file) as handle: exec(handle.read(), {}, local_vars) # pylint: disable=exec-used return local_vars['__version__']
Read the `(version-string, version-info)` from `added_value/version.py`.
Below is the the instruction that describes the task: ### Input: Read the `(version-string, version-info)` from `added_value/version.py`. ### Response: def read_version(): "Read the `(version-string, version-info)` from `added_value/version.py`." version_file = local_file('source', 'added_value', 'version.py') local_vars = {} with open(version_file) as handle: exec(handle.read(), {}, local_vars) # pylint: disable=exec-used return local_vars['__version__']
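The `exec`-on-file idiom avoids importing the package before it is installed; the same mechanism with the file contents inlined:

```python
def read_version_from_source(source_text):
    """Execute version.py's text in a scratch namespace and pull __version__."""
    scratch = {}
    exec(source_text, {}, scratch)
    return scratch["__version__"]

print(read_version_from_source('__version__ = "1.4.2"\n'))  # 1.4.2
```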
def request(self, method, url, callback=None, retry=0, **kwargs): """Similar to `requests.request`, but return as NewFuture.""" return self.pool.submit( self._request, method=method, url=url, retry=retry, callback=callback or self.default_callback, **kwargs )
Similar to `requests.request`, but returns a NewFuture.
Below is the the instruction that describes the task: ### Input: Similar to `requests.request`, but return as NewFuture. ### Response: def request(self, method, url, callback=None, retry=0, **kwargs): """Similar to `requests.request`, but return as NewFuture.""" return self.pool.submit( self._request, method=method, url=url, retry=retry, callback=callback or self.default_callback, **kwargs )
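The underlying pattern — submit a blocking call to a pool and hand back the future — can be sketched with the standard library alone. Names here are illustrative, and the callback is attached with `add_done_callback`, a simplification of the original, which forwards it into the worker function:

```python
from concurrent.futures import ThreadPoolExecutor

class TinyAsyncClient:
    """Submit a blocking function to a pool and return its Future,
    mirroring the submit-with-default-callback shape above."""
    def __init__(self, workers=4, default_callback=None):
        self.pool = ThreadPoolExecutor(max_workers=workers)
        self.default_callback = default_callback

    def request(self, func, *args, callback=None, **kwargs):
        future = self.pool.submit(func, *args, **kwargs)
        cb = callback or self.default_callback
        if cb is not None:
            future.add_done_callback(cb)
        return future

client = TinyAsyncClient()
fut = client.request(pow, 2, 10)
print(fut.result())  # 1024
```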
def mainloop(self): """ Handles events and calls their handler for infinity. """ while self.keep_going: with self.lock: if self.on_connect and not self.readable(2): self.on_connect() self.on_connect = None if not self.keep_going: break self.process_once()
Handles events and calls their handlers in an endless loop.
Below is the the instruction that describes the task: ### Input: Handles events and calls their handler for infinity. ### Response: def mainloop(self): """ Handles events and calls their handler for infinity. """ while self.keep_going: with self.lock: if self.on_connect and not self.readable(2): self.on_connect() self.on_connect = None if not self.keep_going: break self.process_once()
def week(self): """Returns the week in which this game took place. 18 is WC round, 19 is Div round, 20 is CC round, 21 is SB. :returns: Integer from 1 to 21. """ doc = self.get_doc() raw = doc('div#div_other_scores h2 a').attr['href'] match = re.match( r'/years/{}/week_(\d+)\.htm'.format(self.season()), raw ) if match: return int(match.group(1)) else: return 21
Returns the week in which this game took place. 18 is WC round, 19 is Div round, 20 is CC round, 21 is SB. :returns: Integer from 1 to 21.
Below is the the instruction that describes the task: ### Input: Returns the week in which this game took place. 18 is WC round, 19 is Div round, 20 is CC round, 21 is SB. :returns: Integer from 1 to 21. ### Response: def week(self): """Returns the week in which this game took place. 18 is WC round, 19 is Div round, 20 is CC round, 21 is SB. :returns: Integer from 1 to 21. """ doc = self.get_doc() raw = doc('div#div_other_scores h2 a').attr['href'] match = re.match( r'/years/{}/week_(\d+)\.htm'.format(self.season()), raw ) if match: return int(match.group(1)) else: return 21
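The regex step in isolation, with hypothetical href values (week 21 is the Super Bowl fallback, per the docstring):

```python
import re

def week_from_href(href, season):
    """Pull the week number out of '/years/<season>/week_<n>.htm',
    defaulting to 21 when the pattern does not match."""
    match = re.match(r"/years/{}/week_(\d+)\.htm".format(season), href)
    return int(match.group(1)) if match else 21

print(week_from_href("/years/2013/week_18.htm", 2013))  # 18
print(week_from_href("/years/2013/index.htm", 2013))    # 21
```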
def sasets(self) -> 'SASets': """ This methods creates a SASets object which you can use to run various analytics. See the sasets.py module. :return: sasets object """ if not self._loaded_macros: self._loadmacros() self._loaded_macros = True return SASets(self)
This method creates a SASets object which you can use to run various analytics.
See the sasets.py module.

:return: sasets object
Below is the the instruction that describes the task: ### Input: This methods creates a SASets object which you can use to run various analytics. See the sasets.py module. :return: sasets object ### Response: def sasets(self) -> 'SASets': """ This methods creates a SASets object which you can use to run various analytics. See the sasets.py module. :return: sasets object """ if not self._loaded_macros: self._loadmacros() self._loaded_macros = True return SASets(self)
def _validate_if_patch_supported(self, headers, uri): """Check if the PATCH Operation is allowed on the resource.""" if not self._operation_allowed(headers, 'PATCH'): msg = ('PATCH Operation not supported on the resource ' '"%s"' % uri) raise exception.IloError(msg)
Check if the PATCH Operation is allowed on the resource.
Below is the the instruction that describes the task: ### Input: Check if the PATCH Operation is allowed on the resource. ### Response: def _validate_if_patch_supported(self, headers, uri): """Check if the PATCH Operation is allowed on the resource.""" if not self._operation_allowed(headers, 'PATCH'): msg = ('PATCH Operation not supported on the resource ' '"%s"' % uri) raise exception.IloError(msg)
def generate_cot(context, parent_path=None): """Format and sign the cot body, and write to disk. Args: context (scriptworker.context.Context): the scriptworker context. parent_path (str, optional): The directory to write the chain of trust artifacts to. If None, this is ``artifact_dir/public/``. Defaults to None. Returns: str: the contents of the chain of trust artifact. Raises: ScriptWorkerException: on schema error. """ body = generate_cot_body(context) schema = load_json_or_yaml( context.config['cot_schema_path'], is_path=True, exception=ScriptWorkerException, message="Can't read schema file {}: %(exc)s".format(context.config['cot_schema_path']) ) validate_json_schema(body, schema, name="chain of trust") body = format_json(body) parent_path = parent_path or os.path.join(context.config['artifact_dir'], 'public') unsigned_path = os.path.join(parent_path, 'chain-of-trust.json') write_to_file(unsigned_path, body) if context.config['sign_chain_of_trust']: ed25519_signature_path = '{}.sig'.format(unsigned_path) ed25519_private_key = ed25519_private_key_from_file(context.config['ed25519_private_key_path']) ed25519_signature = ed25519_private_key.sign(body.encode('utf-8')) write_to_file(ed25519_signature_path, ed25519_signature, file_type='binary') return body
Format and sign the cot body, and write to disk. Args: context (scriptworker.context.Context): the scriptworker context. parent_path (str, optional): The directory to write the chain of trust artifacts to. If None, this is ``artifact_dir/public/``. Defaults to None. Returns: str: the contents of the chain of trust artifact. Raises: ScriptWorkerException: on schema error.
Below is the the instruction that describes the task: ### Input: Format and sign the cot body, and write to disk. Args: context (scriptworker.context.Context): the scriptworker context. parent_path (str, optional): The directory to write the chain of trust artifacts to. If None, this is ``artifact_dir/public/``. Defaults to None. Returns: str: the contents of the chain of trust artifact. Raises: ScriptWorkerException: on schema error. ### Response: def generate_cot(context, parent_path=None): """Format and sign the cot body, and write to disk. Args: context (scriptworker.context.Context): the scriptworker context. parent_path (str, optional): The directory to write the chain of trust artifacts to. If None, this is ``artifact_dir/public/``. Defaults to None. Returns: str: the contents of the chain of trust artifact. Raises: ScriptWorkerException: on schema error. """ body = generate_cot_body(context) schema = load_json_or_yaml( context.config['cot_schema_path'], is_path=True, exception=ScriptWorkerException, message="Can't read schema file {}: %(exc)s".format(context.config['cot_schema_path']) ) validate_json_schema(body, schema, name="chain of trust") body = format_json(body) parent_path = parent_path or os.path.join(context.config['artifact_dir'], 'public') unsigned_path = os.path.join(parent_path, 'chain-of-trust.json') write_to_file(unsigned_path, body) if context.config['sign_chain_of_trust']: ed25519_signature_path = '{}.sig'.format(unsigned_path) ed25519_private_key = ed25519_private_key_from_file(context.config['ed25519_private_key_path']) ed25519_signature = ed25519_private_key.sign(body.encode('utf-8')) write_to_file(ed25519_signature_path, ed25519_signature, file_type='binary') return body
def _set_ospf_level2(self, v, load=False): """ Setter method for ospf_level2, mapped from YANG variable /routing_system/router/isis/router_isis_cmds_holder/address_family/ipv6/af_ipv6_unicast/af_ipv6_attributes/af_common_attributes/redistribute/ospf/ospf_level2 (empty) If this variable is read-only (config: false) in the source YANG file, then _set_ospf_level2 is considered as a private method. Backends looking to populate this variable should do so via calling thisObj._set_ospf_level2() directly. """ if hasattr(v, "_utype"): v = v._utype(v) try: t = YANGDynClass(v,base=YANGBool, is_leaf=True, yang_name="ospf-level2", rest_name="level-2", parent=self, choice=(u'ch-ospf-levels', u'ca-ospf-level2'), path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'IS-IS Level-2 routes only', u'alt-name': u'level-2', u'cli-full-no': None}}, namespace='urn:brocade.com:mgmt:brocade-isis', defining_module='brocade-isis', yang_type='empty', is_config=True) except (TypeError, ValueError): raise ValueError({ 'error-string': """ospf_level2 must be of a type compatible with empty""", 'defined-type': "empty", 'generated-type': """YANGDynClass(base=YANGBool, is_leaf=True, yang_name="ospf-level2", rest_name="level-2", parent=self, choice=(u'ch-ospf-levels', u'ca-ospf-level2'), path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'IS-IS Level-2 routes only', u'alt-name': u'level-2', u'cli-full-no': None}}, namespace='urn:brocade.com:mgmt:brocade-isis', defining_module='brocade-isis', yang_type='empty', is_config=True)""", }) self.__ospf_level2 = t if hasattr(self, '_set'): self._set()
Setter method for ospf_level2, mapped from YANG variable /routing_system/router/isis/router_isis_cmds_holder/address_family/ipv6/af_ipv6_unicast/af_ipv6_attributes/af_common_attributes/redistribute/ospf/ospf_level2 (empty) If this variable is read-only (config: false) in the source YANG file, then _set_ospf_level2 is considered as a private method. Backends looking to populate this variable should do so via calling thisObj._set_ospf_level2() directly.
Below is the instruction that describes the task: ### Input: Setter method for ospf_level2, mapped from YANG variable /routing_system/router/isis/router_isis_cmds_holder/address_family/ipv6/af_ipv6_unicast/af_ipv6_attributes/af_common_attributes/redistribute/ospf/ospf_level2 (empty) If this variable is read-only (config: false) in the source YANG file, then _set_ospf_level2 is considered as a private method. Backends looking to populate this variable should do so via calling thisObj._set_ospf_level2() directly. ### Response: def _set_ospf_level2(self, v, load=False): """ Setter method for ospf_level2, mapped from YANG variable /routing_system/router/isis/router_isis_cmds_holder/address_family/ipv6/af_ipv6_unicast/af_ipv6_attributes/af_common_attributes/redistribute/ospf/ospf_level2 (empty) If this variable is read-only (config: false) in the source YANG file, then _set_ospf_level2 is considered as a private method. Backends looking to populate this variable should do so via calling thisObj._set_ospf_level2() directly. 
""" if hasattr(v, "_utype"): v = v._utype(v) try: t = YANGDynClass(v,base=YANGBool, is_leaf=True, yang_name="ospf-level2", rest_name="level-2", parent=self, choice=(u'ch-ospf-levels', u'ca-ospf-level2'), path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'IS-IS Level-2 routes only', u'alt-name': u'level-2', u'cli-full-no': None}}, namespace='urn:brocade.com:mgmt:brocade-isis', defining_module='brocade-isis', yang_type='empty', is_config=True) except (TypeError, ValueError): raise ValueError({ 'error-string': """ospf_level2 must be of a type compatible with empty""", 'defined-type': "empty", 'generated-type': """YANGDynClass(base=YANGBool, is_leaf=True, yang_name="ospf-level2", rest_name="level-2", parent=self, choice=(u'ch-ospf-levels', u'ca-ospf-level2'), path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'IS-IS Level-2 routes only', u'alt-name': u'level-2', u'cli-full-no': None}}, namespace='urn:brocade.com:mgmt:brocade-isis', defining_module='brocade-isis', yang_type='empty', is_config=True)""", }) self.__ospf_level2 = t if hasattr(self, '_set'): self._set()
def upload_submit(self, upload_request): """The method is submitting dataset upload""" path = '/api/1.0/upload/save' return self._api_post(definition.DatasetUploadResponse, path, upload_request)
The method is submitting dataset upload
Below is the instruction that describes the task: ### Input: The method is submitting dataset upload ### Response: def upload_submit(self, upload_request): """The method is submitting dataset upload""" path = '/api/1.0/upload/save' return self._api_post(definition.DatasetUploadResponse, path, upload_request)
def update_host_datetime(host, username, password, protocol=None, port=None, host_names=None): ''' Update the date/time on the given host or list of host_names. This function should be used with caution since network delays and execution delays can result in time skews. host The location of the host. username The username used to login to the host, such as ``root``. password The password used to login to the host. protocol Optionally set to alternate protocol if the host is not using the default protocol. Default protocol is ``https``. port Optionally set to alternate port if the host is not using the default port. Default port is ``443``. host_names List of ESXi host names. When the host, username, and password credentials are provided for a vCenter Server, the host_names argument is required to tell vCenter which hosts should update their date/time. If host_names is not provided, the date/time will be updated for the ``host`` location instead. This is useful for when service instance connection information is used for a single ESXi host. CLI Example: .. 
code-block:: bash # Used for single ESXi host connection information salt '*' vsphere.update_date_time my.esxi.host root bad-password # Used for connecting to a vCenter Server salt '*' vsphere.update_date_time my.vcenter.location root bad-password \ host_names='[esxi-1.host.com, esxi-2.host.com]' ''' service_instance = salt.utils.vmware.get_service_instance(host=host, username=username, password=password, protocol=protocol, port=port) host_names = _check_hosts(service_instance, host, host_names) ret = {} for host_name in host_names: host_ref = _get_host_ref(service_instance, host, host_name=host_name) date_time_manager = _get_date_time_mgr(host_ref) try: date_time_manager.UpdateDateTime(datetime.datetime.utcnow()) except vim.fault.HostConfigFault as err: msg = '\'vsphere.update_date_time\' failed for host {0}: {1}'.format(host_name, err) log.debug(msg) ret.update({host_name: {'Error': msg}}) continue ret.update({host_name: {'Datetime Updated': True}}) return ret
Update the date/time on the given host or list of host_names. This function should be used with caution since network delays and execution delays can result in time skews. host The location of the host. username The username used to login to the host, such as ``root``. password The password used to login to the host. protocol Optionally set to alternate protocol if the host is not using the default protocol. Default protocol is ``https``. port Optionally set to alternate port if the host is not using the default port. Default port is ``443``. host_names List of ESXi host names. When the host, username, and password credentials are provided for a vCenter Server, the host_names argument is required to tell vCenter which hosts should update their date/time. If host_names is not provided, the date/time will be updated for the ``host`` location instead. This is useful for when service instance connection information is used for a single ESXi host. CLI Example: .. code-block:: bash # Used for single ESXi host connection information salt '*' vsphere.update_date_time my.esxi.host root bad-password # Used for connecting to a vCenter Server salt '*' vsphere.update_date_time my.vcenter.location root bad-password \ host_names='[esxi-1.host.com, esxi-2.host.com]'
Below is the instruction that describes the task: ### Input: Update the date/time on the given host or list of host_names. This function should be used with caution since network delays and execution delays can result in time skews. host The location of the host. username The username used to login to the host, such as ``root``. password The password used to login to the host. protocol Optionally set to alternate protocol if the host is not using the default protocol. Default protocol is ``https``. port Optionally set to alternate port if the host is not using the default port. Default port is ``443``. host_names List of ESXi host names. When the host, username, and password credentials are provided for a vCenter Server, the host_names argument is required to tell vCenter which hosts should update their date/time. If host_names is not provided, the date/time will be updated for the ``host`` location instead. This is useful for when service instance connection information is used for a single ESXi host. CLI Example: .. code-block:: bash # Used for single ESXi host connection information salt '*' vsphere.update_date_time my.esxi.host root bad-password # Used for connecting to a vCenter Server salt '*' vsphere.update_date_time my.vcenter.location root bad-password \ host_names='[esxi-1.host.com, esxi-2.host.com]' ### Response: def update_host_datetime(host, username, password, protocol=None, port=None, host_names=None): ''' Update the date/time on the given host or list of host_names. This function should be used with caution since network delays and execution delays can result in time skews. host The location of the host. username The username used to login to the host, such as ``root``. password The password used to login to the host. protocol Optionally set to alternate protocol if the host is not using the default protocol. Default protocol is ``https``. port Optionally set to alternate port if the host is not using the default port. Default port is ``443``. 
host_names List of ESXi host names. When the host, username, and password credentials are provided for a vCenter Server, the host_names argument is required to tell vCenter which hosts should update their date/time. If host_names is not provided, the date/time will be updated for the ``host`` location instead. This is useful for when service instance connection information is used for a single ESXi host. CLI Example: .. code-block:: bash # Used for single ESXi host connection information salt '*' vsphere.update_date_time my.esxi.host root bad-password # Used for connecting to a vCenter Server salt '*' vsphere.update_date_time my.vcenter.location root bad-password \ host_names='[esxi-1.host.com, esxi-2.host.com]' ''' service_instance = salt.utils.vmware.get_service_instance(host=host, username=username, password=password, protocol=protocol, port=port) host_names = _check_hosts(service_instance, host, host_names) ret = {} for host_name in host_names: host_ref = _get_host_ref(service_instance, host, host_name=host_name) date_time_manager = _get_date_time_mgr(host_ref) try: date_time_manager.UpdateDateTime(datetime.datetime.utcnow()) except vim.fault.HostConfigFault as err: msg = '\'vsphere.update_date_time\' failed for host {0}: {1}'.format(host_name, err) log.debug(msg) ret.update({host_name: {'Error': msg}}) continue ret.update({host_name: {'Datetime Updated': True}}) return ret
def sanity_check_subsections(self): """ This function goes through the ConfigParset and checks that any options given in the [SECTION_NAME] section are not also given in any [SECTION_NAME-SUBSECTION] sections. """ # Loop over the sections in the ini file for section in self.sections(): # [pegasus_profile] specially is allowed to be overriden by # sub-sections if section == 'pegasus_profile': continue # Loop over the sections again for section2 in self.sections(): # Check if any are subsections of section if section2.startswith(section + '-'): # Check for duplicate options whenever this exists self.check_duplicate_options(section, section2, raise_error=True)
This function goes through the ConfigParset and checks that any options given in the [SECTION_NAME] section are not also given in any [SECTION_NAME-SUBSECTION] sections.
Below is the instruction that describes the task: ### Input: This function goes through the ConfigParset and checks that any options given in the [SECTION_NAME] section are not also given in any [SECTION_NAME-SUBSECTION] sections. ### Response: def sanity_check_subsections(self): """ This function goes through the ConfigParset and checks that any options given in the [SECTION_NAME] section are not also given in any [SECTION_NAME-SUBSECTION] sections. """ # Loop over the sections in the ini file for section in self.sections(): # [pegasus_profile] specially is allowed to be overriden by # sub-sections if section == 'pegasus_profile': continue # Loop over the sections again for section2 in self.sections(): # Check if any are subsections of section if section2.startswith(section + '-'): # Check for duplicate options whenever this exists self.check_duplicate_options(section, section2, raise_error=True)
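The duplicate-option check in the entry above can be sketched in isolation with the standard-library `configparser` (the section and option names below are invented for illustration):

```python
import configparser

def duplicate_options(cp, section, subsection):
    # Options that appear both in [SECTION_NAME] and [SECTION_NAME-SUBSECTION];
    # the sanity check above raises an error when this set is non-empty
    return sorted(set(cp.options(section)) & set(cp.options(subsection)))

cp = configparser.ConfigParser()
cp.read_string("""
[workflow]
accounting-group = test.dev
start-time = 100

[workflow-segments]
accounting-group = test.dev
segments-method = AND
""")
dupes = duplicate_options(cp, "workflow", "workflow-segments")
```

The subsection relationship itself is just the `section2.startswith(section + '-')` string test from the original.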
def validate_rc(): """ Before we execute any actions, let's validate our .vacationrc. """ transactions = rc.read() if not transactions: print('Your .vacationrc file is empty! Set days and rate.') return False transactions = sort(unique(transactions)) return validate_setup(transactions)
Before we execute any actions, let's validate our .vacationrc.
Below is the instruction that describes the task: ### Input: Before we execute any actions, let's validate our .vacationrc. ### Response: def validate_rc(): """ Before we execute any actions, let's validate our .vacationrc. """ transactions = rc.read() if not transactions: print('Your .vacationrc file is empty! Set days and rate.') return False transactions = sort(unique(transactions)) return validate_setup(transactions)
def get_package_hashes( package, version=None, algorithm=DEFAULT_ALGORITHM, python_versions=(), verbose=False, include_prereleases=False, lookup_memory=None, index_url=DEFAULT_INDEX_URL, ): """ Gets the hashes for the given package. >>> get_package_hashes('hashin') { 'package': 'hashin', 'version': '0.10', 'hashes': [ { 'url': 'https://pypi.org/packages/[...]', 'hash': '45d1c5d2237a3b4f78b4198709fb2ecf[...]' }, { 'url': 'https://pypi.org/packages/[...]', 'hash': '0d63bf4c115154781846ecf573049324[...]' }, { 'url': 'https://pypi.org/packages/[...]', 'hash': 'c32e6d9fb09dc36ab9222c4606a1f43a[...]' } ] } """ if lookup_memory is not None and package in lookup_memory: data = lookup_memory[package] else: data = get_package_data(package, index_url, verbose) if not version: version = get_latest_version(data, include_prereleases) assert version if verbose: _verbose("Latest version for {0} is {1}".format(package, version)) # Independent of how you like to case type it, pick the correct # name from the PyPI index. package = data["info"]["name"] try: releases = data["releases"][version] except KeyError: raise PackageError("No data found for version {0}".format(version)) if python_versions: releases = filter_releases(releases, python_versions) if not releases: if python_versions: raise PackageError( "No releases could be found for " "{0} matching Python versions {1}".format(version, python_versions) ) else: raise PackageError("No releases could be found for {0}".format(version)) hashes = list( get_releases_hashes(releases=releases, algorithm=algorithm, verbose=verbose) ) return {"package": package, "version": version, "hashes": hashes}
Gets the hashes for the given package. >>> get_package_hashes('hashin') { 'package': 'hashin', 'version': '0.10', 'hashes': [ { 'url': 'https://pypi.org/packages/[...]', 'hash': '45d1c5d2237a3b4f78b4198709fb2ecf[...]' }, { 'url': 'https://pypi.org/packages/[...]', 'hash': '0d63bf4c115154781846ecf573049324[...]' }, { 'url': 'https://pypi.org/packages/[...]', 'hash': 'c32e6d9fb09dc36ab9222c4606a1f43a[...]' } ] }
Below is the instruction that describes the task: ### Input: Gets the hashes for the given package. >>> get_package_hashes('hashin') { 'package': 'hashin', 'version': '0.10', 'hashes': [ { 'url': 'https://pypi.org/packages/[...]', 'hash': '45d1c5d2237a3b4f78b4198709fb2ecf[...]' }, { 'url': 'https://pypi.org/packages/[...]', 'hash': '0d63bf4c115154781846ecf573049324[...]' }, { 'url': 'https://pypi.org/packages/[...]', 'hash': 'c32e6d9fb09dc36ab9222c4606a1f43a[...]' } ] } ### Response: def get_package_hashes( package, version=None, algorithm=DEFAULT_ALGORITHM, python_versions=(), verbose=False, include_prereleases=False, lookup_memory=None, index_url=DEFAULT_INDEX_URL, ): """ Gets the hashes for the given package. >>> get_package_hashes('hashin') { 'package': 'hashin', 'version': '0.10', 'hashes': [ { 'url': 'https://pypi.org/packages/[...]', 'hash': '45d1c5d2237a3b4f78b4198709fb2ecf[...]' }, { 'url': 'https://pypi.org/packages/[...]', 'hash': '0d63bf4c115154781846ecf573049324[...]' }, { 'url': 'https://pypi.org/packages/[...]', 'hash': 'c32e6d9fb09dc36ab9222c4606a1f43a[...]' } ] } """ if lookup_memory is not None and package in lookup_memory: data = lookup_memory[package] else: data = get_package_data(package, index_url, verbose) if not version: version = get_latest_version(data, include_prereleases) assert version if verbose: _verbose("Latest version for {0} is {1}".format(package, version)) # Independent of how you like to case type it, pick the correct # name from the PyPI index. 
package = data["info"]["name"] try: releases = data["releases"][version] except KeyError: raise PackageError("No data found for version {0}".format(version)) if python_versions: releases = filter_releases(releases, python_versions) if not releases: if python_versions: raise PackageError( "No releases could be found for " "{0} matching Python versions {1}".format(version, python_versions) ) else: raise PackageError("No releases could be found for {0}".format(version)) hashes = list( get_releases_hashes(releases=releases, algorithm=algorithm, verbose=verbose) ) return {"package": package, "version": version, "hashes": hashes}
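The hash strings that `get_package_hashes` collects are hex digests of the release files. A minimal sketch of just that digest step, operating on in-memory bytes rather than a real PyPI download (the byte string here is made up):

```python
import hashlib

def release_hash(data, algorithm="sha256"):
    # Digest a release archive's bytes the way pip-style hash pins are
    # written, e.g. the values hashin emits into requirements files
    h = hashlib.new(algorithm)
    h.update(data)
    return h.hexdigest()

digest = release_hash(b"pretend-sdist-bytes")
```

`hashlib.new(algorithm)` is what lets the original expose an `algorithm` parameter instead of hard-coding SHA-256.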
def run(self): """ Wait for an i3bar JSON event, then find the right module to dispatch the message to based on the 'name' and 'instance' of the event. In case the module does NOT support click_events, the default implementation is to clear the module's cache when the MIDDLE button (2) is pressed on it. Example event: {'y': 13, 'x': 1737, 'button': 1, 'name': 'empty', 'instance': 'first'} """ try: while self.py3_wrapper.running: event_str = self.poller_inp.readline() if not event_str: continue try: # remove leading comma if present if event_str[0] == ",": event_str = event_str[1:] event = loads(event_str) self.dispatch_event(event) except Exception: self.py3_wrapper.report_exception("Event failed") except: # noqa e722 err = "Events thread died, click events are disabled." self.py3_wrapper.report_exception(err, notify_user=False) self.py3_wrapper.notify_user(err, level="warning")
Wait for an i3bar JSON event, then find the right module to dispatch the message to based on the 'name' and 'instance' of the event. In case the module does NOT support click_events, the default implementation is to clear the module's cache when the MIDDLE button (2) is pressed on it. Example event: {'y': 13, 'x': 1737, 'button': 1, 'name': 'empty', 'instance': 'first'}
Below is the instruction that describes the task: ### Input: Wait for an i3bar JSON event, then find the right module to dispatch the message to based on the 'name' and 'instance' of the event. In case the module does NOT support click_events, the default implementation is to clear the module's cache when the MIDDLE button (2) is pressed on it. Example event: {'y': 13, 'x': 1737, 'button': 1, 'name': 'empty', 'instance': 'first'} ### Response: def run(self): """ Wait for an i3bar JSON event, then find the right module to dispatch the message to based on the 'name' and 'instance' of the event. In case the module does NOT support click_events, the default implementation is to clear the module's cache when the MIDDLE button (2) is pressed on it. Example event: {'y': 13, 'x': 1737, 'button': 1, 'name': 'empty', 'instance': 'first'} """ try: while self.py3_wrapper.running: event_str = self.poller_inp.readline() if not event_str: continue try: # remove leading comma if present if event_str[0] == ",": event_str = event_str[1:] event = loads(event_str) self.dispatch_event(event) except Exception: self.py3_wrapper.report_exception("Event failed") except: # noqa e722 err = "Events thread died, click events are disabled." self.py3_wrapper.report_exception(err, notify_user=False) self.py3_wrapper.notify_user(err, level="warning")
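i3bar streams click events as one endless JSON array, so every line after the first arrives prefixed with an array-separator comma. The parsing step from the loop above, pulled out on its own:

```python
import json

def parse_event(event_str):
    # Strip the leading comma i3bar prepends to all but the first
    # event line, then decode the JSON click event
    if event_str and event_str[0] == ",":
        event_str = event_str[1:]
    return json.loads(event_str)

event = parse_event(',{"y": 13, "x": 1737, "button": 1, "name": "empty", "instance": "first"}')
```

The resulting dict's `name` and `instance` keys are what the dispatcher uses to find the target module.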
def set_preferred_prefix_for_namespace(self, ns_uri, prefix, add_if_not_exist=False): """Sets the preferred prefix for ns_uri. If add_if_not_exist is True, the prefix is added if it's not already registered. Otherwise, setting an unknown prefix as preferred is an error. The default is False. Setting to None always works, and indicates a preference to use the namespace as a default. The given namespace must already be in this set. Args: ns_uri (str): the namespace URI whose prefix is to be set prefix (str): the preferred prefix to set add_if_not_exist (bool): Whether to add the prefix if it is not already set as a prefix of ``ns_uri``. Raises: NamespaceNotFoundError: If namespace ``ns_uri`` isn't in this set. DuplicatePrefixError: If ``prefix`` already maps to a different namespace. """ ni = self.__lookup_uri(ns_uri) if not prefix: ni.preferred_prefix = None elif prefix in ni.prefixes: ni.preferred_prefix = prefix elif add_if_not_exist: self.add_prefix(ns_uri, prefix, set_as_preferred=True) else: raise PrefixNotFoundError(prefix)
Sets the preferred prefix for ns_uri. If add_if_not_exist is True, the prefix is added if it's not already registered. Otherwise, setting an unknown prefix as preferred is an error. The default is False. Setting to None always works, and indicates a preference to use the namespace as a default. The given namespace must already be in this set. Args: ns_uri (str): the namespace URI whose prefix is to be set prefix (str): the preferred prefix to set add_if_not_exist (bool): Whether to add the prefix if it is not already set as a prefix of ``ns_uri``. Raises: NamespaceNotFoundError: If namespace ``ns_uri`` isn't in this set. DuplicatePrefixError: If ``prefix`` already maps to a different namespace.
Below is the instruction that describes the task: ### Input: Sets the preferred prefix for ns_uri. If add_if_not_exist is True, the prefix is added if it's not already registered. Otherwise, setting an unknown prefix as preferred is an error. The default is False. Setting to None always works, and indicates a preference to use the namespace as a default. The given namespace must already be in this set. Args: ns_uri (str): the namespace URI whose prefix is to be set prefix (str): the preferred prefix to set add_if_not_exist (bool): Whether to add the prefix if it is not already set as a prefix of ``ns_uri``. Raises: NamespaceNotFoundError: If namespace ``ns_uri`` isn't in this set. DuplicatePrefixError: If ``prefix`` already maps to a different namespace. ### Response: def set_preferred_prefix_for_namespace(self, ns_uri, prefix, add_if_not_exist=False): """Sets the preferred prefix for ns_uri. If add_if_not_exist is True, the prefix is added if it's not already registered. Otherwise, setting an unknown prefix as preferred is an error. The default is False. Setting to None always works, and indicates a preference to use the namespace as a default. The given namespace must already be in this set. Args: ns_uri (str): the namespace URI whose prefix is to be set prefix (str): the preferred prefix to set add_if_not_exist (bool): Whether to add the prefix if it is not already set as a prefix of ``ns_uri``. Raises: NamespaceNotFoundError: If namespace ``ns_uri`` isn't in this set. DuplicatePrefixError: If ``prefix`` already maps to a different namespace. """ ni = self.__lookup_uri(ns_uri) if not prefix: ni.preferred_prefix = None elif prefix in ni.prefixes: ni.preferred_prefix = prefix elif add_if_not_exist: self.add_prefix(ns_uri, prefix, set_as_preferred=True) else: raise PrefixNotFoundError(prefix)
def from_file(cls, path, fields=None, encoding='utf-8'): """ Instantiate a Table from a database file. This method instantiates a table attached to the file at *path*. The file will be opened and traversed to determine the number of records, but the contents will not be stored in memory unless they are modified. Args: path: the path to the table file fields: the Relation schema for the table (loaded from the relations file in the same directory if not given) encoding: the character encoding of the file at *path* """ path = _table_filename(path) # do early in case file not found if fields is None: fields = _get_relation_from_table_path(path) table = cls(fields) table.attach(path, encoding=encoding) return table
Instantiate a Table from a database file. This method instantiates a table attached to the file at *path*. The file will be opened and traversed to determine the number of records, but the contents will not be stored in memory unless they are modified. Args: path: the path to the table file fields: the Relation schema for the table (loaded from the relations file in the same directory if not given) encoding: the character encoding of the file at *path*
Below is the instruction that describes the task: ### Input: Instantiate a Table from a database file. This method instantiates a table attached to the file at *path*. The file will be opened and traversed to determine the number of records, but the contents will not be stored in memory unless they are modified. Args: path: the path to the table file fields: the Relation schema for the table (loaded from the relations file in the same directory if not given) encoding: the character encoding of the file at *path* ### Response: def from_file(cls, path, fields=None, encoding='utf-8'): """ Instantiate a Table from a database file. This method instantiates a table attached to the file at *path*. The file will be opened and traversed to determine the number of records, but the contents will not be stored in memory unless they are modified. Args: path: the path to the table file fields: the Relation schema for the table (loaded from the relations file in the same directory if not given) encoding: the character encoding of the file at *path* """ path = _table_filename(path) # do early in case file not found if fields is None: fields = _get_relation_from_table_path(path) table = cls(fields) table.attach(path, encoding=encoding) return table
def break_type_id(self, break_type_id): """ Sets the break_type_id of this ModelBreak. The `BreakType` this `Break` was templated on. :param break_type_id: The break_type_id of this ModelBreak. :type: str """ if break_type_id is None: raise ValueError("Invalid value for `break_type_id`, must not be `None`") if len(break_type_id) < 1: raise ValueError("Invalid value for `break_type_id`, length must be greater than or equal to `1`") self._break_type_id = break_type_id
Sets the break_type_id of this ModelBreak. The `BreakType` this `Break` was templated on. :param break_type_id: The break_type_id of this ModelBreak. :type: str
Below is the instruction that describes the task: ### Input: Sets the break_type_id of this ModelBreak. The `BreakType` this `Break` was templated on. :param break_type_id: The break_type_id of this ModelBreak. :type: str ### Response: def break_type_id(self, break_type_id): """ Sets the break_type_id of this ModelBreak. The `BreakType` this `Break` was templated on. :param break_type_id: The break_type_id of this ModelBreak. :type: str """ if break_type_id is None: raise ValueError("Invalid value for `break_type_id`, must not be `None`") if len(break_type_id) < 1: raise ValueError("Invalid value for `break_type_id`, length must be greater than or equal to `1`") self._break_type_id = break_type_id
def add_keyword(self, keyword, schema=None, source=None): """Add a keyword. Args: keyword(str): keyword to add. schema(str): schema to which the keyword belongs. source(str): source for the keyword. """ keyword_dict = self._sourced_dict(source, value=keyword) if schema is not None: keyword_dict['schema'] = schema self._append_to('keywords', keyword_dict)
Add a keyword. Args: keyword(str): keyword to add. schema(str): schema to which the keyword belongs. source(str): source for the keyword.
Below is the instruction that describes the task: ### Input: Add a keyword. Args: keyword(str): keyword to add. schema(str): schema to which the keyword belongs. source(str): source for the keyword. ### Response: def add_keyword(self, keyword, schema=None, source=None): """Add a keyword. Args: keyword(str): keyword to add. schema(str): schema to which the keyword belongs. source(str): source for the keyword. """ keyword_dict = self._sourced_dict(source, value=keyword) if schema is not None: keyword_dict['schema'] = schema self._append_to('keywords', keyword_dict)
def get_path_list(self, type_str=None): """Get list of the labels of the nodes leading up to this node from the root. Args: type_str: SUBJECT_NODE_TAG, TYPE_NODE_TAG or None. If set, only include information from nodes of that type. Returns: list of str: The labels of the nodes leading up to this node from the root. """ return list( reversed( [v.label_str for v in self.parent_gen if type_str in (None, v.type_str)] ) )
Get list of the labels of the nodes leading up to this node from the root. Args: type_str: SUBJECT_NODE_TAG, TYPE_NODE_TAG or None. If set, only include information from nodes of that type. Returns: list of str: The labels of the nodes leading up to this node from the root.
Below is the instruction that describes the task: ### Input: Get list of the labels of the nodes leading up to this node from the root. Args: type_str: SUBJECT_NODE_TAG, TYPE_NODE_TAG or None. If set, only include information from nodes of that type. Returns: list of str: The labels of the nodes leading up to this node from the root. ### Response: def get_path_list(self, type_str=None): """Get list of the labels of the nodes leading up to this node from the root. Args: type_str: SUBJECT_NODE_TAG, TYPE_NODE_TAG or None. If set, only include information from nodes of that type. Returns: list of str: The labels of the nodes leading up to this node from the root. """ return list( reversed( [v.label_str for v in self.parent_gen if type_str in (None, v.type_str)] ) )
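A self-contained sketch of the same parent-walk. The `Node` class here is invented for illustration; only `label_str`, `type_str`, and a `parent_gen` iterator that yields the node and its ancestors mirror the original:

```python
class Node:
    def __init__(self, label_str, type_str, parent=None):
        self.label_str = label_str
        self.type_str = type_str
        self.parent = parent

    @property
    def parent_gen(self):
        # Yield this node, then each ancestor up to and including the root
        node = self
        while node is not None:
            yield node
            node = node.parent

    def get_path_list(self, type_str=None):
        # Ancestors come out leaf-first, so reverse to get root-to-leaf order
        return list(
            reversed(
                [v.label_str for v in self.parent_gen if type_str in (None, v.type_str)]
            )
        )

root = Node("root", "subject")
mid = Node("query", "type", parent=root)
leaf = Node("solr", "subject", parent=mid)
path = leaf.get_path_list()
```

The `type_str in (None, v.type_str)` test is the original's compact way of saying "no filter, or a matching node type".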
def uniquetwig(self, ps=None): """ see also :meth:`twig` Determine the shortest (more-or-less) twig which will point to this single Parameter in a given parent :class:`ParameterSet` :parameter ps: :class:`ParameterSet` in which the returned uniquetwig will point to this Parameter. If not provided or None this will default to the parent :class:`phoebe.frontend.bundle.Bundle`, if available. :return: uniquetwig :rtype: str """ if ps is None: ps = self._bundle if ps is None: return self.twig return ps._uniquetwig(self.twig)
see also :meth:`twig` Determine the shortest (more-or-less) twig which will point to this single Parameter in a given parent :class:`ParameterSet` :parameter ps: :class:`ParameterSet` in which the returned uniquetwig will point to this Parameter. If not provided or None this will default to the parent :class:`phoebe.frontend.bundle.Bundle`, if available. :return: uniquetwig :rtype: str
Below is the instruction that describes the task: ### Input: see also :meth:`twig` Determine the shortest (more-or-less) twig which will point to this single Parameter in a given parent :class:`ParameterSet` :parameter ps: :class:`ParameterSet` in which the returned uniquetwig will point to this Parameter. If not provided or None this will default to the parent :class:`phoebe.frontend.bundle.Bundle`, if available. :return: uniquetwig :rtype: str ### Response: def uniquetwig(self, ps=None): """ see also :meth:`twig` Determine the shortest (more-or-less) twig which will point to this single Parameter in a given parent :class:`ParameterSet` :parameter ps: :class:`ParameterSet` in which the returned uniquetwig will point to this Parameter. If not provided or None this will default to the parent :class:`phoebe.frontend.bundle.Bundle`, if available. :return: uniquetwig :rtype: str """ if ps is None: ps = self._bundle if ps is None: return self.twig return ps._uniquetwig(self.twig)
def create_constraints(self, courses): """Internal use. Creates all constraints in the problem instance for the given courses. """ for i, course1 in enumerate(courses): for j, course2 in enumerate(courses): if i <= j: continue self.p.add_constraint(self.section_constraint, [course1, course2]) self.p.add_constraint(self.time_conflict, [course1])
Internal use. Creates all constraints in the problem instance for the given courses.
Below is the instruction that describes the task: ### Input: Internal use. Creates all constraints in the problem instance for the given courses. ### Response: def create_constraints(self, courses): """Internal use. Creates all constraints in the problem instance for the given courses. """ for i, course1 in enumerate(courses): for j, course2 in enumerate(courses): if i <= j: continue self.p.add_constraint(self.section_constraint, [course1, course2]) self.p.add_constraint(self.time_conflict, [course1])
def find_container_traits(cls_or_string): """ Find the container traits type of a declaration. Args: cls_or_string (str | declarations.declaration_t): a string Returns: declarations.container_traits: a container traits """ if utils.is_str(cls_or_string): if not templates.is_instantiation(cls_or_string): return None name = templates.name(cls_or_string) if name.startswith('std::'): name = name[len('std::'):] if name.startswith('std::tr1::'): name = name[len('std::tr1::'):] for cls_traits in all_container_traits: if cls_traits.name() == name: return cls_traits else: if isinstance(cls_or_string, class_declaration.class_types): # Look in the cache. if cls_or_string.cache.container_traits is not None: return cls_or_string.cache.container_traits # Look for a container traits for cls_traits in all_container_traits: if cls_traits.is_my_case(cls_or_string): # Store in the cache if isinstance(cls_or_string, class_declaration.class_types): cls_or_string.cache.container_traits = cls_traits return cls_traits
Find the container traits type of a declaration. Args: cls_or_string (str | declarations.declaration_t): a string Returns: declarations.container_traits: a container traits
Below is the instruction that describes the task: ### Input: Find the container traits type of a declaration. Args: cls_or_string (str | declarations.declaration_t): a string Returns: declarations.container_traits: a container traits ### Response: def find_container_traits(cls_or_string): """ Find the container traits type of a declaration. Args: cls_or_string (str | declarations.declaration_t): a string Returns: declarations.container_traits: a container traits """ if utils.is_str(cls_or_string): if not templates.is_instantiation(cls_or_string): return None name = templates.name(cls_or_string) if name.startswith('std::'): name = name[len('std::'):] if name.startswith('std::tr1::'): name = name[len('std::tr1::'):] for cls_traits in all_container_traits: if cls_traits.name() == name: return cls_traits else: if isinstance(cls_or_string, class_declaration.class_types): # Look in the cache. if cls_or_string.cache.container_traits is not None: return cls_or_string.cache.container_traits # Look for a container traits for cls_traits in all_container_traits: if cls_traits.is_my_case(cls_or_string): # Store in the cache if isinstance(cls_or_string, class_declaration.class_types): cls_or_string.cache.container_traits = cls_traits return cls_traits
def find_and_replace_channel_refs(self, text): '''Find occurrences of Slack channel references and attempts to replace them with just channel names. Args: text (string): The message text Returns: string: The message text with channel references replaced. ''' match = True pattern = re.compile('<#([A-Z0-9]{9})\|([A-Za-z0-9-]+)>') while match: match = pattern.search(text) if match: text = text.replace(match.group(0), '#' + match.group(2)) return text
Find occurrences of Slack channel references and attempts to replace them with just channel names. Args: text (string): The message text Returns: string: The message text with channel references replaced.
Below is the instruction that describes the task: ### Input: Find occurrences of Slack channel references and attempts to replace them with just channel names. Args: text (string): The message text Returns: string: The message text with channel references replaced. ### Response: def find_and_replace_channel_refs(self, text): '''Find occurrences of Slack channel references and attempts to replace them with just channel names. Args: text (string): The message text Returns: string: The message text with channel references replaced. ''' match = True pattern = re.compile('<#([A-Z0-9]{9})\|([A-Za-z0-9-]+)>') while match: match = pattern.search(text) if match: text = text.replace(match.group(0), '#' + match.group(2)) return text
def _normalize(schema, allow_none=True): """ Normalize a schema. """ if allow_none and schema is None: return schema if isinstance(schema, CommonSchema): return schema if isinstance(schema, StreamSchema): return schema if isinstance(schema, basestring): return StreamSchema(schema) py_types = { _spl_object: CommonSchema.Python, _spl_str: CommonSchema.String, json: CommonSchema.Json, } if schema in py_types: return py_types[schema] # With Python 3 allow a named tuple with type hints # to be used as a schema definition if sys.version_info.major == 3: import typing if isinstance(schema, type) and issubclass(schema, tuple): if hasattr(schema, '_fields') and hasattr(schema, '_field_types'): return _from_named_tuple(schema) raise ValueError("Unknown stream schema type:" + str(schema))
Normalize a schema.
Below is the instruction that describes the task: ### Input: Normalize a schema. ### Response: def _normalize(schema, allow_none=True): """ Normalize a schema. """ if allow_none and schema is None: return schema if isinstance(schema, CommonSchema): return schema if isinstance(schema, StreamSchema): return schema if isinstance(schema, basestring): return StreamSchema(schema) py_types = { _spl_object: CommonSchema.Python, _spl_str: CommonSchema.String, json: CommonSchema.Json, } if schema in py_types: return py_types[schema] # With Python 3 allow a named tuple with type hints # to be used as a schema definition if sys.version_info.major == 3: import typing if isinstance(schema, type) and issubclass(schema, tuple): if hasattr(schema, '_fields') and hasattr(schema, '_field_types'): return _from_named_tuple(schema) raise ValueError("Unknown stream schema type:" + str(schema))
def ensure_file_is_executable(path): """Exit if file is not executable.""" if platform.system() != 'Windows' and ( not stat.S_IXUSR & os.stat(path)[stat.ST_MODE]): print("Error: File %s is not executable" % path) sys.exit(1)
Exit if file is not executable.
Below is the instruction that describes the task: ### Input: Exit if file is not executable. ### Response: def ensure_file_is_executable(path): """Exit if file is not executable.""" if platform.system() != 'Windows' and ( not stat.S_IXUSR & os.stat(path)[stat.ST_MODE]): print("Error: File %s is not executable" % path) sys.exit(1)
def run_path(self, path, dot_env_path=None, mapping=None): """ run testcase/testsuite file or folder. Args: path (str): testcase/testsuite file/folder path. dot_env_path (str): specified .env file path. mapping (dict): if mapping is specified, it will override variables in config block. Returns: instance: HttpRunner() instance """ # load tests self.exception_stage = "load tests" tests_mapping = loader.load_tests(path, dot_env_path) tests_mapping["project_mapping"]["test_path"] = path if mapping: tests_mapping["project_mapping"]["variables"] = mapping return self.run_tests(tests_mapping)
run testcase/testsuite file or folder. Args: path (str): testcase/testsuite file/folder path. dot_env_path (str): specified .env file path. mapping (dict): if mapping is specified, it will override variables in config block. Returns: instance: HttpRunner() instance
Below is the instruction that describes the task: ### Input: run testcase/testsuite file or folder. Args: path (str): testcase/testsuite file/folder path. dot_env_path (str): specified .env file path. mapping (dict): if mapping is specified, it will override variables in config block. Returns: instance: HttpRunner() instance ### Response: def run_path(self, path, dot_env_path=None, mapping=None): """ run testcase/testsuite file or folder. Args: path (str): testcase/testsuite file/folder path. dot_env_path (str): specified .env file path. mapping (dict): if mapping is specified, it will override variables in config block. Returns: instance: HttpRunner() instance """ # load tests self.exception_stage = "load tests" tests_mapping = loader.load_tests(path, dot_env_path) tests_mapping["project_mapping"]["test_path"] = path if mapping: tests_mapping["project_mapping"]["variables"] = mapping return self.run_tests(tests_mapping)
def zLoadFile(self, fileName, append=None): """Loads a zmx file into the DDE server""" reply = None if append: cmd = "LoadFile,{},{}".format(fileName, append) else: cmd = "LoadFile,{}".format(fileName) reply = self._sendDDEcommand(cmd) if reply: return int(reply) #Note: Zemax returns -999 if update fails. else: return -998
Loads a zmx file into the DDE server
Below is the instruction that describes the task: ### Input: Loads a zmx file into the DDE server ### Response: def zLoadFile(self, fileName, append=None): """Loads a zmx file into the DDE server""" reply = None if append: cmd = "LoadFile,{},{}".format(fileName, append) else: cmd = "LoadFile,{}".format(fileName) reply = self._sendDDEcommand(cmd) if reply: return int(reply) #Note: Zemax returns -999 if update fails. else: return -998
def get_brain_info(brain): """Extract the brain info """ icon = api.get_icon(brain) # avoid 404 errors with these guys if "document_icon.gif" in icon: icon = "" id = api.get_id(brain) url = api.get_url(brain) title = api.get_title(brain) description = api.get_description(brain) parent = api.get_parent(brain) parent_title = api.get_title(parent) parent_url = api.get_url(parent) return { "id": id, "title": title, "title_or_id": title or id, "description": description, "url": url, "parent_title": parent_title, "parent_url": parent_url, "icon": icon, }
Extract the brain info
Below is the instruction that describes the task: ### Input: Extract the brain info ### Response: def get_brain_info(brain): """Extract the brain info """ icon = api.get_icon(brain) # avoid 404 errors with these guys if "document_icon.gif" in icon: icon = "" id = api.get_id(brain) url = api.get_url(brain) title = api.get_title(brain) description = api.get_description(brain) parent = api.get_parent(brain) parent_title = api.get_title(parent) parent_url = api.get_url(parent) return { "id": id, "title": title, "title_or_id": title or id, "description": description, "url": url, "parent_title": parent_title, "parent_url": parent_url, "icon": icon, }
def post(self, receiver_id=None): """Handle POST request.""" try: user_id = request.oauth.access_token.user_id except AttributeError: user_id = current_user.get_id() event = Event.create( receiver_id=receiver_id, user_id=user_id ) db.session.add(event) db.session.commit() # db.session.begin(subtransactions=True) event.process() db.session.commit() return make_response(event)
Handle POST request.
Below is the instruction that describes the task: ### Input: Handle POST request. ### Response: def post(self, receiver_id=None): """Handle POST request.""" try: user_id = request.oauth.access_token.user_id except AttributeError: user_id = current_user.get_id() event = Event.create( receiver_id=receiver_id, user_id=user_id ) db.session.add(event) db.session.commit() # db.session.begin(subtransactions=True) event.process() db.session.commit() return make_response(event)
def geom(self): """ :returns: the geometry as an array of shape (N, 3) """ return numpy.array([(p.x, p.y, p.z) for p in self.mesh], numpy.float32)
:returns: the geometry as an array of shape (N, 3)
Below is the instruction that describes the task: ### Input: :returns: the geometry as an array of shape (N, 3) ### Response: def geom(self): """ :returns: the geometry as an array of shape (N, 3) """ return numpy.array([(p.x, p.y, p.z) for p in self.mesh], numpy.float32)
async def get_cloud(self): """ Get the name of the cloud that this controller lives on. """ cloud_facade = client.CloudFacade.from_connection(self.connection()) result = await cloud_facade.Clouds() cloud = list(result.clouds.keys())[0] # only lives on one cloud return tag.untag('cloud-', cloud)
Get the name of the cloud that this controller lives on.
Below is the instruction that describes the task: ### Input: Get the name of the cloud that this controller lives on. ### Response: async def get_cloud(self): """ Get the name of the cloud that this controller lives on. """ cloud_facade = client.CloudFacade.from_connection(self.connection()) result = await cloud_facade.Clouds() cloud = list(result.clouds.keys())[0] # only lives on one cloud return tag.untag('cloud-', cloud)
def load_config(self, config): """ Configure grab instance with external config object. """ self.config = copy_config(config, self.mutable_config_keys) if 'cookiejar_cookies' in config['state']: self.cookies = CookieManager.from_cookie_list( config['state']['cookiejar_cookies'])
Configure grab instance with external config object.
Below is the instruction that describes the task: ### Input: Configure grab instance with external config object. ### Response: def load_config(self, config): """ Configure grab instance with external config object. """ self.config = copy_config(config, self.mutable_config_keys) if 'cookiejar_cookies' in config['state']: self.cookies = CookieManager.from_cookie_list( config['state']['cookiejar_cookies'])
def deploy(target): """Deploys the package and documentation. Proceeds in the following steps: 1. Ensures proper environment variables are set and checks that we are on Circle CI 2. Tags the repository with the new version 3. Creates a standard distribution and a wheel 4. Updates version.py to have the proper version 5. Commits the ChangeLog, AUTHORS, and version.py file 6. Pushes to PyPI 7. Pushes the tags and newly committed files Raises: `EnvironmentError`: - Not running on CircleCI - `*_PYPI_USERNAME` and/or `*_PYPI_PASSWORD` environment variables are missing - Attempting to deploy to production from a branch that isn't master """ # Ensure proper environment if not os.getenv(CIRCLECI_ENV_VAR): # pragma: no cover raise EnvironmentError('Must be on CircleCI to run this script') current_branch = os.getenv('CIRCLE_BRANCH') if (target == 'PROD') and (current_branch != 'master'): raise EnvironmentError( f'Refusing to deploy to production from branch {current_branch!r}. ' f'Production deploys can only be made from master.') if target in ('PROD', 'TEST'): pypi_username = os.getenv(f'{target}_PYPI_USERNAME') pypi_password = os.getenv(f'{target}_PYPI_PASSWORD') else: raise ValueError(f"Deploy target must be 'PROD' or 'TEST', got {target!r}.") if not (pypi_username and pypi_password): # pragma: no cover raise EnvironmentError( f"Missing '{target}_PYPI_USERNAME' and/or '{target}_PYPI_PASSWORD' " f"environment variables. These are required to push to PyPI.") # Twine requires these environment variables to be set. Subprocesses will # inherit these when we invoke them, so no need to pass them on the command # line. We want to avoid that in case something's logging each command run. os.environ['TWINE_USERNAME'] = pypi_username os.environ['TWINE_PASSWORD'] = pypi_password # Set up git on circle to push to the current branch _shell('git config --global user.email "dev@cloverhealth.com"') _shell('git config --global user.name "Circle CI"') _shell('git config push.default current') # Obtain the version to deploy ret = _shell('make version', stdout=subprocess.PIPE) version = ret.stdout.decode('utf-8').strip() print(f'Deploying version {version!r}...') # Tag the version _shell(f'git tag -f -a {version} -m "Version {version}"') # Update the version _shell( f'sed -i.bak "s/^__version__ = .*/__version__ = {version!r}/" */version.py') # Create a standard distribution and a wheel _shell('python setup.py sdist bdist_wheel') # Add the updated ChangeLog and AUTHORS _shell('git add ChangeLog AUTHORS */version.py') # Start the commit message with "Merge" so that PBR will ignore it in the # ChangeLog. Use [skip ci] to ensure CircleCI doesn't recursively deploy. _shell('git commit --no-verify -m "Merge autogenerated files [skip ci]"') # Push the distributions to PyPI. _pypi_push('dist') # Push the tag and AUTHORS / ChangeLog after successful PyPI deploy _shell('git push --follow-tags') print(f'Deployment complete. Latest version is {version}.')
Deploys the package and documentation. Proceeds in the following steps: 1. Ensures proper environment variables are set and checks that we are on Circle CI 2. Tags the repository with the new version 3. Creates a standard distribution and a wheel 4. Updates version.py to have the proper version 5. Commits the ChangeLog, AUTHORS, and version.py file 6. Pushes to PyPI 7. Pushes the tags and newly committed files Raises: `EnvironmentError`: - Not running on CircleCI - `*_PYPI_USERNAME` and/or `*_PYPI_PASSWORD` environment variables are missing - Attempting to deploy to production from a branch that isn't master
Below is the instruction that describes the task: ### Input: Deploys the package and documentation. Proceeds in the following steps: 1. Ensures proper environment variables are set and checks that we are on Circle CI 2. Tags the repository with the new version 3. Creates a standard distribution and a wheel 4. Updates version.py to have the proper version 5. Commits the ChangeLog, AUTHORS, and version.py file 6. Pushes to PyPI 7. Pushes the tags and newly committed files Raises: `EnvironmentError`: - Not running on CircleCI - `*_PYPI_USERNAME` and/or `*_PYPI_PASSWORD` environment variables are missing - Attempting to deploy to production from a branch that isn't master ### Response: def deploy(target): """Deploys the package and documentation. Proceeds in the following steps: 1. Ensures proper environment variables are set and checks that we are on Circle CI 2. Tags the repository with the new version 3. Creates a standard distribution and a wheel 4. Updates version.py to have the proper version 5. Commits the ChangeLog, AUTHORS, and version.py file 6. Pushes to PyPI 7. Pushes the tags and newly committed files Raises: `EnvironmentError`: - Not running on CircleCI - `*_PYPI_USERNAME` and/or `*_PYPI_PASSWORD` environment variables are missing - Attempting to deploy to production from a branch that isn't master """ # Ensure proper environment if not os.getenv(CIRCLECI_ENV_VAR): # pragma: no cover raise EnvironmentError('Must be on CircleCI to run this script') current_branch = os.getenv('CIRCLE_BRANCH') if (target == 'PROD') and (current_branch != 'master'): raise EnvironmentError( f'Refusing to deploy to production from branch {current_branch!r}. ' f'Production deploys can only be made from master.') if target in ('PROD', 'TEST'): pypi_username = os.getenv(f'{target}_PYPI_USERNAME') pypi_password = os.getenv(f'{target}_PYPI_PASSWORD') else: raise ValueError(f"Deploy target must be 'PROD' or 'TEST', got {target!r}.") if not (pypi_username and pypi_password): # pragma: no cover raise EnvironmentError( f"Missing '{target}_PYPI_USERNAME' and/or '{target}_PYPI_PASSWORD' " f"environment variables. These are required to push to PyPI.") # Twine requires these environment variables to be set. Subprocesses will # inherit these when we invoke them, so no need to pass them on the command # line. We want to avoid that in case something's logging each command run. os.environ['TWINE_USERNAME'] = pypi_username os.environ['TWINE_PASSWORD'] = pypi_password # Set up git on circle to push to the current branch _shell('git config --global user.email "dev@cloverhealth.com"') _shell('git config --global user.name "Circle CI"') _shell('git config push.default current') # Obtain the version to deploy ret = _shell('make version', stdout=subprocess.PIPE) version = ret.stdout.decode('utf-8').strip() print(f'Deploying version {version!r}...') # Tag the version _shell(f'git tag -f -a {version} -m "Version {version}"') # Update the version _shell( f'sed -i.bak "s/^__version__ = .*/__version__ = {version!r}/" */version.py') # Create a standard distribution and a wheel _shell('python setup.py sdist bdist_wheel') # Add the updated ChangeLog and AUTHORS _shell('git add ChangeLog AUTHORS */version.py') # Start the commit message with "Merge" so that PBR will ignore it in the # ChangeLog. Use [skip ci] to ensure CircleCI doesn't recursively deploy. _shell('git commit --no-verify -m "Merge autogenerated files [skip ci]"') # Push the distributions to PyPI. _pypi_push('dist') # Push the tag and AUTHORS / ChangeLog after successful PyPI deploy _shell('git push --follow-tags') print(f'Deployment complete. Latest version is {version}.')
def hide_routemap_holder_route_map_content_set_ipv6_interface_ipv6_null0(self, **kwargs): """Auto Generated Code """ config = ET.Element("config") hide_routemap_holder = ET.SubElement(config, "hide-routemap-holder", xmlns="urn:brocade.com:mgmt:brocade-ip-policy") route_map = ET.SubElement(hide_routemap_holder, "route-map") name_key = ET.SubElement(route_map, "name") name_key.text = kwargs.pop('name') action_rm_key = ET.SubElement(route_map, "action-rm") action_rm_key.text = kwargs.pop('action_rm') instance_key = ET.SubElement(route_map, "instance") instance_key.text = kwargs.pop('instance') content = ET.SubElement(route_map, "content") set = ET.SubElement(content, "set") ipv6 = ET.SubElement(set, "ipv6") interface = ET.SubElement(ipv6, "interface") ipv6_null0 = ET.SubElement(interface, "ipv6-null0") callback = kwargs.pop('callback', self._callback) return callback(config)
Auto Generated Code
Below is the instruction that describes the task: ### Input: Auto Generated Code ### Response: def hide_routemap_holder_route_map_content_set_ipv6_interface_ipv6_null0(self, **kwargs): """Auto Generated Code """ config = ET.Element("config") hide_routemap_holder = ET.SubElement(config, "hide-routemap-holder", xmlns="urn:brocade.com:mgmt:brocade-ip-policy") route_map = ET.SubElement(hide_routemap_holder, "route-map") name_key = ET.SubElement(route_map, "name") name_key.text = kwargs.pop('name') action_rm_key = ET.SubElement(route_map, "action-rm") action_rm_key.text = kwargs.pop('action_rm') instance_key = ET.SubElement(route_map, "instance") instance_key.text = kwargs.pop('instance') content = ET.SubElement(route_map, "content") set = ET.SubElement(content, "set") ipv6 = ET.SubElement(set, "ipv6") interface = ET.SubElement(ipv6, "interface") ipv6_null0 = ET.SubElement(interface, "ipv6-null0") callback = kwargs.pop('callback', self._callback) return callback(config)
def delete_image(self, name: str) -> None: """ Deletes a Docker image with a given name. Parameters: name: the name of the Docker image. """ logger.debug("deleting Docker image: %s", name) path = "docker/images/{}".format(name) response = self.__api.delete(path) if response.status_code != 204: try: self.__api.handle_erroneous_response(response) except Exception: logger.exception("failed to delete Docker image: %s", name) raise else: logger.info("deleted Docker image: %s", name)
Deletes a Docker image with a given name. Parameters: name: the name of the Docker image.
Below is the instruction that describes the task: ### Input: Deletes a Docker image with a given name. Parameters: name: the name of the Docker image. ### Response: def delete_image(self, name: str) -> None: """ Deletes a Docker image with a given name. Parameters: name: the name of the Docker image. """ logger.debug("deleting Docker image: %s", name) path = "docker/images/{}".format(name) response = self.__api.delete(path) if response.status_code != 204: try: self.__api.handle_erroneous_response(response) except Exception: logger.exception("failed to delete Docker image: %s", name) raise else: logger.info("deleted Docker image: %s", name)
async def save_session(self, sid, session): """Store the user session for a client. :param sid: The session id of the client. :param session: The session dictionary. """ socket = self._get_socket(sid) socket.session = session
Store the user session for a client. :param sid: The session id of the client. :param session: The session dictionary.
Below is the instruction that describes the task: ### Input: Store the user session for a client. :param sid: The session id of the client. :param session: The session dictionary. ### Response: async def save_session(self, sid, session): """Store the user session for a client. :param sid: The session id of the client. :param session: The session dictionary. """ socket = self._get_socket(sid) socket.session = session
def add_to_inventory(self): """Adds lb IPs to stack inventory""" if self.lb_attrs: self.lb_attrs = self.consul.lb_details( self.lb_attrs[A.loadbalancer.ID] ) host = self.lb_attrs['virtualIps'][0]['address'] self.stack.add_lb_secgroup(self.name, [host], self.backend_port) self.stack.add_host( host, [self.name], self.lb_attrs )
Adds lb IPs to stack inventory
Below is the instruction that describes the task: ### Input: Adds lb IPs to stack inventory ### Response: def add_to_inventory(self): """Adds lb IPs to stack inventory""" if self.lb_attrs: self.lb_attrs = self.consul.lb_details( self.lb_attrs[A.loadbalancer.ID] ) host = self.lb_attrs['virtualIps'][0]['address'] self.stack.add_lb_secgroup(self.name, [host], self.backend_port) self.stack.add_host( host, [self.name], self.lb_attrs )
def create(rawmessage): """Return an INSTEON message class based on a raw byte stream.""" rawmessage = _trim_buffer_garbage(rawmessage) if len(rawmessage) < 2: return (None, rawmessage) code = rawmessage[1] msgclass = _get_msg_class(code) msg = None remaining_data = rawmessage if msgclass is None: _LOGGER.debug('Did not find message class 0x%02x', rawmessage[1]) rawmessage = rawmessage[1:] rawmessage = _trim_buffer_garbage(rawmessage, False) if rawmessage: _LOGGER.debug('Create: %s', create) _LOGGER.debug('rawmessage: %s', binascii.hexlify(rawmessage)) msg, remaining_data = create(rawmessage) else: remaining_data = rawmessage else: if iscomplete(rawmessage): msg = msgclass.from_raw_message(rawmessage) if msg: remaining_data = rawmessage[len(msg.bytes):] # _LOGGER.debug("Returning msg: %s", msg) # _LOGGER.debug('Returning buffer: %s', binascii.hexlify(remaining_data)) return (msg, remaining_data)
Return an INSTEON message class based on a raw byte stream.
Below is the instruction that describes the task: ### Input: Return an INSTEON message class based on a raw byte stream. ### Response: def create(rawmessage): """Return an INSTEON message class based on a raw byte stream.""" rawmessage = _trim_buffer_garbage(rawmessage) if len(rawmessage) < 2: return (None, rawmessage) code = rawmessage[1] msgclass = _get_msg_class(code) msg = None remaining_data = rawmessage if msgclass is None: _LOGGER.debug('Did not find message class 0x%02x', rawmessage[1]) rawmessage = rawmessage[1:] rawmessage = _trim_buffer_garbage(rawmessage, False) if rawmessage: _LOGGER.debug('Create: %s', create) _LOGGER.debug('rawmessage: %s', binascii.hexlify(rawmessage)) msg, remaining_data = create(rawmessage) else: remaining_data = rawmessage else: if iscomplete(rawmessage): msg = msgclass.from_raw_message(rawmessage) if msg: remaining_data = rawmessage[len(msg.bytes):] # _LOGGER.debug("Returning msg: %s", msg) # _LOGGER.debug('Returning buffer: %s', binascii.hexlify(remaining_data)) return (msg, remaining_data)
def _create_static_actions(self): """ Create all static menu actions. The created actions will be placed in the tray icon context menu. """ logger.info("Creating static context menu actions.") self.action_view_script_error = self._create_action( None, "&View script error", self.reset_tray_icon, "View the last script error." ) self.action_view_script_error.triggered.connect(self.app.show_script_error) # The action should disable itself self.action_view_script_error.setDisabled(True) self.action_view_script_error.triggered.connect(self.action_view_script_error.setEnabled) self.action_hide_icon = self._create_action( "edit-clear", "Temporarily &Hide Icon", self.hide, "Temporarily hide the system tray icon.\nUse the settings to hide it permanently." ) self.action_show_config_window = self._create_action( "configure", "&Show Main Window", self.app.show_configure, "Show the main AutoKey window. This does the same as left clicking the tray icon." ) self.action_quit = self._create_action("application-exit", "Exit AutoKey", self.app.shutdown) # TODO: maybe import this from configwindow.py ? The exact same Action is defined in the main window. self.action_enable_monitoring = self._create_action( None, "&Enable Monitoring", self.app.toggle_service, "Pause the phrase expansion and script execution, both by abbreviations and hotkeys.\n" "The global hotkeys to show the main window and to toggle this setting, as defined in the AutoKey " "settings, are not affected and will work regardless." ) self.action_enable_monitoring.setCheckable(True) self.action_enable_monitoring.setChecked(self.app.service.is_running()) self.action_enable_monitoring.setDisabled(self.app.serviceDisabled) # Sync action state with internal service state self.app.monitoring_disabled.connect(self.action_enable_monitoring.setChecked)
Create all static menu actions. The created actions will be placed in the tray icon context menu.
Below is the instruction that describes the task: ### Input: Create all static menu actions. The created actions will be placed in the tray icon context menu. ### Response: def _create_static_actions(self): """ Create all static menu actions. The created actions will be placed in the tray icon context menu. """ logger.info("Creating static context menu actions.") self.action_view_script_error = self._create_action( None, "&View script error", self.reset_tray_icon, "View the last script error." ) self.action_view_script_error.triggered.connect(self.app.show_script_error) # The action should disable itself self.action_view_script_error.setDisabled(True) self.action_view_script_error.triggered.connect(self.action_view_script_error.setEnabled) self.action_hide_icon = self._create_action( "edit-clear", "Temporarily &Hide Icon", self.hide, "Temporarily hide the system tray icon.\nUse the settings to hide it permanently." ) self.action_show_config_window = self._create_action( "configure", "&Show Main Window", self.app.show_configure, "Show the main AutoKey window. This does the same as left clicking the tray icon." ) self.action_quit = self._create_action("application-exit", "Exit AutoKey", self.app.shutdown) # TODO: maybe import this from configwindow.py ? The exact same Action is defined in the main window. self.action_enable_monitoring = self._create_action( None, "&Enable Monitoring", self.app.toggle_service, "Pause the phrase expansion and script execution, both by abbreviations and hotkeys.\n" "The global hotkeys to show the main window and to toggle this setting, as defined in the AutoKey " "settings, are not affected and will work regardless." ) self.action_enable_monitoring.setCheckable(True) self.action_enable_monitoring.setChecked(self.app.service.is_running()) self.action_enable_monitoring.setDisabled(self.app.serviceDisabled) # Sync action state with internal service state self.app.monitoring_disabled.connect(self.action_enable_monitoring.setChecked)
def selectedRecords(self): """ Returns a list of all the selected records for this widget. :return [<orb.Table>, ..] """ output = [] for item in self.selectedItems(): if ( isinstance(item, XOrbRecordItem) ): output.append(item.record()) return output
Returns a list of all the selected records for this widget. :return [<orb.Table>, ..]
Below is the instruction that describes the task: ### Input: Returns a list of all the selected records for this widget. :return [<orb.Table>, ..] ### Response: def selectedRecords(self): """ Returns a list of all the selected records for this widget. :return [<orb.Table>, ..] """ output = [] for item in self.selectedItems(): if ( isinstance(item, XOrbRecordItem) ): output.append(item.record()) return output
def getBar(imageSize, barCenter, barHalfLength, orientation='horizontal'):
  """
  Generate a single horizontal or vertical bar

  :param imageSize: a list of (numPixelX, numPixelY). The number of pixels
         in the horizontal and vertical dimensions, e.g., (20, 20)
  :param barCenter: (list) center of the bar, e.g. (10, 10)
  :param barHalfLength: (int) half length of the bar. Full length is 2*barHalfLength + 1
  :param orientation: (string) "horizontal" or "vertical"
  :return:
  """
  (nX, nY) = imageSize
  (xLoc, yLoc) = barCenter
  bar = np.zeros((nX, nY), dtype=uintType)
  if orientation == 'horizontal':
    xmin = max(0, (xLoc - barHalfLength))
    xmax = min(nX - 1, (xLoc + barHalfLength + 1))
    bar[xmin:xmax, yLoc] = 1
  elif orientation == 'vertical':
    ymin = max(0, (yLoc - barHalfLength))
    ymax = min(nY - 1, (yLoc + barHalfLength + 1))
    bar[xLoc, ymin:ymax] = 1
  else:
    raise RuntimeError("orientation has to be horizontal or vertical")
  return bar
Generate a single horizontal or vertical bar

:param imageSize: a list of (numPixelX, numPixelY). The number of pixels
       in the horizontal and vertical dimensions, e.g., (20, 20)
:param barCenter: (list) center of the bar, e.g. (10, 10)
:param barHalfLength: (int) half length of the bar. Full length is 2*barHalfLength + 1
:param orientation: (string) "horizontal" or "vertical"
:return:
Below is the instruction that describes the task: ### Input: Generate a single horizontal or vertical bar :param imageSize: a list of (numPixelX, numPixelY). The number of pixels in the horizontal and vertical dimensions, e.g., (20, 20) :param barCenter: (list) center of the bar, e.g. (10, 10) :param barHalfLength: (int) half length of the bar. Full length is 2*barHalfLength + 1 :param orientation: (string) "horizontal" or "vertical" :return: ### Response: def getBar(imageSize, barCenter, barHalfLength, orientation='horizontal'):
  """
  Generate a single horizontal or vertical bar

  :param imageSize: a list of (numPixelX, numPixelY). The number of pixels
         in the horizontal and vertical dimensions, e.g., (20, 20)
  :param barCenter: (list) center of the bar, e.g. (10, 10)
  :param barHalfLength: (int) half length of the bar. Full length is 2*barHalfLength + 1
  :param orientation: (string) "horizontal" or "vertical"
  :return:
  """
  (nX, nY) = imageSize
  (xLoc, yLoc) = barCenter
  bar = np.zeros((nX, nY), dtype=uintType)
  if orientation == 'horizontal':
    xmin = max(0, (xLoc - barHalfLength))
    xmax = min(nX - 1, (xLoc + barHalfLength + 1))
    bar[xmin:xmax, yLoc] = 1
  elif orientation == 'vertical':
    ymin = max(0, (yLoc - barHalfLength))
    ymax = min(nY - 1, (yLoc + barHalfLength + 1))
    bar[xLoc, ymin:ymax] = 1
  else:
    raise RuntimeError("orientation has to be horizontal or vertical")
  return bar
def getReadAlignments(self, reference, start=None, end=None): """ Returns an iterator over the specified reads """ return self._getReadAlignments(reference, start, end, self, None)
Returns an iterator over the specified reads
Below is the instruction that describes the task: ### Input: Returns an iterator over the specified reads ### Response: def getReadAlignments(self, reference, start=None, end=None):
        """
        Returns an iterator over the specified reads
        """
        return self._getReadAlignments(reference, start, end, self, None)
def replace_option_set_by_id(cls, option_set_id, option_set, **kwargs): """Replace OptionSet Replace all attributes of OptionSet This method makes a synchronous HTTP request by default. To make an asynchronous HTTP request, please pass async=True >>> thread = api.replace_option_set_by_id(option_set_id, option_set, async=True) >>> result = thread.get() :param async bool :param str option_set_id: ID of optionSet to replace (required) :param OptionSet option_set: Attributes of optionSet to replace (required) :return: OptionSet If the method is called asynchronously, returns the request thread. """ kwargs['_return_http_data_only'] = True if kwargs.get('async'): return cls._replace_option_set_by_id_with_http_info(option_set_id, option_set, **kwargs) else: (data) = cls._replace_option_set_by_id_with_http_info(option_set_id, option_set, **kwargs) return data
Replace OptionSet Replace all attributes of OptionSet This method makes a synchronous HTTP request by default. To make an asynchronous HTTP request, please pass async=True >>> thread = api.replace_option_set_by_id(option_set_id, option_set, async=True) >>> result = thread.get() :param async bool :param str option_set_id: ID of optionSet to replace (required) :param OptionSet option_set: Attributes of optionSet to replace (required) :return: OptionSet If the method is called asynchronously, returns the request thread.
Below is the instruction that describes the task: ### Input: Replace OptionSet Replace all attributes of OptionSet This method makes a synchronous HTTP request by default. To make an asynchronous HTTP request, please pass async=True >>> thread = api.replace_option_set_by_id(option_set_id, option_set, async=True) >>> result = thread.get() :param async bool :param str option_set_id: ID of optionSet to replace (required) :param OptionSet option_set: Attributes of optionSet to replace (required) :return: OptionSet If the method is called asynchronously, returns the request thread. ### Response: def replace_option_set_by_id(cls, option_set_id, option_set, **kwargs):
        """Replace OptionSet

        Replace all attributes of OptionSet

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async=True
        >>> thread = api.replace_option_set_by_id(option_set_id, option_set, async=True)
        >>> result = thread.get()

        :param async bool
        :param str option_set_id: ID of optionSet to replace (required)
        :param OptionSet option_set: Attributes of optionSet to replace (required)
        :return: OptionSet
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async'):
            return cls._replace_option_set_by_id_with_http_info(option_set_id, option_set, **kwargs)
        else:
            (data) = cls._replace_option_set_by_id_with_http_info(option_set_id, option_set, **kwargs)
            return data
def _str_dash(self, depth, no_repeat, obj): """Return a string containing dashes (optional) and GO ID.""" if self.indent: # '-' is default character indicating hierarchy level # '=' is used to indicate a hierarchical path printed in detail previously. single_or_double = not no_repeat or not obj.children letter = '-' if single_or_double else '=' return ''.join([letter]*depth) return ""
Return a string containing dashes (optional) and GO ID.
Below is the instruction that describes the task: ### Input: Return a string containing dashes (optional) and GO ID. ### Response: def _str_dash(self, depth, no_repeat, obj):
        """Return a string containing dashes (optional) and GO ID."""
        if self.indent:
            # '-' is default character indicating hierarchy level
            # '=' is used to indicate a hierarchical path printed in detail previously.
            single_or_double = not no_repeat or not obj.children
            letter = '-' if single_or_double else '='
            return ''.join([letter]*depth)
        return ""